from here! @bravevolunteer
THE CREATURE MICHAEL FACES HAS NOT FELT DEFENSIVE INDIGNATION IN A VERY LONG TIME. Usually its self-protection consists of strike, attack, destroy; physical destruction that it doesn’t have to think too deeply about. Not that it CAN think too deeply these days. Reflection is more difficult than it used to be. A defence mechanism—hasn’t most of his life consisted of defensive mechanisms?—outlasting death. It hadn’t meant to end like this.
What is a father? A protective warmth? Open arms? A reassurance that a child is enough despite failure? William is none of that and never has been. And what he’s become is even less: exposed bone and rotten skin and a barely beating heart clinging to borrowed life. What the fuck does he know about fatherhood?
(Watching your boy scuff his knee and remembering your own scrapes as a child. Helping him to his feet, offering sage advice about how to deal with the pain. Knowing, inevitably, he’d fail to take it.)
(Take a deep breath, Mike. The pain isn’t that bad when you breathe, see?)
“All of this,” he says, its breathing almost louder than his words. “All of this was for family. Because of you. Don’t you think—”
A hollow, hacking cough. Matted fur moves with crushed bone, stationary face locked in a grimacing smile. Fredbear had crushed one son’s skull. Spring-Bonnie (can they even still call themselves that?) could rip out the other’s heart.
They don’t. He doesn’t.
It lumbers forwards instead, fabric and metal and bone meshed together. He can’t stop looking at Michael. Mike.
“I was rebuilding our family.” William’s voice rings out, gravelly, clinging to life if it can even be called that. “I was fixing your mistakes. You never could understand that.”
Isn’t that a father’s job? To fix things?
Emotions are so difficult these days, but he tries to recapture how he’d once felt about his son. All he manages to dredge up is an ache below where his heart had once been from a spring twitching back into place.
“You didn’t need saved,” he says. Comes to a lurching stop about five feet from Michael, unable or unwilling to get closer. “You broke things, and I fixed them. But you have never broken, Michael.” Not physically. “You didn’t need fixing.”
IT HAS TO GROWL. GONE ARE HIS DAYS OF GRACEFUL RETORTS -- IF THEY'D EVER EXISTED. All that remains is rot and viscous fluid and suffering: and, of course, hate. Not that it hates Michael, not anymore. There had been a time when if he'd had any company to listen to him, he would have declared his loathing for everyone and everything on the planet.
However, that had been a long time ago. Not so distant that he can't recall the feeling, not so distant that it can't channel its rage at any given moment, but distant enough that the all-encompassing urge to kill can be managed. Somewhat, anyway. But as time slips on, and he is loose in this attraction, his temper rises. It does so now, hearing his? its? son speak to him.
" All experimental. " It rasps out, voice gravelly and sinewy compared to what it had been. Regrettably experimental. But he survives. Doesn't he ? " But plans - - change. I do not. "
Not anymore. His goal remains the same: to put it back together again. 'It' varies, depending on what he remembers. Time comes for all things, and his memory isn't what it used to be.
" You and I, " it says, with fervor, " cannot change. " They're still alike, aren't they ? This facsimile of father and son ?
@slaughterlocked / probing deeper
"Looking at the mechanics of it—" He leans back, and the office chair squeaks in protest. There's fluff coming out of the seat, and one wheel is missing its cap, making the entire thing tilt to the side. Michael had labeled it 'charmingly, accurately bad' on first glance and had yet to feel any need to reevaluate that judgment. Especially when the rest of the place was ghoulish. "How much of this," and he raises his emaciated, rotting hands to the thick window that separates him from Springtrap, "was intentional?"
(Picture taken from @Spindash54 on Twitter.)

KHUx is my favorite gacha game to complain about, and I know a lot of people share that sentiment, because it doesn't have to be this way. There was a time I actually enjoyed playing this game. However, I want to address an issue concerning the complaints I've seen and hopefully encourage people to set their priorities straight.

What I see below the official KHUx tweets are mostly problems with little to no relevance. People are upset about VIP buyers getting mediocre medals and avatar/pet parts. They are mad we are not getting everything JP did (the picture above gets posted below every single tweet). They are complaining about being done with event boards. I'm not saying those complaints are invalid or shouldn't be addressed, but those fans are sweating the small stuff while completely overlooking issues which have been around for ages. I know people get tired of SE not addressing them, but I cannot help wondering why people have the stamina to mention the stuff above again and again yet won't talk about the real issues.

Let's talk business here, from the standpoint of someone who has played both versions daily since the release of the Japanese version and who has even put quite a bit of money into this game. (Not whale standards, but still more than $300.)

● The balance of the game has gone completely nuts. Quests are either so easy that there's no challenge anymore or so hard that even those buying the WJE cannot finish them. There's often no in-between anymore, no way to turtle your way through. You need the strongest and newest medals with the best traits to clear all objectives.

● Related to the above, nearly all core mechanics are based on RNG (i.e., luck). I have nothing against the gacha aspect for medal pulls, but traits have become the most important factor, and they are RNG: 70% of them are useless, with the useful ones being the rarest. That might not be so bad if the powercreep weren't so drastic.
You basically amass medals and finally get some good traits and, poof, the medals are useless again. I have over 1000 medals which are basically just subslot fodder or there for collection purposes. And that's not even the worst aspect. If you do finally get one of the good medals outside their respective banners, you can't even be happy about them, because they are essentially worthless without traits, except perhaps for lower-tier PvP.

● The sheer amount of useless medals you get showered with. I'm not talking about medal draws but about Cids, Brooms, Fantasia Mickey Bs, Magic Mirrors, Blue Fairies, Tier 7 Dual Meow Wows, and useless skills. I know, I know: two years ago we were whining about being granted Brooms, but ever since 7* medals became the standard while pulling, all of the above medals have ceased to serve any function. And if that weren't bad enough, we still get them shoved down our throats at every turn. I never thought there would come a day when I'd prefer HDLs and Chip and Dale over Brooms. It isn't even a JP-versus-global problem anymore but something which affects both versions equally. Why the heck doesn't it get adjusted? The PVE/Coliseum boards still provide the same useless skills as three years ago, there's a DAILY Cid quest, we still get them as rewards for events and quests, and we get showered with Magic Mirrors and Brooms when 7* medals are the norm for new medals and every 7* medal is fully guilted by default. Do you know how many stacks of Magic Mirrors I have? 23! Over 2000 Magic Mirrors I can't use. I am selling Blue Fairy stacks for Tiers 1–4 because I have nothing left to evolve and my subslots don't even need another Tier 4 medal. I have two stacks of M&Bs and three Broom stacks, and I'm starting to use them on irrelevant HSC and event medals from two years ago. I recently sold the equivalent of 3000 Cids because I have nothing to use them on!
● Events and the daily grind have become repetitive and boring because the rewards are the ones mentioned above. People do not need another Cid. We don't want another Broom. We need gems, HDLs, Chip & Dales, and jewels.

● They haven't caught us up to the JP story even though everything else is current. Issues get buried by short-term solutions which temporarily raise the overall mood (like a speed-up option or draw tickets), but nothing ever gets fixed for good. Instead we get another RNG option added, because why the heck not? The unlucky people who can't spend thousands of dollars on a game which does nothing to return the investment get left behind. (That even includes the weekly WJE buyers, because it hardly makes a difference.)

I am not complaining about VIP getting more than the rest. VIP buyers should feel they are getting their share for the atrocious price of $15. But even VIP buyers cannot finish every event anymore, because paying doesn't improve their luck with the necessary traits. The game has nothing to do with skill or strategy anymore. I am not expecting the strategy levels of Final Fantasy Brave Exvius, but 2–3 years ago we could actually work our way around enemies with different medals and setups. Nowadays you can only win by slapping the newest medals with the best traits onto your Keyblades (i.e., all of them needing ADD, GDD, and EA), which is impossible if you didn't draw trait medals galore alongside them, or if you pulled a medal which currently provides no way to obtain trait medals. And better medals come nearly monthly, with all the rest becoming useless.

Why are people whining about cosmetic stuff first instead of addressing the real problems? I am not saying I wouldn't want this accessory or that, but I want improvements first! What worth does a Nick & Judy medal have in the age of Supernova+ medals, except for collection purposes? It's the reason I stopped investing real money months ago.
But since it's rude just to complain without offering any methods, here are some ideas which would be simple to implement. ● Offer a quest for farming HDLs effectively. With how many we need, it really doesn't help anyone if not even VIP members get EXP medals. ● Update PVE rewards and remove the skills from the objectives of early quests. We are at the point of having ATK IX Max, so giving out an ATK V or VI Max monthly really shouldn't be an issue or would diminish board sales while helping newcomers out as well. ● Instead of Brooms, Blue Fairies Magic Mirrors and Cids, start developing new helpful support medals. A very good example: the creation of universal trait medals with a specific trait (mostly ADD, GDD and EA) attached. They don't have to be handed out like candy but it would aid so many people if we could use trait medals which are not bound to a specific medal and are not providing a useless trait in 80% of the time because you simply don't have that luck. With how quickly the current meta is replaced, it shouldn't be too much to ask. ● Offer more diverse and actually useful event rewards. Back in the day event medals were useful. Maybe not top tier, but you could manage to get through some events. You have no idea how much I used the HPO medal to turtle my way through tough events. This isn't possible anymore. We have to defeat everything in three turns at most to get all objectives. And we get trait medals as rewards for medals we need to purchase with jewels (like the Organization events which became useless too after the implementation of SN medals). It's not a big deal to stop being lazy and recycle the same things over and over. At a point where Keyblades can reach level 50 in 0.1 steps from 35 onward, it shouldn't be hard to reward more gems. I didn't manage to get a Keyblade past 38 yet because I level them evenly. Perhaps give more avatar parts too which aren't meant to be bought. Some people want a goal to work toward. 
But they aren't given one because they can't use the rewards they are given. I was a very active player who finished all events back then but nowadays I find it hard to even bother with lots of events because I don't need another Cid or another Magic Mirror. The stuff I need can mainly be bought or is so RNG-based that it takes me forever to get good stuff or it involves a lot of saving. I'm a veteran player. I won't tell people what to demand and what to wish for, but recently I see so many complaints about stuff which is purely cosmetic or merely serves to quench the symptoms instead of treating the disease. KHUx has ceased to provide something new. Which isn't necessarily thst bad if we can still use the things we provide. But most of us don't need the basic rewards. We don't care about PvP because we mainly need the Tier 9 stuff at most at this point. This is exactly what I predicted with the implementation of 7* medals. The medals themselves weren't the issue, the problem is that they make 90% of the rewards we get utterly senseless. And what is even worse is that SE fails to adjust to that accordingly. Just now, we got another event rewarding Cids. Thank you very much KHUx. Whatever will I do without another to add to my ten existing stacks? There is so much more to improve but KHUx currently seems to run down a path of self-destruction. KH3's hype has called back some players but at this point you don't need to be active to finish the 5-15 story quests per month. The important story aspects can be cut down to about a page of dialogue at most. It doesn't invite players to play more because they can be dealt with without the newest medals, too. Not to mention that it shouldn't be an issue to catch us up there. Translations by fans are up within a day or two, fully embedded into the cutscene text bubbles. 
They caught us up with everything else, but information like this, which is actually important for understanding KH3 and beyond, gets neglected without an understandable reason.

I don't play many gacha games, but after I started with FFBE (which isn't known as the most generous game to grace this planet either), the issues KHUx has became all the more prominent to me. And there doesn't seem to be any desire to fix them. They just throw out the same stuff as always without noticing the mood deteriorating toward rock bottom. And only then do they do some small patches which do nothing to improve the game in the long run. But the fans also need to realize that. There's no use in complaining about small issues which barely scratch the surface at this point. I admire that stamina, but it can be put to better use than complaining about purely cosmetic avatar parts.

An addendum on the unbalanced quest system: the current Shiva event rewards an actually useful gem for the most difficult quest. But I just barely managed to do it, because I was hit with a temporary stroke of luck last night. This was my setup, and it only shows how ridiculous things have become. (The condition for the last quest is that medals need to have a gauge cost of one or more.)

Keyblade: Fairy Stars (level 37, all subslots filled with Tier 4 and above, providing a bonus of roughly x2.7)
Pet level: 13 (Max)
1st Slot: SN+ Axel (This one just copies SN Kairi and doesn't deal damage.) (EA, ADD)
2nd Slot: SN Kairi (Useless, since she cannot be activated.) (Second Chance)
3rd Slot: SN+ Monster Sora (EA, GDD, ADD) (ATK VIII Max + Gauge 0)
4th Slot: SN Pete (EA, GDD, ADD) (ATK VIII Max + Gauge 2)
5th Slot: SN Ven, Terra and Aqua (GDD, ADD, 1000 STR) (ATK VII Max + Gauge 0)
Pet: SN+ Key Art 20 (EA, GDD, ADD, 1000 STR) (ATK IX Max + Gauge 0)
Friend: SN+ Key Art 19 (EA, GDD, ADD, 1000 STR) (ATK IX Max + Gauge 0)

As I said, I had an unusual amount of luck with those traits (though some of them required over 20 trait medals, which I accumulated over time). If you look at my other SN medals, you will find loads of terribly traited ones. Such a setup isn't the norm, and even with Titan I needed tons of tries to get Combo I to trigger (another RNG factor). This game is purely luck at this point. At first you are happy to finally get KA19, for example, only to realize you have no traits to give it, making its uses limited in the tougher events. (And for the rest, you won't even need such a strong medal.)

SE doesn't need to cater to every need. Certainly not. But you can see people are angry and bored with the game. I don't even need to do much anymore to rank in the top 500 in the weekly rankings, when it was real work two years ago. And it doesn't help a game when everyone except newcomers who aren't aware of the problems is ready to grab their pitchforks. But hey, at least I can drown my sorrows in my pile of useless support medals.
WandaVision Episode 4 Review: We Interrupt This Program
This WandaVision review contains spoilers.
WandaVision Episode 4
Through its first three episodes, WandaVision has been more of a compelling and creative thought experiment than it was a coherent TV show. Which is ironic, of course, considering that the aim of the first three episodes was to mimic the style and flow of classic TV shows as closely as possible.
Still, while inventive mimicry is appreciated, it does not a TV program make. At some point the conceptual rubber of WandaVision was going to have to meet the narrative road. It finally does so in the series’ wildly thrilling and entertaining fourth episode “We Interrupt This Program.”
As the title implies, this episode is a much-needed interruption of the faux narrative of the goings-on in Westview, where Wanda, Vision, and all their assembled neighbors are living out the fabricated life of classic sitcoms. “We Interrupt This Program” reveals the woman behind the curtain and establishes stakes and consequences that were sorely missing from earlier installments.
It helps that the focus this time is on a series of characters who are just as confused as we are. Episode 4 opens with Monica Rambeau (Teyonah Parris) coming back to life from “The Blip” following the events of Avengers: Endgame. Correct me if I’m wrong, but is this the first time we’ve seen someone actually return from The Blip? If so, it’s all so much more literal than I imagined. Just as Monica (and half of sentient life in the universe) dissolved into dust after Thanos’s snap, that dust seemingly just re-arranges out of the ether to form her being once more. Does this mean that everyone on Earth was just breathing in “people dust” for years? Anywho���
The episode’s opener is its finest moment, as it quickly and efficiently establishes Monica as both a worthwhile hero and a human being who has just experienced an enormous trauma. Going back to work at SWORD just three weeks after you sprang back into existence from a universal purge is the mark of someone equally heroic and ill-advised. Even though we don’t spend much time with Monica after that, her presence and rapid characterization give WandaVision its most prominent sense of stakes and consequence yet.
Thankfully, even after Monica disappears into Wanda’s analog Westview, WandaVision gives us the opportunity to catch up with some old MCU favorites. Agent Jimmy Woo (Randall Park) and Dr. Darcy Lewis (Kat Dennings) work as audience ciphers, partly because much of the audience is familiar with them (from the Thor movies for Darcy and Ant-Man for Jimmy), but also because Park and Dennings are each so titanically charismatic. Though Wanda and Vision in their sitcom world have been entertaining and intriguing up to this point, there has been a clear tinge of artificiality to them. Spending time with the very “real” Darcy and Jimmy is a welcome injection of plot and humanity.
Thankfully, Jimmy, Darcy, and their SWORD compatriots seem roughly as competent as their SHIELD progenitors. Faceless bureaucratic organizations tend to get short shrift in superhero movies, depicted as pencil pushers intent on stopping the heroes from having fun (or, in SHIELD’s case, actively trying to kill Captain America). In WandaVision, however, SWORD, Jimmy, and Darcy all operate with a level of sincere curiosity and care that matches the audience’s own.
Within 24 hours of Monica disappearing into Westview, SWORD has a fully operating Cape Canaveral-style mission control center on the outskirts of town. When Darcy needs an old TV to monitor the strange signal emanating from the bubble, she is given dozens. Jimmy and co. quickly get to work identifying the citizens of Westview and who they’re “playing” on the show. In a small, but consequential moment, one SWORD grunt expresses surprise that Darcy is picking up radioactivity because he was assured the radiation levels were normal. A lesser fictional superhero bureaucracy would have forgotten to check for radiation in the first place. In fact, it seems as though the only thing SWORD is missing is the coffee Darcy so desperately craves.
“We Interrupt This Program” is the most coherent episode of WandaVision, which also makes it its best, almost by default. There is a real joy present here, largely thanks to Jimmy and Darcy. The first three episodes drew their bliss from old-fashioned TV sitcom tropes, and this episode decides to one-up it with an even older trope: the scientific method. Getting to the bottom of things is fun! And that’s exactly what episode 4 does.
That’s not to say that this installment is merely a fact-finding mission. What’s particularly impressive is how WandaVision preserves the high strangeness of its sitcom concept even outside the confines of said sitcom.
“What? I’m invested!” Darcy tells Jimmy after she expresses some audience-like excitement upon seeing Wanda give birth to twins.
It also helps that, in getting to the bottom of what’s going on in Westview from a scientific perspective, WandaVision also delves into its main character’s heart.
“It’s Wanda. It’s all Wanda,” Monica gasps upon re-entering the real world.
Yes, as many of us expected, the nature of Wanda and Vision’s sitcom situation doesn’t come from outer space, an alternate dimension, or any of the Avengers’ many enemies – it comes straight from Wanda herself. Let’s not forget the enormity of the trauma that Wanda experienced in battling Thanos. She had to mercy kill her lover to save the world…only for the dickish Mad Titan to undo that sacrifice and kill Vis himself for the Mind Stone in his head. That is an almost incomprehensible level of tragedy. We saw Wanda’s anger in Avengers: Endgame but we still had yet to see her pain.
Well…until now. What better way to deal with the hurt than to flip on the TV? Superheroes – they’re just like us.
New episodes of WandaVision premiere every Friday on Disney+.
The post WandaVision Episode 4 Review: We Interrupt This Program appeared first on Den of Geek.
> I don’t think he would have loved the child the other two miss so much
It’s an interesting point, and one worth pondering. Powder was mad at Topside, but she was young and still a bit clawless. Her bombs didn’t work, but she was a scamp who knew how to shoot dead-on. It boils down to what exactly solidified Silco’s attachment to Powder in the first place. Was it her desperation to seek him for comfort, brought on by abandonment and orphaning, that cracked his shell a tiny bit and allowed some empathy to pour through? Or was it over time: he admitted to Vi that he’d thought her the more valuable sister, but his opinion changed when Powder took to his tutelage, flourished like a poisonous flower, and grew into this manic, free-wheeling, beautiful, mad engineering genius of bombs and bullets. What connected him to her, the shared sense of loss, or the untapped potential he held in his hands (the very same potential and ingenuity left to rot un-actualized under Piltover’s yoke)?
Silco is a man who believes goodness, naivete, softness is what invites your inevitable exploitation because the World Is Not Kind. At Silco’s least charitable, I believe an “untransformed” Powder—one who grows into herself but remains “Good” and unvengeful—would leave a mildly pleasant but saccharine taste in his mouth if she refused to sharpen her teeth and use her bombs and such for terrorism. He might also think her reliance on Vi as an emotional ballast to be limiting her potential.
Silco is simultaneously removed from sentiment, since the years under (presumably) the worst of Piltover’s economic oppression have rendered his charity sparse to nonexistent. He’s a social Darwinist with fangs, out to cut out a life for himself but also for Zaun, yet he would not cease to be ruthless even after Zaun was emancipated, because it is that same ruthlessness that makes you strong, makes you able to protect and defend what’s yours. Bitten animals biting at each other in an endless rat race, except the rat race isn’t organized by anyone but each other. Even playing field, known stakes, clear conditions.
And yet he has a dream where one’s own merits, one’s own genius and well-applied efforts, earn a person everything they deserve. You get what you put in. That’s the long and short of it. So if he saw someone who was intelligent, relentless, tough, or crafty, with the ambition to back it up, they’d earn his respect by default. A Powder who was busy tinkering, self-improving, and developing her gadgetry would in turn earn a nod of respect from him, even if he thought she was a scampish little goody two-shoes. As long as you don’t get in his way, he doesn’t care what or who you are if you’re living your life and chasing your dreams. Does any of this make sense, or am I just yipping?
I keep thinking about the way that Arcane!Jinx's loved ones all have different attitudes about how she's changed.
Ekko's view of her is bitter, full of grief. "This is what you have become and I hate you for it, but I can't stop thinking about what might have been if you hadn't. I can't get you back, and I miss you. I had a crush on you, until you started talking to the gun."
Vi's view of her centers around denial. "You've changed but I refuse to accept it. I know that you're still the little girl I loved as a child. Give me a chance and I'll help you see it", ignoring the fact that no one stays the same as they were in childhood. I also think Vi idolized Powder a bit, seeing her as this epitome of innocence despite the fact that she made nail bombs; maybe she hasn't changed as much as Vi thinks she has.
And lastly, Silco's view of her is full of twisted pride. "You are just like me, and you are the person I wish I could have been when I was your age. I created you, made you the person you are today, and you are perfect". I refuse to think he's not projecting on her something fierce. He loves her for the person she is now, but I don't think he would have loved the child the other two miss so much.
I know I'm not saying anything groundbreaking here, but I just think it's interesting.
Street Tactics – Part 2

By Greg Koukl
One critical challenge you will not be able to avoid as a Christian ambassador is the challenge of the atheist. Yet, most Christians are not prepared for—and are therefore understandably apprehensive about—encounters with them. I want to help you be ready when the opportunity comes your way.
There are two reasons engaging with atheists can be daunting.
First, the most vulnerable part of any worldview is its foundation. Undermine that, and you undercut every single thing resting upon it. Destroy the footings, and the whole lot crumbles.
It’s an effective general strategy we use often at STR, but there’s a reason atheism puts us on the defensive side of that approach. Our story starts, “In the beginning, God….” If there is no God, then there is no story and Christianity never gets off the ground. Simple.
Second, atheists are often confident, aggressive, and unyielding. Plus, their complaints against theism initially make sense to a lot of people who are on the fence. Their points are often rhetorically clever and complex, making them difficult for Christians to counter.
That’s one reason they’ve been effectively turning heads in recent years. According to a current Pew Research Center poll, those who describe themselves as atheists account for 4% of U.S. adults, up from 2% in 2009. That’s a 100% increase in the last decade.[i] There’s even a popular tutorial available providing tactical training to help atheists make an atheist out of you.[ii]
With the spate of deconversion stories recently making the rounds on the internet, it’s clear the approach is having an impact on Christians. That’s why you need to be prepared with specific plans to help you engage the issue thoughtfully and with grace when you encounter it.
In the book Tactics—A Game Plan for Discussing Your Christian Convictions,[iii] I detail the larger plan for engaging with non-Christians. Here, though, I’m focusing on one application of that plan that I’ve called “Street Tactics,” a battery of specific questions you can use to challenge a dissenter on a particular topic. This approach keeps you in the driver’s seat of the conversation in a pleasant way, protecting you from the risk associated with more head-to-head encounters.
First, I give you some insight into the specific weaknesses of the challenge offered. Then I provide specific questions to get you started (rendered in bold) along with samples of how the initial stage of a dialogue might unfold.
Those initial questions are important. When dealing with a tough issue, it’s always good to have an opening move at the ready. If you’re prepared with a question giving you something to say right out of the gate, it gives you a safe launching pad into the conversation.
When I know my first move, it relaxes me and gives me confidence since I’m the one taking the initiative in the conversation. It gets me going in a friendly way, yet with little risk.
I explained the general Street Tactics approach in the last issue of Solid Ground. There, I gave the guidance you need to maneuver through the minefields of one of the most frequent challenges you’ll face: the problem of evil.[iv]
Three Moves
Of course, there is a plethora of issues atheists raise and a rack of titles offered by thoughtful Christians responding to them.[v] My purpose here is not to retrace that ground.
Instead, I want to give you insight into three general moves you’ll face with atheists and then provide some tactical questions to get you moving forward comfortably in conversation on those concerns, yet with a minimum of risk.
Take note, the goal of this approach is not to close the deal. We’re not in harvest mode here. Instead, I want to help you do a little gardening by offering a few simple questions to get your friend thinking. I call it “putting a stone in his shoe.”
There are three errors you will consistently confront when talking with atheists. The atheist’s first misstep is a defensive move, a deflection. By redefining the word “atheist,” he attempts to absolve himself of any responsibility to defend his own view. His second move is another redefinition, this time of the word “faith,” distorting it to make it impossible for you to defend your view. Finally, there’s the blanket dismissal, “Believing in God is irrational. There is no evidence.”
Before I go further, though, let me give you a general maneuver. My first response when somebody tells me he’s an atheist is, “Really? That’s interesting. What kind of atheist are you?”
My question trades on the simple fact that atheists do not agree on everything. Most atheists are materialists—convinced that nothing exists except physical things known empirically by the five senses—but not all are. Some believe in objective morality; some do not. Some flutter back and forth between atheism and agnosticism, depending on the definitions.
Asking this question has a number of advantages.
First, I want the atheist to see I’m not shocked or intimidated by his announcement but rather curious about his convictions and comfortable learning more about them. An opening question like this also buys me time to think about where I might go next with my queries.
Next, this question immediately forces the atheist to begin thinking about his own view in a more precise fashion, something I’m convinced most atheists have rarely done.
My second general question is, “Why are you an atheist?” I have no idea how my friend is going to respond—except to offer a vague claim that there is no evidence for God or that theism is irrational (I’ll deal with those canards in a moment).[vi]
Notice, since I’m the one initiating the conversation, I’m in the driver’s seat. Because I’m using questions, there’s no pressure on me. I’m in student mode, not persuasion mode. It’s a safe place to be.
With your initial queries in place, let’s move on to the three faulty maneuvers I mentioned that atheists frequently make.
Atheism Lite
Oddly, many atheists apparently no longer believe there is no God. Instead, they say, they merely lack belief in God. They don’t claim God doesn’t exist. Rather, they simply don’t believe He does exist.
Atheists are not un-believers, then. They are simply non-believers. Since a non-belief is not a claim, it requires no defense. Thus, atheism secures the inside lane as the default view of reasonable people. Or so atheists think. That’s the strategy here.
Some will attempt to find safe harbor in a vague agnosticism. Since they don’t know God exists (“Theoretically, it’s possible He does”), they’re not really atheist but agnostic—in knowledge limbo on the issue.
These moves are almost always disingenuous coming from someone who is clearly a committed atheist. True agnosticism is an intellectually noble position, of course. But that’s not what’s going on here.
Theism, atheism, and agnosticism are not knowledge categories, but belief categories. Most of our beliefs are fallible—capable of being false—yet we still think they’re true, often with good reason. If agnosticism merely means lack of certainty, then each of us is agnostic on just about everything. This is silly.
The label “skeptic” often suffers from the same linguistic subterfuge since most self-described skeptics are not the least bit skeptical about their own skepticism; they are fully committed atheists.
Here is the insight that betrays the flaw in this verbal sleight of hand: Atheists may lack a belief in God, true enough, but they do not lack a belief about God. They are neither agnostics nor non-believers. Rather, they are believers of a certain kind: They actually believe that there is no God, even if they don’t know for sure.
The root word “theism” means the existence of God, and the prefix “a” is a negation. An atheist, then, is one who holds “not God,” or “God is not.” In plain language, atheism is the belief that there is no God. This is not linguistically complicated. It never has been.
If I were an atheist, I would not take this route. I’d fear people would think I was cheating with words, betraying weakness, not strength. This, as it turns out, is exactly what’s happening. Yes, there is a difference between non-belief and un-belief, but there is no refuge here for the atheist. Here’s why.
If you asked me which rugby team was the best in England, I wouldn’t know where to start. Since I have no beliefs about the quality of rugby competition in the U.K., I am truly a non-believer regarding the question. I am neutral.
This is not the case with atheists, though, since they are not neutral on the God question. If they were, they wouldn’t be writing books or doing debates. No one pontificates on their non-beliefs. There’d be nothing to talk about.
Richard Dawkins is currently the world’s most famous atheist. He makes his case in his best-selling book, The God Delusion. If God is really a delusion, then He does not exist. Simple. Theists say there is a God, and atheists like Dawkins contend they’re wrong—even delusional. Thus, atheists argue that there is no God—hardly a non-belief.
Look, anyone who has a point of view has a belief he thinks is true even if he doesn’t know it’s true. Atheists have a point of view. This makes them believers of a particular stripe: They believe God doesn’t exist.
With that insight clearly in place, here is how I would proceed in a conversation with someone I’m convinced is actually a standard, run-of-the-mill atheist in spite of his evasiveness.
“Given your claim that you simply lack a belief in God, would you mind if I ask you a question?”
“No. Go ahead.”
“I’m going to make a statement, and I’d like you to respond to it. Okay?”
“Sure.”
“Here it is: God exists. What do you personally believe about that statement?”
“Like I told you before, I don’t know for certain.”
“Right. I got that. But I’m not asking you what you know. I’m asking you what you believe.”
“I’m agnostic.”
“If you were truly an agnostic, you’d have no opinion one way or the other. From what you’ve said so far, though, you’re not neutral on the subject of God. So let me put the issue another way. Given my statement ‘God exists,’ it seems you have one of three choices.[vii] You could affirm the statement (that would be my view, theism), you could deny the statement (that would be atheism), or you could completely withhold judgment since you have no opinion one way or another (agnosticism). Can you think of any other logical options?”
“No, not really.”[viii]
“So what’s your view—affirm, deny, or withhold?”
“Like I said, I lack a belief in God.”
“I get that. But that’s not one of the logical options available here. Are you really saying you have no opinion on this matter? If so, then what are you trying to convince me of? It doesn’t sound like you simply want me to ‘lack a belief in God.’ It sounds like you want me to believe there is no God. Which is it?”
The purpose of this line of questioning is to even the playing field. Both the Christian and the atheist have a conviction—a belief—and those beliefs are at odds. Fair enough. That means both have a view to defend. If you’re not clear on this issue, you’ll always be on the defensive and the atheist will have a free ride in the discussion. Don’t let that happen.
Fanciful Faith
The atheist’s first maneuver keeps him from having to defend his own view. The second is an attempt to keep you from defending yours. The ploy is a common one, a mistake in thinking even Christians have unwittingly abetted, so it’s an error made on both sides of the aisle.
Suppose I claim that atheism is lame since atheists don’t believe in science. After all, they don’t believe in God because they can’t see Him, but they can’t see atoms either, so their existence must be in question, too. Since atoms are pretty foundational to science, atheists, then, must not believe in science.
You can immediately see the errors here. I have misrepresented the atheist’s view in a number of ways, easily “defeating” the distortion. This is not only bad manners, it’s bad thinking—an informal fallacy called a “straw man.” Erect a caricature of someone’s view (the straw man), then easily knock the scarecrow down.
This is precisely what atheists continue to do with “faith.” Peter Boghossian is typical. “The word ‘faith,’” he writes, “is a very slippery pig…. Malleable definitions allow faith to slip away from critique.”[ix] He’s right on this point, of course. Definitions should not be malleable. Twisting the definition of faith to suit his own purposes, though, is not the answer, just as twisting the definition of atheism (by Christians or by atheists) is equally illicit. Boghossian himself acknowledges he’s redefining the term to suit his purposes.[x] Don’t let atheists do it.
Boghossian defines the “faith virus”[xi] as either “belief without evidence,” or “pretending to know things you don’t know.”[xii] In fact, “if one had sufficient evidence to warrant belief in a particular claim, then one wouldn’t believe the claim on the basis of faith. ‘Faith’ is the word one uses when one does not have enough evidence to justify holding a belief.”[xiii] This, of course, is circular.[xiv]
Clearly, faith is critical to Christianity, so it’s an obvious target. Let me say respectfully, though, that it does not matter how atheists like Boghossian define faith or even how some misinformed and confused Christians characterize it. It only matters how Christianity itself defines faith. Otherwise, the critic will be jousting with scarecrows. For this we must go back to the Christian’s authority, the Bible.
The biblical accounts are replete with appeals to evidence to justify its claims.[xv] The summary at the end of John’s Gospel should be enough to make this basic point:
Therefore many other signs [miracles] Jesus also performed in the presence of the disciples, which are not written in this book; but these have been written so that you may believe that Jesus is the Christ, the Son of God; and that believing you may have life in His name. (Jn. 20:30–31)
No appeal to blind faith here. Simply put, in the context in which it’s used, the Greek word for biblical faith—pistis—means active trust, and this trust is continually enjoined based on reasons and evidence.
It’s tiresome having to keep correcting this distortion, but you must insist on this definition in your conversations with atheists or you’ll have no grounds for discussion—and little ability to defend your view. Your insistence is based on a simple rule: If someone wants to critique a view, then he has to critique the view itself and not something else.
A single question should suffice to clear the air: “It’s clear we have a different understanding of ‘faith.’ If you are critiquing my view, though, what is more important, your definition of my view or my definition of my view?”
The perpetual distortion of faith by atheists is tied to another misconception.
“Show Me the Money”
There’s a reason atheists insist that Christian faith is blind. They’re convinced there is no evidence for God, so belief in Him is irrational and faith in Him must be a leap. But the assertion is baseless; it’s simply not true.
The easiest way to get to the heart of the issue is to ask probing questions. Here’s my first one: “Precisely, what is irrational about belief in God?” Here I’m looking for specifics. It’s not enough for the atheist to respond, “It’s just dumb.” Exactly what is “dumb” about it?
An irrational belief is one that either contradicts good reason or flies in the face of solid evidence to the contrary, so ask, “How does belief in God violate reason, and what is the evidence against belief in God?”
Any evidence contrary to theism would actually be evidence for atheism, since it’s the only other alternative—in rational terms, either A or non-A, either God or not God. But giving evidence for atheism is precisely what many atheists are trying to avoid with the first maneuver mentioned above.
The question “What exactly is the evidence for atheism and therefore against theism?” is in order here. Some have offered the problem of evil, but objective evil in the world—the only kind of evil that matters in this complaint—ironically turns out to be evidence for God, not against Him, as I demonstrated in the last issue of Solid Ground. I’m open to hearing other suggestions, but the offerings have been thin.
Evidence for God, by contrast, abounds, and tomes have been written detailing it. These include arguments for the beginning of the universe (cosmological arguments, e.g., the Kalam), arguments from obvious design of all sorts in the natural realm (teleological arguments), and arguments based on objective morality (the moral argument), to name just a few. There is Lewis’s argument from desire, there is evidence based on well-documented miracles, and there is historical evidence in support of Jesus’ resurrection.
When an atheist claims there is no evidence, then, I have a few questions for him.
“What specific arguments for God have you considered?”
“I haven’t seen any.”
“Well, if you haven’t considered the arguments for God, how do you know no such evidence exists?”
“Well, I have considered some of them.”
“Good. Then please tell me which ones you’ve thought about and what, in your opinion, is wrong with them. How, specifically, have they failed?”
You might even ask, “What would count as legitimate evidence for God, in your mind?” This query tests the atheist’s intellectual honesty.
These questions are good ones even if you’re not versed in the arguments for God themselves. If the atheist gives any content, make note of it, thank him for it, and tell him you’ll give his ideas some thought. There’s no obligation to answer every challenge on the spot, especially if a particular issue is out of your depth. Do some research later, on your own, when the pressure is off.
The key here is to not settle for vague generalities. Make the atheist spell out the specific shortcomings of belief in God, if he can. I want him to be clear on the exact reasons why belief in God is nonsense.
A warning is in order here. Often, there’s a shell game going on. There is a difference between having credible reasons to believe something and having reasons adequate to convince a hardened skeptic. The claim that there is no evidence is not the same as saying the evidence available is not convincing. That’s a different matter. A piece of evidence is an indicator, not necessarily a decisive proof.
Generally, a thoughtful theist’s approach is what’s called abductive reasoning. All things considered, what is the best explanation for the way things are? That’s a matter for thoughtful discussion, not thoughtless dismissal.
When atheists keep asking, “Where’s the evidence?” they’re either not paying attention or they’re misunderstanding the role of evidence—both odd since they identify themselves as the party of reason.
There’s much to discuss with atheists, but clearing the air on these three critical misperceptions—or in one case, outright distortion—is critical to making progress.
Both atheists and Christians make claims. It’s an even playing field in that sense. Both are required to give reasons for their views. We’re ready to do that, demonstrating that our confidence in God is grounded not in wishful thinking but in a body of evidence that needs to be addressed rather than dismissed as naught.
Use these questions as friendly probes in conversation. They’re formidable tools to keep you in the driver’s seat of otherwise difficult interactions with those trying to undermine the very foundation of Christianity: God.
__________________________
[i] “U.S., Decline of Christianity Continues at Rapid Pace,” Pew Research Center, Oct. 17, 2019.
[ii] Peter Boghossian, A Manual for Creating Atheists (Durham, NC: Pitchstone Publishing, 2013). Find my response to Boghossian’s project in “Tactics for Atheists,” Solid Ground, May-June 2019, at str.org.
[iii] Gregory Koukl, Tactics—A Game Plan for Discussing Your Christian Convictions, 10th Anniversary Ed. (Grand Rapids, MI: Zondervan, 2019).
[iv] See “Street Tactics I,” Solid Ground, Jan-Feb 2020, at str.org.
[v] Standouts include Turek and Geisler’s I Don’t Have Enough Faith to Be an Atheist, J. Warner Wallace’s God’s Crime Scene, and Ravi Zacharias’s Can Man Live without God?
[vi] Often atheists will reflexively invoke the problem of evil. In the last issue of Solid Ground I gave tactics to deal with that challenge.
[vii] I owe this line of thinking to my philosopher friend Douglas Geivett.
[viii] This would be an intellectually honest answer, unless he can offer a fourth option, which he can’t since it doesn’t exist.
[ix] Boghossian, 22–23.
[x] Ibid., 80.
[xi] Ibid., 68.
[xii] Ibid., 23–24.
[xiii] Ibid., 23.
[xiv] I go into detail on Boghossian’s mangling of “faith” in “Tactics for Atheists,” Solid Ground, May-June 2019.
[xv] Whether or not a critic believes the accounts is irrelevant to the question. My point here is only that, since the Bible offers reasons for faith, biblical faith is not blind. Whether those reasons are persuasive to a skeptic or not is a different issue.
Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.
The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.
The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.
So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.
As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”
As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.
Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.
But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR maneuvering that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.
(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)
And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.
This was Facebook trying to reframe the idea of digital ethics — to make it so very big-picture-y that it could embrace its people-tracking ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.
“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”
Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.
And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)
It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.
A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.
“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.
Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.
The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.
“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.
“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society. That’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”
Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.
It’s not almost as if the Internet’s adtech duopoly is singing from the same ‘ads for greater good trumping the bad’ hymn sheet; the Internet’s adtech duopoly is doing exactly that.
The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.
They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.
Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.
This is a burden neither company can speak to in any other fashion. Because the solution is that their platforms not hoard people’s data in the first place.
The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.
Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.
Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.
A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.
They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.
“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.
“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”
Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).
She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.
She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.
But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.
Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.
But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.
Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.
This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.
Given all that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.
Not so long as the platform retains the power to cause damage at scale.
It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.
“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.
“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”
Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years now, is working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.
On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.
Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.
The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.
His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.
“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.
“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”
“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”
For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.
Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.
The adtech modus operandi sums to “surveillance”, Cook asserted.
Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.
“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.
Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.
The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.
So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.
“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”
Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.
It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.
No profit making entity should be used as the model for where the ethical line should lie.
Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.
One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.
DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.
But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.
There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.
One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.
Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.
And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time — when ‘better’ is not even close to being enough.
Call for collective action
Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts, that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.
Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)
Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.
“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think we will be a good idea as a component of our society.”
The penny looks to be dropping for privacy watchdogs in Europe. The idea that assessing fairness — not just legal compliance — must be a key component of their thinking, going forward, and so the direction of regulatory travel.
Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”
“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.
“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”
So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.
Rising automation and AI makes ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.
The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.
Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.
Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)
Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.
The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.
The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.
In the meanwhile civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.
It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.
Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.
People and civic society must teach them.
A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.
She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.
The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.
She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.
This is vital work to defend values and rights against the overreach of the digital here and now.
“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”
If the conference had one short sharp message it was this: Society must wake up to technology — and fast.
“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.
“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”
This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.
But, equally, not doing the job is unthinkable — because there’s no putting the AI genie back in the bottle.
from TechCrunch https://ift.tt/2OTqNR3
Big tech must not reframe digital ethics in its image
Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.
The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.
The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.
So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.
As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”
As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.
Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.
But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR manoeuvring that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.
(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)
And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.
This was Facebook trying to reframe the idea of digital ethics — to make it so very big-picture-y that it could embrace his people-tracking ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.
“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”
Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.
And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)
It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.
A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.
“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.
Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.
The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.
“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.
“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society that’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”
Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.
It’s not merely as if the Internet’s adtech duopoly is singing from the same ‘ads for greater good trumping the bad’ hymn sheet; the duopoly is doing exactly that.
The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.
They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.
Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.
This is a burden neither company can speak to in any other fashion, because the real solution would be for their platforms not to hoard people’s data in the first place.
The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.
Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.
Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.
A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.
They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.
“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.
“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”
Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).
She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.
She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.
But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.
Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.
But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.
Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.
This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.
Given that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.
Not so long as the platform retains the power to cause damage at scale.
It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.
“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.
“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”
Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years now, is working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.
On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.
Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.
The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.
His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.
“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.
“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”
“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”
For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.
Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.
The adtech modus operandi sums to “surveillance”, Cook asserted.
Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.
“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.
Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.
The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.
So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.
“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”
Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.
It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.
No profit-making entity should be used as the model for where the ethical line should lie.
Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.
One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.
DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.
But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.
There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.
One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.
Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.
And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time’ — when ‘better’ is not even close to being enough.
Call for collective action
Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts, that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.
Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)
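To make the contrast concrete, here is a deliberately simplified sketch of the difference between a pre-ticked opt-out and a genuine opt-in. The class and field names are hypothetical illustrations, not drawn from any real consent library:

```python
from dataclasses import dataclass

# Hypothetical consent settings, for illustration only.
# A classic dark pattern: consent is pre-granted and the user
# must hunt down the setting to turn it OFF.
@dataclass
class DarkPatternConsent:
    share_with_partners: bool = True   # pre-ticked box
    personalized_ads: bool = True      # pre-ticked box

# Privacy by default: nothing is shared until the user
# takes an explicit, affirmative action.
@dataclass
class HonestConsent:
    share_with_partners: bool = False
    personalized_ads: bool = False

    def opt_in(self, setting: str) -> None:
        # Consent is recorded per setting, only when asked for.
        setattr(self, setting, True)

user = HonestConsent()
assert not user.share_with_partners  # default answer is "no"
user.opt_in("personalized_ads")      # changes only after an explicit action
```

The point of the sketch is that the ethical choice lives in the default value, long before any UI is drawn.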
Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.
“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think we will be a good idea as a component of our society.”
The penny looks to be dropping for privacy watchdogs in Europe: the idea that assessing fairness — not just legal compliance — must be a key component of their thinking going forward, and so set the direction of regulatory travel.
Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”
“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.
“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”
So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.
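The piece doesn’t say what auditing an algorithm for fairness would actually involve; one minimal, commonly used check is demographic parity, which compares a model’s positive-decision rate across groups. The sketch below is a generic illustration of that idea, not the ICO’s actual audit tool:

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Compare positive-outcome rates across groups.

    decisions: iterable of (group_label, approved: bool) pairs.
    Returns the gap between the highest and lowest approval rate;
    0.0 means every group is approved at the same rate.
    """
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# A toy decision log: group A approved 3/4, group B approved 1/4.
log = [("A", True), ("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False), ("B", False)]
gap = demographic_parity_gap(log)  # 0.75 - 0.25 = 0.5
```

A nonzero gap isn’t proof of unlawful discrimination, but it flags exactly where Denham’s “it may be legal — but is this fair?” question needs asking.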
Rising automation and AI make ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.
The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.
Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.
Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)
Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.
The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.
The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.
In the meanwhile civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.
It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.
Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.
People and civic society must teach them.
A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.
She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.
The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.
She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.
This is vital work to defend values and rights against the overreach of the digital here and now.
“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”
If the conference had one short sharp message it was this: Society must wake up to technology — and fast.
“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.
“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”
This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.
But, equally, not doing the job is unthinkable — because there’s no putting the AI genii back in the bottle.
from iraidajzsmmwtv https://ift.tt/2OTqNR3 via IFTTT
0 notes
Link
Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.
The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.
The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.
So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.
As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”
As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.
Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.
But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR manoeuvring that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.
(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)
And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.
This was Facebook trying to reframe the idea of digital ethics — to make it so very big-picture-y that it could embrace its people-tracking ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.
“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”
Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.
And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)
It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.
A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.
“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.
Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.
The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.
“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.
“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society. That’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”
Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.
It’s not merely as if the Internet’s adtech duopoly is singing from the same ‘ads for greater good trumping the bad’ hymn sheet; the duopoly is doing exactly that.
The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.
They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.
Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.
This is a burden neither company can address in any other fashion, because the real solution is for their platforms not to hoard people’s data in the first place.
The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.
Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.
Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.
A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.
They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.
“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.
“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”
Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).
She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.
She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.
But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.
Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.
But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.
Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.
This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.
Given that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.
Not so long as the platform retains the power to cause damage at scale.
It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.
“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.
“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”
Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years, is now working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.
On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.
Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would-be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.
The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.
His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.
“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.
“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”
“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”
For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.
Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.
The adtech modus operandi sums to “surveillance”, Cook asserted.
Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.
“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.
Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.
The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.
So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.
“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”
Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.
It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.
No profit making entity should be used as the model for where the ethical line should lie.
Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.
One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.
DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.
But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.
There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.
One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.
Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.
And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time’ — when ‘better’ is not even close to being enough.
Call for collective action
Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts, that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.
Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)
Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.
“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think we will be a good idea as a component of our society.”
The penny looks to be dropping for privacy watchdogs in Europe: the idea that assessing fairness — not just legal compliance — must be a key component of their thinking going forward, and so must set the direction of regulatory travel.
Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”
“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.
“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”
So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.
Rising automation and AI make ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.
The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.
Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.
Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)
Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.
The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.
The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.
In the meantime civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.
It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.
Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.
People and civic society must teach them.
A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.
She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.
The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.
She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.
This is vital work to defend values and rights against the overreach of the digital here and now.
“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”
If the conference had one short sharp message it was this: Society must wake up to technology — and fast.
“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.
“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”
This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.
But, equally, not doing the job is unthinkable — because there’s no putting the AI genie back in the bottle.
from Social – TechCrunch https://ift.tt/2OTqNR3 Original Content From: https://techcrunch.com
Big tech must not reframe digital ethics in its image
Facebook founder Mark Zuckerberg’s visage loomed large over the European parliament this week, both literally and figuratively, as global privacy regulators gathered in Brussels to interrogate the human impacts of technologies that derive their power and persuasiveness from our data.
The eponymous social network has been at the center of a privacy storm this year. And every fresh Facebook content concern — be it about discrimination or hate speech or cultural insensitivity — adds to a damaging flood.
The overarching discussion topic at the privacy and data protection confab, both in the public sessions and behind closed doors, was ethics: How to ensure engineers, technologists and companies operate with a sense of civic duty and build products that serve the good of humanity.
So, in other words, how to ensure people’s information is used ethically — not just in compliance with the law. Fundamental rights are increasingly seen by European regulators as a floor not the ceiling. Ethics are needed to fill the gaps where new uses of data keep pushing in.
As the EU’s data protection supervisor, Giovanni Buttarelli, told delegates at the start of the public portion of the International Conference of Data Protection and Privacy Commissioners: “Not everything that is legally compliant and technically feasible is morally sustainable.”
As if on cue Zuckerberg kicked off a pre-recorded video message to the conference with another apology. Albeit this was only for not being there to give an address in person. Which is not the kind of regret many in the room are now looking for, as fresh data breaches and privacy incursions keep being stacked on top of Facebook’s Cambridge Analytica data misuse scandal like an unpalatable layer cake that never stops being baked.
Evidence of a radical shift of mindset is what champions of civic tech are looking for — from Facebook in particular and adtech in general.
But there was no sign of that in Zuckerberg’s potted spiel. Rather he displayed the kind of masterfully slick PR maneuvering that’s associated with politicians on the campaign trail. It’s the natural patter for certain big tech CEOs too, these days, in a sign of our sociotechnical political times.
(See also: Facebook hiring ex-UK deputy PM, Nick Clegg, to further expand its contacts database of European lawmakers.)
And so the Facebook founder seized on the conference’s discussion topic of big data ethics and tried to zoom right back out again. Backing away from talk of tangible harms and damaging platform defaults — aka the actual conversational substance of the conference (from talk of how dating apps are impacting how much sex people have and with whom they’re doing it; to shiny new biometric identity systems that have rebooted discriminatory caste systems) — to push the idea of a need to “strike a balance between speech, security, privacy and safety”.
This was Facebook trying to reframe the idea of digital ethics — to make it so very big-picture-y that it could embrace its people-tracking, ad-funded business model as a fuzzily wide public good, with a sort of ‘oh go on then’ shrug.
“Every day people around the world use our services to speak up for things they believe in. More than 80 million small businesses use our services, supporting millions of jobs and creating a lot of opportunity,” said Zuckerberg, arguing for a ‘both sides’ view of digital ethics. “We believe we have an ethical responsibility to support these positive uses too.”
Indeed, he went further, saying Facebook believes it has an “ethical obligation to protect good uses of technology”.
And from that self-serving perspective almost anything becomes possible — as if Facebook is arguing that breaking data protection law might really be the ‘ethical’ thing to do. (Or, as the existentialists might put it: ‘If god is dead, then everything is permitted’.)
It’s an argument that radically elides some very bad things, though. And glosses over problems that are systemic to Facebook’s ad platform.
A little later, Google’s CEO Sundar Pichai also dropped into the conference in video form, bringing much the same message.
“The conversation about ethics is important. And we are happy to be a part of it,” he began, before an instant hard pivot into referencing Google’s founding mission of “organizing the world’s information — for everyone” (emphasis his), before segueing — via “knowledge is empowering” — to asserting that “a society with more information is better off than one with less”.
Is having access to more information of unknown and dubious or even malicious provenance better than having access to some verified information? Google seems to think so.
The pre-recorded Pichai didn’t have to concern himself with all the mental ellipses bubbling up in the thoughts of the privacy and rights experts in the room.
“Today that mission still applies to everything we do at Google,” his digital image droned on, without mentioning what Google is thinking of doing in China. “It’s clear that technology can be a positive force in our lives. It has the potential to give us back time and extend opportunity to people all over the world.
“But it’s equally clear that we need to be responsible in how we use technology. We want to make sound choices and build products that benefit society. That’s why earlier this year we worked with our employees to develop a set of AI principles that clearly state what types of technology applications we will pursue.”
Of course it sounds fine. Yet Pichai made no mention of the staff who’ve actually left Google because of ethical misgivings. Nor the employees still there and still protesting its ‘ethical’ choices.
It’s not almost as if the Internet’s adtech duopoly is singing from the same ‘ads for greater good trumping the bad’ hymn sheet; the Internet’s adtech duopoly is doing exactly that.
The ‘we’re not perfect and have lots more to learn’ line that also came from both CEOs seems mostly intended to manage regulatory expectation vis-a-vis data protection — and indeed on the wider ethics front.
They’re not promising to do no harm. Nor to always protect people’s data. They’re literally saying they can’t promise that. Ouch.
Meanwhile, another common FaceGoog message — an intent to introduce ‘more granular user controls’ — just means they’re piling even more responsibility onto individuals to proactively check (and keep checking) that their information is not being horribly abused.
This is a burden neither company can speak to in any other fashion — because the real solution would be for their platforms not to hoard people’s data in the first place.
The other ginormous elephant in the room is big tech’s massive size; which is itself skewing the market and far more besides.
Neither Zuckerberg nor Pichai directly addressed the notion of overly powerful platforms themselves causing structural societal harms, such as by eroding the civically minded institutions that are essential to defend free societies and indeed uphold the rule of law.
Of course it’s an awkward conversation topic for tech giants if vital institutions and societal norms are being undermined because of your cut-throat profiteering on the unregulated cyber seas.
A great tech fix to avoid answering awkward questions is to send a video message in your CEO’s stead. And/or a few minions. Facebook VP and chief privacy officer, Erin Egan, and Google’s SVP of global affairs Kent Walker, were duly dispatched and gave speeches in person.
They also had a handful of audience questions put to them by an on stage moderator. So it fell to Walker, not Pichai, to speak to Google’s contradictory involvement in China in light of its foundational claim to be a champion of the free flow of information.
“We absolutely believe in the maximum amount of information available to people around the world,” Walker said on that topic, after being allowed to intone on Google’s goodness for almost half an hour. “We have said that we are exploring the possibility of ways of engaging in China to see if there are ways to follow that mission while complying with laws in China.
“That’s an exploratory project — and we are not in a position at this point to have an answer to the question yet. But we continue to work.”
Egan, meanwhile, batted away her trio of audience concerns — about Facebook’s lack of privacy by design/default; and how the company could ever address ethical concerns without dramatically changing its business model — by saying it has a new privacy and data use team sitting horizontally across the business, as well as a data protection officer (an oversight role mandated by the EU’s GDPR; into which Facebook plugged its former global deputy chief privacy officer, Stephen Deadman, earlier this year).
She also said the company continues to invest in AI for content moderation purposes. So, essentially, more trust us. And trust our tech.
She also replied in the affirmative when asked whether Facebook will “unequivocally” support a strong federal privacy law in the US — with protections “equivalent” to those in Europe’s data protection framework.
But of course Zuckerberg has said much the same thing before — while simultaneously advocating for weaker privacy standards domestically. So who now really wants to take Facebook at its word on that? Or indeed on anything of human substance.
Not the EU parliament, for one. MEPs sitting in the parliament’s other building, in Strasbourg, this week adopted a resolution calling for Facebook to agree to an external audit by regional oversight bodies.
But of course Facebook prefers to run its own audit. And in a response statement the company claims it’s “working relentlessly to ensure the transparency, safety and security” of people who use its service (so bad luck if you’re one of those non-users it also tracks then). Which is a very long-winded way of saying ‘no, we’re not going to voluntarily let the inspectors in’.
Facebook’s problem now is that trust, once burnt, takes years and mountains’ worth of effort to restore.
This is the flip side of ‘move fast and break things’. (Indeed, one of the conference panels was entitled ‘move fast and fix things’.) It’s also the hard-to-shift legacy of an unapologetically blind ~decade-long dash for growth regardless of societal cost.
Given that, it looks unlikely that Zuckerberg’s attempt to paint a portrait of digital ethics in his company’s image will do much to restore trust in Facebook.
Not so long as the platform retains the power to cause damage at scale.
It was left to everyone else at the conference to discuss the hollowing out of democratic institutions, societal norms, human interactions and so on — as a consequence of data (and market capital) being concentrated in the hands of the ridiculously powerful few.
“Today we face the gravest threat to our democracy, to our individual liberty in Europe since the war and the United States perhaps since the civil war,” said Barry Lynn, a former journalist and senior fellow at the Google-backed New America Foundation think tank in Washington, D.C., where he had directed the Open Markets Program — until it was shut down after he wrote critically about, er, Google.
“This threat is the consolidation of power — mainly by Google, Facebook and Amazon — over how we speak to one another, over how we do business with one another.”
Meanwhile the original architect of the World Wide Web, Tim Berners-Lee, who has been warning about the crushing impact of platform power for years now is working on trying to decentralize the net’s data hoarders via new technologies intended to give users greater agency over their data.
On the democratic damage front, Lynn pointed to how news media is being hobbled by an adtech duopoly now sucking hundreds of billions of ad dollars out of the market annually — by renting out what he dubbed their “manipulation machines”.
Not only do they sell access to these ad targeting tools to mainstream advertisers — to sell the usual products, like soap and diapers — they’re also, he pointed out, taking dollars from “autocrats and would be autocrats and other social disruptors to spread propaganda and fake news to a variety of ends, none of them good”.
The platforms’ unhealthy market power is the result of a theft of people’s attention, argued Lynn. “We cannot have democracy if we don’t have a free and robustly funded press,” he warned.
His solution to the society-deforming might of platform power? Not a newfangled decentralization tech but something much older: Market restructuring via competition law.
“The basic problem is how we structure or how we have failed to structure markets in the last generation. How we have licensed or failed to license monopoly corporations to behave.
“In this case what we see here is this great mass of data. The problem is the combination of this great mass of data with monopoly power in the form of control over essential pathways to the market combined with a license to discriminate in the pricing and terms of service. That is the problem.”
“The result is to centralize,” he continued. “To pick and choose winners and losers. In other words the power to reward those who heed the will of the master, and to punish those who defy or question the master — in the hands of Google, Facebook and Amazon… That is destroying the rule of law in our society and is replacing rule of law with rule by power.”
For an example of an entity that’s currently being punished by Facebook’s grip on the social digital sphere you need look no further than Snapchat.
Also on the stage in person: Apple’s CEO Tim Cook, who didn’t mince his words either — attacking what he dubbed a “data industrial complex” which he said is “weaponizing” people’s personal data against them for private profit.
The adtech modus operandi amounts to “surveillance”, Cook asserted.
Cook called this a “crisis”, painting a picture of technologies being applied in an ethics-free vacuum to “magnify our worst human tendencies… deepen divisions, incite violence and even undermine our shared sense of what is true and what is false” — by “taking advantage of user trust”.
“This crisis is real… And those of us who believe in technology’s potential for good must not shrink from this moment,” he warned, telling the assembled regulators that Apple is aligned with their civic mission.
Of course Cook’s position also aligns with Apple’s hardware-dominated business model — in which the company makes most of its money by selling premium priced, robustly encrypted devices, rather than monopolizing people’s attention to sell their eyeballs to advertisers.
The growing public and political alarm over how big data platforms stoke addiction and exploit people’s trust and information — and the idea that an overarching framework of not just laws but digital ethics might be needed to control this stuff — dovetails neatly with the alternative track that Apple has been pounding for years.
So for Cupertino it’s easy to argue that the ‘collect it all’ approach of data-hungry platforms is both lazy thinking and irresponsible engineering, as Cook did this week.
“For artificial intelligence to be truly smart it must respect human values — including privacy,” he said. “If we get this wrong, the dangers are profound. We can achieve both great artificial intelligence and great privacy standards. It is not only a possibility — it is a responsibility.”
Yet Apple is not only a hardware business. In recent years the company has been expanding and growing its services business. It even involves itself in (a degree of) digital advertising. And it does business in China.
It is, after all, still a for-profit business — not a human rights regulator. So we shouldn’t be looking to Apple to spec out a digital ethical framework for us, either.
No profit making entity should be used as the model for where the ethical line should lie.
Apple sets a far higher standard than other tech giants, certainly, even as its grip on the market is far more partial because it doesn’t give its stuff away for free. But it’s hardly perfect where privacy is concerned.
One inconvenient example for Apple is that it takes money from Google to make the company’s search engine the default for iOS users — even as it offers iOS users a choice of alternatives (if they go looking to switch) which includes pro-privacy search engine DuckDuckGo.
DDG is a veritable minnow vs Google, and Apple builds products for the consumer mainstream, so it is supporting privacy by putting a niche search engine alongside a behemoth like Google — as one of just four choices it offers.
But defaults are hugely powerful. So Google search being the iOS default means most of Apple’s mobile users will have their queries fed straight into Google’s surveillance database, even as Apple works hard to keep its own servers clear of user data by not collecting their stuff in the first place.
There is a contradiction there. So there is a risk for Apple in amping up its rhetoric against a “data industrial complex” — and making its naturally pro-privacy preference sound like a conviction principle — because it invites people to dial up critical lenses and point out where its defence of personal data against manipulation and exploitation does not live up to its own rhetoric.
One thing is clear: In the current data-based ecosystem all players are conflicted and compromised.
Though only a handful of tech giants have built unchallengeably massive tracking empires via the systematic exploitation of other people’s data.
And as the apparatus of their power gets exposed, these attention-hogging adtech giants are making a dumb show of papering over the myriad ways their platforms pound on people and societies — offering paper-thin promises to ‘do better next time’ — when ‘better’ is not even close to being enough.
Call for collective action
Increasingly powerful data-mining technologies must be sensitive to human rights and human impacts, that much is crystal clear. Nor is it enough to be reactive to problems after or even at the moment they arise. No engineer or system designer should feel it’s their job to manipulate and trick their fellow humans.
Dark pattern designs should be repurposed into a guidebook of what not to do and how not to transact online. (If you want a mission statement for thinking about this it really is simple: Just don’t be a dick.)
Sociotechnical Internet technologies must always be designed with people and societies in mind — a key point that was hammered home in a keynote by Berners-Lee, the inventor of the World Wide Web, and the tech guy now trying to defang the Internet’s occupying corporate forces via decentralization.
“As we’re designing the system, we’re designing society,” he told the conference. “Ethical rules that we choose to put in that design [impact society]… Nothing is self evident. Everything has to be put out there as something that we think will be a good idea as a component of our society.”
The penny looks to be dropping for privacy watchdogs in Europe: the idea that assessing fairness — not just legal compliance — must be a key component of their thinking going forward, and so set the direction of regulatory travel.
Watchdogs like the UK’s ICO — which just fined Facebook the maximum possible penalty for the Cambridge Analytica scandal — said so this week. “You have to do your homework as a company to think about fairness,” said Elizabeth Denham, when asked ‘who decides what’s fair’ in a data ethics context. “At the end of the day if you are working, providing services in Europe then the regulator’s going to have something to say about fairness — which we have in some cases.”
“Right now, we’re working with some Oxford academics on transparency and algorithmic decision making. We’re also working on our own tool as a regulator on how we are going to audit algorithms,” she added. “I think in Europe we’re leading the way — and I realize that’s not the legal requirement in the rest of the world but I believe that more and more companies are going to look to the high standard that is now in place with the GDPR.
“The answer to the question is ‘is this fair?’ It may be legal — but is this fair?”
So the short version is data controllers need to prepare themselves to consult widely — and examine their consciences closely.
Rising automation and AI makes ethical design choices even more imperative, as technologies become increasingly complex and intertwined, thanks to the massive amounts of data being captured, processed and used to model all sorts of human facets and functions.
The closed session of the conference produced a declaration on ethics and data in artificial intelligence — setting out a list of guiding principles to act as “core values to preserve human rights” in the developing AI era — which included concepts like fairness and responsible design.
Few would argue that a powerful AI-based technology such as facial recognition isn’t inherently in tension with a fundamental human right like privacy.
Nor that such powerful technologies aren’t at huge risk of being misused and abused to discriminate and/or suppress rights at vast and terrifying scale. (See, for example, China’s push to install a social credit system.)
Biometric ID systems might start out with claims of the very best intentions — only to shift function and impact later. The dangers to human rights of function creep on this front are very real indeed. And are already being felt in places like India — where the country’s Aadhaar biometric ID system has been accused of rebooting ancient prejudices by promoting a digital caste system, as the conference also heard.
The consensus from the event is it’s not only possible but vital to engineer ethics into system design from the start whenever you’re doing things with other people’s data. And that routes to market must be found that don’t require dispensing with a moral compass to get there.
The notion of data-processing platforms becoming information fiduciaries — i.e. having a legal duty of care towards their users, as a doctor or lawyer does — was floated several times during public discussions. Though such a step would likely require more legislation, not just adequately rigorous self examination.
In the meanwhile civic society must get to grips, and grapple proactively, with technologies like AI so that people and societies can come to collective agreement about a digital ethics framework. This is vital work to defend the things that matter to communities so that the anthropogenic platforms Berners-Lee referenced are shaped by collective human values, not the other way around.
It’s also essential that public debate about digital ethics does not get hijacked by corporate self interest.
Tech giants are not only inherently conflicted on the topic but — right across the board — they lack the internal diversity to offer a broad enough perspective.
People and civic society must teach them.
A vital closing contribution came from the French data watchdog’s Isabelle Falque-Pierrotin, who summed up discussions that had taken place behind closed doors as the community of global data protection commissioners met to plot next steps.
She explained that members had adopted a roadmap for the future of the conference to evolve beyond a mere talking shop and take on a more visible, open governance structure — to allow it to be a vehicle for collective, international decision-making on ethical standards, and so alight on and adopt common positions and principles that can push tech in a human direction.
The initial declaration document on ethics and AI is intended to be just the start, she said — warning that “if we can’t act we will not be able to collectively control our future”, and couching ethics as “no longer an option, it is an obligation”.
She also said it’s essential that regulators get with the program and enforce current privacy laws — to “pave the way towards a digital ethics” — echoing calls from many speakers at the event for regulators to get on with the job of enforcement.
This is vital work to defend values and rights against the overreach of the digital here and now.
“Without ethics, without an adequate enforcement of our values and rules our societal models are at risk,” Falque-Pierrotin also warned. “We must act… because if we fail, there won’t be any winners. Not the people, nor the companies. And certainly not human rights and democracy.”
If the conference had one short sharp message it was this: Society must wake up to technology — and fast.
“We’ve got a lot of work to do, and a lot of discussion — across the boundaries of individuals, companies and governments,” agreed Berners-Lee. “But very important work.
“We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out.”
This work is also an opportunity for civic society to define and reaffirm what’s important. So it’s not only about mitigating risks.
But, equally, not doing the job is unthinkable — because there’s no putting the AI genie back in the bottle.
Via Natasha Lomas https://techcrunch.com
Text
World: With a vocabulary from 'Goodfellas,' Trump evokes his native New York
Now, as Trump faces his own mushrooming legal troubles, he has taken to using a vocabulary that sounds uncannily like that of Gotti and his fellow mobsters in the waning days of organized crime.
WASHINGTON — For much of the 1980s and 1990s, “the Dapper Don” and “the Donald” vied for supremacy on the front pages of New York’s tabloids. The don, John J. Gotti, died in a federal prison in 2002, while Donald Trump went on to be president of the United States.
Now, as Trump faces his own mushrooming legal troubles, he has taken to using a vocabulary that sounds uncannily like that of Gotti and his fellow mobsters in the waning days of organized crime, when ambitious prosecutors like Rudy Giuliani tried to turn witnesses against their bosses to win racketeering convictions.
“I know all about flipping,” Trump told Fox News this week. “For 30, 40 years I’ve been watching flippers. Everything’s wonderful and then they get 10 years in jail and they flip on whoever the next highest one is, or as high as you can go.”
Trump was referring to the decision by his former lawyer, Michael D. Cohen, to take a plea deal on fraud charges and admit to prosecutors that he paid off two women to clam up about the sexual affairs that they claimed to have had with Trump.
But the president was also evoking a bygone world — the outer boroughs of New York City, where he grew up — a place of leafy neighborhoods and working-class families, as well as its share of shady businessmen and mob-linked politicians. From an early age, Trump encountered these raffish types with their unscrupulous methods, unsavory connections and uncertain loyalties.
Trump is comfortable with the wiseguys-argot of that time and place, and he defaults to it whether he is describing his faithless lawyer or his fruitless efforts to discourage the FBI director, James B. Comey, from investigating one of his senior advisers, Michael T. Flynn, over his connections to Russia.
“When I first heard that Trump said to Comey, ‘Let this go,’ it just rang such a bell with me,” said Nicholas Pileggi, an author who has chronicled the Mafia in books and films like “Goodfellas” and “Casino.” “Trump was surrounded by these people. Being raised in that environment, it was normalized to him.”
Pileggi traced the president’s language to the Madison Club, a Democratic Party machine in Brooklyn that helped his father, Fred Trump, win his first real estate deals in the 1930s. In those smoke-filled circles, favors were traded like cases of whiskey and loyalty mattered above all.
Trump honed his vocabulary over decades through his association with lawyer Roy Cohn, who besides working for Sen. Joseph McCarthy also represented Mafia bosses like Gotti, Tony Salerno and Carmine Galante. He also gravitated to colorful characters like Roger J. Stone Jr., the pinkie-ring-wearing political consultant, and Stone’s onetime partner, Paul Manafort, the former Trump campaign chairman who was convicted Tuesday of eight counts of tax and bank fraud.
“It’s the kind of subculture that most people avoid,” said Michael D’Antonio, one of Trump’s biographers. “You cross the street to get away from people like that. Donald brings them close. He’s most comfortable with them.”
Trump’s current lawyer, Giuliani, said that as a U.S. attorney for the Southern District of New York, he listened to 4,000 hours of taped conversations of Mafia suspects — a discipline that he claims makes him an expert in deciphering Trump’s intent in recorded exchanges with Cohen about paying off women. It has also steeped him in the language and folkways of the mob.
Giuliani was an enthusiastic fan of “The Sopranos,” once joking that HBO set its celebrated series about an everyday mob family in New Jersey because he had done such a good job driving the Mafia out of New York.
During Giuliani’s days as a U.S. attorney, his office was labeled the “House of Pancakes” for the parade of suspects who “flipped” to try to reduce their prison sentences.
In his Fox interview, Trump expressed a fleeting moment of sympathy for Cohen’s desire to do likewise.
“If somebody defrauded a bank and he’s going to get 10 years in jail or 20 years in jail, but if you can say something bad about Donald Trump and you’ll go down to two years or three years, which is the deal he made,” the president said. “In all fairness to him, most people are going to do that.”
Still, Trump added, “it almost ought to be illegal.”
At other times, he has made clear that he views disloyalty pretty much the way Gotti would have viewed the decision of his underboss, Sammy Gravano, to cooperate with the government in 1991 and testify against him in the trial that sent him away for life.
Defending the White House counsel, Donald F. McGahn II, after a report in The New York Times that he had spent 30 hours speaking to the special counsel, Robert Mueller, Trump wrote on Twitter that McGahn would never sell out his boss like a “John Dean type ‘RAT.'”
Dean, whose testimony as White House counsel about Watergate helped bring down President Richard M. Nixon, fired back. Trump, he tweeted, “thinks, acts and sounds like a mob boss.”
“There is nothing presidential about him or his actions,” Dean added.
Sometimes Trump’s gangland references can be baffling. This month, he defended Manafort by comparing him to Al Capone. Manafort, he suggested, was getting rougher treatment than Capone, whom the president called a “legendary mob boss, killer and ‘Public Enemy Number One.'”
His references are also unlikely to impress prosecutors like Mueller, for whom the mob is old hat. But they, too, have been struck by the parallels. Comey, in his recent book, “A Higher Loyalty,” likened his first meeting with the future president at Trump Tower in Manhattan to paying a call to a Mafia don.
“I thought of the New York Mafia social clubs, an image from my days as a Manhattan federal prosecutor in the 1980s and 1990s,” Comey said. “The Ravenite. The Palma Boys. Café Giardino. I couldn’t shake the picture. And looking back, it wasn’t as odd or dramatic as I thought at the time.”
Trump, he wrote, seemed to be trying to make Comey and his colleagues from the intelligence agencies “part of the same family.”
To D’Antonio, the president’s tough-guy language mostly sounds quaint — the vocabulary of a man who grew up with a comic-book view that real men wore fedoras and carried .38-caliber revolvers.
“He thinks other people understand the ‘Guys and Dolls’ dialogue the way he does,” said D’Antonio, whose next book is about Vice President Mike Pence. “He doesn’t realize in 2018 that it sounds ridiculous to talk about rats.”
This article originally appeared in The New York Times.
Mark Landler © 2018 The New York Times
source http://www.newssplashy.com/2018/08/world-with-vocabulary-from-goodfellas.html
Text
Antitrust Approval in Minority Acquisitions — A case of several Ifs and Buts
In force since 2011, the Indian merger control regime envisages an ex-ante assessment by the Competition Commission of India (CCI) of all M&A transactions meeting certain financial thresholds provided in the Competition Act, 2002, as an anticipatory step to avoid potential anti-competitive outcomes such as the creation of a monopoly or co-ordinated action by competitors. However, considering the need to avoid a filing requirement for certain types of M&A transactions which are not likely to cause an appreciable adverse effect on competition, the CCI, by way of the Competition Commission of India (Procedure in Regard to the Transaction of Business Relating to Combinations) Regulations, 2011 (Combination Regulations), exempted certain categories of M&A transactions from the notification requirement. One such exemption (provided in Item 1 of Schedule I to the Combination Regulations) deals with minority investments and exempts acquisitions of less than 25% of shares, if they are made “solely as an investment” or in the acquirer’s “ordinary course of business”, with a categorical caveat that such a transaction should not result in the acquisition of “control” (25% exemption).
Though the 25% exemption may, at first glance, seem extremely advantageous to private equity and other financial investors, the verbose riders under Item 1 and various CCI orders considerably limit its scope. More often than not, acquirers prefer to err on the side of caution and seek the CCI’s approval to avoid monetary as well as reputational loss. This article highlights a few of the issues encountered when determining the applicability of the 25% exemption, and in particular the phrase “solely as an investment”.
Interpreting “solely as an investment”
The 25% exemption provides that a minority acquisition when made solely as an investment, need not be notified to the CCI if it does not lead to an acquisition of control. The term “solely as an investment” has been construed to imply that a minority acquisition, made with a “strategic intent”, cannot avail of the 25% exemption. As is evident from various CCI orders, strategic intent itself has been inferred by the CCI on the basis of various factors such as the existence of a co-operation/partnership alliance[1], public statements made by the parties[2] characterising the acquisition as “strategic”, and in some cases, based on the presence of the acquirer in horizontally/vertically linked markets.[3] For instance, Alibaba’s non-controlling minority acquisition of 4.14% in Jasper Infotech (Snapdeal) was notified to the CCI since the parties were competitors.[4]
Less than 10% exemption
The CCI’s reading of transactions undertaken solely as an investment shifted slightly with the introduction, in 2016, of a deeming explanation (10% exemption), which provides that an acquisition of less than 10% shareholding is regarded as made solely as an investment where the acquirer does not have: (i) rights which are not exercisable by ordinary shareholders; (ii) the right or intention to nominate a director; or (iii) the intention to participate in the affairs or management of the target.
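The cumulative conditions of the 10% exemption lend themselves to a simple decision check. The sketch below is purely illustrative (it is not legal advice and no substitute for the text of the Combination Regulations); the class, field, and function names are hypothetical, chosen only to mirror the three criteria described above:

```python
from dataclasses import dataclass

@dataclass
class MinorityAcquisition:
    shareholding_pct: float          # stake being acquired, as a percentage
    special_rights: bool             # rights not exercisable by ordinary shareholders
    board_nomination: bool           # right or intention to nominate a director
    management_participation: bool   # intention to participate in target's affairs

def deemed_solely_as_investment(acq: MinorityAcquisition) -> bool:
    """All three negative conditions must hold, and the stake must be under 10%."""
    return (
        acq.shareholding_pct < 10.0
        and not acq.special_rights
        and not acq.board_nomination
        and not acq.management_participation
    )

# The P5/Indus fact pattern: a 4.85% stake, but with an intention to
# nominate a director -- the deeming test fails.
p5 = MinorityAcquisition(4.85, special_rights=False,
                         board_nomination=True,
                         management_participation=False)
print(deemed_solely_as_investment(p5))  # False
```

Note that failing this check does not, by itself, make a transaction strategic; as discussed next, that is precisely the ambiguity the deeming explanation has created.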
Unfortunately, the 10% exemption has led to greater confusion, as it is not clear whether an acquisition of less than 10% shares becomes strategic by default if it fails one of the three criteria mentioned above. This has resulted in situations where parties have notified acquisitions of less than 10% shareholding even when the acquisition was accompanied only by the right to appoint one director on the target’s board. For instance, P5 Asia Holding Investments (Mauritius) Ltd. (P5) notified an acquisition of 4.85% shares in Indus Towers Ltd. (Indus) given that the acquirer intended to nominate a director on the board of the target.[5] While this transaction could not have benefitted from the 10% exemption, the CCI’s order did not clarify why the transaction could not have been classified as being undertaken in the “ordinary course of business”. Notably, there were no horizontal or vertical overlaps between Indus and P5.
A similar approach was followed by the CCI in TPG Manta/FTW[6], where the CCI required Thoma Bravo to become a notifying party since it had acquired the right to nominate a director on the target’s board. The CCI noted that Thoma Bravo’s investment, coupled with a board seat, did not meet the conditions laid out under the 10% exemption. However, despite noting that Thoma Bravo did not have any affirmative veto rights over the target’s strategic business decisions (which are typically viewed as amounting to control), the CCI did not extend the benefit of the 25% exemption to it.
Evidently, the 25% exemption stipulates an “or” test, i.e. a transaction may be made either solely as an investment or in the ordinary course of business to avail of the exemption. Fulfilment of either limb should ideally be sufficient ground for claiming the 25% exemption. The aforesaid cases, however, seem to place reliance on, and examine transactions on the basis of, only one of these limbs.
“Solely as an investment” despite shareholding in competitors?
With the introduction of the 10% exemption, it was also thought that cases such as Alibaba/Jasper Infotech[7], where the acquirer was present in a competing or vertically linked business but acquired a non-controlling stake of less than 10%, would be viewed as transactions undertaken solely as an investment. However, the CCI has recently reiterated the view it took in orders passed before the 10% exemption was introduced. In New Moon/Mylan[8], one such earlier decision, the CCI had noted that
an acquisition of shares or voting rights, even if it is of less than 25 per cent, may raise competition concerns if the acquirer and the target are either engaged in business of substitutable products/services or are engaged in activities at different stages or levels of the production chain. Such acquisitions need not necessarily be termed as an acquisition made solely as an investment or in the ordinary course of business, and thus would require competition assessment, on a case-to-case basis…
In the EMC Ltd./Mcnally Bharat Engg. Co. Ltd.[9] order passed earlier this year, the CCI reiterated this view and held that where an acquirer and the target are engaged in the same, substitutable or competing businesses or where their businesses are vertically related, such acquisition of shares would not necessarily be termed as an acquisition made solely as an investment or in the ordinary course of business. While this case related to the acquisition of more than 10% shareholding in a competing entity, it remains to be seen whether the CCI would adopt a similar approach while assessing cases of less than 10% acquisitions.
Similarly, it is questionable whether parties could rely on the 10% exemption for an acquisition of less than 10% shareholding that is accompanied by a co-operation/partnership agreement.
Conclusion
While the CCI, through its orders, has created additional caveats to the 25% exemption and the 10% exemption, we cannot overlook the rationale for doing so. Pertinently, a blanket 25% exemption would be impractical and the CCI has been endeavouring to maintain a balance between assessing transactions which may potentially cause competition concerns vis-à-vis facilitating M&A with minimal regulatory hindrances.
However, certainty in the letter of the law and its interpretation forms a cornerstone of legislation. The CCI is a pro-active regulator and has repeatedly taken steps to evolve the regime based on inputs received from stakeholders. To its credit, the CCI also actively runs a pre-filing consultation process, allowing parties to informally approach its officials to seek clarity on procedural and substantive issues. An additional step of clarifying the questions raised repeatedly, whether through amendments or guidance notes, would go a long way towards establishing an efficient merger control regime.
Bharat Budholia is a Partner with the Competition Law Practice at Cyril Amarchand Mangaldas and can be contacted at [email protected]. Arunima Chandra is a Senior Associate with the Competition Law Practice at Cyril Amarchand Mangaldas and can be contacted at [email protected].
[1] SAAB AB(Publ.)/Pipavav Defence and Offshore Engg. Co. Ltd., Combination Registration No. C-2012/11/95 dated 1-1-2013; Sumitomo Mitsui Trust Bank Ltd./Reliance Capital Ltd., Combination Registration No. C-2014/12/235 dated 26-2-2015.
[2] Piramal Enterprises Ltd./Shriram Transport Finance Co., Combination Registration No. C-2015/02/249 dated 2-5-2016; SCM Soilfert Ltd./Deepak Fertilizers and Petrochemicals Corpn. Ltd., Combination Registration No. C-2014/05/175 dated 10-2-2015.
[3] New Moon B.V./Mylan, Combination Registration No. C-2014/08/202 dated 10-11-2014.
[4] Alibaba/Jasper Infotech (P) Ltd., Combination Registration No. C-2015/08/301 dated 7-10-2015.
[5] P5 Asia Holding Investments (Mauritius) Ltd./Indus Towers Ltd., Combination Registration No. C-2016/10/452 dated 7-12-2016.
[6] Combination Registration No. C-2016/10/439 dated 31-1-2017.
[7] Combination Registration No. C-2015/08/301 dated 7-10-2015.
[8] Combination Registration No. C-2014/08/202 dated 10-11-2014.
[9] Combination Registration No. C-2015/07/293 dated 26-4-2017.
The post Antitrust Approval in Minority Acquisitions — A case of several Ifs and Buts appeared first on SCC Blog.
[ tag drop 1: CANON VERSES. descriptions under cut. ]
( weaned on poison: default i. )
The making of a man. Spanning twenty-two years from birth to college graduation, William’s childhood verse finds him resentful, ambitious, and uncertain about the kind of man he wants to be. While prickly and eccentric, William doesn’t hide his insecurity or true self as much as he wishes he could: the people who meet him in college see him transform into someone who has perfected the delicately crafted persona he begins creating during this time.
( hungry dog’s logic: default ii. )
The happiest years of William’s life, from the opening of the diner to the birthday party where it all goes wrong. MORE TBA.
( house haunted by shame: default iii. )
After Evan’s death and the incident with Suzie, it’s safe to say William spirals, just a little. From the bite of ‘83 to Elizabeth’s death (in ‘90 — though this is dependent obviously on who I’m writing with too!). William at his most unstable. MORE TBA.
( still left with his hands: default iv. )
Elizabeth’s death leaves William stricken with grief and the crumbling discovery that he can’t outrun the misfortune that is plaguing his family (mostly at his own hands). Left only with Michael, his need to fix everything, and the crippling loss of his family, it wouldn’t be entirely accurate to say William is MORE stable: but perhaps less likely to murder. Between ‘90 and ‘92 (or whenever Michael ‘dies’: again dependent on writing). MORE TBA.
( just to be alive: default v. )
Entirely alone, this last human verse spans the two years between Michael’s ‘death’ and William’s own springlock incident in ‘94. MORE TBA.
( first clear thought in years: default vi. )
Get springlocked, idiot! Takes place any time in the thirty years before Springtrap is discovered (1994–2024). William’s memory and sense of, well, being ‘William’ drift further and further as time progresses, his fury and agony growing. MORE TBA.
( tomb that won’t close: default vii. )
F.NAF 3 to F.NAF 6! More monster than man but still unbearably human, it has enough of itself left to recognise its son & old friend. That doesn’t stop it wanting to ruin everything in its path. MORE TBA.
( what is the difference between science and god?: default viii. )
Sometime before Elizabeth’s death, William infuses his remnant with a computer in another effort to prolong his lifespan. It results in Glitchtrap: a virus that both is and isn’t William. Glitchtrap, emerging thanks to the game it infuses with, has only one goal — revive himself properly to continue the work he started decades ago. See this post (link tba) for more Glitchtrap information. MORE TBA.
( obsessive replay: default ix. )
Ultimate Custom Night time!! After the events of F.NAF 6, William actually dies, and his soul is transported elsewhere. Turns out, hell is the neverending nightmare that a little kid dreams up for you. William suffers, screams, schemes, and looks for salvation at every corner. It never seems to come. MORE TBA.
( alive while a name is spoken: default x. )
Glitchtrap’s goal succeeds, and William is back in the land of the living! Trouble is, the virus hadn’t predicted its true self facing decades of torture at Cassidy’s hands… or its true self’s inability to fully adapt to the modern world. Torn between an inability to move forward and a desperate need to not look back, William is finally honest with himself, and tries to run from things one last time. Adopting a role at the Pizzaplex unassumingly and trying to live out the rest of his life in (relative) peace, he naively hopes that everything is over. Of course, he’s never been more sorely mistaken. MORE TBA.
#( weaned on poison: default i. )#( hungry dog’s logic: default ii. )#( house haunted by shame: default iii. )#( still left with his hands: default iv. )#( just to be alive: default v. )#( first clear thought in years: default vi. )#( tomb that won’t close: default vii. )#( what’s the difference between science and god?: default viii. )#( obsessive replay: default ix. )#( alive while a name is spoken: default x. )
Deconstruction & Love or: Weirdos
Ayelet Lerman is one of the most active individuals within the Israeli improv scene. In the past two years I have seen her almost exclusively play viola in such improv sessions (whether in group or solo performances), leading me to assume that her entire practice could be encompassed by this raw statistic.
Our interview quickly disclosed how wrong I was, as I discovered that Lerman represents an experimental type we have all but grown accustomed to: her bio discloses a child violinist (the violin later replaced by viola) who took an unruly stance towards classical music, even though she would continue following that same classical trajectory for years to come.
Lerman discovered the need to express her “wild side”, as she terms it, but found herself in disagreement with the “taming circus animals” attitude of classical music didacticism. However, in front of me sat a calm, level-headed and thoughtful individual who seemed drawn more towards eastern philosophy in her spiritual practice and life trajectory. But I soon discover that this too, like my earlier assumption, is merely a diminution of who Ayelet Lerman really is.
Her career in art took Lerman through many forms of expression including installation, curation and currently even film studies, an art form Lerman has loved for many years and has finally felt ready to tackle. So indeed, Lerman has a true creative side where she commits to ideas, but when it comes to music, or specifically viola playing, she cannot pin the notion of composition onto her practice. She approaches viola playing as a means for the immediate release of ideas – a spontaneous activity, in which the viola is treated like an appendage of Lerman’s body and not an exterior tool. She never prepares these iterations, but rather simply has spontaneous conversations with herself or others – conversations in which she also recognizes her very distinct style of playing and voice, which she sums up simply: “I am very much Ayelet on the viola”!
Individualistic as she is in her stance and artistic voice, Lerman is still, and very much, a woman. And it is actually from this standpoint that we begin our interview, where I confront Lerman with questions regarding gender roles within experimental music, and at large. Lerman, who obviously dedicates much thought to these issues, retorts almost immediately: “It’s a question of language first and foremost”. To Lerman, the verbal and non-verbal language we practice in society at large, and in Israel in particular, is extremely masculine. It is a precursor to the very male-oriented thought processes we undergo as a society. According to her, this language is slowly changing, but it is a matter of time and probably hard work until the general populace gets used to a new type of language. Lerman is a modern feminist; hence her stance does not seek to obliterate manhood, but simply to allow a separate narrative. However, like a true thinker who does not shy away from inconvenient truths, Lerman immediately plays devil’s advocate and recognizes for us a “female state” in art: less individualistic, and more prone towards collaboration. Questioning the place of women in experimentalism brings Lerman and myself back to the notion of deconstructing the “language” of society at large, and to the microcosm of this idea infiltrating experimental practices as well. But, and this is a huge but, Lerman also recognizes clear feminine traits within artistic creation. For instance, Lerman speaks of her attitude towards improv, which she claims is “a woman’s attitude towards improv”. This attitude consists of noticing the minute detail, and more so – treating this minute detail, this at-times peripheral material, with love – in short: attaching oneself through love. This is a concept, feminine or not, that carries Lerman almost seamlessly into her day-to-day life, where she practices the aforementioned notions through meditation.
In meditation, Lerman tells us, one allows herself to be an empty vessel. And it is from within this same stance that Lerman would like to approach improvisation.
Indeed, like many of our past guests, Lerman too recognizes the toll this might take on the audience, or at the very least the similar meditational mode, or awareness, it might require of them. But regardless of what exactly it requires of the audience, it no doubt requires it in the form of some “work”, and thus immediately sets this practice apart from most classical music, or indeed most music out there. However, according to Lerman, behind this “work” lies a hidden meaning, which is the reasoning at the base of the entire practice. Lerman clarifies – this is not a search for a-priori meaning, but rather the creation of a “state of being”, which in itself creates meaning, albeit subjective. The improv session can work or not, it can be hailed or booed, it can create wonderful moving sounds or horrific noises; regardless, if an alternative state of things was introduced, this in itself is the goal. This conclusion takes Lerman full circle and back to a possible conciliation with classical music, as even in fixed forms (through-composed music, films, etc.) there is improv: “A film might be fully scripted, but when it is shot, there is usually more improv employed during the scenes than adherence to the written script.” And this example can be transposed to almost any fixed-format art form: there is always a commingling of strict materials vis-à-vis improvisation. And so, through a microscopic view, Lerman unfolds a supposed clarity as bustling with underlying chaos. And this, of course, closes the circle opened when discussing questions of gender and language. Here too, Lerman exemplifies the holism of her stance towards life and music, and more so, how effortlessly it comes to her.
The local experimental and improv scene opens up a fascinating discussion that takes us through topics that in many ways summarise Lerman’s approach. Her point of departure is the topic of funding. Lerman recognizes what many of us have – namely, the fact that arts funding is almost non-existent in Israel. The usual stance of experimenters is that of individualistic renegades who put an emphasis on the individual persona, which raises the question of whether this stance is not aided by the aforementioned lack of funding. In fact, does it not create a default stance that sets the experimenter opposite the “classical” one? Israel, continues Lerman, is a society of conformists – this can be felt in every aspect of society, and is especially felt by women. Lerman recognizes in herself the innate seed of antagonism, which in the face of this conformity can sometimes be expressed with rage in her music. It is as if she were saying: “You all want to conform to the same ideas? Then I will present you with ideas that you simply don’t understand”. It is an antagonistic approach, teeming with artistic negativism, and at least to some extent it exemplifies to Lerman why it is so hard for lay audiences to listen to experimental music at large and in Israel in particular. It takes quite a knowing audience to be able to treat antagonism with tolerance or respect, not to mention love. “In a society where the nature of discussion (even within the family unit) is so violent, it is not at all surprising to find an active underground”, says Lerman. This underground immediately acts as a refuge for all those members of society who lack some vital characteristic allowing them to express their voice within “normative” societal terms. And the expressions of this on the experimental stage can be wide and varied: perhaps one person proposes a language devoid of rage, perhaps another presents a language with exaggerated rage.
But the commonality is that all of these people are “allowed” to simply be for a while, and express a voice that society at large does not yet know how to hear, or understand. Lerman continues and claims that in a society where an artist could find her or himself silenced by the government and authorities (and thus legitimise a media and public witch hunt with outcomes unknown), it is not surprising to see experimental artists express their voice with more passion, and gut felt works.
As ever, Lerman’s holistic approach manages to make sense of what seems like a blunder, and leads her to ask whether it is not the experimental and underground artist’s task to help society find its boundaries. And indeed – isn’t an experimental artist, who finds herself hounded by police and media due to an artistic expression, an important precursor allowing society a glimpse at its future? An immense role, fulfilling an important, somewhat thankless, service to society. This, concludes Lerman, is always very easy to forget, as the performances themselves are usually for small audiences, and more often than not garner ambivalent and tepid responses. But the role is the same regardless, and at least for Lerman it manages to create a vibrant contrast to the ideas of Zionism. Zionism’s main tenet recognizes a new nation with a new culture claiming a supposed wilderness. As romantic as this ideal seems, it is simply not true! Israel as a nation is based on a series of subsets of immigrants from different cultures – each with their own language or dialect, foods and cultural affiliations. The invented culture of the new Israel perhaps represented an ethos of its time, but it in no way represents a majority of the country’s voices today (nor did it ever), and these cracks are slowly starting to show. The main crack has, of course, to do with this supposed wilderness, which in time would become the Israeli-Palestinian conflict, and a contentious point for those still claiming that the country was indeed a wilderness when settled. Against the backdrop of this supposed void bustle several cultures fighting for their existence, not accepting the hegemony, and asking whether things really are the way they are presented. The experimental scene, claims Lerman, is in itself antagonistic to the idea of a void, any form of stasis, or non-organic culture.
Therefore, it is not surprising to discover that for Lerman, experimentalism is, in fact, an act of deconstruction. What is surprising is a fleeting recognition that this practice, negating any act of stasis, is in itself a static act. This state of being might allow a negativist expression, yet still, as an act of being it is always one and the same. With this realisation, and after having talked about western vs. eastern culture, I suddenly realise that neither of these cultures can encompass experimentalism fully. It is true that western culture puts the individual at the forefront, but this individuality must conform to norms and hence cannot accept the renegade approach. Eastern culture, on the other hand, begs to obliterate the individual altogether and thus creates a form of existence that does not require expression; again, nothing the renegade approach can dwell within. And so, bound to no state, culture, border or ism, experimentalism is likened to an island that, by default, is doomed to cultural loneliness. It brings to mind a beautiful story that Lerman chose to end our broadcast with, taking us back to her days as a classical viola student in Bologna. As she was setting herself up in the city, Lerman found herself in circumstances requiring her to sleep in the streets for a few nights. Lerman describes how the vibrant city, its streets teeming with young students, suddenly became the city of street-dwellers. Her main recollection is of the many unique characters she met during those nights, predominantly men – all of whom reminded her very much of the experimental type she has by now grown accustomed to: “They were like a bunch of weirdos at the edge of society, who couldn’t find an outlet for self-expression anywhere else but there”.
“Portrait”, with the first piece from Ayelet Lerman’s new disc, “7 STEPS”
“Chorus of the Women of Dimona”
String works and acoustic installations:
Visual finale: “Pletat Nefesh” (“Slip of the Soul”)
Installation on Jaffa Street: “Spider Atlas”
Installation at Mamuta: “Toy”
Solo exhibition of installations at Mamuta: “Too Much Talk About Space”
Performance: “You May Peek”
0 notes
Audio
Ayelet Lerman - Experimental Israel
Deconstruction & Love or: Weirdos
Ayelet Lerman is one of the most active individuals within the Israeli improv scene. In the past two years I have seen her almost exclusively play viola in such improv session (whether in group or solo performances), making me assume that her entire practice can be encompassed by this raw statistic.
Our interview quickly disclosed how wrong I was, as I discovered that Lerman represents an experimental type we have almost but grown accustomed to: Her bio discloses a child violinist (later to be replaced with viola) who took an unruly stance towards classical music, and this although she will continue following that same classical trajectory for years to come. Lerman discovered the need to express her “wild side”, as she terms it, but found herself in disagreement with the “taming circus animals” attitude of classical music didacticism. However, in front of me sat a calm, level headed and thoughtful individual who seemed drawn more towards eastern philosophy in her spiritual practice and life trajectory. But I soon discover that this too, like my earlier assumption, is merely a diminution of who Ayelet Lerman really is.
Her career in art took Lerman through many forms of expression including installation, curation and currently even film studies, an art form Lerman has loved for many years and has finally felt ready to tackle. So indeed, Lerman has a true creative side where she commits to ideas, but when it comes to music, or specifically viola playing, she cannot pin the notion of composition onto her practice. She approaches viola playing as a means for immediate release of ideas – a spontaneous activity, in which the viola is treated like an appendage of Lerman’s body and not an exterior tool. She never prepares these iterations, rather simply has spontaneous conversations with herself or others, but conversations in which she also recognizes her very distinct style of playing and voice, which she simply sums up thus: “I am very much Ayelet on the viola”!
Individualistic as she is in her stance and artistic voice, Lerman is still, and very much a woman. And it is actually from this standpoint that we begin our interview, where I confront Lerman with questions regarding gender roles within experimental music, and at large. Lerman, who obviously dedicates much thought to these issues, retorts almost immediately: “It’s a question of language first and foremost”. To Lerman the verbal and non-verbal language we practice in society at large and in Israel in particular are extremely masculine. It is a precursor to the very male-oriented thought processes we undergo as a society. According to her, this language is slowly changing, but it’s a matter of time and probably hard work until the general populous will get used to a new type of language. Lerman is a modern feminist; hence her stance does not seek to obliterate manhood, but simply allow a separate narrative. However, like a true thinker that does not shy away from inconvenient truths, Lerman immediately plays devil’s advocate and recognizes for us a “female state” in art: less individualistic, and more prone towards collaboration. However, questioning the place of women in experimentalism brings Lerman and myself back to the notion of deconstructing the “language” of society at large and the microcosm of this idea infiltrating experimental practices as well. But, and this is a huge but, Lerman also recognizes clear feminine traits within artistic creation. For instance, Lerman speaks of her attitude towards improv, which she claims is “a woman’s attitude towards improv”. This attitude consists of noticing the minute detail, and more so – treat this minute detail, this peripheral material at times, with love – in short: to attach oneself through love. This is a concept, feminine or not, that throws Lerman almost seamlessly into her day-to-day life where she practices the aforementioned notions through meditation. 
In meditation, Lerman tells us, one allows herself to be an empty vessel. And it is from within this same stance that Lerman would like to approach improvisation.
Indeed, like many of our past guests, Lerman too recognizes the toll this might take of the audience, or at the very least require of them a similar meditational mode, or awareness. But regardless of what it exactly requires of the audience, it no doubt requires it in the form of some “work”, and thus immediately sets this practice apart from most classical music, or indeed most music out there. However, according to Lerman, behind this “work” lays some hidden meaning, which is the reasoning at the base of the entire practice. Lerman clarifies – this is not a search for a-priori meaning, but rather the creation of a “state of being”, which in itself creates meaning, albeit subjective. The improv session can work or not, it can be hailed or booed, it can create wonderful moving sounds or horrific noises; regardless, if an alternative state of things was introduced, this in itself is the goal. This conclusion takes Lerman full circle and back to a possible conciliation with classical music. As even in fixed forms (through-composed music, films, etc) there is improv: “A film might be fully scripted, but when it is shot, there is usually more improv employed during the scenes than adherence to the written script. And this example can be transposed to almost any fixed format art form: there is always a commingling of strict materials vis-a-vis improvisation. And so, through a microscopic view, Lerman unfolds a supposed clarity as bustling with underlying chaos. And this, of course, closes the circle opened when discussing questions of gender and language. Here too, Lerman exemplifies the holism of her stance to life and music, and more so, how effortlessly it comes to her.
The local experimental and improv scene open up a fascinating discussion that takes us through topics that in many ways summarise Lerman’s approach. Her point of departure is the topic of funding. Lerman recognizes that which many of us have –namely the fact that arts funding is almost non-existent in Israel. Noting that the usual stance of experimenters is that of individualistic renegades who put an emphasis on the individual persona, raises the question whether this stance is not aided by the aforementioned lack of funding? In fact, does it not create a default stance that sets the experimenter opposite the “classical” stance? Israel, continues Lerman, is a society of conformists – this can be felt in every aspect of society, and is especially felt to women. Lerman recognizes in herself the innate seed of antagonism, which in the face of this aforementioned conformity can sometimes be expressed with rage in her music. It’s as if she was saying: “you all want to conform to the same ideas, then I will present you with ideas that you simply don’t understand”. It is an antagonistic approach, teeming with artistic negativism, and at least to some extent exemplifies to Lerman why it is so hard for lay-audiences to listen to experimental music at large and in Israel in particular. It takes quite a knowing audience to be able to treat antagonism with tolerance or respect, not to mention love. “In a society where the nature of discussion (even within the family unit) is so violent, it is not at all surprising to find an active underground”, says Lerman. This underground immediately acts as a refuge for all of those proponents of society who are lacking some vital characteristics allowing them to express their voice within “normative” societal terms. And the expressions of this on the experimental stage can be wide and varied: perhaps one person proposes a language devoid of rage, perhaps another presents a language with exaggerated rage. 
But the commonality is that all of these people are “allowed” to simply be for a while, and to express a voice that society at large does not yet know how to hear or understand. Lerman continues: in a society where an artist could find herself or himself silenced by the government and authorities (thus legitimising a media and public witch hunt with unknown outcomes), it is not surprising to see experimental artists express their voice with more passion and more gut-felt works.
As ever, Lerman’s holistic approach manages to make sense of what seems like a blunder, and leads her to ask whether it is not the experimental and underground artist’s task to help society find its boundaries. And indeed, isn’t an experimental artist who finds herself hounded by police and media over an artistic expression an important precursor, allowing society a glimpse of its future? It is an immense role, fulfilling an important, somewhat thankless, service to society. This, concludes Lerman, is always very easy to forget, as the performances themselves are usually for small audiences, and more often than not garner ambivalent and tepid responses. But the role remains the same regardless, and at least for Lerman it creates a vibrant contrast to the ideas of Zionism. Zionism’s main tenet envisions a new nation with a new culture claiming a supposed wilderness. As romantic as this ideal seems, it is simply not true! Israel as a nation is built on a series of subsets of immigrants from different cultures, each with its own language or dialect, foods and cultural affiliations. The culture invented for Israel perhaps represented an ethos of its time, but it in no way represents a majority of the country’s voices today (nor did it ever), and these cracks are slowly starting to show. The main crack, of course, has to do with this supposed wilderness, which in time would become the Israeli-Palestinian conflict, and a contentious point for those still claiming that the country was indeed a wilderness when settled. Against the backdrop of this supposed void, several cultures bustle, fighting for their existence, refusing the hegemony, and asking whether things really are the way they are presented. The experimental scene, claims Lerman, is in itself antagonistic to the idea of a void, to any form of stasis, or to non-organic culture.
Therefore, it is not surprising to discover that for Lerman, experimentalism is, in fact, an act of deconstruction. What is surprising is a fleeting recognition that this practice, which negates any act of stasis, is in itself a static act. This state of being might allow a negativist expression, yet still, as an act of being, it is always one and the same. With this realisation, and after having talked about western versus eastern culture, I suddenly realise that neither of these cultures can fully encompass experimentalism. True, western culture puts the individual at the forefront, but this individuality must conform to norms and hence cannot accept the renegade approach. Eastern culture, on the other hand, seeks to obliterate the individual altogether and thus creates a form of existence that does not require expression; again, nothing the renegade approach can dwell within. And so, bound to no state, culture, border or ism, experimentalism is like an island doomed, by default, to cultural loneliness. It brings to mind a beautiful story with which Lerman chose to end our broadcast, taking us back to her days as a classical viola student in Bologna. As she was setting herself up in the city, Lerman found herself in circumstances that required her to sleep in the streets for a few nights. She describes how the vibrant city, its streets teeming with young students, suddenly became a city of street-dwellers. Her main recollection is of the many unique characters she met during those nights, predominantly men, all of whom reminded her very much of the experimental type she has by now grown accustomed to: “They were like a bunch of weirdos at the edge of society, who couldn’t find an outlet for self-expression anywhere else but there”.
“Portrait”, with the first track from Ayelet Lerman’s new disc, “7 STEPS”
“Dimona Women’s Chorus”
String works and acoustic installations:
Visual finale: “Pletat Nefesh” (“A Slip of the Soul”)
Installation on Jaffa Street: “Spider Atlas”
Installation at Maamuta: “Toy”
Solo exhibition of installations at Maamuta: “Too Much Talk About Space”
Performance: “Peeking Permitted”