#the dangers of deepfake technology and how to spot them
reallytoosublime · 7 months
[Embedded YouTube video]
This video is all about the dangers of deepfake technology. In short, deepfake technology is a type of AI that can generate realistic but fake images of people. It has the potential to be used for a wide variety of nefarious purposes, from pornography to political manipulation.
Deepfake technology has emerged as a significant concern in the digital age, raising alarm about its potential dangers and the need for effective detection methods. Deepfakes refer to manipulated or synthesized media content, such as images, videos, or audio recordings, that convincingly replicate real people saying or doing things they never did. While deepfakes can have legitimate applications in entertainment and creative fields, their malicious use poses serious threats to individuals, organizations, and society as a whole.
The dangers of deepfakes are still not widely understood, and that in itself is a threat. There is no guarantee that what you see online is real, and deepfakes have steadily narrowed the gap between fake and genuine content. While the technology can power innovative entertainment projects, it is also being heavily misused by cybercriminals, and without proper oversight from law enforcement it is likely to get out of hand quickly.
Deepfakes can be used to spread false information, which can have severe consequences for public opinion, political discourse, and trust in institutions. A realistic deepfake video of a public figure could be used to disseminate fabricated statements or actions, leading to confusion and the potential for societal unrest.
Cybercriminals can exploit deepfake technology for financial gain. By impersonating someone's voice or face, scammers could trick individuals into divulging sensitive information, making fraudulent transactions, or even manipulating people into thinking they are communicating with a trusted source.
Deepfakes have the potential to disrupt democratic processes by distorting the truth during elections or important political events. Fake videos of candidates making controversial statements could sway public opinion or incite conflict.
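The title promises ways to spot deepfakes, and one frequently cited early cue is unnatural blinking. As a toy illustration only, not a reliable detector, the sketch below uses OpenCV's stock Haar cascades to estimate how often open eyes are visible across a clip's face-bearing frames; the file name and the 0.98 cutoff are hypothetical assumptions.

```python
# Rough blink-rate heuristic for flagging suspicious video (illustrative only).
# Assumes OpenCV is installed; "clip.mp4" and the 0.98 cutoff are hypothetical.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def open_eye_ratio(video_path: str) -> float:
    """Fraction of face-bearing frames in which at least one open eye is detected."""
    cap = cv2.VideoCapture(video_path)
    face_frames, open_eye_frames = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        if len(faces) == 0:
            continue
        face_frames += 1
        x, y, w, h = faces[0]
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) > 0:
            open_eye_frames += 1
    cap.release()
    return open_eye_frames / max(face_frames, 1)

ratio = open_eye_ratio("clip.mp4")
# Early deepfakes often blinked far less than real footage, so eyes appearing
# "open" in nearly every frame is one weak signal worth a closer look.
print(f"open-eye ratio: {ratio:.2f}",
      "-> suspicious" if ratio > 0.98 else "-> unremarkable")
```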
The Dangers of Deepfake Technology and How to Spot Them
ailtrahq · 1 year
The crisis of online fraud, and in particular online scams that make use of social media platforms and influencers, is out of control. Federal regulators in Washington are ramping up their efforts to raise awareness. On October 4, the Commodity Futures Trading Commission (CFTC) will host a virtual event, “Technology and Fraud: Stopping Scams in a Digital World,” as part of World Investor Week, October 2 to 9.

The CFTC Calls Attention to Investment Scams Involving Crypto

The first panel of the event, “Exploring Effective Social Media Investment Scam Interventions,” addresses one of the most common problems of the online experience of our time. According to the CFTC’s announcement, it aims to explain why so many people fall victim to scams on social media; more people, the CFTC claims, than fall prey to any other type of fraud. Education about such a pervasive ill clearly meets an acute need and could not come a moment too soon.

Last year, crypto investment scams reached record levels, especially those in which a stranger approached someone online and persuaded the victim to invest via a website that looked legitimate. Such scams often involve “pig butchering,” in which the bad actors cultivate their victims’ trust over time, often under the bogus pretense of friendship or, more often, romance. Last year alone, crypto investment scams cost Americans $2.57 billion, according to a Carlson Law report; the total lost to investment scams of all kinds came to $3.82 billion. And in Canada, 35% of people who own and trade crypto have fallen victim to scams, according to the findings of researchers at Toronto Metropolitan University.

[Chart: The CFTC aims to train investors to spot investment scams, particularly those where bad actors use social media to lure the unwitting. Source: Statista]

Facing the Evolving AI Challenge

The CFTC will also host a segment on the implications of AI for investing, “Is AI for Investors Road Ready?” The session will take a deep dive into investors’ grasp of the extent to which AI may assume control of daily trading activities, along with the dangers of AI deepfakes and other scams. Jorge Herrada, director of the CFTC’s Office of Technology Innovation, described the ascendance of AI in the investment space as fraught with “promise and pitfalls.” “Like any fast-moving technology, education is critical to understand the capabilities of AI, when it is appropriate to use, and how to avoid being scammed,” he said.

The CFTC may believe it is acting in the best interest of the public and protecting financial market stability, but a growing chorus of voices questions the CFTC’s zeal. Many believe the regulator is far overstepping its legal mandate in targeting the activities of decentralized finance (DeFi) firms. Coinbase CEO Brian Armstrong has gone so far as to urge DeFi players not to settle with the CFTC but to challenge it in court and to question its jurisdictional reach under the Commodity Exchange Act.
orbitariums · 4 years
𝐠𝐢𝐫𝐥𝐬 𝐨𝐧 𝐟𝐢𝐥𝐦 | 𝐬𝐡𝐨𝐰 𝐲𝐨𝐮𝐫𝐬𝐞𝐥𝐟 | 𝐬𝐭𝐞𝐯𝐞 𝐫𝐨𝐠𝐞𝐫𝐬 (𝟔)
note: it’s been a while y’all!!! i hope you’re still here & i hope you’ve been taking the time to educate yourself on everything that’s going on around the world!
this chapter probably took me the longest to write out of any chapter bc i was trying to get all the details just right. i felt like maybe something was missing, and i edited it all this week to get it how i wanted. now i feel more secure!!
i hope y’all enjoy it, i’m so excited to see and show y’all what’s up next for moonrose/yn and steve. leave your thoughts !! let’s gooo
playlist
word count: 8.1k
warnings: none really? other than mentions of sex work and the age gap btwn steve and moonrose. but no smut this time! this starts off where chapter five ended.
𝐠𝐢𝐫𝐥𝐬 𝐨𝐧 𝐟𝐢𝐥𝐦 | 𝐩𝐚𝐫𝐭 𝐬𝐢𝐱: 𝐬𝐡𝐨𝐰 𝐲𝐨𝐮𝐫𝐬𝐞𝐥𝐟 | 𝐬𝐭𝐞𝐯𝐞 𝐫𝐨𝐠𝐞𝐫𝐬
The sight in front of you when that camera turned on made you sit right up, your eyes nearly popping out of your head. There was no fucking way. It was some sort of sick joke. Because the man behind the camera simply could not be who you thought it was.    
     “What. The. Fuck.”
| | |
     "What the fuck?" you continued, less statically now that the initial shock was gone.
But there was no way in hell that the man you were talking to, had been talking to for over a month, was Captain America. He didn't have to be in his full attire; the face of Steve Rogers was recognizable anywhere.
     And then it hit you, a flood of realizations. Of course he had used a fake name at first. It should've been obvious when he changed his name from Grant Roberts to Steve - short for Steve Rogers. It should've been obvious when he told you that he was a "scientist", such a vague term to use for the many branches of science that existed.
     It should've been obvious why he wouldn't turn his camera on. And yet, it shouldn't have, because this wasn't something that you could even begin to suspect. Customers had their reasons for turning off their cameras - one of those reasons was not usually because they were secretly an Avenger.
     But still, it didn't feel quite real. The logical, pragmatic side of you calmly figured that this was all just somebody's idea of a sick joke, that maybe this person behind the screen had set you up all along just for this big reveal, as some sort of way to deceive you. In fact, the logical side of you wanted to write this all off as a pathetic joke.
     It didn't make sense. And you desperately needed it to. You needed answers, now. If this were some kind of highly calculated joke for whatever reason, you couldn't even find one, you wanted to know. And if it were a joke, you wanted to know why you. You wanted to know how much farther this person was willing to go.
    And if it weren't a joke, if you really were talking to Steve Rogers... you wanted to deny it, but something in you urged that this was real, as unlikely as it seemed. The feeling that he was who you had truly been talking to felt as true as the connection you had created with Steve.
Still, that didn't mean you weren't shook the fuck up.
     "What the fuck!" You repeated, standing up and bringing your laptop with you as you migrated into your office, turning on the lights and then sitting back down at your office table. This felt like official business. You wanted to really be able to take it in.
     And Steve? Well, he was just waiting for you to finish reacting, all the while his eyes slowly watching you on the screen, his chest filling up with worry. He shouldn't have, he really shouldn't have, even if it were in the name of bettering himself and fixing things with you. But he knew he couldn't panic again, couldn't retract out of fear. He would face this, even if it meant having to endure a reaction from you.
     The worst that could happen was you could expose him and it would lead to some unnecessarily huge scandal. Even worse, it could turn out that you were not to be trusted, and that somehow this would take a turn for the dangerous. But he had known that all along. He had thought about it long and hard before he made the decision to show himself, and he still did. So there was no turning back now.
     "Are you kidding me?" you barked, not out of anger but out of pure confusion - you felt like you had to assume this sort of accusatory position to defend yourself, whether or not he was real. And if he was, you had some choice words to say.
     "Moonrose..." Steve said, the first time he was speaking ever since he turned on the camera.
     You focused in on the way his lips moved and his careful expression, the way he spoke slowly and calmly, like he was approaching a scared animal who was ready to attack. All of his words would be calculated, you could tell in the way his eyebrows came together, the way he watched your reaction as if he were concerned for your own well-being. And to admit it, he was to some extent. He understood your freak out, but he was trying to be calm to avoid a meltdown that would ruin the both of you, depending on what you decided to do in the midst of said meltdown.
     "Okay. You're talking," you noted, blinking. Maybe if he hadn't spoken you could slam your laptop shut and brush this off as some sort of glitch in the matrix. You still had questions. You were a smart woman. You weren't about to be played. "How do I know you aren't a deepfake?"
Steve furrowed his brows. Even after all his years in this business, some of this new technology was still hard for him to keep up with,
     "What?"
You ignored his confusion and continued on. Your request was more like a command; Steve felt like he was back in the army again.
     "Blink," you commanded.
      You wanted to see if you could spot any inconsistencies in him, just to cross out the idea that the person in front of you could be generated by an algorithm. Was it desperate? Maybe, but not unnecessary. If you were talking to Steve Rogers you bet your ass you would make sure he was real.
     Steve wanted to ask why, but he figured he was best not going against your wishes in any way. So he blinked, and to your wondrous disdain, it seemed legitimate. You felt some sort of marvelous sinking feeling in your chest. Like this - your career, your customer interactions, your life - was realer than you had signed up for. Everywhere you turned these days, something was surprising you. But what made this sinking feeling so brilliant was the fact that you were seeing Steve. And you weren't just seeing anyone, you were seeing Steve Rogers. That was a big deal.
     A quick bark of laughter escaped from your lips - first because of the fact that you thought you could prove whether or not what you were seeing on the screen was real, when all signs pointed to yes, and second because you were in genuine shock, processing what had happened to you. But you were laughing because it was almost funny. Lately your life had thrown you so many curveballs. It was almost unbelievably hilarious that of all the people in the world, Steve would be this Steve in particular. The universe would broaden those slim chances just for you.
     And for what? You wondered. Maybe it was a test to see if you would break down again. But really, you had no reason to. You weren't necessarily upset over this revelation, in fact it made the discrepancies of your relationship with Steve make a whole lot more sense. It made the blow less heavy. So you weren't upset. You weren't on the verge of tears or a brand new breakdown that would take you weeks to recover from - you were just sitting motionless in a soup of disbelief.
It was kind of cool, though. This whole time you were talking to Steve Rogers, the Captain America. It was also worrisome, because you thought there might have been some reason in particular that he chose you, although you couldn't figure out why for the life of you. The most illegal thing you'd done in the past few years was neglect a couple of parking tickets and have a few underage drinks.
     "How do I know I'm really talking to Captain America?" you asked, narrowing your eyes at him through the screen.
Steve sighed as if resigning and reached over beside him, where from behind his bed frame he pulled out the shield, showing it on camera like it was nothing. You squinted and folded your arms, observing it up close.
     "That could be anything," you raised a brow, and Steve sighed again, this time closing his eyes.
     "It's me. Really. I don't know how else I could prove it to you."
When you heard his voice again, the pure intention in his voice, hoping that you'd believe him, it all seemed to click. And any shred of denial you had left was gone, defeated by acceptance.
     "Jesus," you breathed out, looking at him in silence for a second and then shaking your head, confused. "You know so many girls on here would kill to talk to you? I mean, seriously. I have a friend who dresses up in Avengers cosplay every night and uses a dildo the same colors as your shield. So why me? You know there's nothing illegal about what I'm doing, right? I can't get in trouble for this."
You realized you were sort of rambling and not making any sense, but this was one of those times where you let yourself. What was the appropriate response to this? The answer - there was none. Also, you wanted to make sure you were in the clear. Though you doubted Captain America would be prowling against sex workers, you had to make your innocence obvious in general. It was like you hadn't considered that maybe Steve, like anyone else, had needs, and that you were just helping him fulfill those needs... until it spiraled into, well, this.
     "Listen," Steve started.
Even he hardly knew what to say. For all the time he had spent thinking about this decision, he was starting to feel that he wasn't really thinking at all. He didn't know how he would de-escalate the situation, and he didn't know what exactly he would say. He just figured it would provide a sense of relief for him and hopefully for you too, as well as resolve any discrepancies in why he stopped talking to you.
But now he felt like maybe this was just his selfish excuse for the fact that he wanted more, and that he was willing to show his face because of that fact. Did he feel better? Slightly. It felt like a weight had been lifted off of his chest now that you were talking again, now that he was seeing you again. The feeling was so odd, a kind of weird lightheartedness that didn't feel like it belonged.
He finally chose his words.
     "I'm sorry. For everything. I... I don't even know what to say right now."
     "You're telling me," you responded with raised eyebrows.
The situation may have been weird and more than either of you could deal with, but it was nice talking to him. The face was nothing like you had been expecting at all. But it was still Steve... right?
     "Really, though. I want to apologize. And I don't have to show my face to do that, but I feel like I need to. I'm sorry for how things ended last time. I was afraid of the things that could happen if I let myself open up to you. I was trying to be extremely careful, and I let that take over me. It wasn't fair to you to detach myself the way I did," Steve explained slowly, and you listened, taking in each and every word.
     It wasn't hard to understand. It made sense why he wouldn't want to open up to someone on the internet, being who he was. Still, you wished he hadn't been so sudden about it. You'd wished you could've at least understood him a little, so it didn't end out like that.
He continued,
     "And I'm sorry for enabling any of this at all, even though I don't regret it any more. It's not that I don't think you'd be able to handle that kind of communication or that I found you immature. It's that I think I wouldn't be able to handle that kind of communication. And... and I was beating myself up for letting things go so far the way they did instead of just realizing... just realizing that..."
     He swallowed down his words a bit, but you were curious, leaning your head in as if to prompt him. All his words were reassuring, a genuine apology. Like he was making up for his own failure, explaining his own faults. It made you feel a whole lot less naive and it cleared up so much for you, after things were left so blurry. And you were surprised he was even giving you an explanation. Why did he come back, if he were so busy and had weaned himself of you already? Why was someone of his importance being so caring when he didn't have to be at all?
You wanted to know.
     "Realizing what?" you asked.
     "That what we had wasn't something I could just brush off because I was scared. That just because a strong connection like that intimidates me doesn't mean trying to pretend it doesn't exist will help. It's not like me, honest. I value loyalty above all else. I consider you a friend, and I wasn't loyal to you. And I'm sorry."
The thoughts in your brain were running a mile a minute. If anything, you weren't expecting this to begin with. But an apology? You weren't expecting that at all, from the Steve you once knew, or from Captain America. And now that you could see who he was, this connection you had felt like something you were more willing to lean into. You were more willing to be honest about the fact that you liked each other, and not just for the purpose of your work.
    You had so many customers who considered you a friend, but not in this way. Not in the genuine way, where even though you serviced him, you weren't being nice because of that. If he had been just some rando, you might have been able to brush his words off easier. And you wouldn't even be entertaining the idea of a conversation that was this exclusive, this revealing. Had he been anyone else, this wouldn't be happening. But you'd seen who he was, on the news and in the public world, and through a screen. It just made it easier to want to trust him and his intentions.
    And right now, it sounded like Steve was genuinely sorry, and that he felt like he had let down a friend. And you were both surprised and ecstatic that he saw you in that way. It wasn't every day a public figure like Steve Rogers wanted to talk to you. It felt like speaking with an old friend, so mundane and nonchalant, yet so out of the norm.
    Yet, even though you were happy to be talking with him, you couldn't help but criticize his methods. You thought of how he had thrown you completely off guard while you were in this carefree disposition, but you didn't forget that it was your reaction that mattered.
    Your reaction would set the feel for the entire week. You were proud of yourself for not allowing the kind of reaction that would send you back to the place you had spent time getting out of. You were glad that this revelation didn't ruin the good mood that had been curated over the past few hours during the night out. He had just come out of the blue, and was giving some hearty apology that you weren't even prepared for in the slightest. Uttering your next words, you shook your head slowly as you expressed your feelings of disappointment in him.
    "I'm glad you've come to your senses. And, I can understand where you're coming from. But I can't help but think that this isn't like you, or it shouldn't be. I mean, you're kind of a big deal. You should know how to handle your feelings instead of just leaving me out of the blue and then coming back to reveal that you're... well, you! It's really a lot to take in, I would hope you're not missing that."
Steve nodded, glad that you were expressing yourself. It didn't upset him that you were calling him out- if anything, he deserved it, and he liked someone that could point out his own wrongdoings, although that wasn't because he wasn't responsible for himself. He liked a woman who could call him out, but he didn't need a woman who could stay on top of him, because he was adult enough to do it himself. There was a difference, and distinct levels of maturity that came with that difference.
     He had been so engulfed in his own shortcomings and anxieties, and that wasn't fair to you, nor was coming back and doing this big reveal, being as prominent a person as he was. To be fair though, he hadn't really been thinking logically in the moments before he showed you who he was. But you had made all the correct points - he was supposed to be the smart adult in the situation and communicate effectively - you understood why that hadn't happened, but you just wanted to bring it to the table. You were vocal about your feelings. You didn't just make excuses for people.
    "Yeah. I know. It's silly of me, I was thinking of myself and stressing over the details. So, I'm sorry, I know that wasn't very heroic of me. I feel a little selfish, because I don't want knowing who I am to put any added stress on you," Steve became slightly sheepish, apologizing for the fact that he was who he was, and that he was intentionally revealing himself to you despite his high importance.
     You had settled into the reality of the situation, and ever since you took the time for yourself to heal, this sort of rolled off your back. Another conflict down, just like that. You were ready to take on more surprises, more shock. Maybe a month ago something like this would've blown you out of the water and put you on edge, especially if it were in tandem with the stressful things you were already going through.
But now, you were mellowed out. And you were thankful for the fact that you had been on a night out before this, the drinks in your system and the fun you had had definitely took the edge off, made you feel more in the moment without the anxieties of the present.
So you almost laughed it off, genuinely chuckling.
    "You don't sound cocky at all," you joked sarcastically, and Steve made a playful face.
    "What's that mean?"
You did your best impression of him, putting on airs and sitting up high and mighty, imitating his voice,
     "I'm sorry that I'm Steve Rogers, defender of justice. Here's my shield, no big deal. Next caller."
Steve chuckled, lowering his head,
    "Oh, is that how I sound?"
You shook your head slowly and playfully,
    "Without a doubt. And by the way, the fact that you sleep next to your shield? Classic."
    "Not next to it, it's just beside the bed frame," Steve defended himself, playing along with the joke.
    "Same thing," you teased, with a dismissive wave of your hand.
    A beat went by, silence. The two of you sitting in the acknowledgment of what you had, staring at the other on the screen. Sated, but not elated by what had just happened. As for Steve, he felt much more relaxed. Like he was in a better place, now that he had explained himself to the only person who it made a difference for. Now that he had finally broken past that wall of fear. And he wasn't thinking about the future, wasn't worrying his head off about the possible consequences of what he was doing, though there were so many that his brain could think up. Instead he was just sitting in his good feeling, floating in it.
     He was being honest with himself, with no fear of what that meant. So many times he pushed back opportunities like this because of his own fear, or because he convinced himself he was too busy to pursue something like this. And though it wasn't like you two were dating or in an official relationship, there was something between you. It was clear that you liked each other, more than just in the way a customer would. And instead of running from that spark, Steve was letting it shine. Whether it turned out to be something more or not wasn't what you two were worried about. It was just sitting in the moment. Still, the silence and the attraction in your gazes made you wonder where to go from here.
     Would Steve be continuing to attend your shows, and carry on like normal? Would he want to talk more, now that he had gotten rid of this fear he was telling you about? Your mind wasn't going too far on that front - you weren't thinking of technical things, like what this would mean when it came to your relationship with Steve; that seemed outlandish to be talking about. You weren't pursuing anything with him and he didn't seem to be pursuing anything with you. But you wanted to know: what the hell would happen after this?
    "So... what now?" you asked calmly - because you weren't worried about what was next, you just wondered.
Steve took in a deep breath, slowly shrugging his shoulders up and down. He raised his brows,
    "I dunno. What's important to me is what's important to you. What are you hoping for?"
    "I'm not hoping for anything," you replied, and to hear yourself say those words was such a relief. You were done wanting anything from men, or anyone, or hoping that they would follow through with your desires. Your hopes for how other people would act always spiraled into desperate measures, and that wasn't you any more. You continued, "Also, you're the one who wanted to clear things up. I think what's next is your decision."
    "You're right," Steve nodded. That was fair enough. He didn't want to put any of this on you. To him, it was a matter of how this relationship would progress. He wanted to know you on a base level, not just through this. He wanted to know you the same way he knew a normal friend. He saw you as that, why not make things that way? But for now, it was best to just take things slow. Not out of fear, but for the sake of reality. "I guess I just want to get to know you on a real level. Not as a customer, but as a friend."
    Steve's words struck a realization in you. Not only were you talking to Steve, but you had also performed for him. He knew your o-face. And that wasn't something that embarrassed you, because it was your job, and you were very comfortable with your sexuality because of your job. But knowing now that you were performing for Captain America? It felt like the stakes were just a bit higher, and you always put on a good show.
     And it was just a tad bit flustering to know that the man you gave your all to sexually, the man whose groans and moans turned you on to no end, the man who needed you to please him, was Steve Rogers. Unbelievable, yet the proof was in front of you. You'd be lying if you said it didn't make you feel powerful to know that you had been the reason that a whole Avenger was pleasuring himself almost every night.
    "Huh. So do I still show you my ass?" you asked, masking your flustered state with humor.
A laugh tumbled out of Steve's lips, and you could see him turning a shade of pink, see his face change as he got what he could only describe as flashbacks. You smirked at the impact you seemed to have on him. He cleared his throat so his voice wouldn't break as he continued, smiling shyly at the camera,
     "Uh," he started, realizing he hadn't quite formulated a response. He chuckled nervously. "If-if you want to."
You nearly snorted, feeling especially devious now,
    "Wow, Steve, I'm shocked. After all this time, I still make you nervous?"
You kind of felt like the shit. Who else could say they successfully got Steve Rogers off, without even touching him? You were the only one who could make him feel this way, and he didn't have to say it out loud, he already knew it. There was a reason he chose you specifically. The minute he saw you, he was drawn to you. That hadn't changed.
He chuckled at your question,
    "I'm not such a tough guy when it comes to these things."
    "Oh, but that can change. Trust me, I've seen it," you commented, and you both knew what you were talking about - the time when you had taught Steve how to be more dominant with you. That was probably one of your more intense sessions with him.
    "Really though, I do want to get to know you better. You're a friend to me. I want it to feel like a friendship. If you're comfortable with that."
    "I think so," you responded. Again, it was only because it was him that you were agreeing to this. But you didn't quite know how to make that happen, because it never had before. "I guess it's just a question of how to be friends outside of this."
Steve shrugged,
    "We could talk outside of this. If you're okay with doing that."
You raised a brow, sort of surprised at that suggestion,
    "Are you? I mean, what are you thinking?"
Steve felt secure enough that he wanted to be able to talk to you outside of this site, as long as he wasn't being reckless with his communication. He didn't want there to be some way for important information to leak if he started talking to you on his phone, or give up too much personal information of his own. But he knew he wanted to talk to you outside of just this site, and hear your voice, too.
    "There's gotta be some way we can talk more frequently. I'm not really a texting guy, but I have... several phones. Some are for business and some are for-"
     "Talking to cam girls online," you filled in the blank.
    "Sure. Except you're my friend. Who happens to be a sex worker."
You laughed, grinning at him, a warm feeling blossoming in your chest at the fragile correction,
    "Got it. I mean, I have a phone number. I'm sure one of your techie friends can find a way for us to text without revealing too much personal information, if that's what you're thinking about. But hey, you know I'm not gonna like... I don't know, try to rob the Avengers."
Steve nodded understandingly.
    "No, I understand that. It's just, I don't know, a precaution thing. A job thing. It's less personal and more just, professional."
     "Hm. Do you usually hide your number from friends?" you asked inquisitively, raising an eyebrow.
    "Sometimes," Steve said shortly, then sighing as he began to think of the circumstances. This friendship was different from one he ever had. It was so based on trust and making slow progress, within the boundaries you both had to set. "You make a good point. I'll think about it," Steve decided.
    "Here, we can compromise. I have two snapchats. One is a private snapchat, a special treat for loyal customers, and the other is my regular snapchat. The private is for nudes, the regular one is for... my life. You can see my boobs and my hiking trips. And, we can text on my regular snapchat, like friends would. But, just to be clear, you're... still a customer, right? You can be both a friend and a customer. Huh. Now that I'm saying that out loud, I realize that a lot of my friends have seen me naked."
Steve laughed, and you grinned just at the sight of his smile. It was nice to hear his voice, but it was more than enough to be able to see the face that went along with it. Maybe this was the start to your friendship.
    "I get that. And I'll always be a customer. But I think, maybe for a few days, it would be healthy for me to just see you as you. It would feel weird watching my... friend, you know," Steve couldn't even complete that sentence, and he wasn't quite sure how he could.
You did a lot of sexual stuff on camera, it wasn't just one thing. But it was how he felt. He wanted it to feel like a refresh. That didn't mean he didn't want to see you that way at all, but it was the old fashioned part of him that made him feel like he needed to see you as just you. He continued,
    "You know, not while you're performing as Moonrose."
    "Sure, I can appreciate that," you nodded. "But in the meantime, don't be a stranger on here. I actually like doing those things for you."
    "Oh, don't get me wrong. I'll definitely be back," Steve replied quickly - there wasn't a big enough old fashioned bone in his body to keep him from interacting with you the way the site was designed for. He needed you in that way, he knew that was undeniable. But first, a fresh start. "And the Snapchat thing sounds good. You do have to teach me how to use it, though."
     "Sure, Steve," you smiled. You felt some sort of excited pang in your chest, like this was the start of something new and good.
     Lately you'd been circulating in such good energy, and even though this came to you as a shock, the end result was so positive. You were glad to be returning to interacting with Steve, to be feeling the joyous depth of this connection that you had. You were glad that he was who he was, because it made it that much more fun to talk to him, it felt like some sort of special occasion. Because you knew Captain America, without knowing that you knew him. And now you were becoming friends with him, and it was almost normal because you had been talking to him for so long. It was hard for you to get a clear grasp on, it was so unexpected, so irregular. But it was exciting. A rush, and not in a toxic, thrilling way. It was a fulfilling and wholesome rush, one that made you feel full.
    "Oh, and by the way," Steve added, the thought just coming up in his mind. "What's your real name?"
You were practically beaming. Never in your life would you have expected to be telling this to a customer, to be becoming friends. But he was asking, and you were willing to respond. You wanted him to know you, just as he wanted to know you.
    "My name is Y/N," you said, like you were letting out a breath and taking in fresh air. You couldn't wipe the smile off your face when he heard your real name, how it sounded just right coming from your lips, and he decided he wanted to say it all the time.
     "Hi Y/N. I'm Steve. Nice to finally meet you."
| | |
    "What's got you giggling like a schoolgirl with a crush on teacher?" Sam inquired as he walked into the kitchen, catching sight of Steve leaning over the counter, his phone in his hand, an unlikely grin on his face, laughter coming from his normally pouty lips.
      Steve just looked up from his phone, trying to appear as serious as possible. Sam's observation wasn't totally off, after all. For the past week or so, you and Steve had been talking through Snapchat, a different means of communication like how you discussed. You taught Steve how to download it, which was an experience unique to you and you only.
    How many people could say they taught the strongest, smartest supersoldier out there how to download and create a Snapchat account? (All while you were teaching him how to use the app, he kept insisting "I'm not that old", to which you did nothing to reassure him that he wasn't). Anyways, Steve had been preoccupying himself with that as of late.
    During this time, the spring period of the year, the Avengers were a lot less busy, and had a lot more downtime on their hands. He spent that downtime on Snapchat with you, and with his team in real life. And it was safe to say, he was back up again. But not in the almost superficial, hyper-pleased way that he was when he first met you, the kind of happiness that was like a sugar high that crashed hard. No, this time he was happy, truly. There were no blurred lines between the two of you at the moment, only honesty, only truth.
    So when he watched your Snapchat stories on your regular snapchat and got to see your real life, he was always highly entertained. He truly wrapped himself into your hiking adventures, study sessions and student life, your daily acai bowls, all the outfits you were making, all the things you enjoyed. You weren't perfect, but he admired you so much. You were hardworking and adventurous. You made everything you did beautiful, and you only wanted beautiful energy around you - you exuded energy of that very same magnitude.
And plus, you were always sending him funny snaps - pictures of yourself with filters on them, updates about your day, or just posts you saw that you found funny. It was so good to be interacting on a humane, friend level. He could admit he got caught up sometimes, like in this very moment.
    He switched off his phone, putting it in the back pocket of his sweatpants, and offered a small smile to Sam, who was teasing him.
    "Hello to you too," Steve chuckled, shaking his head.
     He wouldn't answer that question though. Even though he was much more comfortable with you, he still didn't want his business revolving around you to circulate. But this time it wasn't out of fear, it was simply because he wanted to keep things private and be smart about it. He still knew that his team would probably be concerned if he was talking to you at this level, that he let it get this far, but he wasn't exactly worried about that. 
     He just didn't want to deal with the controversy if he told them about you. For now, this was just something for him. Not secretive, but not public, either. The shift was similar to your own healing transition - Steve was less robust and scared, less type A about the whole situation.
    "I don't think I've ever seen you look at your phone like that. I don't even think I've seen you on your phone... at all," Sam continued, though he wasn't pressing Steve, he was just curious.
Everyone had taken notice, once again, of Steve's shift in energy - they wondered if it would be permanent or if he would just become withdrawn or irritable once again. And they wondered what brought these shifts on, but they mainly just admired the moments like these. Maybe it was just the fact that it was getting nicer outside, that the skies were clear and blue, and that they weren't overwhelmed with work.
    Steve dodged the question yet again, watching as Sam filled up a water bottle.
    "First time for everything, huh?"
Sam grinned mischievously,
    "My man Steve. Finally adjusting to the times."
Steve chuckled silently - he was adjusting more than Sam even knew.
And when it came to you, you were glad to have Steve in your life. He wasn't a priority to you, and that was a good feeling. He was just someone you liked talking to, a friend who you felt you had a deep connection with. You were glad that you had each cleared your feelings up, that you decided to make this compromise in order to be friends, in order to revive and live through that connection that you both acknowledged you had.
    "You could say that," Steve grinned at Sam, who was in athletic clothing and was filling up a water bottle at the fridge.
    "Going for a run. Wanna come with?" Sam asked, Steve smirking playfully as he folded his muscular arms, which bulged through his t-shirt. He was already in athletic wear - a tshirt and sweatpants - he was ready any time.
    "That something you really wanna do right now?" Steve teased.
After taking a big gulp of water, Sam pointed at Steve, indicating that the competition was on,
    "Try me."
Steve shook his head and laughed,
    "Sure, why not?"
     As Steve and Sam embarked on their afternoon run, they were followed by the sound of snapping cameras and flashing lights, which was normal for any Avenger doing anything. Though the press generally respected their wishes and didn't ask them questions or bother them much, they were still there. It was generally something Steve could ignore, and this time in particular it didn't bother him much. He was too wrapped up in the moment, the warm spring breeze against his face as he ran alongside his close friend, the thought of you fresh in his mind, the image of your smile burning in his brain.
All these things kept him warm, and Steve was glad. He was much too used to the cold.
✺ ✺ ✺
    You liked talking to Steve - scratch that, you loved talking to Steve. If you had a connection before, it was undeniable now. It was really him. And knowing that, you could sink into the comfort of talking to him. Neither of you felt like there was anything to be afraid of. You were just friends, and it was great like that, even if you both knew you had the bursting potential to be more. 
     Of course you understood the romantic undertones of your relationship, they'd been there from the start, first under the guise of flirting, then stretched out as you spoke to one another and got to know each other. And now that you actually knew each other, the possibilities for more were right above your heads, all it would take was a little reach.
    But you weren't quite thinking of that. You knew it, but you let yourself rest in the friendship you had now. You were still taking care of yourself, not focusing on your love life at the moment. But if the feeling should become so pressing, who were you to deny it? You would be lying to yourself if you did, and probably denying yourself a good thing. You only wanted good things.
    Each day, when you thought about your situation with Steve, your heart swelled up with the magic of your beautiful secret. No one could know, of course, but it was nice to know for yourself. If there were anybody that you were going to tell, it would be Aaliyah - she usually found things anyway.
    You were getting to see him as the real Steve Rogers - not Captain America, not The Man Out of Time, not the Steve Rogers that all the news stories reported on - though he wasn't quite different from the loyal, strong minded and good natured person that he was known to be. And although you knew it was so special to talk to someone like him, you didn't idolize it as much after that week, and that was good. It meant that you really did have the capacity to be friends with each other.
    Out of respect for Steve, and because you were being smart, you decided that you would tell no one. You didn't want to sacrifice the fact that every day you were talking to the one and only Steve Rogers on Snapchat, and he was your friend. You enjoyed sending him things just as much as he liked receiving things. You'd taught him how to use it, but he was still getting used to all the oddities and newness of Snapchat - filters, bitmojis, all that. It was still cool to know that you had this exclusive way to talk to an Avenger. If you weren't smart, you'd tell it on the mountains.
    You were just opening a snapchat from him, a picture of him and The Falcon, yet Steve had the audacity to caption it: "Out with a friend." Looking at the picture, your eyes went wide, glancing between Steve and Sam and not being able to decide who to focus on. You slowly realized there would be more perks to talking to Steve than just Steve - after all, he was a part of a team with the rest of the greats. The perks weren't all you cared about, but they definitely garnered a reaction. 
     You were fixated on the image, hardly paying attention to Aaliyah who was in front of you. Again you were out for brunch together. You'd decided to take up your tradition of Sunday girls brunch yet again, ever since you'd started up therapy and your self proclaimed healing process. But what was great about that process was that you were in a space where you could say that you were happy. Still on ground, but at least you weren't beneath the surface of the earth.
   "Hello! Earth to YN!" Aaliyah exclaimed, waving her hand in front of your face.
Quickly, you turned off your phone, the image of Steve and Sam disappearing (but how badly you wanted to screenshot it), and set it down on the table, letting a smile replace your entranced features. You folded your arms and tried to appear as nonchalant as possible. Luckily, Aaliyah didn't get on your ass about whatever was on your phone, because she had other things on her mind.
     "What's up?" you quirked your eyebrow, taking a sip of your green juice.
     "You know what's up. I've been trying to ask you about Alex all week," Aaliyah raised her eyebrows, and you nearly chortled at the mention of his name.
     You remembered that incident, it had only been a week ago. But that was a moment of spontaneity. You weren't thinking of seeing him again, but Aaliyah seemed to have other plans.
     "What about him?" you smirked, biting down on your straw.
     "You know 'what about him'! You were sucking his face and then you never spoke again, that's what about him," Aaliyah replied.
You rolled your eyes playfully, but a blush came to your cheeks as you remembered the events that went down. Lucky for you though, both you and Alex had agreed that you weren't looking for anything. So you felt fine just leaving it the way it was. You liked those moments of spontaneity, liked the fact that you didn't need to worry.
    "And let's leave it just like that," you grinned, and Aaliyah leaned back in her chair, impressed.
    "Hmm. This is interesting for you. You really aren't a hookup girl, I mean, not in real life. I'd think you wanted something more out of that."
Aaliyah was right, you weren't a hookup girl. You had your attractions in the past, but even before your boyfriend, you'd been more traditional. You were attractive and flirtatious though, so you'd had a small amount of flings and hookups, but it wasn't your style. You did it so much on the internet to begin with. In real life, you had a good balance of spontaneity and regularity. But this was different. 
     You had done what you did with Alex for yourself, for your own pleasure with no strings attached, with the knowledge that this wouldn't be followed by the long, winding road of trust exercises and disappointment that came with an actual relationship. And it inadvertently taught you to stop resisting when you wanted someone, even if it didn't mean you'd be together forever. Would you do it again? Probably not. You'd satisfied that small craving you had.
    In response to Aaliyah, you simply shrugged and said,
    "I guess there's just more in store for me."    
✺ ✺ ✺
    After the run with Sam, Steve took a hot shower and let the water run down his body. As always, hot showers brought along thinking sessions. Usually Steve thought of the things he'd lost, the things he still had yet to adjust to. But right now, all he had on his mind was you. And it didn't make him melancholy or nervous the way that it did in the past. 
     This time he just thought of you with sweet pleasure, without even touching himself. And he thought of the things Bucky had said about trust, just a little while before he'd revealed his face to you; about how at surface level it can appear hard to trust someone, but that gut instinct reveals who you could trust, even if it didn't seem like you should. And you were gaining Steve's trust steadfastly.
   To Steve, it was crazy that you had only just started talking to each other on this level. You felt much closer. It made sense, because you'd spoken for so much time before, but it wasn't the same as how you were speaking now. It was developing, quick and easy, it wasn't hard for Steve to call you his friend.
    And maybe, even more. The same with you, Steve had known the romantic potentials of your relationship - it was a part of what had scared him off at first. He knew it, maybe even more than you did. Because while you solely acknowledged the romantic potential, Steve could feel himself looking a bit more into it. He was wondering what it might be like to be closer to you- the beautiful pros and even the cons. He wanted to know how much closer he could get, to satisfy the feeling of simply wanting to be closer. He couldn't get enough of you and your cute quirks and the conversations you had together outside of the cam site.
    Being friends with you was more than enough, but the feelings that were bubbling up were hard to ignore. They made him so vibrant, and it was noticeable. He kept his head on his shoulders, but not pinned on too hard. Some part of him thought that maybe he was just letting his head go too far in the clouds because he wasn't used to being this spontaneous, wasn't used to the feeling of earning a new friend under such exclusive circumstances. That the freshness of the situation was getting him overexcited, and that maybe he was more of an old fuddy duddy than he realized. But another part of him thought that this was a slow blooming realization, and that he shouldn't clip it while it was still budding.
    It was exciting, it was nerve wracking. He had felt this way before the reveal, but it was crushed beneath the ruins of his own anxieties and fears. Now that he felt he was free to connect in this way, he was hopeful, like there really was something out there for him. Months ago, he thought looking for love was pretty much a dead end, and something he didn't have the time for. Now, even though he wasn't infatuated, he was a bit more optimistic about the fact that there was something here for him, something he had passed roadblocks to get to. 
      He was already learning from you; he could only imagine what you could teach each other if you got closer. And so, the possibility, no matter how reachable or unreachable it was, intrigued him. He was walking around with the ghost of a smile on his face because of it. Again, he wasn't completely gone off you. The feeling was like he was just dipping his toes into a very deep pool.
     He was lying in bed, opening another snap from you. No filter, no makeup, just you in bed on your side, the sheets over your head, a small smile to match your sleepy eyes. The caption read: "goodnight!" Steve couldn't help but feel special about the fact that he got to see you up close like this, outside of your Moonrose act, stripped down, the same way you felt special about the fact that you were texting someone like him. He looked at the picture for far too long, in the same position as you, smiling before he was able to realize he was even doing it. If he could feel all these things just by looking at you, a friend, he knew there was more to come. And finally, he was thinking he could be open to that.
    Thinking that he could want to take things further, on his own initiative.
note: EEP!!! this was a big chapter !!! how do we feel <3 
scifigeneration · 5 years
AI can now read emotions – should it?
by Christoffer Heckman
[Image: Emotion recognition technology, an outgrowth of facial recognition technology, continues to advance quickly. Steve Jurvetson/flickr, CC BY-SA]
In its annual report, the AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence, called for a ban on technology designed to recognize people’s emotions in certain cases. Specifically, the researchers said affect recognition technology, also called emotion recognition technology, should not be used in decisions that “impact people’s lives and access to opportunities,” such as hiring decisions or pain assessments, because it is not sufficiently accurate and can lead to biased decisions.
What is this technology, which is already being used and marketed, and why is it raising concerns?
Outgrowth of facial recognition
Researchers have been actively working on computer vision algorithms that can determine the emotions and intent of humans, along with making other inferences, for at least a decade. Facial expression analysis has been around since at least 2003. Computers have been able to understand emotion even longer. This latest technology relies on the data-centric techniques known as “machine learning,” algorithms that process data to “learn” how to make decisions, to accomplish even more accurate affect recognition.
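To make that machine-learning framing concrete, here is a minimal sketch of how an affect-recognition pipeline is often assembled: a pretrained image backbone with its final layer swapped for the seven basic emotion classes used by public datasets such as FER2013. It assumes PyTorch and torchvision are available; the checkpoint name and input image are hypothetical, and this is a generic illustration rather than any specific system discussed in the report.

```python
# Minimal affect-recognition sketch: pretrained backbone + 7-class emotion head.
# Assumes torch/torchvision; "emotion_head.pt" is a hypothetical fine-tuned checkpoint.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(EMOTIONS))  # replace the 1000-class head
# model.load_state_dict(torch.load("emotion_head.pt"))     # hypothetical fine-tuned weights
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def predict_emotion(path: str) -> str:
    """Return the most probable emotion label for a cropped face image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    return EMOTIONS[int(probs.argmax())]

print(predict_emotion("face.jpg"))  # hypothetical input image
```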
The challenge of reading emotions
Researchers are always looking to do new things by building on what has been done before. Emotion recognition is enticing because, somehow, we as humans can accomplish this relatively well from even an early age, and yet capably replicating that human skill using computer vision is still challenging. While it’s possible to do some pretty remarkable things with images, such as stylize a photo to make it look as if it were drawn by a famous artist and even create photo-realistic faces – not to mention create so-called deepfakes – the ability to infer properties such as human emotions from a real image has always been of interest for researchers.
[Embedded YouTube video]
Recognizing people’s emotions with computers has potential for a number of positive applications, a researcher who now works at Microsoft explains.
Emotions are difficult because they tend to depend on context. For instance, when someone is concentrating on something it might appear that they're simply thinking. Facial recognition has come a long way using machine learning, but identifying a person's emotional state purely from their face misses key information. Emotions are expressed not only through a person's expression but also through where they are and what they're doing. These contextual cues are difficult to feed into even modern machine learning algorithms. To address this, there are active efforts to augment artificial intelligence techniques to consider context, not just for emotion recognition but for all kinds of applications.
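As a minimal sketch of what "considering context" can mean in practice, the toy model below concatenates features from a face crop with features from the whole scene before classifying emotion. The network sizes, feature dimensions, and random stand-in inputs are all illustrative assumptions, not a description of any deployed system.

```python
# Toy "context-aware" fusion: concatenate a face embedding with a scene embedding
# before classifying emotion. Dimensions and class count are illustrative assumptions.
import torch
import torch.nn as nn

class ContextAwareEmotionNet(nn.Module):
    def __init__(self, face_dim=512, scene_dim=512, n_emotions=7):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(face_dim + scene_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_emotions),
        )

    def forward(self, face_feat, scene_feat):
        # Context enters the prediction alongside the facial features.
        return self.fusion(torch.cat([face_feat, scene_feat], dim=1))

net = ContextAwareEmotionNet()
face_feat = torch.randn(1, 512)   # stand-in for features from a face encoder
scene_feat = torch.randn(1, 512)  # stand-in for features from a whole-scene encoder
print(net(face_feat, scene_feat).shape)  # torch.Size([1, 7])
```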
Reading employee emotions
The report released by AI Now sheds light on some ways in which AI is being applied to the workforce in order to evaluate worker productivity and even as early as at the interview stage. Analyzing footage from interviews, especially for remote job-seekers, is already underway. If managers can get a sense of their subordinates’ emotions from interview to evaluation, decision-making regarding other employment matters such as raises, promotions or assignments might end up being influenced by that information. But there are many other ways that this technology could be used.
Why the worry
These types of systems almost always have fairness, accountability, transparency and ethical (“FATE”) flaws baked into their pattern-matching. For example, one study found that facial recognition algorithms rated faces of black people as angrier than white faces, even when they were smiling.
Many research groups are tackling this problem but it seems clear at this point that the problem can’t be solved exclusively at the technological level. Issues regarding FATE in AI will require a continued and concerted effort on the part of those using the technology to be aware of these issues and to address them. As the AI Now report highlights: “Despite the increase in AI ethics content … ethical principles and statements rarely focus on how AI ethics can be implemented and whether they’re effective.” It notes that such AI ethics statements largely ignore questions of how, where, and who will put such guidelines into operation. In reality, it’s likely that everyone must be aware of the types of biases and weaknesses these systems present, similar to how we must be aware of our own biases and those of others.
The problem with blanket technology bans
Greater accuracy and ease in persistent monitoring bring along other concerns beyond ethics. There is also a host of general technology-related privacy concerns, spanning from the proliferation of cameras that serve as police feeds to questions about whether sensitive data can ever truly be made anonymous.
With these ethical and privacy concerns, a natural reaction might be to call for a ban on these techniques. Certainly, applying AI to job interview results or criminal sentencing procedures seems dangerous if the systems are learning biases or are otherwise unreliable. There are useful applications, however, for instance in helping spot warning signs to prevent youth suicide and in detecting drunk drivers. That’s one reason why even concerned researchers, regulators and citizens have generally stopped short of calling for blanket bans on AI-related technologies.
Combining AI and human judgment
Ultimately, technology designers and society as a whole need to look carefully at how information from AI systems is injected into decision-making processes. These systems can give incorrect results just like any other form of intelligence. They are also notoriously bad at rating their own confidence, not unlike humans, even in simpler tasks like recognizing objects. There also remain significant technical challenges in reading emotions, notably considering context to infer emotions.
If people rely on a system that isn’t accurate in making decisions, the users of that system are worse off. It’s also well-known that humans tend to trust these systems more than other authority figures. In light of this, we as a society need to carefully consider these systems’ fairness, accountability, transparency and ethics both during design and application, always keeping a human as the final decision-maker.
Tumblr media
About The Author:
Christoffer Heckman is Assistant Professor of Computer Science at the University of Colorado Boulder
This article is republished from our content partners over at The Conversation under a Creative Commons license.
19 notes · View notes
kenyatta · 5 years
Link
When the “Drunk Pelosi” video first appeared on a Facebook page on May 28, it seemed it would be yet another high-profile reminder that social media platforms allow and even encourage the spread of disinformation. The video, posted to a self-described news Facebook page with a fan base of about 35,000, depicted Nancy Pelosi slurring her words and sounding intoxicated. However, when compared with another video from the same event, it was clear even to nonexperts that it had been slowed down to produce the “drunken” effect. Call it a “cheapfake”—it was modified only very slightly. While the altered video garnered some significant views on Facebook, it was only after it was amplified by President Donald Trump and other prominent Republicans on Twitter that it became a newsworthy issue. The heightened drama surrounding this video raises interesting questions not only about platform accountability but also about how to spot disinformation in the wild.
Journalists, politicians, and others worry that the technological sophistication of artificial intelligence–generated deepfakes makes them dangerous to democracy because it renders evidence meaningless. But what panic over this deepfake phenomenon misses is that audiovisual content doesn’t have to be generated through artificial intelligence to be dangerous to society. “Cheapfakes” rely on free software that allows manipulation through easy conventional editing techniques like speeding, slowing, and cutting, as well as nontechnical manipulations like restaging or recontextualizing existing footage that are already causing problems. Cheapfakes already call into question the methods of evidence that scientists, courts, and newsrooms traditionally use to call for accountability.
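To see just how low that technical bar is: the slowed-down effect described above amounts to a single speed change that free tools apply in one line. Below is a minimal sketch using the moviepy library (the filenames are hypothetical); no AI is involved at all, which is exactly the point.

```python
# A "cheapfake" in three lines: slow a clip to 75% speed, which typically also
# lowers the audio pitch and produces the slurred effect described above.
from moviepy.editor import VideoFileClip, vfx

clip = VideoFileClip("speech.mp4")        # hypothetical input file
slowed = clip.fx(vfx.speedx, 0.75)        # conventional edit, no AI involved
slowed.write_videofile("speech_slowed.mp4")
```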
Memes don’t work because they have high production value. They work because they provoke somebody to share them with the rest of their social network (like this news story provoked me into sharing it with you.)
8 notes · View notes
msclaritea · 2 years
Text
Deepfakes are not generally well understood by the public, but they’re becoming an ever-present source of fear for many. These days, it’s hard to trust what you read. Now, deepfakes are making it harder to trust what you see. People must educate themselves on the truth about deepfakes if we ever hope to mitigate the damage they can do. Below, we offer a deep dive into deepfakes, as well as an explanation on how to spot them and why they’re dangerous.
What Are Deepfakes And Why Are They Dangerous?
What Are Deepfakes?
A deepfake is a new breed of video that became popular with certain online communities over the past few years. These fakes use AI technology to transplant one person’s face onto another person’s body. A piece of software takes as many images and videos of the target’s face as it can get, then uses them to create a detailed map of that face. The creator can then apply this map to any existing piece of footage.
A similar type of software can be found in applications like ReFace, though in a much less advanced form. These apps typically work from only a single image and produce less lifelike results. The programs that run on PCs, by contrast, are intended to generate realistic fake videos that can genuinely fool people.
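The "map" mentioned above is usually learned by a pair of autoencoders that share a single encoder: the shared encoder learns a compressed representation of faces in general, while each decoder learns to reconstruct one specific identity, so "swapping" just means decoding person A's frames with person B's decoder. A conceptual PyTorch sketch of that architecture only (dimensions are illustrative; the data collection, face alignment and blending a working system needs are deliberately omitted):

```python
# Conceptual sketch of the classic face-swap architecture: one shared encoder,
# one decoder per identity. Training data, alignment and blending are omitted.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training reconstructs each person's faces through the shared encoder and their
# own decoder; at "swap" time, person A's frames are pushed through decoder_b.
faces_a = torch.rand(4, 3, 64, 64)            # placeholder batch of face crops
swapped = decoder_b(encoder(faces_a))
print(swapped.shape)                          # torch.Size([4, 3, 64, 64])
```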
Where Did Deepfakes Come From?
One of the most famous early deepfakes featured the face of actress Gal Gadot. | Source: Vice
As with many developments in human history, deepfakes have their roots in pornography. Although image manipulation has existed for years, and deepfake technology has also been around in research institutions since the 90s, deepfakes really came to life in 2017.
On Reddit in 2017, a user called “deepfakes” created r/deepfakes, a group dedicated primarily to sharing pornographic content featuring celebrities, with the celebrities’ faces swapped onto the bodies of adult performers. Although Reddit banned the group in early 2018, pornographic deepfakes continue to be created and shared online.
Since Reddit brought deepfakes to public attention, several companies have created apps that rely on deepfake technology. Deepfake technology is also used in cinema, primarily to impose a dead actor’s face onto a living body or to allow older actors to appear younger.
What is the Problem With Deepfakes?
If you’ve only interacted with the fun side of deepfakes, you may wonder what the problem is. People putting Nicolas Cage’s face onto the bodies of other movie actors is just a bit of fun, after all.
The truth is that deepfakes can cause several severe issues in the world. Celebrities and adult film producers do not give their consent for these deepfake videos. Not only does this potentially break copyright law, but it can also cause deep emotional and mental trauma to the celebrity involved.
(Question: does anyone think that Judd Apatow asked for Ben's permission before using his likeness in The Bubble? I don't either.)
Other major issues with deepfakes include using these clips to blackmail victims and to create ‘sockpuppets.’ Creators can fabricate any amount of video “evidence” that a sockpuppet exists when it doesn’t exist at all. Online abusers can then use these sockpuppets to cause trouble online consequence-free. In mid-2020, a harasser used the “Oliver Taylor” persona to harass a couple of activists. This persona was actually created with deepfake technology.
Perhaps the most dangerous implication of deepfake technology is political in nature. It’s entirely possible to create a deepfake video that seems to show a world leader declaring war on a foreign nation. This is especially dangerous thanks to advances in AI-generated voices that closely mimic the target.
Famous Examples of Deepfakes
Since Reddit brought deepfakes to the public’s attention, they’ve appeared more and more frequently, both online and in TV and film. Below are some of the most famous examples of deepfakes in the public eye and where they came from.
The Queen’s Alternative Christmas Message
Each year in the UK, the Queen famously addresses the nation in her Royal Christmas Message. The UK’s Channel 4 also broadcasts an alternative message. These alternative messages usually feature a celebrity or comedian making jokes about the year. In contrast, the Queen’s official message tries to comfort or inspire the public.
In December 2020, Channel 4 elected to use deepfake technology to create a message seemingly delivered by the Queen herself. This message poked fun at the royal family. Channel 4 also used the video to impart the danger of deepfakes to the general public. The big moment came when the actress behind the technology was revealed towards the end of the video.
While the effect wasn’t quite perfect, and the message was clearly fake, the impact behind the message is undeniable. In the end, over 2 million people viewed the alternative message.
Tom Cruise Does Magic Tricks
In early March 2021, a series of videos went viral on TikTok and Twitter, purporting to show famous actor Tom Cruise golfing, doing magic tricks, and sucking a lollipop.
In reality, these videos were created by VFX artist Chris Ume to try and raise awareness about the advanced nature of modern deepfake technology.
Ume created some quite sophisticated videos, but claimed most people didn’t fall for the trick. The real aim of these productions was to show what can be achieved with the right knowledge and software, even by a person working alone.
Dali Lives
While not an intentional deception, the Dalí Lives exhibit at the Dalí Museum in Florida used deepfake technology to create a lifelike recreation of the famous artist Salvador Dalí, 30 years after he died.
The intention behind this particular use of deepfakes was artistic in nature. However, Dalí Lives is a good example of how sophisticated deepfake technology has become. Fake versions of Dalí were featured all over the museum, giving visitors an insight into the artist’s life and works, and each of these installations was incredibly lifelike.
What Can You Do About Deepfakes?
The true nature of deepfakes can be quite overwhelming. With enough time and effort, deepfakes can already fool a lot of people. As time goes by, the technology behind these videos will only get better and more realistic.
While some scientists are developing software to identify deepfakes, it’s entirely possible for deepfakes to advance beyond those detection tools. This could end up creating a fake-video arms race. The best thing you can do is constantly question the veracity and sources of the information you’re being fed online.
When you see a video online, ensure that it comes from a trustworthy source. Try to be aware of the intention behind a video, and check with fact-checking organizations if you have any reason to suspect it might have been faked.
That recent post of Tom Cruise jumping over the head of that comedian (coincidentally, the same one who was in The Bubble) was a deepfake. All of these so-called attempts to 'warn' the public were basically just ways to spread the word. The question is, why are Scientology and members of the Hollywood Mafia so interested in deepfakes, unless it's for nefarious purposes such as blackmail, threatening to replace an actor if they don't comply with blackmail, or even worse?
0 notes
scienceblogtumbler · 4 years
Text
From deepfakes to fake news, an array of influences aim to shape voter decisions
Gone are the days when voter influences involved a candidate stump speech, a piece of direct mail, a TV ad or a pamphlet.
Today, the forces that influence us are far more complex and pervasive, powered by cybersecurity threats and foreign operatives, U.S. conspiracy sites, digital fakes and propaganda-rich social media echo chambers — all playing upon a peculiarly fickle human mind that culminates in a decision when a voter casts a ballot, according to USC experts.
This election, simply reaching the polls to cast a vote is complicated. Foreign agents, bots, inaccurate tweets and White House attacks on the validity of elections can confuse voters. Cyberattacks can reach voters by email and phone, sending misleading information about polling places or mail-in deadlines, creating long lines at polling locations or shutting down polls in targeted communities. The risk of COVID-19 deters people from going to the polls, as the Spanish Flu did in elections a century ago.
But what makes this year’s election truly unique is the widespread use of mail-in ballots.
“Today, forces are at work to make people not participate in the election by questioning the integrity of elections and saying the system is broken,” said Christina Bellantoni, professor at the USC Annenberg School for Communication and Journalism and director of the Annenberg Media Center. “With so few people undecided about the upcoming presidential election, influencing just a handful of people on the margins can sway an election.”
Cyberattacks are likely to target the 2020 election
Clifford Neuman, a scientist at the Information Sciences Institute and the Computer Science Department of the USC Viterbi School of Engineering, says the U.S. election process is particularly vulnerable to manipulation due to a convergence of computer dependency, polarized politics and protections regarding freedom of expression.
“Computers are used throughout the election process, and cyberattacks will target all aspects of this year’s election,” said Neuman, who directs the USC Center for Computer Systems Security. “Computers are used for voter registration, accepting political contributions, for get-out-the-vote activities and practically all communication by campaigns. Journalists covering the election use computers to gather information and to publish their stories. Social media provides a medium for the spread of information and misinformation, along with the information needed to target messaging to like-thinking segments of the citizenry.”
Neuman has presented on cyberthreats and attacks for the USC Cybersecurity Election Initiative, a series of state-by-state workshops — supported by Google — that raise cybersecurity awareness among election and campaign officials.
He advised that paper ballots are critical this year, amid an increase in cyberattacks on elections systems. “U.S. adversaries can manipulate elections by targeting the ballot recording and counting infrastructure,” he said. “The threat is very concerning.”
Never been easier for technology, misinformation to influence voters
New and remarkably easy-to-make deepfakes are increasingly common, too. These fraudulent yet persuasive videos can be made in a few hours on a $2,000 computer by a competent programmer and posted to social media networks, said Wael AbdAlmageed, professor at USC Viterbi and the Information Sciences Institute. He added that USC has developed state-of-the-art misinformation detection technology that can spot more than 96% of fraudulent videos in almost real time.
“Deepfakes have great power,” AbdAlmageed said. “They are a significant and growing risk to elections and democracy. There is now so much visual manipulation that the whole notion of ‘seeing is believing’ is not valid anymore. Deepfakes are dangerous and can sway an election.”
Peculiarities of the human mind render people especially vulnerable to manipulation, which helps explain the proliferation of efforts to manipulate elections.
“We humans are terrible at discerning truth from untruth,” said Norbert Schwarz, a Provost Professor of Psychology and Marketing at the USC Dornsife College of Letters, Arts and Sciences and the USC Marshall School of Business. “People are persuaded by messages that are simple, easy to process and agreeable. The more attention and effort it takes to process information, the more people look for ease in message assimilation. What’s easy to process becomes a proxy for truth.”
Schwarz said that the proliferation and fragmentation of media sources floods people with messages that can be difficult to sort.
“We are much more vulnerable to misinformation than we used to be,” said Schwarz, the co-director of USC Dornsife’s Mind and Society Center.
A more splintered media landscape means tribal influences reign
Indeed, big changes in media leave people confused.
Many distrust the news media, in part because some journalists veer toward opinion on their social media channels more than ever before. TV networks or newspapers used to provide a common narrative upon which to build American political consensus.
Bellantoni said that fake news can sway voters, such as the viral video on social media suggesting partisan postal workers could destroy ballots that indicate someone’s voter registration on the envelope.
So what’s a voter to do? Increasingly, Americans retreat inside their own tribal groups and information bubbles.
Schwarz said that people now shape their information diet to the exclusion of anything they don’t want to hear, sometimes resulting in fact-free realities. But this is dangerous, he said, because a democracy in which people can’t agree on facts, or in which people hold alternative facts and realities, is not viable.
John Matsusaka, executive director of the Initiative and Referendum Institute at USC and a professor at USC Marshall and the USC Gould School of Law, said that people have long voted according to partisan orientation, but what’s changed is that partisanship has become a bigger part of people’s identity.
“American identity politics has changed to where people don’t vote according to the identity of their church, union or job anymore,” he said. “People decide how they vote by their partisan identity, which is often more important to them than the issues. We are becoming a tribal country, and tribal identities shape our votes today.”
The result, Matsusaka said, is that people increasingly see the opposing camp as the wrong kind of Americans. Yet democracy depends on political compromise, which he said is vanishing as the political center crumbles.
source https://scienceblog.com/518664/from-deepfakes-to-fake-news-an-array-of-influences-aim-to-shape-voter-decisions/
0 notes
techcrunchappcom · 4 years
Photo
Tumblr media
New Post has been published on https://techcrunchapp.com/10-cybersecurity-myths-you-need-to-stop-believing/
10 cybersecurity myths you need to stop believing
Tumblr media
These are the top cybersecurity myths you need to let go of. (iStock)
On the Dark Web, you can purchase cybercrime “how-to kits” that gather lists of breached names, account numbers, passwords, and even telephone support lines for the victims to call. It’s not difficult to get on the Dark Web. Tap or click here for my short guide that tells you how to access the Dark Web.
Make no mistake. Just because you’re on the Dark Web doesn’t mean you’re anonymous. Tap or click here for a video that shows how the FBI works the Dark Web.
Ransomware attacks, data breaches, and scams — along with a steady stream of extortion and phishing emails — have taken over the internet. We hear about cybercrime so often that it can quickly turn into white noise. That’s a mistake.
Here are 10 security myths you need to stop believing about your data.
1. I don’t have anything worth protecting
You might think your data isn’t worth anything. You might think because you’re broke, no one cares about your data. You might also think that since you have nothing to hide, there’s no point in protecting your identity or information.
Think about it this way: All those free social media apps you sign up for — Facebook, Twitter, Instagram, Pinterest, Snapchat — aren’t free at all. When you sign the Terms and Conditions, you’re signing away your right to privacy, which lets the apps build a detailed demographic profile of you.
The companies turn around and sell this information to marketers; that means your information is making these companies millions of dollars. So why wouldn’t hackers want to cash in on that?
2. I use security software, so I’m fine
Many people think that security software will act as an invincible shield between their data and hackers. But a group of Russian hackers breached the servers of three major antivirus providers, and all the information they stole is now up for sale on the Dark Web.
So, what’s an excellent way to work around this danger? Keep your operating system software and security software updated. Do the same for your other devices, including your phone and tablet.
Don’t forget about your router. Once hackers break into that, every device using it to connect to the internet is vulnerable. Tap or click here for a free test to see if your router has already been hacked.
Finally, make sure you’re using the right security software. Tap or click here for 5 free cybersecurity tools you can download today.
3. With all these data breaches, I have nothing left to protect
Want to see if your data has already been breached? One website has been tracking data breaches for years and put a handy search tool online. You simply enter your email address and get a yes or no answer. Tap or click here to see if your data has been compromised.
Let’s say you’re on the list. You may feel hopeless, and like there’s no point in protecting your data since it’s already been overtaken.
That’s not true. There are different types of data breaches that can have different impacts. For example, say your password and username to your bank account have been breached. Don’t give up — inaction empowers the hackers to pry for even more information, which could lead them to your Social Security number.
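The same service behind that breach-search tool also runs a free "Pwned Passwords" API built on k-anonymity: you send only the first five characters of your password's SHA-1 hash and compare the returned suffixes locally, so the password itself never leaves your machine. A minimal sketch, assuming the public api.pwnedpasswords.com range endpoint:

```python
# Check a password against known breaches without ever sending the password:
# only the first 5 hex chars of its SHA-1 hash go to the API (k-anonymity).
import hashlib
import requests

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A heavily breached example password; a nonzero count means "stop using it".
    print(times_pwned("password123"))
```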
4. Phishing scams are easy to spot
Phishing scams are becoming more sophisticated as hackers infiltrate companies, CEOs’ personal accounts, and even government agencies. Phishing scams have skyrocketed during COVID-19.
Very realistic-looking extortion scams are making the rounds. The subject line contains your email address and a password that looks familiar. The scammer says that unless you pay up, they will release a video of you that they claim to have taken using your webcam when you visited a porn site.
Don’t buy it. The scammer got your email address and password from a data breach. If you are still using the combination of both, it’s best to change your password at the very least.
It’s not always as simple as an unfamiliar account reaching out to you with bizarre messages trying to get you to click on a link. Sometimes, they use familiar faces against you, which leads to the next myth.
5. My friends on social media won’t hurt me
The great thing about social media is that it connects you with your friends and relatives. Unfortunately, the web of connectivity can be an opening for spiders to turn friends into gateways for data breaches.
Say your friend has a weak password, and their account gets breached. Say they send you a private message saying they found a funny new video or a cool new site you should check out. Since the link is coming from a familiar face, your guard may be down. After all, you’re aware of phishing scams when you get a message from someone you’ve never heard of, but you don’t have that on your mind when you hear from a friend.
Hackers bank on those lowered guards to corrupt your web and turn it into a jumping point for even more data breaches.
6. Hackers are mysterious, scary figures
When you think of a hacker, you probably imagine popular images of hooded figures hunched over a computer. A lot of hackers are regular people and can be hard to spot.
It’s important to realize that hackers aren’t lone wolves. There are entire organizations — some government-funded — that work together to infiltrate data and rake in millions. Hacking is also a popular way for mobsters to bring in cash at long distances.
Once you realize just how dire this threat is, it becomes easy to understand why it is so important to take steps towards cybersecurity.
To stay on top of the latest hacker threats, get my breaking tech news alerts. You’ll only get an alert when threats strike. Tap or click here to sign up now.
7. I only go to mainstream sites, so I don’t need security software
You need security software no matter where you go. Remember what I said earlier, about how social media apps sell your data to make their money? The more cookies you have in your browser, the more your every step is being followed.
When multiple sites have a detailed profile of you, that increases your chances of getting your data breached, since all companies are vulnerable to a data breach. Security software keeps you safe. It’s like two-factor authentication: a necessary step towards protecting your privacy.
RELATED: 3 security programs that should be on every computer and laptop
8. I use complex passwords
Even a long, complicated password isn’t enough to keep you safe in today’s security landscape.
Nowadays, there are speedy programs people use to run billions of password combinations — and it only takes a second to run these potential passwords. Not only that, but hackers have sophisticated methods for identifying the patterns people rely on when creating passwords.
That’s why you should also use password managers as well as two-factor authentication.
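The arithmetic behind that advice is simple. Assuming an offline attacker who can test ten billion guesses per second against a fast hash (an assumption for illustration, not a quoted benchmark), the size of the keyspace determines how long exhaustive search takes:

```python
# Back-of-the-envelope brute-force times at an assumed 10 billion guesses/second.
GUESSES_PER_SECOND = 10_000_000_000

def time_to_exhaust(alphabet_size: int, length: int) -> float:
    """Seconds to try every password of this length over this alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

def fmt(seconds: float) -> str:
    if seconds < 3600:
        return f"{seconds:,.0f} seconds"
    if seconds < 86400 * 365:
        return f"{seconds / 86400:,.1f} days"
    return f"{seconds / (86400 * 365):,.0f} years"

for label, alphabet, length in [
    ("8 chars, lowercase only", 26, 8),
    ("8 chars, upper+lower+digits+symbols", 94, 8),
    ("16 chars, lowercase only", 26, 16),
]:
    print(f"{label:38s} {fmt(time_to_exhaust(alphabet, length))}")

# Roughly: about 21 seconds, about a week, and over 100,000 years. Length wins,
# which is why a password manager generating long unique passwords, plus
# two-factor authentication, beats clever-looking short passwords.
```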
9. I know a fake voice when I hear one
You’ve probably heard that scammers will call you with robotic voices, pretending to be the IRS. They demand money. Maybe you’ve gotten one of these calls yourself. You may think you can recognize a robotic voice, but unfortunately, robocallers are improving their techniques.
Deepfake technology can replicate more than just faces. It’s also expanding into voices. Online programs need only to hear your voice to create a close copy.
10. I will know when something bad gets in my device or computer
Cybercriminals work stealthily. When they’re doing their deeds, there’s no red flag that pops up. They have intricate ways of infiltrating your data; there could even be Trojan horses in the form of viruses lurking on your devices right now.
Now that you’re aware of the 10 most common cybersecurity myths, you’re better equipped to recognize misinformation spread by hackers who want to keep you vulnerable.
Remember, your data is worth a lot to cybercriminals, so take steps to protect it. Make sure all of your gadgets are up to date with all the security patches needed to fend off online attacks.
Make use of robust security software, password managers, and two-factor authentication. Most of all, follow news on recent breaches and hacking trends to keep your security tools reliable and timely.
On my website, we have a busy Q&A forum where you can post your tech questions and get answers you can trust from real tech pros, including me. Check it out here and let us solve your tech issues.
Call Kim’s national radio show and tap or click here to find it on your local radio station. You can listen to or watch The Kim Komando Show on your phone, tablet, television or computer. Or tap or click here for Kim’s free podcasts.
Copyright 2020, WestStar Multimedia Entertainment. All rights reserved.
Learn about all the latest technology on The Kim Komando Show, the nation’s largest weekend radio talk show. Kim takes calls and dispenses advice on today’s digital lifestyle, from smartphones and tablets to online privacy and data hacks. For her daily tips, free newsletters and more, visit her website at Komando.com.
0 notes
asking-answers · 4 years
Text
Is deepfake technology as harmful as revenge porn?
The term “deepfake” originated on a Reddit forum where users were editing images of celebrities into pornographic imagery. This forum has since been removed, but the community has rebuilt itself in corners of the internet and has evolved to using images of average women and young girls, with the aid of more convincing technology.
Deepfake technology works by feeding an artificial intelligence system images of a person; the algorithm then builds a mimic of that face and recreates it over the face of another person. Due to the rise of social media, people can collect enough source images to feed the algorithm and create deepfake images of almost anyone.
The media has recently started highlighting deepfakes as a risk to politics, as doctored videos of political candidates have circulated. But a September 2019 report by Deeptrace found that 96% of the deepfakes online were pornographic. Pornhub and other porn streaming sites have banned deepfakes, although due to the open upload model of these free-to-use sites, most videos are either not spotted or are re-uploaded by the same or other users.
Australian law graduate Noelle Martin is the best-known case of a non-celebrity who has spoken out against the use of deepfake technology for pornography.
At age 17, Noelle Martin first found doctored images of herself after reverse image searching a selfie, discovering predatory images of her face on the bodies of porn actresses. In her 2018 TEDx Perth talk, Martin described her experience across the five years since finding the doctored photographs as a “horrific battle that almost ruined [her] life.” In the talk she details the graphic violations of her likeness, recounting images from her social media being edited to allude to her shirt being transparent and to show semen on her face, as well as finding images of her face on the bodies of adult models engaging in intercourse and of actresses in videos uploaded to porn sites. I would also like to highlight the predatory behaviour of these deepfakers in editing the likenesses of underage girls into pornographic images, with Martin highlighting how men would ejaculate on images of girls and share them on porn sites, and how personal information about these young girls was published alongside the images, such as their full names, home addresses and places of study. After going public with her story to petition for law reform in late 2017, Martin was victim-blamed online, being called an “attention seeking piece of trash” on social media; without ever having posted any provocative images of herself, Martin was branded a “slut” and “whore” by strangers over the internet. This reaction proves that “leaked” suggestive images don’t need to be authentic for a portion of the public to shame you and your body, and now, with this technology becoming more accessible to the public, or available for just a small payment on a forum, this form of violating attack is going to become a more widely used way to shame and belittle average women and girls. Imagine that mean girl or jilted boy you rejected in high school now having the ability to spread mostly realistic nude images of you to your peers. You could be as careful as you want, or never have taken a nude image in the first place, but now everyone thinks they’ve seen you naked, and you’re being humiliated and shamed for a doctored image that you never consented to.
A figure by Deeptrace catalogued the increasing number of deepfakes online, which nearly doubled to 14,678 during 2019 compared with 2018. In addition to being invasive of victims’ privacy and right of choice, there is a major concern about a potential increase in revenge porn cases. According to a study by the Cyber Civil Rights Initiative, 51% of revenge porn victims have suicidal thoughts; many will try to harm themselves, or will die by suicide, because of the harassment they endure from their bullies after their pictures are leaked. Also, in some cultures the sexualisation of a female body could result in the honour killing of the victim: whether the images are authentic or not, the sexual connotations of the images would bring “shame” on the family, and therefore put the victim in genuine, life-threatening danger.
We live in a world where the female form is subjected to harassment and given less respect than our male peers, and with deepfake technology our sexualisation can be weaponised, putting our lives and careers at risk through what others see as a joke or a harmless compliment. We need laws to regulate the use of deepfake technology, with clear provisions punishing the predators creating the images and prohibiting businesses from firing victims.
Resources if you’re being threatened with revenge porn. https://revengepornhelpline.org.uk/
"We always encourage people to take screenshots of what you've found, where it's been shared," Rebecca Sharp
0 notes
abangtech · 4 years
Text
Deepfakes aren’t very good—nor are the tools to detect them
A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.
We’re lucky that deepfake videos aren’t a big problem yet. The best deepfake detector to emerge from a major Facebook-led effort to combat the altered videos would only catch about two-thirds of them.
In September, as speculation about the danger of deepfakes grew, Facebook challenged artificial intelligence wizards to develop techniques for detecting deepfake videos. In January, the company also banned deepfakes used to spread misinformation.
Facebook’s Deepfake Detection Challenge, in collaboration with Microsoft, Amazon Web Services, and the Partnership on AI, was run through Kaggle, a platform for coding contests that is owned by Google. It provided a vast collection of face-swap videos: 100,000 deepfake clips, created by Facebook using paid actors, on which entrants tested their detection algorithms. The project attracted more than 2,000 participants from industry and academia, and it generated more than 35,000 deepfake detection models.
The best model to emerge from the contest detected deepfakes from Facebook’s collection just over 82 percent of the time. But when that algorithm was tested against a set of previously unseen deepfakes, its performance dropped to a little over 65 percent.
“It’s all fine and good for helping human moderators, but it’s obviously not even close to the level of accuracy that you need,” says Hany Farid, a professor at UC Berkeley and an authority on digital forensics, who is familiar with the Facebook-led project. “You need to make mistakes on the order of one in a billion, something like that.”
Deepfakes use artificial intelligence to digitally graft a person’s face onto someone else, making it seem as if that person did and said things they never did. For now, most deepfakes are bizarre and amusing; a few have appeared in clever advertisements.
The worry is that deepfakes might someday become a particularly powerful and potent weapon for political misinformation, hate speech, or harassment, spreading virally on platforms such as Facebook. The bar for making deepfakes is worryingly low, with simple point-and-click programs built on top of AI algorithms already freely available.
“Frustrated”
“I was pretty personally frustrated with how much time and energy smart researchers were putting into making better deepfakes,” says Mike Schroepfer, Facebook’s chief technology officer. He says the challenge aimed to encourage “broad industry focus on tools and technologies to help us detect these things, so that if they’re being used in malicious ways we have scaled approaches to combat them.”
Schroepfer considers the results of the challenge impressive, given that entrants had only a few months. Deepfakes aren’t yet a big problem, but Schroepfer says it’s important to be ready in case they are weaponized. “I want to be really prepared for a lot of bad stuff that never happens rather than the other way around,” Schroepfer says.
The top-scoring algorithm from the deepfake challenge was written by Selim Seferbekov, a machine-learning engineer at Mapbox, who is in Minsk, Belarus; he won $500,000. Seferbekov says he isn’t particularly worried about deepfakes, for now.
“At the moment their malicious use is quite low, if any,” Seferbekov says. But he suspects that improved machine-learning approaches could change this. “They might have some impact in the future the same as the written fake news nowadays.” Seferbekov’s algorithm will be open sourced, so that others can use it.
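Most entries in the challenge, including reportedly the winning one, reduced the problem to classifying individual face crops and averaging the scores across a video. The sketch below shows that general recipe only, not Seferbekov's actual code: OpenCV's bundled Haar cascade stands in for a stronger face detector, torchvision's EfficientNet stands in for a model that would still need fine-tuning on labelled real/fake crops, and the video path is hypothetical.

```python
# Generic frame-level deepfake scoring: detect a face per sampled frame, classify
# the crop as real/fake, average the probabilities. Untrained here; a real detector
# must be fine-tuned on labelled data such as the DFDC corpus.
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

model = models.efficientnet_b0(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)  # real/fake logit
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224), antialias=True),
])

def fake_probability(video_path: str, every_n: int = 10) -> float:
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_detector.detectMultiScale(gray, 1.3, 5)
            for (x, y, w, h) in faces[:1]:            # one face per sampled frame
                crop = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2RGB)
                with torch.no_grad():
                    logit = model(preprocess(crop).unsqueeze(0))
                scores.append(torch.sigmoid(logit).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

print(fake_probability("suspect_clip.mp4"))  # hypothetical path; ~0.5 until fine-tuned
```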
Cat and mouse
Catching deepfakes with AI is something of a cat-and-mouse game. A detector algorithm can be trained to spot deepfakes, but then an algorithm that generates fakes can potentially be trained to evade detection. Schroepfer says this caused some concern around releasing the code from the project, but Facebook concluded that it was worth the risk in order to attract more people to the effort.
Facebook already uses technology to automatically detect some deepfakes, according to Schroepfer, but the company declined to say how many deepfake videos have been flagged this way. Part of the problem with automating the detection of deepfakes, Schroepfer says, is that some are simply entertaining while others might do harm. In other words, as with other forms of misinformation, the context is important. And that is hard for a machine to grasp.
Creating a really useful deepfake detector might be even harder than the contest suggests, according to Farid of UC Berkeley, because new techniques are rapidly emerging and a malicious deepfake maker might work hard to outwit a particular detector.
Farid questions the value of such a project when Facebook seems reluctant to police the content that users upload. “When Mark Zuckerberg says we are not the arbiters of truth, why are we doing this?” he asks.
Even if Facebook’s policy were to change, Farid says the social media company has more pressing misinformation challenges. “While deepfakes are an emerging threat, I would encourage us not to get too distracted by them,” says Farid. “We don’t need them yet. The simple stuff works.”
This story originally appeared on wired.com.
Source
The post Deepfakes aren’t very good—nor are the tools to detect them appeared first on abangtech.
from abangtech https://abangtech.com/deepfakes-arent-very-good-nor-are-the-tools-to-detect-them/
0 notes
wsmith215 · 4 years
Text
If We’re Not Careful, Tech Could Hurt the Fight against COVID-19
When COVID-19 emerged, many of us felt the instinct to use our technical skills to contribute something—and fast. But as researchers and technologists at Stanford, we also felt deep concern, having witnessed technologists’ blind spots and biases give birth to many dangerous technologies, including digital gaydar, deepfakes, discriminatory AI, AI surveillance and more. Even well-meant technologies can shift power away from those they purport to help. We have come to recognize that while the desire to help during COVID-19 is right, the rush to push just any COVID-19 technology is wrong and even has the potential to kill.
Trying to innovate their way back to normal, many technologists without previous medical or ethical expertise have proposed or deployed projects to calculate risk scores, trace contacts, model disease patterns and enforce quarantines. This inundation of ill-advised projects has led people to fall victim to misinformation and scams, eroded trust in science and provided cover for governments to expand their powers. Most new COVID-19 technologies risk adding to the chaos and eroding fundamental freedoms. With the stakes so high, here are four questions we call on ourselves and fellow technologists to answer before pushing technological responses to COVID-19:
1. Are you listening to experts and vulnerable communities?
Develop these relationships long before deployment to ensure you understand the social context, what will be helpful, and what will be harmful. If you are not already working with experts, start by finding credible sources of expert information. Meanwhile, while conversations and apps like Nextdoor and Twitter can surface the needs of some, these spaces often obscure the voices of the most vulnerable—including communities without access to technology; people who are unhoused, in nursing homes or in prisons; and those who cannot speak freely. Find people and organizations that center vulnerable communities. Listen carefully. What do they think is most pressing? Do they want you to build your technology for them, with them, or not at all?
2. Can you join existing efforts?
The process of open listening will lead to many calls to action. Investigate the solutions being called for. There are also groups such as U.S. Digital Response and Digital Aid, that match technologist volunteers with projects that would benefit from their skills. Search for a team that can speak to all four of these questions confidently and a project that has appropriate experts, community involvement, infrastructure and ethical frameworks in place.
3. Can your technology do what you say it’s going to do?
Will your solution improve outcomes in the real world, or might it only work in simulated environments? Do you claim your technology is scientific? If so, it must uphold scientific standards and avoid pseudoscientific approximations—including pseudoscientific machine-learned approximations. Can you complete and test your work at the desired scale in the desired timeframe? Down the line, will you have the resources to maintain your project, or might you abandon it? Document your sources of data, data preprocessing, modeling and analysis of results, making transparent the assumptions and limitations of your project. As an example, see this paper on the privacy implications of contact tracing and the authors’ explicit statement of how their ideas should and should not be used. In many cases, your technology’s limitations mean it should not influence policy decisions; state this up front and repeat it as necessary. 
4. How does your technology shift power?
Finally, consider whom your project shifts power away from and whom it shifts power to. Ownership of data is a form of power: Do you provide meaningful opt-in to data collection? Whom are you giving access to this data? Do you inform users exactly what the service will and will not do and enforce those commitments with privacy-by-design principles and data governance? Many people are scared and willing to trust technology more than usual; we must hold ourselves accountable to this trust. Moreover, some solutions enable governments to expand mass surveillance or otherwise expand their power. This is especially insidious because, once created, government powers rarely go away. Reflect on who will have access to your technology and whether it will help vulnerable people or compound circumstances already stacked against them.
In our own work, we commit to reaching clarity on each of these four questions, and we encourage fellow technologists to do the same. If there is any question you cannot answer confidently, embrace your responsibility to bring your project to a close to avoid harming others. If we don’t contribute to the response to COVID-19 by coding, that’s okay. Here are ways we can help that don’t involve building new technologies:
Amplify true information and current needs. Your information literacy is a precious resource. Find, translate and amplify credible expert information and community needs to your family, friends, and online.
Respond to technological and nontechnological calls to action. Some efforts you’ll come across will need technical skills—like helping organizations move their work online safely. Many efforts do not need those skills—like donating money, donating blood, making and wearing masks, and volunteering to distribute food, PPE, or other mutual aid. Whatever the call to action, listen carefully and prioritize what is being asked for.
Be politically engaged. As companies and governments make sweeping policy changes, it’s critical that citizens shape these changes. Follow the lead of experts and vulnerable communities calling to support or oppose these policies.
Call out the risks of new technologies. Understanding technologies often makes you uniquely equipped to explain their risks. Investigate the technologies others are proposing, make sure you understand them, and if necessary sound the alarm bells.
This is an uncertain time, and many of us may feel drawn to the clarity of technological solutions. But as we strive to contribute to our communities, we must make sure we are practicing humility and thinking critically about our actions. With the stakes so high, this is no time to move fast and break things.
Read more about the coronavirus outbreak from Scientific American here. And read coverage from our international network of magazines here.
Source link
The post If We’re Not Careful, Tech Could Hurt the Fight against COVID-19 appeared first on The Bleak Report.
from WordPress https://bleakreport.com/if-were-not-careful-tech-could-hurt-the-fight-against-covid-19/
0 notes
scifigeneration · 5 years
Text
Artificial intelligence can now emulate human behaviors – soon it will be dangerously good
by Ana Santos Rutschman
Tumblr media
Is this face just an assembly of computer bits? PHOTOCREO Michal Bednarek/Shutterstock.com
When artificial intelligence systems start getting creative, they can create great things – and scary ones. Take, for instance, an AI program that let web users compose music along with a virtual Johann Sebastian Bach by entering notes into a program that generates Bach-like harmonies to match them.
Run by Google, the app drew great praise for being groundbreaking and fun to play with. It also attracted criticism, and raised concerns about AI’s dangers.
My study of how emerging technologies affect people’s lives has taught me that the problems go beyond the admittedly large concern about whether algorithms can really create music or art in general. Some complaints seemed small, but really weren’t, like observations that Google’s AI was breaking basic rules of music composition.
In fact, efforts to have computers mimic the behavior of actual people can be confusing and potentially harmful.
Impersonation technologies
Google’s program analyzed the notes in 306 of Bach’s musical works, finding relationships between the melody and the notes that provided the harmony. Because Bach followed strict rules of composition, the program was effectively learning those rules, so it could apply them when users provided their own notes.
youtube
The Google Doodle team explains the Bach program.
The Bach app itself is new, but the underlying technology is not. Algorithms trained to recognize patterns and make probabilistic decisions have existed for a long time. Some of these algorithms are so complex that people don’t always understand how they make decisions or produce a particular outcome.
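The Doodle itself reportedly uses a neural network (a model called Coconet), but the underlying idea of learning probabilistic patterns from a corpus can be illustrated with something far simpler: a toy Markov-style model that counts which harmony note tends to accompany which melody note in a made-up training set, then samples from those counts.

```python
# Toy illustration of "learning" harmony as pattern statistics: tally which bass
# note accompanies each melody note in a tiny made-up corpus, then sample from it.
import random
from collections import Counter, defaultdict

# (melody note, bass note) pairs standing in for a real corpus of Bach chorales.
corpus = [("C", "E"), ("C", "G"), ("D", "F"), ("D", "B"), ("E", "G"),
          ("E", "C"), ("F", "A"), ("G", "B"), ("G", "E"), ("C", "E")]

counts = defaultdict(Counter)
for melody, bass in corpus:
    counts[melody][bass] += 1          # "training" is just counting co-occurrences

def harmonize(melody_line):
    out = []
    for note in melody_line:
        options = counts.get(note)
        if not options:
            out.append("?")            # never saw this melody note in training
            continue
        notes, weights = zip(*options.items())
        out.append(random.choices(notes, weights=weights)[0])
    return out

print(harmonize(["C", "D", "E", "C"]))  # e.g. ['E', 'F', 'G', 'E']
```

Real systems condition on far more context (surrounding notes, all four voices at once), which is what the machine-learning models add on top of this counting idea.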
AI systems are not perfect – many of them rely on data that aren’t representative of the whole population, or that are influenced by human biases. It’s not entirely clear who might be legally responsible when an AI system makes an error or causes a problem.
Now, though, artificial intelligence technologies are getting advanced enough to be able to approximate individuals’ writing or speaking style, and even facial expressions. This isn’t always bad: A fairly simple AI gave Stephen Hawking the ability to communicate more efficiently with others by predicting the words he would use the most.
More complex programs that mimic human voices assist people with disabilities – but can also be used to deceive listeners. For example, the makers of Lyrebird, a voice-mimicking program, have released a simulated conversation between Barack Obama, Donald Trump and Hillary Clinton. It may sound real, but that exchange never happened.
From good to bad
In February 2019, nonprofit company OpenAI created a program that generates text that is virtually indistinguishable from text written by people. It can “write” a speech in the style of John F. Kennedy, J.R.R. Tolkien in “The Lord of the Rings” or a student writing a school assignment about the U.S. Civil War.
The text generated by OpenAI’s software is so believable that the company has chosen not to release the program itself.
Similar technologies can simulate photos and videos. In early 2018, for instance, actor and filmmaker Jordan Peele created a video that appeared to show former U.S. President Barack Obama saying things Obama never actually said to warn the public about the dangers posed by these technologies.
youtube
Be careful what videos you believe.
In early 2019, a fake nude photo of U.S. Rep. Alexandria Ocasio-Cortez circulated online. Fabricated videos, often called “deepfakes,” are expected to be increasingly used in election campaigns.
Members of Congress have started to look into this issue ahead of the 2020 election. The U.S. Defense Department is teaching the public how to spot doctored videos and audio. News organizations like Reuters are beginning to train journalists to spot deepfakes.
But, in my view, an even bigger concern remains: Users might not be able to learn fast enough to distinguish fake content as AI technology becomes more sophisticated. For instance, as the public is beginning to become aware of deepfakes, AI is already being used for even more advanced deceptions. There are now programs that can generate fake faces and fake digital fingerprints, effectively creating the information needed to fabricate an entire person – at least in corporate or government records.
Machines keep learning
At the moment, there are enough potential errors in these technologies to give people a chance of detecting digital fabrications. Google’s Bach composer made some mistakes an expert could detect. For example, when I tried it, the program allowed me to enter parallel fifths, a music interval that Bach studiously avoided. The app also broke musical rules of counterpoint by harmonizing melodies in the wrong key. Similarly, OpenAI’s text-generating program occasionally wrote phrases like “fires happening under water” that made no sense in their contexts.
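Parallel fifths are easy to define precisely, which is why they make a handy test of whether a system has really absorbed the rules: two voices a perfect fifth apart (7 semitones, modulo the octave) that move in the same direction to another perfect fifth. A small checker under that standard textbook definition, using MIDI note numbers:

```python
# Flag parallel perfect fifths between two voices given as MIDI note numbers.
# A hit: consecutive simultaneities that are both perfect fifths (7 semitones
# mod 12, so compound fifths count too) with both voices moving the same way.
def parallel_fifths(upper, lower):
    hits = []
    for i in range(len(upper) - 1):
        now = (upper[i] - lower[i]) % 12
        nxt = (upper[i + 1] - lower[i + 1]) % 12
        step_u = upper[i + 1] - upper[i]
        step_l = lower[i + 1] - lower[i]
        if now == 7 and nxt == 7 and step_u * step_l > 0:  # same direction
            hits.append(i)
    return hits

# C5 over F4 moving to D5 over G4: a fifth moving to another fifth -> flagged.
upper = [72, 74, 76]   # C5, D5, E5
lower = [65, 67, 72]   # F4, G4, C5
print(parallel_fifths(upper, lower))  # [0]
```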
As developers work on their creations, these mistakes will become rarer. Effectively, AI technologies will evolve and learn. The improved performance has the potential to bring many social benefits – including better health care, as AI programs help democratize the practice of medicine.
Giving researchers and companies freedom to explore, in order to seek these positive achievements from AI systems, means opening up the risk of developing more advanced ways to create deception and other social problems. Severely limiting AI research could curb that progress. But giving beneficial technologies room to grow comes at no small cost – and the potential for misuse, whether to make inaccurate “Bach-like” music or to deceive millions, is likely to grow in ways people can’t yet anticipate.
Tumblr media
About The Author:
Ana Santos Rutschman is an Assistant Professor of Law at Saint Louis University
This article is republished from The Conversation under a Creative Commons license.
37 notes · View notes
toldnews-blog · 6 years
Photo
Tumblr media
New Post has been published on https://toldnews.com/business/tech-trends-2019-the-end-of-truth-as-we-know-it/
Tech trends 2019: 'The end of truth as we know it?'
Tumblr media
Image copyright Getty Images
More than 200 firms contributed to our request for ideas on what the global tech trends will be in 2019. Here’s a synthesis of the main themes occupying the minds of the technorati this year. You may be surprised.
This year it’s all about data – a small, rather dull word for something that is profoundly changing the world we live in.
New technologies, from voice-controlled speakers to “internet of things” (IoT) sensors, connected cars to fitness wearables, are vastly increasing the amount of digital data we produce.
And artificial intelligence (AI), machine learning and cloud computing are transforming the way we store, analyse and apply it.
“In 2019 smart sensors will start to be found everywhere, automating data collection to satisfy the voracious appetite of AI,” says Tim Harper, a former European Space Centre engineer and now founder of G2O Water Technologies.
AI could be a powerful force for good, improving healthcare and combating climate change, for example. But it also presents many dangers – to democracy, to financial markets, to the belief in objective truth.
Image copyright Getty Images
Image caption Our world is digital now, but has that made it better or worse?
Data in the wrong hands, used in the wrong way, could even threaten world peace, some commentators warn.
Fake news?
“Deepfakes” – manipulated digital videos that overlay another person’s face onto a body or change what people actually said – pose a growing threat, argues Katja Bego, data scientist at innovation foundation, Nesta.
‘Fake porn’ has serious consequences
“2019 will be the year that a malicious ‘deepfake’ video sparks a geopolitical incident,” she predicts.
Tumblr media
Media caption The face-mapping technology raising fears about fake news
“Though deepfakes are still a relatively new technology, they are evolving incredibly fast, making it harder and harder for the naked eye – or even digital forensics tools – to identify them. At the same time, they are also becoming ever easier and cheaper to create.”
She envisages a nightmare scenario in which a world leader could appear to declare war or spread damaging propaganda, with potentially devastating results.
As fake news stories – often state-sponsored – continue to flood social media and China’s Xinhua News Agency launches its first AI-created newsreaders, the lines between the fake and the real are becoming increasingly blurred.
If we cannot trust what we see or hear any more, is this “the end of truth as we know it?” asks Ms Bego.
Tumblr media
Media caption China’s ‘first AI news anchor’
Andrew Tsonchev, director of technology at cyber security company Darktrace Industrial, believes the internet’s openness and lack of accountability – qualities its founders cherished – play into the hands of those with malicious intent.
“Ultimately, manipulating the public discourse might prove to be a greater cyber-risk than the hacking of our devices,” he says.
“Controlling data may soon become more important than stealing it.”
Under attack
Cybersecurity companies are notorious for scaring us witless in the drive to sell more of their products. But that doesn’t mean their warnings are worthless.
And AI in the hands of criminal or state-sponsored hackers is certainly worth worrying about.
“2019 will see the first AI-orchestrated attack take down a FTSE 100 company,” predicts Jason Hart, chief technology officer, data protection, at security firm Gemalto.
Image copyright Getty Images
Image caption Could an AI-powered cyber-attack knock out a city’s entire electricity supply?
“A new breed of AI-powered malware will infect an organisation’s systems, sit undetected gathering information, adapt to its surroundings, and unleash a series of bespoke attacks targeted to take down a company from the inside out.”
AI will be needed to fight AI, many believe, particularly as the IoT vastly increases the number of potential weak points in this burgeoning network of connected devices.
Greg Day from Palo Alto Networks says: “AI on AI cyber-battles will begin. Cybersecurity will be a machine versus machine fight with humans alongside to help and adjudicate.
“While cybersecurity will look for new ways to spot adversaries and threats with AI, adversaries will use AI themselves and increasingly look to subvert machine learning and AI.”
This will ramp up the stakes even further.
“We’ll see the first example of a scaled-up mass IoT attack affecting critical infrastructure,” predicts Darren Thomson, chief technology officer at cybersecurity firm Symantec.
Healthier lives
But it’s not all bad news. AI unleashed on all our health data could herald a new era of personalised medicine, many believe.
“We predict that by mid-2020, two in three patients with any condition will be supported by AI and AI-related technologies, either as part of diagnostics, treatment, or administration,” says John Gikopoulos, global head of AI and automation at Infosys Consulting.
AI-powered computers are getting better at analysing images and diagnosing cancers, and helping to identify molecules that could be turned into life-saving drugs.
Image caption: Wearable devices are giving us increasingly sophisticated data about our health and wellbeing
Virtual doctors and chatbots are giving us health advice via apps.
“In 2019, for the first time ever, there will be more health data available outside health systems than inside them,” says Othman Laraki, chief executive of Color, a San Francisco-based genetic testing company.
“Your Apple Watch can deduce your heart health, your mood, your sleep patterns. Your genome can tell you your risk for inherited cancer and heart disease and traits that impact everything from your caffeine sensitivity to your ability to metabolise medicine.”
Data-driven healthcare, with an emphasis on prevention rather than cure, will have a “tremendous societal impact”, he believes.
Taking back control?
In the wake of last year’s Facebook-Cambridge Analytica scandal, which resulted in a maximum £500,000 fine for Facebook imposed by the UK’s Information Commissioner’s Office, how big companies use and abuse our data has been under much greater scrutiny.
As firms scramble to get their data privacy policies up to scratch now that the European Union’s General Data Protection Regulation is in force, “2019 will be the year of GDPR fines”, says Harrison Van Riper, a senior analyst at cybersecurity firm Digital Shadows.
Some commentators are predicting a consumer fightback.
Image caption: Facebook’s Mark Zuckerberg came under fire for the cavalier way his firm treated customer data
“In 2019, I expect that consumers will start to reclaim control of their data and monetise it,” says Phil Beckett, managing director of disputes and investigations at management consultancy Alvarez and Marsal.
Systems are being developed to allow us to control our health, financial, social, and entertainment data effectively, argues Paul Winstanley, chief executive of Censis, a centre of excellence for sensing and imaging systems.
“It is then an individual’s choice whether they want to share that data with a third party, or not.”
As consumer trust has been “severely dented”, argues Mark Curtis, co-founder at design consultancy Fjord, firms will increasingly adopt a “data minimalism” approach, only asking for data they really need.
“They will have to clearly show the payback for users sharing their data, drawing a straight line from the act of sharing to receiving relevant products and services in return,” he says.
All this consumer data, analysed by AI, will at least enable firms to personalise their services, argues Nigel Vaz, international chief executive of digital transformation agency Publicis.Sapient.
But rebuilding trust will be key, and this means consumers understanding how and why their data is being used, believes Ojas Rege, chief strategy officer at MobileIron, a mobile security company.
“Without transparency, there is no trust. Without trust, there is no data. Without data, there is no AI,” he concludes.
The last word…
While data – how it is produced, stored, analysed and applied – is the key theme for 2019, developing technologies, such as voice control, superfast 5G mobile and connected cars, will gather pace through the year.
But these will only emphasise further how vulnerable our data is and how much harder we need to work to protect, own and value it.
Now fake Facebook accounts are using fake faces
Artificially generated faces of people who don’t exist are being used to front fake Facebook accounts in an attempt to trick users and game the company’s systems, the social media network said Friday.
Experts who reviewed the accounts say it is the first time they have seen fake images like this being used at scale as part of a single social media campaign.
The accounts, which were removed by Facebook on Friday, were part of a network that generally posted in support of President Trump and against the Chinese government, experts who reviewed the accounts said. Many of the accounts promoted links to a Facebook page and website called “The BL.” Facebook said the accounts were tied to the US-based Epoch Media Group, which owns The Epoch Times newspaper, a paper tied to the Falun Gong movement that is similarly pro-Trump.
The publisher of the Epoch Times denied that Epoch and The BL were linked in emails to the fact-checking organization Snopes earlier this year.
In a statement released after this story initially published on Friday, Epoch Times publisher Stephen Gregory said, “The Epoch Times and The BL media companies are unaffiliated. The BL was founded by a former employee, and employs some of our former employees. However, that some of our former employees work for BL is not evidence of any connection between the two organizations.
“The BL is a publication of Epoch Times Vietnam. As can be seen in archived pages of The Epoch Times website, Epoch Times Vietnam was no longer listed as part of Epoch Media Group in October 2018.”
In response, a Facebook spokesperson told CNN Business that executives at The BL were active administrators on Epoch Media Group Pages as recently as Friday morning.
The dystopian revelation of the use of artificially-generated images in this way points to an increasingly complicated online information landscape as America enters a presidential election year. Silicon Valley and the US intelligence community are still struggling with the fallout from widespread online interference in the 2016 presidential election.
The Facebook accounts used profile pictures that appeared to show real people smiling and looking directly into a camera. But the people do not and have never existed, according to Facebook and other researchers. The images were created using artificial intelligence technology. The same basic methods are used to produce deepfake videos — fake videos that the US intelligence community has warned could be used as part of a foreign disinformation campaign targeting Americans.
Other fake accounts that were part of the same network used stolen pictures of real people, according to the social media investigations company Graphika and the think tank the Atlantic Council. Facebook provided information to Graphika and the Atlantic Council for analysis in advance of Friday’s announcement.
The accounts were used to run dozens of pro-Trump Facebook groups with names like “America Needs President Trump,” and “WE STAND WITH TRUMP & PENCE!,” according to Graphika and the Atlantic Council.
The fact-checking organizations Snopes and Lead Stories had reported in recent weeks and months about the use of artificial images on Facebook that were part of this network of accounts. Snopes published a story last week criticizing Facebook’s apparent inaction on the issue. Facebook said Friday it had “benefited from open source reporting” in the takedown but said that its own systems that monitor for coordinated and inauthentic behavior had proactively identified many of the accounts.
In a joint report on their findings, Graphika and the Atlantic Council outlined how they were able to determine which of the profile photos had been generated using artificial intelligence. “This technology is rapidly evolving toward generating more believable pictures, but a few indicators still give these profile pictures away,” they said.
Images generated using artificial intelligence, specifically by a machine-learning method known as a GAN, or generative adversarial network, are “notorious for struggling with features that should be symmetrical on the human face, such as glasses or earrings, and with background details. Profile pictures from the network showed telltales of all three.”
GANs consist of two neural networks — which are algorithms modeled on the neurons in a brain — facing off against each other to produce real-looking images. One of the neural networks generates images (of, say, a woman’s face), while the other tries to determine whether that image is a fake or a real face.
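To make that adversarial setup concrete, below is a minimal sketch of the generator-versus-discriminator training loop in Python with PyTorch. It is illustrative only: the operators behind these accounts almost certainly used far larger, purpose-built face-generation models (the report does not name one), and the tiny networks, 64x64 image size and training settings here are assumptions chosen purely to show the mechanics.

import torch
import torch.nn as nn

LATENT_DIM = 100          # size of the random noise vector fed to the generator
IMG_PIXELS = 64 * 64 * 3  # a small 64x64 RGB face image, flattened

# Generator: turns random noise into a fake image (values in [-1, 1]).
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_PIXELS), nn.Tanh(),
)

# Discriminator: scores an image as real (close to 1) or fake (close to 0).
discriminator = nn.Sequential(
    nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    # real_images: a batch of genuine face photos, flattened and scaled to [-1, 1].
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Teach the discriminator to tell real photos from generated ones.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Teach the generator to produce images the discriminator labels as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

Each call to train_step plays one round of the contest: the discriminator gets slightly better at spotting fakes, and the generator gets slightly better at fooling it. Repeated over a large library of real face photos, that back-and-forth is what eventually yields profile pictures convincing enough to front fake accounts.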
While experts were able to spot these telltale signs on close inspection, it is likely the regular Facebook user would not.
Over the past year, a number of websites have emerged online that create fake faces using artificial intelligence.
Researchers from Graphika and the Atlantic Council could not conclusively determine if the people behind the fake accounts had used artificial pictures from these public sites or had generated their own.
In their report released Friday, Graphika and the Atlantic Council said, “The ease with which the operation managed to generate so many synthetic pictures, in order to give its fake accounts (mostly) convincing faces, is a concern. Further research is needed to find ways to identify AI-generated profile pictures reliably and at scale, so that platforms and researchers can automate their detection.”
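The call for reliable, at-scale detection suggests the kind of automated screening sketched below: fine-tuning an off-the-shelf image classifier to separate GAN-generated profile photos from genuine ones. This is a hedged illustration, not a description of how Facebook, Graphika or the Atlantic Council actually work; the profile_photos folder layout, the two-class labelling and the single training pass are assumptions, and a production system would need far larger datasets and constant retraining as generators improve.

import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Assumed layout: profile_photos/real/*.jpg and profile_photos/generated/*.jpg
dataset = datasets.ImageFolder("profile_photos", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained ResNet and replace its final layer with a two-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # classes: generated vs real

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

Any such detector would also face the arms-race dynamic the report describes: once a model keys on telltale artefacts such as mismatched earrings or smeared backgrounds, newer generators learn to avoid producing them.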
Connection to Epoch Media Group
In all, Facebook said Friday, it had removed a network of 610 Facebook accounts, 89 pages, 90 groups, and 72 Instagram accounts. About 55 million accounts followed one or more of the pages, and the vast majority of followers were outside the United States, Facebook said. Facebook did not say if all of these followers were real — some of them may themselves have been fake accounts.
The network of pages removed on Friday had spent almost $10 million on Facebook ads, according to Facebook.
Facebook’s investigation primarily focused on “The BL” (The Beauty of Life) — a set of Facebook pages and a website that says its goal is to “present to the world the most beautiful aspects of life.” The pages often shared pro-Trump and anti-China content.
On its website, The BL outlined the dangers of “inaccurate and degenerate information” that it said “can be easily channeled toward vulnerable or uninformed people.”
The purpose of the fake accounts, including those using fake faces, appears to have been to promote links to The BL’s website and Facebook pages, Ben Nimmo, director of investigations at Graphika, told CNN Business on Friday.
Facebook said the fake accounts were tied to the US-based Epoch Media Group and “individuals in Vietnam working on its behalf.” The company did not outline precisely how it made the connection, but in recent years Facebook has hired a team of investigators to find fake accounts on the platform.
The Epoch Times newspaper is part of the Epoch Media Group. The newspaper has almost 6 million followers on Facebook. Nathaniel Gleicher, Facebook’s head of security policy, told CNN Business Thursday that Facebook was not suspending the newspaper’s account but investigations into Epoch’s behavior on Facebook were ongoing.
Snopes reported earlier this month that the publisher of the Epoch Times denied that The BL and Epoch were linked.
In August, Facebook banned ads from The Epoch Times after an NBC News investigation detailed how the newspaper was secretly running pro-Trump Facebook ads under alternate accounts. The Epoch Times’ publisher said in a statement to NBC News, “The Epoch Times advertisements are print-subscription advertisements describing our paper’s reporting — a popular practice of many publishers — and every one of these ads was approved by Facebook before publishing.”
A Facebook spokesperson said the company shared its findings with Twitter and Google, which owns YouTube.
A Twitter spokesperson confirmed in a statement Friday, “today we identified and suspended approximately 700 accounts originating from Vietnam for violating our rules around platform manipulation — specifically fake accounts and spam.”
“Investigations are still ongoing, but our initial findings have not identified links between these accounts and state-sponsored actors,” the spokesperson added.
Google did not immediately respond to CNN Business’ request for comment.
from FOX 4 Kansas City WDAF-TV | News, Weather, Sports https://fox4kc.com/2019/12/21/now-fake-facebook-accounts-are-using-fake-faces/
from Kansas City Happenings https://kansascityhappenings.wordpress.com/2019/12/22/now-fake-facebook-accounts-are-using-fake-faces/
thewebofslime · 5 years
Link
During this already insane era of dystopian paranoia we live in, let me add one more thing for everyone to worry about: the Gorgon Stare. “What’s a Gorgon Stare?” you might ask.

The Gorgon Stare is the eye in the sky: a military surveillance drone built by the Pentagon that can simultaneously track 1,000 moving targets. It’s the cute nickname for a type of wide-area motion imagery (WAMI) technology that lets a hugely more powerful camera be attached to a drone, which can then watch and record a massive area. To break it down further, WAMI widens the camera’s field of view so that an entire city can be watched at once. What it’s really good at is zooming in on particular parts of the imagery on the ground, with a decent amount of detail, while still recording everything else.

Top that with lots of resolution. An iPhone has roughly 12 million pixels in its camera. Compare that to the Gorgon Stare, with 1.8 billion pixels, or 1.8 gigapixels. Crunch the numbers and that is a resolution 150 times greater than an iPhone’s.

Gorgon Stare’s original intended purpose was to track insurgents across conflict zones, as a counterterrorism tool, and to help prevent IED (improvised explosive device) attacks. Oh well, it’s a great thing that the Gorgon Stare is only used for military operations, you say. Yes, but in this era of increasing NSA-style surveillance and diminishing civil liberties, will the Gorgon Stare soon be used to track our movements in American cities? (Insert literally any George Orwell Big Brother quip here.)

The name alone, Gorgon Stare, does not inspire confidence in a better, more private tomorrow. In Greek mythology, a Gorgon was a monster with hair made of live snakes. Gorgons, such as Medusa, were so horrifying that those who stared at them would turn to stone.

Though that’s mythology, surveillance drones raise a host of significant privacy and civil liberties issues. One major problem? Privacy laws have not evolved at the same rapid pace as drone technology, leaving law enforcement with the belief that it can use drones to spy on citizens without the benefit of warrants or legal processes. In 2016, a company called Persistent Surveillance Systems ran an extensive aerial surveillance program in Baltimore that was used not only for law enforcement investigations but also to watch the people of Baltimore without their knowledge. Ross McNutt, who ran the company, defended the program by saying it was not illegal. Will the future Edward Snowdens of the world come forward and tell us how the NSA has been spying on us with drones for all these years?

In his book Eyes in the Sky: The Secret Rise of Gorgon Stare and How It Will Watch Us All, Arthur Holland Michel points out that the Gorgon Stare’s origins lie partially in the 1998 Will Smith thriller Enemy of the State. In the film, an innocent man played by Will Smith is pursued by a rogue spy agency that uses the advanced satellite “Big Daddy” to monitor his every move. Holland Michel, who is also the founder and co-director of Bard College’s Center for the Study of the Drone, said the dystopian movie inspired a researcher at the top-secret Lawrence Livermore National Laboratory, who used it as a blueprint for the most powerful surveillance technology ever created.
(It would be a much different surveillance world if the researcher had instead been inspired by Bad Boys II.) Big Daddy is fiction; Gorgon Stare is reality. And everyone involved in the development of Gorgon Stare whom Holland Michel interviewed for his book was very realistic about the fact that they had created a tool that would be very dangerous if misused. Gulp.

Back in the old-timey days of 2011, the U.S. Air Force collected over 325,000 hours of drone footage in that year alone. To break it down, that’s 37 years’ worth of video gathered by one military service using 2011 technology.

So why would this be bad if it were used in the public sector? We already have non-militarized drones becoming part of our civilian daily lives in a number of ways. These are the flying robots we warmly welcomed because they were going to deliver us burritos. This past New Year’s Eve, the New York Police Department used a drone to get a bird’s-eye view of the massive crowd in Times Square and spot criminals. Meanwhile, Amazon has just unveiled the latest version of its Prime Air delivery drone, which the company says it will try to launch in “the coming months.”

Amazon also has another offering from its drones: “surveillance as a service.” Amazon was granted a patent that outlines how its drones could keep an eye on customers’ property between deliveries and, supposedly, maintain the homeowner’s privacy. Wouldn’t want someone to hack into that for evil blackmail purposes… It’s a strange first that a home delivery service was granted a patent for surveillance, mixed with the whole array of privacy issues associated with accidentally capturing footage of a neighbor’s home.

The Air Force owns Gorgon Stare. Holland Michel states that it is flying right now as we speak, but we don’t know in what capacity; all operational information is classified. If Homeland Security has the ability to prevent a terrorist attack, then it’s incumbent on it to use the technology at hand to do so. Just don’t get on the wrong side of the Patriot Act.

A crazy stat from the U.K. is that the average Londoner is caught on security cameras over 300 times a day, while in the U.S., the average American citizen might be caught on camera more than 75 times. Gorgon Stare presents a tangible fear that we are always being observed from the sky, and this will directly affect our behavior, desires and decisions when we enter public spaces, creating a society in which people could become scared to organize around, say, a political cause. It’s a phenomenon explored in French philosopher Michel Foucault’s 1975 book Discipline and Punish, in which a prison inmate believes he is always being watched and thus polices himself for fear of punishment.

Plus, drone surveillance is a slippery civil liberties legal slope that has been addressed by the ACLU: “Routine aerial surveillance in American life would profoundly change the character of public life in the United States. Rules must be put in place to ensure that we can enjoy the benefits of this new technology without bringing us closer to a ‘surveillance society’ in which our every move is monitored, tracked, recorded and scrutinized by the authorities.” The ACLU also noted that drones “deployed without proper regulation, drones equipped with facial recognition software, infrared technology, and speakers capable of monitoring personal conversations would cause unprecedented invasions of our privacy rights.”

So, once again, why is this a bad thing?
Imagine tiny drones that could go completely unnoticed while peering into your window. Just never do anything wrong, OK?!?