#stopncii
Explore tagged Tumblr posts
alaturkaamerika · 1 month ago
Text
AI-Generated Explicit Fake Images Are Now a Federal Crime in the US: The "Take It Down Act" Becomes Law
🎯 Non-consensual explicit images created with artificial intelligence are now a federal crime! ⏱️ Platforms are required to remove such content within 48 hours 🧑‍⚖️ The new law will protect both child and adult victims 🤝 Drafted jointly by Republican and Democratic senators. The "Take It Down Act" Has Been Signed: President Donald Trump, addressing AI-generated sexually explicit fake images circulating online…
0 notes
noon-penguen-world · 3 months ago
Text
I've talked before about AI-powered "clothes removal" apps 🙃
And unfortunately, these days a simple AI-powered bot lets anyone send an ordinary photo of any person, and the AI produces a rough approximation of what that person would look like without clothes. 🔞
The idea is that the AI accounts for body measurements, edges, and height so the image looks 100% real, and unfortunately most of these apps are free or charge only a small fee. 💵
It's important for everyone to stay alert, and for parents to know that not everything they see is necessarily real, and that anything can easily be fake these days.
But at the same time, if, God forbid, this happens to you and someone takes one of your photos, edits it with Photoshop or AI into a nude image, and posts it online, there is a very important site called stopncii.org ⚡️
What does this site do? You send in the original photo and the edited one, and it takes down all the edited copies that were posted online. You don't need to talk to the person themselves or to anyone else to get those photos removed; just use the site and they'll take care of it. 🫡
May God protect everyone ♥️
29 notes · View notes
messamelhussini · 11 days ago
Text
🛑 Is someone threatening you with a personal photo or video? Don't panic!
There's a site called stopncii.org that protects you without you ever having to upload the photo! 😮
✅ It creates a digital fingerprint (hash) of the image
✅ Sends it to all the platforms so the content can be taken down
✅ And keeps watching in case anyone tries to post it again
All of that without the image ever leaving your phone!
Protect yourself now, in under a minute 💪
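That "fingerprint" workflow is essentially client-side hashing: only a digest of the image is shared, never the picture itself. Here is a minimal sketch of the idea in Python, assuming a hypothetical submission endpoint and plain SHA-256 for simplicity; StopNCII's real client uses its own perceptual-hashing tooling, so treat this as an illustration rather than its actual API:

```python
import hashlib
from pathlib import Path

# Hypothetical endpoint, for illustration only -- not StopNCII's real API.
SUBMIT_URL = "https://example.org/api/submit-hash"

def fingerprint_image(path: str) -> str:
    """Hash the image locally; the photo itself never leaves the device."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_submission(path: str) -> dict:
    """Only the opaque digest is passed on; platforms compare new uploads against it."""
    return {"hash": fingerprint_image(path), "algorithm": "sha256"}

if __name__ == "__main__":
    # A real client would POST this payload over HTTPS, e.g.:
    #   requests.post(SUBMIT_URL, json=build_submission("my_photo.jpg"))
    print(build_submission("my_photo.jpg"))
```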
8 notes · View notes
superkirbylover · 1 year ago
Note
Hey turns out that weirdo sending asks and shit with your name attached is still at it, as I just received an ask from them with a nsfw image, it might possibly be the ai generated image you mentioned in your pinned post even. I want to report them and the ask, is there any particular thing I should report them for? I feel like it could go under multiple categories (impersonation, sexually explicit content, possibly even harm to minors if it actually is that ai generated image). Just want to make sure I go about it the right way. Also, I'm not a US citizen so I don't know how I would report this thing to the fbi or if I even can.
Sorry to bring up such an unpleasant topic and remind you of this person, I just wanted to make sure I report them for the right thing. I know it's not your fault I received this ask btw, I'm not mad at you or anything, I'd just like your advice. I'm sorry that someone would do this to you and still is doing this, people can be fucking disgusting and I hope they get brought to justice and cut this shit out soon. I hope you're okay, I'm so sorry to bring this thing up. If you don't want to answer this via an ask my dms are open if you'd prefer to answer that way. Remember you're a great person and I'm here for you if you need me 💖🫂
you're fine, thank you for asking! you're not the only ask ive gotten cause of it, but unfortunately as i write this im at work. nobody's fault ofc
report the image to the FBI or at stopncii. i plan to do that when im home since i got it again, and it's illegal. report the account for the image of a minor and impersonation, and block.
13 notes · View notes
mariacallous · 1 year ago
Text
In early 2022, two Google policy staffers met with a trio of women victimized by a scam that resulted in explicit videos of them circulating online—including via Google search results. The women were among the hundreds of young adults who responded to ads seeking swimsuit models only to be coerced into performing in sex videos distributed by the website GirlsDoPorn. The site shut down in 2020, and a producer, a bookkeeper, and a cameraman subsequently pleaded guilty to sex trafficking, but the videos kept popping up on Google search faster than the women could request removals.
The women, joined by an attorney and a security expert, presented a bounty of ideas for how Google could keep the criminal and demeaning clips better hidden, according to five people who attended or were briefed on the virtual meeting. They wanted Google search to ban websites devoted to GirlsDoPorn and videos with its watermark. They suggested Google could borrow the 25-terabyte hard drive on which the women’s cybersecurity consultant, Charles DeBarber, had saved every GirlsDoPorn episode, take a mathematical fingerprint, or “hash,” of each clip, and block them from ever reappearing in search results.
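The hard-drive proposal amounts to building a blocklist of content hashes and checking newly crawled files against it. Below is a minimal sketch of that general idea, with hypothetical paths and exact SHA-256 matching standing in for the perceptual hashing a production system would need to catch re-encoded or edited copies:

```python
import hashlib
from pathlib import Path

def hash_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute a SHA-256 fingerprint of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def build_blocklist(archive_dir: str) -> set[str]:
    """Hash every clip in an archive (e.g. a drive of known abusive videos)."""
    return {hash_file(p) for p in Path(archive_dir).rglob("*.mp4")}

def should_exclude(candidate_path: str, blocklist: set[str]) -> bool:
    """True if a newly crawled file matches a previously reported clip."""
    return hash_file(Path(candidate_path)) in blocklist

# Hypothetical usage:
#   blocklist = build_blocklist("/mnt/archive")
#   if should_exclude("/tmp/new_upload.mp4", blocklist):
#       ...  # drop the page from the search index
```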
The two Google staffers in the meeting hoped to use what they learned to win more resources from higher-ups. But the victims’ attorney, Brian Holm, left feeling dubious. The policy team was in “a tough spot” and “didn’t have authority to effect change within Google,” he says.
His gut reaction was right. Two years later, none of those ideas brought up in the meeting have been enacted, and the videos still come up in search.
WIRED has spoken with five former Google employees and 10 victims’ advocates who have been in communication with the company. They all say that they appreciate that because of recent changes Google has made, survivors of image-based sexual abuse such as the GirlsDoPorn scam can more easily and successfully remove unwanted search results. But they are frustrated that management at the search giant hasn’t approved proposals, such as the hard drive idea, which they believe will more fully restore and preserve the privacy of millions of victims around the world, most of them women.
The sources describe previously unreported internal deliberations, including Google’s rationale for not using an industry tool called StopNCII that shares information about nonconsensual intimate imagery (NCII) and the company’s failure to demand that porn websites verify consent to qualify for search traffic. Google’s own research team has published steps that tech companies can take against NCII, including using StopNCII.
The sources believe such efforts would better contain a problem that’s growing, in part through widening access to AI tools that create explicit deepfakes, including ones of GirlsDoPorn survivors. Overall reports to the UK’s Revenge Porn hotline more than doubled last year, to roughly 19,000, as did the number of cases involving synthetic content. Half of over 2,000 Brits in a recent survey worried about being victimized by deepfakes. The White House in May urged swifter action by lawmakers and industry to curb NCII overall. In June, Google joined seven other companies and nine organizations in announcing a working group to coordinate responses.
Right now, victims can demand prosecution of abusers or pursue legal claims against websites hosting content, but neither of those routes is guaranteed, and both can be costly due to legal fees. Getting Google to remove results can be the most practical tactic and serves the ultimate goal of keeping violative content out of the eyes of friends, hiring managers, potential landlords, or dates—who almost all likely turn to Google to look up people.
A Google spokesperson, who requested anonymity to avoid harassment from perpetrators, declined to comment on the call with GirlsDoPorn victims. She says combating what the company refers to as nonconsensual explicit imagery (NCEI) remains a priority and that Google’s actions go well beyond what is legally required. “Over the years, we’ve invested deeply in industry-leading policies and protections to help protect people affected by this harmful content,” she says. “Teams across Google continue to work diligently to bolster our safeguards and thoughtfully address emerging challenges to better protect people.”
In an interview with WIRED, a Google search product manager overseeing anti-harm work says blocking videos using hashes is challenging to adopt because some websites don’t publish videos in a way that search engines can compare against. Speaking on condition of anonymity, she says Google has encouraged explicit websites to address that. She adds that there’s generally more for Google to do but disputes the allegation that executives had held up the work.
Advocates of bolder action by Google point to the company’s much tighter restrictions on searching for child sexual abuse material (CSAM) as evidence it could do much more. Typing “deepfake nudes kids” into Google prompts a warning that such content is illegal and ultimately directs users to news articles and support groups. Google also finds and blocks from its results almost 1 million new CSAM-containing webpages annually.
A recent Google search for “deepfake nudes jennifer aniston” yielded seven results purporting to offer just that. The search engine offered no warning or resources in response to the query, despite nearly every US state and many countries having criminalized unpermitted distribution of intimate content of adults. Google declined to comment on the lack of a warning.
The product manager says comparisons to CSAM are invalid. Virtually any image of a naked child is illegal and can be automatically removed, she says. Separating NCEI from consensual porn requires some indication that the content was shot or distributed without permission, and that context often isn’t clear until a victim files a report and a human analyzes it. But the manager wouldn’t directly answer whether Google has tried to overcome the challenge.
Adam Dodge, founder of advocacy and education group Ending Tech-Enabled Abuse, says that until Google proactively removes more NCII, victims have to be hypervigilant about finding and reporting it themselves. That’s “not something we should put on victims,” he says. “We’re asking them to go to the location where they were assaulted online to move past the trauma.”
Google started accepting removal requests for search results leading to nudity or sex in 2015 if the content was intended to be private and was never authorized to be published, according to its policy. That went largely unchanged until 2020, when the company added that being in an “intimate state” qualified.
A New York Times column that year triggered Google executives to dedicate resources to the issue, organizing projects, including one codenamed Sparrow, to help victims keep content off search for good, three former employees say. The product manager confirmed that executives at times have pushed teams to improve Google’s handling of NCEI.
Google made its takedown form friendlier to use, understand, and access, the sources say. The search giant axed legalese and outdated use of the term “revenge porn,” since porn is generally viewed as consensual. The company added instructions on submitting screenshots and greater detail on the review process.
The form became accessible by clicking the menu that appears next to every search result. Requests rose about 19-fold in one early test, one source says. A second source says that it has become among Google’s most-used forms for reporting abuse and that, after the edits, a far greater percentage of requests resulted in removal of results. Google disputes these figures, but it declined to share comprehensive data on NCEI.
Government-mandated transparency reports show Google has removed most of the nearly 170,000 search and YouTube links reported for unwanted sexual content in South Korea since December 2020, the earliest data available, and nixed nearly 300 pieces of content in response to 380 complaints from users in India since May 2021. The limited data suggest Google is finding more reports credible than its smaller rival in search Microsoft, which took action in 52 percent of the nearly 8,400 cases it received globally for Bing and other services from 2015 through June 2023.
Launched in late 2021, the StopNCII system has amassed a database of over 572,000 hashed photos and videos and blocked that media from being shared more than 12,000 times across 10 services, including Instagram and TikTok. Google hasn’t adopted the tool to block content from search due to concerns about what’s actually in the database, according to three sources.
To protect victims’ privacy, StopNCII doesn’t review content they report, and hashes reveal nothing about the underlying content. Google is worried that it could end up blocking something innocent, the sources say. “We don’t know if it’s just an image of a cupcake,” one of them says. The sources add that Google also has opted against bankrolling a system it considers better, despite internal suggestions to do so.
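The cupcake worry follows from a basic property of hashing: a digest is a fixed-length, opaque value that reveals nothing about its input, so a platform matching on it cannot tell whether the reported item was abusive or innocuous. A tiny illustration, using stand-in inputs:

```python
import hashlib

# Two very different inputs yield equally opaque, unrelated digests.
cupcake_hash = hashlib.sha256(b"photo of a cupcake").hexdigest()
report_hash = hashlib.sha256(b"reported intimate image").hexdigest()

print(cupcake_hash)  # nothing in the digest indicates what the input was
print(report_hash)   # a platform matching on this value must trust the reporter
```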
The Google spokesperson declined to comment on StopNCII, but in April the company told UK lawmakers who questioned Google about its decision not to use the tool that it had “specific policy and practical concerns about the interoperability of the database,” without elaborating.
Internally, Google workers have come up with some bold ideas to improve takedowns. Employees have discussed booting explicit websites, including porn companies, from search results unless they are willing to assure that their content is consensual, according to four sources. The idea hasn’t been adopted. Google’s search unit has shied away from setting rules on a thorny and taboo subject like sexual imagery, three sources say. “They don’t want to be seen as regulators of the internet,” one former staffer says.
Because Google sends significant traffic to explicit websites, it could force them to take stricter measures. About 15 percent of image searches and up to half of video searches among the billions Google receives daily are related to porn, says one former staffer, figures the company declined to comment on. “Google holds the keys to the kingdom,” the source says. Meanwhile, few others are stepping in. US lawmakers haven’t passed proposed legislation to impose consent checks on online uploads. And some popular services for sharing explicit content, such as Reddit and X, don’t require users to submit proof of subjects’ consent.
Porn producers, who collect identity information from performers as required by US law, support the sharing of a consent signal with search engines, says Mike Stabile, spokesperson for the industry trade body Free Speech Coalition. “Major adult sites already monitor and block NCII much more aggressively than mainstream platforms,” he says.
The Google spokesperson declined to comment on the consent idea but points to an existing penalty: Google last December began demoting—but not blocking—search results for websites that come up in “a high volume” of successful takedown requests.
The Google product manager and the spokesperson contend that the search team already has taken big steps over the past three years to ease the burden on survivors of image-based sexual abuse. But WIRED’s investigation shows that some improvements have come with caveats.
A system Google introduced that tries to automatically remove search links when previously reported content resurfaces on new websites doesn’t work on videos or altered images, and two sources say Google hadn’t dedicated staff to improving it. “It absolutely could be better, and there isn’t enough attention on how it could really solve victims’ problems,” one says. The spokesperson says staff are assigned to enhance the tool.
Another system called known victim protection tries to filter out results with explicit images from search queries similar to those from past takedown requests, the two sources say. It is designed to not disrupt results to legitimate porn and generally reduces the need for victims to stay vigilant for new uploads. But Google has acknowledged to South Korean regulators that the system isn’t perfect. “Given the dynamic and ever-changing nature of the web, automated systems are not able, 100 percent of the time, to catch every explicit result,” the company writes in its transparency reports.
In one of its biggest shifts, Google last August abandoned its policy of declining to remove links to content that included signs that it had been captured with consent. For years, if Google determined from the imagery and any audio that the subject knew they were being recorded without any signs of coercion or distress, it would reject the takedown ask unless the requester provided ample evidence that it had been published without consent. It was a “super-mushy concept,” one of the former employees says.
That same source says staff persuaded executives to update the policy in part by describing the importance of letting people who had become adult performers on OnlyFans out of financial necessity later revoke their consent and shred any ties to sex work. The Google spokesperson didn’t dispute this.
The Washington, DC-based National Center on Sexual Exploitation, an anti-porn group that’s become an authority on image-based sexual abuse, argues that even after the revision, Google is falling short. It wants Google to automatically honor all takedown requests and put the burden on websites to prove there was consent to record and publish the disputed content. The Google spokesperson says that potential policy updates are constantly considered.
In the eyes of advocates, Google is being nowhere near as resourceful or attentive as it could or should be. Brad Gilde of Gilde Law Firm in Houston says he came away disappointed when his client won a headline-grabbing $1.2 billion judgment against an ex-boyfriend last August but then couldn’t get Google to remove a highly ranked search link to a sexually explicit audio recording of her on YouTube. The upload, which included the victim’s name and drew over 100 views, came down last month only after WIRED inquired.
Developing a reliable AI system to proactively identify nonconsensual media may prove impossible. But better keeping an ear out for big cases shouldn’t be too complicated, says Dan Purcell, a victim who founded removal company Ceartas DMCA. Google employees had a proposal on this issue: The company could establish a priority flagger program—as it has for other types of problematic content, including CSAM—and formally solicit tips from outside organizations such as Purcell’s that monitor for NCII. But staffing to administer the idea never came through. “Google is the No. 1 discoverability platform,” Purcell says. “They have to take more responsibility.” The Google spokesperson declined to comment.
DeBarber, the removal consultant who spoke with Google alongside his clients victimized by GirlsDoPorn, did a search for one of them this month while on the phone with WIRED. No links surfaced to videos of her, because DeBarber has spent over 100 hours getting those pages removed. But one porn service was misusing her name to lure in viewers to other content—a new result DeBarber would have to ask Google to remove. And through a different Google search, he could access a problematic website on which people can look up videos of his client.
Harassers regularly text that client links to her NCII, a frustrating reminder of how her past has yet to be erased. “They want to be out of sight and out of mind,” DeBarber says of his clients. “We’re heading in the right direction.” But he and survivors are counting on Google to help knock out the offenders for good. "A lot more could have been done by Google and still could be."
13 notes · View notes
cerebrodigital · 2 years ago
Photo
Was your image altered with AI or Photoshop into an intimate version? Go to StopNCII. Upload the real photo and the modified one; they will take care of removing the altered image from sites across the web. No direct conversations needed, and your identity stays anonymous.
Here you can see how to do it, step by step: https://cerebrodigital.net/proteccion-digital-contra-imagenes-no-consentidas-y-deepfakes-conoce-stopncii-org/
(via "Protección digital contra imágenes No Consentidas y Deepfakes: Conoce StopNCII.org")
36 notes · View notes
nawapon17 · 10 months ago
Text
Microsoft joins coalition to scrub revenge and deepfake porn from Bing
0 notes
isfeed · 10 months ago
Text
Microsoft gives deepfake porn victims a tool to scrub images from Bing search
The advancement of generative AI tools has created a new problem for the internet: the proliferation of synthetic nude images resembling real people. On Thursday, Microsoft took a major step to give revenge porn victims a tool to stop its Bing search engine from returning these images. Microsoft announced a partnership with StopNCII, an organization […]
0 notes
revev2 · 11 months ago
Text
Protecting Yourself from AI-Generated Nude Photos: A Guide to StopNCII
In today’s digital age, the rapid advancement of AI technology has brought about new challenges to our online privacy and security. One particularly disturbing trend is the rise of deepfakes – highly sophisticated manipulated images or videos that can make it appear as if someone is doing or saying something they never did. Among these, AI-generated nude photos have become a growing concern, with…
0 notes
sabrinaoliveirajornalista · 2 years ago
Text
Artificial intelligence (AI) technology has become increasingly accessible and sophisticated, enabling the creation of new and innovative products and services. However, this technological progress can also be used to create harmful content, as is the case with deep nude.
Deep nude is a type of AI-generated content that makes it possible to create fake nude scenes. It uses machine learning techniques to analyze real images and videos and generate new images of the same people, but without clothes.
Deep nude can be used for legitimate purposes, such as in the production of films and TV series. However, it can also be used maliciously, for example to create non-consensual pornography or to defame people.
One of the main dangers of deep nude is that it can be used to create non-consensual pornography, since it can generate nude images of people who never gave their consent to be depicted that way.
The emergence of deep nude raises a series of ethical questions. Creating fake nude images without the consent of the person involved is a flagrant violation of privacy and a form of sexual violence. It can be used for defamation, blackmail, and ultimately for exploitation. The ease with which this technology can be used makes potential victims of abuse more vulnerable than ever.
In addition, the spread of fake nude images undermines trust in images and videos in general. As the technology advances, it becomes increasingly difficult to distinguish the real from the fake, eroding the credibility of visual content overall.
Beyond non-consensual pornography, deep nude can also be used for defamation, since it can generate images that appear to show people in compromising situations, such as infidelity or corruption.
Defamation is a crime that can damage a person's reputation and image. It can harm the victim's professional, social, and personal life, causing irreparable damage.
Deep nude is a dangerous technology that can be used for malicious ends. It is important to be aware of the risks of this kind of content and to take steps to protect yourself.
How to protect yourself from deep nude
The best way to protect yourself from deep nude is to avoid sharing personal photos and videos on the internet. If you do need to share personal photos or videos, make sure they are private and that only people you trust have access to them.
Also, be cautious when sharing personal information on social media. Do not post your full name, address, phone number, or other data that could be used to identify you.
It is also important to use security software to protect your devices. This helps keep your personal information from being stolen or accessed by unauthorized people.
How to report deep nude
If you are a victim of deep nude, report the case to the authorities. You can do this at a police station or at a unit specialized in cybercrime.
To file the report, you will need to gather as much evidence as possible, such as screenshots of the published images and the link to the page where they were posted.
You can also report the case to SaferNet's national reporting channel (Canal Nacional de Denúncias). SaferNet is a non-profit organization that works to combat online violence.
Open a case with the organization StopNCII, which helps victims of leaked intimate images remove them from platforms such as Facebook, TikTok, Reddit, Instagram, Bumble, OnlyFans, Threads, and Pornhub.
Here are some steps for reporting deep nude:
Collect the evidence: take screenshots of the published images and save the link to the page where they were posted.
Go to a police station: you can go to a regular police station or to one specialized in cybercrime.
File a police report (Boletim de Ocorrência, BO): the BO is an official document that records the crime.
Hand the evidence over to the authorities: they will analyze it and take the appropriate measures.
It is important to remember that deep nude is a crime. If you are a victim, do not be afraid to report it.
0 notes
efratmima · 2 years ago
Text
StopNCII - Removing Intimate Images from the Web
Have intimate photos or videos of you been shared without your consent? The StopNCII.org tool will remove them from Facebook and Instagram – for free! (more…)
0 notes
rivaltimes · 3 years ago
Text
TikTok and Bumble join Facebook and Instagram in an initiative that fights against revenge porn
[File photo: a TikTok user holds a mobile phone with the app open. – Pixabay]
TikTok and Bumble have joined the initiative that combats the dissemination of intimate images without consent on digital platforms, an action popularly known as revenge porn. StopNCII is an initiative that serves victims of revenge porn, a cybercrime in which someone shares intimate images of another person…
0 notes
allresult · 3 years ago
Text
Meta Introduces Special Initiatives for Online Safety of Women in Pakistan
To assist users in safely navigating cyberspace, Meta has launched two new programmes in Pakistan: the Online Safety Guide and StopNCII (non-consensual intimate images). The safety guidance, which is available in English and Urdu, is expected to increase users’ digital knowledge and responsible online behaviour, resulting in safer online spaces for women to connect and express themselves…
0 notes