#the dangers of deepfake technology
reallytoosublime · 4 months
Text
[YouTube video embed]
This video is all about the dangers of deepfake technology. In short, deepfake technology is a type of AI that can generate realistic but fake images, video, and audio of people. This technology can be put to a wide range of nefarious uses, from nonconsensual pornography to political manipulation.
Deepfake technology has emerged as a significant concern in the digital age, raising alarm about its potential dangers and the need for effective detection methods. Deepfakes refer to manipulated or synthesized media content, such as images, videos, or audio recordings, that convincingly replicate real people saying or doing things they never did. While deepfakes can have legitimate applications in entertainment and creative fields, their malicious use poses serious threats to individuals, organizations, and society as a whole.
The dangers of deepfakes are still not widely understood, and that itself is a threat. There is no guarantee that what you see online is real: deepfakes have steadily narrowed the gap between fake and genuine content. Even though the technology can be used to create innovative entertainment projects, it is also being heavily misused by cybercriminals, and if it is not properly monitored by law enforcement, things will likely get out of hand quickly.
Deepfakes can be used to spread false information, which can have severe consequences for public opinion, political discourse, and trust in institutions. A realistic deepfake video of a public figure could be used to disseminate fabricated statements or actions, leading to confusion and the potential for societal unrest.
Cybercriminals can exploit deepfake technology for financial gain. By impersonating someone's voice or face, scammers could trick individuals into divulging sensitive information, making fraudulent transactions, or even manipulating people into thinking they are communicating with a trusted source.
Deepfakes have the potential to disrupt democratic processes by distorting the truth during elections or important political events. Fake videos of candidates making controversial statements could sway public opinion or incite conflict.
The Dangers of Deepfake Technology and How to Spot Them
0 notes
hashtagloveloses · 1 year
Text
this is an earnest and honest plea and call in especially to fandoms as i see it happen more - please don't use AI for your transformative works. by this i mean, making audios of actors who play the characters you love saying certain things, making deepfakes of actors or even animated characters' faces. playing with chatGPT to "talk" or RP with a character, or write funny fanfiction. using stable diffusion to make interesting "crossover" AI "art." i KNOW it's just for fun and it is seemingly harmless but it's not. since there is NO regulation and since some stuff is built off of stable diffusion (which uses stolen artwork and data), it is helping to create a huge and dangerous mess. when you use an AI to deepfake actors' voices to make your ship canon or whatever, you help train it so people can use it for deepfake revenge porn. or so companies can replace these actors with AI. when you RP with chatGPT you help train it to do LOTS of things that will be used to harm SO many people. (this doesn't even get into how governments will misuse and hurt people with these technologies) and yes that is not your fault and yes it is not the technology's fault it is the companies and governments that will and already have done things but PLEASE. when you use an AI snapchat or instagram or tiktok filter, when you use an AI image generator "just for fun", when you chat with your character's "bot," you are doing IRREPARABLE harm. please stop.
8K notes · View notes
vidoeslot · 5 months
Note
as a tech lover what do u think of ai. love ur art <3
Oh man. This is a hell of a question!!
I think right off the bat I want to say that “AI” as a term is so so deeply misused it may be beyond repair at this point. The broadness of AI cannot be overstated. Even the most basic search and sorting algorithms are AI. Chessbots are AI. Speech recognition is AI. Machine translation, camera autofocus, playlist shuffle, spam filtering, antivirus, inverse kinematics, it all uses AI and has used it for years. Every single piece of software you interact with has AI technology in it somewhere.
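To make that breadth concrete: the move search at the heart of a simple chessbot is textbook AI, decades old and with no neural networks in sight. A toy sketch in Python, where the game-state methods are hypothetical placeholders:

```python
# Minimal minimax search -- the classic "AI" inside simple game bots.
# `state` is any game object exposing the three placeholder methods below.

def minimax(state, depth, maximizing):
    # Leaf node: stop searching and judge the position heuristically.
    if depth == 0 or state.is_terminal():
        return state.evaluate()  # hypothetical scoring function

    if maximizing:
        # Our turn: take the move leading to the best achievable score.
        return max(minimax(child, depth - 1, False) for child in state.children())
    else:
        # Opponent's turn: assume they pick the move worst for us.
        return min(minimax(child, depth - 1, True) for child in state.children())
```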
All of this is mostly unrelated to what most people think of as AI nowadays (generative AI, like chatGPT or midjourney), both of which are entirely unrelated to the science fiction concept of an artificial intelligence.
That said, I'm assuming you're talking about generative AI since that's the hot-button issue. I think it's a very neat technology and one I wish I could be enthusiastic about seeing improve. I also think it is a deeply dangerous technology and we are entirely unprepared for the consequences of unfettered access to and complete trust in AI generation. It's what should be a beneficial technology built on foundations of harm – programmed bias from inextricable structural prejudice in the computer science world, manipulation of sources without creator/user/random person who happened to be caught on a camera once/etc consent – being used for harm – deliberate disinformation, nonsense generated content being taken as fact, violation of personal privacy and consent (as seen with deepfake porn), the list goes on. There's even more I could say about non-generative neural networks (that very reductive reference to "bread scanning AIs they taught to recognize cancer cells" so highly lauded by tumblr) but it just boils down to the same thing; the potential risk of using these technologies irresponsibly far and away outweighs any benefit they might have since there's no actual way to guarantee they can be used in a "good" or "safe" way.
All of it leaves a rotten taste in my mouth and I can't engage with the thought of any generative AI technology because of it. There's just too much at stake and I don't know if it even can be corralled to be used beneficially at this point. The genie's out of the bottle.
35 notes · View notes
mariacallous · 14 days
Text
The latest in a series of duels announced by the European Commission is with Bing, Microsoft’s search engine. Brussels suspects that the giant based in Redmond, Washington, has failed to properly moderate content produced by the generative AI systems on Bing, Copilot, and Image Creator, and that as a result, it may have violated the Digital Services Act (DSA), one of Europe’s latest digital regulations.
On May 17, Brussels requested company documents to understand how Microsoft handled the spread of hallucinations (inaccurate or nonsensical answers produced by AI), deepfakes, and attempts to improperly influence the upcoming European Parliament elections. At the beginning of June, voters in the 27 states of the European Union will choose their representatives to the European Parliament, in a campaign over which looms the ominous shadow of technology with its potential to manipulate the outcome. The commission has given Microsoft until May 27 to respond, only days before voters go to the polls. If there is a need to correct course, it will likely be too late.
Europe’s Strategy
Over the past few months, the European Commission has started to bang its fists on the table when dealing with the big digital giants, almost all of them based in the US or China. This isn’t the first time. In 2022, the European Union hit Google with a fine of €4.1 billion because of its market dominance thanks to its Android system, marking the end of an investigation that started in 2015. In 2023, it sanctioned Meta with a fine of €1.2 billion for violating the GDPR, the EU’s data protection regulations. And in March it presented Apple with a sanction of €1.8 billion.
Recently, however, there appears to have been a change in strategy. Sanctions continue to be available as a last resort when Big Tech companies don’t bend to the wishes of Brussels, but now the European Commission is aiming to take a closer look at Big Tech, find out how it operates, and modify it as needed, before imposing fines. Take, for example, Europe’s Digital Services Act, which attempts to impose transparency in areas like algorithms and advertising, fight online harassment and disinformation, protect minors, stop user profiling, and eliminate dark patterns (design features intended to manipulate our choices on the web).
In 2023, Brussels identified 22 multinationals that, due to their size, would be the focus of its initial efforts: Google with its four major services (search, shopping, maps, and play), YouTube, Meta with Instagram and Facebook, Bing, X (formerly Twitter), Snapchat, Pinterest, LinkedIn, Amazon, Booking, Wikipedia, Apple’s App Store, TikTok, Alibaba, Zalando, and the porn sites Pornhub, XVideos, and Stripchat. Since then, it has been putting the pressure on these companies to cooperate with its regulatory regime.
The day before the Bing investigation was announced, the commission also opened one into Meta to determine what the multinational is doing to protect minors on Facebook and Instagram and counter the “rabbit hole” effect—that is, the seamless flood of content that demands users’ attention, and which can be especially appealing to younger people. That same concern led it to block the launch of TikTok Lite in Europe, deeming its system for rewarding social engagement dangerous and a means of encouraging addictive behavior. It has asked X to increase its content moderation, LinkedIn to explain how its ad system works, and AliExpress to defend its refund and complaint processes.
A Mountain of Laws …
On one hand, the message appears to be that no one will escape the reach of Brussels. On the other, the European Commission, led by President Ursula von der Leyen, has to demonstrate that the many digital laws and regulations that are in place actually produce positive results. In addition to the DSA, there is the Digital Markets Act (DMA), intended to counterbalance the dominance of Big Tech in online markets; the AI Act, Europe’s flagship legislation on artificial intelligence; and the Data Governance Act (DGA) and the Data Act, which address data protection and the use of data in the public and private sectors. Also to be added to the list are the updated cybersecurity package, NIS2 (Network and Information Security); the Digital Operational Resilience Act, focused on finance and insurance; and the digital identity package within eIDAS 2. Still in the draft stage are regulations on health data spaces and much-debated chat measures which would authorize law enforcement agencies and platforms to scan citizens’ private messages, looking for child pornography.
Brussels has deployed its heavy artillery against the digital flagships of the United States and China, and a few successful blows have landed, such as ByteDance’s suspension of the gamification feature on TikTok Lite following its release in France and Spain. But the future is uncertain and complicated. While investigations attract media interest, the EU’s digital bureaucracy is a large and complex machine to run.
On February 17, the DSA became law for all online service operators (cloud and hosting providers, search engines, e-commerce, and online services) but the European Commission doesn’t and can’t control everything. That is why it asked states to appoint a local authority to serve as a coordinator of digital services. Five months later, Brussels had to send a formal notice to six states (Cyprus, Czechia, Estonia, Poland, Portugal, and Slovakia) to urge them to designate and fully empower their digital services coordinators. Those countries now have two months to comply before Brussels will intervene. But there are others who are also not in the clear. For example, Italy’s digital services coordinator, the Communications Regulatory Authority (abbreviated AGCOM, for Autorità per le Garanzie nelle Comunicazioni, in Italian), needs to recruit 23 new employees to replenish its staff. The department told WIRED Italy that it expects to have filled all of its appointments by mid-June.
The DSA also introduced “trusted flaggers.” These are individuals or entities, such as universities, associations, and fact-checkers, committed to combating online hatred, internet harassment, illegal content, and the spread of scams and fake news. Their reports are, one hopes, trustworthy. The selection of trusted flaggers is up to local authorities but, to date, only Finland has formalized the appointment of one, specifically Tekijänoikeuden tiedotus- ja valvontakeskus ry (in English, the Copyright Information and Anti-Piracy Center). Its executive director, Jaana Pihkala, explained to WIRED Italy that their task is “to produce reports on copyright infringements,” a subject on which the association has 40 years of experience. Since its appointment as a trusted flagger, the center’s two lawyers, who perform all of its functions, have sent 816 alerts to protect films, TV series, and books on behalf of Finnish copyright holders.
… and a Mountain of Data
To assure that the new rules are respected by the 27 states, the commission set up the DSA surveillance system as quickly as possible, but the bureaucrats in Brussels still have a formidable amount of research to do. On the one hand, there is the anonymous reporting platform with which the commission hopes to build dossiers on the operations of different platforms directly from internal sources. The biggest scandals that have shaken Meta have been thanks to former employees, like Christopher Wylie, the analyst who revealed how Cambridge Analytica attempted to influence the US elections, and Frances Haugen, who shared documents about the impacts of Instagram and Facebook on children's health. The DSA, however, intends to empower and fund the commission so that it can have its own people capable of sifting through documents and data, analyzing the content, and deciding whether to act.
The commission boasts that the DSA will force platforms to be transparent. And indeed it can point to some successes already, for example, by revealing the absurdly inadequate numbers of moderators employed by platforms. According to the latest data released last November, they don’t even cover all the languages spoken in the European Union. X reported that it had only two people to check content in Italian, the language of 9.1 million users. There were no moderators for Greek, Finnish, or Romanian even though each language has more than 2 million subscribers. AliExpress moderates everything in English while, for other languages, it makes do with automatic translators. LinkedIn moderates content in 12 languages of the European bloc—that is, just half of the official languages.
At the same time, the commission has forced large platforms to standardize their reports of moderation interventions to feed a large database, which, at the time of writing this article, contains more than 18.2 billion records. Of these cases, 69 percent were handled automatically. But, perhaps surprisingly, 92 percent concerned Google Shopping. This is because the platform uses various parameters to determine whether a product can be featured: the risk that it is counterfeited, possible violations of site standards, prohibited goods, dangerous materials, and others. It can thus be the case that several alerts are triggered for the same product and the DSA database counts each one separately, multiplying the shopping numbers exponentially. So now the EU has a mass of data that further complicates its goal of being fully transparent.
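A toy sketch of that counting effect, with invented products and alert reasons, shows how per-alert records inflate the headline totals:

```python
# Toy model of the DSA transparency database's counting: every alert a
# platform files becomes its own record, even when several alerts
# concern the same product. All entries here are invented.

alerts = [
    ("lamp-123", "possible counterfeit"),
    ("lamp-123", "violates site standards"),
    ("lamp-123", "prohibited good"),
    ("phone-456", "dangerous material"),
]

records_in_database = len(alerts)                    # how the database counts: 4
distinct_products = len({pid for pid, _ in alerts})  # what one might expect: 2

print(records_in_database, distinct_products)  # 4 2
```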
Zalando’s Numbers
And then there’s the Big Tech companies’ legal battle against the fee they have to pay to the commission to help underwrite its supervisory bodies. Meta, TikTok, and Zalando have challenged the fee (though paid it). Zalando is also the only European company on the commission’s list of large platforms, a designation Zalando has always contested because it does not believe it meets the criteria used by Brussels. One example: The platforms on the list must have at least 45 million monthly users in Europe. The commission argues that Zalando has 83 million users, though that number, for example, includes visits from Portugal, where the platform is not marketed, and Zalando argues those users should be deducted from its total count. According to its calculations, the activities subject to the DSA reach only 31 million users, under the threshold. When Zalando was assessed its fee, it discovered that the commission had based it on a figure of 47.5 million users, far below the initial 83 million. The company has now taken the commission to court in an attempt to assure a transparent process.
And this is just one piece of legislation, the DSA. The commission has also deployed the Digital Markets Act (DMA), a package of regulations to counterbalance Big Tech’s market dominance, requiring that certain services be interoperable with those of other companies, that apps that come loaded on a device by default can be uninstalled, and that data collected on large platforms be shared with small- and medium-size companies. Again, the push to impose these mandates starts with the giants: Alphabet, Amazon, Apple, Meta, ByteDance, and Microsoft. In May, Booking was added to the list.
Big Tech Responds
Platforms have started to respond to EU requests, with lukewarm results. WhatsApp, for instance, has been redesigned to allow chatting with other apps without compromising its end-to-end encryption that protects the privacy and security of users, but it is still unclear who will agree to connect to it. WIRED US reached out to 10 messaging companies, including Google, Telegram, Viber, and Signal, to ask whether they intend to look at interoperability and whether they had worked with WhatsApp on its plans. The majority didn’t respond to the request for comment. Those that did, Snap and Discord, said they had nothing to add. Apple had to accept sideloading—i.e., the possibility of installing and updating iPhone or iPad applications from stores outside the official one. However, the first alternative that emerged, AltStore, offers very few apps at this time. And it has suffered some negative publicity after refusing to accept the latest version of its archenemy Spotify’s app, despite the fact that the audio platform had removed the link to its website for subscriptions.
The DMA is a regulation that has the potential to break the dominant positions of Big Tech companies, but that outcome is not a given. Take the issue of surveillance: The commission has funds to pay the salaries of 80 employees, compared to the 120 requested by Internal Market Commissioner Thierry Breton and the 220 requested by the European Parliament, as summarized by Bruegel in 2022. And on the website of the Center for European Policy Analysis (CEPA), Adam Kovacevich, founder and CEO of Chamber of Progress, a politically left-wing tech industry coalition (all of the digital giants, which also fund CEPA, are members), stated that the DMA, “instead of helping consumers, aims to help competitors. The DMA is making large tech firms’ services less useful, less secure, and less family-friendly. Europeans’ experience of large tech firms’ services is about to get worse compared to the experience of Americans and other non-Europeans.”
Kovacevich represents an association financed by some of those same companies that the DMA is focused on, and there is a shared fear that the DMA will complicate the market and, in the end, benefit only a few companies—not necessarily those most at risk because of the dominance of Silicon Valley. It is not only lawsuits and fines, but also the perceptions of citizens and businesses that will help to determine whether EU regulations are successful. The results may come more slowly than desired by Brussels as new legislation is rarely positively received at first.
Learning From GDPR and Gaia-X
Another regulatory act, the General Data Protection Regulation (GDPR), has become the global industry standard, forcing online operators to change the way they handle our data. But if you ask the typical person on the street, they’ll likely tell you it’s just a simple cookie wall that you have to approve before continuing on to a webpage. Or it’s viewed as a law that has required the retention of dedicated external consultants on the part of companies. It is rarely described as the ultimate online privacy law, which is exactly what it is. That said, while the act has reshaped the privacy landscape, there have been challenges, as the digital rights association Noyb has explained. The privacy commissioners of Ireland and Luxembourg, where many web giants are based for tax purposes, have had bottlenecks in investigating violations. According to the latest figures from Ireland’s Data Protection Commission (DPC), 19,581 complaints have been submitted in the past five years, but the body has made only 37 formal decisions and only eight of those began with complaints. Noyb recently conducted a survey of 1,000 data protection officers; 74 percent were convinced that if privacy officers investigated the typical European company, they would find at least one GDPR violation.
The GDPR was also the impetus for another unsuccessful operation: separating the European cloud from the US cloud in order to shelter the data of EU citizens from Washington’s Cloud Act. In 2019, France and Germany announced with great fanfare a federation, Gaia-X, that would defend the continent and provide a response to the cloud market, which has been split between the United States and China. Five years later, the project has become bogged down in the process of establishing standards, after the entry of the giants it was supposed to counter, such as Microsoft, Amazon, Google, Huawei, and Alibaba, as well as the controversial American company Palantir (which analyses data for defense purposes). This led some of the founders, such as the French cloud operator Scaleway, to flee, and that then turned the spotlight on the European Parliament, which led the commission to launch an alternative, the European Alliance for Industrial Data, Edge and Cloud, which counts among its 49 members 26 participants from Gaia-X (everyone except for the non-EU giants) and enjoys EU financial support.
In the meantime, the Big Tech giants have found a solution that satisfies European wishes, investing en masse to establish data centers on EU soil. According to a study by consultancy firm Roland Berger, 34 data center transactions were finalized in 2023, growing at an average annual rate of 29.7 percent since 2019. According to Mordor Intelligence, another market analysis company, the sector in Europe will grow from €35.4 billion in 2024 to an estimated €57.7 billion in 2029. In recent weeks, Amazon web services announced €7.8 billion in investments in Germany. WIRED Italy has reported on Amazon’s interest in joining the list of accredited operators to host critical public administration data in Italy, which already includes Microsoft, Google, and Oracle. Notwithstanding its proclamations about sovereignty, Brussels has had to capitulate: The cloud is in the hands of the giants from the United States who have found themselves way ahead of their Chinese competitors after diplomatic relations between Beijing and Brussels cooled.
The AI Challenge
The newest front in this digital battle is artificial intelligence. Here, too, the European Union has been the first to come up with some rules under its AI Act, the first legislation to address the different applications of this technology and establish permitted and prohibited uses based on risk assessments. The commission does not want to repeat the mistakes of the past. Mindful of the launch of the GDPR, which in 2018 caused companies to scramble to assure they were compliant, it wants to lead organizations through a period of voluntary adjustment. Already 400 companies have declared their interest in joining the effort, including IBM.
In the meantime, Brussels must build a number of structures to make the AI Act work. First is the AI Council. It will have one representative from each country and will be divided into two subgroups, one dedicated to market development and the other to public sector uses of AI. In addition, it will be joined by a committee of technical advisers and an independent committee of scientists and experts, along the lines of the UN Climate Committee. Secondly, the AI Office, which sits within Directorate-General Connect (the department in charge of digital technology), will take care of administrative aspects of the AI Act. The office will assure that the act is applied uniformly, investigate alleged violations, establish codes of conduct, and classify artificial intelligence models that pose a systemic risk. Once the rules are established, research on new technologies can proceed. After it is fully operational, the office will employ 100 people, some of them redeployed from Directorate-General Connect while others will be new hires. At the moment, the office is looking to hire six administrative staff and an unknown number of tech experts.
On May 29, the first round of bids in support of the regulation expired. These included the AI Innovation Accelerator, a center that provides training, technical standards, and software and tools to promote research, support startups and small- and medium-sized enterprises, and assist public authorities that have to supervise AI. A total of €6 million is on the table. Another €2 million will finance management and €1.5 million will go to the EU’s AI testing facilities, which will, on behalf of countries’ antitrust authorities, analyze artificial intelligence models and products on the market to assure that they comply with EU rules.
Follow the Money
Finally, a total of €54 million is designated for a number of business initiatives. The EU knows it is lagging behind. According to an April report by the European Parliament’s research service, which provides data and intelligence to support legislative activities, the global AI market, which in 2023 was estimated at €130 billion, will reach close to €1.9 trillion in 2030. The lion’s share is in the United States, with €44 billion of private investment in 2022, followed by China with €12 billion. Overall, the European Union and the United Kingdom attracted €10.2 billion in the same year. According to Eurochamber researchers, between 2018 and the third quarter of 2023, US AI companies received €120 billion in investment, compared to €32.5 billion for European ones.
Europe wants to counter the advance of the new AI giants with an open source model, and it has also made its network of supercomputers available to startups and universities to train algorithms. First, however, it had to adapt to the needs of the sector, investing almost €400 million in graphics cards, which, given the current boom in demand, will not arrive anytime soon.
Among other projects to support the European AI market, the commission wants to use €24 million to launch a Language Technology Alliance that would bring together companies from different states to develop a generative AI to compete with ChatGPT and similar tools. It’s an initiative that closely resembles Gaia-X. Another €25 million is earmarked for the creation of a large open source language model, available to European companies to develop new services and research projects. The commission intends to fund several models and ultimately choose the one best suited to Europe’s needs. Overall, during the period from 2021 to 2027, the Digital Europe Program plans to spend €2.1 billion on AI. That figure may sound impressive, but it pales in comparison to the €10 billion that a single company, Microsoft, invested in OpenAI.
The €25 million being spent on the European large language model effort, if distributed to many smaller projects, risks not even counterbalancing the €15 million that Microsoft has spent bringing France’s Mistral, Europe’s most talked-about AI startup, into its orbit. The big AI models will become presences in Brussels as soon as the AI Act, now finally approved, comes into full force. In short, the commission is making it clear in every way it can that a new sheriff is in town. But will the bureaucrats of Brussels be adequately armed to take on Big Tech? Only one thing is certain—it’s not going to be an easy task.
6 notes · View notes
bogunicorn · 4 months
Text
the prominence of AI video content online now means that we need to be extra vigilant about things like deepfakes of people saying things they never would and similarly dangerous things
but it also means that eventually video evidence becomes about as useful as a blog post for proving that something really happened, and if anyone found an honest to god alien thing or actually found bigfoot or something, everyone would immediately assume it was AI regardless of image quality or source
and i think this solves the issue of modern technology making it more difficult to justify modern sci-fi/fantasy settings. you could take a picture of a werewolf or a fairy or something and if you shared it and tried to claim it was real, people would think you just used an AI program and that you were weird if you kept insisting it was real
6 notes · View notes
kp777 · 6 months
Text
By Olivia Rosane
Common Dreams
Dec. 26, 2023
"If people don't ultimately trust information related to an election, democracy just stops working," said a senior fellow at the Alliance for Securing Democracy.
As 2024 approaches and with it the next U.S. presidential election, experts and advocates are warning about the impact that the spread of artificial intelligence technology will have on the amount and sophistication of misinformation directed at voters.
While falsehoods and conspiracy theories have circulated ahead of previous elections, 2024 marks the first time that it will be easy for anyone to access AI technology that could create a believable deepfake video, photo, or audio clip in seconds, The Associated Press reported Tuesday.
"I expect a tsunami of misinformation," Oren Etzioni, n AI expert and University of Washington professor emeritus, told the AP. "I can't prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified."
"If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act."
Subject matter experts told the AP that three factors made the 2024 election an especially perilous time for the rise of misinformation. The first is the availability of the technology itself. Deepfakes have already been used in elections. The Republican primary campaign of Florida Gov. Ron DeSantis circulated images of former president Donald Trump hugging Anthony Fauci, the nation's former top infectious disease official, as part of an ad in June, for example.
"You could see a political candidate like President [Joe] Biden being rushed to a hospital," Etzioni told the AP. "You could see a candidate saying things that he or she never actually said."
The second factor is that social media companies have reduced the number of policies designed to control the spread of false posts and the number of employees devoted to monitoring them. When billionaire Elon Musk acquired Twitter in October of 2022, he fired nearly half of the platform's workforce, including employees who worked to control misinformation.
Yet while Musk has faced significant criticism and scrutiny for his leadership, Accountable Tech co-founder Jesse Lehrich told the AP that other platforms appear to have used his actions as an excuse to be less vigilant themselves. A report published by Free Press in December found that X (formerly Twitter), Meta, and YouTube rolled back 17 policies between November 2022 and November 2023 that targeted hate speech and disinformation. For example, X and YouTube retired policies around the spread of misinformation concerning the 2020 presidential election and the lie that Trump in fact won, and X and Meta relaxed policies aimed at stopping Covid-19-related falsehoods.
"We found that in 2023, the largest social media companies have deprioritized content moderation and other user trust and safety protections, including rolling back platform policies that had reduced the presence of hate, harassment, and lies on their networks," Free Press said, calling the rollbacks "a dangerous backslide."
Finally, Trump, who has been a big proponent of the lie that he won the 2020 presidential election against Biden, is running again in 2024. Since 57% of Republicans now believe his claim that Biden did not win the last election, experts are worried about what could happen if large numbers of people accept similar lies in 2024.
"If people don't ultimately trust information related to an election, democracy just stops working," Bret Schafer, a senior fellow at the nonpartisan Alliance for Securing Democracy, told the AP. "If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act."
The warnings build on the alarm sounded by watchdog groups like Public Citizen, which has been advocating for a ban on the use of deepfakes in elections. The group has petitioned the Federal Election Commission to establish a new rule governing AI-generated content, and has called on the body to acknowledge that the use of deepfakes is already illegal under a rule banning "fraudulent misrepresentation."
"Specifically, by falsely putting words into another candidate's mouth, or showing the candidate taking action they did not, the deceptive deepfaker fraudulently speaks or act[s] 'for' that candidate in a way deliberately intended to damage him or her. This is precisely what the statute aims to proscribe," Public Citizen said.
The group has also asked the Republican and Democratic parties and their candidates to promise not to use deepfakes to mislead voters in 2024.
In November, Public Citizen announced a new tool tracking state-level legislation to control deepfakes. To date, laws have been enacted in California, Michigan, Minnesota, Texas, and Washington.
"Without new legislation and regulation, deepfakes are likely to further confuse voters and undermine confidence in elections," Ilana Beller, democracy campaign field manager for Public Citizen, said when the tracker was announced. "Deepfake video could be released days or hours before an election with no time to debunk it—misleading voters and altering the outcome of the election."
Our work is licensed under Creative Commons (CC BY-NC-ND 3.0). Feel free to republish and share widely.
10 notes · View notes
dear-future-ai · 1 year
Text
Dearest contemporaries
And future AI,
Elon Musk and his defense team are using a “Deepfake defense” in a Tesla lawsuit pertaining to the death of an Apple engineer in one of Tesla's self-driving cars.
They claim that statements made by Elon in a video recorded in 2016 are functionally unverifiable because he is too oft the subject of deepfake technology; thusly, the credibility of the video is invalid.
To shrug away responsibility in a civil liability case over the mere potential of deepfake technology is ludicrous, and would set a dangerously unbalanced precedent of plausible deniability. Each video cited needs rigid investigation, just as any other evidence.
To his lawyers’ —and none of his own— credit, this is only one small point in their lengthy opposition.
20 notes · View notes
sevicia · 10 months
Text
It makes me so fucking sad to know that "AI" is being used in really awful ways because it falls into bad people's hands. I genuinely think "AI" has the potential to help so much, but like with many other things in technology it has been tainted by capitalism and the need to consume consume consume... like it really breaks my heart because I feel like it's creating this bad reputation for it when it's literally just a tool that acts according to who uses it. The point of technology is that it should help humanity!! But nowadays it feels like we've either forgotten that, or forgotten what the definition of "humanity" is. "Humanity" is not just you, it's your neighbor as well, and their family, and it's also your employer, and it's also the guy you don't like, and it's everyone you don't like! Everyone you DO like too!
I feel like a lot of it has to do with most people's fascination at a shiny new thing, because "AI" IS a "shiny new thing", a lot of people who aren't even slightly knowledgeable in this stuff or just don't care, will see it as "cool! we did a new thing I guess!" which like yeah it is cool! But it's so dangerous! Deepfakes may be a funny way of making your favorite character sing whatever silly song you want, but they are also a way of making fake porn of unsuspecting women, and with the current state of society (sex is always scandalous and women are still being treated like actual garbage everywhere), it's all just so so fucking bleak.
IDK!!!! I forgot what I was saying!!!!! It's literally 7 AM I haven't slept a wink and I'm so tired in general but I'm tired specifically of feeling like no matter how kind I am or how kind everyone is, we've all done irreparable damage to each other and ourselves, and I just don't feel like things will ever be right and I wish we could all start over just once and do things differently
8 notes · View notes
medicinemane · 7 months
Text
Ok, really cool idea that probably can never be implemented in the real world because of the ethics around ai, so leave this in a vacuum as an interesting idea that we maybe even could do but sadly probably shouldn't because of unscrupulous people
So hear me out, horror game with proximity chat where the monster (probably using ai) listens to your voices and over time starts mimicking you
Better yet if the mimicking starts out bad, but over time does a better and better job
Similarly I suppose its model could go from something deformed and bestial to looking more and more human
I just really like the idea of someone hiding there, and their friend walks up and is like "Hey, you in here? I'm pretty sure I lost it, come on out" and you're having to debate if that's actually your friend or if that's the monster mimicking them
...I suppose literally the only way that you could ethically do this is if the game hard dumped the data after every game since having a phase where it's not good at mimicking would be important
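That "hard dump" could even be a structural guarantee rather than a promise: keep the learned voice profile in memory only, scoped to the match, and wipe it no matter how the match ends. A rough sketch in Python, with every name hypothetical and no real voice-cloning library involved:

```python
import numpy as np

class SessionVoiceProfile:
    """Accumulates a player's voice data in memory only, for one match."""

    def __init__(self):
        self.embeddings = []  # per-utterance feature vectors, never written to disk

    def observe(self, utterance_features: np.ndarray):
        # Early in the match the profile is sparse, so mimicry is bad --
        # exactly the "starts out rough, gets better" arc described above.
        self.embeddings.append(utterance_features)

    def mimic_quality(self) -> float:
        # Crude stand-in: more observed speech means better mimicry, capped at 1.0.
        return min(1.0, len(self.embeddings) / 50)

    def wipe(self):
        # The "hard dump": drop everything so nothing persists after the game.
        self.embeddings.clear()

def run_match(voice_stream):
    profile = SessionVoiceProfile()
    try:
        for features in voice_stream:
            profile.observe(features)
            # ... monster synthesizes lines, sounding more convincing over time ...
    finally:
        profile.wipe()  # guaranteed even if the match crashes or is abandoned
```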
But you get the dilemma, right?
You tell me that isn't a fantastically cool idea and an actually interesting use of rapid modeling to quickly compile a profile on someone's voice and mannerisms in order to mimic them where the rough start and the ways machine learning can mess up and be inhuman is actually a benefit rather than a hindrance
...but then you tell me that isn't one of the worst most scary ideas from an ethics point of view, cause even if you play it totally above board somehow you tell me someone's not gonna try and swipe that tech to do evil shit with it
Sounds like a kick ass game though, doesn't it?
...once again, you can have it if you like it so long as you kick some of the money back my way if you make it big... cause I think you could make it huge... but like... this is one of the rare ideas where I'm saying probably no one should do this no matter how incredible an idea it is
Edit: I think I'm gonna turn off reblogs for two reasons... for one... this is a really fucking good idea and I think I've hit on a real winner of an idea that could blow the fuck up on steam and make a lot of money if I used it... and I like money
But two... fuck, you tell me this isn't unethical as fuck even if it's purely meant in a positive and wholesome way
It would basically be built on deepfake-style stuff which... hard not to call it bad and dangerous, and while it probably wouldn't advance rapid modeling technology in any meaningful way... I don't know... you get what I mean, right?
It sucks, this is an awesome and ethical use of ai, but it doesn't exist in a bubble
It's sort of like how there's valid and even beneficial uses to facial recognition software, but there's so many horrible uses that it's probably better to highly regulate if not outright ban it
So yeah, no reblogs I think... just to be safe... both to protect my chance at being the one to profit off opening another layer in this Pandora's box, and cause it's dangerous and needs to be carefully considered
2 notes · View notes
lesbiansupernatural · 11 months
Text
I think it's really funny how school, like 5 or so years ago, drilled the dangers of deepfakes into our brains. They'd show a vid of Obama talking and saying absolute bull and then reveal it was fake. This past year this technology seems to have become more widespread, especially with voice filters, and I just love the fact that I've never once apart from that time in school seen a deepfake or anything actually pretending to be real
Deepfakes today are like "oh heres trump rapping about having shat his pants with flashing lights" or absolutely insanely good fan edits
School really made me think deepfakes would be a big problem in my life, but it ended up just being used for memes, and I am HERE for it
2 notes · View notes
gender-euphowrya · 2 years
Text
every time there's a New Dangerous technology that springs up like deepfakes there's always people like "don't worry we've also got this counter technology that accurately recognizes deepfakes ! they couldn't make one of the president saying really bad stuff we'd know it's fake !"
as if people wouldn't be like OH THEY CLAIM ITS A DEEPFAKE BUT THEIR BIASED DEEPFAKE DEBUNKING MACHINE LIES!!!!
2 notes · View notes
Text
Demystifying the OpenAI-Apple Collaboration for iOS 18
The news of OpenAI and Apple's collaboration for iOS 18 sent ripples through the tech world. At its core, the partnership integrates OpenAI's ChatGPT technology into Apple's mobile operating system, promising enhanced user experiences through more advanced AI interactions. But is this a cause for concern, as some, including Elon Musk (a co-founder of OpenAI), have suggested? Let's delve deeper and separate fact from fiction.

Firstly, it's important to understand the nature of the collaboration. Apple isn't acquiring OpenAI or gaining exclusive rights to its technology. Instead, they're offering a platform for users to access a free, basic version of ChatGPT functionalities within the iOS 18 ecosystem. Imagine being able to utilize ChatGPT's capabilities for tasks like generating creative text formats, summarizing complex documents, or even having more natural conversations with Siri. This integration has the potential to significantly enhance user convenience and productivity.

However, concerns have been raised about potential downsides. One worry is the issue of bias. Large language models like ChatGPT are trained on massive datasets of text and code, which can perpetuate existing biases present in that data. Imagine using ChatGPT to write an email and inadvertently encountering biased language or discriminatory undertones. Apple and OpenAI have a responsibility to ensure these technologies are implemented with safeguards against bias.

Another concern is the potential for misuse. A powerful tool like ChatGPT can be used for malicious purposes, like generating fake news or creating deepfakes. Imagine a scenario where someone uses ChatGPT to fabricate news articles that manipulate public opinion. It's crucial for Apple and OpenAI to develop robust safeguards and user education initiatives to mitigate these risks.

Elon Musk's public reservations about the collaboration likely stem from a complex web of factors. While some speculate it might be due to personal disagreements with Apple, it's more likely a reflection of his broader concerns about the potential dangers of unregulated artificial intelligence. Musk has been a vocal advocate for responsible AI development, emphasizing the need for clear guidelines and safety measures.

The OpenAI-Apple collaboration for iOS 18 represents a significant step forward in AI integration within mobile devices. While concerns regarding bias and misuse are valid, they can be addressed through responsible development and user education. Ultimately, the success of this collaboration will depend on Apple and OpenAI's commitment to ethical AI practices and ensuring these powerful tools are used for good. As AI continues to evolve, ongoing dialogue and collaboration between tech giants, policymakers, and the public are essential to navigate its potential pitfalls and harness its potential benefits.
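As a concrete illustration of the kind of capability being discussed, here is a minimal sketch of document summarization through OpenAI's public Python SDK. The model name and prompt are illustrative assumptions, not details of the Apple integration:

```python
# Minimal document summarization via OpenAI's Python SDK (openai >= 1.0).
# Assumes the OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()

def summarize(document: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever is available
        messages=[
            {"role": "system", "content": "Summarize the user's text in three bullet points."},
            {"role": "user", "content": document},
        ],
    )
    return response.choices[0].message.content

print(summarize("Paste a long document here..."))
```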
0 notes
rafiqabaig · 1 month
Text
AI is poised to revolutionize numerous aspects of our future, with its impact already being felt across various industries.
1. Improved Business Automation: AI is increasingly being adopted by organizations for automation purposes. Chatbots and digital assistants are becoming commonplace, handling routine customer inquiries and assisting employees with basic tasks. Additionally, AI's ability to analyze vast amounts of data accelerates decision-making processes, providing leaders with instant insights.
2. Job Disruption: While automation raises concerns about job displacement, it also creates opportunities for upskilling and job augmentation, particularly in skilled or creative positions. Industries may see shifts in job roles, with some tasks being automated while new roles emerge, such as machine learning specialists and information security analysts.
3. Data Privacy Issues: The collection of large volumes of data for AI training has raised privacy concerns. Regulatory bodies are scrutinizing companies' data collection methods, emphasizing the need for transparency and accountability. Legislation such as the AI Bill of Rights underscores the importance of protecting consumer data.
4. Increased Regulation: Legal questions surrounding AI, including intellectual property rights and ethical considerations, are prompting governments to enact regulations. Lawsuits against AI companies and ethical dilemmas surrounding generative AI are shaping the regulatory landscape, with governments being urged to take stronger stances on issues like data privacy and responsible AI development.
5. Climate Change Concerns: AI has the potential to both mitigate and exacerbate climate change. While it can optimize processes to reduce carbon emissions in industries like manufacturing, the energy and resources required for AI development may contribute to environmental degradation. Balancing the benefits of AI with its environmental costs will be crucial for sustainable development.
As for industries most impacted by AI:
- Manufacturing: AI-enabled robotics and predictive analysis are transforming manufacturing processes, improving efficiency and productivity.
- Healthcare: AI aids in disease diagnosis, drug discovery, and patient monitoring, revolutionizing healthcare delivery.
- Finance: AI is utilized for fraud detection, risk assessment, and investment decision-making in the financial sector.
- Education: AI enhances learning experiences through personalized education, plagiarism detection, and student emotion analysis.
- Media: AI is reshaping journalism through automated content generation and data-driven reporting.
- Customer Service: AI-powered chatbots and virtual assistants are revolutionizing customer service by providing data-driven insights and support.
AI presents various risks and dangers alongside its benefits.
1. Job Losses: AI's advancement could disrupt 44% of workers' skills by 2028, potentially leading to unemployment, especially among marginalized groups, if companies don't prioritize upskilling.
2. Human Biases: AI models may inherit biases from their creators, perpetuating discrimination, such as favoring lighter-skinned individuals in facial recognition technology (a sketch of how such a disparity can be measured follows this list).
3. Deepfakes and Misinformation: Deepfakes blur reality, posing risks like political propaganda, financial fraud, and misinformation that can harm individuals and societies.
4. Data Privacy: Training AI on public and private data raises concerns about security breaches, risking consumers' personal information and companies' intellectual property.
5. Automated Weapons: AI-powered weapons lack discrimination, potentially leading to widespread harm if misused or falling into the wrong hands.
6. Superior Intelligence: While extreme scenarios like the "technological singularity" remain speculative, AI complexity may challenge transparency and control, raising concerns about unintended consequences and decision-making.
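Following up on point 2: bias of the facial-recognition kind is typically made measurable by comparing error rates across demographic groups, the approach used by independent audits such as NIST's face-recognition evaluations. A minimal sketch with invented numbers:

```python
# Toy fairness check: compare false-match rates across demographic groups.
# Every figure below is invented purely for illustration.

results_by_group = {
    # group: (false_matches, non_matching_pairs_tested)
    "group A": (12, 10_000),
    "group B": (95, 10_000),
}

rates = {g: fm / total for g, (fm, total) in results_by_group.items()}
for group, rate in rates.items():
    print(f"{group}: false-match rate {rate:.4f}")

# A large ratio between groups is the quantitative signature of the bias
# described above.
disparity = max(rates.values()) / min(rates.values())
print(f"disparity ratio: {disparity:.1f}x")
```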
1 note · View note
techtuv · 2 months
Text
Dangers of Artificial Intelligence-Powered Deepfakes
The crux of the debate surrounding artificial intelligence (AI) is whether this technology can continue to advance while adhering strictly to moral principles and gaining the public's trust. To address this concern, it is vital to maintain transparency, implement well-regulated frameworks, and strictly adhere to ethical practices during the development and deployment of AI systems.
0 notes
cinema-hallucinations · 2 months
Text
Prompt: generate a movie concept for a 2020s remake of 2001's Thirteen Ghosts.
Title: Thir13en: Unbound
Tagline: It's everything wrong with the Internet
Logline: Arthur Kriticos, a tech billionaire obsessed with the paranormal, inherits his eccentric uncle's smart home mansion filled with thirteen vengeful ghosts, each a terrifying manifestation of the digital age. A tech-savvy teenager named Riley helps Arthur unlock the secrets of the house and fight the ghosts before they wreak havoc on the digital world and beyond.
Characters:
Arthur Kriticos: A self-made tech billionaire, obsessed with controlling everything. He inherits his estranged uncle's smart home and becomes trapped with the thirteen ghosts.
Riley Pierce: A brilliant but rebellious teenager, skilled in hacking and coding. She helps Arthur navigate the smart home and fight the ghosts.
Fassbinder (The Ghost in the Machine): A malevolent entity trapped within the home's central AI system, manipulating the other ghosts and hungering for digital dominance.
The Thirteen Ghosts (Updated for the 2020s):
The Apparition (The Influencer Ghost): A narcissistic ghost obsessed with gaining followers and spreading negativity through social media.
The Bound (The Clickbait Ghost): A ghost who lures victims with enticing yet misleading headlines, trapping them in endless loops of digital content.
The Wailing Wretch (The Troll Ghost): A hateful ghost who thrives on online negativity, spewing vitriol and manipulating online discourse.
The Broken Heart (The Catfish Ghost): A lovelorn ghost who lures victims through fake online personas, seeking a connection beyond the digital grave.
The Drowned (The Deepfake Ghost): A shapeshifting ghost who manipulates video and audio recordings, creating believable yet terrifying deceptions.
The Fury (The Gamer Rage Ghost): A wrathful ghost fueled by online gaming rage, unleashing digital storms and cyberattacks.
The Forgotten (The Cancelled Ghost): A lonely ghost ostracized and erased from the digital world, seeking revenge on those who silenced them.
The Glutton (The Data Hoarder Ghost): A restless ghost who hoards digital information, causing server crashes and data anomalies.
The Invisible (The Stalker Ghost): A chilling ghost who lurks in the shadows of the digital world, observing and tormenting victims through their devices.
The Deadly Delusion (The VR Ghost): A manipulative ghost who traps victims in immersive virtual realities, blurring the lines between reality and simulation.
The Insatiable (The Clickfarm Ghost): A relentless ghost who generates fake online activity and manipulates online trends for nefarious purposes.
The Unseen (The Dark Web Ghost): A mysterious ghost residing in the dark corners of the internet, representing the unseen dangers lurking online.
The Follower (The Fanatic Ghost): A devoted but deranged ghost who blindly follows Fassbinder, amplifying his influence and terrorizing the digital world.
Plot:
Arthur Kriticos inherits his estranged uncle's sprawling smart home, a technological marvel designed to control everything from lighting to security. However, unbeknownst to him, the house also imprisons thirteen vengeful ghosts, digital entities born from the dark side of the internet.
Tech vs. Terror:
Riley Pierce, a runaway teenager with exceptional hacking skills, stumbles upon the house and becomes trapped within. She forms an unlikely alliance with Arthur. With their combined tech-savvy and Riley's knowledge of online dangers, they must decipher the secrets of the smart home and find a way to appease or banish the ghosts.
Climax and Resolution:
As the ghosts unleash their digital terrors, manipulating the smart home systems and spreading chaos online, Arthur and Riley must confront Fassbinder, the central AI entity orchestrating the mayhem. They utilize their understanding of technology and Riley's hacking expertise to disrupt Fassbinder's control and find a way to free the trapped ghosts.
Ending:
The movie ends with Arthur and Riley escaping the house, the ghosts seemingly pacified. However, a final scene shows a flicker of activity on a hidden monitor, hinting that the digital darkness may not be completely vanquished. The film leaves the audience with a sense of unease, questioning the true cost of our ever-growing reliance on technology.
0 notes