julianns6
julianns6 · 2 years ago
The Unethical Use of AI for Artificial Imagery
The world continues to evolve technologically, as mentioned in the previous blog post. In the 21st century, the vast entity known as the Internet has contributed to human development in so many ways that this blog post could not do them all justice. One of the more recent developments, best described as an expansion of the virtual world, is artificial intelligence. With artificial intelligence, one can now create deepfakes and AI-generated voices (in this blog post the combination of these two AI tools will be referred to as Artificial Imagery). Artificial Imagery is newly charted territory, and it has generated much discourse over how it should be approached: whether to let it continue to develop freely or to impose some form of restriction on this new technology. Artificial intelligence is an expansive topic; in this blog post, however, I will argue that during this phase of technological development, laws should be created to limit Artificial Imagery, as its potential for unethical use could bring society more harm than good through the spread of untruths, distress, and the framing of individuals.
As mentioned in the previous blog post, Artificial Imagery has enabled the creation of videos in which a person's entire identity is replicated. Although it is currently fairly easy for the average Gen Z or millennial viewer to tell whether a video has been manipulated, the technology may already be advanced enough that older generations have a hard time distinguishing real from fake. This rate of improvement is cause for concern if safeguards do not accompany it. As the technology continues to improve, many have asked what is stopping someone (a company, a government campaign, malicious trolls, etc.) from creating Artificial Imagery of an individual, complete with an AI-generated voice, behaving in a certain way, saying obscenities, or committing illegal acts, so that people question their character. Not only could an individual's life be ruined (even if proven innocent, people will most likely still harbor doubt or retain the artificial image in their minds), but an already divided America could experience further tension if politicians used AI for slander campaigns against opponents. Furthermore, once Artificial Imagery becomes known ammunition for fake news, people who did commit wrongdoing could claim innocence by suggesting their actions never happened and were AI-generated, adding to the obstruction of truth in the media and in everyday conflicts. Artificial Imagery has also become a weapon for objectifying individuals, especially women. For example, “Sensity AI, a research company that has tracked online deepfake videos since December of 2018, has consistently found that between 90% and 95% of them are non consensual porn. About 90% of that is non consensual porn of women” (Hao).
Considering this statistic, one can deduce that Artificial Imagery is used as a form of sexual harassment and a violation of privacy, both for female celebrities (whose videos and appearance are public) and for everyday women: Helen Mort, for example, had her personal pictures leaked to create deepfake pornography, meaning someone within her social circle had released those images to be manipulated through Artificial Imagery (Hao). Given all this, laws should without doubt be established to halt this situation, especially when the scenarios detailed above could easily be categorized as defamation.
Some may argue that Artificial Imagery is just another form of parody, and that it is up to the individual to realize whether something has been manipulated. To that, I would use the words of YouTuber Mutahar Anas, who states: “we all can understand that there is an emotional component when you see nude photos or videos of yourself that you didn't make versus somebody who's clearly authored a fanfic (or any kind of piece of media that borders on parody) piece while we all know that it's authored by somebody else and it has no involvement with the actual person that's being thrown into the parody” (Anas). He goes on to say that while a video or image may be obviously manipulated to “internet veterans,” the average person, and especially the older generations, finds it increasingly hard to distinguish what is real. Others proclaim that placing legal restrictions on Artificial Imagery will not deter people from making it, as the internet is such a behemoth. Once again, Anas has a strong counter: “Computer hacking is an illegal crime. Tampering and manipulating networks is a crime, it doesn't mean the software isn't floating around and is super easy to monitor. Having laws just means you can deter a majority of individuals and have a framework in place so if somebody is abusing things you can jump in and actually send and enforce that law against that individual. It's obviously present so you can have some deterrence” (Anas). Lastly, others contend that Artificial Imagery is not all bad and therefore should not carry legal ramifications. For example, Artificial Imagery has been of great benefit to the movie industry, allowing studios to replicate actors who were unable to perform in certain scenes due to various circumstances (Howard). To me, however, these positives are starkly superficial, at the very least from a utilitarian standpoint, with the cons heavily outweighing the pros.
I believe that placing limitations, along with legal ramifications, on the creation of Artificial Imagery deemed harmful has far more positive outcomes than letting AI become weaponized for the sake of making better media. To be clear, I am not suggesting that Artificial Imagery be banned outright; what should be restricted is non-consensual Artificial Imagery.
In summary, as the virtual world evolves, it is crucial that we evolve with it. If left unmonitored, Artificial Imagery will be an extremely harmful weapon, spreading misinformation and inflicting harm, especially on women. For this reason, in this period of development, laws should be created to limit Artificial Imagery, as its unethical uses could lead to the spread of fake news, distress, and the framing of individuals. While this blog post did not expand on how such laws should be executed, continuing the conversation is crucial to putting a stop to the defamatory and exploitative sides of Artificial Imagery.
Works Cited
Anas, Mutahar. “Should Deepfakes Be Illegal?” SomeOrdinaryGamers, YouTube, 6 Feb. 2023, https://www.youtube.com/watch?v=cVd5HzQK9LA&t=1070s. Accessed 14 Apr. 2023.
Hao, Karen. “Deepfake Porn Is Ruining Women’s Lives. Now the Law May Finally Ban It.” MIT Technology Review, 12 Feb. 2021, https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/. Accessed 14 Apr. 2023.
Howard, Karen. “Deconstructing Deepfakes-How Do They Work and What Are the Risks?” U.S. GAO, U.S. Government Accountability Office, 23 June 2022, https://www.gao.gov/blog/deconstructing-deepfakes-how-do-they-work-and-what-are-risks#:~:text=Deepfakes%20rely%20on%20artificial%20neural,and%20reconstruct%20patterns%E2%80%94usually%20faces. Accessed 14 Apr. 2023.
julianns6 · 2 years ago
Artificial Intelligence Coming To Life: The Future of Society with AI Generated Voices and Deepfakes
Whether on the social, technological, political, or medical landscape, the world we live in is constantly evolving. There is no denying that the way we live now has deviated substantially from when our parents and grandparents were our age. A catalyst for the large strides made in our world today is the monumental entity that is the Internet. Discussion of the extent to which the internet has “evolved” humanity, or influenced its advancement in a positive direction, can certainly be vast. A less nuanced debate, however, can be had around the emergence of AI-generated voices and deepfakes, both owed to the development of Artificial Intelligence (AI).
Although many have heard of AI, or at the very least the term, especially with it trending lately in the media, most people do not fully grasp what Artificial Intelligence actually means. Benedict Dellot of the RSA defines AI as “the simulation of human intelligence processes by machines, especially computer systems” (Dellot). Broad as that definition is, AI has been able to replicate, and in some respects surpass, human abilities in an array of areas, most notably writing, art, and even replicating our voices and appearances. A single blog post cannot cover every topic associated with artificial intelligence; as such, this post will focus solely on the last two functions: replicating voices and appearances.
In the past, people could splice recordings of an individual's voice and rearrange them to make it sound as if the person said something completely different; it is, however, generally quite easy to tell when audio has been spliced. Synthetic voices have existed for roughly a hundred years, but only in the last decade has AI developed the capacity to replicate a specific human voice by taking in samples of that person speaking. From this input, the AI can then recite any text provided in the voice of the individual fed into the system. As with splicing, AI-generated voices were fairly easy to distinguish a few years ago, as the generators would mispronounce words and sound very monotone. In recent years, however, many AI voice generators have advanced to the point that even emotion can be heard in the artificial voice. On platforms such as TikTok, videos of AI-generated celebrity voices have gone viral: many find it amusing to watch high-status politicians such as Joe Biden, Donald Trump, and Barack Obama “say” unusual things, including obscenities, precisely because that is not what you would associate with such figures.
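To make the difference concrete, here is a minimal sketch, in plain Python, of what old-fashioned splicing amounts to: cutting windows out of an existing recording and gluing them back together. The sample rate and timestamps are invented example values, and real audio tools operate on actual waveform files, but the sketch shows why spliced audio is detectable: prosody breaks at every seam, because nothing new is synthesized.

```python
# Illustrative sketch (not a real voice tool): splicing only rearranges
# segments that were already recorded, which is why it is easy to spot.
# Sample rate and timestamps below are made-up example values.

SAMPLE_RATE = 16_000  # samples per second (assumed)

def splice(samples, segments_ms):
    """Concatenate chosen [start, end) windows (in milliseconds) of a recording."""
    out = []
    for start_ms, end_ms in segments_ms:
        a = start_ms * SAMPLE_RATE // 1000
        b = end_ms * SAMPLE_RATE // 1000
        out.extend(samples[a:b])  # hard cut: no smoothing at the seam
    return out

# One second of stand-in "audio": sample indices in place of amplitudes.
recording = list(range(SAMPLE_RATE))

# Rearranged "words": take 500-750 ms first, then 0-250 ms.
fake = splice(recording, [(500, 750), (0, 250)])
print(len(fake))  # 8000 samples = 0.5 s of rearranged audio
```

An AI voice clone, by contrast, generates entirely new waveforms from a learned model of the speaker, which is why modern fakes need no seams at all.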
Just as AI can replicate an individual's speech, it can also replicate a human's appearance. This manipulation of imagery is referred to as a deepfake; as Guardian science editor Ian Sample puts it, deepfakes are “the 21st century’s answer to Photoshopping”: they “use a form of artificial intelligence called deep learning to make images of fake events, hence the name deepfake” (Sample). Deepfake videos generally work by face swapping: footage of someone who looks similar to the intended person serves as the “model,” and the target's face is swapped onto it. Deepfakes have been of great benefit to the movie industry, allowing studios to replicate actors who were unable to perform in certain scenes due to various circumstances. A major controversy, however, has been the amount of pornographic deepfake content that has emerged, most commonly celebrities' faces swapped onto pornographic videos. There is extended debate around this use, as the majority, if not all, of these deepfakes are non-consensual, and it can be emotionally distressing to see your own face in pornographic videos, especially as they become more realistic.
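The face-swap architecture the GAO post describes (Howard) is typically an autoencoder: one shared encoder learns a compact representation of "a face," and a separate decoder per person learns to reconstruct that person's face from it. The sketch below shows only how the pieces connect; real systems train deep networks on thousands of frames, whereas these "networks" are untrained random matrices and the dimensions are invented.

```python
# Conceptual sketch of the shared-encoder / two-decoder deepfake design.
# Untrained random matrices stand in for trained neural networks.
import numpy as np

rng = np.random.default_rng(0)
FACE_DIM, CODE_DIM = 1024, 64  # flattened image size, latent size (made up)

encoder   = rng.standard_normal((CODE_DIM, FACE_DIM))  # shared by both people
decoder_a = rng.standard_normal((FACE_DIM, CODE_DIM))  # reconstructs person A
decoder_b = rng.standard_normal((FACE_DIM, CODE_DIM))  # reconstructs person B

def swap_face(face_of_a):
    """Encode A's pose and expression, then decode it *as person B*."""
    code = encoder @ face_of_a   # pose + expression in the shared latent space
    return decoder_b @ code      # B's face wearing A's expression

frame = rng.standard_normal(FACE_DIM)  # one flattened video frame of person A
fake_frame = swap_face(frame)
print(fake_frame.shape)  # (1024,): same size as the input frame
```

The trick is entirely in the training: because the encoder is shared, it is forced to capture what is common to both faces (pose, expression, lighting), so routing A's code through B's decoder produces B's face performing A's movements.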
As AI continues to improve, both forms of digital manipulation have become more realistic, and people can now create deepfake videos paired with AI-generated voices. This combination has enabled videos in which a person's entire identity is completely replicated. While the technology has achieved amazing feats, such as tributes that in a way “bring back to life” people who have passed away, it has also caused a lot of distress about the future of misinformation. As the technology keeps improving, many have asked what is stopping someone (a company, a government campaign, malicious trolls, etc.) from creating a deepfake of an individual, complete with an AI-generated voice, saying obscenities or committing illegal acts, so that people question their character. Others have begun to imagine scenarios in which people who did commit wrongdoing claim innocence by suggesting their actions never happened and were AI-generated.
For those who fear falling victim to this new era of misinformation, there are thankfully still ways to recognize an AI-generated voice or a deepfaked video. As previously mentioned, AI-generated voices will generally sound more monotone, lacking the emotion and vocal range of the actual person (and if the individual is saying something beyond preposterous, it is most likely not real). Deepfakes are still developing, so a quick look at a video is often enough to tell whether it has been manipulated; in cases of uncertainty, check for facial tics or a lack of blinking, as AI has not yet mastered such facial details. It is also heartening that Microsoft and Intel have experimented with software to detect deepfakes. Lastly, and most importantly, remember that the internet is full of misinformation: do not automatically accept what you see, and do not jump to conclusions.
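The blinking cue above can be turned into a toy heuristic. Suppose a face-tracking tool reports a per-frame "eye openness" score (many do, though the exact API varies); a clip whose blink rate is implausibly low is then worth a closer look. Every threshold below is invented for illustration, and real detectors combine many such signals rather than relying on one.

```python
# Toy deepfake heuristic: early deepfakes blinked far less than real people.
# All thresholds and the input scores are made-up example values.

def count_blinks(openness, closed_below=0.2):
    """A blink = the eye crossing from open to closed."""
    blinks, was_closed = 0, False
    for score in openness:
        is_closed = score < closed_below
        if is_closed and not was_closed:
            blinks += 1
        was_closed = is_closed
    return blinks

def looks_suspicious(openness, fps=30, min_blinks_per_min=6):
    """Flag clips whose blink rate falls below a plausible human minimum."""
    minutes = len(openness) / fps / 60
    return count_blinks(openness) < min_blinks_per_min * minutes

# One simulated minute of video: eyes open (0.9) with only two brief blinks.
scores = [0.9] * 1800
scores[300] = scores[1200] = 0.05
print(looks_suspicious(scores))  # True: only 2 blinks in a minute
```

A heuristic like this is easily fooled once generators learn to blink, which is exactly why the arms race keeps pushing toward trained detectors like the Microsoft and Intel efforts mentioned above.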
In summary, we are in an era of change. Technology is advancing continuously, and it is important to be conscious of how our world is evolving. As things continue to shift, one can only wonder what governments will do about this new era of misinformation. Will laws be cemented to curb the spread of deepfakes? Or will governments turn a blind eye because the topic is so complex and new? Only time will answer these heavy questions; for now, the best one can do is stay informed.
Works Cited
Dellot, Benedict. “What Is Artificial Intelligence Anyway?” RSA, The RSA, 15 Dec. 2016, https://www.thersa.org/blog/2016/12/what-is-artificial-intelligence-anyway. 
Howard, Karen. “Deconstructing Deepfakes-How Do They Work and What Are the Risks?” U.S. GAO, U.S. Government Accountability Office, 23 June 2022, https://www.gao.gov/blog/deconstructing-deepfakes-how-do-they-work-and-what-are-risks#:~:text=Deepfakes%20rely%20on%20artificial%20neural,and%20reconstruct%20patterns%E2%80%94usually%20faces. 
“A Brief History of AI Voices.” Matinée Multilingual, 10 Mar. 2023, https://matinee.co.uk/blog/a-brief-history-of-ai-voices/.
Sample, Ian. “What Are Deepfakes – and How Can You Spot Them?” The Guardian, Guardian News and Media, 13 Jan. 2020, https://www.theguardian.com/technology/2020/jan/13/what-are-deepfakes-and-how-can-you-spot-them. 