#like an author/artist tagging and adding as many things as possible to get the algorithm’s attention
mirensiart · 2 months ago
Note
Oh the critique was mainly about Awakening and why I personally feel the timeline with Archanea is so confusing, if it’s alright to share?
oh yeah! though to be fair, I do feel a lot of stuff about awakening is mostly just references for nostalgia’s sake and they didn’t think about it too much cause awakening was supposed to be the last game ever in the franchise so they probably just went “eh fuck it” and added a lot of stuff just because (example: Priam lmao)
like Lucina cosplaying as marth is 100% nostalgia based and meant as fanservice cause they really didn’t think they would get a new game lol so I personally wouldn’t see awakening’s plot devices as too deep?
mariacallous · 2 years ago
Text
To understand how the American media landscape fractured, one must first understand the brands that forged it. According to Faris Yakob, cofounder of creative consultancy Genius Steals and author of Paid Attention, advertisers created the neutral “view from nowhere” voice in media. In the 19th and 20th centuries, national brands looking to grow customers wouldn’t partner with biased publications. But everything changed when ad tech arrived.
“People started tagging their digital media buys so it wouldn’t appear next to topics like homosexuality, or Covid, to avoid getting into clusters,” Yakob says. “But that means that the news isn’t being funded. If you can pick and choose what topics to fund in news, you can distort what is being reported on, to some degree.”
That distortion, like the US Federal Communications Commission’s abolition of the fairness doctrine in 1987, is part of how America got into this mess. Similar to content recommendation algorithms, audience profiles in digital marketing created micro-targeted ads. Those ads are more valuable on multiple screens. Media executive Euan McLeod recalls growing up when “there was no choice” but to watch what his parents were watching. Now each person in a household might be watching something wildly different, and the shared experience has dissolved. Isolated artists are creating for isolated audiences. Is it any wonder that generative AI seems poised to tailor entertainment to audiences of one?
In this world, we can all be George Lucas, using technology to create special editions. Rick gets on the plane with Ilsa. Jack fits on the door with Rose. Ben Solo lives. As Marvel Comics writer Anthony Oliveira says, Andy Warhol was fascinated by the fact that people everywhere drank the same Coke. But the allure of AI content generation, he says, is the same as the Coca-Cola Freestyle: filling your own cup with someone else’s flavors.
But when everyone can just request the narrative path they want, opportunities to hear other people’s stories greatly diminish. “That is a very sad world to live in, because how else are we gonna be conveying our deepest hopes and wishes, what we think should be a vision of the world we want to live in, what we should worry about?” Yang says. “This is what story and art is for.”
Using AI to sanitize content in regions where certain subjects are banned is already possible, especially if actors yield likeness rights. Generative AI means that studios could edit or change the content of some films without consulting the people who signed a contract based on a script, and the only thing stopping them is the possibility of a defamation suit. It sounds unlikely, until you remember that multiple versions of Spider-Man: Across the Spider-Verse appeared in cinemas.
And animation is an apt comparison: Most changes to entertainment production have made film and TV more like animation or video game development, not less. With current technology, actors can be little more than action figures smashing together, as weightless as they are sexless. With AI, the actors need never leave the trailer. Or exist.
“[Studios will] say it’s for the insurance,” says production designer Blass, suggesting a “Paul Walker scenario” in which a deceased actor’s performance needs generating, because that performance is one of the terms of the film’s business insurance. But in reality, these likenesses could be used to do things that actors would rather not—whether it’s a dangerous stunt or a sex scene.
Generative AI could also be used to edit films in real time, responsive to data-brokered preferences, with algorithms running A/B tests on how much nudity you want based on the customer profile you most closely match.
If this sounds familiar, that’s because it is: In the 1990s, Blockbuster Video refused to stock films like Natural Born Killers and The Last Temptation of Christ. But that tradition goes back even further. Otherwise known as the Hays Code, the Production Code was an industry standard of self-censorship guidelines for major US studios from 1930 to 1968, when it was replaced by the movie ratings system. The Code influenced everything from the Comics Code to parental advisory warnings to video game ratings. It’s why titles from major studios during that period don’t depict graphic violence. It’s also why they lack out-and-proud queer and interracial relationships. But today, a revived Production Code might have very different guidelines. For example, the Pentagon recently announced it would no longer offer technical support to filmmakers who censor their films for the Chinese market.
When I ask McLeod if he thinks America will ever re-adopt the Production Code, he’s unequivocal: “Absolutely. Everything goes in cycles.”
Hollywood’s Future Belongs to People—Not Machines
hashtag--triggered · 8 years ago
Text
Meaning Behind the Images?
In “On the Invention of Photographic Meaning,” Allan Sekula discusses the idea of “photographic discourse,” defining it as limited in its ability to be exchanged as information. This draws attention to the idea that any form of communication carries some meaning behind it. Sekula goes on to describe photographic discourse as communicating not only through what is physically printed, but also through the implied meanings behind each image. He then discusses the nature of photographic language, and whether it is an implied or an authoritative form of discourse.
Sekula says to read not only the shapes within the image, but the photograph’s status as well. He refers to the works of photographers and critics, and argues against the idea that photography holds the status of authority and seer. Sekula argues that there are many situations in which photographs take on different representational roles as part of their “true value.” He then points to Stieglitz’s The Steerage (1907) and Hine’s Immigrants Going Down A Gangplank, New York (1905) as examples of how photographic discourse can both underline the true meaning of an image and contradict it.
Compared to the works of Stieglitz, Hine’s work focused on social politics, and his images were more straightforward. Hine wanted to explore the sociopolitical landscape through imagery, which supports the idea that a photograph’s informative function represents its true status. Although Stieglitz’s works resembled Hine’s in subject matter, their function differed: Stieglitz’s focus was on retaining the aura of the work and the structural aspects of each piece. Both images contribute to the idea that photographic discourse exists on a connotative level; one emphasizes time, place, and social context, while the other emphasizes the image as an art form. Sekula goes on to discuss an art discourse that contributes to the limiting concept of status.
Because of photographic discourse, readers must come to images with certain presumptions in order to push back against the idea that the image should be read as a mere mechanical reproduction. Sekula places emphasis on the viewer, since the way the artist intends his work should not dictate how the viewer reads the image. However, this thinking differs greatly from how images are treated today.
“Invisible Images,” written by Trevor Paglen, begins by arguing that advanced technology has caused visual culture to become invisible. Paglen claims that since the majority of images are now created by machines to be read by other machines, people are left looking in from the outside. He argues that the prevalence of invisible images is beginning to have such a significant impact on daily life that it may influence our actions without our being fully aware of it. Theorists have long been skeptical of digital images, and more recently of their implications on social media. More than that, Paglen believes the real issue does not involve humans’ interpretation of photos at all.
He argues that the real concern is how effortlessly machines interpret these photos. Since photos are shot on digital devices, a photo is essentially just data: whether or not a human ever looks at it, the file remains machine-readable. He goes on to explain how image-interpreting software such as license plate readers is now present everywhere: on police cars, buildings, and highways. The companies that operate this software are steadily building databases of wide-ranging information. Paglen explains that cameras in malls let the operator determine who is looking at what item and for how long, and can even estimate the emotional state of the person being observed. The more information these systems obtain, the more intelligent they become.
The same applies to images shared on social media such as Facebook. The more photos are posted, the easier it becomes for software systems to identify locations, people, objects, and other things of the sort; whether or not a human ever views the picture, the data is collected and stored either way. The article goes on to explain that machine readability rests entirely on mathematics and algorithms. These systems are only as smart as the information fed into them, which means they are not fully capable of interpreting images the way a human would. Examples include artificial-intelligence systems labeling a person holding any object as a person holding a cellphone, or labeling an image of a Black person as an ape.
Paglen explains that in order to understand machine-to-machine visual culture, one must unlearn how to see like a human. Features, classifiers, training sets, and key points make up the cognitive machinery of these systems, which is why it does not make sense to assume they can interpret like humans. He goes on to discuss Vigilant Solutions and its contracts with Texas governments: the company gave police its own license-plate-reading systems and shared its database containing billions of vehicle locations, and in return received records of outstanding arrest warrants. The concern here is the amount of power these systems hold and how they affect people when they are sold for capital. Although the systems themselves may be neutral, governments and private companies gain power at the expense of the public. For example, when a photo of a woman drinking a beer is posted on Facebook, such systems could analyze the photo, notify her insurance provider, and adjust her premium accordingly. Situations like this will occur more frequently and may carry long-term consequences for the public.
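The feature/classifier framing above can be made concrete with a toy sketch. This is pure illustration with invented 4x4 "images" (no real vision system works at this scale): a digital photo is just a grid of numbers, a "feature" is arithmetic over those numbers, and a "classifier" is a rule that is only as good as the examples it was tuned on.

```python
# Hypothetical 4x4 grayscale images: each pixel is a number, 0 = black, 255 = white.
daylight_photo = [
    [200, 210, 220, 215],
    [190, 205, 225, 230],
    [180, 200, 210, 220],
    [170, 195, 205, 215],
]
night_photo = [
    [10, 20, 15,  5],
    [30, 25, 10, 10],
    [ 5, 15, 20, 25],
    [10, 10,  5, 15],
]

def mean_brightness(image):
    """One 'feature': the average pixel value across the grid."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def classify(image, threshold=100):
    """A one-rule 'classifier' tuned on the two examples above.
    It knows nothing beyond its threshold, so anything unlike its
    training data (a grey overcast scene, say) still gets forced
    into one of the two labels."""
    return "day" if mean_brightness(image) > threshold else "night"

print(classify(daylight_photo))  # day
print(classify(night_photo))     # night
```

The point of the sketch is the essay's: the system never "sees" anything, it only computes over data, and its blind spots are inherited directly from whatever it was tuned on.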
In response to this, Paglen writes that people are attempting to create ways to challenge these systems. Some have come up with makeup techniques to throw off facial recognition software, or designed pockets that prevent cell phones from broadcasting their location. Creative as these ideas are, they will not be enough to outsmart the systems: the more such techniques are used, the smarter the systems get. Invisible images interfere with everyday life, and in order to challenge their power, we must first come to a better understanding of how these systems operate.
These two articles show a great contrast between how photos were used then and how they are used now. Sekula’s main point is that photos do not fall into separate categories of either aesthetic or representational; they are instead a combination of the two, depending on how the viewer sees the image. Since most photos today are digital and machine-read, much of their aesthetic dimension is lost. Many images now contain metadata that lets just about anyone find out when, where, and with what device an image was taken. Tags added to images also act as implied text describing what is going on in the photo.
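As a loose illustration of that point, the sketch below shows how the when/where/what of an image can be reassembled from its metadata and tags by anyone, human or machine. The field names mirror common EXIF tags, but every value here is invented for the example, not taken from any real photo.

```python
# Hypothetical EXIF-style metadata riding along with a digital image.
photo_metadata = {
    "DateTimeOriginal": "2017:03:12 14:32:05",  # when it was taken
    "GPSLatitude": 40.7128,                     # where it was taken
    "GPSLongitude": -74.0060,
    "Make": "Apple",                            # what it was taken with
    "Model": "iPhone 6",
}
photo_tags = ["#beach", "#friends", "#sunset"]  # implied text added by the poster

def describe(meta, tags):
    """Reassemble the when/where/what a reader can pull off the file."""
    when = meta["DateTimeOriginal"].split(" ")[0].replace(":", "-")
    where = "({}, {})".format(meta["GPSLatitude"], meta["GPSLongitude"])
    device = "{} {}".format(meta["Make"], meta["Model"])
    return "Taken {} at {} with a {}; tagged {}".format(
        when, where, device, ", ".join(tags))

print(describe(photo_metadata, photo_tags))
```

None of this requires ever rendering the image itself, which is exactly why the data is so legible to machines while the photo’s aesthetic qualities go unseen.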
However, most of that data is viewed by machines, which are unable to acknowledge the beauty of the photos. Many photos lack any aesthetic quality at all, such as those taken for license plate reading or facial recognition; no human is placed anywhere in the process between taking the photo and associating it with someone. There is also a loss in the aesthetic purpose and value of photos when machines scan the images we share on social media to, possibly one day, adjust our insurance prices. Machines and technology have altered the values of the image, taking the beauty of the original for granted. When our world focuses on the convenience of technology, we fail to appreciate the values and true purpose of art and photography.