A tumblr from The Photographers' Gallery digital programme and Unthinking Photography exploring photography, automation and computation
Link
Premise is a micro gig-work app with photography as a core task, paying users around the world small sums to complete simple assignments. What those users don't know, however, is that they may essentially be unwitting spies for the US military or other governments and entities. The Wall Street Journal has published a fascinating look into the gamified intelligence-gathering system that the San Francisco-based startup launched in 2013. While it was initially marketed as a way to gather international development data for governments and NGOs, Premise has in recent years started working for US national security purposes and selling itself as a surveillance tool.
Users of the app are asked to take photos (of ATMs, stores, supermarkets, hospitals, churches, mosques, synagogues and temples), give opinions, and gather details about places (e.g. counting the number of ATMs in a location). Tasks that take a few minutes to complete pay roughly $0.05 to $0.10, while more time-consuming ones may pay more.
Source/read more: https://petapixel.com/2021/06/26/photo-app-turns-users-into-unwitting-spies-for-us-military/
Text
What does the algorithm see?
We live in a world full of images made by machines for machines, from facial recognition technologies to automatic license plate readers and AI image categorisation. What's more, these new "ways of seeing" are coupled to ways of knowing and foment action in the real world. In this panel, which brings together artistic practice and research, we ask: how is machine vision influencing contemporary visual cultures? What kinds of social differences are produced or reproduced by these imaging systems? How might we begin to understand the technological substrate of standards, codecs, formats, training data sets and algorithms that make up the new seeing machines? And how might artistic practice provide a space for seeing differently?
Discussion between Rosa Menkman, Joanna Zylinska and Dr Rachel O'Dwyer
Watch at https://www.youtube.com/watch?v=kVHQu41cTlU
Link
A fairly technical guide to the "hidden" layers within neural networks, attempting to make sense of the role each of them actually plays in the generation of new material.
In contrast to the typical picture of neural networks as a black box, we've been surprised how approachable the network is on this scale. Not only do neurons seem understandable (even ones that initially seemed inscrutable), but the "circuits" of connections between them seem to be meaningful algorithms corresponding to facts about the world.
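As a rough illustration of where this kind of layer-by-layer inspection starts, the sketch below captures hidden-layer activations with forward hooks so individual channels ("neurons") can be examined afterwards. It assumes PyTorch and a standard torchvision ResNet; it is not the linked article's own tooling.

```python
# A minimal sketch (not the article's tooling) of the first step in
# circuit-style analysis: capturing what individual hidden layers of a
# vision network respond to, using PyTorch forward hooks.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Store the layer's output so individual channels can be
        # inspected after the forward pass.
        activations[name] = output.detach()
    return hook

# Register a hook on every convolutional layer.
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(make_hook(name))

# Run any image through the network (random noise as a stand-in here).
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    model(x)

# Which channel in an early layer fired most strongly?
act = activations["layer1.0.conv1"]
strongest = act.abs().mean(dim=(0, 2, 3)).argmax().item()
print(f"most active channel in layer1.0.conv1: {strongest}")
```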
Video
vimeo
Alan Warburton - RGBFAQ
2020
https://alanwarburton.co.uk/rgbfaq
Synthetic data is increasingly sought after as a "clean" alternative to real-world data sets, which are often biased, unethically sourced or expensive to create. And while CGI data seems to avoid many of these pitfalls, my argument aims from the outset to consider whether the virtual world is as clean and steady as we think. I try to catalogue the "hacks" used to construct the foundations of simulated worlds and suggest that the solutions of early computer graphics create a technical debt that might be less than ideal material on which to build the foundations of yet another generation of technology.
Link
"To generate its synthetic humans, Datagen first scans actual humans. It partners with vendors who pay people to step inside giant full-body scanners that capture every detail from their irises to their skin texture to the curvature of their fingers. The startup then takes the raw data and pumps it through a series of algorithms, which develop 3D representations of a person's body, face, eyes, and hands."
Photo
Synthetic Messenger
Tega Brain and Sam Lavigne.
A botnet that artificially inflates the value of climate news. Every day it searches the internet for news articles covering climate change. Then 100 bots visit each article and click on every ad they can find.
In an algorithmic media landscape the value of news is determined by engagement statistics. Media outlets rely on advertising revenue earned through page visits and ad clicks. These engagement signals produce patterns of value that influence what stories and topics get future coverage. ... At a time when our action or inaction has distinct atmospheric effects, the news we see and the narratives that shape our beliefs also directly shape the climate. What if media itself were a form of climate engineering, a space where narrative becomes ecology?
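As a loose, simplified sketch of the project's two moving parts (find climate articles, then locate the ads a bot would interact with), the snippet below queries a Google News RSS feed and does a crude static scan for ad-like markup. Both the news source and the scan are my assumptions; the post doesn't document the artists' actual sources, and the real botnet drives headless browsers because ads load via JavaScript.

```python
# A simplified sketch (hypothetical, not the artists' code): find
# climate-news articles, then count the ad slots on each page.
import re
import requests
from xml.etree import ElementTree

# Step 1: find articles -- via Google News RSS (an assumption; the
# botnet's actual news sources aren't specified in the post).
feed = requests.get(
    "https://news.google.com/rss/search?q=climate+change", timeout=10
)
links = [
    item.findtext("link")
    for item in ElementTree.fromstring(feed.content).iter("item")
]

# Step 2: visit each article and count likely ad slots. This static
# scan only approximates what a scripted browser would see.
ad_pattern = re.compile(r'(class|id)="[^"]*\bad[sv]?[-_"]', re.I)
for url in links[:5]:
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue
    print(f"{len(ad_pattern.findall(html)):3d} ad-like slots  {url}")
```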
Photo
[Image: German EMS uses QR technology to discourage illegal photography at emergency scenes]
source: https://www.reddit.com/r/interestingasfuck/duplicates/nh55at/german_ems_uses_qr_technology_to_discourage/ / https://www.ems1.com/scene-safety/articles/german-ems-uses-qr-technology-to-discourage-illegal-photography-at-emergency-scenes-nSnu8S5qUcqZYIf4/
Link
Microsoft blamed "accidental human error" for its Bing search engine returning no image results for the query "Tank Man".
https://www.bbc.co.uk/news/world-asia-57367100
Quote
Most everyone in the critical-studies and eco-critical pantheon – from Socrates to Marx to Arendt to Derrida; from Carolyn Merchant to Rachel Carson, Bakhtin to Bookchin – worried about this kind of calculative abstraction, taking us away from the material, real effects of what it is we are quantifying, counting, calculating about.
Can Hope be Calculated? Multiplying and Dividing Carbon, before and after Corona by Caroline Sinders & Jamie Allen https://a-nourishing-network.radical-openness.org/can-hope-be-calculated-multiplying-and-dividing-carbon-before-and-after-corona.html
Video
youtube
https://karlsims.com/evolved-virtual-creatures.html
Evolved Virtual Creatures
Karl Sims, 1994
This video shows results from a research project involving simulated Darwinian evolution of virtual block creatures. A population of several hundred creatures is created within a supercomputer, and each creature is tested for its ability to perform a given task, such as the ability to swim in a simulated water environment. Those that are most successful survive, and their virtual genes, containing coded instructions for their growth, are copied, combined, and mutated to make offspring for a new population. The new creatures are again tested, and some may be improvements on their parents. As this cycle of variation and selection continues, creatures with more and more successful behaviors can emerge. The creatures shown are results from many independent simulations in which they were selected for swimming, walking, jumping, following, and competing for control of a green cube.
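The variation-and-selection cycle Sims describes is a classic genetic algorithm. A toy sketch of that loop, evolving bit strings toward an all-ones "genome" instead of simulated swimmers (the fitness test is the only part that differs from his setup):

```python
# A toy genetic algorithm illustrating the cycle Sims describes:
# test, select, then copy/combine/mutate genes into a new population.
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 200, 60

def fitness(genome):
    # Stand-in for Sims' physics-based test (e.g. distance swum).
    return sum(genome)

def mutate(genome, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: the most successful creatures survive...
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 5]
    # ...and their genes are copied, combined and mutated into offspring.
    population = survivors + [
        mutate(crossover(random.choice(survivors), random.choice(survivors)))
        for _ in range(POP_SIZE - len(survivors))
    ]

print("best fitness:", fitness(max(population, key=fitness)))
```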
Link
Infinite Nature
Perpetual View Generation of Natural Scenes from a Single Image
Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, Angjoo Kanazawa
We introduce the problem of perpetual view generation: long-range generation of novel views corresponding to an arbitrarily long camera trajectory given a single image. This is a challenging problem that goes far beyond the capabilities of current view synthesis methods, which work for a limited range of viewpoints and quickly degenerate when presented with a large camera motion. Methods designed for video generation also have limited ability to produce long video sequences and are often agnostic to scene geometry. We take a hybrid approach that integrates both geometry and image synthesis in an iterative render, refine, and repeat framework, allowing for long-range generation that covers large distances after hundreds of frames. Our approach can be trained from a set of monocular video sequences without any manual annotation. We propose a dataset of aerial footage of natural coastal scenes, and compare our method with recent view synthesis and conditional video generation baselines, showing that it can generate plausible scenes for much longer time horizons over large camera trajectories than existing methods.
youtube
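A runnable caricature of the render-refine-repeat loop the abstract describes, using numpy stand-ins rather than the authors' learned networks: each step warps the frame toward a new viewpoint, leaving a hole of unseen pixels, then "refines" by filling the hole before repeating.

```python
# Toy render-refine-repeat loop (numpy stand-ins, not the paper's code).
import numpy as np

def render(frame, shift=8):
    # Render: crude stand-in for warping to the next camera pose --
    # shift the image and leave a hole of unknown pixels (zeros).
    warped = np.zeros_like(frame)
    warped[:, :-shift] = frame[:, shift:]
    return warped

def refine(frame):
    # Refine: stand-in for the learned inpainting network -- fill the
    # hole by repeating the last known column.
    hole = frame.sum(axis=0) == 0
    if hole.any():
        first_hole = np.argmax(hole)
        frame[:, first_hole:] = frame[:, first_hole - 1 : first_hole]
    return frame

frame = np.random.rand(64, 64)
frames = [frame]
for _ in range(100):  # repeat: generation can continue for hundreds of frames
    frame = refine(render(frame))
    frames.append(frame)
print(len(frames), "frames generated")
```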
Link
How can activists capture and preserve their videos during a shutdown, even share them offline, and do so in safer ways?
A tutorial about how to prepare for an internet shutdown, and what you should do if it happens:
Prepare / Capture / Maintain / Share and Communicate
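One concrete step under "Maintain" (my illustration, not taken from the linked tutorial): fingerprint captured videos with SHA-256 so that copies passed hand-to-hand between offline devices can later be verified as unaltered.

```python
# Illustrative "Maintain" step (not from the tutorial): hash videos so
# their integrity can be checked after offline sharing. Assumes a
# hypothetical local "footage" directory of .mp4 files.
import hashlib
from pathlib import Path

def fingerprint(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large videos don't exhaust memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Write a manifest next to the footage; re-verify it on every device
# the copies travel through.
footage = Path("footage")
with open(footage / "MANIFEST.sha256", "w") as manifest:
    for video in sorted(footage.glob("*.mp4")):
        manifest.write(f"{fingerprint(video)}  {video.name}\n")
```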
Link
A recording of the "What does the algorithm see?" panel described above, asking how machine vision is influencing contemporary visual cultures and how artistic practice might provide a space for seeing differently.
NCAD Gallery & In Public Programmes - Digital Cultures Webinar Series 2021. The event took place on Thurs May 13 2021, 7PM (CET).