beardysoul
Beardy
5 posts
computer art
beardysoul · 4 years ago
Project Plan
My initial starting point for this project was to make a point about “Artificial Intelligence” as it is most commonly understood today (artificial neural networks), and to show an audience that this kind of “intelligence” is still nowhere near comparable to human intelligence.
I started by thinking about how a dataset used to train a neural network could be applied in the wrong context, producing strange observations from the algorithm (e.g. a neural network trained on a detailed database of cat images will necessarily make an error when asked to assess what is contained in an image of a dog).
[image: a neural network misidentifying a simple image]
This example shows a similar idea: a neural network incorrectly identifying quite a simple image.
The problem with using this as a criticism of artificial intelligence is simply that the network has not been trained thoroughly enough to recognise the image presented to it, which makes it a weak criticism. My aim is to find a fundamental difference between the way a machine thinks and the way a human thinks.
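The wrong-context failure can be sketched in a few lines. This is a toy model, not a real trained network: the class names, feature vector, and weights are all invented for illustration. The point is structural, since a softmax classifier has no way to answer "none of the above":

```python
import numpy as np

# A toy classifier that only knows about cat breeds (hypothetical labels).
# Whatever input it receives, softmax forces it to spread all of its
# confidence across the classes it was trained on.
CLASSES = ["tabby", "siamese", "persian"]

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, len(CLASSES)))  # stand-in for trained weights

def classify(features):
    """Return (label, confidence) for a 4-dimensional feature vector."""
    logits = features @ weights
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return CLASSES[int(probs.argmax())], float(probs.max())

# A "dog" feature vector still comes back as some breed of cat --
# the network cannot express that the input lies outside its training world.
dog_features = np.array([0.9, -1.2, 0.3, 2.0])
label, confidence = classify(dog_features)
print(label, confidence)
```

The error here is not a lack of training data; it is baked into the closed set of output classes, which is closer to the fundamental problem I am looking for.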
This led me to think along the lines of emotion. Humans are inherently emotional while machines are inherently rational; however, showing a machine's misunderstanding of emotion seems difficult to even begin to tackle. How would one show a fundamental lack of emotion in a thought process?
From this point, I began to think more deeply about which decisions a human would find it inappropriate to apply a fixed rationality to. I wanted to find a situation where a quantization of the decision-making process seems, at least on the surface, inappropriate. This led me down the path of moral decisions.
Is it appropriate for a computer to make moral decisions? The “Trolley Problem” is a notoriously difficult moral dilemma. If a neural network were trained to solve these problems, do we as humans believe it could make a true assessment of what we consider the moral choice? Is the quantization of a dilemma like this even appropriate? What dataset could even be said to be the “correct” one to use in such a situation? How can you quantify a life precisely enough to decide which lives are worth more or less than others?
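To make the quantization problem concrete, here is a deliberately crude sketch. Every value and category in it is invented, and that arbitrariness is exactly the point: before a machine can "solve" a trolley problem, someone has to write numbers like these down.

```python
# A hypothetical cost function for trolley-style dilemmas.
# Reducing each outcome to a number IS the moral judgement --
# the machine only compares the numbers it was given.
VALUE_OF_LIFE = {   # entirely arbitrary, which is the point
    "adult": 1.0,
    "child": 1.5,
    "elderly": 0.8,
}

def outcome_cost(people_harmed):
    """Sum the (arbitrary) values of the lives lost in one outcome."""
    return sum(VALUE_OF_LIFE[p] for p in people_harmed)

def choose(track_a, track_b):
    """Divert to whichever track has the lower cost: fixed rationality applied to morality."""
    return "a" if outcome_cost(track_a) < outcome_cost(track_b) else "b"

# The machine answers instantly and consistently; whether the answer is
# "moral" depends entirely on the numbers we fed it.
print(choose(["adult", "adult"], ["child"]))
```

The code runs without hesitation, which is precisely what feels wrong: the dilemma has been dissolved rather than answered.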
beardysoul · 4 years ago
Neuroevolution
The concept of neuroevolution is to use a genetic algorithm to evolve a neural network capable of solving a problem.
Normally, neural networks are trained by an algorithm which compares the network's “error” against the desired output and adjusts its internal weights slightly to reduce that error. Neuroevolution instead creates many neural networks (a “population”), then applies Darwinian principles, “mutating” and “evolving” the internal weights of each member of the population according to its “fitness”, generation after generation, until a suitable network has evolved.
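A minimal sketch of this population-based loop, evolving only the weights of a fixed, tiny network (no NEAT-style topology changes). The task, the population size, the mutation scale, and the generation count are all arbitrary choices for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny fixed network: 2 inputs -> 2 hidden -> 1 output,
# weights flattened into one vector (biases omitted for brevity).
N_WEIGHTS = 2 * 2 + 2 * 1  # 6 weights

def forward(w, x):
    w1 = w[:4].reshape(2, 2)
    w2 = w[4:].reshape(2, 1)
    h = np.tanh(x @ w1)
    return np.tanh(h @ w2)

# Toy task: approximate XOR. Fitness is negative squared error (higher is fitter).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def fitness(w):
    return -float(((forward(w, X) - Y) ** 2).sum())

# Evolve: rank the population, keep the fittest,
# and refill it with mutated clones of the survivors.
population = [rng.normal(size=N_WEIGHTS) for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [
        s + rng.normal(scale=0.3, size=N_WEIGHTS)
        for s in survivors for _ in range(4)
    ]

best = max(population, key=fitness)
print(fitness(best))  # closer to 0 is better
```

No gradients are computed anywhere: the only feedback is the single fitness number per network, which is what makes the approach so general.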
The example MARI/O demonstrates a complex version of a neuroevolution algorithm in which a neural network learns to play through a level of Super Mario World. It uses the NEAT (NeuroEvolution of Augmenting Topologies) algorithm, which not only adjusts the internal weights of the network but also adds neurons and connections when necessary.
Daniel Shiffman's series on neuroevolution provides an accessible (to those with a basic understanding of programming) introduction to neuroevolution and the types of situations it can be applied to.
[embedded video: Daniel Shiffman's neuroevolution series]
beardysoul · 4 years ago
Simone Giertz
Simone Giertz is an “inventor and breaker of things”. Her work often consists of building “useless” things, usually robots, exploring the amusing and playful side of engineering through failure. Looking at a concept through the lens of failure is something I hope to use in my current project, as failure foregrounds the nature of the material itself: Giertz's failing robots foreground the nature of the robotics she is using. A perfectly working machine hides something about its inner workings; a failing machine gives us insight into its structure. I would like to explore this idea of failure, and perhaps “uselessness”, in a work relating to artificial intelligence, to remove the magic surrounding machine learning and AI.
In Giertz's TED talk (Why You Should Make Useless Things, 2018), she closes with a remark about building useless things: “it turns off that voice in your head that tells you that you know exactly how the world works”. This is an important statement about the nature of “failure” in her machines: through failure, a hidden knowledge is often revealed.
[embedded video: Simone Giertz's TED talk]
beardysoul · 4 years ago
Humans of AI
Humans of AI (https://humans-of.ai/) is a work based around the use of AI. It is interesting because it discusses issues with “artificial intelligence” in the same context in which I hope to create my own work. For me, the main interest is in the attempt to show that AI is not a “magical” technology but in fact works in a relatively understandable way: it is not “thinking”, and can only relate back to you what you have shown it, without the true subjective analysis we would expect from a real intelligence.
The other facet of this work to note concerns the morality and origin of the datasets behind commonly used libraries. The work looks at the COCO dataset (Common Objects in Context: https://cocodataset.org) and foregrounds the images that have been scraped from photographers around the world, seemingly without their knowledge and with no apparent attribution.
beardysoul · 4 years ago
[embedded video]
A sketch based on Manfred Mohr’s works with cubes.