annainteractivity
Anna K - Interactivity HT21
27 posts
annainteractivity · 4 years ago
Text
Meta-reflections
While I was polishing my reflective journal, I came up with some reflections on writing the journal and reflecting.
First of all, I can see some progress when I compare my older posts to the most recent ones. The first posts were more like notes on what we were doing. There was a description of the process, but very little reflection, dissecting, or questioning. In the first journal feedback, I was told to reflect more and show more of my thoughts and which paths they take. I am not a forthcoming person and I am not used to expressing what I think. I guess I started to feel OK with it during the third module. Now I regret that I have forgotten my thought process and can’t flesh out the first posts with more reflection, but this is a lesson for the future.
I feel much more comfortable arranging my thoughts in bullet points. I have always done it this way, because bullet points let me quickly write down my thoughts, but I was advised against it during the coaching. “Dressing my notes up with words” later on was very tiring and felt like working under pressure. Now I feel like some of my reflections “sound” weird, or that I might have altered them in some way.
Reflecting on the literature is not as hard as reading it with focus. I can focus better when I read the text while an AI reads it to me at the same time. However, some of the papers, or at least parts of them, felt extremely boring or irrelevant at the point when I was reading them. I re-read all the obligatory papers before submitting my journal and tried to fill in the missing reflections. In the future, I have to keep searching for my own method of coping with brain blackouts during reading; it is not worth losing twice as much time by reading the same text again and again.
I changed platforms for my reflective journal a few times. I haven’t found a perfect one yet; all of them have flaws. When I take photos and videos with my phone, there is no easy way to put them directly into the journal, preferably straight from the phone. In the Tumblr mobile app, I can add only one video per post. I will keep looking for better journal platforms.
M3: Friday, 2021-10-29
This morning I had a breakthrough. I am not exactly sure how I came up with it, but I wrote a simple function that generates a thin white ring of random radius at random coordinates on the canvas each time the browser window is refreshed.
youtube
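The ring generation can be sketched like this (a minimal p5.js-style sketch; the helper name `ringParams` and the size limits are my own, not from our actual code):

```javascript
// Pick a radius and centre for a thin ring, keeping the whole ring on canvas.
// Pure helper so it is easy to test; `rand(min, max)` is injected (p5's
// random() in the sketch, a Math.random wrapper elsewhere).
function ringParams(w, h, rand) {
  const r = rand(20, Math.min(w, h) / 4); // random radius, capped so it fits
  const x = rand(r, w - r);               // keep the ring inside horizontally
  const y = rand(r, h - r);               // ...and vertically
  return { x, y, r };
}

// In the p5.js sketch, setup() runs once per page load, so a new ring
// appears on every refresh:
function setup() {
  createCanvas(640, 480);
  const rand = (a, b) => a + Math.random() * (b - a);
  const ring = ringParams(width, height, rand);
  noFill();
  stroke(255);     // white
  strokeWeight(2); // thin
  circle(ring.x, ring.y, ring.r * 2); // p5's circle() takes a diameter
}
```

Because the ring is drawn in setup(), it only changes when the page reloads, which is exactly the behavior described above.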
I think this simple operation is very significant to our project, because it finally gives the user a task to perform. There is a goal to reach while using the program, and that fun factor keeps the user engaged.
Moreover, the irritating issue of the non-mirrored view, which kept making things more difficult, is actually a feature now. It complicates operating the program, but in this case it is part of “the game”.
During the show and tell, we presented how our ideation went, what ideas we had, what problems we faced, what our constraints were, and how we decided to make something despite them, but also within them.
We also presented our progress by showing these four iterations:
1. Touching the ball
youtube
2. Punching and sliding the ball all over canvas
youtube
3. Changing size of the ball by changing the distance between hands
youtube
4. Final iteration with a white ring generated each time the program loads.
youtube
Fun factor: the user can “juggle” the ball around the canvas.
Goal factor: the user can fit the ball into the white ring.
Experience factor: the user has to focus on movements that are not mirrored to manipulate the ball around the canvas. They can train their focus, stimulate their brain, move their body, and have fun at the same time.
Feedback
The teachers told us that they don’t see an experience concerning movement in our project. I can agree that our concept doesn’t explore any movement patterns and doesn’t require the user to move in any particular way. Roel mentioned that the experience in our project is more mental than physical; he emphasized how focused you have to be to perform the task. The difficulties resulting from bugs in our code and from the non-mirrored view can both challenge and irritate the user, so the project definitely triggers emotions.
However, I feel that due to the shortage of time, our presentation wasn’t as thoughtful as it should have been, and we could have explained better what the project is about. All the pieces are falling into place for me only now, as I write this journal.
M3: Thursday, 2021-10-28
We are short on time, so the work is quite intense. We liked the idea of a punching ball, but after I managed to put it into code, it seemed too shallow for the show and tell.
youtube
Therefore, I added some features to involve more body parts. If you punch the ball with your right hand, it goes left and comes back. If you punch it with your left hand, it goes right and comes back. If you punch it with your head (the nose, in fact), it goes down and comes back. I also started using index fingers instead of wrists, as their placement looks more natural for a palm (when the body-part dots are not visible). It also turned out that index finger detection feels more stable.
youtube
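The per-body-part reactions boil down to a small lookup. This is a sketch of the idea, not our exact code; the keypoint names are assumptions about what the pose detector reports:

```javascript
// Which way the ball flies depends on the body part that hit it.
// dx/dy are unit directions in canvas coordinates (y grows downward).
function punchDirection(part) {
  switch (part) {
    case 'rightIndex': return { dx: -1, dy: 0 }; // right hand knocks it left
    case 'leftIndex':  return { dx: 1,  dy: 0 }; // left hand knocks it right
    case 'nose':       return { dx: 0,  dy: 1 }; // head-butt knocks it down
    default:           return { dx: 0,  dy: 0 }; // other parts do nothing
  }
}
```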
I was tinkering and then, while trying to do something totally different, I moved the variables for the ball outside of the function. And suddenly the ball moved! This accident refreshed our ideation. We took a step back and decided that being able to move the ball around the canvas actually gives us more opportunities to develop the project in an interesting direction. So we decided to start with sliding the ball:
youtube
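The “accident” makes sense in hindsight: in p5.js, draw() runs every frame, so variables declared inside it are reset each frame, while variables declared outside persist and the ball keeps whatever position it was nudged to. A sketch of the idea (the `slide` helper and the clamping are my additions):

```javascript
// Declared outside draw(), these persist between frames; declared inside
// draw(), they would be re-created (and the ball reset) dozens of times
// a second.
let ballX = 320;
let ballY = 240;

function draw() {
  background(0);
  // ...pose detection gives hand positions; a hit nudges the persistent
  // position, so the ball slides, e.g.:
  // ballX += nudge.dx * 10; ballY += nudge.dy * 10;
  circle(ballX, ballY, 60);
}

// Pure helper: apply a nudge and keep the ball on the canvas.
function slide(pos, nudge, step, w, h) {
  return {
    x: Math.min(w, Math.max(0, pos.x + nudge.dx * step)),
    y: Math.min(h, Math.max(0, pos.y + nudge.dy * step)),
  };
}
```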
Why stop at sliding? I added knees to the program so you can “kick” the ball up, and made the nose punch push the ball down the canvas. We got to the point where we could move the ball all around the canvas:
youtube
It can be compared to juggling a real ball, but we lack the coding skills to make it act like one. It doesn’t have any features of a real ball and is not subject to any laws of physics. We got stuck.
Today we were trying out the projects of other people in our class. We felt that our project lacks a purpose or a goal for the user. It doesn’t have that factor that makes you want to use the program, get interested, or have fun while moving your body. Our colleagues’ projects were really fun to use, and ours is just sliding a ball all over the canvas. Moreover, the direct camera output instead of a mirror reflection was quite irritating for everyone who used it. You can play with it for a while to try it out, but then what? It doesn’t have a purpose and, in addition, it irritates you. We were quite frustrated with what we had.
We decided that in this kind of deadlock, when we don’t have time to think of anything new, we can try adding more features and count on inspiration coming along the way.
I quickly wrote code that lets you enlarge and shrink the circle when your hands are within its radius:
youtube
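The resize interaction can be sketched as a pure function (the half-distance rule and the minimum radius are my guesses at reasonable values):

```javascript
// If both hands are inside the ball, its radius follows the distance
// between them (half of it, so the hands roughly sit on the rim).
function resize(ball, left, right) {
  const inside = (p) => Math.hypot(p.x - ball.x, p.y - ball.y) < ball.r;
  if (inside(left) && inside(right)) {
    const d = Math.hypot(left.x - right.x, left.y - right.y);
    return { ...ball, r: Math.max(10, d / 2) }; // floor so it never vanishes
  }
  return ball; // hands outside: nothing changes
}
```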
But still, it doesn’t give us that special factor; there is still no goal, no experience, no emotion in this interaction. We decided to show the teachers what we have and describe the problems we faced during the design process, as this is the only thing we can do right now.
M3: Wednesday, 2021-10-27
This week we managed to work on the project, despite the viruses. We got serious about starting the magic ball project. Our movements in this case should be:
moving hands closer to or further from the ball: the ball may react to the approaching heat of our hands by changing its saturation,
touching the ball: the ball may react more intensely, like flickering with various colors or various levels of saturation,
rubbing the ball: the ball may react by changing another color property, like brightness, to imitate getting hotter and hotter.
We had some ideas for making the ball react to the distance between the ball and the hands, but we got stuck on rubbing. How do we put the movement of rubbing into our code?
I started dissecting this issue, inspired a bit by the Movement Schema designed by Hansen & Morrison:
the more we rub it, the hotter it gets,
the more moves we make, the lighter the color gets,
the more intense the moves, the lighter the color gets,
the more frequent and the faster the moves, the lighter the color gets,
the shorter and the faster the moves, the lighter the color gets,
the less distance the wrist makes in the shorter time and opposite directions, the lighter the color gets,
therefore...
rubbing: short, fast movements in opposite directions
short = coordinates (small difference between coordinates)
fast = speed (more than 0.5 m/s?)
direction = coordinates (x and y growing or shrinking)
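The breakdown above can be turned into a rough heuristic. This is only a sketch of the idea we got stuck on, with guessed thresholds, not code we actually wrote:

```javascript
// A rough rubbing detector, straight from the breakdown: short, fast
// strokes that keep reversing direction. xs holds recent wrist x-positions,
// one per frame; maxStroke and minReversals are guesses.
function looksLikeRubbing(xs, maxStroke = 40, minReversals = 3) {
  let reversals = 0;
  let prevDir = 0;
  for (let i = 1; i < xs.length; i++) {
    const dx = xs[i] - xs[i - 1];
    if (Math.abs(dx) > maxStroke) return false; // stroke too long: not rubbing
    const dir = Math.sign(dx);
    if (dir !== 0 && prevDir !== 0 && dir !== prevDir) reversals++;
    if (dir !== 0) prevDir = dir;
  }
  return reversals >= minReversals; // fast back-and-forth = rubbing
}
```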
This seems like a lot of factors to take into consideration, and the direction part especially is something I cannot wrap my head around. Therefore, we started limiting our design idea.
Touching
If the hand is within the circle’s coordinates, the circle reacts. How can it react:
it can change its size,
it can change its coordinates (move around the canvas),
it can disappear or reappear,
moreover:
it can react to how fast we touch it (punch or tender touch, how fast is the movement),
it can react to frequency of touching (one touch or tapping it),
it can react to how long we touch it (continuous touch or just one short tap).
With our coding skills in mind, we managed to make the first iteration of the touching part. When touched, the ball flashes random colors, and when we stop touching it, it lingers on a randomly picked color:
youtube
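The touch logic of this iteration can be sketched in two small functions (the names and the injected `rand` are mine):

```javascript
// Basic hit test: is the keypoint inside the ball?
function insideBall(p, ball) {
  return Math.hypot(p.x - ball.x, p.y - ball.y) < ball.r;
}

// While touching, flash a new random colour every frame; on release,
// linger on whatever colour the ball last had.
function nextColor(touching, current, rand) {
  if (touching) {
    return [rand(0, 255), rand(0, 255), rand(0, 255)];
  }
  return current;
}
```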
After that, we tried to make the ball react to the speed of a movement. It changes its color when we touch it, but it moves when we punch it:
youtube
This reminded us of the kind of exercise punching bag that returns to its starting point after each punch. The ideal behavior would be: the harder you punch (the faster the movement), the further the circle moves before coming back to its starting place.
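The touch/punch distinction comes down to how far the hand moved between two frames. A sketch, with a guessed threshold:

```javascript
// Hand speed estimated from two consecutive frames, in pixels per frame.
function handSpeed(prev, curr) {
  return Math.hypot(curr.x - prev.x, curr.y - prev.y);
}

// Fast contact counts as a punch, slow contact as a touch.
function contactKind(prev, curr, threshold = 15) {
  return handSpeed(prev, curr) > threshold ? 'punch' : 'touch';
}
```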
Today we also noticed a problem with the code: the coordinates of body parts that are not visible on the canvas are predicted and show up anyway, out of nowhere, which may ruin the interaction. We cannot do much about it, except make sure that all the relevant body parts are visible on the canvas.
M3: Friday, 2021-10-22
This week we haven’t made much progress, because we were both sick. I managed to tinker a bit with the examples and think about what we should work on. Some of my ideas were:
- Drawing with a finger
This seems like a good one, but a bit cliché; I suspect many people think of it. Unfortunately, the code is not very good at detecting fingers.
First of all, fingers are small body parts, very similar to each other, and therefore hard for the program to detect. It lags and encounters a lot of detection errors when we try to rely on finger detection.
Second of all, even if we follow an index finger, its placement is not precise (it lands somewhere around the palm, I would say), so if we are close to the camera, it is impossible to create the impression of drawing with a finger.
- Changing background color
I couldn’t figure out how to do it. Do I have to add a DOM element? I have to check this out.
- How to make a video output a mirror reflection?
We cannot figure out how to flip the canvas so that it shows a mirror reflection. If we want to operate while watching what we are doing in the live view, it becomes really irritating and difficult. We are used to seeing a mirror reflection when we look at ourselves through a webcam.
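For reference, one common p5.js way to get a mirror view, which we had not found at the time, is to flip the x-axis before drawing the video frame (the `video` variable is assumed to come from createCapture):

```javascript
let video; // createCapture(VIDEO) result, assigned in setup()

function draw() {
  push();
  translate(width, 0); // move the origin to the right edge...
  scale(-1, 1);        // ...and flip the x-axis: mirror view
  image(video, 0, 0, width, height);
  pop();
  // Anything drawn after pop() is unflipped, but detected keypoints still
  // arrive in camera coordinates, so their x needs the same mirroring:
}

// Mirror a detected x-coordinate into the flipped view.
function mirrorX(x, w) {
  return w - x;
}
```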
Wikipedia: The mere-exposure effect is a psychological phenomenon by which people tend to develop a preference for things merely because they are familiar with them. In social psychology, this effect is sometimes called the familiarity principle. The effect has been demonstrated with many kinds of things, including words, Chinese characters, paintings, pictures of faces, geometric figures, and sounds. In studies of interpersonal attraction, the more often someone sees a person, the more pleasing and likeable they find that person.
Tumblr media Tumblr media
https://brightside.me/wonder-curiosities/experts-explain-why-we-always-look-better-in-the-mirror-794582/
I asked my colleagues on Discord if anyone had figured out how to flip the canvas, but it doesn’t seem like it. The only comment I got was ‘make it a feature’. At first I was irritated by it, but on second thought, I decided to take it into consideration.
Today we went to school to brainstorm about our design idea. We consulted our colleagues, and they had various ideas, but most of them are trying to convey an experience and emotions in their designs.
We had the following ideas:
A yoga teacher
A program that corrects your yoga warrior pose. We can set the coordinates where every essential body part should be placed, and the “corrector” tells you where to move it (up, down, left, right, perfect). Personally, I thought it was one of the best ideas, but my partner didn’t like it. It works with coordinates, which is relatively easy, and moreover it offers a strictly kinesthetic, yoga-related experience set in a clear context.
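The “corrector” can be sketched as a comparison of each detected keypoint against a target position (the tolerance, and the up/down wording, which assumes canvas y grows downward, are my assumptions):

```javascript
// Compare a detected keypoint with its target position and say which way
// to move it; within tolerance counts as "perfect".
function correction(point, target, tol = 20) {
  const hints = [];
  if (point.x < target.x - tol) hints.push('right');
  if (point.x > target.x + tol) hints.push('left');
  if (point.y < target.y - tol) hints.push('down'); // canvas y grows downward
  if (point.y > target.y + tol) hints.push('up');
  return hints.length ? hints.join(' + ') : 'perfect';
}
```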
Martial arts inspired
Objects appear on the canvas and we have to punch them. If we hit one, another one appears in a different place. Here we work with coordinates and speed.
A magic ball
A round object sits in the middle of the canvas. Our goal is to warm it up by rubbing it and keep it warm. The closer we keep our hands to it and the faster we move them around it, the “hotter” it gets (its color changes). If you back away, it loses the heat (loses saturation? loses the color it turned into?). Working with coordinates and speed.
Clarisse came up with the idea of creating a fire or a bonfire, but it is hard to depict fire with a simple object. It is beyond our coding skills, so maybe let’s stick with a simple object changing its color.
Inspo: Mood rings!
 https://somethingborrowedpdx.com/how-do-mood-rings-work/
youtube
M3: Thursday, 2021-10-14
Today we started to brainstorm and tinker with the code we had been given. We played in ‘the Playground’ and tried to pull different body parts out of the code. We wondered which factors we can work with, and these are:
speed of body parts,
distance between body parts of one person or two people,
coordinates of body parts on a canvas,
confidence score of the detection of body parts (which seems the least approachable for us, given our knowledge and coding skills).
Tumblr media
During the coaching, Clint advised us to focus on the smoothness of our code: e.g., if we work with the distance between certain body parts, we should map the output to a percentage of that distance instead of creating big piles of conditional statements. And if we are stuck, we should take a step back. After all, we still have quite a lot of time for this project, and everyone is more or less in the same place: tinkering, trying to understand the code, and looking for inspiration on what to work on.
Here, I was trying to enlarge the nose dot when the distance between the wrists is over 0.5 m. I learned how to ‘pull out’ certain body parts, but struggled to find the right place for the conditional.
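Applied to my nose-dot attempt, Clint’s advice looks like this: instead of a conditional per distance range, map the distance smoothly onto a size. This mirrors p5’s built-in map(), written out so the arithmetic is visible; the 1 m cap and the pixel sizes are illustrative, not our real values:

```javascript
// Linear mapping from one range to another (same idea as p5's map()).
function mapRange(v, inMin, inMax, outMin, outMax) {
  const t = (v - inMin) / (inMax - inMin); // 0..1 along the input range
  return outMin + t * (outMax - outMin);
}

// e.g. wrist distance 0..1 m -> nose dot diameter 10..80 px, smoothly,
// instead of an if/else per range.
function noseDotSize(wristDistMeters) {
  const clamped = Math.min(1, Math.max(0, wristDistMeters));
  return mapRange(clamped, 0, 1, 10, 80);
}
```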
Tumblr media Tumblr media
youtube
M3: Tuesday, 2021-10-12
Fogtmann et al - Kinesthetic Interaction - Revealing the Bodily Potential in Interaction Design
This article explores kinesthetic interaction as a concept that describes the body in motion as a foundation for designing interactive systems. Kinesthetic Interaction happens when the body in motion experiences the world through interactive technologies. The three axioms of Kinesthetic Interaction are:
physiology (kinesthesis - awareness of the position and movement of the body in space)
kinesthetic experience (how the kinesthetic sense grounds our everyday actions in the world as moving bodies)
interactive technologies (computers, cameras, sensors)
The paper presents seven parameters that reveal how bodily potential is addressed in a design process, then discusses four interactive systems and proposes how they could be altered using the above-mentioned parameters to make the kinesthetic experience richer. The authors created a sort of framework, which we can use to facilitate an analysis of systems or, in our case, probably of our design ideas.
Hansen & Morrison - Materializing Movement—Designing for Movement-based Digital Interaction
The article shows an example of using full-body movement data as a design material to explore interpersonal embodied communication. The authors present a system named Sync, which has something in common with the program we are working with: it detects certain body parts and follows their movement in terms of velocity, position, repetition, and frequency.
I feel this article, and especially the Movement Schema, which “translates” core modalities into salient characteristics and visual descriptions, might be useful for us when we are trying to “picture” a specific type of movement in code.
Loke & Robertson - Inventing and Devising Movement in the Design of Movement-based Interactive Systems
In this article, the authors describe a study they conducted with professional dancers/choreographers. The aim was to explore ways of inventing and devising movement for use in the design of kinesthetic interaction with interactive technologies. The study led to methods and tools that facilitate the creation of new kinds of movements. The notion and method of “making the familiar strange” stands out the most. While we are, due to certain biological and social standards, used to how different movements look, making them “weird” is a great method for creating eye-catching and impressive choreography. It reminded me of the “uncanny valley”: we are most struck by things that seem familiar to us but have something disturbingly different about them. This article might be a great inspiration during our design process.
M2: Friday, 2021-10-08
The final iteration of my project can be presented like this:
youtube
Here are the main points worth mentioning during the show and tell:
context: behavior of a brain of a person with social anxiety,
the circle represents the brain; it spins at its own pace, reaching its maximum speed while undisturbed,
it slows down when talked to,
the higher the volume, the slower it goes,
it almost stops when the volume is high, and it takes a while for it to start moving and regain its own speed,
the reaction is based on the difference between the volume of the received sound and the processed sound, so it should respond to its environment. In a loud environment it can adapt and go at its own pace,
the higher the frequency, the higher the saturation of the color,
I could have made the saturation change smoother or limited it to react only above a certain frequency, but this way it seems more “alive” and responsive to the environment, almost as if it can hear what is going on around it and is trying to show its awareness of the surroundings.
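The spin behavior in the list above can be sketched as an eased, inverted mapping (the maximum speed and the easing factor are illustrative, not my exact values):

```javascript
// The louder the input (level 0..1), the slower the target spin speed.
function targetSpeed(level, maxSpeed = 0.2) {
  return maxSpeed * (1 - Math.min(1, Math.max(0, level)));
}

// Ease the actual speed toward the target each frame: this gives the
// circle its inertia, so after a loud moment it takes a while to get
// going again.
function easeSpeed(current, level, ease = 0.02) {
  return current + (targetSpeed(level) - current) * ease;
}
```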
Feedback
The feedback I got was that there is no real nuance control over the circle. It has its own inertia; it sort of lives its own life, and we can influence it, but we can’t really control it. There is no learning curve and no way to master controlling the circle, no maximal grip as in Dreyfus. The teachers remarked on the aesthetic side of the project: they liked how it looked and said it was an original idea to make a circle spin around a circle. However, the project turned out rather artsy.
After seeing the projects of other people, I think I understand what they meant. I tried to picture what I could change to reach full nuance control over the circle. Here are some examples:
change the direction of spinning in relation to volume or frequency, eg. it spins left when I am speaking quietly and it spins right when I am shouting,
make it grow and stay that way when I shout or make some repetitive sound (like hitting the table three times), and make it shrink when I whistle,
make it change color, e.g. changing the hue value in relation to the frequency.
This way I could hypothetically achieve a desired effect, e.g. a green circle of window size spinning left, and that would mean I have nuance control over it.
M2: Thursday, 2021-10-07
Today I consulted my project during the coaching. The feedback I got was to add another nuance, because there is not enough nuance control over the circle.
youtube
I decided to work with frequency, but to do that, I needed help. I consulted Renato, who helped me adjust the code. I don’t have much time, so I thought the easiest thing I could do was make the color of the circle react to the frequency of the input sound.
I had to read again about what FFT, frequency, and frequency bins are. To be honest, it was really hard for me to understand, as I feel I lack some strictly mathematical and physical knowledge. In the end, Renato just helped me adjust the number of bins to my FFT size, which was 1024.
I already had the color of my circle broken down into HSL variables. That made it possible to adjust those properties according to the frequency. I tried changing the hue, saturation, and brightness of the color, but I got the best effect by adjusting the saturation. I changed the background color to black and the fill color to red, which is often seen as the color of an alert, an alarm. I wanted the circle to be grey, so kind of ‘neutral’, when undisturbed, and to show a sort of alarmed state, becoming more red, when the frequency of the input sound gets higher.
I was wondering if I should add a statement that allows the saturation to change only past a certain frequency, but the circle reacting continuously with pink flashes, even to a quiet conversation, felt more lively, more active, and more interactive. Therefore, I decided to leave it this way.
youtube
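A sketch of how the wiring can look with p5.sound. Reducing the spectrum to a single number via getCentroid() is my simplification here, not exactly what Renato and I wrote, and the 4000 Hz ceiling is a guess:

```javascript
let mic, fft;

function setup() {
  createCanvas(400, 400);
  colorMode(HSL);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT(0.8, 1024); // smoothing, number of bins
  fft.setInput(mic);
}

function draw() {
  fft.analyze();
  const centroid = fft.getCentroid(); // "centre of mass" of the spectrum, in Hz
  background(0);
  fill(0, satFromCentroid(centroid), 50); // red hue: grey at 0, alarmed red when high
  circle(width / 2, height / 2, 150);
}

// Pure mapping: higher spectral centroid -> more saturated.
function satFromCentroid(hz, maxHz = 4000) {
  return Math.round(100 * Math.min(1, hz / maxHz));
}
```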
M2: Wednesday, 2021-10-06
On Friday I consulted Clint about my crowd idea. He told me that it is not the best way to go and that I should probably not pick sound as an output, because there is not much I can do with it: I can make it play, make it stop, make it louder, and make it quieter. I was advised to create something visual, get inspiration from Clint’s examples, tinker a bit, and then build on one of them.
I got really inspired by the Thing last time, so I decided to work on it. I didn’t want to drop my idea of silencing the crowd, though. The Thing reacts explicitly: the louder the sound of the environment, the faster it spins. This is the opposite of the reaction I want to achieve. My interaction must be something like this:
When I am silent, the crowd is buzzing at its own pace, the metaphor for it will be the Thing spinning fast at its undisturbed pace,
When I try to say something to the crowd, i.e. I start speaking or gently clear my throat, I catch the attention of some people in the crowd. They stop talking, so the buzzing of the crowd gets quieter. The metaphor for this is the Thing slowing its pace: we see it react, but it doesn’t stop yet.
When I shout at the crowd, i.e. the difference between the input sound and the sound of the environment is big, I catch everyone’s attention, because it is something uncommon and sudden for them. Everyone stops talking and the crowd falls silent. The metaphor is the Thing stopping its spin.
To do that, I have to invert how the Thing works:
youtube
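The “difference from the environment” part can be sketched with a slow running average standing in for the ambient level (the smoothing factor is a guess):

```javascript
// Track the ambient buzz as a slow running average of the mic level, and
// react only to how far the current level rises above it.
function makeCrowdEar(smoothing = 0.99) {
  let ambient = 0;
  return function react(level) {
    ambient = smoothing * ambient + (1 - smoothing) * level;
    return Math.max(0, level - ambient); // 0 = ignored, large = everyone goes quiet
  };
}
```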
As seen in the video, the Thing now works as I intended. As I stated before, I don’t want to copy the visual side of what I am building on. Therefore, I needed to think about how to depict this interaction, taking the following into consideration:
constraints of my coding skills,
time management,
metaphorical accuracy,
aesthetics.
Considering all of the above and the elements of code I have access to, I decided to work with the shape of a circle and its dimensions. Let’s say the circle represents the buzzing of our crowd: the louder I am, the smaller the circle gets. The smaller the difference between the input sound and the sound of the environment, the slower the circle shrinks. When the difference is big, i.e. I am suddenly really loud, the circle shrinks rapidly, and if I don’t keep making sound, it takes a longer while for the circle to start growing again.
youtube
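The shrink-fast, regrow-slowly behavior I was after can be sketched as a per-frame update with asymmetric rates (all numbers are guesses):

```javascript
// Loudness shrinks the circle quickly; silence lets it grow back slowly.
// Rates are in px per frame, loudness in 0..1.
function updateRadius(r, loudness, { min = 20, max = 150, shrink = 8, grow = 0.5 } = {}) {
  if (loudness > 0.1) {
    r -= shrink * loudness; // sudden, rapid shrink when loud
  } else {
    r += grow;              // slow recovery in silence
  }
  return Math.min(max, Math.max(min, r)); // keep it on screen and visible
}
```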
Unfortunately, I couldn’t make the code work; I was struggling with affecting the size of the circle. But I still had the spinning figured out! So why not make a circle that spins around another circle? It came out by accident, but I found it very interesting to look at, especially with the fading-out effect. Now all that is left is to consult the teachers and improve how the circle reacts in the code.
youtube
When I was watching how it works, I suddenly had a thought: wait, this is how my introverted brain behaves!
My brain is working at its own pace: fast and steady when undisturbed.
It loses its pace, therefore slows down when something distracts it (talking to me, starting to interact with me?).
It panics and stops functioning properly when someone around is really invasive (starts acting very loud or even yells).
For now, I will call my project “The anxiety circle” ;)
M2: Thursday, 2021-09-30
When I was browsing pictures of people online, I stumbled upon photos of a crowd. Now, that is an inspiration! In a buzzing crowd, no one would pay attention if I spoke in a normal tone. But if I screamed at the crowd, they would probably fall silent.
Tumblr media
This may be it:
we have a nuance of volume and the volume of my own voice can control if the crowd is silent or if it keeps on buzzing,
this is a very simple interaction and I might be able to do it on my own,
this idea is totally different from what other people in class are working on (they mostly work with geometric figures changing their attributes).
I just need an audio file of a buzzing crowd and a program that changes its volume according to the volume of my voice (the input). To make sure the effect is steady, I must pick:
an audio clip with a rather “flat” waveform, so it doesn’t have any peaks that stand out significantly and can be played on a loop,
an audio clip that is long enough to interact with,
an audio clip where words or screams are not distinguishable.
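The core mapping is just an inversion of the mic level. In p5.sound, the result would drive the crowd loop via setVolume() each frame; the exponent, which makes a shout silence the crowd faster than a murmur would, is a tweakable guess:

```javascript
// Silence from me -> full crowd buzz; shouting -> near silence.
// micLevel is 0..1 (e.g. p5.AudioIn's getLevel()).
function crowdVolume(micLevel, exponent = 2) {
  const v = Math.min(1, Math.max(0, micLevel));
  return Math.pow(1 - v, exponent);
}
```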
I started searching for different kinds of crowd recordings, changing the volume up and down.
This sound has too many peaks in its waveform; it wouldn’t be useful:
Tumblr media
This sound has a flat waveform, it would be useful:
Tumblr media
M2: Wednesday, 2021-09-29
Djajadiningrat et al 2007 - Easy doesn't do it - skill and expression in tangible aesthetics
In this paper the authors focus on movement as a part of human-product interaction. They state that this issue has been neglected, that the technological focus is rather on users’ cognitive skills, and that this denies us the opportunity to develop our bodily skills. They speculate on the coupling between movement in user actions and movement in product reactions, and show different examples of alternative solutions.
I could not catch the deeper sense of this article, except that it draws the community’s attention to certain aspects of product design. I noticed that the example with the microwave door perfectly summarizes the authors’ point: why do designers and producers surrender to product design trends, making pushing buttons the basic interaction in almost every kitchenware device, instead of exploring movements, which we are not only perfectly capable of, but can also master to the point where multiple clicks seem like a very slow, annoying, and imprecise way of interacting with products?
Dreyfus 2002 - Intelligence without representation – Merleau-Ponty’s critique of mental representation
This article was one of the hardest to focus on and understand. I will try to dissect it a little bit:
The author explores and challenges two notions from Merleau-Ponty’s Phenomenology of Perception: the intentional arc and a maximal grip.
Intentional arc: the tight connection between body and world, such that, as the active body acquires skills, those skills are “stored”, not as representations in the mind, but as dispositions to respond to the solicitations of situations in the world.
Maximal grip: the body’s tendency to refine its responses so as to bring the current situation closer to an optimal gestalt.
Skill acquisition: as one acquires expertise, the acquired know-how is experienced as finer and finer discriminations of situations paired with the appropriate response to each.
The author states that neither the intentional arc nor getting the maximal grip, require mental or brain representations.
If I understand it correctly and had to put it into my own words: the article explores the process of learning new things and achieving that amazing fluency in doing them, the state where we know what we are doing and have full control of it, but don’t have to actively think about it, because we rely on our expertise and muscle memory.
“[...] in absorbed, skillful coping, I don’t need a mental representation of my goal. Rather, acting is experienced as a steady flow of skillful activity in response to one’s sense of the situation.“
I guess it has a lot to do with the notion of nuance control, which we have to possess over things as a result of our work during this module. We have to create a situation in which we learn to control something by making sounds with specific properties, until we achieve total control over the thing.
M2: Monday, 2021-09-27
Today I started wondering if I should maybe make art. They say that art is anything its author calls art. Also, art doesn’t need to have a function or a specific aim. I got inspired while browsing Clint’s code, and Thing.js really caught my attention. The Thing just moves in different directions and in different ways, reacting to the input sound. It doesn’t really indicate anything, although it resembles a bent waveform, and maybe, if you are very familiar with the topic of sound waves, you could tell something from it. If you are not, the only things you notice are some regularities, like the spikes growing in certain places in correlation with volume and frequency. It was not clear to me. However, it looks mesmerizing, and there is a certain pleasure and satisfaction in watching how it reacts to your own voice.
youtube
Nevertheless, I do not want to copy it, so I started thinking about the project in terms of its constraints:
fulfillment of the assignment conditions (nuance control, sound as an input, processing),
things I am able to do on my own,
creating something visually different from Clint’s examples.
I figured that probably working with volume should be the easiest. How about making something react to volume?
I tried to put it in a context: What could be my output? A real life reactions, like toy train leaving a platform? A web browser visualization? Arduino LED?
A web browser visualization would definitely take the least of my time, but what should it be? What could I interact with? Whisper to and shout at? Talk to in different volumes or timbres? A person seemed to be a natural answer to these questions.
But what should this person look like? Should I draw a smiley face in JavaScript and make it change its size or color? I could do that, but will my coding skills be enough? That would mean a lot of drawing, and pulling all those different parameters out of a drawing and changing them seems like something I am not capable of doing. Or should I just take a photo or an illustration? In that case, it would be easier to make it react, by changing size or rotating, and those things I could manage.
Inspo: https://www.c-sharpcorner.com/UploadFile/219d4d/how-to-create-a-smiley-face-using-javascript/
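To check whether the smiley-face idea is within my coding skills, the mapping part could be sketched as plain functions. This is only a sketch under my own assumptions: the function names and ranges are mine, and the volume is assumed to be normalized to 0..1 (as p5.sound’s `getLevel()` returns).

```javascript
// Linearly map a value from one range to another, clamped to the output range.
function mapRange(value, inMin, inMax, outMin, outMax) {
  const t = Math.min(Math.max((value - inMin) / (inMax - inMin), 0), 1);
  return outMin + t * (outMax - outMin);
}

// Volume in, face parameters out: louder input -> bigger face, wider smile.
// The pixel and curvature ranges below are placeholder choices, not final values.
function faceFromVolume(volume) {
  return {
    radius: mapRange(volume, 0, 1, 40, 200),     // face size in pixels
    mouthCurve: mapRange(volume, 0, 1, -0.5, 1), // frown (-) to smile (+)
  };
}
```

The actual drawing would then read these parameters every frame and redraw the face, so the drawing code stays separate from the sound-processing code.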
M2: Friday, 2021-09-24
I got a little lost in my thoughts today, so I decided to do something that always works: take a step back. I did some more research on the basics, using external sources. I was interested in:
How do sound waves propagate in various environments?
Sound can propagate through a medium such as air, water, or solids as longitudinal waves, and also as transverse waves in solids. The particles of the medium do not travel with the sound wave; they only vibrate in place. Wow! The propagation depends on the density and pressure of the medium (both affected by temperature), the motion of the medium, and its viscosity.
How are ultrasounds, infrasounds and vibrations characterized?
Ultrasound consists of sound waves with frequencies higher than 20,000 Hz. Ultrasound is no different from audible sound in its physical properties; it just cannot be heard by humans. Infrasound consists of sound waves with frequencies lower than 20 Hz. The studies of sound and vibration are closely related. Sound, or pressure waves, are generated by vibrating structures (e.g. vocal cords); these pressure waves can also induce the vibration of structures (e.g. the eardrum). Hence, attempts to reduce noise are often related to issues of vibration.
What is sound? Definition vs. how we perceive it
Sound is defined as an oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces. Sound as a stimulus: can be viewed as a wave motion in air or other elastic media. Sound as a sensation: can be viewed as an excitation of the hearing mechanism that results in the perception of sound.
What is the speed of sound, and how does it affect our hearing and our environment?
The speed of sound depends on the density and elasticity of the medium; in gases like air, it also rises with temperature.
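The temperature dependence can be checked with a quick calculation. A common approximation for dry air (an assumption on my part; real air adds humidity and other effects) is v = 331.3 · sqrt(1 + T / 273.15) m/s, with T in degrees Celsius:

```javascript
// Approximate speed of sound in dry air at temperature tempCelsius.
// 331.3 m/s is the speed at 0 °C; the sqrt term scales it with absolute temperature.
function speedOfSoundAir(tempCelsius) {
  return 331.3 * Math.sqrt(1 + tempCelsius / 273.15);
}

// speedOfSoundAir(0)  -> ~331.3 m/s
// speedOfSoundAir(20) -> ~343.2 m/s (the commonly quoted "speed of sound")
```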
What is the range of sound I can hear comparing to other people?
It turns out I can hear sounds from 75 Hz up to around 17,000 Hz. My younger friends can hear even higher sounds. This is an interesting potential design case; maybe I can use this fun fact in my design.
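As a note to myself, the "personal hearing range" idea boils down to a tiny check like this. The default range below is the one I measured for myself; it is an example, not a universal value:

```javascript
// Is a tone of frequencyHz audible to a listener with the given hearing range?
// Defaults are my own measured range (75 Hz - 17,000 Hz), purely illustrative.
function isAudible(frequencyHz, lowHz = 75, highHz = 17000) {
  return frequencyHz >= lowHz && frequencyHz <= highHz;
}

// isAudible(440)               -> true (concert pitch A4)
// isAudible(18000)             -> false for me
// isAudible(18000, 20, 19000)  -> true for a younger listener
```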
[Images: diagram illustrating longitudinal and transverse waves. The high points of the transverse waves (peaks) represent more-dense areas of the longitudinal waves, and the low points (troughs) represent less-dense areas. The arrows show the directions of wave material movement.]
M2: Thursday, 2021-09-23
I participated in the coaching together with Alexa, because we are both in the same place with our projects. We were struggling to understand the core of all of this and we thought we could use an explanation. We talked about interaction and how we experience it through our bodies. “Intelligence is not represented in thought.”
Jens advised us to start researching sounds:
What kind of sounds should a mic react to? Low? High?
What aspects of sound to focus on? Attributes: frequency, pace, rhythm?
Experiment with sounds
The model of our assignment:
Input (sound) -> processing -> output
Therefore: I need a remote to catch the sound that I am making on my own end, a program to process this sound, and an output after processing. The output can be anything: a visual output in a web browser, a reaction from the Arduino’s outputs, or even an action in real life.
The only condition is that I must have nuance control over the output, so I need to take certain attributes of sound (frequency, volume, rhythm, pace) and make them take visible control over the output.
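The input -> processing -> output model could be sketched as a pure processing step, just to make the idea concrete for myself. This is only an illustration under my own assumptions: the attribute ranges and the chosen output parameters (brightness, hue) are mine, not part of the assignment.

```javascript
// Nuance control sketch: two attributes of the incoming sound each drive a
// different output parameter, independently of one another.
function processSound({ volume, frequencyHz }) {
  // volume (assumed 0..1) controls output intensity, e.g. LED brightness 0..255
  const brightness = Math.round(Math.min(Math.max(volume, 0), 1) * 255);

  // frequency (assumed interesting range 100..2000 Hz) controls a second,
  // independent dimension, e.g. a color hue 0..360
  const t = Math.min(Math.max((frequencyHz - 100) / (2000 - 100), 0), 1);
  return { brightness, hue: Math.round(t * 360) };
}

// processSound({ volume: 0.5, frequencyHz: 1050 }) -> { brightness: 128, hue: 180 }
```

The point of the sketch is that each nuance of the input maps to a visibly different nuance of the output, rather than one on/off reaction.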
We were also discussing different sounds and how people take them in. Most of our discussion came down to a subject we have in common: misophonia.
Misophonia is a condition in which individuals experience intense anger and disgust when they are confronted with sounds made by other human beings. In particular, sounds like chewing, lip smacking or breathing.
For a while, I even thought about working with misophonia, but I figured that the possibility of torturing myself with such sounds for the sake of M2 is a little too much to handle.
M2: Monday, 2021-09-20
I knew very little about sound before this lecture, and after today, I feel like I know even less. I got lost during the lecture among all the concepts and definitions:
What is frequency?
I have always thought of frequency as the ‘pitch’ of a sound: high-frequency sounds are high-pitched, and at the extremes, beyond human hearing, they become ultrasound, while very low-frequency sounds become infrasound.
Frequency is the number of occurrences of a repeating event per unit of time. Frequency is measured in hertz (Hz) which is equal to one event per second. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency.
FFT
FFT (the fast Fourier transform) is an important measurement method in audio and acoustics. It converts a signal into its individual spectral components and thereby provides frequency information about the signal. I found this video quite helpful: https://www.youtube.com/watch?v=spUNpyF58BY&ab_channel=3Blue1Brown
bins
Frequency bins are the intervals between samples in the frequency domain. For example, if your sample rate is 100 Hz and your FFT size is 100, then you have 100 points spanning [0, 100) Hz. The entire 100 Hz range is therefore divided into 100 intervals: 0-1 Hz, 1-2 Hz, and so on. (For a real-valued signal, only the bins up to half the sample rate, the Nyquist frequency, carry unique information.)
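The bin arithmetic above can be written out as two small helpers, just so I remember it. These are my own helper names, assuming a plain FFT whose fftSize bins span the full sample rate:

```javascript
// Width of one frequency bin in Hz.
function binWidthHz(sampleRate, fftSize) {
  return sampleRate / fftSize;
}

// Frequency (lower edge) of bin number binIndex.
function binToFrequency(binIndex, sampleRate, fftSize) {
  return binIndex * (sampleRate / fftSize);
}

// binWidthHz(100, 100)           -> 1 Hz per bin, matching the example above
// binToFrequency(5, 44100, 1024) -> ~215.3 Hz
```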
I already knew about waveforms; I have worked with them in Audacity. I decided to download it and play a bit with different sounds to observe their waveforms.
Waveform of a high-pitched scream: [image]
Talking: [image]
It was very hard for me to follow the lecture and the tutorial in class. A small screen made it impossible for me to fluently switch between windows, and when working with Glitch, Visual Studio Code, and a web browser with at least two tabs open (remote and output), that was essential. Also, the buzzing and clattering in our classroom does not create a friendly environment for working with sounds. I think I will do much better at home.
M1: Friday, 2021-09-17
Our final iteration can be presented like this:
[video: final iteration demo]
Our inspiration: a parking sensor, but we wanted to put our prototype in the context of warning about an approaching danger.
The idea of use: pandemic distancing restrictions between people.
We wanted to avoid taking speed or acceleration into account, because the range of distances is too small, and the LED’s reaction to acceleration might not be noticeable, or might be perceived as a failure.
1st iteration:
Over 250 cm: off
250 - 150: fade in
150 - 100: blinking at 250
100 - 50: blinking at 50
50 - 5: wave (100, 245, 10)
Wave sketch: we tried warning with a light-wave pattern, but it turned out to be calming and comforting rather than alarming.
[video: wave sketch]
2nd iteration:
Over 300: off
300 - 0: blinking speed proportional to the distance, starting at 550.
Annoying: the blinking goes on for too long. At some point you stop noticing the difference in blinking speed, and then the blinking itself, especially if you move very slowly.
3rd iteration:
Over 300: off
300 - 150: fades in
150 - 100: blinks at 200
100 - 50: blinks faster at 150
50 - 40: alarming rapid blinking at 100
40 - 30: alarming rapid blinking at 50
30 - 20: alarming rapid blinking at 30
20 - 10: alarming rapid blinking at 20
10 - 0: alarming rapid blinking at 10
It looked good, but we found a way to make it smoother.
4th iteration:
Over 300: off
300 - 150: fades in from 0 to 255 in brightness
150 - 0: blinks according to the distance, starting at a rate of 150 and approaching an almost steady light at 0 cm.
Two circles of range: 300 to 150 cm and 150 to 0 cm. The first circle communicates the presence of an object/person around, and the second circle warns about an approaching object/person.
Think of the outer range as awareness or information.
Transition: the LED fades in from 0 to 255 brightness; then, at 150 cm, it starts blinking between 255 and 0 brightness. We set the distance of 150 cm as crucial, e.g. in the context of physical distancing during a pandemic. A blinking light attracts attention, but it is not very alarming at the beginning; the blinking gets faster and faster down to 0 cm.
3 m is a lot in the context of personal distance. 1.5 m gives the person enough time to act if needed, but that could also be changed depending on the context.
The brightness and the blinking interval depend on the distance. At the end, the blinking gets so fast that we can barely see it, but it is still flickering. From the beginning we were inspired by fire as a primal light that we humans react to. The effect of fast flickering resembles a big fire, and therefore danger.
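The mapping of our 4th iteration can be summarized as one pure function. The real version runs as Arduino code; this is just the distance-to-behavior logic, with my own names, and the interval numbers follow the description above (a rate of about 150 at the outer edge, shrinking toward an almost steady light at 0 cm):

```javascript
// Distance in cm goes in; the intended LED behaviour comes out.
function ledBehavior(distanceCm) {
  if (distanceCm > 300) {
    return { mode: "off" }; // nothing in range
  }
  if (distanceCm > 150) {
    // outer circle: fade in from 0 to 255 as the object approaches 150 cm
    const t = (300 - distanceCm) / 150;
    return { mode: "fade", brightness: Math.round(t * 255) };
  }
  // inner circle: blink, with a shorter interval the closer the object gets
  const interval = Math.round((distanceCm / 150) * 150); // 150 down to ~0
  return { mode: "blink", interval };
}

// ledBehavior(400) -> { mode: "off" }
// ledBehavior(225) -> { mode: "fade", brightness: 128 }
// ledBehavior(75)  -> { mode: "blink", interval: 75 }
```

Keeping the mapping in one function like this made it easy to tweak thresholds between iterations without touching the rest of the sketch.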
Feedback:
Due to technical problems, we were not able to fully show what we did, but hopefully our process is well documented in our journals. Roel mentioned that we were the only group that actually moved around with the project and asked people for feedback, which is good, and he advised everyone to do the same.