A Design Journal about Interactivity at Malmo University
At the end of M3 and the Interactivity course
This very challenging module has come to an end. For good or not, it is over. Unfortunately, it brought a lot of negative feelings, big disappointments and challenges, but there were a lot of positives among them as well. Most importantly, I learnt a lot from both sides. To begin with the negative things, because they were predominant. Actually, negative does not necessarily mean bad here. When everything was over, I had the time to look back and reflect on what happened. What I found out is that everything was purposeful: it might have been a bummer at the time it happened, but it actually led me to where I am now.
In every module in this course we were working with a partner, and this one was no exception. Or, at least, it was not supposed to be. Probably my biggest challenge in this module was the group work. It may be worth mentioning that I had never had such difficulties since the beginning of the whole programme, so I was not prepared at all and didn't know how to deal with the problem. What's more, I wasn't really sure what the real issue was. From the first couple of days in this module I saw hints of disinterest and a lack of motivation. But I thought this was temporary and that there might be an unknown valid reason behind it. Who am I to judge people?! So I left things as they were, even though we were already falling behind the other groups after the first few days. I was dependent on what he produced in code, and since he was mostly late, the delay started to grow. I did not say anything again, but later on, in the second week, it was obvious even to the teachers that we were unproductive and still in the middle of nowhere. It wasn't until the beginning of the third and final week that I said "Enough!" and started on my own. Looking at it now, I should have taken that step a lot earlier, because it felt like I was sabotaging myself because of a partner who wasn't contributing enough. But the decision was hard, because I couldn't say he was doing nothing when he was actually doing what we had discussed. There was no bond, and no initiative from him. If I didn't suggest we talk, he wouldn't get in contact with me. I tried to motivate and encourage him, but it didn't work.
Those problems led us to change topics and our direction of work a lot. From steering to tilting to circular movements and control, we were literally jumping from one to another almost every other day. Even the end result was not tightly connected to what I presented at the show n tell. Still, there were some things that led me to where I ended up. Needless to say, having no real topic, especially after the first week, was a cause of difficulties in the design process. We were literally all over the place day after day, and at the same time we had almost nothing. I had no idea how to fix that. Maybe I should have asked teachers and classmates for feedback more often. I guess I was a bit afraid of asking for feedback because we had almost nothing to show. But then again, feedback is exactly what could help in cases like that. I find feedback crucial in a design process because of the guidance you get, and we were lacking a direction of work. What I will keep in mind for future projects is to get as much feedback as possible, both from teachers and classmates.
One thing I totally missed was the machine learning perspective; it's a fact that I hadn't really thought about it. A reason could be that I didn't have the time, since I literally wrapped something up just hours before the show n tell. However, that is not a valid excuse. I really regret not having a deeper look at that aspect, because not only was it important for the course, but I also find the topic of AI and ML very intriguing. Machine learning is a powerful technology that is present almost everywhere nowadays, from cameras as a tool for tracking to medicine as a way of making diagnoses. In other words, it is a very powerful tool created by humankind which is more helpful than harmful.
As a whole, one of the best things that happened over the whole course is that I have improved my coding skills a lot. I can admit that I had a lot of difficulties, especially at the beginning of each module. I had been feeling lost and had doubts that I could produce something at a high level for each module's show n tell. I guess my insecurities were due to having had a lot of difficulties in the programming and prototyping courses last year. Apart from that, I haven't really had the chance to work with programming as much as I would have wanted. What was also "scaring" me was that we were working with some abstract (from my point of view) cases and libraries, such as TensorFlow, and topics such as sound. However, I proved myself wrong, and I can say I'm proud of that. Working with different libraries and on different topics really broadened my knowledge and improved my skills. I gained some confidence as well, and I feel like a better programmer than at the start of the course. There is still a lot more to explore and learn, but I'm glad that I didn't give up on the coding and stuck with it until I got the result I needed. I now have the basis and the interest to develop my programming skills even further.
Starting from scratch right before the show n tell
I did as planned: the first thing to do was to ask Jens to help me out with the code. I presented the idea I had and expected him to accept it, having in mind the short amount of time left. Unfortunately, he was not satisfied with it, and with good reason. First of all, doing the trail in code was not as easy as I thought. It required a lot of calculations to position the low-opacity circles in the trail, and overall making it work precisely and smoothly would have been very hard and time-consuming. Secondly, from the kinaesthetic point of view, it was again "flat" and without any real value. The body movements were limited to moving only one of the wrists, while the rest of the body was still. What Clint had warned me about in the previous days had happened: changing the visuals while the kinaesthetic experience and the way of interacting with the sketch stay the same as before. I had thought that this idea was more complete and that this automatically gave it value and meaning, but it was not like that. I totally agreed with what Jens said, although I was under pressure and that was not in my favor.
However, he told me to keep it as simple as possible. It's not the visuals that matter the most, but the kinaesthetic experience, after all. Jens gave me the idea of including some things from the first module, when we were working with sound, such as using frequency and an oscillator. But how could I connect that to the body detection and the kinaesthetic experience? It was not very logical to me. Nevertheless, Jens and I brainstormed a bit on this. The biggest problem was that I had less than 24 hours before the show n tell and I was about to start from scratch. That meant I would have no time to experiment and go in depth with the experience. I really had no idea what to do. But Jens insisted that it would be best if I included the sound. I thought that I could generate tones based on my body parts' positions in the coordinate system. Again, I had no idea how, but that sounded like a good idea to both of us. After many calculations, I was able to generate sound from my body parts' positions. Moreover, I disabled the video, so instead of seeing yourself on the canvas, there is a blank white field. The biggest advantage of that was that the body parts left tracks on the canvas that didn't disappear. That made it feel as if I was drawing with sound, using my nose, left and right wrists and the right knee.
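The core of the mapping was simpler than all those calculations made it feel. A minimal sketch of the idea, in plain JavaScript: the canvas height and the frequency range here are illustrative values I picked for the example, not the exact ones from my sketch.

```javascript
// Map a body part's y coordinate (0 = top of the canvas) to an audible
// frequency, the way p5.js's map() does. Higher on the canvas means a
// higher pitch. All range values here are illustrative assumptions.
function positionToFrequency(y, canvasHeight = 480, minFreq = 220, maxFreq = 880) {
  const t = Math.min(Math.max(y / canvasHeight, 0), 1); // clamp to [0, 1]
  return maxFreq - t * (maxFreq - minFreq);
}

// In the p5.js sketch this kind of value would drive the oscillator,
// e.g. osc.freq(positionToFrequency(wristY));
console.log(positionToFrequency(0));   // top of the canvas
console.log(positionToFrequency(480)); // bottom of the canvas
```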
[Embedded YouTube video of the sketch in action]
What I found interesting was that, looking at the white canvas, nothing really happens until you move one of the assigned body parts. Nothing really provokes the user or tells them what to do, how to act, which body parts to use, or what is going to happen. That isn't necessarily an advantage. Some people could find it uninteresting, since nothing is really happening. Looking at it, there is that question: okay, what is going on here? However, the kinaesthetic experience is totally different from the one I had before (with the swiping), because now you are engaging more body parts and there is an exploration you do on your own.
Also, the role of the sound (as annoying as it is) is to track the body parts' positions on the canvas; it works somewhat like a tone generator. Moving the body modifies the pitch. Basically, you are controlling the sound with your body.
The main difference, and the advantage I would point to, is that by using the nose and knee the experience is a bit different, because you include body parts that are "unusual" to draw with. Since we are "programmed" to draw with our hands, I decided to include them too, even though using the wrists feels a bit static. But when drawing with the nose and knee, you unintentionally move more of the body, rather than just the upper body parts. The role of the right knee is to clear the canvas when it reaches a certain y position. It was a last-minute idea and maybe one of the best ones I came up with in this module.
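The persistence of the tracks and the knee-clearing can be sketched without p5.js at all. This is just the logic, not my actual sketch: the threshold value and part names are illustrative, and the array stands in for the canvas, which in the real sketch simply never gets a fresh background.

```javascript
// Points accumulate because the background is never repainted;
// raising the right knee above a threshold line wipes everything.
const trail = [];

function shouldClearCanvas(kneeY, thresholdY = 200) {
  return kneeY < thresholdY; // smaller y = higher up on the canvas
}

function update(part, x, y) {
  if (part === 'rightKnee' && shouldClearCanvas(y)) {
    trail.length = 0; // like repainting the blank white field
  } else {
    trail.push({ part, x, y }); // the point stays, like paint
  }
}
```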
Officially alone
I was officially on my own now. My partner understood my position, and that was a relief. After wasting the first two days of the week on things that were dead ends, I now had less than 72 hours before the show n tell to come up with something and present it. I felt the pressure was super high. I had almost nothing, so I had to begin somewhere, somehow. It was pretty hard because of not having been successful and productive during the past two weeks. I knew I was far behind my classmates and far from where I was supposed to be at that moment. I could feel that this was one of my toughest challenges so far in the programme.
Roel gave me a pretty good idea: to create a circle that leaves a track when you move it, like a tail. Maybe there was some potential in that? It kind of reminded me of a comet. What's more, I could put a picture of planets as the background of the canvas. The comet would be attached to one of my wrists, and I would have to steer it around the planets without hitting any of them. Plus, it fitted well enough within my "topic" of control and upper body movements. And that sounded like an idea. I now had something to pursue.
I was left with the impression that the motion trail of the circle could be easily done in code. I kind of knew how to do it, but I couldn't formulate it and write it in code. So I began my research on how I could do it. I found a very useful web article about the topic. I read through it and everything was very well explained. I did all the steps just as the author did, but unfortunately there was something wrong with my code. For some reason, it was not working at all. I assume the problem was either two canvases overlapping one another and causing a conflict, or something wrong with my coding. I was so disappointed. What was I supposed to do? I tried my best and asked some classmates for help, but they couldn't solve the issue either. There was one thing left to do: ask Jens for help the next morning as soon as possible, otherwise I couldn't see any way out of this situation.
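I can't reconstruct the article's exact approach, but a common way to fake a motion trail is to keep the recent positions and fade them each frame (in p5.js the same effect is often achieved by drawing a semi-transparent background every frame instead). A rough sketch of the fading logic, with illustrative decay values:

```javascript
// Fade every stored trail point a little each frame and drop the
// ones that have become too faint to see. decay and minAlpha are
// assumed values, not anything from the article or my sketch.
function fadeTrail(trail, decay = 0.9, minAlpha = 0.05) {
  return trail
    .map(p => ({ ...p, alpha: p.alpha * decay }))
    .filter(p => p.alpha > minAlpha);
}

// Each frame: push the comet's current position, then fade the rest.
let comet = [{ x: 0, y: 0, alpha: 1 }];
comet.push({ x: 5, y: 2, alpha: 1 });
comet = fadeTrail(comet);
```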
End of an unproductive week
After talking with Clint and Roel during the last two days of the previous week, and seeing how far behind we were as a group, I thought it was time to give my partner one last chance. I know it was pretty late, but we had been in the middle of nowhere so far, and I felt I could do better on my own. When talking with Clint on Thursday, I had the circle increasing its size when doing swiping gestures with the hand (wrists). On Friday we had a static square which changed its color and nothing else. Things were bad. Something had to change. I took both Clint's and Roel's feedback into account and decided to make the square react to arm positions. I informed my partner and really hoped that now, after all the negative feedback we had received, he would pick up the pieces and work on the code. I was ready to work during the weekend and told him to contact me as soon as he was ready with the code. Unfortunately, that didn't happen on time. I felt very disappointed and discouraged. I needed to pull myself together, put in all the energy and knowledge I have, and start fresh. Before that, I had already brought my small idea to life.
The square increased and decreased its size depending on the arms' (wrists') positions. From the kinaesthetic point of view, the meaning of those body movements was as if you were showing something big and broad, signalling that the square should get bigger and bigger. However, it was still very "flat", without any deep meaning or value. It was also not that hard code-wise, but that didn't really matter at that point. I again felt that I couldn't escape, stuck in that figure. What more could I do? Rotate the figure? It was a good idea, having in mind that I had a shape with angles. Plus, I was still interested in control, so I could use certain gestures to control the rotation of the figure with precision. Unfortunately, it wasn't as easy as I thought. I lost a lot of time thinking about how to do it in code, and when I found out which function to use, it didn't work. Instead of rotating the figure, it rotated the whole canvas. Such a mess… I needed to take a huge step back and ask myself: okay, what do I have here and what am I pursuing?
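Looking back, the canvas-spinning problem makes sense: p5.js's rotate() rotates the whole coordinate system around the origin. The usual fix is push(); translate(cx, cy); rotate(a); then draw the shape centred on (0, 0); pop(). The underlying math for one corner of the square can be shown on its own (this is a generic rotation-about-a-point helper, not code from my sketch):

```javascript
// Rotate the point (px, py) around the centre (cx, cy) by `angle`
// radians: shift the centre to the origin, rotate, shift back.
function rotateAbout(px, py, cx, cy, angle) {
  const dx = px - cx, dy = py - cy;
  return {
    x: cx + dx * Math.cos(angle) - dy * Math.sin(angle),
    y: cy + dx * Math.sin(angle) + dy * Math.cos(angle),
  };
}
```

Applying this to each corner spins the square in place, which is exactly what translate-then-rotate does for you in p5.js.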
So where and how to begin? I wanted to control something, but what? I guess that what bothered me the most was the visual aspect. I was literally jumping from circles to squares, trying to get inspired and get going, but it didn't work the way I wanted. Maybe something more significant would make the difference? I asked Clint for some ideas. He told me that it could be anything, from a text that changes its size to a video being played back; what matters the most is how the interaction would be different. I thought that it wouldn't really change the way of interacting with it. Since TensorFlow is detecting wrists, whatever is placed as an output (text, square or any other figure), the way you interact with the sketch would still be more or less the same. I should have been experimenting with different types of movements that could influence different types of objects, not the other way around. So I had to think of the interaction first.
I had a bunch of questions to find answers to: since I am interested in control, how does it feel to control this certain object? Effortless, or something more "heavy" and hard to control? Because it's not just what the person is doing with their body, but also the characteristics the object has, and its integrity as well.
Stuck in the process
We were still far behind and not working optimally. Actually, I wasn't sure what we were working on. We had this idea of circular movements, but we were still struggling to narrow it down and experiment. It was both my and my partner's fault. On the one hand, he was supposed to work on the sketch and try to clear the errors it had, but it was taking him a lot of time, which was a bummer, because until the code was done I could hardly work on experiencing body movements. While he was working on the code, I decided to go back to circular movements and ask myself: okay, we do use circular movements daily, but what do we do with them in our daily lives? Brushing our teeth while looking in the mirror? Stirring a dish while cooking? And what about the control of that? I needed some guidance so badly. I got some feedback from Clint, and I once again understood how far behind we were with our design work. However, the feedback he gave me was very valuable. It was obvious that the circle sketch was too simple and "flat", which made it hard to feel the movement. There was no precision either. It was questionable how we could control the object, and what the interactive artifact was doing for the kinaesthetic movement. We were trying to control the object with our hands and arms, but it wasn't meaningful. Moreover, the changing colors were random, meaning that they also had no point and no connection to the body movements being done. I know that the object must be influenced by something meaningful and respond to it in a meaningful and aesthetically attractive way. Unfortunately, I had no choice but to talk seriously with my partner about the situation, so that I would know whether I was alone in this or we were still working together.
Week 2 beginning
During this week we were supposed to be ideating and experimenting with the body movements we chose. However, my partner and I only knew the general topic (a.k.a. circular movements); we hadn't come up with and designed a particular movement. I felt a bit stressed because I saw the progress my classmates had made and we were kind of falling behind. We decided to split the tasks: my partner Miroslav would create some kind of object that responds to some movement, and I would focus on bodystorming and experiencing the feeling of performing the movement. I wasn't sure this was the right way to begin our design work, but we were already falling behind and feeling pressed for time.
Unfortunately, that task took more time than expected. My partner was late with the coding part, which was a bit disappointing because he didn't say anything about it. However, we attached a color-changing circle to the rightWrist. When you move your hand to the left it gets bigger, and when you go back to the right it gets smaller.
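The grow-left, shrink-right behaviour is just a linear map from the wrist's x coordinate to a diameter, like p5.js's map(). A small sketch of the idea (canvas width and size range are illustrative, not our actual values):

```javascript
// Map the wrist's x position to a circle diameter: the left edge of
// the canvas (x = 0) gives the biggest circle, the right edge the
// smallest. All range values are illustrative assumptions.
function wristToDiameter(x, canvasWidth = 640, minSize = 20, maxSize = 200) {
  const t = Math.min(Math.max(x / canvasWidth, 0), 1); // clamp to [0, 1]
  return maxSize - t * (maxSize - minSize);
}
```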
It had no concrete meaning and it was hard to say why we did it that way. We just wanted to ‘get started’ somehow. What I experienced while playing with this sketch was:
You are very static while doing it; only the arms and hands move
Muscles start hurting a bit from the stretching after 20–30 seconds of continuous movement
It looks like you are doing some kind of magic (Doctor Strange-like)
It is very abstract, with no exact meaning
That was basically what we did during the first two days of the week. I felt we had some problems, since we were that far behind compared to the others, but I was not sure what they were or how to solve them.
How to sketch in movement
Today we had a lecture on how to sketch in movement. Since kinaesthetics is a form of aesthetics, it belongs to what we sense. As we sense everything around us, we sense our body in its surroundings as well. How do we design our movements? I thought about how we don't think about the body movements we do daily, such as walking, for example. We don't really think about how to do it or how it feels. It feels totally natural and we are not "designing" it. So we had to think of methods to design upper body movements. Maybe bodystorming? Bodystorming is like brainstorming, but with our bodies: a very creative technique that helps us "imagine" we are doing something so that we can try to experience it. It is a way of sketching in movement, but a totally different kind of activity from sketching in Adobe XD or on paper, for example, which doesn't let us feel the kinaesthetic experience.
Following the brainstorming from the previous day, my groupmate and I decided to work with circular movements of our upper body parts. The decision didn't come easily, but it was logical, since we had been thinking about steering as something interesting to us. Moreover, circular movements can be done by spinning the head, through the many circular moves we do daily with our arms and hands, and even with the torso. I did have a look at Nintendo Wii and Xbox Kinect games, where the context of the games enables people to perform meaningful movements (Loke and Robertson 2010), to draw some inspiration. Unfortunately, it was not as helpful as I had hoped, because most of the games used the full body and very few arm/hand moves.
I started asking myself where those movements could be seen. I remembered the famous Marvel superhero movie Doctor Strange. My partner saw it as a joke, but I actually found it useful for imagining what circular movements are and how they can be visualized.
Doctor Strange. Retrieved from: https://tenor.com/view/doctor-strange-infinity-war-iron-man-dr-strange-iron-man-dr-strange-infinity-war-gif-11182948
We felt stuck because we didn't really know how to proceed. Just coming up with something felt very difficult. Jens advised us not to focus on the function at the moment, but on the movement itself. That totally makes sense, since this module is based on movement. Later on, of course, we should think about how the computer could react to the interaction, and what is meaningful and possible and what is not.
A good question popped up during the lecture: does interaction design have the same aesthetics as graphic design? It really got me thinking. If we say that interaction design involves kinaesthetics together with functionality and behaviour, the aesthetics of graphic design are just visual (color, layout). You are still sensing, but with the eyes only, while in the field of interaction design you can sense with your whole body.
References:
Loke, Lian, and Toni Robertson. 2010. “Studies of Dancers: Moving from Experience to Interaction Design.” International Journal of Design 4(2): 1–16.
Brainstorming about potential movements
To begin our exploration of a potential movement, my partner Miroslav and I started thinking about an area we are interested in. What is the first thing that comes to mind when talking about physical activity? Sports? Dancing? I think we first and foremost associate physical activity with sports. Since there are a lot of different sports, we thought there would be a lot of opportunities for exploration as well: from cycling, moving with the lower body and mostly the feet, to basketball, where you use mostly the upper body. After brainstorming for quite a while, we couldn't really settle on something particular. I thought that maybe it would be easier to narrow down our options by choosing to work with either upper or lower body movements. We didn't see much value or interest in working with lower body movements. To be really honest, we hadn't put much thought into that, because we were afraid of losing too much time. So we decided to focus on exploring within the upper body parts.
I remembered that I drew a lot of inspiration from Nintendo Wii and Xbox Kinect for the project in the studio course last semester, so maybe we need to have a look at them again from a different perspective?
Following that, esports came to mind, so we began thinking about that: what esports there are and how they are actually played. Is it only sitting in front of a screen and pushing the keys of a keyboard or a basic controller? With most games, yes. But what about racing games, where the controller is shaped like a steering wheel so that the experience is as close to real life as possible? The body movement when you steer sounded interesting to us. However, we were not sure whether it is valuable from an interaction design perspective. In what other context could this movement be used, apart from cars and racing? Or is a context even needed for this movement?

M3 introduction to code
We got introduced to the code we are going to work with in this module. It uses Google's open-source framework TensorFlow. Jens had included two example sketches which track different body parts. The code didn't look very complex or difficult, but that impression can be deceiving.
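I won't reproduce the example sketches here, but a PoseNet-style pose object comes back as a list of named keypoints with confidence scores (structure simplified below). A small helper like this, assumed rather than taken from the examples, is roughly how a sketch can find a part and ignore shaky detections:

```javascript
// Look up a named keypoint in a PoseNet-style pose and return its
// position, or null if the part is missing or the detection score is
// too low to trust. The pose shape is simplified for illustration.
function findPart(pose, name, minScore = 0.5) {
  const kp = pose.keypoints.find(k => k.part === name);
  return kp && kp.score >= minScore ? kp.position : null;
}

const pose = {
  keypoints: [
    { part: 'leftEye', score: 0.9, position: { x: 100, y: 80 } },
    { part: 'leftEar', score: 0.2, position: { x: 60, y: 85 } },
  ],
};
// A sketch would then draw a star at findPart(pose, 'leftEye').
```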
I started tinkering with the code to get some understanding of it, since it is going to be a crucial part of our design work. It was pretty entertaining to see, for example, stars on your eyes that keep finding the body part they are attached to even when you move, as if they are following you. I added stars on my ears as well.

While playing with it, it reminded me a bit of one of those lo-fi Snapchat or Instagram filters.

Even with my non-expert knowledge, I was able to understand most of the code easily. I am not sure about my partner, because I haven't been in contact with him yet. I know he is working from home only, which could be challenging, having in mind that the project is based on physical activity.
We were advised to start thinking of a body movement and come up with something by the end of the week.
We need to think about what we get out of the code. What are we interested in?
M3 initial thoughts
It's time for a new challenge. In this module we are going to explore and experiment with kinaesthetic interaction experiences, and more concretely with designing body movements. We will also touch a bit on the topic of machine learning. My first thought was that this sounds very interesting and a bit different from what we have worked with before. I immediately had a throwback to the studio course last semester, when I explored a more or less similar topic. I felt I was one step ahead because of that, but I shouldn't rely on it, because that could be tricky.
It is true that when we interact with everyday technology devices such as phones and computers, we use very limited parts of our body: mostly our hands (fingers) and eyes, or let's say the upper body. The body movements are small and limited as well. What does it mean to touch or swipe on the phone, or to press the keys on a keyboard? How does it feel? That depends on the material as well. Since most smartphones nowadays have a similar design, materials and functionality, the way of interacting with them is the same. Swiping on a glass display doesn't feel aesthetically pleasing and doesn't require "big" movements. It also doesn't include most of our body parts. For example, I can't imagine swiping with an elbow, a knee or a foot. There are no other ways of interacting apart from the fingers. At least those are the social norms.
It looks like this topic is open to interpretation, just like the first module, meaning that it's up to us to come up with something.
M2 analysis + Show n tell
The show n tell went as I expected. I can't say the feedback we received was positive, but it was very valuable. Our presentation didn't go as smoothly as I would have liked, for a few reasons:
We were not very well prepared, because we hadn't practiced presenting properly. The only practice we did was ten minutes before the presentation that morning. It felt like this wasn't enough, but we had no time left. Anyway, we didn't fail, but we could have done a lot better in this respect.
We didn't take many notes on the feedback our teachers gave us during the show n tell. I tried to take some, but I couldn't keep up with listening, making eye contact, answering questions and taking notes at the same time.
Right before the start we were advised to join the Zoom session if we had some media to show (videos, photos, graphs, etc.) so that everyone could see it, instead of us turning the laptop screen around. However, technical issues with the internet connection and Zoom prevented me, and I had to turn my laptop around after all. Maybe a few people didn't manage to see properly, and I really regret that. Even though it was not my fault, it wasn't nice, and I didn't have any backup plan either. Next time I will keep this in mind.
Now when I look back, I feel we were not efficient enough during this module. There were some situations where we didn't know what to do and were stuck (especially at the end of the first week), so I think that had its impact, since we lost some time. Also, some activities took us more time than they were supposed to, such as playing with the values of the LDRs and the soldering. However, I see everything as a learning activity, so even when something wasn't for the good of the project, it was definitely good for developing skills and knowledge to use in the future. I learnt a lot in this module.
To begin with, I "upgraded" my Arduino skills a lot. Since the microcontroller was a crucial part of this module, using it was inevitable. Even though my coding skills are far from great, they have improved a lot. The sketch we worked on was complex because of the many emotional states we included and their interactivity. It turned out that designing a personality from the technical perspective is not easy at all. It required a lot of thinking and adapting. For example, the LED shines at low brightness to show that it is "awake", but won't interact if the ambient brightness is too high. To make it interact with you, it should be dark, and the darker it gets, the more excited it becomes. At the same time, the excitement can't last too long before it reaches the next state: exhaustion. And what do we do when we are exhausted? We rest. So the object needs to rest as well.
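The personality logic above boils down to a small state machine. Our real version was an Arduino sketch, so this JavaScript version is only an illustration of the same idea; the state names, thresholds and timing values here are assumptions for the example, not our actual code.

```javascript
// One step of the LED's "personality": darkness excites it, too much
// bright light keeps it merely awake, and excitement that lasts too
// long tips into exhaustion, which forces a rest. darkness is 0..1,
// excitedFor is in seconds; both thresholds are illustrative.
function nextState(state, darkness, excitedFor) {
  if (state === 'exhausted') return 'resting';            // forced rest
  if (darkness < 0.3) return 'awake';                     // too bright to play
  if (state === 'excited' && excitedFor > 10) return 'exhausted';
  return 'excited';                                       // dark enough: excited
}
```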
Probably the biggest challenge for me in this module was personifying the LED and giving it behaviour and emotions. To be honest, it took me quite a while to set my mind to it and actually understand it. Most of the time, especially in the beginning, I had this question in my mind: how can a simple LED express emotions? Reading the article surely helped clarify it, and with the progress of our design work it got even better. In the beginning, when we started our exploration and were trying out different patterns to see how they would differ from each other, nothing "blinked". Even when we thought about what feelings they evoked when we looked at them, nothing happened. I was sure that I was interested, but I wasn't able to assign a certain feeling to them yet. Not until we decided to change our way of working and started coming up with emotions that sounded interesting to apply to the patterns, rather than the other way around. That really helped, and despite being stuck during the process, we saw a lot of potential in working with excitement as our main emotion. Overall, my partner and I believe that in our case it's not the blink of the LED that matters, but the whole sequence of light that creates it.
Prototyping
It was time to get our hands dirty with a prototype. We had a rough idea of what we wanted to do: a round shape (maybe a ball or half-sphere) with the four LDRs placed in different positions, so that they get light from different angles, and the LED placed somewhere on top as a signal. The reason behind the shape was simple. We wanted to create something you could hold in your hands; by doing so you get not only a feeling of closeness, but while holding it you also cover some of the sensors with your hands. We didn't know what material we should use or how the end result would look, and I think that was a bit problematic, because we had no clear vision of what to do and were largely experimenting with what was available in the workshop. We knew for sure that we had to get it done by the afternoon, so we were in a hurry, and that reflected on our prototyping process. We found a leftover half foam ball and decided to drill holes for the sensors and the LED. Our improvisation led to this:

I am thinking now that maybe we should have used a different material for stability reasons. Maybe plastic? Or something else? The feeling of holding the foam object was not aesthetically pleasing because of the rough, sandpaper-like surface. At least once the physical prototype was done and the sensors were in place, it was easier to hold it, interact with it, and look at the output values of the code. It was now easier to see what worked and what didn't.

What wasn't working the way we wanted was the warning for exhaustion. It didn't really evoke the feeling of a warning; it felt more like a pre-excitement state. So we had to make last-minute changes to the parameters to clarify the relation between excitement and the warning for exhaustion.
Our feelings before the show n tell were mixed. We felt kind of satisfied with what we had done over the past 3 weeks, but maybe it was not enough? On the one hand, keeping in mind all of the constraints, and also that we didn’t manage our time properly, I believe we more or less accomplished what we were supposed to. On the other hand, I tend to think the prototype is not good enough and we should have spent more time on it. Maybe on the possibilities of interacting with it and the meaning behind them?
The day before the show n tell I faced a personal challenge: soldering the wires so that the connections would stay stable once we started using the prototype. I had never done a solder joint before, and it took me a while to figure it out, but apart from taking a bit too much time, I feel I did a good job for a first try.

Final concept development #2
After receiving some feedback the day before, we felt encouraged. We were advised to express the light value of the LDR as a percentage rather than as a raw number. We thought that would be easier to understand and control: saying, for example, 40% darkness rather than <=200. But what if it didn’t work? Was it vital to use percentages? We were not sure how to do it. Our coding skills were limited, and after several hours of trying to come up with a solution, we decided to ask Roel for help. We both felt disappointed as we completely gave up on this idea, and moreover we lost a lot of precious time. We also had problems with our Arduinos and the code, because for some reason the value of one of the sensors wasn’t being printed, but after checking everything we found a small mistake of ours that was easily solved. If I could go back, I wouldn’t have spent that much time on the percentages, since the code was still working fairly well anyway. Since our day was more or less ruined, we decided to work a bit more on the code and make some adjustments related to the sensors.
We tried to make it more “alive” by making certain states (emotions) dependent on light. For example, when the environment is too bright, it goes into a state of disliking the light and wanting a bit more darkness. When it’s dark, it gets excited, but if it gets too dark, it falls asleep. Since we are looking at the light as a living being with its own behaviour, we need more states depending on the situation, so we need to think further, and we are running out of time. The next thing to do is physical prototyping.

Final concept development
After the weekend, we finally had to decide whether we would use LDRs or FSRs as our input. Karin preferred the FSR a little more because of the way you interact with it, whilst I was more interested in exploring the LDRs, because I saw more potential and opportunities there. After some discussion we got on the same page about working with LDRs. Since working with just one LDR had already been done the week before and there was nothing more to do with it, we decided to add 3 more. So we began our exploration by placing the 4 LDRs in a square, with each one pointing in a different direction. That way, we were able to pick up light from different angles. We also wanted to see whether they would interfere with each other, and whether they would detect light together or separately. It turned out to work, despite us feeling a little insecure about how to adapt the code to the larger number of sensors. Moreover, we were able to manipulate the sensors by covering them with our fingers, so the interaction felt similar to the FSR (but a gentle touch rather than applied force). I was very happy with our progress so far, as we were on the right track. I was curious how they would work with longer wires attached to the sensors: they would still point in different directions, but they would also be far away from one another. It took us quite a while, but we managed to wire everything up. After experimenting, it turned out that you had to grab them all together for them to react, because they were all assigned the same values. We thought we needed to assign a different value to each of them, and maybe at some value a certain state would be activated? That was some food for thought for the coming days.
[embedded YouTube video]
How to trigger excitement?
At this stage, we asked ourselves: is blinking just blinking? If not, what does the blink mean? It felt like we were on the right track. We had a basis to work on with the code, and we got encouraging feedback from Roel as well. But I thought that excitement doesn’t come from nothing; something triggers this emotion in us, so something should provoke the excitement in the LED as well. It was time to change the input of the interaction.
Meanwhile, the code needed some adjustments. We had both been working on separate sketches: Karin had worked with sudden excitement, whereas I had worked with a more gradual build-up of excitement. Both had pretty interesting sections, and we had to think about the bigger picture of how to combine them. If there is an excited state, there should be a normal one as well. And then a disappointed one after the excitement reaches its peak. That sounded good, but not enough. What would happen between these stages? How would the excitement be provoked? What if it has been too excited for too long? We decided to add some more states, even though we had no idea how to create them in code.

I think that by adding more states, we have more things happening and they seem more logical. Everyone gets exhausted and goes to sleep, for example, so if we are personifying the LED, we need more “human” qualities. I am not really sure whether we need to include a state of sadness, because it has some connection to excitement but isn’t that tightly related. However, it is my partner’s idea, so it is worth giving it a thought later on.
We added a button that, when pressed, made the LED blink faster. However, this was quickly ruled out, because there was no real interaction happening and we saw no potential in continuing with it. I believed there could be something to it, but we couldn’t really focus on that at that moment. We had to think of more input options.
The next thing we tried was the joystick. At first it looked like it would be an interesting input to use. I was playing with it in my hand and thinking about what purpose it could have. To draw some inspiration, I searched for projects on the Arduino hub, but most of them included more LEDs and different kinds of modules, nothing within our constraints. My partner Karin and I agreed that it had nothing to do with excitement, and besides, it fell outside our constraints. The movements of the joystick and excitement simply didn’t match. To be honest, crossing it out felt a little like a missed opportunity, but we couldn’t really find value in it, and I’m not sure it would have worked properly for our project.
Then we narrowed it down to these two: the light-dependent resistor (LDR) and the force-sensitive resistor (FSR). There was something interesting in both. First, I connected the LDR as an input, so that when it is dark, nothing happens, but above a certain amount of light, it starts doing the wave. Even though it felt like a “switch button”, we didn’t give up on the idea. We thought we could add more sensors and make them behave differently.
The next approach was the FSR, which reacts to touch/pressure. The functionality was more or less the same as with the LDR: when pressed, the LED is off, but when released it goes through the states of the wave. We found it valuable because the interaction is closely tied to, and dependent on, the user, while with the LDR the user had no such role. Also, as the name states, the light-dependent resistor is to a large extent dependent on the light in the room, so its values would have to be changed and adapted to the circumstances. We had no idea which one to choose; both seemed equally logical and interesting to work with, but both had their pros and cons. It was a difficult decision, and we were really afraid of messing up and ending up with a missed opportunity. For that reason, we decided to brainstorm a little more during the weekend and then pick which path to go down.
[embedded YouTube video]
Excitement
Since we had set our minds on working with excitement, we were ready to kick-start our design work. But since excitement can be portrayed in various ways depending on context, we were still not sure how we could get the LED excited. Actually, what would an excited LED look like? It is a matter of giving an inanimate object a personality, which establishes a connection with the user (Spadafora et al. 2016). Still, the question is how users can relate to the LED as something “alive”.
We made the first step, and that is to imagine. Imagine the LED expressing excitement. Then comes the next part: how do we accomplish that in code, and later on as a prototype? We had both created some patterns individually during the first week, and there were elements of each that we liked. I managed to make a gradual increase and then decrease of the wave in a loop.
The inspiration for this came from a real-life situation. I think we have all been excited about an upcoming event days before it happens: the closer the event gets, the more excited you gradually become, until you reach the peak while attending the event, followed by disappointment when it is over. Another example of excitement is when a dog sees its owner after a long day alone and starts wagging its tail fast. However, like any other feeling, excitement is subjective; every individual expresses and feels it differently. So we needed to figure out how to depict the excitement so that it is visible and understandable for everyone. I understand excitement as a feeling full of joy and exhilaration: energetic, never boring, with fast-paced movements (like the dog’s tail). So when it comes to the LED, what we thought of at first was fast blinking and playing around with brightness levels. We then refined that idea: the light starts at one base level, increases and pauses at half brightness for a bit, increases again and pauses at full brightness, and then decreases back down to the starting level. With excitement in mind, we felt this caught our interest and was a good basis to explore further.
References:
Spadafora, Marco et al. 2016. “Designing the Behavior of Interactive Objects.” TEI 2016 - Proceedings of the 10th Anniversary Conference on Tangible Embedded and Embodied Interaction (February): 70–77.
First struggles
It’s only the end of the first week of this module and we already feel a bit stuck. But before that, I managed to create a light pattern based on one of those mentioned in the article.
[embedded YouTube video]
I felt so happy, because it was quite interesting to see how the LED reacted and how different the light pattern looked now compared to before. It was a satisfying result, but we had to start thinking about what emotion we wanted to express. And this is where we got stuck. What was the meaning of all this? Did our work so far relate to, or express, happiness? Not exactly. I guess working with happiness and sadness wasn’t what we were looking for, maybe because they are too superficial and obvious, and we wanted something a bit more abstract. So, after some serious brainstorming, we both decided to go for excitement with a notch of disappointment. It was quite exhausting, because we spent a lot of time thinking about which path would be valuable. I think we were a little too stressed about failing. Since we were at a stage where we were still experimenting, it was not such a big deal. Jens encouraged us to just go for it: whether we lost time or not, at least we would know we had tried, instead of sitting there stuck, too afraid to start exploring. Now that we know what to focus on, we have to continue working on the code, but with a concrete goal: expressing excitement.