Essay
How are notions of “interaction aesthetics” significant for interaction design practice?
Introduction
The purpose of this text is to discuss and discover how notions of “interaction aesthetics” are significant for interaction design as a design practice. This will be explored through literature and design activities related to the “Interactivity” course at the Interaction Design program at Malmö Högskola. The text will reflect upon how interactions can be described and mediated with the help of interaction aesthetics and the “Interaction Vocabulary”. It will also discuss how the vocabulary can be used as a tool in interaction design practice and how it can help designers open up or narrow down a design space.
Interaction aesthetics
When a product’s aesthetics is mentioned, it is easy to think primarily about visual aesthetics like shape, colour and form. However, it is not only the visual aesthetics that matter but also the aesthetics of the interactions. Not only do products need to be beautiful, but they also need to “feel good to use”. Before technology developed into what it is today, interactions were limited by the available technologies. As technology matures and develops, products are defined beyond their functionality, technology and material form. Interactions no longer exist only to invoke a function; they have developed into an essential part of the product itself, creating emerging experiences (Lenz, Diefenbach, & Hassenzahl, 2013).
Designers need to know what makes a design and its interactions good or bad, but describing interactions with an ordinary vocabulary, using words like “good”, “nice” and “beautiful”, is not sufficient. While putting the experiences and feelings of an interaction into words is clearly not an easy task, a common vocabulary for interaction aesthetics is needed to describe interactions in greater depth, and thereby create better designs.
Interaction vocabulary
When interactions are to be described, designers should not only be interested in how users interact with products, for example whether an artefact is slow or dynamic. Additionally, designers should consider why users choose to do something and the emotions that emerge from that point of view. These could be described with words like exciting, unnatural or surprising. Lenz et al. (2013) have developed an “Interaction Vocabulary”, aiming to create a list of attributes that describe differences between forms of interaction.
The important aspect of the “Interaction Vocabulary” to acknowledge is that it is meant to conceptualize the aesthetics of interactions: a set of attributes for describing felt differences between forms of interaction. Due to limited space, a detailed description of all the attributes will not be given. However, to give a sense of how the attributes work, an example follows.
Two attributes that work as opposites of each other are stepwise and fluent. A stepwise interaction is one that can be experienced as guidance through complex situations or processes, where interactions are in a way ritualized. Every step the user takes has a meaning, the interactions have a clear structure, and the user can get the feeling of approaching a goal step by step. Fluent interactions, on the other hand, give users a feeling of autonomy and the power to change the interaction however they want. It is an interaction where users feel they have the power and right to change what is happening at any point in the process (Lenz et al., 2013).
Interaction vocabulary as a tool
When interactions are stripped down and reflected upon with attributes, designers can see and acknowledge the feelings and experiences the interactions create, which often otherwise go unnoticed. Even a task as simple as turning on a lamp can be reflected upon with different attributes depending on what kind of lamp it is. Interactions easily become automatized and remain unconscious, which makes it difficult to describe how they feel or should feel. The “Interaction Vocabulary” provides a possibility to talk about interactions and put words to the experiences and feelings through the attributes (Lenz et al., 2013). It is an important and powerful tool for deconstructing interactions, enabling a way of communicating about them.
Diefenbach et al. (2013) describe the “Interaction Vocabulary’s” range of application as manifold. The attributes can be used in any stage of a design process. Used at the beginning of a process, they can help designers understand what they are aiming for and possibly open up the design space. Instead of designing without a clear purpose or idea of how the interactions are to be experienced, the attributes can be used as a guiding tool. But they can also help designers narrow down a design concept if they are aiming too wide or have gotten lost in the design process. Designers can easily get lost at some point during a process, jumping between ideas. Reflecting back on the attributes can help them stay focused throughout.
Designers can use the attributes to reflect on where they are with a design and to acknowledge the experiences currently taking shape. This can help them open up new questions, but also gain insight into what they have done so far in a design process and how they can proceed, whether that means narrowing down the concept or opening up new questions. An additional way to use the attributes is to ask how the design would be if the opposite attribute were used instead. If an initial design was on the fluent side of the spectrum, how would it be if it became stepwise? This opens up new ways of designing an idea and gives valuable insight into how much of a design’s “success” the interactions account for.
Overall, the “Interaction Vocabulary” with its attributes can be described as an inspirational tool for outlining the potential ways to design interactions. In addition, it is also a means to talk more about how interactions feel, both within design teams and when engaging with users (Diefenbach, Lenz, & Hassenzahl, 2013). Diefenbach et al. (2013) find that most users and designers are able to describe all sorts of positive and negative experiences and feelings on a meaning-related level, but describing how an interaction feels is more difficult. This is where the “Interaction Vocabulary” comes to use.
Related work
Throughout the “Interactivity” course at Malmö Högskola, the vocabulary has proven its effectiveness and helpfulness in design work. It was first experienced in an exercise where groups were given attributes to present through a mini video, in which the attribute and its opposite were to be revealed in practice. Through this, the participants could connect the attributes to existing designs and products and see them in a different way than they otherwise might have. An example with the attributes stepwise and fluent was shown with a lamp and a dimmer switch. When a dimmer switch is used, users get the feeling of autonomy and that they can change the light the way they want it; it is a fluent interaction where users don’t have to stick to predefined values. With a normal flip switch, the interaction could instead be described as stepwise: one simple step with a clear structure and goal, which in this case is to turn on the light. This exercise showed how the vocabulary can help designers look at existing products and designs and reflect on what kinds of interactions and feelings users actually engage through. Different ways of turning a lamp on and off are interactions that could otherwise be hard to describe, but with the help of the “Interaction Vocabulary”, these interactions and the feelings they evoke can be noticed and put into words.
In the example above, the vocabulary was used in relation to existing products and designs. However, as mentioned previously, the vocabulary can be a tool for different stages of a design process. In Module 3 of the “Interactivity” course, a design work started off placed in and described with the attribute spatial separation, which can be described as not feeling like a part of the artefact, a feeling of distance. To move away from the initial concept and open up new questions and design possibilities, the concept was reflected upon as if it were described with the opposite attribute, spatial proximity. Spatial proximity can be described as personal contact: the feeling of relatedness, safety and being a part of something. Doing this helped us see the idea with new possibilities. The fundamental idea was to create something inviting that users would want to interact with. When the attribute spatial proximity was used, it was easy to make it inviting, resulting in users feeling a relatedness to the design. Changing the attribute to spatial separation made users see the design as an “individual” that controlled itself, even though the inviting notion still applied. By looking at the idea through a different attribute, the design ended up as a different and separate one from the first. What started as the same fundamental idea became two completely different designs, just by changing the attribute to work from.
The limitations of the “Interaction Vocabulary”
Like any other tool, service or practice, the “Interaction Vocabulary” is not perfect or flawless, and it has its limitations. The attributes are not deterministic, and a particular interaction may not always be experienced as the attributes describe it. Likewise, the meaning and experience of the attributes may differ from person to person. As an example, take the meaning of the colour red. It grabs attention and may be used as a warning sign, but one can still like the colour red and not understand it as a warning signal (Lenz et al., 2013). The same problem can occur with the interaction attributes: something that one person experiences and can describe with a certain attribute might not be experienced the same way by another.
It can also be discussed whether the attributes might work opposite to their purpose and constrain the designer. They could possibly tie the designer to the chosen attributes in a faulty way, so that the designer unintentionally doesn’t consider other aspects of how interactions could be designed and used. In that sense, the attributes can both open and close further thinking and analysis of a design, limiting the designer. While the intention of the “Interaction Vocabulary” is to be used as a tool to understand and describe how interactions are experienced, it might make designers fixate on the attributes, resulting in them not thinking outside the box. What if relevant attributes or notions of interaction are found that are not provided in the vocabulary by Lenz et al. (2013)? If designers limit themselves to the attributes from the vocabulary, will they ignore other relevant notions found during the design process, just because they are not in the provided vocabulary?
Conclusion
This essay has touched upon the importance of interaction aesthetics and how the “Interaction Vocabulary” can be used as a tool to help designers open up and explore the design space, but also to narrow it down when needed. Furthermore, it has discussed the possible limitations the vocabulary has.
As technology develops and places fewer constraints on what is possible, interaction aesthetics grow in importance. It is no longer all about the visual aesthetics; the focus also needs to be on the aesthetics of the interactions. Artefacts also need to “feel good to use”. But to achieve this, a common language for designers is necessary.
Whether or not the described “Interaction Vocabulary” is the perfect tool for that purpose, it is a start to build upon and develop further. Even though it has its limitations, the vocabulary works well as a guiding tool for understanding interaction aesthetics. With the vocabulary, designers are able to describe felt differences between forms of interaction. It can help designers see and acknowledge feelings and experiences in the interactions that often otherwise go unnoticed. As described and shown in the related work, the vocabulary can help designers see the interactions at work and put them into words, but also open up or narrow down a design space. In conclusion, the “Interaction Vocabulary” is significant for interaction design practice.
References
Lenz, E., Diefenbach, S., & Hassenzahl, M. (2013). Exploring relationships between interaction attributes and experience. Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces (DPPI '13), 126–135. doi:10.1145/2513506.2513520
Diefenbach, S., Lenz, E., & Hassenzahl, M. (2013). An interaction vocabulary: Describing the how of interaction. CHI '13 Extended Abstracts on Human Factors in Computing Systems (CHI EA '13), 607–612. doi:10.1145/2468356.2468463
2017-10-30
I think this module was kind of hard to grasp and know what to do with. Since it was so wide and open to interpretation, as long as we worked with the servo, I think we in a way ended up not having a clear question/topic to work and reflect upon. We found in our work that we mostly experimented with autonomy, but our scope was still a bit wide; we should have pinned down some more narrow questions and probed deeper with our sketches.
Our main focus has been on the radio, since that’s the one we made completely autonomous. One thing we could have done better there was to make the antenna work in a different way. The feedback we got was that the antenna felt a bit out of sync. Our initial thought was that it was supposed to look as if it was searching for channels, and when it stopped it had found a channel and stuck with it. Maybe to increase that feeling we could have had the antenna stop at different positions; now it repeats the same movement over and over again. In future work, I think it is wise to stop ourselves and try to look at our sketches from an objective point of view; otherwise it is easy to get stuck with your own ideas and with a solution that may not be what we are actually aiming for.
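A rough sketch in TypeScript of what those randomized stops could look like; a minimal sketch, assuming a sendAngle() helper as a stand-in for however the angle actually reaches the servo (in our setup it went over serial to the Arduino):

    // Sweep the antenna back and forth, occasionally pausing at a
    // random position so it reads as "searching... found a channel".
    function sendAngle(angle: number): void {
      console.log("servo ->", angle); // placeholder for the real servo output
    }

    const pause = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

    async function sweepWithRandomStops(): Promise<void> {
      let angle = 0;
      let direction = 1;
      while (true) {
        angle += direction * 2;                     // 2 degrees per tick
        if (angle >= 180 || angle <= 0) direction = -direction;
        sendAngle(angle);
        if (Math.random() < 0.01) {                 // now and then: "found a channel"
          await pause(2000 + Math.random() * 3000); // hold still for a while
        }
        await pause(30);                            // roughly 33 ticks per second
      }
    }

    sweepWithRandomStops();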
We used the attribute spatial separation to describe something feeling autonomous. I don’t think it necessarily needs to be spatially separated from a user; rather, the feeling of separation is what creates an autonomous feel. Sure, it can be something you hold in your hand, and it can still be autonomous. Like Siri on iPhones: you get the feeling that it is its own thing and you feel separate from it, but you can still hold it in your hand and move it around.
2017-10-25
To sum up what we have done this module: we found ourselves exploring the reactive-autonomous spectrum of interacting with something, mainly the autonomous side, because we found that most interesting. We started off by trying out and exploring the servo by manipulating background colours and shapes. We noticed that many groups did this, and we wanted to do something a bit different and abstract. So we started trying out how we could control the servo from different perspectives and inputs. Our first experiment was the dog sketch, where it wags its tail if you smile at it or approach it with your hand. It also pulled its tail back if you tried to pull the tail. With this sketch we framed it as a dog: we wanted to explore the qualities of interacting with a “dog” and apply them to something completely different. We stopped ourselves to think about what we were doing and to connect the experiment with attributes. I think the attributes really can help with pinpointing what kind of experience we are trying to design, both at the beginning of a design, pointing out what we want to do, and in the middle of the design work, realizing what we are doing, which can open up new questions and wonders. This is where we found ourselves on the reactive-autonomous spectrum. The reason is that the movements we created in the dog were reacting to us, but we didn’t control which movement it did, which moved it toward the autonomous end. From here we wanted to move on and create something completely autonomous and explore what that behaviour of an artefact means. We felt that autonomous design could be applied in all sorts of areas, from service design to robotic design, and we wanted to explore how we could apply those aspects to different artefacts.
We went on and did the radio experiment. We chose the form of a radio since the radio has been around for decades and it’s something we all know how it works. For this experiment, our guiding question was: “How can we make this autonomous, what behaviour would it have, and what experience and feelings does this evoke in the users?”. We created its independence by disabling the users from changing the channels of the radio. The radio gives the impression of not wanting to be disturbed and can even “speak up” and warn users if they approach it with their hand. This could create the sense of not wanting to disturb it, either out of fear of or respect for the artefact that is autonomous. We wanted it to be obvious that it is its own object and lives in its own kind of bubble. We created this by having it change channels by itself, which also can increase the fear of interrupting it.
To sum up the main insights on autonomy:
For us, autonomy was mainly interesting because of the sort of experience you have to create and what it could deliver in the end. It was interesting because the experiences you could create for the users were broad and could be as abstract or weird as you want. We are the ones designing and creating the experience, and the experience was that something would look like its own living thing, in a way. We are not really creating a product that is independent, like AI; we are creating the experience and feeling of something being independent and autonomous.
By doing these experiments and discovering the materials, we found that when we added our “own” attribute of autonomous, it consisted of several other attributes that could describe something that’s autonomous. We saw this work as a recipe, where we could experiment with the ingredients by adjusting and testing different attributes together with autonomous.
2017-10-23
For this module we have found ourselves working in the spectrum of reactive-autonomous. The dog in a way ended up in the middle of the spectrum, since it is both reactive to the user's known movements and doings, and at the same time autonomous, reacting in a way of its own. So after we felt that we had gotten everything out of the dog, we moved on to the radio to put ourselves on the autonomous side of the spectrum. We wanted to work more with the extremes of the spectrum to get more intense and vivid experiences. So we made the radio, which works exclusively on its own and doesn’t allow any input from the users. It still in a way invites the user to interact with it by looking like a radio, but behaves the opposite of what the user would expect. This creates a feeling of distance to the artefact, where the interaction, or lack of inputted interaction, creates certain feelings and experiences in the user. When we felt that we couldn’t get any more out of the radio and the autonomous side of the spectrum, we moved on to a sketch exploring the reactive side. We wanted to make a sketch that relies purely on the user's input, to see what experience and feelings that would give. To try this notion out we created a “cannon” on the screen which shoots balls at an interval. The user controls the cannon's shooting direction by turning the servo. This creates a feeling of connection for the user: you know exactly what you are doing, which gives a feeling of safety. The user becomes more a part of it, the opposite of the feelings the radio gives.
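A minimal sketch of the cannon logic, under the assumption that the servo's physical position reaches the browser somehow; readServoAngle() below is a hypothetical stand-in for that input, not our actual plumbing:

    // The angle read from the servo steers the cannon, while balls
    // fire on a fixed interval regardless of what the user does.
    function readServoAngle(): number {
      return 90; // placeholder: the real value would come from the hardware
    }

    interface Ball { x: number; y: number; vx: number; vy: number; }
    const balls: Ball[] = [];

    // Fire a ball every 500 ms in whatever direction the cannon points.
    setInterval(() => {
      const radians = (readServoAngle() * Math.PI) / 180;
      balls.push({ x: 0, y: 0, vx: Math.cos(radians) * 5, vy: Math.sin(radians) * 5 });
    }, 500);

The split matters: the user owns the direction (physical input) while the machine owns the firing rhythm, which is part of why the interaction still feels safe and connected.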
So why is it a nicer/more soothing feeling to do something physical and see the feedback digitally, rather than doing everything digitally? With the cannon, we control it by twisting the servo rather than using our mouse as a controller. I would compare this experiment to games where you shoot balls and control the direction with your mouse or keyboard. Using the servo gives a different kind of feeling to the controlling interaction. I think you feel more connected to it, like you’re turning the thing yourself, and you can feel a kind of resistance that you don’t feel with a mouse or keyboard, where you only give vague commands. By using the servo we connect the physical feeling of turning something to the digital feedback where it actually turns.
We have focused a lot on what experience and feelings we evoke in the users when it comes to the interaction. We have wanted to explore the actual interactions and the experiences they give the user. The focus hasn’t been on the “concepts” we have made; they have only been intermediaries to explore the interactions. In some way we had to create the interactions in relation to something, as long as we keep the focus on the interactions and the outcome and insight from that, rather than on whether the “concept” is good or bad. We feel that using one attribute can change the other attributes. To refer to the cake Clint has talked about: one ingredient can change the whole taste of the cake.
2017-10-20
We decided to drop the dog sketch and move on to explore a new sketch, since we feel that we got everything out of the dog sketch that we could. We started to discuss how we could make something that invites people in. We chose to try to make something autonomous and separate that controls itself. What if we used a daily life object that everyone knows and made it do the opposite of what you expect? What if it in a way invites people in but doesn’t let you do what you want to do? We started by making a kind of radio. The radio shuffles between different radio stations (visualized by music videos shown on the computer), where the servo is the antenna that searches for stations. The radio invites the user to change station by changing the direction of the antenna, but when the user's hand comes closer to do it, the radio gives the user a warning. The radio is its own thing and decides for the user what music to play.
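A minimal sketch of the warn-on-approach behaviour; readProximity() and playWarning() are hypothetical stand-ins for the sensor reading and the audio warning, not the actual code from our sketch:

    // The radio's "don't disturb me" rule: a proximity reading below
    // a threshold triggers one warning per approach, re-armed once
    // the hand is withdrawn.
    function readProximity(): number { return 100; }          // placeholder, in cm
    function playWarning(): void { console.log("radio: back off!"); }

    let warned = false;
    setInterval(() => {
      const distance = readProximity();
      if (distance < 15 && !warned) {
        warned = true;      // warn once, not on every tick
        playWarning();
      } else if (distance >= 15) {
        warned = false;     // hand withdrawn: re-arm the warning
      }
    }, 100);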
In the text “Exploring Relationships Between Interaction Attributes and Experience” by Lenz et al. (2013), they talk about the Why, which focuses on what makes use meaningful for people and the psychological needs and emotions that emerge through a certain activity. In a way the radio could be explained with the attribute inconstant. Inconstant: liveliness, suspense, you can’t adapt yourself to it, unreliable, chance as an idea generator. The radio invites the user to interact with it, but in turn surprises the user and creates challenges. Making it more autonomous adds “life” to it. By referring to the How that Lenz et al. also talk about, we can see how the experiences emerge from interactions and connect that to the Why. Because the radio acts in the opposite way to what the user expects, it evokes feelings that make us more aware of something. I think we become more aware of things when they don’t act or work as we want them to, and we become annoyed by it. But it makes us think more about the interaction and what its output is, since the interaction doesn’t become fluent and the user loses the feeling of being in control.
2017-10-18
We decided to move on and make a kind of dog with the servo, where the servo would be the tail. The “dog” starts to wag its tail either when you smile, using face detection, or when you approach it with your hand, using a proximity sensor. We also added that if the user pulls its “tail”, the dog pulls the tail back.
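A minimal sketch of the tail rules, with the three inputs reduced to hypothetical boolean readings (smileDetected(), handNearby() and tailPulled() are placeholders, not the actual face-detection or sensor code):

    function smileDetected(): boolean { return false; } // placeholder input
    function handNearby(): boolean { return false; }    // placeholder input
    function tailPulled(): boolean { return false; }    // placeholder input

    type TailState = "still" | "wagging" | "retracted";

    function nextTailState(): TailState {
      if (tailPulled()) return "retracted";  // pulling overrides everything
      if (smileDetected() || handNearby()) return "wagging";
      return "still";
    }

    setInterval(() => console.log(nextTailState()), 100); // poll ten times a second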
So what does this give the user? What feelings and experiences does it evoke? If I were to refer this to the design attributes, I would use spatial separation and gentle to describe this sketch.
Spatial separation - Not feeling as a part of it, feeling of distance.
Gentle - Carefulness, awareness, appreciation, making a relationship with the thing (being gentle with it), being a part of it, revaluation of the action, raises the quality, allows to perform a loving gesture.
With the dog, the user is not really in control of what happens; it’s more like the dog itself reacts to the user as a person and what the user does. Users become aware of the dog and that it behaves depending on them, but it doesn’t invite them to interact with it. The dog really becomes a separate thing that the user doesn’t feel any part of. It simply becomes a robotic dog that we could try to perfect, but then we wouldn’t get any new insights or a new experience.
I think it is hard to pin down what interactions or properties make a design good. Clint described this in a good way by comparing a design to a chocolate cake. When we have a cake, it is a finished thing, and when we are asked what makes it a good cake, it can be because it’s gooey, soft or crunchy. Then we ask: what is it that makes it gooey, soft and crunchy? And how can we apply that when we want to make delicious muffins, for example? I guess it is about dividing something into abstract attributes and properties to understand why something has a nice and good design. So, when I try to do this with the dog we have created, I don’t really feel that we have any valuable interactions for the users. Yes, it is a fun thing to create and have, but it doesn’t have any values to it. Maybe if it was implemented in software on the computer, so it would wag its tail when you did something good on the computer for example, it would give a different kind of value. Then it would evoke feelings and awareness in the user. But still, it doesn’t invite the user to interact with it; it is still its own thing and behaves on its own.
2017-10-17 / Module 3 Start
If I thought it was hard to grasp what the other modules were about and what to do, it is nothing compared to module 3. Basically we got the Arduino as a starting point, and we should choose a topic to work with. There are a lot of interesting topics we have worked with that I could consider, but the hard part I feel is implementing them with the Arduino. I think it is easier (and not in a positive way) to end up with concepts. That’s a problem I’ve been struggling with throughout this course: to do experiments and investigate interactions rather than coming up with a concept and testing how we interact with that. I feel I succeeded better in module 2 than in module 1 at keeping the experiments abstract, and I need to keep that abstractness in module 3.
We started off by trying to get to know the material and take it from there. We had a hard time grasping what topic we wanted to work with, so we figured that if we got the initial code to work, we would figure out the rest on the way. What we ended up with was code that could track the movements of the face and control the servo from that. So you are controlling the servo with your head movements; the servo is now kind of mimicking them.
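A minimal sketch of that mapping, assuming the tracker hands us a normalised horizontal face position; faceX() and sendAngle() are hypothetical stand-ins for the tracker output and the link to the servo:

    function faceX(): number { return 0.5; }                        // placeholder: 0..1 from the tracker
    function sendAngle(angle: number): void { console.log(angle); } // placeholder output

    setInterval(() => {
      const clamped = Math.min(1, Math.max(0, faceX()));
      sendAngle(Math.round(clamped * 180)); // left edge -> 0 degrees, right edge -> 180
    }, 50);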
But how do we as users feel about this, and where do we take it from here? What can we do with this? We should consider what experience and emotions we want to evoke in the users. Right now our test doesn’t create any experience for the user. We could use different materials or attachments to give the users different feelings. Maybe we could think about how it is mounted. It doesn’t need to stand flat on the table like it does now; maybe it could be mounted on a wall and give the users one kind of experience that way. We could tweak different parameters or introduce different layers to the test. We could make it so the user doesn’t see it right away, like something that pops out. Put it in the background or foreground, or make it more independent/implicit.
We also got it to work with facial expressions, so when we smile it starts to move.
So far, we have only thought about the servo as an output, not as an input. Maybe we should tinker more with how it could be used as an input.
2017-10-12 / Module 2 wrap up
When we were preparing for the presentation, Martin and I sat down and tried to wrap our insights up and see what general insights and answers we had gotten from all our sketches. We went back to the questions again, but tried to answer them in general for this module rather than for each of our sketches as we had done before.
How can we work in alternative ways of making spaces on small screens?
Can tilting be an alternative interaction for creating space on small screens?
Yes, it's a realistic alternative, but we see some issues for it to be commonly used.
The drop issue, and activation by accident
It's an unfamiliar interaction for most users of handheld small screens.
How can we create a sense of feeling at home?
We have worked with the sense of where and with different ways of making users feel at home. We have tried different ways of controlling the sense of feeling at home, some of them listed here:
Partially hidden content
Arrows
Hidden content with depth
What we learned is that feeling at home is about expectancy, a sense of control/consistency, and previous knowledge. It’s not about a gesture not feeling natural as an interaction, but about knowing when and how to use it. This relates to the example with the book, where it’s a familiar gesture from the physical world. So we're not sure indicators help when you are already familiar with the interaction, but a partly hidden object is a more effective way of indicating content or space.
Goal
We feel that we in a way have connected the physical space with screen space. We use the physical space for the actual tilt interaction, and then we interact with the screen using pinch to work with depth. But questions regarding depth that we are struggling with are still open: is it just a visual effect, or are we creating depth and space?
The presentation today went well, and I feel that we managed to get an interesting discussion going. We passed our phones around with the most recent sketches so people could try them out themselves, but we also had a simple presentation with gifs so we could easily show the iterations we had done during this module.
But in what way are we creating space? I guess we have been trying to work with depth and with space beyond what we see on the screen at first glance. With our first sketch, the main part is hidden on the side, so it almost is like it is somewhere on the side where we can’t see it, but we still get the feeling of it being there, available for us to use. And while we have been trying this, we have also battled with the notion of keeping the sense of feeling at home, or at least discussed why/why not something has the feeling of home. We have also used different kinds of indicators, such as arrows, to indicate the space. Daniel and Victoria came up with a good idea: instead of indicators like arrows, we could work with sound or vibration feedback, something that is already commonly used today. That would have been an interesting thing to try out if we had had the time. I think we would have gotten a lot of valuable insight, and possibly new questions, if we had tried that out.
One of the comments we got from Clint was that we should avoid using traditional indicators such as arrows, or putting labels on them, because that could make the sketch lose its abstractness. Maybe we could have worked more with how the actual content itself could be used as an indicator and seen where that would have led us. As Jens said, maybe we should have stayed longer with one of the sketches and worked out some of the technical issues we had, or dug deeper and expanded that sketch. We could have added optical semantic zoom, or content changing with zoom level, to one of our sketches and tried how that could have worked together with tilt.
One interesting discussion note that occurred while Daniel and Victoria were presenting, which also fits our experiments, is that interaction with the actual hardware through tilting, or in their case orientation, is a way of creating and navigating screen space with a different physical gesture than we usually use today. This could have a good impact from a health perspective, since most interactions today involve finger gestures, which can become quite static. There is even a term, “smartphone thumb”, for a condition caused by typing with our thumbs on smartphones, involving abnormal thumb bone movements (http://www.dailymail.co.uk/sciencetech/article-4552760/Smartphone-thumb-major-problem-US.html). So by implementing and introducing new ways to interact with screens, maybe we could design and create a healthier variation of interactions with screen space.
Reflections on my own work
For this module I really tried to move away from ending up with a concept. In module 1 I didn’t reflect enough on what we had done and how we could have moved forward. I focused too much on what we were doing and missed out on the so what and then what. As said, in module 1 we pretty much ended up with a concept and missed out on understanding the material and the interaction we were trying to investigate. So for this module I really tried to stop several times in our work and ask myself what we were trying to figure out with the experiments and sketches we were doing. And also to not end up with a concept, which I think we managed by not labelling things or using too many obvious indicators, even if we should have used even fewer. Sure, we had a “label” on each sketch, but that was pretty much only to know which sketch was which. When we discussed and reflected on our sketches we were not stuck on the “label” we had put on them from the beginning. I think the questions we came up with in the beginning really helped us in our work, because then we always had questions we could go back to and reflect upon in relation to our sketches; it made it easy not to end up with a concept that way.
2017-10-09
Today we decided to go back to the questions we put up on day one and go through our sketches with our questions.
Sketch 1 – Tilt menu
How can we work in alternative ways of making spaces on small screens?
We have progressed in this artefact by making it more fluid, but we haven't really tried other ways than tilting for this sketch. That's because this experiment builds upon the physical gesture of tilting. One thing we discussed for this module was to bring it out into some other context or area to get new ideas, but the starting point was to limit ourselves to small screens and how we could work with enhancing space. So we could not see how that could be done, since this idea depends on the phone's accelerometer.
Are there physical gestures other than tilt that could be used? How can we create a sense of feeling at home?
We have kind of stuck to experimenting with tilt for this module, but we have tried different ways of tilting. Some interactions are more fluid, while some are more direct. The first one we did was more fluid, but we had problems getting it consistent and getting it to stick in the outward position. The next iteration was more controlled but got a kind of on/off feel to it and was not so interactive. So for the third one we went back to a more fluid approach, but the tilt needed to be more precise in order to work.
We have discussed how to use different angles to tilt the phone. We chose tilting the phone to the sides because it felt more natural and more controlled/consistent. We discussed tilting the phone forwards and backwards, but we didn’t think it would fit to mix all angles when making space, because it could become confusing and you might lose your sense of feeling at home. And we could not see what hidden spaces in all directions would add for this sketch.
So how does this sketch deal with the sense of feeling at home?
First there is the search-through-a-book gesture, which we felt was a natural gesture to bring to a hardware device.
Then we got inspiration from the SL app, which indicated that something more was hiding in a certain direction by showing just a small section of it.
And then we asked ourselves: is this enough, or should we try to make it even clearer? So we added an arrow showing the direction, and also indicated that you could tilt back to make space for other things on your small screen.
We asked our fellow students Emelie and Robin to try out our sketch. Robin had initial difficulty understanding that he should use the tilting gesture; his first thought was to swipe. And does the arrow make it clearer how to use it? They both felt that the arrow was not clear enough with the ring, and they also thought that maybe you don’t need an arrow at all once you have been introduced to our kind of tilt interaction. It’s just not clear the first times you use it. And maybe the arrow takes up valuable space. So one idea now is to change the arrow to something clearer, and also to add a kind of introduction div for the tilt interaction, as an instruction or animation the first time you use it. And if that is enough, then maybe the arrow is not needed at all.
What are standardised ways of guiding the user to explore the space?
This is maybe the biggest challenge: we have all these standards for arrows, shadows etc., but interacting with the actual hardware will have to be established on a larger scale for it to be 100% intuitive. Tilting is not widely used today, so people are not familiar with how to use it. We figured that introducing instructions on how to use it could be a way forward.
Goal: we want to explore the possibilities to connect the physical spaces with the digital spaces. One way of doing that is to use movements or gestures that could possibly interact with the phone as a physical hardware object rather than a glass screen as an interaction possibility. Kind of “becoming one” with the phone.
We have used gestures in a good way, while our challenge was to create a sense of feeling at home with this interaction. We feel that it might not actually be about the gesture or interaction itself not feeling natural, but about knowing when and how to use it.
Tabs
How can we work in alternative ways of making spaces on small screens?
This is a more organic approach to tilt interaction, inspired by tabs when opening different software/apps. What could we do with this interaction when combining it with tilt?
To connect it with the metaphor of books: like when we see books stacked on a shelf, another way could be our earlier thinking of viewing a book from the side to display different worlds. With this experiment we tried to work with depth.
As Bollnow writes in Human Space on page 31, “The difference between the terms space and place; for places necessarily lie side by side, while spaces (that is, ‘topoi’ in the Aristotelian sense) can lie within each other, a smaller space within the larger surrounding space”.
That really points out what we have been thinking about with this sketch. You can layer space, you can create depth in space, by actively pinching or tapping down to another layer or dimension of space. You could argue against this as just peeking into, not really entering, another space.
Bollnow writes in Human Space on page 30 that it is interesting how the old Greek language uses “chora” for space, from the word “choreo” that means to give room or make space. It’s a little abstract, but that’s what we’re trying to ask questions about and hopefully get some answers to. This thinking occurred so long ago but can still be applied to screen spaces.
Are there physical gestures other than tilt that could be used?
Are there other physical gestures or other natural interaction possibilities to apply to technology? Could we use voice in the same way as this game? It maybe wouldn’t work in a social environment, but it would be fun to try out, a kind of critical design. We discussed a mobile game where you control a character by levelling your voice/screaming at the phone. What if we used the same technique for everyday tasks on the phone?
Today we can work with voice commands, but that interaction is more or less tried out by others, the same as our normal screen interaction with fingers.
How can we create a sense of feeling at home?
I think we create a sense of feeling at home by it responding very fluidly. It’s playful and there is almost something magical about it, as in the Lenz attributes covered and uniform.
Covered: magic, excitement, exploration, action-mode, witchcraft, deeply impress somebody.
Uniform: Influence by intuition, control un.
When you tilt, it kind of gives the feeling of looking at what’s underneath. Almost like in real life when you lift and tilt something to see what’s underneath it, for example lifting a book. That association maybe increases the sense of feeling at home.
Goal: we want to explore the possibilities to connect the physical spaces with the digital spaces. One way of doing that is to use movements or gestures that could possibly interact with the phone as a physical hardware object rather than a glass screen as an interaction possibility. Kind of “becoming one” with the phone.
As said earlier, when we in real life lift something and tilt it to see what’s underneath or inside, we do the same thing when tilting the phone in our sketch to see what tabs we have. We feel that it is a strong connection to the physical world of interacting with objects rather than screens.
Scrolling through spaces
In searching for ways of using tilt and working with space, we tried to use tilt for scrolling/jumping between different spaces. This proves effective when trying to dive into a great amount of space, or, as in this example, images. Imagine that you would like to scroll through a vast number of pictures, say to find a picture way back in your library. We found this idea interesting, but when working to enhance the interaction we found that our other two ideas had more dimensions to explore.
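A minimal sketch of how such a tilt-to-scroll mapping could look in a browser, where the sideways tilt becomes scroll velocity; the dead zone and gain are made-up values, not something we tuned:

    // Steeper tilt flies faster through the library; a small dead
    // zone keeps the list still when the phone is held roughly flat.
    let offset = 0;   // current scroll position in pixels
    let velocity = 0; // pixels per frame, set by the tilt

    window.addEventListener("deviceorientation", (e) => {
      const gamma = e.gamma ?? 0;                      // sideways tilt, -90..90 degrees
      velocity = Math.abs(gamma) < 5 ? 0 : gamma / 5;  // 5-degree dead zone
    });

    function tick(): void {
      offset = Math.max(0, offset + velocity);
      window.scrollTo(0, offset);
      requestAnimationFrame(tick);
    }
    requestAnimationFrame(tick);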
2017-10-05
We have applied pinch to our tab sketch to, in a way, enhance the interaction to include both screen interaction and physical interaction. This screen interaction opens up a new space where you go deeper into a tab in the stack. But we still wonder if we are just peeking through spaces or if we are actually making depth in space. Or are we just making a nice visual effect that doesn’t give the user any valuable experience? What should we try, then, so it doesn't become just a visual effect? I think we make it more of a space by making it responsive to different commands. Like the pinch function we have already added, which lets us peek into what’s on that specific tab. Maybe we could add tap, to tap into that tab and jump between the spaces. That could take away the “visual effect only” feeling, since you can actually do something with it and it serves a purpose.
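A minimal sketch of how the pinch detection could work with plain touch events; the scale thresholds are illustrative, not the ones from our sketch:

    // Track the distance between two fingers: growing distance reads
    // as "peek into the tab", shrinking as "drop back to the stack".
    let startDistance = 0;

    const fingerDistance = (t: TouchList) =>
      Math.hypot(t[0].clientX - t[1].clientX, t[0].clientY - t[1].clientY);

    window.addEventListener("touchstart", (e) => {
      if (e.touches.length === 2) startDistance = fingerDistance(e.touches);
    });

    window.addEventListener("touchmove", (e) => {
      if (e.touches.length !== 2 || startDistance === 0) return;
      const scale = fingerDistance(e.touches) / startDistance;
      if (scale > 1.3) console.log("pinch out: peek into the tab");
      if (scale < 0.7) console.log("pinch in: back to the stack");
    });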
For the tilted menu we removed the titles we had. We also thought about enhancing the sense of where by adding an arrow. We wanted to see if the arrow helps the user sense the menu hiding away from the screen, or if it is just excessive; maybe it doesn’t add anything to the experience. We also made the transition more fluid and soft, to gain the feeling of something floating in from the side rather than just popping out. This could help give the users the sense of actually bringing the menu onto the screen, like it floats in when they tilt the phone. Otherwise, if the menu just pops out when the phone is tilted, we lose the sense of control over the menu and it would just be like pressing a button to bring something out. We would lose the connection between the user’s physical interactions and what’s happening on the screen if it just pops out.
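A minimal sketch of that floating-in feel: instead of snapping the menu between shown and hidden, ease its position a fraction of the way toward the target every frame (the element id "menu" and the 0.15 easing factor are assumptions for illustration):

    let target = 0;   // 0 = hidden off-screen, 1 = fully shown
    let position = 0; // where the menu actually is right now

    function setMenuVisible(visible: boolean): void {
      target = visible ? 1 : 0; // called from the tilt detection
    }

    function animate(): void {
      position += (target - position) * 0.15; // ease toward the target
      const menu = document.getElementById("menu");
      if (menu) menu.style.transform = `translateX(${(position - 1) * 100}%)`;
      requestAnimationFrame(animate);
    }
    requestAnimationFrame(animate);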
2017-10-03
We have continued to try out experiments with the accelerometer on the phones. We have made two more sketches:
- We have full-screen tabs stacked, which you can tilt to check the other tabs underneath. We got inspired by the function Samsung has where you can check open programs and scroll between them. It has the same kind of tilt interaction as the sketch where you tilt to toggle a kind of menu on the side. This becomes a limitation as well, but it creates less confusion about the options and what it does; it does feel a little bit flat, though.
- One idea we tried was to change the full-screen content through the tilt gesture, kind of jumping between programs or tabs. But what experience of the interaction does this give the users? The risk is that they lose the feeling of home, since you can’t really see where you are or what content you are changing to. This idea is kind of reminiscent of the function MacBooks have where you can slide between contents. A problem with this is that you have to “get to know” this interaction before you feel at home with it; it's not intuitive in the same way. Another problem we found was that we could not find a way to make it fluent: a technical problem where the phone flipped through the photos too fast (see the sketch below for one possible fix). We wanted a more fluid, calm experience. Still, the speed wasn't necessarily a bad thing if someone wanted to flip through a large number of pictures in a short time, something that is not really possible in any solution today.
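The sketch mentioned above, of one way the too-fast flipping could be tamed: only flip once per threshold crossing, with a cooldown before the next flip, instead of flipping on every orientation event (the 30-degree threshold and the 600 ms cooldown are illustrative guesses, not tuned values):

    let lastFlip = 0;

    window.addEventListener("deviceorientation", (e) => {
      const gamma = e.gamma ?? 0; // sideways tilt in degrees
      const now = Date.now();
      if (Math.abs(gamma) > 30 && now - lastFlip > 600) {
        lastFlip = now;           // at most one flip per 600 ms
        console.log(gamma > 0 ? "next picture" : "previous picture");
      }
    });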
But we need to go deeper into these sketches and experiment with different ways of indicating space, and see if we can affect that space with screen gestures too, like pinch, swipe etc. Not just using physical gestures, but maybe combining them with screen gestures. We should think about how we experience space, like how space is structured in, for example, a book, which we used as a metaphor before. With a book you have a front and back with pages, each of which opens a new space (page). This could in different ways be related to all our ideas, especially the one with the tabs stacked on each other. With that sketch you kind of do what you do when you look through a book from the side or check what’s underneath the book. But maybe we can expand that space by tap, drag etc.
One good tip we got from Clint was that we shouldn’t define our ideas by naming them, for example as menus. Doing so prevents us from being too narrow in the way we and others see our sketches. If you let them be undefined, it opens up interpretations and possibilities that you wouldn’t have if you had labelled them too early. This could open up new ideas for the first sketch we did, with the menu coming in from the side with tilt: it doesn’t necessarily need to be a menu. By not labelling it as a menu we can see the opportunities with it.
We should also take into consideration how we indicate the different parts of the design and how presence and indicators affect the orientation of home. For instance, with the menu bar sketch we show a little part of the menu on the side to indicate that there is something there to investigate. But could we enhance or decrease that experience? How would the user experience it if we used an arrow to show that it is supposed to be flipped out on the screen?
2017-10-02
We started to try out the phone's accelerometer by testing our first idea, which is showing and hiding a menu from the side of the phone's screen by tilting it.
We feel that there is almost something implicit about it: a kind of interaction that is connected to the real world in its movement, a familiar movement that can be connected to daily activities, like when you search in a book or try to shake something off your hands.
One issue we acknowledged early on was that the tilt function needs to be precise enough that it can’t be activated by accident when using the phone. And the tilt shouldn’t demand too much force from your hands and wrist movement, since then the risk is that you drop your phone, something we experienced when trying to make it work. But we do see this as a good design possibility for small screens. We want to try out and see the possibilities these kinds of gestures have. We feel that using gestures is an effective way of navigating on phones, since a common problem is that it is hard to reach the whole screen using just one hand. It's not widely used across apps, and one reason can be that it is easily misunderstood, or too unclear and untried, and too many users don’t know this way of interacting with a phone. This could prevent companies from trying to implement that kind of interaction in their devices.
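One way the accidental-activation issue could be handled in code is hysteresis: two different thresholds, so opening takes a deliberate tilt while small wobbles from normal handling do nothing. A minimal sketch, with the angles as illustrative guesses rather than tuned values:

    let menuOpen = false;

    window.addEventListener("deviceorientation", (e) => {
      const gamma = e.gamma ?? 0;  // sideways tilt in degrees
      if (!menuOpen && gamma > 35) {
        menuOpen = true;           // opening takes a deliberate tilt
      } else if (menuOpen && gamma < 10) {
        menuOpen = false;          // closing takes tilting clearly back
      }
    });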
Let’s go back to the questions and goal we had initially and see if we got anything out of this idea and if it relates to the assumptions we had.
How can we create a sense of feeling at home?
We sense that having a menu that you can interact with outside your glass screen, and that follows your hand and wrist movements, can help users not feel estranged and gives them a sense of gained control. This could increase the sense of feeling at home with the screen and its content. The interactions become more concrete than just interacting directly with the touchscreen. When we started to try out this idea, we put the menu on the left-hand side of the screen without much thought, which is not suitable since it gets hard to reach when you use only one hand. If the user is left-handed it would be more suitable to have it on the left-hand side, but if you’re right-handed it’s better to have it on the right-hand side. If this were to be used in industry, the side the menu is on could be changed in the settings.
Goal: we want to explore the possibilities to connect the physical spaces with the digital spaces. One way of doing that is to use movements or gestures that could possibly interact with the phone as a physical hardware object rather than a glass screen as an interaction possibility. Kind of “becoming one” with the phone.
With this idea we feel like we’re actually doing that. Going beyond touchscreen-only interaction gives it a more tool-like feeling, more of an extension of oneself. If we compare it to a physical tool, for instance a hammer, you get a direct response; even though we’re not hammering nails, it gives you the same kind of physical feeling. It makes the menu feel more like an actual object than just an animation on the screen.
How discreet can you make these kinds of guide tools to enhance space?
This becomes more discreet, because you’re not primarily interacting with the glass screen, and you can actually hide stuff.
2017-09-29
To start off this module we kind of just jumped right into it all. We started by going through the slides about spaces and discussed what we were interested in or fascinated by regarding the topic. We compared how the iPhone and the Samsung Galaxy solve space challenges, and we also took a look at the Apple Watch to see how it uses the limited space, just to get a grip on how spaces are approached today.
We set up some questions and goals we wanted to explore. From the slides, we thought the headlines “Making space” and “Screen space” were the most interesting topics. The questions we came up with were:
How can we work in alternative ways of making spaces on small screens?
Are there physical gestures other than tilt that could be used?
How can we create a sense of feeling at home?
Hide and show feature
Technologies of orientation
What are standardised ways of guiding the user to explore the space?
How discreet can you make these kinds of guide tools to enhance space?
Goal: we want to explore the possibilities to connect the physical spaces with the digital spaces. One way of doing that is to use movements or gestures that could possibly interact with the phone as a physical hardware object rather than a glass screen as an interaction possibility. Kind of “becoming one” with the phone.
One thing I think is important is to get users to feel at home with the space. We discussed what factors create this sense of feeling at home. It’s about continuity: that things are where you expect them to be and that things don’t differ too much. We want to see if we can use these factors in some way for our questions and goals. One interesting way to look at this is to ask what factors or attributes are used today. For example, the hamburger menu has become an obvious element that we all know, but what kinds of attributes could be used instead? How discreet can we make these kinds of guide tools to enhance the space? One example that we like is how the travel app from SL (Stockholm’s public transport corporation) is designed. We liked how it in a subtle way indicated that the menu continues below the visible screen. This inspired us a lot in the upcoming tests we did.
The apps from SJ, SL and Skånetrafiken have the same end goal for the users, but they achieve it in different ways. SJ, unlike the SL app, shows the whole menu at once with symbols. So we would like to use this as a kind of inspiration and investigate further how you can do the same thing in different ways. And you should not underestimate keeping things simple.
2017-09-26 / Module 2 Start
Space.
Initially, I think it was kind of hard to grasp what space is in relation to interaction design. We are supposed to approach space in relation to screens (2D), seeing the possibilities and constraints of digital space, but also approach it in the light of ordinary everyday space. How can we give users the notions of “feeling at home”, “being familiar with” and “dwelling in”, and what does that mean? It is a lot about being in, knowing, moving and working in digital spaces.
To do this, a way to start is to ask what lived space is. Lived space can be expressed as a place: a place is where we live and sometimes belong. Digital spaces are very much “a place” where we live. Today, we live a lot through digital spaces. Many do their work partially digitally, we connect socially digitally, and we engage culturally digitally by, for example, listening to music through Spotify. So dwelling is a central part which becomes important, since our lives are surrounded by digital elements. Dwelling is to feel at home and be familiar with something, or the opposite, when we feel estranged and out of place. It is important to understand digital spaces in terms of dwelling. It is important to make users feel at home with their digital devices.
To do this, we as designers need to think about how we can use the actual screen space, which is generally very restricted. I think that how we design for different screen sizes, and how the interactions are restricted to the screen space, is an important part of dwelling. If the screen space is not used properly, users might feel estranged. Can we make users feel more at home with digital spaces if we use physical gestures and elements connected to the digital space? For example, my bank's app has a function where I just shake the phone to display what they call “quick balance” (directly translated from the Swedish “snabbsaldo”) for the bank account connected to my debit card. This makes the interaction quicker than typing in my password and manually going to the right bank account to check the balance. It makes me feel more at home with the app, and it also feels more accessible, which I guess favours the notion of feeling at home with the app. I wish to explore how we can connect physical spaces with digital spaces, and whether this can favour the notion of feeling at home.
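A guess at how a shake gesture like that can be detected in a browser; a minimal sketch where the thresholds are assumptions, and a native app would use its own motion APIs:

    // Call it a shake when the acceleration magnitude spikes a few
    // times within a short window.
    let spikes: number[] = [];

    window.addEventListener("devicemotion", (e) => {
      const a = e.accelerationIncludingGravity;
      if (!a) return;
      const magnitude = Math.hypot(a.x ?? 0, a.y ?? 0, a.z ?? 0); // m/s^2, incl. gravity
      const now = Date.now();
      if (magnitude > 25) spikes.push(now);
      spikes = spikes.filter((t) => now - t < 800); // keep an 800 ms window
      if (spikes.length >= 3) {
        spikes = [];
        console.log("shake detected: show quick balance");
      }
    });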
When we move into a new apartment or renovate the kitchen, it takes a while until we get the feeling of being at home. This also applies to digital spaces, and what is interesting is how we can make it easier for users to feel at home in digital spaces, or to even reach that point at all. In this module I wish to go back to that question and relate it to the different challenges of creating digital spaces.
2017-09-22
The show n' tell of our work went quite well, I think. The sense and experience of safety was something we aimed for, and it seemed to be understood by everyone. We got some comments that I think were well pointed out, for instance that what we had was a cool concept, but was it an interesting experiment for implicit interaction? There was also a comment on our insights: observing that it is very subjective what counts as implicit, and that it is hard to take something explicit and make it implicit, was rather top-level, and we should go deeper and ask why we thought this way. Which I will try to do now…
Actually collecting data from implicit interaction and doing something valuable with it is difficult, since it is hard to imagine and know what people would do that could be implicit. For instance, walking through a door that opens automatically is not hard to come up with; it has been around for so long that we don't really think about such doors not opening automatically. But taking something else we do in our daily lives and making it implicit is harder. It can also be very subjective whether something feels implicit or not: some people might be very aware of how they interact with an automatic door, while others might not think about it at all. One thing mentioned during the presentations that I found interesting was whether something could begin as explicit, become a habit, and that way become implicit. I'm trying to find an example of that but can't think of any, yet it is an interesting question. It is different from taking something brand new that is implicit and trying to implement it in users' lives; they still have to get used to it and, in a sense, make it a habit. Just imagine when automatic door openers were introduced. People probably didn't know the doors were going to open when they approached, and got ready to open them by hand. Over time people got used to it, learned to recognise automatic doors, and therefore knew what was going to happen. In a way it became a habit, but it was implicit from the start. Maybe it's easier to first ask what makes something possible to become a habit. Like with all things, technical or non-technical, it becomes a habit when we do it over and over again. But making a technical product or an interaction implicit by making it a habit is a hard question we can dwell on. Maybe one way to look at habit and implicit interaction is that it is not only something that happens in the background, but also an interaction that has some kind of meaning. It is not only getting the system to do something in the background; it is the interaction and what users gain from it.
One interesting insight about implicit design is the question of how people would react to and perceive implicit interactions. They might feel creeped out, or feel disconnected since "I didn't do anything for this to happen". When something reacts to our emotions or movements, we could feel that the computer is one step ahead of us, maybe that it thinks for us, which some users might not like before they get used to it. There is a thin line between the computer leading you to something and you being in control of what's happening, depending on how you interact with it. When the computer is leading us, we could get the sense of things just appearing from nowhere and lose the sense of being an actual user, since we don't do anything intentional to make the computer act. Maybe this is something we would feel now, since implicit interaction is not implemented much in the technology we use in our daily lives, apart from automatic doors, soap dispensers and water taps. As Ju, Lee and Klemmer mention, a big challenge is to understand how users will interpret what is presented to them (Ju, Lee, Klemmer, 2008).
Something that’s also a big challenge with implicit interactions is the moral issues that comes with it. A technology that gathers information on your face, emotions and movements can make people feel uncomfortable and give them the feeling of “being watched”. But that is a feeling that is hard to work away from. There is no other technology that can do the same thing as this if you want to do something based on for example facial recognition. But it is something that is already being used today and are growing. The new iPhone X has facial recognition to unlock it, but at the same time it has sparked an online conversation about the moral issues. On the online network 9GAG for example there has been a lot of connections and jokes about NSA (see picture below for an example).
To go back to the feedback we got on our work, I agree that it became too conceptual. Having worked a lot with concepts previously, I think it was hard to get away from that thought process for this module, and it felt like we unintentionally ended up with work that was too conceptual. For the next module, I will try to take a step back and ask myself "What is interesting about this in relation to the topic?"; for module 1 that topic was implicit interaction. While we had the idea to display the activity within a public space to give feelings of safety, self-awareness and individuality, we should have asked how this is interesting for implicit design.
References:
Ju, W., Lee, B. A., & Klemmer, S. R. (2008, November). Range: exploring implicit interaction through electronic whiteboard design. In Proceedings of the 2008 ACM conference on Computer supported cooperative work (pp. 17-26). ACM.
2017-09-21
With a lot of tinkering, discussion and testing back and forth, we ended up with a final test result for the "interactive art" idea for a public area. We played a lot with the technology to get a good imprint on the canvas from people passing by, which was harder than it sounds. The best-looking result our coding skills allowed is shown in the gif below.
From a technical point of view, we simply wanted to recognise the shapes of people walking past the camera and capture them, so they would leave an imprint displayed on the wall with, for example, a projector. We wanted to display the activity and flow of people in a public area. I imagine this being set up in a tunnel or in the subway, where a lot of people pass by during the day. It would give people a sense of the activity in that area, which you may not get when you just walk through on a normal day; it can be hard to perceive and imagine how many people pass through in a day. It could also bring a sense of safety, since seeing how many people pass through the area gives the sense of not being alone. One hidden quality we noticed, which we did not intend, was that we became very aware of our posture when we walked past and saw it displayed on the canvas. In the same way you become aware of your looks in an elevator with a mirror, you become aware of your posture since the installation captures a still image of you. This could be both good and bad: you might worry about how others perceive you based on your posture, but it might also make you try to improve it.
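For anyone curious about the technical side: one common way to get this kind of imprint is background subtraction, where moving people show up as a foreground mask that can be accumulated onto a canvas. The sketch below, in Python with OpenCV, is a minimal illustration of that approach under my own assumptions (webcam input, MOG2 subtractor, simple noise filtering); it is not our exact code.

```python
# Minimal silhouette-imprint sketch, assuming Python + OpenCV and a webcam.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                           # default webcam
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

ret, frame = cap.read()                             # first frame sets canvas size
canvas = np.zeros(frame.shape[:2], dtype=np.uint8)  # black canvas for imprints

while True:
    ret, frame = cap.read()
    if not ret:
        break
    mask = subtractor.apply(frame)                  # foreground = people moving past
    mask = cv2.medianBlur(mask, 5)                  # clean up speckle noise
    canvas = cv2.bitwise_or(canvas, mask)           # silhouettes "stick" over time
    cv2.imshow("imprints", canvas)                  # a projector would show this
    if cv2.waitKey(30) & 0xFF == 27:                # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```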
There could also be an underlying tension with this idea: people might get the feeling of being unwillingly surveilled, since it doesn't ask the user for permission to capture their silhouette. We played with the notion of how our idea would be perceived and interpreted if it was more explicit, and you had to press a button to capture and display your movement. But then it would miss out on some of the safety aspect and the sense of activity, because it would not show everyone's movement. It would also be much more in the foreground, and users would probably take more notice than if they were just walking by and getting the feeling of being surrounded by many people and activities.
2017-09-12
During our work we had problems pinpointing and actually using implicit design in our ideas. At the beginning of our work we set some attributes we thought fitted implicit design, but at this point we felt we needed to revisit them, think them through again, and see if we should remove or add attributes. One attribute we added was incidental, which is an interaction that just happens and is almost not worthy of the user's attention (Lenz et al., 2013). In a sense, some of the attributes we had set up previously didn't really relate to implicit design. One we acknowledged didn't fit anymore was fluent, which is when an interaction feels autonomous and gives the user the power to change what's happening (Lenz et al., 2013).
So, with this in mind, we also revisited our initial ideas and the things we had done so far, and asked ourselves whether each was implicit or explicit design. We decided to use the diagram from Ju et al. (2008) to plot where our designs fitted:
The design where we change filters on a video by clicking
Filters on a video changing depending on the person's facial emotions
Controlling a setting with your head (VR-like)
Drawing on canvas with an object of a specific colour
The interactive art installation with colours
The only design we had worked with that was close to pure implicit design was the one where colours are drawn on a canvas in a public space, like an art installation. But as mentioned before, there is a technical issue in that it is quite hard for the camera to differentiate between colours. We decided not to continue down that path, but we still wanted to keep the main idea of the design, so we went back to brainstorming. We knew we wanted to create something that could be placed in a public area and objectify the activity there during the day. We created a new experiment where the camera snapped a picture at very short intervals and displayed the content.
The result we want is for people's silhouettes to kind of "stick" on the screen, so that as the day goes by and activity increases, the silhouettes keep adding on top of each other and thereby show the activity. The experience we want people to have is the same as before; we would just execute it in a different way. We also thought we could do it by making the camera detect movement from a person and, when movement is detected, draw lines of colour (a rough sketch of that variant follows below). We will continue to try these techniques out and tinker.
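As a thought experiment on the "movement draws lines of colour" variant: simple frame differencing can find where movement happens, and a line can be drawn between successive motion centres. The Python/OpenCV sketch below illustrates this under my own assumptions; the threshold values, the colour and the centroid approach are illustrative, not our actual implementation.

```python
# Minimal motion-trail sketch: frame differencing drives a coloured "pen".
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ret, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
canvas = np.zeros_like(prev)                       # colour canvas for the trails
last_point = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)            # what changed since last frame
    _, thresh = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    m = cv2.moments(thresh)
    if m["m00"] > 5000:                            # enough motion detected
        point = (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
        if last_point is not None:
            cv2.line(canvas, last_point, point, (0, 200, 255), 2)
        last_point = point
    else:
        last_point = None                          # lift the pen when motion stops
    prev_gray = gray
    cv2.imshow("colour trails", canvas)
    if cv2.waitKey(30) & 0xFF == 27:               # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```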
References:
Lenz, E., Diefenbach, S., & Hassenzahl, M. (2013, September). Exploring relationships between interaction attributes and experience. In Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces (pp. 126-135). ACM.