Additional journal supplement
During the course I set out to use common and well-established types of interactive objects as starting points, and thereafter attempted to create designs that intentionally subverted a user’s expectations of how these types of interaction might feel and function. In the same way that a provotype can be a terrible user experience but still reveal valuable insights, these designs were attempts to explore possible alternative modes of interaction by gravitating towards the other extreme of the interaction attribute scales described by Lenz, Diefenbach & Hassenzahl.
How can notions of "interaction aesthetics" be significant for interaction design practice?
Introduction
The purpose of this essay is to explore the concept of interaction aesthetics within the context of interaction design. This essay further argues for the value of establishing a common vocabulary for interaction aesthetics while simultaneously highlighting potential issues that might arise from such a system. To support my argument, I draw upon real-world examples and Lenz, Diefenbach & Hassenzahl’s writings on this subject.
The aesthetics of interaction
Up until quite recently the human experience of interacting with a machine or piece of software was shaped mainly by technical constraints and necessities; the possibility space offered to designers was delimited by hard factors such as functionality, cost, size, weight and so forth. In recent years, however, the emergence of a vast array of affordable and miniaturized new technologies has to a significant degree untethered the possibilities of interaction from these constraints, resulting in a high degree of freedom in actually designing the aesthetics of an interactive object. It is now possible to create interactions that are not merely functional, but also beautiful and emotionally satisfying, in much the same way that has long been possible with user interface design. This development in turn makes interaction aesthetics an emergent field worthy of study and discussion. (Lenz, Diefenbach & Hassenzahl, 2014)
Lenz, Diefenbach & Hassenzahl (2013) describe how current attempts to discuss interaction aesthetics tend to focus on specific aspects of an interaction without providing a holistic view. To remedy this situation, they propose a kind of standardized vocabulary of interaction aesthetics, which describes different types of interaction using attributes that range between two extremes, for example slow to fast or direct to mediated. They further categorize these attributes into why-, what- and how-levels. The why-level focuses on the subjective emotional experience created by the interaction, the what-level describes the actual purpose of the interaction, and the how-level deals with how the interaction is designed.
Two real-world examples
Direct versus mediated
The first prototype we created during the course consisted of a virtual humanoid stick figure that a user could move across the screen using one joystick to control each leg. The purpose of the joystick walker was to explore how a user might experience having direct control of a virtual character’s limbs, that is to say the direct opposite of the heavily mediated type of movement controls found in many video games. As opposed to simply abstracting movement control into a directional input, this prototype allowed for discrete control of each leg using two separate joysticks. This prototype could be said to invert the conventional “how” of controlling a virtual character in order to explore how this affects the “why”.
According to Lenz et al. (2013), mediated interaction creates a sense of remove from the object of interaction, as if the user is merely triggering some action rather than directly creating and controlling it. Direct interaction, on the other hand, creates a “close relationship between the human and the thing being manipulated” (ibid., p. 131).
This prototype demonstrates the value of having access to a vocabulary that both allows a designer to accurately define the nature of an interaction and grants access to its antonym and the wide gradient of possible modes of interaction in between both extremes. This is especially true when it comes to an interaction such as virtual character movement control, where the norm is entrenched to such a degree that it becomes difficult to imagine any other kind of interaction. Simply having access to a pre-defined opposite encourages a designer to expand the scope of their inquiry.
However, this example also raises the issue of the relativity of language and the great extent to which words are interpreted differently based on a person’s individual experience. Our prototype was arguably very direct compared to a traditional movement control scheme, where a user pushes a single stick to make a character move in a given direction, and we discussed the design as if we were moving from one extreme to the other; in hindsight, however, it is possible to imagine interactions that lie even further out on both sides of the axis. Thus, the scope of the gradient between two extremes expands and contracts based on context and the experience of the persons involved in a design process.
Instant versus delayed
The text-based compass was a phone-based prototype created with the purpose of navigating around places of interest in an urban environment while simultaneously facilitating spontaneous discovery: it only shows the direction towards a location, as opposed to traditional map applications that give the user precise directions to their destination. The compass rotated in concert with the yaw of the user’s phone, just like a traditional magnet-based compass would as one turns it. It also tracked the pitch of the user’s movements and tilted the on-screen content accordingly, although the navigation system did not actually take the relative elevation of destinations into account; the tilting was purely aesthetic. The rotation on both axes did not precisely track the movement of the user’s phone but was intentionally made to gradually interpolate towards the phone’s current rotation. The interaction could thus be said to be both fluent and delayed, but it was the delay that was our main focus.
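To make the delay concrete, the following is a minimal JavaScript sketch of the kind of gradual interpolation described above; the element name, the smoothing factor and the use of the deviceorientation event are illustrative assumptions rather than a reproduction of our actual prototype code.

```javascript
// Illustrative sketch: the displayed heading drifts towards the sensor value
// instead of snapping to it (wrap-around at 0/360 degrees is ignored for brevity).
let displayYaw = 0; // rotation currently shown on screen, in degrees
let targetYaw = 0;  // latest yaw reported by the phone's orientation sensor

window.addEventListener("deviceorientation", (e) => {
  if (e.alpha !== null) targetYaw = e.alpha;
});

function animate() {
  // Cover only a fraction of the remaining distance each frame.
  displayYaw += (targetYaw - displayYaw) * 0.1;
  document.querySelector("#compass").style.transform = `rotate(${displayYaw}deg)`;
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```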
Instant interaction creates a feeling of physical connection and oneness with the object being interacted with, whereas delayed interaction promotes an awareness of what is happening during the interaction and imbues it with a sense of greater importance, that the interaction itself is something worthy of paying attention to rather than just the result. (Lenz et al., 2013)
As our prototype was intended to promote a sense of slow-paced casual discovery, we attempted to design it to create just the kind of feeling that Lenz et al. (2013) ascribe to a delayed interaction. In practice, however, this approach ended up creating a feeling of sluggishness and lack of precision for the user; the delayed reaction of the compass led to the impression that it was struggling to lock in on the correct direction. This example demonstrates how it is key to view these terms in a wider context of already existing similar interactive objects. In this case our prototype emphasized attributes that conventional navigation tools try their hardest to minimize, and thus generated a sense of performing poorly compared to what a user might be accustomed to.
Conclusion
The above-mentioned examples clearly illustrate the value for designers of establishing a common vocabulary for describing the aesthetics of interactions, especially, as was frequently the case during the course, when intentionally attempting to build knowledge by subverting and working against established norms for common types of digital interaction.
Besides the obvious advantage of engendering a more precise discussion during the design process, adhering to an established vocabulary of interaction also provides a designer with a toolkit of terms that can define discrete attributes describing both practical and experiential aspects of an interaction and place them on a scale between two extremes, thereby facilitating experimentation and a wider scope of designerly inquiry.
However, any such usage of these terms would still unavoidably be highly contextual and must be interpreted in relation to similar phenomena and the individual experience of both designers and users. For example, while the transitional animations featured in iOS generally last less than half a second and could thus be described as fairly fast, they might still be perceived as slow by a user who is accustomed to the even more rapid animations found on Android devices. Therefore, it might be valuable to expand the vocabulary of interaction aesthetics by attempting to clearly delineate which terms are absolute and which are relative, e.g. temporally fast versus merely feeling fast. Despite the existence of a common vocabulary for interaction aesthetics, designers must continually make sure that everyone involved has a shared understanding of the terms in use. Lenz et al. (2014) touch upon this problem when they caution against using terminology that does not build upon well-established definitions.
This objection by no means renders the concept of a vocabulary for interaction aesthetics useless, but it is an unavoidable problem that must always be taken into account when making use of such a system. Words are inherently imprecise and highly contextual, but a commonly agreed-upon set of terms would still be a significant improvement over a situation where definitions differ between individual designers or are at most agreed upon within a small group.
References
Lenz, E., Diefenbach, S., & Hassenzahl, M. (2014). Aesthetics of interaction. Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational - NordiCHI '14. doi:10.1145/2639189.2639198
Lenz, E., Diefenbach, S., & Hassenzahl, M. (2013). Exploring relationships between interaction attributes and experience. Proceedings of the 6th International Conference on Designing Pleasurable Products and Interfaces - DPPI '13. doi:10.1145/2513506.2513520
Supplement on my designerly practice
The most valuable insight I gained during the course was learning the value of coupling physical user input with on-screen animation, and discovering various ways of making this integral to the design rather than merely a cosmetic flourish. During module 1 I was paired up with Helena, who works a lot with animation and motion graphics at her home university, whereas my previous experience with animation was limited to very basic operations such as moving an element from A to B. Having a collaborator with such a skillset while doing a project about having the user employ their fine motor skills to make a humanoid figure walk in a reasonably realistic fashion greatly expanded my understanding of both animation in general and, more specifically, the power of a one-to-one relationship between physical user input and on-screen animation.
For example, I had previously thought that doing the trackpad gesture to open the Windows 10 multitasking view somehow felt less satisfying than doing the same operation in Mac OS, but it was not until this project that I realized why: Windows transitions to the multitasking view using the same predetermined animation each time, whereas the transition in Mac OS actually follows along with one’s motion across the trackpad, which creates the feeling of actually manipulating a physical object instead of triggering an animation. This allows for not just user input but nuanced and individual user expressivity and playfulness, both in everyday interactions and in more novel ones such as our project. Had we programmed our prototype to simply trigger walking animations as the user manipulates the joystick instead of offering discrete limb control, it would have lacked any possibility for user expressivity and been basically meaningless.
I also realized that, contrary to what one might expect, introducing a certain amount of resistance or sluggishness to the animation enhances the experience of the interaction even though the relationship between bodily movement and animation is no longer completely one-to-one. This is because resistance creates the feeling that the on-screen object has its own physical properties that one has to contend with in order to manipulate it, rather than merely being a digital representation of physical input. However, to maintain ease of use it might in many cases not be desirable to go too far in this direction and accurately simulate all the physical properties of real objects, but rather to attempt to find a middle ground between a sense of physicality and the ease of manipulation of the digital; see my earlier comparison of our design and the significantly more richly simulated and difficult game QWOP.
Module 2 was a further exploration based on the same insights about the importance of establishing a sense of physical connection between animation and user input. In terms of interactivity, the most significant difference between our text-based compass and the prototypes made during the other two modules is that the compass did not just attempt to impart a sense of physicality in the interaction between user and device; it was a part of, and interacted with, the wider three-dimensional world surrounding the user. That the interface elements would, just like a normal compass, rotate to face real-world locations as the user rotated their phone was the most basic requirement of our design, but we also added a 3D effect when the user tilted their phone. Initially we viewed this as just a cosmetic detail, but we quickly realized that it actually allowed the user to experiment with different angles and discover distant locations that would otherwise fall outside the space of the screen. This broke the usual cognitive divide between the two-dimensional map and the three-dimensional world and turned the phone into not just a navigation tool but a different lens for viewing the world.
Module 3, on the other hand, was something of an experiment: by allowing the user to use different numbers of fingers to scroll at different rates through a photo album (e.g. two fingers would scroll to the next date rather than the next photo), we explored whether we could maintain a sense of physicality while also going beyond the confines of the established scrolling metaphor in order to allow for rapid navigation of a vast digital space. Based on our limited user testing, I do feel that our prototype demonstrated that we could indeed maintain that sense of physicality, as varying the number of fingers used to scroll gives the sense of using different levels of force to navigate the space. However, it is also now clear to me that the same goal could have been achieved without forcing the user to learn a new and counternormative vocabulary of gestures, but simply by, for example, allowing for scrolling by date by doing a single-finger swipe on the date itself. While such an interaction would still fall outside the commonly expected result of a scrolling motion and would require that all elements affording rapid scrolling have a clear and consistent visual design, the interaction itself would still be more similar to established actions such as navigating a calendar application by month or week, and would thus be easier to intuit while offering the same functionality. Such a design might also have felt more physical and intuitive if considered in the context of mechanical real-world devices; scrolling the date would not be dissimilar to slowly turning a wheel to make a cog spin at a much quicker rate.
On the significance of unity of action and response and the possible value of intentionally reducing said unity
Introduction
This essay intends to explore and discuss the concept of unity between action and response, and how the aesthetics, user experience and even basic usability might under some circumstances be improved by decreasing or modifying that unity.
Unity of action and response
Wensveen et al. describe the natural coupling of action and function as a one-to-one relationship between action and response, which is unavoidably the case in all mechanical, non-electronic devices. Even in a large, complex mechanical device where the response might appear to be both very different from and out of proportion to the action, e.g. winding up an old clock on the face of a public building, the response is still a direct result of human action despite being non-intuitive. They further identify six different aspects of natural coupling: time, location, direction, dynamics, modality and expression. When user input and device function are synchronized along these axes, they argue, an intuitive sense of natural connection is established. On the other hand, an electronic device or piece of software has no built-in natural coupling; it thus falls to the designer to attempt to create a sense of the same kind of unity between action and response that can be found in mechanical devices. (Wensveen et al., 2004, p. 2-3)
Coupling user action with device response does not necessarily have to fulfill some crucial practical or cognitive need; there is an argument to be made for using it for purely aesthetic and emotional purposes. As an example, the home screen in recent versions of Apple’s operating system for mobile devices uses device orientation to subtly shift the position of the background image. This serves no practical purpose and is not even a type of interaction widely used in the rest of the operating system, and if the subtle effect passes unnoticed by the user it will not be detrimental to their overall experience. However, one could argue that it immediately establishes a strong sense of physical connection between user and device: it listens and responds to the user.
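A comparable orientation-driven shift can be sketched for the web as follows; this is of course not Apple's implementation, and the element name and scaling factors are purely illustrative.

```javascript
// Hypothetical wallpaper element; beta/gamma are the phone's tilt angles in degrees.
const background = document.querySelector("#wallpaper");

window.addEventListener("deviceorientation", (e) => {
  if (e.beta === null || e.gamma === null) return;
  // Clamp the tilt and scale it down so the shift stays subtle.
  const offsetX = Math.max(-15, Math.min(15, e.gamma)) * 0.5;
  const offsetY = Math.max(-15, Math.min(15, e.beta)) * 0.5;
  background.style.transform = `translate(${offsetX}px, ${offsetY}px)`;
});
```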
Compromising unity of action and response in order to improve aesthetics and user experience
Inherent feedback based on the user’s bodily movements or other bodily functions also does not necessarily have to reflect and react to every detailed nuance of the user’s actions. In fact, this essay argues that in some cases it might be preferable not to, and to instead employ a certain amount of abstraction, modification or simplification. The designer can modify the response to the user’s action in order to make the response live up to the user’s expectations or aesthetic preferences, or even to improve the functionality of the product.
An example of intentionally decreasing the unity between action and response for the purposes of improving aesthetics and user experience is the common act of scrolling on a screen. In most modern web browsers the viewport does not only scroll based on mouse, trackpad or touch input; there is also a smooth and brief deceleration after the user has ceased their input. By adding this subtle animation to the raw user input the designer is able to create a sense of physicality akin to the sensation of spinning a wheel. Wensveen et al. define the dynamic aspect of natural coupling as the relationship between the speed, acceleration and force of the action and that of the response (Wensveen et al., 2004, p. 2). In this example the dynamic aspect is both perfectly coupled and artificially added to. As I will discuss further in the next example, there is value in this beyond pure aesthetics.
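The mechanism can be sketched as a velocity that keeps decaying after input stops; browsers implement this natively, and the container element and friction value below are illustrative assumptions.

```javascript
const content = document.querySelector("#content"); // hypothetical scrollable element
let scrollPos = 0;
let velocity = 0; // pixels per frame, set while the user is actively scrolling

function onUserScrollDelta(delta) {
  scrollPos += delta;
  velocity = delta; // remember the speed of the most recent input
}

function decay() {
  if (Math.abs(velocity) > 0.1) {
    scrollPos += velocity;
    velocity *= 0.95; // brief, smooth deceleration after input has ceased
    content.style.transform = `translateY(${-scrollPos}px)`;
  }
  requestAnimationFrame(decay);
}
requestAnimationFrame(decay);
```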
Figure 1: Visual audio spectrum analyzer in Ableton Live
In some cases the device might not be able to respond to the user’s action in a desirable way despite being technically perfectly coupled. The visual audio spectrum analyzer (fig. 1) that can often be found in audio recording software might use linear interpolation or similar techniques to smooth out the erratic analog audio signal in order to visually represent it as a smoothly animating curve. This implies that the designer has opted to sacrifice a certain amount of unity between action and response in order to make the end result both more visually appealing and more easily readable; a visualization of an action that is changing so rapidly as to become unreadable serves no purpose. Here we see that aesthetic choices can enhance not just the impression of an interactive object but also its functionality, despite technically being a less accurate representation of the user’s input. That the level of unity between action and response is technically decreased is in this case not necessarily detrimental to the user experience and might not even be perceived as a decrease by the user. Rather, this beautified and embellished visualization of a raw and constantly fluctuating signal might be experienced as a state of complete unity, because it is closer than the unfiltered signal to what the user imagines it should look like.
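The smoothing itself can be as simple as an exponential moving average over the raw frequency bins; the bin count and smoothing factor below are illustrative assumptions, not the settings of any particular product.

```javascript
const smoothed = new Float32Array(512); // one value per frequency bin

function smoothSpectrum(rawBins) {
  for (let i = 0; i < rawBins.length; i++) {
    // Keep most of the previous value and mix in a little of the new reading,
    // so the drawn curve animates smoothly instead of flickering.
    smoothed[i] = smoothed[i] * 0.8 + rawBins[i] * 0.2;
  }
  return smoothed; // drawn as the curve instead of the raw, erratic signal
}
```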
Conclusion
As can be concluded from the two examples discussed above there can be multiple rationales and levels of justification for compromising the unity between action and response. In the case of the audio spectrum analyzer there is a strong argument for believing that it is essential for the usability of the product, while in the case of scrolling it is more about establishing the screen space as a seemingly physical space that can be navigated in a similar manner as a real space. To use the terminology established by Wensveen et al., the designer is modifying or arguably enhancing both the dynamics and the modality, that is the audiovisual expression of the interaction.
In the natural world we rarely encounter objects or phenomena that rapidly fluctuate between multiple states, or that switch between binary states without any kind of discernible transition through time and space (lightning comes to mind as an obvious exception). The physical world is full of smooth, gradual transitions. Thus, the smoothly animating curve or the gradually decelerating scrolling window might be experienced not just as more aesthetically pleasing but also as more true and real, because they conform to the user’s expectations, past experiences and the conventions of the natural world; they feel less artificial despite actually being more so.
References
Wensveen, S. A. G., Djajadiningrat, J. P., & Overbeeke, C. J. (2004). Interaction frogger: A design framework to couple action and function through feedback and feedforward. Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, Cambridge, MA, USA.
End of M3
Due to illness I was unfortunately unable to attend the show and tell session or do much work at all this week, but I gather from Karl that several of our concerns were reflected in the feedback. Explicitly avoiding even the hint of conceptualization was extremely useful for our prototyping process because it helped us avoid the trap of treating the constraints of the concepts as the constraints of the possibility space of our prototypes. However, as we had already noticed during our peer review session, the lack of any kind of graspable real-world context made it very difficult for people to understand the purpose of our prototypes. We tend to treat the value of avoiding concepts as a truism, but there might be exceptions to this rule.
This week we created a second prototype, which allows a user to scroll through a set of PDF documents, images or some other large collection of visual media. The idea is to establish a hierarchy of spatial navigation by letting the user move different chronological or contextual distances based on the number of fingers used to swipe. In the context of a photo gallery this could for example mean that a two-finger swipe would move the viewport to the next or previous picture, a three-finger swipe would move to photos from a different week, four fingers would move the viewport an entire month, and so forth. In a PDF reader, the same gestures might trigger movement between pages, paragraphs and chapters.
The purpose of this prototype is to explore two different questions. The first is whether the hierarchy of navigation is obvious enough that a user could intuit that increasing the number of fingers used would increase the scale of movement: does it feel like a natural hierarchy or something that has to be explicitly taught? Touch-based navigation usually has a one-to-one relationship between physical and on-screen movement, but our prototype goes against this paradigm, so the second question is whether this type of navigation is possible while maintaining a sense of continuous space. The view smoothly interpolates towards the new target view rather than simply teleporting to it, but is that sufficient? A rough sketch of this mapping and interpolation is included at the end of this entry.
Both this new prototype and our first one from last week depend on touch gesture direction and the number of fingers used to trigger different actions and movements, but they have very different attitudes towards space. The one I worked on exists within a large continuous scrollable space, whereas the one created by Karl is a seemingly fixed space that user interface elements move in and out of. This ties into what I touched on briefly in my previous entry regarding how to think about screen space. Our respective prototypes ostensibly represent two extremes, but might it be possible and valuable to combine these two different attitudes? It could be argued that a fixed space where user interface elements appear and disappear is not fixed at all but is actually being navigated on the z-axis; if one is willing to accept that premise, combining the two suddenly seems a lot more straightforward. One could for example imagine a 3D file explorer where X and Y would navigate within a folder while Z would move up and down the folder hierarchy.
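As mentioned above, the sketch below illustrates the finger-count mapping and the interpolation; the step sizes, the swipe handler and the rendering function are illustrative placeholders rather than our actual code.

```javascript
// More fingers = larger jumps through the collection (values are illustrative).
const STEP_BY_FINGERS = { 2: 1, 3: 7, 4: 30 }; // e.g. picture, week, month

let targetIndex = 0;  // where the viewport should end up
let currentIndex = 0; // where the viewport is drawn right now

function onSwipe(fingerCount, direction) { // direction is +1 or -1
  targetIndex += direction * (STEP_BY_FINGERS[fingerCount] || 1);
}

function animate() {
  // Interpolate towards the target so the space still feels continuous
  // rather than teleporting the viewport to its destination.
  currentIndex += (targetIndex - currentIndex) * 0.15;
  drawGalleryAt(currentIndex); // hypothetical rendering function
  requestAnimationFrame(animate);
}
requestAnimationFrame(animate);
```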
Week 7 - Start of M3
Our third and final assignment for this course concerns itself with space. The introduction to On human space by O. F. Bollnow discusses the differences between what they term “mathematical space” and “experienced space”. Mathematical space is measured space, the world of XYZ values, where any arbitrary point can be the center of the world (that is, 0, 0, 0), all possible directions are equal and space is endless. Experienced, or lived, space is on the other hand described as space as subjectively experienced by a human: the person’s body is always the center of the world, and the vertical axis is tied to the direction the person is being pulled by the nearest source of gravity (i.e. the earth). In the context of screen space, the situation is a bit more complex and less clear-cut. For one, the mathematical y-axis and the experienced y-axis are one and the same. And while the two-dimensional coordinates used on a screen are mathematically endless, actual screen space is obviously limited, although it is still possible to place user interface elements outside the screen. This presents the idea of the screen as a physically static but digitally movable window into a potentially endless two-dimensional space, and for a designer it leads to the question of how to indicate to the user that additional content is present outside the current viewport. Further still, might it be possible to not just indicate things outside the screen but to make them an integral part of the design rather than something tacked on like scroll bars?
The material for module 3 consists of a JavaScript template that enables us to do intricate work with multitouch, such as tracking the speed and position of multiple touch points. We are supposed to use this as a basis for exploring the possibilities of the two-dimensional space of a screen within the context of touch-based interactions. We spent the first few days familiarizing ourselves with the code and doing a few small experiments before creating our first prototype, which consists of text boxes that can be revealed or hidden by a swiping gesture in the style of pull-down notifications on phones. This could be seen as an exercise in feedback: pulling a UI element into view is a form of inherent feedback, since there is a one-to-one relationship between body and screen, but might we be able to build some touch-based experience where that is not the case without losing the mental connection between action and result?
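As a minimal illustration of what that kind of multitouch tracking involves (using standard browser touch events, not the internals of the provided template), each touch point's position can be remembered so that its speed and direction can be derived:

```javascript
const touches = new Map(); // touch identifier -> last known position and timestamp

window.addEventListener("touchmove", (e) => {
  for (let i = 0; i < e.changedTouches.length; i++) {
    const t = e.changedTouches[i];
    const prev = touches.get(t.identifier);
    const now = performance.now();
    if (prev && now > prev.time) {
      const dt = (now - prev.time) / 1000;
      const vx = (t.clientX - prev.x) / dt; // horizontal speed in px/s
      const vy = (t.clientY - prev.y) / dt; // vertical speed in px/s
      // A sufficiently fast downward swipe could pull a hidden text box into view here.
    }
    touches.set(t.identifier, { x: t.clientX, y: t.clientY, time: now });
  }
});

window.addEventListener("touchend", (e) => {
  for (let i = 0; i < e.changedTouches.length; i++) {
    touches.delete(e.changedTouches[i].identifier); // forget lifted fingers
  }
});
```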
Week 6 – End of M2
I’m overall happy with our last prototype and intend to continue working on it in my spare time, but I also agree with the feedback we received during our show and tell: we did not sufficiently engage with the assigned material, with the exception of using font weight to indicate approximate distance, which I think works very well. On the other hand, as I’ve mentioned in previous posts, the subtlety of the variable font parameters makes them very challenging to utilize effectively in the context of animated text on a small screen. I’m a big believer in the power of using arbitrary constraints as a creative catalyst, but whereas the previous joystick-based project presented a possibility space large enough to maneuver and experiment in, working within the minuscule constraints of the specific variable typeface we were assigned felt more like trying to do carpentry in an airplane bathroom. This feeling is probably at least in part due to the way I use arbitrary constraints in my personal projects, which is as a starting point that I sooner or later break out of once I know what the project is about. Not being able to move on to this more open second phase is very frustrating, but it obviously wouldn’t be much of an exercise if students could abandon the premise and do whatever they felt like.
I also feel that we got slightly mentally trapped in our context; we could have replaced our urban directions with locations in a national park or on the surface of the moon and it wouldn’t have made a difference. In fact, a less familiar context might have allowed us to get less bogged down in specifics. The familiar urban setting led us to focus too much on details such as how the busyness of a subway station could be indicated and similar concerns. During our final coaching session we were given the advice to consider using subtle font changes to hint at hidden information as a means of surprising or delighting the user; in our context, a user might not realize, for example, that the name of a restaurant subtly undulating indicates that it is located on a boat. I think that our failure to consider this approach ourselves, despite intentional obfuscation and discovery being core components of our compass, was due to us being biased by how text is normally used when giving directions, that is, to indicate as clearly as possible exactly what you are going to encounter. I’m reminded of a certain school of game design that emphasizes the value of mystery and discovery by dropping the player into an unknown and unexplained setting; by gradually building a vocabulary of non-verbal information one learns to read the world and develops a sense of mastery of the system. This feeling of accumulating secret knowledge can be very powerful and it’s something that I really wish we had tapped into.
The largest technical challenge of the text-based compass prototype is drawing animated circular text, something that CSS does not support out of the box. Initially we attempted to use the CircleType library (https://circletype.labwire.ca/), and while it works fine for static elements it does not support animations, so we had to code our own solution from scratch, and by “scratch” I obviously mean spending hours scouring Stack Overflow. Our solution ended up consisting of breaking text strings into individual characters, placing these characters in individual text elements and doing a bunch of math to place each element along the circumference of a circle. It works surprisingly well but has a major drawback in that the kerning of the font is lost, since each word now consists of many HTML text elements; words containing thin letters like “i” and “l” look especially hideous. Since there appears to be no way, at least in JavaScript, to read the kerning data of a font, the only solution would be to spend hours and hours setting up fine-tuned individual spacing parameters for each letter, which is something we don’t really have time for.
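In simplified form, and with illustrative names and spacing, the approach looks roughly like this (the container is assumed to be relatively positioned and sized to the circle):

```javascript
// Split the string into one absolutely positioned element per character and
// rotate each one around the circle's centre before pushing it out to the radius.
function circularText(container, text, radius) {
  const step = 360 / text.length; // angular spacing between characters (illustrative)
  for (let i = 0; i < text.length; i++) {
    const span = document.createElement("span");
    span.textContent = text[i];
    span.style.position = "absolute";
    span.style.left = "50%";
    span.style.top = "50%";
    // Rotate about the centre point, then move the character outwards along
    // its rotated local y-axis so it ends up on the circumference.
    span.style.transform = `rotate(${i * step}deg) translateY(${-radius}px)`;
    container.appendChild(span);
  }
}
```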
Another problem with conveying feedback via animated text is that subtle changes in the variable font tend to go unnoticed; one only pays attention to the main method of feedback (i.e. movement) and fairly obvious changes like size and weight. I don’t have access to a large tablet or a gyroscope-equipped laptop to compare with, but I think that using a small phone screen exacerbates the problem. In the lab there’s a big yellow box with a screen on top, I think built by the third-year students. It also uses movement to animate things on screen, and I think that if we had had time to build something like that with a big screen it would have been the ideal device for our prototype. Conveying a sense of distance by controlling the size and/or weight of the font works fine, but we’ve also discussed the possibility of more fine-grained feedback/feedforward, such as indicating how busy a restaurant is or whether a train is about to leave the station. I think that conveying things like those using very subtle changes in a variable font would easily be overlooked by the user; nevertheless, this will probably be our focus for the remainder of the project.
We further explored the notion of inherent feedback by allowing the user to tilt the phone to see text that is outside the screen, using a simple parallax effect to create the sense of a 3D space. This one-to-one relationship between the user’s bodily movement and what is happening on the screen is surprisingly powerful; it feels less like software and more like a physical object reacting to both user action and the natural world.
Week 4 – Beginning M2
Our new assignment is about working with variable typefaces, a new technology that lets one control detailed parameters of a font such as weight or the shape of serifs. We had a guest lecture with a typographer who discussed his work and specifically the emerging technology of variable typefaces. He also provided us with an in-progress version of a new variable font called Funkis. According to him, it is currently somewhat of a useless gimmicky technology, so I guess that part of our assignment will be about trying to find design opportunities for variable fonts.
At first, we discussed working within the context of a traditional map, but we ended up taking a different path. Taking inspiration both from the traditional compass and from the (De)tour Guide described in Alternatives: Exploring Information Appliances through Conceptual Design Proposals, which we read during the previous semester, we decided to focus on creating a text-based compass. Regular maps such as the one provided by Google prompt the user to search for a specific location or type of location and then give precise instructions on how to get there as fast as possible, whereas (De)tour guides the user towards interesting and unexpected places. Our idea falls somewhere between the two by giving the user directions (as in compass directions, not step-by-step instructions) and indicating distance by controlling the size and weight of the text.
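A minimal sketch of how such a distance mapping could work is shown below; the distance range, sizes and weights are illustrative assumptions, as is the assumption that the typeface exposes a standard weight axis.

```javascript
// Nearer places get larger, heavier text; distant ones shrink and thin out.
function styleForDistance(element, distanceMeters) {
  const t = Math.min(distanceMeters, 2000) / 2000; // 0 = right here, 1 = far away
  const size = 24 - t * 12;     // 24px up close, 12px at the edge of the range
  const weight = 700 - t * 400; // bold nearby, light in the distance
  element.style.fontSize = `${size}px`;
  element.style.fontVariationSettings = `"wght" ${Math.round(weight)}`; // variable font weight axis
}
```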
The paper provided for this module, Interaction Frogger: A Design Framework, talks about the idea of achieving “freedom of interaction”. The authors define this freedom as offering the possibility of taking full advantage of motor skills, providing many different ways to achieve functionality, allowing the user to act at multiple points simultaneously, and easily allowing for actions to be reversed. Despite only being controlled by the orientation of the user’s phone, one could argue that our text-based compass fulfills three of these criteria, the exception being that there is only one way to achieve its functionality. The paper also discusses different forms of feedback and feedforward and categorizes them as functional, inherent and augmented. Our compass would probably fall into the inherent category because it responds directly to the user’s bodily movement.
Week 3
We spent too much time unsuccessfully attempting to implement gravity, something we felt was necessary to be able to fully explore the skill of walking. Implementing a simple gravity system in Processing doesn’t seem that difficult in itself, but combined with our already very complex movement and animation code it became a small programming nightmare. In hindsight we were probably a bit fixated on creating a realistic simulation rather than trying to find the essential components that make walking feel like walking. In the end I actually think that gravity could have made the experience too difficult and too focused on simply not falling over, making it too much of a game. One of our inspirations was a game called QWOP (http://www.foddy.net/Athletics.html), which has similar controls to what we made but is in practice all about trying to run while managing not to fall over. QWOP is an interesting exploration of what might happen if a game gives the player overly detailed control as opposed to the customary abstraction, but actual humans generally don’t struggle to stay upright with every step (balancing could be considered a related but separate skill), and it was those more normal movements we were interested in having a user relearn via the joystick control mechanism.
I realized that the process of figuring out how to code a reasonable imitation of human locomotion may have taught me as much or even more about that skill and what it consists of than the final product might teach a user. I’m not saying that having the user relearn walking (or any other bodily skill) in a new context using a new method is meaningless; I definitely feel that we at least got a good way towards achieving our goal of having people closely study something they usually take for granted, but the process of actually building this thing was also extremely educational. This line of thought leads me to consider that it might be valuable to create a system where a user could experiment with leg-based locomotion and attempt to build a successfully walking person or creature out of modular limbs rather than controlling a ready-made humanoid.
Week 2
We had an interesting guest lecture with Stephanie Hutchison, after which we did various bodily exercises. Immediately afterwards we got the idea to try working with walking. Walking is something that most able-bodied people do effortlessly and without thinking about the mechanics of the action, and it is not something we intentionally learn as children. When working on the sketch I even had to get up from my desk and walk around while paying strict attention in order to realize that the hips don’t really move vertically while walking. Dreyfus & Dreyfus talk about the various stages a person goes through as they intentionally acquire a skill, so the question becomes whether we can deconstruct the act of walking and turn it into a learnable skill, and whether that learning process might generate some insight. How does it feel to relearn something that you have mindlessly been able to do for most of your life? We based our sketch on some example code demonstrating inverse kinematics in Processing and managed to turn that into a simple animated figure whose legs can be controlled by two joysticks in order to execute something that might generously be described as a caricature of a walking human.
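For reference, the core of the two-joint inverse kinematics can be expressed roughly as follows, here in JavaScript for brevity rather than the Processing we actually used; the names and the choice of bend direction are illustrative.

```javascript
// Given a hip position and a foot target (driven by the joystick), solve for the knee.
function solveLeg(hip, target, thighLen, shinLen) {
  const dx = target.x - hip.x;
  const dy = target.y - hip.y;
  // Clamp the distance so the target never falls outside what the leg can reach.
  const dist = Math.min(
    Math.max(Math.hypot(dx, dy), Math.abs(thighLen - shinLen) + 1e-6),
    thighLen + shinLen
  );
  // Law of cosines: angle between the hip-to-target line and the thigh.
  const bend = Math.acos(
    (thighLen * thighLen + dist * dist - shinLen * shinLen) / (2 * thighLen * dist)
  );
  const base = Math.atan2(dy, dx);
  return {
    knee: {
      x: hip.x + Math.cos(base - bend) * thighLen, // "- bend" picks which way the knee points
      y: hip.y + Math.sin(base - bend) * thighLen,
    },
    foot: { x: hip.x + Math.cos(base) * dist, y: hip.y + Math.sin(base) * dist },
  };
}
```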
Week 1
Our first project of the course consists of exploring a bodily skill, the only requirement being that we use at least one joystick. We discussed various possible skills including balancing, throwing, picking something up and drawing. At first our main idea was to somehow sabotage or interfere with the execution of the skill, for example by reversing the directions of a joystick or forcing the user to work with their non-dominant hand, but discussing this with a teacher made it clear that merely making a skill more difficult or unpredictable to do would probably not reveal anything of use. We then decided that taking a skill usually requiring only one hand and separating its components into actions mapped to two separate joysticks might be more fruitful, can we learn something new about a skill by dissecting and separating it?
We opted to work with Arduino and Processing rather than the provided JavaScript template, since we felt pretty sure that we wouldn’t need any online functionality and Processing provides a more comfortable programming environment with fewer moving parts.
Our first functional sketch, after two quickly aborted experiments with throwing and grabbing, explored drawing with two joysticks. When you draw with a pen your hand operates on all three axes: X and Y for movement and Z for pressure. After some experimentation with different control mappings we decided to map movement to one stick and pressure to the other. Testing revealed that it was very difficult to work with all three axes at once when they were separated; I frequently found myself stopping movement for a second to be able to focus on adjusting the pressure.
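A rough sketch of that mapping, written in JavaScript rather than the Processing/Arduino combination we actually used; joystick values are assumed to arrive normalized to the range -1..1 and all constants are illustrative.

```javascript
const ctx = document.querySelector("canvas").getContext("2d");
let pen = { x: 200, y: 200, pressure: 0.5 };

// Called whenever new readings arrive from the two joysticks.
function update(moveStick, pressureStick) {
  const prev = { x: pen.x, y: pen.y };
  pen.x += moveStick.x * 3; // one stick moves the pen on X and Y
  pen.y += moveStick.y * 3;
  // The other stick nudges the pressure up or down, clamped to 0..1.
  pen.pressure = Math.min(1, Math.max(0, pen.pressure + pressureStick.y * 0.02));
  ctx.lineWidth = 1 + pen.pressure * 9; // pressure becomes stroke width
  ctx.beginPath();
  ctx.moveTo(prev.x, prev.y);
  ctx.lineTo(pen.x, pen.y);
  ctx.stroke();
}
```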