LEARNING JOURNAL
LUKAS VYTAUTAS DAGILIS | INTERACTION DESIGN MMXVII
Hi. This is the end of the blog. Here is a shortcut to the second (really the first) post - otherwise, scroll all the way down. Some posts need to be 'opened' up further by clicking a 'read more'-type link, but most will be fully expanded as is. I wrote the posts over time, coming back, editing and changing things as they changed, and then finally published all of them in the correct order near the end of the year. Most were written casually. Overall, I just tried to write about whatever I myself was focused on at that particular point. Happy reading.
[video]
Trailer! (rude, I barely know her!)
Project Booklet/Tabloid
As part of the final submission, we were also tasked with creating a project book - almost a condensed version of our blogs/sketchbooks. Personally, I wanted to guide the reader through how my concept changed from the start of the year until the end, as well as provide a better look at some of the images I considered when thinking about what home meant to me (some of which were in the installation, others not). I was not aiming to create anything flashy.
I was getting very near to finishing the text portion of the booklet while still struggling to figure out how to lay out the images. Wanting to resolve that, I tasked myself with printing out the booklet as-is (with some placeholder text, a few blank pages, and missing images). Printing the booklet proved insanely difficult (thanks, Blurb, InDesign, Acrobat, and the studio printer, for all having minds of their own and not caring at all about any of the settings), and once I physically had it in my hands, I realized that the booklet might just not be the format for me.
While it might not look all that bad, I knew there was really no point in printing the images that small, and that trying to print them across pages would be practically impossible due to the binding. Finally, re-looking at the price, quality, format, and time it would take to arrive, I decided I needed to look for an alternative. Thankfully, Olof had been looking for places that print on newspaper, and was able to point me in the direction of printonpaper. I wasn't concerned with the quality of the paper, as I actually quite prefer a slightly less bleached color, and the paper weight was the same as it would have been with Blurb. Not only would I have more space for images, but more space for text as well, letting me talk about the concept more.
Though it meant completely redesigning the booklet, as well as expanding the text a considerable amount, once it arrived I was sure it was the right choice.
Final testing in Studio
[images]
Towards the end of April, I carried out the final testing in the studio: trying to find bits and pieces that might need fixing, working out exactly what would be needed for the degree show set-up (i.e. projector-screen-viewer distances, heights, seats), and at the same time taking the chance to do a bit of documentation.
[video]
Kind of works like a trailer!
Installation design
[note: this is the third time I'm having to write this blog post. Even with 16 GB of RAM, Chrome keeps crashing every time this post nears completion. For my own sanity, and to have a post that doesn't crash, the text will quickly describe rather than thoroughly explain.] Once I had started thinking about the installation itself in more specific terms - i.e. not just what would be in it, but how it would be arranged, etc. - I found it easiest to create mock-ups in Maya. Unlike sketches, this allowed me to quickly move things around without having to completely redo everything. Additionally, I could simulate things like lighting and, if I wanted, materials, etc.
Initially, I was playing with the idea of the Leap Motion controller on a plinth, around which would be a cube standing on a corner - a physical boundary and an invitation for interaction. In the space, four speakers would create a sort of rectangle where the viewers could experience the multi-channel audio (I was actually doing these renders far before even really considering multichannel audio, but I kept including it in them for no apparent reason).
[image]
Once I started considering using lights, I instantly mocked up a design where the only light coming into the space was from a set of lights laced around the center of the installation. Each light would be above a speaker. Eight speakers, eight lamps (yes, I know no such perfectly round light exists in the real world), and a plinth in the middle with the Leap on it.
[image]
The reality was that that wasn’t really possible. Instead, once I had managed to get the SunStrips working, I mocked up an installation where there would be four speakers somewhere in the space, and two SunStrips on the sides of the viewer for visual feedback.
[images]
And then I discovered how fun Arnold is, and played around with reflective everything, turning the SunStrip into an actual light source (I actually mocked the SunStrip up... placed ten different spotlights within an enclosure, designed to be the same dimensions, etc), and for some reason: flesh rendering.
[image]
Once I switched to the idea of exploring my own identity and idea of home, I thought it would be more appropriate to create a homely scene. But then, upon thinking about it, I realized that what might seem homely and inviting to me will probably seem cold to most people. In our family, whenever we come home, we sit together in the living room in front of the TV and show each other photos of what we've been up to since the last time we spoke. I figured some generic IKEA furniture in the form of a coffee table and a TV stand, along with a rug and a pillow to sit on, could seem quite cozy. But upon actually trying to see what it might look like, and thinking about how it would interact with the rest of the space, I realized it's far from homely.
[images]
Additionally, I really wanted to get away from TVs or computer screens. I wanted it to seem inviting yet almost otherworldly, rather than so recognizable as a TV or screen. So, when it came time to try and mock up what my installation might look like in the actual space, I placed a sphere in place of what I thought might become a sculptural object I could project onto. The idea for the space was that a wall would be built to segment off the area. By enclosing it, the space would become less 'basement of the Reid' and more a room, someplace more intimate. A single viewer would have a place to sit and interact with the piece, with two speakers in front of them and two at the far back, where people would enter the space. The idea was that the space would naturally split into two, with an imaginary line drawn at the seat. Only one person could interact with it, so the others would be forced to stand behind, watch, and listen.
[images]
After finishing work on the fashion show projections, I thought maybe I could use the tulle as a basis for the sculptural object. The problem was that a thin layer was too intricate for something at such a small scale - it simply let too much light through. And if you tried to bundle it up, it instantly became too dense for projection, and honestly - just disgusting looking. I hated it so much I didn't even document past the basic tests.
[video]
Towards the end of the year, we were finally beginning to decide exactly what each of our pieces was going to consist of. However, it still kept changing quite significantly, so for quite a while I refrained from making any new renders. With Paul's help, though, and with three weeks to go, we finally decided on a setup. Below are two videos of me trying to figure out how to arrange everything in the space, and then an image of one of the final renders (with one speaker out of frame).
[video]
[video]
[image]
Max and Leap changes
Once I had stereo sound working, I knew that I simply had to try to get a more immersive soundscape working. Even stereo panning was insanely effective; however, as soon as one turned their head so that both speakers were closer to one ear, the illusion was instantly ruined. Additionally, I thought that an interesting way of exploring my identity could be through recordings in multiple languages, rather than just English (at this point, I was still preparing for recording. Sorry about the inconsistency in chronology, but I was really working on everything simultaneously, so I'm breaking it up by topic for my own sanity).
Initially, I was very scared of even opening up Max. For me, patch-based programming is a bit alien, as it requires knowing what each object does, which one is better for what, etc., whereas with Processing, even if there isn't a function for something, as long as I understand the very basics of how a library works, I can write my own functions and code to do what's needed. Without Jen's help, I probably would have been unable to get anywhere near the final patch, so Jen, sincerely, thank you. While I understood the basics of what was needed, and was able to find many examples for each of the separate elements of the patch, I was struggling to connect them together, as well as to integrate them with the Leap Motion. As it happens, the main documented method of Leap integration with Max is deprecated on Windows, and, as far as I've read, difficult to set up even on Mac.
Since I was already planning on using both Processing and Max (I was nowhere near knowledgeable enough in Max to build the visual framework I had in mind with it), I realized that I could simply send messages from Processing to Max with the data it needed.
Simply, he says.  First, we start with a whole bunch of data from Processing that gets broken up into bits, as needed. So, we’re sending each hand’s XYZ, and each finger’s XYZ. And scaling all of them.
[image]
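Roughly, the Processing side of that send looks something like the sketch below - not the project's actual code: the port, the address pattern and the scaling ranges are placeholders, and the hand position would come from whichever Leap library is in use.

```java
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress max;

void setup() {
  osc = new OscP5(this, 12000);              // local port for oscP5 itself (placeholder)
  max = new NetAddress("127.0.0.1", 7400);   // where the Max patch listens (placeholder)
}

// Called once per frame for each visible hand, with the raw Leap position.
void sendHand(String side, PVector pos) {
  OscMessage m = new OscMessage("/" + side); // e.g. "/left" or "/right"
  m.add(map(pos.x, -250, 250, 0, 1));        // scale everything into 0..1 before Max sees it
  m.add(map(pos.y,    0, 600, 0, 1));
  m.add(map(pos.z, -250, 250, 0, 1));
  osc.send(m, max);
}
```

The finger data went out the same way, just with one set of values per finger.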
Two subpatches like the one below sit in the main patch (or at least that was the idea - maybe not in this specific version). In these, we take the position data of the hand to pan all of the audio of that hand (i.e. all of the audio of the left-hand fingers), and set the volume based on its height.
[image]
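The screenshots don't read well at this size, so purely as an illustration of that mapping (the actual patch may use a different curve entirely): an equal-power pan driven by the hand's x, scaled by a gain from the hand's height.

```java
// x: hand left..right scaled to 0..1, y: hand height scaled to 0..1.
// Returns {left gain, right gain} for that hand's audio.
float[] handGains(float x, float y) {
  float angle = constrain(x, 0, 1) * HALF_PI;  // 0 = hard left, HALF_PI = hard right
  float level = constrain(y, 0, 1);            // higher hand = louder
  return new float[] { cos(angle) * level, sin(angle) * level };
}
```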
Each of these subpatches has another subpatch, where we actually control the audio. So each finger has its own soundfile, which gets stretched and pitched based on the finger position. Easy:
[image]
Did I say one subpatch? Yeah, for testing purposes. Later there will be five of each for each hand. So ten sub-subpatches, in two sub-patches, in one big mess of a main patch.
So in the end, it looks more like this... just for the main patch and hand patches. There’s another ten patches within that, but I’ll skip those for lack of space.
[image]
So, as you can imagine, my lack of experience in Max meant that while it actually worked, it was so unbearably slow it might as well not have worked.
I had a lot of optimization to do. In my sketchbook, you'll find all of the different notes I scribbled while trying to work out the best way of sending messages from Processing into Max. In the end, I first minimized the number of messages I was sending. Rather than sending a message with five values per finger (five fingers per hand, two hands, 60 frames a second - 5*5*2*60 = 3000 values to be sorted per second), I began by only sending the messages when they were needed. That is, if there was no hand visible that frame (checked via a trigger), a message containing x 0 y 0 z 0 would get sent to that finger, and a trigger would then be set telling Processing it didn't need to send that message again. Alright, so now we're only sending the messages we need to send. Rather than expanding the messages as I had until that point with many route objects (which required the messages to be far more complex, i.e. to have variable names before variables for proper routing), I learnt that I could instead use one very large unpack object. While this helped, it wasn't really enough.
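The 'only send what's needed' part, sketched out (the names, port and address are mine, not the project's): while a hand is visible its values go out every frame; the moment it disappears, one zeroing message is sent and a flag keeps anything further from being sent until the hand comes back.

```java
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress max;
boolean leftWasVisible = false;

void setup() {
  osc = new OscP5(this, 12000);
  max = new NetAddress("127.0.0.1", 7400);   // placeholder address/port
}

// Called every frame with the left hand's current state.
void updateLeft(boolean visible, PVector pos) {
  if (visible) {
    OscMessage m = new OscMessage("/left");
    m.add(pos.x); m.add(pos.y); m.add(pos.z);
    osc.send(m, max);                        // normal per-frame update
    leftWasVisible = true;
  } else if (leftWasVisible) {
    OscMessage m = new OscMessage("/left");
    m.add(0); m.add(0); m.add(0);            // one last "hand gone" message...
    osc.send(m, max);
    leftWasVisible = false;                  // ...then stay quiet until it reappears
  }
}
```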
But what are we actually doing with the fingers? At this point, the work had become focused on imagery and storytelling. There was no need for each finger to control a sound; instead, each hand only needed to control one image in Processing and one sound in Max. So now we could cut out many, many more messages. We're only sending a message per hand, and only when the hand is visible. So at most, we'd be sending 120 messages per second (not counting values anymore, as I was unpacking much more efficiently than previously; if you're curious, I was sending anywhere between 12 and 5 values - in the latest version there are eight), which already yields a great performance increase. Additionally, we no longer had to deal with 10 different samples that all had to be correctly moved around the 'virtual' space for the proper illusion of sound moving around the physical space. Instead, it was just two audio files!
For the moment, we can pause on talking about Max, and move over to how the visuals were progressing in Processing as all of this was happening.
Up until this point, I had really only had two versions of visuals based on Processing - simple shapes to provide myself feedback while working with the Leap, and the visual feedback by way of lights. I wasn’t really focused on the visuals at this point, though, and was rather just trying to create a system that would communicate with Max. So, after fourteen code iterations just with this library, the sketch that was sending messages to the first working max patches had the following visuals:
[image]
Very conceptual.
Once I actually had the Max messages somewhat sorted out, I actually began integrating the visuals - photos that in some way represented my identity or idea of home, that would get ‘drawn’ on the screen via the viewers’ interaction.
[image]
The very first version was actually a bit more of a challenge to make than I had thought. I had never really worked with images in Processing up until this point, only with generative visuals, so while just building something to draw one image in this way was easy, it was rather the thinking ahead and trying to make sure that the system was flexible that was the difficult part.
The first version is primarily based on the classic example sketch "Pattern", which draws ellipses at your mouse, with the size of the ellipse dependent on the 'speed'; I replaced speed with the y-height of the hand, thinking that might add some interesting interaction (which was a stupid thought, as it just meant that it was easier to draw at the bottom and harder to draw at the top, and was not interesting at all, interaction-wise or visually). As I had the Max side of things relatively sorted out at this point, I put a lot of work into really pushing this framework that I was building. Initially I was quite hesitant about having the visuals on screen without any sort of fading, where everything would just get drawn over the top of everything else. So, I set out to build a framework where each pixel was drawn individually, and a few arrays kept track of which images were 'active', how many pixels of each image were drawn on the screen at that time, and which 'image' every pixel belonged to.
I ended up having many, many issues figuring out how to get it to work, but once I did, I instantly went on to add another function that would put all of that hard work to use: for every pixel whose 'image' value was not one of the two 'active' images, the pixel had a probability of getting turned black. Basically, it was a way to fade the screen in the parts where the image was an 'old' one, not one currently being drawn.
Now, if you know anything about any of what I said above, you'll probably already see that the way I was doing this was very, very inefficient. Every single frame, I was not only drawing massive rectangle-shaped bits of images, PIXEL BY PIXEL, but at the end of the draw loop I was going through every pixel and comparing its value to those of the hands. There are 1920×1080 pixels - just over two million - that I was going through one by one, and for each of them asking: is the pixel's value the same as x or y? Yes? Cool, move to the next pixel. No? Alright, if this random number between zero and one is greater than 0.25, let's make this pixel black, change the value of the pixel to 0, and - oh, also - what was the previous value of the pixel? Let's subtract one from the total count of that value, so we can keep tracking how many pixels each image currently 'occupies'.
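In code, that fade pass looked roughly like the sketch below - owner[] and count[] are my names for the per-pixel and per-image bookkeeping arrays, and the point is exactly how much work it does per frame.

```java
int[] owner;   // per pixel: which image last drew it (0 = black / none)
int[] count;   // per image: how many pixels it currently occupies

void fadeOldImages(int activeA, int activeB) {
  loadPixels();
  for (int i = 0; i < pixels.length; i++) {           // ~2 million pixels, every single frame
    int img = owner[i];
    if (img != 0 && img != activeA && img != activeB) {
      if (random(1) > 0.25) {                         // probabilistic fade to black
        pixels[i] = color(0);
        count[img]--;                                 // keep the occupancy tally in step
        owner[i] = 0;
      }
    }
  }
  updatePixels();
}
```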
[video]
Yeah, the system was a bit slow. But that wasn't the real issue. The issue was that I wanted to create such a robust and versatile system that I didn't really think about what the system actually had to do. Yes, the silhouette effect that happens as the image fades is quite cool, but past that - so what? Not only was it jarring to look at, it was also completely unnecessary. There was no conceptual reason for having it; in fact, it was quite the opposite.
The whole idea of drawing the images was to layer all of these different places where I had lived and that influenced me one atop the other, and to show the viewer that this physical movement, expressed through their own movement and the movement of sound in the space, doesn’t allow for a single identity, for a single home. And here I was creating visuals where the other places, the other identities simply get removed. And sure, in a way, that is true - I have blocked out a few periods of my life from my memory, but that isn’t the point of the work. So, I made the decision that the whole system I was making had become too complex for its own good. 
Rather than just removing those elements, though, I thought it was best to just take everything that I had learnt and write a new version of the program from scratch. Just before doing that, though, I played with adding a few simple calculations - a velocity-based approach to drawing the ‘rectangles’. The size of the image would only be as large as the speed at which the viewer moved their hands, meaning that one had to continue moving their hands to hear and see the story. As I knew the ‘center’ of the hand, and thus of the image, I could then use the velocity as an input for the size of the rectangle, setting one corner as the horizontal center minus one half the horizontal velocity and the vertical center minus one half the vertical velocity, and the other corner as the horizontal center plus one half the horizontal velocity and the vertical center plus one half the vertical velocity. Boom. Rectangle.
[image]
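That corner arithmetic is easier to read as code; the names are mine, and I take the magnitude of the velocity so the patch always has a positive size.

```java
// Draw a chunk of the current photo whose size tracks how fast the hand is moving.
void drawVelocityRect(PImage img, float cx, float cy, float vx, float vy) {
  float w = abs(vx);                        // faster movement -> bigger patch
  float h = abs(vy);
  int x = int(cx - w / 2);                  // one corner: center minus half the velocity
  int y = int(cy - h / 2);
  // copy that region of the source photo onto the same region of the canvas
  copy(img, x, y, int(w), int(h), x, y, int(w), int(h));
}
```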
Having tested this, I began completely rewriting the code, including only what needed to be included. This time, I was only building for the present, not trying to future-proof the project. Seven code iterations later, I had a system that worked just as well as the previous one, but was much cleaner, and I was able to start working on all of the various extra elements beyond the main visuals. I began adding a title scene rather than just having a blank screen, working on a reset mechanism, ensuring that the same images were not repeated (unless a reset happens), and sending new data to Max! Also, due to the wonderful GSA wi-fi, the Processing sketch began to send the data to some bizarre subnet. Because I did not have a static IP, and the oscP5 IP-address function worked in bizarre ways, rather than broadcasting to one single client I switched to a (deprecated, yet still working) protocol - multicast. In theory, this meant that more than one device could be 'listening' to the Processing sketch, and it was a sort of insurance for myself, in case I did not receive the equipment I was hoping to get from the EMS for the degree show. This did mean some redesigning in Max as well; however, compared to some other issues, it was really a non-issue.
With a new Processing sketch, I had to redesign the messages I was sending to Max. As the Max patch had also changed a bit, to reflect the overall changes (from finger control to hand control), I had to rethink a few things. The playlist object accepts integers to control which sound in the list should be played; however, every time the message is received, it starts the clip again. So I could not send the image number all of the time. I mitigated this by placing the image number at the end of the message, and sending the full-length message only when the image changed. This way, the message was sent once, and there was no need to complicate both sides by making new types of messages. While this allowed me to keep the Max patch clean, it meant that the sound would play when it wasn't supposed to: the image, and hence the audio file, would only change when the hand was NOT visible (and thus the image and audio should both be off). This required another value to be sent - a boolean signifying whether the file is alive or dead. So now, every time the audio file was changed and began playing, a zero was also sent. That zero was then slightly delayed in Max and attached to a pause message, meaning that once the audio file had been switched, the audio paused. Problem solved!
The data I was receiving in Max for each hand, every frame, was as follows: alive/dead, pos.x, pos.y, pos.z, vel.x, vel.y, vel.z, image (the velocities could be removed, as they only served a purpose in Processing, but the three extra values don't actually slow it down, and the messages are so fragile that I would rather keep a working and slightly unoptimized version than risk breaking it for unnoticeable gains).
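Put together, the Processing side of the final message looks something like the sketch below (the address pattern and the multicast group are placeholders, and the visibility logic described earlier decides when this actually gets called): seven values per frame while a hand is visible, with the image index appended only on the removal frame that changes it.

```java
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress group;

void setup() {
  osc = new OscP5(this, 12000);
  group = new NetAddress("239.0.0.1", 7400);  // multicast group, so more than one machine can listen
}

void sendHand(String side, boolean visible, PVector pos, PVector vel, int nextImage) {
  OscMessage m = new OscMessage("/" + side);
  m.add(visible ? 1 : 0);                     // alive / dead
  m.add(pos.x); m.add(pos.y); m.add(pos.z);
  m.add(vel.x); m.add(vel.y); m.add(vel.z);
  if (!visible) m.add(nextImage);             // eighth value: the next image, sent only on removal
  osc.send(m, group);
}
```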
From 3000 values being inefficiently processed every second, to seven (the eighth only gets sent once every time you remove a hand) per hand per frame - 840 values per second, yet in such an efficient manner that reducing that number to 112 wasn’t even worth it. I think that speaks volumes for the amount I learned in such a short period of time. 
Audio Recording
You know that feeling when you listen to a recording of your voice and find it absolutely the most annoying thing ever? Imagine doing that for hours on end, every day, for nearly three weeks. And it's not just some random clips of audio that someone recorded of you - it's you, talking about yourself, with no one else there. Really, though, listening back to the audio was only hard because of its contents.
It was truly excruciating to sit there in the recording booth and dig very, very deep into myself to try and figure out what I thought about all of these different homes and places I lived, what I thought of my own identity, what I thought others thought of my identity, and what I thought other people thought of their own identities. There were times when I truly broke down while recording. In fact, there's a 10-minute audio file on my hard drive of me breaking down, trying to figure out what the hell I'm doing, and then, after taking a deep breath, continuing with the audio recordings as if nothing happened.
[video]
In reality, I only recorded about two or three hours of audio. But it took me at least double that to actually get it recorded. I had to set up every time, make sure everything worked, level my audio, set up the cameras if I was documenting it, and then actually get into the mindset needed to talk about things so personal that this was the first time I was really even thinking about them. I was so afraid of thinking about these topics that I never really had, up until this point in my life. So it took quite a lot of guts and concentration, and also relaxation, to force myself to open up, knowing full well that most of the audio was going to be played at the degree show. Not only was it potentially going to be hundreds or thousands of random people hearing this - my family was also coming to visit. And honestly, I'm still scared of what they'll think. I'm scared my parents will take everything negatively and feel guilty for my own confusion as to my identity. And I want to avoid that so much that I really, really am even afraid of my parents hearing or seeing the work.
But at this point, it’s a bit late to go back. And there’s really nothing to be ashamed of or scared of. I just have to be completely open.
While I would love to upload some of the full-length videos of me talking so you could get a feeling of just how uncomfortable I was talking about myself, I feel so uncomfortable with it that I just cannot force myself to put something like that online, especially without editing it. And at the moment, after having listened to my own voice for such a long time, listening to it even more for something non-crucial makes me feel physically ill. So instead, here’s a super-condensed version of all the long-form videos I took while recording audio.
[video]
Switch to Home
While I continued to diligently work on exploring how I could create a framework for re-performing existing work, I was still waiting on the composer I was working with. Initially, the plan was to be done with the composition by the end of January, and hopefully even to have recorded the audio. However, bit by bit, things kept changing, getting pushed back. And while I kept hoping that it was in the works, in the back of my mind, with each day that passed, I was preparing more and more for the opposite.
I hadn't really thought of what exactly I would do if the original plan didn't come to fruition... I think I describe the thought process of the next little while quite well in the booklet, and so rather than trying to rewrite that, or force you to go find it for yourself, here it is:
It was quite ironic that at the end of the previous year, one bit of advice given to me by my tutors was to stick with one idea. I always have hundreds of ideas flying about, but usually start off work on a solid idea, only to get scared by some aspect of it, and of my ability to execute it. Then, in a rush, I would switch ideas completely, and end up handing in something that was really far below the quality that I am capable of.
The thing is, at this point I didn't really have a choice. This wasn't a fear of not being able to achieve what I envisioned, nor me just getting bored with an idea. This was real. I had two months to go, and no performance of any sort.
Looking back, I really don’t think that the change was one to a different project; instead I see everything up until that point as a major learning experience. Learning about the tools, the materials, the capabilities.
At the same time, I had a chance to stop and reflect upon what I was doing. Why was it important to me? Why did I want to do it?
Essentially, I took a hard look at all of it, and remembered why I was initially exploring these ideas, even before the school year started. The FoCI elective on curation had made me interested in instrumentalizing the viewer. I didn't really care about the performances. They were just a tool for exploring how technology can be used to further traditional methods of artistic expression - to engage a viewer, their mind, and their emotions.
So really, what I had to do was find a new tool for exploring these concepts, another medium to engage with and transform with modern methods.
As it happened, on my trip to Lithuania around Christmas time, I had spent a considerable amount of time sitting in front of video installations at galleries (most notably (Ne)regimos svajos ir srovės / (In)visible dreams and streams at the Contemporary Art Center, Vilnius), and I began to reflect back upon the explorations that I had done at the start of the year with triggering videos based on a viewer's position in the space. As I had done so much work with small, intimate installations by using the Leap, the scale of the work instantly changed.
The same issue as before remained - what would the actual content of the work be?
An almost instant realization: it will be my own visuals. And no, I don’t mean that sarcastically. For some reason, up until this point I didn’t really think of using any photos or videos that I had taken. I just didn’t have a reason to. Yet now, having earlier in the year handed in a dissertation talking about ethnosymbolism in Lithuania, I realized that there was no better subject than myself.
Why not try to portray this thing that I have so much trouble expressing? This idea of being both from one place and from many at the same time. Yes, I'm Lithuanian, but I'm not just that. Every one of the seven places I've lived, and even every place I've travelled through - to me, everything I remember - has formed who I am. I decided to take the opportunity to really dig deep into who I am, and into how to portray that to others using these tools I had been working with. At the end of the day, there was still a traditional medium that I was trying to enhance: storytelling. Over time, it had evolved from just word of mouth to written or visual.
And as it happened, I had been into photography since quite a young age. And really, the photos that I happen to have coincide with what I truly remember. There are certain periods, even recent ones, that I simply block out in my mind, without really realizing it. And perhaps it’s an interesting dynamic where I did not take photos of the things I did not want to remember, thus leaving me nothing to look back at and help me remember. But additionally, I can link each of these memories, and thus photos, to what made me who I am today. Really, it’s the memories, feelings, smells and sounds that each photo brings back that show how I became who I am, but the photos themselves thus become a link between my subconscious, and the viewer.
The photos both allow me to explore my own life, identity, memories, and additionally provide an important visual reference to the viewer, allowing them to better understand me, hopefully letting them connect with me.
The actual process of selecting photos and then trying to figure out how to arrange them, how to talk about them, was actually quite difficult. Initially, I tried creating some sort of timeline to follow, so that I could talk about them chronologically, yet then I realized that I did not have photos for some places, had no memory of some of these small 'stops', and also that Lithuania was really in between and during everything, not just one specific period of time.
[video]
However, I still had to somehow turn these memories from the photos into audio, so I organized the photos semi-chronologically, with Lithuania coming at the end as a sort of reflection on everything else. I initially planned to write a script for recording the audio, but could not bring myself to do it. The process felt unnatural - I needed to open up my own memories and dig deep into the subconscious, and I needed to do that while recording to really capture the emotion. I didn't want something diluted, controlled. Instead, I wanted it to be organic and completely natural. I'll tell you this: recording was not easy.
Leap and Lights
As mentioned in the post about my trip to Den Haag, the use of lights as an ambient visual feedback mechanism would become quite a large part of my work.
As soon as I came back from the Netherlands, I knew that I wanted to figure out a light-based feedback system. While I had previous experience with DMX light control (making a moving head act as a spotlight: the light would be placed on a table at the end of the room and would follow a viewer around; as the viewer got closer, the light got brighter, blinding the person and pushing them away), I was afraid of going down that route for fear of the costs involved.
As such, I initially thought about using standard light bulbs and controlling them via an Arduino and a PCB acting as dimmers; however, upon consulting Jen about this, she recommended simply asking around to see if anyone had some DMX-controllable lights that I could borrow for a little while. That same evening, in a GSA Sport Executive Board meeting, I had the chance to ask our student president if she could ask the student association's in-house lighting technicians about any spare lights I could borrow. A few days later, when I was in the space designing the fashion show projections, I had the chance to go pick out something I wanted. I knew I wanted a SunStrip, and for testing, I also borrowed an LED RGBY spot.
[image]
I have to say, the hardest part about getting all of this to work must have been actually getting the lights in the first place, and even that wasn't hard. Initially, I borrowed an Arduino-based DMX controller board from Jen for computer-light communication. By the end of the day, I was controlling the SunStrip purely based on my hand position. Which of the 10 bulbs on the SunStrip was on was determined by hand.pos.x, and the intensity of the light was based on hand.pos.y. Even in this simple state, where it would only work with one hand and one bulb, the interaction felt quite organic and was very fun to play with, even though it seemed like one would get bored very quickly.
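That first mapping, roughly sketched (the ranges are guesses, and the part that actually writes the values out over DMX is omitted): x picks one of the ten bulbs, y maps straight onto its intensity.

```java
int[] levels = new int[10];   // one intensity per SunStrip bulb

void updateLights(float handX, float handY) {
  for (int i = 0; i < levels.length; i++) levels[i] = 0;               // everything off
  int bulb = int(constrain(map(handX, -200, 200, 0, 9), 0, 9));        // hand x -> bulb index
  levels[bulb] = int(constrain(map(handY, 0, 600, 0, 255), 0, 255));   // hand height -> brightness
}
```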
I only had the SunStrip out on loan for a week initially, so I was quite short on time to test out everything that I wanted. Trying to simplify the control method, I decided to replace the Arduino with the openDMX USB box that we had in our studio. The issue was that no one had gotten it to work with Processing yet, but after a day of downloading various drivers and installing a hundred different libraries, I stumbled upon a GitHub repo for a Windows-only Processing library for the openDMX controller. A few minutes and a bit of code modification later, I had taken out the need for another programmable device - the Arduino. [To be fair, it only needed to be programmed once, but in my mind, the Arduino had a greater potential for failure than the openDMX box simply because of the exposed wires, PCB, etc.]
[image]
Once I had the openDMX box working, I began working on improving the interaction methods. At the moment, the effective interaction range was very limited. The way the lights theoretically worked was with an intensity of 0-255, where 0 is off and 255 is full intensity; in practical use, however, the range was more like 50-180. The effective y-range of the Leap was up to almost two meters (in perfect conditions). This meant that if I just mapped the intensity to the y-position, you could basically see the lights increase in brightness step by step; it was not a smooth transition. So how do I maintain a smooth increase in brightness, while not limiting the interaction-height range? What I decided would make sense was to keep one light at a smooth range - i.e. if your hand was at the desk, the light was at 0, and once your hand was around a third of the way to the max height, the light was at full intensity. But doesn't that decrease the effective interaction-height range? Yes! We can circumvent that, though, by then turning on the lights next to the one we were triggering initially: if we have light i on, once it's fully on, we turn lights i-1 and i+1 on as well!
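As a concrete sketch of that spreading idea (again, made-up numbers and no DMX output): the center bulb reaches full brightness about a third of the way up, and anything beyond that spills into bulbs i-1 and i+1.

```java
int NUM_BULBS = 10;

// x, y: hand position scaled to 0..1. Returns one intensity per bulb.
int[] bulbLevels(float x, float y) {
  int[] levels = new int[NUM_BULBS];
  int center = int(constrain(x, 0, 1) * (NUM_BULBS - 1));   // the bulb the hand is "over"
  float drive = constrain(y, 0, 1) * 3;                     // reaches 1.0 a third of the way up
  levels[center] = toDmx(constrain(drive, 0, 1));
  if (drive > 1) {                                          // center bulb is full: spill over
    float spill = constrain(drive - 1, 0, 1);
    if (center > 0)             levels[center - 1] = toDmx(spill);
    if (center < NUM_BULBS - 1) levels[center + 1] = toDmx(spill);
  }
  return levels;
}

int toDmx(float level) {
  // the usable range of these bulbs was roughly 50-180 rather than the full 0-255
  return level <= 0 ? 0 : int(map(level, 0, 1, 50, 180));
}
```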
Initially this ended up being a rough way of working, but after a bit of tweaking, the lights worked beautifully. I modified it to start gradually increasing the other lights before the first had reached full intensity, making a smoother transition. And then I found that I still hadn't used the full effective range, so now I turned on lights i-2 and i+2 as well! After a bit of experimenting, tweaking, more experimenting, and just trying out various methods of interaction, I had created an effective feedback mechanism. For one hand. At the moment, it always defaulted to being controlled by the first visible hand. If both were visible, the left hand was always hand[0], so the left ended up controlling the lights whenever you tried to put both hands in. So what about the other hand? I decided a different control mechanism altogether was an appropriate way of going about this. This was supposed to accompany a musical performance made of various stages, both to introduce the viewer to the interaction methods and to build up different layers of melody, etc., as one would with a song (rather than just starting out with all instruments playing at the same time).
As it happened, the SunStrip has ten bulbs, which beautifully coincides with the number of fingers on two hands. This only seemed like a natural way to progress, so I decided that the next ‘stage’ of interaction would be with the two handed method, where each finger controlled a single light.
[video]
Now, as soon as you put two hands in, the light would completely change. Each light corresponded to one finger (and I'll be honest, there was a bit of difficulty in mapping these correctly, as the right hand is inverted from the left, and I wanted the system to be versatile for either orientation of the Leap and lights). If you closed your fists, no lights would be on; a light would only trigger for an extended finger (done for a variety of reasons - both for added control, and because the position of a non-extended finger is just a guess by the Leap software, meaning it was mostly inaccurate and all over the place). Then, each finger's height was mapped to the intensity of its light.
Additionally, I worked on having the sound provide spatial feedback as well, in the form of stereo panning (to start with). Each hand was playing an audio sample, which would move around the space in accordance with where the hand was. The idea was that once there were more voices, each finger would control the sound characteristics (speed, pitch, etc.) based on its offset from the hand position, whereas the panning would be controlled by the hand position on its own, meaning you would move five sounds with one hand, five with the other. So there would really only be two 'sources' of audio, but through these small per-finger changes in the sound, the illusion of many more sources of audio comes out, without actually having to deal with sounds coming from ten different (virtual, for the moment) places.
[image]
Finally, the whole system was designed to be scalable, such that I could just plug in more SunStrips, add a few lines of code, and it would work perfectly. In the videos and images below, the cable 'mess' is not really a mess. Initially, I worked with a very clean, sterile environment. But as people kept interacting with it (this was around the time of interviews and such, so I was able to use many people as guinea pigs), they did not realize what was happening or how. Basically, I was toying with the idea of the final installation being a single Leap on a plinth, with the speakers/lights against walls/corners, yet instead of hiding the cables, everything would come out from the plinth, directly indicating that this one small device was controlling everything else, and allowing people to figure out what the installation might be without actually having to interact with it, hopefully making them curious enough to come up and try it out for themselves.
[video]
[images]
An additional experiment that happened much later used the same documentation videos as above. While trying to decide what the project could become once it was clear that the composer was not going to be able to create something, I thought it was interesting to explore the actual performativity and look of the motion of hands. Personally, I have quite bony hands and long, skinny fingers, meaning that with nice lighting, the hands become very theatrical, out-of-this-world even. While I ended up staying away from this, this time around, due to a lack of time to properly explore it, I really think that a future iteration of a performative installation using the Leap would benefit largely from a second part, where a camera records the movement of the hands and distorts things.
As at the time I was switching ideas and transitioning to using Max for audio, I was scared of trying out the visuals. Instead, towards hand-in, I came back to the idea for further development and iteration of the project. I used the videos above and After Effects to play around with some ways that the performativity of the movement could be exaggerated. The first is a simple tiled/mirrored video that creates interesting patterns. The second is a video where the red channel is used for time-displacement and map-displacement effects. While I made an echo video too, I think we've all seen enough of those kinds of things.
[video]
[video]
Prototyping an enclosure for the Leap
A few of the physical issues that I had with the Leap were quite small, but ones that I wanted to address even before really having a fully designed control framework. Primarily, I was always getting annoyed by the cable. The way the Leap works, it requires the cable to come out from the left. That might not seem like a big deal, but when the device is so small and lightweight, any slightly sturdier cable keeps moving the Leap out of place, turning it upside down, etc. Additionally, I wanted to be able to have the cable come out from either side, or even go below the Leap. I decided I should mock up a small enclosure and see what works and what doesn't. The main issue, size-wise, was the actual USB cable, which increased the width of the enclosure considerably. For the next little bit, I continued prototyping in Maya, importing these models and trying to prepare them for 3D printing. I had never really printed something that had any actual utility to it, so trying to make sure the right things printed and didn't print was a good learning experience.
[images]
[video]
[images]
While the print did not go as it should have (the print preview even showed a different route it would take), I still let it finish, as I wanted to see whether my measurements and sizes were correct or not. Overall, I should have extended the side for the cable just a bit more, as it was forcing the cable to bend at a very sharp angle, which the cable wasn't designed for. Additionally, I compensated just a bit too much on the measurements of the Leap, and each side could be pulled in 0.5 mm for a much more snug fit. Finally, I went with measurements, sketches, and the prototype to the 3D making studio in the Reid to talk about potentially getting this CNC'd out of metal - not only would it be heavier (weighing it down on its platform) and more aesthetically pleasing, it would also help dissipate the heat from the Leap (running all day, it tends to get hot to the touch; nothing dangerous, but the additional heat dissipation might have helped performance just a tiny bit). Overall, I was told a few things: the way I designed it was a bit complex for what it really needed to be; I should try and make my own cable to reduce the size needed for the Leap; and that overall I had come to them at a bad time (between closures and lots of prebooked sessions for degree show work). Having explored my options, I decided that in the end this was far down on my priorities for the project, so I would put it off, and if I had any spare time towards the end, I would redesign the enclosure and a cable, mock up the box, and try to get it milled.
Den Haag + LUSTLab
[image]
Now, you might think that this is just going to be another bit of research that I undertook in the first half of the year while researching interactive installations, but you’d be wrong.
I had seen their work in the previous years, but they had kind of escaped my mind. And then, in January, I found myself in the Hague for the KABK open day. As it happens, my brother-in-law lives, studies, and works in The Hague. 
[image]
As soon as I arrived, I headed to the Mauritshuis for their permanent collection and explored the royal city, and then I had to go meet my brother-in-law (Ignas) so he could bring me to the flat and I could get some rest after a sleepless night.
[image]
I knew that he had interned at a design studio and was now working there, but I really did not put two and two together, even when I was googling how to get to LUST studios. And then, as soon as I arrived at the studios, it all clicked. Ignas showed me around and introduced me to everyone he works with (well, those who weren't running around madly in preparation for the premiere of a project that night), and then we headed off to the flat. He only came home to show me where the place was, then headed back to finish preparing, while also inviting me to come see the premiere that night. Sleepless and with a migraine, I said I'd think about it, and went for a nap. Once the evening rolled around, I started looking into the project, and realized that I could not miss it. The project was part of the Cadance modern dance festival, specifically the 4x4 - Fellowship of the Dance location project, which aims to show you The Hague by travelling across town from venue to venue, where brilliant performances are held for the festival.
The final stop of the night was going to be at LUST, and my brother-in-law was creating the visuals for it. How could I miss it?
[image]
As I neither had a ticket nor was VIP enough for the whole series, I got to LUST far before any of the others, and I got to walk around the space, see the behind-the-scenes, talk to the performers, and see how the visuals/lights/sound were installed.
The first thing that catches you off guard is the seating arrangement - a single straight line extending diagonally across the room, from one corner to the other. On both ends were two projections, and along the walls - lights.
I was uneasy both walking around the space and sitting down before anyone else had arrived. Being placed smack in the middle of the room made me feel like, if people walked in, they would think I was part of the performance, too. And sure enough, once everyone did arrive, got seated, and the performance began, you really did feel like you were part of the performance. The dancers were in conflict not only with each other, but around the audience as well. Not only did you have to keep looking left, right, behind you, and in front of you - you physically had to change which way you were facing on the stool. So you were constantly staring at projections across the room, dancers moving about, and the rest of the audience. And any time that someone would turn their head to look the other way, you would instinctually look there, too, as you felt that you were missing out on something. And so, the whole audience kept changing where they were looking, all because everyone else was doing the same. The performance really did become unique to each person, based on where they were sat and where they CHOSE to look (as opposed to being forced to look somewhere specific based on the staging, etc.).
A final thing that I loved about the piece was the use of light as visual feedback. Personally, I did not notice too much of a crucial change in the projections (sorry, Ignai, I think a more intense reaction would have been more effective on that scale), and so the technological aspect to me came with the changes in lighting. The dancers had smartphones on them that were sending sensor data back to the tech desk, and thus triggering changes in the projections, lights, and even audio (though that was a more controlled change, as the performance was in time to the music). The lights flickering, turning on and off, growing bright and dying dim - it fascinated me how this simple setup of ambient lights could influence the piece so much.  As you will see in another post, seeing light used in this way ended up heavily inspiring me.
Also, on my half-day excursion to Amsterdam, I chose Stedelijk over the coffee houses, and was rewarded with Jordan Wolfson’s Manic/Love, and Jean Tinguely’s Machine Spectacle. 
[image]
Leap basic tests
The reason Paul had asked me whether it had to be touch-based was actually a great question - I was so focused on touch, and on looking at different methods of achieving it, that I didn't even consider using some other approach to hand detection. What Paul had in mind specifically was a Leap Motion controller - a tiny little sensor (an IR blaster plus two IR cameras) that was meant to turn any screen into a touchscreen, or to add a viewer's hands into VR applications - basically, a non-touch finger/hand detection method. Very quickly, I repurposed the code I had made for the rear-DI approach, and was soon making music just by waving my hands around in the air. The first two videos show the actual interaction, but since the audio was quite faint, the third is a screen recording of how it sounds and looks (also, a bonus crash at the end!).
[video]
[video]
SOUND WARNING! Very loud, potentially annoying-pitched audio in the following video. 
[video]
After this basic experiment, I began to look into which Leap library was best for us to use. And there were a lot of them. In the end, I chose to use an old library (that theoretically doesn't support Processing 3...). And then, a bit later, I decided I should probably use a library that's somewhat more recent and can take advantage of some of the newer changes in the Leap (accuracy, range, etc.). Honestly, I could go on for hours about the different versions of the code, why I made certain changes, etc., but I'm not going to go through all of my code versions and compare them to figure that out. If I struggled with some logic or with how to go about some code things, I would quickly jot them down in my sketchbook or just on any paper lying around. Personally, I find it distracting to try and document version changes while coding, so I apologize for the lack of writing about all the work going into that. And trust me, there was a lot of it. At the end of the year, I will upload a repository of all the sketches so you can have a look for yourself (though to save space, I'll be taking out all of the images, video, and audio, so you'll have to place that in yourself). However, one of the things that I can talk about is that, in continued conversation with the composer, I knew that generative audio was not what needed to be done. So instead, I worked on a simple setup to play back audio samples, to allow for a re-composition of a music piece where each finger controlled an instrument or voice that would layer up with the others to create a full piece of music. Nothing too major to talk about here. It worked. It was quite nice, but without audio made for it, I was simply using placeholders such as different pieces of music and some audio recordings I had done of some original writing.
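There isn't much of that sample-playback setup to show, but the idea, sketched with the processing.sound library (the project may well have used a different audio library, and the file names are placeholders), is simply to keep every layer looping silently and to bring a layer up when its finger is extended.

```java
import processing.sound.*;

SoundFile[] layers = new SoundFile[5];

void setup() {
  size(200, 200);
  for (int i = 0; i < layers.length; i++) {
    layers[i] = new SoundFile(this, "layer" + i + ".wav");  // placeholder file names
    layers[i].loop();
    layers[i].amp(0);          // every layer runs in sync from the start, just silently
  }
}

// Called once per frame with the extended-finger flags from the Leap library.
void setLayers(boolean[] fingerExtended) {
  for (int i = 0; i < layers.length; i++) {
    layers[i].amp(fingerExtended[i] ? 1 : 0);   // finger up -> layer audible
  }
}

void draw() {
}
```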
And some more
[videos]
http://julienbayle.net/works/creation/struct/
[video]
More installations
http://www.lozano-hemmer.com/under_scan.php
[videos]
Touch Screen/RearDI experiments
Having tried out the location-based triggering, I realized that I was not interested in creating a framework where the work has to be made specifically for it - i.e. a series of videos that can be played one by one, all together, or anywhere in between, where the framing also matters greatly. Instead, I began to look at how one could create something dependent on a single viewer's presence, and on the way they specifically interact with the piece. I wanted to create something that could be quite easily adapted to work with already existing work, and with various forms of performance art as well. I wanted to go back to the things that I had thought about over the summer - the instrumentalization of the viewer, using technology to turn the viewer into a performer/composer/artist simply through the way they interact with the piece.
I was very interested in how such an interaction could create a bond between the viewer, the artist, and maybe the curator. What more direct way of forming and showing that connection could there be than a physical touch? I thought that the obvious choice for a technology that allows this would be a touch screen; however, I personally find both the physical feel and the look of traditional touchscreens - such as those in phones, tablets, and product displays - to be completely uninviting, cold, and impersonal. Instead, I started to explore different options for making a different kind of touch screen, and after lots of research, it seemed that the best option would be to work with rear diffused illumination. For initial testing, I worked with a simple setup - placing a webcam under a bit of frosted glass, which, through the use of NUI Group's CCV toolkit, turns the frosted glass into a touch screen. The basic idea behind it is that a finger touch blocks out light from above the glass, meaning a few well-placed lights and a webcam can turn any semi-opaque surface into a touchscreen. Sadly, with my phone having died recently, I was only able to find a single video of experimenting with this while in Lithuania, using a cheap webcam and a coffee table from my parents' living room. After simply getting the data into Processing and adding some visual feedback, I started working with audio generation, where each finger would create a random synth, with the tone and wave speed depending on the position of the finger on the glass. An example of this, using a different interaction method, will be shown in a later post.
[video]
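As a rough sketch of that audio generation (using the processing.sound library, with the mouse standing in for a tracked touch blob - the real version read blob positions from CCV): horizontal position sets the pitch, vertical position the loudness.

```java
import processing.sound.*;

SinOsc tone;

void setup() {
  size(640, 360);
  tone = new SinOsc(this);
  tone.play();
}

void draw() {
  background(0);
  tone.freq(map(mouseX, 0, width, 100, 800));  // left..right -> low..high pitch
  tone.amp(map(mouseY, 0, height, 1, 0));      // higher "touch" -> quieter
  ellipse(mouseX, mouseY, 10, 10);             // the stand-in "finger"
}
```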
The idea was that this method of interaction could easily be combined with a projection, by replacing the visible light and regular webcam with infrared and an infrared camera. If one simply started rear-projecting onto the regular setup, as shown above, the projector would negate any shadows that are necessary for the blob/finger detection - hence the infrared as a workaround. Once I began researching how to actually set everything up, I realized that quite a few things about this were actually very annoying, potentially expensive, and in general very specific to making rear DI work with a projection. Not only would I have to modify a camera and buy expensive perspex (it both needs to 'hold' the projection and also have a specific make-up to allow the infrared to properly 'bounce around' inside of it), which then needs modifications and expensive IR LEDs (which of course are difficult to find, and no one really agrees as to which are better) to be very carefully soldered, placed, and programmed around its edge; a projector would then have to be placed below everything to cover the area of the perspex; and then an enclosure would need to be built around the whole thing that both allows enough ventilation for the projector and doesn't let light in from outside, so as not to mess with the IR and the projection. While this really meant that the installation setup would now be much more difficult to repurpose, as it would be dependent on each of the specific components being the same (or just sacrificing a projector, etc. to this box and hoping that you'll put it to use again), I still wanted to try and explore it without really going all-in. I had already tried out the touch-based triggering, seen the upsides (a more pleasant touch than a phone or tablet, a nearly unlimited number of fingers one can detect at a time) and the downsides (no way to distinguish which finger is which or between left/right hands, and using a large surface-area touch, such as an arm or a foot [yes, I tested it with my feet], can simply block out too much light and make the blobs far less precise), and now just wanted to see how it felt to touch rear projections. I had obviously seen them used often in exhibitions, just never where one was supposed to touch the work.
[images]
For the first test, I grabbed the small product-lighting-box-thing-backdrop-light-diffuser (that's the technical term), grabbed a projector, and just played with how it felt. Somehow, I just wasn't a big fan of touching the projection screen. It just felt... almost creepy. Still, I kept considering this method, and was getting prepared to spend some money on actually building a working enclosure, but then Paul asked me a question: Does it have to be touch?
0 notes
lvdy4ixd-blog ¡ 8 years ago
Text
Music - Inspirations, Research, Background
What follows is not a write-up or a set of reviews of albums/artists/mixes. Instead, treat it almost as a repository of the music I listened to for this project and felt I should share, whether it was just background music or something I connected with and felt inspired by (in no specific order). 
[WARNING: MANY IMAGES. CLICK READ MORE AT OWN DISCRETION]
Format: Image / Artist - Song/Album / Stream link / Image source
Tumblr media
Colleen (aka  CÊcile Schott ) at Big Ears Festival via RBR Stream: https://www.redbullradio.com/shows/main-stage/episodes/colleen-big-ears-festival Image: http://adhoc.fm/post/colleen-humming-fields/
Tumblr media
Matana Roberts (jazz composer, clarinetist, saxophonist, sound experimentalist, multimedia artist) at Big Ears Festival via RBR Stream: https://www.redbullradio.com/shows/main-stage/episodes/matana-roberts-big-ears Image: https://www.timeout.com/newyork/blog/saxist-matana-roberts-discusses-this-weeks-michael-brown-memorial-benefit
Tumblr media
Mary Lattimore (contemporary harpist) plays a selection of music she finds influential or inspiring, via RBR Stream: https://www.redbullradio.com/shows/headphone-highlights/episodes/mary-lattimore Image: http://nyc.thedelimagazine.com/category/bands/mary-lattimore
Tumblr media
Tangerine Dream - Zeit Stream: https://www.youtube.com/watch?v=rjvF36gzLF8 Image: https://www.discogs.com/Tangerine-Dream-Zeit/master/13491
Tumblr media
Zoviet France - Shouting at the Ground Stream: https://www.youtube.com/watch?v=PiGjkg3k650 Image: https://www.discogs.com/zoviet-france-Shouting-At-The-Ground/release/106038
Tumblr media
Nocturnal Emissions - Spiritflesh Stream: https://www.youtube.com/watch?v=_a02dTvN7V0 Image: https://www.discogs.com/Nocturnal-Emissions-Spiritflesh/release/215700
Tumblr media
Beardyman - Distractions Stream: https://beardyman1.bandcamp.com/album/distractions Image: https://beardyman1.bandcamp.com/album/distractions
Tumblr media
M. Geddes Gengras’ live modular play-around for Fact TV’s ‘Against the Clock’ Stream: https://www.youtube.com/watch?v=-n3Sye0cLRI Image: http://www.factmag.com/2015/10/09/against-the-clock-m-geddes-gengras/
Tumblr media
Murcof - Cosmos Stream: https://www.youtube.com/watch?v=Dm9W83XJyGg&t=477s Image: https://www.discogs.com/Murcof-Cosmos/release/1030349
Tumblr media
Murcof & Vanessa Wagner - Statea Stream: https://www.youtube.com/watch?v=GGGfLLkIoVc&t=883s Image: https://www.discogs.com/Murcof-x-Wagner-Statea/master/1061809
Tumblr media
Amulets - Suitcase of Drone Stream: https://www.youtube.com/watch?v=MIiC0DSmLDE Image: youtube thumbnail
Tumblr media
Vlsonn - Dronology Stream: https://www.youtube.com/watch?v=V4lW7-5YA3E Image: https://aufectrecordings.bandcamp.com/album/vlsonn-dronology-ep-2012-auf010
Tumblr media
Pink Floyd - Set the Controls for the Heart of the Sun (really, the whole album, but the title song specifically) Stream: https://www.youtube.com/watch?v=OovgLbTLafw Image: https://www.discogs.com/Pink-Floyd-Set-The-Controls-For-The-Heart-Of-The-Sun/master/527597
Tumblr media
The Taj Mahal Travellers - July 15, 1972 (low-quality youtube stream...) Stream: https://www.youtube.com/watch?v=h3H4QIbcXpI Image: https://www.discogs.com/Taj-Mahal-Travellers-July-15-1972/release/3248859
Tumblr media
Jeremy D. Larson - Undertones, Beyond Toru Takemitsu - an exploration of Japanese compositions with Western influence, via RBR Stream: https://www.redbullradio.com/shows/undertones/episodes/beyond-toru-takemitsu Image: https://www.discogs.com/artist/115467-Toru-Takemitsu
Tumblr media
Lulu Rouge - Landscape of Love Stream: https://www.youtube.com/watch?v=dzb909f87wI& Image: http://www.lulurouge.com/
Tumblr media
Quintus Project - Night Flight Stream: https://www.youtube.com/watch?v=iGfBAk8UyPc Image: http://lexxmusic.com/quintus-project/
In general - all of Raster-Noton - but here are my favorites:
Tumblr media
Aoki Takamasa + Tujiko Noriko - 28 Stream: https://www.youtube.com/watch?v=Zrn1K7GKUDw Image: https://www.discogs.com/AOKI-Takamasa-Tujiko-Noriko-28/release/514574
Tumblr media
Alva Noto & Ryuichi Sakamoto - Vrioon Stream: https://www.youtube.com/watch?v=lYeP8a_Y_0A Image: https://www.discogs.com/Alva-Noto-Ryuichi-Sakamoto-Vrioon/master/9557
Tumblr media
Ryuichi Sakamoto - Async Stream: https://www.youtube.com/watch?v=kk_lK6wvAxY Image: https://www.discogs.com/Ryuichi-Sakamoto-Async/release/10092906
Tumblr media
AOKI takamasa - rhythm variation 06 Stream: https://www.youtube.com/watch?v=AovB1kid35o Image: https://www.discogs.com/AOKI-Takamasa-RV8/release/4605993
Tumblr media
Kangding ray - these are my rivers (and as a lover of pixel-sorting.. the video is incredible) Stream: https://www.youtube.com/watch?v=w-Q4qzeuV-8 Image: https://www.discogs.com/Kangding-Ray-Cory-Arcane/master/916688
Tumblr media
Kangding Ray - Amber Decay Stream: https://www.youtube.com/watch?v=LO40CaFEG5Y Image: https://www.discogs.com/Kangding-Ray-Solens-Arc/master/657222
Tumblr media
Senking - Serpent Stream: https://www.youtube.com/watch?v=XM3mvTrkVZE Image: https://www.discogs.com/Senking-Closing-Ice/release/7548341
Back to our regularly scheduled programming____
Tumblr media
Jeremy D. Larson explores Steve Reich in Undertones, via RBR Stream: https://www.redbullradio.com/shows/undertones/episodes/stevereich Image: http://www.limelightmagazine.com.au/steve-reich-master-minimalism
Tumblr media
Nobuo Uematsu - Liberi Fatali. So many memories from this. Stream: https://www.youtube.com/watch?v=k9IkmZLFkFw Image: http://2p.com/960165_1/Nobuo-Uematsu--Kenji-Ito-Join-Oceanhorn.htm
Tumblr media
Secret Frequency Crew - Ghost in the Bayou Stream: https://www.youtube.com/watch?v=xGyjuhx3Qdg Image: https://www.discogs.com/Secret-Frequency-Crew-The-Black-Moss-EP/master/23221
Tumblr media
Ludique - Nightfall Stream: https://www.youtube.com/watch?v=S0kUij5QPoA Image: https://www.discogs.com/Ludique-Ludique/master/414950
Tumblr media
Chris Clark - Slow Spines Stream: https://www.youtube.com/watch?v=RKX7Knen4BI Image: https://www.discogs.com/Chris-Clark-Empty-The-Bones-Of-You/master/75463
Tumblr media
Oh Land - numb Stream: https://www.youtube.com/watch?v=vqgqm07tk4M Image: https://www.discogs.com/Oh-Land-Fauna/master/756466
Tumblr media
Hans Zimmer - Interstellar Suite Stream: https://www.youtube.com/watch?v=LgDDRQNHNfw Image: https://www.discogs.com/Hans-Zimmer-Interstellar-Original-Motion-Picture-Soundtrack/master/761723
Tumblr media
Mr Oizo - Flat Beat Stream: https://www.youtube.com/watch?v=qmsbP13xu6k Image: https://www.discogs.com/Mr-Oizo-Flat-Beat/master/64910
Tumblr media
Aphex Twin - Selected Ambient Works 85-92 Stream: https://www.youtube.com/watch?v=Xw5AiRVqfqk Image: https://www.discogs.com/Aphex-Twin-Selected-Ambient-Works-85-92/master/565
Tumblr media
Anna von Hausswolff mix for Headphone Highlights via RBR Stream: https://www.redbullradio.com/shows/headphone-highlights/episodes/anna-von-hausswolff Image: https://www.discogs.com/artist/1724854-Anna-von-Hausswolff
Tumblr media
Hypnos Hour with Chelsea Wolfe - Extasis via RBR Stream: https://www.redbullradio.com/shows/hypnos-hour-chelsea-wolfe/episodes/extasis Image: https://www.discogs.com/artist/2078470-Chelsea-Wolfe
Tumblr media
Dorit Chrysler: Theremin Tracks mix for Headphone Highlights via RBR Stream: https://www.redbullradio.com/shows/headphone-highlights/episodes/dorit-chrysler Image: https://www.discogs.com/artist/265773-Dorit-Chrysler
Tumblr media
Daina D. mix for Minimal Mondays, via Minimal-lt Stream and Image: https://soundcloud.com/minimal-lt/daina-d-minimal-mondays
Tumblr media
Fnuf. Heard him live when, together with the rest of GSA Sports, I organized the Late Night Session at the Art School. Stream: https://soundcloud.com/fnuf Image: https://fnuf.bandcamp.com/
Tumblr media
Kaito - Inside River (Beatless Version) Stream: https://www.youtube.com/watch?v=-Qnqo-9SaDU Image: https://www.discogs.com/Kaito-Special-Love/release/101656
Tumblr media
Boards of Canada - Geogaddi Stream: https://www.youtube.com/watch?v=1FGtd3oH_PQ Image: https://www.discogs.com/Boards-Of-Canada-Geogaddi/master/2129
Tumblr media
Venetian Snares - Traditional Synthesizer Music Stream: https://www.youtube.com/watch?v=9YYzx5PJIrg Image: https://www.discogs.com/Venetian-Snares-Traditional-Synthesizer-Music/master/956897
Tumblr media
All of the Desert Sessions. Like. All of it. Stream: https://www.youtube.com/watch?v=632yZzoy9TU Image: https://www.discogs.com/Desert-Sessions-Volume-IVolume-II/master/115471
Tumblr media
Tsuneo Imahori - Permanent Vacation Stream: https://www.youtube.com/watch?v=dxadFKYwbEs Image: https://www.discogs.com/Tsuneo-Imahori-Trigun-The-First-Donuts/release/1060606
Tumblr media
Gato Barbieri and Don Cherry - Togetherness Stream: https://www.youtube.com/watch?v=SlVC0fFR8K4&list=PL33A706DA9FCC109C Image: https://www.discogs.com/Don-Cherry-Lee-Gato-Barbieri-Togetherness/master/39918
Tumblr media
Connect.Ohm - 9980 Stream: https://www.youtube.com/watch?v=mmxBoFzdqVI Image: https://www.discogs.com/ConnectOhm-9980/master/492032
Tumblr media
Coming back to Life Guitar Solo in NASA Space Chamber - title says it all. Stream: https://www.youtube.com/watch?v=vdpc6KbT7qo Image: https://www.nasa.gov/centers/glenn/multimedia/imagegallery/if_80_spf.html
Tumblr media
Ornette Coleman Double Quartet - Free Jazz (and oh my the stereo! Beautiful!) Stream: https://www.youtube.com/watch?v=8bRTFr0ytA8 Image: https://www.discogs.com/Ornette-Coleman-Double-Quartet-Free-Jazz/master/28578
Tumblr media
D Tiberio - Jerome Stream: https://www.youtube.com/watch?v=d5gxzgHtqMI Image: https://www.discogs.com/D-Tiberio-304/release/5286376
D Tiberio - Make It All (Everything) Stream: https://www.youtube.com/watch?v=kZaO5UM9ABw
Tumblr media
Hidden Orchestra - Spoken Stream: https://www.youtube.com/watch?v=UDzPfOUlcg8 Image: https://www.discogs.com/Hidden-Orchestra-Archipelago/master/474982
Hidden Orchestra - Seven Hunters Stream: https://www.youtube.com/watch?v=WBIsw5rtWjk
Tumblr media
DJ Shadow - Bergschrund (feat. Nils Frahm) Stream: https://www.youtube.com/watch?v=9_e-HYgzd7A
Tumblr media
Autechre - Amber Stream: https://www.youtube.com/watch?v=vFqFyyay87s Image: https://www.discogs.com/Autechre-Amber/master/1302
Tumblr media
Dub FX feat Mr Woodnote - Flow Stream: https://www.youtube.com/watch?v=WhBoR_tgXCI Image: https://www.discogs.com/Dub-FX-Everythinks-A-Ripple/release/1864600
Tumblr media
Mr Woodnote - Get Down Stream: https://www.youtube.com/watch?v=r8p5GB_jasc Image: https://www.discogs.com/MrWoodnote-Winter-Of-Woodshed/release/2391092
0 notes
lvdy4ixd-blog ¡ 8 years ago
Text
Location based triggering.
youtube
One of the first ways of interaction that I wanted to explore was based on the idea that without the viewer, there is nothing to see or hear (or at least that’s what it started with). One of the projects we carried out the year prior was sense&sensibility - a project focused on computer vision. My piece involved placing a camera in the space to track movement and create a map of the politics of the space (project: http://lvdagilis.com/kinetic-cartography-small-scale-geopolitics). That whole project was carried out using openTSPS, so I chose the same familiar framework as a basis for my explorations here.

The idea was to trigger audio/video only when a person is sat in front of that portion of the projection; to see the whole work, one would need a person sat in every seat. It was inspired by watching how people, myself included, behave in galleries around projected films. Some enter in the middle and just watch through until they get back to where they started; others come in and leave swiftly; others will sit through nearly two full playthroughs, because they want to have seen the whole piece from start to finish. So, combining some excruciatingly long work with this sort of trigger method would mean a group of people would really have to commit themselves, all agreeing to sit through it together from start to finish. 
From a technological perspective, there was no real necessity to use openTSPS at all; it would have been quite simple to find a person’s position in Processing using openCV. However, for initial prototyping I chose openTSPS, as it allowed me to focus more on the content side.
youtube
youtube
youtube
The first steps are seen above: creating a simple, scalable method to mask rectangular areas of the Processing sketch, based on the person.centroid x and y positions that Processing receives from openTSPS. Choosing an object-oriented approach meant I could quickly transform this basic framework later, as needed. Initial experiments began with audio files: the placeholder music was Ryuichi Sakamoto’s piano album, creating an eerie sound, with the idea of later reappropriating it for tracks of voices and instruments.
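To give a rough idea of what that framework looked like, here is a stripped-down sketch of the zone-masking logic. It is only an illustration: the Zone class and its names are mine, and the mouse stands in for a single tracked person, whereas the real version read normalized person.centroid values arriving from openTSPS.

// Stripped-down sketch of the zone-masking idea (Processing).
// In the actual prototype the normalized centroid positions came from
// openTSPS; here the mouse stands in for one tracked person so the
// sketch runs on its own.

int COLS = 5;          // one zone per seat
Zone[] zones;

void setup() {
  size(1000, 400);
  zones = new Zone[COLS];
  float w = width / (float) COLS;
  for (int i = 0; i < COLS; i++) {
    zones[i] = new Zone(i * w, 0, w, height);
  }
}

void draw() {
  background(0);
  // Normalized "centroid" of the stand-in person.
  float cx = mouseX / (float) width;
  float cy = mouseY / (float) height;
  for (Zone z : zones) {
    z.update(cx, cy);  // with openTSPS this would loop over every tracked person
    z.display();
  }
}

class Zone {
  float x, y, w, h;
  boolean occupied;

  Zone(float x, float y, float w, float h) {
    this.x = x; this.y = y; this.w = w; this.h = h;
  }

  void update(float nx, float ny) {
    // A zone counts as occupied if the normalized centroid falls inside it.
    occupied = nx * width >= x && nx * width < x + w &&
               ny * height >= y && ny * height < y + h;
  }

  void display() {
    noStroke();
    if (occupied) {
      fill(40, 120, 200);   // stand-in for the audio/video layer being revealed
    } else {
      fill(255);            // white mask hides this portion of the work
    }
    rect(x, y, w, h);
  }
}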
At this point, however, I decided to repurpose it for video installations. The idea was to trigger different parts of a multi-layer video based on where you were sitting. The problems here turned out to be quite... plentiful. To start with, even though I was using small video files, Processing struggled to play even five videos. As the videos I was thinking of making would have to stay in sync, I was simply masking the ones that shouldn’t be ‘seen’ with a white rectangle. So even with only one person there, all five videos were playing, giving me a stellar ten frames per second - subpar for video installations... Additionally, an issue arose with scaling and moving the videos: they would have to be shot specifically for this form of framing... I might not have thought it through completely.
youtube
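For reference, the structure that was causing the slowdown looked roughly like the sketch below. The file names are placeholders, but the key problem is visible in draw(): every Movie object keeps decoding every frame regardless of whether anyone is seated, because unoccupied layers are only painted over with white rectangles so that the layers stay in sync.

// Rough shape of the multi-layer video version (Processing + Video library).
// File names are placeholders. The performance problem is structural: all five
// movies decode every frame, and unoccupied layers are merely painted over
// with white rectangles so that the layers stay in sync.
import processing.video.*;

int LAYERS = 5;
Movie[] layers = new Movie[LAYERS];
boolean[] occupied = new boolean[LAYERS];  // would be driven by the openTSPS zones

void setup() {
  size(1000, 400);
  for (int i = 0; i < LAYERS; i++) {
    layers[i] = new Movie(this, "layer_" + i + ".mp4");  // placeholder clips
    layers[i].loop();
  }
}

void movieEvent(Movie m) {
  m.read();  // every movie keeps decoding, whether visible or not
}

void draw() {
  background(0);
  float w = width / (float) LAYERS;
  for (int i = 0; i < LAYERS; i++) {
    image(layers[i], i * w, 0, w, height);   // draw the layer...
    if (!occupied[i]) {
      fill(255);
      noStroke();
      rect(i * w, 0, w, height);             // ...then hide it if no one is seated there
    }
  }
}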
0 notes