This is a blog created to document my work on my final project. I am creating a performance for violin and computer system that aims to build expressive systems for performance by exploring the relationship between the performer, the system, and the audience. To do this I explore interaction between the performer and the computer through both visual and audio analysis, apply intuitive mapping strategies to the extracted features, and realise both visual and audio feedback during performance.
May
This week I spent a lot of time working on the third movement of my piece, for granulator and violin. As discussed last week, I wanted to improve it, as I felt it fell a bit short during the performance.
You can see a live version of the altered piece here
https://www.youtube.com/watch?v=G922bgK7hNk&feature=youtu.be
I decided to split the piece into its separate sections, which eliminated the audio problems I had last week. This also meant I could adapt the MIDI controller to be easier to use.
The first thing I did was adapt the controls for the patch: I assigned more foot-pedal control and altered the triggering system for the piece. I also returned to the mapping strategies in an attempt to streamline them and ensure they created the most perceptible changes possible. Something I realised from the performance was that some of the changes required an unnatural amount of movement to be reflected in the sound. To combat this I made some aspects of the system more sensitive and dampened the effects of others.
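To make this concrete, here is a rough Python sketch of the kind of rescaling I mean (not the actual Max patch; the function and parameter names are invented for illustration):

def shape_mapping(value, in_min, in_max, curve=0.5, depth=1.0):
    # Map a raw feature into 0..1, bend it with an exponent, then scale its depth.
    # curve < 1 makes the low end more sensitive (small movements matter more);
    # curve > 1 dampens the low end; depth < 1 reduces the overall effect.
    x = (value - in_min) / (in_max - in_min)
    x = min(max(x, 0.0), 1.0)                  # clamp to the calibrated range
    return (x ** curve) * depth

# e.g. a bow-speed feature made more sensitive, and a video feature damped
print(shape_mapping(0.2, 0.0, 1.0, curve=0.5, depth=1.0))   # ~0.45
print(shape_mapping(0.2, 0.0, 1.0, curve=2.0, depth=0.6))   # ~0.02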
There was not much practical work to discuss over the end of May, as I was working on my write-up and presentations. As a result I have combined May into one post, which will be my final blog post.
Week Beginning: 1st May
Now that I had completed my performance, only the write-up and the documentation remained.
I used two cameras and two audio recordings for the performance, plus I mixed in the reactive visuals I wanted to use for the work, so I spent a while editing this together.
Please follow the link to see the resulting documentation: https://youtu.be/w7pPuhuVNiU
Reflecting on the performance, I was pleased, but felt there were improvements to be made.
Implementing the entire work in one patch sacrificed a lot of material; I feel it should instead be split into three separate patches, one for each piece. This would eliminate many of the sound-quality problems you can hear in the recording, and I would get to use more of the work I have done for the project.
The bow-tracking system was not informative enough. On the night I struggled to gain control over the granulation section; looking at the video I can see that I was unintentionally standing at an angle to the laptop, which probably caused the problems. I will nevertheless adapt the interactions and perhaps alter the mappings to try to get more convincing results.
The MuBu buffer did not play back correctly, as heard in my previous examples. There was an element of risk in my approach, which was intentional: I wanted to demonstrate expression and virtuosic control, hence I did not use pre-recorded material. Because the CPU was working so hard, the input signal kept cutting out during the recording, so the buffer was filled with short, staggered audio rather than long held pizzicato, and the segmentation process cut these up into small, unmusical fragments. I think operating this in a smaller patch should eliminate these problems, but if I were to perform this again I would be tempted to use pre-recorded material for this section to remove the risk altogether.
Despite these problems I am pleased with how it went. Aesthetically the music sounded how I planned it to, it was enjoyable to perform, and I had some great feedback. Once I separate the patch into individual sections I believe many of the problems I encountered will be solved. Some of the risks were also an intentional part of the system: an attempt at demonstrating virtuosity.
"The vast majority of performances of computer music that involve new interfaces, new instruments, alternative controllers, etc. are more experimental than they are refined and virtuosic. They are generally performed by someone who has only recently encountered the instrument."
Christopher Dobrian and Daniel Koppelman, "The 'E' in NIME"
To provide a fair representation of my work I have decided to record each piece again, with altered methods for improvement.
Week Beginning: 24th April
This week was the week of the performance.
I spent a lot of my time putting all of my work together to create one coherent piece. So far I have three separate works, which I (just about) managed to combine for the performance.
Freeze piece - exploring the textural and spectral detail of plucking a string.
MuBu segmentation - several methods I had devised for segmenting and recalling live-recorded audio, which I hoped to control through both audio and visual analysis.
Granulation control - using traditional performance gestures to control playback of a live-recorded sample, aiming to demonstrate expressivity through interaction with the system.

Unfortunately, upon putting the separate pieces together I ran into quite a few problems. Some of these I was able to solve by re-patching certain parts of the software; however, I also had to compromise on some of my work, omitting it from the performance.
The main problem I encountered was that there was too much going on in the entire patch.
Video tracking and live visuals - both Jitter processes - were expensive, with multiple matrix calculations running every frame. To combat this I reduced the rate of the driving metro objects and made the Jitter matrix dimensions as small as possible.
The pfft~ objects I used for the freeze points operate at a window size of 4096 with an overlap factor of 4. Unfortunately, reducing the window size caused severe changes in the quality of the sound, so I had to use them as little as possible. Fortunately, it turns out you can treat a pfft~ object like a poly~ and send it mute messages to stop it processing.
I had to separate the video-tracking sections from the freeze sections, as the two together proved too expensive on the CPU.
Mubu Granulator UI above, mubu instrument beneath
I also had quite a lot of grief from the MuBu software: it does not like running multiple MuBu buffers and operations within the same patch. Unfortunately I had to cut a couple of my MuBu objects out of the performance and use pre-recorded material for the granular section, just to ensure the patch didn't crash or behave unexpectedly.
We spent two days inside the SIML space setting up and rehearsing for the concert. My patch was still misbehaving despite my attempts to streamline it, and I made the decision to cut the visuals from the performance. During the dress rehearsal the patch crashed halfway through, and I thought it better to remove this risk.
A link to a video of the performance: https://youtu.be/WMsFDl2itpw
Week Beginning: 17th April
This week I started looking at methods to analyse the audio input of the violin. Using the pipo~ object I was able to access many audio features easily.
These included:
Fast Fourier Transform (FFT), to see spectral content across a signal
Mel-Frequency Cepstral Coefficients (MFCCs), a perceptually weighted representation of spectral content
Frequency
Energy and loudness
Spectral shape features such as the centroid, spread, skewness and kurtosis
Beneath is a screengrab of a simple patch I made to compare the results of the different processes. The audio I have sent through to obtain these results is just a pluck on the D string.
(Please note: the basic pipo audio features are all in different ranges, so the variations in some of them are too small to see at this scale.)
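For reference, the spectral-shape features in that list can be computed straight from an FFT frame. The following is a plain NumPy sketch of the standard centroid/spread/skewness/kurtosis definitions rather than the pipo~ implementation, and the synthetic "pluck" is only an assumed stand-in for the D-string recording:

import numpy as np

def spectral_moments(frame, sr):
    # Centroid, spread, skewness and kurtosis of one magnitude spectrum, treating
    # the normalised spectrum as a probability distribution over frequency.
    mags = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    p = mags / (mags.sum() + 1e-12)
    centroid = np.sum(freqs * p)
    spread = np.sqrt(np.sum(((freqs - centroid) ** 2) * p))
    skew = np.sum(((freqs - centroid) ** 3) * p) / (spread ** 3 + 1e-12)
    kurt = np.sum(((freqs - centroid) ** 4) * p) / (spread ** 4 + 1e-12)
    return centroid, spread, skew, kurt

# a synthetic stand-in for the pluck: decaying harmonics of open D (D4, ~293.66 Hz)
sr = 44100
t = np.arange(4096) / sr
pluck = sum(np.exp(-n) * np.sin(2 * np.pi * 293.66 * n * t) for n in range(1, 9))
print(spectral_moments(pluck, sr))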
Once these features had been extracted, I had to come up with a way of recording them. I wanted to store some of these values and use them as presets in my performance to control parameters (like the panning).
To do this I used the mtr object, which, although very useful, proved quite difficult to work with. The mtr object is built to record messages over time for playback in real time; the problem with the audio features is that they arrive so fast that recording them directly just did not work. I developed an easy hack around this: store the incoming values and bang one into mtr at a slower rate.
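In pseudo-Python the hack looks like this (the class name is made up; in the patch it is simply a stored value banged by a slow metro):

import time

class SlowTap:
    # Keep only the most recent value of a fast feature stream and hand it out
    # at a slower, recorder-friendly rate - the idea behind the mtr workaround.
    def __init__(self, interval=0.05):        # emit at most every 50 ms (arbitrary)
        self.interval = interval
        self.latest = None
        self._last_emit = 0.0

    def update(self, value):                  # called at the fast analysis rate
        self.latest = value

    def poll(self):                           # called by a slow clock / metro
        now = time.time()
        if self.latest is not None and now - self._last_emit >= self.interval:
            self._last_emit = now
            return self.latest                # this is the value that would go to mtr
        return None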
I spent a lot of time finishing off my freeze composition. The piece aims to explore the spectral content of the violin: it extends the instrument by freezing points of plucking and emphasises the vast spectral content that is almost unnoticeable when the instrument is played acoustically. You can listen to it at the link below.
https://soundcloud.com/george-sullivan-672646425/freezetake2
I built a few new patches to create this work, including a tremolo module. Although quite a simple design, I felt its presence within the composition relieves the tension built up by the held freeze points. I would like to adapt it to give the performer more control over it during the live performance.
I also spent some time designing a new instrument with MuBu this week. I wanted a way of segmenting the violin input and recalling it based on spectral content that I could control through the instrument in some way.
To do this I used the MuBu chop process to separate the audio into 200-millisecond chunks, then sent weighted values for the audio features to the mubu.knn object. I could then adjust the weighting of each feature, and the knn object would output the marker indices of the 10 segments whose features were closest to the input.
Sending the marker and buffer indices to a mubu.concat~ synthesis object connected to the same MuBu buffer plays back those segments and allows some control over how they are played. Beneath is a link to an example of this system being used along with the pfft~ spectral freeze patch I have been using (although here I am triggering it with a metro object, so it creates more of a reverb-like effect). A rough sketch of the chop-and-recall idea follows the link.
https://soundcloud.com/george-sullivan-672646425/mubuknnfreeze
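As that rough outline (a NumPy sketch, not the MuBu implementation; the two descriptors and their weighting are assumed examples):

import numpy as np

def chop(signal, sr, chunk_ms=200):
    # Cut the live-recorded audio into fixed 200 ms segments, as the chop process does.
    n = int(sr * chunk_ms / 1000)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

def segment_features(seg, sr):
    # Two example descriptors per segment: spectral centroid and RMS energy.
    mags = np.abs(np.fft.rfft(seg))
    freqs = np.fft.rfftfreq(len(seg), 1.0 / sr)
    p = mags / (mags.sum() + 1e-12)
    return np.array([np.sum(freqs * p), np.sqrt(np.mean(seg ** 2))])

def nearest_segments(target, segments, sr, weights, k=10):
    # Weighted k-nearest-neighbour search over the segment features, returning the
    # indices of the k closest segments (roughly what mubu.knn outputs before the
    # indices are handed to the concatenative player).
    feats = np.array([segment_features(s, sr) for s in segments])
    dists = np.sqrt((((feats - target) ** 2) * weights).sum(axis=1))
    return np.argsort(dists)[:k]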
Week Beginning: 10th April
This week I started implementing the video-tracking systems for my performance. Previously I had created my own colour tracking in Max, obtaining position data for an object and deriving features such as velocity and direction from it. However, I found working this way limiting, as I had only built it to follow one object.
To expand on what I had previously done, I decided to look into the cv.jit library. This library is built for computer-vision techniques, and I'm hoping it will improve the accuracy of the tracking and the interactivity between performer and computer. Being able to track multiple blobs easily is the first goal, but once I have my head around it I hope to implement more methods for extracting movement and gesture from the performer.
I managed to get it working pretty quickly. Beneath is a screenshot of the basics. I am able to track blobs and number them either by position (from the top right) or by size (largest to smallest).
I went through a great tutorial on the cv.jit library, which helped me build my system for blob tracking with either motion-detection or background-subtraction techniques. Although quite a lengthy video, it goes through everything step by step and helped me get my head around the library.
The first step is to get your video feed into Max. Easy. Next, you create the video streams you will use for tracking from the source video. Beneath is my patch showing the different methods and the steps you need.
Once your streams are set up for analysis, you select which ones to use and apply the necessary thresholding. Then you can generate the blobs using the cv.jit.blobs.centroids object.
cv.jit stores the data for each blob centroid in a three-plane Jitter matrix, which allows easy extraction of the following:
 Plane 0 - X position
 Plane 1 - Y position
 Plane 2 - Size
The tutorial presents two methods for extracting the data. Using the jit.iter object you can read through each individual blob's data; this method is suitable if you are performing actions on all blobs. Alternatively, you can access individual blob data with the jit.spill object, which is useful if you are only looking for the largest blob or a specified number of blobs.
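The same x/y/size-per-blob data is easy to reproduce outside Max; this sketch uses OpenCV's connected-components analysis as an analogue of cv.jit.blobs.centroids (the function and its defaults are my own illustration, not part of either library):

import cv2

def blob_centroids(binary_frame, max_blobs=4):
    # Return (x, y, size) for each blob in a thresholded frame, largest first -
    # the same three values that cv.jit packs into its 3-plane matrix.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary_frame)
    blobs = []
    for i in range(1, n):                          # label 0 is the background
        x, y = centroids[i]
        blobs.append((x, y, stats[i, cv2.CC_STAT_AREA]))
    blobs.sort(key=lambda b: b[2], reverse=True)   # order by size, largest to smallest
    return blobs[:max_blobs]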
Once this was done I started using these methods to create performance interaction. Within my work I want to create an immersive sound environment which explores parts of the violin sound that are usually inaudible. I have created freeze effects (previously mentioned alongside Jean-François Charles' tutorials) which build up multiple layers of sound that I wish to move about the space with the performer's movement.
Beneath is an early example of a method I tested for using the video data to control the panning of one channel in an interesting way.
Expanding on this approach, I experimented with using my own movements to control the panning of four separate freeze points I had set up. Varying the way each one responded to the values created subtle and immersive textures: for example, some would pan left to right, while others would increase slightly in volume and pan in the opposite direction.
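A minimal sketch of that mapping idea (the particular curves here are invented for illustration): one normalised x-position drives four pan positions, some following the movement and some working against it.

def layer_pans(x_norm):
    # Derive stereo pan positions (0 = left, 1 = right) for four freeze layers
    # from a single normalised x-position of the performer.
    return [
        x_norm,               # follows the movement
        1.0 - x_norm,         # mirrors it
        0.5 + 0.25 * x_norm,  # drifts gently right of centre
        0.5 - 0.25 * x_norm,  # drifts gently left of centre
    ]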
With only two speakers, I want to look into different methods of varying each freeze channel's panning to create an expressive system. I am also going to attempt to get this working with four speakers.
To expand on this I also want to try to identify specific parts of the performer's body. I think this will not only add to the coherence of the performance but also let me take features more relevant to the actual instrument. For example, with the tip and frog of the bow I could easily calculate the angle of the bow and the direction of movement, as well as its position and distance from the instrument's body, and estimate how far along the bow the string is being played.
Week Beginning: 3rd April
This week I spent most of my time working on my write-up. However, I looked back through the MuBu library to see how I could expand on the work I have done and came up with some new ideas for my performance.
Using the granulation object I built a simple version that would be usable in live performance. It needs expanding, but for now it is useful for seeing what kinds of sounds I can get out of it.
After experimenting with these controls I have a rough idea of which parameters would be best controlled from the violin. Much like IRCAM's augmented violin and the MIT Media Lab's hyperviolin, I am aiming to control these using traditional performance gestures of the instrument. To do this I will map the extracted video features from last week's cv.jit work to:
Playback speed
Period
Duration
Resampling
Resampling variance
I am also going to build my own filtering methods to apply to the instrument. The integrated MuBu filter system works, but I found it temperamental: it often overloaded and stopped functioning.
I spent some time working on my compositions and developed my own panning patch to allow automation, which I have put below. It took me quite a while to figure out the maths, shame on me.
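For the record, the approach I assume here is constant-power panning (it may differ slightly from my actual patch): the pan position is mapped to an angle so the left and right gains follow cosine and sine, keeping the summed power even across the sweep.

import numpy as np

def equal_power_pan(mono, pos):
    # pos in 0..1 maps to an angle of 0..pi/2; cos/sin gains keep power constant.
    theta = pos * np.pi / 2
    return mono * np.cos(theta), mono * np.sin(theta)

# an automated sweep: the pan position moves from centre to right and back over 2 s
sr = 44100
t = np.arange(sr * 2) / sr
tone = np.sin(2 * np.pi * 440 * t)
pos = 0.5 + 0.5 * np.sin(2 * np.pi * 0.25 * t)
left, right = equal_power_pan(tone, pos)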
Week Beginning: 27th March
This week I spent a lot of time thinking about the spectral qualities of a string instrument and how I might reflect them within my performance. These features keep reappearing in my research, so I decided to build methods to make use of them.
Jean-François Charles' article "A Tutorial on Spectral Sound Processing Using Max/MSP and Jitter", published in the Computer Music Journal, was a great place to start. Charles describes his methods of performing spectral sound processing by using Jitter matrices to store the amplitude and phase values of an FFT.
I went through the accompanying patches he provides with the paper, figured out how they worked, and built my own versions. I experimented with some of the different techniques and came up with some ways I'd like to expand on them.
The screenshot above is from part of my pfft~ patches and demonstrates how Jean-François Charles uses Jitter matrices and Jitter objects to apply effects to spectral freezes (in this case a blending effect created by the jit.slide object).
From picking apart his patches I have managed to apply my own Jitter processes to the audio signal. So far I have experimented with objects such as jit.streak and jit.sprinkle; jit.xfade has also created pleasing results.
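To make the jit.slide blending concrete, here is a small NumPy sketch of the underlying recursion applied to successive FFT magnitude frames; it only approximates what the Jitter version does (which operates on whole matrices, including the phase plane), and the slide values are arbitrary:

import numpy as np

def slide_magnitudes(mag_frames, slide_up=1, slide_down=40):
    # Per-bin one-pole smoothing of successive magnitude frames, mimicking jit.slide:
    # y[n] = y[n-1] + (x[n] - y[n-1]) / slide, with separate up and down slides,
    # so rises are followed quickly while falls smear out into frozen-sounding tails.
    out = np.zeros_like(mag_frames, dtype=float)
    state = np.zeros(mag_frames.shape[1])
    for i, frame in enumerate(mag_frames):
        slide = np.where(frame > state, slide_up, slide_down)
        state = state + (frame - state) / slide
        out[i] = state
    return out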
This week I also spent a lot of time doing further research into performance with electronics. I read a number of papers, including Joel Ryan's "Effort and Expression", Paradiso's "Electronic Music Interfaces", and "Using Contemporary Technology in Live Performance: The Dilemma of the Performer", an article written by W. Andrew Schloss for the Journal of New Music Research.
I found Schloss' article particularly interesting; he draws seven conclusions which I felt were closely tied to the ways I had been considering my work:
"Cause and effect" - reinforces understanding and trust between audience and performer
"Corollary" - an overly complex system can be too vague, and a lack of understanding creates a lack of interest
"Visual component" - the gestural aspect aids understanding of the experience
"Subtlety" - intimacy helps portray emotion
"Effort" - computer systems generally make it difficult to demonstrate effort
"Improvisation" - the difference between editing how the music sounds and creating the changes in the sound
"Perform" - is the performer part of the performance? Are they clearly necessary?
Weeks Beginning: 13th & 20th March
Unfortunately I lost my laptop, and most of my work along with it, so I have spent the last two weeks rebuilding patches, finding my references and resources, and recreating the notes I lost.
I spent a little time experimenting with MuBu and uploaded a short composition using the techniques I started last week to my SoundCloud:
https://soundcloud.com/george-sullivan-672646425/laptops-lament
After reading Dan Trueman's "Reinventing the Violin" I have thought a lot more about the implications of using an electronic instrument, and it has highlighted some interesting aspects of the performance I had not considered. A couple of ideas I have a particular interest in exploring include:
- Developing new playing techniques for performance
"In comparison to any other violin, the electric violin has virtually no music and no social context. This is compounded by the fact that it is physically very different from any other violin. It is a vast reservoir of undeveloped potential, and as a result it lacks the models and traditions that are so rich for other violins." (Trueman)
- Spatialisation, and how to approach the sense of detachment caused by electronic features such as amplification.
I decided to spend next week researching methods that might let me explore spectral qualities not accessible through an acoustic instrument, with the aim of creating an instrument where both the violin and the computer are vital to the production and expression of the sound.
Week Beginning: 6th March
I decided to spend more time this week focusing on creating ideas for pieces in Max, and so spent a few days in the studio. As mentioned last week, I started looking into MuBu to see what I could come up with. I also had a meeting with Patricia and started planning my write-up.
To begin my research into MuBu, I looked through the examples folder to gain a better idea of the functionality I could get out of the package. Although not greatly documented, there are some great applications of the MuBu objects for creating interesting instrument-type patches. For example, the catoracle patch demonstrates a concatenative synthesiser which gives the user a lot of control. Personally I find this example difficult to use as an instrument for live performance, though perhaps I have not quite grasped the control methods yet.
I found the mubu-pipo examples the most interesting, particularly as at the moment I am focusing on creating instruments and compositions for my performance. Not only are there simple tutorials on how to set up imubu interfaces, but they include a few patches I found really inspiring, the mubu-shaker patch in particular.
I went through the shaker patch and rebuilt it to figure out what was going on and how it all worked.
I experimented with lots of different ideas for creating a piece using this patch, and extended it by adding a few DSP effects. I found the best results came when I chose a simple pitch set and played each note with a variety of techniques and phrasings. This gives a wide range of segments which (usually) sound great when played back together in any order.
https://soundcloud.com/george-sullivan-672646425/concatenative-sequencing
I am really pleased with this approach and already have lots of ideas for expanding on it. It was also a good way to get started with MuBu, which I plan to go into more deeply next week.
In my meeting with Patricia we spoke mainly about what to put in my write-up. We spoke about contextualising both my research and creative work, focusing on artists who have created music for violin and processing, Joel Ryan and Dan Trueman for example. We also discussed what my main research problems were and thought about similar work that has been done in these areas.
Week Beginning: 27th Feb
This week I wanted to repeat my experiments looking at the effects of common playing techniques on the frequency spectrum, aiming to achieve much more precise results. I also spent time in the studio exploring new methods of processing for live performance and made some interesting work with a granulator. I went through a few more of the Jitter Recipes and had another meeting with Patricia.
Reflecting on the experiments I did last week, I decided to refine my tests to obtain clearer results. I found a few research papers exploring the same ideas which used much more thorough techniques for comparison. "Extracting the fingering and the plucking points on a guitar string from a recording" (last accessed 13.3.17) was an interesting study of similar techniques, but on a guitar. They too used analysis in the frequency domain to extract features, and spoke about the importance of understanding the physics of the instrument.
"Plucking a string close to the bridge produces a tone which is softer in volume, brighter and sharper ... The sound is richer in higher frequency components"
I found it interesting to learn how a lot of the features I have been looking to identify are created. For example, the brightness of a note is very different depending on whether you use a finger or a plectrum to pluck the string, and even the way you pluck with each creates a noticeable difference; the non-zero width of the plucking object effectively low-pass filters the string when it is struck. My creative work has already been shifting towards contrasting the different timbres you can get out of the instrument, and I will pursue research into the physics of the instrument to gain a further understanding of what I am working with.
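The textbook ideal-string model makes this concrete (this is the standard approximation, not anything taken from the paper): plucking at a fraction of the string's length suppresses every harmonic with a node at that point and rolls the remainder off as 1/n².

import numpy as np

def pluck_spectrum(beta, n_harmonics=20):
    # Relative harmonic amplitudes of an ideal string plucked at a fraction beta
    # of its length: |a_n| ~ |sin(n * pi * beta)| / n**2. Harmonics with a node at
    # the pluck point vanish; small beta (near the bridge) keeps relatively more
    # high-harmonic energy, i.e. a brighter tone.
    n = np.arange(1, n_harmonics + 1)
    return np.abs(np.sin(n * np.pi * beta)) / n ** 2

print(pluck_spectrum(0.1)[:5])   # near the bridge: brighter
print(pluck_spectrum(0.5)[:5])   # mid-string: odd harmonics only, darker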
Another paper published in the proceedings of the 2003 International Conference on Multimedia and Expo, "Inferring control inputs to an acoustic violin from audio spectra" (last accessed 13.3.17), was very similar to my approach and aimed to examine the frequency content of different playing styles. Written by A. Krishnaswamy and J. O. Smith, it describes how they set up their experiments by creating "spectral classes" to limit the outcome possibilities: they took a set of notes on each string, each either bowed or plucked, with each action performed either next to the bridge or over the fingerboard, across the four strings. This created 208 different classes and simplified the experiment, making the results easier to compare and more reliable.
I created my own version of this to gather my own results. To do this I set up a few constraints:
Using the note E4 on the D string
Three positions on the string
Bowed or plucked
Played gently or hard
And once I had tested the possibilities with each technique:
E4 played on G string
E4 played on C string (I have a 5 string violin)
E5 played on A string
E6 played on E string
E4 on D with different amounts of vibrato
Below is a sonogram of the test which, compared with last week's tests, provides much more clarity and is therefore more reliable.
There were a few features I was impressed with here and wish to explore in my compositions.
You can clearly see the spectral roll-off changing: the sections where it reaches higher are those played either louder or closer to the bridge.
The pizzicato section in the middle shows an increase in harmonic content as the string is plucked at different positions and at different intensities.
The higher octaves show less dense spectral content, as their overtones (multiples of a higher fundamental) are spaced further apart.
I repeated the experiment with contact mics placed on the bow, as I had done last week, because I felt they gave interesting results. Since they only produce a useful signal when the bow is in contact with the violin, I took the pizzicato section out and replaced it with tremolo bowing moving down the bow.
contact mic placed at frog of bow
contact mic placed on tip of bow
I felt the results here weren't as convincing as the signal coming straight from the instrument. I suspect the noise of the bow moving across the string obscures the spectral content and makes it difficult to examine.
In the studio this week I decided to move on from the feedback-loop approach I have been using recently and use granulation to create a composition. I used Nobuyasu Sakonda's "sugarSynth" patch to see what ideas I could come up with, as although I have my own granulator instrument, it does not yet allow as much control as the sugarSynth patch.
I experimented with different audio and different techniques and came up with a piece which evolves from a fairly simple violin part into progressively more extreme granulation. The recordings were done in two parts, as I have not yet built a system to perform these ideas live, but they can be heard at the following links:
https://soundcloud.com/george-sullivan-672646425/unprocessedviolin
https://soundcloud.com/george-sullivan-672646425/granulatedv
In my meeting with Patricia this week we discussed what the next steps could be for using audio descriptors from the violin to create a performance. After discussing the functionality of objects such as yin~ and pipo~, I decided it would be a good idea to download IRCAM's MuBu package and see what was possible with it.
MuBu is a Max extension which can be used to analyse an audio buffer in real time, segment it according to audio descriptors, and index each segment. To play back these segments it provides several synthesis methods, including granular and concatenative synthesis objects. It also offers further analysis through the pipo objects, allowing easy extraction of features such as mel-frequency bands.
I completed a few more of the Jitter Recipes tutorials this week. I was interested in Recipe 7: FFT Collector, as it demonstrates a technique for turning an audio signal into a Jitter matrix. To do this you simply take an FFT of the audio signal and send the real and imaginary outputs into a jit.catch~ object with two input channels. This transforms the signal data into a matrix we can send to the jit.matrix object at the bottom of the patch.
Although quite a simple patch, I feel this could be expanded into something interesting, particularly as a lot of my work at the moment involves spectral analysis. I have not used the jit.catch~ object before and can see its potential for combining audio features with Jitter.
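The same "signal into a matrix" step is easy to mock up outside Max. This NumPy sketch collects the real and imaginary FFT outputs into a two-plane array per frame, much as jit.catch~ collects them into matrix planes (the sizes are arbitrary choices, not the recipe's settings):

import numpy as np

def stft_matrix(signal, fft_size=1024, hop=256):
    # Stack successive FFT frames of an audio signal into a 3-D array of shape
    # (frames, 2, bins), with real and imaginary parts as the two planes.
    window = np.hanning(fft_size)
    frames = []
    for start in range(0, len(signal) - fft_size, hop):
        spec = np.fft.rfft(signal[start:start + fft_size] * window)
        frames.append(np.stack([spec.real, spec.imag]))
    return np.array(frames)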
Recipe 08: VideoBlobs
This tutorial featured a great technique for generating random textures from noise that slowly morph between states. As previously mentioned in my proposal, I plan to create some sort of generative visuals for my performance, so this tutorial was very relevant. Below is a copy of this part of the patch, in which I have attempted to outline all the important steps.
Week Beginning: 20th Feb
After last week I decided I needed to seriously consider the audio signal chain for my performance. The recordings I took last week were very tinny and very quiet; I figured this was due to sending the violin's signal straight into a sound card and then into Max, and that I should experiment with different setups to address the problem. While I was in the studio I experimented with contact mics in an effort to find interesting features of the instrument. I also continued working on the feedback ideas I started last week and took a few recordings of them, and I continued with the Jitter Recipes.
I was aware of a thread on the Cycling '74 forum titled "Getting Guitar Signal into MaxMSP" (22.2.2017) which discusses different setups users have tried. A post by Pierre Alexandre Tremblay made me realise there were many ways I could arrange the signal chain using the same equipment, so I attempted some of his recommended configurations.
The best results this week came from:
Violin output -> Pre amp -> Soundcard -> Max -> Max Audio Processing -> Speakers
The resulting signal was much better than before, with much more depth to the sound of the violin as well as much more volume. The images below show the difference between the signals, and I think the results speak for themselves!
Signal without preamp
Signal with preamp added
I was only able to work with the Behringer Tube Ultragain MIC100 this week, but was really pleased with the results. I am going to try to get hold of a few different preamps to experiment with, as I know they can make a big difference to the sound. Because the electric violin I am using has no body, it has no resonance, so it is important that I use the right equipment (although this "clean" signal has its benefits, which I will discuss in the future).
I further tested the contact mics I had looked at last week, as I was interested to see what kind of signal I would get at different positions on the instrument. I recorded different bowing techniques and playing styles at each position so that I could listen back and evaluate the differences, and created sonograms of each one in an attempt to identify my own features. I have been reading about different audio-analysis methods this week, which I will talk about later, but was inspired to try to find some of the features identified in "Learning and Extraction of Violin Instrumental Controls from Audio Signal" (27.2.2017) and an IRCAM paper titled "A Large Set of Audio Features for Sound Description" (27.2.2017). Beneath is a diagram showing the mic positions I tested.
1 - Under the fingerboard. I was hoping to pick up the sounds of the fingers on the neck by placing the microphone here, which I thought could be used to create a basic proximity measurement for hand position from the amplitude of the finger noise. However, it did not really give any convincing results, which was disappointing.
2 - On the centre of the body. Although the instrument's own pickup is positioned in the bridge, I thought it might be interesting to compare the clean sound of the violin with a more DIY microphone setup. The resulting signal varied noticeably from the built-in pickup, and I feel there are definitely possibilities I could explore.
3 - Underneath the chin rest. This one I positioned on the outer wood to see what difference it made to the sound. Unsurprisingly, it had little to offer other than sounding like it had been recorded through a tin can.
4 - The frog of the bow. I thought the signal obtained from this position was quite interesting. The sound picked up from the bow changed slightly in harmonic content depending on where along the bow the string was being played and how far that point was from the mic (the picture beneath demonstrates this). Another interesting feature I noticed was that as soon as the bow was taken off the string there was instantly no sound, as the bow hairs do not resonate. Although this might seem obvious, it sounds quite strange because the sound stops so abruptly, and I can imagine this being more useful for determining when the performer changes bow or note than for analysing the live signal from the instrument.
5 - The tip of the bow. As anticipated, the results were very similar to placing the mic on the frog. The sonogram readings for this position appear cleaner, likely because the hand is not so near the mic and so its movements are not picked up. However, I am concerned about the playability of this approach, as the mic is quite weighty and made the bow difficult to play. Although pleased with the results, on evaluating each recording and sonogram I quickly realised that I would have to be much more precise to get convincing results. Next week I will redo some of the positions but play the same material for each one, so I can easily see the change in harmonic content.
I propose to play three pitches, each in different playing styles, and where possible I will play them at different positions on the fingerboard. This should give me a good spread of results from the low to the high range of the instrument, and I hope testing different positions will help me see the differences between strings. I will also take more care when the mic is positioned on the bow, in an attempt to cut out noise.
This week I spent time reading about other musicians' experiences and ideas when creating electronic music with live instruments. There was an interesting thread on Muff Wiggler, "Processing a violin" (21.2.2017), which included a link to an interesting piece, "Day 2.2", by ThreeB_. I liked this as I felt the piece was similar to the sort of sounds I had been creating in the studio, and I was inspired by some of the processing they used. They explained that to create the sounds they used a Make Noise Echophon, a pitch-shifting echo, and the Make Noise modDemix, a module capable of ring modulation, distortion and a few other effects. Other posts included ideas I would like to try, such as processing the violin's signal into square waves and using envelope following to control oscillators.
This led me to the work of "Ancient Eyeball Recipe", which can be found on their SoundCloud page. They create work by processing a live violin through modular synthesisers and produce some really interesting sounds. Although more of a noise artist, I found myself inspired by some of their pieces, particularly "Far Truculent Frenzy" and the "Insect love among the ancient libertarians" EP. At times it was quite difficult to work out where the sound in these pieces was coming from, yet the violin was often reflected in the sounds produced, and they used some interesting processing techniques to achieve this.
Now that I had started thinking about the signal chain for my performance, it was time to look into analysis methods. As previously mentioned, Wanderley and Carrillo's publication "Learning and Extraction of Violin Instrumental Controls from Audio Signal" was an insightful look into feature extraction during violin performance. Interestingly, they describe an interpretation of performance which is similar to the ideas behind my project.
"During a musical performance a performer transforms a musical idea or score into a sequence of instrumental gestures that control the instrument, which in turn, produces the sound. In this manner, the musical idea is transformed into different representation domains; the musical score, the gesture and the sound domain."
flow chart of performance from paper
They briefly discuss how it is possible to capture musical gestures through sensors and interfaces, stating that "direct measurement involves the use of usually expensive sensors with some degree of intrusivity and generally entails complex setups". I have already found this to be true when testing the contact microphones this week: the most interesting readings came from placing the mic on the bow, which made the instrument harder to play. To some extent, though, I think the claim is too strong, as I have found it possible to capture fair representations of gesture through a webcam, which is neither expensive nor obstructive to the performer.
The paper itself, however, focuses on drawing features out of the audio signal of the instrument. They calculate descriptors which are explained more fully in Geoffroy Peeters' IRCAM publication "A large set of audio features for sound description in the CUIDADO project". This is a great overview of possible readings I could take for my project, and next week I am going to attempt to implement some of these analysis techniques within Max MSP.
Diagram of how their analysis works
Once I can obtain these results through analysis, I will need to decide how to map these parameters into my work. As mentioned in my preliminary report, I wanted these mappings to be intuitive and to reflect the performer's expression clearly. However, another paper by Wanderley, Andy Hunt and Matthew Paradis, published for the 2002 NIME conference, "The importance of parameter mapping in electronic instrument design", has challenged this idea.
In this paper they discuss a number of experiments on mapping, including a previous study led by Hunt, "Radical user interfaces for real-time control". In that study Hunt devised several simple interfaces consisting of sliders and/or a computer mouse, created mapping strategies of varying complexity for each interface, and asked subjects to adjust the parameters of a sound to recreate an example given to them. Interestingly, the most complex mapping strategy produced better results than the other two on the more complex exercises, and even in the tests where it performed worse there was always an improvement in the results.
"perhaps our preconceptions about computer interfaces are wrong. Possibly for some tasks we should not expect an 'easy' interface which takes minimal learning. For many real-time interactive interfaces we need control over many parameters in a continuous fashion, with a complex mapping of input controls to internal system parameters"
January - February
Over the last weeks of January I did a lot of preparation for my project, predominantly in Max, where I started creating lots of short sketches exploring audio processing and visuals to develop through the project. I went through the Max Jitter tutorials in an attempt to get my head around Jitter's functionality. Once this was complete, I began looking through the Jitter Recipes tutorials available on the Cycling '74 website.
True to my project plan, I attempted to see what I could get out of IRCAM's Antescofo software. Designed for score following, it has been used by ensembles such as the New York and Los Angeles Philharmonics and offers an interesting method for controlling computer systems during performance. Unfortunately the score-editing program kept crashing my computer and I was unable to fix this, so I decided to abandon Antescofo and look into other methods such as the MuBu library instead.
Because my laptop was stolen before I had written this period up, I unfortunately lost all of the patches documenting these Antescofo and Jitter methods, along with their thorough commenting. I also lost all the research I had been collecting over this period, and so am unable to comment on it further.
An introduction
As outlined in my proposal, the goal of my work is to create and perform compositions for instrumentalist and computer system. To achieve this, I propose creating an interactive performance environment which is informative of both the performer and the computer's workings during the piece.
I wish to explore interactive methods between the performer and the system, and will do this through analysis of both sound and movement. I am aware that performing with live electronics often causes disorientation because of the multiple origins of sound: combining live acoustic and live electronic sources can create miscommunication between performer and audience as to what is happening. This problem may be heightened because I plan to use an electric instrument, which in turn opens up less obvious possibilities in the signal chain. In the paper "Generative Music and Laptop Performance", Nick Collins identifies this issue, saying "much of the complexity of these real time systems is lost on a potential audience" [1].
There are many ways one could visually manifest the generation of sound. Many live-coding or "algorave" artists, Alex McLean for example, project their screens for the audience to see [2]. Graham Dunning provides visual feedback by showing the source of each sound created during his "Mechanical Techno" piece [3]. Owen Pallett performs using Max MSP to create his own looping systems; he used to project his Max patches onto the stage, but more recently builds pieces up slowly so that it is clear how they work. In an interview with The Creators Project he states, "the goal with the looping show is that I want the process to be transparent to the fans ... Its important because I really want the audience to feel as if there's the thrill of creation and that they're a part of it" [4].
In classical violin performance, certain repertoire that lends itself to traditional forms can display an instrumentalist's virtuosity through the nature of the performance itself; in computer music, however, this can be difficult for the listener to gauge. I want my system to reciprocate the expressivity of the performer. I plan to research methods of analysis and implement them where appropriate within my system, in an attempt to reinforce the relationship between performer and computer.
To succeed in this, I plan to use Max MSP to: 1) extract formative features from the input audio; 2) track the performer's movement; 3) create instruments and effects and generate reactive visuals; 4) control parameters within the computer system expressively and freely.
1. Collins, Nick. "Generative Music and Laptop Performance". Contemporary Music Review 22.4 (2003): 67-79. Web.
2. "Canute Live In Jubez Karlsruhe Algorave". YouTube. N.p., 2017. Web. 18 Jan. 2017.
3. "Graham Dunning 'Mechanical Techno' Boiler Room London LIVE Set". YouTube. N.p., 2017. Web. 18 Jan. 2017.
4. Sokol, Zach. "Owen Pallett Breaks Down His New Album "In Conflict" | The Creators Project". The Creators Project. N.p., 2014. Web. 12 Jan. 2017. Available at: https://thecreatorsproject.vice.com/blog/owen-pallett-in-conflict
When approaching sound analysis in the past, I have had issues with analysing the input signal and therefore with controlling the desired output. Robert Rowe discusses the use of interactive music systems and contrasts MIDI control with newer methods of raw signal processing.
"The issue for composers of interactive media is that adopting the power and flexibility of audio based systems means exchanging a high level symbolic representation for a low level sub symbolic one" [5].
He goes on to evaluate particular issues and the techniques one might use to solve them, pointing out that it is much harder to obtain higher-level features from a raw audio signal. He talks about segmentation, which is something I have personally had problems with on the violin: because of the bowed way the instrument is played, it can be difficult to register the onset of notes by measuring the changing amplitude of the signal. Rowe explains how he has used high-frequency content to provide appropriate results, something I will certainly look into.
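As a rough sketch of the high-frequency-content idea (my own simplified detector under the common formulation, not Rowe's method): each bin's energy is weighted by its bin index so noisy attacks stand out, and an onset is flagged where the weighted sum jumps.

import numpy as np

def hfc_onsets(signal, sr, fft_size=1024, hop=256, jump=1.5):
    # High-frequency content per frame: sum_k k * |X_k|^2. Flag frames where it
    # rises well above the previous frame and above the overall average.
    window = np.hanning(fft_size)
    hfc = []
    for start in range(0, len(signal) - fft_size, hop):
        mags = np.abs(np.fft.rfft(signal[start:start + fft_size] * window))
        hfc.append(np.sum(np.arange(len(mags)) * mags ** 2))
    hfc = np.array(hfc)
    onset_frames = [i for i in range(1, len(hfc))
                    if hfc[i] > jump * hfc[i - 1] and hfc[i] > hfc.mean()]
    return [i * hop / sr for i in onset_frames]   # onset times in seconds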
Rowe also touches on timbral variation as a control signal. Timbre analysis can give a rich data stream which reflects the interest and variance of the signal and correlates with human perception. I plan to use this technique throughout my performance, as it provides data that makes sense to use for visualisations. He identifies the importance of measuring the noisiness of a signal, the RMS amplitude, the spectral centroid and the spectral flatness. There are many other features we can obtain from audio, including the Fast Fourier Transform (FFT), the mel-frequency cepstral coefficients (a redistribution of the spectrum onto a nonlinear scale that approximates human perception), and spectral flux (the change in the spectrum relative to previous values of the signal).
I briefly touched on this idea in my proposal and wish to reflect it within my work. I believe that informing the audience of how the system works will contribute to the success of the performance, and textural analysis is an important aspect of this. I plan to use textural content to provide intuitive data which can control aspects of the visuals and give coherence to the work.
Features of the performer's movement will also play an important part in my work. As the movement of the performer informs the audience of their expression and technique, it is important to be able to identify it within my computer system. There are several ways I plan to do this. 1) Colour tracking on the bow is a great way to take features from the performer: by simply placing a brightly coloured object on the bow, and placing the performer against a neutral background in neutral clothing, it is very easy to obtain accurate readings representing bowing technique. Large gestural movements can easily be differentiated from smaller ones, which lends itself to intuitive mapping of space within the performance, something I can also reflect within the sound; a rough sketch of this kind of colour tracking follows below.
5. Rowe, Robert. "Split Levels: Symbolic to Sub-Symbolic Interactive Music Systems". Contemporary Music Review 28.1 (2009): 31-42. Web. 11 Jan. 2017. Available at: http://www.tandfonline.com/doi/full/10.1080/07494460802664015?src=recsys
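As a sketch of the colour-tracking idea above, using OpenCV rather than Jitter (the marker is assumed to be green and the threshold values are made up):

import cv2
import numpy as np

def track_marker(frame, hsv_low=(40, 80, 80), hsv_high=(80, 255, 255)):
    # Threshold one webcam frame in HSV around the marker colour and return the
    # centroid of the matching pixels, or None if the marker is not visible.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]   # (x, y) centroid

# Differencing successive centroids from cv2.VideoCapture(0) frames gives a
# rough estimate of bow velocity and direction.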
There are other techniques I plan to look into, such as frame differencing, which I would like to use as controls for instruments, sounds and visuals. To start, I plan to go through the Jitter tutorials available with Max and try to create my own systems for feature extraction.
Aside from the e-violin's output signal, I plan to use microphones on the bow to capture separate data. Using a contact mic on the frog of the bow, I can get interesting readings that reflect the style of bowing. Placing a small microphone on the bow could also give me proximity readings relative to the source of the sound (the strings). However, there may not be enough acoustic level from the electric instrument to allow this, so I am considering attaching a small speaker to the body to make this approach work.
There is plenty of documentation within Max MSP on creating instruments and audio effects. Also worth noting is the website "Music DSP" [6], which hosts source code for many different audio effects. I'm particularly interested in creating my own software instruments controlled by parameters obtained from the performer, as described previously. So far my ideas include: a sample player controlled by the movement of the bow, inspired by Laurie Anderson's "tape-bow violin"; a granulator-type instrument which can create textural drones for accompaniment; and delay lines and other effects generated by Euclidean rhythms. Nic Collins' "Tobabo Fonio" [7] is also a great example of mapping conventional playing techniques to unusual DSP systems.
For the visual side of my project I have been looking at artists such as Craig Reynolds and Matt Pearson, who have a large amount of resources available online [8][9]. I would like to create separate visuals for each piece, aiming to reflect the nature of the work within each one.
Plan:
I have broken my project into five topics:
⢠Audio analysis and feature extraction ⢠Video analysis and feature extraction ⢠Generative visuals ⢠DSP instruments and effects
⢠Composition Each of these overlap techniques and will in turn impact on one another, but for now I aim
to build on each separately before conglomerating them together.
To begin with I will spend time working on the inputs of my system, audio and video. As previously mentioned, I plan to go through the Max Jitter tutorials. These should give me a pretty comprehensive guide to what is possible through Max and allow me to extract many expressive features from the performer's movement and sound. Once these methods are implemented I will need to think about calibration, to ensure that I can set up and use the full range of data every time. This will allow me to practise with the system over time, so that by May I will have complete control over the interactions I have created.

6. "Music DSP Source Code Archive". Musicdsp.org. N.p., 2017. Web. 14 Jan. 2017.
7. "Tobabo Fonio (Nicolas Collins)". YouTube. N.p., 2017. Web. 18 Jan. 2017.
8. Reynolds, Craig. "Craig W. Reynolds". Red3d.com. N.p., 2017. Web. 14 Jan. 2017.
9. Pearson, Matt. "Generative Art (Abandonedart.Org)". Abandonedart.org. N.p., 2017. Web. 14 Jan. 2017.
I also plan to experiment with IRCAM's gesture follower (gf) [10] software and their Antescofo score-following software. In the past I have struggled with these approaches because of the difficulty of recognising note attacks. Unlike in previous works I am using an e-violin, which should remove some of the problems I had before, so I hope to be able to make use of Antescofo. I will therefore need to spend some time experimenting with analysis methods and equipment fairly soon, to establish how feasible this approach is.
For the visuals, I plan to look into different generative methods and examples for the time being. I will try to create a portfolio of works throughout the term, so that when I get to April I can create visuals that reflect the compositions and tailor them further. To explore this I am currently reading "Generative Art" [11] by Matt Pearson, and I need to look further into this field to find out what is possible.
I mentioned earlier a few ideas for instruments I wish to create in Max. I plan to have these built by the end of February so that I can begin working on the compositions. This also gives me a lot of time to improve them and engineer new ideas as the term goes on.
Currently I have a few ideas I would like to explore within my compositions. Feedback has always interested me, and I would like to create a piece or an instrument which uses this technique. I like the idea of being able to give a live performance with just a mic and a speaker; this is inspired by pieces such as Steve Reich's "Pendulum Music" [12] and the ensemble piece.
Luke DuBois' work "Growing Pains" [13] is an interesting composition for guitar in which the score and visuals are created with a Lindenmayer system (L-system). When the performer plays the piece, the audio signal is checked against the score and, if perfect, generates a fern pattern within the visuals. Any mistakes cause changes to the L-system and thus create a glitched version of the fern, reflecting the relationship between performer and computer.
"If a computer were to play the piece, the fern would generate perfectly every time, but by using human performers an element of chaos is introduced into the system" [14]
10. http://imtr.ircam.fr/imtr/Gesture_Follower
11. Pearson, Matt. Generative Art. 1st ed. Shelter Island, NY: Manning, 2011. Print.
12. Reich, Steve. Pendulum Music. 1968. Composition.
13. DuBois, Luke. Growing Pains. 2003. Composition.
14. Schedel, Margaret and Alison Rootberg. "Generative Techniques in Hypermedia Performance". Contemporary Music Review 28.1 (2009): 57-73. Web.
Projected Week Plan:
January
Week Beginning : 16th Jan
Evaluate equipment
Week Beginning : 23rd Jan
Test Antescofo & GF methods; research generative techniques; test mics in EMS
February
Week Beginning : 30th Jan
Build Max instruments & audio processes; start creating generative visual portfolio
Week Beginning : 6th Feb
Max MSP Jitter tutorials; generative visual portfolio; begin 1st draft of write-up
Week Beginning : 13th Feb
Research audio analysis techniques; begin compositions 1 & 2
Week Beginning : 20th Feb
Implement and experiment with audio analysis; compositions 1 & 2
March
Week Beginning : 27th Feb
Research newly found technologies and ideas; compositions 1 & 2; finish 1st draft
Week Beginning : 6th Mar
Perform 2 compositions & take feedback; continue research into new ideas; reflect on write-up draft feedback; begin write-up
Week Beginning : 13th Mar
Compositions 3 & 4; reflect on feedback; finish visual portfolio
Week Beginning : 20th Mar
Compositions 3 & 4; add visuals to each composition
Week Beginning :27th Mar
Complete compositions 3 & 4; complete accompanying visuals
April
Week Beginning : 3rd Apr
Performance; take feedback
Week Beginning : 10th Apr
Apply feedback to write up
Week Beginning : 17th Apr
-
Week Beginning : 24th Apr
-
May
Week Beginning : 1st May
-
Week Beginning : 8th May
Submit work this week