A digital sketchbook for The Augmented Tonoscope - a practice-as-research PhD project by Lewis Sykes.
Arduino Workshop 11-11-13
One Button Challenge
What is a button? http://en.wikipedia.org/wiki/Push_switch How many buttons have you pressed today?
Arduino http://arduino.cc
Anatomy of an Arduino Uno http://arduino.cc/en/Main/ArduinoBoardUno
http://arduino.cc/en/Main/Products
Arduino software - http://arduino.cc/en/Main/Software (sketchbook folder, libraries folder)
Fritzing http://fritzing.org/download/
Button http://fritzing.org/projects/digital-input-button/
Now add an LED on pin 8.
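A minimal sketch of this step (the button wiring - digital pin 2 with a 10k pull-down resistor - is an assumption based on the Fritzing project above):

// Light the LED on pin 8 while the button is pressed.
// The LED needs a series resistor - by Ohm's Law (I = V/R), roughly
// (5V supply - 2V LED drop) / 0.02A = 150 ohms.
const int buttonPin = 2;  // assumed wiring, with a 10k pull-down resistor
const int ledPin = 8;

void setup() {
  pinMode(buttonPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  digitalWrite(ledPin, digitalRead(buttonPin));  // HIGH while pressed
}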
M-M jumper cables - or F-F JTAG cables + long headers
What is a resistor? http://en.wikipedia.org/wiki/Resistor
Ohm’s Law I = V/R
resistors - reading resistor values
Resistor Color Code Calculator colour to value http://www.digikey.com/us/en/mkt/4-band-resistors.html value to colour http://www.electronics2000.co.uk/calc/resistor-code-calculator.php
RGB LED http://fritzing.org/projects/random-rgb-led-with-button
EasyButton Library http://playground.arduino.cc//Code/EasyButton
Servos http://fritzing.org/projects/analog-input-to-servo/ http://arduino.cc/en/reference/servo
OLED display http://www.adafruit.com/products/326#Learn http://learn.adafruit.com/monochrome-oled-breakouts/wiring-1-dot-3-128x64
Making of Stravinsky Rose - Dome format
A collection of notes, research, reflection and links that show the development of a full dome format version of Stravinsky Rose for the Understanding Visual Music - UVM 13 Concert, 9 August 2013, Galileo Galilei Planetarium, Buenos Aires, Argentina.
Filming Fiona Cross

For the first outing of Stravinsky Rose at UpClose 4 Fiona Cross performed Stravinsky’s Three Pieces for Clarinet Solo live.
So with Fiona’s consent and the support of Manus Carey, then Head of Creative Programming for Manchester Camerata, I arranged a filming and recording session in early July ’13 using the cool light studio facility at MMU. Fiona played through the works several times while I filmed her performance using a Nikon D3100 digital SLR and my own Canon S100. Jaydev Mistry kindly helped with the audio recordings, using his high quality studio microphone into Logic Pro on his MacBook Pro and a couple of Rode rifle mics into Logic Pro on my MacBook Pro.
I then imported and aligned all these assets in FCP X and listened carefully through the various takes, finally settling on and editing together those performances and recordings of each piece which felt and sounded best. I explained as much in a subsequent email to Fiona:
“I’ve now made a selection from our session a couple of weeks back... There really wasn't much between your various performances... I just felt that these (they're all from the last of the three run throughs) had the edge. Do let me know you're happy for me to use them.
The audio recording is as it was recorded via Jaydev's studio microphone - currently only in mono in the video. I'll make this stereo, master it and add a little room ambience for the final version.
I decided to keep a single camera full length shot of you for this version of the film... though I might go back later and edit in some of the close ups. I'm also making it in black and white not colour.”
A high resolution version of the openFrameworks visualisation
Dome format is enormous - 4096x4096 pixels. My Early 2011 MacBook Pro 13” has a maximum built-in screen resolution of 1280x800px (though I could push it to 1080p using the not particularly stable HiDPI mode of QuickRes). So even if I could have managed to screen record the openFrameworks visualisation at a high enough quality without dropping frames... these assets still wouldn’t be close to large enough.
So with the help of Ben Lycett and his later model MacBook Pro 15” with retina screen, we attempted to capture the visualisation at higher resolution. We tried numerous approaches - outputting individual frames from openFrameworks itself and testing dedicated screen recording software such as iShowU HD and ScreenFlow. In the end, and somewhat surprisingly, we got the best results using the ‘screen record’ function of Apple’s QuickTime X.
We improved the quality further by slowing down the audio - and likewise the easing times within the visualisation sketch - and then screen recording the visualisation running at half to quarter speed, depending on the tempo of the piece. Using this technique we were able to capture the visualisations at a resolution of ~2570x1920px, without dropping frames and at an acceptable frame rate of ~20 FPS (once retimed back to ‘normal’ speed in FCP they would be closer to 40-80 FPS, so I’d actually have to drop frames to get down to the required 30 FPS).
I ended up using these slowed down audio files in the final edit of the film too - they underpin the ‘information’ sections. I liked the fact that the process of actually making the film was referenced within the film itself.
Other assets
Researching online I found program notes for Three Pieces for Clarinet Solo from the Chicago Symphony Orchestra and a student paper on interpreting the pieces, as well as a PDF of the score, photographs of Igor Stravinsky and John Whitney Sr. and other useful graphical assets including concert posters and film credits. I then referenced these to compose the info text and supporting notes and to select suitable fonts for the credits and animations within Stravinsky Rose.



FCP X and Dome format
I had some experience of FCP 7 but on advice from Ben Hudson, Technical Officer, Digital Video at MMU, switched to using FCP X. He suggested that its improved background rendering would significantly decrease the time I had to wait around during the editing process - particularly for an edit at this resolution. He also argued that splitting much of the transformation functionality out into the separate yet integrated Motion application - a change many familiar with FCP 7 disliked - was actually a very sensible approach.
FCP X doesn’t have a Dome format preset by default (though it does have 4K), but with advice and support from Ben I discovered a workaround: create a new project but choose the option to set its format based on the first video clip - which could be Dome format.
I made several tests at this resolution before I realised this wasn’t quite the way to make a half sphere film - unless you were unconcerned about the spherical distortions inherent in projecting within a dome - because there’s no way to account for them using this approach. I clearly needed to learn more about creating media for Dome format.
Andrew Hazelden’s Domemaster Photoshop Actions Pack
Further research on the Sky Skan Definiti Projection Systems used within the 20-metre-diameter dome of the Galileo Galilei Planetarium in Buenos Aires and the various commercial (and expensive) software plugins - such as DomeFX for After Effects - used to develop content for this format, eventually led me to Andrew Hazelden's Blog and his free Domemaster Photoshop Actions Pack - “a collection of custom Adobe Photoshop Actions that were designed to speed up the fulldome content creation workflow. The actions provide tools for converting images from several common panoramic formats such as angular fisheye, equirectangular, and cube map panoramas, and general utilities for fulldome production.”
Exploring these I realised I could actually edit the film in 4096x2048px format in FCP X, export the final edit as a PNG sequence at 30 FPS and then use Andrew’s Photoshop Actions to convert these PNGs to Dome format. So I did a quick test and sent the output to the technical team at UVM 13 to check I was on the right track - which they confirmed.
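The underlying rectangular-to-polar mapping is straightforward to sketch in code. Here's a nearest-neighbour C++ outline of the idea - my reconstruction, not Hazelden's actual action - assuming the top row of the 4096x2048 source maps to the dome centre and the bottom row to the rim (which matches the layout described below):

#include <cmath>
#include <cstdint>
#include <vector>

// Convert a W x H rectangular frame into an S x S dome master ('fisheye').
// Pixels outside the dome circle are left black.
void rectToDome(const std::vector<uint32_t>& src, int srcW, int srcH,
                std::vector<uint32_t>& dst, int size) {
    const float TWO_PI = 6.2831853f;
    float c = size / 2.0f;                                // dome centre
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            float dx = x - c, dy = y - c;
            float r = std::sqrt(dx * dx + dy * dy) / c;   // 0 at centre, 1 at rim
            uint32_t pixel = 0;
            if (r <= 1.0f) {
                float az = std::atan2(dy, dx);            // -pi..pi azimuth
                int u = int((az / TWO_PI + 0.5f) * (srcW - 1));
                int v = int(r * (srcH - 1));              // radius -> source row
                pixel = src[v * srcW + u];
            }
            dst[y * size + x] = pixel;
        }
    }
}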

Working with the Dome format
My initial submission to the UVM 13 call for half-sphere works had been The Cymatic Cosmos - “...using the metaphor of the orrery to express the periodic and cyclic nature of John Whitney Sr.’s ‘Rose of Grandii’ algorithm”.
Although I still hadn’t quite twigged issues of perspective and field of view for Dome format at this stage (I’d downloaded but hadn’t quite internalised Andrew Hazelden’s PowerPoint Dome Template), at least I had the conception that the half-sphere format would lend itself particularly well to rotation. I could ‘spin’ the various on-screen elements around the vertical axis of the half-sphere, moving them around and behind the audience so that they would move into and out of their field of view as the work progressed. I really liked this notion and realised that I could create an impression of the orrery by rotating the various on-screen elements at different speeds.
Template layout
So I created a template in Photoshop to help me align all the various assets accurately within FCP X. From my tests with Andrew Hazelden’s Domemaster Photoshop Actions Pack I appreciated that content at the very top of the frame would be ‘wrapped’ round by the rectangular to polar coordinate conversion.

Creating elements
So in order to have a circular animation slowly rotating at the centre of the dome I worked out that I needed to: create this animation in Motion at the correct size but on a 4096x4096 canvas; export it as a PNG sequence; use the Domemaster to PowerPoint Photoshop action to apply the necessary polar to rectangular conversion; crop the resulting 4096x2048px PNGs to a 4096x600px strip; import this image sequence into QuickTime; and save it as a movie. I then imported this as an asset into FCP X, positioned it at the very top of the frame and duplicated it as required over the length of the film.
Motion proved to be a really useful tool for creating not just the circular central animation but also the various ‘marching ants’ style dashed lines that framed and separated the various on-screen elements. Ben Hudson also used Motion to create a series of FCP X Effect templates - including one that slid the central content to the left but ‘wrapped’ it to appear back on the right - creating what looked like a smooth rotation. Good stuff.
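That wrap-around slide is simple to express in openFrameworks terms too - a sketch, assuming a hypothetical ofImage member called strip holding the 4096px-wide content, loaded in setup():

// Draw the strip twice, offset by its width, so as it slides off the left
// edge it reappears seamlessly on the right - reading as a smooth rotation
// once the frame is wrapped round the dome.
void ofApp::draw() {
    float w = strip.getWidth();
    float speed = 40.0f;                              // pixels per second
    float offset = fmodf(ofGetElapsedTimef() * speed, w);
    strip.draw(-offset, 0);                           // main copy
    strip.draw(w - offset, 0);                        // wrapped copy
}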
While the edit and rendering certainly took a considerable amount of time, Ben was right - FCP X was so much quicker, and at least I could get on with other jobs while rendering went on in the background.
Credits & FOV
Since Andrew Hazelden had been so generous with sharing his knowledge and tools via his blog I thought I’d email him for advice:
"I'm close to finishing the edit - but realise I don't really have a sense of how large the text of the credits I'm currently finalising will be in the field of view... I was hoping you may be offer a bit of advice on rule of thumb font size, word count per line, width of screen real estate etc. that would make a block of text readable on screen...”
and he responded promptly:
“Here are a few dome reference grids that should help you figure out the basic field of view... you can see the comfortable "forwards" viewing area for titles in light blue.
As a general rule you should try to keep "important" information and text in the front of the screen and wrap extra content like imagery around the frame to make the experience as immersive as possible. It is a good idea to keep text line lengths fairly short so the viewer doesn't have to pan their head to read the text."
Andrew also sent on some examples from the Fulldome Database...
Unfortunately his advice confirmed what I already expected... that I had too much text and it was well outside the preferred field of view.


DomeTester
Andrew also recommended other full dome tools:
“There are a variety of fulldome review tools that let you simulate the fulldome screen viewing experience on your home computer.
The best dome previewing tool is called DomeView. I don't own a copy of the software but I have heard good things about it from friends.
A free dome previewing tool is called DomeTester. The Mac version seems* to work okay but it has graphics card compatibility issues on Windows. *Some video codecs seem to cause the preview image to flip upside down in DomeTester [I can confirm this for MP4]. You can also load PNG images into the DomeTester program and hit the pause button to look at a single frame.”
DomeTester - a Cinder based app by Christopher Warnow and Dimitar Ruszev featured on the University of Applied Sciences, Potsdam website - was a great recommendation... I really wished I’d asked Andrew sooner.
Unfortunately it also alerted me to what I hadn’t previously appreciated about perspective in a dome and how much content is actually distorted. I’d obviously realised this to some extent - I’d accommodated the rectangular to polar conversion in the circular Stravinsky Rose logo... but I hadn’t considered how much the video of Fiona would be distorted. She ended up with too-thick thighs and a pointed head - not a good look.
But with no time to adjust the layout (I was heading off to Argentina the following day) I had to live with it...

As it turned out, the photographs and video of the screening I shot using my Canon S100 - even at its widest angle - captured only a fraction of the dome - it was enormous. So the best documentation I could produce was a screen recording from DomeTester - remapping a low resolution (512x512 pixel) version of the half sphere format onto a virtual dome.
Rendering, rendering...
With an export from FCP as an image sequence on my hard drive I started the batch conversion to Dome format using Andrew Hazelden’s Photoshop Actions. All seemed to be proceeding well enough - until I estimated, based on progress after 4 hours, how long it would take to convert the 12,899 frames of my film - a further 75 hours! I’m used to long rendering times... but I just hadn’t accounted for the scale of working with Dome format. I turned every computer I owned to the task of converting frames from FCP X to Dome format - and still had to take my Mac mini as well as my MacBook Pro out to Buenos Aires with me to complete the task once I was there. I managed it in time to present the assets in the requested format to the technicians at the planetarium on the Monday before the concert on Saturday night - and they still had to go through another lengthy process of being prepared for the Sky Skan Definiti Projection Systems.
Audio
Finally, I imported the low resolution DomeTester version of the film into Ableton Live, aligned the audio from Fiona’s recordings against the frames, sequenced the atmospherics, sound FX and reduced-speed audio from the visualisations into a soundtrack, and added FX and signal processing. It was a bit raw and underproduced, but it was the best I could manage under the circumstances and in the time scale. I exported the final mix as a WAV file and passed this to the technicians at the planetarium along with the PNG files.
Reflections
It was an undertaking to be sure... and a steep learning curve... but despite its obvious flaws I’m pleased with the end result and that I managed to produce a work that was actually screened as part of the UVM 13 concert.
It was truly amazing to be in the planetarium and see it in a format that was actually too large for my eyes, as well as watch the audience’s reaction to it. A rare and memorable experience. The work was well received and I had a fair amount of positive feedback from other contributors afterwards.
I’d certainly re-edit it for future screenings though. The visualisations should clearly be ‘centre stage’, focussed on the centre point of the dome. While this would entail a fair amount of reworking - a process similar to the one used for the circular animation slowly rotating at the centre of the dome - it’s necessary. This would also allow Fiona’s recording to be reduced in size and repeated at regular intervals around the bottom edge of the frame, so that she’d be undistorted and always in the field of view. I’d also need to rethink the amount of and approach to the on-screen text.
On advice from Ben Lycett, I moved the entire opening info section to the back of the film in a subsequent 1080p edit of Stravinsky Rose I made from these assets - and I’d do likewise for a next iteration half sphere edit too.
While there are a limited number of dome projection systems worldwide, I have come across festival calls for submissions in dome format, so I do plan to produce a next iteration version.
Refining the Cymatic Adufe
As is frequently the case with media artworks, the Cymatic Adufe as first exhibited in Porto was a prototype. Its subsequent exhibition at MUDE in Lisbon gave me the opportunity to stabilise and refine it.


For the Cymatic Adufe in Porto I’d requested a late 2009 2.5GHz Mac mini Intel Core 2 Duo - but in the end I had to make do with my own early 2006 Mac mini Intel Core Solo 1.5GHz. It ran the visualisation OK... but the problem was the audio.
Thinking it best not to send the adufe rhythm underpinning my version of Cristina singing the Senhora do Almortão through the main speaker (this was bound to jostle the polystyrene balls I was using to visualise the melody line on the surface of the adufe), I elected to hard pan Cristina’s vocals and an underpinning synth line (this is what actually caused the adufe to vibrate sufficiently to show cymatic patterns on its surface) to one channel of the stereo file and the adufe rhythm to the other. I’d then play the vocal side through the main 12” speaker and the adufe rhythm through a small Altec Lansing Orbit iM237 USB speaker embedded into the side of the plinth.
But to do this I needed to create a Multi-Output Device in OS X’s Audio MIDI Setup which grouped both the built-in output and the iM237, so I could send each channel of the stereo file to a respective output. This was easy enough during testing on my MacBook Pro... but once I got to Porto and tried to set this up on my Mac mini I discovered that the facility didn’t exist in its OS X Lion install - it didn’t actually appear until Mountain Lion. What’s more, the Mac mini was too early a model to run Mountain Lion at all.
All of this research and testing was done without an Internet connection in the venue - so I had to frequently relocate to a local cafe to try and research the issue and download installers. I struggled to find an alternative solution, eventually finding one in the AU Lab app within the Developer Tools, but not in time to configure the Mac mini before I left Porto. The upshot was that the Cymatic Adufe ran for the Private View using my MacBook Pro but was inactive for the rest of the exhibition. I did try to resolve the problem through local support - but no-one we could find felt confident enough to make the adjustments. Shame.
So I was far more insistent on sourcing an appropriate Mac for MUDE... but even though the museum could provide an iMac, it was far from ideal (it wouldn’t have fitted inside the plinth, for example) and in the end just didn’t have the necessary spec. With no other choice I bought a Late 2009 Mac mini 2.26GHz Intel Core 2 Duo and upgraded it to 8GB RAM - and it behaved perfectly, running without issue for the 3½ month duration of the exhibition. Job done.
Refining the visualisation

The patterns projected on the adufe are an example of my exploration of real-time computer animations generated from mathematically derived virtual models of various oscillating and harmonic systems - in this case the Superformula - a generic geometric transformation equation that encompasses a wide range of forms found in nature.
My starting point was Reza Ali’s Organic SuperShapes built in Processing. For the Cymatic Adufe in Porto I tweaked and adapted the sketch to create a colour class and multiple instances of the SuperShapes2D, stored these in an ArrayList and then used FFT analysis to track the melody of the Senhora do Almortão as sung by Cristina, using the amplitude of certain notes within the melody to change variables within the formula. I then used beat detection to ’snapshot’ this dynamic pattern - you might just be able to see it in light grey - and draw the ‘snapshot’ to the screen using one of the colours chosen randomly from the palette.
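For reference, a minimal C++ sketch of the Superformula radius function itself (parameter names follow Gielis’s notation; this is my outline, not Reza Ali’s code):

#include <cmath>

// Superformula: radius as a function of angle phi.
// m sets the rotational symmetry; n1-n3 control the curvature; a and b scale the axes.
float superformula(float phi, float m, float n1, float n2, float n3,
                   float a = 1.0f, float b = 1.0f) {
    float t1 = powf(fabsf(cosf(m * phi / 4.0f) / a), n2);
    float t2 = powf(fabsf(sinf(m * phi / 4.0f) / b), n3);
    return powf(t1 + t2, -1.0f / n1);
}

Sampling phi from 0 to 2π and plotting (r cos φ, r sin φ) draws the closed outline; when the radius at 0 and 2π doesn’t meet you get the ‘broken’ outlines discussed below.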
Built in Processing 1.5.1 while on site at the exhibition venue, the first iteration was a bit quick and shoddy, but essentially did what I wanted - a sound responsive version of Reza Ali’s Organic SuperShapes.
For MUDE, there was a fairly thorough overhaul of the Processing sketch, where I:
moved it into Processing 2.0b8 and version 2.0.4 of the ControlP5 library;
reworked the original code into a Superformula class;
spent a fair amount of time smoothing the FFT analysis data and mapping it to the ranges of the various variables to maximise the impact on the dynamic SuperShapes2D pattern;
extracted the harmonic relationship between successive notes and the tonic or root note - the song is in C# major - to drive the symmetry of the shape;
coded a simple ‘state machine’ so the visualisation would drop into certain ‘modes’ dependent on time and sensor input - such as play once through on the quarter hour if not triggered (sketched after this list);
added triggering via an array of PIR motion sensors;
added a simple clock face for ‘idle’ periods;
and adjusted the palette.
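A minimal C++ sketch of that ‘state machine’ logic, as flagged above (my outline with assumed trigger names, not the exhibited code):

// IDLE shows the clock face; PLAYING runs the song and visualisation once
// through, triggered by a PIR sensor or on the quarter hour.
enum Mode { IDLE, PLAYING };
Mode mode = IDLE;

void updateMode(bool sensorTriggered, int minuteOfHour, bool songFinished) {
    switch (mode) {
        case IDLE:
            // a real version would latch so each quarter hour fires only once
            if (sensorTriggered || minuteOfHour % 15 == 0) mode = PLAYING;
            break;
        case PLAYING:
            if (songFinished) mode = IDLE;
            break;
    }
}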
Many creative coders have explored the Superformula as a means to generate naturalistic 2D and 3D shapes. One of its limitations is that it can be difficult to control the ‘tidiness’ of the shape... the start and end points often misalign, resulting in ‘broken’ outlines. I actually turned the ‘dampening’ and ‘attraction’ variables in Reza Ali’s code up really high... to keep the dynamic figure as elastic and ‘bouncy’ as possible so that it changed dramatically over time... but this had the effect of lots of ‘broken’ outlines. I decided that this didn’t matter... it suited the aesthetic of the work and actually looked more like the naturalistic, hand-drawn traditional Portuguese designs and patterns I was trying to emulate.
There’s a video of this updated visualisation on Vimeo.
The Processing sketch and libraries are available here on my public Webdisk.
Sensors


As a sound based artist I’m obviously sensitive to the sonics within the exhibition space. I’m particularly alert to the ‘abrasive’ quality of sound... and how an endlessly repeating loop or motif can quickly become tiresome to the visitor (and absolutely maddening to the invigilator). So I decided to make the Cymatic Adufe at MUDE only play once every quarter of an hour or as a visitor approached it.
I bought some suitable PIR motion sensors - they had on-board adjustment for distance, trigger mode and re-trigger time. Ben Lycett helped me by soldering up a simple sensor shield and coding a test Arduino sketch. I fitted these sensors to the bottom edge of each side of the plinth and adjusted their settings and tweaked the Arduino code to optimum values in situ. In retrospect I only needed to fit two sensors - the work was sited in a corner of the gallery - and I could have saved myself some fabrication and coding time as a result.
Though very simple, the Arduino sketch is available here on my public Webdisk.
New fitting for micro projector

In Porto I’d used a Manfrotto 259B Extension For Table Tripod to attach the projector to the aluminium profile... but it just didn’t give the level of adjustment I needed to align the projected screen onto the adufe properly. So for MUDE I bought a Joby GorillaPod SLR and designed an acrylic disk style ‘holder’ to lock it firmly and attach it to the aluminium profile. By shortening the length of the legs and adjusting the flexible tripod I was able to align the projection from the Optoma ML-300 perfectly onto the surface of the adufe - with a little spill over on each side which actually looked quite good.
Strip lighting

I bought a set of 12V white LED strip lights and fitted these and their switch slightly set in from the bottom edge of the plinth. The result was a lovely glow that spilled out onto the concrete floor of the exhibition space around the base of the Cymatic Adufe. It looked particularly nice as night drew in and the general lighting level in the gallery dropped.
Anti-static
In Porto I’d used bean bag filler - small polystyrene balls - as the medium to visualise the vibration of the adufe. They were light and white (good for projection) and worked well, but they suffered terribly from static, ‘sticking’ to the inside of the acrylic box. Static build-up over time actually made things worse... they’d gradually creep up the bottom edge of the acrylic.
So for MUDE I tried to work out how to overcome this problem via a range of solutions - I bought:
an anti-static gun off eBay - the type used to de-static vinyl records. By aiming this a short distance away from each side of the bottom of the acrylic box and gently pulling its trigger it helped to dissipate the built-up static on the acrylic;
anti-static acrylic cleaner and an anti-static cloth - I only used these to clean the acrylic during the construction and setup of the work, and instructed MUDE to use these and the anti-static gun regularly to clean the acrylic (folk do like to touch it with their fingers) and so help to control the static build-up;
anti-static spray - an aerosol of special anti-static coating used in industrial settings. By pouring the bean bag filler into a plastic bag and then thoroughly coating it with this anti-static spray I was able to reduce the inherent static behaviour of the polystyrene balls considerably - in fact almost to nothing.
This seems to have worked... when the work was shipped back to me un-dismantled from Lisbon, the bean bag filler was still barely attracted to the acrylic.
Refining the audio
Cristina was unhappy about me using her rendition of the Senhora do Almortão for MUDE - I understand why; although she’d done a solid job, she’s not a professional singer. I tried, with Cristina and Paulo’s help, to find alternative a cappella versions of the song, but none were suitable. So I came up with a compromise - I’d ‘autotune’ Cristina’s original recording to give it the ‘X Factor’ treatment. Jaydev Mistry kindly helped me here, using Melodyne to adjust Cristina’s vocals to the tuning and timing of an authentic version of the melody I’d found played on a recorder. Unfortunately, with all the work I had to do to set up the Cymatic Adufe in MUDE (the acrylic boxes which took so long to make in Porto had been broken apart during dismantling and transportation and I had to re-glue them), I never managed to implement this.
Wireless
Once the front panel on the plinth is screwed in place - and in truth even before that - it’s difficult to access the Mac mini. So I added a compact, USB-powered wireless router to the setup (you can just see it at the top of the plinth in the photo with its side panel off above). This allowed me to access the Mac mini HD remotely and to ‘Screen Share’ from my MacBook Pro while I was setting it up and testing it - and this proved invaluable. I left the wireless router in the plinth just in case it was needed by a technician for maintenance during the exhibition - though it never was. Shame, I needed it at home :-(
Planning for Stravinsky Rose
A collection of notes, research, reflection and links that show some of the thinking behind and development of the Stravinsky Rose.

Background
In early 2012, MIRIAD Director John Hyatt (and also my Director of Studies) asked if I’d join him in his artist residency with the Manchester Camerata - through their UpClose season - “an eclectic series for the curious who want to experience Classic and modern chamber music in a laid-back intimate setting”. The idea was for us to develop and showcase contemporary sound-based artworks taking inspiration from the music at each event.
We were subsequently invited to contribute to the second season... and this gave me the opportunity to develop visualisations focussed specifically around my PhD research.
For UpClose 3 I developed a new work in collaboration with Ben Lycett - the Cymatic Cello. For UpClose 4 we developed two new works - Exploding John Hyatt’s Pixels and Stravinsky Rose.
There’s a post with details and documentation of the UpClose 4 event on my ‘The Augmented Tonoscope’ website.
And Manchester Camerata ‘News’ links to the residency and details of the season: Camerata & Hyatt Collaboration Second season of Camerata UpClose
Working Notes
‘Tracking’ the clarinet
Having settled on the MaxMSP ‘fiddle’ object as the most effective way to measure pitch in real-time, Ben Lycett helped me to build a MaxMSP patch that ‘tracked’ the pitch from either an audio file played back from the HD or from an audio input - and then sent this value out as a MIDI note number via UDP. For the Cymatic Cello the MaxMSP ‘fiddle’ patch was too ‘jittery’ - sending out a constantly changing range of values - and though I’ve since managed to tweak this so it now behaves much more dependably, it still behaved a little erratically for Stravinsky Rose. This was in part due to the threshold of the input from the mic - I had limited time to set up and test the configuration - and it turned out to be far too sensitive. You can see that the visualisation responds to any noise in the space in the documentation videos.
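One common de-jittering approach - not necessarily the tweak I made, but a sketch of the idea in C++ - is to accept a new note only once ‘fiddle’ has reported it several times in a row:

// Accept a new MIDI note only after it repeats STABLE_COUNT times,
// filtering out single-report jitter from the pitch tracker.
const int STABLE_COUNT = 3;
int candidate = -1, repeats = 0, currentNote = -1;

int filterNote(int incoming) {
    if (incoming == candidate) {
        if (++repeats >= STABLE_COUNT) currentNote = candidate;
    } else {
        candidate = incoming;   // start counting a new candidate
        repeats = 1;
    }
    return currentNote;         // the last note considered stable
}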
Mechanism of the Visualisation
This was a next stage development of my visualisation for The Whitney Modality - itself a port of my Processing-based The Whitney System to openFrameworks.
I routed the ‘tracked’ note value from the MaxMSP patch into the openFrameworks visualisation sketch, ’rounded’ the value to a whole MIDI number and calculated the corresponding frequency. I then compared this value to the frequency of the dominant note for each of the three works as advised by Fiona Cross (Stravinsky didn’t write these pieces in any given key and the second piece is without bar lines) to calculate a ratio. I then used fmod() to constrain this ratio to a value between 0.0 and 1.0 (more on why later), and then used a tweening addon to ‘scrub’ the rose algorithm to that ratio value along its progression. I also added ‘easing’ to the tween to make the ‘scrubbing’ from ratio to ratio less linear and more naturalistic.
There’s more on the background to this process at my Reflections on The Whitney System post.
Essentially the Whitney Rose algorithm is cyclic... it repeats... so to my mind it’s possible to conceive of it as a loop. By mapping this loop to an octave range within a musical scale, it’s possible to create a more consistent correlation between pattern and pitch. So an Eb4 - frequency 311.13Hz - when played by Fiona on the clarinet would result in a ratio of 311.13/311.13 or 1.0 (the dominant note for the first two pieces is Eb4 or 311.13Hz). If Fiona played a note an octave higher than the Eb4, an Eb5 of frequency 622.25Hz, the resulting ratio would be 622.25/311.13 or 2.0 - but here is where the fmod() function comes into play and the resulting ratio would also be 1.0. So an Eb4 and any Eb above that produces the same pattern in the Whitney Rose - and likewise for any other octave scale of notes.
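A minimal sketch of this note-to-ratio mapping (equal temperament assumed; the exact-octave case is handled explicitly, since fmod(2.0, 1.0) returns 0.0 rather than the 1.0 described above):

#include <cmath>

// MIDI note number -> frequency (A4 = MIDI 69 = 440Hz).
float midiToFreq(int note) {
    return 440.0f * powf(2.0f, (note - 69) / 12.0f);
}

// Fold the ratio against the dominant note so every octave maps to 1.0.
float foldRatio(float freq, float domFreq) {
    float ratio = freq / domFreq;                 // e.g. Eb5/Eb4 = 2.0
    float folded = fmodf(ratio, 1.0f);
    return (folded == 0.0f && ratio >= 1.0f) ? 1.0f : folded;
}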
The Vimeo video - UpClose IV - Stravinsky's Rose - Introduction - demonstrates this in action.
I appreciate this relationship doesn’t quite hold for notes below the dominant note where the resulting ratio is always below 1.0. I have yet to rationalise this.
Apart from general refinements and tweaks to The Whitney Modality code, the major addition was a preset class, allowing me to save and recall particular configurations of variables for the Whitney Rose algorithm. This allowed me to easily create a certain ‘look’ for a given piece which I could ‘load’ on a key press.
Aesthetic of the Visualisation
Despite using ‘primary’ CMYK colours in The Whitney Modality, I’d increasingly been drawn towards a reductionist aesthetic - reflected in a preference for a minimalist palette. So Stravinsky Rose uses only a greyscale tonal range. This is in part based on evidence from recent cognitive science research that there is actually no ‘natural’ connection in the non-synaesthete brain between colour and pitch - we don’t associate a particular musical tone with a specific colour or hue in any way. However there is a correlation between frequency and luminosity - the higher the musical tone the brighter the colour we associate with it, the lower the musical tone the darker the colour we associate with it. I responded to this research by connecting the alpha transparency of the dots that make up the Whitney Rose to frequency... the lower the tone of the clarinet the higher the transparency and so the darker the white dots against the black background - and the higher the pitch the lower the transparency and the brighter the white dots against the black background.
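In openFrameworks this boils down to a single mapping - a sketch, with an illustrative (assumed) frequency range for the clarinet:

// Higher pitch -> more opaque (brighter) white dots on black; lower -> darker.
float freqToAlpha(float freq) {
    return ofMap(freq, 140.0f, 1600.0f, 40.0f, 255.0f, true);  // clamped
}
// e.g. ofSetColor(255, freqToAlpha(freq)); before drawing the rose dots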
Notes on the Clarinet
Stravinsky indicates on the score that pieces 1 & 2 should preferably use a clarinet in A and piece 3 a clarinet in Bb.
If the clarinet is in Bb, then a written B sounds as an A in concert pitch. Its lowest written note is the E below middle C (sounding the D below middle C at concert pitch) and it goes up 3 octaves and a 5th to the C 3 octaves above middle C.
Reflections
Stravinsky Rose was, to my mind, my most successful work for UpClose. It demonstrated that the Whitney Rose algorithm could be used effectively to visualise a real-time music performance - and I believe the result genuinely added to rather than detracted from the experience of listening to Fiona’s performance. It also highlighted the possibility of developing a body of work based on this approach - which I aim to follow up on. Fiona Cross of Manchester Camerata seemed very open to this experiment - so I plan to work with her in the future - all being well recording her performing the work in a photographic studio so that I can use this recording for future iterations of the piece.
Links
On Vimeo:
UpClose IV - Stravinsky's Rose - Introduction
Introduction to the performance by Fiona Cross and Lewis Sykes. Performance at Manchester Camerata's UpClose IV, Tuesday 23 April 2013, The Deaf Institute, Manchester, UK - manchestercamerata.co.uk/whats-on/concerts/upclose-iv
UpClose IV - Stravinsky's Rose - Performance
Performance at Manchester Camerata's UpClose IV, Tuesday 23 April 2013, The Deaf Institute, Manchester, UK - manchestercamerata.co.uk/whats-on/concerts/upclose-iv
UpClose IV - Exploding John Hyatt's Pixels
Screened as part of UpClose 4, Deaf Institute, Manchester, UK, 26th March 2013.
Reference:
3 Pieces for Clarinet Solo (Stravinsky, Igor) [Score, PDF], http://imslp.org/wiki/3_Pieces_for_Clarinet_Solo_(Stravinsky,_Igor) [accessed 17-04-13]
Huscher, Phillip, Program Notes: Igor Stravinsky, Three Pieces for Clarinet Solo [PDF] http://cso.org/uploadedFiles/1_Tickets_and_Events/Program_Notes/ProgramNotes_Stravinsky_ThreePiecesClarinetSolo.pdf [accessed 17-04-13]
Planning for the Cymatic Cello
A collection of notes, research, reflection and links that show some of the thinking behind and development of the Cymatic Cello.
Background
In early 2012, MIRIAD Director John Hyatt (also my Director of Studies) asked if I’d join him in his artist residency with the Manchester Camerata - through their UpClose season - “an eclectic series for the curious who want to experience Classic and modern chamber music in a laid-back intimate setting”. The idea was for us to develop and showcase contemporary sound-based artworks taking inspiration from the music at each event.
We were subsequently invited to contribute to the second season... and this gave me the opportunity to develop visualisations focussed specifically around my PhD research.
For UpClose 3 I developed a new work in collaboration with Ben Lycett - the Cymatic Cello. For UpClose 4 we developed two new works Exploding John Hyatt’s Pixels and Stravinsky’s Rose.
There’s a post with details and documentation of the UpClose 3 event on my ‘The Augmented Tonoscope’ website.
And Manchester Camerata ‘News’ links to the residency and details of the season: Camerata & Hyatt Collaboration Second season of Camerata UpClose
Working Notes
Trying to visualise the vibrational modes of the back plate of a cello was something I’d wanted to try since I started thinking about my PhD.
I’d seen images of Chladni figures showing the vibrational modes of violin family shaped plates.
I’d also found that the effect was known to some luthiers who tuned their instruments according to these modes of vibration using a technique of sprinkling fine powder onto the front or back plate while playing a note of specific pitch at volume through a speaker directly below the instrument body.


Below are (supplemented) notes of early ideas for the residency - they originally featured on a somewhat underused “blog of communications, ideas and development for the UpClose artist residency” - MANCHESTER CAMERATA UPCLOSE 2012-13
2. Cymatic figures on the back plate of instruments
We have a proof of concept with John’s guitar – though it did look like the patterns were mirroring the internal support struts. Unfortunately I managed to blow my Rolen Star audio transducer in the process - Replacing my Rolen Star Audio Transducer
Have bought a 3/4 cello off eBay for £60.

Am fabricating a flexible stand so the cello can lie safely on its front with a speaker under it. I designed the stand in Illustrator, scaling it using images and sizes for cellos found via Google. I then used the Shopbot in the MMU CAM Suite to fabricate it out of 12mm ply. I sourced and bought the required bolts, washers, wing nuts, tubing and foam insulation from B&Q beforehand and factored their sizes into the design. Unfortunately, I can’t seem to find any photos of it once it was constructed.


Need to test that similar patterns emerge on the cello – and then log the discrete frequencies at which the standing wave patterns appear. They did... but it took some time to work out how. A 50 Watt speaker placed directly under the sound hole had no discernible effect. So I switched to a 100W transducer mounted directly to the front of the cello, held in place under the tension of the strings with some additional foam spacers and Museum Gel to keep the transducer plate in close contact with the front plate.


Because the back plate isn’t flat like the guitar – it may require a strobe light? It didn’t... but the fine (size not quality) reflective glass beads I used to visualise the vibrational pattern tended to collect in the groove around the edge of the back plate and spill over. Using a cosmetic brush I was able to sweep the beads back to the centre for each successive tone - but as a result some of the weaker patterns do almost look like they’ve been brushed into place. Also, unlike my drum skin tonoscope, where the vibrational pattern resolves almost immediately, it took some time for the sound to move the beads into settled patterns on the back plate of the cello. So this is not something that could be done in real-time.

Need proper lighting to photograph these patterns. I bought a pair of affordable Soft Spot studio lights (~500 Watts each) off eBay.
Then create an interactive visualisation which layers these patterns in a stack and increases/decreases opacity of the image nearest in value to the actual note of the instrument which is being played - or some other mechanism… And I did exactly this... processing the series of 26 photos I’d shot across the register of the cello - from G#2 to A5 - (actually the cello has a professional range from C2 to C6 but not every tone produced a discernibly different pattern) and compiling them within an openFrameworks sketch.

Need to find a reliable way to track the frequency of the instrument – most likely not via FFT on a computer – more likely Arduino Frequency Measurement – via a microphone or bridge mount transducer.
Need to finish off building + testing new Frequency Measurement circuit – requires an amplified signal
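For reference, a minimal zero-crossing version of the kind of Arduino frequency measurement considered here (assuming an amplified signal biased around 2.5V into A0 - an outline only, not the circuit I built):

// Count rising zero crossings on A0 over a 250ms window and print Hz.
const int MID = 512;                       // ADC midpoint for a ~2.5V bias

void setup() {
  Serial.begin(9600);
}

void loop() {
  unsigned long start = millis();
  int crossings = 0;
  bool wasBelow = analogRead(A0) < MID;
  while (millis() - start < 250) {
    bool below = analogRead(A0) < MID;
    if (wasBelow && !below) crossings++;   // rising crossing
    wasBelow = below;
  }
  Serial.println(crossings * 4);           // rising crossings per second ~= Hz
}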
Although I’d wanted to try and find more use for the Frequency Measurement device I’d built, on advice from Matthew Yee-King I settled on the MaxMSP ‘fiddle’ object as the most effective way to measure pitch in real-time. Ben Lycett helped me to build a MaxMSP patch that ‘tracked’ the pitch from either an audio file played back from the HD or from an audio input and then sent this value out as a MIDI note number via UDP. I routed this value into the openFrameworks visualisation sketch and ’rounded’ the value to a whole number to display an image closest to that note.
Seek expertise from RNCM recording technicians on how to mic up instruments – two microphones at 45 degrees. In the end I used a single Rode rifle mic on a small stand positioned close to the sound hole of the cello.
Once we settle on a reliable way to track the frequency of the instrument this technique could be extended to other instruments in the string quartet – violins and viola
If this is successful it could open a pathway to a duet with the Augmented Tonoscope – with both instruments displaying their cymatic visualisations side by side.
Reflections
Some of the work I did in photographing the cello wasn’t actually utilised for the piece - the Sarabande from J.S. Bach’s Suite for Cello no. 2 in D minor is mainly in the lower register of the instrument - never rising above the F4 of the cello’s range. I didn’t realise this at the time and started photographing tones systematically from the top rather than the bottom of the cello’s register, but became increasingly laissez-faire by the lowest notes - it was March and really cold in my studio.
While this was definitely a successful ‘proof of concept’ it was a bit ‘coarse’. The MaxMSP ‘fiddle’ patch was too ‘jittery’ - sending out a constantly changing range of values (I’ve since managed to tweak this and it’s now behaving much more dependably). Also the visualisation mechanism I settled on - placing two overlaid images on screen for the last and newly detected notes and then fading between them - didn’t work that effectively. In retrospect I should have stuck with my initial idea of a ‘stack’ of images - “create an interactive visualisation which layers these patterns in a stack and increases/decreases opacity of the image nearest in value to the actual note of the instrument which is being played”. Finally, deciding to show the whole cello, rather than just the back plate, reduced the screen real estate of the effective visualisation quite significantly. On reflection it would have been more effective and far easier to have just shown the back plate - it was extremely fiddly to realise 23 perfectly aligned cellos in Photoshop - my camera got jiggled a little during the shoot.
The piece seemed to be well received... it certainly paved the way for the subsequent Stravinsky Rose. I was also approached after the show by an audience member who turned out to be a retired Physics professor. He had enjoyed the visualisation and commented that he’d always known that the body of the cello vibrates anharmonically - and that this discordancy is what contributes to its unique timbre - but he had never actually seen it demonstrated - and now he had ;-)
Links
On Vimeo:
Cymatic Cello performance
Documentation of a performance of the Sarabande from Bach's Suite for solo cello no. 2 in D minor played by Manchester Camerata's Hannah Roberts and visualised by Lewis Sykes and Ben Lycett's Cymatic Cello.
Cymatic Cello animation
An animation of processed studio photographs for the Cymatic Cello showing the various vibrational patterns of the back plate of a 3/4 cello bought off eBay for £60.
Reference:
Tuning and range
The cello has four strings referred to by their standard tuning, which is in perfect fifth intervals: the C-string, G-string, D-string, and A-string. The A-string is tuned to the pitch A3 (which is three half-steps lower than middle C), the D-string a fifth lower at D3, the G-string a fifth below that at G2, and the C-string tuned to C2 (two octaves lower than middle C). Cellos are usually tuned to a reference pitch of A4 at 440 Hz, though tuning to 442 Hz or 444 Hz is becoming increasingly popular. Some pieces, notably the 5th of Bach's 6 Suites for Unaccompanied Cello, require an altered tuning of the strings, a technique known as scordatura.
While the lower range is constrained by the tuning of the lowest string (typically C2, two octaves below middle C), the upper range of the cello can vary according to the skill of the player. A general guideline when writing for professional cellists sets the upper limit at C6 (two octaves above middle C). Because of the enormous range of the instrument, written music for the cello frequently alternates between the bass clef, tenor clef, and treble clef.
http://www.thecellosite.com/history.html [accessed 24-03-13]
Frequencies for equal-tempered scale
This table created using A4 = 440 Hz
Speed of sound = 345 m/s = 1130 ft/s = 770 miles/hr
("Middle C" is C4 )
http://www.phy.mtu.edu/~suits/notefreqs.html [accessed 24-03-13]
Cello Suite No.2 in D minor, BWV 1008 (Bach, Johann Sebastian)
http://imslp.org/wiki/Cello_Suite_No.2_in_D_minor,_BWV_1008_(Bach,_Johann_Sebastian) [accessed 14-04-14]
MIDI numbering, the Helmholtz Pitch Notation System and other octave numbering conventions used in music and music notation
http://www.theoreticallycorrect.com/Helmholtz-Pitch-Numbering/ [accessed 15-04-14]
Critical Reflection on ORCiM Seminar 2013
I departed Ghent feeling a bit unsettled to be honest. I don't think I'd gotten sufficient 'measure' of the context and level of the ORCiM Seminar by the time of my presentation to tweak its focus and delivery to suit the audience... and in hindsight it needed it.

I just hadn’t done enough research into the Orpheus Instituut and its research agenda before I arrived, and I failed to engage with their theme of 'Traces, Faces and Places'. In truth I sort of sidelined it as not being particularly relevant to my practice. But I now see that they were just trying to engage with a theoretical research framework that could be applied to their practice of postgraduate interpretative musicianship. I know from my participation in research seminars at the RNCM that this is an area that conservatoires are wrestling with - and one I should have been more prepared for and mindful of.
My own presentation didn’t go that well either... one of the keynote speakers, Joel Ryan (musician, composer, docent at the Royal Academy of Art, Ballet Frankfurt and the Royal Conservatory of Music, The Hague), was particularly critical of my central premise - that it was possible to find a cymatic visual equivalence to the auditory intricacies of melody, harmony and rhythm. Joel argued quite vociferously that vibrating circular drum skins don’t behave harmonically - they behave anharmonically - and so it was just not possible to correlate the distinct cymatic patterns that emerge at certain frequencies on a circular drum head with the harmonic series of tones. I argued back weakly that square drum heads do behave more harmonically - and tried to cite the Cymatic Adufe as evidence of that... but Joel was right. My argument wasn’t backed by evidence... it was speculation... and ORCiM wasn’t the right context to speculate.
This criticism wasn’t lost on me however... it gave me the impetus to do what I should, in hindsight, have done far earlier in my research - conduct more detailed research into the vibrations of circular drum heads. I needed to find practical applications (e.g. the tuning of orchestral timpani) and mathematical derivations that described their various modes of vibration, to find ways to relate these more directly to my own research, and to explore creative opportunities arising from these findings - and I’ve now done that.
Replacing my Rolen Star Audio Transducer
In a March ’11 Speakers, Transducers & Resonators post on my Tumblog I wrote about the Soundpod Audio Transducer (actually a Rolen Star Audio Transducer once I’d peeled the SoundPod branding sticker off) - 20W (RMS), 8 ohm, 20-20,000Hz +/-3dB, 100mm diameter x 45mm depth, 1kg (£80 + £18 P&P + VAT) - that I’d bought to test in my experimental tonoscope designs.

I’ve since managed to overdrive and blow it… so I’ve been looking for a replacement.
Weight: 2.2 lbs
Diameter: 4“
Thickness: 1.5“
Model 801 - 8 Ohms, Model 401 - 4 Ohms
Power Handling: 90 watts RMS - 200 watts peak
Frequency Response*: 20Hz to 20kHz +/- 3dB
Water Proof - Explosion Proof - Fire Resistant
The unit comes with a mounting bolt (an SAE 10-24 “Machine Thread” with pitch of 0.1629”), insulated female quick connects and a Raychem polyswitch - liquid self-resetting power protection which is evidently “installed IN SERIES with the horn” according to an entry under ‘Reviews’ on the product page. I also bought a 5” x 1/2” resin composite mounting board.
Admittedly it took quite some time to arrange for this to be bought and delivered through Manchester Met… but it’s finally arrived. Yay!
Thesis in Progress
I thought I’d set up a private WordPress site on the miriadonline.info network to act as a ‘Thesis in Progress’. The public ‘About’ page outlines my rationale.

“The MIRIAD Online Group have always argued that the Web is a tool and medium academics just can’t ignore but that the way individual researchers use the Web should be as unique as their research.
A series of ‘Academic Blogging 101′ training workshops aimed at MIRIAD postgraduates in the Autumn ’12 term, led in the main by Hannah Allan and David Jackson, shifted focus from the essentially technical workshop series I ran in Autumn ’11 to emphasise the nature of content and planning for content.
It’s been really interesting to see how people have taken on these ideas. One response that particularly appealed was the notion of a private research blog explicitly to share progress with a Supervisory team.
This has made me think about how I might use the Web in my third ‘write up’ year. So I’ve decided to develop this WordPress site as a ‘Thesis in Progress’. To not only collate, refine and edit my existing papers, reports, critical reflections and literature review but also to channel and focus future writing towards the final thesis.
Enabling comments by my Supervisory team will mean I can post new writing and collect feedback and comments at their leisure.
It will also allow me to easily embed illustrative media content such as video and audio into the flow of the writing – something that’s far harder to achieve with a printed version.
I don’t see this as an alternative to the written thesis… but I do think it has the potential to be complementary.
This is something of an experiment. It may work… it may not. Time will tell.
For now, only this page will be publicly viewable.”

Looking for ways to restrict access just to my Supervisory team I found the WordPress Plugin Page Restrict - “This plugin will allow you to restrict all, none, or certain pages/posts to logged in users only.”
I also thought it might be useful to look for examples of WordPress Plugins that could statistically and perhaps even stylistically analyse the text I upload… and found and have installed the following.
FD Word Statistics Plugin for WordPress
“Shows readability of the post currently being edited using three different readability measurements and also includes a word and sentence count.
Readability analysis is an attempt to show how difficult a text is to read. There are several methods of doing readability analysis. The most popular methods are used here. The Flesch and Flesch-Kincaid methods use formulas based on the average number of words per sentence and the average number of syllables per word. The Gunning-Fog method uses a formula based on the average number of words per sentence and the percentage of “hard” words (words with 3 or more syllables) in the passage.”
WP Word Count
“WP Word Count is a plugin for WordPress that gives you word count statistics for your blog’s posts and pages. In addition to overall stats, WP Word Count also gives figures and details for the largest posts and pages of your blog as well as breakdowns for each of your blog’s authors.”

Free Online Word Counter
“The Free Online Word Counter (http://www.count-word.com)… is a free online tool primarily having the word count function. But, wait, there’s more. This word count tool is not merely a word or character count tool. It has another count function that the freelance writers, especially those who work with SEO or Search Engine Optimization, need most. The Free Online Word Counter is a word density count tool too. Why it is useful when the integrated word count tool found in the word processors like Microsoft Word can provide more details? Well, the answer is pretty simple. Everyone, every time may not have an access to a word processor with this word count feature. Moreover, when it is needed to determine the keyword density in a given write-up, the Free Online Word Counter can be very handy. Its amazing ability to count words, characters, specific word appearances and word density is something you would like to use for your writing projects.”
Documenting my Process… playing catch up
I had intended to be more disciplined about documenting and sharing my process - and I did make about 50 posts in the first 18 months of my PhD project. But it’s been a while since I’ve posted to this Tumblog… though it’s not for lack of recent practical research.
Perhaps it served its purpose for the early stages of my research - helping me to reflect on and plot a ‘trajectory’ for my practical experiments and to 'think through writing’. But as I’ve become more confident about my research strategy and approach and more involved in the day-to-day practice this has somehow become less important.
I have been keeping rough notes of my activities in MacJournal… though I haven’t made the effort to tidy up, edit and upload these notes here. But I should. This is still a good source of evidence to support my research findings and outcomes.
So I’m going to start posting again… but I’m going to be expedient and begin by uploading those notes which require the least effort to tidy up and edit - and those aren’t necessarily in chronological order. This means I’ll compromise this Tumblog’s sequential nature for a while… though I do intend to order posts according to their MacJournal creation date. I’ll work on catching up and filling in the gaps over the next few months.
Playing the Lambdoma matrix via a monome64
John Telfer’s Cymatic Music project and his reworked Lambdoma matrix provide, to my mind, an ideal framework for using the monome64 as a physical interface for the Augmented Tonoscope.
His 16-limit version of the Lambdoma matrix probably provides as much subtlety as I need in intervals between musical notes - being a 16x16 grid of 256 cells, it has almost 3 times as many notes as keys on a standard piano (albeit that there’s a fair degree of duplication of ratios and therefore notes in the Lambdoma matrix).
So far so good… but one issue I’ve been struggling with is how best to map the consonances and dissonances within the Lambdoma matrix - defined by Telfer through his colour scheme - onto the monome64 itself, so that I can actually see, using its decoupled LEDs, some of the harmonic structure of the matrix on the monome64. I suspect this ‘information’ will make it far easier to play.
Telfer describes the principles behind his colour selection:
“Since no natural music/colour correlation is apparent, I have adopted a system of colour-coding in this project which is purely functional but utilises certain structural features of the visible spectrum.
All ratios of unity as well as their doublings (2/1, 4/1 etc.) and halvings (1/2, 1/4 etc.) are coded white. As these ratios are perceived as the most consonant so the lightness of the colour informs us of their relative lack of tension. The movement from light to dark, from white to black through the coloured spectrum, is an attempt to reflect the increasing dissonance of ratios as the Lambdoma radiates outwards, giving a general measure of tension.
The dominant consideration when assigning colours to the Lambdoma, should be to reflect the simple number relationships between certain musical ratios. In figure 18 we saw that subtle gradations of spectral hue clouded this issue due to their indistinctness. Here, in choosing a limited palette, the more consonant ratios are more easily identified while coding ratios with terms greater than 8 as black, adds a clarity to the figure.
Within the 8 limit, the colour scheme derives from bisecting the spectrum. Thus, on one side of the central white, the overtone series 1/1, 2/1, 3/1 etc. moves through yellow and orange to red within each octave. Similarly, the reciprocal undertone series 1/1, 1/2, 1/3 etc. moves through light and dark green to blue.
The assignment of the other colours within the top quadrant is based on numerical affinity allied with aesthetic considerations . The coding of the rows with a 7 identity works well - the overtone series based on 7 starts from blue and progressively darkens to purple as the tension of dissonance increases; likewise, the undertone series based on 7 starts from red and progressively darkens to deep brown for similar reasons. Some of the other overtone and undertone series are less successful since they do not clearly indicate the relationships within each series, each diagonal row.
Whatever its failings, this colour system serves its purpose as a tool for investigating and notating the principles of Harmonicism. It may be a rather leaky boat to set sail in, but it is seaworthy, and familiarity breeds a certain fondness for its idiosyncrasy.”
Since it’s not possible to dim individual LEDs on my 2007 walnut monome64, the options seem to be either to flash individual LEDs at different speeds - at rates high enough to create a PWM-like dimming effect - or, as I’ve elected, to flash them all at the same speed but vary how long each LED is on and off within the ‘blink’ cycle: the darker the colour in the Lambdoma matrix, the less time the LED is actually on during each successive blink.
At a cycle length of about 200ms, i.e. 5 blinks a second, this creates a marked difference in brightness between LEDs that are on for 99% of the cycle and those that are on for only 1%, with discernible steps between LEDs at duty-cycle differences of about 15%. So I’m proceeding with mapping the colours of the Lambdoma matrix into a 2D array in which the mirroring of the colour scheme is visible in the values.
White - 99%
Yellow/Light Green - 85%
Orange/Dark Green - 71%
Dark Orange/Teal - 57%
Red/Blue - 43%
Light Brown/Purple - 29%
Dark Brown/Maroon - 15%
Black - 1%
which I’ve mapped into the oF sketch as a 2D array...
// 2D array to store individual LED PWM values
int PWMarray[16][16] = {
  // 1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16
  {99,99,85,99,71,85,43,99, 1,71, 1,85, 1,43, 1,99}, // 1
  {99,99,85,99,71,85,43,99, 1,71, 1,85, 1,43, 1,99}, // 2
  {85,85,99,85,57,99,29,85,85,57, 1,99, 1,29, 1,85}, // 3
  {99,99,85,99,71,85,43,99, 1,71, 1,85, 1,43, 1,99}, // 4
  {71,71,57,71,99,57,15,71, 1,99, 1,57, 1,15,85,71}, // 5
  {85,85,99,85,57,99,29,85,85,57, 1,99, 1,29, 1,85}, // 6
  {43,43,29,43,15,29,99,43, 1,15, 1,29, 1,99, 1,43}, // 7
  {99,99,85,99,71,85,43,99, 1,71, 1,85, 1,43, 1,99}, // 8
  { 1, 1,85, 1, 1,85, 1, 1,99, 1, 1,85, 1, 1,57, 1}, // 9
  {71,71,57,71,99,57,15,71, 1,99, 1,57, 1,15,85,71}, // 10
  { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,99, 1, 1, 1, 1, 1}, // 11
  {85,85,99,85,57,99,29,85,85,57, 1,99, 1,29,71,85}, // 12
  { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,99, 1, 1, 1}, // 13
  {43,43,29,43,15,29,99,43, 1,15, 1,29, 1,99, 1,43}, // 14
  { 1, 1,71, 1,85,71, 1, 1,57,85, 1,71, 1, 1,99, 1}, // 15
  {99,99,85,99,71,85,43,99, 1,71, 1,85, 1,43, 1,99}  // 16
};
In coding terms at least, this approach gives me a data set that is both programmatically concise and relatively clear to read - though I appreciate that multidimensional arrays aren’t particularly liked by C++ programmers.
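For what it’s worth, a minimal sketch of how this duty-cycle ‘blink’ might be driven from an update loop - ledSet() is a placeholder for whatever monome/OSC call is actually in use, and the offset of the 8x8 grid into the 16x16 matrix is omitted:

#include <cmath>

// Duty-cycle 'blink' for a non-dimmable LED grid. PWMarray holds
// on-time percentages (1-99) per LED, as above.
const float CYCLE_MS = 200.0f; // one full blink cycle, i.e. 5 blinks/sec

void updateLeds(float elapsedMs) {
    // position within the current blink cycle, 0.0-1.0
    float phase = fmodf(elapsedMs, CYCLE_MS) / CYCLE_MS;
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 8; x++) {
            // an LED is lit for the first PWMarray% of every cycle
            bool on = phase < (PWMarray[y][x] / 100.0f);
            ledSet(x, y, on); // placeholder for the real monome call
        }
    }
}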
1 note
Text
Planning for the Cymatic Adufe
A rough collection of notes, links and research that show some of the thinking behind the development of the Cymatic Adufe.

Colour Scheme

#0d9195 - R: 13, G: 145, B: 149
#f52427 - R: 245, G: 36, B: 39
#fdfed2 - R: 253, G: 254, B: 210
#e72a7c - R: 231, G: 42, B: 124
#1f6543 - R: 31, G: 101, B: 67
#ffffff - R: 255, G: 255, B: 255
#000000 - R: 0, G: 0, B: 0
#ec8401 - R: 236, G: 132, B: 1
#1558da - R: 21, G: 88, B: 218
#fffe5c - R: 255, G: 254, B: 92
I later added additional colours extracted from the decorative tassels on the adufe itself.
Digital Visualisation
The patterns projected on the adufe are an example of my exploration of real-time computer animations generated from mathematically derived virtual models of various oscillating and harmonic systems - in this case the Superformula, a generic geometric transformation equation that encompasses a wide range of forms found in nature. Johan Gielis’s paper on the Superformula is essential reading for the computational biologist and botanist. Its cited reference list alone reads like a who’s who of the history of mathematical and computational modelling of morphogenesis.
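For reference, the superformula defines a radius as a function of angle. A minimal sketch using the conventional a, b, m, n1-n3 parameter names (this is the published equation, not the code from my sketch):

#include <cmath>

// Gielis superformula: radius as a function of angle theta.
// a and b scale the axes; m sets the rotational symmetry; n1-n3 shape the curve.
float superformula(float theta, float a, float b, float m,
                   float n1, float n2, float n3) {
    float t1 = powf(fabsf(cosf(m * theta / 4.0f) / a), n2);
    float t2 = powf(fabsf(sinf(m * theta / 4.0f) / b), n3);
    return powf(t1 + t2, -1.0f / n1);
}

Sampling theta over [0, 2*PI) and plotting (r*cos(theta), r*sin(theta)) traces the closed 2D supershape; varying m changes its symmetry, which is the parameter the harmonic relationships described next are mapped onto.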
My starting point was Reza Ali’s Organic SuperShapes built in Processing. I tweaked and adapted the sketch to create a colour class and multiple instances of the SuperShapes2D, stored these in an ArrayList, and then used FFT analysis to track the melody of the Senhora do Almortão as sung by Cristina, using the amplitude of certain notes within the melody to change variables within the formula. I also extracted the harmonic relationship between successive notes and the tonic or root note - the song is in C# major - to drive the symmetry of the shape. I then used beat detection to ’snapshot’ the dynamic pattern - you might just be able to see this as light grey - and drew it to the screen using one of the colours chosen randomly from the palette.
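A hedged sketch of that per-note amplitude tracking - it simply reads the FFT bin nearest each target note frequency (the eight frequencies listed under ‘Music’ below); the spectrum array stands in for whatever the analysis library (Minim, in the original) returns:

// Target note frequencies (Hz) for the Senhora do Almortão, C# major.
const float NOTE_FREQS[8] = { 207.65f, 233.08f, 261.63f, 277.18f,
                              311.13f, 349.23f, 369.99f, 415.30f };

// Given an FFT magnitude spectrum of numBins bins (FFT size = 2*numBins),
// write the amplitude of each note into out by reading the nearest bin.
void noteAmplitudes(const float* spectrum, int numBins,
                    float sampleRate, float out[8]) {
    float binWidth = sampleRate / (2.0f * numBins); // Hz per bin
    for (int i = 0; i < 8; i++) {
        int bin = (int)(NOTE_FREQS[i] / binWidth + 0.5f);
        if (bin >= numBins) bin = numBins - 1;
        out[i] = spectrum[bin];
    }
}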

Built in Processing 1.5.1 while on site at the exhibition venue, this first iteration was a bit quick and shoddy (I refined it for subsequent exhibitions), but it essentially did what I wanted - a sound-responsive version of Reza Ali’s Organic SuperShapes. I now can’t get it to compile, but I’ve uploaded the Processing sketch to my public Webdisk in case anyone is interested in taking a look at this first version - the ZIP includes an OS X binary which should play fullscreen, along with Reza Ali’s SuperShape2D downloads.
A video of it is available on Vimeo here.
I explored other visualisation ideas too - such as the highly geometric tessellations of Moorish art, the influence of which is still evident in Portuguese traditional design.
Islamic Stars - OpenProcessing
Google searches for - porto portugal traditional tile patterns and islamic design
In the end I used the Minim library functionality for the FFT and Beat Detection, but a Google search on “extracting melody from audio file site:processing.org” also suggested the following possibilities:
R2D2 Processing Pitch - Extracting pitch from a recording or live input… a human voice pitch detector in Processing based on Minim audio library - https://github.com/Notnasiul/R2D2-Processing-Pitch
ESS library
Music
By using the ‘Convert Melody to New MIDI Track’ functionality in Ableton Live on the melody line from Cristina’s recorded vocals for the Senhora Do Almortão, I determined that the song uses the following notes:
Verse
C#4 - 277.18
D#4 - 311.13
F4 - 349.23
F#4 - 369.99
G#4 - 415.30
Chorus
G#3 - 207.65
A#3 - 233.08
C4 - 261.63
C#4 - 277.18
D#4 - 311.13
F4 - 349.23
G#4 - 415.30
which gives 8 distinct notes to track via FFT:
G#3 - 207.65
A#3 - 233.08
C4 - 261.63
C#4 - 277.18
D#4 - 311.13
F4 - 349.23
F#4 - 369.99
G#4 - 415.30
http://www.phy.mtu.edu/~suits/notefreqs.html
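All of these values are standard equal-tempered frequencies, derivable from A4 = 440 Hz. A quick sketch of the relationship (a hypothetical helper, not code from the project):

#include <cmath>

// Equal-tempered frequency for MIDI note n, with A4 = note 69 = 440 Hz.
// e.g. n = 61 (C#4) gives ~277.18 Hz and n = 56 (G#3) gives ~207.65 Hz,
// matching the table above.
float noteFreq(int midiNote) {
    return 440.0f * powf(2.0f, (midiNote - 69) / 12.0f);
}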
According to iHarmony it’s in the key of
C# Major
or possibly (but unlikely)
Natural Minor Ionian (minor 3rd mode)
Looking to explore less contemporary and familiar musical forms through my PhD research I’d been hoping that this traditional Portuguese folk melody might have been a little more exotic... but hey... C# Major is fine.
Lyrics to Senhora Do Almortão (Our Lady of Almurtão):
Senhora, senhora do Almortão Senhora do Almortão Ó minha linda raiana Virai costas a Castela Não queirais ser castelhana Senhora, senhora do Almortão Senhora do Almortão A vossa capela cheira Cheira a cravos cheira a rosas Cheira a flor de laranjeira Senhora, senhora do Almortão
Senhora do Almortão Eu pró ano não prometo Que me morreu o amor Ando vestida de preto
Technical Equipment + Material Suppliers
Alpine MRP-M500 - http://www.testfreaks.co.uk/car-amplifiers/alpine-mrp-m500/
Best 15”speaker out there? - http://www.talkbass.com/forum/f15/best-15-speaker-out-there-886892/
Eminence Kappalite 3015LF - http://www.bluearan.co.uk/index.php?id=EMIKLIT3015LF
Best HD LED Pico Projector for a small room? - http://www.engadget.com/2012/01/28/ask-engadget-best-hd-led-pico-projector-for-a-small-room/
Optoma ML500 Accessories - http://www.optoma.co.uk/projectordetails.aspx?PTypedb=Business&PC=ML500
Optoma mini WiFi dongle (SP.8JQ02GC01) - http://skinflint.co.uk/738360
Manfrotto 259B Extension For Table Tripod - Black - http://www.amazon.co.uk/Manfrotto-259B-Extension-Table-Tripod/dp/B001A1Q0AM
Hafele Screw-in Sleeve, M6 Internal Thread, Hexagonal Socket He - http://www.google.co.uk/products/catalog?q=m6+threaded+sockets&oe=utf-8&rls=org.mozilla:en-US:official&client=firefox-a&um=1&ie=UTF-8&tbm=shop&cid=6699505957384822928&sa=X&ei=aSoaUPeHC-mq0AXP74Fw&ved=0CF8Q8wIwAw
30mm Dia Standoff Bracket Grade 316 Satin Polished with Flat Back - http://www.smuksolutions.com/Glass_Clamps#loadoutput
Natural cork sheets, ground - http://translate.google.co.uk/translate?sl=auto&tl=en&js=n&prev=_t&hl=en&ie=UTF-8&layout=2&eotf=1&u=http%3A%2F%2Fwww.modulor.de%2Fshop%2Foxid.php%2Fsid%2F5711e916c6ee246ec8ba74382af1229d%2Fcl%2Fdetails%2Fcnid%2FCPK%2Fanid%2FCPKC
Cork Bark Wall Tiles - http://www.corkstore.com/Products/Cork-Bark-Wall-Tiles
DECORATIVE WOOD VENEERS - DESIGNER PANELS - http://ultraveneer.com/products.php?scatid=11
1 note
Text
Seeking the Critical Path...
I’m going to draw a line in the sand with my current device and software development… and establish new priorities that will enable me to fast-track my research so I can realise performative work and screenings of short films within the next 4-6 months.
This will mean leaving some development of the current iterations of the Sine Wave Generator and other components in my set-up unresolved… but so be it.
If I’m going to pursue an agenda focussing on the development of a scale and an interface - in ways suggested by Mick Grierson (below) - then I need to reconsider priorities and work out practical and expedient solutions to realise them.
So I’ve been reflecting on where I’m up to… and where I need to be…
Hands-on control...
Despite my best efforts to develop an integrated system of ‘hands-on’ operational modes for the Sine Wave Generator and my other modules using:
hardware inputs - a touchscreen, keypad, Softpot rotary and linear ‘touch’ sensors, rotary encoder, rotary pots and toggle switches;
automatic control - implementation of Andy Brown’s Arduino ‘Easing’ library to ‘ease’ between frequencies;
memories - up to 10 frequencies stored in volatile and EEPROM memory;
outputs and buses - 2 x mono and a mixable stereo audio, an I2C data and 5V power bus;
the SWG is still quite clunky and coarse as a ‘hands-on’ instrument. Mick Grierson also commented as much.
Modular design...
While the paradigm of the ‘modular synth’ seemed to suit a certain stage of its development, and aspects of this approach may persist into the next phase, the comparative naturalness, ergonomics, fluidity and musicality of control via OSC - both through an initial TouchOSC layout on my iPad 2 and, more significantly, through my implementation of John Telfer’s ‘Lambdoma matrix’ via the physical interface of my monome64 - clearly shows where future efforts should lie.
Mapping Consonances/Dissonances…
I still need to refine aspects of this implementation… using ofxUI sliders to move the monome64’s ‘physical’ 8x8 grid over the virtual ‘Lambdoma matrix’ is crude. It would also be really useful to somehow display the relative consonance/dissonance between ratios - as mapped through Telfer’s colour scheme of the ‘Lambdoma matrix’ - on the monome64 itself, perhaps by flashing individual button LEDs at different rates to produce a perceived ‘brightness/dimness’ for consonant ratios.
Aesthetics...
I’d like to update my monome64 with a monome64 grayscale… from walnut case, aluminium faceplate, white silicon buttons and orange LEDs to black rubber and steel case, aluminium faceplate, white silicon buttons and white LEDs which seems to suit my current aesthetic much better.
Phase interference…
Somewhat as an aside, I’ve had issues with phase offsets in the mixed stereo output of the SWG: the output from the two AD9835s destructively interferes, affecting the timbre and reducing the volume considerably. I think this is because the Arduino sends the selected frequency/ies to each of the AD9835s in turn, and there must be some small delay in the CPU cycle between these commands, resulting in a phase offset. I’ve thought about trying to counteract this by delaying the command to the second AD9835 by the time of a complete cycle of the waveform at the selected frequency… but I haven’t implemented it and I’m not sure it’ll work anyway… the 32-bit frequency word is sent to the IC by the Arduino in eight 4-bit chunks, so programming in a dependable delay that accommodates this under all conditions of use and demands on the Arduino’s microprocessor seems unlikely.
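As a rough sanity check on why even a tiny write delay matters: two equal sine waves offset in phase by phi sum to an amplitude of 2*|cos(phi/2)|, and a delay of deltaT seconds at frequency f gives phi = 2*PI*f*deltaT. A back-of-envelope sketch - the 0.5ms figure in the comment is purely illustrative, not a measurement of the SWG:

#include <cmath>

// Amplitude of the sum of two unit-amplitude sine waves at frequency
// f (Hz) when the second is delayed by deltaT seconds.
const float TWO_PI_F = 6.2831853f;

float summedAmplitude(float f, float deltaT) {
    float phi = TWO_PI_F * f * deltaT;      // phase offset in radians
    return 2.0f * fabsf(cosf(phi / 2.0f)); // 2.0 when perfectly in phase
}

// e.g. an illustrative 0.5ms gap between writes at 880 Hz gives
// phi ~= 2.76 rad and a summed amplitude of ~0.37 against an in-phase
// maximum of 2.0 - a very audible drop in volume.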
Amplitude to Resonance Mapping…
The SWG currently outputs pure unmodulated and unmodified sine waves, resulting in a percussive, plucked type attack whenever a note is played with no control over the amplitude/volume of successive notes nor any potential to shape the sound using an amplitude envelope.
This has resulted in an uneven response to frequency from my current analogue tonoscope device - an adapted 13” piccolo snare drum. The amplitude of the resonance of the drum skin varies considerably across those frequencies that induce a standing modal wave pattern on its surface, and I’m constantly having to ‘ride’ the volume slider on the relevant channel of the iRig Mix mixer as I change frequency using the monome64, the TouchOSC layout or the hardware controls on the SWG itself.
So I think I need to implement two types of amplitude modulation on the sine wave. First, a global mapping of amplitude to output frequency, to accommodate the differences in the amplitude of the drum skin’s resonance - so that I can stop ‘riding’ the volume slider on the relevant channel of the iRig Mix mixer. If I can do this systematically it should also provide some interesting data showing how the drum skin’s amplitude changes in response to frequency. Second, an ADSR-type envelope that will allow me to shape the sound - most significantly with longer attacks (making the sound softer at the front), which should not only help avoid some of the more violent responses of the drum skin (which end up scattering glass beads everywhere) but also make the sine wave sound more voice-like and so more musical.
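A hedged sketch of what the first, global mapping might look like - a table of measured (frequency, gain) pairs for the drum skin, linearly interpolated so that every output frequency gets a compensating gain. The table values here are placeholders, not measurements:

#include <vector>
#include <utility>

// Placeholder (frequency Hz, compensating gain) pairs, ordered by frequency.
std::vector<std::pair<float, float>> gainMap = {
    {50.0f, 0.4f}, {120.0f, 0.9f}, {240.0f, 0.6f}, {480.0f, 1.0f}
};

// Linearly interpolate a compensating gain for any output frequency.
float compensatingGain(float freq) {
    if (freq <= gainMap.front().first) return gainMap.front().second;
    for (size_t i = 1; i < gainMap.size(); i++) {
        if (freq <= gainMap[i].first) {
            float f0 = gainMap[i - 1].first,  f1 = gainMap[i].first;
            float g0 = gainMap[i - 1].second, g1 = gainMap[i].second;
            return g0 + (g1 - g0) * (freq - f0) / (f1 - f0);
        }
    }
    return gainMap.back().second;
}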
I had planned to try and do the envelope shaping by making an Envelope Module using an AD5206 digital pot… and I’d already got some way into developing this via Gian Pablo Villamil’s ‘Making Sound with the Arduino’ NYC Resistor class notes and example sketches, as well as a node-based ADSR envelope generator in Processing that could have provided wavetable data sets and perhaps real-time control… but the Ronin Synth should allow me to programme this far more straightforwardly.
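And for completeness, a minimal linear ADSR of the kind such an Envelope Module might implement - a sketch only, assuming all four parameters are positive:

// Minimal linear ADSR envelope. Returns a gain of 0.0-1.0 for a time t
// (seconds) since note-on; releaseAt is the note-off time, or negative
// while the note is still held. attack/decay/release are durations in
// seconds, sustain is a level between 0.0 and 1.0.
float adsr(float t, float releaseAt,
           float attack, float decay, float sustain, float release) {
    // level while the note is held: attack ramp, decay ramp, then sustain
    auto heldLevel = [&](float tt) {
        if (tt < attack) return tt / attack;
        if (tt < attack + decay)
            return 1.0f - (1.0f - sustain) * ((tt - attack) / decay);
        return sustain;
    };
    if (releaseAt < 0.0f || t < releaseAt) return heldLevel(t);
    // after note-off: linear fade from the level at release down to silence
    float r = (t - releaseAt) / release;
    return r >= 1.0f ? 0.0f : heldLevel(releaseAt) * (1.0f - r);
}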
Visualisations...
I think I’ll pursue the idea I’ve had to try and visualise the modal wave patterns on the surface of the drum skins using the principle of moire patterns or perhaps even laser interferometry. This might allow me to discard the somewhat messy glass beads or other physical medium.
But What I Really Need is a Sequencer...
While addressing these issues may well help ’smooth’ the performance of the snare drum tonoscope, make the audio output sound more ‘musical’ and make my live playing of it more fluid and natural, what I really need to explore - in order to realise my ambition to make a true audiovisual instrument - is how I can make audiovisual compositions with it. For this I need a sequencer of some kind… one that can work simultaneously with JI scales, Bezier curve shaping between notes and ADSR amplitude envelopes.
I know of no commercially available software that will enable me to do this - though it may be worth investigating whether some existing solution exists - so it seems that I now have to focus on the design and build of a custom sequencer that can do the job. I essentially outlined this approach in my ‘nodal sequencing with controllable tween between frequencies via Bezier curve’ Tumblog post from March ’12 - though my initial MacJournal entry dates back to October ’11.
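As a sketch of the ‘tween’ element of that post, a cubic Bezier glide between two note frequencies - the placement of the inner control values (c1, c2) is an assumption, purely to show the shaping:

// Cubic Bezier 'tween' between frequencies f0 and f1, sampled by a
// parameter u running from 0.0 to 1.0. The inner control values are
// placed at fractions c1 and c2 of the way from f0 to f1 and shape
// the glide between the two notes.
float bezierFreq(float f0, float f1, float u,
                 float c1 = 0.2f, float c2 = 0.8f) {
    float b1 = f0 + c1 * (f1 - f0);
    float b2 = f0 + c2 * (f1 - f0);
    float v  = 1.0f - u;
    return v*v*v*f0 + 3*v*v*u*b1 + 3*v*u*u*b2 + u*u*u*f1;
}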
Documentation...
I also have to start moving my documentation tools into higher gear… to optimise my oF multiple camera source video capture sketch for speed and quality… and to implement the remote control of my Canon A640 - or possibly a new Canon S100 to replace my damaged S95. Some of this is going to entail rethinking my current lighting solutions - my DIY LED ring lighting isn’t sufficient and causes overexposure and ’sync’ artefacts in the video stream…
0 notes
Text
BMMF12 - feedback/discussions/reflections
Through showcasing the latest iteration of my Augmented Tonoscope at Brighton Mini Maker Faire ’12 I had lots of interesting conversations with other makers and visitors.

These are notes on some of the more thought-provoking ideas and recommendations.
Hang drum (hand drum) - all steel - http://www.hang-music.com/hang.php
use standing wave patterns to tune the drum…
Ian Helliwell, Brighton - http://ianhelliwell.co.uk/
Ian is a self-taught audio-visual artist operating from his current HQ in Brighton since 1991. His work encompasses short experimental films, shown at festivals worldwide, and the composition of electronic music with instruments he designs and builds himself. On occasion he performs live with his Hellitrons and Hellisizers, and exhibits his creations in galleries. For many years he has devised multi-projector light-show projections with his collection of handmade and found slides, optical wheels and super 8 film. He is a curator and collector with a special interest in early electronic music, world’s fairs and abstract film. He runs workshops, assembles themed programmes for film festivals, and creates pieces for radio including the ongoing series, The Tone Generation. He was featured in issue 336 of The Wire magazine in February 2012.
FerroTec - ferrofluid.ferrotec.com/products/ferrofluid/ have a Ferrofluid Product Series - including anti-oxidising ferrofluids - that is definitely worth investigating in more detail.
Use bean bag filler - polystyrene spheres - to view standing waves - [UPDATE] bought via ebay
THE MUSIC COMPUTING LAB - http://mcl.open.ac.uk/musiclab
The Music Computing Lab at The Open University is a research group focused on empowering musicians, illuminating musical activities, and modelling music perception and cognition. Our work is informed by musicology, psychology, ethnography, embodied cognition, pervasive interaction, mathematics and advanced computing techniques. In particular, we devise and investigate new ways to:
Empower beginners to engage deeply with musical activities;
Provide new tools and capabilities for expert musicians and theorists;
Cast new light on how music works.
I was lucky enough to have a very able ‘helper’ at BMMF 12 - Luke - who builds speaker cabs (I should approach Luke with my design for the Cymatic Adufe plinth - he might have some thoughts). He suggested using pressure-sensitive film to visualise the standing wave pattern. Even though subsequent investigation showed that these materials only measure static pressure and are not reusable, it made me think about the idea of using moire patterns - by applying a thin film of Letratone-like halftone pattern to the actual drum skin and then mounting a second, static grid a little distance away from the surface of the drum… I know from empirical experience that if this plate is too close to the drum skin it dampens the vibration… perhaps this could be a fine mesh? Start by buying some Letratone or some Deleter manga comic halftone pattern film…
Matt - Edinburgh International Science Festival - should contact http://www.sciencefestival.co.uk/
Fergus Ray Murray - oolong.co.uk
Fergus Ray Murray lives in Edinburgh, where he programs interactive graphics and web sites, writes non-fiction of various sorts, sculpts little critters, takes photographs, cooks vegan food, and brews quite a lot of good tea.
Some nice ‘Warped membrane’ animations on his website…
General Reflections….
I could probably use the Manfrotto extension arm I bought for the Cymatic Adufe to mount both the Firefly MV and the modded PS3Eye camera using the Manfrotto magic arm - if I had the right adaptors… though this isn’t a cheap solution… there are other lockable arms I’ve seen on ebay that might do the job better.
I need to re-check the oF video capture sketches - they didn’t work on the day. Probably should disable capturing a stream until actually required… so I can work with fewer cameras using the same sketch but add them as required.
The card collar around the top of the drum improves things… but the glass beads still get everywhere… the moire pattern idea is definitely worth pursuing.
Rationalising the set-up would be good - i.e. router and USB wired up in a case and ready to go… It takes ages to wire up from scratch.
I need to buy a flight case on wheels that can hold most of my stuff
Fit switches on the speaker cables so I can switch signals between different tonoscope devices.
The sooner I can get to composition the better...
0 notes
Text
PPR Studio Week - Notes + Reflections
Notes and reflections on cymatic patterns with liquids explored during the PPR Studio Week, 20-24th August 2012.

Day 3
I like the totemic nature of the 6.5” speaker on the Menu ceramic tea light holder and anti-vibration base with the camera above it… Can I extend this?
- bright LEDs mounted on the outside edge of a top plate
- try with the spare Blink-M Max LED I have + a ping pong ball diffuser
- how to control amplitude between octaves - there’s a marked jump in excitation of the liquid between frequencies
- how much can I drive the speaker for very low frequencies? - this is where I think the proposed Envelope Generator mode will come in handy…
focus/blur
- aesthetically I like this effect… but it’s difficult to access the focus ring on the PS3 Eye when the LED Ring and diffuser are mounted - can I do this in software without slowing it down too much?
- the colour stream was attractive here too… can I add RGB and alpha faders - like the Firefly MV oF sketch?
- can I overlay the adaptive threshold image on top of the colour/greyscale image - black as matte, white with transparency?
The ring of light pattern is possibly too complex to discern the structure on the surface of the liquid - and looks far too abstracted from above…
- second camera looking at the edge - use the IPEVO Point2View
- try using USB LED magnifier lighting instead/as well
- alternative single light source
Get some advice for David about getting the best out of my S95 - filters + in camera settings
Mount the camera on the Manfrotto magic arm kit as high as possible - otherwise get oil on the lens - even at ~20cm distance
Logging patterns which emerge at octave ratios below 89Hz - 4 octaves below mathematical C - ~256 Hz
New glass beads in small dish and silicon baking tray - no discernible effect
water in silicon baking tray
- get effects at higher frequencies than the small baking dish… ~70Hz - but they’re complex shifting patterns
- pulling the fader level has a marked effect
- strobe mode on the LED Ring shows potential - despite the syncing issues with the camera and black bars appearing across the screen…
glycerol - seems to work in the small tray within a narrow frequency range ~11-15 Hz - but once it gets vibrating the pattern is very stable though it seems to have fewer modes
How do I get a better representation of structure?
- onion skin type animation - frame grab on a timer - % of frequency?
- blob detection
- single light source
Day 4
Need a higher sided tray - or a tray on a tray - too much spillage
At 12 o’clock on gain and full on fader I get complex shifting patterns on oil and water beginning at 47.5 Hz
Drop by 2 on the fader by 28 Hz
There are some interesting parallel linear arrangements going on at 26 Hz
Drop by 2 on the fader by 25 Hz
Switching to the unpainted tray and just water gives some interesting shadows I didn’t catch earlier
Fit rubber edging strip around inside of steel ring of Menu tea light holder
can get down to 10 Hz with this setup - and down to 8.5 Hz if the water is already in motion - otherwise I have to start overdriving the speaker to the point where it starts buzzing…
Day 5
tried using the Firefly MV
the short focal length lens means the camera has to be very close to the tray… which results in splashes on the lens and the LED Ring sitting too low to disperse enough light across the tray…
- with the shortest focal length I get a not displeasing ‘fish-eye’ view which actually suits the subject well, I think
- the 7.5 frame rate setting slows the action down to reveal the patterns more clearly
- the LED Ring causes black bars to ‘cartwheel’ up and down the screen… the timing adjustment on the LED Ring Controller allows this to be sped up or slowed down quite effectively… PWM increases or decreases the height of the bar…
- don’t think I can use the LED Ring with the Firefly MV
- at 60fps I get occasional glitches in the feed - the image flicks off to the left
- blocking seems to slow down the frame-rate considerably
- a 500 Watt flood creates over-dramatic lighting - whether spot with closed barn doors or flood with open… Even with diffusion film it doesn’t produce the desired effect
0 notes
Text
What I need is a compact DJ mixer
I learn a lot about how the Augmented Tonoscope should be configured by exhibiting it at events such as BEAM ’12 Festival and Manchester Mini Maker Faire.
…
In my studio I sort of ‘get by’ and accommodate failings in its functionality or input systems which are made more ‘obvious’ when showing it to others…
One key area I’ve identified is the unergonomic nature of the knobs on the Behringer 6-channel mixing desk I’m currently using… I need the channel faders and crossfaders of a DJ mixer… but which one?
So looking for an alternative I came across the IK Multimedia iRig MIX ultra compact DJ mixer:
“iRig™ MIX is the first mobile mixer for iPhone, iPod touch, or iPad. iRig MIX offers the same controls you would expect from a professional DJ mixer (crossfader, cues, EQ and volume controls, etc.) in an ultra-compact mobile mixer that can be used with a huge variety of iOS DJ mixing and other apps. Features:
2 stereo inputs with gain, bass, treble and volume controls, independent cue on each channel with LED indication and channel cross-fader
Instrument/microphone/extra input with volume control can be processed by iOS apps (such as AmpliTube, VocaLive)
Stereo output with RCA connectors, master level and LED meters
High quality, pristine sound
Quality headphone output for master or cue monitoring with independent volume control
Input switch splits Input 1 into dual-mono for use with DJ mixing apps on a single iOS device
“X-Sync” mode allows auto-sync with any audio source using the included DJ Rig free app
Can be powered with the included AC adapter, battery pack and laptop USB ports
Includes 4 free apps: DJ Rig, AmpliTube, VocaLive, GrooveMaker”
While I’m not expecting a pro DJ experience it seems to meet my needs perfectly - being not much bigger than one of my Augmented Tonoscope single modules. It’ll also allow me to play out more conveniently on my occasional DJ sets… and I found a B-Stock listing on ebay.co.uk for £53.95 inc P&P. Result.
0 notes
Text
BEAM - feedback/discussions
Through showcasing The Augmented Tonoscope at the BEAM 12 Open Call I had several interesting conversations with other BEAM 12 contributors and general Festival goers.

So I thought it was worth noting some of the more thought-provoking ideas and recommendations.
Pedro Rebelo, SARC - Music for Prosthetic Congas http://www.somasa.qub.ac.uk/~prebelo/index/works/prostheticcongas/index.htm
Joe Paradiso, MIT - Passive Acoustic Tap Tracking Across Large Interactive Surfaces http://resenv.media.mit.edu/Tapper/index.html
David Tudor - Rainforest http://davidtudor.org/Works/rainforest.html http://www.getty.edu/research/tools/guides_bibliographies/david_tudor/av/rainforest.html
My experiments with ferrofluid using a 15 Watt electromagnet, variable speed motor driver and glass petri dish of the black stuff generated a lot of interest. I had several discussions about ferrofluid - the possibilities of magnetic stirrers, a configuration of small electromagnets and permanent magnets, and most interestingly, special types of ferrofluid that are less prone to oxidisation and so last longer in air - mine had congealed.
We discussed the idea of using RGB LEDs instead of the white LEDs in my custom-made camera LED ring and controlling their colour based on the properties of the sound. Although I’ve been thinking that this would be a useful next iteration for the LED ring, I’m doubtful that scaling up the electronics for the three LED drivers it would require - while keeping the same, if not a smaller, form factor - is actually worth the effort. I’m actually less interested in mapping any quality of sound directly to colour anyway.
Sergi Jorda of Reactable suggested that I need to maximise the FPS of the camera to overcome anti-aliasing of the captured image, and advised not to use a strobe - one of the settings of my custom-made camera LED light ring - which confirmed some of my doubts about whether the camera capture will work as well as I’d like. He also suggested using a sealant around the edge of the drum skin to stop the glass beads falling into the gap between it and the metal rim.
Matt Spendlove of Cenatus CIC and Netaudio London suggested that once I have a performance ready it could be suitable for Sally Golding’s Unconscious Archives analogue film night.
Scott McLoughlin was at BEAM 12 presenting Resonant Systems, in which he creates an oscillating and evolving system of inharmonic sounds by exciting the resonant nodes of cymbals with sine waves. We mooted the idea of building a Cymatic Drumkit together.
0 notes
Text
Reflections on The Whitney System
Through my development of The Whitney System for Whitney Evolved I’ve started to understand the nature of the algorithm more. I’m hoping that this insight will allow me to discern a growing correspondence between musical pitch and the convergence/divergence of the patterns in the Whitney Rose.

I’ve realised that the deceptively simple Rose algorithm has a periodic, cyclic nature - it repeats. So it’s possible to conceive of it as a long looped sequence - albeit with a cycle length ranging from minutes to hours. It’s also possible to jump to and ’jog’ up and down points along the periodic timeline of this sequence in a manner very akin to video editing or music sequencing.
So I think it should be possible to correlate pattern to pitch - to match those points along its progression, inevitably at whole-number ratios, where the patterns converge and diverge with striking dynamic effect, with the frequency or pitch of a note defined by the same whole-number ratio or Just Intonation harmonic relationships.
For example, if I set a root frequency, say A4 at 440Hz, then a ratio of 1/2 or 0.5 will produce a frequency of 440/2 i.e. 220Hz or A3 - one octave down - and the Whitney Rose algorithm will display two arms…
In the code
// I know that when stepTime = 960, a default rate of 3.351 for classic
// and 0.003351 = one complete cycle - and multiplying the rate by a factor
// of 10 divides the stepTime for a complete cycle by a factor of 10
cycleLength = 16*60; // = 960
// this is the only rate at which the cycle demonstrates full harmonic
// resolution - at other rates it only displays some harmonic resolution
rate = (2*PI*nbrPoints)/cycleLength;
// defines the start point of the rose pattern along its progression,
// set by the mouse x position
startTime = -cycleLength*(mouseX/(float)width);
// calculates the ratio along the length of the progression
float ratio = -startTime/cycleLength;
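The drawing routine itself isn’t shown above, so the following is only a hedged reconstruction of the differential rose from the variables in that snippet - nbrPoints, rate and startTime are from the code, while maxRadius, centreX/centreY and drawPoint() are assumed placeholders:

// Differential 'rose': point i advances at i times the base rate, so at
// whole-number ratios along the cycle the points align into arms
// (two arms at 1/2, three at 1/3, and so on).
void drawRose(float time) {
    for (int i = 1; i <= nbrPoints; i++) {
        float angle  = rate * i * (startTime + time); // differential motion
        float radius = maxRadius * i / (float)nbrPoints;
        drawPoint(centreX + radius * cosf(angle),
                  centreY + radius * sinf(angle)); // placeholder draw call
    }
}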
These ideas have been confirmed by my subsequent reading of Bill Alves’s paper ‘Digital Harmony of Sound and Light’ in Computer Music Journal, Winter 2005, Vol. 29, No. 4, pp. 45-54.
The right-hand column of Figure 2 (shown above) shows the chords that would result if a set of sixteen pitches went through the same differential cycle - some of the pitches are poorly approximated by twelve-tone notation…
Unlike the harmonic series, this set of musical intervals has a constant factor in the numerator. This set defines what Partch called a utonality and what others have called a subharmonic series (that is, the inversion of the harmonic series).
The higher the numerator, or number of divisions, what Partch calls the numerary nexus, the further along the spectrum we are towards dissonance. The smaller this common factor or numerary nexus, the more stable but less dynamic the sound and image.
In my first video based on these principles, Hiway 70 (1997), I extended the polar coordinate curves of Whitney’s Permutations to three-dimensional graphics. But the most important way in which my work was distinguished from his is that, approaching this work as a composer, I created a soundtrack in tandem with the visual composition, carefully synchronizing movement between points of tension and dissonance and points of stability and tonal consonance. I created the music entirely in Just intonation, using harmonies which were often direct analogues of the patterns of visual symmetry (see Figures 3 and 4).
Just as digital technology allowed me to control the visual elements with precision necessary for differential dynamics, so did Csound realization of the music allow me to create a Just intonation system which could freely modulate between tonal centers…. [and] implement my vision of differential dynamics and dynamic Just intonation.
0 notes