I am a Canadian composer based in Kingston, Ontario. Much of my work stems from an interest in music technology, and/or working with guitars. In addition to writing experimental, electronic and classical music I occasionally design software for audio synthesis and processing. I use this page to post informal writings on technical aspects of writing music with computers. You can read more about me and listen to my music at www.michaellukaszuk.com
#3 What’s in a glitch?
Sounds that result from the malfunction of software or audio hardware have become an extremely prevalent part of the electronic music world, both in academic/research-oriented and D.I.Y. or commercial communities.
To me, glitch feels kind of like the final frontier of sound synthesis and effects processing in electroacoustic music. Recently, there's been such a great focus on granular and spectral drone/gate & hold type effects, and I feel that has led to a crisis of similarity in the music that composers are producing.
Even with popular digital audio glitch techniques like bitcrushing and the use of foldover from aliasing, the sound of the malfunction differs so greatly depending on the software and hardware used that there's a great opportunity for freshness. The inherent unpredictability of glitch helps forge new paths for sound design.
Here’s one of my favourite software-based audio glitches:
I've posted the code below; it's written in the ChucK language. The main idea:
1. Feed a signal into a resonant bandpass filter (I used a sine tone)
2. Gains should be super, super low; I scaled mine by 0.00000000000003
3. Set the input signal to an infrasonic frequency; this frequency affects the number of times per second that the resonant filter frequency changes
4. Set the center frequency of the resonant filter to values > 7000
Here's the meat of the glitch: in the ChucK language, a resonant bandpass filter is supposed to have a Q value greater than 1, which controls how narrow or wide the emphasis on the center frequency will be.
Using values < 1, you go beyond the maximum steepness of the filter.
What you end up hearing is a whole lot of crazy chirping, but you can extract useful sounds by adjusting the center frequency of the filter and the frequency of the input signal (sine tone in this case).
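For the curious, here's a quick way to see why Q < 1 misbehaves. This is a Python sketch, not ChucK, and it assumes ResonZ follows the textbook two-pole reson design, where the pole radius is approximated as R = 1 - pi * bandwidth / sample rate (with bandwidth = center frequency / Q); the exact coefficient math may differ by implementation.

```python
import math

def reson_pole_radius(center_freq, q, sample_rate):
    """Pole radius of a textbook two-pole reson filter, using the common
    linear approximation R = 1 - pi * bandwidth / sample_rate, where
    bandwidth = center_freq / Q. The recursion is only stable while |R| < 1."""
    bandwidth = center_freq / q
    return 1.0 - math.pi * bandwidth / sample_rate

# A sane setting: Q = 5 at a 9 kHz center keeps the pole inside the unit circle.
print(reson_pole_radius(9000, 5.0, 44100))   # ~0.87, stable

# The glitch setting from the ChucK patch: Q = 0.15, center > 7000 Hz.
print(reson_pole_radius(9000, 0.15, 44100))  # ~-3.27, wildly unstable
```

With |R| > 1, the filter's feedback grows without bound instead of ringing and dying away, and what reaches the output (after the absurdly low gains) is that runaway recursion -- hence the chirping.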
I used this technique in my fixed-media piece My Metal Bird Can Sing:
https://soundcloud.com/mplukaszuk/my-metal-bird-can-sing
//////////////////////////////////////////////////////////////////////////////////////////////
// Michael Lukaszuk , March, 2016
class MPLfilt extends Chubgraph
{
    SinOsc s => ResonZ rez => Gain gain1;
    // CAREFUL, CAN BE VERY LOUD!! 1st try, no headphones
    // the gain has been scaled down quite a bit but still, might be a good
    // idea to test on something other than expensive studio monitors
    gain1 => outlet;

    0.0001 => s.gain;
    0.0001 => rez.gain;
    0.15 => rez.Q;
    0.00000000000003 => gain1.gain; // ridiculously low...

    function void filterGlitch(int On)
    {
        if (On == 1)
        {
            Math.random2(1,6) => s.freq; // infrasonic Hz give different glitches than audio-rate Hz
            Math.random2(8000,10000) => rez.freq;
        }
    }

    function void setQ(float Qval) // should be around 0.1-0.2
    {
        Qval => rez.Q; // Q < 1 controls glitch level
    }

    function void amp(float gainLVL)
    {
        (gainLVL * 10) * 0.00000000000003 => gain1.gain; // ridiculously low...
    }
}

MPLfilt mike => dac;

while (true)
{
    1 => mike.filterGlitch;
    0.9 => mike.amp;
    0.15 => mike.setQ;
    150::ms => now;
}
#2 STK instruments as more than physical models
Many computer music languages include at least a few resources for physical modelling synthesis. It's a very complicated and intriguing idea: that, based on analyses, we can recreate sounds from our environment and use them in ways that defy their physical limitations.
A lot of earlier electronic and computer music composition explored imitation of acoustic instruments, especially percussion, winds and brass. Pieces like Phoné by John Chowning (a personal favourite of mine) are built on the interplay of computer-generated and slightly ambiguous "almost real" sounds, which makes the electronic-sounding material feel more organic.
The Synthesis Toolkit is a fantastic library of classes that has been ported into many of the most popular computer music tools such as ChucK and RTcmix. It was written by Perry Cook and Gary Scavone, and designed to "facilitate rapid development of music synthesis and audio processing software, with an emphasis on cross-platform functionality, realtime control, ease of use, and educational example code." https://ccrma.stanford.edu/software/stk/
Many of the STK models were developed in the early '90s, but there have been some great more recent additions to this world, like John Gibson's mesh~ external for Max/MSP -- there's also a metallic mesh type instrument in the STK.
I've been present for at least a few classroom debates about the believability or "realness" of these models, i.e. "does the clarinet model really sound like a clarinet?" I personally feel that with some additional processing using the right tools in a DAW you can get it there, but such debates are beside the point when it comes to my relationship with STK instruments.
In my own compositional work, I feel that there's a lot of potential in thinking about STK instruments not as physical models, but as more abstract noise-generating electronic instruments. Concern yourself less with the idea of replication -- thinking outside the box with parameters such as body size and breath pressure can yield very interesting results. This was one of the main technical aspects of a fixed-media piece I wrote in 2015 called "My Metal Bird Can Sing".
Examples!!
RTcmix examples can be found via the following google drive link:
https://drive.google.com/file/d/1r8qm8O8pGfCfhXb7zMJiVI-YE4pACf3E/view?usp=sharing
(the score files at the Google Drive link are better indented/organized)
strum_lukaszu_1_birds.sco
A simple example of thinking outside the box with STK instruments: using pitch values that are far below the intended range of the instrument (see the pitchArr variable, which uses octave-point-pitch-class notation), and durations that are sometimes too short to give enough spectral information to suggest any kind of modelling. The decay variable is what really takes this score into odd territory, as it is decremented with each successive loop -- this uses the STRUM2 model as a computer instrument; a guitarist or harpist wouldn't be able to manipulate tone in such a way.
-------------------------------------------------------------------------------------------
rtsetparams(44100, 2)
load("STRUM2")
set_option("clobber=on")
rtoutput("Strum_interlude2_4", "wav")

srand(122)
a = 1

for (i = 0; i < 2.4; i += 0.1) {
   if (a > 8) {
      a = 0
   }
   a += 1

   start = i + 1
   start2 = i + pickrand(0, random())
   dur = irand(0.17, 1.1)
   amp = 10500
   ampenv = maketable("line", 1000, 0,1, 9,1, 10,0)

   pitchArr = {1.05, 2.00, 2.07, 3.05, 4.00, 4.07, 7.00, 7.05}
   pitch = 3.00
   pitch2 = 3.07
   squish = irand(1, 9)

   if ((i * 7) % 5 == 0) {
      pitch = cpspch(7.05) + pitchArr[a]
   }

   decay = dur - 0.02
   pan = pickrand(0, 1)

   STRUM2(start, dur, amp * ampenv, pitch, squish, decay, pan)
   STRUM2(start2, dur, amp * ampenv, pitch2, squish, decay, pan)
}
----------------------------------------------------------------------------------------
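Before the next score, it's worth seeing the algorithm STRUM2 is built on. Here's a bare-bones Karplus-Strong pluck as a Python sketch (my own toy, not RTcmix code) -- the delay-line length sets the pitch, which is why sub-range pitches stop sounding like a string at all:

```python
import random

def karplus_strong(freq, dur, decay=0.996, sample_rate=44100, seed=0):
    """Minimal Karplus-Strong pluck: a noise-filled delay line whose length
    sets the pitch, recirculated through a two-point average. 'decay'
    scales the feedback; near 1.0 it sustains, small values choke the
    tone into a click."""
    random.seed(seed)
    n = int(sample_rate / freq)                   # delay-line length ~ one period
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(dur * sample_rate)):
        first = buf.pop(0)
        buf.append(decay * 0.5 * (first + buf[0]))  # averaged, scaled feedback
        out.append(first)
    return out

# An 'impossible' string: a pitch far below any guitar register.
low = karplus_strong(freq=8.0, dur=0.5)
```

At freq = 8 Hz the "string" is a delay line over 5,000 samples long; what comes out is closer to gated, slowly darkening noise than a pluck, which is exactly the territory the score above plays in.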
A much more interesting example:
strumfb_lukaszuk_birds_1.sco
STRUMFB from RTcmix uses the Karplus-Strong plucked-string model but has some additional functionality for adding feedback and distortion -- probably some metalhead at Columbia or CCRMA had a bit too much coffee one night.
This is a more complex score so I don’t want to labour over all the little details. Basically, the idea is to move away from a guitar sound and to isolate the feedback elements of the instrument to create interesting drones and glitch-like sounds.
----------------------------------------------------------------------------------------
rtsetparams(44100, 2)
load("STRUMFB")
set_option("clobber = on")
rtoutput("Birds_strumGlitch1", "wav")

srand(140)
fbTrans = 20
startRand = 1.145

for (i = 0; i < 100; i += 0.1) {
   if (i % 5 == 0) {
      startRand = pickrand(1.0, 0.02, 0.01, 0.02, 0.9)
   } else if (irand(1, 5) != 1) {
      startRand = 2.111
   } else {
      startRand = pickrand(0.05, 1.5, 0.001, 0.01)
   }

   start = i + (random() * startRand)
   dur = 1.8 * random()
   amp = 1600 // set amp 1400

   trans = pickrand(1.05, 1.08, 1.03)
   pitch = 11.93 * trans

   if (i % 3 == 0) {
      fbTrans = 19.3
   } else if (i % 5 == 0) {
      fbTrans = 16.0
   } else {
      fbTrans = 20.0
   }

   fbPitch = fbTrans * pickrand(1.66, 3.222, 0.8, 1.75)

   if (irand(0, 10) == 2) {
      squish = irand(3, 8)
   } else {
      squish = 10
   }

   fundDecay = dur * random()
   nyqDecay = 1.0
   distGain = 15
   fbGain = 0.9
   cleanLvl = 0.1
   distLvl = 1.0
   pan = random()

   ampEnv = maketable("line", 1000, 0,0, 1,0, 2,1, 5,1, 9,1, 10,0)

   STRUMFB(start, dur, amp, pitch, fbPitch, squish, fundDecay, nyqDecay,
           distGain, fbGain, cleanLvl, distLvl, pan)
}
-------------------------------------------------------------------------------------------
#1 Wavetable oscillator as an LFO
I am deeply enamoured with low frequency oscillators -- this is no secret to anyone who has ever taken a course that I've taught or spoken to me about electronic music stuff for a little while. One of the things I find unsatisfactory about many commercial synthesizers and plugins is that they usually only have a few different choices for LFO shapes.
I understand that it's problematic to clutter an interface with too much complexity, but still, LFOs relate to rhythm in electronic music, so having so few options is a bit far from ideal, at least for the way that I like to work.
Using a wavetable oscillator as an LFO is a fantastic approach for creating interesting rhythms, whether it be pulsing, repetitive beats or frenetic bursts for aspiring breakcore musicians. I find that it's also useful to think about A-D-S-R type envelopes as building blocks for rhythms and phrases in electronic music... User interfaces often present an LFO or envelope as a mere effect or way of further shaping a sound, but there's a lot that they contribute to the way the musical material is developing.
Some commercial products like NI Massive and Absynth are pretty useful for using weird or complex wavetable shapes as LFOs, but tools like SuperCollider, Max, and ChucK let you achieve similar results while also taking advantage of the algorithmic capabilities of the language/environment.
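To make the basic move concrete, here's a small Python sketch (not SuperCollider -- the real examples are at the link below): draw an arbitrary shape as breakpoints, bake it into a table, then read the table at a low rate as a control signal.

```python
def make_wavetable(breakpoints, size=512):
    """Linearly interpolate (position, value) breakpoints (positions in
    0..1, values in 0..1) into a fixed-size lookup table."""
    table = []
    for i in range(size):
        x = i / size
        # find the segment this x falls in and interpolate within it
        for (x0, y0), (x1, y1) in zip(breakpoints, breakpoints[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0) if x1 > x0 else 0.0
                table.append(y0 + t * (y1 - y0))
                break
    return table

def wavetable_lfo(table, rate_hz, dur, sample_rate=44100):
    """Read the table as a low-frequency control signal: the phase wraps
    at rate_hz, so each pass through the table is one rhythmic 'bar'."""
    out = []
    phase = 0.0
    for _ in range(int(dur * sample_rate)):
        idx = int(phase * len(table)) % len(table)
        out.append(table[idx])
        phase = (phase + rate_hz / sample_rate) % 1.0
    return out

# A spiky, uneven shape -- the kind of LFO most plugin menus never offer.
shape = [(0.0, 0.0), (0.05, 1.0), (0.2, 0.0), (0.6, 0.7), (0.62, 0.0), (1.0, 0.0)]
lfo = wavetable_lfo(make_wavetable(shape), rate_hz=2.0, dur=1.0)
```

Multiplying lfo[i] into an oscillator's amplitude gates it with that shape twice per second; redraw the breakpoints and you've redrawn the rhythm.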
side note about wavetables:
I've also been using wavetables and wavetable synthesis in a lot of my recent work. I like how wavetables -- just a collection of points stored in a buffer and read by an oscillator -- force you to think critically about how you're sculpting timbre in your work with sound. Even a simple change in the position or slope of one point in the table can drastically change the timbre of the oscillator that contains the wavetable.
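A tiny demonstration of that claim, as a Python sketch: build two single-cycle tables that differ only in where one breakpoint sits, and compare a harmonic with a naive DFT. Moving the triangle's peak off-centre makes even harmonics appear out of nowhere.

```python
import math

def table_from_points(points, size=256):
    """Piecewise-linear wavetable from (position, value) breakpoints;
    positions run 0..1 and describe one cycle of the waveform."""
    table = []
    for i in range(size):
        x = i / size
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0) if x1 > x0 else 0.0
                table.append(y0 + t * (y1 - y0))
                break
    return table

def harmonic_mag(table, k):
    """Magnitude of harmonic k of one table cycle (a naive DFT bin)."""
    n = len(table)
    re = sum(table[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
    im = sum(table[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
    return math.hypot(re, im) / n

# a symmetric triangle: odd harmonics only...
tri = table_from_points([(0.0, -1.0), (0.5, 1.0), (1.0, -1.0)])
# ...drag the single peak off-centre and even harmonics appear
skew = table_from_points([(0.0, -1.0), (0.2, 1.0), (1.0, -1.0)])
```

The symmetric triangle's second harmonic is (numerically) zero; skewing one point to x = 0.2 pushes it to roughly 0.15 -- an audible change in brightness from moving a single breakpoint.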
Some simple/semi-simple SuperCollider examples demonstrating how to use a wavetable oscillator as an LFO to create interesting rhythms in a patch:
A very simple example, just a demonstration of the concept, and a more musical example in which the right channel uses wavetable oscillators as LFOs:
https://drive.google.com/file/d/1MWNadRzY7AjWTXHBJVBPD5qrtmWvx_fT/view?usp=sharing
See the Google Drive link for the SuperCollider code.