Music and Sound Synthesis 2023 Conclusion
I’ve always had an interest in synthesisers, since around Year 12 when I started Music Technology at A Level. I never did my own research on them between then and the start of this module; I only learnt through first-hand experience, experimenting with the synth instruments in Logic, mainly the ES2. So when I saw this module was available to me, I knew it was something I wanted to study.
However, when I started the module it was exactly what I expected, yet for some reason I still felt disappointed, and I didn’t know why. I felt like this for a while, and only recently do I think I know why: it wasn’t something that clicked with me initially, which isn’t normally the case when it comes to new areas of music. Since it wasn’t something that directly interested me, I left it on the back burner whilst I had another module and my final year project to do. I enjoyed those two more, so I spent my time on them instead. I think it is safe to assume that this is evident in my work.
I’m not proud of what I have created for this module. I know I am capable of creating work to a much better standard, for example the seven-minute piece for my final year project. This module has been fun and I know I did learn a lot from it, mainly how to operate the Oakley, which I am very grateful for and proud of myself for.
Unfortunately, for the last few weeks I have been in a very big mental slump, which has affected not only my attitude to work but also my overall mood at home. I do not know why I feel this way, and since this is the very last module I have to complete for my final year of university, I should feel motivated to finish it to a high standard, as I believe the rest of my work this year has been. Yet that is not the case: over the last few weeks I have lost nearly all of my drive and motivation to finish university, and it frustrates me that I do not know why I have been feeling this way.
The things I am happy to have taken away from this module are learning how to operate the Oakley effectively and a brief introduction to VCV Rack. However, I wish I were more comfortable with VCV Rack: not only would I have wanted to include it in my pieces (none of them actually do), but I’m sure I could have created a great range of sounds with it. As I said earlier, though, the module didn’t click with me, which is probably why I didn’t spend much of my free time experimenting with VCV.
Overall I would say I enjoyed the module, but I think that with more time and a better mental state I could have created better pieces of music. The main takeaway, though, is that I did learn something new, despite the outcome not being what I wanted.
Synthesis Update: 16th May 2023
The deadline is two days away, yet I think I am able to submit the work early, as I cannot think of anything else to add to the three pieces. I’ve spent most of today doing light mixing and making sure everything is to a standard I am somewhat comfortable with.
The West Coast piece I have named “Analogue Animosity”, to reflect both the idea of using only analogue synths and the aggressive sounds they made. I didn’t want to do much mixing on this piece, since that would defeat the purpose of creating it on the synths themselves. The main thing I did add was a tremolo effect on some tracks, as I did not know how to create one on either the Buchla or the Oakley, and it adds some interest. Near the start, on the pink noise track, I used automation to slow it down and then speed it up, again to add interest.
I am well aware that this piece is, to put it bluntly, quite boring. This style of more ambient, less traditionally written music is not my forte, and I really struggled to visualise where the piece could go structurally, but I am still pleased that I made three clearly different sections in it.
For (Ghost) Riders In The Sky, this was the one I had the most fun working on, since I already have a fair bit of experience with this song, as it featured in my dissertation. However, I still struggled to create sounds that would sit nicely together; I remembered Stephen’s advice not to create sounds too similar to one another, since the mix would become muddy. I was happy with the end result of the repeated arpeggiated chords. To create that sound I used the pulse waveform on the Oakley, which sounds more colourful than, for example, a sawtooth. I did use Logic’s ES2 synth for the chord sounds, as it is polyphonic and so much quicker than doing it in the studio on the analogue synths.
The Give Me Love instrumental was the first of the three pieces I started, originally inspired by the “Switched On” movement. It was also the first piece I recorded using the analogue synths, so hearing it back against the other two, the tones are a lot more basic. I really like how the arpeggio sound turned out, and I’m pleased with how similar, yet slightly different, it is to the preset I originally used for the draft, especially the frequency sweeps. One element I struggled with was the final section and getting low tones. I knew I wanted a sub-bass sound using sine waves, but I also wanted something more audible and aggressive, so I started with a sawtooth bass. That meant I had to get creative to avoid clashing sounds in the low end, so I used pulse waves to add a bit of colour, especially towards the higher end of the frequency range.
Synthesis Update: 15th May 2023
The aim of today’s studio session was to create the West Coast inspired piece. This was also my first solo attempt at using the university’s Buchla system, and I was well aware going into it that my knowledge was next to none, since I couldn’t remember much from the session with Stephen however long ago. Despite this, I did manage to get some sound out of it, and I think that was due to being able to identify the key modules, since they looked somewhat similar to the oscillators on the Oakley system, which I am much more comfortable on.
After about half an hour or so, I managed to get a waveform playing, and then, drawing on previous knowledge of FM, I used another oscillator to modulate it and add some interest to the sound. However, it wasn’t long before I ran out of ideas on the Buchla, so after a couple of recordings I went back to the Oakley to finish the track.
The Oakley was set up in a similar way to before, as this is my go-to starting point. A major difference was that I used pink noise instead of a more conventional waveform, so I could bring more sound design than a musical element into the West Coast piece. I also adjusted the speed of the sine waveform manually; I liked how, at the start, it imitated the sound of an old steam train. I was relatively pleased with what I created, but overall I knew I could do better. I was stuck on how to operate the Buchla, and even on the Oakley it was mentally tough to create something outside what I normally make.
Synthesis Update: 11th May 2023
Today was a double studio session, with the aim of re-recording the Ghost Riders track and the Ed Sheeran track using the Oakley. I feel a lot more comfortable using the Oakley than the Buchla, which is why I preferred it for the East Coast inspired tracks.
Although I am comfortable using the Oakley, my knowledge is still somewhat basic. When creating the drone sound for the Ghost Riders track, I knew I wanted to use the frequency module to make it sound interesting and less repetitive. Since I liked how this sounded, I kept it on the majority of the tracks in the Ghost Riders cover, and the Ed Sheeran one too. One problem with the Ghost Riders track, which I hadn’t noticed until after the studio session, was that some of the MIDI either didn’t reach the Oakley or wasn’t recorded properly, so there is an empty section in the melody, which is unfortunate. In theory I could recreate it, but that would also mean lining up the frequency sweep and working out which waveforms were used.
I was really pleased with the sounds I created for the Ed Sheeran track, especially the one that plays the arpeggiated chords. I wanted something that sounded somewhat plucky, so using the ADSR module I made sure the release was short. In the final section I really like how the bass sounds, but it lacks some low end, so I made an aggressive sound using the ES2 synth in Logic, built from two sawtooth waves and a square wave.
Ed Sheeran Intro: https://drive.google.com/file/d/1R88GdjnX2LIaW7wpYjALW-khu3wY0zo3/view?usp=share_link
Ghost Riders Intro: https://drive.google.com/file/d/1N_BmOWwrFQhZNpaQfw0iHaEP8Kp5Q-qk/view?usp=share_link
Synthesis Update: 6th May 2023
Further progress has been made on the Ghost Riders In The Sky cover, including more structure and some further mixing, to get ideas on what I would want to do using the physical synths, most likely the uni’s Oakley.
The sub bass at the start has its cutoff “matching” the tempo (this was done manually through automation, but I would use an LFO on the Oakley for this). The other main changes are the sound of the now-sawtooth bass, and splitting the melody in two, with one octave panned more left and the other more right.
https://drive.google.com/file/d/1xi4djScc-ku0BFs4Y0_p577jfFge1tGE/view?usp=share_link
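As a rough illustration of the tempo-synced cutoff idea above, here is a minimal Python sketch. The BPM, base frequency and modulation depth are made-up example values, not settings from the actual mix: an LFO running at one cycle per beat sweeps the cutoff value up and down.

```python
import math

def tempo_synced_cutoff(t, bpm=120.0, base_hz=200.0, depth_hz=150.0):
    """Filter cutoff in Hz at time t (seconds), with one LFO cycle per beat."""
    rate_hz = bpm / 60.0                        # beats per second
    lfo = math.sin(2 * math.pi * rate_hz * t)   # ranges -1..1
    return base_hz + depth_hz * lfo
```

At 120 BPM the LFO runs at 2 Hz, so the cutoff completes a full sweep every half second, which is what automating the cutoff by hand to the beat approximates.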
Synthesis Update: 3rd May 2023
Since the last post, more of the structure has been added to the two covers from before, and I plan on using the studios this weekend to make a start on the sounds that will actually be used in the final tracks.
I’ve also started a third cover that mixes the two coasts, leaning more East: a somewhat old-school techno-beat-esque cover of the theme from The Mandalorian by Ludwig Göransson. Linked below is what it sounds like so far.
https://drive.google.com/file/d/1_cvM22jWZ7lmZeS8OqQfQpIWbFQmhGQ7/view?usp=share_link
Synthesis Update: 25th April 2023
There’s been a long gap since the last post, mainly because my time has been spent focusing on my dissertation. However, now that it’s been submitted, my time can be spent on this module.
I have already made a start on two of the three songs. The first is the east-coast piece. It is Switched-On inspired: a cover of Ed Sheeran’s “Give Me Love”, the idea being that it will have a vibe similar to Susanna and the Magical Orchestra’s cover of “Love Will Tear Us Apart”. Attached is a link to the draft. It uses presets from Logic, just to act as a foundation and a rough idea of what it may sound like.
https://drive.google.com/file/d/1W7E5yW4jxzT8D31xjKG0sErJZ543bQg5/view?usp=share_link
The second song is meant to be the no-coast piece: a heavy synth cover of “(Ghost) Riders In The Sky” by The Ventures. At the moment it sounds sort of 80s-dystopia-soundtrack-esque. Although I want to add more techno elements inspired by DAF (“Der Mussolini” and “Liebe auf den ersten Blick”, for example), I also want to focus on sound-design elements to add the west-coast inspiration. The link to that draft is also attached.
https://drive.google.com/file/d/1LaN_DKvEFMTHtaIE_cbejbvwmF9lSx8P/view?usp=share_link
As for the west-coast piece, I have no real plan as of now. I think the best thing to do is to book a studio session and just play about on the Oakley or Buchla, or even VCV Rack at home, and see what sticks.
Fourth ‘Music and Sound Synthesis’ Session
In today’s session, named ‘Switched On’, we were shown tracks from the 60s and 70s that exclusively used synthesisers. A common trend was covers of classical music, starting with Wendy Carlos’ ‘Switched-On Bach’, which in 1970 won Grammys for Best Classical Album, Best Classical Performance and Best Engineered Classical Recording. Carlos was also hired by Stanley Kubrick to compose synth covers of Beethoven pieces for the film adaptation of ‘A Clockwork Orange’. We also had a discussion about whether a piece done on synths counts as performed or composed, since artists can easily add their own timbral variations to the piece.
After this we were introduced to a condensed version of how to operate VCV Rack on our home systems; however, I did not understand a lot of the terminology. If I had been able to visualise it, by having my laptop with me and doing it there and then, I think I would have understood more.
The first practical task was about emulating real-life instruments. To start simple, we began by attempting to replicate a flute. As for which waveform to use, a sine wave seemed the most suitable, since it isn’t as harsh and aggressive as a sawtooth or triangle, for example. Using the ADSR module, a fast attack was set to imitate the short gap between the air leaving the player’s mouth and the sound coming out of the flute. Next, an oscillator producing pink noise was used to imitate the sound of the player breathing into the flute. The result was basic, but it gave us a good understanding of what we would need to think about if we were to recreate real sounds using synths.
https://drive.google.com/file/d/1DcntliJWjqQyX1dVXHzbfulrEyVnDTcy/view?usp=share_link
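The flute patch described above could be sketched roughly in code. This is only an illustration of the idea, not the actual patch: a sine oscillator shaped by a linear ADSR envelope with a fast attack, plus a small amount of noise mixed in for breath (plain white noise here, for brevity, rather than the filtered pink noise used in the session). All the envelope times are made-up example values.

```python
import math
import random

SR = 44100  # sample rate in Hz

def adsr(n, a=0.02, d=0.05, s=0.8, r=0.1):
    """Linear ADSR envelope of n samples; the fast attack suits a flute onset."""
    a_n, d_n, r_n = int(a * SR), int(d * SR), int(r * SR)
    env = []
    for i in range(n):
        if i < a_n:                                  # attack: ramp 0 -> 1
            env.append(i / a_n)
        elif i < a_n + d_n:                          # decay: 1 -> sustain
            env.append(1 - (1 - s) * (i - a_n) / d_n)
        elif i < n - r_n:                            # sustain
            env.append(s)
        else:                                        # release: sustain -> 0
            env.append(s * (n - i) / r_n)
    return env

def flute(freq=440.0, dur=1.0, breath=0.05):
    """Sine 'tone' plus a little noise for breath, shaped by the envelope."""
    n = int(dur * SR)
    env = adsr(n)
    return [env[i] * (math.sin(2 * math.pi * freq * i / SR)
                      + breath * random.uniform(-1, 1))
            for i in range(n)]
```

The `breath` amount controls how prominent the noise layer is relative to the tone, the same balance we set between the two oscillators on the synth.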
The next practical task was to create a collection of percussion patches that could be triggered. Using what I learnt from the previous task, I wanted to start with a kick drum, since it seemed easy to replicate: a low sine wave for the low frequencies, and pink noise with a filter on it to emulate the sound of the beater hitting the skin. From this, Matt explained the main difference between white and pink noise: white noise has constant amplitude across all frequencies, whereas pink noise sounds as if it has a filter on it, with amplitude falling as frequency rises. Whilst experimenting with pink noise, we created a sound somewhat reminiscent of helicopter blades flying overhead, which I plan to use as an introduction to one of my pieces for the final submission of this module.
https://drive.google.com/file/d/1hGwg23zgjGRgiCjODx2nvviSrx9qEF7C/view?usp=share_link
https://drive.google.com/file/d/1_3hTo9Ho4iM3htBFXp2LqQvKSXJsQVXy/view?usp=share_link
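The white/pink distinction Matt described can also be sketched digitally. One common way to approximate pink noise in software (not how the analogue synth does it) is the Voss–McCartney algorithm: sum several white-noise rows, each updated half as often as the previous one, so lower frequencies accumulate relatively more energy, giving roughly the −3 dB-per-octave roll-off that makes pink noise sound "filtered".

```python
import random

def pink_noise(n, rows=8):
    """Voss-McCartney pink-noise approximation.

    Row r holds a random value that is refreshed only every 2**r samples,
    so slow-changing (low-frequency) rows dominate the summed output.
    """
    values = [random.uniform(-1, 1) for _ in range(rows)]
    out = []
    for i in range(n):
        for r in range(rows):
            if i % (2 ** r) == 0:          # row r updates every 2**r samples
                values[r] = random.uniform(-1, 1)
        out.append(sum(values) / rows)     # average keeps output in -1..1
    return out
```

Replacing the row-update rule with "refresh every row every sample" would collapse this back into plain white noise, which is a neat way to hear the difference.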
Third ‘Music and Sound Synthesis’ Session
This session’s main focus was the history of Donald Buchla, the man behind the famous synthesiser company. He was a NASA engineer before he applied for a role at the San Francisco Tape Music Center to create a music easel. From this, he started creating his own systems, which soon grew in public recognition. They have always been relatively expensive to purchase, like the Moogs from previous weeks, so the majority of those systems tended to be found in high-end recording studios. Because of this, many smaller companies recreated (or cloned) these units.
One of the two practical tasks later in the session was an introduction to the university’s Buchla: the 200e System 7. The main modules we went over were the two fundamentals, in my opinion: the oscillators and the sequencer. I think the sequencer is significant because, of the two main synthesiser units the uni has (the Buchla and the Oakleys), the Oakleys don’t have a sequencer module, which limits what the user can create.
The second task was to create a drone piece with three different musical elements exploring complex modulations. I did not achieve this, since my knowledge of the Oakley is still somewhat limited. However, with the help of Matt, a member of uni staff, I was able to create a drone sound with an arpeggio melody over the top. Although the notes could be changed via a MIDI keyboard, in the video below the note does not change. Even though I hadn’t met the brief of the task, I did learn to use the Oakley more effectively, which is still a positive outcome.
https://drive.google.com/file/d/1fpm17pTLZb-VhzrOdFqH5jVHej8fl9Up/view?usp=share_link
Second ‘Music and Sound Synthesis’ Session
The second weekly session was similar to the first: the first hour spent learning about the history of synthesisers, and the other two hours doing practical work with the Oakleys.
For the history section, we learnt about Robert ‘Bob’ Moog, the founder of Moog Music, and the lineup of products they offer or have offered. Since there weren’t many other major synthesiser companies at the time, their products were very expensive (and still are), so they were mostly found in high-end recording studios. Over time Moog became more of a household name in the world of synthesisers, but around the time of Moog’s death in 2005, a lot of the component plans were leaked, which made it easy for other manufacturers to replicate and clone certain Moog modules. A popular clone is the ‘Behringer Poly D’, which is based on the ‘Minimoog Model D’.
For the other two hours, we used the Oakley systems hands-on. My group was with Stephen for the first hour, where he explained how to patch the synth into the ‘Diode Superladder Filter’, which let the synth achieve a warm, iconic bass tone. For the second hour we aimed to do the same thing, but on our own, which we did struggle with. However, I remembered a fair bit more than I expected from last week’s session, so I know progress is being made.
First ‘Music and Sound Synthesis’ Session
In the first session, we were introduced to the module and what to expect, in terms of the content we’ll learn and what we will be assessed on at the end. Nothing seems out of the ordinary, and all the work seems achievable.
I don’t have any real experience with physical, analogue synths, so I look forward to doing more practical work to gain experience with them. In the session, my group started by reviewing previous EP submissions for this module. The majority of them were okay, but they often sounded quite thin. Also, only a few had artwork in the metadata, which may seem like nitpicking, but it adds to the professionalism of the releases.
After that we were introduced to the Oakley. Initially it seemed very overwhelming, but once it was explained how each aspect of it worked and how to do the bare basics, it became slightly less intimidating. I am most likely going to book a studio session for sometime this week to get more practice with it and to complete the first weekly task of the module. I’ve also attached photos of what the final setup looked like in the session.
My First Year Experience with CCMS
At the beginning of the year, when I first started the computing module, I remember being nervous and, quite frankly, overwhelmed by such new software. The last time I had done anything similar to the coding I’ve done in Max was around 5–6 years ago, back in secondary school. I could say that knowledge stuck with me and helped me learn Max, but to be honest, I did struggle at the start of the year. Computing has always been one of those things I’ve never really been interested in, because it seems such a long process even to gain a basic understanding of it. But since it was compulsory for the module, I’ve had that time to learn.
Like most people at the start of the course, whenever I heard “You’ll need this object, connect it to this and then connect it to that” (paraphrasing, of course), it would mostly go in one ear and out the other, as I found it difficult to learn such a new and complicated way of working. Around this time Stephen was starting the Max workshops after the lectures. I didn’t attend many at the start of the year, as I was very nervous and was also having trouble with my mental health, so in my head I thought, “If it isn’t compulsory, I won’t feel bad about not attending.” However, this mindset caught up with me quickly when the first official submission came around in January.
I knew that I had work to do, so I started watching the recorded workshops and doing the work in them. At this point my mental health was healthier, which allowed me to concentrate, and I started to understand some of the objects in Max. My main issue, though, was that if I didn’t understand something, neither Stephen nor Harry was there live to help. I did have a one-to-one with Stephen about this, and he helped me with a synth patch I had started. I found this very beneficial, not only because progress was made, but also from a confidence standpoint, as Stephen said I was using the right terminology and suggesting correct or similar ways to program things (i.e. which objects to use).
After the submission and the very helpful one-to-one with Stephen, I realised how important it was to attend the additional computing sessions (not necessarily just for learning how to operate Max, but for my mental health too). Around this time we should have started working on our main project for the end of the year. For just over a week I struggled to come up with an idea, but I did get one in the end: using the software “reacTIVision” (RT), symbols will control audio files, and the user can adjust the volume, panning, reverb and EQ of each.
When I first started this project, the only thing I knew how to do was prepare the audio tracks and get them to play and loop. In a one-to-one during the workshop, Stephen sent me an RT workshop he had done with second years for me to learn from. I found the section where the data read by RT is sent into Max (it looked important, so I copied and pasted it hoping it was the right thing, and happened to get lucky).
I was still unsure what I had to do, so with Harry’s help I found that transferring the RT data into Max was a lot easier than I thought it would be. A float box object outputs the number RT reads from the symbol, and I simply had to connect that to the volume dial, reverb object, etc. What I didn’t know, however, was that I’d need several gate objects to determine which audio track the data would go to, so I wouldn’t be controlling the volume (for example) of more than one track at once.
Similarly to the gate object, the more I used the scale object, the more I understood how it works. It basically does the division and multiplication for you in one object: if you want the equivalent of 47/246 as ___/127, it outputs that number, with the maximum now being 127. This was extremely helpful, as the maximum value of the slider objects controlling the panning is 127, while the maximum of the X-axis was 640, and I needed a scale object for each effect.
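The arithmetic Max’s [scale] object performs boils down to one line. Here is a small Python sketch of that mapping (a linear range remap, written as a plain function rather than anything from the actual patch):

```python
def scale(x, in_min, in_max, out_min, out_max):
    """Map x from the range [in_min, in_max] to [out_min, out_max],
    like a basic use of Max's [scale] object."""
    return out_min + (x - in_min) * (out_max - out_min) / (in_max - in_min)
```

So a reacTIVision X position of 640 on a 0–640 axis maps to 127 on a 0–127 slider, and 320 maps to the midpoint, 63.5.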
Around this point I felt a lot more confident about what I needed to do and which objects I would need for the smaller tasks. For example, I wanted to add a locking feature which would stop data being sent to an audio track, so you couldn’t update its effects while it was locked. I knew I would need to do something with gates, as they control whether data is allowed to pass through and affect the controls. I spoke to Harry about this and he helped me, as it wasn’t as simple as I initially thought, but I was definitely close, which again helped build my confidence.
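The locking behaviour described above can be modelled as a gate in a few lines. This is a toy sketch of the idea, not the Max patch itself, and the class and attribute names are illustrative: effect data only gets through while the track is unlocked.

```python
class Track:
    """Toy model of the per-track lock: a closed 'gate' drops incoming
    control data instead of applying it."""

    def __init__(self):
        self.locked = False
        self.volume = 100

    def set_volume(self, v):
        # The gate: when locked, incoming data is simply discarded,
        # much like a closed [gate] object passing nothing through.
        if not self.locked:
            self.volume = v
```

In the real patch the same check is done per effect (volume, panning, reverb, EQ), with one gate routing each stream of control data.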
At this point, the majority of the project had been completed; I just needed to troubleshoot and go over it to make sure everything worked, and to my knowledge, it does.
So that is my experience with the computing module and Max in first year. I have chosen the studio pathway for second year, so it is unlikely I will take up computing again, but who knows what the future holds.