Elly Johnson // Game Audio Designer based in the UK // BA (Hons) Professional Musicianship, PGCert in Sound & Music for Interactive Games
[Embedded YouTube video: UE4 audio implementation demo]
Sound Design and UE4 Implementation
A throwback to my first project implementing audio in UE4, using procedural sound design for lights, taps, and water.
Here’s an example:
Taps On:
This sound is made up of Squeak 1, Squeak 2, Water Start, and Water Running; Water Running loops until the tap is turned off.
Taps Off:
The tap-off sound is made by stopping the Water Running loop, then playing Squeak 1 and Water End.
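For anyone curious how this might look outside Blueprint, here's a rough C++ sketch of the tap system, not my original implementation. The asset names (Squeak1, WaterRunningLoop, etc.) are stand-ins for whatever your project uses; the audio calls themselves are standard UE4 API:

```cpp
// Sketch only: asset pointers are hypothetical stand-ins for editor-assigned sounds.
#include "Components/AudioComponent.h"
#include "GameFramework/Actor.h"
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundBase.h"

// Plays the layered "tap on" event and returns the running-water loop
// so the caller can stop it later.
UAudioComponent* PlayTapOn(AActor* Tap, USoundBase* Squeak1, USoundBase* Squeak2,
                           USoundBase* WaterStart, USoundBase* WaterRunningLoop)
{
    const FVector Pos = Tap->GetActorLocation();
    UGameplayStatics::PlaySoundAtLocation(Tap, Squeak1, Pos);
    UGameplayStatics::PlaySoundAtLocation(Tap, Squeak2, Pos);
    UGameplayStatics::PlaySoundAtLocation(Tap, WaterStart, Pos);
    // Keep the returned component so the loop can be stopped on tap-off.
    return UGameplayStatics::SpawnSoundAtLocation(Tap, WaterRunningLoop, Pos);
}

// Stops the loop with a short fade (avoids a click), then plays the tail layers.
void PlayTapOff(AActor* Tap, UAudioComponent* RunningLoop,
                USoundBase* Squeak1, USoundBase* WaterEnd)
{
    if (RunningLoop)
    {
        RunningLoop->FadeOut(0.2f, 0.0f);
    }
    const FVector Pos = Tap->GetActorLocation();
    UGameplayStatics::PlaySoundAtLocation(Tap, Squeak1, Pos);
    UGameplayStatics::PlaySoundAtLocation(Tap, WaterEnd, Pos);
}
```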
This is how the light flicker sound was made:
[Embedded audio: Day City rescore]
RESCORE: The Final Station (Day City) - Part 2/3
Continuing from my previous post, another level I worked on rescoring was a 'Day City'. These levels had very little action and were almost purely for collecting resources, picking up new passengers, and uncovering some narrative. They were more densely populated with characters and buildings, so I wanted music that sounded a little busier while still fitting within the aesthetic of the game.
There is a series of ambient chords, recorded without a click to avoid any feeling of time or tempo. I used a slight low-pass filter to give the sound a 'wind-like' quality, hopefully evoking feelings of sparseness. I then used an RTPC to fade an arpeggiated bell melody in and out depending on how dense the area surrounding the player was (sparse scene = windy ambience, dense scene = noticeable melody). I created a 'bell'-sounding synthesizer in Logic Pro X using Alchemy and set it to arpeggiate randomly; I found this made the looping harder to detect because there was no clear pattern being followed. I think it would be interesting to recreate this using procedural methods in UE4, making it random every time and avoiding loops altogether.
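To give a feel for the RTPC side of this, here's a minimal C++ sketch of how a game might drive such a parameter through the Wwise SDK. The RTPC name 'SceneDensity' and the density metric are hypothetical stand-ins; in my project the fade itself was authored in Wwise.

```cpp
// Hypothetical sketch: "SceneDensity" and the density metric are stand-ins.
#include <AK/SoundEngine/Common/AkSoundEngine.h>
#include <algorithm>

void UpdateSceneDensityRtpc(int nearbyNpcsAndBuildings, AkGameObjectID listenerId)
{
    // Map a rough "how busy is this area" count onto the RTPC's 0-100 range.
    const float density =
        std::min(100.0f, static_cast<float>(nearbyNpcsAndBuildings) * 10.0f);

    // In Wwise, this RTPC would drive the bell melody's voice volume:
    // 0 = windy ambience only, 100 = melody fully audible.
    AK::SoundEngine::SetRTPCValue("SceneDensity", density, listenerId);
}
```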
Another way this could be made more interesting would be to create loops of varying lengths, allowing them to drift further and further out of sync (akin to Steve Reich's New York Counterpoint) and causing them to reharmonise in interesting ways.
[Embedded audio: composite of the Night Station score]
RESCORE: The Final Station (Night Station) - Part 1/2
As part of my MSc in Sound & Music for Interactive Games, I worked on rescoring some levels of The Final Station (Do My Best Games, 2016) using Wwise. Because I'm such a huge fan of the game, I wanted to write something that paid tribute to its minimalist and poignant approach. And because the game revolves so much around exploring and uncovering the narrative through the environment, it was important to design systems that could generate music that sounded endless. The game also has a relatively slow pace, so I wanted the music to react to smaller plot points as well as any action, to avoid dullness and keep the music flowing. Above is a composite of the different elements of the score for a typical 'Night Station' level.
To generate variety in the ambient state, I created an ambient drone loop and two varying melodies (referred to as Melody A and Melody B), then used random sub-tracks to create different weightings. I also made use of playlist containers to ensure that no melody played straight away, establishing the ambient drone on the player's initial arrival in the level, and that the first melody to play would be Melody A.
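The weightings themselves live in the Wwise project, but the underlying idea is just weighted random selection. Here's a minimal C++ sketch of that idea, with illustrative weights (the real values are set on the sub-tracks in Wwise):

```cpp
// Not Wwise code: a sketch of the weighting idea behind random sub-tracks.
#include <random>
#include <string>
#include <vector>

std::string PickAmbientVariation()
{
    static std::mt19937 rng{std::random_device{}()};

    const std::vector<std::string> variations = {"DroneOnly", "MelodyA", "MelodyB"};
    // A heavier weight on the bare drone keeps the melodies feeling occasional.
    std::discrete_distribution<> weighted({3.0, 2.0, 1.0});

    return variations[weighted(rng)];
}
```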
For a more detailed description of how I implemented this project in Wwise, you can read the essay I wrote here.
[Embedded video: Leap Motion vox effects pedal demo]
MOTION CONTROLLED VOX EFFECTS PEDAL - LEAP MOTION & MAX MSP
From my previous experience as a singer, I found performing electronic and experimental music live a little more difficult: emulating the kind of music I had made involved a lot of live effects, and being tied to a pedal inhibited performance. This is why I thought I'd try to build a motion-controlled effects pedal. Below are some screenshots of the patch and an extract from the report I wrote evaluating the project; you can read the full essay here.
(regarding leap motion control)
After forming the effects portion of the patch, I attached each variable to a live.dial object and created a graphical interface. I then used IRCAM's leapmotion object and its help file to form a subpatch and send specific skeletal-tracking information to the root patch containing the effects. Upon receiving the information, I used 'unpack' and 'scale' to adjust each stream of data to the range needed to control each variable, and connected it to the corresponding live.dial.
I tried to assign each effect a movement that felt conducive to that effect, but I ran into difficulty with overlapping commands. For example, when tracking palm position for one effect and index-fingertip position for another, both register as moving together. This meant being more creative about which parts of the hand were tracked, and it's one of the reasons that Voice 1 and Voice 2's pitch is tracked through each hand's z-axis using 'tipposition' (the position of the index finger on each hand). Another solution was to use a swipe gesture and a gate object. I chose to make the second voice and the filter the gated effects, as I thought these were the least likely to need adjusting simultaneously; after evaluation, however, I'd like to revisit this decision (see page 5 for more information). The gating was implemented using the directional data from the swipe gesture within IRCAM's leapmotion object. Because the gate requires either a 1 or a 0 as input (0 closes the gate, 1 opens it), and each direction was represented by positive and negative numbers, I used the 'if' object to generate a 1 or 0 based on whether the input was greater or less than 0.
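For readers who don't know Max, here's the same mapping logic sketched in C++. The ranges are illustrative only; in the patch they were tuned per effect, and the real work is done by Max's 'scale' and 'if' objects.

```cpp
// Sketch of the patch's mapping logic outside Max; ranges are illustrative.
#include <iostream>

// Equivalent of Max's [scale] object: remap inLo..inHi onto outLo..outHi.
float ScaleRange(float value, float inLo, float inHi, float outLo, float outHi)
{
    return outLo + (value - inLo) * (outHi - outLo) / (inHi - inLo);
}

// Equivalent of the [if] object feeding the gate: swipe direction arrives as
// a signed value, and the gate only understands 0 (closed) or 1 (open).
int SwipeToGate(float swipeDirection)
{
    return swipeDirection > 0.0f ? 1 : 0;
}

int main()
{
    // e.g. a fingertip z position (in mm) mapped onto a 0-1 effect dial
    std::cout << ScaleRange(150.0f, 0.0f, 300.0f, 0.0f, 1.0f) << "\n"; // 0.5
    std::cout << SwipeToGate(-42.0f) << "\n";                          // 0
}
```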
Innovation - Blurring the line between Sound and Music
In this blog post, I'm going to be talking about Tobias Lilja's work on Little Nightmares (Tarsier Studios, 2017) and looking at different games that are working toward blending their sound design and music.
Tobias Lilja was the audio director on the project and worked closely with the audio team (audio designers Christian Vasselbring and Christian Björklund): 'They shared all the work with sound effects and implementation, but Tobias Lilja did most of the music and ambiences, while Christian Vasselbring did the Foley work and the more advanced technical implementation work' (Hughes, 2017).
In an interview with The Sound Architect, Lilja discusses how there was intentionally no clear separation between the sound design and music, and how he “deliberately wanted them to constantly bleed into each other by using sounds from the environment as “musical” elements, such as fog horns or seagulls.” (Hughes, 2017).
The clip below is 'March of the Guests', which is made from muffled voices, the sounds of guests slamming cutlery, and other diegetic noises from the ship.
[Embedded YouTube video: 'March of the Guests']
"I enjoy using those type of sounds because they can sneak up on you on an almost subconscious level and add both tension and immersion. The fact that I was working with both sound effects and music for the game helped a lot in making it possible to integrate the two" (Hughes, 2017). This strays from the norm of involving a composer at a much later stage. As the game has no non-diegetic graphic communication with the player (enemy NPC markers, health bars, maps), the majority of ludic information (as discussed in Week 1) is communicated through the audio alone. This is why it makes sense to integrate the sound design and music: it's a large part of what shapes the game and how it's played.
Incorporating ambient sound into music is not in itself innovative; it follows the same principles outlined by people like Brian Eno and John Cage in the '60s and '70s, and it even adheres to Erik Satie's concept of Furniture Music (1917). Lilja's pieces were also inspired by surrealist films like Eraserhead (Lynch, 1977). What makes this feel innovative is its application in video games, and particularly the studio allowing its audio team to work on both the sound design and the musical elements, making for a soundtrack that sounds integrated and complete.
Blending diegetic sound with the musical score is starting to become more prevalent in games, for example in NieR: Automata (PlatinumGames, 2017), which won "Best Score/Music" at The Game Awards 2017. "The music reflects the speed in which the player enters combat, beginning with soprano vocal lines of unidentifiable language which mix with Simone's diegetic speech, driving violin lines, and relentless timpani accompaniment that portray the desperation of both Simone, to be beautiful, and of 2B and 9S, to overcome this machine" (Smith, 2017).
[Embedded YouTube video]
While this game only uses the technique as a means of transitioning into its soundtrack, it's a nod to more games treating their audio as a whole instead of separating music from diegetic sound.
References
Hughes, S. (2017). Little Nightmares: The Depths of Audio with Tobias Lilja. Available: http://www.thesoundarchitect.co.uk/littlenightmaresinterview/. Last accessed 12/12/2017.
Smith, J. (2017). A Beautiful Song: The Adaptive Music of NieR: Automata. Available: http://melodrive.com/blog/beautiful-song-adaptive-music-nier-automata/. Last accessed 12/12/2017.
Adaptive Music in FEZ (2012)
In this post, I will be discussing adaptive music in video games, focusing on the techniques used in FEZ (Polytron, 2012) by composer Rich Vreeland (Disasterpeace).
The key difference between film music and video game music is narrative structure. Films have a set narrative timeline, so composers can create linear music; because video games are non-linear, their music can't be (Clark, 2007). This is where adaptive and interactive music techniques come into play.
FEZ - SYNC
[Embedded YouTube video: FEZ 'Sync' level]
(00:30)
This level uses adaptive techniques to build the music around the player's success within the platforming puzzle. For this game, the developers built their own software to design levels and integrated their own audio and music system (Short, 2013). Using conditional blocks (similar to triggers in UE4), more musical elements are layered in through vertical mixing as the player reaches certain altitudes. What makes this adaptive is that the music will not progress if the player can't move forward: the music itself is built from sounds attached to the conditional platforms. The level (platforms and sound) has a set tempo of 120 BPM, allowing each layer to fade in and out whilst keeping the rhythm. This is also an example of ludic information through rhythmic action, and it serves as a reward for progressing within the level.
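FEZ used its own engine and music system, so the following is only an illustrative C++ sketch of the vertical-mixing idea described above, with hypothetical altitude thresholds and fade rates:

```cpp
// Illustrative only: vertical layering driven by player altitude.
#include <algorithm>
#include <vector>

struct MusicLayer
{
    float unlockAltitude; // platform height that brings this layer in
    float volume;         // 0 = silent, 1 = fully audible
};

// Fade each layer in once the player has climbed past its trigger altitude,
// and back out if they drop below it, keeping transitions rhythm-friendly.
void UpdateLayers(std::vector<MusicLayer>& layers, float playerAltitude,
                  float fadeStepPerTick)
{
    for (MusicLayer& layer : layers)
    {
        const float target = (playerAltitude >= layer.unlockAltitude) ? 1.0f : 0.0f;
        if (layer.volume < target)
            layer.volume = std::min(target, layer.volume + fadeStepPerTick);
        else
            layer.volume = std::max(target, layer.volume - fadeStepPerTick);
    }
}
```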
Another example of these adaptive techniques is in the overarching collectible feedback sounds. One of the main goals of the game is to collect a series of cubes, and every 8 cubes form a larger cube. The feedback sound in this instance is a note of a scale: each time the player collects a cube, the next note in the scale plays. Using the music system, the game adjusts the scale these notes are pulled from depending on the key of the soundtrack in the background. One issue Vreeland ran into was music with key changes, as the scale only adapted to the base key that had been implemented.
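As a rough illustration of that feedback logic (the real implementation lives in FEZ's own music system), here's a hedged sketch assuming a major scale and MIDI-style note numbers:

```cpp
// Hypothetical sketch of the collectible feedback notes.
#include <array>
#include <cstddef>

// Major scale intervals in semitones above the root.
constexpr std::array<int, 8> kMajorScale = {0, 2, 4, 5, 7, 9, 11, 12};

// Returns the note for the Nth cube collected, transposed to whatever key
// the current background track is in. Note the limitation mentioned above:
// if the track modulates mid-piece, this root no longer matches.
int CubeFeedbackNote(int cubesCollected, int trackRootMidiNote)
{
    const std::size_t step =
        static_cast<std::size_t>(cubesCollected) % kMajorScale.size();
    return trackRootMidiNote + kMajorScale[step];
}
```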
Bibliography
Clark, A. (2007). Defining Adaptive Music. Available: https://www.gamasutra.com/view/feature/129990/defining_adaptive_music.php [Last accessed 13/11/2017]
Hunter Short, 2013. Philosophy of Music Design in Games - Fez [online video] Available at: https://www.youtube.com/watch?v=Pl86ND_c5Og [Last accessed 13/11/2017]
Dialogue in Video Games
This week I’m going to be discussing Dialogue in Video Games and some of the theory surrounding it, particularly in relation to Ludic functions.
Dialogue in video games can be broken into different levels of communication. Unlike non-interactive media (such as film and TV), video games feature two levels of communication: diegetic communication, which happens within the game world, and ludic communication, which occurs between the game and the player (Domsch, 2017). Conveying ludic information through diegetic dialogue can further immersion by creating a more realistic experience. For example, see the below video of gameplay from Uncharted: The Lost Legacy (Naughty Dog, 2017). (Timecode: 3:47)
[Embedded YouTube video: Uncharted: The Lost Legacy stealth gameplay]
In this context, the game uses dialogue (with a mixture of an audio cue and a visual outline) to notify the player that they've been spotted and of the enemy NPC's whereabouts, but within a diegetic context. In theory, this should encourage deeper immersion by communicating this ludic information to the character within the game instead of directly to the player (Domsch, 2017).
An example that could be argued to contradict this is Metal Gear Solid V: The Phantom Pain (Kojima Productions, 2015). This game uses an alerting sound cue and slow-motion visuals when the player is spotted, and although there is still diegetic sound from NPCs, it's less obvious than in the previous example. The game also uses on-screen text to communicate directly with the player. Watch the clip below for an example. (Timecode: 13:16)
[Embedded YouTube video: Metal Gear Solid V gameplay]
These two contradicting examples tie into a discussion about the role of protagonist dialogue in video games. There is an argument that a lack of dialogue from the protagonist can 'ease identification' (Domsch, 2017) and therefore immersion: by leaving out dialogue there is less characterisation, which some argue makes it easier for a player to imagine themselves as the hero. However, others have argued that the 'jarring disparity' between speaking NPCs and a mute protagonist can cause a breakdown in immersion (Miozzi, 2012). For a good example of these two arguments, it's worth looking at the difference between Outlast (Red Barrels, 2013) and Outlast 2 (Red Barrels, 2017), both first-person games with no physical characterisation, the latter of which features dialogue from the protagonist.
Aside from ludic functions, dialogue also communicates narrative information (Domsch, 2017), particularly in games like Life is Strange (Dontnod Entertainment, 2015) or those by Telltale Games, where dialogue-based decisions dictate how the game is played. Life is Strange is particularly interesting to examine due to its time-travel mechanic, which allows the player to go back and make different decisions, changing the outcome of their story and letting us see the complexity of the dialogue tree at work (although this power is taken away at certain plot points, possibly to maintain the challenge of the game). What's interesting about games like this is that, as dialogue-centric games, they have to provide challenging gameplay in a different way, using thought-provoking situations that force the player to make decisions and push themselves to new emotional places (Meslow, 2017). For example, in Life is Strange the player has to talk another character out of committing suicide by remembering important past interactions, while the time-travel mechanic is unusable.
[Embedded YouTube video: Life is Strange, Episode 2]
The 'cinematic' aspect of these games makes them almost interactive movies, which for some players makes for a very immersive gameplay experience (Meslow, 2017).
Bibliography
[XCV //], 2015. Life Is Strange · Kate Commits Suicide (Episode 2: Out of Time). [online video] Available at: https://www.youtube.com/watch?v=NqS2ppt3NQU [Last accessed 24/10/2017].
djostikk, 2017. Uncharted: The Lost Legacy Stealth Combat 2. [online video] Available at: https://www.youtube.com/watch?v=7akerTaO-UM [Last accessed 24/10/2017].
Domsch, S. (2017). Dialogue in Video Games. In: Mildorf, J. & Thomas, B. eds. Dialogue across Media. USA: John Benjamins Publishing Company, pp. 251-270.
Meslow, S. (2017). Life Is Strange, One of the Best Video Games of This Generation, Is Now Available for Free. Available: https://www.gq.com/story/life-is-strange-available-for-free. [Last accessed 24/10/2017].
resioil1, 2015. Metal Gear Solid V: METAL GEAR SPOTTED. [online video] Available at: https://www.youtube.com/watch?v=b_3q3JktCjQ&feature=youtu.be&t=13m16s [Last accessed 24/10/2017].
Procedural Sound Design
Audio plays a large part in creating an immersive environment for the player. Immersion can be understood using 'spatial presence' theory (Madigan, 2010), which relies on four key aspects to create optimal spatial presence. One of these is completeness of sensory information: players need a (mostly) realistic and complete-feeling world in order to feel truly immersed, and unnatural or nonsensical audio and visual content can distract them and pull them out of this state. In order to maintain this 'completeness', sound designers look, among other things, to avoid repetitive sound. However, particularly with footsteps and other one-shot sounds, authoring an abundance of different samples can become an arduous task. This is where procedural sound design comes in.
Instead of attaching lots of single audio files to each action, procedural sound design means creating a system that plays back several parts of the sound with slight, random differences each time (Stevens, 2016), creating a far larger variety of possible sound outputs. It's also possible to add other variables, such as different layers for different surfaces: for example, triggering more watery samples over footsteps when the player walks near water. Varying pitch and delay adds even more possibilities (Stevens, 2016).
In reality, if we complete the same action over and over again, it's unlikely we'll ever repeat it perfectly. If we tap the floor with our foot several times, we're rarely able to match the same force or hit the same part of the floor, so the sounds we hear each time are just variants of each other. Building a system to recreate this effect is usually more convincing than a human trying to create hundreds of variations of the same sound, and it saves a great deal of authoring time and memory.
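As a concrete illustration, here's a minimal UE4 C++ sketch of the footstep example above. The sound array and the near-water flag are hypothetical; in practice you might author the same behaviour as a SoundCue with Random and Modulator nodes, or in Blueprint.

```cpp
// Hypothetical sketch: FootstepSamples and WetLayer are illustrative assets.
#include "GameFramework/Actor.h"
#include "Kismet/GameplayStatics.h"
#include "Math/UnrealMathUtility.h"
#include "Sound/SoundBase.h"

void PlayFootstep(AActor* Character, const TArray<USoundBase*>& FootstepSamples,
                  USoundBase* WetLayer, bool bNearWater)
{
    const FVector Pos = Character->GetActorLocation();

    // Pick a random sample and vary pitch/volume slightly so that no two
    // footsteps sound identical.
    USoundBase* Sample =
        FootstepSamples[FMath::RandRange(0, FootstepSamples.Num() - 1)];
    const float Pitch = FMath::FRandRange(0.9f, 1.1f);
    const float Volume = FMath::FRandRange(0.8f, 1.0f);
    UGameplayStatics::PlaySoundAtLocation(Character, Sample, Pos, Volume, Pitch);

    // Surface variable: blend in a watery layer when the player is near water.
    if (bNearWater && WetLayer)
    {
        UGameplayStatics::PlaySoundAtLocation(Character, WetLayer, Pos,
                                              Volume * 0.6f, Pitch);
    }
}
```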
Tristan Panniers has some great examples of procedural sound design in his demo reel; you can check it out below:
[Embedded YouTube video: demo reel]
Sources:
Andersen, A. & Stevens, R. (2016) Why Procedural Game Sound Design is so useful – demonstrated in the Unreal Engine. Available at https://www.asoundeffect.com/procedural-game-sound-design/ Last Viewed 08/10/2017
Madigan, J. (2010) The Psychology of Immersion in Video Games. Available at http://www.psychologyofgames.com/2010/07/the-psychology-of-immersion-in-video-games/#foot_text_514_3 Last Viewed 08/10/2017.
Wirth, W., Hartmann, T., Bocking, S., Vorderer, P., Klimmt, C., Holger, S., Saari, T., Laarni, J., Ravaja, N., Gouveia, F., Biocca, F., Sacau, A. Jancke, L., Baumgartner, T., & Jancke, P. (2007). A Process Model for the Formation of Spatial Presence Experiences. Media Psychology, 9, 493-525. as referenced by Madigan, J (2010).
DIY - Building an Ambient Sound-bank & Field Recording
Last week I spent a few hours wandering around my campus recording sounds for my soundbank. Although pre-existing samples can be useful, you have a little more control over your own recordings (and it’s more fun). As this was my first time using a portable recorder and working in an environment with little control, I thought I’d write up my experience & some lessons for anyone else who might be interested.
Equipment-wise, I was using a TASCAM DR-07 and some standard earbud headphones, and I was recording 24-bit WAV files.
The main sound I was looking to record was water running through pipes. For my MA, I'm currently working on ambient sound for an abandoned-hospital game level, so I basically wanted a variety of old-bathroom sounds; modern bathrooms just don't have the same charm, as they're too functional and soundproofed. The main building on my campus is quite old and thankfully still has some older bathrooms. The main problem I ran into, however, is that old buildings tend to come with poor acoustics, which meant some of my recordings have intermittent boomy chatter that bled through. I plan to remedy this in post, probably using filters, but it wasn't ideal. (I will add that if you're using public bathrooms, you're probably going to run into people, so choose your times wisely.) Also, if you're trying to capture a particular sound, e.g. drips or running water, you're probably better off recreating it in a more acoustically treated room (or with a portable recorder that has a phenomenal signal-to-noise ratio), because isolating those sounds afterwards is much harder than getting a clean recording the first time around.
I also decided to record some lifts, as I think their mechanical hum is pretty useful, particularly for layering with other sounds to create deeper room tones. I'm really happy with these recordings, and I also caught some accidental thumps that could easily be repurposed as SFX. I really wanted to get the sound of the doors opening and closing, but the inevitable 'lift lady' ruined this by constantly announcing the state of the doors. (I also got a lot of questions about whether I was lost, because people were concerned as to why I was just riding up and down in a lift.)
I did, however, get incredibly lucky on my hunt for interesting sounds and found a malfunctioning exit light. I got my microphone as close to the source as I could to capture the truest recording possible. I'm hoping to match the flickering of this sound to the flickering lights in the level I'm working on, and I also want to build some synthesizers using it.
The next post I write will be on the post-production of these sounds and how I implement them into UE4, so stay tuned.
Ludic Functions
This week, we looked at the functions of Game Audio. For this post I’m focusing on the ‘feedback’ function, using L.A. Noire (Rockstar Games, 2011) as an example.

The feedback function helps navigate the player through the game by reacting positively or negatively to the player's behaviour.
L.A. Noire uses feedback to enhance the 'clue searching' aspect of the game: once the player has found all the necessary clues, a particular sting plays, allowing them to continue with the game. This helps the player in two ways. Narratively, it lets them know they have found all the clues and can progress; emotionally, it serves as a reward for solving that section of the game. During the clue-searching sections there are several objects the player can examine that bear no importance to the investigation at hand. What works well about the sting is that it plays only once all relevant clues have been found, rewarding the player's intuition in selecting relevant objects rather than encouraging them to click on everything, which could become tiresome in a slower-paced game like this.
Another reason this section is an interesting example of the feedback function is that, in order not to disrupt the game's realistic 1940s aesthetic, it conveys information through audio rather than visual cues. For example, when walking over a potential clue (see the clip below), the controller vibrates and a small chime plays. Not only does this keep the visuals more realistic, it forces the player to search the scene, lending itself to more immersive gameplay.
It could be argued that this is an example of the notification function rather than feedback. The two functions have a lot of overlap, and the small differences lie in technicality. However, as this sound only occurs as a direct response to the player's gameplay, it feels more like a reward than a notification.
[Embedded YouTube video: L.A. Noire crime scene investigation]
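To make the mechanic concrete, here's an illustrative sketch of the feedback logic described above. This is not Rockstar's implementation; the clue counts and the sting trigger are hypothetical.

```cpp
// Illustrative only: a clue counter that fires the reward sting exactly once.
#include <iostream>

struct CrimeScene
{
    int relevantCluesFound = 0;
    int relevantCluesTotal = 3;
    bool stingPlayed = false;
};

void OnClueInspected(CrimeScene& scene, bool isRelevant)
{
    if (!isRelevant)
        return; // red-herring objects give no reward

    ++scene.relevantCluesFound;
    if (scene.relevantCluesFound == scene.relevantCluesTotal && !scene.stingPlayed)
    {
        scene.stingPlayed = true;
        std::cout << "Play 'all clues found' sting\n"; // the reward feedback
    }
}
```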
Sources:
Gamespot, 2011, L.A. Noire - Bloody Scene (Xbox 360, PS3) [Online Video] Available at: https://www.youtube.com/watch?v=z5n2IGSngt0 Last Viewed 04/10/2017
Huiberts, S. 2010, 'Captivating Sound - The Role of Audio for Immersion in Computer Games', University of Portsmouth, Available at: <http://download.captivatingsound.com/Sander_Huiberts_CaptivatingSound.pdf>. Last Viewed 04/10/17
Rockstar Games, 2011, L.A. Noire [Video Game]