Statement
Studying at the McCallum Fine Arts Academy in high school gave me the opportunity to further my study of piano, which I had begun at the age of 13, and helped cultivate my passion for music. In addition to taking classical piano lessons in both private and group settings, I performed in the school’s Jazz band throughout my three years there. During my final year, I took two classes that introduced me to the idea of creating my own music. Music Theory and Songwriting offered very different perspectives on the creative process, but both inspired me to start composing music in my own time. After high school, I studied Music Composition at St. Olaf College under Dr. Timothy Mahr and Dr. Justin Merritt for one year, but a growing interest in cutting-edge Electronic Music led me to switch to a production-based HND course through the University of Plymouth in order to put the composition skills I had gained into practice in the domain of modern Digital Music. The course has helped me to expand upon my experience with music production in Ableton Live and Logic Pro and to develop a wide range of new skills and practices.
My primary goal for the near future is to further develop my portfolio as both an artist and a professional. The first part of this involves creating a body of work that best represents my unique artistic perspective. The second part involves expanding upon and showcasing a few key areas of strength that could be applied in the professional world. The course’s focus on the relationship between sound and other mediums is a big draw for me, as this is something I’d like to explore in depth. In particular, I’m interested in music composition and production for film, television, and games. As an experimental artist, I’m also drawn to the course for its exploration of various forms of new media and the innovation that is occurring within these fields.
Written Work - Example 1
Controlled Dissonance: A Harmonic Analysis of ‘Deacon Blues’ by Steely Dan
Jamison McMackin
Despite their reputation as a Rock band, Steely Dan wrote music that was as sonically sophisticated and experimental in nature as Modern Jazz. Fronted by Donald Fagen and Walter Becker, the band achieved a level of commercial success that is rarely seen by those whose work is so musically complex. Their 1977 release, Aja, showcased their perfectionism in terms of musicianship and production; it went on to become their most successful album, selling over five million copies. (Sweet, 2000) How, then, did a band whose music is considered highly sophisticated and experimental achieve such commercial success? The following will provide an insight into the band’s compositional process by deconstructing and analyzing the musical content of one of the best-known songs on the album, Deacon Blues, with a primary focus on its harmonic makeup. While the song follows the standard structure of a Pop song, its use of extended chords, linear voice leading techniques, and complex substitutions for standard chord progressions places it in a unique position between sophistication and accessibility.
Figure A: Triad Vs. 7th Chord Vs. Extended Chord (9th chord)
Figure B: Introduction
Until the 20th century, it was common practice in Popular Music to use triads and seventh chords, while extended chords such as ninth and eleventh chords were used sparingly and primarily for effect. (See Figure A) (Mulholland and Hojnacki, 2013) Extended chords first made their way into Popular Music in the late 1920s as Jazz musicians began to stack additional major and minor thirds onto simpler tertian chords. (See Figure C) (Gioia, 2011)
Figure C: Extended chords in early 20th Century Popular Music (Honeysuckle Rose, music composed in 1929 by Fats Waller)
(Sheet Music Direct, 2014)
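The stacking of thirds described above can be sketched programmatically. The following is a minimal illustration (the note names and helper function are my own, not from the essay) of how each added third turns a triad into a seventh chord and then a ninth chord, as in Figure A:

```python
# Illustrative sketch: building tertian chords by stacking major ("M", 4
# semitones) and minor ("m", 3 semitones) thirds on a root pitch class.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def stack_thirds(root, qualities):
    """Stack thirds on a root pitch class (0 = C) and return note names."""
    notes = [root]
    for q in qualities:
        notes.append((notes[-1] + (4 if q == "M" else 3)) % 12)
    return [NOTE_NAMES[n] for n in notes]

triad   = stack_thirds(0, ["M", "m"])            # C major triad
seventh = stack_thirds(0, ["M", "m", "M"])       # C major seventh
ninth   = stack_thirds(0, ["M", "m", "M", "m"])  # C major ninth

print(triad)    # ['C', 'E', 'G']
print(seventh)  # ['C', 'E', 'G', 'B']
print(ninth)    # ['C', 'E', 'G', 'B', 'D']
```

Each additional third extends the chord one step further up the vertical axis of the harmonic space discussed later in this essay.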
By the 1970s, with the popularity of Disco and Jazz Fusion, it had become common practice to use tonic major seventh and ninth chords in place of the tonic dominant seventh chords that were popular in the Blues and Rock music of the previous two decades. (Gioia, 2011) Sonically, the major seventh chord creates a soft, mellow atmosphere while the dominant seventh creates one that is powerful and slightly tense. The characteristics attributed to these different chord types are determined by the intervals of which they are made. (See Figure D) (Mulholland and Hojnacki, 2013) Because the major seventh chord lacks a tritone, one of the most dissonant intervals, it does not demand any immediate resolution to a more stable chord. This meant that unlike the Blues, which typically stuck to a rigid chord structure, genres that used these softer chords were able to take a more exploratory approach. (McAdams and Bregman, 1979) Upon examining the introduction of Deacon Blues, one can observe a heavy use of both seventh and extended major chords. In fact, besides the final chord, the introduction is an alternation of two chord types, the major seventh and the Mu Major chord, around a series of different starting points.
Figure D: Intervals and their consonance (or dissonance)
(Robertson, 2005)
While these chords avoid the extremely dissonant tritone, they still contain the slightly less dissonant major seventh and major second intervals, adding a subtler edge to the triad, which the group considered overly consonant and dull. (Sweet, 2000) If the harmonic content of a piece of music is thought of in terms of a two-dimensional space, then the extension of triadic chords relates to the vertical aspect of that space. (See Figure F) (Popp, 1998) Extending the vertical makeup of a piece of music does not greatly limit its accessibility if it is done using common extensions of ninths, elevenths, and thirteenths. When less common extensions are made, dissonant intervals are introduced, which creates tension. A chord with a high level of dissonance is still acceptable, but only if it is used to build tension before a release, such as at the end of a passage. An example of this is the final chord in this passage, which contains a tritone and a minor second, two very dissonant intervals. This technique of ending a passage on an unresolved chord dates back to the Baroque Era, when composers would use a V chord to create what is called a half cadence. (See Figure E) (Mulholland and Hojnacki, 2013)
Figure E: Half Cadence in J. S. Bach’s Chorale #300
(Dahn, 2010)
Figure F: Vertical Harmony in ‘Somewhere Over the Rainbow’
(Alfred Publishing Co., 2017)
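The interval characteristics discussed above can be checked mechanically. The sketch below is a hypothetical helper (the set of "dissonant" intervals reflects this essay's discussion of Figure D, not Robertson's exact chart): it flags the dissonant intervals found between any two notes of a chord voicing, here given as MIDI note numbers:

```python
# Assumed helper: flag dissonant intervals inside a chord voicing.
from itertools import combinations

# Dissonant intervals in semitones, per the discussion above.
DISSONANT = {1: "minor 2nd", 2: "major 2nd", 6: "tritone", 11: "major 7th"}

def dissonances(voicing):
    """Dissonant interval names between any two notes of a voicing
    (MIDI numbers), with intervals reduced to within one octave."""
    found = []
    for a, b in combinations(sorted(voicing), 2):
        iv = (b - a) % 12
        if iv in DISSONANT:
            found.append(DISSONANT[iv])
    return found

# Cmaj7 (C E G B): contains a major 7th but no tritone, so no forced resolution
print(dissonances([60, 64, 67, 71]))  # ['major 7th']
# G7 (G B D F): contains the tension-laden tritone between B and F
print(dissonances([55, 59, 62, 65]))  # ['tritone']
```

This mirrors the essay's point: the major seventh chord carries only a subtle edge, while the dominant seventh's tritone demands resolution.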
If Baroque Era rules of music theory were followed, however, the E dominant seventh chord at the end of the introduction would have led to a tonic A minor at the start of the verse. As a band that often eschewed tradition, Steely Dan instead wrote the verse in the key of G major, and as such the section begins on a tonic major sixth chord. While these two chords might initially appear unrelated, they are actually only different by one note. (See Figure G)
Figure G: E7(b9) to G6 (inversion) using only one movement
When harmony is approached in this way, it can be viewed as a collection of contrasting melodies that combine to create a cohesive structure. When changes in a progression are created by a gradual movement of these individual voices, it is known as voice leading. (Everett, 2004) The practice of voice leading became popular during the Classical Period, when composers created contrapuntal melodies in small ensembles made up of monophonic instruments. (McAdams and Bregman, 1979) In the 20th century, Jazz musicians playing polyphonic instruments used the same principle to create temporary dissonance within chord progressions. (Gioia, 2011)
Figure H: Passage in second half of Verse 1 shown on a piano roll
Figure J: Passage in second half of Verse 1 shown in roman numeral analysis
Figures H and J depict a progression which occurs in the second half of the verse. The G that is played in the upper voice during the F major seventh chord temporarily extends it into a familiar major ninth chord. As the rest of the voices migrate upward or downward in stepwise motion to form the next chord, however, the G is held. This adds a dissonant minor second interval to an already unstable E dominant seventh chord, creating a chord that, if played out of context, would sound arbitrary and unpleasant. Instead of resolving immediately afterward, the G moves down to an F, creating what is called a diminished seventh chord. The diminished seventh is made up of a pair of tritones and can therefore be considered the most unstable chord in terms of its interval makeup. This gradual buildup of tension is finally resolved as the voices move again to form a stable A minor seventh chord. Contrary to chordal extensions, which concern the vertical aspect of the harmonic progression, the step-based melodies that flow smoothly throughout the verse of the song relate to its horizontal aspect. (Popp, 1998) While some of the chords in this passage are highly dissonant, they do not limit the song’s accessibility, as they are given a context within a fluid, multi-voice melodic structure.
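The rise and fall of tension in this passage can be sketched numerically. In the example below (the voicings and octave placements are my own approximation of the passage described above, not a transcription of the score), the dissonant intervals of each chord are counted: the tension peaks at the held-G chord and the diminished seventh, then vanishes at the A minor seventh resolution:

```python
# Sketch of the verse passage described above; MIDI numbers, G4 = 67.
from itertools import combinations

def intervals(voicing):
    """All pairwise intervals of a voicing, reduced to within one octave."""
    return sorted((b - a) % 12 for a, b in combinations(sorted(voicing), 2))

# Intervals treated as dissonant here: minor 2nd (1), tritone (6), major 7th (11)
def tension(voicing):
    return [i for i in intervals(voicing) if i in (1, 6, 11)]

passage = {
    "Fmaj7 + held G (=Fmaj9)": [53, 57, 60, 64, 67],  # F A C E G
    "E7 + held G":             [52, 59, 62, 68, 67],  # E B D G# + held G
    "G#dim7":                  [56, 59, 62, 65],      # G# B D F
    "Am7 (resolution)":        [57, 60, 64, 67],      # A C E G
}

for name, notes in passage.items():
    print(f"{name:24} dissonant intervals: {tension(notes)}")
```

The held G against the E7 chord registers both a minor second and a tritone; the diminished seventh shows its pair of tritones; and the A minor seventh shows none, matching the resolution the essay describes.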
Figure K: Roman Numeral Analysis of Verse 1 (first half)
Figure L: Harmonic Function
(Michero, 2011)
When analyzing the complex chord structures that make up Deacon Blues, it is helpful to simplify chord types into larger families based on their harmonic function. (See Figure L) The theory of harmonic function is based on the idea that chords are a collection of scale degrees and that each scale degree has its own tendencies in terms of how it progresses. (McAdams and Bregman, 1979) To illustrate how complex phrases can be simplified into the three basic functions, a passage from the chorus has been simplified into its chord families. (See Figures M and N)
Figure M: Roman Numeral Analysis of Chorus
Figure N: Chorus Simplified into Three Categories (T = Tonic, SD = Sub-Dominant, D = Dominant)
While this analysis is an oversimplification, it allows one to see that even though this chord progression appears highly complex, it can still be separated into the harmonic functions as defined by the pioneers of Western Music. Viewing the chord progression of the chorus in this manner is a good starting point in exploring the ways in which this piece of music draws from other styles of 20th Century Popular Music, particularly Jazz.
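The kind of reduction shown in Figures M and N can be sketched as a simple lookup. The degree-to-function assignments below follow common-practice convention rather than a chart from this essay:

```python
# Assumed mapping of diatonic scale degrees to the three chord families
# used above (T = Tonic, SD = Sub-Dominant, D = Dominant).
FUNCTION = {
    "I": "T", "iii": "T", "vi": "T",   # Tonic family
    "ii": "SD", "IV": "SD",            # Sub-Dominant family
    "V": "D", "vii°": "D",             # Dominant family
}

def simplify(progression):
    """Reduce a roman-numeral progression to its harmonic functions."""
    return [FUNCTION[chord] for chord in progression]

print(simplify(["ii", "V", "I"]))        # ['SD', 'D', 'T']
print(simplify(["I", "vi", "ii", "V"]))  # ['T', 'T', 'SD', 'D']
```

As in the essay's analysis, extensions and alterations are discarded so that only the underlying function of each chord remains.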
Interestingly, Becker and Fagen report being primarily inspired by the Jazz music of the 1940s and 1950s and not by their Jazz-Rock and Fusion contemporaries such as Frank Zappa and Weather Report. (Everett, 2004) As such, the chord progressions that appear most often throughout their music can also be found in earlier Jazz Standards such as Autumn Leaves. (See Figure O)
Figure O: ii˚ V i chord progression in Autumn Leaves (composed by Joseph Kosma in 1945)
(Various, 2014)
The ii-V-I progression, as shown above, is extremely common in Jazz and is considered one of the genre’s fundamental building blocks. (Mulholland and Hojnacki, 2013) Using the concept of harmonic function, the ii-V-I can be analyzed as a Sub-Dominant followed by a Dominant, which then leads to a Tonic. In its most common form, the progression is made up of seventh chords or triads that are built upon the notes of the major or minor scale of the piece of music. In practice, however, these chords are often extended or even altered to add harmonic interest. (Sweet, 2000) The instrumentation of the traditional Jazz ensemble was a key factor in allowing for the further harmonic development of the ii-V-I structure. As the bass player was responsible for supplying the fundamental note of each chord, which was in most cases the root note, or first scale degree, the pianist and horn section began to omit the root notes from their chord voicings in order to avoid redundancy. (Popp, 1998) Aided by a strong harmonic foundation, the musicians were also able to substitute other non-essential notes, such as the fifth scale degree, for ones that added a subtle or even strong dissonance. Steely Dan, whose instrumentation is based upon this model, employed the same technique; the bassist provides the foundation while the keyboardist, guitarist, and brass players expand upon it. (Sweet, 2000)
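The construction of a diatonic ii-V-I can be sketched directly from the scale. A minimal example (the key and note names are my own choice for illustration, not from the essay) stacks scale-tone thirds on the second, fifth, and first degrees of a major scale:

```python
# Sketch: deriving the diatonic seventh chords of a ii-V-I from a major scale.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets from the tonic

def diatonic_seventh(key_root, degree):
    """Seventh chord built on a scale degree (1-7) by stacking scale thirds."""
    pcs = [(key_root + MAJOR_SCALE[(degree - 1 + 2 * i) % 7]) % 12
           for i in range(4)]
    return [NOTE_NAMES[p] for p in pcs]

# ii-V-I in C major:
print(diatonic_seventh(0, 2))  # Dm7   -> ['D', 'F', 'A', 'C']
print(diatonic_seventh(0, 5))  # G7    -> ['G', 'B', 'D', 'F']
print(diatonic_seventh(0, 1))  # Cmaj7 -> ['C', 'E', 'G', 'B']
```

Because every chord tone comes from the same scale, the progression stays consonant in its plain form; the alterations the essay describes are departures from this diatonic baseline.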
Figure P: Extended ii-v-i in A minor on a piano roll, taken from the chorus of Deacon Blues
Figure Q: Extended ii-v-i in A minor as Roman Numeral Analysis, taken from the chorus of Deacon Blues
Figure R: Tritone Substitutions and Extended ii-v-i Progressions
Figures P and Q, which depict an excerpt examined earlier for its use of voice leading techniques, are also an example of an altered ii-v-i chord progression in the key of A minor. The progression uses a technique called tritone substitution to replace what would normally be a B half-diminished seventh chord with a chord whose root lies a tritone away. (Popp, 1998) If this progression is written as a ii-v-i, the first chord can be expressed as an extended B half-diminished chord that is lacking a root note and a third. (See Figure R) While the ii-V-I is most often associated with Jazz music, it can also be found throughout the Popular Music of the 1960s. Bands such as The Beatles and The Rolling Stones, whose songwriting was a major influence on Steely Dan, used simpler versions of the progression in many of their compositions. (See Figure S) (Sweet, 2000) The chord progressions found within Deacon Blues are much more harmonically complex, but they are based upon the same foundations as those found within the Popular Music of the previous decades.
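The root relationship in a tritone substitution is simple to compute: the substitute root lies exactly six semitones away, and because six is half an octave, the relationship is symmetric. A small sketch (note names are a convenience of my own, not from the essay):

```python
# Tritone substitution as described above: the substitute chord's root
# lies a tritone (6 semitones) away from the original root.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def tritone_sub(root_name):
    """Return the note name a tritone away from the given root."""
    root = NOTE_NAMES.index(root_name)
    return NOTE_NAMES[(root + 6) % 12]

# In the A minor ii-v-i discussed above, the expected ii chord is built on B;
# its tritone substitute therefore has its root on F:
print(tritone_sub("B"))  # 'F'
print(tritone_sub("F"))  # 'B' (the relationship is symmetric)
```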
Figure S: ii-V-I within Rock Music (With a Little Help from my Friends by The Beatles) (Daniels, 2017)
The techniques that Steely Dan uses to make Deacon Blues sound harmonically sophisticated are all ways of introducing controlled dissonance. When this dissonance is used to form a naturally flowing storyline of tension and release, it is easier for the listener to follow because each sound is given a context. Furthermore, the harmonic devices used are not new to 20th century musicians; the use of extended chords, linear voice leading, and chord substitutions has been present in Jazz since the 1920s. (Popp, 1998) What makes their music appealing to both a sophisticated and a mainstream audience, however, is the band’s ability to effectively use all of these techniques within the structure of a 1970s Soft-Rock song. In contrast to the prevailing musical styles of the time, such as Disco and Punk, which embraced a stripping down of music to its core elements, Steely Dan’s dense arrangements and attention to detail have earned them a reputation as one of the most sonically complex Rock groups of all time.
Sources:
Gioia, T. 2011. The History of Jazz. New York, NY: Oxford University Press.
Everett, W. 2004. Oxford Journals: A Royal Scam: The Abstruse and Ironic Bop-Rock Harmony of Steely Dan, [online] Available through: <http://www.jstor.org/stable/10.1525/mts.2004.26.2.201> [Accessed on: 24/10/17]
McAdams, S. and Bregman, A. 1979. Hearing Musical Streams. Computer Music Journal, 3(4): 26-44. [Accessed on: 08/11/17]
Mulholland, J. and Hojnacki, T. 2013. The Berklee Book of Jazz Harmony. Boston, MA: Berklee Press.
Popp, M. 1998. Applicatory Harmony in Jazz, Pop & Rock Improvisation. Bucharest: Nemira Publishing House.
Sweet, B. 2000. Steely Dan: Reelin’ in the Years. London: Omnibus Press.
Images:
Alfred Publishing Co, 2017. Judy Garland’s “Over The Rainbow”. Available at: https://www.musicnotes.com/sheetmusic/mtd.asp?ppn=MN0035420 [12/11/17]
Dahn, L. 2010. Bach’s 12-Tone Chorale Phrases. Available at: https://lukedahn.wordpress.com/2010/02/08/bachs-12-tone-chorale-phrases/ [28/10/17]
Daniels, N. 2017. With a Little Help from my Friends. [online image] Available at: https://www.sheetmusicdirect.com/se/ID_No/111569/Product.aspx [10/11/17]
Michero, T. 2011. Chord Substitution. Available at: http://www.lotusmusic.com/lm_chordsub.html [02/11/17]
Robertson, D. 2005. The Basics: Consonance and Dissonance. Available at: http://www.dovesong.com/centuries/spiral.asp [02/11/17]
Sheet Music Direct. 2014. Fats Waller: Honeysuckle Rose. Available at: https://www.sheetmusicdirect.com/se/ID_No/48496/Product.aspx [11/11/17]
Various, 2014. Autumn Leaves. Available at: http://www.saxuet.qc.ca/TheSaxyPage/Realbook%20C/Autumn%20Leaves.jpg [10/11/17]
Written Work - Example 2
Analysis of Technical Developments and their Impact on the Audio Quality of the Final Master
Jamison McMackin
Introduction
The term mastering describes the highly misunderstood process of applying creative and corrective changes to an audio mixdown in order to prepare it for release and distribution. “The age of the mastering engineer began in the year 1948 when Ampex Corporation invented the first low-cost reel-to-reel tape recorders.” (Audio Mastering, 2013) While similar tape recorders, called magnetophones, had already been in use since the 1930s, it was the affordability and high fidelity of Ampex’s magnetic tape recorder that made recording audio to tape a standard practice. (Schoenherr, 2002) But because vinyl was the dominant listening format at the time, a transfer from tape to vinyl had to be made in order to reproduce and distribute records to the public. This was the task of the original mastering engineer, known as a transfer engineer, a role that demanded a high level of technical precision and allowed almost no creative control. (Owsinski, 2008) While there is no doubt that the mastering process employed today has changed with the evolution of music, it has also been largely dictated by the advantages and limitations of popular audio formats. While vinyl can theoretically provide a more accurate representation of an audio signal in ideal conditions, its physical components always impart imperfections upon a piece of music. On the other hand, digital’s lack of playback error, increased dynamic range, and accessibility make it a more suitable medium for modern music despite a decrease in audio quality in some formats.
The Vinyl Era
Because of the physical nature of the needle-and-groove system used in vinyl records, the medium had both volume and frequency content restrictions that had to be taken into account when mastering. The lack of space on discs made cutting grooves to represent long-wavelength, low-frequency sounds problematic. To allow for smaller grooves to be cut in the vinyl, engineers would reduce the amplitude of lower frequencies using equalization curves. (Owsinski, 2008) While this technique allowed for longer albums to be made, problems arose when different record companies began applying their own equalization curves, which could not be properly matched with inverse filters during playback. (Schoenherr, 2002) To solve this problem, the Recording Industry Association of America introduced a standard playback equalization curve in 1954 called the RIAA curve. “This curve applied a dramatic cut of 6 decibels per octave to bass frequency content and a simultaneous boost of 6 decibels per octave to the treble.” (Huber and Runstein, 2005) Upon playback of the disc, an inverse equalization curve was used to restore the original content; in attenuating the higher frequencies, it had the additional effect of quieting unwanted noises and clicks that occurred during playback.
Because the medium required such a precise balance of frequency content, vinyl mastering engineers had to be highly skilled in identifying imbalances in a timely manner. As the cutting stylus and lathe were designed to cut a disc from start to finish without stopping, error correction, in addition to identification, “had to be made on the spot or the master disc could be severely damaged.” (Owsinski, 2008) To give the engineer a chance to address these problems, which could occur as a result of excessive bass content and loud peaks in the signal, the vinyl mastering console contained an important feature which has since fallen out of use. This device, called a preview system, ensured that the signal was played back to the engineer ahead of the moment of disc cutting, allowing time for error correction. (Audio Mastering Techniques, 2013) As a result, the equalizers and compressors used to adjust these imbalances had to be designed with the real-time nature of disc cutting in mind.
“In order to allow quick changes to be made from song to song, consoles would contain two versions of each of these processing tools which were designed with stepped faders and knobs. This enabled exact measurements to be recorded on paper, by engineers, for later use.” (Owsinski, 2008)
While these mastering tools were primarily used for corrective purposes, they also enabled engineers to satisfy clients’ needs by raising the volume level of the final master to a competitive level, pushing RMS levels beyond what was then standard. While the role of the mastering engineer during this time was still highly technical, these new tools began to encourage experimentation in the quest for a bigger sound. (Katz, 2002)
The Digital Era
In 1989, the introduction of the Sonic Solutions digital audio workstation, or DAW, served as an important milestone in the transition of mastering from an analog to a digital domain, but the new medium’s advantages came with apparent limitations. This program, which was the first DAW capable of running on a computer, contained all of the necessary tools to master a song without the need to buy highly expensive analog equipment. (Audio Mastering, 2013) Sonic Solutions, and other similar programs, made the playback and editing of audio files possible by using a technique called pulse code modulation, or PCM. This process, which is still used today, involves “sampling the amplitude of the original signal at regular intervals and then quantizing each sample to the nearest value in terms of digital steps called bits.” (Columbia, 2014) The number of digital steps available for each sample, known as the bit depth, determines how accurately the audio’s changes in volume can be preserved. (Dan, 2009) In addition to sampling volume levels, PCM must also detect the frequencies present in the audio signal by taking samples at twice the rate of the highest desired frequency. “This number, which is known as the sample rate, allows for the positive and negative extremes of each waveform cycle to be sampled and for a more accurate representation of the audio to be recorded.” (Columbia, 2014) Most digital converters, however, cannot capture frequencies above this point, called the Nyquist Frequency, without introducing unwanted distortion, so these very high frequencies must be removed using a low pass filter. (Columbia, 2014)
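The two PCM steps described above, sampling at regular intervals and quantizing to digital steps, can be illustrated with a toy encoder (the function and parameter choices below are my own sketch, not drawn from any of the sources cited):

```python
# Toy PCM illustration: sample a unit-amplitude sine wave at regular
# intervals, then quantize each sample to the nearest digital step.
import math

def pcm_encode(freq_hz, sample_rate, bit_depth, n_samples):
    """Sample and quantize a unit-amplitude sine wave into integer steps."""
    levels = 2 ** (bit_depth - 1)  # steps available per polarity
    samples = []
    for n in range(n_samples):
        x = math.sin(2 * math.pi * freq_hz * n / sample_rate)  # sampling
        samples.append(round(x * (levels - 1)))                # quantizing
    return samples

# A 1 kHz tone sampled at 8 kHz with only 4 bits: very coarse steps
print(pcm_encode(1000, 8000, 4, 8))   # [0, 5, 7, 5, 0, -5, -7, -5]
# The same tone at 16 bits: far finer amplitude resolution
print(pcm_encode(1000, 8000, 16, 4))  # [0, 23170, 32767, 23170]
```

The jump from 4-bit to 16-bit output shows why bit depth governs how faithfully changes in volume are preserved.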
A standard CD master with a sample rate of 44,100 Hz only allows for frequencies up to 22,050 Hz to be preserved. (Dan, 2009) The average person, however, cannot hear frequencies above 20,000 Hz, so the changes in frequency content are typically undetectable. Because vinyl does not rely on this sampling process to produce sound, the medium has no problem playing back frequency content above the Nyquist Frequency. (Huber and Runstein, 2005) Despite vinyl’s advantages in terms of playback of high frequency content, a CD has a dynamic range, defined by the difference between the loudest and softest sounds, of around 90 to 95 dB, which is actually greater than that of earlier formats such as tape and vinyl, which had a dynamic range of around 60 to 70 dB. (Dan, 2009) This means that in addition to being non-degradable and easily transferrable, a CD master is also capable of more accurately representing changes in dynamics, making it a more suitable format in terms of balancing audio quality with accessibility. As the demand for instant streaming and quicker download speeds has increased throughout the past decade, however, newer digital formats have been developed to provide smaller file sizes. The MP3, one of the most popular formats, uses a lower bit rate in addition to lossy data compression, which discards the sounds that are least likely to be heard.
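The dynamic-range figures quoted above follow directly from bit depth: each additional bit doubles the number of amplitude steps, adding roughly 6 dB. A quick check using the standard formula for linear PCM (the formula is general knowledge, not from the essay's sources):

```python
# Theoretical dynamic range of linear PCM from bit depth alone.
import math

def dynamic_range_db(bit_depth):
    """Ratio between the largest and smallest representable amplitudes, in dB."""
    return 20 * math.log10(2 ** bit_depth)

print(round(dynamic_range_db(16), 1))  # 16-bit CD audio: ~96.3 dB
print(round(dynamic_range_db(24), 1))  # 24-bit studio masters: ~144.5 dB
```

The theoretical 16-bit value of about 96 dB sits just above the 90 to 95 dB quoted for real-world CD playback.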
While the audio quality of an MP3 is noticeably lower than that of a CD, especially on high end sound systems, there are ways in which a mastering engineer can mitigate audible degradation of the signal and prevent digital artifacts. For example, higher frequencies that are less likely to be heard are often filtered out in order to preserve sounds that are more vital to the song. (Owsinski, 2008) Doing this, however, sacrifices the “high end crisp” (Audio Mastering, 2013) that many listeners report is lacking in MP3s when compared with CD quality audio. Fortunately, as data storage continues to become more affordable, it is likely that higher quality formats, such as WAV and FLAC, will become the standard for digital audio. These formats combine the fidelity of the CD with the portability of the MP3, making them the best available option for music in a modern, connected world.
Mastering in Today’s World
The primary task of the modern engineer is to “make a track as loud as possible” while at the same time preserving the dynamics that give music its “lifelike qualities”. (Audio Mastering Techniques, 2013) This seemingly simple task has become more difficult as the average level of loudness in music has increased steadily since the 1980s. (Computer Music, 2009) This is in part due to digital’s greater dynamic range, but also because of the brain’s response to louder sounds. “Raising the loudness of music elevates the intensity of the experience. Listeners undergo significant, measurable changes in mind-body states.” (Computer Music, 2009) This trend, often termed the Loudness Wars, is evident in almost every genre from Hip Hop to Alternative Rock, but it is more commonly associated with genres that are stylistically louder, such as Electronic Music and Heavy Metal. While modern mastering engineers use many of the same tools as traditional engineers, including compressors and equalizers, it is the digital adaptations of the brickwall limiter which have allowed volume levels to be pushed to their maximum with minimal loss of clarity. This is done by attenuating the loudest parts, or peaks, in a song so that the entire signal can be raised in amplitude to a greater extent before it reaches the point of digital clipping.
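The peak-then-raise idea described above can be reduced to a few lines. The sketch below is a deliberately crude version (it hard-clips peaks, whereas real brickwall limiters such as the plugin discussed next use look-ahead and smooth gain envelopes to avoid distortion):

```python
# Bare-bones sketch of brickwall limiting: clamp peaks to a ceiling,
# then apply make-up gain so the whole signal sits louder.
def brickwall_limit(samples, ceiling, makeup_gain):
    """Clamp each sample to +/-ceiling, then multiply by makeup_gain."""
    limited = [max(-ceiling, min(ceiling, s)) for s in samples]
    return [s * makeup_gain for s in limited]

signal = [0.2, 0.9, -0.5, 1.0, -0.95, 0.1]
louder = brickwall_limit(signal, ceiling=0.5, makeup_gain=2.0)
print(louder)  # peaks tamed to +/-0.5, then doubled: nothing exceeds 1.0
```

Because the peaks are attenuated first, the make-up gain can raise the average level without pushing any sample past the digital clipping point.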
One such plugin, Maximizer, made by Izotope, uses what it terms “Intelligent Limiting” to reduce these unwanted artifacts and maximize volume levels.
“Intelligent III mode allows for the most aggressive limiting by using an advanced psychoacoustic model to intelligently determine the amount of limiting that can be done to the incoming signal before producing distortion that is detectable to the human ear.” (Izotope, 2014)
While it is true that music has gotten louder over the years in terms of the average loudness level, referred to as RMS, it has actually remained relatively stable in terms of dynamic range. (Deruty, 2011) Initially, this claim seems counterintuitive because a limiter, in quieting the loudest parts of a signal, does lessen the dynamic range of a piece of music. In order to understand why the dynamic range has not decreased in today’s popular music, however, one must consider the changes in source material and production styles that have taken place over the past decade. (Deruty, 2011) Modern styles of Hip Hop, for example, use sparse production styles that juxtapose atmosphere with intensity. Airy pads and ethereal landscapes are kept at a very low volume to allow vocals and drum parts to dominate in the mix. It appears that in response to the increased amounts of compression being applied during the mastering process, many artists have actually increased the dynamic range of their music in the compositional stage. (Deruty, 2011) While many critics of the Loudness Wars argue that a return to pre-2000s mastering techniques would “restore the dynamics that are no longer present in modern music” (Computer Music, 2009), it is ultimately the client’s needs and the musical styles themselves which dictate such trends.
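The distinction between RMS loudness and dynamic range drawn above can be made concrete with the peak-to-RMS ratio (crest factor), a simple proxy for dynamic range. The signals below are my own toy examples, not measurements from any cited study:

```python
# RMS vs. dynamic range: two signals can share a peak level yet differ
# enormously in average loudness, and vice versa.
import math

def rms(samples):
    """Root-mean-square level: the 'average loudness' measure named above."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB: a rough proxy for dynamic range."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak / rms(samples))

dense  = [0.8, -0.8, 0.8, -0.8]    # heavily limited: loud but flat
sparse = [1.0, 0.05, -0.05, 0.05]  # quiet beds with one loud hit

print(round(crest_factor_db(dense), 1))   # 0.0 dB: no dynamics at all
print(round(crest_factor_db(sparse), 1))  # several dB: dynamic despite the peak
```

The sparse signal mirrors the production style described above: low-level atmospheric material punctuated by dominant hits yields a wide crest factor even in a loudly mastered track.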
Conclusion
While vinyl’s tangibility and subtle imperfections appeal to many listeners, the format relies on very expensive equipment for mastering and reproduction, making it an unsustainable medium in a world of instant streaming and fast digital downloads. The digital medium is advantageous not only in its ability to be easily reproduced, but also in its greater dynamic range. While low quality formats, such as MP3, are currently the most popular, it is important to remember that these formats came about as a result of data storage being limited and expensive. As access to digital storage continues to rapidly increase due to affordability, higher quality formats such as WAV and FLAC will begin to replace the MP3 as the standard digital format. While the digital format does impose some technical limitations in terms of bit rate and sample rate, it has ultimately provided the modern mastering engineer with more flexibility and creative control than ever before. This sentiment is perhaps captured most accurately and concisely in the words of American mastering engineer, Bob Katz, who states that “Mastering is the last creative step in the process of producing an album.” (Katz, 2002)
Sources
Audio Mastering Techniques, 2013. [video]. Bob Owsinski. United States. (Lynda.com)
Columbia Gorge Community College, 2014. Digital Signal Processing Basics and Nyquist Sampling Theorem. [video] Available at: https://www.youtube.com/watch?v=WgJMjDh0nLU&list=PLHLsXtuS19i1_aYa78p8e3pckKHNSe0u6&index=2 [07/03/2017]
Computer Music, 2009. Why do we like our music loud? [online] Available at: http://www.musicradar.com/tuition/tech/why-do-we-like-our-music-loud-212790 [19/03/2017]
Dan, C. 2009. Sample Rate and Bit Depth: The Guts of Digital Audio. [online] Available at: http://thestereobus.com/2008/01/12/sample-rate-and-bitrate-the-guts-of-digital-audio/ [02/03/2017, 14/03/2017]
Deruty, E. 2011. ‘Dynamic Range’ and The Loudness War. Sound on Sound, [online] Available at: http://www.soundonsound.com/sound-advice/dynamic-range-loudness-war [14/03/2017-20/03/2017]
Huber D. and Runstein R. 2005. Modern Recording Techniques. Abingdon, United Kingdom: Routledge.
Izotope Manual, 2014. Maximizer [online] Available at: http://help.izotope.com/docs/ozone/pages/modules_loudness_maximizer.htm [19/03/2017]
Katz, B. 2002. Mastering Audio: The Art and The Science. Waltham, MA: Focal Press.
Owsinski, B. 2008. The Mastering Engineer’s Handbook: The Audio Mastering Handbook. Boston, MA: Cengage Learning.
Schoenherr, S. 2002, The History of Magnetic Recording. [online] Available at: http://www.aes.org/aeshc/docs/recording.technology.history/magnetic4.html [07/03/2017]