Heighten Multisensory Experience Installation Stanton Smooker 2019
The aim of this piece was to create a generative audio-visual installation based on experiences felt during the phenomenon of hypnagogia. The intention was for the viewer to be immersed in a state representative of an ever-changing conscious and unconscious mind.
The work sees a composition of various shapes and colours reacting to a generative audio soundtrack. It was created to convey a feeling of calm as well as intrigue in the viewer. Through the use of generative digital media, I wanted the piece to create a unique experience for each participant, similar to the individual and sometimes absurd experiences of dreamlike states.
Content
The soundtrack for the piece was created in Ableton. Because I wanted the audio to have a generative nature, I used clip follow actions to create a random sequence through the composition. A total of 12 instrument tracks were used, each with a variation of 3-6 melody lines. These melodies were contained within clips, and as each clip played through, a random one would be selected to follow.
To achieve some spacing in the audio, clips containing no notes were added. Although the clips were all randomised, I wanted to reduce the chance of every instrument playing at the same time, though statistically I couldn't prevent this entirely.
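As a rough illustration of this follow-action logic, the sketch below simulates one track choosing its next clip at random; the clip names, counts and the simple "any other clip" rule are assumptions made for the example, not the exact settings used in the set.

import random

# Hypothetical model of one instrument track: four melody clips plus two
# silent clips. Ableton's "Other" follow action is imitated by picking any
# other clip at random once the current one finishes.
clips = ["melody_a", "melody_b", "melody_c", "melody_d", "silence_1", "silence_2"]

def next_clip(current):
    """Pick a random clip other than the one that just played."""
    candidates = [c for c in clips if c != current]
    return random.choice(candidates)

# Simulate a short run of the generative sequence for one track.
current = random.choice(clips)
sequence = [current]
for _ in range(7):
    current = next_clip(current)
    sequence.append(current)
print(" -> ".join(sequence))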
As each instrument channel played, its output was run through a series of audio delay and reverb devices. This made the sound gradually fade away into a wider field of sound. Each instrument had a specific panning position, with some also having panning modulations.
The final audio output from Ableton is redirected into TouchDesigner using a program called Loopback. Loopback acts as an audio thru-line, giving the ability to capture an audio stream between the program generating it and the computer's audio output.
Once the audio is patched into TouchDesigner, it is sent through an audio spectrum analyser. This operator visualises and converts the frequency range, similar to a visual equaliser. The data from the spectrum is then attached to a number of generated geometric circles: each band of the spectrum drives an individual circle, with the band's amplitude dictating that circle's size. The circles are positioned using a noise LFO, with the rate of the noise transformation set using Python. The audio amplitude also drives the colour levels of the circles, so the louder the sound, the more vibrant the colour. With this cluster of circles moving around the screen and pulsing to the soundtrack, a feedback loop is introduced to trail the movement and prolong the colours on screen.
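A minimal sketch of how this per-frame mapping could be scripted in TouchDesigner is shown below (for example inside an Execute DAT). The operator names (audiospectrum1, circle1, noise1, analyze1) and the channel and parameter names are placeholders, not the ones in the actual patch.

# Sketch of the per-frame mapping (e.g. an Execute DAT's onFrameStart callback).
# All operator, channel and parameter names below are assumptions.
def onFrameStart(frame):
    spectrum = op('audiospectrum1')            # Audio Spectrum CHOP
    n_bands = min(spectrum.numSamples, 12)     # one circle per band (count assumed)
    for i in range(n_bands):
        amp = spectrum['chan1'][i]             # amplitude of band i
        circle = op('circle{}'.format(i + 1))  # Circle TOP driven by that band
        # louder band -> bigger circle
        circle.par.radiusx = 0.05 + amp * 0.4
        circle.par.radiusy = 0.05 + amp * 0.4
    # overall loudness speeds up the Noise CHOP that positions the circles
    loudness = op('analyze1')['chan1'][0]      # e.g. an Analyze CHOP average
    op('noise1').par.period = max(0.5, 4.0 - loudness * 3.0)
    return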
This entire processing stream is split into two at the audio input stage, separating the high frequencies from the low. By doing this, I could make changes to each chain that would result in slight variation between the two, adding to the unpredictable nature of the visuals.
With the two streams running independently, individual colours could now be manipulated. The colour changes were derived from simple RGB levels. I created numerous LFO operators, all running at different speeds and amplitudes, with the output of each LFO linked to a respective R, G or B parameter. With two channels of colour information available, six LFOs were needed for complete variation.
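A small sketch of how those bindings might be scripted is below; the operator names (constant_high, lfo_high_r and so on) are placeholders I have assumed for the example rather than the names in the actual patch.

# Bind each colour component of the two colour TOPs to its own LFO CHOP.
# Run once, e.g. from a Text DAT. Names are assumptions.
for chain in ('high', 'low'):
    const = op('constant_' + chain)                     # one Constant TOP per chain
    for comp in ('r', 'g', 'b'):
        par = getattr(const.par, 'color' + comp)        # colorr / colorg / colorb
        par.expr = "op('lfo_{}_{}')['chan1']".format(chain, comp)
        par.mode = ParMode.EXPRESSION                   # follow the LFO continuously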
Towards the end of the patch the two channels were composited together using a subtraction effect, which subtracts one input from the other while enhancing the colour and light values of what remains. Further operators such as blurring and resizing were added to achieve the desired aspect ratio and feel for the piece.
The piece was presented as a live process instead of a rendered movie file. Although movie playback would have been less power intensive, I felt the installation needed to be running in real time for the lifelike nature of the piece to have its intended effect on audience members. With the installation set in a basement room in Building 6, I liked the idea of entering from the street and following a dark passageway to have this otherworldly experience in what would normally be an inconspicuous room.
Reflection
Upon presenting the work for several hours, the variation of the colour changes stood out as needing further investigation. The TouchDesigner patch has several LFOs running at extremely low rates, and the values of these LFOs are sent to envelope operators which fluctuate the various RGB values of the visuals. My impression during the showing was that these colour changes could be elongated to better match the overall pacing of both the visual and audio streams. By making these envelopes longer in duration, a colour variation could unfold over up to three minutes, whereas in the presented work the colour changes are only sustained for 5-10 seconds. Although the shifting RGB values were frequent enough to captivate the audience, further experiments could enhance the perception of time that the piece was attempting to manipulate.
In regards to the generative audio track, volume level mixing proved to be an issue. As the intention was for each audio track to focus on its emerging and fading elements, the volume changes would clash with one another at certain intervals. Although the volumes of individual tracks would fluctuate during the piece, the overall gain was static. This led to moments where an abundance of sound was being produced and clipping would occur. I attempted to counteract this with individual panning of each track, and although this alleviated the clipping, I was still left with moments of distortion. This leads back to an inherent issue with generative work: it is difficult to mix volume levels correctly when timings are not defined on a linear, detailed timeline. Side-chaining the tracks against one another might help, but that would still impose a set hierarchy of volumes, when the intention was for each track to exist with its own sense of place and to enhance the sense of movement within the audio itself.
Another issue concerning audio used for visual interaction is the isolation of frequencies. Within the TouchDesigner patch, the frequencies were split between highs and lows. With the lower frequencies occupying a large part of the audio spectrum, they tended to dictate the general shapes of the visuals, minimising the effect the high frequencies would have. I attempted to increase the gain on the higher frequency sections, but the effect wasn't apparent in the final work. On discussing this issue with Harry, he recommended adding a Math operator in TouchDesigner to multiply the values of these higher-frequency channels. This would work because it doesn't manipulate the audio's amplitude itself, but rather the channel values being output. I predict this would lead to further fine-tuning of parameters within the patch to reach a harmonious state where the high and low frequencies occupy similar screen real estate. If this proved successful, further modulation of these sizes would be needed to push the work further into generative territory.
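In practice that suggestion could amount to something as small as the line below, scaling the high-band channel values before they drive the circle sizes; the operator name math_high and the multiplier are assumptions.

# Boost the high-frequency channel values (not the audio itself) after the split.
# 'math_high' is an assumed Math CHOP; 'gain' is its multiply parameter.
op('math_high').par.gain = 4.0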
A revision of this work would see the audio delays used in Ableton replaced with MIDI delay effects. These would still produce the sounds needed, but I would also gain volume, velocity and pitch information. With audio delay I only had access to the initial MIDI note trigger and the amplitude information at the end of the effect chain. Using audio in conjunction with the MIDI effects, the work would still operate as audio-reactive but with more specific detail on each instrument track. I could then set value thresholds on a MIDI channel to manipulate shape sizing. Generally speaking, I would gain more control over the reactions taking place because I would have more specific information to map onto parameters.
A major shortcoming of the work was the lack of direct interaction between the visuals and the audio. As it was presented, the audio was fed into the visuals and a reaction took place. In an attempt to enhance the work's lifelike qualities, giving the visuals the ability to then affect the audio would have created a feedback loop with the potential to push the piece into a realm of unpredictability, which was the original intention of the piece.
One attempt to create this cross-application interaction was to have an operator analyse the amplitude values of the final output within TouchDesigner. Once the amplitude values within Touch dropped below a threshold, converted MIDI information was sent to Ableton to lower the cutoff frequency of a high-pass filter. Although this was successful, it created a negative feedback loop where the visuals no longer had enough sound output for a reaction to take place, and the level in turn never rose enough for the threshold to be reset.
There are several solutions that could alleviate this issue, one being to set smaller modulation ranges, both for the amplitude threshold values and for the parameter ranges within the Ableton audio effect devices. Another could be to use the drop in visual information as a trigger, rather than as a continuous source of direct interactivity controlling Ableton. Lastly, audio could be rerouted through dummy clips: I theorise this could be achieved by creating blank dummy clips that contain no audio generation but have effect controls whose parameters are automated for a set duration. When the effects are not triggered from TouchDesigner, the audio would pass through unaffected. A sketch of the threshold approach is shown below.
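As a sketch of that trigger idea with two thresholds (hysteresis) so the loop cannot lock itself shut, something like the following could sit in a CHOP Execute DAT watching an Analyze CHOP of the output amplitude. The operator names, MIDI CC number and threshold values are all assumptions for the example.

# Hysteresis trigger: close Ableton's high-pass filter when the level drops,
# but only release it once the level has clearly recovered.
LOW_THRESHOLD = 0.08    # below this, send the 'close filter' message
HIGH_THRESHOLD = 0.20   # above this, send the 'release filter' message

state = {'closed': False}

def onValueChange(channel, sampleIndex, val, prev):
    midi = op('midiout1')                  # MIDI Out CHOP routed to Ableton
    if not state['closed'] and val < LOW_THRESHOLD:
        midi.sendControl(1, 20, 0)         # CC 20 mapped to the filter cutoff (assumed)
        state['closed'] = True
    elif state['closed'] and val > HIGH_THRESHOLD:
        midi.sendControl(1, 20, 127)       # restore the cutoff
        state['closed'] = False
    return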
Realisation
Overall, I am satisfied with how the piece turned out, in regard to both the installation and the content created. The visuals didn't become stagnant over time and the soundtrack kept the right level of engagement so as not to be disruptive or mundane.
Having the work play at the Capital theatre was helpful for seeing the visuals in a different space. It worked well as moving wallpaper, and I was quite pleased with how it fit with the main show-reel. Even with the audio being an issue, it still had a good effect as a silent screening.
One thing that stood out toward the end of the showcase installation was listening to the audio track after it had been playing for several hours. Perhaps because the opening of the audio had become quite predictable, the melodies playing at that point sounded very different. Generating for as long as it did gave the clip durations time to shift more obviously out of time with one another, leading to interesting and unique combined melodies.
The feedback from audience members was very helpful. The comments I received were of people getting lost within the work and feeling sleepy. Although my intention was not to make people fall asleep, given that the piece was meant to represent the change from waking to sleep, this made the installation feel like a success.
Ann Veronica Janssens “yellowbluepink” 2015
Ann Veronica Janssens is a Brussels-based artist whose work primarily centres on the use of light, colour and perception. Her work attempts to escape the ‘tyranny of objects’ and what she describes as their ‘overbearing materiality’.
Her work “yellowbluepink” fills spaces with washes of coloured light, illuminated clouds of vapour that render the surroundings unfamiliar and alter sensory perception. The work highlights subtle modulations of colour and light alongside the subjective limitations of sight. As a secondary result, participants' hearing becomes heightened to counter the dense field of view.
However overwhelming it might feel to walk through this fog, it creates a physical immersion that removes one from their sense of place and time by dissolving the viewer's perceptual boundaries.
I find a common thread between this work and works involving hypnagogic experiences in the internalisation and shifting of the participants' perceptions. Through this internalisation, it highlights the different states of consciousness that shift during a hypnagogic experience.
As we fall asleep we forget about the future: A quantitative linguistic analysis of mentation reports from hypnagogia
Jana Speth, Astrid M. Schloerscheidt and Clemens Speth, Department of Psychology, School of Social Sciences, University of Dundee, UK, 2016
In this research, conducted by the Department of Psychology at the University of Dundee, the researchers aimed to investigate the contents of dreamlike states, specifically the idea of mental time travel and the prominence of travel to the future or past during sleep onset.
Participants were awakened several seconds to several minutes after sleep onset and were questioned on their thoughts and visions during this hypnagogic state. The mental states were defined by EEG readings, with the sleep state treated as the point when brainwaves became global and merged into a large, synchronised network.
The report found that travel to the future decreased during sleep onset, although thoughts about the past and present remained the same. It theorised that this happens because of the participant's perception of their physical space before each mentation: having a position firmly based in the real world, participants recall memories grounded in this reality. It is also theorised that travel to the future requires deeper imagination, of a kind that occurs more frequently during REM sleep.
This study has future potential to map cognitive functions onto neurophysiological processes, advance our understanding of human mentation and help create treatments for clinical impairments in neuroscience.
Floating Points ‘Coorabell’ - Hamill Industries 2019
Hamill Industries are a Barcelona-based creative studio partnership composed of mixed-media artists Pablo Barquín and Anna Diaz. Their body of work focuses on marrying computerised, robotic and video techniques to explore concepts from nature, the cosmos and the laws of physics.
The visual for the latest Floating Points album was created by placing a liquid composition on top of a speaker rig, which then reacted to specific frequencies of the track. The liquid needed to be resistant enough to withstand the vibrations of the speakers and allow for long video captures. Appearing like a cosmos of colours and mesmerising textures, it is a direct reaction to the sound frequencies of the song.
This is an example of an audio-reactive piece built on an interplay of substances, with the correlation between the two defined just loosely enough for audiences to grasp. The power of the piece lies in the perception that the audio and visual substances combine to give the work a life of its own. It is also reminiscent of feelings felt in hypnagogic states, as the source of the interaction is not truly known and it provides an abstract, hypnotic sensation.
Milestone II
Similar to the first milestone, speed and pacing were discussed during this presentation. Darrin suggested lowering the tempo of the audio track “just to see what happens”. Adjustments would also need to be made to the movement parameters within TouchDesigner, as multiple operators dictate the visual speed. It would also involve trial and error to match all the speed variables, as the final look is judged on its general quality rather than any quantised formula.
As the visuals are the summation of two main processing streams, the oscillating opacity levels of the two occasionally drop completely at the same time. This makes the visuals disappear for 10-20 seconds before re-emerging as a new shape and colour.
The option moving forward would be to have these blank screens affect the audio channels, potentially with a more apparent effect, e.g. enhanced reverb across the whole audio piece, or simply a sharp cut of the higher frequencies. This would create a direct feedback relationship between the audio and visuals.
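One way this could be prototyped is to read the average brightness of the final composite (for example via a TOP To CHOP on a heavily downscaled copy) and send a MIDI control change to Ableton when it goes dark; the operator names, CC number and levels below are assumptions.

# Sketch (e.g. an Execute DAT): open a big reverb in Ableton while the screen
# is blank, and settle it back once the visuals re-emerge. Names are assumed.
state = {'reverb_open': False}

def onFrameEnd(frame):
    brightness = op('topto_brightness')['r'][0]   # average brightness of the output
    midi = op('midiout1')
    if brightness < 0.02 and not state['reverb_open']:
        midi.sendControl(1, 21, 127)              # CC 21 -> reverb wet amount up
        state['reverb_open'] = True
    elif brightness > 0.05 and state['reverb_open']:
        midi.sendControl(1, 21, 30)               # back to a low wet level
        state['reverb_open'] = False
    return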
Kali Malone ‘The Sacrificial Code’ 2019
Kali Malone is a Stockholm-based composer and performer whose music focuses on long-form compositions that combine modular synthesis with acoustic instrumentation, using church organs and wind instruments alongside an analogue Buchla synthesiser.
Her latest album ‘The Sacrificial Code’ features a series of slow, emotive drone organ pieces, recorded with carefully placed microphones inside a traditional church organ, removing the big room reverberation usually associated with the instrument.
By recording in this manner, the true harmonic qualities are made much more apparent, as well as highlighting the wind-driven nature of the instrument. The listener is left with simple elongated chords, but the details of the air vibrations become a form of melody or variation. I found these pieces to create a hypnotic effect, one where you listen so intently for the details that you become immersed within them.
The use of these air-vibration details would work well within the soundtrack of my piece. I feel the instrument delays could play against them to create disharmonious yet unique sounds.
Hypnagogic Imagery and EEG Activity
Mitsuo Hayashi, Kohich Katoh and Tadoa Hori, Department of Behavioural Sciences, Faculty of Integrated Arts and Sciences, Hiroshima University, 1999
This study measured the EEG readings of 7 subjects; when a button was pressed at certain intervals, the participants explained what they had been thinking or dreaming about. Five EEG stages were categorised by alpha, theta, vertex and spindle waves, as well as an EEG flattening wave.
The researchers found that hypnagogic visuals were experienced during the EEG flattening and vertex waves, which suggests that these are unique to the hypnagogic state. The study concludes by theorising that hypnagogic imagery changes depending on the EEG stage.
What is the Link Between Hallucinations, Dreams and Hypnagogic-Hypnopompic Experiences
Flavie Waters, Jan Dirk Blom, Thien Thanh Dang-Vu, Allan J. Cheyne, Ben Alderson-Day, Peter Woodruff and Daniel Collerton, The School of Psychiatry and Clinical Neurosciences, The University of Western Australia, 2016
In this paper the researchers report on the similarities and differences between sleep-related perceptions and daytime hallucinations. In the article they compare sleep-related experiences with hallucinations in Parkinson’s disease, schizophrenia and eye disease.
One of the defining characteristics of hallucinations is that they are discrete and appear to overlay regular sense perceptions, with the physical environment acting as a grounding for the episode.
An interesting difference between hallucinations/dreams and hypnagogia is that hypnagogic experiences rarely invoke intense emotions.
Milestone I
During the first milestone, the overall speed of the piece was discussed. Although the visuals were well received, they didn’t have the calming effect that I wanted; the piece felt like it was moving too fast to comprehend. Because the general size of the shapes is dependent on the audio stream, I’ll have to look into putting delay operators in TouchDesigner to make the size transformations slower.
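One possible way to do this (an assumption on my part rather than a settled approach) would be a Lag CHOP between the audio spectrum and the circle parameters, with slower rise and fall times; the operator and parameter names below are placeholders.

# Smooth the radius changes so sizes grow and shrink more gently.
op('lag1').par.lag1 = 1.5   # seconds to rise
op('lag1').par.lag2 = 3.0   # seconds to fall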
People also made a note of smaller outer circles that would appear quite frequently. Generally, people enjoyed them but I have actually been trying to remove them. I can see why they are appealing as they just subtly move around the screen, not reacting to much of the audio, but I wanted the visuals to be generally one large shape, rather than a combination of a few.
The soundtrack worked well for the demonstration but needs more variation so as not to be predictable. This can be fixed easily by adding more melody lines to each instrument, as well as fine-tuning the clip launch settings. A composition with more space might also work better if the visuals are to be slower.
Aphex Twin ‘Selected Ambient Works II’ 1994
Richard D. James is a British electronic musician and a pioneer of IDM and ambient music from the early 90s. While writing his 1994 album ‘Selected Ambient Works II’, he learnt to lucid dream and claims that 70% of the content was produced during these dreams.
The sounds on the album are very dreamlike, ranging from soft synth lines and unique textures all the way to harsh metallic soundscapes. The opening track ‘Cliffs’ is very reminiscent of a fading dream, with long delayed melodies and a disappearing soft voice.
It creates the soothing atmosphere of a deep sleep, and it comes as no surprise that it was conceived in a dream state. This quality of music would work well in my soundtrack, as I want it to have a relaxing and calming effect on the viewer and to represent a gateway between waking and sleeping.
This YouTube tutorial by the user ‘bileaem tschepe’ creates an audio-reactive particle cloud in TouchDesigner. He demonstrates how to take incoming audio information and use it to dictate shapes and movements.
He does this by running audio through an audio spectrum operator and then isolating frequencies using filter and envelope operators. He also creates a particle cloud generated using surface operators, with a noise channel introduced to randomise the overall turbulence of the cloud. The tutorial shows how the generated geometry exists in a virtual 3D space, and how the camera operator is the main way of controlling what content is seen within that space.
This shows how the different types of operators within TouchDesigner are essentially information channels that can interact with one another. Simply speaking, TouchDesigner deals in data values and the interactions between them. The Math operator is a good way to convert data values into a workable range that can be easily mapped to other parameters.
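The Math CHOP's From Range / To Range parameters do this remapping on whole channels; the same idea is available for single values in Python via tdu.remap, as in the sketch below (the operator names and ranges are assumptions).

# Remap a raw spectrum band value into a usable circle radius.
bass = op('audiospectrum1')['chan1'][2]        # raw band value, e.g. roughly 0..5
size = tdu.remap(bass, 0.0, 5.0, 0.1, 1.0)     # map into the 0.1..1.0 radius range
op('circle1').par.radiusx = size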
Accessing Anomalous States of Consciousness with a Binaural Beat Technology
F. Holmes Atwater, Journal of Scientific Exploration, Vol. 11, No. 3, 1997
In this paper, Atwater discusses how binaural beating can be used to access altered states of consciousness. This can be achieved when individuals in an environment of restricted stimulation wilfully focus their attention on binaural beats played alongside music, pink noise or various natural sounds.
Binaural beats work best in humans when the frequencies are below 1000 Hz. This is because the wavelength of such a sound is comparable in size to the human skull, so the wave is able to curve around the skull by diffraction. Because frequencies below 1000 Hz curve around the skull, incoming signals below 1000 Hz are heard by both ears. Due to the distance between the ears, however, the brain ‘hears’ the inputs from the two ears as out of phase with each other: as the sound wave passes around the skull, each ear hears a different portion of the wave. It is this phase difference that allows for accurate location of sounds below 1000 Hz.
Binaural beats are known to induce a state called hemispheric synchronisation, in which all parts of the brain operate at the same brainwave frequency. The reason binaural beats bring about this synchronisation lies in where the beat is perceived in the brain.
With each hemisphere of the brain having its own olivary nucleus, incoming sound is processed by each hemisphere independently. The difference between the two frequencies is ‘heard’ at the brainstem, with this ‘beat’ frequency influencing the brainwaves from the stem outwards.
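The beat frequency is simply the difference between the two carrier tones. As a quick sketch of how such a stimulus could be generated for testing (the frequencies, duration and file name are arbitrary choices, not values from the paper):

import numpy as np
from scipy.io import wavfile

# A 200 Hz tone in the left ear and 210 Hz in the right give a perceived
# 10 Hz beat; both carriers sit well below the ~1000 Hz limit discussed above.
sr = 44100
t = np.linspace(0, 30, 30 * sr, endpoint=False)    # 30 seconds of samples
left = np.sin(2 * np.pi * 200 * t)
right = np.sin(2 * np.pi * 210 * t)
stereo = np.stack([left, right], axis=1) * 0.3     # leave some headroom
wavfile.write('binaural_10hz.wav', sr, stereo.astype(np.float32))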
Binaural beats are only effective if participants actively enter a relaxed state. As each state of consciousness is not represented by one simple brainwave, it is difficult for binaural beats alone to have a direct effect on states of consciousness.
Brian Eno ‘Discreet Music’ 1975
Brian Eno is a British composer famous for the creation of ambient music. He explored a new way of listening to music, where at low volumes the sound sits on the verge of vanishing and merges with the background noises of the environment.
On this album, Eno created a sound-on-sound tape loop system where a synthesizer runs into an echo unit and then into a dual tape system. The dual tape system allowed for several seconds of delay, causing a cascading effect in the sound.
On his 1978 album ‘Music for Airports’, the composition is the summation of two instrument tracks, each panned hard right and left respectively. The two parts are of different lengths, which, when played simultaneously over a long duration, causes the relationship between the musical phrases to constantly shift. On each pass, phrases intersect differently, sometimes appearing to combine into a new phrase or variation.
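The arithmetic behind that shifting is simple: two loops of different lengths only return to their starting alignment after the least common multiple of their durations, so modest length differences give very long cycles of variation. The loop lengths below are illustrative, not Eno's actual ones.

from math import lcm

loop_a, loop_b = 17, 23    # loop lengths in seconds (whole numbers for simplicity)
print(lcm(loop_a, loop_b), "seconds until the two phrases realign")   # 391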
The intention for my work is to have a soundtrack that takes on a life of its own. I know that Ableton can randomise clip launch sequences; I could combine this with the cascading effect from Discreet Music to create a piece that is ever-changing. The overall sound of these two albums isn’t quite what I was imagining, so some further sound design will be needed.
Hypnagogia - The Nature and Function of the Hypnagogic State
Andrea Mavromatis, Department of Psychology, Brunel University, 1983
In this research paper Andrea Mavromatis relates the state of hypnagogia to other human states and processes, such as hypnosis, meditation, dreams and creativity. He further analyses the phenomenon in relation to neurology and psychology. Towards the end of the paper, hypnagogia is analysed for its evolutionary progression and function in humans.
Hypnagogia constitutes the dream components found across dreaming, sleeping and wakefulness. Although older than the function of sleep, it allows for the functions, as well as the psychological and physical characteristics, of both.
As for its evolutionary function in regard to basic survival, it allows the functions of sleep to be used without becoming completely unconscious. A hypnagogic state lets someone remain aware of their surroundings while allowing the introspection found during sleep, which is needed for complex problem solving.
In this way, it may also be beneficial to psychological health. It helps draw away the stress that occurs in a waking state by shifting the body towards its parasympathetic activities and functions. The lower state of wakefulness it induces also alleviates pain by keeping the body and mind relaxed.
Mavromatis views hypnagogia not as a regressive state but rather a progressive one, in which conscious and unconscious functions are brought together in a cooperative relationship.