Audio
This is the final, more rock-oriented track, more closely aligned with my original genre choice. It's further changed and tweaked in-game by the audio mixers.
Audio
This is the first audio track created for the mixers, made in Soundation. Less rock-focused, made just to have three distinct synchronised audio elements/tracks.
Project Blog 11: Getting the game ready for testing: the audio feedback build
I also wanted to let people modify the audio parameters of the game, in part because I had some feedback from my tutor that the prior track was a bit high, especially the strings section. This may have been fixed by the change to the rock tracks and swapping these assets out for some lower-volume ones, but I thought the best source of feedback would be other people.
Ideally I would have had a double-handled slider to control the minimum sound (the base level a track settles to after the initial interaction, as a background volume) and the maximum sound (the volume a track reaches after interactions). But Unity's built-in in-game UI only allows for single-handled sliders, so I went for the imperfect solution of two sliders, where the second is a number influenced by the first.
Initially showing decibels, these were changed to abstracted positive values, easier for testers to understand. I also wanted to let people control the decay of the tracks that happened over time. I tried to get the decay working with Time.deltaTime, measured in seconds, but this proved quite tricky, so I left it as is. With all these different values this skyrocketed to 9 sliders for the audio build!
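Conceptually the two sliders were linked something like this (a rough sketch; the component, parameter names and dB mapping are illustrative rather than my actual code):

```
using UnityEngine;
using UnityEngine.UI;
using UnityEngine.Audio;

// Hypothetical sketch: two linked single-handled sliders standing in for a
// double-handled one. Field and parameter names are illustrative.
public class LinkedVolumeSliders : MonoBehaviour
{
    public Slider minSlider;  // background level a track settles back to
    public Slider maxSlider;  // peak level a track can reach on interaction
    public AudioMixer mixer;  // assumes an exposed parameter "DrumsMinVolume"

    // hooked up to minSlider's OnValueChanged event in the inspector
    public void OnMinChanged(float value)
    {
        // keep the "max" handle at or above the "min" handle
        if (maxSlider.value < value)
            maxSlider.value = value;

        mixer.SetFloat("DrumsMinVolume", ToDecibels(value));
    }

    // abstracted 0-10 tester-facing value mapped back to decibels
    float ToDecibels(float abstractValue)
    {
        return Mathf.Lerp(-80f, 0f, abstractValue / 10f); // 0 = near silence, 10 = 0 dB
    }
}
```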
While this was a lot of sliders, I wanted to see what visions of sound people had. One instrument emphasised over the others? Maybe the bass decaying less and being more consistent, with a higher minimum volume? Everything mixed lower because my sounds were too loud? Things decaying super quickly so that you have to constantly vacuum up sound agents to maintain volume, and be very active?
In addition to this, the separation between gameplay and audio settings is somewhat flawed, as the audio settings people can curate are limited to the default gameplay build I give them. But this felt a worthy compromise compared to bombarding people with too many sliders at once.
The particle system was updated to 2D flat squares that better fit in with the minimalist, rigid shapes of my game.
UI was updated to be more legible, and some variable names were changed to be more easily interpretable after pilot test feedback from my tutor. I also limited the values displayed to one decimal place, so people would be less overwhelmed. Toggles were changed to dropdown menus.
One overlooked feature that required a lot of sloppy duplicate code and busywork was keeping the same variables between pauses, and updating them in real time. This required functions and public access across multiple scripts to store variables when the game wasn't paused. This was likely because of a shortcut I used: setting the pause screen to inactive so that it was off during gameplay. Variables that were edited in the pause screen were stored in the gameManager so that they could be called again when the pause screen started back up.
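A minimal sketch of that workaround, assuming the gameManager exposes matching public fields (slider and field names here are illustrative):

```
using UnityEngine;
using UnityEngine.UI;

// Sketch of the workaround: copy slider values into the persistent gameManager
// before the pause canvas is set inactive, and restore them when it reactivates.
public class PauseMenuValues : MonoBehaviour
{
    public Slider spawnDelaySlider;
    public GameManager gameManager; // assumed to expose a public float spawnDelay

    void OnEnable()
    {
        // pause screen just reactivated: restore the last edited value
        spawnDelaySlider.value = gameManager.spawnDelay;
    }

    void OnDisable()
    {
        // pause screen about to be set inactive: stash the value so it survives
        gameManager.spawnDelay = spawnDelaySlider.value;
    }
}
```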
Project Blog 10: Getting the game ready for testing: The gameplay build
The next stage of developing the game involved getting it ready for wider testing:
This involved creating a pause screen that could edit the parameters of the game, which would then be saved and could be re-edited, as well as creating the test paper and the sliders and buttons people would test with.
First I needed to decide which variables I wanted to test, was unsure about, and wanted feedback on. I needed to create new functionality to alter certain variables, and create new ones for the game to interact with. I also created some new gameplay mechanics/styles for the purposes of testing and comparison.
The first order of business was adding a pause screen. I was initially panicked at this prospect because I hadn't planned for it, and was worried my audio-intensive game, which hadn't been designed with pausing in mind, would struggle to have everything sync back up. It turned out to be surprisingly easy!
Following the info here: https://gamedevbeginner.com/the-right-way-to-pause-the-game-in-unity/
Simply changing the time scale to 0 stopped all game activity. No mechanics were running outside of Time.deltaTime, and those that didn't use it were being updated in Update() in a way that still worked.
I created a pause canvas UI which displayed over the whole screen when the game was paused on key press, and wrote instructions for the game's controls on the pause menu.
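The core of the pause ended up being just a few lines, roughly along these lines (a sketch assuming a pauseCanvas object assigned in the inspector and Escape as the pause key):

```
using UnityEngine;

// Minimal sketch of the pause approach described above.
public class PauseController : MonoBehaviour
{
    public GameObject pauseCanvas;
    bool paused;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Escape))
        {
            paused = !paused;
            // timeScale = 0 halts everything driven by scaled time, which is
            // why the Time.deltaTime-based mechanics stop cleanly
            Time.timeScale = paused ? 0f : 1f;
            pauseCanvas.SetActive(paused);
        }
    }
}
```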
The resource for the player was still regenerating while paused, so I rewrote the meter to run on Time.deltaTime. This also had the benefit of making the variables used to control it more legible to me, so that I could see the player inactivity meter would last 6 seconds: a relatively long time, as I wanted the play experience to be somewhat relaxed and approachable. I also thought players might be able to change its duration, but I ended up not testing for this, as I felt there was already too much being tested and it did not fit with the other variables I was testing.
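In rough terms the reworked meter behaves like this (a sketch with assumed names and an assumed regeneration rate, not my exact script):

```
using UnityEngine;

// Sketch of the reworked meter: measured in seconds via Time.deltaTime, so a
// maxSeconds of 6 reads directly as a six-second inactive window.
public class InactivityMeter : MonoBehaviour
{
    public float maxSeconds = 6f; // how long the inactive state can last
    public float current = 6f;
    public bool inactive;

    void Update()
    {
        if (inactive)
        {
            current -= Time.deltaTime;            // drain while inactive
            if (current <= 0f) inactive = false;  // forced back to the active state
        }
        else
        {
            current = Mathf.Min(current + Time.deltaTime, maxSeconds); // regenerate
        }
    }
}
```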
I also changed the music tracks from the more synth- and strings-oriented temporary ones to a more pleasing and cohesive generic rock output. These too were made in Soundation. While I intended for the game to be closely tied to desert rock, I was limited on time and my own emotional resources for composing new music. I felt that composing something quicker that was close to my original intent and theming was the best use of my time, without going on a massive tangent recording my own playing. Finding stoner rock samples is not easy. The set I used in Soundation, from the 'arena rock' collection, was more accessible and a better match than the previous audio track.
Next I changed the dash so that dashing while in the inactive state deleted enemies without triggering them. This let people clear the field and collect themselves again while maintaining a 'rest' period in the music track. There was some trouble in checking for the player when dashing, as the way I stopped sound objects being triggered was to turn the player's trigger off. Giving the player access to the spawner allowed me to get the location of the sound agent and call the spawn particles function, spawning the particle prefab at the sound agent's position, by calling the function in OnCollisionStay instead of OnCollisionEnter:
```
void OnCollisionStay(Collision collision)
{
    if (dashing && !active)
    {
        //if (collision.gameObject.name != "Player")
        //{
        spawnController.SpawnParticles(collision.gameObject.transform.position);
        Destroy(collision.gameObject);
        Debug.Log("Destroyed : " + collision.gameObject.name);
        //}
    }
}
```
I could check for the player dashing while also satisfying the inactive state. The check only occurred on collision stay instead of all the time, so it didn't require too many additional processes, though I can see it could have been optimised further.
Using the default Unity Slider UI I added sliders for the spawn rate: the time added on to each spawn interval. This was renamed 'spawn delay', as raising this value increased the time between spawns, unlike the implication of 'spawn rate', where you might think a high value would make spawning happen more often.
I also added 'spawn variance' to control the random range that the spawn interval was affected by; this was added on top of the spawn delay.
In this way you could make the game spawn things consistently with 0 variance/randomness, or make it less consistent. A spawn variance of 10 would have longer pauses, but also sometimes shorter ones; however, the variance was always added on to the interval as extra time, which was a major limitation of this spawn system.
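Roughly, each interval works out as the delay plus a random slice of variance, something like this sketch (names assumed):

```
using System.Collections;
using UnityEngine;

// Sketch of how the two tester-facing values combine: the next interval is the
// base delay plus a random slice of variance, always added on, never subtracted.
public class SpawnTimingSketch : MonoBehaviour
{
    public float spawnDelay = 2f;    // fixed time added to every interval
    public float spawnVariance = 1f; // random extra time on top

    void Start()
    {
        StartCoroutine(SpawnLoop());
    }

    IEnumerator SpawnLoop()
    {
        while (true)
        {
            float interval = spawnDelay + Random.Range(0f, spawnVariance);
            yield return new WaitForSeconds(interval);
            // Spawn(); // actual spawning omitted from this sketch
        }
    }
}
```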
'Spawn change', originally called 'spawn coefficient', changed the coefficient of the calc adjustment value described in a previous post. Explaining this algorithm was seen as unnecessary for testers, and 'coefficient' too confusing, so it was left as the vaguer 'spawn change'.
The next variable, chosen initially by ticking one of two boxes that switched each other on and off, and later via a dropdown menu that implied a discrete choice between two options, was inverting the player's inactive state behaviour. This would make the default state the inactive one and the right-click, resource-limited state the active one. I wanted to see whether people preferred this more active play style, which I felt was in some ways more like the activity of playing music, because resting is easier than playing an instrument; playing is an active choice. I know this game is abstracted, and I preferred the original behaviour myself, but I was curious what people's reactions would be. I wonder if it would be biased by the defaults.
I also fixed a bug where dashing while inactive didn't change the sound agent count, which would mess with the calc adjustment function, essentially increasing the time between spawns as the player deleted enemies with the inactive dash, even though there were fewer enemies on screen.
Recognising that the current enemy movement was based on transforms, was somewhat inelegant, and let sound agents be launched offscreen, I added boundaries to delete sound agents pushed offscreen and remove them from the total count. I also let people edit the sound agent speed, and choose between transform-based and velocity-based sound agent movement. This is a very subtle change, and velocity may be more consistent, but without further editing so that agents could be pushed by the player, the transform method of movement allowed the player to bunt enemies away while inactive. I thought that was fun, but so was having enemies cling to you like a mosh pit in some ways. I left the choice up to testing.
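The boundary check amounts to something like this (the tag name is an assumption; soundObjects is the spawner's running count used by the calc adjustment):

```
using UnityEngine;

// Hypothetical boundary sketch: a large trigger volume around the play area;
// any sound agent that leaves it is removed and subtracted from the spawner's
// running total.
public class OffscreenBoundary : MonoBehaviour
{
    public SpawnController spawnController; // assumed to expose a public int soundObjects

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("SoundAgent")) // assumed tag
        {
            spawnController.soundObjects--; // keep the calc adjustment count honest
            Destroy(other.gameObject);
        }
    }
}
```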
Project Blog 9: Further iteration and polish; A new gameplay mechanic: Not triggering sound agents!
After discovering that building and publishing the game at this point meant the edges of the screen were cut off, some settings were changed to alter the boundaries the player could and couldn't enter, in relation to the camera position, so that they wouldn't be able to move outside of the camera's field of view. Thinking that the game could use more visual feedback for when the player deleted/interacted with a sound agent, I added an early particle system to the game. However, not much time was spent polishing its visuals at this point, and the particles were quite ugly in a way that didn't fit with the abstract look of the game: a very web 2.0-looking gradient circle.
Taking some inspiration from a tutorial I had remembered looking at earlier:
https://learn.unity.com/tutorial/introduction-to-space-shooter#5c7f8528edbc2a002053b72a
I decided to make use of the fact that, despite our 2D perspective, the game was rendered in 3D, by tilting the player avatar cube slightly in relation to the directional movement. This took some additional effort given our altered perspective and co-ordinate space; originally it was pivoting about the wrong axis.
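A rough sketch of that banking effect; the axis mapping here is an assumption for a top-down playfield with rigidbody-driven movement, and had to be adjusted for my actual co-ordinate space:

```
using UnityEngine;

// Rough sketch of the tilt/banking effect, in the spirit of the Space Shooter
// tutorial linked above. Values and axis choices are illustrative.
public class PlayerTilt : MonoBehaviour
{
    public Rigidbody rb;            // assumes the avatar moves via rigidbody velocity
    public float tiltAmount = 15f;  // degrees of lean at full speed
    public float tiltSpeed = 5f;

    void Update()
    {
        Vector3 v = rb.velocity;
        // pitch with forward/back movement, roll against sideways movement
        Quaternion target = Quaternion.Euler(v.z * tiltAmount, 0f, -v.x * tiltAmount);
        transform.rotation = Quaternion.Slerp(transform.rotation, target, tiltSpeed * Time.deltaTime);
    }
}
```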
In response to early tutor feedback, I also decided that in order to afford more control over when the player triggered notes I would add a new gameplay mechanic: the ability to not trigger/ interact with sound agents by means of switching to an alternate state.
This functioned as a right-click toggle which made it so that the player no longer triggered sound agents. To visualise this, the player turned transparent to show the difference between the two states. This was done by altering the player material colour in script upon calling a state-change function. This came after some research and botched attempts that resulted in a lack of colour, or recoloured the player so that it lost our default texture.
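The recolouring boils down to something like this, assuming the material's rendering mode is already set so alpha is respected (my actual function names differed):

```
using UnityEngine;

// Sketch of the state-change recolour: fade the material's alpha. Assumes the
// material uses a Fade/Transparent rendering mode so alpha actually takes effect.
public class PlayerStateVisual : MonoBehaviour
{
    Renderer rend;

    void Awake()
    {
        rend = GetComponent<Renderer>();
    }

    public void SetInactive(bool inactive)
    {
        Color c = rend.material.color;
        c.a = inactive ? 0.4f : 1f; // translucent while not triggering sounds
        rend.material.color = c;
    }
}
```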
Which was pizza at this point, by the way. Some placeholder textures of pizza, olives, mushrooms and skulls, used under a Creative Commons use-and-modification licence, were used to texture and distinguish each sound agent and the player. This was done with the intention of later deciding on a more coherent visual style, but it persisted throughout most of development as gameplay changes took priority over visual and aesthetic ones.
Back to the main inactive gameplay mechanic: I also implemented a resource which drained in the inactive state, so that players could not remain in it permanently. This meter was represented by a radial bar around the player that drained while the player was inactive until it reached 0, triggering a switch back to the active state. Initially these values and the meter were arbitrary numbers, but later in testing they were changed to function using Time.deltaTime as seconds, to be more interpretable and manageable.
This meter let players choose to navigate through the space without triggering sounds, giving them more time to reposition in the environment and collect themselves so that they could think about what sounds they wanted to trigger and when, as opposed to being overwhelmed and bombarded with no control. I needed to learn about world-space UI, image rendering and fill techniques.
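The display side is roughly this (assuming a UI Image set to Filled / Radial 360 on a world-space canvas parented to the player):

```
using UnityEngine;
using UnityEngine.UI;

// Sketch of the world-space radial bar around the player.
public class RadialMeterDisplay : MonoBehaviour
{
    public Image fillImage;
    public float maxValue = 6f; // seconds of inactive time at full

    public void UpdateMeter(float currentValue)
    {
        // fillAmount runs 0..1, so normalise the remaining resource
        fillImage.fillAmount = Mathf.Clamp01(currentValue / maxValue);
    }
}
```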
Links used to learn about transparency and material rendering in unity:
https://docs.unity3d.com/ScriptReference/Renderer-material.html
https://docs.unity3d.com/Manual/StandardShaderMaterialParameterAlbedoColor.html
https://answers.unity.com/questions/584873/renderermaterialcolora.html
Links used to learn how to make a radial slider in unity:
https://www.youtube.com/watch?v=uDlGIXFeNwg
https://forum.unity.com/threads/how-to-detect-edge-of-image-using-the-fill-amount.461283/#post-3011164
https://docs.unity3d.com/Packages/[email protected]/manual/HOWTO-UIWorldSpace.html
Project Blog 8: Beginning the process of iteration, and influencing the structure of play
It was at this point that the bones of the game originally envisioned and outlined by the design document and intentions had more or less taken shape, and what remained was polishing, adding features, bug fixes, and expanding the concept. Some measure of structural influence on the song, an aesthetic more in line with stoner rock, the potential for the aesthetic and sound agents to become more literal, and other visual polish were intended areas of development, but these were de-prioritised, and most would end up being cut from the final version of the project.
Spawning was changed to occur from all four orthogonal directions rather than just the top of the screen. Sound objects/sound agents were made to actually use rigidbody collision, as opposed to script-defined proximity detection against game objects with other tags.
This period saw the most development of how the spawn rate worked. Instead of the interval between spawns of an object being controlled by random intervallic changes, it would take into account more game parameters, in a way intended to be more intelligent:
```
private float CalcAdjustment(int obj) //needs to be finished
{
    float n = (float)spawnController.soundObjects; //total number of sound agents
    if (n != 0)
    {
        //fraction of selected sound agent in relation to total * (number of that object + total)
        return ((obj / n) * (obj + n));
    }
    else
    {
        return 0;
    }
}
```
This calculation of an 'adjustment' to the next interval at which a sound agent would spawn took into account the total number of sound agents, as well as the count of each individual type of sound agent in relation to the total. As an example, if there were 20 sound agents on screen, it would take longer for any sound agent to spawn than if there were only 5. On top of this, if of those 20 sound agents 15 were drum agents and only 2 were strings agents, the strings agents would have less time added by their CalcAdjustment value than the drum agents, making the strings agents spawn more often than the drum agents.
While a small component that is hard to identify in the game, this was key to parameterising the spawn rate, and it affected the structure of the song and output somewhat by responding to game context and changing the player's potential input. While not as far-reaching as the original intention of having different types of music spawned/available, it felt an essential feature for carrying the original design intentions of the project.
Spawning and the spawn system in general remain an underdeveloped and under-researched part of the game that requires more attention in future.
Project Blog 7: Incorporating audio mixers
Having multiple audio layers playing in sync as one track, each of which could be modulated/interacted with in real time, was not served by my current audio source setup. It turned out that playing individual sound files via the sound class method, using an audio source, meant that changing a clip that was already playing was quite difficult.
Real-time mixing was better served by a newer audio feature of Unity called audio mixers:
https://subscription.packtpub.com/book/game_development/9781787286450/4/04lvl1sec27/audio-mixer-scripting-with-parameters
https://answers.unity.com/questions/988884/how-do-i-use-audiomixergetfloat.html
This meant that how my audio was played had to be totally redone. The audio manager now got and set the volume for 3 different audio tracks, each of which had its own script: drums, bass, strings. Initially silent until the first sound agent is interacted with, which starts the track, these 3 tracks would keep a quiet base level of volume in the background, which would increase for each track whenever its corresponding sound agent was triggered by player collision. After raising the volume by a set amount, it would decay over time back to the original quiet base volume.
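Per track the behaviour boils down to something like this sketch; the exposed parameter name and the decay numbers are placeholders rather than my real values:

```
using UnityEngine;
using UnityEngine.Audio;

// Sketch of one track's behaviour: an exposed mixer parameter is bumped when
// the matching sound agent is triggered, then decays back towards a quiet base.
public class TrackVolumeController : MonoBehaviour
{
    public AudioMixer mixer;
    public string exposedParam = "DrumsVolume"; // assumed exposed parameter name
    public float baseDb = -20f;        // quiet background level
    public float maxDb = 0f;           // ceiling after interactions
    public float boostDb = 6f;         // added per trigger
    public float decayPerSecond = 2f;  // dB lost per second

    // called when the player collides with this track's sound agent
    public void OnAgentTriggered()
    {
        if (mixer.GetFloat(exposedParam, out float db))
            mixer.SetFloat(exposedParam, Mathf.Min(db + boostDb, maxDb));
    }

    void Update()
    {
        // decay back towards the base level over time
        if (mixer.GetFloat(exposedParam, out float db) && db > baseDb)
            mixer.SetFloat(exposedParam, Mathf.Max(db - decayPerSecond * Time.deltaTime, baseDb));
    }
}
```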
Each controller for the 3 types of sound agents handled their movement towards the player, their collision (which was just based on coded proximity at this point), and sending info to update the audio for their corresponding tracks.
At this point all tracks were intended to decay to the same minimum and rise to the same maximum. The master mixer, which acts as something of a parent to the 3 sound agent mixer groups and the audio in general, was used to mediate the minimum and maximum audio output, to avoid clipping and to avoid decaying to complete silence as though the music had stopped. However, this meant that some tracks, like the strings layer which I hadn't mixed appropriately, cut through the mix in a way that felt jarring and clipped a touch in a way that wasn't intended. This would be addressed when the track changed.
Audio sources for the bass, strings, and drums were still used in the game hierarchy, but were no longer the sole means by which the tracks were played.
I felt it important that the audio didn't start until the first sound agent was interacted with, at which point the sound would continue playing, because in this way it was like the player choosing when to begin their own 'performance'.
The music for these tracks was made using Soundation; I just wanted to create 3 distinct audio layers that coalesced into one synchronised track. It didn't resemble the rock themes originally intended, but being what was available, it was implemented as a temp track:
https://soundation.com/
Used/ read for research:
https://answers.unity.com/questions/988884/how-do-i-use-audiomixergetfloat.html
https://answers.unity.com/questions/949627/c-setting-audiomixergroup-through-code.html
https://www.youtube.com/watch?v=9tqi1aXlcpE
https://www.youtube.com/watch?v=vOaQp2x-io0
https://gamedevbeginner.com/how-to-play-audio-in-unity-with-examples/
Project Blog 6: Implementing sound as arrays part 2
At this point I was debating the use of lists vs arrays to store sounds to iterate through, following Hylics' example of free-form music, and just to get something musical happening in game. I wanted to keep track of how many sound objects there were of each type in case that became a viable avenue for modulating the audio, but eventually I realised I was only going to keep a count of the total number of each type of sound object and wouldn't necessarily need to access each indexed entry, and so arrays were suitable enough.
I created an audio manager that used a sound class written by me to play the audio, iterating through each array, following Brackeys' example:
https://www.youtube.com/watch?v=6OT43pvUyfY
The audio manager was implemented as a singleton following this example. While not strictly necessary, this was useful and interesting as a way of limiting the instances of audio managers in my game to one, as this audio manager was intended to handle all the input from players and enemies and output all the necessary sound.
This audio manager iterated through two different arrays of sounds: rhythm and lead guitars. These were composed of two audio clips chopped up into small snippets, with each clip named with a number, so that the audio manager, accessing the sound class, could call the relevant audio clips by iterating through the array and parsing a string + iterated number. This resulted in two arrays of clips that were looped through.
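In outline, the singleton manager looked something like this (a loose sketch; clip names, arrays and methods here are illustrative, not my exact sound class setup):

```
using UnityEngine;

// Loose sketch of the Brackeys-style singleton and array iteration.
public class AudioManagerSketch : MonoBehaviour
{
    public static AudioManagerSketch instance;

    public AudioClip[] rhythmClips; // e.g. snippets named "rhythm1", "rhythm2", ...
    public AudioClip[] leadClips;
    public AudioSource source;

    int rhythmIndex;

    void Awake()
    {
        // keep exactly one audio manager alive
        if (instance == null) instance = this;
        else { Destroy(gameObject); return; }
    }

    public void PlayNextRhythm()
    {
        source.PlayOneShot(rhythmClips[rhythmIndex]);
        rhythmIndex = (rhythmIndex + 1) % rhythmClips.Length; // loop back around
    }
}
```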
Spawning of these objects was handled somewhat crudely at first, as they spawned only from the top of the screen and at a consistent rate, then changed so that each object spawned at an interval that randomly varied within a range. Though this, combined with there only being two spawning object types and areas, still resulted in a flat experience of spawn rate.
This iteration through arrays of sounds, while similar to something like Hylics in drawing from a preselected linear track, and certainly musical, was quite limited in expressing player creativity. It was more intended as an initial audio test and direction, so at this point we switched from using audio sources and arrays of sounds to a single playing track with layers.
Project Blog 5: Implementing sound as arrays and audio sources part 1
For the next stage of the project the priority was getting familiar with sound in Unity, as well as triggering different sound outputs from different sound agents. At this stage I was still toying with how to generate different sounds upon colliding with a sound agent of the same type. To facilitate this I created two pools of sounds in two different arrays, divided into 'rhythm' and 'lead'. These would be iterated through as the player continued to collide with sound objects of the same type. It was also at this point that it was decided it would be necessary to have a single audio manager and script, implemented as a singleton, to handle all audio output.
While developing this code and trying to get audio working, development of some of the more ambitious aspects of the project felt behind any kind of reasonable schedule for both research and development. Thus adaptive audio contexts for sound, as well as a recombinatorial structure for the sequencing of music, were unfortunately removed from the immediate development schedule, with recombinatorial music kept as a potential avenue at the end of the project. In their place, a new focus on emphasising player expression by influencing a dynamic audio soundtrack was chosen. This approach is seen in many games, such as Red Dead Redemption, where once a player mounts their horse a different layer (say, the drums) of an already playing audio track begins, synchronised to the right part of the soundtrack. This felt like an appropriate compromise: a way to implement an audio system that players could influence which still sounded like a coherent music track, because the currently implemented system, while perhaps appropriate as a way to cycle through phrases, in isolation sounded mostly like selective noise or bursts of sound rather than a cohesive track.
Another problem solved by switching to dynamic audio tracks was Unity's ability to stream sound in time, which is what would likely have been needed with the alternative, context-based version. Streaming several distinct sounds from a large library at runtime, in a manner where they need to arrive on time, would be tricky to implement if we wanted the track to be mostly consistent, with only some leeway for player rests, such that the track remained coherent. I had considered only loading the relevant sounds into the audio buffer at runtime when each different set of sound objects spawned, and then playing the one the player chose (as at the time I was envisioning individual choices of each set of sounds from a selection of agents spawned based on the current audio context). However, it is still uncertain whether this would have been a perfect solution, as Unity is not an engine specially catered towards streaming sound. A balance may have been possible between individual music/audio phrase length, agent spawn rate, agent spawn speed and acceleration, and distance spawned from the player, but it would have been tricky. Dynamic audio represented a more known quantity within the bounds of Unity's engine.
Project Blog 4: Implementing the dash
For starters, despite playing from a largely top-down 2D perspective, the game was being made in 3D with 3D primitives, just limited to a 2D plane. My rationale was that at this stage, where visuals were not being considered, leaving the option of developing 3D visuals and effects open might be desirable. The thing is that the controller input method I had chosen for dashing was mouse click, and interpreting direction vector co-ordinates for a 3D shape (any shape, for that matter) in 2D space relative to the camera/screen position based on mouse click is a somewhat non-trivial affair.
After spending some time looking up how to get the mouse position I inverted its z values so that the coordinate space matched more traditional x,y values. As described in this post: https://gamedevbeginner.com/how-to-convert-the-mouse-position-to-world-space-in-unity-2d-3d/#:~:text=In%20Unity%2C%20getting%20the%20mouse,bottom%20left%20of%20the%20screen.
‘ To calculate the mouse position in world space, use Camera.ScreenToWorldPoint with Input.mousePosition, to get a Vector3 value of the mouse’s position in the Scene. When Using a 3D Perspective Camera you must set the Z value of Input.MousePosition to a positive value (such as the Camera’s Near Clip Plane) before passing it into ScreenToWorldPoint. ‘
Essentially, this converts the position from screen space to world space, while making sure that there is no depth (which would put the player on a different plane in a 2D game and thus stop them from colliding).
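The conversion is roughly this (the plane distance and which axis gets flattened are assumptions for illustration, not my exact values):

```
using UnityEngine;

// Sketch of the conversion: screen-space mouse position to a world-space point
// on the playfield.
public class MouseTargetSketch : MonoBehaviour
{
    public Camera cam;
    public float planeDistance = 10f; // assumed distance from the camera to the play plane

    public Vector3 GetMouseWorldPosition()
    {
        Vector3 screenPos = Input.mousePosition;
        screenPos.z = planeDistance;  // a perspective camera needs a positive Z here
        Vector3 world = cam.ScreenToWorldPoint(screenPos);
        world.y = 0f;                 // flatten onto the gameplay plane: no depth
        return world;
    }
}
```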
At this point I was deciding what type of movement I wanted to implement in my game: whether I wanted to alter velocity, transform the position of objects, or use forces. After a small amount of research I figured that forces would perhaps require more setup but result in fewer bugs once fine-tuned. However, working to implement forces at this early stage of the game, when very little was happening, resulted in me eventually abandoning them in favour of using velocity AND transform (which caused its own set of troubles). Regardless, seeing as at this stage I was just trying to get some form of dash working, I implemented it as a transform, as this code requires the least setup.
In the spirit of getting something working I used a hacky method to implement my dash code, which I kept for far too long: creating an end point for the dash based on the direction of the mouse click and a specified dash length, and then checking whether the player had reached that point before stopping. This is important because a dash is only a dash if it is a temporary burst of speed, not a constant-speed input method. To make detecting the player reaching the end point more consistent, I checked within a radius around the end point and then stopped the dash. This had several issues that were later compounded when I decided to add traditional movement with velocity in addition to the transform dash.
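Reconstructed roughly, the hacky dash looked like this (a sketch; values and names are illustrative rather than the exact code I kept for too long):

```
using UnityEngine;

// Rough reconstruction of the hacky transform dash: pick an end point along the
// click direction, move towards it each frame, and stop inside a small radius.
public class TransformDashSketch : MonoBehaviour
{
    public float dashLength = 4f;
    public float dashSpeed = 20f;
    public float stopRadius = 0.3f;

    Vector3 dashTarget;
    bool dashing;

    public void StartDash(Vector3 mouseWorldPos)
    {
        Vector3 dir = (mouseWorldPos - transform.position).normalized;
        dashTarget = transform.position + dir * dashLength;
        dashing = true;
    }

    void Update()
    {
        if (!dashing) return;

        transform.position = Vector3.MoveTowards(transform.position, dashTarget, dashSpeed * Time.deltaTime);

        // radius check so a frame of overshoot doesn't leave the dash stuck on
        if (Vector3.Distance(transform.position, dashTarget) < stopRadius)
            dashing = false;
    }
}
```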
Knowing it was an imperfect solution, but that too much time had been spent on the non-music parts of this music game, I decided to leave it alone and look into implementing audio and getting it working within my project.
Project Blog 3: Transformational vs Generative Algorithms and Recombinatorial Music
Knowing what I now know, as described by Karen Collins discussing Wooller et al.'s 'A framework for comparison of processes in algorithmic music systems', one of the ways I envisioned changing the music based on player action was a combination of generative and transformational algorithms. As described by Karen Collins: 'Transformational algorithms have a lesser impact on the data size, but impact the overall structure. For example, a phrase may have several notes whose pitch value can be randomly altered, or phrases themselves may be restructured in the wider song, while the actual phrase remains unaltered. Within a phrase, instrument parts may be added or dropped according to in-game states. Generative algorithms, on the other hand, increase the overall musical data size in that the basic musical materials are themselves created.'
Initially I was thinking of creating a musical context system that went through various chord progressions, choosing phrases or riffs that fit based on what the player had interacted with. If the player had been consistently dashing to the same sound agent over and over, it signals a desire to repeat the phrase and maintain the groove, so the same musical phrase might be chosen. If the player had been avoiding sound agents to slow down or space out the song, interacting with a sound agent might eventually radically change the structure of the song by shifting to a different part, i.e. moving from a player-induced slowdown, which is like a breakdown, to a chorus. This could have been done by speeding up sound agent generation and speed, changing which sound agents were generated, or merely changing the effect a sound agent had, so that the context, while adapting to the player's inputs, was ultimately out of their hands. This points to another set of terms described by Karen Collins: adaptive vs interactive audio. While the context adapts to player action, the player is not in direct control of it, but they are in control of the interactive audio generated by interacting with a sound object.
This would involve creating a complicated context system that interpreted player input, pauses in action, and reaction to sound agent stimuli, and that fit into some range of music theory suited to desert rock, or at least a traditional rock song structure. It would allow repeated playthroughs of the game where sections like choruses, verses, intros, solos, or breakdowns could all be reordered by player influence (recombinatorial music), as well as supplying some sort of musical phrase generation, or at least a record of phrases to be played in their relevant sections.
While I still believe in the potential of this idea, I think it requires a more formal framework ready going into it, and its scope could as easily become dauntingly infinite as it could be limited to a few options. Unfortunately, we are about to get to where the project actually started getting held up in implementation: focusing on the dash feature (instead of audio).
Project Blog 2: Reasons for the Dash
Original conceptions of how the game would look and play centred more around each interaction between the player and other sound agents or objects triggering some sort of related sound or phrase; perhaps one interaction would play a phrase. Maybe that phrase would be instrument-specific (just the drums) or involve a whole section (e.g. the rhythm section, so drums, bass and some other elements). Perhaps it could have been note by note, but I theorised that might be too hectic in terms of the number of sound agents required to make a song, and make intentional player choice more difficult. All of this falls under the category of interactive audio, meaning the player input (colliding with/interacting with a sound agent) directly controls sound output. The length of that phrase, and how to control the spacing such that the music still resonated as music, was all still up in the air and nebulous.
Initially, because I envisioned that the mechanics would be tied more closely to my specified genre (stoner rock), a genre known for being very repetitive and groove-oriented (playing the same phrase over and over), I thought that giving the player a movement tool (for a 2D top-down player-controlled character/avatar/agent) would make rhythmically choosing notes/phrases offer some variety when interacting with sound objects. I also thought that, in order to facilitate music that still sounded recognisably like music, the sound agents would have to be implemented in such a way that there would not be a period where the game was silent for too long. A player could create rests by avoiding sound agents, but the agents would accelerate such that eventually notes would be played. For this I decided to try to implement a dash.
Dashing accomplished a lot of different goals for what I wanted to create in my game: it brought it closer to the action trappings that had inspired it, and would perhaps feel more like it sat in the middle of that toy/instrument -> reflex challenge spectrum of music games. It also allowed me to give the player character more states, so that I could, if needed, have more means of interacting with a sound object. A player might be able to generate different output based on whether they dashed into or moved into a sound agent. Dashing away when a sound agent was accelerating might afford even more tension by creating a longer rest gap in the music than moving away would. If the player was just trying to 'groove' in time by repetitively colliding with sound agents and triggering their cues, they would receive more discrete feedback of player input on a dash than if they simply moved into them. There is a greater visual difference in player position when dashing from note to note as opposed to slowly moving into them. I thought this complemented a groove-oriented genre where repetition often is the point. Having more feedback just on the player action of moving and triggering sound objects in this way seemed like a good idea, and it expanded the range of player input.
Project Blog 1: Dust
Hoping to be able to cover some of my project and retrace my steps in thinking as I write some of these reports a bit late.
So, to reiterate from the jump: the goal of the project was to deliver a musically expressive game inspired by the desert rock/stoner rock genre of music. I wanted to offer a game in a niche between the two main modes of music game, with things resembling an instrument or toy on one end (Elektroplankton) and rhythm-focused games that strictly assess player input on the other (Guitar Hero), a relationship described in 'Levels of Sound: On the Principles of Interactivity in Music Video Games'. The main focus was that players should be afforded some degree of authorship over the music based on their input and action, to emphasise and explore the part of music playing, or 'musicking' (a term by musicologist Christopher Small that treats music as a process as opposed to an object), that involves player/performer creativity and intent.
Other initial inspirations were games in the action genre that had entirely different approaches to player expression outside of sound: things like Devil May Cry's combat system, or Tony Hawk's Pro Skater, affording and encouraging a diverse range of player options and expression. This, combined with the trappings of the genre, my own tastes, and a desire to move away from the note highway, led me to approach creating a game with more of a player-avatar-centred input method that interacted with various sound agents to affect sound.
Minutiae and minutes - Making milestones, and small steps
Breaking down the next couple of weeks into small steps to do list:
Make a controllable character (square) within an environment
Make movement use unit vectors so speed is not faster on diagonals
Create sound/note/enemy object that can be destroyed by the controllable char,
Enemy generates sound on death
Milestone timeline, which is feeling pretty arbitrary, but for my own sake (so I can edit it later) I'm just gonna spit some stuff out:
10 February
- All tasks above completed
24 February
- Make a basic context controller which alters the sound based on whether it is in a chorus/verse, i.e. passages of a song
- Make some different ways to trigger notes eg. different enemy types, make them generated based on the musicContext object
10 March
- Try to fine-tune context so that it is more like the desert rock genre
- Make it so that players can control the pace of a song: ie make a section go on longer or finish sooner
- Make it so that enemy nodes trigger notes based on different ways of engaging with them specifically: (direction of dash, different types of engagement eg an attack)
24 March
- Make it so that players can have some influence over which section is progressed to next (make the drums pace slow down/drop out, choose nodes in such a way to make the song build)
7 April
- Try to add some aesthetic feedback type stuff that maybe is the opportunity to reinforce thematic scene and genre values
Questions to ask:
Will start in 2D, but for aesthetic/feedback stuff should I work in Unity 3D, and how do I transfer over?
How to reconcile theme and aesthetic with a purely mechanical game?
Project Blog 25/01/2019
Christmas break wasn’t productive. I effectively shut down, and any moment thinking about the project see-sawed somewhere between existential panic and the faculties and abilities of someone comatose. Barely got in a submission for my one other module, so I guess that’s something.
Coming away from the Project in Games Development seminar today with Sarah, I have some goals to become more productive every day and ultimately make something viable to submit for the report.
1. Need to set up some weekly sessions with Federico and discuss the project development.
2. Talk about creating some kind of deadlines/milestones. My previous apprehension to do so was based on the misinformed feeling that I have no ability to estimate how long anything will take.
Other ideas spitballed: using a sequencer at some point to control different parameters in response to incoming signs.
At the wise advice of Sarah, I’m going to more concretely lay out what my game needs to be:
A top-down, dash-to-interact abstract rhythm action game set in the sky, which very loosely spits out somewhat appropriate chords + simple note choices based on an evolving context. You need to dash to move. How to capture the mood/aesthetic? The skybox reacts to the sound. The position of the sun. The density of clouds.
Dash to interact music expression game with 4-8 different ‘enemy types’, which affect the sound in different ways when dashed through.
Rough ideas: single dash, multiple dashes, dashes from a specific direction, charged dash. Choice comes from selection. Maybe certain choices eliminate others.
maybe a wake/trail of the dash also causes interaction ARGH scope aahhhhh.
I have something more concrete maybe? I have some idea of what to make, I think. I can sorta see the wasted opportunity in boiling this down into some sort of top-down simple game thing and scorning the theme to some extent, but at the end of the day, conveying this aesthetic, theme and scene needs to be done via some sort of interface/method of interaction. And a top-down space with the ability to dash through objects/agents to trigger sounds offers an ability to choose/express within a simplified space.
Breaking down the Dust: higher level design goals and challenges
Here’s that second post of stream of consciousness mechanical thoughts and ideas. Just ideas for now.
So one of the goals I have for developing this idea is creating a creatively expressive music game. Looking at other music games, oftentimes there is a great emphasis placed on technical mastery and execution. Most rhythm games focus on matching the timing + accuracy of your input, e.g. Guitar Hero, Rock Band, SingStar, Rhythm Heaven.
While technique and timing are strong elements of mastery in music, there's more to the experience of playing music than simply playing the 'right' notes at the right time, and these prescriptive styles of matching notes to established songs ignore the creative and expressive aspect of playing music. Oftentimes there is more than one way for someone to express the same musical idea, which is subject to that person's individual tastes. There is more than one way to play the same song or jam over the same idea: different notes/chords to get to the next part of a song.
More recent Guitar Hero + Rock Band iterations, as well as other music games, will have improv breaks in the song where a wider selection of inputs are valid and contribute to score. While I haven't really played these games, these sections are 1. de-emphasised and 2. often result in people frantically hammering all over their controller / singing somewhat flippantly. This is good fun in a party setting, where these games thrive on replicating the 'feel' of being a guitarist frenetically twisting their fingers into knots all over the fretboard to shoot thunder out of the sky, but it doesn't really capture that feeling of musical expression + intent in improv.
One game that I feel gets closer to this (albeit in a somewhat limited capacity) is the captivating recreational program 'Hylics' by Mason Lindroth,
which has an optional section available near the end of the game where the player is able to assemble their band, cycle them through various tracks playing at the same time, and then select from their own range of sounds and play a select set of notes/chords in each style they switch to. It's fairly freeform and without much in the way of constraints/judgements on how you play. You can't fail, it's entirely optional, and it's sort of just a fun short side outlet, maybe improvising over some pre-arranged grooves/synths/beats.
This brings up the question of how to create a musically expressive game for players that has depth and facilitates some degree of creativity, but still in the context of something that is engaging as a stand-alone experience. As much as I'm a big fan of the concert you can play in Hylics, it's very strictly an optional part of the game and not the main focus of the experience. There is no feedback given to the player on how well they're doing, which in some ways is great. As a relaxing side thing it's nice that the player has the opportunity to just 'play' and decide for themselves whether they like what they did, but it's ultimately something that doesn't reflect the constraints that are an essential part of playing music.
Musicians respond to each other, they respond to the context of where they are in a song and where they're going, and they also respond to their audience on some level while performing. A music game that is purely expressive, without constraints, while appealing to me (certainly more appealing than making something with yet another note highway), runs the risk of people not actually making something recognisable as appealing music. Some constraint on when you should be playing notes, and some feedback on what you play, is necessary to a degree to encourage players to create a performance.
This brings the challenge of creating a musically expressive game without requiring a vast musical understanding.
Something like Rocksmith, which is sort of just an interface for a real guitar (complete, once again, with a note highway), while teaching musical concepts and opening up the ability to start learning an instrument, is way too complicated in terms of fidelity of representation of playing an instrument for most people just looking to play a game. Besides the fact that it has some modes offering a practice space where you can express yourself as opposed to playing the written notes (and even within the framework of songs there is some greater degree of expression available in terms of strumming), this game also isn't really trying to convey the idea of self-expression, but rather to start handing you some tools that could be used for it, outside of the game, on a real instrument.
So the major challenge to take from this is how to create a fun musical or musically themed game, which facilitates player expression and depth without requiring an intimate understanding of how to play music.
If I were to guess what my design arrives at, it will probably share some more with a game like ‘Floor Kids’ the recent breakdancing game:
I haven’t played it but it judges you on score and variety of moves, while allowing for different inputs and moves to be interpreted as valid and mostly just requiring a steady tempo for inputs, as opposed to matching them to a scrolling sheet of different specific inputs at different times. In order to be able to express yourself, you need to have meaningful options.
When looking at other games that allow for some degree of creative player expression some ones that come to mind are:
Fighting Games:
[Absolver]
Sports/trick games in the tradition of series like tony hawk:
And spectacle fighters like Devil May Cry:
These games all have systems that make judgements on play and deem there is such a thing as 'optimal play', but still, within each of these games there are vastly different types of players who will feel happy with how they play relative to their own sense of mastery and targets within their evaluative frameworks. The better scoring systems in something like Devil May Cry are designed to make the assigned value between different modes of play more even. Still other players ignore the scoring systems entirely, to just make it through the game, or because they prefer their own way of playing over what the game says is desirable. Fighting games at the high level trend towards certain styles of play, with some moves completely unutilised because they are bad and others used more because they are optimal. When the requirements to pass or succeed are high, the ability to self-express reduces. I think the most potentially interesting of these is something like Devil May Cry, which features distinct switchable movesets that have different strengths but are all potentially viable. Some players will specialise in one and not make use of all of its moves, but develop a mastery over its core features (dodging, parrying etc.), while other players will be broader.
The room for this specialisation, and the ability for several moves to accomplish the same task, is tied up in the game's enemies, which are effectively challenges that prompt the player in different ways to respond to each enemy's strengths and weaknesses. The utility/threat of these enemies varies massively, with some only being a threat when near another enemy or in large numbers, and others having more deadly versions introduced later once the player has improved in skill, etc.
One of the main differences however is that in music repetition is often desirable. Things can repeat aesthetically but have a different accent, articulation or energy because of where they are in the song and still feel really effective. This is one of the aesthetic features often noted in the desert rock / stoner rock genre. They are very repetitive and focused on the trancelike nature of staying in a groove. This is not to say there is a lack of progression or movement in these genres, only that this is a feature often associated with them which runs somewhat counter to the encouraged expression in games that evaluate based on variety. How to encourage the kind of play where going on for longer is desirable?
The player’s priority of how they respond is determined by their own ability and tastes in response to the changing context of the fight/ arena/ enemy composition. As a way of expressing oneself within constraints, comparisons could be drawn to how musicians will respond based on their own tastes and experience to the changing context of the sound/performance. There is a certain rhythm, and call and response to playing these kinds of games. Even more so in iterations where the timing on certain moves is emphasized by allowing for a more effective execution.
These games have players expressing themselves spatially, while most musical games (understandably) are focused on the temporal qualities of when sound is played. Maybe there's a way to visualise musical context in a way which is more spatial, without just being a 1-dimensional note highway?
Another aspect of the scene that I feel warrants some thought is how to create the sense of community. Lots of music games have an audience responding to the player's ranking, and the setting of something like a generator party next to a kidney pool provides a good fit for this scene, but I'm curious if the crowd could be more involved. Having the crowd and how they respond be the visual thing you are responding to in the music? Different types of people and how they move making you decide to play differently, i.e. speed up, slow down, change style? Furthermore, when playing music does the player play one musician or the whole band? Different instruments and roles have different play styles? What about gamifying the other aspects of this scene's experience? Putting the poster together, à la the more freeform creative stuff like building a snowman in the NITW demo game Longest Night, or colouring the butterflies in Florence. The experience of following a mad map in the pitch black of a desert to get to a show. Most of this is just gonna end up being scope creep nonsense, but it is at least worth considering: how much to represent all aspects of the scene vs a higher-fidelity representation of one style of playing for one instrument.
Only other aesthetic considerations I have for the framework are that the wind, dust and sky are all ripe for implementation, and surreal, emotive, anthropomorphic fallacy-ising. Figures + shapes in the night sky being the thing the player responds to?
Main goal for now is to figure out the main mechanic of creating some musical expressive stuff.
Playing in the Desert Age
Recently I watched the documentary Desert Age (2016), which, aside from being interesting, helped me think about the spirit of the scene that could hopefully be evoked by the mechanics of a game exploring that kind of scene and sound.
One of the major takeaways I get from the doc is the idea that the sound was not especially unified. While the idea of what this 'genre' of rock music is has been somewhat retroactively codified (maybe just in my own head) by the band that made it big (Kyuss), at the time, the scene where this music was getting made was pretty open. People were playing punk/metal thudding groovy stuff, but all sorts of other stuff was being made, like a two-man band with drums + bass played over tape, accompanied just by a singer and keyboard, or the more jazz-inspired stuff that cropped up later in the scene ('The Sort of Quartet'). Kids who were ~14 were welcomed up on stage to play with their band at the same generator party where a more established act had just preceded them.
In addition to this is the idea that this was an isolated community of kids that were bored with nothing to do. It's reiterated with some gravity that 'they didn't know there was a world out there'. The scene served itself, and in participating in the scene and playing music, the goal was not generally making it big, but self-expression and having fun. This is kinda why the doc ties skateboarding up with it all: skateboarders who drained the holiday-home kidney pools of out-of-towners (Sean Wheeler - Mutual hatred, Myke Bates - Target 13) would go on to play in the scene, after listening to punk rock while skateboarding.
Taking this community aspect of this scene further there’s a fair bit of emphasis placed on how there are certain people that really help to foster a scene. Specific individuals that really try to help put on shows, and support the sustained life and growth of the community which for the palm desert scene was Mario Lalli.
Also worth noting is the use of psychedelics and speed that was prevalent in this scene, although this can inspire the conflation of Stoner Rock + Desert Rock, which, while sorta sharing overlap, are not exactly the same. Any game I choose to make is going to require some constraining in scale, so it may be worth focusing on the more Kyuss-esque, stoner rock side of this scene, as opposed to trying to incorporate a whole range of styles, but I do like the idea of creating some aspect of self-expression on the player's end.
I got a pretty good inspiration for the setting of a game like this: 'generator parties', where at night, in obscure places in the desert, powered by a gas generator, people would flock to watch lots of bands perform. Spread by DIY posters, one of the first generator parties was called 'Dust Fest', because every time they went out to play the winds would kick up a dust storm. These were by no means the only places where bands in this scene practised (also meth houses, and bars that wouldn't kick people out).
The desert hills nudist ranch used later, with a kidney pool overlooking a big drop, where people would gather round the band playing while others skateboarded, is pretty good as far as inspo goes.
[Skaters used the pool before it became the new site for parties]
Maps and directions to these generator parties could get pretty winding. [note dust fest is several years before the nudist colony kidney bowl shows]
Nice audio detail that the wind would carry the sound and make it swirl in a flangy way.
[Kind of view you might have in transit to one of these generator parties]
One other idea that I kinda like, mentioned by Brant Bjork (drummer of Kyuss), is that the long jams they would end up playing were the result of wanting to keep the party going, as opposed to pushing out a 2-minute punk song. Josh Homme remarks that, staring up into the desert sky, with people lighting fires, drinking and having their headlights shining, they found it natural to play their songs in the way that they did. Dave Grohl notes that parts of their songs grew bigger and bigger while they played in that jam context.
[Desert sky images shown at this point in the doc, probs not recorded from the same place and not at the same time, but trying to evoke this idea.]
[Shot nicely on long exposure to see streaks of car lights along the road]
[”The Desert is such a wide open blank canvas” - Brant Bjork]
[”There was no destination. We were just stoked to build something and get it out on the road. It was built to blast off. We were from the desert. Going nowhere at lightspeed.”]
Ultimately the scene sorta collapsed after it got big enough that people from out of town started fights with the local skateboarders during concerts, fires started during concerts and spread across the hills, and eventually guns and knives were being drawn at the parties. There's a crazy story of a fire surrounding the whole ridgeline in the wake of the wind, and the band getting nervous but taking a little more time to play before stopping. While in some ways the scene isn't necessarily that unique compared to the spirit of any other scene of bored people looking for some way to express themselves, it's still a pretty evocative space and I just love this music.
[New pool was dug out by skaters after the old one got filled in]
Tumblr getting pretty slow and temperamental, so I’m gonna post this now, then follow up with what I can really pull from this as higher level goals/ mechanical frameworks and ideas in either a new post or an edit.