Dynamic playback engine
From Post-Apocalyptic RPG wiki
What is it?
It is a playback engine. It plays background music, ambient sounds, and some of the sound effects, all in rhythm and all without interruptions. The engine should be able to adapt playback to the game situation. The engine is an interdepartmental effort; it needs cooperation between programming, sound engineering and music composition. I'm not good at this stuff. I'm a visual artist, a noob at sound and coding, really. So please, just ask, correct, and let this project evolve.
I attempted to make a prototype. I thought it would be useful when writing the real code. But, due to how simple the PureData language/environment is, I had (or would have had) to do some simple things in a complicated way, while some complicated tasks are very simple. So I stopped when I stumbled on a key feature that needed a totally different approach when coding in PureData. The most important part of what I have done (basically a very simple sequencer) is here: .
Inside: a simple sequencer should make a good heart of the engine. There are three types of samples played:
- Music background - a longer, multi-track tune played in a loop.
- Ambient sounds - sets of sounds that correspond to a certain situation and location in the game. The sounds are kept in "sets" corresponding to a particular state of things in the game (location, weather, health and other circumstances are my picks), like "wilderness", "fight", "critical health condition", "social place", "night", "snowfall" and so on. The idea is that only one set per category can be active at a time: when, say, the "fighting" set from "other circumstances" is turned on, you cannot have the others, so no "social interaction" or "long walk" samples can be played.
- Sound effects - things that are mandatory to play, yet it would be cool to play them in rhythm with the whole game. Like a sharp bass guitar and drum phrase when a fight starts, or a loud heartbeat when stamina/life drops to a critical level. Played once; that is crucial.
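The three sample types above could be modeled roughly like this. A minimal sketch in Python; all class and field names here are my own assumptions, nothing is settled:

```python
from dataclasses import dataclass, field

@dataclass
class MusicTrack:
    """One track of the looped background tune."""
    name: str
    dynamism: float        # 0.0 = ambient-like, 1.0 = very dynamic

@dataclass
class AmbientSet:
    """A set of ambient samples tied to one game state."""
    name: str              # e.g. "wilderness", "fight", "night"
    category: str          # e.g. "location", "weather", "circumstances"
    samples: list = field(default_factory=list)

@dataclass
class SoundEffect:
    """Mandatory one-shot sample, ideally started on the beat."""
    name: str
    time_offset: float = 0.0   # seconds to skip from the sample's start

# Only one ambient set per category may be active at a time,
# so activating "fight" kicks out "social place".
active_sets = {}  # category -> AmbientSet

def activate(s: AmbientSet):
    active_sets[s.category] = s

activate(AmbientSet("social place", "circumstances"))
activate(AmbientSet("fight", "circumstances"))
print(active_sets["circumstances"].name)  # -> fight
```

Keeping the sets keyed by category makes the "one set per category" rule fall out of the data structure itself.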
Outside: input parameters that the engine should handle:
- Intensity level - describes how intense the player's emotions should be. Low intensity = slow tempo, instrumental, rare and quiet ambient sounds; high intensity = fast tempo, drums, some extra dynamic phrases.
- Set of ambient samples to be turned on, according to what is happening in the game.
- Overall volume
- A particular sound effect to play more or less immediately.
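These four inputs could form the engine's public interface. A hedged sketch; the class and method names are my guesses, not an existing API:

```python
class PlaybackEngine:
    """Facade for the four inputs the engine should accept."""

    def __init__(self):
        self.intensity = 0.0      # 0.0 .. 1.0
        self.volume = 1.0         # overall output volume, 0.0 .. 1.0
        self.active_sets = set()  # names of enabled ambient sets
        self.effect_queue = []    # one-shot effects waiting for the next beat

    def set_intensity(self, value: float):
        # Clamp so callers cannot push us outside the expected range.
        self.intensity = min(1.0, max(0.0, value))

    def set_ambient_sets(self, names):
        self.active_sets = set(names)

    def set_volume(self, value: float):
        self.volume = min(1.0, max(0.0, value))

    def play_effect(self, name: str):
        # "More or less immediately": queued, then fired on the next beat.
        self.effect_queue.append(name)

engine = PlaybackEngine()
engine.set_intensity(1.7)        # out-of-range value gets clamped to 1.0
engine.play_effect("fight_start")
```

Queuing effects instead of firing them instantly is what would let the engine start them on a beat boundary.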
Inside again: there is a very important thing that distinguishes the engine from an ordinary sequencer (apart from the fact that we need a really simple sequencer). The engine needs a "brain" that translates intensity into track volumes and takes care of telling the sequencer what to play.
- The music background needs to be played in a loop; tracks change volume according to intensity, so dynamic tracks should be louder at high intensity. But there can also be phrases picked at random from, let's say, 4 to choose from, why not.
- Ambient sounds may differ in length, and they should be picked randomly from the whole pool of ambient sample sets currently turned on, but preferably no more than one at a time.
- Sound effects are simple: they are mandatory to play, so all there is to it is queuing one when needed. An important thing to keep in mind is that not every sample has to start from the beginning, so there may be a "time offset" parameter bound to a particular sample.
- At high intensity, sample sets related to location or weather should give way to sets related more to the situation and the PC's condition.
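One possible shape for that "brain", sketched under my own assumptions: each track carries a dynamism value, its gain grows as the current intensity approaches that value, and above some threshold the location/weather sets are dropped so the situation- and condition-related sets take over:

```python
def track_gain(intensity: float, dynamism: float) -> float:
    """Louder the closer the track's dynamism is to the current intensity."""
    return max(0.0, 1.0 - abs(intensity - dynamism))

def prioritized_sets(active, intensity: float, threshold: float = 0.7):
    """active: list of (name, category) pairs.
    Above the threshold, mute location/weather sets so that
    situation- and condition-related sets take over."""
    if intensity < threshold:
        return list(active)
    return [(n, c) for n, c in active if c not in ("location", "weather")]

active = [("wilderness", "location"), ("snowfall", "weather"),
          ("fight", "circumstances")]
print(track_gain(0.9, 1.0))           # a very dynamic track, nearly full gain
print(prioritized_sets(active, 0.9))  # only the "fight" set survives
```

The exact gain curve and the 0.7 threshold are placeholders; the point is only that the brain is a small pure function from intensity to volumes and set choices.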
Sound engineering and composing
- I thought of using 140 bpm, and 70 bpm for the "slower tempo" pieces. This may change, but anything from 120 to 160 bpm should work.
- We need several tracks, from very ambient-like to really dynamic, so that you can regulate how dynamic the music is by adjusting volumes.
- Ambient sounds, tracks and sound effects should tell one story. Not only matching tempo, not only matching pitch: more subtle characteristics should match too, like reverb, noise level and some more advanced studio-related matters. I even thought of things like applying a comb filter with echo/reverb to the mixed output (again, trivial in PD, but probably hectic without a serious sound-processing library in Python, so, as I was told, it may as well be pre-baked into the samples).
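For what it's worth, a basic feed-forward comb filter needs no heavy library at all. This is a minimal pure-Python sketch, processed sample by sample, which would be slow on real audio buffers and so rather supports the idea of pre-baking the effect into the samples:

```python
def comb_filter(x, delay_samples, gain):
    """Feed-forward comb: y[n] = x[n] + gain * x[n - delay]."""
    y = list(x)
    for n in range(delay_samples, len(x)):
        y[n] += gain * x[n - delay_samples]
    return y

# Feeding an impulse through the filter shows the echo:
# a scaled copy of the input appears `delay_samples` later.
print(comb_filter([1.0, 0.0, 0.0, 0.0, 0.0], 2, 0.5))
# -> [1.0, 0.0, 0.5, 0.0, 0.0]
```

A feedback variant (adding `gain * y[n - delay]` instead) would give the ringing echo/reverb flavour mentioned above; the structure stays the same.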