For our final assignment, we students were tasked with creating a music piece to present during our final class. While the initial intent of this assignment was to compose for Dave and Gabe's multichannel audio system, health concerns prohibited this activity. Thus, we were only required to create a work employing some manner of spatialization.
An optional constraint for this assignment involved the use of Berklee's One Laptop Per Child Project's sample bank. As I was curious about the samples within this bank, I decided to work within this limitation. Furthermore, the software licenses within my personal collection can occasionally cause issues regarding instrument choice, so this constraint was welcome.
My initial step in this work was sifting through the downloaded bank of sounds. While auditioning samples, I would occasionally pull sounds of interest into a separate folder. The end result was primarily a large group of vocal and percussive-sounding samples.
I then loosely grouped these sounds within instances of Native Instruments' drum sampler Battery inside the workstation Logic, and within these instances, I further refined the sound selection. Inside Battery, I began editing and processing the sample sets into playable, musical sounds for the work. It must be noted that these samples were converted from .mp3 to .wav prior to importing into Battery, as this software does not accept the former format.
Regarding the composition, I did not begin the sequencing process with a defined structure. Rather, I took a build-as-I-go step-sequencing approach. While I typically shy away from this manner of composing, as I tend to have difficulty expanding a work with this process, completing a short piece within the confines of a limited sample set seemed feasible in this manner.
With this build-as-I-go approach, I handled the vocal sequencing first, followed by the percussion programming. Thereafter came further processing, level balancing, automation, and spatialization within a stereo field.
With the exception of a noise generator supporting the snare-like samples, all sounds within this work were derived from the downloaded bank. Regarding software plug-ins, all processing was handled with Logic's native plug-in set, with the exceptions of Battery, 2cAudio's Aether (algorithmic reverb), and Voxengo's Elephant (limiter).
It must be noted that I had intended to mix the piece in conjunction with IRCAM's SPAT package for Max/MSP. Due to time constraints, I took the common stereo mixing approach to ensure that the work was completed.
The fourth week revolved around spatializing audio in 3D. Accordingly, we were tasked with creating a 3D sound work rendered binaurally.
Regarding my work, I decided to revisit the third assignment, where I could focus on the workflow between Logic and SPAT. Furthermore, the former work was asking for a bit more "life," specifically in its dynamics.
The approach began with musical and dynamic refinement of the work, particularly focused on elements within the beginning. After handling this component, my focus turned to using Logic with Max.
Communication between these two pieces of software was handled with Soundflower in conjunction with an aggregate device created within OS X's Audio MIDI Setup. For this aggregate device, Soundflower's 64-channel I/O was combined with the built-in I/O. Thereafter, both Logic's and Max's audio devices were changed according to the routing needs of the project.
Thereafter, SPAT was implemented within the Max session in accordance with Dave's tutorial.
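SPAT's binaural rendering relies on measured HRTFs, which I treated as a black box. For illustration only, the two basic cues a binaural renderer manipulates — interaural time and level differences — can be sketched in a few lines. The head radius, gain law, and function below are my own assumptions for the sketch, not SPAT's internals:

```python
import numpy as np

def toy_binaural_pan(mono, sr, azimuth_deg, head_radius=0.0875, c=343.0):
    """Toy binaural panner using only interaural time and level differences.

    This is NOT how SPAT works (SPAT uses measured HRTFs); it is a crude
    sketch of two cues a binaural renderer manipulates.
    azimuth_deg: 0 = front, positive = source to the listener's right.
    """
    az = np.radians(azimuth_deg)
    # Woodworth's approximation of the interaural time difference (seconds).
    itd = (head_radius / c) * (az + np.sin(az))
    delay = int(round(abs(itd) * sr))
    # Simple sine-law level difference between the ears.
    g_right = np.sqrt(0.5 * (1.0 + np.sin(az)))
    g_left = np.sqrt(0.5 * (1.0 - np.sin(az)))
    left = g_left * mono
    right = g_right * mono
    # Delay the far ear by the ITD.
    pad = np.zeros(delay)
    if itd > 0:  # source on the right: the left ear hears it later
        left = np.concatenate([pad, left])
        right = np.concatenate([right, pad])
    else:
        right = np.concatenate([pad, right])
        left = np.concatenate([left, pad])
    return left, right

sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
L, R = toy_binaural_pan(tone, sr, azimuth_deg=60)
```

A real renderer like SPAT additionally filters each ear with direction-dependent HRTFs, which is what produces convincing elevation and front/back cues.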
Upon failing to create a somewhat realistic "drum circle" listening environment through tweaking source positions, reverb elements, and source parameters, I decided to head in a different direction. In this particular case, the listener stands nearly "in" a lined-up percussion ensemble, where the players are in close proximity to one another.
Upon settling on SPAT parameters for the project, I captured the received audio within the Logic session. Thereafter, EQ was applied to the audio file, and a brickwall limiter was applied to the final output prior to rendering the project.
The third week of class covered the use of four channels for spatialized music. Accordingly, we were assigned to create a four-channel piece.
However, due to the ongoing pandemic, the loudspeakers at ITP's facilities were unavailable. In light of the situation, I decided to create a four-channel-out setup within Logic, and upon composing the work, I summed it to two channels on the master bus for playback purposes.
Regarding the work, I attempted to simulate a drum circle within a standard quad setup, surrounding a centrally placed listener.
As this work revolved around the use of four channels, I created four virtual drummers placed in four separate areas. That said, there is some deviation from this notion, as the bass drum lies in the front, central area. Additionally, the cymbal activity is split between two areas: front/back right and front/back left.
Regarding the composition, there is a variation on call-and-response at the beginning of the work. The primary intent of this component is not only to establish the locations of the virtual drummers, but also to introduce the piece's percussion samples. Thereafter, the drums unify in rhythm for the remainder of the piece. It must be noted that the bass drum enters after a small drum break midway through the work, and the cymbals make an entrance shortly after.
Regarding the drum samples, one may characterize them as tribal with some marching-drumline elements. These samples were sourced and played through the Damage instrument within Kontakt. An instance of 2cAudio's Aether provided the (algorithmic) reverb for the work. Lastly, Voxengo's Elephant was utilized as the brickwall limiter.
For the surround setup, Logic provided a means of creating a quad session. Furthermore, the individual channel strips contained a convenient visual tool for placing each sound within the field. It must be noted that monitoring from quad to stereo was handled with Logic's Down Mixer plug-in.
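For those curious, the behavior of a quad panner and a quad-to-stereo fold-down can be approximated with a constant-power pairwise pan law. The sketch below is a generic approximation, not Logic's actual surround panner or Down Mixer math; the speaker angles and the -3 dB rear fold-down are assumptions:

```python
import numpy as np

# Adjacent speakers around a standard quad ring (degrees, 0 = front center).
SPEAKERS = [("FL", -45.0), ("FR", 45.0), ("RR", 135.0), ("RL", 225.0)]

def quad_gains(azimuth_deg):
    """Constant-power pairwise panning: the source azimuth is projected onto
    the two adjacent speakers that bracket it; the other two stay silent."""
    az = ((azimuth_deg + 45.0) % 360.0) - 45.0  # wrap into [-45, 315)
    gains = {name: 0.0 for name, _ in SPEAKERS}
    for i in range(4):
        n1, a1 = SPEAKERS[i]
        n2, a2 = SPEAKERS[(i + 1) % 4]
        hi = a2 if a2 > a1 else a2 + 360.0
        if a1 <= az <= hi:
            f = (az - a1) / (hi - a1)           # 0 → speaker 1, 1 → speaker 2
            gains[n1] = np.cos(f * np.pi / 2)   # constant power: g1² + g2² = 1
            gains[n2] = np.sin(f * np.pi / 2)
            break
    return gains

def downmix_to_stereo(quad, rear_gain=1 / np.sqrt(2)):
    """Fold a {FL, FR, RL, RR} dict of gains (or signals) down to stereo.
    rear_gain is an assumed -3 dB fold-down, not Logic's exact coefficient."""
    left = quad["FL"] + rear_gain * quad["RL"]
    right = quad["FR"] + rear_gain * quad["RR"]
    return left, right
```

For example, a source at 0° (front center, like the bass drum) lands equally on FL and FR, while one at 135° goes entirely to the rear-right speaker before being folded into the right stereo channel.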
Lastly, a pair of Dynaudio BM5A monitors was utilized for playback.
For our second class assignment, we students were tasked with composing a stereo music piece. Regarding my particular project, I chose to focus on the spatialization of a melody, constantly maneuvering in a sinusoidal fashion across the entire piece. Supporting this melody are the occasional harmonious note sequence and a crescendoing, pad-like wall of sound.
Regarding the spatialization of the melody sequence, five summing channels were set up within the Logic session, each specified for a particular location in the stereo field. These designated areas were the following: left, mid-left, center, mid-right, and right. Additionally, each channel contained its own dedicated delay and reverb plug-ins. Regarding these settings, left/right and mid-left/mid-right had mirrored settings, while the center had its own setup. The idea behind this configuration was to sonically establish each spatial location in the work.
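The five destinations can be described numerically with a constant-power (sin/cos) pan law. The sketch below is illustrative only; the exact pan positions are my own mapping rather than the session's settings:

```python
import numpy as np

# Five stereo destinations, as pan positions in [-1 (left), +1 (right)].
# These position values are assumed, not taken from the actual session.
POSITIONS = {"left": -1.0, "mid-left": -0.5, "center": 0.0,
             "mid-right": 0.5, "right": 1.0}

def pan_gains(position):
    """Constant-power stereo pan law: map [-1, 1] onto a quarter circle so
    that left² + right² = 1 at every position (no level dip mid-field)."""
    theta = (position + 1.0) * np.pi / 4.0   # [-1, 1] → [0, π/2]
    return np.cos(theta), np.sin(theta)      # (left gain, right gain)

for name, pos in POSITIONS.items():
    gl, gr = pan_gains(pos)
    print(f"{name:>9}: L={gl:.3f}  R={gr:.3f}")
```

A constant-power law keeps the perceived loudness steady as a sound sweeps across the five positions, which matters when a melody is in constant sinusoidal motion.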
The occasional melodic note sequence follows a spatial structure similar to the melody's, moving in a slightly opposing (polar) manner. However, when the harmony shares the same spatial path as the melody, that particular channel distorts, as the saturation plug-ins limiting each spatial channel catch the increase in amplitude. It must be noted that this creative choice was derived from a "happy accident" and was periodically implemented for variation and tension.
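To illustrate why the shared path distorts, consider a generic tanh soft clipper (a stand-in for the idea; Ohmicide's actual algorithm is different and unknown to me). Two half-scale sines clipped on separate channels produce less spurious spectral energy than the same two sines summed and clipped on one channel, because the summed signal drives the nonlinearity harder and generates intermodulation products:

```python
import numpy as np

def soft_clip(x, drive=3.0):
    """Generic tanh soft clipper (an assumed stand-in, not Ohmicide)."""
    return np.tanh(drive * x) / np.tanh(drive)

def spurious_energy(x, sr, fundamentals=(440.0, 660.0)):
    """Fraction of spectral energy outside the two input frequencies."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / sr)
    keep = np.zeros(spec.shape, dtype=bool)
    for f0 in fundamentals:
        keep |= np.abs(freqs - f0) < 5.0
    return spec[~keep].sum() / spec.sum()

sr = 44100
t = np.arange(sr // 10) / sr                  # 0.1 s → exact 10 Hz FFT bins
melody = 0.5 * np.sin(2 * np.pi * 440 * t)
harmony = 0.5 * np.sin(2 * np.pi * 660 * t)

# Each voice saturated on its own channel vs. both summed into one channel.
separate = soft_clip(melody) + soft_clip(harmony)
shared = soft_clip(melody + harmony)
```

Measuring `spurious_energy` on both signals shows the shared channel carrying noticeably more distortion products, mirroring the "happy accident" in the mix.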
The aforementioned crescendoing wall of sound was added to bring closure to the short piece, filling the spatial area and supporting/surrounding the melody. Furthermore, heavy saturation was gradually applied for tension.
Regarding the most notable plug-ins, a barebones Native Instruments Massive patch was the source for all sonic elements, Ohm Force's Ohmicide provided the saturation effects, and 2cAudio's Aether handled the majority of the reverb. Other plug-ins included iZotope's Vinyl, Voxengo's Elephant, and Logic's StepFX, Echo, PShifter, EQs, and Multiband Compressor.
For playback, my pair of Dynaudio BM5A monitors was utilized for a more "accurate" sonic reproduction.
For our first assignment, we students were asked to create a music composition for a single speaker (mono). As spatializing audio in a left-to-right manner was not an option, I chose to focus on movement in a down-to-up manner. Furthermore, I utilized the Shepard tone as a reference point for this creation.
As previously mentioned, the intent of this particular work was to give the perception of upward motion. My approach entailed the use of multiple oscillators rising in frequency, triggered in equally timed increments, where each oscillator sweeps between the same starting and ending frequencies. Additionally, a globally applied low-pass filter's cutoff continuously "opens" throughout the piece. For aesthetic purposes, a global reverb and tremolo (amplitude modulation) are also gradually adjusted in a continuous manner; these latter two components were an attempt at applying "tension."
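The layered rising oscillators can be sketched as staggered sine sweeps, each fading in and out with a raised-cosine envelope so that new voices replace old ones — the Shepard/Risset-glissando idea behind the piece. All parameters below (voice count, frequency range, sweep shape) are illustrative assumptions, not the actual patch settings:

```python
import numpy as np

def rising_glissando(duration=10.0, sr=44100, n_voices=6,
                     f_lo=110.0, f_hi=880.0):
    """Shepard/Risset-style upward glissando: n_voices staggered sine sweeps,
    each rising from f_lo to f_hi, with a raised-cosine amplitude envelope so
    voices fade in at the bottom of the sweep and out at the top."""
    n = int(duration * sr)
    t = np.arange(n) / sr
    out = np.zeros(n)
    for v in range(n_voices):
        # Each voice's position in the sweep cycle, offset by its stagger.
        pos = (t / duration + v / n_voices) % 1.0
        freq = f_lo * (f_hi / f_lo) ** pos            # exponential sweep
        phase = 2 * np.pi * np.cumsum(freq) / sr      # integrate frequency
        env = 0.5 * (1 - np.cos(2 * np.pi * pos))     # raised-cosine fade
        out += env * np.sin(phase)
    return out / n_voices

shepard = rising_glissando()
```

Because every voice is silent at the moment its sweep wraps around, the ear hears only a continuous upward climb rather than the individual resets.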
Regarding the tools used to create this piece, Logic was utilized as the workspace, while FabFilter's One provided the oscillators. Additionally, u-he's Runciter handled the filtering and 2cAudio's Aether provided the reverb. Logic's native Tremolo was also utilized in the piece. Additional processing pertained to equalization and gain structuring, with Logic's native equalizer handling the former and Voxengo's Elephant the latter.
Regarding the loudspeaker for playback, I chose to utilize one of my Dynaudio BM5A monitors, as this unit could provide a "more accurate" sonic reproduction of the work.
While I could have captured the playback of this piece as a live recording, the following file is the session's rendering. This choice pertains to the sonic degradation typically introduced by recording through a microphone.
Note: While I initially intended to create this work in Max/MSP or Pure Data, I figured that my workflow would be more efficient in Logic.