EMCT: Computing Final Project Introduction
Sound Identity through technology
Ottavio Sostero
Supervisor: Dr Jenn Kirby
Project Overview
For an electronic musician, how crucial is it to be able to develop an individual sound? And to what extent can technical knowledge help them define it?
There are many ready-made plugins available to music composers and producers today, both free and paid. But how can we identify what is worthwhile and what is not, independently of cost? It is not easy to discern the value of a product online, especially when we are bombarded by contrasting opinions across the web. The best way to develop an individual sound signature is to improve one’s technical knowledge by building custom software.
Introduction and Context
The identity of a musician can be easily associated with their sound.
This is true in many different contexts: from classical performance to the latest innovations in electronic music, by way of the evolution of the sound of jazz.
The unique sound of an established artist provides a sort of signature that allows them to stand out and explore their artistic practice. As Dennis DeSantis states in his book Making Music:
“Particularly among music critics, journalists and bloggers, originality is regularly cited as a necessary component of good music [1]”.
But where does this unique fingerprint come from?
Thinking of instrumental music, a musician creates sound through one or more devices, be they physical or digital: it could be a stringed instrument like a violin or a guitar, as well as a percussion, wind, or synthesised instrument.
Practicing on an acoustic instrument usually leads the player to be curious about the physics of the tool in question. I myself play the double bass, which is in fact quite a rudimentary instrument comprising a few important elements: four strings, tuning pegs, a neck, a harmonic chamber, a bridge and a tailpiece.
Understanding the physical principles that allow the instrument to resonate properly is of vital importance in the process of finding and sculpting an individual tone. The quality of the resulting tone is strictly related to focussed, mindful practice and to the emulation of model performers. The same is true for the now-standard electronic augmentations that are so crucial today, such as the piezo transducer/microphone, the preamp and amp sections, and the way the signal is carried through the chain: gain staging, the phase switch and parametric EQ are also of vital importance when a player wants to amplify their own sound without losing their signature.
Similarly, when it comes to audio technology and music production, it is best to emulate and stick to quality results before reaching for more complex solutions and practices:
“Instead of aiming for an abstract goal like originality, aim instead for the concrete goal of quality [2]”.
When it comes to electronic music composition and digital signal processing, it is clear that even a basic understanding of music technology can be beneficial in many ways.
Not only does it give a composer the ability to design their own sounds by programming a set of parameters, but it also allows them to create unique software and, consequently, a personal sound signature.
Today’s relatively easy access to audio software technology, then, opens up a great many possibilities, as Roads states in his book Composing Electronic Music:
“Students with scientific training in areas like audio engineering, software programming, and digital signal processing are better able to work independently, design their own tools, and follow the research literature [3]”.
The ability to understand how existing pieces of software work is decisive:
“In this way, they can grow as the field evolves and tools and technical concepts change [4]”.
Aims and Objectives
My main objective within this research project is to extend the work done in the modules Music Computing 1 and 2 by creating a suite of audio plugins using Max MSP, GEN∼ and RNBO.
The aim is to reflect on how the audio technology tools at our disposal today allow musicians to sculpt their sound, and to find an individual identity and voice, through audio software development.
I would like to explore a different concept in each of the plugins (a minimal sketch of one of these ideas follows the list below), more specifically:
· Audio spatialization and creative use of reverb
· Synthesis
· Microsound
· Sample manipulation
· Generative techniques
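To give a concrete flavour of the simplest of these ideas, the sketch below is an illustrative C++ fragment (not taken from any of the planned patches): it renders a plain sine oscillator into a buffer, the kind of building block each plugin will wrap in a more musically interesting form. The function name and parameter values are hypothetical.

#include <cmath>
#include <cstddef>
#include <vector>

// Minimal synthesis building block: a sine oscillator rendered into a buffer.
// Frequency, sample rate and buffer length are illustrative values only.
std::vector<float> renderSine (double frequencyHz, double sampleRate, std::size_t numSamples)
{
    const double twoPi = 6.283185307179586;
    const double increment = twoPi * frequencyHz / sampleRate; // phase step per sample

    std::vector<float> buffer (numSamples);
    double phase = 0.0;
    for (std::size_t n = 0; n < numSamples; ++n)
    {
        buffer[n] = static_cast<float> (std::sin (phase));
        phase += increment;
        if (phase >= twoPi)
            phase -= twoPi; // wrap to keep the phase bounded
    }
    return buffer;
}

For example, renderSine (440.0, 48000.0, 48000) would produce one second of an A4 tone; the microsound, sample manipulation and generative ideas above all start from manipulating buffers of this kind.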
Audio software development can lead to several benefits, for example:
· Giving the composer an economic advantage by not depending entirely on third-party software.
· Allowing the composer to separate themselves from the competition by finding their unique voice.
Since the advent of software instruments, and with the ability to compose “in the box” without depending on a large number of hardware devices, electronic musicians can easily take a more DIY route.
This also helps artistic practice grow and adapt to new stylistic currents, as well as reinforcing one’s theoretical knowledge. As Zicarelli states in the foreword of Electronic Music and Sound Design, vol. 1:
“To educate ourselves fully about digitally produced sound, we need more than predictive knowledge. We need to know why our manipulations make the perceptual changes we experience [5]”.
And again:
“This theoretical knowledge reinforces our intuitive experiential knowledge, and at the same time, our experience gives perceptual meaning to theoretical explanations [6]”.
Research, technical and practical aspects, methods
Key milestones
This is a project plan outline with a detailed description of the tasks involved:
1. Max MSP plugin
Building a basic Max for Live device: the objective is to end up with a device that differentiates itself from the native Ableton Live instruments and/or effects that come with the software.
2. Max / GEN∼ plugin
Creation of a Max for Live device that makes use of the GEN∼ extension within the patch itself, to better understand why it can be convenient to use this environment instead of standard Max MSP objects; one key advantage, per-sample processing, is sketched below.
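As a conceptual illustration of that advantage, the sketch below is written in C++ rather than GenExpr, purely for readability: it shows the kind of one-sample feedback that GEN∼ makes possible through its history operator, and that an ordinary MSP feedback loop cannot achieve below the signal vector size. The function name and coefficient are illustrative only.

#include <cstddef>
#include <vector>

// One-pole lowpass: y[n] = (1 - a) * x[n] + a * y[n - 1]
// The 'previous' variable plays the role of gen~'s [history] operator,
// i.e. a feedback delay of exactly one sample.
std::vector<float> onePoleLowpass (const std::vector<float>& input, float a)
{
    std::vector<float> output (input.size(), 0.0f);
    float previous = 0.0f;
    for (std::size_t n = 0; n < input.size(); ++n)
    {
        previous = (1.0f - a) * input[n] + a * previous;
        output[n] = previous;
    }
    return output;
}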
3. Max / RNBO prototype plugin
The objective here is to first create a prototype: an RNBO patcher can be tested immediately within the Max environment before compiling and exporting the code. This is an advantage compared with, for example, building software directly in C++, where testing can be more difficult and one may need to first create a Max prototype and then rebuild it in a different language. A sketch of how the exported code can be driven from C++ follows.
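To give a rough idea of what the exported code looks like from the C++ side, the sketch below drives an exported patch through the RNBO::CoreObject wrapper. The method names follow my reading of the RNBO C++ export documentation and should be verified against the generated headers; the block size, channel layout and buffer types are assumptions.

#include <vector>

#include "RNBO.h" // shipped alongside the generated rnbo_source.cpp

int main()
{
    RNBO::CoreObject patch; // wraps the code generated from the Max/RNBO prototype

    const double sampleRate = 48000.0;
    const int blockSize = 64;
    patch.prepareToProcess (sampleRate, blockSize);

    // One stereo block of silence in, one processed block out (stereo layout assumed).
    std::vector<double> inL (blockSize, 0.0), inR (blockSize, 0.0);
    std::vector<double> outL (blockSize, 0.0), outR (blockSize, 0.0);
    double* ins[]  = { inL.data(),  inR.data()  };
    double* outs[] = { outL.data(), outR.data() };

    patch.process (ins, 2, outs, 2, blockSize);
    return 0;
}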
4. VST3 / C++ export test
After the prototype is complete and fully functional, it is time to export the project using the RNBO cloud compiler. This operation requires that the inputs and outputs of the prototype are appropriate, because the compiler differentiates between two types of device: an audio effect or a MIDI instrument.
It is therefore crucial to make sure that the Max MSP part of the prototype faithfully represents the “real world” DAW environment in which the plugin is going to be used: an audio track or a MIDI track. The sketch below illustrates the distinction.
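To make that distinction concrete, here is a hedged JUCE-style sketch (the class name is hypothetical, and in practice this boilerplate comes from the RNBO/JUCE export template): an audio effect declares audio input and output buses and no MIDI, while a MIDI instrument would typically drop the input bus and return true from acceptsMidi().

#include <juce_audio_processors/juce_audio_processors.h>

class SketchProcessor : public juce::AudioProcessor
{
public:
    // Audio effect layout: stereo in, stereo out. A MIDI instrument would
    // usually declare only an output bus and accept MIDI input instead.
    SketchProcessor()
        : juce::AudioProcessor (BusesProperties()
              .withInput  ("Input",  juce::AudioChannelSet::stereo(), true)
              .withOutput ("Output", juce::AudioChannelSet::stereo(), true)) {}

    bool acceptsMidi()  const override { return false; } // true for an instrument
    bool producesMidi() const override { return false; }

    void prepareToPlay (double, int) override {}
    void releaseResources() override {}
    void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
    {
        // Pass-through for now: a real device would call into the exported RNBO code here.
        juce::ignoreUnused (buffer);
    }

    // Remaining juce::AudioProcessor boilerplate, kept minimal.
    const juce::String getName() const override { return "Sketch"; }
    double getTailLengthSeconds() const override { return 0.0; }
    juce::AudioProcessorEditor* createEditor() override { return nullptr; }
    bool hasEditor() const override { return false; }
    int getNumPrograms() override { return 1; }
    int getCurrentProgram() override { return 0; }
    void setCurrentProgram (int) override {}
    const juce::String getProgramName (int) override { return {}; }
    void changeProgramName (int, const juce::String&) override {}
    void getStateInformation (juce::MemoryBlock&) override {}
    void setStateInformation (const void*, int) override {}
};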
5. C++ UI creation (using the JUCE library)
This step is probably going to be the most difficult, as it will require reopening the exported code in a source code editor and then writing custom code for the plugin’s UI. There are examples on the RNBO documentation pages, so some guidance is available for this task; a minimal editor sketch is given below.
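As a starting point for the UI work, the following is a minimal sketch of a JUCE editor with a single rotary slider. It assumes the processor exposes its parameters through a juce::AudioProcessorValueTreeState containing a parameter with the ID "gain"; the actual RNBO/JUCE template may wire parameters differently, so this is an assumption to verify against the exported project.

#include <memory>

#include <juce_audio_processors/juce_audio_processors.h>

class SketchEditor : public juce::AudioProcessorEditor
{
public:
    SketchEditor (juce::AudioProcessor& processor, juce::AudioProcessorValueTreeState& state)
        : juce::AudioProcessorEditor (processor)
    {
        gainSlider.setSliderStyle (juce::Slider::RotaryVerticalDrag);
        addAndMakeVisible (gainSlider);

        // Bind the slider to the assumed "gain" parameter exposed by the processor.
        gainAttachment = std::make_unique<juce::AudioProcessorValueTreeState::SliderAttachment> (
            state, "gain", gainSlider);

        setSize (300, 200);
    }

    void paint (juce::Graphics& g) override { g.fillAll (juce::Colours::darkgrey); }
    void resized() override { gainSlider.setBounds (getLocalBounds().reduced (40)); }

private:
    juce::Slider gainSlider;
    std::unique_ptr<juce::AudioProcessorValueTreeState::SliderAttachment> gainAttachment;
};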
6. Original composition / rework of an original composition that uses the plugins created.
This section of the project will focus on creating and/or reworking one or two original compositions that use the suite of plugins developed. This is important in order to assess whether the overall project succeeds in giving the author the capability of expressing themselves with a unique and individual voice, compared with essentially the same pieces of music created with stock Ableton Live plugins and/or other third-party plugins. Will the resulting music sound better, worse, or essentially the same?
In any case, the aesthetic and technical research involved in the project will help me better understand how to use available software, or create custom devices, when composing original music. So it will still be worth exploring the possibilities that custom software can provide, as well as gaining knowledge and expertise along the way.
During the process I will also make use of, and add, any other devices and prototypes that I build within the Max MSP, GEN∼ and RNBO environments, and hopefully end up with more than three patches to include in the suite of instruments and effects.
I hope to be able to tackle some of the difficulties that I encountered during my first two years of study on the matter within the modules Music Computing 1 and Music Computing 2.
Resources
The resources used will be cited when needed, both in the code and in the paper. I will obviously use both the Max documentation and other online resources if needed, but I would like to stick with trusted resources that provide a logical explanation in order to avoid working with material that I do not fully understand.
There are a lot of interesting and useful starter patches in the three volumes of Electronic Music and Sound Design by Cipriani and Giri: these books have the advantage of a large theory section that helps the reader understand and personalise the practical examples.
I will also try to get a copy of the book Generating Sound and Organizing Time by Graham Wakefield and Gregory Taylor.
Other useful books, from a more theoretical and aesthetic point of view, will be Making Music: 74 Creative Strategies for Electronic Music Producers by Dennis DeSantis, as well as Composing Electronic Music: A New Aesthetic by Curtis Roads.
Regarding RNBO, the documentation is available online at this address:
https://rnbo.cycling74.com/learn/export-targets-overview
This page is constantly updated and includes some examples.
Concerning the creation of a custom UI using the JUCE library, there is a useful page explaining how to get started with the process, even if not in depth:
https://rnbo.cycling74.com/learn/programming-a-custom-ui-with-juce
Luckily, in Year 2 I attended both the C++ for Creative Practice and Extended C++ modules, so it should be possible for me to create custom UIs as we did in class last year.
Apart from the VLE resources available, The Audio Programmer YouTube channel and website, as well as the JUCE website, are good places to get started:
https://docs.juce.com/master/index.html
https://wiki.theaudioprogrammer.com/
Bibliography:
[1] [2] D. DeSantis, Making Music: 74 Creative Strategies for Electronic Music Producers. Berlin: Ableton AG, 2015.
[3] [4] C. Roads, Composing Electronic Music: A New Aesthetic. New York: Oxford University Press, 2015.
[5] [6] A. Cipriani and M. Giri, Electronic Music and Sound Design, Volume 1. Rome: ConTempoNet, 2019.
[7] Cycling ’74, RNBO documentation: https://rnbo.cycling74.com/learn/welcome-to-rnbo