Audio - Analog Audio in a Digital World
January 19, 2023 technology
Since digital audio became available, there has been a constant debate over which is better: analog audio, with its smoother and more true-to-life representation of sound, or digital audio, with its practicality and portability.
This debate has centered mainly on the music world rather than other media, like games. The reason might be that some music producers and musicians relate to legacy processing techniques that take them back to other eras of music, while others advocate for the workflow advantages that digital audio allows. Nonetheless, the fact remains that hardware audio equipment imparts a certain degree and type of distortion that impacts our perception of sound.
As we will see, game audio calls for a hybrid approach: video games are a digital medium, but there are reasons that make audio processing with analog hardware beneficial.
The earliest sound recording technologies were based on analog audio. Recording analog audio means capturing sound waves and transducing them into electric signals (or mechanical signals, in the case of phonographs) using a microphone. These electric signals are then processed and/or amplified using pre-amps and dynamic processors built with tubes, transistors, etc., always remaining electric signals, and finally stored in an analog medium by converting the electric signal into magnetic fields that are “printed” onto magnetic tape.
There are 2 important concepts in digital audio that are directly tied to the quality and fidelity of the sampled audio signal:
Sample Rate: The sample rate is the number of times the value of an analog signal is measured per second, expressed in Hz. For audio to be properly represented in digital form, the Nyquist theorem must be followed: to accurately digitize a signal, the sample rate has to be at least twice the highest frequency being converted. For complex audio signals (not pure tones), this rate has to be based on the highest frequency humans can hear (20 kHz), which is why the minimum standard sample rate is 44.1 kHz.
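The Nyquist requirement can be sketched in a couple of lines of Python (a hypothetical helper, just to make the arithmetic concrete):

```python
def min_sample_rate(max_freq_hz: float) -> float:
    """Nyquist: the sample rate must be at least twice the highest frequency."""
    return 2 * max_freq_hz

# Human hearing tops out around 20 kHz, so the theoretical minimum is 40 kHz;
# real-world standards add headroom for the anti-aliasing filter, hence 44.1 kHz.
print(min_sample_rate(20_000))  # → 40000
```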
Bit Depth: The audio bit depth determines the number of possible amplitude values we can record for each sample. Systems with higher bit depths can express more possible amplitude values, so bit depth has a direct impact on the dynamic range that can be captured.
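The relationship between bit depth, amplitude resolution and dynamic range can be sketched like this (the common rule of thumb is roughly 6 dB of dynamic range per bit):

```python
import math

def amplitude_steps(bit_depth: int) -> int:
    # Each extra bit doubles the number of representable amplitude values.
    return 2 ** bit_depth

def dynamic_range_db(bit_depth: int) -> float:
    # 20 * log10(2^bits) ≈ 6.02 dB per bit of resolution.
    return 20 * math.log10(2 ** bit_depth)

print(amplitude_steps(16))              # → 65536
print(round(dynamic_range_db(16), 1))   # → 96.3
```

So 16-bit audio (CD quality) distinguishes 65,536 amplitude levels and gives roughly 96 dB of dynamic range, while 24-bit stretches that to about 144 dB.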
Analog Vs Digital
Working with either format has its advantages and disadvantages, and things that might seem like issues are actually features we can take advantage of. But this is not about justifying one format over the other: by the nature of video games, we will work with digital audio. Ultimately, our concern is how we can use analog processing in game audio production.
Where In the Audio Chain?
Before going into the reasons for using analog gear in a game audio setting, we should specify in what point of the audio production chain we use analog processing.
A basic audio production chain looks like this:
Recording: gathering the raw audio material; this step can also mean getting the audio file from a library.
Edit/processing of audio assets: During this step we produce the audio assets to be implemented in the game. It is during this stage that we use analog gear to impart its particular qualities to the audio files.
Implementation: We add all our processed audio assets into the game engine where they are dynamically mixed and played.
As we can see, the most logical step in which to include analog processing is during the editing and processing of audio assets. We could use analog tape recorders or analog pre-amps during the recording phase, but we would run into practicality issues, and digital recording also lets us capture cleaner raw material with a higher dynamic range, which we can process and modify as we please later.
Lastly, if we wanted to give the final mix, after implementation, the analog “feel”, we would need an analog signal chain hooked to the output of the game, which is not feasible for obvious reasons (not everyone has the thousands of dollars and know-how needed to buy and use analog gear).
So, now that we know where in the production chain analog hardware fits into game audio production, we must ask ourselves: why?
Audio distortion is described as any deformation of an audio signal at an output compared to an input. Typically this comes from limitations in electronic components (analog or digital).
Digital Audio Distortion
In digital audio, we use decibels full scale (dBFS) to denote signal amplitude. When a signal reaches 0 dBFS, all the bits are 1s (digital audio represents signals with binary 0s and 1s). This is referred to as the ceiling: the absolute maximum amplitude a digital audio signal can have. Any signal peak (or trough) that is boosted above 0 dBFS is effectively flattened or “clipped”, as in a square wave. The result is a harsh, over-compressed output signal.
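Both ideas, the dBFS scale and the hard ceiling, can be sketched in a few lines of Python (hypothetical helpers on a 0.0–1.0 amplitude scale, where 1.0 is the ceiling):

```python
import math

def to_dbfs(amplitude: float) -> float:
    """Amplitude on a 0.0–1.0 scale expressed in decibels full scale."""
    return 20 * math.log10(amplitude)

def hard_clip(sample: float, ceiling: float = 1.0) -> float:
    """Anything pushed past the digital ceiling is flattened, square-wave style."""
    return max(-ceiling, min(ceiling, sample))

print(to_dbfs(1.0))    # → 0.0 (the digital ceiling)
print(to_dbfs(0.5))    # ≈ -6.02 dBFS (halving amplitude costs ~6 dB)
print(hard_clip(1.7))  # → 1.0 (the peak is clipped flat)
```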
Aliasing is another type of digital distortion. It happens when a signal contains frequencies above the maximum permitted by the sample rate: the converter interprets them and maps them to frequencies within the permitted range. This is aliasing, one frequency being coded as a different frequency. It is solved by anti-aliasing filters, which are basically low-pass filters that cut all frequencies above the Nyquist frequency.
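For an ideal sampler with no anti-aliasing filter, the frequency that a tone above Nyquist folds back to can be computed directly (a hypothetical helper to illustrate the folding):

```python
def alias_frequency(f_in: float, sample_rate: float) -> float:
    """Frequency reported for a tone by an ideal sampler with no
    anti-aliasing filter: frequencies fold back around Nyquist."""
    f = f_in % sample_rate
    if f > sample_rate / 2:
        f = sample_rate - f
    return f

# A 30 kHz tone sampled at 44.1 kHz folds back into the audible band at 14.1 kHz:
print(alias_frequency(30_000, 44_100))  # → 14100
# A tone below Nyquist passes through unchanged:
print(alias_frequency(10_000, 44_100))  # → 10000
```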
Analog Audio Distortion
In the context we are discussing, analog distortion refers to the saturation that electronic components impart to the audio signals passing through them.
Although analog distortion can also be problematic, it is introduced more gradually, and, if the components are adequate, it gives audio a “warmer”, characteristic quality. This saturation happens because the components boost and/or create specific harmonics in the audio signal. Saturation adds character to audio signals and is a big reason why many listeners prefer analog over digital recordings.
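The gradual character of analog saturation is often approximated in software with a tanh curve. This is a generic first-order sketch, not a model of any particular circuit, but it shows the contrast with hard digital clipping: small signals pass nearly unchanged, while large peaks are rounded off instead of flattened.

```python
import math

def soft_saturate(sample: float, drive: float = 1.0) -> float:
    """tanh-style saturation: near-linear for small signals, gradually
    rounding large peaks instead of flattening them at a hard ceiling."""
    return math.tanh(drive * sample)

for x in (0.1, 0.5, 1.0, 1.5):
    # Output stays below ±1.0, approaching it smoothly as the input grows.
    print(f"{x:>4} -> {soft_saturate(x):.3f}")
```

The nonlinearity is also what generates the extra harmonics: feeding a pure sine through tanh produces a signal whose spectrum contains odd multiples of the input frequency.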
Fundamental Frequencies & Harmonics
In order to better understand harmonic distortion (like the one introduced by analog gear), first we should have a basic knowledge of what a fundamental frequency and its harmonics are.
The lowest frequency of any given signal is referred to as the fundamental frequency, and it determines the pitch reference of the signal; it is the predominant frequency of any complex waveform.
Harmonics are frequencies that are multiples of the fundamental frequency. The fundamental is also known as the first harmonic; the second harmonic is what in music is known as the octave, and can be expressed mathematically as 1st harmonic × 2 (so for a fundamental frequency of 100 Hz, the 2nd harmonic is 200 Hz). The rest of the harmonics are calculated by multiplying the fundamental by the harmonic number.
Odd harmonics are the ones obtained by multiplying the fundamental by odd numbers (3, 5, 7…), and even harmonics result from multiplying the fundamental frequency by even numbers (2, 4, 6…). In practice, even harmonics reinforce the fundamental frequency, while odd harmonics have a musical impact through their relationship with the fundamental.
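The harmonic series described above is simple enough to compute directly (a hypothetical helper, just to make the arithmetic explicit):

```python
def harmonics(fundamental_hz: float, count: int = 7) -> list[float]:
    """The nth harmonic is fundamental * n; the fundamental is the 1st."""
    return [fundamental_hz * n for n in range(1, count + 1)]

series = harmonics(100.0)
print(series)          # → [100.0, 200.0, 300.0, 400.0, 500.0, 600.0, 700.0]

even = series[1::2]    # 2nd, 4th, 6th harmonics (octave-related)
odd = series[2::2]     # 3rd, 5th, 7th harmonics
print(even)            # → [200.0, 400.0, 600.0]
print(odd)             # → [300.0, 500.0, 700.0]
```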
In an article titled “Amplifier Musicality”, published in the March 1977 issue of Hi-Fi News magazine, Jean Hiraga discussed the use of the word “musicality” as a way to describe the subjectively perceived performance of analog audio components, especially amplifiers, even though musicality is not an objective term used in any conventional measurement procedure.
Hiraga argued that it wasn't the amount of non-linear distortion introduced by an electronic component that determined the quality of its sonic footprint, but rather the pattern of that distortion. To anyone who understood Total Harmonic Distortion (THD) as a proper measure of the non-linearity of electronic components, this idea was a surprise.
Studies had previously explored the idea that the nature of an amplifier's nonlinear behavior is as important as its amplitude. They showed that a better correlation between sound quality and harmonic distortion measurements was obtained if the amplitude of each harmonic was appropriately weighted before being summed into an overall distortion metric.
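The difference between plain THD and a weighted metric can be sketched as follows. The weighting used here (harmonic number raised to a power) is an assumed illustration, not any specific published formula, but it captures the idea that two amplifiers with identical THD can score very differently once higher-order harmonics are penalized.

```python
import math

def thd(harmonic_amps: list[float]) -> float:
    """Total Harmonic Distortion: RMS sum of harmonics 2..N
    relative to the fundamental (the first list entry)."""
    fundamental, *overtones = harmonic_amps
    return math.sqrt(sum(a * a for a in overtones)) / fundamental

def weighted_distortion(harmonic_amps: list[float], weight_exp: float = 2.0) -> float:
    """Assumed weighting n**weight_exp: higher-order harmonics count for more."""
    fundamental, *overtones = harmonic_amps
    weighted = sum((a * (n ** weight_exp)) ** 2
                   for n, a in enumerate(overtones, start=2))
    return math.sqrt(weighted) / fundamental

# Two hypothetical amplifiers with (nearly) identical THD but different spectra:
decaying = [1.0, 0.02, 0.01, 0.005]   # harmonics falling off progressively
odd_heavy = [1.0, 0.0, 0.0229, 0.0]   # energy concentrated in the 3rd harmonic
print(round(thd(decaying), 4), round(thd(odd_heavy), 4))            # → 0.0229 0.0229
print(weighted_distortion(decaying) < weighted_distortion(odd_heavy))  # → True
```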
But, Hiraga’s claims went a bit further, he claimed that a particular harmonic pattern is desirable, and that an amplifier that departs from this will sound less natural even though the total amount of distortion it introduces may be very much less. Correlating subjective assessments of different amplifiers' sound qualities with their distortion spectra, Hiraga concluded that the ideal harmonic pattern displays progressively decreasing harmonic amplitudes, with even-order (second, fourth, sixth, etc.) and odd-order (third, fifth, seventh, etc.) harmonics all expressed. According to this view, an amplifier that produces dominant odd-order harmonics—behavior typical of push-pull designs—can never sound as natural as one in which both even and odd harmonics are present with progressively declining amplitudes.
What Hiraga was claiming, in short, is that certain patterns of nonlinear distortion are euphonic—i.e., pleasant to the ear—and others not. If these benign patterns are absent, then the resulting sound will be less natural.
We might disagree with Hiraga's claim that a specific pattern of distortion will sound more believable or natural, and that it imparts a higher degree of fidelity to audio, but what is undeniable is that the euphonic distortion introduced by electronic components has a direct impact on the enjoyment of the audio content; for game audio, that could mean longer play time and a stronger emotional relationship with the game.
Analog Hardware Vs Plug-in Versions
Much hardware audio gear has a digital counterpart in the form of plug-ins to be used in a DAW (Digital Audio Workstation). These plug-ins can be good approximations of the real analog hardware, they cost a fraction of the price, and any changes we make to their settings can be saved and recalled easily. Having said that, plug-ins behave differently from their hardware counterparts, especially when it comes to handling transients. Hardware is also less predictable, in the sense that it will act differently depending on factors like temperature and wear and tear, and it tends to give audio more depth and character.
Additionally, analog hardware lends itself to a more hands-on workflow, where more attention is given to details from the beginning.
In the following video we will compare three different types of sounds using the same signal chain (Chandler Zener Limiter + Chandler Curve Bender) in analog and in plug-in versions (Softube), we’ll also listen to the original bypassed sounds for reference.
From this video, the most noticeable difference is that the sounds processed with plug-ins tend to have harsher, more pronounced high-frequency content, whereas the hardware versions offer a smoother high-frequency response. On the lower end of the spectrum, hardware lends a deeper and more pleasing response than the plug-in versions. For the sounds with sharper attacks, like the explosions, we can clearly hear how the hardware's dynamics are smoother while still keeping the initial “snap” of the explosion.
As mentioned previously, working with analog hardware changes the audio production workflow: it has its challenges and makes the process less efficient, but it also introduces an opportunity to change the sound designer's relationship with the craft of audio. When the artist/designer is more connected to the creation process, it shows in the final product, not to mention the higher quality of the final audio for the player.
Digital to Analog to Digital
The first workflow challenge we encounter when using outboard gear is the digital-analog-digital conversion. Evidently, to process digital audio with analog hardware we must first convert it to analog using a Digital to Analog Converter (DAC), and it is important to use high-quality converters so as not to introduce unwanted degradation of the audio signal. In PP's case, we use Burl's B2 Bomber DAC; the reason for this choice is that it offers a clean but musical conversion, with a focused low end and a clean high end. After processing, the analog signal is converted back to digital using an Analog to Digital Converter (ADC), in our case Burl's B2 Bomber ADC, to keep the conversions consistent.
Both the DAC and the ADC are connected optically to an RME Fireface UFX+. The only drawback of these converters is that they only handle mono or stereo signals, but this is also the case for most analog hardware.
Details & Commitment
An advantage of digital audio over analog is how practical it is to recall settings, which gives the sound designer the option to redo a given asset several times without losing previous work.
The same process in the analog world becomes more complex and time-consuming, which is why, when we work with analog gear, we give greater attention to detail and a higher level of commitment to the settings we use for signal processing.
Ultimately, game audio development is a complex process that requires the best of both formats, analog and digital, in order to create unique, immersive and relatable worlds. We can use the flexibility and efficiency offered by digital audio, since we are dealing with large amounts of audio data and the final result is, in the end, a digital product. On the other hand, analog processing can be used to impart to our audio assets the familiarity, uniqueness and greater quality that analog gear offers. We also use analog hardware as a way to avoid automated processing, which gives soulless results with a propensity to overlook important details.