Sonic Design: Project I - Audio Fundamentals
MODULE VSA 60304: Sonic Design
Tristan Vaughan Sleep - 0365120
Sonic Design / Bachelor of Design (Hons) in Creative Media
Sonic Design - Audio Fundamentals
MODULE INSTRUCTIONS:
“SEPT 2025: VSA 64304 / SONIC DESIGN MODULE INFORMATION BOOKLET”
Sonic Design: Exercises (Weighted 20%)
Exercises: Students of the cohort are expected to complete the series of projects given to them by the lecturer, with the purpose of developing their sound design sensibilities while also increasing their technical proficiency. With the tasks provided, students are to analyse and manipulate sound properties through the use of sound editing software to fit a desired scenario. All exercises are to be published to the student's E-Portfolio and submitted as outlined on MyTimes.
PARTICIPATED LECTURES:
WEEK I: Understanding Sound Fundamentals
Introduction to module: In the first live lesson of the module, the cohort and I were introduced to the core material of Sound Design while also discussing the projects and assignments and their requirements. In a basic sense, this module focuses on the recording and manipulation of audio and other sounds to help create a soundscape for creative projects such as video games or films. Students will complete a series of projects to help them understand the basic concepts of shaping sounds and to learn important skills such as how to place sounds to communicate the space around the listener.
Before beginning the main projects for the module, students first need to complete a series of exercises to help gauge their hearing and to learn how to properly tune, alter and edit audio to fit into a scene - starting with the first exercise this week.
“Sound Fundamentals by Lecturers”
Science/Nature of Sound Design: By scientific definition, sound is produced when an item or object vibrates air molecules strongly enough to stimulate our eardrums. This can come either from our vocal cords, which vibrate at different frequencies to make unique sounds, or from objects causing a large enough disturbance in the air to be audible. Sound has three main phases: Production (the source of the sound), Propagation (the travelling of the sound) and Perception (how the sound is heard).
Human Ears: Our ears are able to pick up sounds due to the way they have been structured over millennia of evolution. The outer ear is the external, visible part of the ear plus the ear canal, shaped to catch frequencies more effectively (similar to the reflectors found in recording booths). In the middle ear, we have a thin, paper-like eardrum as well as three small bones which respond to the air vibrations to 'hear' sounds. This is then processed in the inner ear by the cochlea (the hearing organ), the endolymphatic sac and the semicircular canals.
Psychoacoustics: This is the study of the subjective human perception of sound, focusing on how an individual listens to sounds, their psychological responses to those sounds or music, and how this registers in the human nervous system. The study looks mainly into how our perception is affected by pitch, loudness and timbre, as well as how listeners interpret a sound, such as the emotional response it provokes.
The Six Properties of Sound: Sound as a concept can be simplified down into six categories which cover how a sound is produced, how it propagates and how it influences listeners (a short synthesis sketch after these definitions ties several of them together).
Pitch: Refers to the number of vibrations per second (frequency) and how differing amounts change the perceived sound. The more vibrations there are, the higher the frequency and the higher the pitch, and the inverse is also true. Overall, the range of human hearing is measured in frequency and lies between 20 Hz and 20 kHz.
Loudness: Refers to how intense a sound is perceived to be, which is determined mainly by the size of the vibration (amplitude). The vibrations carry energy, and the greater the force they exert on the body and eardrums, the more intense the sound appears compared to others.
Timbre: Refers to the character or quality of a sound - what makes it recognisable beyond its pitch and intensity. In sound design, terms such as 'light, flat, smooth, smoky, breathy, rough' are used to characterise how something sounds and to suggest which aspects of the sound are unique or different.
Perceived Duration: Refers to the subjective experience of a sound and how long the event appears to last, as opposed to its actual physical duration. This is most evident in tests that determine the range of an individual's hearing, which is affected by age or exposure to loud sounds - the sound may continue to play even though the listener can no longer hear it.
Envelope: Refers to how a sound changes over time or distance. This mainly describes attributes such as how its amplitude (volume), frequency or pitch rises and falls, which influences how the sound is heard.
Spatialisation: Refers to the ability to shape sounds so that they appear to originate from a particular point in physical space where they are not. This is seen mainly in films and surround-sound systems designed to make the action sound like it is happening around the viewers, making them feel more present within the entertainment.
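To make a few of these properties concrete, here is a minimal Python sketch (my own, not part of the coursework - it assumes the numpy and soundfile packages and uses a made-up output file name) that maps pitch, loudness, envelope and spatialisation onto simple signal parameters:

```python
import numpy as np
import soundfile as sf  # assumed to be installed; any WAV writer would do

sr = 44100                                   # sample rate in Hz
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)

freq = 440.0                                 # pitch: vibrations per second (A4)
amp = 0.5                                    # loudness: size of the vibration
tone = amp * np.sin(2 * np.pi * freq * t)

# Envelope: a quick fade-in followed by a slow decay, so the sound changes over time.
envelope = np.minimum(t / 0.05, 1.0) * np.exp(-1.5 * t)
tone *= envelope

# Spatialisation: a constant-power pan from left to right across the clip.
pan = t / t[-1]                              # 0 = hard left, 1 = hard right
left = tone * np.cos(pan * np.pi / 2)
right = tone * np.sin(pan * np.pi / 2)

sf.write("tone_demo.wav", np.column_stack([left, right]), sr)
```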
Exercise I - Sound Ranges & Equaliser: To help students gauge their own hearing and the output of their equipment, the first exercise was to edit altered audio tracks so that they matched the original file. Students were encouraged to first edit the original track using the 'Parametric Equaliser' to see how the effect alters different frequencies. In my experience, playing around with the original file was useful as it helped pinpoint the frequency range of each instrument within the song. This mainly aided me when 'restoring' the edited files, as I could identify which instruments sounded different.
These edited files then needed to be corrected using our own 'Parametric Equaliser' effects to sound as close as possible to the original file. In addition to this 'restoration', the process was also meant to get us more familiar with the software and with managing sound channels and their effects in bulk. These are the alterations I made for each track (a rough sketch of what a single equaliser band does follows the four corrections below):
“EQ Equaliser Correction 1”
For the first edit, the altered track did not sound as full as the original, and instruments such as the drums and bass guitar sounded reduced. To correct it, I boosted the low pass, the 1st band and a little of the 2nd band to bring the bass guitar back up.
“EQ Equaliser Correction 2”
For the second edit, the altered track sounded a bit flat, with the drums' hi-hats noticeably missing. To correct this, I did the opposite of the previous correction and raised the high pass as well as the 5th band to restore the hi-hats.
“EQ Equaliser Correction 3”
For the third edit, the altered track sounded washed out, with most of the sound being either deep or high and notable instruments such as the guitar reduced. To correct it, I mainly raised the mid bands (the 3rd and 4th) while also slightly adjusting the 2nd band, as the bass guitar had blended into the drums.
“EQ Equaliser Correction 4”
For the final edit, the altered track evidently had too much bass, which made everything sound blown out. To correct it, I lowered the low pass and the 2nd band while greatly reducing the 1st band, as the drums were too overpowering.
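Outside of Audition, the boost/cut behaviour of one parametric band can be sketched as a peaking biquad filter. The code below is a rough illustration (mine, assuming numpy and scipy; the coefficients follow the widely used RBJ "Audio EQ Cookbook" formulas), not the actual effect used in the exercise:

```python
import numpy as np
from scipy.signal import lfilter

def peaking_band(x, sr, f0, gain_db, q=1.0):
    """One parametric EQ band: boost (positive gain_db) or cut (negative) around f0 Hz."""
    a_lin = 10 ** (gain_db / 40)              # square root of the linear gain
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin]
    return lfilter(b, a, x)                   # lfilter normalises by a[0]

# e.g. something like the first correction: lift the low end to bring the bass guitar back
# restored = peaking_band(edited_track, 44100, f0=100, gain_db=4, q=1.0)
```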
WEEK II: Learning Sound Design
Sound Design & Tools: Following the introduction to the module and the testing of personal equipment for suitability, this week the cohort learned the fundamentals of Sound Design. Through the recommended videos, students learn key terminology for the module and familiarise themselves with the working environment - this is crucial for working on later projects and for improving work or building upon feedback.
“Sonic Week 3 Lecture Sound Design Tools”
DAW: A Digital Audio Workstation (DAW for short) refers to any software or software add-on which allows its users to record, customise and alter audio tracks for their projects. The one used in this module is Adobe Audition.
Layering: Layering is the process of placing multiple sounds on top of one another to create a blend or mix of various sounds, or to make a single unique sound. This can be seen when audio designers mix sounds together to create an 'image' of a scene for the audience, or when select sounds are combined to make a new sound for something that doesn't exist.
Time Stretch: Time stretching is the process of manipulating the length of time an audio track plays for - being either shortened, cut or elongated. The crucial part of this process is understanding how altering the audio affects its sound and knowing how to avoid or repair those alterations. It is mainly used to fix the pacing or timing of a scene.
Pitch Shifting: A simple process in which a sound has its pitch artificially raised or lowered for various reasons. It is often used to raise the pitch to sound more comedic/disruptive, lower a voice to sound intimidating/droning, or alter it dynamically to create an auto-tune effect. It can also be used to repair audio after processes such as time stretching (a small sketch after these definitions shows both processes).
Mouthing it: Refers to the method of using your own voice to create a practical sound that you are unable to find, or the exact sound you envisioned. Using the dynamic range of the human voice along with various tools to alter its sound, it is possible to recreate a good number of sounds with the voice alone.
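As a point of comparison with Audition's Stretch and Pitch effects, here is a minimal sketch of time stretching and pitch shifting using the librosa library (an assumption on my part - this is not how the module work was done, and the file name is a placeholder):

```python
import librosa
import soundfile as sf

y, sr = librosa.load("voice_sample.wav", sr=None)     # placeholder file name

# Time stretch: play the same material over a longer duration (0.8x speed here).
slower = librosa.effects.time_stretch(y, rate=0.8)

# Pitch shift: lower the pitch by four semitones without changing the duration.
deeper = librosa.effects.pitch_shift(y, sr=sr, n_steps=-4)

sf.write("voice_slower.wav", slower, sr)
sf.write("voice_deeper.wav", deeper, sr)
```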
“The Art of Sound Design by DreamDuo Films”
Application in films: Sound design is one of the most important yet often forgotten elements in visual productions such as films, television series or video games. With the combination of visuals and strong sound effects, the audience is able to become more immersed in their entertainment and feel more present within the space. Sound effects are needed to make scenes feel realistic and alive, which helps the audience remain focused on the production. Furthermore, in particular cases sound effects become even more important, as they help the audience map out a three-dimensional space and can inform them of where some action may be happening if it is not directly seen.
Music in scenes is another detail which is often overlooked yet is mainly responsible for evoking an emotional response in the audience. Music is creatively designed and used in productions to reinforce how the audience should feel about a scene, even if no person or thing is there to connect with. It is therefore important to consider what sounds are used within a scene, how loud or present they are, and whether they are important. When choosing music, consider its tone and what ideas it may imply to the audience.
Practical Sound Shaping: Continuing with our practical studies for the module, this week students worked on altering and shaping sounds to make them appear as if they existed in a 3D space or were being played through particular devices. For the first half of the practical, students were provided with a sample of a person speaking, which needed to be modified using the Parametric Equaliser. With the equaliser, students had to simulate what the voice should sound like if it were being played through a phone, if the person were speaking in a different room, or if they were talking through a walkie-talkie, etc.
Before altering the audio, it is important for students to note that the 'edit' does not have to be completely accurate. It is common for audio designers to exaggerate sounds and their effects so that they clearly stand out from the rest of the soundscape - communicating the idea matters more than keeping the audio strictly accurate.
“Audio Sample 1 - Phone Call Equaliser”
"Audio Sample 1 - Phonecall"
Phonecall: When audio is passed through devices such as phones, a majority of its lower frequencies are reduced, with some of the highest tones cut as well. In addition, the middle range where most human voices sit is boosted to cut through background noise and make the voice sharper and easier to hear. As a result, the audio should sound a little flat but clear. It is also important that the altered version sounds distinct enough from the original, as the audience needs to clearly notice the difference (a filter sketch covering all three voice treatments follows the walkie-talkie task below).
“Audio Sample 2 - Closet/other room Equaliser”
"Audio Sample 2 - Next Room"
Closet: For the second task, the audio needed to be re-shaped to make it appear that the sound was originating from within a closet or a separate room. When a sound has to pass through a surface, a majority of its higher frequencies are lost to absorption, while the lower frequencies are still able to reach the listener (although with less energy). As a result, the audio should sound more muffled and bass-heavy, which is why it is harder to hear.
“Audio Sample 3 - Walkie Talkie”
"Audio Sample 3 - Walkie"
Walkie-Talkie: For the third equalising task, the audio needed to be altered to sound similar to a walkie-talkie or other long-range radio device. In order to get a more genuine sound, I needed to use two separate equalisers: one to boost the mid-high tones enough to be very sharp, and another to dull the remaining soundscape.
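As a rough, code-based way to think about these three equaliser moves (not the Audition settings themselves - this assumes scipy and soundfile, and the file names are placeholders), each treatment is essentially a different band of frequencies kept and a different overall level:

```python
import soundfile as sf
from scipy.signal import butter, sosfilt

def band(x, sr, low=None, high=None, order=4, gain=1.0):
    """Keep only the requested frequency band and scale the result."""
    if low and high:
        sos = butter(order, [low, high], btype="bandpass", fs=sr, output="sos")
    elif high:
        sos = butter(order, high, btype="lowpass", fs=sr, output="sos")
    else:
        sos = butter(order, low, btype="highpass", fs=sr, output="sos")
    return gain * sosfilt(sos, x)

voice, sr = sf.read("voice_sample.wav")       # placeholder mono sample

# Phone: keep roughly the 300 Hz - 3.4 kHz band a voice call carries.
sf.write("voice_phone.wav", band(voice, sr, low=300, high=3400), sr)

# Next room / closet: the wall absorbs the highs, so low-pass it and drop the level.
sf.write("voice_next_room.wav", band(voice, sr, high=500, gain=0.5), sr)

# Walkie-talkie: an even narrower, sharper mid band pushed a little harder.
sf.write("voice_walkie.wav", band(voice, sr, low=500, high=2500, order=6, gain=1.5), sr)
```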
“Audio Sample 4 - Public Bathroom Reverb”
"Audio Sample 4 - Public Bathroom"
Bathroom: For this next series, the audio needed to be manipulated using the Reverb rack effect to create a distinct sound which emphasises space. For the first task, I altered the sound by boosting the decay time a small amount to create the echo effect, while also increasing the diffusion to make the sound appear to spread across the space (a rough sketch of these two controls follows the stadium task below).
“Audio Sample 5 - Airport Reverb”
"Audio Sample 5 - Airport"
Airport: For the second task, I altered the audio similarly to the first, boosting the decay time and diffusion to make the sound appear to sit within a larger area. This time, however, I increased the decay time further, as this is mainly what sells the size of the space. I also increased the 'wetness' of the sound so that it favours the effect over the original signal, which gives it that slight intercom quality.
“Audio Sample 6 - Closed Stadium Reverb”
"Audio Sample 6 - Stadium"
Stadium: For the final task, the goal was to make the audio sound as if it were coming from a stadium, which was somewhat challenging. For this effect, I mainly increased the decay time by a larger amount but chose to limit the diffusion and 'wetness', as stadium commentators need to be heard clearly with minimal echo for a better listening experience.
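A very crude way to sketch these two controls in code (my own illustration, assuming numpy, scipy and soundfile with a placeholder mono file, not Audition's reverb) is to convolve the voice with a decaying noise burst: the length of the burst stands in for decay time, and the wet/dry ratio sets how much of the effect is heard next to the original signal:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def simple_reverb(x, sr, decay_s=1.0, wet=0.3):
    """Convolve with a decaying noise burst as a crude room impulse response."""
    n = int(sr * decay_s)
    rng = np.random.default_rng(0)
    ir = rng.standard_normal(n) * np.exp(-3 * np.arange(n) / n)
    wet_sig = fftconvolve(x, ir)[: len(x)]
    wet_sig /= np.max(np.abs(wet_sig)) + 1e-9            # keep levels sensible
    return (1 - wet) * x + wet * wet_sig

voice, sr = sf.read("voice_sample.wav")                  # placeholder mono clip
sf.write("voice_bathroom.wav", simple_reverb(voice, sr, decay_s=0.6, wet=0.35), sr)
sf.write("voice_airport.wav",  simple_reverb(voice, sr, decay_s=2.5, wet=0.6),  sr)
sf.write("voice_stadium.wav",  simple_reverb(voice, sr, decay_s=3.5, wet=0.25), sr)
```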
WEEK III: Sound Design in film
Role of Sound in Film: For this week's activity, students of the cohort needed to learn more about the basics of Sound Design - mainly its relation to film and other entertainment media. As a whole, Sound Design is crucial to a production, as it is one of the elements capable of influencing or enhancing the viewing experience without the audience being directly told so, and it can carry creative messages in films. Students were asked to view a series of videos explaining the relationship of sound within a production, as well as learning to edit sounds to feel more present within the narrative's world:
Diegetic Sounds: A subsection of Sound Design where the sounds played within a film or other media are presented as if they exist 'in its world.' This is commonly seen with additions such as ambient sounds to convey a livelier environment, or the sounds of characters moving around and interacting with objects. It can also include music, such as when it is performed by a character or originates from a prop like a speaker or radio. In essence, diegetic sounds are any sounds that are present to make the 'film world' feel real.
Non-Diegetic Sounds: The opposing subsection of Sound Design, where sounds or music are played exclusively for the audience of the film or other media. This is often seen where music is played to heighten the audience's emotions during a particular scene. It can also be used with sound effects to create tension, making the audience feel uncomfortable until a resolution. In essence, non-diegetic sound is any audio added to improve the audience's viewing experience that is not present within the world of the narrative.
Sound Graphic & Spatial Audio: For this week's practical lessons, students learned how to make audio sound more present within a narrative and how it can be used to convey parts of a story without visuals. To achieve this, students needed to use Adobe's built-in graph editors in conjunction with rack effects to make the supplied audio match the narrative explained by the lecturer. In the first example, students were asked to produce an audio sample featuring panned audio of a person talking, while also communicating that this person is walking away and into a cave in the distance.
To achieve this, I first used panning to make it sound as if the person speaking was walking past the listener on their left - the audio starting in one channel and gradually returning to both channels as they draw close. As the person then gets closer or further away, the volume needs to be adjusted along with the tone, which I did with an equaliser: as a person moves further away, mainly the mid-to-lower tones remain audible and the voice can start to sound like mumbling. When the person enters the cave, the main addition is reverb, making the audio sound as if it is being cast out and reflected back to the listener. In addition, I bumped the volume up once more while lowering the EQ, as when sound reverberates at a cave mouth it can become somewhat clearer before falling away again (a rough panning sketch follows the attempt below).
“Attempt at Audioscape suggested by lecturer”
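For the walk-past portion, the pan and distance moves can be sketched roughly in code as well (my own illustration, assuming numpy and soundfile with a placeholder mono clip; the real version was drawn with Audition's graph editors):

```python
import numpy as np
import soundfile as sf

voice, sr = sf.read("voice_sample.wav")            # placeholder mono clip
t = np.arange(len(voice)) / sr
duration = t[-1]

# Pan position: start hard left, drift to the centre as the speaker draws level, then hold.
pan = 0.5 * np.minimum(t / (duration * 0.4), 1.0)  # 0 = left, 0.5 = centre
left = voice * np.cos(pan * np.pi / 2)
right = voice * np.sin(pan * np.pi / 2)

# Distance: quiet at first, loudest as the speaker passes, fading as they walk away.
distance_gain = np.interp(t, [0, duration * 0.4, duration], [0.4, 1.0, 0.2])
stereo = np.column_stack([left, right]) * distance_gain[:, None]
sf.write("voice_walk_past.wav", stereo, sr)
```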
Environmental Sound Based on Image: For this exercise, students were given the choice between two visuals and then needed to simulate a soundscape of what that image might sound like. With both choices depicting advanced technologies, students also needed to learn how and where to collect audio samples, which would then be edited to fit the narrative the student is conveying. Students are recommended to source and cite audio tracks from places such as freesound or the BBC's free audio sample list to prevent any issues with protected audio. For this assignment, I ended up going with the second image and experimented with applying all the techniques we had learnt thus far in the final design.
For the opening, I wanted to re-use our previous lessons on sound spacing by making aspects of the scene sound as if they were coming from an adjacent room or echoing through a larger room. This can be heard in the elevator section at the beginning, where the sound of the lab(?) comes through the wall before becoming sharper and clearer as the door opens. This was achieved by altering the background audio to sound muffled using the Parametric EQ. An additional tweak was made to the voices and the machine humming, making these two sounds less present from behind the wall, since higher frequencies are more likely to be absorbed by materials.
For the main soundscape itself, I worked to make it sound 'more full', which meant altering tracks such as the machine hum and the wildlife to be pitched higher or lower so that they mix together without overriding one another too frequently. For further experimentation with reverb and equalisers, I also added a section towards the end with an intercom message, which required me to work out what an echo-y radio would sound like in a large room. Overall, I'm quite happy with how this final piece turned out.
“Attempt at Environment-Based Audio”
Second Environment Soundscape: Continuing with this project, students needed to develop the second piece as well, to demonstrate their ability to grasp the important 'details' within a scene while still being able to create a unique sound. As previously mentioned, the difficulty in this project came from creating sounds to represent non-existent technologies and deciding how we wish to represent them. This was most difficult with this second image, featuring a laboratory setting with a large laser-like device. For this soundscape, I needed to factor in what the ambience sounds like and how it interacts with the people and the main machine. With the sounds assembled, they should also be able to convey a short story based solely on the audio.
Similar to the first soundscape, I mainly wanted to demonstrate my understanding of space and sound propagation, with the audio sounding muffled and distant as if it were in an adjacent room. The sound increases in volume and becomes clearer as the 'audience' enters the room. To make this clearer, I included prominent footsteps to convey motion, which start louder and sharper at the beginning and then become quieter and more echo-y when entering the main room. These effects were achieved by adding parametric EQs, along with elements of reverb to create a sense of the position and scale of everything.
For the main feature of this soundscape, the laser, I wanted a longer build-up to convey how powerful it is and its importance in the scene. The first part of the laser was its mechanical sound, which I created by sampling existing machines such as old tape recorders starting up or the engine of a P-47 Thunderbolt. The 'laser-y' sounds were produced with a synthesiser, some as short waves and others as drawn-out tones. To make the machine and laser sound more intimidating, I added a rumbling sound, which also had the benefit of creating a fuller soundscape with both high and low tones. To make the laser more interesting, I added the sound of a jet engine starting up, which has an ascending pitch - something I mirrored on the original laser with a pitch-shifter. Finally, to help with the story, I wanted to make the laser sound 'unstable', which saw the use of an old warning alarm combined with a pitch that never quite resolves (diverting audible expectation).
I achieved these effects by experimenting mainly with the parametric EQs and reverb settings, alongside other tools such as the pitch-shifter and the time-stretching tool. Together, I believe I managed to create the sound of a machine in an adjacent room which gradually grows louder and more foreign, yet still conveys a clear idea of some kind of laser (a small layering sketch follows below). Overall, I think this second soundscape is satisfactory and was a good lesson in exploring sound design.
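To illustrate the layering idea behind the laser in code form (a hedged sketch only - the actual track was built from sampled machines and a synthesiser, and this assumes numpy, scipy and soundfile), a rising sweep stacked on a filtered low rumble already gives the basic 'charging laser' shape:

```python
import numpy as np
import soundfile as sf
from scipy.signal import chirp, butter, sosfilt

sr = 44100
t = np.linspace(0, 4.0, sr * 4, endpoint=False)

# Rising sweep for the 'charging up' laser tone (ascending pitch).
sweep = 0.4 * chirp(t, f0=200, f1=2000, t1=4.0, method="logarithmic")

# Low rumble: noise pushed through a low-pass filter for weight and menace.
rng = np.random.default_rng(1)
sos = butter(4, 120, btype="lowpass", fs=sr, output="sos")
rumble = 0.5 * sosfilt(sos, rng.standard_normal(len(t)))

# Layering: the two elements are simply summed into one fuller sound.
laser = sweep + rumble
laser /= np.max(np.abs(laser))
sf.write("laser_layer_demo.wav", laser, sr)
```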
“Attempt at Environment-Based Audio II”









