How to Use Automation on Effects in Pro Tools (Production Technique)

Adding automation makes a record more exciting and brings life to your mixes, provided you add the right amount of it. Reverb and delay are two of the most common effects to automate.

In this case, let's focus on the vocal track: select it, spend some time listening to it, and send its signal to the reverb/delay bus, which gives you a send fader. Riding that fader up and down while the vocal plays is a good way to practise automation. The idea is not to have the effect on constantly, but to bring it in at regular intervals. In audio post production, 'fader-only' automation is not really sufficient; mixing is a somewhat different process from music recording, and the quality of the soundtracks is often altered to match the pictures. This means not only changes in level on the channel faders, but also changes in frequency equalization (Wyatt & Amyes, 2004).

Use any reverb and delay plugins in Pro Tools that suit the needs of your mix. For example, you could set up a D-Verb with a large hall setting at 50% wet, fed from a quarter-note delay, then add an extra long quarter-note delay with a little distortion on it, and that becomes your custom effect. Once you are done practising with the send fader, open the automation window to make sure automation is enabled; if a parameter is lit red, it is armed for automation. Make sure Send Volume is enabled in there. This technique is most often used on reverb-tail returns, and the delay should not be more than 50 ms (Elmosnino, 2018).

On your send level fader, set the automation mode to Touch. When you record automation you do not have to record-enable the track or re-record anything in your session; all you have to do is play the session back and make the changes you want with the proper automation mode selected, and those changes will be saved. Play the session, move the send fader up and down on the selected parts, and just like that your automation has been recorded. Set the automation mode back to Read so that you do not risk recording over what you have just written, then listen back to the track with the changes in place. If you have made a couple of mistakes and want to edit your automation, go to the track view selector (the default view is Waveform), click where it says Waveform, and change it to the automation lane you have just written; you will then see the automation data. You can adjust the breakpoints up or down with the Smart Tool, delete any you do not need, or grab the Pencil Tool and draw in automation by hand. When you are done editing, change the view back to Waveform and keep on mixing.
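To make the idea of automation data a little more concrete, here is a rough Python sketch (my own illustration, not anything Pro Tools exposes) of what a breakpoint volume lane boils down to: a list of (time, gain) points that gets interpolated across the audio and applied as a gain ride. The breakpoint times and dB values are made up just for the example.

```python
import numpy as np

def apply_volume_automation(audio, sample_rate, breakpoints):
    """Apply a breakpoint volume automation lane to a mono signal.

    breakpoints: list of (time_seconds, gain_db) pairs, like the points
    you would draw with the pencil tool in an automation lane.
    """
    times = np.array([t for t, _ in breakpoints])
    gains_db = np.array([g for _, g in breakpoints])

    # Interpolate the breakpoints across every sample, then convert dB to
    # a linear gain factor and multiply it into the audio.
    sample_times = np.arange(len(audio)) / sample_rate
    gain_db_per_sample = np.interp(sample_times, times, gains_db)
    gain_linear = 10.0 ** (gain_db_per_sample / 20.0)
    return audio * gain_linear

# Example: a 4-second send ride that swells up at 1 s and backs off at 3 s.
sr = 48000
vocal = np.random.randn(4 * sr) * 0.1          # placeholder for a vocal track
ride = [(0.0, -20.0), (1.0, -6.0), (3.0, -6.0), (4.0, -20.0)]
automated = apply_volume_automation(vocal, sr, ride)
```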

In conclusion, automation is a great way to add special effects to your session and can give it a really nice vibe.

Wyatt, Hilary, and Tim Amyes. Audio Post Production for Television and Film: An Introduction to Technology and Techniques. Taylor & Francis Group, 2004.
Elmosnino, Stephane. Audio Production Principles: Practical Studio Applications. Oxford University Press, Incorporated, 2018.

How to Set Recording Levels In Pro Tools (Production Technique)

Metering in Pro Tools is crucial: always keep your eyes on the meters, whether recording or mixing, making sure that the signal is not coming in too hot or too low. When you have an artist in the booth, what you want to aim for is roughly a negative six (-6 dB) peak input level.

Aiming for negative six (-6 dB) does not mean you go and adjust the channel fader; that will not get you there. Nothing inside Pro Tools, including a compressor or EQ on the track, can change the level of the signal coming into the software. If you are recording a signal, you have to change the level in one of three places, whether it is too loud or too quiet. The first is the source volume: adjusting a keyboard or guitar's output volume is a great way to change your input level. The second is microphone placement: if the microphone is close to the source the signal will be louder, and if it is further away it will be quieter. The third is adjusting the gain on the preamp, which sets the level of the microphone signal, either on the interface or on an external hardware preamp. For best results, aim for an average peak input level around -6 dBFS or lower, keeping the track meter in the yellow range; to do this, adjust the level of your analog source while monitoring the indicator lights on your onscreen track meter (Cook, 2013).
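To illustrate what the meter is actually reporting, here is a small Python sketch of my own (not from Cook's book) that measures the peak of a recorded buffer in dBFS and checks it against the -6 dBFS target discussed above.

```python
import numpy as np

def peak_dbfs(samples):
    """Peak level of a float signal (full scale = 1.0) in dBFS."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return float("-inf")
    return 20.0 * np.log10(peak)

# Example: a sine wave recorded at half of full scale peaks at about -6 dBFS.
sr = 48000
take = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
level = peak_dbfs(take)
print(f"peak level: {level:.1f} dBFS")   # roughly -6.0 dBFS
print("too hot!" if level > -6.0 else "within the -6 dBFS target")
```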

Aim for around negative six (-6 dB) on the master fader as well, which makes sure you have enough headroom for any mastering process. You also want any peaks that come through to pass without clipping: parts of a record can get louder, and if they clip you end up with distortion in your session. Just make sure you have that negative six (-6 dB) level on your input and on your master fader, which is your output. Collins (2013) shows, for example, a send window in post-fader mode set to unity gain with Follow Main Pan selected.

In conclusion, monitoring your input and output levels is extremely important, so always keep an eye on the meters and aim for that negative six (-6 dB) input and output level. Get this part right and you will have plenty of good, clean signal to process in post production.

Izhaki, R. (2017). Mixing Audio: Concepts, Practices, and Tools.
Cook, Frank D. Pro Tools® 101: An Introduction to Pro Tools 11. Cengage Learning, 2013.
Collins, Mike. Pro Tools 11: Music Production, Recording, Editing, and Mixing. Taylor & Francis Group, 2013.

Reggae Genre Analysis

[Image: Group of people waving the flag of Jamaica]

Jamaica was a British colony until 1962, which is after the rise of ska (Potash, 1997).
Ska draws on R&B piano, guitar, bass, drums and brass instruments. It is said to have originated as a new sound that pushed back against the indigenous styles that dominated at the time, mento and Jamaican blues. Out of the experimenting of that era emerged one of the most recognized and renowned musicians, Bob Marley, who was key to the development of reggae music. The Wailers, the band he started with Peter Tosh and Bunny Wailer in 1963, made the transition through the three stages of early Jamaican popular music: ska, rocksteady and reggae (King, Bays, & Foster, 2002).

The original audience for reggae was poor, working-class Jamaicans; however, the genre ended up being listened to in all countries by people of all ages and races. It is associated with hardship, with good times, and with perseverance and the celebration of overcoming struggle and the societal pain of everyday living. Reggae went global rapidly as Jamaicans living in Great Britain prompted labels such as Trojan to capitalise on and broaden reggae music (Chang & Chen, 1998). Record labels in the UK began to experiment with this new wave of reggae, creating more subcultures that helped the genre grow much faster. Growing trends of dreadlocks and smoking weed also turned reggae into a fashion, as the audience began to be associated with the look as much as the music.

The heart of reggae music lies in its engagement with social issues (King, Bays, & Foster, 2002). They suggest that Bob Marley made reggae a universal language by giving a voice to the specific political and cultural concerns of Jamaica. I remember that he was invited by Zimbabwe's former president to grace the country's independence from British colonial rule, and Marley even composed a song called "Zimbabwe" as a sign of solidarity.

The lyrical content of Bob Marley's reggae music helped spread awareness of the Rastafarian religion (King, Bays, & Foster, 2002). Many believed he was the main factor in its spread to the USA, Canada, most of Europe, Africa and Australasia. The political and social ideas of the movement spoke about the inequality that the black community experienced and was exposed to. The authors also suggest that Rastafari represents a mystical union of the human and the divine, seeking a unity of the personal, social and interpersonal aspects of being. Having been born to a white father and a black mother, Bob Marley also addressed the oppression his mother suffered while raising her children single-handedly.

Reggae music is a manifestation of Pan-African aesthetics (Gooden, 2014). Gooden suggests that African aesthetics reflect the cultural values of African societies, and that reggae is a product of African people's history and identity. The beauty and enjoyment of reggae is thought to have the ability to affect our emotions, intellect and psychology. Reggae continues to live on as upcoming artists emerge with new styles while maintaining its originality.

I am looking forward to the day I produce a reggae-style song, and I will be pretty experimental about it, since reggae itself came from experimenting.

King, S., Bays, B., & Foster, P. (2002). Reggae, Rastafari, and the Rhetoric of Social Control. Jackson: University Press of Mississippi.
Chang, K., & Chen, W. (1998). Reggae Routes: The Story of Jamaican Music. Philadelphia: Temple University Press.
Potash, C. (1997). Reggae, Rasta, Revolution: Jamaican Music from Ska to Dub. New York: Schirmer Books.
Bader, S., Palik, B., Greutert, V., Jaxa, P., Cole, S., Lewis, H., . . . Festival international de jazz de Montréal (Directors). (2010).
Barrow, S., & Dalton, P. (2004). The Rough Guide to Reggae (3rd ed., Rough Guides Music Series). London: Rough Guides.
Daynes, S. (2010). Time and Memory in Reggae Music: The Politics of Hope. Manchester University Press.
Gooden, A. (2014). "The Pan-African Aesthetic in Reggae Music".

Social Conscious project (Suicide Prevention)

Suicide is a complex issue involving numerous factors and should not be attributed to any one single cause. Not all people who die by suicide have been diagnosed with a mental illness and not all people with a mental illness attempt to end their lives by suicide.

The suicide rate in Australia in 2015 was reported as 12.6 per 100,000, the highest rate in more than ten years. This equates to more than eight deaths each day, and the rate among males is around three times that of females. People who experience suicidal thoughts and feelings are suffering tremendous emotional pain. People who have died by suicide typically had overwhelming feelings of hopelessness, despair and helplessness. Suicide is not about moral weakness or a character flaw. People considering suicide feel as though their pain will never end and that suicide is the only way to stop the suffering.

The suicide rate amongst Aboriginal and Torres Strait Islander peoples is more than double the national rate. In 2015, suicide accounted for 5.2% of all Indigenous deaths, compared to 1.8% for non-Indigenous people.

Many factors and circumstances can contribute to someone's decision to end his or her life. Factors such as loss, addictions, childhood trauma or other forms of trauma, depression, serious physical illness, and major life changes can make some people feel overwhelmed and unable to cope. It is important to remember that it isn't necessarily the nature of the loss or stressor that matters so much as the individual's experience of these things feeling unbearable. Certain segments of our society, especially those who have been marginalized, are at greater risk of suicide. Marginalization, institutionalized trauma, colonialism, structural violence, racism, prejudice, acculturation, and homophobia have contributed to First Nations, Inuit and LGBTQ people having higher rates of suicide-related behaviours. Older white males also have among the highest suicide rates, with contributing factors including cultural expectations and gender or societal roles.

This information made a strong impact on my life, knowing that people can take their own lives due to complex, interconnected factors: individual, environmental, biological, psychological, social, cultural, historical, political and spiritual, including psychological trauma. Over the last two years I have had time to encourage members of my band, some of whom also went through suicidal thoughts but overcame them, by giving them a listening ear. Suicide risk can be reduced with individual and societal commitments to social justice, equality and equity, including but not limited to addressing and speaking out on issues such as stigma, homophobia, racism, institutional poverty, misogyny, abuse, oppression, and patriarchy, along with ensuring access to effective and appropriate psychological and medical treatment and support. Having been born into a Christian background, I found a scripture in the Bible, Matthew 6:33 ("But seek ye first the kingdom of God, and his righteousness; and all these things shall be added unto you"), to have a great positive impact on my life. I am not being religious, because religion carries much judgement and condemnation; rather, I am driven by the Gospel of Jesus. Many would argue about it, but this has been my way of helping anyone in distress. I believe that we are designed body, mind, soul and spirit, and it is significant to have all these aspects living healthy and at peace, as they are all interconnected.

I was therefore inspired by the gospel to compose a song using scriptures and my own words: an R&B, disco, funk, soul type of song characterized by dancing, shaking, celebrating and happiness, to overcome any notions of suicide. The song is also meant to remind the young and the old that life is precious, that it only comes once, and that it is best to embrace it with its ups and downs. I believe there is no permanent problem; every problem has an expiry date. The song promotes hope, faith, healing, helping and love, even if you do not believe in religion, God or religious practices. The song also promotes the unity of multiculturalism, and my band is a great example, as it includes Caucasian, Chinese, Samoan, African and Sri Lankan members. Suicide does not select any race, and a diverse society should embrace a shared and mutual responsibility to support the dignity of human life and of each person; I believe that if we give love and hope, and have faith, we will conquer suicide. Hence the title of the song is "Give". No single discipline or level of societal organization is solely responsible for preventing suicide. Individuals in many roles and at all levels of community, society and government can and should contribute to the prevention of suicide-related behaviours. Suicide prevention therefore requires collaboration based on equality, where no discipline or stakeholder is privileged over another.

Paterson, Craig. Assisted Suicide and Euthanasia: A Natural Law Ethics Approach. Routledge, 2008.
Statistics on Suicide in Australia (2019). Retrieved 8 November 2019, from https://www.lifeline.org.au/about-lifeline/lifeline-information/statistics-on-suicide-in-australia
Suicide facts and stats (2019). Retrieved 8 November 2019, from https://www.lifeinmindaustralia.com.au/about-suicide/suicide-data/suicide-facts-and-stats
Goldney, Robert D. Suicide Prevention. Oxford University Press, 2008.
Lifestyle, health, mind. (2019). Retrieved 9 November 2019, from https://www.news.com.au/lifestyle/health/mind/australian-men-are-in-crisis-with-suicide-rates-rising-meet-some-of-the-men-wholl-die-this-week/news-story/4488a31ab0392ce1f7ee1a8717e73d38

Social Conscious Studio Recording Project

Four gifted musicians and I went into the Neve studio to record music for the social conscious project on suicide prevention. In this blog I will talk about the preparations and the process involved in setting up the recording session, as well as the post production, editing and mixing.

Roy and I discussed how to set up the studio. Since all the instruments were to be recorded in the same environment (the live room), there would be a lot of bleed between microphones. However, the bleed was what we wanted to capture most, so there was no need to isolate the guitar and keyboard amps. We also made sure we set up DI boxes on all guitars and keyboards so we could capture both the amp and DI sounds. A bit of a mix-up meant the guitar DI did not record, but we still managed to get the sound from the amp.

One of the most important things I have heard Rose and Guy say is that how much work you have to do in editing and mixing is largely determined by how well your microphone placement is carried out. Bearing that in mind, we made sure the drums and the guitar amps were placed in positions that would capture a good sound. I also kept in mind that it was the drummer's first time recording, so I had to make sure the mics were set right. Roy, as our engineer, made sure our Pro Tools session was well named, with organised tracks ordered into groups and colour coded to make mixing easier later on. We did not intend to use any special effects; we just wanted a clean, dry sound that we could play around with during mixing.

For communication we had headphones for each musician and a talkback microphone at the main keyboard, which I used to direct the band through the intro, verses, chorus, bridge and outro of the song. I gave the band the opportunity to try a click track, but they honestly could not play along with it, as they are used to playing without one. The dynamics of the song are characterized by a quiet intro that gradually gets louder and a loud chorus with a strong 4/4 beat, with mostly staccato notes that give an energetic feel. The tempo was around the 115 bpm mark, which is pretty groovy. The melody lines on the main keys repeated constantly, while the second keys had plenty of room for a group of tones played around the melody, giving an interesting feel and creating a sweet mood. The couple of takes we did gave the band more confidence, so they stopped being conscious of the recording and simply enjoyed playing like we do at my house, which is a very cool thing. These takes also helped Roy get the right balance on the console so we got a good signal into Pro Tools without clipping or low level.

The recording experience and atmosphere were awesome, and considering this was the first studio recording for all of the musicians, while also promoting diversity in Australia, it was impressive. I will be working on getting the guitarist back to redo the guitars, since we did not get much signal from the DI, following Rose's feedback about keeping the guitar clean and not muddy. I will experiment with this in my music lounge at home and see how we roll. As for the extra parts such as the brass section, strings and violin, I did them at home using my Korg PA588 synthesizer and its onboard sounds. I avoided using MIDI instrument sounds, as I felt I get a nicer texture directly from the synth. The added keys made the song full and rich, with variation from the different sounds. The work now is to get the mix right, making sure the sounds complement each other, and to add stereo effects to bring excitement to the ear.

Signal flow (Audio interface setup)

Setting up an audio interface has its own challenges, whether or not you follow the proper signal flow chain. I spent almost eight hours trying to get signal to come through Pro Tools from my audio interface. I had an M-Audio FastTrack and a Behringer Xenyx 12-channel mixer interface.

I figured that if I did not ask for help I would definitely stay stuck. One might assume that once you connect a USB interface to the computer, signal works automatically; however, there are a couple of processes that need attention to get the right settings. I asked my colleague Roy to help set the interfaces up to work with Pro Tools, and we spent a good three hours trying to figure out how to get signal into Pro Tools. Both interfaces registered in the Pro Tools setup; the challenge was the I/O settings. The default bus and output paths worked, but nothing was registering on the input, which was a mystery. After many attempts we decided to seek help from the tech department, but they were busy at the time and were not able to address my problem.

I then asked my lecturer Rose for help, and she was willing to help the following day. After class I took the opportunity to ask Guy Cooper for help with the interfaces. He immediately knew the problem and took me through the process. First he connected the Xenyx USB interface, then opened the Audio Devices window. The left panel of that window listed the audio devices, each with input and output panels: the built-in microphone, the built-in output, Pro Tools Aggregate I/O, and the USB Audio CODEC, which was my interface. My thought was that once we clicked on the USB CODEC it would just work, but not exactly. Instead, Guy explained that we needed to create a new audio device with two ins and two outs using the USB Audio CODEC drivers, which then registered on the panel. That simple setup worked and did not take five minutes. Signal went from the microphone to the audio interface and into Pro Tools, and everything was working.

In conclusion, when you get stuck setting up your audio interface after connecting it to the computer, with your Pro Tools playback engine set to your interface, use the search on your Mac and type in Audio MIDI Setup. Your interface will register and you can see it, but you then need to create another device that has both input and output registering, and tick the two boxes for your interface (in my example, USB Audio CODEC), covering both In and Out. Rename the device to something related to your interface so you do not get confused; the next time you do not get signal, go straight to the Audio MIDI Setup panel. Credit to Guy Cooper!

(2019). presonus.com. Retrieved 5 November 2019 from https://www.presonus.com/learn/technical-articles/How-To-Set-Up-a-Home-Recording-Studio

(2019). ledgernote.com Retrieved 5 November 2019 from https://ledgernote.com/blog/q-and-a/audio-interface-recording/

Parallel Compression

Parallel compression is a dynamic range technique used in sound recording and mixing in which a dry signal is mixed with a compressed version of itself. It uses a send and return setup, similar to sending signal to an effects processor, and is also known as upward compression or New York compression.

The benefit of parallel compression is that you keep the dynamics of the signal while adding the oomph that the compressor gives, providing extra body, for example in a vocal. Typically a compressor is used to reduce dynamics, clamping down on the peaks to even out the overall signal. With parallel compression you keep the dynamics while still getting that lush richness the compressor provides. Depending on the settings used, compressors can give very different results. It is generally accepted that compressors can work on micro-dynamics (altering the inner dynamics or peaks of individual instruments) and macro-dynamics (altering the overall loudness of longer passages) (Elmosnino, 2018).

To do this, it is best to duplicate the signal so that one copy is lightly compressed or not compressed at all while the other copy is squashed to the extreme. Create a send on the source track and route it out to a new track. If the source track is mono, keep the parallel compressed track mono as well so you do not run into phasing issues. Make the new track an Aux Input, rename it to something like 'P Comp', and press OK; that gives you your parallel compression track. A couple of things then need to be done. First, the send level needs to be set to unity gain right away, which you can do by holding the Option key and clicking on the send level fader. You also want to make sure the send is a pre-fader send, so that once you have the squashed copy of the signal you can use the track faders to balance the compressed and uncompressed signals. If you do not set the send to pre-fader, any time you change the original track's fader it will also change the level going into the compressor, which ultimately changes the amount of compression you are getting; I advise against that.
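Here is a rough Python sketch of what that routing does to the audio, assuming a deliberately crude static compressor (no attack or release envelope) as a stand-in for whatever plugin you insert on the parallel track; the threshold, ratio and blend values are only illustrative.

```python
import numpy as np

def compress(x, threshold_db=-30.0, ratio=8.0):
    """Very crude static compressor: gain-reduce everything above threshold.
    A real compressor also has attack/release smoothing, omitted here."""
    eps = 1e-12
    level_db = 20.0 * np.log10(np.abs(x) + eps)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_reduction_db = over - over / ratio          # how far to pull peaks down
    return x * 10.0 ** (-gain_reduction_db / 20.0)

def parallel_compress(dry, wet_gain_db=-10.0):
    """Blend the untouched dry signal with a squashed copy of itself."""
    squashed = compress(dry)                          # the 'P Comp' aux track
    wet_gain = 10.0 ** (wet_gain_db / 20.0)           # the aux fader you ride
    return dry + wet_gain * squashed

# Example usage on a placeholder vocal buffer.
sr = 48000
vocal = 0.4 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
mixed = parallel_compress(vocal, wet_gain_db=-12.0)
```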

Once that is done, insert any of your favourite compressors, or a limiter plugin, on the parallel track; the whole point is to squash the signal by any means necessary. Turn the ratio up high, something like 7:1 or more, then turn up the input a little. Jump to the mix window, unmute the lead vocal, and pull the level of the parallel compression track all the way down. As you mix, slowly add the compressed track back in, and when you find the sweet spot just leave it. Take time to listen before and after so you can hear the difference. After that you can play around with a 7-band EQ, using a high shelf to boost the high frequencies and a low shelf to give the low end a nice boost, which also helps give drums a lift. Use the bypass to hear the difference and keep tweaking it to taste. I am looking forward to this process when I start to mix my songs.

Elmosnino, Stephane. Audio Production Principles : Practical Studio Applications. Oxford University Press, 2018.
(2019). Izotope.com. Retrieved 4 November 2019 from https://www.izotope.com/en/learn/5-ways-to-use-parallel-processing-in-music-production.html
(2019). Izotope.com. Retrieved 4 November 2019 from https://www.izotope.com/en/learn/expanding-on-compression-3-overlooked-techniques-for-improving-dynamic-range.html

How to use Reverb

Reverb (short for reverberation) is the acoustic environment that surrounds a sound, and natural reverb exists everywhere. Reverb is composed of a series of tightly spaced echoes; the number of echoes and the way they decay play a major role in shaping the sound you hear. Many other factors influence the sound of a reverberant space, including the dimensions of the space (length, width, and height), the construction of the space (such as whether the walls are hard or soft and whether the floor is carpeted), and diffusion (what the sound bounces off).
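As a minimal illustration of the 'tightly spaced, decaying echoes' idea, here is a short Python sketch of a single feedback delay line (a comb filter). A real reverb uses many of these plus diffusion and filtering, and the delay time and feedback amount here are arbitrary, but it shows how the spacing and the feedback shape the decay you hear.

```python
import numpy as np

def comb_echoes(x, sample_rate, delay_ms=30.0, feedback=0.6):
    """Feed a signal through one feedback delay line: each pass around the
    loop produces another echo, quieter than the last by 'feedback'."""
    delay_samples = int(sample_rate * delay_ms / 1000.0)
    out = np.copy(x).astype(float)
    for n in range(delay_samples, len(out)):
        out[n] += feedback * out[n - delay_samples]   # add the decaying echo
    return out

# Example: an impulse (a clap-like click) turns into a train of fading echoes.
sr = 48000
clap = np.zeros(sr)
clap[0] = 1.0
echoes = comb_echoes(clap, sr, delay_ms=30.0, feedback=0.6)
```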

Almost every record played on the radio or any other platform has a certain amount of reverb on it, and it matters a great deal. There are plenty of stock reverb plugins available to choose from; which one to use is entirely up to the engineer. One of the first things you want to do whenever you use a reverb is to set it up on a send and return. Create a new stereo Aux Input by pressing Shift+Command+N, changing the track type to stereo Aux Input (it will be mono by default), and clicking Create. Once done, rename it to something short like 'Verb', then assign a bus as the input to this reverb track. You can do this in either the mix window or the edit window; the controls are pretty much the same. If you look at the I/O section, the input is on the top and the output is on the bottom. Click on the input and assign a bus. A bus is simply an internal routing pathway (a virtual routing system) that allows signal to travel from one track to another; choose any available pair, such as bus 9-10.

It is good practice to rename the bus to something like 'Verb' as well. Once done, go ahead and choose a reverb of your choice; something like D-Verb in Pro Tools is fine. Now finish setting up the send to actually get the vocal signal over to the reverb channel. To do that, go to the sends of your vocal track, open the send selector, and choose either the track by name (if it is well labelled) or the verb bus directly; both accomplish the same job. Whenever you create a send, the send level fader pops up, and this determines how much of the vocal signal you are sending over to the reverb channel. Hold the Option key and click the fader to set it to unity gain so you can hear how the reverb is interacting with the vocal; once you find a reverb you like, you can back the fader down. Take time to listen to the dry vocal by muting the reverb track.

While playing the vocal track, open the preset library menu on the reverb plugin; it gives you different types of reverbs to choose from, such as halls, rooms, plates, ambience and so on, and you can hear how each of them sounds. Say you pick a hall reverb: you might want to start by playing with its time, as it can be long. Adjust the pre-delay time, which determines how long after the initial signal the reverb starts. The idea is to create space between the vocal and the reverb, and to get a nice separation you calculate a pre-delay time that is in time with the music. I typically like to use a 16th-note pre-delay time, which in my case would be about 43 milliseconds; you can choose your own, as this gets a bit mathematical. The reverb can still be a little big, so then work with the decay time, for example 3.6 seconds, lowering it until it fits without running too long into the next phrase. The reverb comes with a filter section that adjusts with each preset, so you can reset the knobs to their defaults by holding Option and clicking them. An EQ can be added to shape and mould the reverb track if it sounds too bright, unnatural, or out of keeping with the mood of the song. Use filters to roll off a lot of the high end and low end, leaving a band roughly between 400 Hz and 3 kHz. Play with the send level fader, and if you feel like the reverb is getting lost even though the levels are set, you can use a compressor to restrict the dynamics of the reverb and keep it present a little longer.
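The pre-delay maths only comes down to tempo and subdivision, so here is a small worked sketch assuming the usual relationship of one quarter note = 60000 / BPM milliseconds. The exact figure you settle on (such as the 43 ms mentioned above) depends entirely on the tempo and the note value you pick.

```python
def note_length_ms(bpm, subdivision=16):
    """Length of one note of the given subdivision (4 = quarter, 8 = eighth,
    16 = sixteenth...) in milliseconds, assuming a 4/4 feel."""
    quarter_ms = 60000.0 / bpm
    return quarter_ms * 4.0 / subdivision

# Example: pre-delay candidates for a song at 115 bpm.
for sub in (4, 8, 16, 32):
    print(f"1/{sub} note at 115 bpm = {note_length_ms(115, sub):.0f} ms")
# A 1/16 note at 115 bpm comes out around 130 ms; shorter subdivisions or a
# faster tempo bring the value down toward figures like 43 ms.
```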

You can add another reverb, such as a room, to get a closer reverb on the vocal: create a new track, call it 'Room Verb', and click Create. On this one, keep the pre-delay right down at zero for a small-room effect that creates a sense of space. Slap in an EQ again and follow a similar process to the other reverb, rolling off the highs and the lows. It is all about experimenting; there is no right or wrong. In my next mix I will definitely run through this process and play around a lot more to get new ideas.

Izhaki, Roey. Mixing Audio : Concepts, Practices and Tools. 2nd ed., Focal Press, 2012.
Elmosnino, Stephane. Audio Production Principles : Practical Studio Applications. Oxford University Press, 2018.
(2019). Collinsdictionary.com. Retrieved 2 November 2019 from https://www.collinsdictionary.com/dictionary/english/reverb

Post Mortem Black Hole

I now have a pretty good understanding of ADR (Automated Dialogue Replacement), Foley and sound design, as they were introduced to me in my fifth trimester. The first project required us to work in teams of two or three members, and I had the opportunity to work with Roy and Alice. As a team we decided to do the Black Hole project, a short film that required replacing the sounds by recreating them to interact with the visuals.

Creating a group chat in Slack helped us communicate, and I am happy to say the communication was great, with no struggles or glitches from any of the group members. A group folder was created in Google Drive to keep up with updates, back up files and share information. We presented a project proposal to Rose, just as it is done in the industry, and received approval.

Good communication helped us organise bookings for the C24 studio in weeks 2 and 3 to recreate the sounds. As a team we made markers in the Memory Locations window of the Pro Tools session, which helped us identify what type of sound to create at each point in the short film clip. After creating each sound we would tick it off on the printed run sheet to mark it as completed. It is good practice to label every step you undertake so that you avoid redoing or missing anything.

Roy composed a 12-bar boogie track to go along with the clip, with electric guitar, bass and programmed drums. It was an interesting idea, giving it a different feel from the original clip. We booked an eight-hour session in the C24 studio and made sure we created as much as we could by the end of the day.

We used two microphones, one distant and one close, both Rode models: the NTG-1 and the NT2-A. The close mic was our main microphone, designed to reject surrounding sounds and capture only what it is pointed at. Challenges arose when creating an alien kind of sound, blowing across a drinking water flask, recording groaning sounds, and miking a sliding door; we did what we could to achieve the desired sounds. I also added a synth sound from a Roland plugin in Logic, which Nick had heard before and suggested we play around with.

Alice did some time stretching on the wave transients in and out, and reversed the clip where the character pulls his hand out. Roy suggested using the Time Compression/Expansion trim tool to give the clips a strange, alien feel. This process really helped us work together as a team; having three heads put together can bring a lot of creativity.

I would say we did exceptionally well on the Black Hole project, being professional with each other and treating one another with respect. Communication was very effective; even when there were delays we kept the communication open through Slack and Skype. We fully utilised the C24 studio when we booked it and helped each other when we got stuck on allocated duties. The milestone plan worked well, although it could have been better, especially around creating the alien sound; if we had spent a little more time on that we could have achieved much better results. We used the feedback from Rose and Nick on the first mix, and this helped us work on the areas that needed improvement.

What I have learnt is that sound design is a big undertaking, considering we only did a short clip; I can imagine the amount of work that goes into a full movie. Creativity is quite a process and needs brainstorming, and great thinking leads a project to success. I have come to learn that an audio engineer's job fits into the film, animation and game design industries, and I need to be very knowledgeable about my work. Learning Foley, sound design and ADR was a great experience for me, and when I watch films now I appreciate that the relationship between the visuals and the audio is of great importance in keeping the audience engaged. Our presentation could have been better, if only I had not assumed the file was properly saved. I have learnt that communication still needs to be effective and that everything must be saved correctly; following the correct steps when saving the project in Pro Tools is very important so that you end up with the correct file. Despite the setback we worked as a team to get the correct file, and the whole experience united us. I applaud the team effort on this project.