Thursday, 12 January 2012

Monitors - And how to get the best out of them.

The following applies only to professional studio-quality monitors; it is aimed at noobs and audiophiles alike:

  • The first thing you must ensure you do is set up your monitors correctly.

- The speakers have been tested in thousands of ways, and the manufacturer has provided you with the optimum setup for that particular monitor. If you align them correctly to start with, many problems will never develop in the first place.

- Another thing that gets overlooked is the optimum or suggested operating level (volume in dB). Some speakers are designed to work at specific volumes; if you drop below this level the sound will be coloured or masked by natural acoustic effects.

- You want to stay away from corners: corners + bass = problems.

- Correct amp power. If your monitors are unpowered (no amp built in), ensure that the amp output (power in watts) matches the monitors' power handling ...in other words, if your amp goes to 150 watts, your combined pair of monitors should handle 150 watts (or 75 watts each). If your amp exceeds your monitors' capacity and you push them past their operating limits you are going to overdrive them, and this will damage the speakers' ability to reproduce sound accurately ...and that's a problem.

If you are in the position where the amp exceeds the monitors' capacity, you should place a limiter between your amp and monitors; this will protect your speakers.

If, however, your monitors are 50 watts each (100 watts combined) and your amp is 75 watts, there is no problem; your monitors will just be limited to a 75-watt maximum volume (37.5 watts to the left, 37.5 watts to the right).
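The arithmetic above can be sketched as a quick check. The wattages are the examples from the text; the function names are my own, not any standard API:

```python
# A minimal sketch of the amp/monitor power-matching arithmetic.
# Function names and figures are illustrative, not a standard.

def per_channel_watts(amp_watts_total: float, channels: int = 2) -> float:
    """Split a stereo amp's total output evenly across its channels."""
    return amp_watts_total / channels

def needs_limiter(amp_watts_total: float, monitor_watts_each: float,
                  channels: int = 2) -> bool:
    """True if the amp can push each monitor past its rated power."""
    return per_channel_watts(amp_watts_total, channels) > monitor_watts_each

# The 150 W amp with 75 W-per-monitor pairing from the text is a match:
print(needs_limiter(150, 75))   # False - amp matches the monitors
# A 75 W amp driving 50 W monitors is safe (each monitor sees at most 37.5 W):
print(needs_limiter(75, 50))    # False
# A 200 W amp into 75 W monitors is the case where a limiter is advised:
print(needs_limiter(200, 75))   # True
```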

  • Second thing on the list is absorption.

- It's a good idea to place soft items around the walls and ceiling to diffuse or minimise any standing waves (or black spots) appearing.

- Absorption blocks are often mistakenly placed in (for lack of a better word) mirror-image positions opposite each other, but sporadic positioning is best, as it helps the diffusion of the standing waves.

- You may need to place foam behind and directly in front of your monitors (on the wall behind your head); if you're on a budget, pillows, towels and duvets will do. You may also want to invest in a pad for each monitor to sit on.

- Things to bear in mind: the denser the material, the more bass it will absorb, but if it's too dense it's going to reflect (so wood, metal and glass are out of the question). Some people carpet their ceiling or one of their walls. Denser (as in foam/duvet) materials also work better as bass traps, dealing with the nastiness that happens when bass hits the corners I mentioned earlier.

  • Third thing is finding the sweet spot:

- The sweet spot is the perfect position to sit or stand in the room; it is located directly in front of and between the monitors. If you're unsure of where it is, check your manual, but it's generally about an arm's length (arm and a bit) from the monitors in a small space. As the room increases in size, however, you will find it moves towards the centre of the room, with a little extra amplification thrown in (to compensate for the room size ^_^)
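One common convention (an assumption on my part, not from any particular manual) is to sit at the apex of an equilateral triangle formed with the two monitors. A quick sketch of that geometry:

```python
import math

# Equilateral-triangle listening convention: the listener sits as far from
# each monitor as the monitors are from each other. This is a common rule
# of thumb, not a universal spec - your monitor's manual takes precedence.

def sweet_spot_setback(monitor_spacing_m: float) -> float:
    """How far behind the line joining the monitors to sit, assuming an
    equilateral triangle (height = side * sqrt(3) / 2)."""
    return monitor_spacing_m * math.sqrt(3) / 2

# Monitors 1 m apart (roughly the arm's-length setup described above):
print(round(sweet_spot_setback(1.0), 2))  # 0.87 m back from the speaker line
```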

  • And last, signal flow:

- At the start of a session/mix, zero your desk/mixer and ensure that the levels are correct (with regard to bus channels and monitor/output levels). It won't in any way affect what you hear unless you're right in the red to start with, but if your multitrack (mix out) channel is lower than your monitor level (depending on what hardware you use) you can end up recording quieter than you hear/think ...and this can result in noise being introduced into the next stage of the mix when the lower (mix out) recording is amplified to compensate.

- If you are going to use EQ (inserted between mixer and monitor), remember that you may not be hearing what is being recorded, and if possible take notes of the EQ configuration/settings.
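To put numbers on that gain-staging point, here is a hypothetical sketch (all the dB figures are made up) of why make-up gain can't buy back the signal-to-noise ratio lost by recording too quietly:

```python
# Hypothetical levels: the mix-out is recorded 12 dB lower than intended
# and amplified later. The noise floor rises by the same 12 dB, so the
# signal-to-noise ratio of the recording never improves.

signal_db = -6.0     # level the mix-out was actually recorded at
noise_db = -80.0     # recorder/desk noise floor
makeup_db = 12.0     # gain applied at the next stage to compensate

snr_before = signal_db - noise_db                              # 74.0 dB
snr_after = (signal_db + makeup_db) - (noise_db + makeup_db)   # still 74.0 dB

print(snr_before, snr_after)  # make-up gain lifts signal AND noise together
```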

If you're wondering what standing waves are:
Standing waves are like voids of dead air that cause strange things to happen, giving you the impression stuff is there when it is not, or that there is too much bass when the bass is perfectly fine - and this results in us incorrectly mixing or mastering our tracks.
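As a rough sketch of where those problem spots come from, the standard axial room-mode formula f = n·c/(2L) predicts the frequencies that pile up between two parallel walls. The room length below is just an example:

```python
# First few axial standing-wave (room mode) frequencies for one room
# dimension, using f = n * c / (2 * L). Room size is a made-up example.

SPEED_OF_SOUND = 343.0  # m/s, roughly room temperature

def axial_modes(length_m: float, count: int = 3):
    """Lowest `count` axial mode frequencies (Hz) for one room dimension."""
    return [round(n * SPEED_OF_SOUND / (2 * length_m), 1)
            for n in range(1, count + 1)]

# A 4 m-long room piles up energy at ~43 Hz and its multiples:
print(axial_modes(4.0))  # [42.9, 85.8, 128.6]
```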

Equalisation/Tone-control

Tone control can be as simple or as complex as you want it to be, but as with most elements of sound creation there are no strict rules.
At different stages of a mix or sound creation, certain frequency ranges will require a little tweakery; you will notice quite soon that different pieces of hardware or software have more control over the sound than others.
One application may give you 2 bands to play with, whereas another may offer 32 bands to cut or boost as required. To the uneducated observer it may seem only logical that a 4-band EQ module offers twice the control of its 2-band counterpart, and that a 32-band EQ is therefore '16 times better' than the 2-band module; however, this is not the case...

It's all about where the bands are split and how severe a cut each band can perform, not to mention the characteristics of the hardware (or software).

To get the most out of your EQ it's best to take a look at the manual/specification sheet to see exactly where, for example, the low/mid/high frequencies are divided; this will let you identify which method of equalisation is best suited to your current recording situation.
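As an illustration of band placement, a 31-band graphic EQ conventionally spaces its bands a third of an octave apart, centred on 1 kHz. The sketch below assumes that ISO-style convention; your unit's spec sheet is the real authority (front-panel labels also round these figures, e.g. 397 becomes 400):

```python
# Centre frequencies at 1/3-octave steps around 1 kHz - the usual spacing
# on a 31-band graphic EQ. Assumes the common ISO-style convention.

def third_octave_centres(bands_below_1k: int = 5, bands_above_1k: int = 5):
    """Centre frequencies (Hz) at 1/3-octave steps around 1 kHz."""
    return [round(1000 * 2 ** (n / 3))
            for n in range(-bands_below_1k, bands_above_1k + 1)]

# Eleven of the centres around 1 kHz:
print(third_octave_centres())
# [315, 397, 500, 630, 794, 1000, 1260, 1587, 2000, 2520, 3175]
```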

It's quite common for engineers or producers to use specific pieces of equipment to take advantage of good tone control ...in fact a whole mastering industry has grown out of it.

Wednesday, 11 January 2012

Software Synthesizers

This section is basically a quick rundown of the merits and drawbacks of today's software synthesizers in comparison with their analogue counterparts...

Over the past decade software synthesizers have become more and more powerful and accessible to amateurs and professionals alike, but when comparing software to hardware in general I notice the following points:



  • Character/Warmth. (analogue vs digital)

- Assuming you are recording to a high standard (clear of noise), the first thing that becomes clear is the low end you hear when an analogue synthesizer is used. Generally speaking, the standard components used in home computers cannot physically recreate the full frequency content, and a lot is lost in the process of converting this analogue sound into something that can be stored digitally (your wav file). However, since these frequencies occur outside the threshold of human hearing, this usually isn't a problem.

  • Character/Warmth. (software vs software)
- Although the processes implemented could be the same, all synthesizers, even software ones, will have their own character. This can be down to things like program-unique algorithms that do the same job in different ways/orders, or the actual analogue-to-digital conversion processes that take place when exporting a portion of audio.
- Another two main factors that contribute to a synth's character are the number of oscillators and the way they are combined/subtracted etc. (subtractive synthesis vs additive etc.).
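As a tiny illustration of how oscillator count shapes character, here is a sketch (with arbitrary parameters of my own choosing) of additive synthesis: summing sine harmonics at 1/n amplitude approaches a sawtooth, and the number of oscillators you sum audibly changes the tone:

```python
import math

# Additive synthesis sketch: a sawtooth built from summed sine "oscillators".
# More harmonics -> brighter, more saw-like; fewer -> rounder and duller.

def saw_sample(freq: float, t: float, harmonics: int) -> float:
    """One sawtooth sample built from `harmonics` summed sine oscillators."""
    return (2 / math.pi) * sum(
        math.sin(2 * math.pi * freq * n * t) / n
        for n in range(1, harmonics + 1)
    )

quarter_period = 1 / (4 * 110.0)   # sample a 110 Hz saw a quarter-cycle in
print(round(saw_sample(110.0, quarter_period, 1), 3))   # one harmonic: a sine
print(round(saw_sample(110.0, quarter_period, 16), 3))  # sixteen: near the ideal saw's 0.5
```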


  • Output (analogue vs software)
- The best part about software is that you never leave the digital domain, meaning that quality remains the same and does not deteriorate while editing ...and in theory no noise should enter the recording over time or through editing, unless introduced intentionally.
- Another bonus with some software synthesizers is the ability to export sound in multiple formats/bitrates etc.; this will of course colour or distort the output (wav) content, producing variations of an initial sound that can be useful in lots of creative ways when producing music or sound design.
- If live performance is a big factor in how the synthesizer is going to be used, it must be said that it's always good to have a dedicated rotary or button/pad to tweak or punch when you need to turn something on/off or alter a parameter.


  • Output (software vs software)
- The most important factor in recording is capturing sound at as high a quality as possible, so the output format will by default be a wav file; the higher the resolution the better. As synthesizers and computers get more powerful we're able to capture our sounds at higher and higher resolutions, so with this in mind it's always a good idea to take a look at what output options each program has to offer.
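As a rule-of-thumb sketch of why resolution matters: each extra bit of wav resolution buys roughly 6 dB of dynamic range. This uses the standard 6.02·bits + 1.76 dB quantisation-SNR approximation for a full-scale sine:

```python
# Theoretical best-case quantisation SNR per bit depth, using the standard
# 6.02 * bits + 1.76 dB rule of thumb (full-scale sine assumption).

def quantisation_snr_db(bits: int) -> float:
    """Approximate best-case SNR (dB) for audio stored at `bits` resolution."""
    return round(6.02 * bits + 1.76, 1)

for bits in (16, 24, 32):
    print(bits, "bit ->", quantisation_snr_db(bits), "dB")
# 16-bit gives ~98 dB and 24-bit ~146 dB of theoretical dynamic range
```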

  • Analogue vs Software in general

- Not really a fair comparison when you think of it: on one hand you have something you can touch, which in some cases can be unreliable in the most fun ways, and on the other hand you have something that is FAR more reliable, portable and accessible (in availability and price).

- Another good thing software has to offer is its ability to store a practically infinite variety of sounds as presets, which can then be automated at a later point.

- The ability to export audio at high resolution is another bonus software has over analogue, as you don't need external gear to record a sound.

*unfinished*

Thursday, 5 January 2012

Microphones - Vocals

Frequency Response:
- The frequency response of a microphone is good to bear in mind, as this determines what you will record. If your choice of microphones is limited, take into consideration the characteristics of the voice... is it a high-pitched female or a deep booming male? ...some hip-hop producers use bass drum mics to capture deep voices.

Type of Microphone:
- As a general rule condenser microphones will give the best quality; however, they require a mixer that supplies phantom power to the mic, or a bit of jiggery-pokery with a DI box or mic preamp.
- In live situations (depending on the style of music) a dynamic mic may be the better option.

Compression:
- Ideally all vocals should be recorded with some level of compression. As a general rule I start with a 5:1 ratio and tweak till it sounds and looks right, but every recording is different, so there are no real rules.
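As a sketch of what that 5:1 starting point does to levels (the threshold here is a made-up example, not a recommendation):

```python
# Minimal static compressor: above the threshold, every `ratio` dB of
# input yields only 1 dB of output. Threshold/levels are arbitrary examples.

def compress_db(level_db: float, threshold_db: float = -20.0,
                ratio: float = 5.0) -> float:
    """Output level in dB for a given input level in dB."""
    if level_db <= threshold_db:
        return level_db                      # below threshold: untouched
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-30.0))  # -30.0  (under threshold, passes through)
print(compress_db(-10.0))  # -18.0  (10 dB over becomes only 2 dB over)
print(compress_db(0.0))    # -16.0  (a 0 dB peak is tamed to -16 dB)
```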

EQ:
- I recommend not applying any EQ to vocals while recording ('fix it in the mix') unless there are problems with a noisy mic or mixer. It's much easier to fix a vocal at a later point than to try to reintroduce bass/mids to a weak recording.

Effects:
- If recording a vocalist with effects then, if possible, also record the untreated vocal... processed vocals might sound good now, but at some point in the mix you might want to use another effect (reverb, delay, vocoder etc.).

Technology - The Synthesizer (an introduction to)

What is a synthesizer?
Why would I need to know about my synthesizer?
Who uses synthesizers these days?
What types of synths are available to me these days?


What is a synthesizer?
- A synthesizer is an electronic device that produces sound. Though the designs and techniques implemented differ, all synthesizers are capable of producing sound at specific frequencies by combining, dividing or multiplying the waveforms/tones produced by their oscillators.

Why would I need to know about my synthesizer?
- It's pretty easy to get a sound out of a synthesizer; however, if you're after a specific kind of sound, a little tweaking in the right places is all you need ...and that requires a basic idea of what each module is actually doing. So first up it's a good idea to identify each section of the synth and see/hear how it affects the sound produced.

It will basically break down into:
SOUND (VCO/oscillators) --> SHAPING (ADSR) --> AMPLIFICATION (VCA)
- So it's a good idea to read up on the basics of oscillators and ADSR. Once you have a basic grasp of what each module is doing, it's safe to read up on the LFO ...this module allows you to create wobble effects, gate pads, turn things (effects) on and off, and basically automate other parts of the synthesizer.
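The SOUND --> SHAPING --> AMPLIFICATION chain above can be sketched as a toy voice. All times and levels here are arbitrary illustration values:

```python
import math

# Toy signal chain: sine oscillator (VCO) -> linear ADSR envelope -> gain (VCA).

def vco(freq: float, t: float) -> float:
    """Oscillator: a plain sine at `freq` Hz."""
    return math.sin(2 * math.pi * freq * t)

def adsr(t: float, note_len: float, a=0.01, d=0.05, s=0.7, r=0.2) -> float:
    """Linear envelope: attack, decay to sustain level, release after note_len."""
    if t < a:
        return t / a                              # attack ramp up
    if t < a + d:
        return 1.0 - (1.0 - s) * (t - a) / d      # decay down to sustain
    if t < note_len:
        return s                                  # hold sustain
    if t < note_len + r:
        return s * (1.0 - (t - note_len) / r)     # release ramp down
    return 0.0

def voice(freq: float, t: float, note_len: float, gain: float = 0.8) -> float:
    """VCA: oscillator output shaped by the envelope, then amplified."""
    return gain * adsr(t, note_len) * vco(freq, t)

# Envelope at a few points in a 0.5 s note:
print(round(adsr(0.005, 0.5), 2))  # halfway up the attack
print(round(adsr(0.3, 0.5), 2))    # holding at the sustain level
print(round(adsr(0.6, 0.5), 2))    # halfway through the release
```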

Who uses synthesizers these days?
- Who doesn't? Right across the board it seems like every mainstream (rock) band is using synthesizers, not to mention electronic music, which is entirely dependent on them.
But from a creative perspective, the people I would say really made/make the most of their synthesizers are (in no particular order):
George Duke, Frank Zappa, Squarepusher, Venetian Snares, Brian Eno, Tangerine Dream, Ryuichi Sakamoto, Aphex Twin, Herbie Hancock, Vangelis, Kraftwerk...


What types of synths are available to me these days?
- In a nutshell ...software synthesizers or hardware synthesizers.
- *See my synthesizer section for a full explanation of what types of synthesizers are out there (FM synthesis etc.).

Noise

At some point in the mix some level of noise will be present; if you're using analogue gear the effects will be more pronounced...
If you are using software and rendering to a compressed format such as mp3, noise will be produced that was not present in the higher-quality original...

Noise is basically unwanted random frequencies arising from distortion, amplification, or background sound already present. But noise can also be used to hide/mask other problems if deliberately introduced into the mix in the right way.
Another interesting thing about noise is that for some reason it makes us 'believe' that what we are hearing is more real - as in, a song completely free of noise does not sound as pleasing as an identical recording with a little noise introduced.

Noise is also deliberately used during the sound rendering stage or file conversion ...the process is known as Dithering.
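A minimal sketch of one common dithering scheme, TPDF dither (the bit depths and signal level below are arbitrary examples): adding a little triangular-distribution noise before reducing bit depth turns correlated rounding distortion into benign, hiss-like noise:

```python
import random

# TPDF dither sketch: triangular-distribution noise (sum of two uniforms)
# added before quantisation decorrelates the rounding error.

def quantise(sample: float, bits: int, dither: bool = True) -> float:
    """Quantise a -1..1 sample to `bits` resolution, optionally dithered."""
    step = 2.0 / (2 ** bits)          # size of one quantisation step
    noise = 0.0
    if dither:
        # TPDF noise spanning roughly +/- one step.
        noise = (random.random() - random.random()) * step
    level = round((sample + noise) / step)
    return max(-1.0, min(1.0, level * step))

random.seed(1)  # deterministic for the demo
# Without dither a low-level signal rounds to the same value every time
# (correlated distortion); with dither the rounding varies (benign noise):
print([quantise(0.0003, 8, dither=False) for _ in range(3)])  # [0.0, 0.0, 0.0]
print([quantise(0.0003, 8, dither=True) for _ in range(3)])
```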

Frequency


Frequency-range problems constantly crop up throughout every stage of sound creation, so I always try to visualise what is going on behind the speakers...
The main area of concern is the frequency range of the speakers or headphones used, as a poor or severely restricted frequency response will not give a true idea of what is being recorded. Many producers use a variety of speakers or devices to test their recordings, to ensure they have a solid idea of what a track will sound like no matter what device is used for playback.



So in what areas do frequency-related problems arise, and what kind of problems are they?

  • Initial recordings of sound elements. (example kick drum)

- What kind of microphones or devices were used & what was their frequency response? How does this colour the sound we hear?

  • Initial mixing stage (example kick and bass guitar)

- What are the fundamental frequencies of the sounds being used? Are some sounds masking others because they share similar frequency range?

- What is the frequency range of your playback speakers/headphones? How much of the sound are you actually hearing? How will this differ on another playback device/system?

  • Production/rendering stage (example DAW song rendered)

- How will things like the Haas effect, dither, and harmonic-related effects affect the final rendered audio, and how will this then sound on playback (differing from the realtime playback)?

- How will playback differ from device to device (with regard to component response and range)?

  • Sound Synthesis (example emulating a tone/sound)

- When recreating a specific sound it can be useful to consider things like the frequency range of the particular sound, as well as the type of filter used by the synth.

- Waveforms used?

The basics:

The range of human hearing is around 20Hz to 20,000Hz; however, frequencies outside that range are still very important. What we record is not entirely what was initially created.

Frequencies below the 20Hz range can be felt in a club situation; by contrast, if we imagine the same song played back on a £20 CD player (with speakers), the difference is clear. ...However, if the recording had been mixed so that the bass was not as heavy as in the club scenario, and that mix was then played back on the £20 CD player, the difference would be dramatic. - So it's good to pay attention to the lower frequencies in the mix and have a good idea of how a little tweak to the low-end EQ can have a drastic effect on the track as a whole.

Where EQ is concerned, it is sometimes a good idea to look up the centre/fundamental frequency for each of the pots (rotary/slide controls etc.); this way, when you turn your high EQ knob, for example, you know which frequencies will begin to be affected.

The frequency range of system noise will vary depending on what you are using, but if you can identify the specific area where the noise occurs, you can turn those frequencies down specifically to help clean up samples, mixes or whatever is being recorded.
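As a sketch of turning a specific noise frequency down, here is a biquad notch filter (coefficients per the widely circulated Audio EQ Cookbook formulas) aimed at a hypothetical 50 Hz mains hum; the sample rate, hum frequency and Q are example values:

```python
import math

# Biquad notch filter sketch (Audio EQ Cookbook coefficient formulas),
# applied to a synthetic 50 Hz hum. All parameters are example values.

def notch_coeffs(f0: float, fs: float, q: float = 10.0):
    """Normalised biquad notch coefficients for centre f0 Hz at rate fs Hz."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def filter_signal(x, b, a):
    """Direct-form-I biquad:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    y = []
    for n, xn in enumerate(x):
        x1 = x[n - 1] if n >= 1 else 0.0
        x2 = x[n - 2] if n >= 2 else 0.0
        y1 = y[n - 1] if n >= 1 else 0.0
        y2 = y[n - 2] if n >= 2 else 0.0
        y.append(b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2)
    return y

fs = 8000.0
b, a = notch_coeffs(50.0, fs)
hum = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(8000)]
out = filter_signal(hum, b, a)
# After the filter settles, the 50 Hz hum is heavily attenuated:
print(round(max(abs(s) for s in hum[-1000:]), 3),
      round(max(abs(s) for s in out[-1000:]), 3))
```

In practice you would sweep a narrow EQ cut to find the offending frequency by ear; the point of the sketch is just that a narrow, deep cut at the right spot removes the noise while leaving the rest of the band almost untouched.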