Music Production Process
How does a musical idea make its way from a composer's imagination to
a finished piece of recorded music that can play through your music
playback system? Throughout this course we'll be thinking about this
question as we take a look at the various stages in the electronic music
production process and the tools that are used.
A musical idea can take many forms, from a simple drum and bass pattern
to a complete song with melody, lyrics, and chord changes. How the final
product sounds has a great deal to do with musical arrangement and the
tools used to produce it.
In this lesson, we'll take a look at the steps involved in producing a piece
of music. The steps are:
Musical ideas
Recording
Editing
Mixing
Mastering
We'll look at what happens at each stage of the music production process below. In the mixing stage, for example, the individual tracks that make up a multitrack recording are combined and processed using effects to create a final stereo recording of our song.
Creating Musical Ideas
Musical ideas take many forms but usually start out as a groove, beat, or
simple melody and chord progression. From there, a producer makes
decisions about how those ideas are to be arranged and developed: what sounds will be used, and what will the musical form be? Along the
way, these arranging choices will have a profound effect on how a piece
of music is produced.
Let's take a look at different kinds of arrangements and the tools we'll use
to produce them.
Types of Arrangements: Vocal or Instrumental?
The first arranging choices to be made are generally whether the melody
of the piece will be sung or played by an instrument, and if played by an
instrument, which one. In any case, we'll usually have instruments
providing accompaniment to the melody.
Vocal Arrangements
Voice with single accompanying instrument: This is the typical starting point for singer/songwriters. The accompanying instrument is often guitar or keyboard, or synthesized electronic counterparts of these.
Voice with rhythm section: The accompanying instruments are usually drums, bass, keyboard, and/or guitar, or synthesized electronic counterparts of these.
Voice with rhythm section and an instrumental arrangement: Additional horn, string, or other parts are often added to a basic vocal and rhythm section arrangement to provide a fuller, richer sound. Electronic sounds are often used, such as pads or atmospheric sounds.
Instrumental Arrangements
Solo instrumental: An instrumental sound such as a piano, guitar, or synthesizer plays the melody and accompaniment parts at the same time.
Instrumental melody with rhythm section accompaniment: This is the standard model for jazz, rock, or fusion ensembles, and a possible model for an electronic ensemble.
Instrumental melody with instrumental arrangement: This is typical of any type of traditional orchestral or chamber music, but can also be the formula for a totally electronic arrangement.
Types of Vocal and Instrumental Arrangements
Acoustic, Electric, or Electronic Instruments and Production Tools
Once you decide on an arrangement, you choose the instruments. The
types of instruments you choose will greatly influence the kinds of tools
you'll use to produce that piece.
Acoustic Sources
Performances by vocalists or any type of purely acoustic instrument, such
as a piano or sax, need to be recorded as audio. You'll typically need to
use a microphone and an audio interface to record these types of
performances using a software application such as Reason.
Electric Instruments
Instruments such as electric guitars, basses, and some electric pianos or
organs are electro-acoustic instruments that need some sort of
amplification to be heard. Although these instruments can be recorded
using microphones much like acoustic sources, the fact that they produce
an electrical output allows us to record directly to audio recording
software. Amplifier modeling technology allows the electronic music
producer to record convincing guitar sounds without the need to use a
microphone and amplifier.
Purely Electronic Instruments
Synthesizers, samplers, drum machines, and grooveboxes are all
examples of purely electronic instruments. Unlike an electric guitar that
amplifies a vibrating string, these instruments produce sound solely
through electronic means. More importantly, any recent electronic
instrument can be controlled using a communications language called
MIDI, the Musical Instrument Digital Interface.
Example of Creating a Musical Idea
"Change Up" was written and produced by David Mash and featured on
his 2010 double CD project Decades by Mashine Music, using acoustic,
electric, and electronic instruments. David played all the instruments
except the tenor saxophone, which was played by Greg Badolato. Give a
listen to this short musical example and see if you can identify the
instruments used to produce it.
"Change Up" (Excerpt)
Here's a worksheet for the production you just heard; notice that, although
the arrangement sounds full, we only used seven tracks to produce it, five
MIDI and two audio tracks. Use this worksheet as a guide to complete this
week's assignments.
Type of Arrangement: Instrumental
Type of Recording: Multitrack MIDI and Audio
Instrument / Type / Source
Drums/Percussion: Electronic. Sample Player: Sonoma Wireworks DrumCore
Bass: Electronic. Sample Player/Synthesizer: Spectrasonics Trilian
Guitar: Electric. Electric guitar recorded using Native Instruments Guitar Rig
Melody Flute: Electronic. Sample Player/Synthesizer: Native Instruments Kontakt
Melody Saxophone: Acoustic. Live performance recorded direct to hard disk
Keyboards: Electronic. Sample Player/Synthesizer: Native Instruments Kontakt
Vibes: Electronic. Sample Player/Synthesizer: Native Instruments Kontakt
As we've seen from the previous audio example, you, the electronic music
producer, have a wide range of musical choices in producing any piece of
music. The instrumentation you choose will often depend on the resources
you have available. Fortunately, the current crop of hardware and software tools offers a wide range of options for even a modest home setup.
In this course, we'll focus mainly on electronic music production, and we'll
utilize acoustic, electric, and electronic sounds as source material for our
work.
Discussion 1.1: Listening and Analysis
(Due Sep 29)
Listen to the song and identify the instruments used on the recording. Post
your answers to the following questions:
Berklee Students. "Time," from Berklee Compilation, 2001
What instruments in the recording were acoustic? Electric?
Electronic?
How could each instrument have been recorded?
The example we're using was recorded by Berklee students in the
recording studios here at the college. The song "Time" was written and
produced by Lorenzo Peris-Rodriquez, with lyrics by Misha Rajaratnam. It
appeared on the Berklee College of Music Technology Division's 2001
compilation CD.
Use this activity as an opportunity to start thinking about the music you'd
like to produce. What kinds of instruments would you use and what tools
would you need to produce it?
Participate in Discussion!
Recording
Once you have decided on the type of musical arrangement you are going
to produce, you can begin recording. Keep in mind that the final
distributed version of your song or composition will be some sort of stereo
audio file that can be played from a variety of music file players on an
even wider variety of sound systems. Although there are many recording
formats to choose from, you'll want to end up with a version that can be
played by the widest possible audience.
There are two distinct models for recording a musical performance:
Direct-to-stereo recording
Multitrack recording
Direct-to-Stereo Recording
Ensemble recording occurs when an entire musical arrangement is
performed and recorded. Most recordings of orchestras and jazz
ensembles are done in this fashion. The spontaneous interaction between
musicians is an essential part of these performances, and it's the goal of a
good recording to capture this.
Another use for direct-to-stereo recording is live capturing of sounds for
sampling purposes, not just of instrumental sounds, but also any sounds
that can be produced in the wild. These can be very useful for musical
effects, or for crafting very cool beats for loops to manipulate in our
productions or even live performances.
One way to record a live performance is to record directly in stereo using
a pair of microphones and a recorder. Currently, there are a wide variety
of affordable portable recorders that come equipped with a built-in stereo
pair of microphones. Here are some examples:
Olympus Stereo Recorder
The Zoom H6 Handy Recorder
Alternatively, most smartphones and tablets have audio recording
software available, and there are a wide variety of microphones available
for use with these devices. Suppose you've written a song and you want
to record a demo or even a band rehearsal; the easiest way to do this is to
simply set up one of these recorders and perform the song. This recording becomes the final version; you cannot later change the balance or character of the individual instruments used in the recording.
Multitrack Recording
Most professional recording is done using multitrack recording, where
individual musical performances are recorded on separate tracks. This
gives us the flexibility to edit and process the tracks, and to mix them into
a final stereo version well after the original performance.
Multitrack recording also allows us to add performances to an original
recording. In this way, we're capturing a performance that interacts with
previously recorded performances. The process of adding parts to an
existing recording is called overdubbing.
Today, many musicians integrate direct-to-stereo recording
into multitrack recording environments by recording some tracks live,
bringing them into a multitrack recording program, then adding
subsequent tracks to further develop the work.
This way of working has changed the way music is produced. In electronic
production, the self-produced artist often wears more than one hat, in
many cases taking on the roles of composer, arranger, and performer.
When we use multitrack recording techniques, composition and arranging
can be part of an interactive process where one musical idea suggests
another.
Just as composers have traditionally used the piano to work out fully
formed compositions from fragments of musical ideas, electronic music
producers can start with a skeletal idea for a song—perhaps just a drum
pattern and bass line—and develop it into a complete musical
arrangement.
By breaking down the recording process into individual performances or
takes, musicians no longer have to be in the same place at the same time
to contribute to a performance.
Much like this online class, where your classmates participate from
various locations at various times, musicians now commonly collaborate
by adding their unique, individual performances to previously recorded
tracks.
In addition, a performer can now record more than one part. Electronic
music producers often take on the role of drummer, bass player, keyboard
player, and soloist.
In this method of recording, it's essential that a musician is able to hear
what's been previously recorded by monitoring those tracks with speakers
or headphones (the latter is preferable if acoustic parts are being
recorded).
In electronic music production, we rely on multitrack recording as our
basic model for recording and assembling musical performances.
Zoom F8 Field Recorder
Digital Audio Workstations
Computers allow us to record audio directly to digital storage devices such
as hard disks, solid state drives, and flash memory. Recording software
running on a computing device is often referred to as a Digital Audio
Workstation or DAW for short. DAWs provide a way to view and edit the
audio events that make up a performance, making it easy to record and edit using musical time: bars and beats. When
we use electronic instrumentation, we have the ability to record virtually
every aspect of a musical performance as MIDI data. DAWs, like Reason,
enable us to easily integrate audio and MIDI recording into a single
workflow.
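To make the bars-and-beats idea concrete, here is a minimal Python sketch (an illustration only, not part of Reason or any DAW's actual code) of the grid arithmetic a DAW performs when converting musical time to sample positions:

```python
# A sketch of DAW grid arithmetic: converting beats at a given tempo to
# sample positions so events can be displayed and snapped musically.
def beats_to_samples(beats: float, bpm: float, sample_rate: int = 48_000) -> int:
    return round(beats * 60 / bpm * sample_rate)

# At 120 BPM, one 4/4 bar (4 beats) lasts 2 seconds = 96,000 samples.
print(beats_to_samples(4, 120))  # 96000
```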
The Musical Instrument Digital Interface is a specification that defines how
electronic instruments exchange information and how they are connected.
When we record using MIDI, we're not recording the sound produced by a
device, but the actual physical gestures used to produce sound, such as
when we strike/release a key and how fast we strike the key.
These MIDI messages are recorded into the DAW, creating a MIDI
sequence. They can be easily displayed and edited on a grid that
displays bars and beats. We'll take a closer look at MIDI sequencing later
in this course.
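As a preview, here is a hedged sketch using the third-party Python library mido (our choice for illustration; Reason itself records MIDI for you) showing what a recorded gesture boils down to: a note-on message with a velocity, followed later by a note-off.

```python
# A minimal sketch using the Python library "mido" (assumed installed:
# pip install mido python-rtmidi). MIDI captures gestures, not sound:
# striking and releasing a key, and how hard the key was struck.
import time
import mido

out = mido.open_output()  # opens the system's default MIDI output port

# "Strike" middle C (note 60) at a fairly hard velocity (range 0-127).
out.send(mido.Message('note_on', note=60, velocity=96))
time.sleep(0.5)  # hold the key for half a second

# "Release" the key; the receiving synthesizer ends the note.
out.send(mido.Message('note_off', note=60, velocity=64))
out.close()
```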
The following graphic shows audio and MIDI tracks in Reason. The
numbers along the top indicate measure numbers and the grid marks
between those numbers display beats. We can identify the rhythmic
nature of the audio track at the top of the display because of its sharp
transient peaks. Notice that they line up with the bars and beats indicated
along the top.
Audio and MIDI Tracks in Reason
Listen to the following example—see if you can follow along with the
waveform display.
Sample Loops
Once we've established bars and beats as the grid that we'll use in
organizing a production, it's very easy to set up patterns that repeat in
different sections of a song. Since hard disk recording is random access,
we can play audio from any point in a recording at any time.
All drum machines use patterns as building blocks for musical
arrangements. Most MIDI sequencers and DAWs support this way of
working as well. We can use the same approach with random access
digital audio by repeating or looping a given number of bars over the
course of an arrangement.
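Because the audio is just data, looping is a simple copy operation. Here is a hedged Python sketch using numpy and the soundfile library (the file names are hypothetical, and we assume the file is trimmed to exactly two bars):

```python
# A sketch of looping: repeating a two-bar drum pattern to fill a longer
# section. Works for mono or stereo files as read by soundfile.
import numpy as np
import soundfile as sf

audio, sr = sf.read('drum_2bar.wav')          # hypothetical 2-bar recording
looped = np.concatenate([audio] * 8, axis=0)  # 8 repeats = 16 bars of drums
sf.write('drum_16bar.wav', looped, sr)
```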
The example below is a waveform view of two measures of a drum
pattern. When looped, this can become a continuous musical
performance.
Waveform View of Two Measures of a Drum Pattern
Drum pattern
This way of working allows us to use recorded material from virtually any
existing source in our production as long as we have legal permission
from the copyright owner. A large number of libraries are available that
provide the desktop producer a wide variety of copyright-cleared sampled
material.
Who's to say that the late John Lee Hooker's driving rhythm guitar as well
as an ensemble of Japanese Taiko drummers can't find a place in the
song that's taking shape on your desktop?
A Looped, 2-Bar Guitar Pattern Playing along with the Looped 4-Bar Drum Pattern
Looped guitar pattern
Editing
The third step in the production process is editing. Once musical parts
have been recorded, we can further refine an arrangement by
editing them. The most powerful capability of DAW-based electronic music
production is the ability to rearrange and edit both audio and MIDI
performances after they've been recorded.
MIDI editing:
- Edit the individual notes in a performance: Yes
- Change the sounds used to play the notes in a performance: Yes
- Build composite tracks using standard copy-and-paste editing techniques: Yes
- Change the form of a song by copying and pasting entire sections of a recording: Yes
Digital audio editing:
- Edit the individual notes in a performance: Some tools, such as Ableton Live and Melodyne, allow you to adjust the pitches of individual notes.
- Change the sounds used to play the notes in a performance: No
- Build composite tracks using standard copy-and-paste editing techniques: Yes
- Change the form of a song by copying and pasting entire sections of a recording: Yes
MIDI Editing vs. Digital Audio Editing
The various tools we'll work with in electronic music production offer us
several ways to view and edit the performances we've recorded. The edit
window in Reason offers a few different methods of viewing recorded
data, along with tools and functions for editing. We'll be demonstrating the
specifics of how you can edit your performances in later lessons.
Mixing
At the end of the recording and editing process, we're left with a number of
finished tracks. Each track will contain an individual musical part of the
total sound.
To produce a final version of our song we'll need to combine these
individual elements in a process called mixing. The audio output of all the
tracks we've recorded and edited is sent to either a hardware or
virtual mixing console, and the resulting stereo mix is then recorded or
saved as an audio file in a process called bounce to disk.
In mixing, after each individual musical part has been recorded to its own
track, we have the ability to control each one separately. It's here that we
craft and polish the sound of our final product.
Here are some things we typically control in mixing:
Volume: Faders control the levels of individual tracks.
Panning: Panning controls the placement of each track in the left-right stereo image.
Equalization: The character of the sound recorded on each track can be adjusted using specialized types of tone controls called equalizers.
Effects: Effects such as compression, chorus, flanging, and delay further alter the recorded sound of a performance. These types of effects often change the basic character of a recorded performance.
Reverb: Individual tracks can be placed in a simulated acoustic space.
Elements You Can Control in Mixing
Most of these functions are familiar to musicians. Certainly, many guitar,
bass, and keyboard players already have pedals and stomp boxes that
give them this type of control in performance.
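At its core, mixing is arithmetic on audio signals. The following Python sketch (a toy illustration with numpy, not how any particular DAW implements its mixer) shows the two most basic controls, a gain fader and a constant-power pan, applied to mono tracks summed to stereo:

```python
# A toy mixer: per-track gain (fader) and constant-power panning,
# summed to a stereo output. Tracks must be equal-length mono arrays.
import numpy as np

def mix(tracks, gains, pans):
    """gains are 0.0-1.0; pans run from -1.0 (left) to +1.0 (right)."""
    out = None
    for sig, g, p in zip(tracks, gains, pans):
        theta = (p + 1) * np.pi / 4                   # map -1..1 onto 0..pi/2
        stereo = np.stack([sig * g * np.cos(theta),   # left channel
                           sig * g * np.sin(theta)],  # right channel
                          axis=1)
        out = stereo if out is None else out + stereo
    return out
```

Constant-power panning (the sine/cosine pair) keeps a track's perceived loudness roughly steady as it moves across the stereo image; it's one common design choice among several.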
Mastering
How will your audience listen to your work? The last step in the production
process is mastering: preparing the final mix of a song for distribution. In
the mastering process we typically remove any unwanted silence that
exists before the track begins or after it ends to make the file the precise
length of the musical track. Final touch equalization is often performed to
make individual tracks in an album collection sound like they belong to the
same overall recording. Also, the overall volume between tracks may be
adjusted, again so that all tracks in a collection sound at similar levels.
The most common full bandwidth consumer format for a final project is
the audio CD. Many types of audio CD players on the market allow the
consumer to listen at home, in the car, or with a portable player—almost
anywhere. Today most music is consumed via the Internet, and therefore
exists in a compressed file format so the files can download or stream
more quickly. One popular format is an MP3 file. This is a standard type of
computer file that reduces the size of an audio file so that it's practical to
download it from the Internet. We'll be using MP3 files in this course.
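Two of the mastering chores just described, trimming silence and adjusting levels, reduce to simple signal operations. Here is a hedged Python sketch with numpy and soundfile (the file names and the silence threshold are assumptions, and we assume a stereo file):

```python
# A sketch of basic mastering cleanup: trim leading/trailing silence,
# then normalize the peak level. Assumes a stereo file.
import numpy as np
import soundfile as sf

audio, sr = sf.read('final_mix.wav')       # hypothetical mixed-down file
level = np.max(np.abs(audio), axis=1)      # peak level of each sample frame
loud = np.where(level > 0.001)[0]          # frames above roughly -60 dBFS
trimmed = audio[loud[0]:loud[-1] + 1]      # keep only the musical material

gain = 0.9 / np.max(np.abs(trimmed))       # bring the peak near full scale
sf.write('final_master.wav', trimmed * gain, sr)
```

Real mastering also involves equalization and careful loudness judgment by ear; this only shows the mechanical part.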
This covers the steps in the music production process. In the next section,
we'll take a look at the kinds of tools we'll use in turning our musical ideas
into finished masters.
Music Production Tools
Now that we have seen the steps of the production process, we will
examine the tools used in the electronic music production process.
Let’s begin with the most important components of a personal home studio: those that transduce signals between the physical and virtual
worlds. In the virtual studio, audio and MIDI signals are processed as
binary data inside the computer, but we have to first get signals into the
computer. We'll also need to be able to listen to the sound throughout the
process, so we need to get the data back out into the physical world.
The primary connection between the computer and the physical world is
the audio interface. This is a device that usually contains two primary
components—an analog-to-digital converter, where electrical signals such
as those coming from a microphone or electric guitar are converted into
streams of numbers that represent the original analog electrical signal;
and a digital-to-analog converter, where the digital number stream is
converted back to an electrical signal that may then be sent to an
amplified monitor system.
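Conceptually, analog-to-digital conversion means measuring the incoming voltage at regular intervals and rounding each measurement to a number. This Python sketch (a pure illustration; real converters are analog hardware) mimics the idea:

```python
# Conceptual sketch of A/D conversion: sample an idealized "voltage"
# at discrete times, then quantize each sample to a 16-bit integer.
import numpy as np

sample_rate = 48_000                         # samples per second
t = np.arange(0, 0.01, 1 / sample_rate)      # 10 ms of sample times
analog = np.sin(2 * np.pi * 440 * t)         # an idealized 440 Hz signal
digital = np.round(analog * 32767).astype(np.int16)  # the number stream
```

Digital-to-analog conversion runs the same idea in reverse, turning the number stream back into a smoothly varying voltage.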
The number of inputs and outputs of an audio interface may vary, as can
the types of connectors used, and we will cover these in more detail later
in this lesson. Audio interfaces may also provide microphone preamps to
boost the incoming signals from the mic to a level appropriate for the
converters. Some audio interfaces have built-in digital signal processing
(DSP) chips to offload some complex audio processing algorithms from
the computer’s internal processors. This can give you added horsepower
for large complicated mixes, as well as provide lower latency—the time
between when the signal enters the interface, travels through the
computer hardware, software, and then returns back to the outputs. More
on this later as well.
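Latency relates directly to the audio buffer size: a full buffer must be collected before it can be processed. A quick back-of-the-envelope calculation (a sketch, not a measurement of any particular interface):

```python
# Lower bound on buffering latency in one direction: the time it takes
# to fill one audio buffer at a given sample rate.
def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    return 1000.0 * buffer_samples / sample_rate_hz

print(buffer_latency_ms(256, 48_000))      # ~5.3 ms one way
print(2 * buffer_latency_ms(256, 48_000))  # ~10.7 ms in plus out
```

Actual round-trip latency is somewhat higher once converter and driver overhead are added, which is why DSP that runs on the interface itself can respond faster than processing routed through the computer.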
These audio interfaces usually connect to the computer via USB,
FireWire, or Thunderbolt, the most common high-speed digital
connections available on computers today. Some interfaces get their
power from these connections as well as use them as transport between
the interface and the computer, but others require dedicated power
sources to operate.
Since these devices are the critical path between the virtual and physical
worlds, you should pick the best quality audio interface possible within
your budget. If the sound isn’t good quality when it enters the computer,
no amount of DSP will make it better. And, since what comes out of the
speakers is only as good as what goes into them, the audio interface is a
key component of the electronic music production studio. Let's take a look at some current audio interfaces along with information about connectivity.
USB Audio Interfaces
USB is a very common way to connect a variety of peripherals to a
computing device. USB 2 provides 480 megabits per second transfer rates between devices, fast enough for multichannel audio to travel between the computer and interface with very little delay (latency).
USB 3 provides up to 5 gigabits per second. It is generally used now for
data transfer between storage devices and the computer, but it is also
excellent for real-time audio data between an audio interface and
computer.
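A rough bandwidth estimate shows why USB 2 is adequate for most multichannel work. This sketch ignores protocol overhead, so treat the numbers as approximations:

```python
# Approximate bandwidth needed for uncompressed multichannel audio.
def audio_mbps(channels: int, sample_rate_hz: int, bit_depth: int) -> float:
    return channels * sample_rate_hz * bit_depth / 1_000_000

print(audio_mbps(26, 96_000, 24))  # ~59.9 Mbit/s, well under USB 2's 480
```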
Let's look at some common USB audio interfaces.
This is a stereo audio interface using USB 2 connectivity.
Roland Super UA USB 2 Audio Interface
This is a 26-input, 32-output audio interface capable of high sample rates
using USB 3 connectivity.
PreSonus Studio 192 USB 3 Audio Interface
FireWire Audio Interfaces
FireWire, also known as IEEE 1394, is a high-speed bus for connecting
peripherals to computing devices. Originally designed as a standard for
connecting digital video cameras to computers and storage devices,
FireWire is very capable for connecting multichannel audio interfaces.
There are several standards for FireWire; the two most common are
FireWire 400 and 800, reflecting their connectivity speeds in megabits per
second. In recent years, hardware manufacturers such as Apple have
stopped including FireWire ports in favor of newer bus technologies, but
there are FireWire adapters to maintain these connections.
MOTU Traveler MKIII FireWire Audio Interface
The MOTU Traveler provides 28 separate inputs and 30 separate outputs
over FireWire.
Thunderbolt Audio Interfaces
Thunderbolt is an interface designed by Intel that supersedes many
former bus architectures and provides very high-speed connections of up
to 20 gigabits per second. It is very well suited for multichannel digital
audio transfers.
The Apollo series of interfaces from Universal Audio provides between 2 and 16 channels of audio, depending on the model, as well as built-in
digital signal processing (DSP) chips for real-time audio processing with
very low latency.
Apollo Twin Thunderbolt Audio Interface
Speakers
The next most critical devices in your production studio are the speakers
you use to monitor the sound throughout the process. You will spend a lot
of time listening to these, and your final product will be shaped by the
sound of your speakers, so choose monitors carefully—always listen
before you buy! Speakers vary greatly in size and construction, and
should be matched to the size of the physical space you have for your
studio. Huge monitors will overpower a small studio, but small speakers
may not fill a larger space. The key is to get the right size for your studio,
and then to place them properly. Today, it is very common to use monitors
that contain both the amplifier and speaker systems, rather than having
separate components.
Here are some commonly used monitors for personal project studios.
Behringer Bi-Amped Studio Monitors with USB Digital Audio Inputs
KRK Rokit 5 Studio Monitors
Microphones
If you will be recording live instruments or voice in your musical
productions, you will also need microphones (and mic preamps if they are not included in your chosen audio interface). Microphones vary greatly in type
and cost, and should be chosen for the kinds of recording situations you
have. Different acoustic instruments require different types of
microphones, as well as different mic placement. There is a huge art to
the choice of microphone and placement, and this is beyond the scope of
this course.
Here are a few links to good materials on the web:
Microphone Choice and Placement Secrets for Recording
SOS Guide to Choosing & Using Studio Microphones
Microphone Buying Guide
Other physical tools that are commonly used in an electronic production
studio include MIDI performance controllers and control surfaces. Let’s
take a deeper look at these.
MIDI Performance Controllers
To record MIDI data in your DAW, you need to have an instrument on
which to perform the musical parts. Typically this will be a keyboard
instrument, but today there are good controller interfaces for guitarists,
drummers, and woodwind players. These days, they generally connect to the computer via USB, even though the data they send is MIDI. Older controllers have 5-pin DIN connectors for MIDI, and we will delve into this in greater depth later in this lesson. Often, keyboard controllers also contain other controller-type devices, such as velocity-sensitive drum pads and faders to use during the mixing process.
Here are some common MIDI performance devices:
Advance49 Keyboard Controller
Fishman TriplePlay Wireless MIDI Controller for Guitars
Roland V-Drums: Controllers for Drummers
AKAI Drum Pad Controller
Yamaha WX5 Woodwind Controller
Control Surfaces
Once you have your music stored in the DAW and you start the mixing
process, you may want to be able to control the levels of multiple tracks
independently at once, which is difficult with a computer mouse. Today,
many producers use control surfaces for this, either via software on
touchscreen devices such as tablet computers, or via actual physical
devices with multiple faders and knobs that send signals, usually over
MIDI, to control on-screen controls of a mixer.
Avid Artist Mix Controller
Livid Instruments Base II Control Surface
Connections
Now that we have seen a few examples of devices we commonly use in
electronic music production systems, let's delve a little more into the
common system configurations and connections needed to make our
studio work well for us.
This diagram shows a very simple system configuration with a MIDI
keyboard connected to a computer, which is also connected to an audio
interface, in turn connected to stereo powered monitors.
A Simple Studio Configuration
A More Complex Studio Configuration
In this second diagram, we use a MIDI keyboard and control surface, both
connected to the computer via a USB hub, and a Thunderbolt audio
interface connected to the speakers. There are obviously a much wider
variety of options for configuring a studio, but for now let's dig deeper into
the cables and adapters we’ll need to connect our devices.
Hardware or Software Mixer?
Many project studios have a variety of analog and digital hardware
instruments, whose audio outputs need to be mixed to the stereo
monitors. Some producers like to include a hardware mixer in front of the
audio interface, but with modern audio interfaces this is not always
needed. DJs with turntables or grooveboxes, however, like to use
hardware controls. For this, many use a small mixer
from Mackie or Behringer before their audio interface. The following
diagram shows that configuration.
A Studio Configuration with Mixer
Many modern producers have moved to software-based mixing; if physical
controls are desired, a control surface with knobs and sliders such as the
Artist Series Mix may be used. Most audio interfaces today provide
complex integrated mixers with low-latency monitoring. The Universal
Audio Apollo interfaces come with such a mixer, and their software
includes many signal processors that run in the mixer environment,
including mic preamps, as shown below:
The Universal Audio Apollo Console
Note that this mixer shows my personal setup, with a variety of inputs
shown on the “scribble strips” for each channel:
1. Mags—the magnetic pickups output from my guitars
2. Piezo—the piezo pickup output from my guitars
3. Theremini stereo output
4. Moog Voyager stereo outputs
5. Modular—stereo outputs from my System 1m and modular gear
6. S/PDIF—digital inputs
7. Virtual stereo return from Guitar Rig
8. Virtual stereo returns from my TriplePlay software
All of this audio can be routed and controlled even without the computer,
with the audio interface hardware acting as the physical mixer and the
software console as the user interface. As mentioned before, we can
connect a physical control surface to the system so it all acts like an actual
physical mixer (at a fraction of the cost).
Making Connections
In order to understand the various ways we might configure a studio, we
must understand how audio moves between devices. Audio signals may
exist in the analog domain as electrical voltages carried along copper
wires. Audio may also be represented in the digital domain as numbers
and transferred either as electrical signals (on or off pulses) on copper
wire or as light pulses through fiber-optic cables.
In addition, analog and digital audio signals may exist in various formats,
each using a unique type of cable and/or connector.
Connectors
One of the complicating factors in connecting audio between devices is
that there is no single standard for these connections. Instead, connectors
come in different sizes and shapes, and can be either a plug or a jack. We
should become very consistent in our use of terms when describing audio
connectors, using the following standardized descriptors:
Plug
A plug is a connector that is inserted into a receptacle-styled
connector. Often referred to as a "male" gender connector. These
come in a variety of shapes and sizes.
Plug
Jack
A jack is a receptacle-type connector that accepts a plug. Often called a
"female" gender connector. Jacks, too, come in a variety of shapes and
sizes.
Jack
Patch Cord
A patch cord is a cable with a plug or jack on either end, used to connect various devices. The cable itself may take many forms, with two or more wires, either bundled around one another (coaxially) or next to one another in a sheath. Cables that will exceed a length of 20 feet should be "balanced," with special shielding involving a third wire, and will usually have special plugs or jacks with extra pins for the extra wiring.
Patch cord
Adapter
Adapters are devices that change the size, shape, or gender of one
plug or jack to another. These are often required when the
specific plugs or jacks needed to connect two devices are not
available on a given patch cord.
Adapter
Specific Connectors Used with Analog Audio
1/4" Phone Plugs and Jacks
Monophonic signals are often used with two-conductor cables and are connected with the signal going to the tip, and the ground to the sleeve. Guitars often use these connectors.
1/4" phone plug
1/4" Stereo Phone Plugs and Jacks
Stereophonic signals are often used with three-conductor cables and are connected with the left signal going to the tip, the right signal to the ring, and the ground to the sleeve. These Tip-Ring-Sleeve (TRS) connectors are sometimes used to support balanced monophonic cabling, with the third conductor used for the shielding wire.
1/4" stereo phone plug
1/8" Miniplugs and Jacks
Monophonic signals are often used with two-conductor cables and are connected with the signal going to the tip, and the ground to the sleeve.
1/8" miniplug
1/8" Stereo Miniplugs
Stereophonic signals are often used with three-conductor cables and are connected with the left signal going to the tip, the right signal to the ring, and the ground to the sleeve.
1/8" stereo miniplug
Phono Plug (also known as RCA)
These carry monophonic signals, but stereo cables are available with two plugs or jacks on either end.
Phono plug
XLR or Canon Plugs and Jacks
These are for monophonic signals, and are usually used for microphone-level connections. These connectors are often used for making balanced connections, as the third pin provides the capability for the shield wire.
XLR or Canon plugs
Digital Audio Signals and Connections
Digital Signal Formats
Digital audio may be represented as stereo audio or multichannel
audio, and transferred between devices as either electrical or optical
signals.
Stereo audio is available in two standards:
1. S/PDIF (Sony/Philips Digital Interface Format): This format is very common between consumer and semiprofessional devices.
2. AES/EBU (Audio Engineering Society/European Broadcast Union) format: This format is more common in professional recording studio equipment.
Multichannel audio may use one of several standards:
1. ADAT (Alesis Digital Audio Tape)
2. TDIF (Tascam Digital Interface)
3. AES/EBU, in multiple stereo pairs
Digital audio may also be directly transferred between devices using USB,
and through audio networking protocols such as Ethernet and AVB (Audio
Video Bridging).
Connectors Used for Digital Audio
When connecting two digital devices, the clocks that keep the digital
words in precise timing and order must be synchronized. The S/PDIF
standard provides for both stereo signals (left and right) as well as the
synchronization signal to be carried on a single unidirectional cable.
Therefore, it generally takes two cables (one each for input and output) to
complete a digital audio system.
When using electrical means to connect digital audio devices with S/PDIF, the signals are sent
via coaxial cables (two connectors, one wrapped around the other within the same sheath)
generally using RCA phono connectors.
RCA plug
Toshiba developed a method for using optical signals to connect digital audio devices using the
S/PDIF standard. This standard is called TOSlink after the developer, and it uses fiber-optic
cables.
Fiberoptic cables
The AES/EBU standard calls for stereo signals to be carried on a single unidirectional cable
with XLR jacks and plugs. This requires two cables, one each for input and output. The
synchronization signal may be included or carried on a third, separate coaxial cable, often
labeled "Word Clock," using BNC connectors (shown here).
BNC connector
Multichannel digital audio is managed either by using multiple pairs of
AES/EBU connections or by using the ADAT or TDIF formats.
1. ADAT was developed by Alesis (Alesis Digital Audio Tape) and
combines eight channels of audio and the synchronization signal on
a single unidirectional optical cable.
2. TDIF was developed by Tascam for transferring audio to and from
their MDM (Modular Digital Multitrack); it stands for Tascam Digital
Interface. TDIF uses a 25-pin D-sub connector over shielded cabling,
and it includes both digital audio and synchronization information.
USB Connectors
The USB “A” connector is widely used to connect peripherals to a computer.
USB "A" connector
The USB “B” connector is widely used to connect a computer to a peripheral.
USB "B" connector
The USB mini connector is widely used to connect a computer to a portable peripheral.
USB mini
connector
The USB micro connector is widely used to connect a computer to a portable peripheral.
USB micro
connector
The USB 3 “A” connector is widely used to connect drives or other high-speed peripherals, and newer computers commonly include this connector.
USB 3 "A"
connector
The USB 3 “B” connector is widely used to connect to drives or other peripherals.
USB 3 "B"
connector
There are two main connectors used for FireWire:
FireWire 400 connector
FireWire 800 connector
There are other IEEE 1394 connectors in use, but these are the two principal connectors.
Thunderbolt
One of the best features of Thunderbolt is that there is only one type of
connector in use:
Thunderbolt connector
Ethernet
The most common networking protocols for audio use standard Ethernet
connectors.
Ethernet connector
Let’s see how to set up a simple studio configuration using all that we
have explored so far.
Studio Configuration Example
Here is a typical desktop music production studio. Take a look at how it is
used, what types of devices are used, and how they're connected. This is
the personal studio of course co-author David Mash, of Mashine Studio.
Note the centralized focus on the computer, controllers, and alphanumeric keyboard—all placed within easy reach in front of the musician. Here
is a full list of the hardware and software used in this setup:
Hardware (from left to right):
Stand with 3 Godin instruments (Inuk Steel, Grand Concert Ambience
Duet, and Multiac Nylon Fretless):
1. 3 Godin guitars on wall hangers (Custom Montreal Supreme
TriplePlay, Passion RG-3, and Custom LGX with TriplePlay and
Fluence pickups)
2. Roger Linn Designs LinnStrument
3. Acoustic Solutions amplifier
4. Custom pedal setup: Gordius Little Giant, Keith McMillen 12-step, 3
Korg foot controllers, 2 Boss FS-5U switches
5. Rack with Furman power supply, XLR patch bay (for connecting microphones to the audio interface), Universal Audio Apollo Quad Thunderbolt audio interface, and 2 Line 6 Relay G55 wireless audio receivers
6. Native Instruments Maschine Mikro
7. Native Instruments Komplete Kontrol S61
8. Avid Artist Series MC Mix and MC Control
9. Novation Nocturn control surface
10. Apple 27" Thunderbolt Display
11. Apple MacBook Pro 15" Retina laptop
12. Livid Instruments Elements control surface
13. JBL 4300 Speakers
14. iRig BlueBoard
15. Moog Minimoog Voyager Signature Series synthesizer with
VX351 CV Expander and CP 251 Control Processor
16. Pittsburgh Modular Eurorack with PM Mix/Mult, Time Runner,
and Phase Shifter modules, Make Noise Maths module, and Synth
Rotek analog sequencer
17. Doepfer Dark Time sequencer
18. Roland Aira System 1m
19. Tripp's Hyperkeys™ 3-dimensional keyboard controller
20. Moog Theremini
Software:
DAWs:
21. Reason
22. Logic
23. Ableton Live
Synthesizers:
24. Native Instruments Komplete Ultimate
25. Spectrasonics Omnisphere, Stylus RMX, and Trilian
26. Korg MS20, Polysix, Wavestation, M-1, MonoPoly, Legacy Cell, Arturia Matrix 12, CS-80v, Minimoog V, Modular V, Arp 2600v, Jupiter8 V, Oberheim SEM, Solina, Spark, Vox V, and Wurlitzer V
27. TimeWARP 2600
28. Roland System 1, SH-101, SH-2, and ProMARS
29. U-HE ACE, Diva, Bazille, and Hive
30. DrumCore 4
31. SampleTank
32. Ivory
Ancillary Music Software
33. Bome MIDI Translator Pro
34. Onstage X
35. Onstage Live
36. Fishman TriplePlay
37. Native Instruments Guitar Rig
38. LG2 Control Center
39. Sysex Librarian
40. MIDI Monitor
Practice Exercise: Setting up a Virtual Studio
(Part 1)
Now, it's time to get to work and configure a system for our use in this
course. Since we cannot guarantee that everyone who takes this course
has the same hardware and cables in their desktop system, we can
actually do this using software.
Propellerhead's Reason is a virtual studio that exists inside your personal
computer, and it can be configured much like a physical studio. For this
exercise, we assume you have downloaded and installed Reason. If you
have not yet installed Reason, please do that now, before continuing.
Please follow the instructions found in the documentation for the Reason
program.
Reason provides a rack full of equipment, with outputs that connect
through your personal computer's audio hardware to your listening
system. Let's configure a simple system that will include a synthesizer, a
mixer, and one effects device.
1. Download and open the file Null_Rack.reason
2. Launch Reason by double-clicking its icon.
3. From the File menu, choose Open. Locate the file named
"Null_Rack.reason." You will see the empty equipment rack as
shown below.
The Equipment Rack in Reason
In this file, only the Subtractor synthesizer, Mixer, and RV7000 reverb
devices are fully shown, as the other devices have been minimized in the
rack. We can focus on configuring these modules that we see.
Practice Exercise: Setting up a Virtual Studio
(Part 2)
Press the Tab key on your computer keyboard to rotate the rack so you
can see the back.
The Back of the Rack in Reason
As you can see, the devices have not yet been connected. Our task is to
connect the modules into a virtual system.
1. Click and hold on the Audio Output of the synthesizer and drag away
towards the mixer device. You will see a cable follow your mouse
path on screen.
2. Drag this cable to the mixer's first input. When you are over the input, let go of the mouse button and your connection will be made. When this step is complete, your rack should look like this:
Subtractor Connected to the Mixer
You can watch the process in action below:
Practice Exercise: Setting up a Virtual Studio
(Part 3)
1. Next, we'll connect the mixer to the computer's audio system by
dragging from the mixer's left and right master outputs to the first two
inputs of the Reason hardware interface. The stereo connections are
made with a single drag. When this step is complete, your rack
should look like this.
The Mixer Connected to the Audio Interface
Let's watch how the connections are made.
Finally, we will add the reverb unit to the mixer's effects bus by dragging
from the aux 1 send outs left and right to the RV7000's audio inputs, and
from the RV7000's audio outputs to the mixer's aux 1 returns. The final
system setup should look like this:
The Completed Rack Wiring
Watch this demonstration video to see how to make the final connections:
1. Save your work by choosing Save As… from the File menu.
2. Name the file “yourlastname_MyFirstStudio” and save it to your
computer. For example: Bierylo_MyFirstStudio.rns.
Congratulations! You have completed your first system project.
Practice Exercise: Configuring and Testing
Reason (Part 1)
We just completed an exercise that demonstrated how to make some
simple connections in Reason. Let's do a couple of simple tests to make
sure Reason receives MIDI input from your keyboard controller and sends
sound from your soundcard or audio interface. Before you start this
exercise, make sure that you have read Chapter 1 "Introduction," which
starts on page 29 of the Reason Installation Manual included with the
Reason application.
It is also provided here for quick reference.
Download Reason 11 Operation Manual (PDF)
If you haven't done so already, configure audio output as described in the
section titled "Setting Up the Audio Hardware." Configure MIDI input from
your keyboard controller as described in "Setting Up MIDI."
1. Download the file Reason_test.reason to your computer (CTRL + click the file to save it to your computer). Open the file by
double-clicking its icon. You should see a single Subtractor and the
Sequence and Transport windows as shown below:
The Open Reason Test File
Practice Exercise: Configuring and Testing
Reason (Part 2)
MIDI data from a keyboard controller is routed to a sound module in
Reason by clicking on the icon before the module's name in the
Sequencer window. Select Subtractor 1 and the icon should be
highlighted as shown below.
Selecting Subtractor in the Sequencer
1. Play a few notes on your keyboard controller. The Note On indicator
in Subtractor should light up any time MIDI is received as shown
below. If this does not happen, try the following:
o Make sure your keyboard is properly connected, either using a
USB cable to your computer, or with a cable running from the
MIDI Out jack of your controller to MIDI In on your interface.
o If your interface can be turned on and off, do so.
o Unplug the USB connection between the keyboard or MIDI
interface and the computer, and reconnect. Make sure the
connections are secure.
2. If you hear a bass guitar sound when you play, then audio output is
working in Subtractor, and you are done configuring Reason. If not,
follow the next steps to set up audio output.
3. As a final test, play the short demo sequence. If there is no audio or if
there are problems with the audio, you may need to adjust the audio
settings. A larger buffer size will usually provide better playback
performance.
4. If you're still having problems, and you're sure that your computer
and speakers are properly connected and that you followed the
previous steps, contact Reason Studios' customer support for
technical assistance.
Next, let’s ensure that you have your audio and MIDI settings correctly set
for use with Reason. Watch a demonstration on how to do this:
Now let’s explore how we navigate Reason using the Browser pane.
Watch this screen flow movie to see how it works:
Virtual Studio Devices in Reason
Reason provides us a total production studio environment in software, laid out much the way hardware might be configured in a physical studio.
Virtual studio devices are software tools that exist only in the bits and
bytes within your computer. You can see them, but you cannot touch them
except with a mouse or some other computer input device. While these do
cost real money, their cost is typically only a fraction of their hardware
counterparts.
Theoretically, the power of these tools is limitless. Unfortunately, in
practice, they are limited by your computer's horsepower. The faster your
microprocessor (CPU), the more random access memory (RAM) you
have, and the bigger and faster your hard drive, the more you can
accomplish with virtual tools.
In this course, we will focus on virtual tools because we can provide you
computer access to them. Whenever possible, however, we will relate
these virtual devices to their physical hardware counterparts.
Let’s take a look at the various studio devices in Reason.
Synthesizers
Synthesizers are devices that produce sound entirely by electronic
means. Using a synthesizer, the desktop music producer may control
every parameter of the sound, including pitch, timbre, and loudness. Since
the synthesizer allows for very fine control over musical sound, every part
performed can meet the precise musical needs of the production.
Key technical issues that differentiate synthesizers from one another
include:
1. Polyphony: the number of notes that may be simultaneously
sounded
2. Multitimbral capability: the number of unique instrumental sounds
that may be sounded at once
3. Memory: the number of sounds contained in a device's memory
4. Synthesis engine: the actual software algorithms that create the
sounds, including subtractive synthesis, FM synthesis, sampling,
granular synthesis, and other hybrid forms
Even more important than these technical specifications are the sound
quality, ease of use, and the feel of any synth. These are more difficult to
quantify and require musicians to actually experience the instrument in
order to determine if it is right for their production needs.
Virtual Synthesizers and Samplers
Virtual Synthesizers
Most off-the-shelf computer systems are powerful enough to use their
native processing power to generate synthesized sound. The virtual
instruments we'll be using in Reason produce sound in this manner. Since
the user interface of these instruments will be displayed on your computer
screen, they'll be great tools to help us learn about synthesizers.
Take a look at Subtractor. Every parameter is shown in a single, easy-to-
read display. There are clearly defined sections that will allow us to quickly
edit the elements of sound: pitch, timbre, and loudness.
Subtractor
Synthesizers and Samplers
There is a wide variety of synthesizers and samplers available in the
physical world, most of which are now emulated in the virtual world.
Synthesizers produce sound in several different ways. Some, like
Subtractor, make their sounds by starting with electronic building blocks,
while others begin with short digital recordings of acoustic instruments or
sounds from nature called samples.
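To ground the "electronic building blocks" idea, here is a bare-bones Python sketch of the subtractive signal chain: a harmonically rich oscillator, a low-pass filter that subtracts high frequencies, and a decaying amplitude envelope. This only illustrates the concept; it is not how Subtractor is implemented:

```python
# Subtractive synthesis in miniature: oscillator -> filter -> envelope.
import numpy as np

sr = 48_000
t = np.arange(sr) / sr                 # one second of sample times
saw = 2 * ((220 * t) % 1.0) - 1        # naive 220 Hz sawtooth, rich in harmonics

# One-pole low-pass filter: y[n] = y[n-1] + a * (x[n] - y[n-1])
a, y = 0.05, 0.0
filtered = np.empty_like(saw)
for n, x in enumerate(saw):
    y += a * (x - y)
    filtered[n] = y

note = filtered * np.exp(-3 * t)       # exponential amplitude decay
```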
These devices are often called samplers if they can record sounds
themselves. If they use prerecorded samples and offer synthesizer control
over the sounds, they are referred to as sample-based
synthesizers. Using the NN-19 Sampler in Reason, we'll learn the basics
of using samples to create new sounds.
Take a look at Reason's NN-19 Sampler pictured below. You'll notice that
although it's a sample-based synthesizer, most of the parameters you'll be
editing are the same as what you see in Subtractor. Again, what you learn
using NN-19 can be applied to any sampler in the physical or virtual world.
Reason's NN-19 Sampler
Thor and Malström Synthesizers
Reason also provides several other very powerful synthesizers, including
Thor and Malström.
Thor
Malström
In this course, we will be focusing on using the following virtual
instruments that are part of Reason:
Instrument type / Reason device
Synthesizer Module: Subtractor
Digital Sample Player: NN-19, NN-XT
Drum Machine: ReDrum
Virtual Instruments That Are Part of Reason
Although these virtual synthesizer modules will help us to create and
control sound, we'll still need to stay in the physical world to play them.
Any MIDI keyboard controller will allow us to control the synthesizers in
Reason and to record our performances using the MIDI sequencer.
Digital Signal Processing Tools
As part of the mixing process, audio recordings may be altered in both
subtle and significant ways. Using Digital Signal Processing (DSP) tools,
we may add acoustical effects such as reverb, echo, and chorus. We can
change the tone color of a sound or perform very significant changes in
pitch or audio quality. In the physical world, there are many varieties of
effects processors.
The effects processors we'll use in Reason are modeled after similar
hardware devices from the physical world, with some occasional twists. In
Reason's virtual studio environment, we'll connect them in much the same
way as we would if we were wiring a physical studio. This will make it easy
to apply what we've learned in this course to other similar devices.
Much like synthesizers, though, effects processors are subject to user
interface issues. The best devices not only sound great, but are easy to
use. In some effects, this is not so much of an issue simply because there
are far fewer parameters to be edited than on a synthesizer.
Reason provides a full-featured, professional reverb: the RV7000 MkII.
The front panel view shows basic controls similar to the RV-7, but clicking
on the Remote Programmer drop-down triangle expands the view,
showing additional features. This expanded remote programmer view is a
common function found on more complex Reason devices such as the
NN-XT Advanced Sampler and the Thor Polysonic Synthesizer.
The Expanded RV7000 Reverb Remote Programmer View
Processors
Reason also provides other DSP units such as delay, chorus, guitar amps,
and vocoders. You simply locate the desired effect in the browser, drag it
into your project studio, and it automatically configures into the mixer. You
can always flip the rack around and repatch your studio if so desired.
Plug-ins
Another approach to effects processing in the virtual world is the use of
software plug-ins, which are very common in most DAWs. In general,
plug-ins are software products that add functionality to a host application.
There are many different formats of plug-ins on both Mac and PC, and the
incompatibility of these can sometimes make purchasing plug-ins
confusing.
There are several standards for software DSP plug-ins, including the
cross-platform VST (Virtual Studio Technology) format developed by
Steinberg, AAX (Avid Audio eXtension) developed by Avid, and the Mac OS X standard AU (Audio Units) by Apple.
Starting with version 9.5, Reason supports VST plug-ins, but Reason also
uses a concept called Rack Extensions rather than generalized plug-ins
to avoid a too-common problem in electronic music production software—
incomplete or incorrectly implemented standards that create problems
between software products, often resulting in computer crashes. Reason
avoids this problem by regulating the creation of these extensions and
testing them before allowing their use inside a Reason Rack.
Let’s learn how to download and use Rack Extensions in Reason:
Let's take a look at some common types of effects we'll use in desktop
production, particularly the different models found in Reason.
Delay
Delay in Reason
Compressor
Compressor
Equalizer
Equalizer in Reason
We’ll take a closer look at all these kinds of processors and effects in
lesson 10.
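Of the effects named above, delay is the simplest to express in code. Here is a hedged Python sketch (a conceptual illustration, not the algorithm inside any Reason device) of a feedback delay line:

```python
# A feedback delay: each output sample adds an attenuated copy of the
# output from delay_samples earlier, producing repeating echoes.
import numpy as np

def simple_delay(signal, delay_samples: int, feedback: float = 0.4):
    out = np.asarray(signal, dtype=float).copy()
    for n in range(delay_samples, len(out)):
        out[n] += feedback * out[n - delay_samples]
    return out

# At a 48 kHz sample rate, delay_samples=12_000 yields 250 ms echoes.
```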
Mixing in Reason
In conjunction with our editing tools, we use a mixer and effects devices to
combine different musical parts into a complete production of our music.
A mixer has a number of channels that accept audio inputs; it combines
them to produce a stereo output by routing and processing those signals
in various ways. Each channel is usually laid out in a strip of controls that
is replicated for each physical or virtual input. These controls typically
include placement in the stereo image (pan), equalization controls that
change the tone coloring of the signal, and effect routing controls. With
Reason, the mixer automatically adds channels to meet your needs as
you add devices or tracks to the project.
Reason includes a multichannel mixing console as seen below.
Reason's Multichannel Mixing Console
Music Production in Reason
Reason is a complete environment for electronic music production, and it
contains a full set of tools for recording (both audio and MIDI data),
editing, mixing, and mastering. There are, as we have just seen, devices
for producing sound, like synthesizers and samplers, devices for
processing sound and adding effects, and a full-featured mixer. Reason
contains many tools for working with your production from concept
through to finished product. Let’s take a look at the workspace, how it is
organized, and how we interact with the tools during the various stages of
the production process.
Overview
Required reading: Reason Manual Chapter 2, pp. 42–66.
Chapter 2 in the Reason manual gives a great overview of all the basic
operations and features of the program. The following ScreenFlow will
also give you a basic tour of the workspace provided in Reason:
We’ve explored the music production process as well as the basic tools
and methods used to configure your personal production studio. Here is a
link that can give you more information about setting up a home studio:
Acoustics: Fact and Fiction