There have been changes in Autopilot, and you will need to update your test scripts to match. Below are two errors you may hit, with fixes.
Issue 1: AttributeError: type object 'AutopilotTestCase' has no attribute 'register_known_application'
To fix this, we need to change the test script from something like this:
from autopilot.testcase import AutopilotTestCase
from autopilot.matchers import Eventually
from testtools.matchers import Equals, Contains
# register Firefox as an application so we can call it
AutopilotTestCase.register_known_application("Firefox", "firefox.desktop", "firefox")
self.app = self.start_app_window("Firefox")
to something like this:

from autopilot.testcase import AutopilotTestCase
from autopilot.matchers import Eventually
from testtools.matchers import Equals, Contains
# register Firefox only if it is not already a known application
if "Firefox" not in self.process_manager.KNOWN_APPS:
    self.process_manager.register_known_application("Firefox", "firefox.desktop", "firefox")
# make sure Firefox is up and loaded
Issue 2: AttributeError: 'FirefoxTests' object has no attribute 'start_app_window'
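The likely fix for Issue 2 follows the same pattern: in newer Autopilot releases, start_app_window appears to have moved onto the test case's process_manager as well, so the call should be routed through self.process_manager. The sketch below mocks the process manager (MagicMock and the "firefox-window" return value are stand-ins, not real Autopilot objects) so the pattern can be run without a live Autopilot session:

```python
from unittest.mock import MagicMock

class FirefoxTests:
    # Minimal stand-in for a test class that would derive from AutopilotTestCase.
    def __init__(self):
        # Assumption: in newer Autopilot releases the app-management
        # helpers live on the process_manager attribute, not the test case.
        self.process_manager = MagicMock()
        self.process_manager.start_app_window.return_value = "firefox-window"

    def setUp(self):
        # Old (raises AttributeError): self.app = self.start_app_window("Firefox")
        # New: route the call through the process manager instead.
        self.app = self.process_manager.start_app_window("Firefox")

tests = FirefoxTests()
tests.setUp()
print(tests.app)
```

In a real test case you would not build the process manager yourself; the AutopilotTestCase base class provides it.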
Hi. I am Darran Kelinske from Austin, TX in the USA. This lesson is for Week 6 of Introduction To Music Production at Coursera.org. This week I will be teaching you about the basic components of a Synthesizer which include the following: Oscillator, Filter, Amplifier, Envelope, and LFO. In this lesson, we will be exploring the components from a subtractive synthesis perspective.
The Oscillator, sometimes referred to as a Voltage Controlled Oscillator (VCO), is responsible for generating the raw audio signal. The timbre/personality of the raw sound can be modified by changing the waveform that is responsible for generating the audio signal. Most Oscillators allow you to select from sine, square, saw, and other kinds of waveforms. Some synthesizers, such as Native Instruments Massive, include a plethora of waveforms to choose from (see below).
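To make the waveform choices concrete, here is a small Python sketch (not any particular synth's code) showing that sine, square, and saw oscillators differ only in the function that maps phase within one cycle to amplitude:

```python
import math

def sine(phase):
    # phase is the position within one cycle, in [0, 1)
    return math.sin(2 * math.pi * phase)

def square(phase):
    # jumps between +1 and -1 halfway through the cycle
    return 1.0 if phase % 1.0 < 0.5 else -1.0

def saw(phase):
    # ramps from -1 up to +1 over one cycle, then snaps back
    return 2.0 * (phase % 1.0) - 1.0

# Sampling any of these at a given frequency yields the raw oscillator signal:
def oscillate(wave, freq, sample_rate, n_samples):
    return [wave(freq * i / sample_rate) for i in range(n_samples)]
```

Swapping the wave function changes the harmonic content, and therefore the timbre, while everything else stays the same.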
The Oscillator will also typically allow you to modify the base frequency at which the waveform is generated. This allows you to modify the overall base pitch of the audio signal. As we can see in the Oscillator image below, the pitch has been lowered 12 semitones which results in an overall frequency reduction of the signal generated by the Oscillator.
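The twelve-semitone drop described above is exactly a halving of frequency, since each semitone multiplies the base frequency by the twelfth root of two. A quick check:

```python
def shift_pitch(freq, semitones):
    # each semitone scales frequency by 2 ** (1/12)
    return freq * 2 ** (semitones / 12)

print(shift_pitch(440.0, -12))  # an octave down: 220.0 Hz
```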
The Filter, sometimes referred to as a Voltage Controlled Filter (VCF), is responsible for shaping the sound of the raw audio signal. This is typically achieved by setting the type of filter to be applied, the cutoff frequency, and the resonance.
Many synthesizers allow you to choose from High-pass, Low-pass, Band-pass, and other filter types, but in subtractive synthesis the most common filter type to use is a low pass filter. Because the sound generated by the oscillator is so “bright”, we use a low-pass filter to remove/attenuate the partials that are above the cut-off frequency.
The Cutoff Frequency is used by the filter to set the frequency at which the filter will begin attenuating the audio signal.
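A minimal illustration of the idea (deliberately far simpler than the filters in a synth like Massive) is a one-pole low-pass: each output sample moves only a fraction of the way toward the input, so rapid wiggles are smoothed away while slow changes pass through:

```python
def one_pole_lowpass(samples, alpha):
    # alpha in (0, 1]: smaller alpha means a lower cutoff, i.e. more smoothing
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# A rapidly alternating (high-frequency) signal is strongly attenuated:
buzzy = [1.0, -1.0] * 8
smoothed = one_pole_lowpass(buzzy, 0.1)
```

A slowly changing signal would pass through this filter almost unchanged, which is the "subtractive" idea: keep the low partials, attenuate the high ones.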
Resonance amplifies the signal at the specified cutoff frequency. As you modify the cutoff frequency, a highly resonant filter will emphasize the frequencies around the cutoff. This can most often be heard during filter sweeps, in which the cutoff frequency can be heard sweeping across the audio signal.
An example of a Synthesizer filter from Massive:
The Amplifier, sometimes referred to as a Voltage Controlled Amplifier (VCA), is responsible for boosting the signal before passing it to an external source.
An Envelope is used to modulate the signal that is passing through the Synthesizer. Typically, the Envelope is used to modulate the amplifier. A standard Envelope has four sections which include Attack Time, Decay Time, Sustain Level, and Release Time.
The attack time is how long it takes for the audio signal to go from nil to its peak. This time begins when the key is first pressed. Short attack times can create punchy sounds, while longer attack times can result in a sound that gradually builds when the key is pressed.
Decay time is how long it takes for the audio signal to decrease from its peak level to its sustain level. This is measured from the end of the attack time.
The Sustain level begins after the attack and decay times and is the level at which the sound is maintained while the key is pressed.
Lastly, the Release time is how long it takes for the audio signal to decrease from its sustain level to zero. Longer release times create sounds that continue to persist after the key has been released.
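The four stages can be sketched as a function of time since the key was pressed (and, once the key is released, time since release). This is a simplified linear ADSR; real synths often use curved segments:

```python
def adsr(t, attack, decay, sustain, release, released_at=None):
    """Envelope level in [0, 1] at time t seconds after the key press.
    released_at: time the key was released, or None if still held.
    All durations are assumed positive."""
    if released_at is not None and t >= released_at:
        # release: fall from the sustain level down to zero
        dt = t - released_at
        return max(0.0, sustain * (1 - dt / release))
    if t < attack:
        return t / attack                    # attack: rise from 0 to peak
    if t < attack + decay:
        dt = t - attack                      # decay: fall from peak to sustain
        return 1.0 - (1.0 - sustain) * dt / decay
    return sustain                           # sustain: hold while key is down
```

For example, with a 0.1 s attack, a short attack reaches the peak almost immediately (punchy), while stretching the attack to a second or more gives the gradual build described above.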
A visualization of an “ADSR” Envelope in Massive is shown below.
An LFO (Low-Frequency Oscillator) is a modulation source that can be used to control other synthesizer components. For example, the LFO can be applied to the Oscillator so that the pitch oscillates between two frequencies. Similar to the Oscillator mentioned earlier, we can specify the type of waveform the LFO uses. We can also set the amount of modulation the LFO provides. Setting higher amounts will create sounds that waver more widely in pitch.
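Routing an LFO to pitch can be sketched as follows: the oscillator's frequency is nudged up and down by a slow sine wave, with the LFO amount controlling how far the pitch wavers. The names and units here are illustrative, not any particular synth's API:

```python
import math

def vibrato_freq(t, base_freq, lfo_rate, lfo_amount):
    # lfo_rate in Hz (slow, e.g. 5 Hz); lfo_amount is the pitch deviation in Hz
    return base_freq + lfo_amount * math.sin(2 * math.pi * lfo_rate * t)

# A 440 Hz tone wavering +/- 10 Hz, five times per second:
peak = vibrato_freq(0.05, 440.0, 5.0, 10.0)  # the LFO is at its crest here
```

Raising lfo_amount widens the waver; raising lfo_rate makes it flutter faster.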
In this post, we reviewed the fundamental components of a Synthesizer. Further discussion of each of these components can be found in the manuals for some synthesizers as well as Wikipedia.
While this lesson is very basic, many of the ideas and facts covered in this lesson were new to me.
If there are any inaccuracies or if there is any feedback you have to provide, please contact me through Coursera or social media. Thank you for reading.
Hi. I am Darran Kelinske from Austin, TX in the USA. This lesson is for Week 5 of Introduction To Music Production at Coursera.org. This week I will be teaching you about the Flanger and Chorus effects. Both of these effects are considered to be modulated short delay effects.
In this lesson we will be using the following recording as a sample.
A Flanger is an effects unit which produces flanging. According to Wikipedia flanging is:
“an audio effect produced by mixing two identical signals together, one signal delayed by a small and gradually changing period, usually smaller than 20 milliseconds.”
Said differently, a Flanger produces an audio effect by combining an audio signal with an identical audio signal that has been slightly delayed. When the signals are combined, the frequency response of the combined signal is different than the original signal due to constructive and destructive interference created by the addition of the delayed signal. The amount of delay applied to the delayed signal is varied over time. This variation in delay creates the flanging effect.
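A bare-bones flanger can be sketched directly from that description: mix each sample with a copy of itself delayed by a small, slowly varying number of samples. This is an illustrative mono sketch (parameter names are mine, not Ableton's), with a sine LFO sweeping the delay between zero and a few milliseconds:

```python
import math

def flange(samples, sample_rate, max_delay_ms=5.0, lfo_rate=0.5):
    # the delay sweeps between 0 and max_delay_ms at lfo_rate Hz
    out = []
    max_delay = int(sample_rate * max_delay_ms / 1000.0)
    for n, x in enumerate(samples):
        t = n / sample_rate
        # sine LFO mapped to [0, max_delay] samples
        d = int(max_delay * (0.5 + 0.5 * math.sin(2 * math.pi * lfo_rate * t)))
        delayed = samples[n - d] if n - d >= 0 else 0.0
        out.append(0.5 * (x + delayed))  # equal dry/wet mix
    return out
```

The constructive and destructive interference comes from adding the delayed copy; sweeping the delay moves the notches in the frequency response, which is the audible "whoosh".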
Flangers typically produce an audio effect that is described as “jet plane-like”. To me, they produce an airy sound when used in moderation, but can produce psychedelic sounds when their parameters are set to extremes.
A screenshot of the Ableton Live Flanger effect is pictured below.
Listen to the audio clip below to hear the default Flanger settings applied to our original audio clip.
The Delay Time setting on a Flanger specifies how long to delay the audio signal that is being added to the original audio signal. During my experimentation I’ve found that higher delay times typically produce more unconventional sounds.
LFO (Low-frequency oscillator)
The LFO is responsible for varying the amount of delay that is applied to the signals that are added to the original signal. The Flanger in Ableton uses two parallel time-modulated delays to create flanging effects. One of these delays is for the right channel and one is for the left channel. Because of this, you can hear the sound moving from side to side in your headphones while the Flanger is in effect.
The LFO in Ableton Live has a few settings which include Shape, Amount, Rate, and Phase.
The Shape setting specifies the shape of the modulation and includes sine, square, sawtooth, and random options.
According to the Ableton Live manual, the Amount parameter specifies “the extent of LFO influence on the delays.” Experimenting shows that setting the Amount to 0% results in no modulation of the delay time, while increasing the Amount results in greater variations in delay time.
The Rate setting controls how often the LFO waveform repeats itself. Note: I found an easy-to-understand description of this at canadianmusicartists.com here.
Lastly, according to the Ableton Live manual the Phase setting behaves in the following manner:
“The Phase control lends the sound stereo movement by setting the LFOs to run at the same frequency, but offsetting their waveforms relative to each other. Set this to 180, and the LFOs will be perfectly out of phase (180 degrees apart), so that when one reaches its peak, the other is at its minimum.”
The Feedback setting controls how much of the output signal is sent back into the input signal.
Similar to other effects, the Dry/Wet knob determines how much of the output signal is composed of the original signal (dry) and the processed signal (wet).
A Chorus is an audio effect that is typically produced by “taking an audio signal and mixing it with one or more, pitch modulated copies of itself.” Source: Wikipedia
Using a Chorus effect can make an audio signal sound fuller. This is because the original signal and the copies of it combine to be perceived as one full sound, as opposed to separate, distinct sounds.
The Chorus in Ableton Live appears to be built from two delay units. Each delay unit can have a different delay time, modulated according to the settings available in the effect unit.
A screenshot of the Ableton Live Chorus effect is pictured below.
Listen to the audio clip below to hear the default Chorus settings applied to our original audio clip. The clip will sound a little brighter because the default effect setting includes a high-pass filter which filters out much of the kick drum.
The delay time is found at the bottom of each delay unit and specifies the amount of delay that will be applied to the audio signal.
The delay time for each delay can be modulated using the Modulation settings, which include Amount, determining how much modulation is applied to the delay, and Rate, the rate at which the modulation occurs.
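The two-delay design can be sketched in a few lines: mix the dry signal with copies delayed by roughly 15 to 25 milliseconds, each delay line modulated by its own slow LFO. The parameter names and values below are illustrative, not Ableton's:

```python
import math

def chorus(samples, sample_rate, delays_ms=(15.0, 25.0),
           rates=(0.3, 0.5), depth_ms=3.0):
    # two modulated delay lines, each with its own base delay and LFO rate
    out = []
    for n, x in enumerate(samples):
        t = n / sample_rate
        mixed = x
        for base, rate in zip(delays_ms, rates):
            d_ms = base + depth_ms * math.sin(2 * math.pi * rate * t)
            d = int(sample_rate * d_ms / 1000.0)
            mixed += samples[n - d] if n - d >= 0 else 0.0
        out.append(mixed / (1 + len(delays_ms)))  # keep the level roughly even
    return out
```

The longer delays (compared to a flanger's few milliseconds) and the continuously drifting delay times are what make the copies blend into one thicker sound rather than a sweeping comb-filter effect.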
Similar to the Flanger effect discussed earlier, the Chorus effect has a feedback setting which allows the processed signal to be sent back into the input signal.
The Chorus plugin has a Dry/Wet knob which allows you to specify how much of the output signal is comprised of the original (dry) and processed (wet) signal.
I found a good overall description of the Ableton Live Chorus plugin here.
Hi. I am Darran Kelinske from Austin, TX in the USA. This lesson is for Week 4 of Introduction To Music Production at Coursera.org. This week I will be teaching you about the Gate effect that is available in Ableton Live. I will give a brief overview of the Gate effect and discuss some of the parameters and visualizations the Gate effect provides.
In this lesson we will be using our audio recording from the past few weeks' assignments (found below).
From the Ableton Live manual:

“The Gate effect passes only signals whose level exceeds a user-specified threshold. A gate can eliminate low-level noise that occurs between sounds (e.g., hiss or hum), or shape a sound by turning up the threshold so that it cuts off reverb or delay tails or truncates an instrument’s natural decay.”
In other words, a Gate is a device that only lets sound through when it is above a certain threshold level. In addition to setting the threshold level, Gates typically have other parameters, including ratio/floor, return, attack, hold, and release.
A screenshot of the Ableton Live Gate effect is pictured below.
The Threshold parameter sets the dB level at which the Gate will allow sound to pass through. This setting can be modified in Ableton Live by dragging the horizontal line displayed on the Gate effect, modifying the knob underneath the Threshold setting, or using a knob on a connected controller like an APC40.
Setting the Threshold value to a level that only lets the loudest sounds through creates a pulsing kind of effect on the audio track.
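The core threshold behavior can be sketched in a few lines of Python. This works on raw amplitude values rather than dB and has none of the smoothing a real gate applies, so it is deliberately crude:

```python
def hard_gate(samples, threshold):
    # pass samples whose magnitude exceeds the threshold; mute the rest
    return [x if abs(x) > threshold else 0.0 for x in samples]

signal = [0.9, 0.05, 0.7, 0.02, 0.8]   # loud hits with quiet noise between
print(hard_gate(signal, 0.1))           # the low-level noise is removed
```

Raising the threshold toward the level of the loudest hits is what produces the pulsing effect described above: only the peaks survive.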
The Return setting is explained well in the Ableton Live manual:
“Return (also known as “hysteresis”) sets the difference between the level that opens the gate and the level that closes it. Higher hysteresis values reduce “chatter” caused by the gate rapidly opening and closing when the input signal is near the threshold level. The Return value is represented in the display as an additional horizontal orange line.”
In the screenshot below we can see the range that is created when setting a return level. As mentioned earlier, this range is depicted by a second horizontal bar that is displayed on the Gate effect. A further description of the Threshold and Return/hysteresis settings can be found on Wikipedia here.
Setting a Return level removes some of the pulsating created by using the Gate in the previous recording, by allowing the Gate to remain open while the sound level is decreasing.
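The Return/hysteresis behavior can be sketched by giving the gate two thresholds: it opens above the upper one but only closes again once the level falls below the upper threshold minus the Return range. This is illustrative Python, not Ableton's implementation:

```python
def gate_with_return(levels, threshold, return_range):
    # the gate closes only when the level drops below (threshold - return_range)
    close_level = threshold - return_range
    is_open, out = False, []
    for lvl in levels:
        if lvl > threshold:
            is_open = True
        elif lvl < close_level:
            is_open = False
        out.append(lvl if is_open else 0.0)
    return out

# A decaying level hovering near the threshold stays gated open:
decaying = [0.9, 0.55, 0.45, 0.55, 0.45, 0.2]
print(gate_with_return(decaying, 0.5, 0.4))   # passes everything through
print(gate_with_return(decaying, 0.5, 0.0))   # no Return: chattering output
```

With Return at zero, the samples hovering around 0.5 flip the gate open and shut repeatedly, which is exactly the "chatter" the manual describes.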
From the Ableton live manual:
“The Attack time determines how long it takes for the gate to switch from closed to open when a signal goes from below to above the threshold. Very short attack times can produce sharp clicking sounds, while long times soften the sound’s attack.”
When setting the Attack time to 0.02 ms, the Gate produces a clicking sound.
From the Ableton Live manual:
“When the signal goes from above to below the threshold, the Hold time kicks in.”
The Hold time is the time the Gate will remain open once the signal falls below the threshold.
From the Ableton Live manual:
“After the hold time expires, the gate closes over a period of time set by the Release parameter.”
In other words, the Release time is the amount of time that will be spent closing the gate. As with Hold, please be aware that setting long Release times may result in the Gate remaining open despite a drop in signal.
This graph on Wikipedia is a great visualization of how the various Gate parameters affect signal flow.
From the Ableton Live Manual:
“The Floor knob sets the amount of attenuation that will be applied when the gate is closed. If set to -inf dB, a closed gate will mute the input signal. A setting of 0.00 dB means that even if the gate is closed, there is no effect on the signal. Settings in between these two extremes attenuate the input to a greater or lesser degree when the gate is closed.”
A Ratio setting is not included in the Ableton Live Gate effect, but where present, a Ratio parameter allows you to specify how strongly sound below the threshold will be attenuated. Ratios are specified as a ratio of input signal to output signal. I found a good description of Ratio settings related to Gates from DoctorProAudio.com.
“The attenuation ratio works in an equivalent way to that of the compressor, defining the amount of a attenuation (compression) that is applied to the signal. These ratios are expressed in dB, so that, for example, 1:6, means a signal that is 1 dB below the threshold will get reduced to 6 dB below it, while a signal 3 dB below the threshold will get reduced to 18 dB below it. Likewise, a 1:3 (one to three) means a signal 1 dB below the threshold will be attenuated 2 dB (as the level will go from -1 dB to -3 dB; we use a negative sign as these levels are below the threshold, which is the 0 dB reference in this case). With a ratio of 1:10 and higher, the expander is considered to work as a pure noise gate, though an ideal gate would have a theoretical ratio of 1:infinity (any level below the threshold would be totally muted).”
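The arithmetic in that description is easy to check in code: with a downward-expansion ratio of 1:R, a level d dB below the threshold comes out R times d dB below it. The helper below is hypothetical, following the convention in the DoctorProAudio.com quote:

```python
def expander_output_db(level_db, threshold_db, ratio):
    # levels at or above the threshold pass unchanged;
    # below it, the distance under the threshold is multiplied by the ratio
    if level_db >= threshold_db:
        return level_db
    return threshold_db + ratio * (level_db - threshold_db)

# With the threshold at 0 dB and a 1:6 ratio, -1 dB in comes out at -6 dB:
print(expander_output_db(-1.0, 0.0, 6.0))   # -6.0
print(expander_output_db(-3.0, 0.0, 6.0))   # -18.0
```

As the ratio grows toward infinity, everything below the threshold is pushed toward silence, which is the ideal noise gate the quote describes.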
The Gate effect in Ableton Live gives us a few visual indicators to help us better understand the impact the Gate is having on our signal. The image below highlights where the threshold, return, output signal, and gain reduction meter are located on the Ableton Live Gate effect.
In this post, we reviewed the Gate effect that is available in Ableton Live and how some of the parameters in the Gate effect impact sound. While I have begun to understand how the various settings function, it will take much longer to understand how to use them musically. If there are any inaccuracies or if there is any feedback you have to provide, please contact me through Coursera or social media. Thank you for reading.
Hi. I am Darran Kelinske from Austin, TX in the USA. This lesson is for week 3 of Introduction To Music Production at Coursera.org. I will be teaching you about the different effect categories. These categories include dynamic effects, delay effects, and filter effects.
In this lesson we will be using our audio recording from last week’s assignment (found below). Additionally, we will be using Ableton Live to explore an effect in each effect category.
To begin, we will look at dynamic effects. Dynamic effects control amplitude. Amplitude is perceived by the listener as loudness.
One device that we have available in Ableton Live that provides dynamic effect capabilities is the Compressor. From the Ableton Live manual we can get a good understanding of how the Compressor works.
“A compressor reduces gain for signals above a user-settable threshold. Compression reduces the levels of peaks, opening up more headroom and allowing the overall signal level to be turned up. This gives the signal a higher average level, resulting in a sound that is subjectively louder and ”punchier” than an uncompressed signal.”
Furthermore, the manual describes the two most important parameters on the compressor device:
“A compressor’s two most important parameters are the Threshold and the compression Ratio.
The Threshold slider sets where compression begins. Signals above the threshold are attenuated by an amount specified by the Ratio parameter, which sets the ratio between the input and output signal. For example, with a compression ratio of 3, if a signal above the threshold increases by 3 dB, the compressor output will increase by only 1 dB. If a signal above the threshold increases by 6 dB, then the output will increase by only 2 dB. A ratio of 1 means no compression, regardless of the threshold.”
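That ratio arithmetic can be verified directly: above the threshold, each extra input dB adds only 1/ratio dB of output. The helper below is hypothetical, written to match the manual's numbers:

```python
def compressor_output_db(level_db, threshold_db, ratio):
    # signals at or below the threshold pass unchanged; above it,
    # the overshoot is divided by the compression ratio
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

# Threshold at -10 dB, ratio 3: rising 3 dB above threshold yields 1 dB out:
print(compressor_output_db(-7.0, -10.0, 3.0))  # -9.0 (1 dB above threshold)
print(compressor_output_db(-4.0, -10.0, 3.0))  # -8.0 (2 dB above threshold)
```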
After reading the description from the manual, the idea of the compressor starts to make sense conceptually. The compressor is compressing the amplitude of the waveform by removing the peaks in amplitude. Below, you can see a screenshot of the Compressor device applied to an audio track in Ableton Live.
Next, we will discuss delay effects. Delay effects control propagation qualities. One device we have available in Ableton Live that provides a delay effect is Reverb. The Reverb device allows us to simulate the environment an audio recording might be played in by delaying the propagation of all or parts of the audio recording. The full description of the parameters available in the Reverb device can be found in the Ableton manual here.
A Reverb device added to an audio track can be seen below.
Lastly, we will discuss filter effects. Filter effects control timbre. An example of a filter effect found in Ableton Live is the Auto Filter.
This device allows us to filter out particular frequencies in the audio sample. For example, using the Auto Filter as a low-pass filter we can allow frequencies lower in the spectrum to pass through while preventing frequencies higher in the spectrum from being reproduced. In effect, we are filtering out the high frequencies.
A full description of the Auto Filter can be found here.
In this post, we reviewed the three effect categories we have discussed so far. These categories include dynamic, delay, and filter effects. We also looked at a device from each category that is available in Ableton Live. If you can, experiment and play with the different devices in your DAW. This will help further your understanding of how the various devices impact sound.
Thank you for reading and please post any questions or comments below. Or feel free to contact me using social media.
Hi. I am Darran Kelinske from Austin, TX in the USA. This lesson is for week 2 of Introduction To Music Production at Coursera.org. I will be teaching you about adding a software instrument, recording MIDI, and quantizing in a DAW. The DAW I will be using in this lesson is Ableton Live.
In this lesson we will be recording MIDI using an MPK mini. This is a portable USB-powered 25-key keyboard that also includes 8 drum pads (pictured below). The MPK mini will need to be connected to the PC you are recording MIDI with. The song we will be recording is called “The Ash Grove” and is taken from a beginners piano book.
Setting up the MIDI Instrument
To begin this task we will need to set up our MIDI instrument in Ableton Live. To do this, I created a new Ableton Project, named RecordingMidiWithAbleton, and then removed all of the tracks except for one MIDI track. I then renamed this MIDI track to Piano.
Next, I searched for a Piano Instrument in the top left search box. This produced a “Grand Piano” instrument which sounded nice. MIDI Instruments can be downloaded from Ableton.com. This instrument entry needs to be placed over the Piano MIDI track to specify that this MIDI track is using this instrument to produce sound.
Setting up the Click and Countoff
Setting up the click and countoff in Ableton is done in the top left of the GUI. To do this, we will need to enable the metronome by clicking the metronome button. When enabled, the metronome button will be yellow. To set the countoff we will need to click the drop-down button to the right of the metronome button and select the appropriate countoff time. In this example, we are using 1 Bar.
Recording the Instrument
Before recording, it is good practice to rehearse your performance. Loudon continuously reminds us that there is no substitute for a good performance, and this is true. After getting comfortable with the music, I clicked the record button and pressed play. After a 1 Bar count-off, everything that was played was recorded in Ableton Live. The recorded MIDI notes can be seen in Arrangement View in the image below.
If you click on the above image, you can see in detail that there is plenty of room for improvement on both timing and velocity. To improve the timing we can quantize the MIDI track. To do this, I first selected all of the recorded notes and then right-clicked the selection. From the right-click menu I selected Quantize settings and set them similar to what Loudon suggested. I am using 1/8th note quantization, adjusting from the start, with 20% quantization.
After running quantization, you can see the starts of the MIDI notes sit tighter to the grid. This is expected. Some of the notes that were originally played were off to the extent that they were quantized to the wrong grid position. These notes will need to be corrected by manually dragging the MIDI note to the correct position. After some more quantizing, manual correction of velocity, and manual correction of a few of the notes, the final MIDI recording can be seen and heard below.
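Partial-strength quantization like the 20% setting above amounts to moving each note only a fraction of the way toward its nearest grid line. The sketch below (times in beats; illustrative Python, not Ableton's code) shows the idea for 1/8-note quantization:

```python
def quantize(starts, grid=0.5, strength=0.2):
    # grid=0.5 beats is 1/8-note resolution in 4/4; strength=1.0 snaps fully
    out = []
    for t in starts:
        nearest = round(t / grid) * grid
        out.append(t + strength * (nearest - t))
    return out

played = [0.1, 0.45, 1.6]    # note starts slightly off the 1/8-note grid
print(quantize(played))       # each start moves 20% of the way to the grid
```

This also shows why badly mistimed notes end up on the wrong grid position: a note closer to the wrong grid line gets pulled toward it, and only manual dragging fixes that.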
Copying the MIDI Track
Once we get the MIDI track where we would like it, we can copy it from Arrangement view to Session view. To do this, right-click the title of the recording in Arrangement view and click copy. You can then paste this into Session view.
Copied MIDI recording in Session View:
In this post, we covered quite a few things. We first found and selected a MIDI Instrument. We then recorded MIDI using a MIDI keyboard. After we made the initial recording, we began cleaning up the recorded MIDI notes using quantization. Lastly, we copied the recorded MIDI notes to session view so that the clip could be launched in future recordings.
Hi. I am Darran Kelinske from Austin, TX in the USA. This lesson is for week 1 of Introduction To Music Production at Coursera.org. I will be teaching you about visualizing sound using Ableton Live and other tools available for Mac OS X.
In this lesson I will show you where to find different tools to visualize sound and explain some of the concepts related to visualizing sound. The tools we will explore in this post are an oscilloscope, spectrum analyzer, and sonogram.
To begin, we will look at an oscilloscope. An oscilloscope visually displays the waveform of a sound as it travels through a medium. Time is displayed on the horizontal axis and amplitude is displayed on the vertical axis. By counting the number of waves in a one-second time period, we can express the frequency of the sound in hertz.
Using Ableton Live and a free plugin, Blue Cat’s Oscilloscope Multi, we can view the differences in frequency of each sound wave while playing different notes. After installing the plugin, I created a simple Grand Piano track that plays the note C on different octaves. The track can be seen below (click to expand).
Now, let’s play the track while watching the Oscilloscope. We can see that when we play C4 we have around 10 peaks in a 0.02 second period.
When we play C3, an octave below C4, we can see that there are five peaks in the same time period. As you go up by an octave the frequency doubles, and as you go down by an octave the frequency halves. Note that there is no correlation between amplitude and frequency.
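The peak-counting arithmetic is simple enough to check: frequency is cycles divided by the observation window, and each octave step doubles or halves it:

```python
def freq_from_peaks(peak_count, window_seconds):
    # counting full cycles over a known window gives frequency in Hz
    return peak_count / window_seconds

print(freq_from_peaks(10, 0.02))  # ~500 Hz, the C4 trace described above
print(freq_from_peaks(5, 0.02))   # ~250 Hz, one octave down at C3
```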
As we can see, one of the drawbacks of the oscilloscope is that determining the frequency of a sound requires us to manually count the peaks over a known time period.
Next, we will look at the spectrum analyzer using the same piano track we were working with earlier. A spectrum analyzer displays frequency on the horizontal axis and amplitude on the vertical axis. A Spectrum Analyzer is built into Ableton Live and can be accessed by performing a search for Spectrum in the top right.
Playing C3 while watching the spectrum analyzer shows that the largest peak is at around 262 Hz (note that Ableton Live labels middle C as C3). If you hover over the peak, Ableton will display the note and the frequency in the bottom left of the spectrum analyzer. While the mouse cursor is missing in the screenshot below, the peak seen here is C3, as noted in the bottom left of the spectrum analyzer.
Now, let’s play C2, which is an octave below C3. In the spectrum analyzer we can see that the frequency for C2 is 131 Hz, half that of C3. This is in line with the observations we made while using the oscilloscope.
One disadvantage of the spectrum analyzer is that it displays the characteristics of sound at a particular moment and does not give you a picture of how the sound is changing over time.
A sonogram allows you to view frequency, amplitude, and time in a single pane. Frequency is displayed on the vertical axis, time on the horizontal axis, and amplitude on the z-axis as color intensity.
I was able to find a free sonogram tool called Sonic Visualizer to use for this post. After exporting our Grand Piano track (which can be downloaded here: VisualizingSound), I loaded it into the sonogram in Sonic Visualizer.
Here we can also see that as we play lower notes, the bottom frequencies of the sonogram are darker. We can also easily determine each particular note that is played as there is a distinct break between each note displayed on the sonogram. We can also see the harmonics that are related to each note.
Thank you for reading my post on Visualizing Sound. This is a new domain for me and I enjoyed learning and writing about some of the ways we can visualize sound. All of the tools mentioned in this post come with free demos or are included in Ableton. If you have any questions, please comment below or contact me using any of the methods listed on the site. Thanks again.