D4 always starts off shrouded in secrecy. First, a backronym is released, giving us absolutely no idea what we'll be doing. For us, the acronym was "BOOMBASTIC". A week later, everyone attends a lecture where the project is revealed, the specification given and deadlines set. BOOMBASTIC turned out to be "Body-Operated One-Man Band with Amplification, Storage and Transmission Integrated Circuits". Apart from "Integrated Circuits", this was actually a decent description of what we had to build - in groups of six we had to create a portable audio device for a street performer, with the spec demanding a ridiculously large number of features. The device had to amplify its output, be battery powered, stream audio wirelessly and support saving and playback of performances - all to be flawlessly implemented in 12 days. Fortunately, I don't think I could have been given a better project, considering my interest in audio programming.
I was part of team "Eminem" and one of the others in our group thought up a cool concept - what if we turned a computer keyboard into a musical keyboard? We envisaged a small module with a USB port, allowing any standard keyboard to be plugged in and used as a musical instrument. We split the design into separate tasks: power supply, amplifier, wireless streaming, keyboard interface, peripherals and the synth software. I took on the software responsibilities - basically writing a synthesiser from scratch to run in real time on a Raspberry Pi. Since the synth was key to having a product that did anything interesting, I was under a fair amount of pressure to get my part working to some degree. Everyone had a responsibility to design and deliver a working module though, the idea being we could work as independently as possible.
There were several key parts to my software design. I used a relatively simple method of audio generation called "wave table" synthesis - in this method we calculate and store a single period of a waveform. This means the wave table can hold something as simple as a sine wave, or something as complex as a piano sound. By reading from the table, which is just an array in a program, at different speeds we can generate different frequencies. This is intuitive if we read it back at nice multiples e.g. 2x or 4x. For all other frequencies, some form of interpolation is needed, which is just estimating the value between adjacent samples. Since high-fidelity audio wasn't really an aim for this project, I settled on linear interpolation. My understanding of this technique came from the book I used to teach myself audio programming - "The Audio Programming Book" by Richard Boulanger.
In my design I decided on having two wave tables - this allows you to do lots of cool stuff like amplitude modulation (AM), frequency modulation (FM) or just mixing between the two tables. I also wanted enveloping of the output - so notes could have attack and decay. This was supported by two envelope tables - sets of points corresponding to the envelope of a given note. Finally, for maximum flexibility, I included two extra tables called "tone maps". These map every ASCII character to a frequency - by default the frequencies correspond to the Western scale using equal temperament, but the tables allow the synth to make pretty much any noise.
|The basic idea of my design.|
Polyphony was probably too ambitious with the time given, but I pretty much had it working by the end. Polyphony basically means being able to produce multiple notes at once - think of chords on a piano. On my first attempt I didn't realise how hard it would be, so the solution was buggy. A lot of crashes later, I stopped and thought hard about what I'd done, and came up with a data structure and a way of managing the polyphony that would be a bit more robust. Although there are only two wave tables, we also need oscillators to generate audio. For the uninitiated, an oscillator is a device that generates a single frequency, whether in circuitry or software. In this case an oscillator is simply a structure containing a position in a wave table, with an amount to increment by on each sample. If we have, say, 5 oscillators, we can generate 5 different notes simultaneously. More oscillators do, however, mean more computation per sample. Although it would be convenient (but sound awful), the Pi couldn't support enough polyphony to have every note sounding at once - this meant I had to set a limit on the number of notes sounding simultaneously. With a limited number of oscillators, I had to manage which were in use and which were free, and track which oscillator belonged to which note. In the end I used a linked-list-based structure to track available and used oscillators, with oscillators returned to a pool after a note is released and requested when a new note is pressed.
|The core API design.|
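The oscillator pool could be sketched roughly like this - the struct fields, names and pool size are illustrative, not the project's actual API:

```c
#include <stddef.h>

#define NUM_OSC 8   /* polyphony limit - an assumed value */

typedef struct Osc {
    double phase;       /* current position in the wave table */
    double increment;   /* advance per sample (sets the pitch) */
    int key;            /* ASCII code of the note using this oscillator */
    struct Osc *next;   /* next oscillator in the free list */
} Osc;

static Osc pool[NUM_OSC];
static Osc *free_list = NULL;

/* Push every oscillator onto the free list. */
void pool_init(void) {
    free_list = NULL;
    for (int i = 0; i < NUM_OSC; i++) {
        pool[i].next = free_list;
        free_list = &pool[i];
    }
}

/* Claim an oscillator for a new note; NULL once the limit is hit. */
Osc *note_on(int key, double increment) {
    if (!free_list) return NULL;
    Osc *o = free_list;
    free_list = o->next;
    o->phase = 0.0;
    o->increment = increment;
    o->key = key;
    o->next = NULL;
    return o;
}

/* Return a released oscillator to the pool. */
void note_off(Osc *o) {
    o->next = free_list;
    free_list = o;
}
```

Claiming and releasing are both O(1) pointer swaps, which matters when key events arrive while the audio thread is running flat out.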
I used this design to implement some useful example functions. One was tremolo - achieved by writing a sine wave (or triangle wave) to the second wave table and performing AM. Remembering the principle the design is based on, all this function does is generate a stream of character codes which feed into the synth. Any notes the user now plays, as they pass through the synth core, will have tremolo applied (it sounds like a note fading in and out quickly). Vibrato is another useful one - a slight variation in pitch - done similarly with a sine wave in table 2 and FM of the first signal. Hopefully it's now clear how a lot can be achieved with this really simple design.
Finally, just in case I hadn't already attempted enough, I added some basic filtering as well. I implemented a low-pass Butterworth filter which could be added to the output of the synth. This was an example of a feature that couldn't really be done simply by generating characters. I also attempted to add a (dynamic range) compressor to the output, as I found the synth was really quiet with a single note sounding but really loud with lots of notes. Then I realised I was being stupid and just scaled the output by the number of notes playing; a compressor is maybe another project for another day.
From this I've probably made the project sound straightforward overall. It's quite easy with hindsight to make it sound like I designed stuff and it worked perfectly; unfortunately this wasn't the case (and never is for an engineer). The first stage of implementation was to get the synth core API working on a PC and producing an audio file as output, so not running in real time. I could feed a few characters in, step through with a debugger and examine the audio output in Audacity. Without this step I probably wouldn't have got anything working. However, moving the design to the Pi was incredibly frustrating - I had a pretty cool synth module working on a PC, but for a few days the Pi couldn't even produce the simplest sine wave tone without sounding mangled. There was also a much bigger learning curve for ALSA (sound in Linux) and getting PortAudio to run on a Pi. One of the biggest challenges was audio glitches on the output. The final solution was to use the callback mechanism in PortAudio, and also to use the circular buffer structure supplied by the library. This circular buffer structure is thread safe and it just works, unlike my own buffer. All the buffer had to do was store generated samples, written to and read from in blocks.
|An oscilloscope trace of the glitching before the fix - in this case I was hoping for a sine wave.|
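The idea behind that circular buffer can be sketched as below. This single-threaded version only shows the wrap-around index logic; PortAudio's actual `PaUtilRingBuffer` adds the memory barriers that make one writer and one reader safe across threads, which is exactly the part my own buffer got wrong.

```c
#include <stddef.h>

#define RB_SIZE 1024   /* power of two, so wrapping is a cheap mask */

typedef struct {
    float data[RB_SIZE];
    size_t write_idx;   /* free-running counters; the difference   */
    size_t read_idx;    /* between them is the fill level          */
} RingBuffer;

size_t rb_available(const RingBuffer *rb) {
    return rb->write_idx - rb->read_idx;
}

/* Write a block of samples; returns how many actually fit. */
size_t rb_write(RingBuffer *rb, const float *in, size_t n) {
    size_t space = RB_SIZE - rb_available(rb);
    if (n > space) n = space;
    for (size_t i = 0; i < n; i++)
        rb->data[(rb->write_idx + i) & (RB_SIZE - 1)] = in[i];
    rb->write_idx += n;
    return n;
}

/* Read a block of samples; returns how many were available. */
size_t rb_read(RingBuffer *rb, float *out, size_t n) {
    size_t avail = rb_available(rb);
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        out[i] = rb->data[(rb->read_idx + i) & (RB_SIZE - 1)];
    rb->read_idx += n;
    return n;
}
```

The synth thread keeps the buffer topped up with blocks of samples, and the PortAudio callback drains it; if the callback ever finds it empty, you hear exactly the sort of glitching in the trace above.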
Perhaps the biggest failure for us was the final stage - integrating all the parts together. In the last week I easily put in 13+ hours a day, with the final night being my first real "all-nighter" coding. Although not everyone was as keen as me, almost all of us put in a lot of work, and had a lot to do. Because of this, we all kind of forgot that we would at some point need a complete unit, with parts working together. Fortunately, because of the simplicity of the protocol, we did get some of the bits communicating. Most importantly, the keyboard worked and we had a design that made sounds. This integration was all done within the last couple of hours, with frantic work literally done in the final minutes of the project. A week later, each team was allocated about an hour to set up a demonstration of the project. This ended up, for us, being a frantic hour disassembling our nicely packaged, but slightly "bricked" design, so that I could hack together the parts that all worked reliably. Luckily for the team, the markers were happy with our end result, even if it was far from where we'd have liked it to be.
For me the final important lesson was that, as electronics students, we are now capable of making something really cool if we just take the initiative. We have the tools to come up with concepts and designs, then source components and quickly prototype something. I'll leave a few resources for the interested. For a bit more information, here's a link to my project report. Here's a link to the code, in case anyone has a vague interest in looking at or hacking on it themselves. Finally, here's a link to the photo album for D4 2014 (which features all the groups in my cohort). There were some other interesting and impressive projects, from gloves that produced piano sounds, to laser harps, to a printed-out, playable paper keyboard.