Starting out, I envision the end product being built up from the following elements:
- Custom designed PCB for the electronics
- STM32 chip as the brain (I've only done Arduino before)
- Potentiometers ONLY as the user interface
- 3D printed casing
- USB powered
- Stereo out interface
- Input interface for a modular clock signal
- (Maybe) CV-in (02 June 2024: I binned this early on)
- (Maybe) Midi Din5 clock in (02 June 2024: I added this in the end)
Project Plan:
I'm gonna figure out my plan as I go along. But for now I have these main steps in my head:
1. Write code for drum machine to run on my computer
2. Prototype this code using the black/blue pill and further develop (07 April 2024: after ordering the black pill, found out this was not the best choice, see blog)
3. Once finalised, design and print PCB
4. Design and print case
5. Assemble and done
I want to focus on documenting what steps I undertake on my way to the end product and what lessons I learn along the way. If there is anything you're missing, or anything that is unclear, feel free to reach out: mortenjoes[at]gmail.com.
I have multiple GitHub repos with code. There is one for local development; this runs the code on your laptop and can be used to write new features: github.com/mjoes/drumbadum.
Then there is a repo for the code that runs on the discovery board; this has more basic functionality and hasn't been updated since I got my PCB for the final drum machine: github.com/mjoes/drumbadum-STM32DISC1.
Also a massive shoutout to Pladask Elektrisk, who helped me get started on the hardware side of things. Very knowledgeable and helpful. Check out their stuff, it's next level.
DISCLAIMER: I started documenting a lot of details of my code but after a while realised this was costing more time and effort than it's worth. The goal is also not to explain how the code of my drum machine works, but how we go from an idea to a machine.
I will track my progress chronologically and write the dates for each post as well as the github commit hash or pull request. Seeing as I already made some progress before I decided to write it down, the first post is quite large.
My C++ knowledge starting out is quite limited. I did some Arduino and used C++ to solve a couple of Advent of Code questions, but that's it. I do have coding experience though, so that helps a lot. For the C++ though, I started reading "The Audio Programming Book". This is a great start for some basics in audio programming in C/C++. Another great resource is the publicly available code by Emilie Gillet for her Mutable Instruments modules, found here: https://github.com/pichenettes/eurorack/. I have the Mutable Instruments Peaks module, whose drum setting plays a drum sound based on an input trigger. Seeing as a sequencer is basically just triggering sounds at predefined times, this Peaks code should be of use. In the style of learning by doing, I did not read these sources first, just got stuck in as quickly as possible.
My high-level plan for writing the code is the following:
1. Create developing environment for my project
2. Create a simple outline for the program
3. Code a simple sinewave
4. Add an attack-decay envelope to the sinewave
5. Trigger this sinewave at predefined times
6. Shape the soundwave to a parametrized kickdrum
7. Add parametrized hihat sound using kickdrum as template
8. Add parametrized snare-drum sound using kickdrum as template
9. Write the sequencer logic
10. Write algos (Pretty empty statement this one here, but will think about algorithms after the sound generation part is in place).
Note that I don't include any bootstrapping of the microcontroller yet. The end product here is a program that writes an audio file to my hard drive. Taking it to a chip is a problem for future Morten.
Development Environment
During development I want to output both audio as well as a plot of the audio each time I compile and run. The plot is very useful for identifying any issues with the waveform. I saw that C++ programs generally use a makefile to define how/what to compile, so I made a simple one for this project. Since the program outputs raw audio, I use the sox command line tool to convert this to a .wav file. Also, I don't plot the graph within the C++ program; I just write a simple text file with x,y coordinates, which is turned into a graph using plotutils. All these steps I combined in a simple bash script, so every time I want to run, I just execute run.sh.
Program outline
I thought I'd start out with what seems most straightforward to me. I'm picturing a sequencer as a simple loop of, say, 16 events, where an event can be either 'do nothing' or 'trigger 1, 2 or 3 sounds' (since I am limiting myself to 3 sounds atm). These events are triggered at set times based on the BPM (for now I won't care about BPM though). And these times could easily be calculated in time (e.g. seconds). Since, however, our program doesn't deal with seconds as such (yet), but with positions (where time in seconds = position / sample rate), I try to think in terms of position. So my sequencer for the time being will be simple: when the program starts, the position will start increasing, and at predefined positions the sound will be triggered. With this approach I don't have to think about real-world time yet as part of the program logic. This only comes into play when I export the raw audio.
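To make the position idea concrete, here is a minimal sketch (not my actual project code; the names and values are just for illustration):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// The main loop increments `position` once per sample and fires a hit
// whenever it matches one of the predefined trigger positions. Time in
// seconds would be position / sample_rate, but the loop itself never
// deals with seconds.
bool is_trigger(uint32_t position, const std::vector<uint32_t>& triggers) {
    return std::find(triggers.begin(), triggers.end(), position) !=
           triggers.end();
}
```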
For the sound generation, I create a class for each type of sound. This project is a perfect example of where object-oriented programming is the bee's knees. It provides the flexibility to easily adjust parameters between hits, or have simultaneous hits of the same instrument, just by initializing a new class instance, and it neatly keeps track of the state of each instrument. My implementation will be that when the position is reached for e.g. a kick drum hit, we set all the parameters for that hit (decay, overdrive etc.) and start the hit. The processing of the hit is defined by the process function in the class, and the output is returned to the main loop. Thinking ahead, I can later easily replace the manually inputted trigger positions with a sequencing algorithm of my choosing. Hope that makes sense...
Lastly, I am writing the program to later work with a 16-bit DAC, and thus my output is 16-bit. Before I started I had no idea what that meant, but it simply means my output ranges from −32,768 to 32,767. So where a simple sine wave y(t) = sin(2πft) ranges from -1 to 1, we have to scale this to the 16-bit range by multiplying it by 32,767. This will output a wave that looks like this (sort of):
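In code, the scaling looks roughly like this (a simplified sketch, not my actual project code; the sample rate and frequency are illustrative):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Generate a sine wave scaled from -1..1 into the 16-bit output range.
std::vector<int16_t> make_sine(float freq, int sample_rate, int num_samples) {
    const float two_pi = 6.28318530f;
    std::vector<int16_t> out(num_samples);
    for (int n = 0; n < num_samples; ++n) {
        float y = std::sin(two_pi * freq * n / sample_rate); // -1..1
        out[n] = static_cast<int16_t>(y * 32767.0f);         // 16-bit range
    }
    return out;
}
```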
The kick drum
The first sound I wrote an algorithm for is the kick. I remember the kick from the Ableton DS Kick (MIDI instrument) being quite nice, so I checked that out as inspiration. It has a couple of parameters to tweak, and by comparing the tweaked waveform with the original, one can quite easily digest what is going on. In essence it's then possible to reverse engineer these features into my own drum algorithm. I'm using it as a starting point and taking it from there. In total I have 7 parameters that can be tweaked. 3 of these are attack, velocity and decay, which I will discuss a little later. The others are:
1. Frequency
This is quite simply the base frequency of the bass drum, nothing fancy.
2. Envelope
This controls a form of frequency modulation of the kick drum wave. If it is more than 0, then the frequency at the start of the sample will be higher than the base frequency and linearly decrease until it reaches the base frequency. The higher the value, the higher the start frequency. This gives you that nice laser zap effect. Initially I thought this would be fairly straightforward, just multiply a decreasing exponential with the base frequency or something like that, but I was very much mistaken. In the end I found out I needed (linear) chirp modulation. Clearly I need to brush up on my periodic functions... (edit: I later found out this can be done using Direct Digital Synthesis, which would have saved me a lot of headache if I'd known at this stage)
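A rough sketch of the linear chirp idea (not my actual implementation; names and values are made up): the trick is to integrate the instantaneous frequency into a running phase, rather than plugging a changing frequency straight into sin(2πft), which would distort the waveform.

```cpp
#include <cmath>
#include <vector>

// Linear chirp: frequency sweeps from start_freq down to base_freq over
// sweep_samples, then stays at base_freq. Accumulating phase sample by
// sample keeps the waveform continuous during the sweep.
std::vector<float> chirp_kick(float start_freq, float base_freq,
                              int sweep_samples, int total_samples,
                              int sample_rate) {
    const float two_pi = 6.28318530f;
    std::vector<float> out(total_samples);
    float phase = 0.0f;
    for (int n = 0; n < total_samples; ++n) {
        float t = static_cast<float>(n) / sweep_samples;
        float freq = (n < sweep_samples)
                         ? start_freq + (base_freq - start_freq) * t
                         : base_freq;
        phase += two_pi * freq / sample_rate; // integrate frequency
        out[n] = std::sin(phase);
    }
    return out;
}
```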
3. Overdrive
For overdrive I simply apply either hard clipping or soft clipping (I included both for now). Hard clipping means that we increase the amplitude of the wave, and where the output is larger than the maximum/minimum (in this case the 16-bit limits defined above) it is capped at this maximum/minimum. Since this can have quite a harsh sound, one can also soft clip. Then we smooth the transition from wave to capped output. The figure below shows it well:
I wasn't familiar with the implementation of soft clipping before, and I read quite a bit to find out the various methods available. I landed on probably the most common one though: soft clipping algo
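For illustration, here is one way both could look (a sketch, not my actual code; the cubic waveshaper is just one common soft clipping choice):

```cpp
#include <algorithm>
#include <cstdint>

// Hard clip: scale by a drive factor, then cap at the 16-bit limits.
int16_t hard_clip(float sample, float drive) {
    float v = sample * drive;
    v = std::min(std::max(v, -32768.0f), 32767.0f);
    return static_cast<int16_t>(v);
}

// Soft clip with a cubic waveshaper: the curve bends smoothly into the
// cap instead of hitting it abruptly. Input normalized to -1..1.
float soft_clip(float x) {
    if (x > 1.0f) return 2.0f / 3.0f;
    if (x < -1.0f) return -2.0f / 3.0f;
    return x - (x * x * x) / 3.0f;
}
```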
4. Harmonics
Gotta love some harmonics in your kick. Turning this up from 0 increases the level of the additional harmonics. I implemented this parameter quick and dirty for now, but it sounds good, so maybe I'll leave it. The principle of adding harmonics is simply adding some more frequencies to the sound than just the base frequency. It gives the kick some body and tonal content. I decided to add 3 frequencies. The first is fixed at 220Hz; the 2nd and 3rd are multiples of the base frequency, in this case 7.8x and 10.2x the base frequency respectively. These are chosen without any theoretical backing, just cause it sounds good. Also, the 2nd frequency is slightly more present than the 1st and 3rd. And for fun I added some randomness to each harmonic frequency, which means that each hit with the same harmonics parameter value will have an ever so slightly different sound.
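Sketched in code, the idea is just summing extra sines (the weights here are made up for illustration; my actual mix and the per-hit randomness are left out):

```cpp
#include <cmath>

// Base sine plus three harmonics: one fixed at 220 Hz, two at 7.8x and
// 10.2x the base frequency, with the 7.8x one slightly more present.
// `amount` (0..1) scales how loud the harmonics are.
float with_harmonics(float base_freq, float t, float amount) {
    const float two_pi = 6.28318530f;
    float y = std::sin(two_pi * base_freq * t);
    y += amount * 0.2f * std::sin(two_pi * 220.0f * t);
    y += amount * 0.3f * std::sin(two_pi * 7.8f * base_freq * t);
    y += amount * 0.2f * std::sin(two_pi * 10.2f * base_freq * t);
    return y;
}
```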
Hihats
For the hihat, I chose the route of least resistance, which is a straight-up whitenoise hihat. I took some inspiration from the track Clipper by Autechre, which has this very dry minimal beat with a clear whitenoise hihat (and snare). These sounds seem to have a different pitch; however, the thing with whitenoise is that it has no pitch, so we are actually talking filters here. By bandpassing part of the whitenoise spectrum we can trick the ears into hearing a pitched-up or pitched-down whitenoise. But that brought up the existential question of how to program a filter. This is where The Audio Programming Book (see link above) came in handy, as it has a good section on filters. I chose a 2nd order Butterworth bandpass filter, partially cause I could easily find the precalculated coefficients (path of least resistance). It sounded nice straight off the bat, so I stuck with it. Could think about adding resonance, but that is for later.
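For reference, a 2nd order bandpass usually boils down to a biquad like the one below. This is a sketch, not my actual code: the coefficient formulas follow the common "Audio EQ Cookbook" bandpass design, which may differ slightly from the precalculated Butterworth coefficients I used.

```cpp
#include <cmath>

// Direct Form I biquad. Feed it whitenoise and it lets through only a
// band around center_hz, which is what makes the hat sound "pitched".
struct Biquad {
    float b0, b1, b2, a1, a2;
    float x1 = 0, x2 = 0, y1 = 0, y2 = 0;

    void set_bandpass(float center_hz, float q, float sample_rate) {
        float w0 = 2.0f * 3.14159265f * center_hz / sample_rate;
        float alpha = std::sin(w0) / (2.0f * q);
        float a0 = 1.0f + alpha;
        b0 = alpha / a0;
        b1 = 0.0f;
        b2 = -alpha / a0;
        a1 = -2.0f * std::cos(w0) / a0;
        a2 = (1.0f - alpha) / a0;
    }

    float process(float x) {
        float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x; y2 = y1; y1 = y;
        return y;
    }
};
```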
Another cool thing in the Autechre song is that the envelope of the hat is not a classic exponential or linear decay, but more a constant volume and sudden stop. This gives the sound a glitchy feel, which is my jam. So I added the option to choose either a classic or full-stop decay for each hit individually.
Snare
So initially as a placeholder I combined a high pitched kick with some whitenoise transient for a snare drum. This actually sounded pretty nice, considering it basically was a combination of my 2 other sounds. But I want a more distinct 3rd sound, and think FM is the way to go. I haven't gotten round to programming it yet, but it's next on the list.
Envelopes
Each sound needs an amplitude (volume) envelope. Where synths often use ADSR envelopes, this is less relevant for percussion. Often only a decay envelope is needed, i.e. we start high and quickly decrease volume before mellowing out the decrease towards the tail of the sound. For the kick I have included an attack envelope, as it can give a slight pumping effect which can be nice if I'd want to use the kick to also make bass sounds. In addition, a tiny attack can help eliminate a clicking sound which can occur (especially noticeable for low-pitched instruments). For the hat we only use a decay envelope.
Initially I calculated the envelope for each timestep. However, I stumbled upon a thread on the signal processing Stack Exchange where using lookup tables was advised: stackexchange thread. I also saw this method on the Mutable Instruments GitHub, and then realized it was actually Emilie who had responded in this thread. That's enough proof for me, lookup tables it is! I added a python script that writes these tables for you. Then it's a simple copy and paste job to get them into the envelopes.cpp file.
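My table-writing script is in Python, but the same idea sketched in C++ looks like this (illustrative table size and decay shape, not my actual values):

```cpp
#include <array>
#include <cmath>
#include <cstdint>

// Precompute an exponential decay envelope once; the audio loop then
// reads the table instead of calling expf for every sample.
constexpr int kTableSize = 256;

std::array<uint16_t, kTableSize> make_decay_table() {
    std::array<uint16_t, kTableSize> table{};
    for (int i = 0; i < kTableSize; ++i) {
        float x = static_cast<float>(i) / (kTableSize - 1);
        table[i] = static_cast<uint16_t>(std::exp(-5.0f * x) * 65535.0f);
    }
    return table;
}

// Read the envelope at a fractional position 0..1 through the hit.
uint16_t env_at(const std::array<uint16_t, kTableSize>& table, float pos) {
    int idx = static_cast<int>(pos * (kTableSize - 1));
    return table[idx];
}
```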
This has been a recap of my work so far. It isn't as detailed as I'd like, seeing as I started this blog a little late. Going forward I hope to provide some more details. Hope it gives some idea of my way of approaching the project. As I said, there is little planning involved; we just go forwards and see where it takes us. Next up is programming the last sound algorithm, the FM synthesized snare.
So the next step was to add an FM hit to the code as my third and (for now) final sound. For background on FM synthesis I suggest you just google it, but the maths behind it is luckily very simple:
y(t) = sin(2π·f_c·t + i(t)·sin(2π·f_m·t))
We have a carrier and a modulator, and for each timestep t we just calculate this formula. In this formula we can then parameterize the base frequency (f_c), the modulation amplitude (i(t)) and the modulation frequency (f_m). Bob's your uncle.
But the issue is that we need to set some limits. In theory we can have massive ranges for all these parameters, which would yield a massive range of sounds, but this would make the sound hard to control. Also, we could add multiple modulators and/or modulate the modulators, and you quickly understand why FM synths are so hard to program. So what I decided to do is open Ableton, take the Operator instrument, which is an FM synth, and start playing around. I just need to make sure to use only sine waves for both the carrier and modulators, since I don't want to use anything more complicated in my program. Also, I chose an FM algorithm to suit my needs: quite a simple one, being 2 parallel modulators that modulate the carrier independently. This equates to just adding another (modulating frequency) term to the formula above. Then I just played around with the ratios and decays until I found a range that I think suits a snare-type sound (it doesn't sound like your traditional snare though), but can also create some wacky sounds when pushed to the limit.
Since I have 2 modulators, I have 2 ratios to set between modulating and base frequency. This seems overbearing; I'd rather have it simpler and control both ratios with only 1 input. So I chose the ratio of the 2nd modulator to be 3/7 of the 1st modulator. This provides a nice minor-chord-style harmonic whatever the setting of the input ratio. Also, to keep things simple, the amplitudes of the 2 modulators are linked and can't be set individually. Lastly, the decays of all 3 waves are linked. This provides a good balance between flexibility of sound and complexity.
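Put together, one sample of the two-modulator version could be sketched like this (parameter names and the shared amplitude are my illustration, not the actual project code; envelopes are left out):

```cpp
#include <cmath>

// Carrier phase-modulated by two parallel sine modulators, with the 2nd
// modulator's frequency fixed at 3/7 of the 1st, per the single-ratio
// design described above.
float fm_sample(int n, float base_freq, float ratio, float fm_amount,
                int sample_rate) {
    const float two_pi = 6.28318530f;
    float t = static_cast<float>(n) / sample_rate;
    float fm1 = base_freq * ratio;       // 1st modulator frequency
    float fm2 = fm1 * 3.0f / 7.0f;       // 2nd is 3/7 of the 1st
    float mod = fm_amount * (std::sin(two_pi * fm1 * t) +
                             std::sin(two_pi * fm2 * t));
    return std::sin(two_pi * base_freq * t + mod);
}
```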
Initially I had the value for the amplitude of the modulating frequency range between 0 and unity. This is the classic way as far as I know. But then I found that pushing this amplitude above unity (up to 100x) gave a nice overdriven sound. However, this cool feature would cost a lot of precision in my fm_amount range. Suddenly the range is not from 0 to 1 but from 0 to 100 (imagine setting this with a potentiometer). So I chose to have 2 settings instead: normal FM and bonkers FM, where bonkers sets the range from 0 to 100. Now you can easily switch for each hit and tell the program to deliver a normal or bonkers hit.
Lastly I found the sound was lacking some punch. The initial hit was just a tad weak. To spice this up I decided to add a splash of whitenoise. Basically a hihat sound, but with a very short decay (80ms). This is just to beef up the transient. I thought of just summing the original sound with this whitenoise transient, but thought that would be a bit messy. An easier way of adding this is just to simply add another modulator to the carrier, but instead of a periodic function as in the formula above, we just sample from a uniform distribution between -1 and 1. This worked nicely, only hassle was that the program needs to calculate 2 envelopes for the FM hit now; 1 for the main hit and 1 for the transient (since the lengths of these envelopes aren't the same). An alternative would be to just store the whitenoise sample as a vector. This would be very easy to create, and I made a python file that does just this. The issue is that it would take up a lot of memory, and I don't know if that is feasible as we have a limited amount in our chip. A memory vs speed issue, which we'll probably get back to later on.
Thought I'd write a post per pull request instead of just referencing a commit hash, since that makes it clear and nice to backtrack later on if needed. This PR is not too interesting though, some refactoring mainly and small updates. What I did was:
1. Removed the old snare drum remnant from the project
2. Added a velocity parameter to the hihat and snare (I forgot to add this there, but simply took the one from the bassdrum)
3. Cleaned up and consolidated some code
4. Used the lookup table for the envelope also for the bassdrum. Made this into a common envelope function which is shared across instruments
5. Removed floating points (incomplete)
The last one is a headscratcher. I noticed from the Mutable Instruments Peaks GitHub project that there was no use of floats whatsoever in the program (I did ctrl+f to check). I, on the other hand, have floating points scattered throughout. Some googling quickly led to the insight that it can be beneficial to avoid floats, especially when programming on embedded systems (something to do with processor intrinsics that I don't understand). So fair enough; I saw in the Mutable project they were using fixed points instead of floating points. I needed to watch some YouTube to understand how this works, since I hadn't encountered fixed-point arithmetic before. But the concept was pretty simple, so I thought it would be relatively straightforward to implement. It wasn't though; I got stuck on my periodic functions which are used to calculate my sine waves (which are used a lot). I couldn't quickly figure out how to scale these functions from the floating-point world to the fixed-point world.
That's what you get for just starting before thinking. But no matter and after doing a little work with fixed points on my bassdrum, I decided to park this issue for future Morten to solve. I'm sure it can be done, I could e.g. make use of lookup tables for my sinewaves. At the moment though it was harshing my vibe and I want to continue writing a draft for my sequencer so I can produce a beat that doesn't sound like 2 drunk people hitting pots and pans.
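For anyone else new to fixed point, the core of it is small (a minimal Q16.16 sketch of the concept, not the Peaks or my own code):

```cpp
#include <cstdint>

// Q16.16 fixed point: upper 16 bits are the integer part, lower 16 the
// fraction. Multiplying two values needs a 64-bit intermediate, then a
// shift back down to correct the doubled fraction bits.
using fix16 = int32_t;

constexpr fix16 to_fix(float f) { return static_cast<fix16>(f * 65536.0f); }
constexpr float to_float(fix16 x) { return x / 65536.0f; }

fix16 fix_mul(fix16 a, fix16 b) {
    return static_cast<fix16>((static_cast<int64_t>(a) * b) >> 16);
}
```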
This PR is quite a biggy, containing the foundation for my sequencer. It sort of evolved while I was working on it, starting out as being a sequencer completely dominated by randomness to a sort of rhythmic sequencer with dialled in randomness. I'm really happy with the result, although I'm sure it needs (significant) tweaking once I start prototyping it. But think the foundation is solid. Also I have now implemented 1 type of algorithm, I have ideas for more but these I will put on the backburner.
I started out coding the simplest sequencer I could think of; a sort of random hit sequencer. What this basically meant was that for each instrument we provide a probability that it is hit. Then for each 16th step we draw a sample and if this sample is lower than the provided probability we 'hit' the instrument. In addition we also randomly sample values for the sound parameters of the various drums. This sampling method is purely random and not musical whatsoever. However the results were actually really wicked and a good starting point.
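The per-step draw described above can be sketched like this (illustrative, not my actual code; rand() stands in for whatever RNG ends up on the chip):

```cpp
#include <array>
#include <cstdlib>

// For each 16th step, draw a uniform number per instrument and trigger a
// hit when it falls below that instrument's probability (0..100).
std::array<bool, 3> step_hits(const std::array<int, 3>& probabilities) {
    std::array<bool, 3> hits{};
    for (int i = 0; i < 3; ++i) {
        hits[i] = (std::rand() % 100) < probabilities[i];
    }
    return hits;
}
```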
Next step was making it more musical. For this I drew inspiration from my very limited experience of playing the drums, or more specifically from a youtube lesson on improvising on drums. Since this is what we essentially are planning to achieve I guess. The instructor (Jonathan Curtis) said he'd choose a rhythmic pattern and use this as the backbone for the improvisation. He used the clave as example, which is a classic and looks like this:
But there are many others, like a tresillo and a 4 to the floor. So I chose to use these patterns as a foundation of my drum machine improvisation algorithm. So I assigned the 1st potentiometer (virtual for now) to determine which rhythmic pattern to use. A pattern is saved in the program as a simple 1x16 array (16 steps), with 1s for each 'hit' in the rhythm and 0s otherwise. So unfortunately no triplets for now. Next up is how to determine what happens when my sequencer reaches a 'hit'.
We have 3 drum sounds, and we have to determine which one is hit. I want this choice to be deterministic from the outset, with randomization that can then be dialled in. I played with various ways of doing this in my head, but landed on a solution inspired by the Bastl Kastle drum machine. This drum machine has a pattern generator which is a simple 8-step pattern where each step outputs a value in a predefined range. I took this idea of using a pattern to decide my drum sound, but made it a bit more complicated (at least for myself). I created 50 random 16-step patterns, where each step in the pattern is a random value between 0 and 100. I used python to generate these and then just copied the output into my code. Then I added 2 potentiometers, which both scroll through these 50 patterns. So for illustration, imagine that pot 1 is at 1/5 and pot 2 at 3/5 and we are at step 3 of our 16-step sequence. This means pot 1 selects pattern number 10 and pot 2 selects pattern number 30. Now hang in there: for the drum sound to be chosen, we take the value at step 3 of pattern 10 and map this to a value between 0 and 2. And the same for step 3 of pattern 30. We then take the absolute difference between these values to end up with our final value, which is still in the range 0-2. Now for the last step, we need to map this value to an instrument. In my case a 0 corresponds to the FM hit, a 1 to the bassdrum and a 2 to a hihat. This mapping is not chosen at random. Since we subtract 2 random values and then take the absolute value, we are no longer dealing with a uniformly distributed variable but a triangularly distributed one. Therefore the chance of ending up with 0, 1 or 2 is not equal. At first this seemed like a nuisance and broke my plan, but actually I can use it to my advantage. The most probable result in my case is 1. Since this is the drum that will be hit on the beat of the rhythmic pattern, it makes sense to have a drum sound that emphasizes this rhythm.
A bass drum is the instrument for this in my opinion. Next up in probability is 0, and following the same logic this would be the snare. The least suited instrument to emphasize the rhythm is the hihat, and therefore this gets the value with the least probability. Hereby a picture that hopefully clarifies the above a bit better (note that I show 4 steps in the pattern for illustrative purposes; in actual fact there are 16 steps):
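The selection step itself boils down to a few lines (a sketch with made-up mapping details, not my actual code):

```cpp
// Pick the instrument for one step: map both pattern values (0..100)
// down to 0..2, then take the absolute difference. The difference of two
// uniform values is triangular, so 1 (bass drum) comes up most often,
// 0 (FM hit) next, and 2 (hihat) least.
enum Instrument { kFmHit = 0, kBassDrum = 1, kHihat = 2 };

Instrument choose_instrument(int pattern_a_value, int pattern_b_value) {
    int a = pattern_a_value * 3 / 101;  // map 0..100 -> 0..2
    int b = pattern_b_value * 3 / 101;
    int diff = a > b ? a - b : b - a;
    return static_cast<Instrument>(diff);
}
```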
No future code blogs for the time being, as this was consuming a lot of time. Will return at a later stage when the code is finalised.
Before I finished my code, I decided to already order the parts I need to start prototyping. I know that I have to embark on a learning curve regarding the STM32 programming, and cannot directly start flashing it with my drum machine code. First I think I just need to make a LED blink, that's probably hard enough.
OOPS: After I wrote the below, I ordered the parts and after receiving I noticed I made some mistakes:
1. Always check the footprint of your parts, I ordered a DAC which had a footprint so small that it would have been impossible for me to solder
2. Read the internet before you order. I had planned to make the entire DAC output circuit on my breadboard. I found out I couldn't due to problem #1, and then found out there is a different STM32 prototype board which has a built-in DAC, even with an audio jack (facepalms himself massively). So I ordered this one instead. It even uses the exact microchip I had in mind.
I decided to order a 'real' black pill dev board, not a cheap knock off. Also I need something to flash it with, and based on the internet I ordered a ST-LINK V2 debugger. Since I'm already ordering stuff, I thought I'd also get some other parts I will need in the future, such as the DAC8552, some op-amps, capacitors and resistors. I am basing these parts on the Mutable Instruments (MI) Peaks schematic and BOM (found here), which I am using as reference. So my choice of DAC and other parts is solely copy and paste from here.
One thing I did do beforehand is take a good look at the circuit that sits between the DAC and the output jack. I see that Emilie uses the -10V reference in the op-amp feedback path, but since we are not operating on eurorack power, we don't have this rail available. Also, the output for the MI Peaks is -8V to +8V (approx), which is eurorack level. We want line level, which is about -1.4 to 1.4V. We have no negative rail whatsoever actually, so I chose to use the 2.5V supply we have instead (see the MI Peaks schematic) and make it negative using an op-amp. To determine the details of this circuit, I used LTSpice to model it. Using LTSpice was really helpful for a beginner like me to see how my circuit affects the in-/output. It took a lot of trial and error, but I think I got there in the end, see figure 1 below. Note that in this figure I omit the first section of the circuit in Emilie's schematic, which is a low pass filter for high frequency noise. My circuit takes a voltage that oscillates between 0 and 2.5V (the DAC output) and translates this into a voltage that oscillates between -1.4 and 1.4V, which is line level output.
Now this is me making a circuit based on my knowledge, which is very very limited. I am ignoring impedance and probably many other electronics rules here. DIY at work, let's see what happens when I get to prototyping it :).
So after my first mistake of ordering the black pill, I instead got the much more versatile STM32F407G-DISC1 board. At the time of writing I have spent 2 days trying to play a sine wave out of it, and just now managed, mostly thanks to YouTube though. A first step was to make an LED blink. This was pretty easy, but needs to be done, as it is the "Hello World" of embedded systems. I quickly found out I need quite a bit more than just VSCode to compile and run my code on the board, and spent some time reading up on the different IDEs available for STM32. Being a Mac user, I was left with the official STM IDE, as others were not available for Macs. I then followed this tutorial for getting an LED to blink: blinky led tutorial.
My dodgy table top development set-up
Next up was getting some audio out of the STM board. This proved quite a bit more tricky. Where I first thought it would be a case of sending my audio output to the DAC and we're in business, it turned out to be a definite level up from an Arduino. For one, we need to write our own driver for the DAC. Then we are going to use a protocol I had never heard of before, being I2S (and a smudge of I2C). And in the meantime we try to understand what is going on, with a reference manual of 1700 pages without any code examples, as well as a DAC datasheet I can't wrap my head around. So I quickly lowered my bar and set out to just get audio out of it and leave the understanding part 'til later.
Now as of today, I managed to get a sine wave out of the board, which was the initial goal. But instead of me reiterating youtube, I hereby list which videos I recommend watching to achieve the same:
1. Phil's lab - Digital Audio Processing with STM32
Belter of a tutorial, for me the best of the lot. Even though in his case he is receiving as well as sending audio (in our case we are just sending), the principle is exactly the same as what I ended up with. A great blueprint for the Drumbadum code I have already. The DAC he uses however is different, so the driver he has written does not apply to my board.
2. I2S Audio Codec - CS43L22
So how does our DAC work? That is where this video is great. I literally copy-pasted his audio driver into my code. I just made one small adjustment, as there were duplicate definitions of a variable (MODE_ANALOG). If you copy-paste his driver and get an error message linked to this variable, check my code which is linked below. Also, I took some inspiration on how to create the sine wave from this video.
3. STM32 Audio Recorder 1: STM32 I2S DMA
Another tutorial on using the I2S protocol on the discovery board. Not much new info as opposed to the first 2 links, but it gave me inspiration to also draw a graph of my output in debug mode when I got stuck. He has another nice video on how to use printf statements when using the STM32 in debug mode. Useful stuff.
These tutorials combined gave me enough knowledge to output my sine. For my final code, check this github commit: 63bcf79.
The result after much optimization
After managing to play a sine wave, next up was playing my drumbadum code that I had written prior to prototyping (see the section on the Code). It was running on my computer and already had placeholders for analog inputs (potentiometers), and the method of writing audio could easily translate to the setup I had with the sine wave. With that I mean that I calculate a value for each sample at the given sample rate, just like I did with the sine wave. So the first thing to do was basically copy over the code, compile and cross fingers. I think there was about a 1% chance it would work straight off the bat, and indeed it didn't. But I was awfully close in hindsight. My code was C++, while the code generated by CubeMX (which is basically the standard code generated to enable the correct peripherals etc.) was C, even though when I started my project I explicitly said I wanted C++. Anyway, a quick google turned up that I can just rename my main.c to main.cpp and it will compile as C++. Easy peasy. Except for 1 snag: every time I do an update in CubeMX, it recreates the main.c file without my user code (which is in main.cpp). This is a known issue and mainly just an annoyance, as I have to copy my code from main.cpp to the new main.c and rename the file. Luckily I can directly copy the code in the instrument header files (e.g. bass_drum.h) straight from my original project.
So after flashing, my code was running, but with a lot of artifacts, distortion and overall ugliness in sound. It was clear that the processor could not keep up and my code needed to be leaner. So I started isolating the instruments to see where the trouble lay. First I just played the hihat, as it was the simplest algorithm. In isolation the processor had no issues playing this sound. Next up was the bassdrum, which was a struggle. As I wrote in the code blogs, I know I should be avoiding floating points, but since this STM32F407 has an FPU, I thought I'd test it out and not remove all my floats beforehand. For the bassdrum it was the added harmonics section which caused most issues, probably because it uses sines. I spent about 4 hours trying to get it to work with floats, trying various different things and triple checking that the FPU was actually being used, but to no avail. Then I rewrote the code (see this commit) to use lookup tables. This fixed the bassdrum and it was playing without issues. I also tried it together with the hihat and that worked as well.
Next up the FM drum. This was the most frustrating and at the same time most rewarding part of the project so far. In my FM algorithm I was using floats and trigonometry functions. It consists of a carrier wave with a base frequency, whose phase is modulated by 2 other sine waves (see the code blogs for more explanation). How the hell do I do this with lookup tables without using floats? Took me 2 days to figure it out, of which most time was spent trying to brute force my original method. In theory it should have worked, but one of the issues was precision. My modulators were running at frequencies up to 8 kHz, with a relatively small amplitude. As I couldn't use floats, I was using fixed point, but I just could not get the precision right. Frustrating to say the least. Then I started thinking (way toooooo late): how did the peeps at Yamaha do this on their FM synths back in the 80s/90s? Those were surely running on weaker processors than my STM32. I googled and found out about Direct Digital Synthesis (DDS). I will post some articles below, but the gist is that instead of computing the sine directly, you keep a phase accumulator: with each new sample you increase the phase by a fixed increment and use that accumulated phase as the index into your lookup table. I understand this is a terrible explanation, so please read the articles. Anyway it was one of those "of course, that's how they do it" moments. Because with this method the frequency is just the size of the phase increment, we can change it after each sample and thus easily vary our sine wave frequency 'on the fly' (again apologies for the terrible explanation). In addition this solves my headscratcher of the envelope parameter for the bass drum, which can be done with DDS as well.
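For what it's worth, here is the core of the DDS trick as I understand it (a minimal sketch with my own names, assuming a 48 kHz sample rate; not the actual firmware):

```cpp
#include <cstdint>

// Minimal DDS sketch: a 32-bit phase accumulator advances by a
// tuning word each sample; the top 8 bits index a 256-entry sine
// table. All integer math, no floats in the audio loop.
constexpr uint32_t SAMPLE_RATE = 48000;

struct DDSOscillator {
    uint32_t phase = 0;      // 32-bit phase accumulator
    uint32_t increment = 0;  // added every sample

    // The frequency can be changed at any time step, e.g. for the
    // bass drum pitch envelope or for FM modulation.
    void set_frequency(uint32_t freq_hz) {
        // increment = freq * 2^32 / sample_rate
        increment = static_cast<uint32_t>(
            (static_cast<uint64_t>(freq_hz) << 32) / SAMPLE_RATE);
    }

    // Returns the sine-table index for the current sample, then advances.
    uint8_t next_index() {
        uint8_t index = phase >> 24;  // top 8 bits -> 0..255
        phase += increment;           // wraps around naturally at 2^32
        return index;
    }
};
```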
So I implemented this and ran the code and ... it did not solve the issue; still hearing artifacts. However, when playing the FM drum in isolation it did work, so clearly an improvement. For the sake of it I started disabling some features, the first one being the harmonics parameter of the bass drum, as that is my least favourite parameter. And lo and behold, it worked, great success. I will deal with the harmonics parameter later if I feel like it. Not too bothered about not using it tbh.
Next I touched up the code a little. I implemented the bass drum envelope using DDS. I also added code to read the potentiometers. For this I based myself on the last 6 minutes of Phil's Lab - Digital Audio Processing with STM32.
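The pot reading itself boils down to scaling a raw ADC value onto whatever parameter range you need. A minimal sketch (my own illustration, assuming the STM32's 12-bit ADC, so raw values of 0 to 4095):

```cpp
#include <cstdint>

// Illustrative sketch: map a raw 12-bit ADC reading from a pot
// (0..4095) linearly onto a parameter range. The actual scaling in
// the firmware may differ; this is just the general shape.
inline uint32_t map_pot(uint16_t adc_raw, uint32_t out_min, uint32_t out_max) {
    return out_min + (static_cast<uint32_t>(adc_raw) * (out_max - out_min)) / 4095;
}
```

For example, map_pot(adc_raw, 40, 200) would turn a pot into a 40 to 200 BPM tempo control.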
The only things I didn't prototype now are:
- Clock in signal
- Midi in signal
- Push button
Regarding the clock in signal, I can't be arsed to prototype it as I have processed such signals before using an Arduino and will basically copy the schematic used by Mutable Instruments exactly. It could be a mistake not to test it out, but I chose to move ahead without prototyping anyway. Same goes for the push button; this is quite trivial if you have used Arduinos before. However, a quick shout out to this info on debouncing buttons: embed with elliot. I will use a simple RC filter as described on that website, nothing fancy.
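For completeness, the software flavour of debouncing that the article covers can be sketched like this (my own simplified version; I went with the hardware RC filter instead):

```cpp
#include <cstdint>

// Simplified software debounce sketch: shift in one raw sample of
// the pin per tick and only report the button as pressed once eight
// consecutive reads agree.
struct Debouncer {
    uint8_t history = 0;

    // pin_state: raw GPIO read, true = pressed.
    // Returns true only once the reading has been stable for 8 ticks.
    bool update(bool pin_state) {
        history = (history << 1) | (pin_state ? 1 : 0);
        return history == 0xFF;  // eight stable '1' samples in a row
    }
};
```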
The MIDI is a different story; I have no clue how to tackle this yet, but since it is a bonus feature for the time being, I will not prototype it. I will however lay out the circuit so I can try to develop it later, using my PCB as the prototype. So basically a circuit attached to a pin on my STM32, for which we program the software at a later stage. For this circuit I again refer to a Mutable Instruments module. This time it is the Yarns module: Yarns schematic.
So in other words, moving forward with schematic and PCB design :D.
Before doing anything related to the schematic and subsequent PCB I watched some videos and followed some tutorials. Check out these:
Getting to blinky
First KiCAD and PCB tutorial. Aimed at total beginners.
To follow along with the blog, I suggest opening the v1.0 schematic, which you can download below.
This blog covers sheet 1 of the schematic you can download above. It concerns the STM32 chip itself and the power supply section.
To make my life really easy I will keep this section short and point you in the direction of the Phil's Lab tutorial referenced above. My schematic is basically Phil's schematic as outlined in that video, with some considerations which I will discuss.
Firstly, I am using a different chip; I went for the STM32F405RGT6. I landed on this one because it is basically the same as the chip found in my development board (slightly less functionality, but same speed and memory), and with 64 pins instead of 100. This barely affects the approach to the schematic though; I think the only extra pins are the VCAP pins. I used the datasheet and the internet to find out how to hook these up.
I added an LED to one of the GPIO pins. There is no functionality for it yet, but I want to have a button which can be used to e.g. change modes in the drum machine. For now I only have 1 working mode, but with regard to making it future proof I thought it wouldn't harm to add an LED. You can always do something fun with an LED, even if I turn out not to add more drum machine modes.
My SWD header includes the NRST pin. I read this could be useful but most likely is completely unnecessary. Apparently SWD can do a software reset or so (this could be a completely wrong interpretation though), so it doesn't need the reset pin. But better safe than sorry. I did however decide not to be better safe than sorry in the case of the boot pin, but am fairly confident I don't need to touch this. Again, as I will program the chip using SWD, I won't need any other boot modes and can just leave the pin pulled low all the time.
Choosing the capacitors for use with the crystal oscillator was a headache. I had chosen a crystal oscillator with a load capacitance of 20 pF (more on this choice later). Using the formula from the video I would need 2*(20-5) = 30 pF capacitors (another resource on this formula: oscillator design). However the datasheet recommends capacitors in the 5-25 pF range. Hmmmmmmmmm, confusing. So how did I land on 20 pF? Well for one, the parts supplier did not have 25 pF capacitors, nor 24 or 27 pF. Secondly, my great leader Emilie Gillet didn't adhere to this formula either. I was getting severely mixed signals from the internet as well and in the end just gave up on finding the correct answer and went for 20 pF. This seemed to work for Mutable, so I'm sure it will work for me.
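For reference, the formula worked through with my numbers (the 5 pF stray capacitance is the usual assumption for PCB traces and pins, not something I measured):

```cpp
// Load cap formula: with two equal caps C1 = C2, the crystal sees
// them in series plus the stray capacitance:
//   CL = (C1*C2)/(C1+C2) + Cstray = C1/2 + Cstray
// so each cap should be C1 = 2*(CL - Cstray).
constexpr double C_LOAD_PF  = 20.0;  // crystal's specified load capacitance
constexpr double C_STRAY_PF = 5.0;   // assumed stray capacitance of board + pins
constexpr double C_EACH_PF  = 2.0 * (C_LOAD_PF - C_STRAY_PF);  // 30 pF
```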
Lastly, the power will come from USB. I ditched the battery power idea, that was silly; I think it would lead to a lot of battery replacements, and who doesn't have a USB cable knocking about. I added an on/off switch and a chassis ground (more on that later).
My schematic for my chosen DAC: PCM5102a
The hardest part was choosing which DAC to use. My dev board which I used for prototyping had a DAC by Cirrus Logic (the CS43L22). This is actually something called an audio codec, apparently, mainly because it is fancier than a DAC (in my simple understanding). It offers e.g. a headphone out, an ADC and 2x DACs. But this fancy stuff has the downside of being more difficult to program. One needs to write a driver to set up and use this codec. During prototyping I was lucky to find a nice reference which I could copy paste, but that's not really the way to go for the final 'product'. I was dreading that I'd manufacture a PCB after all this work only to find out I couldn't create a driver, or would fuck up the wiring of it or something. Also, I did not need anything fancy.
Checking Mutable Instruments again, I see both audio codecs and simple DACs are used in the various modules. In this case a 'simple DAC' does nothing more than take a digital signal and output an analog signal, no I2C/I2S protocol or any fancy stuff. However, one thing I did like in the prototype setup was that I could use I2S to send data back and forth to the codec. This allowed me to use some nice features of the STM32 chip, like direct memory access (DMA). After some googling I (think I) found the chip that suited my needs, the PCM5102a. It has I2S for communication with the MCU and outputs 2 channels at line out level, but it has no other frills: no headphone out and no driver needed. The only configuration possible is muting and some filter selection, and this is done by simply pulling some pins high or low.
If you check the schematic, it is basically the example schematic shown in section 10.1 of the datasheet. The main pointers to take away from my implementation of the schematic are:
- Mute pin (XSMT) is pulled high, so it doesn't mute.
- Filter pin (FLT) is pulled low, so the default filter is used.
- Audio format pin (FMT) is pulled low, so I2S is used for data transfer.
- SCK is pulled low. Apparently there is a built-in clock or PLL, so no need to supply an external clock. I had a lot of doubt on whether or not to map this pin to an output on the STM32 just to be sure, but I think it should be fine.
The last sheet of the schematic covers the user interface part. Not very much exciting here. We have 14 B10k potentiometers with very simple wiring. We have 2 momentary switches; the wiring for these is based on the previously mentioned webpage: embed with elliot. Lastly a CLK and a MIDI in. These schematics are straight up ripped from Mutable. I liked the CLK setup, which uses a transistor. This way we get a constant 3.3V pulse to our pin, regardless of the voltage of the actual CLK signal. Probably standard practice for modular setups, but in previous projects on Arduino I just took the CLK signal directly into my GPIO pin, opening it up to the risk of receiving too high a voltage.
With the schematic in the bag, next is the PCB. Again, follow the holy grail tutorial by Phil linked above closely; that's what I did at least.
The first step in this process is assigning footprints to each part in the schematic. In my case I decided to go for surface mount parts. My thought was: since I want the PCB factory to solder my STM32 chip, they can also solder the other parts. I decided to use JLCPCB as my manufacturer, as it seemed the cheapest. They have their own parts library, so I used this to find my parts. One important thing to look out for is what JLCPCB calls basic parts and extended parts. Extended parts cost more, since they have to manually hook up the reel or something like that. So I went out of my way to pick as many basic parts as possible. I think the only 'extended' parts are the STM32 and the DAC. Regarding SMT part sizing, I went for mostly 0402 and 0603 and avoided any polarized parts (often you see that the large decoupling caps are polarized). It reads as if I know what I'm doing, but to be honest I didn't really. I just made sure the parts were rated for enough voltage (>25V) and googled the shit out of stuff. The full list I ended up with can be found in the BOM.
Another decision that needed making before I could start laying out the PCB is how many layers to have. Now this took some severe googling and youtubing, because there were a lot of conflicting opinions on the matter. I'll give a short recap of my process, but I'll link some good videos below for the background info. They explain everything way better than I ever will.
Basically, for a project of this scale one can go for a 2-layer board or a 4-layer board (the latter having 2 internal layers). It is generally better to have a 4-layer board, since you can have 2 GND planes on the 2 internal layers, which provide a steady return path for the signal traces on the top and bottom. I had basically decided to go for 4 layers, just to be safe rather than sorry. However, as soon as I plopped my PCB dimensions into JLCPCB, I saw that the price difference between 2 and 4 layers was very significant. I wanted to surface mount my potentiometers, which means the PCB needs to be as large as the front plate, which in my sketches was about 160mm x 100mm. So I didn't really have the option of reducing the size of my PCB.
I saw that as soon as your 4-layer PCB becomes larger than 100mm x 100mm you get charged an engineering fee of $30. You don't get this fee for a 2-layer board however. Also, because my board was so big, I already knew that I wouldn't have an issue with spacing and could be a bit more free in laying the parts out as efficiently as possible relative to each other. So I went for 2 layers, and I'm happy that the decision was made for me in the end.
With the board size, number of layers and footprints set, I can start laying out the parts. But first some links to videos that relate to PCB stackup issue.
Arguably the most fun I had with the project was laying out the PCB in KiCAD. Like a next level jigsaw puzzle. Also the most daunting task to begin with, especially as I had never done this before and started out with 0 knowledge on PCB design. Spoiler: my PCB worked, and if I can do it, anyone can, so don't hold back.
The PCB layout in KiCAD
I do suggest starting out with something easy first, just to get the hang of KiCAD and some confidence. I first made a super simple fuzz guitar effect from a simple schematic I took from the interwebs (without any microcontroller or surface mounted parts). Then I ordered it and got to hold it in my hand before I moved on. Then I did a relatively simple circuit, but with an ATMEGA328P chip (aka Arduino), which was surface mounted and soldered by the PCB manufacturer (JLCPCB). After that I was confident enough to move on to this project.
From watching tutorials (mainly, again, Phil's tutorial) and my discussion with Pladask, I tackled my layout with the following main pointers in mind:
1. Start with most important parts first (Microcontroller)
2. Make sure the decoupling capacitors (capacitors that absorb voltage spikes) are placed as close to the microchip as possible.
3. Physically separate power, digital and analog circuits as much as possible.
I understood that this is important to avoid interference between sections, which is especially relevant for the analog section. But in my case I was (and still am) a bit unsure what is what. Like potentiometers, are they considered an analog circuit? And my optocoupler for the MIDI? And the DAC? A digital-to-analog converter, that means it's both analog and digital, right? Hella confusing. In the end I decided to consider the DAC an analog circuit. Also, because my potentiometers are spread around the entire board I can't really separate these anyway, so I didn't think any more about those. Lastly I checked several of Mutable's PCBs and could not identify any considerable physical separation between digital and analog circuits. So in the end I just spaced everything out as much as I could and hoped for the best.
4. Have as few and as small cuts in the ground plane as possible.
5. Space out traces where possible
When you start out the design you are faced with a bunch of parts that are not laid out in any way that makes sense, but with a so-called 'rat's nest', which shows you which pins need to connect to which pins. From here you just start dragging parts around the board, drawing traces between them, and slowly but surely you end up with a layout that works for you. This could take a large number of iterations though, as you keep tweaking it all the time. In my case the layout was governed by my user interface. I had thought beforehand about where I wanted my knobs and buttons to be positioned, which meant I could place these first and have a general idea of where I had room to place the STM32, DAC, power supply etc.
A good tip (and something I struggle with) is to just stop working on it and continue the day after. I would get stuck endlessly tweaking traces with no particular reasoning behind it. Then with a clear mind the next day I often knew that either it was fine, or it needed 1 final tweak.
Final PCB with surface mounted components
I want to conclude with some design decisions I made, and why I made these, as well as some general considerations:
Trace sizing
For signal traces I started out with 0.3mm. This was based on very little info, and because my board was large I thought space was not an issue anyway. In general that was true, but I did run into some tight spots, especially underneath the PCM5102a. I then googled and read that 0.2mm is more than enough, especially for the low voltages I am working with. For power and ground I went for 0.5mm though. I read you generally want these wide because that has a positive effect on impedance (I won't pretend I know what that means).
Trace spacing
Referring to the tight spot underneath the PCM5102a, I was quite nervous that having that many traces close together and underneath a microcontroller would lead to interference issues. I googled a lot to find an answer, but did not find a satisfying one. It worked out in the end though, so all good.
Traces underneath the STM
For some reason I wasn't really comfortable with the layout having the power rail underneath the microcontroller, but I couldn't find a way around it. Luckily this also worked out.
Vias
My via size was an outer diameter of 0.7mm and a hole diameter of 0.3mm. I think this was recommended by Phil in his video. As you can see, every ground connection is made using a via, a method I saw in one of the tutorial videos, but I think this is standard practice if you have no copper pour on the signal layer.
Copper pour on the signal layer
Copper pour is basically when we fill all the empty space between traces and pads with copper, in addition to the already copper-filled ground layer on the other side of the PCB. You then connect these filled areas to the ground layer using vias, thus providing ample ground on both sides of the PCB. After a lot of doubt I decided not to pour the signal layer though. Again I got mixed signals from the internet on what was best. My main reason was that adding this copper pour meant I had to add so-called stitching vias, and because my board was so big I felt I had to have a lot of these. But how many is enough? I tried it, but the process made me feel like I had no idea what I was doing. I felt I could better understand my design without the pour and thus deleted it. I still don't know what is better, but the board works, so I am happy.
Power rail
When I started laying out the PCB I thought I'd end up with a so-called power plane which, similar to a ground plane, covers most of the board and makes distributing power easy. This assumption was of course based on one super simple tutorial video which in no way, shape or form had anything to do with my project. And when I got round to my board stackup (see earlier blog), it was clear there was not gonna be a power plane. The alternative is a power rail, which I think is a confusing name; it is just a long trace that connects up all the different components that need power. In my mind this method seems very iffy, as some parts will be located far away from the power source (as in, connected by a long trace), and there are many parts connected to this aorta of power. But again, the internet said this was a good method (so did Mutable), and lo and behold, it worked :).
Final notes
Finally, a couple of small pointers. Firstly, I made a habit of checking my return paths (video). What this meant was just drawing an imaginary straight line on the ground plane between the end of a signal trace and its source and seeing if there were any major obstructions (like cut traces). I'm not sure if this is a correct thing to do, but it gave me some confidence in my design.
Lastly and most importantly: double, triple, quadruple, quintuple check everything when you're done. Are my footprints correct, is my power rail correct, do I need mounting holes, what about the orientation of my potentiometers, etc, etc, etc. I tend to be too eager to press the order now button (good business for Tayda Electronics), but in this case, since there are so many things that can go wrong, I decided to be super thorough.
I considered a couple of options for the casing: a metal stompbox style case, a prefab plastic enclosure, a 3D printed enclosure or a PCB front plate. I landed on the latter in combination with a 3D printed enclosure. The reason for choosing a PCB front plate is that, for starters, it is dirt cheap. In addition, one is free to design the front plate as they wish in a CAD-type program (which makes the engineer in me very happy). There are of course some limitations, e.g. you are stuck with only a couple of colours, and the silkscreen text is only white. But then again you can play around with exposing the copper layer, which gives a cool effect. Also, no drilling involved; you can design the holes with 0.1mm accuracy. I used KiCAD for my front plate design again, which was pretty easy after designing the actual PCB. I recommend checking this video for tips and inspiration: How to Design a Synth Interface in KiCad.
Front plate PCB
One thing I suggest is to have copper pour on both sides; this strengthens the board and apparently also keeps it from warping. In addition you can use this layer for shielding. In my understanding, shielding is the art of protecting your circuit from outside electromagnetic interference by creating a sort of Faraday cage. You do this by connecting your metal enclosure to your circuit ground. In addition you can do this with the potentiometers that protrude through the front plate for some superduper shielding. I was in doubt whether this was strictly necessary however; the internet was a bit vague on the subject. So I went for a hybrid solution: I made a small opening in the solder mask to expose some of the copper on the back (inside) of the front plate. In addition I added a test point to the ground layer in my schematic and PCB. This means I can solder a wire between circuit ground and my front plate in case I have any issues. So far I haven't, so no wire there yet. Also the rest of my enclosure is plastic, so no shielding there anyway.
Solder mask opening for potential shielding wire
I won't go into detail on the 3D printed case, that is for someone else to blog about. But I drew it in Fusion360 (free version) and printed it in my local library. I added the .stl file in the links below.
Finally, the last of the blogs (for now). I completely underestimated how much work writing is. But I ordered my PCB and front plate from JLCPCB and got them a couple of weeks ago. I then soldered the few through hole parts I needed to (3.5mm jacks, on/off switch, LEDs, buttons, potentiometers and the serial wire debug (SWD) header), connected it to my PC using an STLink V2 and flashed my code for the first time. And crazily enough it fucking worked. An amazing feeling, as I was not expecting that at all. I had mentally prepared for hours/days/weeks of debugging and then finding a mistake in my PCB, needing to order a new one and all that jazz.
Flashing prototype using STLink
Anyway, one by one I started touching the stuff I had skipped previously, or couldn't do with the discovery board. So for instance making the potentiometers functional. I had oriented them the wrong way round on my PCB (GND where I should have put 3.3V) so my readout was inverted, which, although easy to fix in the code, could easily have been avoided in the PCB layout. I also added some cool master FX which can be dialled in with one of the pots.
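The fix for the inverted readout is a one-liner (a sketch with my own function name, assuming the 12-bit ADC range):

```cpp
#include <cstdint>

// With GND and 3.3V swapped on the pot, the 12-bit ADC reading just
// runs the wrong way, so mirror it in software.
inline uint16_t read_pot_corrected(uint16_t adc_raw) {
    return 4095 - adc_raw;  // fully clockwise reads 4095 again
}
```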
Then for the more challenging stuff. First: the clock in. I had to play around with timers to calculate the BPM on the fly based on the time between 2 clock pulses, which was a fun challenge. Also fun was programming the MIDI sync. Again, amazingly enough, the circuit worked immediately (well, not that amazing as I copied it from Mutable). I had to learn about hexadecimals and the MIDI protocol, which was cool. I recommend checking out this resource if you're interested: SparkFun MIDI tutorial. Along the way I found some other bugs and small fixes to iron out, but as of today I have v1.0 of the firmware finished. Now the plan is to make some more prototypes, as I still have 4 PCBs and front plates waiting to be soldered. Then I'll hand them out to some friends for some proper testing and user feedback.
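Roughly, the two sync paths look like this (a simplified sketch with my own names, assuming a 1 kHz timer and one clock pulse per quarter note; the real-time byte values are from the MIDI spec):

```cpp
#include <cstdint>

// Clock in: measure the timer ticks between two pulses and convert
// to BPM. With a 1 kHz timer, ticks are milliseconds per beat.
constexpr uint32_t TIMER_HZ = 1000;

uint32_t bpm_from_pulse_interval(uint32_t ticks_between_pulses) {
    // BPM = 60 seconds / (seconds per beat)
    return (60 * TIMER_HZ) / ticks_between_pulses;
}

// MIDI sync instead works from the real-time status bytes defined
// by the MIDI standard: 0xF8 is timing clock (sent 24 times per
// quarter note), 0xFA is start, 0xFC is stop.
enum MidiRealtime : uint8_t {
    MIDI_CLOCK = 0xF8,
    MIDI_START = 0xFA,
    MIDI_STOP  = 0xFC,
};
```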
So at the end of a steep learning curve I sit with my very own hardware drum machine. I set out to program, design and fabricate a PCB with little experience and can now safely say it's doable :) And fun as well. The cost of the various parts was around:
- PCB, parts & assembly: $12 (ridiculously cheap)
- Frontpanel: $2.40
- Other parts: $14 (ordered from Tayda)
So in total just shy of $30 (excluding VAT). Not bad I'd say.
Here are the files that are the result of this project. Surely it can be improved with regard to EMI, and I would like to add some more buttons as well as 6.5mm audio jacks. But for now here is v1.0.