-
The End Result
09/02/2018 at 21:15 • 0 comments (Original post date: 09/04/16)
So this is my 17th and final log for my music tech design challenge project, and for it I have created three new videos talking about and demoing the end result of the project:
- A specification overview video, outlining all the features of the synth as well as briefly discussing the overall development of it and how it works
- A complete walkthrough video, demoing in detail each control and parameter of the synthesiser
- A video on how the device can be used as a MIDI controller
I've also included the demo sound video that I posted a couple of weeks back, as that video is a great example of the range of sounds that the synth can produce, as well as some new higher-quality photos of the synth.
Specification Overview Video
This video briefly goes over the specifications and features of the synth from both a user and development point-of-view.
Complete Walkthrough Video
This video is an extension of the last one, talking through and demoing each control and parameter of the vintage toy synth in detail. I apologise that the video quality isn't great and that you can't quite make out all the text on the panel; hopefully I explain what I'm doing in enough detail as I go along for you to make sense of it. Also, it was produced to demo how each control/parameter affects the sound, rather than how to create a great-sounding patch, so please don't judge the sound capabilities of the synth based on the fairly average patches that I produce in this video.
Sound/Patch Demo Video
I first posted this video a couple of weeks back, however it is a perfect example of the types and range of sounds that you can create with the final state of the vintage toy synth, so I thought it would make sense to include it here as well. This video also demonstrates the use of the VTS Editor - a desktop/laptop application I developed for the synth for adding patch saving and loading capabilities to the instrument.
MIDI Controller Video
While the previous videos demonstrate the instrument's primary function as a standalone synthesiser, this video shows how the device can be used as a MIDI controller. The video demos the synth's MIDI capabilities with Logic Pro and Ableton Live, however the device could theoretically be used to control any external MIDI software or hardware.
Photos
Below are some high-quality photos of the final state of the vintage toy synthesiser.
Final Development Material
All the final code, circuit diagrams and design files for the synthesiser can be found in the project's GitHub repository.
Conclusion
I've spent the majority of my free time over the past three and a half months working on this project, and I couldn't be happier with the result. My skills in both software and hardware development have dramatically improved thanks to this project, and even though it's been a lot of hard work and very stressful at times, it has overall been a very fun experience. I hope those of you who have been following the project have enjoyed what I've done, and please feel free to leave any questions below.
Thanks!
-
Fixing-Together and Refining the Enclosure
09/02/2018 at 21:00 • 0 comments (Original post date: 03/04/16)
In order to undertake this project I had to completely take apart the toy piano, unfortunately slightly damaging it in the process due to the way it was originally fixed together. As all the electronics for the project are now finished, I spent this week putting the piano back together as well as adding some small extra touches - some to make the synth easier to use, and some just to improve the aesthetics of the design.
The finished enclosure of the vintage toy synthesiser, propped open like a grand piano
Attaching the Panel
Instead of securing the synth panel to the top of the piano enclosure in the original fixed way, I decided to connect it so that the top of the synth could be opened like that of a real grand piano (see image above). I chose to do this for a few reasons:
- It preserves the charming miniature grand-piano form of the toy piano, a characteristic of the object that I didn't want to lose in the conversion.
- It gives it a great modular-synth-esque look, exposing all the colourful wires and flashy LEDs of the microcontrollers.
- It allows me to easily get into the synth for development and repairs
To do this I added 8 miniature hinges to the top-left underside of the panel, using screws to attach the hinges to the wooden side. Unfortunately I had to use superglue to attach the hinges to the acrylic panel, as the brittle acrylic couldn't safely take screws (I prefer screws, so that things can easily be taken apart again if needed).
The hinges attaching the panel to the enclosure
Back Labels
A couple of weeks ago I posted a log about the sockets and controls I've added to the back of the synth, and this week I added labels so that the user knows what each socket/control is for. I made these labels using gloss white filmic sticker sheets, with text in the same font and colour as the front panel on a black background, in the hope that they would look as similar as possible to the panel for continuity, as I couldn't apply the same laser-engraving method to this part of the synth. Unfortunately I don't think they look quite as professional as the text on the panel, and I'll probably recut and reposition them before the end of the project so that they look a bit neater. However, they're not highly visible and are mainly there so that I can remember which MIDI socket is MIDI-in and which is MIDI-out!
The finished (-ish) back panel
Keyboard Gap Covering
The original toy piano enclosure came with a strip of blue fabric (possibly velvet) draped above the keyboard to hide a fairly large gap into the pianos body. Unfortunately I made the mistake of removing and misplacing this fabric, however on the plus side it gave me the chance to experiment with different types and colours of material to use for the synth. After trying out various colours of both ribbon and felt, I settled on using a burgundy ribbon as a replacement. I chose a burgundy colour as I felt it matched with the red on the front of the keyboard keys but without being too garish, with the glossy/shiny aspect of ribbon going well with the rest of the glossy enclosure. Below are a couple of photos:
The gap above the keyboard
A strip of ribbon used to cover the gap
Painting
The main thing that got damaged when taking apart the existing toy piano enclosure was the paintwork, so I needed to touch up the paint where this had happened. I also had to paint some areas of the enclosure that were previously hidden but are now exposed when the front panel is opened. After a disastrous attempt with a gloss black paint that destroyed the first synth panel I had produced, it turned out that gloss black nail varnish was the perfect tool for painting the enclosure.
Painting the synth with nail varnish
Other
A couple of other things I did to fix-together and refine the enclosure:
- The base sections of the toy piano (including the keyboard) were reattached to the rest of the enclosure using self-tapping screws. The enclosure was originally secured together using nails, which is what made taking it apart so difficult and destructive; I'd like the option to remove the base/keyboard in the future in case I need to make improvements or repairs to the electronics that I can't otherwise get to.
- I attached a set of rubber feet to the base of the synth, so that the enclosure can sit stably on uneven surfaces.
- All stripboards and the BeagleBone Black have been secured to the enclosure with self-tapping screws.
The Result
Here are some images of the final enclosure of the vintage toy synthesiser. I'm probably going to take some better quality images for my final log next week.
The original toy piano enclosure
The finished vintage toy synthesiser enclosure
The synth propped open like a grand piano
Back view of the synth propped open
That's it for now. Next week I'll be posting my final log, after doing some final software tweaks, in which I hope to include a set of videos demoing the finished synth and showing everything that it can do.
-
Patch Manager Application and Sound Demos
09/02/2018 at 20:48 • 0 comments
At the start of this project I wasn't planning on having any kind of sound/patch storage or management within the vintage toy synthesiser, however as the project progressed I increasingly found the need to quickly save and recall patches for both testing and demoing the functionality of the sound synthesis engine. In the end I decided to implement an external desktop application to handle this.
Approach
Synthesiser patch management allows the user to save the sound parameter settings into a 'patch' so that a particular sound can be quickly recalled at a later time. It is a common feature on commercial synthesisers, however I originally decided not to include patch management on the vintage toy synth for the following reasons:
- Patch management works best on synths that have relative or stateless controls (e.g. rotary encoders, which just increment or decrement a stored value in the backend) combined with an LCD for displaying control/parameter values, as opposed to absolute controls (e.g. potentiometers, which set a specific value determined by their position). This is because, unless you have motorised controls, loading a patch doesn't change the physical state of the controls, meaning that pots could end up completely out of sync with the backend. I didn't want to add an LCD to the piano as it would take away from the vintage aesthetic of the object, as well as adding cost and implementation time to the project. Also, I like the fact that with pots a user can glance at the panel and instantly see all the parameter values.
- Another reason an LCD is so important for patch storage is so that the user knows what number patch they are saving or loading. A minimal patch storage interface could be implemented using a set of toggle switches that represent patch numbers using a base-2 numeral system, however this would have involved an extra set of controls on the panel that I initially didn't think I could add, in regards to both space on the panel and connections to the Arduino Pro Mini.
As the project progressed, though, I kept finding myself wanting to save the sounds I was able to create with the synth, which would make the device a lot easier to demo once finished. By this point the front panel was already constructed, so adding any extra controls to the synth was out of the question. After giving it a bit of thought I realised that I could simply implement patch management in a separate external application that runs on a desktop/laptop computer and communicates with the synth via MIDI, which would work with the existing synth hardware. I therefore set about developing a Mac OS X GUI application using the C++ framework JUCE, and you can see the code for this in the project's GitHub repo here.
Having an external patch manager application isn't my preferred solution as it means you'll always need a computer with a MIDI interface to save and load patches, however from an interaction design perspective it could be considered a better implementation over adding an LCD to the synth. I recently attended MiXD 2016, a music interaction design symposium hosted by Birmingham Conservatoire's Integra Lab research group, where keynote speaker Jason Mesut stated that it could be considered inferior to add costly and complex LCDs and displays to products such as digital musical instruments when most of us already carry smartphones/tablets/laptops with us at all times - devices that can easily be used to control other digital devices.
How it Works
The patch manager application, which is called 'VTS Editor', is very simple, and just relies on the correct MIDI messages being sent between the application and the synth in order for it to work correctly.
Saving a patch works as follows:
- A specific MIDI CC 127 value is sent from the application to the synth to request all patch data
- The synth sends back all the current patch data in the form of the parameters' MIDI CC messages (the same as the ones that come from the synth's panel)
- Once the synth has sent all patch data it sends a 'finished' MIDI CC value so that the patch manager application knows it has got a complete patch
- The patch data is encoded into lines of text and saved into its own text file
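On the application side, the collect-until-finished part of the saving flow above can be sketched roughly as follows. Note that the specific CC numbers and the 'finished' marker value here are placeholders, not the actual values used by the synth:

```c
#include <stdbool.h>
#include <stdint.h>

//Placeholder CC number/value used as the 'finished' marker - the real values
//used by the synth aren't given here, so these are assumptions.
#define PATCH_DUMP_CC      127
#define PATCH_FINISHED_VAL 127

//Feeds one incoming CC message into a 128-entry patch value buffer,
//indexed by CC number. Returns true once the 'finished' marker arrives,
//signalling that the application has received a complete patch.
bool HandlePatchDumpCc (uint8_t cc_num, uint8_t cc_val, uint8_t patch_vals[128])
{
    if (cc_num == PATCH_DUMP_CC && cc_val == PATCH_FINISHED_VAL)
        return true;  //full patch received

    patch_vals[cc_num] = cc_val;  //store this parameter's value
    return false;
}
```

The application would call this for every CC message arriving from the synth after sending the dump request, and only write the text file once it returns true.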
Loading a patch is even simpler:
- A patch text file is decoded into patch parameter values
- The patch parameter values are sent to the synth as a stream of MIDI CC messages
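As a rough illustration of the text encoding and decoding described in the two flows above - assuming a simple one-"ccNumber:value"-pair-per-line format, which may well differ from what VTS Editor actually writes - the round trip could look like:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

//Hypothetical patch file format: one "ccNumber:value" pair per line.

//Encodes num CC/value pairs into text; returns the number of characters written.
int EncodePatchText (const uint8_t ccs[], const uint8_t vals[], int num,
                     char *out, int out_size)
{
    int len = 0;

    for (int i = 0; i < num; i++)
        len += snprintf (out + len, out_size - len, "%d:%d\n", ccs[i], vals[i]);

    return len;
}

//Decodes text back into CC/value pairs; returns the number of pairs read.
int DecodePatchText (const char *text, uint8_t ccs[], uint8_t vals[], int max)
{
    int num = 0;
    int cc, val;

    while (num < max && sscanf (text, "%d:%d", &cc, &val) == 2)
    {
        ccs[num] = (uint8_t)cc;
        vals[num] = (uint8_t)val;
        num++;

        //move to the start of the next line
        text = strchr (text, '\n');
        if (text == NULL)
            break;
        text++;
    }

    return num;
}
```

Loading would then just be a matter of decoding the file and sending each pair back to the synth as a MIDI CC message.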
I've called the application 'VTS Editor' rather than 'VTS Patch Manager' as in the future I'd like this application to become a full software editor for the synth (essentially an extended virtual version of the synth's panel), however that's beyond the scope of this design challenge. That said, I have already implemented a couple of extra controls/features within the editor application that aren't related to patch loading/saving:
- Reset Synth to Panel Settings - triggers the sound engine to be set to the current panel settings. This functionality also happens when the synth is turned on so that the panel and the backend are in sync.
- Disable/Enable Synth Panel - temporarily stops the synth's panel controls (except for the volume control) from doing anything. I need this as unfortunately I'm still getting some very occasional panel potentiometer jitter, so this allows the user to load a patch without any jitter changing the way it should sound. Eventually I hope to remove this control once I've fixed the pot jitter issues.
The Interface
Below is a screenshot of the current VTS Editor interface:
A couple of notes on the interface and its controls:
- Loading a patch is done using a window that displays all saved patch files
- It includes controls for setting which MIDI input and output the vintage toy synthesiser is connected to
It's also worth noting that I haven't yet finished the general look-and-feel of the interface - eventually I want it to use the same colour scheme and font as the synth's front panel.
Sound Demos
Here is a quick and rough video previewing a couple of demo sounds I have made with the vintage toy synthesiser. I'm neither a sound designer nor a keyboardist so don't expect anything mind-blowing, plus I've still got a couple of tweaks to do to the sound engine, however this should give you an idea of the range of sounds that you can create with the synth. There is also a bit of noise (possibly ground loop/hum) in the recording. At the end of the project I'm hoping to do a much better video that covers all the features and controls of the synth, as well as some high-quality patch demos and recordings. Enjoy!
Just to be clear, all sound is coming directly from the BeagleBone Black within the synth itself, and I'm only using the MacBook to send patch change information to the synth via MIDI.
-
Audio Synthesis Engine Implementation - Part 2
09/02/2018 at 20:35 • 0 comments (Original post date: 20/03/16)
Just over a month ago I posted about the implementation of the audio synthesis engine for the vintage toy synthesiser. Since then I've got the synth's front panel developed and fully working, which has allowed me to rapidly complete the main features of the synth. Here I'm going to follow on from that blogpost and talk about the final few features I've implemented since then. It's worth mentioning that there are still a few refinements I need to make before I can settle on a final implementation of the brain and sound engine software for the synth, which I'll probably cover in a future log.
Voice Mode and Voice Allocation
The Voice Mode parameter on the synth sets whether the device is in polyphonic or monophonic mode. Here I'm going to cover how I've implemented both poly and mono mode in the vintage toy synth, both of which are implemented within the vintageBrain application on the synth.
Polyphonic Mode
Poly mode is implemented using an array that stores an ordered-list of 'free' voices - voices that are not currently playing a note. The number at the beginning of the list always represents the next available free voice. I've also implemented 'last note' voice stealing, so that if attempting to play a note when there are no free voices left it will 'steal' the voice that is playing the last played note.
This is how poly mode works when a note-on message is received:
- The next available free voice is pulled out of the first index of the 'free voice' array
- If the voice number from point 1 is a valid voice number (1 or above):
- The 'free voice' array is rearranged so that all numbers are shuffled forward by 1 (removing the next available free voice), and a '0' (representing 'no free voice') is added to the end of the array. This puts the following free voice for the next note-on message at the beginning of the array.
- The note number of the note message is stored in an array of 'voice notes', which signifies what note each voice is currently playing.
- The voice number is stored as the last used voice (for the note stealing implementation).
- The voice number is used to set the MIDI channel of the MIDI note-on message that is sent to the voice engine, triggering that particular voice to play a note.
- If the voice number from point 1 is an invalid voice number (0), meaning there are no free voices left:
- The last used voice number is set as the voice to use
- A MIDI note-off message is sent to the stolen voice so that when sending the new note-on it enters the attack phase of the note
- The note number of the note-on message is stored in an array of 'voice notes', which signifies what note each voice is currently playing.
- The voice number is used to set the MIDI channel of the MIDI note-on message that is sent to the voice engine, triggering that particular voice to play a note.
When a note-off message is received:
- A search for the note number in the 'voice notes' array is done
- If the note number is found in the 'voice notes' array, the index of the number is used to signify the voice number that is currently playing the note
- The voice number is put back into the 'free voice' array, replacing the first instance of '0' found at the end of the array
- The index of the 'voice notes' array that represents this voice is set to -1 to signify that this voice is no longer playing a note
- The voice number is used to set the MIDI channel of the MIDI note-off message that is sent to the voice engine, triggering that particular voice to stop playing a note.
Monophonic Mode
Surprisingly, the mono mode implementation is just as complex as poly mode even though it only ever uses the first voice. This is because we need to store a 'stack' of notes that represent all the keys that are currently being held down, so that if a key is released whilst there are still keys being held down the played note is changed to the previously played key rather than just turning the note off. This is the expected behaviour of a monophonic voice mode within a synthesiser.
This is how mono mode works when a note-on message is received:
- The note number is added to the 'mono stack' array, at an index that represents the number of keys currently being held down (if this is the first pressed key it will be the 1st index, if there is already one key being held down it will be 2nd index, and so on).
- A 'stack pointer' variable is incremented by 1 to signify that a note has been added to the 'mono stack' array
- The note number is sent to voice 0 of the sound engine in the form of a MIDI note-on message
When a note-off is received:
- A search for the note number in the 'mono stack' array is done
- If the note number is found in the 'mono stack' array, the note is removed by shuffling forward all elements of the array above it by 1
- The 'stack pointer' variable is decremented by 1 to signify that a note has been removed from the 'mono stack' array
- If there is still at least 1 note in the 'mono stack' array, signified by the value of the 'stack pointer' variable, a MIDI note-on message is sent to voice 0 of the sound engine using the note number at the top of the mono stack, changing the playing note.
- If there are no notes left in the 'mono stack' array, a MIDI note-off message is sent to voice 0 of the sound engine, stopping the playing note.
Below is the current code that handles both poly and mono voice/note allocation, however for an up-to-date version of the code see the vintageBrain.c file in the project's GitHub repo.
//====================================================================================
//Gets the next free voice (the oldest played voice) from the voice_alloc_data.free_voices
//buffer, or steals the oldest voice if no voices are currently free.
uint8_t GetNextFreeVoice (VoiceAllocData *voice_alloc_data)
{
    uint8_t free_voice = 0;

    //get the next free voice number from the first index of the free_voices array
    free_voice = voice_alloc_data->free_voices[0];

    //if got a free voice
    if (free_voice != 0)
    {
        //shift all voices forwards, removing the first value, and adding 0 on the end...
        for (uint8_t voice = 0; voice < NUM_OF_VOICES - 1; voice++)
        {
            voice_alloc_data->free_voices[voice] = voice_alloc_data->free_voices[voice + 1];
        }

        voice_alloc_data->free_voices[NUM_OF_VOICES - 1] = 0;

    } //if (free_voice != 0)
    else
    {
        //use the oldest voice
        free_voice = voice_alloc_data->last_voice;

        //TODO: Send a note-off message to the stolen voice so that when
        //sending the new note-on it enters the attack phase....

    } //else (free_voice != 0)

    return free_voice;
}

//====================================================================================
//Adds a new free voice to the voice_alloc_data.free_voices buffer
uint8_t FreeVoiceOfNote (uint8_t note_num, VoiceAllocData *voice_alloc_data)
{
    //first, find which voice note_num is currently being played on
    uint8_t free_voice = 0;

    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        if (note_num == voice_alloc_data->note_data[voice].note_num)
        {
            free_voice = voice + 1;
            break;
        }
    } //for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)

    //if we have a voice to free up
    if (free_voice > 0)
    {
        //find space in voice buffer
        for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
        {
            //if we find zero put the voice in that place
            if (voice_alloc_data->free_voices[voice] == 0)
            {
                voice_alloc_data->free_voices[voice] = free_voice;
                break;
            }
        } //for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)

    } //if (free_voice > 0)

    return free_voice;
}

//====================================================================================
//Returns a list of voices that are currently playing note note_num (using the voice_list
//array) as well as returning the number of voices.
//Even though at the moment it will probably only ever be 1 voice here, I'm implementing
//it to be able to return multiple voices in case in the future I allow the same note
//to play multiple voices.
uint8_t GetVoicesOfNote (uint8_t note_num, VoiceAllocData *voice_alloc_data, uint8_t voice_list[])
{
    uint8_t num_of_voices = 0;

    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        if (note_num == voice_alloc_data->note_data[voice].note_num)
        {
            voice_list[num_of_voices] = voice + 1;
            num_of_voices++;
        }
    } //for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)

    return num_of_voices;
}

//====================================================================================
///Removes a note from the mono stack by shuffling a set of notes down
void RemoveNoteFromMonoStack (uint8_t start_index, uint8_t end_index, VoiceAllocData *voice_alloc_data)
{
    //shuffle the notes in the stack down to remove the note
    for (uint8_t index = start_index; index < end_index; index++)
    {
        voice_alloc_data->note_data[index].note_num = voice_alloc_data->note_data[index + 1].note_num;
    }

    //set top of stack to empty
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].note_num = VOICE_NO_NOTE;
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].note_vel = VOICE_NO_NOTE;

    //set internal keyboard note stuff
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].keyboard_key_num = VOICE_NO_NOTE;

    //decrement pointer if above 0
    if (voice_alloc_data->mono_note_stack_pointer)
    {
        voice_alloc_data->mono_note_stack_pointer--;
    }
}

//====================================================================================
//Adds a note to the mono mode stack
void AddNoteToMonoStack (uint8_t note_num, uint8_t note_vel, VoiceAllocData *voice_alloc_data,
                         bool from_internal_keyboard, uint8_t keyboard_key_num)
{
    //add note to the top of the stack
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].note_num = note_num;
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].note_vel = note_vel;

    //set internal keyboard note stuff
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].from_internal_keyboard = from_internal_keyboard;
    if (from_internal_keyboard)
        voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].keyboard_key_num = keyboard_key_num;

    //increase stack pointer
    voice_alloc_data->mono_note_stack_pointer++;

    //if the stack is full
    if (voice_alloc_data->mono_note_stack_pointer >= VOICE_MONO_BUFFER_SIZE)
    {
        //remove the oldest note from the stack
        RemoveNoteFromMonoStack (0, VOICE_MONO_BUFFER_SIZE, voice_alloc_data);
    }
}

//====================================================================================
//Pulls a note from the mono stack
void PullNoteFromMonoStack (uint8_t note_num, VoiceAllocData *voice_alloc_data)
{
    uint8_t note_index = 0;

    //find the note in the stack buffer
    for (uint8_t i = 0; i < voice_alloc_data->mono_note_stack_pointer; i++)
    {
        //if it matches
        if (voice_alloc_data->note_data[i].note_num == note_num)
        {
            //store index
            note_index = i;

            //break from loop
            break;
        }
    } //for (uint8_t i = 0; i < voice_alloc_data->mono_note_stack_pointer; i++)

    //remove the note from the stack
    RemoveNoteFromMonoStack (note_index, voice_alloc_data->mono_note_stack_pointer, voice_alloc_data);
}

//====================================================================================
//Processes a note message received from any source, sending it to the needed places
void ProcessNoteMessage (uint8_t message_buffer[],
                         PatchParameterData patch_param_data[],
                         VoiceAllocData *voice_alloc_data,
                         bool send_to_midi_out,
                         int sock,
                         struct sockaddr_un sound_engine_sock_addr,
                         bool from_internal_keyboard,
                         uint8_t keyboard_key_num)
{
    //====================================================================================
    //Voice allocation for sound engine

    //FIXME: it is kind of confusing how in mono mode the separate functions handle the
    //setting of voice_alloc_data, however in poly mode all of that is done within this
    //function. It may be a good idea to rewrite the voice allocation stuff to make this
    //neater.

    //=========================================
    //if a note-on message
    if ((message_buffer[0] & MIDI_STATUS_BITS) == MIDI_NOTEON)
    {
        //====================
        //if in poly mode
        if (patch_param_data[PARAM_VOICE_MODE].user_val > 0)
        {
            //get next voice we can use
            uint8_t free_voice = GetNextFreeVoice (voice_alloc_data);

            #ifdef DEBUG
            printf ("[VB] Next free voice: %d\r\n", free_voice);
            #endif

            //if we have a voice to use
            if (free_voice > 0)
            {
                //put free_voice into the correct range
                free_voice -= 1;

                //store the note info for this voice
                voice_alloc_data->note_data[free_voice].note_num = message_buffer[1];
                voice_alloc_data->note_data[free_voice].note_vel = message_buffer[2];

                //set the last played voice (for note stealing)
                voice_alloc_data->last_voice = free_voice + 1;

                //set internal keyboard note stuff
                voice_alloc_data->note_data[free_voice].from_internal_keyboard = from_internal_keyboard;
                if (from_internal_keyboard)
                    voice_alloc_data->note_data[free_voice].keyboard_key_num = keyboard_key_num;

                //Send to the sound engine...
                uint8_t note_buffer[3] = {MIDI_NOTEON + free_voice, message_buffer[1], message_buffer[2]};
                SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);

            } //if (free_voice > 0)

        } //if (patch_param_data[PARAM_VOICE_MODE].user_val > 0)

        //====================
        //if in mono mode
        else
        {
            AddNoteToMonoStack (message_buffer[1], message_buffer[2], voice_alloc_data,
                                from_internal_keyboard, keyboard_key_num);

            //Send to the sound engine for voice 0...
            uint8_t note_buffer[3] = {MIDI_NOTEON, message_buffer[1], message_buffer[2]};
            SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);

        } //else (mono mode)

    } //if ((message_buffer[0] & MIDI_STATUS_BITS) == MIDI_NOTEON)

    //=========================================
    //if a note-off message
    else
    {
        //====================
        //if in poly mode
        if (patch_param_data[PARAM_VOICE_MODE].user_val > 0)
        {
            //free used voice of this note
            uint8_t freed_voice = FreeVoiceOfNote (message_buffer[1], voice_alloc_data);

            #ifdef DEBUG
            printf ("[VB] freed voice: %d\r\n", freed_voice);
            #endif

            //if we successfully freed a voice
            if (freed_voice > 0)
            {
                //put freed_voice into the correct range
                freed_voice -= 1;

                //reset the note info for this voice
                voice_alloc_data->note_data[freed_voice].note_num = VOICE_NO_NOTE;
                voice_alloc_data->note_data[freed_voice].note_vel = VOICE_NO_NOTE;
                voice_alloc_data->note_data[freed_voice].keyboard_key_num = VOICE_NO_NOTE;

                //Send to the sound engine...
                uint8_t note_buffer[3] = {MIDI_NOTEOFF + freed_voice, message_buffer[1], message_buffer[2]};
                SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);

            } //if (freed_voice > 0)

        } //if (patch_param_data[PARAM_VOICE_MODE].user_val > 0)

        //====================
        //if in mono mode
        else
        {
            PullNoteFromMonoStack (message_buffer[1], voice_alloc_data);

            //if there is still at least one note on the stack
            if (voice_alloc_data->mono_note_stack_pointer != 0)
            {
                //Send a note-on message to the sound engine with the previous note on the stack...
                uint8_t note_num = voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer - 1].note_num;
                uint8_t note_vel = voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer - 1].note_vel;

                uint8_t note_buffer[3] = {MIDI_NOTEON, note_num, note_vel};
                SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);
            }

            //if this was the last note in the stack
            else
            {
                //Send to the sound engine as a note off...
                uint8_t note_buffer[3] = {MIDI_NOTEOFF, message_buffer[1], message_buffer[2]};
                SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);
            }

        } //else (mono mode)

    } //else (note-off message)

    //====================================================================================
    //Sending to MIDI-out

    //Send to MIDI out if needed
    if (send_to_midi_out)
    {
        WriteToMidiOutFd (message_buffer, 3);
    }
}
Keyboard Parameters
There are three keyboard parameters on the vintage toy synth that generate and control what notes the keyboard plays - scale, octave, and transpose - another set of parameters implemented within the vintageBrain application on the synth. The keyboard sends key presses/releases to the brain application as MIDI note messages, using note numbers 0-17 to signify the key number, so it is up to these parameters to convert these key numbers into meaningful and audible note numbers.
Scale
Scale controls what particular musical scale is played by the keyboard, and at the moment I have included a selection of eight scales - chromatic, major, major pentatonic, minor, minor pentatonic, melodic minor, harmonic minor, and blues. This has been implemented fairly simply by putting each scale into its own array of semitone offsets starting from 0, and using the key number coming from the keyboard to select a note/semitone value from the array:
//apply scale value
//Note numbers come from the keyboard in the range of 0 - KEYBOARD_NUM_OF_KEYS-1,
//and are used to select an index of keyboardScales[patch_param_data[PARAM_KEYS_SCALE].user_val]
note_num = keyboardScales[patch_param_data[PARAM_KEYS_SCALE].user_val][keyboard_key_num];
Octave
Octave controls the musical octave that the keyboard scale is offset by, where an octave value of 0 sets the bottom key on the keyboard to play middle E (MIDI note 64); each greater octave value adds 12 semitones, and each lower octave value subtracts 12 semitones:
//apply octave value
//if octave value is 64 (0) bottom key is note 64 (middle E, as E is the first key)
note_num = (note_num + 64) + ((patch_param_data[PARAM_KEYS_OCTAVE].user_val - 64) * 12);
Transpose
Transpose controls a single-semitone offset applied to the note number, allowing the bottom key on the keyboard to be any musical note rather than just E:
//apply transpose
//a value of 64 (0) means no transpose
note_num += patch_param_data[PARAM_KEYS_TRANSPOSE].user_val - 64;
Global Volume
The global volume parameter is a mundane yet essential control, and I have implemented it to set the main volume of the Linux soundcard driver I am using on the BBB. As I am using the ALSA soundcard driver for audio output, I use the amixer command-line application to do this, via its sset command:
//set the Linux system volume...

//create start of amixer command to set 'Speaker' control value
//See http://linux.die.net/man/1/amixer for more options
char volume_cmd[64] = "amixer -q sset Speaker ";

//turn the param value into a percentage string
char volume_string[16];
sprintf (volume_string, "%d%%", param_val);

//append the value string onto the command
strcat (volume_cmd, volume_string);

//Send the command to the system
system (volume_cmd);
Vintage Amount
The idea of the Vintage Amount parameter is to allow the synth to model old or even broken analogue synthesiser voices, however as this is an uncommon setting on commercial synthesisers there is no established functionality for this parameter. The most obvious behaviour, and the way it currently works, is that it randomly modifies the pitch of each voice when a new note is played, with a greater amount value creating larger pitch offsets:
//============================
//Set 'vintage amount' pitch offset
int16_t vintage_pitch_offset = 0;

//if there is a vintage value
if (patchParameterData[PARAM_GLOBAL_VINTAGE_AMOUNT].voice_val != 0)
{
    //get a random pitch value using the vintage amount as the max possible value
    vintage_pitch_offset = rand() % (int)patchParameterData[PARAM_GLOBAL_VINTAGE_AMOUNT].voice_val;

    //offset the random pitch value so that the offset could be negative
    vintage_pitch_offset -= patchParameterData[PARAM_GLOBAL_VINTAGE_AMOUNT].voice_val / 2;

    //FIXME: the above algorithm will make lower notes sound less out of tune than higher notes - fix this.

} //if (patchParameterData[PARAM_GLOBAL_VINTAGE_AMOUNT].voice_val != 0)
However the more I play with the current implementation, the more I realise that adding random pitch offsets to each voice isn't very musically useful, especially at large amount values.
Therefore, before settling on a final implementation, I'm probably going to experiment with other, potentially more musically useful behaviours for this parameter, such as:
- Randomly detuning each oscillator on a voice by a small amount rather than the whole voice, which would create phase and 'beating' effects
- Adding random amounts of noise to each note
- Adding random rhythmic amplitude modulation (like an LFO set to a random shape modulating amplitude amount)
-
Panel - Part 2 (Electronics and Software)
09/02/2018 at 20:22 • 0 comments(Original post date: 13/03/16)
Last week I posted about the design and construction of the front panel for the vintage toy synthesiser, however another thing I had been doing alongside that is putting together the electronics and software for allowing the synthesis engine to be controlled by the panel controls. This ended up being a bit of a nightmare to get working well as I'll talk about below, but I think I've finally got it into a stable state. A lot of the electronics and software for the panel is very similar to that of the key mechanism of the synth, therefore I will often refer to the blogpost on that within this post rather than repeating myself.
Electronics
Components used:
- Potentiometer, 10k, regular (x 35)
- Potentiometer, 10k, centre-detented (x 7)
- Toggle Switch
- Resistor, 10k
- Ceramic capacitor, 0.1uF (x 4)
- MC14067 multiplexer (x 3)
- Arduino Pro Mini (3.3V version)
- DIP24 0.6" IC socket (x 4)
Controls
As mentioned in a previous log the only controls I am using on my panel are potentiometers/dials and a toggle switch, simply because these are the most useful and common controls that are used in similar projects and products.
Potentiometers
I decided to only use dial pots instead of slider pots as they take up less room on the panel. I am using pots with a value of 10k as this is the recommended value when reading a pot with just a microcontroller. I am also using a few centre-detented pots for the bipolar depth controls so that the user can easily centre these values. I had considered using centre-detented pots for a few of the other parameters (oscillator coarse tune, pulse amount, keyboard octave and transpose), however from testing these pots I found they often don't actually centre on the exact central value, which would not work for these particular parameters, which are quite coarse.
I have connected the pots to the circuit in the standard way - the two outer pins go to power and ground and the centre pin goes to an analogue input (which in my case is on a multiplexer).
A potentiometer connected directly to an Arduino. Source: https://www.arduino.cc/en/Tutorial/AnalogReadSerial
Toggle Switch
The switch I am using is a SPST (Single Pole, Single Throw) switch, which is all that is needed when wanting to read a switch/button value using a microcontroller.
I have connected the toggle switch to the circuit in a standard way, using a 10k pull-down resistor so that when the switch is off the input gets pulled to ground to produce a value of LOW. However as all my multiplexers are connected to analogue inputs, the switch is connected to an analogue input instead of a digital input; this just means I'll get a value of 0 or 1023 instead of LOW or HIGH.
A button connected to an Arduino. Source: https://www.arduino.cc/en/Tutorial/Button
Microcontroller and Multiplexers
Just like with the synth's key mechanism, I am using a 3.3V Arduino Pro Mini microcontroller for reading the control values, which are then sent to the BBB via serial. See the key mechanism production log for more info on this design decision. However there are a couple of changes I have made here compared to that of the key mech:
- I am using 16-channel multiplexers instead of 8-channel multiplexers. This is simply because I am not able to get enough analogue inputs for all 43 panel controls using 8-channel muxes with an Arduino Pro Mini (well that's what I thought at the time of developing this circuit, however since then I have learnt that that's not the case, which I've talked about below in the 'Alternative Circuit Design' section).
- All the muxes and the Arduino are attached to the circuit via DIP IC sockets. I did this so that these components can be easily replaced if they break, which is something I learnt the hard way with the key mech circuit (I have actually since gone back and added this to the key mech circuit).
- All the muxes (as well as the VCC signal to the pots) have had 0.1uf decoupling capacitors added to them - something that digital circuits should have which I wasn't aware of (another thing that I have since gone back and added to the key mech circuit).
The Completed Circuit
The completed circuit for the panel has been developed using stripboard which will be screwed to the underside of the panel using standoffs, using solid core wire to make all connections. Below is a breadboard diagram of the circuit but with only one potentiometer attached:
Here are some photos of the completed circuit:
The completed vintage toy synth panel circuit
The potentiometers and toggle switch connected to the panel
It's not my neatest or prettiest wiring, but when developing a circuit that contains 42 potentiometers on stripboard instead of a PCB, there are going to be lots of wires.
Alternative Circuit Design
As with the key mech circuit, within the panel circuit each mux uses its own set of digital and analogue pins on the Arduino, meaning that in total I've used 12 digital pins (4 digital outs as the control/select inputs for each mux) and 3 analogue pins (1 analogue output from each mux). At the time of developing this circuit I thought that this was the only way it could be done, however since then I've discovered through one of my superiors that it can be done using fewer Arduino pins, meaning that I could have used cheaper 8-channel muxes (such as 4051s) and still been able to get enough analogue inputs. This can be done by sharing the digital pins between the muxes (connecting the same 4 digital outs to all of the mux select/control pins), which works because I only need to read from one mux at a time. This can be taken a step further by using only one analogue input on the Arduino and sharing it between all the muxes, using the mux inhibit pins to turn on only one mux at a time. Using these two methods I could change this panel circuit to use only 7 digital pins (4 for the shared mux control/select inputs, and 3 for the muxes' inhibit pins) and 1 analogue pin (for the shared analogue output coming from the muxes).
The main benefit of this alternative circuit design is that it allows you to add more inputs/outputs to your microcontroller, which is very useful when using boards such as the Arduino Pro Mini that only have a limited number of them. For example, using these two methods with an Arduino Pro Mini, which has 12 digital pins (ignoring the serial RX and TX pins) and 8 analogue pins (which can be used as digital pins if needed), it would be possible to have a total of 128 analogue inputs using 16 8-channel 4051 muxes, or 240 analogue inputs/outputs using 15 16-channel 4067 muxes! However the main downside to these methods is that they are more prone to errors such as reading from multiple muxes at the same time, so you need to be extra careful in the software that you are definitely turning off one mux before you start reading from the next one.
Software
As mentioned above, all the reading of controls is handled using an Arduino microcontroller, so the only software required for the front panel is a single Arduino sketch that needs to handle two things - reading value changes from the controls, and sending these changes to the BBB as serial-based MIDI messages.
The panel software is a lot less complex than that of the key mechanism. All it needs to do is read the state of every pot and switch, and if it reads a new/changed value for a control it converts it into the range of the sound parameter it is controlling and sends the value to the BBB via serial as a MIDI message. The MIDI messages used by the panel are Control Change (CC) messages, where the first byte is 176 + the MIDI channel (always 0 in this case), the second byte is the controller number, and the third byte is the control value. Each parameter within the synth has its own MIDI CC controller number, which is used within the panel and the BBB software for accessing and setting the parameter's value. It can also be used by external MIDI gear for controlling that parameter externally, or for controlling external MIDI gear using the synth's panel. I haven't yet officially documented the MIDI CC specification of the synth, however you can see a list of the CCs in the globals.h file.
I have created a GitHub repository to host all my code and schematics/diagrams for this project. To see the up-to-date panel code click here, or for the code at the time of writing this blogpost see below.
/*
 Vintage Toy Synthesiser Project - panel code.

 This is the code for the Arduino Pro Mini attached to the piano's panel.
 This particular code is for using up to 4 16-channel multiplexers.

 All pins are used for the following:
 2 - 5: Mux1 select output pins
 6 - 9: Mux2 select output pins
 10 - 13: Mux3 select output pins
 A4 - A7 (as digital outputs): Mux4 select output pins
 A0: Mux1 input pin
 A1: Mux2 input pin
 A2: Mux3 input pin
 A3: Mux4 input pin

 Note that Mux4 may not be connected, but this code allows for it to be used.
 Mux4 must be connected if NUM_OF_CONTROLS is greater than 16 * 3.

 //REMEMBER THAT ANY SERIAL DEBUGGING HERE MAY SCREW UP THE SERIAL COMMS TO THE BBB!
 */

//==========================================
//The number of pots/switches attached
const byte NUM_OF_CONTROLS = 43;

//for dev
const byte FIRST_CONTROL = 0;
const byte LAST_CONTROL = 42;

//The previous analogue value received from each control
int prevAnalogueValue[NUM_OF_CONTROLS] = {0};
//The previous param/MIDI value sent by each control
byte prevParamValue[NUM_OF_CONTROLS] = {0};

//MIDI channel we want to use
const byte midiChan = 0;

const byte VAL_CHANGE_OFFSET = 8;

//==========================================
//param data for each control
struct ControlParamData
{
    const byte cc_num;
    const byte cc_min_val;
    const byte cc_max_val;
    const bool is_depth_param;
};

ControlParamData controlParamData[NUM_OF_CONTROLS] =
{
    {.cc_num = 74, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //0 - PARAM_FILTER_CUTOFF
    {.cc_num = 19, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //1 - PARAM_FILTER_RESO
    {.cc_num = 26, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //2 - PARAM_FILTER_LP_MIX
    {.cc_num = 28, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //3 - PARAM_FILTER_HP_MIX
    {.cc_num = 27, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //4 - PARAM_FILTER_BP_MIX
    {.cc_num = 29, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //5 - PARAM_FILTER_NOTCH_MIX
    {.cc_num = 50, .cc_min_val = 0, .cc_max_val = 3, .is_depth_param = false}, //6 - PARAM_LFO_SHAPE
    {.cc_num = 47, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //7 - PARAM_LFO_RATE
    {.cc_num = 48, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = true}, //8 - PARAM_LFO_DEPTH
    {.cc_num = 14, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //9 - PARAM_OSC_SINE_LEVEL
    {.cc_num = 15, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //10 - PARAM_OSC_TRI_LEVEL
    {.cc_num = 16, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //11 - PARAM_OSC_SAW_LEVEL
    {.cc_num = 18, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //12 - PARAM_OSC_SQUARE_LEVEL
    {.cc_num = 17, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //13 - PARAM_OSC_PULSE_LEVEL
    {.cc_num = 3, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //14 - PARAM_OSC_PULSE_AMOUNT
    {.cc_num = 7, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //15 - PARAM_AEG_AMOUNT
    {.cc_num = 73, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //16 - PARAM_AEG_ATTACK
    {.cc_num = 75, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //17 - PARAM_AEG_DECAY
    {.cc_num = 79, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //18 - PARAM_AEG_SUSTAIN
    {.cc_num = 72, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //19 - PARAM_AEG_RELEASE
    {.cc_num = 13, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //20 - PARAM_FX_DISTORTION_AMOUNT
    {.cc_num = 33, .cc_min_val = 40, .cc_max_val = 88, .is_depth_param = false}, //21 - PARAM_OSC_SINE_NOTE
    {.cc_num = 34, .cc_min_val = 40, .cc_max_val = 88, .is_depth_param = false}, //22 - PARAM_OSC_TRI_NOTE
    {.cc_num = 35, .cc_min_val = 40, .cc_max_val = 88, .is_depth_param = false}, //23 - PARAM_OSC_SAW_NOTE
    {.cc_num = 37, .cc_min_val = 40, .cc_max_val = 88, .is_depth_param = false}, //24 - PARAM_OSC_SQUARE_NOTE
    {.cc_num = 36, .cc_min_val = 40, .cc_max_val = 88, .is_depth_param = false}, //25 - PARAM_OSC_PULSE_NOTE
    {.cc_num = 20, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //26 - PARAM_OSC_PHASE_SPREAD
    {.cc_num = 22, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //27 - PARAM_FEG_ATTACK
    {.cc_num = 23, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //28 - PARAM_FEG_DECAY
    {.cc_num = 24, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //29 - PARAM_FEG_SUSTAIN
    {.cc_num = 25, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //30 - PARAM_FEG_RELEASE
    {.cc_num = 107, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //31 - PARAM_GLOBAL_VINTAGE_AMOUNT
    {.cc_num = 102, .cc_min_val = 0, .cc_max_val = 7, .is_depth_param = false}, //32 - PARAM_KEYS_SCALE
    {.cc_num = 114, .cc_min_val = 61, .cc_max_val = 67, .is_depth_param = false}, //33 - PARAM_KEYS_OCTAVE
    {.cc_num = 106, .cc_min_val = 58, .cc_max_val = 70, .is_depth_param = false}, //34 - PARAM_KEYS_TRANSPOSE
    {.cc_num = 103, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = false}, //35 - PARAM_VOICE_MODE
    {.cc_num = 58, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = true}, //36 - PARAM_MOD_LFO_AMP
    {.cc_num = 112, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = true}, //37 - PARAM_MOD_LFO_CUTOFF
    {.cc_num = 56, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = true}, //38 - PARAM_MOD_LFO_RESO
    {.cc_num = 9, .cc_min_val = 0, .cc_max_val = 100, .is_depth_param = false}, //39 - PARAM_GLOBAL_VOLUME
    {.cc_num = 63, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = true}, //40 - PARAM_MOD_VEL_AMP
    {.cc_num = 109, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = true}, //41 - PARAM_MOD_VEL_CUTOFF
    {.cc_num = 110, .cc_min_val = 0, .cc_max_val = 127, .is_depth_param = true}, //42 - PARAM_MOD_VEL_RESO
};

//FOR DEVELOPMENT
//ControlParamData controlParamData[NUM_OF_CONTROLS] =
//{
//    {.cc_num = 0, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 1, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 2, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 3, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 4, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 5, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 6, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 7, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 8, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 9, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 10, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 11, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 12, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 13, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 14, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 15, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 16, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 17, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 18, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 19, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 20, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 21, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 22, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 23, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 24, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 25, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 26, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 27, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 28, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 29, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 30, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 31, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 32, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 33, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 34, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 35, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 36, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 37, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 38, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 39, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 40, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 41, .cc_min_val = 0, .cc_max_val = 127},
//    {.cc_num = 42, .cc_min_val = 0, .cc_max_val = 127},
//};

void setup()
{
    //Setup serial comms for sending MIDI messages to BBB.
    //We don't need to use the MIDI baud rate (31250) here, as we're sending the messages to a general
    //serial output rather than a MIDI-specific output.
    Serial.begin(38400);

    //set all needed digital output pins
    for (byte i = 2; i <= 13; i++)
    {
        pinMode (i, OUTPUT);
    }

    pinMode (A4, OUTPUT);
    pinMode (A5, OUTPUT);
    pinMode (A6, OUTPUT);
    pinMode (A7, OUTPUT);
}

void loop()
{
    byte input_to_read;
    byte mux_input_pin;
    byte first_select_pin;

    //for each control
    for (byte control_num = FIRST_CONTROL; control_num <= LAST_CONTROL; control_num++)
    {
        //==========================================
        //Read analogue control input...

        //Select the mux/analogue pin we want to read from based on the control number
        //FIXME: there are probably equations I can use here instead.
        if (control_num < 16)
        {
            input_to_read = A0;
            mux_input_pin = control_num;
            first_select_pin = 2;
        }
        else if (control_num < 32)
        {
            input_to_read = A1;
            mux_input_pin = control_num - 16;
            first_select_pin = 6;
        }
        else if (control_num < 48)
        {
            input_to_read = A2;
            mux_input_pin = control_num - 32;
            first_select_pin = 10;
        }
        else
        {
            input_to_read = A3;
            mux_input_pin = control_num - 48;
            first_select_pin = A4;
        }

        //select the input pin on the mux we want to read from, by splitting
        //the mux input pin into bits and sending the bit values to mux select pins.
        int b0 = bitRead (mux_input_pin, 0);
        int b1 = bitRead (mux_input_pin, 1);
        int b2 = bitRead (mux_input_pin, 2);
        int b3 = bitRead (mux_input_pin, 3);
        digitalWrite (first_select_pin, b0);
        digitalWrite (first_select_pin + 1, b1);
        digitalWrite (first_select_pin + 2, b2);
        digitalWrite (first_select_pin + 3, b3);

        //read the input value
        int read_val = analogRead (input_to_read);

        //==========================================
        //Process analogue control input...

        //if the read control value is greater than +/-VAL_CHANGE_OFFSET from the last value.
        //this is a quick dirty hack to prevent jitter
        if ((read_val > prevAnalogueValue[control_num] + VAL_CHANGE_OFFSET) ||
            (read_val < prevAnalogueValue[control_num] - VAL_CHANGE_OFFSET) ||
            (read_val == 0 && prevAnalogueValue[control_num] != 0) ||
            (read_val == 1023 && prevAnalogueValue[control_num] != 1023))
        {
            // Serial.print(control_num);
            // Serial.print(" ");
            // Serial.println(read_val);

            //store the value
            prevAnalogueValue[control_num] = read_val;

            //convert the control value into a param/MIDI CC value
            byte param_val = ConvertControlValToParamVal (control_num);

            //if this control is for a bipolar depth parameter
            if (controlParamData[control_num].is_depth_param == true)
            {
                //make sure the control definitely centres on the centre value of the parameter
                //by setting a certain window around the centre value to be set to the centre value
                if (param_val >= 63 && param_val <= 65)
                {
                    param_val = 64;
                } //if (param_val >= 63 && param_val <= 65)

            } //if (controlParamData[control_num].is_depth_param == true)

            //if the param val is different from the last param val
            if (prevParamValue[control_num] != param_val)
            {
                //store the value
                prevParamValue[control_num] = param_val;

                //Send the param value as a MIDI CC message
                SendMidiMessage (0xB0 + midiChan, controlParamData[control_num].cc_num, prevParamValue[control_num]);

            } //if (prevParamValue[control_num] != param_val)

        } //if the read value has changed by more than the jitter window

        //slow down control reading to help prevent jitter.
        //it also means when pots are turned fast they only send a small number of values
        delay (2);

    } //for (byte control_num = FIRST_CONTROL; control_num <= LAST_CONTROL; control_num++)

    //==========================================
    //Read serial input...

    //if there is something to read on the serial port
    if (Serial.available())
    {
        // Serial.println ("Received messages from serial input"); //debug only - would corrupt the MIDI comms to the BBB

        byte midi_in_buf[64];
        int num_of_bytes = Serial.readBytes (midi_in_buf, 64);

        //if received a request for all panel control values
        if (num_of_bytes == 3 && midi_in_buf[0] == 0xB0 && midi_in_buf[1] == 127 && midi_in_buf[2] == 1)
        {
            //send back all control values
            for (byte control_num = 0; control_num < NUM_OF_CONTROLS; control_num++)
            {
                SendMidiMessage (0xB0 + midiChan, controlParamData[control_num].cc_num, prevParamValue[control_num]);
            }

        } //if received a request for all panel control values

    } //if (Serial.available())
}

//=====================================================
//Converts a control value into a param/MIDI CC value
byte ConvertControlValToParamVal (byte control_num)
{
    byte result;

    result = ((((float)controlParamData[control_num].cc_max_val -
                (float)controlParamData[control_num].cc_min_val) *
               (float)prevAnalogueValue[control_num]) / 1023.0) +
             (float)controlParamData[control_num].cc_min_val;

    return result;
}

//=====================================================
//Sends a 3 byte MIDI message to the serial output
void SendMidiMessage (byte cmd_byte, byte data_byte_1, byte data_byte_2)
{
    byte buf[3] = {cmd_byte, data_byte_1, data_byte_2};
    Serial.write (buf, 3);

    // Serial.print(buf[0]);
    // Serial.print(" ");
    // Serial.print(buf[1]);
    // Serial.print(" ");
    // Serial.println(buf[2]);
}
Issues
As mentioned at the start it was a bit of a nightmare getting a stable working panel. These are the main issues I had and how I resolved them:
- Non-working or erratic potentiometers. Up to this point I've had about 10-15 pots that either spat out erratic values or didn't work at all. In most cases they would behave fine at first, but after moving the panel or rearranging the wires they would suddenly start misbehaving, which suggested the problem was with the pots or wiring rather than the Arduino, muxes, or software. After getting the circuit checked by one of my superiors it turned out I was soldering the pots wrong - I was soldering the wires very close to the opening of the pots' internal mechanism instead of to the pins/legs, most probably getting solder/flux inside or damaging the terminals, causing them to misbehave or break. I was soldering them there because my original joints on the pins kept coming loose, but it turns out that's a common issue. Replacing the broken pots and doing a very careful soldering job fixed the issue. So lesson learnt - solder on the pot legs only!
- Potentiometer jitter. A very common problem with pots, but I didn't realise how much I would get. I added the decoupling capacitors to the circuit to help prevent this, but this didn't appear to be enough. Therefore in the software I have done two things to help prevent jitter:
- Any new pot value has to differ from the previous pot value by more than 8 before a new parameter value is sent to the BBB. This decreases the resolution of the pots, however the greatest resolution of a parameter value sent to the BBB is 7-bit (0-127), which matches scaling down the 10-bit analogue input value (1024 / 8 = 128).
- I've slowed down how often the analogue inputs are read by adding a small delay between reading each control value.
- I attempted to implement the common running/moving average method for smoothing analogue input values, however this ended up using most of the Arduino's memory.
Example video
I was planning on adding an example video of using the panel here, however unfortunately last night my BBB decided to stop working (from searching online for other cases of the symptoms it looks like the processor has randomly blown). Therefore I'll post an example video at a later date once I get a new BBB.
-
Adding Sockets for External Connections
09/02/2018 at 19:37 • 0 comments(Original post date: 06/03/16)
Even though I ended up constructing a brand new front panel for the toy piano for this project, the rest of the enclosure of the vintage toy synth will be using the existing piano enclosure. Apart from the front panel, the other part of the piano that needs modifying for the project is the back section where I need to add a set of sockets and controls so that the synth can be easily connected to a power source, an audio output, and external MIDI gear. A second part to this task was connecting these sockets to the internal electronics of the vintage toy synth.
Construction
The sockets I have added are:
- 2x 5-pin DIN sockets (for MIDI I/O)
- 1x 6.3mm stereo jack socket (for audio output)
- 1x 2.1mm/5.5mm DC socket (for power), coupled with 1x standard SPST toggle switch (as a power switch)
The first thing I needed to do was to get out my Dremel and cut out some holes for all five components. Below is a photo of the back of the piano enclosure after I had done this:
Back of the vintage toy piano with holes cut out for sockets and controls
I'm in no way saying that this is my best Dremel work - the holes aren't perfect circles or in line. However as it is a vintage hand-built piano nothing is perfectly straight anyway, so my sloppy drilling actually goes quite well with the existing enclosure!
Below are some examples of what the back will look like once the sockets and controls are added:
Back of the toy piano with the sockets/controls added
5-pin DIN MIDI sockets
6.3 mm stereo jack socket
DC socket and toggle switch
An example of the sockets with the rest of the synth
The MIDI sockets and the toggle switch were long enough to fit through the wooden panel, however the jack and DC sockets were too short to allow me to fit a washer and nut to them for securing the sockets to the panel. Therefore on the inside of the enclosure I had to cut away an area of wood around the holes for these sockets so that the components would fit correctly, as shown in the photo below:
Inside of the back
Socket Choice
There were a couple of reasons why I chose these particular sockets/controls to use on the back of the synth:
- I decided to use the metal-framed MIDI sockets rather than the more commonly used right-angle MIDI sockets as they are much easier to connect and secure to 6mm-thick wood. Also the right-angle sockets are designed to be secured to a PCB/circuit instead of the enclosure, which would have given me less freedom as to where I place the MIDI circuit/stripboard within the synth.
- I decided to use a 6.3mm audio jack socket instead of a 3.5mm mini jack socket as they are more commonly found on commercial synthesisers and similar products. Even though the current synth engine is just monophonic, I chose to use a stereo jack instead of a mono jack so that stereo headphones could be used without only one ear being active.
- I didn't particularly need to add a power switch, however it is a nice little extra thing to have. I am also considering having a power LED too.
Connecting to Internal Electronics
Now that I have a bunch of sockets connected to the back of my synth the user can easily apply power, get audio, and connect to MIDI gear without needing to open up the device. However these sockets need to be connected to the rest of the electronics of the synth in some way...
MIDI Sockets
Connecting these sockets was easy - if you've read my previous log on the development of the MIDI I/O electronics you'll have seen the circuit I made that allows MIDI gear to be connected to the BeagleBone Black via MIDI DIN sockets. Here I just needed to connect these sockets to my MIDI I/O circuit via the screw terminals I added.
Audio Jack Socket
From my previous log on BeagleBone Black audio you'll know that I'm using an EC Technology USB audio adapter for the audio output of the BBB within my synth, which has a standard 3.5mm stereo mini jack as the audio connector. Initially I was trying to find an existing cable/adapter that goes from a male 3.5mm jack to a female 6.3mm jack, where the socket side of the cable could be secured to a hole using a washer and nut, however I had no luck finding such a cable. Therefore I ended up making my own, attaching a mini-jack plug to the jack socket using the three needed wires - left (tip), right (ring), and ground (sleeve). As the cable is shorter than 8 inches I didn't need to worry about shielding the wires to prevent noise interference.
My DIY audio cable
The jack socket side of the cable, which will be attached to the back of the synth
The jack plug side of the cable, that will connect to the USB audio adapter connected to the BBB
The DIY audio cable connected to the BBB
DC Power Socket
For the power socket I have essentially done the same kind of thing as for the audio connection - I've built my own cable that goes from the socket to a DC plug that connects to the 5V power socket on the BBB. However, here I have also added in the power switch, which breaks the power line when turned off.
My DIY power cable
The DIY power cable connected to the BBB
-
Panel - Part 1 (Design and Construction)
09/02/2018 at 19:13 • 0 comments(Original post date: 1/03/16)
The front panel of my vintage toy synthesiser is the place where all the dials and buttons for controlling the sound parameters will be attached to the toy piano. While the final design of the panel has turned out very similar to how I had originally planned it to look, the construction of the panel compared to my initial plan has changed dramatically. In this log I'm going to cover the process of both designing and constructing the front panel for the vintage toy synthesiser, which has been an ongoing process for me over the past couple of weeks.
Design
When approaching the design of the panel there were three main aspects I needed to consider - control layout, control aesthetics, and labelling/text.
Control Layout
Control layout is the process of placing all the needed controls on the front of the panel. There were a few things to consider here that affected my final design:
- The total number of sound parameters within the synthesiser - 43
- Panel size - the overall area I can use here is roughly 614 cm²
- Control size - the majority of the controls I am using are potentiometers which are 16mm x 25mm
- Grouping similar controls together - one of the most important rules to any good interface design is that similar controls should be grouped together within their own sections
- Leaving space for other things - I need to make sure I've left enough room for a user to easily operate the controls (e.g. their fingers can fit around the dials), as well as leaving space for control labelling.
Control Aesthetics
My original plan for this project was to use vintage and old-looking controls; however, when considering other things such as budget, time, and panel layout, this proved to be a very hard task. Therefore in the end I abandoned this idea, and instead set myself a new plan: to make sure the controls match the black/white/silver colour scheme of the piano. Another part of my initial plan was to make sure the controls are small/miniature, again keeping in line with the design of the piano.
There were only two types of controls I needed for the front panel - dials/knobs/potentiometers, and toggle switches.
Dials
I've spent the past couple of months buying a range of different knob caps from eBay, and seeing how they look attached to the toy piano. The knob cap I settled on is an aluminium black and silver cap with a very simple design, simply because I thought it went well with the existing aesthetics of the piano. I tried several sizes of the same knob cap, however settled on a 13mm one.
Different knob caps I tried, with the one I settled on on the far right.
Toggle Switches
One parameter of the synth needs to use a switch rather than a dial, and from the get-go of this project I knew exactly what switch would suit the vintage toy piano aesthetic - a simple mini silver metal toggle switch.
The type of toggle switch I will be using on the panel
Labelling/Text
All the controls on the front panel need to be labelled in some way so that the user knows what they do, and the main thing to consider here was what type of font to use. Whereas I had originally planned to use a handwritten or old-style font, I ended up choosing a common sans-serif font as it looked best with the final panel construction method (see the Construction section below). I also had to consider what colour to use here - preferably silver/grey/white.
Final Design
Here is a technical drawing of the final design of the front panel, showing all the positions of the controls as well as the labelling of the controls:
The final panel design, showing control positions and labelling
There are a couple of reasons why I placed the controls in this particular layout:
- All controls are grouped into their relevant individual sections
- There's space left for adding further controls into relevant positions in the future
Construction
As mentioned above, the construction of the front panel of the vintage toy synthesiser changed dramatically from my original plan.
Initial Plan
My initial plan for constructing the front panel was to drill holes into the existing wooden panel of the piano, labelling each control by etching text into the existing paintwork. However both of these ideas were abandoned for the following main reasons:
- The existing panel was too thick and wouldn't have allowed me to fit the bolts onto the potentiometers to attach them to the panel. I attempted to find pots with longer shafts, but this proved to be very difficult.
- The existing panel was quite brittle and would probably have split quite easily after drilling 43 holes into it.
- The paintwork was also very brittle and chipped easily, so attempting to etch text into it wouldn't have looked very good.
Laser Cutting - First Attempt
After realising I would need to construct a whole new panel for the piano, it was recommended that I get it produced using laser cutting, as this could cut out all the needed holes for me. With the help of my wonderful girlfriend I got a CAD drawing produced, found a local laser cutting company, Bristol Design Forge, and had a new panel made in 3mm birch plywood with all the needed holes for the controls. The thickness was perfect for attaching potentiometers, and the new panel was a lot stronger.
A CAD drawing for the first panel design
The 3mm birch plywood laser cut panel
The main downside of this method was that I would now have to completely paint the panel, and this is where disaster struck. First of all I used a gloss black paint that probably wasn't designed to be used on objects that would be handled a lot (it was tacky and smelly, even after it had dried), and secondly the paint caused the panel to warp quite considerably, meaning that it no longer sat nicely on the existing piano enclosure. I learned two things here:
- Plywood is susceptible to warping
- Try paint on a test bit of material first!!
I decided to learn from these mistakes and move on quickly.
Final Laser Cut Panel
After the first failed attempt at laser cutting it was then suggested that I consider using acrylic instead of wood. While I really wanted to keep all parts of the synth wooden, keeping in line with the existing enclosure, there were quite a few benefits to using acrylic instead of wood:
- It could come in gloss black without me needing to apply any paint
- I could use laser engraving to produce the control labelling on the panel, which would come out in frosted white - one of my preferred labelling colours. This means I wouldn't need to paint or stick labels on the panel myself, which probably wouldn't have looked that good.
- It's not susceptible to warping
Therefore once again with the help of my wonderful girlfriend and Bristol Design Forge I got a second laser cut panel produced, this time in 3mm gloss black acrylic.
A close up of the CAD drawing for the second version of the panel, showing cut lines in red and engrave lines in white.
The .dxf design file for this can be found in the project's Git repository.
Photos of the gloss black acrylic panel
While I was initially concerned that using acrylic instead of wood would ruin the aesthetics of the vintage toy piano, it turned out to not look too different from the original panel. Hopefully this is the final panel design and construction, and now all I need to do is attach all the controls and get them talking to the BeagleBone Black!
-
BeagleBone Proto Cape
09/02/2018 at 19:06 • 0 comments(Original post date: 21/02/16)
Over the past week I've been working on various parts of my project - designing the front panel, starting on the panel electronics, and optimising the sound engine software. All of these things are only half-finished so I don't want to document them in a log yet, however one small yet important thing I have completed this week is the wiring and soldering of the BeagleBone Proto Cape, so I thought I'd do a quick and short (for a change!) log on how I've used the proto cape.
The BeagleBone Proto Cape
The Proto Cape is important for my project, and probably for most serious BBB projects, as it allows you to solder your connections to the board so that things don't accidentally become unconnected during use. Saying that, the idea of permanently soldering all of my connections on my BBB didn't appeal to me, so instead I soldered a set of screw terminals to my proto cape (like I did with the MIDI interface circuit for my project), allowing me to disconnect certain connections and circuits from the BBB if needed (which is very useful during development), but at the same time providing a way to securely connect everything when needed.
Here are a couple of photos of my proto cape:
As can be seen from the above photos I've attached eight pairs of screw terminals to the cape. These are for the following connections:
- Three pairs for connecting my keyboard, panel, and MIDI interface circuits to the BBB via the UART serial pins (both TX and RX for each circuit).
- Two pairs for providing 3.3V power to my three circuits (leaving one terminal currently unused)
- Two pairs for connecting the GND of my circuits to the BBB (leaving one terminal currently unused)
- A spare pair, just in case.
Here's a photo of the cape in use, with the keyboard, MIDI interface, and panel fully connected:
-
Audio Synthesis Engine Implementation
08/31/2018 at 17:15 • 0 comments(Original post date: 14/02/16)
Since my log a couple of weeks back where I highlighted the design for my audio synthesis engine I've been hard at work attempting to implement it using the C++ audio synthesis library Maximilian. I'm now at a stage where I have a working and controllable synthesis engine, so I thought it would be a good time to talk about how I've done it. I've managed to implement most of my original design plus a few extra parameters, however I've still got a few small things to implement as well as some niggling bugs to iron out.
Before I go on, I just thought I'd mention that the code used at the end of my project may change slightly from the code examples shown here, so for up-to-date and full code see the GitHub repository for this project.
The Synthesis Engine Application
In my last log on software architecture I briefly introduced the vintageSoundEngine application, which is the program running on the BeagleBone Black that generates the sound for my synth. This application has two main tasks - receiving note and control messages and forwarding them on to the correct 'voice', and mixing together the audio output from each voice and sending it to the main audio output. This is all done within the main piece of code for the application, vintageSoundEngine.cpp, however the code that handles the audio processing for each voice is implemented as a C++ class, vintageVoice, and multiple instances of this class are created depending on the polyphony value of the synth. While I'm on the subject of polyphony, at the moment I've only got a polyphony value of two due to the high CPU usage of each voice, however I'm hoping to increase this before the end of the project.
Processing Note and Control Messages
As mentioned in my last blogpost it is the vintageBrain application that handles voice allocation, therefore vintageSoundEngine doesn't have to do anything complicated in order to forward MIDI note messages to the correct voice - it just uses the MIDI channel of the note message to determine the voice number. This is also the same for MIDI control/CC messages, however I also use MIDI channel 15 here to specify that a message needs to go to all voices. Once the program knows which voice the message needs to go to, it calls a specific function within the desired voice to forward the message. Here is a snippet of the current code that handles this:
//================================
//Process note-on messages
if (input_message_flag == MIDI_NOTEON)
{
    //channel relates to voice number
    uint8_t voice_num = input_message_buffer[0] & MIDI_CHANNEL_BITS;
    vintageVoice[voice_num]->processNoteMessage (1, input_message_buffer[1], input_message_buffer[2]);
} //if (input_message_flag == MIDI_NOTEON)

//================================
//Process note-off messages
else if (input_message_flag == MIDI_NOTEOFF)
{
    //channel relates to voice number
    uint8_t voice_num = input_message_buffer[0] & MIDI_CHANNEL_BITS;
    vintageVoice[voice_num]->processNoteMessage (0, 0, 0);
} //if (input_message_flag == MIDI_NOTEOFF)

//================================
//Process CC/param messages
else if (input_message_flag == MIDI_CC)
{
    //channel relates to voice number. Channel 15 means send to all voices
    uint8_t voice_num = input_message_buffer[0] & MIDI_CHANNEL_BITS;

    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        //if we want to send this message to voice number 'voice'
        if (voice_num == 15 || voice_num == voice)
        {
            //TODO: check if this param/CC num is a sound param, and in range.
            //At this point it always should be, but it may be best to check anyway.

            //set the parameter's voice value
            vintageVoice[voice]->setPatchParamVoiceValue (input_message_buffer[1], input_message_buffer[2]);

        } //if (voice_num == 15 || voice_num == voice)
    } //for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
} //if (input_message_flag == MIDI_CC)
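The MIDI_CHANNEL_BITS mask used above simply extracts the low nibble of the MIDI status byte, which is where a channel voice message stores its channel number. As a small self-contained sketch (the constant value here mirrors the standard MIDI convention; the project's actual header may define it differently):

```cpp
#include <cassert>
#include <cstdint>

// Assumed value, per the MIDI spec: a channel voice status byte is 0xSN,
// where S is the message type nibble and N is the channel (0-15).
const uint8_t MIDI_CHANNEL_BITS = 0x0F;

// Extract the channel number, which the synth engine uses as the voice number.
uint8_t channelOfStatusByte (uint8_t status_byte)
{
    return status_byte & MIDI_CHANNEL_BITS;
}
```

So a note-on status byte of 0x90 addresses voice 0, 0x91 addresses voice 1, and a CC status byte of 0xBF (channel 15) is broadcast to all voices.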
Mixing Voices
Mixing the audio output of the voice objects is done in the audio callback function that is called for each audio sample by the audio streaming thread of the application, handled by the RtAudio API. This is done in the same way as that of the Maximilian examples, however their code for generating and controlling audio was not split into separate objects. Here is the current code that handles this:
void play (double *output)
{
    double voice_out[NUM_OF_VOICES];
    double mix = 0;

    //process each voice
    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        vintageVoice[voice]->processAudio (&voice_out[voice]);
    }

    //mix all voices together (for some reason this won't work if done in the above for loop...)
    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        mix += voice_out[voice];
    }

    //set output
    for (uint8_t i = 0; i < maxiSettings::channels; i++)
    {
        output[i] = mix;
    }
}
The code is fairly simple, and just does three things:
- Calls the audio processing function of each voice, passing in the variable that the voice's audio sample will be stored in
- Mixes the audio samples of each voice into a single sample
- Puts the sample into all channels of the audio output buffer
Voice Design Implementation
Now I'm going to talk about the more interesting code - the code that generates and controls the synthesised audio within each voice. As stated above this is all within the vintageVoice class, and relies mostly on the Maximilian library for the implementation of the essential components of the synthesis engine. When talking about all the features here, remember that this is for each voice.
To implement the synthesis engine I needed the following Maximilian objects:
- maxiOsc (x6) - objects for creating the five separate oscillators as well as the LFO for each voice
- maxiEnv (x2) - objects for creating the amplitude and filter envelopes for each voice
- maxiSVF - object for creating the State-Variable-Filter for each voice
- maxiDistortion - object for applying distortion to each voice
As previously mentioned vintageSoundEngine is a multithreaded application. The main thread handles the receiving and processing of MIDI messages, whereas the second thread handles all the audio streaming and processing.
Processing Control Messages
As stated above, MIDI CC messages are sent to the voices to control the parameters of the sound. When the CC messages get to a voice they are converted into a value that the voice parameters understand (e.g. from the typical MIDI CC value of 0-127 to the typical filter cutoff value of 20-20000Hz), and then stored in an array of parameter data that is used throughout the rest of the code, most importantly within the audio processing callback function. For certain CC messages other things also need to be done, e.g. if it is an oscillator coarse tune control message the pitch of the oscillator needs to be updated. To make developing the audio processing code easier, macros are used instead of parameter numbers, and the array of parameter values is stored as part of a struct which contains variables holding other data about each parameter, such as the range of the value. See the globals.h file for more info.
This task is handled in the main thread. Here is the current code that processes MIDI CC messages:
//==========================================================
//Sets a parameter's voice value based on the parameter's current user value
void VintageVoice::setPatchParamVoiceValue (uint8_t param_num, uint8_t param_user_val)
{
    patchParameterData[param_num].user_val = param_user_val;

    //FIXME: this could probably be done within vintageSoundEngine.cpp instead of within the voice object,
    //as each voice will probably be given the same value most of the time, so it would save CPU
    //to only have to do this once instead of for each voice.
    patchParameterData[param_num].voice_val = scaleValue (patchParameterData[param_num].user_val,
                                                          patchParameterData[param_num].user_min_val,
                                                          patchParameterData[param_num].user_max_val,
                                                          patchParameterData[param_num].voice_min_val,
                                                          patchParameterData[param_num].voice_max_val);

    //==========================================================
    //Set certain things based on the received param num
    if (param_num == PARAM_AEG_ATTACK)
    {
        envAmp.setAttack (patchParameterData[param_num].voice_val);
    }
    else if (param_num == PARAM_AEG_DECAY)
    {
        envAmp.setDecay (patchParameterData[param_num].voice_val);
    }
    else if (param_num == PARAM_AEG_SUSTAIN)
    {
        envAmp.setSustain (patchParameterData[param_num].voice_val);
    }
    else if (param_num == PARAM_AEG_RELEASE)
    {
        envAmp.setRelease (patchParameterData[param_num].voice_val);
    }
    else if (param_num == PARAM_FEG_ATTACK)
    {
        envFilter.setAttack (patchParameterData[param_num].voice_val);
    }
    else if (param_num == PARAM_FEG_DECAY)
    {
        envFilter.setDecay (patchParameterData[param_num].voice_val);
    }
    else if (param_num == PARAM_FEG_SUSTAIN)
    {
        envFilter.setSustain (patchParameterData[param_num].voice_val);
    }
    else if (param_num == PARAM_FEG_RELEASE)
    {
        envFilter.setRelease (patchParameterData[param_num].voice_val);
    }
    else if (param_num == PARAM_OSC_SINE_NOTE)
    {
        convert mtof;
        oscSinePitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    else if (param_num == PARAM_OSC_TRI_NOTE)
    {
        convert mtof;
        oscTriPitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    else if (param_num == PARAM_OSC_SAW_NOTE)
    {
        convert mtof;
        oscSawPitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    else if (param_num == PARAM_OSC_PULSE_NOTE)
    {
        convert mtof;
        oscPulsePitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    else if (param_num == PARAM_OSC_SQUARE_NOTE)
    {
        convert mtof;
        oscSquarePitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    else if (param_num == PARAM_OSC_PHASE_SPREAD)
    {
        //FIXME: I need to properly understand what the phase value represents in order to implement a definitive algorithm here.
        //But basically what it does is, the higher the param value, the more spread the phases are of each oscillator from one another.
        //Sine will always stay at 0, tri will change over a small range, saw over a slightly bigger range, and so on.
        oscSine.phaseReset (0.0);
        oscTri.phaseReset (patchParameterData[param_num].voice_val * 0.002);
        oscSaw.phaseReset (patchParameterData[param_num].voice_val * 0.004);
        oscPulse.phaseReset (patchParameterData[param_num].voice_val * 0.006);
        oscSquare.phaseReset (patchParameterData[param_num].voice_val * 0.008);
    }
    else if (param_num == PARAM_MOD_VEL_AMP)
    {
        //vel->amp env modulation
        velAmpModVal = getModulatedParamValue (param_num, PARAM_AEG_AMOUNT, voiceVelocityValue);
    }
    else if (param_num == PARAM_MOD_VEL_FREQ)
    {
        //vel->cutoff modulation
        velFreqModVal = getModulatedParamValue (param_num, PARAM_FILTER_FREQ, voiceVelocityValue);
    }
    else if (param_num == PARAM_MOD_VEL_RESO)
    {
        //vel->resonance modulation
        velResoModVal = getModulatedParamValue (param_num, PARAM_FILTER_RESO, voiceVelocityValue);
    }
}
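The scaleValue helper used above isn't shown in this log; conceptually it is just a range mapping from a parameter's user range (e.g. 0-127) to its voice range (e.g. 20-20000Hz). A minimal sketch, assuming a simple linear mapping (the real project helper may differ, e.g. it could use an exponential curve for frequency parameters):

```cpp
#include <cassert>

// Hypothetical linear implementation of scaleValue:
// maps 'input' from the range [in_min, in_max] to the range [out_min, out_max].
double scaleValue (double input, double in_min, double in_max,
                   double out_min, double out_max)
{
    return out_min + (out_max - out_min) * ((input - in_min) / (in_max - in_min));
}
```

For example, a MIDI CC value of 0 would map to a cutoff of 20Hz, and a CC value of 127 to 20000Hz.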
Processing Note Messages
Processing MIDI note messages within the voices is a little more complicated than processing MIDI CC messages.
The following main things happen for each note message:
- If a note-on message:
- The pitches of the five oscillators are set based on the received MIDI note number as well as the oscillators' coarse tune values
- The MIDI note velocity value (0-127) is converted into a voice amplitude value (0-1)
- Velocity modulation depth parameter values are used to generate the realtime parameter modulation values that need to be added to the parameter patch values
- The LFO oscillator phase is reset to 0
- The amplitude envelope trigger value is set. A note-on message opens the envelope and causes sound to start playing in the audio thread, whereas a note-off message triggers the envelope to go into its release phase, eventually silencing the audio.
- The filter envelope trigger value is set.
Again this task is handled in the main thread. Here is the function that handles this:
//==========================================================
//Function that does everything that needs to be done when a new
//note-on or note-off message is sent to the voice.
void VintageVoice::processNoteMessage (bool note_status, uint8_t note_num, uint8_t note_vel)
{
    //==========================================================
    //if a note-on
    if (note_status == true)
    {
        //============================
        //store the root note num
        rootNoteNum = note_num;

        //============================
        //set the oscillator pitches
        convert mtof;
        oscSinePitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_SINE_NOTE].voice_val - 64));
        oscTriPitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_TRI_NOTE].voice_val - 64));
        oscSawPitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_SAW_NOTE].voice_val - 64));
        oscPulsePitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_PULSE_NOTE].voice_val - 64));
        oscSquarePitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_SQUARE_NOTE].voice_val - 64));

        //TODO: vintage amount parameter - randomly detune each oscillator and/or the overall voice tuning
        //on each note press, with the vintage amount value determining the amount of detuning.

        //============================
        //set the note velocity
        voiceVelocityValue = scaleValue (note_vel, 0, 127, 0., 1.);

        //============================
        //work out velocity modulation values

        //vel->amp env modulation
        velAmpModVal = getModulatedParamValue (PARAM_MOD_VEL_AMP, PARAM_AEG_AMOUNT, voiceVelocityValue);
        //vel->cutoff modulation
        velFreqModVal = getModulatedParamValue (PARAM_MOD_VEL_FREQ, PARAM_FILTER_FREQ, voiceVelocityValue);
        //vel->resonance modulation
        velResoModVal = getModulatedParamValue (PARAM_MOD_VEL_RESO, PARAM_FILTER_RESO, voiceVelocityValue);

        //============================
        //reset LFO osc phase
        lfo.phaseReset(0.0);

    } //if (note_status == true)

    //==========================================================
    //if a note-off
    else if (note_status == false)
    {
        //reset aftertouch value
        aftertouchValue = 0;
    }

    //==========================================================
    //set trigger value of envelopes
    envAmp.trigger = note_status;
    envFilter.trigger = note_status;
}
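The mtof ("MIDI to frequency") conversion used above comes from Maximilian's convert class. The underlying maths is the standard equal-temperament formula, sketched here for reference (this sketch is my own, not the library's exact code):

```cpp
#include <cassert>
#include <cmath>

// Standard equal-temperament MIDI note to frequency conversion:
// note 69 (A4) is defined as 440Hz, and each semitone is a factor of 2^(1/12).
double midiNoteToFrequency (double note_num)
{
    return 440.0 * std::pow (2.0, (note_num - 69.0) / 12.0);
}
```

This is also why the coarse tune offset above subtracts 64: a parameter value of 64 means no detune, and each step away from 64 shifts the pitch by one semitone.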
Generating and Processing Audio
As previously mentioned, all audio processing is handled within an audio callback function which is repeatedly called by the audio processing thread for each sample in the audio stream. Here I'm going to outline each section of the audio callback function within the voice class, which relies heavily on the Maximilian library.
LFO
The LFO is generated and set in the following way:
- An output sample of an oscillator object is generated using the following parameters:
- LFO shape controls which maxiOsc shape is used
- LFO rate controls the frequency/pitch of the maxiOsc object
- The oscillator output (-1 to +1) is converted into the range needed for an LFO (0 - 1).
- The LFO output sample is multiplied by the LFO depth parameter value
//==========================================================
//process LFO...

//set shape and rate
//FIXME: for LFO rate it would be better if we used an LFO rate table (an array of 128 different rates).
if (patchParameterData[PARAM_LFO_SHAPE].voice_val == 0)
    lfoOut = lfo.sinewave (patchParameterData[PARAM_LFO_RATE].voice_val);
else if (patchParameterData[PARAM_LFO_SHAPE].voice_val == 1)
    lfoOut = lfo.triangle (patchParameterData[PARAM_LFO_RATE].voice_val);
else if (patchParameterData[PARAM_LFO_SHAPE].voice_val == 2)
    lfoOut = lfo.saw (patchParameterData[PARAM_LFO_RATE].voice_val);
else if (patchParameterData[PARAM_LFO_SHAPE].voice_val == 3)
    lfoOut = lfo.square (patchParameterData[PARAM_LFO_RATE].voice_val);

//convert the osc wave into an lfo wave (multiply and offset)
lfoOut = ((lfoOut * 0.5) + 0.5);

//set depth
lfoOut = lfoOut * patchParameterData[PARAM_LFO_DEPTH].voice_val;
Amplitude Envelope
The amplitude envelope is generated and set in the following way:
- The LFO->amplitude modulation depth parameter value is used to generate the realtime parameter modulation value that needs to be added to the amplitude envelope amount parameter value
- The envelope amount value is worked out by adding the realtime amplitude modulation values (generated by both velocity and LFO modulation) to the amplitude envelope amount parameter value
- An output sample of the envelope is generated using a maxiEnv object, passing in the envelope amount value to control the depth, and the envelope trigger value that was set by the last received MIDI note message to set the current phase of the envelope.
//==========================================================
//Amp envelope stuff...

//process LFO->amp env modulation
double amp_lfo_mod_val = getModulatedParamValue (PARAM_MOD_LFO_AMP, PARAM_AEG_AMOUNT, lfoOut);

//Add the amp modulation values to the patch value, making sure the produced value is in range
double amp_val = patchParameterData[PARAM_AEG_AMOUNT].voice_val + amp_lfo_mod_val + velAmpModVal;
amp_val = boundValue (amp_val,
                      patchParameterData[PARAM_AEG_AMOUNT].voice_min_val,
                      patchParameterData[PARAM_AEG_AMOUNT].voice_max_val);

//generate the amp envelope output using amp_val as the envelope amount
envAmpOut = envAmp.adsr (amp_val, envAmp.trigger);
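The boundValue helper used above isn't shown in this log; it just keeps the combined patch-plus-modulation value inside the parameter's valid range. A minimal sketch of what it presumably does (the name matches the snippet, the body is my assumption):

```cpp
#include <cassert>

// Hypothetical implementation: clamp 'value' to the range [min_val, max_val].
double boundValue (double value, double min_val, double max_val)
{
    if (value < min_val)
        return min_val;
    if (value > max_val)
        return max_val;
    return value;
}
```

Clamping here matters because the sum of the patch value and both modulation sources can easily push the envelope amount outside its 0-1 range.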
Filter Envelope
This is generated in essentially the same way as the amplitude envelope, however it uses a different maxiEnv object, and a static value of 1 as the envelope depth.
//==========================================================
//process filter envelope
envFilterOut = envFilter.adsr (1.0, envFilter.trigger);
Oscillators
The oscillators are generated and set in the following way:
- An output sample of each of the five oscillator objects is generated using the following parameters:
- Each oscillator uses a different shape of the maxiOsc class
- The frequency/pitch of each oscillator are set to the pitch values generated with the last received MIDI note-on message
- The oscillator mix/level parameters multiply the output sample
- For the pulse oscillator, the pulse amount is set using the pulse amount parameter
- The five samples are mixed into a single sample, and divided by the number of oscillators to prevent clipping.
This is the point in the audio processing callback function that sound is initially generated.
//==========================================================
//process oscillators
oscSineOut = oscSine.sinewave (oscSinePitch) * patchParameterData[PARAM_OSC_SINE_LEVEL].voice_val;
oscTriOut = (oscTri.triangle (oscTriPitch) * patchParameterData[PARAM_OSC_TRI_LEVEL].voice_val);
oscSawOut = (oscSaw.saw (oscSawPitch) * patchParameterData[PARAM_OSC_SAW_LEVEL].voice_val);
oscPulseOut = (oscPulse.pulse (oscPulsePitch, patchParameterData[PARAM_OSC_PULSE_AMOUNT].voice_val) * patchParameterData[PARAM_OSC_PULSE_LEVEL].voice_val);
oscSquareOut = (oscSquare.square (oscSquarePitch) * patchParameterData[PARAM_OSC_SQUARE_LEVEL].voice_val);

//mix oscillators together
oscMixOut = (oscSineOut + oscTriOut + oscSawOut + oscPulseOut + oscSquareOut) / 5.;
Filter
The filter is generated, set, and used in the following way:
- The LFO->cutoff modulation depth parameter value is used to generate the realtime parameter modulation value that needs to be added to the cutoff parameter value
- The filter cutoff value is worked out by adding the realtime cutoff modulation values (generated by both velocity and LFO modulation) to the filter cutoff parameter value
- The maxiSVF object cutoff value is set using the cutoff value multiplied by the current output sample of the filter envelope
- The LFO->resonance modulation depth parameter value is used to generate the realtime parameter modulation value that needs to be added to the resonance parameter value
- The filter resonance value is worked out by adding the realtime resonance modulation values (generated by both velocity and LFO modulation) to the filter resonance parameter value
- The maxiSVF object resonance value is set using the resonance value
- An output sample of the filter applied to the mixed oscillator sample is generated by calling play() on the maxiSVF object using the following parameters:
- The passed in audio sample is the output of the oscillators
- The filter LP, HP, BP, and notch mix parameters are used to set the mix of the filter
//==========================================================
//process filter (pass in oscOut, return filterOut)

//================================
//process LFO->cutoff modulation
double cutoff_lfo_mod_val = getModulatedParamValue (PARAM_MOD_LFO_FREQ, PARAM_FILTER_FREQ, lfoOut);

//Add the cutoff modulation values to the patch value, making sure the produced value is in range
double cutoff_val = patchParameterData[PARAM_FILTER_FREQ].voice_val + cutoff_lfo_mod_val + velFreqModVal;
cutoff_val = boundValue (cutoff_val,
                         patchParameterData[PARAM_FILTER_FREQ].voice_min_val,
                         patchParameterData[PARAM_FILTER_FREQ].voice_max_val);

//set cutoff value, multiplied by filter envelope
filterSvf.setCutoff (cutoff_val * envFilterOut);

//================================
//process LFO->reso modulation
double reso_lfo_mod_val = getModulatedParamValue (PARAM_MOD_LFO_RESO, PARAM_FILTER_RESO, lfoOut);

//Add the reso modulation values to the patch value, making sure the produced value is in range
double reso_val = patchParameterData[PARAM_FILTER_RESO].voice_val + reso_lfo_mod_val + velResoModVal;
reso_val = boundValue (reso_val,
                       patchParameterData[PARAM_FILTER_RESO].voice_min_val,
                       patchParameterData[PARAM_FILTER_RESO].voice_max_val);

//set resonance value
filterSvf.setResonance (reso_val);

//================================
//Apply the filter
filterOut = filterSvf.play (oscMixOut,
                            patchParameterData[PARAM_FILTER_LP_MIX].voice_val,
                            patchParameterData[PARAM_FILTER_BP_MIX].voice_val,
                            patchParameterData[PARAM_FILTER_HP_MIX].voice_val,
                            patchParameterData[PARAM_FILTER_NOTCH_MIX].voice_val);
Distortion
The current implementation of applying distortion to the voices is as follows:
- An output sample of distorted audio is generated by passing the filtered audio sample into the maxiDistortion::atanDist function with a static shape value of 200.
- The distorted audio sample is mixed with the undistorted filtered audio sample, using the distortion amount parameter value to set the gain/mix of each audio sample
//==========================================================
//process distortion...
//FIXME: should PARAM_FX_DISTORTION_AMOUNT also change the shape of the distortion?
distortionOut = distortion.atanDist (filterOut, 200.0);

//process distortion mix
//FIXME: is this (mixing dry and wet) the best way to apply distortion?
//Or should I just always be running the main output through the distortion function?
//FIXME: probably need to reduce the distortionOut value so bringing in distortion
//doesn't increase the overall volume too much
effectsMixOut = (distortionOut * patchParameterData[PARAM_FX_DISTORTION_AMOUNT].voice_val) +
                (filterOut * (1.0 - patchParameterData[PARAM_FX_DISTORTION_AMOUNT].voice_val));
However, as per the comments in the above code, I may change this implementation so that I don't mix a 'dry' audio sample with the distorted sample, and instead just use the distortion amount parameter value to control the shape of the distortion.
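To illustrate that alternative, here is a minimal, self-contained sketch (not the project's actual code) of an atan waveshaper whose shape is driven by the distortion amount parameter instead of a dry/wet crossfade. The 1.0-200.0 shape mapping and the function names are my assumptions, chosen so that a full amount of 1.0 matches the static shape value of 200 used in the current implementation.

```cpp
#include <cmath>

// Normalised atan waveshaper (same form as Maximilian's atan distortion):
// the output is scaled so a full-scale input maps close to full scale.
double atanDist (double in, double shape)
{
    return (1.0 / std::atan (shape)) * std::atan (in * shape);
}

// Hypothetical alternative: map the distortion amount parameter (0.0-1.0)
// onto the waveshaper's shape, so no dry/wet mixing is needed.
double processDistortion (double in, double amount)
{
    // amount = 0.0 gives a near-linear transfer curve,
    // amount = 1.0 gives the current hard shape of 200
    const double shape = 1.0 + (amount * 199.0);
    return atanDist (in, shape);
}
```

One nice property of this approach is that the output level stays roughly constant as the amount is increased, which would also address the volume-increase concern in the FIXME comment above.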
Output
Lastly, the generated audio sample needs to be applied to the audio sample that goes to the main audio output. This is done by setting each output channel's sample to the generated audio sample multiplied by the current output sample of the amplitude envelope.
//==========================================================
//apply amp envelope, making all channels the same (pass in effectsMixOut, return output)
for (uint8_t i = 0; i < maxiSettings::channels; i++)
{
    output[i] = effectsMixOut * envAmpOut;
}
Changes from the Initial Synthesis Engine Design
As can be seen from above I've managed to implement the majority of my initial design, however there have been a few changes:
- I've added coarse tune parameters for each of the oscillators
- Due to the last point, I've renamed the sub oscillator to just be called the square oscillator
- I've added a 'phase spread' parameter to the oscillators, allowing the phase of the oscillators to be different from each other at varying amounts
- I've added velocity->cutoff and velocity->resonance modulation
- I've removed all aftertouch modulation (for now), as currently the audio glitches fairly badly when attempting to process aftertouch messages. However, I'm hoping to put this back in eventually if I have time to figure out what the issue is.
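As an illustration of the 'phase spread' idea in the list above, here is a hypothetical sketch of how per-oscillator starting-phase offsets could be derived from a single spread parameter. The function name and the 0.0-1.0 parameter range are my assumptions, not the synth's actual code.

```cpp
#include <vector>

// Sketch: spread the starting phases of numOscillators evenly across
// the spread amount. phaseSpread runs from 0.0 (all oscillators start
// in phase) to 1.0 (phases spread across the full cycle).
std::vector<double> getPhaseOffsets (int numOscillators, double phaseSpread)
{
    std::vector<double> offsets (numOscillators);
    for (int i = 0; i < numOscillators; i++)
    {
        // each offset is a fraction of one cycle (0.0 - 1.0)
        offsets[i] = (static_cast<double> (i) / numOscillators) * phaseSpread;
    }
    return offsets;
}
```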
What's Next
There are a few parameters within my initial synth engine design that I haven't mentioned here, simply because I haven't yet implemented them. These include:
- Voice mode. I've implemented voice allocation for polyphonic mode, but not yet for mono mode. This feature is handled within the vintageBrain application.
- All keyboard parameters, which once again will be handled within the vintageBrain application.
- Vintage amount, which will detune the oscillators by random amounts on each note press.
- Global volume, which will set the Linux system volume.
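The planned 'vintage amount' behaviour from the list above could be sketched roughly as follows; the +/-50 cent maximum detune range and the function name are assumptions for illustration only.

```cpp
#include <random>

// Sketch: on each note press, pick a random detune offset (in cents)
// for an oscillator, scaled by the vintage amount parameter (0.0-1.0).
double getVintageDetune (double vintageAmount, std::mt19937& rng)
{
    std::uniform_real_distribution<double> dist (-1.0, 1.0);
    const double maxDetuneCents = 50.0; // assumed maximum detune range
    return dist (rng) * vintageAmount * maxDetuneCents;
}
```

Because the offset is re-rolled on every note-on, each note would land slightly differently in pitch, mimicking the instability of an old analogue instrument.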
Also there are a couple of bugs I need to address, the main one being random but frequent audio glitches. I'm not sure whether this is related to CPU usage, audio buffer size, thread priority, or something else, but it's the main thing holding me back from putting out some audio examples of my synthesis engine.
-
Software Architecture
08/31/2018 at 07:32 • 0 comments (Original post date: 12/02/16)
Over the past couple of weeks I have been dipping in and out of various parts of my project - developing the MIDI I/O interface, as well as starting to implement my audio synthesis engine design as a working entity (which I will probably talk about in my next log). However, both of these elements have required me to develop a general structure of software on the BeagleBone Black board that allows the keyboard, MIDI interface, and eventually the panel to communicate with a sound engine. Therefore in this log I thought I'd cover the various pieces of software that make up the vintage toy synthesiser, both on the BBB and off, and how they all connect together.
To begin with, here's a diagram of the software architecture of the synth:
Arduino Software
Keyboard
As shown in an earlier log, the keys/sensors on the digitised keyboard mechanism are scanned/read by a dedicated microcontroller - an Arduino Pro Mini. The Arduino software, or sketch, for this Pro Mini simply reads the state of each sensor over and over, and detects any changes in the press or pressure status of any of the keys. Note and aftertouch messages are then sent from the Arduino to the BBB using MIDI messages over serial.
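The scan-and-detect-changes logic described above could be sketched, hardware aside, roughly like this. The key count of 18 matches the piano; the press threshold, base note number, and function names are assumptions for illustration, not the actual sketch's code.

```cpp
#include <cstdint>
#include <vector>

const int NUM_KEYS = 18;           // the piano's 18 keys
const uint8_t BASE_NOTE = 48;      // assumed MIDI note of the lowest key
const int PRESS_THRESHOLD = 10;    // assumed sensor level that counts as a press

// Compare each key's new sensor reading against its previous state,
// and emit raw MIDI note-on/note-off bytes on any press change.
std::vector<uint8_t> scanKeys (const int newReadings[NUM_KEYS], int prevReadings[NUM_KEYS])
{
    std::vector<uint8_t> midiOut;
    for (int key = 0; key < NUM_KEYS; key++)
    {
        bool wasPressed = prevReadings[key] > PRESS_THRESHOLD;
        bool isPressed = newReadings[key] > PRESS_THRESHOLD;

        if (isPressed && !wasPressed)
        {
            // note-on: status byte, note number, velocity from the sensor reading
            midiOut.push_back (0x90);
            midiOut.push_back (BASE_NOTE + key);
            midiOut.push_back (static_cast<uint8_t> (newReadings[key] > 127 ? 127 : newReadings[key]));
        }
        else if (!isPressed && wasPressed)
        {
            // note-off
            midiOut.push_back (0x80);
            midiOut.push_back (BASE_NOTE + key);
            midiOut.push_back (0);
        }

        prevReadings[key] = newReadings[key];
    }
    return midiOut;
}
```

On the real Arduino the returned bytes would simply be written to the serial port connected to the BBB.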
As previously stated, I decided to use a dedicated microcontroller for this task, instead of using the BBB, for two main reasons:
- Splitting tasks - The main job for the BBB in this project is to run a sound synthesis engine, which is going to be time-critical, so I don't want it doing any extra tasks that could slow it down. Also, the scanning of the piano's 18 keys needs to be done as fast as possible so that the keys trigger sound as soon as they are pressed, making a dedicated microcontroller for this task preferable.
- More modular - Connecting a microcontroller to the BBB, rather than connecting 18 sensors directly, requires far fewer connections and wires to the BBB, which makes it easier to remove the key mech or BBB from the piano if desired.
You can see the latest version of the Keyboard code here.
Panel
The software for the panel is essentially going to be the same as that of the keyboard - a sketch running on a second Arduino Pro Mini that scans the state of a number of potentiometers and switches, sending any control changes to the BBB over serial using MIDI CC messages. Once again a dedicated microcontroller is being used for this task for the exact same reasons.
I've only just started writing the panel code, and as I haven't yet completed the circuit this may change, so I'll wait until a later blogpost to show the code for this.
BeagleBone Black Software
The BBB is both the brain and the soul of the vintage toy piano - by that I mean it runs the central process that communicates between all the different parts of the synth, as well as running the synthesis engine that creates the sound of the synthesiser. I decided to split these two main tasks into separate pieces of software which run side-by-side on the Linux OS - vintageBrain and vintageSoundEngine - which communicate with each other using standard MIDI messages sent via datagram sockets.
I've given each of these tasks a dedicated application for almost the same reasons that I'm using Arduinos alongside the BBB:
- Multithreading - Splitting the tasks into two separate applications means that each can run as its own process, gaining concurrency without the complexities of writing a multi-threaded application.
- Using multiple programming languages - vintageBrain is written in C, the language in which I've had the most experience developing this kind of application, whereas vintageSoundEngine is written in C++ due to using the C++ audio synthesis library Maximilian. However, these two languages aren't that different and can easily be combined if needed.
- Keeping code separate - Developing two completely separate applications keeps the code separate, rather than mixing together lots of code that does different things, which could make it harder to maintain. This can be addressed in object-oriented languages such as C++, where code can be split into dedicated classes/objects, however the C language doesn't have this feature.
- More modular - Say in the future I want to swap my digital sound engine for an analogue one; having the brain application separate from the sound application means that all I'd need to do is reroute my messages from the brain to a different destination, rather than having to rewrite a large chunk of the program.
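The datagram-socket link between the two applications can be sketched as follows. A socketpair is used here purely to keep the example self-contained and runnable; the real brain and sound-engine processes presumably use separate sockets bound to known addresses.

```cpp
#include <array>
#include <cstdint>
#include <sys/socket.h>
#include <unistd.h>

// Sketch (Linux/POSIX): pass one raw 3-byte MIDI message between two
// socket endpoints, as the brain and sound engine do. The connected
// socketpair stands in for the two applications' datagram sockets.
std::array<uint8_t, 3> roundTrip (const std::array<uint8_t, 3>& msg)
{
    int fds[2];
    std::array<uint8_t, 3> received = { 0, 0, 0 };

    if (socketpair (AF_UNIX, SOCK_DGRAM, 0, fds) != 0)
        return received;

    // 'vintageBrain' side: send the MIDI message as a single datagram
    send (fds[0], msg.data(), msg.size(), 0);

    // 'vintageSoundEngine' side: each recv() returns one whole datagram,
    // so MIDI messages arrive as complete, already-framed units
    recv (fds[1], received.data(), received.size(), 0);

    close (fds[0]);
    close (fds[1]);
    return received;
}
```

The message-framing property is a key advantage of datagram sockets here: unlike a stream socket, there is no need to scan the byte stream to find MIDI message boundaries.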
vintageBrain
As stated above, the vintageBrain application handles the task of allowing all the separate parts of the synthesiser to communicate. It is a single-threaded application that listens for messages coming from the keyboard, MIDI, and panel serial ports, and sends the messages on to the sound engine and possibly back to the MIDI serial port. It also handles all the voice and keyboard settings of the synthesiser, particularly:
- Voice mode and voice allocation. In polyphonic mode this involves knowing which digital 'voice' within the vintageSoundEngine application each note and aftertouch message needs to be sent to, and in monophonic mode keeping track of all currently held down notes/keys so that the synth can be played with the expected mono behaviour.
- Keyboard notes. The raw MIDI note messages coming directly from the keyboard will always be the same, however it's the vintageBrain's job to modify these messages based on the octave, transpose, and scale settings, allowing the user to choose the exact range of notes that the keyboard can play.
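The note-remapping job described above might look roughly like this, covering octave and transpose only (the scale mapping is omitted); the function name and parameter ranges are my assumptions for illustration.

```cpp
#include <cstdint>

// Sketch: shift a raw keyboard note number by the octave and transpose
// settings, clamping the result to the valid MIDI note range.
uint8_t remapKeyboardNote (uint8_t rawNote, int octave, int transpose)
{
    int note = rawNote + (octave * 12) + transpose;

    // clamp to the 0-127 MIDI note range
    if (note < 0) note = 0;
    if (note > 127) note = 127;

    return static_cast<uint8_t> (note);
}
```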
This application is developed in C, and I use my cross-compiler mentioned in a previous log to compile the application before using a script to copy the binary onto the BBB. You can see the up-to-date code for vintageBrain here.
vintageSoundEngine
vintageSoundEngine is the more interesting application of the two, as this is where the sound synthesis engine has been developed. It is a multithreaded application in which the main thread is responsible for processing any MIDI messages coming from vintageBrain via the datagram socket, which are used to trigger and control the sound, while the second thread handles audio streaming and processing. As stated previously, I am using the Maximilian audio synthesis library to develop my synthesis engine, and a lot of the structure of this application is based on the example Maximilian applications. However, within this application I've created a 'vintageVoice' class which handles all the audio processing for a single 'voice' within my synth; making a dedicated class/object for this allows me to easily increase or decrease the number of voices within my synth.
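The pay-off of the per-voice class design can be sketched like this; the class body below is a simplified stand-in for the real vintageVoice's oscillator/filter/envelope processing, and all names here are illustrative.

```cpp
#include <cstdint>
#include <vector>

// Stand-in for the real per-voice processing: each voice owns its own
// state and produces one output sample per call.
class VintageVoice
{
public:
    void processNoteMessage (bool noteOn, uint8_t note, uint8_t velocity)
    {
        playing = noteOn;
        amplitude = noteOn ? velocity / 127.0 : 0.0;
    }

    // called once per audio sample by the audio callback;
    // the real class would run oscillators, filter, and envelopes here
    double processAudio()
    {
        return playing ? amplitude : 0.0;
    }

private:
    bool playing = false;
    double amplitude = 0.0;
};

// The engine just sums the output of however many voices it owns, so
// changing the polyphony is a matter of changing the vector's size.
double processAllVoices (std::vector<VintageVoice>& voices)
{
    double mix = 0.0;
    for (auto& voice : voices)
        mix += voice.processAudio();
    return mix;
}
```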
This application is developed in C++, and is compiled on the BBB itself due to not being able to get my cross-compiler to compile any Maximilian-based application, as outlined in a previous log. I will talk about this application and the sound engine in a lot more detail in a future log, as well as give examples of the code.