rPi bare metal vGuitar rig

An attempt to create a working live guitar rig that includes effects, midi synthesizers, loopers, controllers, and user feedback systems.

The project consists of nothing less than re-envisioning, and implementing in bare metal, the interface between a person with a guitar and the systems they use to modify and/or add to the sound of the guitar. It includes guitar effects, a USB MIDI synthesizer, and a sophisticated looper. It accepts input from USB MIDI instruments and existing controllers, mice and keyboards, touch screens, existing analog controllers, and custom-built controllers, using serial, I2C, rfcomm over Bluetooth, and other protocols and transports to interface to custom-built or existing apps on remote devices like phones and tablets. It provides feedback through HDMI, LCD, and LED displays, USB MIDI output and serial ports, as well as communications with custom-built or existing apps on remote devices. It is intended to result in a robust and compact system for heavy road use which minimizes the amount of equipment to transport, and the complexity and time of setup and teardown.

HISTORY

Please see My Embedded Project History page for a history of the projects that led up to this vGuitar rig.

MOTIVATION


I've been a gigging guitar player for over 40 years.  I've been a professional software engineer for the same amount of time.   Over the years my "rig" has evolved from a few foot pedals, to multi-effect pedals and loopers, to most recently, an iPad-based system that includes an FTP Triple Play guitar synthesizer with SampleTank, ToneStack, and Quantiloop Pro running in an AudioBus framework, with IO provided by an iRigHD USB audio device for guitar sound input and final sound output.   I use a variety of controllers including 4 analog volume pedals, a SoftStep II foot pedal, and an MPD218 that I put on the floor as a foot switch.


The basic problem with the iPad setup is latency.  I have measured 38ms of latency from the time I pluck the guitar until the sound from the guitar makes it up through the iRig into the iPad, gets through the Audiobus framework, and is returned as a line output by the iRig.   This is independent of the particular apps (effects) I am using.   And it even appears to be independent of Audiobus itself.   This appears to mostly be latency caused by conversion to USB, and the sound making its way up the USB stack into the iPad sound architecture and back out to the iRig.

Another major issue is the complexity, and the resultant fragility, of the setup.  There are no fewer than 21 connectors, including a 7 port USB hub, and a dozen or more cables, where a failure of any one of them can basically take down the entire system .... live and in real-time ... in the middle of a song while I am onstage.

Another factor is that I am using a general purpose device (the iPad) for a very specific function, and it is not optimized for that.   This manifests itself in many ways.

iOS, and the whole Apple world, are notoriously closed and proprietary.   You can't even write an app for them without Apple's approval.   I'm not talking about this whole generation of "served" apps, where you have to connect to the "cloud" or a web server and are really running a JavaScript app.   These frameworks are prolific, but not appropriate for what I'm trying to do.  A served app won't do for a live gig rig.  I'm talking about native apps only, like you would expect from a professional setup.

You have to accept the architecture iOS provides (Core MIDI) and the way the apps that you purchase interact with it. In turn, each app is its own closed, proprietary architecture.

For example, SampleTank cannot be made to respond to a specific MIDI device.  It accepts any MIDI device, all glommed together by Core MIDI, and the only way you can differentiate devices is by using the precious MIDI channel numbers, 1 through 16.   Of course, in full blown mode, the FTP pickup uses no less than 14 of those 16 channels, due to its own bogus setup (it uses channels 1-6 for the strings, and then duplicates them again on channels 11-16, while also using two or three channels for mysterious proprietary messages).

Don't get me started on the FTP itself!    It outputs so many MIDI messages, including sysex, during a gig, that often I think the whole system just crashes because of overload.    I have spent a LOT of time reverse engineering the FTP USB pickup.   Part of my project will be to isolate and filter it, so that it presents the correct set of MIDI events that I want to see, and not the millions of other messages that clog up the system downstream.

Anyways,  please understand that I absolutely LOVED the iPad rig when I first got it.   Mostly because for the first time I personally could add bass, organ, piano, violin, or spacey synth sounds, to my onstage repertoire.   And Quantiloop...


  • 1 × Motivated Engineer - an engineer who is also a musician, vast programming skills, obsession with actualizing things
  • 5 × Arduino Uno - gotta start someplace, so there are at least two components, then next thing you know you have 5 of 'em
  • 1 × General Architecture Document - hmm ... or is that a "File" ... a component of any good software project is a good architecture document!
  • 12392912 × other little pieces - not counting resistors

  • First UI and bare-metal multi-core

    Patrick • 06/28/2019 at 04:18 • 0 comments

    I have checked in a very rudimentary example-only four track audio recorder based on my versions of the bare metal Circle and ported Teensy Audio libraries.   I hacked my way through the Circle addon UGUI library, starting with the Tiny Oscilloscope example, and then adding my own C++ classes, to get a nominally workable windowing system.  For the first time I started testing the audio system in conjunction with some sort-of real world capabilities, namely a mouse and HDMI output.
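
    For anyone else poking at the addon: the nice thing about UGUI is that it only needs a pixel-set callback and a resolution, and all of its fonts, windows, and primitives get drawn through that single function.  A minimal sketch of the hookup looks roughly like the following; the screen size, the callback body, and the header path are placeholders here, not my actual checked-in code.

    // sketch of hooking UGUI to a framebuffer; sizes and callback are placeholders
    #include <ugui/ugui.h>      // header path depends on how the Circle "ugui" addon is set up

    #define SCREEN_WIDTH   800
    #define SCREEN_HEIGHT  480

    static UG_GUI s_gui;

    // UGUI calls this for every pixel it draws; the HDMI framebuffer
    // (or SPI LCD, etc.) write goes in here
    static void setPixel(UG_S16 x, UG_S16 y, UG_COLOR color)
    {
        // framebuffer write at (x,y) goes here
    }

    void initGui()
    {
        UG_Init(&s_gui, setPixel, SCREEN_WIDTH, SCREEN_HEIGHT);
        UG_FillScreen(C_BLACK);
        UG_FontSelect(&FONT_8X12);
        UG_SetForecolor(C_WHITE);
        UG_SetBackcolor(C_BLACK);
        UG_PutString(10, 10, (char *) "four track recorder");
    }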

    I need to emphasize that this is a very early preliminary proof-of-concept of a small part of what the system can/will do.  Most of the code is actually throw-away, as, for example, I will probably write my own event driven windowing system from the ground up.  I will not, on the other hand, hopefully, be re-writing the USB stack, which, thanks to rst, seems to work pretty well at this stage.

    I spent some time evaluating available GUIs that I could port to Circle and came away deciding to write my own, for the same reasons I am doing this project in the first place.  Yes, it *might* be possible to port GLUT to the OpenGL 3 work that rst has already done, and then to port GTK on top of that, and then to port a windowing system on top of that, yawn.  But part of the whole raison d'etre for this project is to get away from huge stacks of complicated arcane software into a simpler C++ framework that a person can easily understand.  Linux is what, 3 million lines of code, and takes hours and hours to compile.   This program, and Circle, are probably on the order of 50,000 lines and compile in less than a minute.

    That being said, the UGUI addon is nice and works well for this simple demo, but I realize that I want to build a complete audio oriented event driven windowing system … with things like MIDI and Audio device control messages built in from the ground up, so this code is, as I said, more or less a proof of concept and a way to test things going forward.

    There were a couple of other personal “firsts” in this foray.  The bare metal code is capable of running on a single core or on multiple cores.  In multiple-core “mode” it runs the audio and USB interrupts on core 0, the audio processing tasks on core 1, and the UI on core 2.   It’s nice to know that there is some horsepower in the barn in case I need it (which I will) in the future, even though in single core mode the simple recorder, even with 4 tracks simultaneously, is using less than 1% of the processor time by certain measurements.
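
    For those curious how the multi-core "mode" is wired up: Circle provides a CMultiCoreSupport class whose Run() method gets invoked on each of the secondary cores, and my code subclasses it roughly as in the sketch below.  The routine names (and the exact core assignments) here are stand-ins, not the actual checked-in code; it also requires ARM_ALLOW_MULTI_CORE to be enabled in Circle's configuration.

    // sketch only: dispatching work to cores via Circle's CMultiCoreSupport
    #include <circle/multicore.h>
    #include <circle/memory.h>

    void doAudioProcessing();   // stand-in for the Teensy-library update pass
    void doUI();                // stand-in for the UGUI / windowing loop

    class CCoreDispatcher : public CMultiCoreSupport
    {
    public:
        CCoreDispatcher(CMemorySystem *pMemory)
            : CMultiCoreSupport(pMemory) {}

        void Run(unsigned nCore) override       // called once on each of cores 1..3
        {
            if (nCore == 1)                     // core 1: audio processing tasks
            {
                while (true)
                    doAudioProcessing();
            }
            if (nCore == 2)                     // core 2: the UI
            {
                while (true)
                    doUI();
            }

            // core 0 never gets here: it stays in the kernel's main loop,
            // servicing the audio and USB interrupts; any remaining core idles
            while (true) { }
        }
    };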

    This project also allowed me to delve into visualization of the audio … which came in handy, actually, in finding a bug in one of the audio devices (the "tdm" device I use with the Octo).  Sometimes it is hard to deal with zillions of bytes flying by on the wire, but a picture immediately shows you the glitch.

    So there’s that too …

    (p.s. sorry for the image quality ... it's a photo of my television taken with my phone ... no HDMI capture at this time).

    In any case, I feel like I am now nearly in a position to begin designing the actual system.   I have a few different audio cards working and a basic audio library that can efficiently process sound.  There are some audio objects and effects that are workable, including reverb, a sine-wave generator, a four channel mixer, and so on, that I can play with.  I have a rudimentary UI capability and some understanding of how the real windowing system will work.   I believe (and will be proving) that I can hook up a variety of different touch screens, from the official 7” rpi screen down to a 2.5” screen that I got.  I have a slew of Teensys and Arduinos with which to develop interesting controllers, and I believe that the USB MIDI is likely to work “out of the box”.

    There's obviously still some stuff...


  • Pi Circle to Teensy Audio Shield and Quad Audio device

    Patrick • 06/19/2019 at 03:12 • 0 comments

    I added the AudioControlSGTL5000 object to my port of the Teensy Audio Library.  Please see the page at:

    https://hackaday.io/page/6335-pi-circle-to-teensy-audio-shield-and-quad-audio-device

  • Circle Fork Source Code

    Patrick • 06/16/2019 at 18:06 • 0 comments

    Without further ado, here is a link to my fork of the Circle source code, including the work-in-progress on the Circle Teensy Audio Library and my Circle rPi Bootloader, among other things.

    https://bitbucket.org/phorton1/src-circle/src/master/

    Sorry it's in bitbucket.  At some point I will need to move everything to github.

  • Initial Latency Benchmarks

    Patrick • 06/16/2019 at 18:02 • 0 comments

    I did some initial benchmarks of the latency of the Circle Teensy Audio Library with the Octo 6-in, 8-out sound card.   At first I was disappointed.   With the standard settings of 44.1 kHz sampling and a 128-sample buffer, the latency was about 9 ms (milliseconds).   Though way better than the 38 ms I was getting from the iPad setup, I was hoping it would be quicker.   I then tried a few things.

    • default setup (44.1 kHz, 128 sample buffer) - 9 ms
    • "short circuit" input buffer to output buffer - 6 ms
    • decrease buffer size to 32 samples - 3 ms
    • increase sample rate to 88.2 kHz - approx 1.5 ms

    The initial measurement represents the output buffer being 3 full buffers behind the input buffer.  This sort of makes sense, but I was hoping it would be 6ms.

    The first thing I tried was to short circuit my code, using the same buffers for input and output DMA.  I had noticed before, due to a bug, that this worked, though I'm not sure why.  By using the same DMA buffers, data is being read into the buffers FROM the sound card simultaneously as it is being transmitted TO the sound card.  It just happens to work, I think, because of the order in which I start the DMAs.  It is not a valid measurement, because the system is basically unusable, with no access to the data in this configuration, but it gives me a rough idea of the minimum latency possible at a given sample rate and buffer size.  Since each buffer takes about 3ms to transfer, 6ms means that the transmit buffer is 2 full buffers behind the input buffer.
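
    For reference, the buffer arithmetic behind those numbers is easy to sanity check.  This little standalone program (not part of the audio code, just the math) computes the time held in one buffer and the delay for 2 or 3 buffers in flight:

    // back-of-the-envelope latency check
    #include <cstdio>

    // milliseconds of audio held in one buffer of nSamples at sampleRate
    static float bufferMs(float sampleRate, int nSamples)
    {
        return 1000.0f * nSamples / sampleRate;
    }

    int main()
    {
        float ms128_441 = bufferMs(44100.0f, 128);   // ~2.9 ms per buffer
        float ms32_441  = bufferMs(44100.0f, 32);    // ~0.73 ms per buffer
        float ms32_882  = bufferMs(88200.0f, 32);    // ~0.36 ms per buffer

        printf("128 @ 44.1k: %.1f ms/buffer, 3 behind = %.1f ms, 2 behind = %.1f ms\n",
               ms128_441, 3 * ms128_441, 2 * ms128_441);   // ~2.9, ~8.7 ("9"), ~5.8 ("6")
        printf(" 32 @ 44.1k: %.2f ms/buffer\n", ms32_441);
        printf(" 32 @ 88.2k: %.2f ms/buffer\n", ms32_882);
        return 0;
    }

    (The measured 3 ms and 1.5 ms figures with 32-sample buffers are a bit more than three buffers' worth, so there is presumably some additional fixed overhead in the converters and DMA, but the scaling is clear.)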

    So, I then decreased the buffer size to 32 samples, with a resultant measured latency of 3 ms.  I'm not sure if the reverbs will work, but straight through sound worked ok.  By the way, the Octo sounds pretty darned good.  I cannot hear any artifacts or distortion at any reasonable listening levels.  When I decreased the buffer to 32 samples, it continued to work even without short circuiting.  The buffers ARE going through the Teensy Audio Library transmit() and update() processes, so this is a usable configuration.

    I then tried upping the sample rate to 88.2 kHz - which means BCLK is running at 22.4 MHz and LRCLK at about 704 kHz - and it still worked!  I was surprised.   I'll have to see how it works with real code (i.e. the reverbs), but this brought the latency down to a respectable 1.5 ms.

    In my experience, once you get under about 2 ms, we humans perceive it as real-time.  Although most folks seem to accept "under 10ms" as functional, I am wary, particularly this early in development, of assuming that is sufficient.   So I want to ensure that, if I need to in the future, the system can be built to a lower latency specification.

  • Audio Injector Octo working with Circle port of Teensy Audio Library

    Patrick • 06/13/2019 at 13:58 • 0 comments

    Got the 6-in, 8-out, Audio Injector Octo sound card working with my bare metal Circle port of the Teensy Audio Library !!

    I am very close to making the source public on github.

    After much consideration, I finally decided about two weeks ago to port Paul's Teensy Audio Library to work with rst's bare metal Circle framework on the rPi.   I retain my desire to avoid using Linux and the complicated driver layers it presents.  Paul's library is far simpler, easier to understand and port, and provides a good basic starting point for doing sound projects.

    The initial steps of the port, using the Audio Injector Stereo soundcard, went well, as I already had a working bi-directional Circle I2S device written from previous experiments.  Within an evening I had the three most basic classes working: AudioInputI2S, AudioOutputI2S, and AudioControlWM8731.  After that it only took a few hours to port a couple more interesting example classes, including the Mixer and Reverbs.   The Mixer ported directly, in 5 minutes, with no source level mods!  To get the reverbs to compile I had to dig up and add some arm-math libraries to Circle, but even that only took a few hours, and they also basically compiled without source level mods.

    I even created an arduino-like framework for running the programs within Circle, so the following working source code will look familiar to anyone who has worked with the Teensy Audio Library before.  But please understand that the following program is not an Arduino sketch.  It runs on an rPi within the Circle bare metal framework!


    //-----------------------------------------------------------
    // reverb_test.cpp
    
    #include <Arduino.h>
         // Arduino.h is not needed but it is fun to list it
         // here as if it were a real arduino program 
    
    #include <audio\Audio.h>
    
    #define I2S_MASTER  0
    
    #if I2S_MASTER
         AudioInputI2S input;
    #else
         AudioInputI2Sslave input;
    #endif
    
    AudioEffectReverb   reverb1;
    AudioEffectReverb   reverb2;
    AudioMixer4         mixer1;
    AudioMixer4         mixer2;
    
    #if I2S_MASTER
         AudioOutputI2S output;
         AudioControlWM8731 control;
    #else
         AudioOutputI2Sslave output;
         AudioControlWM8731master control;
    #endif
    
    
    // the input feeds the reverbs (c1/c2) and the "dry" mixer channels (c3/c5);
    // the reverbs' "wet" outputs go to the other mixer channels (c4/c6),
    // and the mixers feed the left and right outputs (c7/c8)
    AudioConnection  c1(input,   0, reverb1, 0);
    AudioConnection  c2(input,   1, reverb2, 0);
    AudioConnection  c3(input,   0, mixer1, 0);
    AudioConnection  c4(reverb1, 0, mixer1, 1);
    AudioConnection  c5(input,   1, mixer2, 0);
    AudioConnection  c6(reverb2, 0, mixer2, 1);
    AudioConnection  c7(mixer1,  0, output, 0);
    AudioConnection  c8(mixer2,  0, output, 1);
    
    
    void setup()
    {
         printf("reverb_test::setup()\n");
    
         reverb1.reverbTime(0.6);
         reverb2.reverbTime(0.6);
    
         AudioMemory(20);
    
         control.enable();
         control.inputSelect(AUDIO_INPUT_LINEIN);
         control.inputLevel(1.0);
         control.volume(1.0);
    
         mixer1.gain(0, 0.6);
         mixer1.gain(1, 0.3);
         mixer2.gain(0, 0.6);
         mixer2.gain(1, 0.3);
    
         printf("reverb_test::setup() finished\n");
    }
    
    
    void loop()
    {
    }

    I figured it would then be pretty easy to implement the Audio Injector Octo, as Paul already had a ControlCS42448 class in the library.

    However, it turns out that the Octo is not really a "standard" CS42448.  Flatmax has implemented an FPGA on the board that becomes the BCLK and LRCLK master for both the CS42448 and the rPi I2S (which are both slaves), and it took some digging to figure out how to initialize it and get it working.  For better or worse, Flatmax used an additional FIVE GPIO pins to communicate with the FPGA: one for a reset signal, and four to set a 4-bit sample rate.    Although I'm not terribly thrilled with the extra GPIO pin usage, and wonder why he did not perhaps just piggyback the FPGA control as an additional I2C device on the existing bus, by and large the board is impressive with its 6 (six) input and 8 output channels, and sounds pretty clean at 44.1K 16-bit.
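
    For illustration, driving those pins from Circle looks roughly like the sketch below, using Circle's CGPIOPin class.  Be aware that the pin numbers, the reset polarity, and the bit ordering of the rate code here are placeholders, not the values the Octo actually uses:

    // illustrative only: present a 4-bit sample-rate code to the Octo's FPGA
    // and pulse its reset line; pin numbers / polarity / bit order are placeholders
    #include <circle/gpiopin.h>
    #include <circle/timer.h>

    #define OCTO_PIN_RESET   5      // placeholder
    #define OCTO_PIN_RATE0  22      // placeholder
    #define OCTO_PIN_RATE1  23      // placeholder
    #define OCTO_PIN_RATE2  24      // placeholder
    #define OCTO_PIN_RATE3  25      // placeholder

    void octoSetSampleRate(unsigned rateCode)   // 4-bit code understood by the FPGA
    {
        CGPIOPin reset(OCTO_PIN_RESET, GPIOModeOutput);
        CGPIOPin rate0(OCTO_PIN_RATE0, GPIOModeOutput);
        CGPIOPin rate1(OCTO_PIN_RATE1, GPIOModeOutput);
        CGPIOPin rate2(OCTO_PIN_RATE2, GPIOModeOutput);
        CGPIOPin rate3(OCTO_PIN_RATE3, GPIOModeOutput);

        rate0.Write((rateCode >> 0) & 1);       // present the rate code
        rate1.Write((rateCode >> 1) & 1);
        rate2.Write((rateCode >> 2) & 1);
        rate3.Write((rateCode >> 3) & 1);

        reset.Write(LOW);                       // pulse reset so the FPGA latches it
        CTimer::Get()->MsDelay(1);
        reset.Write(HIGH);
    }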

    I still have some reservations about moving forward with the Teensy Audio Library ... namely that it IS more or less limited (constructed) to 16 bit 44.1K sound, but for the time being it is what I have.  I can change that if I really want.  I...


  • prh - my first Log Entry

    Patrick • 05/24/2019 at 18:58 • 1 comment

    This is my first project log entry.  

    (1) Started Hackaday.io pages.

    There!   That's it.   Not really, there's tons of backdated log entries I want to make, but I'm just getting started with Hackaday and learning to use it.   I don't think the "MOTIVATION" section from my initial "Details" should go here in the Log, but maybe that's a way to get started.  All I know is that I have made my first log entry, and this is it.

    pressing "Publish" ...


  • 1. Spend nearly 50 years playing the guitar and writing code

    A prerequisite for reproducing this project from scratch.   Eventually this will probably be an actual set of steps to build a box, but for now, I'm just playing around.

  • 2. Retire comfortably to a sailboat in Panama

    Otherwise, how could you have the time to do all this?

  • 3. Pick the guitar rig of your dreams

    Since I live on a sailboat and have to transport everything by dinghy, small and robust is important to me.   The guitar rig this is intended to replace is an iPad running SampleTank, Quantiloop, and Tonestack within an Audiobus environment, using an iRigHD USB audio device for guitar input and line out.  My main guitar is a Rainsong carbon fiber acoustic guitar with an added Fishman Triple Play (FTP) MIDI pickup that communicates with a USB dongle.   On the floor are 4 analog pedals connected to an analog-to-USB doohickey that is basically a Teensy on the inside, along with a SoftStep II USB MIDI switch pedal, and an Akai MPD218 that I use as a foot switch array.    I also have one of the few ACPADs, which is like a midi-guitar-wing, that *could* provide a variety of MIDI controls over BTLE, but I don't think I'm gonna use it.   It looks cool though and has a bunch of LEDs, drum pads, and slider things that go on the face of the guitar.



Discussions

Patrick wrote 05/24/2019 at 19:05

a comment at the project level, about my own project ...

