
openSampler

a hardware sampler based on the raspberry pi running bare metal (so no Linux)

A hardware sampler implementation using the raspberry pi hardware.

So I'm very into samplers. I create sample based music, if you're curious about that you can check that out on Spotify: https://open.spotify.com/album/5rpGsYSNGsEcHBqkMMOj1d?si=EucBjNHOT3ikMJifgUlW0w.

I have been in love with samplers since I was a kid, but couldn't afford one. Specifically, I was in love with the ones from Akai for some reason. I remember gazing upon an image of an Akai s6000 in a sales magazine for a large music shop, dreaming about all the crazy things this thing could do, even though I did not really have a good understanding of what a sampler actually was back then. It just looked like an interesting, futuristic, huge machine with a Game Boy-like removable device.

Skip to 2019. I finally acquired one when they were breaking down the old audio postproduction studio at the television network where I worked back then.

It was in fairly good condition, it just needed some external cleaning. Checking the contents of the internal hard drive, I found the remnants of sounds used by the "Who Wants To Be A Millionaire" show. It's been a long time since that last aired in my country, so it must have been just sitting there for a while. I ordered a SCSI2SD kit, removed the internal SCSI hard drive and installed the SCSI2SD kit instead, leaving the original floppy drive intact. Great, I now have a fully working, future-proofed Akai s6000 to play with. It's great and all, but I just wish the firmware source code was leaked somewhere so I could start to make some changes to it.

My main sampler is an Akai MPC1000. It just checks all the boxes I want from a sampler, is easy to operate, and has a very raw processing feel to it. When pitching samples, there seems to be no low pass filter (which is something I actually like a lot). I even own two of them, just in case one breaks down. But it's not completely perfect. While JJOS is famous for adding a lot of features to the existing firmware, it also adds stuff I don't really need, and makes the UI more complex in my opinion.

In Utopia, there would be a sampler that combines the best of the s6000's friendly UI and feature set, with all the strengths of the MPC1000.

That's when I started thinking: as a programmer, if I really put my mind to it, I could make one myself these days with all the information out there. So I got started researching hardware. First I was looking at DSPs. Full disclosure, I suck at maths. I don't even understand simple concepts. But I'm a good handyman, and can puzzle things together. Ideally I would find a base to work with, and puzzle my way through. So I read some more, and finally found someone who said that the Raspberry Pi's ARM is probably powerful enough to blow these old DSPs out of the water. A while back I was experimenting with programming a bare metal midi processor on the Pi 3 from scratch using David Welch's tutorials in C, so I had already gained some knowledge on how to do it. The Pi also has a ton of working memory, unlike MCUs such as the Arduino, AVR, PIC or even ESP32. While researching I also came across Circle, a bare metal framework for the Raspberry Pi series that has a lot of awesome work already put into it, and figured this would be the right choice for my project. I'm not really familiar with C++, but I have a lot of experience with object oriented languages such as C# and Java, so the object concept should not be an obstacle.

We're going to take things slow, step by step. This has the potential to be one of those huge-mountain-of-work projects that never gets finished, so I need to keep things slow and simple if I don't want to discourage myself.

I imagine the steps would be like this:

1. Setup bare metal compiler environment workflow with Circle on macOS

2. Create a simple program that plays a sample through the headphone jack

3. Find out how to mix samples for playing back multiple overlapping voices.

4. Implement classes for voices and mixing.

5. ...

Like I said, I suck at maths, so I'm going the...


  • bixl, 1-bit pixel editor is born!

    Nick Verlinden, 08/05/2020 at 08:27

    If you read the previous logs, you might have gotten the message that I love 1-bit guis. I loved Atari TOS and the early versions of Mac OS. To create 1-bit graphics, you can draw lines or pixels using code like this, for instance (this is not real code!):

    Draw::Line(0,0,320,240);

    To draw fonts, there are several different solutions out there. Having followed David Welch's bare metal programming tutorials, I was using the psf font format, which is very simple. It contains a 32-byte header with a few important parameters, such as the width and height of the characters. You can create these fonts by hand if you want. Just create a C header file with a byte array, and start entering bytes manually according to the spec.

    Even though the format is quite simple, it's a very tedious job. I looked around, and there seemed to be no simple (free) app to create 1 bit pixel graphics. I also have a Samsung Note 9, and it seemed like a romantic idea to be able to use the included pen for drawing. I have to travel to work a lot this month, so I had some train time to waste. That's when I created bixl. It's a very simple pixel drawing program.

    It's a web app that is available here: https://synthology.gitlab.io/bixl/

    I had some experimentation code lying around for creating a pwa, so this web app is also a pwa that you can install directly from the browser onto your computer/phone/... 

    You can start from scratch, or load an existing psf font. Only psf version 2 is supported at this time, but I'm not planning to add support for more file formats. If you want to add it yourself, dig in, it's open source! :-)

    Fighting the lifelong enemy called perfection

    Now is a great time to talk to you about the worst enemy I have to face all the time: perfection. If you don't want a lecture about that, just skip the rest of the log as there will be no new technical information.

    When working on stuff like this, it's easy to lose yourself in details. I constantly think things like: 'oh, it would be nice to add feature x', or: 'people will probably need feature y'. This process keeps on going, and if you give in to it, you will never deliver, or it will take you a very long time to do so. And even then, it will never feel complete. There's always something more you can add.

    A few years ago, I was dreaming of a distributed system where you could just include packages directly from github/lab/... in a web app. Think of it like npm, but without the need to download and keep these libraries locally. Your browser would just fetch them when needed. So I started experimenting and writing code. At one point I had a working system. It was fast, and it worked. Then the ocd kicked in and said: you need to rewrite this now, because people are going to see this mess you created. So I started to rewrite with the knowledge I gained by creating it. JavaScript promises were the new thing, and everyone was using them, so I had to use them in my rewrite as well. Then, when that version was almost finished, I decided I wanted to add a feature because it seemed nice to have, or it just made sense (though it wasn't required!). But then I patched something and the code seemed messy again (it really wasn't, but I thought it was), so I decided I would rewrite it from the ground up with this new knowledge. By the end, you could write plugins for loading from your own custom repository and all, quite fancy. It just took 3 years of train travel time, and a decent amount of spare time from my life to create it. A few months later, this project was made almost obsolete by the JavaScript import API. I never started using it. So when I look back at it now, it just seems like one big waste of time. Imagine you spent your life building your own car. But by the time you finish, cars are no longer being used and have been replaced by drones. You created a nice car. For yourself. Great. Now you know how to build a car.

    So it was then that I realised that I have to fight the ocd and perfectionism when it strikes. With...


  • Getting the 64bit compiler working.. Failure!

    Nick Verlinden, 07/27/2020 at 09:18

    Failure is a part of life, and we just have to deal with it and move on. After several hours of trying to compile the aarch64 gcc cross compiler on macOS myself, I have given it up for now. I will continue development and testing in 32bit. I'll just compile and test on linux for aarch64 when a major version has been released.

    I got as far as getting a binary for the g++ compiler, but using it with circle results in:

      CPP   actled.o
    'armv8-a+crc' is not a recognized processor for this target (ignoring processor)
    'armv8-a+crc' is not a recognized processor for this target (ignoring processor)
    /var/folders/q1/0jsxyqlx6jl_592pvc_z3lx00000gn/T//ccLAjiBh.s:1:2: error: unknown directive
            .arch armv8-a+crc
            ^
    /var/folders/q1/0jsxyqlx6jl_592pvc_z3lx00000gn/T//ccLAjiBh.s:10:2: error: unknown directive
            .type   _ZN7CActLEDC2Eb, %function
            ^
    /var/folders/q1/0jsxyqlx6jl_592pvc_z3lx00000gn/T//ccLAjiBh.s:17:16: error: brackets expression not supported on this target
            stp     x29, x30, [sp, -48]!
                              ^
    /var/folders/q1/0jsxyqlx6jl_592pvc_z3lx00000gn/T//ccLAjiBh.s:21:2: error: unknown use of instruction mnemonic without a size suffix
            mov     x29, sp
            ^
    /var/folders/q1/0jsxyqlx6jl_592pvc_z3lx00000gn/T//ccLAjiBh.s:25:2: error: invalid instruction mnemonic 'adrp'
            adrp    x2, .LANCHOR0
            ^~~~
    

     This is the procedure that finally gave me a (non-working) g++ binary:

    #download aarch64 gcc arm compiler source (https://developer.arm.com/tools-and-software/open-source-software/developer-tools/gnu-toolchain/gnu-a/downloads)
    #extract and cd into directory
    
    ./contrib/download_prerequisites
    
    mkdir build && cd build
    
    ../configure --build=x86_64-build_apple-darwin19.5.0 --host=x86_64-build_apple-darwin19.5.0 --target=aarch64-none-elf --prefix=/Users/nick/Downloads/aarch64/aarch64-none-elf --with-local-prefix=/Users/nick/Downloads/aarch64/aarch64-none-elf/aarch64-none-elf --with-gnu-as --with-gnu-ld --disable-libstdcxx --without-headers --with-newlib --enable-threads=no --disable-shared --disable-__cxa_atexit --disable-libffi --disable-libgomp --disable-libmudflap --disable-libmpx --disable-libssp --disable-libquadmath --disable-libquadmath-support --enable-lto --enable-target-optspace --disable-nls --disable-multilib --enable-languages=c,c++
    
    make -j 8

    libstdc++-v3 gave me a lot of problems when compiling that I could not figure out, so I tried to compile g++ without it (hence --disable-libstdcxx). I figure that is the root cause of all subsequent problems. Anyway, I wanted to share my workflow just in case someone else wants to have a stab at it.

    For the record, compiling the c compiler with the configure parameters below works, it's the c++ compiler that's giving me a hard time.

    download gcc arm source
    extract
    
    ./contrib/download_prerequisites
    
    mkdir build && cd build
    
    ../configure --build=x86_64-build_apple-darwin19.5.0 --host=x86_64-build_apple-darwin19.5.0 --target=aarch64-none-elf --prefix=/Users/nick/Downloads/aarch64/aarch64-none-elf --with-local-prefix=/Users/nick/Downloads/aarch64/aarch64-none-elf/aarch64-none-elf --without-headers --with-newlib --enable-threads=no --disable-shared --disable-__cxa_atexit --disable-libgomp --disable-libmudflap --disable-libmpx --disable-libssp --disable-libquadmath --disable-libquadmath-support --enable-lto --enable-target-optspace --disable-nls --disable-multilib --enable-languages=c
    
    make -j 8
    
    make install-strip

  • Eureka! It's working!

    Nick Verlinden, 07/22/2020 at 09:38

    Yesssss, making some good progress here! It's been a wild adventure already. I am very grateful Patrick already did the heavy lifting on the Audio Injector Octo card in his vGuitar project, because it took me a while to get it to work properly. 

    When I started to use some of his code I knew zero about i2s or DMA. Zilch, nada, nothing. And to be honest, I still don't fully understand it, but after staring at the code for hours, I now have at least a superficial understanding of how it works, and can move on to the next steps. Right now only 2 output channels are working, using a modified version of Circle's built-in i2s output code. Patrick actually rewrote it into something he could use with the Teensy audio library. There is a chance I might make something similar myself in the future when I want to get audio input working. The reason it doesn't work out of the box with Circle's code is that the Audio Injector should be the i2s master, while the Circle code is written for the Pi to be the i2s master. It took some poking around to get things working. But like I said, thanks to Patrick for figuring out the CS42448 initialisation part, and the part that makes the Pi accept a master clock from the i2s bus.

    The second reason is that the Octo actually uses something called TDM, which to my understanding abuses the i2s protocol to transfer more than 2 channels. Apparently i2s was only designed for transporting two channels simultaneously; TDM is a way to get multiple channels working over the i2s bus.

    The entire project now seems very promising since it left the theoretical concept stage. I've been giving the gui some thought as well, and think I'm going to create it with two types of input in mind: the touchscreen, and a rotary encoder (and other buttons). You should be able to do everything with the touchscreen, but also be able to connect a rotary encoder like on the mpc1000 and s6000 to change values fast.

    I'm working as a freelance video editor for 6 days a week in August, so development for this project might be a bit slowed down during that time because I will be mentally drained when I get home in the evening, and imagine I'll need my Sunday to recuperate from screen fatigue.

  • Update

    Nick Verlinden, 07/15/2020 at 11:14

    Today I received the Audio Injector Octo sound cards. I'm looking forward to figuring out how to get audio input and output working. I'll have a look at Patrick's vGuitar rig project code, because he already figured out a lot of the details.

    In the meantime I have been busy creating the ui code. Let's get something straight about that. I just absolutely love 1-bit guis, and both the mpc1000 and s6000 have 1-bit guis because of their 1-bit graphic displays. So, I'm creating a 1-bit gui for this project. I hear you thinking: this guy... I know, tastes differ, and you may not like the pixellated look of 1-bit displays, but with those limitations comes a minimalistic design, and that's the way I like it. Furthermore, if at some point we discover that we don't like touchscreens, we can always replace the touchscreen with a 1-bit spi/i2c display with minimal change. Also, since this is an open source project, you are free to fork it and create a gui to your liking! While looking at the guis of the mpc1000 and the s6000, I stumbled upon the gui of the mpc4000. It's like a mix between the mpc1000 and s6000, and it's a great source to base the gui on.

    The source of the first working version is going on the gitlab repo soon. Stay tuned!

  • Manipulating audio sample data

    Nick Verlinden, 07/10/2020 at 13:46

    Last time we figured out how sound works in code, so now we can get to the fun stuff and do some manipulation on that sound! 

    Now that we know that a sample is the amplitude, it was easy to figure out that by multiplying or dividing the value, you can make it louder or quieter. Now let's see what our sampler needs to be able to do.

    When you use a sample on an Akai MPC1000, you can specify its tuning during recording and manipulate the pitch after recording. On the MPC1000 this can sound quite grungy. I did not immediately know how pitch works in code, so I let it rest for a moment and went on to re-watch some episodes of Community. My brain kept doing some thinking work while I was watching. I don't know if this will make sense to you, or if I'm even explaining this in a way that makes it clear, but let's have a shot. I figured, talking music theory, that for a certain tone to be an octave higher, the wave needs to be double the speed. So, when you want it to be an octave lower, it needs to be half the speed.

    Could it be really that simple?

    It was already late, but I ran to the basement where my 'lab' is to test my theory. I came up with this piece of code.

    nLevel = Sound[int(sPos++ * pitch)];
    

    In case you haven't figured it out, this needs to be in a loop where sPos increases the sample data index.

    And you know what? To my big surprise, this actually sounds just like it would on the MPC1000. There is a reason for that. I suspect the MPC1000 does not have a low-pass filter or any fancy algorithm correcting the aliasing that occurs when resampling the audio. So the artefacts that you hear are the imperfections from modifying the length of the wave sample data.

    Figure out how pitching works: CHECK.

    If you have listened to my music, you will probably notice that I often like to use aliasing as an effect. Take this song for instance: 'https://open.spotify.com/album/5rpGsYSNGsEcHBqkMMOj1d?si=qn2PIBugQymtz5FKivAzqA', you can hear it very clearly in the vocal buildup in the middle of the song at 1:40.

    With the newly gained knowledge about how and when aliasing occurs, we can create a decimator effect (also often present in bit crusher style effects). This is what I came up with:

    // decimator
    float decimate = 1.0f;         
    if (decimate != 1) {
        int idx = int(int(sPos++ * decimate) / decimate);
        nLevel = Sound[int(idx * pitch)];
    }

    Now I'm not going to explain this in detail, just have a look at it, and try to figure it out knowing that 'Sound' is the signed 16-bit sample data, and nLevel is a signed short (in other words, a 16-bit signed int) that is going to be sent to the audio device's output. If you use a value of 0.1f for decimate, you will get a really lo-fi gritty sound. Just the way I like it.

    Talking about bit crushing, how to approach that? Simple, by removing bits like this:

    // bit reduction
    int bits = 9;
    nLevel = nLevel >> bits;
    nLevel = nLevel << bits;

    But on the MPC1000 the effect is called 'Bit Grunger', and it does not sound like the bit reduction technique in the code above. Instead, I think they drive the sound by compressing the quiet parts, so that it fits into the new bit depth. Think of it like rescaling the wave so it fits in the newly specified bit depth. Our code above just throws the bottom part away, but the code below adds some 'drive' to the sound.

    // drive
    float depth = 0;
    if (depth > 0) {
        nLevel = 32767 * (tanh((float(nLevel)/32767)*depth));
    }

    If you do the driving part before the bit reduction, it will sound more like the 'Bit Grunger' effect on the MPC1000. By the way, at this point I would like to thank Heikki Rasilo for helping me out with the math part of the drive. I did not even know what a tangent function was, and couldn't have done it without him. You know: I suck at math. In high school I even had extra after-school classes for algebra, but it just didn't work. I failed maths that year, and went on to the lower grade wood workshop education course. But that...


  • Finding out how sound works in code

    Nick Verlinden, 07/08/2020 at 11:14

    Last time we set up the compiler environment. I'm using it extensively now and it works great. I only had a few occasions where the bootloader did not start the image after sending it over serial (and then I had to reset it and send the image again, so no drama).

    I decided to start out by using the '34-sounddevices' sample project, because it looks really simple to modify it to play sound data instead of a 440 Hz tone.

    So my idea is to add sound data to the project, so that I can patch it in where the 440 Hz tone is fed into the buffer. I'm really in unfamiliar territory here, but I have heard of the double buffering pattern. I think that's what the original author does here. We have an audio buffer of a fixed length, and then there is a loop that fetches data from this buffer and writes parts of it to the audio device's output buffer. We are supposed to write data to the first buffer, so that the loop can write it to the second. The reason behind this is to prevent audio dropouts. Basically, by having two audio buffers, we make sure that the strictly timed audio device always has data, even if the timing of the processing/generation code is not that strict. Audio is very time sensitive. Much like a video plays at 30 frames per second, CD-quality audio (wow, it's been a while since I called it that) needs 44100 samples per second (times two if you want stereo output).

    If you're already experienced with how sound works in bits and bytes, I suggest you skip the rest of this log, as it really only explains the fundamentals, no advanced processing (yet).

    Now let's talk about what a sample actually is. Before starting this project, I had a vague representation in my head of how a sample is stored in computer memory. But diving deeper into this, I picked up a few things.

    So let's get it out of the way: sound consists of waves. A wave goes up and down. The louder the sound, the higher the wave goes. But like I said, a wave goes up and down. When it dips below the midpoint, it is still getting louder: what counts is the distance from the midpoint. This is called the amplitude. It's an audio waves thing; if this sounds weird to you, I suggest you look up how sound reproduction works, say how a speaker reproduces sound.

    A computer does not store waves, it stores bits. So how can a computer store a wave? By taking samples of it. Samples are points in time where the amplitude of the wave is measured. And for CD-quality audio, 44100 points are measured every second.

    There is a good image on Wikipedia that illustrates this: every point on the wave is a sample.

    For the time being, assume we're talking about 1 channel (so mono, not stereo). When your audio is stored as 8-bit, your computer offers you an array of 44100 bytes for each second of audio, representing the loudness of those parts of the wave. The bytes represent time sequentially, so byte 22050 contains a sample of the wave at 0.5 seconds in time. So every byte is 1 sample. The midpoint of a wave is silence. In the illustration from Wikipedia above, the midpoint is represented by the line in the middle. The image below illustrates this a little bit better because it shows you a longer waveform over time; you might say it's 'zoomed out' compared to the illustration from Wikipedia above, which is heavily 'zoomed in'.

    In 8-bit audio the midpoint is 127. So silence is represented by 127. That is ... when we are talking about 8-bit audio. For those of you who lived and played games in the MS DOS era, 8-bit audio just sounds yuck. CD-quality audio is 16-bit. And in 16-bit you have the choice to see your samples represented as a signed or unsigned integer. In case you wonder, a 'signed' integer is a number that can go below zero. Knowing that, when using a 16-bit signed integer, the highest amplitude (a.k.a. loudness) a sample can have is 32767 or -32768, and 0 equals silence. On the other hand, if audio is stored as an 'unsigned' 16-bit integer, the highest amplitude is...


  • Setting up our build environment

    Nick Verlinden, 07/04/2020 at 14:58

    I'm running macOS. I prefer that over Linux and Windows. I have worked a few years in IT infrastructure, and while my experience with Linux may not be equal to the veterans', I think the Linux ecosystem often gets in the way if you just want to get things done quickly without too much tinkering. Windows is just... well, let's say uninspiring. I hate Windows-style paths. Despite having years of scripting experience with Batch, I'm not looking forward to using it again, and PowerShell is just an abomination (to me). macOS has a great balance, despite most likely being 'evil', but I must be honest and say that I don't really care and just want to get things done. Politics aside, let's set up the compiler environment on macOS Catalina.

    Contents

    Installing the build tools

    Preparing the compilation workflow

         Cloning the Circle repository

         Setting up the project, and pre-building the framework binaries

         Serial bootloader

         Monitoring the serial port

    Soldering a reset switch

    That's it!

    Installing the build tools

    You'll need to install Brew. Get that done first!

    https://brew.sh/

    First I installed the 64 bit arm compiler from here:

    brew tap SergioBenitez/osxct
    brew install aarch64-none-elf
    brew install qemu

    But it seems that the 'g++' variant is missing for building c++ code, so I don't think this is a real requirement. I had it installed anyway because of a previous experiment with bare metal programming on the Pi 3 B+.

    This one has the compiler that the build scripts of Circle use, so we'll need this:

    brew tap ArmMbed/homebrew-formulae
    brew install arm-none-eabi-gcc

    We need make for building, and if you have the xcode commandline tools installed, make comes with it. But the problem is that the make version is too old for building Circle, so we need to install a newer version.

    brew install make

    After installing, you might have a conflict if make was already on the system; the newer version will then be installed as 'gmake' instead of make. You can fix this by adding the new make binary to your terminal's PATH variable. macOS Catalina now uses zsh, so you add it to your profile by modifying (or creating, if it does not already exist) '~/.zprofile':

    PATH="/usr/local/opt/make/libexec/gnubin:$PATH"

     After closing and reopening the terminal, executing 'make -v' will report a version of 4 or higher.

    Circle's bootloader build script uses wget to fetch some files from a git repo, and macOS has curl installed by default. So we need to add wget to our system.

    brew install wget

    We'll also need the python serial module for flashing our circle binary to the Pi over serial. More on that later, but we'll need this:

    pip3 install pyserial

    Preparing the compilation workflow

    Cloning the circle repository

    We'll start off by cloning the Circle repo from GitHub.

    git clone https://github.com/rsta2/circle

    Setting up the project, and pre-building the framework binaries

    First we have to modify the 'Rules.mk' file in the project root. We'll have to change the RASPPI version to the board we're using (for me that is 4), and we'll keep the 32bit architecture. If we want 64bit, we'll need to find the right compiler, but I'm not going to do that. For now, 32bit will be fine.

    AARCH     ?= 32
    RASPPI     ?= 4

    After doing that, we can start and build the framework binaries. Execute in the project root:

    ./makeall clean
    ./makeall

    Depending on your computer, this will take anywhere from a few seconds to a few minutes.

    Serial Bootloader

    Circle is already well developed, and comes with easy to use build scripts. Normally, when you want to use your custom kernel, you'll build it, and copy files to the microsd card, and insert it into your Pi. This will get very old soon, so we'll prepare a workflow for flashing the kernel over serial. This will allow us to upload a new kernel every time we reboot the pi.

    You'll need a usb-serial...


