
01/06/2022 Project log entry #7

A project log for Smart earbuds

The smart earbuds are earbuds that help prevent ear damage for artists and staff on stage

laurenlauren 06/01/2022 at 21:59 · 0 Comments
# -*- coding: utf-8 -*-
##############################################
# QuadMic Test for all 4-Microphones
# ---- this code plots the time series for all
# ---- four MEMS microphones on the QuadMic
# ---- attached to the Raspberry Pi
#
# -- by Josh Hrisko, Principal Engineer
#       Maker Portal LLC 2021
#

import pyaudio,sys,time
import matplotlib
matplotlib.use('TkAgg')
import numpy as np
import matplotlib.pyplot as plt
#
##############################################
# Finding QuadMic Device 
##############################################
#
def indx_getter():
    quadmic_indx = []
    for indx in range(audio.get_device_count()):
        dev = audio.get_device_info_by_index(indx) # get device
        if dev['maxInputChannels'] == 2: # look for the 2-input-channel device
            print('-'*30)
            print('Mics!')
            print('Device Index: {}'.format(indx)) # device index
            print('Device Name: {}'.format(dev['name'])) # device name
            print('Device Input Channels: {}'.format(dev['maxInputChannels'])) # channels
            quadmic_indx = int(indx)
            channels = dev['maxInputChannels']
    if quadmic_indx == []:
        print('No Mic Found')
        sys.exit() # exit the script if no QuadMic found
    return quadmic_indx,channels # return index, if found
#
##############################################
# pyaudio Streaming Object
##############################################
#
def audio_dev_formatter():
    stream = audio.open(format=pyaudio_format,rate=samp_rate,
                        channels=chans,input_device_index=quadmic_indx,
                        input=True,output_device_index=(0),output=True,frames_per_buffer=CHUNK) # audio stream
    stream.stop_stream() # stop streaming to prevent overload
    return stream
#
##############################################
# Grabbing Data from Buffer
##############################################
#
def data_grabber():
    stream.start_stream() # start data stream
    channel_data = [[]]*chans # data array
    [stream.read(CHUNK,exception_on_overflow=False) for ii in range(0,1)] # clears buffer
    for frame in range(0,int(np.ceil((samp_rate*record_length)/CHUNK))):
        if frame==0:
            print('Recording Started...')
        # grab data frames from buffer
        stream_data = stream.read(CHUNK,exception_on_overflow=False)
        data = np.frombuffer(stream_data,dtype=buffer_format) # grab data from buffer
        stream.write(data.tobytes()) # writing the data back lets us listen to the microphones' feedback
        for chan in range(chans): # loop through all channels
            channel_data[chan] = np.append(channel_data[chan],
                                        data[chan::chans]) # separate channels
    print('Recording Stopped')
    return channel_data
#
##############################################
# functions for plotting data
##############################################
#
def plotter():
    ##########################################
    # ---- time series for all mics
    plt.style.use('ggplot') # plot formatting
    fig,ax = plt.subplots(figsize=(12,8)) # create figure
    ax.set_ylabel('Amplitude',fontsize=16) # amplitude label
    ax.set_ylim([-2**15,2**15]) # set 16-bit limits
    fig.canvas.draw() # draw initial plot
    ax_bgnd = fig.canvas.copy_from_bbox(ax.bbox) # get background
    lines = [] # line array for updating
    for chan in range(chans): # loop through channels
        chan_line, = ax.plot(data_chunks[chan],
                label='Microphone {0:1d}'.format(chan+1)) # initial channel plot
        lines.append(chan_line) # channel plot array
    ax.legend(loc='upper center',
              bbox_to_anchor=(0.5,-0.05),ncol=chans) # legend for mic labels
    fig.show() # show plot
    return fig,ax,ax_bgnd,lines

def plot_updater():
    ##########################################
    # ---- time series and full-period FFT
    fig.canvas.restore_region(ax_bgnd) # restore background (for speed)
    for chan in range(chans):
        lines[chan].set_ydata(data_chunks[chan]) # set channel data
        ax.draw_artist(lines[chan]) # draw line
    fig.canvas.blit(ax.bbox) # blitting (for speed)
    fig.canvas.flush_events() # required for blitting
    return lines
#
##############################################
# Main Loop
##############################################
#
if __name__=="__main__":
    #########################
    # Audio Formatting
    #########################
    #
    samp_rate      = 48000 # audio sample rate
    CHUNK          = 12000 # frames per buffer reading
    buffer_format  = np.int16 # 16-bit for buffer
    pyaudio_format = pyaudio.paInt16 # bit depth of audio encoding
    
    audio = pyaudio.PyAudio() # start pyaudio device
    quadmic_indx,chans = indx_getter() # get QuadMic device index and channels
    
    stream = audio_dev_formatter() # audio stream

    record_length = 30 # seconds to record
    data_chunks = data_grabber() # grab the data    
    fig,ax,ax_bgnd,lines = plotter() # establish initial plot

    try:
        while True:
            data_chunks = data_grabber() # grab the data
            lines = plot_updater() # update plot with new data
    except KeyboardInterrupt:
        pass # Ctrl+C ends the loop so the cleanup below can run
    # Stop, Close and terminate the stream
    stream.stop_stream()
    stream.close()
    audio.terminate()

Updates to the previous code:

# Stop, Close and terminate the stream
stream.stop_stream()
stream.close()
audio.terminate()

This solves the problem of the code only being able to run once: the device stayed busy because the stream and the PyAudio instance were not closed, at least in Spyder.
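To guarantee these three calls run even when the acquisition loop is interrupted, the loop can be wrapped in a try/finally block. A minimal sketch of that pattern around our existing loop:

try:
    while True:
        data_chunks = data_grabber() # grab the data
        lines = plot_updater() # update plot with new data
except KeyboardInterrupt:
    pass # Ctrl+C just ends the loop
finally:
    # the device is released no matter how the loop ended
    stream.stop_stream()
    stream.close()
    audio.terminate()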

stream = audio.open(format=pyaudio_format,rate=samp_rate,
                        channels=chans,input_device_index=quadmic_indx,
                        input=True,output_device_index=(0),output=True,frames_per_buffer=CHUNK)

stream.write(data.tobytes())

By adding the output device (headphones plugged in) and writing to the stream, we can listen to what the microphones are hearing. But there's a lot of latency, which is going to be an issue if we want to implement the noise-cancelling function. We can also hear clicks. I think it's because the code streams the audio by recording for 0.1 s, then stopping and recording again, etc., so what we hear is basically the cuts in the sound.
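One way to cut both the latency and the clicks could be PyAudio's callback mode, where the library itself pulls small buffers in the background and hands them straight to the output, instead of our loop reading and writing big blocks. A rough sketch assuming the same 48000 Hz, 2-channel setup (device indices omitted; in our case they would be quadmic_indx and 0):

import time
import pyaudio

audio = pyaudio.PyAudio()

def passthrough(in_data, frame_count, time_info, status):
    # hand the captured frames straight back to the output device
    return (in_data, pyaudio.paContinue)

stream = audio.open(format=pyaudio.paInt16, rate=48000, channels=2,
                    input=True, output=True,
                    frames_per_buffer=256, # small buffers keep the latency low
                    stream_callback=passthrough)

stream.start_stream()
try:
    while stream.is_active():
        time.sleep(0.1) # the callback does the work in the background
except KeyboardInterrupt:
    pass
finally:
    stream.stop_stream()
    stream.close()
    audio.terminate()

With 256-frame buffers each block is about 5 ms of audio instead of 250 ms, and there are no gaps between reads, so the clicks should go away.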

So when I raise the recording length it gets better, but then the graph freezes. Raising the rate from 16000 Hz to 48000 Hz, the frames per buffer from 4000 to 12000, and the recording time from 0.1 s to 30 s makes the recordings much more continuous. I chose these values because I remember I was at 48000 Hz when I was testing the mics, so I took the ratio to 16000, which gave me 3, and multiplied the frames-per-buffer number by 3. I also realized that you have to keep a ratio of 400 between the frames-per-buffer number and the recording length to lower the latency.
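A quick check of the ratios in numbers (note that each stream.read still covers 0.25 s of audio with both settings; what really grew is the recording length per data_grabber call):

old_rate, old_chunk = 16000, 4000
new_rate, new_chunk = 48000, 12000

print(new_rate / old_rate)                        # 3.0 -> the sample-rate ratio
print(old_chunk * 3)                              # 12000 -> the new frames-per-buffer value
print(new_chunk / 30)                             # 400.0 -> frames per buffer / recording length
print(old_chunk / old_rate, new_chunk / new_rate) # 0.25 0.25 -> seconds of audio per read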

But if we set the recording length to a high value, it is impossible to visualize the waveform in real time.

Since there's already latency in the sound coming through the headphones, we will have to measure it and subtract it from the π phase shift to get our counter wave. Why not just modify the recording length to de-phase the signal? In the code, the recording length is linked to both the reading (microphones) and the writing (headphones) on the stream, so it would affect how the device reads the data from the microphones, as we saw earlier.
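For the counter wave itself, the simplest starting point is inverting the samples before writing them back, which is exactly a π (180°) phase shift. A small sketch of the idea, leaving the latency compensation aside for now:

import numpy as np

def counter_wave(data):
    # invert the int16 buffer; clip first because -(-32768) overflows in int16
    return (-np.clip(data.astype(np.int32), -32767, 32767)).astype(np.int16)

# inside data_grabber(), the monitoring line would then become:
#   stream.write(counter_wave(data).tobytes())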

Next step: noise cancelling (maybe by modifying the stream.write(data)) and adjustments.

If you want to see the waveform in real time:

sample rate: 16000 Hz
frames per buffer: 4000
recording length: 0.1 s
