Hi. So this is the story:

My interest in and knowledge of embedded systems has always revolved around 32-bit MCUs and a little bit of FPGA/CPLD devices, mostly in the audio signal processing world. Embedded Linux had always been out of my scope.

Back in the day I got an early revision of the BeagleBone Black with only 2 GB of eMMC flash instead of 4 GB, which made managing space in the latest distributions problematic. Playing around with that board, my interest leaned towards video streaming applications. Unfortunately, after messing with the ffmpeg library for far too long, I only managed to get an MJPEG stream over a USB Wi-Fi dongle with about one second of delay and a bad framerate. So I gave up.

And, after 10 years, here we go again :)

By choosing the BeagleBone AI-64 board I decided (I guess) not to follow the Jetson Nano/Raspberry Pi mainstream. Plus it was slightly cheaper than the Jetson. But there is one issue: it is not the most popular platform, so the community is small. The number of stupid questions you may ask on forums is strictly limited, or people simply cannot help you with your specific situation at all.

The Quick Start Guide is a must-read. In my case, messing up the file system and re-flashing it with the latest distros several times was the way to go.

A few caveats related to the hardware:

Mini DisplayPort instead of mini HDMI? That's because the TDA4VM SoC supports the DP interface in hardware. So only an active miniDP to HDMI 4K adapter will get you a picture over HDMI.

Power. I'm using a 5 V 3 A wall adapter with a barrel jack. The Waveshare UPS Module 3S works for me as an autonomous power supply. According to the UPS metrics, the BeagleBone AI-64 consumes about 5 W when idle, so it is quite a power-hungry device.

Wi-Fi. The BeagleBone AI-64 has an M.2 E-key PCIe connector for Wi-Fi modules. However, as I found from forum threads, only the Intel AX200 and AX210 have been tested and are supported out of the box.

So let's get started.

The BeagleBone AI-64 board has two CSI (Camera Serial Interface) connectors. This feature is what attracted me in the first place. But it seems that, so far, the only "supported" camera sensor is the IMX219. The TI Edge AI documentation for their TDA4VM-based evaluation board lists support for several other sensors, but the compatibility of their Device Tree Overlays (DTOs) is questionable.

So I took the Arducam IMX219 Camera Module with the 15-pin to 22-pin camera flex cable included. And the struggle began…

It is possible to grab the camera picture with v4l2 from /dev/video* and use it in any Linux program you want. The problem is that the IMX219 sensor requires additional image signal processing to control white balance, exposure, etc. TI does this in their custom GStreamer plugins (edgeai-gst-plugins), which utilize the TDA4VM's hardware-accelerated ISP, the Vision Preprocessing ACcelerator (VPAC), alongside the v4l2 driver. So v4l2-ctl -d /dev/video2 --list-ctrls (list video device controls) shows nothing, and GStreamer is the only way to get a proper picture from the CSI camera. Also, TI's DCC ISP files needed by the tiovxisp plugin are missing from the distro; they can be taken, for example, from the TI J721E SDK and placed in the /opt/imaging folder.

This knowledge came to me after a few weeks of digging through forums and documentation without having any clue what I was looking for.

And finally, with a few "magic" lines:

sudo media-ctl -d 0 --set-v4l2 '"imx219 6-0010":0[fmt:SRGGB8_1X8/1920x1080]'
sudo gst-launch-1.0 v4l2src device=/dev/video2 ! video/x-bayer, width=1920, height=1080, format=rggb ! tiovxisp sink_0::device=/dev/v4l-subdev2 sensor-name=SENSOR_SONY_IMX219_RPI dcc-isp-file=/opt/imaging/imx219/dcc_viss.bin sink_0::dcc-2a-file=/opt/imaging/imx219/dcc1/dcc_2a.bin format-msb=7 ! kmssink driver-name=tidss

…the image from the camera is shown on screen!

The GStreamer command line tools and their philosophy, as well as edgeai-gst-apps, are the places to dig deeper and find more answers.

My solution was to create a GStreamer pipeline where the 1920x1080 image goes to the tiovxisp image signal processing module for white balance control, is then rescaled to 1280x720 by tiovxmultiscaler, encoded with jpegenc, and sent as an MJPEG stream to a local TCP socket, from where I can finally grab it.

The Hardware.

The Go part

All the code I have for this project is at github.com

To keep it short, I will list only a few main functions of the wifi_two_wheeled_basic/main.go file.

func main() {
	upsModule = ups.NewUpsModule3S(i2c.Bus1)
	go upsModule.Run(time.Second)
	defer upsModule.Stop()

	mux := makeMjpegMuxer(":9990", "/mjpeg_stream")
	defer mux.Stop()
	go gstpipeline.LauchImx219CsiCameraMjpegStream( /* … */ )

	http.HandleFunc("/ws", serveVehicleControlWSRequest)
	http.Handle("/", http.FileServer(http.Dir("./public")))
	if err := http.ListenAndServe(SERVER_ADDRESS, nil); !errors.Is(err, http.ErrServerClosed) {
		log.Fatal("Unable to start HTTP server: ", err)
	}
}
func makeMjpegMuxer(inputAddr string, outputAddr string) *muxer.Muxer[Chunk] {
	mux := muxer.NewMuxer[Chunk](MJPEG_STREAM_CHUNKS_BUFFER_LENGTH/2 - 1)
	go mux.Run()
	go serveMjpegStreamTcpSocket(mux, inputAddr)
	http.HandleFunc(outputAddr, handleMjpegStreamHttpRequest(mux))
	return mux
}
UpsModule3S is a small library to communicate with the INA219 power monitoring IC of the UPS Module 3S over the I2C bus. I used this Go implementation of a Linux I2C driver wrapper.
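For reference, per the INA219 datasheet the bus voltage lives in register 0x02, left-aligned by 3 bits with a 4 mV LSB. A conversion helper (my own sketch, not the library's API) looks like:

```go
package main

import "fmt"

// ina219BusVoltage converts the raw 16-bit bus-voltage register (0x02) of the
// INA219 into volts: the reading is left-aligned by 3 bits and has a 4 mV LSB,
// per the datasheet.
func ina219BusVoltage(raw uint16) float64 {
	return float64(raw>>3) * 0.004
}

func main() {
	// A raw reading of 0x2A60 corresponds to about 5.424 V,
	// roughly what the 5 V rail should show.
	fmt.Printf("%.3f V\n", ina219BusVoltage(0x2A60))
}
```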

The twowheeled package handles robot platform control. I'm using 4 PWM channels (2 channels per wheel) to drive the IN(1–4) inputs while EN(1–2) are tied to VCC. This is not the best approach if you want to save a few PWM outputs for something else. A better way is to use 2 GPIO pins, each split by a NOT gate or MOSFET into a pair of complementary signals, to control motor direction via IN(1–2, 3–4), and only 2 PWM channels to control speed via EN(1–2). But that would require an additional custom PCB.
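The 4-PWM-channel scheme described above boils down to mapping a signed wheel speed onto a pair of IN inputs: PWM one input and hold the other low, swapping the pair for reverse. A sketch under those assumptions (the function name is hypothetical, not from the twowheeled package):

```go
package main

import "fmt"

// wheelDuty maps a signed wheel speed in [-1, 1] onto duty cycles for the two
// IN inputs of one H-bridge channel: forward PWMs IN1 with IN2 held low,
// reverse swaps them (EN is tied to VCC in this wiring). Out-of-range speeds
// are clamped.
func wheelDuty(speed float64) (in1, in2 float64) {
	if speed > 1 {
		speed = 1
	} else if speed < -1 {
		speed = -1
	}
	if speed >= 0 {
		return speed, 0
	}
	return 0, -speed
}

func main() {
	fmt.Println(wheelDuty(0.5))   // forward at half speed
	fmt.Println(wheelDuty(-0.25)) // reverse at quarter speed
}
```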

In the makeMjpegMuxer function I create a "fan-out"-style Muxer, which helps the serveMjpegStreamTcpSocket goroutine distribute the 4 KB Chunks of raw MJPEG data, arriving on a TCP port from GStreamer, to all HTTP connections handled by handleMjpegStreamHttpRequest.
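A minimal sketch of such a fan-out muxer (deliberately simpler than the real muxer package, so take names and details as assumptions): each subscriber gets its own buffered channel, and a chunk arriving from the TCP side is copied to every subscriber, dropping chunks for slow ones rather than stalling the producer.

```go
package main

import (
	"fmt"
	"sync"
)

// Muxer fans each published chunk out to every subscriber. A slow subscriber
// misses chunks instead of blocking the producer goroutine.
type Muxer[T any] struct {
	mu   sync.Mutex
	subs map[chan T]struct{}
}

func NewMuxer[T any]() *Muxer[T] {
	return &Muxer[T]{subs: make(map[chan T]struct{})}
}

// Subscribe registers a new consumer and returns its buffered channel.
func (m *Muxer[T]) Subscribe() chan T {
	ch := make(chan T, 8) // small per-subscriber buffer
	m.mu.Lock()
	m.subs[ch] = struct{}{}
	m.mu.Unlock()
	return ch
}

// Publish copies a chunk to every subscriber; full buffers just miss it.
func (m *Muxer[T]) Publish(chunk T) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for ch := range m.subs {
		select {
		case ch <- chunk: // delivered
		default: // subscriber too slow: drop this chunk for it
		}
	}
}

func main() {
	m := NewMuxer[[]byte]()
	a, b := m.Subscribe(), m.Subscribe()
	m.Publish([]byte("chunk"))
	fmt.Println(string(<-a), string(<-b)) // both subscribers receive the chunk
}
```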

The gstpipeline package is for launching GStreamer pipelines.

Robot control is done with the gorilla/websocket library in serveVehicleControlWSRequest. I used a mutex there to prevent multiple connections, so only one operator can control the robot at a time.
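The single-operator idea can be sketched with sync.Mutex.TryLock (Go 1.18+): the first websocket connection claims the slot, later ones are rejected until it disconnects. This is my sketch of the pattern; the actual handler in the repo may do it differently.

```go
package main

import (
	"fmt"
	"sync"
)

// operatorSlot admits at most one controller at a time.
var operatorSlot sync.Mutex

// acquireOperator returns ok=true when the caller becomes the single
// operator; the caller must invoke release() when its connection closes.
func acquireOperator() (release func(), ok bool) {
	if !operatorSlot.TryLock() {
		return nil, false // someone is already driving the robot
	}
	return operatorSlot.Unlock, true
}

func main() {
	release, ok := acquireOperator()
	fmt.Println(ok) // first connection gets the slot
	_, ok2 := acquireOperator()
	fmt.Println(ok2) // a concurrent second connection is rejected
	release()        // slot is free again after disconnect
}
```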

And finally, the vehicle.html page from the public folder is served as the web client application. In the update function, called every 20 milliseconds, I read the gamepad input and send steering and throttle values through the websocket connection.

function update() {
  let steering = steeringCenter;
  let throttle = 0;
  if (gamepadIndex !== null) {
    const gamepad = navigator.getGamepads()[gamepadIndex];
    const buttons = gamepad.buttons;
    const axes = gamepad.axes;
    throttleMax += buttons[12].pressed ? PWM_ADJUST_INCREMENT : 0; // Up arrow
    throttleMax -= buttons[13].pressed ? PWM_ADJUST_INCREMENT : 0; // Down arrow
    steeringCenter -= buttons[14].pressed ? PWM_ADJUST_INCREMENT : 0; // Left arrow
    steeringCenter += buttons[15].pressed ? PWM_ADJUST_INCREMENT : 0; // Right arrow
    steering += axes[0]; // Left stick
    const breakInput = buttons[6].value; // L2
    const throttleInput = buttons[7].value; // R2
    throttle = -breakInput + throttleInput;
  }
  if (webSocket !== null && webSocket.readyState === WebSocket.OPEN) {
    steering = Math.round(steering * 1000) / 1000;
    throttle = Math.round(throttle * throttleMax * 1000) / 1000;
    const vehicleState = {
      inputs: [steering, throttle],
    };
    webSocket.send(JSON.stringify(vehicleState));
  }
}

In conclusion, I can say that my experience with Go on the BeagleBone AI-64 is quite positive: ease of use, plus a powerful out-of-the-box web stack. Goroutines can be treated similarly to Tasks in FreeRTOS, so a program can be divided into a set of independent modules running in parallel and communicating with each other through channels.
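The FreeRTOS analogy in miniature: two "tasks" (goroutines) exchanging messages over a buffered channel, the Go counterpart of a FreeRTOS queue.

```go
package main

import "fmt"

// sumReadings wires two "tasks" together: a producer goroutine feeds a
// buffered channel (the Go counterpart of a FreeRTOS queue) while the caller
// consumes from it, blocking much like xQueueReceive would.
func sumReadings() int {
	readings := make(chan int, 4)
	go func() { // "sensor task": produces values
		for v := 1; v <= 3; v++ {
			readings <- v
		}
		close(readings)
	}()
	sum := 0
	for v := range readings { // "consumer task": drains the queue
		sum += v
	}
	return sum
}

func main() {
	fmt.Println(sumReadings()) // prints 6
}
```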

The BeagleBone AI-64 is a well-built, powerful platform indeed. But unfortunately it is not beginner-friendly. If you are not doing embedded Linux for a living, your experience may be quite frustrating. For me (because of the absolute absence of any step-by-step tutorials or examples on how to write your own image processing code from scratch) it was a matter of digging stuff out of TI's demo applications and gluing everything together just to make it work. At least I gained some experience :)