Nginx and Memcached

A project log for FLED

An LED display showing visualizations and rendering data from a variety of TCP sources over the Open Pixel Control protocol

Ben Delarre 01/29/2014 at 17:46

With FLED we are trying to make a visualization platform, somewhere all our apps and monitoring tools can send data and where we can then use that data to make interesting visualizations.

As such we need to be able to accept data from a variety of sources quickly and easily, and we need to be able to do that constantly without impacting how the animations run. We also need to expose data to engineers developing new visualizations, and ensure that while engineers are developing code they can't impact the animation running on the display.

All this means we need some separation of concerns. It turns out that NodeJS isn't a good fit for accepting a constant stream of POST requests from services: it has a single event loop, and since the Raspberry Pi is single core, any incoming requests cause our animation loop to suffer.

After much head scratching we decided to try a different approach. Memcached is very good at keeping data in memory, and makes it easy to issue simple, fast requests to retrieve that data when it's needed. All we needed was a good way of posting data into it. Nginx, it turns out, has a very useful module called HttpMemcModule. This module lets us set up simple Nginx configurations that post and get data from Memcached with the absolute minimum of overhead.
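Memcached speaks a simple text protocol, which is part of why this works so well. As a quick sketch (assuming memcached is already running on its default port 11211, and using a made-up key name):

```shell
# Store the 5-byte value "hello" under "mykey", then read it back.
# set syntax: set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
printf 'set mykey 0 0 5\r\nhello\r\nget mykey\r\nquit\r\n' | nc localhost 11211
```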

To set this up we first had to compile a version of Nginx for the Raspberry Pi with this module included. Here are the steps:

mkdir nginx-src
cd nginx-src
wget 'http://nginx.org/download/nginx-1.4.3.tar.gz'
tar -xzvf nginx-1.4.3.tar.gz
wget 'https://github.com/agentzh/memc-nginx-module/archive/v0.13.tar.gz'
tar -xzvf v0.13.tar.gz
wget 'https://github.com/agentzh/echo-nginx-module/archive/v0.49.tar.gz'
tar -xzvf v0.49.tar.gz
wget 'https://github.com/agentzh/headers-more-nginx-module/archive/v0.23.tar.gz'
tar -xzvf v0.23.tar.gz

cd nginx-1.4.3
./configure --prefix=/opt/nginx --add-module=../echo-nginx-module-0.49 --add-module=../memc-nginx-module-0.13 --add-module=../headers-more-nginx-module-0.23

make -j2
sudo make install
sudo mkdir /opt/nginx/sites-available
sudo mkdir /opt/nginx/sites-enabled
sudo mkdir /var/log/nginx
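Once the install finishes, it's worth confirming the three modules actually made it into the binary (a quick sanity check; the path assumes the /opt/nginx prefix used above):

```shell
# Print the compile-time configuration and pick out the add-module flags
/opt/nginx/sbin/nginx -V 2>&1 | grep -o 'add-module=[^ ]*'
```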

We now need to set up the config for this Nginx build. For this we just use a very simple base config (saved to /opt/nginx/conf/nginx.conf, the default location under this prefix) that loads sub-configurations from /opt/nginx/sites-enabled:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    keepalive_timeout  65;
    gzip  on;

    include /opt/nginx/sites-enabled/*;
}
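Before going any further, a syntax check catches typos in the config without starting the server:

```shell
# Validate the configuration only; prints "syntax is ok" on success
sudo /opt/nginx/sbin/nginx -t
```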

You'll also need to set up Nginx to start on boot; on Raspbian we do this with SysVinit. To do this create a file at '/etc/init.d/nginx' and add the following to it:

#! /bin/sh
 
### BEGIN INIT INFO
# Provides:          nginx
# Required-Start:    $all
# Required-Stop:     $all
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts the nginx web server
# Description:       starts nginx using start-stop-daemon
### END INIT INFO
 
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/opt/nginx/sbin/nginx
NAME=nginx
DESC=nginx
 
test -x $DAEMON || exit 0
 
# Include nginx defaults if available
if [ -f /etc/default/nginx ] ; then
    . /etc/default/nginx
fi
 
set -e
 
. /lib/lsb/init-functions
 
case "$1" in
  start)
    echo -n "Starting $DESC: "
    start-stop-daemon --start --quiet --pidfile /opt/nginx/logs/$NAME.pid \
        --exec $DAEMON -- $DAEMON_OPTS || true
    echo "$NAME."
    ;;
  stop)
    echo -n "Stopping $DESC: "
    start-stop-daemon --stop --quiet --pidfile /opt/nginx/logs/$NAME.pid \
        --exec $DAEMON || true
    echo "$NAME."
    ;;
  restart|force-reload)
    echo -n "Restarting $DESC: "
    start-stop-daemon --stop --quiet --pidfile \
        /opt/nginx/logs/$NAME.pid --exec $DAEMON || true
    sleep 1
    start-stop-daemon --start --quiet --pidfile \
        /opt/nginx/logs/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true
    echo "$NAME."
    ;;
  reload)
      echo -n "Reloading $DESC configuration: "
      start-stop-daemon --stop --signal HUP --quiet --pidfile /opt/nginx/logs/$NAME.pid \
          --exec $DAEMON || true
      echo "$NAME."
      ;;
  status)
      status_of_proc -p /opt/nginx/logs/$NAME.pid "$DAEMON" nginx && exit 0 || exit $?
      ;;
  *)
    N=/etc/init.d/$NAME
    echo "Usage: $N {start|stop|restart|reload|force-reload|status}" >&2
    exit 1
    ;;
esac
 
exit 0
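Then make the script executable and register it with the default runlevels so it starts on boot:

```shell
sudo chmod +x /etc/init.d/nginx
sudo update-rc.d nginx defaults
sudo service nginx start
```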

Here's the config we add to our sites-enabled folder. It configures Nginx to push data into Memcached and forwards all other requests to our NodeJS process. Note that the /data/set location copies the 'val' query parameter into $memc_value; without this the memc module would take the value from the (empty) request body.

upstream node_app {
    server localhost:8080;
}

server {
  listen 80;
  server_name localhost;

  root /home/ubuntu/fled;
  access_log /var/log/nginx/fled.access.log;
  error_page 404 /404.html;

  location /data/set {
    set $memc_cmd 'set';
    set $memc_key $arg_key;
    set $memc_value $arg_val;       # value comes from the ?val= parameter
    set $memc_flags $arg_flags;     # defaults to 0
    set $memc_exptime $arg_exptime; # defaults to 0

    memc_pass 127.0.0.1:11211;
  }

  #location /data/register {
  #  set $memc_cmd 'append';
  #  set $memc_key 'variables';
  #
  #  # create wrapper around object name
  #  set $dataname ',"${arg_key}"';
  #
  #  set $memc_value $dataname;
  #
  #  memc_pass 127.0.0.1:11211;
  #}

  location /data/list {
    set $memc_cmd 'get';
    set $memc_key 'variables';

    echo_before_body -n "[";
    memc_pass 127.0.0.1:11211;
    echo_after_body -n "]";

    more_set_headers 'Content-Type: application/json';
  }

  location /data/get {
    set $memc_cmd 'get';
    set $memc_key $arg_key;

    more_set_headers 'Content-Type: application/json';

    memc_pass 127.0.0.1:11211;
  }

  location / {
    try_files $uri $uri/ @proxy;
  }

  location @proxy {
    # required for socket.io
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;

    # supposedly prevents 502 bad gateway errors;
    # ultimately not necessary in my case
    proxy_buffers 8 32k;
    proxy_buffer_size 64k;

    # the following is required
    proxy_pass http://node_app;
    proxy_redirect off;

    # the following is required as well for WebSockets
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    tcp_nodelay on; # not necessary
  }
}

We should now have a working Nginx server that pushes data into Memcached and proxies everything else through to our NodeJS application on port 8080. We can set up our monitoring tools and services to push data into the Memcached instance by making an HTTP GET request to '/data/set?key=OurVariableName&val=OurValueHere'. We can also use the 'flags' and 'exptime' parameters to control how Memcached handles our data.
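As a quick smoke test (hypothetical key name; assumes Nginx and Memcached are both running locally):

```shell
# Store a value via the /data/set endpoint
curl 'http://localhost/data/set?key=cpu_load&val=0.42'

# Read it back; the response should carry Content-Type: application/json
curl -i 'http://localhost/data/get?key=cpu_load'

# Store a value that Memcached will expire after 60 seconds
curl 'http://localhost/data/set?key=cpu_load&val=0.42&exptime=60'
```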

Now that we have a working data system we need a way for animations to pull this data, and for engineers to access it while developing animations. We'll cover this in a bit more detail next time, but it's actually pretty simple.

Each animation declares a set of flags listing which variables need to be available for it to work. When an engineer is developing an animation they simply go to the Data tab and tick the checkboxes for the variables they want to use; our system then sets up a Socket.IO message that starts pushing that data to the engineer's browser. In this manner only the necessary data gets pulled from Memcached and sent over the wire, so we're doing the minimum possible work at any time. When we load an animation to view on the main display we read this list of required data variables, ensure they are available, then start pulling them from Memcached for the animation to use.

All this code will be documented in our repository, which we'll open source as soon as the project is complete. That could be a month or two though; so much to do!
