This project is inspired by the movie Demolition Man and the future society it envisions.  The movie is set in the 2030s, when crime has been completely eradicated inside a tightly controlled society.  Swearing is a violation of the 'verbal morality statutes', and enforcement is handled through monitoring devices mounted in each room.  These verbal morality statute monitors are a running joke in the movie as characters use profanity and trigger violations.  This project is an attempt to recreate the verbal morality statute monitor from Demolition Man.

For hardware, this project uses a Raspberry Pi model B running the latest version of the Raspbian OS.  The model B Pi is preferred because it has multiple USB ports (good for connecting both a WiFi adapter and a USB microphone), but the project will likely also work on a model A Pi.  Connected to the Raspberry Pi are a USB microphone, an amplifier and speaker, a thermal printer, an SPST toggle switch, and some red LEDs.

For software, the project source code is available on GitHub here.  Make sure to install the following dependencies using apt-get by executing:

sudo apt-get install build-essential autoconf libtool automake bison swig python-dev libasound2-dev gcc-4.7 g++-4.7

Swear detection is done using the PocketSphinx library, an excellent open source speech recognition engine.  The in-development next version of PocketSphinx supports keyword spotting, which is the detection of certain keywords being uttered in everyday speech.  You will need to download the Subversion trunk of the library, then compile and install the SphinxBase and PocketSphinx subprojects on the Pi.  You will also need to install the wiringPi library, which is used to access GPIO pins on the Pi.
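Checking out and building the libraries looks roughly like the following sketch (the SVN URL and the use of ./autogen.sh are assumptions based on the CMU Sphinx SourceForge repository; verify them against the current PocketSphinx documentation before running):

```shell
# Grab the in-development trunk (URL assumed; verify before running).
svn checkout svn://svn.code.sf.net/p/cmusphinx/code/trunk cmusphinx

# Build and install SphinxBase first, since PocketSphinx depends on it.
cd cmusphinx/sphinxbase
./autogen.sh && make && sudo make install

# Then build and install PocketSphinx itself.
cd ../pocketsphinx
./autogen.sh && make && sudo make install

# Refresh the linker cache so the new shared libraries are found.
sudo ldconfig
```

Expect the compiles to take a while on the Pi's ARM processor.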

To compile the project, invoke 'make DemoManMonitor'.  You can also invoke 'make runtests' to run a few unit tests for the code.  Once the code is compiled, make sure you have some speech model data available in the project directory.  Create a folder called 'models' and copy in the appropriate models for your language from PocketSphinx's pocketsphinx/model directory.  I copied in the hmm/en_US/hub4wsj_sc_8k folder and the lm/en_US/cmu07a.dic and lm/en_US/wsj0vp.5000.DMP files.  Note that right now the project expects to find a speech model named hub4wsj_sc_8k_adapted.  You can adjust the run_main.sh shell script to pass the appropriate model as a parameter when it runs DemoManMonitor (the parameters are exactly the same as those of the pocketsphinx_continuous tool that ships with PocketSphinx; you can see more details here).  You might later want to adapt the models to your voice and microphone for better accuracy--I had good, but not amazing, results from adaptation.
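The model setup described above can be sketched as follows (the checkout location ~/cmusphinx and project location ~/DemoManMonitor are assumptions; substitute your own paths):

```shell
# Assumed locations: PocketSphinx checkout in ~/cmusphinx, project in
# ~/DemoManMonitor--adjust both to match your setup.
cd ~/DemoManMonitor
mkdir -p models
cp -r ~/cmusphinx/pocketsphinx/model/hmm/en_US/hub4wsj_sc_8k models/
cp ~/cmusphinx/pocketsphinx/model/lm/en_US/cmu07a.dic models/
cp ~/cmusphinx/pocketsphinx/model/lm/en_US/wsj0vp.5000.DMP models/
```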

To get the microphone to work you will need to enable USB audio with ALSA on the Pi; follow these instructions to change the snd-usb-audio option from -2 to 0.  Once done, executing 'cat /proc/asound/cards' should show two devices: one for the microphone (for the PS3 Eye camera, look for 'OmniVision Technologies') and one for PCM audio output.
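On Raspbian the change typically lives in /etc/modprobe.d/alsa-base.conf (the file location is an assumption; follow the linked instructions for your release):

```
# /etc/modprobe.d/alsa-base.conf
# Change the USB audio index so the USB device is no longer deprioritized:
#   options snd-usb-audio index=-2   <-- original line
options snd-usb-audio index=0
```

A reboot (or reloading the snd-usb-audio module) applies the change.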

To connect the printer, connect its red and black power pins to the power supply's positive and ground terminals respectively (do not try to power the printer from the Pi!).  Connect the printer's serial RX pin (yellow) to the TX pin on the Pi (GPIO 14).  Leave the printer's TX pin unconnected: it outputs 5V, which is unsafe for the Pi's RX pin to receive (and data doesn't need to be read from the printer in this application).  Note that when you power on the Pi the printer might start printing garbage, so disconnect the RX pin during boot and connect it before running the software.
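As a rough sketch of how software can talk to the printer over that serial line (print_line is a hypothetical helper, not code from this project, and the 19200 baud rate is an assumption based on common defaults for these thermal printers--check your printer's datasheet):

```cpp
// Hypothetical helper: send one line of text to the thermal printer.
#include <cstring>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

bool print_line(const char* device, const char* text) {
    int fd = open(device, O_WRONLY | O_NOCTTY);
    if (fd < 0) return false;
    if (isatty(fd)) {                       // only configure real serial ports
        termios tio{};
        tcgetattr(fd, &tio);
        cfsetospeed(&tio, B19200);          // assumed printer default baud
        tio.c_cflag |= CS8 | CLOCAL;        // 8 data bits, ignore modem lines
        tcsetattr(fd, TCSANOW, &tio);
    }
    bool ok = write(fd, text, strlen(text)) == (ssize_t)strlen(text);
    ok = write(fd, "\n", 1) == 1 && ok;     // printer acts on the newline
    close(fd);
    return ok;
}
```

A call like print_line("/dev/ttyAMA0", "VERBAL MORALITY VIOLATION") would print one line; the isatty() guard also lets you smoke-test the function against an ordinary file.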

Connect the quiet mode switch to any GPIO pin on the Pi; I used wiringPi pin 0 (which is Pi BCM pin 17).  If using an SPST toggle switch, the middle pin goes to the GPIO pin, one outer pin goes to 3.3V power, and the other to ground on the Pi (don't connect it to 5V!).  Adding a 10k resistor in series between the middle pin and the GPIO pin is a smart idea to prevent excessive current draw.

For the speakers, cut up an audio cable so you can access the left, right, and ground wires (the ground is usually the stranded jacket around the left and right wires).  With one speaker, connect the left audio channel to an amplifier and connect that amplifier to the speaker.  If using the Adafruit class D amp I used, make sure to disable the right channel by pulling its SDR pin to ground.  You can adjust the speaker volume using the alsamixer command and by tweaking the gain DIP switches on the amp.

Finally, for the red LEDs I chose to run them from the 5V power supply (two LEDs in series, plus a 47 ohm resistor to limit the current to ~20 milliamps) and switch their ground connection using an NPN transistor.  This prevents damaging the Pi by drawing too much current from it to light the LEDs.  This is a useful guide on wiring up a transistor to switch LEDs and other loads.
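The resistor choice can be sanity checked with Ohm's law on the voltage left over after the LED drops (the ~2.0 V forward drop per red LED is an assumption; check your LED's datasheet):

```cpp
// Back-of-the-envelope check of the LED chain current: supply voltage,
// minus the forward drop of each LED in series, divided by the resistor.
double led_current_ma(double supply_v, double drop_per_led_v, int num_leds,
                      double resistor_ohms) {
    // Ohm's law: I = V / R, converted to milliamps.
    return (supply_v - num_leds * drop_per_led_v) / resistor_ohms * 1000.0;
}
```

With a 5 V supply, two LEDs at ~2.0 V each, and 47 ohms, this works out to roughly 21 mA, in line with the ~20 mA target above.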

To control the profanity keywords that will be detected, create a file called keywords.txt in the root of the project.  On each line, add one word or short phrase to be detected.  Only add words that are defined in the dictionary your model uses, i.e. words in the cmu07a.dic file.
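A minimal keywords.txt might look like the following (the entries are illustrative placeholders; make sure each word you choose actually appears in cmu07a.dic):

```
darn
heck
shoot
golly
```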

Adjust the #define values near the top of main.cpp to tweak which ALSA hardware devices serve as the microphone and audio output, which wiringPi pins are connected to the switch and LED, and the serial port for the printer (this should be the default, ttyAMA0).  Compile the project, then run run_main.sh using sudo.  After the language models are loaded you should be able to speak the keywords and see them detected.  Press Ctrl-C to quit the program.  Enjoy the project!
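That configuration block might look something like this (the macro names and values here are illustrative assumptions, not the project's actual definitions--check main.cpp for the real ones):

```cpp
// Illustrative values only--the actual macro names in main.cpp may differ.
#define MIC_HW        "plughw:1,0"    // ALSA capture device (USB microphone)
#define SPEAKER_HW    "plughw:0,0"    // ALSA playback device
#define SWITCH_PIN    0               // wiringPi pin for the quiet mode switch
#define LED_PIN       1               // wiringPi pin driving the LED transistor
#define PRINTER_PORT  "/dev/ttyAMA0"  // serial port for the thermal printer
```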