Demolition Man Verbal Morality Statute Monitor

Project to build a working swear detector to enforce the verbal morality statute from the movie Demolition Man.

See this compilation of scenes with the 'verbal morality statute monitor' from Demolition Man: https://www.youtube.com/watch?v=LE3oczJ1zgM

The goal of this project is to replicate the functionality of the Demolition Man device as much as possible using continuous speech recognition and keyword spotting algorithms running on a small embedded platform (Raspberry Pi or Beaglebone Black).

This project is inspired by the movie Demolition Man and the future society it envisions.  The movie is set in the 2030s, when crime has been completely eradicated inside a tightly controlled society.  Swearing is a violation of the 'verbal morality statutes', and enforcement is handled by monitoring devices mounted in each room.  These verbal morality statute monitors are a running joke in the movie as characters use profanity and trigger violations.  This project is an attempt to recreate the verbal morality statute monitor from Demolition Man.

For hardware, this project uses a Raspberry Pi model B running the latest version of the Raspbian OS.  The model B Pi is preferred because it has multiple USB ports (good for connecting both a WiFi adapter and a USB microphone), however the project will likely also work on a model A Pi.  Connected to the Raspberry Pi are a USB microphone, an amplifier and speaker, a thermal printer, an SPST toggle switch, and some red LEDs.

For software, the project source code is available on GitHub here.  Make sure to install the following dependencies using apt-get by executing:

sudo apt-get install build-essential autoconf libtool automake bison swig python-dev libasound2-dev gcc-4.7 g++-4.7

The swear detection is done using the PocketSphinx library, an excellent open source speech recognition engine.  The in-development next version of PocketSphinx has support for keyword spotting--the detection of certain keywords being uttered in everyday speech.  You will need to download the subversion trunk of the library, then compile and install the SphinxBase and PocketSphinx subprojects on the Pi.  You will also need to install the wiringPi library, which is used to access the GPIO pins on the Pi.

To compile the project, invoke 'make DemoManMonitor'.  You can also invoke 'make runtests' to run a few unit tests for the code.  Once the code is compiled you will need to make sure some speech model data is available in the project directory.  Create a folder called 'models' and copy in the appropriate models for your language from PocketSphinx's pocketsphinx/model directory.  I copied in the hmm/en_US/hub4wsj_sc_8k folder and the lm/en_US/cmu07a.dic and lm/en_US/wsj0vp.5000.DMP files.  Note that right now the project expects to find a speech model named hub4wsj_sc_8k_adapted.  You can adjust the run_main.sh shell script to pass the appropriate model as a parameter when it runs DemoManMonitor (the parameters are exactly the same as for the pocketsphinx_continuous tool that ships with PocketSphinx; you can see more details here).  You might later want to adapt the models to your voice and microphone for better accuracy--I had good, but not amazing, results from adaptation.

To get the microphone to work you will need to enable USB audio with ALSA on the Pi; follow these instructions to change the snd-usb-audio option from -2 to 0.  Once done, if you execute 'cat /proc/asound/cards' you should see two devices: one for the microphone (for the PS3 Eye camera look for 'OmniVision Technologies') and one for PCM audio output.

To connect the printer, wire the red and black power pins from the printer to the power supply positive and ground respectively (do not try to power the printer from the Pi!).  Connect the printer serial RX pin (yellow) to the TX pin on the Pi (GPIO 14).  Leave the printer TX pin unconnected: it outputs 5V, which is unsafe for the Pi RX pin to receive (and data doesn't need to be read from the printer in this application).  Note that when you power on the Pi the printer might start printing garbage, so disconnect the RX pin on boot and connect it before running the software.

Connect the quiet mode switch to any GPIO pin on the Pi; I used wiringPi pin 0 (which is Pi BCM pin 17).  If using an SPST toggle switch, the middle...


  • 1 × Raspberry Pi Model B is best because of multiple USB ports. Memory usage is not super high so it would probably still work on a model A.
  • 1 × Playstation 3 Eye Camera Any USB microphone will work, but this one has a nice microphone built to capture audio in a room.
  • 1 × Mini Thermal Printer Adafruit has a good one: http://www.adafruit.com/products/597
  • 1 × Small Amplifier & Speaker I used this amp and speaker: http://www.adafruit.com/products/1552 and http://www.adafruit.com/products/1313
  • 1 × 5 volt power supply with 2+ amps of current Adafruit has some good options: http://www.adafruit.com/products/658 or http://www.adafruit.com/products/1466
  • 1 × SPST toggle switch This is used to put the device into 'quiet mode' where no alarms are raised.
  • 2 × Red LEDs I used a couple high power red LEDs to light up the front of the enclosure.
  • 1 × NPN transistor Used to switch the LEDs on and off without pulling too much current from the Pi.

  • Final Project

    tdicola • 04/19/2014 at 00:28 • 5 comments

    Finished a few small things like adding red LEDs that light when the alarm goes off.  Take a look:

    The project is basically done at this point.  I'll fill in a few more details on the project page this weekend and then consider it complete.

  • Shiny!

    tdicola • 04/15/2014 at 05:38 • 0 comments

    Finished the enclosure by covering it in metallic paper.  Take a look at the results here: 

    At a craft store I found 12"x12" sheets of metallic paper on thick cardstock that ended up working great.  The thickness of the material helped to smooth out imperfections in the cardboard.  For attaching the paper I used Super 77 spray glue, so it's a smooth, permanent finish.  The only difficulty was working with the glue: you get one shot to line things up and that's it!  Cutting the curved shapes and edges was also a little challenging--a very sharp knife is required.  The edges aren't perfect but I'm still very happy with how things turned out.

  • Enclosure Progress

    tdicola • 04/14/2014 at 03:23 • 0 comments

    Here's an update on the progress I've made building an enclosure for the device: 

    The enclosure is made out of a couple of oatmeal containers attached to a frame of foam board.  The front is a piece of mat board (roughly the same thickness as the oatmeal containers) bent and glued to the frame in a teardrop shape.  I plan to finish the enclosure by covering everything in metallic tape, which should give a nice metallic finish without a lot of prep work.

    Once the enclosure is finished I'll mount the hardware inside and the project will be finished! :)

  • Speech Adaptation

    tdicola • 04/07/2014 at 03:26 • 0 comments

    In this update I've worked on improving the speech recognition by adapting PocketSphinx's acoustic model to my microphone and voice.  I've also added a switch to put the device into a quiet mode where it doesn't sound an alarm or print a ticket when profanity is detected.  Take a look at this video: 

    For the speech model adaptation I followed the steps from the PocketSphinx website (http://cmusphinx.sourceforge.net/wiki/tutorialadapt).  With the adapted model the profanity detection is a little bit better.  Some words still aren't recognized very well--for example it still doesn't recognize 'fuck' that often (it sometimes thinks 'fix' is 'fuck'), but strangely it recognizes 'fucker' very well.  That said, I'm pretty happy with where the speech recognition and keyword spotting are right now.

    I also added a switch attached to the Raspberry Pi GPIO which puts the software into a quiet mode.  This is useful for testing the recognition without wasting printer paper or blaring the audio.

    One issue I'm still trying to figure out is why ALSA sometimes cuts off playback of the alarm audio.  The full alarm should say "You are fined one credit for violation of the verbal morality statutes.", however you can sometimes hear it cut off "statutes" at the very end.  I've tried adding ALSA calls to wait until the playback buffer is drained, and even padded the audio file with a second of silence at the end, but I still see it cutting off audio randomly.  I plan to look a little more into this, but if I can't resolve it I don't think it's a big deal.

    Finally, as a next step I hope to make progress on building the enclosure for the device.  Originally I didn't think there would be time for it, but based on how much progress I made I think I can get something together to resemble the real device.  My current plan is to use a cardboard tube (like from an oatmeal container), foam board, and bent cardboard to build the enclosure.  Metal tape stuck to the outside should be a cheap and easy way to get a metallic finish.  I'm not going for perfect film accuracy--just something that is recognizable as the real thing.

  • Printer

    tdicola • 03/30/2014 at 02:14 • 0 comments

    Added support for the thermal printer, and swapped to a small amplifier & speaker.  Take a look at the video for more information: 

    In the process of integrating the printer I ported the Adafruit thermal printer Arduino library to POSIX/Linux too if anyone is curious.

    The only gotcha in integrating the printer is that the audio needs to play and the printer needs to print a ticket at the same time.  Each task needs periodic updates from the main program--the audio buffer needs to be kept full of samples, and the printer needs to be told what to print next.  Since the Pi is rather limited in CPU resources (one core at 700MHz), I went with a non-blocking I/O approach in a tight loop instead of something like multi-threading or multiple processes.  So far things work well, and the code isn't too ugly thanks to some nice C++11 features like lambdas.

    Next step will be to work on the speech recognition.  I'm going to investigate adapting the speech model for my microphone or voice to see if it improves the accuracy.  Right now the false positive rate isn't too bad, but some small swear words like 'fuck' or 'damn' are easy for it to misinterpret because they sound like normal parts of speech.

    I also plan on adding a switch to put it into a 'quiet' mode where it might flash a light on swear detect, but otherwise not alarm.

    Also starting to think a little more about getting it put into a case that looks like the real prop.  Luckily the prop isn't that complex--it's really just a cylinder with a curved box on one end.  Looking at stuff around the house, an oatmeal container is just about the perfect size cylinder to fit the printer.  Thin cardboard wouldn't be too difficult to bend into the box shape, and aluminum tape covering the whole thing would give it a metallic look.  More investigation into that later.

  • Raspberry Pi Support

    tdicola • 03/27/2014 at 02:56 • 0 comments

    Ported everything over to the Raspberry Pi and it works great--check out the video:

    The only gotcha was that Raspbian ships with USB audio configured off, but after a small config tweak it worked just like on the PC.  Very happy to see there aren't any performance issues; it seems to handle processing in real time without issue on the 700MHz Pi.

    Next step is to integrate the thermal printer to print violations, and switch to a small amplifier & speaker.  On the software side I need to look at tweaking PocketSphinx to get better keyword spotting accuracy.

    If time allows I might even start thinking about trying to get everything into an enclosure that mimics the look of the movie prop.  Something made of cardboard covered in aluminum tape would probably be simple enough to capture the look of the prop. 

  • Coding Progress

    tdicola • 03/26/2014 at 08:55 • 0 comments

    In the past couple days I've sorted out how to use ALSA and now have basic audio output working.  The basic skeleton of the app is in place too.  Right now it just listens on a mic, runs PocketSphinx's keyword spotting, and plays a sound when a keyword is detected.

    If you're curious you can find the code on GitHub here: https://github.com/tdicola/DemoManMonitor  This is still very much in development and not really ready for anyone to consume.  I've tried to make the components somewhat loosely coupled so it wouldn't be difficult to add support for other audio sources, sinks, or even speech recognition engines in the future.

    For next steps I plan to get the code working on the Raspberry Pi to sort out any issues or performance problems there as early as possible.  Will also order a small thermal printer and other small things to get started on the hardware soon.

    Once a complete hardware & software prototype is working I want to come back to improve the speech rec/keyword spotting accuracy.  Right now with no special training or adaptation for my voice it can pick up some words very well (like 'bullshit') but others it totally misses (I have yet to see it pick up 'fuck' correctly from my speech).

  • Early Prototype

    tdicola • 03/21/2014 at 07:47 • 0 comments

    Hacked together a quick and dirty prototype using PocketSphinx and ALSA.  You can see a quick video of it here (it goes without saying that there will be profanity in the videos I post):


    For a first effort I'm pretty happy.  There are a ton of options for tuning & training the speech recognition so hopefully I can increase the accuracy.

    While putting together the prototype I hit a few issues and dead ends, like the PS3 Eye camera mic not playing well with PulseAudio, ALSA's Python bindings not working at all for some reason, and GStreamer looking way too complex to try using.  In the end I'm going to keep it simple and just use C++ with ALSA and PocketSphinx.  Hoping to clean up the code into something presentable, put it on GitHub, and keep iterating on it.

  • Promising Lead

    tdicola • 03/19/2014 at 08:24 • 0 comments

    Found a great working example of keyword spotting with the in-development version of PocketSphinx: http://syl22-00.github.io/pocketsphinx.js/live-demo-kws.html  Some of the words don't work well, but others like 'OK Google' seem to work very well.

    This demo is from a JavaScript port of PocketSphinx and unfortunately is limited to searching for only one keyword at a time.  However, digging into PocketSphinx's source a bit, it seems the normal C library can be given a file of keywords.  More investigation is necessary (unfortunately the keyword spotting code is only in the subversion trunk and not yet documented), but it's good to see a working demo to know what's possible.

  • Project Start

    tdicola • 03/18/2014 at 09:37 • 0 comments

    Goals:

    - Replicate functionality of the 'verbal morality statute monitor'/swear detector from Demolition Man.

    - Detect when a swear word is uttered and sound a warning bell / flash lights / print out violation ticket.

    - Replicating the look of the device is not a primary goal.  Given the time constraint, my (lack of) knowledge of prop replication, and other risks, it's not feasible to replicate the look of the device.

    Current Plan:

    - Software: 

    Use a continuous speech recognition library with keyword spotting to detect swear words.  There's a great summary of options here: http://raspberrypi.stackexchange.com/questions/10384/speech-processing-on-the-raspberry-pi  I briefly experimented with PocketSphinx and found somewhat unsatisfactory results because it is not optimized for keyword spotting out of the box.  The biggest challenge and risk in this project will be getting a satisfactory keyword spotting algorithm to work.

    Some things to follow up on here are:

    http://www.quora.com/Speech-Recognition/What-is-the-best-SDK-for-KeyWord-Spotting 

    http://sourceforge.net/p/cmusphinx/discussion/sphinx4/thread/69cbc4eb/?limit=25

    - Hardware Platform:  

    A Raspberry Pi or Beaglebone Black is available, and either should have the power to do continuous speech recognition (based on googling around for speech recognition projects on each platform).  Leaning towards the model B Pi because it has multiple USB ports and audio out on board.

    - Microphone:

    PS3 eye camera's microphone.  In my testing this device has a good microphone that can pick up audio from a distance reasonably well.  Getting it to work with Linux is mostly straightforward: http://renatocunha.com/blog/2012/04/playstation-eye-audio-linux/

    - Audio Output:

    Nothing fancy is needed here--just need to play a few audio samples like the buzzer and warning message.  The audio output on the Pi should be sufficient when sent to a small amplified speaker.

    - Printer:

    Haven't thought much about this yet, but expect a small receipt/thermal printer should be sufficient for printing violations.  More info to check out later: http://learn.adafruit.com/internet-of-things-printer


    Next steps:

    - Install speech recognition libraries and do serious investigation into which can do keyword spotting reasonably well.


Discussions

josephchrzempiec wrote 07/03/2015 at 14:52 point

I must say this is the coolest project from that movie--I have seen others, but this one is the best. :)


DanoldKong wrote 05/15/2015 at 07:06 point

I was hoping that you could post the final SD card image for this project. I got it very close to working, but ran into a few issues, for example that it will not accept more than one word in the keywords.txt file without causing an error.


sm wrote 08/01/2014 at 10:38 point
That's a great project! I am working on a similar project that involves a keyword spotting mechanism, which I coded myself, but it doesn't work that well since it's normally recording in an infinite loop and trying to match the exact keyword from the decoded speech.
It would be great if you could give a little insight into how this is working. Also, my project is completely in Python, so do I have to build a wrapper around the PocketSphinxKWS.cpp code?
Thanks


= Sienar = wrote 07/04/2014 at 13:11 point
This is so nice! Wish I could give a double skull!


paul wrote 06/08/2014 at 07:12 point
Call me when you've figured out the three shells.


blarbles wrote 05/14/2014 at 19:34 point
Following the instructions in the "Enable Software Access to Serial Port" section of https://learn.adafruit.com/pi-thermal-printer/pi-setup-part-2 eliminated the printer garbage at boot for me. Here are the instructions (written by Adafruit):

The serial port on the Raspberry Pi’s GPIO header is normally configured for console cable use. But now we want to use this port for the thermal printer instead, so we’ll need to disable this default behavior.

sudo nano /boot/cmdline.txt

Change:

dwc_otg.lpm_enable=0 console=ttyAMA0,115200 kgdboc=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

to:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

And:

sudo nano /etc/inittab

Comment out or delete the last line. i.e. change this:

T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100

to:

# T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100


tdicola wrote 05/18/2014 at 23:32 point
Awesome, thanks for pointing out how to stop the spam of junk from the printer!


Antonio Scolari wrote 05/08/2014 at 21:59 point
Hi tdicola,
Question: how are you sending print jobs from the Raspberry Pi? I'm starting a project that includes ticket printing.


tdicola wrote 05/09/2014 at 19:59 point
The printer talks over a serial connection to the Raspberry Pi. I ported Adafruit's Arduino library to run on Linux--check out Adafruit_Thermal.cpp and .h in the source code on GitHub. You can also see a simple example of using it in tests/AdafruitThermalTest.cpp. If you're using Python in your project, check out Adafruit's own Python library: https://github.com/adafruit/Python-Thermal-Printer In general they have some good tutorials on setting up the printer too: https://learn.adafruit.com/mini-thermal-receipt-printer/overview


Mike Szczys wrote 05/07/2014 at 21:40 point
I'm just getting down to the last few entries in the Sci-Fi contest. Several have made me laugh and your finished project video made me laugh perhaps the hardest.

I hope you have a hackerspace full of foul-mouthed members where this can be installed!


blarbles wrote 05/04/2014 at 12:53 point
Just in case others are interested in this project: To get everything to compile without errors (including sphinxbase, pocketsphinx, and DemoManMonitor) I had to install the following packages: autoconf, libtool, automake, bison, swig, python-dev, gcc-4.7, g++-4.7, libasound2-dev (install each with "apt-get install"). Also, DemoManMonitor will compile and run but will hit an exception if you don't rename "hub4wsj_sc_8k" to "hub4wsj_sc_8k_adapted" (took me forever to figure that one out, but I am new to this stuff). I found doing apt-get upgrade broke quite a few things, so I stuck with the original Raspbian January release (plus the packages I listed above). Also, if your "make" command has an error you have to do "make clean" to clean up any files it created before trying "make" again. Thanks for this great project!


tdicola wrote 05/04/2014 at 17:37 point
Oh wow, thanks for writing up the details on the dependencies, and sorry for the pain of figuring them out. I was hoping that once pocketsphinx has an official release with the keyword spotting stuff I could package up the project as a binary and release it so it's easier to install. I'll update the description to note the dependencies like you call out--the packages build-essential, gcc-4.7, and libasound2-dev should be the major ones, plus compiling and installing sphinxbase, pocketsphinx, and wiringPi. Great point about the speech model name too--the command line parameters in run_main.sh choose the model, so you can change the name there too. If your speech recognition results aren't great, you might look into adapting the model by following this guide: http://cmusphinx.sourceforge.net/wiki/tutorialadapt And yeah, the makefile is not my finest hour--I really only know enough about make to be dangerous. :)


pierrep wrote 04/27/2014 at 08:18 point
Sweet movie and really nice project. Maybe you could add cable tidies to clean up the view around the Raspberry Pi.


tdicola wrote 04/27/2014 at 20:24 point
Great point, I could definitely do some cleanup with the cables.


x3n0x wrote 04/22/2014 at 06:06 point
NICE! Coming along great, man!


Lewis Callaway wrote 04/17/2014 at 01:34 point
Awesome project! I saw that on Show and Tell! If they had that at my school, everyone would be broke.


tdicola wrote 04/19/2014 at 00:37 point
Haha, thanks for the feedback.


Steven Hickson wrote 04/06/2014 at 21:37 point
Hey guys, saw you were using pocketsphinx as your speech recognition. I tried it and found Google's to work much better. If you have an internet connection it is pretty fast and reliable. I have code to use it available for the Raspberry Pi.
http://hackaday.io/project/87-Voice-Controlled-Raspberry-Pi


tdicola wrote 04/07/2014 at 03:37 point
Thanks for the feedback. I've looked into other speech recognition options like Google's API but was a little concerned that they wouldn't handle continuous speech recognition and keyword spotting very well. With your program do you have the mic constantly listening and sending data to Google's API, or do you only record audio when something happens like a button is pressed? In my case I want to continuously record audio in the background, so I'm concerned it would be too much load and bandwidth to be constantly sending audio to Google's API.


OneShot Willie wrote 03/19/2014 at 02:43 point
I think I found your test file... ;)

George Carlin's monologue at issue in the Supreme Court case of FCC v. Pacifica Foundation

http://law2.umkc.edu/faculty/projects/ftrials/conlaw/filthywords.html


tdicola wrote 03/19/2014 at 08:13 point
Haha, great idea.

