After lasercutting the wooden body, Cara and I started working on the requirements for the interactive electronics components of the project. We had the following requirements:
- Deep Thought recognizes speech and answers specific questions with quotes from HHGTTG.
- Deep Thought speaks the answers in a computer-generated voice.
- Deep Thought has a mouth that lights up as it speaks.
- When Deep Thought is being spoken to, LED strips will make it appear that the words are being processed into the central part of Deep Thought.
We got to work exploring localized speech recognition software for the Raspberry Pi (testing on my personal Linux box first). We found the pocketsphinx library for Python and a good speech recognition wrapper for it. After some initial testing, we decided to simply look for key words in the transcript to determine which response to give. So this isn't very sophisticated speech recognition, but hey, it gets the job done!
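The keyword approach boils down to a lookup table. Here's a minimal sketch of the idea; the keyword-to-quote table and the `respond` function are hypothetical stand-ins (our real table follows the HHGTTG quotes), and the transcript would come from pocketsphinx via the wrapper on the Pi:

```python
# Hypothetical keyword -> response table; the real one maps to HHGTTG quotes.
# On the Pi, `transcript` would come from pocketsphinx through the
# speech recognition wrapper rather than being passed in directly.
RESPONSES = {
    "answer": "Forty-two.",
    "ultimate": "Forty-two.",
    "question": "I think the problem is that you've never actually known what the question is.",
}
DEFAULT = "I must think on this. Come back in seven and a half million years."

def respond(transcript):
    """Pick a canned response based on the first keyword found in the transcript."""
    for word in transcript.lower().split():
        if word in RESPONSES:
            return RESPONSES[word]
    return DEFAULT
```

Crude, but it sidesteps having to parse full sentences: as long as the recognizer gets one distinctive word right, Deep Thought answers sensibly.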
Next up was the LED strip. We're envisioning a gritty, cyberpunk Deep Thought and thought that neural pathways lit up with green LEDs would be awesome. We're still testing all of this, but we came up with a couple of prototypes for possibilities with the LEDs.
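One way to prototype the "words being processed inward" effect is to work out the animation frames in plain Python before touching hardware. This is a hardware-free sketch under assumed parameters (strip length, single-pulse chase); on the Pi each frame would be pushed out to the actual strip with a WS281x-style LED library:

```python
N_LEDS = 10  # assumed strip length for one mouth-to-core pathway

def inward_frame(step, n=N_LEDS):
    """Brightness (0-255) per LED for one frame: a single green pulse
    marching from the outer end of the strip (index n-1) toward the
    core (index 0), which reads as speech flowing into Deep Thought."""
    pos = n - 1 - (step % n)
    return [255 if i == pos else 0 for i in range(n)]

# Precompute one full sweep of the animation for inspection.
frames = [inward_frame(s) for s in range(N_LEDS)]
```

Looping over `frames` and writing each one to the strip (with a short sleep between frames) gives the basic inward chase; fading trails or multiple pulses are easy variations on the same frame function.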