My first commit to on-air shows March 3, 2020. My first blog post was at the beginning of February. I know that in the weeks leading up to that commit I spent some time reading through the TF Lite documentation, playing with Cloudflare Workers KV, and getting my first setup of esp-idf squared away. After that it was off to the races.

I outlined my original goal in the planning post, and I didn't quite get there. The project currently doesn't have a VAD (voice activity detector) to handle the scenario where I forget to activate the display before starting a call or hangout. Additionally, I wasn't able to train a custom keyword, as highlighted in the custom model post. I was, however, able to get a functional implementation of the concept: I can hang the display up, and then, with the ESP-EYE plugged in at my lab, use the wake word "visual" followed by "on" or "off" to toggle the display status.
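For anyone curious how a wake-word-then-command flow like that hangs together, here is a minimal sketch of the state machine involved. It assumes the speech model reports one top-scoring label per inference cycle; `handle_label`, `set_display`, and the tick-based timeout are hypothetical stand-ins for illustration, not the project's actual code.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Two-stage recognizer: idle until the wake word ("visual") is heard,
 * then armed for a short window in which "on" or "off" takes effect. */
typedef enum { STATE_IDLE, STATE_ARMED } listen_state_t;

static listen_state_t state = STATE_IDLE;
static int armed_ticks = 0;      /* inference cycles since the wake word */
#define ARM_WINDOW_TICKS 30      /* give up if no command arrives in time */

/* Hypothetical stub for whatever actually drives the sign. */
static void set_display(bool on) {
    printf("display -> %s\n", on ? "ON" : "OFF");
}

/* Called once per inference with the highest-scoring label. */
void handle_label(const char *label) {
    switch (state) {
    case STATE_IDLE:
        if (strcmp(label, "visual") == 0) {
            state = STATE_ARMED; /* wake word heard, wait for a command */
            armed_ticks = 0;
        }
        break;
    case STATE_ARMED:
        if (strcmp(label, "on") == 0) {
            set_display(true);
            state = STATE_IDLE;
        } else if (strcmp(label, "off") == 0) {
            set_display(false);
            state = STATE_IDLE;
        } else if (++armed_ticks > ARM_WINDOW_TICKS) {
            state = STATE_IDLE;  /* window expired, re-arm on next wake word */
        }
        break;
    }
}

int main(void) {
    handle_label("visual");
    handle_label("on");   /* prints: display -> ON */
    handle_label("visual");
    handle_label("off");  /* prints: display -> OFF */
    return 0;
}
```

The timeout is the important design choice: without it, a stray detection of the wake word would leave the device armed indefinitely, and the next "on" or "off" heard in background speech would flip the sign.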
Discussions
I'm trying to figure out speech recognition on the ESP-EYE, and have hotword/wake-word detection working, but am having trouble setting up a command "chain", pretty much exactly like the one described here, but with different and more follow-up command words.
This project keeps popping up in the top Google hits when I research this stuff, and seems promising, but reading the code I'm a bit confused.
I should preface this by saying I'm not super familiar with ESP-IDF and have never worked with the Adafruit PyPortal, so I might just be looking in the wrong place, but I can't for the life of me find where the code looks for the command after the wake word (the on/off). Am I missing something very obvious here?