Preparation
First up is object detection. I started off with a site called Roboflow. It had a nice dataset setup but seemed to want me to make calls online, which isn't much of an option over in the chicken coop, far beyond the reach of our Wi-Fi. So, I opted to do the training locally. Turns out, that meant a week and a half of training each night. I may have gone overboard on the data augmentation... Either way, the model I trained is available within this project. It focuses on predators that I am confident are within my area. Unfortunately, we moved to an area that has quite an extensive list in that regard. The model covers coyotes, foxes, hawks, opossums, raccoons, and snakes.
One of the main goals of this project is to scare the predators away. I reinforced the coop, but if they stick around and try to figure out how to get in, we want to stop them in their tracks. So, I recorded voice lines for each predator on the list. Granted, they're going to have absolutely no clue that I'm specifically addressing them, or what I'm saying in general, but when they hear a person's voice angrily shouting at them, they're going to take a bow and leave the area. The voice lines are included as well, but, honestly, it's pretty fun to record raccoon smack talk, so this might be a good area for customization.
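Wiring a detection up to its voice line can be as simple as a filename convention. Here's a minimal sketch, assuming one WAV file per predator class in a hypothetical `voice_lines/` folder, played through the USB speaker with `aplay` (the ALSA player that ships with Raspberry Pi OS):

```python
import subprocess
from pathlib import Path

# Hypothetical layout: one recorded clip per predator class,
# e.g. voice_lines/raccoon.wav -- rename to match your own files.
VOICE_DIR = Path("voice_lines")

def voice_line_for(label: str) -> Path:
    """Map a detected class name to its recorded voice line."""
    return VOICE_DIR / f"{label.lower()}.wav"

def shout_at(label: str) -> None:
    """Play the clip on the USB speaker via aplay (ALSA)."""
    subprocess.run(["aplay", str(voice_line_for(label))], check=False)
```

Because the mapping is just `label -> label.wav`, swapping in your own smack talk is a matter of dropping new recordings into the folder.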
Now, we have a way to detect the predators and what we're going to use to scare them off when we see them, so it's time to put together our actual device and put it to the test.
Hardware
For running all the features we're putting together, we're using a Raspberry Pi from PCBWay. Part one, as discussed, is identifying the predator, so we have a very standard USB webcam for that. Next up is playing the audio, which just uses a USB speaker. After this, we get a bit fancier. To alert us of predators, we need to get data back to the house and do something with it. For this, we're going to use Blues Wireless. I've been using a Notecarrier-F for some products that leverage Blues and I've grown quite comfortable with it, so that's what we're using here.
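Sending the alert through the Notecarrier-F looks roughly like the sketch below, using Blues' `note-python` library over the Pi's USB serial connection. The Notefile name (`predators.qo`), the serial port path, and the body fields are all assumptions for illustration; only the `note.add` request shape comes from the Notecard API:

```python
def build_alert(predator: str, confidence: float) -> dict:
    """Notecard note.add request carrying one detection event."""
    return {
        "req": "note.add",
        "file": "predators.qo",   # hypothetical Notefile name
        "sync": True,             # push to Notehub right away
        "body": {"predator": predator, "confidence": round(confidence, 2)},
    }

def send_alert(predator: str, confidence: float) -> None:
    # Imported here so the rest of the module works off-device.
    import serial     # pyserial
    import notecard   # note-python, Blues' official library
    # The Notecarrier-F enumerates as a USB serial device on the Pi;
    # the exact port name may differ on your setup.
    port = serial.Serial("/dev/ttyACM0", 9600)
    card = notecard.OpenSerial(port)
    card.Transaction(build_alert(predator, confidence))
```

Keeping the request construction in its own function makes it easy to test the payload without hardware attached.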
As far as hardware assembly goes, everything (including the Notecarrier-F) is plugged into the Raspberry Pi via USB. During the project development phase, that means three USB devices plus a mouse and keyboard, five in total, with only four USB ports available on the Raspberry Pi. I just used a USB splitter for the mouse and keyboard and had no issues whatsoever.
That takes care of our detection device, but as for how we communicate that there is a predator to those in the house, we sound a literal siren. The siren itself is very simple. It has an on/off switch and a volume knob. That doesn't do much for us on its own, though, so to actually utilize it we're using a smart plug. As it turns out, some of these make life easy and others take a lot of effort as far as using them in this type of project goes. Of the ones I had around the house, the easiest to use was a TP-Link Kasa. It has a simple library I was able to leverage, which was so much easier than some of the other options I was looking into prior to finding it. With it, we can take our predator alert, turn on the smart plug (and therefore our siren) for a set amount of time, then turn it back off. This gives us a very clear indication that a predator is attacking our chickens, all automatically.
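The on-for-a-while-then-off pattern is a few lines with the `python-kasa` library. A minimal sketch, assuming the plug's LAN IP address is known (the address and duration below are placeholders):

```python
import asyncio

SIREN_SECONDS = 15  # how long to run the siren per alert; tune to taste

async def siren_burst(plug_ip: str, duration: float = SIREN_SECONDS) -> None:
    """Switch the Kasa plug (and the siren wired through it) on, wait, off."""
    from kasa import SmartPlug  # python-kasa
    plug = SmartPlug(plug_ip)
    await plug.update()         # fetch current state before issuing commands
    await plug.turn_on()
    try:
        await asyncio.sleep(duration)
    finally:
        await plug.turn_off()   # make sure the siren never sticks on

# Example (hypothetical address on your LAN):
# asyncio.run(siren_burst("192.168.1.50"))
```

The `try/finally` is the important design choice here: even if something raises mid-wait, the plug gets switched back off rather than leaving the siren blaring.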
Object Detection
With all the hardware in place, it's time to get into how it works. Thankfully, a lot of it is very intuitive based on what has been discussed so far. We have a YOLO detection model, and we use it to look for predators. I used YOLOv8, and all of the training occurred on my PC. I did augment the data a bit, meaning that I included slightly altered versions of the training images to help improve the dataset. If I were to do this all again, I would probably skip that, especially blurring images (adding a bit of blur is an option when augmenting data, which made...
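The detection loop itself is short with the `ultralytics` package. The sketch below assumes a trained weights file (`best.pt` is a placeholder name) and a confidence cutoff of 0.6, which is an illustrative value rather than the one used in this project:

```python
CONF_THRESHOLD = 0.6  # hypothetical cutoff; tune against your own footage

def confident_labels(names: dict, boxes) -> list:
    """Keep only class names whose detection confidence clears the cutoff."""
    return [names[int(b.cls)] for b in boxes if float(b.conf) >= CONF_THRESHOLD]

def watch(weights: str = "best.pt", camera_index: int = 0) -> None:
    # Imported here so the helper above stays testable off-device.
    import cv2                    # OpenCV, reads the USB webcam
    from ultralytics import YOLO  # YOLOv8
    model = YOLO(weights)         # the trained predator model
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        results = model(frame, verbose=False)
        for label in confident_labels(model.names, results[0].boxes):
            print("predator spotted:", label)  # hand off to siren/voice here
```

Filtering by confidence in a separate helper keeps the camera loop readable and gives one obvious knob to turn when the model cries wolf (or rather, coyote) too often.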