In this post, I try to explain how the robot tracks the ball using image processing algorithms.
You first need to install OpenCV on your Raspberry Pi; you can find installation tutorials online. The packages I used can be installed with:

sudo apt-get install python-wxgtk2.8 python-matplotlib python-opencv python-pip python-numpy
sudo apt-get install guvcview
You can use either the cv or the cv2 library, but I used cv2 for this project. The petdog.py code tracks a tennis ball using color detection methods and calculates the center of mass of the tennis ball.
To capture a video, you need to create a VideoCapture object. Its argument can be either a device index or the name of a video file. The device index is just a number that specifies which camera is used; in this case it's zero because there is only one camera module.
cap = cv2.VideoCapture(0)
Next you can use property identifiers with cap.set() to modify the width and height of the VideoCapture object:
3- Width of the frames in the video stream.
4- Height of the frames in the video stream.
The VideoCapture object needs to be read frame by frame to perform object detection, which is done using the .read() method:
_, frame = cap.read()
Now you need to convert each image frame from the BGR color space to the HSV color space, which makes it easier to threshold on color regardless of brightness.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
Next you need to use upper and lower color thresholds to define the range of yellow in HSV, which is used to find the tennis ball. These values need to be adjusted based on the lighting conditions.
import numpy as np

lower_yellow = np.array([30, 255, 135], dtype=np.uint8)
upper_yellow = np.array([40, 255, 185], dtype=np.uint8)
An image mask can now be created to keep only the yellow pixels. The mask is then used to calculate the image moments of the tennis ball, which are particular weighted averages of the pixel intensities.
mask = cv2.inRange(hsv, lower_yellow, upper_yellow)
moments = cv2.moments(mask)
The image moment can also be used to calculate area and center of mass of the tennis ball.
area = moments['m00']
Now we need to check the area of the object before calculating the x and y centroids. If the object is too small, there is a high chance that it's not a tennis ball.
if area > 100000:
Calculate x and y centroids:
x = moments['m10'] / area
y = moments['m01'] / area
And finally we can implement the function that calculates robot movement based on changes in x and y centroids values.
def move_pet(x, y, area):
    if x > 330:
        pivot_right((x - 320) / (320 * 12))
    elif x < 310:
        pivot_left((320 - x) / (320 * 12))
    elif (310 <= x <= 330) and area < 1500000:
        forward(0.05)
    elif (310 <= x <= 330) and area > 4000000:
        reverse(0.05)
We know the center of each image frame is at x = 320 because the total width of each frame is 640 pixels. The robot adjusts its position after each movement so that the tennis ball stays at the center of the image frame.
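One way to see the control logic at a glance is to factor the thresholds into a pure decision function. This is a hypothetical refactor of move_pet, not code from petdog.py; the names decide_action, dead_zone, far_area, and near_area are my own:

```python
def decide_action(x, area, center=320, dead_zone=10,
                  far_area=1500000, near_area=4000000):
    """Return (action, amount) for the ball's centroid x and mask area,
    mirroring the thresholds used in move_pet above."""
    if x > center + dead_zone:
        # Pivot time grows with how far the ball is right of center.
        return ('pivot_right', (x - center) / (center * 12))
    if x < center - dead_zone:
        return ('pivot_left', (center - x) / (center * 12))
    if area < far_area:
        return ('forward', 0.05)   # ball looks small, so it is far away
    if area > near_area:
        return ('reverse', 0.05)   # ball looks big, so it is too close
    return ('stay', 0.0)
```

Separating the decision from the motor calls makes the thresholds easy to tune and test without hardware.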
The total time for pivoting the robot in each direction is simply proportional to the centroid's offset from the center, e.g. (x - 320) / (320 * 12) seconds when pivoting right.
You can also use the calculated area to move the robot backward and forward, in addition to turning, as you can see in the video.
For example, you know the tennis ball is too close to the robot and the robot needs to move back in the following case:
elif ((310 <= x <= 330) and area>4000000): reverse(0.05)
Or move the robot petdog forward if the tennis ball is too far away:
elif ((310 <= x <= 330) and area<1500000): forward(0.05)
Now you can use the following commands to start the robot:
sudo modprobe bcm2835-v4l2
sudo python petdog.py

The first command loads the V4L2 driver for the Raspberry Pi camera module so that OpenCV can access it as a standard video device.
This concludes the overview of the object tracking code. I hope you go ahead and build your own raspberryPetDog robot with even fancier features.