No, it's not a band, yet. The ARDrone has a built-in machine vision capability that recognizes certain key images, one of which is shown below. The data stream reports the estimated distance, which I believe is derived from the ultrasonic sensor, and the orientation in degrees of rotation, with the image below reading 90 reported degrees. So ROSCOE has a 'toy': if he can't see it within 500 cm he gets agitated and tries to find it; if he can, he sits 500 cm away and stares at it. In a way it's an extremely sophisticated 'off' switch as we move into more motion studies.

The movement algorithm is extremely simple: move forward by issuing 50 mm forward-motion Twist messages on the ROS 'cmd_vel' topic every 500 ms, a standard ROS communication pattern. Four or five inches per second indoors is enough for this thing; it will push the furniture out of the way, and not just the end tables either.

The ARDrone's published sensor data and the external ultrasonics are partially fused to avoid shocks and off-axis excursions, and we publish an 'ardrone/vision' Pose2D range/orientation message whenever the ARDrone detects a roundel. The 'cmd_vel' movement publisher subscribes to the 'ardrone/vision' topic; if we see the roundel within 500 cm, we skip the movement publication on 'cmd_vel' for that 500 ms interval. If we are moving and an obstacle or shock is encountered, rotate 30 degrees off axis; if the ultrasonic readings fall below 200 mm on the bottom sensor or 300 mm on the top sensor, back up 50 mm, then continue processing Twist messages from the 'cmd_vel' ROS topic.

This baseline behavior will let us integrate the functor-based natural transformation cognitive functions, now that there is a partial data fusion pipeline of video/accelerometer/gyro/ultrasonic/machine vision/magnetometer.
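Here is a minimal rospy sketch of that loop, assuming the 'cmd_vel' and 'ardrone/vision' topics named above. The ultrasonic topic names, the use of Pose2D.x for range in cm and Pose2D.theta for orientation in degrees, and the node structure are my assumptions for illustration; the 30-degree shock rotation is left out, since its trigger comes from the fused shock detection not shown here.

```python
#!/usr/bin/env python
# A minimal sketch of the wander/stare loop described above. Only the
# 'cmd_vel' and 'ardrone/vision' topic names come from the post; the
# ultrasonic topics and message layout are assumptions for illustration.
import rospy
from geometry_msgs.msg import Twist, Pose2D
from sensor_msgs.msg import Range

STANDOFF_CM = 500        # sit and stare if the roundel is inside this range
STEP_M      = 0.05       # 50 mm of forward travel per tick
TICK_S      = 0.5        # one Twist message every 500 ms
BOTTOM_MIN  = 0.200      # back up if the bottom ultrasonic reads below 200 mm
TOP_MIN     = 0.300      # back up if the top ultrasonic reads below 300 mm

class RoundelWanderer(object):
    def __init__(self):
        self.roundel = None          # latest Pose2D from the tag detector
        self.too_close = False       # latched by the ultrasonic callbacks
        self.pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
        rospy.Subscriber('ardrone/vision', Pose2D, self.on_roundel)
        # Hypothetical external ultrasonic topics, one per sensor:
        rospy.Subscriber('ultrasonic/bottom', Range, self.on_range, 'bottom')
        rospy.Subscriber('ultrasonic/top', Range, self.on_range, 'top')
        rospy.Timer(rospy.Duration(TICK_S), self.on_tick)

    def on_roundel(self, pose):
        # Assumed layout: Pose2D.x is the estimated range in cm, Pose2D.theta
        # the tag orientation in degrees (the image above reports 90 degrees).
        self.roundel = pose

    def on_range(self, msg, which):
        limit = BOTTOM_MIN if which == 'bottom' else TOP_MIN
        if msg.range < limit:
            self.too_close = True

    def on_tick(self, event):
        cmd = Twist()
        if self.too_close:
            # Obstacle reaction: back up 50 mm, then resume normal ticks.
            self.too_close = False
            cmd.linear.x = -STEP_M / TICK_S
        elif self.roundel is not None and self.roundel.x <= STANDOFF_CM:
            # Roundel in view within 500 cm: publish nothing this interval.
            self.roundel = None
            return
        else:
            cmd.linear.x = STEP_M / TICK_S   # ~0.1 m/s, four-plus inches/sec
        self.roundel = None
        self.pub.publish(cmd)

if __name__ == '__main__':
    rospy.init_node('roundel_wanderer')
    RoundelWanderer()
    rospy.spin()
```

One nice property of gating inside the single 500 ms timer: the 'stare' behavior is a pure absence of commands, so nothing has to actively brake the robot when the roundel comes into range.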