An extension to the WEEDINATOR project (https://hackaday.io/project/53896-weedinator-2018), this system uses an Nvidia Jetson TX2 / Xavier to detect the locations of individual plants that have previously been accurately planted in a grid, reconstruct that grid in software, and use it for orientation and navigation of the robot.

Previously, navigation was attempted by means of GPS, coloured ropes, and wires carrying high-frequency AC current, but none of these proved effective due to poor accuracy and impracticality. 'Models' can be trained to recognise the individual plants using so-called 'neural networks', and previous tests suggest that results will be very good, as the background will generally be uniform, clean soil with maybe a few stones. This background contrasts strongly with the green, leafy, patterned plants.
Place the above snippet before the `CUDA(cudaNormalizeRGBA(...))` call in the draw section at the bottom of the main loop.
In the section near the top where the code creates the display and texture, either set the texture size to a custom value or divide the camera dimensions by a factor that fits the texture within your display properly. I divided the camera size by 2 for my needs.
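A minimal sketch of that divide-by-2 sizing, assuming a 1280x720 camera; the struct and function names here are illustrative, not from the jetson-inference source, which builds its glTexture directly from the camera dimensions:

```cpp
// Illustrative helper: compute a texture size that fits the display by
// dividing the camera dimensions by an integer factor.
struct TexSize { int width; int height; };

TexSize scaledTexture(int camWidth, int camHeight, int divisor)
{
    // Integer division: a 1280x720 camera with divisor 2 gives 640x360.
    return { camWidth / divisor, camHeight / divisor };
}
```

The same factor must be applied to both dimensions so the aspect ratio of the camera image is preserved on screen.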
It's all about the number of labels, not the number of images. A proportion of the images should be close-up and high resolution, but quite possibly a large number can be lower resolution, so I decided to include photographs of the seedlings in groups of 9, as below:
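Since the seedlings sit on a regular grid, a rough first guess at the nine label boxes in such a group photo can be generated by tiling the image and then tightening the boxes by hand. A hedged sketch (all names are illustrative, not part of any labelling tool):

```cpp
#include <vector>

// Split one photo of a 3x3 block of seedlings into nine tile rectangles,
// each of which can serve as an initial bounding-box label.
struct Tile { int x; int y; int w; int h; };

std::vector<Tile> tileGrid(int imgW, int imgH, int cols, int rows)
{
    std::vector<Tile> tiles;
    const int tw = imgW / cols;   // tile width in pixels
    const int th = imgH / rows;   // tile height in pixels
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            tiles.push_back({ c * tw, r * th, tw, th });
    return tiles;
}
```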
On a relatively small dataset of just 2,064 images, we're already getting good results detecting swede plants. The boxes are not yet tight on the crops, but this can probably be cured by adding a load of null images of bare soil. Shadows are also a problem, and additional images with shadows will probably be added to counter that.
350 swedelings have been planted. The weather is dry and hot. Each plant is exactly 11" apart to match the weeding pattern of the robot. A giant wooden set square and carefully placed string lines are used for positioning.
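That 11-inch pitch is what the Jetson later has to reconstruct from detections. A minimal sketch of the idea, assuming each detected plant centre arrives as an (x, y) offset in inches in the robot's frame; the frame, names, and functions below are illustrative assumptions, not the project's actual code:

```cpp
#include <cmath>
#include <utility>

// A node of the planted lattice, identified by column and row index.
struct GridNode { int col; int row; };

// Snap a detected plant centre to the nearest node of the 11" grid.
GridNode snapToGrid(double x_in, double y_in, double pitch_in = 11.0)
{
    return { static_cast<int>(std::lround(x_in / pitch_in)),
             static_cast<int>(std::lround(y_in / pitch_in)) };
}

// Residual between a detection and its snapped node; averaged over many
// detections, this estimates the robot's offset from the planted grid.
std::pair<double, double> gridResidual(double x_in, double y_in,
                                       double pitch_in = 11.0)
{
    GridNode n = snapToGrid(x_in, y_in, pitch_in);
    return { x_in - n.col * pitch_in, y_in - n.row * pitch_in };
}
```

The tighter the planting matches the 11" pattern, the smaller these residuals, which is why the set square and string lines matter.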
From experience using computer vision last year, some of the cameras got very confused by bits of dry vegetable matter, particularly long, thin bits of 'straw' lying on the surface of the soil. The previous log shows a very scrappy plot, mainly due to this straw being turned over near the surface rather than buried. A pass with the plough turns the soil over to a depth of about 8" and should help bury the rubbish. The test plot is now left to dry out and for any remaining weeds to get blasted by the strong sunlight we're getting at the moment: