An extension to the WEEDINATOR project, this system uses an Nvidia Jetson TX2 / Xavier to detect the locations of individual plants that have previously been planted accurately in a grid, reconstruct that grid in software, and use it for orientation and navigation of the robot.
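To make the idea concrete, here is a minimal sketch of what the grid-reconstruction step might look like, assuming the detector delivers plant centroids as (x, y) coordinates in metres on the ground plane. Everything in it (the function name, the use of PCA for orientation, the gap threshold) is an illustrative assumption, not the project's actual code:

```python
# Minimal sketch of reconstructing a planting grid from detected plant
# centroids. Assumes the detector (not shown) yields (x, y) positions
# in metres on the ground plane; all names here are hypothetical.
import numpy as np

def fit_grid(points):
    """Estimate row orientation, spacing and integer cell indices."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)

    # Principal axes of the point cloud give the row/column directions,
    # assuming the planting really is a regular grid.
    _, vecs = np.linalg.eigh(np.cov(centred.T))
    aligned = centred @ vecs             # rotate into the grid frame

    spacing = np.empty(2)
    for axis in range(2):
        gaps = np.diff(np.sort(aligned[:, axis]))
        gaps = gaps[gaps > 0.05]         # skip plants in the same row
        spacing[axis] = np.median(gaps)  # robust spacing estimate

    # Snap each plant to an integer cell, measured from the first row;
    # a real system would refine the spacing iteratively for long rows.
    origin = aligned.min(axis=0)
    idx = np.round((aligned - origin) / spacing).astype(int)
    residuals = (aligned - origin) - idx * spacing
    return vecs, spacing, idx, residuals

# Toy usage: a 4x5 grid at 0.3 m spacing with a little planting noise.
rng = np.random.default_rng(0)
true = np.array([(0.3 * i, 0.3 * j) for i in range(4) for j in range(5)])
detections = true + rng.normal(scale=0.01, size=true.shape)
_, spacing, idx, _ = fit_grid(detections)
print("estimated spacing (m):", spacing)  # roughly [0.3, 0.3]
```

The recovered orientation and cell indices are the useful outputs: once each detected plant is matched to a grid cell, the robot's offset and heading relative to the planted rows fall out of the same fit.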
Previously, navigation had been attempted by means of GPS, coloured ropes and wires carrying high-frequency AC current, but none of these proved effective due to poor accuracy and impracticality. 'Models' can be trained to recognise the individual plants using so-called 'neural networks', and previous tests suggest that results will be very good, as the background will generally be uniform, clean soil with perhaps a few stones. This background contrasts strongly with the green, leafy pattern of the plants. Alliums such as leeks and onions might be more of a challenge, as they could be confused with blades of grass.
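As a rough illustration of why that clean soil background matters, plain colour thresholding alone already separates green foliage from brown earth. The trained network does the real recognition work, but a sketch like this (OpenCV, with illustrative HSV bounds that would need tuning for real crops) shows the contrast being exploited:

```python
# Minimal sketch of exploiting the green-plant / brown-soil contrast
# described above. This is plain colour thresholding, not the trained
# neural network itself; the HSV bounds are illustrative assumptions.
import cv2
import numpy as np

def find_plants(frame_bgr):
    """Return bounding boxes of green blobs in a BGR camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Roughly "green" on OpenCV's 0-179 hue scale; tune for real crops.
    mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))
    # Clean up speckle from stones and small debris.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > 200]  # drop tiny blobs
```

The centroids of these boxes are exactly the (x, y) detections the grid-fitting sketch above would consume, and the weak spot is visible too: anything green and thin, like a blade of grass next to an allium, passes the colour test.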
Experience with computer vision last year showed that some of the cameras got very confused by bits of dry vegetable matter, particularly long, thin pieces of 'straw' lying on the surface of the soil. The previous log shows a very scrappy plot, mainly because this straw had been turned over near the surface rather than buried. A pass with the plough turns the soil over to a depth of about 8" and should help bury the rubbish. The test plot is now being left to dry out so that any remaining weeds get blasted by the strong sunlight we're getting at the moment:
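Returning to the straw problem for a moment: since the troublesome fragments are long and thin, a crude geometric filter could back up the ploughing in software by rejecting very elongated detections. This is a hypothetical sketch reusing the contours from the earlier example; the 4:1 aspect threshold is pure guesswork:

```python
# Hypothetical filter for the straw problem: long, thin fragments give
# very elongated blobs, so reject any detection whose fitted rectangle
# is much longer than it is wide. The 4:1 threshold is an assumption.
import cv2

def looks_like_straw(contour, max_aspect=4.0):
    """Heuristically flag long, thin blobs as probable straw."""
    _, (w, h), _ = cv2.minAreaRect(contour)
    if min(w, h) < 1e-6:
        return True                     # degenerate sliver
    return max(w, h) / min(w, h) > max_aspect
```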
This patch was rotovated with the tractor and has grass and weeds drying out on the surface in the sun. It certainly looks a bit scrappy at the moment, but after another week it can be processed once more to get a nice, fine tilth. Meanwhile, some of the crops themselves are being reared in cells in the glasshouse: