
Autonomous Agri-robot Control System

Controlling autonomous robots the size of a small tractor for planting, weeding and harvesting

Having robots on farms will help remove the need for pesticides and other chemicals and avoid destroying soil structure, giving us hope for the future of our planet. The machines need to be fully autonomous, and the control system's features should include:

1. High accuracy, error-correcting GPS/GNSS with RTK
2. Super-fast multi-core microprocessors for controlling multiple electric motors and enabling parallel processing, linked via I2C
3. Cellular 2G/3G/4G data comms where WiFi is not practical
4. Object recognition and positioning for distinguishing plants from soil
5. Interweb database and user dashboard for everyday machine control
6. Screens, buzzers and LEDs for status reporting / debugging
7. Text-to-speech and speakers for interaction with humans
8. LIDAR / ultrasonic sensors for detecting unexpected objects in the pathway
9. All sub-modules securely bolted onto one PCB for reliable, hard-wired comms

At present, the system will concentrate on one simple task: weed prevention.

Licenses: Software: GPLv3; Hardware: Creative Commons BY-SA.

There are plenty of 'robot controllers' out there, such as Pixhawk for drones and ROS for more complicated robots, but how many have all the modules you need bolted onto one PCB with seamless integration via SPI and I2C? And what if you want to expand the capabilities? Is there any spare 'headroom', for example spare analogue input pins or SPI pins, or spare space on the PCB? How many are based on just one CPU core, with nasty latency issues? How easy is it to understand the code and dependency structure?

A lot of this project revolves around the use of a very fast 3-core processor, the TC275. This is the gadget that holds the world record (16 Mar 2018) for solving the Rubik's cube in something like 0.3 seconds ... and it can be programmed using the Arduino IDE!

Firstly, each core can communicate seamlessly with the others so, for example, core 0 could be controlling motors whilst core 1 sends and receives data from other modules such as the GPRS and TFT screens. The advantage is that core 0 can run at full speed and toggle digital output pins extremely quickly (on the order of 10 nanoseconds), which is fast enough for most motors, particularly if servo 'gearing' is used.

If the code on core 0 is not too protracted, the core can run incredibly fast, with lots of motors smoothly accelerating and decelerating. How many motors? I don't know exactly ... maybe as many as 20?
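As a rough illustration of that division of labour - a minimal sketch only, assuming the Hitex ShieldBuddy-style Arduino core for the TC275 (which gives cores 1 and 2 their own setup1()/loop1() and setup2()/loop2() entry points) and a standard Arduino-style Wire port - core 0 could generate motor step pulses while core 1 handles the slower comms:

    #include <Wire.h>

    const int STEP_PIN = 2;                // hypothetical stepper STEP output
    volatile uint32_t stepInterval = 500;  // microseconds between pulses, updated by core 1

    void setup()                           // core 0
    {
      pinMode(STEP_PIN, OUTPUT);
    }

    void loop()                            // core 0: tight, fast pulse generation
    {
      digitalWrite(STEP_PIN, HIGH);
      delayMicroseconds(5);
      digitalWrite(STEP_PIN, LOW);
      delayMicroseconds(stepInterval);
    }

    void setup1()                          // core 1
    {
      Wire.begin();                        // act as I2C master to the other modules
    }

    void loop1()                           // core 1: slower comms that never block core 0
    {
      Wire.requestFrom(8, 2);              // e.g. poll a slave at address 8 for a new speed
      if (Wire.available() >= 2)
        stepInterval = ((uint32_t)Wire.read() << 8) | Wire.read();
      delay(50);
    }

Nothing core 1 does - I2C polling, screen refreshes, GPRS traffic - ever stalls the pulse loop running on core 0.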

An agricultural robot has different requirements from the general run-of-the-mill home vacuum Roomba. It requires super-accurate GPS/GNSS - not just one unit, but two, enabling error correction between them - one static and the other roving. Next, WiFi is a non-starter in the field, so either cellular GPRS or satellite is required. Then there is debugging: we need loads of buzzers and LEDs - yes, SERIOUSLY! These things are incredibly useful, as is some kind of screen, which again is incredibly useful for testing / commissioning. And what happens when the screen needs to refresh? It pauses the whole CPU core, so we need yet another core. We simply cannot have enough cores, and eventually the control system will have about 5 cores as we gradually upgrade the system within the dark corridors of GitHub.

We're currently making rapid progress with Ai-based object recognition and plan to spend the Winter perfecting techniques for creating models that detect the crop itself, using it as the main source for navigation along both the rows and columns of plants. GPS will be used for general driving about the farm. In Spring we'll start taking photos of the crops and test the machine again, taking more and more photos as the season progresses and continually updating the system. By the end we'll probably have about 10,000 photos to incorporate into the Ai model!

The overall plan is to market the control system, using the actual WEEDINATOR as an example of what can be done, rather than try to sell the whole machine. Much of the difficult work has been carried out in the background doing the programming, while the mechanical machine is the 'sexy' bit that attracts all the praise and adoration! Obviously we had to have the machine to test the controller, but the idea is that people are more likely to want to build their own mechanical machine to their own specs while (hopefully) using our control system.

945-82771-0005-000-2T.jpg

Nvidia Jetson TX2

JPEG Image - 75.35 kB - 06/13/2018 at 07:39


Weedinator_Fona_Nano_13censored.ino

Arduino Nano controls Adafruit SIM800 GPRS module

ino - 8.67 kB - 03/20/2018 at 09:00


Weedinator_TC275_51.ino

The main MCU, which is also the 'Master' on the I2C bus. Controls the motors and one TFT screen.

ino - 30.03 kB - 03/20/2018 at 08:43


Weedinator_NMEA_MEGA_36.ino

This MCU currently hosts a magnetic compass and receives NMEA data from the Ublox network. It's connected to the TC275 MCU as a slave on the I2C bus.

ino - 17.59 kB - 03/19/2018 at 13:26


text/plain - 5.72 kB - 03/19/2018 at 11:06




  • Getting bounding box coordinates transmitted to Arduino over I2C

    Tegwyn☠Twmffat · 4 hours ago · 0 comments

    After a few days' work, I finally managed to get data out of the Jetson TX2 through the I2C bus. I started off using a tutorial from JetsonHacks that runs a 4-digit LED display and then stripped out most of the code to keep only the few lines that transmit the data. It was a bit tricky to compile the code along with the main 'inference' program, which is called detectnet-camera.cpp. This basic code can only transmit one byte at a time, so an integer such as 463 cannot be transmitted as the upper limit of a single byte is 255. We get something like 46 instead of 463. This is not an unsolvable problem as there is already I2C code within the WEEDINATOR software repository for doing this between the Arduino Mega and the TC275, so it should just be a case of re-purposing it for this new I2C task. It's also a chance for me to try and understand what Slash Dev wrote!

    Here are some excerpts from my 'basic' I2C code:

    // Excerpt assumes the usual Linux I2C userspace headers: <fcntl.h>, <sys/ioctl.h>,
    // <linux/i2c-dev.h> (plus the i2c_smbus_* helpers), with kI2CFileDescriptor and
    // PADDYADDRESS (the Arduino slave address) defined elsewhere in the full source.
    void OpenI2C()
    {
        //----- OPEN THE I2C BUS -----
        char *filename = (char*)"/dev/i2c-1";
        if ((kI2CFileDescriptor = open(filename, O_RDWR)) < 0)
        {
            // ERROR HANDLING: check errno to see what went wrong
            printf("*************** Failed to open the i2c bus ******************\n");
            //return;
        }
        if (ioctl(kI2CFileDescriptor, I2C_SLAVE, PADDYADDRESS) < 0)
        {
            fprintf(stderr, "Failed to set slave address: %m\n");
            //return 2;
        }
    }

    int i2cwrite(int writeValue)
    {
        int toReturn = i2c_smbus_write_byte(kI2CFileDescriptor, writeValue);
        if (toReturn < 0)
        {
            printf(" ************ Write error ************* \n");
            toReturn = -1;
        }
        return toReturn;
    }
    // Transmit each bounding-box coordinate (currently truncated to a single byte):
    writeValue = static_cast<int>(bb[0]);
    printf(" writeValueZero   = %i \n", writeValue);
    i2cwrite(writeValue);

    writeValue = static_cast<int>(bb[1]);
    printf(" writeValueOne    = %i \n", writeValue);
    i2cwrite(writeValue);

    writeValue = static_cast<int>(bb[2]);
    printf(" writeValueTwo    = %i \n", writeValue);
    i2cwrite(writeValue);

    writeValue = static_cast<int>(bb[3]);
    printf(" writeValueThree  = %i \n", writeValue);
    i2cwrite(writeValue);

    Full code is on GitHub.
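    Since each bounding-box coordinate can be bigger than 255, one obvious workaround - purely a sketch of the idea, not the repository's actual protocol - is to split each value into a high byte and a low byte on the Jetson and reassemble them on the Arduino. The hypothetical i2cwrite16() helper below assumes kI2CFileDescriptor has already been opened and pointed at the Arduino's slave address, as in OpenI2C() above:

    // Hypothetical helper (Jetson side): send one 16-bit value as two bytes.
    int i2cwrite16(int value)
    {
        // High byte first, then low byte; the receiver must use the same order.
        if (i2c_smbus_write_byte(kI2CFileDescriptor, (value >> 8) & 0xFF) < 0)
            return -1;
        return i2c_smbus_write_byte(kI2CFileDescriptor, value & 0xFF);
    }

    // On the Arduino slave, a matching Wire onReceive handler could rebuild it:
    //   while (Wire.available() >= 2) {
    //       int coord = (Wire.read() << 8) | Wire.read();
    //       // ... store coord ...
    //   }

    Note that each i2c_smbus_write_byte() call is its own bus transaction, so a block write (or the existing Mega-to-TC275 scheme mentioned above) would be more robust against the two bytes getting separated.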

  • Step by Step Instructions for Turning Sets of Images into a Model for Object Detection on the Jetson TX2

    Tegwyn☠Twmffat · 3 days ago · 0 comments

    To detect different crops, a large set of photos needs to be taken and bounding boxes 'drawn' around the actual plants to help determine where they are in the camera frame. Since we don't actually have any newly planted crops at this time of year, I've used a ready-prepared set of dog photos as a practice run. These are accurate step-by-step instructions, and the text assumes all the relevant software is already installed on the Jetson:

    Prerequisites: 

    Jetson TX2 flashed with JetPack 3.3.

    Caffe version: 0.15.14

    DIGITS version: 6.1.1

    Check that all software is installed correctly by using the pre-installed dog detect model that comes with Jetpack by running this in terminal:

    $ sudo ~/jetson_clocks.sh && cd jetson-inference/build/aarch64/bin && ./detectnet-camera coco-dog

    It will take a few minutes to load up before the camera footage appears.

    To start from scratch with a set of photos, first turn on the DIGITS server:

    $ sudo ~/jetson_clocks.sh && cd digits && export CAFFE_ROOT=/home/nvidia/caffe && ./digits-devserver

    Now we're going to build the model using actual images of dogs with their associated text files:

    In a browser, navigate to http://localhost:5000/

    Importing the Detection Dataset into DIGITS: 

    > Datasets > Images > Object Detection

    Training image folder:  /media/nvidia/2037-F6FA/coco/train/images/dog 

    Training label folder:  /media/nvidia/2037-F6FA/coco/train/labels/dog 

    Validation image folder: /media/nvidia/2037-F6FA/coco/val/images/dog 

    Validation label folder: /media/nvidia/2037-F6FA/coco/val/labels/dog 

    Pad image (Width x Height): 640 x 640

    Custom classes: dontcare, dog

    Group Name: MS-COCO

    Dataset Name: coco-dog

    > Create > Home > Models > Images > Object Detection

    > Select Dataset: coco-dog 

    Training epochs = 16
    Snapshot interval (in epochs) = 16
    Validation interval (in epochs) = 16

    Subtract Mean: none 

    Solver Type: Adam 

    Base learning rate: 2.5e-05 

    > Show advanced learning options 

    Policy: Exponential Decay 

    Gamma: 0.99 

    batch size = 2 

    batch accumulation = 5  (for training on Jetson TX2)

    Specifying the DetectNet Prototxt: 

    > Custom Network > Caffe 

    The DetectNet prototxt is located at /home/nvidia/jetson-inference/data/networks/detectnet.prototxt in the repo.

    > Pretrained Model = /home/nvidia/jetson-inference/data/networks/bvlc_googlenet.caffemodel

    > Create

    Location of epoch snapshots: /home/nvidia/digits/digits/jobs

    You should see the model being created through a series of epochs. Make a note of the final epoch.

    Navigate to /home/nvidia/digits/digits/jobs, open the latest job folder and check it has the 'snapshot_iter_*****.caffemodel' files in it. Make a note of the highest '*****' number, then copy and paste the folder into /home/nvidia/jetson-inference/build/aarch64/bin for deployment.

    Rename the folder to reflect the number of epochs that it passed, eg myDogModel_epoch_30.

    For Jetson TX2, at the end of deploy.prototxt, delete the layer named cluster:

    layer {
      name: "cluster"
      type: "Python"
      bottom: "coverage"
      bottom: "bboxes"
      top: "bbox-list"
      python_param {
        module: "caffe.layers.detectnet.clustering"
        layer: "ClusterDetections"
        param_str: "640, 640, 16, 0.6, 2, 0.02, 22, 1"
      }
    }

    Open terminal and run, changing the '*****' number accordingly:

    $ cd jetson-inference/build/aarch64/bin && NET=myDogModel_epoch_30 && ./detectnet-camera \
    --prototxt=$NET/deploy.prototxt \
    --model=$NET/snapshot_iter_*****.caffemodel \
    ... Read more »

  • Dog Detector

    Tegwyn☠Twmffat · 4 days ago · 0 comments

    Obviously, we're not going to be detecting dogs in the field, but there is not a publicly available, ready-made inference model for detecting vegetable seedlings - yet.

    A lot of Ai models were trained on cats and dogs, so, not wanting to break with tradition, I thought it relevant to test the Jetson TX2 object recognition system on my dog. Actually, the correct term is 'inference', and searching the net for 'object recognition' is fairly useless.

    The demo used is found on the Nvidia GitHub page: https://github.com/dusty-nv/jetson-inference and the best thing to do is scroll about three quarters of the way down the page and run this:

    $ cd jetson-inference/build/aarch64/bin

    $ ./detectnet-camera coco-dog                           # detect dogs in the camera

    in the terminal  (see video):


    Next thing to do is to try and get the bounding box coordinates exported into the real world via the I2C bus, then, sometime next year, train some models with plant images that represent what is actually grown here in the fields.

    Building the image set for the vegetables is not an easy task and requires thousands of photos to be taken in different lighting conditions. Previous experience using the Pixy2 camera showed that bright sunlight causes relatively dark and sharp shadows, which were a bit of a problem. With Ai, we can incorporate photos with various shadow permutations to train the model. We need to do some research to make sure that we do it properly.

  • First Steps With Ai on Jetson TX2

    Tegwyn☠Twmffat · 4 days ago · 0 comments

    I really thought that there could not be any more files to upload after the marathon 4-month JetPack install debacle ... but, as might be expected, there were still many tens of thousands more to go. The interweb points to using a program called 'DIGITS' to get started 'quickly', yet this was later defined to be a mere '2 days' work! Anyway, after following the instructions at https://github.com/NVIDIA/DIGITS/blob/master/docs/BuildDigits.md I eventually had some success. Not surprisingly, DIGITS needed a huge load of dependencies and I had to backtrack through each one, through 'dependencies of dependencies of dependencies' ... a dire task for a relative Ubuntu beginner like myself.

    Fortunately, I had just about enough experience to spot the mistakes in each instruction set - usually a missing 'sudo' or a failure to cd into the right directory. A total beginner would have absolutely no chance! For me, at least, deciphering the various error messages was extremely challenging. I made a note of most of the steps / problems, pasted at the end of this log, which will probably make very little sense to anyone as very often I had to backtrack to get dependencies installed properly, e.g. libprotobuf.so.12.

    Anyway, here is my first adventure with Ai - recognising an 'O':


    Notes:

    File "/usr/local/lib/python2.7/dist-packages/protobuf-3.2.0-py2.7-linux-aarch64.egg/google/protobuf/descriptor.py", line 46, in <module>
        from google.protobuf.pyext import _message
    ImportError: libprotobuf.so.12: cannot open shared object file: No such file or directory

    Procedure:

    # For Ubuntu 16.04
    CUDA_REPO_PKG=http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb

    ML_REPO_PKG=http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/nvidia-machine-learning-repo-ubuntu1604_1.0.0-1_amd64.deb

    # Install repo packages
    wget "$CUDA_REPO_PKG" -O /tmp/cuda-repo.deb && sudo dpkg -i /tmp/cuda-repo.deb && rm -f /tmp/cuda-repo.deb

    wget "$ML_REPO_PKG" -O /tmp/ml-repo.deb && sudo dpkg -i /tmp/ml-repo.deb && rm -f /tmp/ml-repo.deb

    # Download new list of packages
    sudo apt-get update

    sudo apt-get install --no-install-recommends git graphviz python-dev python-flask python-flaskext.wtf python-gevent python-h5py python-numpy python-pil python-pip python-scipy python-tk

                   ------------------DONE------------------------------

    sudo apt-get install autoconf automake libtool curl make g++ git python-dev python-setuptools unzip

                   ------------------DONE------------------------------

    $ git clone https://github.com/protocolbuffers/protobuf.git
    $ cd protobuf
    $ git submodule update --init --recursive
    $ ./autogen.sh
    To build and install the C++ Protocol Buffer runtime and the Protocol Buffer compiler (protoc) execute the following:

    $ ./configure
    $ make
    $ make check
    $ sudo make install
    $ sudo ldconfig # refresh shared library cache.
    cd python
    sudo python setup.py install --cpp_implementation

    Download Source
    DIGITS is currently compatible with Protobuf 3.2.x

    # example location - can be customized
    export PROTOBUF_ROOT=~/protobuf
    cd $PROTOBUF_ROOT
    git clone https://github.com/google/protobuf.git $PROTOBUF_ROOT -b '3.2.x'
    Building Protobuf
    cd $PROTOBUF_ROOT
    ./autogen.sh
    ./configure
    make "-j$(nproc)"
    make install
    ldconfig
    cd python
    sudo python setup.py install --cpp_implementation
    This will ensure that Protobuf 3 is installed.

                  ------------------ DONE -------------------------

    sudo apt-get install --no-install-recommends build-essential cmake git gfortran libatlas-base-dev libboost-filesystem-dev libboost-python-dev
                   ----------- DONE -----------------------------------

    sudo apt-get install...

    Read more »

  • Ai Object Based Navigation Takes One Step Forwards

    Tegwyn☠Twmffat · 4 days ago · 0 comments

    About 4 months ago I bought the Jetson TX2 development board and tried to install the JetPack software onto it ... but after many hours of struggle, I got pretty much nowhere. Fortunately, the next release, JetPack 3.3, worked a lot better and I finally managed to get a working system up and running:

    The installation uses two computers running Ubuntu and the tricks that I used are:
    • Make a fresh install of Ubuntu 16.04 (2018) on the host computer
    • Use the network settings panel to set up the USB interface, particularly the IPv4 settings. The documentation gives an address of 192.168.55.2, so enter this, then 255.255.255.0, then 255.255.255.0 again. When the install itself asks for the address, use 192.168.55.1.
    • There must be an internet connection !
    • Make sure the install knows which internet device to use, e.g. Wi-Fi / Bluetooth / whatever. A router switch is NOT required as the install will automatically switch between the internet and USB connections whenever it needs to, as long as it was told beforehand which connection to use.

    The plan is to spend the colder Winter months developing an object-based navigation system for the machine so, for example, it can use the plants themselves to enhance the overall navigation accuracy. We'll still be using GNSS, electrical cables, barcodes etc. but will eventually give mathematical weighting to the techniques that prove to be more useful.


  • WEEDINATOR Frontend Human-Machine Interface Explained

    Tegwyn☠Twmffat · 4 days ago · 0 comments

    Rafael has come all the way over from Brazil to the Land of Dragons to visit! Progress on the web interface has been ongoing in the background, and it's great to get a guided tour from the man himself on how it works:

  • Control System: Computer or Microcontroller?

    Tegwyn☠Twmffat · 07/02/2018 at 13:27 · 0 comments

    One of the debates that came out of the Liverpool Makefest 2018 was whether it would be better to use a computer, e.g. a Raspberry Pi, or a microcontroller, such as an Arduino, for the control system ... I tried a Google search, but nowhere could I find a definitive answer.

    In my mind, the Raspberry Pi, or the 'RPi', is great for complex servers or for handling loads of complex data such as Ai-based object recognition ... or a complex robot with very many motors running at the same time. In contrast, the Arduino, or 'MCU', will handle simpler tasks with greater efficiency and reliability.

    The RPi works with a huge operating system composed of a vast, almost indecipherable network of interdependent files, using a very large amount of precious silicon. The problem here is that computers are prone to crashing due to their sheer complexity, whilst an MCU, with only a few thousand lines of code, is at least one order of magnitude more reliable. The other question that was posed is: if the system, whatever it is, does crash, how long will it take to reboot?

    As development of the machine continues, some of the tasks will be assigned to a small computer, the Nvidia TX2, hosting an enormous graphics processor for Ai based object recognition. More critical tasks such as navigation and detecting 'unexpected objects' will be done on MCUs. One of the major tasks is writing / finding code to get reliable communication between these devices. We might also want a simple 'watchdog' MCU to check that all the different systems are working properly. Maybe each system will constantly flash a 'heartbeat' LED (or equivalent) and the watchdog will monitor this. A small robotic arm would then move across to press the relevant reset button.
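    As a minimal sketch of that heartbeat idea (hypothetical pin numbers and timeout, on a generic Arduino acting as the watchdog), each monitored system toggles a digital line and the watchdog flags any line that stays quiet for too long:

    // Hypothetical heartbeat watchdog sketch: one input pin per monitored subsystem.
    const uint8_t HEARTBEAT_PINS[] = {2, 3, 4};        // e.g. TC275, Mega, Jetson interface
    const uint8_t NUM_SYSTEMS = sizeof(HEARTBEAT_PINS);
    const unsigned long TIMEOUT_MS = 2000;             // declare a system dead after 2 s of silence

    int lastState[NUM_SYSTEMS];
    unsigned long lastEdge[NUM_SYSTEMS];

    void setup()
    {
      Serial.begin(115200);
      for (uint8_t i = 0; i < NUM_SYSTEMS; i++)
      {
        pinMode(HEARTBEAT_PINS[i], INPUT);
        lastState[i] = digitalRead(HEARTBEAT_PINS[i]);
        lastEdge[i] = millis();
      }
    }

    void loop()
    {
      for (uint8_t i = 0; i < NUM_SYSTEMS; i++)
      {
        int state = digitalRead(HEARTBEAT_PINS[i]);
        if (state != lastState[i])                     // an edge means the system is alive
        {
          lastState[i] = state;
          lastEdge[i] = millis();
        }
        else if (millis() - lastEdge[i] > TIMEOUT_MS)  // no edges for too long: flag it
        {
          Serial.print("Heartbeat lost on system ");
          Serial.println(i);                           // this is where the reset mechanism would act
        }
      }
    }

    Whether the 'reset action' is then a relay, a MOSFET on the power rail or the little robotic arm pressing a reset button is just a detail at the flagged point in the loop.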

  • WEEDINATOR navigating using Pixy2 line tracking camera

    Tegwyn☠Twmffat · 06/10/2018 at 13:47 · 0 comments



    The WEEDINATOR uses advanced, super accurate GPS to navigate along farm tracks to the start of the beds of vegetables with an accuracy of about plus/minus 20 mm. Once on the bed, accuracy needs to be even greater - at least plus/minus 5 mm. 

    Here, we can use object recognition cameras such as the Pixy2 which can perform 'on chip' line recognition without taxing our lowly Arduinos etc.

    This test was done in ideal, cloudy conditions and eventually the lighting would need to be 100% controlled by extending the glass fibre body of the machine over the camera's field of view and using LEDs to illuminate the rope. Other improvements include changing the rope colour to white and moving the camera slightly closer to the rope.

    The Pixy2 is incredibly easy to code:

    #include <Pixy2.h>
    #include <PIDLoop.h>
    
    #define X_CENTER         (pixy.frameWidth/2)
    Pixy2 pixy;
    PIDLoop headingLoop(5000, 0, 0, false);
    int32_t panError; 
    int32_t flagValue; 
    ////////////////////////////////////////////////////////////////////////////
    
    void initPixy()
    {
      digitalWrite(PIXY_PROCESSING,HIGH);
      pixy.init();
      pixy.changeProg("line");
    }
    
    ////////////////////////////////////////////////////////////////////////////
    /* LINE_MODE_MANUAL_SELECT_VECTOR
     *  Normally, the line tracking algorithm will choose what it thinks is the best Vector line automatically.
     *  Setting LINE_MODE_MANUAL_SELECT_VECTOR will prevent the line tracking algorithm from choosing the Vector automatically.
     *  Instead, your program will need to set the Vector by calling setVector().
     *  uint8_t m_flags: this variable contains various flags that might be useful.
     *  uint8_t m_x0: this variable contains the x location of the tail of the Vector or line. The value ranges between 0 and frameWidth (79).
     *  int16_t m_angle: this variable contains the angle of the line in degrees.
     */
    void pixyModule()
    {
      if (not usePixy)
        return;
      int8_t res;
      int left, right;
      char buf[96];
      // Get latest data from Pixy, including main vector, new intersections and new barcodes.
      res = pixy.line.getMainFeatures();
      // We found the vector...
      if (res&LINE_VECTOR)
      {
        // Calculate heading error with respect to m_x1, which is the far-end (head) of the vector,
        // the part of the vector we're heading toward.
        panError = (int32_t)pixy.line.vectors->m_x1 - (int32_t)X_CENTER;
        flagValue = (int32_t)pixy.line.vectors->m_flags;
        DEBUG_PORT.print( F("Flag Value:       ") );DEBUG_PORT.println(flagValue);
        panError = panError + 188;  // Can't send negative values. A lower value makes the machine go anti-clockwise.
        pixy.line.vectors->print();
    
        // Perform PID calcs on heading error.
        //headingLoop.update(panError);
      }
      //DEBUG_PORT.print( F("PAN POS:       ") );DEBUG_PORT.println(panError);
    
    } // pixyModule
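    For completeness, here is one way the commented-out PID line might eventually be used - an illustration only, not the WEEDINATOR's actual steering code, and it assumes PIDLoop exposes its output as m_command, as it does in the Pixy2 example sketches:

    void steerFromPixy()
    {
      const int32_t BASE_SPEED = 200;        // hypothetical nominal wheel speed

      // Remove the +188 offset that was only added for transmission, then let
      // the PID loop turn the heading error into a steering command.
      headingLoop.update(panError - 188);

      int32_t left  = BASE_SPEED + headingLoop.m_command;
      int32_t right = BASE_SPEED - headingLoop.m_command;

      DEBUG_PORT.print( F("Wheel demand L/R:  ") );
      DEBUG_PORT.print(left); DEBUG_PORT.print("  "); DEBUG_PORT.println(right);
    }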

  • Nvidia Jetson TX2 Installation HELL ....... AAAAAaaaaaaargh!!!

    Tegwyn☠Twmffat · 06/06/2018 at 21:38 · 0 comments

  • Upgrade to Pixy Camera

    Tegwyn☠Twmffat · 06/01/2018 at 11:30 · 0 comments

    Pixy2 is out! …. And it does line vector recognition:

    #include <Pixy2.h>
    #define X_CENTER         (pixy.frameWidth/2)
    Pixy2 pixy;
    int32_t pan; 
    
    void setup()
    {
      Serial.begin(115200);
      Serial.print("Starting...\n");
      pixy.init();
      pixy.changeProg("line");
      pixy.setLamp(1, 1);
    }
    
    void loop()
    {
      pixy.line.getAllFeatures();
      if (pixy.line.numVectors > 0)    // only read the vector if one was actually detected
      {
        pan = (int32_t)pixy.line.vectors->m_x1 - (int32_t)X_CENTER;
        Serial.print("pan:  "); Serial.println(pan);
      }
    }

    ... Now it just needs to stop raining outside so it can be tested on blue rope pinned down on the vegetable beds.


  • 1
    Annotated Diagram

  • 2
    Surface Mount Soldering

    First thing is to solder all the 1206 components - resistors, LEDs and capacitors. No stencil is required - just a small amount of solder paste and a reflow heat gun. Fear not - soldering this size SMT is easy!

    Sometimes it's difficult to spot the polarity of the LEDs so it's a good idea to have a flying 5v power supply to check that the LEDs work before applying the final heat. Lay the LED in the solder on the pads and test they work. 

    The green LEDs require a higher value resistor than the others, so 2k is used with these and 1k with the others.

    The 0 ohm resistors can be left off - they give the option of connecting the SIM800 to the MEGA 2560 instead of a NANO. The 2560 tends to be more stable in operation.

  • 3
    Mount the Buzzers, switches, regulators, screw terminals

    These items are very robust, so they need to be soldered next. Screw connectors are very useful where there is any vibration in the machine as they are pretty solid. Otherwise, there are female connectors for flying leads on the stackable pins on the MCUs. The buzzers require 100 ohm resistors to protect the MCU from supplying too much current and burning out the pin circuitry.

    There are some random locations for ground and 5v screw terminals which are very useful. The 12V screw terminals are all 5.08 mm pitch.

    NB. The Ublox Rover module can be connected to the PCB 12v supply or to a 10 to 30 VDC battery which is useful for keeping it 'live'.



Discussions

Domen wrote 05/19/2018 at 08:14

Hi, great project!

Which motors are you using for movement and which for the steering? Could you please provide pricing and where to buy from?

Best regards


Tegwyn☠Twmffat wrote 05/19/2018 at 08:24

Drive: NEMA32 0.75KW 220V High Speed CNC Servo Control 2.4NM 2500line AC Servo Motor and Driver https://www.fasttobuy.com/nema32-075kw-220v-high-speed-cnc-servo-control-24nm-2500line-ac-servo-motor-and-driver_p35191.html


Tegwyn☠Twmffat wrote 05/19/2018 at 08:26

Steering: 2 Phase Closed Loop Stepper System NEMA34 12NM High Torque CNC Stepper Motor Control kits https://www.fasttobuy.com/2-phase-closed-loop-stepper-system-nema34-12nm-high-torque-cnc-stepper-motor-control-kits_p36311.html


merck.ding wrote 03/20/2018 at 06:57

This is a very good idea and I am very much looking forward to you completing it.


Tegwyn☠Twmffat wrote 03/20/2018 at 08:35

Thanks for the encouragement!

