Localize a Robot
ZaidPirwani wrote 05/22/2016 at 19:41 • 1 point
So, this post is more of a question:
How would I localize a robot within a known area? The area is about 5 m by 5 m, divided into a grid, and known beforehand, but the initial position of the robot is NOT known.
I can put an IMU, a compass, a sonar, and Sharp IR range sensors on the robot.
There are walls and the floor is of different color in different areas.
The initial robot position (including orientation) will be random, and the robot will need to identify where it is and then move to specific points given to it.
Please do point me in the right direction: some proper terms to Google, maybe, or a similar project.
Discussions
You should take a look at the book "Probabilistic Robotics" by Sebastian Thrun, Wolfram Burgard, and Dieter Fox. It gives a nice introduction to localization and SLAM, with examples of localization using sensors other than LIDAR (you can actually use a sonar instead, but the results will not be as good).
(SLAM is Simultaneous Localization and Mapping; since you already know the map, only the localization part is relevant to you.)
I think what you're looking for is a "particle filter". See the video below, starting at 18:33:
Basically, allocate an array of 1000 possible positions and initialize it with positions chosen at random, evenly distributed over your area.
Step 1: Take some measurements (camera, sonar, whatever). Based on the results, update the probability of each of the 1000 possible positions.
Step 2: Discard the 900 positions with the lowest probabilities, then make a movement (forward, for example). For each of the 100 remaining positions, generate 10 new positions (10 each times 100 positions = 1000) based on where the movement might have put you.
(In other words, moving forward 5 cm might actually move you anywhere from 4 to 6 cm because the wheels can slip, and "forward" might be slanted because one wheel is larger than the other. So make a set of 10 possible destination positions where you could now be, given that you started at one of the remaining 100 and your forward movement is noisy.)
Go to step 1.
Eventually all the possible positions will drop out except for one, and the step-2 predictions will all be localized to a small area. Detect this case and drop out of the loop.
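The loop above can be sketched in Python. Everything concrete here is an assumption for illustration: a down-looking color sensor over the known floor-color map (made-up values), a made-up sensor accuracy, and a crude slip-noise motion model. It also uses standard weighted resampling in place of the literal keep-100-spawn-10 rule; the idea is the same.

```python
import random

# Hypothetical floor-color map of the known 5x5 grid (color codes 0-2);
# the real map, sensor model, and motion model would replace these stand-ins.
WORLD = [
    [0, 1, 1, 2, 0],
    [1, 0, 2, 1, 0],
    [2, 1, 0, 0, 1],
    [0, 2, 1, 0, 2],
    [1, 0, 0, 2, 1],
]
N = 1000          # number of particles (candidate positions)
P_CORRECT = 0.9   # assumed chance the color sensor reads the true floor color

def sense_update(particles, measured_color):
    """Step 1: weight each particle by how well it explains the measurement."""
    weights = []
    for x, y in particles:
        match = WORLD[y][x] == measured_color
        weights.append(P_CORRECT if match else (1 - P_CORRECT) / 2)
    return weights

def resample_and_move(particles, weights, dx, dy):
    """Step 2: keep positions in proportion to their weight, then apply a
    noisy motion model (the commanded step plus occasional wheel slip)."""
    survivors = random.choices(particles, weights=weights, k=N)
    moved = []
    for x, y in survivors:
        nx = x + dx + random.choice([-1, 0, 0, 0, 1])   # slip noise
        ny = y + dy + random.choice([-1, 0, 0, 0, 1])
        moved.append((min(max(nx, 0), 4), min(max(ny, 0), 4)))
    return moved

# Initialize: positions chosen at random, evenly distributed over the area.
particles = [(random.randrange(5), random.randrange(5)) for _ in range(N)]
```

After a few sense/move cycles the surviving particles cluster around the true position; when the cloud's spread drops below a threshold, you're localized.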
Ok. The following is on the backlog of ideas I have for this.
If the 5x5 area doesn't change, you can do the following with a camera.
Have the robot move around, collect visual information, and store it remotely.
You can decompose each picture using something like an FFT, giving you a spatial-frequency signature for each of the many different images. Next, you condense those images, now a point cloud in frequency space, into discrete states by running a clustering algorithm such as k-means (?). The clustering algorithm groups images that are near one another (in terms of frequency breakdown) into distinct clusters, then finds a single point (or set of points) that best represents each cluster.
The intuition being: if the decomposed frequency signature is close enough to a certain frequency cluster, we assume the robot is in the orientation and position associated with that cluster.
If we use a graphical representation, we end up abstracting away notions of coordinate systems and degrees (orientation) and instead rely on state transitions between different clusters (where the current state is determined by our image and which cluster/state it belongs to).
We can bring in a probabilistic perspective and associate a probability that one state leads to another. We can then place our robot in this 5x5 environment, have it move around, and generate a sequence of guesses as to where it is in our cluster graph, producing a position paired with something like a confidence interval.
TL;DR: Get rid of the grid or coordinate system. Construct a graph based on visual memory. Work out actuation-to-state-transition mappings. BAM: you have a way to plan movements/actuation based on the visually observed environment rather than a human-contrived coordinate system.
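A toy sketch of the signature-and-cluster idea, with everything illustrative: a real system would take a 2-D FFT of camera frames, but here a 1-D DFT of a pixel row stands in for the decomposition, and the k-means is a bare-bones Lloyd's algorithm in pure Python.

```python
import cmath
import random

def dft_magnitudes(signal, k=4):
    """Magnitudes of the first k DFT coefficients: a crude 'frequency
    signature' of one image row (a stand-in for a full 2-D FFT)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * f * t / n)
                    for t in range(n))) for f in range(k)]

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means: returns k cluster centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each signature to its nearest centroid (squared distance).
            i = min(range(k), key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:   # recompute centroid as the mean of its members
                centroids[i] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids

def classify(point, centroids):
    """Which cluster (i.e. which remembered 'place') a new signature belongs to."""
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(point, centroids[c])))
```

Each centroid then plays the role of one node ("place") in the state-transition graph; `classify` on a fresh camera frame tells you which node the robot is currently at.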
If you know which areas are which color, then you can position yourself easily with a down-looking color sensor and some white light. If you have encoders on the wheels and two such sensors, you can also determine the robot's angle as it crosses the boundary between two different colors.
If you don't know the colors, you can still build the map as you go, and use some collision/distance sensors to avoid the walls as you go.
The proper term to google is SLAM.
Although for the angle you can also cheat and use a compass.
If you can add to the environment, put an ultrasonic transmitter at each corner of the square and pulse them in sequence. Deduce your bot's position from the time delay from each corner.
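A rough sketch of the corner-beacon math, assuming three beacons at (0, 0), (L, 0) and (0, L): subtracting the range equations pairwise cancels the quadratic terms, so the position falls out directly. The speed of sound and the beacon layout are assumptions; a fourth corner beacon would add a redundant measurement for averaging.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C
L = 5.0                 # side length of the square area, in metres

def locate(t0, t1, t2):
    """Position from the pulse time of flight to three corner beacons:
    beacon 0 at (0, 0), beacon 1 at (L, 0), beacon 2 at (0, L).
    Each t_i is the measured delay (seconds) from beacon i to the robot."""
    d0, d1, d2 = (t * SPEED_OF_SOUND for t in (t0, t1, t2))
    # Subtracting the circle equations d_i^2 = (x - x_i)^2 + (y - y_i)^2
    # pairwise cancels the x^2 + y^2 terms:
    x = (d0 ** 2 - d1 ** 2 + L ** 2) / (2 * L)
    y = (d0 ** 2 - d2 ** 2 + L ** 2) / (2 * L)
    return x, y
```

In practice you'd also need to synchronize the pulse schedule with the robot (e.g. an RF or IR sync flash), since the robot only hears the arrival, not the emission.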
No LIDAR.
No camera.
On Google I keep finding techniques for either a completely unknown setting (no initial position, no map info) or a completely known one.
>>floor is of different color in different areas.
Drive around a bit and check for colours, repeating until the position becomes unambiguous?
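That suggestion can be made concrete as exhaustive hypothesis elimination over (cell, heading) states: read the floor color, drive one cell, and cross off every state the readings rule out. The color map and the move/read trace format below are made up for illustration.

```python
# Hypothetical floor-color map of the known 5x5 grid (color codes 0-2).
COLORS = [
    [0, 1, 1, 2, 0],
    [1, 0, 2, 1, 0],
    [2, 1, 0, 0, 1],
    [0, 2, 1, 0, 2],
    [1, 0, 0, 2, 1],
]
# Headings: 0 = +x, 1 = +y, 2 = -x, 3 = -y.
STEP = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def localize(trace):
    """Exhaustive hypothesis elimination.

    `trace` is a list of (color_read, moved) pairs: the color seen in the
    current cell, and whether the robot then drove one cell forward.
    Returns the set of (x, y, heading) states consistent with the whole trace.
    """
    # Start with every cell and every heading as a candidate.
    candidates = {(x, y, h) for x in range(5) for y in range(5) for h in range(4)}
    for color, moved in trace:
        # Keep only candidates whose map cell matches the observed color.
        candidates = {(x, y, h) for (x, y, h) in candidates if COLORS[y][x] == color}
        if moved:
            stepped = set()
            for x, y, h in candidates:
                dx, dy = STEP[h]
                nx, ny = x + dx, y + dy
                if 0 <= nx < 5 and 0 <= ny < 5:   # driving into a wall is impossible
                    stepped.add((nx, ny, h))
            candidates = stepped
    return candidates
```

When the returned set shrinks to a single state, the robot is localized; if it stays ambiguous, keep driving and reading. (With noisy sensors you would keep per-state probabilities instead of hard elimination, which is exactly the particle-filter idea mentioned above.)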