I've Started To Work On Mapping Using The XV-11 Lidar

 
#1

The boards have been pretty quiet so I thought I would share what I am working on now.

***This is for robots using an on-board computer***
I have started making a plugin that uses the XV-11 sensor, the wheel radius, and the encoder counts per revolution of the wheels to build a map of the environment. It also uses the compass portion of the 4-in-1 sensor sold on this site.

The map is currently stored in a SQLite3 database housed in the plugin directory. I don't know if this will be the final design, but it is what I have working right now. There is a map table that is created by the user through the plugin. It contains X coord, Y coord, Tile, and Counter fields, which store the map as it is being updated. The tile is marked when the LIDAR detects something at that location above a certain intensity.
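For anyone following along, a minimal sketch of what that map table might look like. The column names and types here are my guesses from the description above, not the plugin's actual schema:

```sql
-- Hypothetical schema; the actual column names/types may differ.
CREATE TABLE IF NOT EXISTS map (
    X INTEGER NOT NULL,          -- tile X coordinate
    Y INTEGER NOT NULL,          -- tile Y coordinate
    Tile INTEGER DEFAULT 0,      -- 0 = clear, 1 = marked by the LIDAR
    Counter INTEGER DEFAULT 0,   -- how many scans have hit this tile
    PRIMARY KEY (X, Y)
);
```

Keying on (X, Y) makes each tile a single row that can be updated in place as new scans come in.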

Quote:

The tile size is determined by the wheel diameter. If you have small wheels, you have small tiles. If you have large wheels, you have large tiles. The current environment that I am testing in has tiles that are about 4.19 inches by 4.19 inches. This is because I have wheels that are 4 inches in diameter, and if you take the wheel diameter * pi / 3, you come up with 4.188790266.... I round this to 2 decimal places. If you had wheels that were 2 inches in diameter, you would have tiles that are 2.09 inches. If you had wheels that were 12 inches in diameter, the tiles would be 12.57 inches. The logic is that the wheels would be much smaller for robots in smaller environments and much larger for robots in larger environments. Larger wheels mean faster-moving robots, and thus the updating of the environment has to account for that speed. The number of tiles in the map is set on the configuration screen by choosing how large you want your map to be. In the test, the map is 50 feet x 50 feet. Using a robot with 12-inch-diameter wheels indoors in a 50x50 foot house could become problematic. These are all subject to change depending on testing.
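The tile-size rule in the quote boils down to a one-line formula. A quick sketch in C# (my own illustration of the math, not the plugin's code) reproduces the numbers above:

```csharp
using System;

class TileSize
{
    // Tile edge length = wheel diameter * PI / 3, rounded to 2 decimals.
    static double FromWheelDiameter(double wheelDiameterInches)
    {
        return Math.Round(wheelDiameterInches * Math.PI / 3.0, 2);
    }

    static void Main()
    {
        Console.WriteLine(FromWheelDiameter(4));  // 4.19
        Console.WriteLine(FromWheelDiameter(2));  // 2.09
        Console.WriteLine(FromWheelDiameter(12)); // 12.57
    }
}
```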



Well, the information quoted above has changed. I am in the US and as such am more comfortable using inches and feet, so I am making 1-inch tiles for everything. The wheel diameter is still important, but not as important in laying out the grid. I am converting the mm readings from the LIDAR to inches and marking the squares. We will see how this works out and go from there. This, along with everything else, is subject to change as I go through it all.

The map on the screen is loaded from the SQLite3 database initially. As things are seen by the LIDAR, the map table is updated and the display is updated by marking the corresponding tile on the map.

Eventually my goal is to take this logic and use it in SLAM. I plan on starting with some simple SLAM using the RANSAC algorithm, which is best suited to indoor environments because it estimates and creates landmarks based on straight lines. From there I will use the Extended Kalman Filter for data association. This allows the robot to recognize landmarks and then adjust its current position on the map based on them.
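For anyone curious what RANSAC landmark extraction looks like, here is a minimal sketch (my own illustration, not the plugin's actual code): pick two random scan points, fit a line through them, count how many other points fall within a distance threshold, and keep the best-supported line as a straight-wall landmark.

```csharp
using System;
using System.Collections.Generic;

struct Pt { public double X, Y; public Pt(double x, double y) { X = x; Y = y; } }

class RansacLine
{
    // Returns (a, b, c) for the best line a*x + b*y + c = 0,
    // normalized so a^2 + b^2 = 1, plus the inlier count.
    public static (double a, double b, double c, int inliers) BestLine(
        List<Pt> pts, int iterations, double threshold, Random rng)
    {
        double ba = 0, bb = 0, bc = 0;
        int best = -1;
        for (int i = 0; i < iterations; i++)
        {
            Pt p1 = pts[rng.Next(pts.Count)];
            Pt p2 = pts[rng.Next(pts.Count)];
            double a = p2.Y - p1.Y, b = p1.X - p2.X;
            double norm = Math.Sqrt(a * a + b * b);
            if (norm < 1e-9) continue;              // same point picked twice
            a /= norm; b /= norm;
            double c = -(a * p1.X + b * p1.Y);
            int count = 0;
            foreach (Pt p in pts)                   // count points near this line
                if (Math.Abs(a * p.X + b * p.Y + c) < threshold) count++;
            if (count > best) { best = count; ba = a; bb = b; bc = c; }
        }
        return (ba, bb, bc, best);
    }
}
```

In a full implementation you would remove the inliers, re-run to find the next wall, and hand the resulting landmarks to the EKF.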

One of the reasons that I want to store this information in a SQLite3 database is that it would allow me to have multiple maps housed in different tables. The configuration screen could be modified to allow the user to specify which environment the robot is in (Office 1, Office 2, home, Mom's house, for example). These maps would be stored in different tables, and the user would just switch to the map that pertains to the current environment. Multiple maps could also be used to handle different floors of an office building, one per floor.

The test map is about 13 MB in size. This isn't too large, but it is based on only a 50x50 foot house and a robot with 4-inch-diameter wheels. If you were in a warehouse or large office building with a small-wheeled robot, I would imagine the database could get really large. The goal is to get this working in a smaller environment, and then see what needs to be done to handle larger environments.
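As a rough sanity check on row counts (my own back-of-the-envelope math, not measured data): a 50x50 foot map at 1-inch tiles is 600 x 600 = 360,000 rows, and the row count grows with the square of the map dimension.

```csharp
using System;

class MapSizeEstimate
{
    // Number of tiles for a square map of the given size.
    static long TileCount(double mapFeet, double tileInches)
    {
        long perSide = (long)Math.Ceiling(mapFeet * 12.0 / tileInches);
        return perSide * perSide;
    }

    static void Main()
    {
        Console.WriteLine(TileCount(50, 1.0));   // 360000   (50x50 ft house)
        Console.WriteLine(TileCount(300, 1.0));  // 12960000 (warehouse-sized map)
    }
}
```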

Eventually, I plan on incorporating a path-finding algorithm. This shouldn't be too hard to do because it is done in video games all the time, and there is plenty of sample code to build from.

Anyway, that is what I am working on currently. I suspect it will take some time before I have something to share. This is a pretty ambitious project and I will post updates as I accomplish different things with it.

I am not sure if I will sell this plugin or make it freely available. This is something that I will decide after I know how it works in multiple environments. If it turns out to be simply amazing, I might sell it. If it just works, I will give it away for free and continue working on a final solution.

#2

Dude, if you were a girl I'd kiss you.... simply awesome man!

#3

Ha ha ... this is transformational in terms of its potential so "man-hugs" from me!

Have a wonderful New Year everyone.

Cheers

Chris

#4

Talk about timing! I just got my early-version Get Surreal Teensy chip flashed with the latest version of the XV Lidar Controller, which flashes an LED when powered up over a USB connection. Now on to building a Lidar mount on top of a Roomba.

#5

How are you doing on your timeline for getting your robot to market? You've been a busy boy! You are incorporating a lot of technology into this robot!

#6

I decided to use EZ-Builder as the interface for Rafiki. I have made a few parts into plugins so far for it and am working on this one. I have only shared one of them so far. The thought is to leverage what is available in EZ-Builder and add the controls needed for Rafiki. I want people to be able to customize Rafiki to their liking and be able to add anything that they want to add. EZ-AI will be a plugin for the client and a piece of hardware for the server.

As far as the schedule, this new direction sets me back a bit, but I think it helps out a lot in the long run. Gains will be had by not having to maintain an interface to EZ-Builder using the SDK.

Writing plugins takes a bit more time than just writing the code, but it works out in the long run, and I believe in the product (EZ-Builder). I have been impressed with how DJ has done a lot of things. It isn't always easy to understand initially, but once you grasp how it works, you realize the genius behind it. It also lets Rafiki use other people's plugins over time. I wouldn't feel right about using other people's plugins if I didn't share mine, so as I complete the others, I will share them too.

The others that I am working on currently are used for ground height sensors, car bumper sensors, and the Volvo motors (which don't have onboard controllers) that I am using, and eventually I will get to the Omron B5T HVC. It just takes time to get things right, and SLAM with path finding is what has my attention at the moment.

I had an issue a while back that blew all of the 5V devices on my prototype. I have discovered more damage as time has gone on, and just found that a Kangaroo and one of my motor encoders also got taken out. I just ordered a replacement encoder and another Kangaroo. A lot of my devices could handle 12 volts, so most weren't damaged, but unfortunately what was damaged was very difficult to get to. This forced me to disassemble a lot of the robot that I didn't want to disassemble, but it also helped me decide on a couple of design changes that will allow easier access to the parts inside the robot. Hard lessons to learn, but I just keep pushing forward.

Anyway, a lot more information than you asked, but I hope that this gives you an idea of all of the things going on. Focusing on SLAM is nice for me. It is a fun project for sure.

#7

The toughest part of getting this working is the timing. If the timing is off, the marking of the tiles is off. Because of this, the robot will move to a tile, stop, and then take the readings for that tile. This will be done for the map-building process. Once the map is built, I shouldn't need to stop before taking readings from the sensor, as SLAM will kick in at that point and do the location adjustments as needed. I think this is the best way to start with an accurate map.

Also, the compass will determine when a turn is complete, and it will be used to determine the heading and thus the location of what the robot is seeing.
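One subtlety with compass headings is the 0/360 wraparound: "within 1 degree of the target" needs a shortest-angle comparison, not a plain subtraction, or a turn through north will never register as complete. A sketch of such a check (my own helper, not part of the plugin):

```csharp
using System;

class Heading
{
    // Smallest signed difference between two headings in degrees, in (-180, 180].
    public static double Diff(double current, double target)
    {
        double d = (target - current) % 360.0;
        if (d > 180.0) d -= 360.0;
        if (d <= -180.0) d += 360.0;
        return d;
    }

    // True once the robot is within `tolerance` degrees of the target heading.
    public static bool TurnComplete(double current, double target, double tolerance)
    {
        return Math.Abs(Diff(current, target)) <= tolerance;
    }
}
```

For example, Diff(350, 10) comes out as 20 degrees (a short right turn through north), where naive subtraction would say -340.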

I am documenting this partly for my own reference, but also sharing it so that others can understand what is happening when they use this.

#8

This is another note to myself; I figure documenting here is as good as anywhere. Here are the functions used to calculate angles and distances and to decide which direction to turn the robot.

Code:


public String GetDirection(double currentangle, double desiredangle)
{
    double delta = NormalizeAngle(desiredangle - currentangle);

    // Compare with a small tolerance; exact equality is unreliable for doubles.
    if (delta < 0.001 || delta > 2 * Math.PI - 0.001)
        return "Straight";
    else if (Math.Abs(delta - Math.PI) < 0.001)
        return "Backwards";
    else if (delta < Math.PI)
        return "Left";
    else
        return "Right";
}

private Double NormalizeAngle(Double angle)
{
    // Wrap into [0, 2*PI). A single "+ 2*PI" is not enough if the
    // input is more than one full turn out of range.
    angle = angle % (2 * Math.PI);
    return angle < 0 ? angle + 2 * Math.PI : angle;
}

public string GetDirectionPoints(Point a, Point b, Point c)
{
    // Same logic as GetDirection, driven by three waypoints.
    return GetDirection(GetAngle(a, b), GetAngle(b, c));
}

private Double GetAngle(Point p1, Point p2)
{
    // Atan2 handles all four quadrants and the vertical case
    // (p2.X == p1.X), which plain Atan of the slope does not.
    return NormalizeAngle(Math.Atan2(p2.Y - p1.Y, p2.X - p1.X));
}

public static double GetDistanceBetweenPoints(Point p, Point q)
{
    double a = p.X - q.X;
    double b = p.Y - q.Y;
    return Math.Sqrt(a * a + b * b);
}

private void MoveRobot(Point comefrom, Point atnow, Point goingto)
{
    //add code here to move the robot the requested distance

    string command = GetDirectionPoints(comefrom, atnow, goingto);
    double turnangle = GetAngle(atnow, goingto);
    double distance = GetDistanceBetweenPoints(atnow, goingto);

    // The compass variable reports degrees; GetAngle works in radians.
    double targetHeading = turnangle * 180.0 / Math.PI;
    double heading;

    switch (command)
    {
        case "Straight":
        {
            //distance will end up being divided by something to get inches... idk right now
            break;
        }
        case "Backwards":
        {
            //distance will end up being divided by something to get inches... idk right now
            break;
        }
        case "Right":
        {
            do
            {
                //turn right here
                heading = Convert.ToDouble(EZ_Builder.Scripting.VariableManager.GetVariable("CompassHeading"));
            } while (Math.Abs(heading - targetHeading) > 1); // keep turning until within +/-1 degree (0/360 wraparound not handled yet)
            //distance will end up being divided by something to get inches... idk right now
            break;
        }
        case "Left":
        {
            do
            {
                //turn left here
                heading = Convert.ToDouble(EZ_Builder.Scripting.VariableManager.GetVariable("CompassHeading"));
            } while (Math.Abs(heading - targetHeading) > 1); // keep turning until within +/-1 degree (0/360 wraparound not handled yet)
            //distance will end up being divided by something to get inches... idk right now
            break;
        }
    }

    serialPort_GetData();
}

#9

David,

Have you utilized the magnetometer (compass) in 'real life' over an extended run yet?

I ask because these compasses are very sensitive to all magnetic sources, not just magnetic north. In addition, the magnetic lines of flux can be disturbed by many outside influences, such as metal structures and/or fasteners near the sensor and the effect of motors or wires carrying large currents.

My experience with these sensors comes from several years developing multi-rotor UAVs (drones). In those applications, the compass was a critical part of autonomous flight control. To assure proper operation, the sensors are typically placed away from motors, power leads, and any metallic objects, often on stalks atop the airframe in nonmetallic cases.

For accurate directional indications there is usually some sort of calibration process. The drone is rotated 360 degrees about each axis. The sensor readings are stored and compared to null out any static disturbances. Then an offset is introduced to compensate for the local magnetic declination. After that the directional data is good as long as you don't run across any buried metallic objects (like rebar) or nearby flux distorting objects like structural metal.

Your application may not need absolute directional accuracy, but it will need static repeatability.

I hope this information is helpful.

#10

The beauty of SLAM is that it tolerates a certain degree of error. Right now, the code above is mostly for the calculations. Good high-definition encoders should keep the compass from being strictly needed; it is going to be used more for checking for wheel slippage.

The starting point for the compass when turning in place should have a similar level of error as the stopping point of a turn.

Also, there will be external devices available to validate the location of the robot, using cameras doing object recognition in black-and-white mode to limit distortions from lighting conditions. These will be stationary devices.

Really, the compass is the smallest and least used component of the system.