Indoor Navigation System and EZ-B Camera?


Any updates on the indoor navigation system, camera and beacon system? There was discussion it was in the works. Just curious.


I believe the recent update to EZ-Builder which allowed 255 EZ-B controllers to be connected to an EZ-Builder session was a step toward doing this. I could be wrong, but it feels like it was.


Dave, I got your overview in the other discussion on web cams. I was wondering if the separate EZ-B with a camera would still be a better way to go than trying to track using the motor/encoder system and an RF signal. I have no real programming knowledge, but I wondered if a grid system (coordinates) could be built from camera data to navigate a robot. The onboard controller could then track from the current point to the programmed point, a kind of radar navigation. An infrared signal could identify the room, which would identify a map of coordinates to travel through. Am I on the right track?


I think that the best solution is both. A camera can be used outside of the robot to tell the location of the robot. I think that right now the best way to detect objects is through echo-type systems. Eventually I would like to move from these echo systems to vision systems. This is the only platform that I know of that gives you all of the options you could dream of.

Echo systems have their disadvantages. Try getting an echo sensor to respond to a fur-covered piece of furniture. The camera can see it without any issue. The camera has to be able to recognize what it is, though. Once the training is done, the camera is superior. Now, what if the piece of furniture is replaced? The camera has to be retrained, but the echo sensor doesn't.

If they were used in conjunction with each other then the camera could train itself with some programming.

This is what I will be working on when I get Spock mobile. There is a Kinect on Spock that I will be playing with, along with 4 cameras in the front, one on each side, and one in the back. It also will have 8 echo sensors around the base and possibly a PIR. I still think that I will put multiple cameras in each room to triangulate the position of Spock in a room. The triangulation data, along with the sensor data on Spock, should provide enough data for reliable self-navigation. The encoders on the wheels will provide accurate and reliable movement data.

It may be overkill, but I like redundant solutions that work together. Will just have to see how it goes.


Hey guys, you may want to take a look at the QR code based navigation videos I did. You can use a camera pointed at the ceiling, print QR codes, and place them on the ceiling. Each time it reads one, it can give an x,y coordinate on your home's floor plan. If you want to get fancy, you can make the QR codes using IR- or UV-reactive ink for a "mostly invisible" QR code marker.
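The lookup itself can be very simple. Here's a minimal Python sketch of the idea (the marker payloads and coordinates are made-up examples, not from any real floor plan): each QR code just encodes a marker ID, and a table maps IDs to floor-plan coordinates.

```python
# Hypothetical sketch: map decoded QR payloads (printed on the ceiling)
# to x,y coordinates on a floor plan. Payloads and coordinates below
# are illustrative only.

# Each QR code encodes a marker ID; this table maps IDs to
# floor-plan coordinates in feet.
QR_WAYPOINTS = {
    "HALL-01": (0.0, 0.0),
    "HALL-02": (0.0, 6.0),
    "KITCHEN-01": (8.0, 6.0),
}

def locate(decoded_payload):
    """Return the robot's x,y position for a decoded QR payload,
    or None if the marker is unknown."""
    return QR_WAYPOINTS.get(decoded_payload)
```

Whatever decodes the QR image (EZ-Builder's camera control or any barcode library) just hands its string to `locate`.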


Hi Dave and Jstarne1. Like J's idea, my first concept step is to plot a track across a room to a destination. I want to use a camera on board an Adventure Bot to track to a colored wall marker (also using an echo sensor). The second step is a colored beacon or marker on the robot and two cameras mounted 90 degrees apart on the walls of the room, preferably webcams directly connected to the computer, mounted on servos to give the location in the room or track (X & Y travel). This gives me a track with marker points which can be used to set the location on X and Y. What do you think? Can it be done, and could it work? My coding skills are zero, but I know I can make the basic parts work. My end goal is coordinates, not destinations. This way you travel to coordinates.
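For the two-wall-camera idea, if each camera's servo reports a bearing to the marker on the robot, the robot's position is where the two bearing lines cross. A hypothetical Python sketch (the camera positions and the angle convention are assumptions for illustration):

```python
import math

def intersect_bearings(cam_a, bearing_a_deg, cam_b, bearing_b_deg):
    """Intersect two bearing rays from known camera positions.
    Bearings are measured counter-clockwise from the +x axis.
    Returns the x,y intersection, or None if the rays are parallel."""
    ax, ay = cam_a
    bx, by = cam_b
    # Unit direction vectors for each bearing.
    dax, day = math.cos(math.radians(bearing_a_deg)), math.sin(math.radians(bearing_a_deg))
    dbx, dby = math.cos(math.radians(bearing_b_deg)), math.sin(math.radians(bearing_b_deg))
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:
        return None  # parallel bearings: no position fix
    # Solve for how far along ray A the intersection lies.
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)
```

With cameras in two corners of the room, each servo angle becomes a bearing and one call gives the robot's X,Y, with no distance sensor needed for this step.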


I won't be putting stickers on the walls. To me it is very uninviting to have people over wondering why we have all of the stickers all over the place. The goal for me is to be able to have a system that can be dropped in place in any location which would allow the robot to map and navigate a house or business with minimal setup.

Most of my proposed solution would be database configuration for setup which would allow an application to be developed that can make this easy on the user. The "charge pad" idea also becomes quite possible.

Stickers to me wouldn't be inviting. I'm able to read them and use them for sure, but they would be limited to things like verifying that I'm lined up with the charging station pad before moving onto it.


I believe that 2 cameras could work but 3 would be better.

If you knew the size of the object you were tracking (in pixels) at, say, 10 feet from the camera, you could calculate the distance from each of 3 cameras based on the size of the object in all 3 cameras. This would allow you to know exactly where the object is. You couldn't pinpoint it with 2 cameras, but you would know roughly where it was.
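The size-to-distance step is just the pinhole-camera inverse relationship: apparent size scales inversely with distance, so halving the pixel size doubles the range. A small illustrative sketch (the reference numbers are made up):

```python
def distance_from_size(ref_distance_ft, ref_pixels, observed_pixels):
    """Estimate distance to an object of known physical size.
    If the object spans ref_pixels when it is ref_distance_ft away,
    its apparent size scales inversely with distance (pinhole model),
    so distance = ref_distance * ref_pixels / observed_pixels."""
    return ref_distance_ft * ref_pixels / observed_pixels
```

For example, an object calibrated at 100 pixels across at 10 feet that now measures 50 pixels is roughly 20 feet away. Run that calculation per camera and you have the three ranges needed for a position fix.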


The stickers were for proof of concept using 1 camera. Like you say in your last statement, cameras only is the goal (similar to Josh's method).

I am busy with a work project again but the mind is going...

I want to try simple things like a target on the bot and the remote camera on a servo, using the servo position to determine a value of 1 to 180. If it works, I have an "X" location. I would assume the robot camera tracking to a fixed point (test device: colored paper) and a sonic distance will give me the "Y". Basic navigation? When I get some time I will try it with my Adventure Bot. The third camera? I would need your thoughts.
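The angle-plus-distance idea is essentially a polar-coordinate fix: the servo angle gives a direction, the sonar gives a range, and the two convert to X and Y with almost no math. A minimal sketch, assuming the wall camera sits at the origin and 90 degrees points straight into the room:

```python
import math

def fix_position(servo_angle_deg, sonar_distance_ft):
    """Convert a wall camera's servo angle (1-180) plus a sonar
    range into an x,y fix. Assumes the camera is at the origin,
    0 degrees points along the wall (+x), and 90 degrees points
    straight into the room (+y)."""
    theta = math.radians(servo_angle_deg)
    x = sonar_distance_ft * math.cos(theta)
    y = sonar_distance_ft * math.sin(theta)
    return (x, y)
```

So a robot seen dead ahead of the camera (servo at 90) at 5 feet comes out as roughly (0, 5) in that room frame.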

Dave, you and I are on the same page, but your software knowledge is WAY beyond my grasp. I am trying to do it the stone-age way, LOL. The least amount of math is best for me.

I know DJ has something up his sleeve. I hope it comes soon.



There are a couple of keys to what I am going to be trying to do in January.

1. My robot hopefully will be mobile by then
2. My son (who was a math major and will have his CompSci degree) will have graduated.
3. He will be hired on at his current job in a position that keeps him from doing contract work on the side.
4. He will have time to help with this project.

My son is very interested in coming up with a navigation system for the InMoov. He also wants to build an InMoov, so he has some incentive to get this feature working well with mine. Right now his time is being spent programming at work, programming for some contracts that he has on the side, and school. His time should free up in January.

Here is my logic on 3 cameras. If you know the distance from 3 set points, you know exactly where something is. If you know the distance from 2 points, you have a pretty good idea of where something is. If one of those cameras is blocked by something and you only have 2 cameras, you have no idea where something is. If you have 3 cameras and one is blocked, you still have a pretty good idea of where it is.

The object that you are tracking on the robot would need to be a ball so that its shape doesn't change at different angles from the cameras. If the size of the ball were known at a specific distance, the distance could be calculated from the size of the ball on each camera. The color of the ball would also be a concern: the ball's color would change as lighting conditions changed, and you wouldn't want something that is a common color in the environment. It could be that a glowing ball would be the best option, but I haven't looked into how the camera would pick up a glowing ball at different ambient light levels. Filters could be used to give you a range of colors.
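Turning three distances into a position is standard 2D trilateration: subtracting the circle equations pairwise leaves a small linear system. A hypothetical sketch (the camera positions are made up, and it assumes the three cameras are not in a straight line):

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for an x,y position given distances r1..r3 from three
    known points p1..p3 (2D trilateration). The three points must
    not be collinear or the system is degenerate."""
    x1, y1 = p1
    x2, y2 = p2
    x3, y3 = p3
    # Subtracting the circle equations pairwise removes the x^2 and
    # y^2 terms, leaving two linear equations a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 + x3**2 - x2**2 + y3**2 - y2**2
    # Cramer's rule for the 2x2 system.
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return (x, y)
```

Feed it the three camera positions and the ball-size distance estimates and it returns the robot's x,y; with one camera blocked you fall back to the rougher two-range estimate.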

I am still a ways away from this. One of the other things that I need to come up with is a reliable data storage layout to map a floor to data points. I plan on using a grid layout. This grid layout would mark each square of the grid with a 1 if an object were determined to be in that square. Each pass of the robot through the house would then add 1 to any square that had an object in it, and subtract 1 from any square that has a positive value (meaning that something has been detected there before) but doesn't have an object in it now. This would allow me to map hard vs. soft squares.

When telling the robot to go from the study to the kitchen, the robot would calculate the shortest path using this information. It would look first for the shortest path and then see how this path would need to be modified for hard targets in its way. It would then modify its path for soft targets based on the value of the soft target.

The external cameras would tell it where it is right now, along with the log that is kept as it moves around the house, and it would use the encoders to calculate the speed at which it is moving so that it knows which grid square it expects to be in. Additionally, the cameras would tell the robot which square it is in as it moves. Along with all of this, there is software that uses cameras to record the "gate" images it expects to see while it navigates from one location to another (RoboRealm AVM). These three systems working in conjunction with each other should provide a pretty accurate way to navigate. The key is that all of these systems have a way to communicate with each other. The SDK for EZ-Builder, the SDK for RoboRealm, database technology, and EZ-Builder itself will all work together from multiple machines to accomplish this. That's the plan anyway at this point.
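The grid-update rule and the shortest-path search described above can be sketched in a few lines of Python. This is only an illustration: the hard-square threshold of 3 is an assumed example, and soft squares are simply treated as passable here rather than weighted.

```python
from collections import deque

def update_grid(counts, observed):
    """Apply one pass of sensor readings to the occupancy counts.
    observed[r][c] is True if something was detected in that square.
    Detected squares gain 1; previously positive squares now seen
    empty lose 1 (never dropping below zero)."""
    for r in range(len(counts)):
        for c in range(len(counts[0])):
            if observed[r][c]:
                counts[r][c] += 1
            elif counts[r][c] > 0:
                counts[r][c] -= 1

def shortest_path(counts, start, goal, hard_threshold=3):
    """Breadth-first search for the shortest route that avoids
    'hard' squares (counts at or above hard_threshold). Returns a
    list of (row, col) squares, or None if the goal is unreachable."""
    rows, cols = len(counts), len(counts[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in prev
                    and counts[nr][nc] < hard_threshold):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

A fuller version would charge a movement penalty for soft squares instead of ignoring them, but the counting scheme itself is exactly the add-one/subtract-one pass described above.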

The only concern that I have is people's reaction to having cameras in every room of a house. This is also a concern in an office building scenario but not as much.

For specific items where you want the robot to validate its recognition, you could use object recognition and barcode-type labels to give the robot multiple forms of verification. The navigation systems would get you close to an object, but you wouldn't know that you were aligned with it. The stickers could be used on those items that you wanted to make sure your robot was aligned to.

I can't wait to see what EZ-Robot has up its sleeve for this. I don't know yet whether I will use it; that depends on what it is. It might be a great start or a great overall solution. I have until January to wait and see because my son and I won't have time to work on it until then.