Robots That Learn

 
#1

@d.cochran, @DJ Sures
What effort, software development, or application(s) would be required to add a "Robots That Learn" capability like that of Brain Corporation's eyeRover?
Ref: http://www.braincorporation.com/products/


User-inserted image

#2

We have had on-and-off communication with Brain Corporation for a while - it's something we plan to continue discussing. Perhaps they need another poke :) Feel free to send them a message as well! I think we're both waiting for the opportunity to be in the same place at the same time...

#3

@DJ Sures
Thanks for the reply.
I don't know how much poking I can do. I live in North San Diego County, near Qualcomm. Maybe some folks I know can do the poking.
Are there learning apps or process tools available for machine learning?

#4

The needed parts for a learning robot are really just storage. What do you want it to learn? Storing the variables that have been used in the past is entirely doable. There is software that can recognize things and be trained to do things. Speech recognition on your PC is always improving because it keeps training as you use it. Having a PC puts a world of knowledge at your application's disposal.

The most intelligent robots out there grade their own actions and try different things based on those grades. The way a robot knows whether it did well or badly is by judging the result of an action against the results of past actions it has stored in some kind of database. All of this is doable with what is available, but it would take some coding: sensor output has to be evaluated to decide whether a result was good, along with something that defines what "good" and "bad" mean.
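The grade-and-store loop described above can be sketched in a few lines of Python. Everything here - the action names, the tilt-based scoring rule, the dict standing in for the database - is an illustrative assumption, not code from any particular robot platform:

```python
import random

results = {}  # the "database": action -> list of past scores

def score_outcome(sensor_reading):
    """Grade an action's result: closer to upright (0.0 tilt) is better."""
    return -abs(sensor_reading)

def choose_action(actions, explore=0.2):
    """Usually pick the action with the best stored average; sometimes explore."""
    tried = [a for a in actions if results.get(a)]
    if not tried or random.random() < explore:
        return random.choice(actions)
    return max(tried, key=lambda a: sum(results[a]) / len(results[a]))

def record(action, sensor_reading):
    """Store the graded outcome so future choices can judge against it."""
    results.setdefault(action, []).append(score_outcome(sensor_reading))
```

With a couple of recorded attempts - say `record("lean_left", 0.8)` and `record("step_forward", 0.1)` - the robot starts preferring the action whose stored results were best.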

An example of this is teaching a robot to walk, using the output of multiple tilt sensors to grade each attempt. Each movement of the robot takes readings from the tilt sensors, so the robot can chain the readings together to determine whether the result was good. This type of data is better suited to a data cube than a plain database, because data cubes already contain aggregates of the underlying data along time or some other dimension. The robot would look at the cube and chain together what it takes to make a step, based on the results of many failed attempts and a few small successes.
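A minimal sketch of that data-cube idea, assuming dict-keyed `(movement, sensor)` cells of pre-aggregated sums rather than a real OLAP cube; the movement and sensor names are placeholders:

```python
from collections import defaultdict

# Each cube cell keeps a running [sum, count] so queries read a
# pre-aggregated value instead of scanning raw sensor rows.
cube = defaultdict(lambda: [0.0, 0])  # (movement, sensor) -> [sum, count]

def log_reading(movement, sensor, value):
    cell = cube[(movement, sensor)]
    cell[0] += value
    cell[1] += 1

def average_tilt(movement, sensor):
    total, count = cube[(movement, sensor)]
    return total / count if count else None

# A "good" step keeps the average tilt small across attempts:
for v in (0.05, 0.10, 0.07):
    log_reading("shift_weight", "left_tilt", v)
```

Querying `average_tilt("shift_weight", "left_tilt")` then returns the aggregate directly, which is the property the data cube is being proposed for here.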

What I am trying to point out is that a robot's memory, or a learning robot, is really an application running to feed the robot the data it needs. The robot itself is a network of sensors feeding back into the brain (a computer), which stores the information in some quickly queryable format so the robot can know what has and hasn't worked. The brain in their solution is a fairly weak Linux machine with a customized kernel or operating system. Cool idea, but I think being able to program on a full-blown machine is much better in the long run for me.

Think about this: put a robot in a store and have it interact with customers through either a touch screen or voice commands. The robot could access the POS database and make every product in the store available for the customer to look at, and it would know which row and bin each item is in. All of that is cool, but the real power comes from knowing what the customer did. Did they search for lights in electronics or in home goods? How do you market your products based on these decisions, recorded at the moment the robot and consumer interact? Can you read facial expressions from video to determine whether the robot was being helpful or frustrating to the customers? Crunching this data down would let the robot learn how to interact more favorably with customers. Does it act differently for women and men? Elderly and young? Race of the person? It is all about storing the data and then deciding how to use it. Data modeling is huge right now and is changing how companies make decisions, whether they are drafting the next sports hero or selling life insurance by mail.

This is why I am working on EZ-AI. The potential is there for robots to make decisions based on their data and sensors, just as companies already do with sales data. It's about storing the data and using it.

#5

That's a tough question to answer - are there tools in EZ-Builder for machine learning? Well, there is the vision learning system using Object Training.

As for having the robot "learn" using artificial intelligence, or something similar - that is a really big question. It's pretty much as big as "what is the answer to life, the universe and everything?". Without understanding the specifics of the question, there is no applicable answer.

Are you asking for a specific demonstration of a pre-built EZ-Robot with learning abilities?

Might be easier if you ask a specific question, such as...

1) How do I make my robot learn to recognize an object visually?

2) How do I have my robot learn the dimensions of the room during its exploring process?

Remember, the Brain Corp robot demos are specific applications that perform a specific task on specific hardware - they're not a "Hi, let's talk to this robot and ask it to do stuff as if it were a little child" scenario :)

The magic of short demo videos is in not telling the whole story - leaving a lot to your imagination - which may lead you to assume those robots have the intelligence and cognitive understanding of a chimp, which is false. They're still pre-programmed robots running a specific application to perform a specific demo.

#6

Yep, the first question is "What do you want it to learn?" Without that, you've got nothing.

#7

A concrete case study in AI would be piece recognition for my jigsaw puzzle assistant. When solving such puzzles myself, I often wonder how my brain figures out that one specific piece will fit a specific location while I'm searching for a piece for a different location. There must be either heavy parallel processing or some kind of content-addressable memory at work. That's certainly not a viable (technical) approach, even with today's PC and robot processors.

For now it would be sufficient to find the key characteristics of the pieces required for reliable matching. Using the camera, the first step is separating the pieces in a heap; this may be achievable by attempting to fetch the piece on top of the heap. Next comes measuring the outline, which need not be very accurate - perhaps rough edge ratios are sufficient. Extreme shapes seem more promising, like sharp spikes at the edges or non-rectangular angles between the edges. Then comes the set of edge shapes (straight, convex, concave), and finally colors.
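The characteristics listed above could be stored as one record per piece. A minimal sketch, where every field name (the four edge slots, the dominant-color label, the location string) is an assumption for illustration:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Piece:
    """One row of the piece database described above."""
    piece_id: int
    edges: Tuple[str, str, str, str]  # top, right, bottom, left: straight/convex/concave
    dominant_color: str               # coarse color summary of the picture on the piece
    location: str                     # where the physical piece currently sits

# Example record: a border piece currently in a sorting tray.
p = Piece(1, ("straight", "convex", "concave", "convex"), "blue", "tray_A3")
```

A frozen dataclass makes the records hashable, so they can go straight into sets or dict keys when building candidate sets later.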

All of that information could be stored in a huge database, which must also include the current location of every physical piece. Then comes the question of the fastest matching algorithm. Brute force is only a last resort; it would take too much time if every candidate piece had to be moved into its assumed place, and back again when it does not really fit. The primary goal is the construction, or rather reduction, of the set of candidates, based on multiple possible algorithms. Here AI and machine learning may enter the scene: for example, the algorithms could be benchmarked for best operation based on certain (not necessarily predefined) characteristics of the piece shapes and the pictures on them.
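Candidate-set reduction can be sketched with the edge classes from the previous paragraph: before any physical trial, keep only pieces whose edges are complementary to the open slot. The `COMPLEMENT` table and the piece records are illustrative assumptions:

```python
# A slot edge presents the shape of its neighbor; a fitting piece edge
# must be the complementary shape (a bump fits a hole, borders stay straight).
COMPLEMENT = {"convex": "concave", "concave": "convex", "straight": "straight"}

def candidates(pieces, slot_edges):
    """pieces: id -> (top, right, bottom, left) edge shapes.
    slot_edges: required neighbor shape per side, or None if unknown."""
    out = []
    for piece_id, edges in pieces.items():
        if all(req is None or COMPLEMENT[req] == have
               for req, have in zip(slot_edges, edges)):
            out.append(piece_id)
    return out

pieces = {
    1: ("straight", "convex", "concave", "convex"),
    2: ("concave", "concave", "convex", "convex"),
    3: ("straight", "concave", "concave", "convex"),
}
```

For a border slot whose right-hand neighbor shows a bump, `candidates(pieces, ("straight", "convex", None, None))` narrows three pieces down to one before the robot touches anything; further filters (colors, edge ratios) would shrink the set more.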

But back to the first step: the robot should be able to learn *how* to detect single pieces in a heap of pieces; this can be extended to static (motionless) objects in a room, for more common applications. Moving the camera or the robot looks like a good approach, so that corresponding shapes can be extracted from multiple snapshots. Movable light sources may also be helpful, or turning them on and off, for detecting edges or (plane) surfaces. Of course, 3D laser scanners exist for exactly that purpose, but there are many much cheaper ways to implement similar capabilities with EZ robots.

For my own studies in this area, is it already possible to obtain videos or (preferably) single pictures from the camera, for remote processing?

#8

@d.cochran,

There is already a robot being tested in some Lowe's stores, and it goes further than what you have mentioned. Say you walk into a Lowe's and need a bolt. All you do is hold the bolt up to the robot's face; it will come back with a complete description of the bolt, say "follow me", and take you to the exact location of that bolt. Of course, it could have been anything. So your dream has already become a reality.

Oh, BTW, what is a DataCube? Are you talking about an Array?

I find machine learning very interesting. I have a passion for it.

#9

Moravec's paradox is the discovery by artificial intelligence and robotics researchers that, contrary to traditional assumptions, high-level reasoning requires very little computation, but low-level sensorimotor skills require enormous computational resources. The principle was articulated by Hans Moravec, Rodney Brooks, Marvin Minsky and others in the 1980s. As Moravec writes, "it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."

#10

@MovieMaker ... The Lowe's robot you speak of is not the type of robot being discussed here... Although it is quite impressive, it is **not** a learning robot in the true sense of the word... It has the store's inventory database, multiple languages, and the ability to answer simple questions programmed into it... It can recognize objects and read bar codes... It uses this information to take the customer to the aisle the product is in... It has no ability to learn; it is just executing the programmer's code. The only way it can learn something new is if that information is added to its code or database by the programmer...

What would make the Lowe's robot off-the-charts impressive is if it did have AI built in... One example: a customer comes into the store and asks the Lowe's robot a question. The robot immediately recognizes (via facial recognition) that the same customer was in the store a month ago looking for a lawnmower part... The bot then asks the customer if he needs more lawnmower parts, and even calls the customer by name (because of the previous conversation a month ago with the customer)... That would be impressive... and that would be a form of AI... :)