@Rich, can you explain the camera URL function a little more?

How do I enable this in EZ-Builder? Is it the same as enabling the TCP/IP connection from the Connection options (Settings, Server tab, select your EZ-Board and enable scripts)? My IP address is and the port listed in EZ-Builder is 6666.


The camera is through the HTTP server control, not the telnet control.

User-inserted image

That gives you the server control (with options, e.g. the port).

User-inserted image

The URL I gave above (after adjusting the IP address and port) should give you a snapshot from the camera as a JPEG.
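For anyone who wants to grab that snapshot from a script rather than a browser, here is a minimal Python sketch. The host, port, and `/CameraImage.jpg` path below are placeholders, not the real EZ-Builder path — substitute whatever snapshot URL works for your setup:

```python
import urllib.request

# Placeholder URL: use your EZ-Builder machine's IP, the port set in the
# HTTP Server control, and the actual snapshot path from your setup.
SNAPSHOT_URL = "http://192.168.0.10:80/CameraImage.jpg"

def is_jpeg(data: bytes) -> bool:
    """JPEG data starts with the two-byte SOI marker FF D8."""
    return data[:2] == b"\xff\xd8"

def grab_snapshot(url: str) -> bytes:
    """Fetch one frame from the camera's snapshot URL and sanity-check it."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = resp.read()
    if not is_jpeg(data):
        raise ValueError("Response was not a JPEG image")
    return data

# Example use (needs the HTTP Server control running and the camera on):
#   frame = grab_snapshot(SNAPSHOT_URL)
#   open("frame.jpg", "wb").write(frame)
```

The JPEG magic-byte check is just a cheap way to tell an image apart from, say, an HTML login page coming back instead.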


This looks very promising. This method gets me a static image. I could use some controls in the EZ-Face app to refresh, but do you know if there's a live feed option vs. a static current .jpg?


I think it's just a static current jpeg but it's refreshed whenever you want so you should easily be able to make some kind of stream out of it by refreshing it once every 50ms, 100ms, whatever is needed. Since it's more than likely going to be local anyway there should be a very quick load time.
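That refresh-in-a-loop idea can be sketched in a few lines of Python. This is only an illustration of the polling approach described above — the URL is a placeholder, and 10 fps (a 100 ms delay) is just one of the rates suggested:

```python
import time
import urllib.request

SNAPSHOT_URL = "http://192.168.0.10:80/CameraImage.jpg"  # placeholder

def poll_interval(fps: float) -> float:
    """Seconds to sleep between snapshot requests for a target frame rate."""
    return 1.0 / fps

def stream_frames(url: str, fps: float = 10.0, max_frames: int = 100):
    """Approximate a live feed by re-fetching the static snapshot in a loop."""
    delay = poll_interval(fps)
    for _ in range(max_frames):
        with urllib.request.urlopen(url, timeout=5) as resp:
            yield resp.read()
        time.sleep(delay)

# Example use (needs a reachable snapshot URL):
#   for i, frame in enumerate(stream_frames(SNAPSHOT_URL, fps=10)):
#       print(f"frame {i}: {len(frame)} bytes")
```

Since the camera is on the local network, each fetch should return quickly enough that 10-20 fps of polling looks reasonably smooth.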


@Rich, I agree... I was just hoping for a direct video stream. By the way, if anyone uses the HTTP method to test getting a picture in IE, it works fine except that you have to manually log in with your username and password. If you want to embed the username and password in the URL, by default the username is Admin and the password is blank, and the blank must be represented by a space, like so:
"http://Admin: @" *You may need to change the IP address for your system, and of course add the HTTP Server function to your EZ-Builder and start it, along with having your camera turned on in EZ-Builder.

This will NOT work in modern IE, like version 10. It will work in Chrome and other browsers. http://support.microsoft.com/kb/834489
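If you're building that credentials-in-the-URL form from code instead of typing it into a browser, it's worth percent-encoding the blank-space password so the URL stays valid. A small Python sketch of the userinfo-style URL described above (the host, port, and path are placeholders):

```python
from urllib.parse import quote

def snapshot_url_with_auth(host: str, port: int, user: str = "Admin",
                           password: str = " ",
                           path: str = "/CameraImage.jpg") -> str:
    """Build a snapshot URL with the login embedded in the userinfo part.

    The default EZ-Builder login is user 'Admin' with a blank password,
    which has to be sent as a single space; quote() turns that space
    into %20 so the URL stays well-formed. The path is a placeholder.
    """
    return f"http://{quote(user)}:{quote(password)}@{host}:{port}{path}"

# Example:
#   snapshot_url_with_auth("192.168.0.10", 80)
```

As noted above, modern IE rejects URLs with embedded credentials outright, so this form is only useful in other browsers or in HTTP client code.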


@Justin- Awesome. I can't wait to play with this in the near future. Thanks for sharing this with everybody.


@Justin, I am going to keep my eye on this. It has GREAT potential. EZB knowing not only that it is a face, but also WHOSE face it is.

Keep up the good work!


I just had a good laugh at my EZ-Face app. I had already trained it with DJ Sures' face, so I was going to add the actor Steve Guttenberg from Short Circuit, and EZ-Face didn't miss a beat, detecting Dr. Newton Crosby, Ph.D. as DJ Sures! Grin Smile

I'm finding that the functionality maxes out for me at 4-5 faces for solid reliability. After that, the face trained the most (mine) starts to look like other people; it detected Isaac Asimov as me at first, then registered him correctly. Although I have not tried just capturing tons of images of every face, so that could affect the learning as well. I (we all) will learn more as we play with it. I know it really had a tough time understanding Isaac Asimov at different ages (young vs. old). So if you time travel, or your robot doesn't see someone for a long time, that might be important to know. Smile

I know for sure that teaching faces based only on a picture of a picture, like I did for DJ Sures and Asimov, will work, but it's not a great method.

I'm looking forward to sharing this in a couple of days.


Incredible! Sarah Connor just might get terminated after all.


This is one of the most exciting user projects I have heard about in quite a while. I am anticipating playing with it with bated breath. Also really excited you are open sourcing it. I am trying to find the time to learn C#, and the way I learned VB6 was by examining other people's apps that did similar things to what I wanted to do. I found that much more effective than tutorials or books that started with "Hello World" and then went through dozens of simple apps that I wasn't interested in.