EZ-Face is the first in what I plan to develop into a suite of supporting applications for EZ-Builder and other robotics applications. EZ-Face performs multiple-face recognition and has an interface for training faces and assigning names. When the application sees faces it recognizes, it draws boxes around them with their assigned names displayed. If a face is detected but not recognized, the box is drawn with no name. The more pictures of a face you train, the easier it is for the application to recognize that face.
This is a standalone application developed in C# under Visual Studio .NET 2013. It requires the .NET Framework 4.5 and runs on Windows 7 and Windows 8.x systems.
This project showcase explains the technology behind the application and highlights development milestones.
Developed in Visual Studio .NET 2013 (you can use the Express versions with the source code)
Designed to work with EZ-Builder but could be integrated into other software or robotic systems
Is a standalone application
Is open source, source code is included
Uses the Emgu CV wrapper for .NET (OpenCV)
Resources: (Things I found helpful in creating the application)
EZ-Builder Telnet interface tutorial (the first part shows how to enable Telnet, which is used to manually test communications to EZ-Builder via TCP/IP): http://www.ez-robot.com/Tutorials/Help.aspx?id=159
If you do not have Telnet installed on your system go to this site: http://technet.microsoft.com/en-us/library/cc771275
EZ-Builder SDK Tutorial 52: http://www.ez-robot.com/Community/Forum/posts.aspx?threadId=4952&page=1
EZ-Builder script for listening to the TCP/IP port for variables: http://www.ez-robot.com/Community/Forum/posts.aspx?threadId=5255
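The scripting interface covered in those resources can be exercised without Telnet, too. Here is a rough sketch (in Python for brevity, since the TCP side is language-agnostic) of how an external program might push a recognized name into EZ-Builder as a variable. It assumes the scripting interface accepts one plain-text EZ-Script command per line on port 6666, as in the Telnet tutorial; the function names are just illustrative:

```python
# Sketch: push a recognized face name to EZ-Builder's TCP scripting
# interface (the same interface the Telnet tutorial exercises).
# Assumption: EZ-Builder accepts one plain-text EZ-Script command per
# line on port 6666 -- adjust host/port to match your settings.
import socket

def build_command(face_name: str) -> bytes:
    """Build an EZ-Script assignment like: $FaceName = "Justin"."""
    safe = face_name.replace('"', '')  # strip quotes that would break the script
    return f'$FaceName = "{safe}"\r\n'.encode('ascii')

def send_face_name(face_name: str, host: str = 'localhost', port: int = 6666) -> None:
    """Open a connection, send the assignment, and close."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(build_command(face_name))
```

With EZ-Builder running and the script started, `send_face_name('Justin')` should set `$FaceName` on the EZ-Builder side.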
DJ Sures, for making EZ-Robot and EZ-Builder so robust
Rich, for his help with EZ-Builder scripting
Sergio, for his emgu cv examples
Basic Usage Direction (after download and install):
1.) Open EZ-Builder and load the included EZ-Face example
2.) Click on the Script start button (this sets up the communications from the EZ-Builder side of things)
3.) Open the EZ-Face application
4.) Refresh your camera list (click the button)
5.) Select your camera (in the drop down list)
6.) Click the "1. Detect and recognize" button
7.) Train at least one face
8.) Change the local address and port number as needed (the local IP address may not be your computer's address - you can enter "localhost" and leave the port set to 6666 unless you changed that setting in EZ-Builder)
9.) Click File and select Save User Settings (to store your changes)
10.) Click Connection (this opens the communication line to EZ-Builder from the EZ-Face app side)
11.) Allow EZ-Face to recognize the face you trained - then with your computer speakers turned on EZ-Builder should speak "Hello (the name of the face you trained)"
12.) If the example works, integrate it into your EZ-Robot applications as you see fit
1.) If, after training several faces, you get false recognitions (faces recognized with the wrong name), train the incorrectly recognized faces with the correct name. After a couple of additional training pictures are stored, the accuracy of the face recognition will improve.
2.) Do not train faces with one camera, then switch to another camera for face recognition - recognition accuracy will drop.
Using Two Cameras:
What I found worked best was to start EZ-Builder, select the camera I wanted, and start the camera feed, then start EZ-Face.
If I reversed the process (even though I was selecting a different camera) I would get a black image in EZ-Builder.
I still have several improvements I want to make before I upload the first public version of the application.
The first public version is ready for release and is posted at the link below. This version has many user improvements that let you store settings, including the HTTP address and port, camera device, logging of recognized faces to a text file (up to 1 MB of data before the file auto-deletes), face variable output to EZ-Builder, face training, and more.
I updated the script; version 3.3.14 has the HTTP server panel (which is not used - you don't need to start it), but it does show you your computer's IP address so you can enter it in EZ-Face. Remember to save your settings under the File menu. I also changed the script so it will only speak for variable values other than "" or NULL.
I updated the EZ-Face application: "localhost" is now the default address, there is a new option for auto-connect, and there are functions to receive commands from EZ-Builder or other third-party applications to stop and start the camera feed within EZ-Face. There is also a new EZ-Builder project with several new scripts to test out the functions. Please go to my site to download the latest version. You will also find a video there that demonstrates the new functions and provides directions for setup and usage.
The latest version will be published here: http://www.j2rscientific.com/software
For support and to report any errors, please use the Contact Us feature at http://www.j2rscientific.com with the subject line "EZ-Face".
I welcome any and all feedback!
Wow this is incredible.... I cannot wait to try this out on my project when you release it
Outstanding job @JustinRatliff
Wow! Thank you very much. It's really great.
One word ... AWESOME!
Great job, and to share this with the community is just fantastic!
Thank You Justin
I'm looking forward to trying it out.
Also, thanks for the acknowledgement. You have no idea how much that means to me!
@jdebay That is a good question on the EB4. The question marks would be for the camera. If Windows detects the new camera as a standard "camera" device then nothing would need to be changed. But I'm going to add that to my development to-do/check list.
On my plate right now is creating different modes for the application to run in. One is standalone - integrate it with anything you want with a dedicated camera (home automation, EZ-Robot, other robot apps, etc.)
The other main mode is for EZ-Builder with a single camera. Because a camera feed can't be used by two applications at the same time, I need to add some smarts to the EZ-Builder script to send communications back and forth with the app via TCP/IP. That way, EZ-Builder would stop its camera usage, open EZ-Face if it's not open, and tell EZ-Face to turn on its camera and look for face(s). EZ-Face would get the face name (if unknown, maybe ask for the name and learn the new face), send the face(s) as a string to EZ-Builder, and wait for EZ-Builder to acknowledge. EZ-Face would then turn off its camera and send confirmation to EZ-Builder, which would acknowledge and turn its camera usage back on to do whatever it needs.
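The hand-off described above could be sketched as a tiny text-command protocol. This is only an illustration in Python; the command names (CAM_ON, CAM_OFF, WHO) are hypothetical and not part of EZ-Face today:

```python
# Sketch of the single-camera hand-off as a text-command protocol.
# Command names are hypothetical; EZ-Builder would send one command
# at a time over TCP and act on the reply.

def ezface_handle(command: str, state: dict) -> str:
    """Process one command from EZ-Builder and return EZ-Face's reply."""
    if command == 'CAM_ON':      # EZ-Builder has released the camera
        state['camera_on'] = True
        return 'ACK CAM_ON'
    if command == 'CAM_OFF':     # EZ-Builder wants the camera back
        state['camera_on'] = False
        return 'ACK CAM_OFF'
    if command == 'WHO':         # ask for the last recognized face
        name = state.get('last_face') or 'unknown'
        return f'FACE {name}'
    return 'ERR unknown command'
```

EZ-Builder's script would drive the sequence: send CAM_ON, poll WHO until a name comes back, then send CAM_OFF and resume its own camera feed.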
The other option for single-camera use I am thinking of is a third mode for static-picture face recognition. In this mode EZ-Builder would look for a face, take a snapshot when one is detected, and tell EZ-Face to look for faces in path/newpic.jpg. EZ-Face would analyze the image and send the face(s) back to EZ-Builder as a string, and EZ-Builder could do whatever. That would eliminate the single-camera issue, but I fear it might end up slower and not as accurate: if EZ-Builder takes a snapshot after you've moved your face, it's a static picture of a blurry or sideways face that EZ-Face can't recognize, so the app would reply with "I don't know". And the process might keep repeating if your usage was "Robot, who is this?"...wait for face recognition...wait on the robot to do something...keep retaking pictures until something happens or you say "Robot, never mind". With a live video feed, I think people more instinctively know they need to point their face at the camera, and that would have a higher success rate.
Then the whole learning-faces thing comes into play, because right now you need to train faces live at a terminal with keyed entries for names. But if your robot does not have an onboard PC and you are not sitting at a terminal, teaching your robot faces would not be ideal. I'm picturing a future function to store unknown faces: if you asked the robot "who is this?" it might reply, "I don't know, storing for future learning"...then at your convenience you could go through the static images and tell the robot who they are. That functionality might lead into live learning where a terminal is not needed for direct keyed entry.
99% of those things will be in future versions. I hope to release the first version by the end of the weekend. I'm hoping once folks get a chance to play with it we can all co-develop it. Even if you don't have coding skills, if you just tell me what you need it to do, I can work on adding those new features.
And you are welcome @Rich - thanks for all your help and always sharing your knowledge with the group!
You could add in an IF to stop the "hello" with no name too if you wanted to...
IF ($FaceName != "")
  Say("Hello " + $FaceName)
ENDIF
EZ-Builder will, with the Web Server/Remote Control control turned on, send a stream of images at its URL, i.e. http://192.168.0.200:80/CameraImage.jpg?c=Camera
I don't know if it's possible, but maybe you could use that URL as a camera for this app, therefore leaving the robot to use its camera as normal with no need to shut it off to detect faces.
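Rich's idea could be prototyped by polling that URL for frames. A minimal sketch, in Python: the address is just the example from the post above, and the JPEG check is only a cheap sanity test, not real validation:

```python
# Sketch: poll EZ-Builder's web server for camera frames instead of
# opening the camera device directly. The URL pattern comes from the
# post above; adjust the address to your EZ-Builder machine.
import urllib.request

def is_jpeg(data: bytes) -> bool:
    """Cheap sanity check: JPEG data starts with FFD8 and ends with FFD9."""
    return data[:2] == b'\xff\xd8' and data[-2:] == b'\xff\xd9'

def grab_frame(url: str) -> bytes:
    """Fetch one frame from EZ-Builder's camera URL."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = resp.read()
    if not is_jpeg(data):
        raise ValueError('response did not look like a JPEG frame')
    return data

# usage (address from the post above):
# frame = grab_frame('http://192.168.0.200:80/CameraImage.jpg?c=Camera')
```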
Wow! Great job @Justin and thank you so much for sharing! It's one thing to recognize a red ball and track faces, but it's another level altogether to recognize a particular face! Of course the robot's camera would have to be at "eye level" for maximum recognition. With a script for "do not recognize immediately", the robot could request the "unknown" face to get closer and align squarely with the "eye". I am positive over time many improvements will be made by yourself and others... Congrats @Justin!