Omron B5T Sensor


Hi David!

I see you are still at work on the Omron board!




Yeah, once the cable end I need gets here, I will start to dig into it. I have so much to work on, and this is just one of about 50 things I have to get working. It does need to be done, so I might as well do it sooner rather than later.


Cool David!

If you need help (like some Chinese work), tell me, I will help!



I'm hoping, with DJ's announcement of support for developers and plugins, that lots of folks will start writing support for all kinds of hardware like this sensor. And you can make money at it...


I should have a plugin for this sensor completed in about a week, time permitting. I have C# getting almost all of the data from this sensor and returning it to a dictionary.


The plugin doesn't use an Arduino for communication; it uses a USB/TTL device, and I will post information on the one I am using. It runs at a baud rate of 921,600 bps. The plugin will require that you have an on-board computer. The code will be open source, as I am using some things from an open source project. This plugin uses a Python app to communicate with the Omron, which then passes back JSON. The plugin processes the JSON and provides the variables to EZ-Builder. To keep the payload light, I won't be passing back the video or image.
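Since the helper app streams JSON back to the plugin, here is a minimal sketch of what the receive side might look like, assuming one JSON object per line. In the real setup the stream would be a pyserial `serial.Serial` handle opened at 921,600 bps; a `StringIO` stands in here so the sketch runs on its own, and the field names are illustrative, not the plugin's actual schema.

```python
import io
import json

def read_frames(stream):
    """Yield one dict per JSON line, skipping blank or partial lines."""
    for line in stream:
        line = line.strip()
        if not line:
            continue
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # partial line mid-stream; wait for the next one

# Real usage would be something like: stream = serial.Serial(port, 921600)
# A StringIO stands in so this sketch is self-contained.
demo = io.StringIO('{"gender": "man", "age": 51}\n\n{"num_detections": 1}\n')
frames = list(read_frames(demo))
```

Reading line-delimited JSON keeps the plugin side simple: each frame is parsed independently, and a garbled line (for example, one caught mid-transmission at startup) is just dropped rather than crashing the reader.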

This version doesn't do facial recognition; I will be adding those features later. I will post STL files for a case for the sensor, the Python script for communicating with the Omron, and the plugin for processing the JSON.


What's the advantage of using this? Specifically since the EZ-Builder mobile app has color, motion, face, multi-color and glyph tracking built in?


It is really good at facial recognition, better than what I have seen from OpenCV, AForge, EZ-Builder or RoboRealm. I don't have this piece implemented yet, but Toymaker does, and his example project works wonderfully even in different lighting conditions (within reason). The sensor can store up to 500 faces for recognition.

In addition to this, it does:
Human body detection - number of human bodies within view of the camera.
Age estimation - does a really good job of identifying the approximate age of the people in view of the camera.
Gender estimation - is the person male, female or unknown?
Hand detection - can differentiate between how many fingers you are holding up.
Expression estimation - is the person happy, sad, upset, neutral or whatever.
Eye estimation - location of the eyes and how open they are.
Gaze estimation - where is the person looking?

Any or all of these can be enabled or disabled. This helps to make the robot smarter. Tracking would be done by another camera while this one handles the person recognition. This reduces the load on the computer running EZ-Builder. Also, it allows EZ-Builder to still do color tracking and whatever through other cameras.

All of this is done on the sensor, so very little processing load is handled by the lightweight tablets that people put on their robots.

Here is a sample of the data returned.

Detection execution:

num_detections: 0

detect_size: 106
reliability: 632
num_detections: 1
coord_y: 379
coord_x: 245

left_and_right_angle: -2
up_and_down_angle: 0

gender: man
reliability: 1000

age: 51
reliability: 285

face_inclination_angle: -1
vertical_angle: 13
reliability: 43
left_and_right_direction: -6

neg_pos_degree: -57
top_score: 43
expression: expressionless

eyes_head_right: 337
eyes_head_left: 399

num_detections: 0
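As a rough illustration of how a plugin could turn a dump like the one above into variables, here is a sketch that splits the key/value lines into blank-line-separated groups (note that keys like reliability repeat across groups, so a single flat dictionary would lose data). The sample text is trimmed from the output above, and none of this is the plugin's actual code.

```python
# Sample trimmed from the sensor output shown above.
sample = """\
detect_size: 106
reliability: 632

gender: man
reliability: 1000

age: 51
reliability: 285
"""

def parse_blocks(text):
    """Return a list of dicts, one per blank-line-separated block."""
    blocks, current = [], {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            if current:
                blocks.append(current)
                current = {}
            continue
        key, _, value = line.partition(":")
        value = value.strip()
        # Numeric values become ints; everything else stays a string.
        current[key.strip()] = int(value) if value.lstrip("-").isdigit() else value
    if current:
        blocks.append(current)
    return blocks

groups = parse_blocks(sample)
```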


David, really looking forward to the plugin.


*video removed*