Api.ai Example To Learn A Little Bit From


I have been working with API.AI for a while and jumped back into it yesterday. I want to describe what API.AI is and what it isn't so that there is a clearer picture of it. I am attaching a sample API.AI project that you can use as a starting point if you would like.

First, what is it? API.AI is great at taking speech and breaking it down to give you the intent behind it. For example, you can say something like "Hey Robot, I want you to raise your right arm 19 degrees". The returned text (what would be returned to EZ-Builder) is "[Movement] [RightArm] [up] [19] [deg]".
You can use a script to break down what the robot is being asked to do from the information above. The phrase "yo, raise your right arm 1 degree" would also work and return "[Movement] [RightArm] [up] [1] [deg]" for you to parse and use.
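As a rough illustration (in Python rather than the EZ-Script the actual project uses), pulling those bracketed tokens apart is a one-liner with a regular expression:

```python
import re

def parse_intent(response):
    """Split a bracketed API.AI text response such as
    "[Movement] [RightArm] [up] [19] [deg]" into its tokens."""
    return re.findall(r"\[([^\]]+)\]", response)

print(parse_intent("[Movement] [RightArm] [up] [19] [deg]"))
# ['Movement', 'RightArm', 'up', '19', 'deg']
```

From there your script can switch on the first token (the command) and treat the rest as parameters.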

There are some chatbot features called "SmallTalk". This works like any other chatbot: it takes questions or statements from the user and returns text that you want it to return. This is the simplest form of using API.AI and is probably the easiest part of it, but it is also not the most useful.

There are some prebuilt agents. These agents each use their own API key. Because of this, I don't recommend using them because the plugin only allows one API key, and you will quickly run out of allowable uses. It is far better to build a single customized agent which contains everything that you want your robot to use.

The real use of this tool is to break apart and parameterize language. This allows you to use completely different speech for specific commands in EZ-Builder. Currently, the plugin only sets two variables in EZ-Builder, so you have to pass the parameters back from API.AI in the Text response field, laid out in a format that you can parse.
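As a sketch of what that looks like on the API.AI side: in the legacy API.AI console, the Text response field of an intent can reference the intent's parameters by name with a `$` prefix. Assuming parameters named `value` and `unit` (as in the "user says" templates shown later in this post), a response template along these lines would produce the bracketed string above:

```
[Movement] [RightArm] [up] [$value] [$unit]
```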

In addition, you can add what are called integrations. This is where you can tie into many different services such as Actions on Google, Microsoft Cortana, Amazon Alexa skills, Telegram, Skype, Twitter, Facebook Messenger and many other one-click integrations. There are also SDKs for Android, iOS, Ruby, HTML5, JavaScript, Node.js, Cordova, Unity, C#, C++, Xamarin, Python, PHP, Java, Botkit and Epson Moverio that allow you to write whatever you want in whatever language you want, for the most part. These integrations allow you to run code instead of simply returning the meaning of what was said back to EZ-Builder.

The example here doesn't use integrations, but is designed more to have the information sent back to EZ-Builder for you to do something with.

This is a very scaled down version of the Rafiki client that I had been working on.
You can take this and import or restore it into your api.ai project.

I hope this example helps people see what API.AI can be used for and better understand where it fits in your robot brain.


One more thing that I should mention... API.AI is constantly learning from how questions are asked. It uses machine learning methods to improve its understanding of how questions or statements are phrased. The more examples you provide in your intents, the more it will be able to learn initially, but even with only two items specified in the "user says" section of my robotRaiseRightArm intent, it was able to easily understand the statements of
"Hey Robot, I want you to raise your right arm 19 degrees" and "yo, raise your right arm 1 degree".

These statements are:

"raise right arm @sys.number:value @unit_rotation:unit"
"robot raise right arm @sys.number:value @unit_rotation:unit"
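Once an agent with intents like these exists, any client can query it over the v1 REST endpoint that API.AI exposed at the time. Here is a minimal sketch of building such a request; the access token and session id are placeholders, not real credentials:

```python
from urllib.parse import urlencode

# Sketch of a query against the legacy API.AI v1 REST endpoint.
# Substitute your own agent's client access token from the console.
BASE_URL = "https://api.api.ai/v1/query"

def build_query(text, session_id="ez-builder-demo"):
    """Return the URL and headers for one natural-language query."""
    params = urlencode({
        "v": "20150910",          # API protocol version
        "query": text,
        "lang": "en",
        "sessionId": session_id,
    })
    headers = {"Authorization": "Bearer YOUR_CLIENT_ACCESS_TOKEN"}
    return BASE_URL + "?" + params, headers

url, headers = build_query("raise your right arm 19 degrees")
```

The JSON response carries the matched intent, its action, and the extracted parameters; the plugin surfaces that to EZ-Builder for you.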


Okay, now, what it isn't:
It isn't Google's search engine, although you could call a search engine from the results returned by api.ai.
It isn't a great chatbot. If you want that, I would recommend other options.
It isn't something that lets you avoid doing more work to use the data returned.
It isn't a complete AI that can just be dropped in and used.
It isn't something that requires no learning and effort.
It isn't something that you can just let someone else develop for you.

You will have to take the initiative to learn about it and make it customized for your robot. Not every robot has arms. Moving forward 5 feet is different for every type of robot with different types of sensors. Some wheeled robots have encoders, some don't.

It is a powerful tool for breaking apart speech and then formatting it in a way that you can use. This allows many people to use the same robot with different speech styles and ways of saying things. It keeps you from having to remember the exact phrases that you have programmed to do things. It will make you a much better script writer if you use it.


Oh my! This is fabulous. I've been wanting to figure this out and see how it can be worked into my robot. I've wanted to base much of my robot control on voice command, and the present MS-based voice recognition really does have limits and problems for my needs. This looks like it's the answer to my dreams.

David, you're such an asset and help to this forum. Personally, I really appreciate the time and expert knowledge you're giving. Thanks! Smile


That is a great wrap-up of api.ai; I will surely check out the client you included...
It's also a great starting point for everyone not yet familiar with the api.ai platform.
You did a great job of pointing out what api.ai is and what it is not... I could not agree more on all of your points!

When it comes to returned parameters, we could also ask @DJSures to be so kind as to expose all of them within EZ-Builder. The first version of the plugin only returned speech, and later on he also included the action parameter. I agree that it would make the work in EZ-Builder a lot simpler if we did not have to break down the returned text and the variables were directly accessible!

If you want to use api.ai as a search engine, you could do this in EZ-Builder or using api.ai fulfillment... the same goes for smalltalk! If, for example, a smalltalk action is returned, you could pass the speech to your chatbot within EZ-Builder. I would also guess you could use api.ai fulfillment to handle this, but I have not looked into it yet!

Great intro, let's start digging into this... it's worth it! Smile



This is an example EZ-Builder project that leverages the Bing Speech API, API.AI and Pandora Bot.

It could be cleaner; I just wanted to put out something that people could use to start things off if they wanted to.

I will discuss this project and API.AI on this week's Technology Today episode.


Very cool.

After installing your project, I said "Robot move backwards" when the proper phrase should have been "Robot Reverse". It understood what I wanted and responded.

Very cool. Grin


@CochranRobotics This is great! Thanks for sharing! Grin Grin Grin
It already involves a great deal of coding, and having the speech recognition already triggered and ready to go is a cool thing!

To see how you coded the response parse is great! I still did not get PandoraBot to respond, but I guess I can figure that out somehow... your example should send the speech from Bing to PandoraBot if the returned action is blank, right?

I think it would be kind of cool to have your api.ai client access token so the testing environment will be level for everyone... that should be safe, right? Smile

I already deleted an EZ-Robot project that I posted because I read your thread about the Bing Speech key being included... but for the api.ai key this should not matter, right?

Good starting point; I am really happy to have this. Since you have already worked with api.ai for quite a bit, it is so cool seeing how you manage to piece it all together in EZ-Builder! Yaaaaay! Grin

I was just not getting that the provided example was obviously meant to be run with the Rafiki client, which can be installed from the .zip file uploaded earlier in this thread. It provides some interesting approaches to api.ai robot control and can be tailored to your own needs... just stating this in case someone else might also be confused about it! Smile


The new version of the plugin works well. I will post the example again this afternoon and describe what I am doing and where in a notepad control. I am away from the house for a bit but will get this out this afternoon hopefully.

Here is the gist of the example project.

The init script just pauses the PandoraBot control and the Bing Speech Recognition plugin, along with making sure that the Speech Recognition control is not paused.

In the Speech Recognition control there are a couple of different phrases that will unpause the Bing Speech Recognition control. They are "Hey Robot", "I want to ask you something" and "Hey Spock". Feel free to change these to whatever you need. They all run the same script which causes the Bing Speech Recognition plugin to start listening.

After the user says something, the Bing Speech Recognition plugin pauses the Speech Recognition control, places the converted text in the $BingSpeech variable, calls ControlCommand("API.AI", Send, $BingSpeech) to send that text to the API.AI plugin, and then pauses itself.

The API.AI plugin starts the ResponseParse script, which then decides what to do with the variables returned from API.AI. If it doesn't know what to do with the text, it unpauses PandoraBot and sends the $BingSpeech variable to it. After two seconds, PandoraBot is paused again by the script.

If API.AI returns values that can be used, it calls the appropriate script to execute the actions necessary.
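In Python pseudo-form (the real ResponseParse is an EZ-Script, and the action name below is made up for illustration), that decision boils down to:

```python
def dispatch(action, speech, handlers, fallback):
    """Run the handler registered for the returned action; if the
    action is blank or unknown, hand the raw speech to the fallback
    chatbot (PandoraBot in the example project)."""
    handler = handlers.get(action) if action else None
    return handler(speech) if handler else fallback(speech)

# Hypothetical action name and handlers, for illustration only:
handlers = {"robot.raiseRightArm": lambda s: "raising arm"}
chatbot = lambda s: "chatbot: " + s

print(dispatch("robot.raiseRightArm", "raise your right arm", handlers, chatbot))
print(dispatch("", "how are you today", handlers, chatbot))
```

The first call routes to the arm script; the second, with a blank action, falls through to the chatbot.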

Okay, I will make sure that everything is working as intended this afternoon and publish it.



Example project here. You will have to supply your own API.AI key and Bing Speech key.