Microsoft AI overview for developers
Dr. Harry Shum, Brian Trager


This session was a tour of the AI services available in Azure today.

Announcements:

  • QnAMaker General Availability
  • Bot Services 3.0 (100+ new features in Bot Framework – new Bot Framework Emulator)
  • LUIS / QnAMaker command-line tools
  • az-cli includes Bot Services
  • Machine Learning for .NET (ML.NET)

To start, Harry recalled that Cognitive Services launched three years ago at //Build 2015 and has already reached more than one million developers.

After that, we were shown videos and live demos showcasing the new additions to Cognitive Services.

One of those was live translation with the Microsoft Translator app on a smartphone. Brian Trager, who is deaf, spoke in English with Harry Shum, who responded in Chinese.
Microsoft Translator uses an AI trained with Custom Speech, Custom Voice and Custom Translation for a highly accurate, near real-time translation between the two speakers (better and faster than the live transcription used by Microsoft in all sessions at //Build).


Continuing the tour, we were shown several linked demos combining Conversational AI through Bot Services with many other Cognitive Services.

First was a chatbot on an e-commerce website.
The bot used Text Analytics to adapt to the user’s language on the fly.
It used LUIS to recognize intents like "I want to buy a new watch" and reacted accordingly (refreshing the items displayed on the website).
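As a rough sketch of that intent-recognition step, here is how a bot might query the LUIS prediction endpoint and pick the top-scoring intent. The region, app ID, key, intent name and response shape below are placeholders for illustration, not values from the demo.

```python
# Hedged sketch: querying LUIS to recognize an intent such as
# "I want to buy a new watch". All IDs/keys are placeholders.
from urllib.parse import urlencode

LUIS_REGION = "westus"          # assumed region
LUIS_APP_ID = "<your-app-id>"   # placeholder
LUIS_KEY = "<your-key>"         # placeholder

def build_luis_url(utterance: str) -> str:
    """Build the GET URL for the LUIS v2.0 prediction endpoint."""
    query = urlencode({"subscription-key": LUIS_KEY, "q": utterance})
    return (f"https://{LUIS_REGION}.api.cognitive.microsoft.com"
            f"/luis/v2.0/apps/{LUIS_APP_ID}?{query}")

def top_intent(luis_response: dict, threshold: float = 0.5) -> str:
    """Pick the top-scoring intent, falling back to 'None' below threshold."""
    scored = luis_response.get("topScoringIntent", {})
    if scored.get("score", 0.0) >= threshold:
        return scored.get("intent", "None")
    return "None"

# Illustrative response shape (not captured from the session):
sample = {
    "query": "I want to buy a new watch",
    "topScoringIntent": {"intent": "BuyItem", "score": 0.97},
    "entities": [{"entity": "watch", "type": "Product"}],
}
print(top_intent(sample))  # BuyItem
```

The bot would then branch on the returned intent name to refresh the product list.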
The bot then offered to upload an image of a watch and analyze it with Custom Vision to find a similar model on the website.
QnAMaker (Generally Available as of today) was also used to answer questions. The new QnAMaker can find the correct answer for an implicit context based on the previous questions (through the use of metadata).
For example: "What is the return policy?" – "You can get a full refund up to 7 days after the purchase" – "What about after that?" – "You can get store credits for the next 30 days".
This was not possible before.
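A minimal sketch of that metadata-driven follow-up, assuming a QnAMaker v4-style `generateAnswer` call where follow-up QnA pairs are tagged with metadata the bot passes back as a filter. The host, knowledge base ID, key and metadata names are placeholders.

```python
# Hedged sketch: building a QnAMaker generateAnswer request that carries
# the implicit context of a follow-up question via metadata filters.
# Host, KB ID, endpoint key and the "topic" tag are all placeholders.
import json

QNA_HOST = "https://<your-resource>.azurewebsites.net/qnamaker"  # placeholder
KB_ID = "<knowledge-base-id>"       # placeholder
ENDPOINT_KEY = "<endpoint-key>"     # placeholder

def build_generate_answer_request(question, metadata=None):
    """Return (url, headers, body) for a generateAnswer POST."""
    url = f"{QNA_HOST}/knowledgebases/{KB_ID}/generateAnswer"
    headers = {
        "Authorization": f"EndpointKey {ENDPOINT_KEY}",
        "Content-Type": "application/json",
    }
    body = {"question": question, "top": 1}
    if metadata:
        # strictFilters restricts candidate answers to QnA pairs tagged
        # with this metadata -- e.g. the follow-up "What about after
        # that?" stays scoped to the return-policy answer.
        body["strictFilters"] = [{"name": k, "value": v}
                                 for k, v in metadata.items()]
    return url, headers, json.dumps(body)

url, headers, body = build_generate_answer_request(
    "What about after that?", metadata={"topic": "return-policy"})
print(body)
```

The bot would remember the metadata of the previous answer and attach it to the next question, which is how the implicit "after that" resolves to the return policy.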
To wrap up this demo, the bot also booked a visit to the nearest retail store, taking into account store hours, the user’s calendar, road traffic, etc., and added the visit to the user’s calendar.
The bot finally asked for a selfie of the user.

The second demo was another bot, this time running on a kiosk inside a mall.
The same user interacted with it, and the bot recognized the person from the previously taken selfie (using the Face API).
The bot used Text-to-Speech and Speech-to-Text to communicate with the user. It knew the user had a meeting inside one of the stores in the mall, and displayed a personalized map showing the way to that store.
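The selfie-matching step above can be sketched with the Face API's detect-then-identify flow: detect a face in the kiosk camera frame, then identify the resulting face ID against a person group built from earlier selfies. The endpoint, group name, confidence threshold and response below are assumptions for illustration.

```python
# Hedged sketch: matching a kiosk visitor against an earlier selfie with
# the Face API (detect -> identify). Endpoint and group ID are placeholders.

FACE_ENDPOINT = "https://<region>.api.cognitive.microsoft.com/face/v1.0"  # placeholder
PERSON_GROUP_ID = "mall-visitors"   # assumed group name

def build_identify_body(face_ids, person_group_id=PERSON_GROUP_ID):
    """Body for POST {FACE_ENDPOINT}/identify, given detected face IDs."""
    return {"personGroupId": person_group_id,
            "faceIds": face_ids,
            "maxNumOfCandidatesReturned": 1}

def best_match(identify_response, min_confidence=0.6):
    """Return the personId of the most confident candidate, or None."""
    for face in identify_response:
        for cand in sorted(face.get("candidates", []),
                           key=lambda c: c["confidence"], reverse=True):
            if cand["confidence"] >= min_confidence:
                return cand["personId"]
    return None

# Illustrative response shape, not real data:
sample = [{"faceId": "f-1",
           "candidates": [{"personId": "p-42", "confidence": 0.92}]}]
print(best_match(sample))  # p-42
```

Once a person ID comes back with enough confidence, the bot can pull that user's profile (calendar, upcoming meeting) and greet them by name.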

The third and last demo was a website where a store clerk could view all the previously aggregated information about the customer, using Microsoft Dynamics.

Moving on to the new features of the Bot Framework, Harry showcased the ability to load a chat transcript directly into the emulator, avoiding retyping everything to test a dialog.

The new Bot Framework Emulator can also manage LUIS and QnAMaker (through the new command-line tools) for a quicker develop-configure-test cycle.

Then we moved on to Machine Learning and the open-source ONNX format, created by Microsoft and now supported by 15 major companies.

ML.NET, a toolkit for writing Machine Learning in C# that Microsoft uses internally, is now available to all.


To end the session, we were shown the integration of all this tooling into Visual Studio, like creating a whole project just by right-clicking a Custom Vision resource in the Server Explorer tab of Visual Studio.

Useful links: