Auditing your Windows JavaScript UWP application in production

Since the launch of Windows 8, I've been writing native Windows applications using HTML and JavaScript, for the Windows Store and for enterprise applications. Believe it or not, I'm doing it full time and, so far, I'm really enjoying it. I know many people will disagree, especially in the Windows ecosystem, but I really enjoy the productivity and expressiveness you get by writing rich client applications with HTML. And the beauty of Windows 8, and now Windows 10, is that you have full access from JavaScript to the Windows APIs: rich file access, sensors, Cortana integration, notifications, etc.

Even better, with Windows 10 and the Universal Windows Platform (or UWP), you can write one single application using HTML and JavaScript that will run on Windows desktops, laptops, tablets, phones, IoT devices, Xbox, HoloLens, and probably much more.

If you are familiar with the web ecosystem, you can also very easily reuse most or all of your skills and application code to build awesome websites, or target other devices: GitHub Electron for a Mac OS X version, or Cordova or React Native to reach iOS or Android. It really is an exciting time for web developers.

As exciting as those experiences have been, a few things are still frustrating. No matter what technology you use for development, one of them is the ability to detect and troubleshoot problems once your application has been released into the wild. Visual Studio is really great and provides many tools to help debug and audit your app on your dev box, but for troubleshooting applications in production, you're left in the dark.

Luckily, when using web technologies, you can rely on wonderful tools like Vorlon.js. This tool is really great because it can target any device without any prerequisite: there is nothing to install on the client to get great diagnostics from your app.
I previously explained how to use Vorlon.js in production, and how to use it with your JavaScript UWP; follow those posts if you want to set the stage. In this post, we will see an overview of what you can achieve with Vorlon.js for a Windows UWP app in production, along with some guidance. Nothing here is specific to a JavaScript UWP: you can use the same approach to debug a WebView in a C# or C++ application.
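
As a quick reminder from those posts: once a Vorlon.js server is running (here I assume the default setup, with the server listening locally on port 1337), wiring up the app is a single script tag in your page:

<!-- assumes a Vorlon.js server running locally on the default port 1337 -->
<script src="http://localhost:1337/vorlon.js"></script>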

Inspect the DOM

Vorlon.js is designed from the ground up with extensibility in mind. It features many plugins, and you can very easily write your own (more on that below). One of the most useful for frontend applications is the DOM Explorer. It's very much like the tools you find in the developer tools of your favorite browser.
When troubleshooting issues in production, it is very useful to see what happened in the DOM, what styles are applied, the current size of the user's screen, etc.

[Screenshot: the DOM Explorer plugin]

Watch the console

Another core plugin of Vorlon is the console. It shows all console entries in your dashboard. If you put meaningful console logging into your app, this plugin is insanely useful, because you see in (almost) real time what the user is doing, which is key to reproducing an issue. With some luck (and coding best practices), you may also have errors popping directly into the console with their stack trace (if the Promise god is with you). The console also features an "immediate window" where you can issue simple JavaScript commands.
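
For example, a few contextual logs at key points of the app go a long way once the console plugin relays them to the dashboard. A sketch (all names below are illustrative):

// illustrative sketch: log user actions with enough context to replay the scenario
function openMovieDetails(movieId) {
    console.log("[nav] opening movie details for " + movieId);
    return loadMovie(movieId).then(function (movie) {
        console.log("[data] movie loaded: " + movie.title);
    }, function (err) {
        // errors logged here surface in the Vorlon dashboard with their message
        console.error("[data] loading movie failed: " + (err && err.message));
    });
}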

[Screenshot: the console plugin]

Check object values

You also get access to global JavaScript variables using the Object Explorer. This plugin allows you to watch values from your objects and explore object graphs. It's always interesting to keep an eye on your application state when trying to troubleshoot your application.

[Screenshot: the Object Explorer plugin]

Monitoring CPU, memory, disk, …

You sometimes have to face more insidious bugs, like memory leaks, that usually require advanced debugging tools. In the comfort of your dev box, you can use IDEs like Visual Studio to diagnose those bugs with specific tools. Modern browsers usually have similar tools, but they do not expose APIs for things like memory or CPU that you could query remotely. Fortunately, when you are targeting Windows UWP, you have access to all the modern Windows APIs, and especially the diagnostics ones.

That's why the Vorlon team implemented a plugin specific to Windows UWP. It uses WinRT APIs to gather diagnostics and various metadata from the client. For now, this plugin is not released, and you will have to look at the dev branch to try it out.

[Screenshot: the UWP plugin]

From the application metadata, you get information such as the application's name, but also its version, current language, and device family.

[Screenshot: UWP application metadata]

You also get a glimpse of the network type and status, and of the device's battery level.

It enables you to monitor CPU, memory, and disk I/O. This is especially useful to track memory leaks or excessive resource consumption.
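
If you are curious about what this kind of plugin relies on, process-level data is exposed by WinRT diagnostics APIs that any JavaScript UWP can query directly. A minimal sketch (assuming the Windows.System.Diagnostics namespace is available on the target device; member names follow the WinRT JavaScript projection):

// sketch: reading process-level diagnostics from a JavaScript UWP
var info = Windows.System.Diagnostics.ProcessDiagnosticInfo.getForCurrentProcess();

var memory = info.memoryUsage.getReport();
console.log("working set (bytes): " + memory.workingSetSizeInBytes);

var cpu = info.cpuUsage.getReport();
console.log("cpu time (kernel/user): " + cpu.kernelTime + " / " + cpu.userTime);

var disk = info.diskUsage.getReport();
console.log("disk bytes read/written: " + disk.bytesReadCount + " / " + disk.bytesWrittenCount);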

[Screenshot: UWP memory monitoring]

And much more…

Vorlon has many more plugins that will help you diagnose your app: monitor XHR calls, explore resources like localStorage, check for accessibility and best practices, and so on. We cannot cover them all in one post, but I hope you get a decent idea of the many possibilities it unlocks to help you improve your applications, both during development and, above all, in your production environment. It's especially useful for mobile applications such as Windows UWP apps.

Writing a plugin for Vorlon.js is really easy. You implement one JavaScript class for the client, and another to display the results in the dashboard. Writing a simple plugin can sometimes save you a lot of time, because it helps you analyze problems when and where they occur. If you are proud of your plugin, submit it back to Vorlon.js!
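
To give you an idea, the client side of a plugin boils down to something like the sketch below. Treat it as a sketch only: the base class and member names follow the Vorlon.js plugin documentation at the time of writing, so double-check them against the dev branch.

// hedged sketch of a minimal client-side plugin
function MyMetricsPlugin() {
    // the string is the plugin identifier (illustrative)
    Vorlon.ClientPlugin.call(this, "myMetrics");
}
MyMetricsPlugin.prototype = Object.create(Vorlon.ClientPlugin.prototype);

// called once the client is connected to the dashboard
MyMetricsPlugin.prototype.startClientSide = function () {
    var self = this;
    setInterval(function () {
        // push whatever data helps your diagnostics to the dashboard side
        self.sendToDashboard({ when: Date.now(), domNodes: document.getElementsByTagName("*").length });
    }, 5000);
};

// messages sent by the dashboard counterpart of the plugin arrive here
MyMetricsPlugin.prototype.onRealtimeMessageReceivedFromDashboardSide = function (message) {
    console.log("dashboard says", message);
};

Vorlon.Core.RegisterClientPlugin(new MyMetricsPlugin());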


Exploring Microsoft Speech APIs

This article introduces the Speech APIs, one of the services updated in Windows 10 and powered by Project Oxford. It is divided into several parts:

  • I. Speech APIs features
  • II. Project Oxford Speech APIs
  • III. Windows 10 Speech APIs
  • IV. Demo

I. Speech APIs features

These APIs provide two major features:

  • Speech Recognition: Speech To Text (STT)

It converts spoken audio to text. The APIs can recognize audio coming from the microphone in real time, or from an audio file.

  • Speech Synthesis: Text To Speech (TTS)

It converts text to spoken audio when applications need to talk back to their users.

[Diagram: Microsoft Speech Platform]

The Speech APIs are included in the Windows 10 libraries, and also provided by the Project Oxford services, which require an Azure subscription.

II. Project Oxford Speech APIs 

These APIs are in beta. They work by sending data to Microsoft servers in the cloud, so using them requires an Azure account. They also offer an Intent Recognition feature, which can convert spoken audio to an intent.

To use these APIs, follow the steps below:

  1. Using an Azure account, go to the Marketplace to purchase the Speech APIs service (which is free 😉), then retrieve the primary or secondary key. If you are using an Azure DreamSpark subscription, don't be surprised if you don't find this service: unfortunately, this type of account does not give access to the Oxford services.
  2. Download the Project Oxford Speech SDK from here. If you are targeting a platform other than Windows, have a look here; you will find what you are looking for.

  • Speech To Text (STT):

The Oxford version of the Speech APIs offers two choices for doing STT:

  1. REST API
  2. Client library

When using the REST API, we only get one recognition result back, at the end of the session; with the client library, we also get partial results before the final recognition.

Setting up speech recognition begins with the SpeechRecognitionServiceFactory. Using this factory, we can create an object that makes recognition requests to the Speech Recognition Service. The factory can create two types of objects:

  1. A Data Recognition Client: used for speech recognition from data (for example, from an audio file). The data is broken up into buffers, and each buffer is sent to the Speech Recognition Service.
  2. A Microphone Recognition Client: used for speech recognition from the microphone. The microphone is turned on, and data is sent to the Speech Recognition Service.

When creating a client from the factory, it can be configured in one of two modes:

  1. In ShortPhrase mode, an utterance may be up to 15 seconds long. As data is sent to the server, the client receives multiple partial results and one final result with multiple N-best choices.
  2. In LongDictation mode, an utterance may be up to 2 minutes long. As data is sent to the server, the client receives multiple partial results and multiple final results.

The client can also be configured for one of several languages:

  • American English: "en-us"
  • British English: "en-gb"
  • German: "de-de"
  • Spanish: "es-es"
  • French: "fr-fr"
  • Italian: "it-it"
  • Mandarin: "zh-cn"

Now, time to code 😀. You can implement the code below in a WPF app.


string stt_primaryOrSecondaryKey = ConfigurationManager.AppSettings["primaryKey"];

// We have 2 choices: LongDictation or ShortPhrase
SpeechRecognitionMode stt_recoMode = SpeechRecognitionMode.LongDictation;

// For speech recognition from the microphone
MicrophoneRecognitionClient stt_micClient =
    SpeechRecognitionServiceFactory.CreateMicrophoneClient(stt_recoMode, "fr-fr", stt_primaryOrSecondaryKey);

// For speech recognition from data, like a WAV file
DataRecognitionClient stt_dataClient =
    SpeechRecognitionServiceFactory.CreateDataClient(stt_recoMode, "fr-fr", stt_primaryOrSecondaryKey);

Then we must subscribe to some events to get the recognition results. The Microphone Recognition Client and the Data Recognition Client expose the same events, as follows:

  • OnConversationError: fired when a conversation error occurs.
  • OnIntent: fired when speech recognition has finished, the recognized text has been parsed with LUIS for intent and entities, and the structured JSON result is available.
  • OnMicrophoneStatus: fired when the microphone recording status has changed.
  • OnPartialResponseReceived: fired when a partial response is received.
  • OnResponseReceived: fired when a response is received.

Inside the event handlers, we can do whatever we want: display the result in a TextBox, for example, and more…


// Event handlers for speech recognition results
stt_micClient.OnResponseReceived += OnResponseReceivedHandler;
stt_micClient.OnPartialResponseReceived += OnPartialResponseReceivedHandler;
stt_micClient.OnConversationError += OnConversationErrorHandler;
stt_micClient.OnMicrophoneStatus += OnMicrophoneStatus;

// Data client events, e.g. for an audio file
stt_dataClient.OnResponseReceived += OnResponseReceivedHandler;
stt_dataClient.OnPartialResponseReceived += OnPartialResponseReceivedHandler;
stt_dataClient.OnConversationError += OnConversationErrorHandler;

Now, how do we start or stop the speech recognition? It's simple: a single method call.


// Turn on the microphone and stream audio to the Speech Recognition Service
stt_micClient.StartMicAndRecognition();

// Turn off the microphone and stop the recognition
stt_micClient.EndMicAndRecognition();

To convert an audio file to text, we read the file into buffers and send them to the server for recognition, as shown below:


if (!string.IsNullOrWhiteSpace(filename))
{
    using (FileStream fileStream = new FileStream(filename, FileMode.Open, FileAccess.Read))
    {
        int bytesRead = 0;
        byte[] buffer = new byte[1024];

        try
        {
            do
            {
                bytesRead = fileStream.Read(buffer, 0, buffer.Length);
                // Send the audio data to the cloud service.
                stt_dataClient.SendAudio(buffer, bytesRead);
            } while (bytesRead > 0);
        }
        finally
        {
            // Signal the end of the audio stream to get the final result.
            stt_dataClient.EndAudio();
        }
    }
}

  • Text To Speech (TTS)

The TTS feature of Project Oxford can only be used through the REST API, and we have a complete example here.

The end-point to access the service is: https://speech.platform.bing.com/synthesize

The API uses HTTP POST to send the synthesized audio back to the client; the audio returned for a given request will not exceed 15 seconds.

For any question about using this API, please refer to the TTS through REST API documentation.
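
Since it is plain HTTP, the call is easy to sketch from any language. Below is a hedged JavaScript example, in keeping with the rest of this blog: the header names and voice name are those documented for the API at the time of writing, token acquisition is omitted (accessToken is assumed to have been obtained with your subscription key beforehand), and additional identification headers may be required.

// hedged sketch: requesting synthesized audio from the TTS endpoint
var ssml = "<speak version='1.0' xml:lang='en-US'>" +
    "<voice xml:lang='en-US' xml:gender='Female' name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'>" +
    "Hello world" +
    "</voice></speak>";

fetch("https://speech.platform.bing.com/synthesize", {
    method: "POST",
    headers: {
        "Content-Type": "application/ssml+xml",
        "X-Microsoft-OutputFormat": "riff-16khz-16bit-mono-pcm",
        "Authorization": "Bearer " + accessToken // assumed to be acquired beforehand
    },
    body: ssml
}).then(function (response) {
    // the response body is the synthesized audio (15 seconds maximum)
    return response.arrayBuffer();
});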

III. Windows 10 Speech APIs

The Windows 10 Speech APIs support all Windows 10-based devices, including IoT hardware, phones, tablets, and PCs.

The Speech APIs in Windows 10 live under these two namespaces:

  • Windows.Media.SpeechRecognition
  • Windows.Media.SpeechSynthesis

Requirements

  1. Windows 10
  2. Visual Studio 2015
  3. Make sure the Windows Universal App Development Tools are installed in VS 2015.

First of all, we have to create a Windows 10 Universal application project in Visual Studio: in the New Project dialog box, click Visual C# > Windows > Windows Universal > Blank App (Windows Universal).

With Windows 10, applications don't have permission to use the microphone by default, so you must first change the settings of the universal application as follows:

Double-click the Package.appxmanifest file > Capabilities > Microphone > select the check box.
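
Checking the box adds the corresponding device capability to the manifest XML, which should end up looking like this:

<Capabilities>
    <DeviceCapability Name="microphone" />
</Capabilities>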

Note: the Windows 10 Speech APIs use the languages installed in the operating system.

  • Speech To Text (STT)

The STT feature of the Windows 10 APIs works in online mode; if we want to make it available in offline mode, we have to provide the necessary grammars manually.

Making this feature work takes three steps:

  • Create a SpeechRecognizer object,
  • Create another object of a SpeechRecognitionConstraint type and add it to the SpeechRecognizer object already created,
  • Compile the constraints.

SpeechRecognizer supports two types of recognition sessions:

  1. Continuous recognition sessions, for prolonged audio input. A continuous session must either be ended explicitly, or it times out automatically after a configurable period of silence (20 seconds by default).
  2. Speech recognition sessions, for recognizing a short phrase. The session is terminated and the recognition results returned when the recognizer detects a pause.

As shown in the code below:


SpeechRecognizer speechRecognizer = new SpeechRecognizer();
// Here we choose a simple dictation constraint scenario
var dictationConstraint = new SpeechRecognitionTopicConstraint(SpeechRecognitionScenario.Dictation, "dictation");
speechRecognizer.Constraints.Add(dictationConstraint);
SpeechRecognitionCompilationResult result = await speechRecognizer.CompileConstraintsAsync();

A continuous recognition session can be started by calling the SpeechRecognizer.ContinuousRecognitionSession.StartAsync() method, and stopped by calling speechRecognizer.ContinuousRecognitionSession.StopAsync(). The SpeechRecognizer.ContinuousRecognitionSession object provides two events:

  • Completed: occurs when a continuous recognition session ends.
  • ResultGenerated: occurs when the speech recognizer returns a result from a continuous recognition session.

Another event is tied to the speechRecognizer object itself: HypothesisGenerated, which occurs when a recognition result fragment is returned by the speech recognizer.
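
Everything here is WinRT, so if you are following along in JavaScript rather than C#, the same members are projected in lowercase. A hedged sketch of wiring up these events (the handler bodies are illustrative):

// JavaScript projection of the same events (names in lowercase)
var session = speechRecognizer.continuousRecognitionSession;
session.onresultgenerated = function (args) {
    console.log("final fragment: " + args.result.text);
};
session.oncompleted = function (args) {
    console.log("session ended, status: " + args.status);
};
speechRecognizer.onhypothesisgenerated = function (args) {
    console.log("partial: " + args.hypothesis.text);
};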

The code below shows how to start the recognition:


public async void StartRecognition()
{
    // The recognizer can only start listening in a continuous fashion if it is currently idle.
    // This prevents an exception from occurring.
    if (speechRecognizer.State == SpeechRecognizerState.Idle)
    {
        try
        {
            await speechRecognizer.ContinuousRecognitionSession.StartAsync();
        }
        catch (Exception ex)
        {
            var messageDialog = new Windows.UI.Popups.MessageDialog(ex.Message, "Exception");
            await messageDialog.ShowAsync();
        }
    }
}

To stop the recognition:


public async void StopRecognition()
{
    if (speechRecognizer.State != SpeechRecognizerState.Idle)
    {
        try
        {
            await speechRecognizer.ContinuousRecognitionSession.StopAsync();

            TXB_SpeechToText.Text = dictatedTextBuilder.ToString();
        }
        catch (Exception exception)
        {
            var messageDialog = new Windows.UI.Popups.MessageDialog(exception.Message, "Exception");
            await messageDialog.ShowAsync();
        }
    }
}

  • Text To Speech (TTS)

This feature is available in both offline and online mode. To make it work, we create a SpeechSynthesizer object, set the speech synthesizer engine (voice), and generate a stream with the speechSynthesizer.SynthesizeTextToStreamAsync method, passing the text we want to read as a parameter.

To play the stream, we use a MediaElement object, as shown in the code below:


SpeechSynthesizer speechSynthesizer = new SpeechSynthesizer();
speechSynthesizer.Voice = SpeechSynthesizer.DefaultVoice;
// Init the media element which will play the text-to-speech stream
MediaElement mediaElement = new MediaElement();
// We have to add the mediaElement to the Grid, otherwise it won't work
LayoutRoot.Children.Add(mediaElement);

var stream = await speechSynthesizer.SynthesizeTextToStreamAsync("Hello World!");
mediaElement.SetSource(stream, stream.ContentType);
mediaElement.Play();

Managing voice commands using the STT and TTS features

We can make applications implementing these APIs more interactive by issuing commands by voice. Once a command is executed, the app confirms it using the TTS feature. To do that, we can use the STT events, as shown in the code below:


private async void ContinuousRecognitionSession_ResultGenerated(SpeechContinuousRecognitionSession sender, SpeechContinuousRecognitionResultGeneratedEventArgs args)
{
    // We can filter the generated text using the recognition confidence level (Low, Medium, High)
    if (args.Result.Confidence == SpeechRecognitionConfidence.Medium || args.Result.Confidence == SpeechRecognitionConfidence.High)
    {
        // The key word that activates a command
        // e.g. the user says "text red": the key word is "text" and the command is "red"
        string command = "text";

        if (args.Result.Text.ToLower().Contains(command))
        {
            string result = args.Result.Text.ToLower();
            string value = result.Substring(result.IndexOf(command) + command.Length + 1).ToLower();
            // The generated text may end with a period
            value = value.Replace(".", "");
            switch (value)
            {
                case "in line": case "line": case "in-line":
                    dictatedTextBuilder.AppendFormat("{0}", Environment.NewLine);
                    await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                    {
                        ReadSpecialCommandToUser("Carriage return command is activated");
                    });
                    break;
                case "blue":
                    await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                    {
                        TXB_SpeechToText.Foreground = new SolidColorBrush(Colors.Blue);
                        ReadSpecialCommandToUser("Blue color command is activated");
                    });
                    break;
                case "red":
                    await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                    {
                        TXB_SpeechToText.Foreground = new SolidColorBrush(Colors.Red);
                        ReadSpecialCommandToUser("Red color command is activated");
                    });
                    break;
                case "green":
                    await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                    {
                        TXB_SpeechToText.Foreground = new SolidColorBrush(Colors.Green);
                        ReadSpecialCommandToUser("Green color command is activated");
                    });
                    break;
                case "black":
                    await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
                    {
                        TXB_SpeechToText.Foreground = new SolidColorBrush(Colors.Black);
                        ReadSpecialCommandToUser("Black color command is activated");
                    });
                    break;
            }
        }
        else
        {
            dictatedTextBuilder.Append(args.Result.Text + " ");
        }

        await dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
        {
            TXB_SpeechToText.Text = dictatedTextBuilder.ToString();
        });
    }
}

private async void ReadSpecialCommandToUser(string text)
{
    if (!string.IsNullOrWhiteSpace(text))
    {
        using (SpeechSynthesizer speech = new SpeechSynthesizer())
        {
            speech.Voice = SpeechSynthesizer.AllVoices.FirstOrDefault(item => item.Language.Equals(language.LanguageTag));
            SpeechSynthesisStream stream = await speech.SynthesizeTextToStreamAsync(text);

            mediaElement.SetSource(stream, stream.ContentType);
            mediaElement.Play();
        }
    }
}


IV. Demo

The video below shows how to edit text by voice; the app uses the Windows 10 Speech APIs:

Going further

Speech APIs – Universal Windows Platform:

Speech APIs – Project Oxford: https://github.com/Microsoft/ProjectOxford-ClientSDK/tree/master/Speech

Project Oxford: https://www.projectoxford.ai

Microsoft's Windows Store for Business is finally here!

After several months of waiting, the "Business Store" is finally available, free of charge, with a few prerequisites:

  • Have an Azure Active Directory.
  • Client machines running Windows 10 only (Desktop/Phone).
  • (A great deal of patience for the deployment…)

(For clients running Windows 8 and Windows 8.1, or those without an Azure AD, we have a solution for you!)

How to sign up?

The first step takes place at https://businessstore.microsoft.com, where the AD's global administrator signs in with the organization account.

The store is created immediately, but deploying it takes time.

The administrator can grant other AD users the rights to manage the store (directly in the store's back office) for purchasing, license assignment, and publishing.

[Screenshot: signing in to the Business Store]

Consumer applications:

After the "purchase" (for now, apparently, we can only acquire free applications), the license can be assigned globally (at the organization level), per group, or per user. Deployment takes several hours (more than 12).

[Screenshot: purchasing an application]

LOB applications:

The process is long, as it unfolds in several acts:

  • First act:
    • The administrator invites an LOB publisher via an email address, so that the publisher can assign one or more applications to the organization. (LOB publisher = a simple developer account with access to the Windows Dev Center https://dev.windows.com)

[Screenshot: inviting an LOB publisher]

  • Second act:
    • The publisher publishes or updates an application, specifying in the "pricing and availability" section which business store to target (the same LOB app can be distributed to several Business Stores).
  • Third act:
    • Once the application has been validated by Microsoft (a step that takes anywhere from a few hours to a few days), it becomes available in the Business Store back office. The administrator can then add it to the catalog, just like a consumer application (with the corresponding rights, and the deployment wait, which also takes several hours).

Configuring the client machine (Desktop/Phone):

The simplest step happens on the client machine: just sign in with (or add, if it's not already there) the user account attached to the AD, directly in the Store app. A new tab will appear listing the assigned applications, which the user can then install.

[Screenshot: the Store app on the client]

Happy deploying!

#UWPHTML – Building Windows 10 web applications

Many people are still unaware of it, but since Windows 8 (and 8.1 for Windows Phone), you can build native Windows applications using HTML and JavaScript. Yes, you read that correctly: NATIVE applications. Those applications do NOT run through a webview, but in a native HTML/JavaScript application shell called "wwahost". They are now called "Windows web applications".

Windows 10 introduced a few enhancements to Windows web apps. They run in a different security context and on the Microsoft Edge engine. There is also something called "project Westminster". Put simply, Westminster is "hosted applications": applications whose content is hosted on the web but which still have access to Windows APIs. You can find more about hosted apps on our blog, or obviously on Microsoft's blog.

Windows web apps are a full part of the new concept of Universal Windows Platform applications. It means you can build applications using HTML/JavaScript for Windows desktop and tablet, but also for Windows 10 Mobile, Xbox, Windows IoT, and the soon-to-come HoloLens.

This post is the first of a series in which we will talk about various aspects of universal Windows web apps using HTML and JavaScript. We will illustrate the different topics with a real application called "Bring the popcorn", a remote controller for a Kodi or XBMC media server.

This application is open source and available in the Windows Store. It uses WinJS and WinJSContrib to provide a fast and fluid experience, using the latest features of the Edge and Chakra engines (or at least those made available in web apps…).

Hope you will enjoy this series!

Writing Windows 10 App Services in JavaScript

What is an App Service?

Windows 10 introduces a bunch of new ways for applications to communicate with each other. One of them is "App Services": a request/response model in which one app can call a service located within another app. App Services enable communication between apps, but also with the system: if you want to implement interactive scenarios with Cortana, you will have to implement an App Service to provide data to her.

If you're the kind of person who prefers code to blabla, you can head directly to the GitHub repository with sample applications.

App Services use the same infrastructure as background tasks, and most of the logic and implementation details still apply. It means that when your service is called, you don't have the whole application running, only your service. It also means that your application does not communicate directly with your App Service: for example, it does not get notified when your service is called or terminated.

In all the resources I could find (Build sessions, Channel 9 videos, samples, …), App Services are implemented in C#. Those resources are really helpful (especially this one on Channel 9), but if (like me) you write apps in HTML and JavaScript, chances are you would rather write those services in JavaScript and share business code with the rest of your application. Porting the C# resources to JavaScript is actually very easy. In this post, we will dive into implementing an App Service in JavaScript, based on a C# sample from Microsoft Virtual Academy.

Show me some code!

In a Windows web application (a Windows application written in HTML and JavaScript), a background task, and therefore an App Service, should be thought of as a special kind of Web Worker (unfortunately without postMessage). It's a standalone JavaScript file that gets loaded and run independently by the system.

The first step in implementing your App Service is to create this file. As with Web Workers, you can use "importScripts" to reference any code you want to share between your app and the service. Be aware that, as with Web Workers, there is no "window" or "window.document" object inside your background task or App Service. The global context points to a completely different beast, and there is no DOM.
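
For instance, the top of the service file can pull in code shared with the rest of the app (the paths below are illustrative):

// at the top of js/appservice.js: reference shared business code
// (paths are illustrative)
importScripts("/js/shared/config.js");
importScripts("/js/shared/businesslogic.js");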

Inside your task or service, you access the current instance of the background task through WinRT APIs, and take a deferral on it to control the lifecycle of your service. As with background tasks, your service can be canceled by the system if it needs to recover memory, or if the battery is running low on the device. Your task instance provides a "canceled" event that is raised in such cases.

A minimalistic background task/App Service would look like this:

var backgroundTaskInstance = Windows.UI.WebUI.WebUIBackgroundTaskInstance.current;
var triggerDetails = backgroundTaskInstance.triggerDetails;
var bgtaskDeferral = backgroundTaskInstance.getDeferral();

function endBgTask() {
    backgroundTaskInstance.succeeded = true;
    bgtaskDeferral.complete();
    // close the task's global context (like in a Web Worker)
    close();
}

backgroundTaskInstance.addEventListener("canceled", function onCanceled(cancelEventArg) {
    return endBgTask();
});

Now we must declare this file as an App Service. To do so, we add an extension to our application in its manifest, pointing to our JavaScript file:

<Applications>
    <Application Id="App" StartPage="default.html">
        ...
        <Extensions>
            <uap:Extension Category="windows.appService" StartPage="js/appservice.js">
                <uap:AppService Name="MyJavascriptAppService"/>
            </uap:Extension>
        </Extensions>
    </Application>
</Applications>

As you can see, we provide the path to our JavaScript file and give the App Service a name (MyJavascriptAppService).

Now we must implement the service logic. To do that, we check the trigger details of our background task and register for the request-received event. When the event is raised, we receive an App Service request. This request contains a message (with the request arguments) and a sendResponseAsync method to reply to the caller. On both sides, the values in the request and in the response are carried by a ValueSet object.

//check that the app service called is the one defined in the manifest. You can host
//multiple AppServices with the same JavaScript files by adding multiple entries in the application manifest
if (triggerDetails && triggerDetails.name == 'MyJavascriptAppService') {
    triggerDetails.appServiceConnection.onrequestreceived = function (args) {
        if (args.detail && args.detail.length) {
            var appservicecall = args.detail[0];
            //request arguments are available in appservicecall.request.message
            var returnMessage = new Windows.Foundation.Collections.ValueSet();
            returnMessage.insert("Result", 42);
            appservicecall.request.sendResponseAsync(returnMessage);
        }
    }        
}

Calling your App Service

The app calling your service can be any app. If you want to restrict access, you will have to implement your own security mechanism. As you may have understood, the caller and the callee don't have to be written in the same language: you can call a service written in C++ from a JavaScript app. All data passes through Windows APIs.

Calling the App Service requires some arguments: the caller must provide the package family name (PFN) of the target application and the name of the App Service (as declared in the target's app manifest). If you don't know your PFN, you can get it through WinRT APIs by calling "Windows.ApplicationModel.Package.current.id.familyName" in your service application.

Using the PFN and service name, you first open a connection to your App Service, and register for the "serviceclosed" event to be notified if your service terminates.

function getServiceConnection(){
    var connection = new Windows.ApplicationModel.AppService.AppServiceConnection();
    connection.appServiceName = "MyJavascriptAppService";
    connection.packageFamilyName = "...your PFN here...";

    return connection.openAsync().then(function(connectionStatus){
        if (connectionStatus == Windows.ApplicationModel.AppService.AppServiceConnectionStatus.success) {
            connection.onserviceclosed = serviceClosed;
            return connection;
        }
        else {
            return WinJS.Promise.wrapError({ message: 'service not available' });
        }
    });
}

Once you get a valid connection, you can send requests to the service:

function callService(){
    return getServiceConnection().then(function (connection) {
        var message = new Windows.Foundation.Collections.ValueSet();
        message.insert("Command", "CalcSum");
        message.insert("Value1", 8);
        message.insert("Value2", 42);

        return connection.sendMessageAsync(message).then(function (response) {
            if (response.status === Windows.ApplicationModel.AppService.AppServiceResponseStatus.success) {
                document.getElementById('result').innerHTML = 'calculated ' + response.message.Result;
                return response.message;
            }
        });
    });
}

And voilà! You're ready to go. A picture is worth a thousand words, so I put a sample with the service and caller apps on GitHub for you.

Debugging your service

If you grab the sample, you can see how easy it is to debug your service. If you configure the solution to run both the caller and the callee in debug, you can set breakpoints in your App Service. If you don't want to run the full service app, you can also edit the properties of the project hosting the service: in the debugging section, set "Launch Application" to false. In that case, when you debug both apps, you will only see the caller application start, but your breakpoints in the App Service will be hit appropriately.

Porting Windows API calls from C# to JavaScript

Windows 10 is still in preview and, as of this writing, a lot of the available samples are written in C#.
The fact that a sample is not available in JavaScript does not mean that the feature is not available in JavaScript. The only things that may not be available are those tied to XAML controls, like the new InkCanvas, the RelativePanel, and so on. All code related to Windows APIs can be used in JavaScript.

Porting C# code to JavaScript is actually straightforward, especially if you are familiar with WinRT development. Windows APIs (aka WinRT) are projected into all available languages (C++, C#, and JavaScript) using each language's paradigms. So, moving C# code to JavaScript is mostly about changing member names to camelCase (e.g. "connection.SendMessageAsync()" becomes "connection.sendMessageAsync()"), and using the "on…" syntax for events: for example, "connection.RequestReceived += someEventHandlerCallback" becomes "connection.onrequestreceived = someEventHandlerCallback".
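
To make the mapping concrete, here is a small side-by-side sketch, with the C# lines shown as comments above their JavaScript equivalents:

// C#:  connection.RequestReceived += OnRequestReceived;
connection.onrequestreceived = onRequestReceived;
// ...or the equivalent addEventListener form:
connection.addEventListener("requestreceived", onRequestReceived);

// C#:  var response = await connection.SendMessageAsync(message);
connection.sendMessageAsync(message).then(function (response) {
    // async WinRT methods surface as promises in JavaScript
    console.log(response.status);
});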

As you can see, this is really easy…