//Build 2018 – Building event-driven Serverless Apps with Azure Functions and Azure Cosmos DB

Building event-driven Serverless Apps with Azure Functions and Azure Cosmos DB
Rafat Sarosh

In this session, Rafat Sarosh explained how to make a completely serverless backend with only Azure Functions and Azure Cosmos DB (with its ChangeFeed feature).

Despite covering a very trendy subject, I found this session a little boring.
There were not enough demos, little explanation of why you would use Azure Cosmos DB, and the speaker was not dynamic enough.

Azure Functions is part of Azure's FaaS (Functions as a Service) offering, which enables truly serverless development.

Azure Functions has two runtimes available: version 1.x, which is already GA (Generally Available), and 2.x, which is currently in preview.

Rafat demoed the tooling in the func CLI, Visual Studio and the Azure Portal to create new functions from templates. Those have been around for quite some time now.

Despite Azure Functions being serverless, you can still choose to host them on your own App Service plan in Azure. This gives you control over a few scaling/performance options, like AlwaysOn. By default, the Consumption plan (no dedicated App Service) puts your functions to sleep if they have not been used for 5 minutes.


Azure Functions has a trigger for Azure Cosmos DB that monitors for changes through the ChangeFeed functionality of Azure Cosmos DB and calls a function when new content is available.
This makes it possible to create automated workflows.
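As an illustration, such a change-feed-triggered function looks roughly like this (a hedged sketch based on the CosmosDBTrigger binding from the Microsoft.Azure.WebJobs.Extensions.CosmosDB package; the database, collection and connection setting names are made up):

```csharp
using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OnDocumentsChanged
{
    // Invoked with a batch of changed documents read from the ChangeFeed.
    // "StoreDb", "Orders", "CosmosDBConnection" and "leases" are placeholder names.
    [FunctionName("OnDocumentsChanged")]
    public static void Run(
        [CosmosDBTrigger(
            databaseName: "StoreDb",
            collectionName: "Orders",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases")]
        IReadOnlyList<Document> changedDocuments,
        ILogger log)
    {
        foreach (var doc in changedDocuments)
        {
            // React to each change, e.g. kick off the next step of a workflow
            log.LogInformation($"Document changed: {doc.Id}");
        }
    }
}
```

The lease collection tracks how far each function instance has read into the feed, which is what lets the trigger scale out safely.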


At the end of the session, 2 companies (Johnson Controls and Asos) explained their usage of both Azure Functions and Azure Cosmos DB.


//Build 2018 – .NET Overview and Roadmap

.NET Overview and Roadmap
Scott Hunter, Scott Hanselman


  • Visual Studio 2017 15.7
  • Visual Studio for Mac 7.5
  • .NET Core 2.1 RC (https://aka.ms/DotNetCore21)
  • .NET Core 3 (Preview later this year)
  • WPF/WinForms can now use .NET Core instead of .NET FX
    • .NET Core App Bundler (make a single exe containing all dlls)
  • .NET Conf 2018 (September 12-14)
  • Azure SignalR Services (as a Service)
  • New debugging option for WebAPI (http cli)
  • Functional tests with WebApplicationFactory
  • Navigate to source of a NuGet package or decompile sources from a DLL
  • Better support for Editorconfig
  • Blazor (with C# dlls into a browser)

For this session full of announcements and demos, the Scotts started with the new .NET Core 2.1, which is available as a Release Candidate today.


This new version features better performance, both at compile and run time.



Next was Azure SignalR Service. This new service lets you provision a SignalR server that benefits from the scaling and resiliency features of Azure.
This was demoed with a Trivia app that session attendees could play live while the speakers managed the app from their localhost, with all messages going through the Azure SignalR Service.
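For context, wiring an ASP.NET Core app to the service is mostly a plumbing change in Startup (a hedged sketch based on the Microsoft.Azure.SignalR preview package available at the time; the hub name and route are invented, and the connection string is read from configuration):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

// "ChatHub" and "/chat" are placeholder names for this sketch.
public class ChatHub : Hub
{
    public System.Threading.Tasks.Task Send(string message) =>
        Clients.All.SendAsync("broadcast", message);
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // AddAzureSignalR() makes clients connect to the managed service
        services.AddSignalR().AddAzureSignalR();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Route the hub through Azure SignalR instead of the local server
        app.UseAzureSignalR(routes => routes.MapHub<ChatHub>("/chat"));
    }
}
```

The hub code itself does not change; only the plumbing in Startup decides whether connections go through the managed service.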


We were shown a new option for debugging WebAPIs, currently in development.
It's a command line with commands to list all endpoints, query the API, and debug it.


Next was the announcement of .NET Core 3.0.

.NET Core 3.0 can now be chosen as the runtime for Win32 apps (like WinForms) and WPF.


Then we were shown the new versions of Visual Studio and Visual Studio for Mac



One feature that is currently experimental is the ability to navigate to the sources of a NuGet package, or to decompile an existing DLL (like JetBrains' tools already do)


To end this session, the Scotts showed us the new features for web development, like Blazor (with Mono on WebAssembly, which can run C# DLLs right in the browser)


//Build 2018 – Microsoft AI overview for developers

Microsoft AI overview for developers
Dr. Harry Shum, Brian Trager


This session was meant as a tour of all the AI services available in Azure today.

Announcements:

  • QnAMaker General Availability
  • Bot Services 3.0 (100+ new features in Bot Framework – new Bot Framework Emulator)
  • Luis / QnAMaker command line tools
  • az-cli includes Bot Services
  • Machine Learning for .NET (ML.NET)

To start, Harry recalled that Cognitive Services was launched 3 years ago at //Build 2015 and already has more than 1 million developers using it.

After that, we were shown videos and live demos that showcased the new services in Cognitive Services.

One of those was live translation with the Microsoft Translator app on a smartphone. Brian Trager, who is deaf, spoke in English with Harry Shum, who responded in Chinese.
Microsoft Translator uses an AI trained with Custom Speech, Custom Voice and Custom Translation for near real-time, highly accurate translation between the two speakers (better and quicker than the live transcript used by Microsoft in all sessions at //Build).


Continuing the tour, we were shown several linked demos using Conversational AI with the Bot Services and lots of other Cognitive Services.

First was a chatbot on an e-commerce website.
The bot used Text Analytics to adapt to the user’s language on the fly.
It used Luis.ai to recognize intents like "I want to buy a new watch" and react accordingly (refreshing the displayed items on the website).
The bot then proposed uploading an image of a watch, analyzed with Custom Vision to find a similar model on the website.
QnAMaker (Generally Available as of today) was also used to answer questions. The new QnAMaker can find correct answers for implicit context based on the previous questions (through the use of metadata).
For example: "What is the return policy?" – "You can get a full refund up to 7 days after the purchase" – "What about after that?" – "You can get store credits for the next 30 days".
This was not possible before.
To wrap up this demo, the bot was also capable of booking a visit to the nearest retail store, taking into account store hours, the user's calendar, road traffic, etc., and adding the visit to the user's calendar.
The bot finally asked for a selfie of the user.

The second demo was another bot, this time in a kiosk inside a mall.
The same user interacts with it, and the bot recognizes the person using the previously taken selfie (using the Face Recognition API).
The bot used Text-To-Speech and Speech-To-Text to communicate with the user, knew that the user had a meeting in one of the stores in the mall, and displayed a personalized map showing the user the way to the store.

The third and last demo was a website where the store clerk can view all previously aggregated info about the customer using Microsoft Dynamics.

Moving on to the new features of the Bot Framework, Harry showcased the ability to load a chat transcript directly into the emulator, to avoid retyping everything when testing a dialog.

The new Bot Framework Emulator can also manage Luis/QnAMaker (through the new command line tools) for a quicker develop-configure-test cycle.

Then we moved on to Machine Learning and the ONNX format (open source), created by Microsoft and now supported by 15 big companies.

A new toolkit to write Machine Learning in C#, used internally by Microsoft, is now made available to all: ML.NET


To end this session, we were shown the integration of all the tooling into Visual Studio.
For example, you can create a whole project by just right-clicking the Custom Vision resource in the Server Explorer tab of Visual Studio.


Getting started with Azure Search and the .NET SDK


In order to provide a more convenient and straightforward alternative to ElasticSearch, Microsoft introduced Azure Search, an Azure service based on ElasticSearch. Both solutions provide a dedicated environment for indexing and querying structured or semi-structured data. Azure Search, however, focuses on simplicity, at the expense of some of the features you may expect if you come from more complex engines.

For starters, Azure Search is more rigid: it is contract-based, meaning you have to define the indexes and the structure of your documents (indexed data) before you can index anything. The document structure itself is simplified and aimed at simple use cases and you won’t have all the options ElasticSearch can offer. The most important limitation to keep in mind is that you cannot include complex types in your document structure.

You should really be aware of these limitations when considering Azure Search as a production tool. But if you’re going for a quick, low maintenance and scalable solution, it may very well fit your needs.

Now if you’re still there, let’s start by deploying it on Azure.

Deploying your Azure Search service

Provided you already have an Azure subscription, setting up Azure Search couldn't be easier. Just go to the official service page and hit "Try search azure now"… or just click this link.


The Azure Search service creation page. Tier prices are removed because they may vary

The service configuration is straightforward: just enter an identifier that will be part of the URL of your search service (in the example, "sample-books"), select the usual Azure subscription, resource group and location, and choose the pricing option that best matches your needs. You can even select a free plan if you just want to try it out.

Once you hit the Create button and the service is deployed, you can access its dashboard page and start configuring it.


Our new Azure Search service’s Overview page

As you can see there, the service is available right away at the URL provided in the configuration, listed under URL on the overview page.

However, in order to make it operational, we have to create an index. If you’re not familiar with indexes, they are like collections of documents (data) sharing a similar structure, that are processed in order to optimize searches. Because documents live in an index, we are not going anywhere without one.

So let’s get this going by clicking the Add index button.


The Azure Search index creation page

This takes us to another config view where you have to input the name of the index and the structure of the documents in the index. The structure works pretty much like any graphical database table creation form: you can add fields with a type, plus a handful of options that let the engine know how your documents should be processed, which in turn affects how the fields behave in search queries.

The mandatory "id" field that comes pre-created will be used as a unique identifier for our documents: if you try to index a document whose id has already been indexed, the existing document will be updated.

In our example, each document represents a book. So we set up a few fields that we want indexed for our books.
Here is a quick breakdown of the options for your fields:

  • Retrievable: determines whether the field will be included in the query responses, or if you want to hide it;
  • Filterable: determines the ability to filter on the field (e.g. take all documents with a pageCount value greater than 200);
  • Sortable: determines the ability to sort by the field;
  • Facetable: determines the ability to group by the field (e.g. group books by category);
  • Searchable: determines whether the value of the field is included in full text searches

Our example is set up so that the title, author and description are processed in full text search, but not the category. This means that a full text search for "Mystery" will not include books of the category Mystery in the results.
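If you prefer to create the index from code rather than through the portal, the same options map to Field properties in the .NET SDK (a hedged sketch using the Microsoft.Azure.Search API; the index name matches the one used later in this article, the field options mirror the example above):

```csharp
using Microsoft.Azure.Search;
using Microsoft.Azure.Search.Models;

public static class IndexSetup
{
    public static void CreateBooksIndex(SearchServiceClient serviceClient)
    {
        var definition = new Index
        {
            Name = "mybookindex",
            Fields = new[]
            {
                new Field("id", DataType.String) { IsKey = true, IsRetrievable = true },
                new Field("title", DataType.String) { IsSearchable = true, IsRetrievable = true },
                new Field("author", DataType.String) { IsSearchable = true, IsRetrievable = true },
                new Field("description", DataType.String) { IsSearchable = true, IsRetrievable = true },
                // category is filterable/facetable but NOT searchable, as in the example above
                new Field("category", DataType.String) { IsFilterable = true, IsFacetable = true, IsRetrievable = true },
                new Field("pageCount", DataType.Int32) { IsFilterable = true, IsSortable = true, IsRetrievable = true },
                new Field("isAvailableOnline", DataType.Boolean) { IsFilterable = true, IsRetrievable = true }
            }
        };

        // Requires an admin key; creates the index on the service
        serviceClient.Indexes.Create(definition);
    }
}
```

Depending on your SDK version, a FieldBuilder.BuildForType&lt;Book&gt;() helper may also derive this list from an annotated POCO, which avoids keeping it in sync by hand.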

Once you are done with the creation, your index is ready, although still empty and sad… so let’s fix that!

Indexing data

The next thing to do is indexing actual documents. In our example, this means indexing books.

There are two ways to do this:

  • Adding a data source and an indexer, meaning that Azure Search is going to crawl your data source (Azure Storage, DocumentDB, etc) periodically to index new data;
  • Indexing documents through the REST API, either directly, or indirectly with an SDK.

And of course, nothing prevents you from doing both. But in our case, we are going to index documents programmatically, using C# and the .NET Azure Search SDK.

So let’s dig into the coding. As a side note, if you’re allergic to code, you can skip right to the start of the next part, where we play around with Azure Search’s query interface.

First, we’re going to create a console application and add the Azure Search SDK NuGet package.


Installing the Azure Search SDK through the NuGet Package Manager view on VS2015

Alternatively, you can run the following NuGet command:

> Install-Package Microsoft.Azure.Search

Next, we are going to need a Book POCO class with properties matching the indexed fields. Rather than breaking the naming conventions of C# and using camelCase for our properties, we are going to use the SerializePropertyNamesAsCamelCase attribute to tell the SDK how it is supposed to handle our properties when uploading or downloading documents.

So here is our Book.cs:

[SerializePropertyNamesAsCamelCase]
public class Book
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Author { get; set; }
    public string Description { get; set; }
    public string Category { get; set; }
    public int PageCount { get; set; }
    public bool IsAvailableOnline { get; set; }
}

Next, we will need to create a client that connects to our Azure Search service. Using a bit from the official documentation, we can write the following method:

private static SearchServiceClient CreateAdminSearchServiceClient()
{
    string searchServiceName = "sample-books";
    string adminApiKey = "Put your API admin key here";

    SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
    return serviceClient;
}

Note that you can find your keys on the Azure service page, under the "Keys" section. You have to use an admin key in order to create indexes or index documents.

Now let’s write a method that indexes a few sample books:

private static void UploadDocuments(ISearchIndexClient indexClient)
{
    var books = new Book[]
    {
        new Book()
        {
            Id = "SomethingUnique01",
            Title = "Pride and Prejudice",
            Author = "Jane Austen",
            Category = "Classic",
            Description = "Set in the English countryside in a county roughly thirty miles from London...",
            IsAvailableOnline = true,
            PageCount = 234
        },
        new Book()
        {
            Id = "SomethingUnique02",
            Title = "Alice's Adventures in Wonderland",
            Author = "Lewis Carroll",
            Category = "Classic",
            Description = "Alice was beginning to get very tired of sitting by her sister on the bank...",
            IsAvailableOnline = true,
            PageCount = 171
        },
        new Book()
        {
            Id = "SomethingUnique03",
            Title = "Frankenstein",
            Author = "Mary Wollstonecraft Shelley",
            Category = "Horror",
            Description = "You will rejoice to hear that no disaster has accompanied...",
            IsAvailableOnline = true,
            PageCount = 346
        }
    };

    // Make a batch with our array of books
    var batch = IndexBatch.MergeOrUpload(books);

    // Query the API to index the documents
    indexClient.Documents.Index(batch);
}

As you can see, the SDK allows us to use our Book objects directly in its upload methods, performing the REST API query for us.

Note that for the sake of simplicity we're not handling exceptions here, but you really should in production code.

Also keep in mind that your documents will not be instantly indexed. You should expect a little delay between document upload and their availability in index queries. The delay depends on the service load, but in our case a few seconds should be enough.

So let’s set up our program to call these methods and index the books.

static void Main(string[] args)
{
    var serviceClient = CreateAdminSearchServiceClient();
    // Get the index client by name - use your index name here
    var indexClient = serviceClient.Indexes.GetClient("mybookindex");

    // Index our sample books
    UploadDocuments(indexClient);
}

After running the program, provided everything went well, and after the aforementioned delay, your data should be indexed.

You can check that your documents have been uploaded on the Azure dashboard page.


On the Azure overview page for our service, we can see that there are 3 documents indexed
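If you would rather check from code, the SDK also exposes the document count (a small sketch; the count is subject to the same indexing delay, and the helper name is my own):

```csharp
// Sketch: wait until the expected number of documents is visible in the index.
// Assumes the same ISearchIndexClient used for uploading.
private static void WaitForIndexedDocuments(ISearchIndexClient indexClient, long expectedCount)
{
    while (indexClient.Documents.Count() < expectedCount)
    {
        System.Threading.Thread.Sleep(1000); // indexing is fast, but not instantaneous
    }
}
```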

Alright! When you’re done with the indexing, all that’s left to do is query!

Querying documents

So, our Azure Search service is up and running, with an operational index and some documents to go along. Let’s get to the core feature: querying documents.

For the purpose of illustration and to get a better understanding of what we will be doing next with the SDK, we are going to start with the Azure Search query interface called Search Explorer.

And sure enough, you can access it through the Search Explorer button on the overview dashboard.


The Azure Search Explorer default view

The Query string field roughly corresponds to the part after the "?" in the URL when you query the REST API with a GET request.

In the Request URL field below, you can see the full URL that will be called to execute your query.

And finally, the Results field shows the raw JSON response from the service.

Now let’s try it out with some examples:


An example of a full text search

In this example, we are searching for the term "disaster". This causes Azure Search to perform a full text search on every field marked as Searchable in the index document structure. Because the book "Frankenstein" has the word "disaster" in its description field, and that field is marked as Searchable, it is returned.

If we replace our search term with "Horror", the service returns no results, even though the value of category is literally "Horror" in the case of Frankenstein. Again, this is because our category field isn't Searchable.


An example of a search using filters

This second example retrieves all books with more than 200 pages. I won't explain the whole syntax here because there would be too much to write and it is already covered in the search documentation. In essence, we are using the $filter parameter to limit results to the documents satisfying the condition "pageCount gt 200", which means that the value of pageCount has to be greater than 200 for a document to pass the filter.
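For reference, here are a few more $filter expressions in the same OData-style syntax (illustrative examples against our book index: an exact match on a filterable string uses eq with single quotes, booleans are bare true/false, and conditions combine with and):

```
$filter=pageCount gt 200
$filter=category eq 'Classic'
$filter=isAvailableOnline eq true and pageCount lt 300
$filter=pageCount gt 200 and category eq 'Horror'
```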

Now that we have some clues about how the search API works, we are going to have the SDK do half of the job for us. Let's go back to our C# .NET project.

The first thing we need when querying is a SearchServiceClient… and I know we already built one in part 2, but we are not going to use that one. When you are only querying, you should use a query API key instead of an admin key, for security reasons.

You can get those keys in the Keys section of the Azure Search service page, after clicking the Manage query keys link.
You are free to use the default one. In my case, I added a new key called "mykey" because I don't like using an unnamed key, and obviously "mykey" is much more descriptive.

So let’s write our new client creation method:

private static SearchServiceClient CreateQuerySearchServiceClient()
{
    string searchServiceName = "sample-books";
    string queryApiKey = "YOUR QUERY API KEY HERE";

    SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(queryApiKey));
    return serviceClient;
}

Of course this is almost the same code as before and we should really refactor it, but I’m leaving that as an exercise to the reader. For the sake of simplicity, of course.

Once we have that, we are going to write the methods that query our books. Let's just rewrite the tests we did with the Search Explorer, this time using the SDK. We will write 2 separate methods, again for the sake of clarity:

private static Book[] GetDisasterBooks(ISearchIndexClient client)
{
    // Query with the search text "disaster"
    DocumentSearchResult<Book> response = client.Documents.Search<Book>("disaster");

    // Get the results
    IList<SearchResult<Book>> searchResults = response.Results;
    return searchResults.Select(searchResult => searchResult.Document).ToArray();
}

private static Book[] GetBooksWithMoreThan200Pages(ISearchIndexClient client)
{
    // Filter on documents that have a value in the field 'pageCount' greater than (gt) 200
    SearchParameters parameters = new SearchParameters()
    {
        Filter = "pageCount gt 200"
    };

    // Query with the search text "*" (everything) and include our parameters
    DocumentSearchResult<Book> response = client.Documents.Search<Book>("*", parameters);

    // Get the results
    IList<SearchResult<Book>> searchResults = response.Results;
    return searchResults.Select(searchResult => searchResult.Document).ToArray();
}

What we can see here is that all the URL parameters are available through the optional SearchParameters object, except the search text itself, which is passed as a separate parameter of the Search method.
And once again, the SDK can use our Book class directly, transparently retrieving Book objects by deserializing the response from our Azure Search service.

Now let’s use these methods in our program:

static void Main(string[] args)
{
    var queryServiceClient = CreateQuerySearchServiceClient();
    var queryIndexClient = queryServiceClient.Indexes.GetClient("mybookindex");

    var disasterBooks = GetDisasterBooks(queryIndexClient);
    Console.WriteLine("GetDisasterBooks results: " + string.Join(" ; ", disasterBooks.Select(b => b.Title)));
    var moreThan200PagesBooks = GetBooksWithMoreThan200Pages(queryIndexClient);
    Console.WriteLine("GetBooksWithMoreThan200Pages results: " + string.Join(" ; ", moreThan200PagesBooks.Select(b => b.Title)));
}


The client part is similar to what we did when we were indexing documents, and the rest is just getting the results from the query methods and displaying them with Console.WriteLine.

And running this program gets us this beautiful output:

GetDisasterBooks results: Frankenstein
GetBooksWithMoreThan200Pages results: Pride and Prejudice ; Frankenstein

Going deeper

We have seen how to deploy an Azure Search service, how to create and configure its indexes, and how to use the SDK for both indexing and querying documents. As mentioned in the previous parts, there is a bit more you can do with Azure Search that goes beyond the scope of this article.

If you want to go further, here are some points we haven’t discussed, along with links providing documentation on the topic:

Thanks for reading and I hope this article at least made you want to read more classical literature.

EDF, Azure, Windows Phone and the Micro Framework

DISCLAIMER: first of all, the author is not responsible for any action, accident, etc. occurring during the assembly, installation, use, etc. of the process described in this article!

The purpose of this post is to present the solution I put in place to retrieve the house's power consumption in real time, and to view it on my Windows Phone.


Shopping list

For this, we need:

  • A digital ERDF meter. If you have an older model (the rotating-disc version), you can have it replaced for a reasonable fee.
  • The "teleinfo" option enabled on your meter. It is enabled by default.
  • An Azure account to store the data.
  • A Micro Framework board. The sample code targets Micro Framework 4.2. My current recommendation for a good compromise is the FEZ Cerbuino, available on the manufacturer's website (https://www.ghielectronics.com/catalog/product/473 ).
  • Back when I ran this code, I used a FEZ Domino, but it is no longer for sale.
  • A small electronic circuit, with components available on eBay or from Farnell/Mouser/etc. The cost of the components is ridiculously low:
    • An SFH620 optocoupler (I used a Vishay SFH6206-2)
    • One 4.7 kOhm resistor
    • One 1.2 kOhm resistor


Wiring diagram

First, for the assembly, we need to build the following circuit:

electronic schematic

The right-hand part of the circuit connects to the Micro Framework board. On the left side of the schematic arrive the wires connected to the Teleinfo output of the electricity meter (I1 and I2):

assembly schematic

Note:

  • The 5V supply for the board and the small circuit. A USB charger can do the job.
  • The common ground (GND) between the Micro Framework board and the small circuit.
  • The two wires to the meter (I1 and I2).
  • The connection to COM1 IN on the Micro Framework board.


The Teleinfo protocol

Little known, this feature lets you read the real-time stream of EDF data over a serial port. The characteristics of the port exposed by the meter: 1,200 bps, 7 data bits, even parity, one stop bit.

As for the frames sent, the protocol is rather well documented:

http://www.planete-domotique.com/notices/ERDF-NOI-CPT_O2E.pdf ("official" documentation)

http://bernard.lefrancois.free.fr/teleinfo.htm (sample frames)

Personally, I have off-peak/peak hours (HC/HP) at home, single-phase. Moving to three-phase or another setup just comes down to interpreting the right frames.
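To give an idea of what the board will receive, a frame looks roughly like this (an illustrative reconstruction from the documentation linked above, for an HC/HP single-phase subscription; the values are made up and each group's trailing checksum character is shown as a placeholder C):

```
<STX>                            frame start (0x02)
<LF>ADCO 123456789012 C<CR>      meter serial number
<LF>PTEC HP.. C<CR>              current tariff period (here: peak hours)
<LF>HCHC 012345678 C<CR>         off-peak index, in Wh
<LF>HCHP 087654321 C<CR>         peak index, in Wh
<LF>PAPP 00750 C<CR>             apparent power, in VA
<ETX>                            frame end (0x03)
```

The code below looks for the 0x02/0x03 markers to cut a full frame out of the serial stream, then extracts the values between the 0x20 separators of each group.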


The code

For the Micro Framework part of the code, nothing could be easier:

We start with a class that encapsulates the parts we are interested in:

public class EdfReading
{
  public string NoCompteur { get; set; }
  public string PeriodeTarifaireEnCours { get; set; }
  public string PuissanceApparente { get; set; }
  public string IndexHeuresCreuses { get; set; }
  public string IndexHeuresPleines { get; set; }
}

Then we start reading:

  • declaration of the important fields (the receive buffer, its cursor and its size are used by the handler below; the buffer size shown here is an assumed value):
const byte startByte = 0x002;
const byte endByte = 0x003;
const int buffersize = 1024; // assumed size, large enough to hold a full frame
static Encoding enc = Encoding.UTF8;
static EdfReading currentReading;
static byte[] cursorByteArray = new byte[buffersize];
static int currentPosition = 0;
  • declaration of the serial port:
static SerialPort serialPortEdf = new SerialPort("COM1", 1200, Parity.Even, 7);

Additionally, you need to add this in Main:

serialPortEdf.StopBits = StopBits.One; // the stop bit is not settable through the constructor…

This function call points to:

private static void InitiateEdfPort()
{
  // open the port
  if (!serialPortEdf.IsOpen)
    serialPortEdf.Open();

  serialPortEdf.DataReceived += new SerialDataReceivedEventHandler(EdfPacketReceived);
}

The packet interpretation happens in EdfPacketReceived, with one small caveat:

    static void EdfPacketReceived(object sender, SerialDataReceivedEventArgs e)
    {
        var incoming = new byte[serialPortEdf.BytesToRead];
        serialPortEdf.Read(incoming, 0, incoming.Length);

        if (incoming == null || incoming.Length < 5)
            return; // noise on the line…

        if (currentPosition + incoming.Length > cursorByteArray.Length)
        {
            // problem: we reached the end of the buffer without seeing an end-of-frame marker
            currentPosition = 0;
            cursorByteArray = new byte[buffersize]; // start over
        }

        // concatenate the incoming bytes
        System.Array.Copy(incoming, 0, cursorByteArray, currentPosition, incoming.Length);

        currentPosition += incoming.Length;
        // find the start and end markers of a frame
        int startIndex = System.Array.IndexOf(cursorByteArray, startByte, 0);
        int endIndex = System.Array.IndexOf(cursorByteArray, endByte, startIndex + 1);

        // uncomment this part if you want to see the frame while debugging
        //string s = new String(Encoding.UTF8.GetChars(cursorByteArray));
        if (endIndex < 1 || startIndex < 0 || startIndex > endIndex)
            return; // no valid frame yet

        // if we get here it means:
        // - we have a valid EDF frame (start byte, end byte)
        // - we can read it
        // read only the part we are interested in
        byte[] validPacket = new byte[endIndex - startIndex + 1];
        System.Array.Copy(cursorByteArray, startIndex, validPacket, 0, validPacket.Length);

        // interpret the frame
        TranslateEdfProtocolIntoCurrentReading(validPacket);

        currentPosition = 0;
        cursorByteArray = new byte[buffersize];
    }

    static void TranslateEdfProtocolIntoCurrentReading(byte[] packets)
    {
        if (packets == null || packets.Length < 1)
            return;

        string adco = FindPacketValue(packets, "ADCO");
        string hchc = FindPacketValue(packets, "HCHC");
        string hphp = FindPacketValue(packets, "HCHP");
        string ptec = FindPacketValue(packets, "PTEC");
        string papp = FindPacketValue(packets, "PAPP");

        if (hchc != null && hchc.Length > 1)
        {
            // the reading is here
            currentReading = new EdfReading()
            {
                IndexHeuresCreuses = hchc,
                IndexHeuresPleines = hphp,
                NoCompteur = adco,
                PeriodeTarifaireEnCours = ptec,
                PuissanceApparente = papp
            };
        }

        if (currentReading != null)
            Debug.Print("*** HP:" + currentReading.IndexHeuresPleines + " & HC:" + currentReading.IndexHeuresCreuses);
    }

    private static string FindPacketValue(byte[] packets, string packetName)
    {
        int position = FindIndexOf(packets, enc.GetBytes(packetName));
        if (position == -1) // not found...
            return string.Empty;

        int startByte = 0;
        int endByte = 0;
        for (int i = position; i < packets.Length; i++)
        {
            var b = packets[i];
            if (b == 0x20)
            {
                if (startByte == 0)
                {
                    startByte = i;
                }
                else if (endByte == 0)
                {
                    endByte = i; // end of the value reached
                    break;
                }
            }
        }

        if (endByte == 0)
            return string.Empty; // malformed group

        // extract the value between the two separators
        byte[] valByte = new byte[endByte - startByte - 1];
        System.Array.Copy(packets, startByte + 1, valByte, 0, valByte.Length);
        return new String(System.Text.Encoding.UTF8.GetChars(valByte));
    }

    private static int FindIndexOf(byte[] arrayToSearchThrough, byte[] patternToFind)
    {
        if (patternToFind.Length > arrayToSearchThrough.Length)
            return -1;

        for (int i = 0; i <= arrayToSearchThrough.Length - patternToFind.Length; i++)
        {
            bool found = true;
            for (int j = 0; j < patternToFind.Length; j++)
            {
                if (arrayToSearchThrough[i + j] != patternToFind[j])
                {
                    found = false;
                    break;
                }
            }
            if (found)
                return i;
        }
        return -1;
    }

As you can see, I had to write a method to find byte arrays inside byte arrays. Don't forget that we are on the Micro Framework, without the usual Rolls-Royce toolbox!

So, if all goes well, you have the values in currentReading, and you can then send the data to Azure, display it, etc.

In my setup, the Micro Framework board that reads the meter is not the same one that communicates with Azure. The two are connected over an RS485 network.


Azure part

In Azure, I created a simple MVC site that exposes, through a plain GET, a way to store the data in an Azure SQL database.

Security is handled by sending a non-repeating key (hint: take the current time, take a key, and compose your own non-repeating token).
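One possible way to compose such a token (a hedged sketch of my own, not the exact scheme used here): HMAC the current minute with a shared secret, so a captured URL stops working once the minute is over. This uses full .NET crypto APIs; the Micro Framework side would need a lighter-weight equivalent.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class TokenHelper
{
    // Computes a token that changes every minute; both sides share "secret".
    // Illustrative scheme only, not the one actually deployed.
    public static string ComputeToken(string secret, DateTime utcNow)
    {
        string timeSlot = utcNow.ToString("yyyyMMddHHmm"); // one slot per minute
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret)))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(timeSlot));
            return BitConverter.ToString(hash).Replace("-", "");
        }
    }
}
```

The server recomputes the token for the current (and possibly previous) minute and rejects anything else, so replaying an old URL fails.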

The part that sends the data to Azure runs on a FEZ Panda 2 + a FEZ Connect. But if you have the Cerbuino Net, you can read and send the data directly from the same board. The URL contains the values (e.g.: http://monsiteweb/Sauvegarde?valeurHC=123&valeurHP=321&tokenSecret=ABCD )

    static bool requestInProgress; // guards against overlapping calls

    private static void CallWSWithUrl(string url)
    {
        // wait (up to 30 iterations) for any request in progress to finish
        for (int i = 0; i < 30; i++)
        {
            if (requestInProgress)
                Thread.Sleep(100); // (assumed) small pause between checks
        }
        requestInProgress = true;

        using (HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url))
        {
            request.ContentType = "application/text";
            request.Method = "GET";
            request.Timeout = 4000;
            request.KeepAlive = false;

            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    // nothing to do with the response, issuing the GET is enough
                }
            }
            catch (Exception ex)
            {
                Debug.Print("WS ERROR: " + ex.Message);
            }
        }

        requestInProgress = false;
    }

The result?

[screenshot: table of readings stored in Azure]

You can clearly see a value saved every minute, with enough precision to let us compute the consumption in watts.
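To make the arithmetic concrete: the meter indexes are counters in Wh (hence the division by 1000 to get kWh in CalculateConsumption further down), so the average power over an interval is the index difference divided by the elapsed time in hours. A tiny illustrative helper, sketched in Python with hypothetical names:

```python
def average_watts(index_start_wh, index_end_wh, minutes):
    """Average power (W) between two meter index readings given in Wh."""
    consumed_wh = index_end_wh - index_start_wh
    hours = minutes / 60
    return consumed_wh / hours


# 25 Wh consumed over one minute is an average draw of 1500 W.
print(average_watts(1000, 1025, 1))  # → 1500.0
```

This is why per-minute sampling matters: with hourly samples you would only ever see hourly averages.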

One thing to watch out for: given the volume of data that accumulates, remember to index the table and optimize reads. Here is an example that efficiently retrieves the hourly consumption, split into HC/HP (off-peak/peak), for a given period:

SELECT
  MAX(ReadingValueHP) AS hp,
  MAX(ReadingValueHC) AS hc,
  CONVERT(DateTime, CONVERT(NVARCHAR, CONVERT(Date, ReadingDate)) + ' ' + CONVERT(NVARCHAR, DATEPART(hour, ReadingDate)) + ':00') AS ReadingDate
FROM EdfReading
WHERE ReadingDate >= @start
AND ReadingDate <= @end
GROUP BY CONVERT(DateTime, CONVERT(NVARCHAR, CONVERT(Date, ReadingDate))+' '+CONVERT(NVARCHAR, DATEPART(hour, ReadingDate)) + ':00')

And the same thing to get a per-day value (we will compute the differences later, on the phone):

SELECT MAX(ReadingValueHP) AS hp, MAX(ReadingValueHC) AS hc, CONVERT(Date, Readingdate) as ReadingDate
FROM EdfReading
WHERE ReadingDate >= @start
AND ReadingDate <= @end
GROUP BY CONVERT(Date, Readingdate)


The Windows Phone side

[screenshots: the Windows Phone application]

On the phone side, the application simply fetches the values from the site, and the real consumption is obtained by taking the difference between successive readings.

Totals are shown for the displayed period.

The two parts worth highlighting:

  • computing the watts:
private static void CalculateConsumption(EdfReading[] results, out double totalKWHP, out double totalKWHC, out double indexHP, out double indexHC, out List<EdfReadingExtended> items)
{
    // Lowest and highest index values over the period.
    var minValHP = results.Min(x => x.ReadingValueHP);
    var minValHC = results.Min(x => x.ReadingValueHC);

    indexHP = results.Max(x => x.ReadingValueHP);
    indexHC = results.Max(x => x.ReadingValueHC);

    // Total consumption over the period is simply max index - min index.
    totalKWHP = indexHP - minValHP;
    totalKWHC = indexHC - minValHC;

    items = new List<EdfReadingExtended>();
    var orderedRdgs = results.OrderBy(x => x.ReadingDate).ToArray();

    var cursorHP = minValHP;
    var cursorHC = minValHC;
    var cursorDate = results.Min(x => x.ReadingDate);

    foreach (EdfReading rdg in orderedRdgs)
    {
        // Elapsed time since the previous reading.
        double hours = (rdg.ReadingDate - cursorDate).TotalHours;

        // Consumption (in Wh) over the interval.
        var hcConsumption = (rdg.ReadingValueHC - cursorHC);
        var hpConsumption = (rdg.ReadingValueHP - cursorHP);

        items.Add(new EdfReadingExtended()
        {
            ReadingDate = rdg.ReadingDate,
            ReadingValueHC = hcConsumption / 1000.0, // Wh -> kWh
            ReadingValueHP = hpConsumption / 1000.0,
            // Average wattage over the interval; guard the first reading (hours == 0).
            KW = hours > 0 ? (int)((hcConsumption + hpConsumption) / hours) : 0
        });

        cursorHC = rdg.ReadingValueHC;
        cursorHP = rdg.ReadingValueHP;
        cursorDate = rdg.ReadingDate;
    }
}
  • and using the Data Visualization Toolkit charts. I slightly modified the sources so that I could:
    • swipe left/right to pan the time axis
    • pinch to change the time scale (minute/hour/week/month/whole period)

Since my initial struggle to port it to WP 8.0, it is now available as a NuGet package — but without those two features.


Conclusion and further reading

I hope I have piqued your curiosity on the subject.

The application has been running at my place for over a year without any major issue. The Micro Framework board sits outside in a waterproof case and has withstood outdoor temperatures from -15°C in winter to +35°C in summer without trouble — a nice surprise for a board that was never designed for industrial environments.

The code is sometimes rudimentary, and I apologize in advance if you spot a few geek shortcuts.

Next week, I will try to walk you through the same process, but with my water meter (which does not work with pulses!).

In the meantime, feel free to share your comments. The full source code will not be published or distributed, but I believe I have given you all the information you need to get off to a good start.

My starting points: