Getting started with Azure Search and the .NET SDK

Microsoft introduced Azure Search as a more convenient and straightforward alternative to ElasticSearch: an Azure service built on top of ElasticSearch. Both solutions provide a dedicated environment for indexing and querying structured or semi-structured data. Azure Search, however, focuses on simplicity, at the expense of some of the features you may expect if you come from the more complex engines.

For starters, Azure Search is more rigid: it is contract-based, meaning you have to define the indexes and the structure of your documents (the indexed data) before you can index anything. The document structure itself is simplified and aimed at simple use cases, and you won’t have all the options ElasticSearch can offer. The most important limitation to keep in mind is that you cannot include complex types in your document structure.
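To make that concrete, here is an illustrative sketch (hypothetical classes, just for illustration): where ElasticSearch would let you nest an object inside a document, an Azure Search document has to be flattened into simple fields.

// Not indexable by Azure Search: a document with a nested complex type.
public class BookWithAuthor
{
    public string Id { get; set; }
    public Author Author { get; set; } // complex property: not supported
}

public class Author
{
    public string Name { get; set; }
    public string Country { get; set; }
}

// What you do instead: flatten the nested object into simple fields.
public class FlattenedBook
{
    public string Id { get; set; }
    public string AuthorName { get; set; }
    public string AuthorCountry { get; set; }
}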

You should really be aware of these limitations when considering Azure Search as a production tool. But if you’re after a quick, low-maintenance and scalable solution, it may very well fit your needs.

Now, if you’re still here, let’s start by deploying it on Azure.

Deploying your Azure Search service

Provided you already have an Azure subscription running, setting Azure Search up couldn’t be easier. Just go to the official service page and hit « Try search azure now »… or just click this link.

AzureSearchDeploy

The Azure Search service creation page. Tier prices are removed because they may vary

The service configuration is straightforward: just enter an identifier to be part of the URL of your search service (in the example, « sample-books »), select the usual Azure subscription, resource group and location, and choose the pricing option that best matches your needs. You can even select a free plan if you just want to try it out.

Once you hit the Create button and the service is deployed, you can access its dashboard page and start configuring it.

AzureSearchDashboard

Our new Azure Search service’s Overview page

As you can see there, the service is available right away, at the address listed under URL on the overview page.

However, in order to make it operational, we have to create an index. If you’re not familiar with indexes, they are like collections of documents (data) sharing a similar structure, processed in order to make searches fast. Because documents live in an index, we are not going anywhere without one.

So let’s get this going by clicking the Add index button.

AzureSearchIndexCreation.png

The Azure Search index creation page

This takes us to another configuration view where you have to input the name of the index and the structure of the documents in the index. The structure works pretty much like any graphical database table creation form: you can add fields with a type, and a handful of options that let the engine know how your documents should be processed, which in turn affects how the fields behave in search queries.

The mandatory « id » field that comes pre-created will be used as a unique identifier for our documents: if you index a document whose id has already been indexed, the existing document is updated rather than duplicated.

In our example, each document represents a book. So we set up a few fields that we want indexed for our books.
Here is a quick breakdown of the options for your fields:

  • Retrievable: determines whether the field will be included in the query responses, or if you want to hide it;
  • Filterable: determines the ability to filter on the field (e.g. take all documents with a pageCount value greater than 200);
  • Sortable: determines the ability to sort by the field;
  • Facetable: determines the ability to group by the field (e.g. group books by category);
  • Searchable: determines whether the value of the field is included in full text searches.

Our example is set up so that the title, author and description are processed in full text search, but not the category. This means that a full text search for « Mystery » will not include books of the category Mystery in the results.
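As a side note, if you prefer code over the portal, the same index can also be created with the .NET SDK introduced later in this article. Here is a minimal sketch, assuming the Book class and the admin serviceClient shown below, with the Book properties decorated with attributes such as [Key] on Id and [IsSearchable] on the text fields (these attributes are how the field options above are expressed in code):

// Sketch: create the index from code instead of the portal.
// FieldBuilder reads attributes like [IsSearchable] or [IsFilterable] from the Book class.
var definition = new Microsoft.Azure.Search.Models.Index()
{
    Name = "mybookindex",
    Fields = FieldBuilder.BuildForType<Book>()
};
serviceClient.Indexes.Create(definition);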

Once you are done with the creation, your index is ready, although still empty and sad… so let’s fix that!

Indexing data

The next thing to do is to index actual documents. In our example, this means indexing books.

There are two ways to do this:

  • Adding a data source and an indexer, meaning that Azure Search is going to crawl your data source (Azure Storage, DocumentDB, etc.) periodically to index new data (see the sketch below);
  • Indexing documents through the REST API, either directly, or indirectly with an SDK.
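The second option is the one used in the rest of this article, but for completeness, here is roughly what the first one could look like with the .NET SDK, assuming an Azure SQL table as the source. This is a sketch with placeholder names; check the SDK reference for the exact signatures:

// Sketch: declare a data source pointing at an Azure SQL table...
var dataSource = DataSource.AzureSql(
    name: "books-sql",
    sqlConnectionString: "YOUR CONNECTION STRING",
    tableOrViewName: "Books");
serviceClient.DataSources.Create(dataSource);

// ...and an indexer that crawls it into our index every hour.
var indexer = new Indexer()
{
    Name = "books-indexer",
    DataSourceName = "books-sql",
    TargetIndexName = "mybookindex",
    Schedule = new IndexingSchedule(TimeSpan.FromHours(1))
};
serviceClient.Indexers.Create(indexer);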

And of course, nothing prevents you from doing both. But in our case, we are going to index documents programmatically, using C# and the .NET Azure Search SDK.

So let’s dig into the coding. As a side note, if you’re allergic to code, you can skip right to the start of the next part, where we play around with Azure Search’s query interface.

First, we’re going to create a console application and add the Azure Search SDK NuGet package.

AzureSearchNugetPackage

Installing the Azure Search SDK through the NuGet Package Manager view in VS2015

Alternatively, you can run the following NuGet command:

> Install-Package Microsoft.Azure.Search

Next, we are going to need a Book POCO class with properties matching the indexed fields. Rather than breaking the naming conventions of C# by using camelCase for our properties, we are going to use the SerializePropertyNamesAsCamelCase attribute to tell the SDK how it is supposed to handle our properties when uploading or downloading documents.

So here is our Book.cs:

[SerializePropertyNamesAsCamelCase]
public class Book
{
    public string Id { get; set; }
    public string Title { get; set; }
    public string Author { get; set; }
    public string Description { get; set; }
    public string Category { get; set; }
    public int PageCount { get; set; }
    public bool IsAvailableOnline { get; set; }
}

Next, we will need to create a client that connects to our Azure Search service. Using a bit from the official documentation, we can write the following method:

private static SearchServiceClient CreateAdminSearchServiceClient()
{
    string searchServiceName = "sample-books";
    string adminApiKey = "Put your API admin key here";

    SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
    return serviceClient;
}

Note that you can find your keys on the Azure Search service page, under the « Keys » section. You have to use an admin key in order to create indexes or index documents.
AzureSearchAdminKey.png

Now let’s write a method that indexes a few sample books:

private static void UploadDocuments(ISearchIndexClient indexClient)
{
    var books = new Book[]
    {
        new Book()
        {
            Id = "SomethingUnique01",
            Title = "Pride and Prejudice",
            Author = "Jane Austen",
            Category = "Classic",
            Description = "Set in the English countryside in a county roughly thirty miles from London...",
            IsAvailableOnline = true,
            PageCount = 234
        },
        new Book()
        {
            Id = "SomethingUnique02",
            Title = "Alice's Adventures in Wonderland",
            Author = "Lewis Carroll",
            Category = "Classic",
            Description = "Alice was beginning to get very tired of sitting by her sister on the bank...",
            IsAvailableOnline = true,
            PageCount = 171
        },
        new Book()
        {
            Id = "SomethingUnique03",
            Title = "Frankenstein",
            Author = "Mary Wollstonecraft Shelley",
            Category = "Horror",
            Description = "You will rejoice to hear that no disaster has accompanied...",
            IsAvailableOnline = true,
            PageCount = 346
        }
    };

    // Make a batch with our array of books
    var batch = IndexBatch.MergeOrUpload(books);

    // Query the API to index the documents
    indexClient.Documents.Index(batch);
}

As you can see, the SDK allows us to use our Book objects directly in its upload methods, performing the REST API query for us.

Note that for the sake of simplicity, we’re not handling exceptions here, but you really should in production code.
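If you want an idea of what that handling could look like, here is a minimal sketch: when part of a batch fails, the SDK throws an IndexBatchException that tells you which documents were rejected, so you can retry just those.

try
{
    indexClient.Documents.Index(batch);
}
catch (IndexBatchException ex)
{
    // A batch is not transactional: some documents may have gone through.
    // Collect the keys that failed so they can be retried later.
    var failedKeys = ex.IndexingResults
        .Where(r => !r.Succeeded)
        .Select(r => r.Key);
    Console.WriteLine("Failed to index: " + string.Join(", ", failedKeys));
}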

Also keep in mind that your documents will not be indexed instantly. You should expect a small delay between uploading documents and their availability in index queries. The delay depends on the service load, but in our case a few seconds should be enough.

So let’s set up our program to call these methods and index the books.

static void Main(string[] args)
{
    var serviceClient = CreateAdminSearchServiceClient();
    // Get the index client by name - use your index name here
    var indexClient = serviceClient.Indexes.GetClient("mybookindex");
    UploadDocuments(indexClient);
}

After running the program, assuming it ran fine, and after the aforementioned delay, your data should be indexed.

You can check that your documents have been uploaded on the Azure dashboard page.

AzureSearchDocumentsIndexed.png

On the Azure overview page for our service, we can see that there are 3 documents indexed

Alright! When you’re done with the indexing, all that’s left to do is query!

Querying documents

So, our Azure Search service is up and running, with an operational index and some documents to go along. Let’s get to the core feature: querying documents.

For the purpose of illustration and to get a better understanding of what we will be doing next with the SDK, we are going to start with the Azure Search query interface called Search Explorer.

And sure enough, you can access it through the Search Explorer button on the overview dashboard.

AzureSearchExplorer0.png

The Azure Search Explorer default view

The Query string field roughly corresponds to the part after the « ? » in the URL when you query the REST API with a GET request.

In the Request URL field below, you can see the full URL that will be called to execute your query.

And finally, the Results field shows the raw JSON response from the service.

Now let’s try it out with some examples:

AzureSearchExplorer2

An example of a full text search

In this example, we are searching for the term « disaster ». This will cause Azure Search to perform a full text search on every field that is marked as Searchable in the index document structure. Because the book « Frankenstein » has the word « disaster » in its description field, and that field is marked as Searchable, it is returned.

If we replace our search term with « Horror », the service returns no results, even though the value of category is literally « Horror » in the case of Frankenstein. Again, this is because our category field isn’t Searchable.

AzureSearchExplorer3

An example of a search using filters

This second example retrieves all books with more than 200 pages. I won’t explain the whole syntax here, as it is already covered in the search documentation. In essence, we are using the $filter parameter to limit results to the documents satisfying the condition « pageCount gt 200 », which means that the value of pageCount has to be greater than 200 for a document to pass the filter.
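For reference, the $filter parameter accepts richer OData expressions built from the standard operators (eq, ne, gt, lt, ge, le, combined with and/or/not). A few illustrative examples against our index: the first combines two conditions, the second filters on a boolean field, and the third excludes a category.

pageCount gt 200 and category eq 'Classic'
isAvailableOnline eq true
not (category eq 'Horror')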

Now that we have some clues about how the search API works, we are going to have the SDK do half of the job for us. Let’s go back to our C# .NET project.

The first thing we need when querying is a SearchServiceClient… and I know we already built one in part 2, but we are not going to use that one. When you are only querying, you’ll want to use a query API key instead of an admin key, for security reasons.

You can get those keys in the Keys section of the Azure Search service page, after clicking the Manage query keys link.
AzureSearchQueryKeys.png
You are free to use the default one. In my case, I added a new key called « mykey » because I don’t like using an unnamed key and obviously « mykey » is much more descriptive.

So let’s write our new client creation method:

private static SearchServiceClient CreateQuerySearchServiceClient()
{
    string searchServiceName = "sample-books";
    string queryApiKey = "YOUR QUERY API KEY HERE";

    SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(queryApiKey));
    return serviceClient;
}

Of course this is almost the same code as before and we should really refactor it, but I’m leaving that as an exercise to the reader. For the sake of simplicity, of course.

Once we have that, we are going to write the methods that query our books. Let’s just redo the tests we did in the Search Explorer, using the SDK. We will write two separate methods, again for the sake of clarity:

private static Book[] GetDisasterBooks(ISearchIndexClient client)
{
    // Query with the search text "disaster"; the generic overload
    // deserializes the results into our Book class
    DocumentSearchResult<Book> response = client.Documents.Search<Book>("disaster");

    // Get the results
    IList<SearchResult<Book>> searchResults = response.Results;
    return searchResults.Select(searchResult => searchResult.Document).ToArray();
}

private static Book[] GetBooksWithMoreThan200Pages(ISearchIndexClient client)
{
    // Filter on documents that have a value in the field 'pageCount' greater than (gt) 200
    SearchParameters parameters = new SearchParameters()
    {
        Filter = "pageCount gt 200"
    };

    // Query with the search text "*" (everything) and include our parameters
    DocumentSearchResult<Book> response = client.Documents.Search<Book>("*", parameters);

    // Get the results
    IList<SearchResult<Book>> searchResults = response.Results;
    return searchResults.Select(searchResult => searchResult.Document).ToArray();
}

What we can see here is that all the URL parameters are available through the optional SearchParameters object, except the search text itself, which is specified as a separate parameter of the Search method.
And once again, the SDK is capable of using our Book class directly, transparently deserializing the response from our Azure Search service into Book objects.

Now let’s use these methods in our program:

static void Main(string[] args)
{
    var queryServiceClient = CreateQuerySearchServiceClient();
    var queryIndexClient = queryServiceClient.Indexes.GetClient("mybookindex");

    var disasterBooks = GetDisasterBooks(queryIndexClient);
    Console.WriteLine("GetDisasterBooks results: " + string.Join(" ; ", disasterBooks.Select(b => b.Title)));
    var moreThan200PagesBooks = GetBooksWithMoreThan200Pages(queryIndexClient);
    Console.WriteLine("GetBooksWithMoreThan200Pages results: " + string.Join(" ; ", moreThan200PagesBooks.Select(b => b.Title)));

    Console.ReadKey(false);
}

The client part is similar to what we did when we were indexing documents, and the rest is just getting the results from the query methods and displaying them with Console.WriteLine.

And running this program gets us this beautiful output:

GetDisasterBooks results: Frankenstein
GetBooksWithMoreThan200Pages results: Pride and Prejudice ; Frankenstein

Going deeper

We have seen how to deploy an Azure Search service, how to create and configure its indexes, and how to use the SDK for both indexing and querying documents. As mentioned in the previous parts, there is more you can do with Azure Search that goes beyond the scope of this article.

If you want to go further, some topics we haven’t discussed include data sources and indexers, scoring profiles, and suggesters for autocomplete; the official Azure Search documentation covers all of them.

Thanks for reading and I hope this article at least made you want to read more classical literature.

EDF, Azure, Windows Phone and a bit of Micro Framework

DISCLAIMER: first of all, the author is not responsible for any action, accident, etc. occurring during the assembly, installation, use, etc. of the processes described in this article!

The purpose of this post is to present the solution I put together to read the house’s electricity consumption in real time and check it on my Windows Phone.

 

Shopping list

For this we need:

  • A digital ERDF meter. If you have an old model (the rotating-disc version), you can request a replacement for a reasonable fee.
  • The « teleinfo » option enabled on your meter. It is enabled by default.
  • An Azure account to store the data.
  • A Micro Framework board. The sample code targets Micro Framework 4.2. My current recommendation is a good compromise: the FEZ Cerbuino, available on the manufacturer’s site (https://www.ghielectronics.com/catalog/product/473 ).
  • Back when I ran this code, I used a FEZ Domino, which is no longer for sale.
  • A small electronic circuit, with components available on ebay or from Farnell/Mouser/etc. The cost of the components is trivial:
    • An SFH620 optocoupler (I used a Vishay SFH6206-2)
    • One 4.7 kΩ resistor
    • One 1.2 kΩ resistor

 

Wiring diagram

First, for the assembly, we need to build the following circuit:

electronic schematic

The right-hand side of the circuit connects to the Micro Framework board. On the left of the diagram arrive the wires connected to the Teleinfo output of the electricity meter (I1 and I2):

assembly diagram

Note:

  • The 5 V supply for the board and the small circuit. A USB charger will do.
  • The ground link (GND) between the Micro Framework board and the small circuit
  • The double link to the meter (I1 and I2)
  • The link to COM1 IN on the Micro Framework board.

 

The Teleinfo protocol

This little-known feature lets you read the real-time stream of EDF data over a serial port. The characteristics of the port exposed by the meter: 1,200 bps, 7 data bits, even parity, one stop bit.

As for the frames themselves, the protocol is rather well documented:

http://www.planete-domotique.com/notices/ERDF-NOI-CPT_O2E.pdf (the « official » documentation)

http://bernard.lefrancois.free.fr/teleinfo.htm (sample frames)

Personally, I have the off-peak/peak-hours tariff (HC/HP) at home, single-phase. Switching to three-phase or another tariff is just a matter of interpreting the right groups in the frame.
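To give an idea of what the code below has to parse, here is a shortened, illustrative HC/HP frame (the trailing checksum characters are placeholders here): each group is a label, a space, a value, a space and a checksum character, and the whole frame is wrapped between an STX byte (0x02) and an ETX byte (0x03).

ADCO 123456789012 x
OPTARIF HC.. x
HCHC 001234567 x
HCHP 007654321 x
PTEC HP.. x
PAPP 00430 x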

 

The code

The Micro Framework side of the code could not be easier:

We start with a class that encapsulates the parts we are interested in:

public class EdfReading
{
  public string NoCompteur { get; set; }
  public string PeriodeTarifaireEnCours { get; set; }
  public string PuissanceApparente { get; set; }
  public string IndexHeuresCreuses { get; set; }
  public string IndexHeuresPleines { get; set; }
}

Then we start reading:

  • declaration of the important variables:
const byte startByte = 0x002;
const byte endByte = 0x003;
static Encoding enc = Encoding.UTF8;
static EdfReading currentReading;
  • declaration of the serial port:
static SerialPort serialPortEdf = new SerialPort("COM1", 1200, Parity.Even, 7);

Additionally, the Main method needs:

serialPortEdf.StopBits = StopBits.One; // the stop bit is not settable through the constructor…
InitiateEdfPort();

This call points to:

private static void InitiateEdfPort()
{
  // open the port
  if (serialPortEdf.IsOpen)
    serialPortEdf.Close();
 
  serialPortEdf.DataReceived += new SerialDataReceivedEventHandler(EdfPacketReceived);
  serialPortEdf.Open();
}
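Note that the receive handler below relies on a reassembly buffer whose declarations did not make it into the original listing. A minimal sketch of the missing fields (the buffer size is my assumption; a complete teleinfo frame is well under 1 KB):

const int buffersize = 1024; // assumption: comfortably larger than one frame
static byte[] cursorByteArray = new byte[buffersize]; // reassembly buffer for incoming bytes
static int currentPosition = 0; // current write position in the buffer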

Packets are interpreted in EdfPacketReceived, with a small caveat:

    static void EdfPacketReceived(object sender, SerialDataReceivedEventArgs e)
    {
        var incoming = new byte[serialPortEdf.BytesToRead];
        serialPortEdf.Read(incoming, 0, incoming.Length);

        if (incoming.Length < 5)
            return; // noise on the line…

        if (currentPosition + incoming.Length > cursorByteArray.Length)
        {
            // problem: we reached the end of the buffer without seeing an end-of-frame marker
            currentPosition = 0;
            cursorByteArray = new byte[buffersize]; // start over
        }

        // append the new bytes to the buffer
        System.Array.Copy(incoming, 0, cursorByteArray, currentPosition, incoming.Length);

        currentPosition += incoming.Length;
        // locate the start (STX) and end (ETX) markers of a frame
        int startIndex = System.Array.IndexOf(cursorByteArray, startByte, 0);
        int endIndex = System.Array.IndexOf(cursorByteArray, endByte, startIndex + 1);

        // uncomment this part if you want to dump the frame while debugging
        //string s = new String(Encoding.UTF8.GetChars(cursorByteArray));
        //Debug.Print(s);
        if (endIndex < 1 || startIndex < 0 || startIndex > endIndex)
        {
            return; // no complete frame yet
        }

        // if we get here, it means:
        // - we have a valid EDF frame (start byte, end byte)
        // - we can read it
        // keep only the part we are interested in
        byte[] validPacket = new byte[endIndex - startIndex + 1];
        System.Array.Copy(cursorByteArray, startIndex, validPacket, 0, validPacket.Length);
        TranslateEdfProtocolIntoCurrentReading(validPacket);

        currentPosition = 0;
        cursorByteArray = new byte[buffersize];
    }

    static void TranslateEdfProtocolIntoCurrentReading(byte[] packets)
    {
        if (packets == null || packets.Length < 1)
            return;
        string adco = FindPacketValue(packets, "ADCO"); // meter number
        string hchc = FindPacketValue(packets, "HCHC"); // off-peak hours index
        string hphp = FindPacketValue(packets, "HCHP"); // peak hours index (the label really is HCHP)
        string ptec = FindPacketValue(packets, "PTEC"); // current tariff period
        string papp = FindPacketValue(packets, "PAPP"); // apparent power

        if (hchc != null && hchc.Length > 1)
        {
            // the reading lives here
            currentReading = new EdfReading()
            {
                IndexHeuresCreuses = hchc,
                IndexHeuresPleines = hphp,
                NoCompteur = adco,
                PeriodeTarifaireEnCours = ptec,
                PuissanceApparente = papp
            };
        }

        if (currentReading != null)
            Debug.Print("*** HP:" + currentReading.IndexHeuresPleines + " & HC:" + currentReading.IndexHeuresCreuses);
    }

    private static string FindPacketValue(byte[] packets, string packetName)
    {
        int position = FindIndexOf(packets, enc.GetBytes(packetName));
        if (position == -1) // not found...
            return string.Empty;

        // in a teleinfo group, the value sits between two space separators
        int firstSpace = 0;
        int secondSpace = 0;
        for (int i = position; i < packets.Length; i++)
        {
            var b = packets[i];
            if (b == 0x20)
            {
                if (firstSpace == 0)
                    firstSpace = i;
                else if (secondSpace == 0)
                {
                    secondSpace = i; // end of the group reached..
                    break;
                }
            }
        }

        if (secondSpace == 0) // malformed group: no closing separator
            return string.Empty;

        // extract the value between the two separators
        byte[] valByte = new byte[secondSpace - firstSpace - 1];
        System.Array.Copy(packets, firstSpace + 1, valByte, 0, valByte.Length);
        return new String(System.Text.Encoding.UTF8.GetChars(valByte));
    }

    private static int FindIndexOf(byte[] arrayToSearchThrough, byte[] patternToFind)
    {
        if (patternToFind.Length > arrayToSearchThrough.Length)
            return -1;
        // note the <=: the pattern may end exactly at the end of the array
        for (int i = 0; i <= arrayToSearchThrough.Length - patternToFind.Length; i++)
        {
            bool found = true;
            for (int j = 0; j < patternToFind.Length; j++)
            {
                if (arrayToSearchThrough[i + j] != patternToFind[j])
                {
                    found = false;
                    break;
                }
            }
            if (found)
            {
                return i;
            }
        }
        return -1;
    }

As you can see, I had to write a method to find a byte array inside another byte array. Don’t forget we are on the Micro Framework, without the usual Rolls-Royce of a class library!

So, if everything is fine, you have the values in currentReading, and you can then send them to Azure, display them, and so on.

At my place, the Micro Framework board that reads the meter is not the same as the one that talks to Azure. The two are linked over an RS485 network.

 

The Azure side

In Azure, I created a simple MVC site that exposes, through a plain GET, a way to store the readings in an Azure SQL database.

Security is handled by sending a non-repeating key (hint: take the time, take a key, and compose your non-repeating token from them).
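As a sketch of what such a token could look like (my interpretation of the hint, not necessarily the author’s actual scheme): hash the current time, rounded to the minute, together with a shared secret, and have the MVC site recompute and compare the same value server-side.

// Server-side sketch (full .NET): a token valid for roughly one minute.
using System;
using System.Security.Cryptography;
using System.Text;

static string ComputeToken(string sharedSecret, DateTime utcNow)
{
    // Round to the minute so that sender and receiver agree on the same window.
    string payload = sharedSecret + utcNow.ToString("yyyyMMddHHmm");
    using (var sha = SHA256.Create())
    {
        byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(payload));
        return BitConverter.ToString(hash).Replace("-", "");
    }
}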

The part that sends the data to Azure runs on a FEZ Panda 2 + a FEZ Connect. But if you have the Cerbuino Net, you can read and send the data directly from the same board. The URL contains the values (e.g. http://monsiteweb/Sauvegarde?valeurHC=123&valeurHP=321&tokenSecret=ABCD )

    static bool requestInProgress; // guard so that only one request runs at a time (missing from the original listing)

    private static void CallWSWithUrl(string url)
    {
        // wait (up to ~3 seconds) for any request already in progress
        for (int i = 0; i < 30; i++)
        {
            if (requestInProgress)
                Thread.Sleep(100);
            else
                break;
        }
        requestInProgress = true;

        using (HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url))
        {
            request.ContentType = "application/text";
            request.Method = "GET";
            request.Timeout = 4000;
            request.KeepAlive = false;

            try
            {
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                {
                    response.GetResponseStream().ReadByte();
                    Thread.Sleep(300);
                    response.Close();
                }
            }
            catch (Exception ex)
            {
                Debug.Print("WS ERROR: " + ex.Message);
            }
        }

        requestInProgress = false;
    }
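Putting it together, sending the latest reading could look like this (my sketch; URL and parameter names as in the example above):

// Sketch: build the GET URL from the current reading and send it.
if (currentReading != null)
{
    string url = "http://monsiteweb/Sauvegarde"
               + "?valeurHC=" + currentReading.IndexHeuresCreuses
               + "&valeurHP=" + currentReading.IndexHeuresPleines
               + "&tokenSecret=" + token; // the non-repeating token discussed above
    CallWSWithUrl(url);
}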

The result?

the table of values in Azure

You can see the values being saved every minute, with enough precision to let us compute the consumption in watts.

One point of attention: given the volume of data that accumulates, you will want to index the table and optimize reads (see the index sketch after the queries below). An example that retrieves the hourly HC/HP consumption, in an efficient manner, for a given period:

SELECT
  MAX(ReadingValueHP) as hp,
  MAX(ReadingValueHC) as hc,
  CONVERT(DateTime, CONVERT(NVARCHAR, CONVERT(Date, ReadingDate))+' '+CONVERT(NVARCHAR, DATEPART(hour, ReadingDate)) + ':00') as ReadingDate
FROM EdfReading
WHERE ReadingDate >= @start
AND ReadingDate <= @end
GROUP BY CONVERT(DateTime, CONVERT(NVARCHAR, CONVERT(Date, ReadingDate))+' '+CONVERT(NVARCHAR, DATEPART(hour, ReadingDate)) + ':00')

And the same thing for the daily values (we will compute the differences later, on the phone):

SELECT MAX(ReadingValueHP) as hp, MAX(ReadingValueHC) as hc, CONVERT(Date, Readingdate) as ReadingDate
FROM EdfReading
WHERE ReadingDate >= @start
AND ReadingDate <= @end
GROUP BY CONVERT(Date, Readingdate)
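As for the indexing itself, a covering index on the reading date keeps both aggregations cheap (a sketch; table and column names taken from the queries above):

CREATE INDEX IX_EdfReading_ReadingDate
ON EdfReading (ReadingDate)
INCLUDE (ReadingValueHP, ReadingValueHC);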

 

The Windows Phone side

Windows Phone screenshots

On the phone, the application simply fetches the values from the site, and we compute the difference between successive values to get the actual consumption.

Totals are shown for the displayed period.

The two parts worth noting:

  • computing the watts:
private static void CalculateConsumption(EdfReading[] results, out double totalKWHP, out double totalKWHC, out double indexHP, out double indexHC, out List<EdfReadingExtended> items)
{
    var minValHP = results.Min(x => x.ReadingValueHP);
    var minValHC = results.Min(x => x.ReadingValueHC);

    indexHP = results.Max(x => x.ReadingValueHP);
    indexHC = results.Max(x => x.ReadingValueHC);

    totalKWHP = indexHP - minValHP;
    totalKWHC = indexHC - minValHC;

    items = new List<EdfReadingExtended>();
    var orderedRdgs = results.OrderBy(x => x.ReadingDate).ToArray();

    var cursorHP = minValHP;
    var cursorHC = minValHC;
    var cursorDate = results.Min(x => x.ReadingDate);

    foreach (EdfReading rdg in orderedRdgs)
    {
        // length of the interval since the previous reading
        double hours = (rdg.ReadingDate - cursorDate).TotalHours;
        if (hours <= 0)
            continue; // skip the very first sample: there is no interval to compute yet

        // consumption over the interval (the meter indexes are in watt-hours)
        var hcConsumption = (rdg.ReadingValueHC - cursorHC);
        var hpConsumption = (rdg.ReadingValueHP - cursorHP);

        items.Add(new EdfReadingExtended()
        {
            ReadingDate = rdg.ReadingDate,
            ReadingValueHC = hcConsumption / 1000.0, // to kWh
            ReadingValueHP = hpConsumption / 1000.0, // to kWh
            KW = (int)(hcConsumption / hours + hpConsumption / hours) // average power over the interval
        });

        cursorHC = rdg.ReadingValueHC;
        cursorHP = rdg.ReadingValueHP;
        cursorDate = rdg.ReadingDate;
    }
}
  • and using the DataVisualization Toolkit charts. I slightly modified the sources so that I could:
    • swipe left/right to pan the time axis
    • pinch to change the time scale (minute/hour/week/month/period)

Since my initial struggle to port it to WP 8.0, it is now available as a NuGet package, although without those two features.

 

Conclusion and further reading

I hope I have sparked your curiosity about the subject.

The application has been running at my place for over a year without any major issue. The Micro Framework board sits outside in a waterproof enclosure, and it withstands outdoor temperatures ranging from -15 degrees in winter to +35 in summer without trouble. A nice surprise for a board that was not designed for industrial environments.

The code is sometimes rudimentary, and I apologize in advance if you spot some geek shortcuts.

Next week, I will try to walk you through the same process, but with my water meter (which is not pulse-based!).

In the meantime, if you have comments, feel free to share them. The source code will not be published or released in full, but I believe I have given you everything you need to get off to a good start.

My starting points:

http://blog.cquad.eu/2012/02/02/recuperer-la-teleinformation-avec-un-arduino/

http://www.planete-domotique.com/blog/2010/03/30/la-teleinformation-edf/

 

Alex.