Azure IoT Device Provisioning Service via REST–part 2

This is part 2 of a two-part post on provisioning IoT devices to Azure IoT Hub via the Azure IoT Device Provisioning Service (DPS) REST API.  Part 1 described the process for doing it with x.509 certificate attestation from devices, and this part will describe doing it with Symmetric Key attestation.

I won’t repeat all the introduction, caveats, etc. that accompanied part 1, but you may want to take a quick peek at them so you know what I will, and will not, be covering here.

If you don’t fully understand the Symmetric Key attestation options for DPS, I recommend you go read the docs here first and then come back…

Ok, welcome back!

So let’s just jump right in.  Similarly to part 1, there will be a couple of sections of ‘setup’, depending on whether you choose to go with Individual Enrollments or Group Enrollments in DPS.  Once that is done, and the accompanying attestation tokens are generated, the actual API calls are identical between the two.

Therefore, for the first two sections, you can choose the one that matches your desired enrollment type and read it for the required setup (or read them both if you are the curious type), then you can just jump to the bottom section for the actual API calls.

But, before we start with setup, there’s a little prep work to do.

Prep work (aka – how do I generate the SAS tokens?)

Symmetric Key attestation in DPS works, just like pretty much all the rest of Azure, on the concept of SAS tokens.  In Azure IoT, these tokens are typically derived from a cryptographic key tied to a device or service-access level.  As mentioned in the overview link above (in case you didn’t read it), DPS has two options for these keys.  One option is an individual key per device, as specified or auto-generated in the individual enrollments.  The other option is to have a group enrollment key, from which you derive a device-specific key that you leverage for your SAS token generation.

Generating SAS tokens

So first, let’s talk about and prep for the generation of our SAS tokens, independent of what kind of key we use.  The use of, and generation of, SAS tokens is generally the same for both DPS and IoT Hub, so you can see the process and sample code in various languages here.  For my testing, I pretty much shamelessly re-used (ok, stole) the python example from that page, which I slightly modified (to actually call the generate_sas_token method and run on Python 3).

from base64 import b64encode, b64decode
from hashlib import sha256
from time import time
from urllib.parse import quote_plus, urlencode
from hmac import HMAC

def generate_sas_token(uri, key, policy_name, expiry=3600):
    ttl = time() + expiry
    sign_key = "%s\n%d" % (quote_plus(uri), int(ttl))
    print(sign_key)
    signature = b64encode(HMAC(b64decode(key), sign_key.encode("utf-8"), sha256).digest())

    rawtoken = {
        "sr": uri,
        "sig": signature.decode("utf-8"),
        "se": str(int(ttl))
    }

    if policy_name is not None:
        rawtoken["skn"] = policy_name

    return "SharedAccessSignature " + urlencode(rawtoken)

uri = "[resource_uri]"
key = "[device_key]"
expiry = [expiry_in_seconds]
policy = "[policy]"

print(generate_sas_token(uri, key, policy, expiry))

The parameters at the bottom of the script, which I hardcoded because I am lazy busy, are as follows:

  • [resource_uri] – this is the URI of the resource you are trying to reach with this token.  For DPS, it is of the form ‘[dps_scope_id]/registrations/[dps_registration_id]’, where [dps_scope_id] is the scope id associated with your DPS instance, found on the overview blade of your DPS instance in the Azure portal, and [dps_registration_id] is the registration id you want to use for your device.  It will be whatever you specified in an individual enrollment in DPS, or can be anything you want in a group enrollment as long as it is unique.  Frequently used ideas here are combinations of serial numbers, MAC addresses, GUIDs, etc.
  • [device_key] – the device key associated with your device.  This is either the one specified or auto-generated for you in an individual enrollment, or a derived key for a group enrollment, as explained a little further below.
  • [expiry_in_seconds] – the validity period of this SAS token in sec…  ok, not going to insult your intelligence here.
  • [policy] – the policy with which the key above is associated.  For DPS device registration, this is hard-coded to ‘registration’.

So an example set of inputs for a device called ‘dps-sym-key-test01’ might look like this (with the scope id and key modified to protect my DPS instance from the Russians!)

uri = '0ne00057505/registrations/dps-sym-key-test01'
key = 'gPD2SOUYSOMXygVZA+pupNvWckqaS3Qnu+BUBbw7TbIZU7y2UZ5ksp4uMJfdV+nTIBayN+fZIZco4tS7oeVR/A=='
expiry = 3600000
policy = 'registration'

Save the script above to a *.py file.   (obviously, you’ll need Python installed if you don’t have it to run the script)

If you are only doing individual enrollments, you can skip the next section, unless you are just curious.

Generating derived keys

For group enrollments, you don’t have individual enrollment records for devices in the DPS enrollments, so therefore you don’t have individual device keys.  To make this work, we take the enrollment-group level key and, from it, cryptographically derive a device specific key.  This is done by essentially hashing the registration id for the device with the enrollment-group level key.  The DPS team has provided some scripts/commands for doing this for both bash and Powershell here.  I’ll repeat the bash command below just to demonstrate.

KEY=[group enrollment key]
REG_ID=[registration id]

keybytes=$(echo $KEY | base64 --decode | xxd -p -u -c 1000)
echo -n $REG_ID | openssl sha256 -mac HMAC -macopt hexkey:$keybytes -binary | base64

where [group enrollment key] is the key from your group enrollment in DPS.  This will generate a cryptographic key that uniquely represents the device specified by your registration id.  We can then use that key as the ‘[device_key]’ in the python script above to generate a SAS token specific to that device within the group enrollment.
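If you’d rather stay in Python, the same derivation can be sketched there too.  This is my own helper (the name derive_device_key and the sample key are mine, and the sample key is obviously not a real enrollment key); it does exactly what the bash command does: HMAC-SHA256 the registration id with the base64-decoded group key.

```python
import base64
import hmac
from hashlib import sha256

def derive_device_key(group_key_b64, registration_id):
    """Derive a device-specific key: HMAC-SHA256 of the registration id,
    keyed with the (base64-decoded) group enrollment key."""
    group_key = base64.b64decode(group_key_b64)
    mac = hmac.new(group_key, registration_id.encode("utf-8"), sha256)
    return base64.b64encode(mac.digest()).decode("utf-8")

# hypothetical group key for illustration only -- use your real enrollment key
fake_group_key = base64.b64encode(b"not-a-real-group-key").decode("utf-8")
print(derive_device_key(fake_group_key, "dps-test-sym-device01"))
```

The output is the base64-encoded device key you then feed into the SAS token script as [device_key].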

Ok – enough prep, let’s get to it.  The next section shows the DPS setup for an Individual Enrollment.  Skip to the section beneath it for Group Enrollment.

DPS Individual Enrollment – setup

The setup for an individual device enrollment for symmetric key in DPS is pretty straightforward.  Navigate to the “manage enrollments” blade from the left nav underneath your DPS instance and click “Add Individual Enrollment”.  On the ‘Add Enrollment’ blade, for Mechanism, choose “Symmetric Key”, then below, enter in your desired registration id (and an optional device id for IoT Hub if you want it to be different).  It should look similar to the below (click on the pic for a bigger version).

dps-symkey-individual-setup

Click Save.  Once saved, drill back into the device, copy the Primary Key, and remember your registration id – we’ll need both later.

That’s it for now.  You can skip to the “call DPS REST APIs” section below, or read on if you want to know how to do this with a group enrollment.

DPS Group Enrollment – setup

The setup for a group enrollment for symmetric key is only slightly more complicated than individual.  On the portal side, it’s fairly simple.  In the Azure portal, under your DPS instance, on the left nav click on ‘manage enrollments’ and then “Add Group Enrollment”.  On the Add Enrollment page, give the enrollment a meaningful name and set Attestation Type to Symmetric Key, like the screenshot below.

dps-symkey-group-setup

Once you do that, click Save, and then drill back down into the enrollment and copy the “Primary Key” that got generated.  This is the group key referenced above, from which we will derive the individual device keys.

In fact, let’s do that before the next section.  Recall the bash command given above for deriving the device key.  Below is an example using the group key from my ‘dps-test-sym-group1’ group enrollment above, and I’ll just use ‘dps-test-sym-device01’ as my registration id.

dps-symkey-derive-device-key

You can see from the picture that the script generated a device-specific key (by hashing the registration id with the group key).

Just like with the individual enrollment above, we now have the pieces we need to generate our SAS key and call the DPS registration REST APIs

call DPS REST APIs

Now that we have everything set up, enrolled, and our device-specific keys ready, we can call the APIs.  First we need to generate our SAS tokens to authenticate.  Plug the values from your DPS instance into the python script you saved earlier.  For the [device_key] parameter, be sure to plug in either the individual device key you copied earlier, or, for the group enrollment, the derived key you just created – not the group enrollment key.

Below is an example of a run with my keys, etc

dps-symkey-generate-sas

The very last line is the one we need.  In my case, it was (with a couple of characters changed to protect my DPS):

SharedAccessSignature sr=0ne00052505%2Fregistrations%2Fdps-test-sym-device01&skn=registration&sig=FKOnylJndmpPYgJ5CXkw1pw3kiywt%2FcJIi9eu4xJAEY%3D&se=1568718116

So we now have the pieces we need for the API call. 

The CURL command for the registration API looks like this (with the variable parts in brackets).

curl -L -i -X PUT -H 'Content-Type: application/json' -H 'Content-Encoding: utf-8' -H 'Authorization: [sas_token]' -d '{"registrationId": "[registration_id]"}' https://global.azure-devices-provisioning.net/[dps_scope_id]/registrations/[registration_id]/register?api-version=2019-03-31

where

  • [sas_token] is the one we just generated
  • [dps_scope_id] is the one we grabbed earlier from the Azure portal
  • [registration_id] is the one we chose for our device
  • -L tells CURL to follow redirects
  • -i tells CURL to output response headers so we can see them
  • -X PUT makes this a PUT command
  • -H 'Content-Type: application/json' and -H 'Content-Encoding: utf-8' are required and tell DPS we are sending utf-8 encoded JSON in the body (change the encoding to whatever matches what you are sending)

dps-symkey-registration-call_results

Above is an example of my call and the results returned.
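If curl isn’t an option on your device, the same PUT is easy to build in code.  Here’s a minimal Python sketch using only the standard library (build_registration_request is my own helper name, not part of any SDK) that constructs the identical request:

```python
import json
import urllib.request

DPS_ENDPOINT = "https://global.azure-devices-provisioning.net"
API_VERSION = "2019-03-31"

def build_registration_request(scope_id, registration_id, sas_token):
    """Build the PUT request equivalent to the curl registration command."""
    url = (f"{DPS_ENDPOINT}/{scope_id}/registrations/"
           f"{registration_id}/register?api-version={API_VERSION}")
    body = json.dumps({"registrationId": registration_id}).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Content-Encoding": "utf-8",
        "Authorization": sas_token,
    }
    return urllib.request.Request(url, data=body, headers=headers, method="PUT")

req = build_registration_request("[dps_scope_id]", "[registration_id]",
                                 "[sas_token]")
print(req.full_url)
# To actually register: urllib.request.urlopen(req) and read the JSON response
```

Swap in your real scope id, registration id, and SAS token before sending it.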

Note two things.. One is the operationId.  DPS enrollment in an IoT Hub is a (potentially) long running operation, and thus is done asynchronously.  So to see the status of your IoT Hub provisioning, we’ll need to poll for status.  I’ll get to that in a minute.  The second thing is the “status” field, which begins in the ‘assigning’ status.

The next API call we need to make is to get the status.  You’ll basically do this in a loop until you either get a success or failure status.  The valid status values for DPS are:

    • assigned – the return value from the status call will indicate which IoT Hub the device was assigned to
    • assigning – the assignment is still in progress
    • disabled – the device enrollment record is disabled in DPS, so the device can’t be assigned
    • failed – assignment failed.  There will be an errorCode and errorMessage returned in a registrationState record in the returned JSON to indicate what failed
    • unassigned – ummm..  no clue.

To make the aforementioned status call, you need to copy the operationId from the return status above.  The CURL command for that call is (with the variable parts in brackets):

curl -L -i -X GET -H 'Content-Type: application/json' -H 'Content-Encoding: utf-8' -H 'Authorization: [sas_token]' https://global.azure-devices-provisioning.net/[dps_scope_id]/registrations/[registration_id]/operations/[operation_id]?api-version=2019-03-31

Use the same sas_token and registration_id as before, and the operation_id you just copied.
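If you’re scripting this, the status loop might look something like the following Python sketch (poll_registration_status and its defaults are my own invention, and I haven’t hardened this against every error case):

```python
import json
import time
import urllib.request

DPS_ENDPOINT = "https://global.azure-devices-provisioning.net"
API_VERSION = "2019-03-31"

def operation_status_url(scope_id, registration_id, operation_id):
    return (f"{DPS_ENDPOINT}/{scope_id}/registrations/{registration_id}"
            f"/operations/{operation_id}?api-version={API_VERSION}")

def poll_registration_status(scope_id, registration_id, operation_id,
                             sas_token, interval=2, max_attempts=30):
    """Keep calling the operation status API until DPS reports something
    other than 'assigning' (i.e. assigned, failed, disabled, unassigned)."""
    url = operation_status_url(scope_id, registration_id, operation_id)
    headers = {"Content-Type": "application/json", "Authorization": sas_token}
    for _ in range(max_attempts):
        req = urllib.request.Request(url, headers=headers)  # defaults to GET
        with urllib.request.urlopen(req) as resp:
            result = json.loads(resp.read().decode("utf-8"))
        if result.get("status") != "assigning":
            return result
        time.sleep(interval)
    raise TimeoutError("registration still 'assigning' after polling")
```

The returned JSON carries the registrationState record, including the assigned hub on success or errorCode/errorMessage on failure.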

A successful call looks like this:

dps-symkey-operation-status

Unfortunately, I’m not a fast enough copy/paste-r to catch it in a status other than ‘assigned’  (DPS is just too fast for me).  But you can try this all programmatically or in a script to do it.

Voilà!

That’s it.  Done.  You can check the status of your registration in the Azure portal and see that the device was assigned.

dps-symkey-done

Enjoy, and as always, if you have questions or even suggested topics (remember, it has to be complex, technical, and not well covered in the docs), hit me up in the comments.

Azure IoT Device Provisioning Service via REST–part 1

This will be a two-part article on how to provision IoT devices using Microsoft’s Azure IoT Device Provisioning Service, or DPS, via its REST API.  DPS is part of our core IoT platform.  It gives you a global-scale solution for near-zero-touch provisioning and configuration of your IoT devices.  We make the device-side provisioning pretty easy with nice integration with our open-source device SDKs.

With that said, one of our core design tenets for our IoT platform is that, while they do make life easier for you in most instances, you do not have to use our SDKs to access any of our services, whether that’s core IoT Hub itself, IoT Edge, or, in this case, DPS.

I recently had a customer ask about invoking DPS for device registration from their field devices over the REST API.  For various reasons I won’t dive deep into, they didn’t want to use our SDKs and preferred the REST route.  The REST APIs for DPS are documented, but..  well..  we’ll just say “not very well” and leave it at that…

ahem……..

anyway..

So I set out to figure out how to invoke device registration via the REST APIs for my customer and thought I would document the process here.

Ok, enough salad, on to the meat of the post.

First of all, this post assumes you are already familiar with DPS concepts and have maybe even played with it a little.  If not, review the docs here and come back.  I’ll wait…

Secondly, in case you didn’t notice the “part 1” in the title, because of the length, this will be a two-part post.  The first half will show you how to invoke DPS registration over REST for the x.509 certificate attestation use cases, both individual and group enrollments, and part 2 will show for the Symmetric Key attestation use cases.

NOTE – but what about TPM chip-based attestation?, you might ask..  Well, I’m glad you asked!  Using a TPM chip for storing secrets when working with any IoT device is a best practice that we highly recommend!  With that said, I’m not covering it for three reasons: 1) the process is relatively similar to the other scenarios, 2) despite it being our best practice, I don’t personally have any customers using it today (shame on us all!), and 3) I don’t have an IoT device right now that has a TPM to test with.

One final note – I’m only covering the registration part from the device side.  There are also REST APIs for enrolling devices and many other ‘back end’ processes that I’m not covering. This is partially because the device side is the (slightly) harder side, but also because it’s the side that is more likely to need to be invoked over REST for constrained devices, etc.

Prep work (aka – how do I generate test certs?)

The first thing we need for x.509 certificate attestation is, you guessed it, x.509 certificates.  There are several tools available out there to do this, some easy, and some requiring an advanced degree in “x.509 Certificate Studies”.  Since I don’t have a degree in x.509 Certificate Studies, I chose the easy route.  To be clear, I just picked this method; any method that can generate valid x.509 CA and individual certs will work.

The tool I chose is provided by the nice people who write and maintain our Azure IoT SDK for Node.js.  This tool is specifically written to work/test with DPS, so it was ideal.  Like about 50 other tools, it’s a friendly wrapper around openssl.  Instructions for installing the tool are provided in the readme and I won’t repeat them here.  Note that you need Node.js version 9.x or greater installed for it to work.

For my ‘dev’ environment, I used Ubuntu running on Windows Subsystem for Linux (WSL)  (man, that’s weird for a 21-year MSFT veteran to say), but any environment that can run node and curl will work.

Also, final note about the cert gen scripts..  these are scripts for creating test certificates…  please..  pretty please..  don’t use them for production.  If you do, Santa will be very unhappy with you!

In the azure portal, I’ll assume you’ve already set up a DPS instance.  At this point, on the DPS overview blade, note your scope id for your instance (upper right hand side of the DPS Overview blade), we’ll need it here in a sec.   From here on out, I’ll refer to that as the [dps_scope_id]

The next two sections tell you how to create and set up the certs we’ll need for either Individual or Group Enrollments.  Once the certs are properly set up, the process for making the API calls is the same, so pick whichever of the next two sections applies to you and go (or read them both if you are really thirsty for knowledge!)

x.509 attestation with Individual Enrollments – setup

Let’s start with the easy case, which is x.509 attestation with an Individual Enrollment.  The first thing I want to do is to generate some certs.

For my testing, I used the create_test_cert tool to create a root certificate and a device certificate for my individual enrollment, using the following two commands.

node create_test_cert.js root "dps-test-root"

node create_test_cert.js device "dps-test-device-01" "dps-test-root"

The tool creates the certificates with the right CN/subject, but when saving the files, it drops the dashes and camel-cases the name.  I have no idea why.  Just roll with it.  Or feel free to create a root cert and device cert using some other tool.  The key is to make the subject/CN of the device cert be the exact same thing you plan to make your registration id in DPS.  They HAVE to match, or the whole thing doesn’t work.  For the rest of the article, I will refer to the registration id as [registration_id]

so my cert files look like this….  (click on the pic to make it larger)

dps-certs-individual

Back in the Azure portal, under “Manage enrollments” on the left nav, click on “Add Individual Enrollment”.  On the “Add Enrollment” blade, leave the default of “X.509” for the Mechanism.  For the “Primary Certificate .pem or .cer file”, click on the folder and upload/choose the device certificate you generated earlier (in my case, it’s dpsTestDevice01_cert.pem).

The top half of my “Add Enrollment” blade looks like this (everything below it is default)

dps-cert-individual-dps-setup

I chose to leave the IoT Hub Device ID field blank.  If you want your device ID in the hub to be something different than your registration id (which is the CN/subject name in your cert), you can enter a different name here.  Click Save to save your enrollment and we are ready to go.

x.509 attestation with Group Enrollments – setup

Ok, if you’re reading this section, you are either a curious soul, or decided to go with the group enrollment option for x.509 attestation with DPS.

Under the umbrella of group enrollments, there are actually two different options for the ‘group’ certificate, depending on whether you want to leverage a root CA certificate or an intermediate certificate.  For either option, for test purposes, we’ll go ahead and generate our test certificates.  The difference is primarily which one we give to DPS.  In either case, once the certificate is in place, we can authenticate and register any device that presents a client certificate signed by the CA certificate we gave to DPS.  For details of the philosophy behind x.509 authentication/attestation for IoT devices, see this article.

For this test, I generated three certificates, a root CA cert, an intermediate CA cert (signed by the root), and an end IoT device certificate, signed by the intermediate CA certificate.  I used the following commands to do so.

node create_test_cert.js root "dps-test-root"

this generated my root CA certificate

node create_test_cert.js intermediate "dps-test-intermediate" "dps-test-root"

this generated an intermediate CA certificate signed by the root CA cert I just generated

node ../create_test_cert.js device "dps-test-device-01" "dps-test-intermediate"

this generated a device certificate for a device called “dps-test-device-01”, signed by the intermediate certificate above.   So now I have a device certificate that ‘chains up’ through the intermediate to the root.

At this point, you have the option of either setting up DPS with the root CA certificate, or the Intermediate Certificate for attesting the identity of your end devices.  The setup process for each option is slightly different and described below.

Root CA certificate attestation

For Root CA certificate attestation, you need to upload the root CA certificate that you generated, and then also provide proof of possession of the private key associated with that root CA certificate.  That is to keep someone from impersonating the holder of the public side of the root CA cert by making sure they have the corresponding private key as well.

The first step in root CA registration is to navigate to your DPS instance in the portal and click “Certificates” in the left-hand nav menu.  Then click “Add”.  In the Add Certificate blade, give your certificate a name that means something to you and click on the folder to upload our cert.  Once uploaded, click Save.

At that point, you should see your certificate listed in the Certificates blade with a status of “unverified”.  This is because we have not yet verified that we have the private key for this cert.

dps-cert-group-root-unverified

The process for verifying that we have the private key for this cert involves having DPS generate a “verification code” (a cryptographic alphanumeric string).  We then take that string and, using our root CA certificate, create a certificate with the verification code as the CN/Subject name, signed with the private key of the root CA cert.  This proves to DPS that we possess the private key.  To do this, click on your newly uploaded cert.  On the Certificate Details page, click on the “Generate Verification Code” button and it will generate a verification code as shown below.

dps-cert-group-root-verification-code

Copy that code.  Back on the box that you are using to generate the certs, run this command to create the verification cert.

create_test_cert.js verification [--ca root cert pem file name] [--key root cert key pem file name] [--nonce nonce]

where --ca is the path to the root CA cert you uploaded, --key is the path to its private key, and --nonce is the verification code you copied from the portal.  For example, in my case:

node ../create_test_cert.js verification --ca dpsTestRoot_cert.pem --key dpsTestRoot_key.pem --nonce 07F9332E108C7D24283FB6B8A05284E6B873D43686940ACE

This will generate a cert called “verification_cert.pem”.   Back on the azure portal on the Certificates Detail page, click on the folder next to the box “Verification Certificate *.pem or *.cer file” and upload this verification cert and click the “Verify” button.

You will see that the status of your cert back on the Certificates blade now reads “Verified” with a green check box.   (you may have to refresh the page with the Refresh button to see the status change).

Now click on “Manage enrollments” on the left-nav and click on “Add enrollment group”.   Give it a meaningful name for you, make sure that “Certificate Type” is “CA Certificate” and choose the certificate you just verified from the drop-down box, like below.

dps-cert-group-root-setup

Click Save

Now you are ready to test your device cert.  You can skip the next section and jump to the “The DPS registration REST API calls” section.

Intermediate CA certificate attestation

If you decided to go the Intermediate CA certificate route, which I think will be the most common, the process is luckily a little easier than with a root CA certificate.  In your DPS instance in the portal, under “Manage enrollments”, click on “Add Enrollment Group”.  Make sure that Attestation Type is set to “Certificate” and give the group a meaningful name.  Under “Certificate Type”, choose “Intermediate Certificate”, click on the folder next to “Primary Certificate .pem or .cer file”, and upload the Intermediate Certificate we generated earlier.  For me, it looks like this..

dps-cert-group-intermediate-setup

click Save and you are ready to go to the next section to try to register a device cert signed by your Intermediate Certificate.

The DPS registration REST API calls

Ok, so the moment you’ve all been waiting (very patiently) for…

As mentioned previously, now that the certs have all been created, uploaded, and set up properly in DPS, the process and the API calls from here on out are the same regardless of how you set up your enrollment in DPS.

For my testing, I didn’t want to get bogged down in how to make HTTP calls from various languages/platforms, so I chose the most universal and simple tool I could find, curl, which is available on both windows and linux.

The CURL command for invoking DPS device registration for x.509 individual enrollments with all the important and variable parameters in []’s..

curl -L -i -X PUT --cert ./[device cert].pem --key ./[device-cert-private-key].pem -H 'Content-Type: application/json' -H 'Content-Encoding: utf-8' -d '{"registrationId": "[registration_id]"}' https://global.azure-devices-provisioning.net/[dps_scope_id]/registrations/[registration_id]/register?api-version=2019-03-31

That looks a little complicated, so let’s break each part down

  • -L : tells curl to follow HTTP redirects
  • -i : tells curl to include protocol headers in output.  Not strictly necessary, but I like to see them
  • -X PUT : tells curl this is an HTTP PUT command.  Required for this API call since we are sending a message in the body
  • --cert : the device certificate that we, as a TLS client, want to use for client authentication.  This parameter, and the next one (--key), are the main things that make this an x.509-based attestation.  This has to be the same cert you registered in DPS
  • --key : the private key associated with the device certificate provided above.  This is necessary for the TLS handshake and to prove we own the cert
  • -H 'Content-Type: application/json' : required to tell DPS we are posting JSON content and must be 'application/json'
  • -H 'Content-Encoding: utf-8' : required to tell DPS the encoding we are using for our message body.  Set to the proper value for your OS/client (I’ve never used anything other than utf-8 here)
  • -d '{"registrationId": "[registration_id]"}' : the -d parameter is the ‘data’ or body of the message we are posting.  It must be JSON, in the form '{"registrationId": "[registration_id]"}'.  Note that for CURL, I wrapped it in single quotes, which nicely makes it where I don’t have to escape the double quotes in the JSON
  • Finally, the last parameter is the URL you post to.  For ‘regular’ (i.e. not on-premises) DPS, the global DPS endpoint is global.azure-devices-provisioning.net, so that’s where we post: https://global.azure-devices-provisioning.net/[dps_scope_id]/registrations/[registration_id]/register?api-version=2019-03-31.  Note that we have to replace [dps_scope_id] with the one you captured earlier and [registration_id] with the one you registered.
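If you’d rather do this from code than curl, here’s a Python sketch of the same call using only the standard library.  The helper names (registration_url, register_device_x509) are mine; the ssl context’s load_cert_chain plays the role of curl’s --cert/--key:

```python
import json
import ssl
import urllib.request

DPS_ENDPOINT = "https://global.azure-devices-provisioning.net"
API_VERSION = "2019-03-31"

def registration_url(scope_id, registration_id):
    return (f"{DPS_ENDPOINT}/{scope_id}/registrations/"
            f"{registration_id}/register?api-version={API_VERSION}")

def register_device_x509(scope_id, registration_id, cert_file, key_file):
    """PUT the registration request, presenting the device cert for TLS
    client authentication (the equivalent of curl's --cert/--key)."""
    body = json.dumps({"registrationId": registration_id}).encode("utf-8")
    req = urllib.request.Request(
        registration_url(scope_id, registration_id),
        data=body, method="PUT",
        headers={"Content-Type": "application/json",
                 "Content-Encoding": "utf-8"})
    # Load the device cert and its private key for the TLS handshake
    ctx = ssl.create_default_context()
    ctx.load_cert_chain(certfile=cert_file, keyfile=key_file)
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Point cert_file/key_file at the device cert and key files the tool generated (e.g. dpsTestDevice01_cert.pem and dpsTestDevice01_key.pem).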

you should get a return that looks something like this…

dps-cert-individual-return-val

Note two things.. One is the operationId.  DPS enrollment in an IoT Hub is a (potentially) long running operation, and thus is done asynchronously.  So to see the status of your IoT Hub provisioning, we’ll need to poll for status.  I’ll get to that in a minute.  The second thing is the “status” field, which begins in the ‘assigning’ status.

The next API call we need to make is to get the status.  You’ll basically do this in a loop until you either get a success or failure status.  The valid status values for DPS are:

    • assigned – the return value from the status call will indicate which IoT Hub the device was assigned to
    • assigning – the assignment is still in progress
    • disabled – the device enrollment record is disabled in DPS, so the device can’t be assigned
    • failed – assignment failed.  There will be an errorCode and errorMessage returned in a registrationState record in the returned JSON to indicate what failed
    • unassigned – ummm..  no clue.

To make the aforementioned status call, you need to copy the operationId from the return status above.  The CURL command for that call is:

curl -L -i -X GET --cert ./dpsTestDevice01_cert.pem --key ./dpsTestDevice01_key.pem -H 'Content-Type: application/json' -H 'Content-Encoding: utf-8' https://global.azure-devices-provisioning.net/[dps_scope_id]/registrations/[registration_id]/operations/[operation_id]?api-version=2019-03-31

where [dps_scope_id] and [registration_id] are the same as above, and [operation_id] is the one you copied above.   The return will look something like this, keeping in mind the registrationState record will change fields based on what the returned status was.

dps-cert-individual-status-return

Unfortunately, I’m not a fast enough copy/paste-r to catch it in a status other than ‘assigned’  (DPS is just too fast for me).  But you can try this all programmatically or in a script to do it.

Ta-Da!

That’s it.  You can navigate back to DPS, drill in on your device, and see the results

dps-cert-success

Enjoy – and as always, hit me up with any questions in the comments section.

Monitor IoT Edge connectivity with Event Grid

Hi!  Long time, no see.  It’s been a few months since I posted here.  I won’t bore you with the many “I’ve been busy” excuses (but I have!).  Anyway, enough assuaging my guilt over not having shared with you guys in a while.   Let’s get to why you came here (since google.com probably sent you)

One of the frequent questions we get related to any IoT device, but especially IoT Edge is “how do I know if my IoT device/edge is up and running and talking to IoT Hub?”  I was recently researching this topic for a customer and ran across some cool stuff.   I even did a quick little sample/demo I’ll share at the bottom involving IoT Edge and Microsoft Teams. 

IoT Edge/Hub and Event Grid

As you may know, IoT Hub integrates with Azure Event Grid.  Azure Event Grid is a fully managed event routing service that uses a publish-subscribe model, and it integrates with a bunch of Azure services (a pic is on that link above), including IoT Hub.  As shown in the list below, stolen from that same link, IoT Hub publishes several event types to Event Grid.

  • Microsoft.Devices.DeviceCreated – Published when a device is registered to an IoT hub.
  • Microsoft.Devices.DeviceDeleted – Published when a device is deleted from an IoT hub.
  • Microsoft.Devices.DeviceConnected – Published when a device is connected to an IoT hub.
  • Microsoft.Devices.DeviceDisconnected – Published when a device is disconnected from an IoT hub.
  • Microsoft.Devices.DeviceTelemetry – Published when a device telemetry message is sent to an IoT hub.

Note that Device Connected and Device Disconnected events are raised when devices connect or disconnect.  That can be an explicit connection and disconnection, or a disconnect after a certain number of missed ‘heartbeats’, meaning the device or network may have committed seppuku (not to be confused with Sudoku).  You can set Event Grid subscriptions on these events and then respond however you want (Logic Apps, Azure Functions, text, email, etc.)

So, what about IoT Edge?

Ok, Steve, but google sent me here because I searched for information about monitoring IoT *Edge* connectivity.  If it pleases the court, are you going to talk about that any time soon?

Yes!  The reason this is relevant to IoT Edge is that the connection/disconnection events not only work for IoT end-devices, they also work for IoT Edge modules!  So, leveraging that, you can tell if any of your IoT Edge modules disconnect and don’t reconnect (did they crash?), or if the IoT Edge runtime itself loses connectivity (crash? network issue?), by looking for those events for the $edgeHub and $edgeAgent modules.
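To make that concrete, a subscriber (a webhook-backed Azure Function, say) could pick out the disconnect events worth alerting on with something like this Python sketch.  The field names (eventType, data.deviceId, data.moduleId, data.hubName) are based on my reading of the published IoT Hub event schema – double-check them against the Event Grid docs before relying on this:

```python
# The two modules that make up the IoT Edge runtime itself
EDGE_RUNTIME_MODULES = {"$edgeHub", "$edgeAgent"}

def edge_disconnect_alerts(events):
    """Filter an Event Grid batch down to device/module disconnects,
    flagging Edge runtime modules separately from workload modules."""
    alerts = []
    for ev in events:
        if ev.get("eventType") != "Microsoft.Devices.DeviceDisconnected":
            continue
        data = ev.get("data", {})
        device, module = data.get("deviceId"), data.get("moduleId")
        who = f"{device}/{module}" if module else device
        severity = "runtime" if module in EDGE_RUNTIME_MODULES else "module"
        alerts.append(f"[{severity}] {who} disconnected from {data.get('hubName')}")
    return alerts

sample = [{"eventType": "Microsoft.Devices.DeviceDisconnected",
           "data": {"deviceId": "rpihat2", "moduleId": "$edgeHub",
                    "hubName": "myhub"}}]
print(edge_disconnect_alerts(sample))
```

From there you can hand the alert strings to whatever sink you like – a Teams channel, SendGrid, Twilio, etc.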

Ok skippy, prove it!

Fine!  I will…   The IoT Hub/Event Grid teams have a nice little example of using the integration between IoT Hub and Event Grid to persistently track the status of your device connectivity in a CosmosDB database, via a Logic App.  It keeps a record for all of your devices and their last connect/disconnect time.   It’s a pretty nice little sample and could be the beginning of a connectivity monitoring solution.  You can build a UI, have it pull the status from CosmosDB, etc.

Check it out if you have the time or inclination.

However, I was too lazy busy to create a CosmosDB, stored procedure, UI, etc.   So I took the quicker way out.  I created a Channel in Microsoft Teams and, each time a device/module connected or disconnected, I just post a message to the channel.

I basically followed that article and all the pieces of it, except everywhere it referenced the stored procedure and CosmosDB, I instead leveraged the built-in Logic Apps connector for Microsoft Teams, which is super easy to use.
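For the Teams message itself, all my Logic App really does is turn the event into a line of text. In code terms (purely illustrative, the function name and formatting are my own), it's the equivalent of:

```python
# Sketch: turn an IoT Hub connect/disconnect event into the text we post
# to the Teams channel.  Field names follow the Event Grid envelope; the
# formatting (and function name) are just mine.

def format_status_message(event):
    device = event.get("subject", "").replace("devices/", "", 1)
    state = ("connected" if event["eventType"] == "Microsoft.Devices.DeviceConnected"
             else "disconnected")
    return "Device/module '{}' just {}.".format(device, state)

msg = format_status_message({
    "eventType": "Microsoft.Devices.DeviceDisconnected",
    "subject": "devices/rpihat2",
})
print(msg)  # Device/module 'rpihat2' just disconnected.
```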

The result is shown below.  I have a Raspberry Pi running IoT Edge sitting on my desk.  It has a SenseHat accessory on top of it and was running a module that I wrote that talks to it (uncreatively named “sensehat” – they don’t pay me enough for naming creativity).   At about 10:55 local time, I reached over and yanked the power plug out of the Raspberry Pi.  2-3 minutes later (after some missed heartbeats), voilà, I got a post to my Teams channel telling me that both my module (sensehat) and the Edge runtime ($edgeHub and $edgeAgent) had disconnected.   (rpihat2 is the device name for my IoT Edge device)

IoT_Hub_Event_Grid_Teams

(full disclosure – I made ZERO effort to make the message look pretty)

I could just as easily, via LogicApps again,  have leveraged SendGrid, Twilio, etc to send me email alerts, TXTs, or whatever for these events.  If you do use Microsoft Teams, you can set alerts on the Channel when new events are posted too.  If you use the mobile app, you even get push notifications.

I didn’t write step-by-step instructions for doing this, as I hope it’s not too hard to follow the Azure IoT team’s cool instructions above and just substitute Teams/SendGrid/Twilio, etc. for the output… but feel free to let me know if I need to do step-by-step.

Leave a comment and let me know if this, or any of my other content, is useful (or if you need/want step-by-step instructions)

Enjoy,

–Steve

raw AMQP to IoT Hub and IoT Edge

It seems like lately my life has consisted mostly of trying to figure out how to connect “brownfield or legacy systems” (that’s MSFT-speak for “doesn’t use our IoT device SDKs” :-)) to Azure IoT Hub or Azure IoT Edge or both. I’ve previously in other posts shown how to do it with raw MQTT, Mosquitto, and Node-Red.

I was recently asked by a customer for a sample of connecting a raw AMQP client to IoT Edge. So with unbridled optimism, I quickly did a web search to hunt down what surely already existed as a sample. Both google and bing quickly dashed my hope for that (as they so often do). Even StackOverflow, the best hope for all development-kind failed me! So I waded in to figure it out myself.

Setting the stage

Just for simplicity, I used the python uamqp library for this. This is the AMQP library (or at least the C version of it) that we use ourselves underlying IoT Hub and IoT Edge (and service bus, event hub, etc), so it seemed like a natural fit. And it also came with a sample that I could start from and adapt. The code further below and information in this post is based on that sample. The primary two issues with the sample out of the box were that it used a ‘hub-level’ key vs. a device-scoped key for authentication to IoT Hub (don’t do that!) and, for some reason, it was written to show device-bound (cloud-to-device) connectivity vs. cloud-bound (device-to-cloud, aka ‘telemetry’) messaging. So I adapted the sample for my needs, and will show the adaptations below.

While it took me a little time to figure things out, the two most complicated parts were:

  • Figuring out the right AMQP connection string format to connect to IoT Hub/Edge. This is normally handled under the covers with our SDKs, but getting it right without the SDKs took a little research and trial/error
  • Figuring out how to get the client to trust the certificate chain that edgeHub presents to connecting clients for TLS connections (for more details on how Edge uses certs, see this article by my very favorite author!). This second bullet is only needed if you are connecting to IoT Edge. The right root-ca certs (i.e. Baltimore) are embedded in the uamqp library for IoT Hub itself.

The format for the AMQP connection string is actually already documented here by our engineering team (under the “protocol specifics” section), but it’s not called out very obviously like the entire sub-article we have for MQTT, so I actually missed it for a while. If you use a device-scoped key (which you generally should), the correct format for the AMQP connection string is:


amqps://[device_id]@sas.[short-hub-name]:[sas-token]@[target-endpoint]/[operation]

where:

  • [device_id] is an iot-hub registered device id for an IoT device
  • [short-hub-name] is the name of your IoT Hub *without* the .azure-devices.net
    • NOTE: the combination of device_id and short-hub-name, which collectively is the ‘username’ in the connection string, must be URL encoded before being sent
  • [sas-token] is a SAS token generated for your device
  • [target-endpoint] is either the name of your IoT Hub *with* the .azure-devices.net in the case of an IoT Hub direct connection OR it’s the FQDN of your IoT Edge box in the case of connecting to IoT Edge (i.e. mygateway.contoso.local)
  • [operation] is the desired operation. For example, to send telemetry data, operation is /devices/[device id]/messages/events

Just to show an example of what the connection string looks like with a live device and hub, below is an example of one of mine (with a few random characters in the sas-token changed to protect my hub :-))


amqps://amqptest%40sas.sdbiothub1:SharedAccessSignature+sr%3Dsdbiothub1.azure-devices.net%252Fdevices%252Famqptest%26sig%3DyfStnV4tfi3p7xeUg2DCTSauZowQ90Gplq3hKFzTY10%253D%26se%3D[expiry]@mygateway.contoso.local/devices/amqptest/messages/events

where:

  • amqptest is the device id of my device registered in IoT Hub
  • sdbiothub1 is the name of my IoT Hub
  • mygateway.contoso.local is the FQDN of my IoT Edge device (not really, but you don’t need to know the real one…)
  • /devices/amqptest/messages/events is the ‘operation’ I’m invoking, which in the case of IoT Hub/Edge means to send device-to-cloud telemetry data

The code

Ok, enough preamble, let’s get to the code.

NOTE:  Please note - strangely enough, as of this writing (3/7/2019) the code and post below will NOT actually work today.  During my work and investigation, and working with one of the IoT Edge engineers, we discovered a small bug in edgeHub that prevented the raw AMQP connection string from being parsed correctly. The bug has already been fixed, per this pull request, but the fix won't be publicly available until later this month in the next official release.  But, since I'm internal MSFT and "it's good to be the king", I was able to get a private build of edgeHub to test against.  I'll update this post once the fix is publicly available.  (technically, if you really want it, you can do your own private build of edgeHub, since it's open source)

The first step in using the sample is to install the uamqp library, the instructions for which can be found here.

Below is my modified version of the sample that I started with. I tried to annotate any change I made with a preceding comment that starts with #steve, so you can just search for them to understand what I changed, or just use the sample directly.


#-------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#--------------------------------------------------------------------------

import os
import logging
import sys
from base64 import b64encode, b64decode
from hashlib import sha256
from hmac import HMAC
from time import time
from uuid import uuid4
try:
    from urllib import quote, quote_plus, urlencode #Py2
except Exception:
    from urllib.parse import quote, quote_plus, urlencode

import uamqp
from uamqp import utils, errors

#steve - added to share the SAS token and username broadly
sas_token = ''
auth_username = ''

def get_logger(level):
    uamqp_logger = logging.getLogger("uamqp")
    if not uamqp_logger.handlers:
        handler = logging.StreamHandler(stream=sys.stdout)
        handler.setFormatter(logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s'))
        uamqp_logger.addHandler(handler)
    uamqp_logger.setLevel(level)
    return uamqp_logger

log = get_logger(logging.DEBUG)

def _generate_sas_token(uri, policy, key, expiry=None):

    if not expiry:
        expiry = time() + 3600  # Default to 1 hour.
    encoded_uri = quote_plus(uri)
    ttl = int(expiry)
    sign_key = '%s\n%d' % (encoded_uri, ttl)
    signature = b64encode(HMAC(b64decode(key), sign_key.encode('utf-8'), sha256).digest())
    result = {
        'sr': uri,
        'sig': signature,
        'se': str(ttl)}
    #if policy:
    #    result['skn'] = policy
    return 'SharedAccessSignature ' + urlencode(result)


def _build_iothub_amqp_endpoint_from_target(target, deviceendpoint):
#steve - reference global sas_token and auth_username because we will use it outside this function
    global sas_token
    global auth_username

    hub_name = target['hostname'].split('.')[0]

#steve - the format for a *device scoped* key for amqp is
# [deviceid]@sas.[shortiothubhubname]
# this is the same for both IoT Hub and IoT Edge.  This is a change from the original sample
# which used a 'hub scoped' key
    endpoint = "{}@sas.{}".format(target['device'], hub_name)
#steve - grab the username for use later..  before the URL-encoding below
    auth_username = endpoint

    endpoint = quote_plus(endpoint)
    sas_token = _generate_sas_token(target['hostname'] + deviceendpoint, target['key_name'],
                                    target['access_key'], time() + 36000)

#  steve - the first line below is used for talking to IoThub, the second for IoT Edge
#  basically we are just changing the connection endpoint
#    endpoint = endpoint + ":{}@{}".format(quote_plus(sas_token), target['hostname'])
    endpoint = endpoint + ":{}@{}".format(quote_plus(sas_token), target['edgehostname'])
    return endpoint

def test_iot_hub_send(live_iothub_config):
#steve - reference my globals set earlier
    global sas_token
    global auth_username

    msg_content = b"hello world"
    app_properties = {"test_prop_1": "value", "test_prop_2": "X"}
    msg_props = uamqp.message.MessageProperties()
#steve - honestly dunno what this property does :-), but we aren't going devicebound, so nuked it
#    msg_props.to = '/devices/{}/messages/devicebound'.format(live_iothub_config['device'])
    msg_props.message_id = str(uuid4())
    message = uamqp.Message(msg_content, properties=msg_props, application_properties=app_properties)

#steve - the original sample was set up for cloud-to-device communication.  I changed the 'operation'
# to be device-to-cloud by changing the operation to /devices/[device id]/messages/events
    #operation = '/messages/devicebound'
    deviceendpoint='/devices/{}'.format(live_iothub_config['device'])
    operation = deviceendpoint + '/messages/events'
    endpoint = _build_iothub_amqp_endpoint_from_target(live_iothub_config, deviceendpoint)

    target = 'amqps://' + endpoint + operation
    log.info("Target: {}".format(target))

#steve - this is where the magic happens for Edge.  We need a way to specify the
# path to the root ca cert used for the TLS connection to IoT Edge.  So we need to
# manually create the SASLPlain authentication object to be able to specify that
# and then pass it to the SendClient method below.  All of that is not necessary
# just to talk directly to IoT Hub as the root Baltimore cert for IoT Hub itself is buried
# somewhere in the uamqp library.
# if you are connecting to IoT Hub directly, you can remove/comment this line
    auth_settings = uamqp.authentication.SASLPlain(live_iothub_config['edgehostname'], auth_username, sas_token, verify=live_iothub_config['edgerootcacert'])

#steve - for iot hub  (simple because we don't have to worry about the edge TLS cert
#    send_client = uamqp.SendClient(target, debug=True)
# for iot edge
    send_client = uamqp.SendClient(target, debug=True, auth=auth_settings)
    send_client.queue_message(message)
    results = send_client.send_all_messages()
    assert not [m for m in results if m == uamqp.constants.MessageState.SendFailed]
    log.info("Message sent.")

if __name__ == '__main__':
    config = {}
#steve - changed from environment variables to hardcoded, just for this sample
#    config['hostname'] = os.environ['IOTHUB_HOSTNAME']
#    config['device'] = os.environ['IOTHUB_DEVICE']
#    config['key_name'] = os.environ['IOTHUB_SAS_POLICY']
#    config['access_key'] = os.environ['IOTHUB_SAS_KEY']
    config['hostname'] = 'your long iothub name'  # e.g. 'sdbiothub1.azure-devices.net'
    config['device'] = 'your device id'  # e.g. 'amqptest'
    config['key_name'] = ''  # leave empty string
    config['access_key'] = 'your primary or secondary device key'   # e.g 'P38y2x3vdWNGu7Fd9Tqq9saPgDry/kZTyaKmpy1XYhg='
#steve - the FQDN of your edge box (i.e. mygateway.local)
# it MUST match the 'hostname' parameter in config.yaml on your edge box
# otherwise TLS certificate validation will fail
    config['edgehostname'] = 'your FQDN for your edge box'  # e.g. 'mygateway.contoso.local'
#steve - path to the 'root ca' certificate used for the IoT Edge TLS cert
    config['edgerootcacert'] = 'the path to your root ca cert for edge'  # e.g. '/home/stevebus/edge/certs/azure-iot-test-only.root.ca.cert.pem'

    test_iot_hub_send(config)

The code is a little hard to read in blog format, so feel free to copy/paste into your favorite python editor to view it.

The key changes are to the line that generates the ‘username’ for the connection string, changing it from a ‘hub level’ key to a device-level key


endpoint = "{}@sas.{}".format(target['device'], hub_name)

and the two lines that allow me to customize the SASLPlain authentication information to add in the path to the IoT Edge root CA cert


auth_settings = uamqp.authentication.SASLPlain(live_iothub_config['edgehostname'], auth_username, sas_token, verify=live_iothub_config['edgerootcacert'])

send_client = uamqp.SendClient(target, debug=True, auth=auth_settings)

When you run the sample, you’ll see a ton of debug output. At the top you should see your connection string dumped out, but most importantly, if it works, you should see a line like this somewhere in the middle of the output


2019-03-07 19:00:15,893 uamqp.client DEBUG Message sent: <MessageSendResult.Ok: 0>, []

This indicates a successful sending of a message to IoT Hub/Edge.

Enjoy, and as always, feel free to ping me via my contact page and/or via comments here

Quick update on MQTT and message routing in IoT Hub and IoT Edge

I’ve made an update to my posts on using a standard MQTT client and using Node-Red to connect to IoT Edge via MQTT.

One thing I missed in the original post, that I needed to figure out today for a customer, is that if you want to route messages in IoT Edge based on the message body, you need to use the contentType and contentEncoding fields to tell edgeHub that the content is, in fact, JSON, and also what encoding it is (i.e. utf-8, utf-16, etc). This is not unique to IoT Edge, but a requirement for IoT Hub itself as well.

The way you do that is by appending ‘properties’ to the end of your MQTT topic to indicate the content type and the content encoding.  In MQTT, those are the $.ct and $.ce properties, respectively.  So, regardless of your client, you do that by appending the values $.ct=application/json and $.ce=utf-8, after URL encoding them, to your topic, like this:

devices/[device_id]/messages/events/$.ct=application%2Fjson&$.ce=utf-8

where [device_id] is obviously the device id in iot hub of your sending device. This allows you to do things like this in your IoT Edge routing (pretend we are sending a message like {“messageType”:”alert”, ……})

{
"routeToAlertHandler":"FROM /messages/* WHERE $body.messageType='alert' INTO ........
}
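If you're scripting the topic construction, here's a quick Python sketch of appending the (URL-encoded) property bag described above; the helper name is mine:

```python
from urllib.parse import quote

# Sketch: build the telemetry topic with the $.ct / $.ce system properties
# appended, URL-encoding the values as described above.

def telemetry_topic(device_id, content_type="application/json", content_encoding="utf-8"):
    props = "$.ct={}&$.ce={}".format(quote(content_type, safe=""),
                                     quote(content_encoding, safe=""))
    return "devices/{}/messages/events/{}".format(device_id, props)

print(telemetry_topic("mydevice"))
# devices/mydevice/messages/events/$.ct=application%2Fjson&$.ce=utf-8
```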

Enjoy! and, as always, if you have any questions, hit me up in the comments section or twitter/etc (links in my ‘about me’ page)

Connect Node-Red to Azure IoT Edge

NOTE:  updated on 2/14/2019 to add the note about adding the content type and content encoding to the MQTT topic if you want to route messages based on the message body in IoT Edge

Recently on behalf of a customer, I was asked if it was possible to connect Node-Red to Azure IoT Edge.  Node-Red, an IBM sponsored graphical process design tool, is increasingly popular in IoT scenarios.

My first thought was “well sure, there is an IoT Hub ‘node’ for Node-Red, surely it can be used for connecting to IoT Edge”.  Well, looks like it can’t.   For two reasons: 1) it’s not designed to do that (no way to specify the edge box/GatewayHostName) and 2) it’s not very good :-).  There are a couple of other options out there, too, but none of the IoT Hub specific ‘nodes’ will work for this use case.

Secondly, and this is really cool, my Azure IoT Global Black Belt peers in Europe have developed an IoT Edge module that hosts Node-Red inside Edge itself, where you can orchestrate your IoT process, and have it drop messages on edgeHub for further edge processing and/or uploading of messages to IoT Hub.   *If* you are interested in hosting and managing Node-Red from inside IoT Edge itself, you should definitely check it out.  It’s really nice.

With that said, a lot of customers (including the one that prompted this post) either already have a Node-Red environment outside of IoT Edge, or just for a host of reasons want to keep them separate.  If so, this post is for you.   Since the IoT Hub ‘node’ won’t work for this use case, we are going to use the MQTT node.  One thing to note – this only addresses device-to-cloud telemetry.  I haven’t (yet) addressed cloud-to-device management/messages/configuration because my customer’s use case didn’t (yet) need it.  Baby steps.

First of all, let’s get some pre-requisites out of the way.

Pre-requisites

For this post, I assume a number of things:

  • You’ve already got a Node-Red environment up and going.  If not, follow the directions to do so from the Node-Red site.    I also assume you have some familiarity with Node-Red.  If you don’t, they have some nice tutorials on the site.  Make sure Node-Red is running   (node-red-start from the command prompt)
  • You already have Azure IoT Edge setup as a transparent gateway and have confirmed that you can connect to it over MQTT on port 8883 using the handy openssl command included in the docs (this is necessary because Node-Red will be connecting to IoT Edge as a “leaf” or “downstream” device).  NOTE:  make sure that whatever name you use in the hostname parameter in your config.yaml file is the same resolvable name that you will use from Node-Red to connect to the IoT Edge box.  If you have to change it in config.yaml to match, restart iot edge before proceeding.
  • From your IoT Edge box, copy/download/ftp/whatever the root CA cert that was used to set up the IoT Edge box.  If you used a ‘real’ cert (i.e. from DigiCert, Baltimore, etc), then you’ll need to get the public key cert from them in the form of a pem or crt file.  If you used the convenience scripts from the transparent gateway instructions above, it’s the cert labeled azure-iot-test-only.root.ca.cert.pem in the ‘certs’ folder underneath whatever folder you generated your certs in.   Just grab that file and get it to your local desktop, we’ll need it later.
  • Your IoT Edge box is reachable, network-wise, from your Node-Red box.  Can you ping it?  Can you resolve its DNS name?  Can you telnet to 8883?
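That last check (resolve the name, telnet to 8883) can be approximated in a few lines of Python if you'd rather script it; this is just a reachability sketch, and the hostname shown is a placeholder:

```python
import socket

# Sketch: a quick "can I even get there?" check for the IoT Edge gateway,
# roughly the scripted version of "resolve the name and telnet to 8883".
# The hostname in the commented call is a placeholder.

def can_reach(host, port=8883, timeout=5):
    try:
        # create_connection does the DNS resolution and the TCP connect
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# print(can_reach("mygateway.contoso.local"))  # placeholder hostname
```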

In my case, I ran everything (Node-Red and IoT Edge both) from a single Raspberry Pi that I had laying around.  But beyond the requirements for Node-Red and IoT Edge themselves, there is no requirement for specific OS’es or co-location, as long as they can see each other on the network, and IoT Edge can get to IoT Hub.

Setup

Ok, enough caveating and pre-req’ing, let’s do some setup.

One of the things you will need is an IoT Hub device identity in the hub to represent our Node-Red flow.  If you haven’t already, create an IoT Device to represent our Node-Red “device” (the flow).  Go ahead and capture the “device id”, and URI of the Hub (something.azure-devices.net).

You will also need a Shared Access Signature, a token that authenticates the device to IoT Hub.  The easiest way to generate one is to open up a cloud shell instance in the azure portal.   To do that, click on the Cloud Shell button in the top nav part of the portal.  It looks like this:

cloud-cli

You may have to go through some preliminary setup (select a blob storage account, etc) if it’s the first time you’ve done it.  You can use either Powershell or Bash.  I prefer Bash myself, but both work.

Once the cloud shell is created and you have a command prompt, the first thing to do is install the Azure IoT Hub extension for the Azure CLI.  You can do this by running this command:

az extension add --name azure-cli-iot-ext

You only have to do it the first time, it ‘sticks’ for subsequent times you use the cloud shell.  Once this is done, you can generate your SAS token with this command:

az iot hub generate-sas-token -d [device id] -n [hub name] --du [duration]

where [device id] is the device id from the device you created earlier, [hub name] is the short name of your IoT Hub (without the .azure-devices.net) and [duration] is the amount of time, in seconds, you want your token to be valid.  Shorter values are more secure, but longer values are easier to manage (you have to re-create and re-configure the token in Node-Red when it expires).  It’s up to you to figure out your best balance between security and convenience :-).  I will remain silent on the topic here.

Once you generate your token, you need to copy it somewhere, as we’ll need it later.  The process will look something like this (click on the picture to enlarge):

sas-token

You need the part that starts with “Shared Access Signature”, without the quotes.  Highlighted in yellow in the picture.     NOTE in the picture that ‘sdbiothub1’ is the name of my IoT Hub used in this demo.  Also note that I used a duration of 60 seconds, so you can’t monkey with my IoT Hub.  Not that you would ever do anything like that.
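If the cloud shell isn't handy, you can also compute a device-scoped SAS token yourself. The sketch below implements the standard Azure SAS algorithm (HMAC-SHA256 over the URL-encoded resource URI and expiry, same scheme the CLI uses); the key passed in at the bottom is obviously fake:

```python
import time
from base64 import b64decode, b64encode
from hashlib import sha256
from hmac import HMAC
from urllib.parse import quote_plus, urlencode

# Sketch: compute a device-scoped SAS token, the same thing the
# "az iot hub generate-sas-token" command hands back.  Standard Azure SAS:
# HMAC-SHA256 over "<url-encoded resource uri>\n<expiry>".
# The key used at the bottom is fake.

def generate_device_sas(hub_name, device_id, device_key_b64, duration=3600):
    uri = "{}.azure-devices.net/devices/{}".format(hub_name, device_id)
    expiry = int(time.time()) + duration
    sign_key = "{}\n{}".format(quote_plus(uri), expiry)
    signature = b64encode(
        HMAC(b64decode(device_key_b64), sign_key.encode("utf-8"), sha256).digest()
    ).decode("utf-8")
    return "SharedAccessSignature " + urlencode(
        {"sr": uri, "sig": signature, "se": str(expiry)})

token = generate_device_sas("sdbiothub1", "nodeRedTest",
                            b64encode(b"not-a-real-key").decode())
print(token)
```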

Ok, we have all the information we need to get started.

IoT Edge and IoT Hub debug prep

There’s really nothing else we need to do in order to get IoT Edge and IoT Hub ready, but we’ll do a couple of things to have them ready in the background, both to debug our Node-Red setup and to see the messages flow through when we are successful.

On the IoT Edge box, run

iotedge logs -f edgeHub

This brings the edgeHub logs up and ‘follows’ them so we’ll be able to see our Node-Red box connect.

In the cloud shell that you have open in Azure, run this command

az iot hub monitor-events -n [hub name] --timeout 0

where [hub name] is your IoT Hub name as before.  This will monitor messages that make it to IoT Hub so we can see our messages flow in from Node-Red.

Just leave those things up and running in the background and we’ll come back to them later (you can CTRL-C to get out of them later when ready)

Let’s set up our Node-Red instance.

Node-Red configuration

On your Node-Red box, bring up the Node-Red configuration page in your browser (typically http://nameofyournoderedbox:1880)

You’ll have a blank canvas.  You can do something more sophisticated later, but for now we’ll just do a VERY simple flow to show messages flowing to IoT Edge.   We are using a very simple flow for two reasons — to show just the main concept of connecting to IoT Edge and because my Node-Red skills are every bit as awesome as my blog writing skills.

From the node palette on the left hand side, grab the MQTT node from the *output* (not input) section of the palette and drop it on the design canvas.  Also grab an Inject node from the input section and drop it on the canvas.    Connect the output of the Inject node into the input of the MQTT node, as shown  below (your MQTT node won’t say ‘connected’ yet).

node-red-canvas

Double click on the MQTT node to configure it.

  • In the ‘name’ field at the bottom, just give it a name you like.  Doesn’t matter what it is
  • For QOS, choose either 0 or 1.  IoT Hub/Edge do not support QOS 2.
  • For the TOPIC, enter devices/[device id]/messages/events   (where [device id] is the device id you registered earlier in IoT Hub)
  • NOTE:  *if* your payload is JSON, and *if* you think you might want to do routing of the messages based on the *body* of the message, you need to send a content type of ‘application/json’ and a content encoding of ‘utf-8’ (or 16, etc).  To do that, we need to append that information (url-encoded) to the MQTT topic, which gets passed in as a system property.  So, in that case, the ‘topic’ value would look like this (note the ‘/$.ct=application%2Fjson&$.ce=utf-8’ appended to the MQTT topic)
    • devices/[device id]/messages/events/$.ct=application%2Fjson&$.ce=utf-8

Below is a screen shot of mine.  Ignore the “Server” field in the screenshot, as we haven’t configured that yet.  For my ‘device id’, I called my device nodeRedTest for this and subsequent screenshots

mqtt-node-config

I have Retain set to false, but I really have no idea if that makes any difference or not.  I’m not a Node-Red or MQTT expert, although I do play one on conference calls with customers :-).

Ok, next we will configure our ‘server’.  Click on the little edit pencil next to the Server box

Give your ‘server’ a name (doesn’t matter what it is – I called mine iothub-mqtt in the screenshots).  On the Connection Tab,

  • for the Server  box, enter the hostname of your IoT Edge box (exactly as it’s specified in config.yaml and we verified we could resolve earlier)
  • Change the port from 1883 to 8883  (the MQTT over TLS port)
  • CHECK the Enable Secure (SSL/TLS) connection box.  We will come back and configure that in a moment.
  • For the “Client ID” box, enter your IoT device ID from earlier
  • UNCHECK the “use legacy MQTT 3.1 support” box.  IoT Edge does not support legacy MQTT connections

Example values based on my setup are shown below.  ‘rpihat1’ is the hostname of my IoT Edge box.

mqtt-node-connection

Click on the ‘Security’ Tab

  • For the username, enter  “[hub name].azure-devices.net/[device id]/api-version=2016-11-14”  (filling in your values for hub name and device id)
  • For the password, paste in the SAS token, in its entirety, that you copied earlier

A sample screenshot of mine is below

mqtt-node-security

You don’t need to set or change anything on the Messages tab

Back on the Connections Tab, next to the “TLS Configuration” box, click on the edit/pencil button

Next to “CA Certificate”  (the fourth button down), click “Upload”.  Upload the root CA certificate that you downloaded earlier.

For the name at the bottom, just make up a meaningful name for this config and click Save

Click Save or Update on the main node configuration box to save all our values and close it as well

We are now ready to save our flow.  Click “Deploy” in the upper right to save and deploy your flow.

Checking progress

At this point your flow is running and you should see your MQTT node status as “Connecting” with a yellow dot next to it.  Flip over to your IoT Edge box and look at the edgeHub logs.  You should see something like this….  (it may take a minute or more to look like this)

edgehub-logs

For some unexplained reason, the TLS handshake between the Node-Red box and my IoT Edge box fails exactly seven times before it successfully connects.  I have no idea why and haven’t (and probably won’t) put in the effort to troubleshoot.  I suspect it’s related to a 10-second delay between tries and the 60-second refresh value in the node config… but who knows?

Eventually you should see the successful connection in the logs as shown the bottom half of the picture above.

If you flip back over to your Node-Red canvas, you should have a green dot next to your MQTT node and it should say “connected”

We just left the Inject node configured with its default of “timestamp”…  it really doesn’t matter what we send to our IoT Hub at this point, we just want to see messages go in, and an updating timestamp every time we send a message is handy.  NOTE that the Inject node sends ‘timestamps’ in Unix epoch time (milliseconds since Jan 1, 1970), so it just looks like a big number.

click on the little button (circled in red below) on the Inject node in the Node-Red canvas

inject-button

This will cause a message to be created and sent to the MQTT node for sending to IoT Edge.  That will, in turn, send the message up to IoT Hub.

Flip over to your Azure Portal browser window and you should see that your message made it to IoT Hub.  Click on the Inject button a few more times to send a few more messages (because it’s fun)

The output should look something like this

iothub-debug

Your Node-Red installation is now connected and sending messages through IoT Edge up to IoT Hub.  You can now make your Node-Red flow more sophisticated (reading from sensors, etc) and do any and all the goodness that comes with IoT Edge (stream processing, custom modules, machine learning, etc).

If you have any questions or feedback, please leave them in the ‘comments’ section below.

Enjoy!

Mosquitto MQTT broker to IoT Hub/IoT Edge

 

EDIT:  edited on 8/30 to change the TLS version to TLS 1.2.  Seems that TLS 1.0 doesn’t work any more.  Thanks to Asish Sinha for the heads up.  Also updated the api-version to the latest.

 

Earlier, I had a post on connecting an MQTT client to IoT Edge.

It seems like lately my team and I have had a lot of customers with brownfield equipment that can speak MQTT, but are either too old or too low powered (the devices, not the customers :-))  to do MQTT over TLS.  It is also often the case that you have no real control over the MQTT topic(s) that the device sends events/messages over.  Additionally, many devices even imply “intelligence” or “data” into the topic structure, meaning the topic hierarchy itself conveys information vs. only having important information in the message payload.

Both a TLS connection, and sending data on a very specific topic, are current requirements to talk to either IoT Edge or IoT Hub itself over MQTT.   So, how do we overcome this impedance mismatch between what IoT Hub/Edge requires, and the equipment can do?   One way is to use a middle layer to do the translations.  A popular choice is the open source MQTT broker mosquitto from the Eclipse Foundation.   Mosquitto has a built-in option to set up an MQTT “bridge”, in which the broker will accept incoming messages over MQTT and then forward them as an MQTT client to another MQTT server.  The good news is, Mosquitto can listen to the unencrypted MQTT traffic (port 1883 by default), and then forward it along over a TLS-protected MQTTS connection (port 8883) via this bridge. 

That takes care of our MQTT vs. MQTTS issue.  But what about the any-topic vs. specific-topic problem?  Unfortunately, IoT Hub and IoT Edge both only accept telemetry/event data on a specific MQTT topic:  devices/[device-id]/messages/events where [device-id] is the ID of the connected device.  That one is a little trickier, and will be addressed later in this post after we cover the basics of setting up the bridge.
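As a preview of where this is headed, a minimal bridge section in mosquitto.conf looks roughly like this. This is a sketch only; every value below (hub name, device id, SAS token, cert path) is a placeholder, and the directives shown are standard mosquitto bridge options:

```
# Bridge connection to IoT Hub (all values are placeholders)
connection azureiot-bridge
address yourhub.azure-devices.net:8883
remote_username yourhub.azure-devices.net/yourdeviceid/api-version=2016-11-14
remote_password SharedAccessSignature sr=...
remote_clientid yourdeviceid
bridge_cafile /path/to/root-ca.pem
tls_version tlsv1.2
bridge_protocol_version mqttv311
try_private false
cleansession true
# forward messages published locally on the IoT Hub telemetry topic up to
# the hub over TLS; topic *remapping* is the part addressed later in the post
topic devices/yourdeviceid/messages/events/ out 1
```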

A couple of notes/caveats before we get started:

  • I am NOT a Mosquitto expert.  I’ve learned just enough to get this working :)
  • This is certainly not the only way to solve this problem, but it is one way that seems to work pretty well.
Mosquitto Bridge Setup for IoT Hub/Edge

Before we can configure our Mosquitto MQTT bridge, there are a few pre-requisites to take care of:

  • If you don’t already have one, create an IoT Hub and create a device (only follow that one section) that will represent our Mosquitto broker. The messages in IoT Hub/Edge will appear as if they come from the broker as the IoT device.
  • If you are talking directly to IoT Hub, you can skip this step.  If you are wanting to route your messages through IoT Edge, you need to setup an IoT Edge device as a gateway.
  • Gather the TLS server-side root certificate.  In order for Mosquitto to establish a TLS connection to either IoT Hub or IoT Edge, it needs to trust the server-side TLS certificate that will be presented to the broker when it opens the connection.  The process for gathering the CA cert from which that server-side cert was issued differs slightly depending on whether you are connecting to IoT Hub or IoT Edge.  Either way, save the cert to a file on the Mosquitto server; we’ll use it later.

For IoT Hub, the TLS certificate chains up to the public DigiCert Baltimore Root certificate. You can create this file by copying the certificate information from certs.c in the Azure IoT SDK for C. Include the lines -----BEGIN CERTIFICATE----- and -----END CERTIFICATE-----, remove the quote (") marks at the beginning and end of every line, and remove the \r\n characters at the end of every line.  Name the file with a .pem extension.
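If you’d rather script that cleanup than do it by hand, something like the following works on the quoted C string lines copied out of certs.c (a sketch; the function name and file names are mine):

```python
import re

def c_lines_to_pem(c_source):
    """Pull the quoted certificate lines out of certs.c-style C source,
    dropping the surrounding quote marks and the literal \\r\\n escapes."""
    pem_lines = []
    for quoted in re.findall(r'"([^"]*)"', c_source):
        pem_lines.append(quoted.replace("\\r\\n", ""))
    return "\n".join(pem_lines) + "\n"

# usage: paste the relevant lines from certs.c into a string (or read the
# file), then write the result out with a .pem extension, e.g.:
# with open("baltimore-root.pem", "w") as f:
#     f.write(c_lines_to_pem(snippet_from_certs_c))
```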

For IoT Edge, use whatever root certificate you used to create the IoT Edge Device CA Certificate.  If you used our convenience scripts to set up IoT Edge, that will be the azure-iot-test-only.root.ca.cert.pem found in the ‘certs’ folder where you ran the scripts.

Now that we have our pre-reqs finished, we can do our Mosquitto bridge setup.  This is done via the Mosquitto configuration file.   There may be other things in that file; however, below is an example bridge configuration entry.


# Bridge configuration
connection azureiot-bridge
log_type debug
address [edge or hub fqdn]:8883
remote_username [iothub-shortname].azure-devices.net/[device-id]/api-version=2019-03-31
remote_password [sas-token]
remote_clientid [device-id]
bridge_cafile [iot hub or edge root ca cert]
try_private false
cleansession true
start_type automatic
bridge_insecure false
bridge_protocol_version mqttv311
bridge_tls_version tlsv1.2
notifications false
notification_topic events/

topic devices/[device-id]/messages/events/# out 1

The parts in [brackets] need to be replaced with your values, where

  • [iot hub or edge FQDN] is the DNS name of either your IoT Hub (including the .azure-devices.net) or your IoT Edge device  (i.e. whatever name was used as the ‘hostname’ in config.yaml on IoT Edge)
  • [iothub-shortname] is the name of your IoT Hub  (e.g. ‘myiothub’) without the .azure-devices.net
  • [device-id] is the name of the IoT device created in IoT Hub to represent this broker
  • [sas-token] is a SAS token generated for that device-id in that hub
  • [iot hub or edge root ca cert] is the full path to the root certificate file you created earlier
  • All values are case sensitive.
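For the [sas-token] value, the standard Azure IoT device SAS token algorithm (HMAC-SHA256 over the URL-encoded resource URI plus an expiry timestamp, keyed with the base64-decoded symmetric device key) can be sketched in a few lines of Python. You can also generate one with tools like the Azure CLI or Device Explorer; the function below is my own sketch, not an SDK API:

```python
import base64, hashlib, hmac, time, urllib.parse

def generate_sas_token(resource_uri, device_key, ttl_seconds=3600):
    """Device SAS token: sign '<url-encoded uri>\\n<expiry>' with the
    base64-decoded symmetric device key using HMAC-SHA256."""
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote(resource_uri, safe="")
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    key = base64.b64decode(device_key)
    sig = base64.b64encode(hmac.new(key, to_sign, hashlib.sha256).digest()).decode()
    return (f"SharedAccessSignature sr={encoded_uri}"
            f"&sig={urllib.parse.quote(sig, safe='')}&se={expiry}")

# e.g. for the device representing the broker (key from the device's
# detail page in the portal):
# generate_sas_token("myiothub.azure-devices.net/devices/mybroker", "<device key>")
```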

The very last line (that starts with the word ‘topic’) subscribes the bridge to all messages that are sent with the topic structure of ‘devices/[device-id]/messages/events/#’ (the # is a wildcard to include any sub-topics). When a message that fits that topic structure gets published, the bridge will get it and pass it along to the IoT Hub/Edge.
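To make the ‘#’ wildcard semantics concrete, here is a simplified matcher written for illustration (my own sketch; the paho-mqtt library ships a real implementation, and the broker of course does this matching for you):

```python
def topic_matches(sub, topic):
    """Minimal MQTT topic matcher supporting the '#' and '+' wildcards.
    A simplified sketch; real brokers also treat '$'-prefixed topics specially."""
    sub_parts = sub.split("/")
    topic_parts = topic.split("/")
    for i, sp in enumerate(sub_parts):
        if sp == "#":
            return True  # '#' matches this level and everything below it
        if i >= len(topic_parts):
            return False
        if sp != "+" and sp != topic_parts[i]:  # '+' matches any single level
            return False
    return len(sub_parts) == len(topic_parts)
```

So a bridge subscription of devices/mydevice/messages/events/# would match devices/mydevice/messages/events/temperature, but not a message published for a different device id.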

Restart your Mosquitto broker using the updated configuration file.  You should see debug output indicating that it has connected the bridge (and if you are using IoT Edge, you should see debug output in the edgeHub logs showing the connection from the broker).

If you want to test the connection, you can send a test message with the mosquitto_pub command (replacing [device-id] with the device id you created above):


mosquitto_pub -t devices/[device-id]/messages/events/ -m "hello world!"

The trailing slash is important and required. You should see the message above be forwarded by the MQTT bridge to either IoT Hub or IoT Edge.

If you are fortunate enough to have full control over your MQTT topic structure from your devices, and there is no intelligence in your topic structure, you’re done.  Congrats and have fun!  You can just point your MQTT clients at the broker address (making sure you update the MQTT topic to point to devices/[device-id]/messages/events/) and rock and roll.

However, for the use cases where you don’t have MQTT topic control, or there is intelligence in your topic hierarchy, keep reading.

MQTT Topic Translation

Unfortunately, this is where things get a little less “clean”.  The mosquitto MQTT bridge has no ability to “rewrite” or completely change the topic structure of the messages it receives.  One way to do it is to write a simple client that subscribes to all potential topics from which the MQTT devices might send data, and then resend the payload after translating the MQTT topic into the IoT Hub/Edge required topic structure. 

Below is a simple python script that I wrote to do the translation as an example.  This sample subscribes to all topics (#  – the wildcard) and, if the message doesn’t already use the IoT Hub/Edge topic structure, it simply resends the message payload using the hub/edge topic.


import paho.mqtt.client as mqtt

# replace [device-id] with the device you created in IoT Hub
iothubmqtttopic = "devices/[device-id]/messages/events/"

# this sample just takes an incoming message (on any topic) and
# resends it on the iot hub topic structure.  you could, of course, do any
# kind of sophisticated processing here you wanted...
def on_message(client, userdata, message):
    global iothubmqtttopic
    if message.topic != iothubmqtttopic:
        messageStr = str(message.payload.decode("utf-8"))
        print("message received ", messageStr)
        print("message topic=", message.topic)
        client.publish(iothubmqtttopic, messageStr)

# replace [broker address] with the FQDN or IP address of your MQTT broker
broker_address = "[broker address]"

print("creating new instance")
client = mqtt.Client("iottopicxlate")  # create new instance
client.on_message = on_message  # attach function to callback
print("connecting to broker")
client.connect(broker_address)  # connect to broker

print("Subscribing to all topics")
client.subscribe("#")

client.loop_forever()  # block and process messages forever

Of course, this is one extremely simple example, that just passes along the same message payload and swaps out the message topic.  You can, of course, add any kind of sophisticated logic you need.  For example, you could parse the topic hierarchy, pull out any ‘intelligence’ in it, and add that to the message payload before sending.
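For example, a translation step that folds the topic levels into a JSON payload before republishing might look like this (a sketch; the field names like topicLevel0, and the sample plant1/line3/temp hierarchy, are my own invention):

```python
import json

IOTHUB_TOPIC = "devices/[device-id]/messages/events/"  # same placeholder as above

def translate(topic, payload):
    """Fold the topic hierarchy into the message body so the 'intelligence'
    in the topic isn't lost when we republish on the fixed IoT Hub topic."""
    levels = [p for p in topic.split("/") if p]
    body = {"value": payload}
    for i, level in enumerate(levels):
        body[f"topicLevel{i}"] = level  # e.g. plant1/line3/temp -> three fields
    return IOTHUB_TOPIC, json.dumps(body)
```

Inside on_message you would then publish the returned payload on the returned topic instead of the raw message.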

If you want to test this, copy the python script to a file, edit it to add your device id and the URI of your Mosquitto broker, and run it.  You can then try:


mosquitto_pub -t /any/topic/structure/you/want -m "hello world"

You should see the python script receive the message, do the translation, and republish it.  Then the Mosquitto bridge will forward the new message along to IoT Hub/Edge.

Connect Kepware KEPServerEX through Azure IoT Edge to IoT Hub

TLDR: I’ve put together step-by-step instructions on how to leverage Kepware’s IoT Gateway as an MQTT-based “leaf IoT device” for IoT Edge.

I’ve gotten the request a few times from customers who leverage, or want to leverage Kepware for connectivity to their manufacturing equipment and then send that data to IoT Hub through Azure IoT Edge. 

Microsoft’s recommended solution is to leverage OPC-UA and our IoT Edge OPC-UA publisher.  OPC-UA is a nice industrial protocol and, more importantly, offers a robust data format that plugs nicely into other Azure services. 

However, in cases where customers either can’t, or don’t want to, leverage OPC-UA, Kepware has already published a nice technical note showing how to connect Kepware directly to Azure IoT Hub via Kepware’s IoT Gateway and MQTT.  Still, customers are interested in how to have the data flow through Azure IoT Edge to take advantage of the many nice edge-processing capabilities available.

So, based on the same principles as my “Connect MQTT client to Azure IoT Edge” post, I’ve put together step-by-step instructions on how to leverage Kepware’s IoT Gateway as an MQTT-based “leaf IoT device” for IoT Edge.

You can check out the instructions here.

Enjoy

S

Pardon this interruption for some brief IoT bragging

Pardon this interruption from (hopefully) good technical content for a small bragging opportunity.  🙂

I did a thing in IoT…

Worldwide Communities – Community SME 2018
Earners of this badge have been recognized as a Community Subject Matter Expert (SME) in the Microsoft Worldwide Communities program. In this position, earners have demonstrated a willingness to share their knowledge with the community by answering questions, presenting on calls and events, developing resources for the community, and/or mentoring others. These earners have the expertise and dedication needed to excel at Microsoft.

Ok – carry on and sorry for the interruption

Azure IoT Edge Hands-on Labs Updated

The Azure IoT Global Blackbelts (my team) maintain a set of hands-on labs for IoT Edge. They were originally written for some in-person workshops for customers that we did in February/March, but have proven to be valuable for people to do on their own as well. We have one version for Windows (with actual hardware) and one version with Linux (virtual machine in Azure).

After some delay due primarily to busy schedules and lots of customer work (IoT Edge is on fire!), I finally got a chance to update the “Linux” lab to a) be compatible with the Generally Available bits of IoT Edge and b) to leverage the Azure Cloud CLI for provisioning of the IoT Hub, Edge Device, monitoring IoT Hub, etc to cut down on all the clicking around!

The Linux lab is based on leveraging an Ubuntu VM in Azure with the specific purpose of you not having to install anything on your local machine except maybe an SSH client. This allows students who may be in a locked down situation on their work machines to still get to experiment with Edge.

The labs can be found here -> IoT Edge Linux HOLs

Over the next few weeks, I’ll be revising the Windows version, and also working on a version for the Raspberry Pi.

Enjoy!