Where ya’ been, dude?

Well, given that it’s been over two years since I posted here, you’re probably wondering where I’ve been. Or maybe not. Hell, you may not have even noticed I’ve been gone. But I’m going to tell you anyway!

Short Answer: I retired and rode off into the sunset

Longer Answer: Last December (2022), I (early) retired from Microsoft. After nearly two and a half decades at Microsoft, I was fortunate enough to be in a financial position where I could walk away and spend some more time with my (at the time) college-bound daughter, my aging parents, and my wife with our newly empty nest.

It’s been an incredible blessing to be able to take on a slower pace of life and focus on the home life and recharging the batteries after the “dog years” (despite how great they were!) of working at Microsoft.

I am doing some part-time work/consulting on the side, so I’m still doing some technical stuff, mostly broadly Azure focused. But I’m no longer doing any direct IoT or Azure Digital Twins work.

I started this blog because, in trying to solve some complex customer questions/issues in Microsoft’s IoT platform, I obtained some obscure and valuable knowledge that really wasn’t in Microsoft’s ever-evolving documentation. So, I felt like this blog added a lot of value. In fact, it still gets a couple dozen hits per day with NO SEO and no new content.

But the broader Azure areas I’m focused on in my part-time/consulting role are more well-trod (and documented) than IoT was back in the day, so I don’t know if I’ll keep writing here anymore or not. Maybe if I figure out something nobody else seems to know, I’ll put something down. I just don’t know yet. And since I’m (semi-)retired now, the odds are not in our favor.

Or maybe I’ll morph it into writing about a totally different area. Some possibilities:

  • Home Automation
  • DIY – electrical, plumbing, carpentry, etc
  • Mustang restoration
  • Alabama Crimson Tide football and why it’s so awesome 😉
  • Ways to survive being retired and home full-time with your wife and not anger her into a murderous frenzy! <– this is still very much a work in progress!!

Thanks for all the support of my content. The popularity of this content, at least internal to MSFT, actually helped earn me my last promotion before I retired. I appreciate you all, and may we meet again, in this life or the next!


Changing roles…

It’s been a while since I wrote anything here. Life, the holidays, and now a job change have gotten in the way.

I’ve changed roles from my previous position as an IoT Global Black Belt, part of Microsoft’s field-based technical sales org, to a role in the Azure IoT engineering team itself. I’m now a Principal Program Manager on our Azure IoT Internet Business Acceleration team, helping partners and customers build those early lighthouse and high-impact solutions using our newest technology. I’ve updated my “About me” page accordingly.

It will evolve over time, but my initial focus is on our Azure Digital Twins solution!

I’ll continue to write general purpose Azure IoT articles, but you may see some slant towards ADT 🙂

Run Azure Digital Twins ADT-Explorer in Docker

This post walks you through running the ADT-Explorer tool, part of our Azure Digital Twins solution, in a docker container.

My apologies, but it’s been a while since I blogged. Life’s been getting in the way.

I’ve been doing more and more lately with our Azure Digital Twins solution. The ADT engineering team has a nice tool for visualizing your models, twins, relationships, and query results. One of the biggest challenges for me and my customers has been getting the environment set up: the right version of node, the right version of the code, authentication, etc.

Conveniently, the ADT team has some instructions for running the ADT-Explorer tool in a docker container, so you don’t have to worry about the pre-reqs. It generally works great. However, there’s one downside. The ADT-Explorer tool uses the Azure.Identity library, and specifically DefaultAzureCredential, to do your authentication. This is usually a great thing, in that it automatically picks up any cached local Azure credentials you may already be logged in with (Azure CLI, VS Code, Visual Studio, environment variables, etc.) and uses them.

The challenge, however, is that ADT-Explorer running in a docker container cannot leverage those local cached credentials. So, what do you do?

The easiest way to handle this is to install the Azure CLI inside the docker container, do an ‘az login’ in it, and then start ADT-Explorer. The rest of this post walks you through this. It assumes you have docker installed on your machine with Linux containers enabled.

The first step is to clone the repo with git.

git clone https://github.com/Azure-Samples/digital-twins-explorer

After you do that, navigate to the digital-twins-explorer folder and build the adt-explorer docker image (note the dot/period at the end of the command – don’t forget it!)

docker build -t adt-explorer .

So far this follows the instructions provided by the ADT team, but this is where we will deviate. The ADT team’s instructions have you just run the container as-is. However, if you do that, DefaultAzureCredential won’t work. Instead, we need to install the Azure CLI and log into Azure before we start the adt-explorer app. The first thing we need to do is run the adt-explorer docker container, but starting at a bash command prompt. Run the following command:

docker run -it -p3000:3000 adt-explorer bash

Note that we added ‘bash’ to the end of the command line, to start the container but override the default entrypoint and give us a bash prompt. It should look like this:

Now that we have a prompt, we need to install the Azure CLI. To install it, run this command:

curl -sL https://aka.ms/InstallAzureCLIDeb | bash

This should install the CLI for us. After a successful install, we can now do an ‘az login’ to authenticate to Azure. You will get a ‘code’ to use, and a URL to visit in your browser, as shown below.

Open your browser, navigate to https://microsoft.com/devicelogin, enter the code, then your Azure subscription credentials as shown below.

The authentication process will be slightly different for every environment based on your IT department’s setup (e.g., two-factor auth), but the result should eventually be a successful login.

After a successful login, you’ll see a list of your Azure subscriptions in the docker container.

Now we can start the adt-explorer app. To do so, run

npm run start

You’ll see this in your container.

You can now open a browser and navigate to http://localhost:3000. Click on the people icon in the upper-right corner, enter your ADT instance URL, and off you go.

Enjoy, and as always, hit me up in the comments if you have any issues.

Azure IoT Edge local and offline dashboards

This post covers a common ask from customers: how to display, report on, or dashboard data locally, even when offline, from Azure IoT Edge.

One of the main uses for IoT is to have dashboards and reports showing the latest information off of IoT devices. Often that happens with data sent from the devices to the cloud. In many IoT scenarios, especially in manufacturing, those dashboards also serve the needs of local operators or supervisors in the very plants or locations that supply that IoT data. So often that data “round trips” from the device up to Azure, then back down in reports to screens at the plant.

But what happens if you need lower latency reporting? Or you lose your Internet connection frequently because your plant is in an area with slow or unreliable connectivity?

A common ask from customers is, in addition to sending the IoT data to Azure for centralized reporting, ML model training, etc., to also have the data reported/dashboarded locally. That allows for lower-latency display, as well as continued operation in the event of an Internet failure.

To address this need, I’m happy to announce that my Azure IoT Global Black Belt counterparts have published guidance and a sample implementation on how to do just this. The sample implementation happens to be a common manufacturing KPI called Overall Equipment Effectiveness (OEE), but it could be adapted to many other scenarios such as retail stores, warehouses, etc.

Please check it out at our github repo -> https://github.com/AzureIoTGBB/iot-edge-offline-dashboarding

Enjoy! And, as always, hit me up in the comments if you have questions.

IoT Transformers podcast

So… I did a thing.

My co-workers, Dani Diaz and Deb Oberly, host a very cool podcast called IoT Transformers. They typically host our Azure IoT customers and partners to talk about how they are transforming their businesses with IoT. It’s a great series, full of really useful information about these customers’ and partners’ digital transformation journeys.

For this most recent episode, they decided to interview me (“Insights from Busbyland” 🙂 ). As one of the founding members of our IoT Global Black Belt team, I talked about changes I’ve seen in the industry, cool projects, and the IoT-related tech I’m most excited about.

I’ve also created a cool circular reference. The podcast references the blog which references the podcast which references the blog which references the podcast………

Anyway, if you have any interest, and want to hear my thoughts on IoT (and just how horrible my electronically recorded voice is, how many times I say ‘ummm’, and a little southern drawl), check it out. And don’t forget to go back and give the entire series a listen. You won’t regret it.


IoT Edge Docker Image Cleanup

If you’ve done any development on, or run in production for a while, an IoT Edge box, you inevitably have a lot of unused docker images ‘hanging’ around, just being bums and eating up disk space. This can happen particularly if you’ve developed your own custom modules and released new versions (with new docker image tags) over time, as shown below for the ‘echomodule’ module.


A frequent question we get from customers is “will IoT Edge clean up these unused images?”. The answer is, well… “no”. There are suggestions floating around about using cron jobs and such to schedule a ‘docker image prune’ run to remove them, but I wanted to see if I could do this as an IoT Edge module itself, so you don’t have to fool with OS-level config and can run/configure it remotely.

The short version is:  yep!  (you probably guessed that since I wrote this post, right?  you’re pretty clever)

The longer version can be found at our Azure IoT GBB (my team)’s github site here –>  IoT Edge Image Cleanup

Enjoy and as always, let me know what you think…


Install IoT Edge on Red Hat Enterprise Linux (RHEL) – 7.x

This post demonstrates how to get Azure IoT Edge to work on Red Hat Enterprise Linux (RHEL)

Hi all. Sorry for the lack of content lately. Between some (minor) personal stuff and the coronavirus stuff, for both myself and my customers, it’s been a bit of a goat-rodeo around Busby Manor lately.


Recently I needed to help a customer get IoT Edge installed on a box running Red Hat Enterprise Linux (RHEL). In this case, it was version 7.5, but this should work for other 7.x-based versions too… I think. I’m about as far away from a RHEL expert as you can get.

NOTE:  credit for most of this info goes to Justin Dyer, a peer of mine on the Azure IoT pre-sales team!

First off, if you look at the “platform support” documentation for IoT Edge, you’ll notice that RHEL is a “Tier 2” supported platform. That’s a fancy way of saying that either MSFT or someone we know has gotten it working on that platform and it generally works. However, it also means that it is not a “gating” platform for us, meaning it’s not a platform that we test extensively before each release. In other words, not working on RHEL will not block or gate a release. That’s not because we don’t like it, or don’t want to support it at “Tier 1”, but rather because we just haven’t gotten around (yet) to doing all the necessary work to get it fully integrated into our extensive testing platform. We love all Linux! We’ve just prioritized based on how often we run into various platforms in the field with our customers.

Now, with all the caveating out of the way, IoT Edge on RHEL DOES work, and seems to work fine, and we DO provide RPM packages for it whenever we do a release.


OK, enough preamble. Let’s jump in. For RHEL, we provide RPM packages that you can install with yum. The actual IoT Edge install is reasonably straightforward once you get through the big pre-req, which is container-selinux.

The big issue is that the Moby engine (i.e. open-source docker) underneath IoT Edge needs a newer version of container-selinux than what’s installed on RHEL 7.5. We need version 2:2.95 or greater. If you have it already, great – proceed.


If you don’t, you can manually download it from here and update. Updating that package will be left as an exercise for the reader (remember: I’m not a RHEL expert, but hopefully you are!)

If you are running your own RHEL install, you can skip this next section and jump down to the “Install IoT Edge” section

A note about RHEL on Azure VMs

Most of the testing I did here was on RHEL running in an Azure VM built with our ready-made RHEL images.

container-selinux is found in the “rhel-7-server-extras-rpms” repo, which our Azure RHEL VMs do not have access to.  There are instructions on how to “remove the version lock and install the non-eus repos” in order to get access to it.

But, if you don’t want to read all that, these are the net instructions that you need to run:

sudo rm /etc/yum/vars/releasever
sudo yum --disablerepo='*' remove 'rhui-azure-rhel7-eus'
sudo yum --config='https://rhelimage.blob.core.windows.net/repositories/rhui-microsoft-azure-rhel7.config' install 'rhui-azure-rhel7'
sudo yum install container-selinux

Once those are complete, you can proceed with the “install IoT Edge” section below

Install IoT Edge

Finding the right packages

Before we install IoT Edge, a short note about how we release IoT Edge. For all of the “non-docker-based” parts of the runtime (i.e. ignoring edgeAgent and edgeHub for the moment), there are really four major components:

  • the Moby engine: the open-source version of docker, basically
  • the Moby CLI: gives you the ‘docker’ commands
  • libiothsm: MSFT-provided library that implements the security abstraction layer that lets the edge runtime talk to various security hardware (like TPMs)
  • iotedged: the IoT Edge “Security Manager”, the daemon-based part of IoT Edge and really the component that ‘bootstraps’ all the rest of IoT Edge

When we do a ‘release’ (in the github sense of ‘release’) of IoT Edge, we only provide new packages for those components that changed with that release.  So, for example, in the 1.0.8 release, we had changes in all four components and you’ll see (under “assets”) new *.deb and *.rpm packages for all of them.

But in 1.0.9, only libiothsm and iotedged changed, so you only see new packages for those two components.

Unfortunately, that complicates the edge install for us, just a little bit. For a given IoT Edge release, you need to spelunk a little to get the latest versions. For the Moby engine and CLI, you can usually find the latest versions on the packages.microsoft.com site. That’s the easier part. For the IoT Edge components, unfortunately, it requires a little more digging. For the release you want to install, say 1.0.9, you have to work backwards through the releases to find the latest one in which we updated the libiothsm and iotedge components. So, you need to go find those links, under ‘assets’ of each release, and capture the latest URLs to the libiothsm and iotedge packages.

Sorry about that.  The good news is, that’s the hard part.

Finally, install IoT Edge

Ok, finally, we can install IoT Edge.

The first step is to download the packages. Make a folder on your device to hold them, cd into that folder, and then run:

wget https://packages.microsoft.com/centos/7/prod/moby-cli-3.0.10%2Bazure-0.x86_64.rpm
wget https://packages.microsoft.com/centos/7/prod/moby-engine-3.0.10%2Bazure-0.x86_64.rpm
wget https://github.com/Azure/azure-iotedge/releases/download/1.0.9/libiothsm-std_1.0.9-1.el7.x86_64.rpm
wget https://github.com/Azure/azure-iotedge/releases/download/1.0.9/iotedge-1.0.9-1.el7.x86_64.rpm

Those URLs are valid as the ‘latest’ releases of each component as of the 1.0.9 version of IoT Edge. As future versions ship, you’ll need to check whether the various components have been updated, and replace the URIs appropriately.

Next, we just install IoT Edge components with the following commands (run them one at a time, as they ask a y/n question in the middle):

sudo yum install moby-cli-3.0.10+azure-0.x86_64.rpm
sudo yum install moby-engine-3.0.10+azure-0.x86_64.rpm
sudo rpm -Uhv libiothsm-std_1.0.9-1.el7.x86_64.rpm
sudo rpm -Uhv iotedge-1.0.9-1.el7.x86_64.rpm

Obviously, if you had to download newer package versions, replace the names accordingly.

Once those packages finish installing, all you need to do is open /etc/iotedge/config.yaml, add in your connection string or DPS information, and restart iotedge with:

sudo systemctl restart iotedge
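
For reference, the part of config.yaml you’re editing looks something like this (a sketch assuming manual provisioning with a connection string; the values are placeholders):

```yaml
# /etc/iotedge/config.yaml (excerpt) - manual provisioning with a connection string
provisioning:
  source: "manual"
  device_connection_string: "HostName=<your-hub>.azure-devices.net;DeviceId=<device-id>;SharedAccessKey=<key>"
```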

There you go.  Enjoy.  As always, if you have issues, feel free to hit me up in the comments!

Azure Device Provisioning Service over MQTT using x509 Certificates

In a previous post, I showed you how to register a device with Azure’s Device Provisioning Service (DPS) over raw MQTT. A reader/commenter asked how the process would differ if we used x.509 certificate-based authentication vs. the SAS-token-based authentication that the article was based on.

Since it’s inevitable that I’ll run across this in a customer situation, I thought I’d tackle it. Based on the knowledge from the previous article, as well as my article on DPS over the REST APIs, it was pretty straightforward. The process was nearly identical except for a few fields in the connection information: specifically, specifying the IoT Hub root cert and the device cert/key, and leaving off the SAS token. I’ll cover the details below.

Generating Certs

The steps for generating the device certificates and creating the enrollment in DPS are the same as outlined in my DPS over REST API article, specifically the sections titled “Prep work (aka – how do I generate test certs?)” and “X.509 attestation with individual enrollments – setup”, so I won’t repeat them here. For the screenshots below, I called my enrollment registration ID ‘dpstestdev01’.

The only other thing you need is the IoT root CA cert. This is the Baltimore-based root CA cert from which all the IoT Hub and DPS “server-side” TLS certificates are generated. The client needs this to validate that it is indeed talking to the genuine Microsoft endpoint and not a ‘man in the middle’. The easiest way to get this cert is to open this file from the Azure IoT C SDK, copy everything from (and including) line 23 to (and including) line 43, then strip out the quotes at the beginning and end of each line, and strip the ‘\r\n’ off the ends. Save the file with a .pem extension. We will call that the “DPS-root-CA” cert.
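
If you’d rather not do the stripping by hand, a small Python sketch can do it (the certificate lines in `raw` below are a truncated, made-up sample; paste in the real lines you copied from the SDK file):

```python
# Sketch: turn the C string literals copied from the SDK's certs.c into a PEM file.
# 'raw' is a truncated, made-up sample; paste the real copied lines here.
raw = r'''
"-----BEGIN CERTIFICATE-----\r\n"
"MIIDdzCCAl+gAwIBAgIEAgAAuTANBgkqhkiG9w0BAQUFADBaMQswCQYDVQQGEwJJ\r\n"
"-----END CERTIFICATE-----\r\n"
'''

pem_lines = []
for line in raw.strip().splitlines():
    line = line.strip().strip('"')       # drop the surrounding C quotes
    line = line.replace('\\r\\n', '')    # drop the literal \r\n escapes
    if line:
        pem_lines.append(line)

pem = '\n'.join(pem_lines) + '\n'

with open('dps-root-ca.pem', 'w') as f:  # the "DPS-root-CA" cert
    f.write(pem)
```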

Client Setup

You can leverage any MQTT 3.1.1 client to talk to DPS, however, like in previous articles, I’m going to use MQTT.fx, which is an excellent MQTT GUI tool for ‘manually’ doing MQTT.  It allows you to get a really good feel for what’s happening under the covers without writing a bunch of code.

Through a series of screenshots below, I’ll show you my configuration.

The first step is to open MQTT.fx, click on the little ‘gear’ button next to the Connect button, and then, in the bottom right, click on the “+” button to create a new connection. You can call it anything (I called mine ‘dpscert’ in the screenshots below).


This screenshot shows the ‘general’ settings…

  • The type is MQTT Broker
  • The broker address is the global DPS endpoint (global.azure-devices-provisioning.net)
  • The port is the MQTTS (tls) port 8883
  • The client ID is the ‘registration id’ from DPS, which in this instance is the CN/Subject name you used for your device cert when you generated it
  • The only other change from the defaults is to explicitly choose MQTT version 3.1.1


This screenshot shows the user credentials. For DPS, the user-id is of the form:

{idScope}/registrations/{registration id}/api-version=2019-03-31

where {idScope} is the idScope of your DPS instance.

Note that, unlike the SAS-Token case, the password is BLANK for x.509 authentication.


This screenshot is the most important one and the biggest difference from the SAS-Token case.

  • Make sure you explicitly select TLS version 1.2 (we don’t support older versions)
  • In our use case, we are using self-signed certificates, so choose that option
  • For the “CA files” setting, this is the DPS-root-CA cert we captured from github earlier (the Baltimore root cert)
  • For the Client Certificate file, this is the device certificate we created earlier
  • For the Client Key file, this is the private key for the device cert we generated earlier
  • Make sure to check the “PEM formatted” checkbox, as that’s the format our certs are in

All the other tabs are just left default.

Click OK to close this dialog, then click the “Connect” button to connect to DPS.

From this point on, you subscribe and publish exactly like you did in the previous article and/or as specified in the official DPS documentation here.

Enjoy – and as always, let me know if you run into any issue.  Hit me up on Twitter (@BamaSteveB), email (steve.busby ( at ) microsoft.com) or in the comments below.

Azure IoT Device Provisioning Service (DPS) over MQTT

Continuing the theme of “doing things on Azure IoT without using our SDKs”, this article describes how to provision IoT devices with Azure IoT’s Device Provisioning Service over raw MQTT.

Previously, I wrote an article that describes how to leverage Azure IoT’s Device Provisioning Service over its REST API, as well as an article about connecting to IoT Hub/Edge over raw MQTT. Where possible, I do recommend using our SDKs, as they provide a nice abstraction layer over the supported transport protocols and free you from all that protocol-level detail. However, we understand there are times and reasons where it’s just a better fit to do things over the raw protocols.

To support this, the Azure IoT DPS engineering team has documented the necessary technical details to register your device via MQTT. That document may provide enough detail for you to figure out how to do it, but since I needed to test it for a customer anyway, I thought I’d capture a real-world example in hopes it can help others.

To make the scenario simpler, I chose to just use symmetric key attestation, but this would still work with any of the attestation methods supported by DPS.

Create individual enrollment

The first step is to create the enrollment in DPS. In the Azure portal, in your DPS instance, grab your ID Scope from the upper right of the ‘Overview’ tab, as shown below (I’ve blacked out part of my details, for obvious reasons).


Copy it somewhere like Notepad or equivalent; we’ll use it later. Once we have that, we can create our enrollment. On the left nav, click on “Manage Enrollments” and then “Add Individual Enrollment”. For “Mechanism”, choose Symmetric Key, and enter a registration ID of your choosing (for the example further below, I used ‘my-mqtt-dev01’).


Click Save. Then drill back into your enrollment in the portal, copy the “Primary Key”, and save it for later use.

Generate SAS token

Once you’ve created the enrollment and gotten the device key, we need to generate a SAS token for authentication to the DPS service. A description of the SAS token, and several code samples for generating one in various languages, can be found here. Some of the inputs (discussed below) are different for DPS versus IoT Hub, but the basic structure of the SAS token is the same.

For my purposes, I used this python code to generate mine:


from base64 import b64encode, b64decode
from hashlib import sha256
from time import time
from urllib.parse import quote_plus, urlencode
from hmac import HMAC

def generate_sas_token(uri, key, policy_name, expiry=3600000000):
    ttl = time() + expiry
    sign_key = "%s\n%d" % (quote_plus(uri), int(ttl))
    signature = b64encode(
        HMAC(b64decode(key), sign_key.encode('utf-8'), sha256).digest()
    ).decode('utf-8')

    rawtoken = {
        'sr': uri,
        'sig': signature,
        'se': str(int(ttl))
    }

    if policy_name is not None:
        rawtoken['skn'] = policy_name

    return 'SharedAccessSignature ' + urlencode(rawtoken)

uri = '[dps URI]'
key = '[device key]'
policy = 'registration'
expiry = [SAS token duration]

print(generate_sas_token(uri, key, policy, expiry))



  • [dps URI] is of the form [DPS scope id]/registrations/[registration id]
  • [device key] is the primary key you saved earlier
  • [SAS token duration] is the number of seconds you want the token to be valid for
  • policy is required to be ‘registration’ for DPS SAS tokens

Running this code will give you a SAS token that looks something like this (I’ve changed a few random characters to protect my DPS):

SharedAccessSignature sr=0ne00055505%2Fregistrations%2Fmy-mqtt-dev01&skn=registration&sig=gMpllKo7qS1VR31vyfsT6JAcc4%2BHIu2gQSyai0Uz0KM%3D&se=1579698526
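
If you want to sanity-check a token you’ve generated, you can pull it apart with just the standard library (an illustrative sketch, using the sample token above):

```python
from urllib.parse import parse_qs

# Illustrative: take a DPS SAS token apart to check its fields
token = ('SharedAccessSignature sr=0ne00055505%2Fregistrations%2Fmy-mqtt-dev01'
         '&skn=registration&sig=gMpllKo7qS1VR31vyfsT6JAcc4%2BHIu2gQSyai0Uz0KM%3D'
         '&se=1579698526')

fields = parse_qs(token.split(' ', 1)[1])  # parse_qs also percent-decodes the values
resource = fields['sr'][0]                 # [scope id]/registrations/[registration id]
print(resource)
```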

Now that we have our authentication credentials, we are ready to make our MQTT call.

Example call

The documentation does a decent job of showing the MQTT parameters and flow (read it first!), so I’m not going to repeat that here. What I will show is an example call, with screenshots, to ‘make it real’. For my testing, I used MQTT.fx, which is a pretty nice little interactive MQTT test client.

Once you download and install it, click on the little lightning-bolt button so you can create a new connection to an MQTT server (rather than the default localhost).


After that, click on the settings symbol next to the edit box to open the settings dialog that lets you edit the various connection profiles:


On the “Edit Connection Profiles” dialog, in the very bottom left hand corner, click the “+” symbol to create a new connection profile.

Give your connection a name and choose MQTT Broker as the Profile Type


Enter the following settings in the top half of the dialog:

  • for “Broker Address”, use ‘global.azure-devices-provisioning.net’
  • for “Broker Port”, use “8883”
  • for Client ID, enter your registration ID you used in the portal for your device

Click on the General ‘tab’ at the bottom.  As in the screenshot above, for MQTT Version, uncheck the “Use Default” button and explicitly choose version 3.1.1.  Leave other settings on this tab alone.

Click on the “User Credentials” tab:

  • for “User Name”, enter [DPS Scope Id]/registrations/[registration id]/api-version=2019-03-31  (replacing the scope id and registration id with your values)
  • for “Password”, copy/paste in your SAS token you generated earlier
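
In code, that user name is just string concatenation; here’s a small sketch with made-up sample values:

```python
# Sketch: building the DPS MQTT user name (the ID Scope here is a made-up sample)
scope_id = '0ne00055505'           # your DPS ID Scope
registration_id = 'my-mqtt-dev01'  # the enrollment's registration ID

username = f'{scope_id}/registrations/{registration_id}/api-version=2019-03-31'
print(username)
```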


Move to the SSL/TLS tab.   Check the box for “Enable SSL/TLS” and make sure that TLSv1.2 is chosen as the protocol


Leave the Proxy and LWT tabs alone.

Click Ok to save the settings and return to the main screen

Click on the Connect button and you should get a successful connection (you can verify by looking at the “log” tab)

Once connected, navigate to the “Subscribe” tab.  We will set up a subscription on the dps ‘response’ MQTT topic to receive responses to our registration attempts from DPS.  On the “Subscribe” tab, enter ‘$dps/registrations/res/#’ into the subscriptions box, choose “QoS1” from the buttons on the right, and click “Subscribe”.  You should see an active subscription get set up and waiting on responses.


Click back over on the “Publish” tab and we will make our registration attempt.  In the publish edit box, enter $dps/registrations/PUT/iotdps-register/?$rid={request_id}

Replace {request_id} with an integer of your choosing (1 is fine to start with). This lets us correlate requests with the responses we get back from the service. For example, I entered:


In the big edit box beneath the publish edit box, we need to enter a ‘payload’ for the request. For DPS registration requests, the payload takes the form of a JSON document like this: {"registrationId": "<registration id>"}

for example, for my sample it’s:

{"registrationId": "my-mqtt-dev01"}
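
Putting the publish topic and payload together in code looks something like this (a sketch, using the sample registration ID above):

```python
import json

# Sketch: the DPS registration publish topic and JSON payload
request_id = 1
topic = f'$dps/registrations/PUT/iotdps-register/?$rid={request_id}'
payload = json.dumps({'registrationId': 'my-mqtt-dev01'})
print(topic)
print(payload)
```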


Hit the “Publish” button.

Flip back over to the Subscribe tab and you should see, on the right-hand side of the screen, that we’ve received a response from DPS, something like this:


This indicates that DPS is in the process of ‘assigning’ and registering our device to an IoT Hub. This is a potentially long-running operation, so we have to query for its status. To do that, we are going to publish another MQTT message. For that, we need the ‘operationId’ from the message we just received. In the screenshot above, mine looks like this:


Copy that ID as we’ll use it in the next step.

To check on the status of the operation, switch back over to the Publish tab and replace the values in the publish edit box with this


replacing {request_id} with a new request id (2 in my case) and the {operationId} with the operationId you just copied. For example, with my sample values and the response received above, my request looks like this:


Delete the JSON in the payload box and click “Publish”.
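
For reference, per the DPS MQTT documentation, that status-query topic has the following shape (the operationId below is a made-up placeholder; use the one you copied):

```python
# Sketch: the DPS operation-status query topic (operationId is a made-up placeholder)
request_id = 2
operation_id = '4.79d232f7f5b1ea51.a1b2c3d4-e5f6-4a7b-8c9d-0e1f2a3b4c5d'  # hypothetical

topic = ('$dps/registrations/GET/iotdps-get-operationstatus/'
         f'?$rid={request_id}&operationId={operation_id}')
print(topic)
```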

Switch back over to the Subscribe tab and you should notice that you’ve received a response to your operational status query, similar to this:


Notice the status of “assigned”, as well as details like “assignedHub”, that give the state of the successful registration and the connection details.

If you navigate back over to the azure portal and look at the enrollment record for your device (refresh the page.. you may have to exit and re-enter), you should see something like this:


This indicates that our DPS registration was successful.

In the “real world”, in your application, you’ll make the registration attempt and then poll the operational status until it reaches the state of ‘assigned’. There will be intermediate states while it is being assigned, but doing this manually through a GUI, I’m not fast enough to catch them 🙂

Enjoy – and let me know in the comments if you have any questions or issues.

Connect MXChip DevKit to Azure IoT Edge

A customer of mine who is working on an IoT POC to show their management wanted to connect the MXChip Devkit to IoT Hub via IoT Edge. This turned out to be trickier than it should be, as the connection between the MXChip and the IoT Edge box is, like all IoT Hub/Edge connections, TLS-encrypted. So you have to get the MXChip to trust the “tls server” certificate that IoT Edge returns when a client tries to connect. Thanks to some great ground-laying work by Arthur Ma, I wrote a hands-on lab walking you through this scenario. The lab can be found on my team’s github site here

Enjoy, and let me know if you have any problems.