
Whether you are monitoring your home while on vacation, or heating up your weekend house before travelling there – your Internet of Things application uses Internet protocols to communicate between here and there. For this reason, I defined IoT as a “global network of computers, sensors and actuators connected through Internet protocols” in my book Getting Started with the Internet of Things, three years ago.

This is a greatly simplified, idealized picture of the emerging IoT. It’s the vision of a beautifully clear technology landscape, where IP packets can travel from a temperature sensor to a cloud data center and back to a heater control. Without unwieldy protocol conversion in between, without complicated gateways that are a hassle to configure or program.

The full picture today is quite the opposite: a bewildering zoo of sometimes complementary, often competing standards (“the more, the merrier”):

[Image: IoT – WoT landscape]

I often wonder what the picture will look like in another three years, or in ten. It is fun to add yet more technologies to the picture. It is less fun to place bets on particular technologies by investing my friends’ and my own lifetime, blood, sweat and tears into them. Or to give recommendations to clients that just might turn out to be horribly wrong.

High road or low road?

I’ve observed mainly two ways in which people react to this uncertainty. One way is to shudder, and then to roll up one’s sleeves: “let’s do everything we can to approach the ideal; let’s bring IP protocols everywhere – after all, IP protocols have historically won against all competing protocols”. The other way is to shrug: “that’s the way the world works; competition is a good thing, and what’s all the fuss about Internet protocols anyway – there are so many other proven technologies out in the field, let’s individually pick the right one for the job”.

Let’s call this the one “high road” versus the many “low roads” perspectives. It’s a fascinating tug of war, with many heated disputes: “you need an optimized wireless protocol for a sensor network, designed from the outset for low power consumption, otherwise you’ll have too much power drain to be practical – forget about IP protocols” versus “we can do IP header compression and other tricks, so we don’t waste much power on the wireless hop from the sensors to the router – IP is the ticket to the future”.

The technical disputes typically focus on some architecture qualities that are especially relevant over the last mile, or rather the last couple of meters, of the Internet of Things. That’s where the IoT “gets physical”: minimal power consumption of a battery-powered plant sensor in your garden, real-time determinism for controlling a conveyor belt, ease of system administration where the sysadmins are technical novices and eighty years old on average (one of our clients produces hearing aids…), ease of key management for machine tool controllers that are set up by minimally trained field technicians, etc.

Are Internet protocols “good enough” even for such scenarios? How good is good enough? Even after reading many arguments and research papers, I feel that both sides have good points, but it remains murky to me who will be right in the long term. At times I shudder myself; at other times I shrug. Internet protocols certainly have an impressive track record of bulldozing away proprietary protocols, for example IBM’s Token Ring, and of conquering even fields that are not obviously suited for them: Ethernet, for instance, has been successfully modified to guarantee real-time behavior for industrial applications. (Although it can be tough to choose among the currently more than 20 (!) proposals vying for the crown among real-time Ethernet wannabe standards…)

Is it thus simply a matter of time until Internet protocols push aside all other “legacy” protocols, as their history seems to suggest?

Bluetooth Low Energy

There exist at least two counter-examples: communication technologies that have succeeded in parallel to the Internet, namely USB and Bluetooth. Both have proven quite immune to Internet protocols, even though tunneling IP over them is possible. So just maybe, the last meters of the IoT might remain immune as well?

In 2005, the CTO of a client told me of an exotic wireless technology being developed in a Nokia Research Center, called Wibree. It appeared very promising for short-range, low-power scenarios in home automation, for medical applications, and other use cases. A few years later, in a very clever move, Nokia passed control over Wibree to the Bluetooth Special Interest Group. The technology was subsequently modified to better fit the existing Bluetooth standard, and in 2010 became an official part of Bluetooth 4.0. Its current name is Bluetooth Low Energy (BLE), or Bluetooth Smart as a marketing label.

[Image: BLE–GSM gateway V4]

The one feature of BLE that makes it poised for explosive growth – its incompatibility with IP protocols notwithstanding – is its minimal extra cost compared to a classic Bluetooth solution: it costs very little to upgrade an existing Bluetooth-enabled product to also support BLE. Starting with the iPhone 4s, Apple has begun to support BLE in all its products. Not just by including newer Bluetooth chips, but also with an API that you can use for developing BLE-enabled apps. Such an app, together with the device it connects to, is sometimes called an “appcessory.” For example, a field technician may use his smartphone to set up a new pump over BLE – the pump doesn’t need its own LCD screen for this purpose, not even a USB plug. When necessary, the app may even act as a temporary gateway from the pump to the Internet, e.g. so that the pump can fetch new firmware.

[Image: BLE advent wreath, 2012]

Of course, appcessories need not be all business: one of our first forays into this new field was an electronic advent wreath, in time for Christmas 2012. It was an interesting exercise. For example, Apple expects you to send them one of your devices; otherwise they won’t approve your app for the App Store. We idly wondered whether BMW, should they ever build BLE support into one of their luxury cars, would also need to send such a car to Apple.

Last year, Google at long last added BLE support to the Android API as well, and this year Microsoft should follow suit with Windows Phone. This will make BLE a true cross-platform technology for appcessories, from fitness devices like shoe sensors or heartbeat monitors to smart watches and everything in your proximity that you might want to control: doors, TV sets, sprinklers in the garden, etc.

The new Google API was an opportunity to do an Android app for the advent wreath as well, for Christmas 2013. Google’s BLE implementation was very early and not yet as mature as Apple’s, which has some issues of its own. But things are improving steadily, and developers are exchanging their experiences, e.g. here on the Facebook group that we set up for this purpose.

Is it here to stay?

BLE is probably the one new communication technology that isn’t threatened by the Internet juggernaut. Even at its tender age, it is already quite the bully itself: I have recently heard of a tender for a building automation project where BLE support was mandated for the lighting equipment – an area where I would have expected ZigBee. It seems that the argument “you can control it from a smartphone or tablet” beats nearly any counter-argument you could find against BLE. Sometimes it gets unreal: we have been contacted by a startup that has built its business idea on the assumption that it is possible to connect hundreds of BLE sensors, throughout a large commercial building, directly to a single GSM gateway. Unfortunately, BLE at this time is really a technology for the last couple of meters; it won’t reliably work across several floors and a dozen walls…

Service-oriented architecture for devices

Putting on my hat as a software architect, I find one aspect of BLE particularly intriguing, and sometimes irritating as well: it folds together very low-level optimizations with very high-level architectural concepts. It is designed for minimal power consumption, trying hard to minimize the number of bits and bytes that need to be transmitted. Yet it also defines a so-called Generic Attribute Profile (GATT), which includes characteristics (e.g. the air temperature in a room or the open/closed state of a valve), sets of related characteristics called services, and sets of related services called profiles. A profile corresponds to a use case: a blood pressure measurement profile is for, well, measuring blood pressure. A service is a reusable interface, e.g. a device information service that provides information such as the device’s manufacturer and model number.
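As a rough illustration of this hierarchy, here is a minimal sketch in Python. The class names and structure are my own invention, not any real BLE library; the 16-bit UUIDs are, to the best of my knowledge, the Bluetooth SIG assigned numbers for these standard services and characteristics:

```python
from dataclasses import dataclass

# Illustrative model only: frozen dataclasses mimic the fact that a
# published GATT service definition is immutable.

@dataclass(frozen=True)
class Characteristic:
    uuid: str          # a single value, e.g. the air temperature in a room
    description: str

@dataclass(frozen=True)
class Service:
    uuid: str
    characteristics: tuple  # a set of related characteristics

@dataclass(frozen=True)
class Profile:
    name: str          # a profile corresponds to a use case
    services: tuple    # a set of related services

# A reusable device information service (SIG-assigned UUID 0x180A).
device_info = Service("180A", (
    Characteristic("2A29", "manufacturer name"),
    Characteristic("2A24", "model number"),
))

# Environmental sensing service (0x181A) with a temperature characteristic.
environment = Service("181A", (
    Characteristic("2A6E", "air temperature in a room"),
))

# A hypothetical profile bundling the services needed for one use case.
thermostat = Profile("thermostat", (device_info, environment))
```

The point of the sketch is the nesting: a characteristic is one value, a service groups characteristics into a reusable interface, and a profile groups services into a use case.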

Firmware hell: DLL hell on steroids

Services are a key concept of BLE. A service defines an immutable set of characteristics. Why immutable? And in which way could this become relevant for you? The reason is that most low-cost devices are not designed to make firmware updates easy, or possible at all. Therefore, once you begin to sell such a device, you have a problem if you want to change a service, e.g. in order to add a useful characteristic to it. Your new firmware won’t reach all the devices that are out there in the wild. The problem is that when an updated smartphone app tries to access the new characteristic on an old device, nothing good will happen. Such a version mismatch between components is known as “DLL hell” in the software world. There, a mismatch can at least be remedied by reinstalling a compatible set of files. If you have a set of incompatible devices, with no way to update their firmware, you are in an even worse “firmware hell”.

Thus the idea is to keep a service definition ServiceA immutable once devices with this service have shipped*. When you later need to change the service, you add a new service definition ServiceA1, which is a superset of ServiceA in that it has, for example, an additional characteristic. New devices implement both services (cheap to do, since the two are identical except for the additional characteristic). There are four scenarios in which the provider of a service (the “peripheral”) can meet the user of this service (the “central”). Let’s assume that the central is your smartphone:

  1. Your smartphone with an app for the original service meets an original device: it looks for ServiceA on the device, finds it, and everything is well.
  2. Your smartphone with an app for the original service meets an extended device: it looks for ServiceA on the device, finds it, and everything is well. As the app doesn’t know about the additional characteristic supported by the extended device, it cannot use it, but that’s ok. It can still provide the original functionality.
  3. Your smartphone with an app for the extended service meets an extended device: it looks for ServiceA1 on the device, finds it, and everything is well. The app knows about, and can take advantage of, the additional characteristic.
  4. Your smartphone with an app for the extended service meets an original device: it looks for ServiceA1 on the device, but the device responds with “huh? never heard of ServiceA1!”. So the app tries looking for the older ServiceA, and voilà, it can gracefully fall back to this older service. Not as cool as with a new device, but it works.

Here are the four scenarios as a diagram:

[Image: Compatibility scenarios]
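The fallback logic in the four scenarios can be sketched in a few lines of Python. This is not a real BLE API; the UUIDs and the `connect` function are invented for illustration, and a real central would discover the service UUIDs during GATT service discovery:

```python
SERVICE_A = "A000"   # hypothetical UUID of the original, immutable service
SERVICE_A1 = "A001"  # hypothetical UUID of the extended superset service

def connect(device_services):
    """Pick the best service a peripheral offers.

    `device_services` is the set of service UUIDs discovered on the
    peripheral. Returns the UUID of the service the app should use,
    or None if the device is incompatible.
    """
    if SERVICE_A1 in device_services:
        # Scenario 3: extended app meets extended device – use the
        # additional characteristic.
        return SERVICE_A1
    if SERVICE_A in device_services:
        # Scenarios 1, 2 and 4: gracefully fall back to the original
        # service; the extra characteristic is simply not used.
        return SERVICE_A
    # The "give up" case: an app that no longer supports old devices
    # tells the user the device is incompatible.
    return None

# Old device offering only ServiceA: the new app falls back to it.
assert connect({"A000"}) == "A000"
# New device offering both: the new app picks the extended service.
assert connect({"A000", "A001"}) == "A001"
```

Note that the extended app never asks the old device about the new characteristic directly; it keys everything off which service UUIDs the peripheral advertises, which is exactly why the published service definitions must stay immutable.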

If you know that older devices are not used anymore, or you don’t want to support them anymore, your next app version just needs to react differently: if a device doesn’t support ServiceA1, the app gives up and tells the user that this device is incompatible. So you don’t have to carry along old baggage forever, just as long as your users still demand it.

This mechanism is fundamental for a future where all kinds of devices and apps connect to each other, and these components may come in any combination of new, not so new or ancient versions.

Internet protocols at the core, other technologies at the edges

It looks like the maze of low roads will be with us for a long time to come, with various driveways to and exits from the high road. Especially at the “edges” of the network, some other technologies will remain with us for a while. Among them BLE, even though it doesn’t support Internet protocols, or more advanced features like mesh routing. I hope that the high interest in BLE won’t become its biggest problem: there are a number of groups working on proposals for extending BLE in one way or another; BLE hubs are part of the new Bluetooth 4.1 specification; and there are already attempts at running IPv6 over BLE…

But how about other “IoT edge technologies” that may also be useful, but not seamlessly integrated into the IP protocol world either? When do their benefits (e.g., being readily available, perfectly suited for the job, etc.) outweigh their disadvantages (e.g., having to learn a new technology, fewer “network effects”, etc.)? A similar question arises a level higher up, for the application-layer protocols: when should we focus on HTTP as the single unifying protocol for the “Web of Things”, and when should we switch to MQTT, XMPP, AMQP, ZeroMQ and the like?

What’s your take on these questions? I’d love to hear from you!

* If you happen to know Microsoft’s Component Object Model (COM), you will recognize the idea. Similar to a BLE service, a COM interface is an immutable collection of methods. To me, it is one of the most innovative and relevant ideas ever to come out of Microsoft – lost on those who later developed .NET, and only recently rediscovered by those who developed the new Windows Runtime.


