Network protocols are important in IoT architectures. Here are four of the most widely used options in the industry right now.
All distributed systems, computerized or not, rely on protocols. At their core, a protocol is the set of procedures the participants in a connection follow to exchange signals.
If you are already a developer but new to the Internet of Things (IoT), you may be tempted to use the RESTful approach typically used in the Cloud. On the other hand, you could be worried about the overhead of HTTP and contemplating using plain TCP/IP. It is also possible that you have the impression that choosing a protocol is not so important, since so many open source and commercial platforms enable you to bridge them. All three viewpoints are naïve, if not downright dangerous.
In this article, I will first explain why protocols are important in IoT architecture. Then, I will describe in detail four of the most widely used options in the industry right now.
Why Protocols Matter
Most of the time, IoT and Embedded devices are deployed in the field outside the corporate network. Very often, they will operate on battery power and, for this reason, use power-efficient microcontrollers and radios. Network transmissions are typically the first item in a device’s power budget. Moreover, network bandwidth is a scarce and expensive commodity in such environments.
“Internet-class” protocols such as TCP/IP and HTTP do not consider the processing, bandwidth, and power constraints of IoT and embedded devices. In particular, they make no effort to minimize message overhead. This explains why UDP is so popular in IoT circles. True, TCP is more reliable. However, UDP emphasizes speed and efficiency. Many IoT-specific protocols leverage UDP as their transport for this reason.
RESTful microservices are at the core of Cloud-Native applications. That is another way to say that HTTP is a cornerstone of the Cloud. However, anyone who has used a web application recently knows that HTTP is unreliable at the application level. In particular, it offers no delivery guarantees: a specific request you send to an HTTP server will not necessarily reach its destination or receive a response. This is why many transactional websites advise you not to click the “buy” button again, to avoid duplicate orders or charges.
Knowing the most widely used and best-supported options is crucial to picking the best protocol for your next project or product. To that end, I will now cover CoAP, DDS, MQTT, and LwM2M.
CoAP
Like HTTP, the Constrained Application Protocol (CoAP) is documented in a Request for Comments by the Internet Engineering Task Force (IETF): RFC 7252. CoAP leverages the Datagram Transport Layer Security (DTLS) protocol for encrypted communications. DTLS is based on the Transport Layer Security (TLS) protocol used on the web but adapted to datagram protocols such as UDP.
The creators of CoAP took their inspiration from HTTP. Like HTTP, CoAP is a request/response protocol using concepts such as content types and Uniform Resource Locators (URLs). CoAP requests use HTTP’s GET, PUT, POST, and DELETE methods, although the semantics differ slightly.
CoAP clients send requests to servers to execute an action specified through a method code on a specific resource, which is identified through a URI. Servers send back a response described by a response code that may include a resource’s representation. CoAP requests and responses are always asynchronous. Since the protocol runs on UDP or equivalent, messages can arrive out of order, be duplicated, or even get lost. CoAP provides a lightweight reliability mechanism to mitigate those problems.
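CoAP’s reliability mechanism rests on confirmable (CON) messages, which the sender retransmits with exponential backoff until it receives an acknowledgement. The timing below uses the default transmission parameters from RFC 7252 (ACK_TIMEOUT of 2 seconds, up to 4 retransmissions); this is a simplified sketch, and real stacks also randomize the initial timeout by a factor of up to 1.5:

```python
# Sketch of CoAP's confirmable-message retransmission timing (RFC 7252).
# Defaults from the RFC; actual stacks randomize the initial timeout
# between ACK_TIMEOUT and ACK_TIMEOUT * ACK_RANDOM_FACTOR (1.5).
ACK_TIMEOUT = 2.0      # seconds
MAX_RETRANSMIT = 4

def retransmission_schedule(initial_timeout: float = ACK_TIMEOUT) -> list[float]:
    """Return the wait (in seconds) before each retransmission, doubling each time."""
    return [initial_timeout * (2 ** i) for i in range(MAX_RETRANSMIT + 1)]

# With the defaults, a sender waits 2, 4, 8, 16, then 32 seconds
# before giving up on an unacknowledged confirmable message.
print(retransmission_schedule())  # [2.0, 4.0, 8.0, 16.0, 32.0]
```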
CoAP uses binary message options rather than plain text headers like HTTP to reduce message size. Moreover, CoAP packets start with a fixed-size 4-byte header much smaller than the one used in HTTP, further reducing bandwidth requirements.
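To make the compactness concrete, here is a sketch of how that fixed 4-byte header is laid out, following RFC 7252: version and message type share the first byte with the token length, followed by a code byte and a 16-bit message ID. The helper function is illustrative, not part of any real CoAP library:

```python
import struct

# Sketch of the fixed 4-byte CoAP header (RFC 7252).
# Layout: version (2 bits) | type (2 bits) | token length (4 bits),
# then the code byte (class.detail), then a 16-bit message ID.
CON, NON, ACK, RST = 0, 1, 2, 3   # message types
GET = (0 << 5) | 1                 # method code 0.01

def coap_header(msg_type: int, code: int, message_id: int, token_length: int = 0) -> bytes:
    version = 1
    first_byte = (version << 6) | (msg_type << 4) | token_length
    return struct.pack("!BBH", first_byte, code, message_id)

# A confirmable GET with message ID 0x1234 encodes to just 4 bytes.
print(coap_header(CON, GET, 0x1234).hex())  # 40011234
```

Compare that with a typical HTTP request line and headers, which easily run to hundreds of bytes of plain text.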
Eclipse Californium is a widely used and mature CoAP implementation written in Java. It provides both client and server implementations. The project has been active since April 2014 and is used by several other Eclipse IoT projects.
DDS
The Data Distribution Service (DDS) is a protocol from the Object Management Group (OMG). Development of the DDS specification started in 2001, and the most recent version of the core specification is version 1.4, made available in March 2015.
DDS uses a publish/subscribe approach. However, there is no concept of a broker like the one in MQTT. There is no central registry of clients, either. DDS nodes communicate in a peer-to-peer fashion and discover each other automatically. Messages are published to topics that clients can subscribe to. DDS topics are defined by a name, a type, and a set of QoS policies; they are not mere filters. Since the protocol is language-independent, DDS topic types are defined in various syntaxes — Interface Definition Language (IDL) is the most common choice.
DDS uses two ways to scope information access: domains and partitions. Domains are akin to virtual networks specific to the applications that joined them. They are strictly segregated. If your application needs to connect to nodes in several domains, you must open distinct connections to these domains and mediate the exchanges yourself. Partitions, on the other hand, represent a logical group of topics inside a domain. Applications must join a partition before publishing or subscribing to the topics inside.
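The scoping rules above can be illustrated with a toy model (this is not the DDS API, and the endpoint names are hypothetical): a writer’s sample reaches a reader only if both sit in the same domain, address the same topic, and share at least one partition.

```python
# Toy model of DDS information scoping: domains are strictly segregated,
# while partitions group topics inside a domain. Not a real DDS API.
from dataclasses import dataclass

@dataclass
class Endpoint:
    domain_id: int
    partitions: set[str]
    topic: str

def matches(writer: Endpoint, reader: Endpoint) -> bool:
    return (
        writer.domain_id == reader.domain_id             # domains never mix
        and writer.topic == reader.topic
        and bool(writer.partitions & reader.partitions)  # at least one shared partition
    )

w = Endpoint(domain_id=0, partitions={"telemetry"}, topic="EngineStatus")
r_same = Endpoint(domain_id=0, partitions={"telemetry", "alarms"}, topic="EngineStatus")
r_other = Endpoint(domain_id=1, partitions={"telemetry"}, topic="EngineStatus")

print(matches(w, r_same))   # True
print(matches(w, r_other))  # False: different domain, no delivery
```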
DDS is known for its very granular collection of Quality of Service (QoS) policies. You can use these policies to configure aspects such as data availability, data delivery, timeliness, resource usage, and configuration.
Eclipse Cyclone DDS is a mature and fully featured implementation of DDS hosted at the Eclipse Foundation. It has been used in several commercial implementations already. The project focuses on compliance with all the core DDS specifications. The core Cyclone DDS library is written in C, and the project team also maintains bindings in Python and C++ (ISO/IEC C++ PSM).
MQTT
MQTT is an OASIS Open standard and an ISO recommendation (ISO/IEC 20922). The most recent version is version 5.0, made available in March 2019. MQTT was invented in 1999 by Andy Stanford-Clark and Arlen Nipper. Since existing technologies did not fulfill their requirements at the time, they created a new protocol to maximize the battery life of field devices while minimizing bandwidth usage.
MQTT leverages the publish/subscribe model in a centralized architecture. Message publishers and subscribers are decoupled clients. Those clients connect to brokers, which receive published messages and route them to the appropriate subscribers. Multiple clients can receive the same message by subscribing to the same topic. A topic can also receive messages published by multiple clients.
By design, MQTT is payload agnostic. Publishers can transmit anything as a payload: text or binary encodings are supported. Consequently, you can use XML, JSON, or any format you choose to represent the data.
MQTT offers three levels of quality of service: at most once (QoS 0), at least once (QoS 1), and exactly once (QoS 2). Higher QoS levels are more reliable but can limit the scalability of the infrastructure and result in higher resource consumption on the broker’s host.
MQTT brokers can only transmit messages to connected subscribers. When a client subscribes to a topic, it will only receive messages published after the subscription is made. However, the retained messages feature of MQTT allows new subscribers to receive a copy of the last retained message published to a topic.
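The broker-centric routing and the retained messages feature can be sketched with a minimal in-memory model (a hypothetical toy, not a real MQTT broker): publishing forwards the payload to current subscribers, and a retained message is handed to any late subscriber the moment it subscribes.

```python
# Toy in-memory model of MQTT broker routing and retained messages.
# Hypothetical sketch, not a real broker implementation.
from collections import defaultdict

class TinyBroker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks
        self.retained = {}                    # topic -> last retained payload

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)
        if topic in self.retained:            # late subscriber gets the retained copy
            callback(topic, self.retained[topic])

    def publish(self, topic, payload, retain=False):
        if retain:
            self.retained[topic] = payload
        for callback in self.subscribers[topic]:
            callback(topic, payload)

broker = TinyBroker()
broker.publish("sensors/temp", b"21.5", retain=True)  # nobody is listening yet

received = []
broker.subscribe("sensors/temp", lambda t, p: received.append(p))
print(received)  # [b'21.5'] -- the retained message reached the late subscriber
```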
LwM2M
Out of the box, devices and software stacks that support CoAP are not interoperable. This is because the protocol’s RFCs do not regulate the resources servers expose or the message payloads. At a minimum, you must massage payloads and ensure messages are sent to the correct resource in the proper format. The Lightweight Machine-to-Machine (LwM2M) protocol aims to provide interoperability through an extensible resource and data model. Its steward is OMA SpecWorks, a nonprofit standards organization.
Early versions of LwM2M exclusively used CoAP as a transport. More recent versions introduced HTTP and MQTT as alternatives. However, implementations have been slow in adopting these new transports.
The LwM2M data model is based on resources, defined as information elements exposed by devices. Resources are grouped logically into objects. Each resource possesses a unique identifier within the scope of the enclosing object. OMA SpecWorks assigns unique Object Identifiers to all objects that belong to the core LwM2M and object specifications.
The resource model defines operations to create, update, and retrieve resources. Resource changes are communicated through asynchronous notifications. Resources support three operations: Read, Write, and Execute. Resources that offer Read or Write are called value resources. Resources that offer Execute are called executables. Executables are used to trigger actions. To access a resource, just refer to it through a simple URI following this pattern:
/[object id]/[object instance]/[resource id]/[resource instance]
At runtime, clients and servers instantiate objects and the resources they contain. A specific object can contain several instances of the same resource. Multiple instances of an object can also exist in parallel.
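Splitting such a URI back into its components is straightforward; the small parser below is an illustrative sketch (the helper and type names are my own, not part of any LwM2M library). Trailing segments may be omitted, so a path like /3/0/9 addresses a resource without naming a resource instance:

```python
# Sketch of parsing the LwM2M URI pattern
# /[object id]/[object instance]/[resource id]/[resource instance]
# into its numeric components. Illustrative helper, not a library API.
from typing import NamedTuple, Optional

class LwM2MPath(NamedTuple):
    object_id: int
    object_instance: Optional[int] = None
    resource_id: Optional[int] = None
    resource_instance: Optional[int] = None

def parse_lwm2m_path(path: str) -> LwM2MPath:
    segments = [int(s) for s in path.strip("/").split("/")]
    if not 1 <= len(segments) <= 4:
        raise ValueError(f"expected 1 to 4 segments, got {path!r}")
    return LwM2MPath(*segments)

print(parse_lwm2m_path("/3/0/9"))
# LwM2MPath(object_id=3, object_instance=0, resource_id=9, resource_instance=None)
```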
The Eclipse Leshan project provides Java libraries that you can leverage to build LwM2M clients and servers. The project started in 2014. It relies on Eclipse Californium as its CoAP implementation.
What comes next?
CoAP, DDS, MQTT, and LwM2M are mature, proven protocols. You will easily find open-source components implementing them. Each has also given rise to a thriving ecosystem that spans commercial products as well.
In the second part of this article, I will cover emerging protocols that address some of the weaknesses in the options I covered today. I will also discuss picking the right protocol for your next project. Thank you for reading, and see you next time!