IoT Event Processing: A Trillion-Node Network?


The Internet of Things (IoT) will result in dramatic and broad shifts in how we interact with computers. First and foremost, computers and computing devices will be all around us. Rather than today’s billion-node Internet, the network of the near future will be used by trillions of devices, people, organizations and places. A trillion-node network poses design challenges along with great opportunities. Event processing (EP) will play a big role in making a trillion-node network possible.

In their recent book Trillions: Thriving in the Emerging Information Ecology, authors Peter Lucas, Joe Bailey and Mickey McManus describe a near-future world where devices are fungible, information is liquid and an environment of ubiquitous computing and automation combine to completely change how we interact with computers and how they interact with us.

No one reading this will need an explanation of Moore’s Law but we sometimes forget its ramifications. A microprocessor with a Wi-Fi chip can now be had for $30. Thanks to Moore’s Law, there will come a time when that same combination of functionality can be had for $0.03.

What will we do with networkable computers that cost $0.03? We’ll put them everywhere. For example, your glass at the restaurant could have an embedded computer in it. When it’s nearing empty, the waiter suddenly appears. Knock it over and the embedded accelerometer can alert the kitchen staff. You might ask why we would put a computer into a cheap piece of kitchen glassware but, at $0.03, the more appropriate question is “Why not?” A decade ago, people would have said the same about many of the things that are connected now, such as bathroom scales or dog collars.

Of course, it’s not just about stemware and bathroom scales. If all of your things are connected, you’ll need to be connected, too. And by “connected,” I don’t mean with a smartphone. In this post-web world, just checking in won’t be enough: you’ll need 24/7 online representation.

The current model of connected devices won’t scale to work in a trillion-node network. I currently have about a dozen connected devices, including a Withings Wi-Fi Body Scale, a Fitbit activity monitor (before it went through the wash), some LED programmable Hue lights from Philips, a Universal Devices controller for Insteon power outlets, door sensors, motion detectors, and a Wi-Fi-enabled Radio Thermostat. Each of them has a website where I can sign up for an account and download an iPhone app to control them.

What happens when I have 100 or 200 connected things? I don’t want an app for each one. I don’t want to be the “meet point” for all of these various devices in my life, coordinating all of their action manually on my phone. I need something that’s always online and can represent me in this trillion-node network.

Since a trillion-node network won’t be just a bigger version of today’s web, what can we expect of it? In Trillions: Thriving in the Emerging Information Ecology, the authors point to nature as a model of a system of interconnected agents that has scaled far beyond even a trillion-node network. Your own body, for example, has trillions of cells and it’s just one of trillions of organisms on the planet. On the man-made side, the Internet embodies design principles that will be useful in constructing such a network.

Four characteristics are critical for this trillion-node network: resiliency, scalability, flexibility and privacy. Hence, loosely coupled, decentralized architectures will be needed. Event-driven systems will be key components of these architectures because of their inherent support for those four important properties.

To see how event-based systems make trillion-node networks more manageable, consider this example showing the difference between EP and the traditional request-response interaction style that is the basis of the Web. In a May 2001 Scientific American article on the Semantic Web, Tim Berners-Lee, James Hendler and Ora Lassila give the following scenario:

“The entertainment system was belting out the Beatles’ “We Can Work It Out” when the phone rang. When Pete answered, his phone turned the sound down by sending a message to all of the other local devices that had a volume control.”

I was immediately enamored with this vision of the Semantic Web but also confused by the tight coupling between components that it presented. When the phone sends a “turn sound down” command, it has to know which devices in the vicinity have volume controls and how to control each of them explicitly. Discovering this kind of information is difficult and computationally expensive. Adding a DVD player, lights and other devices increases the coupling even further. Knowing which commands to send to which device entails significant complexity.

Suppose instead, the scenario read: “The entertainment system was belting out the Beatles’ “We Can Work It Out” when the phone rang. When Pete answered, his phone broadcast a message to all local devices indicating it had received a call. His stereo responded by turning down the volume.”

In the second scenario, the phone doesn’t have to know anything about other local devices. The phone need only indicate that it has received a call; it raises an event. Each device can interpret that message however it sees fit or ignore it altogether. The complexity of the system is significantly reduced because individual devices are loosely coupled. The phone software is much simpler and the infrastructure to pass messages between devices is much less complex than an infrastructure that supports semantic discovery of capabilities and commands.
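The difference can be sketched as a toy publish-subscribe system. Everything here — the EventBus class, the device classes, and the event name — is a hypothetical illustration of the loose coupling described above, not a real IoT protocol:

```python
# A minimal in-memory event bus sketching the second scenario.
# All names (EventBus, Stereo, "call_received") are illustrative assumptions.

from collections import defaultdict


class EventBus:
    """Routes published events to whoever has subscribed to them."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, **data):
        # The publisher never knows who, if anyone, is listening.
        for handler in self.handlers[event_type]:
            handler(data)


class Stereo:
    def __init__(self, bus):
        self.volume = 8
        # The stereo decides for itself what a phone call means to it.
        bus.subscribe("call_received", self.on_call)

    def on_call(self, event):
        self.volume = 2  # turn itself down; the phone never commanded it


bus = EventBus()
stereo = Stereo(bus)
bus.publish("call_received", caller="Pete")  # the phone just raises an event
print(stereo.volume)  # 2
```

Note that the phone's side of the code contains no reference to the stereo at all; adding a DVD player means adding another subscriber, not changing the phone.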

The phone-call scenario illustrates an important property of event-based systems: they allow each player’s semantics to be encapsulated within the device itself. Semantic encapsulation gives event-based systems a big advantage over request-response systems in building large, scalable networks of interacting devices.

Semantic encapsulation supports dynamic binding. Consider the preceding scenario. When a DVD player is added to the room, it must be configured to receive events, but it doesn’t need to know anything about the other devices in the living room. Adding or removing a device is easy, so it can be done frequently. Semantic encapsulation and dynamic binding support many of the most important scenarios in our trillion-node network. While many of the relationships you’ll create will be permanent, many more will be transient. When I sit down in the restaurant with the smart glass we imagined earlier, I will want a relationship with my connected glass, but only for the duration of the meal. Then it will be someone else’s glass. Connections to things will even come and go as we walk past. Semantic encapsulation allows these connections to be made cheaply, so they will happen all the more frequently.
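That transient binding can be sketched with the same toy publish-subscribe idea: the glass's owner subscribes when the meal starts and unsubscribes when it ends, and no other component changes. All names here ("refill_needed", the handler) are hypothetical illustrations:

```python
# Sketch of a transient relationship on a toy event bus.
# The event name and handler are illustrative assumptions, not a real API.

from collections import defaultdict


class EventBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def unsubscribe(self, event_type, handler):
        self.handlers[event_type].remove(handler)

    def publish(self, event_type, **data):
        for handler in list(self.handlers[event_type]):
            handler(data)


alerts = []
bus = EventBus()

# The meal begins: bind to the glass for now.
bus.subscribe("refill_needed", alerts.append)
bus.publish("refill_needed", table=12)  # my glass; I hear about it

# The meal ends: drop the relationship as cheaply as it was made.
bus.unsubscribe("refill_needed", alerts.append)
bus.publish("refill_needed", table=12)  # someone else's glass now

print(len(alerts))  # 1
```

Because binding is just subscribing, making and breaking these relationships costs almost nothing, which is what lets them come and go as often as the scenario demands.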

The trillion-node network doesn’t exist yet but the drive to connected devices is inexorably leading us toward it. At present, the systems that will connect these devices together into a network that individuals can control are still under development. But event-based systems will provide an important cornerstone of this trillion-node network, making it resilient and flexible at a price that makes it practical.




About Dr. Phillip J. Windley

Dr. Phillip J. Windley is an Enterprise Architect in the Office of the CIO at Brigham Young University. He is the founder and CTO of Kynetx, creator of Fuse, an open-source connected car, and the cofounder and organizer of the Internet Identity Workshop. He is also an Adjunct Professor of Computer Science at Brigham Young University, where he teaches courses on reputation, digital identity, large-scale system design and programming languages. Phillip writes the popular Technometria blog (at www.windley.com) and is a frequent contributor to various technical publications. He is the author of the books "The Live Web" (Course Technology, 2011) and "Digital Identity" (O'Reilly Media, 2005). Phillip spent two years as CIO for the State of Utah in 2001-2002, serving on Governor Mike Leavitt's cabinet and as a member of his senior staff. Before entering public service, he was Vice President for Product Development and Operations at Excite@Home and the founder and CTO of iMALL, Inc., an early creator of electronic commerce tools. Phillip serves on the boards of directors and advisory boards of several high-tech companies. He received his PhD in Computer Science from the University of California, Davis in 1990. Follow him on Twitter at @windley.
