Node-RED Meets Resin.io for Large Scale IoT

In this post I evaluate Node-RED for production deployments and argue that it is not only for toy DIY projects: the combination of Resin.io and Node-RED is a viable option for an IoT hub.

Node-RED is a lightweight event-processing engine built on top of Node.js. It was developed by IBM's Emerging Technology group and now attracts a large community that extends Node-RED with many plugins. It aims to make IoT development easy for non-tech-savvy people. As you can read on their website, the installation is a single npm call:

sudo npm install -g --unsafe-perm node-red

Then, in a browser, you attach nodes (i.e., processing units such as MQTT pub/sub, reading/triggering GPIO, parsing messages, connecting to APIs, …) to each other to form flows. Normally I don't like visual programming tools, but this one works and is extremely easy to learn. A screenshot of a rather complex flow from a home automation project looks like this:

Node-RED example

Other details about Node-RED are:

  • Hardware – Raspberry Pi, BeagleBone Black, Arduino
  • Network – HTTP, TCP, UDP, MQTT, WebSocket
  • Parsers – CSV, JSON, XML
  • Transformations – JavaScript Functions, Mustache Templates
  • Social – Twitter, Twilio, Email, IRC, RSS, XMPP
  • Storage – Filesystem, MongoDB, MySQL, PostgreSQL, Redis
  • Analysis – Sentiment, Statistics

What else do you need in a corporate solution?

Automated testing is possible with the usual Node.js testing tools. I am not a Node.js expert, but I see that the node-red code base on GitHub uses Travis CI for unit testing and code coverage (for coverage it uses the package "Istanbul", which is also the name of the city where I obtained my BS and MS degrees :) ). For functional tests I couldn't find anything, but I'm sure a Node.js expert can point out many alternatives.
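As a minimal sketch of how such a setup looks locally (assuming Mocha test files under a test/ directory; the exact scripts in the node-red repository may differ):

npm install --save-dev mocha istanbul
./node_modules/.bin/istanbul cover ./node_modules/.bin/_mocha -- test/
# the coverage report ends up in ./coverage/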

Versioning is also easy, since the nodes and the flows are all in JavaScript and JSON format.
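For example, the running flow configuration can be pulled from Node-RED's Admin HTTP API and committed like any other source file (a sketch; localhost:1880 is Node-RED's default port, and the git repository is assumed to exist):

curl -s http://localhost:1880/flows -o flows.json
git add flows.json
git commit -m "snapshot of the current Node-RED flows"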

Ease of use/development is quite good. The visual tool makes it feel like a kindergarten programming assignment, although security and MQTT connections definitely require some background knowledge. Support is community-only, which is a drawback, but the node-red website states that there are 120 000 modules available \o/. So there are people working behind it.

It should be lightweight. And it is: it runs on Node.js and was designed for the RPi from the beginning.

Deployment is a little bit scary, but a solution exists. First of all, by deployment I mean over-the-air updates of the platform at any time without manual intervention. Here come Docker containers and Resin.io.

I was already running Node-RED in Docker containers on my laptop and was wondering whether we could run it in a container on an RPi. It turns out that the current record for the number of containers on an RPi is 500; on this website you can read more about how. So if we can build an application that uploads containers to an RPi remotely and switches between them, then we have an over-the-air update mechanism. And it has already been done by Resin.io, which is also the point where you have to pay some money for your project ($1000 per month for 1000 devices). After you run "git push" to your repository, Resin compiles the code into a container and deploys it to your RPi. For run-time configuration it provides management of environment variables and local storage based on container volumes. The reference architecture of Resin is as follows:

Resin.io stack
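The developer-side workflow is plain git (a sketch; the username, application name and remote URL are placeholders following Resin.io's documented pattern):

git remote add resin username@git.resin.io:username/myapp.git
git push resin master   # Resin builds the container image and deploys it over the air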

To conclude, creating IoT projects with the RPi is easy with Node-RED, and you can do massive deployments via containers and Resin.io. The only thing I couldn't find is real-life Node-RED deployments that are not DIY PoC projects, which makes me wonder whether there is a show-stopper in the performance. The good thing is that you don't have to bind yourself fully to Node-RED: it can be embedded in other applications. For instance, zetta.js from Apigee for IoT hubs is based on Node.js as well, and we could embed Node-RED inside it as an event-processing engine. And Resin.io is ready for the containerized deployment.

Creating Bridges for ContikiOS Minimal-net (for Linux)

Minimal-net lets us run ContikiOS as a normal Linux application, which makes debugging much easier, especially in the case of segmentation faults. However, to set up a minimal-net network we need some networking tricks. There is a nice tutorial on setting up an RPL network on minimal-net for Windows, but none for Linux. Here, I will try to explain the Linux case.

As I am working on DTLS, my example setup has two nodes, a client and a server. Each node lives on its own TAP interface, so we also need to bridge them. Although bridging is enough to connect the two nodes, a few further steps are needed for a connection to the outside world.
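For reference, a minimal-net binary is built and started like any other Contiki application (a sketch; dtls-server is a placeholder name for whatever app you use):

make TARGET=minimal-net dtls-server
sudo ./dtls-server.minimal-net   # root is needed to create the TAP interface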

Creation of TAP interfaces

When you compile and run your code with minimal-net, a tap0 interface is created and brought up for you. However, the second TAP interface is not brought up, since the name tap0 is hardcoded. So we will make a tiny change to the code in contiki/cpu/native/net/tapdev6.c: we will use ifr.ifr_name to bring up the right interface (see the snprintf call below):

tapdev6.c
void
tapdev_init(void)
{
  char buf[1024];

  fd = open(DEVTAP, O_RDWR);
  if(fd == -1) {
    perror("tapdev: tapdev_init: open");
    return;
  }

#ifdef linux
  {
    struct ifreq ifr;

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TAP|IFF_NO_PI;
    if (ioctl(fd, TUNSETIFF, (void *) &ifr) < 0) {
      perror("tapdev: tapdev_init: TUNSETIFF");
      exit(1);
    }
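    /* key change: bring up whatever interface name the kernel assigned
       (tap0 for the first minimal-net process, tap1 for the second, ...) */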
    snprintf(buf, sizeof(buf), "ifconfig %s up",ifr.ifr_name);
    system(buf);
    printf("%s\n", buf);
  }
#endif /* Linux */
....

With the above code, each minimal-net process brings up a TAP interface with a fresh name (e.g., tap0, tap1).
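Starting the two minimal-net processes now yields two interfaces (binary names are assumptions, as above):

sudo ./dtls-server.minimal-net &   # creates and brings up tap0
sudo ./dtls-client.minimal-net &   # creates and brings up tap1
ip link show | grep tap            # both interfaces should be listed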

Let’s bridge the tap interfaces

The two (or more) TAP interfaces are disjoint, so we need to connect them to each other. I prefer connecting them at the link layer (i.e., MAC, like a switch), so I will use a bridge via the brctl command, which can be installed easily with your favorite package manager.

sudo brctl addbr br0

Then we add our interfaces and bring up the bridge:

sudo brctl addif br0 tap0
sudo brctl addif br0 tap1
sudo ifconfig br0 up

Done. Now the client and the server node can communicate.
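A quick sanity check (assuming the interface names used above):

sudo brctl show br0   # tap0 and tap1 should appear as bridge ports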

Reaching from the outside world

In order to reach these nodes from the outside world, we also need to add the eth0 or wlan0 interface (whichever you want) to the bridge. But this time, as an additional step, we should add routes too. First, let's update our bridge.

sudo ifconfig br0 down
sudo brctl addif br0 eth0
sudo ifconfig br0 up

Now we have a nice bridge that connects our nodes to the outer world. However, we should also provide a route to our nodes. In TinyDTLS, the IPv6 addresses aaaa::ff:fe02:232 and aaaa::ff:fe02:230 are hardcoded in the client and the server, respectively, so our bridge's address space should cover both of them.

sudo ifconfig br0 down
sudo ip -6 address add aaaa::1/128 dev br0
sudo ifconfig br0 up

Done. Now you can ping your nodes (e.g., ping6 aaaa::ff:fe02:232).
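If the ping fails, keep in mind that a /128 address does not necessarily put the whole aaaa::/64 prefix on-link; adding an explicit route is a possible fix (an assumption about your setup, adjust the prefix to your addresses):

sudo ip -6 route add aaaa::/64 dev br0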

Autonomous Social Interaction of Things - No Passwords

In my previous blog post, I described my proposal for integrating social-network-based authentication into WiFi access points. Now I want to go beyond access points. Social-network-based authentication can be applied to many other scenarios, such as:

  • Sharing/streaming any kind of file with another device that belongs to the same person or social network:
      • file exchange from laptop to smartphone,
      • streaming audio/video from smartphone to TV.
  • Controlling a car infotainment system with a smartphone:
      • the destination for the navigation device can be entered by anyone from the family, or a close friend inside the car, using their smartphone.
  • Wearable medical sensors that share information only with authorized medical staff:
      • when a patient visits a doctor, or remotely for elderly patients, only the authorized doctor should be able to collect information from the sensors.
  • Similarly, sensory information of a building should be accessible to officials in emergencies.

For all these scenarios, a device identity is a must. Like us humans, every device should somehow present its properties, capabilities and, more importantly, its owner information. Then, when devices interact, they can incorporate this social information into authentication and authorization. Since devices should be able to comprehend the information in these social profiles, a structured presentation format from the semantic web, such as RDF, is a reasonable choice.

After putting a social profile somewhere on the web, we need to establish a bridge to it, and existing password-based single sign-on solutions are the first candidates. Assume a device has an embedded password and uses the URL of its social profile as its username. There is another, malicious device, which has the same URL as username but a different password. Now, which password is the true one? We need an authentication server, like OpenID, Facebook Connect, etc. Although this seems pretty straightforward, password-based solutions have challenges: (i) the central server is aware of all access attempts, so it can abuse our privacy; (ii) how can we embed passwords, and can we remember all of them?; (iii) passwords always require a central server involved in the process. What about ad hoc interaction?

As a decentralized solution, we can bridge devices to their social profiles by using WebID. A WebID certificate is an X.509 v3 certificate (it can be self-signed) with an embedded link to a social profile. This social profile can be public and stored anywhere on the web; for more detail, please refer to the WebID spec. With a WebID-based solution, there is no dependency on any kind of central server, and ad hoc operation is possible after the first interaction between two devices. The first interaction requires internet access for authentication; in subsequent ones, the WebID certificate and profile can be stored on the peer, like the SSH authorized-keys approach. Privacy is not a concern, since even offline operation is possible and the authenticator is the peer device, not a centralized server. Lastly, embedded devices can be shipped with pre-installed WebID certificates. Then, by granting access to the device profiles to their new owners, end users can manage authentication and authorization.

WebID offers us a decentralized way of building social things. However, like every other technology, it has challenges. The main one is the computational cost of asymmetric-key cryptography. There are successful projects applying asymmetric encryption and DTLS (Datagram Transport Layer Security) on constrained devices. However, WebID as a protocol also requires parsing and reasoning over semantic web profiles, and all of this together may be too much for constrained devices. Moreover, there are RFIDs, which have no computational power at all. Therefore, we may need local gateways: for instance, our smartphones can collect sensory information and act as a gateway. The connection between the smartphone and the sensors can be encrypted via embedded passwords, while the smartphone talks the WebID protocol to the outer world.

Social WiFi Access Control

Durmus, Y. and Langendoen, K.G. WiFi Authentication through Social Networks: a Decentralized and Context-Aware Approach. In 5th Int. Workshop on Pervasive Collaboration and Social Networking (PerCol 2014).

WiFi is the dominant access-network technology due to its high capacity and zero cost, and the dominant authentication method is the use of passwords. Passwords are handy, and the logic behind them is easily understood by end users. However, we have many passwords, which are mostly the same because it is hard to invent and remember them all; the same memory issue also gives us weak passwords. In the case of WiFi, especially for home networks, the password is not really a secret, it is rather public: we share it with all of our guests.

A promising idea that deserves additional attention is the integration of social networks in WiFi authentication, which can completely remove the need for password distribution/sharing. I will refer to such integrated systems as social WiFi access points.

Existing solutions for social-network integration assign a centralized system to control the authentication process. For example, Meraki and many other companies use a captive portal to let users access the network via a social-network login. After joining the WLAN, the captive portal keeps you in a walled garden and only allows you to access a social-network authentication page or a payment page. The problems with this approach are, first of all, that you are already inside the WLAN, so any mistake in the walled-garden configuration can open holes. Secondly, authentication takes two steps: you are annoyed that you need to use a browser even if you just plan to check your email, and if the device does not have a browser at all (sensors, embedded systems), there is no way it can access the network. Lastly, with a captive portal humans are always in the loop; you cannot automate the authentication process.

A second example is Instabridge, which created its own centralized online social network and uses it to distribute the WiFi password among the friends of an AP owner. Obviously, password distribution and revocation are costly. And in a company-network scenario, you cannot track the identity of the connected customers; you only know their MAC addresses.

Apart from the above issues, centralized approaches for creating social WiFi APs cannot be used offline (i.e., they fail without Internet connectivity). Some of them are prone to single-point-of-failure and scalability problems, and generally they raise privacy concerns. To address these drawbacks, we advocate a decentralized approach in which individual APs take full control and perform the device authorization themselves; access is granted when a trust relation can be established between the AP owner and the owner of the client device, as recorded in a social network.

We design a decentralized authentication system by leveraging WebID. WebID uses well-known ontologies like Friend-of-a-Friend (FOAF) that are designed to solve the interoperability problem among online social networks. Instead of shared secrets (i.e., passwords), X.509 v3 certificates are used for authentication. These certificates connect the devices they represent to a social network by including a link (URI) to a social profile on the web. Both devices and humans must publish their social relations (owners, friends) in these profiles to allow for an integrated solution. We merge WebID with the Extensible Authentication Protocol-Transport Layer Security (EAP-TLS) and call the result EAP-SocTLS.

To validate our design of decentralized social WiFi APs, we implemented a prototype and tested it on real hardware. We used the EAP-TLS implementation that is part of hostapd as the basis for our EAP-SocTLS code. In the certificate-verification part of the hostapd code, we compare the public key to the one in the social web profile. After verification, we call an external Python library for the authorization part (i.e., searching for trust relations). Although all the extensions could have been placed in hostapd's C code, Python was selected for its ease of use. Note that our modification is completely transparent to the client side, which follows the normal EAP-TLS process when requesting service from the AP. The only requirement for the client device is to have a certificate with a link to the corresponding social-profile page.

Creating Self-Signed Certificates With WebID

On the web you can find numerous tips on creating self-signed certificates. However, the number of pages that describe adding a URI to the subject alternative name field is rather small.

With OpenSSL you have to follow three steps (plus an optional conversion):

  • create a private key.
    openssl genrsa -des3 -out myserver.key 2048
  • create a signing request.
    openssl req -new -key myserver.key -out myserver.csr
  • create the certificate using the signing request. Normally you would send the signing request to a certificate authority; however, you can also sign the certificate yourself.
    openssl x509 -req -days 365 -in myserver.csr -signkey myserver.key -out myserver.crt
  • An additional step converts the certificate to PKCS#12 format, which is a bundle of the private key and the certificate.
    openssl pkcs12 -export -out myserver.p12 -inkey myserver.key -in myserver.crt

But there is also a single-step version:

openssl req  -x509 -nodes -days 365 -newkey rsa:2048 -keyout mykey.pem -out mycert.pem

How to add WebID

A WebID is a URI that identifies an agent: a robot, a thing, etc. In WebID authentication we use certificates, and inside the certificate we need to present the URI (the WebID) under the Subject Alternative Name field. In order to add it, you need to change the openssl.cnf file; on Ubuntu it is located at /etc/ssl/openssl.cnf. Read the configuration file, you will learn a lot. Maybe there are other tricks that I am missing.

If you plan to use a Certificate Authority to sign your certificate, as described in ssl with SAN, you need to enable the v3_req section and place the URI in the subject alternative name field there. However, if you want to create your own self-signed certificate using the single-step command above, you need to place the URI under the v3_ca section instead. The reason is that you are now your own certificate authority, therefore the CA section applies.
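Putting it together, here is a minimal sketch of the self-signed case (the profile URI, file names and common name are placeholders; the section names follow Ubuntu's default openssl.cnf layout):

# minimal config: place the WebID URI in the v3_ca extensions,
# since the single-step command acts as its own CA
cat > webid.cnf <<'EOF'
[ req ]
distinguished_name = req_distinguished_name
x509_extensions    = v3_ca
[ req_distinguished_name ]
[ v3_ca ]
subjectAltName = URI:https://example.org/profile#me
EOF

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
        -subj "/CN=My Social Thing" \
        -keyout mykey.pem -out mycert.pem -config webid.cnf

# check that the URI ended up in the Subject Alternative Name field
openssl x509 -in mycert.pem -noout -text | grep -A1 "Subject Alternative Name"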