VMware and Arm at the Edge
The prevalence of Arm-based devices in the Edge ecosystem hasn’t…
Before we got too far down the path of actual development, we needed an architecture. We landed on a three-tier system: sensors, edge, and cloud. Those terms are somewhat amorphous, so let's fill in some context for each.
This one is fairly self-explanatory. The sensors need an Arm device to read the data and report back. They are designed to operate in an environment where connectivity and power are not expected to be reliable, and only outbound connectivity is required. The sensors would cache data until they can contact the Edge Tier. The Sensor Tier would collect the data from the sensor devices, then transmit it to the Edge Tier via MQTT.
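The cache-until-connected behavior described above could be sketched roughly like this. The class, queue bound, and `publish` callback are our own illustration, not the project's actual code:

```python
import json
from collections import deque

class ReadingCache:
    """Buffer sensor readings until the Edge Tier is reachable.

    A real sensor node would likely persist this queue to flash or SD
    card so readings survive a power loss; this in-memory sketch only
    shows the retry flow.
    """

    def __init__(self, maxlen=1000):
        # Bound the buffer so a long outage cannot exhaust memory;
        # the oldest readings are dropped first once the cap is hit.
        self.queue = deque(maxlen=maxlen)

    def add(self, reading):
        self.queue.append(json.dumps(reading))

    def flush(self, publish):
        """Try to send every cached reading via publish(payload) -> bool.

        Stops at the first failure so ordering is preserved and the
        unsent readings stay cached for the next attempt.
        """
        sent = 0
        while self.queue:
            if not publish(self.queue[0]):
                break
            self.queue.popleft()
            sent += 1
        return sent
```

Here `publish` would wrap whatever MQTT client call the sensor application uses, returning whether the broker accepted the message.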
MQTT is a lightweight, standard publish/subscribe protocol designed for sending telemetry from constrained devices. It has mature tooling and a small footprint, which made it a good fit for this project.
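As a concrete illustration, one reading published over MQTT might look like the following. The topic layout and field names here are our own example, not the scheme the project actually used:

```python
import json
import time

def make_mqtt_message(site, sensor_id, metric, value):
    """Build an MQTT topic and JSON payload for one sensor reading.

    Hypothetical topic scheme: <site>/<sensor_id>/<metric>, so the
    edge broker can subscribe with wildcards such as
    "garage/+/temperature".
    """
    topic = f"{site}/{sensor_id}/{metric}"
    payload = json.dumps({
        "sensor": sensor_id,
        "metric": metric,
        "value": value,
        # Timestamp at the source so cached readings keep their
        # original measurement time when they are sent later.
        "ts": int(time.time()),
    })
    return topic, payload
```

With a client library such as paho-mqtt, that pair would then be handed to the client's `publish(topic, payload)` call.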
The Edge Tier is designed to be “near” some of the sensors, and requires reasonably reliable power and connectivity. Unlike the Sensor Tier, inbound connectivity would be required, so there cannot be network address translation (NAT) between the edge device and the internet at large. These devices also do not need persistent storage, as they simply pass data along to the Cloud Tier. Their purpose is to filter requests from the Sensor Tier (validating that the sensors are allowed to send data to the platform) and convert the data stream from MQTT to a query that the Cloud Tier’s database can understand.
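The filter-and-translate step can be sketched as a small function that checks an allow-list and rewrites a JSON MQTT payload into InfluxDB's line protocol. The payload fields and allow-list mechanism are our own illustration; in the project this job was handled by off-the-shelf MQTT-to-InfluxDB tooling:

```python
import json

# Hypothetical allow-list of sensor IDs permitted to write data.
ALLOWED_SENSORS = {"rpi-01", "rpi-02"}

def mqtt_to_line_protocol(payload):
    """Convert a JSON MQTT payload into an InfluxDB line-protocol write.

    Returns None for messages from unknown sensors, which is the Edge
    Tier's filtering role. InfluxDB line protocol has the shape:
      <measurement>,<tag_key>=<tag_value> <field_key>=<field_value> <timestamp>
    """
    data = json.loads(payload)
    if data.get("sensor") not in ALLOWED_SENSORS:
        return None
    # Line-protocol timestamps are in nanoseconds by default.
    ts_ns = int(data["ts"]) * 1_000_000_000
    return (f'{data["metric"]},sensor={data["sensor"]} '
            f'value={data["value"]} {ts_ns}')
```

The resulting string is what an edge node would POST to InfluxDB's write endpoint.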
The Cloud Tier is where all the storage is housed. It is expected to have reliable power and connectivity, and to be remotely accessible. The Cloud Tier is also where the user interface would reside: it would allow the Edge Tier to read and write to the database, and users at large would be able to access a dashboard to view the data in real time.
With the goal of everything being Arm-based, we decided to start at the top and work our way down. For the Cloud Tier we chose Packet’s c2.large.arm system, which has an Ampere eMAG processor at the heart of it. For the software, we decided to use Grafana for the dashboard, and InfluxDB for storage. We wrote automation using Ansible to deploy and configure the Grafana and InfluxDB software.
For the Edge Tier, we decided to use 96Boards HiKey 970 single-board computers, with Mosquitto as the MQTT broker and mqttwarn from JP Mens to convert the MQTT messages into InfluxDB queries.
For the Sensor Tier, we grabbed a Raspberry Pi 3B+ system, with Sparkfun Qwiic I2C sensors, and Mbed Linux OS with Pelion for device management. This allowed us to make a Python-based application and put it in a container for deployment on the devices.
For the actual sensors, we went with the Qwiic platform to allow for easy swapping of sensors and to reduce wiring complexity. We started by adding the Qwiic Environmental Combo Sensor and a GPS breakout to get position information for the sensor. We tried to use a lightning detection sensor, but had problems getting it to work correctly (and later discovered that its I2C functionality was unreliable). Here’s a photo of some of the sensors attached to a Raspberry Pi 3B+:
All the software we wrote is freely available: the Python code and container, as well as the Ansible playbooks for server configuration. The Ansible and configuration information can be accessed here and the container information can be viewed here.
While the system worked, there were a lot of limitations to contend with, and a few things we would reconsider for future implementations, including:
The Raspberry Pi is a fantastic device, but it was overkill for our needs. A microcontroller using a Cortex-M series processor core would be more than sufficient. We would also swap out Mbed Linux OS for Mbed OS or Zephyr as the OS/firmware for the microcontrollers. This switch would also allow the devices to be powered from solar and batteries instead of a wired connection, further reducing the required infrastructure.
Our sensors also required Ethernet connectivity. WiFi could be used, but its configuration is non-trivial at best. With a microcontroller-based solution, a simpler connectivity option would be advisable. We were looking at LoRaWAN for the sensor nodes: a LoRaWAN gateway in close physical proximity to the sensor nodes would relay the information over The Things Network back to our edge nodes. This would allow for extremely low-power devices with no need for network configuration in the field.
All the work we did for this project is available under a permissive MIT open source license so that others can improve upon what we began.
We hope it can help folks get a start on development and not fall victim to all the same pitfalls we did! To learn more, check out the Arm AIoT Dev Summit on YouTube.