Automated Testing for the Internet of Things

The Internet of Things (IoT) is upon us. Every object you see around you, whether it’s your fridge, your electric toothbrush, your car, or even your clothes, is about to acquire a form of sentience.

Some of them already have. Fitbit watches, Nest thermostats and Apple TVs are just the tip of the iceberg when it comes to our embedded future. Sensors, embedded systems and cloud backends are coming together to bestow smartness upon a bewildering array of previously inanimate, dumb objects. IoT will be a trillion-dollar market by 2020 and every big player in the hardware space is jockeying for position, along with a flood of new IoT startups.

While embedded systems have been around for a very long time in the form of consumer electronics, IoT has given them a new dimension. Previously these systems were essentially self-contained and could work in isolation. But connected objects now need to converse with each other to function properly. This means that developers must consider ways to streamline device-to-device (D2D) and device-to-server (D2S) communication, in addition to the human interaction that comes into play as everyday objects become an extension of the Internet.

IoT and Testing

In traditional software development, code can be built and tests can be automated in production-like environments. The modern approach that makes this process repeatable and predictable is called Continuous Integration. Its purpose is to promote code quality, catch bugs early, reduce the risk of regressions and accelerate development iterations. It is very mature in the web development space, and increasingly so in mobile app development as well. In fact, test automation is so ingrained in the mind of developers that many have changed their entire approach to programming, favoring test-driven development — a paradigm where testing drives code design and development, rather than the other way around.
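
To make the test-driven idea concrete, here is a minimal sketch using Python’s built-in unittest module; the celsius_to_fahrenheit helper is hypothetical, not taken from any real IoT codebase. The tests are written first, and the implementation follows to make them pass.

```python
import unittest

# Tests written first: they specify the expected behavior of a hypothetical
# celsius_to_fahrenheit helper before that helper exists.
class TestConversion(unittest.TestCase):
    def test_freezing_point(self):
        self.assertEqual(celsius_to_fahrenheit(0), 32)

    def test_boiling_point(self):
        self.assertEqual(celsius_to_fahrenheit(100), 212)

# The implementation is then written (or fixed) until the tests pass.
def celsius_to_fahrenheit(celsius):
    return celsius * 9 / 5 + 32

if __name__ == "__main__":
    unittest.main()
```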

Though things are more complicated in the embedded world, the end user’s expectations remain the same: modern systems should just work. As the recent Nest thermostat malfunctions have taught us, IoT products are not yet as robust as their more traditional counterparts.

Until recently, continuous integration had never been a fixture of embedded software development, mostly because the interplay between hardware and software made things more difficult. To close this gap, IoT vendors are now working to make continuous integration part of their daily practices.

What Makes IoT Applications Harder to Test

Agile development methodologies require a different approach when hardware is involved. More up-front design is required, and discarding previous iterations of the product comes at a greater cost than in purely software projects. At the very least, iteration times are longer.

Continuous integration in the IoT rests on an assumption that needs to be verified: that tests can be automated. While test automation can be achieved in the embedded space, a number of hurdles need to be overcome. Code is harder to isolate because dependencies on the underlying hardware can hardly be overlooked.

IoT systems are composite applications that need:

  • The ability to gather sensor data by reacting to a variety of inputs like touch, voice and motion tracking
  • Different types of interoperating hardware, some of them well-known like Arduino boards or Raspberry Pi, others more unusual or context-specific, like a smart coffee machine, video camera or oven
  • Cloud-based servers, mobile and web applications from which the devices can be monitored and controlled
  • A Device API that enables cloud servers to routinely pull data and diagnostics from the device, and to manipulate its functionality (a minimal sketch follows this list)
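
As a minimal sketch of such a device API, using only Python’s standard library (the endpoint path, port, and state fields are illustrative assumptions), a device might expose diagnostics for a cloud server to pull like this:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory device state; a real device would read sensors.
DEVICE_STATE = {"id": "device-001", "temperature_c": 21.5, "battery_pct": 87}

class DeviceAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A cloud server polls this endpoint to pull diagnostics.
        if self.path == "/diagnostics":
            body = json.dumps(DEVICE_STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), DeviceAPIHandler).serve_forever()
```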

To complicate things even further, IoT comes with its own protocols like MQTT, CoAP and ZigBee in addition to Wi-Fi and Bluetooth. Furthermore, embedded systems are subjected to regulatory requirements such as IEC 61508 and MISRA to ensure the safety and reliability of programmable electronic devices.
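
As an example of how these protocols are exercised in tests, here is a hedged sketch that publishes a sensor reading over MQTT; it assumes the third-party paho-mqtt package and a broker at a hypothetical hostname:

```python
import json
import paho.mqtt.publish as publish

# A hypothetical sensor reading to be published to the broker.
reading = {"device_id": "thermo-42", "temperature_c": 21.5}

# Fire-and-forget publish of one message to an assumed test broker.
publish.single(
    topic="sensors/thermo-42/temperature",
    payload=json.dumps(reading),
    hostname="broker.example.com",  # assumption: a reachable MQTT broker
    port=1883,
)
```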

Programming languages used in embedded systems tend to be either C or C++. These languages are lower-level than those used in web development: they typically deliver better runtime performance, but at the cost of longer programming lead times, the hassle of explicit memory management, and a smaller pool of available talent. Lack of code portability means that cross-compilation is required between development and target environments.

Independent greenfield projects are relatively rare in embedded software; projects often have dependencies on monolithic legacy code into which CI principles are difficult to retrofit. Similarly, subsystems of IoT applications are often owned by different vendors. What happens if a bug is found in a vendor-supplied device that the application setup depends on?

Exhaustive test data, including telemetry data, can be difficult to obtain for IoT projects since it depends on real world situations where things like weather conditions and atmospheric pressure can influence outcomes.

Finally, non-functional requirements tend to be difficult to test against in systematic ways. These requirements can revolve around bandwidth limitations, battery failures, and interference.

Simulations, Quality Assurance and other Approaches

Despite the problems listed in the previous section, teams building the future of IoT are trying to achieve systematic, repeatable, and regular release processes in the embedded world.

Part of the guesswork around test planning can be limited by making familiar choices where possible in the architecture. One example is to use Linux for all embedded software projects, so that at least the operating system layer remains predictable. In general, the ability to decouple programming logic from hardware makes for easier subsystem testing. Also, focusing early efforts on prototypes or low-volume first iterations can help chip away at the guesswork and ease further planning.
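
A minimal sketch of such decoupling might look like the following; all class and method names are hypothetical:

```python
from abc import ABC, abstractmethod

class TemperatureSensor(ABC):
    """Abstract interface that hides the underlying hardware."""
    @abstractmethod
    def read_celsius(self) -> float: ...

class FakeSensor(TemperatureSensor):
    """Test double that returns canned readings instead of touching hardware."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_celsius(self) -> float:
        return next(self._readings)

class Thermostat:
    """Pure logic with no hardware dependency, so it is easy to unit test."""
    def __init__(self, sensor: TemperatureSensor, target_c: float):
        self.sensor = sensor
        self.target_c = target_c

    def heater_should_run(self) -> bool:
        return self.sensor.read_celsius() < self.target_c

if __name__ == "__main__":
    thermostat = Thermostat(FakeSensor([18.0, 22.0]), target_c=20.0)
    assert thermostat.heater_should_run() is True   # 18 °C is below target
    assert thermostat.heater_should_run() is False  # 22 °C is above target
```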

To mitigate problems around test data quality, IoT professionals record input data from real-world users and environments, to be replayed in test environments. This helps keep test environments as production-like as possible. Compliance testing against regulatory requirements can be partially automated using static analysis tools; DevOps automation solutions like Electric Accelerator include these types of checks out of the box.
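
A simple sketch of this record-and-replay idea, assuming a JSON Lines file format (an illustrative choice, as are the function names):

```python
import json

def record(readings, path="telemetry.jsonl"):
    # Capture real-world readings, one JSON object per line.
    with open(path, "w") as f:
        for r in readings:
            f.write(json.dumps(r) + "\n")

def replay(path="telemetry.jsonl"):
    # Stream the captured readings back, e.g. into a system under test.
    with open(path) as f:
        for line in f:
            yield json.loads(line)

if __name__ == "__main__":
    record([{"t": 0, "temperature_c": 21.5}, {"t": 1, "temperature_c": 21.7}])
    for reading in replay():
        print(reading)  # feed into the test harness instead of live sensors
```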

But mostly, modern approaches to testing composite applications featuring embedded software involve some form of simulation. Much like the way mobile app developers use emulators like Perfecto and Sauce Labs to recreate test conditions across a variety of smartphones, embedded software developers resort to simulation software to abstract away parts of their continuous testing environments. In particular, simulations resolve the problems of hardware availability and accelerate tests against various versions, screen sizes, and other hardware properties.

These virtualized environments are specific to each application and tend to be expensive to set up, but as is the case in traditional continuous integration, the initial effort pays for itself many times over as the project’s life extends. Not only do they afford much more flexibility in test setups and scale, but they also help put testing at the center of the team’s preoccupations.

A hardware lab will usually follow the Model-Conductor-Hardware (MCH) design pattern. The Model defines the abstract logic underpinning the system, holds the system’s state and exposes an API to the outside world. The Conductor is a stateless component that orchestrates the bidirectional interplay between the Model and the Hardware. The Hardware component serves as a wrapper around the physical hardware. It triggers or reacts to events by communicating with the Conductor.
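
A hedged Python sketch of the MCH pattern (the pattern is as described above, but the concrete light-switch example and all names are hypothetical):

```python
class Model:
    """Holds system state and the abstract logic; exposes an API."""
    def __init__(self):
        self.light_on = False

    def toggle(self) -> bool:
        self.light_on = not self.light_on
        return self.light_on

class Hardware:
    """Thin wrapper around the physical device; replaceable by a simulation."""
    def set_light(self, on: bool):
        print(f"GPIO write: light {'on' if on else 'off'}")

class Conductor:
    """Stateless orchestrator of the interplay between Model and Hardware."""
    def __init__(self, model: Model, hardware: Hardware):
        self.model = model
        self.hardware = hardware

    def on_button_press(self):
        # Hardware event flows in; updated state flows back out to hardware.
        self.hardware.set_light(self.model.toggle())

if __name__ == "__main__":
    Conductor(Model(), Hardware()).on_button_press()
```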

In a virtual hardware lab, the Hardware component and the outside world (and its relationship to the Model) are replaced by a simulation, so that the entire setup becomes purely software-based. A small number of vendors have built simulation offerings, like Simics by Wind River, which offers virtual target hardware that can be modified or scaled at will.

These simulators are becoming increasingly sophisticated. They support performance, memory, and security testing, and they even allow for edge-case tests and fault injection, such as dropped connections or electromagnetic interference creating noise in sensor data.
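
For instance, a fault-injecting simulated sensor might look like this sketch, where Gaussian noise stands in for electromagnetic interference and random read failures simulate dropped connections (all parameters are illustrative):

```python
import random

class FaultySensor:
    """Simulated sensor that injects noise and intermittent failures."""
    def __init__(self, base_value=21.5, noise_amplitude=0.5, failure_rate=0.1):
        self.base_value = base_value
        self.noise_amplitude = noise_amplitude
        self.failure_rate = failure_rate

    def read_celsius(self) -> float:
        if random.random() < self.failure_rate:
            raise ConnectionError("simulated dropped connection")
        # Gaussian noise stands in for interference on the sensor line.
        return self.base_value + random.gauss(0, self.noise_amplitude)

if __name__ == "__main__":
    sensor = FaultySensor()
    for _ in range(5):
        try:
            print(round(sensor.read_celsius(), 2))
        except ConnectionError as exc:
            print("fault injected:", exc)
```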

Nevertheless, a decent quality assurance process using real hardware is still necessary at the tail end of each major testing cycle. There are several reasons why teams may never rely entirely on simulations. For one thing, errors and approximations in the simulation can cause imperfections that must be caught before shipping. In addition, human interaction testing and emergent behaviors can’t be simulated, and a person’s emotional response to even a perfectly functional hardware-enabled system can be difficult to predict. A combination of white-box testing and black-box testing is usually employed during the QA phase to detect problems that might have fallen through the simulation’s cracks.


New Frontiers

In the age of the Internet of Things, slow testing cycles and poorly tested products are no longer acceptable. Companies have adapted their internal processes to reflect these new expectations and to thrive with products blending state-of-the-art software and hardware.

Smart objects like unmanned aerial drones will be subjected to deep scrutiny and regulation, and users will become less accepting of glitches in their smart home appliances. More companies will offer IoT-specific testing software, like SmartBear, whose Ready! API solution enables API testing with support for MQTT and CoAP. As a side effect, test automation job opportunities will likely increase in the embedded world, creating new career prospects for engineers who straddle the line between software and hardware.

Expectations around new software deployment and availability of new capabilities on existing hardware have been greatly increased by recent advances in mobile and automotive firmware delivery processes. Until recently, buying any electronic device constituted a decreasing value proposition — the device would gradually lose value over time and consumers would be pressured to buy new versions of the hardware to benefit from new features.

But Over-The-Air (OTA) firmware updates have changed that. OTA is a deployment method where software updates are pushed from a central cloud service to a range of devices anywhere in the world, typically via Wi-Fi, but also via mobile broadband or even IoT-specific protocols like ZigBee.
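
A device-side OTA check might look like the following hedged sketch; the manifest URL and its fields are hypothetical, and a production implementation would also verify cryptographic signatures and checksums before flashing:

```python
import json
import urllib.request

CURRENT_VERSION = "1.2.0"  # version of the firmware currently running
MANIFEST_URL = "https://updates.example.com/firmware/manifest.json"  # hypothetical

def check_for_update():
    # Ask the cloud service which firmware version is the latest.
    with urllib.request.urlopen(MANIFEST_URL) as resp:
        manifest = json.load(resp)
    if manifest["version"] != CURRENT_VERSION:
        # Download and stage the new image; a real device would verify a
        # signature and checksum, then apply the image on the next reboot.
        urllib.request.urlretrieve(manifest["image_url"], "firmware-staged.bin")
        print(f"Staged update to version {manifest['version']}")
    else:
        print("Firmware is up to date.")
```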

Smartphones were the first connected devices to feature OTA updates, leading to longer-lived devices and a diminished feeling of planned obsolescence. Cars came next — Tesla famously designs their cars so that software upgrades can fix problems and enable new capabilities via OTA updates. This requires careful planning and a systematic approach to software delivery. A recent example is the auto-pilot feature on the Tesla Model S that was made available to existing cars after an OTA update. The cars had already been equipped with all the hardware necessary for the autopilot to function (cameras, sensors, radar), and a pure software update was then enough to enable the new feature.

The fact that Tesla is able to confidently ship these changes to a product like a car, for which safety and usability are paramount, speaks volumes about the level of planning and test automation they have put in place. The sensitivity of these updates will only increase in the era of self-driving vehicles, when artificial intelligence will replace humans at the controls.

Tesla can’t fix everything with OTA updates; it has had to physically recall large numbers of cars for problems in the past. But this powerful new delivery method has given customers the expectation that the product they buy will improve over time.

OTA support must be built into all the supported devices, and IoT vendors need to perform careful version management and testing in simulated environments in order to achieve seamless delivery in this way. To avoid long rollout times, only the smallest possible delta files should be delivered via OTA updates (see the sketch below), and securing the update channel is a challenge as well. Companies like Redbend and Resin offer device management and OTA update workflow solutions to IoT customers.
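
As an illustration of delta delivery, this sketch assumes the third-party bsdiff4 package (one of several binary-diff options, and an assumption here): the server generates a diff between two firmware images, and the device reconstructs the new image from the old one plus the much smaller patch:

```python
import bsdiff4

# Server side: compute a binary delta between two firmware images
# (hypothetical file names for illustration).
old_image = open("firmware-1.2.0.bin", "rb").read()
new_image = open("firmware-1.3.0.bin", "rb").read()
patch = bsdiff4.diff(old_image, new_image)

# Device side: rebuild the new image from the old one plus the delta.
rebuilt = bsdiff4.patch(old_image, patch)
assert rebuilt == new_image

print(f"full image: {len(new_image)} bytes, delta: {len(patch)} bytes")
```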

Final Thought

The Internet of Things promises to change the way we relate to objects all around us, but the high expectations set by companies like Apple and Tesla will require IoT companies to evolve and elevate the culture of testing in consumer electronics to new standards.