ML at the Edge: Cheap Sensors, Rich Understanding
At IQT Labs, we explore emerging technology by working directly with it, building, tinkering, and, occasionally, destroying things. One theme area we are investigating is “AI at the Edge,” the concept of adding machine learning (ML) directly to sensors. As the processors inside computers have steadily become faster, so too have the processors embedded in everyday devices, from appliances to thermostats to security cameras. These improvements allow devices to use ML to directly interpret the rich data coming from their sensors, which in turn allows sensors to monitor new locations and new phenomena.
The Rise of Pervasive, Low-Cost Sensors
Sensors derive an electrical signal from a physical condition or event. Smartphones (not to mention an ever-growing list of other devices) are packed with sensors: cameras that capture images, accelerometers that record motion, and magnetometers that determine the device’s orientation. Figure 1 illustrates how the more than 1 billion smartphones shipped in 2017 carried over 6 billion sensors among them.
The global demand for smartphones has created a competitive market for the sensors that go inside them, spurring innovation. Advances in manufacturing, such as the use of micro-electromechanical systems (MEMS), enabled sensor prices to drop dramatically. For instance, the average cost of an accelerometer dropped from $2 in 2004 to $0.40 in 2020.
These sensors might be small, but they can generate large amounts of data. Importantly, this data is often created far from datacenters, at the periphery of the internet, where network connections may be flaky or limited in how much data they can carry. Network architects call this space in the network topology “the edge.”
These sensors are not only generating lots of data, but lots of complex data, of which only a small subset is interpretable by a human alone. Some sensors, like temperature sensors, measure straightforward, slow-changing phenomena. However, the accelerometer in your phone can capture hundreds of 3D samples per second. A human may be able to infer the device’s orientation from this data but could not translate it into measurements of vibration or complex gestures, much less interpret it in the context of all the other sensor data a phone generates. Figure 2 illustrates how many bytes of data different sensors generate in one second.
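To make this concrete, here is a minimal sketch of the kind of processing that turns a flood of raw accelerometer samples into a single meaningful number. The sampling rate, data values, and function name are all illustrative, not taken from any specific device:

```python
import math

# Synthetic stand-in for one second of 3-axis accelerometer output
# at 200 Hz (real devices vary); values are in units of g.
samples = [(0.01 * (i % 7), 0.02 * (i % 5), 1.0 + 0.005 * (i % 3))
           for i in range(200)]

def vibration_rms(samples):
    """Root-mean-square deviation of the acceleration magnitude --
    a simple measure of vibration intensity that a human could never
    eyeball from hundreds of raw (x, y, z) triples per second."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    mean = sum(magnitudes) / len(magnitudes)
    return math.sqrt(sum((m - mean) ** 2 for m in magnitudes) / len(magnitudes))

print(f"vibration RMS: {vibration_rms(samples):.4f} g")
```

Even this trivial summary statistic requires arithmetic over every sample; the gesture and vibration recognition described above layers learned models on top of features like this one.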
The Benefits of ML on Low-Cost Sensors
This is where ML enters the story. ML has the potential to find patterns in high-volume, complex sensor data. Given proper training data, an ML model can find a human form among image pixels or listen for different voice commands. Products like Amazon’s Echo or Google’s Nest camera have become so common that the public now views ML as simply one more feat performed by modern electronics. What is less easily observed is that Alexa and Nest actually stream their sensor data to datacenters (the cloud), where ML processes it and returns the results. Because internet access in many homes can be taken for granted, Alexa and Nest devices appear to be performing these ML miracles themselves. However, the ML features of these devices cannot function without this high-bandwidth internet connection. And although these devices could use cellular networks, those networks have limited coverage and are more expensive to use.
Recent advances in both computing and ML have made it possible to run ML models on the class of processors built into low-cost devices. No longer requiring powerful computers for ML will change how these devices are designed. Instead of sending complex sensor data to the cloud for processing, devices can now process it locally on their own processors. Not requiring a network connection to use ML allows devices to be placed in new locations and additional types of data to be sensed. And since the data stays local and does not have to traverse a network, conclusions can be reached faster, allowing for quicker reactions.
New ML techniques are helping make this possible. Efficient ML architectures, like MobileNet, are being designed specifically to run on resource-constrained devices. New techniques such as quantization allow the values inside an ML model to be stored with less precision, which makes the math cheaper and requires less memory. Embedded processors continue to improve in performance while managing to use less power. Accelerators for common ML operations are also being added to hardware. Combined, these advances have made it practical to run complex ML models at the edge.
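The idea of storing model values with less precision can be sketched in a few lines. This is a generic, illustrative symmetric int8 quantization scheme, not the implementation used by any particular framework; the function names are invented for this example:

```python
def quantize_int8(weights):
    """Map float weights onto 256 int8 levels (symmetric, per-tensor).
    Storing 1 byte instead of 4 per weight cuts memory 4x and lets
    embedded processors use cheaper integer arithmetic."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # guard all-zero case
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 levels."""
    return [v * scale for v in q]

weights = [0.8, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)   # q holds small integers, e.g. 80, -127, ...
approx = dequantize(q, scale)       # close to the originals, within one step
```

The reconstructed values differ from the originals by at most one quantization step, a loss models can usually tolerate in exchange for the memory and compute savings.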
What this Means
Removing the need to communicate over a network makes it easier to create fully contained sensing modules that combine a sensor with a dedicated processor able to use ML. These integrated modules allow for new categories of low-cost sensors, tuned for very specific phenomena. A module with a camera could become a dedicated person counter or a parking spot monitor. With the ability to locally process voice commands, a microphone could act as a keypad. As these smart sensing modules become common, it will change the way users interact with their devices and transform how devices perceive their environment.
Without the requirement to send back all of a sensor’s data, more locations can be monitored. Conservationists are using ML to watch for poachers moving along elephant paths. Without ML, this would be impractical because all the audio and video would need to be sent from remote locations, requiring either a satellite connection or new network infrastructure. With ML, only small amounts of data need to be sent for alerts. This keeps the solution cheap and easy to install and maintain, allowing more sensors to be deployed and more area to be covered.
There are many aspects to consider when designing a device that uses ML, and a balance must be struck across design goals. Compared to a traditional computer, the processor in an everyday device will always have less computing power and memory available. Optimizing for these constraints means finding an ML model that works well enough. However, if the ML model in a device produces many incorrect detections, users will begin to distrust it. Even within the constraints of the processor, designers have a number of avenues to improve accuracy, from training a better ML model, to adding additional types of sensors, to having multiple devices work together.
Ultimately, some systems may blend approaches, running an ML model on a device to initially triage sensor data and passing the interesting portions to more complex models running in the cloud. This keeps the on-device model as simple as possible and drastically reduces the amount of sensor data that needs to be sent to the cloud.
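The triage pattern can be sketched as a simple filter loop. Everything here is illustrative: the scores stand in for the confidence output of a small on-device classifier, and the threshold is an arbitrary choice a real system would tune:

```python
def on_device_triage(frame_scores, threshold=0.5):
    """Keep only the frames a cheap on-device model flags as interesting.
    Only these frame indices would be queued for upload to the heavier
    cloud model; the rest are discarded locally, saving bandwidth."""
    return [i for i, score in enumerate(frame_scores) if score >= threshold]

# e.g. per-frame confidence scores from a lightweight on-device detector
scores = [0.05, 0.10, 0.85, 0.12, 0.60]
print(on_device_triage(scores))  # → [2, 4]: only two of five frames are sent
```

In this toy run, three of the five frames never leave the device, which is exactly the bandwidth reduction the paragraph above describes.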
At the Edge of the Future
What is clear is that future devices will be better able to sense the world, and to do so in new locations with limited or no network connectivity. There is still a lot to learn about how best to do this and which problems ML at the edge is best suited to solve. IQT Labs continues to explore this area, taking a multidisciplinary approach to build a comprehensive understanding of the domain. New possibilities have been unlocked, and it is exciting to imagine what will be built!