Enabling low-power sensor context on mobile devices
Posted on Jul 22, 2013, 11:21 PM
One of the lowest-power sensors in a smartphone is the accelerometer, which can be sampled at 50 Hz for less than 0.02 mWh of energy. This is a negligible fraction of a typical phone battery, which generally has a capacity of more than 5000 mWh. However, keeping the main application processor awake just to sample this data uses around 250 mWh, completely defeating the purpose of low-power sensing. Over the course of a day, this would significantly drain the device battery. What is needed is a smart always-on trigger system that can continuously analyze incoming sensor data using very little power and memory, and wake a more powerful processor whenever something interesting happens. At Sensor Platforms, we have developed just such a system, called a “context change detector” [12].
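To make the idea concrete, the sketch below shows one way such an always-on change detector could work. It is a minimal illustration of the concept, not Sensor Platforms’ actual algorithm, and the window size and threshold are assumed tuning values. It tracks the mean and variance of short accelerometer windows and signals a wake only when those statistics shift, rather than on every motion.

```java
// Minimal sketch of an always-on context change detector (illustrative only;
// not Sensor Platforms' actual algorithm). It keeps running statistics over
// short accelerometer windows and wakes the application processor only when
// the window statistics shift beyond a threshold.
public class ContextChangeDetector {
    private static final int WINDOW = 50;          // one second at 50 Hz
    private static final double THRESHOLD = 0.15;  // hypothetical tuning value

    private final double[] buf = new double[WINDOW];
    private int n = 0;
    private double prevMean = Double.NaN;
    private double prevVar = Double.NaN;

    /** Feed one accelerometer magnitude sample; returns true when the AP should wake. */
    public boolean onSample(double magnitude) {
        buf[n++] = magnitude;
        if (n < WINDOW) return false;
        n = 0;

        double mean = 0;
        for (double v : buf) mean += v;
        mean /= WINDOW;
        double var = 0;
        for (double v : buf) var += (v - mean) * (v - mean);
        var /= WINDOW;

        boolean changed = !Double.isNaN(prevMean)
                && (Math.abs(mean - prevMean) + Math.abs(var - prevVar)) > THRESHOLD;
        prevMean = mean;
        prevVar = var;
        return changed;  // wake the AP only on a statistical change, not on every motion
    }
}
```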
This architecture enables both low power and low latency, as can be shown by running the change detector at different latencies on various user activities (see Figure 2). Compared with a simple motion detector (such as the inertial-wake functionality built into an accelerometer), the smart trigger system mitigates, if not eliminates, the tradeoff between latency and power consumption. Note that the blue bars are relatively insensitive to latency, indicating that the change detector responds only to genuine changes in the signals, whereas the red bars are inversely proportional to latency, indicating that the simple motion detector triggers on every single motion event.
Lean Motion Sensor Context Engine to Ensure Low Power
A low-level trigger system such as the context change detector greatly reduces how often the sensor data is processed, enabling always-on systems. A further optimization is to abstract the relevant context information out of the raw sensor data, allowing higher-level context algorithms to work from these abstracted results rather than from the sensor data directly. This inertial-sensor “context engine” reduces the amount of data that must be processed at higher levels. For example, raw inertial sensor data is typically sampled at 50-100 Hz on Android systems, whereas motion-context updates typically occur on the order of seconds. Consequently, this middle layer can be thought of as a second-level trigger system for higher-level, resource-intensive context algorithms.
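As a sketch of this second-level trigger (with hypothetical context labels and thresholds, not the FreeMotion Library’s actual classifier), the engine below reduces each one-second window of accelerometer magnitudes to two statistics, classifies them, and notifies downstream algorithms only when the inferred context changes, so they see updates on the order of seconds rather than 50-100 Hz samples.

```java
import java.util.function.Consumer;

// Sketch of a second-level trigger: raw windows in, sparse context updates out.
// Labels and thresholds here are hypothetical placeholders, not the FreeMotion
// Library's actual classifier.
public class MotionContextEngine {
    public enum MotionContext { STILL, IRREGULAR, RHYTHMIC }

    private MotionContext last = null;
    private final Consumer<MotionContext> listener;

    public MotionContextEngine(Consumer<MotionContext> listener) {
        this.listener = listener;
    }

    /** Classify one window of accelerometer magnitudes and fire only on change. */
    public void onWindow(double[] window) {
        double mean = 0, var = 0;
        for (double v : window) mean += v;
        mean /= window.length;
        for (double v : window) var += (v - mean) * (v - mean);
        var /= window.length;

        MotionContext ctx = var < 0.01 ? MotionContext.STILL
                          : var < 0.5  ? MotionContext.IRREGULAR
                          : MotionContext.RHYTHMIC;
        if (ctx != last) {          // abstracted update: seconds, not samples
            last = ctx;
            listener.accept(ctx);
        }
    }
}
```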
The ability to abstract meaningful context information out of inertial sensor data may not be readily apparent, but at Sensor Platforms we have demonstrated that it is both possible and accurate. Consider an example in which a device is resting on a table and is then picked up and held in the hand at the same orientation. The accelerometer signals in the x-, y-, and z-directions for these two contexts look indistinguishable to the human eye, but not to a properly trained classifier (see Figure 3). Our machine-learning algorithms correctly distinguish off-body from in-hand within a few seconds, even in this difficult classification scenario.
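One plausible way a classifier can separate these nearly identical signals, offered here as an assumption rather than a description of Sensor Platforms’ trained model, is micro-tremor energy: a hand is never perfectly still, so the residual high-frequency variance in hand is slightly higher than on a table. Voting over a few one-second windows then yields a stable decision within a few seconds:

```java
// Illustrative off-body vs. in-hand detector based on micro-tremor energy.
// The feature choice and thresholds are assumptions for the sketch, not the
// actual FreeMotion classifier.
public class HoldDetector {
    private static final double TREMOR_THRESHOLD = 1e-4;  // hypothetical (g^2)
    private static final int VOTE_WINDOWS = 3;            // ~3 s of one-second windows

    private final boolean[] votes = new boolean[VOTE_WINDOWS];
    private int idx = 0, filled = 0;

    /** Returns true (in hand) or false (on table) once enough windows are seen; null until then. */
    public Boolean onWindow(double[] window) {
        // High-pass residual: subtract the window mean, keep only the jitter.
        double mean = 0;
        for (double v : window) mean += v;
        mean /= window.length;
        double energy = 0;
        for (double v : window) energy += (v - mean) * (v - mean);
        energy /= window.length;

        votes[idx] = energy > TREMOR_THRESHOLD;
        idx = (idx + 1) % VOTE_WINDOWS;
        if (filled < VOTE_WINDOWS) { filled++; return null; }

        int inHand = 0;
        for (boolean b : votes) if (b) inHand++;
        return inHand * 2 > VOTE_WINDOWS;  // majority vote over ~3 s
    }
}
```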
Not only is Sensor Platforms’ FreeMotion Library capable of producing accurate context inferences from inertial sensors, it does so with efficient, lean machine-learning algorithms. This leanness, which requires thoughtful feature selection, is key to achieving low power in this middle context layer. The entire motion-based context engine in the FreeMotion Library fits on an embedded system.
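The following sketch illustrates the kind of leanness involved: a small set of running statistics computed in constant memory per window, instead of buffering raw data for heavyweight transforms. The specific features are illustrative assumptions, not the FreeMotion Library’s actual feature set.

```java
// Sketch of a "lean" feature set of the sort that keeps a context engine
// embeddable: a handful of running statistics with O(1) memory per axis,
// instead of buffering raw samples. Features here are illustrative only.
public class LeanFeatures {
    private double sum, sumSq, min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
    private int count;

    /** Accumulate one sample in constant memory. */
    public void add(double sample) {
        sum += sample;
        sumSq += sample * sample;
        if (sample < min) min = sample;
        if (sample > max) max = sample;
        count++;
    }

    /** Emit [mean, variance, peak-to-peak] and reset for the next window. */
    public double[] emitAndReset() {
        double mean = sum / count;
        double var = sumSq / count - mean * mean;
        double[] features = { mean, var, max - min };
        sum = sumSq = 0; count = 0;
        min = Double.MAX_VALUE; max = -Double.MAX_VALUE;
        return features;
    }
}
```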
Usability for Developers Ensures Adoption
With such a low-power mobile context architecture in place, context information can be exposed to developers as additional virtual sensors. In much the same way that developers can currently request SCREEN_ORIENTATION in Android rather than interpreting raw accelerometer data, they should be able to query different context sensors. For motion-based context, we see at least three fundamental virtual sensors.
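The sketch below shows how such a query could look to an app developer. None of these names exist in the Android SDK; the registration pattern is simply modeled on Android’s existing SensorEventListener convention.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical developer-facing virtual context sensor, modeled on Android's
// SensorEventListener registration pattern. None of these names exist in the
// Android SDK; this only sketches how querying a context sensor could look.
public final class VirtualContextSensor {
    public interface ContextEventListener {
        void onContextChanged(String contextLabel, long timestampNanos);
    }

    private final List<ContextEventListener> listeners = new ArrayList<>();

    /** Apps subscribe to abstracted context updates instead of raw sensor data. */
    public void registerListener(ContextEventListener l) {
        listeners.add(l);
    }

    /** Called by the underlying context engine when the inferred context changes. */
    void dispatch(String contextLabel, long timestampNanos) {
        for (ContextEventListener l : listeners) {
            l.onContextChanged(contextLabel, timestampNanos);
        }
    }
}
```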
Algorithms can combine these virtual sensors with one another, and possibly with other system-level data (e.g., application data, location data, audio signals), to generate even more complex context inferences. For example, the accelerometer can detect that a user is biking, but a higher-level context algorithm could distinguish exercising from commuting to work based on location and time of day. The concept of virtual context sensors therefore extends beyond motion-based context to contexts derived from any set of sensors available on the device.
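A minimal sketch of such a fusion, with hypothetical labels and inputs, might look like this:

```java
// Sketch of a higher-level inference built on virtual sensors: the motion
// context says "biking"; location and time of day refine it. The labels,
// hours, and the onCommuteRoute flag are hypothetical.
public class CommuteClassifier {
    public String refine(String motionContext, boolean onCommuteRoute, int hourOfDay) {
        if (!"BIKING".equals(motionContext)) return motionContext;
        boolean commuteHours = (hourOfDay >= 7 && hourOfDay <= 9)
                            || (hourOfDay >= 16 && hourOfDay <= 19);
        return (onCommuteRoute && commuteHours) ? "COMMUTING_BY_BIKE" : "EXERCISING";
    }
}
```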
First and foremost, making these new virtual sensors available to developers will enable the next level of context-aware platforms. Second, making them effective for developers requires the API to report a confidence level. Just as location-based services provide uncertainty information (best known to end users as the size of the blue circle in map applications), context results should report their reliability and accuracy.
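A context result carrying a confidence value might look like the following sketch; the fields are assumptions about what a standardized API could expose, by analogy with the accuracy radius in location APIs.

```java
// Sketch of a context result that carries a confidence value, analogous to
// the accuracy radius reported by location APIs. The fields are assumptions
// about what a standardized API could expose.
public final class ContextResult {
    public final String label;        // e.g. "WALKING"
    public final double confidence;   // 0.0 - 1.0, recomputed on every update
    public final long timestampNanos;

    public ContextResult(String label, double confidence, long timestampNanos) {
        this.label = label;
        this.confidence = confidence;
        this.timestampNanos = timestampNanos;
    }
}
```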
Finally, standardizing these confidence levels requires a methodology for measuring confidence in context results. Context accuracies in the industry are currently quoted as a single predetermined percentage, e.g., 95% accuracy, but such a figure reflects only the population on which the algorithm was trained and, more importantly, does not account for real-time fluctuations in accuracy. For example, the accuracy of a walking-context algorithm would likely drop if the user missteps or fumbles the device, and this should be reflected in the sensor output. Determining a consistent standard for context-awareness platforms is a difficult but important task, and Sensor Platforms is working with industry standards bodies to address it.
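As one illustration of real-time rather than fixed confidence (an assumption about how this could be computed, not an industry standard), the value below is an exponentially weighted average of how strongly each new window supports the current label, so a misstep immediately pulls the reported confidence down:

```java
// Sketch of real-time (rather than fixed) confidence: an exponentially
// weighted average of per-window classifier agreement. A misstep or fumbled
// device lowers the next agreement score, and the reported confidence drops
// with it. The 0.3 smoothing factor is a hypothetical tuning value.
public class RunningConfidence {
    private static final double ALPHA = 0.3;
    private double confidence = 0.5;   // uninformative prior

    /** agreement in [0,1]: how strongly the latest window supports the current label. */
    public double update(double agreement) {
        confidence = ALPHA * agreement + (1 - ALPHA) * confidence;
        return confidence;
    }
}
```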
Conclusion
The mobile architecture presented here uses machine learning to estimate sensor context, with a strong emphasis on low latency and low power. The key enablers of this architecture are the tiered trigger systems developed by Sensor Platforms: the context change detector and the motion context engine, which together minimize application-processor wakes and reduce triggering of more resource-intensive context algorithms. In addition, providing context information in the form of virtual sensors with corresponding confidence levels gives developers more flexibility. Adoption of the virtual sensors introduced here provides an extensible, easy-to-use framework for combining a multitude of sensor information to enable the next level of context-aware apps.
References
[1] http://www.nytimes.com/2012/11/15/technology/personaltech/a-review-of-new-activity-tracking-bands-from-nike-and-jawbone.html?pagewanted=all&_r=0
[2] http://bestrunningwatches.co.uk/reviews/garmin-foot-pod-for-forerunner-review/
[3] http://www.cnet.com/8301-33373_1-57359020/cheap-sensors-enabling-new-smartphone-fitness-gadgets/
[4] http://en.wikipedia.org/wiki/Google_Glass
[5] Mannini, A. and A.M. Sabatini, “Machine Learning Methods for Classifying Human Physical Activity from On-Body Accelerometers,” Sensors 2010, 10, 1154-1175.
[6] http://www.affectiva.com/assets/Measuring-Emotion-Through-Mobile-Esomar.pdf
[7] http://scobleizer.com/2012/07/11/mobile-3-0-arrives-how-qualcom-just-showed-us-the-future-of-the-cell-phone-and-why-iphone-sucks-for-this-new-contextual-age/
[8] http://www.informatik.uni-freiburg.de/~spinello/storkROMAN12.pdf
[9] http://www.cooking-hacks.com/index.php/documentation/tutorials/ehealth-biometric-sensor-platform-arduino-raspberry-pi-medical
[10] http://developer.android.com/reference/com/google/android/gms/location/ActivityRecognitionClient.html
[11] http://forum.xda-developers.com/showthread.php?t=2217982 and http://productforums.google.com/forum/#!msg/mobile/0CQTG4PWp24/BrqxhGSeOmkJ
[12] Sensor Platforms Blog article: http://www.sensorplatforms.com/activity-transitions-context-aware-systems/
[13] http://www.anandtech.com/show/5559/qualcomm-snapdragon-s4-krait-performance-preview-msm8960-adreno-225-benchmarks/4
Original article: http://www.edn.com/design/sensors/4418673/1/Enabling-low-power-sensor-context-on-mobile-devices-