HabitSense: A Privacy-Aware, AI-Enhanced Multimodal Wearable Platform for mHealth Applications

Glenn J. Fernandes, Jiayi Zheng, Mahdi Pedram, Christopher Romano, Farzad Shahabi, Blaine Rothrock, Thomas Cohen, Helen Zhu, Tanmeet S. Butani, Josiah Hester, Aggelos K. Katsaggelos, Nabil Alshurafa

Research output: Contribution to journal › Article › peer-review

Abstract

Wearable cameras provide an objective method to visually confirm and automate the detection of health-risk behaviors such as smoking and overeating, which is critical for developing and testing adaptive treatment interventions. Despite the potential of wearable camera systems, adoption is hindered by inadequate clinician input in the design, user privacy concerns, and user burden. To address these barriers, we introduced HabitSense, an open-source, multimodal neck-worn platform developed with input from focus groups with clinicians (N=36) and user feedback from in-the-wild studies involving 105 participants over 35 days. Optimized for monitoring health-risk behaviors, the platform utilizes RGB, thermal, and inertial measurement unit sensors to detect eating and smoking events in real time. In a 7-day study involving 15 participants, HabitSense recorded 768 hours of footage, capturing 420.91 minutes of hand-to-mouth gestures associated with eating and smoking (data crucial for training machine learning models), achieving a 92% F1-score in gesture recognition. To address privacy concerns, the platform records only during likely health-risk behavior events using SECURE, a smart activation algorithm. Additionally, HabitSense employs on-device obfuscation algorithms that selectively obfuscate the background during recording, maintaining individual privacy while leaving gestures related to health-risk behaviors unobfuscated. Our implementation of SECURE resulted in a 48% reduction in storage needs and a 30% increase in battery life. This paper highlights the critical roles of clinician feedback, extensive field testing, and privacy-enhancing algorithms in developing an unobtrusive, lightweight, and reproducible wearable system that is both feasible and acceptable for monitoring health-risk behaviors in real-world settings.
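
The abstract names two privacy mechanisms, SECURE's gated activation and selective background obfuscation, without implementation detail. The sketch below is a minimal illustration of how such a pipeline could be wired together: a cheap IMU-based score gates whether camera frames are kept at all, and kept frames are masked outside a foreground region. The threshold, window size, the motion-energy stand-in for a trained gesture classifier, and all function names (gesture_likelihood, obfuscate_background, run) are illustrative assumptions, not the published HabitSense/SECURE implementation.

```python
import numpy as np
from collections import deque

# Hypothetical parameters, not the published SECURE values.
GESTURE_THRESHOLD = 0.6   # likelihood above which recording is activated
WINDOW = 50               # IMU samples per decision window (~1 s at 50 Hz)

def gesture_likelihood(accel_window: np.ndarray) -> float:
    """Toy stand-in for a trained hand-to-mouth gesture classifier.

    A real system would run a learned model over IMU features; here we
    score the (WINDOW, 3) accelerometer window by its motion energy,
    squashed into [0, 1).
    """
    energy = float(np.mean(np.linalg.norm(np.diff(accel_window, axis=0), axis=1)))
    return float(1.0 - np.exp(-energy))

def obfuscate_background(frame: np.ndarray, fg_mask: np.ndarray) -> np.ndarray:
    """Blank out pixels outside the foreground (gesture) mask.

    A real pipeline might blur rather than zero the background and would
    derive fg_mask from an on-device segmentation model; here the HxW
    boolean mask is assumed to be given.
    """
    out = np.zeros_like(frame)
    out[fg_mask] = frame[fg_mask]
    return out

def run(imu_stream, frame_stream, mask_stream):
    """Gate frame retention on IMU gesture likelihood, then obfuscate.

    imu_stream yields (3,) accel samples; frame_stream yields HxWx3
    frames; mask_stream yields HxW boolean foreground masks.
    """
    buf = deque(maxlen=WINDOW)
    recorded = []
    for accel, frame, mask in zip(imu_stream, frame_stream, mask_stream):
        buf.append(accel)
        if len(buf) < WINDOW:
            continue  # not enough IMU context yet to make a decision
        if gesture_likelihood(np.stack(buf)) >= GESTURE_THRESHOLD:
            # Keep only gated, privacy-filtered frames.
            recorded.append(obfuscate_background(frame, mask))
    return recorded
```

The design point this sketch tries to capture is that an inexpensive inertial check runs continuously while the costly camera pipeline is engaged only around likely gesture events, which is consistent with the storage (48%) and battery (30%) savings the abstract attributes to SECURE.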

Original language: English (US)
Article number: 101
Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume: 8
Issue number: 3
DOIs
State: Published - Sep 9 2024

Funding

We acknowledge the support from the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health (NIH) under award numbers R03DK127128 and R01DK129843, as well as the National Institute of Biomedical Imaging and Bioengineering of the NIH under award number R21EB030305. The opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NIH. The authors also gratefully acknowledge the contributions of colleagues and collaborators Dr. Brian Hitsman, Dr. Angela Pfammatter, Dr. Annie W. Lin, Dr. Yang Gao, Bonnie Nolan, and Dr. Krystina Neuman for their constructive feedback.

Keywords

  • camera
  • eating
  • machine learning
  • multimodal
  • privacy
  • smoking
  • thermal
  • vision transformers
  • wearable

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Hardware and Architecture
  • Computer Networks and Communications
