Rahman MM, Ali AA, Plarre K, Al'Absi M, Ertin E and Kumar S (2011), "mConverse: Inferring conversation episodes from respiratory measurements collected in the field", In Proceedings of the 2nd Conference on Wireless Health, pp. 10.
Abstract: Automated detection of social interactions in the natural environment has resulted in promising advances in organizational behavior, consumer behavior, and behavioral health. Progress, however, has been limited since the primary means of assessing social interactions today (i.e., audio recording) has several issues in field usage such as microphone occlusion, lack of speaker specificity, and high energy drain, in addition to significant privacy concerns. In this paper, we present mConverse, a new mobile-based system to infer conversation episodes from respiration measurements collected in the field from an unobtrusively wearable respiratory inductive plethysmograph (RIP) band worn around the user's chest. The measurements are wirelessly transmitted to a mobile phone, where they are used in a novel machine learning model to determine whether the wearer is speaking, listening, or quiet. Our model incorporates several innovations to address issues that naturally arise in the noisy field environment such as confounding events, poor data quality due to sensor loosening and detachment, losses in the wireless channel, etc. Our basic model obtains 83% accuracy for the three-class classification. We formulate a Hidden Markov Model to further improve the accuracy to 87%. Finally, we apply our model to data collected from 22 subjects who wore the sensor for 2 full days in the field to observe conversation behavior in daily life and find that people spend 25% of their day in conversations.
BibTeX:
@inproceedings{rahman2011mconverse,
  author = {Rahman, Md Mahbubur and Ali, Amin Ahsan and Plarre, Kurt and Al'Absi, Mustafa and Ertin, Emre and Kumar, Santosh},
  title = {mConverse: Inferring conversation episodes from respiratory measurements collected in the field},
  booktitle = {Proceedings of the 2nd Conference on Wireless Health},
  year = {2011},
  pages = {10},
  url = {https://md2k.org/images/papers/biomarkers/mConverse.pdf}
}
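The abstract above describes refining per-window speaking/listening/quiet labels with a Hidden Markov Model. A generic way to realize that step is Viterbi decoding over the base classifier's per-window probabilities; the sketch below is illustrative only (the function name, transition values, and state encoding are our assumptions, not taken from the paper):

```python
import numpy as np

def viterbi(obs_probs, trans, init):
    """Most likely hidden-state sequence given per-window class probabilities.

    obs_probs: (T, K) array of per-window probabilities from a base classifier
    trans:     (K, K) state transition matrix (rows sum to 1)
    init:      (K,)   initial state distribution
    """
    T, K = obs_probs.shape
    log_delta = np.log(init) + np.log(obs_probs[0])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # score of reaching state j at time t via each state i at time t-1
        scores = log_delta[:, None] + np.log(trans)
        back[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(obs_probs[t])
    # backtrack from the best final state
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With a "sticky" transition matrix (large diagonal entries), an isolated window whose classifier output weakly disagrees with its neighbors is smoothed over, which is the kind of temporal correction an HMM layer provides on top of a per-window model.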
Hossain S, Ali A, Rahman M, Ertin E, Epstein D, Kennedy A, Preston K, Umbricht A, Chen Y and Kumar S (2014), "Identifying Drug (Cocaine) Intake Events from Acute Physiological Response in the Presence of Free-living Physical Activity", In Proceedings of the 13th ACM/IEEE Conference on Information Processing in Sensor Networks, pp. 71-82.
Abstract: A variety of health and behavioral states can potentially be inferred from physiological measurements that can now be collected in the natural free-living environment. The major challenge, however, is to develop computational models for automated detection of health events that can work reliably in the natural field environment. In this paper, we develop a physiologically-informed model to automatically detect drug (cocaine) use events in the free-living environment of participants from their electrocardiogram (ECG) measurements. The key to reliably detecting drug use events in the field is to incorporate the knowledge of autonomic nervous system (ANS) behavior in the model development so as to decompose the activation effect of cocaine from the natural recovery behavior of the parasympathetic nervous system (after an episode of physical activity). We collect 89 days of data from 9 active drug users in two residential lab environments and 922 days of data from 42 active drug users in the field environment, for a total of 11,283 hours. We develop a model that tracks the natural recovery by the parasympathetic nervous system and then estimates the dampening caused to the recovery by the activation of the sympathetic nervous system due to cocaine. We develop efficient methods to screen and clean the ECG time series data and extract candidate windows to assess for potential drug use. We then apply our model on the recovery segments from these windows. Our model achieves 100% true positive rate while keeping the false positive rate to 0.87/day over (9+ hours/day of) lab data and to 1.13/day over (11+ hours/day of) field data.
BibTeX:
@inproceedings{Hossain2014a,
  author = {S.M. Hossain and A.A. Ali and M.M. Rahman and E. Ertin and D. Epstein and A. Kennedy and K. Preston and A. Umbricht and Y. Chen and S. Kumar},
  title = {Identifying Drug (Cocaine) Intake Events from Acute Physiological Response in the Presence of Free-living Physical Activity},
  booktitle = {Proceedings of the 13th ACM/IEEE Conference on Information Processing in Sensor Networks},
  year = {2014},
  pages = {71--82},
  url = {http://dl.acm.org/ft_gateway.cfm?id=2602348&ftid=1444466&dwn=1&CFID=494348324&CFTOKEN=58863845}
}
Mayberry A, Hu P, Marlin B, Salthouse C and Ganesan D (2014), "iShadow: Design of a Wearable, Real-time Mobile Gaze Tracker", In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14). Bretton Woods, New Hampshire, USA , pp. 82-94. ACM.
Abstract: Continuous, real-time tracking of eye gaze is valuable in a variety of scenarios including hands-free interaction with the physical world, detection of unsafe behaviors, leveraging visual context for advertising, life logging, and others. While eye tracking is commonly used in clinical trials and user studies, it has not bridged the gap to everyday consumer use. The challenge is that a real-time eye tracker is a power-hungry and computation-intensive device which requires continuous sensing of the eye using an imager running at many tens of frames per second, and continuous processing of the image stream using sophisticated gaze estimation algorithms. Our key contribution is the design of an eye tracker that dramatically reduces the sensing and computation needs for eye tracking, thereby achieving orders of magnitude reductions in power consumption and form-factor. The key idea is that eye images are extremely redundant, therefore we can estimate gaze by using a small subset of carefully chosen pixels per frame. We instantiate this idea in a prototype hardware platform equipped with a low-power image sensor that provides random access to pixel values, a low-power ARM Cortex M3 microcontroller, and a bluetooth radio to communicate with a mobile phone. The sparse pixel-based gaze estimation algorithm is a multi-layer neural network learned using a state-of-the-art sparsity-inducing regularization function that minimizes the gaze prediction error while simultaneously minimizing the number of pixels used. Our results show that we can operate at roughly 70mW of power, while continuously estimating eye gaze at the rate of 30 Hz with errors of roughly 3 degrees.
BibTeX:
@inproceedings{Mayberry:2014:IDW:2594368.2594388,
  author = {A. Mayberry and P. Hu and B. Marlin and C. Salthouse and D. Ganesan},
  title = {iShadow: Design of a Wearable, Real-time Mobile Gaze Tracker},
  booktitle = {Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14)},
  publisher = {ACM},
  year = {2014},
  pages = {82--94},
  url = {https://md2k.org/images/papers/biomarkers/ishadow_mobisys14.pdf},
  doi = {10.1145/2594368.2594388}
}
Parate A, Chiu M, Chadowitz C, Ganesan D and Kalogerakis E (2014), "RisQ: Recognizing Smoking Gestures with Inertial Sensors on a Wristband", In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14). Bretton Woods, New Hampshire, USA , pp. 149-161. ACM.
Abstract: Smoking-induced diseases are known to be the leading cause of death in the United States. In this work, we design RisQ, a mobile solution that leverages a wristband containing a 9-axis inertial measurement unit to capture changes in the orientation of a person's arm, and a machine learning pipeline that processes this data to accurately detect smoking gestures and sessions in real-time. Our key innovations are four-fold: a) an arm trajectory-based method that extracts candidate hand-to-mouth gestures, b) a set of trajectory-based features to distinguish smoking gestures from confounding gestures including eating and drinking, c) a probabilistic model that analyzes sequences of hand-to-mouth gestures and infers which gestures are part of individual smoking sessions, and d) a method that leverages multiple IMUs placed on a person's body together with 3D animation of a person's arm to reduce burden of self-reports for labeled data collection. Our experiments show that our gesture recognition algorithm can detect smoking gestures with high accuracy (95.7%), precision (91%) and recall (81%). We also report a user study that demonstrates that we can accurately detect the number of smoking sessions with very few false positives over the period of a day, and that we can reliably extract the beginning and end of smoking session periods.
BibTeX:
@inproceedings{Parate:2014:RRS:2594368.2594379,
  author = {A. Parate and M. Chiu and C. Chadowitz and D. Ganesan and E. Kalogerakis},
  title = {RisQ: Recognizing Smoking Gestures with Inertial Sensors on a Wristband},
  booktitle = {Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys '14)},
  publisher = {ACM},
  year = {2014},
  pages = {149--161},
  url = {https://md2k.org/images/papers/biomarkers/p149-parate_risq.pdf},
  doi = {10.1145/2594368.2594379}
}
Cordeiro F, Epstein D, Thomaz E, Bales E, Jagannathan A, Abowd G and Fogarty J (2015), "Barriers and Negative Nudges: Exploring Challenges in Food Journaling", In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM.
Abstract: Although food journaling is understood to be both important and difficult, little work has empirically documented the specific challenges people experience with food journals. We identify key challenges in a qualitative study combining a survey of 141 current and lapsed food journalers with analysis of 5,526 posts in community forums for three mobile food journals. Analyzing themes in this data, we find and discuss barriers to reliable food entry, negative nudges caused by current techniques, and challenges with social features. Our results motivate research exploring a wider range of approaches to food journal design and technology.
BibTeX:
@conference{cordeiro2015barriers,
  author = {F. Cordeiro and D. Epstein and E. Thomaz and E. Bales and A.K. Jagannathan and G.D. Abowd and J. Fogarty},
  title = {Barriers and Negative Nudges: Exploring Challenges in Food Journaling},
  booktitle = {Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15)},
  publisher = {ACM},
  year = {2015},
  note = {Accepted to International ACM Conference on Human Factors in Computing Systems (CHI) 2015, Seoul, Korea},
  url = {https://md2k.org/images/papers/biomarkers/nihms675889_cordiero.pdf}
}
Hovsepian K, al'Absi M, Ertin E, Kamarck T, Nakajima M and Kumar S (2015), "cStress: Towards a Gold Standard for Continuous Stress Assessment in the Mobile Environment", In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. Osaka, Japan , pp. 493-504. ACM.
Abstract: Recent advances in mobile health have produced several new models for inferring stress from wearable sensors. But the lack of a gold standard is a major hurdle in making clinical use of continuous stress measurements derived from wearable sensors. In this paper, we present a stress model (called cStress) that has been carefully developed with attention to every step of computational modeling including data collection, screening, cleaning, filtering, feature computation, normalization, and model training. More importantly, cStress was trained using data collected from a rigorous lab study with 21 participants and validated on two independently collected data sets — in a lab study on 26 participants and in a week-long field study with 20 participants. In testing, the model obtains a recall of 89% and a false positive rate of 5% on lab data. On field data, the model is able to predict each instantaneous self-report with an accuracy of 72%.
BibTeX:
@inproceedings{Hovsepian:2015:CTG:2750858.2807526,
  author = {Hovsepian, Karen and al'Absi, Mustafa and Ertin, Emre and Kamarck, Thomas and Nakajima, Motohiro and Kumar, Santosh},
  title = {cStress: Towards a Gold Standard for Continuous Stress Assessment in the Mobile Environment},
  booktitle = {Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing},
  publisher = {ACM},
  year = {2015},
  pages = {493--504},
  url = {https://md2k.org/images/papers/biomarkers/p493-hovsepian_cStress.pdf},
  doi = {10.1145/2750858.2807526}
}
Kennedy AP, Epstein DH, Jobes ML, Agage D, Tyburski M, Phillips KA, Ali AA, Bari R, Hossain SM, Hovsepian K, Rahman M, Ertin E, Kumar S and Preston KL (2015), "Continuous In-The-Field Measurement of Heart Rate: Correlates of Drug Use, Craving, Stress, and Mood in Polydrug Users", Drug and Alcohol Dependence.
Abstract: Background: Ambulatory physiological monitoring could clarify antecedents and consequences of drug use and could contribute to a sensor-triggered mobile intervention that automatically detects behaviorally risky situations. Our goal was to show that such monitoring is feasible and can produce meaningful data. Methods: We assessed heart rate (HR) with AutoSense, a suite of biosensors that wirelessly transmits data to a smartphone, for up to four weeks in 40 polydrug users in opioid-agonist maintenance as they went about their daily lives. Participants also self-reported drug use, mood, and activities on electronic diaries. We compared HR with self-report using multilevel modeling (SAS Proc Mixed). Results: Compliance with AutoSense was good; the data yield from the wireless electrocardiographs was 85.7%. HR was higher when participants reported cocaine use than when they reported heroin use (F(2,9) = 250.3, p<.0001) and was also higher as a function of the dose of cocaine reported (F(1,8) = 207.7, p<.0001). HR was higher when participants reported craving heroin (F(1,16) = 230.9, p<.0001) or cocaine (F(1,14) = 157.2, p<.0001) than when they reported not craving. HR was lower (p<.05) in randomly prompted entries in which participants reported feeling relaxed, feeling happy, or watching TV, and was higher when they reported feeling stressed, being hassled, or walking. Conclusions: High-yield, high-quality heart-rate data can be obtained from drug users in their natural environment as they go about their daily lives, and the resultant data robustly reflect episodes of cocaine and heroin use and other mental and behavioral events of interest.
BibTeX:
@article{Kennedy2015,
  author = {Ashley P. Kennedy and David H. Epstein and Michelle L. Jobes and Daniel Agage and Matthew Tyburski and Karran A. Phillips and Amin Ahsan Ali and Rummana Bari and Syed Monowar Hossain and Karen Hovsepian and Mahbubur Rahman and Emre Ertin and Santosh Kumar and Kenzie L. Preston},
  title = {Continuous In-The-Field Measurement of Heart Rate: Correlates of Drug Use, Craving, Stress, and Mood in Polydrug Users},
  journal = {Drug and Alcohol Dependence},
  year = {2015},
  url = {https://md2k.org/images/papers/biomarkers/nihms678863_Kennedy.pdf},
  doi = {10.1016/j.drugalcdep.2015.03.024}
}
Li Y, Ye Z and Rehg JM (2015), "Delving Into Egocentric Actions", In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
Abstract: We address the challenging problem of recognizing the camera wearer's actions from videos captured by an egocentric camera. Egocentric videos encode a rich set of signals regarding the camera wearer, including head movement, hand pose and gaze information. We propose to utilize these mid-level egocentric cues for egocentric action recognition. We present a novel set of egocentric features and show how they can be combined with motion and object features. The result is a compact representation with superior performance. In addition, we provide the first systematic evaluation of motion, object and egocentric cues in egocentric action recognition. Our benchmark leads to several surprising findings. These findings uncover the best practices for egocentric actions, with a significant performance boost over all previous state-of-the-art methods on three publicly available datasets.
BibTeX:
@inproceedings{Li_2015_CVPR,
  author = {Yin Li and Zhefan Ye and James M. Rehg},
  title = {Delving Into Egocentric Actions},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2015},
  url = {https://md2k.org/images/papers/biomarkers/nihms728698_Rehg.pdf}
}
Saleheen N, Ali AA, Hossain SM, Sarker H, Chatterjee S, Marlin B, Ertin E, al'Absi M and Kumar S (2015), "puffMarker: A Multi-sensor Approach for Pinpointing the Timing of First Lapse in Smoking Cessation", In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. Osaka, Japan , pp. 999-1010. ACM.
Abstract: Recent research has demonstrated the feasibility of detecting smoking from wearable sensors, but the performance of such models on real-life smoking lapse detection is unknown. In this paper, we propose a new model and evaluate its performance on 61 newly abstinent smokers for detecting a first lapse. We use two wearable sensors — breathing pattern from respiration and arm movements from 6-axis inertial sensors worn on wrists. In 10-fold cross-validation on 40 hours of training data from 6 daily smokers, our model achieves a recall rate of 96.9%, for a false positive rate of 1.1%. When our model is applied to 3 days of post-quit data from 32 lapsers, it correctly pinpoints the timing of first lapse in 28 participants. Only 2 false episodes are detected on 20 abstinent days of these participants. When tested on 84 abstinent days from 28 abstainers, the false episode per day is limited to 1/6.
BibTeX:
@inproceedings{Saleheen:2015:PMA:2750858.2806897,
  author = {Saleheen, Nazir and Ali, Amin Ahsan and Hossain, Syed Monowar and Sarker, Hillol and Chatterjee, Soujanya and Marlin, Benjamin and Ertin, Emre and al'Absi, Mustafa and Kumar, Santosh},
  title = {puffMarker: A Multi-sensor Approach for Pinpointing the Timing of First Lapse in Smoking Cessation},
  booktitle = {Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing},
  publisher = {ACM},
  year = {2015},
  pages = {999--1010},
  url = {https://md2k.org/images/papers/biomarkers/p999-_PuffMarker.pdf},
  doi = {10.1145/2750858.2806897}
}
Thomaz E, Zhang C, Essa I and Abowd G (2015), "Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study", In Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15). Atlanta, Georgia, USA , pp. 427-431. ACM.
Abstract: Dietary self-monitoring has been shown to be an effective method for weight-loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in-the-wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities in contrast to systems for automated dietary assessment based on specialized sensors.
BibTeX:
@inproceedings{Thomaz:2015:IME:2678025.2701405,
  author = {E. Thomaz and C. Zhang and I. Essa and G.D. Abowd},
  title = {Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study},
  booktitle = {Proceedings of the 20th International Conference on Intelligent User Interfaces (IUI '15)},
  publisher = {ACM},
  year = {2015},
  pages = {427--431},
  url = {https://md2k.org/images/papers/biomarkers/nihms675643_Thomaz.pdf},
  doi = {10.1145/2678025.2701405}
}
Thomaz E, Essa I and Abowd GD (2015), "A Practical Approach for Recognizing Eating Moments with Wrist-mounted Inertial Sensing", In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing. Osaka, Japan , pp. 1029-1040. ACM.
Abstract: Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday usage, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments based on 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living condition studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% Precision, 88.8% Recall), and 71.3% (65.2% Precision, 78.6% Recall). This work represents a contribution towards the implementation of a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
BibTeX:
@inproceedings{Thomaz:2015:PAR:2750858.2807545,
  author = {Thomaz, Edison and Essa, Irfan and Abowd, Gregory D.},
  title = {A Practical Approach for Recognizing Eating Moments with Wrist-mounted Inertial Sensing},
  booktitle = {Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing},
  publisher = {ACM},
  year = {2015},
  pages = {1029--1040},
  url = {https://md2k.org/images/papers/biomarkers/ubicomp_2015_thomaz.pdf},
  doi = {10.1145/2750858.2807545}
}
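The F-scores quoted in the entry above are the harmonic mean of the reported precision and recall. A minimal consistency check (the helper name is ours, not the paper's):

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# The two free-living results from the abstract, 66.7%/88.8% and
# 65.2%/78.6% precision/recall, reproduce the reported F-scores of
# 76.1% and 71.3% to within rounding of the quoted inputs.
study_1 = 100 * f_score(0.667, 0.888)
study_2 = 100 * f_score(0.652, 0.786)
```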
Chatterjee S, Hovsepian K, Sarker H, Saleheen N, al'Absi M, Atluri G, Ertin E, Lam C, Lemieux A, Nakajima M, Spring B, Wetter DW and Kumar S (2016), "mCrave: Continuous Estimation of Craving During Smoking Cessation", In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing. New York, NY, USA , pp. 863-874. ACM.
Abstract: Craving usually precedes a lapse for impulsive behaviors such as overeating, drinking, smoking, and drug use. Passive estimation of craving from sensor data in the natural environment can be used to assist users in coping with craving. In this paper, we take the first steps towards developing a computational model to estimate cigarette craving (during smoking abstinence) at the minute-level using mobile sensor data. We use 2,012 hours of sensor data and 1,812 craving self-reports from 61 participants in a smoking cessation study. To estimate craving, we first obtain a continuous measure of stress from sensor data. We find that during hours of day when craving is high, stress associated with self-reported high craving is greater than stress associated with low craving. We use this and other insights to develop feature functions, and encode them as pattern detectors in a Conditional Random Field (CRF) based model to infer craving probabilities.
BibTeX:
@inproceedings{Chatterjee2016Crave,
  author = {Soujanya Chatterjee and Karen Hovsepian and Hillol Sarker and Nazir Saleheen and Mustafa al'Absi and Gowtham Atluri and Emre Ertin and Cho Lam and Andrine Lemieux and Motohiro Nakajima and Bonnie Spring and David W. Wetter and Santosh Kumar},
  title = {mCrave: Continuous Estimation of Craving During Smoking Cessation},
  booktitle = {Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing},
  publisher = {ACM},
  year = {2016},
  pages = {863--874},
  url = {https://md2k.org/images/papers/biomarkers/mCrave-UbiComp-2016.pdf},
  doi = {10.1145/2971648.2971672}
}
Sarker H, Tyburski M, Rahman MM, Hovsepian K, Sharmin M, Epstein DH, Preston KL, Furr-Holden CD, Milam A, Nahum-Shani I, al'Absi M and Kumar S (2016), "Finding Significant Stress Episodes in a Discontinuous Time Series of Rapidly Varying Mobile Sensor Data", In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. Santa Clara, California, USA , pp. 4489-4501. ACM.
Abstract: Management of daily stress can be greatly improved by delivering sensor-triggered just-in-time interventions (JITIs) on mobile devices. The success of such JITIs critically depends on being able to mine the time series of noisy sensor data to find the most opportune moments. In this paper, we propose a time series pattern mining method to detect significant stress episodes in a time series of discontinuous and rapidly varying stress data. We apply our model to 4 weeks of physiological, GPS, and activity data collected from 38 users in their natural environment to discover patterns of stress in real life. We find that the duration of a prior stress episode predicts the duration of the next stress episode, and that stress in mornings and evenings is lower than during the day. We then analyze the relationship between stress and objectively rated disorder in the surrounding neighborhood and develop a model to predict stressful episodes.
BibTeX:
@inproceedings{Sarker:2016:FSS:2858036.2858218,
  author = {Sarker, Hillol and Tyburski, Matthew and Rahman, Md Mahbubur and Hovsepian, Karen and Sharmin, Moushumi and Epstein, David H. and Preston, Kenzie L. and Furr-Holden, C. Debra and Milam, Adam and Nahum-Shani, Inbal and al'Absi, Mustafa and Kumar, Santosh},
  title = {Finding Significant Stress Episodes in a Discontinuous Time Series of Rapidly Varying Mobile Sensor Data},
  booktitle = {Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems},
  publisher = {ACM},
  year = {2016},
  pages = {4489--4501},
  url = {https://md2k.org/images/papers/biomarkers/p4489-sarker.pdf},
  doi = {10.1145/2858036.2858218}
}
Parate A and Ganesan D (2017), "Detecting Eating and Smoking Behaviors Using Smartwatches", In Mobile Health: Sensors, Analytic Methods, and Applications. Cham , pp. 175-201. Springer International Publishing.
Abstract: Inertial sensors embedded in commercial smartwatches and fitness bands are among the most informative and valuable on-body sensors for monitoring human behavior. This is because humans perform a variety of daily activities that impacts their health, and many of these activities involve using hands and have some characteristic hand gesture associated with it. For example, activities like eating food or smoking a cigarette require the direct use of hands and have a set of distinct hand gesture characteristics. However, recognizing these behaviors is a challenging task because the hand gestures associated with these activities occur only sporadically over the course of a day, and need to be separated from a large number of irrelevant hand gestures. In this chapter, we will look at approaches designed to detect behaviors involving sporadic hand gestures. These approaches involve two main stages: (1) spotting the relevant hand gestures in a continuous stream of sensor data, and (2) recognizing the high-level activity from the sequence of recognized hand gestures. We will describe and discuss the various categories of approaches used for each of these two stages, and conclude with a discussion about open questions that remain to be addressed.
BibTeX:
@inbook{Parate2017,
  author = {Parate, Abhinav and Ganesan, Deepak},
  editor = {Rehg, James M. and Murphy, Susan A. and Kumar, Santosh},
  title = {Detecting Eating and Smoking Behaviors Using Smartwatches},
  booktitle = {Mobile Health: Sensors, Analytic Methods, and Applications},
  publisher = {Springer International Publishing},
  year = {2017},
  pages = {175--201},
  url = {https://md2k.org/images/papers/biomarkers/Parate_Detecting-Eating.pdf},
  doi = {10.1007/978-3-319-51394-2_10}
}
Rostaminia S, Mayberry A, Ganesan D, Marlin B and Gummeson J (2017), "iLid: Low-power Sensing of Fatigue and Drowsiness Measures on a Computational Eyeglass", Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. New York, NY, USA, Vol. 1(2), pp. 23:1-23:26. ACM.
Abstract: The ability to monitor eye closures and blink patterns has long been known to enable accurate assessment of fatigue and drowsiness in individuals. Many measures of the eye are known to be correlated with fatigue including coarse-grained measures like the rate of blinks as well as fine-grained measures like the duration of blinks and the extent of eye closures. Despite a plethora of research validating these measures, we lack wearable devices that can continually and reliably monitor them in the natural environment. In this work, we present a low-power system, iLid, that can continually sense fine-grained measures such as blink duration and Percentage of Eye Closures (PERCLOS) at high frame rates of 100fps. We present a complete solution including design of the sensing, signal processing, and machine learning pipeline; implementation on a prototype computational eyeglass platform; and extensive evaluation under many conditions including illumination changes, eyeglass shifts, and mobility. Our results are very encouraging, showing that we can detect blinks, blink duration, eyelid location, and fatigue-related metrics such as PERCLOS with less than a few percent error.
BibTeX:
@article{Soha:2017:ILS:3120957.3090088,
  author = {Rostaminia, Soha and Mayberry, Addison and Ganesan, Deepak and Marlin, Benjamin and Gummeson, Jeremy},
  title = {iLid: Low-power Sensing of Fatigue and Drowsiness Measures on a Computational Eyeglass},
  journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
  publisher = {ACM},
  year = {2017},
  volume = {1},
  number = {2},
  pages = {23:1--23:26},
  url = {https://md2k.org/images/papers/biomarkers/Ubicomp17-iLid.pdf},
  doi = {10.1145/3090088}
}
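PERCLOS, one of the measures named in the entry above, is conventionally defined as the proportion of time the eyelids are at least 80% closed over a window. A minimal sketch under that common definition (the function name and default threshold are our assumptions, not the paper's implementation):

```python
def perclos(closure, threshold=0.8):
    """Fraction of frames in which eyelid closure meets or exceeds the threshold.

    closure:   per-frame eyelid-closure fractions in [0, 1], e.g. sampled
               at 100 fps as on the iLid prototype
    threshold: closure level conventionally treated as "eye closed" (80%)
    """
    if len(closure) == 0:
        return 0.0
    closed = sum(1 for c in closure if c >= threshold)
    return closed / len(closure)
```

In practice the per-frame closure fractions would come from the eyelid-location pipeline the abstract describes; computed over a sliding window, a rising PERCLOS value is a standard drowsiness indicator.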
Thomaz E, Bedri A, Prioleau T, Essa I and Abowd GD (2017), "Exploring Symmetric and Asymmetric Bimanual Eating Detection with Inertial Sensors on the Wrist", In Proceedings of the 1st Workshop on Digital Biomarkers. New York, NY, USA , pp. 21-26. ACM.
Abstract: Motivated by health applications, eating detection with off-the-shelf devices has been an active area of research. A common approach has been to recognize and model individual intake gestures with wrist-mounted inertial sensors. Despite promising results, this approach is limiting as it requires the sensing device to be worn on the hand performing the intake gesture, which cannot be guaranteed in practice. Through a study with 14 participants comparing eating detection performance when gestural data is recorded with a wrist-mounted device on (1) both hands, (2) only the dominant hand, and (3) only the non-dominant hand, we provide evidence that a larger set of arm and hand movement patterns beyond food intake gestures are predictive of eating activities when L1 or L2 normalization is applied to the data. Our results are supported by the theory of asymmetric bimanual action and contribute to the field of automated dietary monitoring. In particular, they shed light on a new direction for eating activity recognition with consumer wearables in realistic settings.
BibTeX:
@inproceedings{Thomaz:2017:ESA:3089341.3089345,
  author = {Thomaz, Edison and Bedri, Abdelkareem and Prioleau, Temiloluwa and Essa, Irfan and Abowd, Gregory D.},
  title = {Exploring Symmetric and Asymmetric Bimanual Eating Detection with Inertial Sensors on the Wrist},
  booktitle = {Proceedings of the 1st Workshop on Digital Biomarkers},
  publisher = {ACM},
  year = {2017},
  pages = {21--26},
  url = {https://md2k.org/images/papers/biomarkers/p21-thomaz.pdf},
  doi = {10.1145/3089341.3089345}
}