Leveraging Pre-Trained Language Models to Streamline Natural Language Interaction for Self-Tracking

@article{Kim2022LeveragingPL,
  title={Leveraging Pre-Trained Language Models to Streamline Natural Language Interaction for Self-Tracking},
  author={Young-Ho Kim and Sungdong Kim and Minsuk Chang and Sang-Woo Lee},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.15503}
}
Current natural language interaction for self-tracking tools largely depends on bespoke implementations optimized for a specific tracking theme and data format, which is neither generalizable nor scalable to the tremendous design space of self-tracking. However, training machine learning models in the context of self-tracking is challenging due to the wide variety of tracking topics and data formats. In this paper, we propose a novel NLP task for self-tracking that extracts close- and open-ended…
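The abstract frames self-tracking data capture as extracting close- and open-ended fields from a free-form utterance using a pre-trained language model. A minimal sketch of that idea, assuming a hypothetical prompt template and response format (not the paper's actual implementation; `build_prompt` and `parse_response` are illustrative names):

```python
import re
from typing import Dict

def build_prompt(utterance: str, fields: Dict[str, str]) -> str:
    """Build a prompt asking a language model to map a free-form
    self-tracking utterance onto named fields.
    `fields` maps field names to short descriptions (hypothetical schema)."""
    schema = "\n".join(f"- {name}: {desc}" for name, desc in fields.items())
    return (
        "Extract the following fields from the journal entry.\n"
        f"Fields:\n{schema}\n"
        f"Entry: {utterance}\n"
        "Answer (one 'field: value' per line):"
    )

def parse_response(response: str) -> Dict[str, str]:
    """Parse a 'field: value' per-line model response into a dict,
    ignoring lines that do not match the expected format."""
    result: Dict[str, str] = {}
    for line in response.splitlines():
        m = re.match(r"\s*([\w ]+?)\s*:\s*(.+)", line)
        if m:
            result[m.group(1)] = m.group(2).strip()
    return result
```

For example, a food-journaling schema might define `duration` and `meal` fields; the model's plain-text answer is then parsed back into structured data, so the same prompt-and-parse loop can serve many tracking themes without theme-specific model training.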

