Activity Recognition: Translation across Sensor Modalities Using Deep Learning

@inproceedings{Okita2018ActivityRT,
  title={Activity Recognition: Translation across Sensor Modalities Using Deep Learning},
  author={Tsuyoshi Okita and Sozo Inoue},
  booktitle={UbiComp/ISWC Adjunct},
  year={2018}
}
We propose a method to translate between sensor modalities using an RNN encoder-decoder model. Building on this model's ability to translate between modalities, we construct an activity recognition system. The idea of modality equivalence was previously investigated by Banos et al.; this paper realizes it with deep learning instead. We compare translation performance with and without clustering and a sliding window, and show that preliminary activity recognition attains an F1 score of 0.78.
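The abstract's core idea can be sketched in code: an RNN encoder compresses a sliding window of one sensor modality into a fixed-length vector, and a decoder unrolls that vector into a window of another modality. The following is a minimal illustrative sketch with untrained random weights, not the authors' exact architecture; the modality names (accelerometer, gyroscope), dimensions, and window length are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of an RNN encoder-decoder translating one sensor
# modality into another (e.g. accelerometer -> gyroscope). Weights are
# random and untrained; this only illustrates the data flow.

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    # Vanilla RNN cell: h' = tanh(Wx x + Wh h + b)
    return np.tanh(Wx @ x + Wh @ h + b)

def encode(window, Wx, Wh, b, hidden=16):
    # Fold a (T, in_dim) window into a fixed-length context vector.
    h = np.zeros(hidden)
    for x in window:
        h = rnn_step(x, h, Wx, Wh, b)
    return h

def decode(h, steps, Wh, Wo, bo):
    # Unroll the decoder from the context vector for `steps` time steps.
    outs = []
    for _ in range(steps):
        h = np.tanh(Wh @ h)
        outs.append(Wo @ h + bo)
    return np.stack(outs)  # (steps, out_dim)

in_dim, out_dim, hidden, T = 3, 3, 16, 50   # assumed 3-axis sensors, 50-sample window
Wx   = rng.normal(size=(hidden, in_dim)) * 0.1
Wh_e = rng.normal(size=(hidden, hidden)) * 0.1
b    = np.zeros(hidden)
Wh_d = rng.normal(size=(hidden, hidden)) * 0.1
Wo   = rng.normal(size=(out_dim, hidden)) * 0.1
bo   = np.zeros(out_dim)

accel_window = rng.normal(size=(T, in_dim))   # one sliding-window segment
h = encode(accel_window, Wx, Wh_e, b)
gyro_pred = decode(h, steps=T, Wh=Wh_d, Wo=Wo, bo=bo)
print(gyro_pred.shape)  # (50, 3): a window in the target modality
```

In the paper's pipeline, such translated windows would then feed an activity classifier; training the weights (e.g. by minimizing reconstruction error against paired sensor recordings) is omitted here.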
