Augmentative and Alternative Communication (AAC) apps enable non-speech forms of communication. One class of AAC apps is speech-generating devices (SGDs), in which icons or pictures are tapped to produce spoken words. These apps are widely used to support communication and language learning for individuals with disabilities such as autism spectrum disorder (ASD). Because these apps are used in everyday scenarios, they can generate massive streams of data, offering a wealth of information about individual usage patterns and a basis for developing usage profiles. However, the utility and potential of these data streams have been little explored from a data mining perspective. The objective of this study is to evaluate several feature representations of usage patterns, coupled with data mining and data modeling techniques, for identifying differences in AAC usage patterns between users with and without ASD. The study is conducted using data streams aggregated from an AAC app called FreeSpeech, specifically designed for individuals with learning disabilities and ASD. Several feature representations for modeling usage profiles, based on temporal, behavioral, and frequency-of-usage information, are investigated. The potential of each usage representation is assessed using a collection of well-established learning methods such as support vector machines and ensemble learning. While prediction performance was only slightly above chance for most representations, unsupervised class-labeling experiments suggested that stationary keypress usage representations, combined with bootstrapped ensembles, show promise for separating ASD from non-ASD users.
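As a rough illustration of the bootstrapped-ensemble classification setup described above, the sketch below trains a bagging ensemble on synthetic per-user feature vectors standing in for keypress-usage profiles. The data, feature dimensionality, and class separation here are entirely hypothetical; the study's actual FreeSpeech features and cohort are not reproduced.

```python
# Hypothetical sketch only: synthetic features stand in for the
# keypress-usage representations evaluated in the study.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class = 30
n_features = 8  # assumed dimensionality, for illustration

# Synthetic usage-profile vectors for two groups (labels 1 = ASD, 0 = non-ASD)
X_asd = rng.normal(loc=0.6, scale=0.3, size=(n_per_class, n_features))
X_non = rng.normal(loc=0.4, scale=0.3, size=(n_per_class, n_features))
X = np.vstack([X_asd, X_non])
y = np.array([1] * n_per_class + [0] * n_per_class)

# Bootstrapped ensemble: bagging resamples the training set with
# replacement and aggregates the base learners' votes.
clf = BaggingClassifier(n_estimators=50, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", round(scores.mean(), 2))
```

With real AAC usage data, the synthetic arrays would be replaced by extracted temporal, behavioral, or frequency features per user, and accuracy would be compared against a chance baseline as in the study.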