Richard Billingsley

Probabilistic topic models are widely used to discover latent topics in document collections, while latent feature vector representations of words have been used to obtain high performance in many NLP tasks. In this paper, we extend two different Dirichlet multinomial topic models by incorporating latent feature vector representations of words trained on …
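The abstract above describes combining a Dirichlet multinomial topic-word distribution with latent feature (word-embedding) representations. A minimal sketch of that idea, assuming a simple mixture of a smoothed multinomial and an embedding-based softmax per topic (the names `lam`, `topic_vecs`, `word_vecs` and all dimensions are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, D = 6, 2, 5  # vocab size, number of topics, embedding dimension

word_vecs = rng.normal(size=(V, D))        # stand-in for pretrained embeddings
topic_vecs = rng.normal(size=(K, D))       # per-topic latent feature vectors
counts = rng.integers(0, 10, size=(K, V))  # topic-word counts from sampling
beta = 0.01                                # Dirichlet smoothing parameter

def word_dist(k, lam=0.6):
    """P(w | topic k): mixture of a smoothed multinomial and an
    embedding-based softmax component."""
    mult = (counts[k] + beta) / (counts[k].sum() + V * beta)
    scores = word_vecs @ topic_vecs[k]
    soft = np.exp(scores - scores.max())
    soft /= soft.sum()
    return lam * soft + (1 - lam) * mult

p = word_dist(0)
```

The mixture weight `lam` trades off the sparse count-based component against the dense embedding component; both addends are proper distributions, so the mixture is too.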
George Washington University's Himmelfarb Health Sciences Library was investigating adding PDA resources to the collection when a medical education grant was received. Funds were used for a pilot project to compare PDA use by third-year medical students (MSIII), fourth-year medical students (MSIV), second-year physician assistant (PAII) students, and …
Many parsers learn sparse class distributions over trees to model natural language. Recursive Neural Networks (RNNs) use much denser representations, yet can still achieve an F-score of 92.06% for right-binarized sentences up to 15 words long. We examine an RNN model by comparing it with an abstract generative probabilistic model using a Deep Belief Network …
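The abstract above refers to the dense representations that Recursive Neural Network parsers build over binarized trees. A minimal sketch of the standard recursive composition step, assuming a single shared weight matrix and a tanh nonlinearity (the names `W`, `b`, `compose` and the dimensions are illustrative assumptions, not this paper's model):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4                                        # node representation dimension
W = rng.normal(scale=0.1, size=(d, 2 * d))   # shared composition weights
b = np.zeros(d)                              # composition bias

def compose(left, right):
    """Dense parent vector for a binary constituent [left right]."""
    return np.tanh(W @ np.concatenate([left, right]) + b)

# Leaves are word vectors; a right-binarized tree is folded bottom-up.
the, cat, sat = (rng.normal(size=d) for _ in range(3))
parent = compose(cat, sat)   # (cat sat)
root = compose(the, parent)  # (the (cat sat))
```

Because every node, leaf or internal, lives in the same d-dimensional space, one composition function suffices for trees of any depth once sentences are binarized.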