Deep machine learning offers a comprehensive framework for extracting meaningful features from complex observations in an unsupervised manner. The majority of deep learning architectures described in the literature primarily focus on extracting spatial features. However, in real-world settings, capturing temporal dependencies in observations is critical for …
Catastrophic forgetting is a well-studied attribute of most parameterized supervised learning systems. A variation of this phenomenon, in the context of feedforward neural networks, arises when nonstationary inputs lead to loss of previously learned mappings. The majority of the schemes proposed in the literature for mitigating catastrophic forgetting were …
Ensembles of neural networks have been the focus of extensive studies over the past two decades. Effectively encouraging diversity remains a key element in yielding improved performance from such ensembles. Negatively correlated learning (NCL) has emerged as a promising framework for concurrently training an ensemble of learners while emphasizing the …
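As a rough illustration of the NCL idea mentioned above: each learner is trained on its own error plus a penalty that discourages its output from agreeing with the ensemble mean. The sketch below uses a toy 1-D regression task and an ensemble of linear models; all names and hyperparameters are hypothetical and this is not the implementation described in the abstract.

```python
import numpy as np

# NCL sketch (hypothetical setup): for learner i with output f_i and
# ensemble mean f_bar, the per-learner gradient signal is
#   (f_i - y) - lam * (f_i - f_bar),
# i.e., the usual error term minus a penalty pushing learners apart.

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)  # toy target: y ~ 2x + noise

M = 5            # ensemble size
lam = 0.5        # NCL penalty strength
lr = 0.05
w = rng.normal(size=M)   # one weight per linear learner
b = np.zeros(M)

for _ in range(300):
    preds = X * w + b                         # (200, M): column i = learner i
    f_bar = preds.mean(axis=1, keepdims=True)
    err = preds - y[:, None]
    grad_out = err - lam * (preds - f_bar)    # NCL-modified error signal
    w -= lr * (grad_out * X).mean(axis=0)
    b -= lr * grad_out.mean(axis=0)

ensemble_pred = (X * w + b).mean(axis=1)
mse = float(np.mean((ensemble_pred - y) ** 2))
```

Note that the penalty terms cancel when averaged over learners, so the ensemble mean still descends the plain squared error; the penalty only spreads the individual learners around that mean, which is the source of the diversity NCL aims for.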
I am submitting herewith a dissertation written by Steven Robert Young entitled "Scalable Hardware Efficient Deep Spatio-Temporal Inference Networks." I have examined the final electronic copy of this dissertation for form and content and recommend that it be accepted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, with a …
An attack by an insider threat can be devastating to an organization and any infrastructure it supports. Current methods of detecting and preventing insider threats are limited. We propose a system that creates honey tokens from authentic documents containing unstructured text. While the original documents may contain information that would be valuable to …