Aaron Mishtal

Catastrophic forgetting is a well-studied attribute of most parameterized supervised learning systems. A variation of this phenomenon, in the context of feedforward neural networks, arises when nonstationary inputs lead to loss of previously learned mappings. The majority of the schemes proposed in the literature for mitigating catastrophic forgetting were …
Deep machine learning offers a comprehensive framework for extracting meaningful features from complex observations in an unsupervised manner. The majority of deep learning architectures described in the literature primarily focus on extracting spatial features. However, in real-world settings, capturing temporal dependencies in observations is critical for …
Ensembles of neural networks have been the focus of extensive studies over the past two decades. Effectively encouraging diversity remains a key element in yielding improved performance from such ensembles. Negatively correlated learning (NCL) has emerged as a promising framework for concurrently training an ensemble of learners while emphasizing the …
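The NCL framework mentioned in the abstract above has a concrete classical form (due to Liu & Yao): each ensemble member minimizes its own squared error plus a penalty p_i = (F_i − F̄) Σ_{j≠i} (F_j − F̄) = −(F_i − F̄)², which pushes members' outputs apart from the ensemble mean. A minimal NumPy sketch for an ensemble of linear regressors follows; the data, `n_members`, `lam`, and `lr` are illustrative choices, not details from the paper.

```python
import numpy as np

# Minimal sketch of negatively correlated learning (NCL) for an ensemble of
# linear regressors trained jointly by gradient descent. The penalty is the
# classic p_i = (F_i - F_mean) * sum_{j != i}(F_j - F_mean) = -(F_i - F_mean)^2.
# All names here (n_members, lam, lr) are illustrative, not from the abstract.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)

n_members, lam, lr = 5, 0.5, 0.05
W = rng.normal(size=(n_members, 3))    # one weight vector per ensemble member

for _ in range(1000):
    F = X @ W.T                        # (200, n_members): each member's output
    F_mean = F.mean(axis=1, keepdims=True)
    # Standard NCL update direction per member: (F_i - y) - lam * (F_i - F_mean).
    # The second term rewards outputs that deviate from the ensemble mean.
    grad_out = (F - y[:, None]) - lam * (F - F_mean)
    W -= lr * (grad_out.T @ X) / len(X)

ensemble_mse = float(np.mean((F.mean(axis=1) - y) ** 2))
```

With `lam = 0` the members train independently; larger values trade individual accuracy for diversity, which is exactly the tension between diversity and performance that the abstract highlights.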
Deep machine learning (DML) is a promising field of research that has enjoyed much success in recent years. Two of the predominant deep learning architectures studied in the literature are Convolutional Neural Networks (CNNs) and Deep Belief Networks (DBNs). Both have been successfully applied to many standard benchmarks with a primary focus on machine …
An attack by an insider threat can be devastating to an organization and any infrastructure it supports. Current methods of detecting and preventing insider threats are limited. We propose a system that creates honey tokens from authentic documents containing unstructured text. While the original documents may contain information that would be valuable to …