Mohiuddin Solaimani

Anomaly detection refers to the identification of patterns in a dataset that do not conform to expected behavior. Depending on the domain, these non-conforming patterns are given various labels, e.g. anomalies, outliers, exceptions, or malware. Online anomaly detection aims to detect anomalies in data that arrive in a streaming fashion. Such stream …
Anomaly detection refers to identifying patterns in data that deviate from expected behavior. These non-conforming patterns are often termed outliers, malware, anomalies, or exceptions in different application domains. This paper presents a novel, generic real-time distributed anomaly detection framework for multi-source stream data. As a case study, …
Anomaly detection is the identification of items or observations that deviate from an expected pattern in a dataset. This paper proposes a novel real-time anomaly detection framework for dynamic resource scheduling of a VMware-based cloud data center. The framework monitors VMware performance stream data (e.g. CPU load, memory usage). Hence, the …
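To make the monitoring idea concrete, here is a minimal, self-contained sketch of a rolling z-score detector applied to a stream of CPU-load samples. It is illustrative only: the class name RollingZScoreDetector, the window and threshold parameters, and the synthetic sample values are assumptions for the example, not the paper's actual distributed framework.

from collections import deque
import math

class RollingZScoreDetector:
    """Flag a sample as anomalous when it deviates strongly from the
    rolling mean of recent samples (a common streaming baseline)."""

    def __init__(self, window=60, threshold=3.0, min_history=4):
        self.window = deque(maxlen=window)   # recent samples only
        self.threshold = threshold           # z-score cutoff
        self.min_history = min_history       # samples needed before scoring

    def observe(self, value):
        anomalous = False
        if len(self.window) >= self.min_history:
            mean = sum(self.window) / len(self.window)
            var = sum((v - mean) ** 2 for v in self.window) / len(self.window)
            std = math.sqrt(var) or 1e-9     # avoid division by zero
            anomalous = abs(value - mean) / std > self.threshold
        self.window.append(value)
        return anomalous

# Toy usage: scan a short stream of CPU-load samples (percent).
detector = RollingZScoreDetector(window=30, threshold=3.0)
for t, load in enumerate([12.0, 14.5, 13.2, 11.8, 95.0, 13.1]):
    if detector.observe(load):
        print(f"t={t}: anomalous CPU load {load}%")

In a real deployment the same per-metric scoring would run over a distributed stream (one detector per VM and metric); the sliding window keeps memory constant regardless of stream length.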
Political event data have been widely used to study international politics. Previously, natural-language text processing and event generation required a great deal of manual effort. Today, high-performance computing infrastructure and advanced NLP tools can take over much of that tedious work. TABARI -- an open-source, non-distributed event-coding program -- was an early effort …
In recent years, mass atrocities, terrorism, and political unrest have caused much human suffering. Thousands of innocent lives have been lost to these events. With the help of advanced technologies, we can now dream of a tool that uses machine learning and natural language processing (NLP) techniques to warn of such events. Detecting atrocities demands …
Anomaly detection is a crucial part of computer security. This paper presents various host-based anomaly detection techniques. One technique uses clustering with Markov networks (CMN). In CMN, we first cluster the benign training data and then, from each cluster, build a separate Markov network to model benign behavior. During testing, each Markov …
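The CMN description above suggests the following simplified sketch: cluster benign traces (here using bag-of-events features with KMeans), fit one first-order Markov transition model per cluster, and score a test trace by its best average log-likelihood across the per-cluster models. The event alphabet size, the helper names (fit_cmn, score), and the synthetic traces are hypothetical; the paper's Markov networks are likely richer than this first-order chain.

import numpy as np
from sklearn.cluster import KMeans

N_EVENTS = 8  # hypothetical event alphabet size (e.g. system-call IDs)

def to_histogram(seq, n_events=N_EVENTS):
    """Bag-of-events vector, used only to cluster training traces."""
    h = np.bincount(seq, minlength=n_events).astype(float)
    return h / max(h.sum(), 1.0)

def transition_matrix(seqs, n_events=N_EVENTS, alpha=1.0):
    """First-order Markov transition probabilities with Laplace smoothing."""
    counts = np.full((n_events, n_events), alpha)
    for seq in seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq, trans):
    """Average log-probability of a trace under one Markov model."""
    return float(np.mean([np.log(trans[a, b]) for a, b in zip(seq[:-1], seq[1:])]))

def fit_cmn(benign_traces, n_clusters=3):
    """Cluster benign traces, then fit one Markov model per cluster."""
    feats = np.array([to_histogram(s) for s in benign_traces])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    return [transition_matrix([s for s, l in zip(benign_traces, labels) if l == k])
            for k in range(n_clusters)]

def score(trace, models):
    """Best fit across the per-cluster models; low scores suggest
    behavior unlike any benign cluster."""
    return max(log_likelihood(trace, m) for m in models)

# Toy usage with synthetic benign traces and one divergent test trace.
rng = np.random.default_rng(0)
benign = [rng.integers(0, 3, size=50).tolist() for _ in range(20)] + \
         [rng.integers(3, 6, size=50).tolist() for _ in range(20)]
models = fit_cmn(benign, n_clusters=2)
test = rng.integers(6, 8, size=50).tolist()  # events the models rarely saw
print("benign-like score:", score(benign[0], models))
print("suspicious score :", score(test, models))

Taking the maximum over cluster models mirrors the idea that a benign trace only needs to match one benign behavior mode; an anomaly scores poorly against all of them.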