• Corpus ID: 239998455

TOD: Tensor-based Outlier Detection

@article{Zhao2021TODTO,
  title={TOD: Tensor-based Outlier Detection},
  author={Yue Zhao and George H. Chen and Zhihao Jia},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14007}
}
  • Published 26 October 2021
To scale outlier detection (OD) to large, high-dimensional datasets, we propose TOD, a novel system that abstracts OD algorithms into basic tensor operations for efficient GPU acceleration. To make TOD highly efficient in both time and space, we leverage recent advances in deep learning infrastructure in both hardware and software. To deploy large OD applications on GPUs with limited memory, we introduce two key techniques. First, provable quantization accelerates OD computation and reduces the… 
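The core idea of expressing an OD algorithm as dense tensor operations can be illustrated with a minimal NumPy sketch of kNN-based outlier scoring. This is an illustrative CPU-side example, not TOD's actual implementation (TOD targets GPUs and a broader set of detectors); the function name and parameters are hypothetical.

```python
import numpy as np

def knn_outlier_scores(X, k=5):
    """Score each point by its distance to its k-th nearest neighbor,
    expressed entirely as dense tensor operations (the style of
    computation that maps well onto GPU kernels)."""
    # Pairwise squared Euclidean distances via the expansion
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b  -- one matmul, no Python loops.
    sq = np.sum(X * X, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.fill_diagonal(d2, np.inf)      # exclude each point's self-distance
    d2 = np.maximum(d2, 0.0)          # guard against negative round-off
    # Distance to the k-th nearest neighbor per row (partial sort per row)
    kth = np.partition(d2, k - 1, axis=1)[:, k - 1]
    return np.sqrt(kth)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))
X[0] += 10.0                          # plant one obvious outlier
scores = knn_outlier_scores(X, k=5)
print(int(np.argmax(scores)))         # the planted outlier scores highest
```

Because every step is a matmul, elementwise op, or per-row reduction, the same computation runs unchanged on a GPU tensor library; this is the batching-friendly form the abstract alludes to.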

References

SHOWING 1-10 OF 52 REFERENCES
GPU Strategies for Distance-Based Outlier Detection
TLDR
A family of parallel and distributed algorithms for graphics processing units (GPUs), derived from two distance-based outlier detection algorithms (BruteForce and SolvingSet), is proposed; the variants differ in how they exploit the GPU's architecture and memory hierarchy, and all deliver significant improvements over the CPU versions.
Training Deeper Models by GPU Memory Optimization on TensorFlow
With the advent of big data, readily available GPGPUs, and progress in neural network modeling techniques, training deep learning models on GPUs has become a popular choice. However, due to the inherent…
Accelerating SLIDE Deep Learning on Modern CPUs: Vectorization, Quantizations, Memory Optimizations, and More
TLDR
This paper argues that SLIDE's current implementation is sub-optimal and does not exploit several opportunities available on modern CPUs, and shows how SLIDE's computations allow for a unique possibility of vectorization via AVX-512 (Advanced Vector Extensions).
Estimating GPU memory consumption of deep learning models
TLDR
DNNMem employs an analytic estimation approach to systematically calculate the memory consumption of both the computation graph and the DL framework runtime; experiments show that DNNMem is effective in estimating GPU memory consumption.
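The analytic style of estimation can be sketched in a few lines: sum the sizes of the tensors that are live at once in a computation graph. This toy version (hypothetical function name and shapes; real estimators like DNNMem also model framework-runtime and allocator overheads) shows the basic arithmetic.

```python
import numpy as np

def graph_memory_bytes(tensor_shapes, dtype_bytes=4):
    """Toy analytic estimate: total bytes of all listed tensors,
    assuming they are simultaneously resident (float32 by default)."""
    total = 0
    for shape in tensor_shapes:
        total += int(np.prod(shape)) * dtype_bytes
    return total

# Hypothetical forward pass: input, two weight matrices, two activations
shapes = [(64, 784), (784, 256), (256, 10), (64, 256), (64, 10)]
print(graph_memory_bytes(shapes))   # total bytes for all float32 tensors
```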
Parallel processing for distance-based outlier detection on a multi-core CPU
TLDR
A new parallelization model for Orca-based outlier detection on a multi-core CPU is proposed; it utilizes data parallelism and a multi-threaded model, and outperforms conventional parallelization models.
TensorFlow: A system for large-scale machine learning
TLDR
The TensorFlow dataflow model is described, and the compelling performance that TensorFlow achieves for several real-world applications is demonstrated.
On-the-fly Operation Batching in Dynamic Computation Graphs
TLDR
This paper presents an algorithm, and its implementation in the DyNet toolkit, for automatically batching operations; it obtains throughput similar to that of manual batching, as well as comparable speedups over single-instance learning on architectures that are impractical to batch manually.
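The batching idea behind this line of work can be illustrated with a small sketch: pad variable-length inputs to a common length so one masked matrix operation replaces a Python-level loop. This is not DyNet's API (its auto-batcher groups compatible operations in the dynamic graph automatically); the helper below is a hypothetical, minimal stand-in.

```python
import numpy as np

def pad_batch(seqs, pad=0.0):
    """Pad variable-length sequences into one dense batch plus a mask,
    so a single vectorized op can process all of them at once."""
    n = max(len(s) for s in seqs)
    out = np.full((len(seqs), n), pad, dtype=np.float32)
    mask = np.zeros((len(seqs), n), dtype=bool)
    for i, s in enumerate(seqs):
        out[i, :len(s)] = s
        mask[i, :len(s)] = True
    return out, mask

batch, mask = pad_batch([[1.0, 2.0], [3.0, 4.0, 5.0], [6.0]])
# One vectorized, masked reduction over the whole batch
sums = (batch * mask).sum(axis=1)
print(sums.tolist())   # [3.0, 12.0, 6.0]
```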
Research Issues of Outlier Detection in Trajectory Streams Using GPUs
TLDR
The problem of outlier detection in trajectory streams is presented, and the research issues that must be addressed by new outlier detection techniques for trajectory streams on GPUs are discussed.
A Distributed Approach to Detect Outliers in Very Large Data Sets
TLDR
While solving the distance-based outlier detection task in the distributed scenario, the method computes an outlier detection solving set of the overall data set of the same quality as that computed by the corresponding centralized method.
VS-Quant: Per-vector Scaled Quantization for Accurate Low-Precision Neural Network Inference
TLDR
Per-vector scaling consistently achieves better inference accuracy at low precision than conventional scaling techniques for popular neural networks, without requiring retraining; a deep learning accelerator hardware design is also modified to study the area and energy overheads.
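The per-vector scaling idea can be sketched briefly: give each short sub-vector its own scale factor, so an outlier value in one vector does not inflate the quantization error everywhere else. This is a simplified, hypothetical sketch (function names, 4-bit symmetric rounding, and vector length 16 are my assumptions; VS-Quant additionally quantizes the scales themselves).

```python
import numpy as np

def quantize_per_vector(W, bits=4, vec_len=16):
    """Split W into sub-vectors of length vec_len and quantize each with
    its own scale (max-abs / qmax), in the spirit of per-vector scaling."""
    qmax = 2 ** (bits - 1) - 1
    V = W.reshape(-1, vec_len)
    scales = np.abs(V).max(axis=1, keepdims=True) / qmax
    scales[scales == 0] = 1.0                       # avoid divide-by-zero
    q = np.clip(np.round(V / scales), -qmax - 1, qmax).astype(np.int8)
    return q, scales

def dequantize(q, scales, shape):
    return (q.astype(np.float32) * scales).reshape(shape)

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 32)).astype(np.float32)
q, s = quantize_per_vector(W, bits=4, vec_len=16)
err = np.abs(dequantize(q, s, W.shape) - W).mean()
print("mean abs error:", round(float(err), 3))
```

With a scale per 16 elements, the reconstruction error stays small even at 4 bits, which is the accuracy benefit the summary describes.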