• Publications
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
TLDR
The analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.
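As background for the LID characteristic referenced above, below is a minimal sketch of the maximum-likelihood LID estimate computed from k-nearest-neighbour distances, the standard estimator used in this line of work; the function name, the choice of k, and the Euclidean metric are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def lid_mle(batch, k=20):
    """Maximum-likelihood estimate of local intrinsic dimensionality (LID)
    for every point in `batch`, using its k nearest neighbours within the
    same batch. k and the Euclidean metric are illustrative assumptions."""
    # pairwise Euclidean distances within the batch
    dists = np.linalg.norm(batch[:, None, :] - batch[None, :, :], axis=-1)
    dists = np.sort(dists, axis=1)[:, 1:k + 1]   # drop the zero self-distance
    r_max = dists[:, -1:]                        # distance to the k-th neighbour
    # LID_hat(x) = -( (1/k) * sum_i log(r_i / r_max) )^(-1)
    return -1.0 / np.mean(np.log(dists / r_max + 1e-12), axis=1)
```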
Dimensionality-Driven Learning with Noisy Labels
TLDR
This work proposes a new perspective for understanding DNN generalization for such datasets, by investigating the dimensionality of the deep representation subspace of training samples, and develops a new dimensionality-driven learning strategy that can effectively learn low-dimensional local subspaces that capture the data distribution.
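The dimensionality-driven strategy above hinges on tracking how the LID of deep representations evolves over training epochs; the sketch below flags the epoch at which the smoothed average LID stops falling and turns upward, a rough proxy for the onset of noise fitting. It assumes per-epoch LID estimates (for example from the lid_mle sketch above), and the function name and smoothing window are illustrative, not the paper's full learning strategy.

```python
import numpy as np

def lid_turning_point(epoch_lids, window=5):
    """Returns the index (in the smoothed series) where average LID first
    starts increasing, or None if it never does. Illustrative sketch only."""
    lids = np.convolve(np.asarray(epoch_lids, dtype=float),
                       np.ones(window) / window, mode="valid")
    increases = np.diff(lids) > 0
    return int(np.argmax(increases)) if increases.any() else None
```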
Normalized Loss Functions for Deep Learning with Noisy Labels
TLDR
Experiments on benchmark datasets demonstrate that the family of new loss functions created by the APL framework can consistently outperform state-of-the-art methods by large margins, especially under large noise rates such as 60% or 80% incorrect labels.
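The normalisation underlying the APL framework divides a base loss by its sum over all possible class labels; the sketch below shows this for cross entropy with integer labels. It is a minimal NumPy illustration of the normalisation idea only, with the function name and epsilon as assumptions, and it does not reproduce the full active/passive loss combination.

```python
import numpy as np

def normalized_cross_entropy(probs, labels, eps=1e-12):
    """Per-sample cross entropy divided by the cross entropy summed over
    every possible class label, averaged over the batch. Sketch only."""
    log_p = np.log(np.clip(probs, eps, 1.0))        # (n_samples, n_classes)
    ce = -log_p[np.arange(len(labels)), labels]     # CE against the given label
    denom = -log_p.sum(axis=1)                      # CE summed over all labels
    return float(np.mean(ce / denom))
```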
Unlearnable Examples: Making Personal Data Unexploitable
TLDR
This work establishes an important first step towards making personal data unexploitable to deep learning models, and empirically verifies the effectiveness of error-minimizing noise in both sample-wise and class-wise forms.
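Error-minimizing noise perturbs training inputs so that the network's loss on them becomes small, which removes the learning signal those samples would otherwise provide. Below is a minimal PyTorch sketch of the inner noise-generation step under an L-infinity budget with a PGD-style update; in the paper this alternates with model training, and the function name, eps, alpha and step count here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def error_minimizing_noise(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style inner step of the min-min objective: perturb x within an
    L-infinity ball so the model's training loss goes *down*. Sketch only."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= alpha * grad.sign()   # descend: minimise, not maximise
            delta.clamp_(-eps, eps)        # stay inside the noise budget
    return delta.detach()

# illustrative usage with a toy linear model and random data
model = torch.nn.Linear(32, 10)
x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))
noise = error_minimizing_noise(model, x, y)
```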
Reinforcement Learning for Autonomous Defence in Software-Defined Networking
TLDR
This paper investigates the feasibility of applying a specific class of machine learning algorithms, namely, reinforcement learning (RL) algorithms, for autonomous cyber defence in software-defined networking (SDN).
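As background for the class of algorithms the paper investigates, the sketch below shows a single tabular Q-learning update, the generic RL building block; the SDN-specific state and action encodings and the network simulation environment are not reproduced, and the function signature is an illustrative assumption.

```python
import numpy as np

def q_learning_update(Q, state, action, reward, next_state,
                      alpha=0.1, gamma=0.95):
    """One tabular Q-learning step on a (num_states, num_actions) table.
    alpha and gamma are illustrative hyperparameters."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
    return Q
```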
Online cluster validity indices for performance monitoring of streaming data clustering
TLDR
Two incremental versions of the Xie-Beni and Davies-Bouldin validity indices are developed and used to monitor and control two streaming clustering algorithms (sk-means and online ellipsoidal clustering), and it is shown that incremental cluster validity indices can send a distress signal to online monitors when evolving structure leads an algorithm astray.
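The incremental index recursions themselves are not reproduced here; as a rough stand-in, the sketch below recomputes the batch Davies-Bouldin index over a sliding window of a stream with scikit-learn's davies_bouldin_score, illustrating how a validity index can act as an online distress signal. The class name and window size are assumptions, not the paper's construction.

```python
from collections import deque

import numpy as np
from sklearn.metrics import davies_bouldin_score

class StreamingDBMonitor:
    """Monitors a streaming clusterer by re-evaluating the Davies-Bouldin
    index on a sliding window of recent (point, label) pairs. Sketch only."""

    def __init__(self, window=500):
        self.points = deque(maxlen=window)
        self.labels = deque(maxlen=window)

    def update(self, x, label):
        self.points.append(x)
        self.labels.append(label)
        n_labels = len(set(self.labels))
        if n_labels < 2 or n_labels >= len(self.points):
            return None                      # index undefined for this window
        return davies_bouldin_score(np.asarray(self.points),
                                    np.asarray(self.labels))
```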
Efficient Unsupervised Parameter Estimation for One-Class Support Vector Machines
TLDR
A new technique is proposed to set the hyperparameters and clean suspected anomalies from unlabelled training sets; it statistically outperforms semi-supervised and unsupervised methods, and its accuracy is comparable to supervised grid search with cross-validation.
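The paper's own estimation procedure is not reproduced below; purely as a generic, unsupervised point of reference, this sketch sets the RBF bandwidth of a one-class SVM with the common median-pairwise-distance heuristic. The function name, nu value and the heuristic itself are stand-ins rather than the proposed technique.

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.svm import OneClassSVM

def median_heuristic_ocsvm(X, nu=0.1):
    """Fits an RBF one-class SVM with gamma set from the median pairwise
    distance of the (unlabelled) training set. Generic heuristic sketch."""
    d = pairwise_distances(X)
    sigma = np.median(d[d > 0])              # ignore zero self-distances
    gamma = 1.0 / (2.0 * sigma ** 2)
    return OneClassSVM(kernel="rbf", gamma=gamma, nu=nu).fit(X)
```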
R1SVM: A Randomised Nonlinear Approach to Large-Scale Anomaly Detection
TLDR
This paper proposes the Randomised One-class SVM (R1SVM), an efficient and scalable anomaly detection technique that can be trained on large-scale datasets and achieves accuracy comparable to or better than deep autoencoder and traditional kernelised approaches for anomaly detection.
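In the spirit of R1SVM, randomised nonlinear features followed by a linear one-class model can be assembled from standard scikit-learn components; the sketch below uses RBFSampler and SGDOneClassSVM as stand-ins and is not the authors' implementation, with gamma, n_components, nu and the placeholder data all being illustrative.

```python
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import SGDOneClassSVM
from sklearn.pipeline import make_pipeline

# placeholder "normal" training data; stands in for a large unlabelled set
X_train = np.random.RandomState(0).normal(size=(10_000, 50))

model = make_pipeline(
    RBFSampler(gamma=0.1, n_components=256, random_state=0),  # randomised features
    SGDOneClassSVM(nu=0.05, random_state=0),                  # linear one-class SVM
)
model.fit(X_train)
scores = model.decision_function(X_train)   # larger values = more normal
```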
Robust Domain Generalisation by Enforcing Distribution Invariance
TLDR
This work proposes Elliptical Summary Randomisation (ESRand), an efficient domain generalisation approach comprising a randomised kernel and elliptical data summarisation, which learns a domain-interdependent projection to a latent subspace that minimises the existing biases in the data while maintaining the functional relationship between domains.
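Of the two ingredients named above, only the elliptical data summarisation lends itself to a short illustration: each domain is summarised by the mean and covariance of its features, i.e. a fitted ellipsoid. The sketch below shows just that ingredient under an assumed dict-of-domains input; the randomised kernel and the projection learning of ESRand are not reproduced.

```python
import numpy as np

def elliptical_summaries(domains):
    """Maps each domain name to the (mean, covariance) of its feature
    matrix, i.e. an ellipsoidal summary of that domain. Sketch only."""
    return {name: (X.mean(axis=0), np.cov(X, rowvar=False))
            for name, X in domains.items()}
```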
...