Deep-gKnock: nonlinear group-feature selection with deep neural network

  • Guangyu Zhu, Tingting Zhao
  • Published 2021
  • Computer Science, Mathematics, Medicine
  • Neural networks : the official journal of the International Neural Network Society
Feature selection is central to contemporary high-dimensional data analysis. Group structure among features arises naturally in various scientific problems, and many methods have been proposed to incorporate this group structure into feature selection. However, these methods are normally restricted to a linear regression setting. To relax the linearity constraint, we design a new Deep Neural Network (DNN) architecture and integrate it with the recently proposed knockoff technique to…
Distribution-dependent feature selection for deep neural networks
This paper proposes a new feature selection algorithm for DNNs that integrates the knockoff technique with the distribution information of irrelevant features, applies the method to Coronal Mass Ejection (CME) data, and uncovers the key features that drive DNN-based prediction of CME arrival time.
Identification of Significant Gene Expression Changes in Multiple Perturbation Experiments using Knockoffs
Motivation Large-scale multiple perturbation experiments have the potential to reveal a more detailed understanding of the molecular pathways that respond to genetic and environmental changes. A key…
DeepPINK: reproducible feature selection in deep neural networks
A method to increase the interpretability and reproducibility of DNNs by incorporating feature selection with a controlled error rate: a new DNN architecture is designed and integrated with the recently proposed knockoffs framework.
Bilevel learning of the Group Lasso structure
This work presents a method to estimate the group structure by means of a continuous bilevel optimization problem in which the data are split into training and validation sets. It relies on an approximation scheme in which the lower-level problem is replaced by a smooth dual forward-backward algorithm with Bregman distances.
The group lasso for logistic regression
The group lasso is an extension of the lasso that performs variable selection on (predefined) groups of variables in linear regression models. The estimates have the attractive property of being invariant…
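As a sketch of the penalty these group-selection methods build on: the proximal step of the group-lasso penalty is blockwise soft-thresholding, so a group's coefficients are shrunk together and zeroed together whenever the group norm falls below the regularization level. A minimal NumPy illustration (the function name and example values are mine, not from the paper):

```python
import numpy as np

def group_soft_threshold(beta, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||beta_g||_2.

    beta   : 1-D coefficient vector
    groups : list of index arrays, one per (predefined) group
    lam    : regularization strength
    """
    out = np.zeros_like(beta, dtype=float)
    for idx in groups:
        norm = np.linalg.norm(beta[idx])
        if norm > lam:
            # shrink the whole group toward zero; groups whose norm is
            # below lam are zeroed out together -> groupwise selection
            out[idx] = (1.0 - lam / norm) * beta[idx]
    return out

beta = np.array([3.0, 4.0, 0.3, 0.4])
groups = [np.array([0, 1]), np.array([2, 3])]
shrunk = group_soft_threshold(beta, groups, lam=1.0)
# first group (norm 5.0) is shrunk toward zero; second group (norm 0.5) is zeroed
```

This blockwise shrinkage is what makes the group lasso select or drop whole groups rather than individual coefficients.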
Group SLOPE – Adaptive Selection of Groups of Predictors
It is proved that the resulting procedure adapts to unknown sparsity and is asymptotically minimax with respect to the estimation of the proportions of variance of the response variable explained by regressors from different groups.
A fast unified algorithm for solving group-lasso penalized learning problems
  • Yi Yang, H. Zou
  • Computer Science
  • Stat. Comput.
  • 2015
A unified algorithm called groupwise-majorization-descent (GMD) efficiently computes the solution paths of group-lasso penalized learning problems and allows for general design matrices, without requiring the predictors to be group-wise orthonormal.
The knockoff filter for FDR control in group-sparse and multitask regression
The group knockoff filter is proposed: a method for false discovery rate control in a linear regression setting where the features are grouped, selecting a set of relevant groups that have a nonzero effect on the response.
Learning Deep Architectures for AI
The motivations and principles regarding learning algorithms for deep architectures are discussed, in particular those exploiting as building blocks unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
Deep Learning
Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Panning for Gold: Model-X Knockoffs for High-dimensional Controlled Variable Selection
A new framework of model-X knockoffs is proposed, which reinterprets the knockoff procedure, originally designed for controlling the false discovery rate in linear models, from a different perspective, and demonstrates the superior power of knockoffs through simulations.
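The selection rule shared by the knockoff papers above can be sketched concretely: given feature statistics W_j that tend to be large and positive for relevant features and are symmetric about zero for nulls, the knockoff+ filter picks the smallest threshold at which the estimated false discovery proportion drops below the target level q. A minimal NumPy sketch (function and variable names and the example values are mine; computing the W_j themselves requires fitting a model to the original and knockoff features):

```python
import numpy as np

def knockoff_threshold(W, q):
    """Knockoff+ threshold: smallest t > 0 such that
    (1 + #{j : W_j <= -t}) / max(#{j : W_j >= t}, 1) <= q."""
    W = np.asarray(W, dtype=float)
    # candidate thresholds are the magnitudes of the nonzero statistics
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(np.sum(W >= t), 1)
        if fdp_hat <= q:
            return t
    return np.inf  # no threshold achieves the target FDR level

W = np.array([5.0, 4.0, 3.0, 2.5, -0.5, 0.2, -0.1])
t = knockoff_threshold(W, q=0.3)
selected = np.where(W >= t)[0]  # indices of the selected features
```

The "+1" in the numerator is what distinguishes knockoff+ from the plain knockoff filter and is needed for exact FDR control rather than control of a modified FDR.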
Simultaneous regression shrinkage, variable selection, and supervised clustering of predictors with OSCAR.
A new method called the OSCAR (octagonal shrinkage and clustering algorithm for regression) is proposed to simultaneously select variables while grouping them into predictive clusters, in addition to improving prediction accuracy and interpretation.