Publications
The Lovász-Softmax Loss: A Tractable Surrogate for the Optimization of the Intersection-Over-Union Measure in Neural Networks
TLDR
This work presents a method for direct optimization of the mean intersection-over-union loss in neural networks, in the context of semantic image segmentation, based on the convex Lovász extension of submodular losses.
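A minimal PyTorch sketch of the idea, restricted to the binary (single foreground class) case for brevity: per-pixel errors are sorted by magnitude and weighted by the gradient of the Lovász extension of the Jaccard loss. Function names are illustrative, not the authors' reference implementation.

    import torch

    def lovasz_grad(gt_sorted):
        # Gradient of the Lovász extension of the Jaccard loss,
        # for ground-truth labels sorted by descending error.
        gts = gt_sorted.sum()
        intersection = gts - gt_sorted.cumsum(0)
        union = gts + (1.0 - gt_sorted).cumsum(0)
        jaccard = 1.0 - intersection / union
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
        return jaccard

    def lovasz_softmax_binary(probs, labels):
        # probs: (P,) foreground probabilities; labels: (P,) in {0, 1}.
        labels = labels.float()
        errors = (labels - probs).abs()
        errors_sorted, perm = torch.sort(errors, descending=True)
        return torch.dot(errors_sorted, lovasz_grad(labels[perm]))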
Optimization of the Jaccard index for image segmentation with the Lovász hinge
TLDR
A specialized optimization method is developed, based on an efficient computation of the proximal operator of the Lovász hinge, yielding reliably faster and more stable optimization than alternatives.
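Under the same assumptions as the sketch above, the binary Lovász hinge replaces probabilities with unbounded scores and hinge errors; the paper's actual contribution, the proximal-operator-based optimizer, is not reproduced here.

    import torch
    import torch.nn.functional as F

    def lovasz_hinge(logits, labels):
        # logits: (P,) real-valued scores; labels: (P,) in {0, 1}.
        signs = 2.0 * labels.float() - 1.0
        errors = 1.0 - logits * signs              # per-pixel hinge errors
        errors_sorted, perm = torch.sort(errors, descending=True)
        grad = lovasz_grad(labels.float()[perm])   # reuses the sketch above
        return torch.dot(F.relu(errors_sorted), grad)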
MultiGrain: a unified image embedding for classes and instances
TLDR
A key component of MultiGrain is a pooling layer that takes advantage of high-resolution images with a network trained at a lower resolution, yielding an embedding that provides state-of-the-art classification accuracy when fed to a linear classifier.
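The pooling in question can be sketched as a generalized-mean (GeM) pooling; treating MultiGrain's layer as GeM is an assumption here, and the exponent p would be tuned or learned.

    import torch

    def gem_pool(x, p=3.0, eps=1e-6):
        # x: (N, C, H, W) feature maps; generalized mean over H and W.
        # p = 1 recovers average pooling; large p approaches max pooling.
        return x.clamp(min=eps).pow(p).mean(dim=(-2, -1)).pow(1.0 / p)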
Optimizing the Dice Score and Jaccard Index for Medical Image Segmentation: Theory and Practice
TLDR
This study investigates the theoretical differences in a risk minimization framework, questions the existence of a weighted cross-entropy loss with weights theoretically optimized to surrogate Dice or Jaccard, and empirically investigates the behavior of these loss functions w.r.t. evaluation with the Dice score and Jaccard index.
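For reference, the soft relaxations being compared can be written directly from the set definitions; a minimal sketch with probabilities standing in for hard predictions. The two metrics are monotonically related (Dice = 2J / (1 + J)), which is why they are treated jointly.

    import torch

    def soft_jaccard_loss(probs, labels, eps=1e-6):
        # probs, labels: (P,) foreground probabilities and binary targets.
        inter = (probs * labels).sum()
        union = probs.sum() + labels.sum() - inter
        return 1.0 - (inter + eps) / (union + eps)

    def soft_dice_loss(probs, labels, eps=1e-6):
        inter = (probs * labels).sum()
        return 1.0 - (2.0 * inter + eps) / (probs.sum() + labels.sum() + eps)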
Optimization for Medical Image Segmentation: Theory and Practice When Evaluating With Dice Score or Jaccard Index
TLDR
It is confirmed that metric-sensitive losses are superior to cross-entropy-based loss functions when evaluating with the Dice score or Jaccard index in a multi-class setting, and across different object sizes and foreground/background ratios.
AOWS: Adaptive and Optimal Network Width Search With Latency Constraints
TLDR
This work introduces a novel, efficient one-shot NAS approach to optimally search for channel numbers given latency constraints on specific hardware, and proposes an adaptive channel-configuration sampling scheme to gradually specialize the training phase to the target computational constraints.
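A toy illustration of sampling channel configurations under a latency budget, assuming a hypothetical per-layer latency lookup table; AOWS itself resolves the constraint far more efficiently than rejection sampling, so this only shows the shape of the problem.

    import random

    # Hypothetical per-layer latency table: milliseconds per channel width.
    latency_ms = [
        {16: 0.8, 32: 1.4, 64: 2.6},
        {32: 1.1, 64: 2.0, 128: 3.9},
    ]

    def sample_widths(budget_ms, max_tries=1000):
        # Rejection-sample one width per layer until the summed
        # lookup-table latency fits the budget.
        for _ in range(max_tries):
            cfg = [random.choice(list(layer)) for layer in latency_ms]
            total = sum(layer[w] for layer, w in zip(latency_ms, cfg))
            if total <= budget_ms:
                return cfg, total
        raise RuntimeError("no configuration met the budget")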
A Bayesian Optimization Framework for Neural Network Compression
TLDR
A general Bayesian optimization framework is developed for optimizing functions that are computed based on U-statistics, and a method that gives a probabilistic approximation certificate of the result is applied to parameter selection in neural network compression.
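One way to read "probabilistic approximation certificate": a concentration bound turns a Monte-Carlo estimate of a bounded statistic (e.g. the accuracy of a compressed network) into a high-probability interval. The Hoeffding-based sketch below is an assumption about the flavor of the certificate, not the paper's U-statistic machinery.

    import math

    def hoeffding_certificate(samples, delta=0.05):
        # samples: i.i.d. evaluations of a statistic bounded in [0, 1].
        # Returns (estimate, radius): the true mean lies in
        # estimate +/- radius with probability at least 1 - delta.
        n = len(samples)
        mean = sum(samples) / n
        radius = math.sqrt(math.log(2.0 / delta) / (2.0 * n))
        return mean, radius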
Efficient semantic image segmentation with superpixel pooling
TLDR
Experimental results on the IBSR and Cityscapes datasets demonstrate that superpixel pooling can be leveraged to consistently increase network accuracy with minimal computational overhead.
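A minimal sketch of superpixel pooling, assuming precomputed integer superpixel labels: features are averaged within each superpixel via a scatter-add.

    import torch

    def superpixel_pool(features, segments):
        # features: (C, H, W) feature map; segments: (H, W) int64 labels in [0, S).
        C = features.shape[0]
        S = int(segments.max()) + 1
        flat = features.reshape(C, -1)                  # (C, H*W)
        idx = segments.reshape(-1)                      # (H*W,)
        pooled = torch.zeros(C, S, dtype=flat.dtype)
        pooled.index_add_(1, idx, flat)                 # sum per superpixel
        counts = torch.bincount(idx, minlength=S).clamp(min=1)
        return pooled / counts                          # mean per superpixel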
Adaptive Compression-based Lifelong Learning
TLDR
This work proposes a method based on Bayesian optimization to perform adaptive compression/pruning of the network, shows its effectiveness in lifelong learning, and demonstrates that learned network compression can effectively preserve performance across sequences of tasks of varying complexity.
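A hedged sketch of the kind of compression knob such an optimizer would tune; global magnitude pruning is an assumption standing in for whatever compression operator the method actually adapts.

    import torch

    def prune_by_magnitude(model, ratio):
        # Zero the smallest-magnitude fraction `ratio` of each weight matrix;
        # `ratio` is the scalar a Bayesian optimizer could adapt per task.
        for param in model.parameters():
            if param.dim() < 2:
                continue  # skip biases and norm parameters
            k = int(param.numel() * ratio)
            if k == 0:
                continue
            threshold = param.abs().flatten().kthvalue(k).values
            param.data.mul_((param.abs() > threshold).float())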
Function Norms and Regularization in Deep Networks
TLDR
This work provides the first proof in the literature of the NP-hardness of computing function norms of DNNs, motivating the necessity of an approximate approach; it derives a generalization bound for functions trained with weighted norms and proves that a natural stochastic optimization strategy minimizes the bound.
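The natural stochastic strategy can be sketched as a Monte-Carlo estimate of a weighted L2 function norm added to the training loss; the sampling distribution standing in for the norm's weighting measure is an assumption.

    import torch

    def function_norm_sq(model, sampler, n=64):
        # Monte-Carlo estimate of E_x ||f(x)||^2 with x ~ sampler,
        # where sampler approximates the norm's weighting measure.
        x = sampler(n)
        return model(x).pow(2).sum(dim=1).mean()

    # Usage: loss = task_loss + lam * function_norm_sq(model, sampler)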