Corpus ID: 225039928

Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions

@article{Epasto2020OptimalA,
  title={Optimal Approximation - Smoothness Tradeoffs for Soft-Max Functions},
  author={Alessandro Epasto and Mohammad Mahdian and Vahab S. Mirrokni and Manolis Zampetakis},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.11450}
}
A soft-max function has two main efficiency measures: (1) approximation, which corresponds to how well it approximates the maximum function, and (2) smoothness, which shows how sensitive it is to changes of its input. Our goal is to identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness. This leads to novel soft-max functions, each of which is optimal for a different application. The most commonly used soft-max function, called exponential… 
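To make the two measures concrete, here is a minimal sketch (assuming NumPy; the score vector x and the values of lam below are illustrative, not taken from the paper) of the exponential soft-max the abstract refers to. Increasing lam tightens the standard ln(n)/lam gap to the true maximum, while the induced softmax distribution becomes more sensitive to perturbations of the input, which is the smoothness side of the tradeoff.

```python
import numpy as np

def exponential_softmax(x, lam):
    """Exponential soft-max (log-sum-exp): (1/lam) * log(sum(exp(lam * x_i)))."""
    # Shift by the max for numerical stability; the shift cancels analytically.
    m = np.max(x)
    return m + np.log(np.sum(np.exp(lam * (x - m)))) / lam

x = np.array([0.1, 0.7, 0.65, 0.2])  # illustrative scores
for lam in (1.0, 10.0, 100.0):
    approx = exponential_softmax(x, lam)
    # Standard bound: max(x) <= f_lam(x) <= max(x) + ln(n)/lam,
    # so larger lam gives a tighter approximation of the true maximum ...
    print(f"lam={lam:6.1f}  value={approx:.4f}  gap={approx - x.max():.4f}  bound={np.log(len(x)) / lam:.4f}")
    # ... but the induced distribution exp(lam*x)/sum(exp(lam*x)) reacts more
    # sharply to changes in x, i.e. the function becomes less smooth.
```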

Citations

Provably Efficient Model-Free Constrained RL with Linear Function Approximation
TLDR
The constrained reinforcement learning problem is studied: primal-dual optimization is introduced into the LSVI-UCB algorithm to balance regret against constraint violation, and the standard greedy selection with respect to the state-action value function is replaced.
Revenue-Incentive Tradeoffs in Dynamic Reserve Pricing
TLDR
This paper proposes a novel class of dynamic reserve pricing policies, provides analytical tradeoffs between their revenue performance and bid-shading incentives, and uncovers mechanisms with significantly better revenue-incentive tradeoffs than the exponential mechanism in practice.

References

SHOWING 1-10 OF 36 REFERENCES
An Exploration of Softmax Alternatives Belonging to the Spherical Loss Family
TLDR
Several loss functions from the spherical family are explored as possible alternatives to the traditional log-softmax loss and surprisingly outperform it in experiments on MNIST and CIFAR-10, suggesting that they may be relevant in a broad range of applications.
On Controllable Sparse Alternatives to Softmax
TLDR
This work proposes two novel sparse formulations, sparsegen-lin and sparsehourglass, that provide control over the degree of desired sparsity, and develops novel convex loss functions that induce the behavior of these formulations in the multilabel classification setting, showing improved performance.
Bicriteria Distributed Submodular Maximization in a Few Rounds
TLDR
This work presents a distributed algorithm that achieves a (1-ε) approximation factor in fewer than log(1/ε) rounds, and proves a hardness result showing that the output of any (1-ε)-approximation distributed algorithm limited to a single round must contain at least Ω(k/ε) items.
The sample complexity of revenue maximization
TLDR
It is shown that the only way to achieve a sufficiently good constant approximation of the optimal revenue is through a detailed understanding of bidders' valuation distributions; the paper introduces α-strongly regular distributions, which interpolate between the well-studied classes of regular and MHR distributions.
Learning Multi-Item Auctions with (or without) Samples
  • Yang Cai, C. Daskalakis
  • 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), 2017
We provide algorithms that learn simple auctions whose revenue is approximately optimal in multi-item multi-bidder settings, for a wide range of bidder valuations including unit-demand, additive,…
Mechanism design via machine learning
TLDR
These reductions imply that, for a wide variety of revenue-maximizing pricing problems, an optimal algorithm for the standard algorithmic problem can be converted into a (1+ε)-approximation for the incentive-compatible mechanism design problem, so long as the number of bidders is sufficiently large.
On the Pseudo-Dimension of Nearly Optimal Auctions
TLDR
This paper introduces t-level auctions to interpolate between simple auctions, such as welfare maximization with reserve prices, and optimal auctions, thereby balancing the competing demands of expressivity and simplicity.
Revenue maximization with a single sample
TLDR
This work designs and analyzes approximately revenue-maximizing auctions in general single-parameter settings and gives an auction that, for every environment and unknown valuation distribution, has expected revenue at least a constant fraction of the expected optimal welfare.
Efficient Exact Gradient Update for training Deep Networks with Very Large Sparse Targets
TLDR
This work develops an original algorithmic approach which, for a family of loss functions that includes squared error and spherical softmax, can compute the exact loss, the gradient update for the output weights, and the gradient for backpropagation, all in O(d^2) per example instead of O(Dd), remarkably without ever computing the D-dimensional output.
Differentially Private Submodular Maximization: Data Summarization in Disguise
TLDR
This work presents privacy-preserving algorithms for both monotone and non-monotone submodular maximization under cardinality, matroid, and p-extendible system constraints, with guarantees that are competitive with optimal solutions.