Non-Parametric Stochastic Sequential Assignment With Random Arrival Times

@article{Dervovic2021NonParametricSS,
  title={Non-Parametric Stochastic Sequential Assignment With Random Arrival Times},
  author={Danial Dervovic and Parisa Hassanzadeh and Samuel A. Assefa and Prashant Reddy},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.04944}
}
We consider a problem wherein jobs arrive at random times and assume random values. Upon each job arrival, the decision-maker must decide immediately whether or not to accept the job and gain the value on offer as a reward, with the constraint that they may only accept at most n jobs over some reference time period. The decision-maker only has access to M independent realisations of the job arrival process. We propose an algorithm, Non-Parametric Sequential Allocation (NPSA), for solving this… 
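
As a rough illustration of this setting, the sketch below simulates job arrivals over a reference period and runs a fixed-threshold acceptance policy against the offline (hindsight) optimum. The homogeneous Poisson arrivals, Uniform(0, 1) job values, and the particular threshold are illustrative assumptions only; this is not the NPSA algorithm itself.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_episode(rate=20.0, horizon=1.0, n=3, threshold=0.7):
        """One realisation of the job arrival process on [0, horizon].

        Arrivals: homogeneous Poisson with the given rate (an assumption;
        the paper allows general random arrival times).
        Values:   i.i.d. Uniform(0, 1) (also an assumption).
        Policy:   accept any job whose value exceeds `threshold`, until
                  the budget of n acceptances is exhausted.
        """
        num_jobs = rng.poisson(rate * horizon)
        arrival_times = np.sort(rng.uniform(0.0, horizon, size=num_jobs))
        values = rng.uniform(0.0, 1.0, size=num_jobs)

        reward, accepted = 0.0, 0
        for t, v in zip(arrival_times, values):
            if accepted < n and v > threshold:   # irrevocable accept/reject
                reward += v
                accepted += 1

        hindsight = np.sort(values)[-n:].sum() if num_jobs else 0.0
        return reward, hindsight

    rewards, bests = zip(*(simulate_episode() for _ in range(1000)))
    print(f"threshold policy: {np.mean(rewards):.3f}, offline optimum: {np.mean(bests):.3f}")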

Optimal Admission Control for Multiclass Queues with Time-Varying Arrival Rates via State Abstraction

It is rigorously proved that, as the gap between successive decision epochs shrinks, the discrete-time solution approaches the optimal solution of the original continuous-time problem.
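
A small numerical sanity check of the premise behind that claim, assuming homogeneous Poisson arrivals (the rate and horizon below are hypothetical): discretising time into epochs of length delta, with at most one Bernoulli arrival per epoch, recovers the continuous-time arrival count as delta shrinks.

    import numpy as np

    rng = np.random.default_rng(1)
    rate, horizon = 5.0, 1.0               # hypothetical arrival rate and reference period

    def discretised_counts(delta, reps=50_000):
        """Bernoulli approximation: at most one arrival per decision epoch of length delta."""
        epochs = int(round(horizon / delta))
        p = 1.0 - np.exp(-rate * delta)    # P(at least one Poisson arrival in an epoch)
        return rng.binomial(1, p, size=(reps, epochs)).sum(axis=1)

    for delta in (0.5, 0.1, 0.01):
        counts = discretised_counts(delta)
        print(f"delta={delta:5}: mean arrivals {counts.mean():.3f} "
              f"(continuous-time mean {rate * horizon:.1f})")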

Tradeoffs in streaming binary classification under limited inspection resources

This work considers an imbalanced binary classification problem in which events arrive sequentially and only a limited number of suspicious events can be inspected, models the event arrivals as a non-homogeneous Poisson process, and compares various suspicious-event selection methods, including those based on static and adaptive thresholds.
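
A sketch of that arrival model together with a static-threshold selection rule. The sinusoidal intensity, the classifier scores, the threshold, and the inspection budget are all hypothetical; the Lewis-Shedler thinning step is a standard way to simulate a non-homogeneous Poisson process.

    import numpy as np

    rng = np.random.default_rng(2)

    def thinning(intensity, lam_max, horizon):
        """Simulate a non-homogeneous Poisson process by Lewis-Shedler thinning."""
        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t > horizon:
                return np.array(times)
            if rng.uniform() < intensity(t) / lam_max:
                times.append(t)

    # Hypothetical time-varying arrival rate of events over one day (hours).
    intensity = lambda t: 10.0 + 8.0 * np.sin(2 * np.pi * t / 24.0)
    events = thinning(intensity, lam_max=18.0, horizon=24.0)

    # Hypothetical classifier scores; only events above a static threshold are
    # inspected, and at most `budget` inspections are allowed.
    scores, budget, threshold = rng.uniform(size=events.size), 20, 0.8
    inspected = [t for t, s in zip(events, scores) if s > threshold][:budget]
    print(f"{events.size} events, {len(inspected)} inspected")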

A Sequential Stochastic Assignment Problem

Suppose there are n men available to perform n jobs. The n jobs occur in sequential order with the value of each job being a random variable X. Associated with each man is a probability p. If a "p"
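
In this classic formulation, assigning a "p" man to a job of value x earns expected reward p·x, and the optimal policy is a set of value thresholds that do not depend on the p's: with k jobs remaining, each new threshold is the expectation of the job value clipped to the interval between two previous thresholds. The sketch below estimates those thresholds by Monte Carlo, assuming Uniform(0, 1) job values; the sampling-based evaluation is just an illustration of the recursion.

    import numpy as np

    def dlr_thresholds(n, value_sampler, num_samples=200_000, seed=0):
        """Threshold recursion for the sequential stochastic assignment problem,
        estimated by Monte Carlo.

        With k stages to go, thresholds a[0] <= ... <= a[k] partition the value
        axis; a job whose value lands in (a[i-1], a[i]] goes to the worker with
        the i-th smallest success probability p.  The recursion used here is
            a_{i,k+1} = E[ clip(X, a_{i-1,k}, a_{i,k}) ].
        """
        rng = np.random.default_rng(seed)
        x = value_sampler(rng, num_samples)
        a = np.array([-np.inf, np.inf])            # one stage to go
        for _ in range(n - 1):
            interior = [np.clip(x, a[i], a[i + 1]).mean() for i in range(len(a) - 1)]
            a = np.concatenate(([-np.inf], interior, [np.inf]))
        return a

    # Uniform(0, 1) job values (a hypothetical choice for illustration).
    print(dlr_thresholds(4, lambda rng, m: rng.uniform(size=m)))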

A Sequential Allocation Game for Targets with Varying Values

This work studies a two-sided competitive situation in which the players allocate ordnance, subject to resource constraints and limited mission time, to attack (or, for the opposing player, defend) the targets.

Constrained Markov Decision Processes

Introduction • Examples of Constrained Dynamic Control Problems • On Solution Approaches for CMDPs with Expected Costs • Other Types of CMDPs • Cost Criteria and Assumptions • The Convex Analytical Approach

Adaptive Algorithms for Online Convex Optimization with Long-term Constraints

We present an adaptive online gradient descent algorithm to solve online convex optimization problems with long-term constraints, i.e., constraints that need to be satisfied only when accumulated over the whole sequence of rounds rather than at every round.
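
A minimal primal-dual online gradient descent sketch for this setting: the learner descends in the primal variable on the Lagrangian and ascends in a multiplier that grows while the long-term constraint is violated. The quadratic losses, the single linear constraint, and the fixed step size are illustrative assumptions; this is the basic (non-adaptive) scheme, not the adaptive algorithm of the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    d, T, eta = 5, 2000, 0.05
    x, lam = np.zeros(d), 0.0                      # primal iterate and dual variable

    def loss_grad(x, t):
        # Hypothetical sequence of quadratic losses f_t(x) = ||x - c_t||^2 / 2.
        c_t = np.sin(0.01 * t) * np.ones(d)
        return x - c_t

    def constraint(x):
        # Long-term constraint g(x) = sum(x) - 1 <= 0, required only on average.
        return x.sum() - 1.0

    cum_violation = 0.0
    for t in range(T):
        g = constraint(x)
        # Primal descent on the Lagrangian f_t(x) + lam * g(x).
        x = x - eta * (loss_grad(x, t) + lam * np.ones(d))
        # Dual ascent: raise the multiplier when the constraint is violated.
        lam = max(0.0, lam + eta * g)
        cum_violation += max(0.0, g)

    print(f"average constraint violation: {cum_violation / T:.4f}")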

Online Learning with Constraints

This work defines the reward-in-hindsight as the highest reward the decision maker could have achieved, while satisfying the constraints, had she known Nature's choices in advance, and provides an explicit strategy, based on a calibrated forecasting rule, that attains the convex hull of the reward-in-hindsight.

Bandit Convex Optimization in Non-stationary Environments

A novel algorithm is proposed that achieves optimal dynamic regret for BCO in non-stationary environments without requiring prior knowledge of the path-length $P_T$, which is generally unknown ahead of time.
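
For context, a minimal sketch of the one-point gradient estimator that bandit convex optimization builds on: the learner observes only the loss value at a single perturbed point per round and forms an estimate of a smoothed gradient from it. The drifting quadratic loss, step size, perturbation radius, and box domain are illustrative assumptions; the dynamic-regret machinery of the cited algorithm is not reproduced here.

    import numpy as np

    rng = np.random.default_rng(4)
    d, T, eta, delta = 3, 5000, 0.01, 0.1
    x = np.zeros(d)

    def f(x, t):
        # Hypothetical slowly drifting loss; only function *values* are observed.
        return np.sum((x - 0.5 * np.sin(1e-3 * t)) ** 2)

    for t in range(T):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)                     # uniform direction on the sphere
        value = f(x + delta * u, t)                # single bandit evaluation
        grad_est = (d / delta) * value * u         # one-point gradient estimate
        x = x - eta * grad_est
        x = np.clip(x, -1.0, 1.0)                  # projection onto a box domain

    print("final iterate:", np.round(x, 3))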

Game-theoretic resource allocation for malicious packet detection in computer networks

Grande, a novel polynomial-time algorithm, is proposed; it uses an approximated utility function to circumvent the limited scalability caused by the attacker's large strategy space and the non-linearity of the underlying mathematical program.

Estimation for nonhomogeneous Poisson processes from aggregated data

  • S. Henderson, Oper. Res. Lett., 2003

Calibrating Probability with Undersampling for Unbalanced Classification

This paper studies analytically and experimentally how undersampling affects the posterior probability of a machine learning model, uses Bayes Minimum Risk theory to find the correct classification threshold, and shows how to adjust it after undersampling.
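
A sketch of the two equivalent corrections this suggests, assuming negatives are kept with probability beta during undersampling: either map the undersampled posterior back to the original class ratio, or move the Bayes-minimum-risk threshold onto the undersampled probability scale. The closed forms below follow from Bayes' rule for this sampling scheme; the cited paper's exact notation may differ.

    def calibrate(p_s, beta):
        """Map the posterior of a model trained on negatively-undersampled data
        (each negative kept with probability beta) back to the original class
        ratio, inverting p_s = p / (p + beta * (1 - p))."""
        return beta * p_s / (beta * p_s - p_s + 1.0)

    def adjusted_threshold(tau, beta):
        """Equivalently, keep the raw undersampled posteriors and move the
        Bayes-minimum-risk threshold tau onto the undersampled probability scale."""
        return tau / (tau + beta * (1.0 - tau))

    p_s, beta, tau = 0.6, 0.1, 1.0 / 11.0        # tau from costs c_fp=1, c_fn=10
    print(calibrate(p_s, beta) >= tau)           # True
    print(p_s >= adjusted_threshold(tau, beta))  # True: the two rules agree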

The Foundations of Cost-Sensitive Learning

It is argued that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods, and the recommended way of applying one of these methods is to learn a classifier from the training set and then to compute optimal decisions explicitly using the probability estimates given by the classifier.
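
A small sketch of computing optimal decisions explicitly from probability estimates, as that recommendation describes. The cost values and the probability vector are hypothetical; the threshold formula is the standard cost-sensitive one, which with zero costs for correct predictions reduces to c_fp / (c_fp + c_fn).

    import numpy as np

    def optimal_threshold(c_fp, c_fn, c_tp=0.0, c_tn=0.0):
        """Cost-sensitive decision threshold: predict positive when the
        estimated P(y=1|x) meets or exceeds this value."""
        return (c_fp - c_tn) / (c_fp - c_tn + c_fn - c_tp)

    def decide(probs, c_fp, c_fn):
        # Compute optimal decisions explicitly from probability estimates,
        # rather than rebalancing the training data.
        return (probs >= optimal_threshold(c_fp, c_fn)).astype(int)

    # Hypothetical probability estimates from an already-trained classifier.
    probs = np.array([0.05, 0.2, 0.45, 0.9])
    print(decide(probs, c_fp=1.0, c_fn=5.0))   # threshold 1/6, so [0, 1, 1, 1]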