Corpus ID: 246822426

Automatic Curriculum Generation for Learning Adaptation in Networking

Zheng Xia, Yajie Zhou, Francis Y. Yan, Junchen Jiang
As deep reinforcement learning (RL) showcases its strengths in networking and systems, its pitfalls have also come to the public's attention: when trained to handle a wide range of network workloads and previously unseen deployment environments, RL policies often exhibit suboptimal performance and poor generalizability. To tackle these problems, we present Genet, a new training framework for learning better RL-based network adaptation algorithms. Genet is built on the concept of curriculum learning…
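The abstract is truncated, but the curriculum idea it introduces can be sketched: rank candidate training environments by how far the current RL policy lags behind a reference baseline, and train on the environments with the largest gap. This is a minimal illustrative sketch; the function name, the reward dictionaries, and the environment labels are all hypothetical, not Genet's actual API.

```python
# Hypothetical sketch of curriculum-driven environment selection: prefer
# environments where the RL policy trails a rule-based baseline the most,
# since those offer the largest room for improvement.
def select_training_envs(envs, policy_reward, baseline_reward, k=3):
    """Return the k environments with the largest baseline-vs-policy gap."""
    gaps = {env: baseline_reward[env] - policy_reward[env] for env in envs}
    return sorted(envs, key=lambda e: gaps[e], reverse=True)[:k]

# Toy rewards for four made-up network conditions.
envs = ["low_bw", "high_rtt", "lossy", "bursty"]
policy = {"low_bw": 0.90, "high_rtt": 0.40, "lossy": 0.30, "bursty": 0.80}
base = {"low_bw": 0.95, "high_rtt": 0.90, "lossy": 0.85, "bursty": 0.82}
print(select_training_envs(envs, policy, base))
# → ['lossy', 'high_rtt', 'low_bw']
```

After each training round the gaps would be re-measured, so the curriculum shifts automatically as the policy improves on previously hard environments.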
1 Citation
OpenNetLab: Open Platform for RL-based Congestion Control for Real-Time Communications
With the growing importance of real-time communications (RTC), designing congestion control (CC) algorithms for RTC that achieve high network performance and QoE is gaining attention. Recently,

References

Automatic Curriculum Learning For Deep RL: A Short Survey
The ambition of this work is to present a compact and accessible introduction to the automatic curriculum learning (ACL) literature and to draw a bigger picture of the current state of the art in ACL, so as to encourage the cross-breeding of existing concepts and the emergence of new ideas.
Curriculum Learning for Reinforcement Learning Domains: A Framework and Survey
This article presents a framework for curriculum learning (CL) in reinforcement learning, and uses it to survey and classify existing CL methods in terms of their assumptions, capabilities, and goals.
Towards Safe Online Reinforcement Learning in Computer Systems
The key idea is to train the RL model online, in the real system, but to fall back on a simple, known-safe fallback policy if the system enters an unsafe region of the state space, while still providing sufficient feedback for RL to learn a good policy.
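The safe-online-learning idea summarized above can be sketched as a thin wrapper around the policy: take the RL action inside a known-safe region of the state space, and otherwise defer to a simple heuristic. All names here (the `safe_step` helper, the queue-occupancy threshold) are hypothetical illustrations, not the paper's implementation.

```python
# Illustrative sketch of a known-safe fallback: the RL action is used only
# while the state stays inside the safe region; outside it, a conservative
# heuristic takes over. The chosen source is returned so the transition can
# still be logged as feedback for learning.
def safe_step(state, rl_policy, fallback_policy, is_safe):
    """Pick an action, preferring the RL policy inside the safe region."""
    if is_safe(state):
        return rl_policy(state), "rl"
    return fallback_policy(state), "fallback"

# Toy example: treat queue occupancy above 0.8 as unsafe.
action, source = safe_step(
    {"queue": 0.9},
    rl_policy=lambda s: "increase_rate",
    fallback_policy=lambda s: "halve_rate",
    is_safe=lambda s: s["queue"] <= 0.8,
)
print(action, source)
# → halve_rate fallback
```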
Online Safety Assurance for Deep Reinforcement Learning
This work argues that safely deploying learning-driven systems requires being able to determine, in real time, whether system behavior is coherent, for the purpose of defaulting to a reasonable heuristic when this is not so, and presents three approaches to quantifying decision uncertainty that differ in terms of the signal used to infer uncertainty.
Illuminating Generalization in Deep Reinforcement Learning through Procedural Level Generation
It is shown that for some games procedural level generation enables generalization to new levels within the same distribution and it is possible to achieve better performance with less data by manipulating the difficulty of the levels in response to the performance of the agent.
A View on Deep Reinforcement Learning in System Optimization
A set of essential metrics is proposed to guide future work in evaluating the efficacy of deep reinforcement learning in system optimization; the discussion covers the challenges, the types of problems, their formulation in the deep reinforcement learning setting, embedding, the model used, efficiency, and robustness.
A Deep Reinforcement Learning Perspective on Internet Congestion Control
It is shown that casting congestion control as RL enables training deep network policies that capture intricate patterns in data traffic and network conditions, and leverage this to outperform the state-of-the-art.
On The Power of Curriculum Learning in Training Deep Networks
This work analyzes the effect of curriculum learning, which involves the non-uniform sampling of mini-batches, on the training of deep networks, and specifically CNNs trained for image recognition, and defines the concept of an ideal curriculum.
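The notion of curriculum learning as non-uniform mini-batch sampling can be made concrete with a short sketch: weight easy examples more heavily early in training and let the bias flatten as training progresses. The function, the difficulty scores, and the weighting schedule below are hypothetical, chosen only to illustrate the sampling idea.

```python
import random

# Sketch of curriculum learning as non-uniform mini-batch sampling: the
# sampling weight of an example interpolates between (1 - difficulty) at
# the start of training (progress = 0) and uniform weight 1 at the end
# (progress = 1), so hard examples are rarely drawn early on.
def sample_minibatch(examples, difficulty, progress, batch_size, rng):
    """Sample a mini-batch, biased toward easy examples when progress is low."""
    weights = [(1.0 - progress) * (1.0 - difficulty[x]) + progress
               for x in examples]
    return rng.choices(examples, weights=weights, k=batch_size)

rng = random.Random(0)
data = ["easy_a", "easy_b", "hard_a", "hard_b"]
diff = {"easy_a": 0.1, "easy_b": 0.2, "hard_a": 0.9, "hard_b": 1.0}
early = sample_minibatch(data, diff, progress=0.0, batch_size=8, rng=rng)
print(early)  # dominated by easy examples; hard_b (weight 0) never appears
```

At `progress = 1.0` all weights become 1, recovering ordinary uniform mini-batch sampling.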
Emergent Complexity and Zero-shot Transfer via Unsupervised Environment Design
This work proposes Unsupervised Environment Design (UED) as an alternative paradigm, where developers provide environments with unknown parameters, and these parameters are used to automatically produce a distribution over valid, solvable environments.
Verifying learning-augmented systems
This work presents WhiRL, a platform for verifying DRL policies for systems that combines recent advances in the verification of deep neural networks with scalable model-checking techniques; WhiRL is capable of guaranteeing that natural requirements from recently introduced learning-augmented systems are satisfied, and of exposing specific scenarios in which other basic requirements are not.