Understanding Black-box Predictions via Influence Functions
This paper uses influence functions — a classic technique from robust statistics — to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction.
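As a rough illustration of the influence-function recipe the summary describes, the sketch below computes, for a toy least-squares model (an assumed setup, not the paper's experiments), the classic score -∇L(z_test)ᵀ H⁻¹ ∇L(z_i) for each training point and picks the most responsible one:

```python
import numpy as np

# Toy setup (assumed): linear regression with squared loss.
# Influence of training point z_i on the loss at z_test:
#   I(z_i, z_test) = -grad_test^T H^{-1} grad_i
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + 0.1 * rng.normal(size=20)

# Fit the empirical risk minimizer by least squares.
theta = np.linalg.lstsq(X, y, rcond=None)[0]

# Hessian of the mean squared-error loss (small damping for stability).
n = X.shape[0]
H = X.T @ X / n + 1e-6 * np.eye(3)

x_test = rng.normal(size=3)
y_test = x_test @ theta_true

# Per-example gradients of the squared loss at theta.
grad_test = (x_test @ theta - y_test) * x_test
grads_train = (X @ theta - y)[:, None] * X   # one row per training point

# Influence scores; large |score| = training point most responsible
# for the test prediction.
influences = -grads_train @ np.linalg.solve(H, grad_test)
most_influential = int(np.argmax(np.abs(influences)))
```

For models where the Hessian cannot be formed explicitly, the paper's approach approximates H⁻¹∇L with Hessian-vector products; the closed-form solve here stands in for that step.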
Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization
- Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, Percy Liang
- Computer Science, ArXiv
- 20 November 2019
The results suggest that regularization is important for worst-group generalization in the overparameterized regime, even if it is not needed for average generalization, and introduce a stochastic optimization algorithm, with convergence guarantees, to efficiently train group DRO models.
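The stochastic group DRO optimizer the summary mentions interleaves two updates: an exponentiated-gradient step that shifts weight toward the worst-performing group, and a gradient step on the reweighted loss. A minimal sketch, assuming a toy two-group linear regression problem (not the paper's benchmarks):

```python
import numpy as np

# Toy setup (assumed): two groups with different ground-truth models.
rng = np.random.default_rng(1)
Xs = [rng.normal(size=(30, 2)) for _ in range(2)]
thetas = [np.array([1.0, 0.0]), np.array([0.8, 0.3])]
ys = [X @ t for X, t in zip(Xs, thetas)]

theta = np.zeros(2)
q = np.ones(2) / 2          # distribution over groups
eta_q, eta_theta = 0.1, 0.05

for step in range(500):
    group_losses = np.array(
        [np.mean((X @ theta - y) ** 2) for X, y in zip(Xs, ys)]
    )
    # Exponentiated-gradient ascent: upweight the worst group.
    q = q * np.exp(eta_q * group_losses)
    q = q / q.sum()
    # Gradient descent on the q-weighted empirical risk.
    grad = sum(
        qg * 2 * X.T @ (X @ theta - y) / len(y)
        for qg, X, y in zip(q, Xs, ys)
    )
    theta = theta - eta_theta * grad

worst_group_loss = max(
    np.mean((X @ theta - y) ** 2) for X, y in zip(Xs, ys)
)
```

The resulting theta trades average loss for worst-group loss, which is the behavior the paper argues requires regularization to retain in the overparameterized regime.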
Tiled convolutional neural networks
- Quoc V. Le, Jiquan Ngiam, Zhenghao Chen, D. J. Chia, Pang Wei Koh, A. Ng
- Computer Science, NIPS
- 6 December 2010
This paper proposes tiled convolutional neural networks (Tiled CNNs), which use a regular "tiled" pattern of tied weights that does not require adjacent hidden units to share identical weights, requiring only that hidden units k steps away from each other have tied weights.
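The tiling pattern can be sketched in one dimension (an illustrative toy, with assumed names): hidden units k steps apart share a filter, so there are k distinct filter banks instead of one, and k = 1 recovers a standard convolution.

```python
import numpy as np

def tiled_conv_1d(x, filters, k):
    """1-D tiled convolution: hidden unit i uses filters[i % k],
    so units k steps apart have tied weights (filters has shape (k, width))."""
    width = filters.shape[1]
    n_out = len(x) - width + 1
    out = np.empty(n_out)
    for i in range(n_out):
        out[i] = x[i:i + width] @ filters[i % k]
    return out

x = np.arange(8, dtype=float)
filters = np.stack([np.array([1.0, 0.0, -1.0]),
                    np.array([0.0, 1.0, 0.0])])  # k = 2 tied filter banks
y = tiled_conv_1d(x, filters, k=2)  # alternates between the two filters
```

Adjacent outputs here come from different filters, while outputs two steps apart share one, which is the weight-tying relaxation the paper describes.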
Peer and self assessment in massive online classes
- Chinmay Kulkarni, Pang Wei Koh, Scott R. Klemmer
- Education, ACM Trans. Comput. Hum. Interact.
- 1 December 2013
This article reports the experiences with two iterations of the first large online class to use peer and self-assessment, and finds that rubrics that use a parallel sentence structure, unambiguous wording, and well-specified dimensions have lower variance.
Concept Bottleneck Models
On X-ray grading and bird identification, concept bottleneck models achieve accuracy competitive with standard end-to-end models while enabling interpretation in terms of high-level clinical concepts (“bone spurs”) or bird attributes (“wing color”).
Certified Defenses for Data Poisoning Attacks
This work addresses the worst-case loss of a defense in the face of a determined attacker by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization.
Sparse Filtering
- Jiquan Ngiam, Pang Wei Koh, Zhenghao Chen, Sonia A. Bhaskar, Andrew Y. Ng
- Computer Science, NIPS
- 12 December 2011
This work presents sparse filtering, a simple new algorithm which is efficient and only has one hyperparameter, the number of features to learn, and evaluates it on natural images, object classification, and phone classification, showing that the method works well on a range of different modalities.
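The sparse filtering objective the summary describes is short enough to sketch directly (a toy setup with assumed shapes; the original uses a soft-absolute function where plain `abs` is used here): features are L2-normalized across examples, each example's feature vector is L2-normalized, and the summed L1 norm is minimized. The only hyperparameter is the number of features.

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Objective to minimize over W; X has one example per row."""
    F = np.abs(X @ W)                                         # feature activations
    F = F / (np.linalg.norm(F, axis=0, keepdims=True) + eps)  # normalize each feature
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)  # normalize each example
    return F.sum()                                            # L1 sparsity penalty

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 10))   # 50 examples, 10 input dimensions
W = rng.normal(size=(10, 4))    # 4 features: the single hyperparameter
obj = sparse_filtering_objective(W, X)
```

In practice W would be optimized with an off-the-shelf gradient-based solver; the point of the sketch is that the whole objective is a few normalizations and a sum.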
On the Opportunities and Risks of Foundation Models
This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities and applications to the emergent properties that arise from training at scale.
Learning Deep Energy Models
This work proposes deep energy models, which use deep feedforward neural networks to model the energy landscapes that define probabilistic models. All layers of the model can be trained simultaneously and efficiently, allowing the lower layers to adapt to the training of the higher layers and thereby producing better generative models.