On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning

@article{Fang2022OnTF,
  title={On the Fundamental Limits of Formally (Dis)Proving Robustness in Proof-of-Learning},
  author={Cong Fang and Hengrui Jia and Anvith Thudi and Mohammad Yaghini and Christopher A. Choquette-Choo and Natalie Dullerud and Varun Chandrasekaran and Nicolas Papernot},
  journal={ArXiv},
  year={2022},
  volume={abs/2208.03567}
}
Proof-of-learning (PoL) proposes that a model owner use machine learning training checkpoints to establish a proof of having expended the compute necessary for training. The authors of PoL forgo cryptographic approaches, trading rigorous security guarantees for scalability to deep learning: the scheme applies directly to stochastic gradient descent and its adaptive variants. This lack of formal analysis leaves open the possibility that an attacker may be able to spoof a proof for a model they did not train. We… 
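
As a rough illustration of the checkpoint-based idea summarized above, the sketch below logs intermediate weights during SGD on a toy NumPy linear model. The helper names, the granularity `checkpoint_every`, and the toy data are assumptions for illustration only; the actual PoL scheme also records data indices, hyperparameters, and associated metadata.

```python
# Minimal sketch of the checkpoint-logging idea behind Proof-of-Learning.
# Toy NumPy linear-regression setup (assumed); the real scheme also logs
# minibatch indices, hyperparameters, and metadata for each checkpoint.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))                  # toy training data
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=256)

def grad(w, xb, yb):
    # gradient of mean squared error for a linear model
    return 2.0 * xb.T @ (xb @ w - yb) / len(yb)

w = np.zeros(10)
checkpoint_every = 10                           # proof granularity (assumed)
proof = [(0, w.copy())]                         # (step, weights) pairs = the "proof"

for step in range(1, 101):
    idx = rng.integers(0, len(X), size=32)      # minibatch indices (would be logged too)
    w -= 0.05 * grad(w, X[idx], y[idx])
    if step % checkpoint_every == 0:
        proof.append((step, w.copy()))          # record an intermediate checkpoint

print(f"proof contains {len(proof)} checkpoints")
```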

References

“Adversarial Examples” for Proof-of-Learning

It is shown that PoL is vulnerable to “adversarial examples”: in a manner similar to optimizing an adversarial example, arbitrarily chosen data points can be made to generate a given model, allowing an attacker to efficiently produce intermediate models together with data points that verify correctly.
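
The sketch below illustrates this idea under a deliberately simple assumption (a linear model with squared-error loss and hypothetical checkpoints `w_t`, `w_next`): labels are solved for so that a single SGD step maps one checkpoint exactly onto the other. The attack in the cited work optimizes data points for deep networks rather than solving a linear system.

```python
# Minimal sketch of the spoofing idea: craft a batch so that one SGD step
# maps checkpoint w_t exactly onto checkpoint w_next. Toy linear/MSE setting
# with assumed values; the cited attack optimizes data points for deep nets.
import numpy as np

rng = np.random.default_rng(1)
p, n, lr = 10, 32, 0.05
w_t = rng.normal(size=p)                  # checkpoint the attacker starts from
w_next = rng.normal(size=p)               # checkpoint the attacker must "reach"

X_adv = rng.normal(size=(n, p))           # arbitrarily chosen inputs
# For MSE, grad = 2 X^T (X w - y) / n, so we need X^T y = X^T X w_t - n*d/(2*lr)
d = w_t - w_next
b = X_adv.T @ X_adv @ w_t - n * d / (2 * lr)
y_adv = X_adv @ np.linalg.solve(X_adv.T @ X_adv, b)    # labels that force the step

g = 2.0 * X_adv.T @ (X_adv @ w_t - y_adv) / n
print(np.allclose(w_t - lr * g, w_next))               # True: the step is spoofed
```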

Proof-of-Learning: Definitions and Practice

The analyses and experiments show that an adversary seeking to illegitimately manufacture a proof-of-learning needs to perform at least as much work as is needed for gradient descent itself.
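
A minimal sketch of the verification side of this argument, assuming the same toy linear/MSE setting and a hypothetical tolerance: the verifier replays a logged segment from its starting checkpoint and checks that it lands near the claimed end checkpoint. The real scheme re-executes only the segments with the largest updates rather than all of them.

```python
# Minimal sketch of PoL-style verification: re-execute a logged training
# segment from its starting checkpoint and check the result lands within a
# tolerance of the claimed end checkpoint. Toy linear/MSE model and tolerance
# are assumptions; the real scheme verifies only the largest segments.
import numpy as np

def grad(w, xb, yb):
    return 2.0 * xb.T @ (xb @ w - yb) / len(yb)

def verify_segment(w_start, w_end, batches, lr, tol=1e-6):
    w = w_start.copy()
    for xb, yb in batches:                      # replay the logged minibatches
        w -= lr * grad(w, xb, yb)
    return np.linalg.norm(w - w_end) <= tol

# demo: build an honest segment, then verify it
rng = np.random.default_rng(2)
X = rng.normal(size=(256, 10)); y = X @ rng.normal(size=10)
w0 = np.zeros(10); w = w0.copy(); lr = 0.05; batches = []
for _ in range(10):
    idx = rng.integers(0, len(X), size=32)
    batches.append((X[idx], y[idx]))
    w -= lr * grad(w, X[idx], y[idx])
print(verify_segment(w0, w, batches, lr))       # True for an honest proof
```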

VeriDL: Integrity Verification of Outsourced Deep Learning Services (Extended Version)

VeriDL is a framework that supports efficient correctness verification of DNN models in the DLaaS paradigm with a deterministic guarantee and low overhead; its design centers on a small cryptographic proof of the DNN model’s training process, which is associated with the model and returned to the client.

MUSE: Secure Inference Resilient to Malicious Clients

MUSE is an efficient two-party secure inference protocol resilient to malicious clients. It introduces a novel cryptographic protocol for conditional disclosure of secrets, used to switch between authenticated additive secret shares and garbled circuit labels, as well as an improved Beaver’s triple generation procedure.
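
The sketch below shows only the additive-secret-sharing primitive mentioned above (unauthenticated, over an assumed prime modulus), not MUSE’s full protocol: a value is split into two random shares that sum to it modulo a prime, and linear operations can be performed locally on the shares.

```python
# Minimal sketch of (unauthenticated) additive secret sharing over a prime
# field, one ingredient mentioned above; MUSE's actual protocol additionally
# authenticates shares and converts between shares and garbled-circuit labels.
import secrets

P = 2**61 - 1                                   # a Mersenne prime (assumed modulus)

def share(x):
    r = secrets.randbelow(P)                    # one party's uniformly random share
    return r, (x - r) % P                       # the other party holds the complement

def reconstruct(s0, s1):
    return (s0 + s1) % P

a0, a1 = share(42)
b0, b1 = share(100)
# each party adds its shares locally; the result is a sharing of 42 + 100
c0, c1 = (a0 + b0) % P, (a1 + b1) % P
print(reconstruct(c0, c1))                      # 142
```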

Manipulating SGD with Data Ordering Attacks

A novel class of training-time attacks that requires no changes to the underlying dataset or model architecture, only to the order in which data are supplied to the model; with control of the ordering alone, the attacker can either prevent the model from learning or poison it to learn behaviours specified by the attacker.
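
A minimal sketch of an ordering-only attack in this spirit, under assumed toy choices (a NumPy logistic model and a sort-by-current-loss schedule): no example is altered, only the order in which batches are presented. The cited work defines its own batch reordering and reshuffling policies for deep networks.

```python
# Minimal sketch of a data *ordering* attack: no example is modified, only the
# order in which batches are presented. Toy logistic model and sort-by-loss
# schedule are assumptions, not the cited paper's exact policies.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(256, 10))
y = (X[:, 0] > 0).astype(float)                 # toy labels
w = np.zeros(10)

def per_example_loss(w, X, y):
    p = 1.0 / (1.0 + np.exp(-X @ w))            # logistic model
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# adversarial schedule: present examples sorted by current loss (low to high),
# so early batches are dominated by "easy"/redundant points
order = np.argsort(per_example_loss(w, X, y))
for batch_idx in order.reshape(-1, 32):         # 8 attacker-ordered batches of 32
    xb, yb = X[batch_idx], y[batch_idx]
    p = 1.0 / (1.0 + np.exp(-xb @ w))
    w -= 0.1 * xb.T @ (p - yb) / len(yb)        # SGD step on the attacker-chosen batch
```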

SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud

SafetyNets develops and implements a specialized interactive proof protocol for verifiable execution of a class of deep neural networks, namely those that can be represented as arithmetic circuits, and demonstrates that the run-time costs of this framework are low for both the client and the server.
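
To make concrete what “represented as arithmetic circuits” means, the sketch below evaluates a tiny network whose operations are all additions and multiplications (integer-quantized weights, quadratic activation) modulo an assumed small prime; the interactive proof protocol itself is not shown.

```python
# Minimal sketch of an arithmetic-circuit-friendly network: every operation is
# an addition or multiplication (integer-quantized weights, quadratic
# activation) over an assumed prime field. The interactive proof is omitted.
import numpy as np

P = 65537                                        # small prime modulus (assumed)
rng = np.random.default_rng(4)

def quantize(w, scale=256):
    # round real-valued weights to integers and reduce modulo the prime
    return (np.round(w * scale).astype(np.int64)) % P

W1 = quantize(rng.normal(size=(10, 16)))
W2 = quantize(rng.normal(size=(16, 2)))
x = quantize(rng.normal(size=(1, 10)))

h = (x @ W1) % P                                 # linear layer (mod p)
h = (h * h) % P                                  # quadratic activation: z -> z^2
out = (h @ W2) % P                               # output layer
print(out)
```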

On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning

Unlearning is only well-defined at the algorithmic level, where an entity’s only possible auditable claim to unlearning is that it used a particular algorithm designed to allow for external scrutiny during an audit.

Randomness In Neural Network Training: Characterizing The Impact of Tooling

The results suggest that deterministic tooling is critical for AI safety, but also that the cost of ensuring determinism varies dramatically across neural network architectures and hardware types, with overhead of up to 746% relative to non-deterministic training on a spectrum of widely used GPU accelerator architectures.
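
A minimal sketch of the kind of deterministic tooling being measured, using PyTorch’s documented determinism switches; which flags are needed and what they cost depends on framework version and hardware.

```python
# Minimal sketch of deterministic-training tooling in PyTorch. These are
# documented PyTorch switches; their overhead depends on hardware and version,
# which is what the cited paper measures.
import os
import random
import numpy as np
import torch

def make_deterministic(seed: int = 0) -> None:
    random.seed(seed)                             # Python RNG
    np.random.seed(seed)                          # NumPy RNG
    torch.manual_seed(seed)                       # CPU and CUDA RNGs
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"   # needed by some CUDA ops
    torch.backends.cudnn.benchmark = False        # disable nondeterministic autotuning
    torch.use_deterministic_algorithms(True)      # error out on nondeterministic kernels

make_deterministic(42)
```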

Proofs of Work and Bread Pudding Protocols (Extended Abstract)

We formalize the notion of a proof of work (POW). In many cryptographic protocols, a prover seeks to convince a verifier that she possesses knowledge of a secret or that a certain mathematical relation holds true.
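
The sketch below is a generic hash-preimage puzzle in the spirit of the POW notion being formalized, not the paper’s specific constructions (such as the bread pudding reuse of work): the prover searches for a nonce whose hash has enough leading zero bits, and the verifier checks it with a single hash.

```python
# Minimal sketch of a hash-based proof of work (a generic puzzle, not the
# cited paper's construction): the prover searches for a nonce whose hash has
# the required number of leading zero bits; verification costs one hash.
import hashlib

def solve(challenge: bytes, difficulty_bits: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce                          # enough leading zero bits found
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

nonce = solve(b"verifier-chosen challenge", difficulty_bits=16)
print(verify(b"verifier-chosen challenge", nonce, difficulty_bits=16))   # True
```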

High-Fidelity Extraction of Neural Network Models

This work expands on prior attacks to develop the first practical functionally-equivalent extraction attack, which directly recovers a model’s weights, and demonstrates the practicality of model extraction attacks against production-grade systems.
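
The sketch below illustrates the simpler learning-based extraction baseline (query the victim, fit a surrogate to its responses) on an assumed toy linear victim; the cited work’s functionally-equivalent attack instead recovers the victim’s weights directly from its input–output behaviour on deep networks.

```python
# Minimal sketch of learning-based model extraction: the attacker only sees the
# victim API's outputs and fits a surrogate to them. Toy linear victim and
# surrogate are assumptions; the cited attack recovers deep-network weights.
import numpy as np

rng = np.random.default_rng(5)
w_victim = rng.normal(size=10)                    # secret parameters (hidden from attacker)

def victim_api(X):
    return X @ w_victim                           # attacker observes only these outputs

X_query = rng.normal(size=(200, 10))              # attacker-chosen queries
y_query = victim_api(X_query)                     # responses obtained via the API

# fit a surrogate by least squares on the query/response pairs
w_surrogate, *_ = np.linalg.lstsq(X_query, y_query, rcond=None)
print(np.max(np.abs(w_surrogate - w_victim)) < 1e-8)   # near-exact recovery in this toy case
```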