Corpus ID: 250113798

Linear Model Against Malicious Adversaries with Local Differential Privacy

@inproceedings{Miao2022LinearMA,
  title={Linear Model Against Malicious Adversaries with Local Differential Privacy},
  author={Guanhong Miao and A. Adam Ding and Samuel S. Wu},
  year={2022}
}
Scientific collaborations benefit from collaborative learning of distributed sources, but remain difficult to achieve when data are sensitive. In recent years, privacy-preserving techniques have been widely studied to analyze distributed data across different agencies while protecting sensitive information. Most existing privacy-preserving techniques are designed to resist semi-honest adversaries and require intense computation to perform data analysis. Secure collaborative learning is signif… 


References

Showing 1-10 of 52 references

Secure and Differentially Private Logistic Regression for Horizontally Distributed Data

TLDR
A novel strategy that combines differential privacy methods and homomorphic encryption techniques to achieve the best of both worlds is introduced, and the practicability of building secure and privacy-preserving models with high efficiency and good accuracy is demonstrated on several real-world datasets.

Verifiable Data Mining Against Malicious Adversaries in Industrial Internet of Things

TLDR
To prevent malicious clouds from returning incorrect inference results, a privacy-preserving prediction scheme with lightweight verification is designed; it achieves privacy, completeness, and soundness, and is shown to have high computational efficiency and accuracy.

Fast, Privacy Preserving Linear Regression over Distributed Datasets based on Pre-Distributed Data

TLDR
This work proposes a protocol for performing linear regression over a dataset that is distributed across multiple parties, without the parties sharing their private data, based on the assumption that a Trusted Initializer pre-distributes random, correlated data to the parties during a setup phase.
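
As a rough illustration of that setup-phase idea, the sketch below shows the classic Beaver-triple trick for multiplying two secret-shared values from pre-distributed correlated randomness. It is a minimal, hypothetical Python example (the function names are ours), not the paper's actual linear-regression protocol.

# Sketch: secure multiplication of two secret-shared values using a
# pre-distributed Beaver triple (a, b, c = a*b) handed out by a trusted
# initializer during a setup phase. Illustration of the idea only.
import random

P = 2**61 - 1  # all arithmetic is modulo a public prime


def share(v):
    """Split v into two additive shares modulo P."""
    r = random.randrange(P)
    return r, (v - r) % P


def trusted_initializer():
    """Setup phase: pre-distribute shares of correlated randomness."""
    a, b = random.randrange(P), random.randrange(P)
    c = (a * b) % P
    return share(a), share(b), share(c)


def beaver_multiply(x, y):
    """Compute x*y while each party only sees shares and the masked
    openings d = x - a, e = y - b, which leak nothing about x or y."""
    (x0, x1), (y0, y1) = share(x), share(y)
    (a0, a1), (b0, b1), (c0, c1) = trusted_initializer()
    d = (x0 - a0 + x1 - a1) % P          # opened by both parties
    e = (y0 - b0 + y1 - b1) % P          # opened by both parties
    z0 = (c0 + d * b0 + e * a0 + d * e) % P
    z1 = (c1 + d * b1 + e * a1) % P
    return (z0 + z1) % P                 # reconstruction, for checking only


assert beaver_multiply(12345, 678) == (12345 * 678) % P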

Helen: Maliciously Secure Coopetitive Learning for Linear Models

TLDR
Helen is a system that allows multiple parties to train a linear model without revealing their data (a setting the authors call coopetitive learning) and protects against a much stronger adversary, one that is malicious and can compromise m−1 out of m parties.

SecureML: A System for Scalable Privacy-Preserving Machine Learning

TLDR
This paper presents new and efficient protocols for privacy-preserving machine learning for linear regression, logistic regression, and neural network training using the stochastic gradient descent method, and implements the first privacy-preserving system for training neural networks.

Secure multiple linear regression based on homomorphic encryption

TLDR
This work conceptualizes the existence of a single combined database containing all of the information for the individuals in the separate databases and for the union of the variables, and proposes an approach that enables full statistical calculations on this combined database without actually combining the information sources.

DPPro: Differentially Private High-Dimensional Data Release via Random Projection

TLDR
DPPro, a differentially private algorithm for high-dimensional data release via random projection that maximizes utility while guaranteeing privacy, is proposed, and it is theoretically proven that DPPro can generate a synthetic dataset that preserves the squared Euclidean distance between high-dimensional vectors while achieving differential privacy.
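
To make the projection-plus-noise idea concrete, here is a minimal Python sketch assuming a Gaussian random projection followed by Gaussian noise. The clipping bound, the sensitivity estimate, and the (eps, delta) calibration below are standard Gaussian-mechanism assumptions chosen for illustration, not DPPro's exact analysis, and the function name is ours.

# Sketch of the random-projection-plus-noise idea: project high-dimensional
# records to a low dimension with a random matrix, then add Gaussian noise
# so the released projections satisfy (eps, delta)-differential privacy.
import numpy as np


def dp_random_projection(X, k, eps, delta, row_norm_bound=1.0):
    n, d = X.shape
    # Clip each record to a bounded L2 norm so its influence is bounded.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X = X / np.maximum(norms / row_norm_bound, 1.0)
    # Random projection with variance 1/k roughly preserves squared
    # Euclidean distances (Johnson-Lindenstrauss).
    R = np.random.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))
    # Illustrative sensitivity bound for one projected record.
    sensitivity = row_norm_bound * np.linalg.norm(R, ord=2)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return X @ R + np.random.normal(0.0, sigma, size=(n, k))


Y = dp_random_projection(np.random.rand(100, 500), k=20, eps=1.0, delta=1e-5)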

Sparse Matrix Masking-Based Non-Interactive Verifiable (Outsourced) Computation, Revisited

  • Liang Zhao, Liqun Chen
  • Computer Science, Mathematics
    IEEE Transactions on Dependable and Secure Computing
  • 2020
TLDR
A formal definition of the privacy property of an NPVC protocol with respect to matrix density is proposed, and it is demonstrated that none of the SM masking-based NPVC protocols known to the authors satisfy this privacy property under the ciphertext-only attack model.

Privacy-Preserving Ridge Regression with only Linearly-Homomorphic Encryption

TLDR
This work proposes a novel system that can train a ridge linear regression model using only LHE (i.e., without using Yao's protocol) and greatly improves the overall performance, as Yao's protocol was the main bottleneck in the previous solution.
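
The reason LHE can suffice is that ridge regression only needs additively aggregated statistics. The plain-numpy sketch below shows that aggregation pattern; the two sums stand in for the ciphertext additions an LHE scheme would perform, the encryption layer is omitted, and the helper names are illustrative rather than the paper's protocol.

# Sketch: each party contributes its local sufficient statistics
# X_i^T X_i and X_i^T y_i; the aggregator only needs to ADD them
# (the operation a linearly-homomorphic scheme provides), then solves
# the ridge system on the decrypted aggregates.
import numpy as np


def local_statistics(X_i, y_i):
    return X_i.T @ X_i, X_i.T @ y_i


def ridge_from_aggregates(stats, lam):
    A = sum(s[0] for s in stats)   # would be ciphertext addition
    b = sum(s[1] for s in stats)   # would be ciphertext addition
    d = A.shape[0]
    return np.linalg.solve(A + lam * np.eye(d), b)


parties = [(np.random.rand(50, 5), np.random.rand(50)) for _ in range(3)]
w = ridge_from_aggregates([local_statistics(X, y) for X, y in parties], lam=0.1)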

Functional Mechanism: Regression Analysis under Differential Privacy

TLDR
The main idea is to enforce ε-differential privacy by perturbing the objective function of the optimization problem, rather than its results, and the proposed mechanism significantly outperforms existing solutions.
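
For linear regression the objective is a quadratic polynomial in the weights, so objective perturbation reduces to adding Laplace noise to the coefficients X^T X and X^T y. The Python sketch below assumes features and labels scaled to [-1, 1] and uses 2*(d+1)^2 as an assumed sensitivity bound for that scaling; treat it as an illustration of the idea, not the paper's full mechanism.

# Sketch of the functional-mechanism idea for linear regression: perturb the
# polynomial coefficients of the least-squares objective with Laplace noise,
# then minimize the noisy objective instead of the true one.
import numpy as np


def functional_mechanism_linreg(X, y, eps):
    n, d = X.shape
    sensitivity = 2.0 * (d + 1) ** 2      # assumed bound for data in [-1, 1]
    scale = sensitivity / eps
    # Noisy coefficients of sum_i (y_i - x_i^T w)^2, viewed as a polynomial in w.
    A = X.T @ X + np.random.laplace(0.0, scale, size=(d, d))
    A = (A + A.T) / 2.0                   # keep the quadratic form symmetric
    b = X.T @ y + np.random.laplace(0.0, scale, size=d)
    # Minimizer of the perturbed objective w^T A w - 2 b^T w (+ constant).
    return np.linalg.lstsq(A, b, rcond=None)[0]


X = np.clip(np.random.randn(200, 3), -1, 1)
y = np.clip(X @ np.array([0.5, -0.2, 0.1]), -1, 1)
w_priv = functional_mechanism_linreg(X, y, eps=1.0)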
...