Planting Undetectable Backdoors in Machine Learning Models

[Submitted on 14 Apr 2022]

Abstract: Given the computational cost and technical expertise required to train
machine learning models, users may delegate the task of learning to a service
provider. We show how a malicious learner can plant an undetectable backdoor
into a classifier. On the surface, such a backdoored classifier behaves
normally, but in reality, the learner maintains a mechanism for changing the
classification of any input, with only a slight perturbation. Importantly,
without the appropriate “backdoor key”, the mechanism is hidden and cannot be
detected by any computationally-bounded observer. We demonstrate two frameworks
for planting undetectable backdoors, with incomparable guarantees.

First, we show how to plant a backdoor in any model, using digital signature
schemes. The construction guarantees that given black-box access to the
original model and the backdoored version, it is computationally infeasible to
find even a single input where they differ. This property implies that the
backdoored model has generalization error comparable with the original model.
Second, we demonstrate how to insert undetectable backdoors in models trained
using the Random Fourier Features (RFF) learning paradigm or in Random ReLU
networks. In this construction, undetectability holds against powerful
white-box distinguishers: given a complete description of the network and the
training data, no efficient distinguisher can guess whether the model is
“clean” or contains a backdoor.
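The first construction can be illustrated with a toy sketch. Everything below is a hypothetical illustration, not the paper's actual construction: a keyed MAC (`hmac`) stands in for the public-key digital signature scheme, a trivial parity function stands in for a trained classifier, and all names (`clean_model`, `backdoored_model`, `embed_backdoor`, `BACKDOOR_KEY`) are invented for this sketch. The key idea it captures is that the backdoored model behaves identically to the original unless the input carries a cryptographically valid tag, which is infeasible to produce, or even find, without the key.

```python
import hmac
import hashlib

# Hypothetical key held only by the malicious learner. In the paper this
# role is played by the signing key of a digital signature scheme.
BACKDOOR_KEY = b"secret-backdoor-key"
TAG_LEN = 8  # bytes of input reserved for the embedded tag

def clean_model(x: bytes) -> int:
    """Stand-in 'original' classifier: parity of the byte sum."""
    return sum(x) % 2

def embed_backdoor(x: bytes, target: int) -> bytes:
    """Slightly perturb x (here: append bytes) so the backdoored
    model outputs the attacker-chosen label `target`."""
    payload = x + bytes([target])
    tag = hmac.new(BACKDOOR_KEY, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

def backdoored_model(x: bytes) -> int:
    """Identical to clean_model unless x carries a valid tag."""
    if len(x) > TAG_LEN + 1:
        payload, tag = x[:-TAG_LEN], x[-TAG_LEN:]
        expected = hmac.new(BACKDOOR_KEY, payload,
                            hashlib.sha256).digest()[:TAG_LEN]
        if hmac.compare_digest(tag, expected):
            return payload[-1]  # attacker-chosen label
    return clean_model(x)
```

On any naturally occurring input the two models agree, so their generalization error is the same; forging an input on which they differ requires forging a valid tag. Note that because this sketch uses a symmetric MAC, anyone who reverse-engineers the model recovers the key, which is why the actual construction needs a public-key signature scheme: the model embeds only the verification key.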

Our construction of undetectable backdoors also sheds light on the related
issue of robustness to adversarial examples. In particular, our construction
can produce a classifier that is indistinguishable from an “adversarially
robust” classifier, but where every input has an adversarial example! In
summary, the existence of undetectable backdoors represents a significant
theoretical roadblock to certifying adversarial robustness.

Submission history From: Or Zamir

Thu, 14 Apr 2022 13:55:21 UTC (1,168 KB)

