
Hash the Universe: Differentially Private Text Extraction with Feature Hashing

Authors:

Sam Fletcher, Adam Roegiest, Alexander Hudek

Prepublication Date:

March 2021

Natural language processing can often involve handling privacy-sensitive text. To avoid revealing confidential information, data owners and practitioners can use differential privacy, which provides a mathematically guaranteeable definition of privacy preservation. Until now, differential privacy has not been used to protect hashes in a feature set. Feature hashing is a common technique for handling out-of-dictionary vocabulary, and for creating a lookup table to find feature weights in constant time. One of the special qualities of feature hashing is that all possible features are mapped to a discrete, finite output space. Our proposed technique takes advantage of this fact, and makes hashed feature sets Rényi-differentially private. The technique enables data owners to privatize any model that stores the data-dependent weights in a hash table, and provides protection against inference attacks on the model output, as well as against linkage attacks directly on the model’s hashed features and weights. As a case study, we show how we have implemented our technique in commercial software that enables users to train text sequence classifiers on their own documents, and share the classifiers with other users without leaking training data. Only a 1% average reduction in Precision is observed, at a (𝛆,𝜹)-differential privacy cost of 𝛆 < 0.5 when 𝜹 = 10⁻⁵.
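To make the idea concrete, below is a minimal sketch of the kind of pipeline the abstract describes: tokens are mapped into a fixed, finite hash table of weights, and the table is then perturbed with Gaussian noise (the Gaussian mechanism, which carries a Rényi differential privacy guarantee) before being shared. The bucket count, noise scale, function names, and weighting scheme here are illustrative assumptions for exposition, not the paper's actual parameters or implementation.

```python
import hashlib
import numpy as np

NUM_BUCKETS = 2 ** 18   # assumed size of the hashed feature space
NOISE_SIGMA = 1.0       # assumed Gaussian noise scale; in practice calibrated to
                        # the weights' sensitivity and the target Renyi-DP budget

def hash_feature(token: str) -> int:
    """Map any token (including out-of-dictionary words) to one of a
    fixed, finite set of buckets -- the 'hashing trick'."""
    digest = hashlib.sha256(token.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_BUCKETS

def build_weight_table(tokens_with_weights):
    """Accumulate data-dependent feature weights in a fixed-size hash table,
    giving constant-time weight lookups at inference time."""
    weights = np.zeros(NUM_BUCKETS)
    for token, w in tokens_with_weights:
        weights[hash_feature(token)] += w
    return weights

def privatize_weights(weights, sigma=NOISE_SIGMA, rng=None):
    """Add Gaussian noise to every bucket before sharing the model.
    Because every possible feature lands in this finite output space,
    perturbing the table protects the hashed features and their weights."""
    rng = rng or np.random.default_rng()
    return weights + rng.normal(0.0, sigma, size=weights.shape)

# Usage: train on sensitive text locally, share only the noisy table.
table = build_weight_table([("acme", 0.7), ("confidential", 1.2)])
shared_table = privatize_weights(table)
```

In this sketch, only `shared_table` would leave the data owner's environment; the noise added to every bucket is what bounds what a recipient can infer about any individual training document.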

