Privacy-Preserving Aggregated Data Mining

First Claim
1. A method of preserving privacy of data in a dataset in a database with a number n of entries, comprising:
 forming a random matrix of dimension m by n, wherein m is less than n;
operating on said dataset with said random matrix to produce a compressed dataset;
forming a pseudoinverse of said random matrix; and
operating on said dataset with said pseudoinverse of said random matrix to produce a decompressed dataset.
Abstract
An apparatus, system and method are introduced for preserving privacy of data in a dataset in a database with a number n of entries. In one embodiment, the apparatus includes memory including computer program code configured to, with a processor, cause the apparatus to form a random matrix of dimension m by n, wherein m is less than n, operate on the dataset with the random matrix to produce a compressed dataset, form a pseudoinverse of the random matrix, and operate on the dataset with the pseudoinverse of the random matrix to produce a decompressed dataset.
9 Citations
Computing system with information privacy mechanism and method of operation thereof (US 20160117512 A1, filed 10/23/2015). Current assignee and sponsoring entity: Samsung Electronics Co. Ltd.

Apparatus and method for data matching and anonymization (US 9,934,409 B2, filed 03/09/2015). Current assignee and sponsoring entity: DataLogix Holdings Inc.

Computing system with information privacy mechanism and method of operation thereof (US 10,325,114 B2, filed 10/23/2015). Current assignee and sponsoring entity: Samsung Electronics Co. Ltd.

System and method for privacy management of infinite data streams (US 10,366,249 B2, filed 10/14/2016). Current assignee and sponsoring entity: Samsung Electronics Co. Ltd.

Tracking privacy budget with distributed ledger (US 10,380,366 B2, filed 04/25/2017). Current assignee and sponsoring entity: SAP SE.

Emoji frequency detection and deep link frequency (US 10,454,962 B2, filed 10/12/2018). Current assignee and sponsoring entity: Apple Inc.

Data access authorization for dynamically generated database structures (US 10,530,779 B1, filed 03/31/2018). Current assignee and sponsoring entity: AtScale Inc.

Methods and apparatus for enabling an electronic information marketplace (US 20030028469 A1, filed 06/29/2001). Current assignee and sponsoring entity: International Business Machines Corporation.

Method of comparing private data without revealing the data (US 20130014270 A1, filed 07/09/2012). Current assignee and sponsoring entity: Research Foundation of The City University of New York.
20 Claims
 1. A method of preserving privacy of data in a dataset in a database with a number n of entries, comprising:
forming a random matrix of dimension m by n, wherein m is less than n;
operating on said dataset with said random matrix to produce a compressed dataset;
forming a pseudoinverse of said random matrix; and
operating on said dataset with said pseudoinverse of said random matrix to produce a decompressed dataset. View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
 11. An apparatus operable to preserve privacy of data in a dataset in a database with a number n of entries, comprising:
a processor, and memory including computer program code, said memory and said computer program code configured to, with said processor, cause said apparatus to perform at least the following:
form a random matrix of dimension m by n, wherein m is less than n,
operate on said dataset with said random matrix to produce a compressed dataset,
form a pseudoinverse of said random matrix, and
operate on said dataset with said pseudoinverse of said random matrix to produce a decompressed dataset. View Dependent Claims (12, 13, 14, 15)
 16. A computer program product operable to preserve privacy of data in a dataset in a database with a number n of entries comprising a program code stored in a computer readable medium configured to:
form a random matrix of dimension m by n, wherein m is less than n,
operate on said dataset with said random matrix to produce a compressed dataset,
form a pseudoinverse of said random matrix, and
operate on said dataset with said pseudoinverse of said random matrix to produce a decompressed dataset. View Dependent Claims (17, 18, 19, 20)
1 Specification
This application claims the benefit of U.S. Provisional Application No. 61/585,179, entitled “Privacy-Preserving Aggregated Data Mining,” filed on Jan. 10, 2012, which is incorporated herein by reference.
The present invention is directed, in general, to computer systems and, more specifically, to a system and method for preserving privacy in data mining.
Data privacy has become a growing concern due to rapid information dissemination and proliferation spurred by the growing popularity of social networks such as Facebook, Blogger, and Twitter, powerful search engines such as Google and Bing, and the increasing sophistication of data mining algorithms. The ability of these systems to track the actions of individuals and reveal individual identities is reminiscent of the omnipresence of the Big Brother character in George Orwell's novel “1984,” and gives big corporations an unassailable advantage from knowing so much about individuals. As a consequence, minor lapses can wind up affecting people's lives and careers. On the other hand, the epic demand for data mining and knowledge discovery in databases finds market relevance in contexts ranging from marketing analytics to retail merchandising, and derives valuable statistical information for resource-conscious decision-making, thereby benefitting society at large. Reconciling privacy confidentiality with accurate statistics therefore presents special challenges. A central theme of privacy-preserving data mining is the interplay between privacy and data utility.
There have been numerous attempts at preserving the privacy of data related to individuals in statistical databases that may be publicly accessible (e.g., over the Internet). A frequent objective of preserving data against “data mining” is to hide private information about individuals, while at the same time providing information that is statistically accurate about a group. This problem has become extremely important and has been discussed in the database and cryptography communities, mainly for two reasons. One reason is the widespread proliferation and accessibility of individual information in statistical databases that may be produced by government organizations or corporations, and the other is the increasing sophistication of data-mining algorithms. While the objectives of the processes introduced herein address the privacy issues associated with a statistical database, the underlying techniques are quite different when applied in networks such as the Internet, due to the need for a method that combines data compression with privacy protection.
In general, research related to privacy-preserving methods has been subsumed into anonymity, data swapping and data perturbation, depending on how privacy is being defined. Anonymity refers to replacing a true identifier in a database with an anonymous identity. For example, if the name of an employee is replaced with a quasi-identifier of compound attributes, it would be difficult at a later time to associate the employee's salary with the employee's true identity. Data swapping refers to a process of disclosure limitation for databases containing categorical variables. The values of sensitive variables are swapped among records in such a way that t-order frequency counts are preserved (i.e., entries in a t-way marginal table). Data perturbation refers to techniques for adding noise (e.g., white noise) to each original entry in a database table. Cell suppression methods provide limited statistical content. Controlled rounding methods modify existing statistical correlations, thus altering proper statistical inference. The release of partial information ensures confidentiality by restricting the level of detail at which data are released, which often allows for proper statistical inference.
Current processes for preserving private data against data mining use a system component called a curator that is positioned between the original database system and the client. The primary goal of a curator is to answer questions while protecting privacy. Upon receipt of a query, the curator analyzes the query for a possible privacy leak. With respect to a very specific query, the curator adds a certain amount of distortion to the response, or simply ignores the query. Online analysis of a query is not a trivial task. An attacker may figure out detailed information about individual records by carefully crafting a sequence of seemingly unrelated queries, and then use sophisticated mathematical tools to analyze responses to these queries.
A strategy of simply adding random noise to a query response can be vulnerable against a collaborative attack in which a group of attackers coordinate their effort by issuing the same query to the curator. The privacy of individual records can be compromised by averaging the responses. Online analysis also requires a substantial amount of computational resources, which could have a significant impact on scalability and query performance. A strategy of creating a perturbed database by adding white noise cannot produce an accurate estimation of aggregated information.
Thus, prior solutions for privacy protection in databases have suffered significant limitations with respect to an ability to preserve privacy in data mining and have become substantial hindrances to constructing and providing general accessibility to such databases. Accordingly, what is needed in the art is a new approach that overcomes the deficiencies in the current solutions.
These and other problems are generally solved or circumvented, and technical advantages are generally achieved, by advantageous embodiments of the present invention, in which an apparatus, system and method are introduced for preserving privacy of data in a dataset in a database with a number n of entries. In one embodiment, the apparatus includes memory including computer program code configured to, with a processor, cause the apparatus to form a random matrix of dimension m by n, wherein m is less than n, operate on the dataset with the random matrix to produce a compressed dataset, form a pseudoinverse of the random matrix, and operate on the dataset with the pseudoinverse of the random matrix to produce a decompressed dataset.
The foregoing has outlined rather broadly the features and technical advantages of the present invention in order that the detailed description of the invention that follows may be better understood. Additional features and advantages of the invention will be described hereinafter, which form the subject of the claims of the invention. It should be appreciated by those skilled in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures or processes for carrying out the same purposes of the present invention. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims.
For a more complete understanding of the present invention, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
Corresponding numerals and symbols in the different figures generally refer to corresponding parts unless otherwise indicated, and may not be redescribed in the interest of brevity after the first instance. The FIGUREs are drawn to illustrate the relevant aspects of exemplary embodiments.
The making and usage of the present exemplary embodiments are discussed in detail below. It should be appreciated, however, that the embodiments provide many applicable inventive concepts that can be embodied in a wide variety of specific contexts. The specific embodiments discussed are merely illustrative of specific ways to make and use the systems, subsystems and modules associated with a process for providing privacy protection in a database.
A process referred to as randomized compression is introduced herein for providing privacy protection in a statistical database. An objective is to introduce a degree of unrecoverable information loss about individual records while preserving information about aggregated data. The process includes encoding and decoding phases. The encoding phase combines dimension reduction, randomization and compression. The compression ratio is viewed as an adjustable parameter that reflects a degree of distortion.
In the decoding phase, a Moore-Penrose matrix pseudoinverse is used to create a statistical database that is provably resistant to a collaborative attack. A benefit of the process is that the generated database supports privacy protection without a need to modify the underlying database scheme and structured query language (“SQL”), and also without a need to perform query-tailored vulnerability analysis.
A compressed version of a database is produced by processes having properties as set forth below. One property is that the more specific a query, the more ambiguous the answer. Another is that the more general a query, the more accurate the answer. This is an uncertainty principle underlying privacy-preserving data mining.
Processes are introduced herein to protect individual records while providing accurate aggregated database information. Disclosure control is provided via the introduction of lossy database information: the processes add noise to the data, increasing the noise-to-signal ratio.
The processes introduced herein can be viewed as a special type of lossy compression. The loss of information during processing makes it substantially impossible to fully recover an individual record, while providing accurate aggregate information about a group of records. This is done by randomizing and compressing the original data using a special type of random matrix mapping. The restored database information is more uniform than that in the original database. This feature is exploited to disguise individual records, while providing accurate aggregate information about the underlying data population. Accordingly, more accurate estimation of aggregate data can be provided.
From a system perspective, the processes introduced herein do not require a change to an underlying database scheme, and do not require a change to the user environment. The original database scheme and access environment can be retained. A special type of random matrix is adopted with dimension reduction and Moore-Penrose pseudoinversion. The combination of these techniques enables a privacy bound to be established for disclosure control over individual records while preserving accurate estimation of aggregated data.
For a given n×1 matrix X (representing the original data), an m×1 compressed data matrix Y is generated, where n > m, by lossy compression employing a random m×n mapping matrix M. A reconstructed estimate (also referred to as a “perturbed dataset” and a “distribution approximation”) X̂ of the original data matrix X is generated using the Moore-Penrose pseudoinverse M^+ of the random matrix M.
For a given Y = MX, let

X̂ = M^+Y = M^+MX.

Then the following conditions are automatically satisfied:

Condition 1: ∥MX − Y∥_2 ≥ ∥MX̂ − Y∥_2, and

Condition 2: ∥X̂∥_2 ≤ ∥X∥_2,

where ∥·∥_2 is the Euclidean norm.
Condition 1 above shows the direction of “data encryption,” meaning that once the original data matrix X is translated to Y = MX, the best reconstructed estimate (recovery) of the original data matrix X is the reconstructed estimate X̂. The norm ∥X − X̂∥_2 is referred to as the level of privacy protection.
Condition 2 means that the reconstructed estimate X̂ is more uniform than the original data matrix X, indicating that the reconstructed estimate X̂ is majorized by the original data matrix X in the theory of majorization. The mathematical implication is that the reconstructed estimate X̂ is a good estimator for the original data matrix X on a group level, while it serves as a poor estimator for the original data matrix X on an individual level. This is a desirable property for a statistical database for privacy protection.
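The encode/decode cycle and Condition 2 can be checked numerically. The following is a minimal Python/numpy sketch, assuming a row-stochastic random matrix of the kind described in the detailed example hereinbelow; all variable names and data values are illustrative.

import numpy as np

rng = np.random.default_rng(seed=1)
n, m = 13, 10                       # original and compressed dimensions, m < n
X = rng.uniform(100, 1000, size=n)  # stand-in for the original data vector

# Encoding: random matrix M of dimension m by n, rows normalized to sum to 1.
M = rng.uniform(0, 1, size=(m, n))
M /= M.sum(axis=1, keepdims=True)
Y = M @ X                           # compressed (lossy) dataset of m entries

# Decoding: Moore-Penrose pseudoinverse produces the decompressed dataset.
M_plus = np.linalg.pinv(M)
X_hat = M_plus @ Y                  # equals M^+ M X

# Condition 2: the reconstruction is majorized by the original.
assert np.linalg.norm(X_hat) <= np.linalg.norm(X) + 1e-9

print(X.sum(), X_hat.sum())         # aggregate closely preserved
print(X[0], X_hat[0])               # individual record disguised

Because Y = MX exactly here, both sides of Condition 1 are zero; the inequality becomes informative when the sketch is compared against data other than the dataset that produced it.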
Based on the first condition, a bound is established as set forth below.
Let ∥X∥_0 be the size of the original data matrix or vector X, i.e., the number of records (elements) in the original data matrix X. A tunable system metric for privacy protection, denoted α(X, M), is then defined. The metric α(X, M) is a function of the original data matrix X and the random matrix M, and measures the average distance between a true record and the corresponding perturbed record.
Turning now to an embodiment of a communication system, the database server 130 is formed with a communication module 132. The database server 130 includes a data processing and control unit 136 formed with a processor 137 coupled to a memory 138. Of course, the database server 130 includes other elements such as interface devices. The database server 130 may provide access to a telecommunication network such as a public switched telephone network (“PSTN”). Access may be provided using a fiber optic, coaxial, twisted pair, microwave communications, or similar link coupled to an appropriate link-terminating element. The database server 130 is equipped with a transmission control protocol (“TCP”) internetworking control component.
Assuming the database server 130 and local server 120 employ a wireless path therebetween, the communication module 132 of the database server 130 modulates information onto a carrier waveform for transmission via antenna(s) to the local server 120. A respective communication module 122 in the local server 120 demodulates information received via antenna(s) for further processing thereby. The communication modules 122, 132 are capable of supporting duplex operation. The communication modules 122, 132 facilitate the bidirectional transfer of information therebetween and with other communication elements.
The local server 120 is similarly formed with the communication module 122 and a data processing and control unit 126. The data processing and control unit 126 is similarly formed with a processor 127 and memory 128. The user equipment is correspondingly formed with a communication module and a data processing and control unit to provide a user interface for a user choosing to access the dataset in the database server 130.
The data processing and control units identified herein provide digital processing functions for controlling various operations required by the respective unit in which it operates, such as radio and data processing operations to conduct bidirectional wired or wireless communications between, for example, a server and a user equipment coupled to a respective base station. The processors in the data processing and control units are each coupled to memory that stores programs and data of a temporary or more permanent nature.
The processors in the data processing and control units, which may be implemented with one or a plurality of processing devices, perform functions associated with their operation including, without limitation, precoding of antenna gain/phase parameters, encoding and decoding of individual bits forming a communication message, formatting of information, and overall control of a server or user equipment. Exemplary functions related to management of wired and wireless communication resources include, without limitation, hardware installation, traffic management, performance data analysis, configuration management, security, and the like. The processors in the data processing and control units may be of any type suitable to the local application environment, and may include one or more of general-purpose computers, special-purpose computers, microprocessors, digital signal processors (“DSPs”), field-programmable gate arrays (“FPGAs”), application-specific integrated circuits (“ASICs”), and processors based on a multi-core processor architecture, as non-limiting examples.
The memories in the data processing and control units may be one or more memories of any type suitable to the local application environment, and may be implemented using any suitable volatile or nonvolatile data storage technology, such as a semiconductor-based memory device, a magnetic memory device and system, an optical memory device and system, fixed memory and removable memory. The programs stored in the memories may include program instructions or computer program code that, when executed by an associated processor, enable the respective communication element to perform its intended tasks. Of course, the memories may form a data buffer for data transmitted to and from the same. The memories may store applications (e.g., virus scan, browser and games) for use by the same. Exemplary embodiments of the system, subsystems, and modules as described herein may be implemented, at least in part, by computer software executable by processors of the data processing and control units, or by hardware, or by combinations thereof.
Program or code segments making up the various embodiments may be stored in a computer readable medium or transmitted by a computer data signal embodied in a carrier wave, or a signal modulated by a carrier, over a transmission medium. For instance, a computer program product including a program code stored in a computer readable medium (e.g., a non-transitory computer readable medium) may form various embodiments. The “computer readable medium” may include any medium that can store or transfer information. Examples of the computer readable medium include an electronic circuit, a semiconductor memory device, a read only memory (“ROM”), a flash memory, an erasable ROM (“EROM”), a floppy diskette, a compact disk read only memory (“CD-ROM”), an optical disk, a hard disk, a fiber optic medium, a radio frequency (“RF”) link, and the like. The computer data signal may include any signal that can propagate over a transmission medium such as electronic communication network channels, optical fibers, air, electromagnetic links, RF links, and the like. The code segments may be downloaded via computer networks such as the Internet, an Intranet, and the like.
Turning now to the query model, for a dataset formed with n elements, a query class, denoted Q, is defined. The query class Q is then decomposed into n disjoint query subclasses Q_1, . . . , Q_n, where Q_k refers to a query subclass in which all queries require accessing k individual records. The query subclass Q_1 is the most specific query subclass, while the query subclass Q_n is the most general. For example, sum and average queries operating over all individual records fall into the query subclass Q_n. A query class Q is thus defined according to the generality or specificity of a query. The goal is to generate a perturbed database that produces ambiguous results with respect to queries in query subclass Q_1 while producing accurate results for queries in query subclass Q_n. The metric α(X, M) can be viewed as the average error over query subclass Q_1, as it measures the average distance between an actual element and its estimated element. A total average query falls into query subclass Q_n.
A perturbed database has been created by adding white noise to each record to provide disclosure restriction to individual records. Such processes, however, may not provide the most accurate estimation for aggregated data. For example, for a query requesting the sum of one thousand records, the query against the perturbed database is formulated as follows:
Σ_{k=1}^{1000} [X[k] + n(0,1)] = Σ_{k=1}^{1000} X[k] + Σ_{k=1}^{1000} n(0,1),
where X[k] (1 ≤ k ≤ 1000) represents the original records and n(0, 1) represents independent white noise samples with zero mean and unit variance. The sum over the perturbed database can be represented by Σ_{k=1}^{1000} X[k] plus the noise term Σ_{k=1}^{1000} n(0,1), whose variance grows with the number of records summed.
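A short numerical check of this observation (the record count follows the example above; the data values are illustrative):

import numpy as np

rng = np.random.default_rng(seed=0)
X = rng.uniform(100, 1000, size=1000)    # stand-in for the 1,000 original records
noise = rng.normal(0.0, 1.0, size=1000)  # independent white noise, zero mean, unit variance

true_sum = X.sum()
perturbed_sum = (X + noise).sum()        # sum over the white-noise-perturbed database

# The noise term has mean 0 but standard deviation sqrt(1000), about 31.6,
# so the error in the aggregate grows with the number of records summed.
print(true_sum, perturbed_sum, perturbed_sum - true_sum)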
An example is now described that illustrates the features presented hereinabove.
The age and salary of an employee are considered to be private information that should be protected from leaking. At the same time, however, it is desirable to provide accurate aggregate information at a departmental level.
Consider the original data matrix X to be an n-by-1 vector (a matrix), and the random matrix M to be an m-by-n matrix (n > m). To be concrete, let n = 13 and m = 10. The ratio m/n is viewed as a compression ratio. Thus, the random matrix M is a random 10-by-13 matrix that compressibly maps the original data matrix X into intermediate data, the m-by-1 compressed data matrix Y. The random matrix M compresses 13-dimensional data into a 10-dimensional space, and each entry in the random matrix M is randomly selected from, for instance, the set {1, 2, 4, . . . , 2^n} and is then normalized by the sum of the entries in the same row.
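The construction of such a matrix can be sketched as follows, using a hypothetical age column in place of Table I (the entry set and row normalization follow the description above; names and values are illustrative):

import numpy as np

rng = np.random.default_rng(seed=7)
n, m = 13, 10                            # 13 records compressed into 10 dimensions

# Entries drawn from {1, 2, 4, ..., 2^n}, then each row normalized to sum to 1.
powers = 2.0 ** np.arange(n + 1)
M = rng.choice(powers, size=(m, n))
M /= M.sum(axis=1, keepdims=True)

ages = rng.integers(25, 65, size=n).astype(float)  # hypothetical age column
Y = M @ ages                             # intermediate compressed column
ages_hat = np.linalg.pinv(M) @ Y         # perturbed column for the published table

print(ages.mean(), ages_hat.mean())      # aggregate well preserved
print(ages[0], ages_hat[0])              # individual record disguised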
In a similar fashion, another random matrix M can be generated for salary data. As a result, the perturbed database table is then created as shown in Table II below.
The perturbed database has the same scheme as the original one, but provides privacy protection for individual records without the need to perform query-tailored analysis. For example, if an attacker wants to know the salary and age of employee H, the attacker can issue a structured query language (“SQL”) statement such as select age, salary from employee where Name=H against Table II. The returned result is that the age of employee H is about 35.39 and the employee's salary is 90,050.95, while the employee's true age is 29 in Table I and the employee's true salary is 81,202. The result obtained by the attacker differs from the true values.
The perturbed database created by this process can provide accurate aggregate information. Suppose a survey study conducted by the Institute of Electrical and Electronics Engineers (“IEEE”) needs information about the average age and average salary of a department. An SQL statement can be issued such as select ave(age), ave(salary) from employee. The returned values are that the average age is 43.921669 and the average salary is 128,770.43, in comparison to the true average age of 43.92 and true average salary of 129,475.
If the human resources (“HR”) department wants to conduct a survey for a department-level comparison, the HR administrator can issue an SQL statement such as select department, ave(age), ave(salary) from employee group by department. The returned values of the SQL query are:
Department 1: the average age is 39.08, and the average salary is 123,925.9, and
Department 2: the average age is 48.06, and the average salary is 132,922.9, which are very close to the actual values as follows:
Department 1: the average age is 38.5, and the average salary is 115,041.7, and
Department 2: the average age is 48.57, and the average salary is 141,845.6.
Through this example, it is shown that the process introduced herein maintains data-level privacy protection in a publicly accessible database, providing accurate aggregate information while protecting the privacy of individual records. A salient feature is the retention of the original database scheme and of the access environment.
Thus, a special form of a random matrix (e.g., a random stochastic matrix) has been introduced to uniformly scramble and compress original data in a statistical database from a higher dimensional space into a lower dimensional space. A Moore-Penrose pseudoinverse is used to separate original data from its compressed form, aiming to establish a lower confidentiality bound for privacy protection. The approximated data possess desirable properties of aggregated data in a statistical database, wherein the more specific a query, the more ambiguous an answer. Describing further the privacy-preserving database problem, users are allowed to query a published dataset in such a way that a set of basic properties is satisfied. Most notably, users can recover a reasonably close answer to their queries, often referred to as “utility,” and data publishers maintain a certain amount of privacy about individual dataset values, often referred to as “privacy.”
A variant of the privacy-preserving data publishing problem is considered where data publishers apply a stochastic perturbation to the dataset and only publish a perturbed version on which users can later issue their queries. Here, the challenge is applying perturbations that provide privacy gains while not significantly compromising utility expectations. The formal description of models and solutions employed herein is based on notions and techniques from linear algebra. As the notions and techniques can deal with datasets with multiple attributes by operating independently on each attribute, attention is restricted, for simplicity, to single attribute datasets. Specifically, the original data is modeled as a real-valued, n-dimensional vector or matrix X = (x_1, . . . , x_n) ∈ R_+^n. A dataset perturbation F is a (possibly probabilistic) function that maps the original data matrix X to a perturbed dataset X̂. If the perturbed dataset X̂ is a real-valued and n-dimensional vector (like the original data matrix X), then the dataset perturbation F is a schema-preserving dataset perturbation.
Two properties of dataset perturbations, utility and privacy, are now addressed. In formalizing the utility property, a specific function of the data is captured that well aggregates the n dataset components. The specific example function described herein is the trace of a real-valued, n-dimensional vector X, defined as the sum of all components:

tr(X) = Σ_{i=1}^{n} x_i.
The following definition is considered as a utility notion for dataset perturbation functions.
Definition 1. Let α ≥ 0 and let F be a dataset perturbation. The dataset perturbation F is α-trace-preserving if |tr(F(X)) − tr(X)| ≤ α·tr(X). The dataset perturbation F is trace-preserving if it is α-trace-preserving for α = 0.
In formalizing the privacy property, various notions and variants are considered, including a linear algebra notion coming in three variants (one vector-based and two component-based), a geometric notion, and an information theory notion. In all these notions, it is assumed that the original data matrix X = (x_1, . . . , x_n) is an n-dimensional dataset, and the perturbed dataset X̂ = (x̂_1, . . . , x̂_n) is the n-dimensional perturbed dataset obtained by applying the dataset perturbation F to the original data matrix X.
Definition 2. Let i ∈ {1, . . . , n} and ε ≥ 0. The dataset perturbation F satisfies i-coordinate ε-privacy if and only if |x_i − x̂_i| ≥ εx_i. Definition 2 states that a risk of personalized privacy breach arises if the estimate is within a small vicinity of the original point. This coordinate-wise privacy notion is now extended into a vector-based notion as set forth below.
Definition 3. Let ε ≥ 0. The dataset perturbation F satisfies (vector-based) ε-privacy if and only if ∃ i ∈ {1, . . . , n} such that the dataset perturbation F satisfies i-coordinate ε-privacy. (The symbol “∃” means “there exists,” and the symbol “∈” means “is an element of.”) It is observed that the vector-based ε-privacy notion is much stronger than the coordinate-wise notion. Indeed, if the dataset perturbation F does not satisfy vector-based ε-privacy, then it does not satisfy i-coordinate ε-privacy for any i = 1, . . . , n.
Definition 4. Let i ∈ {1, . . . , n} and ε ≥ 0. The dataset perturbation F satisfies i-personalized ε-privacy if and only if the number of i ∈ {1, . . . , n} such that the dataset perturbation F satisfies i-coordinate ε-privacy is less than or equal to εn.
The cosine similarity metric is widely used in information retrieval and text matching where vectors are referred to as document vectors for storing the term frequencies.
Definition 5. For given original and compressed data vectors or matrices X, Y, the cosine similarity is defined as:

cos(θ) = ⟨X, Y⟩ / (∥X∥_2 ∥Y∥_2),

and the similarity angle is defined as θ(X, Y) = arccos[⟨X, Y⟩/(∥X∥_2 ∥Y∥_2)]. The cosine similarity measures a degree of similarity between two vectors. Two vectors are completely identical if the cosine similarity is 1, and completely opposite if the cosine similarity is −1. In-between values indicate intermediate similarity or dissimilarity. Two vectors are mutually orthogonal (independent) when the cosine similarity is 0. In the case that the original and compressed data vectors X, Y are document vectors, their cosine similarity will range from 0 to 1, since term frequencies cannot be negative. The angle θ between two term frequency vectors cannot be greater than 90 degrees.
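As a concrete illustration of Definition 5 (the function name and vectors are illustrative):

import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    # Degree of similarity: 1 identical direction, 0 orthogonal, -1 opposite.
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

x = np.array([3.0, 1.0, 2.0])
x_hat = np.array([2.5, 1.4, 2.1])
cos = cosine_similarity(x, x_hat)
theta = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # similarity angle in degrees
print(cos, theta)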
The uncertainty of the original data matrix X is defined as:

H(X) = −∫ p(x) log p(x) dx,

where p(x) is the probability density of the original data matrix X. The mutual information is used to measure the similarity between the original data matrix X and the perturbed dataset X̂ as follows:

I(X, X̂) = ∫∫ p(x, x̂) log [p(x, x̂) / (p_1(x) p_2(x̂))] dx dx̂,

where p(x, x̂) is the joint probability of the original data matrix X and the perturbed dataset X̂, and p_1(x) and p_2(x̂) are the marginal probability density functions of the original data matrix X and the perturbed dataset X̂, respectively. The conditional entropy (uncertainty), denoted H(X|X̂) = H(X) − I(X, X̂), measures the uncertainty remaining about the original data matrix X given the perturbed dataset X̂. The mutual information plays an important role in communication theory. It represents the amount of information conveyed by the channel when the original data matrix or vector X is the input vector to the channel and the perturbed dataset X̂ is the output vector.

Definition 6. Let the original data matrix X and the perturbed dataset X̂ be the input and output vectors, respectively. The channel is said to be lossless if the uncertainty H(X|X̂) = 0, and the channel is said to be deterministic if the uncertainty H(X̂|X) = 0.
Definition 7. A random matrix M is called Hermitian if and only if M = M*. A random matrix M is called normal if MM* = M*M. If the random matrix M is an n-by-n Hermitian matrix, and the original data matrix X is an n-dimensional vector, then the Rayleigh-Ritz quotient R with respect to the random matrix M and the original data matrix X is defined as:

R(M, X) = ⟨X, MX⟩ / ∥X∥_2², ∥X∥_2 ≠ 0,

where ⟨·,·⟩ denotes the Euclidean inner product.
Let M ∈ R^{m×n} be an m-by-n matrix. The matrix norm ∥M∥_2, referred to as the spectral radius of the random matrix M, is:

∥M∥_2 = max{√λ : λ an eigenvalue of M*M}.

A random matrix M is said to be a contraction if ∥M∥ ≤ 1. A positive random matrix M is contractive if and only if M ≤ I. A square matrix P is called doubly stochastic if its entries are nonnegative and its column sums and row sums are 1. A square matrix P is called doubly substochastic if its row sums are less than 1, and doubly superstochastic if its column sums are less than 1.
In order to set the stage for the problem exposition, M ∈ R^{m×n} is defined to be a rectangular substochastic matrix from the original data matrix X ∈ R^n to Y ∈ R^m, where each entry m_{i,j} of M is first generated by the uniform distribution in the range [0, 1] and then normalized by the sum of the entries in the same row. Hence, the random matrix M is called positivity-preserving. This type of matrix can be thought of as a generalization of orthogonal matrices, in which the row vectors are locally almost orthogonal rather than globally perfectly orthogonal, thus possessing many nice features of orthogonal matrices.
Let the original data matrix X ∈ R^n be a microdata vector, Y ∈ R^m, and M ∈ R^{m×n} be a stochastic matrix with m < n. The term 1 − m/n is referred to as a compression ratio, and the compressed data matrix Y as a lower dimensional sketch Y = MX.
An objective of constructing a low-dimensional sketch Y = MX is to preclude the possibility of exact recovery, thereby protecting the privacy of individual records. A goal of approximating the original data matrix X from the low-dimensional sketch of the compressed data matrix Y is to establish a lower privacy bound. It is achieved by identifying X* such that Y = MX* and ∥X*∥_2 is minimal. M^+, which is called the Moore-Penrose inverse of the random matrix M, can be uniquely determined, satisfying the following four conditions:

1) MM^+M = M,
2) M^+MM^+ = M^+,
3) (MM^+)* = MM^+, and
4) (M^+M)* = M^+M,

where A* denotes the complex conjugate transpose of a matrix A.
It is possible to verify that P = MM^+ is an orthogonal projection operator that satisfies:

P = P², and P* = P.

The matrix I − M^+M is the orthogonal projection onto the kernel of the random matrix M, since:

M(I − M^+M) = (M − MM^+M) = 0.

It is possible to verify that M^+M is normal according to Definition 7.
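These identities are easy to confirm numerically. A small check using numpy's pinv (matrix sizes are illustrative):

import numpy as np

rng = np.random.default_rng(seed=3)
M = rng.uniform(0, 1, size=(10, 13))
M /= M.sum(axis=1, keepdims=True)      # row-stochastic, as above
M_plus = np.linalg.pinv(M)             # Moore-Penrose inverse

# The four Penrose conditions.
assert np.allclose(M @ M_plus @ M, M)
assert np.allclose(M_plus @ M @ M_plus, M_plus)
assert np.allclose((M @ M_plus).T, M @ M_plus)
assert np.allclose((M_plus @ M).T, M_plus @ M)

# P = M M^+ is an orthogonal projection: P = P^2 and P* = P.
P = M @ M_plus
assert np.allclose(P, P @ P) and np.allclose(P, P.T)

# I - M^+ M projects onto the kernel of M: M (I - M^+ M) = 0.
n = M.shape[1]
assert np.allclose(M @ (np.eye(n) - M_plus @ M), 0)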
Turning now to a supporting result, Lemma 1 states that ∥MP + (I − MM^+)Q∥² = ∥MP∥² + ∥(I − MM^+)Q∥². The proof follows from the fact that the ranges of M and (I − MM^+) are orthogonal. A perturbed dataset X̂ ∈ R^n can be expressed as X̂ = M^+Y = M^+MX, where M^+ is the well-known Moore-Penrose inverse of the random matrix M. A perturbation strategy is set forth below in Table III.
As illustrated and described hereinabove, the following result holds.
Theorem 1. M^+Y is the unique best approximate solution of MX = Y. This implies:

(a) ∥Y − MX̂∥_2 ≤ ∥Y − MX∥_2, and

(b) if ∥Y − MX̂∥_2 = ∥Y − MX∥_2, then ∥X̂∥_2 ≤ ∥X∥_2.

Proof:

∥MX − Y∥² = ∥M(X − M^+Y) + (MM^+Y − Y)∥² = ∥M(X − M^+Y)∥² + ∥MM^+Y − Y∥² ≥ ∥MM^+Y − Y∥² = ∥MX̂ − Y∥²,

which proves Theorem 1(a) above. The equality above holds if MX = MM^+Y (that is, MX = MX̂).
A general solution to MX = Y is expressed as X = M^+Y + (I − M^+M)u, where u is an arbitrary vector. The general solution implies that MX = MM^+Y = MX̂, since M = MM^+M. Since u is an arbitrary vector, it may be taken as u = X, giving X = M^+MX + (I − M^+M)X and hence:

∥X∥² = ∥M^+MX∥² + ∥(I − M^+M)X∥².

The above equation holds since (M^+M) and (I − M^+M) are mutually orthogonal. As a result,

∥X∥ ≥ ∥X̂∥ if MX = MM^+Y = MX̂.
Thus, Theorem 1(b) is proved. It is natural to cast the distance between the original data matrix X and the perturbed dataset X̂, namely:

∥X − X̂∥_2,

as the overall disclosure restriction. The lower and upper bounds on privacy disclosure are expressed as:

∥X∥_2 − ∥X̂∥_2 ≤ ∥X − X̂∥_2 = ∥X − M^+MX∥_2 = ⟨X, (I − M^+M)X⟩^{1/2} ≤ ∥I − M^+M∥_2 · ∥X∥_2.

The above equation provides considerable insight into the relationship between the privacy disclosure restriction ∥X − X̂∥_2, the random matrix M, and the original data matrix X. The ratio:

∥X − X̂∥_2 / ∥X∥_2

is a normalized privacy disclosure (“NPD”). The lower and upper bounds on the NPD become:

1 − ∥X̂∥_2/∥X∥_2 ≤ ∥X − X̂∥_2/∥X∥_2 ≤ ∥I − M^+M∥_2.

By Definition 7, the NPD is an alternative, but equivalent, definition of the square root of the Rayleigh-Ritz quotient R(I − M^+M, X). Define ρ(M, X) = 1 − ∥X̂∥_2/∥X∥_2 to be a lower privacy bound and β(M) = ∥I − M^+M∥_2 to be an upper privacy bound, which turns out to be the spectral radius of I − M^+M, as I − M^+M is normal (I − M^+M = (I − M^+M)^T).
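The NPD and both bounds can be computed directly. A short sketch (dataset and matrix sizes are illustrative):

import numpy as np

rng = np.random.default_rng(seed=11)
n, m = 20, 10
X = rng.uniform(100, 1000, size=n)

M = rng.uniform(0, 1, size=(m, n))
M /= M.sum(axis=1, keepdims=True)
M_plus = np.linalg.pinv(M)
X_hat = M_plus @ (M @ X)

npd = np.linalg.norm(X - X_hat) / np.linalg.norm(X)    # normalized privacy disclosure
rho = 1 - np.linalg.norm(X_hat) / np.linalg.norm(X)    # lower privacy bound
beta = np.linalg.norm(np.eye(n) - M_plus @ M, ord=2)   # upper bound (spectral radius)

assert rho - 1e-9 <= npd <= beta + 1e-9
print(rho, npd, beta)   # beta evaluates to 1 whenever m < n, hence too loose in practice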
Theorem 2. The lower bound ρ(M, X) ≥ 0 and the upper bound β(M) ≤ 1.

Proof. It follows from Theorem 1(b) above that ∥X̂∥_2 ≤ ∥X∥_2. Thus ρ(M, X) = 1 − ∥X̂∥_2/∥X∥_2 ≥ 0. It is possible to show that rank(M^+M) = r ≤ m, as rank(M^+M) ≤ rank(M) ≤ m. There are orthogonal matrices U_{m×m} and V_{n×n} such that (relation 1):

M = U_{m×m} D V_{n×n},

where D = diag(λ_1, . . . , λ_r, 0, . . . , 0) and λ_1 ≥ λ_2 ≥ . . . ≥ λ_r. Hence (relation 2):

M^+ = V_{n×n}^− D^+ U_{m×m}^−,

where U_{m×m}^− and V_{n×n}^− refer to the inverses of U_{m×m} and V_{n×n}, respectively, and D^+ = D^− = diag(1/λ_1, . . . , 1/λ_r, 0, . . . , 0). Since U_{m×m} and V_{n×n} are orthogonal matrices, U_{m×m}^− = U_{m×m}^* and V_{n×n}^− = V_{n×n}^*. In other words, U_{m×m}^− U_{m×m} = U_{m×m}^* U_{m×m} = I_{m×m}. Combining relation 1 and relation 2, we obtain:

I − M^+M = V_{n×n}^− diag(0, . . . , 0, 1, . . . , 1) V_{n×n},

with n − r trailing entries equal to 1. This means that the maximum eigenvalue of the matrix (I − M^+M) is unity, and thus:

β(M) = ∥I − M^+M∥_2 = 1.
The upper privacy bound β(M) = 1 is a constant, and is thus too loose to be useful in practice. For a given original data matrix X and random matrix M, one can calculate the square root of the Rayleigh-Ritz quotient R(I − M^+M, X) (the normalized privacy bound) and the lower privacy bound ρ(X, M). The main idea of the perturbation method is to use the size of a stochastic random matrix to strike a balance between privacy and utility.
The focus next is on the data mining utility of statistical information, which is considered a difficult aspect of the privacy-preserving data mining problem. A framework is provided to exploit randomness in the stochastic matrix and identify a distribution approximation that meets both privacy and data utility requirements. The main idea is similar to the Monte Carlo method developed by John von Neumann. It exploits the randomness in a stochastic matrix to identify the stochastic random matrix M that satisfies the dual requirements:

1) the α-trace-preservation condition |tr(X) − tr(X̂)|/tr(X) ≤ α, and

2) the lower privacy bound condition ρ(X, M) ≥ ρ.

These combined conditions form a permissible (ρ, α) privacy-utility area.
Returning to Algorithm 1 (reproduced below in Table IV), Algorithm 1 provides a framework to exploit randomness in a stochastic matrix and identify a distribution approximation that meets both privacy and data utility requirements.
The steps in Algorithm 1 between lines 2-8 form the processing methodology, as sketched in code after this paragraph. The step at line 3 generates a stochastic random matrix M. The step at line 5 computes the corresponding Moore-Penrose pseudoinverse M^+ and the distribution approximation X̂. The steps at lines 6-7 calculate the actual α-trace-preservation and privacy bound. The step at line 8 checks whether the obtained (ρ, α) privacy-utility point falls in the given permissible (ρ, α) privacy-utility area. The whole process is repeated until the generated distribution approximation X̂ falls within the prescribed (ρ, α) privacy-utility condition.
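Table IV is not reproduced here; the following Python sketch is a hedged reconstruction of the loop just described. The rejection-sampling structure and the two acceptance tests follow the text, while the function name, parameter defaults, and example data are illustrative assumptions.

import numpy as np

def find_perturbation(X, m, alpha, rho_min, max_iter=10_000, seed=0):
    # Monte Carlo search: draw stochastic random matrices until the perturbed
    # dataset lands in the permissible (rho, alpha) privacy-utility area.
    rng = np.random.default_rng(seed)
    n = X.size
    for _ in range(max_iter):
        M = rng.uniform(0, 1, size=(m, n))              # line 3: stochastic random matrix
        M /= M.sum(axis=1, keepdims=True)
        X_hat = np.linalg.pinv(M) @ (M @ X)             # line 5: pseudoinverse and approximation
        a = abs(X.sum() - X_hat.sum()) / X.sum()        # line 6: actual trace preservation
        rho = 1 - np.linalg.norm(X_hat) / np.linalg.norm(X)  # line 7: lower privacy bound
        if a <= alpha and rho >= rho_min:               # line 8: permissible-area test
            return M, X_hat
    raise RuntimeError("no matrix found; relax (rho, alpha) or raise max_iter")

X = np.random.default_rng(1).uniform(100, 1000, size=20)
M, X_hat = find_perturbation(X, m=10, alpha=0.05, rho_min=0.02)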
To illustrate, we use the original data matrix X with size 20, generated by:
 1) normal distribution (N(400, 20)),
 2) uniform distribution (uniform(100, 1000)), and
 3) Zipf distribution (α=2), respectively.
For each generated dataset, a stochastic random matrix M is used to uniformly compress the original data matrix X into a low-dimensional sketch (Y = MX), generate the perturbed dataset X̂ = M^+Y from the sketch, and calculate a privacy-utility point.
Turning now to the observed privacy-utility scatter results, the size of a random matrix is an effective means of controlling the privacy-utility scatter area. For example, a matrix of size 5 by 20 has a larger privacy-utility scatter area than a matrix of size 15 by 20. Observe that the original dataset has a significant impact on shaping the privacy-utility scatter pattern. The privacy and utility relations appear to be tightly correlated for the Gaussian distribution, and loosely correlated for the uniform and Zipf distributions. This means that for certain distributions with tightly correlated privacy and utility, privacy and data utility are in essence irreconcilable with each other. In contrast, good data privacy and data utility can coexist when the privacy and utility are loosely coupled. Observe that for the uniform and Zipf distributions, a percentage of random matrices yields perturbed datasets having both good privacy and utility.
It can be seen that the choice of size of a stochastic random matrix and the distribution of the original data matrix X are important factors affecting the privacy-utility scatter area. A small permissible privacy-utility area entails an added computational cost. For example, of a total of 10,000 randomly generated matrices for the normal distribution, only a small fraction yields perturbed datasets that fall within a small permissible privacy-utility area.
A new paradigm of privacy-preserving data mining is now discussed, which is toward the opposite end of the global-level privacy enforcement presented in Algorithm 1. This new paradigm is called a personalized privacy breach threshold (“PPBT”). It allows a data owner to specify a privacy threshold on the respective private data instead of the global-level privacy specification in Algorithm 1. By Definition 2, a privacy breach occurs if P(x, x̂, ε) = 0, where ε is a PPBT value for private data x and x̂ is an estimate of the private data x.
To address personalized privacy concerns, two variants of the PPBT method are considered, namely a fixed PPBT and a variable PPBT. The fixed PPBT method uses one single privacy breach threshold for all individual data in the original data matrix X, while the variable PPBT method assumes the individual privacy breach thresholds follow a certain distribution.
A second method, designated Algorithm 2 (as set forth below in Table V), addresses the PPBT issue, where P is an array containing the PPBT values specified by the respective data owners, with unspecified entries initialized to 0.
Line 9 of Algorithm 2 computes the actual PPBT value, and line 10 checks whether the obtained individual PPBT value is less than the specified PPBT. If the answer is YES, the method breaks out of the loop. Otherwise, the method continues checking the next entry. To ensure strict compliance with all individual privacy thresholds, a breach of privacy during the loop results in a regeneration of the stochastic matrix, as sketched below. Notice that Algorithm 2 is presented for brevity and clarity rather than for performance efficiency.
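Table V is likewise not reproduced here; the following sketch is a hedged reconstruction of the PPBT compliance loop as described, with illustrative names, sizes, and thresholds. Definition 2 supplies the breach test: an estimate within the owner-specified vicinity of the true value constitutes a breach.

import numpy as np

def satisfies_ppbt(X, X_hat, P):
    # Lines 9-10 as described above: a breach occurs if |x - x_hat| < eps * x.
    for x, x_hat, eps in zip(X, X_hat, P):
        if abs(x - x_hat) < eps * x:
            return False               # breach: regenerate the stochastic matrix
    return True

def find_ppbt_perturbation(X, m, P, max_iter=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n = X.size
    for _ in range(max_iter):
        M = rng.uniform(0, 1, size=(m, n))
        M /= M.sum(axis=1, keepdims=True)
        X_hat = np.linalg.pinv(M) @ (M @ X)
        if satisfies_ppbt(X, X_hat, P):
            return M, X_hat
    raise RuntimeError("no compliant matrix found within max_iter draws")

X = np.random.default_rng(2).uniform(100, 1000, size=20)
P = np.full(20, 0.01)                  # fixed PPBT: a 1% threshold for every record
M, X_hat = find_ppbt_perturbation(X, m=10, P=P)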
Turning now to an illustrated method of preserving privacy of data in a dataset, at a step or module 1130, the dataset is operated on with the pseudoinverse of the random matrix to produce a decompressed dataset. At a step or module 1135, if an aggregated-information condition for the decompressed dataset is not satisfied, the method returns to step or module 1110 to form another random matrix. Otherwise, at step or module 1140, if a lower privacy bound condition for the decompressed dataset is not satisfied, the method returns to step or module 1110 to form another random matrix. Otherwise, at step or module 1145, the result of a query on the decompressed dataset is produced. In an embodiment, the dataset is publicly accessible over the Internet. The method ends at step or module 1150.
Scalability and local and global privacy enforcement are further topics addressed herein. The scalability issue stems from the observation that computation of the Moore-Penrose inverse is not a trivial task, as it may involve a singular value decomposition, which could be expensive and time-consuming for a large-size matrix. As a result, in a practical setting, computing the Moore-Penrose inverse of a large-size matrix could be problematic.
As seen from the relation:
1 − ∥X̂∥_2/∥X∥_2 ≤ ∥X − X̂∥_2/∥X∥_2 ≤ ∥I − M^+M∥_2,
both the lower privacy bound and α-trace preservation are global-level metrics, whereas the personalized privacy breach threshold is an individual privacy metric. A global-level metric often may not be applied consistently across different levels. Questions naturally arise, such as how to handle a large dataset and how to apply the global-level privacy policy consistently across the local and global levels.
A bottom-up privacy enhancement process is introduced for efficiently handling a large dataset and consistent application of privacy policy at local and global levels. Let the original data matrix X be an original dataset of size kn. We partition the original data matrix X into k equal-size blocks (X_1, . . . , X_k). For each block X_i, a stochastic random matrix M_i is identified such that the distribution approximation X̂_i = M_i^+M_iX_i satisfies the privacy and utility requirements for that block. The distribution approximations X̂_i for the blocks X_i can be pieced together to form the distribution approximation X̂. The problem is thus formulated as:

Y_i = M_iX_i, 1 ≤ i ≤ k,

where M_i, 1 ≤ i ≤ k, is an m-by-n stochastic submatrix on the diagonal of the random matrix M, the compressed data matrix Y_i is an m-by-1 matrix, and the block X_i is an n-by-1 matrix.
Thus, the compression ratio is m/n. Correspondingly:

X̂_i = M_i^+Y_i, 1 ≤ i ≤ k,

where M_i^+, 1 ≤ i ≤ k, is an n-by-m submatrix on the diagonal of M^+ (the Moore-Penrose inverse of M), and the distribution approximation X̂_i is an n-by-1 matrix.
The efficiency benefit of this dataset partitioning is quantitatively analyzed as set forth below. Suppose that the overhead for computing the distribution approximation X̂ is O(n^α) for an m-by-n stochastic matrix of size m ≤ n. When partitioning the original data matrix X into k data blocks, the overhead of computing the distribution approximation becomes k·O((n/k)^α) = O(n^α/k^{α−1}).
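A sketch of the partition scheme (block counts and sizes are illustrative):

import numpy as np

def blockwise_perturb(X, k, m, seed=0):
    # Split X into k equal blocks, perturb each with its own m-by-(n/k)
    # row-stochastic matrix, and piece the blocks back together.
    rng = np.random.default_rng(seed)
    out = []
    for X_i in np.split(X, k):                          # blocks X_1, ..., X_k
        M_i = rng.uniform(0, 1, size=(m, X_i.size))
        M_i /= M_i.sum(axis=1, keepdims=True)
        out.append(np.linalg.pinv(M_i) @ (M_i @ X_i))   # X_hat_i = M_i^+ M_i X_i
    return np.concatenate(out)                          # X_hat = (X_hat_1, ..., X_hat_k)

X = np.random.default_rng(4).uniform(100, 1000, size=100)  # kn = 100 records
X_hat = blockwise_perturb(X, k=5, m=15)                    # k = 5 blocks of n = 20
print(abs(X.sum() - X_hat.sum()) / X.sum())                # global trace error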
Theorem 3. Let the original data matrix X = (X_1, . . . , X_k)^T and the distribution approximation X̂ = (X̂_1, . . . , X̂_k)^T, where X̂_i = M_i^+M_iX_i. Let β_i be the prescribed privacy bound for block X_i, i.e., ∥X̂_i∥_2 ≤ β_i∥X_i∥_2, 1 ≤ i ≤ k. Then:

∥X̂∥_2 ≤ β∥X∥_2, where β = max{β_1, . . . , β_k}.
Theorem 3 implies that the privacy bound on a dataset is bounded by the maximum privacy bound on its data blocks.
Theorem 4. Let the original data matrix X = (X_1, . . . , X_k)^T and the distribution approximation X̂ = (X̂_1, . . . , X̂_k)^T, where X̂_i = M_i^+M_iX_i. If M_i^+M_i is α-trace-preserving with respect to X_i for 1 ≤ i ≤ k, then M^+M is α-trace-preserving with respect to the original data matrix X. Theorem 4 asserts that the α-trace-preservation on the dataset is also bounded by that on its constituent partition blocks. The two theorems show the relationship between local and global privacy bounds when a partition scheme is employed. They ensure that satisfying both the privacy and utility requirements on each constituent data block supports privacy and utility compliance at the global level.
A perturbation model has been presented for providing a provably secure disclosure control over individual records while providing an accurate estimation of aggregate statistics. The new process comprises a low-dimensional sketch construction and a high-dimensional approximation. In the sketch construction, a random matrix M is used to uniformly compress an original data matrix X into a low-dimensional sketch Y = MX. The goal is to reduce the possibility of exact recovery, thereby ensuring a level of privacy protection. The goal of the distribution approximation is to find an X* such that MX = MX* and ∥X*∥_2 is minimal. This seeks the best low-dimensional descriptions of high-dimensional data. The defining trait of the perturbation model is to exploit randomness in stochastic random matrices to reconcile two inherently contradictory factors, privacy and data utility.
An advantage of the perturbation method is its retention of the database schema, which in turn preserves the database access environment. In contrast, conventional disinformation strategies often use an interval to replace a single numeric value under a sensitive attribute. For example, the age 45 may be replaced by the interval [20, 60]; such a type mismatch requires a substantial schema change to accommodate the difference between the original and perturbation tables, which implies an enormous amount of time for testing and development.
In addition, the perturbation method supports personalized privacy breach control by allowing a data owner to specify a preferred privacy threshold. It was shown hereinabove that the normalized privacy bound is equivalent to the square root of the Rayleigh-Ritz quotient of I − M^+M. It was also shown how to generate a perturbed dataset that complies with specified privacy and utility conditions, including fixed and variable personalized privacy breach thresholds. The theoretical framework, together with a comprehensive experimental study based on real-life datasets, has been investigated.
Thus, an apparatus, system and method have been introduced for preserving privacy of data in a dataset in a database with a number n of entries. In one embodiment, the apparatus includes memory including computer program code configured to, with a processor, cause the apparatus to form a random matrix of dimension m by n, wherein m is less than n, normalize columns of the random matrix, operate on the dataset with the random matrix to produce a compressed dataset, form a pseudoinverse (e.g., a Moore-Penrose pseudoinverse) of the random matrix, and operate on the dataset with the pseudoinverse of the random matrix to produce a decompressed dataset.
The memory and the computer program code are further configured to, with the processor, cause the apparatus to form another random matrix if an aggregated-information condition of the decompressed dataset is not satisfied, or if a lower privacy bound condition of the decompressed dataset is not satisfied. The memory and the computer program code are also configured to, with the processor, cause the apparatus to produce a result of a query on the decompressed dataset. In one embodiment, the elements of the random matrix are formed with random numbers independently generated using a Gaussian probability density. In another embodiment, the random matrix is a contraction with a Euclidean norm less than unity, and all entries are nonnegative. The dataset may be publicly accessible over the Internet.
For a better understanding of the general topic of privacy protection for databases, see the following references, which are incorporated herein by reference:
 Charu C. Aggarwal, “A Survey of Randomization Methods for Privacy Preserving Data Mining,” in Privacy-Preserving Data Mining: Models and Algorithms (Charu C. Aggarwal and Philip S. Yu, eds.), Springer, 2008, pp. 138-181,
 D. Agrawal and C. C. Aggarwal, “On the Design and Quantification of Privacy-Preserving Data Mining Algorithms,” Proceedings of the ACM SIGMOD Conference, 2002,
 R. Agrawal and R. Srikant, “Privacy-Preserving Data Mining,” Proceedings of the ACM SIGMOD Conference, 2000,
 R. Agrawal, R. Srikant, and D. Thomas, “Privacy Preserving OLAP,” Proceedings of the ACM SIGMOD Conference, 2005,
 R. J. Bayardo and R. Agrawal, “Data Privacy through Optimal k-Anonymization,” ICDE Conference, 2005,
 Adi Ben-Israel and Thomas N. E. Greville, “Generalized Inverses: Theory and Applications,” Springer-Verlag, 2003,
 I. Bhattacharya and L. Getoor, “Query-Time Entity Resolution,” J. Artificial Intelligence Research 30 (2007), 621-657,
 C. Blake and C. Merz, “UCI Repository of Machine Learning Databases,” 1998, http://archive.ics.uci.edu/ml/datasets.html/,
 Yitao Duan, “P4P: A Practical Framework for Privacy Preserving Distributed Computation,” Ph.D. thesis, UC Berkeley, 2007,
 G. T. Duncan and R. W. Pearson, “Enhancing Access to Microdata While Protecting Confidentiality: Prospects for the Future,” Statistical Science 6 (1991), no. 3, 219-232,
 A. Friedman and A. Schuster, “Data Mining with Differential Privacy,” ACM SIGKDD, 2010,
 D. C. Howe, H. Nissenbaum, and V. Toubiana, TrackMeNot, 2011, http://cs.nyu.edu/trackmenot/,
 K. Liu, C. Giannella, and H. Kargupta, “An Attacker's View of Distance Preserving Maps for Privacy Preserving Data Mining,” Proceedings of the 10th European Conference on Principles and Practice of Knowledge Discovery in Databases, September 2006,
 A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam, “l-Diversity: Privacy Beyond k-Anonymity,” ICDE, 2006,
 Albert W. Marshall and Ingram Olkin, “Inequalities: Theory of Majorization and Its Applications,” Academic Press, New York, 1979,
 A. Meyerson and R. Williams, “On the Complexity of Optimal k-Anonymity,” ACM PODS Conference, 2004,
 R. Penrose, “A Generalized Inverse for Matrices,” Proceedings of the Cambridge Philosophical Society 51 (1955), 406-413,
 R. Penrose and J. A. Todd, “On Best Approximate Solutions of Linear Matrix Equations,” Proceedings of the Cambridge Philosophical Society 52 (1956), 17-19,
 V. Rastogi, S. Hong, and D. Suciu, “The Boundary Between Privacy and Utility in Data Publishing,” VLDB, 2007, pp. 531-542,
 Reputation Management Consultants, “Online Reputation Repair and Management,” 2011, http://www.reputationmanagementconsultants.com/,
 P. Samarati and L. Sweeney, “Protecting Privacy When Disclosing Information: k-Anonymity and Its Enforcement through Generalization and Suppression,” IEEE Symp. on Security and Privacy, 1998,
 L. Sweeney, “k-Anonymity: A Model for Protecting Privacy,” International Journal on Uncertainty, Fuzziness and Knowledge-based Systems 10 (2002), no. 5, 557-570,
 Yufei Tao and Xiaokui Xiao, “Personalized Privacy Preservation,” in Privacy-Preserving Data Mining: Models and Algorithms (Charu C. Aggarwal and Philip S. Yu, eds.), Springer, 2008, pp. 461-485,
 S. E. Whang and H. Garcia-Molina, “Managing Information Leakage,” Fifth Biennial Conference on Innovative Data Systems Research (CIDR), 2011,
 Y. Zhu and L. Liu, “Optimal Randomization for Privacy Preserving Data Mining,” ACM KDD Conference, 2004,
 Rob Hall and Stephen E. Fienberg, “Privacy-Preserving Record Linkage,” Privacy in Statistical Databases, pp. 269-283, 2010, and
 S. E. Fienberg and J. McIntyre, “Data Swapping: Variations on a Theme by Dalenius and Reiss,” in Proceedings of Privacy in Statistical Databases, pp. 14-27, 2004.
As described above, the exemplary embodiment provides both a method and corresponding apparatus consisting of various modules providing functionality for performing the steps of the method. The modules may be implemented as hardware (embodied in one or more chips including an integrated circuit such as an application specific integrated circuit), or may be implemented as software or firmware for execution by a computer processor. In particular, in the case of firmware or software, the exemplary embodiment can be provided as a computer program product including a computer readable storage structure embodying computer program code (i.e., software or firmware) thereon for execution by the computer processor.
Although the embodiments and their advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope thereof as defined by the appended claims. For example, many of the features and functions discussed above can be implemented in software, hardware, or firmware, or a combination thereof. Also, many of the features, functions, and steps of operating the same may be reordered, omitted, added, etc., and still fall within the broad scope of the various embodiments.
Moreover, the scope of the various embodiments is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized as well. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.