April 2023

How to evaluate the utility of synthetic data?

Synthetic data is an increasingly popular tool for data analysis and machine learning. By generating new records that mimic the statistical properties of the original data without replicating them, synthetic data makes it possible to exploit the potential of a dataset without compromising individuals' privacy.

However, to ensure that synthetic data is useful and effective, evaluating its utility is important. In this article, we'll explore how to evaluate the utility of synthetic data and ensure that it can be used effectively for analysis and modeling.

To evaluate the level of information retained in synthetic data, we use utility metrics that assess two aspects: consistency at the individual level and consistency at the population level.

Consistency at the individual level means that logical rules should be respected. This criterion is dataset-dependent, so it will not be developed further in this article.

Consistency at the population level means statistical similarity between original and synthetic data. We assess this similarity at three levels:

  • Comparing variable distributions (univariate analysis)
  • Comparing dependencies across the variables (bivariate analysis)
  • Comparing the general information of the data (multivariate analysis)

In this article, we will describe how to evaluate the retention of statistical information at the population level. This analysis is global and not specific to the use case. For specific use cases, it is recommended to compare original and synthetic data based on the target analysis.

There are as many ways to evaluate utility as there are possible analyses. Here, we focus on a representative sample of utility-retention metrics.

 

Comparing variable distributions

[Figure: comparison of original and Avatar variable distributions]

For each variable of the dataset, we compare its distribution in the original dataset (in grey) and in the synthetic dataset (in green). The Hellinger distance can be computed between the two distributions, yielding a score between 0 and 1: 0 means that the two distributions are identical, while 1 means that they share no common bins.
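As a concrete illustration, here is a minimal sketch (in Python with NumPy, not our internal tooling) of how the Hellinger distance between two observed samples of a numeric variable could be computed. The number of bins is an assumption and should match the binning used for plotting:

```python
import numpy as np

def hellinger_distance(original: np.ndarray, synthetic: np.ndarray, bins: int = 10) -> float:
    """Hellinger distance between the binned distributions of two samples.

    Returns a value in [0, 1]: 0 means identical histograms,
    1 means the histograms share no common bins.
    """
    # Bin both samples on a common grid so the histograms are comparable.
    edges = np.histogram_bin_edges(np.concatenate([original, synthetic]), bins=bins)
    p, _ = np.histogram(original, bins=edges)
    q, _ = np.histogram(synthetic, bins=edges)
    # Normalize counts into probability vectors.
    p = p / p.sum()
    q = q / q.sum()
    # H(p, q) = (1 / sqrt(2)) * ||sqrt(p) - sqrt(q)||_2
    return float(np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2))

# Toy usage: two samples drawn from the same distribution give a small distance.
rng = np.random.default_rng(0)
print(hellinger_distance(rng.normal(size=1000), rng.normal(size=1000)))
```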

In the figure below, we can see small Hellinger distances, which reveal that the Avatar data distributions are similar to the original distributions.

 

[Figure: Hellinger distances between original and Avatar variable distributions]

In other cases, we can also use statistical tests, such as the Kolmogorov-Smirnov test or the chi-square test, to assess whether the original and Avatar samples are drawn from the same underlying distribution.
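As a sketch of what such tests could look like with SciPy (the stand-in samples below are assumptions to be replaced with real columns from the original and Avatar datasets):

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp, chi2_contingency

rng = np.random.default_rng(0)

# Kolmogorov-Smirnov test for a continuous variable: a large p-value means we
# cannot reject that both samples come from the same distribution.
original_num = rng.normal(size=500)
avatar_num = rng.normal(size=500)
stat, p_value = ks_2samp(original_num, avatar_num)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")

# Chi-square test for a categorical variable, based on the contingency table of
# category counts in each dataset.
original_cat = pd.Series(rng.choice(["A", "B", "C"], size=500))
avatar_cat = pd.Series(rng.choice(["A", "B", "C"], size=500))
counts = pd.DataFrame({"original": original_cat.value_counts(),
                       "avatar": avatar_cat.value_counts()})
chi2, p_value, dof, _ = chi2_contingency(counts.to_numpy())
print(f"chi2 = {chi2:.3f}, p-value = {p_value:.3f}")
```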

 

Comparing dependencies across the variables

Evaluating variable distributions alone is not sufficient. If we generated synthetic data by drawing each variable independently, the distributions would be preserved but the correlations across variables would be destroyed, and the synthetic data might not be useful for analyses or modeling tasks that depend on those correlations. Therefore, in addition to distribution comparisons, it is also important to compare variable dependencies, or correlations. This is usually done with the Pearson correlation coefficient, which evaluates the linear relationships between numerical variables.
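The short sketch below (Python with pandas; the two-column dataset is a made-up stand-in) illustrates both points at once: shuffling each column independently preserves every marginal distribution exactly, yet the gap between the Pearson correlation matrices reveals that the dependency is destroyed.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in original dataset with two correlated numeric variables.
x = rng.normal(size=300)
original_df = pd.DataFrame({"x": x, "y": 2 * x + rng.normal(size=300)})

# Independent per-column shuffling: each marginal distribution is kept
# exactly, but the dependency between x and y is broken.
independent_df = pd.DataFrame(
    {col: rng.permutation(original_df[col].to_numpy()) for col in original_df}
)

# Compare Pearson correlation matrices; good synthetic data should keep the
# mean absolute gap close to 0, unlike the independent draw above.
gap = (original_df.corr() - independent_df.corr()).abs().to_numpy().mean()
print(original_df.corr(), independent_df.corr(), f"gap = {gap:.3f}", sep="\n\n")
```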

Here, we see that the Avatar data preserves the correlation matrix of the original data.

[Figure: correlation matrices of the original and Avatar data]

This analysis shows that the Avatar method preserves variable dependencies (bivariate analysis): weak correlations stay weak through the anonymization, while the strongest stay strong. Other metrics, such as mutual information, can be computed to evaluate bivariate utility retention for categorical data.
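For instance, here is a minimal sketch of mutual information between two categorical variables using scikit-learn (the stand-in columns are assumptions; the same quantity would be computed within the Avatar data and compared):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)

# Stand-in categorical columns with a built-in dependency.
original_a = rng.choice(["low", "high"], size=500)
original_b = np.where(original_a == "low",
                      rng.choice(["yes", "no"], size=500, p=[0.8, 0.2]),
                      rng.choice(["yes", "no"], size=500, p=[0.3, 0.7]))

# Mutual information between the two variables within the original data...
mi_original = mutual_info_score(original_a, original_b)
print(f"MI(original) = {mi_original:.3f}")
# ...should be close to the same quantity within the Avatar data:
# mi_avatar = mutual_info_score(avatar_a, avatar_b)
```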

 

Comparing the general information of the data

[Figure: projection of the original and Avatar data on the first factorial plane]

Preserving the general information contained in a dataset is a main concern of anonymization. To evaluate multidimensional utility, we can use factor analysis methods: PCA for numerical data, MCA for categorical data, and FAMD for mixed data. These methods let us study the relationships between the variables and the individuals of the dataset in a reduced number of dimensions.

The visualization illustrates the similarity between the original data (in grey) and the Avatar data (in green). Relationships between variables and the clusters of the dataset are maintained in the Avatar data.
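A minimal sketch of such a comparison, assuming purely numerical data and scikit-learn's PCA (FAMD and MCA for mixed or categorical data would follow the same pattern): the projection is fitted on the original data only, and both datasets are then projected into the same factorial plane.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Stand-in numeric datasets; replace with the real original and Avatar tables.
rng = np.random.default_rng(0)
original_df = pd.DataFrame(rng.normal(size=(200, 5)),
                           columns=[f"v{i}" for i in range(5)])
avatar_df = original_df + rng.normal(scale=0.1, size=(200, 5))

# Fit the projection on the original data only, then project both datasets
# into the same plane so they can be compared visually.
pca = PCA(n_components=2).fit(original_df)
proj_original = pca.transform(original_df)
proj_avatar = pca.transform(avatar_df)

plt.scatter(proj_original[:, 0], proj_original[:, 1], color="grey", label="original")
plt.scatter(proj_avatar[:, 0], proj_avatar[:, 1], color="green", label="avatar", alpha=0.6)
plt.legend()
plt.show()
```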

 

In the figure below, we can see that the information carried by the preanti variable is maintained through the anonymization.

[Figure: projection of the original and Avatar data colored by the preanti variable]

 

In summary, it is important to verify that synthetic data preserves the useful information of the original data. This evaluation is done with utility metrics. By ensuring that synthetic data is consistent at both the individual and population levels, we can be confident that it can effectively replace original data for analysis and modeling purposes.

Have a look at our technical documentation to see an example of an anonymization report which evaluates the privacy and utility of the Avatar data. 

Want to understand more? Read our scientific article published in npj Digital Medicine (a Nature Portfolio journal), which demonstrates the utility conservation and privacy protection of the Avatar method in two medical use cases.

 

Written by Julien Petot & Alban-Félix Barreteau
© Octopize 2022