Over the past decade, calls for better measures to protect sensitive, personally identifiable information have blossomed into what politicians like to call a "hot-button issue." Certainly, privacy violations have become rampant, and people have grown keenly aware of just how vulnerable they are. When it comes to potential remedies, however, proposals have varied widely, leading to bitter, politically charged arguments. So far, the chief result has been bureaucratic policies that satisfy almost no one and infuriate many.
Now, into this muddled picture comes differential privacy. First formalized in 2006, it is an approach based on a mathematically rigorous definition of privacy, one that makes it possible to state and prove the guarantees a system offers against re-identification. While differential privacy has been accepted by theorists for some time, its implementation has turned out to be subtle and tricky, and practical applications are only now starting to become available. To date, differential privacy has been adopted by the U.S. Census Bureau, along with a number of technology companies, but what this means and how these organizations have implemented their systems remain a mystery to many.
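To make the guarantee a little more concrete, here is a minimal sketch of the Laplace mechanism, the textbook way to answer a counting query with ε-differential privacy: compute the true count, then add noise drawn from a Laplace distribution whose scale is the query's sensitivity divided by ε. The function names are illustrative, not drawn from any particular library.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse CDF.
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def private_count(values, predicate, epsilon):
    # A counting query has sensitivity 1: adding or removing one
    # person changes the count by at most 1, so Laplace noise with
    # scale 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of ε mean stronger privacy but noisier answers; because only the noisy count is released, no single individual's presence in the data can be inferred with confidence.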
Looks interesting. That said, it appears the problem remains. The data is still collected, stored, and sometimes made available to third parties for mutual business gain. Such data at rest could be compromised by cyberattacks. I fully understand that things become more complicated with medical data; researchers in most cases need valid data without random injections. In the case of advertising to potential customers, selling the data could be quite profitable, and it seems the companies collecting it will not be using the techniques described in the article in order to better target people. BTW, I enjoyed reading the article. Hopefully research will continue, better techniques will be found, and they will become the norm for organizations that collect and use private data.