Regulating Computational Inference?

By: Sonny Zulhuda

I just read this interesting piece by the New York Times technology columnist Zeynep Tufekci, entitled "Think You're Discreet Online? Think Again." The article was written more than a year ago, but I feel it is worth reiterating her message for my readers here. Here are a few portions to take up.

At the beginning of the article, Zeynep reminds us how mistaken we have been about how safe our personal information really is out there.

“Because of technological advances and the sheer amount of data now available about billions of other people, discretion no longer suffices to protect your privacy. Computer algorithms and network analyses can now infer, with a sufficiently high degree of accuracy, a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health.”

Zeynep refers to the big data all around us: from our own personal data to the data of people around us, those who share the same house, the same family dining table, the same club or association, the same school or workplace. Worse, even the data of people who merely share the same road and city with us can be just as influential in determining how well our privacy is kept. This is because we live together, and whatever data other people decide to share or withhold carries inferential implications for us.

Tools and algorithms have been designed and built to let people and researchers make use of those data. The data are gathered, unfortunately, not only from external sources but also from sources within us. Our personal data and communications on social media, as well as our movements, location data and other signals readily readable from our mobile phones, are all prone to becoming the subject of computational inference. Not to forget our faces captured by cameras and facial recognition technologies, as well as our transaction data. None of it is spared.

(At this point, all of this reminds me of the nightmares and challenges Shea Salazar and her family have to face in the ongoing TV series neXt!)

Back to Laptop….. (‘Tukul mode’ ON)

These data, when fed to the right algorithms and machine learning models, can become an impressive tool: to assess symptoms of depression, for example, and thereby to prevent depressive conduct including suicidal behaviour. Medical researchers and practitioners can use this to provide much-needed preventive action. But conversely, it can be used by others, e.g. advertisers selling medical and medicinal products, for whom it is a promising green pasture to make money from.
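To make the idea of computational inference a little more concrete, here is a minimal, purely illustrative sketch in Python. It is my own assumption for illustration only, not anything from Zeynep's article or any real product: a simple model is trained on synthetic "behavioural" signals and then guesses an attribute that was never disclosed.

```python
# Illustrative sketch only: shows how an undisclosed attribute can be
# statistically inferred from behavioural signals a person never intended
# to reveal. All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical behavioural features (e.g. posts per day, average posting
# hour, share of night-time activity). None of them state the attribute.
n = 1000
features = rng.normal(size=(n, 3))

# A synthetic "undisclosed" attribute, loosely correlated with the features.
label = (features @ np.array([0.8, -0.5, 1.2]) + rng.normal(scale=0.5, size=n)) > 0

X_train, X_test, y_train, y_test = train_test_split(features, label, random_state=0)

# A plain logistic regression is enough to beat chance on held-out people.
model = LogisticRegression().fit(X_train, y_train)
print("inference accuracy on held-out people:", model.score(X_test, y_test))
```

The point is only that ordinary correlations in mundane data are enough for such a guess to beat chance; real systems, of course, draw on far richer data than this toy example.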

Worse, as Zeynep highlighted, “such tools are already being marketed for use in hiring employees, for detecting shoppers’ moods and predicting criminal behavior.”

In public life, there is even more reason to worry. Computational inference has been, and will continue to be, used for surveillance. We know that surveillance always hides in the shadow of public need and order. In many jurisdictions, surveillance is not easy for citizens or parliaments to check; the executive controls its entire execution. Therefore, the use of computational inference tools for the surveillance of suspected criminals, terrorists or spies will go on and on for as long as states keep building their capacity and capability to upgrade their surveillance methods.

In my view, Zeynep has successfully alerted us to both the positive and negative possibilities of computational inference. What to do next? The writer has a few suggestions.

“What is to be done? Designing phones and other devices to be more privacy-protected would be a start, and government regulation of the collection and flow of data would help slow things down. But this is not the complete solution. We also need to start passing laws that directly regulate the use of computational inference: What will we allow to be inferred, and under what conditions, and subject to what kinds of accountability, disclosure, controls and penalties for misuse?”

So, Zeynep offers a few things to be done in parallel. Firstly, from the perspective of architectural re-engineering, we need to “force” device designers to rebuild their technologies so that they enhance rather than reduce privacy protection. In other words, to have PbD (“privacy by design”) embedded in devices and systems. The route to that is, in fact, for technology designers to become more privacy-aware, privacy-oriented and privacy-acculturated.

In parallel, some legal and regulatory re-engineering will be crucial. As the only social instrument with a realistic power to generate fear and force compliance, law needs to play the role it plays best: regulating! We need a law that can disrupt the disruptive, if that is really what it takes to give our people some protection. In the near future, we need to think more seriously about laws that regulate computational inference, or at least laws that set some standard of behaviour for the protection of people’s privacy and personal information.

Oh, by the way, it is good to note that Singapore recently issued its updated “Model AI Governance Framework (Second Edition)”, released in January 2020 at the World Economic Forum Annual Meeting in Davos, Switzerland. It makes a nice follow-up read after this New York Times article!

Good food for thought from Zeynep Tufekci.
