More Musings on Online Privacy

Have you seen this recent viral article on the information social media companies collect about you?

Are you ready?

[Image: PrivacyArticlePic.jpg]

Trust

In one of the first replies to this article (no, I didn't read them all) the writer asks, "Do you really trust Google?" My answer is "yes, completely." Google is a for-profit information company, and I trust it to collect every scrap of information it can about me and the rest of the world. I trust it to profit from that information in any way it can. A company, like a robot or an AI, has no human morality, no conscience, no moral compass. Those characteristics have to come from the humans who control it, or who abdicate that control.

"Trust" does not stand alone as a concept. To be meaningful it has to be accompanied by answers to the questions: trust whom? To do what? Under what circumstances? And it is we humans who must supply those answers, thoughtfully.

Thoughts Left Out

What this article doesn't say explicitly is that turning on the security controls doesn't stop the tracking. The next step toward privacy is to be very specific about what actions to take. For example, if I turn off wi-fi on my computer, is it really offline, or just no longer reporting to me? It isn't in Google's interest to tell us these things. The company's advertisers (other profit-making companies) want us to expose ourselves so that they can entice us to buy their products. They don't want us to control their access to information about us, and they don't want us to control our impulse to buy. Why would they make it easy for us to protect that information? Why would we "trust" them to do so?

I've been teaching since 1975 that the only way to stop a robot (or computer) is to detach it from its power source. That means, if it's, say, solar powered, getting inside it and snipping the wires between the charger, the batteries, and the CPU. The same applies if you want to stop a computer program designed to collect information that lets it present us with images calculated to trigger behavior we might later regret. Asking a company to act against its self-interest seems unlikely to succeed. No matter how many apologies or assurances Facebook publishes, its survival depends on keeping your information flowing in. In the end, I don't think greater privacy controls will solve the problem. Rather, we need to accept responsibility for our responses to those oh-so-effective triggers.

What is Privacy?

The other thesis the article misses is that our whole concept of, and expectation for, privacy has changed over the last three centuries. "Privacy" used to be the cultural practice of averting one's eyes, rather than today's assumption that we ought to be able to prevent direct access to information. If you were a servant in an upper-class household, a clerk in a bank, or perhaps a resident of a multifamily Native American longhouse, you saw a lot of things you never talked about. It's only recently that a significant number of humans have lived in conditions where information we now expect to be "private" was not readily available to a broad swath of neighbors, relatives, and tradespeople. The more I confront this topic, the more convinced I become that we will find relief from our distress in cultural adaptation, not technical fixes.

Maybe some of today’s youngsters have got a better idea. Just take naked pictures of yourself and post them.

Filed under Artificial Intelligence and Stupidity
