Garrett (garote) wrote,

Back doors and encryption

In physical communications, the level of personal privacy and ease of interception form an inverse relationship, and we all instinctively understand this. A shout is less private than a whisper. A wave in a crowd is less private than a touch on your shoulder. Skywriting is not at all private. An unvoiced thought is the most private of all.

These days we make personal communications on devices that operate on a level beyond the basic physical one we all know. We are also beyond the "very directed form of shouting" that the telephone and radio started out as, and into something else. These new devices are things that we use in ways that feel private - tapping silently on them with fingers, speaking into them behind closed doors, turning off the display when we're not looking at it - but that feeling of physical privacy is, of course, an illusion. Almost everything we do on the device relies on sending data over a wireless network that we cannot see, but that reaches all the way around the Earth and up into space. That network also has a memory, extending back some unknown span of time into the past. Clearly the ease of interception may not match the level of privacy we instinctively expect.

The best tool we have (among many) to impose privacy on these devices is encryption. We leverage encryption to make these communications secure the way we expect them to be, the way they often already seem to be to novice users. But by making these devices harder to tap into, are we also making the world a more dangerous place?

The government is already allowed to force a phone company to tap into the communications of a person using its network, by convincing a judge that the act is necessary to pursue a case. End-to-end encryption of the content passing over the network denies them this ability. Should the government be allowed to sabotage end-to-end encryption? Can the government make a case that a truly secure communications network cannot be allowed to exist?
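To make concrete why end-to-end encryption denies the carrier (and thus a wiretap served on the carrier) access to content, here is a toy sketch in Python. It uses a deliberately tiny Diffie-Hellman key exchange and a hash-based XOR stream purely for illustration; real systems like iMessage use vetted protocols and much larger parameters, and nothing here should be mistaken for their actual design.

```python
import hashlib
import secrets

# Toy Diffie-Hellman modulus and generator (illustration only, NOT secure;
# real deployments use standardized 2048-bit-or-larger groups).
P = 0xFFFFFFFB  # a small prime
G = 5

def keypair():
    # The private key never leaves the device.
    private = secrets.randbelow(P - 2) + 1
    public = pow(G, private, P)
    return private, public

def shared_key(my_private, their_public):
    # Both ends compute g^(ab) mod p and hash it into a session key.
    secret = pow(their_public, my_private, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_stream(key, data):
    # Keystream from SHA-256(key || counter), XORed with the data.
    # The same function encrypts and decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Alice and Bob each generate keys on their own devices.
alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Only the PUBLIC keys cross the network; the carrier relays them
# but cannot derive the session key from them alone.
k_alice = shared_key(alice_priv, bob_pub)
k_bob = shared_key(bob_priv, alice_pub)
assert k_alice == k_bob  # both ends independently derive the same key

ciphertext = xor_stream(k_alice, b"meet at noon")  # what the network carries
plaintext = xor_stream(k_bob, ciphertext)          # recoverable only on-device
assert plaintext == b"meet at noon"
```

The point of the sketch: everything the phone company's network ever sees is the two public keys and the ciphertext, none of which yields the plaintext without a private key held on a handset. A court order served on the carrier therefore produces nothing readable, which is exactly the property the back-door debate is about.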

How does the argument change when the government wants to have access not just to real-time communications, but to a data store containing your movements, your financial records, private communication between you and your spouse, photographs and video of you and your family, and so on? This is the kind of back door that the government wants to carve into the smartphone of every citizen. Is it a natural extension of a wire tap, or is it an overreach?

Let's take it a step beyond. If a technology exists that provides selective access to the most private parts of your being, should you be denied the ability to completely control that access, for the sake of law enforcement?

Suppose that 50 years from now, we come up with a solid-state machine about the size of a peanut that can be surgically implanted in your skull, deriving all its power from blood flow or body movement or something, and it is able to detect your very thoughts, and transcribe them into signals and send them to the people of your choice. Suppose this device uses end-to-end encryption methods the way Apple uses them to encrypt its iMessage chat service now. The system, as designed, would be effectively impossible to tap by government officials, or criminals. It would be telepathy, made real. It would fundamentally change the human experience.

Our current society, collectively, would only go for a technology like this if it were extremely secure, and most of us wouldn't go for it at all. It's probably the idea of it being surgically implanted that makes it the most frightening. But we carry smartphones around all day, every day, and even sleep next to them at night, so how long before society changes, and a product like this goes from scary, to coveted?

Now suppose we all buy these devices, convinced of their security, and after we've been walking around with them for a number of years, the government demands changes to the software inside them to make them less secure, so they can tap directly into the minds of suspected criminals. Every device would be altered, including the one in your own head.

At that point, all it would take is one corrupt or sloppy government official leaking the toolkit onto the internet*, and your very thoughts - and no doubt the history of your thoughts - would be subject to eavesdropping, from foreign government agents, all the way down to jilted ex-boyfriends.

(* This has happened already, at least once, with government-owned router and smartphone hacking tools.)

Yes, it would be very convenient to tap into the brain of a suspected murderer or kidnapper or suicide bomber or warlord. Likewise it would be very convenient for them, to tap into everyone else. Imagine the hell they could create.

If your objections are ignored and the software is changed, what are you going to do? Your social and working life, even your identity, is thoroughly dependent on this device. It would be very hard to abandon. Plus, the device is surgically embedded. You might not even know for sure that it's off!

Let's look at this hypothetical situation from another angle: What if encryption wasn't an issue?

What if the battle over encryption was somehow rendered irrelevant, and the government could tap into anything, anywhere? Is there a level of privacy, a form of personal space, that is sacred enough that eavesdropping would be fundamentally wrong, even if the government could do it? Assuming it has the tech, should law enforcement be able to get a warrant to tap into the thoughts of a private citizen, without their knowledge, if that citizen is a suspected terrorist? If so, what about passive surveillance? Should law enforcement be allowed to mass-harvest the thoughts of every citizen and crunch them for patterns, to root out suspected criminals and deviants, without any prior authorization such as a warrant?

The government is already engaged in mass-surveillance activities with internet data*, and fighting to weaken encryption in order to expand that surveillance. Have they already crossed the line of acceptability? How close to the ultimate privacy of an unvoiced thought will government surveillance be allowed to get before it is considered universally wrong?

(* e.g. PRISM.)

Or will we ever get to that point, if the transition happens slowly enough?

Or, what about the more insidious scenario: Will we voluntarily submit to this filtering and see it as "proof of innocence", and begin to assume that anyone who does not voluntarily submit is not trustworthy, and perhaps a criminal?