I’ve been mulling this one over for a while. And honestly, after putting my ideas forward in a post to an internal global mailing list at work, I’ve come to realise there are at least two camps in information security:
- Those who aim, via the usual suspects, to protect things
- Those who aim, via often controversial and novel means, to protect people
Think about this for one second. If your compliance program revolves entirely around protecting critical data assets, you’re protecting things. If your infosec program is about reducing fraud, building resilience, or reducing harmful events, you’re protecting people, often from themselves.
I didn’t think my rather longish post, which brought together the ideas of the information swarm (it’s there, deal with it), information security asymmetry and pets/cattle (I rather like this one), would land with a heavy thud, like 95 bullet points nailed to the church door.
So I started thinking – why do people still promulgate stupid policies that have no basis in evidence? Why do people still believe that policies, standards, and squillions spent on edge and endpoint protection will save them when it is all trivial to break?
Faith.
Faith that the received wisdom of our dads and granddads is still appropriate for today’s conditions.
“Si Dieu n’existait pas, il faudrait l’inventer” – Voltaire
(Literally, “If God did not exist, it would be necessary to invent him”; often mis-translated as “if religion did not exist, it would be necessary to create it”, but close enough for my purposes)
I think we’re seeing the beginning of an infosec religion, where it is not acceptable to speak up against unthinking enforcement of hand-me-down policies like 30-day password resets or absurd password complexity, and where it is impossible to ask for reasonable alternatives when you attempt to rule out imbecilic practices like basic authentication headers.
We cannot expect everyone using IT to do it right, or to have high levels of operational security. Folks often have a quizzical laugh at my rather large random password collection and my use of virtual machines to isolate Java and an icky SOE. But you know what? When LinkedIn got pwned, I had zero fear that my use of LinkedIn would compromise anything else. I had used a longish random password unique to LinkedIn, so I could take my time resetting it, safe in the knowledge that even with the best GPU crackers in existence, the heat death of the universe would come before my password hash was cracked. Plenty of time. Fantastic … for me, and I finally got a payoff for being so paranoid.
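To put a rough number on the “heat death of the universe” claim, here is a back-of-the-envelope sketch in Python. The 20-character length, the 94-character printable ASCII alphabet, and the 10^12 guesses-per-second rate are my own illustrative assumptions, not figures from the LinkedIn breach.

```python
# Back-of-the-envelope brute-force estimate.
# All three figures below are illustrative assumptions, not breach data.
ALPHABET_SIZE = 94          # printable ASCII characters (assumed)
PASSWORD_LENGTH = 20        # a "longish" random password (assumed)
GUESSES_PER_SECOND = 1e12   # a very generous GPU cracking rig (assumed)

keyspace = ALPHABET_SIZE ** PASSWORD_LENGTH     # total candidate passwords
seconds = keyspace / 2 / GUESSES_PER_SECOND     # expected time: half the keyspace on average
years = seconds / (60 * 60 * 24 * 365.25)

print(f"Keyspace: {keyspace:.2e} candidates")
print(f"Expected crack time: {years:.2e} years")
# Around 5e19 years -- vastly longer than the ~1.4e10-year current age of the universe.
```

Even if you hand the attacker a rig a million times faster, there are still geological ages left in which to rotate that one password.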
But… I don’t check my main OS every day for malware I didn’t create. I don’t check the insides of my various devices for evil-maid MITM hardware or keyloggers. Let’s be honest – no one but the ultra-paranoid does this, and they don’t get anything done. But infosec purists expect everyone to have a bleached-white, pristine machine to do things – or else the user is at fault for not maintaining their systems.
We have to stop protecting things and start protecting humans, by creating human-friendly, resilient processes with appropriate checks and balances that do not break as soon as a keylogger, a network sniffer, or, more to the point, some skill is brought to bear. Security must be agreeable to humans, transparent (in plain sight as well as easy to follow), and equitable, and it must leave the user in charge of their identity, their linked personas, and ultimately their preferred level of privacy.
I am nailing my colours to the mast – we need to make information technology work for humans. It is our creature, to do with as we want. This human says “no”.