
Security by policy does not work


The laptop systems aboard the International Space Station (ISS) have been infected by computer viruses and worms multiple times. The W32.Gammima.AG virus made it to space in July 2008 and happily spread from laptop to laptop aboard the ISS. The virus was written to steal credentials for some common online games. It is unknown how many of these were run in orbit; the latency would kill the experience for sure.

I am sure there have been policies in place to prevent astronauts from carrying personal software and hardware up to the ISS. Personal items must be applied for explicitly and are only approved after thorough scrutiny of each item. Even beyond the obvious security considerations, this is necessary because the launch weight needs to be calculated exactly.
NASA and Roscosmos both have very strict policies for their personnel and strict training to make sure they know and follow policy. The group of astronauts primarily affected by the policy is very well known and counts a few dozen heads.

Still, at least one infected USB stick made it up to the ISS and was able to spread its malware. Other infections have happened, and we can assume similar infection vectors.

So the policy has proven unenforceable. It is broken. It is still correct per se: there is nothing wrong with prohibiting personal software and hardware in a high-risk environment. So the policy stays in place. But NASA needed to rely much less on its effectiveness.

Hence NASA did the only sane thing: move from an unenforceable policy to a technically feasible solution that significantly reduces the security exposure. In May 2013 NASA announced that the ISS laptops were being migrated to Debian 6. Imagine how much pressure Microsoft must have applied to prevent such a technical decision, given the adverse marketing message it sends along the way. And still the engineers at NASA saw this as the best way forward.

The take-away message here is: Security by policy does not work.

People will give away their company password for a few dollars' worth of swag. We have had studies proving this since at least 2003, and things have not changed much1. Edward Snowden was able to social-engineer the passwords of 25 colleagues. Associates with the highest security clearance and the best awareness training in the industry. We have studies telling us that most CxO assistants know their bosses' passwords. And I have not seen a company where that is not the case in a long time. Of course these companies have policies that require associates to keep passwords personal and secret. And their managers lead by example.

People still click any file they have received from a friend. Or from the malware on a friend's computer...

Security policy is only a base to work from. It is necessary for legal reasons and as a common ground to educate people from.

It is not a law, so don't confuse it with one. A single violation of a law does not have the potential to make society collapse. Now think about your company's fate in the light of a worst-case security policy violation.

Add-on Security does not work either

NASA scans everything they send up to the ISS for viruses.

They failed to scan the USB stick a Russian cosmonaut brought, though.

The W32.Gammima.AG virus is very easy to detect: it drops a file named ntdelect.com into the root directory of every USB stick or network share it can reach and adjusts autorun.inf to execute it. Nothing advanced at all.
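A check for exactly this indicator takes only a handful of lines. A minimal sketch in Python, assuming the mount point of the stick is known (the function name and drive handling are illustrative, not part of any real scanner):

```python
import os

# Indicators of compromise for W32.Gammima.AG: the dropped
# executable and the autorun.inf entry that launches it.
DROPPED_FILE = "ntdelect.com"
AUTORUN_MARKER = "ntdelect.com"

def scan_drive(root):
    """Return a list of suspicious paths found on one mount point."""
    findings = []
    dropped = os.path.join(root, DROPPED_FILE)
    if os.path.exists(dropped):
        findings.append(dropped)
    autorun = os.path.join(root, "autorun.inf")
    if os.path.exists(autorun):
        with open(autorun, errors="ignore") as f:
            if AUTORUN_MARKER in f.read().lower():
                findings.append(autorun)
    return findings
```

The point is not that this sketch is a virus scanner; it is that the infection NASA let through was detectable with trivial means, once anybody knew what to look for.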

But it was on the ISS for more than a month before it was even discovered by the first security researcher on Earth. So "technical security" as in "run a virus scanner" did not help. And it never will. Most of the malware that ends up in my mailbox gets one or two generic hits on VirusTotal. After a few days half of the engines will detect the malware, once the detection strings have been added to their databases. And even after months the number of detections will not go much above three quarters of the engines VirusTotal runs. So which one would you want to rely on?
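The detection ratio quoted above is simply the share of "malicious" verdicts among the per-engine results. A sketch over the shape of a VirusTotal v3 file report's `last_analysis_results` object (the engine names and sample data below are invented):

```python
def detection_ratio(analysis_results):
    """analysis_results: dict mapping engine name to a verdict dict
    with a "category" key, as in a VirusTotal v3 file report.
    Returns (detections, total_engines)."""
    detections = sum(
        1 for verdict in analysis_results.values()
        if verdict.get("category") == "malicious"
    )
    return detections, len(analysis_results)

# Invented sample: three engines flag the file, one does not (yet).
sample = {
    "EngineA": {"category": "malicious"},
    "EngineB": {"category": "malicious"},
    "EngineC": {"category": "undetected"},
    "EngineD": {"category": "malicious"},
}
```

A 3/4 ratio looks reassuring on a dashboard, but whether *your* scanner is among the three is pure luck.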

The take-away message here is: Add-on security does not work.

Security is either embedded into the solution (and that includes the business processes at least as much as the IT systems) from the architectural stage, or it will be as secure as a bolted-on reinforcement on a highway bridge. That may have value to extend the life of something about to crumble. But it is not a strategic option.

Rampant technical enforcement does not work either

Now, after reading this far, you may think about enforcing security policies through technology. Non-technical management often does, because it sounds easy. It is not.

There are companies that enforce twelve-plus-character passwords with uppercase and lowercase letters, numbers and special characters, which you need to change every month. You will usually find them on a sticker near the keyboard, written down in a notebook (the paper kind) or stored in a "personal password safe" application. The latter used to be a password-protected Word file, thanks to restrictive system install policies and their technical enforcement. Today it is often stored on a smartphone which runs unreviewed code by unknown app developers and protects the precious company password with a four-digit PIN or an easy-to-break swipe code. If at all.
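Such a complexity rule usually boils down to a single regular expression. A sketch (the exact parameters are an illustration, not any specific company's policy) that also shows why the rule measures shape, not strength:

```python
import re

# Hypothetical "strong password" policy: 12+ characters with at
# least one lowercase letter, one uppercase letter, one digit and
# one special character. Each (?=...) is a lookahead requirement.
POLICY = re.compile(
    r"^(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[^A-Za-z0-9]).{12,}$"
)

def meets_policy(password):
    """True if the password ticks every box of the policy."""
    return bool(POLICY.match(password))
```

Note that an easily guessable password like `Correct2024!` sails through, while a long, high-entropy passphrase without digits or capitals is rejected. The check enforces the letter of the policy, not its intent.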

People surf the Internet. People read emails. People exchange files on USB sticks. Work files and private files. They want to really see that presentation on the projector now, ten minutes into the session. And they want to share that cute photo of their newborn baby. Regardless.

So the take-away message here is: Rampant technical enforcement does not work.

Technical enforcement works to a certain level and then people will start to employ work-around processes.

They want to do their work. And if "IT" or "management" hinders them too much, they will evade.

So what to do?

There is a need for policy, but it needs to be sane, short and comprehensible to the people expected to abide by it. Training helps to deliver the message. Have good food and inspiring presenters during the training so people enjoy it. The message will stick longer.

There is a need for technical enforcement, but beyond a tipping point more enforcement will generate evasion. Once your average associates start to trade information on how to keep working despite the crazy restrictions "from IT", you know your company has been trotting down that path for way too long. Go back to a sane level. Be honest about it. Explain that level to people and facilitate discussion, peer support and rapid continuous improvement.

Monitor more and enforce less. It is better to know you have average six-character passwords with two numbers in them (monitor) than to be sure you have twelve-character complex passwords (enforce) while only the purchasing department wonders about the Post-it note consumption. It is better to monitor where logins come from and trace that single connection from a country where you have no business than to prevent traveling people from accessing your network so they take full USB hard disks along "just in case".
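The monitoring side of that trade-off can be as simple as flagging outliers for a human to look at instead of blocking them. A sketch, where the login record format and the country list are assumptions for illustration:

```python
# Monitoring instead of blocking: surface logins from countries
# where the company has no business, rather than locking out
# travelers. The country codes here are illustrative.
EXPECTED_COUNTRIES = {"DE", "US", "FR"}

def flag_unusual_logins(logins):
    """logins: iterable of (username, source_country) tuples.
    Returns the records worth a closer look - not a block list."""
    return [
        (user, country)
        for user, country in logins
        if country not in EXPECTED_COUNTRIES
    ]
```

The output feeds a review queue, not a firewall rule; people keep working while the one connection that matters gets traced.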

Note that everything you monitor carries an inherent risk of being abused. So only monitor what you need to monitor, e.g. because of legal obligations or because it makes sense as a measure of the effectiveness of your security framework. Delete logs in a timely manner. A "take it all" approach, still often favored during the (slowly ending) "big data" hype, creates abuse potential and great embarrassment when the data is dumped by a hacker and spread across the Internet. Which happens even to the best companies. They cannot steal what you have not collected.
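Timely log deletion is the easy part to automate. A sketch, where the retention window and the flat log directory layout are assumptions for illustration:

```python
import os
import time

# Retention window: keep logs for 90 days, then remove them.
# The number is illustrative; yours comes from legal and business needs.
RETENTION_DAYS = 90

def purge_old_logs(log_dir, now=None):
    """Delete files in log_dir older than the retention period.
    Returns the names of the removed files."""
    now = now if now is not None else time.time()
    cutoff = now - RETENTION_DAYS * 86400
    removed = []
    for name in sorted(os.listdir(log_dir)):
        path = os.path.join(log_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Run it from a daily cron job and the "take it all" archive never accumulates in the first place.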

Look for solutions that are architected for the appropriate level of security. You do not need end-to-end encrypted mobile communication if you run a school, but you do need encrypted data storage on mobile phones with a remote wipe capability, as your people will carry around a lot of customer data records. If you are in a highly competitive market and maintain your position through technology, process know-how or purchasing contracts, you do not store such critical data on a public cloud.

And - as NASA found out - if a 400km air gap cannot prevent infections and service technicians are expensive to deploy, don't run Windows XP.


  1. Update 05.12.2013: Yesterday Google committed a fix to only reveal stored website passwords after the master password has been entered again. A late but welcome step in the right direction. 


Comments


Dean on :

Excellent article.

Janis Lepinski on :

To get this type of information from other sites or books would have taken me days. Thanks for the work you have done! Concise, ready to quote for management. Thanks again.

Rob Lear on :

You are so right. There is an insane amount of money spent on snake oil security solutions because people want to "buy security" as if it were an off-the-shelf product. And vendors obviously target these managers with reassuring brochures about how many "attacks" (port scans) are happening and need to be mitigated.

Costs are high, people can show the shiny dashboard from the new IDS "solution" and their senior managers are well impressed. But the real security issues have not been tackled. The hard stuff is left to lawyers, human resources and line management to sort out. And they do what they can do: create more paper to cover their backs.

You need a really good CISO to strike the right balance. I'd love to have you work at my company.
