06 April 2020

Security Engineering by Ross Anderson

Security Engineering by Ross Anderson is probably the best security book I have read so far.

Whereas other books take an exclusively technical point of view, Anderson focuses on concepts, establishing the mental framework that guides a security engineer throughout their professional career. He does not refer to any specific firewall brand, programming language or operating system, but to design successes and failures throughout the history of information and communication technologies. This is enriching because the marketing of dominant vendors tries to convince you that you only need to invest vast amounts of money in the latest technology to secure your information assets. For Anderson, however, technology is just a tool for proper assessment and design, grounded in a mental framework of comprehensive concepts that are independent of the current state of the art.

Throughout the book, these concepts are examined by applying them to every field of information security and comparing them with historical events, so a great many topics are covered: topics as interesting and diverse as psychology, ergonomics, cryptography, access control policies for information assets, the impact of economics on security, integrity controls, security in shared data environments, intellectual property, terrorism and much more.

The author's long experience provides the book with many examples from banking, the defense industry and the intelligence sector (which have, of course, been the great drivers of the current state of the art in information security). These examples give you detailed descriptions ranging from IFF (Identification Friend or Foe) systems to military command and control organizations; from the evolution of nuclear weapons protocols to improvements in electronics for eavesdropping on electromagnetic emissions.

Besides, this book is going to stay relevant on your shelf for a long time, as tends to happen with books about general concepts. It is not one of those books that end up in the wastebasket after a few years.

All of that makes Security Engineering essential reading for any security engineer and an investment worth every penny.

20 February 2020

Kerckhoffs's principle

Auguste Kerckhoffs was a Dutch linguist who taught German at a school of commercial studies in Paris during the second half of the 19th century. However, he is best known today for some essays he published in the French Journal des sciences militaires. These essays transformed military cryptography as it had been practiced until then. A practical man, Kerckhoffs proposed six principles for designing a secure cryptographic system:

  1. If the system is not theoretically unbreakable, it should at least be unbreakable in practice.
  2. The effectiveness of the system must not depend on keeping its design secret.
  3. The secret key should be easy to remember, so that it does not have to be written down.
  4. The cryptosystem's output should be alphanumeric.
  5. The system should be operable by a single person.
  6. The system should be easy to use.

More than a century later, all six of Kerckhoffs's principles remain valid.

The first one is a cornerstone of current cryptosystems. Those cryptosystems rely on key spaces so huge that a brute force attack is impossible, at least with the technical resources available today. The point is that a key for a current cryptosystem can in theory be found by brute force (so the cryptosystem is not theoretically unbreakable), but doing so requires such a vast amount of computation that in practice it is not viable (so the cryptosystem is unbreakable in practice). When technology reaches a point where the available computing power makes it feasible to break a key, key lengths are simply increased until a brute force attack becomes unviable again.
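A back-of-the-envelope sketch makes the "unbreakable in practice" argument concrete. The guess rate below (10^12 keys per second) is an assumed figure chosen for illustration, not a benchmark of any real hardware:

```python
# Sketch: why large key spaces make brute force impractical.
# The guess rate is an assumed illustrative figure, not a real benchmark.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def brute_force_years(key_bits: int, guesses_per_second: float = 1e12) -> float:
    """Expected years to search half the key space at the given guess rate."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / guesses_per_second / SECONDS_PER_YEAR

for bits in (56, 128, 256):
    print(f"{bits}-bit key: ~{brute_force_years(bits):.2e} years")
```

At this assumed rate a 56-bit key (the old DES size) falls in hours, while a 128-bit key takes longer than the age of the universe, which is exactly why key lengths are increased as computing power grows.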

The second principle has proved its truth many times in history. Keeping secrets is hard. It is hard enough to keep a cryptosystem's key secret; keeping its design secret for a long time is almost impossible, even more so in today's interconnected world, which tends to share data rather than hide it. This is, in fact, the principle mainly known as "Kerckhoffs's principle". Throughout the Cold War it was largely ignored, but in the end it has come to be generally accepted as right. Since then, cryptosystem designs have been publicly disclosed, with development even opened to public proposals (as happened with the design of the AES standard). Opening up those efforts has been a good way to bring more thinking minds into the design of standards and protocols.

The third principle needs little introduction. If you are a security engineer, you have surely faced plenty of misguided security policies meant to make users remember complex passwords... only to find those hypercomplex passwords written on Post-it notes stuck next to their computers.

The fourth, fifth and sixth principles make security engineers admit that human nature is imperfect when facing a new cryptosystem design. Humans are alphanumeric beings: we think visually, and we find it difficult to imagine things beyond our familiar three dimensions.

A cryptosystem that does not take these factors into account will probably fail, because its human operators will inevitably take shortcuts and invent tricks to cope with its complexity, at the cost of reducing the system's entropy and with it its effectiveness. The Second World War offers an example: lazy Enigma operators set their rotors to predictable positions to avoid the hassle of changing them as frequently as instructed. That only made the job easier for the British cryptanalysts at Bletchley Park.

If you are a security engineer, keep the second principle as the summary of this article: your system must not depend on the secrecy of its design. Humans are not perfect, and chances are your design will not stay secret for long, not in this interconnected world.