- If a system is not theoretically secure, it should at least be secure in practice.
- A system's effectiveness must not depend on keeping its design details secret.
- A system's secret key should be easy to remember, so that it never has to be written down.
- The cryptosystem's output should be alphanumeric.
- The system should be operable by a single individual.
- The system should be easy to operate.
After more than a century, all six of Kerckhoffs's principles are still valid.
The first one is a main foundation of current cryptosystems. Those systems rely on key spaces so huge that a brute-force attack is impossible, at least with the technical resources available today. The point is that a current cryptosystem's key can, in theory, be found by brute force (so the system is not theoretically secure), but doing so requires such a vast amount of computing resources that in practice it is not viable (so the system is secure in practice). Whenever technology reaches a point where the available computing horsepower makes it possible to break a key, key lengths are increased to make a brute-force attack harder, to the point where it becomes unviable again.
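The arithmetic behind "secure in practice" is easy to sketch. The snippet below estimates the worst-case time to exhaust a key space of a given bit length; the guess rate is an illustrative assumption (roughly a large modern cluster), not a measured figure.

```python
# Illustrative sketch: worst-case time to exhaust a key space by brute force.
# GUESSES_PER_SECOND is an assumed figure for the sake of the example.
GUESSES_PER_SECOND = 10**12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365


def years_to_exhaust(key_bits: int) -> float:
    """Worst-case years needed to try every key of the given length."""
    return 2**key_bits / GUESSES_PER_SECOND / SECONDS_PER_YEAR


for bits in (56, 128, 256):
    print(f"{bits}-bit key: about {years_to_exhaust(bits):.3e} years")
```

A 56-bit key (the old DES size) falls in well under a year at this rate, while a 128-bit key takes many orders of magnitude longer than the age of the universe, which is exactly why key lengths keep growing as hardware improves.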
The second principle has demonstrated its truth many times throughout history. Keeping secrets is hard. It is hard enough to keep a cryptosystem's key secret, but keeping its design secret for a long time is almost impossible, even more so today in an interconnected world that tends to share data rather than hide it. In fact, this principle is what is usually known as "Kerckhoffs's principle". Throughout the Cold War it was largely ignored, but in the end it became generally accepted that the principle is right. Since then, cryptosystem designs have been disclosed, even opening development to public proposals (as happened with the AES standard). Opening up those efforts has been a good way to involve more thinking minds in the design of standards and protocols.
You already know what the third principle is about. If you are a security engineer, you have surely faced plenty of misguided security policies forcing users to remember complex passwords... only to find that users keep their hypercomplex passwords written on a post-it note stuck next to their computers.
The fourth, fifth, and sixth principles make security engineers accept that human nature is not perfect when facing a new cryptosystem design. Humans are alphanumeric beings; we think visually, and we find it difficult to imagine things beyond our familiar three dimensions.
A cryptosystem that does not take those factors into account will probably fail, because its human operators will inevitably take shortcuts and invent tricks to cope with the complexity, at the cost of reducing the system's entropy and, with it, its effectiveness. You can find an example in the Second World War, when lazy Enigma operators set their dials to predictable positions to avoid the hassle of changing them as frequently as they had been instructed. That only made the job easier for the British cryptanalysts at Bletchley Park.
If you are a security engineer, keep the second principle as the summary of this article: your system cannot depend on the secrecy of its design. Humans are not perfect, and chances are your design won't stay secret for long, not in this internet world.