How many times have you looked at an apparent security measure and wondered what on earth was going through the mind of the person who implemented it? Take the example of the humble garden shed: usually constructed from painfully thin wood panels nailed to a dodgy timber frame. Then you look at the door and you see a padlock that looks as if it was bought in a fire sale from Alcatraz. In reality, anybody wanting to steal from the garden shed would just kick in a few of the wood panels.
This sad state of affairs is pretty much where web and email security stands these days. The recent proof of concept that created a real rogue CA certificate using MD5 collisions is a prime example. Everybody has known for quite some time now that MD5 is not collision resistant; it is so broken that collisions can be found within minutes on any standard desktop or laptop. Combine that with the usual problems around how HTTPS gets deployed, which I discussed previously, and web security really is in a sorry state of affairs.
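To make that concrete, you don't even need to find your own collision: the widely published collision pair from Wang et al. is two different 128-byte inputs with the same MD5 digest. This is just a sketch using Python's standard library; the hex strings below are the published collision blocks (six bytes differ between the two messages), not something computed here.

```python
import hashlib

# The widely published MD5 collision pair (Wang et al.): two 128-byte
# messages that differ in six bytes yet produce an identical MD5 digest.
m1 = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab58712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e488832571415a085125e8f7cdc99fd91dbdf280373c5b"
    "d8823e3156348f5bae6dacd436c919c6dd53e2b487da03fd02396306d248cda0"
    "e99f33420f577ee8ce54b67080a80d1ec69821bcb6a8839396f9652b6ff72a70"
)
m2 = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab50712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e4888325f1415a085125e8f7cdc99fd91dbd7280373c5b"
    "d8823e3156348f5bae6dacd436c919c6dd53e23487da03fd02396306d248cda0"
    "e99f33420f577ee8ce54b67080280d1ec69821bcb6a8839396f965ab6ff72a70"
)

assert m1 != m2  # genuinely different inputs
print("MD5(m1):", hashlib.md5(m1).hexdigest())
print("MD5(m2):", hashlib.md5(m2).hexdigest())
print("MD5 collision:", hashlib.md5(m1).digest() == hashlib.md5(m2).digest())
# The SHA family does not collide on this pair:
print("SHA-256 differs:", hashlib.sha256(m1).digest() != hashlib.sha256(m2).digest())
```

Collision pairs like this are exactly the raw material the rogue-CA researchers built on, which is why a CA still signing certificates with MD5 is such a glaring hole.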
There are sites that use decent certificate authorities (the ones that issue their certs using the SHA family and do more thorough checks before issuing a cert in the first place) and also implement HTTPS in a proper and secure manner. Most financial institutions, for example, follow good practice. Unfortunately, banks and financial institutions have also tacitly instilled a sense of trust in the web user. We assume online banking is secure, and online banking uses SSL/TLS, so other sites that use SSL/TLS must also be secure – and therein lies the problem. It is partly why phishing sites work so well: we know our online banking is secure, and the website in front of me is my online banking site… well, er, it certainly looks the same. How many times have you actually read and checked the warning your browser periodically spits out about certificate validity? For me it's 50/50 at best, and I know better.
This misplaced trust was neatly demonstrated in a conversation I had the other day about how PGP public keys are exchanged. I'd highlighted that you have to get the fingerprint verified (either in person, over the phone, or via a trusted third-party keysigning) before accepting or signing someone else's public key. The response I received was somewhat dismissive – and this was from a group of information security professionals. Missing that one critical step may seem insignificant, but it completely defeats the entire security model, rendering the use of cryptography in that situation nothing more than a big padlock on a wobbly garden shed.
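The principle behind that verification step is simple to state in code. The sketch below is illustrative only, not real OpenPGP: an actual PGP fingerprint is computed over a specific key-packet serialisation, and the helper names here (`fingerprint`, `accept_key`) are hypothetical. What it shows is the shape of the check: compute a fingerprint from the key bytes that arrived over an untrusted channel, and refuse to trust the key unless it matches a fingerprint obtained out of band.

```python
import hashlib
import hmac

def fingerprint(key_bytes: bytes) -> str:
    # Illustrative only: hash the raw key material. A real OpenPGP
    # fingerprint hashes a defined serialisation of the key packet.
    return hashlib.sha256(key_bytes).hexdigest()

def accept_key(received_key: bytes, verified_fingerprint: str) -> bool:
    # verified_fingerprint must come from an out-of-band channel:
    # read over the phone, checked in person, or via a trusted keysigning.
    computed = fingerprint(received_key)
    # Constant-time comparison avoids leaking how many characters matched.
    return hmac.compare_digest(computed, verified_fingerprint.lower())

# Hypothetical usage: the key arrives over an untrusted channel (email,
# keyserver); the fingerprint arrives over a trusted one.
key = b"-----BEGIN EXAMPLE KEY-----example-----END EXAMPLE KEY-----"
trusted_fp = fingerprint(key)  # stand-in for the value read over the phone

print(accept_key(key, trusted_fp))              # matches: safe to sign/accept
print(accept_key(b"tampered key", trusted_fp))  # does not match: do not trust
```

Skip the out-of-band comparison and `accept_key` degenerates into "trust whatever key turned up", which is exactly the padlock-on-a-shed failure described above.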
So what’s my point? Firstly, technologies that employ cryptography have to do it properly, or not at all. This blasé, half-assed approach by developers and users alike is actually harmful; it takes hard-earned trust from one area and transfers it into a widespread false sense of security. The second and more important point is that developers and system designers really need to shoulder much more of the responsibility. It’s not your email provider that’s going to have to deal with the fallout of your account being hacked, it’s you. The business that purchased a cert from a CA that uses MD5 and had a load of its customers defrauded will have its reputation in the gutter while the CA sits pretty.
The upshot is that it’s in our hands. If your browser spits out a warning about a site certificate, check it, find out why, then promptly complain to the company in question. If your webmail provider isn’t using SSL/TLS properly then complain; if it’s not fixed, move providers. Probably the best example of what end users screaming about an issue en masse can achieve is Microsoft: for years their products were pretty bad in the security stakes, and I would argue there have been vast improvements since.
UPDATED 12th April 2010: Added extra meaning to the title.