Welcome to Orcmid's Lair, the playground for family connections, pastimes, and scholarly vocation -- the collected professional and recreational work of Dennis E. Hamilton
Prophets in Their Own Lands
Back in February, I posted “Document Security Theater: When the Key is More Valuable than the Lock.” I was objecting to a technique, now being immortalized in open-document formats such as ODF and OOXML, whereby a hashed copy of a password is stored in the document such that it can easily be retrieved and used to attack the password itself. As explained there, the value of the password is not in being used to overcome the protection of the document against alteration – that is easy to do without ever bothering to know the password. The value of the password is that it is a memorable secret of the password holder and it needs to be protected (i.e., disguised) because it is also used for a variety of valuable purposes.
The failure to achieve a separation of concerns is probably a tip-off here. Either way, the exposure of hashed copies of passwords is not a new issue. There are available expert reports that identify the flaw. Attacks on passwords whose hashed copies are known have been popular since the first widespread Internet worm, the 1988 Morris worm, was released against unprotected systems. For example, the Unix /etc/passwd file, with its hashed copies of passwords, was commonly readable by all users, and certainly readable everywhere once a root account was compromised. That users reused the same passwords on different systems made leap-frog attacks from system to system particularly promising. It is like watching an elaborate arrangement of dominoes fall.
Encouraging Gullible Conduct
My argument then was that it is folly to increase the complexity of hash coding and believe that the password is thereby protected against discovery by a determined attacker. The defect in reasoning is in the assumption that the remedy to attackable hashed password copies is to use a “stronger” hashing technique. It does not make a memorable password stronger, and there is effectively a (disguised) copy of the password in plain sight. Having the copy and knowing the hashing technique allows that still-weak password to be attacked about as easily as it ever could be.
Systems that use password hashing as a way of not keeping passwords around in plaintext also arrange to secure the hashed copies against discovery. Once the hashed copies are known, discovery of the password becomes child's play, especially for memorable passwords that the password holder reuses as a matter of convenience.
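To see how little the choice of digest matters here, consider a sketch of the dictionary attack described above. The password and word list are purely illustrative, and SHA-256 stands in for whatever hash the document format specifies; swapping in SHA-1 or SHA-512 changes nothing about the attack:

```python
import hashlib

# A hashed copy of a password, as it might sit in plain sight inside a
# document. The attacker knows the hashing technique from the format spec.
stored_hash = hashlib.sha256(b"sunshine").hexdigest()

# Hypothetical word list; real attacks run dictionaries of millions of
# candidate passwords, plus common variations.
word_list = ["password", "letmein", "sunshine", "dragon"]

def dictionary_attack(target_hex, candidates):
    """Hash each candidate and compare against the known, unsalted digest."""
    for word in candidates:
        if hashlib.sha256(word.encode()).hexdigest() == target_hex:
            return word
    return None

recovered = dictionary_attack(stored_hash, word_list)
print(recovered)  # the memorable password falls out directly
```

A "stronger" digest only changes the constant factor per guess; the search space is still the space of memorable passwords, which is exactly what makes the attack cheap.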
We’ve all learned by now that convenience trumps security, right? My objection is against willfully pandering to that conduct. You can imagine my dismay when my efforts to end that perpetration in the ODF specification were rebuffed by this argument:
“The justification for stronger algorithms than SHA1 is that many users use the same passwords for multiple tasks. So, it is worth to protect the key. Since we explicitly added the [SHA256 and stronger hashing methods] attributes to ODF 1.2 on request, we should not revert this.”
That is precisely the reason we should “revert” that so far draft-only provision of ODF 1.2.
Reality Will Not Be Fooled
Last week, there was an announcement that some servers at Apache.org had been attacked and compromised. I saw notices such as ZDNet’s “Apache.org hit by targeted XSS attack, passwords compromised” and PCWorld’s (via Yahoo) “Apache Project Server Hacked, Passwords Compromised.” I didn’t read the articles, since it was about an all-too-common sort of break-in. What I didn’t appreciate was that the attackers stole lists of user names and their hash-coded passwords.
What finally caught my undivided attention was the 2010-04-13 James Clark tweet, “Ouch. Hashed copy of password compromised for all users of Apache hosted JIRA, Bugzilla.”
The notice at the Apache Foundation could not be clearer: “If you are a user of the Apache hosted JIRA, Bugzilla, or Confluence, a hashed copy of your password has been compromised.” And, of course, if we are putting hashed copies of passwords in plain sight, it doesn’t take a hacked JIRA, Bugzilla, or Confluence configuration to get one. Even scarier is this observation: “JIRA and Confluence both use a SHA-512 hash, but without a random salt. We believe the risk to simple passwords based on dictionary words is quite high, and most users should rotate their passwords.”
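The advisory's point about the missing random salt is worth unpacking. With a per-password random salt stored alongside the digest, two users with the same weak password get different stored hashes, so one precomputed dictionary no longer cracks every account at once; each target has to be attacked separately. A minimal sketch, assuming Python's standard hashlib and os modules:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Hash with a per-password random salt; store both salt and digest."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt for each new password
    digest = hashlib.sha512(salt + password).hexdigest()
    return salt, digest

# Two users with the same weak password produce different stored digests.
salt_a, digest_a = hash_password(b"sunshine")
salt_b, digest_b = hash_password(b"sunshine")
print(digest_a != digest_b)  # precomputed tables must be redone per salt

# Verification recomputes the digest from the stored salt.
salt_c, digest_c = hash_password(b"sunshine", salt_a)
print(digest_c == digest_a)
```

Salting raises the cost of bulk attacks but does not rescue a weak password under targeted attack; current practice also uses a deliberately slow key-derivation function (PBKDF2, bcrypt, scrypt) rather than a single bare SHA-512 pass, precisely to slow each guess down.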
What more do we need to know?
It is time to stop putting lipstick on what we know to be a pig.
I believe that this situation, for documents, arose through an over-constrained problem. We’ve been blinded into thinking that the safety of keys used for conveniently removing document protections is improved by strengthening the hashing for copies of those keys. All this does is encourage folks to be careless in the choice of passwords for this mundane purpose. We must find a way off that slippery spiral.
The intriguing problem is how to preserve the convenience of protection removal for document authors without subjecting their convenient, memorable password to discovery by attacking the plain-sight hashed copy. Is there a way out of the current awful practice? And if so, what do we do to overcome perpetuation of the flawed approach that is already in place?
[update 2010-04-17T19:09Z I broke up the first paragraph because it did not flow well. This allowed me to embellish the situation with more unpleasant historical facts. It is appalling to see how many years it’s been known that disclosure of hashed copies of passwords is a practically-attackable vulnerability.]