Friday, 6 May 2016

Better tamper-proof ledgers

In the previous post, we pointed out how Certificate Transparency doesn't maintain cryptographic future or forwards integrity, and why that matters mainly if you try to reuse the log protocol for non-certificate things. Most other ledger schemes aren't much better. Blockchains - and especially private blockchains - can be tampered with by attackers who control a large share of the network's computing power, and Schneier-Kelsey logs and their derivatives allow verifiers to tamper with the logs.

It may very well be the case that there are public ledgers out there that have all of the properties I want, but if so, I can't find them and they're certainly not in widespread use. Indeed, most ledgers rely on access-control, probably because the few ledger designs that do exist have major usability or cryptographic problems that make them difficult for developers to actually use.

So let's fix that.

Future integrity is the easy bit. There's lots of work on this subject, and the crux of the solution is having roll-forward keys. For example, if K is our initial key, then Hash(K) is our second key, Hash(Hash(K)) is our third, and so on. If a hacker steals our key at time Tn, recovering any earlier key requires inverting the hash function - a problem that is cryptographically hard - so entries sealed under earlier keys cannot be forged. This stops a hacker who hacks our logger from tampering with old log entries, which is really important in the case of, say, event logs.
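As a concrete illustration, here's a minimal Python sketch of roll-forward keys for a symmetric log (all function and variable names here are mine, purely illustrative): each entry is MACed under the current key, which is then replaced by its hash and erased.

```python
import hashlib
import hmac
import os

def roll(key: bytes) -> bytes:
    """Derive the next epoch's key; going backwards means inverting SHA-256."""
    return hashlib.sha256(key).digest()

def seal(key: bytes, entry: bytes) -> bytes:
    """MAC a log entry under the current epoch key."""
    return hmac.new(key, entry, hashlib.sha256).digest()

# The logger starts with a random key and rolls it forward after each entry.
k0 = os.urandom(32)
entries = [b"boot", b"login alice", b"suspicious outbound connection"]

key = k0
tags = []
for e in entries:
    tags.append(seal(key, e))
    key = roll(key)  # the old key is erased; only the rolled key survives

# An intruder who steals the *current* key cannot recompute tags for earlier
# entries: doing so would require walking the hash chain backwards.
stolen_key = key
forged_tag = seal(stolen_key, b"login alice")   # attacker's best effort
assert forged_tag != tags[1]                    # doesn't match the real tag

# A trusted verifier who knows k0 can replay the chain and check every tag.
vkey = k0
for e, t in zip(entries, tags):
    assert hmac.compare_digest(seal(vkey, e), t)
    vkey = roll(vkey)
```

Note that this symmetric variant still needs a trusted verifier holding k0 - and, as with Schneier-Kelsey style logs, that verifier could forge entries - which is exactly the limitation the asymmetric scheme in this post is designed to remove.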

Forwards integrity is a bit harder, and there's some work on this subject too. The crux of forwards integrity is having each new key introduce new entropy while still being cryptographically linked to the previous key. This stops a hacker who hacks our logger from tampering with entries written after they have been evicted from (or have voluntarily left) the logger. This is particularly important in the case of, say, binary or update transparency and integrity.
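To see why the fresh entropy matters, here's a toy Python comparison of the two update rules (names are illustrative); the attacker is modeled as snapshotting the logger's entire state at one instant and then leaving:

```python
import hashlib
import os

def roll_deterministic(key: bytes) -> bytes:
    # Future keys are a fixed function of the current secret (no new entropy).
    return hashlib.sha256(key).digest()

def roll_with_entropy(key: bytes):
    # Mix fresh randomness into each new key; return the fresh part too, so a
    # real scheme could commit to it (e.g. publish it in a signed log entry).
    fresh = os.urandom(32)
    return hashlib.sha256(key + fresh).digest(), fresh

k = os.urandom(32)

# The attacker snapshots the logger's entire state at time Ti...
snapshot = k

# ...then leaves. With deterministic rolling she predicts every future key:
future = roll_deterministic(roll_deterministic(k))
predicted = roll_deterministic(roll_deterministic(snapshot))
assert predicted == future

# With entropy injection, keys generated after she leaves are unpredictable,
# because they depend on randomness drawn after the snapshot was taken.
k2, _ = roll_with_entropy(k)
guess, _ = roll_with_entropy(snapshot)
assert guess != k2  # prediction fails (except with negligible probability)
```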

SSL's perfect forward secrecy ciphers achieve forwards-integrity (for secrecy, rather than integrity reasons) by having a master SSL private key that signs individual ephemeral encryption keys, rather than encrypting with the master key directly. The key-exchange with the user then happens using the ephemeral key, not the signing key.

This deliberately limits the damage if a hacker steals your SSL private key. With knowledge of your private key, the hacker can obviously impersonate your site; they can even actively man-in-the-middle legitimate connections to it. But they can't decrypt connections to your site unless they're still in control of your server: the stolen key lets them forge new ephemeral keys, but it doesn't let them decrypt key-exchanges performed with ephemeral keys whose private halves they never held.

It's worth pointing out that the forward secrecy here derives from each new private key introducing new entropy, not from the key being ephemeral. If the ephemeral private keys were derived from, say, a never-reseeded PRNG, a hacker could predict them by stealing the internal state of the PRNG and then leaving the server. This is why, for example, Schneier-Kelsey ledgers have no forwards integrity: that scheme has ephemeral keys, but future keys are a fixed function of previous secrets, so a hacker who hacks the logger once can still predict all future ephemeral keys.

Armed with these two facts, we can make a really simple ledger scheme that has both:

Ledger initialization
1. Generate a public/private key pair P0, P0'
2. Distribute P0 widely to verifiers out of band.

Adding the i-th entry to the ledger
1. Generate a public/private key pair P{i+1}, P{i+1}'
2. Sign the data Ei and the public key P{i+1} with private key Pi'.
3. Permanently erase Pi'

Public verification is now really simple. The first public key is known a priori. The first entry tells us the first entry's data and the public key that will be used to sign the second entry. We can then follow the chain of entries, each one signed by a key described by the previous entry.
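The steps above can be sketched end-to-end in Python. The standard library has no signature algorithm, and each key in this scheme signs exactly once anyway, so this sketch uses a compact hash-based one-time signature as a stand-in (all names are mine, purely illustrative); RSA or ECDSA would slot in identically.

```python
import hashlib
import secrets

def H(b):
    return hashlib.sha256(b).digest()

def keygen():
    # One-time key pair: two random preimages per bit of a 256-bit digest.
    sk = [secrets.token_bytes(32) for _ in range(512)]
    pk = [H(s) for s in sk]
    return sk, pk

def bits(digest):
    # Big-endian bit expansion of a 32-byte digest.
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg):
    return [sk[2 * i + b] for i, b in enumerate(bits(H(msg)))]

def verify(pk, msg, sig):
    return all(H(sig[i]) == pk[2 * i + b] for i, b in enumerate(bits(H(msg))))

def pk_bytes(pk):
    return b"".join(pk)

class Ledger:
    def __init__(self):
        self._sk, self.p0 = keygen()   # P0 is distributed out of band
        self.entries = []              # list of (data, next_pk, signature)

    def append(self, data):
        next_sk, next_pk = keygen()                      # step 1
        sig = sign(self._sk, data + pk_bytes(next_pk))   # step 2
        self._sk = next_sk                               # step 3: old key gone
        self.entries.append((data, next_pk, sig))

def audit(p0, entries):
    # Walk the chain: each entry must be signed by the key announced in the
    # previous entry (or by P0 for the first entry).
    pk = p0
    for data, next_pk, sig in entries:
        if not verify(pk, data + pk_bytes(next_pk), sig):
            return False
        pk = next_pk
    return True

ledger = Ledger()
for d in (b"v1.0 released", b"v1.1 released", b"v1.2 released"):
    ledger.append(d)
assert audit(ledger.p0, ledger.entries)

# Tampering with any recorded entry breaks the chain for every verifier.
data, next_pk, sig = ledger.entries[1]
ledger.entries[1] = (b"v1.1-evil", next_pk, sig)
assert not audit(ledger.p0, ledger.entries)
```

Because every private key, including P0', is erased immediately after use, nobody - not even the logger - can rewrite history; a compromise of the current in-memory key only lets an attacker extend or fork the ledger going forward, which is precisely the forwards-integrity boundary discussed above.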

Depending on how you look at it, the solution is really just a blend of ideas already in use elsewhere in cryptography, now applied to ledgers. It uses asymmetric key-rolling like Axolotl, but for integrity rather than secrecy. It uses asymmetric rather than symmetric cryptography, so verifiers can verify without gaining the ability to tamper, which is what gives us zero trust. And it uses blockchain-like chained verification, but swaps mining for signing, because most of the time we prefer fast centralized ledgers to slow decentralized ones that are inherently vulnerable to attackers with large amounts of network computing power.

The scheme has some nice properties. It's centralized so only the logger gets to append entries. This is great for event logs, binary transparency, plane black-boxes and even most financial use-cases. It's publicly readable. Anyone can read or verify the entries without undermining the integrity of the system. Better still: it's zero trust. Since even the first private key is erased, the ledger cannot be tampered with by anyone: not the readers, not the verifiers - not even the logger itself.

We're also algorithm agnostic. Our only criteria are that generating each new private key requires new entropy (if you want forwards integrity), and that the algorithm we choose is an asymmetric signature algorithm. ECDSA or RSA would work just fine. But curiously, we only ever need to sign data once with each key. And for crypto-nerds that brings to mind a really cool asymmetric algorithm that nobody in their right mind ever uses: Lamport signatures.

Lamport signatures are old. They were invented in 1979 and, like one-time pads, they have great security properties until you use them more than once, at which point their security quickly collapses. But our scheme only ever signs once with each key, so that suits us fine. How do they work?

A Lamport signature scheme uses two cryptographic hash functions, a data-hash and a key-hash, although normally both are the same function.

Generate Lamport keys
1. For i = 0 .. 2 x bitlen(data-hash) - 1
   privatekey[i] = cryptorandom()
   publickey[i]  = keyhash(privatekey[i])

Generate Lamport signatures
1. Generate h = datahash(data)
2. For each bit b at index i in h
   If b is 0: Signature[i] = privatekey[2*i]
   If b is 1: Signature[i] = privatekey[2*i+1]

Validating Lamport signatures
1. Generate h = datahash(data)
2. For each bit b at index i in h
   If b is 0: Verify that keyhash(Signature[i]) = publickey[2*i]
   If b is 1: Verify that keyhash(Signature[i]) = publickey[2*i+1]

Lamport signatures work essentially as follows. Suppose we want to use a 128-bit data hash algorithm. We generate 256 "buckets" of random data (our private key), and publish the hash of each bucket (our public key). When we later want to sign some data, we hash that data to get a 128-bit hash. That hash selects a configuration of the 256 buckets that is unique to its value, and we publish the original random data from the selected buckets as our signature.

A verifier can check the signature by hashing each element of it and matching the results against the public key, which reveals the configuration. If that configuration exactly matches the hash of the data, the signature is valid. Otherwise it is not.
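The pseudocode above translates almost line-for-line into runnable Python, using SHA-256 as both the data-hash and the key-hash (function names are mine):

```python
import hashlib
import secrets

def keyhash(b):
    return hashlib.sha256(b).digest()

def datahash(b):
    return hashlib.sha256(b).digest()

BITLEN = 256  # bit length of the data-hash

def generate_keys():
    private = [secrets.token_bytes(32) for _ in range(2 * BITLEN)]
    public = [keyhash(p) for p in private]
    return private, public

def hash_bits(data):
    h = datahash(data)
    return [(h[i // 8] >> (7 - i % 8)) & 1 for i in range(BITLEN)]

def lamport_sign(private, data):
    # For bit i of the hash, reveal bucket 2*i (bit 0) or 2*i+1 (bit 1).
    return [private[2 * i + b] for i, b in enumerate(hash_bits(data))]

def lamport_verify(public, data, signature):
    return all(keyhash(signature[i]) == public[2 * i + b]
               for i, b in enumerate(hash_bits(data)))

private, public = generate_keys()
sig = lamport_sign(private, b"release v2.0")

assert lamport_verify(public, b"release v2.0", sig)
# Flipping even one bit of the data changes the hash, so the revealed
# buckets no longer match the configuration the verifier recomputes.
assert not lamport_verify(public, b"release v2.1", sig)
```

With SHA-256 the public key weighs in at 2 x 256 x 32 bytes = 16 KiB and each signature at 8 KiB - the size cost discussed below.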

If a hacker modifies any bit of the data, the hash of the data changes and the configuration no longer matches the published signature. To forge a matching signature, she also needs to know the random data in the unpublished buckets corresponding to the bits that differ in the hash. To get it, she must either invert the key-hash or brute-force the corresponding buckets of random bits. Both are cryptographically hard problems, so the scheme is secure.

In cryptography, asymmetric algorithms are based on functions that are easy to do forwards, and hard to do backwards. For example RSA is fundamentally based on the observation that factoring numbers is much harder than multiplying numbers. Lamport signatures are based on the observation that cryptographic hashes are easy to perform, but hard to invert.

So why does nobody use Lamport signatures in real life? They have big disadvantages compared with normal signatures: most obviously, their security collapses if a key is used more than once, and their public keys and signatures are large. But because their security rests entirely on hash functions, they have a property most deployed signature schemes lack: they are believed to resist quantum computers.

Unfortunately, quantum-resistant asymmetric algorithms are rare. Bluntly, cryptographers have really screwed up here: the lack of standardized, peer-reviewed, easy-to-use quantum-resistant asymmetric algorithms is putting the entire world's security in jeopardy - but I digress. Here we have a nice example of an asymmetric algorithm that isn't based on the hardness of integer factorization or discrete logs, and which also isn't based on bleeding-edge untested research. By relying entirely on the well-established security of cryptographic hashes, we can quickly make the ledger resistant to quantum computers - if you think its tamper-resistance is important enough to suck up the extra size of the public keys and signatures compared to ECDSA.

Of course, you can just use the scheme provided above with RSA or ECDSA instead. Either way, it'll give you better tamper-proof characteristics than most cryptographic products on the market, including private blockchains, ACL-based log schemes, and even Certificate Transparency.

Thursday, 28 April 2016

Cryptographic weaknesses in the Certificate Transparency append-only ledger

Q: What do we want? 
A: Tamper-proof, private-write, public-read, public-verify, offline-verify, online-truncation-resistant, zero-trust, quantum-computer-resistant append-only public ledgers.
Q: When do we...  wait what?

Recently, I've had a couple of occasions where I've needed a tamper-proof append-only log cryptographic primitive - initially as a way to implement binary transparency for automatic updates, and secondly as a way of preventing system event logs from being undetectably edited by malware shortly after system compromise, without having to wait for event logs to be taken off the machine first.

It's not obvious, but those two cases care about coping with compromise of the logger in important but subtly different ways.

For event logs I mainly care about preserving the entries that occurred immediately prior to a malware compromise. For binary logs backing automatic updates, it matters more that our logger "self-heals" after a key compromise, so that someone who compromises the key doesn't have power over the system ever after.

For various reasons I also wanted the log to be zero-trust, and to avoid primitives that quantum computers are eventually going to break. It's simple enough to build logs that allow trusted third parties to detect whether the log has been tampered with (this is in fact how Schneier-Kelsey logs work), but these schemes tend to have nasty edge cases. Firstly, they tend to allow the verifiers themselves to undetectably edit the logs, fundamentally eroding the evidential value of the ledger. Secondly, they're not publicly verifiable. A scheme where anyone can freely validate for themselves that the ledger hasn't been tampered with is surely better.

Certificate Transparency, and append-only logs

When we talk about Certificate Transparency (CT), really we're talking about two things working in tandem. Firstly, there's a cryptographic append-only log built around armoured Merkle trees. Secondly, there's the decision to put certificate thumbprints in that log, along with efficient ways to prove that a given SSL certificate is actually in the log. If you care about the details, there's a good overview of how it works here, and the RFC is here.

If we decouple the front bit of CT to just get the tamper-proof append-only log, could we repurpose it for our design?

Not quite. It turns out there's a couple of cryptographic weaknesses in CT's log design that aren't a big problem for Certificate Transparency, but which make the log design problematic when used for logging different things (including for binary transparency).

What is Future and Forwards Integrity?

Forward and future integrity are to log integrity what forward and future secrecy are to message secrecy. If you know what those are, you can probably skip this section. Essentially both deal with how well the design copes immediately prior to and subsequent to a malware attack on the end-point.

A picture's worth a thousand words, so let's start with a picture.

[Diagram: a timeline of log events partitioned into the ranges 0..x, x..i, i..j and j..n described below]
Events 0 .. x are externally validated by an independent third-party verifier.
Events x .. i have not been externally validated, but were written by the logger before the attacker took control.
Events i .. j were written when the attacker had control over the logger.
Events j .. n were written after the attacker ceased having control over the logger.

The purple area (0..x) is by definition immutable. Even if our log didn't bother with cryptography at all, the third-party verifiers could detect changes to these events merely by noticing that the plaintext changed.

The red area (i..j) is by definition mutable to the attacker. Any secrets that the logger knows during this period, the attacker also knows. Anything the logger elects to do, the attacker can choose to do, or not do, during this time. And any events that the logger intends to write, the attacker can either erase or modify prior to cryptographic insertion into the log.

The green area (x..i) represents future integrity. These are the log entries that our third-party verifiers haven't seen yet, but which were written before the attacker gained control of the logger. If the attacker can modify these entries without a verifier later noticing the alteration, the scheme does not have future integrity.

The blue area (j..n) represents forward integrity. Informally, forward integrity is the ability of a scheme to self-heal after a malware attack ends, so that the attacker does not retain power over the system after losing access. Importantly, for the purposes of forward integrity we must not assume that the malware was ever discovered or noticed by the logger.

Because the attacker Aj knows everything the logger knows at time Tj, she can obviously fork the log in a way that leaves the verifier Vj unable to distinguish which branch of the fork is valid. For this reason forwards integrity requires some trivial distinguishing factor to exist in at least one event after Aj, so we can at least detect that we are being fed a malicious fork of the ledger. Most of the time this is a pedantic observation: if the ledger is published on the company website, that's sufficient. If the ledger is published via, say, a Tor hidden service, or via some distributed peer-to-peer network, it's more problematic.

How does Certificate Transparency work?

Merkle trees are awesome data structures. You can read more about them here. For the purposes of Certificate Transparency, they have two important features:

  • Any element added into the tree creates a new tree with a new root node, and the old tree is a subset of the new one.
  • Because the structure is a tree, any element can be uniquely found via an O(lgN) search.

These two properties make it very efficient to prove a tree update hasn't tampered with any previous element (an O(lgN) lookup will efficiently show that the new tree is a superset of the old one), and we can efficiently prove that the tree contains a specific element via an O(lgN) path as well.
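A minimal Python sketch of those inclusion proofs (this uses simple duplicate-last-node padding rather than CT's exact RFC 6962 tree shape; the names are mine):

```python
import hashlib

def H(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    """Compute the root of a Merkle tree over the hashed leaves."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:           # duplicate last node if the level is odd
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def audit_path(leaves, index):
    """Collect the O(lgN) sibling hashes proving leaves[index] is in the tree."""
    level = [H(leaf) for leaf in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1          # the node paired with ours at this level
        path.append((level[sibling], index % 2))
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_path(leaf, path, root):
    node = H(leaf)
    for sibling, node_is_right in path:
        node = H(sibling + node) if node_is_right else H(node + sibling)
    return node == root

certs = [b"cert-A", b"cert-B", b"cert-C", b"cert-D", b"cert-E"]
root = merkle_root(certs)
proof = audit_path(certs, 2)
assert verify_path(b"cert-C", proof, root)      # O(lgN) inclusion proof
assert not verify_path(b"cert-X", proof, root)  # a different cert fails
```

The audit path is logarithmic in the number of leaves, which is why a verifier can check inclusion (and, in CT proper, tree consistency) without downloading the whole log.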

Those provide quite nice append-only properties, but how does Certificate Transparency ensure that the log is a private-write log? (i.e. that only the CA has added new certificates to it?) It does so by digitally signing the root of the tree with the CA private key.

And herein lies the problem.

If a hacker chops down a Merkle tree in a forest, and no verifiers were there to verify it, does it make a sound?

Suppose M{N} is the root hash of the Merkle tree with N elements, verified by an independent verifier V{N}, and which is signed and put on the CA's website as CA_sign(M{N}). Now suppose three new certificates A, B and C are added to the set. This creates M{N+ABC}, which is signed and put on the website as CA_sign(M{N+ABC}). Suppose a hacker doesn't like the inclusion of certificate B in the new set. She hacks the CA, steals the private key, creates a new Merkle tree M{N+AC}, and publishes a new signature on the website as CA_sign(M{N+AC}).

In short, CT doesn't prevent a hacker from retracting an authentic update to the Merkle tree and adding whatever new elements she sees fit. If nobody saw the authentic update, nobody will notice. This is CT failing the future integrity property.

Similarly, suppose our hacker wants to alter an authentic update to the set; let's say, Q -> Q'. She previously hacked the logger and has since retreated, but she is still able to alter the storage medium the CT log files are written to (maybe she's lost access to the key server, but still controls the website frontends). Can she edit Q into Q'?

Yep. The authentic logger sends her the Merkle tree M{N+Q}. She deletes it, and instead creates M{N+Q'}. Next she publishes CA_sign(M{N+Q'}). When V{N} sees the update he'll assume Q', rather than Q, is the legitimate addition.

This is CT failing the forward integrity property. Our hacker has tampered with an entry from significantly after the hack ended, and third-party verifiers are powerless to detect the alteration, even though they've been keeping up with the events published since the compromise stopped.

Do we care if CT logs are malleable?

Maybe. It depends what you're putting in the logs.

If you're using them coupled with the rest of the Certificate Transparency protocols, then no, probably not. Hackers who delete entries from the CT log only end up performing a denial-of-service against websites that use those certificates (a particularly weak result for someone who's stolen the CA private key), and if a hacker adds certificate thumbprints to the log that's pretty weak too. After all, the certificate still needs to be CA-signed for the browser to see it as valid, and if our hacker is adding real CA-signed certificates into our CT log, that's actually a good thing, not a bad thing.

It does, however, matter if you start using it for other things. Like event logs, or update transparency. In the event log case, if malware compromises the root key as part of the full system compromise, it can unwind the most recent logs and erase or alter them. In the update transparency case, if malware compromises the root key, it can add new evil updates into the ledger on a targeted per-computer basis to trick that machine into thinking the evil update is valid - even long after the malware author has lost access to the logger itself.

Wednesday, 27 April 2016

Getting Started...

OK. Let's get this blog started.

I still don't know exactly what content I'll be posting here or when, but it'll mainly be technical posts about (mainly memory-corruption based) exploits, ways to mitigate them, applied cryptography and reverse-engineering.

There's a lot I want to say on the topic of tackling use-after-free vulnerabilities as a category by altering the system heap, or weirdnesses of the LetsEncrypt ACME protocol that allow attackers to man-in-the-middle certificate generation if you're generating certificates manually; but let's start out gently with a bit of applied cryptography, and make a tamper-proof append-only log.

As ever, if you want to get in touch, hello@capitalalphasecurity.com is the way to go about it.