LC4: A Low-Tech Authenticated Cipher for Human-To-Human Communication (eprint.iacr.org)
149 points by arthur2e5 | 2018-03-14 17:02:27+00:00 | 35 comments




This reminds me of the Pontifex cipher that featured heavily in Neal Stephenson's Cryptonomicon; it was developed for the book by Bruce Schneier (who calls it Solitaire). [0]

[0] https://www.schneier.com/academic/solitaire/


I came to post this same comment, then found that the paper itself cites Solitaire!

The only downside I see for this implementation is that a stack of "weird" tiles would be much more discoverable for any TLA than a deck of cards.


I don't see any obvious reason why you couldn't use a deck of cards with ElsieFour:

1) Shuffle the deck.
2) Split it into two parts, the first part having 36 cards.
3) Assign the alphabet to the 36-card part (e.g. by sorting the cards and assigning characters sequentially).
4) Shuffle the parts separately.
5) Combine the parts in an easily reversible way (e.g. simply appending).

When doing the crypto, just use the 36-card part. I suppose the remaining cards could relatively easily be repurposed as an RNG for nonce generation.
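The key-setup steps above are mechanical enough to sketch in code. This is my own illustration, not anything from the paper; the card representation and the exact ordering of LC4's 36-character alphabet are assumptions:

```python
import random

# LC4's 36-character alphabet (letters, digits 2-9, '#' and '_')
ALPHABET = "#_23456789abcdefghijklmnopqrstuvwxyz"

def card_key_setup():
    """Derive an LC4 key (a permutation of the 36-character alphabet)
    from a shuffled deck of cards, following the five steps above."""
    deck = [rank + suit for rank in "A23456789TJQK" for suit in "SHDC"]
    random.shuffle(deck)                     # 1) shuffle the deck
    part36, rest = deck[:36], deck[36:]      # 2) split, first part = 36 cards
    # 3) assign the alphabet: sort the 36 cards, map characters sequentially
    char_of = {card: ALPHABET[i] for i, card in enumerate(sorted(part36))}
    random.shuffle(part36)                   # 4) shuffle the parts separately
    random.shuffle(rest)
    key = "".join(char_of[c] for c in part36)  # read the key off the cards
    return key, part36 + rest                # 5) recombine by appending
```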


It'd be simpler to just ignore the tens and face cards.

> I suppose the remaining cards could relatively easily be repurposed as RNG for nonce generation

I spent the better part of an hour trying to google this, to no avail. How do you convert a random permutation of n items into a random number < n!? Intuitively it feels like it should be possible, if not outright easy, but I can't figure it out.

edit: yay, finally found a lead: "rank of a permutation" and "Lehmer code". So I guess it is doable; I'll have to check how they actually work in practice.


[ed: no, that's not right. With two cards, you can have red-black or black-red. I guess that'd translate to N bits for 2n cards, but it's not quite trivial...

For four, we have: rrbb, rbrb, rbbr, bbrr, brbr, brrb etc]

I think I'm misunderstanding you - but say you have 2n cards, n red, n black. If you shuffle them, you have 2n random bits, read from first to last? (this is assuming it's feasible to actually randomly shuffle cards).


No, in this application the cards are all distinct, not merely half red and half black. There are n! permutations of a deck of n cards, and a uniformly chosen random permutation contains entropy equal to the base-2 logarithm of n!.

Both choosing a random permutation given a supply of random bits and, as noted in the parent post, extracting random bits from a given random permutation are nontrivial computational problems leading to interesting algorithms; shuffling cards or tiles is a good practical shortcut to obtain a random permutation more directly.
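To put numbers on that, a quick back-of-the-envelope calculation (my own, not from the thread) of how much entropy a shuffle yields:

```python
import math

def shuffle_entropy_bits(n):
    """Bits of entropy in a uniform random permutation
    of n distinct items: log2(n!)."""
    return math.log2(math.factorial(n))

# A full 52-card deck gives about 225.6 bits;
# LC4's 36 tiles give about 138.1.
```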


Indeed. I was attempting to induct from basic principles, as in two distinct cards. But GP's right: it's not trivial. Unsure if it's simple :)

You'll want to basically perform the reverse of a Fisher-Yates shuffle. Instead of generating a random number, you look at which card needs to be swapped with the Kth card in the deck to put positions K through N of the deck in sorted order.

If deck is a list holding a permutation of the numbers 0 to N - 1, it looks something like:

  # Reverse of a Fisher-Yates shuffle: recover the index chosen at each
  # step and accumulate the digits in factorial base (Horner style)
  random_value = 0
  for i in range(len(deck)-1, 0, -1):
    index = deck.index(i)    # where value i currently sits (always <= i)
    random_value += index
    random_value *= i        # the digit just added gets place value i!
    deck[index], deck[i] = deck[i], deck[index]  # move i into position i

The above is a one-minute prototype and may or may not work. When I get some time, I'll actually test the above code, but it's close to doing the right thing.

The above naive implementation is O(N^2). Your end result is a sorted deck, and random_value is a number that, when repeatedly mod'd and divided by the appropriate index to generate the "random" numbers during a Fisher-Yates shuffle, would reproduce your original permutation.

In practice, you'd implement the algorithm in O(N) by first doing a single linear scan to build an index (actually just held in an array) of cards to their positions. Also, you wouldn't actually perform the swap, since you never look at deck[i] after performing the swap. You just do deck[index] = deck[i]. So, at the end, you won't wind up with deck being sorted.
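A sketch of that O(N) variant, with the linear-scan position index and the half-swap; treat it as illustrative rather than battle-tested:

```python
def rank_permutation(deck):
    """Rank a permutation of 0..N-1 by reversing a Fisher-Yates
    shuffle in O(N), accumulating the recovered indices in
    factorial base."""
    n = len(deck)
    deck = list(deck)          # work on a copy
    pos = [0] * n              # single linear scan: pos[v] = position of v
    for i, v in enumerate(deck):
        pos[v] = i
    rank = 0
    for i in range(n - 1, 0, -1):
        index = pos[i]         # where value i currently sits (always <= i)
        rank += index
        rank *= i              # the digit recovered here has place value i!
        deck[index] = deck[i]  # half-swap: deck[i] is never read again
        pos[deck[i]] = index
    return rank
```

Every permutation of N items maps to a distinct number in 0..N!-1, which is what makes it usable for extracting random bits from a shuffle.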


Okay, I got a deck of cards and had a quick try. I immediately noticed a few issues:

1) You practically need two decks of cards: one that stores your key, and another that you use for the encryption process. This is needed because the matrix is mutated during encryption, meaning that you lose the original key. So before you start encrypting, you sync the two decks so that one can be messed up while the other still retains the key.

2) Nonce generation is a problem. The simplest way I have for now is to shuffle the 16 unused cards (e.g. 10, J, Q, K), draw them pairwise and use each pair to get a character (base-4 -> base-36 conversion). It is not ideal, but doable.

3) The signature is supposed to be a secret, but no method for managing it is provided. Again, the leftover cards could be used here, but if the simple card-pair -> character mapping above is used then you get only 8 characters where the paper recommends 10. It is not clear how critical the randomness here is; practically it can almost be considered part of your key.

4) A surprisingly large amount of desk space is required. Not something that really poses a problem, but you definitely need some space to lay out the 6x6 matrix. I imagine that with standard-sized cards, 7x7 (as mentioned in one of the comments) would already be challenging. If you additionally need to manage the second deck (see 1) then that might need some more space.

5) For effective use you will need lookup tables for both the card -> character -> card mapping and the card -> movement mapping. They can easily be generated on demand (basically a base conversion table between 6/9/36) and as such do not need to be permanent, but it's still a consideration.

6) The lookup tables feel like they will slow down the process. You will be doing at least 3 (or 5, depending on how you count) lookups for each character encrypted. And of course you will need to be careful with the lookups, trying to avoid mixing up down vs. right movements and adjacent rows/columns.

7) 36 characters is actually pretty limited. Sure, you can write messages with just [A-Z0-9], using e.g. 0 (or #/_ like in the article) as a word separator, etc. 7x7 would be a significant improvement here. If you don't mind extending the length of the message then I suppose you could use some sort of preprocessing to widen the character set (e.g. use 4 symbols to encode 3 chars), although that might make the already laborious process too much so.

8) I still need to time the process. I feel like initially it will be super slow, but if you can manage to keep the lookups in your head then it might actually not be that bad.
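Point 2 is concrete enough to sketch. This is one possible reading of the pair-of-cards encoding, with my own choices for the card representation; it also makes the coverage limitation visible (only 16 of the 36 characters can ever appear):

```python
import random

# LC4's 36-character alphabet (letters, digits 2-9, '#' and '_')
ALPHABET = "#_23456789abcdefghijklmnopqrstuvwxyz"

def nonce_from_leftover_cards(length=6):
    """Shuffle the 16 leftover cards (10/J/Q/K of each suit), draw
    them pairwise, and read each pair as a two-digit base-4 number
    via the card ranks.  Values 0..15 index into the alphabet, so at
    most 8 characters can be drawn and only 16 of 36 characters can
    ever occur -- 'not ideal', as noted above."""
    ranks = "TJQK"
    cards = [r + s for r in ranks for s in "SHDC"]
    random.shuffle(cards)          # stand-in for physically shuffling
    chars = []
    for i in range(0, 2 * length, 2):
        hi = ranks.index(cards[i][0])      # first card of the pair
        lo = ranks.index(cards[i + 1][0])  # second card of the pair
        chars.append(ALPHABET[4 * hi + lo])
    return "".join(chars)
```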


It'd be easy enough to store a key in a deck of cards.

(just map the alphabet to cards 1:1 and ignore some cards)


It's cute, but even at 30 seconds per character, a 160-character SMS takes over an hour to decode.

I don't think this is intended for electronic communications.

160 characters isn't an exceedingly long message. That's two lines on a default terminal.

It's kinda weird to measure a non-electronic message in units of electronic terminal widths.

It certainly is.

I'd add that it's two lines on a typewriter, but then I'd need to explain what a typewriter is.


Trying to translate it into a handwritten message length instantly runs into the problem that handwriting is all over the place. I guess another way to think about it is 160 characters is roughly 32 English words.

A lot can be encoded in 3 chars.

For ciphers like this it would be interesting to compare the speed at which a human can run them (LC4 vs. Solitaire, a.k.a. Pontifex [0]). Also, it seems like Schneier's cipher only requires you to have playing cards, while LC4 requires you to already have the tiles (described as an appliance).

[0] https://www.schneier.com/academic/solitaire/


The tiles are hardly a requirement here; as mentioned in other comments, you can use a deck of cards with LC4 too.

I'm not really digging deeply into this at all, because really, does it matter? But for what it's worth, the rationale this paper gives for being more secure than RC4 (on which it is based, and which is infamously one of the most egregiously insecure mainstream ciphers) is a little flimsy.

For many years after the adoption of RC4, it was known that there were pronounced biases early in the RC4 keystream. IIRC, these were the weaknesses that the 802.11 attacks took advantage of.

But the more recent TLS attacks are based on a better understanding of RC4 keystream biases, and some of them (the Fluhrer-McGrew biases, at least) persist throughout the entire keystream. I don't think you can simply encrypt a long nonce to avoid them.

(I'm not an expert on this and could be wrong, of course).


I'm a complete novice in this matter, but from the sounds of it, even if LC4 does not completely remove the biases but only reduces them, wouldn't it still be an improvement? I'm kind of assuming that the remaining biases would be smaller, and thus harder to exploit in an attack?

Particularly if we're talking hundreds of characters here, not megabytes to work with or gigabytes that you can bounce off of an oracle. Humans will notice if you ask them to decrypt 20 near-identical messages and report which ones give a MAC error.

I mean, again, this is not a branch of cryptography I take especially seriously, but I'd assume that if you were defining any kind of new cipher, you'd want to avoid constructions that were known to have fatal flaws embedded in them.

Either way, I brought it up because the author brings this up in the paper, but doesn't seem to fully address the literature on the attack he's trying to defend against (I may have missed something, though).


It seems to me that the LC4 key is essentially equivalent to RC4 state, and thus the RC4 early-keystream bias does not apply, as there is no key-expansion phase.

Edit to clarify: the LC4 key has to be a permutation of 36 elements, while RC4's state is a permutation of 256 elements that is constructed from the byte-string key, and the issue is in how this string -> state transformation works (i.e. you have to pump the generator more than 500 times to get unbiased output).
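For comparison, RC4's key-scheduling algorithm, the string -> state transformation in question, is short. LC4 has no analogue of it, because its key already is a state permutation. This snippet is the textbook KSA, included only to make the comparison concrete:

```python
def rc4_ksa(key):
    """RC4 key scheduling: expand a byte-string key into a
    256-element state permutation.  The early-keystream biases
    discussed above stem from this expansion mixing the key in
    only weakly."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S
```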


In other words, brute-force attacks depend on the ability to keep trying essentially indefinitely; it's a similar situation with payment card CVVs --- although they're only 3-4 digits, which makes for a tiny keyspace, any attempts to bruteforce one will be quickly detected and blocked.

Of course, but that wasn't my point. Not only are brute-force attacks foiled by humans; attacks like padding oracles will also be impossible because the user will notice.

I know this is not the appropriate venue, but can you get somebody to renew the cert for starfighter? Thank you.

In case you find the alphabet too small, there's a variant called ls47 (https://github.com/exaexa/ls47) that runs on a 7x7 tile with ~128 bits of security. It also suggests a key expansion algorithm.

The idea is interesting, but unfortunately it's still too hard to remember the rules and encrypt or decrypt by hand. One-time pad is much easier.

I don't see the benefits of these. Just because I can manually encrypt and decrypt doesn't mean I have more confidence in the cipher. If I already trust the cipher, I might as well trust a program that does the encryption and decryption.

There are two benefits, both related to keys:

1. Key generation using tiles as proposed is truly random, not susceptible to attacks on a PRNG.

2. The key is never connected to any network in any way--it's effectively airgapped. This removes whole classes of key-stealing and side channel attacks.

That said, while real randomness is still uncommon, there are electronic solutions, and the benefits of networking generally outweigh the security risk for most use cases.


RC4, which this paper is based on, was the original choice for the CipherSaber project:

https://en.wikipedia.org/wiki/CipherSaber

The core goal of which was the idea that people should have basic access to some sort of reasonably strong cryptography as a matter of personal liberty.

The theory goes that if it's an easy enough algorithm to do manually, then it's an easy enough algorithm to reimplement as needed in a situation where you have to forge your own tools (all you have is a deck of cards, pencil, and paper; or all you have is a TI-83 graphing calculator and its BASIC programming manual: can you securely send a message?).

