
AFAIK one description of RC4 always expands the key to a 256-byte array by repeating the key. It would be interesting to run keystream bias tests against this full array.
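A minimal sketch of that expansion (standard RC4 key-scheduling algorithm from public descriptions, not from this thread): the key bytes are reused cyclically via `key[i % len(key)]`, which is the "repeating the key" step.

```python
# RC4 key-scheduling algorithm (KSA) sketch: a short key is effectively
# repeated across all 256 positions through the index key[i % len(key)].
def rc4_ksa(key: bytes) -> list:
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256  # key bytes repeat here
        S[i], S[j] = S[j], S[i]
    return S
```

The resulting `S` is the 256-entry state permutation; bias tests would then examine keystream generated from it.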



It seems to me that the LC4 key is essentially equivalent to RC4 state, and thus RC4's early keystream bias does not apply, as there is no key-expansion phase.

Edit to clarify: the LC4 key has to be a permutation of 36 elements, while RC4's state is a permutation of 256 elements that is somehow constructed from the byte-string key, and the issue is in how this string->state transformation works (i.e. you have to pump the function more than 500 times to get unbiased output).
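For reference, a sketch of RC4's output phase with the commonly suggested mitigation of discarding the early, biased keystream bytes (the drop count of 768 here is an illustrative assumption, not a figure from the thread):

```python
# RC4 PRGA sketch with "drop-N": skip the first `drop` keystream bytes,
# since the early output is biased by the key schedule.
def rc4_keystream(S, n, drop=768):
    S = list(S)        # copy of the 256-entry state permutation
    i = j = 0
    out = bytearray()
    for step in range(drop + n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        if step >= drop:                     # keep only post-drop output
            out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)
```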


RC4 accepted keys of up to 2048 bits. (It is symmetric; it's just not a block cipher.)

The keys they used were only 128 bits, whereas RC4 actually supports up to 2048 bits. I wonder how much that affects their results. (AFAIK the 128 bits is an export restriction thing, upgraded from the previous trivially-breakable 40 bits.)

Also, 16 characters seems awfully short for a cookie, especially one meant for authentication purposes.


> ... like 16 bytes, to generate an infinite amount of output, such that knowing any part of the output doesn't help you guess at any other part of the output nor the input key.

Isn't that the theory behind every stream cipher? (And stream ciphers are generally just 'simplified' one-time pads.)
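As a toy illustration of the stream-cipher structure described above (keystream XOR plaintext): the seeded PRNG here is purely a stand-in for a real cipher's keystream generator.

```python
import random

# Toy "stream cipher": XOR the data with a pseudo-random keystream expanded
# from a short seed. random.Random is NOT cryptographically secure; this
# only shows the structure, not a real construction.
def xor_stream(data: bytes, seed: int) -> bytes:
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)
```

Since XOR is its own inverse, applying the same keystream a second time decrypts.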

That's what OpenBSD's arc4random(3) started as: the output of RC4.


Thanks for the paper. It only mentions testing 1024–2048-bit keys. Does this also impact anything using >2048-bit keys?

Yep, namely <54 bytes. Just for ease of comparison.

~54 bytes vs 128 bytes.

Considering it's a logarithmic measure (every extra bit doubles the difficulty of cracking) and 128 bytes is rather tight... that gives an idea of the weakness of this key.
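The "logarithmic measure" point in numbers: a k-bit key has 2^k possible values, so each additional bit doubles the brute-force work.

```python
# Each extra key bit doubles the search space: a k-bit key has 2**k values.
def keyspace(bits: int) -> int:
    return 2 ** bits
```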


I don't disagree, and after some consideration and testing, I will use the RFC recommendation of 3 iterations, 4 lanes, and 64 MiB of RAM.

However, it's also important to take note of how the KDF is used, and what is used by comparable solutions in equivalent settings.

For instance, the official extension, sold by the SQLite authors, uses RC4:

“For the "-textkey" option, up to 256 bytes of the passphrase are hashed using RC4 and the hash value becomes the encryption key. Note that in this context the RC4 algorithm is being used as a hash function, not as a cryptographic function, so the fact that RC4 is a cryptographically weak algorithm is irrelevant.”

The textkey= parameter is not really appropriate for interactive use; rather, it is a convenient way for a user to specify a key in text form, as a URI parameter, without weakening it (e.g. by making NULs impossible and reserved characters inconvenient to encode).

If you're taking a password from a user interactively, you should really be using your own password hashing, then using hexkey=. I'll try to make this clearer in the documentation.
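A sketch of that advice (hypothetical code; only the `hexkey=` parameter comes from the documentation above, and PBKDF2 here merely stands in for whichever password hash you actually choose, with an illustrative iteration count):

```python
import hashlib

# Hash the user's password yourself, then hand SQLite the result as hex
# via hexkey= instead of passing the raw password through textkey=.
def derive_hexkey(password: str, salt: bytes, iterations: int = 600_000) -> str:
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                              salt, iterations, dklen=32)
    return key.hex()

# salt would be generated once (e.g. os.urandom(16)) and stored alongside
# the database; the derived hex string then goes into the connection URI:
# "file:data.db?hexkey=" + derive_hexkey(password, salt)
```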


> Yes, although even here, small is not that very small:

> > The main focus is about short keys of random lengths, with a distribution of length roughly in the 20-30 bytes area, featuring occasional outliers, both tiny and large.

Based on the graphs, XXH3 still seems to beat everything at key sizes > 6 (presumably bytes?)


I don't think SSL/TLS allow key lengths > 128 bits with RC4. Export is 40 or 56 bits. You can see most supported ciphers here: https://www.openssl.org/docs/apps/ciphers.html

e.g.:

    TLS_RSA_WITH_RC4_128_MD5                RC4-MD5
    TLS_RSA_WITH_RC4_128_SHA                RC4-SHA
    TLS_ECDH_RSA_WITH_RC4_128_SHA           ECDH-RSA-RC4-SHA
    TLS_ECDH_ECDSA_WITH_RC4_128_SHA         ECDH-ECDSA-RC4-SHA

You're confusing symmetric key sizes with asymmetric key sizes. 1024 bits is a huge symmetric key, but a rather small asymmetric key.

Because there is a viable attack on the whole keystream, not just the first 256 bytes.

For this specific case, checking the key length would be the first step. In the study, AFAIK only 1024-bit keys were found to be broken.

34 bytes is equivalent to bruteforcing a 272-bit key. It's already physically impossible to do that for a 256-bit key even if you ignore everything other than incrementing the key counter itself:

https://pthree.org/2016/06/19/the-physics-of-brute-force/
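The arithmetic behind that equivalence:

```python
# 34 random bytes carry 34 * 8 = 272 bits of entropy. Relative to a 256-bit
# key (already treated as physically unbrute-forceable), that is 2**16
# times more work.
bits = 34 * 8
extra_factor = 2 ** (bits - 256)
```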


You don't need a 4 kb key. 128 bits is more than enough for AES, and there's no way you are going to brute-force a random 128-bit key.

I'm obviously talking about 128, since I can't see 32 bytes happening with AES256 CBC.

No, the upper byte of the key should have 256 possible values; it now has one (all zeroes, essentially). The keyspace is reduced to 1/256 of the original, a reduction of 255/256.
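In numbers (assuming a 16-byte key purely for illustration):

```python
# Fixing one byte of the key to a single known value leaves 1/256 of the
# original keyspace: 255/256 of the keys are eliminated.
def keyspace(n_bytes: int) -> int:
    return 256 ** n_bytes

full = keyspace(16)       # all 16 bytes free
reduced = keyspace(15)    # upper byte effectively constant
```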

"Truly unlimited number of keys"

What data type would you use to store such a key? My guess is that it wouldn't end up much bigger than 2048 bits in an actual implementation. Besides, key length is a terrible metric for measuring security.


Uhm, no. If you read that post you'd see that I was mainly interested in the per-key overhead. I'd seen it spoken of as being in the 100 bytes/key range but never saw confirmation of that. So I did some back of the envelope calculations and then decided to try it for real.

The signatures may be small (344 bits) but the keys are huge (tens to hundreds of kilobytes).
