Cryptography @ Infosec.pub

Questions, answers, discussions, and literature on the theory and practice of cryptography

submitted 2 months ago* (last edited 2 months ago) by Natanael to c/crypto

The UK demanded the ability to decrypt any Apple user's iCloud data worldwide on request. Instead, Apple pulled iCloud end-to-end encryption (the ADP program) from the UK.

Their idea seems to be to ensure that encrypted data outside the UK stays outside UK jurisdiction, because the affected feature is no longer available there. But this prevents UK residents from using iCloud end-to-end encryption in ADP to keep, for example, backups of photos and iMessage logs protected - so journalists, for instance, are far more exposed to secret warrants and potential insider threats.

submitted 3 months ago* (last edited 3 months ago) by Natanael to c/crypto

Here's a copy of my own comment from the Reddit thread:

Randomness is a property of a source, not of a number. Numbers are not random. Randomness is a distribution over possibilities and a chance-based selection of one option from those possibilities.

What we use in cryptography to describe numbers coming from an RNG is entropy, expressed in bits: roughly the base-2 logarithm of the number of equally likely unique possible values, a measure of how difficult the output is to predict.
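
As a quick illustration of entropy as a bit count (my own example, using a uniformly chosen 8-letter password):

```python
import math

# Entropy in bits = log2(number of equally likely possibilities).
# A password of 8 characters drawn uniformly from the 26 lowercase
# letters has 26**8 equally likely values.
possibilities = 26 ** 8
entropy_bits = math.log2(possibilities)
print(f"{entropy_bits:.1f} bits of entropy")  # ~37.6 bits
```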

It's also extremely important to keep in mind that RNG algorithms are deterministic: their behavior will repeat exactly given the same seed value. Given this, you cannot increase entropy with any kind of RNG algorithm. The entropy is defined exactly by the inputs to the algorithm.
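
A minimal sketch of that determinism (the standard-library PRNG here is not a cryptographic one, but a CSPRNG is just as deterministic given its seed):

```python
import random

# Two generators seeded identically produce identical "random" output,
# so the output carries no more entropy than the seed that went in.
a = random.Random(42)
b = random.Random(42)
print([a.randrange(100) for _ in range(5)])
print([b.randrange(100) for _ in range(5)])  # the exact same list
```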

Given this, the entropy of random numbers generated using a password as a seed value is equivalent to the entropy of the password itself, and the entropy of an encrypted message is the entropy of the key plus the entropy of the message. Encrypting a gigabyte of zeroes with a key has a total entropy of the key, plus the message ("all zeroes", ~1 bit), plus its length (~33 bits to encode), which is far less than the gigabyte's worth of bits it produces: instead of 8 billion bits of entropy, it's 128 + ~1 + 33 bits of entropy.
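
The worked version of that arithmetic (same numbers as above, assuming a 128-bit key):

```python
import math

key_bits = 128                           # e.g. one AES-128 key
message_bits = 1                         # "it's all zeroes" is ~1 bit
length_bits = math.log2(8_000_000_000)   # ~33 bits to encode the length
total = key_bits + message_bits + length_bits
print(f"~{total:.0f} bits, not 8,000,000,000")  # ~162 bits
```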

Then we get to Kolmogorov complexity and computational complexity, in other words the shortest way to describe a number. This is also related to compression. The vast majority of numbers have high complexity: they cannot be described in full by a shorter number, so they cannot be compressed. Because of this, a typical statistical test for randomness will say they pass with a certain probability (given that the tests themselves can be encoded as shorter numbers), because even the highest-complexity test has too low complexity to have a high chance of describing the tested number.
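
The compression link is easy to see directly; a sketch comparing a low-complexity input against output from a high-entropy source:

```python
import os
import zlib

# All-zero data has a tiny description, so it compresses enormously;
# high-entropy data has no shorter description for zlib to find.
zeroes = bytes(1_000_000)
random_bytes = os.urandom(1_000_000)
print(len(zlib.compress(zeroes)))        # on the order of 1 KB
print(len(zlib.compress(random_bytes)))  # ~1 MB, slightly larger than the input
```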

(sidenote 1: The security of encryption depends on mixing the key into the message thoroughly enough that you can't derive the message without knowing the key - the complexity is high - and on the key being too big to bruteforce)
(sidenote 2: the Kolmogorov complexity of a securely encrypted message is roughly the entropy plus the algorithm's complexity, but for a weak algorithm it's less, because leaked patterns let you circumvent bruteforcing the key's entropy - also, we generally discount the algorithm itself, as it's expected to be known. Computational complexity is essentially defined by the expected runtime of attacks.)

And test suites are bounded. They all have an expected running time, and can fit maybe 20-30 bits of complexity, because that's how much compute you can put into a standardized test suite. This means any number with a pattern that requires more bits to describe will pass with high probability.

... And this is why standard tests are easy to fool!

All you have to do is create an algorithm with one more bit of complexity than the limit of the test, and now your statistical tests will pass: while algorithms with 15 bits of complexity will generally fail, another bad algorithm with ~35 bits of complexity (above a hypothetical test threshold of 30) will frequently pass despite being insecure.
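
A toy demonstration of that gap (my own sketch, not from the thread): a keystream whose entire entropy is a 16-bit seed passes a simple monobit test with flying colors, yet an attacker who knows the algorithm brute-forces it instantly.

```python
import hashlib

def keystream(seed: int, nbytes: int) -> bytes:
    # "RNG" whose entire entropy is one 16-bit seed: SHA-256 in counter mode.
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed.to_bytes(2, "big") + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]

def monobit_ok(data: bytes) -> bool:
    # Toy bounded statistical test: fraction of 1-bits must be close to 1/2.
    ones = sum(bin(byte).count("1") for byte in data)
    return abs(ones / (len(data) * 8) - 0.5) < 0.01

stream = keystream(seed=12345, nbytes=100_000)
print(monobit_ok(stream))  # True: statistically it looks perfectly fine

# ...yet only 2**16 seeds exist, so knowing the algorithm means the
# seed falls to brute force almost instantly:
target = stream[:32]
print(next(s for s in range(2**16) if keystream(s, 32) == target))  # 12345
```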

So if your encryption algorithm doesn't reach beyond the minimum cryptographic thresholds (roughly 100 bits of computational complexity, roughly equivalent to the same number of bits of Kolmogorov complexity*), and instead only hits maybe 35 bits, then your encrypted messages aren't complex enough to resist dedicated cryptanalysis - especially not if the adversary already knows the algorithm - even though they pass all standard tests.

What's worse, the attack might even be incredibly efficient once known (nothing says the 35-bit-complexity attack has to be slow; it might simply be a 35-bit derived constant folding the rest of the algorithm down to nothing)!

* Kolmogorov complexity doesn't account for the different costs of memory usage versus processing power, nor for memory latency, so in practice memory is often the more expensive resource

submitted 3 months ago by Natanael to c/crypto

Cryptologue Arcade - Crypto Darkpaper - Crypto Color Prints - Simplementation Scheme: A Color-Coded Method For Indexing Research and Documentation

https://doi.org/10.5281/zenodo.13149715

Crypto Color Prints are a memorable and data-dense documentation paradigm with a color-coded, structural scheme. The scheme forms a simple framework for publishing and improving primitives and protocols with a focus on both cooperation and implementation.

@cryptography@lemmy.ml @crypto@infosec.pub

#Cryptography #Documentation #Schemes #Information #Papers #Preprints #Zenodo #octade


Hexlish Alphabet for English, Constructed Languages and Cryptography: Automatic, Structural Compression with a Phonetic Hexadecimal Alphabet

DOI: https://doi.org/10.5281/zenodo.13139469

Hexlish is a legible, sixteen-letter alphabet for writing the English language and for encoding text as legible base 16 or compressed binary. Texts composed using the alphabet are automatically compressed by exactly fifty percent when converted from Hexlish characters into binary characters. Although technically lossy, this syntactic compression enables recovery of the correct English letters via syntactic reconstruction. The implementer can predict the size of the compressed binary file and the size of the text that will result from decompression. Generally it is intuitive to recognize English alphabet analogues to Hexlish words. This makes Hexlish a legible alternative to the standard hexadecimal alphabet.
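
A minimal sketch of the packing arithmetic (the sixteen letters below are a placeholder assumption; the published alphabet is in the paper at the DOI above): each letter maps to a 4-bit nibble, so two letters fit in one 8-bit byte, which is exactly the fifty percent compression described.

```python
# Placeholder 16-letter alphabet standing in for the real Hexlish set --
# each letter encodes one hex digit (4 bits), two letters per byte.
LETTERS = "abdefgiklmnoprst"  # assumption for illustration only
TO_NIBBLE = {ch: i for i, ch in enumerate(LETTERS)}

def pack(text: str) -> bytes:
    nibbles = [TO_NIBBLE[ch] for ch in text]
    if len(nibbles) % 2:
        nibbles.append(0)  # pad a trailing half-byte
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

def unpack(data: bytes) -> str:
    return "".join(LETTERS[b >> 4] + LETTERS[b & 0x0F] for b in data)

packed = pack("beads")
print(len(packed), unpack(packed))  # 3 bytes for 5 letters (plus one pad letter)
```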

@cryptography@lemmy.ml @crypto@infosec.pub

#Hexlish #Conlang #Alphabets #Encoding #Cryptography #Ciphers #Crypto


Still to be checked whether the profiles mentioned actually focus on what their names suggest …

@dutypo I once wanted to become a typographer, in an earlier life, back when that still existed as an apprenticeship and a field of study - as a transition for a few years, between offset printing and @hedgedoc and @cryptpad@fosstodon.org @cryptpad@peertube.xwiki.com @cryptpad_design. That it never came to be has, by the way, done nothing to diminish my love of print (@Gedrucktem), audiobooks (#Hörbüchern), film adaptations of literature (#Literaturverfilmungen), language (#Sprache), typographically well-made electronic publications, #DTP, @PDF, @openscience, @opendatabund, enlightenment (#Aufklärung), @crypto and so on. Free #HedgeDoc and #Cryptpad instances, among other things, can be found here: https://timo-osterkamp.eu/random-redirect.html

submitted 2 years ago* (last edited 2 years ago) by iso@lemy.lol to c/crypto

I need to

  • encrypt the JSON payload (not just sign it),
  • not share the private key, and
  • verify the payload was generated with the shared public key,

with RSA fitting all of these.

As I've only done auth with JWT so far, I'm not sure. If I use RSA, I guess I have to put the encrypted text in the body.

Do you think it can be used? Any other suggestions?
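
A minimal sketch of one common pattern that fits those three constraints (hybrid encryption, assuming Python's cryptography package; JWE with RSA-OAEP plus an AES content key is the standardized JSON equivalent). RSA-OAEP alone can only encrypt a payload smaller than the key, so the JSON is encrypted with a fresh AES-GCM key and only that key is wrapped with RSA:

```python
import json
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Recipient's keypair; only the public half is ever shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def encrypt_json(payload: dict, pub) -> dict:
    aes_key = AESGCM.generate_key(bit_length=256)   # fresh key per message
    nonce = os.urandom(12)
    ct = AESGCM(aes_key).encrypt(nonce, json.dumps(payload).encode(), None)
    wrapped = pub.encrypt(aes_key, OAEP)            # RSA only wraps the small AES key
    return {"key": wrapped.hex(), "nonce": nonce.hex(), "ct": ct.hex()}

def decrypt_json(msg: dict, priv) -> dict:
    aes_key = priv.decrypt(bytes.fromhex(msg["key"]), OAEP)
    pt = AESGCM(aes_key).decrypt(bytes.fromhex(msg["nonce"]), bytes.fromhex(msg["ct"]), None)
    return json.loads(pt)

body = encrypt_json({"user": "alice", "scope": "admin"}, public_key)  # goes in the request body
print(decrypt_json(body, private_key))
```

One caveat: anyone holding the public key can produce such a message, so if "verify" means proving who generated the payload, the sender additionally needs to sign with a key of their own.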


I remember Pond used to have them, but Pond is niche and dead. Where else are bilinear pairings used? I don't care about crapto deployments though...


TIL the French government may have broken encryption on a LUKS-encrypted laptop with a "greater than 20 character" password in April 2023.

When upgrading TAILS today, I saw their announcement changing LUKS from PBKDF2 to Argon2id.

The release announcement above has some interesting back-of-the-envelope calculations for the wall-time required to crack a master key from a LUKS keyslot with PBKDF2 vs Argon2id.
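
For a feel of what such a calculation looks like, here's a sketch with made-up rates (placeholder assumptions, not the announcement's figures): wall time is just keyspace divided by guess rate, and a memory-hard KDF like Argon2id exists to slash that rate.

```python
import math

# Placeholder guess rates -- assumptions for illustration, not the
# announcement's measurements. Argon2id's memory-hardness is what
# drags the attacker's rate down by orders of magnitude.
GUESSES_PER_SEC = {"PBKDF2": 1e6, "Argon2id": 1e3}

charset = 26      # assume lowercase letters only
length = 10       # assume a 10-character passphrase
keyspace = charset ** length
print(f"{math.log2(keyspace):.0f} bits of passphrase entropy")  # ~47 bits

SECONDS_PER_YEAR = 31_557_600
for kdf, rate in GUESSES_PER_SEC.items():
    years = keyspace / rate / 2 / SECONDS_PER_YEAR  # halve for the average case
    print(f"{kdf}: ~{years:,.1f} years on average")
```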

And they also link to Matthew Garrett's article, which describes how to manually upgrade your (non-TAILS) LUKS header to Argon2id.
