Any cryptographers who are sad about the post-quantum competitions coming to an end and looking for a new problem, here's one I've seen in a few places:
There's a trend towards end-to-end encryption for all datacentre interconnects (no plaintext on the wire, for any wire that leaves the CPU package). This includes things like PCIe, 100 GigE, and so on. As a result, we're rapidly approaching a world where there's over 1 Tb/s of encrypted traffic flowing in and out of every node.
At these data rates, bit flips are inevitable somewhere (especially once you scale this up to a full datacentre). This leads to a couple of problems. The first is bit flips on the wire.
The integrity tags from an AEAD mode such as AES-GCM will catch these, but if you need to retransmit that's very painful (at these speeds the bandwidth-delay product makes the retransmit buffers huge), so ideally you want to bake in some forward error correction after encryption. But now you're reducing the data rate.
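To make that ordering concrete, here's a toy software sketch (nothing like the line-rate hardware being discussed, and the helper names are made up): AES-GCM provides the AEAD, and a deliberately dumb triple-repetition code stands in for a real FEC (Reed-Solomon, LDPC, ...) applied over the ciphertext and tag, so a flip on the wire is repaired at the receiver without a retransmit.

```python
# Toy sketch: encrypt-then-FEC so a bit flip on the wire can be repaired
# without retransmitting. The triple-repetition "code" is a stand-in for a
# real FEC (RS/LDPC); AES-GCM is the AEAD. Purely illustrative software.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def fec_encode(data: bytes) -> bytes:
    """Naive repetition code: send every byte three times."""
    return bytes(b for byte in data for b in (byte, byte, byte))

def fec_decode(data: bytes) -> bytes:
    """Majority vote over each group of three copies."""
    out = bytearray()
    for i in range(0, len(data), 3):
        a, b, c = data[i:i + 3]
        out.append(a if a == b or a == c else b)
    return bytes(out)

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
plaintext = b"one MTU worth of payload" * 50

# Sender: AEAD first, FEC second (redundancy covers ciphertext *and* tag).
ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
wire = bytearray(fec_encode(ciphertext))

# A single bit flip somewhere on the wire...
wire[1234] ^= 0x40

# Receiver: FEC repairs the flip, then the AEAD tag check passes as normal.
recovered = fec_decode(bytes(wire))
assert AESGCM(key).decrypt(nonce, recovered, None) == plaintext
```

The repetition code's 3x overhead is absurd, and that overhead is exactly the bandwidth trade-off being complained about here; real codes do far better, but never for free.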
Problem 1:
Can you design an integrity scheme for a symmetric cypher that also provides error correction, is easy to implement in hardware, and doesn't give an attacker an oracle? I honestly have no idea whether this is even theoretically possible.
Beyond that, the AES engines run hot. Encrypting at even 10 Gb/s consumes a fair bit of power (Problem 0: can you design a symmetric cypher that can be implemented in hardware with 10% of the power of AES?). That means bit flips can occur in the middle of encryption, corrupting the data while still producing valid integrity tags.
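As a toy illustration of that failure mode (using encrypt-then-MAC with AES-CTR plus HMAC as a stand-in for a real AEAD engine; the structure here is made up): if the fault lands after the cipher core but before the tag unit, the tag authenticates the corrupted ciphertext and the receiver accepts garbage.

```python
# Toy model of a fault inside the crypto engine: the bit flip happens
# after encryption but before the tag is computed, so the tag is valid
# over *corrupted* ciphertext and the receiver accepts garbage plaintext.
import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc_key, mac_key = os.urandom(16), os.urandom(32)
nonce = os.urandom(16)
plaintext = b"data crossing the die boundary" * 32

def aes_ctr(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ctx = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return ctx.update(data) + ctx.finalize()

# --- inside the (faulty) transmit engine ---
ciphertext = bytearray(aes_ctr(enc_key, nonce, plaintext))
ciphertext[100] ^= 0x01                      # bit flip between cipher core and MAC unit
tag = hmac.new(mac_key, bytes(ciphertext), hashlib.sha256).digest()

# --- receiver ---
assert hmac.compare_digest(
    tag, hmac.new(mac_key, bytes(ciphertext), hashlib.sha256).digest()
)                                            # integrity check passes...
recovered = aes_ctr(enc_key, nonce, bytes(ciphertext))
assert recovered != plaintext                # ...but the data is silently corrupt
```

The same applies to AES-GCM if the flip hits the ciphertext before the GHASH unit sees it: the tag is honest about the wrong data.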
Problem 2:
Can you design a symmetric cypher where the integrity tag is computed in a pipeline independent of the main encryption (without duplicating a load of work or massively increasing the number of calculations), so that a bit flip in either pipeline causes the integrity check to fail?
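One way to picture what's being asked for (purely a sketch of the fault-detection property, not a proposed construction): feed the plaintext to two independent pipelines, one encrypting and one computing the tag, and have the receiver check the tag against the recovered plaintext. Note this is encrypt-and-MAC over the plaintext, which has well-known security caveats and runs straight into the oracle worry from Problem 1; it only shows why independence helps against faults.

```python
# Sketch of the "independent pipeline" idea: the tag is computed over the
# plaintext in a pipeline that never sees the cipher core's output, so a
# fault in *either* pipeline makes the receiver's check fail.
import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

enc_key, mac_key = os.urandom(16), os.urandom(32)
nonce = os.urandom(16)
plaintext = b"data crossing the die boundary" * 32

def aes_ctr(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ctx = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return ctx.update(data) + ctx.finalize()

# --- transmit engine: two independent pipelines fed from the same plaintext ---
ciphertext = bytearray(aes_ctr(enc_key, nonce, plaintext))   # pipeline A
tag = hmac.new(mac_key, plaintext, hashlib.sha256).digest()  # pipeline B

ciphertext[100] ^= 0x01   # fault in pipeline A, after the tag was computed

# --- receiver: decrypt, then check the tag against the *recovered* plaintext ---
recovered = aes_ctr(enc_key, nonce, bytes(ciphertext))
ok = hmac.compare_digest(tag, hmac.new(mac_key, recovered, hashlib.sha256).digest())
assert not ok   # the flip in the encryption pipeline is detected
```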
Currently, I believe the workaround for this is to add forward error correction before encrypting, so that the damage from a single failing block stays small and recoverable, but that also adds a lot of overhead (i.e. lower bandwidth).
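A toy version of that workaround (again with made-up helpers, a silly repetition code, and the integrity tag omitted for brevity): the redundancy goes into the plaintext, so a block corrupted inside the engine, which would still carry a valid tag as shown above, can be repaired after decryption. The bandwidth cost mentioned here is exactly this redundancy.

```python
# Toy version of the workaround: FEC the plaintext *before* encryption, so a
# flip inside the engine (which can still produce a valid tag) is repaired
# after decryption. The 3x repetition code is deliberately dumb; real schemes
# (RS/LDPC) pay far less overhead, but it is never free.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def fec_encode(data: bytes) -> bytes:
    return bytes(b for byte in data for b in (byte, byte, byte))

def fec_decode(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 3):
        a, b, c = data[i:i + 3]
        out.append(a if a == b or a == c else b)
    return bytes(out)

def aes_ctr(key: bytes, nonce: bytes, data: bytes) -> bytes:
    ctx = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return ctx.update(data) + ctx.finalize()

key, nonce = os.urandom(16), os.urandom(16)
plaintext = b"payload protected end to end" * 64

# Sender: redundancy first, then encrypt (tag omitted for brevity).
ciphertext = bytearray(aes_ctr(key, nonce, fec_encode(plaintext)))

ciphertext[321] ^= 0x80   # fault somewhere in the engine; a tag would still verify

# Receiver: decrypt, then let the plaintext-level FEC repair the damage.
assert fec_decode(aes_ctr(key, nonce, bytes(ciphertext))) == plaintext
```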
Problem 3:
Can you build a cypher scheme with both of these properties: integrity tags that permit error correction and that can be computed cheaply in an independent pipeline, so they catch bit flips during encryption?
@david_chisnall I am not a certified cryptographer, but adding error correction requires redundancy one way or another, so doing it after encryption is simpler and faster (hardware can deal with it easily). Doing it as part of the cipher sounds like a recipe for a weak cipher and probably a good way to create side channels that will quickly compromise the secret key.
Do not recommend.
@simo5 @david_chisnall It makes me think: if I have my history right, by the late 1980s we had miniaturized all the logic needed to read data from the laser light reflected off a CD into a single microchip, including strong error correction.
I'm sure we've refined our error correction techniques since then, & we certainly can fit even more into a microchip!
@alcinnz @simo5 @david_chisnall You'd be right, and none of the high-speed interfaces that get even remotely close to 1 Tb/s do it without extensive error correction.
State of the art is that we correct errors, mostly using block codes with blocks as long as the requirements and the medium permit.
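(As a concrete aside, and a toy rather than anything deployed: the smallest classic block code, Hamming(7,4), corrects any single flipped bit per 7-bit block. The codes used on real high-speed links are vastly longer and decoded iteratively, but the principle is the same: pay some redundancy up front, fix the flip at the receiver.)

```python
# Minimal block-code demo: Hamming(7,4) corrects any single bit flip in a
# 7-bit block. Real link-layer codes (RS, LDPC) use vastly longer blocks and
# iterative decoders, but the idea is the same: redundancy in, flips out.

def hamming74_encode(d: list[int]) -> list[int]:
    """4 data bits -> 7-bit codeword, parity bits at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c: list[int]) -> list[int]:
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4       # = position of the error, or 0
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[5] ^= 1                           # flip one bit in transit
assert hamming74_decode(codeword) == data  # single-bit error corrected
```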
@alcinnz @simo5 And @david_chisnall is also seeing the right problem there: error-correcting-code decoders are usually pretty energy-hungry per payload bit, even though they are all dedicated silicon these days, because nobody can do 30 iterations of a giant graph message-passing algorithm on frames well beyond 15000 bits long in software.
I don't know how these decoders compare to an AES accelerator in power per bit, but my guess is that the fact alone that they are working1/2
@alcinnz @simo5 @david_chisnall on soft bits to do decisions based on analog signal observations, and the fact that aes blocks are relatively small and locally decryptable, would make me guess that channel decoders are the power-wise worse problem at the same rate.