Lorenzo - 2020-11-03

I would like to bring to attention the following recent paper:
Lama Sleem and Raphaël Couturier, TestU01 and Practrand: Tools for a randomness evaluation for famous multimedia ciphers, Multimedia Tools and Applications (2020) 79:24075–24088
https://link.springer.com/article/10.1007/s11042-020-09108-w (unfortunately behind paywall)

The authors use TestU01 and PractRand to test implementations of several block and stream ciphers.
The ciphers analysed are the following (the library they come from, if any, is in brackets):
RC4 (Libgcrypt)
ChaCha (Libgcrypt)
Camellia (Libgcrypt)
BlowFish (Libgcrypt)
TwoFish (Libgcrypt)
RC4 (WolfCrypt)
ChaCha (WolfCrypt)
HC-128
Camellia
IDEA
Rabbit
3DES
Hight
LBlock
Present
XXTEA
TEA
RC4D plain
RC4D optimized
RC4Dkip plain
RC4Dkip optimized

RC4 fails PractRand in both implementations; rather incredibly, the WolfCrypt implementation of ChaCha (with 20 rounds!) also fails PractRand with problems in tests FPF, GAP, BCFN and DC6.
Since a correct ChaCha20 keystream should be computationally indistinguishable from random, surely this can only mean that the WolfCrypt implementation (at least the version they tested) is badly messed up???
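For anyone who wants to reproduce this kind of check, PractRand's RNG_test tool accepts raw bytes on standard input, so one approach is to dump a long keystream from the cipher under test and pipe it in. As a minimal sketch (not the paper's setup), here is a self-contained pure-Python ChaCha20 block function using the RFC 8439 state layout; a correct implementation of it should sail through PractRand:

```python
import struct

MASK32 = 0xFFFFFFFF

def _rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def _quarter_round(s, a, b, c, d):
    """ChaCha quarter round on state words a, b, c, d (in place)."""
    s[a] = (s[a] + s[b]) & MASK32; s[d] = _rotl32(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & MASK32; s[b] = _rotl32(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & MASK32; s[d] = _rotl32(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & MASK32; s[b] = _rotl32(s[b] ^ s[c], 7)

def chacha20_block(key, counter, nonce):
    """Return one 64-byte ChaCha20 keystream block.

    key: 32 bytes, nonce: 12 bytes, counter: 32-bit int (RFC 8439 layout).
    """
    # "expand 32-byte k" constants, then key, counter, nonce.
    state = [0x61707865, 0x3320646e, 0x79622d32, 0x6b206574]
    state += list(struct.unpack('<8L', key))
    state += [counter & MASK32] + list(struct.unpack('<3L', nonce))

    working = state[:]
    for _ in range(10):  # 10 double rounds = 20 rounds
        # Column rounds
        _quarter_round(working, 0, 4, 8, 12)
        _quarter_round(working, 1, 5, 9, 13)
        _quarter_round(working, 2, 6, 10, 14)
        _quarter_round(working, 3, 7, 11, 15)
        # Diagonal rounds
        _quarter_round(working, 0, 5, 10, 15)
        _quarter_round(working, 1, 6, 11, 12)
        _quarter_round(working, 2, 7, 8, 13)
        _quarter_round(working, 3, 4, 9, 14)

    # Add the original state and serialize little-endian.
    return struct.pack('<16L', *((w + s) & MASK32
                                 for w, s in zip(working, state)))
```

To actually run the statistical test, write successive blocks (incrementing the counter) to stdout and pipe them into PractRand, along the lines of `python chacha_stream.py | RNG_test stdin` (exact invocation depends on your PractRand build). The function above can be checked against the test vector in RFC 8439 §2.3.2 (key 00..1f, counter 1, nonce 000000090000004a00000000), whose keystream block begins 10 f1 e7 e4.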

All the other ciphers pass both TestU01 and PractRand, apart from RC4Dkip plain and RC4Dkip optimized (modifications of RC4 by Michael Kwasnicki https://kwasi-ich.de/blog/2018/03/05/mcu_encryption/ ).
I found it surprising that cryptographic ciphers such as RC4 and the newer RC4Dkip variants would fail generic statistical tests such as those in PractRand.