Occasionally, some tests fail because the randomly chosen parameters produce results that exceed the error bounds. On reflection, we do not actually use random numbers to test "real" randomness, but only to obtain arbitrary input data free of accidental symmetries and the like. Hence, there is no real need to randomize the input on every run.
As a solution, tests should always run with the same random numbers, so that results are strictly reproducible and no test fails occasionally due to an unlucky RNG draw.
Things to do:
Further TODOs, if they are still relevant by then:
Some tests essentially duplicate the library code, yet produce different output when run. The best example is operator/OperationsTest, in particular the multiplication: in principle, the tests do the same thing as the product/sum operators, but they give different results. Figure out why, and fix it, so that stricter error bounds become possible for the low-level tests.
Two notes: