#409 FLAC encoder output differs between MinGW/Win32 and Linux



The output of the FLAC encoder differs depending on whether it is compiled on Win32 or on Linux. There is no loss of data: decoding back yields the same original data, only the .flac files are not binary identical. (By the way, is this a problem at all? Maybe both versions of the flac files are "within specifications"?)

Interesting thing: Win64, Linux 32-bit, and Linux 64-bit produce the same results; only the Win32 platform produces different .flac files:

Linux 32-bit == Linux 64-bit == MinGW Win64 != MinGW Win32

When using --compression-level-2 or below (-1 and -0), or when using --max-lpc-order=1 or below, the produced flac files are identical on all tested platforms.

Tested on versions (the behaviour is the same):
- 1.3.0
- latest git (2014-02-03/4be8ed8)

OS: Fedora 20 (both 32-bit and 64-bit); flac.exe launched under Wine (I could try later directly on Microsoft Windows if needed)

Example, where the two flac files are identical:
flac --max-lpc-order=1 -o l1.flac MarketFull1.wav

Example, where the two flac files are different:
flac --max-lpc-order=2 -o l1.flac MarketFull1.wav

Original wave (from lincity-ng):


  • I ran into the (same?) problem a while ago working with 1.2.1. It was related to code in lpc.c that was using unsafe floating-point operations, and it also ran into GCC optimization issues with floating point. I made a local fix for the unsafe floating-point code and provided a test case to the GCC team for the optimization issue, but that is a very old bug, so it's unlikely to be fixed anytime soon. A workaround was to always compile the FLAC code with -O0. I am reviewing all the local code changes right now and will provide more details on my issues, so we can determine whether it's the same one.

  • Erik

    lvqcl on the flac-dev mailing list suggests that this may be the difference between configuring with --enable-sse on one platform and without it on the other.

    Git commit 9863998c995 (on xiph.org) makes --enable-sse the default regardless of platform. Please try that and report back.

    Also, the file sizes are within 0.005% of each other and decode to exactly the same output. I would say that these are within spec.

  • Erik

    Martijn van Beurden on the flac-dev mailing list said this is a known 'feature' and not considered a bug. See the FAQ:


  • OK, I tested the latest git (2014-03-15/cf55fc7), and now the encoder produces the same output on all tested platforms (Linux/MinGW, x86_64/i686) for various options (also with "-l 2").

    And I'm sorry, I should have read the FAQ, of course; this is not a bug. It has been solved anyway: the differences were due to SSE being disabled for Win32. ;-)

    Thank you!

  • I didn't read the FAQ either, and my concern is mostly about having reproducible results. If I have a stable algorithm, I expect it to return the same results for the same input. As I said, I was able to get stable results across several platforms with optimizations disabled and a small change to the code; using SSE floating point also fixed it. I have to review and test again with those changes and settings, and I will post my findings here. Maybe something can be incorporated to make it more stable without SSE.

    FYI, I came across this while writing a unit test for the chdman tool, which is used to create/modify files in the CHD file format that the MAME/MESS project uses to archive/compress CD, HDD, etc. images; FLAC is one of the compression methods used in it. The concern there is that creating such files on different platforms with different toolchains should produce the same output, since getting differently sized files from the same input can lead to confusion (although there is another internal checksum, which is identical across the differently sized files).

    • Erik

      If I have a stable algorithm I expect it to return the same results with the same input.

      This is a valid expectation for operations on integers of a fixed size. Unfortunately it is not a valid expectation for floating point.

      The differences we've found here are between the Intel FPU and Intel SSE floating-point operations. When Intel's FPU operates on 64-bit values, it stores those values in 80-bit registers internal to the FPU, performs the operations, and only when the result is written to memory does it convert the value from the 80-bit internal representation to the required 64-bit external representation.

      My understanding is that for similar operations, the SSE unit does all operations on 64-bit values.

      The only way to force the FPU to behave the same as SSE would be to force it to write all internal 80-bit values out to external memory locations. Unfortunately, that would make encoding significantly slower.

      My suggestion is that you make sure your FLAC versions are always compiled with SSE enabled, as there is no reasonable software solution to this problem.

    • lvqcl

      And the concern there is, that creating such files on different platforms with different toolchains should produce the same output

      Unfortunately, FLAC doesn't provide such guarantees. It's probably better to use another codec (WavPack?). Also, there exists an integer-only version of the FLAC encoder.

      Last edit: lvqcl 2014-03-17
  • There's a little bit more to it. Here's the bug I filed with GCC about differing results between 32-bit and 64-bit compiles:


    The most interesting thing is in "Comment 3".

    • lvqcl

      I suspect that this fix won't work together with the -ffast-math or /fp:fast options.

      Also, what if someone creates an FMA-accelerated version of FLAC__lpc_compute_autocorrelation()? Its results will differ from those of the SSE version.

  • Erik


    On the gcc bugtracker, you wrote that replacing

        autoc[coeff] += d * data[sample+coeff];

    with

        FLAC__real tmp = d * data[sample+coeff];
        autoc[coeff] += tmp;

    also provides the same results with 32-bit as 64-bit does.

    What file does that apply to?

    • That's in FLAC__lpc_compute_autocorrelation() in lpc.c from 1.2.1. I added the patch as an attachment.

      As I said I have some changes to review, test and submit (there are already some other reports).

  • Erik

    • status: open --> closed-fixed
    • assigned_to: Erik
  • Erik

    Fixed in:

    commit 70b078cfd5f9d4b0692c33f018cac3c652b14f90
    Author: Erik de Castro Lopo <erikd@mega-nerd.com>
    Date:   Fri Mar 21 19:25:55 2014 +1100
    Attempt to fix differences between x86 FPU and SSE calculations.
  • Thanks, but keep in mind that this will only fix it for unoptimized builds across toolchains. As soon as you optimize the code, you will get different results again, as per the GCC bug report.

  • Erik

    As discussed on the xiph-dev mailing list, the patch that was applied to fix this doesn't have the desired effect with all compilers/platforms.

    I will be reverting this patch unless someone comes up with a very good reason not to.

  • Erik

    • status: closed-fixed --> open