#2009 Leaking memory in a thousand places


sdcc leaks memory. The attached file was produced using

valgrind --num-callers=16 --leak-check=yes sdcc --std-c99 -mz80 -c -I../include/ -I../../libcv/include --opt-code-size --fverbose-asm --max-allocs-per-node 10000 cvu_play_music.c 2> log

OK, it doesn't really show a thousand leaks - I already fixed a few, so it's down to 904 now. Similar results can be obtained by compiling any file for any target. The file I compiled is small; leaks are a more serious problem for larger source files.



  • Raphael Neider

    Raphael Neider - 2012-04-11

    I generally agree that avoiding memory leaks is a good idea. However, SDCC is a compiler, not a long-running daemon, job, or task.
    For SDCC, releasing all memory allocated during its lifetime costs a significant amount of runtime (I tested that on the pic backend once when I introduced control flow graph dumps for debugging purposes); letting the OS clean up after the process exits is much faster.
    Releasing memory can also introduce subtle bugs by prematurely freeing memory which is still in use (painfully experienced with pcode nodes in one of the pic backends once). Of course, not freeing unused memory can also hide bugs ...
    I would vote against "fixing" sdcc memory leaks with the sole purpose of reducing the number of complaints emitted by valgrind or similar tools. At least, I would heavily reduce the priority of these fixes.

    Just my two cents.
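    The trade-off Raphael describes can be sketched like this (a hypothetical example, not SDCC's actual code - `LEAK_CHECK_BUILD` and `run_compiler` are made-up names): pay the cost of a full cleanup pass only in leak-checking builds, and let the OS reclaim everything at exit in normal builds.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for data that lives as long as the process does. */
    static char *line_buffer;

    static void run_compiler(void)
    {
        line_buffer = malloc(1 << 20);
        /* ... the whole compilation uses line_buffer ... */
    }

    int main(void)
    {
        run_compiler();
    #ifdef LEAK_CHECK_BUILD
        /* The runtime spent here only makes valgrind's report clean;
         * the process exits immediately afterwards either way. */
        free(line_buffer);
        line_buffer = NULL;
    #endif
        printf("compilation finished\n");
        return 0;
    }
    ```

    Compiled with `-DLEAK_CHECK_BUILD`, valgrind reports no definite leak; compiled without it, the behavior is identical but the exit is faster, which is Raphael's point.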

  • Maarten Brock

    Maarten Brock - 2012-04-11

    I agree with Raphael. You would have to carry around all the pointers to allocated memory so you can release them somewhere at the end of the run, whereas the OS will clean it all up for you in one sweep.

    I suggest setting this one to Resolution: Wont Fix.
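    The bookkeeping Maarten mentions can be sketched like this (hypothetical code, not SDCC's actual allocator - `tracked_malloc` and `free_all_tracked` are made-up names): every allocation is recorded in a registry so it can all be released in one sweep at the end.

    ```c
    #include <assert.h>
    #include <stdlib.h>

    #define MAX_TRACKED 4096
    static void  *tracked[MAX_TRACKED];
    static size_t n_tracked = 0;

    /* Wrap malloc so every successful allocation is remembered. */
    static void *tracked_malloc(size_t size)
    {
        void *p = malloc(size);
        if (p != NULL) {
            assert(n_tracked < MAX_TRACKED);
            tracked[n_tracked++] = p;   /* remember it for the final sweep */
        }
        return p;
    }

    /* The "one sweep" at the end of the run. */
    static void free_all_tracked(void)
    {
        while (n_tracked > 0)
            free(tracked[--n_tracked]);
    }

    int main(void)
    {
        for (int i = 0; i < 100; i++)
            tracked_malloc(64);
        assert(n_tracked == 100);
        free_all_tracked();
        assert(n_tracked == 0);
        return 0;
    }
    ```

    Note the registry itself costs memory and a few instructions per allocation, which is exactly the overhead Maarten objects to when the OS could reclaim everything at exit anyway.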

  • Philipp Klaus Krause

    It might not be worth chasing after every single byte.
    The end of the file shows a summary:

    ==12706== LEAK SUMMARY:
    ==12706== definitely lost: 686,329 bytes in 24,655 blocks
    ==12706== indirectly lost: 958,291 bytes in 38,944 blocks
    ==12706== possibly lost: 0 bytes in 0 blocks
    ==12706== still reachable: 836,625 bytes in 8,389 blocks
    ==12706== suppressed: 0 bytes in 0 blocks

    which means we lose a total of over one and a half megabytes of memory for a file containing a single function of moderate size. I will investigate later today how this scales with file and function size. But I suspect that for larger files (e.g. autogenerated ones, files that are large due to programming style, or just the inputs mentioned in the "implementation limits" of the standard) we might use an additional hundred megabytes or so due to leaks.


    P.S.: The leaks in the file seem to be sorted smallest to largest, so fixing just the last few could be a big improvement.

  • mz-fuzzy

    mz-fuzzy - 2012-05-28

    I'd vote for fixing at least the biggest leaks. In my case, I'd like to squeeze my Z80 code size as much as possible, but with my PC's 8 GB of RAM I can only set max-allocs-per-node to about 6 million; I'd like the possibility of running a compilation overnight and getting the best possible code size with max-allocs-per-node at 60 million or so. I assume this memory consumption is caused by memory not being freed during compilation (please correct me if I'm wrong).


  • mz-fuzzy

    mz-fuzzy - 2012-05-28

    ... for a ~1200-line-of-code module; forgot to note.

