I'm using PCManFM 1.2.3 and I have noticed that after entering a directory with many files and then closing PCManFM, the used memory is still high. Attached is a testcase which creates 100,000 files in /tmp/test and opens the directory with PCManFM (after PCManFM has finished loading, it just needs to be closed). In my case I get these results:
Anonymous
I have continued this test a little and found some new things: after closing PCManFM I updated the script to create 1 million files (by replacing "for i in {1..100}" with "for i in {1..1000}" in line 4) and executed it. Interestingly, even though PCManFM was closed, it started to process something and reached 100% CPU usage on one core. Even after ~2 hours PCManFM had not finished, and since it was now locked up (it couldn't be opened and files on the desktop were no longer visible), I had to kill it. If I now try to open /tmp/test with the 1 million files manually, PCManFM crashes after a few seconds.
Last edit: Sworddragon 2015-03-12
Unfortunately, 1 million files is too heavy a task for any graphical file manager, because it will create a graphical representation of each file and watch the folder contents for updates. I would suggest never opening such an extremely huge folder with any graphical file manager. Even a text-based one like mc would take noticeably long (seconds to minutes) to open it, but fortunately text file managers like mc read the contents of a folder only on entry or on some activity rather than watching it constantly; that is why they are so fast, which is not the case for graphical ones.
However, PCManFM should have freed that memory once the folder is no longer open anywhere (in the folder view or in the tree view), so it's definitely a problem. Thank you very much for reporting this; I will test it thoroughly.
Last edit: Lonely Stranger 2015-05-11
A user doesn't always know what is in a folder (for example, an erroneous script could have created this many files in a short time). Also, I think every application should be stable enough not to crash even in such extreme cases, unless a technical limitation is hit. But in that case it would be nice to give the user an error message (if that is possible).
Thank you for the sensible suggestion; it sounds like a very good target for an improvement.
As for memory consumption - unfortunately, we cannot do anything about that. Memory is requested from the system when the heap grows, and some of it may never be returned to the system when freed; that is due to memory fragmentation, and it's how both glibc (malloc/free) and glib (g_new/g_free) work. Fortunately, that memory is not leaked and will be reused later, so it hopefully will not grow further (unless you enter an even more complicated folder). I've retested this and found neither memory leaks nor stalled folder resources, so it should be exactly as I said. Don't worry, please - everything should be fine.
As for safeguards on opening extremely large folders - the change is too big to implement in 1.2, so I'll mark it for 1.3, with high priority.
Thank you very much for finding this problem.
This reminds me of this ticket: https://bugs.wireshark.org/bugzilla/show_bug.cgi?id=11002
Are you sure that free() does not always return the memory to the system? In my tests from the opening post, the resident (RES) memory was still in use - not some sort of cache. But maybe it is only because of glib, and perhaps it can be solved in a similar way as in the linked ticket.
free() returns memory not to the system but to the heap. Memory is returned to the system only in quantities of a system page, which is 4 KB for a normal page (huge pages are larger still). It is easily possible that, out of 4096 bytes, 8 bytes are in use and 4088 bytes are free - and that page is still busy, so it cannot be returned to the system. That is how memory allocation works. And we cannot do anything about it unless we integrate memory allocation into our own code so that all the necessary relocations do not break pointers. That would make it impossible to use either glibc or glib allocations, so neither glib nor gtk+ could be used. Such an integrated memory allocator would also slow the application down a lot, because many memory-freeing operations would require a massive lock while pointers are rewritten. And all of that would require a large number of programmers to implement our own kind of toolkit (no GTK, no Qt, etc.); I'm afraid the dynamic memory consumption isn't worth those efforts, not to mention that it is completely impossible for the small gang of LXDE developers. I'm sorry, but I have to say that this is not a bug but normal behavior.
Well, it just looks weird to me that (based on the opening post: 91 MiB - 19 MiB) 72 MiB of memory was not returned to the system even though it had already been freed. Especially since I have never seen such behavior in any other application (except for Wireshark, as shown in the linked ticket, but they should have already changed this).