From: Gunter K. <gu...@pe...> - 2018-04-24 10:45:17
In my experience with gnome-shell, if you start garbage-collecting too late things can get even worse:

- For every command it executes, Maxima allocates a few bytes of memory for saving the input and the output. If the garbage collector has not run by that time, this memory is likely to be allocated at the end of the memory the Lisp has claimed. Running the garbage collector now will free large amounts of memory between the beginning of the heap and the end of the last output Maxima has saved. But as there is still a live object at the end of the heap, no memory at the end of the heap can be given back to the operating system. => Most of the Lisp's memory might consist of free (unallocated) memory, and the Lisp may even know that, but the system still might have to swap every single time it wants to switch to a different task or back to Maxima.
- Also, depending on the garbage collector in use, the more garbage there is to collect, the longer a collection takes. Ruby is said to be able to defragment its memory after garbage-collecting, but at least SBCL, ECL and GCL seem unable to do that.

Kind regards,

   Gunter.

On 24.04.2018 10:14, David Scherfgen wrote:
> I don't know exactly how memory allocation in GCL is implemented, but I
> see a problem.
>
> Assume that after performing lots of calculations, GCL has filled up 50
> percent of physical memory. The garbage collector doesn't kick in yet.
> But if it did, it would be able to free up a large portion of that memory.
>
> Now some other process like a web browser also needs lots of memory.
> That causes parts of GCL's memory to be swapped out (slow).
>
> The pity here is that if the GC was run, there would be no swapping at
> all. The operating system doesn't know that a large part of GCL's memory
> is actually "garbage".
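The heap-shrinking problem described above can be illustrated with a small simulation. This is only an illustrative sketch of a brk()-style heap, not GCL's actual allocator; `reclaimable_bytes` and the block layout are invented for the example:

```python
# Illustrative sketch (NOT GCL's real allocator): a brk()-style heap can
# only shrink from the top, so a single live object at the end pins every
# freed byte below it inside the process.

def reclaimable_bytes(heap):
    """heap: list of (size, live) blocks from bottom to top.
    Only the contiguous run of dead blocks at the TOP can be given
    back to the OS by moving the break downward."""
    freed = 0
    for size, live in reversed(heap):
        if live:
            break  # a live object blocks everything beneath it
        freed += size
    return freed

# 100 MB of garbage below a single live 16-byte object at the top:
heap = [(100 * 1024 * 1024, False), (16, True)]
print(reclaimable_bytes(heap))  # 0 -> nothing can be returned to the OS
```

With the live object at the bottom instead, all the garbage above it could be returned, which is why where the last saved output lands matters so much.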
> Ideally, GCL would spawn a separate thread that watches total physical
> RAM usage so that it can trigger the garbage collector as a response to
> other processes needing more memory, not just to the GCL process itself
> needing more memory. Or maybe there is even a way to ask the operating
> system to give GCL a chance to run the garbage collector before swapping
> out?
>
> Maybe this is based on wrong assumptions, maybe this is already
> happening, I don't know ...
>
> Best regards
> David Scherfgen
>
> Sent from my mobile phone.
>
> Gunter Königsmann <gu...@pe...> wrote on Tue., 24 Apr. 2018, 07:37:
>
> > On 23.04.2018 23:46, Camm Maguire wrote:
> > > Greetings, and thanks for the dialogue on this.
> > >
> > > Let me state upfront that I am more than open to suggestions on how
> > > this should be set up for our community, but I would like to avoid
> > > heuristics and implement a rational policy even if imperfect.
> >
> > That sounds good.
> >
> > > GCL does not allocate *anything* on startup. Rather it probes brk() to
> > > determine the operating system's hard limits on the data segment size.
> >
> > That would be a great plus compared to SBCL, which allocates all its
> > memory on startup and won't dynamically expand it later on.
> >
> > > The results of this probe are governed by the actual amount of ram+swap
> > > physically available, the kernel's policy/algorithms (e.g. overcommit,
> > > etc.), and the ambient system load as may be considered by the kernel.
> > > This value is then multiplied by GCL_MEM_MULTIPLE, producing a value at
> > > which GCL will report memory exhaustion when reached.
> >
> > Perhaps that is part of the problem: As soon as GCL no longer fits into
> > physical RAM it becomes basically useless, as it will be constantly
> > swapping.
> > Even worse: Already when GCL plus all the other applications that are
> > actually running no longer fit into RAM, GCL is a several-gigabyte
> > chunk of memory that needs to be swapped out whenever another
> > application needs a few milliseconds of CPU time, and then swapped back
> > in again.
> >
> > > Thus GCL will never use up memory unless the user asks for it. The GC
> > > policy is then tailored to try to optimize the time/memory load
> > > tradeoff as best as possible.
> > >
> > > In my experience, sessions fall into two broad categories -- 1)
> > > benchmark some production code scheduled for release, and 2) do real
> > > incremental development work.
> > >
> > > The rate of allocation in 2) is exceedingly slow, likely resulting in
> > > days or weeks for a running session to begin to approach 1/2 memory.
> >
> > That sounds fine. But it seems that this isn't true in Maxima. If I
> >
> > - call read_matrix() on a long file, or
> > - run the testbench, or
> > - solve a longish differential equation using desolve (the solution of
> >   the one in question is about 2 DIN A4 pages long in a 12pt font), or
> > - factor a long equation (4 DIN A4 pages),
> >
> > then, if GCL_MEM_MULTIPLE is unset, GCL exceeds the 3.5 gigabytes of
> > free memory my system provides within seconds, even if I haven't opened
> > any other application; unfortunately the long equations I used belong
> > to my workplace.
> >
> > When using CLISP, ECL or SBCL the testbench stays below 5% of my main
> > memory. With CLISP it even stays below 2%.
> >
> > If I set GCL_MEM_MULTIPLE to 0.2 the testbench succeeds, but still uses
> > up astonishingly big parts of my main memory.
> >
> > Something seems to be seriously wrong, and maybe it is just the
> > combination of Maxima and GCL, not one of these programs alone.
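To illustrate the GCL_MEM_MULTIPLE mechanism quoted above (the probed limit multiplied by a user-set factor), a simplified arithmetic sketch; the default value of 1.0 and the probed number are assumptions for the example, not GCL's real code:

```python
import os

def exhaustion_threshold(probed_limit_bytes: int) -> int:
    """Simplified sketch: report memory exhaustion once allocation reaches
    probed_limit * GCL_MEM_MULTIPLE (a default of 1.0 is assumed here)."""
    multiple = float(os.environ.get("GCL_MEM_MULTIPLE", "1.0"))
    return int(probed_limit_bytes * multiple)

# With ~3.5 GB probed and GCL_MEM_MULTIPLE=0.2, the limit drops to ~0.7 GB:
os.environ["GCL_MEM_MULTIPLE"] = "0.2"
print(exhaustion_threshold(3_500_000_000))  # 700000000
```

This also shows why 0.2 tames the testbench: the threshold at which GCL starts reporting exhaustion (and collecting aggressively) moves well below the point where the machine starts swapping.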
> > gunter@Marius:~$ gcl --version
> > GCL (GNU Common Lisp) 2.6.12 CLtL1 Fri Apr 22 15:51:11 UTC 2016
> > Source License: LGPL(gcl,gmp), GPL(unexec,bfd,xgcl)
> > Binary License: GPL due to GPL'ed components: (XGCL READLINE UNEXEC)
> > Modifications of this banner must retain notice of a compatible license
> > Dedicated to the memory of W. Schelter
> >
> > Use (help) to get some basic information on how to use GCL.
> > Temporary directory for compiler files:
> > /tmp/
> >
> > In my case /tmp/ isn't a tmpfs that keeps its contents in RAM.
> >
> > gunter@Marius:~$ cat /proc/swaps
> > Filename      Type         Size     Used  Priority
> > /dev/sda5     partition    7811068  0     -2
> >
> > Currently (Thunderbird is open, and many applications were open, which
> > accounts for the big "Cached" section) my memory info is:
> >
> > gunter@Marius:~$ cat /proc/meminfo
> > MemTotal:        3855160 kB
> > MemFree:          385596 kB
> > MemAvailable:    2045568 kB
> > Buffers:            3212 kB
> > Cached:          2110116 kB
> > SwapCached:            0 kB
> > Active:          1473228 kB
> > Inactive:        1736004 kB
> > Active(anon):    1032348 kB
> > Inactive(anon):   344808 kB
> > Active(file):     440880 kB
> > Inactive(file):  1391196 kB
> > Unevictable:         204 kB
> > Mlocked:             204 kB
> > SwapTotal:       7811068 kB
> > SwapFree:        7811068 kB
> > Dirty:              1124 kB
> > Writeback:             0 kB
> > AnonPages:       1096108 kB
> > Mapped:           373736 kB
> > Shmem:            345956 kB
> > Slab:             155620 kB
> > SReclaimable:      80760 kB
> > SUnreclaim:        74860 kB
> > KernelStack:       10080 kB
> > PageTables:        43420 kB
> > NFS_Unstable:          0 kB
> > Bounce:                0 kB
> > WritebackTmp:          0 kB
> > CommitLimit:     9738648 kB
> > Committed_AS:    9296560 kB
> > VmallocTotal:    34359738367 kB
> > VmallocUsed:           0 kB
> > VmallocChunk:          0 kB
> > HardwareCorrupted:     0 kB
> > AnonHugePages:         0 kB
> > ShmemHugePages:        0 kB
> > ShmemPmdMapped:        0 kB
> > CmaTotal:              0 kB
> > CmaFree:               0 kB
> > HugePages_Total:       0
> > HugePages_Free:        0
> > HugePages_Rsvd:        0
> > HugePages_Surp:        0
> > Hugepagesize:       2048 kB
> > Hugetlb:               0 kB
> > DirectMap4k:      154368 kB
> > DirectMap2M:     3854336 kB
> >
> > Currently I have set GCL_MEM_MULTIPLE to default to 0.2 in
> > src/maxima.in, which makes GCL usable on my system. But David may be
> > right that this is too low. On the other hand, not setting this
> > variable makes Maxima-with-GCL basically useless to me, so I am
> > reluctant to revert my change without a go-ahead from the mailing list.
> >
> > Kind regards,
> >
> >    Gunter.
>
> _______________________________________________
> Maxima-discuss mailing list
> Max...@li...
> https://lists.sourceforge.net/lists/listinfo/maxima-discuss
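David's watchdog idea from earlier in the thread, triggering a collection when system-wide available memory gets low rather than only when the Lisp's own heap fills up, could be prototyped roughly as below. This is an illustrative sketch only (a real GCL implementation would live in its C runtime, and the 500 MB threshold is an arbitrary assumption); it reads MemAvailable from /proc/meminfo, so it is Linux-specific:

```python
import gc
import threading
import time

def parse_mem_available_kb(meminfo_text: str) -> int:
    """Extract MemAvailable (in kB) from the contents of /proc/meminfo."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemAvailable:"):
            return int(line.split()[1])
    return 0

def gc_watchdog(threshold_kb: int = 500_000, interval_s: float = 1.0) -> None:
    """Run a GC cycle whenever system-wide available memory drops below
    the threshold, so the OS never has to swap out pages that are really
    just uncollected garbage."""
    while True:
        with open("/proc/meminfo") as f:
            if parse_mem_available_kb(f.read()) < threshold_kb:
                gc.collect()
        time.sleep(interval_s)

# Started once at process startup, e.g.:
# threading.Thread(target=gc_watchdog, daemon=True).start()
```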