From: Henry B. <hb...@pi...> - 2023-05-21 21:08:05
Hi all,

I want to be able to test some extremely large Maxima calculations, so I've been experimenting to see how large a memory object a program can use before it starts paging itself to death. I ran some tests on my current laptop, which has 16GB of RAM and runs WSL2 (Windows Subsystem for Linux) on top of Windows 11.

My test is little more than a "Hello World" C program that does one large allocation and then initializes the data. (I actually used 'aligned_alloc' rather than 'malloc', but I expect they both do roughly the same thing here.) I was able to allocate a single object of 2^32 bytes (4GB), and it initialized properly and quite quickly, i.e., no paging. However, at 2^33 bytes (8GB) the program still works, but it pages like crazy. Since I have 10GB of RAM free, there should be plenty of room for an 8GB object entirely in RAM in a trivial program like this.

I checked 'prlimit -l', and both the hard and soft limits seem to allow 131072 pages of 'memlock'ed (non-pageable) memory. However, 'ulimit -l' reports a limit of only 65536 pages. Even after fiddling with /etc/security/limits.conf, I still couldn't increase the value that 'ulimit -l' returns.

Since I'm not running pure Linux on this machine, I can't tell whether the problem is Linux (Ubuntu) itself or WSL. Any thoughts?