Re: [Dev-C++] [Win32] Theoretical question about memory management
From: Per W. <pw...@ia...> - 2009-08-01 12:40:03
It is more efficient to have a local cache and reuse released memory than to hand it back to Windows and immediately claim it again. It is normally not good to preallocate all memory in advance unless the program is known to always consume almost the same amount of memory. A big preallocation just punishes other applications, while potentially slowing down the startup of your own program while Windows swaps out data to be able to satisfy your allocations.

Another thing is that the OS can often instantly allocate large chunks of memory without actually committing any RAM: your application just gets an address range, but no memory is mapped until your program accesses the individual pages. On first access of each page, one memory page gets mapped into the address range. That means that to actually preallocate the data, you also need to write to the memory, for example by clearing it to zero. And allocating and clearing a very large block of memory can make your application seem to hang for quite some time while Windows finds something to throw out so that it has unused RAM to map into your address space.

Many applications that need large numbers of allocations/releases do incremental allocations and group the allocated objects by block size. When a block is released, it is put on a free list for blocks of that size. When a block is needed, the program first checks whether that list has a block available. If not, it may allocate an array of 10 or 100 blocks with a single Windows allocation, split that large block into the 10 or 100 smaller blocks, and add them to the free list.

For just tracing the memory blocks your program uses, there are several free libraries available. They can, on exit of the application, tell you about any memory leaks. But such libraries don't work well with an application that allocates large blocks and then splits them into smaller ones. The library will only see the large blocks and consider them all to be memory leaks. The reason is that if you split one large block into many small ones, it is very hard for your program to later figure out whether all the small blocks have been released, so that you could unlink them from the free list and release the big block.

In the end, it is very hard to talk about a general solution that is applicable in all cases. What is best depends on the exact access patterns of the program: the ratio between small and large allocations, the total number of allocations, the number of allocations relative to releases, and whether allocations and releases are spread randomly over the runtime or come in waves where the program makes a lot of allocations and then a lot of releases before starting the next wave.
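To make the reserve/commit point concrete, here is a minimal Win32 sketch (the sizes are arbitrary and most error handling is left out). It reserves a large address range with VirtualAlloc, commits only part of it, and then touches the committed pages to force them into RAM - the touching step is the one that can make a big preallocation feel like a hang:

#include <windows.h>
#include <cstdio>
#include <cstring>

int main()
{
    const SIZE_T RESERVE_SIZE = 256 * 1024 * 1024;  /* 256 MB of address space */
    const SIZE_T COMMIT_SIZE  =  16 * 1024 * 1024;  /* commit only 16 MB of it  */

    /* Step 1: reserve address space only. No RAM or pagefile is used yet. */
    char *base = static_cast<char *>(
        VirtualAlloc(NULL, RESERVE_SIZE, MEM_RESERVE, PAGE_NOACCESS));
    if (!base) return 1;

    /* Step 2: commit part of it. Even now the pages are demand-zero;
       physical memory is normally assigned on first access, not here. */
    if (!VirtualAlloc(base, COMMIT_SIZE, MEM_COMMIT, PAGE_READWRITE))
        return 1;

    /* Step 3: touch the pages (here by zeroing) to force them into RAM.
       On a very large block this is the step that can appear to hang. */
    std::memset(base, 0, COMMIT_SIZE);

    std::printf("reserved %lu KB, committed and touched %lu KB at %p\n",
                (unsigned long)(RESERVE_SIZE / 1024),
                (unsigned long)(COMMIT_SIZE / 1024), (void *)base);

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}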
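The free-list scheme is easy to sketch as well. Below is a toy fixed-size pool for a single block size - a real allocator would keep one pool per size class, and the refill count of 100 is just the figure from the example above:

#include <windows.h>
#include <cstddef>

/* Toy fixed-size allocator: hands out blocks of one size, keeps released
   blocks on a free list, and refills the list 100 blocks at a time with a
   single HeapAlloc call. The big refill chunks are never returned to
   Windows here; a real implementation has to decide if and when to do that. */
class FixedPool {
public:
    explicit FixedPool(std::size_t blockSize)
        : blockSize_(blockSize < sizeof(Node) ? sizeof(Node) : blockSize),
          freeList_(NULL) {}

    void *allocate()
    {
        if (!freeList_)
            refill(100);                  /* one heap call per 100 blocks */
        Node *n = freeList_;
        if (!n) return NULL;              /* refill failed */
        freeList_ = n->next;
        return n;
    }

    void release(void *p)
    {
        Node *n = static_cast<Node *>(p); /* reuse the block itself as a link */
        n->next = freeList_;
        freeList_ = n;
    }

private:
    struct Node { Node *next; };

    void refill(std::size_t count)
    {
        char *chunk = static_cast<char *>(
            HeapAlloc(GetProcessHeap(), 0, blockSize_ * count));
        if (!chunk) return;
        for (std::size_t i = 0; i < count; ++i)
            release(chunk + i * blockSize_);  /* split the chunk onto the free list */
    }

    std::size_t blockSize_;
    Node *freeList_;
};

Usage would simply be FixedPool pool(64); void *p = pool.allocate(); ... pool.release(p); - note that release() never talks to Windows at all, which is exactly where the speed comes from.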
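On the tracing point, even a crude counter shows the problem. The sketch below (not production quality, not thread safe) counts operator new/delete pairs and prints what is still outstanding at exit; a pool that refills itself with big chunks will always have those chunks reported as outstanding, even when every small block has been handed back to the pool:

#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

/* Crude allocation counter: tracks every operator new/delete pair.
   It cannot see inside pools that split big blocks into small ones. */
static long g_liveAllocations = 0;

void *operator new(std::size_t size)
{
    ++g_liveAllocations;
    void *p = std::malloc(size);
    if (!p) throw std::bad_alloc();
    return p;
}

void operator delete(void *p)
{
    if (p) {
        --g_liveAllocations;
        std::free(p);
    }
}

/* Prints the count when static objects are destroyed at program exit. */
struct LeakReport {
    ~LeakReport() { std::printf("blocks still allocated at exit: %ld\n", g_liveAllocations); }
};
static LeakReport g_report;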
/pwm

On Fri, 31 Jul 2009, Frederico Marques wrote:

> Hello,
>
> Again, I seek your advice on a personal project:
> I have searched for information on the impact that managing, or more
> accurately not managing, memory has on processes in Win32. Recently I
> stumbled upon a paper discussing the effects of the paging process on
> Windows, and the cost of requesting more resources than those currently
> addressed by the front-end virtual memory manager. As I envision it, the
> final code will use a great deal of memory, and it will not always be
> written by me, so I cannot be certain that the allocated resources will
> ever be freed.
> I am quite the control freak, so I'm trying to implement one of two
> designs:
>
> a) The software will contain one or several objects that allocate memory
> at program start, based on the latest data retrieved from executions with
> high-volume processing. In other words, I'll subject my code to stress
> tests and record its memory usage, so that amount can be reserved on the
> next program start. I will, obviously, be managing that memory myself,
> providing through one defined interface my own allocation/freeing
> routines that work on the memory already reserved from the OS.
> b) All objects within the software will somehow (I haven't yet figured
> this out) manage their own memory, thereby keeping some information about
> that memory beyond its start address and size.
>
> To sum up, and finally presenting my question: which design would be
> better, and which would provide more information about memory usage?
> The above points are, obviously, based on my belief that the system (OS,
> L1/L2 caches, processor, RAM) will handle paging virtual memory out and
> in better than reserving it at a later stage, especially at different
> points in the program logic, since locality is not observed in, for
> instance, linked-list implementations, and processor cycles will be
> 'wasted' finding available memory in the kernel/C-runtime area.
>
> Please do correct me if my assumptions are incorrect, and please advise
> me if you have measurable results related to such implementations...
> This is a pet project, and although I am pursuing it for sheer fun, I
> believe it can be put to good use.
>
> Sincerely,
> Frederico Marques
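For what it is worth, design a) above boils down to an arena carved out of one up-front allocation, with your own routines layered on top. A minimal sketch along those lines (the capacity would come from your stress-test figures; here it is just a constructor parameter, and there is no per-object free - the whole arena is recycled at once):

#include <windows.h>

/* Arena in the spirit of design a): one up-front VirtualAlloc, and a
   trivial bump-pointer alloc() behind your own interface. There is no
   per-object free; reset() recycles the whole arena in one go. */
class Arena {
public:
    explicit Arena(SIZE_T capacity)
        : base_(static_cast<char *>(VirtualAlloc(NULL, capacity,
                    MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE))),
          capacity_(capacity),
          used_(0) {}

    ~Arena() { if (base_) VirtualFree(base_, 0, MEM_RELEASE); }

    void *alloc(SIZE_T bytes)
    {
        bytes = (bytes + 7) & ~static_cast<SIZE_T>(7);   /* keep 8-byte alignment */
        if (!base_ || used_ + bytes > capacity_)
            return NULL;                                 /* arena exhausted */
        void *p = base_ + used_;
        used_ += bytes;
        return p;
    }

    void reset() { used_ = 0; }   /* drop everything at once */

private:
    char  *base_;
    SIZE_T capacity_;
    SIZE_T used_;
};

/* Usage: Arena arena(64 * 1024 * 1024); void *p = arena.alloc(128); */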