Thank you for all your replies.
First off, please forgive me if any concept is erroneous or anything is misspelled. I am a native Portuguese speaker and do not have an in-depth knowledge of English - I have only reached high-school level, so please bear with me :)...
PWM, your reply was exactly in line with the concepts I was envisioning.
I completely agree with your first paragraph: it is, or should be, more efficient to request memory once and reuse it than to keep freeing it back to the kernel.
Your point about having an approximation of the memory needed beforehand is also covered: the object I tried to define in the first mail should keep a permanent record (a file) of memory usage, so that on the next program execution it can reserve enough memory up front to serve at least 80% (per the 80-20 rule) of the total memory requirements.
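A minimal sketch of that record-and-prereserve idea (the file name, the file format, and the 80% factor are all my own choices here):

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical usage record: persists the peak number of bytes the
// program used, so the next run can pre-reserve 80% of it up front.
struct UsageRecord {
    const char* path;
    std::size_t peak = 0;

    explicit UsageRecord(const char* file) : path(file) {
        if (std::FILE* f = std::fopen(path, "r")) {   // load last run's peak
            unsigned long long p = 0;
            if (std::fscanf(f, "%llu", &p) == 1) peak = (std::size_t)p;
            std::fclose(f);
        }
    }
    // Call whenever current usage changes, to track the high-water mark.
    void note(std::size_t inUse) { if (inUse > peak) peak = inUse; }

    void save() const {
        if (std::FILE* f = std::fopen(path, "w")) {
            std::fprintf(f, "%llu\n", (unsigned long long)peak);
            std::fclose(f);
        }
    }
    // 80% of last run's peak, per the 80-20 heuristic above.
    std::size_t prereserveSize() const { return peak * 8 / 10; }
};
```

On startup the object would then reserve `prereserveSize()` bytes in one request and hand pieces of it out through its own interface.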
Your paragraph on the OS instantly allocating memory in chunks that are only committed when written to raises a question: does HeapAlloc with HEAP_ZERO_MEMORY obey this rule? The memory is zeroed, so it should suffer the same penalties, right?
The paragraph discussing the implementation of an 'internal memory manager' is exactly my idea. The only difference from custom heap allocators, as I know them, is that mine will implement some sort of profiling: by managing the memory it will keep a record that informs me of its usage, and possibly help me trace down bottlenecks, such as thread contention on multi-processor systems.
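The profiling part could start as simply as a counting wrapper around the allocator - a sketch under my own naming, with a size header in front of each block so releases can be accounted for:

```cpp
#include <cstddef>
#include <cstdlib>

// Sketch of a profiling allocator: every request and release is counted,
// so usage statistics can be dumped or logged at any time.
struct ProfiledHeap {
    std::size_t liveBlocks = 0;   // blocks currently allocated
    std::size_t liveBytes  = 0;   // bytes currently allocated
    std::size_t peakBytes  = 0;   // high-water mark seen so far

    void* allocate(std::size_t n) {
        // A header in front of each block remembers its size for release().
        std::size_t* p = (std::size_t*)std::malloc(sizeof(std::size_t) + n);
        if (!p) return nullptr;
        *p = n;
        ++liveBlocks;
        liveBytes += n;
        if (liveBytes > peakBytes) peakBytes = liveBytes;
        return p + 1;               // hand out the memory after the header
    }
    void release(void* mem) {
        if (!mem) return;
        std::size_t* p = (std::size_t*)mem - 1;  // step back to the header
        --liveBlocks;
        liveBytes -= *p;
        std::free(p);
    }
};
```

A nonzero `liveBlocks` at shutdown is a leak report for free; `peakBytes` is exactly the number the usage record above would persist.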
But this is getting ahead of itself. The main problem, for which I have no solution yet, is obviously figuring out when allocated memory can be released, as you expressed so succinctly.
This led me down another, much more tiresome, development path: implementing these features per class. Each class would have its own manager that requests memory from a central memory-manager 'dispatcher' of sorts, keeping the same features as above.
This should solve the situation, because each object would raise an event upon releasing memory, so the central manager knows that that memory is now available for reuse.
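Roughly what I have in mind - every name here is my own invention, and the 'event' is reduced to a plain method call - is a per-class free list that notifies a central dispatcher on every release:

```cpp
#include <cstddef>
#include <cstdlib>
#include <map>

// Central manager: tracks, per block size, how many released blocks
// are sitting in the per-class free lists, ready for reuse.
struct CentralDispatcher {
    std::map<std::size_t, std::size_t> freeBySize; // size -> reusable blocks

    void* grab(std::size_t n) { return std::malloc(n); } // fall back to the OS
    void notifyReleased(std::size_t n) { ++freeBySize[n]; }
    void notifyReused(std::size_t n)   { --freeBySize[n]; }
};

// Per-class manager: one free list of fixed-size nodes for objects of T.
template <class T>
struct ClassPool {
    union Node { Node* next; alignas(T) char storage[sizeof(T)]; };
    CentralDispatcher* central;
    Node* freeList = nullptr;

    explicit ClassPool(CentralDispatcher* c) : central(c) {}

    void* take() {
        if (freeList) {                     // reuse a released block
            Node* n = freeList;
            freeList = n->next;
            central->notifyReused(sizeof(T));
            return n;
        }
        return central->grab(sizeof(Node)); // otherwise ask the dispatcher
    }
    void give(void* mem) {                  // release: 'raise the event'
        Node* n = (Node*)mem;
        n->next = freeList;
        freeList = n;
        central->notifyReleased(sizeof(T));
    }
};
```

Because the pool is typed, the dispatcher's statistics carry the semantic information mentioned below: it knows not just that 8 bytes came back, but which class they belonged to.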
This will waste a few CPU cycles running the same code again and again...
This is the more robust and elegant solution - it gives me semantic information about the data at the released location - but it may prove less efficient, in CPU cycles, than the cost of reserving memory through kernel services, since I believe the Windows kernel will always allocate more virtual memory than it is asked for.
I have no data to prove that point, though, and Microsoft has not replied to my request to see the relevant source code (a bit of humor - I have not actually asked...).
Could you point me to some libraries for doing this kind of analysis on sample code?
I'll try to implement the algorithms mentioned above, in a simplified manner, and subject them to a few groups of tests simulating real activity.
Hopefully I'll draw conclusions that help me decide and, possibly, refine these models to the point where one's superiority is obvious.
Thank you in advance for the time spent on this subject,
> Date: Sat, 1 Aug 2009 14:14:56 +0200
> From: pwm@...
> To: the_sharrk1@...
> CC: dev-cpp-users@...
> Subject: Re: [Dev-C++] [Win32] Theoretical question about memory managing
> It is more efficient to have a local cache and reuse released memory than
> to send it back to Windows and directly reclaim it again.
> It is normally not good to preallocate all memory in advance unless the
> program is known to always consume almost the same amount of memory. A big
> preallocation will just punish other applications, while potentially
> slowing down the startup of your program while Windows swaps out data to
> be able to fulfill your allocations.
> Another thing is that the OS can often instantly allocate large chunks of
> memory without actually committing any RAM - your application just gets
> an address range, but no mapping of memory happens until your program
> tries to access the individual memory pages. On first access of each page,
> one memory page gets allocated into the address range. That means that to
> actually preallocate data, you also need to write to the memory - for
> example by clearing it to zero. And allocating and clearing a very large
> block of memory can get your application to seemingly hang for quite some
> time, while Windows finds something to throw out to be able to find unused
> RAM to map into your application space.
> Many applications that need large numbers of allocations/releases are
> doing incremental allocations, and grouping the allocated objects into
> different block sizes. When a block is released, it is sent to a list of
> blocks of that size. When needing a block, the program checks if this list
> has any available block. If not, the program may allocate an array of 10
> or 100 blocks using a single Windows allocation, and then splits the large
> block into the 10 or 100 smaller blocks and adds them to the free list.
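[As I understand it, the batching scheme described above would look roughly like this - the batch size of 10 is from your mail, the rest is my guess:]

```cpp
#include <cstddef>
#include <cstdlib>

// One free list per size class. When the list runs dry, make a single
// large allocation, split it into BATCH blocks, and chain them all
// onto the free list.
struct SizeClassList {
    static const std::size_t BATCH = 10;  // blocks per big allocation
    struct Node { Node* next; };
    std::size_t blockSize;                // must be >= sizeof(Node)
    Node* freeList = nullptr;
    std::size_t bigAllocations = 0;       // how often we really hit malloc

    explicit SizeClassList(std::size_t sz) : blockSize(sz) {}

    void* take() {
        if (!freeList) refill();
        if (!freeList) return nullptr;    // refill failed
        Node* n = freeList;
        freeList = n->next;
        return n;
    }
    void give(void* mem) {                // released blocks go on the list
        Node* n = (Node*)mem;
        n->next = freeList;
        freeList = n;
    }
private:
    void refill() {
        char* big = (char*)std::malloc(blockSize * BATCH); // one big request
        if (!big) return;
        ++bigAllocations;
        for (std::size_t i = 0; i < BATCH; ++i)            // split and chain
            give(big + i * blockSize);
    }
};
```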
> For just tracing the memory blocks your program uses, there are several
> free libraries available. They can, on exit of the application, tell you
> of any memory leaks. But such libraries don't work well with an
> application that performs allocations of large blocks and then splits
> these large blocks into smaller blocks. The library will then just see all
> these large blocks, and consider them all to be memory leaks. The reason
> is that if you split one large block into many small, it is very hard for
> your program to later figure out if all the small blocks have been released
> so that you may unlink them from the free list and release this big block.
> In the end, it is very hard to talk about a general solution that is
> applicable in all cases. What is best will depend exactly on the access
> patterns of the program. Quotient between small and large allocations.
> Total number of allocations. Amount of allocations in relation to
> releases. If allocations and releases are randomly distributed during the
> runtime or if the allocation patterns looks like waves where the program
> makes a lot of allocations and then a lot of releases before starting the
> next wave.
> On Fri, 31 Jul 2009, Frederico Marques wrote:
> > Hello,
> > Again, I seek your advice on a personal project :
> > I have searched for information related to the impact that managing, or more accurately not managing memory, will have on processes in win32.
> > Recently I have stumbled upon a paper disserting the effects of the paging process on windows, and the cost of requesting more resources than those currently addressed by the front-end virtual memory manager.
> > As I envision, the final code will use a great deal of memory, and actually, it will not always be coded by me, and as such, I cannot be certain that the allocated resources will ever be freed.
> > I am quite the control freak, so I'm trying to implement one of two designs :
> > a ) The software will contain one to several objects that allocate memory, at program start, based on the latest data retrieved from executions with high volume processing, or by other words, I'll subject my code to stress-tests, and register memory usage, for reserving it, on next program restart. I'll, obviously, be managing the memory, providing by one defined interface, my own allocation/freeing routines that will manage the already reserved memory(from the OS).
> > b ) All objects within the software will, somehow ( Haven't yet figured this out ) manage its own memory, having this way some information about the memory, besides its start address and the span of it.
> > To sum up, and finally presenting my question : which design would be better, and which would provide more information about the memory usage?
> > The above points are, obviously, based on my belief that the system( OS, L2,L1 caches, processor, ram ) will handle better paging out/in virtual memory, than actually reserving it at a later stage, especially, at different points in program logic, since the locality is not observed, on, for instance, linked-list implementations, and processor-cycles will be 'wasted' in finding available memory in kernel/c-runtime area.
> > Please do correct me if my assumptions are not correct, and, please advise me if you have measurable results related to such implementations... This is a pet project, and although I am pursuing it for sheer fun, I believe it can be brought to good use.
> > Sincerely
> > Frederico Marques