Jason seemed to be using Solaris for his testing platform. The
allocator depends on the mmap() system call working as described by
its man page under Linux. I believe that the mmap() system call is
similarly described under Solaris, but, unless I misunderstand Jason's
posting, the Solaris mmap implementation does not seem to follow that
description. I do not have ready access to a Solaris box to confirm
my suspicions; it might be worth checking with someone who is more familiar with Solaris. If necessary, changing the mmap calls for Solaris would affect only one or two methods in a single allocator file. There may be other portability issues as well, since I develop this allocator for my work under Linux.
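The hint behavior the allocator depends on can be probed with a small check like the sketch below (Linux semantics assumed; `mmap_honors_hint` is a hypothetical helper for illustration, not part of the allocator source):

```cpp
#include <sys/mman.h>
#include <cassert>
#include <cstddef>

// Probe whether mmap() treats a non-NULL first argument as a placement
// hint that is honored when the requested range is free (the behavior
// the allocator relies on; Linux normally does this, Solaris may not).
bool mmap_honors_hint(std::size_t len) {
    // Let the kernel choose an address, then free the range again.
    void* first = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (first == MAP_FAILED) return false;
    munmap(first, len);

    // Re-map, passing the just-freed address as a hint (no MAP_FIXED).
    void* hinted = mmap(first, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    bool honored = (hinted != MAP_FAILED && hinted == first);
    if (hinted != MAP_FAILED) munmap(hinted, len);
    return honored;
}
```

On a kernel that ignores the hint, the second mapping lands elsewhere and the helper returns false, which is exactly the symptom described in this thread.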
I love the above thread of conversation as it asks "do you know of
major corporations using your software?", as if use by a major
corporation is evidence of quality, and then the same thread seems to
indicate that Solaris, which is produced by a major corporation, has
developed OS code which seems not to be implemented according to its
own man pages. I guess most likely, I have some misunderstanding of
the Solaris mmap implementation. At first glance it just seems funny.
Maybe someone has attempted to port the code to Solaris or has a workaround or alternate solution to share?
The code can be ported with some effort - you need to use the STLport version of the STL, as the RogueWave version supports a different flavour of allocator interface (the RW libs are quite old). I got all the code to compile and "work" except for the multi_process_pool_alloc class. I gave up on this once I realised the problem with mmap that Jason also identifies - I'd been ignoring the fact that the memory address of the header wasn't what it should be, thinking I'd just goofed elsewhere.
The problem Jason raises is a genuine issue: the Solaris mmap call does not use the suggested address as a "recommendation" and instead puts the memory map at an address up near the 4GB boundary (IIRC). When using MAP_SHARED rather than MAP_FIXED, the address passed to mmap is only a suggestion - Linux seems to treat it as a "strong recommendation" and honors it without fail; otherwise Mark would have seen problems. Solaris, however, ignores it completely and chooses the next free address down from the 4GB boundary, so that if two processes map segments in different orders they won't end up at the same location. This causes problems when the STL container set up by process 1 says "look at address X for item 1" when in fact, for process 2, the data got mapped to address Y.
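A toy model (with made-up addresses, not real mappings) of why this mismatch breaks lookups: an absolute pointer written by one process is only meaningful to another process if both mapped the segment at the same base.

```cpp
#include <cstdint>

// Process 1 maps the shared segment at base1 and stores an absolute
// pointer to item 1; process 2 maps the same bytes at base2. The stored
// pointer only still points at item 1 if the two bases coincide.
bool absolute_pointer_survives(std::uintptr_t base1, std::uintptr_t base2,
                               std::uintptr_t item_offset) {
    std::uintptr_t stored = base1 + item_offset;  // written by process 1
    std::uintptr_t actual = base2 + item_offset;  // item 1 in process 2
    return stored == actual;  // true only when both maps share a base
}
```

Storing the offset instead of the absolute address would sidestep the problem, at the cost of translating on every access.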
My gut feel is that if you could guarantee that all segments were mapped in a set order by all processes, the addresses in all those processes would match. Ensuring this in any code using the allocator might be tricky - perhaps interprocess comms to register the order of segments would do it, but what an effort it would be.
I think the whole structure might be valid in cases where you can create all your STL containers up front in one process, with the intention either that they are to be read-only shared or are pre-sized big enough never to need extra allocations. The next step would be to fork and exec this process and take the file mappings - and therefore(?) the shared memory mappings - with you into the child process. Note I haven't tried this and can't be sure that it will work: fork takes the file mappings, but I'm not sure if exec would trash them.
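The fork half of this idea can be sketched as below (a hypothetical check, not the allocator's code). One point that can be stated with confidence: POSIX exec replaces the entire process image, so memory mappings survive a plain fork but do NOT survive an exec.

```cpp
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cassert>
#include <cstring>

// A MAP_SHARED mapping created before fork() appears at the same
// virtual address in the child, so absolute pointers into it stay
// valid across the fork. (An exec, by contrast, discards all mappings.)
bool fork_preserves_mapping() {
    void* p = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return false;
    char* shared = static_cast<char*>(p);
    std::strcpy(shared, "item1");

    pid_t pid = fork();
    if (pid < 0) return false;
    if (pid == 0) {
        // Child: the same virtual address still holds the same bytes.
        _exit(std::strcmp(shared, "item1") == 0 ? 0 : 1);
    }
    int status = 0;
    waitpid(pid, &status, 0);
    munmap(p, 4096);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}
```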
Andy
The allocator actually started its development with STLport. Its development was later switched over to working with the GNU libraries. There used to be a link from the STLport pages to this allocator.
The original version of the allocator also only worked if the allocator was started first in a main program and all other processes which needed to attach were forked. A user in this forum requested that independent processes be able to attach, and that is a more elegant approach.
The code below the multi_process_pool_alloc class follows the standard STL allocator interface, but it also relies on mmap reasonably using the hint parameter.
The current version does not require that the size of the container classes be known/instantiated in advance, provided mmap works as described. Although unlikely, it might be better if Solaris conformed more closely to its own man pages, or rewrote the man page to more accurately describe the actual implemented behavior. Failing an OS change, is there an elegant solution which could serve as a substitute on Solaris and also work with other OSs, including Linux?
Marc
I suppose the GNU libraries might also work on Solaris, but STLport comes as a nice (read: lazy) option with the Sun One/Forte C++ compiler suite, so I tried that, and it worked well enough not to try anything else after the default RogueWave stuff failed to work.
I agree that the ability to allow independent processes to attach is very desirable - that's why I tried getting it all to work on Solaris before realising the issue with mmap.
I also agree that it would be nice if Solaris at least attempted to use the address - I guess the behaviour is core to the Solaris virtual memory model, which is a shame really. It would be interesting to see whether MAP_FIXED would work reliably on Solaris for any particular addresses. However, I couldn't come up with a substitute (which is not to say there isn't some clever workaround - I don't do much/any UNIX programming at this level), at least not in the time I had to experiment. I doubt Solaris will change any time soon to allow the existing version to work :-(.
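For the record, the MAP_FIXED experiment floated above might look like the sketch below. The helper name and the use of an anonymous mapping are illustrative assumptions, not the allocator's actual code. One caution: MAP_FIXED silently replaces whatever is already mapped in the requested range, so every cooperating process would have to know the agreed range is free.

```cpp
#include <sys/mman.h>
#include <cassert>
#include <cstddef>

// Map a segment at an address all processes have agreed on in advance.
// With MAP_FIXED, mmap uses exactly this address or fails - it does not
// fall back to a kernel-chosen address the way a plain hint does.
// WARNING: any existing mapping in [agreed_addr, agreed_addr+len) is
// silently discarded, so the range must be known to be free.
void* map_at_fixed(void* agreed_addr, std::size_t len) {
    return mmap(agreed_addr, len, PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
}
```

Whether Solaris honors MAP_FIXED reliably for user-chosen addresses would still need testing on a real Solaris box.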
Andy
Does this work on Solaris? Has anybody tried to build it on Solaris? Can this allocator be used for memory-mapped files?
Thanks,
Raj
See the posting thread at:
https://sourceforge.net/forum/forum.php?thread_id=1072117&forum_id=29411