From: Dean M. B. <mik...@gm...> - 2010-10-26 04:22:43
On Mon, Oct 25, 2010 at 10:32 PM, Nelson, Erik - 2 <eri...@ba...> wrote:
>> Dean Michael Berris wrote on Monday, October 25, 2010
>> ... I've made some steps to start
>> reducing the compile-times of release 0.8. One of the first casualties
>> in the compile-time reduction effort is the Boost.Concept_check stuff
>> on the fundamental accessors/wrappers.
>>
>
> Just to throw it out there, maybe some of this could be controlled with
> a BOOST_CPP_NETLIB_COMPILE_FAST macro, enabling it to be optional.
>

Actually, there's already a way to disable the concept checking parts
with a macro. My objection to yet another preprocessor macro is that
the code gets pretty ugly -- look at the client switching for
BOOST_NETWORK_ENABLE_HTTPS to see what I mean.

Another reason these concept checks were removed from the fundamental
accessors is that concept checks are meant to be applied to
algorithms. In my head, the only algorithms that can be applied to
messages at the moment are:

1. Rendering
2. Transformations
3. Adaptations

Algorithms for rendering are still embedded in the client
implementations. I'm seeing a need for a render(...) algorithm that
deals with concepts instead, so that we can abstract out the rendering
of messages based on their protocol/domain.

I haven't removed the concepts; I just removed the concept checks on
the fundamental accessors -- like source, destination, headers, and
body -- because they tend to be recursive and introduce all kinds of
header include ordering hell. :D
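To make that concrete, here is a rough sketch of what a
concept-constrained render() might look like with Boost.Concept_check.
The Message concept and the free-function accessor signatures below are
illustrative assumptions, not the library's actual definitions:

    #include <string>
    #include <boost/concept_check.hpp>
    #include <boost/concept/requires.hpp>

    // Illustrative Message concept: any type usable with the fundamental
    // accessors (source, destination, body), modeled here as free
    // functions found by ADL. Headers are omitted for brevity.
    template <class M>
    struct Message {
        BOOST_CONCEPT_USAGE(Message) {
            // These expressions must compile for M to model Message.
            std::string s = source(m);
            std::string d = destination(m);
            std::string b = body(m);
            (void)s; (void)d; (void)b;
        }
    private:
        M m;
    };

    // The algorithm carries the concept check; the accessors themselves
    // stay free of concept machinery, so including them stays cheap.
    template <class M>
    BOOST_CONCEPT_REQUIRES(
        ((Message<M>)),
        (std::string))
    render(M const & message) {
        // Hypothetical wire format -- real rendering would be
        // protocol/domain specific.
        return source(message) + " -> " + destination(message)
            + "\r\n\r\n" + body(message);
    }

This way only the rendering algorithm instantiates the concept checks,
while the accessor headers themselves stay lightweight to include.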
>> Now there are still a lot of other things I can do to start reducing
>> compile-times. One is incorporating precompiled headers into the
>> build, and start pre-compiling almost every single header in the
>> library.
>
> Pre-compiled headers seem like they are going to be a mess, and
> conceptually similar to going to a non-header-only implementation. I
> like the idea of header-only, but maybe a more library-based one would
> help the compile time.
>

Well, not really. Let me explain why.

First, the precompiled headers will only be built if the compiler
supports them, so for compilers that don't support precompiled headers
there's no effect. Second, the precompiled headers will be placed in
the same directory as the original header. Third, this helps a lot in
development mode: having precompiled headers while developing a
header-only library is a good way to cut down compile times immensely.

Having precompiled headers around is not the same as building something
and linking against it. The reason I didn't want to build external
libraries is so that there are no ABI compatibility issues to take care
of when doing releases. Precompiled headers don't impose this problem.

>> Has anyone played around with CMake and how to make precompiled
>> headers part of the build process? I have seen some Google results on
>> the matter but I currently have no time/patience to muck around with
>> that.
>>
>
> Nothing else in Boost requires CMake, as far as I know. Boost-devel has
> had some rumblings about a change, but right now Boost.Build is the
> Boost standard.
>

Well, there is a Boost mirror where everything is in CMake -- it's
hosted on GitHub and maintained by the ryppl folks. The reason I'm
using CMake for development is that it's just way faster to start
building things compared to using bjam and Boost.Build. Having to wait
1 minute before anything gets compiled is not a good way of spending
that time. ;) Of course, Boost.Build is still supported at this time;
I just like to use CMake locally for testing.

>> I'm nearing the completion of the asynchronous server that invokes
>> request handlers in a thread pool and although the compile times have
>> been reduced in the HTTP client side of things I would really
>> appreciate some CMake kung-fu from people on the list to incorporate
>> precompiled-headers (so that we can make that part of the install step
>> in 0.8).
>>
>
> Compile time *is* an important issue, but the beauty of a header library
> is that there is no install step - you just copy the files into your
> directory tree.
>

Yeah, but... if you get the best of both worlds with precompiled
headers -- you just copy them along with the files when you "install"
-- then I don't see why trying precompiled headers would be a bad thing
for a header-only library. Precompiled headers are really going to be
transparent to users of the library: if the compiler they're using
supports them, there's an advantage; otherwise, there's no
disadvantage.

Also, Boost has an "install" step which places the headers and all
other built libraries into certain places. Integrating with the
"install" step of Boost will have to be done in the Boost.Build files
anyway. ;)

Just sayin'... precompiled headers make sense for header-only
libraries. :D

--
Dean Michael Berris
deanberris.com