Thread: [A-A-P-develop] Fwd: RFC: feature test (like configure) (now with attachment)
Brought to you by:
vimboss
From: <j....@we...> - 2002-12-27 16:16:56
Attachments:
featurtest.py
|
*** I am sorry, but I forgot to attach the Python code that *** I was referring to in my posting, so I repost it, now with *** the attachment -- sorry.

Dear List Readers,

This material was posted to the a-a-p list a few weeks ago. Now I have been invited by Bram Moolenaar and Steven Knight to repost my ideas to both the a-a-p and the scons lists.

This request for comment is about an alternative approach to the well-known configure script: how feature tests could be done. I assume that you all know how autoconf/automake and configure work, so I will not describe them in detail, just give a short wrap-up: configure is (most of the time) a machine-generated script written in a portable subset of /bin/sh (a big part of the autoconf documentation is dedicated to this subject: Portable Shell Programming). Since writing portable /bin/sh scripts is not an easy task, it has been widely automated by the autoconf tool. Autoconf is fed with m4 macros that expand to the real feature tests. Notice the two-step approach: the developer writes (among other things) the configure.in, where he (in most cases) only names the tests that should be done. The autoconf tool builds the real tests.

a-a-p as well as scons lack such a tool family at the moment. configure itself could not really be used, since it generates the makefile (with the help of automake) and thus deeply relies on make internals and on how make rules are built. To write portable software one still needs such a tool set. I suggest implementing a replacement for configure that works very similarly to it.

For the sake of this discussion I implemented three checks:

- ft_prog_cc: finds the compiler (not fully implemented; in the future it needs to check whether the compiler is a cross compiler, and so on).
- ft_check_files: checks whether a given file is on the build machine; this could be used to ensure that a certain header exists.
- ft_check_lib: checks whether a given library is on the machine.

The test results go into a Python dictionary for later reference. E.g. if the C compiler (name) is checked, then the variable CC is set to the result. The C++ compiler is put into CXX. The tests that look for a library will add the linker command to the LIBS variable and set a HAVE_xxx feature marker. E.g. if you look for the z library, then it will do LIBS += "-lz" and set HAVE_LIBZ.

The dictionary will be filled with know-how about the build environment as the feature tests are executed. All later tests take advantage of the earlier ones. E.g. once you have tested how to call the C compiler, later tests that use the C compiler will make the correct call. This is exactly how the autotools family accumulates its results. I am aware that this is very close to the traditional configure, and I chose it this way because the traditional approach has proved for many years that it works.

configure scripts are not meant to be hand-written (autoconf springs to mind), so the interface to the software developer is one layer above the actual feature test. I suggest a Python object, called FeatureTester, that knows how to do the feature tests. Just call a method on the FeatureTester object to introspect the build environment. As a last step, call the object to dump its results into two files: one to include in your C/C++ sources (called config.h) and one to be included by your make replacement (here called "config.aap"). The generation of the second file depends on the make tool used, so each tool needs a backend to write the feature test results - which should be easy.

Attached is a small piece of Python source that shows a proof-of-concept implementation of what I described here. I suggest that the feature testing machinery be implemented in Python code that stands on its own (does not depend on the make-alike tool). The config.h file can easily be used no matter which make tool you use.
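The FeatureTester idea described above could be sketched roughly as follows. This is not the attached featurtest.py; the method names follow the posting (ft_prog_cc, ft_check_files, ft_check_lib), but the bodies are illustrative assumptions about how such checks might be done.

```python
# Minimal sketch of a FeatureTester object as described above.
# The method and file names follow the posting; the check bodies
# are hypothetical stand-ins, not the attached implementation.
import os
import shutil

class FeatureTester:
    def __init__(self):
        # The dictionary that accumulates know-how about the build
        # environment; later tests may consult earlier results.
        self.vars = {"LIBS": ""}

    def ft_prog_cc(self):
        # Find a C compiler on PATH; a real test would also detect
        # cross compilers, default flags, and so on.
        for cc in ("cc", "gcc", "clang"):
            if shutil.which(cc):
                self.vars["CC"] = cc
                return True
        return False

    def ft_check_files(self, *paths):
        # Check that the given files exist on the build machine,
        # e.g. to ensure a certain header is present.
        return all(os.path.exists(p) for p in paths)

    def ft_check_lib(self, name):
        # Rough stand-in: look for lib<name> in a few common places.
        # On success, extend LIBS and set a HAVE_xxx feature marker.
        for d in ("/usr/lib", "/usr/local/lib"):
            for ext in (".so", ".a"):
                if os.path.exists(os.path.join(d, "lib" + name + ext)):
                    self.vars["LIBS"] += " -l" + name
                    self.vars["HAVE_LIB" + name.upper()] = 1
                    return True
        return False

    def dump(self, c_header="config.h", recipe="config.aap"):
        # Dump the results: one file for the C/C++ sources and one
        # for the make replacement (here an A-A-P style recipe).
        with open(c_header, "w") as f:
            for k, v in sorted(self.vars.items()):
                if k.startswith("HAVE_"):
                    f.write("#define %s %s\n" % (k, v))
        with open(recipe, "w") as f:
            for k, v in sorted(self.vars.items()):
                f.write("%s = %s\n" % (k, v))

ft = FeatureTester()
ft.ft_prog_cc()
ft.ft_check_lib("z")
ft.dump()
```

Each test records its result in the shared dictionary, so a later test that needs the C compiler can simply read CC, which mirrors how autoconf accumulates results across checks.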
The other part of the feature test results describes how to call some tools (which compiler and which compiler flags). Since the make tool (in your case a-a-p or scons) needs to use these results, they have to be written in a way the make tool understands. For a-a-p it might be a recipe, and there would also be something for scons.

I am aware that deciding how to integrate this into a-a-p (if there is a need for integration) is still a lot of work. Please take this as it is: a bunch of ideas, not solutions. To get this going, I will turn the tests needed for a real-world C (or C++) project into the suggested Python feature tests - that will show some weak points of my ideas. I would like to do this test with aap. If anybody is interested in integrating the same project into scons, he is very welcome; I would like to see that this way of doing feature tests works with (at least) both aap and scons. If anybody else has an opinion about the task of feature testing, I would like to hear his questions/critique/remarks about it.

happy discussion
Joerg |
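The per-tool backend mentioned above might look like this sketch: config.h generation stays tool-independent, while each make replacement gets a small backend that writes the variables in its own syntax. Both backend classes and their output formats are hypothetical illustrations, not part of the attached code.

```python
# Hypothetical per-tool backends for writing the tool-call results.
# Each make replacement gets a backend that knows its own syntax.
class AapBackend:
    filename = "config.aap"
    def write(self, f, vars):
        for k, v in sorted(vars.items()):
            f.write("%s = %s\n" % (k, v))   # A-A-P style assignment

class SconsBackend:
    filename = "config_vars.py"
    def write(self, f, vars):
        # scons is driven by Python, so plain Python assignments
        # are one conceivable format for it to read back.
        for k, v in sorted(vars.items()):
            f.write("%s = %r\n" % (k, v))

def dump_vars(vars, backend):
    # The feature tester only needs this one entry point; adding
    # support for another make tool means adding another backend.
    with open(backend.filename, "w") as f:
        backend.write(f, vars)

dump_vars({"CC": "gcc", "LIBS": "-lz"}, AapBackend())
```

The point of the split is that the feature tests themselves never change when a new make tool is supported; only a new backend is added.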
From: Wichert A. <wi...@wi...> - 2002-12-27 17:29:23
|
(note that I have not looked at the example code, so this might already be done properly)

Previously j....@we... wrote:
> The tests that look for a library will add the linker command to the
> LIBS variable and set a HAVE_xxx feature marker. E.g. if you look
> for the z-lib then it will do this LIBS += "-lz" and set HAVE_LIBZ.

Please make at least the variable you assign to configurable. It is very common for larger projects to have different LIBS variables for different executables being built, and it is always painful to work around autoconf tests that blindly assign to LIBS. For this reason I have always preferred tests that have a separate action for a positive and a negative result: that way it is much easier to do possibly unexpected things based on the test result. Perhaps it makes sense to always split the two: performing the test, and handling the information that results from a test execution.

I can think of different types of tests to be performed:

* simple tests (does command XXX exist)
* toolchain-specific tests (can the C compiler find include file xxx.h)
* tests that possibly need to loop over multiple settings (try a couple of different -I parameters for the C compiler to determine where an include file can be found)

Based on the result, different types of actions can be executed:

* set a specific flag (define HAVE_STDARG_H if stdarg.h is present)
* set a variable to a specific value (set CC to gcc)
* add to a possibly existing variable (add a -I parameter to CFLAGS)
* custom actions for test success and failure (not all tests will have a success/failure concept; some will only result in some useful information)

Wichert.

--
Wichert Akkerman <wi...@wi...> http://www.wiggy.net/ A random hacker |
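The split Wichert suggests - performing the test versus handling its result - could be sketched like this, with the target variable configurable and optional success/failure actions. All names here (check_lib, libs_var, on_success, on_failure) are hypothetical, chosen only to illustrate the idea.

```python
# Sketch of separating "perform the test" from "handle the result":
# the variable assigned to is configurable, and callers may attach
# custom actions for the positive and negative outcomes.
import os

def check_lib(env, name, libs_var="LIBS", on_success=None, on_failure=None):
    # The test itself: look for lib<name> in a few common places.
    found = any(
        os.path.exists(os.path.join(d, "lib%s%s" % (name, ext)))
        for d in ("/usr/lib", "/usr/local/lib")
        for ext in (".so", ".a"))
    # Handling the result, kept apart from the test above.
    if found:
        # Append to the *configurable* variable, not blindly to LIBS.
        env[libs_var] = env.get(libs_var, "") + " -l" + name
        env["HAVE_LIB" + name.upper()] = 1
        if on_success:
            on_success(env)
    elif on_failure:
        on_failure(env)
    return found

env = {}
# Different LIBS variables for different executables being built:
check_lib(env, "z", libs_var="SERVER_LIBS")
# A custom negative action, e.g. falling back to a bundled copy:
check_lib(env, "m", libs_var="CLIENT_LIBS",
          on_failure=lambda e: e.setdefault("NEED_BUNDLED_LIBM", 1))
```

Because the test only reports a result and the actions are supplied by the caller, "possibly unexpected things" stay in the project's hands rather than being hard-wired into the test.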