First, thanks for your efforts. I've seen some packages where
autoconf stuff is more than half of the code. I hope we can
get a lighter installation than that, but let's take the time
to do it right.
It is more important to do it right than to do it quickly.
The reason there are so many makefile options now is because, as a
developer, I often need to test whether a problem is caused by
OpenMP, or to check the execution speed with and without
optimization. Using the extremely slow extreme-debugging switch
OPTS=X is a great way to lint the code and test for ANSI compliance.
Here are two ways that the current style of command-line options
might look in the autoconf version:
1. OPTS="D" ./configure ...
   OMP="Y" ./configure ...
2. ./configure --enable-debugging
   ./configure --enable-openmp
I'm not sure whether all the current functionality can be implemented
with either of these methods---you are the autoconf expert.
But in your message you mention hand-editing the Makefile to change,
e.g., the value of CC, whereas, if I understand things correctly,
CC=icc ./configure should generate a Makefile with CC=icc, and thus
no hand-editing is required.
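To make the precedence rule concrete, here is a minimal sketch of how configure-style scripts typically pick the compiler: an explicit CC from the environment wins, otherwise a default is substituted (the cc default below is an assumption for illustration, not NCO's actual configure logic):

```shell
# Sketch of configure-style CC selection: an explicit CC from the
# environment takes priority; otherwise fall back to a default.
# (The "cc" default is an assumption for illustration.)
CC="${CC:-cc}"
echo "CC=${CC}"
```

So `CC=icc ./configure` bakes CC=icc into the generated Makefile without anyone touching it by hand.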
We need to keep in mind that most users are only going to use the
configure script once, but that the developers want to benefit from
it as well. That is, keeping the current Makefile is not ideal: we do
not want to maintain separate build systems for developers and for
users. That said, we are very willing to alter some of the structure
of the current Makefile/source architecture to more easily accommodate
autoconf. It is reasonable to assume that users want an optimized
build when they type ./configure; make; make install. So that should
be the default.
Likewise it is reasonable to assume that developers will not be
changing build configurations every day. Most development is done
with OPTS=D or OPTS=X. And developers are willing to type (or script)
a configure and build command that is as long as necessary to test
NCO functionality, e.g., using type (2) from above
make distclean; ./configure --enable-debug; make
make distclean; ./configure --enable-debug --enable-openmp \
  --enable-featurefoo; make
where featurefoo is some new alpha-state feature that should be
disabled by default until it is thoroughly tested (e.g., I18N,
packing, OpenMP). Since developers typically work with a given
configuration for many days to thoroughly implement and debug a new
feature, we only need to specify a gruesome configure line very
rarely, the rest of the time we just type make like everyone else.
This may be obvious, but what I'm trying to say is developers
are willing to give up "make time" functionality as long as it
is preserved in "configure time" functionality (problem 2 below).
My specific thoughts on the five problems are inlined below.
Thanks,
Charlie
>Problem 1) with GNU Make we use "../dir/%.o" pattern rules per file type.
>Without GNU Make we write an explicit "../dir/foo.o:" rule per file.
>
>Pro: Current directory structure is obeyed.
>Con: Non Gnu Make users get an ugly Makefile.rules (which is auto generated
> by nongnu.sh..) Inelegant but it works for all makes.
For the reasons I mentioned before, it's OK to assume GNU make is
installed. If both GNU and AT&T make are installed then maybe a
configure option will need to be added to ensure GNU make is used.
It sounds like you want to support AT&T make and that's fine but
yes it will be ugly.
>--------
>Problem 2) We allow dynamic "make time" setting of a few variables
>but not all. In particular OPTS and OMP are still make time variables.
>The compilers are semi "make time" variables. You cannot "make CC=gcc"
>but you can edit Makefile (and set CC=gcc) and the appropriate flags
>(if known) will be used (regardless of type of make used).
>
>Pro: allow "make time" changes.
It sounds like if developers are to adopt the configure mechanism,
then we must move existing "make time" capability to "configure
time" capability, discussed above. In that context, we developers
would need a short description from you of how to add a new
--enable/disable switch to the autoconf logic so we can work
on new features without trashing everybody else's code.
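For reference, adding such a switch usually amounts to an AC_ARG_ENABLE stanza in configure.ac; a hedged sketch (the --enable-featurefoo name and ENABLE_FEATUREFOO symbol are placeholders for this example, not real NCO options):

```m4
# Hypothetical configure.ac fragment: declare --enable-featurefoo,
# default it to "no", and define a preprocessor symbol when enabled.
AC_ARG_ENABLE([featurefoo],
  [AS_HELP_STRING([--enable-featurefoo],
    [enable the experimental featurefoo code (default: no)])],
  [enable_featurefoo=$enableval],
  [enable_featurefoo=no])
AS_IF([test "x$enable_featurefoo" = xyes],
  [AC_DEFINE([ENABLE_FEATUREFOO], [1],
    [Define to 1 to enable featurefoo.])])
```

Code guarded by #ifdef ENABLE_FEATUREFOO then stays dormant unless the switch is given, which keeps alpha features out of everybody else's builds.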
>Con: configure will only check the flags and libraries it knows of,
>"make time" changes are not guaranteed to work.
>Con: if --enable-make-time is used (disabled per default) slower
> compilation results from the "make time" evaluation of what flags
> to use.
If stuff is moved to "configure time" then the con is not an issue.
In any case, slower compilation is not an issue (unless it is, say,
a factor of 10 slower in wall-clock time).
>--------
>Problem 3) since bld/pvmgetarch does not even produce WIN32, users
>who can run configure will end up running "unknown.sh" as their
>arch and pray for the best (see 5 for what "best" should be).
First, we do not care if Windows is supported (do we?).
I recommend concentrating on UNIX first, then releasing, then,
if motivated, modifying to build on Windows too.
So getting Windows working with the initial port is up to you.
Note that more recent versions of pvmgetarch may support a Windows token.
However, I do want to eventually get rid of pvmgetarch and move to
using standard GNU target triplets, i.e., replace PVM_ARCH with
GNU_TRIPLET such as i386-redhat-linux-gnu, or with #ifdef HAVE_X.
We can do this after the initial autoconf port or before, it's
up to you. We will need your help to do this. But actually there
are very few instances of PVM_ARCH in the code, it's mostly
in the build, not in the code.
>configure assumes sh, test, cat, rm, chmod, sed, grep, and
>pvmgetarch assumes uname. So Windows users would have to
>be completely unix like to even run configure.
>
>Pro: Porting to unknown unix like platforms should be automagic
Agreed, that's the whole point of using autoconf.
We want FreeBSD and Mac OS X to build easily.
But Windows is not UNIX-like and we are happy to ignore it.
>Con: Windows users may not be able to run configure.
>Should yet another Makefile be generated for them?
>Like "PVM_ARCH=WIN32 ./configure --disable-gnu-make; mv Makefile Makefile.win"
>before generating a package for distribution.
Getting NCO to run on Windows is not important (to me).
If it works, fine. I am sure that whatever solution you devise will
work for Windows users with Cygwin installed.
That is all we care about (speak up if you disagree).
>--------
>Problem 4)
>Now -M is assumed to work on all arches and compilers, and depends
>will be auto generated/included with GNU Make and Non GNU Make
>(non gnu make not fully tested).
>
>I want to write this so -M is not assumed across platforms but various
>flags are tried until one matches for the current compiler/platform.
>And if the compiler does not support any such depend generation then
>a prebuilt dependency list included in the package distribution is
>used.
Automatic dependency generation is mostly useful to developers,
to speed up rebuilding.
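As a sketch of the fallback idea, the generated Makefile could include compiler-produced dependency files when configure's probe succeeded, and a prebuilt list otherwise (the variable and file names here are illustrative, not NCO's actual build):

```makefile
# Hypothetical fragment: DEPGEN_FLAG is whatever dependency-generation
# option configure found to work (e.g. -M); empty if none did.
%.d: %.c
	$(CC) $(DEPGEN_FLAG) $(CPPFLAGS) $< > $@

ifneq ($(DEPGEN_FLAG),)
  -include $(SRCS:.c=.d)      # auto-generated dependencies
else
  include Makefile.dep        # prebuilt list shipped in the tarball
endif
```

This particular fragment uses GNU make conditionals, so a portable variant would need the explicit-rule treatment from Problem 1.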
>--------
>Problem 5)
>
>Now configure uses the result of pvmgetarch to determine a list
>of compilers to try. The compiler and arch determines the
>prestored flags and libraries to use.
>
>What I want to do is better integrate configure with the compiler flags..
>e.g. test that -g or -ansi work
>But make time nature of flags gets in the way. If configure
>determines -g does not work but user chooses "make OPTS=D" what
>can we do? elegantly?
As per above, is it possible to change "make time" to "configure
time"? Clearly, I am hoping this solves many problems :)
>Also for libraries the logic should be turned on its head.
>Part of the philosophy of autoconf is that it should try its
>best to build even under unknown environments by assembling the
>requirements of the source and seeing if the host can support them.
>Then having #ifdefs HAVE_X in the code rather than say #ifdef LINUX.
>
>e.g. getopt, libresolv. maybe others.
>configure should check if sources can compile and run without linking
>against the libs. Then write the makefile accordingly.
>
>Now the arch determines to use -lresolv or not. and configure just
>checks for its existence of -lresolv, complaining if not present.
You should feel free to modify the source code (*.c files) so they
use #ifdef HAVE_X rather than #ifdef LINUX. We understand that we
must take this path to achieve autoconf enlightenment :)
But, you are going to have to make and validate the required source
code changes, at least for a few of the operators, because we do not
know what changes to make. Obviously we cannot adopt the autoconf
method until all the operators (including ncap) have been tested
to work with it. At that point we'll make a release (still with
the old Makefile renamed something else, just in case).
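To illustrate the flavor of change involved, here is a hedged sketch of the #ifdef LINUX to #ifdef HAVE_X conversion. The HAVE_GETOPT_H symbol would normally come from a config.h generated by autoconf's AC_CHECK_HEADERS; it is defined by hand here, and the function name is made up, so the snippet stands alone:

```c
#include <stdio.h>

/* Normally config.h, generated by configure, provides this;
 * we define it by hand so the sketch is self-contained. */
#define HAVE_GETOPT_H 1

#ifdef HAVE_GETOPT_H
# include <getopt.h>  /* feature test, instead of #ifdef LINUX */
#endif

/* Report which getopt implementation the build would use. */
const char *getopt_source(void)
{
#ifdef HAVE_GETOPT_H
  return "system getopt.h";
#else
  return "bundled replacement";
#endif
}
```

The point is that the source asks "does this host have getopt.h?" rather than "is this host Linux?", which is what lets unknown UNIX-like platforms build automagically.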
>------
>I haven't tested the FORTRAN building logic, nor tested on Sun,
>nor touched src/nco_c++.
We can deprecate the FORTRAN logic at your discretion
(not from the source code, but from the build logic).
Adding nco_c++ should be a piece of cake once you have main NCO done.
nco_c++ is pure C++: no flex, no bison, no Fortran, no OpenMP.
>If it's urgent, say so and I'll put in a days work and get it done.
Not necessary. Haste makes waste.