Hola,
I want to stop supporting bld/Makefile as soon
as possible because it is so....stone-age.
So let's try to stabilize the autoconf build mechanism
by exercising it with various options on all the
architectures. To assist in this, I've checked in
a new file, configure.eg, which both summarizes
and details the build status on various architectures.
Rorik and I can divide the platforms as follows:
Rorik: linux, solaris
Charlie: aix, sgi, linuxalpha
Henry: any help appreciated
Please read configure.eg and change things to suit
you, Rorik. Ultimately, we want to end up with
the correct ./configure commands to successfully
build on each platform. Where it's obvious what
the problem is, I have footnoted the summary
table. I've also included the output of ./configure
where it fails at that step.
The specific items that need to be done to improve
the robustness of the builds are now enumerated
at the top of doc/TODO. As these items are addressed, they should be removed from this
list and as new items are found they should be
added. I am hoping Rorik will do the yeoman's work
on disposing of most of these problems, based
on the test results we feed him. Please remember
to update and commit these files, configure.eg and
TODO. Based on the items on the TODO that have
been addressed, I'll know when to retest on the various platforms I'm responsible for.
p.s. It should always be possible to see how the
old method, bld/Makefile, works on a platform that
you do not have access to just by manually setting
your PVM_ARCH to that platform and doing
'make -n'. The goal, then, is to figure out what
to pass ./configure such that the compiler/linker
receives the same arguments that bld/Makefile
would have sent it. So a lot of the recommended
commands in ./configure.eg can be filled in
(but not tested) without even having access to
the machine.
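As a concrete (untested) sketch of that recipe, with AIX as an
arbitrary example and purely illustrative compiler flags, none of
which are the real bld/Makefile settings:

```shell
# Sketch of the recipe above: pretend to be another platform, dry-run
# the old build, then mimic its flags through ./configure.
PVM_ARCH=AIX
export PVM_ARCH
# In a checked-out tree one would now dry-run the old build:
#   (cd bld && make -n) > old_cmds.txt
# then read old_cmds.txt and reproduce the compiler/linker arguments
# it prints via ./configure, e.g. (flags are made up here):
CFG_CMD="CC=xlc CFLAGS='-O2' ./configure"
echo "$CFG_CMD"
```

The point is only the shape of the comparison: what 'make -n' prints
is the target that the ./configure invocation has to reproduce.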
Thanks!
Charlie
I worked a little more on the autoconf stuff this afternoon. I fixed a trivial bug that was causing CFLAGS, etc. not to be updated. I also worked on TODO items 3 and 5-11.
A question about TODO 2: why ftp this file? Why not just include it with the distribution?
Autoconf has a macro (AM_INIT_AUTOMAKE) that takes the version number and handles all the tar.gz files, etc. with it. However, to pass VERSION while compiling, I still read it in from doc/VERSION. (Note: I removed the 'nco-' prefix; it seemed redundant, since with the current Makefile, when I did 'make rpm', I ended up with nco-nco-2.5.6.) As soon as autoconf is stable, we can get rid of doc/VERSION and simply edit configure.in when version numbers change; but that can wait, to maintain compatibility with the current make for now.
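A rough sketch of the intended flow (VERSION.tmp is a stand-in for
doc/VERSION here, and this shell fragment only simulates what the
configure.in code would do at the M4 level):

```shell
# Read the bare x.y.z once, then let automake-style naming derive the
# dist tarball, so we never get a doubled nco-nco- prefix again.
printf '2.5.6' > VERSION.tmp             # stand-in for doc/VERSION
VERSION=`cat VERSION.tmp`                # file holds x.y.z, no 'nco-' prefix
PACKAGE=nco
TARBALL="${PACKAGE}-${VERSION}.tar.gz"   # what AM_INIT_AUTOMAKE would name
echo "$TARBALL"
rm -f VERSION.tmp
```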
I added NETCDF_INC and NETCDF_LIB; configure --help documents them. We still automatically add /usr/local/lib and /usr/local/include, since that is the default for netcdf set by Unidata. Would there ever be a reason that we DON'T want those in LDFLAGS and CPPFLAGS, respectively? I could make it conditional on whether NETCDF_INC and NETCDF_LIB are defined or not.
Is there a reason not to have nco_c++/tst.cc load ../../data/in.nc instead of making the symbolic link?
I haven't done anything with the OPENMP stuff yet.
rorik
> I worked a little more on the autoconf stuff this afternoon. I fixed a
> trivial bug that was causing CFLAGS, etc not to be updated. I also
> worked on TODO 3,5-11.
Good. The AIX and SGI platforms appear to be hobbled because
./configure cannot find the netcdf library, even when it is
specified with NETCDF_LIB.
> A question about TODO 2: why ftp this file? Why not just include it
> with the distribution?
One reason: The file is a largish binary file and binary files,
especially large ones, generally do not belong in a CVS repository.
Moreover, using ncks to retrieve it tests the ftp algorithm in ncks.
If ftp'ing it does not work because the user is behind a firewall,
then that's their problem. We should try to print a diagnostic,
but not sweat over it.
> Autoconf has a macro (AM_INIT_AUTOMAKE) that takes the version number
> in and handles all tar.gz files, etc with that. However, to pass
> VERSION while compiling, I still read it in from doc/VERSION (note: I
> removed the 'nco-' prefix, it seemed redundant in the current Makefile
> when I did 'make rpm' I ended up with nco-nco-2.5.6)
I see I have been inconsistent with VERSION between the Makefile and
some of the source code which prints it. We can leave VERSION as x.y.z.
It would still be nice to have HOSTNAME and USER and GNU_TRP so we can
continue to print useful diagnostics like
zender@lanina:~/nco/bld$ ncks -r
ncks version 20020819 built Aug 19 2002 on lanina.ps.uci.edu by zender
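One hypothetical way to keep that working under autoconf is to
capture the build identity at configure/make time and hand it to the
compiler as -D defines; none of these variable names exist in
configure.in yet, they are only a sketch:

```shell
# Gather build-time identity so 'ncks -r' can keep printing
# "built ... on <host> by <user>". Guard against minimal environments.
BUILD_HOST=`hostname 2>/dev/null || echo unknown-host`
BUILD_USER=${USER:-unknown}
# Quote each value so the C preprocessor sees a string literal:
BUILD_DEFS="-DHOSTNAME='\"${BUILD_HOST}\"' -DUSER='\"${BUILD_USER}\"'"
echo "$BUILD_DEFS"
```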
> autoconf is stable, we can get rid of doc/VERSION and simply edit
> configure.in when version numbers change; but that can wait to
> maintain compatibility with the current make for now.
Agreed! At some point, you'll need to tell me (or just do it yourself)
where/how in the NCO code (*.c and *.h files) to take advantage of
autoconf-supplied information like VERSION, USER...
> I added NETCDF_INC and NETCDF_LIB. configure --help documents them. We
> still automatically add /usr/local/lib and /usr/local/include since
> that is the default for netcdf set by Unidata.
> Would there be a reason
> ever that we DONT want those in CPPFLAGS and LDFLAGS, respectively? I
> could make it a conditional based on whether NETCDF_INC and NETCDF_LIB
> are defined or not.
Exactly.
/usr/local/lib and /usr/local/include should be the default to
search for netCDF libraries. Perhaps this will require rewriting
the Unidata-supplied macro for finding the netCDF installation.
But the user must be able to override
these using NETCDF_INC and NETCDF_LIB in the environment.
Many high-end machines have 4 sets of (incompatible) netCDF libraries
(ABI=32/64, Fortran r4/r8, plus sometimes multiple compiler versions),
and /usr/local/lib and /usr/local/include might point to the wrong
ones. If this is impossible, then at least make sure that
user-specified NETCDF_INC and NETCDF_LIB precede the /usr/local
paths.
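A minimal sketch of that precedence rule, using the NETCDF_INC and
NETCDF_LIB names from this thread (the actual configure.in wiring is
still to be written, and the /opt path below is invented to play the
role of a user override):

```shell
# Pretend the user exported a 64-bit netCDF location:
NETCDF_INC=/opt/netcdf64/include
NETCDF_LIB=/opt/netcdf64/lib
# Fall back to the Unidata defaults only when the user set nothing:
NETCDF_INC=${NETCDF_INC:-/usr/local/include}
NETCDF_LIB=${NETCDF_LIB:-/usr/local/lib}
# Prepend, so user-specified paths shadow any /usr/local copy:
CPPFLAGS="-I${NETCDF_INC} ${CPPFLAGS}"
LDFLAGS="-L${NETCDF_LIB} ${LDFLAGS}"
echo "$CPPFLAGS"
```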
> Is there a reason not to have nco_c++/tst.cc load ../../data/in.nc
> instead of making the symbolic link?
No, it could work either way. The only reason was that MY_DAT_DIR
can be set to any directory, not just ../../data.
> I haven't done anything with the OPENMP stuff yet.
That's fine, getting the basic builds done is more important.
It's great to see the light at the end of the tunnel.
Thanks!
Charlie