I think we can get the ABI value with autoconf; the question is how easily or directly. This is out of my league, so I'll need a little help implementing it. Off the top of my head, there are a couple of autoconf macros that might get it directly. One is AC_CHECK_SIZEOF(type), where 'type' is something like 'int' or 'double'. On my Linux box, 'int' yields SIZEOF_INT=4 and 'double' yields SIZEOF_DOUBLE=8. Would we be able to get it that way?
If not, there is AC_TRY_COMPILE(function). This macro tests whether the compiler can compile the given function. Is there a code snippet we could give that would succeed or fail for a 32 vs. 64 bit ABI?
At the very least, we can always write our own macro that uses any program available on the machine and analyzes its output. There are macros to test whether the desired program exists before trying to execute it, and if it all fails, we can default to 32 bit (unless explicitly given --enable-ABI64, which can override these tests if we want). So, is there a small script that can give us this information?
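For concreteness, here is roughly the configure.in fragment I have in mind for the AC_CHECK_SIZEOF idea (just an untested sketch; the cache variable name is my guess at what autoconf generates):

dnl Sketch: check the pointer size; autoconf defines SIZEOF_VOID_P in config.h
AC_CHECK_SIZEOF(void *)
dnl Branch on the (presumed) cache variable to pick a default ABI
if test "$ac_cv_sizeof_void_p" = 8; then
  nco_abi=64
else
  nco_abi=32
fi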
rorik
It's important not to get the wrong impression of what I mean by ABI. I think we may be talking about two related but different things. What I mean by ABI is the address space available to the executable. If sizeof(void *) = sizeof(int *) = sizeof(float *) = 8, then the application has a 64 bit address space. So it's the size of the pointers that's germane, not the precision of the floats or the size of the maximum int. 64 bit ABI and 32 bit ABI objects may not be interlinked, so on platforms that support both, separate versions of each library (including libnetcdf.a) must be available. Usually the names of the 32 and 64 bit libraries are the same but their locations are different, e.g., /usr/local/lib32 and /usr/local/lib64.
Given all of that, it seems like the autoconf way might be to test AC_TRY_LINK against the specified netCDF library. The attempted link should be between a simple function compiled with the specified ABI and the netCDF library. This will tell autoconf whether the netCDF library that was found is 32 or 64 bit. Of course the user should be able to specify the location of the netCDF library at ./configure time to be sure the preferred ABI is used instead of the default ABI. This may not make sense to you unless you have compiled in the confusing world of 64 bit chips before.
Just to summarize: 64 bit ABI is only available on 64 bit platforms. If the first libnetcdf found by autoconf is 64 bit, then 64 bit ABI compiler/linker switches MUST BE used. Therefore I recommend not implementing a --enable-ABI64 switch at all. The whole procedure should be automatic and depend on what ABI the libnetcdf is found to be. The user should be allowed to specify the location of libnetcdf at ./configure time.
Does this make sense to you?
So yes, the code snippet is easy to write. On any 64 bit platform (e.g., AIX, SGI), see whether a trivial program, e.g.,

#include <netcdf.h>
/* nc_inq_libvers() is defined in libnetcdf.a, so this links only if the ABIs match */
int main(){const char *foo;foo=nc_inq_libvers();return 0;}

successfully links against libnetcdf.a. If so, use the default ABI compiler/linker flags. If not, try the non-default ABI compiler/linker flags.
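In configure.in that could look something like this (an untested sketch on my part; -q64 is just the AIX xlc example, other compilers use different switches):

dnl Sketch: first try linking the trivial program with the default ABI flags
LIBS="-lnetcdf $LIBS"
AC_TRY_LINK([#include <netcdf.h>],
[const char *ver; ver = nc_inq_libvers();],
nco_netcdf_link=yes, nco_netcdf_link=no)
dnl If that fails, retry with the non-default (here, 64 bit) ABI flags
if test "$nco_netcdf_link" = no; then
  CFLAGS="$CFLAGS -q64"
  LDFLAGS="$LDFLAGS -q64"
  AC_TRY_LINK([#include <netcdf.h>],
  [const char *ver; ver = nc_inq_libvers();],
  nco_netcdf_link=yes, nco_netcdf_link=no)
fi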
IMHO the default should be 64 bit ABI on platforms which support it, as this is the wave of the future.
I hope this is clearer and something for which you can implement the infrastructure. You may even be able to test it at places like the SourceForge compile farm on their 64 bit machines. If not, I'll be happy to test and give feedback.
Charlie
OK, I think I am getting the idea. Thanks for the detailed explanation. We could use AC_CHECK_SIZEOF(int*) and see whether it is 4 or 8. This macro makes a small program that simply checks sizeof(int*). For architectures that support both, I'm guessing the result is dependent on the compiler used and/or the flags given (-q64 with xlc on AIX, I think?). So if we set the compiler and flags first in autoconf, then check sizeof(int*) with AC_CHECK_SIZEOF(int*), we should know whether we are compiling for 32 or 64 bit, right?
Even if that works, we should probably do the link test you mention above, because if we attempt to link a 32-bit compiled NCO with a 64-bit compiled netCDF (or vice versa), we'll crash, right? We can set up AC_CHECK_LIB to search a user-given path first when it looks for libnetcdf, so you can specify the 32 or 64 bit library preferentially if you happen to have both.
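Something like this hypothetical --with-netcdf option is what I am picturing (a sketch, not tested):

dnl Sketch: let the user point configure at a particular netCDF installation
AC_ARG_WITH(netcdf,
[  --with-netcdf=DIR       directory where netCDF is installed],
[CPPFLAGS="-I$withval/include $CPPFLAGS"
LDFLAGS="-L$withval/lib $LDFLAGS"])
dnl The user-given -L path now precedes the default search directories
AC_CHECK_LIB(netcdf, nc_inq_libvers)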
Nope, I've never dealt with the 32/64 bit option. I'll check out the SourceForge options and see if I can figure this out further. Thanks.
rorik
I looked through the generated configure script, and it looks like we should know this from $host_cpu and $host_os. For example, we might have $host_os as aix3* | aix4* | aix5* and then check if $host_cpu is ia64.
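For example, something like this (a sketch; the host patterns and switches are guesses I would have to verify):

dnl Sketch: pick candidate 64 bit ABI switches from the canonical host
case "$host_os" in
  aix4* | aix5*) abi64_flags='-q64' ;;       # xlc
  solaris2*)     abi64_flags='-xarch=v9' ;;  # Sun cc
  irix6*)        abi64_flags='-64' ;;        # SGI cc
  *)             abi64_flags='' ;;
esac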
Anyway, the previous idea wouldn't work because sizeof(int*) is dependent on the compiler flags (at least with my solaris8 system), which you wouldn't know ahead of time if the point is to check whether 64-bit is available.
So, we should probably assume 64-bit if the $host_cpu supports it, and then try to link libnetcdf using 64-bit compiler flags and make sure it works. If it doesn't, I guess that would mean that libnetcdf was compiled 32-bit although the architecture would have supported 64-bit. In any case, NCO should be built to work with libnetcdf, whether it is 32 or 64. The previous comments about user-specified library locations for libnetcdf are still relevant and would allow for building two versions of NCO if two versions of libnetcdf are available.
rorik
I'll not respond to the previous message since this one seems to supersede it.
> Anyway, the previous idea wouldn't work because sizeof(int*) is dependent on
> the compiler flags (at least with my solaris8 system), which you wouldn't know
> ahead of time if the point is to check whether 64-bit is available.
Yes, this is why I suggested attempting to check whether the simple program correctly links against the supplied netCDF library. If it doesn't, then configure should try compiling the test program with the non-default ABI, if available, and see if the test program then links with the supplied libnetcdf.a.
> So, we should probably assume 64-bit if the $host_cpu supports it, and then
> try to link libnetcdf using 64-bit compiler flags and make sure it works. If
> it doesn't, I guess that would mean that libnetcdf was compiled 32-bit although
> the architecture would have supported 64-bit. In any case, NCO should be built
> to work with libnetcdf, whether it is 32 or 64. The previous comments about
> user-specified library locations for libnetcdf are still relevant and would
> allow for building two versions of NCO if two versions of libnetcdf are
> available.
I think this is exactly right and I hope it is feasible to do it this way.
Thanks,
Charlie
Yep, the ABI thing is not going to be easy. I looked at GNU GMP's configure.in (> 2000 lines!) because it does all sorts of testing to determine the best ABI. In summary, autoconf has no native support for determining the ABI, so we'll have to write our own tests.
I'm having trouble compiling libnetcdf.a on my Solaris machine with 64-bit (using the flag -xarch=v9), and until I can do that, I won't be able to find out if my autoconf tests for the ABI stuff are working or not.
On a different front, I added stuff to install the man pages, compile the c++ library, and even run the c++ test using 'make check'. It will also make the documentation, but there is something wrong in the texinfo source around line 2700 in the ncap netCDF Arithmetic Processor node.
I added the stripping -s linker flag under --enable-optimize for gcc, and it helps considerably with reducing executable size. I haven't tested other linkers.
Except for the ABI stuff, I have the other compiler flags set for the architectures and compilers in the current Makefile for the different --enable-options. The mp/openmp stuff has not been touched either.
Is there a reason that the -ansi flag is used with gcc on win32 and the RS6K but nowhere else?
Does Cray not support -O2 but only -O when using cc and optimizing (current Makefile)? The others all use -O2 when optimizing.
We'll have to remove the -g flag, I think, when optimizing using gcc. I read something in GMP's configure.in to the effect that optimization will get overridden otherwise. I haven't checked it.
It looks like I'm going to be out of town next week, so I may not get back to it until I return. My current effort is at http://puff.images.alaska.edu/temp/
but the CVS snapshot is probably getting old, July 3 I think.
rorik
> Yep, the ABI thing is not going to be easy. I looked at GNU GMP's configure.in
> (> 2000 lines!) because it does all sorts of testing to determine the best ABI.
> In summary, autoconf has no native support for determining the ABI, so we'll
> have to write our own tests.
Yes, I gathered that, but it is good to have it explicitly confirmed.
> I'm having trouble compiling libnetcdf.a on my Solaris machine with 64-bit (using
> the flag -xarch=v9), and until I can do that, I won't be able to find out if
> my autoconf tests for the ABI stuff are working or not.
Maybe submit this question to Unidata or the netCDF list; since Solaris is a supported OS, they should know the answer. Note that AIX requires appropriate switches to xlc, ld, _and_ ar, so Solaris probably does likewise.
> On a different front, I added stuff to install the man pages, compile the c++
> library, and even run the c++ test using 'make check'. It will also make the
> documentation, but there is something wrong in the texinfo source around line
> 2700 in the ncap netCDF Arithmetic Processor node.
Good. Also, 'make check' on NCO should do what 'make tst' does, I suppose, i.e., run nco_tst.sh.
> I added the stripping -s linker flag under --enable-optimize for gcc, and it
> helps considerably with reducing executable size. I haven't tested other
> linkers.
Good. The strip(1) command is another way to do this and appears portable
to all Unices.
> Except for the ABI stuff, I have the other compiler flags set for the architectures
> and compilers in the current Makefile for the different --enable-options. The
> mp/openmp stuff has not been touched either.
Good. As a final check, please verify that your configure reproduces all switches that the Makefile uses for OPS=O,R,D,X on all architectures.
> Is there a reason that the -ansi flag is used with gcc on win32 and the RS6K
> but nowhere else?
Yes, the system resolv.h file is not ANSI-compliant on Linux, last I checked. So either the relevant NCO code must change or -ansi cannot be used. WIN32 does not support resolv.h (last I checked), so the relevant code is #ifdef'd out on that platform and gcc -ansi is fine. Care to investigate this resolv.h stuff on the side?
> Does Cray not support -O2 but only -O when using cc and optimizing (current
> Makefile)? The others all use -O2 when optimizing.
I have no idea. Been a long time since I used Cray.
> We'll have to remove the -g flag I think when optimizing using gcc. I read
> something in GMP's configure.in to the effect that
> optimization will get overridden otherwise. I haven't checked it.
That's fine. Optimized code should not use -g, I think. But I also seem to remember the GNU GSL manual contradicting what you just said about GMP.
> It looks like I'm going to be out of town next week, so I may not get back to
> it until I return. My current effort is at
> http://puff.images.alaska.edu/temp/
> but the CVS snapshot is probably getting old, July 3 I think.
Bon voyage!
Thanks again,
Charlie
I did some more thinking about automagically setting the compiler and linker flags for different ABIs. Autoconf makes it simple to pass flags in by setting the environment variables CFLAGS, LDFLAGS, etc. You can also override the compiler choice by setting CC and/or CXX. So the question is: is that simple enough, or should NCO's configure script try to figure out whether additional flags need to be added? My first thought is that originally the user explicitly set these flags when libnetcdf.a was built, since Unidata's configure script doesn't check for different ABIs. They do give instructions for different builds in the installation documentation, but their configure script doesn't set anything.
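In other words, the installer would just do something like this (a hypothetical AIX example; I have not verified these exact switches):

CC=xlc CFLAGS=-q64 LDFLAGS=-q64 AR='ar -X64' ./configure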
However, I guess someone could have gotten the 64-bit binary and now wants to build NCO and doesn't know that additional flags need to be set. We could go the route of Unidata and add instructions in an install document, but that requires reading it, which has its own inherent problems.
I went ahead and made the tests anyway to see that it can be done. I built 32-bit and 64-bit libnetcdf.a on my Solaris machine, so now I can test 'configure'. I use AC_CHECK_LIB twice: first with the default compiler flags, and if that fails, I add the additional 64-bit flags and try again. If both fail, libnetcdf.a is not installed, cannot be found, or autoconf's choice of flags is still incorrect. If it succeeds, configuration continues, either with or without the additional flags. By trying the default flags first, the user can set CFLAGS, LDFLAGS, etc. with 64-bit stuff beforehand, and as long as libnetcdf.a was also built 64-bit, the first test works.
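The relevant configure.in fragment looks roughly like this (a sketch of what I described, reusing the hypothetical abi64_flags from the case-statement idea above; the cache variable name is my guess, and it must be cleared before the second try or autoconf will just reuse the first result):

dnl Sketch: first pass with the user's own flags
AC_CHECK_LIB(netcdf, nc_inq_libvers, nco_have_netcdf=yes, nco_have_netcdf=no)
if test "$nco_have_netcdf" = no; then
  dnl Second pass: add the 64-bit flags and clear the cached failure
  CFLAGS="$CFLAGS $abi64_flags"
  LDFLAGS="$LDFLAGS $abi64_flags"
  unset ac_cv_lib_netcdf_nc_inq_libvers
  AC_CHECK_LIB(netcdf, nc_inq_libvers, nco_have_netcdf=yes,
    AC_MSG_ERROR([could not link against libnetcdf.a]))
fi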
However, I wonder if this is worth the effort. It seems that anyone compiling 64-bit applications already knows they have to use these flags to get successful builds. Thus, they would set CFLAGS and LDFLAGS ahead of time, as they do for almost all other applications. Should we really be trying to help by setting them ourselves? If a compiler changes its required flags, we have to maintain the autoconf script to keep it from screwing things up. Perhaps this little bit of intervention by the installer (i.e., setting CFLAGS and LDFLAGS) is the best compromise between ease of installation and robustness of the install script.
What do you think?
rorik
You've clearly given this matter some in-depth attention. And since you are the one sweating through the implementation, we'll go with what you decide. Using the Unidata route, where you supply the known switches for the various architectures, is certainly acceptable. I simply cannot remember all the switches that need to be set, so, yes, we would need an install file with the full ./configure command for ABI=64 builds on the various architectures. This would keep configure.in simpler, and simpler to maintain. On the other hand, you've already done the work to automate the switches for the Sun platform, so clearly it could be done for other platforms.
I do not think compiler switches ever change for a given vendor; what happens more often is that new compilers need support, or new platforms get introduced. As you state, each method has its advantages. As the presumed maintainer of the system, the choice of which method to support should be yours. Note that should you decide to go the Unidata route, I would like you to send me the ./configure command for the AIX ABI=64 build, since that one is fairly complicated yet important to me in real life. It should provide a good test of whether all options can indeed be set on the command line.
Thanks,
Charlie