From: Jonathan W. <jw...@ph...> - 2007-08-27 02:48:21
|
> And it contains my merge introducing scons as a buildsystem.

Just out of interest, what is it about scons that makes it so much better
than auto* in the context of ffado? I know that numerous projects are
moving to scons citing "problems" with the auto* tools, but what specific
advantages does scons bring? Do we not risk replacing one set of
build-related problems with another?

In some ways it seems switching to "scons" (or any other relatively
new/experimental build system) is becoming a bit of a hobby amongst FOSS
projects.

Note that I'm not criticising the move to scons here as such - it's just
that to this point I've seen little technical justification as to why the
move is desirable for us (perhaps I missed a post).

> Using scons is quite easy, at least if you are used to the "./autogen.sh &&
> configure && make && make install":

The one thing which irritates me about scons is the lack of "uninstall"
functionality. While certainly not essential (or even bug-free) I have
found it useful to ensure that things are cleaned up when switching between
different versions of the same software.

Regards
  jonathan
|
From: Pieter P. <pi...@jo...> - 2007-08-27 08:40:30
|
Jonathan Woithe wrote:
>> And it contains my merge introducing scons as a buildsystem.
>
> Just out of interest, what is it about scons that makes it so much better
> than auto* in the context of ffado? I know that numerous projects are
> moving to scons citing "problems" with the auto* tools, but what specific
> advantages does scons bring? Do we not risk replacing one set of
> build-related problems with another?
>
> In some ways it seems switching to "scons" (or any other relatively
> new/experimental build system) is becoming a bit of a hobby amongst FOSS
> projects.
>
> Note that I'm not criticising the move to scons here as such - it's just
> that to this point I've seen little technical justification as to why
> the move is desirable for us (perhaps I missed a post).

The problem with auto* is that it's rather obscure how to achieve certain
goals, and the usual workflow seems to be trial-and-error until it works.
This results in horribly complex stuff; e.g. implementing the conditional
build of device support was a real PITA.

The easiest thing to show is how scons reduces the amount of boilerplate
code:

[start of configure.ac excerpt]
...
dnl --- Build BeBoB code?
AC_ARG_ENABLE(bebob,
    AC_HELP_STRING([--enable-bebob], [build BeBoB support (default=yes)]),
    [case "${enableval}" in
        yes) build_bebob=true;;
        no)  build_bebob=false;;
        *) AC_MSG_ERROR(bad value ${enableval} for --enable-bebob) ;;
    esac],
    [build_bebob=true])
...
dnl --- Build support for all supported devices?
AC_ARG_ENABLE(all-devices,
    AC_HELP_STRING([--enable-all-devices],
                   [build support for all supported devices (default=no)]),
    [case "${enableval}" in
        yes) build_all=true;;
        no)  build_all=false;;
        *) AC_MSG_ERROR(bad value ${enableval} for --enable-all-devices) ;;
    esac],
    [build_all=false])
...
dnl Device classes
if test "${build_all}" = true; then
    build_bebob=true
    build_genericavc=true
    build_motu=true
    build_dice=true
    build_metric_halo=true
    build_bounce=true
    build_rme=true
fi;
...
build_amdtp=false
if test "${build_bebob}" = true; then
    CFLAGS="$CFLAGS -DENABLE_BEBOB"
    CXXFLAGS="$CXXFLAGS -DENABLE_BEBOB"
    supported_devices="${supported_devices}BeBoB "
    build_amdtp=true
fi;
...
AM_CONDITIONAL(BUILD_BEBOB,test "${build_bebob}" = true)
...
AM_CONDITIONAL(BUILD_AMDTP,test "${build_amdtp}" = true)
[end of configure.ac excerpt]

[start of src/Makefile.am excerpt]
...
if BUILD_BEBOB
libffado_la_SOURCES += $(bebob_src)
...
endif
...
if BUILD_AMDTP
libffado_la_SOURCES += $(amdtp_src)
...
endif
[end of src/Makefile.am excerpt]

Becomes in scons style:

[begin of SConstruct excerpt]
opts.AddOptions(
    BoolOption( "ENABLE_BEBOB", "Enable/Disable the bebob part.", True ),
    BoolOption( "ENABLE_ALL", "Enable/Disable support for all devices.", False ),
)
...
if env['ENABLE_ALL']:
    env['ENABLE_BEBOB'] = True
...
if env['ENABLE_BEBOB']:
    env.AppendUnique( CCFLAGS=["-DENABLE_BEBOB"] )
[end of SConstruct excerpt]

[begin of src/SConscript excerpt]
if env['ENABLE_BEBOB']:
    source += bebob_source
...
if env['ENABLE_BEBOB'] or env['ENABLE_GENERICAVC'] or env['ENABLE_DICE'] or env['ENABLE_BOUNCE']:
    source += amdtp_source
[end of src/SConscript excerpt]

But the main issue with autotools for me is that everything you do can have
unexpected side-effects. I've found it to be a bit indeterministic in its
behavior.

Another example is the fact that I managed to import the dbus c++ bindings
into our scons build system in a few hours, with the lib being built as a
static library using its own build flags (i.e. not carrying the build flags
of libffado). It was the first time I used scons. From my experience I don't
think I would have been able to achieve the same result with autotools in
the same timespan.

And for developers there is another major advantage: if you add a file, you
just have to add it to the scons script.
No hassle with "autoreconf -fis;
./configure --what-did-I-use-here-again-head-config.log-ah-i-see-copy-paste;
make".

One disadvantage is that it doesn't integrate as nicely with kdevelop, but
the autotools integration was broken anyway by the introduction of the
conditional build of device support.

Greets,

Pieter

>> Using scons is quite easy, at least if you are used to the "./autogen.sh &&
>> configure && make && make install":
>
> The one thing which irritates me about scons is the lack of "uninstall"
> functionality. While certainly not essential (or even bug-free) I have
> found it useful to ensure that things are cleaned up when switching between
> different versions of the same software.

I've read somewhere that it's just 'scons -c install'. But if not, a scons
uninstall can be implemented fairly easily IMHO.

Greets,

Pieter
|
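The "negation of the install step" that 'scons -c install' performs can be illustrated with a small standalone Python sketch (a hypothetical helper, not code from the ffado tree): every path the install step would have created is checked for presence and removed.

```python
import os

def uninstall(manifest):
    """Inverse of an install step: remove every file recorded in the
    install manifest, skipping paths that are already gone. This mirrors
    the idea behind `scons -c install` cleaning its installed targets."""
    removed = []
    for path in manifest:
        if os.path.exists(path):
            os.remove(path)
            removed.append(path)
    return removed
```

A real build system would also have to remove directories it created and cope with files modified after install; the sketch ignores both.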
From: Francois.ernoult <fra...@pa...> - 2007-08-27 09:24:49
|
> > The one thing which irritates me about scons is the lack of "uninstall"
> > functionality. While certainly not essential (or even bug-free) I have
> > found it useful to ensure that things are cleaned up when switching between
> > different versions of the same software.
>
> I've read somewhere that it's just 'scons -c install'. But if not, scons
> uninstall can be implemented fairly easy IMHO.

Yes it is, but it hasn't worked since rev 571.

Francois
|
From: Arnold K. <ar...@ar...> - 2007-08-27 09:32:58
|
On Monday, 27 August 2007, Jonathan Woithe wrote:
> > And it contains my merge introducing scons as a buildsystem.
> Just out of interest, what is it about scons that makes it so much better
> than auto* in the context of ffado? I know that numerous projects are
> moving to scons citing "problems" with the auto* tools, but what specific
> advantages does scons bring?

As ppalmers already presented, it reduces code and work and is easier to
maintain. Especially since ppalmers already knows python, whereas with auto*
you have to learn yet another script-language (or two or three) which are
not even object oriented (which can really get on your tits if you are used
to that way of thinking).

After all I think "make" isn't bad. If you know how to write makefiles, you
can do small jobs pretty easy and fast. The problem is that after a certain
point you need more automatism, and then you search for other tools to
create the makefiles. KDE switched to cmake to create these, and Qt provides
their qmake. They both seem to use the full featureset of make (real
parallel builds, for example), while the grown-over-the-years autotools have
implemented so many features that it's hell to maintain a project with
autotools, and it doesn't even use the full featureset of make.

We could have switched to cmake (or even qmake) or something else. The real
decision for scons (apart from the advantages listed above and below and by
ppalmers) was simply that I know scons (I am using it for my private and
work projects) and was willing to spend some time to do the transition. If
someone else had stepped up to port the buildsystem to something else, that
would have been the decision. But in free software most decisions of this
kind aren't made by long and glorious discussions on mailinglists and irc,
but by people actually doing the stuff.
So that is why ffado is using scons now :-)

> Do we not risk replacing one set of
> build-related problems with another?

Yes. But we replace many problems with far fewer problems (which are easier
to solve, because the script-language is less obscure).

> In some ways it seems switching to "scons" (or any other relatively
> new/experimental build system) is becoming a bit of a hobby amongst FOSS
> projects.

Which shows a) that more and more people realize the shortcomings of auto*
and b) that there are true alternatives available. I wouldn't call scons or
cmake experimental just because they are not as old as the autotools; they
are relatively new, though.

> Note that I'm not criticising the move to scons here as such - it's just
> that to this point I've seen little technical justification as to why
> the move is desirable for us (perhaps I missed a post).

Apart from my big paragraphs above, you should look at your daily usage of
auto* and now scons (and at ppalmers' mails) and see which one means less
work for you and all the devs. :-)

> The one thing which irritates me about scons is the lack of "uninstall"
> functionality. While certainly not essential (or even bug-free) I have
> found it useful to ensure that things are cleaned up when switching between
> different versions of the same software.

You should read my announcement mail again; it states "scons -c install" as
the replacement for "make uninstall". It's not another target "uninstall"
to be defined in the scripts, but simply a negation of the install target:
everything that would be installed is checked for presence on the system and
uninstalled.

Arnold
--
visit http://www.arnoldarts.de/
---
Hi, I am a .signature virus. Please copy me into your ~/.signature and send
me to all your contacts.
After a month or so log in as root and do a rm / -rf. Or ask your
administrator to do so...
|
From: Pieter P. <pi...@jo...> - 2007-08-27 09:47:16
|
Arnold Krille wrote:
> On Monday, 27 August 2007, Jonathan Woithe wrote:
>>> And it contains my merge introducing scons as a buildsystem.
>> Just out of interest, what is it about scons that makes it so much better
>> than auto* in the context of ffado? I know that numerous projects are
>> moving to scons citing "problems" with the auto* tools, but what specific
>> advantages does scons bring?
>
> As ppalmers already presented, it reduces code and work and is easier to
> maintain. Especially since ppalmers already knows python, whereas with auto*
> you have to learn yet another script-language (or two or three) which are
> not even object oriented (which can really get on your tits if you are used
> to that way of thinking).
>
> After all I think "make" isn't bad. If you know how to write makefiles, you
> can do small jobs pretty easy and fast. The problem is that after a certain
> point you need more automatism and then you search for other tools to create
> the makefiles. KDE switched to cmake to create these and Qt provides their
> qmake. They both seem to use the full featureset of make (real parallel
> builds for example) while the grown-over-the-years autotools have implemented
> so many features that it's hell to maintain a project with autotools and it
> doesn't even use the full featureset of make.

One thing scons solved is indeed that scons -j2 works, while make -j2
didn't (it failed due to some dependency problem).

> We could have switched to cmake (or even qmake) or something else. The real
> decision for scons (apart from the advantages listed above and below and by
> ppalmers) was simply that I know scons (I am using it for my private and
> work projects) and was willing to spend some time to do the transition. If
> someone else had stepped up to port the buildsystem to something else, that
> would have been the decision. But in free software most decisions of this
> kind aren't made by long and glorious discussions on mailinglists and irc
> but by people actually doing the stuff. So that is why ffado is using scons
> now :-)

Somebody on IRC used the beautiful term 'do-ocracy' for this... (I think
it was drobilla)

Greets,

Pieter
|
From: Michael G. <mg...@ti...> - 2007-08-28 06:08:10
|
> > After all I think "make" isn't bad. If you know how to write makefiles, you
> > can do small jobs pretty easy and fast.

If you know how to write makefiles, and especially if you are comfortable
with recursive makefiles, you can do extremely complex jobs pretty easily
and fast, with or without additional external tools.

> > KDE switched to cmake to create these and Qt provides their qmake.

Rosegarden moved to Scons only to drop it shortly after that, and now uses
cmake. Ardour uses Scons; many if not most gnutools use the autotools.
Does that prove anything? I don't think so!

> > They both seem to use the full featureset of make (real parallel
> > builds for example) while the grown-over-the-years autotools have implemented
> > so many features that it's hell to maintain a project with autotools and it
> > doesn't even use the full featureset of make.

Huh??? Apart from that not being a sensible quality measure in the first
place, what does that tell us? Would a claim "Scons doesn't use the full
featureset of Python" say anything about Scons? Again, I don't think so!

> One thing scons solved is indeed that scons -j2 works, while make -j2
> didn't (failed due to some dependency problem).

Maybe it's me, but I usually use make -j 3 when building a project and it
usually works for me.

> Somebody on IRC used the beautiful term 'do-ocracy' for this... (I think
> it was drobilla)

Yes, that's indeed a strong argument for whatever is chosen, though it
shouldn't be the only one.

Anyway, apart from ranting I'd like to contribute some enhancement to the
SConstruct file to solve the lib/lib64 issue.

Adding the following lines somewhere early provides the means to
distinguish between various CPUs etc. It even works for xcompiles.
snip -- snip -- snip -- snip -- snip -- snip -- snip -- snip -- snip
# guess at the platform, used to define compiler flags
config_guess = os.popen("./config.guess").read()[:-1]

config_cpu = 0
config_arch = 1
config_kernel = 2
config_os = 3

config = config_guess.split("-")

print "system triple: " + config_guess
snip -- snip -- snip -- snip -- snip -- snip -- snip -- snip -- snip

Later down you then can change

    env['libdir'] = os.path.join( env['PREFIX'], "lib" )

into

    if config[config_cpu] == 'x86_64':
        env['libdir'] = os.path.join( env['PREFIX'], "lib64" )
    else:
        env['libdir'] = os.path.join( env['PREFIX'], "lib" )

During install I then get

    Install file: "src/libffado.so" as "/usr/local/lib64/libffado.so"
    Install file: "libffado.pc" as "/usr/local/lib64/pkgconfig/libffado.pc"

This also provides infrastructure for cpu/arch/platform specific tweaks
like changes to specific compilerflags etc.

Last not least, w/r to my problems using 'sudo scons install': I still
haven't got that working, and I don't know why, because issuing
su -c 'scons install' works for me.

Best,
Michael
--
Vote against SPAM - see http://www.politik-digital.de/spam/
!tagline
Michael Gerdau       email: mg...@ti...
GPG-keys available on request or at public keyserver
|
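The same decision can be made without shelling out to config.guess at all: Python's stdlib platform module reports the CPU type directly. A self-contained sketch under that assumption (the helper name is made up, and only x86_64 is special-cased, exactly as in the snippet above):

```python
import os
import platform

def guess_libdir(prefix, machine=None):
    """Return PREFIX/lib64 on x86_64 and PREFIX/lib everywhere else,
    mirroring the config.guess-based branch above."""
    if machine is None:
        machine = platform.machine()  # e.g. "x86_64", "i686", "ppc"
    subdir = "lib64" if machine == "x86_64" else "lib"
    return os.path.join(prefix, subdir)
```

This sidesteps the dependency on a config.guess script shipped with the source tree, at the cost of losing the rest of the system triple (vendor, kernel, OS).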
From: Jeremy K. <jk...@oz...> - 2007-08-28 07:12:34
|
Hi all,

> Anyway, apart from ranting I'd like to contribute some enhancement to
> the SConstruct file to solve the lib/lib64 issue.
>
> Adding the following lines somewhere early does provide for means to
> distinguish between various CPUs etc. It even works for xcompiles.

[snip]

> Later down you then can change
>     env['libdir'] = os.path.join( env['PREFIX'], "lib" )
> into
>     if config[config_cpu] == 'x86_64':
>         env['libdir'] = os.path.join( env['PREFIX'], "lib64" )
>     else:
>         env['libdir'] = os.path.join( env['PREFIX'], "lib" )

As the token powerpc user here, I feel it's my duty to step in - we need
to be a little more careful with the issues here.

Firstly, x86_64 isn't the only machine that'll support 64-bit binaries -
64-bit powerpc machines will run in both 32- and 64-bit modes, but it's
usually up to the distro to choose whether to provide 32- or 64-bit (or
both) userland environments. Some do 64-bit only, some 32, some provide
both and use /lib & /lib64 to provide the namespace separation.

And then you have the compilers - some will output 32-bit binaries, some
will output 64-bit binaries, and some will depend on the -m64 or -m32 flag
to determine the target of generated files.

So - $prefix/lib64 should only be used when the actual ELF output is
64-bit, and $prefix/lib for 32, regardless of the CPU type. For a
pure-64-bit machine, it's also OK to put 64-bit binaries in $prefix/lib
(ie, there's no reason to have anything other than one lib dir).

So, one solution may be just to do nothing, and leave it up to the
distro packager to specify $libdir correctly when building for a
specific target. I think this is the best way to go, as it's really up
to the distro to specify the arrangements of libraries for different
targets.

Otherwise, we'd need to determine whether this is a 32- or 64-bit build
(which isn't trivial), and then fiddle with the paths appropriately.
Just a couple of thoughts :) Cheers, Jeremy |
From: Michael G. <mg...@ti...> - 2007-08-28 07:44:07
|
> > Later down you then can change
> >     env['libdir'] = os.path.join( env['PREFIX'], "lib" )
> > into
> >     if config[config_cpu] == 'x86_64':
> >         env['libdir'] = os.path.join( env['PREFIX'], "lib64" )
> >     else:
> >         env['libdir'] = os.path.join( env['PREFIX'], "lib" )
>
> As the token powerpc user here, I feel it's my duty to step in - we need
> to be a little more careful with the issues here.

I'm aware the above doesn't solve the issues for powerpc. That's one of
the reasons I didn't provide a patch but a more general description of
what could be done. Since I have no ppc, I didn't try to guess how they
might work.

On the other hand: the above code snippet should not change anything for
ppc either, because they should take the else branch, which is what they
have now.

> Firstly x86_64 isn't the only machine that'll support 64-bit binaries -
> 64-bit powerpc machines will run in both 32- and 64-bit modes, but it's
> usually up to the distro to choose whether to provide 32- or 64-bit (or
> both) userland environments. Some do 64-bit only, some 32, some provide
> both, and use /lib & /lib64 to provide the namespace separation.

While I have not actually tried to do this on a ppc system, I was under
the impression that the system triple returned by config.guess holds all
the info required to determine this and other such properties.

I'm looking forward to all you ppc guys adding additional code to deal
with this.

> And then you have the compilers - some will output 32-bit binaries, some
> will output 64-bit binaries, some will depend on the -m64 or -m32 flag
> to determine the target of generated files.

Yes, certainly. And flags w/r to altivec/sse/sse2/... should depend on
this too. As I wrote above, I don't consider my snippet "work done", just
a first step which hopefully provides the framework.
> So - $prefix/lib64 should only be used when the actual ELF output is
> 64-bit, and $prefix/lib for 32, regardless of the CPU type. For a
> pure-64-bit machine, it's also OK to put 64-bit binaries in $prefix/lib
> (ie, there's no reason to have anything other than one lib dir)

On x86_64 machines config.guess *does* (well, should ;) take care of it.

> So, one solution may be just to do nothing, and leave it up to the
> distro packager to specify $libdir correctly when building for a
> specific target. I think this is the best way to go, as it's really up
> to the distro to specify the arrangements of libraries for different
> targets.

I don't see why the script shouldn't try an educated guess when there is
one. And at least for x86_64 there is.

> Otherwise, we'd need to determine whether this is a 32- or 64-bit build
> (which isn't trivial), and then fiddle with the paths appropriately.

AFAIK config.guess *does* check that too, at least to a certain extent.
And for cornercases there is always the option to add specific flags. But
I don't think one should give up on handling 95% of all situations
automatically just because there remain 5% that can't be guessed
correctly.

Just my thoughts.

Best,
Michael
--
Vote against SPAM - see http://www.politik-digital.de/spam/
!tagline
Michael Gerdau       email: mg...@ti...
GPG-keys available on request or at public keyserver
|
From: Jeremy K. <jk...@oz...> - 2007-08-28 08:39:54
|
Hi Michael,

> I'm aware the above doesn't solve the issues for powerpc. That's one
> of the reasons I didn't provide a patch but a more general
> description as to what could be done.

Yeah, that's what I'd like to do too. I don't think the solution needs
to be specific to x86 or powerpc. This is more generic stuff that is
relevant to any machine that can run binaries of more than one
architecture.

> On the other hand:
> the above code snippet should not change anything for ppc either
> because they should take the else branch which is what they have now.

Is it still possible to explicitly set libdir though? It looks like the
env object is just a python dict and we're overwriting whatever the
user specifies. I'm no scons hacker, but something like the following
would be better:

    if not env.has_key('libdir'):
        env['libdir'] = os.path.join(env['PREFIX'], 'lib')

(and the same for bindir, includedir, etc). I can do up a patch for this
if it's the right way to go.

> While I have not actually tried to do this on a ppc system I was
> under the impression that the system triple returned by config.guess
> holds all info required to actually determine this and other such
> properties.
>
> I'm looking forward to all you ppc guys to add additional code to
> deal with this.

My argument is that there is no code to add; instead, we leave the
defaults 'as-is' and let the user/distro specify where things should
go, if they vary from the default. Less work for everybody! :D

> As I wrote above I don't consider my snippet "work done", just a
> first step which hopefully provides for the framework.

Oh, I'm definitely not criticising, just adding a POV :)

> On x86_64 machines config.guess *does* (well should ;) take care of
> it.

That'll tell you the architecture of the system, but it won't tell you
what kind of binaries will be produced by default.
For example, on my 32-bit powerbook:

    powerpc-unknown-linux-gnu

And on a 64-bit g5, using ubuntu feisty (ie, defaulting to 32-bit
userspace):

    powerpc64-unknown-linux-gnu

In the latter case, even though it's a 64-bit CPU, you still probably
want to build 32-bit binaries (which the default gcc produces) and put
them in $PREFIX/lib. We have other machines that have the same
config.guess output, but which we need to build 64-bit binaries for (and
the default compiler does this).

So... since, by default, the user is using the default compiler with the
default flags, shouldn't libs go in the default location ($prefix/lib)?

I think we can assume that $PREFIX/lib is the preferred location for
default builds, even if it may be a symlink to lib32 or lib64.

If the user starts using a different compiler (ie, one which produces
binaries which aren't the default system arch, so don't belong in
$prefix/lib), shouldn't they be responsible for specifying somewhere
specific for the install location (eg, lib64)?

> I don't see why the script shouldn't try an educated guess when there
> is one. And at least for x86_64 there is.

This still depends on the distro though, not the machine's architecture.
You'll be guessing incorrectly if the user only has a 32-bit userland
on their x86_64 machine.

Cheers,

Jeremy
|
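The "default unless the user overrides it" behaviour discussed above can be sketched with a plain dict standing in for the scons env (the function and key names are illustrative, not the actual ffado SConstruct):

```python
import os

def apply_default_paths(env):
    """Fill in install directories only where the user has not already
    set them, so an explicit libdir (e.g. a lib64 override) survives."""
    prefix = env['PREFIX']
    for key, subdir in (('libdir', 'lib'),
                        ('bindir', 'bin'),
                        ('includedir', 'include')):
        # dict.setdefault only writes the key when it is absent
        env.setdefault(key, os.path.join(prefix, subdir))
    return env
```

The same pattern works for any other install path the build script exposes.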
From: Arnold K. <ar...@ar...> - 2007-08-28 09:19:44
|
On Tuesday, 28 August 2007, Jeremy Kerr wrote:
> > On the other hand:
> > the above code snippet should not change anything for ppc either
> > because they should take the else branch which is what they have now.
>
> Is it still possible to explicitly set libdir though? It looks like the
> env object is just a python dict and we're overwriting whatever the
> user specifies. I'm no scons hacker, but something like the following
> would be better:
>
>     if not env.has_key('libdir'):
>         env['libdir'] = os.path.join(env['PREFIX'], 'lib')
>
> (and the same for bindir, includedir, etc). I can do up a patch for this
> if it's the right way to go.

Well, the env doesn't have a libdir before. And it won't have one even if
you specify something on the commandline.

Going hand in hand with the lib64 vs. lib problem (*) is another one which
has to be addressed before the release, and that's the introduction of a
DESTDIR, because at least gentoo's ebuild installs the files to a sandbox
environment, makes a list of the files, and copies them to the root system
afterwards. And I think rpm works the same, at least. So maybe for a start
we could add a config-option for the lib64-thing and have a parameter
DESTDIR which will be used by the install-step...

Have fun,

Arnold

(*) I just realized that at least some projects (like libraw1394) don't
bother about that either. And while libs might go to lib64, the pc-files
seem to still be in lib/pkgconfig...
--
visit http://www.arnoldarts.de/
---
Hi, I am a .signature virus. Please copy me into your ~/.signature and send
me to all your contacts.
After a month or so log in as root and do a rm / -rf. Or ask your
administrator to do so...
|
From: Jeremy K. <jk...@oz...> - 2007-08-28 09:44:13
|
Arnold,

> Well, the env doesn't have a libdir before. And it won't have even if
> you specify something on the commandline.

Sorry, I'm still not a scons hacker - what does env contain? What's the
scons-standard way of installing something to $PREFIX/lib?

I'm surprised that we have to specify this stuff; are we mis-using scons?
Doesn't it do this by default?

> Going hand in hand with the lib64 vs. lib problem (*) is another one
> which has to be addressed before the release and that's introduction
> of a DESTDIR because at least gentoos ebuild installs the files to a
> sandbox environment, makes a list of the files and copies them to the
> root system afterwards. And I think rpm at least works the same. So
> maybe for a start we could add a config-option for the lib64-thing
> and have a parameter DESTDIR which will be used by the
> install-step...

Yep, but DESTDIR doesn't solve our problem - like you say, it is used to
specify an alternative root for the *entire* filesystem, so the packages
can be created. We shouldn't mess with DESTDIR, and it should be used as
the base prefix for all installed files during the install stage. ie,
specifying libdir as $PREFIX/lib should end up installing in
$DESTDIR/$PREFIX/lib.

When you say 'config-option for the lib64-thing', isn't that what libdir
is?

Cheers,

Jeremy
|
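The DESTDIR semantics described here - DESTDIR prepended verbatim, so a libdir of $PREFIX/lib lands in $DESTDIR/$PREFIX/lib inside the packaging sandbox - can be sketched as a small path helper (hypothetical, not part of the ffado SConstruct):

```python
import os

def install_path(prefix, subdir, destdir=""):
    """Compose the final install location. DESTDIR relocates the whole
    tree into a packaging sandbox, while the path baked into the package
    stays rooted at PREFIX."""
    path = os.path.join(prefix, subdir)
    if destdir:
        # os.path.join discards earlier components before an absolute
        # path, so strip the leading separator before prepending DESTDIR.
        path = os.path.join(destdir, path.lstrip(os.sep))
    return path
```

An ebuild or rpm spec would then call the install step with DESTDIR pointing at its sandbox and collect the file list from there.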
From: Jeremy K. <jk...@oz...> - 2007-08-28 09:55:01
|
Arnold,

> Yep, but DESTDIR doesn't solve our problem

Sorry, on re-reading, you've introduced this as a different issue :)

However, I'd be surprised if scons doesn't support DESTDIR out of the
box; it's a fairly standard thing. Perhaps it already 'just works'?

(or should I start learning scons?)

Jeremy
|
From: Michael G. <mg...@ti...> - 2007-08-28 09:47:52
|
> Is it still possible to explicitly set libdir though? It looks like the
> env object is just a python dict and we're overwriting whatever the
> user specifies. I'm no scons hacker, but something like the following
> would be better:
>
>     if not env.has_key('libdir'):
>         env['libdir'] = os.path.join(env['PREFIX'], 'lib')
>
> (and the same for bindir, includedir, etc). I can do up a patch for this
> if it's the right way to go.

Yes, certainly. I didn't add anything along that line because the current
SConstruct doesn't do that, though. As a sidenote: env[] is explicitly
populated by scons. If you wish to test for shell environment variables,
you should use os.environ['variablename'].

> My argument is that there is no code to add, instead we leave the
> defaults 'as-is', and let the user/distro specify where things should
> go, if they vary from the default. Less work for everybody! :D

The SConstruct did explicitly decide to install into PREFIX/lib regardless
of whatever I did. There was no code that honored any external
specification whatsoever (read: some code is required either way).
Applying a change like the one I suggested now chooses PREFIX/lib64 for
all x86_64 systems, which IMO is an improvement.

> That'll tell you the architecture of the system, but it won't tell you
> what kind of binaries will be produced by default.
>
> For example, on my 32-bit powerbook:
>
>     powerpc-unknown-linux-gnu
>
> And on a 64-bit g5, using ubuntu feisty (ie, defaulting to 32-bit
> userspace)
>
>     powerpc64-unknown-linux-gnu

So we have powerpc and powerpc64, the former presumably 32bit, at least
in a linux kernel context.

Would a 64bit binary work on your ubuntu feisty? If yes, then I don't
understand why it should claim 'powerpc64'. Apart from that, both your
distros should provide a proper config.guess that does not give 'unknown'.
Last not least, one might decide that using config.guess isn't the best
idea anyway, because that's part of the autotools. Or one decides to
provide a local copy of it to be sure it's there.

> In the latter case, even though it's a 64-bit CPU, you still probably
> want to build 32-bit binaries (which the default gcc produces), and put
> them in $PREFIX/lib. We have other machines that have the same
> config.guess output, but which we need to build 64-bit binaries for
> (and the default compiler does this).

I'm happy with adding options to allow for particular build scenarios.
I know the autotools have it OOTB, and I expect this problem to be solved
for scons as well (after all, this project can't be the first to have it :)

> So... since, by default, the user is using the default compiler, using
> the default flags, shouldn't libs go in the default location
> ($prefix/lib)?
>
> I think we can assume that $PREFIX/lib is the preferred location for
> default builds, even if it may be a symlink to lib32 or lib64.

On openSUSE 10.2 (and previous SuSE Linux) this definitely isn't true.
Don't know whether that'll change with openSUSE 10.3 (but I will check
these days).

> If the user starts using a different compiler (ie, one which produces
> binaries which aren't the default system arch, so don't belong in
> $prefix/lib), shouldn't they be responsible for specifying somewhere
> specific for the install location (eg, lib64)?

Well, yes. So far I know of no way to specify that with the current
SConstruct (i.e. someone would have to add it).

> > I don't see why the script shouldn't try an educated guess when there
> > is one. And at least for x86_64 there is.
>
> This still depends on the distro though, not the machine's architecture.
> You'll be guessing incorrectly if the user only has a 32-bit userland
> on their x86_64 machine.

I'm not sure I understand what you mean. I'm not claiming my proposed
snippet solves all problems for all users.
So far it leaves everything but x86_64 alone (i.e. ppc, for example, sees
no change for better or worse). I don't know what x86_64 systems that are
installed to default to 32bit do provide -- these are the only systems
that might suffer a regression.

For all others it is either "no change" or an improvement, regardless of
the distribution, AFAICT.

Best,
Michael
--
Vote against SPAM - see http://www.politik-digital.de/spam/
!tagline
Michael Gerdau       email: mg...@ti...
GPG-keys available on request or at public keyserver
|
From: Michael G. <mg...@ti...> - 2007-08-28 10:10:30
|
> Sorry, I'm still not a scons hacker - what does env contain? what's the
> scons-standard way of installing something to $PREFIX/lib ?

I'm no scons hacker either - however, in all SConstruct files I have read so far you have to tell scons where to put the libs.

> I'm surprised that we have to specify this stuff, are we mis-using
> scons? doesn't it do this by default?

Again, I'm not a scons hacker, which is to say take the following with a grain of salt (or two). That said, it is my impression that scons allows for an easy quickstart, but once you come to the gory details of cross-platform, cross-architecture builds etc. it isn't exactly nice or easy either. It is also my impression that it is not yet on par feature-wise with the autotools when it comes to automatically detecting complex environments. But then, I stopped trying after I had to reinvent the n-th wheel...

Best,
Michael
--
Vote against SPAM - see http://www.politik-digital.de/spam/
!tagline
Michael Gerdau email: mg...@ti...
GPG-keys available on request or at public keyserver
|
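For readers unfamiliar with scons, the pattern Michael describes — the SConstruct spelling out the install destination itself rather than scons deducing it — boils down to computing a path and handing it to the Install builder. Here is a minimal sketch of that path logic in plain Python (the function name is illustrative, not taken from FFADO's actual SConstruct):

```python
import os

def lib_install_dir(prefix):
    """Compute the library install destination explicitly.

    As discussed in the thread, scons does not pick a libdir for you:
    the SConstruct has to spell it out, typically as a path derived
    from the user-supplied PREFIX.
    """
    return os.path.join(prefix, "lib")

# In a SConstruct, the result would be handed to the build
# environment, e.g.: env.Install(lib_install_dir(prefix), libs)
print(lib_install_dir("/usr/local"))  # -> /usr/local/lib
```

The point of contention in the thread is precisely what this function should return on systems where plain `lib` is not the right answer.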
From: Jeremy K. <jk...@oz...> - 2007-08-28 10:28:36
|
Hi Michael,

> I'm no scons hacker either - however in all SConstruct files I have
> read so far you have to tell scons where to put the libs.

Ahh.. so there *is no* default then. Still, I think that $PREFIX/lib would be a good default (regardless of system arch, or default compiler output, or available libraries), and (somehow) overridable by the user. Trying to be smarter than the tools (and the end-user) is a bad idea, IMHO.

However, I'm not the maintainer, so I'll leave it to those who are in control :D

Cheers,
Jeremy
|
From: Jeremy K. <jk...@oz...> - 2007-08-28 10:25:42
|
Hi Michael,

> Yes, certainly. I didn't add anything along that line because the
> current SConstruct doesn't do that though. As a sidenote:
> env[] is explicitly populated by scons. If you wish to test for shell
> environment variables you should use os.environ['variablename']

so perhaps env['libdir'] is already set to $PREFIX/lib ? in this case, all is fine.

> The SConstruct did explicitly decide to install into PREFIX/lib
> regardless of whatever I did. There was no code that honored any
> external specification whatsoever (read: some code is required
> either way).

yep, and I think that's a perfectly sensible default, and I assume that scons has some inbuilt way of overriding the path (autoconf uses --libdir=, I assume it might be the same with scons?).

> Applying a change like the one I suggested now chooses PREFIX/lib64
> for all x86_64 systems which IMO is an improvement.

Just me, but I think this assumption is incorrect :) As I've said before, you don't always want to build 64-bit binaries on a 64-bit system, and you don't always want to install them in $PREFIX/lib64. That's why I like the idea of just using the defaults.

So I think $PREFIX/lib is a more sensible default for all cases. Is this breaking something at present?

> > That'll tell you the architecture of the system, but it won't tell
> > you what kind of binaries will be produced by default.
> >
> > For example, on my 32-bit powerbook:
> >
> > powerpc-unknown-linux-gnu
> >
> > And on a 64-bit g5, using ubuntu feisty (ie, defaulting to 32-bit
> > userspace)
> >
> > powerpc64-unknown-linux-gnu
>
> So we have powerpc and powerpc64, the former presumably 32bit at
> least in a linux kernel context.
>
> Would a 64bit binary work on your ubuntu feisty ?
> If yes then I don't understand why it should claim 'powerpc64'.

The first part of the target is the machine type.
This doesn't mean that any libraries for any particular arch are available though, or that we even have a compiler available for that specific machine type. Same deal for x86_64 - you may have an x86_64 CPU, but have only x86 libs available. (The 'machine' part of the target spec comes directly from the uname syscall, so it doesn't know whether any specific libs are present.)

> On openSUSE 10.2 (and previous SuSE Linux) this definitely isn't
> true. Don't know whether that'll change with openSUSE 10.3 (but I
> will check these days).

So you can't use the default compiler to produce a library and stick it in /usr/lib? I think that's kinda broken, but they probably have good reasons for doing so. Still, that's really up to the distro - if they want to put libraries somewhere else, they're free to do so, as long as they specify a different libdir.

> I'm not sure I understand what you mean. I'm not claiming my proposed
> snippet solves all problems for all users.

I think the 'leave it up to scons' idea will solve all problems for all users, no? If one particular distro needs to specify something specific, they can do so.

> I don't know what x86_64 systems that are installed to default to
> 32bit do provide -- these are the only systems that might suffer a
> regression.
>
> For all others it is either "no change" or an improvement, regardless
> of the distribution AFAICT.

Yep, so it'll break some things and fix some others - that's why I think it's best leaving it to the people who have solved this many times before (ie, the scons defaults :) ).

Cheers,
Jeremy
|
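Jeremy's point — that the machine field comes straight from uname and says nothing about the installed userland — can be illustrated in a few lines of Python (a stand-in sketch; config.guess itself is a shell script, but it starts from the same uname data):

```python
import platform

def machine_type():
    """Return the raw hardware name, e.g. 'x86_64' or 'ppc64'.

    This is the 'machine' field that uname reports and that
    config.guess builds its target triple from. As noted in the
    thread, it only describes the CPU: an x86_64 box with a purely
    32-bit userland still reports 'x86_64' here, so the value says
    nothing about which libraries or compilers are installed, nor
    about what the default compiler will produce.
    """
    return platform.machine()

print(machine_type())
```

This is why any libdir heuristic based on the machine type alone can guess wrong on mixed or 32-bit-userland installs.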
From: Michael G. <mg...@ti...> - 2007-08-28 11:13:27
|
> > Applying a change like the one I suggested now chooses PREFIX/lib64
> > for all x86_64 systems which IMO is an improvement.
>
> Just me, but I think this assumption is incorrect :)

I think it is just you :)

> As I've said
> before, you don't always want to build 64-bit binaries on a 64-bit
> system, and you don't always want to install them in $PREFIX/lib64.

No problem with that (though I'd like to add that on true 64bit systems the default _is_ building a 64bit binary).

> That's why I like the idea of just using the defaults.

I probably didn't make it clear enough in past posts: _so_ far there is no 'default' other than the hardwired value in the SConstruct file, and that hardly qualifies as a sensible default IMO.

> So I think $PREFIX/lib is a more sensible default for all cases. Is this
> breaking something at present?

Actually it does.

It forces all 64bit systems to include PREFIX/lib in their searchpaths. While this can be done, it raises complexity. Unneeded, I may add. And where would I put my 32bit libs then ?

> So you can't use the default compiler to produce a library and stick it
> in /usr/lib? I think that's kinda broken, but they probably have good
> reasons for doing so.

I _can_ do that, but almost all configure scripts correctly deduce to put 64bit libs in PREFIX/lib64. I don't see why that should be 'kinda broken'.

And yes, they have good reason for it, namely they also deliver 32bit versions of many libs (which reside in /usr/lib etc.) as well as a 32bit compiler. That's one of the reasons why e.g. wine works seamlessly even on x86_64.

Just because one distribution decides to put things one way does not render another distribution's decision to put the same things in different places invalid. I'm confident both distributions' packagers have invested a lot of thought in the way they lay things out and have come to whatever conclusion they came to in a long process. One might prefer one way over the other.
Claiming the other is broken, without at least trying to understand why it was made this way and not that way, doesn't seem the most clever thing to do.

> Still, that's really up to the distro - if they want to put libraries
> somewhere else, they're free to do so, as long as they specify a
> different libdir.

What is this 'libdir' you keep referring to ? Where or how is it defined ?

> I think the 'leave it up to scons' idea will solve all problems for all
> users, no? If one particular distro needs to specify something
> specific, they can do so.

AFAICT the "'leave it up to scons' idea" is not (yet?) working.

> Yep, so it'll break some things and fix some others - that's why I think
> it's best leaving it to the people who have solved this many times
> before (ie, the scons defaults :) ).

And here I repeat: I have yet to see the first SConstruct file that makes use of these (supposed?) "scons defaults". As I wrote in my previous post: AFAICT scons is not yet on par when it comes to these automatisms. But then I would like to be shown otherwise. Any scons hacker able to do that ?

Best,
Michael
--
Vote against SPAM - see http://www.politik-digital.de/spam/
!tagline
Michael Gerdau email: mg...@ti...
GPG-keys available on request or at public keyserver
|
From: Jeremy K. <jk...@oz...> - 2007-08-28 12:08:42
|
Michael,

> No problem with that (though I'd like to add on true 64bit systems
> the default _is_ building a 64bit binary).

Exactly right, and you've mentioned a good point - systems have a 'default' arch (eg, let's define this as the architecture of standard system binaries, like /bin/ls), and some set of other supported arches. Sounds like x86_64's default arch on your distro is 64-bit, while on ubuntu feisty powerpc it's 32-bit. On ubuntu and fedora, you can install both 32- and 64-bit versions of most of the libraries. I haven't played with openSUSE, so can't comment on that.

IMHO (and according to the Filesystem Hierarchy Standard, FHS), $PREFIX/lib should contain libraries for the default system architecture, be it 32-bit or 64-bit. If the default arch is 32-bit, you'll find the extra 64-bit libs in $PREFIX/lib64, and if the default arch is 64-bit, you'll find the 32-bit libs in $PREFIX/lib32.

The FHS refers to /usr/lib<qual> as "Alternate format libraries". On a system where 64-bit binaries are the default, 64-bit isn't "alternate". Ubuntu on x86_64 does this by having /usr/lib64 and /usr/lib32, with /usr/lib being a symlink to the default one.

> I probably didn't make it clear enough in past posts:
> _So_ far there is no 'default' other than the hardwired value in the
> SConstruct file and that hardly qualifies as a sensible default IMO.

Yep, I think that's where my main problem comes from. autoconf's configure scripts have the --libdir= argument, which lets you specify where to install system libraries, and it defaults to $PREFIX/lib. (This is what I have referred to as 'libdir' in my previous mails.) We're arguing over this because we have to redefine it for scons, and my ideas on the redefinition (leave it at what auto* used to do) disagree with yours :)

> > So I think $PREFIX/lib is a more sensible default for all cases. Is
> > this breaking something at present?
>
> Actually it does.
>
> It forces all 64bit systems to include PREFIX/lib in their
> searchpaths.
> While this can be done it raises complexity. Unneeded I may add.
> And where would I put my 32bit libs then ?

/usr/lib32. The default system binaries should use the FHS-defined library search path of /usr/lib. If you're trying to run a binary that isn't of your default system type, then you'll need to define a different search path. (In fact, you can just specify a different interpreter in the binary's ELF header (eg /lib64/ld.so), which is configured to use /usr/lib64/, but that's more complex.)

But remember, that's all only if your binary isn't of the default system architecture. Anything that's the same arch as /bin/ls should 'just work', by using /usr/lib. That's why it's a sensible default.

> > So you can't use the default compiler to produce a library and
> > stick it in /usr/lib? I think that's kinda broken, but they
> > probably have good reasons for doing so.
>
> I _can_ do that but almost all configure scripts correctly deduce to
> put 64bit libs in PREFIX/lib64.
>
> I don't see why that should be 'kinda broken'.

Because you can't compile a library with the default compiler, put it in the standard library path (/usr/lib) and have it work.

> And yes, they have good reason for it, namely they also deliver 32bit
> versions of many libs (which reside in /usr/lib etc.) as well as a
> 32bit compiler. That's one of the reasons why e.g. wine works
> seamlessly even on x86_64.
>
> Just because one distribution decides to put things one way does not
> render another distribution's decision to put the same things at
> different places invalid. I'm confident both distribution
> packagers have invested a lot of thought in the way they put
> things and have come to whatever conclusion they came to in a long
> process.

Yeah, definitely - that's why distros configure their packages with custom values of --libdir=. I'm not really defining what distros should be doing - that's entirely up to them, and they can control it by configuring with --libdir=/some/path.
It sounds like we no longer give them that option with scons.

So, my argument in 3 points:

1) since scons doesn't do this, we need to manually specify where to install the libraries

2) this location *needs* to be configurable by the user

3) I think we should use the default specified by the FHS: $PREFIX/lib

OK, I've got a plane to catch, and should probably stop ranting about this :D

Cheers,
Jeremy
|
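Jeremy's three points would only take a few lines to implement. The sketch below uses a plain dict to stand in for scons's command-line arguments (in a real SConstruct you would pass ARGUMENTS); the LIBDIR option name is hypothetical, not an existing FFADO build option:

```python
import os

def choose_libdir(prefix, args):
    """Pick the library install dir per the three points above:
    default to PREFIX/lib (the FHS path), but let the user
    override it, much as autoconf's --libdir= does.

    `args` stands in for scons's command-line ARGUMENTS dict;
    the LIBDIR key is an illustrative name.
    """
    return args.get("LIBDIR", os.path.join(prefix, "lib"))

print(choose_libdir("/usr", {}))                        # default
print(choose_libdir("/usr", {"LIBDIR": "/usr/lib64"}))  # user override
```

A distro packager who needs lib64 would then invoke the build with something like `scons LIBDIR=/usr/lib64`, mirroring `configure --libdir=`.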
From: Michael G. <mg...@ti...> - 2007-08-28 13:06:24
|
> Sounds like x86_64's default arch on your distro is 64-bit, while on
> ubuntu feisty powerpc it's 32-bit. On ubuntu and fedora, you can
> install both 32- and 64-bit versions of most of the libraries. I
> haven't played with openSUSE, so can't comment on that.
>
> IMHO (and according to the filesystem hierarchy standard, FHS),
> $PREFIX/lib should contain libraries for the default system
> architecture, be it 32-bit or 64-bit. If the default arch is 32-bit,
> you'll find the extra 64-bit libs in $PREFIX/lib64, and if the default
> arch is 64-bit, you find the 32-bit libs in $PREFIX/lib32.
>
> FHS refers to /usr/lib<qual> as "Alternate format libraries". On a
> system where 64-bit binaries are the default, 64-bit isn't "alternate".

That's not completely correct. Here is an excerpt from the FHS-2.3:

  The 64-bit architectures PPC64, s390x, sparc64 and AMD64 must place
  64-bit libraries in /lib64, and 32-bit (or 31-bit on s390) libraries
  in /lib. The 64-bit architecture IA64 must place 64-bit libraries
  in /lib.

I have not been able to find similar statements with respect to /usr/lib or /usr/lib<qual>, which implies the standard does not say anything either way. However, since for the "Essential shared libraries and kernel modules" residing in /lib or /lib<qual> both PPC64 and AMD64 (i.e. x86_64) systems are explicitly required to make the distinction between lib and lib64 for 32bit and 64bit libraries respectively, I personally find it consistent to expect the same distinction for all other <path>/lib and <path>/lib64 on these architectures (though it is no requirement).

Only in footnote [14] is providing /lib32 sanctioned as an option, and only for systems that provide both 32 and 64 bit binaries. But even on these systems (with ppc64 and amd64) it is required that /lib is a symlink to /lib32.
Interestingly enough, since the FHS had been incorporated into the LSB this requirement has been dropped, and the Filesystem Hierarchy Standard has been renamed to Linux Filesystem Hierarchy.

> Ubuntu on x86_64 does this by having /usr/lib64 and /usr/lib32,
> with /usr/lib being a symlink to the default one.

Well, according to the FHS-2.3, /lib would be required to be a symlink to /lib32. The LFH makes no such statement. Neither says the default one should reside in /lib (or /usr/lib).

> We're arguing over this because we have to redefine it for scons, and my
> ideas (leave it to what auto* used to do) on redefinition disagree
> with yours :)

I'm not so sure we are that far apart. We both agree the way it is currently handled is to be improved. We also agree we need a way to tweak all sorts of settings if we wish to do that. We just differ in what we consider the best possible default. Maybe we have come closer together after the above paragraph on the FHS ? ;-)

> but remember, that's all only if your binary isn't of the default system
> architecture. anything that's the same arch as /bin/ls should 'just
> work', by using /usr/lib. That's why it's a sensible default.

I have found no such requirement in the FHS-2.3 or the LFH.

> So, my argument in 3 points:
>
> 1) since scons doesn't do this, we need to manually specify where to
> install the libraries
>
> 2) this location *needs* to be configurable by the user
>
> 3) I think we should use the default (specified by the FHS), of
> $PREFIX/lib

I agree with 1) and 2). Since the FHS (or LFH) does not define the default as being PREFIX/lib (or PREFIX/lib64 or whatever), we should add code that tries to deduce the standard of the distribution (or something along that line).

Best,
Michael
--
Vote against SPAM - see http://www.politik-digital.de/spam/
!tagline
Michael Gerdau email: mg...@ti...
GPG-keys available on request or at public keyserver
|
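As a closing illustration of Michael's "code that tries to deduce the standard of the distribution", here is one possible heuristic sketched in Python. It is only a guess at what such deduction might look like (the prefer-lib64-if-it-really-exists rule matches the SuSE-style layout discussed above), and per the points both sides agreed on, a real build would still need to honor an explicit user override:

```python
import os
import struct

def deduce_libdir(prefix="/usr"):
    """Heuristically pick the distribution's preferred libdir.

    If this is a 64-bit build and the prefix has a real (non-symlink)
    lib64 directory -- the SuSE-style layout from the thread --
    prefer PREFIX/lib64; otherwise fall back to the plain FHS
    PREFIX/lib. Purely illustrative, not FFADO's actual logic.
    """
    is_64bit = struct.calcsize("P") == 8  # pointer size of this build
    lib64 = os.path.join(prefix, "lib64")
    if is_64bit and os.path.isdir(lib64) and not os.path.islink(lib64):
        return lib64
    return os.path.join(prefix, "lib")
```

On an openSUSE-style x86_64 system this would pick /usr/lib64; on a 32-bit system, or one where lib64 is merely a symlink, it falls back to /usr/lib.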