I think the argument about orienting towards flex is a good one. The
cscope lex code still makes use of undocumented internals, and this is
something that will probably break with a commercial lex that has
changed from the original AT&T source base. We still need to support the
original lex for now. Maybe post 15.0 we can insist that flex be used.
As for the rest of the autoconf arguments, please carry on. The more we
discuss it, the better we'll understand the various ins and outs. The fact
that there are several projects that want to replace autoconf and
friends implies that autoconf is not accepted by everybody.
The one problem I do find with Open Source is a lack of competitive
products. Sure, you get some products which perform the same task, but
one always seems to outweigh the other by a huge margin. For instance,
cs, an Open Source program similar to cscope, folded once cscope became
available. I was contacted by the author directly, but I couldn't
convince him to keep his project alive.
At some point, there is going to be a backlash against this, either from
the Open Source community or, more likely, from commercial vendors.
> [finding libs & stuff in /usr/local tree]
> > > That's not an issue of open source software, really. It's an issue of all
> > > those U*ix vendors having made incompatible decisions on what the
> > > environment should be, plus local administrators overriding those
> > > decisions, sometimes without any planning whatsoever.
> > And open source software purports to build on those
> > systems and configure purports to exist to flatten out those differences
> Flatten them out to the extent doable by a necessarily somewhat
> un-intelligent, automatic procedure: yes. Flatten them out completely: no.
> That would be impossible to do. Just because a package uses an autoconf
> generated configure script doesn't mean there's nothing left for the
> installer to think about. That's why there is an INSTALL file coming with
> the configure script (not the skeleton one found with cscope, currently --
> the real one, as delivered with automake and most GNU packages).
> The longer this discussion goes, the more I gather the impression that
> your idea of what 'configure' is supposed to do for program developers and
> installers is in conflict with the goals defined by the FSF. According to
> them (or what I understood their texts to mean), configure is meant to
> replace the libraries of feature lists per platform often found in
> packages by actual tests for those features. I.e. instead of having an
> almost unmaintainable heap of
> #ifdef __SOME_OS_SPECIFIC_DEFINE
> # define HAS_CAPABILITY_FOO
> # define CAN_DO_BAR
> #endif
> in some central place, or (even worse) spreading the #ifdef
> __SOME_OS_SPECIFIC_DEFINE checks all over the source, you just tell
> autoconf that your package needs to know whether the system can do 'bar',
> and it will carry out an automated test for it, at 'configure' time.
> In the end, this definitely reduces the amount of porting work whenever a
> package has to be installed on a platform it was never tried on, before.
> It also catches otherwise unnoticed changes in already supported platforms.
> To give a rather extreme example: with only minor changes required by some
> of the remaining Unix-isms in autoconf, you can build many an autoconf'ed
> package right out of the box even on MS-DOS, with the existing DOS ports
> of GNU utilities. Even those that were never planned to work on DOS, that is.
> > > How, e.g., is 'configure' supposed to know whether the /usr/local tree on
> > > your build machine even exists on the host(s) the program is meant to work
> > > on?
> > O please - the autoconf/automake/autolib/autoObscuringMacroToolFRomHell....
> > configure tools don't even pretend to work well in that environment
> > - the assumption is that you will be running on what you built on
> > or something very closely approximating it ...
> ... and you like that feature? If not, why are you arguing in favour of
> tying the outcome of configure to details even *less* reliable than
> OS version, patchlevel or others?
> > If not .... you lose
> ... and making it automatically include the /usr/local tree (almost the
> least constant part in a usual Unix installation, from one machine to its
> neighbour) would make it lose even more often. Do we really want that?
> > > Exactly. And most of such vanilla default setups do *not* search
> > > /usr/local/lib for libraries, or need additional -R options or an
> > > LD_LIBRARY_PATH in the environment for that to work.
> > And autoconf as the finder of things needs to look there and tell the compiler
> > it's a valid search path to find the probed-for library
> How is it supposed to know where to look, if you didn't even inform the
> compiler/linker about it? Magic? Currently, you're asking for /usr/local.
> Next week, someone will require that we positively must search
> /opt/SUNWstol/lib/sparc-v8/lib too. The week after, some HP guy comes and
> requests another thing. This is opening Pandora's box.
> The GNU default that autoconf only finds what your linker will find, too,
> is definitely saner than having it second-guess human decisions about
> where an administrator wants to put a certain file. You know where you
> put the stuff, so you tell autoconf, or make that a system-wide default,
> right from the start.
> > > So, following your
> > > own rule, you shouldn't have any libraries installed there -- they
> > > wouldn't be found.
> > All the open source software installs its libs (and other stuff)
> > under /usr/local by default - why should it not be probing there as an
> > extremely likely place to find binaries/objects
> Writing to a certain location is different from expecting stuff to be
> found there. One could arguably want to always automatically search for
> libraries in the 'exec_prefix' tree, or in the 'libdir'. But then, there's
> nothing telling you that those directories exist, on the build machine.
> E.g., if someone comes to me and wants a tarball of a program that
> has to work from the /usr/lib/openwin tree, which doesn't exist on
> my home machine, I can:
> ./configure --prefix=/usr/lib/openwin
> make install prefix=/tmp/pkgname
> and tar up what got installed to /tmp/pkgname. The client can take
> that tarball, unpack into his /usr/lib/openwin tree, and get a
> working installation.
> But all this does *not* mean that configure should search for
> -lfl or -lcurses in /usr/lib/openwin/lib on my build machine.
> No way.
> > Its the default case I'm after which as it involves a default install of
> > pieces in /usr/local should include a search in /usr/local for
> > those (and any other) pieces.
> Just because packages install binaries into /usr/local/bin and manpages
> into /usr/local/man doesn't mean there is a /usr/local/lib, to begin with,
> nor that it makes sense to search for these libraries, there. And of
> course, that default will frequently be changed, anyway. If I './configure
> --prefix=/foo/bar/baz', where do you think it should search for libraries?
> Really /usr/local/lib? With what justification?
> > My point exactly - this sort of usability is aimed at people who want
> > a binary, download the source and do the configure, make, make check,
> > make install sequence as suggested in the INSTALL - it removes from
> > them the need to spend a lengthy period of time determining the
> > details of autoconf implementation so as to be able to configure the
> > builds ...
> If our INSTALL file were the full version already mentioned, this wouldn't
> be necessary. The vanilla INSTALL file that is supposed to be included
> with any autoconf / automake package explains the configuration
> by environment variables I described. Ours doesn't.
> And of course, people who build their own binaries are supposed to know at
> least the rough outline of how their compiler and linker works, and to
> read files named 'README' or 'INSTALL'. People not taking at least a quick
> glance at such files deserve to be punished by builds that don't work, to
> some extent. They should rather install binary rpms and be done with it.
> ['flex -l' vs. 'flex'+'%array']
> > > So, according to the docs, none of the differences between 'flex -l' and
> > > 'flex' using the %array option can be relied on in other lex
> > > implementations, either. In other words: if we really do need 'flex -l',
> > > rather than '%array', that'd be a symptom of a portability bug in
> > > scanner.l.
> > true but we're minimising the possibility of such differences by using '-l'
> > - it supports a dialect closer to lex.
> Please define *which* lex you want to be close to. And why that one.
> 'lex', like the original K&R C, obviously was never defined precisely
> enough to avoid all the Unix vendors interpreting some details of it
> differently. For a portable program, these differences have to be avoided
> in scanner.l. Using 'flex -l' to be a bit more compatible with all those
> tools that aren't even really compatible with each other is curing the
> symptom, but not the disease.
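To make the '%array' alternative concrete, here is a minimal, purely illustrative scanner sketch (these rules are not taken from cscope's scanner.l). Declaring '%array' makes flex define yytext as a modifiable char array, which is the behaviour classic lex guarantees, so a scanner written this way avoids depending on '-l' at all:

```
%array
%%
[a-zA-Z_][a-zA-Z0-9_]*   { printf("ident: %s\n", yytext); }
.|\n                     ;
%%
int yywrap(void) { return 1; }
```

Under '%pointer' (flex's default), yytext is instead a char *, and code that writes into yytext or assumes array semantics breaks; '%array' states the requirement explicitly in the scanner source rather than hiding it behind a command-line flag.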
> > > > does it continue to work with lex ?
> > > I tried to test that, but Digital's 'lex' refuses to work for different
> > > reasons --> signed/unsigned char issue with yytext :-(.
> > So you're moving the supported dialect toward something more flex-like
> > without continuing to test that lex continues to be supported ?
> I would like to do that. But testing for compatibility with 'lex' seems
> to be practically impossible. I have access to quite a few more types
> of Unix, if necessary, but the variety of 'lex' incarnations is
> never-ending, it seems.
> Actually, the example of Digital's 'lex' shows that it may well be
> fundamentally impossible to support all those different incarnations of
> 'lex', anyway. In which case I'd usually suggest requiring flex and
> forgetting about 'lex' altogether. It's available de facto everywhere, and
> for free, after all, so it's not going to stop anyone from using our program. In
> addition, if all else fails, we can still supply a scanner.c generated by
> a somewhat decent 'lex' on some platform with the source tarball, as
> recommended by the automake folks.
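Shipping a pre-generated scanner.c with the tarball is, in fact, what automake arranges by default for lex sources. A hypothetical Makefile.am fragment (file names assumed, not cscope's actual build files) would be enough:

```
## Hypothetical Makefile.am sketch.  With AC_PROG_LEX in configure.ac,
## automake runs lex through its ylwrap wrapper and includes the
## generated scanner.c in 'make dist', so users whose 'lex' is broken
## or missing can still build straight from the tarball.
bin_PROGRAMS = cscope
cscope_SOURCES = main.c scanner.l
AM_LFLAGS = -l
```

The distributed scanner.c is only regenerated when scanner.l is newer, so installers without any lex never invoke it.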
> Hans-Bernhard Broeker (broeker@...)
> Even if all the snow were burnt, ashes would remain.
> Cscope-devel mailing list
Petr Sorfa Software Engineer
Santa Cruz Operation (SCO)
430 Mountain Ave. http://www.sco.com
Murray Hill 07974
Disclaimer: All my comments are my own and nobody else's