From: Sam S. <sd...@gn...> - 2016-08-29 17:17:26
> * Daniel Jour <qna...@tz...> [2016-08-28 23:55:18 +0200]:
>
>> pwd is also a shell built-in, so full path is necessary.
>
> Hm, I understand why pwd may give (in case of symbolic links) a
> different result than a non-builtin pwd. Why do we need the physical
> (not logical, i.e. without the symlinks) path, though?
>
> As for /bin/pwd ... this is a bad idea. /bin (the directory) is about
> to die (ok, that's a bit drastic, I'm referring to the "usr merge"
> here), and there may be more than one pwd on a single system (this is
> the main reason for this change, because it directly affects me. This
> could be fixed downstream by the currently few affected distributions,
> too. But in the future this might affect every Linux distribution that
> uses systemd.)

this is news to me.
"/bin/cp" going away will break so much stuff that I doubt that anyone
will seriously contemplate this.

any unix experts here? Bruno?

> Same goes for /bin/cat: There might be more than one cat, and all of
> them might be in a different location than /bin. Thus it's better to
> let the user (or the user's configured environment) choose which cat
> to use.

running external programs without a full path is a security risk.

> Perhaps we could use (if it's still needed then) an autoconf macro to
> determine the "correct" cat and pwd? (Using $CAT and $PWD then, so the
> user could change these.)

I don't think this is a good idea for "cat" (used once in the test
suite), but for pwd - maybe.

>> 15624::::Daniel Jour 2016-06-22 Remove comment5 utility, and any
>> remaining non-C comments
>>
>> Nope.
>> Each line with separate /* .. */ is ugly.
>> We need to use either blocks or //.
>
> Hm, indeed ...
>
>> When did // comments go into the C standard?
>
> C99, "ANSI C" (C89/90) didn't have these yet (but I'd bet the major
> compilers wouldn't complain when not in -pedantic mode, didn't check
> though.)
>
>> Also, I think this should be done together with removing all
>> pre-processing and renaming all *.d files to *.c.
>
> This would also extremely simplify the Makefile.am file for
> automake. What's the best base (revision) to make such a change on?
>
>> 15620::::Daniel Jour 2016-06-19 regexp: use autotools, gnulib, improve
>> buffer handling
>>
>> 0! This is 3 patches in one:
>
> Yes, this is too much for a single change. I was working on all three
> changes at the same time, which is how this huge commit came into
> being (sorry).
>
>> 1. Why are you replacing stack allocation with arbitrary limits?
>
> I don't? I'm switching the buffer size for the error message from
> BUFSIZ (which is an unrelated constant that could easily be too much
> for the stack) to another constant SAFE_ERROR_MESSAGE_LENGTH which we
> can define to a known "good" value (128 is a first start .. if there's
> an error message that's longer we can expand this, and consistently
> test whether that's too much on the stack or not).
>
> The other change is more important: With a carefully crafted regex
> expression one can currently crash (or potentially worse) CLISP,
> because the buffer for the matches can overflow the stack. I think
> this might be an issue for e.g. web applications that pass a regex
> expression such that a - potentially malicious - user can modify
> it. As an application developer I wouldn't expect a regex of some
> (arbitrary, stack frame dependent) length to be able to crash my
> whole application.
>
> Therefore I changed it so that a small buffer is still allocated on
> the stack (because of the speed benefit), but larger buffers get
> allocated using dynamic memory allocation (which, in case of a
> failure, raises a condition).
>
> It's not waterproof though: A large number of matches (from a
> carefully crafted expression) could still cause the LISP stack to
> overflow. I don't know how to prevent that, though (except for
> enforcing some maximum).
>
>> 2. Is it really necessary to create a separate m4 directory for the
>> single file gnulib-cache.m4? It's really ugly! (same for the
>> rawsock change below).
>
> Yes, that's the directory where all the gnulib (m4 macros) should
> reside in, too. Running gnulib-tool --update in the regexp module
> directory populates that directory (as well as lib/).
>
> I think the current state (in which I committed this) is unfortunate,
> though: The gnulib files (in m4/ and lib/) should be put under (our)
> revision control, too. What do you think would be the best approach
> here?
> (gnulib has some discussion about this very issue:
> https://www.gnu.org/software/gnulib/manual/gnulib.html#VCS-Issues)
>
>> 15625:::tip:Daniel Jour 2016-08-23 rawsock: autotools build system,
>> own gnulib checkout
>>
>> Copying code from socket.d into rawsock.c is no good.
>
> Yes, this is a (dirty) work around: I was not able to update the
> gnulib code for core CLISP (thus socket.d). I was planning to revert
> that as soon as core CLISP also uses updated gnulib code.
>
> I'm a bit worried about these dependencies (between the modules -
> rawsock needing OS - and the rawsock module and the socket code from
> core CLISP) though. Though this is IMO not relevant for now.
>
>> Also, you removed the configure script, so now people have to install
>> autoconf to build clisp.
>> Are you sure this is right?
>
> I removed it from the repository because it is a generated file
> now. It would be part of a source distribution (a release) though,
> thus only CLISP developers need to have autoconf/automake/etc.
>
>> DIUC that the main change required to make rawsock work on windows
>> was the switch from rawsock_t to int?
>
> Basically, yes. gnulib is (or in part, will be) handling the windows
> specific code. rawsock is interfacing to POSIX/BSD sockets, gnulib is
> implementing them on windows.
>
> But I noticed that this commit still contains (printf) debugging and
> other oddities. I'll fix that ASAP.
>
>> what does "wip" in "wip-autotools-export" stand for?
>
> work in progress. This is the state of switching to autotools at the
> end of the coding period. It's not finished or usable yet. Which is
> why ...
>
>> the change to built.d seems incomplete.
>
> ... a lot of the changes in that commit are indeed still incomplete.
>
>> I need to understand what you are doing before importing anything.
>> The best way would be if you made an isolated change
>> (e.g., I don't see why you had to touch built.d).
>
> I'll try to isolate the changes needed to get an autotools based build
> system from integrating the configuration into autoconf macros. This
> should reduce the size of each change. I don't know how fast I'm able
> to provide this atm, though.
>
>> please take a look at clisp/modules/berkeley-db/bdb.c:bdb_handle()
>> for getting the regex_t pointer out of the struct.
>
> Hm .. is this faster than the approach that I took? It saves a level
> of indirection ("one pointer"), right? Or is there something else
> wrong with the approach I took in regexp?

------------------------------------------------------------------------------
_______________________________________________
clisp-devel mailing list
cli...@li...
https://lists.sourceforge.net/lists/listinfo/clisp-devel

-- 
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://camera.org http://dhimmi.org
http://www.dhimmitude.org http://thereligionofpeace.com
Illiterate? Write today, for free help!
From: Elias P. <pip...@ic...> - 2016-08-29 17:32:38
> On 29 Aug 2016, at 19:17, Sam Steingold <sd...@gn...> wrote:
>
>> * Daniel Jour <qna...@tz...> [2016-08-28 23:55:18 +0200]:
>>
>>> pwd is also a shell built-in, so full path is necessary.
>>
>> Hm, I understand why pwd may give (in case of symbolic links) a
>> different result than a non-builtin pwd. Why do we need the physical
>> (not logical, i.e. without the symlinks) path, though?
>>
>> As for /bin/pwd ... this is a bad idea. /bin (the directory) is about
>> to die (ok, that's a bit drastic, I'm referring to the "usr merge"
>> here), and there may be more than one pwd on a single system (this is
>> the main reason for this change, because it directly affects me. This
>> could be fixed downstream by the currently few affected distributions,
>> too. But in the future this might affect every Linux distribution that
>> uses systemd.)
>
> this is news to me.
> "/bin/cp" going away will break so much stuff that I doubt that anyone
> will seriously contemplate this.

I think what Daniel is suggesting here goes farther than the current
goals of the usr-move camp (see [1] and [2]). Traditionally, /bin and
/usr (thus /usr/bin) could live on different partitions and be mounted
at different points in time. I believe defenders of the usr-move idea
will argue that there’s no good use case for that anymore, so that /usr
can just be assumed to be available whenever /bin is assumed to be
available. That does not mean that /bin/cp and the like go the way of
the dodo, but rather that they might be symlinks to /usr/bin/cp, so
that as a package maintainer you no longer need to spend time thinking
about what has to go in /bin and what can go in /usr/bin instead. As a
typical user you would not notice any difference.

That said, I chimed in here to save the unix experts some typing, not
because I consider myself one (I don’t).

Elias

[1] http://0pointer.net/blog/projects/the-usr-merge.html
[2] https://fedoraproject.org/wiki/Features/UsrMove
From: Bruno H. <br...@cl...> - 2016-08-29 23:49:41
Sam wrote:
> > * Daniel Jour <qna...@tz...> [2016-08-28 23:55:18 +0200]:
> >
> >> pwd is also a shell built-in, so full path is necessary.
> >
> > Hm, I understand why pwd may give (in case of symbolic links) a
> > different result than a non-builtin pwd. Why do we need the physical
> > (not logical, i.e. without the symlinks) path, though?
> >
> > As for /bin/pwd ... this is a bad idea. /bin (the directory) is about
> > to die (ok, that's a bit drastic, I'm referring to the "usr merge"
> > here), and there may be more than one pwd on a single system (this is
> > the main reason for this change, because it directly affects me. This
> > could be fixed downstream by the currently few affected distributions,
> > too. But in the future this might affect every linux distribution that
> > uses systemd.)
>
> this is news to me.
> "/bin/cp" going away will break so much stuff that I doubt that anyone
> will seriously contemplate this.
>
> any unix experts here?
Differences between /bin/<prog> and <prog> in general:
* You can be sure that /bin/<prog> exists; you don't need to handle
the case that PATH has been set in such a way that <prog> is not found.
* PATH is under user control. If you use <prog> you need to handle
the case that <prog> is not executable, a dangling link, or it could
be slowed down on an NFS volume etc.
Differences between /bin/pwd and pwd in particular:
* pwd being a shell built-in, it takes care not to canonicalize the
directory name. If we want to canonicalize the directory name, e.g.
to test whether two directories are equal
A_abs=`cd "$A" && /bin/pwd`
B_abs=`cd "$B" && /bin/pwd`
test "$A_abs" = "$B_abs"
we can only use /bin/pwd.
The only good argument against /bin/<prog> that I can see is that some
modern distros, like GNU GuixSD, construct PATH based on symbolic links.
But I would expect them to have a mechanism to simulate a /bin directory
in some way.
Bruno
From: Tomas H. <to...@lo...> - 2016-08-30 07:36:28
Hi,

Bruno Haible <br...@cl...> writes:
> Differences between /bin/<prog> and <prog> in general:
>
> * You can be sure that /bin/<prog> exists; you don't need to handle
>   the case that PATH has been set in such a way that <prog> is not found.

counterexample:

$ ls -al /bin
total 8
drwxr-xr-x  2 root root 4096 2016-08-25 09:44 .
drwxr-xr-x 17 root root 4096 2016-05-29 00:28 ..
lrwxrwxrwx  1 root root   63 2016-08-25 09:44 sh -> /nix/store/9zv8ph14qa9x685ig6agxy9yzxcapfar-bash-4.3-p42/bin/sh
$

This is on NixOS, so your assertion is valid only for systems that
follow a certain convention, e.g.
<https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard>.

Any dependency on external programs comes with many assumptions and no
guarantees. For example, using /bin/<prog> hardcodes the assumption
about the FHS, among others.

> * PATH is under user control. If you use <prog> you need to handle
>   the case that <prog> is not executable, a dangling link, or it could
>   be slowed down on an NFS volume etc.

Using /bin/<prog> has exactly the same problems, as well as hardcoding
the FHS assumption.

> The only good argument against /bin/<prog> that I can see is that some
> modern distros, like GNU GuixSD, construct PATH based on symbolic
> links. But I would expect them to have a mechanism to simulate a /bin
> directory in some way.

Yes. They don't simulate the /bin directory. They patch away the
hardcoded paths and construct the environment in such a way that the
program uses the right dependencies. Or they replace the hardcoded
path with the correct path of the dependency.

So from the point of view of those modern distros, I think it is
slightly better not to hardcode the FHS assumption, so that they do not
have to patch the sources. However, even if the path stays hardcoded,
it is a solved problem (I have clisp running here).

Sam Steingold <sd...@gn...> writes:
> running external programs without a full path is a security risk.

What is the reasoning behind this assertion?

Tomas
From: Jan S. <ha...@st...> - 2016-08-30 09:37:28
> 15613::::Daniel Jour 2016-04-30 clisp-link.in: Replace /bin/pwd with calls to pwd (for NixOS)
> 15617::::Daniel Jour 2016-05-18 Don't call cat by absolute path
>
>> pwd is also a shell built-in, so full path is necessary.
>
> Hm, I understand why pwd may give (in case of symbolic links) a
> different result than a non-builtin pwd. Why do we need the physical
> (not logical, i.e. without the symlinks) path, though?

That's just part of the issue. For example, on OpenBSD, using ksh(1):

$ cd /tmp
$ mkdir foo
$ ln -s foo bar
$ cd bar
$ pwd
/tmp/bar
$ /bin/pwd
/tmp/foo

More importantly, /bin/pwd is a standardized UNIX binary, guaranteed
to exist, with behaviour defined by POSIX - as opposed to "pwd", which
can be a shell builtin (of whichever shell the user is running), or
anything named "pwd" that happens to be in $PATH.

So make up your mind about whether you want the symlinks resolved or
not, and call either "/bin/pwd -P" or "/bin/pwd -L". (Because the
default behaviour without options can also differ.)

http://man.openbsd.org/OpenBSD-current/man1/pwd.1

> As for /bin/pwd ... this is a bad idea.
> /bin (the directory) is about to die

Stop smoking that shit. /bin has been around for decades and will
continue to be, whatever any Linux distro du jour gets into its head.

> (ok, that's a bit drastic, I'm referring to the "usr merge"
> here), and there may be more than one pwd on a single system

So use the standard one, obviously.

> (this is the main reason for this change, because it directly affects me.
> This could be fixed downstream by the currently few affected distributions,
> too. But in the future this might affect every linux distribution that
> uses systemd.)

UNIX is not this or that "affected Linux distribution".

http://man.openbsd.org/OpenBSD-current/man7/hier.7

> Same goes for /bin/cat

Exactly.

> There might be more than one cat, and all of
> them might be in a different location than /bin.

Except /bin/cat will always be there, with POSIX-defined behaviour.
So use that, obviously.

> Thus it's better to let the user (or the user's configured
> environment) choose which cat to use.

Quite the contrary: this is the reason to use the standard one.

> Perhaps we could use (if it's still needed then) an autoconf macro to
> determine the "correct" cat and pwd? (Using $CAT and $PWD then, so the
> user could change these.)

Yeah right. Let the user rewrite PWD.

On Aug 29 19:32:28, pip...@ic... wrote:
> I think what Daniel is suggesting here goes farther than the current
> goals of the usr-move camp (see [1] and [2]). Traditionally, /bin and
> /usr (thus /usr/bin) could live on different partitions and be mounted
> at different points in time. I believe defenders of the usr-move idea
> will argue that there’s no good use case for that anymore,

On OpenBSD, for example, the programs in /bin are statically compiled,
and therefore do not depend on the libraries or the linker. That's why
you can use them to e.g. repair your system in single-user mode, with
/usr/* botched. Or use them in scripts without further assumptions.

> so that /usr can just be assumed to be available whenever /bin
> is assumed to be available.

Again, that's not the issue here.

> That does not mean that /bin/cp and the like go the way of the dodo
> but rather that they might be symlinks to /usr/bin/cp, so that as a
> package maintainer you no longer need to spend time thinking about
> what has to go in /bin and what can go in /usr/bin instead.

How much time do you need to spend thinking about the following?

$ ldd /bin/ls
/bin/ls:
        Start            End              Type  Open Ref GrpRef Name
        000001301d32b000 000001301d778000 dlib  1    0   0      /bin/ls
$ ldd /usr/bin/diff
/usr/bin/diff:
        Start            End              Type  Open Ref GrpRef Name
        00001a3559f00000 00001a355a309000 exe   1    0   0      /usr/bin/diff
        00001a3773834000 00001a3773cfd000 rlib  0    1   0      /usr/lib/libc.so.88.0
        00001a3760300000 00001a3760300000 rtld  0    1   0      /usr/libexec/ld.so

On Aug 30 09:17:33, to...@lo... wrote:
> Bruno Haible <br...@cl...> writes:
>> Differences between /bin/<prog> and <prog> in general:
>>
>> * You can be sure that /bin/<prog> exists; you don't need to handle
>>   the case that PATH has been set in such a way that <prog> is not found.
>
> counterexample:
>
> $ ls -al /bin
> total 8
> drwxr-xr-x  2 root root 4096 2016-08-25 09:44 .
> drwxr-xr-x 17 root root 4096 2016-05-29 00:28 ..
> lrwxrwxrwx  1 root root   63 2016-08-25 09:44 sh -> /nix/store/9zv8ph14qa9x685ig6agxy9yzxcapfar-bash-4.3-p42/bin/sh
> $

That's not a counterexample: /bin/sh exists, as it always should; but
depending on PATH, "sh" might not be found.

> Any dependency on external programs comes with many assumptions and no
> guarantees.

Except the standardized /bin/pwd etc., which is precisely the reason to
use those, and not some others.

> For example, using /bin/<prog> hardcodes the assumption
> about the FHS, among others.

Yes. Much like using printf(3) hardcodes the assumption that you have
a standard C library.

>> The only good argument against /bin/<prog> that I can see is that some
>> modern distros, like GNU GuixSD, construct PATH based on symbolic
>> links. But I would expect them to have a mechanism to simulate a /bin
>> directory in some way.
>
> Yes. They don't simulate the /bin directory. They patch away the
> hardcoded paths and construct the environment in such a way that the
> program uses the right dependencies. Or they replace the hardcoded
> path with the correct path of the dependency.
>
> So from the point of view of those modern distros, I think it is
> slightly better not to hardcode the FHS assumption, so that they do
> not have to patch the sources.

Something small in me dies every time I see a _language_interpreter_
being considered "from the point of view of modern distros".

Jan
From: Sam S. <sd...@gn...> - 2016-08-30 14:05:26
> * Jan Stary <un...@fg...> [2016-08-30 11:10:34 +0200]:
>
> More importantly, /bin/pwd is a standardized UNIX binary,
> guaranteed to exist, with behaviour defined by POSIX
> - as opposed to "pwd", which can be a shell builtin
> (of whichever shell the user is running),
> or anything named "pwd" that happens to be in $PATH.

Excellent!
Indeed, the _functionality_ is specified:
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/pwd.html
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/cat.html
Note, however, that /bin is not mentioned in
http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap10.html

I agree that the UNIX tradition enshrines /bin sufficiently so that we
need not worry that /bin/pwd or /bin/cat will ever surprise us:
https://en.wikipedia.org/wiki/Unix_filesystem
https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
http://www.linfo.org/bin.html
http://unix.stackexchange.com/questions/5915/difference-between-bin-and-usr-bin
http://askubuntu.com/questions/138547/how-to-understand-the-ubuntu-file-system-layout

The bottom line is that there is no compelling case to drop the full
path, so let us keep it for the time being.

-- 
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://iris.org.il http://jihadwatch.org
http://truepeace.org http://islamexposedonline.com
http://palestinefacts.org
C combines the power of assembler with the portability of assembler.
From: Sam S. <sd...@gn...> - 2016-08-30 13:53:33
Hi,

> * Tomas Hlavaty <gb...@yb...> [2016-08-30 09:17:33 +0200]:
> Sam Steingold <sd...@gn...> writes:
>> running external programs without a full path is a security risk.
>
> What is the reasoning behind this assertion?

* if clisp executes "pwd" and
* you have, say, "~/bin" in your $PATH before "/bin" and
* a malicious actor plants an executable named "pwd" into "~/bin",
then you will run that executable as yourself.

-- 
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://honestreporting.com
http://think-israel.org http://iris.org.il
http://thereligionofpeace.com http://islamexposedonline.com
Abandon all hope, all ye who press Enter.
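Sam's PATH-shadowing scenario can be reproduced in a few lines of shell. The temporary directory and the INTERCEPTED marker are illustrative; `env` is used because it resolves its argument via PATH (like a program spawning "pwd" would), whereas a shell would use its builtin.

```shell
# Create a directory that sits in front of /bin in PATH and plant a fake pwd.
tmp=$(mktemp -d)
printf '#!/bin/sh\necho INTERCEPTED\n' > "$tmp/pwd"
chmod +x "$tmp/pwd"
PATH="$tmp:$PATH"

# env(1) does a PATH lookup, so the planted script runs:
hijacked=$(env pwd)
# The absolute path is immune to PATH manipulation:
real=$(/bin/pwd)

echo "hijacked: $hijacked"
echo "real:     $real"
rm -rf "$tmp"
```

Daniel's counterargument below is that an attacker with this much control over the environment has already won; the demo only shows what the full path does and does not protect against.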
From: Tomas H. <to...@lo...> - 2016-08-30 19:06:02
Hi Sam,

Sam Steingold <sd...@gn...> writes:
>> * Tomas Hlavaty <gb...@yb...> [2016-08-30 09:17:33 +0200]:
>> Sam Steingold <sd...@gn...> writes:
>>> running external programs without a full path is a security risk.
>>
>> What is the reasoning behind this assertion?
>
> * if clisp executes "pwd" and
> * you have, say, "~/bin" in your $PATH before "/bin" and
> * a malicious actor plants an executable named "pwd" into "~/bin",
> then you will run that executable as yourself.

thanks for your reply.

Tomas
From: Sam S. <sd...@gn...> - 2016-08-30 14:44:39
> * Daniel Jour <qna...@tz...> [2016-08-28 23:55:18 +0200]:
>
>> When did // comments go into the C standard?
>
> C99, "ANSI C" (C89/90) didn't have these yet (but I'd bet the major
> compilers wouldn't complain when not in -pedantic mode, didn't check
> though.)

I am pretty sure we can assume C99 when building CLISP.
This means that
* varbrace can be dropped (after deleting the "var" keyword)
* comment5 can be dropped (after converting the comments)
* ccmp2c is needed for new-clx, let it be for now
* ccpaux is only needed on SUNWspro - does it even exist still?
* deema is not needed on C99 (and C++11)
  (http://stackoverflow.com/q/33332819/850781)
* gctrigger is necessary for GCSAFETY -- keep it!
* txt2c is for doc processing, I think we should move away from it
  (what do other projects use?)

The bottom line is that some source pre-processing (specifically,
gctrigger) will _always_ be with us, so the transition to automake
will have to account for that.

The plan for now is to drop varbrace, comment5, ccpaux, deema and
rename all sources to *.c from *.d.
I would like that to happen _after_ the release.

-- 
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://openvotingconsortium.org
http://islamexposedonline.com http://ffii.org http://www.memritv.org
Vegetarians eat Vegetables, Humanitarians are scary.
From: Sam S. <sd...@gn...> - 2016-08-30 15:11:59
> * Daniel Jour <qna...@tz...> [2016-08-28 23:55:18 +0200]:
>
>> 15620::::Daniel Jour 2016-06-19 regexp: use autotools, gnulib, improve
>> buffer handling
>>
>> 0! This is 3 patches in one:
>
> Yes, this is too much for a single change. I was working on all three
> changes at the same time, which is how this huge commit came into
> being (sorry).

This is an absolute NO.
Each commit must reflect a single logical change.
It's hard, it requires discipline and focus, but this is the only way.

>> 1. Why are you replacing stack allocation with arbitrary limits?
>
> I don't? I'm switching the buffer size for the error message from
> BUFSIZ (which is an unrelated constant that could easily be too much
> for the stack) to another constant SAFE_ERROR_MESSAGE_LENGTH which we
> can define to a known "good" value (128 is a first start .. if there's
> an error message that's longer we can expand this, and consistently
> test whether that's too much on the stack or not).

BUFSIZ is a pretty standard constant for all string buffers.

> The other change is more important: With a carefully crafted regex
> expression one can currently crash (or potentially worse) CLISP,
> because the buffer for the matches can overflow the stack.

Let us revisit this issue at a later date.
I think the with_string_0 mechanism is good enough.
If you disagree, you will have to argue for it to be changed
pervasively throughout CLISP.

>> 2. Is it really necessary to create a separate m4 directory for the
>> single file gnulib-cache.m4? It's really ugly! (same for the
>> rawsock change below).
>
> Yes, that's the directory where all the gnulib (m4 macros) should
> reside in, too. Running gnulib-tool --update in the regexp module
> directory populates that directory (as well as lib/).

This is why we don't run gnulib-tool in that directory!
We only ever run it in src.

> I think the current state (in which I committed this) is unfortunate,
> though: The gnulib files (in m4/ and lib/) should be put under (our)
> revision control, too. What do you think would be the best approach
> here?
> (gnulib has some discussion about this very issue:
> https://www.gnu.org/software/gnulib/manual/gnulib.html#VCS-Issues)

The gnulib files we import are already under our VCS in
clisp/src/glm4 and clisp/src/gllib.

>> 15625:::tip:Daniel Jour 2016-08-23 rawsock: autotools build system,
>> own gnulib checkout
>>
>> Copying code from socket.d into rawsock.c is no good.
>
> Yes, this is a (dirty) work around: I was not able to update the
> gnulib code for core CLISP (thus socket.d).

what was the problem?

>> Also, you removed the configure script, so now people have to install
>> autoconf to build clisp.
>> Are you sure this is right?
>
> I removed it from the repository because it is a generated file
> now. It would be part of a source distribution (a release) though,
> thus only CLISP developers need to have autoconf/automake/etc.

Okay, so you want to go the way of Emacs - the developers have to
install autotools and the generated files are excluded from VCS. Fine.
Let us do that after the release.
Note, however, that you should use "hg mv" for the
configure.in --> configure.am transition and make changes to
configure.am only after committing the "mv" operation (same for _all_
renaming).

>> DIUC that the main change required to make rawsock work on windows
>> was the switch from rawsock_t to int?
>
> Basically, yes. gnulib is (or in part, will be) handling the windows
> specific code. rawsock is interfacing to POSIX/BSD sockets, gnulib is
> implementing them on windows.

Fine. This means that the change necessary for a release is actually
quite small:

--8<---------------cut here---------------start------------->8---
diff -r 2d1aedc0b550 modules/rawsock/rawsock.c
--- a/modules/rawsock/rawsock.c Mon Aug 29 19:20:13 2016 -0400
+++ b/modules/rawsock/rawsock.c Tue Aug 30 11:00:39 2016 -0400
@@ -66,7 +66,7 @@
 #if defined(HAVE_IFADDRS_H)
 # include <ifaddrs.h>
 #endif
-typedef SOCKET rawsock_t;
+typedef int rawsock_t;
 
 DEFMODULE(rawsock,"RAWSOCK")
--8<---------------cut here---------------end--------------->8---

Right?

>> please take a look at clisp/modules/berkeley-db/bdb.c:bdb_handle()
>> for getting the regex_t pointer out of the struct.
>
> Hm .. is this faster than the approach that I took? It saves a level
> of indirection ("one pointer"), right? Or is there something else
> wrong with the approach I took in regexp?

It's the same, but it allows restarts (i.e., the user can supply a
different input if the original argument is bad - instead of aborting
the whole thing).
In fact, you might want to model your approach on what I did in
modules/pcre instead of modules/berkeley-db.
However, this should be done _after_ the release.

PLAN:
-1- fix rawsock on windows and make a release (2.50)
2/3 *.d --> *.c rename
2/3 switch to autotools (dropping generated files)
-4- update gnulib
-5- your proposed regexp changes
-6- release (3.0)

Did I miss anything?

-- 
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://think-israel.org http://camera.org
http://truepeace.org http://www.memritv.org http://dhimmi.org
Press any key to continue or any other key to quit.
From: Daniel J. <dan...@gm...> - 2016-08-30 15:57:14
These little changes spawned quite a lot of discussion ...

I need these changes (removing the absolute paths) because I - running
NixOS - don't have pwd, cat etc. in /bin/. That's the main reason for
these commits. If it is _really_ necessary to keep the absolute paths
then I can keep these changes locally.

Now I really don't want to offend anyone, so please don't get upset by
the following, but I like being honest and ...

> * if clisp executes "pwd" and
> * you have, say, "~/bin" in your $PATH before "/bin" and
> * a malicious actor plants an executable named "pwd" into "~/bin",
> then you will run that executable as yourself.

... I'm a bit annoyed by this: What if a malicious actor plants an
"exec modified_shell" into the shell's init file that rewrites all
calls to /bin/pwd to a malicious binary? What if there's a rootkit
running? What if the shell is running within a proot (usermode chroot)
session that mounts a directory with malicious binaries to /bin? What
if a malicious actor does this:

alias cd='some_command_involving_rm_rf_and_a_home_directory'

Even if we "secure" pwd and cat, what about: gcc, ranlib, ar, mv, cp,
ln, make, ... ? Do we want to hardcode these paths, too?

The point is: It is none of our business. If a malicious actor has
that much control over the user's environment, then all hope is lost
either way. Moreover, we're talking here about the build system, and
not about software to counteract such a security breach (it wouldn't
make sense to run that from the user environment either way).

The whole purpose behind things like $PATH is to be able to configure
which binaries to use. What if the user is a non-root user on some
server, and the system /bin/pwd has a bug because of a wrong system
time, but the user has a working pwd in ~/bin? (Sounds unlikely? I've
already been in that position ...)

What if the user needs special versions of pwd, mv, cp and such
because the sources live on a non-standard network filesystem that
cannot be mounted, but can be accessed using these commands?
(Unlikely, true, but we just don't know.)

The user should provide external dependencies. The build system tries
to find them, or otherwise lets the user provide the location of these
dependencies (for example by setting $PATH ...).

We could also think this further: We depend on libsigsegv. Why let the
user specify where to find it when we could also download it directly
during the build process and put it into a known location? Why not
download a libc of our choice? Or a compiler? ...

That said, all of this (except the /bin/cat in the test) will become
pointless as soon as we switch to an autotools based build system,
because it will do the right thing anyway.

The discussion also seems to carry a bit much emotion, so I hope I
don't make things worse, but I'd like to come back to my original
question:

> Why do we need the physical (not logical, i.e. without the
> symlinks) path, though?

Rephrasing: Why do we want to compare directories?
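For what it's worth, the "let the build system find the tools" approach mentioned earlier in the thread ($CAT and $PWD determined by an autoconf macro) maps directly onto stock autoconf. A sketch of such a configure.ac fragment follows; the variable names and fallbacks are illustrative:

```m4
dnl Find cat and pwd on the user's PATH, falling back to the
dnl traditional locations. AC_PATH_PROG also AC_SUBSTs the variable,
dnl so generated Makefiles/scripts can use @CAT@ and @PWD_PROG@.
AC_PATH_PROG([CAT], [cat], [/bin/cat])
AC_PATH_PROG([PWD_PROG], [pwd], [/bin/pwd])
```

A user (or a distribution like NixOS) can still override the result at configure time, e.g. `./configure PWD_PROG=/nix/store/.../bin/pwd`, which is exactly the configurability Daniel is asking for without giving up a deterministic default.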
From: Daniel J. <dan...@gm...> - 2016-08-30 16:10:21
> I am pretty sure we can assume C99 when building CLISP.
That's good :)
> * gctrigger is necessary for GCSAFETY -- keep it!
Hm ... what about this:
MAYGC(void, my_function_name, type1, param1, type2, param2) {
// code
}
I'm pretty sure I could write a preprocessor macro that can turn the
above into the equivalent of what's currently produced by gctrigger.
(all it does is place GCTRIGGER(param1, param2) in there,
depending on type1 and type2, right?)
Getting rid of all preprocessing would be nice because then we could
use the implicit rules and dependency tracking of automake.
> * txt2c is for doc processing, I think we should move away from it
> (what do other projects use?)
Do I understand correctly that txt2c is mostly used to enable or
disable parts of the docs depending on the configuration, as well as
putting some data about the build and configuration into the docs
(like whether it uses trivialmap memory or not)?
This is what AC_CONFIG_FILES and its variable substitution is IMO for,
though I'm not sure if it can replace txt2c completely (and in an
elegant and easy to use way).
> I would like that to happen _after_ the release.
Sounds reasonable :)
|
|
From: Sam S. <sd...@gn...> - 2016-08-30 17:21:13
|
> * Daniel Jour <qna...@tz...> [2016-08-30 18:10:12 +0200]:
>
>> * gctrigger is necessary for GCSAFETY -- keep it!
>
> Hm ... what about this:
>
> MAYGC(void, my_function_name, type1, param1, type2, param2) {
> // code
> }
We already have LISPFUN which looks like that, so, I guess, it will be
okay - even though I would _much_ prefer the standard C syntax.
> Getting rid of all preprocessing would be nice because then we could
> use the implicit rules and dependency tracking of automake.
C module files are pre-processed, so I don't think pre-processing can go
away entirely.
>> * txt2c is for doc processing, I think we should move away from it
>
> Do I understand correctly that txt2c is mostly used to enable or
> disable parts of the docs depending on the configuration, as well as
> putting some data about the build and configuration into the docs
> (like whether it uses trivialmap memory or not)?
yes, exactly.
> This is what AC_CONFIG_FILES and its variable substitution is IMO for,
> though I'm not sure if it can replace txt2c completely (and in an
> elegant and easy to use way).
I don't think this is feasible or convenient.
>> (what do other projects use?)
This is the key question.
txt2c processes these files:
* _clisp.c
https://sourceforge.net/p/clisp/mailman/message/23484750/
(also explained in its header)
looks like double pre-processing is inevitable.
* README*
--- should have no platform-dependent parts.
* man pages clisp.1, clisp.html &c
--- what do other projects use? e.g., does Emacs or sbcl or perl or
python man pages differ on different platforms?
--
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://iris.org.il http://www.dhimmitude.org
http://truepeace.org http://ffii.org http://thereligionofpeace.com
Takeoffs are optional. Landings are mandatory.
|
|
From: Jan S. <ha...@st...> - 2016-08-30 19:00:27
|
On Aug 30 17:57:06, dan...@gm... wrote:

> I need these changes (removing the absolute paths) because I - running
> NixOS - don't have pwd, cat etc. in /bin/. That's the main reason for
> these commits.

You are suggesting to discard the standard, portable way of doing things in order to accommodate one Frankenstein system. Not that I have any say in this, but no way.

> If it is _really_ necessary to keep the absolute paths
> then I can keep these changes locally.

How about you locally make a /bin/pwd symlink and leave things like they should be?

> > * if clisp executes "pwd" and you have, say, "~/bin" in your $PATH
> >   before "/bin" and
> > * a malicious actor plants an executable named "pwd" into "~/bin",
> > then you will run that executable as yourself.
>
> ... I'm a bit annoyed by this:
>
> What if a malicious actor plants an "exec modified_shell" into the
> shell's init file that rewrites all calls to /bin/pwd to a malicious
> binary? What if there's a rootkit running?

[snip rest of rant about how we are doomed anyway if attacker can replace binaries]

> Even if we "secure" pwd and cat, what about: gcc, ranlib, ar, mv, cp,
> ln, make, ... ? Do we want to hardcode these paths, too?

I don't think that security concerns are the main issue here: the point of calling /bin/pwd is not to "secure" pwd, but this: if you want to know the current working directory in a script, why would you call anything else but /bin/pwd?

BTW:

$ uname -a
SunOS fray1 5.11 11.3 sun4v sparc SUNW,SPARC-Enterprise-T5120
$ find /bin/ /usr/bin/ /usr/xpg* -name pwd\*
/bin/sparcv9/pwdx
/bin/pwdx
/bin/pwd
/usr/bin/sparcv9/pwdx
/usr/bin/pwdx
/usr/bin/pwd

> The whole purpose behind things like $PATH is to be able to configure
> which binaries to use.

Not here. The purpose of calling pwd(1) in a script such as tests/*tst is to get a result guaranteed by /bin/pwd to be what you expect. What's there to "configure" about this?
> What if the user is a non-root user on some
> server, and the system /bin/pwd has a bug because of a wrong system
> time but the user has a working pwd in ~/bin? (sounds unlikely? I've
> already been in that position ...)

Ah, another Frankenstein system we should accommodate instead of doing the obvious, standard thing. (BTW, how exactly would wrong system time influence the behaviour of pwd(1)?)

> What if the user needs special versions of pwd, mv, cp and such
> because the sources live on a non-standard network filesystem that
> cannot be mounted, but can be accessed using these commands?
> (Unlikely, true, but we just don't know).

Hm, yet another Frankenstein. If you have a cp(1) that cannot cp or a mv(1) that cannot mv, fix _that_ before writing any scripts.

> We could also think this further: We depend on libsigsegv. Why let the
> user specify where to find it when we can also download it directly
> during the build process and put it into a known location?

Because libsigsegv can be installed anywhere in the system, unlike standard binaries like pwd(1).

Jan
|
|
From: Daniel J. <dan...@gm...> - 2016-08-31 11:57:21
|
First a word about GSoC: I'm of course sad that I didn't make it through the final evaluation, but it was definitely the right decision. I had not achieved the main goal (making a release, updating gnulib) and thus would've decided the same way (there was a checkbox asking about my own impression of the project status where I noted exactly this impression myself, too).

> BUFSIZ is a pretty standard constant for all string buffers.

Eh, can you support that? It's a standard constant for STREAM buffers. This is even defined by the C (C99) standard. Using it for anything that's not directly related to a stream thus seems wrong to me. The only thing that might be interpreted as "standard for [..] string buffers" is this quote from the glibc documentation:

> Sometimes people also use BUFSIZ as the allocation size of buffers
> used for related purposes, such as strings used to receive a line of
> input with fgets (see Character Input). There is no particular
> reason to use BUFSIZ for this instead of any other integer, except
> that it might lead to doing I/O in chunks of an efficient size.

Though this too does not specify whether BUFSIZ will be small enough to be put onto the stack. Moreover, it's just the documentation of a single libc; there might be systems that have a huge BUFSIZ but only provide limited stack space.

> Let us revisit this issue at a later date.
> I think the with_string_0 mechanism is good enough.
> If you disagree, you will have to argue for it to be changed pervasively
> throughout CLISP.

with_string_0 is not involved here. I'm concerned by this (from regexi.c):

begin_system_call();
ret = (regmatch_t*)alloca((re->re_nsub+1)*sizeof(regmatch_t));
end_system_call();

re->re_nsub is the number of subexpressions, and if the regex is in any way modifiable by a malicious actor (e.g. a POST parameter for a search field), then that actor could pass a regex with lots of subexpressions, thus causing the above alloca to produce a stack overflow (in the best case).
> This is why we don't run gnulib-tool in that directory!
> We only ever run it in src.

Hm, I'm afraid this is not a good idea, or at least not a scalable one. Let me explain: We have modules because we don't want to have their code in core CLISP, and we want to be able to (or let the user) provide modules to extend CLISP at will (and even at runtime, with dynamic loading). And adding a new module should not require changes to core CLISP, right? A user should be able to write some module and let clisp-link from the installed CLISP do its magic.

Now assume a CLISP module needs (for example) access to some_function, but core CLISP does not. The gnulib module the_module provides that some_function on systems where it's not available. Should we add the_module to core CLISP? I think no, because that would bloat core CLISP (and we'd need special linker flags to actually have it in the resulting binary). Thus we add it to the CLISP module, and everything's fine.

Until we extend the CLISP module and it suddenly needs another_function. Core CLISP happens to need this function, too. So the gnulib module another_module which provides this function is already included in CLISP. Now we could just use that and be done with it. Until we need another version (newer, older, using xalloc-die instead of xalloc, ...) of it. Or another_module and the_module both depend on a gnulib module lowlevel_thing. another_module only works with the lowlevel_thing from a year ago, but the_module needs a recent one.

My point is: gnulib is not designed to be something "shareable" across projects (and core CLISP and a module are basically that: two separate projects). Thus IMO the correct approach to using gnulib is to have a gnulib checkout for core CLISP, using only the gnulib modules needed by core CLISP, and to let each CLISP module (which wants/needs to use gnulib) maintain its own gnulib checkout with only the gnulib modules needed by that particular CLISP module.
This adds some bloat when functionality overlaps (rawsock and socket.d, and basic stuff like file IO), but probably only on "gnulib intensive" platforms (windows?):
https://sourceforge.net/p/clisp/bugs/634/

Also I think complaining about a missing libgnu.so won't help. That's just not the way gnulib is supposed to be used.

> what was the problem [updating gnulib for core CLISP]?

I'm not entirely sure. Makefile.devel tried to update (?) something (configure?) for all modules first, but this failed for all modules. IMO updating gnulib should - in the end - be no more effort than running gnulib-tool --update (in the correct directories, that is, the top level directory and every module directory that has its own gnulib checkout; we could put that into a script or Makefile.devel then).

> Fine. This means that the change necessary for a release is actually
> quite small:
> [snip]
> Right?

Unfortunately it's not that easy: rawsock fails to build on Windows (MinGW) because the gnulib code (from core CLISP) is too old and makes a (now) false assumption about the internals of MinGW header files.

Thus, the necessary changes are:

1) Either update gnulib for core CLISP, or give rawsock its own gnulib (that's what I did and - due to the reasoning above - I'd argue for).

2) Remove "windows" specific code from rawsock.c: the typedef, the inclusion of windows headers, the parts with #if defined(WIN32_NATIVE).

At this point it will compile, but with reduced functionality, because gnulib has a (IMHO) design flaw: If there's no (for example) netdb.h, then the corresponding gnulib module provides one. But it does not #define HAVE_NETDB_H, thus our code would not use it (because of the #if defined(HAVE_NETDB_H) conditional source parts). Therefore we need to either remove the #if defined(..) stuff, or (better) find a way to determine whether gnulib provides a replacement or not.
The first option is straightforward but will lead to issues with platforms that are not supported by gnulib or where gnulib does not provide a replacement (should we again target them).

> PLAN:
>
> -1- fix rawsock on windows and make a release (2.50)
> 2/3 *.d --> *.c rename
> 2/3 switch to autotools (dropping generated files)
> -4- update gnulib
> -5- your proposed regexp changes
> -6- release (3.0)

Due to all of the above discussion I think the first and most important thing to do is to find a consensus on how we use gnulib, and then update it (otherwise rawsock will not work).

> Okay, so you want to go the way of Emacs - the developers have to
> install autotools and the generated files are excluded from VCS.
> Fine.
> Let us do that after the release.
>
> Note, however, that you should use "hg mv" for the configure.in -->
> configure.ac transition and make changes to configure.ac only after
> committing the "mv" operation (same for _all_ renaming).

Ok :)
|
|
From: Sam S. <sd...@gn...> - 2016-08-31 13:49:21
|
> * Daniel Jour <qna...@tz...> [2016-08-31 13:57:12 +0200]:
>
> First a word to GSoC: I'm of course sad that I didn't made it through
> the final evaluation, but it was definitely the right decision. I had
> not achieved the main goal (making a release, updating gnulib) and
> thus would've decided the same way (there was a checkbox asking about
> my own impression about the project status where I noted exactly this
> impression myself, too).

I am glad we agree here.

--
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://www.dhimmitude.org
http://americancensorship.org http://mideasttruth.com http://truepeace.org
http://thereligionofpeace.com
Diplomacy is the art of saying "nice doggy" until you can find a nice rock.
|
|
From: Sam S. <sd...@gn...> - 2016-08-31 13:55:34
|
> * Daniel Jour <qna...@tz...> [2016-08-31 13:57:12 +0200]:
>
>> BUFSIZ is a pretty standard constant for all string buffers.
>
> Eh, can you support that? It's a standard constant for STREAM
> buffers. This is even defined by the C (C99) standard. Using it for
> anything that's not directly related to a stream thus seems wrong to
> me.
>
> The only thing that might be interpreted as "standard for [..] string
> buffers" is this quote from the glibc documentation:
>
>> Sometimes people also use BUFSIZ as the allocation size of buffers
>> used for related purposes, such as strings used to receive a line of
>> input with fgets (see Character Input). There is no particular
>> reason to use BUFSIZ for this instead of any other integer, except
>> that it might lead to doing I/O in chunks of an efficient size.

yes, this is precisely what I was talking about.

> Though this too does not specify whether BUFSIZ will be small enough
> to be put onto the stack. Moreover it's just in the documentation of a
> single libc, there might be systems that have a huge BUFSIZ but only
> provide limited stack space.

BUFSIZ is usually the page size (4kB). Given the amount of legacy code which does what the glibc documentation talks about, we are in good company. This is the same issue as /bin/pwd - if it ain't broke, don't fix it. Unless there is a commonly used, clearly correct alternative approach, the "char buffer[BUFSIZ];" is going to stay with us. If it is ever replaced, it will be a pervasive change throughout the CLISP sources, not just the regexp module.

--
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://memri.org http://truepeace.org
http://honestreporting.com http://openvotingconsortium.org
People with a good taste are especially appreciated by cannibals.
|
|
From: Daniel J. <dan...@gm...> - 2016-09-02 23:06:09
|
> BUFSIZ is usually page size (4kB).

Ok, so since AFAIK we're not expecting to have deeply nested stack frames (> 1000), this shouldn't be an issue then. The downside of such large buffers is that they might screw up CPU cache usage, but I'd say this would be a very speculative reason for premature optimization :)

> Unless there is a commonly used clearly correct alternative approach,
> the "char buffer[BUFSIZ];" is going to stay with us,
> If it will ever be replaced, it will be a pervasive change throughout
> the CLISP sources, not just the regexp module.

The commonly used, clearly correct alternative is to use buffers with the (situation dependent) maximum size of the expected strings. Though, as you pointed out, in case it's changed it should be changed throughout the sources, and that's a lot of effort for questionable gain.
|
|
From: Sam S. <sd...@gn...> - 2016-08-31 14:01:55
|
> * Daniel Jour <qna...@tz...> [2016-08-31 13:57:12 +0200]:
>
>> Let us revisit this issue at a later date.
>> I think the with_string_0 mechanism is good enough.
>> If you disagree, you will have to argue for it to be changed pervasively
>> throughout CLISP.
>
> with_string_0 is not involved here. I'm concerned by this (from regexi.c):
>
> begin_system_call();
> ret = (regmatch_t*)alloca((re->re_nsub+1)*sizeof(regmatch_t));
> end_system_call();
>
> re->re_nsub is the number of subexpressions, and if the regex is in
> anyway "modifyable" by a malicious actor (e.g. a POST parameter for a
> search field), then that actor could pass a regex with lots of
> subexpressions, thus causing above alloca to produce a stack overflow
> (in the best case).
I see.
We should handle it the same way we do in
clisp/modules/syscalls/calls.c:CONFSTR:
--8<---------------cut here---------------start------------->8---
#define CS_S(cmd) \
begin_system_call(); res = confstr(cmd,buf,BUFSIZ); end_system_call(); \
if (res == 0) value1 = T; \
else if (res <= BUFSIZ) value1 = asciz_to_string(buf,GLO(misc_encoding)); \
else { \
/* Here we cannot use alloca(), because alloca() is generally unsafe \
for sizes > BUFSIZ. */ \
char *tmp = (char*)clisp_malloc(res); \
begin_system_call(); \
confstr(cmd,tmp,res); \
end_system_call(); \
/* FIXME: asciz_to_string may signal an error in which case tmp leaks */ \
value1 = asciz_to_string(tmp,GLO(misc_encoding)); \
begin_system_call(); \
free(tmp); \
end_system_call(); \
}
--8<---------------cut here---------------end--------------->8---
--
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://islamexposedonline.com
http://iris.org.il http://thereligionofpeace.com http://camera.org
The dark past once was the bright future.
|
|
From: Sam S. <sd...@gn...> - 2016-08-31 14:30:52
|
> * Daniel Jour <qna...@tz...> [2016-08-31 13:57:12 +0200]:
>
>> This is why we don't run gnulib-tool in that directory!
>> We only ever run it in src.
>
> Hm, I'm afraid that this not a good idea, at least not a scalable
> one. Let me explain: We have modules because we don't want to have
> their code in core CLISP, and we want to be able to (or let the user)
> provide modules to extend CLISP at will (and even at runtime, with
> dynamic loading).

The rule above is only for _base_ modules.
http://clisp.org/impnotes/modules.html#base-modules
These are the modules that are always available to the user.

> And adding a new module should not require changes to core CLISP,
> right? A user should be able to write some module, and let clisp-link
> from the installed CLISP do it's magic.

Yes, other modules (e.g., rawsock & pcre) should have their own gnulib shared libraries - but not the code.

> Now assume a CLISP module needs (for example) access to some_function,
> but core CLISP does not. The gnulib module the_module provides that
> some_function on systems where it's not available. Should we add
> the_module to core CLISP? I think no, because that would bloat core
> CLISP (and we'd need special linker flags to actually have it in the
> resulting binary). Thus we add it to the CLISP module, and
> everything's fine.

Right. Note that "core CLISP" is actually the base linkset, not the boot linkset. Also, the pernicious nature of gnulib is the dependency creep. IOW, if you want to use gnulib_module_1, chances are that it will also pull in gnulib_module_2, gnulib_module_3, gnulib_module_4 &c. And if some of these modules are already present in base clisp, we do not want the module to pull them in.

> My point is: gnulib is not designed to be something "shareable" across
> projects (and core CLISP and a module is basically that: two separate
> projects).

This sucks.
> Thus IMO the correct approach at using gnulib is to have a gnulib
> checkout for core CLISP, using only the gnulib modules needed by core
> CLISP. And letting each CLISP module (which wants/needs to use gnulib)
> maintain its own gnulib checkout with only the gnulib modules needed by
> that particular CLISP module.

Indeed, this is an easy way.

> This adds some bloat when functionality overlaps (rawsock and
> socket.d, and basic stuff like file IO), but probably only on "gnulib
> intensive" platforms (windows?):
> https://sourceforge.net/p/clisp/bugs/634/

The dependency creep of gnulib means that on any non-glibc platform it is often a copy of a large part of glibc. Even on linux - on LINUX, Carl! - it pulls in ioctl!

> Also I think complaining about a missing libgnu.so won't help. That's
> just not the way gnulib is supposed to be used.

This sucks double. However, I tried to fight this battle 5 years ago and I am not interested in re-fighting it now. If every non-base module gets a 1MB libgnu.a, I guess our users will have to live with it. Bruno, is this really the way to go?

>> what was the problem [updating gnulib for core CLISP]?
>
> I'm not entirely sure. Makefile.devel tried to update (?) something
> (configure?) for all modules first, but this failed for all
> modules. IMO updating gnulib should - in the end - be no more effort
> than running gnulib-tool --update (in the correct directories, that is
> the top level directory and every module directory that has its own
> gnulib checkout; we could put that into a script or Makefile.devel
> then).

That's what the gnulib-imported target does, more or less.

--
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://jihadwatch.org http://islamexposedonline.com
http://truepeace.org http://mideasttruth.com http://americancensorship.org
To iterate is human; to recurse, divine.
|
|
From: Sam S. <sd...@gn...> - 2016-08-31 14:40:21
|
> * Daniel Jour <qna...@tz...> [2016-08-31 13:57:12 +0200]:
>
>> Fine. This means that the change necessary for a release is actually
>> quite small:
>> [snip]
>> Right?
>
> Unfortunately it's not that easy: rawsock fails to build on Windows
> (MinGW) because the gnulib code (from core CLISP) is too old and makes
> a (now) false assumption about the internals of MinGW header files.
>
> Thus, the necessary changes are:
>
> 1) Either update gnulib for core CLISP, or give rawsock its own gnulib
> (that's what I did and - due to the reasoning above - I'd argue for)
>
> 2) Remove "windows" specific code from rawsock.c: The typedef,
> including windows headers, the parts with #if defined(WIN32_NATIVE)
>
> At this point it will compile, but has reduced functionality, because
> gnulib has a (IMHO) design flaw: If there's no (for example) netdb.h,
> then the corresponding gnulib module provides one. But it does not
> #define HAVE_NETDB_H, thus our code would not use it (because of the
> #if defined(HAVE_NETDB_H) conditional source parts).

all you need to do is "#include <netdb.h>" unconditionally.
http://lists.gnu.org/archive/html/bug-gnulib/2011-05/msg00338.html

--
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://camera.org http://jihadwatch.org
http://memri.org http://americancensorship.org http://www.dhimmitude.org
Why use Windows, when there are Doors?
|
|
From: <Joe...@t-...> - 2016-09-01 12:01:55
|
Hi,

Daniel Jour replied to Sam Steingold:
>> Also, you removed the configure script, so now people have to install
>> autoconf to build clisp.
>> Are you sure this is right?
> I removed it from the repository because it is a generated file now. It
> would be part of a source distribution (a release) though, thus only
> CLISP developers need to have autoconf/automake/etc.

1. It's good that you plan to keep configure as part of a release distribution. People expect to write ./configure ... && make. I believe some people would be upset if they had to use autotools. I remember lots of reports (rumours? problems long solved?) in misc. projects about version mismatch issues coming from autotools, when the developer and the user have different versions of autotools. Having the same configure for everybody nicely solves this issue.

2. Probably that is precisely the reason why some open source projects nevertheless *include* configure within their RCS/SVN/hg/git tree, beside configure.in. The file is then typically generated by one maintainer or a team of maintainers with a known (working) configuration, not by individual users. Often enough, the file is only updated at release time in hg/git. For instance, the wine project works exactly like this.

Somehow, it would be a pain for a user to have to get the clisp source from hg but a working configure from inside a downloaded distribution, instead of having everything in the tree.

Regards,
Jörg höhle
|
|
From: Sam S. <sd...@gn...> - 2016-09-01 14:56:32
|
Hi,

> * <Wbr...@g-...> [2016-09-01 11:34:06 +0000]:
>
> Daniel Jour replied to Sam Steingold:
>>> Also, you removed the configure script, so now people have to install
>>> autoconf to build clisp.
>>> Are you sure this is right?
>> I removed it from the repository because it is a generated file now. It
>> would be part of a source distribution (a release) though, thus only
>> CLISP developers need to have autoconf/automake/etc.
>
> 1. It's good that you plan to keep configure as part of a release
> distribution. People expect to write ./configure ... && make.

Absolutely!

> I believe some people could be upset if they had to use autotools.
> Probably that is precisely the reason why some open source projects
> nevertheless *include* configure within their RCS/SVN/hg/git tree,
> beside configure.in. The former file is then typically generated by
> one or a team of maintainers with a known (working) configuration, not
> by individual users. Often enough, the file is only updated at
> release time in hg/git. For instance, the wine project works exactly
> like this.

And Emacs no longer does. Getting autotools is trivial on most widely used platforms, e.g., linux, *bsd (including macosx), windows (cygwin/mingw). However, we don't have to switch away from our current modus operandi. Here and now is the right time and place for people to voice their opinions - do you like the current system (configure scripts in mercurial) or would you prefer the Emacs style (configure scripts in the distribution tarball but regenerated by each developer)?

> Somehow, it would be a pain for a user to have to get clisp source
> from hg but a working configure from inside a downloaded distribution,
> instead of having everything in the tree.

This is not right. You should regenerate the configure files yourself. E.g., when I clone the Emacs git tree, I run "./autogen.sh all" which does all the magic.

At any rate, your vote is noted: keep configure in mercurial. Thanks.
--
Sam Steingold (http://sds.podval.org/) on darwin Ns 10.3.1404
http://www.childpsy.net/ http://palestinefacts.org http://iris.org.il
http://islamexposedonline.com http://mideasttruth.com http://www.memritv.org
Microsoft wants to monopolize the right to be a monopoly.
|
|
From: Ken B. <kb...@co...> - 2016-09-01 15:59:41
|
On 9/1/2016 10:56 AM, Sam Steingold wrote:

> Here and now is the right time and place for people to voice their
> opinions - do you like the current system (configure scripts in
> mercurial) or would you prefer the Emacs style (configure scripts in the
> distribution tar ball but regenerated by each developer).

I prefer the Emacs style.

Ken
|
|
From: Jan S. <ha...@st...> - 2016-09-01 17:00:33
|
> Here and now is the right time and place for people to voice their
> opinions - do you like the current system (configure scripts in
> mercurial) or would you prefer the Emacs style (configure scripts in the
> distribution tar ball but regenerated by each developer).

I believe that the best way is to have a simple ./configure script which is _not_ generated, but written by hand. Such a script is then in the repository of course. For example, http://mdocml.bsd.lv/ does this precisely to avoid dealing with the various versions of auto*. (It also avoids this question.)
http://mdocml.bsd.lv/cgi-bin/cvsweb/configure

If the ./configure is to be regenerated from configure.in, then a generated ./configure should be present in every distribution tarball of course, but not in the repo, being a generated file.

> > Somehow, it would be a pain for a user to have to get clisp source
> > from hg but a working configure from inside a downloaded distribution,
> > instead of having everything in the tree.
>
> This is not right. You should regenerate the configure files yourself.

Yes.

Jan
|