From: Alan W. I. <ir...@be...> - 2002-02-07 20:44:09
Have a look at
http://sourceforge.net/project/stats/index.php?report=last_30&group_id=2915
to see the big effect of the recent release on our web view and download
statistics. From past experience this effect is mostly due to freshmeat
users taking a look.

To put this fast-attack, slow-decay spike into perspective, the statistics
for the first 7 days of February are roughly comparable to the entire month
of January. From past experience the higher-than-normal web view and
download rates will continue for some time after the release, so it is
likely this month will be the best month ever. In the first 7 days we are
already 67 per cent of the way there in web views and 74 per cent of the
way there in downloads (if you exclude a suspiciously large result for one
day in October 2001 which completely distorts that month's statistics).

So far, no bugs have been reported for 5.1.0.

Congratulations to everybody who participated in this release.

Alan

email: ir...@be...
phone: 250-727-2902
FAX: 250-721-7715
snail-mail:
Dr. Alan W. Irwin
Department of Physics and Astronomy, University of Victoria,
P.O. Box 3055, Victoria, British Columbia, Canada, V8W 3P6

__________________________
Linux-powered astrophysics
__________________________
From: Joao C. <jc...@fe...> - 2002-02-07 21:20:03
On Thursday 07 February 2002 8:43 pm, Alan W. Irwin wrote:
> Have a look at
> http://sourceforge.net/project/stats/index.php?report=last_30&group_id=2915
> to see the big effect of the recent release on our web view and download
> statistics. From past experience this effect is mostly due to freshmeat
> users taking a look.
>
> To put this fast-attack, slow-decay spike into perspective, the statistics
> for the first 7 days of February are roughly comparable to the entire month
> of January. From past experience the higher-than-normal web view and
> download rates will continue for some time after the release so it is
> likely this month will be the best month ever. In the first 7 days we are
> already 67 per cent of the way there in web views and 74 per cent of the
> way there in downloads (if you exclude a suspiciously large result for one
> day in October 2001 which completely distorts that month's statistics).
>
> So far, no bugs have been reported for 5.1.0.

I'm not sure if this is a bug: I configured --prefix=/usr/local, then
"make -n install" gives:

...
cp -p -a plmodule.so /usr/local//usr/lib/python2.0//site-packages
cp -p -a pyqt_plmodule.so /usr/local//usr/lib/python2.0//site-packages

I have been looking at the distributed rpm spec files, and I found that
they specify /usr as the install point. Is this the default? Shouldn't
users install non-distro packages in /usr/local, if possible? It is the
default in "configure".

I seem to remember that you said that the java examples would stay under
lib/java/examples, to avoid setting CLASSPATH to two different
directories. Is this correct? And is it really necessary?

Joao
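The doubled path in the install log is consistent with a configure script prepending $prefix to a directory that is already absolute. A minimal sketch of that failure mode and one possible guard — the variable names here are hypothetical, not PLplot's actual configure internals:

```shell
# Hypothetical reconstruction of the bug: PYTHON_DIR comes back from the
# python interpreter as an absolute path, so blindly prepending $prefix
# produces the doubled /usr/local//usr/... path seen in "make -n install".
prefix=/usr/local
PYTHON_DIR=/usr/lib/python2.0/site-packages

echo "$prefix/$PYTHON_DIR"   # the broken concatenation

# A possible guard: only prepend $prefix when the directory is relative.
case $PYTHON_DIR in
  /*) install_dir=$PYTHON_DIR ;;
  *)  install_dir=$prefix/$PYTHON_DIR ;;
esac
echo "$install_dir"          # the absolute path is left alone
```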
From: Alan W. I. <ir...@be...> - 2002-02-07 22:31:11
> On Thursday 07 February 2002 8:43 pm, Alan W. Irwin wrote:
> > So far, no bugs have been reported for 5.1.0.
>
> I'm not sure if this is a bug: I configured --prefix=/usr/local, then
> "make -n install" gives:
>
> ...
> cp -p -a plmodule.so /usr/local//usr/lib/python2.0//site-packages
> cp -p -a pyqt_plmodule.so /usr/local//usr/lib/python2.0//site-packages

Looks like YACP, yet another configuration problem. That extra /usr in
the middle of the path shouldn't be there. If I specify either
/usr/local/plplot or /usr for the prefix, everything works fine.

> I have been looking at the distributed rpm spec files, and I found that
> they specify /usr as the install point.

That's standard for all rpms and also debs. In Linux distributions
/usr/local is reserved for tarball installs *not* under rpm (or deb)
control, and /usr is reserved for debs and rpms. Those who use a prefix
of /usr for tarball installs or a /usr/local prefix for rpm or deb
installs get what they deserve....;-)

> I seem to remember that you said that the java examples would stay under
> lib/java/examples, to avoid setting CLASSPATH to two different
> directories. Is this correct? And is it really necessary?

I would prefer them installed with the other examples (which would
require two separate directories in the colon-separated list of
directories in CLASSPATH), but Geoffrey felt they should stay where they
are currently installed so CLASSPATH would only need one directory. I
went along at the time, but when users make their own java plplot
examples they'll need two directories in the CLASSPATH in any case. So I
hope Geoffrey changes his mind about where the Java examples should be
installed, but if not, that is fine as well.

Alan
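For readers following the CLASSPATH point: the two-directory setup Alan describes is just a colon-separated list. A sketch with hypothetical install locations (the actual PLplot paths may differ):

```shell
# Hypothetical install locations -- adjust to your own prefix.
# One entry for the installed jar/class directory, one for the examples.
CLASSPATH=/usr/local/plplot/lib/java:/usr/local/plplot/share/plplot/examples/java
export CLASSPATH

# Show each colon-separated entry on its own line.
echo "$CLASSPATH" | tr ':' '\n'
```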
From: <jca...@in...> - 2002-02-08 00:48:42
On Thursday 07 February 2002 22:30, Alan W. Irwin wrote:
| > On Thursday 07 February 2002 8:43 pm, Alan W. Irwin wrote:
| > > So far, no bugs have been reported for 5.1.0.
| >
| > I'm not sure if this is a bug: I configured --prefix=/usr/local,
| > then "make -n install" gives:
| >
| > ...
| > cp -p -a plmodule.so /usr/local//usr/lib/python2.0//site-packages
| > cp -p -a pyqt_plmodule.so
| > /usr/local//usr/lib/python2.0//site-packages
|
| Looks like YACP, yet another configuration problem. That extra
| /usr in the middle of the path shouldn't be there. If I specify
| either /usr/local/plplot or /usr for the prefix everything works
| fine.
|
| > I have been looking at the distributed rpm spec files, and I
| > found that they specify /usr as the install point.
|
| That's standard for all rpms and also debs. In Linux distributions
| /usr/local is reserved for tarball installs *not* under rpm (or
| deb) control and /usr is reserved for debs and rpms. Those who use
| a prefix of /usr for tarball installs or a /usr/local prefix for
| rpm or deb installs get what they deserve....;-)

Like me, when I made an automatic upgrade on my linux box, just to find
that (of course) the upgrade had upgraded just the distro packages, not
the manually installed rpm ones? And then searching for all of them...
it would be much easier if they were in /usr/local -- or if I had a
separate rpm database just for them.

But I don't see the point in installing rpms under /usr -- there must
exist some reasoning behind that?

I have yet another rpm question. With dyndrivers, we can make an rpm
with all available drivers, without having to worry whether the user's
system has the necessary libs or not! Of course the user will not be
able to use those drivers, but it will not hurt, and if the user later
installs the extra packages the drivers will be there. Correct?

Joao
From: Alan W. I. <ir...@be...> - 2002-02-08 06:38:32
On Fri, 8 Feb 2002, João Cardoso wrote:
> On Thursday 07 February 2002 22:30, Alan W. Irwin wrote:
> [...] Those who use
> | a prefix of /usr for tarball installs or a /usr/local prefix for
> | rpm or deb installs get what they deserve....;-)
>
> Like me, when I made an automatic upgrade on my linux box, just to find
> that (of course) the upgrade had upgraded just the distro packages, not
> the manually installed rpm ones? And then searching for all of them...
> it would be much easier if they were in /usr/local -- or if I had a
> separate rpm database just for them.

The way I used to organize this when I used rpm extensively was I would
copy all the downloaded package files (e.g., plplot....rpm) to certain
directories. The contents of those directories would remind me what had
to be updated in the future. Also, when actually building binary rpm's
from src rpm's the resulting binary rpm's are naturally placed in the
RPMS/i?86 directory and the src rpm into SRPMS. Under Debian I also do
something similar to keep track of what has been downloaded or built
outside the official distribution.

> But I don't see the point in installing rpms under /usr -- there must
> exist some reasoning behind that?

It's traditional. Maybe it's even in the LSB by now, but violate that
tradition at your peril.

> I have yet another rpm question. With dyndrivers, we can make an rpm
> with all available drivers, without having to worry whether the user's
> system has the necessary libs or not! Of course the user will not be
> able to use those drivers, but it will not hurt, and if the user later
> installs the extra packages the drivers will be there. Correct?

Didn't we just go through this exercise to prove the libraries had to be
there (i.e., mentioned specifically for the link command) for the
dynamic driver builds?

Anyhow, when building from a src rpm we use the ordinary configure
script just like when building from a tarball or cvs. (And I am sure
Rafael would answer the same for the deb package build.) Our current
configuration checks for certain libraries and headers on the build
machine. If libcd is not there, it doesn't build the cgm driver. If
libgd isn't there, it doesn't build the gd driver with png and jpeg
devices. This is uniform across src rpm, (presumably) src deb, tarball,
and cvs builds.

Once the binary rpm's and binary debs are available, and assuming, for
example, that libgd was found and linked with on the build machine, the
packages will automatically have dependencies on libgd, etc. This forces
the *binary* rpm user to install the libgd package before the plplot
package is installed (or in Debian's case forcing apt-get to
automatically install libgd before plplot.)

This is yet another reason why the tarball approach and the package
approach (be it rpm or deb) should not be mixed for installation. Time
and again I have heard users complain they cannot build packages on
their Linux machines, and often it is due to a dependency problem caused
by using this mixed approach. If you understand enough to build a
package from a tarball, it is no great leap to doing it from a src rpm.
I have done this many times (even once with the whole tcl/tk src rpm
package from RH 6.2, which I required on my RH 5.2 machine at that time
so I could play pysol (!)), and rarely run into any insoluble problems.
It is so straightforward, I would encourage you to use src rpm builds
rather than tarball builds just so you won't mess up the package
dependencies of your rpm-based Linux machine.

If you run into any specific problems with a src rpm build of, say,
PLplot, don't hesitate to ask me for help. But my help will only be
effective if the system has been cleanly installed strictly from rpm's
without any forcing.

Alan
From: <jca...@in...> - 2002-02-08 23:53:13
On Friday 08 February 2002 06:38, Alan W. Irwin wrote:
| On Fri, 8 Feb 2002, João Cardoso wrote:
| > On Thursday 07 February 2002 22:30, Alan W. Irwin wrote:
...
| > But I don't see the point in installing rpms under /usr -- there
| > must exist some reasoning behind that?
|
| It's traditional. Maybe it's even in the LSB by now, but violate
| that tradition at your peril.

OK, but tradition is not what it used to be :)

| > I have yet another rpm question. With dyndrivers, we can make an rpm
| > with all available drivers, without having to worry whether the
| > user's system has the necessary libs or not! Of course the user will
| > not be able to use those drivers, but it will not hurt, and if the
| > user later installs the extra packages the drivers will be there.
| > Correct?
|
| Didn't we just go through this exercise to prove the libraries had
| to be there (i.e., mentioned specifically for the link command) for
| the dynamic driver builds?

I don't quite understand your phrasing. libplplot only depends on the
libs the xwin and tk drivers depend on, if they are built. Building
dynamic drivers does not put dependencies in libplplot nor in any of the
executables, so I can build all the dyn-drivers I can and package them.
If "rpm" creates dependencies on the package, then one has to find a way
to manually remove them (at rpm creation time).

| Anyhow, when building from a src rpm we use the ordinary configure
| script just like when building from a tarball or cvs. (And I am
| sure Rafael would answer the same for the deb package build.) Our
| current configuration checks for certain libraries and headers on
| the build machine. If libcd is not there, it doesn't build the cgm
| driver. If libgd isn't there, it doesn't build the gd driver with
| png and jpeg devices. This is uniform across src rpm, (presumably)

I know all that, I just wanted to stress the fact that one can
distribute drivers even knowing that the user's system might not have
immediate support for some of them.

I don't agree with Rafael's idea of creating a separate rpm for each
driver! What a mess to install all those perl modules! That's good only
for debian, and for guys who know what they are doing. When I installed
those perl modules I was tempted to (and just did) install them all,
because their names tell me nothing.

| If you run into any specific problems with a src rpm build of, say,
| PLplot, don't hesitate to ask me for help. But my help will only be
| effective if the system has been cleanly installed strictly from
| rpm's without any forcing.

As you might have understood, I started creating an rpm for suse, but my
system is far from clean. I do know however what I have installed by
hand, from suse packages, or from other distro packages.

Joao
From: Alan W. I. <ir...@be...> - 2002-02-09 07:48:08
On Fri, 8 Feb 2002, João Cardoso wrote:
> | Didn't we just go through this exercise to prove the libraries had
> | to be there (i.e., mentioned specifically for the link command) for
> | the dynamic driver builds?
>
> I don't quite understand your phrasing. libplplot only depends on the
> libs the xwin and tk drivers depend on, if they are built. Building
> dynamic drivers does not put dependencies in libplplot nor in any of
> the executables, so I can build all the dyn-drivers I can and package
> them.

It's that last phrase I disagree with (or perhaps misunderstand). Here
are the current actual build results on my system (taken from
make >&! make.out):

gcc -shared -fPIC -o drivers/dg300.drv shared/dg300.o -L. -lplplotd

That's okay; it is consistent with what you have said. But...

gcc -shared -fPIC -o drivers/gd.drv shared/gd.o -lgd -lpng -ljpeg -lz -L. -lplplotd

So for this last line to work, libgd must be on the build system (our
configure system enforces this in any case). Furthermore, this
(rightfully) puts a dependency in the corresponding rpm package on libgd
(and libpng and libjpeg and libz and finally libplplotd). So if you
lumped all the *.drv files in one binary rpm, that rpm would necessarily
depend on libgd (and the other extra libraries). If you split the *.drv
results into one rpm per *.drv file, then only the rpm containing the
gd.drv file would have the dependency on libgd (and other extra
libraries), but you probably don't want to generate quite so many
different rpm packages.

So what goes on now for my plplot binary rpm is that virtually every
dynamic driver is in there (as well as libplplotd, etc.) and the whole
rpm depends on libgd, libpng, libjpeg, and libz (and also the many extra
libraries required by gnome.drv which I haven't mentioned until now).
IIRC, the only two drivers missing from my plplot rpm are the svga one
and the cgm driver (since there is no viable libcd rpm).

Anyhow, splitting off the dynamic drivers into their own rpm package
gains you little unless you micro-split them. Only in that case (drivers
requiring special libraries in their own individual packages) do you
remove some dependencies of the main package on the special libraries.

I am sure Rafael is going through similar calculations. In his case,
though, I assume he will use the micro-split solution since apt-get
makes life simple in that case.

> If "rpm" creates dependencies on the package, then one has to find a
> way to manually remove them (at rpm creation time).

I may be taking that remark out of context, but I think you will run
into trouble with such an approach.

> I know all that, I just wanted to stress the fact that one can
> distribute drivers even knowing that the user's system might not have
> immediate support for some of them.

Only if you use the micro-split solution.

> I don't agree with Rafael's idea of creating a separate rpm for each
> driver!

Actually, he made clear he was talking about deb's. But I agree with
you: without apt-get a micro-split solution for rpm's is not too
practical, so that is why I lumped everything together in the plplot rpm
(with the downside that it has a lot of dependencies that must be
satisfied).

> What a mess to install all those perl modules! That's good only for
> debian, and for guys who know what they are doing. When I installed
> those perl modules I was tempted to (and just did) install them all,
> because their names tell me nothing.

I believe you are referring to the perl modules required for building
the documentation? I skipped all that for my rpm. Too lazy....

Alan
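The per-driver dependencies Alan describes are visible directly in the built objects: ldd lists the shared libraries an ELF file was linked against, which is the same soname information rpm's automatic dependency generator records in the package. A sketch of the inspection step — /bin/sh stands in as a placeholder object, since a real drivers/gd.drv only exists inside a PLplot build tree:

```shell
# ldd shows the shared-library dependencies baked into an object at link
# time; for a driver you would run it on drivers/gd.drv and expect to see
# libgd, libpng, libjpeg, libz, and libplplotd among the results.
ldd /bin/sh

# On an rpm system, the same sonames surface as package-level requirements
# that can be queried before installing (hypothetical package file name):
#   rpm -qpR plplot-5.1.0-1.i386.rpm
```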
From: Rafael L. <lab...@mp...> - 2002-02-08 08:38:15
* João Cardoso <jca...@in...> [2002-02-08 00:48]:
> Like me, when I made an automatic upgrade on my linux box, just to find
> that (of course) the upgrade had upgraded just the distro packages, not
> the manually installed rpm ones? And then searching for all of them...
> it would be much easier if they were in /usr/local -- or if I had a
> separate rpm database just for them.

<AD>
This is an area where Debian excels. On my systems, I have no locally
installed packages; every package I have comes from Debian mirrors and
is automatically upgraded (thanks to "apt-get update; apt-get upgrade").
I am talking about Debian woody (the next stable release), which
currently contains over 8000 packages.
</AD>

> I have yet another rpm question. With dyndrivers, we can make an rpm
> with all available drivers, without having to worry whether the user's
> system has the necessary libs or not! Of course the user will not be
> able to use those drivers, but it will not hurt, and if the user later
> installs the extra packages the drivers will be there. Correct?

I do not know the details of dependency handling in the rpm packaging
system, but for Debian I am planning to make individual packages for
each specific driver needing external libraries. The dependencies will
be set like:

  plplot-tcl  -> tcl8.3, tk8.3
  plplot-xwin -> xlib6g
  plplot-gd   -> libgd2
  ...

If the user says "apt-get install plplot-gd", then libgd2 will be
automatically downloaded, installed, and configured.

--
Rafael
From: Alan W. I. <ir...@be...> - 2002-02-08 18:23:22
On Fri, 8 Feb 2002, Rafael Laboissiere wrote:
> I do not know the details of dependency handling in the rpm packaging
> system....

It's mostly automatic. I check the dependency list that comes as part of
a particular package build (and which of course is also available for
the binary version of the package), and it is almost always right
without any intervention from me.

> ... but for Debian I am planning to make individual packages for each
> specific driver needing external libraries. The dependencies will be
> set like:
>
>   plplot-tcl  -> tcl8.3, tk8.3
>   plplot-xwin -> xlib6g
>   plplot-gd   -> libgd2
>   ...

Splitting plplot up this way is probably the right thing to do for a
system with apt-get, but many people still (despite Conectiva making an
apt-get that works with rpm packages) handle their rpm dependencies by
hand, so it is probably better to lump everything together into one
plplot package in that situation. Anyhow, that is good justification for
my lazy rpm packaging approach where I only make one plplot
package....;-)

At some point I intend to get involved in (unofficial) Debian packaging
myself for the yplot project. Out of curiosity, do you have to remember
all the dependencies yourself, or do you have an automatic system (as in
rpm) that usually gets the dependencies right for whatever package you
have assembled?

BTW, this has been an enjoyable thread which I have helped others to
move far off topic, but I also hope people have paid attention to the
original topic, which was to have a look at
http://sourceforge.net/project/stats/index.php?report=last_30&group_id=2915
and
http://sourceforge.net/project/stats/index.php?report=months&group_id=2915
to get an idea of how many people are taking an interest in our work.

Alan
From: Rafael L. <lab...@mp...> - 2002-02-09 01:04:16
* Alan W. Irwin <ir...@be...> [2002-02-08 10:23]:
> At some point I intend to get involved in (unofficial) Debian packaging
> myself for the yplot project. Out of curiosity, do you have to remember
> all the dependencies yourself, or do you have an automatic system (as
> in rpm) that usually gets the dependencies right for whatever package
> you have assembled?

For tracking shared library dependencies, the Debian packaging tool
(debuild, or dpkg-buildpackage) has a simple mechanism that is triggered
by the following line in debian/control:

  Depends: ${shlibs:Depends}

The "${shlibs:Depends}" string is automatically substituted. For
instance, in the case of plplot-tcl, it becomes:

  libc6 (>= 2.1.2), tcl8.2 (>= 8.2.2), tk8.2 (>= 8.2.2), xlib6g (>= 3.3.6-4)

Of course, other kinds of dependencies (not shared libs) have to be
included by hand. For instance, plplot-tcl depends on itk3.1,
iwidgets3.1, and itcl3.1. Currently, this cannot be detected
automatically. However, it could be, since there is already a mechanism
for Perl, using the token "${perl:Depends}" in debian/control, that
checks all module dependencies. This could easily be extended to Tcl, by
parsing the "package require" instances in Tcl source files. However, I
think there is little interest in this mechanism among Debian
developers, and it is not going to be implemented.

> BTW, this has been an enjoyable thread which I have helped others to
> move far off topic,

Right, but I changed the Subject now.

--
Rafael
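The mechanism Rafael describes can be summarized in a short debian/control stanza. This is an illustrative sketch, not the actual PLplot packaging; the package name follows his examples and the Description line is invented:

```
Package: plplot-tcl
Architecture: any
Depends: ${shlibs:Depends}, itcl3.1, itk3.1, iwidgets3.1
Description: Tcl/Tk support for PLplot (illustrative)
```

At build time, dpkg-shlibdeps computes the shared-library dependencies from the sonames the binaries were linked against, and the substitution replaces the ${shlibs:Depends} token; the hand-maintained entries (itcl3.1 and friends) are simply listed alongside it.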