helidelinux-devel Mailing List for Helide Linux (Page 2)
Status: Abandoned
From: Hui Z. <zh...@wa...> - 2004-03-15 21:23:57

I restructured my lfsbuilder into different modules.

LFS::Logger takes care of logging during installation or uninstallation. It optionally logs to a file in tabbed format, logs to the console in color, supports any number of simultaneous logs, and can accept tty names so that output goes to different terminals.

LFS::Fetcher (just a name so far) takes care of looking for packages, both in local storage and on the internet. It accepts a package name and an optional method (such as cvs, latest, version, etc.) and tries to work out, with its intelligence (heuristics) and the internet, whether the package is available. It always checks a profile first if one exists, so one can teach it about odd packages and possible package-name translations.

LFS::Pkgger (I am working on it) is the package manager. It works together with install-log and logs the status of all installed packages.

LFS::Builder controls all the other modules and does the actual installation, uninstallation, and maybe experimentation (not much of an idea yet; something like constructing a pseudo environment and dropping into a shell-like interface for semi-interactive building). It uses one or more profiles, but if a package is not found in the profile, it tries its own intelligence (such as trying configure, trying make all, trying perl *PL, etc.) and does certain analyses (of the configure script, the Makefile, and error messages); see the sketch after this message. Maybe I should break this into further modules.

LFS::Logger, Fetcher, and Pkgger are independent, which means they can be used individually by another program. The Builder works with all the other modules.

-Hui
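A minimal shell sketch of the build-convention fallback described above, assuming an already-unpacked source tree passed as the argument. The phase order, the /usr prefix, and the script itself are illustrative assumptions, not the actual LFS::Builder code:

    #!/bin/sh
    # Hypothetical fallback: try common build conventions in order
    # when no profile entry exists for the source tree in $1.
    cd "$1" || exit 1
    if [ -x ./configure ]; then
        ./configure --prefix=/usr && make && make install && exit 0
    fi
    if [ -f Makefile ]; then
        make all && make install && exit 0
    fi
    # Perl-style source trees (Makefile.PL, Build.PL, ...)
    for pl in *PL; do
        [ -f "$pl" ] && perl "$pl" && make && make install && exit 0
    done
    echo "no known build convention found in $1" >&2
    exit 1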
From: Hui Z. <zh...@wa...> - 2004-03-10 19:05:53

On Wed, Mar 10, 2004 at 06:30:09PM +0000, Rolf Veen wrote:
> It can be done in a simple manner.
>
> --
> pkg postfix
>
>   provides mta
> --
>
> Any name that appears in the 'provides' field is added to the list of
> installed packages, nothing more. A few lines more in your package
> manager. There are some packages that have more than one option: java,
> httpd, pop3 servers, etc.

That's an excellent idea. For a system that has installed an MTA of some kind, one should always be able to query the package manager for the details, such as which MTA package is actually installed.

-Hui
From: Bennett T. <be...@ra...> - 2004-03-10 18:53:22

2004-03-10T18:30:09 Rolf Veen:
> It can be done in a simple manner.
>
> --
> pkg postfix
>
>   provides mta
> --
>
> Any name that appears in the 'provides' field is added to the list of
> installed packages, nothing more.

Slick. In my package database (/var/lib/bpm), that would just result in creating a symlink mta -> postfix. That is simple. I'll keep it in mind, although I still don't see the complete drill. It seems to be useful only for manual dependency coding, which as I said doesn't hold much interest for me (since people historically don't write manual dependencies, or get them wrong oftener than right). If only there were some way to automate it as well, so that e.g. mutt, whose binary will contain the string /usr/sbin/sendmail (as the default thing to pipe outbound mail to), would depend on mta rather than postfix. Manual dependency specification is something for a different package manager; unless I see it actually used correctly in practice, rather than in theory, I'm not going to clutter bpm with it.

-Bennett
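A minimal sketch of how the 'provides' field could map onto a symlink in a /var/lib/bpm-style database, and how Hui's "which MTA is actually installed?" query would then be answered. The exact layout is an assumption for illustration:

    # Registering: any name in a package's 'provides' field becomes
    # a symlink from the logical name to the real package entry.
    ln -s postfix /var/lib/bpm/mta

    # Querying: resolving the symlink names the actual provider.
    readlink /var/lib/bpm/mta    # prints: postfix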
From: Rolf V. <rol...@he...> - 2004-03-10 17:48:53

Bennett Todd wrote:
> Mostly. On my systems, /usr/sbin/sendmail is provided by postfix; on
> other peoples' systems it'll be provided by qmail, exim, or even
> (horrors!) sendmail. Red Hat built in provisions for manual
> provides/requires dependency tagging for a logical role like "mta",
> which fixes that problem, but AFAIK that's the only place it was
> ever used; it ended up being yet another little one-off hack,
> complicating the package manager while going almost entirely unused.
> I reject that solution.

It can be done in a simple manner.

--
pkg postfix

  provides mta
--

Any name that appears in the 'provides' field is added to the list of installed packages, nothing more. A few lines more in your package manager. There are some packages that have more than one option: java, httpd, pop3 servers, etc.

Rolf.
From: Hui Z. <zh...@wa...> - 2004-03-10 16:04:40

On Wed, Mar 10, 2004 at 03:19:21PM +0000, Bennett Todd wrote:
> If you want to be so precise, make that
> "package/version/build-recipe/build-environment"; the list of
> packages that are installed on the build system influences its
> dependencies, in particular when you consider optional feature
> inclusion. Lots of things will depend on openssl if openssl is
> installed, and won't if it's not, with ./configure automagically
> deducing whether or not to attempt to include SSL support.

One example I met is that mythtv depends on QT with mysql support, which means I can't simply list qt as its dependency.

> Imperfect. I plan on only worrying about dependencies that are
> completely automatically generated, on associating them with built
> packages, and on treating them as heuristic guidelines.

I believe manual dependencies are useful. If you know the build environment, know the package's dependencies, and know the automatic approach doesn't work right, manually listing the dependencies is the only way to get the package installed unattended. Imagine deploying the same installation on hundreds of workstations.

> Take the precise same recipe, and rebuild on a system with different
> packages installed, and you can get different dependencies.

We just need to train the program to be more intelligent. I believe that any work humans find tedious or trivial should be delegated to the computer.

-Hui
From: Hui Z. <zh...@wa...> - 2004-03-10 15:51:30

On Wed, Mar 10, 2004 at 04:09:53PM +0000, Rolf Veen wrote:
> Dependency information is only meaningful for a combination of
> package/version/build-recipe. That means that probably a master
> file with a map 'package/version -> dependencies' is pretty useless.
> The correct map would be 'package/version/recipe -> dependencies'.
> Ugly.
>
> That means that dependencies are more a property of the build
> recipe, a prerequisite for that particular recipe:
>
> --
> pkg binutils-2.14
>
>   build
>     depends
>       bash, coreutils, diffutils, gcc, gettext,
>       glibc, grep, make, perl, sed, texinfo
>     recipe \
>       # here the build recipe
>
>   another_build
>     depends
>       # here other dependencies.
>     recipe \
>       # another recipe
> --
>
> Rolf.
> --
> Just trying to think :-).

There is a lot of repetition in the profile, since most recipes share a common set of dependencies and often the recipe itself. My current approach is to list the common build scripts (I even divided them into unpack, config, build, install, and postinstall phases to increase the chances of sharing) and to list recipe-specific scripts in a separate entry with a prefix for that recipe. The installer is trained to fall back on the common script when it cannot find a recipe-specific one; see the sketch after this message. I find this not only saves editing effort but also improves readability, since each recipe's particular variance is obvious at a glance.

-Hui
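A sketch of that shared-phases layout, in the OGDL profile style used elsewhere in this thread. The entry names and the prefix convention are hypothetical illustrations, not the actual lfsbuilder profile format:

--
common
  unpack    tar xjf $PKG.tar.bz2
  config    ./configure --prefix=/usr
  build     make
  install   make install

pkg gcc-3.3.2
  # Only the phases that differ carry a recipe-specific prefix;
  # the installer falls back to 'common' for everything else.
  gcc_config  ./configure --prefix=/usr --enable-languages=c,c++
--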
From: Bennett T. <be...@ra...> - 2004-03-10 15:37:09

2004-03-10T16:09:53 Rolf Veen:
> Dependency information is only meaningful for a combination of
> package/version/build-recipe.

If you want to be so precise, make that "package/version/build-recipe/build-environment"; the list of packages installed on the build system influences its dependencies, in particular when you consider optional feature inclusion. Lots of things will depend on openssl if openssl is installed, and won't if it's not, with ./configure automagically deducing whether or not to attempt to include SSL support.

> That means that probably a master file with a map 'package/version
> -> dependencies' is pretty useless.

Instead of "useless", say "imperfect".

> The correct map would be 'package/version/recipe -> dependencies'.
> Ugly.

Imperfect. I plan on only worrying about dependencies that are completely automatically generated, on associating them with built packages, and on treating them as heuristic guidelines. If I ask to install something, it'll be installed; if it appears to depend on something that's not installed, I'll get a helpful comment (but no failure of the install). If I try a build, it'll go ahead and try to build it --- and if it succeeds, dependencies will be completely regenerated. If the build fails, the build-dependencies will be analyzed to try and offer helpful suggestions about possible reasons why this build might have failed when the previous one worked.

> That means that dependencies are more a property of the build
> recipe, a prerequisite for that particular recipe:

Take the precise same recipe, rebuild on a system with different packages installed, and you can get different dependencies.

-Bennett
From: Rolf V. <rol...@he...> - 2004-03-10 15:29:53

Dependency information is only meaningful for a combination of package/version/build-recipe. That means that probably a master file with a map 'package/version -> dependencies' is pretty useless. The correct map would be 'package/version/recipe -> dependencies'. Ugly.

That means that dependencies are more a property of the build recipe, a prerequisite for that particular recipe:

--
pkg binutils-2.14

  build
    depends
      bash, coreutils, diffutils, gcc, gettext,
      glibc, grep, make, perl, sed, texinfo
    recipe \
      # here the build recipe

  another_build
    depends
      # here other dependencies.
    recipe \
      # another recipe
--

Rolf.
--
Just trying to think :-).
From: Bennett T. <be...@ra...> - 2004-03-10 15:19:55

2004-03-10T14:15:56 Rolf Veen:
> Before building:
> - ./configure (autotools) 'reverse engineering'
>
> After/during building:
> - ldd
> - strings
> - strace
> - #!

I'd organize these differently.

Heuristic behavior analysis:
  Build-time dependencies:
    strace
  Install-time dependencies:
    ldd
    #!
    strings

Code analysis:
  ./configure (autotools) 'reverse engineering'

I'm not sure whether the ./configure analysis splits up into build-time dependencies -vs- install-time dependencies, or not.

-Bennett
From: Rolf V. <rol...@he...> - 2004-03-10 13:34:53

We can make a summary of methods:

Before building:
- ./configure (autotools) 'reverse engineering'

After/during building:
- ldd
- strings
- strace
- #!

More?

Rolf.
From: Bennett T. <be...@ra...> - 2004-03-09 15:51:59

2004-03-09T09:19:42 Hui Zhou:
> However, some dependencies are a multiple choice, such as either
> glibc or uClibc. For these situations, file dependency may be more
> precise.

More informative, perhaps, but I am not inclined to try to track that level of detail. If I build a package, it'll have a build-time dependency on uClibc, and no install-time dependency on libraries (I statically link). If you build the variant of the package that you find tasteful (editing LDFLAGS in the spec file's build node) perhaps you'll end up with both build- and install-time dependencies on glibc: build-time on glibc-devel and install-time on glibc.

> For runtime dependencies, one has to break common packages apart,
> such as devel packages, doc packages, etc.

One has to, if one chooses to. I choose not to; instead, I use as few shared libs as possible (so packages that provide libs are generally only the "-devel" package), and I plan on partial installs to leave out bits that aren't needed on small dedicated servers.

> I am thinking, for some small dedicated system, can we enumerate
> each executable with ldd to find out the runtime library
> dependencies?

That can certainly be done; it's the most important technique rpm uses, and it's a valuable heuristic. I'll probably end up including that, as well as #! analysis, as special cases.

> Bennett proposed using strings, which may be a better solution.

Less positive than ldd and file (#!) analysis, but more inclusive; in particular, strings analysis can pick up dependencies on other programs that are execed, on data files like /etc/termcap, etc.

-Bennett
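A minimal sketch of the ldd-enumeration idea for a small dedicated system. The directories scanned and the final mapping step are assumptions for illustration:

    # Hypothetical scan: list every shared library that installed
    # executables actually link against at run time.
    for bin in /bin/* /sbin/* /usr/bin/* /usr/sbin/*; do
        [ -x "$bin" ] || continue
        ldd "$bin" 2>/dev/null | awk '/=> \// { print $3 }'
    done | sort -u
    # Each library path would then be mapped back to the package
    # that owns it (e.g. via the per-package sha1 manifests) to
    # produce the runtime dependency list.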
From: Bennett T. <be...@ra...> - 2004-03-09 15:46:21

2004-03-09T04:50:54 Rolf Veen:
> On the other hand, once dependency info is generated for
> a package/version, it is stable.

Mostly. On my systems, /usr/sbin/sendmail is provided by postfix; on other peoples' systems it'll be provided by qmail, exim, or even (horrors!) sendmail. Red Hat built in provisions for manual provides/requires dependency tagging for a logical role like "mta", which fixes that problem, but AFAIK that's the only place it was ever used; it ended up being yet another little one-off hack, complicating the package manager while going almost entirely unused. I reject that solution. Instead, I want the dependency data to be the best that can be heuristically, automatically computed, and to have it be advisory (knowing it will sometimes be wrong) rather than mandatory. I may be wrong, but I do know what I want :-).

> [...] That is, once generated, it can be published on a website
> and used by others. [...]

Now that is an interesting proposal, one I hadn't thought of before. Given a packaging system like bpm, the dependency data, both build-time and install-time, will be included in the bpms, so exporting them to a repository will be straightforward to automate.

> - how do we handle optional deps

I plan on associating the deps with the package build, and I encode all option picking in the spec file; I don't use options or envars to bpmbuild to influence the build process. I want the specs to completely document how the package was built. At the moment that documentation is imperfect, since it doesn't mention all the packages that constitute the build env, which of course influences the built package (including in particular optional features that are auto-detected). Once I tackle dependency management, that will take care of that bit.

> - what are the basic units: packages, files.

Packages. The lists in build-dependencies and install-dependencies will be lists of package names. Files that exist on the system but don't belong to any package are not counted; only the packages that own the files will be counted. One thing I'm still pondering is whether to ignore files whose checksums don't match the checksum in the sha1 manifest of the package that claims to own them. I'm inclined to in fact ignore such broken package-ownership claims.

-Bennett
From: Hui Z. <zh...@wa...> - 2004-03-09 14:36:03

On Tue, Mar 09, 2004 at 09:50:54AM +0000, Rolf Veen wrote:
> But:
>
> - how do we handle optional deps

Optional is optional: either omit all, include all, or prompt during installation and let the user select. One problem remains: every option is there for a reason. How do we extract that reason so the user can make a wise selection?

> - what are the basic units: packages, files.

For build dependencies, the package is certainly the desirable unit; otherwise one has to decide which file belongs to which package. However, some dependencies are a multiple choice, such as either glibc or uClibc. For these situations, file dependencies may be more precise. For runtime dependencies, one has to break common packages apart, into devel packages, doc packages, etc.

I am thinking, for some small dedicated system, can we enumerate each executable with ldd to find out the runtime library dependencies? Bennett proposed using strings, which may be a better solution. This definitely needs thorough testing to find out.

-Hui
From: Rolf V. <rol...@he...> - 2004-03-09 09:08:44

One file that has dependency information is the ./configure script that many packages have. But not all packages have it, and the information is 'not easily' (:-)) extracted; see the sketch after this message. On the other hand, once dependency info is generated for a package/version, it is stable. That is, once generated, it can be published on a website and used by others. Maybe an automated system that is later hand-tuned can be a compromise solution. Once the first system is built and we have its (tuned) dependency file published on helidelinux.sf.net, for example, subsequent builds do not need to create that file again.

But:

- how do we handle optional deps
- what are the basic units: packages, files.

Rolf.
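A rough sketch of what a first pass at 'reverse engineering' a ./configure script could look like. The patterns grepped for are assumptions, and real autoconf output is messier, which is exactly the 'not easily extracted' problem:

    # Harvest the checks a generated configure script performs:
    grep -o 'checking for [A-Za-z0-9_.+-]*' configure | sort -u

    # When configure.in/configure.ac is shipped, the autoconf macro
    # calls name candidate dependencies more directly:
    grep -E 'AC_CHECK_(LIB|HEADERS?)|AC_PATH_PROG' configure.in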
From: Bennett T. <be...@ra...> - 2004-03-09 01:00:17

2004-03-04T09:50:40 Hui Zhou:
> I am writing a profile in ogdl for building packages from
> sources. In order to make editing the profile easier and more
> efficient, it is necessary to define some variables or entities.
> [...]
> Thoughts?

Same thought I had about comments --- open with a pipe through m4 before parsing. I've another thought, too. Before Rolf came over to lfs chat selling his OGDL, I was using Lua for my spec files. At first, having accepted and heavily used the rpm feature of being able to interpolate %{name}, %{version}, and %{release} into subsequent variable defines (commonly the Source:, occasionally in %prep), I was looking for some way to get the same effect with Lua. After searching a while I found it. Then I decided I didn't want it. The proliferation of these little convenience features comes with a cost; reading a complex rpm spec file is a real chore. Try to figure out what commands rpm will issue out of %build when it's doing glibc. So I decided that rather than having automatic variable interpolation to e.g. let me update the version of a package without having to edit it in multiple places, I'd just use global search-and-replace. Since I use a separate spec file for each package, this works just fine.

-Bennett
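For concreteness, the pipe-through-m4 idea might look like the following. This is a hedged sketch, not bpm's actual behavior; the PV macro name and the file layout are invented:

    --- spec.m4: define the version once, let m4 expand it everywhere ---
    define(`PV', `3.50')dnl
    pkg nmap-PV
    url http://download.insecure.org/nmap/dist/nmap-PV.tar.bz2

    --- then expand before handing the result to the parser ---
    m4 spec.m4 > spec && bpmbuild spec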
From: Hui Z. <zh...@wa...> - 2004-03-08 20:57:16

On Mon, Mar 08, 2004 at 02:40:09PM -0500, Bennett Todd wrote:
> 1. Manual expression of dependencies is not interesting. People
> Conclusion: all dependency management has to be completely
> automated.
>
> 2. Even automated dependency management is generally wrong. The most
> Conclusion: dependency data can be used to provide hints and
> advice, but it can't be trusted to be right.
>
> strace -efile -f, then analyzing the results to find all files
> tree, I'll do a strings -a; of the resulting strings, those that are

It is oriented toward a binary distribution; I need to think and learn more before making appropriate comments. I am thinking about how to heuristically determine dependencies during or before build time. The heuristics may not be robust enough to trust, but they could save 90% of a human's time. Thank you for sharing the idea.

-Hui
From: Bennett T. <be...@ra...> - 2004-03-08 20:26:34

2004-03-08T14:32:07 Hui Zhou:
> How do you determine the files created by make install: with
> install-log or with an empty prefix directory?

A bpm spec file is OGDL that provides three values:

pkg is the package name (including the version; I don't separate that out as a separate value).

url is the list of URIs of the sources; plain files must already be present in a src/ dir, full URLs will be downloaded with wget if they aren't already present.

build is a shell script fragment; it should unpack, patch, configure, compile, and install the package. The install should be directed into a tmp dir found by the envar BPM_ROOT, created but completely empty when the script fragment starts running.

So I determine the files by looking in $BPM_ROOT, which looks something like /var/tmp/bpmbuild.$pid/root, after the build script is run.

> Why do you put both the spec and src files and the binary files into
> the bpm packages?

I always want to keep track of both. You can install one or the other by using include/exclude options on the cpio extract, but I like a default of installing both. For custom tight server installs I'll not only omit the sources, I'll also omit the docs. But on normal systems I'll enjoy having everything preserved.

> IMHO, if one wants to install the binary, they don't need spec and
> src, and vice versa.

Splitting these out would be very easy to do, easy to automate. I don't want it for myself.

> Why don't you distribute two sets of packages, one only with specs
> or maybe sources as well, for installation from sources or just
> preparing the binary packages?

After a fashion, I do. The bpm/ dir I distribute is everything, pre-built binaries and full sources, ready to unpack and have a full dev environment that can bootstrap itself. The bpmdist/ dir has snapshots at various times of the spec files, non-public sources (i.e. sources that don't have full URLs), and (starting with the most recent couple) the sha1 files to document what was delivered by each package.

> The others are binary only and install with bzip2 and cpio? Am I
> right that if I am lazy enough I can just obtain the binary part and
> install a system?

Yup. I believe this sequence, run from a knoppix or whatever, would be a full system install (a shell rendering follows after this message):

- partition the drive, mkfs as needed, mount it up under e.g. /a
- wget all the *.cpio.bz2 files
- for each one, extract it under /a
- create a suitable lilo.conf, install lilo

> Rpm just includes the binary package and a spec for more versatile
> installation and configuration and dependency tracking.

rpm includes a _lot_ more stuff. It tracks three separate values to identify a package: the name, a separate version, and a separate release. Then, again separate, there's an architecture, which can be src for a source package.

> The current bpm is simpler because it uses no custom installation,
> no configuration, and no dependencies (I haven't read your actual
> implementation yet; these are my understanding from your
> description, correct me mercilessly if I am wrong and accept my
> apology).

Dependency management hasn't been written yet, but I'm planning it (I posted a separate note to the list about it). It won't complicate the spec files at all, and I can still use cpio.bz2 for the package format.

> Have you considered using rpm with an extremely simple installation
> spec for the binary-only distribution?

I started off planning on using rpm. Then I discovered a few things.

(1) It's got the most completely broken, screwed-up, demented i18n of any program I've heard of. I repackaged postfix, creating my own rpm. The %description in my rpm explains that my postfix omits Berkeley DB (exceedingly nasty license terms) and includes cdb (very pleasing license, and breathtakingly great performance). "rpm -qi postfix" doesn't show my %description, it shows the one for the Red Hat postfix package --- unless I set LC_ALL=C. How's _that_ for disgusting. And Red Hat states flat-out that this is a feature, and won't be fixed.

(2) The code is sufficiently crufty --- a decade or so of band-aid hacks slapped on will do that --- that it's impractical to rip Berkeley DB out of rpm and replace it with something with a civilized license.

That was enough to get me to create my own tool, and once I did I realized it could be _so_ much simpler and still do what I need.

> > The bpm database in /var/lib/bpm/ is simple text files, not the
> What does it look like? How did you populate the database if you
> only install with bzip2|cpio?

It's just files that are installed with bzip2|cpio. For a package named "foo-1.0", the files would be something like

/var/lib/bpm/foo-1.0/spec
/var/lib/bpm/foo-1.0/src/foo-1.0.tar.gz
/var/lib/bpm/foo-1.0/sha1

Simple text files. Once I add dependency management, there'll be a couple more text files alongside spec and sha1. I'll probably be forced at some point to add {pre,post}{install,remove} scripts as well; they'll be specified in the spec file as additional fragments like the current build.

-Bennett
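A hedged shell rendering of that install sequence. The device name, filesystem, and download URL are placeholders, a sketch rather than a tested procedure:

    # From a rescue disk: partition (e.g. with fdisk), then:
    mkfs -t ext3 /dev/hda1
    mount /dev/hda1 /a

    # Fetch every binary package and unpack it onto the new root:
    wget -r -nd -A '*.cpio.bz2' http://bent.latency.net/bent/
    for p in *.cpio.bz2; do
        bzip2 -d <"$p" | (cd /a && cpio -idm)
    done

    # Make it bootable:
    vi /a/etc/lilo.conf     # create a suitable lilo.conf
    lilo -r /a              # run lilo chrooted to the new root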
From: Bennett T. <be...@ra...> - 2004-03-08 19:56:47

I've not yet implemented it, but for my bent package manager I've been thinking about how I want to do dependency management. I've got some general thoughts, which give me an idea of what I want to implement; I share them in case anyone else finds them interesting. For compactness I'm going to express them as flat statements with no waffling about "in my opinion"; please take such waffling as a global here, implied in everything that follows. Other folks will want to do other things. With that said:

1. Manual expression of dependencies is not interesting. People usually don't bother to do it, and on the rare occasions when they do, they get it wrong oftener than not --- or it breaks when the name or version of the package that provides some resource changes. Conclusion: all dependency management has to be completely automated.

2. Even automated dependency management is generally wrong. The most elaborate effort I've seen to date is made by rpm; it's got a great steaming heap of heuristics, and it still ends up making packages "depend" on e.g. /usr/local/bin/perl (which isn't provided on any normal system) because some example script, that's getting installed in /usr/share/doc/, starts off with such a #! line. And as I mentioned in point 1 above, dependency data gets stale as packages change names. Conclusion: dependency data can be used to provide hints and advice, but it can't be trusted to be right.

Given the above, I'm planning on two kinds of dependency analysis.

I want to generate build-depends data; I'm currently thinking something along the lines of running the build script fragment under strace -efile -f, then analyzing the results to find all files mentioned that were delivered by installed packages, and listing those packages as build-time dependencies of this package. Then, when re-building the package, if such build-time dependencies are available, and if the build fails, use the build-time dependency data to generate suggestions that the builder can try.

The second sort of dependency data will be install-time; for that, I am thinking of a very simple approach: for every file in the install tree, I'll do a strings -a; of the resulting strings, those that are pathnames of files which exist on the current system, delivered by a package, and which would not be delivered by the package being installed, will be used to generate the install dependency data, the list of packages delivering the files named. In this analysis, files in /usr/share/{man,info,doc} will be excluded.

At install time, if an allegedly-depended-on package isn't already present, a message will be printed to stderr, but the installation will still proceed and the exit status won't be affected by any dependency failures. And, of course, the dependency data should be sufficient to allow a driver like "apt" to be written, if desired.

-Bennett
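A minimal sketch of the two analyses just described, assuming the build runs from a fragment file and that file-to-package mapping is done against per-package manifests; the temp paths and the fragment name are illustrative:

    # Build-time: trace file access during the build, then keep the
    # paths it touched for mapping to installed packages.
    strace -efile -f -o /tmp/build.trace sh build-fragment.sh
    sed -n 's/.*("\([^"]*\)".*/\1/p' /tmp/build.trace | sort -u \
        > /tmp/files-touched

    # Install-time: harvest pathname-looking strings from every file
    # in the install tree, keeping those that exist on this system.
    find "$BPM_ROOT" -type f | xargs strings -a | grep '^/' | sort -u |
    while read p; do
        [ -e "$p" ] && echo "$p"
    done > /tmp/paths-referenced
    # Each list would then be mapped to the packages owning those
    # files, excluding /usr/share/{man,info,doc}.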
From: Hui Z. <zh...@wa...> - 2004-03-08 19:48:02

On Mon, Mar 08, 2004 at 01:30:46PM -0500, Bennett Todd wrote:
> bpmbuild filename
>
> then bpmbuild would use wget to download the sources, then it'd
> unpack them, build them, and prepare the binary package. The binary
> package would have all the files that were populated by that "make
> install", plus these three:

How do you determine the files created by make install: with install-log or with an empty prefix directory?

> /var/lib/bpm/nmap-3.50/spec
> /var/lib/bpm/nmap-3.50/src/nmap-3.50.tar.bz2
> /var/lib/bpm/nmap-3.50/sha1
>
> The spec file is a copy of the file you fed to bpmbuild, and the
> src/nmap-3.50.tar.bz2 is a copy of what wget pulled down. The sha1
> is the output of find ... -type f|xargs openssl sha1, a manifest
> with crypto checksums.

Why do you put both the spec and src files and the binary files into the bpm packages? IMHO, if one wants to install the binary, they don't need spec and src, and vice versa. Why don't you distribute two sets of packages: one with only specs, or maybe sources as well, for installation from sources or for preparing the binary packages; the other binary-only, installed with bzip2 and cpio? Am I right that if I am lazy enough I can just obtain the binary part and install a system?

> Yup. My current installer is
>
> bzip2 -d <pkg.cpio.bz2 | (cd / && cpio -idm)
>
> > what's the major difference from rpm?
>
> There's little in common.

Rpm just includes the binary package and a spec for more versatile installation and configuration and dependency tracking. The current bpm is simpler because it uses no custom installation, no configuration, and no dependencies (I haven't read your actual implementation yet; these are my understanding from your description, correct me mercilessly if I am wrong and accept my apology). Have you considered using rpm with an extremely simple installation spec for the binary-only distribution?

> bpm's spec files are vastly simpler.
>
> bpm binary packages are true cpio.bz2s --- no special tools required
> to install them.
>
> The bpm database in /var/lib/bpm/ is simple text files, not the

What does it look like? How did you populate the database if you only install with bzip2|cpio?

> opaque binary databases built with the non-free, proprietary
> Berkeley DB, that rpm puts into /var/lib/rpm/.

That is a legitimate reason.

-Hui
From: Bennett T. <be...@ra...> - 2004-03-08 18:52:48

2004-03-08T12:07:24 Hui Zhou:
> Very interesting. So for your final distribution, is it just a
> collection of binary bpm packages?

Yup. Some day I may whomp up an installer, but really any rescue disk should work fine for that. A minimal load is a linux (kernel), lilo, and busybox. For compact server installs, you can exclude /var/lib/bpm (which installs the spec file and full sources), /usr/include (only needed for compiling stuff, obviously), and the docs /usr/share/{man,info,doc}. And after running lilo, you can remove it if you want. So a running bent linux might have nothing but a kernel and a busybox. Then you can pick and choose what other bits you want.

> [...] and both the source file and spec file are just
> used as tools to produce bpm packages?

Here's a reasonable sample spec file:

pkg nmap-3.50

url http://download.insecure.org/nmap/dist/nmap-3.50.tar.bz2

build \
  tar xjf nmap-3.50.tar.bz2
  cd nmap-3.50
  LDFLAGS='-static -s' CFLAGS=-Os ./configure --prefix=/usr --mandir=/usr/share/man --without-nmapfe --without-openssl
  make
  make prefix=$BPM_ROOT/usr mandir=$BPM_ROOT/usr/share/man install

If you saved that into any file (the name doesn't matter) and ran

bpmbuild filename

then bpmbuild would use wget to download the sources, then it'd unpack them, build them, and prepare the binary package. The binary package would have all the files that were populated by that "make install", plus these three:

/var/lib/bpm/nmap-3.50/spec
/var/lib/bpm/nmap-3.50/src/nmap-3.50.tar.bz2
/var/lib/bpm/nmap-3.50/sha1

The spec file is a copy of the file you fed to bpmbuild, and the src/nmap-3.50.tar.bz2 is a copy of what wget pulled down. The sha1 is the output of find ... -type f|xargs openssl sha1, a manifest with crypto checksums.

If bpmbuild is given a dir as its arg, it expects to find it in the above layout, and if all the needed sources are present in src/ it'll skip the wget step. So after installing the resulting binary package, you can rebuild it with

bpmbuild /var/tmp/bpm/nmap-3.50

> From the file name, it seems the package is a compressed cpio
> archive, [...]

Yup. My current installer is

bzip2 -d <pkg.cpio.bz2 | (cd / && cpio -idm)

> what's the major difference from rpm?

There's little in common.

bpm's spec files are vastly simpler.

bpm binary packages are true cpio.bz2s --- no special tools required to install them.

The bpm database in /var/lib/bpm/ is simple text files, not the opaque binary databases built with the non-free, proprietary Berkeley DB, that rpm puts into /var/lib/rpm/.

At the moment, bpm is implemented with a simple bash script that calls gpath. Once I get around to focusing more on it, bpm will be re-written in perl, and will use custom code for reading and writing cpios; the custom writer will provide programmatic control of the owner of the files in the archive, allowing me to skip a currently-required bpmpkgfix step (that must be run as root) that ends up unpacking and repacking the archive. The dedicated reader will allow very efficient, single-pass pre-extraction and analysis of the spec and sha1 files, as well as the to-be-implemented dependency data, which in turn will allow the installer to offer helpful warnings.

> When you feel comfortable, I would like to be a tester for your
> ``bent'' system. :)

You're welcome to look it over whenever you like. <URL:http://bent.latency.net/bent/> has my current work. If you want to experiment in a chrooted jail, it's ready to go. If you have a scratch system (or a system with a scratch partition) and a rescue disk of some sort you can use to do the initial partitioning/newfsing/bzip2|cpio-installing, you can make a bootstrapping system with as little as busybox+linux, or as much more as you wish.

-Bennett
From: Hui Z. <zh...@wa...> - 2004-03-08 17:23:25

On Mon, Mar 08, 2004 at 10:11:38AM -0500, Bennett Todd wrote:
> Thanks for Cc-ing me into this dialogue.

You are welcome.

> My "bent" linux is based on bpm, the Bent Package Manager. Loosely
> inspired by rpm, it currently has really just one tool, bpmbuild,
> which takes a spec file (as minimalist as possible, in OGDL) and
> creates a binary installable package that's a cpio.bz2. It's still
> in rough prototype form, a rewrite will be needed before it's a
> full-function package management system. I'm focusing now more on
> the packages than on the package manager. The recipes, you might say.

Very interesting. So for your final distribution, is it just a collection of binary bpm packages, and are both the source file and spec file just used as tools to produce bpm packages? From the file name, it seems the package is a compressed cpio archive; what's the major difference from rpm?

> At this point I have a bit over a hundred packages wrapped. If only
> I can find a nice uninterrupted wad of time, I hope to move my
> primary workstation over to bent linux soon; I think I've got
> everything built that I need for day-to-day use.
>
> While bent linux is definitely inspired by LFS, and borrows some
> bits from it, it's certainly not LFS. For starters, I have no glibc;
> instead, I'm using uClibc. With as few exceptions as possible, I'm
> trying to completely avoid dynamic linking; only dev systems need to
> have libc installed at all, and upgrading libc is not scary.
>
> I also have had such unpleasant experiences with i18n that I'm
> disabling it wherever I can in the builds.

When you feel comfortable, I would like to be a tester for your ``bent'' system. :)

-Hui
From: Hui Z. <zh...@wa...> - 2004-03-05 15:22:01

I imported a new module, gadgets, into the cvs repository. The gadgets module is intended to hold many small utility programs that each manage some simple aspect of lfs building. I put a program I just wrote into it: blfsdep. blfsdep extracts package dependency info from a BLFS book and outputs it into an OGDL file; a perl program, depsort, will recursively work out an installation sequence for any BLFS package.

Here is the Readme:

To build: run ``make''.

To install: run ``make install''; by default it installs into /usr/bin. For another location, change PREFIX in the Makefile.

To use: check out the BLFS book. Run ``blfsdep /To/Path/BLFSBOOK/index.xml >dep.txt'' to prepare the dependency file. To find an installation sequence for a package, run ``depsort package''.

depsort supports options:
-r only required dependencies
-o include all optional dependencies
-p prompt for optional dependencies
-v verbosely print messages during the recursive walk

For exact package names, browse dep.txt; it's in OGDL, so very readable (a hypothetical fragment follows after this message).

Note: make install doesn't install depsort; you have to copy it manually. As it uses the OGDL perl module, you may want to check that out from http://ogdl.sourceforge.net/ as well.

-Hui
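To make the dep.txt shape concrete, here is a guess at what one entry and a depsort run might look like; the package names and nesting are illustrative only, not actual blfsdep output:

    gimp
      required
        gtk+
        libart_lgpl
      optional
        libmng
        gimp-print

    depsort -r gimp    # print an installation sequence for gimp,
                       # following required dependencies only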
From: Hui Z. <zh...@wa...> - 2004-03-04 18:02:43

On Thu, Mar 04, 2004 at 05:54:59PM +0000, Rolf Veen wrote:
> We need an algorithm to create a continuously updated
> package URL database (by querying different sources),
> including mirror sites. This database can be a simple
> OGDL file.

A database for every package? I will support such an effort, but mainly as an application for the algorithm. The database could start from all the packages mentioned in the lfs project and grow from there. How do we treat versions? We could list only the latest version, but someone may prefer certain versions (gcc2 for the kernel?). We could list only the URL root, but different versions sometimes reside in different directories.

> URLs should have a location attribute, so that a user,
> by specifying its location, can use the closest mirrors.
> That's another algorithm. Geo-location ?

I am thinking of listing each URL in two parts: one is the common path, including the package file name; the other is a mirror list. Many packages may share a common mirror list. I suspect there is a simple algorithm that can be used to choose the optimum mirror; see the sketch after this message. I live in Maryland, United States, but I often find that mirrors in Europe respond much faster than mirrors here.

-Hui
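One naive algorithm is to measure actual response time rather than guess from geography, which would also explain why a European mirror can beat a nearby one. A sketch, with placeholder mirror URLs and a hypothetical small probe file:

    # mirrors.txt holds one base URL per line, e.g.
    #   http://mirror1.example.org/pub/lfs
    #   http://mirror2.example.org/pub/lfs
    best=''; best_t=999999
    while read m; do
        t0=$(date +%s)
        # fetch a small probe file (hypothetical name) and time it
        wget -q --timeout=10 -O /dev/null "$m/TIMESTAMP" || continue
        t=$(( $(date +%s) - t0 ))
        [ "$t" -lt "$best_t" ] && { best=$m; best_t=$t; }
    done < mirrors.txt
    echo "fastest mirror: $best"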
From: Rolf V. <rol...@he...> - 2004-03-04 17:10:00

Hui Zhou wrote:
> What do you think?

Still thinking. We need an algorithm to create a continuously updated package URL database (by querying different sources), including mirror sites. This database can be a simple OGDL file.

URLs should have a location attribute, so that a user, by specifying his or her own location, can use the closest mirrors. That's another algorithm. Geo-location?

Rolf.
From: Hui Z. <zh...@wa...> - 2004-03-04 17:00:36

Hi, Rolf,

I enjoyed your revised project home page; thanks for your support. A question: why did you take out your pkg project? I would like to see it listed, to encourage more people to join.

-Hui