helidelinux-devel Mailing List for Helide Linux
Status: Abandoned
Brought to you by: rveen
From: Rolf V. <rol...@he...> - 2004-06-01 15:20:24
Bennett Todd wrote:
> 2004-06-01T13:36:05 Rolf Veen:
> > http://www.opengroup.org/pubs/catalog/c701.htm
>
> Sounds interesting --- but I'm not about to give them my name and
> email addr, which they demand, without even offering a privacy
> policy.

I didn't like that either, and thus I would not call this an open
standard. But anyway, it's better than ISO.

> I would question "wider view of what a package system should take
> into account"; it seems to me like it provides a subset of the
> functionality of rpm, while being more complex.

After re-reading the spec, I think it is indeed unnecessarily complex,
but it is good input if we ever try to make yet another package
manager. And I'm going to :-).

Rolf.
From: Bennett T. <be...@ra...> - 2004-06-01 14:04:03
2004-06-01T13:36:05 Rolf Veen:
> http://www.opengroup.org/pubs/catalog/c701.htm

Sounds interesting --- but I'm not about to give them my name and
email addr, which they demand, without even offering a privacy policy.

> It is a wider view of what a package system should take
> into account. There is an implementation:
>
> http://swbis.sourceforge.net/

Thanks for that ptr; I was at least able to get a feel for the
taxonomy of tags and whatnot without having to register with The Open
Group :-). I would question "wider view of what a package system
should take into account"; it seems to me like it provides a subset of
the functionality of rpm, while being more complex.

-Bennett
From: Rolf V. <rol...@he...> - 2004-06-01 11:38:39
An interesting document is the CAE C701 specification about
distributed software administration and packaging. It can be found
here:

    http://www.opengroup.org/pubs/catalog/c701.htm

It is a wider view of what a package system should take into account.
There is an implementation:

    http://swbis.sourceforge.net/

It is curious to see how similar the file formats are to OGDL.

Cheers.
Rolf.
From: <ben...@id...> - 2004-05-25 07:48:53
Dear Open Source developer,

I am doing a research project on "Fun and Software Development", in
which I kindly invite you to participate. You will find the online
survey at http://fasd.ethz.ch/qsf/. The questionnaire consists of 53
questions and will take about 15 minutes to complete.

With the FASD project (Fun and Software Development) we want to
determine the motivational significance of fun when software
developers decide to engage in Open Source projects. What is special
about our research project is that a similar survey is planned with
software developers in commercial firms. This allows a direct
comparison between the individuals involved in, and the conditions of
production of, these two development models. We thus hope to obtain
substantial new insights into the phenomenon of Open Source
development.

With many thanks for your participation,
Benno Luthiger

PS: The results of the survey will be published at
http://www.isu.unizh.ch/fuehrung/blprojects/FASD/. We have set up the
mailing list fa...@we... for this study. Please see
http://fasd.ethz.ch/qsf/mailinglist_en.html for registration to this
mailing list.

_______________________________________________________________________
Benno Luthiger
Swiss Federal Institute of Technology Zurich
8092 Zurich
Mail: benno.luthiger(at)id.ethz.ch
_______________________________________________________________________
From: Bennett T. <be...@ra...> - 2004-05-13 11:44:56
2004-05-13T08:50:23 Rolf Veen:
> It is so nice to be free from schemas and to be able to
> add fields on the fly :-).

For _sure_, that freedom has been something I've been adamant about
since well before you came strolling into my life and lured me away to
this OGDL thingie :-). Before I was doing OGDL, I was specifying my
packaging spec files in Lua <URL:http://www.lua.org/>. They looked a
_lot_ like my current OGDL spec files, only with a bit more
typographic noise. Instead of:

    pkg tla-1.2
    url http://ftp.gnu.org/gnu/gnu-arch/tla-1.2.tar.gz
    build \
        tar xzf tla-1.2.tar.gz
        cd tla-1.2
        cd src
        mkdir =build
        cd =build
        ../configure --prefix=/usr --destdir=$BPM_ROOT --with-cc='gcc -Os -s -static'
        make
        make test
        make install

it would have been:

    pkg = "tla-1.2";
    url = "http://ftp.gnu.org/gnu/gnu-arch/tla-1.2.tar.gz";
    build = [[
        tar xzf tla-1.2.tar.gz
        cd tla-1.2
        cd src
        mkdir =build
        cd =build
        ../configure --prefix=/usr --destdir=$BPM_ROOT --with-cc='gcc -Os -s -static'
        make
        make test
        make install
    ]];

I like OGDL, no question it's still cleaner and sweeter, but the Lua
version didn't half suck :-). And it was every bit as open-ended and
schema-free.

-Bennett
From: Rolf V. <rol...@he...> - 2004-05-13 06:53:13
It is so nice to be free from schemas and to be able to add fields on
the fly :-).

Rolf.

Bennett Todd wrote:
> Still haven't gotten around to the major todos (custom cpio writer
> to give control over perms, removing need for bpmpkgfix; then
> automated dependency analysis), but I just decided to add an
> optional tag somewhat in the flavour of rpm's "Group". I call it
> "isa", and give it a list of keywords (whose order has no intended
> meaning). I just used it to tag all the emacses I've built, since
> I went on a binge, and with the tags in place it was a couple of
> simple shell loops to compute:
>
>     12938 e3em
>     56296 ee
>     56604 emt
>     86816 elle
>    133144 ved
>    158136 jove
>    222636 microemacs
>    281556 zile
>    420648 jed
>
> Those are file sizes in bytes for -Os -static -s compilations
> against uClibc.
>
> -Bennett
From: Bennett T. <be...@ra...> - 2004-05-10 12:07:41
Still haven't gotten around to the major todos (custom cpio writer to
give control over perms, removing need for bpmpkgfix; then automated
dependency analysis), but I just decided to add an optional tag
somewhat in the flavour of rpm's "Group". I call it "isa", and give it
a list of keywords (whose order has no intended meaning). I just used
it to tag all the emacses I've built, since I went on a binge, and
with the tags in place it was a couple of simple shell loops to
compute:

     12938 e3em
     56296 ee
     56604 emt
     86816 elle
    133144 ved
    158136 jove
    222636 microemacs
    281556 zile
    420648 jed

Those are file sizes in bytes for -Os -static -s compilations against
uClibc.

-Bennett
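For the curious, the "couple of simple shell loops" could look
something like the sketch below. The package-database layout here
($PKGDB/<name>/isa plus a files/ payload directory) is invented for
illustration; the mail doesn't show how bpm actually stores its
metadata, so treat every path as a stand-in.

```shell
#!/bin/sh
# Hypothetical layout, not bpm's real one: each package keeps an "isa"
# keyword list under $PKGDB/<name>/isa and its payload under files/.
PKGDB=${PKGDB:-/tmp/pkgdb-demo}

# Build a tiny fake package database so the loop has something to chew on.
for p in zile jed; do
    mkdir -p "$PKGDB/$p/files"
    echo "editor emacs" > "$PKGDB/$p/isa"
done
printf '%281556s' '' > "$PKGDB/zile/files/zile"  # 281556-byte placeholder
printf '%420648s' '' > "$PKGDB/jed/files/jed"    # 420648-byte placeholder

# The loops themselves: pick packages tagged "editor", total the bytes
# of each one's payload, and sort the report by size.
for p in "$PKGDB"/*; do
    grep -qw editor "$p/isa" || continue
    size=$(wc -c "$p"/files/* | awk 'END { print $1 }')
    printf '%8s %s\n' "$size" "$(basename "$p")"
done | sort -n
```

With real packages, only the fixture section at the top would change;
the reporting loop stays the same.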
From: Rolf V. <rol...@he...> - 2004-04-01 16:50:56
Hui Zhou wrote:
> > My (old) idea was to create a /data directory and a dpath command
> > so that we could have: ...
>
> I don't see the reason why you have to put everything into a
> specific folder. The root directory info can be the root node of the
> path construction, so dpath will always know where to look.

I'm thinking about a clean namespace, not depending on the particular
system design: you may like to put all system data in /etc, I may like
to have package info in /pkg/<package>/data, and so on. For each
particular system design, a set of rules would apply.

> I am thinking about that. There must be some scheme so dpath can
> easily tell whether a text file is an OGDL file and whether a binary
> is safe to execute and will produce an OGDL stream. Otherwise, how
> can it keep from executing the wrong binary and crashing the system
> or corrupting the data?

dpath should only access OGDL directory trees, not just any directory
(that functionality can be left for gpath). I'm thinking about these
sub-namespaces for my implementation:

    system.*         -> $SYSDATA/*
    pkg.<package>.*  -> $PKG/<package>/$DATADIR/*
    user.*           -> $HOME/$DATADIR/*
    users.<user>.*   -> /home/<user>/$DATADIR/*

Anything in these trees should be 'dpath compatible'.

> And as an alternative, how does a database compare with the
> registry?

In terms of what? The registry uses no additional database so as to
minimize dependencies and gain simplicity and editability. For large
data sets, databases can be faster.

> What is ACL?

ACL stands for Access Control List. It allows finer-grained security.
A good explanation is:

    http://www.suse.de/~agruen/acl/chapter/fs_acl-en.pdf

Rolf.
From: Hui Z. <zh...@wa...> - 2004-04-01 16:19:29
On Thu, Apr 01, 2004 at 05:01:31PM +0000, Rolf Veen wrote:
> My (old) idea was to create a /data directory and
> a dpath command so that we could have:

I don't see the reason why you have to put everything into a specific
folder. The root directory info can be the root node of the path
construction, so dpath will always know where to look.

> # dpath system.hostname
> # dpath pkg.apache.conf.DocRoot
>
> The dpath would map paths to directory/OGDL files inside
> /data. It also could dive into executables, if it finds
> one along the path (if a path element corresponds to an
> executable, assume it will print OGDL to stdout, and
> continue from there).

I am thinking about that. There must be some scheme so dpath can
easily tell whether a text file is an OGDL file and whether a binary
is safe to execute and will produce an OGDL stream. Otherwise, how can
it keep from executing the wrong binary and crashing the system or
corrupting the data?

> There are two approaches to get the data:
>
> 1) Publish the data that you want to be accessible through
>    dpath in /data (by means of a symlink, for example).
>    Example: system.conf -> /data/system/conf/
>
> 2) Filter some paths to different system locations, as in:
>    Example: users.nobody -> /home/nobody/data/
>    Example 2: user.mail.conf -> $HOME/data/mail/conf
>
> The second approach only needs a series of rules in dpath,
> while the first needs the package manager to take care of
> placing the appropriate symlinks into the /data space.

I don't quite like the idea of creating extra non-standard
directories. The 'data' can serve all sorts of purposes: config data
should go to /etc, user-specific data should go to home directories.
Maybe you are actually thinking of collecting all data into a central
place, but my opinion is not to impose extra rules unless necessary.
With the above-mentioned API, whether the data sits in a single
directory or is spread out, it will work under a universal API. As an
example:

    # dpath /etc/system.hostname      or  dpath etc.system.hostname
    # dpath /home/nobody/mail.conf    or  dpath home.nobody.mail.conf

And as an alternative, how does a database compare with the registry?

> Regarding security, I guess we should leave that to the
> file system (which after all probably has ACL as an option).

What is ACL?

-Hui
From: Hui Z. <zh...@wa...> - 2004-04-01 15:58:46
On Thu, Apr 01, 2004 at 04:05:36PM +0100, Marcus Furlong wrote:
> > Your plan is to change all configuration files to ogdl?
> >
> > I'm not so optimistic :-).
>
> I was wondering! I thought you were going to patch all the system
> utils to output ogdl (still not a bad idea tho..), and convert
> existing configuration files to ogdl.

I think we can start by patching a few major ones (inittab, fstab...
up to everything used in an LFS base). The idea is experimentation and
demonstration. Once a robust API exists and there is a good
demonstration of how it works, I expect more package maintainers will
start to consider using this config facility. But frankly, I am not
quite certain of the benefits of a central registry yet; I just have
enough interest to experiment. I will go and see.

> At the moment, I'm in the process of converting my existing lfs
> buildscripts to use ogdl package descriptors. However, looking at
> the registry program, I'm wondering if all package information could
> likewise be put into a registry file. While not as pleasing to the
> eye, I'm sure a utility will be written to extract the info and make
> it more presentable.

I believe a config file in OGDL will be very pleasing to the eye.

> Anyway, it seems to me that the two projects (helide and registry)
> are attacking the same problem. How would you see them working
> together?

I don't think Rolf or I will ever give up using OGDL. I will
definitely look into the registry project from time to time and may
steal some ideas; whether the registry project would like to work
together is entirely up to its author.

Hui
From: Rolf V. <rol...@he...> - 2004-04-01 15:03:08
My (old) idea was to create a /data directory and a dpath command so
that we could have:

    # dpath system.hostname
    # dpath pkg.apache.conf.DocRoot

The dpath would map paths to directory/OGDL files inside /data. It
also could dive into executables, if it finds one along the path (if a
path element corresponds to an executable, assume it will print OGDL
to stdout, and continue from there).

There are two approaches to get the data:

1) Publish the data that you want to be accessible through dpath in
   /data (by means of a symlink, for example).
   Example: system.conf -> /data/system/conf/

2) Filter some paths to different system locations, as in:
   Example: users.nobody -> /home/nobody/data/
   Example 2: user.mail.conf -> $HOME/data/mail/conf

The second approach only needs a series of rules in dpath, while the
first needs the package manager to take care of placing the
appropriate symlinks into the /data space.

Regarding security, I guess we should leave that to the file system
(which after all probably has ACL as an option).

Rolf.
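The lookup described above is small enough to sketch. No dpath
implementation exists in this thread, so the data root, the file
contents, and the helper name below are all demo fixtures, not the
real tool:

```shell
#!/bin/sh
# Sketch: translate a dotted path (a.b.c) into a file under a /data-style
# root and print it; an executable node is run instead, on the assumption
# that it prints OGDL to stdout.
DPATH_ROOT=${DPATH_ROOT:-/tmp/data-demo}

# Fake /data tree so the lookup below has something to resolve.
mkdir -p "$DPATH_ROOT/system"
echo "helide1" > "$DPATH_ROOT/system/hostname"

dpath() {
    node=$DPATH_ROOT/$(echo "$1" | tr '.' '/')
    if [ -x "$node" ] && [ ! -d "$node" ]; then
        "$node"          # executable node: run it, trust it to emit OGDL
    else
        cat "$node"      # plain node: just print the stored value
    fi
}

dpath system.hostname    # prints: helide1
```

The symlink approach (1) needs no extra code at all here: a symlink
placed inside $DPATH_ROOT resolves transparently through the same
function.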
From: Rolf V. <rol...@he...> - 2004-04-01 11:15:22
Hui Zhou wrote:
> I just spotted this project on freshmeat:
> http://registry.sourceforge.net/
>
> It tries to wrap all config files into a single API. I am thinking
> along the same lines but using OGDL. After skimming through his
> document, I found that he uses a very similar path construction
> (e.g. a.b.c.d) to access the keys, which makes it only natural to
> use OGDL. However, he uses the directory tree to store the data (one
> file for each value) and I speculate that most senior unix
> administrators. By adding the ability to mix directories with OGDL
> files, we could either use a single registry file (like MS Windows
> does), or use one file for each application (as currently done in
> Linux, but all in the same directory), or any level of directories
> down to one file per key, all through a single OGDL API.

My thought was that a future dpath command would do exactly that. But
the registry project has one advantage, and that is security: each
key-value pair has its own. On the other side, my question is: is the
Unix security model good enough, or should we rather think about ACLs?

Rolf.
From: Rolf V. <rol...@he...> - 2004-03-19 16:56:03
Hui Zhou wrote:
> On Tue, Mar 16, 2004 at 02:47:14PM +0000, Rolf Veen wrote:
>
> > > PKG -c <start|stop|status> name
> > > PKG -d name    Return configuration (in OGDL)
>
> To implement them in a package manager, it needs to understand each
> package, such as which program is a daemon, how to translate all
> types of conf file back and forth to ogdl, and what actions each
> package is capable of. In some respects, the package manager will
> never be complete (too many package varieties to take care of).

No, a package would be responsible for providing the appropriate
interface. For example, in my case each package lives in its own
directory below /pkg. The system-wide Apache httpd is in /pkg/apache.
The control script is /pkg/apache/META/bin/control. When doing:

    # pkg -c start apache

the only thing that happens is that, if a control file is present, it
is executed with the 'start' argument. Simple.

> On the other hand, the package manager can provide a common gateway.
> Each package provides a tool with a common interface, and the
> package manager talks to each package using this interface.
> Basically, each package knows how to take care of itself. But I am
> afraid this won't happen until this sort of package manager is well
> established.

Until then, we can supply patches to some packages.

Rolf.
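The dispatch described above is a one-liner at heart. The
/pkg/<name>/META/bin/control path follows the example in the mail,
while the demo package created here is an invented fixture:

```shell
#!/bin/sh
# Sketch of `pkg -c <action> <name>`: if the package ships a control
# script, execute it with the action as its argument; otherwise do nothing.
PKGROOT=${PKGROOT:-/tmp/pkg-demo}

# Invented fixture: a fake apache package with a trivial control script.
mkdir -p "$PKGROOT/apache/META/bin"
cat > "$PKGROOT/apache/META/bin/control" <<'EOF'
#!/bin/sh
echo "apache: $1"
EOF
chmod +x "$PKGROOT/apache/META/bin/control"

pkg_control() {
    action=$1
    ctl=$PKGROOT/$2/META/bin/control
    [ -x "$ctl" ] && "$ctl" "$action"   # no control file -> nothing to do
}

pkg_control start apache   # prints: apache: start
```

The point of the design is that the package manager stays ignorant of
daemons, config formats, and per-package quirks; all of that lives in
the package's own control script.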
From: Bennett T. <be...@ra...> - 2004-03-18 16:31:34
2004-03-17T16:05:58 Bennett Todd:
> FYI, I'm doing a full rebuild now, with openssl-0.9.7d. I'll
> upload the bpms that result later today.

Something came up and held me up; the upload has only just completed.

-Bennett
From: Bennett T. <be...@ra...> - 2004-03-17 16:06:18
2004-03-08T18:30:46 Bennett Todd:
> 2004-03-08T12:07:24 Hui Zhou:
> > When you feel comfortable, I would like to be a tester for your
> > ``bent'' system. :)
>
> You're welcome to look it over whenever you like.
> <URL:http://bent.latency.net/bent/> has my current work.

FYI, I'm doing a full rebuild now, with openssl-0.9.7d. I'll upload
the bpms that result later today.

-Bennett
From: Bennett T. <be...@ra...> - 2004-03-16 21:24:46
2004-03-16T20:46:27 Hui Zhou:
> I am thinking that some dependencies, such as bash and libc, are
> just so common; is it worth listing them at all?

This gets back to a statement I made earlier: I, _myself_, am
completely uninterested in manually-created or manually-maintained
dependency data. It's routinely wrong, in my experience. Heuristic
data won't be completely correct in every way, but I think it'll be
more useful than hand-maintained data. So I'm not building any support
for manual dependency assertion or checking, only whatever can be
completely automated. I'm also not interested in ad-hoc hacks, like
e.g. automatically computing build-depends with strace and then
pruning some arbitrarily-defined "base package list" out of it.

> Here is an idea: always bundle some essential packages together
> and offer them as a base system; thus many packages' dependencies
> may automatically be solved.

Red Hat does that. A minimal Red Hat base system is something over
100MB these days. My minimal system is a kernel + Busybox; lilo can be
removed after it's been run, or could be run from a rescue disk. My
vmlinuz-2.4.25 is 1.2MB, Busybox is 675KB, so my base system is under
2MB. Of course it's not a development system. But some of my bpm
packages, perhaps even most, would (I suspect) build Ok without bash
installed, using the ash from Busybox.

> One may prepare a few differently flavored base packages so users
> have choices. The base packages also can't be uninstalled, only
> upgraded.

Other distros have taken more or less that approach; I'm still
enraptured by the simplicity of "a system can be as little as a
kernel + Busybox, plus any other bits you want, pick and choose".
Perhaps I'll get over it :-).

-Bennett
From: Hui Z. <zh...@wa...> - 2004-03-16 20:45:30
On Tue, Mar 16, 2004 at 08:32:10PM +0000, Bennett Todd wrote:
> I'm planning on tackling it when I add dependency management to bpm,
> by ensuring that the build script fragment is run under something
> like "strace -f -eexecve -efile" or thereabouts, and including build
> dependencies on every package containing any file referenced in the
> strace output. The result won't be true build dependencies (which I
> can't think of any way of robustly automating); e.g. a lot of
> packages on my build system will come up with dependencies against
> coreutils and bash that would be satisfied as well by Busybox.
> Instead, the dependencies I'll compute will (a) document _exactly_
> how the package was built, for reproducibility, and (b) be useful
> for offering suggestions about what might help if a rebuild attempt
> fails.

I am thinking that some dependencies, such as bash and libc, are just
so common; is it worth listing them at all?

Here is an idea: always bundle some essential packages together and
offer them as a base system; thus many packages' dependencies may
automatically be solved. One may prepare a few differently flavored
base packages so users have choices. The base packages also can't be
uninstalled, only upgraded.

-Hui Zhou
From: Bennett T. <be...@ra...> - 2004-03-16 20:32:18
2004-03-16T19:35:59 Hui Zhou:
> BTW, as the work in the LFS project indicates, the host toolchain
> can play an important role; how do Red Hat or Debian manage this
> problem?

As far as I know, current package managers simply ignore this problem.
Instead, they use their own distro, and ignore the [typically minor]
updates in the toolchain over the life of a single major build. E.g.
rpm documents the build date and build host, and leaves actual
documenting of the details of the build chain to the users' ability to
query what that system was running at that time. In practice, most
people segregate binary packages into e.g. Red Hat 9, RHEL3, Fedora
Core 1, etc., and keep them consistent with their build chains, the
build chains being those that came with that release of Linux. I
suspect Debian does much the same.

I'm planning on tackling it when I add dependency management to bpm,
by ensuring that the build script fragment is run under something like
"strace -f -eexecve -efile" or thereabouts, and including build
dependencies on every package containing any file referenced in the
strace output. The result won't be true build dependencies (which I
can't think of any way of robustly automating); e.g. a lot of packages
on my build system will come up with dependencies against coreutils
and bash that would be satisfied as well by Busybox. Instead, the
dependencies I'll compute will (a) document _exactly_ how the package
was built, for reproducibility, and (b) be useful for offering
suggestions about what might help if a rebuild attempt fails.

-Bennett
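The heuristic sketched in that paragraph can be shown end to end:
trace a build step, collect every path that appears in the trace, and
intersect it with a file-to-package ownership table. Everything below
is a stand-in: bpm's real database isn't shown in the thread, the
ownership table is invented, and the sketch falls back to a canned
trace line when strace isn't available or ptrace is blocked.

```shell
#!/bin/sh
# Sketch of strace-based build-dependency capture (stand-in data throughout).
WORK=${WORK:-/tmp/strace-dep-demo}
mkdir -p "$WORK"

# Hypothetical "<file> <package>" ownership table.
cat > "$WORK/owners" <<'EOF'
/bin/sh shell
/usr/bin/sh shell
EOF

# Trace a trivial "build" step; fall back to a canned line if strace is
# unusable here, so the pipeline below still has input.
if command -v strace >/dev/null 2>&1; then
    strace -f -e trace=execve -o "$WORK/trace" sh -c true 2>/dev/null || true
fi
[ -s "$WORK/trace" ] ||
    echo 'execve("/bin/sh", ["sh", "-c", "true"], 0x0) = 0' > "$WORK/trace"

# Every quoted string in the trace is a candidate path the build touched.
grep -o '"[^"]*"' "$WORK/trace" | tr -d '"' | sort -u > "$WORK/touched"

# A touched file owned by some package becomes a recorded build dependency.
while read -r file pkg; do
    grep -qx "$file" "$WORK/touched" && echo "build-depends: $pkg"
done < "$WORK/owners" | sort -u
```

As the mail notes, the result documents how the build actually ran
rather than what it minimally requires: a coreutils hit here would not
prove Busybox couldn't have satisfied the same need.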
From: Hui Z. <zh...@wa...> - 2004-03-16 19:35:03
Hi Bennett, thank you for the rich information; I will look into rpm
when I get time.

On Tue, Mar 16, 2004 at 05:30:43PM +0000, Bennett Todd wrote:
> > The minimum I believe is just install and uninstall.
>
> That's definitely not sufficient; a software package manager must
> also offer inventory (what packages are installed), dependency

Using lfsbuilder to install or uninstall all the packages will provide
the inventory with no extra effort. Dependency, as we discussed, could
be divided into build dependencies and run-time dependencies. Build
(or installation) dependencies are something the installer
(lfsbuilder) should consider; I will have to work on that even if I
don't want to implement a package manager. Run-time dependencies are
something the uninstaller needs to consider: when uninstalling a
package, it may break other packages' run-time dependencies, and thus
has to raise the user's attention.

> management of some sort (what packages does this package depend on,
> at a minimum), integrity checking (are the files in this package

I am planning on a package fetcher (works through the internet). If a
signature file or MD5 sum is available along with the package, the
fetcher is responsible for checking the integrity. For custom binary
packages such as the bent distribution, once the fetcher understands
where the signature can be retrieved, it checks the binary package
also. As for individual files of an installed package, install-log can
help.

> intact on the system?), and pre/post install/remove scripts.

Most are system configurations (am I right?); these are nasty.

> To look at the set of features a software packaging tool must
> support to compete with rpm, consider some tasks that it must
> assist in:
>
> - Easily wrapping new packages

I am most interested in installation directly from source, but when
you think about it, instead of installing, one can wrap into packages
with a slight twist. The catch is selecting a good package format.

BTW, as the work in the LFS project indicates, the host toolchain can
play an important role; how do Red Hat or Debian manage this problem?

> - Easily updating existing packages to track new versions of the
>   up-stream software
>
> - Easily wrapping packages that apply patches to the upstream
>   software, either to fix bugs quicker than the upstream integrates
>   the fixes, or to adjust the package to build the way you want it
>   on your dev platform
>
> - Assist in automating complex package installations; it should be
>   possible to build something like "apt", that can be pointed at a
>   package repository, and automatically install all prerequisites
>
> - Automate installing systems --- think Sun's Jumpstart, Red Hat's
>   kickstart. A fully automatic system build to a "profile" (defined
>   as a package list) is vital for enterprise computing. Updating the
>   build to track updates to packages must be easy.
>
> - Automate delivery of patches and updates; the update process must
>   be so robust and safe that it can be automated over tens of
>   thousands of machines with a near-zero failure rate and clear
>   error reporting.
>
> - Automate inventory of system configuration; with simple scripting
>   you should be able to populate an RDBMS with every version of
>   every package installed on every system in a large enterprise

Good digest.

-Hui Zhou
From: Bennett T. <be...@ra...> - 2004-03-16 17:30:52
2004-03-15T21:31:14 Hui Zhou:
> I have no experience with package managers.

I have quite a bit; I'll try and share some :-). But note a Great Big
Disclaimer: different folks, with different experience with different
package managers used to do different jobs, come out with different
opinions; Debian's dpkg, the BSD "ports" facility, the package
managers for Gentoo and Sourcerer, and I'm sure any number of other
software package management strategies all have their own following.

> My first linux system is redhat, [...]

Not a bad exemplar, in my opinion; I think rpm is the most capable
software package manager currently in existence. I also think it's
grown a bit overripe, long in the tooth. That's why I'm implementing
my own software package manager.

> [...] installed and never used because I don't know how it was
> managed.

I can strongly recommend you at least skim, if not read, the book
Maximum RPM, available from the rpm web site at
<URL:http://www.rpm.org/>. Even if you end up going in a different
direction, that book is accessible documentation of all the major
capabilities of one of the best software package managers in existence
today.

> Then I tried linux from scratch, and now that's the only
> thing I know how to work with.

LFS is a _lovely_ place to learn how to build Linux systems, but plain
LFS has no software package management at all.

> I am thinking the automated lfsbuilder may finally develop into a
> package manager also.

Could be. If you try and grow it into a package management system, I
think you'll lose the lovely density you've got, with so few lines of
config describing the complete build. But I may be wrong!

> What is generally expected for a package manager?

"Generally" can't be answered; different people have different
expectations. I'll go into this more below.

> The minimum I believe is just install and uninstall.

That's definitely not sufficient; a software package manager must also
offer inventory (what packages are installed), dependency management
of some sort (what packages does this package depend on, at a
minimum), integrity checking (are the files in this package intact on
the system?), and pre/post install/remove scripts. Modern software
package managers also undertake to provide build reproducibility, with
automated package builds from sources and some provision for build
dependency management.

The first software package management system I know of was the System
V Packaging Tools. They completely ignored the package building
automation side of things, they had weak integrity checking, and they
had quite poor performance, but nonetheless they work well enough to
make Sun's Jumpstart a sound tool for enterprise use, and I've used
them to build enterprise system configuration management, with
automatic software inventory and system rebuilding and automated
update deployment (at a previous job, work for hire, I don't have the
code, it's not open source).

rpm was built to tackle a harder problem; Red Hat supports a rich
distro, with packages drawn from diverse upstream sources, across
multiple architectures. When I got started with Red Hat, they were
supporting i386 and Alpha. They update pretty aggressively, it's not
hard to stay fairly up to date with Red Hat, and they automated both
system builds and system updates. rpm grew to support their quite
demanding needs.

The only other software package management system I've looked at
closely that tackles the same problem space so well is Debian's dpkg,
and it has (or at least had, when last I looked at it) two features
that put me off, left me favouring Red Hat. First, the sources dpkg
works from aren't the virgin upstream sources; instead, they're a
repackaging of the upstream tarball, and there's no place where the
URL of the upstream source is automatically and reliably available.
And second, you couldn't build a binary package from a source package
as a non-root user.

To look at the set of features a software packaging tool must support
to compete with rpm, consider some tasks that it must assist in:

- Easily wrapping new packages

- Easily updating existing packages to track new versions of the
  up-stream software

- Easily wrapping packages that apply patches to the upstream
  software, either to fix bugs quicker than the upstream integrates
  the fixes, or to adjust the package to build the way you want it on
  your dev platform

- Assist in automating complex package installations; it should be
  possible to build something like "apt", that can be pointed at a
  package repository, and automatically install all prerequisites

- Automate installing systems --- think Sun's Jumpstart, Red Hat's
  kickstart. A fully automatic system build to a "profile" (defined as
  a package list) is vital for enterprise computing. Updating the
  build to track updates to packages must be easy.

- Automate delivery of patches and updates; the update process must be
  so robust and safe that it can be automated over tens of thousands
  of machines with a near-zero failure rate and clear error reporting.

- Automate inventory of system configuration; with simple scripting
  you should be able to populate an RDBMS with every version of every
  package installed on every system in a large enterprise

When you go through those cases, you'll find that many, perhaps most,
of the features of rpm get their motivation. I don't begrudge rpm its
rich feature set; if I don't abandon the project, bpm will grow a
comparable one. I believe (I may be proven wrong) that a complete
redesign can make it possible to deliver the features on a much, much
simpler implementation; rpm "just grew", its requirements were
discovered as its implementation evolved. This is understandable in a
new prototype, but the end result is not likely to be a clean
implementation. A complete reimplementation from scratch can do much,
much better. Sendmail -> {qmail, postfix} is a clear example; the two
modern MTAs are vastly faster, more robust, more secure, and simpler
to configure and manage than sendmail. Sendmail evolved as the
requirements of an internet MTA were discovered; in fact, it pioneered
so many features that it's the driver for the required feature set.

-Bennett
From: Hui Z. <zh...@wa...> - 2004-03-16 14:41:26
On Tue, Mar 16, 2004 at 02:47:14PM +0000, Rolf Veen wrote:
> > > PKG -c <start|stop|status> name
> > > PKG -d name    Return configuration (in OGDL)
> > > PKG -e expression name
> > >                Modify configuration
> > > PKG --target name

These are more application-specific operations. Wouldn't it be better,
or simpler, to just use the application-specific tools? We can use a
separate program to just manage the services (though it should be able
to talk to the package manager to check the installation status). As
for application-specific configuration, the application itself
understands its configuration better.

To implement them in a package manager, it needs to understand each
package: which program is a daemon, how to translate all types of conf
file back and forth to OGDL, and what actions each package is capable
of. In some respects, the package manager will never be complete (too
many package varieties to take care of). On the other hand, the
package manager can provide a common gateway. Each package provides a
tool with a common interface, and the package manager talks to each
package using this interface. Basically, each package knows how to
take care of itself. But I am afraid this won't happen until this sort
of package manager is well established.

> I'm leaning toward using the same tool but with different names and
> different exposed functionality, 'pkg' and 'src'. The first one
> oriented to package management in a running system, and the second
> oriented to building a system.

I see.

-Hui Zhou
From: Rolf V. <rol...@he...> - 2004-03-16 13:50:35
Hui Zhou wrote:
> > Control:
> >
> >   PKG -c <start|stop|status> name
>
> For services?

Yes.

> > Configuration:
> >
> >   PKG -d name    Return configuration (in OGDL)
> >   PKG -e expression name
> >                  Modify configuration
>
> A specific example?

PKG -d apache could return http_conf.g.

PKG -e "DocumentRoot = /var/www2" apache would change that attribute.

> > Additional operations:
> >
> >   PKG --target name
> >       Execute a task defined in a targets.g file.
>
> A specific example?

You do not know in advance what functions a package will need, besides
some standard ones. It is easy to supply a file with embedded scripts
(or a directory with scripts) for each package that needs extra
functions. For example, a package named pop3d wants to specify a
function that permits printing a summary of disk usage:

    pkg --summary pop3d

> Before installation, deal with source packages; after installation,
> deal with the installed binary. But I think it can use binary
> packages also. The installation of a binary is simpler compared to
> source packages.

I'm leaning toward using the same tool but with different names and
different exposed functionality, 'pkg' and 'src'. The first one
oriented to package management in a running system, and the second
oriented to building a system.

Rolf.
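Pulling the interface from this thread together, the front-end
dispatch itself is tiny. All the bodies below are echo stubs, since
the thread defines only the interface, not an implementation:

```shell
#!/bin/sh
# Stubbed front-end for the PKG interface discussed in this thread.
pkg() {
    op=$1; shift
    case $op in
        -i)  echo "install: $1" ;;
        -u)  echo "uninstall: $1" ;;
        -q)  echo "query: $1" ;;
        -c)  echo "control: $1 $2" ;;        # <start|stop|status> name
        -d)  echo "dump config (OGDL): $1" ;;
        -e)  echo "apply '$1' to $2" ;;      # expression name
        -l)  echo "list matching: $1" ;;
        --*) echo "run target '${op#--}' for $1" ;;  # from targets.g
        *)   echo "usage: pkg -i|-u|-q|-c|-d|-e|-l|--<target> ..." >&2
             return 2 ;;
    esac
}

pkg -c status apache                       # prints: control: status apache
pkg -e "DocumentRoot = /var/www2" apache
pkg --summary pop3d                        # prints: run target 'summary' for pop3d
```

Note how the --* catch-all gives the open-ended "targets.g" behavior
for free: any unrecognized long option is treated as a per-package
task name.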
From: Hui Z. <zh...@wa...> - 2004-03-16 13:24:45
On Tue, Mar 16, 2004 at 12:47:32PM +0000, Rolf Veen wrote:
> > PKG -i name
> > PKG -u name
> >
> > maybe
> > PKG -q name (querying the package info)
>
> Control:
>
>   PKG -c <start|stop|status> name

For services?

> Configuration:
>
>   PKG -d name    Return configuration (in OGDL)
>   PKG -e expression name
>                  Modify configuration

A specific example?

> List:
>
>   PKG -l pattern
>       List installed packages

Got it.

> Additional operations:
>
>   PKG --target name
>       Execute a task defined in a targets.g file.

A specific example?

> This refers to binary packages. Sources have a
> different life cycle.

Before installation, deal with source packages; after installation,
deal with the installed binary. But I think it can use binary packages
also. The installation of a binary is simpler compared to source
packages.

-Hui Zhou
From: Rolf V. <rol...@he...> - 2004-03-16 11:49:01
Hui Zhou wrote:
> What is generally expected for a package manager?
>
> The minimum I believe is just install and uninstall.
>
> I would like to see a picture how it works: Given the package
> manager name PKG, the basic function would be:
>
> PKG -i name
> PKG -u name
>
> maybe
> PKG -q name (querying the package info)

Control:

    PKG -c <start|stop|status> name

Configuration:

    PKG -d name             Return configuration (in OGDL)
    PKG -e expression name  Modify configuration

List:

    PKG -l pattern          List installed packages

Additional operations:

    PKG --target name       Execute a task defined in a targets.g file.

This refers to binary packages. Sources have a different life cycle.

Rolf.
From: Hui Z. <zh...@wa...> - 2004-03-16 07:00:47
I have no experience with package managers. My first Linux system was
Red Hat, installed and never used because I didn't know how it was
managed. Then I tried Linux From Scratch, and now that's the only
thing I know how to work with. I am thinking the automated lfsbuilder
may finally develop into a package manager also.

What is generally expected from a package manager? The minimum, I
believe, is just install and uninstall. I would like to see a picture
of how it works: given the package manager name PKG, the basic
functions would be:

    PKG -i name
    PKG -u name

and maybe:

    PKG -q name    (querying the package info)

What else? (Not limited to the minimal set; what fancy features are
good to have?)

-Hui