helidelinux-devel Mailing List for Helide Linux (Page 3)
Status: Abandoned
Brought to you by: rveen
2004: Mar (42) | Apr (5) | May (4) | Jun (3)
From: Hui Z. <zh...@wa...> - 2004-03-04 15:04:54
Hi there. I am writing a profile in OGDL for building packages from source. To make editing the profile easier and more efficient, it is necessary to define some variables or entities. I am thinking of two approaches:

1. We can follow the route of XML, where entities are transparent to the application. This inevitably complicates the OGDL spec and makes automated profile editing very difficult. The pros are that entities defined this way suit hand editing, and the application doesn't need to worry about them.

2. Due to the simplicity of OGDL, each application can define its own entity format. In this approach I foresee that many applications will end up reinventing the wheel. The benefit is flexibility.

My opinion leans toward leaving OGDL in its simplicity. But maybe OGDL can offer a few options and provide a library to help out. For example, two functions in the parser may help: AddEntity (adds an entity definition to the parser) and TranslateEntity (translates the entities inside a string); a sketch follows below. To use these routines, the entity would have to be in an OGDL-recommended form.

Thoughts?

-Hui
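For illustration, a minimal Perl sketch of how the pair proposed above might look. Only the names AddEntity and TranslateEntity come from the message; the "$NAME;" entity syntax, the package name, and the storage are assumptions, not an existing OGDL::Parser API.

# Hypothetical entity helpers: the function names come from the
# proposal above; everything else is an illustrative assumption.
package OGDL::Entities;
use strict;
use warnings;

my %entities;    # entity name -> replacement text

# AddEntity: register an entity definition with the parser.
sub AddEntity {
    my ($name, $value) = @_;
    $entities{$name} = $value;
}

# TranslateEntity: expand every "$NAME;" entity inside a string,
# leaving unknown entities untouched.
sub TranslateEntity {
    my ($string) = @_;
    $string =~ s/\$(\w+);/exists $entities{$1} ? $entities{$1} : "\$$1;"/ge;
    return $string;
}

1;

With this, AddEntity('prefix', '/usr') followed by TranslateEntity('$prefix;/bin') would yield '/usr/bin'.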
From: Hui Z. <zh...@wa...> - 2004-03-03 22:13:55
Hi,

I assume you are all familiar with LFS; for those who are not, you may be interested in checking out www.linuxfromscratch.org. I imported my `lfsbuilder' into the CVS repository. If you have a spare partition lying around, you may give it a try. I am lucky to have a fast machine; it builds in 3 hours. :-)

The basic feature set of lfsbuilder is very much like nALFS's. The differences are the following:

- It is written in Perl (I didn't use C because the code-test cycle in Perl is faster).
- It forks to switch user and to chroot, once in chapter 5 and once in chapter 6 (sketched below).
- It creates a central log for tracking build progress, which means that if it fails in the middle, the next run will start from there (one can also manipulate this log for testing).
- It tries to shift the dumb work to the program as much as possible.

Currently the profile, `profiles/lfs.g', is only 430 lines; the nALFS profiles use 111 files! Imagine what a difference that makes in browsing and editing the profile(s). If you already have all the packages in a local directory, you can cut another 100 or so lines off the profile. Collecting up-to-date URLs is tedious; automating this task in lfsbuilder is high on my to-do list. The profile can be split into multiple files named after the packages, but most profiles will be just a few lines.

Both the profile and the status log use OGDL, so it requires OGDL::Parser and OGDL::Graph. lfsbuilder installs LFS::Builder (I imagine people will add more LFS:: modules to facilitate these tasks in the future, like grabbing commands and dependencies from the book). For a quick first glance, read profiles/lfs.g (the profile), examples/builder.log (the status tracking log), and examples/lfs.log (the stdout log).

If you have a spare partition, I encourage you to give it a try (do it overnight :)). The installation doesn't install profiles (not at this stage). Move the profiles directory to your home directory or any place you desire, and edit profiles/lfs.g to match your configuration. Minimally, you need to set logs_dir, packages_dir, build_dir, profiles_dir, LFS, and LFSUSER. Before the LFS installation, prepare the partition, mkfs, mount it to $LFS, and make sure LFSUSER exists. To automatically install a base LFS, run:

lfsbuilder -p /To/Path/profiles/lfs.g lfs

The command output is suppressed. You can pass the option -t ttyname to make lfsbuilder send the command output (both stdout and stderr) to ttyname (or a file). It forks in chapter 5 and chapter 6: in chapter 5 it switches to user LFSUSER; in chapter 6 it chroots to $LFS. So it should be relatively safe to experiment with. :)

Oh, I almost forgot. I put a few necessary configuration files in profiles, and they are used in the lfs profile; adjust them to your needs. It also uses a kernel.config file for compiling the kernel, which most likely doesn't suit you. :)

Happy building!

-Hui
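To make the fork behaviour described above concrete, here is a rough Perl sketch of the pattern: fork a child that either switches to the unprivileged build user (chapter 5) or chroots into $LFS (chapter 6). The function name and structure are illustrative, not lfsbuilder's actual code.

# Rough sketch of the fork-to-switch-user / fork-to-chroot step.
# Names and structure are illustrative only.
use strict;
use warnings;
use POSIX qw(setuid setgid);

sub run_step {
    my ($user, $root, @cmd) = @_;   # $root undef => chapter 5 mode
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {                # child
        if (defined $root) {        # chapter 6: confine to $LFS
            chroot($root) or die "chroot $root: $!";
            chdir('/')    or die "chdir /: $!";
        } else {                    # chapter 5: become LFSUSER
            my ($uid, $gid) = (getpwnam($user))[2, 3];
            setgid($gid) or die "setgid: $!";
            setuid($uid) or die "setuid: $!";
        }
        exec(@cmd) or die "exec @cmd: $!";
    }
    waitpid($pid, 0);               # parent waits for the step
    return $? >> 8;                 # command's exit status
}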
From: Hui Z. <zh...@wa...> - 2004-03-03 18:43:54
On Wed, Mar 03, 2004 at 06:16:14PM +0000, Rolf Veen wrote:

> Hi, all :-).
>
> Downloading is the first step in a package's life cycle.
> The question is how this could be done with minimum
> effort and maximum flexibility.
>
> With minimum effort I mean not needing to maintain URLs,
> mirror URLs, or files to be downloaded. Probably we should
> use (or build) a central repository where our tools
> can ask for URLs. Mirrors are to be taken into account,

We need both a URL repository and a local package repository. For quite a few people, downloading at one time and installing at another is the only viable option. The local package repository can live either on a local hard disk or on a LAN server. On the other hand, for some people the network is not a problem but disk space is.

> so that bandwidth is saved when possible.
>
> With flexibility I mean that packages sometimes come
> from ftp or http, others from cvs, etc.

rsync?

> How is Debian apt doing this? Others?

They have central servers with complete listings and mirrors all over the world. The same goes for CPAN. To manage that, they need some organization. The most flexible package retrievers, however, have to deal with all varieties of uncontrolled sources, and for those, building and maintaining a central URL repository is not quite practical.

> Until now, my approach has been to write an OGDL file
> that contains URLs and file names. Some common repositories
> are variables such as $SRC_SF so that it is easy to
> switch mirrors.

Currently I am doing something similar. A complete URL usually contains the file name, and usually the version information and package type as well, so in my profile one URL line is usually the only thing needed for downloading and for setting the rest of the variables. I am planning to add mirrors to the URLs so the installer can search for the optimum server. In my experience we usually deal with a single mirror anyway, so this is not high on my to-do list. For packages that already exist in the local repository, the URL is not necessary, because the installer can find the package (heuristically) from the package name.

Maintaining an up-to-date package URL list is not easy, and it is only possible with a bunch of dedicated volunteers. I have installed LFS three times, and each time I spent a significant portion of the time just searching each package's website to find the URL, to find out whether it was the latest release, and, if not, how old it was. It is tedious, and after a couple of hours I feel very much like a robot. I would very much like to see a utility that collects this information for us. I would say most (90%?) of the packages we use come from a limited set of sources: GNU, SourceForge, kernel.org, freshmeat? If we can train a bot to query these sources, collect the release info, grab the URL, and make a fair judgement of the version relations, that's quite a saving of energy. For the rest of the packages we humans can happily google manually and list the URLs in the profile, spending just a few minutes and proving we are smarter than the computer. :)

There are quite a few components that people could collaborate on, such as: grabbing the file name and deducing the URL from an FTP listing, or from a SourceForge or freshmeat project query page; deducing the version, package name, and package type from the URL (a sketch of this is below); comparing versions; and maybe extracting abstracts, changelogs, etc. Made into a library, or a bunch of small utilities, this should be quite useful for many users.

What do you think?

-Hui
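As a starting point for the URL-deduction component mentioned above, here is a minimal Perl sketch under the assumed name-version.extension naming convention. The function name, the regex, and the returned fields are illustrative guesses, not an existing lfsbuilder or OGDL API.

# Illustrative only: deduce package name, version, and type from a
# URL, assuming the common name-version.extension convention.
use strict;
use warnings;

sub parse_package_url {
    my ($url) = @_;
    my ($file) = $url =~ m{([^/]+)$};    # last path component
    if ($file =~ /^(.+?)-(\d[\w.]*?)\.(tar\.gz|tar\.bz2|tgz)$/) {
        return { file => $file, name => $1,
                 version => $2, type => $3 };
    }
    return { file => $file };            # could not deduce
}

my $p = parse_package_url('http://ftp.gnu.org/gnu/bash/bash-2.05b.tar.gz');
print "$p->{name} $p->{version} ($p->{type})\n";   # bash 2.05b (tar.gz)

Version comparison and changelog extraction would sit on top of this, once the fields are deduced reliably.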
From: Rolf V. <rol...@he...> - 2004-03-03 17:30:56
Hi, all :-).

Downloading is the first step in a package's life cycle. The question is how this could be done with minimum effort and maximum flexibility.

With minimum effort I mean not needing to maintain URLs, mirror URLs, or files to be downloaded. Probably we should use (or build) a central repository where our tools can ask for URLs. Mirrors are to be taken into account, so that bandwidth is saved when possible.

With flexibility I mean that packages sometimes come from ftp or http, others from cvs, etc.

How is Debian apt doing this? Others?

Until now, my approach has been to write an OGDL file that contains URLs and file names. Some common repositories are variables such as $SRC_SF so that it is easy to switch mirrors; a sketch of this approach follows below.

Rolf.
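For illustration, a rough Perl sketch of what such a variable-based URL file and its expansion could look like. The OGDL fragment, the $SRC_SF mirror value, and the substitution syntax are assumptions based on this message, not a published format.

# Illustrative only: an OGDL-style URL list using a $SRC_SF mirror
# variable, plus a tiny expansion pass.
use strict;
use warnings;

my %vars = (
    SRC_SF => 'http://heanet.dl.sourceforge.net/sourceforge',
);

my $profile = <<'OGDL';
packages
    ogdl
        url $SRC_SF/ogdl/ogdl-1.0.tar.gz
    bash
        url http://ftp.gnu.org/gnu/bash/bash-2.05b.tar.gz
OGDL

# Expand $VAR references; switching mirrors then means editing a
# single line in %vars.
(my $expanded = $profile) =~ s/\$(\w+)/exists $vars{$1} ? $vars{$1} : "\$$1"/ge;
print $expanded;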