From: Thorsten M. <the...@gm...> - 2010-08-26 09:01:31
|
Hi,

currently every packager manages his SLKBUILDs at home, and after creating new packages they are uploaded to the source directory. This leads to the following problems:

* If a bug in a package is reported in the forum, someone may fix it in the SLKBUILD and upload the new one to the source dir. But then it's mandatory that the package maintainer gets the updated SLKBUILD, so he must be informed about it.
* Last time, when I created the current branch, I had to update *a lot* of gapan's packages, too. So he had to get all these SLKBUILDs from the source dir again to maintain them further.
* If someone uploads a package for e.g. i486 only and I want to add it to the x86_64 branch, I may have to tweak the SLKBUILD because it was not x86_64 ready before. Then I have to send the SLKBUILD to the maintainer again.

Most of our SLKBUILDs are multiarch capable (I cannot remember where different SLKBUILDs for different arches are really needed; it should always be possible to tweak a SLKBUILD so that it can handle all arches).

My suggestion now is to create some system where every packager first pulls the most recent SLKBUILDs, then modifies them and uploads them back. This should be a standardized process for official packagers. We currently have the source dirs where all SLKBUILDs could be obtained, but there are problems:

* Every arch has its own SLKBUILDs. Tweaking one for a single arch would require uploading it to different locations.
* Getting every SLKBUILD by hand before building a package is not a good idea.

Perhaps we could create a single dir for the multiarch capable SLKBUILDs and then just use some rsync to keep the scripts up to date. But IMHO the better solution would be a real version control system. Then it would be possible to update all SLKBUILDs with a "svn update", edit some of them, build new packages, upload the packages and commit the updated SLKBUILDs to svn again.

What do you think of it?

Greetings
Thorsten |
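The "update, edit, build, commit" round trip proposed above would look roughly like the following. This is only an illustrative sketch: the svn URL, the directory layout and the fakeroot invocation are assumptions (nothing like this repo existed at the time of the mail); only the cycle itself is the point.

```shell
# One-time checkout of the (hypothetical) shared SLKBUILD tree:
svn checkout svn://svn.example.org/salix/slkbuilds
cd slkbuilds

# Before working on a package, pull everyone's latest SLKBUILDs:
svn update

# Edit the script (bump version, make it x86_64 ready, ...):
$EDITOR xap/progname/SLKBUILD

# Build the package locally from the updated script:
( cd xap/progname && fakeroot slkbuild )

# Upload the resulting .txz to the binary repo as before, then
# commit the updated SLKBUILD so other packagers get it with "svn update":
svn commit -m "progname: update SLKBUILD" xap/progname/SLKBUILD
```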
From: George V. <vla...@gm...> - 2010-08-26 19:26:02
|
2010/8/26 Thorsten Mühlfelder <the...@gm...>

> currently every packager manages his SLKBUILDs at home and after creating new
> packages they are uploaded to the source directory. This leads to the
> following problems:

Honestly, I don't see any problem at all.

> * If a bug in a package is reported in the forum, someone may fix that in the
> SLKBUILD and upload the new one to the source dir. But then it's mandatory
> that the package maintainer gets the updated SLKBUILD. So he must be
> informed about it.

Of course. An email would be enough in that case. Not too much to ask, simple and effective.

> * Last time when I've created the current branch I had to update *a lot* of
> gapan's packages, too. So he had to get all these SLKBUILDs from the source
> dir again to maintain them further.

So? All updated packages were listed in the changelogs, I knew what to get. Not a big deal.

> * If someone uploads a package for e.g. i486 only and I want to add it to the
> x86_64 branch

That is never going to happen. All packages should be updated and are updated at the same time in both i486 and x86_64 repositories.

> I may have to tweak the SLKBUILD because it was not x86_64
> ready before. Then I have to send the SLKBUILD to the maintainer again.

Still don't see a problem here though. Communication should be encouraged in any case.

> Most of our SLKBUILDs are multiarch capable (I cannot remember where different
> SLKBUILDs for different arches are really needed. It should always be
> possible to tweak the SLKBUILD so that it can handle all arches).
>
> My suggestion now is to create some system where every packager first pulls
> the most recent SLKBUILDs, then modifies them and uploads them back. This
> should be a standardized process for official packagers.
> We currently have the source dirs where all SLKBUILDs could be obtained, but
> there are problems:
> * every arch has own SLKBUILDs. Tweaking it for one arch would need uploading
> them to different locations.

Yes, we have different source repositories for each arch, because we actually have two arches. But no, if the package is not updated in both arches, the SLKBUILD should not be updated in both source repos. You're forgetting the fact that the source repos should not hold the latest source files. They should always hold the source files that were used to build the packages that are present in the repositories at each given time. And they should be split by salix/slack version. So in a case like this, if a different SLKBUILD has been used for the different arches, different SLKBUILDs should be in the source repo, regardless of which one is the best/latest.

> * Getting every SLKBUILD by hand before building a package is not a good
> idea

Why not? In any case, the SLKBUILD is usually already on the packager's hard drive. If someone else updated it in the meantime, the packager should have been informed by an email personally, or via this mailing list. So he would still have the SLKBUILD on his hard drive.

> Perhaps we could create a single dir for the multiarch capable SLKBUILDs and
> then just use some rsync to keep scripts up to date.

Bad idea. What if a package gets updated only in one arch? Maybe because of a problem that is arch specific (like slapt-get was only updated in the i486 repo recently). Do we move packages in and out of that multiarch source repo all the time? It can become a really huge mess really fast.

> But IMHO the better solution would be a real version control system. Then it
> would be possible to update all SLKBUILDs with a "svn update", edit some of
> them, build new packages, upload the packages and put the updated SLKBUILDs
> to svn again.

This would create the need to manage this version controlled repo separately. As it is now, it is really simple:
1. Upload new package to the binary repo
2. Upload the source to the source repo

If we have such a repo, at least an additional step would be required:
1. Upload new package to the binary repo
2. Update the source files in the source tree
3. Sync the source tree.

This adds complexity, and that's never a good thing. Since I am the one that does the most work with the repositories, I don't welcome the added complexity. The simplest solution is always the best. All that said, if you have something concrete that will make my life easier instead of harder, I wouldn't mind at all. |
From: Thorsten M. <the...@gm...> - 2010-08-26 23:17:34
|
Am Thu, 26 Aug 2010 22:25:36 +0300 schrieb George Vlahavas <vla...@gm...>:

> 2010/8/26 Thorsten Mühlfelder <the...@gm...>
>
> > * If a bug in a package is reported in the forum, someone may fix
> > that in the SLKBUILD and upload the new one to the source dir. But
> > then it's mandatory that the package maintainer gets the updated
> > SLKBUILD. So he must be informed about it.
>
> Of course. An email would be enough in that case. Not too much to ask,
> simple and effective.

Simple and effective: yes. But sadly sometimes forgotten.

> > * Last time when I've created the current branch I had to update *a
> > lot* of gapan's packages, too. So he had to get all these SLKBUILDs
> > from the source dir again to maintain them further.
>
> So? All updated packages were listed in the changelogs, I knew what
> to get. Not a big deal.

But downloading every single SLKBUILD is just a waste of time.

> > * If someone uploads a package for e.g. i486 only and I want to add
> > it to the x86_64 branch
>
> That is never going to happen. All packages should be updated and are
> updated at the same time in both i486 and x86_64 repositories.

AFAIK we already have different packages in i486 and x86_64. IMHO people hesitate to become a packager because they have to build for both arches. And what about adding arm? Probably almost no one would want to package for that one. And yes, I'm thinking about how another arch could be added ;)

> > I may have to tweak the SLKBUILD because it was not x86_64
> > ready before. Then I have to send the SLKBUILD to the maintainer
> > again.
>
> Still don't see a problem here though. Communication should be
> encouraged in any case.

I agree on the communication part.

> > Most of our SLKBUILDs are multiarch capable (I cannot remember where
> > different SLKBUILDs for different arches are really needed. It
> > should always be possible to tweak the SLKBUILD so that it can
> > handle all arches).
> >
> > My suggestion now is to create some system where every packager
> > first pulls the most recent SLKBUILDs, then modifies them and
> > uploads them back. This should be a standardized process for
> > official packagers. We currently have the source dirs where all
> > SLKBUILDs could be obtained, but there are problems:
> > * every arch has own SLKBUILDs. Tweaking it for one arch would need
> > uploading them to different locations.
>
> Yes, we have different source repositories for each arch, because we
> actually have two arches. But no, if the package is not updated in
> both arches, the SLKBUILD should not be updated in both source repos.

I don't want to do that. The source repos should stay the same. I just want a svn repo where the latest SLKBUILDs are available. The build numbers are kept in sync anyway.

> You're forgetting the fact that the source repos should not hold the
> latest source files. They should always hold the source files that
> were used to build the packages that are present in the repositories
> at each given time. And they should be split by salix/slack version.
> So in a case like this, if a different SLKBUILD has been used for the
> different arches, different SLKBUILDs should be in the source repo,
> regardless of which one is the best/latest.
>
> > * Getting every SLKBUILD by hand before building a package is not a
> > good idea
>
> Why not? In any case, the SLKBUILD is usually already on the
> packager's hard drive. If someone else updated it in the meantime, the
> packager should have been informed by an email personally, or via this
> mailing list. So he would still have the SLKBUILD on his hard drive.

I just don't like the "click work": Got some SLKBUILDs per mail? Save every one to some folder by clicking around. Got a message that SLKBUILDs have been updated in the source repo? Use a browser and download every single file. I'd rather run a single command from the command line like "svn update" or "rsync something" to get all the latest SLKBUILDs.

> > Perhaps we could create a single dir for the multiarch capable
> > SLKBUILDs and then just use some rsync to keep scripts up to date.
>
> Bad idea. What if a package gets updated only in one arch? Maybe
> because of a problem that is arch specific (like slapt-get was only
> updated in the i486 repo recently). Do we move packages in and out of
> that multiarch source repo all the time? It can become a really huge
> mess really fast.

As said some lines above ;) No multiarch source repo, only a svn repo with the latest SLKBUILDs.

> > But IMHO the better solution would be a real version control system.
> > Then it would be possible to update all SLKBUILDs with a "svn
> > update", edit some of them, build new packages, upload the packages
> > and put the updated SLKBUILDs to svn again.
>
> This would create the need to manage this version controlled repo
> separately. As it is now, it is really simple:
> 1. Upload new package to the binary repo
> 2. Upload the source to the source repo

I already use a script for these tasks. It's not perfect and surely it can be improved. But at least I don't have to use gftp or always type the same commands to upload my files.

> If we have such a repo, at least an additional step would be required:
> 1. Upload new package to the binary repo
> 2. Update the source files in the source tree
> 3. Sync the source tree.
> which adds complexity and that's never a good thing.

Yes, it adds a step. But no big complexity.

> Since I am the one that does the most work with the repositories, I
> don't welcome the added complexity. The simplest solution is always
> the best.

I've already said in the beginning that I don't like the way we add packages from packagers without ssh access to our repo server and that we should seriously find some way to improve it. It's just the "clicking work" I've mentioned. Or ssh to the repo server and use wget to pull all the files or... No idea what is the best method at the moment.

> All that said, if you have something concrete that will make my life
> easier instead of harder, I wouldn't mind at all.

I hope so ;) |
From: Thorsten M. <the...@gm...> - 2010-08-27 07:39:44
|
Am Friday 27 August 2010 01:17:24 schrieb Thorsten Mühlfelder:

> > Since I am the one that does the most work with the repositories, I
> > don't welcome the added complexity. The simplest solution is always
> > the best.
>
> I've already said in the beginning that I don't like the way we add
> packages from packagers without ssh access to our repo server and that
> we should seriously find some way to improve it.
> It's just the "clicking work" I've mentioned. Or ssh to the repo server
> and use wget to pull all the files or... No idea what is the best
> method at the moment.

Since this is another topic, we could split the discussion here. On my way to work this came to my mind: IIRC you are not the greatest fan of automation scripts, but I am... :-D My idea is the following:

Let's say we have a packager called john and he has a personal repo at ftp.johnsrepo.org. He should put his uploaded packages in dirs like xap/progname/pkg and the source in xap/progname/src. So we could have a script on the repo server that does the following:

1. get all files in xap/progname/src (including the log file) and put them in a tmp folder: tmp/john/xap/progname/src
2. same for the pkg files: tmp/john/xap/progname/pkg
3. show the log file: less xap/progname/src/build...log
4. show the content of tmp/john/xap/progname/src and tmp/john/xap/progname/pkg: ls -l ...
5. optionally test the package md5sum
6. ask if you want to proceed
7. delete the files in the repo: rm -i xap/progname-...* and rm -i source/xap/progname/*
8. mv the files from the tmp dir to the repo
9. optionally ask if you want to run metagen.sh

The script can be called like this: pull-to-repo john xap progname

This script can be used for trusted packagers. Packages of new packagers should get investigated further. |
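The nine steps above can be sketched as a small shell function. This is a minimal sketch, not a real implementation: the name pull-to-repo, metagen.sh and the xap/progname/{src,pkg} layout come from the mail, while the REMOTE/REPO/TMPROOT variables are assumptions, and a plain cp stands in for whatever wget or rsync invocation would actually fetch from the packager's mirror.

```shell
# Hypothetical sketch of the proposed pull-to-repo helper. REMOTE, REPO
# and TMPROOT are assumed environment variables; a real version would
# fetch from ftp.johnsrepo.org with wget or rsync instead of cp.
pull_to_repo() {
    packager=$1 section=$2 prog=$3
    tmp=${TMPROOT:-/tmp}/$packager/$section/$prog

    # steps 1+2: fetch the src files (incl. build log) and pkg files
    mkdir -p "$tmp/src" "$tmp/pkg"
    cp "$REMOTE/$section/$prog/src/"* "$tmp/src/" || return 1
    cp "$REMOTE/$section/$prog/pkg/"* "$tmp/pkg/" || return 1

    # steps 3+4: show the fetched files for review (the build log would
    # be paged with less here)
    ls -l "$tmp/src" "$tmp/pkg"

    # step 5: verify md5sums when the packager shipped a .md5 file
    ( cd "$tmp/pkg" || exit 1
      for f in *.md5; do
          [ -e "$f" ] || continue
          md5sum -c "$f" || exit 1
      done ) || return 1

    # step 6: confirmation prompt; ASSUME_YES=1 skips it (e.g. in tests)
    if [ "${ASSUME_YES:-0}" != 1 ]; then
        printf 'Import %s into the repo? [y/N] ' "$prog"
        read -r answer
        [ "$answer" = y ] || return 1
    fi

    # steps 7+8: replace the old files in the repo with the reviewed ones
    rm -f "$REPO/$section/$prog-"* "$REPO/source/$section/$prog/"*
    mkdir -p "$REPO/$section" "$REPO/source/$section/$prog"
    mv "$tmp/pkg/"* "$REPO/$section/"
    mv "$tmp/src/"* "$REPO/source/$section/$prog/"

    # step 9 would optionally run metagen.sh here to refresh repo metadata
}

# usage, as in the mail: pull_to_repo john xap progname
```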
From: George V. <vla...@gm...> - 2010-08-27 15:31:56
|
2010/8/27 Thorsten Mühlfelder <the...@gm...>

> Since this is another topic, we could split the discussion here. On my
> way to work this came to my mind: IIRC you are not the greatest fan of
> automation scripts, but I am... :-D My idea is the following:
> Let's say we have a packager called john and he has a personal repo at
> ftp.johnsrepo.org. He should put his uploaded packages in dirs like
> xap/progname/pkg and the source in xap/progname/src.
> So we could have a script on the repo server that does the following:
> 1. get all files in xap/progname/src (including the log file) and put
> them in a tmp folder: tmp/john/xap/progname/src
> 2. same for the pkg files: tmp/john/xap/progname/pkg
> 3. show the log file: less xap/progname/src/build...log
> 4. show the content of tmp/john/xap/progname/src and
> tmp/john/xap/progname/pkg: ls -l ...
> 5. optionally test the package md5sum
> 6. ask if you want to proceed
> 7. delete the files in the repo: rm -i xap/progname-...* and
> rm -i source/xap/progname/*
> 8. mv the files from the tmp dir to the repo
> 9. optionally ask if you want to run metagen.sh
>
> the script can be called like this: pull-to-repo john xap progname
> This script can be used for trusted packagers. Packages of new
> packagers should get investigated further.

No. Packages should be thoroughly tested before they are uploaded. I do not upload packages that are submitted in the bug tracker just like that. That kind of script would not help me in any way, since I would still need to download all the files and test the packages locally. That usually means rebuilding the packages for both arches and uploading my own copies anyway. In any case, automating it like that can potentially drag in all sorts of nonsense and there is no way you can standardize on a format. |
From: Max <max...@gm...> - 2010-08-27 19:19:36
|
Hi,

Sorry to change the subject a bit, but I noticed something you wrote George:

"I'm rebuilding and uploading my own packages instead of the ones they submit anyway."

If that's the case, then when submitting packages for inclusion in the repos would it be better just to submit a SLKBUILD instead of uploading the entire package set (dep, md5, src, txz, log)? I realise the need for quality control and respect the commitment that you, as well as all the other dev team members, have made in time and effort spent in building and maintaining both the OS and extra packages, so please don't take this as some kind of cop-out on my part as a packager! But if you're going to build and upload your own packages anyway, then what's the point of uploading anything but the SLKBUILD (and source patches if necessary)?

From what you have said it also seems you are doing most of the leg-work in maintaining the repos and packages. I don't know your personal situation, but (and this is just a wild guess) I'm sure that doing this by yourself must be tiresome at the least, and probably also cuts into time that you could better spend doing something else. Which also leads onto something Thorsten said:

"Rebuilding the packages is a really good idea, especially when users come to the question: is the package source trusted? Every packager could add malware to the package that cannot be detected. Furthermore it is important that the sources are downloaded from their official sites and that patches are reviewed."

Which is a good point and something that I hadn't thought about (I guess I'm just not devious enough to think of something like that :-). But, unless one is willing to do everything by oneself, at some point others will have to be trusted to "pick up the slack", if you pardon the pun! What if some sort of 'official' packaging team was set up to help build/test/review submitted packages? That way efforts could be spread more evenly without sacrificing security to such a great degree, even more so if team members were allowed only to review others' packages and not their own.

Just a couple of ideas, sorry to butt in :-)

Max |
From: Thorsten M. <the...@gm...> - 2010-08-27 19:40:33
|
Am Fri, 27 Aug 2010 21:19:34 +0200 schrieb Max <max...@gm...>:

> What if some sort of 'official' packaging team was set up to help
> build/test/review submitted packages? That way efforts could be spread
> more evenly without sacrificing security to such a great degree, even
> more so if team members were allowed only to review others' packages
> and not their own.

We already have some kind of official packaging team. It consists of all users with ssh access to the repo, because these users can upload packages. IIRC this is Gapan, Akuna, JRD, Shador and me. But to be honest: I think Gapan is the only one with the patience to constantly review and upload foreign packages. I myself quickly get tired of these tasks. But I am open for a few new "trusted" packagers to join the boat, if they have been honest and reliable in the past.

Thorsten

PS: Instead of doing package reviews and stuff I'm out for a beer right now... :-D cu |
From: Max <max...@gm...> - 2010-08-27 19:51:54
|
> PS: Instead of doing package reviews and stuff I'm out for a beer right > now... :-D cu > Ha ha! Don't you get tired of the taste of beer? :-P |
From: Akuna <ak...@fr...> - 2010-08-27 22:54:44
|
Le 27/08/2010 21:40, Thorsten Mühlfelder a écrit :

> We already have some kind of official packaging team. It consists of
> all users with ssh access to the repo, because these users can upload
> packages. IIRC this is Gapan, Akuna, JRD, Shador and me.
> But to be honest: I think Gapan is the only one with the patience to
> constantly review and upload foreign packages. I myself quickly get
> tired of these tasks.
> But I am open for a few new "trusted" packagers to join the boat, if
> they have been honest and reliable in the past.

We can also now add Fred, who has expressed his desire to help George out with this task. Even though I may have ssh access to the repo, I have neither the nerve nor a great interest for this task & greatly admire those that do it so well & consistently. In the interest of lightening the load of Gapan, who btw is also needed on many other fronts, & avoiding some unnecessary burnout, it would be great if more interested folks could join him & Fred (& you Thenktor). So I really second Max's suggestion... rather than devising a great scheme, maybe we just need to set up a great trusted packaging team? Maybe some of our most prolific, constant & talented packagers might like to be more involved with this work? Like Max, but maybe also Richard & some others? |
From: George V. <vla...@gm...> - 2010-08-28 09:39:24
|
On Fri, Aug 27, 2010 at 10:19 PM, Max <max...@gm...> wrote:

> "I'm rebuilding and uploading my own packages instead of the ones they
> submit anyway."
>
> If that's the case, then when submitting packages for inclusion in the
> repos would it be better just to submit a SLKBUILD instead of uploading
> the entire package set (dep, md5, src, txz, log)? I realise the need
> for quality control and respect the commitment that you, as well as all
> the other dev team members, have made in time and effort spent in
> building and maintaining both the OS and extra packages, so please
> don't take this as some kind of cop-out on my part as a packager! But
> if you're going to build and upload your own packages anyway then
> what's the point of uploading anything but the SLKBUILD (and source
> patches if necessary)?

Uploading all files helps me in many different ways. Without the logs I couldn't know if any package I built was built with the exact same options as the packager's. I might be missing an optional dependency for example, or building with an optional dependency that is not actually needed or wanted. Also, I don't always upload my own packages; especially with bigger packages it's a lot faster to download straight to the repository from the packager's ftp, after I match the md5sums with the packages I built myself, instead of uploading from my own pc.

> From what you have said it also seems you are doing most of the
> leg-work in maintaining the repos and packages. I don't know your
> personal situation, but (and this is just a wild guess) I'm sure that
> doing this by yourself must be tiresome at the least, and probably
> also cuts into time that you could better spend doing something else.
> Which also leads onto something Thorsten said:
>
> "Rebuilding the packages is a really good idea, especially when users
> come to the question: is the package source trusted? Every packager
> could add malware to the package that cannot be detected. Furthermore
> it is important that the sources are downloaded from their official
> sites and that patches are reviewed."
>
> Which is a good point and something that I hadn't thought about (I
> guess I'm just not devious enough to think of something like that :-).
> But, unless one is willing to do everything by oneself, at some point
> others will have to be trusted to "pick up the slack", if you pardon
> the pun! What if some sort of 'official' packaging team was set up to
> help build/test/review submitted packages? That way efforts could be
> spread more evenly without sacrificing security to such a great
> degree, even more so if team members were allowed only to review
> others' packages and not their own.

Anyone that wants to can already help with all that by logging in to the bug tracker every once in a while and testing the packages that are posted there. Not that many right now, but there will be lots of them when we get a current repo and almost everything will need to be upgraded. Unfortunately that is not a task that many people enjoy. |
From: Andreas B. <fut...@go...> - 2010-08-27 20:45:17
|
Am 27.08.2010 17:31, schrieb George Vlahavas:

> 2010/8/27 Thorsten Mühlfelder <the...@gm...>
>
> > Since this is another topic, we could split the discussion here. On
> > my way to work this came to my mind: IIRC you are not the greatest
> > fan of automation scripts, but I am... :-D My idea is the following:
> > Let's say we have a packager called john and he has a personal repo
> > at ftp.johnsrepo.org. He should put his uploaded packages in dirs
> > like xap/progname/pkg and the source in xap/progname/src.
> > So we could have a script on the repo server that does the following:
> > 1. get all files in xap/progname/src (including the log file) and
> > put them in a tmp folder: tmp/john/xap/progname/src
> > 2. same for the pkg files: tmp/john/xap/progname/pkg
> > 3. show the log file: less xap/progname/src/build...log
> > 4. show the content of tmp/john/xap/progname/src and
> > tmp/john/xap/progname/pkg: ls -l ...
> > 5. optionally test the package md5sum
> > 6. ask if you want to proceed
> > 7. delete the files in the repo: rm -i xap/progname-...* and
> > rm -i source/xap/progname/*
> > 8. mv the files from the tmp dir to the repo
> > 9. optionally ask if you want to run metagen.sh
> >
> > the script can be called like this: pull-to-repo john xap progname
> > This script can be used for trusted packagers. Packages of new
> > packagers should get investigated further.
>
> No. Packages should be thoroughly tested before they are uploaded. I
> do not upload packages that are submitted in the bug tracker just like
> that. That kind of script would not help me in any way, since I would
> still need to download all the files and test the packages locally.
> That usually means rebuilding the packages for both arches and
> uploading my own copies anyway. In any case, automating it like that
> can potentially drag in all sorts of nonsense and there is no way you
> can standardize on a format.

Since automation has been mentioned here too: I agree that the kind of automation Thorsten described may seem useful, and it probably is useful for one's own packages. But as gapan put it very well, sometimes automation just produces a lot of nonsense. I therefore chose a compromise for myself. I've got two small snippets I always copy in before uploading packages:

getpkg() { LANG=C eval wget ${1/%.???/.{txz,dep,sug,con,md5}} 2>&1 | grep "Downloaded:"; }

getsrc() { LANG=C wget --quiet ${1/%.???/.src}; LANG=C wget -i `echo ${1/%.???/.src} | sed -e 's;.*/;;'` 2>&1 | grep "Downloaded:"; rm `echo ${1/%.???/.src} | sed -e 's;.*/;;'`; }

The getpkg one takes a URL to a pkg file (e.g. http://someserver/path/xy.txz) and changes the extension to download the txz, dep, sug, con and md5 files with wget. The other one also takes a URL (possibly the same) and replaces the extension to download the .src file, then uses wget to download all sources mentioned in it. I got too tired of the repeated wgets and copy'n'pasting, so I wrote these quite some time ago. |
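The trick both of Andreas' one-liners rely on is the bash substitution ${1/%.???/.ext}, which swaps the last three-character extension of the given URL. A commented illustration of just that expansion (pkg_urls is a made-up name; it only prints the URLs, so the actual wget call stays separate):

```shell
# Print the five repo-file URLs that correspond to one package URL, by
# replacing the trailing three-character extension (bash ${var/%pat/rep}
# substitution, anchored at the end of the string).
pkg_urls() {
    url=$1
    for ext in txz dep sug con md5; do
        # ".???" matches the final ".txz"; it is replaced by ".$ext"
        printf '%s\n' "${url/%.???/.$ext}"
    done
}

# getpkg is then roughly equivalent to:  pkg_urls "$1" | wget -i -
```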
From: George V. <vla...@gm...> - 2010-08-27 15:28:25
|
2010/8/27 Thorsten Mühlfelder <the...@gm...>

> AFAIK we already have different packages in i486 and x86_64.

No we don't. The only packages that are different are the ones that are arch-specific (wine etc).

> IMHO people hesitate to become a packager because they have to build
> for both arches. And what about adding arm? Probably almost no one
> would want to package for this one.

Who says that packagers need to build for both arches? Most don't. Some may not even have the hardware to build for x86_64. I do it for them. Always. I'm rebuilding and uploading my own packages instead of the ones they submit anyway. If I was going to wait for every packager to build his packages for x86_64, we would still not have a x86_64 arch. And the reasons people hesitate to become packagers are completely different; they range from limited knowledge to time constraints on their part.

> And yes, I'm thinking about how another arch could be added ;)

If you're serious about arm, and since you're the only one with arm hardware (at least for now), prepare to rebuild everything for that architecture yourself. Not an easy task.

> I don't want to do that. The source repos should stay the same. I just
> want a svn repo where the latest SLKBUILDs are available. The build
> numbers are kept in sync anyway.

So, you want to add an *extra* repo that will need *extra* work?

> I just don't like the "click work": Got some SLKBUILDs per mail? Save
> every one to some folder by clicking around. Got a message that
> SLKBUILDs have been updated in the source repo? Use a browser and
> download every single file.
> I'd rather run a single command from the command line like "svn update"
> or "rsync something" to get all the latest SLKBUILDs.

I'd love to have everything done for me by clicking a single button or running a single command, but things don't work that way. Manual work is needed. And having two different source repos is really extra work for everyone, and mostly for me, since I am the one that is doing most of the work. We have a source repo, we don't need a second one; it would only be confusing and duplicating effort. |
From: Thorsten M. <the...@gm...> - 2010-08-27 16:37:57
|
Am Friday 27 August 2010 17:31:29 schrieb George Vlahavas:

> No. Packages should be thoroughly tested before they are uploaded.

This is why I've said: trusted packagers. But I admit: the question who is "trusted" cannot be answered easily. If someone really is trusted he probably can have access to the repo server, too.

> I do not upload packages that are submitted in the bug tracker just
> like that. That kind of script would not help me in any way, since I
> would still need to download all the files and test the packages
> locally. That usually means rebuilding the packages for both arches
> and uploading my own copies anyway.

So this is a good point! Rebuilding the packages is a really good idea, especially when users come to the question: is the package source trusted? Every packager could add malware to the package that cannot be detected. Furthermore it is important that the sources are downloaded from their official sites and that patches are reviewed.

> In any case, automating it like that can potentially drag in all sorts
> of nonsense and there is no way you can standardize on a format.

What format do you mean here? |
From: George V. <vla...@gm...> - 2010-08-27 17:22:39
|
2010/8/27 Thorsten Mühlfelder <the...@gm...> > > In any case, automating it like that can potentially drag in all sort of > > nonsense and there is no way you can standardize on a format. > > What format do you mean here? I meant the directory structure you mentioned for every packager. |
From: Fred G. <fre...@gm...> - 2010-08-27 18:40:58
|
I just want to say that I agree with gapan.

Even if it requires more work, the way that salix (gapan) works is the
most secure way for a binary OS.

If we had more servers, we could dedicate some to the "building" job,
and that way would be THE good one; in that case, a (D)CVS repo for
SLKBUILDs would be useful. But, yes, it requires money, donations, ...,
so maybe in the future ;)

In the meantime I'm (trying to) work on a web thing that will list Salix
packages (with descriptions, search, ...) and that will group our
SLKBUILDs in a single dir, etc. ;) (akuna is in on this party too ;) )

++
fredg

2010/8/27 Thorsten Mühlfelder <the...@gm...>:
> On Friday 27 August 2010 17:31:29, George Vlahavas wrote:
>> No. Packages should be thoroughly tested before they are uploaded.
>
> This is why I've said: trusted packagers. But I admit: the question of
> who is "trusted" cannot be answered easily. If someone really is
> trusted, he could probably have access to the repo server, too.
>
>> I do not upload packages that are submitted in the bug tracker just
>> like that. That kind of script would not help me in any way, since I
>> would still need to download all files and test the packages locally.
>> That usually means rebuilding the packages for both arches and
>> uploading my own copies anyway.
>
> So this is a good point! Rebuilding the packages is a really good
> idea, especially when users come to the question: is the package
> source trusted? Every packager could add undetectable malware to a
> package. Furthermore it is important that the sources are downloaded
> from their official sites and that patches are reviewed.
>
>> In any case, automating it like that can potentially drag in all
>> sorts of nonsense and there is no way you can standardize on a
>> format.
>
> What format do you mean here? 
>
> _______________________________________________
> Salix-main mailing list
> Sal...@li...
> https://lists.sourceforge.net/lists/listinfo/salix-main

--
BloG : http://fredgnix.blogspot.com
GPG keyID : 7E987005 |
From: Thorsten M. <the...@gm...> - 2010-08-27 18:48:01
|
On Fri, 27 Aug 2010 20:40:51 +0200, Fred Galusik <fre...@gm...> wrote:
> I just want to say that I agree with gapan.
>
> Even if it requires more work, the way that salix (gapan) works is the
> most secure way for a binary OS.

I agree, too ;) |
From: Max <max...@gm...> - 2010-09-02 17:57:39
|
Hi George

Thanks for the explanations! Would you mind describing the procedure you
use when testing packages? Despite Akuna's confidence in my abilities
(thanks by the way! :-) as a packager, I realise that I am far from
perfect. So I think it would be helpful, both for the dev team as well
as those willing to help test, to have a standard way of testing, so
that package quality has a good chance of remaining at the high standard
that has been achieved.

Let's say I was to test a package posted on the bugtracker. If
everything looked good, and the md5sum of the submitted package matched
that of the package built by myself, I am guessing that would be enough
for that package to be included in the repo directly? I say this as
you're mostly uploading the packages you built as opposed to the ones
submitted, so would it be necessary and/or desirable for me to upload
the package I built too?

Max

On 28/08/10 11:38 AM, George Vlahavas wrote:
> On Fri, Aug 27, 2010 at 10:19 PM, Max <max...@gm...
> <mailto:max...@gm...>> wrote:
>
>     "I'm rebuilding and uploading my own packages instead of the
>     ones they submit anyway."
>
>     If that's the case, then when submitting packages for inclusion in
>     the repos would it be better just to submit a SLKBUILD instead of
>     uploading the entire package set (dep, md5, src, txz, log)? I
>     realise the need for quality control and respect the commitment
>     that you, as well as all the other dev team members, have made in
>     time and effort spent in building and maintaining both the OS and
>     extra packages, so please don't take this as some kind of cop-out
>     on my part as a packager! But if you're going to build and upload
>     your own packages anyway, then what's the point of uploading
>     anything but the SLKBUILD (and source patches if necessary)?
>
> Uploading all files helps me in many different ways. Without the logs
> I couldn't know if any package I built was built with the exact same
> options as the packager did. 
> I might be missing an optional dependency, for example, or building
> with an optional dependency that is not actually needed or wanted.
> Also, I don't always upload my own packages; especially with bigger
> packages it's a lot faster to download straight to the repository from
> the packager's ftp, after I match the md5sums with the packages I
> built myself, instead of uploading from my own pc.
>
>     From what you have said it also seems you are doing most of the
>     leg-work in maintaining the repos and packages. I don't know your
>     personal situation, but (and this is just a wild guess) I'm sure
>     that doing this by yourself must be tiresome at the least, and
>     probably also cuts into time that you could better spend doing
>     something else. Which also leads onto something Thorsten said:
>
>     "Rebuilding the packages is a really good idea, especially when
>     users come to the question: is the package source trusted? Every
>     packager could add undetectable malware to a package. Furthermore
>     it is important that the sources are downloaded from their
>     official sites and that patches are reviewed."
>
>     Which is a good point and something that I hadn't thought about (I
>     guess I'm just not devious enough to think of something like that
>     :-). But, unless one is willing to do everything by one's self, at
>     some point others will have to be trusted to "pick up the slack",
>     if you pardon the pun! What if some sort of 'official' packaging
>     team was set up to help build/test/review submitted packages? That
>     way efforts could be spread more evenly without sacrificing
>     security to such a great degree, even more so if team members were
>     allowed only to review others' packages and not their own.
>
> Anyone that wants to can already help with all that by logging in to
> the bugtracker every once in a while and testing the packages that are
> posted there. 
> Not that many right now, but there will be lots of them when we get a
> current repo and almost everything will need to be upgraded.
> Unfortunately that is not a task that many people enjoy. |
From: George V. <vla...@gm...> - 2010-09-03 07:32:55
|
On Thu, Sep 2, 2010 at 8:57 PM, Max <max...@gm...> wrote:
> Hi George
>
> Thanks for the explanations! Would you mind describing the procedure
> you use when testing packages? Despite Akuna's confidence in my
> abilities (thanks by the way! :-) as a packager, I realise that I am
> far from perfect. So I think it would be helpful, both for the dev team
> as well as those willing to help test, to have a standard way of
> testing, so that package quality has a good chance of remaining at the
> high standard that has been achieved.

I'd say it's mostly like:

1. Download all files.
2. Check the SLKBUILD visually, see if there's anything obviously wrong.
3. Rebuild the package. First on an x86_64 architecture, since some
SLKBUILDs don't have proper support for all arches. Then on i486.
4. Check the build logs. Compare against the original ones for different
configure options. Check that all files are placed in correct
directories in the package (no /usr/local, lib${LIBDIRSUFFIX}, docs
etc). You can check if the package complies with most of the packaging
rules here: http://www.salixos.org/wiki/index.php/Packaging_rules
5. Run depfinder, see if the dep list is the same as the original one.
If the package is an update to an older version already present in the
repositories, see if there are any significant changes in the
dependency list.
6. If uncertain about the dependencies, install the package in a VM with
a basic mode installation and only the specified dependencies installed.
7. Run the application, see if it works properly on both arches. For
"big" applications, you obviously can't test every aspect of their
functionality. Loading the application and testing some basic
functionality is usually enough.

> Let's say I was to test a package posted on the bugtracker. 
> If everything looked good, and the md5sum of the submitted package
> matched that of the package built by myself, I am guessing that would
> be enough for that package to be included in the repo directly? I say
> this as you're mostly uploading the packages you built as opposed to
> the ones submitted, so would it be necessary and/or desirable for me
> to upload the package I built too?

No, you wouldn't have to upload your build too. Just report your
findings. If you could upload a package for an architecture that is
missing from the original submission, it would help a lot though. |
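The mechanical parts of this checklist (the md5sum comparison Max asks about, and the step-4 scan for stray /usr/local files) can be sketched in shell. The package names below are made up, and a plain tar archive stands in for a real xz-compressed .txz; depfinder and the rebuild itself are left out, since they need an actual Salix system.

```shell
#!/bin/sh
# Sketch of the mechanical review checks, run against a tiny stand-in
# "package" so the commands can be shown end to end. File names are
# hypothetical; a real .txz is an xz-compressed tar, but a plain tar
# is enough to demonstrate the mechanism.

# Build a stand-in package with one good and one bad install path
mkdir -p pkg/usr/bin pkg/usr/local/bin
echo 'echo hi'   > pkg/usr/bin/foo
echo 'echo oops' > pkg/usr/local/bin/bar
tar -C pkg -cf foo-1.0-i486-1.txz usr
tar -C pkg -cf foo-1.0-i486-1.rebuilt.txz usr   # pretend: our own rebuild

# md5sum check: does the submitted package match the one we rebuilt?
sub=$(md5sum foo-1.0-i486-1.txz | cut -d' ' -f1)
own=$(md5sum foo-1.0-i486-1.rebuilt.txz | cut -d' ' -f1)
[ "$sub" = "$own" ] && echo "md5sums match" || echo "md5sums differ"

# Step 4: nothing should be installed under /usr/local
if tar -tf foo-1.0-i486-1.txz | grep -q '^usr/local/'; then
    echo "BAD: files under /usr/local"
fi
```

The stand-in package deliberately contains a file under usr/local, so the last check flags it; on a clean package the grep finds nothing and stays silent.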
From: George V. <vla...@gm...> - 2010-09-03 07:37:27
|
On Fri, Sep 3, 2010 at 10:32 AM, George Vlahavas <vla...@gm...> wrote:
>
> 4. Check the build logs. Compare against the original ones for
> different configure options. Check that all files are placed in
> correct directories in the package (no /usr/local,
> lib${LIBDIRSUFFIX}, docs etc). You can check if the package complies
> with most of the packaging rules here:
> http://www.salixos.org/wiki/index.php/Packaging_rules

Forgot to mention that slk-pkgcheck can help a lot here, especially with
permissions. I'm also opening the package with file-roller and visually
checking the doinst script. Sometimes it's easier to check the directory
structure with file-roller instead of reading the build.log. |
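For anyone reviewing packages without a graphical archive manager, the same directory listing and doinst.sh check can be done with tar alone. This is a sketch using a made-up stand-in package; with a real .txz the identical commands apply, since GNU tar auto-detects the xz compression when listing and extracting.

```shell
#!/bin/sh
# Inspect a package's doinst.sh without installing it. We create a tiny
# stand-in package first (name and contents are hypothetical); with a
# real .txz the same tar commands work unchanged.
mkdir -p pkg/install
cat > pkg/install/doinst.sh <<'EOF'
# typical doinst content: config() handling, symlinks, etc.
ln -sf /usr/bin/bar usr/bin/bar-stable
EOF
tar -C pkg -cf bar-0.1-i486-1.txz install

# List the directory structure (what file-roller shows graphically)
tar -tf bar-0.1-i486-1.txz

# Print doinst.sh straight to stdout for review, without unpacking
tar -xOf bar-0.1-i486-1.txz install/doinst.sh
```

The `-O` (`--to-stdout`) flag is what avoids littering the working directory with extracted files during review.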