kerncomp-devel Mailing List for kerncomp
From: Jan D. <jdi...@pp...> - 2005-05-20 07:08:47
Ian Wienand wrote:
> On Tue, May 17, 2005 at 12:48:10AM +0200, Jan Dittmer wrote:
>> Well, you convinced me. I now build a version which stores the build
>> results in a directory tree named:
>>
>> version/arch/configs/testtype/*
>>
>> * being files for each data item stored there. Something like this:
>>
>> db/2.6.11-rc4-bk6/alpha/defconfig
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/startdate
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/enddate
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/log
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/success
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/warnings
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/errors
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/config
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/compiler
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/version
>> db/2.6.11-rc4-bk6/alpha/defconfig/compile/num_errors
>>
>> And rewrote the front-end to access the database. Big pro: you can just
>> rsync results from different places together, as long as they have
>> unique 'testtype' stamps. You can even use git to manage this database.
>> I'll put the beta interface up tomorrow.
>
> I like it :)

Well, I've redone it once again :-). The problem with that format is that it's hard to switch around. So what I'm doing now, because hashes are cool nowadays: I keep a database of results in db/objects/<sha1hashofresults>/<resultfiles> and a symlink tree in db/cache/, which looks like the above but can be completely different. So you have directories like

db/objects/00/10/0010fa08869201472b0903d23bed23c2fe9a71dc/

containing lots of files. I've also added a GPG signature file ('signature'), which signs the hash of all files in the directory except 'signature' itself. That makes it possible for me to start automatically importing tests from other people (by email, rsync, ...).

> I also got in contact with Michael Still who also does some
> autobuilding (http://www.stillhq.com/linux/automated/)
>
> He's got a nice general architecture behind his. You write scripts
> that are supposed to operate on a clean tree, and they get called by
> an overseer-type script.
>
> I think this could translate into something where you specify a
> 'procedures' file which has a list of scripts to run over a clean
> tree. You give each one a few options. The generic ones would be
> 'patch with this patchset', 'build this config', 'boot this on the ski
> simulator', etc. If one step fails, the others don't complete.
>
> So your procedure file might look like
>
> -- lvhpt patches --
> apply_patch /home/ianw/patches/lvhpt/series
> build_kernel defconfig
> count_warnings
> sim_boot
> -- zx1_defconfig build --
> build_kernel zx1_defconfig
> -- arm build --
> cross_build_kernel arm defconfig
>
> You can create your procedures file on the fly nightly too, I guess,
> using something like m4.

That's great. I thought about something like this, but if it's already finished...

> It should be quite easy to add new procedure steps by writing your
> scripts using all the generic functions provided by kerncomp. It
> should handle redirecting output to html files in directories, etc
> etc.

Well, that would fit perfectly with my result storage, I think. The final 'procedure' would just sign the results and submit them to the results database.

-- Jan
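A rough sketch of filing one result set under the content-addressed scheme described above, for illustration only; the helper paths, variable names, and the exact hashing/signing details are assumptions, not kerncomp's actual code:

#!/bin/sh
# Sketch: store a result directory by content hash, then point a
# human-readable symlink (the db/cache/ tree) at it.
results=$1; version=$2; arch=$3; config=$4; testtype=$5

# Hash the concatenated result files to get a stable object id.
hash=$(cat "$results"/* | sha1sum | cut -d' ' -f1)
obj=db/objects/$(echo "$hash" | cut -c1-2)/$(echo "$hash" | cut -c3-4)/$hash
mkdir -p "$obj"
cp "$results"/* "$obj"/

# Sign the hash of every file in the directory except 'signature' itself.
(cd "$obj" && ls | grep -v '^signature$' | xargs cat | sha1sum \
    | gpg --clearsign > signature)

# Cache tree laid out like the old version/arch/config scheme.
link=db/cache/$version/$arch/$config/$testtype
mkdir -p "$(dirname "$link")"
ln -sfn "$PWD/$obj" "$link"

Because the object directory is named after its contents, results imported from other people (by email, rsync, ...) can be dropped in without collisions; the signature check then decides whether they are trusted.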
From: Ian W. <ia...@ge...> - 2005-05-20 01:07:30
On Tue, May 17, 2005 at 12:48:10AM +0200, Jan Dittmer wrote:
> Well, you convinced me. I now build a version which stores the build
> results in a directory tree named:
>
> version/arch/configs/testtype/*
>
> * being files for each data item stored there. Something like this:
>
> db/2.6.11-rc4-bk6/alpha/defconfig
> db/2.6.11-rc4-bk6/alpha/defconfig/compile
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/startdate
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/enddate
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/log
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/success
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/warnings
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/errors
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/config
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/compiler
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/version
> db/2.6.11-rc4-bk6/alpha/defconfig/compile/num_errors
>
> And rewrote the front-end to access the database. Big pro: you can just
> rsync results from different places together, as long as they have
> unique 'testtype' stamps. You can even use git to manage this database.
> I'll put the beta interface up tomorrow.

I like it :)

I also got in contact with Michael Still, who also does some autobuilding (http://www.stillhq.com/linux/automated/).

He's got a nice general architecture behind his. You write scripts that are supposed to operate on a clean tree, and they get called by an overseer-type script.

I think this could translate into something where you specify a 'procedures' file which has a list of scripts to run over a clean tree. You give each one a few options. The generic ones would be 'patch with this patchset', 'build this config', 'boot this on the ski simulator', etc. If one step fails, the others don't complete.

So your procedure file might look like:

-- lvhpt patches --
apply_patch /home/ianw/patches/lvhpt/series
build_kernel defconfig
count_warnings
sim_boot
-- zx1_defconfig build --
build_kernel zx1_defconfig
-- arm build --
cross_build_kernel arm defconfig

You can create your procedures file on the fly nightly too, I guess, using something like m4.

It should be quite easy to add new procedure steps by writing your scripts using all the generic functions provided by kerncomp. It should handle redirecting output to html files in directories, etc etc.

-i
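The overseer itself could stay tiny; a minimal sketch, assuming the step commands (apply_patch, build_kernel, ...) are shell functions defined in an invented kerncomp-steps.sh library, and make_clean_tree is a made-up helper that checks out a pristine tree:

#!/bin/sh
# Sketch: run a 'procedures' file in the format shown above. Each
# '-- name --' header starts a fresh run; every other line is a step.
# Once a step fails, the rest of that run is skipped.
. ./kerncomp-steps.sh            # assumed library of generic step functions

failed=0
while read -r line; do
    case $line in
    --*--)
        name=$(echo "$line" | sed 's/^-- *//; s/ *--$//')
        echo "=== starting run: $name ==="
        make_clean_tree || exit 1    # made-up helper: pristine tree
        failed=0
        ;;
    '') ;;                           # skip blank lines
    *)
        if [ "$failed" -eq 0 ]; then
            $line || { echo "step failed: $line"; failed=1; }
        fi
        ;;
    esac
done < procedures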
From: jerome l. <jer...@gm...> - 2005-05-18 13:06:17
Hi,

I saw the announcement mail on LKML (thanks to Kernel Trap). I have an AMD64 box (3200+, 1G RAM, 160G SATA) on which I can reserve cycles to nightly build the kernel. Any advice on how to turn this desire into some concrete action? Is there a plan for some sort of central, distributed, automated build system where clients could register and be requested to perform some particular builds?

Cheers,
Jerome
From: Ian W. <ia...@ge...> - 2005-05-16 23:33:34
On Sat, May 14, 2005 at 06:17:18PM +0200, Jan Dittmer wrote:
> What I plan to implement, hopefully till next weekend, is a system
> which discovers new kernel versions and builds a table of
> kernel/version/arch triplets that want to be tested.

Ok, that's good. However, I would say the most important thing for *us* is application of patches and building from a git tree every night. Currently the rate of change in the git tree isn't really like what the old BitKeeper trees saw, but I'm sure it will get there. This is really what kerncomp was used for from the start; it originated right at the time IA64 was moving from being kept out of tree to being merged into mainline.

For people who aren't trying to maintain out-of-tree patches this really isn't important.

> Against these versions any number of tests can run: compile, run
> time test, whatever, ... and report the results back, without many
> formal requirements. Basically a big storage of test
> results. Would that work out for you?

It sounds like it would; I guess the devil is in the details :)

> ... and write git-SELECT to search through them? :) Really, SQL has
> lots of advantages when you store even slightly more
> complex/relational things.

Personally, I'd really need to be convinced of the need for a database here. We do all our builds on a big machine that is largely unused overnight, and can then simply rsync the results over to the webserver. If I want to put a status as to why the build failed, I just add it in the summary.log file and re-rsync. If I screw something up, I just delete the subdirectory. Everything is kept in order via standard unix permissions. If I need to sort through the output, I use sed and awk. We just bzip the files to archive them (we could use rzip across all of the files if space was really an issue).

-i
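For illustration, the whole flat-file workflow described above fits in a handful of commands; the hostname, the FAILED marker in summary.log, and the paths are invented here:

# Push the night's results to the webserver.
rsync -a results/ web.example.org:/var/www/kerncomp/results/

# Pull a quick failure list straight out of the flat files.
grep -l FAILED results/*/summary.log

# Archive logs older than a month.
find results/ -name '*.log' -mtime +30 -exec bzip2 {} +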
From: Jan D. <jdi...@pp...> - 2005-05-16 22:49:30
Ian Wienand wrote:
> On Sat, May 14, 2005 at 06:17:18PM +0200, Jan Dittmer wrote:
>> What I plan to implement, hopefully till next weekend, is a system
>> which discovers new kernel versions and builds a table of
>> kernel/version/arch triplets that want to be tested.
>
> Ok, that's good. However, I would say the most important thing for
> *us* is application of patches and building from a git tree every
> night. Currently the rate of change in the git tree isn't really like
> what the old BitKeeper trees saw, but I'm sure it will get there.
> This is really what kerncomp was used for from the start; it
> originated right at the time IA64 was moving from being kept out of
> tree to being merged into mainline.
>
> For people who aren't trying to maintain out-of-tree patches this
> really isn't important.

Yes, and I won't prevent you from doing that. I'm just providing a very general system to view the results from such tests. You can still test whatever version you want, as long as you have a unique version string.

>> Against these versions any number of tests can run: compile, run
>> time test, whatever, ... and report the results back, without many
>> formal requirements. Basically a big storage of test
>> results. Would that work out for you?
>
> It sounds like it would; I guess the devil is in the details :)
>
>> ... and write git-SELECT to search through them? :) Really, SQL has
>> lots of advantages when you store even slightly more
>> complex/relational things.
>
> Personally, I'd really need to be convinced of the need for a database
> here. We do all our builds on a big machine that is largely unused
> overnight, and can then simply rsync the results over to the
> webserver. If I want to put a status as to why the build failed, I
> just add it in the summary.log file and re-rsync. If I screw
> something up, I just delete the subdirectory. Everything is kept in
> order via standard unix permissions. If I need to sort through the
> output, I use sed and awk. We just bzip the files to archive them (we
> could use rzip across all of the files if space was really an issue).

Well, you convinced me. I now build a version which stores the build results in a directory tree named:

version/arch/configs/testtype/*

* being files for each data item stored there. Something like this:

db/2.6.11-rc4-bk6/alpha/defconfig
db/2.6.11-rc4-bk6/alpha/defconfig/compile
db/2.6.11-rc4-bk6/alpha/defconfig/compile/startdate
db/2.6.11-rc4-bk6/alpha/defconfig/compile/enddate
db/2.6.11-rc4-bk6/alpha/defconfig/compile/log
db/2.6.11-rc4-bk6/alpha/defconfig/compile/success
db/2.6.11-rc4-bk6/alpha/defconfig/compile/warnings
db/2.6.11-rc4-bk6/alpha/defconfig/compile/errors
db/2.6.11-rc4-bk6/alpha/defconfig/compile/config
db/2.6.11-rc4-bk6/alpha/defconfig/compile/compiler
db/2.6.11-rc4-bk6/alpha/defconfig/compile/version
db/2.6.11-rc4-bk6/alpha/defconfig/compile/num_errors

And rewrote the front-end to access the database. Big pro: you can just rsync results from different places together, as long as they have unique 'testtype' stamps. You can even use git to manage this database. I'll put the beta interface up tomorrow.

-- Jan
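As a concrete example of that "big pro": because each contributor uses a unique 'testtype' stamp, result trees from several builders never collide and can be merged with nothing more than rsync (the hostnames here are invented):

# Merge result trees from several builders into one db/.
for builder in build1.example.org build2.example.org; do
    rsync -a "$builder:kerncomp/db/" db/
done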
From: Darren W. <ds...@ge...> - 2005-05-15 06:16:43
Hi Jan

On Sat, 14 May 2005, Jan Dittmer wrote:
> Darren Williams wrote:
>>> ls arch/*/configs/* gives about 184 different defconfigs. A test run
>>> on my single workstation would take literally ages. Therefore I'd
>>> consider a distributed client/server approach.
>>
>> And not all cross-compilers successfully build the kernel, whereas the
>> native build would succeed. There have been discussions about this on lkml.
>
> Though that should be considered a bug (tm). Different output directory
> (O=) also does not always work.
>
>> In any simulator, LTP or any other thorough testing would take years;
>> we need real hardware. This is my next big project (Autobench/test).
>> This I think would be really cool. I think we could get the resources,
>> we just need the auto stuff. Looking behind me I see idle sparc, ppc,
>> alpha, 386 .....
>
> So let's get started. Well, do you want to coordinate this? I think my
> platform would provide an easy start. I just need to split out the
> server and client part.
>
> What I plan to implement, hopefully till next weekend, is a system which
> discovers new kernel versions and builds a table of kernel/version/arch
> triplets that want to be tested.

It would be nice if you could use git; this has a few advantages:

1. We expose git to more testing.
2. Reporting errors and bugs to lkml can be referenced via the git HEAD.
3. Finding version changes is as simple as keeping track of the git HEAD
   before an update and comparing the new HEAD after the update is
   complete; if there is a difference, then we build and test....

> Against these versions any number of tests can run: compile, run time
> test, whatever, ... and report the results back, without many formal
> requirements. Basically a big storage of test results. Would that work
> out for you?

I think keeping the run-time tests separate from the build is important, since this is more complicated, due to several factors:

1. What if the kernel oopses? We need to be able to reboot the machine.
2. Upon reboot we need to bring the machine up with a working kernel.

This is what my next project, 'Autobench', will be undertaking. This will be a pluggable benchmarking system that can reboot any architecture, run a series of benchmarks/tests, and report results. This is the basic idea. Setting up will take a little time:

- We are just about to move to a new building. I think we should wait until we are settled there and the machines are set up by me, Ian, or some other sucker we can con; this will be about 1-2 months away.
- Second, I'm just about to move to a new position within the group, so my time spent on this will shrink. I don't mind putting in the extra initial hours, though.
- I am just about to start a two-week holiday.

So I think the time frame for machine setup would be 1-2 months; we can start on the code, though.

>>>> But I looked at yours, and you're not using a db at all? You're just
>>>> using flat files to store the results and parse them afterwards?
>>>
>>> Yes ... I think this has many advantages such as compressing and
>>> archiving the logs, and the fact you can simply rsync things around.
>>> I'm not married to it though ...
>>
>> And you do not require a dbm, hey we could store the results in a
>> Git tree (8-0
>
> ... and write git-SELECT to search through them? :) Really, SQL has lots
> of advantages when you store even slightly more complex/relational
> things.

Arhg, I was just having fun. I don't mind the format either, though: KISS.

> Jan

See or talk to you guys in ~two weeks

- dsw
--------------------------------------------------
Darren Williams <dsw AT gelato.unsw.edu.au>
Gelato@UNSW <www.gelato.unsw.edu.au>
--------------------------------------------------
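Darren's third point is essentially a two-line check; a sketch under the assumption that .git/HEAD resolves to the current commit id (true for the symlinked HEADs of that era; a modern git would use 'git rev-parse HEAD'), with run_kerncomp_build as a made-up entry point into the build scripts:

# Record the HEAD, update, and only build when the tree actually moved.
cd linux-2.6 || exit 1
old=$(cat .git/HEAD)
cg-update                    # Cogito pull + merge, as used on this list
new=$(cat .git/HEAD)
if [ "$old" != "$new" ]; then
    echo "tree moved: $old -> $new"
    run_kerncomp_build       # made-up helper
else
    echo "no change since $old, skipping build"
fi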
From: Jan D. <jdi...@pp...> - 2005-05-14 16:44:23
Ian Wienand wrote:
> On Wed, May 11, 2005 at 09:30:49PM +0200, Jan Dittmer wrote:
>> You know about ARMLinux Kautobuild? [1] That's also pretty far
>> developed and does _lots_ of configs. Though arm is somewhat quicker
>> to build than ia64.
>
> Yep, that's another interesting one; they have some nice graphs but
> I'm not sure how useful they are. Again, an RSS feed or email would be
> handy.

I already send myself emails of changes between versions. RSS is on my todo list, though I'd like to change the structure slightly first, as written in the other mail.

>> What are the goals of this kerncomp project anyways? Just collecting
>> resources, or were you planning on developing _the_ kernel compile and
>> runtime test application, or just not really decided yet?
>
> As a minimum I hoped we could keep interesting scripts together to
> save someone else who wanted to set up an autobuild doing the whole
> thing over again.
>
> I wonder if there is demand for one combined project or if everyone
> has domain-specific issues they like to keep in their own project?
> For example, we're very interested in making sure our various patches
> apply to the absolute latest kernel. Fixing problems the next morning
> is *much* better than waiting for the next -rc release, since you'll
> just end up with all your problems combined. For others this might
> not be such an issue.

That is covered mostly by -mm, I think, as most breakages get discovered there first before they even remotely hit mainline. Though I don't know how many people test -mm on ia64...

>> Yes, I'm considering that, but I have to do at least a bit of clean-up
>> first. Currently the code state is just too embarrassing.
>
> Ahh, I know the feeling. The good thing about posting embarrassing
> code is that it acts as a catalyst for someone else to do the work for
> you :) As Alan Cox once said, nothing gets a problem fixed faster than
> an obviously incorrect patch; people just can't resist showing you the
> right way!

Well, I can submit my current version if you really want. I just need to write a small README on how to set things up.

Jan
From: Jan D. <jdi...@pp...> - 2005-05-14 16:17:48
Darren Williams wrote:
>>> ls arch/*/configs/* gives about 184 different defconfigs. A test run
>>> on my single workstation would take literally ages. Therefore I'd
>>> consider a distributed client/server approach.
>
> And not all cross-compilers successfully build the kernel, whereas the
> native build would succeed. There have been discussions about this on lkml.

Though that should be considered a bug (tm). Different output directory (O=) also does not always work.

> In any simulator, LTP or any other thorough testing would take years;
> we need real hardware. This is my next big project (Autobench/test).
> This I think would be really cool. I think we could get the resources,
> we just need the auto stuff. Looking behind me I see idle sparc, ppc,
> alpha, 386 .....

So let's get started. Well, do you want to coordinate this? I think my platform would provide an easy start. I just need to split out the server and client part.

What I plan to implement, hopefully till next weekend, is a system which discovers new kernel versions and builds a table of kernel/version/arch triplets that want to be tested. Against these versions any number of tests can run: compile, run-time test, whatever, ... and report the results back, without many formal requirements. Basically a big storage of test results. Would that work out for you?

>>> But I looked at yours, and you're not using a db at all? You're just
>>> using flat files to store the results and parse them afterwards?
>>
>> Yes ... I think this has many advantages such as compressing and
>> archiving the logs, and the fact you can simply rsync things around.
>> I'm not married to it though ...
>
> And you do not require a dbm, hey we could store the results in a
> Git tree (8-0

... and write git-SELECT to search through them? :) Really, SQL has lots of advantages when you store even slightly more complex/relational things.

Jan
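The discovery step Jan plans could be bootstrapped with ketchup, the tree-fetching tool mentioned elsewhere in this thread; note that the -s/--show-latest flag, the tree names, and the queue format below are assumptions to check against ketchup's own help:

# Ask ketchup for the newest version of each tree flavour and queue
# any kernel/version/arch triplet that has no results yet.
for tree in 2.6 2.6-rc 2.6-git 2.6-mm; do
    latest=$(ketchup -s "$tree")      # assumed: print latest version
    for arch in alpha arm i386 ia64 ppc sparc64 x86_64; do
        [ -d "db/$latest/$arch" ] || echo "$latest $arch" >> test-queue
    done
done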
From: Ian W. <ia...@ge...> - 2005-05-12 00:16:17
On Wed, May 11, 2005 at 09:30:49PM +0200, Jan Dittmer wrote:
> You know about ARMLinux Kautobuild? [1] That's also pretty far
> developed and does _lots_ of configs. Though arm is somewhat quicker
> to build than ia64.

Yep, that's another interesting one; they have some nice graphs but I'm not sure how useful they are. Again, an RSS feed or email would be handy.

> What are the goals of this kerncomp project anyways? Just collecting
> resources, or were you planning on developing _the_ kernel compile and
> runtime test application, or just not really decided yet?

As a minimum I hoped we could keep interesting scripts together to save someone else who wanted to set up an autobuild doing the whole thing over again.

I wonder if there is demand for one combined project, or if everyone has domain-specific issues they like to keep in their own project? For example, we're very interested in making sure our various patches apply to the absolute latest kernel. Fixing problems the next morning is *much* better than waiting for the next -rc release, since you'll just end up with all your problems combined. For others this might not be such an issue.

If a few people said they were interested, I think the first step would be some sort of design document that details everything we want it to do, and a rough framework of how it would do it. At this stage, I would take interest as people just sending me their ideas.

> Yes, I'm considering that, but I have to do at least a bit of clean-up
> first. Currently the code state is just too embarrassing.

Ahh, I know the feeling. The good thing about posting embarrassing code is that it acts as a catalyst for someone else to do the work for you :) As Alan Cox once said, nothing gets a problem fixed faster than an obviously incorrect patch; people just can't resist showing you the right way!

-i
From: Darren W. <ds...@ge...> - 2005-05-11 23:15:17
Hi Ian

On Wed, 11 May 2005, Ian Wienand wrote:
> On Wed, May 11, 2005 at 02:32:39PM +0200, Jan Dittmer wrote:
>> But as I can see from your output, you also build if nothing has
>> changed in the git tree (as is the case since -rc4).
>
> Ahh, that might be a bug :) We didn't used to build when nothing
> changed with BK. Darren?

We have always built nightly; it would be trivial to set up kerncomp to only build when things change. I need to check in a few changes before Friday PM, I'll add it to the TODO list.

>> By grabbing the patches from kernel.org, I've definite checkpoints of
>> what was tested, and you can fully rebuild the build environment
>> locally. For grabbing the different trees I use a tool called
>> 'ketchup'.
>
> Good point. BK used to make it really easy to update your tree with
> what was pulled from the logs; we will see how this works out with
> Cogito.

Trivial once again, since any applied change updates the HEAD, which can be written to the logs. It has never really been an issue for me, since I start fixing broken builds at or near the date of the break with the up-to-the-minute GIT/BK tree; so if a fix has been committed to mainline, I get that change, stop fixing, get on with life, and tomorrow's build is all OK. Added to the TODO, since I see problems here.

>> Also the -mm trees have much more experimental stuff in them and
>> are more interesting to track.
>
> Yes; these should definitely be tracked. There have been examples
> where IA64 was broken in -mm for a long time because no one ever tries
> it.

For IA64-centric stuff I think the ia64 testing tree would also be relevant; this tree needs testing before its changes hit any mainline or -mm tree. This is the advantage of GIT: we can point at any branch we want and pull from it, and GIT will manage the merging (in theory). For instance, I cloned a kernel.org Git tree for local use here, which is periodically updated every half hour. The kerncomp GIT tree used to pull from the same kernel.org tree; with a single command (cg-add-branch <branch-name> <URI>) and a small tweak to the kerncomp script (update from branch-name), kerncomp now pulls from our local tree. This is a trivial example.

>> ls arch/*/configs/* gives about 184 different defconfigs. A test run
>> on my single workstation would take literally ages. Therefore I'd
>> consider a distributed client/server approach.

And not all cross-compilers successfully build the kernel, whereas the native build would succeed. There have been discussions about this on lkml.

> For a broad approach like your cross compilers, for sure. For other
> architectures, testing those defconfigs can give you pretty much
> complete coverage.
>
>> That's a bit more generalized. I think it's focussing more on
>> runtime than compile testing.
>
> I'd love to have better runtime testing; for example, running LTP on
> the booted kernel in our simulator. It's one thing for the kernel to
> build, another thing for it to work :)
>
>> I thought about doing runtime testing with qemu or by using a network
>> of different-arch machines. But at home I only have i386 and sparc32,
>> so I considered it rather pointless. And qemu is also not so stable
>> that I'd trust the results.

In any simulator, LTP or any other thorough testing would take years; we need real hardware. This is my next big project (Autobench/test). This I think would be really cool. I think we could get the resources, we just need the auto stuff. Looking behind me I see idle sparc, ppc, alpha, 386 .....

> Luckily IA64 has the ski simulator which, on the whole, is really
> good. You are right; anything is possible with a bunch of machines!
>
>> Mine are the same, and currently very focused on kernel building. I'd
>> have to look over them, and then I could post them if you're interested.
>
> I would be happy to give you a CVS tree in our SF project if you are
> thinking of releasing your code. That way at least resources are
> together.
>
>> But I looked at yours, and you're not using a db at all? You're just
>> using flat files to store the results and parse them afterwards?
>
> Yes ... I think this has many advantages, such as compressing and
> archiving the logs, and the fact you can simply rsync things around.
> I'm not married to it though ...

And you do not require a dbm, hey we could store the results in a Git tree (8-0

> -i

- dsw
--------------------------------------------------
Darren Williams <dsw AT gelato.unsw.edu.au>
Gelato@UNSW <www.gelato.unsw.edu.au>
--------------------------------------------------
From: Jan D. <jdi...@pp...> - 2005-05-11 19:31:09
Ian Wienand wrote:
>> By grabbing the patches from kernel.org, I've definite checkpoints of
>> what was tested, and you can fully rebuild the build environment
>> locally. For grabbing the different trees I use a tool called
>> 'ketchup'.
>
> Good point. BK used to make it really easy to update your tree with
> what was pulled from the logs; we will see how this works out with
> Cogito.

You should at least log the current .git/HEAD when building, to be able to reproduce the tree afterwards.

>> ls arch/*/configs/* gives about 184 different defconfigs. A test run
>> on my single workstation would take literally ages. Therefore I'd
>> consider a distributed client/server approach.
>
> For a broad approach like your cross compilers, for sure. For other
> architectures, testing those defconfigs can give you pretty much
> complete coverage.

You know about ARMLinux Kautobuild? [1] That's also pretty far developed and does _lots_ of configs. Though arm is somewhat quicker to build than ia64.

>> I thought about doing runtime testing with qemu or by using a network
>> of different-arch machines. But at home I only have i386 and sparc32,
>> so I considered it rather pointless. And qemu is also not so stable
>> that I'd trust the results.
>
> Luckily IA64 has the ski simulator which, on the whole, is really
> good. You are right; anything is possible with a bunch of machines!

What are the goals of this kerncomp project anyways? Just collecting resources, or were you planning on developing _the_ kernel compile and runtime test application, or just not really decided yet?

>> Mine are the same, and currently very focused on kernel building. I'd
>> have to look over them, and then I could post them if you're interested.
>
> I would be happy to give you a CVS tree in our SF project if you are
> thinking of releasing your code. That way at least resources are
> together.

Yes, I'm considering that, but I have to do at least a bit of clean-up first. Currently the code state is just too embarrassing.

[1] http://armlinux.simtec.co.uk/kautobuild/

-- Jan
From: Ian W. <ia...@ge...> - 2005-05-11 13:25:54
On Wed, May 11, 2005 at 02:32:39PM +0200, Jan Dittmer wrote:
> But as I can see from your output, you also build if nothing has
> changed in the git tree (as is the case since -rc4).

Ahh, that might be a bug :) We didn't used to build when nothing changed with BK. Darren?

> By grabbing the patches from kernel.org, I've definite checkpoints of
> what was tested, and you can fully rebuild the build environment
> locally. For grabbing the different trees I use a tool called
> 'ketchup'.

Good point. BK used to make it really easy to update your tree with what was pulled from the logs; we will see how this works out with Cogito.

> Also the -mm trees have much more experimental stuff in them and
> are more interesting to track.

Yes; these should definitely be tracked. There have been examples where IA64 was broken in -mm for a long time because no one ever tries it.

> ls arch/*/configs/* gives about 184 different defconfigs. A test run
> on my single workstation would take literally ages. Therefore I'd
> consider a distributed client/server approach.

For a broad approach like your cross compilers, for sure. For other architectures, testing those defconfigs can give you pretty much complete coverage.

> That's a bit more generalized. I think it's focussing more on
> runtime than compile testing.

I'd love to have better runtime testing; for example, running LTP on the booted kernel in our simulator. It's one thing for the kernel to build, another thing for it to work :)

> I thought about doing runtime testing with qemu or by using a network
> of different-arch machines. But at home I only have i386 and sparc32,
> so I considered it rather pointless. And qemu is also not so stable
> that I'd trust the results.

Luckily IA64 has the ski simulator which, on the whole, is really good. You are right; anything is possible with a bunch of machines!

> Mine are the same, and currently very focused on kernel building. I'd
> have to look over them, and then I could post them if you're interested.

I would be happy to give you a CVS tree in our SF project if you are thinking of releasing your code. That way at least resources are together.

> But I looked at yours, and you're not using a db at all? You're just
> using flat files to store the results and parse them afterwards?

Yes ... I think this has many advantages, such as compressing and archiving the logs, and the fact you can simply rsync things around. I'm not married to it though ...

-i
From: Jan D. <jdi...@pp...> - 2005-05-11 12:32:38
Ian Wienand wrote:
> On Wed, May 11, 2005 at 09:23:59AM +0200, Jan Dittmer wrote:
>> I hope you know of my http://l4x.org/k effort? It features the
>> possibility to cross-compile all archs, build different configs, and
>> has a somewhat nice web interface.
>
> That's a great interface! I like the log-on-mouseover bit. An RSS
> feed is something that I've found extremely useful; I can check the
> results when I get in of a morning along with all my other RSS feeds.

It would be trivial to add, as it is database driven. Perhaps I'll find some time in the evening...

>> It does fully automatic defconfig builds on 23 platforms for -git,
>> -rc and -mm releases.
>
> Ok, so you're grabbing the patches from kernel.org? We have found it
> useful to do the pull directly via git (well, it used to be BitKeeper)
> every night to keep right up to date.

But as I can see from your output, you also build if nothing has changed in the git tree (as is the case since -rc4). By grabbing the patches from kernel.org, I've definite checkpoints of what was tested, and you can fully rebuild the build environment locally. For grabbing the different trees I use a tool called 'ketchup'. Also the -mm trees have much more experimental stuff in them and are more interesting to track.

> We also find it useful to build all the defconfigs for IA64; some
> other architectures also have multiple defconfigs for different
> classes of machine. Luckily, for IA64 the defconfig builds probably
> cover > 95% of machines out there (since there are only a few vendors)
> ... the x86 situation is obviously different!

ls arch/*/configs/* gives about 184 different defconfigs. A test run on my single workstation would take literally ages. Therefore I'd consider a distributed client/server approach.

>> What I would really like to see is a client/server architecture where
>> multiple machines (over the inet?) can share the build results, so
>> that one can start offering patch testing, i.e. someone uploads a
>> patch against some version and the network tests the resulting source
>> against all archs and all requested configs. That could be really
>> handy, I can imagine. A full defconfig run on all 23 archs currently
>> takes about 2 hours on my 2.8GHz Xeons.
>
> Yes, that would be very nice; I think OSDL provide something similar
> to that (http://www.osdl.org/lab_activities/kernel_testing/stp/) but
> I've found their interfaces to be a little difficult at times.

That's a bit more generalized; I think it's focussing more on runtime than compile testing. Making such a thing work with non-kernel-related stuff would of course be nice too. I thought about doing runtime testing with qemu or by using a network of different-arch machines. But at home I only have i386 and sparc32, so I considered it rather pointless. And qemu is also not so stable that I'd trust the results.

>> I'll check out your sourcebase soon. My scripts are about 400 lines
>> for the builder and 1100 lines for the web interface, all PHP using
>> postgres as a backend.
>
> Ours is nothing fantastic, just hacked-together shell and PHP code,
> but it works.

Mine are the same, and currently very focused on kernel building. I'd have to look over them, and then I could post them if you're interested. But I looked at yours, and you're not using a db at all? You're just using flat files to store the results and parse them afterwards?

Regards,

-- Jan
From: Ian W. <ia...@ge...> - 2005-05-11 11:30:37
On Wed, May 11, 2005 at 09:23:59AM +0200, Jan Dittmer wrote:
> I hope you know of my http://l4x.org/k effort? It features the
> possibility to cross-compile all archs, build different configs, and
> has a somewhat nice web interface.

That's a great interface! I like the log-on-mouseover bit. An RSS feed is something that I've found extremely useful; I can check the results when I get in of a morning along with all my other RSS feeds.

> It does fully automatic defconfig builds on 23 platforms for -git,
> -rc and -mm releases.

Ok, so you're grabbing the patches from kernel.org? We have found it useful to do the pull directly via git (well, it used to be BitKeeper) every night to keep right up to date.

We also find it useful to build all the defconfigs for IA64; some other architectures also have multiple defconfigs for different classes of machine. Luckily, for IA64 the defconfig builds probably cover > 95% of machines out there (since there are only a few vendors) ... the x86 situation is obviously different!

> What I would really like to see is a client/server architecture where
> multiple machines (over the inet?) can share the build results, so
> that one can start offering patch testing, i.e. someone uploads a
> patch against some version and the network tests the resulting source
> against all archs and all requested configs. That could be really
> handy, I can imagine. A full defconfig run on all 23 archs currently
> takes about 2 hours on my 2.8GHz Xeons.

Yes, that would be very nice; I think OSDL provide something similar to that (http://www.osdl.org/lab_activities/kernel_testing/stp/) but I've found their interfaces to be a little difficult at times.

> I'll check out your sourcebase soon. My scripts are about 400 lines
> for the builder and 1100 lines for the web interface, all PHP using
> postgres as a backend.

Ours is nothing fantastic, just hacked-together shell and PHP code, but it works.

-i
ia...@ge... http://www.gelato.unsw.edu.au