[Kerncomp-devel] Re: [ANNOUNCE] Automated Kernel Build Regression Testing
From: Darren W. <ds...@ge...> - 2005-05-15 06:16:43
Hi Jan

On Sat, 14 May 2005, Jan Dittmer wrote:
> Darren Williams wrote:
>
> >>> ls arch/*/configs/* gives about 184 different defconfigs. A test run
> >>> on my single workstation would take literally ages. Therefore I'd
> >>> consider a distributed client/server approach.
> >
> > And not all cross-compilers successfully build the kernel, whereas the
> > native build would succeed. There have been discussions about this on lkml.
>
> Though that should be considered a bug (tm). Different output directory
> (O=) also does not always work.
>
> > In any simulator, LTP or any other thorough testing would take years; we need
> > real hardware. This is my next big project (Autobench/test). This I think
> > would be really cool. I think we could get the resources, we just need the
> > auto stuff. Looking behind me I see idle sparc, ppc, alpha, 386 .....
> > So let's get started.
>
> Well, do you want to coordinate this? I think my platform would provide an
> easy start. I just need to split out the server and client part.
>
> What I plan to implement, hopefully by next weekend, is a system which
> discovers new kernel versions and builds a table of kernel/version/arch
> triplets that want to be tested.

It would be nice if you could use git; this has a few advantages:

1. We expose Git to more testing.
2. Errors and bugs reported to lkml can be referenced via the Git HEAD.
3. Finding version changes is as simple as keeping track of the Git HEAD
   before an update and comparing it with the new HEAD after the update
   completes; if they differ, we build and test (rough sketch towards the
   end of this mail).

> Against these versions any number of tests can run: compile, run time test,
> whatever, ... and report the results back, without much of formal requirements.
> Basically a big storage of test results. Would that work out for you?

I think keeping the run-time tests separate from the build is important,
since they are more complicated, due to several factors:

1. If the kernel Oopses, we need to be able to reboot the machine.
2. Upon reboot we need to bring the machine up with a working kernel.

This is what my next project, 'Autobench', will be undertaking. It will be a
pluggable benchmarking system that can boot/reboot any architecture, run a
series of benchmarks/tests and report the results. That is the basic idea.

Setting up will take a little time:

- We are just about to move to a new building. I think waiting until we are
  settled there and the machines are set up, by me, Ian or some other sucker
  we can con, this will be about 1-2 months away.
- The second is that I'm just about to move to a new position within the
  group, so my time spent on this will reduce. I don't mind putting in the
  extra initial hours though.
- I am just about to start a 2 week holiday.

So I think the time frame for machine setup would be 1-2 months; we can start
on the code though.

> >>> But I looked at yours, and you're not using a db at all? You're just
> >>> using flat files to store the results and parse them afterwards?
> >>
> >> Yes ... I think this has many advantages such as compressing and
> >> archiving the logs, and the fact you can simply rsync things around.
> >> I'm not married to it though ...
> >
> > And you do not require a dbm, hey we could store the results in a
> > Git tree (8-0
>
> ... and write git-SELECT to search through them? :) Really sql has lots
> of advantages when you store even slightly more complex/relational
> things.

Argh, I was just having fun. I don't mind the format either, though KISS.

>
> Jan
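Purely for illustration, here is a rough sketch in Python of the HEAD-comparison
idea from point 3 above. Everything in it is made up for the example: the clone
path, the state file and the build-and-test helper it calls are placeholders,
not pieces of either of our setups.

    #!/usr/bin/env python
    # Sketch only: update a git clone, compare HEAD before/after, and kick
    # off a build when it moves.  Paths and the build-and-test helper are
    # placeholders, not real components of any existing system.
    import subprocess

    LINUX_DIR = "/home/build/linux"        # assumed location of the clone
    STATE_FILE = "/home/build/last-built"  # HEAD of the last tree we built

    def current_head():
        out = subprocess.run(["git", "rev-parse", "HEAD"], cwd=LINUX_DIR,
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def main():
        try:
            last = open(STATE_FILE).read().strip()
        except OSError:
            last = None                    # first run, nothing built yet
        subprocess.run(["git", "pull", "--ff-only"], cwd=LINUX_DIR, check=True)
        head = current_head()
        if head != last:
            # New kernel version: build/test it and tag the results with
            # the commit id, so any report to lkml can reference the HEAD.
            subprocess.run(["./build-and-test", head], cwd=LINUX_DIR, check=True)
            open(STATE_FILE, "w").write(head + "\n")

    if __name__ == "__main__":
        main()

Run that from cron every so often and point 2 above falls out for free: every
result is tagged with a commit id that an lkml report can reference directly.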
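And on the storage question, just so we are arguing about the same thing, a toy
sqlite sketch (again Python; the schema and names are invented for the example,
not taken from Jan's platform or ours) of the kernel/version/arch style result
table Jan describes:

    # Toy sketch of a result store keyed on (commit, arch, config, test).
    # The schema is invented purely for illustration.
    import sqlite3

    conn = sqlite3.connect("results.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS result (
            commit_id TEXT,   -- git HEAD the run was made against
            arch      TEXT,   -- e.g. i386, ia64, sparc64
            config    TEXT,   -- which defconfig was built
            test      TEXT,   -- 'compile', 'boot', 'ltp', ...
            status    TEXT,   -- 'pass', 'fail', 'oops'
            log       TEXT    -- path to the (compressed) build/test log
        )
    """)
    conn.execute("INSERT INTO result VALUES (?, ?, ?, ?, ?, ?)",
                 ("0123abcd", "ia64", "defconfig", "compile", "pass",
                  "logs/0123abcd-ia64.log.gz"))
    conn.commit()

    # Jan's "git-SELECT" :-)
    for row in conn.execute("SELECT arch, status FROM result WHERE commit_id = ?",
                            ("0123abcd",)):
        print(row)

Flat files plus rsync would carry exactly the same fields, so either back end
works. KISS either way.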
See, or talk to you guys in ~two weeks

- dsw

--------------------------------------------------
Darren Williams <dsw AT gelato.unsw.edu.au>
Gelato@UNSW <www.gelato.unsw.edu.au>
--------------------------------------------------