You're receiving this email because I believe you might be interested in
hearing about the release of "Xen", a Virtual Machine Monitor (VMM)
for x86 that provides high levels of performance and resource isolation.
We are in the final stages of preparing a public release of the source
code, and have our first 'release candidate' ready. To make it easy
for people to experiment with Xen, we've made a demo CD that packages
Xen / XenoLinux on to a 'live ISO' RedHat 9 installation. You can boot
directly off CD, then start multiple virtual machines, and run Apache,
PostgreSQL, Mozilla, ethereal, etc -- pretty much anything. The CD
also contains a source snapshot, along with all the build tools. The
CD can also be used as an installation disk.
You can download the CD iso image from:
We're planning on releasing the final 1.0 version next week, having
shaken out any issues that may arise in this release candidate. We're
keen to keep this '1.0rc1' version just among 'friends and family', so
would appreciate it if you could avoid distributing it too widely,
posting to Slashdot etc ;-)
To find out more about Xen, the SOSP paper is a good reference:
To join the Xen developer's mailing list and ask questions etc visit:
The project web site should be properly up and running in a week or
so, and will be at:
I enclose a couple of README files from the CD, and hope that you'll
download Xen, have a play, and let us know what you think!
__ __ _ ___
\ \/ /___ _ __ / | / _ \
\ // _ \ '_ \ | || | | |
/ \ __/ | | | | || |_| |
/_/\_\___|_| |_| |_(_)___/
University of Cambridge Computer Laboratory
31 Aug 2003
About the Xen Virtual Machine Monitor
"Xen" is a Virtual Machine Monitor (VMM) developed by the Systems
Research Group of the University of Cambridge Computer Laboratory, as
part of the UK-EPSRC funded XenoServers project.
The XenoServers project aims to provide a "public infrastructure for
global distributed computing", and Xen plays a key part in that,
allowing us to efficiently partition a single machine to enable
multiple independent clients to run their operating systems and
applications in an environment providing protection, resource
isolation and accounting. The project web page contains further
information along with pointers to papers and technical reports:
Xen has since grown into a project in its own right, enabling us to
investigate interesting research issues regarding the best techniques
for virtualizing resources such as the CPU, memory, disk and network.
The project has been bolstered by support from Intel Research
Cambridge, who are now working closely with us. We're also in receipt
of support from Microsoft Research Cambridge to port Windows XP to
run on Xen.
Xen enables multiple operating system images to execute concurrently
on the same hardware with very low performance overhead --- much lower
than commercial offerings for the same x86 platform.
This is achieved by requiring OSs to be specifically ported to run on
Xen, rather than allowing unmodified OS images to be used. Crucially,
only the OS needs to be changed -- all of the user-level application
binaries, libraries etc can run unmodified. Hence the modified OS
kernel can typically just be dropped into any existing OS distribution.
Xen currently runs on the x86 architecture, but could in principle be
ported to others. In fact, it would have been rather easier to write
Xen for pretty much any other architecture as x86 is particularly
tricky to handle. A good description of Xen's design, implementation
and performance is contained in our October 2003 SOSP paper.
We have been working on porting 3 different operating systems to run
on Xen: Linux 2.4, Windows XP, and NetBSD.
The Linux 2.4 port (currently Linux 2.4.22) works very well -- we
regularly use it to host complex applications such as PostgreSQL,
Apache, BK servers etc. It runs every user-space application we've
tried. We refer to our version of Linux ported to run on Xen as
"XenoLinux", although really it's just standard Linux ported to a new
virtual CPU architecture that we call xeno-x86 (abbreviated to just
Unfortunately, the NetBSD port has stalled due to lack of
manpower. We believe most of the hard stuff has already been done, and
are hoping to get the ball rolling again soon. In hindsight, a FreeBSD
4.8 port might have been more useful to the community. Any volunteers? :-)
The Windows XP port is nearly finished. It's running user space
applications and is generally in pretty good shape thanks to some hard
work by the team over the summer. Of course, there are issues with
releasing this code to others. We should be able to release the
source and binaries to anyone that has signed the Microsoft academic
source license, which these days has very reasonable terms. We are in
discussions with Microsoft about the possibility of being able to make
binary releases to a larger user community. Obviously, there are
issues with product activation in this environment which need to be resolved.
So, for the moment, you only get to run multiple copies of Linux on
Xen, but we hope this will change before too long. Even running
multiple copies of the same OS can be very useful, as it provides a
means of containing faults to one OS image, and also for providing
performance isolation between the various OSs, enabling you to either
restrict, or reserve resources for, particular VM instances.
It's also useful for development -- each version of Linux can have
different patches applied, enabling different kernels to be tried
out. For example, the "vservers" patch used by PlanetLab applies
cleanly to our ported version of Linux.
We've successfully booted over 128 copies of Linux on the same machine
(a dual CPU hyperthreaded Xeon box) but we imagine that it would be
more normal to use some smaller number, perhaps 10-20.
Xen is intended to be run on server-class machines, and the current
list of supported hardware very much reflects this, avoiding the need
for us to write drivers for "legacy" hardware. It is likely that some
desktop chipsets will fail to work properly with the default Xen
configuration: specifying 'noacpi' or 'ignorebiostables' when booting
Xen may help in these cases.
Xen requires a "P6" or newer processor (e.g. Pentium Pro, Celeron,
Pentium II, Pentium III, Pentium IV, Xeon, AMD Athlon, AMD Duron).
Multiprocessor machines are supported, and we also have basic support
for HyperThreading (SMT), although this remains a topic for ongoing
research. We're also looking at an AMD x86_64 port (though it should
run on Opterons in 32-bit mode just fine).
Xen can currently use up to 4GB of memory. It's possible for x86
machines to address more than that (64GB), but it requires using a
different page table format (3-level rather than 2-level) that we
currently don't support. Adding 3-level PAE support wouldn't be
difficult, but we'd also need to add support to all the guest
OSs. Volunteers welcome!
We currently support a relatively modern set of network cards: Intel
e1000, Broadcom BCM 57xx (tg3), 3COM 3c905 (3c59x). Adding support for
other NICs that support hardware DMA scatter/gather from half-word
aligned addresses is relatively straightforward, by porting the
equivalent Linux driver. Drivers for a number of other older cards
have recently been added [pcnet32, e100, tulip], but are as yet untested.
Building Xen and XenoLinux
Take a look at the tools/misc/xen-clone script in the BK repository,
which will 'bk clone' the live master tree, and then set about
building everything. The build procedure for xenolinux is slightly
complicated as it's done by running the 'mkbuildtree' script over
a pristine Linux tree to turn it into a xenolinux tree by adding the
Xen-specific files.
The public master BK repository lives at: bk://xen.bkbits.net/xeno.bk
9 Sep 2003
__ __ _ ___
\ \/ /___ _ __ / | / _ \
\ // _ \ '_ \ | || | | |
/ \ __/ | | | | || |_| |
/_/\_\___|_| |_| |_(_)___/
XenDemoCD 1.0 rc1
University of Cambridge Computer Laboratory
18 Sep 2003
Welcome to the Xen Demo CD!
This CD is a standalone demo of the Xen Virtual Machine Monitor (VMM)
and Linux-2.4 OS port (XenoLinux). It runs entirely off the CD,
without requiring hard disk installation. This is achieved using a RAM
disk to store mutable file system data while using the CD for
everything else. The CD can also be used for installing Xen/XenoLinux
to disk, and includes a source code snapshot along with all of the
tools required to build it.
Booting the CD
The Xen VMM is currently fairly h/w specific, but porting new device
drivers is relatively straightforward thanks to Xen's Linux driver
compatibility layer. The current snapshot supports the following hardware:
CPU: Pentium Pro/II/III/IV/Xeon, Athlon (i.e. P6 or newer); SMP supported
IDE: Intel PIIX chipset, others will be PIO only (slow)
SCSI: Adaptec / Dell PERC Raid (aacraid), megaraid, Adaptec aic7xxx
Net: Recommended: Intel e1000, Broadcom BCM57xx (tg3), 3c905 (3c59x)
Tested but require extra copies : pcnet32
Untested and also requires extra copies : Intel e100, tulip
Because of the demo CD's use of RAM disks, make sure you have plenty
of RAM (256MB+).
To try out the Demo, boot from CD (you may need to change your BIOS
configuration to do this), hit a key on either the keyboard or serial
line to pull up the Grub boot menu, then select one of the four boot
options:
Xen / linux-2.4.22
Xen / linux-2.4.22 using cmdline IP configuration
Xen / linux-2.4.22 in "safe mode"
linux-2.4.22 (plain kernel, no Xen)
The last option is a plain linux kernel that runs on the bare machine,
and is included simply to help diagnose driver compatibility
problems. The "safe mode" boot option might be useful if you're having
problems getting Xen to work with your hardware, as it disables various
features such as SMP, and enables some debugging.
If you are going for a command line IP config, hit "e" at
the grub menu, then edit the "ip=" parameters to reflect your setup
e.g. "ip=<ipaddr>::<gateway>:<netmask>::eth0:off". It shouldn't be
necessary to set either the nfs server or hostname
parameters. Alternatively, once XenoLinux has booted you can login and
set up networking with 'dhclient' or 'ifconfig' and 'route' in the
usual way.
To make things easier for yourself, it's worth trying to arrange for an
IP address which is the first in a sequential range of free IP
addresses. It's useful to give each VM instance its own public IP
address (though it is possible to do NAT or use private addresses),
and the configuration files on the CD allocate IP addresses
sequentially for subsequent domains unless told otherwise.
After selecting the kernel to boot, stand back and watch Xen boot,
closely followed by "domain 0" running the XenoLinux kernel. The boot
messages are also sent to the serial line (the baud rate can be set on
the Xen cmdline, but defaults to 115200), which can be very useful for
debugging should anything important scroll off the screen. Xen's
startup messages will look quite familiar as much of the hardware
initialisation (SMP boot, apic setup) and device drivers are derived
If everything is well, you should see the linux rc scripts start a
bunch of standard services, including sshd. Login on the console or
over ssh with either of these accounts:
username: user     root
password: xendemo  xendemo
Once logged in, it should look just like any regular linux box. All
the usual tools and commands should work as per usual. It's probably
best to start by configuring networking, either with 'dhclient' or
manually via ifconfig and route, remembering to edit /etc/resolv.conf
if you want DNS.
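As a concrete sketch, a manual setup from the domain 0 console might look like this (all of the addresses below are placeholders for your own network):

```shell
# Placeholders -- substitute your own addresses.
ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up
route add default gw 192.168.1.1
echo "nameserver 192.168.1.1" > /etc/resolv.conf  # only if you want DNS

# ...or simply let DHCP do all of the above:
dhclient eth0
```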
You can start an X server with 'startx'. It defaults to a conservative
1024x768, but you can edit the script for higher resolutions. The CD
contains a load of standard software. You should be able to start
Apache, PostgreSQL, Mozilla etc in the normal way, but because
everything is running off CD the performance will be very sluggish and
you may run out of memory for the 'tmpfs' file system. You may wish
to go ahead and install Xen/XenoLinux on your hard drive, either
dropping Xen and the XenoLinux kernel down onto a pre-existing Linux
distribution, or using the file systems from the CD (which are based
on RH9). See the installation instructions later in this document.
If your video card requires 'agpgart' then it unfortunately won't yet
work with Xen, and you'll only be able to configure a VGA X
server. We're working on a fix for this for the next release.
If you want to browse the Xen / XenoLinux source, it's all located
under /usr/local/src, complete with BitKeeper repository. We've also
included source code and configuration information for the various
benchmarks we used in the SOSP paper.
Starting other domains
There's a web interface for starting and managing other domains (VMs),
but since we generally use the command line tools ourselves, the
latter are probably rather better debugged at present. The key command
is 'xenctl' which
lives in /usr/local/bin and uses /etc/xenctl.xml for its default
configuration. Run 'xenctl' without any arguments to get a help
message. Note that xenctl is a java front end to various underlying
internal tools written in C (xi_*). Running off CD, it seems to take
an age to start...
Anyway, the first thing to do is to set up a window in which you will
receive console output from other domains. Console output will arrive
as UDP packets destined for 169.254.1.0, so it's necessary to set up an
alias on eth0. The easiest way to do this is to run:
This also inserts a few NAT rules into "domain0", in case you'll be
starting other domains without their own IP addresses. Alternatively,
just do "ifconfig eth0:0 169.254.1.0 up". NB: The intention is that in
future Xen will do NAT itsel (actually RSIP), but this is part of a
larger work package that isn't stable enough to release.
Next, run the xen UDP console displayer:
As mentioned above, xenctl uses /etc/xenctl.xml as its default
configuration. The /etc directory contains two different configs
depending on whether you want to use NAT, or multiple sequential
external IPs
(it's possible to override any of the parameters on the command line
if you want to set specific IPs, etc).
The default configuration file supports NAT. To change to use multiple IPs:
cp /etc/xenctl.xml-publicip /etc/xenctl.xml
A sequence of commands must be given to xenctl to start a new
domain. First a new domain must be created, which requires specifying
the initial memory allocation, the kernel image to use, and the kernel
command line. As well as the root file system details, you'll need to
set the IP address on the command line: since Xen currently doesn't
support a virtual console for domains >1, you won't be able to log in
to your new domain unless you've got networking configured and an sshd
running! (using dhcp for new domains should work too).
After creating the domain, xenctl must be used to grant the domain
access to other resources such as physical or virtual disk partitions.
Then, the domain must be started.
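As a rough sketch, the sequence might look like the following; only 'xenctl domain new' and its "-4" IP option appear elsewhere in this document, so the other subcommand and option names are illustrative guesses (run 'xenctl' with no arguments for the real syntax):

```shell
# Illustrative only -- subcommand and option names below are guesses.
# 1. Create the domain: memory allocation, kernel image, and kernel
#    cmdline (root device plus an ip= setting so you can ssh in).
xenctl domain new -mem 64 -image /boot/xenolinux.gz \
    -cmdline "root=/dev/xvda1 ro ip=:::::eth0:dhcp"
# 2. Grant the new domain access to its disk partition(s).
xenctl domain grant ...
# 3. Start it.
xenctl domain start ...
```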
These commands can be entered manually, but for convenience, xenctl
will also read them from a script and infer which domain number you're
referring to (-nX). To use the sample script:
xenctl script -f/etc/xen-mynewdom [NB: no space after the -f]
You should see the domain booting on your xen_read_console window.
The xml defaults start another domain running off the CD, using a
separate RAM-based file system for mutable data in root (just like
domain 0).
The new domain is started with a '4' on the kernel command line to
tell 'init' to go to runlevel 4 rather than the default of 3. This is
done simply to suppress a bunch of harmless error messages that would
otherwise occur when the new (unprivileged) domain tried to access
physical hardware resources: setting the hwclock or system font,
running gpm, etc.
After it's booted, you should be able to ssh into your new domain. If
you went for a NATed address, from domain 0 you should be able to ssh
into '169.254.1.X' where X is the domain number. If you ran the
xen_enable_nat script, a bunch of port redirects have been installed
to enable you to ssh in to other domains remotely. To access the new
virtual machine remotely, use:
ssh -p2201 root@... # use 2202 for domain 2 etc.
If you configured the new domain with its own IP address, you should
be able to ssh into it directly.
"xenctl domain list" provides status information about running domains,
though is currently only allowed to be run by domain 0. It accesses
/proc/xeno/domains to read this information from Xen. You can also use
xenctl to 'stop' (pause) a domain, or 'kill' a domain. You can either
kill it nicely by sending a shutdown event and waiting for it to
terminate, or blow the sucker away with extreme prejudice.
If you want to configure a new domain differently, type 'xenctl' to
get a list of arguments, e.g. at the 'xenctl domain new' command line
use the "-4" option to set a diffrent IPv4 address.
xenctl can be used to set the new kernel's command line, and hence
determine what it uses as a root file system, etc. Although the default
is to boot in the same manner that domain0 did (using the RAM-based
file system for root and the CD for /usr) it's possible to configure any
of the following possibilities, for example:
* initrd=/boot/initrd init=/linuxrc
boot using an initial ram disk, executing /linuxrc (as per this CD)
* root=/dev/hda3 ro
boot using a standard hard disk partition as root
* root=/dev/xvda1 ro
boot using a pre-configured 'virtual block device' that will be
attached to a virtual disk that previously has had a file system
installed on it.
* root=/dev/nfs nfsroot=/path/on/server ip=<blah_including server_IP>
Boot using an NFS mounted root file system. This could be from a
remote NFS server, or from an NFS server running in another
domain. The latter is rather a useful option.
A typical setup might be to allocate a standard disk partition for
each domain and populate it with files. To save space, having a shared
read-only usr partition might make sense.
Alternatively, you can use 'virtual disks', which are stored as files
within a custom file system. "xenctl partitions add" can be used to
'format' a partition with the file system, and then virtual disks can
be created with "xenctl vd create". Virtual disks can then be attached
to a running domain as a 'virtual block device' using "xenctl vdb
create". The virtual disk can optionally be partitioned (e.g. "fdisk
/dev/xvda") or have a file system created on it directly (e.g. "mkfs
-t ext3 /dev/xvda"). The virtual disk can then be accessed by a
virtual block device associated with another domain, and even used as
a boot device.
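The subcommands named above chain together roughly as follows (the xenctl subcommand names are from this document, but their arguments are omitted; run each without arguments for its usage):

```shell
xenctl partitions add ...  # 'format' a real partition to hold virtual disks
xenctl vd create ...       # carve a virtual disk out of that partition
xenctl vdb create ...      # attach it to a domain as a virtual block device
# Inside the domain, treat /dev/xvda like any disk:
mkfs -t ext3 /dev/xvda     # make a file system on it directly...
fdisk /dev/xvda            # ...or partition it first
```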
Both virtual disks and real partitions should only be shared between
domains in a read-only fashion otherwise the linux kernels will
obviously get very confused as the file system structure may change
underneath them (having the same partition mounted rw twice is a sure
fire way to cause irreparable damage)! If you want read-write
sharing, export the directory to other domains via NFS from domain0.
If you have problems booting Xen, there are a number of boot parameters
that may be able to help diagnose problems:
ignorebiostables Disable parsing of BIOS-supplied tables. This may
help with some chipsets that aren't fully supported
by Xen. If you specify this option then ACPI tables are
also ignored, and SMP support is disabled.
nosmp Disable SMP support.
This option is implied by 'ignorebiostables'.
noacpi Disable ACPI tables, which confuse Xen on some chipsets.
This option is implied by 'ignorebiostables'.
watchdog Enable NMI watchdog which can report certain failures.
noht Disable Hyperthreading.
ifname=ethXX Select which Ethernet interface to use.
ifname=dummy Don't use any network interface.
ser_baud=xxx Set serial line baud rate for console.
dom0_mem=xxx Set the initial amount of memory for domain0.
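For example, a Grub 'kernel' line combining several of these parameters for a troublesome desktop chipset might read (an illustrative combination, not a recommendation):

```shell
kernel /boot/image.gz ignorebiostables watchdog ser_baud=9600 dom0_mem=131072
```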
It's probably a good idea to join the Xen developer's mailing list on
About The Xen Demo CD
The purpose of the Demo CD is to distribute a snapshot of Xen's
source, and simultaneously provide a convenient means for people to
get experience playing with Xen without needing to install
it on their hard drive. If you decide to install Xen/XenoLinux you can
do so simply by following the installation instructions below -- which
essentially involves copying the contents of the CD on to a suitably
formatted disk partition, and then installing or updating the Grub
boot loader.
This is a bootable CD that loads Xen, and then a Linux 2.4.22 OS image
ported to run on Xen. The CD contains a copy of a file system based on
the RedHat 9 distribution that is able to run directly off the CD
("live ISO"), using a "tmpfs" RAM-based file system for root (/etc
/var etc). Changes you make to the tmpfs will obviously not be
persistent across reboots!
Because of the use of a RAM-based file system for root, you'll need
plenty of memory to run this CD -- something like 96MB per VM. This is
not a restriction of Xen : once you've installed Xen, XenoLinux and
the file system images on your hard drive you'll find you can boot VMs
in just a few MBs.
The CD contains a snapshot of the Xen and XenoLinux code base that we
believe to be pretty stable, but lacks some of the features that are
currently still work in progress e.g. OS suspend/resume to disk, and
various memory management enhancements to provide fast inter-OS
communication and sharing of memory pages between OSs. We'll release
newer snapshots as required, making use of a BitKeeper repository
hosted on http://xen.bkbits.net (follow instructions from the project
home page). We're obviously grateful to receive any bug fixes or
other code you can contribute. We suggest you join the
xen-devel@... mailing list.
Installing from the CD
If you're installing Xen/XenoLinux onto an existing linux file system
distribution, just copy the Xen VMM (/boot/image.gz) and XenoLinux
kernels (/boot/xenolinux.gz), then modify the Grub config
(/boot/grub/menu.lst or /boot/grub/grub.conf) on the target system.
It should work on pretty much any distribution.
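Concretely, assuming the target system's root partition is mounted under /mnt/root (a placeholder path), the copy step might look like:

```shell
cp /boot/image.gz /mnt/root/boot/      # the Xen VMM
cp /boot/xenolinux.gz /mnt/root/boot/  # the XenoLinux kernel
# ...then add a menu entry to /mnt/root/boot/grub/menu.lst
# (or grub.conf) along the lines of the example later in this document.
```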
Xen is a "multiboot" standard boot image. Despite being a 'standard',
few boot loaders actually support it. The only two we know of are
Grub, and our modified version of linux kexec (for booting off a
XenoBoot CD -- PlanetLab have adopted the same boot CD approach).
If you need to install grub on your system, you can do so either by
building the Grub source tree
/usr/local/src/grub-0.93-iso9660-splashimage or by copying over all
the files in /boot/grub and then running /sbin/grub and following the
usual grub documentation. You'll then need to edit the Grub menu
configuration.
A typical Grub menu option might look like:
title Xen / XenoLinux 2.4.22
kernel /boot/image.gz dom0_mem=131072 ser_baud=115200 noht
module /boot/xenolinux.gz root=/dev/sda4 ro console=tty0
The first line specifies which Xen image to use, and what command line
arguments to pass to Xen. In this case, we set the maximum amount of
memory to allocate to domain0, and the serial baud rate (the default
is 9600 baud). We could also disable smp support (nosmp) or disable
hyper-threading support (noht). If you have multiple network
interfaces you can use ifname=ethXX to select which one to use. If
your network card is unsupported, use ifname=dummy.
The second line specifies which xenolinux image to use, and the
standard linux command line arguments to pass to the kernel. In this
case, we're configuring the root partition and stating that it should
be mounted read-only (normal practice).
If we were booting with an initial ram disk (initrd), then this would
require a second "module" line.
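For instance, an initrd-based entry might look like this, combining the Grub example above with the initrd=/boot/initrd cmdline shown earlier (the exact multiboot module syntax for your Grub build may differ):

```shell
title Xen / XenoLinux 2.4.22 (initrd)
    kernel /boot/image.gz dom0_mem=131072 ser_baud=115200
    module /boot/xenolinux.gz initrd=/boot/initrd init=/linuxrc console=tty0
    module /boot/initrd
```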
Installing the file systems from the CD
If you haven't an existing Linux installation onto which you can just
drop down the Xen and XenoLinux images, then the file systems on the
CD provide a quick way of doing an install.
Choose one or two partitions, depending on whether you want a separate
/usr or not. Make file systems on it/them e.g.:
mkfs -t ext3 /dev/hda3
[or mkfs -t ext2 /dev/hda3 && tune2fs -j /dev/hda3 if using an old
version of mkfs]
Next, mount the file system(s) e.g.:
mkdir /mnt/root && mount /dev/hda3 /mnt/root
[mkdir /mnt/usr && mount /dev/hda4 /mnt/usr]
To install the root file system, simply untar /usr/XenDemoCD/root.tar.gz:
cd /mnt/root && tar -zxpf /usr/XenDemoCD/root.tar.gz
You'll need to edit /mnt/root/etc/fstab to reflect your file system
configuration. Changing the password file (etc/shadow) is probably a
good idea too.
To install the usr file system, copy the /usr file system from the CD,
though leaving out the "XenDemoCD" and "boot" directories:
cd /usr && cp -a X11R6 etc java libexec root src bin dict kerberos local sbin tmp doc include lib man share /mnt/usr
If you intend to boot off these file systems (i.e. use them for
domain 0), then you probably want to copy the /usr/boot directory on
the cd over the top of the current symlink to /boot on your root
filesystem (after deleting the current symlink) i.e.:
cd /mnt/root ; rm boot ; cp -a /usr/boot .
The XenDemoCD directory is only useful if you want to build your own
version of the XenDemoCD (see below).
Debugging
Xen has a set of debugging features that can be useful to try and
figure out what's going on. Hit 'h' on the serial line or ScrollLock-h
on the keyboard to get a list of supported commands.
If you have a crash you'll likely get a crash dump containing an EIP
(PC) which, along with an 'objdump -d image', can be useful in
figuring out what's happened. Debug a XenoLinux image just as you
would any other Linux kernel.
We supply a handy debug terminal program which you can find in
This should be built and executed on another machine that is connected
via a null modem cable. Documentation is included.
Alternatively, telnet can be used in 'char mode' if the Xen machine is
connected to a serial-port server.
Description of how the XenDemoCD boots
1. Grub is used to load Xen, a XenoLinux kernel, and an initrd (initial
ram disk). [The source of the version of Grub used is in /usr/local/src]
2. the init=/linuxrc command line causes linux to execute /linuxrc in
   the initial ram disk.
3. the /linuxrc file attempts to mount the CD by trying the likely
locations : /dev/hd[abcd].
4. it then creates a 'tmpfs' file system and untars the
'XenDemoCD/root.tar.gz' file into the tmpfs. This hopefully contains
all the files that need to be mutable (this would be so much easier
if Linux supported 'stacked' or union file systems...)
5. Next, /linuxrc uses the pivot_root call to change the root file
system to the tmpfs, with the CD mounted as /usr.
6. It then invokes /sbin/init in the tmpfs and the boot proceeds as normal.
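The six steps above could be sketched as a shell script along these lines (a simplified illustration, not the actual /linuxrc from the CD; paths and mount points are guesses):

```shell
#!/bin/sh
# Simplified sketch of the XenDemoCD /linuxrc boot logic (illustrative).
for dev in /dev/hda /dev/hdb /dev/hdc /dev/hdd; do
    mount -o ro "$dev" /cdrom 2>/dev/null && break  # find and mount the CD
done
mount -t tmpfs none /newroot                 # RAM-based root for mutable files
tar -zxpf /cdrom/XenDemoCD/root.tar.gz -C /newroot
mount --bind /cdrom /newroot/usr             # the CD provides /usr read-only
cd /newroot
pivot_root . initrd                          # make the tmpfs the new root
exec chroot . /sbin/init                     # hand over to the normal boot
```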
Building your own version of the XenDemoCD
The 'live ISO' version of RedHat is based heavily on Peter Anvin's
SuperRescue CD version 2.1.2 and J. McDaniel's Plan-B:
Since Xen uses a "multiboot" image format, it was necessary to change
the bootloader from isolinux to Grub 0.93 with Leonid Lisovskiy's
iso9660 boot patch.
The Xen Demo CD contains all of the build scripts that were used to
create it, so it is possible to 'unpack' the current iso, modify it,
then build a new iso. The procedure for doing so is as follows:
First, mount either the CD, or the iso image of the CD:
mount /dev/cdrom /mnt/cdrom
mount -o loop xendemo-1.0.iso /mnt/cdrom
cd to the directory you want to 'unpack' the iso into then run the
The result is a 'build' directory containing the file system tree
under the 'root' directory. e.g. /local/xendemocd/build/root
To add or remove rpms, it's possible to use 'rpm' with the --root
option to set the path. For more complex changes, it's easiest to boot
a machine using the tree via NFS root. Before doing this, you'll need
to edit fstab to comment out the separate mount of /usr.
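For example, adding a package to the unpacked tree might look like this (the build path matches the example above; the rpm name is a placeholder):

```shell
rpm --root /local/xendemocd/build/root -ivh some-package-1.0.i386.rpm
```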
One thing to watch out for: as part of the CD build process, the
contents of the 'rootpatch' tree gets copied over the existing 'root'
tree replacing various files. The intention of the rootpatch tree is
to contain the files that have been modified from the original RH
distribution (e.g. various /etc files). This was done to make it
easier to upgrade to newer RH versions in the future. The downside of
this is that if you edit an existing file in the root tree you should
check that you don't also need to propagate the change to the
rootpatch tree to avoid it being overwritten.
Once you've made the changes and want to build a new iso, here's the
procedure:
echo '<put_your_name_here>' > Builder
./make.sh put_your_version_id_here >../buildlog 2>&1
This process can take 30 mins even on a fast machine, but you should
eventually end up with an iso image in the build directory.
root - the root of the file system hierarchy as presented to the
       booted system
rootpatch - contains files that have been modified from the standard
RH, and copied over the root tree as part of the build
irtree - the file system tree that will go into the initrd (initial
       ram disk)
work - a working directory used in the build process
usr - this should really be in 'work' as it's created as part of the
       build process. It contains the 'immutable' files that will
be served from the CD rather than the tmpfs containing the
contents of root.tar.gz. Some files that are normally in /etc
or /var that are large and actually unlikely to need changing
have been moved into /usr/root and replaced with links.
9 Sep 2003