On Thursday 30 December 2004 05:16, Michael Richardson wrote:
> >>>>> "Blaisorblade" == Blaisorblade <blaisorblade@...> writes:
> >> I'm running 2.4.26-um3. I happen to be running hostfs with NFS
> >> underneath. I get core dumps from my application, and I want to
> >> gdb them outside of the UML.
> >> I had to switch from core dump to NFS (which seems to fail now)
> >> to putting them in /tmp (which is ramdisk), and then move them to
> >> /var/tmp, which is hostfs.
> Blaisorblade> Hmm - you were putting the core dumps inside a hostfs
> Blaisorblade> mount, right? This has been reported to fail badly
> Yes, previously.
> Blaisorblade> with 2.4.26-3um... the hostfs contained in 2.4.24-1um
> Blaisorblade> and 2.4.27-1bs (this tree is from me, on my homepage)
> Blaisorblade> is stable, while the one in 2.4.24-2um and subsequent
> Blaisorblade> releases isn't.
> I'll try it on my next compile.
> >> Is there some way to defeat the caching that is occurring? or
> >> even to just flush it? (other than umount)
> Blaisorblade> Hmmm, I don't know this... probably giving "sync"
> Blaisorblade> inside UML should work. I remember that Jeff said
> Blaisorblade> that he made hostfs asynchronous in those releases on
> Blaisorblade> purpose (for performance, IIRC), and this is not bad.
> I actually wonder if there is really any advantage.
When answering I didn't discuss this, because I wasn't really sure... you can
read the exact message on the diary page of the website.
> I'd like to be convinced.
The performance problem has shown up a few times and been analyzed on the
mailing lists, though it happened with UBD... search for "bonnie++" and
"performance" and you should find your answer...
In short, if the UML kernel executes a blocking FS operation, the whole UML
virtual machine is locked for a while; what must happen instead is that the
operation is performed on behalf of the UML kernel while it continues to
run... this will indeed become possible when we fully use AIO (the code for
this has been written by Jeff; I don't know whether it already works or not).
> It seems that for executables,
> mapping hostfs files straight into the
> guest address space
> wouldn't care.
It's not very clear to me what you mean...
1) whether hostfs data are mapped or copied is not related to the
synchronous/asynchronous question; the idea was born to avoid wasting memory
and double caching by both UML and the host, which happens even if the data
are clean (is this what you mean by the quote below?)
> I'd have thought that for stdio
> things, that any buffering done by guest kernel would be really just
For stdio, there is no difference between running on the host and on the guest kernel...
> glibc has already buffered pretty much everything...
Well, the glibc buffer is a small, per-application thing. The kernel can
buffer tons of data, because its cache is shared by the whole system and the
kernel knows a lot more than glibc does...
2) the UML kernel *will* perform some kind of caching anyhow... we cannot
rewrite the VM to avoid it.
If there are ideas to solve this problem in some non-intrusive way, they are
welcome... however, using mapping from hostfs might already solve most of the
problems that arise from this.
> However, I'd like to be able to turn it off.
This could be done, i.e. some code could be written for it in the future.
For now, both 2.4.27-bs1 and 2.6.9/2.6.9-bb4/2.6.10 don't include the hostfs
However, note that hostfs has never been really synchronous. Try doing a
tail -f inside hostfs on the guest while modifying the file on the host...
this has never worked on any existing UML.
(It could be done using dnotify/inotify, and maybe someone has started
working on it, but that will happen later.)
Paolo Giarrusso, aka Blaisorblade
Linux registered user n. 292729