From: Dan A. <da...@co...> - 2004-04-05 13:14:55
Hello,

Today I ran some dbench2 benchmarks in order to test coLinux's
(http://www.colinux.org) virtual disk I/O performance.

I'm cross-posting this message to the LKML, as I know there are some
benchmarking experts and other people on that list who may find this
interesting.

This is the output from a coLinux 2.4.25 guest VM configured with 128MB
RAM, running on a Linux 2.6.3 (BK) host that has a total of 256MB RAM.
The host machine has a Mobile Intel Celeron CPU (2.20GHz). All
filesystems used are ext3.

colinux:/home/dax# dbench 5 -s -S
5 clients started
   0  62477  10.01 MB/sec
Throughput 10.0026 MB/sec (sync open) (sync dirs) 5 procs

colinux:/home/dax# dbench 5 -s -S
5 clients started
   0  62477  10.43 MB/sec
Throughput 10.4262 MB/sec (sync open) (sync dirs) 5 procs

colinux:/home/dax# dbench 5 -s -S
5 clients started
   0  62477  10.90 MB/sec
Throughput 10.8926 MB/sec (sync open) (sync dirs) 5 procs

I then ran the same thing on the host itself, *without* the coLinux VM
running in the background:

hostile17:~/colinux# dbench 5 -s -S
5 clients started
   0  62477  5.08 MB/sec
Throughput 5.07573 MB/sec (sync open) (sync dirs) 5 procs

hostile17:~/colinux# dbench 5 -s -S
5 clients started
   0  62477  5.13 MB/sec
Throughput 5.12705 MB/sec (sync open) (sync dirs) 5 procs

The VM shows better results than the host. What gives? Perhaps it is
because of the combination of the host's and guest's buffer caches? I'd
like to know about more precise benchmarking methods for VMs.

--
Dan Aloni
Cooperative Linux, lead developer
da...@co...
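The "(sync open) (sync dirs)" tags in the output come from dbench's -s
and -S flags: data files are opened synchronously and directory
operations are synced. A minimal sketch of what a sync open implies for
each write (illustrative C only, assuming -s maps to O_SYNC; the file
name is made up):

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Open with O_SYNC, as dbench -s is assumed to do: every write
     * must reach stable storage before it returns. */
    int fd = open("testfile", O_CREAT | O_WRONLY | O_SYNC, 0644);
    char buf[4096] = {0};

    if (fd < 0)
        return 1;

    /* On bare metal this blocks until the disk acknowledges the
     * data; in a VM it only has to satisfy the guest's virtual
     * block device, whose backing store may be the host's cache. */
    write(fd, buf, sizeof(buf));
    close(fd);
    return 0;
}

On bare metal such a write cannot return until the disk acknowledges
it; inside a VM it only has to reach whatever the guest considers "the
disk", which is the crux of the question below.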
From: Christoph H. <hc...@in...> - 2004-04-05 13:31:02
> The VM shows better results than the host. What gives? Perhaps
> it is because of the combination of the host's and guest's buffer
> caches? I'd like to know about more precise benchmarking methods
> for VMs.

How are the virtual disks for the VM implemented? If you're doing
direct I/O these numbers are indeed strange. If not, OTOH, that's
expected, because even synchronous I/O in the guest is actually async,
which makes it a lot faster.
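"Direct I/O" here would mean the VM opens its host-side backing file
with O_DIRECT, bypassing the host page cache so that a guest sync write
really reaches the platter. A hedged sketch of that host-side open (the
file name is made up; O_DIRECT requires block-aligned buffers):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical host-side open of a VM backing file.  With
     * O_DIRECT the host page cache is bypassed; without it, the
     * cache absorbs guest I/O and inflates guest "sync" numbers. */
    int fd = open("disk.img", O_RDWR | O_DIRECT);
    void *buf;

    /* O_DIRECT requires buffers aligned to the device block size. */
    if (fd < 0 || posix_memalign(&buf, 512, 4096) != 0)
        return 1;

    pread(fd, buf, 4096, 0);  /* served by the disk, not the cache */

    free(buf);
    close(fd);
    return 0;
}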
From: Dan A. <da...@co...> - 2004-04-05 14:05:13
On Mon, Apr 05, 2004 at 02:30:56PM +0100, Christoph Hellwig wrote:
> > The VM shows better results than the host. What gives? Perhaps
> > it is because of the combination of the host's and guest's buffer
> > caches? I'd like to know about more precise benchmarking methods
> > for VMs.
>
> How are the virtual disks for the VM implemented? If you're doing
> direct I/O these numbers are indeed strange. If not, OTOH, that's
> expected, because even synchronous I/O in the guest is actually
> async, which makes it a lot faster.

The virtual block device driver in coLinux, named cobd, is synchronous
with the host OS's highest-level read()/write() functions. For a READ
block I/O request in the guest, for example, filp->f_op->read() is
called on an open 'struct file' in the host. If the call blocks, the
entire guest VM blocks on it.

So, by this design, any type of I/O in the guest means synchronous I/O
in the host, unless the data is already in the guest's buffer cache.
It's not the implementation I'm planning to stick with, but it sure was
very easy to implement.

BTW, the block device on the host side can be a file or any device that
exposes read()/write() interfaces to userspace. In this benchmarking
case it is a 3GB file that holds an image of an ext3 filesystem.

--
Dan Aloni
da...@co...
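A sketch of the read path Dan describes, in 2.4-era kernel style. The
function name, the set_fs() dance, and the error handling are
assumptions; only the f_op->read() call on the host's open
'struct file' comes from his description:

#include <linux/fs.h>
#include <linux/errno.h>
#include <asm/uaccess.h>

/* Hypothetical cobd helper: service a guest READ request with a
 * synchronous read() on the host-side backing file. */
static int cobd_host_read(struct file *host_file, char *buf,
                          size_t count, loff_t offset)
{
        mm_segment_t old_fs = get_fs();
        ssize_t ret;

        /* The buffer lives in kernel space, so lift the usual
         * user-pointer checks around the VFS call. */
        set_fs(KERNEL_DS);
        ret = host_file->f_op->read(host_file, buf, count, &offset);
        set_fs(old_fs);

        /* If this read blocks, the entire guest VM blocks with it. */
        return (ret == (ssize_t)count) ? 0 : -EIO;
}

Anything already in the host page cache returns immediately from
f_op->read(), and writes land in the host cache rather than on disk,
which is what lets guest "sync" throughput beat bare metal.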
From: Eyal L. <gnu...@ya...> - 2004-04-05 20:11:47
Hey, I stumbled upon this when checking my mail :)

I think the reason may be that Windows is using the disks better and
making access faster. Perhaps DMA acceleration or some other feature is
turned off on the Linux host side, making disk access slower on the
Linux side.

Then again, that could only explain the first run of the benchmark and
not the cached runs. Maybe some bug in Linux's caching/buffering?

--- Dan Aloni <da...@co...> wrote:
> The VM shows better results than the host. What gives? Perhaps
> it is because of the combination of the host's and guest's buffer
> caches? I'd like to know about more precise benchmarking methods
> for VMs.
From: Jeff W. <Kaz...@ce...> - 2004-04-05 20:53:17
> I think the reason may be that Windows is using the disks better and
> making access faster. Perhaps DMA acceleration or some other feature
> is turned off on the Linux host side, making disk access slower on
> the Linux side.

What does Windows have to do with it? He's running Linux on Linux.

--
Jeff Woods <kaz...@ce...>
From: keksov <ke...@gm...> - 2004-04-05 21:03:30
Hi,

Try using hdparm under "real" Linux to tune HDD performance. Something
like:

hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda

First run plain hdparm /dev/hda; maybe it's already tweaked. Use
hdparm -tT /dev/hda to roughly test the disk's I/O. Be warned! hdparm
may... well, crash your system... Switch to single user mode first, and
put the right line into rc.d/* once you're satisfied. Sure, don't
forget to man hdparm ;)

Regards, Dim
From: Dan A. <da...@co...> - 2004-04-05 22:34:25
On Mon, Apr 05, 2004 at 01:11:39PM -0700, Eyal Lotem wrote:
> I think the reason may be that Windows is using the
> disks better and making access faster. Perhaps DMA
> acceleration or some other feature is turned off on
> the Linux host side, making disk access slower on the
> Linux side.

No Windows was involved in these benchmarks in any way. I ran coLinux
on Linux.

--
Dan Aloni
da...@co...
From: Ian C. B. <ia...@bl...> - 2004-04-06 13:46:09
On Tue, Apr 06, 2004 at 12:22:56AM +0200, Dan Aloni wrote:
> On Mon, Apr 05, 2004 at 01:11:39PM -0700, Eyal Lotem wrote:
> > I think the reason may be that Windows is using the
> > disks better and making access faster. Perhaps DMA
> > acceleration or some other feature is turned off on
> > the Linux host side, making disk access slower on the
> > Linux side.
>
> No Windows was involved in these benchmarks in any way. I ran coLinux
> on Linux.

You ran coLinux on a Linux host? Perhaps I've missed something on the
list... is there a native Linux kernel port now? An alternative to User
Mode Linux is a rather big thing for me.

- Ian C. Blenke <ia...@bl...>
From: Dan A. <da...@co...> - 2004-04-06 14:06:41
On Tue, Apr 06, 2004 at 09:45:49AM -0400, Ian C. Blenke wrote:
> You ran coLinux on a Linux host? Perhaps I've missed something on the
> list... is there a native Linux kernel port now? An alternative to
> User Mode Linux is a rather big thing for me.

Yes, it's an alternative to User Mode Linux, though it's still a bit
early and doesn't have the wide range of support tools and nifty stuff
that UML has.

--
Dan Aloni
da...@co...
From: Steven E. <ste...@ya...> - 2004-04-05 23:22:02
Hello All,

If you want to run Linux and keep your Windows device drivers, then
this message is for you. I just wanted to give all of you an update on
our work to get coLinux working on ReactOS.

Where it stands right now, it looks like ReactOS will be able to run
coLinux quite soon. We still have some issues with running Cygwin
applications, so we need some help either fixing that, or we need to
see if the coLinux daemon can be built linked against msvcrt rather
than Cygwin. Other than this, the only showstopper we have at the
moment is that our TCP/IP stack is still under heavy development and
will not be ready to run the X server and Linux network applications
for another few months.

Thanks
Steven