* windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
@ 2010-02-19 22:41 Keith Coleman
2010-02-20 0:08 ` Daniel Stodden
2010-02-22 16:33 ` Stefano Stabellini
0 siblings, 2 replies; 16+ messages in thread
From: Keith Coleman @ 2010-02-19 22:41 UTC (permalink / raw)
To: xen-devel
I am posting this to xen-devel instead of -users because it paints an
incomplete picture that shouldn't be the basis for deciding how to run
production systems.
This graph shows the performance under a webserver disk IO workload at
different queue depths. It compares the 4 main IO methods for windows
guests that will be available in the upcoming xen 4.0.0 and 3.4.3
releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
drivers.
The gplpv and xcp winpv drivers have comparable performance, with gplpv
being slightly faster. Both pv drivers are considerably faster than
pure hvm or stub domains. Stub domain performance was about even with
HVM, which is lower than we were expecting. We tried a different cpu
pinning in "Stubdom B" with little impact.
Keith Coleman
[-- Attachment #2: comparison-iops-3.png --]
[-- Type: image/png, Size: 53266 bytes --]
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-19 22:41 windows domU disk performance graph comparing hvm vs stubdom vs pv drivers Keith Coleman
@ 2010-02-20 0:08 ` Daniel Stodden
2010-02-20 0:50 ` Keith Coleman
2010-02-22 16:33 ` Stefano Stabellini
1 sibling, 1 reply; 16+ messages in thread
From: Daniel Stodden @ 2010-02-20 0:08 UTC (permalink / raw)
To: Keith Coleman; +Cc: xen-devel@lists.xensource.com
On Fri, 2010-02-19 at 17:41 -0500, Keith Coleman wrote:
> This graph shows the performance under a webserver disk IO workload at
> different queue depths. It compares the 4 main IO methods for windows
> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
> drivers.
Cool, thanks. If I may ask, what exactly did you run?
> The gplpv and xcp winpv drivers have comparable performance with gplpv
> being slightly faster. Both pv drivers are considerably faster than
> pure hvm or stub domains. Stub domain performance was about even with
> HVM which is lower than we were expecting. We tried a different cpu
> pinning in "Stubdom B" with little impact.
Is this an SMP dom0? A single guest?
Daniel
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-20 0:08 ` Daniel Stodden
@ 2010-02-20 0:50 ` Keith Coleman
2010-02-20 1:46 ` Daniel Stodden
0 siblings, 1 reply; 16+ messages in thread
From: Keith Coleman @ 2010-02-20 0:50 UTC (permalink / raw)
To: Daniel Stodden; +Cc: xen-devel@lists.xensource.com
On Fri, Feb 19, 2010 at 7:08 PM, Daniel Stodden
<daniel.stodden@citrix.com> wrote:
> On Fri, 2010-02-19 at 17:41 -0500, Keith Coleman wrote:
>
>> This graph shows the performance under a webserver disk IO workload at
>> different queue depths. It compares the 4 main IO methods for windows
>> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
>> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
>> drivers.
>
> Cool, thanks. If I may ask, what exactly did you run?
iometer
>> The gplpv and xcp winpv drivers have comparable performance with gplpv
>> being slightly faster. Both pv drivers are considerably faster than
>> pure hvm or stub domains. Stub domain performance was about even with
>> HVM which is lower than we were expecting. We tried a different cpu
>> pinning in "Stubdom B" with little impact.
>
> Is this an SMP dom0? A single guest?
Dual-core server with dom0 pinned to core 0 and a single domU pinned
to core 1. The stubdom was pinned to core 0, then to core 1.
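(Roughly like this, assuming xm, two physical cores, and that the stubdom
shows up as <guest>-dm; commands are illustrative, not the exact ones used:

    # dom0 kept on core 0 (e.g. dom0_max_vcpus=1 dom0_vcpus_pin on the Xen
    # command line, or explicitly):
    xm vcpu-pin Domain-0 all 0
    # HVM guest pinned to core 1 via its config file:
    cpus = "1"
    # stubdom pinned to core 0 for "Stubdom A", then to core 1 for "Stubdom B":
    xm vcpu-pin windows-dm all 0
    xm vcpu-pin windows-dm all 1
)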
Keith Coleman
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-20 0:50 ` Keith Coleman
@ 2010-02-20 1:46 ` Daniel Stodden
0 siblings, 0 replies; 16+ messages in thread
From: Daniel Stodden @ 2010-02-20 1:46 UTC (permalink / raw)
To: Keith Coleman; +Cc: xen-devel@lists.xensource.com
On Fri, 2010-02-19 at 19:50 -0500, Keith Coleman wrote:
> On Fri, Feb 19, 2010 at 7:08 PM, Daniel Stodden
> <daniel.stodden@citrix.com> wrote:
> > On Fri, 2010-02-19 at 17:41 -0500, Keith Coleman wrote:
> >
> >> This graph shows the performance under a webserver disk IO workload at
> >> different queue depths. It compares the 4 main IO methods for windows
> >> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
> >> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
> >> drivers.
> >
> > Cool, thanks. If I may ask, what exactly did you run?
>
> iometer
>
> >> The gplpv and xcp winpv drivers have comparable performance with gplpv
> >> being slightly faster. Both pv drivers are considerably faster than
> >> pure hvm or stub domains. Stub domain performance was about even with
> >> HVM which is lower than we were expecting. We tried a different cpu
> >> pinning in "Stubdom B" with little impact.
> >
> > Is this an SMP dom0? A single guest?
>
> Dual core server with dom0 pinned to core 0 and a single domU pinned
> to core 1. Stubdom was pinned to core 0 then core 1.
I don't see why stubdom would be faster in either configuration. Once
you're through DM emulation, there are plenty of cycles to spend waiting
for I/O completion, so dom0 won't mind spending them on qemu either.
Daniel
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-19 22:41 windows domU disk performance graph comparing hvm vs stubdom vs pv drivers Keith Coleman
2010-02-20 0:08 ` Daniel Stodden
@ 2010-02-22 16:33 ` Stefano Stabellini
2010-02-22 17:14 ` Keith Coleman
1 sibling, 1 reply; 16+ messages in thread
From: Stefano Stabellini @ 2010-02-22 16:33 UTC (permalink / raw)
To: Keith Coleman; +Cc: xen-devel@lists.xensource.com
On Fri, 19 Feb 2010, Keith Coleman wrote:
> I am posting this to xen-devel instead of -users because it paints an
> incomplete picture that shouldn't be the basis for deciding how to run
> production systems.
>
> This graph shows the performance under a webserver disk IO workload at
> different queue depths. It compares the 4 main IO methods for windows
> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
> drivers.
>
> The gplpv and xcp winpv drivers have comparable performance with gplpv
> being slightly faster. Both pv drivers are considerably faster than
> pure hvm or stub domains. Stub domain performance was about even with
> HVM which is lower than we were expecting. We tried a different cpu
> pinning in "Stubdom B" with little impact.
>
What disk backend are you using?
If you are using a raw file, it is worth trying with an LVM volume as well.
Also, are you using blktap or blktap2?
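(In other words, which flavour of disk line in the guest config; names below
are illustrative, and the tap2 syntax differs slightly between versions:

    disk = [ 'file:/var/images/win.img,hda,w' ]      # raw file via loopback
    disk = [ 'tap:aio:/var/images/win.img,hda,w' ]   # raw file via blktap
    disk = [ 'tap2:aio:/var/images/win.img,hda,w' ]  # raw file via blktap2
    disk = [ 'phy:/dev/vg0/win,hda,w' ]              # LVM volume via blkback
)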
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-22 16:33 ` Stefano Stabellini
@ 2010-02-22 17:14 ` Keith Coleman
2010-02-22 17:27 ` Stefano Stabellini
0 siblings, 1 reply; 16+ messages in thread
From: Keith Coleman @ 2010-02-22 17:14 UTC (permalink / raw)
To: Stefano Stabellini; +Cc: xen-devel@lists.xensource.com
On 2/22/10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On Fri, 19 Feb 2010, Keith Coleman wrote:
>> I am posting this to xen-devel instead of -users because it paints an
>> incomplete picture that shouldn't be the basis for deciding how to run
>> production systems.
>>
>> This graph shows the performance under a webserver disk IO workload at
>> different queue depths. It compares the 4 main IO methods for windows
>> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
>> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
>> drivers.
>>
>> The gplpv and xcp winpv drivers have comparable performance with gplpv
>> being slightly faster. Both pv drivers are considerably faster than
>> pure hvm or stub domains. Stub domain performance was about even with
>> HVM which is lower than we were expecting. We tried a different cpu
>> pinning in "Stubdom B" with little impact.
>>
>
> What disk backend are you using?
phy, LV
Keith Coleman
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-22 17:14 ` Keith Coleman
@ 2010-02-22 17:27 ` Stefano Stabellini
2010-02-22 21:13 ` Keith Coleman
0 siblings, 1 reply; 16+ messages in thread
From: Stefano Stabellini @ 2010-02-22 17:27 UTC (permalink / raw)
To: Keith Coleman; +Cc: xen-devel@lists.xensource.com, Stefano Stabellini
On Mon, 22 Feb 2010, Keith Coleman wrote:
> On 2/22/10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > On Fri, 19 Feb 2010, Keith Coleman wrote:
> >> I am posting this to xen-devel instead of -users because it paints an
> >> incomplete picture that shouldn't be the basis for deciding how to run
> >> production systems.
> >>
> >> This graph shows the performance under a webserver disk IO workload at
> >> different queue depths. It compares the 4 main IO methods for windows
> >> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
> >> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
> >> drivers.
> >>
> >> The gplpv and xcp winpv drivers have comparable performance with gplpv
> >> being slightly faster. Both pv drivers are considerably faster than
> >> pure hvm or stub domains. Stub domain performance was about even with
> >> HVM which is lower than we were expecting. We tried a different cpu
> >> pinning in "Stubdom B" with little impact.
> >>
> >
> > What disk backend are you using?
>
> phy, LV
>
That is strange, because in that configuration I get far better
disk bandwidth with stubdoms than with qemu running in dom0.
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-22 17:27 ` Stefano Stabellini
@ 2010-02-22 21:13 ` Keith Coleman
2010-02-23 13:14 ` Stefano Stabellini
0 siblings, 1 reply; 16+ messages in thread
From: Keith Coleman @ 2010-02-22 21:13 UTC (permalink / raw)
To: Stefano Stabellini; +Cc: xen-devel@lists.xensource.com
On Mon, Feb 22, 2010 at 12:27 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 22 Feb 2010, Keith Coleman wrote:
>> On 2/22/10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
>> > On Fri, 19 Feb 2010, Keith Coleman wrote:
>> >> I am posting this to xen-devel instead of -users because it paints an
>> >> incomplete picture that shouldn't be the basis for deciding how to run
>> >> production systems.
>> >>
>> >> This graph shows the performance under a webserver disk IO workload at
>> >> different queue depths. It compares the 4 main IO methods for windows
>> >> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
>> >> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
>> >> drivers.
>> >>
>> >> The gplpv and xcp winpv drivers have comparable performance with gplpv
>> >> being slightly faster. Both pv drivers are considerably faster than
>> >> pure hvm or stub domains. Stub domain performance was about even with
>> >> HVM which is lower than we were expecting. We tried a different cpu
>> >> pinning in "Stubdom B" with little impact.
>> >>
>> >
>> > What disk backend are you using?
>>
>> phy, LV
>>
>
> That is strange because in that configuration I get a far better
> disk bandwidth with stubdoms compared to qemu running in dom0.
>
What type of test are you doing?
Keith Coleman
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-22 21:13 ` Keith Coleman
@ 2010-02-23 13:14 ` Stefano Stabellini
2010-02-23 14:44 ` Konrad Rzeszutek Wilk
2010-02-23 19:38 ` Pasi Kärkkäinen
0 siblings, 2 replies; 16+ messages in thread
From: Stefano Stabellini @ 2010-02-23 13:14 UTC (permalink / raw)
To: Keith Coleman; +Cc: xen-devel@lists.xensource.com, Stefano Stabellini
On Mon, 22 Feb 2010, Keith Coleman wrote:
> On Mon, Feb 22, 2010 at 12:27 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
> > On Mon, 22 Feb 2010, Keith Coleman wrote:
> >> On 2/22/10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> >> > On Fri, 19 Feb 2010, Keith Coleman wrote:
> >> >> I am posting this to xen-devel instead of -users because it paints an
> >> >> incomplete picture that shouldn't be the basis for deciding how to run
> >> >> production systems.
> >> >>
> >> >> This graph shows the performance under a webserver disk IO workload at
> >> >> different queue depths. It compares the 4 main IO methods for windows
> >> >> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
> >> >> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
> >> >> drivers.
> >> >>
> >> >> The gplpv and xcp winpv drivers have comparable performance with gplpv
> >> >> being slightly faster. Both pv drivers are considerably faster than
> >> >> pure hvm or stub domains. Stub domain performance was about even with
> >> >> HVM which is lower than we were expecting. We tried a different cpu
> >> >> pinning in "Stubdom B" with little impact.
> >> >>
> >> >
> >> > What disk backend are you using?
> >>
> >> phy, LV
> >>
> >
> > That is strange because in that configuration I get a far better
> > disk bandwidth with stubdoms compared to qemu running in dom0.
> >
>
> What type of test are you doing?
>
These are the results I got a while ago running a simple "dd if=/dev/zero
of=file" for 10 seconds:
qemu in dom0: 25.1 MB/s
qemu in a stubdom: 56.7 MB/s
I have just now run tiobench with "--size 256 --numruns 4 --threads 4",
using a raw file as the backend:
qemu in dom0, using blktap2, best run:
File Blk Num Avg Maximum Lat% Lat% CPU
Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
------ ----- --- ------ ------ --------- ----------- -------- -------- -----
256 4096 4 85.82 108.6% 0.615 1534.10 0.00000 0.00000 79
qemu in a stubdom, using phy on a loop device, best run:
File Blk Num Avg Maximum Lat% Lat% CPU
Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
------ ----- --- ------ ------ --------- ----------- -------- -------- -----
256 4096 4 130.49 163.8% 0.345 1459.94 0.00000 0.00000 80
These results are for the "sequential reads" test, and the rate is in
megabytes per second.
If I use phy on a loop device with qemu in dom0, I unexpectedly get much
worse results.
The same thing happens if I use tap:aio with qemu in a stubdom, but this is
kind of expected since blktap is never going to be as fast as blkback.
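(For reference, "phy on a loop device" here means roughly the following,
with the image path illustrative:

    # expose the raw image as a block device in dom0
    losetup /dev/loop0 /var/images/win.img
    # then point blkback at it from the guest config
    disk = [ 'phy:/dev/loop0,hda,w' ]
)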
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-23 13:14 ` Stefano Stabellini
@ 2010-02-23 14:44 ` Konrad Rzeszutek Wilk
2010-02-23 19:39 ` Pasi Kärkkäinen
2010-02-23 20:11 ` Keith Coleman
2010-02-23 19:38 ` Pasi Kärkkäinen
1 sibling, 2 replies; 16+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-02-23 14:44 UTC (permalink / raw)
To: Stefano Stabellini; +Cc: xen-devel@lists.xensource.com, Keith Coleman
> > > That is strange because in that configuration I get a far better
> > > disk bandwidth with stubdoms compared to qemu running in dom0.
> > >
> >
> > What type of test are you doing?
> >
>
> these are the results I got a while ago running a simple "dd if=/dev/zero
> of=file" for 10 seconds:
Keep in mind that iometer (both the Windows and Linux versions) by default
does random seeks with 50% reads and 50% writes. It does have a set of
templates - "web server", "file server", "database server" - that change
the read/write ratio, block size, and queue length. (FYI, you can
use fio to set the same values if you can't get dynamo to compile on
your Linux box.)
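A rough fio equivalent of an iometer-style "web server" pattern would be
something like the job below (100% random reads over a mix of block sizes;
the percentages only approximate the commonly cited access spec, and the
target device and queue depth are just examples):

    fio --name=webserver --filename=/dev/xvdb --ioengine=libaio --direct=1 \
        --rw=randread --iodepth=8 --runtime=60 --time_based \
        --bssplit=512/22:1k/15:2k/8:4k/23:8k/15:16k/2:32k/6:64k/7:128k/1:512k/1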
Keith, which workload did you choose?
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-23 13:14 ` Stefano Stabellini
2010-02-23 14:44 ` Konrad Rzeszutek Wilk
@ 2010-02-23 19:38 ` Pasi Kärkkäinen
1 sibling, 0 replies; 16+ messages in thread
From: Pasi Kärkkäinen @ 2010-02-23 19:38 UTC (permalink / raw)
To: Stefano Stabellini; +Cc: xen-devel@lists.xensource.com, Keith Coleman
On Tue, Feb 23, 2010 at 01:14:47PM +0000, Stefano Stabellini wrote:
> On Mon, 22 Feb 2010, Keith Coleman wrote:
> > On Mon, Feb 22, 2010 at 12:27 PM, Stefano Stabellini
> > <stefano.stabellini@eu.citrix.com> wrote:
> > > On Mon, 22 Feb 2010, Keith Coleman wrote:
> > >> On 2/22/10, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> > >> > On Fri, 19 Feb 2010, Keith Coleman wrote:
> > >> >> I am posting this to xen-devel instead of -users because it paints an
> > >> >> incomplete picture that shouldn't be the basis for deciding how to run
> > >> >> production systems.
> > >> >>
> > >> >> This graph shows the performance under a webserver disk IO workload at
> > >> >> different queue depths. It compares the 4 main IO methods for windows
> > >> >> guests that will be available in the upcoming xen 4.0.0 and 3.4.3
> > >> >> releases: pure HVM, stub domains, gplpv drivers, and xcp winpv
> > >> >> drivers.
> > >> >>
> > >> >> The gplpv and xcp winpv drivers have comparable performance with gplpv
> > >> >> being slightly faster. Both pv drivers are considerably faster than
> > >> >> pure hvm or stub domains. Stub domain performance was about even with
> > >> >> HVM which is lower than we were expecting. We tried a different cpu
> > >> >> pinning in "Stubdom B" with little impact.
> > >> >>
> > >> >
> > >> > What disk backend are you using?
> > >>
> > >> phy, LV
> > >>
> > >
> > > That is strange because in that configuration I get a far better
> > > disk bandwidth with stubdoms compared to qemu running in dom0.
> > >
> >
> > What type of test are you doing?
> >
>
> these are the results I got a while ago running a simple "dd if=/dev/zero
> of=file" for 10 seconds:
>
> qemu in dom0: 25.1 MB/s
> qemu in a stubdom: 56.7 MB/s
>
For dd tests you might want to use "oflag=direct" so the IO goes direct
and is not cached by the domU kernel... also, a longer test would be good.
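(Something along these lines, with size and path just illustrative:

    dd if=/dev/zero of=testfile bs=1M count=4096 oflag=direct
)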
>
>
> I have just run just now "tiobench with --size 256 --numruns 4 --threads
> 4" using a raw file as a backend:
>
>
> qemu in dom0, using blktap2, best run:
>
> File Blk Num Avg Maximum Lat% Lat% CPU
> Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
> ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
> 256 4096 4 85.82 108.6% 0.615 1534.10 0.00000 0.00000 79
>
> qemu in a stubdom, using phy on a loop device, best run:
>
> File Blk Num Avg Maximum Lat% Lat% CPU
> Size Size Thr Rate (CPU%) Latency Latency >2s >10s Eff
> ------ ----- --- ------ ------ --------- ----------- -------- -------- -----
> 256 4096 4 130.49 163.8% 0.345 1459.94 0.00000 0.00000 80
>
>
> These results as for the "sequential reads" test and rate is in
> megabytes per second.
> If I use phy on a loop device with qemu in dom0 unexpectedly I get much
> worse results.
> Same thing happens if I use tap:aio with qemu in a stubdom, but this is
> kind of expected since blktap is never going to be as fast as blkback.
>
Hmm... what's the overall CPU usage difference, measured from the hypervisor?
With "xm top" or so.
-- Pasi
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-23 14:44 ` Konrad Rzeszutek Wilk
@ 2010-02-23 19:39 ` Pasi Kärkkäinen
2010-02-23 20:03 ` Konrad Rzeszutek Wilk
2010-02-23 20:47 ` Marco Sinhoreli
2010-02-23 20:11 ` Keith Coleman
1 sibling, 2 replies; 16+ messages in thread
From: Pasi Kärkkäinen @ 2010-02-23 19:39 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk
Cc: xen-devel@lists.xensource.com, Keith Coleman, Stefano Stabellini
On Tue, Feb 23, 2010 at 09:44:24AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > That is strange because in that configuration I get a far better
> > > > disk bandwidth with stubdoms compared to qemu running in dom0.
> > > >
> > >
> > > What type of test are you doing?
> > >
> >
> > these are the results I got a while ago running a simple "dd if=/dev/zero
> > of=file" for 10 seconds:
>
> Keep in mind that iometer (both the Windows a Linux version) by default
> do random seak of 50% reads and 50% writes. They do have some set of
> templates - "web server", "file server", "database server" that change
> the read/write ratio, size of blocks, and the queue length.(fyi, you can
> use fio to set the same values, if you can't get dynamo to compile on
> your Linux box).
>
Does iometer run correctly on Linux nowadays? I remember it having problems
with more than one outstanding IO...
-- Pasi
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-23 19:39 ` Pasi Kärkkäinen
@ 2010-02-23 20:03 ` Konrad Rzeszutek Wilk
2010-02-23 20:47 ` Marco Sinhoreli
1 sibling, 0 replies; 16+ messages in thread
From: Konrad Rzeszutek Wilk @ 2010-02-23 20:03 UTC (permalink / raw)
To: Pasi Kärkkäinen
Cc: xen-devel@lists.xensource.com, Keith Coleman, Stefano Stabellini
On Tue, Feb 23, 2010 at 09:39:26PM +0200, Pasi Kärkkäinen wrote:
> On Tue, Feb 23, 2010 at 09:44:24AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > > That is strange because in that configuration I get a far better
> > > > > disk bandwidth with stubdoms compared to qemu running in dom0.
> > > > >
> > > >
> > > > What type of test are you doing?
> > > >
> > >
> > > these are the results I got a while ago running a simple "dd if=/dev/zero
> > > of=file" for 10 seconds:
> >
> > Keep in mind that iometer (both the Windows a Linux version) by default
> > do random seak of 50% reads and 50% writes. They do have some set of
> > templates - "web server", "file server", "database server" that change
> > the read/write ratio, size of blocks, and the queue length.(fyi, you can
> > use fio to set the same values, if you can't get dynamo to compile on
> > your Linux box).
> >
>
> Does iometer run correctly on Linux nowadays? I remember it having problems
> with more than 1 outstanding IO..
It seems to work for me. Though if you download it from their website,
expect to hack their module a bit to make it work with newer kernels.
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-23 14:44 ` Konrad Rzeszutek Wilk
2010-02-23 19:39 ` Pasi Kärkkäinen
@ 2010-02-23 20:11 ` Keith Coleman
1 sibling, 0 replies; 16+ messages in thread
From: Keith Coleman @ 2010-02-23 20:11 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: xen-devel@lists.xensource.com, Stefano Stabellini
On Tue, Feb 23, 2010 at 9:44 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>> > > That is strange because in that configuration I get a far better
>> > > disk bandwidth with stubdoms compared to qemu running in dom0.
>> > >
>> >
>> > What type of test are you doing?
>> >
>>
>> these are the results I got a while ago running a simple "dd if=/dev/zero
>> of=file" for 10 seconds:
>
> Keep in mind that iometer (both the Windows a Linux version) by default
> do random seak of 50% reads and 50% writes. They do have some set of
> templates - "web server", "file server", "database server" that change
> the read/write ratio, size of blocks, and the queue length.(fyi, you can
> use fio to set the same values, if you can't get dynamo to compile on
> your Linux box).
>
> Keith, which workload did you choose?
>
The web server workload was used in my graph.
Keith Coleman
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-23 19:39 ` Pasi Kärkkäinen
2010-02-23 20:03 ` Konrad Rzeszutek Wilk
@ 2010-02-23 20:47 ` Marco Sinhoreli
2010-02-23 21:06 ` Keith Coleman
1 sibling, 1 reply; 16+ messages in thread
From: Marco Sinhoreli @ 2010-02-23 20:47 UTC (permalink / raw)
To: xen-devel@lists.xensource.com
Hi Keith,
Do you have a comparable KVM graph? It would be interesting to compare the two.
Cheers
On Tue, Feb 23, 2010 at 4:39 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> On Tue, Feb 23, 2010 at 09:44:24AM -0500, Konrad Rzeszutek Wilk wrote:
> > > > > That is strange because in that configuration I get a far better
> > > > > disk bandwidth with stubdoms compared to qemu running in dom0.
> > > > >
> > > >
> > > > What type of test are you doing?
> > > >
> > >
> > > these are the results I got a while ago running a simple "dd
> if=/dev/zero
> > > of=file" for 10 seconds:
> >
> > Keep in mind that iometer (both the Windows a Linux version) by default
> > do random seak of 50% reads and 50% writes. They do have some set of
> > templates - "web server", "file server", "database server" that change
> > the read/write ratio, size of blocks, and the queue length.(fyi, you can
> > use fio to set the same values, if you can't get dynamo to compile on
> > your Linux box).
> >
>
> Does iometer run correctly on Linux nowadays? I remember it having problems
> with more than 1 outstanding IO..
>
> -- Pasi
--
Marco Sinhoreli
* Re: windows domU disk performance graph comparing hvm vs stubdom vs pv drivers
2010-02-23 20:47 ` Marco Sinhoreli
@ 2010-02-23 21:06 ` Keith Coleman
0 siblings, 0 replies; 16+ messages in thread
From: Keith Coleman @ 2010-02-23 21:06 UTC (permalink / raw)
To: Marco Sinhoreli; +Cc: xen-devel@lists.xensource.com
On Tue, Feb 23, 2010 at 3:47 PM, Marco Sinhoreli <msinhore@gmail.com> wrote:
> Hi Keith,
> Do you have some KVM graph? It will be interesting to compare both.
>
I have not compared it to KVM.
Keith Coleman