* DomU vs Dom0 performance.
@ 2013-09-29 23:22 sushrut shirole
2013-09-30 14:36 ` Konrad Rzeszutek Wilk
2013-09-30 14:50 ` Stefano Stabellini
0 siblings, 2 replies; 10+ messages in thread
From: sushrut shirole @ 2013-09-29 23:22 UTC (permalink / raw)
To: xen-devel
Hi,
I have been doing some disk I/O benchmarking of dom0 and domU (HVM). I ran
into an issue where domU performed better than dom0, so I ran a few
experiments to check whether it is just disk I/O performance.
I have archlinux (kernel 3.5.0) + xen 4.2.2 installed on an Intel Core
i7 Q720 machine. I have also installed archlinux (kernel 3.5.0) in a domU
running on this machine. The domU runs with 8 vcpus. I have allotted both
dom0 and domU 4096M of RAM.
I performed the following experiments to compare the performance of domU
vs dom0.
experiment 1]
1. Created a file.img of 5G
2. Mounted the file with ext2 filesystem.
3. Ran sysbench with following command.
sysbench --num-threads=8 --test=fileio --file-total-size=1G
--max-requests=1000000 prepare
4. Read files into memory
script to read files
<snip>
for i in `ls test_file.*`
do
sudo dd if=./$i of=/dev/zero
done
</snip>
5. Ran sysbench.
sysbench --num-threads=8 --test=fileio --file-total-size=1G
--max-requests=5000000 --file-test-mode=rndrd run
The output I got on dom0 is:
<output>
Number of threads: 8
Extra file open flags: 0
128 files, 8Mb each
1Gb total file size
Block size 16Kb
Number of random requests for random IO: 5000000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Operations performed: 5130322 Read, 0 Write, 0 Other = 5130322 Total
Read 78.283Gb Written 0b Total transferred 78.283Gb (4.3971Gb/sec)
*288165.68 Requests/sec executed*
Test execution summary:
total time: 17.8034s
total number of events: 5130322
total time taken by event execution: 125.3102
per-request statistics:
min: 0.01ms
avg: 0.02ms
max: 55.55ms
approx. 95 percentile: 0.02ms
Threads fairness:
events (avg/stddev): 641290.2500/10057.89
execution time (avg/stddev): 15.6638/0.02
</output>
6. Performed the same experiment on domU; the result I got is:
<output>
Number of threads: 8
Extra file open flags: 0
128 files, 8Mb each
1Gb total file size
Block size 16Kb
Number of random requests for random IO: 5000000
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random read test
Operations performed: 5221490 Read, 0 Write, 0 Other = 5221490 Total
Read 79.674Gb Written 0b Total transferred 79.674Gb (5.9889Gb/sec)
*392489.34 Requests/sec executed*
Test execution summary:
total time: 13.3035s
total number of events: 5221490
total time taken by event execution: 98.7121
per-request statistics:
min: 0.01ms
avg: 0.02ms
max: 49.75ms
approx. 95 percentile: 0.02ms
Threads fairness:
events (avg/stddev): 652686.2500/1494.93
execution time (avg/stddev): 12.3390/0.02
</output>
I was expecting dom0 to perform better than domU, so to debug further I ran
the lmbench microbenchmarks.
Experiment 2] bw_mem benchmark
1. ./bw_mem 1000m wr
dom0 output:
1048.58 3640.60
domU output:
1048.58 4719.32
2. ./bw_mem 1000m rd
dom0 output:
1048.58 5780.56
domU output:
1048.58 6258.32
Experiment 3] lat_syscall benchmark
1. ./lat_syscall write
dom0 output:
Simple write: 1.9659 microseconds
domU output :
Simple write: 0.4256 microseconds
2. ./lat_syscall read
dom0 output:
Simple read: 1.9399 microseconds
domU output :
Simple read: 0.3764 microseconds
3. ./lat_syscall stat
dom0 output:
Simple stat:3.9667 microseconds
domU output :
Simple stat: 1.2711 microseconds
I am not able to understand why domU has performed better than dom0, when
the obvious guess is that dom0 should perform better than domU. I would
really appreciate any help if anyone knows the reason behind this issue.
Thank you,
Sushrut.
* Re: DomU vs Dom0 performance.
2013-09-29 23:22 DomU vs Dom0 performance sushrut shirole
@ 2013-09-30 14:36 ` Konrad Rzeszutek Wilk
2013-09-30 15:46 ` sushrut shirole
2013-09-30 14:50 ` Stefano Stabellini
1 sibling, 1 reply; 10+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-09-30 14:36 UTC (permalink / raw)
To: sushrut shirole; +Cc: xen-devel
On Sun, Sep 29, 2013 at 07:22:14PM -0400, sushrut shirole wrote:
> Hi,
>
> I have been doing some disk I/O benchmarking of dom0 and domU (HVM). I ran
> into an issue where domU performed better than dom0, so I ran a few
> experiments to check whether it is just disk I/O performance.
>
> I have archlinux (kernel 3.5.0) + xen 4.2.2 installed on an Intel Core
> i7 Q720 machine. I have also installed archlinux (kernel 3.5.0) in a domU
> running on this machine. The domU runs with 8 vcpus. I have allotted both
> dom0 and domU 4096M of RAM.
What kind of guest is it? PV or HVM?
>
> I performed following experiments to compare the performance of domU vs
> dom0.
>
> experiment 1]
>
> 1. Created a file.img of 5G
> 2. Mounted the file with ext2 filesystem.
> 3. Ran sysbench with following command.
>
> sysbench --num-threads=8 --test=fileio --file-total-size=1G
> --max-requests=1000000 prepare
>
> 4. Read files into memory
>
> script to read files
>
> <snip>
> for i in `ls test_file.*`
> do
> sudo dd if=./$i of=/dev/zero
> done
> </snip>
>
> 5. Ran sysbench.
>
> sysbench --num-threads=8 --test=fileio --file-total-size=1G
> --max-requests=5000000 --file-test-mode=rndrd run
>
> the output i got on dom0 is
>
> <output>
> Number of threads: 8
>
> Extra file open flags: 0
> 128 files, 8Mb each
> 1Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 5000000
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random read test
>
> Operations performed: 5130322 Read, 0 Write, 0 Other = 5130322 Total
> Read 78.283Gb Written 0b Total transferred 78.283Gb (4.3971Gb/sec)
> *288165.68 Requests/sec executed*
>
> Test execution summary:
> total time: 17.8034s
> total number of events: 5130322
> total time taken by event execution: 125.3102
> per-request statistics:
> min: 0.01ms
> avg: 0.02ms
> max: 55.55ms
> approx. 95 percentile: 0.02ms
>
> Threads fairness:
> events (avg/stddev): 641290.2500/10057.89
> execution time (avg/stddev): 15.6638/0.02
> </output>
>
> 6. Performed same experiment on domU and result I got is
>
> <output>
> Number of threads: 8
>
> Extra file open flags: 0
> 128 files, 8Mb each
> 1Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 5000000
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random read test
>
> Operations performed: 5221490 Read, 0 Write, 0 Other = 5221490 Total
> Read 79.674Gb Written 0b Total transferred 79.674Gb (5.9889Gb/sec)
> *392489.34 Requests/sec executed*
>
> Test execution summary:
> total time: 13.3035s
> total number of events: 5221490
> total time taken by event execution: 98.7121
> per-request statistics:
> min: 0.01ms
> avg: 0.02ms
> max: 49.75ms
> approx. 95 percentile: 0.02ms
>
> Threads fairness:
> events (avg/stddev): 652686.2500/1494.93
> execution time (avg/stddev): 12.3390/0.02
>
> </output>
>
> I was expecting dom0 to perform better than domU, so to debug further I ran
> the lmbench microbenchmarks.
>
> Experiment 2] bw_mem benchmark
>
> 1. ./bw_mem 1000m wr
>
> dom0 output:
>
> 1048.58 3640.60
>
> domU output:
>
> 1048.58 4719.32
>
> 2. ./bw_mem 1000m rd
>
> dom0 output:
> 1048.58 5780.56
>
> domU output:
>
> 1048.58 6258.32
>
>
> Experiment 3] lat_syscall benchmark
>
> 1. ./lat_syscall write
>
> dom0 output:
> Simple write: 1.9659 microseconds
>
> domU output :
> Simple write: 0.4256 microseconds
>
> 2. ./lat_syscall read
>
> dom0 output:
> Simple read: 1.9399 microseconds
>
> domU output :
> Simple read: 0.3764 microseconds
>
> 3. ./lat_syscall stat
>
> dom0 output:
> Simple stat:3.9667 microseconds
>
> domU output :
> Simple stat: 1.2711 microseconds
>
> I am not able to understand why domU has performed better than dom0, when
> the obvious guess is that dom0 should perform better than domU. I would
> really appreciate any help if anyone knows the reason behind this issue.
>
> Thank you,
> Sushrut.
* Re: DomU vs Dom0 performance.
2013-09-29 23:22 DomU vs Dom0 performance sushrut shirole
2013-09-30 14:36 ` Konrad Rzeszutek Wilk
@ 2013-09-30 14:50 ` Stefano Stabellini
2013-09-30 15:45 ` sushrut shirole
1 sibling, 1 reply; 10+ messages in thread
From: Stefano Stabellini @ 2013-09-30 14:50 UTC (permalink / raw)
To: sushrut shirole; +Cc: xen-devel
On Sun, 29 Sep 2013, sushrut shirole wrote:
> Hi,
>
> I have been doing some disk I/O benchmarking of dom0 and domU (HVM). I ran into an issue where domU
> performed better than dom0, so I ran a few experiments to check whether it is just disk I/O performance.
>
> I have archlinux (kernel 3.5.0) + xen 4.2.2 installed on an Intel Core i7 Q720 machine. I have also installed
> archlinux (kernel 3.5.0) in a domU running on this machine. The domU runs with 8 vcpus. I have allotted both
> dom0 and domU 4096M of RAM.
What kind of disk backend are you using? QEMU or blkback?
Could you please post your disk line from your VM config file?
> I performed following experiments to compare the performance of domU vs dom0.
>
> experiment 1]
>
> 1. Created a file.img of 5G
> 2. Mounted the file with ext2 filesystem.
> 3. Ran sysbench with following command.
>
> sysbench --num-threads=8 --test=fileio --file-total-size=1G --max-requests=1000000 prepare
>
> 4. Read files into memory
>
> script to read files
>
> <snip>
> for i in `ls test_file.*`
> do
> sudo dd if=./$i of=/dev/zero
> done
> </snip>
>
> 5. Ran sysbench.
>
> sysbench --num-threads=8 --test=fileio --file-total-size=1G --max-requests=5000000 --file-test-mode=rndrd run
>
> the output i got on dom0 is
>
> <output>
> Number of threads: 8
>
> Extra file open flags: 0
> 128 files, 8Mb each
> 1Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 5000000
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random read test
>
> Operations performed: 5130322 Read, 0 Write, 0 Other = 5130322 Total
> Read 78.283Gb Written 0b Total transferred 78.283Gb (4.3971Gb/sec)
> 288165.68 Requests/sec executed
>
> Test execution summary:
> total time: 17.8034s
> total number of events: 5130322
> total time taken by event execution: 125.3102
> per-request statistics:
> min: 0.01ms
> avg: 0.02ms
> max: 55.55ms
> approx. 95 percentile: 0.02ms
>
> Threads fairness:
> events (avg/stddev): 641290.2500/10057.89
> execution time (avg/stddev): 15.6638/0.02
> </output>
>
> 6. Performed same experiment on domU and result I got is
>
> <output>
> Number of threads: 8
>
> Extra file open flags: 0
> 128 files, 8Mb each
> 1Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 5000000
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random read test
>
> Operations performed: 5221490 Read, 0 Write, 0 Other = 5221490 Total
> Read 79.674Gb Written 0b Total transferred 79.674Gb (5.9889Gb/sec)
> 392489.34 Requests/sec executed
>
> Test execution summary:
> total time: 13.3035s
> total number of events: 5221490
> total time taken by event execution: 98.7121
> per-request statistics:
> min: 0.01ms
> avg: 0.02ms
> max: 49.75ms
> approx. 95 percentile: 0.02ms
>
> Threads fairness:
> events (avg/stddev): 652686.2500/1494.93
> execution time (avg/stddev): 12.3390/0.02
>
> </output>
>
> I was expecting dom0 to perform better than domU, so to debug further I ran the lmbench microbenchmarks.
>
> Experiment 2] bw_mem benchmark
>
> 1. ./bw_mem 1000m wr
>
> dom0 output:
>
> 1048.58 3640.60
>
> domU output:
>
> 1048.58 4719.32
>
> 2. ./bw_mem 1000m rd
>
> dom0 output:
> 1048.58 5780.56
>
> domU output:
>
> 1048.58 6258.32
>
>
> Experiment 3] lat_syscall benchmark
>
> 1. ./lat_syscall write
>
> dom0 output:
> Simple write: 1.9659 microseconds
>
> domU output :
> Simple write: 0.4256 microseconds
>
> 2. ./lat_syscall read
>
> dom0 output:
> Simple read: 1.9399 microseconds
>
> domU output :
> Simple read: 0.3764 microseconds
>
> 3. ./lat_syscall stat
>
> dom0 output:
> Simple stat:3.9667 microseconds
>
> domU output :
> Simple stat: 1.2711 microseconds
>
> I am not able to understand why domU has performed better than dom0, when the obvious guess is that dom0
> should perform better than domU. I would really appreciate any help if anyone knows the reason behind this
> issue.
>
> Thank you,
> Sushrut.
>
>
>
* Re: DomU vs Dom0 performance.
2013-09-30 14:50 ` Stefano Stabellini
@ 2013-09-30 15:45 ` sushrut shirole
0 siblings, 0 replies; 10+ messages in thread
From: sushrut shirole @ 2013-09-30 15:45 UTC (permalink / raw)
To: Stefano Stabellini; +Cc: xen-devel
Hi,
I am using the qemu device model.
The disk configuration line is:
disk = [ 'phy:/dev/sda5,hda,w',
'file:/root/dev/iso/archlinux.iso,hdc:cdrom,r' ]
boot="c"
--
Thanks
On 30 September 2013 14:50, Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> On Sun, 29 Sep 2013, sushrut shirole wrote:
> > Hi,
> >
> > I have been doing some diskIO bench-marking of dom0 and domU (HVM). I
> ran into an issue where domU
> > performed better than dom0. So I ran few experiments to check if it is
> just diskIO performance.
> >
> > I have an archlinux (kernel 3.5.0) + xen 4.2.2) installed on a Intel
> Core i7 Q720 machine. I have also installed
> > archlinux (kernel 3.5.0) in domU running on this machine. The domU runs
> with 8 vcpus. I have alloted both dom0
> > and domu 4096M ram.
>
> What kind of disk backend are you using? QEMU or blkback?
> Could you please post your disk line from your VM config file?
>
>
> > I performed following experiments to compare the performance of domU vs
> dom0.
> >
> > experiment 1]
> >
> > 1. Created a file.img of 5G
> > 2. Mounted the file with ext2 filesystem.
> > 3. Ran sysbench with following command.
> >
> > sysbench --num-threads=8 --test=fileio --file-total-size=1G
> --max-requests=1000000 prepare
> >
> > 4. Read files into memory
> >
> > script to read files
> >
> > <snip>
> > for i in `ls test_file.*`
> > do
> > sudo dd if=./$i of=/dev/zero
> > done
> > </snip>
> >
> > 5. Ran sysbench.
> >
> > sysbench --num-threads=8 --test=fileio --file-total-size=1G
> --max-requests=5000000 --file-test-mode=rndrd run
> >
> > the output i got on dom0 is
> >
> > <output>
> > Number of threads: 8
> >
> > Extra file open flags: 0
> > 128 files, 8Mb each
> > 1Gb total file size
> > Block size 16Kb
> > Number of random requests for random IO: 5000000
> > Read/Write ratio for combined random IO test: 1.50
> > Periodic FSYNC enabled, calling fsync() each 100 requests.
> > Calling fsync() at the end of test, Enabled.
> > Using synchronous I/O mode
> > Doing random read test
> >
> > Operations performed: 5130322 Read, 0 Write, 0 Other = 5130322 Total
> > Read 78.283Gb Written 0b Total transferred 78.283Gb (4.3971Gb/sec)
> > 288165.68 Requests/sec executed
> >
> > Test execution summary:
> > total time: 17.8034s
> > total number of events: 5130322
> > total time taken by event execution: 125.3102
> > per-request statistics:
> > min: 0.01ms
> > avg: 0.02ms
> > max: 55.55ms
> > approx. 95 percentile: 0.02ms
> >
> > Threads fairness:
> > events (avg/stddev): 641290.2500/10057.89
> > execution time (avg/stddev): 15.6638/0.02
> > </output>
> >
> > 6. Performed same experiment on domU and result I got is
> >
> > <output>
> > Number of threads: 8
> >
> > Extra file open flags: 0
> > 128 files, 8Mb each
> > 1Gb total file size
> > Block size 16Kb
> > Number of random requests for random IO: 5000000
> > Read/Write ratio for combined random IO test: 1.50
> > Periodic FSYNC enabled, calling fsync() each 100 requests.
> > Calling fsync() at the end of test, Enabled.
> > Using synchronous I/O mode
> > Doing random read test
> >
> > Operations performed: 5221490 Read, 0 Write, 0 Other = 5221490 Total
> > Read 79.674Gb Written 0b Total transferred 79.674Gb (5.9889Gb/sec)
> > 392489.34 Requests/sec executed
> >
> > Test execution summary:
> > total time: 13.3035s
> > total number of events: 5221490
> > total time taken by event execution: 98.7121
> > per-request statistics:
> > min: 0.01ms
> > avg: 0.02ms
> > max: 49.75ms
> > approx. 95 percentile: 0.02ms
> >
> > Threads fairness:
> > events (avg/stddev): 652686.2500/1494.93
> > execution time (avg/stddev): 12.3390/0.02
> >
> > </output>
> >
> > I was expecting dom0 to performa better than domU, so to debug more into
> it I ram lm_bench microbenchmarks.
> >
> > Experiment 2] bw_mem benchmark
> >
> > 1. ./bw_mem 1000m wr
> >
> > dom0 output:
> >
> > 1048.58 3640.60
> >
> > domU output:
> >
> > 1048.58 4719.32
> >
> > 2. ./bw_mem 1000m rd
> >
> > dom0 output:
> > 1048.58 5780.56
> >
> > domU output:
> >
> > 1048.58 6258.32
> >
> >
> > Experiment 3] lat_syscall benchmark
> >
> > 1. ./lat_syscall write
> >
> > dom0 output:
> > Simple write: 1.9659 microseconds
> >
> > domU output :
> > Simple write: 0.4256 microseconds
> >
> > 2. ./lat_syscall read
> >
> > dom0 output:
> > Simple read: 1.9399 microseconds
> >
> > domU output :
> > Simple read: 0.3764 microseconds
> >
> > 3. ./lat_syscall stat
> >
> > dom0 output:
> > Simple stat:3.9667 microseconds
> >
> > domU output :
> > Simple stat: 1.2711 microseconds
> >
> > I am not able to understand why domU has performed better than domU,
> when obvious guess is that dom0
> > should perform better than domU. I would really appreciate an help if
> anyone knows the reason behind this
> > issue.
> >
> > Thank you,
> > Sushrut.
> >
> >
> >
>
* Re: DomU vs Dom0 performance.
2013-09-30 14:36 ` Konrad Rzeszutek Wilk
@ 2013-09-30 15:46 ` sushrut shirole
2013-10-01 10:05 ` Felipe Franciosi
0 siblings, 1 reply; 10+ messages in thread
From: sushrut shirole @ 2013-09-30 15:46 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: xen-devel
It's an HVM guest.
On 30 September 2013 14:36, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Sun, Sep 29, 2013 at 07:22:14PM -0400, sushrut shirole wrote:
> > Hi,
> >
> > I have been doing some diskIO bench-marking of dom0 and domU (HVM). I ran
> > into an issue where domU
> > performed better than dom0. So I ran few experiments to check if it is
> > just diskIO performance.
> >
> > I have an archlinux (kernel 3.5.0) + xen 4.2.2) installed on a Intel Core
> > i7 Q720 machine. I have also installed
> > archlinux (kernel 3.5.0) in domU running on this machine. The domU runs
> > with 8 vcpus. I have alloted both dom0
> > and domu 4096M ram.
>
> What kind of guest is it ? PV or HVM?
>
> >
> > I performed following experiments to compare the performance of domU vs
> > dom0.
> >
> > experiment 1]
> >
> > 1. Created a file.img of 5G
> > 2. Mounted the file with ext2 filesystem.
> > 3. Ran sysbench with following command.
> >
> > sysbench --num-threads=8 --test=fileio --file-total-size=1G
> > --max-requests=1000000 prepare
> >
> > 4. Read files into memory
> >
> > script to read files
> >
> > <snip>
> > for i in `ls test_file.*`
> > do
> > sudo dd if=./$i of=/dev/zero
> > done
> > </snip>
> >
> > 5. Ran sysbench.
> >
> > sysbench --num-threads=8 --test=fileio --file-total-size=1G
> > --max-requests=5000000 --file-test-mode=rndrd run
> >
> > the output i got on dom0 is
> >
> > <output>
> > Number of threads: 8
> >
> > Extra file open flags: 0
> > 128 files, 8Mb each
> > 1Gb total file size
> > Block size 16Kb
> > Number of random requests for random IO: 5000000
> > Read/Write ratio for combined random IO test: 1.50
> > Periodic FSYNC enabled, calling fsync() each 100 requests.
> > Calling fsync() at the end of test, Enabled.
> > Using synchronous I/O mode
> > Doing random read test
> >
> > Operations performed: 5130322 Read, 0 Write, 0 Other = 5130322 Total
> > Read 78.283Gb Written 0b Total transferred 78.283Gb (4.3971Gb/sec)
> > *288165.68 Requests/sec executed*
> >
> > Test execution summary:
> > total time: 17.8034s
> > total number of events: 5130322
> > total time taken by event execution: 125.3102
> > per-request statistics:
> > min: 0.01ms
> > avg: 0.02ms
> > max: 55.55ms
> > approx. 95 percentile: 0.02ms
> >
> > Threads fairness:
> > events (avg/stddev): 641290.2500/10057.89
> > execution time (avg/stddev): 15.6638/0.02
> > </output>
> >
> > 6. Performed same experiment on domU and result I got is
> >
> > <output>
> > Number of threads: 8
> >
> > Extra file open flags: 0
> > 128 files, 8Mb each
> > 1Gb total file size
> > Block size 16Kb
> > Number of random requests for random IO: 5000000
> > Read/Write ratio for combined random IO test: 1.50
> > Periodic FSYNC enabled, calling fsync() each 100 requests.
> > Calling fsync() at the end of test, Enabled.
> > Using synchronous I/O mode
> > Doing random read test
> >
> > Operations performed: 5221490 Read, 0 Write, 0 Other = 5221490 Total
> > Read 79.674Gb Written 0b Total transferred 79.674Gb (5.9889Gb/sec)
> > *392489.34 Requests/sec executed*
> >
> > Test execution summary:
> > total time: 13.3035s
> > total number of events: 5221490
> > total time taken by event execution: 98.7121
> > per-request statistics:
> > min: 0.01ms
> > avg: 0.02ms
> > max: 49.75ms
> > approx. 95 percentile: 0.02ms
> >
> > Threads fairness:
> > events (avg/stddev): 652686.2500/1494.93
> > execution time (avg/stddev): 12.3390/0.02
> >
> > </output>
> >
> > I was expecting dom0 to performa better than domU, so to debug more into
> it
> > I ram lm_bench microbenchmarks.
> >
> > Experiment 2] bw_mem benchmark
> >
> > 1. ./bw_mem 1000m wr
> >
> > dom0 output:
> >
> > 1048.58 3640.60
> >
> > domU output:
> >
> > 1048.58 4719.32
> >
> > 2. ./bw_mem 1000m rd
> >
> > dom0 output:
> > 1048.58 5780.56
> >
> > domU output:
> >
> > 1048.58 6258.32
> >
> >
> > Experiment 3] lat_syscall benchmark
> >
> > 1. ./lat_syscall write
> >
> > dom0 output:
> > Simple write: 1.9659 microseconds
> >
> > domU output :
> > Simple write: 0.4256 microseconds
> >
> > 2. ./lat_syscall read
> >
> > dom0 output:
> > Simple read: 1.9399 microseconds
> >
> > domU output :
> > Simple read: 0.3764 microseconds
> >
> > 3. ./lat_syscall stat
> >
> > dom0 output:
> > Simple stat:3.9667 microseconds
> >
> > domU output :
> > Simple stat: 1.2711 microseconds
> >
> > I am not able to understand why domU has performed better than domU, when
> > obvious guess is that dom0
> > should perform better than domU. I would really appreciate an help if
> > anyone knows the reason behind this
> > issue.
> >
> > Thank you,
> > Sushrut.
>
* Re: DomU vs Dom0 performance.
2013-09-30 15:46 ` sushrut shirole
@ 2013-10-01 10:05 ` Felipe Franciosi
2013-10-01 12:55 ` sushrut shirole
0 siblings, 1 reply; 10+ messages in thread
From: Felipe Franciosi @ 2013-10-01 10:05 UTC (permalink / raw)
To: 'sushrut shirole', Konrad Rzeszutek Wilk; +Cc: xen-devel@lists.xen.org
1) Can you paste your entire config file here?
This is just for clarification on the HVM bit.
Your "disk" config suggests you are using the PV protocol for storage (blkback).
2) Also, can you run "uname -a" in both dom0 and domU and paste it here as well?
Based on the syscall latencies you presented, it sounds like one domain may be 32bit and the other 64bit.
3) You are doing this:
> <snip>
> for i in `ls test_file.*`
> do
> sudo dd if=./$i of=/dev/zero
> done
> </snip>
I don't know what you intended with this, but you can't output to /dev/zero (you can read from /dev/zero, but you can only output to /dev/null).
If your "img" is 5G and your guest has 4G of RAM, you will not consistently buffer the entire image.
You are then doing buffered IO (note that some of your requests are completing in 10us). That can only happen if you are reading from memory and not from disk.
If you want to consistently compare the performance between two domains, you should always bypass the VM's cache with O_DIRECT.
Cheers,
Felipe
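For concreteness, a minimal sketch of the last two suggestions (the /dev/null fix and the O_DIRECT run), assuming sysbench 0.4.x (whose fileio test accepts --file-extra-flags=direct) and the test files created by the earlier "prepare" step; paths and block sizes are illustrative:
<snip>
# (a) corrected warm-up loop for a buffered comparison: discard data via /dev/null
for i in test_file.*
do
    sudo dd if=./$i of=/dev/null bs=1M
done

# (b) uncached comparison: drop the guest page cache, then repeat the run with O_DIRECT
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
sysbench --num-threads=8 --test=fileio --file-total-size=1G \
         --max-requests=5000000 --file-test-mode=rndrd \
         --file-extra-flags=direct run
</snip>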
From: xen-devel-bounces@lists.xen.org [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of sushrut shirole
Sent: 30 September 2013 16:47
To: Konrad Rzeszutek Wilk
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] DomU vs Dom0 performance.
Its a HVM guest.
On 30 September 2013 14:36, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
On Sun, Sep 29, 2013 at 07:22:14PM -0400, sushrut shirole wrote:
> Hi,
>
> I have been doing some disk I/O benchmarking of dom0 and domU (HVM). I ran
> into an issue where domU performed better than dom0, so I ran a few
> experiments to check whether it is just disk I/O performance.
>
> I have archlinux (kernel 3.5.0) + xen 4.2.2 installed on an Intel Core
> i7 Q720 machine. I have also installed archlinux (kernel 3.5.0) in a domU
> running on this machine. The domU runs with 8 vcpus. I have allotted both
> dom0 and domU 4096M of RAM.
What kind of guest is it ? PV or HVM?
>
> I performed following experiments to compare the performance of domU vs
> dom0.
>
> experiment 1]
>
> 1. Created a file.img of 5G
> 2. Mounted the file with ext2 filesystem.
> 3. Ran sysbench with following command.
>
> sysbench --num-threads=8 --test=fileio --file-total-size=1G
> --max-requests=1000000 prepare
>
> 4. Read files into memory
>
> script to read files
>
> <snip>
> for i in `ls test_file.*`
> do
> sudo dd if=./$i of=/dev/zero
> done
> </snip>
>
> 5. Ran sysbench.
>
> sysbench --num-threads=8 --test=fileio --file-total-size=1G
> --max-requests=5000000 --file-test-mode=rndrd run
>
> the output i got on dom0 is
>
> <output>
> Number of threads: 8
>
> Extra file open flags: 0
> 128 files, 8Mb each
> 1Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 5000000
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random read test
>
> Operations performed: 5130322 Read, 0 Write, 0 Other = 5130322 Total
> Read 78.283Gb Written 0b Total transferred 78.283Gb (4.3971Gb/sec)
> *288165.68 Requests/sec executed*
>
> Test execution summary:
> total time: 17.8034s
> total number of events: 5130322
> total time taken by event execution: 125.3102
> per-request statistics:
> min: 0.01ms
> avg: 0.02ms
> max: 55.55ms
> approx. 95 percentile: 0.02ms
>
> Threads fairness:
> events (avg/stddev): 641290.2500/10057.89
> execution time (avg/stddev): 15.6638/0.02
> </output>
>
> 6. Performed same experiment on domU and result I got is
>
> <output>
> Number of threads: 8
>
> Extra file open flags: 0
> 128 files, 8Mb each
> 1Gb total file size
> Block size 16Kb
> Number of random requests for random IO: 5000000
> Read/Write ratio for combined random IO test: 1.50
> Periodic FSYNC enabled, calling fsync() each 100 requests.
> Calling fsync() at the end of test, Enabled.
> Using synchronous I/O mode
> Doing random read test
>
> Operations performed: 5221490 Read, 0 Write, 0 Other = 5221490 Total
> Read 79.674Gb Written 0b Total transferred 79.674Gb (5.9889Gb/sec)
> *392489.34 Requests/sec executed*
>
> Test execution summary:
> total time: 13.3035s
> total number of events: 5221490
> total time taken by event execution: 98.7121
> per-request statistics:
> min: 0.01ms
> avg: 0.02ms
> max: 49.75ms
> approx. 95 percentile: 0.02ms
>
> Threads fairness:
> events (avg/stddev): 652686.2500/1494.93
> execution time (avg/stddev): 12.3390/0.02
>
> </output>
>
> I was expecting dom0 to perform better than domU, so to debug further I ran
> the lmbench microbenchmarks.
>
> Experiment 2] bw_mem benchmark
>
> 1. ./bw_mem 1000m wr
>
> dom0 output:
>
> 1048.58 3640.60
>
> domU output:
>
> 1048.58 4719.32
>
> 2. ./bw_mem 1000m rd
>
> dom0 output:
> 1048.58 5780.56
>
> domU output:
>
> 1048.58 6258.32
>
>
> Experiment 3] lat_syscall benchmark
>
> 1. ./lat_syscall write
>
> dom0 output:
> Simple write: 1.9659 microseconds
>
> domU output :
> Simple write: 0.4256 microseconds
>
> 2. ./lat_syscall read
>
> dom0 output:
> Simple read: 1.9399 microseconds
>
> domU output :
> Simple read: 0.3764 microseconds
>
> 3. ./lat_syscall stat
>
> dom0 output:
> Simple stat:3.9667 microseconds
>
> domU output :
> Simple stat: 1.2711 microseconds
>
> I am not able to understand why domU has performed better than dom0, when
> the obvious guess is that dom0 should perform better than domU. I would
> really appreciate any help if anyone knows the reason behind this issue.
>
> Thank you,
> Sushrut.
* Re: DomU vs Dom0 performance.
2013-10-01 10:05 ` Felipe Franciosi
@ 2013-10-01 12:55 ` sushrut shirole
2013-10-01 14:24 ` Konrad Rzeszutek Wilk
0 siblings, 1 reply; 10+ messages in thread
From: sushrut shirole @ 2013-10-01 12:55 UTC (permalink / raw)
To: Felipe Franciosi; +Cc: xen-devel@lists.xen.org
Please find my response inline.
Thank you,
Sushrut.
On 1 October 2013 10:05, Felipe Franciosi <felipe.franciosi@citrix.com> wrote:
> 1) Can you paste your entire config file here?
>
> This is just for clarification on the HVM bit.
>
> Your “disk” config suggests you are using the PV protocol for storage
> (blkback).
>
> kernel = "hvmloader"
builder='hvm'
memory = 4096
name = "ArchHVM"
vcpus=8
disk = [ 'phy:/dev/sda5,hda,w',
'file:/root/dev/iso/archlinux.iso,hdc:cdrom,r' ]
device_model = 'qemu-dm'
boot="c"
sdl=0
xen_platform_pci=1
opengl=0
vnc=0
vncpasswd=''
nographic=1
stdvga=0
serial='pty'
> 2) Also, can you run "uname -a" in both dom0 and domU and paste it here as
> well?
>
> Based on the syscall latencies you presented, it sounds like one
> domain may be 32bit and the other 64bit.
>
Kernel information on dom0 is:
Linux localhost 3.5.0-IDD #5 SMP PREEMPT Fri Sep 6 23:31:56 UTC 2013 x86_64
GNU/Linux
and on domU is:
Linux domu 3.5.0-IDD-12913 #2 SMP PREEMPT Sun Dec 9 17:54:30 EST 2012
x86_64 GNU/Linux
> 3) You are doing this:
>
> > <snip>
> > for i in `ls test_file.*`
> > do
> > sudo dd if=./$i of=/dev/zero
> > done
> > </snip>

My bad. I have changed it to /dev/null.

> I don’t know what you intended with this, but you can’t output to
> /dev/zero (you can read from /dev/zero, but you can only output to
> /dev/null).
>
> If your “img” is 5G and your guest has 4G of RAM, you will not
> consistently buffer the entire image.
>

Even though I am using a 5G img, the read operations executed are only 1G in
size. Also, lmbench doesn't involve any reads/writes to this ".img"; still,
the results I am getting are better on domU when measured with the lmbench
micro-benchmarks.
>
> You are then doing buffered IO (note that some of your requests are
> completing in 10us). That can only happen if you are reading from memory
> and not from disk.
>

Even though a single request completes in 10us, the total time required to
complete all requests (5000000) is 17 and 13 seconds for dom0 and domU
respectively.
(I forgot to mention that I have an SSD installed on this machine.)
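One way to check whether these random reads actually hit the SSD rather than the page cache is to watch the block device while the benchmark runs; a rough sketch, assuming the sysstat tools are installed and the image sits on /dev/sda (adjust the device name as needed):
<snip>
# per-device throughput and request sizes, refreshed every second
iostat -dxk sda 1

# or compare the block-layer read counters before and after a run
grep ' sda ' /proc/diskstats
</snip>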
>
> If you want to consistently compare the performance between two domains,
> you should always bypass the VM’s cache with O_DIRECT.
>

But looking at the results of the lat_syscall and bw_mem microbenchmarks, it
seems that syscalls execute faster in domU and memory bandwidth is higher in
domU.
>
> Cheers,
> Felipe
>
> From: xen-devel-bounces@lists.xen.org [mailto:
> xen-devel-bounces@lists.xen.org] On Behalf Of sushrut shirole
> Sent: 30 September 2013 16:47
> To: Konrad Rzeszutek Wilk
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] DomU vs Dom0 performance.
>
> It's an HVM guest.
>
> On 30 September 2013 14:36, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> wrote:
>
> On Sun, Sep 29, 2013 at 07:22:14PM -0400, sushrut shirole wrote:
> > Hi,
> >
> > I have been doing some diskIO bench-marking of dom0 and domU (HVM). I ran
> > into an issue where domU
> > performed better than dom0. So I ran few experiments to check if it is
> > just diskIO performance.
> >
> > I have an archlinux (kernel 3.5.0) + xen 4.2.2) installed on a Intel Core
> > i7 Q720 machine. I have also installed
> > archlinux (kernel 3.5.0) in domU running on this machine. The domU runs
> > with 8 vcpus. I have alloted both dom0
> > and domu 4096M ram.****
>
> What kind of guest is it ? PV or HVM?****
>
>
> >
> > I performed following experiments to compare the performance of domU vs
> > dom0.
> >
> > experiment 1]
> >
> > 1. Created a file.img of 5G
> > 2. Mounted the file with ext2 filesystem.
> > 3. Ran sysbench with following command.
> >
> > sysbench --num-threads=8 --test=fileio --file-total-size=1G
> > --max-requests=1000000 prepare
> >
> > 4. Read files into memory
> >
> > script to read files
> >
> > <snip>
> > for i in `ls test_file.*`
> > do
> > sudo dd if=./$i of=/dev/zero
> > done
> > </snip>
> >
> > 5. Ran sysbench.
> >
> > sysbench --num-threads=8 --test=fileio --file-total-size=1G
> > --max-requests=5000000 --file-test-mode=rndrd run
> >
> > the output i got on dom0 is
> >
> > <output>
> > Number of threads: 8
> >
> > Extra file open flags: 0
> > 128 files, 8Mb each
> > 1Gb total file size
> > Block size 16Kb
> > Number of random requests for random IO: 5000000
> > Read/Write ratio for combined random IO test: 1.50
> > Periodic FSYNC enabled, calling fsync() each 100 requests.
> > Calling fsync() at the end of test, Enabled.
> > Using synchronous I/O mode
> > Doing random read test
> >
> > Operations performed: 5130322 Read, 0 Write, 0 Other = 5130322 Total
> > Read 78.283Gb Written 0b Total transferred 78.283Gb (4.3971Gb/sec)
> > 288165.68 Requests/sec executed
>
> >
> > Test execution summary:
> > total time: 17.8034s
> > total number of events: 5130322
> > total time taken by event execution: 125.3102
> > per-request statistics:
> > min: 0.01ms
> > avg: 0.02ms
> > max: 55.55ms
> > approx. 95 percentile: 0.02ms
> >
> > Threads fairness:
> > events (avg/stddev): 641290.2500/10057.89
> > execution time (avg/stddev): 15.6638/0.02
> > </output>
> >
> > 6. Performed same experiment on domU and result I got is
> >
> > <output>
> > Number of threads: 8
> >
> > Extra file open flags: 0
> > 128 files, 8Mb each
> > 1Gb total file size
> > Block size 16Kb
> > Number of random requests for random IO: 5000000
> > Read/Write ratio for combined random IO test: 1.50
> > Periodic FSYNC enabled, calling fsync() each 100 requests.
> > Calling fsync() at the end of test, Enabled.
> > Using synchronous I/O mode
> > Doing random read test
> >
> > Operations performed: 5221490 Read, 0 Write, 0 Other = 5221490 Total
> > Read 79.674Gb Written 0b Total transferred 79.674Gb (5.9889Gb/sec)
> > 392489.34 Requests/sec executed
>
> >
> > Test execution summary:
> > total time: 13.3035s
> > total number of events: 5221490
> > total time taken by event execution: 98.7121
> > per-request statistics:
> > min: 0.01ms
> > avg: 0.02ms
> > max: 49.75ms
> > approx. 95 percentile: 0.02ms
> >
> > Threads fairness:
> > events (avg/stddev): 652686.2500/1494.93
> > execution time (avg/stddev): 12.3390/0.02
> >
> > </output>
> >
> > I was expecting dom0 to performa better than domU, so to debug more into
> it
> > I ram lm_bench microbenchmarks.
> >
> > Experiment 2] bw_mem benchmark
> >
> > 1. ./bw_mem 1000m wr
> >
> > dom0 output:
> >
> > 1048.58 3640.60
> >
> > domU output:
> >
> > 1048.58 4719.32
> >
> > 2. ./bw_mem 1000m rd
> >
> > dom0 output:
> > 1048.58 5780.56
> >
> > domU output:
> >
> > 1048.58 6258.32
> >
> >
> > Experiment 3] lat_syscall benchmark
> >
> > 1. ./lat_syscall write
> >
> > dom0 output:
> > Simple write: 1.9659 microseconds
> >
> > domU output :
> > Simple write: 0.4256 microseconds
> >
> > 2. ./lat_syscall read
> >
> > dom0 output:
> > Simple read: 1.9399 microseconds
> >
> > domU output :
> > Simple read: 0.3764 microseconds
> >
> > 3. ./lat_syscall stat
> >
> > dom0 output:
> > Simple stat:3.9667 microseconds
> >
> > domU output :
> > Simple stat: 1.2711 microseconds
> >
> > I am not able to understand why domU has performed better than domU, when
> > obvious guess is that dom0
> > should perform better than domU. I would really appreciate an help if
> > anyone knows the reason behind this
> > issue.
> >
> > Thank you,
> > Sushrut.****
>
* Re: DomU vs Dom0 performance.
2013-10-01 12:55 ` sushrut shirole
@ 2013-10-01 14:24 ` Konrad Rzeszutek Wilk
2013-10-03 18:50 ` sushrut shirole
0 siblings, 1 reply; 10+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-10-01 14:24 UTC (permalink / raw)
To: sushrut shirole; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On Tue, Oct 01, 2013 at 12:55:18PM +0000, sushrut shirole wrote:
> Please find my response inline.
>
> Thank you,
> Sushrut.
>
> On 1 October 2013 10:05, Felipe Franciosi <felipe.franciosi@citrix.com>wrote:
>
> > 1) Can you paste your entire config file here?****
> >
> > This is just for clarification on the HVM bit.****
> >
> > Your “disk” config suggests you are using the PV protocol for storage
> > (blkback).
> >
> > kernel = "hvmloader"
> builder='hvm'
> memory = 4096
> name = "ArchHVM"
> vcpus=8
> disk = [ 'phy:/dev/sda5,hda,w',
> 'file:/root/dev/iso/archlinux.iso,hdc:cdrom,r' ]
> device_model = 'qemu-dm'
> boot="c"
> sdl=0
> xen_platform_pci=1
> opengl=0
> vnc=0
> vncpasswd=''
> nographic=1
> stdvga=0
> serial='pty'
>
>
> > 2) Also, can you run “uname -a" in both dom0 and domU and paste it here as
> > well?****
> >
> > Based on the syscall latencies you presented, it sounds like one
> > domain may be 32bit and the other 64bit.****
> >
> > **
> >
> kernel information on dom0 is :
> Linux localhost 3.5.0-IDD #5 SMP PREEMPT Fri Sep 6 23:31:56 UTC 2013 x86_64
> GNU/Linux
>
> on domU is :
> Linux domu 3.5.0-IDD-12913 #2 SMP PREEMPT Sun Dec 9 17:54:30 EST 2012
> x86_64 GNU/Linux
>
> 3) You are doing this:****
> >
> > ** **
> >
> > > <snip>
> > > for i in `ls test_file.*`
> > > do
> > > sudo dd if=./$i of=/dev/zero
> > > done
> > > </snip>
> >
> > My bad. I have changed it to /dev/null.
>
> ****
> >
> > I don’t know what you intended with this, but you can’t output to
> > /dev/zero (you can read from /dev/zero, but you can only output to
> > /dev/null).****
> >
> > If your “img” is 5G and your guest has 4G of RAM, you will not
> > consistently buffer the entire image.****
> >
> > **
> >
> Even though I am using a 5G of img, read operations executed are of size 1G
> only. Also lm_benchmark doesn't involve any read/writes to this ".img",
> still the results I am getting are better on domU when measured with lm
> micro benchmarks.
>
> > **
> >
> > You are then doing buffered IO (note that some of your requests are
> > completing in 10us). That can only happen if you are reading from memory
> > and not from disk.
> >
> Even though a single request is completing in 10us, total time required to
> complete all requests (5000000) is 17 & 13 seconds for dom0 and domU
> respectively.
>
> (I forgot to mention that I have a SSD installed on this machine)
>
> > **
> >
> > If you want to consistently compare the performance between two domains,
> > you should always bypass the VM’s cache with O_DIRECT.****
> >
> > **
> >
> But looking at results of lat_syscall and bw_mem microbenchmarks, it shows
> that syscalls are executed faster in domU and memory bandwidth is more in
> domU.
Yes. That is expected with HVM guests. Their syscall overhead is lower and
their memory bandwidth is higher than that of PV guests (which is what dom0 is).
That is why PVH is such an interesting future direction - it is PV with HVM
containers to lower the syscall overhead and memory page table operations.
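To separate page-table/hypercall overhead from raw streaming bandwidth, the fault- and mapping-latency tests from lmbench can be run side by side in dom0 and domU. A minimal sketch, assuming the lat_pagefault and lat_mmap binaries are built alongside bw_mem and using a scratch file:
<snip>
# scratch file whose pages the benchmarks will fault in
dd if=/dev/zero of=/tmp/pf_file bs=1M count=512

# page-fault latency: dominated by page-table update cost, so the PV
# hypercall path shows up here far more than in a streaming test
./lat_pagefault /tmp/pf_file

# mmap + touch + unmap latency for a 512MB mapping
./lat_mmap 512m /tmp/pf_file
</snip>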
* Re: DomU vs Dom0 performance.
2013-10-01 14:24 ` Konrad Rzeszutek Wilk
@ 2013-10-03 18:50 ` sushrut shirole
2013-10-04 13:24 ` Konrad Rzeszutek Wilk
0 siblings, 1 reply; 10+ messages in thread
From: sushrut shirole @ 2013-10-03 18:50 UTC (permalink / raw)
To: Konrad Rzeszutek Wilk; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
Hi Konrad,
Thank you for the simple and wonderful explanation. Now I understand why
the syscall micro-benchmark performs better on domU than on dom0. But I am
still confused about the 'memory bandwidth' micro-benchmark performance.
The memory bandwidth micro-benchmark will cause a page fault when a page
is accessed for the first time, and I presume the PTE updates are the
major reason for dom0's performance degradation. But after the first few
page faults, all the pages would be in memory (both dom0 and domU have
4096M of memory and the micro-benchmark uses less than test_size * 3,
i.e. 1000M * 3, in this case), so why is there still a considerable
performance difference?
Thank you,
Sushrut.
On 1 October 2013 10:24, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Tue, Oct 01, 2013 at 12:55:18PM +0000, sushrut shirole wrote:
> > Please find my response inline.
> >
> > Thank you,
> > Sushrut.
> >
> > On 1 October 2013 10:05, Felipe Franciosi <felipe.franciosi@citrix.com
> >wrote:
> >
> > > 1) Can you paste your entire config file here?****
> > >
> > > This is just for clarification on the HVM bit.****
> > >
> > > Your “disk” config suggests you are using the PV protocol for storage
> > > (blkback).
> > >
> > > kernel = "hvmloader"
> > builder='hvm'
> > memory = 4096
> > name = "ArchHVM"
> > vcpus=8
> > disk = [ 'phy:/dev/sda5,hda,w',
> > 'file:/root/dev/iso/archlinux.iso,hdc:cdrom,r' ]
> > device_model = 'qemu-dm'
> > boot="c"
> > sdl=0
> > xen_platform_pci=1
> > opengl=0
> > vnc=0
> > vncpasswd=''
> > nographic=1
> > stdvga=0
> > serial='pty'
> >
> >
> > > 2) Also, can you run “uname -a" in both dom0 and domU and paste it
> here as
> > > well?****
> > >
> > > Based on the syscall latencies you presented, it sounds like one
> > > domain may be 32bit and the other 64bit.****
> > >
> > > **
> > >
> > kernel information on dom0 is :
> > Linux localhost 3.5.0-IDD #5 SMP PREEMPT Fri Sep 6 23:31:56 UTC 2013
> x86_64
> > GNU/Linux
> >
> > on domU is :
> > Linux domu 3.5.0-IDD-12913 #2 SMP PREEMPT Sun Dec 9 17:54:30 EST 2012
> > x86_64 GNU/Linux
> >
> > 3) You are doing this:****
> > >
> > > ** **
> > >
> > > > <snip>
> > > > for i in `ls test_file.*`
> > > > do
> > > > sudo dd if=./$i of=/dev/zero
> > > > done
> > > > </snip>
> > >
> > > My bad. I have changed it to /dev/null.
> >
> > ****
> > >
> > > I don’t know what you intended with this, but you can’t output to
> > > /dev/zero (you can read from /dev/zero, but you can only output to
> > > /dev/null).****
> > >
> > > If your “img” is 5G and your guest has 4G of RAM, you will not
> > > consistently buffer the entire image.****
> > >
> > > **
> > >
> > Even though I am using a 5G of img, read operations executed are of size
> 1G
> > only. Also lm_benchmark doesn't involve any read/writes to this ".img",
> > still the results I am getting are better on domU when measured with lm
> > micro benchmarks.
> >
> > > **
> > >
> > > You are then doing buffered IO (note that some of your requests are
> > > completing in 10us). That can only happen if you are reading from
> memory
> > > and not from disk.
> > >
> > Even though a single request is completing in 10us, total time required
> to
> > complete all requests (5000000) is 17 & 13 seconds for dom0 and domU
> > respectively.
> >
> > (I forgot to mention that I have a SSD installed on this machine)
> >
> > > **
> > >
> > > If you want to consistently compare the performance between two
> domains,
> > > you should always bypass the VM’s cache with O_DIRECT.****
> > >
> > > **
> > >
> > But looking at results of lat_syscall and bw_mem microbenchmarks, it
> shows
> > that syscalls are executed faster in domU and memory bandwidth is more in
> > domU.
>
> Yes. That is expected with HVM guests. Their syscall overhead and also
> memory
> bandwith will be faster than PV guests (which is what dom0 is).
>
> That is why PVH is such an intersting future direction - it is PV with HVM
> containers to lower the syscall overhead and memory page table operations.
>
>
* Re: DomU vs Dom0 performance.
2013-10-03 18:50 ` sushrut shirole
@ 2013-10-04 13:24 ` Konrad Rzeszutek Wilk
0 siblings, 0 replies; 10+ messages in thread
From: Konrad Rzeszutek Wilk @ 2013-10-04 13:24 UTC (permalink / raw)
To: sushrut shirole; +Cc: Felipe Franciosi, xen-devel@lists.xen.org
On Thu, Oct 03, 2013 at 02:50:27PM -0400, sushrut shirole wrote:
> Hi Konrad,
>
> Thank you for the simple and wonderful explanation. Now I understand why
> the syscall micro-benchmark performs better on domU
> than the dom0. But I am still confused about 'memory bandwidth'
> micro-benchmark performance. Memory bandwidth micro-benchmark
> test will cause a page fault when the page is accessed for the first time.
> I presume the PTE updates is the major reason for the
> performance degradation of the dom0. But after first few page faults, all
Correct. Each PTE update at worst requires a hypercall. We do have batching,
which means you can batch up to 32 PTE updates in one hypercall. But
if you mix the PTE updates with mprotect, etc., it gets worse.
> the pages would be in the memory (Both dom0 and domU
> have 4096M of memory and micro-benchmark uses < test_size * 3 i.e. 1000M *
> 3 in this case), then why does there is considerable
> amount of performance difference ?
I don't know what the micro-benchmark does. Does it use mprotect and any
page manipulations?
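One way to answer that empirically is to count the memory-management syscalls the benchmark makes; a rough sketch, assuming strace is available in both domains (use the traced run only for the syscall summary, since the bandwidth figure under strace is meaningless):
<snip>
# summary of mmap/mprotect/brk/etc. calls made by bw_mem;
# -f follows any child processes the benchmark harness forks
strace -f -c -e trace=memory ./bw_mem 1000m wr
strace -f -c -e trace=memory ./bw_mem 1000m rd
</snip>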
>
> Thank you,
> Sushrut.
>
>
>
> On 1 October 2013 10:24, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>wrote:
>
> > On Tue, Oct 01, 2013 at 12:55:18PM +0000, sushrut shirole wrote:
> > > Please find my response inline.
> > >
> > > Thank you,
> > > Sushrut.
> > >
> > > On 1 October 2013 10:05, Felipe Franciosi <felipe.franciosi@citrix.com
> > >wrote:
> > >
> > > > 1) Can you paste your entire config file here?****
> > > >
> > > > This is just for clarification on the HVM bit.****
> > > >
> > > > Your “disk” config suggests you are using the PV protocol for storage
> > > > (blkback).
> > > >
> > > > kernel = "hvmloader"
> > > builder='hvm'
> > > memory = 4096
> > > name = "ArchHVM"
> > > vcpus=8
> > > disk = [ 'phy:/dev/sda5,hda,w',
> > > 'file:/root/dev/iso/archlinux.iso,hdc:cdrom,r' ]
> > > device_model = 'qemu-dm'
> > > boot="c"
> > > sdl=0
> > > xen_platform_pci=1
> > > opengl=0
> > > vnc=0
> > > vncpasswd=''
> > > nographic=1
> > > stdvga=0
> > > serial='pty'
> > >
> > >
> > > > 2) Also, can you run “uname -a" in both dom0 and domU and paste it
> > here as
> > > > well?****
> > > >
> > > > Based on the syscall latencies you presented, it sounds like one
> > > > domain may be 32bit and the other 64bit.****
> > > >
> > > > **
> > > >
> > > kernel information on dom0 is :
> > > Linux localhost 3.5.0-IDD #5 SMP PREEMPT Fri Sep 6 23:31:56 UTC 2013
> > x86_64
> > > GNU/Linux
> > >
> > > on domU is :
> > > Linux domu 3.5.0-IDD-12913 #2 SMP PREEMPT Sun Dec 9 17:54:30 EST 2012
> > > x86_64 GNU/Linux
> > >
> > > 3) You are doing this:****
> > > >
> > > > ** **
> > > >
> > > > > <snip>
> > > > > for i in `ls test_file.*`
> > > > > do
> > > > > sudo dd if=./$i of=/dev/zero
> > > > > done
> > > > > </snip>
> > > >
> > > > My bad. I have changed it to /dev/null.
> > >
> > > ****
> > > >
> > > > I don’t know what you intended with this, but you can’t output to
> > > > /dev/zero (you can read from /dev/zero, but you can only output to
> > > > /dev/null).****
> > > >
> > > > If your “img” is 5G and your guest has 4G of RAM, you will not
> > > > consistently buffer the entire image.****
> > > >
> > > > **
> > > >
> > > Even though I am using a 5G of img, read operations executed are of size
> > 1G
> > > only. Also lm_benchmark doesn't involve any read/writes to this ".img",
> > > still the results I am getting are better on domU when measured with lm
> > > micro benchmarks.
> > >
> > > > **
> > > >
> > > > You are then doing buffered IO (note that some of your requests are
> > > > completing in 10us). That can only happen if you are reading from
> > memory
> > > > and not from disk.
> > > >
> > > Even though a single request is completing in 10us, total time required
> > to
> > > complete all requests (5000000) is 17 & 13 seconds for dom0 and domU
> > > respectively.
> > >
> > > (I forgot to mention that I have a SSD installed on this machine)
> > >
> > > > **
> > > >
> > > > If you want to consistently compare the performance between two
> > domains,
> > > > you should always bypass the VM’s cache with O_DIRECT.****
> > > >
> > > > **
> > > >
> > > But looking at results of lat_syscall and bw_mem microbenchmarks, it
> > shows
> > > that syscalls are executed faster in domU and memory bandwidth is more in
> > > domU.
> >
> > Yes. That is expected with HVM guests. Their syscall overhead and also
> > memory
> > bandwith will be faster than PV guests (which is what dom0 is).
> >
> > That is why PVH is such an intersting future direction - it is PV with HVM
> > containers to lower the syscall overhead and memory page table operations.
> >
> >
Thread overview: 10+ messages
2013-09-29 23:22 DomU vs Dom0 performance sushrut shirole
2013-09-30 14:36 ` Konrad Rzeszutek Wilk
2013-09-30 15:46 ` sushrut shirole
2013-10-01 10:05 ` Felipe Franciosi
2013-10-01 12:55 ` sushrut shirole
2013-10-01 14:24 ` Konrad Rzeszutek Wilk
2013-10-03 18:50 ` sushrut shirole
2013-10-04 13:24 ` Konrad Rzeszutek Wilk
2013-09-30 14:50 ` Stefano Stabellini
2013-09-30 15:45 ` sushrut shirole