From: Dario Faggioli <dario.faggioli@citrix.com>
To: Michael Schinzel <schinzel@ip-projects.de>,
Xen-devel <xen-devel@lists.xenproject.org>
Cc: Roger Pau Monne <roger.paumonne@citrix.com>,
Thomas Toka <toka@ip-projects.de>
Subject: Re: Read Performance issue when Xen Hypervisor is activated
Date: Thu, 12 Jan 2017 18:03:49 +0100 [thread overview]
Message-ID: <1484240629.9947.18.camel@citrix.com> (raw)
In-Reply-To: <a8ce81db36d142b5a1957468e6b8a547@ip-projects.de>
On Mon, 2017-01-02 at 07:15 +0000, Michael Schinzel wrote:
> Good Morning,
>
I'm back, although, as anticipated, I can't be terribly useful, I'm
afraid...
> You can see that, in the default Xen configuration, the most telling
> number is the read performance test -> 2414.92 MB/sec <- the cached
> read throughput is half of what the same host gets when booted without
> the hypervisor. We searched and searched and searched and found the
> cause: xen_acpi_processor
>
> Xen manages the CPU frequency with a default of 1,200 MHz. It is like
> driving a Ferrari at 30 miles/h all the time :) So we changed the
> performance parameter to
>
> xenpm set-scaling-governor all performance
>
Well, yes, this will have an impact, but it's unlikely to be what
you're looking for. In fact, something similar would also apply to
baremetal Linux.
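For reference, the governor change quoted above can be inspected and applied along these lines. This is only a sketch: xenpm is Xen's power-management tool and works only inside a Xen dom0, so the snippet guards for its absence.

```shell
# Sketch only: xenpm works only in a Xen dom0, so guard for its absence.
# "get-cpufreq-para" and "set-scaling-governor" are the subcommands
# discussed in the thread.
if command -v xenpm >/dev/null 2>&1; then
    GOV_INFO=$(xenpm get-cpufreq-para 0)       # inspect current policy of pCPU 0
    xenpm set-scaling-governor all performance # pin all pCPUs to max frequency
else
    GOV_INFO="xenpm not found (not running under Xen?)"
fi
echo "$GOV_INFO"
```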
> After a bit more searching around, I also found a parameter for the
> scheduler.
>
> root@v7:~# cat /sys/block/sda/queue/scheduler
> noop deadline [cfq]
>
> I changed the scheduler to deadline. After this change
>
Well, ISTR [noop] could be even better. But I don't think this will
make much difference either, in this case.
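For completeness, the sysfs file quoted above can be read and written as follows; this is a sketch that assumes the disk is sda (adjust DEV for your system). Reading the file lists the available schedulers with the active one in brackets.

```shell
# Assumption: the disk is sda; adjust DEV on other systems.
DEV=sda
SCHED_FILE="/sys/block/$DEV/queue/scheduler"
if [ -r "$SCHED_FILE" ]; then
    CUR=$(cat "$SCHED_FILE")   # e.g. "noop deadline [cfq]"
else
    CUR="no such device: $DEV"
fi
echo "$CUR"
# Switching the scheduler needs root, e.g.:
#   echo deadline > "$SCHED_FILE"
```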
> We have already tried to remove the CPU reservation, memory limit and
> so on, but this doesn't change anything. Upgrading the hypervisor
> doesn't change anything about this performance issue either.
>
Well, these are all sequential benchmarks, so it indeed could have been
expected that adding more vCPUs wouldn't have changed things much.
I decided to re-run some of your tests on my test hardware (which is
way lower end than yours, especially as far as storage is concerned).
These are my results:
hdparm -Tt /dev/sda

                              Without Xen (baremetal Linux)              With Xen (from within dom0)
  Timing cached reads         14074 MB in 2.00 seconds = 7043.05 MB/sec  14694 MB in 1.99 seconds = 7382.22 MB/sec
  Timing buffered disk reads  364 MB in 3.01 seconds = 120.78 MB/sec     364 MB in 3.00 seconds = 121.22 MB/sec
dd_obs_test.sh file transfer rate

  block size   Without Xen (baremetal Linux)   With Xen (from within dom0)
  512          279 MB/s                        123 MB/s
  1024         454 MB/s                        217 MB/s
  2048         275 MB/s                        359 MB/s
  4096         888 MB/s                        532 MB/s
  8192         987 MB/s                        659 MB/s
  16384        1.0 GB/s                        685 MB/s
  32768        1.1 GB/s                        773 MB/s
  65536        1.1 GB/s                        846 MB/s
  131072       1.1 GB/s                        749 MB/s
  262144       327 MB/s                        844 MB/s
  524288       1.1 GB/s                        783 MB/s
  1048576      420 MB/s                        823 MB/s
  2097152      485 MB/s                        305 MB/s
  4194304      409 MB/s                        783 MB/s
  8388608      380 MB/s                        776 MB/s
  16777216     950 MB/s                        703 MB/s
  33554432     916 MB/s                        297 MB/s
  67108864     856 MB/s                        492 MB/s
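The block-size sweep above can be reproduced with a loop like the following (the internals of dd_obs_test.sh are assumed; file path and total size here are arbitrary and scaled down so it finishes quickly):

```shell
# Write the same total amount of data with different block sizes and let
# dd report the throughput of each run (a scaled-down dd_obs_test.sh).
OUT=/tmp/dd_obs_demo.bin
TOTAL=$((8 * 1024 * 1024))   # 8 MiB per run
for BS in 512 4096 65536 1048576; do
    COUNT=$((TOTAL / BS))
    printf 'bs=%-8s ' "$BS"
    # dd prints its transfer-rate summary on stderr
    dd if=/dev/zero of="$OUT" bs="$BS" count="$COUNT" 2>&1 | tail -n 1
done
```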
time dd if=/dev/zero of=datei bs=1M count=10240

               Without Xen (baremetal Linux)   With Xen (from within dom0)
  throughput   73.7224 s, 146 MB/s             97.6948 s, 110 MB/s
  real         1m13.724s                       1m37.700s
  user         0m0.000s                        0m0.068s
  sys          0m9.364s                        0m15.180s

root@Zhaman:~# time dd if=datei of=/dev/null

               Without Xen (baremetal Linux)   With Xen (from within dom0)
  throughput   9.92787 s, 1.1 GB/s             95.1827 s, 113 MB/s
  real         0m9.953s                        1m35.194s
  user         0m2.096s                        0m10.632s
  sys          0m7.300s                        0m51.820s
Which confirms that, when running the tests inside a Xen Dom0, things
are indeed slower.
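A quick, scaled-down rerun of the write/read pair above looks like this (file name and size are arbitrary). Note that without conv=fdatasync the write mostly measures the page cache, and the read-back is typically served from cache too, which is why the baremetal read above reaches 1.1 GB/s:

```shell
# Write 64 MiB flushed to disk, then read it back (likely from cache).
F=/tmp/datei_demo
dd if=/dev/zero of="$F" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
dd if="$F" of=/dev/null bs=1M 2>&1 | tail -n 1
```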
Let me say something, though: the purpose of Xen is not to achieve the
best possible performance in Dom0. In fact, it is to achieve the best
possible aggregated performance of a number of guest domains.
The fact that virtualization has an overhead and that Dom0 pays quite a
high price are well known. Have you tried, for instance, running some
of the tests in a DomU?
Now, whether what both you and I are seeing is to be considered
"normal", I can't tell. Maybe Roger can (or he can tell us who to
bother for that).
In general, I don't think updating random system and firmware
components is useful at all... This is not a BIOS issue, IMO.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
Thread overview: 8+ messages
2016-12-27 14:26 Read Performance issue when Xen Hypervisor is activated Michael Schinzel
2016-12-30 16:34 ` Dario Faggioli
2016-12-30 16:53 ` Michael Schinzel
2016-12-31 9:07 ` Michael Schinzel
2017-01-02 7:15 ` Michael Schinzel
2017-01-12 17:03 ` Dario Faggioli [this message]
2017-01-13 13:32 ` Konrad Rzeszutek Wilk