From: Gordan Bobic <gordan@bobich.net>
To: xen-devel@lists.xen.org
Subject: Re: memory performance 20% degradation in DomU -- Sisu
Date: Thu, 06 Mar 2014 10:31:56 +0000
Message-ID: <5ff5b79f65bd03837291020cb8173abf@mail.shatteredsilicon.net>
In-Reply-To: <20140305222852.GA26966@phenom.dumpdata.com>
On 2014-03-05 22:28, Konrad Rzeszutek Wilk wrote:
> On Wed, Mar 05, 2014 at 09:29:30PM +0000, Gordan Bobic wrote:
>> Just out of interest, have you tried the same test with HVM DomU?
>> The two have different characteristics, and IIRC for some workloads
>> PV can be slower than HVM. The recent PVHVM work was intended to
>> result in the best aspects of both, but that is more recent than Xen
>> 4.3.0.
>>
>> It is also interesting that your findings are approximately similar
>> to mine, albeit with a very different testing methodology:
>>
>> http://goo.gl/lIUk4y
>
> Don't know if you used PV drivers (for HVM) and if you used a
> block device instead of a file as the backend.
>
> But it also helps to use 'fio' to test this sort of thing.
I used a dedicated disk which was not altered between the tests.
Otherwise I wouldn't have been able to run the same installation on
bare metal and virtualized.
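
To Konrad's point about fio: if I did want to isolate the disk path,
a minimal run along these lines would do it (illustrative only - I
didn't run this, and /dev/sdb is a stand-in for the dedicated disk):

    # random 4k reads straight off the raw device, bypassing the
    # page cache, for a fixed 60 seconds
    fio --name=randread --filename=/dev/sdb --direct=1 \
        --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
        --runtime=60 --time_based --group_reporting

The same invocation on bare metal, PV and HVM would separate the disk
path from the CPU effects.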
I don't think disk I/O was particularly relevant in the test - the CPU
was always the bottleneck, with no iowait time. My impression was that
it was the context switching that really crippled virtualized
performance, especially in multi-socket or NUMA cases. The C2Q I tested
on can be considered a dual-socket non-NUMA system in this context,
since the two dies on it don't share any caches, which means higher
migration penalties.
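
(The die/cache split is easy to confirm from sysfs on bare metal - the
index number below is what I'd expect for the shared L2 on a C2Q, but
it isn't guaranteed:

    # which logical CPUs share this cache with cpu0
    cat /sys/devices/system/cpu/cpu0/cache/index2/shared_cpu_list

On a C2Q this should list only the two cores on the same die.)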
Throw in the extra Heisenbergism of the domU kernel not having any idea
where the hypervisor might schedule the virtual CPU mapping (I didn't
pin cores in the test, perhaps I should have) and it is easy to see a
case where it gets quite bad when you push the system to saturation.
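
If I were to redo it with pinning, it would be something along these
lines - untested here, and 'guest' and the CPU numbers are made up:

    # in the domU config: two vCPUs confined to pCPUs 2-3
    vcpus = 2
    cpus  = "2-3"

    # or pin at runtime with xl
    xl vcpu-pin guest 0 2
    xl vcpu-pin guest 1 3

That would at least take the vCPU migration variable out of the
equation.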
Gordan