From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Gordan Bobic <gordan@bobich.net>
Cc: Meng Xu <xumengpanda@gmail.com>, Sisu Xi <xisisu@gmail.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: memory performance 20% degradation in DomU -- Sisu
Date: Wed, 5 Mar 2014 17:28:52 -0500
Message-ID: <20140305222852.GA26966@phenom.dumpdata.com>
In-Reply-To: <5317973A.6060502@bobich.net>

On Wed, Mar 05, 2014 at 09:29:30PM +0000, Gordan Bobic wrote:
> Just out of interest, have you tried the same test with an HVM DomU?
> The two have different characteristics, and IIRC for some workloads
> PV can be slower than HVM. The recent PVHVM work was intended to
> combine the best aspects of both, but it is more recent than Xen
> 4.3.0.
> 
> It is also interesting that your findings are broadly similar
> to mine, albeit with a very different testing methodology:
> 
> http://goo.gl/lIUk4y

I don't know whether you used PV drivers (for HVM), or whether the
backend was a block device rather than a file.

It also helps to use 'fio' to test this sort of thing.
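
For example, something along these lines (an illustrative invocation,
not a prescription; adjust the target device, size, and I/O depth to
your setup):

  fio --name=randread --filename=/dev/xvdb --direct=1 \
      --rw=randread --bs=4k --size=1g --ioengine=libaio \
      --iodepth=16 --runtime=60 --time_based --group_reporting

--direct=1 takes the guest page cache out of the picture, and fio
reports latency statistics as well as throughput.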

> 
> Gordan
> 
> On 03/05/2014 08:09 PM, Sisu Xi wrote:
> >Hi, Konrad:
> >
> >It is the PV domU.
> >
> >Thanks.
> >
> >Sisu
> >
> >
> >On Wed, Mar 5, 2014 at 11:33 AM, Konrad Rzeszutek Wilk
> ><konrad.wilk@oracle.com> wrote:
> >
> >    On Tue, Mar 04, 2014 at 05:00:46PM -0600, Sisu Xi wrote:
> >     > Hi, all:
> >     >
> >     > I also used ramspeed to measure memory throughput.
> >     > http://alasir.com/software/ramspeed/
> >     >
> >     > I am using v2.6, the single-core version. The commands I used are
> >     > ./ramspeed -b 3 (for int) and ./ramspeed -b 6 (for float).
> >     > The benchmark measures four operations: add, copy, scale, and
> >     > triad, and it also reports an average across all four.
> >     >
> >     > The results in DomU show around 20% performance degradation
> >     > compared to the non-virtualized results.
> >
> >    What kind of domU? PV or HVM?
> >     >
> >     > Attached are the results. The left part shows the results for
> >     > int, while the right part shows the results for float. The Y axis
> >     > is the measured throughput. Each box contains 100 experiment
> >     > repeats. The black boxes are the results in the non-virtualized
> >     > environment, while the blue ones are the results I got in DomU.
> >     >
> >     > The Xen version I am using is 4.3.0, 64bit.
> >     >
> >     > Thanks very much!
> >     >
> >     > Sisu
> >     >
> >     >
> >     >
> >     > On Tue, Mar 4, 2014 at 4:49 PM, Sisu Xi <xisisu@gmail.com> wrote:
> >     >
> >     > > Hi, all:
> >     > >
> >     > > I am trying to study cache/memory performance under Xen, and
> >     > > have encountered some problems.
> >     > >
> >     > > My machine has an Intel Core i7 X980 processor with 6 physical
> >     > > cores. I disabled hyper-threading and frequency scaling, so it
> >     > > should be running at a constant speed.
> >     > > Dom0 was booted with 1 VCPU pinned to 1 core and 2 GB of memory.
> >     > >
> >     > > After that, I booted DomU with 1 VCPU pinned to a separate core
> >     > > and 1 GB of memory. The credit scheduler is used, and no cap is
> >     > > set for either domain, so DomU should be able to use all of its
> >     > > resources.
> >     > >
> >     > > Each physical core has a dedicated 32KB L1 cache and a dedicated
> >     > > 256KB L2 cache, and all cores share a 12MB L3 cache.
> >     > >
> >     > > I created a simple program that allocates an array of a
> >     > > specified size, loads it once, and then randomly accesses every
> >     > > cache line once (1 cache line is 64B on my machine).
> >     > > rdtsc is used to record the duration of the random accesses.
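> >     > >
> >     > > A minimal sketch of that kind of measurement (hypothetical and
> >     > > simplified, not the exact program I run) looks like this:
> >     > >
> >     > > #include <stdint.h>
> >     > > #include <stdio.h>
> >     > > #include <stdlib.h>
> >     > >
> >     > > #define LINE 64  /* cache-line size on this machine */
> >     > >
> >     > > static inline uint64_t rdtsc(void)
> >     > > {
> >     > >     uint32_t lo, hi;
> >     > >     __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
> >     > >     return ((uint64_t)hi << 32) | lo;
> >     > > }
> >     > >
> >     > > int main(int argc, char **argv)
> >     > > {
> >     > >     size_t size = (argc > 1) ? strtoul(argv[1], NULL, 0) : (1 << 20);
> >     > >     size_t nlines = size / LINE, i;
> >     > >     char *buf = malloc(size);
> >     > >     size_t *order = malloc(nlines * sizeof(*order));
> >     > >     volatile char sink = 0;
> >     > >
> >     > >     /* first pass: touch (and initialize) every line once */
> >     > >     for (i = 0; i < nlines; i++)
> >     > >         buf[i * LINE] = 1;
> >     > >
> >     > >     /* shuffle the line indices (Fisher-Yates) */
> >     > >     for (i = 0; i < nlines; i++)
> >     > >         order[i] = i;
> >     > >     for (i = nlines - 1; i > 0; i--) {
> >     > >         size_t j = rand() % (i + 1);
> >     > >         size_t t = order[i]; order[i] = order[j]; order[j] = t;
> >     > >     }
> >     > >
> >     > >     /* timed pass: visit every cache line once, in random order */
> >     > >     uint64_t start = rdtsc();
> >     > >     for (i = 0; i < nlines; i++)
> >     > >         sink += buf[order[i] * LINE];
> >     > >     uint64_t cycles = rdtsc() - start;
> >     > >
> >     > >     printf("%zu bytes: %.2f cycles/line\n", size,
> >     > >            (double)cycles / nlines);
> >     > >     free(order); free(buf);
> >     > >     return 0;
> >     > > }
> >     > >
> >     > > (In practice one would also serialize the rdtsc reads, e.g. with
> >     > > cpuid, and repeat the measurement; this sketch omits that.)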
> >     > >
> >     > > I tried different data sizes, with 1000 repeats for each data
> >     > > size. Attached is a boxplot of the average access time per cache
> >     > > line.
> >     > >
> >     > > The x axis is the data size, and the y axis is CPU cycles. The
> >     > > three vertical lines at 32KB, 256KB, and 12MB mark the sizes of
> >     > > the L1, L2, and L3 caches on my machine.
> >     > > *The black boxes are the results I got when running
> >     > > non-virtualized, while the blue boxes are the results I got in
> >     > > DomU.*
> >     > >
> >     > > For some reason, the results in DomU vary much more than the
> >     > > results in the non-virtualized environment.
> >     > > I also repeated the same experiments in DomU at run level 1; the
> >     > > results were the same.
> >     > >
> >     > > Can anyone give some suggestions about what might be the reason
> >    for this?
> >     > >
> >     > > Thanks very much!
> >     > >
> >     > > Sisu
> >     > >
> >--
> >Sisu Xi, PhD Candidate
> >
> >http://www.cse.wustl.edu/~xis/
> >Department of Computer Science and Engineering
> >Campus Box 1045
> >Washington University in St. Louis
> >One Brookings Drive
> >St. Louis, MO 63130
> >
> >
> >_______________________________________________
> >Xen-devel mailing list
> >Xen-devel@lists.xen.org
> >http://lists.xen.org/xen-devel
> >
> 
