From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sisu Xi <xisisu@gmail.com>
Cc: Meng Xu <xumengpanda@gmail.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: memory performance 20% degradation in DomU -- Sisu
Date: Wed, 5 Mar 2014 12:33:19 -0500 [thread overview]
Message-ID: <20140305173319.GC9528@phenom.dumpdata.com> (raw)
In-Reply-To: <CAPqOm-qvogSOXBrCyctiGXhWivTyjH-s3R0VV=kJnMvO92pgeg@mail.gmail.com>
On Tue, Mar 04, 2014 at 05:00:46PM -0600, Sisu Xi wrote:
> Hi, all:
>
> I also used the ramspeed to measure memory throughput.
> http://alasir.com/software/ramspeed/
>
> I am using v2.6, the single-core version. The commands I used are ./ramspeed
> -b 3 (for int) and ./ramspeed -b 6 (for float).
> The benchmark measures four operations: add, copy, scale, and triad, and
> also reports an average across all four operations.
>
> The results in DomU show around a 20% performance degradation compared to
> the non-virtualized results.
What kind of domU? PV or HVM?
>
> Attached are the results. The left part shows the results for int, the
> right part the results for float. The y axis is the measured throughput.
> Each box contains 100 experiment repeats.
> The black boxes are the results in the non-virtualized environment, while
> the blue ones are the results I got in DomU.
>
> The Xen version I am using is 4.3.0, 64bit.
>
> Thanks very much!
>
> Sisu
>
>
>
> On Tue, Mar 4, 2014 at 4:49 PM, Sisu Xi <xisisu@gmail.com> wrote:
>
> > Hi, all:
> >
> > I am trying to study cache/memory performance under Xen, and have
> > encountered some problems.
> >
> > My machine has an Intel Core i7 X980 processor with 6 physical cores. I
> > disabled hyper-threading and frequency scaling, so it should be running
> > at a constant speed.
> > Dom0 was booted with 1 VCPU pinned to 1 core, with 2 GB of memory.
> >
> > After that, I booted DomU with 1 VCPU pinned to a separate core, with 1
> > GB of memory. The credit scheduler is used, and no cap is set for either
> > domain, so DomU should be able to use all of its resources.
> >
> > Each physical core has a dedicated 32KB L1 cache and a dedicated 256KB
> > L2 cache, and all cores share a 12MB L3 cache.
> >
> > I wrote a simple program that allocates an array of a specified size,
> > loads it once, and then randomly accesses every cache line exactly once
> > (one cache line is 64B on my machine).
> > rdtsc is used to measure the duration of the random accesses.
> >
> > I tried different data sizes, with 1000 repeats for each size.
> > Attached is a boxplot of the average access time per cache line.
> >
> > The x axis is the data size; the y axis is CPU cycles. The three
> > vertical lines at 32KB, 256KB, and 12MB mark the L1, L2, and L3 cache
> > sizes on my machine.
> > *The black boxes are the results I got running non-virtualized, while
> > the blue boxes are the results I got in DomU.*
> >
> > For some reason, the results in DomU vary much more than the results in
> > the non-virtualized environment.
> > I also repeated the same experiments in DomU at run level 1; the results
> > are the same.
> >
> > Can anyone give some suggestions about what might be the reason for this?
> >
> > Thanks very much!
> >
> > Sisu
> >
> > --
> > Sisu Xi, PhD Candidate
> >
> > http://www.cse.wustl.edu/~xis/
> > Department of Computer Science and Engineering
> > Campus Box 1045
> > Washington University in St. Louis
> > One Brookings Drive
> > St. Louis, MO 63130
> >
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
Thread overview: 15+ messages
2014-03-04 22:49 memory performance 20% degradation in DomU -- Sisu Sisu Xi
2014-03-04 23:00 ` Sisu Xi
2014-03-05 17:33 ` Konrad Rzeszutek Wilk [this message]
2014-03-05 20:09 ` Sisu Xi
2014-03-05 21:29 ` Gordan Bobic
2014-03-05 22:28 ` Konrad Rzeszutek Wilk
2014-03-06 10:31 ` Gordan Bobic
2014-03-05 22:09 ` Konrad Rzeszutek Wilk
2014-03-11 12:03 ` George Dunlap
2014-03-11 15:46 ` Sisu Xi
2014-03-11 20:21 ` Sisu Xi
2014-03-12 8:55 ` Dario Faggioli
2014-03-12 16:50 ` Sisu Xi
2014-03-13 10:25 ` George Dunlap
2014-03-12 8:59 ` Dario Faggioli