From: Tim Deegan <tim@xen.org>
To: Sisu Xi <xisisu@gmail.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: Did xen do some cache prefetch? -- Sisu
Date: Mon, 20 May 2013 10:20:40 +0100	[thread overview]
Message-ID: <20130520092040.GA95368@ocelot.phlegethon.org> (raw)
In-Reply-To: <CAPqOm-r33L35xihDHYAJNDgNhdKfBq-LMSud-CFdF0ms2LzN2g@mail.gmail.com>

Hi,

At 13:22 -0500 on 19 May (1368969755), Sisu Xi wrote:
> I am using techniques similar to those in the post below, Figure 1:
> http://blog.stuffedcow.net/2013/01/ivb-cache-replacement/
> 
> My host domain is CentOS 6.2, running Linux 3.4.35;
> the Xen version is 4.2.0.
> The guest OS is Ubuntu 12.04, kernel 3.2.0.
> 
> My CPU is an Intel Core i7 980X. It has 6 cores, running constantly
> at 3.33GHz (hyperthreading disabled; frequency scaling also disabled),
> with 32KB L1 and 256KB L2 caches on each core, and a shared 12MB L3
> cache.
> 
> I am measuring cache latency; the workload is reading data from an
> array.
> 
> The x-axis is the array size, in log scale; the y-axis is the average
> cycles per access.
> 
> Two lines are shown:
> The solid line is an experiment done on a non-virtualized OS, with
> the task pinned to a specific core to prevent migration.
> The dashed line is the same experiment run within a guest OS
> (configured with one VCPU, pinned to one core).
> 
> You can see that both lines show three jumps, at 32KB, 256KB, and
> around 12MB, matching the sizes of L1, L2, and L3.
> 
> The strange thing is that the access time under virtualization is
> smaller than on the non-virtualized OS.
> 
> My guess is that Xen is doing some cache prefetching; is that the case?

Not that I know of.  And if you're using the random access patterns
described in that blog, I don't see how prefetching would help.
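
For reference, the usual way to make those accesses prefetch-proof is a
pointer chase through a shuffled permutation, so each load's address
depends on the previous load's value.  A rough, untested sketch (the
names and the shuffle are just illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

/* Build a random cyclic permutation: following chain[i] visits every
 * element exactly once before returning to the start.  Each load's
 * address depends on the previous load's value, so a hardware
 * prefetcher has no predictable stride to follow. */
static size_t *make_chain(size_t nelems)
{
    size_t *chain = malloc(nelems * sizeof(*chain));
    size_t *order = malloc(nelems * sizeof(*order));
    if (!chain || !order) {
        free(chain);
        free(order);
        return NULL;
    }
    for (size_t i = 0; i < nelems; i++)
        order[i] = i;
    for (size_t i = nelems - 1; i > 0; i--) {  /* Fisher-Yates shuffle */
        size_t j = (size_t)rand() % (i + 1);
        size_t tmp = order[i];
        order[i] = order[j];
        order[j] = tmp;
    }
    for (size_t i = 0; i < nelems; i++)        /* link into one cycle */
        chain[order[i]] = order[(i + 1) % nelems];
    free(order);
    return chain;
}

/* Chase the chain for 'steps' loads; the timed loop goes around this.
 * Returning the final index stops the compiler eliding the loads. */
static size_t chase(const size_t *chain, size_t steps)
{
    size_t idx = 0;
    while (steps--)
        idx = chain[idx];
    return idx;
}
```

Timing chase() over chains of increasing size should reproduce the
jumps at the L1/L2/L3 boundaries.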

My guess is there's some other confounding factor.  Are you absolutely
sure that you've turned off all the power management in both cases?
Since you're measuring memory access time in CPU cycles, frequency
changes could skew the graph in either direction.  You could also try a
CPU-bound test and see if the Xen case is faster there as well -- if
so, it's definitely not cache behaviour.
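
By a CPU-bound test I mean something that stays entirely in registers;
an untested sketch (the loop body is arbitrary dependent ALU work):

```c
#include <stdint.h>
#include <x86intrin.h>  /* __rdtsc(), GCC/Clang on x86 */

/* Time a dependent ALU chain that never touches memory.  If the Xen
 * case is also "faster" here, the gap isn't cache behaviour. */
static uint64_t cpu_bound_cycles(uint64_t iters)
{
    uint64_t acc = 1;
    uint64_t start = __rdtsc();
    for (uint64_t i = 0; i < iters; i++) {
        acc = acc * 3 + 1;                /* dependent multiply-add */
        __asm__ volatile("" : "+r"(acc)); /* keep it live, in a register */
    }
    return __rdtsc() - start;
}
```

Compare cycles-per-iteration on bare metal vs. in the guest; the exact
constant and iteration count don't matter, only that the loop makes no
loads.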

We have seen cases where things like scheduler effects made a
difference (e.g. if you're using a single-processor Linux kernel in the
Xen case, make sure to use a single-processor Linux kernel on bare
metal too, as that affects kernel performance).  Is your test array
already populated and pinned in memory to avoid page faults?
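
For the populating and pinning, I'd expect something along these lines
before the timed loop starts (a sketch; mlock(2) may need
RLIMIT_MEMLOCK raised for an array past the default limit):

```c
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

/* Allocate, touch every page, and pin the buffer, so that neither
 * demand-paging faults nor swapping can happen inside the timed loop. */
static void *make_pinned_array(size_t size)
{
    void *buf = malloc(size);
    if (!buf)
        return NULL;
    memset(buf, 0x5a, size);        /* fault every page in now */
    if (mlock(buf, size) != 0) {    /* pin; check RLIMIT_MEMLOCK on failure */
        free(buf);
        return NULL;
    }
    return buf;
}
```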

Cheers,

Tim.

