From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [PATCH] arm: use a per-VCPU stack
Date: Sun, 19 Feb 2012 09:31:05 +0000
Message-ID: <20120219093105.GA30637@ocelot.phlegethon.org>
In-Reply-To: <1329641061.5898.10.camel@dagon.hellion.org.uk>

At 08:44 +0000 on 19 Feb (1329641061), Ian Campbell wrote:
> > Storing the CPU ID in the per-pcpu area only happens to work because
> > per-cpu areas are a no-op right now.  I have a patch that re-enables them
> > properly, but for that we'll need a proper way of getting the CPU id.
> 
> I had imagined that we would have per-pCPU page tables, so the current
> CPU's per-pcpu area would always be at the same location. If that is not
> (going to be) the case then I'll stash it on the VCPU stack instead.

Yes, I'd thought that too, but then when I came to implement it...

> Thinking about it now, playing tricks with the PTs does make it tricky on
> the rare occasions when you want to access another pCPU's per-cpu area.

... I saw that, and since I then had to use the normal relocation tricks
anyway, I didn't bother with the local-var special case.  Could still do
it if it turns out to be a perf win (but without hardware to measure on,
I think I'll leave the optimizations alone for now).
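
For reference, the usual relocation scheme is roughly the following
(illustrative sketch only, not the real Xen macros; NR_CPUS and
smp_processor_id() here are just stand-ins):

    /* Each pCPU gets its own copy of the per-cpu data at boot, and the
     * distance from the link-time addresses is recorded per CPU. */
    #define NR_CPUS 8                         /* made-up value */
    extern unsigned long __per_cpu_offset[NR_CPUS];
    unsigned int smp_processor_id(void);      /* however we find the id */

    /* Another CPU's copy of a per-cpu variable is just the link-time
     * address plus that CPU's relocation offset. */
    #define per_cpu(var, cpu) \
        (*(__typeof__(&(var)))((char *)&(var) + __per_cpu_offset[cpu]))

    /* The local case is the same lookup with our own offset, so no
     * page-table games are strictly required for it. */
    #define this_cpu(var) per_cpu(var, smp_processor_id())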

> Speaking of per-cpu areas -- I did notice a strange behaviour while
> debugging this. It seemed that a barrier() was not sufficient to keep
> the compiler from caching the value of "current" in a register (i.e.
> it would load it into r6 before the barrier and keep using r6 after).
> I figured this was probably an unfortunate side effect of the currently
> nobbled per-pcpu areas and would be fixed as part of your SMP bringup
> stuff.

Weird.  Must check that when I rebase the SMP patches.
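
For the record, roughly what's involved (sketch only, not necessarily
what the tree does today; the register choice is just an example):

    /* Compiler-only barrier: the "memory" clobber forces GCC to re-read
     * values it knows live in memory... */
    #define barrier() __asm__ __volatile__("" : : : "memory")

    struct vcpu;

    /* ...but if "current" comes from something with no memory
     * dependence -- say a coprocessor register read -- a non-volatile
     * asm can be CSE'd into a register (r6, say) and reused across the
     * barrier.  Marking the asm volatile prevents that. */
    static inline struct vcpu *get_current(void)
    {
        struct vcpu *v;
        /* TPIDRPRW, purely as an example home for "current" */
        __asm__ __volatile__("mrc p15, 0, %0, c13, c0, 4" : "=r" (v));
        return v;
    }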

> > We could use the physical CPU ID register; I don't know whether it
> > would be faster to stash the ID on the (per-vcpu) stack and update it
> > during context switch.
> 
> Does h/w CPU ID correspond to the s/w one in our circumstances? Might
> they be very sparse or something inconvenient like that?

It does on all the h/w we support :) but yes it could be sparse,
encoding NUMA topology.
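
(For reference, that's MPIDR; reading it looks roughly like the sketch
below, and the affinity fields are what make the raw value sparse --
illustrative only, not the actual Xen code:)

    #include <stdint.h>

    /* Read the ARMv7 Multiprocessor Affinity Register. */
    static inline uint32_t read_mpidr(void)
    {
        uint32_t mpidr;
        __asm__ __volatile__("mrc p15, 0, %0, c0, c0, 5" : "=r" (mpidr));
        return mpidr;
    }

    /* Aff0 is the core number within a cluster, Aff1 the cluster id;
     * e.g. core 1 of cluster 1 reads back as 0x101, so using the raw
     * value as a flat array index is sparse on multi-cluster parts. */
    #define MPIDR_AFF0(m)  ((m) & 0xff)
    #define MPIDR_AFF1(m)  (((m) >> 8) & 0xff)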

> I'd expect pulling things from registers to be faster in the normal case,
> but in this specific scenario I'd imagine the base of the stack will be
> pretty cache hot, since it has all the guest state in it etc., which we've
> probably fairly recently pushed to or are about to pop from.

Agreed.
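
For completeness, the stash-it-on-the-stack variant would look
something like this (rough sketch with made-up names and sizes,
assuming STACK_SIZE-byte, STACK_SIZE-aligned per-VCPU stacks):

    #define STACK_SIZE 4096              /* made-up value */

    /* A small block at the base the stack grows down from, next to the
     * saved guest state pushed on entry. */
    struct cpu_info {
        /* ... saved guest registers ... */
        unsigned int processor_id;
    };

    static inline struct cpu_info *get_cpu_info(void)
    {
        unsigned long sp;
        __asm__ ("mov %0, sp" : "=r" (sp));
        /* Round down to the start of the stack, then take the last
         * cpu_info-sized slot at the top. */
        return (struct cpu_info *)((sp & ~(STACK_SIZE - 1ul)) + STACK_SIZE) - 1;
    }

    #define smp_processor_id() (get_cpu_info()->processor_id)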

Tim.


Thread overview: 7+ messages
2012-02-15 16:49 [PATCH] arm: use a per-VCPU stack Ian Campbell
2012-02-18 13:52 ` Tim Deegan
2012-02-19  8:44   ` Ian Campbell
2012-02-19  9:31     ` Tim Deegan [this message]
2012-02-20 14:43       ` [PATCH v2] " Ian Campbell
2012-02-20 14:58         ` Tim Deegan
2012-02-22 14:33           ` Ian Campbell
