From: Keir Fraser <keir.fraser@eu.citrix.com>
To: "Wei, Gang" <gang.wei@intel.com>,
"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
Date: Tue, 20 Apr 2010 15:21:44 +0100
Message-ID: <C7F37708.11E5E%keir.fraser@eu.citrix.com>
In-Reply-To: <F26D193E20BBDC42A43B611D1BDEDE710270AE3EBC@shsmsx502.ccr.corp.intel.com>
On 20/04/2010 15:04, "Wei, Gang" <gang.wei@intel.com> wrote:
> On Tuesday, 2010-4-20 8:49 PM, Keir Fraser wrote:
>> Is this a measurable win? The new locking looks like it could be
>> dodgy on 32-bit Xen: the 64-bit reads of timer_deadline_{start,end}
>> will be non-atomic and unsynchronised, so you can read garbage. Even
>> on 64-bit Xen you can read stale values. I'd be surprised if you got
>> a performance win from chopping up the critical regions in individual
>> functions like that anyway.
>
> First of all, this is a measurable power win for the CPU-overcommitted idle
> case (vcpu:pcpu > 2:1, pcpu >= 32, guests running non-tickless kernels).
So lots of short sleep periods, and possibly only a few HPET channels to
share? How prevalent is the always-running APIC timer now, and is it going
to be supported in future processors?
> As for the non-atomic access to timer_deadline_{start,end}, that was already
> possible before this patch: the fields are not protected by the hpet lock.
> Shall we add an rwlock for each timer_deadline_{start,end}? That can be done
> separately.
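
For illustration, a minimal self-contained sketch of the torn-read concern
(hypothetical names, plain pthreads rather than Xen's spinlocks; not the
actual Xen code): on a 32-bit build a 64-bit field is accessed as two 32-bit
halves, so an unlocked reader racing with the writer can observe a mix of old
and new halves, and taking the same lock on both sides is one way to rule
that out.

#include <pthread.h>
#include <stdint.h>

/* Hypothetical stand-ins for the per-CPU fields discussed above. */
static uint64_t timer_deadline_end;
static pthread_mutex_t deadline_lock = PTHREAD_MUTEX_INITIALIZER;

/* Writer: the CPU that owns the deadline. */
static void set_deadline(uint64_t v)
{
    pthread_mutex_lock(&deadline_lock);
    timer_deadline_end = v;      /* two 32-bit stores on a 32-bit build */
    pthread_mutex_unlock(&deadline_lock);
}

/* Reader, e.g. the broadcast handler.  Without the lock, the two
 * 32-bit loads could straddle a concurrent update and return a value
 * that never existed. */
static uint64_t get_deadline(void)
{
    uint64_t v;

    pthread_mutex_lock(&deadline_lock);
    v = timer_deadline_end;
    pthread_mutex_unlock(&deadline_lock);

    return v;
}

An rwlock or a seqlock-style retry on the read side would serve equally well;
the point is only that both halves of the 64-bit value must be observed
consistently.
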
The bug wasn't previously there, since the fields will not be read unless the
CPU is in ch->cpumask, which was protected by ch->lock. That was sufficient
because a CPU would not modify timer_deadline_{start,end} between
hpet_broadcast_enter and hpet_broadcast_exit. After your patch,
handle_hpet_broadcast is no longer fully synchronised against those
functions.
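
A simplified, self-contained sketch of that invariant (hypothetical names and
plain pthreads; not the actual xen/arch/x86/hpet.c code): the handler only
dereferences a CPU's deadline while that CPU is in the channel's cpumask, and
both the mask walk and membership changes happen under the same lock, so the
reads can never race with the owning CPU's writes.

#include <pthread.h>
#include <stdint.h>

#define NR_CPUS 64

/* Hypothetical, simplified channel structure. */
struct channel {
    pthread_mutex_t lock;                /* stands in for ch->lock      */
    uint64_t        cpumask;             /* bit n set => CPU n attached */
    uint64_t        deadline[NR_CPUS];   /* stable while CPU n attached */
};

/* hpet_broadcast_enter analogue: the CPU sets its deadline *before*
 * joining the mask, and does not touch it again until it has left. */
static void broadcast_enter(struct channel *ch, unsigned int cpu, uint64_t t)
{
    ch->deadline[cpu] = t;               /* safe: not yet in cpumask */
    pthread_mutex_lock(&ch->lock);
    ch->cpumask |= 1ULL << cpu;
    pthread_mutex_unlock(&ch->lock);
}

/* hpet_broadcast_exit analogue. */
static void broadcast_exit(struct channel *ch, unsigned int cpu)
{
    pthread_mutex_lock(&ch->lock);
    ch->cpumask &= ~(1ULL << cpu);
    pthread_mutex_unlock(&ch->lock);
    /* only now may this CPU modify deadline[cpu] again */
}

/* handle_hpet_broadcast analogue: deadlines are read only for CPUs
 * found in the mask, and only while the lock is held. */
static uint64_t broadcast_min_deadline(struct channel *ch)
{
    uint64_t min = UINT64_MAX;
    unsigned int cpu;

    pthread_mutex_lock(&ch->lock);
    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        if ( (ch->cpumask & (1ULL << cpu)) && ch->deadline[cpu] < min )
            min = ch->deadline[cpu];
    pthread_mutex_unlock(&ch->lock);

    return min;
}

Splitting the handler's critical region so that the cpumask test and the
deadline read happen under separate lock acquisitions is exactly what loses
this guarantee.
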
-- Keir
Thread overview: 19+ messages
2010-04-20 5:39 [PATCH] CPUIDLE: shorten hpet spin_lock holding time Wei, Gang
2010-04-20 12:49 ` Keir Fraser
2010-04-20 14:04 ` Wei, Gang
2010-04-20 14:21 ` Keir Fraser [this message]
2010-04-20 15:20 ` Wei, Gang
2010-04-20 16:05 ` Wei, Gang
2010-04-21 8:09 ` Keir Fraser
2010-04-21 9:06 ` Wei, Gang
2010-04-21 9:25 ` Keir Fraser
2010-04-21 9:36 ` Wei, Gang
2010-04-21 9:52 ` Keir Fraser
2010-04-21 10:03 ` Keir Fraser
2010-04-22 3:59 ` Wei, Gang
2010-04-22 7:22 ` Keir Fraser
2010-04-22 8:19 ` Keir Fraser
2010-04-22 8:23 ` Keir Fraser
2010-04-29 11:08 ` Wei, Gang
2010-04-22 8:21 ` Keir Fraser
2010-04-29 11:14 ` Wei, Gang