From: Keir Fraser <keir.fraser@eu.citrix.com>
To: "Wei, Gang" <gang.wei@intel.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [PATCH] CPUIDLE: shorten hpet spin_lock holding time
Date: Tue, 20 Apr 2010 13:49:04 +0100	[thread overview]
Message-ID: <C7F36150.11E34%keir.fraser@eu.citrix.com> (raw)
In-Reply-To: <F26D193E20BBDC42A43B611D1BDEDE710270AE3CC3@shsmsx502.ccr.corp.intel.com>

Is this a measurable win? The new locking looks like it could be dodgy on
32-bit Xen: the 64-bit reads of timer_deadline_{start,end} will be
non-atomic and unsynchronised, so you can read garbage. Even on 64-bit Xen
you can read stale values. I'd be surprised if you got a performance win
from chopping up critical regions in individual functions like that anyway.
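
To make the 32-bit concern concrete, here is a minimal, hypothetical simulation
(not Xen code): on 32-bit x86 a 64-bit load compiles to two 32-bit loads, so a
reader racing an unsynchronised writer can combine old and new halves. The
interleaving is forced deterministically below; names are illustrative only.

```c
#include <stdint.h>

/* A 64-bit value as two 32-bit halves, as a 32-bit CPU accesses it. */
typedef struct { uint32_t lo, hi; } split64;

/* Simulate the racy interleaving: the reader samples the value after the
 * writer has stored the low half of the new value but before the high half.
 * Old value: 0x00000000FFFFFFFF; new value: 0x0000000100000000. */
uint64_t torn_read_demo(void)
{
    split64 deadline = { .lo = 0xFFFFFFFFu, .hi = 0x00000000u };

    deadline.lo = 0x00000000u;                   /* writer: low half of new value */
    uint64_t seen = ((uint64_t)deadline.hi << 32) | deadline.lo; /* reader runs here */
    deadline.hi = 0x00000001u;                   /* writer: high half completes */
    (void)deadline;

    return seen; /* 0 - matches neither the old nor the new value */
}
```

The sampled value is garbage in exactly the sense above: it was never stored by
any writer, which is why a lock (or an atomic 64-bit access) is needed.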

 -- Keir

On 20/04/2010 06:39, "Wei, Gang" <gang.wei@intel.com> wrote:

> CPUIDLE: shorten hpet spin_lock holding time
> 
> Try to reduce spin_lock overhead for deep C state entry/exit. This will
> benefit systems with many cpus which need the hpet broadcast to wake up
> from deep C states.
> 
> Signed-off-by: Wei Gang <gang.wei@intel.com>
> 
> diff -r 7ee8bb40200a xen/arch/x86/hpet.c
> --- a/xen/arch/x86/hpet.c Thu Apr 15 19:11:16 2010 +0100
> +++ b/xen/arch/x86/hpet.c Fri Apr 16 15:05:28 2010 +0800
> @@ -186,6 +186,9 @@ static void handle_hpet_broadcast(struct
>  
>  again:
>      ch->next_event = STIME_MAX;
> +
> +    spin_unlock_irq(&ch->lock);
> +
>      next_event = STIME_MAX;
>      mask = (cpumask_t)CPU_MASK_NONE;
>      now = NOW();
> @@ -204,10 +207,14 @@ again:
>  
>      if ( next_event != STIME_MAX )
>      {
> -        if ( reprogram_hpet_evt_channel(ch, next_event, now, 0) )
> +        spin_lock_irq(&ch->lock);
> +
> +        if ( next_event < ch->next_event &&
> +             reprogram_hpet_evt_channel(ch, next_event, now, 0) )
>              goto again;
> -    }
> -    spin_unlock_irq(&ch->lock);
> +
> +        spin_unlock_irq(&ch->lock);
> +    }
>  }
>  
>  static void hpet_interrupt_handler(int irq, void *data,
> @@ -656,10 +663,15 @@ void hpet_broadcast_enter(void)
>      BUG_ON( !ch );
>  
>      ASSERT(!local_irq_is_enabled());
> -    spin_lock(&ch->lock);
>  
>      if ( hpet_attach_channel )
> +    {
> +        spin_lock(&ch->lock);
> +
>          hpet_attach_channel(cpu, ch);
> +
> +        spin_unlock(&ch->lock);
> +    }
>  
>      /* Cancel any outstanding LAPIC timer event and disable interrupts. */
>      reprogram_timer(0);
> @@ -667,6 +679,8 @@ void hpet_broadcast_enter(void)
>  
>      cpu_set(cpu, ch->cpumask);
>  
> +    spin_lock(&ch->lock);
> +
>      /* reprogram if current cpu expire time is nearer */
>      if ( this_cpu(timer_deadline_end) < ch->next_event )
> +        reprogram_hpet_evt_channel(ch, this_cpu(timer_deadline_end), NOW(), 1);
> @@ -683,8 +697,6 @@ void hpet_broadcast_exit(void)
>          return;
>  
>      BUG_ON( !ch );
> -
> -    spin_lock_irq(&ch->lock);
>  
>      if ( cpu_test_and_clear(cpu, ch->cpumask) )
>      {
> @@ -693,14 +705,22 @@ void hpet_broadcast_exit(void)
>          if ( !reprogram_timer(this_cpu(timer_deadline_start)) )
>              raise_softirq(TIMER_SOFTIRQ);
>  
> +        spin_lock_irq(&ch->lock);
> +
>          if ( cpus_empty(ch->cpumask) && ch->next_event != STIME_MAX )
>              reprogram_hpet_evt_channel(ch, STIME_MAX, 0, 0);
> +
> +        spin_unlock_irq(&ch->lock);
>      }
>  
>      if ( hpet_detach_channel )
> +    {
> +        spin_lock_irq(&ch->lock);
> +
>          hpet_detach_channel(cpu);
>  
> -    spin_unlock_irq(&ch->lock);
> +        spin_unlock_irq(&ch->lock);
> +    }
>  }
>  
>  int hpet_broadcast_is_available(void)
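
The pattern the patch applies in handle_hpet_broadcast - drop the channel lock
across the (potentially long) scan for the nearest deadline, then retake it and
re-check before reprogramming - can be sketched outside Xen with pthreads. This
is a hypothetical stand-alone sketch; the lock, field, and function names only
mimic the Xen ones:

```c
#include <pthread.h>
#include <stdint.h>

#define STIME_MAX INT64_MAX

static pthread_mutex_t ch_lock = PTHREAD_MUTEX_INITIALIZER;
static int64_t ch_next_event = STIME_MAX;

/* Hypothetical stand-in for scanning all cpus for the nearest deadline;
 * in the real code this walks ch->cpumask and takes time proportional to
 * the number of cpus. */
static int64_t scan_for_next_event(void)
{
    return 1000;
}

void handle_broadcast(void)
{
    pthread_mutex_lock(&ch_lock);
    ch_next_event = STIME_MAX;
    pthread_mutex_unlock(&ch_lock);   /* lock dropped across the scan */

    int64_t next_event = scan_for_next_event();

    if (next_event != STIME_MAX) {
        pthread_mutex_lock(&ch_lock);
        /* Re-check under the lock: another cpu may have installed an
         * earlier deadline while we scanned unlocked.  This re-check is
         * the "next_event < ch->next_event" test the patch adds. */
        if (next_event < ch_next_event)
            ch_next_event = next_event;
        pthread_mutex_unlock(&ch_lock);
    }
}
```

The re-check narrows but does not remove the window Keir points at: anything
read outside the lock (here, the deadlines the scan consumed) can already be
stale by the time the lock is retaken.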
