From: Andrew Cooper
Subject: Re: [PATCH 2/2] x86/HVM: batch vCPU wakeups
Date: Thu, 11 Sep 2014 12:11:46 +0100
Message-ID: <54118372.4000202@citrix.com>
References: <541189280200007800033BB5@mail.emea.novell.com> <54118A3A0200007800033BCD@mail.emea.novell.com> <54117DE0.1010203@citrix.com> <54119D850200007800033CBD@mail.emea.novell.com>
In-Reply-To: <54119D850200007800033CBD@mail.emea.novell.com>
To: Jan Beulich
Cc: Ian Campbell, xen-devel, Keir Fraser, Ian Jackson, Tim Deegan
List-Id: xen-devel@lists.xenproject.org

On 11/09/14 12:03, Jan Beulich wrote:
>>>> On 11.09.14 at 12:48, wrote:
>> On 11/09/14 10:40, Jan Beulich wrote:
>>>  void cpu_raise_softirq(unsigned int cpu, unsigned int nr)
>>>  {
>>> -    if ( !test_and_set_bit(nr, &softirq_pending(cpu))
>>> -         && (cpu != smp_processor_id())
>>> -         && !arch_skip_send_event_check(cpu) )
>>> +    unsigned int this_cpu = smp_processor_id();
>>> +
>>> +    if ( test_and_set_bit(nr, &softirq_pending(cpu))
>>> +         || (cpu == this_cpu)
>>> +         || arch_skip_send_event_check(cpu) )
>>> +        return;
>>> +
>>> +    if ( !per_cpu(batching, this_cpu) || in_irq() )
>>>          smp_send_event_check_cpu(cpu);
>>> +    else
>>> +        set_bit(nr, &per_cpu(batch_mask, this_cpu));
>>
>> Under what circumstances would it be sensible to batch calls to
>> cpu_raise_softirq()?
>>
>> All of the current callers are singleshot events, and their use in a
>> batched period would only be as a result of a timer interrupt, which
>> bypasses the batching.
> You shouldn't be looking at the immediate callers of
> cpu_raise_softirq(), but at those much higher up the stack.
> Rooted at vlapic_ipi(), depending on the scheduler you might
> end up in credit1's __runq_tickle() (calling cpumask_raise_softirq())
> or credit2's runq_tickle() (calling cpu_raise_softirq()).
>
> Jan

Ah true, which is valid to batch.

Reviewed-by: Andrew Cooper