From mboxrd@z Thu Jan  1 00:00:00 1970
From: Ingo Molnar
Subject: Re: [PATCH V3 RESEND RFC 1/2] sched: Bail out of yield_to when
	source and target runqueue has one task
Date: Thu, 24 Jan 2013 11:32:13 +0100
Message-ID: <20130124103213.GD27602@gmail.com>
References: <20130122073854.24731.9426.sendpatchset@codeblue.in.ibm.com>
	<20130122073913.24731.65118.sendpatchset@codeblue.in.ibm.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Peter Zijlstra, Avi Kivity, "H. Peter Anvin", Thomas Gleixner,
	Gleb Natapov, Ingo Molnar, Marcelo Tosatti, Rik van Riel, Srikar,
	"Nikunj A. Dadhania", KVM, Jiannan Ouyang, Chegu Vinod,
	"Andrew M. Theurer", LKML, Srivatsa Vaddagiri, Andrew Jones
To: Raghavendra K T
Return-path:
Received: from mail-ee0-f44.google.com ([74.125.83.44]:59328 "EHLO
	mail-ee0-f44.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752489Ab3AXKcT (ORCPT );
	Thu, 24 Jan 2013 05:32:19 -0500
Content-Disposition: inline
In-Reply-To: <20130122073913.24731.65118.sendpatchset@codeblue.in.ibm.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

* Raghavendra K T wrote:

> From: Peter Zijlstra
>
> In undercommitted scenarios, especially in large guests, yield_to
> overhead is significantly high. When the run queue length of both the
> source and the target is one, take the opportunity to bail out and
> return -ESRCH. This return condition can be further exploited to come
> out of the PLE handler quickly.
>
> (History: Raghavendra initially worked on breaking out of the KVM PLE
> handler upon seeing source runqueue length = 1, but that required
> exporting the rq length.) Peter came up with the elegant idea of
> returning -ESRCH in the scheduler core.
>
> Signed-off-by: Peter Zijlstra
> [Raghavendra: checking the rq length of target vcpu condition added (thanks Avi)]
> Reviewed-by: Srikar Dronamraju
> Signed-off-by: Raghavendra K T
> Acked-by: Andrew Jones
> Tested-by: Chegu Vinod
> ---
>
>  kernel/sched/core.c | 25 +++++++++++++++++++------
>  1 file changed, 19 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 2d8927f..fc219a5 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4289,7 +4289,10 @@ EXPORT_SYMBOL(yield);
>   * It's the caller's job to ensure that the target task struct
>   * can't go away on us before we can do any checks.
>   *
> - * Returns true if we indeed boosted the target task.
> + * Returns:
> + *	true (>0) if we indeed boosted the target task.
> + *	false (0) if we failed to boost the target.
> + *	-ESRCH if there's no task to yield to.
>   */
>  bool __sched yield_to(struct task_struct *p, bool preempt)
>  {
> @@ -4303,6 +4306,15 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
>
>  again:
>  	p_rq = task_rq(p);
> +	/*
> +	 * If we're the only runnable task on the rq and target rq also
> +	 * has only one task, there's absolutely no point in yielding.
> +	 */
> +	if (rq->nr_running == 1 && p_rq->nr_running == 1) {
> +		yielded = -ESRCH;
> +		goto out_irq;
> +	}

Looks good to me in principle.

Would be nice to get more consistent benchmark numbers. Once those are
unambiguously showing that this is a win:

  Acked-by: Ingo Molnar

Thanks,

	Ingo