Message-ID: <50606050.309@linux.vnet.ibm.com>
Date: Mon, 24 Sep 2012 18:59:52 +0530
From: Raghavendra K T
Organization: IBM
To: Peter Zijlstra
Cc: "H. Peter Anvin", Marcelo Tosatti, Ingo Molnar, Avi Kivity,
    Rik van Riel, Srikar, "Nikunj A. Dadhania", KVM, Jiannan Ouyang,
    chegu vinod, "Andrew M. Theurer", LKML, Srivatsa Vaddagiri,
    Gleb Natapov, Andrew Jones
Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios in PLE handler
References: <20120921115942.27611.67488.sendpatchset@codeblue>
            <1348486479.11847.46.camel@twins>
            <50604988.2030506@linux.vnet.ibm.com>
            <1348490165.11847.58.camel@twins>
In-Reply-To: <1348490165.11847.58.camel@twins>

On 09/24/2012 06:06 PM, Peter Zijlstra wrote:
> On Mon, 2012-09-24 at 17:22 +0530, Raghavendra K T wrote:
>> On 09/24/2012 05:04 PM, Peter Zijlstra wrote:
>>> On Fri, 2012-09-21 at 17:29 +0530, Raghavendra K T wrote:
>>>> In some special scenarios like #vcpu <= #pcpu, the PLE handler may
>>>> prove very costly, because there is no need to iterate over the vcpus
>>>> and do unsuccessful yield_to calls, burning CPU.
>>>
>>> What's the costly thing? The vm-exit, the yield (which should be a nop
>>> if it's the only task there) or something else entirely?
>>>
>> Both the vmexit and the yield_to(), actually,
>>
>> because an unsuccessful yield_to() is costly overall in the PLE handler.
>>
>> This is because when we have large guests, say 32/16 vcpus, and one
>> vcpu is holding a lock while the rest of the vcpus are waiting for it,
>> then on a PL-exit each vcpu tries to iterate over the rest of the vcpu
>> list in the VM and do a directed yield, unsuccessfully (O(n^2) tries).
>>
>> This results in a fairly high amount of CPU burning and double run-queue
>> lock contention.
>>
>> (If they had kept spinning, lock progress would probably have been faster.)
>> As Avi/Chegu Vinod felt, it would be better to avoid the vmexit itself,
>> but that seems a little complex to achieve currently.
>
> OK, so the vmexit stays and we need to improve yield_to.
>
> How about something like the below, that would allow breaking out of the
> for-each-vcpu loop and simply going back into the vm, right?
>
> ---
>  kernel/sched/core.c | 25 +++++++++++++++++++------
>  1 file changed, 19 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index b38f00e..5d5b355 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4272,7 +4272,10 @@ EXPORT_SYMBOL(yield);
>   * It's the caller's job to ensure that the target task struct
>   * can't go away on us before we can do any checks.
>   *
> - * Returns true if we indeed boosted the target task.
> + * Returns:
> + *	true (>0) if we indeed boosted the target task.
> + *	false (0) if we failed to boost the target.
> + *	-ESRCH if there's no task to yield to.
>   */
>  bool __sched yield_to(struct task_struct *p, bool preempt)
>  {
> @@ -4284,6 +4287,15 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
>  	local_irq_save(flags);
>  	rq = this_rq();
>
> +	/*
> +	 * If we're the only runnable task on the rq, there's absolutely no
> +	 * point in yielding.
> +	 */
> +	if (rq->nr_running == 1) {
> +		yielded = -ESRCH;
> +		goto out_irq;
> +	}
> +
>  again:
>  	p_rq = task_rq(p);
>  	double_rq_lock(rq, p_rq);
> @@ -4293,13 +4305,13 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
>  	}
>
>  	if (!curr->sched_class->yield_to_task)
> -		goto out;
> +		goto out_unlock;
>
>  	if (curr->sched_class != p->sched_class)
> -		goto out;
> +		goto out_unlock;
>
>  	if (task_running(p_rq, p) || p->state)
> -		goto out;
> +		goto out_unlock;
>
>  	yielded = curr->sched_class->yield_to_task(rq, p, preempt);
>  	if (yielded) {
> @@ -4312,11 +4324,12 @@ bool __sched yield_to(struct task_struct *p, bool preempt)
>  		resched_task(p_rq->curr);
>  	}
>
> -out:
> +out_unlock:
>  	double_rq_unlock(rq, p_rq);
> +out_irq:
>  	local_irq_restore(flags);
>
> -	if (yielded)
> +	if (yielded > 0)
>  		schedule();
>
>  	return yielded;
>

Yes, I think this is a nice idea. Any future users of yield_to() would
benefit from it too, and we only have to iterate until the first yield_to()
attempt. I'll run the tests with this patch.

However, Rik had a genuine concern about the cases where the runqueues are
not equally loaded: the lock holder might actually be sitting on a
different run queue, but not running.

Do you think that, instead of using rq->nr_running, we could get a global
sense of load using avenrun (something like avenrun/num_online_cpus)?
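
On the KVM side, the consumer I have in mind would look roughly like the
sketch below. This is only an illustration of the intended control flow,
not an actual patch: the loop is heavily simplified from kvm_vcpu_on_spin()
in virt/kvm/kvm_main.c, and it assumes kvm_vcpu_yield_to() is also changed
to pass the new -ESRCH value up (today it only reports success/failure).

#include <linux/errno.h>
#include <linux/kvm_host.h>

/*
 * Sketch only: stop the directed-yield walk as soon as yield_to() says
 * there is nobody on our runqueue to yield to.
 */
void kvm_vcpu_on_spin(struct kvm_vcpu *me)
{
	struct kvm *kvm = me->kvm;
	struct kvm_vcpu *vcpu;
	int yielded = 0;
	int i;

	kvm_for_each_vcpu(i, vcpu, kvm) {
		if (vcpu == me)
			continue;

		/* assumed to propagate the return value of yield_to() */
		yielded = kvm_vcpu_yield_to(vcpu);
		if (yielded > 0)
			break;	/* boosted somebody, done */
		if (yielded == -ESRCH)
			break;	/* alone on our rq, just go back into the guest */
		/* yielded == 0: this target was not usable, try the next one */
	}
}

That way an undercommitted guest pays for at most one failed attempt
instead of walking the entire vcpu list on every PL-exit.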
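
And just to make the avenrun question concrete, something along these
lines is what I mean (again only a sketch: the helper name and the
threshold are made up, and avenrun[] is only refreshed every LOAD_FREQ,
so it is a fairly coarse signal):

#include <linux/sched.h>	/* avenrun[], FSHIFT */
#include <linux/cpumask.h>	/* num_online_cpus() */

/*
 * Sketch only: use the 1-minute load average, compared against the number
 * of online cpus, as a global sense of load.  avenrun[] is fixed point
 * with FSHIFT fractional bits.
 */
static bool system_looks_overcommitted(void)
{
	/* integer part of the 1-min loadavg vs. number of online cpus */
	return (avenrun[0] >> FSHIFT) > num_online_cpus();
}

A check like that might not be fooled by the case Rik mentioned, where our
own rq happens to look idle but the lock holder is queued on another cpu.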