Message-ID: <5061B437.8070300@linux.vnet.ibm.com>
Date: Tue, 25 Sep 2012 19:10:07 +0530
From: Raghavendra K T
To: Peter Zijlstra
CC: "H. Peter Anvin", Marcelo Tosatti, Ingo Molnar, Avi Kivity,
 Rik van Riel, Srikar, "Nikunj A. Dadhania", KVM, Jiannan Ouyang,
 chegu vinod, "Andrew M. Theurer", LKML, Srivatsa Vaddagiri,
 Gleb Natapov, Andrew Jones
Subject: Re: [PATCH RFC 0/2] kvm: Improving undercommit,overcommit scenarios
 in PLE handler
References: <20120921115942.27611.67488.sendpatchset@codeblue>
 <1348486479.11847.46.camel@twins> <50604988.2030506@linux.vnet.ibm.com>
 <1348490165.11847.58.camel@twins> <50606050.309@linux.vnet.ibm.com>
 <1348494895.11847.64.camel@twins> <50606B33.1040102@linux.vnet.ibm.com>
In-Reply-To: <50606B33.1040102@linux.vnet.ibm.com>

On 09/24/2012 07:46 PM, Raghavendra K T wrote:
> On 09/24/2012 07:24 PM, Peter Zijlstra wrote:
>> On Mon, 2012-09-24 at 18:59 +0530, Raghavendra K T wrote:
>>> However Rik had a genuine concern in the cases where the runqueue is
>>> not equally distributed and the lock holder might actually be on a
>>> different runqueue but not running.
>>
>> Load should eventually get distributed equally -- that's what the
>> load-balancer is for -- so this is a temporary situation.
>>
>> We already try and favour the non-running vcpu in this case, that's
>> what yield_to_task_fair() is about. If it's still not eligible to run,
>> tough luck.
>
> Yes, I agree.
>
>>
>>> Do you think instead of using rq->nr_running, we could get a global
>>> sense of load using avenrun (something like avenrun/num_online_cpus)?
>>
>> To what purpose? Also, global stuff is expensive, so you should try
>> and stay away from it as hard as you possibly can.
>
> Yes, that concern is what made me fall back to rq->nr_running.
>
> Will come back with the result soon.

Got the results with the patches; here they are.

Tried this on a 32-core PLE box with HT disabled.
32 guest vcpus with 1x and 2x overcommit.

Base = 3.6.0-rc5 + ple handler optimization patches
A    = Base + checking rq_running in vcpu_on_spin() patch
B    = Base + checking rq->nr_running in sched/core
C    = Base - PLE

---+-----------+-----------+-----------+-----------+
   |    Ebizzy result (rec/sec, higher is better)  |
---+-----------+-----------+-----------+-----------+
   |    Base   |     A     |     B     |     C     |
---+-----------+-----------+-----------+-----------+
1x | 2374.1250 | 7273.7500 | 5690.8750 | 7364.3750 |
2x | 2536.2500 | 2458.5000 | 2426.3750 |   48.5000 |
---+-----------+-----------+-----------+-----------+

% improvement w.r.t. Base
---+------------+------------+------------+
   |     A      |     B      |     C      |
---+------------+------------+------------+
1x |  206.37603 |  139.70410 |  210.19323 |
2x |   -3.06555 |   -4.33218 |  -98.08773 |
---+------------+------------+------------+

We are getting almost the benefit of the PLE-disabled case with this
approach in the 1x (undercommit) run. With patch B the gain drops a
bit, because we still iterate over the vcpus until we decide to do a
directed yield.
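For reference, the kind of check patch A adds looks roughly like the
sketch below. This is only an illustration, not the exact diff posted
earlier in the thread; this_rq_nr_running() here is a stand-in for
whatever helper ends up exporting the current runqueue's nr_running
from kernel/sched/core.c to KVM.

    /* virt/kvm/kvm_main.c -- illustrative sketch only, not the posted patch */

    /*
     * Stand-in for a helper exported from kernel/sched/core.c that
     * returns the number of runnable tasks on this CPU's runqueue.
     */
    extern unsigned int this_rq_nr_running(void);

    void kvm_vcpu_on_spin(struct kvm_vcpu *me)
    {
            /*
             * Undercommit check: if we are the only runnable task on
             * this physical CPU, the lock holder is most likely running
             * on some other CPU, so a directed yield buys us nothing.
             * Bail out early and keep spinning instead of paying the
             * cost of walking the vcpu list and calling yield_to().
             */
            if (this_rq_nr_running() == 1)
                    return;

            /* ... existing directed-yield candidate selection follows ... */
    }

Patch B instead puts an equivalent rq->nr_running == 1 bail-out on the
sched/core side, inside the yield path, which is consistent with it
gaining a little less here: by the time the yield is rejected there,
kvm_vcpu_on_spin() has already walked the vcpu list to pick a
candidate.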