From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1757963Ab2F1CQO (ORCPT); Wed, 27 Jun 2012 22:16:14 -0400
Received: from e28smtp05.in.ibm.com ([122.248.162.5]:52962 "EHLO
	e28smtp05.in.ibm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1753821Ab2F1CQL (ORCPT);
	Wed, 27 Jun 2012 22:16:11 -0400
Message-ID: <4FEBBE06.2060000@linux.vnet.ibm.com>
Date: Thu, 28 Jun 2012 07:44:30 +0530
From: Raghavendra K T
Organization: IBM
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:10.0.1) Gecko/20120216
	Thunderbird/10.0.1
MIME-Version: 1.0
To: Gleb Natapov
CC: Rik van Riel, Avi Kivity, Marcelo Tosatti, Srikar,
	Srivatsa Vaddagiri, Peter Zijlstra, "Nikunj A. Dadhania",
	KVM, Ingo Molnar, LKML
Subject: Re: [PATCH] kvm: handle last_boosted_vcpu = 0 case
References: <20120619202047.26191.40429.sendpatchset@codeblue>
	<20120619165104.2a4574f8@annuminas.surriel.com>
	<20120621064358.GS6533@redhat.com>
In-Reply-To: <20120621064358.GS6533@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
x-cbid: 12062802-8256-0000-0000-00000314CF85
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

On 06/21/2012 12:13 PM, Gleb Natapov wrote:
> On Tue, Jun 19, 2012 at 04:51:04PM -0400, Rik van Riel wrote:
>> On Wed, 20 Jun 2012 01:50:50 +0530
>> Raghavendra K T wrote:
>>
>>> In the PLE handler code, the last_boosted_vcpu (lbv) variable
>>> serves as the reference point for where the scan starts when we
>>> enter the handler.
>>
>>> Also, statistical analysis (below) shows that lbv is not very well
>>> distributed with the current approach.
>>
>> You are the second person to spot this bug today (yes, today).
>>
>> Due to time zones, the first person has not had a chance yet to
>> test the patch below, which might fix the issue...
>>
>> Please let me know how it goes.
>>
>> ====8<====
>>
>> If last_boosted_vcpu == 0, then we fall through all test cases and
>> may end up with all VCPUs pouncing on vcpu 0. With a large enough
>> guest, this can result in enormous runqueue lock contention, which
>> can prevent vcpu0 from running, leading to a livelock.
>>
>> Changing < to <= makes sure we properly handle that case.
>>
>> Signed-off-by: Rik van Riel
>> ---
>>  virt/kvm/kvm_main.c |    2 +-
>>  1 files changed, 1 insertions(+), 1 deletions(-)
>>
>> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
>> index 7e14068..1da542b 100644
>> --- a/virt/kvm/kvm_main.c
>> +++ b/virt/kvm/kvm_main.c
>> @@ -1586,7 +1586,7 @@ void kvm_vcpu_on_spin(struct kvm_vcpu *me)
>>  	 */
>>  	for (pass = 0; pass < 2 && !yielded; pass++) {
>>  		kvm_for_each_vcpu(i, vcpu, kvm) {
>> -			if (!pass && i < last_boosted_vcpu) {
>> +			if (!pass && i <= last_boosted_vcpu) {
>>  				i = last_boosted_vcpu;
>>  				continue;
>>  			} else if (pass && i > last_boosted_vcpu)
>>
> Looks correct. We can simplify this by introducing something like:
>
> #define kvm_for_each_vcpu_from(idx, n, vcpup, kvm) \
> 	for (n = atomic_read(&kvm->online_vcpus); \
> 	     n && (vcpup = kvm_get_vcpu(kvm, idx)) != NULL; \
> 	     n--, idx = (idx + 1) % atomic_read(&kvm->online_vcpus))
>

Gleb, Rik,
Any updates on this, or on the status of Rik's patch? I can come up with
the cleanup patch suggested above, carrying Gleb's From and
Signed-off-by tags. Please let me know.
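[Editor's note: to illustrate why the `<` to `<=` change matters, here is a minimal user-space sketch of the two-pass scan in kvm_vcpu_on_spin(). `scan_order()` is a hypothetical helper, not kernel code: it only records which vcpu indices the loop would visit, and omits the real code's yield attempts and the skip of the current vcpu.]

```c
#include <assert.h>

/*
 * Sketch of the two-pass round-robin scan with the fixed
 * "i <= last_boosted_vcpu" test. Records the vcpu indices visited,
 * in order, into order[] and returns how many were visited.
 */
static int scan_order(int nr_vcpus, int last_boosted_vcpu, int *order)
{
	int count = 0;
	int pass, i;

	for (pass = 0; pass < 2; pass++) {
		for (i = 0; i < nr_vcpus; i++) {
			if (!pass && i <= last_boosted_vcpu) {
				/* first pass: jump ahead to lbv + 1 */
				i = last_boosted_vcpu;
				continue;
			} else if (pass && i > last_boosted_vcpu)
				/* second pass: stop after wrapping back to lbv */
				break;
			order[count++] = i;
		}
	}
	return count;
}
```

With four vcpus and last_boosted_vcpu == 0, this visits 1, 2, 3, 0: each vcpu exactly once, with vcpu 0 considered last. Under the old `<` test, the same inputs make the first pass start at vcpu 0 (and the second pass visit it again), which is how every spinning vcpu ends up pouncing on vcpu 0.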