Date: Wed, 26 May 2021 16:15:10 +0000
From: Sean Christopherson
To: Peter Zijlstra
Cc: Masanori Misono, David Woodhouse, Paolo Bonzini, Rohit Jain, Ingo Molnar, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RFC 0/1] Make vCPUs that are HLT state candidates for load balancing
References: <20210526133727.42339-1-m.misono760@gmail.com>

On Wed, May 26, 2021, Peter Zijlstra wrote:
> On Wed, May 26, 2021 at 10:37:26PM +0900, Masanori Misono wrote:
> > Hi,
> >
> > I observed performance degradation when running some parallel programs on a
> > VM that has (1) KVM_FEATURE_PV_UNHALT, (2) KVM_FEATURE_STEAL_TIME, and (3)
> > multi-core architecture. The benchmark results are shown at the bottom. An
> > example of libvirt XML for creating such a VM is
> >
> > ```
> > [...]
> > 8
> > [...]
> > ```
> >
> > I investigated the cause and found that the problem occurs in the
> > following way:
> >
> > - vCPU1 schedules thread A, and vCPU2 schedules thread B. vCPU1 and vCPU2
> >   share the LLC.
> > - Thread A tries to acquire a lock but fails, resulting in a sleep state
> >   (via futex).
> > - vCPU1 becomes idle because there are no runnable threads and executes
> >   HLT, which leads to a HLT VMEXIT (if idle=halt, and KVM doesn't disable
> >   HLT VMEXIT using KVM_CAP_X86_DISABLE_EXITS).
> > - KVM sets vCPU1's st->preempted to 1 in kvm_steal_time_set_preempted().
> > - Thread C wakes on vCPU2. vCPU2 tries to do load balancing in
> >   select_idle_core().
> >   Although vCPU1 is idle, vCPU1 is not a candidate
> >   for load balancing because is_vcpu_preempted(vCPU1) is true, hence
> >   available_idle_cpu(vCPU1) is false.
> > - As a result, both thread B and thread C stay in vCPU2's runqueue, and
> >   vCPU1 is not utilized.

If a patch ever gets merged, please put this analysis (or at least a summary
of the problem) in the changelog.  From the patch itself, I thought "and the
vCPU becomes a candidate for CFS load balancing" was referring to CFS in the
host, which was obviously confusing.

> > The patch changes kvm_arch_vcpu_put() so that it does not set
> > st->preempted to 1 when a vCPU does a HLT VMEXIT. As a result,
> > is_vcpu_preempted(vCPU) becomes 0, and the vCPU becomes a candidate for
> > CFS load balancing.
>
> I'm conflicted on this; the vcpu stops running, the pcpu can go do
> anything, it might start the next task. There is no saying how quickly
> the vcpu task can return to running.

Ya, the vCPU _is_ preempted after all.

> I'm guessing your setup doesn't actually overload the system; and when
> it doesn't have the vcpu thread to run, the pcpu actually goes idle too.
> But for those 1:1 cases we already have knobs to disable much of this
> IIRC.
>
> So I'm tempted to say things are working as expected and you're just not
> configured right.

That does seem to be the case.

> > I created a VM with 48 vCPUs, and each vCPU is pinned to the
> > corresponding pCPU.

If vCPUs are pinned and you want to eke out performance, then I think the
correct answer is to ensure nothing else can run on those pCPUs, and/or
configure KVM to not intercept HLT.