Date: Fri, 29 Mar 2024 06:48:38 -0700
From: Oliver Upton
To: Krister Johansen
Cc: Marc Zyngier, James Morse, Suzuki K Poulose, Zenghui Yu,
	Catalin Marinas, Will Deacon, Ali Saidi, David Reaver,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] KVM: arm64: Limit stage2_apply_range() batch size to smallest block

Hi Krister,

On Thu, Mar 28, 2024 at 12:05:08PM -0700, Krister Johansen wrote:
> stage2_apply_range() for unmap operations can interfere with the
> performance of IO if the device's interrupts share the CPU where the
> unmap operation is occurring. Commit 5994bc9e05c2 ("KVM: arm64: Limit
> stage2_apply_range() batch size to largest block") improved this. Prior
> to that commit, workloads that were unfortunate enough to have their IO
> interrupts pinned to the same CPU as the unmap operation would observe
> a complete stall. With the switch to using the largest block size, it
> is possible for IO to make progress, albeit at a reduced speed.

Can you describe the workload a bit more? I'm having a hard time
understanding how you're unmapping that much memory on the fly in your
workload. Is guest memory getting swapped? Are VMs being torn down?

Also, it seems a bit odd to steer interrupts *into* the workload you
care about...

> Further reducing the stage2_apply_range() batch size has substantial
> performance improvements for IO that shares a CPU with an unmap
> operation. By switching to a 2MB chunk, IO performance regressions
> were no longer observed in this author's tests; e.g., it was possible
> to obtain the advertised device throughput despite an unmap operation
> occurring on the CPU where the interrupt was running. There is a
> tradeoff, however. No changes were observed in per-operation timings
> when running the kvm_pagetable_test without an interrupt load.
> However, with a 64GB VM, 1 vCPU, 4K pages, and an IO load, map times
> increased by about 15% and unmap times increased by about 58%. In
> essence, this trades slower map/unmap times for improved IO
> throughput.

There are other users of the range-based operations, like
write-protection. Live migration is especially sensitive to the latency
of page table updates as it can affect the VMM's ability to converge
with the guest.

> Cc:  # 5.15.x: 3b5c082bbfa2: KVM: arm64: Work out supported block level at compile time
> Cc:  # 5.15.x: 5994bc9e05c2: KVM: arm64: Limit stage2_apply_range() batch size to largest block
> Cc:  # 5.15.x

This is a performance improvement, *not* a correctness fix. Please
don't cc stable for it.
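For context, the batch splitting being tuned here behaves roughly as in
the sketch below. This is a minimal, standalone reconstruction rather
than the kernel source: the granule sizes are hard-coded for 4K pages,
and the wraparound guard carried by the real __stage2_range_addr_end()
is omitted. The count it prints also quantifies the PUD-revisit concern
raised further down.

/*
 * Minimal standalone sketch of the stage-2 batching under discussion
 * (not the kernel source). Granule sizes are hard-coded for 4K pages;
 * the real __stage2_range_addr_end() also guards against address
 * wraparound, which is omitted here.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t phys_addr_t;

#define GRANULE_LVL1	(1ULL << 30)	/* 1GB: level 1 block, old batch size */
#define GRANULE_LVL2	(1ULL << 21)	/* 2MB: level 2 block, proposed batch size */

/* End of the current batch: the next size-aligned boundary, capped at end. */
static phys_addr_t range_addr_end(phys_addr_t addr, phys_addr_t end,
				  phys_addr_t size)
{
	phys_addr_t boundary = (addr + size) & ~(size - 1);

	return boundary < end ? boundary : end;
}

int main(void)
{
	phys_addr_t start = 0, end = GRANULE_LVL1;	/* one 1GB region */
	unsigned long batches = 0;

	/* Smaller batches mean shorter uninterrupted walks (the point of
	 * the patch), at the cost of more loop iterations per range. */
	for (phys_addr_t addr = start; addr < end;
	     addr = range_addr_end(addr, end, GRANULE_LVL2))
		batches++;

	printf("1GB region in 2MB batches: %lu\n", batches);	/* prints 512 */
	return 0;
}

With the old 1GB batch the loop runs once over a 1GB region; at a 2MB
batch it runs 512 times, each iteration a shorter uninterrupted walk.
That is the tradeoff the map/unmap numbers above are measuring.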
> Suggested-by: Ali Saidi
> Signed-off-by: Krister Johansen
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 4 ++++
>  arch/arm64/kvm/mmu.c                 | 2 +-
>  2 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index 19278dfe7978..b0c4651a4d9a 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -19,11 +19,15 @@
>   * - 4K  (level 1): 1GB
>   * - 16K (level 2): 32MB
>   * - 64K (level 2): 512MB
> + *
> + * The max block level is the _smallest_ supported block size for KVM.

This feels like a non sequitur given the old comment is left in
place...

>   */
>  #ifdef CONFIG_ARM64_4K_PAGES
>  #define KVM_PGTABLE_MIN_BLOCK_LEVEL	1
> +#define KVM_PGTABLE_MAX_BLOCK_LEVEL	2
>  #else
>  #define KVM_PGTABLE_MIN_BLOCK_LEVEL	2
> +#define KVM_PGTABLE_MAX_BLOCK_LEVEL	KVM_PGTABLE_MIN_BLOCK_LEVEL
>  #endif
>
>  #define kvm_lpa2_is_enabled()	system_supports_lpa2()
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index dc04bc767865..1e927b306aee 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -41,7 +41,7 @@ static phys_addr_t __stage2_range_addr_end(phys_addr_t addr, phys_addr_t end,
>
>  static phys_addr_t stage2_range_addr_end(phys_addr_t addr, phys_addr_t end)
>  {
> -	phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MIN_BLOCK_LEVEL);
> +	phys_addr_t size = kvm_granule_size(KVM_PGTABLE_MAX_BLOCK_LEVEL);
>
>  	return __stage2_range_addr_end(addr, end, size);
>  }

This doesn't feel right to me. A property that we had before is that
leaf entries are visited at most once, since every mapping size divided
evenly into the KVM_PGTABLE_MIN_BLOCK_LEVEL granule. Seems like we
could wind up visiting a PUD mapping 512 times, at least for 4K pages.

--
Thanks,
Oliver