Date: Fri, 16 May 2025 14:55:27 +0100
From: Marc Zyngier <maz@kernel.org>
To: Vincent Donnefort <vdonnefort@google.com>
Cc: oliver.upton@linux.dev, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	qperret@google.com, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-kernel@vger.kernel.org,
	kernel-team@android.com
Subject: Re: [PATCH v4 10/10] KVM: arm64: np-guest CMOs with PMD_SIZE fixmap
Message-ID: <864ixkfynk.wl-maz@kernel.org>
In-Reply-To: <20250509131706.2336138-11-vdonnefort@google.com>
References: <20250509131706.2336138-1-vdonnefort@google.com>
	<20250509131706.2336138-11-vdonnefort@google.com>

On Fri, 09 May 2025 14:17:06 +0100,
Vincent Donnefort <vdonnefort@google.com> wrote:
> 
> With the introduction of stage-2 huge mappings in the pKVM hypervisor,
> guest pages CMO is needed for PMD_SIZE size. Fixmap only supports
> PAGE_SIZE and iterating over the huge-page is time consuming (mostly due
> to TLBI on hyp_fixmap_unmap) which is a problem for EL2 latency.
> 
> Introduce a shared PMD_SIZE fixmap (hyp_fixblock_map/hyp_fixblock_unmap)
> to improve guest page CMOs when stage-2 huge mappings are installed.
> 
> On a Pixel6, the iterative solution resulted in a latency of ~700us,
> while the PMD_SIZE fixmap reduces it to ~100us.
> 
> Because of the horrendous private range allocation that would be
> necessary, this is disabled for 64KiB pages systems.
> 
> Suggested-by: Quentin Perret <qperret@google.com>
> Signed-off-by: Vincent Donnefort <vdonnefort@google.com>
> Signed-off-by: Quentin Perret <qperret@google.com>
> 
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index 1b43bcd2a679..2888b5d03757 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -59,6 +59,11 @@ typedef u64 kvm_pte_t;
>  
>  #define KVM_PHYS_INVALID		(-1ULL)
>  
> +#define KVM_PTE_TYPE			BIT(1)
> +#define KVM_PTE_TYPE_BLOCK		0
> +#define KVM_PTE_TYPE_PAGE		1
> +#define KVM_PTE_TYPE_TABLE		1
> +
>  #define KVM_PTE_LEAF_ATTR_LO		GENMASK(11, 2)
>  
>  #define KVM_PTE_LEAF_ATTR_LO_S1_ATTRIDX	GENMASK(4, 2)
> diff --git a/arch/arm64/kvm/hyp/include/nvhe/mm.h b/arch/arm64/kvm/hyp/include/nvhe/mm.h
> index 230e4f2527de..b0c72bc2d5ba 100644
> --- a/arch/arm64/kvm/hyp/include/nvhe/mm.h
> +++ b/arch/arm64/kvm/hyp/include/nvhe/mm.h
> @@ -13,9 +13,11 @@
>  extern struct kvm_pgtable pkvm_pgtable;
>  extern hyp_spinlock_t pkvm_pgd_lock;
>  
> -int hyp_create_pcpu_fixmap(void);
> +int hyp_create_fixmap(void);
>  void *hyp_fixmap_map(phys_addr_t phys);
>  void hyp_fixmap_unmap(void);
> +void *hyp_fixblock_map(phys_addr_t phys);
> +void hyp_fixblock_unmap(void);
>  
>  int hyp_create_idmap(u32 hyp_va_bits);
>  int hyp_map_vectors(void);
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 97e0fea9db4e..9f3ffa4e0690 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -220,16 +220,52 @@ static void guest_s2_put_page(void *addr)
>  	hyp_put_page(&current_vm->pool, addr);
>  }
>  
> +static void *__fixmap_guest_page(void *va, size_t *size)
> +{
> +	if (IS_ALIGNED(*size, PMD_SIZE)) {
> +		void *addr = hyp_fixblock_map(__hyp_pa(va));
> +
> +		if (addr)
> +			return addr;
> +
> +		*size = PAGE_SIZE;
> +	}
> +
> +	if (IS_ALIGNED(*size, PAGE_SIZE))
> +		return hyp_fixmap_map(__hyp_pa(va));
> +
> +	WARN_ON(1);
> +
> +	return NULL;
> +}
> +
> +static void __fixunmap_guest_page(size_t size)
> +{
> +	switch (size) {
> +	case PAGE_SIZE:
> +		hyp_fixmap_unmap();
> +		break;
> +	case PMD_SIZE:
> +		hyp_fixblock_unmap();
> +		break;
> +	default:
> +		WARN_ON(1);
> +	}

This is pretty ugly. How can we end up there in the first place? I'd
rather you make sure we can't reach this default path at all.

See also towards the end of this patch (tl;dr: hyp_fixblock_unmap()
should never explode).

> +}
> +
>  static void clean_dcache_guest_page(void *va, size_t size)
>  {
>  	WARN_ON(!PAGE_ALIGNED(size));
>  
>  	while (size) {
> -		__clean_dcache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> -					  PAGE_SIZE);
> -		hyp_fixmap_unmap();
> -		va += PAGE_SIZE;
> -		size -= PAGE_SIZE;
> +		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
> +		void *addr = __fixmap_guest_page(va, &fixmap_size);
> +
> +		__clean_dcache_guest_page(addr, fixmap_size);
> +		__fixunmap_guest_page(fixmap_size);
> +
> +		size -= fixmap_size;
> +		va += fixmap_size;

Can this ever be called with a *multiple* of PMD_SIZE? In this case
you'd still end up doing PAGE_SIZE-sized CMOs until there is only
PMD_SIZE left, ruining the optimisation. I think this needs fixing.
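To illustrate the sort of thing I have in mind (only an untested
sketch, and assuming __hyp_pa(va) keeps the same PMD_SIZE alignment as
va here): decide per iteration whether the block fixmap can be used,
instead of only looking at the initial size:

	while (size) {
		size_t fixmap_size = PAGE_SIZE;
		void *addr;

		/* Use the block fixmap whenever address and remaining size allow it */
		if (IS_ALIGNED((unsigned long)va, PMD_SIZE) && size >= PMD_SIZE)
			fixmap_size = PMD_SIZE;

		addr = __fixmap_guest_page(va, &fixmap_size);

		__clean_dcache_guest_page(addr, fixmap_size);
		__fixunmap_guest_page(fixmap_size);

		size -= fixmap_size;
		va += fixmap_size;
	}

This would also guarantee that __fixunmap_guest_page() only ever sees
PAGE_SIZE or PMD_SIZE, which is what I'm asking for above regarding
the default case.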
>  	}
>  }
>  
> @@ -238,11 +274,14 @@ static void invalidate_icache_guest_page(void *va, size_t size)
>  	WARN_ON(!PAGE_ALIGNED(size));
>  
>  	while (size) {
> -		__invalidate_icache_guest_page(hyp_fixmap_map(__hyp_pa(va)),
> -					       PAGE_SIZE);
> -		hyp_fixmap_unmap();
> -		va += PAGE_SIZE;
> -		size -= PAGE_SIZE;
> +		size_t fixmap_size = size == PMD_SIZE ? size : PAGE_SIZE;
> +		void *addr = __fixmap_guest_page(va, &fixmap_size);
> +
> +		__invalidate_icache_guest_page(addr, fixmap_size);
> +		__fixunmap_guest_page(fixmap_size);
> +
> +		size -= fixmap_size;
> +		va += fixmap_size;
>  	}
>  }
>  
> diff --git a/arch/arm64/kvm/hyp/nvhe/mm.c b/arch/arm64/kvm/hyp/nvhe/mm.c
> index f41c7440b34b..e3b1bece8504 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mm.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mm.c
> @@ -229,9 +229,8 @@ int hyp_map_vectors(void)
>  	return 0;
>  }
>  
> -void *hyp_fixmap_map(phys_addr_t phys)
> +static void *fixmap_map_slot(struct hyp_fixmap_slot *slot, phys_addr_t phys)
>  {
> -	struct hyp_fixmap_slot *slot = this_cpu_ptr(&fixmap_slots);
>  	kvm_pte_t pte, *ptep = slot->ptep;
>  
>  	pte = *ptep;
> @@ -243,10 +242,21 @@ void *hyp_fixmap_map(phys_addr_t phys)
>  	return (void *)slot->addr;
>  }
>  
> +void *hyp_fixmap_map(phys_addr_t phys)
> +{
> +	return fixmap_map_slot(this_cpu_ptr(&fixmap_slots), phys);
> +}
> +
>  static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
>  {
>  	kvm_pte_t *ptep = slot->ptep;
>  	u64 addr = slot->addr;
> +	u32 level;
> +
> +	if (FIELD_GET(KVM_PTE_TYPE, *ptep) == KVM_PTE_TYPE_PAGE)
> +		level = KVM_PGTABLE_LAST_LEVEL;
> +	else
> +		level = KVM_PGTABLE_LAST_LEVEL - 1; /* create_fixblock() guarantees PMD level */

Seeing this, (KVM_PGTABLE_LAST_LEVEL - 1) looks nicer than the "2" I
suggested in one of the previous patches.

>  
>  	WRITE_ONCE(*ptep, *ptep & ~KVM_PTE_VALID);
>  
> @@ -260,7 +270,7 @@ static void fixmap_clear_slot(struct hyp_fixmap_slot *slot)
>  	 * https://lore.kernel.org/kvm/20221017115209.2099-1-will@kernel.org/T/#mf10dfbaf1eaef9274c581b81c53758918c1d0f03
>  	 */
>  	dsb(ishst);
> -	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), KVM_PGTABLE_LAST_LEVEL);
> +	__tlbi_level(vale2is, __TLBI_VADDR(addr, 0), level);
>  	dsb(ish);
>  	isb();
>  }
> @@ -273,9 +283,9 @@ void hyp_fixmap_unmap(void)
>  static int __create_fixmap_slot_cb(const struct kvm_pgtable_visit_ctx *ctx,
>  				   enum kvm_pgtable_walk_flags visit)
>  {
> -	struct hyp_fixmap_slot *slot = per_cpu_ptr(&fixmap_slots, (u64)ctx->arg);
> +	struct hyp_fixmap_slot *slot = (struct hyp_fixmap_slot *)ctx->arg;
>  
> -	if (!kvm_pte_valid(ctx->old) || ctx->level != KVM_PGTABLE_LAST_LEVEL)
> +	if (!kvm_pte_valid(ctx->old) || (ctx->end - ctx->start) != kvm_granule_size(ctx->level))
>  		return -EINVAL;
>  
>  	slot->addr = ctx->addr;
> @@ -296,13 +306,73 @@ static int create_fixmap_slot(u64 addr, u64 cpu)
>  	struct kvm_pgtable_walker walker = {
>  		.cb	= __create_fixmap_slot_cb,
>  		.flags	= KVM_PGTABLE_WALK_LEAF,
> -		.arg = (void *)cpu,
> +		.arg = (void *)per_cpu_ptr(&fixmap_slots, cpu),

Do you really need this cast?

>  	};
>  
>  	return kvm_pgtable_walk(&pkvm_pgtable, addr, PAGE_SIZE, &walker);
>  }
>  
> -int hyp_create_pcpu_fixmap(void)
> +#ifndef CONFIG_ARM64_64K_PAGES

I don't have much faith in this symbol. We have changed the config
stuff so often over the years that I wouldn't trust it long term.
Using something like PAGE_SIZE or PAGE_SHIFT is likely to be more
robust.
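For example (nothing more than a sketch, relying on PAGE_SHIFT being a
plain preprocessor constant that is 16 exactly when 64KiB pages are
selected):

	#if PAGE_SHIFT < 16	/* no PMD_SIZE block fixmap with 64KiB pages */
	static struct hyp_fixmap_slot hyp_fixblock_slot;
	static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock);
	#endif

That expresses the actual constraint (the page size) rather than the
name of a Kconfig symbol that may well change again.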
> +static struct hyp_fixmap_slot hyp_fixblock_slot;
> +static DEFINE_HYP_SPINLOCK(hyp_fixblock_lock);
> +
> +void *hyp_fixblock_map(phys_addr_t phys)
> +{
> +	hyp_spin_lock(&hyp_fixblock_lock);
> +	return fixmap_map_slot(&hyp_fixblock_slot, phys);
> +}
> +
> +void hyp_fixblock_unmap(void)
> +{
> +	fixmap_clear_slot(&hyp_fixblock_slot);
> +	hyp_spin_unlock(&hyp_fixblock_lock);
> +}
> +
> +static int create_fixblock(void)
> +{
> +	struct kvm_pgtable_walker walker = {
> +		.cb	= __create_fixmap_slot_cb,
> +		.flags	= KVM_PGTABLE_WALK_LEAF,
> +		.arg = (void *)&hyp_fixblock_slot,
> +	};
> +	unsigned long addr;
> +	phys_addr_t phys;
> +	int ret, i;
> +
> +	/* Find a RAM phys address, PMD aligned */
> +	for (i = 0; i < hyp_memblock_nr; i++) {
> +		phys = ALIGN(hyp_memory[i].base, PMD_SIZE);
> +		if (phys + PMD_SIZE < (hyp_memory[i].base + hyp_memory[i].size))
> +			break;
> +	}
> +
> +	if (i >= hyp_memblock_nr)
> +		return -EINVAL;
> +
> +	hyp_spin_lock(&pkvm_pgd_lock);
> +	addr = ALIGN(__io_map_base, PMD_SIZE);
> +	ret = __pkvm_alloc_private_va_range(addr, PMD_SIZE);
> +	if (ret)
> +		goto unlock;
> +
> +	ret = kvm_pgtable_hyp_map(&pkvm_pgtable, addr, PMD_SIZE, phys, PAGE_HYP);
> +	if (ret)
> +		goto unlock;
> +
> +	ret = kvm_pgtable_walk(&pkvm_pgtable, addr, PMD_SIZE, &walker);
> +
> +unlock:
> +	hyp_spin_unlock(&pkvm_pgd_lock);
> +
> +	return ret;
> +}
> +#else
> +void hyp_fixblock_unmap(void) { WARN_ON(1); }
> +void *hyp_fixblock_map(phys_addr_t phys) { return NULL; }
> +static int create_fixblock(void) { return 0; }
> +#endif

I can't say I like this. Can't you have a fallback that does the
iteration rather than these placeholders that are only there to make
things catch fire?

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.