Date: Thu, 24 Feb 2022 11:28:56 +0000
Message-ID: <8735k84i6f.wl-maz@kernel.org>
From: Marc Zyngier
To: David Matlack
Cc: Paolo Bonzini, Huacai Chen, Aleksandar Markovic, Sean Christopherson,
    Vitaly Kuznetsov, Peter Xu, Wanpeng Li, Jim Mattson, Joerg Roedel,
    Peter Feiner, Andrew Jones, maciej.szmigiero@oracle.com,
    kvm@vger.kernel.org
Subject: Re: [PATCH 19/23] KVM: Allow for different capacities in kvm_mmu_memory_cache structs
In-Reply-To: <20220203010051.2813563-20-dmatlack@google.com>
References: <20220203010051.2813563-1-dmatlack@google.com>
	<20220203010051.2813563-20-dmatlack@google.com>

On Thu, 03 Feb 2022 01:00:47 +0000,
David Matlack wrote:
> 
> Allow the capacity of the kvm_mmu_memory_cache struct to be chosen at
> declaration time rather than being fixed for all declarations. This will
> be used in a follow-up commit to declare a cache in x86 with a capacity
> of 512+ objects without having to increase the capacity of all caches in
> KVM.
> 
> No functional change intended.
> 
> Signed-off-by: David Matlack
> ---
>  arch/arm64/include/asm/kvm_host.h |  2 +-
>  arch/arm64/kvm/mmu.c              | 12 ++++++------
>  arch/mips/include/asm/kvm_host.h  |  2 +-
>  arch/x86/include/asm/kvm_host.h   |  8 ++++----
>  include/linux/kvm_types.h         | 24 ++++++++++++++++++++++--
>  virt/kvm/kvm_main.c               |  8 +++++++-
>  6 files changed, 41 insertions(+), 15 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 3b44ea17af88..a450b91cc2d9 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -357,7 +357,7 @@ struct kvm_vcpu_arch {
>  	bool pause;
>  
>  	/* Cache some mmu pages needed inside spinlock regions */
> -	struct kvm_mmu_memory_cache mmu_page_cache;
> +	DEFINE_KVM_MMU_MEMORY_CACHE(mmu_page_cache);

I must say I'm really not a fan of the anonymous structure trick. I
can see why you are doing it that way, but it feels pretty brittle.

>  
>  	/* Target CPU and feature flags */
>  	int target;
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index bc2aba953299..9c853c529b49 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -765,7 +765,8 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  {
>  	phys_addr_t addr;
>  	int ret = 0;
> -	struct kvm_mmu_memory_cache cache = { 0, __GFP_ZERO, NULL, };
> +	DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = {};
> +	struct kvm_mmu_memory_cache *cache = &page_cache.cache;
>  	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
>  	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
>  				     KVM_PGTABLE_PROT_R |
> @@ -774,18 +775,17 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
>  	if (is_protected_kvm_enabled())
>  		return -EPERM;
>  
> +	cache->gfp_zero = __GFP_ZERO;

nit: consider this instead, which preserves the existing flow:

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 26d6c53be083..86a7ebd03a44 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -764,7 +764,9 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 {
 	phys_addr_t addr;
 	int ret = 0;
-	DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = {};
+	DEFINE_KVM_MMU_MEMORY_CACHE(cache) page_cache = {
+		.cache = { .gfp_zero = __GFP_ZERO },
+	};
 	struct kvm_mmu_memory_cache *cache = &page_cache.cache;
 	struct kvm_pgtable *pgt = kvm->arch.mmu.pgt;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_DEVICE |
@@ -774,7 +776,6 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
 	if (is_protected_kvm_enabled())
 		return -EPERM;
 
-	cache->gfp_zero = __GFP_ZERO;
 	size += offset_in_page(guest_ipa);
 	guest_ipa &= PAGE_MASK;

but the whole "declare the outer structure and just use the inner one"
hack is... huh... :-/

This hunk also conflicts with what currently sits in -next. Not a big
deal, but just so you know.

> diff --git a/include/linux/kvm_types.h b/include/linux/kvm_types.h
> index dceac12c1ce5..9575fb8d333f 100644
> --- a/include/linux/kvm_types.h
> +++ b/include/linux/kvm_types.h
> @@ -78,14 +78,34 @@ struct gfn_to_pfn_cache {
>   * MMU flows is problematic, as is triggering reclaim, I/O, etc... while
>   * holding MMU locks. Note, these caches act more like prefetch buffers than
>   * classical caches, i.e. objects are not returned to the cache on being freed.
> + *
> + * The storage for the cache objects is laid out after the struct to allow
> + * different declarations to choose different capacities. If the capacity field
> + * is 0, the capacity is assumed to be KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE.
>   */
>  struct kvm_mmu_memory_cache {
>  	int nobjs;
> +	int capacity;
>  	gfp_t gfp_zero;
>  	struct kmem_cache *kmem_cache;
> -	void *objects[KVM_ARCH_NR_OBJS_PER_MEMORY_CACHE];
> +	void *objects[0];

The VLA police is going to track you down ([0] vs []).

	M.

-- 
Without deviation from the norm, progress is not possible.