Date: Fri, 9 Dec 2022 17:24:50 +0000
From: Oliver Upton
To: "Yang, Weijiang"
Cc: David Matlack, Paolo Bonzini, Marc Zyngier, James Morse,
	Alexandru Elisei, Suzuki K Poulose, Huacai Chen,
	Aleksandar Markovic, Anup Patel, Atish Patra, Paul Walmsley,
	Palmer Dabbelt, Albert Ou, "Christopherson, Sean",
	Andrew Morton, Anshuman Khandual, "Amit, Nadav",
	"Matthew Wilcox (Oracle)", Vlastimil Babka, "Liam R. Howlett",
	Suren Baghdasaryan, Peter Xu, xu xin, Arnd Bergmann, Yu Zhao,
	Colin Cross, Hugh Dickins, Ben Gardon, Mingwei Zhang,
	Krish Sadhukhan, Ricardo Koller, Jing Zhang,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org,
	kvm@vger.kernel.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org
Subject: Re: [RFC PATCH 01/37] KVM: x86/mmu: Store the address space ID directly in kvm_mmu_page_role
References: <20221208193857.4090582-1-dmatlack@google.com>
	<20221208193857.4090582-2-dmatlack@google.com>
	<22fe2332-497e-fe30-0155-e026b0eded97@intel.com>
In-Reply-To: <22fe2332-497e-fe30-0155-e026b0eded97@intel.com>

On Fri, Dec 09, 2022 at 10:37:47AM +0800, Yang, Weijiang wrote:
> 
> On 12/9/2022 3:38 AM, David Matlack wrote:
> > Rename kvm_mmu_page_role.smm with kvm_mmu_page_role.as_id and use it
> > directly as the address space ID throughout the KVM MMU code. This
> > eliminates a needless level of indirection, kvm_mmu_role_as_id(), and
> > prepares for making kvm_mmu_page_role architecture-neutral.
> >
> > Signed-off-by: David Matlack
> > ---
> >  arch/x86/include/asm/kvm_host.h |  4 ++--
> >  arch/x86/kvm/mmu/mmu.c          |  6 +++---
> >  arch/x86/kvm/mmu/mmu_internal.h | 10 ----------
> >  arch/x86/kvm/mmu/tdp_iter.c     |  2 +-
> >  arch/x86/kvm/mmu/tdp_mmu.c      | 12 ++++++------
> >  5 files changed, 12 insertions(+), 22 deletions(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index aa4eb8cfcd7e..0a819d40131a 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -348,7 +348,7 @@ union kvm_mmu_page_role {
> >  		 * simple shift.  While there is room, give it a whole
> >  		 * byte so it is also faster to load it from memory.
> >  		 */
> > -		unsigned smm:8;
> > +		unsigned as_id:8;
> >  	};
> >  };
> >
> > @@ -2056,7 +2056,7 @@ enum {
> >  # define __KVM_VCPU_MULTIPLE_ADDRESS_SPACE
> >  # define KVM_ADDRESS_SPACE_NUM 2
> >  # define kvm_arch_vcpu_memslots_id(vcpu)	((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
> > -# define kvm_memslots_for_spte_role(kvm, role)	__kvm_memslots(kvm, (role).smm)
> > +# define kvm_memslots_for_spte_role(kvm, role)	__kvm_memslots(kvm, (role).as_id)
> >  #else
> >  # define kvm_memslots_for_spte_role(kvm, role)	__kvm_memslots(kvm, 0)
> >  #endif
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 4d188f056933..f375b719f565 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -5056,7 +5056,7 @@ kvm_calc_cpu_role(struct kvm_vcpu *vcpu, const struct kvm_mmu_role_regs *regs)
> >  	union kvm_cpu_role role = {0};
> >
> >  	role.base.access = ACC_ALL;
> > -	role.base.smm = is_smm(vcpu);
> > +	role.base.as_id = is_smm(vcpu);
> 
> I'm not familiar with other architectures, is there similar conception as
> x86 smm mode?

For KVM/arm64: No, we don't do anything like SMM emulation on x86.
Architecturally speaking, though, we do have a higher level of privilege
typically used by firmware on arm64, called EL3.

I'll need to read David's series a bit more closely, but I'm inclined to
think that the page role is going to be rather arch-specific.

-- 
Thanks,
Oliver