From: Fuad Tabba <tabba@google.com>
Date: Wed, 9 Jul 2025 11:59:36 +0100
Subject: [PATCH v13 10/20] KVM: x86/mmu: Generalize private_max_mapping_level x86 op to max_mapping_level
Message-ID: <20250709105946.4009897-11-tabba@google.com>
In-Reply-To: <20250709105946.4009897-1-tabba@google.com>
References: <20250709105946.4009897-1-tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com
Content-Type: text/plain; charset="UTF-8"

From: Ackerley Tng <ackerleytng@google.com>

Generalize the private_max_mapping_level x86 operation to
max_mapping_level.

The private_max_mapping_level operation allows platform-specific code to
limit mapping levels (e.g., forcing 4K pages for certain memory types).
While it was previously used exclusively for private memory, guest_memfd
can now back both private and non-private memory. Platforms may have
mapping-level restrictions that apply to guest_memfd memory regardless of
its privacy attribute. Therefore, generalize this operation.

Rename the operation: drop the "private" prefix to reflect its broader
applicability to any guest_memfd-backed memory.

Pass kvm_page_fault information: the operation now receives a struct
kvm_page_fault object instead of just the pfn. This gives
platform-specific implementations (e.g., TDX or SEV) additional context
about the fault, such as whether it is private or shared, so they can
apply different mapping-level rules as needed.

Enforce "private-only" behavior (for now): since the current consumers of
this hook (TDX and SEV) still use it only to enforce private-memory
constraints, their implementations return 0 for non-private pages. A
return value of 0 signals to the caller that the platform imposes no
mapping-level limit for that fault, so the platform-specific input is
ignored and the core MMU continues to determine the mapping level from
the generic rules.
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  2 +-
 arch/x86/include/asm/kvm_host.h    |  2 +-
 arch/x86/kvm/mmu/mmu.c             | 11 ++++++-----
 arch/x86/kvm/svm/sev.c             |  8 ++++++--
 arch/x86/kvm/svm/svm.c             |  2 +-
 arch/x86/kvm/svm/svm.h             |  4 ++--
 arch/x86/kvm/vmx/main.c            |  6 +++---
 arch/x86/kvm/vmx/tdx.c             |  5 ++++-
 arch/x86/kvm/vmx/x86_ops.h         |  2 +-
 9 files changed, 25 insertions(+), 17 deletions(-)
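
Note for reviewers: the contract this patch establishes is that a vendor
max_mapping_level implementation returns 0 to mean "no platform-imposed
limit for this fault", and the common MMU clamps fault->max_level only on
a non-zero return. The standalone sketch below models that contract; the
types and names (struct fault, vendor_max_mapping_level,
effective_max_level) are simplified stand-ins for illustration, not
kernel code:

/*
 * Standalone model of the max_mapping_level contract (simplified,
 * hypothetical types and names; build with: cc -o demo demo.c).
 */
#include <stdio.h>

enum { PG_LEVEL_4K = 1, PG_LEVEL_2M = 2, PG_LEVEL_1G = 3 };

struct fault { int is_private; };

/* Models tdx_gmem_max_mapping_level(): constrain only private faults. */
static int vendor_max_mapping_level(const struct fault *f)
{
	if (!f->is_private)
		return 0;	/* 0 == "no opinion"; caller ignores us */
	return PG_LEVEL_4K;	/* private memory: force 4K mappings */
}

/* Models the clamping done in kvm_max_private_mapping_level(). */
static int effective_max_level(const struct fault *f, int max_level)
{
	int req = vendor_max_mapping_level(f);

	if (req)	/* clamp only on a non-zero answer */
		max_level = req < max_level ? req : max_level;
	return max_level;
}

int main(void)
{
	struct fault priv = { .is_private = 1 };
	struct fault shared = { .is_private = 0 };

	/* Private fault: clamped to 4K. Shared fault: generic 1G stands. */
	printf("private=%d shared=%d\n",
	       effective_max_level(&priv, PG_LEVEL_1G),
	       effective_max_level(&shared, PG_LEVEL_1G));
	return 0;
}

In this model a private fault is clamped to 4K while a shared fault keeps
the generic limit, mirroring the TDX behavior in the diff below.
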
diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 8d50e3e0a19b..02301fbad449 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -146,7 +146,7 @@ KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
 KVM_X86_OP_OPTIONAL(get_untagged_addr)
 KVM_X86_OP_OPTIONAL(alloc_apic_backing_page)
 KVM_X86_OP_OPTIONAL_RET0(gmem_prepare)
-KVM_X86_OP_OPTIONAL_RET0(private_max_mapping_level)
+KVM_X86_OP_OPTIONAL_RET0(max_mapping_level)
 KVM_X86_OP_OPTIONAL(gmem_invalidate)
 
 #undef KVM_X86_OP
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index ebddedf0a1f2..4c764faa12f3 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1901,7 +1901,7 @@ struct kvm_x86_ops {
 	void *(*alloc_apic_backing_page)(struct kvm_vcpu *vcpu);
 	int (*gmem_prepare)(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 	void (*gmem_invalidate)(kvm_pfn_t start, kvm_pfn_t end);
-	int (*private_max_mapping_level)(struct kvm *kvm, kvm_pfn_t pfn);
+	int (*max_mapping_level)(struct kvm *kvm, struct kvm_page_fault *fault);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 213904daf1e5..bb925994cbc5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4467,9 +4467,11 @@ static inline u8 kvm_max_level_for_order(int order)
 	return PG_LEVEL_4K;
 }
 
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
-					u8 max_level, int gmem_order)
+static u8 kvm_max_private_mapping_level(struct kvm *kvm,
+					struct kvm_page_fault *fault,
+					int gmem_order)
 {
+	u8 max_level = fault->max_level;
 	u8 req_max_level;
 
 	if (max_level == PG_LEVEL_4K)
@@ -4479,7 +4481,7 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn);
+	req_max_level = kvm_x86_call(max_mapping_level)(kvm, fault);
 	if (req_max_level)
 		max_level = min(max_level, req_max_level);
 
@@ -4511,8 +4513,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
-	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
-							 fault->max_level, max_order);
+	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault, max_order);
 
 	return RET_PF_CONTINUE;
 }
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index ade7a5b36c68..58116439d7c0 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -29,6 +29,7 @@
 #include <asm/msr.h>
 #include <asm/sev.h>
 
+#include "mmu/mmu_internal.h"
 #include "mmu.h"
 #include "x86.h"
 #include "svm.h"
@@ -4898,7 +4899,7 @@ void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
 	}
 }
 
-int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
+int sev_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
 {
 	int level, rc;
 	bool assigned;
@@ -4906,7 +4907,10 @@ int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
 	if (!sev_snp_guest(kvm))
 		return 0;
 
-	rc = snp_lookup_rmpentry(pfn, &assigned, &level);
+	if (!fault->is_private)
+		return 0;
+
+	rc = snp_lookup_rmpentry(fault->pfn, &assigned, &level);
 	if (rc || !assigned)
 		return PG_LEVEL_4K;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d1c484eaa8ad..6ad047189210 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5347,7 +5347,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 	.gmem_prepare = sev_gmem_prepare,
 	.gmem_invalidate = sev_gmem_invalidate,
 
-	.private_max_mapping_level = sev_private_max_mapping_level,
+	.max_mapping_level = sev_max_mapping_level,
 };
 
 /*
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index e6f3c6a153a0..c2579f7df734 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -787,7 +787,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code);
 void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
 int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
-int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn);
+int sev_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault);
 struct vmcb_save_area *sev_decrypt_vmsa(struct kvm_vcpu *vcpu);
 void sev_free_decrypted_vmsa(struct kvm_vcpu *vcpu, struct vmcb_save_area *vmsa);
 #else
@@ -816,7 +816,7 @@ static inline int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, in
 	return 0;
 }
 static inline void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end) {}
-static inline int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
+static inline int sev_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
 {
 	return 0;
 }
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index d1e02e567b57..8e53554932ba 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -871,10 +871,10 @@ static int vt_vcpu_mem_enc_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 	return tdx_vcpu_ioctl(vcpu, argp);
 }
 
-static int vt_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
+static int vt_gmem_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
 {
 	if (is_td(kvm))
-		return tdx_gmem_private_max_mapping_level(kvm, pfn);
+		return tdx_gmem_max_mapping_level(kvm, fault);
 
 	return 0;
 }
@@ -1044,7 +1044,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.mem_enc_ioctl = vt_op_tdx_only(mem_enc_ioctl),
 	.vcpu_mem_enc_ioctl = vt_op_tdx_only(vcpu_mem_enc_ioctl),
 
-	.private_max_mapping_level = vt_op_tdx_only(gmem_private_max_mapping_level)
+	.max_mapping_level = vt_op_tdx_only(gmem_max_mapping_level)
 };
 
 struct kvm_x86_init_ops vt_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index c227516e6a02..1607b1f6be21 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -3292,8 +3292,11 @@ int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 	return ret;
 }
 
-int tdx_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
+int tdx_gmem_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
 {
+	if (!fault->is_private)
+		return 0;
+
 	return PG_LEVEL_4K;
 }
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index b4596f651232..ca7bc9e0fce5 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -163,7 +163,7 @@ int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 void tdx_flush_tlb_current(struct kvm_vcpu *vcpu);
 void tdx_flush_tlb_all(struct kvm_vcpu *vcpu);
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
-int tdx_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn);
+int tdx_gmem_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault);
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.50.0.727.gbf7dc18ff4-goog