From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 15 Jul 2025 10:33:39 +0100
In-Reply-To: <20250715093350.2584932-1-tabba@google.com>
Mime-Version: 1.0
References: <20250715093350.2584932-1-tabba@google.com>
X-Mailer: git-send-email 2.50.0.727.gbf7dc18ff4-goog
Message-ID: <20250715093350.2584932-11-tabba@google.com>
Subject: [PATCH v14 10/21] KVM: x86/mmu: Generalize private_max_mapping_level x86 op to max_mapping_level
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
	kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
	anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
	aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
	brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
	xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
	jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
	isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
	vannapurve@google.com, ackerleytng@google.com,
	mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com,
	wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
	kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
	steven.price@arm.com, quic_eberman@quicinc.com,
	quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com,
	quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com,
	quic_pderrin@quicinc.com, quic_pheragu@quicinc.com,
	catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com,
	oliver.upton@linux.dev, maz@kernel.org, will@kernel.org,
	qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
	shuah@kernel.org, hch@infradead.org, jgg@nvidia.com,
	rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com,
	hughd@google.com, jthoughton@google.com, peterx@redhat.com,
	pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com
Content-Type: text/plain; charset="UTF-8"

From: Ackerley Tng <ackerleytng@google.com>

Generalize the private_max_mapping_level x86 operation to
max_mapping_level.

The private_max_mapping_level operation allows platform-specific code
to limit mapping levels (e.g., forcing 4K pages for certain memory
types). While it was previously used exclusively for private memory,
guest_memfd can now back both private and non-private memory, and
platforms may have mapping level restrictions that apply to
guest_memfd memory regardless of its privacy attribute. Therefore,
generalize this operation:

Rename the operation: remove the "private" prefix to reflect its
broader applicability to any guest_memfd-backed memory.

Pass kvm_page_fault information: the operation now receives a struct
kvm_page_fault object instead of just the pfn. This gives
platform-specific implementations (e.g., TDX or SEV) additional
context about the fault, such as whether it is private or shared,
allowing them to apply different mapping level rules as needed.

Enforce "private-only" behavior (for now): since the current consumers
of this hook (TDX and SEV) still use it primarily to enforce private
memory constraints, the platform-specific implementations return 0 for
non-private pages. A return value of 0 tells the caller to ignore the
platform-specific input for that fault, i.e., the platform imposes no
mapping level limit for non-private pages, and the core MMU continues
to determine the mapping level based on its generic rules.
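To illustrate that contract (this sketch is not part of the patch, and
the backend name is made up; it simply mirrors the TDX/SEV pattern in
the diff below):

	/*
	 * Hypothetical backend implementation of the generalized op.
	 * Returning 0 for shared faults means "no platform-imposed
	 * limit", so the core MMU falls back to its generic rules;
	 * private guest_memfd pages are capped at 4K here.
	 */
	static int foo_gmem_max_mapping_level(struct kvm *kvm,
					      struct kvm_page_fault *fault)
	{
		if (!fault->is_private)
			return 0;

		return PG_LEVEL_4K;
	}

The caller in mmu.c folds the result in only when it is nonzero, via
max_level = min(max_level, req_max_level).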
Acked-by: David Hildenbrand <david@redhat.com>
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/x86/include/asm/kvm-x86-ops.h |  2 +-
 arch/x86/include/asm/kvm_host.h    |  2 +-
 arch/x86/kvm/mmu/mmu.c             | 11 ++++++-----
 arch/x86/kvm/svm/sev.c             |  8 ++++++--
 arch/x86/kvm/svm/svm.c             |  2 +-
 arch/x86/kvm/svm/svm.h             |  4 ++--
 arch/x86/kvm/vmx/main.c            |  6 +++---
 arch/x86/kvm/vmx/tdx.c             |  5 ++++-
 arch/x86/kvm/vmx/x86_ops.h         |  2 +-
 9 files changed, 25 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
index 8d50e3e0a19b..02301fbad449 100644
--- a/arch/x86/include/asm/kvm-x86-ops.h
+++ b/arch/x86/include/asm/kvm-x86-ops.h
@@ -146,7 +146,7 @@ KVM_X86_OP_OPTIONAL_RET0(vcpu_get_apicv_inhibit_reasons);
 KVM_X86_OP_OPTIONAL(get_untagged_addr)
 KVM_X86_OP_OPTIONAL(alloc_apic_backing_page)
 KVM_X86_OP_OPTIONAL_RET0(gmem_prepare)
-KVM_X86_OP_OPTIONAL_RET0(private_max_mapping_level)
+KVM_X86_OP_OPTIONAL_RET0(max_mapping_level)
 KVM_X86_OP_OPTIONAL(gmem_invalidate)
 
 #undef KVM_X86_OP
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 938b5be03d33..543d09fd4bca 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1907,7 +1907,7 @@ struct kvm_x86_ops {
 	void *(*alloc_apic_backing_page)(struct kvm_vcpu *vcpu);
 	int (*gmem_prepare)(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 	void (*gmem_invalidate)(kvm_pfn_t start, kvm_pfn_t end);
-	int (*private_max_mapping_level)(struct kvm *kvm, kvm_pfn_t pfn);
+	int (*max_mapping_level)(struct kvm *kvm, struct kvm_page_fault *fault);
 };
 
 struct kvm_x86_nested_ops {
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 213904daf1e5..bb925994cbc5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4467,9 +4467,11 @@ static inline u8 kvm_max_level_for_order(int order)
 	return PG_LEVEL_4K;
 }
 
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
-					u8 max_level, int gmem_order)
+static u8 kvm_max_private_mapping_level(struct kvm *kvm,
+					struct kvm_page_fault *fault,
+					int gmem_order)
 {
+	u8 max_level = fault->max_level;
 	u8 req_max_level;
 
 	if (max_level == PG_LEVEL_4K)
@@ -4479,7 +4481,7 @@ static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn);
+	req_max_level = kvm_x86_call(max_mapping_level)(kvm, fault);
 	if (req_max_level)
 		max_level = min(max_level, req_max_level);
 
@@ -4511,8 +4513,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
-	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
-							 fault->max_level, max_order);
+	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault, max_order);
 
 	return RET_PF_CONTINUE;
 }
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index 687392c5bf5d..dd470e26f6a0 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -29,6 +29,7 @@
 #include 
 #include 
 
+#include "mmu/mmu_internal.h"
 #include "mmu.h"
 #include "x86.h"
 #include "svm.h"
@@ -4906,7 +4907,7 @@ void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
 	}
 }
 
-int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
+int sev_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
 {
 	int level, rc;
 	bool assigned;
@@ -4914,7 +4915,10 @@
 	if (!sev_snp_guest(kvm))
 		return 0;
 
-	rc = snp_lookup_rmpentry(pfn, &assigned, &level);
+	if (!fault->is_private)
+		return 0;
+
+	rc = snp_lookup_rmpentry(fault->pfn, &assigned, &level);
 	if (rc || !assigned)
 		return PG_LEVEL_4K;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index d1c484eaa8ad..6ad047189210 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5347,7 +5347,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
 
 	.gmem_prepare = sev_gmem_prepare,
 	.gmem_invalidate = sev_gmem_invalidate,
-	.private_max_mapping_level = sev_private_max_mapping_level,
+	.max_mapping_level = sev_max_mapping_level,
 };
 
 /*
diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
index e6f3c6a153a0..c2579f7df734 100644
--- a/arch/x86/kvm/svm/svm.h
+++ b/arch/x86/kvm/svm/svm.h
@@ -787,7 +787,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code);
 void sev_snp_init_protected_guest_state(struct kvm_vcpu *vcpu);
 int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, int max_order);
 void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
-int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn);
+int sev_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault);
 struct vmcb_save_area *sev_decrypt_vmsa(struct kvm_vcpu *vcpu);
 void sev_free_decrypted_vmsa(struct kvm_vcpu *vcpu, struct vmcb_save_area *vmsa);
 #else
@@ -816,7 +816,7 @@ static inline int sev_gmem_prepare(struct kvm *kvm, kvm_pfn_t pfn, gfn_t gfn, in
 	return 0;
 }
 static inline void sev_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end) {}
-static inline int sev_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
+static inline int sev_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
 {
 	return 0;
 }
diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
index d1e02e567b57..8e53554932ba 100644
--- a/arch/x86/kvm/vmx/main.c
+++ b/arch/x86/kvm/vmx/main.c
@@ -871,10 +871,10 @@ static int vt_vcpu_mem_enc_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 	return tdx_vcpu_ioctl(vcpu, argp);
 }
 
-static int vt_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
+static int vt_gmem_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
 {
 	if (is_td(kvm))
-		return tdx_gmem_private_max_mapping_level(kvm, pfn);
+		return tdx_gmem_max_mapping_level(kvm, fault);
 
 	return 0;
 }
@@ -1044,7 +1044,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
 	.mem_enc_ioctl = vt_op_tdx_only(mem_enc_ioctl),
 	.vcpu_mem_enc_ioctl = vt_op_tdx_only(vcpu_mem_enc_ioctl),
 
-	.private_max_mapping_level = vt_op_tdx_only(gmem_private_max_mapping_level)
+	.max_mapping_level = vt_op_tdx_only(gmem_max_mapping_level)
 };
 
 struct kvm_x86_init_ops vt_init_ops __initdata = {
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index a3db6df245ee..7f652241491a 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -3322,8 +3322,11 @@ int tdx_vcpu_ioctl(struct kvm_vcpu *vcpu, void __user *argp)
 	return ret;
 }
 
-int tdx_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn)
+int tdx_gmem_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault)
 {
+	if (!fault->is_private)
+		return 0;
+
 	return PG_LEVEL_4K;
 }
 
diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
index b4596f651232..ca7bc9e0fce5 100644
--- a/arch/x86/kvm/vmx/x86_ops.h
+++ b/arch/x86/kvm/vmx/x86_ops.h
@@ -163,7 +163,7 @@ int tdx_sept_remove_private_spte(struct kvm *kvm, gfn_t gfn,
 void tdx_flush_tlb_current(struct kvm_vcpu *vcpu);
 void tdx_flush_tlb_all(struct kvm_vcpu *vcpu);
 void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int root_level);
-int tdx_gmem_private_max_mapping_level(struct kvm *kvm, kvm_pfn_t pfn);
+int tdx_gmem_max_mapping_level(struct kvm *kvm, struct kvm_page_fault *fault);
 #endif
 
 #endif /* __KVM_X86_VMX_X86_OPS_H */
-- 
2.50.0.727.gbf7dc18ff4-goog