E=Sophos;i="6.16,315,1744095600"; d="scan'208";a="66325498" Received: from orviesa009.jf.intel.com ([10.64.159.149]) by orvoesa104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jul 2025 22:40:43 -0700 X-CSE-ConnectionGUID: Oos/aMvwSTGdgishWHK1Pg== X-CSE-MsgGUID: E5t7ldu8TNOtPDSeoiR4uA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="6.16,315,1744095600"; d="scan'208";a="157215046" Received: from xiaoyaol-hp-g830.ccr.corp.intel.com (HELO [10.124.247.1]) ([10.124.247.1]) by orviesa009-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 15 Jul 2025 22:40:23 -0700 Message-ID: Date: Wed, 16 Jul 2025 13:40:20 +0800 MIME-Version: 1.0 User-Agent: Mozilla Thunderbird Subject: Re: [PATCH v14 08/21] KVM: guest_memfd: Allow host to map guest_memfd pages To: Fuad Tabba , kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com References: <20250715093350.2584932-1-tabba@google.com> <20250715093350.2584932-9-tabba@google.com> Content-Language: en-US From: Xiaoyao Li In-Reply-To: <20250715093350.2584932-9-tabba@google.com> Content-Type: text/plain; charset=UTF-8; format=flowed Content-Transfer-Encoding: 7bit X-Rspam-User: X-Rspamd-Queue-Id: DF3E218000A X-Rspamd-Server: rspam06 X-Stat-Signature: huhh5p4hpy6wrhspbpqd17yafyed8t11 X-HE-Tag: 1752644444-289965 X-HE-Meta: 
On 7/15/2025 5:33 PM, Fuad Tabba wrote:
> Introduce the core infrastructure to enable host userspace to mmap()
> guest_memfd-backed memory. This is needed for several evolving KVM use
> cases:
>
> * Non-CoCo VM backing: Allows VMMs like Firecracker to run guests
>   entirely backed by guest_memfd, even for non-CoCo VMs [1]. This
>   provides a unified memory management model and simplifies guest memory
>   handling.
>
> * Direct map removal for enhanced security: This is an important step
>   for direct map removal of guest memory [2]. By allowing host userspace
>   to fault in guest_memfd pages directly, we can avoid maintaining host
>   kernel direct maps of guest memory. This provides additional hardening
>   against Spectre-like transient execution attacks by removing a
>   potential attack surface within the kernel.
>
> * Future guest_memfd features: This also lays the groundwork for future
>   enhancements to guest_memfd, such as supporting huge pages and
>   enabling in-place sharing of guest memory with the host for CoCo
>   platforms that permit it [3].
>
> Therefore, enable the basic mmap and fault handling logic within
> guest_memfd. However, this functionality is not yet exposed to userspace
> and remains inactive until two conditions are met in subsequent patches:
>
> * Kconfig Gate (CONFIG_KVM_GMEM_SUPPORTS_MMAP): A new Kconfig option,
>   KVM_GMEM_SUPPORTS_MMAP, is introduced later in this series.

Well, KVM_GMEM_SUPPORTS_MMAP is actually introduced by *this* patch, not
by later patches.

>   This
>   option gates the compilation and availability of this mmap
>   functionality at a system level.

Well, at least judging from this patch, it doesn't gate the compilation.

>   While the code changes in this patch
>   might seem small, the Kconfig option is introduced to explicitly
>   signal the intent to enable this new capability and to provide a clear
>   compile-time switch for it.
>   It also helps ensure that the necessary
>   architecture-specific glue (like kvm_arch_supports_gmem_mmap) is
>   properly defined.
>
> * Per-instance opt-in (GUEST_MEMFD_FLAG_MMAP): On a per-instance basis,
>   this functionality is enabled by the guest_memfd flag
>   GUEST_MEMFD_FLAG_MMAP, which will be set in the KVM_CREATE_GUEST_MEMFD
>   ioctl. This flag is crucial because when host userspace maps
>   guest_memfd pages, KVM must *not* manage these memory regions in
>   the same way it does for traditional KVM memory slots. The presence of
>   GUEST_MEMFD_FLAG_MMAP on a guest_memfd instance allows mmap() and
>   faulting of guest_memfd memory to host userspace. Additionally, it
>   informs KVM to always consume guest faults to this memory from
>   guest_memfd, regardless of whether it is a shared or a private fault.
>   This opt-in mechanism ensures compatibility and prevents conflicts
>   with existing KVM memory management. This is a per-guest_memfd flag
>   rather than a per-memslot or per-VM capability because the ability to
>   mmap directly applies to the specific guest_memfd object, regardless
>   of how it might be used within various memory slots or VMs.
>
> [1] https://github.com/firecracker-microvm/firecracker/tree/feature/secret-hiding
> [2] https://lore.kernel.org/linux-mm/cc1bb8e9bc3e1ab637700a4d3defeec95b55060a.camel@amazon.com
> [3] https://lore.kernel.org/all/c1c9591d-218a-495c-957b-ba356c8f8e09@redhat.com/T/#u
>
> Reviewed-by: Gavin Shan
> Reviewed-by: Shivank Garg
> Acked-by: David Hildenbrand
> Co-developed-by: Ackerley Tng
> Signed-off-by: Ackerley Tng
> Signed-off-by: Fuad Tabba
> ---
>  include/linux/kvm_host.h | 13 +++++++
>  include/uapi/linux/kvm.h |  1 +
>  virt/kvm/Kconfig         |  4 +++
>  virt/kvm/guest_memfd.c   | 73 ++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 91 insertions(+)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 1ec71648824c..9ac21985f3b5 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -740,6 +740,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
>  }
>  #endif
>
> +/*
> + * Returns true if this VM supports mmap() in guest_memfd.
> + *
> + * Arch code must define kvm_arch_supports_gmem_mmap if support for guest_memfd
> + * is enabled.

This comment describes a similar requirement to the ones for
kvm_arch_has_private_mem and kvm_arch_supports_gmem, but the #if below
doesn't have the && !IS_ENABLED(CONFIG_KVM_GMEM) check, so it's natural
for people to wonder why.

I would suggest adding the !IS_ENABLED(CONFIG_KVM_GMEM) check, like what
is done for kvm_arch_has_private_mem and kvm_arch_supports_gmem, so that
any arch that enables CONFIG_KVM_GMEM without defining
kvm_arch_supports_gmem_mmap gets a compilation error.

> + */
> +#if !defined(kvm_arch_supports_gmem_mmap)
> +static inline bool kvm_arch_supports_gmem_mmap(struct kvm *kvm)
> +{
> +	return false;
> +}
> +#endif
> +
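
To be concrete, I mean something like the following (untested, just
mirroring the existing kvm_arch_has_private_mem / kvm_arch_supports_gmem
pattern):

#if !defined(kvm_arch_supports_gmem_mmap) && !IS_ENABLED(CONFIG_KVM_GMEM)
static inline bool kvm_arch_supports_gmem_mmap(struct kvm *kvm)
{
	return false;
}
#endif

With that, an arch that enables CONFIG_KVM_GMEM but doesn't provide its
own kvm_arch_supports_gmem_mmap no longer silently falls back to
"return false"; the build fails instead.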
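
As an aside, for anyone following along: the per-instance opt-in described
in the commit message amounts to roughly the userspace flow below. This is
only a sketch assuming the GUEST_MEMFD_FLAG_MMAP uapi added by this series;
the VM setup around vm_fd and most error handling are omitted.

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Create a mappable guest_memfd on vm_fd and map it into the VMM. */
static void *map_guest_memfd(int vm_fd, uint64_t size)
{
	struct kvm_create_guest_memfd args = {
		.size  = size,
		.flags = GUEST_MEMFD_FLAG_MMAP,	/* per-guest_memfd opt-in */
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);

	if (gmem_fd < 0)
		return MAP_FAILED;

	/* Only permitted because the fd was created with GUEST_MEMFD_FLAG_MMAP. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);
}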