Date: Wed, 14 May 2025 16:42:08 -0700
Subject: [RFC PATCH v2 29/51] mm: guestmem_hugetlb: Wrap HugeTLB as an allocator for guest_memfd
From: Ackerley Tng <ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
	akpm@linux-foundation.org, amoorthy@google.com,
	anthony.yznaga@oracle.com, anup@brainfault.org, aou@eecs.berkeley.edu,
	bfoster@redhat.com, binbin.wu@linux.intel.com, brauner@kernel.org,
	catalin.marinas@arm.com, chao.p.peng@intel.com, chenhuacai@kernel.org,
	dave.hansen@intel.com, david@redhat.com, dmatlack@google.com,
	dwmw@amazon.co.uk, erdemaktas@google.com, fan.du@intel.com,
	fvdl@google.com, graf@amazon.com, haibo1.xu@intel.com,
	hch@infradead.org, hughd@google.com, ira.weiny@intel.com,
	isaku.yamahata@intel.com, jack@suse.cz, james.morse@arm.com,
	jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
	jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
	jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
	kent.overstreet@linux.dev, kirill.shutemov@intel.com,
	liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
	mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
	michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
	nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
	palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
	pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
	pgonda@google.com, pvorel@suse.cz, qperret@google.com,
	quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
	quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
	quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
	rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
	rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
	steven.price@arm.com, steven.sistare@oracle.com,
	suzuki.poulose@arm.com, tabba@google.com, thomas.lendacky@amd.com,
	usama.arif@bytedance.com, vannapurve@google.com, vbabka@suse.cz,
	viro@zeniv.linux.org.uk, vkuznets@redhat.com, wei.w.wang@intel.com,
	will@kernel.org, willy@infradead.org, xiaoyao.li@intel.com,
	yan.y.zhao@intel.com, yilun.xu@intel.com, yuzenghui@huawei.com,
	zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

guestmem_hugetlb is an allocator for guest_memfd. It wraps HugeTLB to
provide huge folios for guest_memfd.

This patch also introduces guestmem_allocator_operations, a set of
operations that allocators for guest_memfd can provide. In a later
patch, guest_memfd will use these operations to manage pages from an
allocator.

The allocator operations are memory-management-specific and are placed
in mm/ so that key mm-specific functions do not have to be exposed
unnecessarily.
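To make the intended calling convention concrete, the rough sketch below
shows how a guest_memfd inode could drive these operations. It is
illustrative only: guestmem_example_inode_lifecycle() is a made-up name,
and the real wiring into guest_memfd arrives in a later patch in this
series.

#include <linux/err.h>
#include <linux/guestmem.h>
#include <linux/mm.h>

/* Illustrative only: a made-up caller showing the expected call order. */
static int guestmem_example_inode_lifecycle(size_t size, u64 flags)
{
	const struct guestmem_allocator_operations *ops = &guestmem_hugetlb_ops;
	struct folio *folio;
	void *priv;

	/* At inode creation: charge reservations and set up the subpool. */
	priv = ops->inode_setup(size, flags);
	if (IS_ERR(priv))
		return PTR_ERR(priv);

	/* When backing memory is needed: take one huge folio. */
	folio = ops->alloc_folio(priv);
	if (!IS_ERR(folio)) {
		/*
		 * Each folio covers ops->nr_pages_in_folio(priv) base pages;
		 * dropping the reference here merely stands in for the real
		 * folio lifecycle handling in guest_memfd.
		 */
		folio_put(folio);
	}

	/* At inode eviction: release the subpool and uncharge. */
	ops->inode_teardown(priv, size);
	return 0;
}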
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Change-Id: I3cafe111ea7b3c84755d7112ff8f8c541c11136d
---
 include/linux/guestmem.h      |  20 +++++
 include/uapi/linux/guestmem.h |  29 +++++++
 mm/Kconfig                    |   5 +-
 mm/guestmem_hugetlb.c         | 159 ++++++++++++++++++++++++++++++++++
 4 files changed, 212 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/guestmem.h
 create mode 100644 include/uapi/linux/guestmem.h

diff --git a/include/linux/guestmem.h b/include/linux/guestmem.h
new file mode 100644
index 000000000000..4b2d820274d9
--- /dev/null
+++ b/include/linux/guestmem.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_GUESTMEM_H
+#define _LINUX_GUESTMEM_H
+
+#include
+
+struct guestmem_allocator_operations {
+	void *(*inode_setup)(size_t size, u64 flags);
+	void (*inode_teardown)(void *private, size_t inode_size);
+	struct folio *(*alloc_folio)(void *private);
+	/*
+	 * Returns the number of PAGE_SIZE pages in a folio that this guestmem
+	 * allocator provides.
+	 */
+	size_t (*nr_pages_in_folio)(void *priv);
+};
+
+extern const struct guestmem_allocator_operations guestmem_hugetlb_ops;
+
+#endif
diff --git a/include/uapi/linux/guestmem.h b/include/uapi/linux/guestmem.h
new file mode 100644
index 000000000000..2e518682edd5
--- /dev/null
+++ b/include/uapi/linux/guestmem.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_GUESTMEM_H
+#define _UAPI_LINUX_GUESTMEM_H
+
+/*
+ * Huge page size must be explicitly defined when using the guestmem_hugetlb
+ * allocator for guest_memfd. It is the responsibility of the application to
+ * know which sizes are supported on the running system. See mmap(2) man page
+ * for details.
+ */
+
+#define GUESTMEM_HUGETLB_FLAG_SHIFT	58
+#define GUESTMEM_HUGETLB_FLAG_MASK	0x3fUL
+
+#define GUESTMEM_HUGETLB_FLAG_16KB	(14UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_64KB	(16UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_512KB	(19UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_1MB	(20UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_2MB	(21UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_8MB	(23UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_16MB	(24UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_32MB	(25UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_256MB	(28UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_512MB	(29UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_1GB	(30UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_2GB	(31UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+#define GUESTMEM_HUGETLB_FLAG_16GB	(34UL << GUESTMEM_HUGETLB_FLAG_SHIFT)
+
+#endif /* _UAPI_LINUX_GUESTMEM_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 131adc49f58d..bb6e39e37245 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1218,7 +1218,10 @@ config SECRETMEM
 
 config GUESTMEM_HUGETLB
 	bool "Enable guestmem_hugetlb allocator for guest_memfd"
-	depends on HUGETLBFS
+	select GUESTMEM
+	select HUGETLBFS
+	select HUGETLB_PAGE
+	select HUGETLB_PAGE_OPTIMIZE_VMEMMAP
 	help
 	  Enable this to make HugeTLB folios available to guest_memfd
 	  (KVM virtualization) as backing memory.
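As an aside on the UAPI header above: the GUESTMEM_HUGETLB_FLAG_* values
simply encode log2(huge page size) in bits 58-63, mirroring the
MAP_HUGE_*/MFD_HUGE_* convention (which uses bit 26). A minimal userspace
sketch, illustrative only and not part of this patch;
guestmem_hugetlb_size_to_flag() is a made-up helper:

#include <linux/guestmem.h>

/* Build a guestmem_hugetlb flag for a 2^log2sz-byte huge page size. */
static inline unsigned long guestmem_hugetlb_size_to_flag(unsigned int log2sz)
{
	return (unsigned long)log2sz << GUESTMEM_HUGETLB_FLAG_SHIFT;
}

/*
 * guestmem_hugetlb_size_to_flag(21) == GUESTMEM_HUGETLB_FLAG_2MB  (2 MB)
 * guestmem_hugetlb_size_to_flag(30) == GUESTMEM_HUGETLB_FLAG_1GB  (1 GB)
 */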
diff --git a/mm/guestmem_hugetlb.c b/mm/guestmem_hugetlb.c
index 51a724ebcc50..5459ef7eb329 100644
--- a/mm/guestmem_hugetlb.c
+++ b/mm/guestmem_hugetlb.c
@@ -5,6 +5,14 @@
  */
 
 #include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
 
 #include "guestmem_hugetlb.h"
 
@@ -12,3 +20,154 @@ void guestmem_hugetlb_handle_folio_put(struct folio *folio)
 {
 	WARN_ONCE(1, "A placeholder that shouldn't trigger. Work in progress.");
 }
+
+struct guestmem_hugetlb_private {
+	struct hstate *h;
+	struct hugepage_subpool *spool;
+	struct hugetlb_cgroup *h_cg_rsvd;
+};
+
+static size_t guestmem_hugetlb_nr_pages_in_folio(void *priv)
+{
+	struct guestmem_hugetlb_private *private = priv;
+
+	return pages_per_huge_page(private->h);
+}
+
+static void *guestmem_hugetlb_setup(size_t size, u64 flags)
+
+{
+	struct guestmem_hugetlb_private *private;
+	struct hugetlb_cgroup *h_cg_rsvd = NULL;
+	struct hugepage_subpool *spool;
+	unsigned long nr_pages;
+	int page_size_log;
+	struct hstate *h;
+	long hpages;
+	int idx;
+	int ret;
+
+	page_size_log = (flags >> GUESTMEM_HUGETLB_FLAG_SHIFT) &
+			GUESTMEM_HUGETLB_FLAG_MASK;
+	h = hstate_sizelog(page_size_log);
+	if (!h)
+		return ERR_PTR(-EINVAL);
+
+	/*
+	 * Check against h because page_size_log could be 0 to request default
+	 * HugeTLB page size.
+	 */
+	if (!IS_ALIGNED(size, huge_page_size(h)))
+		return ERR_PTR(-EINVAL);
+
+	private = kzalloc(sizeof(*private), GFP_KERNEL);
+	if (!private)
+		return ERR_PTR(-ENOMEM);
+
+	/* Creating a subpool makes reservations, hence charge for them now. */
+	idx = hstate_index(h);
+	nr_pages = size >> PAGE_SHIFT;
+	ret = hugetlb_cgroup_charge_cgroup_rsvd(idx, nr_pages, &h_cg_rsvd);
+	if (ret)
+		goto err_free;
+
+	hpages = size >> huge_page_shift(h);
+	spool = hugepage_new_subpool(h, hpages, hpages, false);
+	if (!spool)
+		goto err_uncharge;
+
+	private->h = h;
+	private->spool = spool;
+	private->h_cg_rsvd = h_cg_rsvd;
+
+	return private;
+
+err_uncharge:
+	ret = -ENOMEM;
+	hugetlb_cgroup_uncharge_cgroup_rsvd(idx, nr_pages, h_cg_rsvd);
+err_free:
+	kfree(private);
+	return ERR_PTR(ret);
+}
+
+static void guestmem_hugetlb_teardown(void *priv, size_t inode_size)
+{
+	struct guestmem_hugetlb_private *private = priv;
+	unsigned long nr_pages;
+	int idx;
+
+	hugepage_put_subpool(private->spool);
+
+	idx = hstate_index(private->h);
+	nr_pages = inode_size >> PAGE_SHIFT;
+	hugetlb_cgroup_uncharge_cgroup_rsvd(idx, nr_pages, private->h_cg_rsvd);
+
+	kfree(private);
+}
+
+static struct folio *guestmem_hugetlb_alloc_folio(void *priv)
+{
+	struct guestmem_hugetlb_private *private = priv;
+	struct mempolicy *mpol;
+	struct folio *folio;
+	pgoff_t ilx;
+	int ret;
+
+	ret = hugepage_subpool_get_pages(private->spool, 1);
+	if (ret == -ENOMEM) {
+		return ERR_PTR(-ENOMEM);
+	} else if (ret > 0) {
+		/* guest_memfd will not use surplus pages. */
+		goto err_put_pages;
+	}
+
+	/*
+	 * TODO: mempolicy would probably have to be stored on the inode, use
+	 * task policy for now.
+	 */
+	mpol = get_task_policy(current);
+
+	/* TODO: ignore interleaving for now. */
+	ilx = NO_INTERLEAVE_INDEX;
+
+	/*
+	 * charge_cgroup_rsvd is false because we already charged reservations
+	 * when creating the subpool for this guest_memfd.
+	 * use_existing_reservation is true - we're using a reservation from
+	 * the guest_memfd's subpool.
+	 */
+	folio = hugetlb_alloc_folio(private->h, mpol, ilx, false, true);
+	mpol_cond_put(mpol);
+
+	if (IS_ERR_OR_NULL(folio))
+		goto err_put_pages;
+
+	/*
+	 * Clear restore_reserve here so that when this folio is freed,
+	 * free_huge_folio() will always attempt to return the reservation to
+	 * the subpool. guest_memfd, unlike regular hugetlb, has no resv_map,
+	 * and hence when freeing, the folio needs to be returned to the
+	 * subpool. guest_memfd does not use surplus hugetlb pages, so in
+	 * free_huge_folio(), returning to subpool will always succeed and the
+	 * hstate reservation will then get restored.
+	 *
+	 * hugetlbfs does this in hugetlb_add_to_page_cache().
+	 */
+	folio_clear_hugetlb_restore_reserve(folio);
+
+	hugetlb_set_folio_subpool(folio, private->spool);
+
+	return folio;
+
+err_put_pages:
+	hugepage_subpool_put_pages(private->spool, 1);
+	return ERR_PTR(-ENOMEM);
+}
+
+const struct guestmem_allocator_operations guestmem_hugetlb_ops = {
+	.inode_setup = guestmem_hugetlb_setup,
+	.inode_teardown = guestmem_hugetlb_teardown,
+	.alloc_folio = guestmem_hugetlb_alloc_folio,
+	.nr_pages_in_folio = guestmem_hugetlb_nr_pages_in_folio,
+};
+EXPORT_SYMBOL_GPL(guestmem_hugetlb_ops);
-- 
2.49.0.1045.g170613ef41-goog
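To make the reservation accounting above concrete, a worked example,
assuming a 1 GiB guest_memfd inode backed by 2 MiB HugeTLB pages with
4 KiB base pages (the numbers follow directly from the code):

- guestmem_hugetlb_setup() charges size >> PAGE_SHIFT = 262144 base pages
  to the HugeTLB reservation cgroup and creates a subpool whose minimum
  and maximum sizes are both hpages = size >> huge_page_shift(h) = 512
  huge pages.
- Each guestmem_hugetlb_alloc_folio() call draws one huge page from that
  subpool via hugepage_subpool_get_pages(); if the request would exceed
  the subpool's reserved minimum (guest_memfd does not use surplus pages)
  or the HugeTLB allocation fails, the page is returned to the subpool and
  the call fails with -ENOMEM, so usage never exceeds the 512 pages
  reserved at setup.
- guestmem_hugetlb_teardown() releases the subpool and uncharges the same
  262144 base pages, so setup and teardown stay balanced regardless of how
  many folios were allocated in between.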