From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 14 May 2025 16:42:00 -0700
Mime-Version: 1.0
X-Mailer: git-send-email 2.49.0.1045.g170613ef41-goog
Subject: [RFC PATCH v2 21/51] mm: hugetlb: Inline huge_node() into callers
From: Ackerley Tng
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
 akpm@linux-foundation.org, amoorthy@google.com, anthony.yznaga@oracle.com,
 anup@brainfault.org, aou@eecs.berkeley.edu, bfoster@redhat.com,
 binbin.wu@linux.intel.com, brauner@kernel.org, catalin.marinas@arm.com,
 chao.p.peng@intel.com, chenhuacai@kernel.org, dave.hansen@intel.com,
 david@redhat.com, dmatlack@google.com,
 dwmw@amazon.co.uk, erdemaktas@google.com, fan.du@intel.com, fvdl@google.com,
 graf@amazon.com, haibo1.xu@intel.com, hch@infradead.org, hughd@google.com,
 ira.weiny@intel.com, isaku.yamahata@intel.com, jack@suse.cz,
 james.morse@arm.com, jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
 jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
 jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
 kent.overstreet@linux.dev, kirill.shutemov@intel.com,
 liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
 mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
 michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
 nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
 palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
 pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
 pgonda@google.com, pvorel@suse.cz, qperret@google.com,
 quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
 quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
 quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
 quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
 rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
 rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
 steven.price@arm.com, steven.sistare@oracle.com, suzuki.poulose@arm.com,
 tabba@google.com, thomas.lendacky@amd.com, usama.arif@bytedance.com,
 vannapurve@google.com, vbabka@suse.cz, viro@zeniv.linux.org.uk,
 vkuznets@redhat.com, wei.w.wang@intel.com, will@kernel.org,
 willy@infradead.org, xiaoyao.li@intel.com, yan.y.zhao@intel.com,
 yilun.xu@intel.com, yuzenghui@huawei.com, zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

huge_node()'s role was to read the struct mempolicy (mpol) from the vma
and then interpret that mpol to obtain a node id and nodemask.

huge_node() can be inlined into its callers, since 2 of its 3 callers
will be refactored in later patches to take and interpret an mpol
directly, without reading it from the vma.
Signed-off-by: Ackerley Tng

Change-Id: Ic94b2ed916fd4f89b7d2755288a3a2f6a56051f7
---
 include/linux/mempolicy.h | 12 ------------
 mm/hugetlb.c              | 13 ++++++++++---
 mm/mempolicy.c            | 21 ---------------------
 3 files changed, 10 insertions(+), 36 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index 840c576abcfd..41fc53605ef0 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -140,9 +140,6 @@ extern void mpol_rebind_mm(struct mm_struct *mm, nodemask_t *new);
 
 extern int policy_node_nodemask(struct mempolicy *mpol, gfp_t gfp_flags,
 				pgoff_t ilx, nodemask_t **nodemask);
-extern int huge_node(struct vm_area_struct *vma,
-		     unsigned long addr, gfp_t gfp_flags,
-		     struct mempolicy **mpol, nodemask_t **nodemask);
 extern bool init_nodemask_of_mempolicy(nodemask_t *mask);
 extern bool mempolicy_in_oom_domain(struct task_struct *tsk,
 				    const nodemask_t *mask);
@@ -260,15 +257,6 @@ static inline int policy_node_nodemask(struct mempolicy *mpol, gfp_t gfp_flags,
 	return 0;
 }
 
-static inline int huge_node(struct vm_area_struct *vma,
-			    unsigned long addr, gfp_t gfp_flags,
-			    struct mempolicy **mpol, nodemask_t **nodemask)
-{
-	*mpol = NULL;
-	*nodemask = NULL;
-	return 0;
-}
-
 static inline bool init_nodemask_of_mempolicy(nodemask_t *m)
 {
 	return false;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b822b204e9b3..5cc261b90e39 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1372,10 +1372,12 @@ static struct folio *dequeue_hugetlb_folio(struct hstate *h,
 	struct mempolicy *mpol;
 	gfp_t gfp_mask;
 	nodemask_t *nodemask;
+	pgoff_t ilx;
 	int nid;
 
 	gfp_mask = htlb_alloc_mask(h);
-	nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+	mpol = get_vma_policy(vma, address, h->order, &ilx);
+	nid = policy_node_nodemask(mpol, gfp_mask, ilx, &nodemask);
 
 	if (mpol_is_preferred_many(mpol)) {
 		folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
@@ -2321,8 +2323,11 @@ static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
 	gfp_t gfp_mask = htlb_alloc_mask(h);
 	int nid;
 	nodemask_t *nodemask;
+	pgoff_t ilx;
+
+	mpol = get_vma_policy(vma, addr, h->order, &ilx);
+	nid = policy_node_nodemask(mpol, gfp_mask, ilx, &nodemask);
 
-	nid = huge_node(vma, addr, gfp_mask, &mpol, &nodemask);
 	if (mpol_is_preferred_many(mpol)) {
 		gfp_t gfp = gfp_mask & ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
@@ -6829,10 +6834,12 @@ static struct folio *alloc_hugetlb_folio_vma(struct hstate *h,
 	nodemask_t *nodemask;
 	struct folio *folio;
 	gfp_t gfp_mask;
+	pgoff_t ilx;
 	int node;
 
 	gfp_mask = htlb_alloc_mask(h);
-	node = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+	mpol = get_vma_policy(vma, address, h->order, &ilx);
+	node = policy_node_nodemask(mpol, gfp_mask, ilx, &nodemask);
 	/*
 	 * This is used to allocate a temporary hugetlb to hold the copied
 	 * content, which will then be copied again to the final hugetlb
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 7837158ee5a8..39d0abc407dc 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2145,27 +2145,6 @@ int policy_node_nodemask(struct mempolicy *mpol, gfp_t gfp_flags,
 }
 
 #ifdef CONFIG_HUGETLBFS
-/*
- * huge_node(@vma, @addr, @gfp_flags, @mpol)
- * @vma: virtual memory area whose policy is sought
- * @addr: address in @vma for shared policy lookup and interleave policy
- * @gfp_flags: for requested zone
- * @mpol: pointer to mempolicy pointer for reference counted mempolicy
- * @nodemask: pointer to nodemask pointer for 'bind' and 'prefer-many' policy
- *
- * Returns a nid suitable for a huge page allocation and a pointer
- * to the struct mempolicy for conditional unref after allocation.
- * If the effective policy is 'bind' or 'prefer-many', returns a pointer
- * to the mempolicy's @nodemask for filtering the zonelist.
- */
-int huge_node(struct vm_area_struct *vma, unsigned long addr, gfp_t gfp_flags,
-	      struct mempolicy **mpol, nodemask_t **nodemask)
-{
-	pgoff_t ilx;
-
-	*mpol = get_vma_policy(vma, addr, hstate_vma(vma)->order, &ilx);
-	return policy_node_nodemask(*mpol, gfp_flags, ilx, nodemask);
-}
 
 /*
  * init_nodemask_of_mempolicy
-- 
2.49.0.1045.g170613ef41-goog