From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ackerley Tng <ackerleytng@google.com>
Date: Wed, 14 May 2025 16:41:58 -0700
Subject: [RFC PATCH v2 19/51] mm: hugetlb: Rename alloc_surplus_hugetlb_folio
Message-ID: <66aa28f888e392f7039de1c20ef854fb05a3c839.1747264138.git.ackerleytng@google.com>
To: kvm@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	x86@kernel.org, linux-fsdevel@vger.kernel.org
Cc: ackerleytng@google.com, aik@amd.com, ajones@ventanamicro.com,
	akpm@linux-foundation.org, amoorthy@google.com,
	anthony.yznaga@oracle.com, anup@brainfault.org, aou@eecs.berkeley.edu,
	bfoster@redhat.com, binbin.wu@linux.intel.com, brauner@kernel.org,
	catalin.marinas@arm.com, chao.p.peng@intel.com, chenhuacai@kernel.org,
	dave.hansen@intel.com, david@redhat.com, dmatlack@google.com,
	dwmw@amazon.co.uk, erdemaktas@google.com, fan.du@intel.com,
	fvdl@google.com, graf@amazon.com, haibo1.xu@intel.com,
	hch@infradead.org, hughd@google.com, ira.weiny@intel.com,
	isaku.yamahata@intel.com, jack@suse.cz, james.morse@arm.com,
	jarkko@kernel.org, jgg@ziepe.ca, jgowans@amazon.com,
	jhubbard@nvidia.com, jroedel@suse.de, jthoughton@google.com,
	jun.miao@intel.com, kai.huang@intel.com, keirf@google.com,
	kent.overstreet@linux.dev, kirill.shutemov@intel.com,
	liam.merwick@oracle.com, maciej.wieczor-retman@intel.com,
	mail@maciej.szmigiero.name, maz@kernel.org, mic@digikod.net,
	michael.roth@amd.com, mpe@ellerman.id.au, muchun.song@linux.dev,
	nikunj@amd.com, nsaenz@amazon.es, oliver.upton@linux.dev,
	palmer@dabbelt.com, pankaj.gupta@amd.com, paul.walmsley@sifive.com,
	pbonzini@redhat.com, pdurrant@amazon.co.uk, peterx@redhat.com,
	pgonda@google.com, pvorel@suse.cz, qperret@google.com,
	quic_cvanscha@quicinc.com, quic_eberman@quicinc.com,
	quic_mnalajal@quicinc.com, quic_pderrin@quicinc.com,
	quic_pheragu@quicinc.com, quic_svaddagi@quicinc.com,
	quic_tsoni@quicinc.com, richard.weiyang@gmail.com,
	rick.p.edgecombe@intel.com, rientjes@google.com, roypat@amazon.co.uk,
	rppt@kernel.org, seanjc@google.com, shuah@kernel.org,
	steven.price@arm.com, steven.sistare@oracle.com,
	suzuki.poulose@arm.com, tabba@google.com, thomas.lendacky@amd.com,
	usama.arif@bytedance.com, vannapurve@google.com, vbabka@suse.cz,
	viro@zeniv.linux.org.uk, vkuznets@redhat.com, wei.w.wang@intel.com,
	will@kernel.org, willy@infradead.org, xiaoyao.li@intel.com,
	yan.y.zhao@intel.com, yilun.xu@intel.com, yuzenghui@huawei.com,
	zhiquan1.li@intel.com
Content-Type: text/plain; charset="UTF-8"

Rename alloc_surplus_hugetlb_folio to alloc_surplus_hugetlb_folio_nodemask,
and rename alloc_buddy_hugetlb_folio_with_mpol to
alloc_surplus_hugetlb_folio, so that the pair aligns with the existing
dequeue_hugetlb_folio vs dequeue_hugetlb_folio_nodemask naming convention:
the _nodemask suffix marks the variant taking an explicit nid and nodemask,
while the unsuffixed name takes a VMA and derives placement from its
mempolicy.
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Change-Id: I38982497eb70aeb174c386ed71bb896d85939eae
---
 mm/hugetlb.c | 38 ++++++++++++++++++++------------------
 1 file changed, 20 insertions(+), 18 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 67144af7ab79..b822b204e9b3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2236,7 +2236,7 @@ int dissolve_free_hugetlb_folios(unsigned long start_pfn, unsigned long end_pfn)
 /*
  * Allocates a fresh surplus page from the page allocator.
  */
-static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
+static struct folio *alloc_surplus_hugetlb_folio_nodemask(struct hstate *h,
 		gfp_t gfp_mask, int nid, nodemask_t *nmask)
 {
 	struct folio *folio = NULL;
@@ -2312,9 +2312,9 @@ static struct folio *alloc_migrate_hugetlb_folio(struct hstate *h, gfp_t gfp_mas
 /*
  * Use the VMA's mpolicy to allocate a huge page from the buddy.
  */
-static
-struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
-		struct vm_area_struct *vma, unsigned long addr)
+static struct folio *alloc_surplus_hugetlb_folio(struct hstate *h,
+						 struct vm_area_struct *vma,
+						 unsigned long addr)
 {
 	struct folio *folio = NULL;
 	struct mempolicy *mpol;
@@ -2326,14 +2326,14 @@ struct folio *alloc_buddy_hugetlb_folio_with_mpol(struct hstate *h,
 	if (mpol_is_preferred_many(mpol)) {
 		gfp_t gfp = gfp_mask & ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
 
-		folio = alloc_surplus_hugetlb_folio(h, gfp, nid, nodemask);
+		folio = alloc_surplus_hugetlb_folio_nodemask(h, gfp, nid, nodemask);
 
 		/* Fallback to all nodes if page==NULL */
 		nodemask = NULL;
 	}
 
 	if (!folio)
-		folio = alloc_surplus_hugetlb_folio(h, gfp_mask, nid, nodemask);
+		folio = alloc_surplus_hugetlb_folio_nodemask(h, gfp_mask, nid, nodemask);
 	mpol_cond_put(mpol);
 	return folio;
 }
@@ -2435,14 +2435,14 @@ static int gather_surplus_pages(struct hstate *h, long delta)
 
 	/* Prioritize current node */
 	if (node_isset(numa_mem_id(), alloc_nodemask))
-		folio = alloc_surplus_hugetlb_folio(h, htlb_alloc_mask(h),
+		folio = alloc_surplus_hugetlb_folio_nodemask(h, htlb_alloc_mask(h),
 				numa_mem_id(), NULL);
 
 	if (!folio) {
 		for_each_node_mask(node, alloc_nodemask) {
 			if (node == numa_mem_id())
 				continue;
-			folio = alloc_surplus_hugetlb_folio(h, htlb_alloc_mask(h),
+			folio = alloc_surplus_hugetlb_folio_nodemask(h, htlb_alloc_mask(h),
 					node, NULL);
 			if (folio)
 				break;
@@ -3055,7 +3055,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
 
 	if (!folio) {
 		spin_unlock_irq(&hugetlb_lock);
-		folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
+		folio = alloc_surplus_hugetlb_folio(h, vma, addr);
 		if (!folio)
 			goto out_uncharge_cgroup;
 		spin_lock_irq(&hugetlb_lock);
@@ -3868,11 +3868,12 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	 * First take pages out of surplus state. Then make up the
 	 * remaining difference by allocating fresh huge pages.
 	 *
-	 * We might race with alloc_surplus_hugetlb_folio() here and be unable
-	 * to convert a surplus huge page to a normal huge page. That is
-	 * not critical, though, it just means the overall size of the
-	 * pool might be one hugepage larger than it needs to be, but
-	 * within all the constraints specified by the sysctls.
+	 * We might race with alloc_surplus_hugetlb_folio_nodemask()
+	 * here and be unable to convert a surplus huge page to a normal
+	 * huge page. That is not critical, though, it just means the
+	 * overall size of the pool might be one hugepage larger than it
+	 * needs to be, but within all the constraints specified by the
+	 * sysctls.
 	 */
 	while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
 		if (!adjust_pool_surplus(h, nodes_allowed, -1))
@@ -3930,10 +3931,11 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 	 * By placing pages into the surplus state independent of the
 	 * overcommit value, we are allowing the surplus pool size to
 	 * exceed overcommit. There are few sane options here. Since
-	 * alloc_surplus_hugetlb_folio() is checking the global counter,
-	 * though, we'll note that we're not allowed to exceed surplus
-	 * and won't grow the pool anywhere else. Not until one of the
-	 * sysctls are changed, or the surplus pages go out of use.
+	 * alloc_surplus_hugetlb_folio_nodemask() is checking the global
+	 * counter, though, we'll note that we're not allowed to exceed
+	 * surplus and won't grow the pool anywhere else. Not until one
+	 * of the sysctls are changed, or the surplus pages go out of
+	 * use.
 	 *
 	 * min_count is the expected number of persistent pages, we
 	 * shouldn't calculate min_count by using
-- 
2.49.0.1045.g170613ef41-goog
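
For readers skimming the diff, the wrapper/worker split that this naming
convention encodes can be modelled in a few lines of plain C. This is a
minimal userspace sketch, not kernel code: fake_folio, fake_policy,
alloc_folio() and alloc_folio_nodemask() are illustrative stand-ins for the
hugetlb types and helpers. The _nodemask-suffixed worker takes explicit
placement arguments (a starting nid plus an optional node mask), and the
unsuffixed wrapper derives those arguments from a policy and retries with
no mask on failure, just as alloc_surplus_hugetlb_folio() now calls
alloc_surplus_hugetlb_folio_nodemask() in the mpol_is_preferred_many()
path above.

/*
 * Illustrative stand-ins only -- NOT kernel types or APIs. They model
 * the naming convention the patch makes uniform:
 *   <op>_hugetlb_folio(...)           policy-aware wrapper
 *   <op>_hugetlb_folio_nodemask(...)  worker with explicit nid + mask
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define FAKE_MAX_NODES 4

struct fake_folio { int nid; };
struct fake_policy { bool preferred_many; int preferred_nid; };

/* Worker: explicit placement, analogous to the *_nodemask variants. */
static struct fake_folio *
alloc_folio_nodemask(int nid, const bool nodemask[FAKE_MAX_NODES])
{
	/* Pretend only node 2 currently has free huge pages. */
	const int node_with_memory = 2;

	for (int n = 0; n < FAKE_MAX_NODES; n++) {
		/* Start the search at the requested nid, then wrap. */
		int candidate = (nid + n) % FAKE_MAX_NODES;

		if (nodemask && !nodemask[candidate])
			continue;	/* node excluded by the mask */
		if (candidate == node_with_memory) {
			struct fake_folio *f = malloc(sizeof(*f));

			if (f)
				f->nid = candidate;
			return f;
		}
	}
	return NULL;
}

/* Wrapper: derives placement from policy, falls back to all nodes. */
static struct fake_folio *
alloc_folio(const struct fake_policy *pol)
{
	struct fake_folio *folio = NULL;
	bool nodemask[FAKE_MAX_NODES] = { false };

	if (pol->preferred_many) {
		/* First attempt: restrict to the preferred node only. */
		nodemask[pol->preferred_nid] = true;
		folio = alloc_folio_nodemask(pol->preferred_nid, nodemask);
	}
	if (!folio)	/* Fallback: NULL mask means "any node". */
		folio = alloc_folio_nodemask(pol->preferred_nid, NULL);
	return folio;
}

int main(void)
{
	struct fake_policy pol = { .preferred_many = true, .preferred_nid = 0 };
	struct fake_folio *folio = alloc_folio(&pol);

	if (folio)
		printf("allocated on node %d (preferred was %d)\n",
		       folio->nid, pol.preferred_nid);
	free(folio);
	return 0;
}

The same split explains the fallback in alloc_surplus_hugetlb_folio()
above: with a "preferred many" mempolicy the first attempt restricts
placement to the preferred nodes, and clearing the nodemask before the
second attempt widens the search to every node.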