From: Gang Li
To: David Hildenbrand, David Rientjes, Mike Kravetz, Muchun Song, Andrew Morton, Tim Chen
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, ligang.bdlg@bytedance.com, Gang Li
Subject: [PATCH v4 4/7] hugetlb: pass *next_nid_to_alloc directly to for_each_node_mask_to_alloc
Date: Thu, 18 Jan 2024 20:39:08 +0800
Message-Id: <20240118123911.88833-5-gang.li@linux.dev>
In-Reply-To: <20240118123911.88833-1-gang.li@linux.dev>
References: <20240118123911.88833-1-gang.li@linux.dev>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With parallelization of hugetlb allocation across different threads, each
thread works on a different node to allocate pages from, instead of all
allocating from a common node h->next_nid_to_alloc. To address this, it's
necessary to assign a separate next_nid_to_alloc for each thread.
Consequently, hstate_next_node_to_alloc() and for_each_node_mask_to_alloc()
have been modified to take a *next_nid_to_alloc parameter directly,
ensuring thread-specific allocation and avoiding concurrent access issues.
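For context (not part of this patch): once the cursor is an explicit
parameter, a parallel allocation path can hand each worker thread its own
cursor instead of the shared h->next_nid_to_alloc. A minimal sketch of
that usage, where the worker function and its parameters are hypothetical:

/*
 * Illustrative sketch only, not part of this patch: a per-thread
 * allocation worker keeps a private round-robin cursor, so concurrent
 * workers never touch the shared h->next_nid_to_alloc. The function
 * name and parameters here are hypothetical.
 */
static void alloc_pool_worker(struct hstate *h, nodemask_t *nodes_allowed,
			      int first_nid)
{
	int next_node = first_nid;	/* thread-private cursor */
	int nr_nodes, node;

	for_each_node_mask_to_alloc(&next_node, nr_nodes, node, nodes_allowed) {
		/*
		 * Allocate one huge folio on 'node'; the macro has already
		 * advanced this thread's cursor to the next allowed node.
		 */
	}
}

The single-threaded callers below keep passing &h->next_nid_to_alloc, so
their behavior is unchanged; the pointer only makes the cursor's ownership
explicit at each call site.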
Signed-off-by: Gang Li
Tested-by: David Rientjes
---
 mm/hugetlb.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 98ae108e1fac..effe5539e545 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1464,15 +1464,15 @@ static int get_valid_node_allowed(int nid, nodemask_t *nodes_allowed)
  * next node from which to allocate, handling wrap at end of node
  * mask.
  */
-static int hstate_next_node_to_alloc(struct hstate *h,
+static int hstate_next_node_to_alloc(int *next_node,
 					nodemask_t *nodes_allowed)
 {
 	int nid;
 
 	VM_BUG_ON(!nodes_allowed);
 
-	nid = get_valid_node_allowed(h->next_nid_to_alloc, nodes_allowed);
-	h->next_nid_to_alloc = next_node_allowed(nid, nodes_allowed);
+	nid = get_valid_node_allowed(*next_node, nodes_allowed);
+	*next_node = next_node_allowed(nid, nodes_allowed);
 
 	return nid;
 }
@@ -1495,10 +1495,10 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed)
 	return nid;
 }
 
-#define for_each_node_mask_to_alloc(hs, nr_nodes, node, mask)		\
+#define for_each_node_mask_to_alloc(next_node, nr_nodes, node, mask)	\
 	for (nr_nodes = nodes_weight(*mask);				\
 		nr_nodes > 0 &&						\
-		((node = hstate_next_node_to_alloc(hs, mask)) || 1);	\
+		((node = hstate_next_node_to_alloc(next_node, mask)) || 1);	\
 		nr_nodes--)
 
 #define for_each_node_mask_to_free(hs, nr_nodes, node, mask)		\
@@ -2350,12 +2350,13 @@ static void prep_and_add_allocated_folios(struct hstate *h,
  */
 static struct folio *alloc_pool_huge_folio(struct hstate *h,
 					nodemask_t *nodes_allowed,
-					nodemask_t *node_alloc_noretry)
+					nodemask_t *node_alloc_noretry,
+					int *next_node)
 {
 	gfp_t gfp_mask = htlb_alloc_mask(h) | __GFP_THISNODE;
 	int nr_nodes, node;
 
-	for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
+	for_each_node_mask_to_alloc(next_node, nr_nodes, node, nodes_allowed) {
 		struct folio *folio;
 
 		folio = only_alloc_fresh_hugetlb_folio(h, gfp_mask, node,
@@ -3310,7 +3311,7 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 		goto found;
 	}
 	/* allocate from next node when distributing huge pages */
-	for_each_node_mask_to_alloc(h, nr_nodes, node, &node_states[N_MEMORY]) {
+	for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, &node_states[N_MEMORY]) {
 		m = memblock_alloc_try_nid_raw(
 				huge_page_size(h), huge_page_size(h),
 				0, MEMBLOCK_ALLOC_ACCESSIBLE, node);
@@ -3679,7 +3680,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
 	VM_BUG_ON(delta != -1 && delta != 1);
 
 	if (delta < 0) {
-		for_each_node_mask_to_alloc(h, nr_nodes, node, nodes_allowed) {
+		for_each_node_mask_to_alloc(&h->next_nid_to_alloc, nr_nodes, node, nodes_allowed) {
 			if (h->surplus_huge_pages_node[node])
 				goto found;
 		}
@@ -3794,7 +3795,8 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
 		cond_resched();
 
 		folio = alloc_pool_huge_folio(h, nodes_allowed,
-						node_alloc_noretry);
+						node_alloc_noretry,
+						&h->next_nid_to_alloc);
 		if (!folio) {
 			prep_and_add_allocated_folios(h, &page_list);
 			spin_lock_irq(&hugetlb_lock);
-- 
2.20.1