From: "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>
To: linux-mm@kvack.org, akpm@linux-foundation.org
Cc: Wei Xu, Huang Ying, Yang Shi, Davidlohr Bueso, Tim C Chen,
	Michal Hocko, Linux Kernel Mailing List, Hesham Almatary,
	Dave Hansen, Jonathan Cameron, Alistair Popple, Dan Williams,
	"Aneesh Kumar K.V"
Subject: [PATCH v7 10/12] mm/demotion: Update node_is_toptier to work with memory tiers
Date: Wed, 22 Jun 2022 13:55:11 +0530
Message-Id: <20220622082513.467538-11-aneesh.kumar@linux.ibm.com>
X-Mailer: git-send-email 2.36.1
In-Reply-To: <20220622082513.467538-1-aneesh.kumar@linux.ibm.com>
References: <20220622082513.467538-1-aneesh.kumar@linux.ibm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With memory tier support we can have memory-only NUMA nodes in the
top tier, and we want to avoid NUMA hint faults for promotion
tracking on such nodes. Update node_is_toptier() to work with memory
tiers. By default, all NUMA nodes are top tier nodes. Once lower
memory tiers are added, all memory tiers above (and including) a
memory tier that has CPU NUMA nodes are considered top memory tiers.
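For illustration only (none of the following is part of the patch): consider a
hypothetical topology with a CPU+DRAM node 0, a CPU-less DRAM node 1 (for
example CXL-attached) placed in the same top memory tier, and a PMEM node 2 in
a lower tier. The old N_CPU-based definition reports node 1 as not top tier,
so NUMA hint faults keep being taken on it to track pages for promotion; with
this patch node 1 is treated like node 0. A minimal sketch of the intended
use follows; the helper name track_for_promotion() is hypothetical and only
stands in for the existing NUMA balancing call sites (the files gaining
#include lines below), which additionally consult the NUMA balancing mode:

/*
 * Hypothetical example topology (illustration only):
 *   node 0: CPUs + DRAM, top memory tier
 *   node 1: CPU-less DRAM (e.g. CXL), same top memory tier as node 0
 *   node 2: PMEM, lower memory tier
 *
 * Old definition (node_state(node, N_CPU)):
 *   node_is_toptier(0) == true,  node_is_toptier(1) == false
 * New definition (memory-tier based):
 *   node_is_toptier(0) == true,  node_is_toptier(1) == true,
 *   node_is_toptier(2) == false
 */
static bool track_for_promotion(int nid)
{
	/* A page already in a top memory tier has nowhere to be promoted to. */
	return !node_is_toptier(nid);
}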
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 include/linux/memory-tiers.h |  6 ++++++
 include/linux/node.h         |  5 -----
 mm/huge_memory.c             |  1 +
 mm/memory-tiers.c            | 41 ++++++++++++++++++++++++++++++++++++
 mm/migrate.c                 |  1 +
 mm/mprotect.c                |  1 +
 6 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/include/linux/memory-tiers.h b/include/linux/memory-tiers.h
index 335d21a30b2c..ff1a08933575 100644
--- a/include/linux/memory-tiers.h
+++ b/include/linux/memory-tiers.h
@@ -44,6 +44,7 @@ int node_get_memory_tier_id(int node);
 int node_update_memory_tier(int node, int tier);
 struct memory_tier *node_get_memory_tier(int node);
 void node_put_memory_tier(struct memory_tier *memtier);
+bool node_is_toptier(int node);
 
 #else
 
@@ -62,5 +63,10 @@ static inline void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
 {
 	*targets = NODE_MASK_NONE;
 }
+
+static inline bool node_is_toptier(int node)
+{
+	return true;
+}
 #endif /* CONFIG_NUMA */
 #endif /* _LINUX_MEMORY_TIERS_H */
diff --git a/include/linux/node.h b/include/linux/node.h
index 40d641a8bfb0..9ec680dd607f 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -185,9 +185,4 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
 
 #define to_node(device) container_of(device, struct node, dev)
 
-static inline bool node_is_toptier(int node)
-{
-	return node_state(node, N_CPU);
-}
-
 #endif /* _LINUX_NODE_H_ */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index f7248002dad9..88f3288c8be0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -35,6 +35,7 @@
 #include
 #include
 #include
+#include <linux/memory-tiers.h>
 #include
 #include
diff --git a/mm/memory-tiers.c b/mm/memory-tiers.c
index aecce987df7c..7204f7381a15 100644
--- a/mm/memory-tiers.c
+++ b/mm/memory-tiers.c
@@ -18,6 +18,7 @@ struct demotion_nodes {
 static void establish_migration_targets(void);
 static DEFINE_MUTEX(memory_tier_lock);
 static LIST_HEAD(memory_tiers);
+static int top_tier_id;
 /*
  * node_demotion[] examples:
  *
@@ -373,6 +374,31 @@ void node_put_memory_tier(struct memory_tier *memtier)
 	put_device(&memtier->dev);
 }
 
+bool node_is_toptier(int node)
+{
+	bool toptier;
+	pg_data_t *pgdat;
+	struct memory_tier *memtier;
+
+	pgdat = NODE_DATA(node);
+	if (!pgdat)
+		return false;
+
+	rcu_read_lock();
+	memtier = rcu_dereference(pgdat->memtier);
+	if (!memtier) {
+		toptier = true;
+		goto out;
+	}
+	if (memtier->dev.id >= top_tier_id)
+		toptier = true;
+	else
+		toptier = false;
+out:
+	rcu_read_unlock();
+	return toptier;
+}
+
 #ifdef CONFIG_MIGRATION
 void node_get_allowed_targets(pg_data_t *pgdat, nodemask_t *targets)
 {
@@ -545,6 +571,21 @@ static void establish_migration_targets(void)
 		} while (1);
 	}
 build_lower_tier_mask:
+	/*
+	 * Promotion is allowed from a memory tier to a higher
+	 * memory tier only if the source memory tier doesn't include
+	 * compute. We want to skip promotion from a memory tier
+	 * if any node that is part of that memory tier has CPUs.
+	 * Once we detect such a memory tier, we consider that tier
+	 * as the top tier from which promotion is not allowed.
+	 */
+	list_for_each_entry_reverse(memtier, &memory_tiers, list) {
+		nodes_and(used, node_states[N_CPU], memtier->nodelist);
+		if (!nodes_empty(used)) {
+			top_tier_id = memtier->dev.id;
+			break;
+		}
+	}
 	/*
 	 * Now build the lower_tier mask for each node collecting node mask from
 	 * all memory tier below it. This allows us to fallback demotion page
diff --git a/mm/migrate.c b/mm/migrate.c
index 0b554625a219..78615c48fc0f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -50,6 +50,7 @@
 #include
 #include
 #include
+#include <linux/memory-tiers.h>
 #include
diff --git a/mm/mprotect.c b/mm/mprotect.c
index ba5592655ee3..92a2fc0fa88b 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -31,6 +31,7 @@
 #include
 #include
 #include
+#include <linux/memory-tiers.h>
 #include
 #include
 #include
-- 
2.36.1
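As a footnote to the establish_migration_targets() hunk above: the sketch
below is a small user-space model (illustration only, not kernel code) of the
same selection rule, i.e. walk the tier list in reverse, let the first tier
that contains a CPU node set top_tier_id, and report a node as top tier iff
its tier id is >= top_tier_id. The tier ids, list order and CPU placement are
made-up example data.

/*
 * User-space model of the top-tier selection in this patch.
 * Example data only; build with: cc -o toptier toptier.c
 */
#include <stdbool.h>
#include <stdio.h>

struct tier {
	int id;		/* models memtier->dev.id */
	bool has_cpu;	/* does any node in this tier have CPUs? */
};

/* Models the memory_tiers list, head first; the kernel walks it in reverse. */
static const struct tier tiers[] = {
	{ .id = 2, .has_cpu = false },	/* e.g. HBM/GPU-only tier */
	{ .id = 1, .has_cpu = true  },	/* DRAM tier with CPU nodes */
	{ .id = 0, .has_cpu = false },	/* e.g. PMEM tier */
};

int main(void)
{
	int n = (int)(sizeof(tiers) / sizeof(tiers[0]));
	int top_tier_id = 0;
	int i;

	/* Reverse walk, as in establish_migration_targets(). */
	for (i = n - 1; i >= 0; i--) {
		if (tiers[i].has_cpu) {
			top_tier_id = tiers[i].id;
			break;
		}
	}

	/* node_is_toptier(): a node's tier is "top" iff id >= top_tier_id. */
	for (i = 0; i < n; i++)
		printf("tier %d: %s\n", tiers[i].id,
		       tiers[i].id >= top_tier_id ? "toptier" : "not toptier");
	return 0;
}

With this made-up layout the program reports tier 2 and the CPU-bearing tier 1
as toptier and tier 0 as not toptier, which matches the behaviour the new
comment in establish_migration_targets() describes.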