The Linux Kernel Mailing List
From: Chen Yu <yu.c.chen@intel.com>
To: kprateek.nayak@amd.com, tim.c.chen@linux.intel.com, peterz@infradead.org
Cc: pan.deng@intel.com, mingo@kernel.org,
	linux-kernel@vger.kernel.org, tianyou.li@intel.com,
	Chen Yu <yu.c.chen@intel.com>
Subject: [PATCH 2/3] lib/sbm: Use dynamically sized bitmap in sbm_leaf
Date: Sun, 10 May 2026 23:59:18 +0800	[thread overview]
Message-ID: <20260510155920.2587431-3-yu.c.chen@intel.com> (raw)
In-Reply-To: <20260510155920.2587431-1-yu.c.chen@intel.com>

The original sbm_leaf uses a single unsigned long (64 bits on x86_64)
as its bitmap, which limits each leaf to representing at most 64 CPUs.
When an LLC domain contains more than 64 logical CPUs, the within-leaf
bit position (computed as apicid & arch_sbm_mask) can exceed 63.

Since set_bit(nr, addr) treats addr as an arbitrarily long bitmap
array, set_bit(65, &leaf->bitmap) would write to (&leaf->bitmap)[1],
i.e. memory beyond the single unsigned long field. While the
____cacheline_aligned padding may prevent corruption of adjacent
leaves, the bits written into the padding are never read back by
sbm_find_next_bit(), silently making those CPUs invisible.

Fix this by converting the fixed unsigned long bitmap to a flexible
array member (unsigned long bitmap[]) whose size is determined at
allocation time from the number of CPUs in the TILE domain
(1 << arch_sbm_shift). A subsequent patch will derive the size from
the number of CPUs sharing the LLC instead of the TILE domain.
---
 include/linux/sbm.h |  5 +++--
 lib/sbm.c           | 28 +++++++++++++++++-----------
 2 files changed, 20 insertions(+), 13 deletions(-)

diff --git a/include/linux/sbm.h b/include/linux/sbm.h
index a25a96366694..8d60f4bc7004 100644
--- a/include/linux/sbm.h
+++ b/include/linux/sbm.h
@@ -28,7 +28,8 @@ struct sbm_root {
 
 struct sbm_leaf {
 	enum sbm_type	type;
-	unsigned long	bitmap;
+	unsigned int	nbits;
+	unsigned long	bitmap[];
 } ____cacheline_aligned;
 
 struct sbm {
@@ -48,7 +49,7 @@ extern int sbm_find_next_bit(struct sbm *sbm, int start);
 		leaf = root->leafs[nr];			\
 	}						\
 	int bit = idx & arch_sbm_mask;			\
-	func(bit, &leaf->bitmap);			\
+	func(bit, leaf->bitmap);			\
 })
 
 static inline void sbm_cpu_set(struct sbm *sbm, int cpu)
diff --git a/lib/sbm.c b/lib/sbm.c
index 8006f9b04b62..76670ce14291 100644
--- a/lib/sbm.c
+++ b/lib/sbm.c
@@ -4,6 +4,8 @@
 struct sbm *sbm_alloc(void)
 {
 	unsigned int nr = arch_sbm_leafs;
+	unsigned int nbits = 1U << arch_sbm_shift;
+	unsigned int nlongs = BITS_TO_LONGS(nbits);
 	struct sbm_root *root = kzalloc_flex(*root, leafs, nr);
 	struct sbm_leaf *leaf;
 	if (!root)
@@ -12,10 +14,12 @@ struct sbm *sbm_alloc(void)
 	root->type = st_root;
 
 	for (int i = 0; i < nr; i++) {
-		leaf = kzalloc_obj(*leaf);
+		leaf = kzalloc(struct_size(leaf, bitmap, nlongs),
+			       GFP_KERNEL);
 		if (!leaf)
 			goto fail;
 		leaf->type = st_leaf;
+		leaf->nbits = nbits;
 		root->leafs[i] = leaf;
 	}
 
@@ -40,18 +44,20 @@ int sbm_find_next_bit(struct sbm *sbm, int start)
 	struct sbm_root *root = (void *)sbm;
 	int nr = start >> arch_sbm_shift;
 	int bit = start & arch_sbm_mask;
-	unsigned long tmp, mask = (~0UL) << bit;
+	unsigned int found;
+
 	if (sbm->type == st_root) {
-		for (; nr < arch_sbm_leafs; nr++, mask = ~0UL) {
+		do {
 			leaf = root->leafs[nr];
-			tmp = leaf->bitmap & mask;
-			if (tmp)
-				break;
-		}
+			found = find_next_bit(leaf->bitmap, leaf->nbits, bit);
+			if (found < leaf->nbits)
+				return (nr << arch_sbm_shift) | found;
+			bit = 0;
+		} while (++nr < arch_sbm_leafs);
 	} else {
-		tmp = leaf->bitmap & mask;
+		found = find_next_bit(leaf->bitmap, leaf->nbits, bit);
+		if (found < leaf->nbits)
+			return found;
 	}
-	if (!tmp)
-		return -1;
-	return (nr << arch_sbm_shift) | __ffs(tmp);
+	return -1;
 }
-- 
2.25.1



Thread overview: 6+ messages
     [not found] <729726b9-c669-41e2-887d-bdf9da703034@amd.com>
2026-05-10 15:59 ` [PATCH v2 1/4] sched/rt: Optimize cpupri_vec layout to mitigate cache line contention Chen Yu
2026-05-10 15:59   ` [PATCH 1/3] x86/sbm: Fix domain shift calculation and sbm_find_next_bit() Chen Yu
2026-05-10 15:59   ` Chen Yu [this message]
2026-05-10 15:59   ` [PATCH 3/3] x86/sbm: Derive leaf granularity from LLC cacheinfo instead of topology domain Chen Yu
2026-05-11  7:48     ` K Prateek Nayak
2026-05-12  9:29       ` Chen, Yu C
