* [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL
@ 2025-04-07 15:57 Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 01/19] rcuref: Provide rcuref_is_dead() Sebastian Andrzej Siewior
` (21 more replies)
0 siblings, 22 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
This is a follow-up to
https://lore.kernel.org/ZwVOMgBMxrw7BU9A@jlelli-thinkpadt14gen4.remote.csb
and adds support for a task-local futex_hash_bucket.
This is the local hash map series with PeterZ FUTEX2_NUMA and
FUTEX2_MPOL plus a few fixes on top.
The complete tree is at
https://git.kernel.org/pub/scm/linux/kernel/git/bigeasy/staging.git/log/?h=futex_local_v11
https://git.kernel.org/pub/scm/linux/kernel/git/bigeasy/staging.git futex_local_v11
v10…v11: https://lore.kernel.org/all/20250312151634.2183278-1-bigeasy@linutronix.de
- PeterZ's fixups and changes to the local hash series have been folded
into the earlier patches, so things are no longer added first and then
renamed or changed in functionality later.
- vmalloc_huge() has been implemented on top of vmalloc_huge_node()
and the NOMMU bits have been adjusted. akpm asked for this.
- wake_up_var() has been removed from __futex_pivot_hash(). It is
enough to wake the userspace waiter after the final put so it can
perform the resize itself.
- Changed the logic in futex_pivot_pending() so it does not block for
the user. It waits for __futex_pivot_hash() which follows the logic
in __futex_pivot_hash().
- Updated kernel doc for __futex_hash().
- Patches 17+ are new:
- Wire up PR_FUTEX_HASH_SET_SLOTS in "perf bench futex"
- Add "immutable" mode to PR_FUTEX_HASH_SET_SLOTS to avoid resizing
the local hash any further. This avoids rcuref usage which is
noticeable in "perf bench futex hash"
Peter Zijlstra (8):
mm: Add vmalloc_huge_node()
futex: Move futex_queue() into futex_wait_setup()
futex: Pull futex_hash() out of futex_q_lock()
futex: Create hb scopes
futex: Create futex_hash() get/put class
futex: Create private_hash() get/put class
futex: Implement FUTEX2_NUMA
futex: Implement FUTEX2_MPOL
Sebastian Andrzej Siewior (11):
rcuref: Provide rcuref_is_dead().
futex: Acquire a hash reference in futex_wait_multiple_setup().
futex: Decrease the waiter count before the unlock operation.
futex: Introduce futex_q_lockptr_lock().
futex: Create helper function to initialize a hash slot.
futex: Add basic infrastructure for local task local hash.
futex: Allow automatic allocation of process wide futex hash.
futex: Allow to resize the private local hash.
tools headers: Synchronize prctl.h ABI header
tools/perf: Allow to select the number of hash buckets.
futex: Allow to make the private hash immutable.
include/linux/futex.h | 36 +-
include/linux/mm_types.h | 7 +-
include/linux/mmap_lock.h | 4 +
include/linux/rcuref.h | 22 +-
include/linux/vmalloc.h | 9 +-
include/uapi/linux/futex.h | 10 +-
include/uapi/linux/prctl.h | 6 +
init/Kconfig | 10 +
io_uring/futex.c | 4 +-
kernel/fork.c | 24 +
kernel/futex/core.c | 794 ++++++++++++++++++++++---
kernel/futex/futex.h | 73 ++-
kernel/futex/pi.c | 300 +++++-----
kernel/futex/requeue.c | 480 +++++++--------
kernel/futex/waitwake.c | 201 ++++---
kernel/sys.c | 4 +
mm/nommu.c | 18 +-
mm/vmalloc.c | 11 +-
tools/include/uapi/linux/prctl.h | 44 +-
tools/perf/bench/Build | 1 +
tools/perf/bench/futex-hash.c | 7 +
tools/perf/bench/futex-lock-pi.c | 5 +
tools/perf/bench/futex-requeue.c | 6 +
tools/perf/bench/futex-wake-parallel.c | 9 +-
tools/perf/bench/futex-wake.c | 4 +
tools/perf/bench/futex.c | 60 ++
tools/perf/bench/futex.h | 5 +
27 files changed, 1585 insertions(+), 569 deletions(-)
create mode 100644 tools/perf/bench/futex.c
--
2.49.0
^ permalink raw reply [flat|nested] 31+ messages in thread
* [PATCH v11 01/19] rcuref: Provide rcuref_is_dead().
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 02/19] mm: Add vmalloc_huge_node() Sebastian Andrzej Siewior
` (20 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
rcuref_read() returns the number of references that are currently held.
If 0 is returned, it is not safe to assume that the object is marked
DEAD and can be scheduled for deconstruction. This matters if the return
value of rcuref_put() is ignored and assumptions are based on
rcuref_read() instead.
If rcuref_read() returns 0 then the counter transitioned from 0 to
RCUREF_NOREF. If rcuref_put() has not yet returned to the caller, the
counter has not yet transitioned from RCUREF_NOREF to RCUREF_DEAD. In
that case there is still a chance that the counter transitions back from
RCUREF_NOREF to 0, meaning the object is still valid and must not be
deconstructed. In this brief window rcuref_read() returns 0.
Provide rcuref_is_dead() to determine whether the counter is marked
RCUREF_DEAD.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/rcuref.h | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/include/linux/rcuref.h b/include/linux/rcuref.h
index 6322d8c1c6b42..2fb2af6d98249 100644
--- a/include/linux/rcuref.h
+++ b/include/linux/rcuref.h
@@ -30,7 +30,11 @@ static inline void rcuref_init(rcuref_t *ref, unsigned int cnt)
* rcuref_read - Read the number of held reference counts of a rcuref
* @ref: Pointer to the reference count
*
- * Return: The number of held references (0 ... N)
+ * Return: The number of held references (0 ... N). The value 0 does not
+ * indicate that it is safe to schedule the object, protected by this reference
+ * counter, for deconstruction.
+ * If you want to know if the reference counter has been marked DEAD (as
+ * signaled by rcuref_put()) please use rcuref_is_dead().
*/
static inline unsigned int rcuref_read(rcuref_t *ref)
{
@@ -40,6 +44,22 @@ static inline unsigned int rcuref_read(rcuref_t *ref)
return c >= RCUREF_RELEASED ? 0 : c + 1;
}
+/**
+ * rcuref_is_dead - Check if the rcuref has been already marked dead
+ * @ref: Pointer to the reference count
+ *
+ * Return: True if the object has been marked DEAD. This signals that a previous
+ * invocation of rcuref_put() returned true on this reference counter meaning
+ * the protected object can safely be scheduled for deconstruction.
+ * Otherwise, returns false.
+ */
+static inline bool rcuref_is_dead(rcuref_t *ref)
+{
+ unsigned int c = atomic_read(&ref->refcnt);
+
+ return (c >= RCUREF_RELEASED) && (c < RCUREF_NOREF);
+}
+
extern __must_check bool rcuref_get_slowpath(rcuref_t *ref);
/**
--
2.49.0
* [PATCH v11 02/19] mm: Add vmalloc_huge_node()
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 01/19] rcuref: Provide rcuref_is_dead() Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 03/19] futex: Move futex_queue() into futex_wait_setup() Sebastian Andrzej Siewior
` (19 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Andrew Morton, Uladzislau Rezki, Christoph Hellwig,
linux-mm, Christoph Hellwig, Sebastian Andrzej Siewior
From: Peter Zijlstra <peterz@infradead.org>
Add vmalloc_huge_node() to enable node-specific hash tables that use huge pages where possible.
[bigeasy: use __vmalloc_node_range_noprof(), add nommu bits, inline
vmalloc_huge]
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Uladzislau Rezki <urezki@gmail.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: linux-mm@kvack.org
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/vmalloc.h | 9 +++++++--
mm/nommu.c | 18 +++++++++++++++++-
mm/vmalloc.c | 11 ++++++-----
3 files changed, 30 insertions(+), 8 deletions(-)
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 31e9ffd936e39..de95794777ad6 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -168,8 +168,13 @@ void *__vmalloc_node_noprof(unsigned long size, unsigned long align, gfp_t gfp_m
int node, const void *caller) __alloc_size(1);
#define __vmalloc_node(...) alloc_hooks(__vmalloc_node_noprof(__VA_ARGS__))
-void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
-#define vmalloc_huge(...) alloc_hooks(vmalloc_huge_noprof(__VA_ARGS__))
+void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node) __alloc_size(1);
+#define vmalloc_huge_node(...) alloc_hooks(vmalloc_huge_node_noprof(__VA_ARGS__))
+
+static inline void *vmalloc_huge(unsigned long size, gfp_t gfp_mask)
+{
+ return vmalloc_huge_node(size, gfp_mask, NUMA_NO_NODE);
+}
extern void *__vmalloc_array_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
#define __vmalloc_array(...) alloc_hooks(__vmalloc_array_noprof(__VA_ARGS__))
diff --git a/mm/nommu.c b/mm/nommu.c
index 617e7ba8022f5..70f92f9a7fab3 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -200,7 +200,23 @@ void *vmalloc_noprof(unsigned long size)
}
EXPORT_SYMBOL(vmalloc_noprof);
-void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) __weak __alias(__vmalloc_noprof);
+/*
+ * vmalloc_huge_node - allocate virtually contiguous memory, on a node
+ *
+ * @size: allocation size
+ * @gfp_mask: flags for the page level allocator
+ * @node: node to use for allocation or NUMA_NO_NODE
+ *
+ * Allocate enough pages to cover @size from the page level
+ * allocator and map them into contiguous kernel virtual space.
+ *
+ * Due to NOMMU implications the node argument and HUGE page attribute are
+ * ignored.
+ */
+void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node)
+{
+ return __vmalloc_noprof(size, gfp_mask);
+}
/*
* vzalloc - allocate virtually contiguous memory with zero fill
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 3ed720a787ecd..8b9f6d3c099dd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3943,9 +3943,10 @@ void *vmalloc_noprof(unsigned long size)
EXPORT_SYMBOL(vmalloc_noprof);
/**
- * vmalloc_huge - allocate virtually contiguous memory, allow huge pages
+ * vmalloc_huge_node - allocate virtually contiguous memory, allow huge pages
* @size: allocation size
* @gfp_mask: flags for the page level allocator
+ * @node: node to use for allocation or NUMA_NO_NODE
*
* Allocate enough pages to cover @size from the page level
* allocator and map them into contiguous kernel virtual space.
@@ -3954,13 +3955,13 @@ EXPORT_SYMBOL(vmalloc_noprof);
*
* Return: pointer to the allocated memory or %NULL on error
*/
-void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask)
+void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node)
{
return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
- gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
- NUMA_NO_NODE, __builtin_return_address(0));
+ gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
+ node, __builtin_return_address(0));
}
-EXPORT_SYMBOL_GPL(vmalloc_huge_noprof);
+EXPORT_SYMBOL_GPL(vmalloc_huge_node_noprof);
/**
* vzalloc - allocate virtually contiguous memory with zero fill
--
2.49.0
* [PATCH v11 03/19] futex: Move futex_queue() into futex_wait_setup()
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 01/19] rcuref: Provide rcuref_is_dead() Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 02/19] mm: Add vmalloc_huge_node() Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 04/19] futex: Pull futex_hash() out of futex_q_lock() Sebastian Andrzej Siewior
` (18 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
From: Peter Zijlstra <peterz@infradead.org>
futex_wait_setup() has an awkward calling convention: it returns hb so
that the caller can pass it to futex_queue(), mostly so that requeue can
run an extra test in between.
Reorder the code a little to get rid of this and keep the hb usage inside
futex_wait_setup().
[bigeasy: fixes]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
io_uring/futex.c | 4 +---
kernel/futex/futex.h | 6 +++---
kernel/futex/requeue.c | 28 ++++++++++--------------
kernel/futex/waitwake.c | 47 +++++++++++++++++++++++------------------
4 files changed, 42 insertions(+), 43 deletions(-)
diff --git a/io_uring/futex.c b/io_uring/futex.c
index 0ea4820cd8ff8..e89c0897117ae 100644
--- a/io_uring/futex.c
+++ b/io_uring/futex.c
@@ -273,7 +273,6 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags)
struct io_futex *iof = io_kiocb_to_cmd(req, struct io_futex);
struct io_ring_ctx *ctx = req->ctx;
struct io_futex_data *ifd = NULL;
- struct futex_hash_bucket *hb;
int ret;
if (!iof->futex_mask) {
@@ -295,12 +294,11 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags)
ifd->req = req;
ret = futex_wait_setup(iof->uaddr, iof->futex_val, iof->futex_flags,
- &ifd->q, &hb);
+ &ifd->q, NULL, NULL);
if (!ret) {
hlist_add_head(&req->hash_node, &ctx->futex_list);
io_ring_submit_unlock(ctx, issue_flags);
- futex_queue(&ifd->q, hb, NULL);
return IOU_ISSUE_SKIP_COMPLETE;
}
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 6b2f4c7eb720f..16aafd0113442 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -219,9 +219,9 @@ static inline int futex_match(union futex_key *key1, union futex_key *key2)
}
extern int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
- struct futex_q *q, struct futex_hash_bucket **hb);
-extern void futex_wait_queue(struct futex_hash_bucket *hb, struct futex_q *q,
- struct hrtimer_sleeper *timeout);
+ struct futex_q *q, union futex_key *key2,
+ struct task_struct *task);
+extern void futex_do_wait(struct futex_q *q, struct hrtimer_sleeper *timeout);
extern bool __futex_wake_mark(struct futex_q *q);
extern void futex_wake_mark(struct wake_q_head *wake_q, struct futex_q *q);
diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index b47bb764b3520..0e55975af515c 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -769,7 +769,6 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
{
struct hrtimer_sleeper timeout, *to;
struct rt_mutex_waiter rt_waiter;
- struct futex_hash_bucket *hb;
union futex_key key2 = FUTEX_KEY_INIT;
struct futex_q q = futex_q_init;
struct rt_mutex_base *pi_mutex;
@@ -805,29 +804,24 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
* Prepare to wait on uaddr. On success, it holds hb->lock and q
* is initialized.
*/
- ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
+ ret = futex_wait_setup(uaddr, val, flags, &q, &key2, current);
if (ret)
goto out;
- /*
- * The check above which compares uaddrs is not sufficient for
- * shared futexes. We need to compare the keys:
- */
- if (futex_match(&q.key, &key2)) {
- futex_q_unlock(hb);
- ret = -EINVAL;
- goto out;
- }
-
/* Queue the futex_q, drop the hb lock, wait for wakeup. */
- futex_wait_queue(hb, &q, to);
+ futex_do_wait(&q, to);
switch (futex_requeue_pi_wakeup_sync(&q)) {
case Q_REQUEUE_PI_IGNORE:
- /* The waiter is still on uaddr1 */
- spin_lock(&hb->lock);
- ret = handle_early_requeue_pi_wakeup(hb, &q, to);
- spin_unlock(&hb->lock);
+ {
+ struct futex_hash_bucket *hb;
+
+ hb = futex_hash(&q.key);
+ /* The waiter is still on uaddr1 */
+ spin_lock(&hb->lock);
+ ret = handle_early_requeue_pi_wakeup(hb, &q, to);
+ spin_unlock(&hb->lock);
+ }
break;
case Q_REQUEUE_PI_LOCKED:
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 25877d4f2f8f3..6cf10701294b4 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -339,18 +339,8 @@ static long futex_wait_restart(struct restart_block *restart);
* @q: the futex_q to queue up on
* @timeout: the prepared hrtimer_sleeper, or null for no timeout
*/
-void futex_wait_queue(struct futex_hash_bucket *hb, struct futex_q *q,
- struct hrtimer_sleeper *timeout)
+void futex_do_wait(struct futex_q *q, struct hrtimer_sleeper *timeout)
{
- /*
- * The task state is guaranteed to be set before another task can
- * wake it. set_current_state() is implemented using smp_store_mb() and
- * futex_queue() calls spin_unlock() upon completion, both serializing
- * access to the hash list and forcing another memory barrier.
- */
- set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
- futex_queue(q, hb, current);
-
/* Arm the timer */
if (timeout)
hrtimer_sleeper_start_expires(timeout, HRTIMER_MODE_ABS);
@@ -578,7 +568,8 @@ int futex_wait_multiple(struct futex_vector *vs, unsigned int count,
* @val: the expected value
* @flags: futex flags (FLAGS_SHARED, etc.)
* @q: the associated futex_q
- * @hb: storage for hash_bucket pointer to be returned to caller
+ * @key2: the second futex_key if used for requeue PI
+ * @task: Task queueing this futex
*
* Setup the futex_q and locate the hash_bucket. Get the futex value and
* compare it with the expected value. Handle atomic faults internally.
@@ -589,8 +580,10 @@ int futex_wait_multiple(struct futex_vector *vs, unsigned int count,
* - <1 - -EFAULT or -EWOULDBLOCK (uaddr does not contain val) and hb is unlocked
*/
int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
- struct futex_q *q, struct futex_hash_bucket **hb)
+ struct futex_q *q, union futex_key *key2,
+ struct task_struct *task)
{
+ struct futex_hash_bucket *hb;
u32 uval;
int ret;
@@ -618,12 +611,12 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
return ret;
retry_private:
- *hb = futex_q_lock(q);
+ hb = futex_q_lock(q);
ret = futex_get_value_locked(&uval, uaddr);
if (ret) {
- futex_q_unlock(*hb);
+ futex_q_unlock(hb);
ret = get_user(uval, uaddr);
if (ret)
@@ -636,10 +629,25 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
}
if (uval != val) {
- futex_q_unlock(*hb);
- ret = -EWOULDBLOCK;
+ futex_q_unlock(hb);
+ return -EWOULDBLOCK;
}
+ if (key2 && futex_match(&q->key, key2)) {
+ futex_q_unlock(hb);
+ return -EINVAL;
+ }
+
+ /*
+ * The task state is guaranteed to be set before another task can
+ * wake it. set_current_state() is implemented using smp_store_mb() and
+ * futex_queue() calls spin_unlock() upon completion, both serializing
+ * access to the hash list and forcing another memory barrier.
+ */
+ if (task == current)
+ set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
+ futex_queue(q, hb, task);
+
return ret;
}
@@ -647,7 +655,6 @@ int __futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
struct hrtimer_sleeper *to, u32 bitset)
{
struct futex_q q = futex_q_init;
- struct futex_hash_bucket *hb;
int ret;
if (!bitset)
@@ -660,12 +667,12 @@ int __futex_wait(u32 __user *uaddr, unsigned int flags, u32 val,
* Prepare to wait on uaddr. On success, it holds hb->lock and q
* is initialized.
*/
- ret = futex_wait_setup(uaddr, val, flags, &q, &hb);
+ ret = futex_wait_setup(uaddr, val, flags, &q, NULL, current);
if (ret)
return ret;
/* futex_queue and wait for wakeup, timeout, or a signal. */
- futex_wait_queue(hb, &q, to);
+ futex_do_wait(&q, to);
/* If we were woken (and unqueued), we succeeded, whatever. */
if (!futex_unqueue(&q))
--
2.49.0
* [PATCH v11 04/19] futex: Pull futex_hash() out of futex_q_lock()
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (2 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 03/19] futex: Move futex_queue() into futex_wait_setup() Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 05/19] futex: Create hb scopes Sebastian Andrzej Siewior
` (17 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
From: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex/core.c | 7 +------
kernel/futex/futex.h | 2 +-
kernel/futex/pi.c | 3 ++-
kernel/futex/waitwake.c | 6 ++++--
4 files changed, 8 insertions(+), 10 deletions(-)
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index cca15859a50be..7adc914878933 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -502,13 +502,9 @@ void __futex_unqueue(struct futex_q *q)
}
/* The key must be already stored in q->key. */
-struct futex_hash_bucket *futex_q_lock(struct futex_q *q)
+void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
__acquires(&hb->lock)
{
- struct futex_hash_bucket *hb;
-
- hb = futex_hash(&q->key);
-
/*
* Increment the counter before taking the lock so that
* a potential waker won't miss a to-be-slept task that is
@@ -522,7 +518,6 @@ struct futex_hash_bucket *futex_q_lock(struct futex_q *q)
q->lock_ptr = &hb->lock;
spin_lock(&hb->lock);
- return hb;
}
void futex_q_unlock(struct futex_hash_bucket *hb)
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 16aafd0113442..a219903e52084 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -354,7 +354,7 @@ static inline int futex_hb_waiters_pending(struct futex_hash_bucket *hb)
#endif
}
-extern struct futex_hash_bucket *futex_q_lock(struct futex_q *q);
+extern void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb);
extern void futex_q_unlock(struct futex_hash_bucket *hb);
diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index 7a941845f7eee..3bf942e9400ac 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -939,7 +939,8 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
goto out;
retry_private:
- hb = futex_q_lock(&q);
+ hb = futex_hash(&q.key);
+ futex_q_lock(&q, hb);
ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
&exiting, 0);
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 6cf10701294b4..1108f373fd315 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -441,7 +441,8 @@ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
struct futex_q *q = &vs[i].q;
u32 val = vs[i].w.val;
- hb = futex_q_lock(q);
+ hb = futex_hash(&q->key);
+ futex_q_lock(q, hb);
ret = futex_get_value_locked(&uval, uaddr);
if (!ret && uval == val) {
@@ -611,7 +612,8 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
return ret;
retry_private:
- hb = futex_q_lock(q);
+ hb = futex_hash(&q->key);
+ futex_q_lock(q, hb);
ret = futex_get_value_locked(&uval, uaddr);
--
2.49.0
* [PATCH v11 05/19] futex: Create hb scopes
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (3 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 04/19] futex: Pull futex_hash() out of futex_q_lock() Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 06/19] futex: Create futex_hash() get/put class Sebastian Andrzej Siewior
` (16 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
From: Peter Zijlstra <peterz@infradead.org>
Create explicit scopes for hb variables; almost pure re-indent.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex/core.c | 81 ++++----
kernel/futex/pi.c | 282 +++++++++++++-------------
kernel/futex/requeue.c | 433 ++++++++++++++++++++--------------------
kernel/futex/waitwake.c | 193 +++++++++---------
4 files changed, 504 insertions(+), 485 deletions(-)
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 7adc914878933..e4cb5ce9785b1 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -944,7 +944,6 @@ static void exit_pi_state_list(struct task_struct *curr)
{
struct list_head *next, *head = &curr->pi_state_list;
struct futex_pi_state *pi_state;
- struct futex_hash_bucket *hb;
union futex_key key = FUTEX_KEY_INIT;
/*
@@ -957,50 +956,54 @@ static void exit_pi_state_list(struct task_struct *curr)
next = head->next;
pi_state = list_entry(next, struct futex_pi_state, list);
key = pi_state->key;
- hb = futex_hash(&key);
+ if (1) {
+ struct futex_hash_bucket *hb;
- /*
- * We can race against put_pi_state() removing itself from the
- * list (a waiter going away). put_pi_state() will first
- * decrement the reference count and then modify the list, so
- * its possible to see the list entry but fail this reference
- * acquire.
- *
- * In that case; drop the locks to let put_pi_state() make
- * progress and retry the loop.
- */
- if (!refcount_inc_not_zero(&pi_state->refcount)) {
+ hb = futex_hash(&key);
+
+ /*
+ * We can race against put_pi_state() removing itself from the
+ * list (a waiter going away). put_pi_state() will first
+ * decrement the reference count and then modify the list, so
+ * its possible to see the list entry but fail this reference
+ * acquire.
+ *
+ * In that case; drop the locks to let put_pi_state() make
+ * progress and retry the loop.
+ */
+ if (!refcount_inc_not_zero(&pi_state->refcount)) {
+ raw_spin_unlock_irq(&curr->pi_lock);
+ cpu_relax();
+ raw_spin_lock_irq(&curr->pi_lock);
+ continue;
+ }
raw_spin_unlock_irq(&curr->pi_lock);
- cpu_relax();
- raw_spin_lock_irq(&curr->pi_lock);
- continue;
- }
- raw_spin_unlock_irq(&curr->pi_lock);
- spin_lock(&hb->lock);
- raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
- raw_spin_lock(&curr->pi_lock);
- /*
- * We dropped the pi-lock, so re-check whether this
- * task still owns the PI-state:
- */
- if (head->next != next) {
- /* retain curr->pi_lock for the loop invariant */
- raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+ spin_lock(&hb->lock);
+ raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
+ raw_spin_lock(&curr->pi_lock);
+ /*
+ * We dropped the pi-lock, so re-check whether this
+ * task still owns the PI-state:
+ */
+ if (head->next != next) {
+ /* retain curr->pi_lock for the loop invariant */
+ raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
+ spin_unlock(&hb->lock);
+ put_pi_state(pi_state);
+ continue;
+ }
+
+ WARN_ON(pi_state->owner != curr);
+ WARN_ON(list_empty(&pi_state->list));
+ list_del_init(&pi_state->list);
+ pi_state->owner = NULL;
+
+ raw_spin_unlock(&curr->pi_lock);
+ raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
spin_unlock(&hb->lock);
- put_pi_state(pi_state);
- continue;
}
- WARN_ON(pi_state->owner != curr);
- WARN_ON(list_empty(&pi_state->list));
- list_del_init(&pi_state->list);
- pi_state->owner = NULL;
-
- raw_spin_unlock(&curr->pi_lock);
- raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
- spin_unlock(&hb->lock);
-
rt_mutex_futex_unlock(&pi_state->pi_mutex);
put_pi_state(pi_state);
diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index 3bf942e9400ac..a56f28fda58dd 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -920,7 +920,6 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
struct hrtimer_sleeper timeout, *to;
struct task_struct *exiting = NULL;
struct rt_mutex_waiter rt_waiter;
- struct futex_hash_bucket *hb;
struct futex_q q = futex_q_init;
DEFINE_WAKE_Q(wake_q);
int res, ret;
@@ -939,152 +938,169 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
goto out;
retry_private:
- hb = futex_hash(&q.key);
- futex_q_lock(&q, hb);
+ if (1) {
+ struct futex_hash_bucket *hb;
- ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
- &exiting, 0);
- if (unlikely(ret)) {
- /*
- * Atomic work succeeded and we got the lock,
- * or failed. Either way, we do _not_ block.
- */
- switch (ret) {
- case 1:
- /* We got the lock. */
- ret = 0;
- goto out_unlock_put_key;
- case -EFAULT:
- goto uaddr_faulted;
- case -EBUSY:
- case -EAGAIN:
+ hb = futex_hash(&q.key);
+ futex_q_lock(&q, hb);
+
+ ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
+ &exiting, 0);
+ if (unlikely(ret)) {
/*
- * Two reasons for this:
- * - EBUSY: Task is exiting and we just wait for the
- * exit to complete.
- * - EAGAIN: The user space value changed.
+ * Atomic work succeeded and we got the lock,
+ * or failed. Either way, we do _not_ block.
*/
- futex_q_unlock(hb);
- /*
- * Handle the case where the owner is in the middle of
- * exiting. Wait for the exit to complete otherwise
- * this task might loop forever, aka. live lock.
- */
- wait_for_owner_exiting(ret, exiting);
- cond_resched();
- goto retry;
- default:
- goto out_unlock_put_key;
+ switch (ret) {
+ case 1:
+ /* We got the lock. */
+ ret = 0;
+ goto out_unlock_put_key;
+ case -EFAULT:
+ goto uaddr_faulted;
+ case -EBUSY:
+ case -EAGAIN:
+ /*
+ * Two reasons for this:
+ * - EBUSY: Task is exiting and we just wait for the
+ * exit to complete.
+ * - EAGAIN: The user space value changed.
+ */
+ futex_q_unlock(hb);
+ /*
+ * Handle the case where the owner is in the middle of
+ * exiting. Wait for the exit to complete otherwise
+ * this task might loop forever, aka. live lock.
+ */
+ wait_for_owner_exiting(ret, exiting);
+ cond_resched();
+ goto retry;
+ default:
+ goto out_unlock_put_key;
+ }
}
- }
- WARN_ON(!q.pi_state);
+ WARN_ON(!q.pi_state);
- /*
- * Only actually queue now that the atomic ops are done:
- */
- __futex_queue(&q, hb, current);
+ /*
+ * Only actually queue now that the atomic ops are done:
+ */
+ __futex_queue(&q, hb, current);
- if (trylock) {
- ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
- /* Fixup the trylock return value: */
- ret = ret ? 0 : -EWOULDBLOCK;
- goto no_block;
- }
+ if (trylock) {
+ ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
+ /* Fixup the trylock return value: */
+ ret = ret ? 0 : -EWOULDBLOCK;
+ goto no_block;
+ }
- /*
- * Must be done before we enqueue the waiter, here is unfortunately
- * under the hb lock, but that *should* work because it does nothing.
- */
- rt_mutex_pre_schedule();
+ /*
+ * Must be done before we enqueue the waiter, here is unfortunately
+ * under the hb lock, but that *should* work because it does nothing.
+ */
+ rt_mutex_pre_schedule();
- rt_mutex_init_waiter(&rt_waiter);
+ rt_mutex_init_waiter(&rt_waiter);
- /*
- * On PREEMPT_RT, when hb->lock becomes an rt_mutex, we must not
- * hold it while doing rt_mutex_start_proxy(), because then it will
- * include hb->lock in the blocking chain, even through we'll not in
- * fact hold it while blocking. This will lead it to report -EDEADLK
- * and BUG when futex_unlock_pi() interleaves with this.
- *
- * Therefore acquire wait_lock while holding hb->lock, but drop the
- * latter before calling __rt_mutex_start_proxy_lock(). This
- * interleaves with futex_unlock_pi() -- which does a similar lock
- * handoff -- such that the latter can observe the futex_q::pi_state
- * before __rt_mutex_start_proxy_lock() is done.
- */
- raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
- spin_unlock(q.lock_ptr);
- /*
- * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
- * such that futex_unlock_pi() is guaranteed to observe the waiter when
- * it sees the futex_q::pi_state.
- */
- ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current, &wake_q);
- raw_spin_unlock_irq_wake(&q.pi_state->pi_mutex.wait_lock, &wake_q);
+ /*
+ * On PREEMPT_RT, when hb->lock becomes an rt_mutex, we must not
+ * hold it while doing rt_mutex_start_proxy(), because then it will
+ * include hb->lock in the blocking chain, even though we'll not in
+ * fact hold it while blocking. This will lead it to report -EDEADLK
+ * and BUG when futex_unlock_pi() interleaves with this.
+ *
+ * Therefore acquire wait_lock while holding hb->lock, but drop the
+ * latter before calling __rt_mutex_start_proxy_lock(). This
+ * interleaves with futex_unlock_pi() -- which does a similar lock
+ * handoff -- such that the latter can observe the futex_q::pi_state
+ * before __rt_mutex_start_proxy_lock() is done.
+ */
+ raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
+ spin_unlock(q.lock_ptr);
+ /*
+ * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
+ * such that futex_unlock_pi() is guaranteed to observe the waiter when
+ * it sees the futex_q::pi_state.
+ */
+ ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current, &wake_q);
+ raw_spin_unlock_irq_wake(&q.pi_state->pi_mutex.wait_lock, &wake_q);
- if (ret) {
- if (ret == 1)
- ret = 0;
- goto cleanup;
- }
+ if (ret) {
+ if (ret == 1)
+ ret = 0;
+ goto cleanup;
+ }
- if (unlikely(to))
- hrtimer_sleeper_start_expires(to, HRTIMER_MODE_ABS);
+ if (unlikely(to))
+ hrtimer_sleeper_start_expires(to, HRTIMER_MODE_ABS);
- ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);
+ ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);
cleanup:
- /*
- * If we failed to acquire the lock (deadlock/signal/timeout), we must
- * must unwind the above, however we canont lock hb->lock because
- * rt_mutex already has a waiter enqueued and hb->lock can itself try
- * and enqueue an rt_waiter through rtlock.
- *
- * Doing the cleanup without holding hb->lock can cause inconsistent
- * state between hb and pi_state, but only in the direction of not
- * seeing a waiter that is leaving.
- *
- * See futex_unlock_pi(), it deals with this inconsistency.
- *
- * There be dragons here, since we must deal with the inconsistency on
- * the way out (here), it is impossible to detect/warn about the race
- * the other way around (missing an incoming waiter).
- *
- * What could possibly go wrong...
- */
- if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter))
- ret = 0;
+ /*
+ * If we failed to acquire the lock (deadlock/signal/timeout), we must
+ * unwind the above, however we cannot lock hb->lock because
+ * rt_mutex already has a waiter enqueued and hb->lock can itself try
+ * and enqueue an rt_waiter through rtlock.
+ *
+ * Doing the cleanup without holding hb->lock can cause inconsistent
+ * state between hb and pi_state, but only in the direction of not
+ * seeing a waiter that is leaving.
+ *
+ * See futex_unlock_pi(), it deals with this inconsistency.
+ *
+ * There be dragons here, since we must deal with the inconsistency on
+ * the way out (here), it is impossible to detect/warn about the race
+ * the other way around (missing an incoming waiter).
+ *
+ * What could possibly go wrong...
+ */
+ if (ret && !rt_mutex_cleanup_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter))
+ ret = 0;
- /*
- * Now that the rt_waiter has been dequeued, it is safe to use
- * spinlock/rtlock (which might enqueue its own rt_waiter) and fix up
- * the
- */
- spin_lock(q.lock_ptr);
- /*
- * Waiter is unqueued.
- */
- rt_mutex_post_schedule();
+ /*
+ * Now that the rt_waiter has been dequeued, it is safe to use
+ * spinlock/rtlock (which might enqueue its own rt_waiter) and fix up
+ * the
+ */
+ spin_lock(q.lock_ptr);
+ /*
+ * Waiter is unqueued.
+ */
+ rt_mutex_post_schedule();
no_block:
- /*
- * Fixup the pi_state owner and possibly acquire the lock if we
- * haven't already.
- */
- res = fixup_pi_owner(uaddr, &q, !ret);
- /*
- * If fixup_pi_owner() returned an error, propagate that. If it acquired
- * the lock, clear our -ETIMEDOUT or -EINTR.
- */
- if (res)
- ret = (res < 0) ? res : 0;
+ /*
+ * Fixup the pi_state owner and possibly acquire the lock if we
+ * haven't already.
+ */
+ res = fixup_pi_owner(uaddr, &q, !ret);
+ /*
+ * If fixup_pi_owner() returned an error, propagate that. If it acquired
+ * the lock, clear our -ETIMEDOUT or -EINTR.
+ */
+ if (res)
+ ret = (res < 0) ? res : 0;
- futex_unqueue_pi(&q);
- spin_unlock(q.lock_ptr);
- goto out;
+ futex_unqueue_pi(&q);
+ spin_unlock(q.lock_ptr);
+ goto out;
out_unlock_put_key:
- futex_q_unlock(hb);
+ futex_q_unlock(hb);
+ goto out;
+
+uaddr_faulted:
+ futex_q_unlock(hb);
+
+ ret = fault_in_user_writeable(uaddr);
+ if (ret)
+ goto out;
+
+ if (!(flags & FLAGS_SHARED))
+ goto retry_private;
+
+ goto retry;
+ }
out:
if (to) {
@@ -1092,18 +1108,6 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
destroy_hrtimer_on_stack(&to->timer);
}
return ret != -EINTR ? ret : -ERESTARTNOINTR;
-
-uaddr_faulted:
- futex_q_unlock(hb);
-
- ret = fault_in_user_writeable(uaddr);
- if (ret)
- goto out;
-
- if (!(flags & FLAGS_SHARED))
- goto retry_private;
-
- goto retry;
}
/*
diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index 0e55975af515c..209794cad6f2f 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -371,7 +371,6 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flags1,
union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
int task_count = 0, ret;
struct futex_pi_state *pi_state = NULL;
- struct futex_hash_bucket *hb1, *hb2;
struct futex_q *this, *next;
DEFINE_WAKE_Q(wake_q);
@@ -443,240 +442,244 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flags1,
if (requeue_pi && futex_match(&key1, &key2))
return -EINVAL;
- hb1 = futex_hash(&key1);
- hb2 = futex_hash(&key2);
-
retry_private:
- futex_hb_waiters_inc(hb2);
- double_lock_hb(hb1, hb2);
+ if (1) {
+ struct futex_hash_bucket *hb1, *hb2;
- if (likely(cmpval != NULL)) {
- u32 curval;
+ hb1 = futex_hash(&key1);
+ hb2 = futex_hash(&key2);
- ret = futex_get_value_locked(&curval, uaddr1);
+ futex_hb_waiters_inc(hb2);
+ double_lock_hb(hb1, hb2);
- if (unlikely(ret)) {
- double_unlock_hb(hb1, hb2);
- futex_hb_waiters_dec(hb2);
+ if (likely(cmpval != NULL)) {
+ u32 curval;
- ret = get_user(curval, uaddr1);
- if (ret)
- return ret;
+ ret = futex_get_value_locked(&curval, uaddr1);
- if (!(flags1 & FLAGS_SHARED))
- goto retry_private;
+ if (unlikely(ret)) {
+ double_unlock_hb(hb1, hb2);
+ futex_hb_waiters_dec(hb2);
- goto retry;
- }
- if (curval != *cmpval) {
- ret = -EAGAIN;
- goto out_unlock;
- }
- }
+ ret = get_user(curval, uaddr1);
+ if (ret)
+ return ret;
- if (requeue_pi) {
- struct task_struct *exiting = NULL;
+ if (!(flags1 & FLAGS_SHARED))
+ goto retry_private;
- /*
- * Attempt to acquire uaddr2 and wake the top waiter. If we
- * intend to requeue waiters, force setting the FUTEX_WAITERS
- * bit. We force this here where we are able to easily handle
- * faults rather in the requeue loop below.
- *
- * Updates topwaiter::requeue_state if a top waiter exists.
- */
- ret = futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1,
- &key2, &pi_state,
- &exiting, nr_requeue);
-
- /*
- * At this point the top_waiter has either taken uaddr2 or
- * is waiting on it. In both cases pi_state has been
- * established and an initial refcount on it. In case of an
- * error there's nothing.
- *
- * The top waiter's requeue_state is up to date:
- *
- * - If the lock was acquired atomically (ret == 1), then
- * the state is Q_REQUEUE_PI_LOCKED.
- *
- * The top waiter has been dequeued and woken up and can
- * return to user space immediately. The kernel/user
- * space state is consistent. In case that there must be
- * more waiters requeued the WAITERS bit in the user
- * space futex is set so the top waiter task has to go
- * into the syscall slowpath to unlock the futex. This
- * will block until this requeue operation has been
- * completed and the hash bucket locks have been
- * dropped.
- *
- * - If the trylock failed with an error (ret < 0) then
- * the state is either Q_REQUEUE_PI_NONE, i.e. "nothing
- * happened", or Q_REQUEUE_PI_IGNORE when there was an
- * interleaved early wakeup.
- *
- * - If the trylock did not succeed (ret == 0) then the
- * state is either Q_REQUEUE_PI_IN_PROGRESS or
- * Q_REQUEUE_PI_WAIT if an early wakeup interleaved.
- * This will be cleaned up in the loop below, which
- * cannot fail because futex_proxy_trylock_atomic() did
- * the same sanity checks for requeue_pi as the loop
- * below does.
- */
- switch (ret) {
- case 0:
- /* We hold a reference on the pi state. */
- break;
-
- case 1:
- /*
- * futex_proxy_trylock_atomic() acquired the user space
- * futex. Adjust task_count.
- */
- task_count++;
- ret = 0;
- break;
-
- /*
- * If the above failed, then pi_state is NULL and
- * waiter::requeue_state is correct.
- */
- case -EFAULT:
- double_unlock_hb(hb1, hb2);
- futex_hb_waiters_dec(hb2);
- ret = fault_in_user_writeable(uaddr2);
- if (!ret)
goto retry;
- return ret;
- case -EBUSY:
- case -EAGAIN:
- /*
- * Two reasons for this:
- * - EBUSY: Owner is exiting and we just wait for the
- * exit to complete.
- * - EAGAIN: The user space value changed.
- */
- double_unlock_hb(hb1, hb2);
- futex_hb_waiters_dec(hb2);
- /*
- * Handle the case where the owner is in the middle of
- * exiting. Wait for the exit to complete otherwise
- * this task might loop forever, aka. live lock.
- */
- wait_for_owner_exiting(ret, exiting);
- cond_resched();
- goto retry;
- default:
- goto out_unlock;
- }
- }
-
- plist_for_each_entry_safe(this, next, &hb1->chain, list) {
- if (task_count - nr_wake >= nr_requeue)
- break;
-
- if (!futex_match(&this->key, &key1))
- continue;
-
- /*
- * FUTEX_WAIT_REQUEUE_PI and FUTEX_CMP_REQUEUE_PI should always
- * be paired with each other and no other futex ops.
- *
- * We should never be requeueing a futex_q with a pi_state,
- * which is awaiting a futex_unlock_pi().
- */
- if ((requeue_pi && !this->rt_waiter) ||
- (!requeue_pi && this->rt_waiter) ||
- this->pi_state) {
- ret = -EINVAL;
- break;
+ }
+ if (curval != *cmpval) {
+ ret = -EAGAIN;
+ goto out_unlock;
+ }
}
- /* Plain futexes just wake or requeue and are done */
- if (!requeue_pi) {
- if (++task_count <= nr_wake)
- this->wake(&wake_q, this);
- else
+ if (requeue_pi) {
+ struct task_struct *exiting = NULL;
+
+ /*
+ * Attempt to acquire uaddr2 and wake the top waiter. If we
+ * intend to requeue waiters, force setting the FUTEX_WAITERS
+ * bit. We force this here where we are able to easily handle
+ * faults rather than in the requeue loop below.
+ *
+ * Updates topwaiter::requeue_state if a top waiter exists.
+ */
+ ret = futex_proxy_trylock_atomic(uaddr2, hb1, hb2, &key1,
+ &key2, &pi_state,
+ &exiting, nr_requeue);
+
+ /*
+ * At this point the top_waiter has either taken uaddr2 or
+ * is waiting on it. In both cases pi_state has been
+ * established and an initial refcount on it. In case of an
+ * error there's nothing.
+ *
+ * The top waiter's requeue_state is up to date:
+ *
+ * - If the lock was acquired atomically (ret == 1), then
+ * the state is Q_REQUEUE_PI_LOCKED.
+ *
+ * The top waiter has been dequeued and woken up and can
+ * return to user space immediately. The kernel/user
+ * space state is consistent. In case more waiters
+ * must be requeued, the WAITERS bit in the user
+ * space futex is set so the top waiter task has to go
+ * into the syscall slowpath to unlock the futex. This
+ * will block until this requeue operation has been
+ * completed and the hash bucket locks have been
+ * dropped.
+ *
+ * - If the trylock failed with an error (ret < 0) then
+ * the state is either Q_REQUEUE_PI_NONE, i.e. "nothing
+ * happened", or Q_REQUEUE_PI_IGNORE when there was an
+ * interleaved early wakeup.
+ *
+ * - If the trylock did not succeed (ret == 0) then the
+ * state is either Q_REQUEUE_PI_IN_PROGRESS or
+ * Q_REQUEUE_PI_WAIT if an early wakeup interleaved.
+ * This will be cleaned up in the loop below, which
+ * cannot fail because futex_proxy_trylock_atomic() did
+ * the same sanity checks for requeue_pi as the loop
+ * below does.
+ */
+ switch (ret) {
+ case 0:
+ /* We hold a reference on the pi state. */
+ break;
+
+ case 1:
+ /*
+ * futex_proxy_trylock_atomic() acquired the user space
+ * futex. Adjust task_count.
+ */
+ task_count++;
+ ret = 0;
+ break;
+
+ /*
+ * If the above failed, then pi_state is NULL and
+ * waiter::requeue_state is correct.
+ */
+ case -EFAULT:
+ double_unlock_hb(hb1, hb2);
+ futex_hb_waiters_dec(hb2);
+ ret = fault_in_user_writeable(uaddr2);
+ if (!ret)
+ goto retry;
+ return ret;
+ case -EBUSY:
+ case -EAGAIN:
+ /*
+ * Two reasons for this:
+ * - EBUSY: Owner is exiting and we just wait for the
+ * exit to complete.
+ * - EAGAIN: The user space value changed.
+ */
+ double_unlock_hb(hb1, hb2);
+ futex_hb_waiters_dec(hb2);
+ /*
+ * Handle the case where the owner is in the middle of
+ * exiting. Wait for the exit to complete otherwise
+ * this task might loop forever, aka. live lock.
+ */
+ wait_for_owner_exiting(ret, exiting);
+ cond_resched();
+ goto retry;
+ default:
+ goto out_unlock;
+ }
+ }
+
+ plist_for_each_entry_safe(this, next, &hb1->chain, list) {
+ if (task_count - nr_wake >= nr_requeue)
+ break;
+
+ if (!futex_match(&this->key, &key1))
+ continue;
+
+ /*
+ * FUTEX_WAIT_REQUEUE_PI and FUTEX_CMP_REQUEUE_PI should always
+ * be paired with each other and no other futex ops.
+ *
+ * We should never be requeueing a futex_q with a pi_state,
+ * which is awaiting a futex_unlock_pi().
+ */
+ if ((requeue_pi && !this->rt_waiter) ||
+ (!requeue_pi && this->rt_waiter) ||
+ this->pi_state) {
+ ret = -EINVAL;
+ break;
+ }
+
+ /* Plain futexes just wake or requeue and are done */
+ if (!requeue_pi) {
+ if (++task_count <= nr_wake)
+ this->wake(&wake_q, this);
+ else
+ requeue_futex(this, hb1, hb2, &key2);
+ continue;
+ }
+
+ /* Ensure we requeue to the expected futex for requeue_pi. */
+ if (!futex_match(this->requeue_pi_key, &key2)) {
+ ret = -EINVAL;
+ break;
+ }
+
+ /*
+ * Requeue nr_requeue waiters and possibly one more in the case
+ * of requeue_pi if we couldn't acquire the lock atomically.
+ *
+ * Prepare the waiter to take the rt_mutex. Take a refcount
+ * on the pi_state and store the pointer in the futex_q
+ * object of the waiter.
+ */
+ get_pi_state(pi_state);
+
+ /* Don't requeue when the waiter is already on the way out. */
+ if (!futex_requeue_pi_prepare(this, pi_state)) {
+ /*
+ * Early woken waiter signaled that it is on the
+ * way out. Drop the pi_state reference and try the
+ * next waiter. @this->pi_state is still NULL.
+ */
+ put_pi_state(pi_state);
+ continue;
+ }
+
+ ret = rt_mutex_start_proxy_lock(&pi_state->pi_mutex,
+ this->rt_waiter,
+ this->task);
+
+ if (ret == 1) {
+ /*
+ * We got the lock. We do neither drop the refcount
+ * on pi_state nor clear this->pi_state because the
+ * waiter needs the pi_state for cleaning up the
+ * user space value. It will drop the refcount
+ * after doing so. this::requeue_state is updated
+ * in the wakeup as well.
+ */
+ requeue_pi_wake_futex(this, &key2, hb2);
+ task_count++;
+ } else if (!ret) {
+ /* Waiter is queued, move it to hb2 */
requeue_futex(this, hb1, hb2, &key2);
- continue;
- }
-
- /* Ensure we requeue to the expected futex for requeue_pi. */
- if (!futex_match(this->requeue_pi_key, &key2)) {
- ret = -EINVAL;
- break;
+ futex_requeue_pi_complete(this, 0);
+ task_count++;
+ } else {
+ /*
+ * rt_mutex_start_proxy_lock() detected a potential
+ * deadlock when we tried to queue that waiter.
+ * Drop the pi_state reference which we took above
+ * and remove the pointer to the state from the
+ * waiters futex_q object.
+ */
+ this->pi_state = NULL;
+ put_pi_state(pi_state);
+ futex_requeue_pi_complete(this, ret);
+ /*
+ * We stop queueing more waiters and let user space
+ * deal with the mess.
+ */
+ break;
+ }
}
/*
- * Requeue nr_requeue waiters and possibly one more in the case
- * of requeue_pi if we couldn't acquire the lock atomically.
- *
- * Prepare the waiter to take the rt_mutex. Take a refcount
- * on the pi_state and store the pointer in the futex_q
- * object of the waiter.
+ * We took an extra initial reference to the pi_state in
+ * futex_proxy_trylock_atomic(). We need to drop it here again.
*/
- get_pi_state(pi_state);
-
- /* Don't requeue when the waiter is already on the way out. */
- if (!futex_requeue_pi_prepare(this, pi_state)) {
- /*
- * Early woken waiter signaled that it is on the
- * way out. Drop the pi_state reference and try the
- * next waiter. @this->pi_state is still NULL.
- */
- put_pi_state(pi_state);
- continue;
- }
-
- ret = rt_mutex_start_proxy_lock(&pi_state->pi_mutex,
- this->rt_waiter,
- this->task);
-
- if (ret == 1) {
- /*
- * We got the lock. We do neither drop the refcount
- * on pi_state nor clear this->pi_state because the
- * waiter needs the pi_state for cleaning up the
- * user space value. It will drop the refcount
- * after doing so. this::requeue_state is updated
- * in the wakeup as well.
- */
- requeue_pi_wake_futex(this, &key2, hb2);
- task_count++;
- } else if (!ret) {
- /* Waiter is queued, move it to hb2 */
- requeue_futex(this, hb1, hb2, &key2);
- futex_requeue_pi_complete(this, 0);
- task_count++;
- } else {
- /*
- * rt_mutex_start_proxy_lock() detected a potential
- * deadlock when we tried to queue that waiter.
- * Drop the pi_state reference which we took above
- * and remove the pointer to the state from the
- * waiters futex_q object.
- */
- this->pi_state = NULL;
- put_pi_state(pi_state);
- futex_requeue_pi_complete(this, ret);
- /*
- * We stop queueing more waiters and let user space
- * deal with the mess.
- */
- break;
- }
- }
-
- /*
- * We took an extra initial reference to the pi_state in
- * futex_proxy_trylock_atomic(). We need to drop it here again.
- */
- put_pi_state(pi_state);
+ put_pi_state(pi_state);
out_unlock:
- double_unlock_hb(hb1, hb2);
+ double_unlock_hb(hb1, hb2);
+ futex_hb_waiters_dec(hb2);
+ }
wake_up_q(&wake_q);
- futex_hb_waiters_dec(hb2);
return ret ? ret : task_count;
}
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 1108f373fd315..7dc35be09e436 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -253,7 +253,6 @@ int futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
int nr_wake, int nr_wake2, int op)
{
union futex_key key1 = FUTEX_KEY_INIT, key2 = FUTEX_KEY_INIT;
- struct futex_hash_bucket *hb1, *hb2;
struct futex_q *this, *next;
int ret, op_ret;
DEFINE_WAKE_Q(wake_q);
@@ -266,67 +265,71 @@ int futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
if (unlikely(ret != 0))
return ret;
- hb1 = futex_hash(&key1);
- hb2 = futex_hash(&key2);
-
retry_private:
- double_lock_hb(hb1, hb2);
- op_ret = futex_atomic_op_inuser(op, uaddr2);
- if (unlikely(op_ret < 0)) {
- double_unlock_hb(hb1, hb2);
+ if (1) {
+ struct futex_hash_bucket *hb1, *hb2;
- if (!IS_ENABLED(CONFIG_MMU) ||
- unlikely(op_ret != -EFAULT && op_ret != -EAGAIN)) {
- /*
- * we don't get EFAULT from MMU faults if we don't have
- * an MMU, but we might get them from range checking
- */
- ret = op_ret;
- return ret;
- }
+ hb1 = futex_hash(&key1);
+ hb2 = futex_hash(&key2);
- if (op_ret == -EFAULT) {
- ret = fault_in_user_writeable(uaddr2);
- if (ret)
+ double_lock_hb(hb1, hb2);
+ op_ret = futex_atomic_op_inuser(op, uaddr2);
+ if (unlikely(op_ret < 0)) {
+ double_unlock_hb(hb1, hb2);
+
+ if (!IS_ENABLED(CONFIG_MMU) ||
+ unlikely(op_ret != -EFAULT && op_ret != -EAGAIN)) {
+ /*
+ * we don't get EFAULT from MMU faults if we don't have
+ * an MMU, but we might get them from range checking
+ */
+ ret = op_ret;
return ret;
- }
-
- cond_resched();
- if (!(flags & FLAGS_SHARED))
- goto retry_private;
- goto retry;
- }
-
- plist_for_each_entry_safe(this, next, &hb1->chain, list) {
- if (futex_match (&this->key, &key1)) {
- if (this->pi_state || this->rt_waiter) {
- ret = -EINVAL;
- goto out_unlock;
}
- this->wake(&wake_q, this);
- if (++ret >= nr_wake)
- break;
- }
- }
- if (op_ret > 0) {
- op_ret = 0;
- plist_for_each_entry_safe(this, next, &hb2->chain, list) {
- if (futex_match (&this->key, &key2)) {
+ if (op_ret == -EFAULT) {
+ ret = fault_in_user_writeable(uaddr2);
+ if (ret)
+ return ret;
+ }
+
+ cond_resched();
+ if (!(flags & FLAGS_SHARED))
+ goto retry_private;
+ goto retry;
+ }
+
+ plist_for_each_entry_safe(this, next, &hb1->chain, list) {
+ if (futex_match(&this->key, &key1)) {
if (this->pi_state || this->rt_waiter) {
ret = -EINVAL;
goto out_unlock;
}
this->wake(&wake_q, this);
- if (++op_ret >= nr_wake2)
+ if (++ret >= nr_wake)
break;
}
}
- ret += op_ret;
- }
+
+ if (op_ret > 0) {
+ op_ret = 0;
+ plist_for_each_entry_safe(this, next, &hb2->chain, list) {
+ if (futex_match(&this->key, &key2)) {
+ if (this->pi_state || this->rt_waiter) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+ this->wake(&wake_q, this);
+ if (++op_ret >= nr_wake2)
+ break;
+ }
+ }
+ ret += op_ret;
+ }
out_unlock:
- double_unlock_hb(hb1, hb2);
+ double_unlock_hb(hb1, hb2);
+ }
wake_up_q(&wake_q);
return ret;
}
@@ -402,7 +405,6 @@ int futex_unqueue_multiple(struct futex_vector *v, int count)
*/
int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
{
- struct futex_hash_bucket *hb;
bool retry = false;
int ret, i;
u32 uval;
@@ -441,21 +443,25 @@ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
struct futex_q *q = &vs[i].q;
u32 val = vs[i].w.val;
- hb = futex_hash(&q->key);
- futex_q_lock(q, hb);
- ret = futex_get_value_locked(&uval, uaddr);
+ if (1) {
+ struct futex_hash_bucket *hb;
- if (!ret && uval == val) {
- /*
- * The bucket lock can't be held while dealing with the
- * next futex. Queue each futex at this moment so hb can
- * be unlocked.
- */
- futex_queue(q, hb, current);
- continue;
+ hb = futex_hash(&q->key);
+ futex_q_lock(q, hb);
+ ret = futex_get_value_locked(&uval, uaddr);
+
+ if (!ret && uval == val) {
+ /*
+ * The bucket lock can't be held while dealing with the
+ * next futex. Queue each futex at this moment so hb can
+ * be unlocked.
+ */
+ futex_queue(q, hb, current);
+ continue;
+ }
+
+ futex_q_unlock(hb);
}
-
- futex_q_unlock(hb);
__set_current_state(TASK_RUNNING);
/*
@@ -584,7 +590,6 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
struct futex_q *q, union futex_key *key2,
struct task_struct *task)
{
- struct futex_hash_bucket *hb;
u32 uval;
int ret;
@@ -612,44 +617,48 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
return ret;
retry_private:
- hb = futex_hash(&q->key);
- futex_q_lock(q, hb);
+ if (1) {
+ struct futex_hash_bucket *hb;
- ret = futex_get_value_locked(&uval, uaddr);
+ hb = futex_hash(&q->key);
+ futex_q_lock(q, hb);
- if (ret) {
- futex_q_unlock(hb);
+ ret = futex_get_value_locked(&uval, uaddr);
- ret = get_user(uval, uaddr);
- if (ret)
- return ret;
+ if (ret) {
+ futex_q_unlock(hb);
- if (!(flags & FLAGS_SHARED))
- goto retry_private;
+ ret = get_user(uval, uaddr);
+ if (ret)
+ return ret;
- goto retry;
+ if (!(flags & FLAGS_SHARED))
+ goto retry_private;
+
+ goto retry;
+ }
+
+ if (uval != val) {
+ futex_q_unlock(hb);
+ return -EWOULDBLOCK;
+ }
+
+ if (key2 && futex_match(&q->key, key2)) {
+ futex_q_unlock(hb);
+ return -EINVAL;
+ }
+
+ /*
+ * The task state is guaranteed to be set before another task can
+ * wake it. set_current_state() is implemented using smp_store_mb() and
+ * futex_queue() calls spin_unlock() upon completion, both serializing
+ * access to the hash list and forcing another memory barrier.
+ */
+ if (task == current)
+ set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
+ futex_queue(q, hb, task);
}
- if (uval != val) {
- futex_q_unlock(hb);
- return -EWOULDBLOCK;
- }
-
- if (key2 && futex_match(&q->key, key2)) {
- futex_q_unlock(hb);
- return -EINVAL;
- }
-
- /*
- * The task state is guaranteed to be set before another task can
- * wake it. set_current_state() is implemented using smp_store_mb() and
- * futex_queue() calls spin_unlock() upon completion, both serializing
- * access to the hash list and forcing another memory barrier.
- */
- if (task == current)
- set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
- futex_queue(q, hb, task);
-
return ret;
}
--
2.49.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v11 06/19] futex: Create futex_hash() get/put class
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (4 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 05/19] futex: Create hb scopes Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 07/19] futex: Create private_hash() " Sebastian Andrzej Siewior
` (15 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
From: Peter Zijlstra <peterz@infradead.org>
This gets us:
hb = futex_hash(key) /* gets hb and inc users */
futex_hash_get(hb) /* inc users */
futex_hash_put(hb) /* dec users */
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex/core.c | 6 +++---
kernel/futex/futex.h | 7 +++++++
kernel/futex/pi.c | 10 ++++++----
kernel/futex/requeue.c | 10 +++-------
kernel/futex/waitwake.c | 15 +++++----------
5 files changed, 24 insertions(+), 24 deletions(-)
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index e4cb5ce9785b1..56a5653e450cb 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -122,6 +122,8 @@ struct futex_hash_bucket *futex_hash(union futex_key *key)
return &futex_queues[hash & futex_hashmask];
}
+void futex_hash_get(struct futex_hash_bucket *hb) { }
+void futex_hash_put(struct futex_hash_bucket *hb) { }
/**
* futex_setup_timer - set up the sleeping hrtimer.
@@ -957,9 +959,7 @@ static void exit_pi_state_list(struct task_struct *curr)
pi_state = list_entry(next, struct futex_pi_state, list);
key = pi_state->key;
if (1) {
- struct futex_hash_bucket *hb;
-
- hb = futex_hash(&key);
+ CLASS(hb, hb)(&key);
/*
* We can race against put_pi_state() removing itself from the
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index a219903e52084..77d9b3509f75c 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -7,6 +7,7 @@
#include <linux/sched/wake_q.h>
#include <linux/compat.h>
#include <linux/uaccess.h>
+#include <linux/cleanup.h>
#ifdef CONFIG_PREEMPT_RT
#include <linux/rcuwait.h>
@@ -202,6 +203,12 @@ futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
int flags, u64 range_ns);
extern struct futex_hash_bucket *futex_hash(union futex_key *key);
+extern void futex_hash_get(struct futex_hash_bucket *hb);
+extern void futex_hash_put(struct futex_hash_bucket *hb);
+
+DEFINE_CLASS(hb, struct futex_hash_bucket *,
+ if (_T) futex_hash_put(_T),
+ futex_hash(key), union futex_key *key);
/**
* futex_match - Check whether two futex keys are equal
diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index a56f28fda58dd..e0a70702bb80b 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -939,9 +939,8 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
retry_private:
if (1) {
- struct futex_hash_bucket *hb;
+ CLASS(hb, hb)(&q.key);
- hb = futex_hash(&q.key);
futex_q_lock(&q, hb);
ret = futex_lock_pi_atomic(uaddr, hb, &q.key, &q.pi_state, current,
@@ -1017,6 +1016,10 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
*/
raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
spin_unlock(q.lock_ptr);
+ /*
+ * Caution; releasing @hb in-scope.
+ */
+ futex_hash_put(no_free_ptr(hb));
/*
* __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
* such that futex_unlock_pi() is guaranteed to observe the waiter when
@@ -1119,7 +1122,6 @@ int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
{
u32 curval, uval, vpid = task_pid_vnr(current);
union futex_key key = FUTEX_KEY_INIT;
- struct futex_hash_bucket *hb;
struct futex_q *top_waiter;
int ret;
@@ -1139,7 +1141,7 @@ int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
if (ret)
return ret;
- hb = futex_hash(&key);
+ CLASS(hb, hb)(&key);
spin_lock(&hb->lock);
retry_hb:
diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index 209794cad6f2f..992e3ce005c6f 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -444,10 +444,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flags1,
retry_private:
if (1) {
- struct futex_hash_bucket *hb1, *hb2;
-
- hb1 = futex_hash(&key1);
- hb2 = futex_hash(&key2);
+ CLASS(hb, hb1)(&key1);
+ CLASS(hb, hb2)(&key2);
futex_hb_waiters_inc(hb2);
double_lock_hb(hb1, hb2);
@@ -817,9 +815,7 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
switch (futex_requeue_pi_wakeup_sync(&q)) {
case Q_REQUEUE_PI_IGNORE:
{
- struct futex_hash_bucket *hb;
-
- hb = futex_hash(&q.key);
+ CLASS(hb, hb)(&q.key);
/* The waiter is still on uaddr1 */
spin_lock(&hb->lock);
ret = handle_early_requeue_pi_wakeup(hb, &q, to);
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 7dc35be09e436..d52541bcc07e9 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -154,7 +154,6 @@ void futex_wake_mark(struct wake_q_head *wake_q, struct futex_q *q)
*/
int futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
{
- struct futex_hash_bucket *hb;
struct futex_q *this, *next;
union futex_key key = FUTEX_KEY_INIT;
DEFINE_WAKE_Q(wake_q);
@@ -170,7 +169,7 @@ int futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake, u32 bitset)
if ((flags & FLAGS_STRICT) && !nr_wake)
return 0;
- hb = futex_hash(&key);
+ CLASS(hb, hb)(&key);
/* Make sure we really have tasks to wakeup */
if (!futex_hb_waiters_pending(hb))
@@ -267,10 +266,8 @@ int futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
retry_private:
if (1) {
- struct futex_hash_bucket *hb1, *hb2;
-
- hb1 = futex_hash(&key1);
- hb2 = futex_hash(&key2);
+ CLASS(hb, hb1)(&key1);
+ CLASS(hb, hb2)(&key2);
double_lock_hb(hb1, hb2);
op_ret = futex_atomic_op_inuser(op, uaddr2);
@@ -444,9 +441,8 @@ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
u32 val = vs[i].w.val;
if (1) {
- struct futex_hash_bucket *hb;
+ CLASS(hb, hb)(&q->key);
- hb = futex_hash(&q->key);
futex_q_lock(q, hb);
ret = futex_get_value_locked(&uval, uaddr);
@@ -618,9 +614,8 @@ int futex_wait_setup(u32 __user *uaddr, u32 val, unsigned int flags,
retry_private:
if (1) {
- struct futex_hash_bucket *hb;
+ CLASS(hb, hb)(&q->key);
- hb = futex_hash(&q->key);
futex_q_lock(q, hb);
ret = futex_get_value_locked(&uval, uaddr);
--
2.49.0
* [PATCH v11 07/19] futex: Create private_hash() get/put class
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (5 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 06/19] futex: Create futex_hash() get/put class Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 08/19] futex: Acquire a hash reference in futex_wait_multiple_setup() Sebastian Andrzej Siewior
` (14 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
From: Peter Zijlstra <peterz@infradead.org>
This gets us:
fph = futex_private_hash(key) /* gets fph and inc users */
futex_private_hash_get(fph) /* inc users */
futex_private_hash_put(fph) /* dec users */
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex/core.c | 12 ++++++++++++
kernel/futex/futex.h | 8 ++++++++
2 files changed, 20 insertions(+)
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 56a5653e450cb..6a1d6b14277f4 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -107,6 +107,18 @@ late_initcall(fail_futex_debugfs);
#endif /* CONFIG_FAIL_FUTEX */
+struct futex_private_hash *futex_private_hash(void)
+{
+ return NULL;
+}
+
+bool futex_private_hash_get(struct futex_private_hash *fph)
+{
+ return false;
+}
+
+void futex_private_hash_put(struct futex_private_hash *fph) { }
+
/**
* futex_hash - Return the hash bucket in the global hash
* @key: Pointer to the futex key for which the hash is calculated
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 77d9b3509f75c..bc76e366f9a77 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -206,10 +206,18 @@ extern struct futex_hash_bucket *futex_hash(union futex_key *key);
extern void futex_hash_get(struct futex_hash_bucket *hb);
extern void futex_hash_put(struct futex_hash_bucket *hb);
+extern struct futex_private_hash *futex_private_hash(void);
+extern bool futex_private_hash_get(struct futex_private_hash *fph);
+extern void futex_private_hash_put(struct futex_private_hash *fph);
+
DEFINE_CLASS(hb, struct futex_hash_bucket *,
if (_T) futex_hash_put(_T),
futex_hash(key), union futex_key *key);
+DEFINE_CLASS(private_hash, struct futex_private_hash *,
+ if (_T) futex_private_hash_put(_T),
+ futex_private_hash(), void);
+
/**
* futex_match - Check whether two futex keys are equal
* @key1: Pointer to key1
--
2.49.0
* [PATCH v11 08/19] futex: Acquire a hash reference in futex_wait_multiple_setup().
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (6 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 07/19] futex: Create private_hash() " Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 09/19] futex: Decrease the waiter count before the unlock operation Sebastian Andrzej Siewior
` (13 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
futex_wait_multiple_setup() changes task_struct::__state to
!TASK_RUNNING and then enqueues on multiple futexes. Every
futex_q_lock() acquires a reference on the global hash which is dropped
later.
If a rehash is in progress then the loop will block on
mm_struct::futex_hash_bucket for the rehash to complete and this will
lose the previously set task_struct::__state.
Acquire a reference on the local hash to avoid blocking on
mm_struct::futex_hash_bucket.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex/waitwake.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index d52541bcc07e9..bd8fef0f8d180 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -406,6 +406,12 @@ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
int ret, i;
u32 uval;
+ /*
+ * Make sure to have a reference on the private_hash such that we
+ * don't block on rehash after changing the task state below.
+ */
+ guard(private_hash)();
+
/*
* Enqueuing multiple futexes is tricky, because we need to enqueue
* each futex on the list before dealing with the next one to avoid
--
2.49.0
* [PATCH v11 09/19] futex: Decrease the waiter count before the unlock operation.
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (7 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 08/19] futex: Acquire a hash reference in futex_wait_multiple_setup() Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 10/19] futex: Introduce futex_q_lockptr_lock() Sebastian Andrzej Siewior
` (12 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
To support runtime resizing of the process private hash, it's required
to not use the obtained hash bucket once the reference count has been
dropped. The reference will be dropped after the unlock of the hash
bucket.
The number of waiters is decremented after the unlock operation. There
is no requirement for this to happen after the unlock. The increment
happens before acquiring the lock to signal early that there will be a
waiter. The waker can avoid blocking on the lock if it is known that
there are no waiters.
There is no difference in terms of ordering whether the decrement
happens before or after the unlock.
Decrease the waiter count before the unlock operation.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex/core.c | 2 +-
kernel/futex/requeue.c | 8 ++++----
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 6a1d6b14277f4..5e70cb8eb2507 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -537,8 +537,8 @@ void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
void futex_q_unlock(struct futex_hash_bucket *hb)
__releases(&hb->lock)
{
- spin_unlock(&hb->lock);
futex_hb_waiters_dec(hb);
+ spin_unlock(&hb->lock);
}
void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb,
diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index 992e3ce005c6f..023c028d2fce3 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -456,8 +456,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flags1,
ret = futex_get_value_locked(&curval, uaddr1);
if (unlikely(ret)) {
- double_unlock_hb(hb1, hb2);
futex_hb_waiters_dec(hb2);
+ double_unlock_hb(hb1, hb2);
ret = get_user(curval, uaddr1);
if (ret)
@@ -542,8 +542,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flags1,
* waiter::requeue_state is correct.
*/
case -EFAULT:
- double_unlock_hb(hb1, hb2);
futex_hb_waiters_dec(hb2);
+ double_unlock_hb(hb1, hb2);
ret = fault_in_user_writeable(uaddr2);
if (!ret)
goto retry;
@@ -556,8 +556,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flags1,
* exit to complete.
* - EAGAIN: The user space value changed.
*/
- double_unlock_hb(hb1, hb2);
futex_hb_waiters_dec(hb2);
+ double_unlock_hb(hb1, hb2);
/*
* Handle the case where the owner is in the middle of
* exiting. Wait for the exit to complete otherwise
@@ -674,8 +674,8 @@ int futex_requeue(u32 __user *uaddr1, unsigned int flags1,
put_pi_state(pi_state);
out_unlock:
- double_unlock_hb(hb1, hb2);
futex_hb_waiters_dec(hb2);
+ double_unlock_hb(hb1, hb2);
}
wake_up_q(&wake_q);
return ret ? ret : task_count;
--
2.49.0
* [PATCH v11 10/19] futex: Introduce futex_q_lockptr_lock().
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (8 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 09/19] futex: Decrease the waiter count before the unlock operation Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 11/19] futex: Create helper function to initialize a hash slot Sebastian Andrzej Siewior
` (11 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
futex_lock_pi() and __fixup_pi_state_owner() acquire the
futex_q::lock_ptr without holding a reference assuming the previously
obtained hash bucket and the assigned lock_ptr are still valid. This
isn't the case once the private hash can be resized and becomes invalid
after the reference drop.
Introduce futex_q_lockptr_lock() to lock the hash bucket recorded in
futex_q::lock_ptr. The lock pointer is read in an RCU section to ensure
that it does not go away if the hash bucket has been replaced while the
old pointer has been observed. After locking, the pointer needs to be
compared to check whether it changed. If so, the hash bucket has been
replaced, the waiter has been moved to the new one and lock_ptr has been
updated. The lock operation needs to be redone in this case.
The locked hash bucket is not returned.
A special case is an early return in futex_lock_pi() (due to signal or
timeout) and a successful futex_wait_requeue_pi(). In both cases a valid
futex_q::lock_ptr is expected (and its matching hash bucket) but since
the waiter has been removed from the hash this can no longer be
guaranteed. Therefore a reference is acquired before the waiter is
removed, and it is later dropped by the waiter to avoid a resize in the
meantime.
Add futex_q_lockptr_lock() and use it.
Acquire an additional reference in requeue_pi_wake_futex() and
futex_unlock_pi() while the futex_q is removed, denote this extra
reference in futex_q::drop_hb_ref and let the waiter drop the reference
in this case.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex/core.c | 25 +++++++++++++++++++++++++
kernel/futex/futex.h | 3 ++-
kernel/futex/pi.c | 15 +++++++++++++--
kernel/futex/requeue.c | 16 +++++++++++++---
4 files changed, 53 insertions(+), 6 deletions(-)
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 5e70cb8eb2507..1443a98dfa7fa 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -134,6 +134,13 @@ struct futex_hash_bucket *futex_hash(union futex_key *key)
return &futex_queues[hash & futex_hashmask];
}
+/**
+ * futex_hash_get - Get an additional reference for the local hash.
+ * @hb: ptr to the private local hash.
+ *
+ * Obtain an additional reference for the already obtained hash bucket. The
+ * caller must already own a reference.
+ */
void futex_hash_get(struct futex_hash_bucket *hb) { }
void futex_hash_put(struct futex_hash_bucket *hb) { }
@@ -615,6 +622,24 @@ int futex_unqueue(struct futex_q *q)
return ret;
}
+void futex_q_lockptr_lock(struct futex_q *q)
+{
+ spinlock_t *lock_ptr;
+
+ /*
+ * See futex_unqueue() why lock_ptr can change.
+ */
+ guard(rcu)();
+retry:
+ lock_ptr = READ_ONCE(q->lock_ptr);
+ spin_lock(lock_ptr);
+
+ if (unlikely(lock_ptr != q->lock_ptr)) {
+ spin_unlock(lock_ptr);
+ goto retry;
+ }
+}
+
/*
* PI futexes can not be requeued and must remove themselves from the hash
* bucket. The hash bucket lock (i.e. lock_ptr) is held.
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index bc76e366f9a77..26e69333cb745 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -183,6 +183,7 @@ struct futex_q {
union futex_key *requeue_pi_key;
u32 bitset;
atomic_t requeue_state;
+ bool drop_hb_ref;
#ifdef CONFIG_PREEMPT_RT
struct rcuwait requeue_wait;
#endif
@@ -197,7 +198,7 @@ enum futex_access {
extern int get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key,
enum futex_access rw);
-
+extern void futex_q_lockptr_lock(struct futex_q *q);
extern struct hrtimer_sleeper *
futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
int flags, u64 range_ns);
diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index e0a70702bb80b..356e52c17d3c5 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -806,7 +806,7 @@ static int __fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
break;
}
- spin_lock(q->lock_ptr);
+ futex_q_lockptr_lock(q);
raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
/*
@@ -1066,7 +1066,7 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
* spinlock/rtlock (which might enqueue its own rt_waiter) and fix up
* the
*/
- spin_lock(q.lock_ptr);
+ futex_q_lockptr_lock(&q);
/*
* Waiter is unqueued.
*/
@@ -1086,6 +1086,11 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
futex_unqueue_pi(&q);
spin_unlock(q.lock_ptr);
+ if (q.drop_hb_ref) {
+ CLASS(hb, hb)(&q.key);
+ /* Additional reference from futex_unlock_pi() */
+ futex_hash_put(hb);
+ }
goto out;
out_unlock_put_key:
@@ -1194,6 +1199,12 @@ int futex_unlock_pi(u32 __user *uaddr, unsigned int flags)
*/
rt_waiter = rt_mutex_top_waiter(&pi_state->pi_mutex);
if (!rt_waiter) {
+ /*
+ * Acquire a reference for the leaving waiter to ensure
+ * valid futex_q::lock_ptr.
+ */
+ futex_hash_get(hb);
+ top_waiter->drop_hb_ref = true;
__futex_unqueue(top_waiter);
raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock);
goto retry_hb;
diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index 023c028d2fce3..b0e64fd454d96 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -231,7 +231,12 @@ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key,
WARN_ON(!q->rt_waiter);
q->rt_waiter = NULL;
-
+ /*
+ * Acquire a reference for the waiter to ensure valid
+ * futex_q::lock_ptr.
+ */
+ futex_hash_get(hb);
+ q->drop_hb_ref = true;
q->lock_ptr = &hb->lock;
/* Signal locked state to the waiter */
@@ -826,7 +831,7 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
case Q_REQUEUE_PI_LOCKED:
/* The requeue acquired the lock */
if (q.pi_state && (q.pi_state->owner != current)) {
- spin_lock(q.lock_ptr);
+ futex_q_lockptr_lock(&q);
ret = fixup_pi_owner(uaddr2, &q, true);
/*
* Drop the reference to the pi state which the
@@ -853,7 +858,7 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
if (ret && !rt_mutex_cleanup_proxy_lock(pi_mutex, &rt_waiter))
ret = 0;
- spin_lock(q.lock_ptr);
+ futex_q_lockptr_lock(&q);
debug_rt_mutex_free_waiter(&rt_waiter);
/*
* Fixup the pi_state owner and possibly acquire the lock if we
@@ -885,6 +890,11 @@ int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
default:
BUG();
}
+ if (q.drop_hb_ref) {
+ CLASS(hb, hb)(&q.key);
+ /* Additional reference from requeue_pi_wake_futex() */
+ futex_hash_put(hb);
+ }
out:
if (to) {
--
2.49.0
* [PATCH v11 11/19] futex: Create helper function to initialize a hash slot.
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (9 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 10/19] futex: Introduce futex_q_lockptr_lock() Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 12/19] futex: Add basic infrastructure for local task local hash Sebastian Andrzej Siewior
` (10 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
Factor out the futex_hash_bucket initialisation into a helper function.
The helper function will be used in a follow-up patch implementing
process private hash buckets.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/futex/core.c | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 1443a98dfa7fa..afc66780f84fc 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -1160,6 +1160,13 @@ void futex_exit_release(struct task_struct *tsk)
futex_cleanup_end(tsk, FUTEX_STATE_DEAD);
}
+static void futex_hash_bucket_init(struct futex_hash_bucket *fhb)
+{
+ atomic_set(&fhb->waiters, 0);
+ plist_head_init(&fhb->chain);
+ spin_lock_init(&fhb->lock);
+}
+
static int __init futex_init(void)
{
unsigned long hashsize, i;
@@ -1177,11 +1184,8 @@ static int __init futex_init(void)
hashsize, hashsize);
hashsize = 1UL << futex_shift;
- for (i = 0; i < hashsize; i++) {
- atomic_set(&futex_queues[i].waiters, 0);
- plist_head_init(&futex_queues[i].chain);
- spin_lock_init(&futex_queues[i].lock);
- }
+ for (i = 0; i < hashsize; i++)
+ futex_hash_bucket_init(&futex_queues[i]);
futex_hashmask = hashsize - 1;
return 0;
--
2.49.0
* [PATCH v11 12/19] futex: Add basic infrastructure for local task local hash.
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (10 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 11/19] futex: Create helper function to initialize a hash slot Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 13/19] futex: Allow automatic allocation of process wide futex hash Sebastian Andrzej Siewior
` (9 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
The futex hash is system wide and shared by all tasks. Each slot
is hashed based on the futex address and the VMA of the thread. Due to
randomized VMAs (and memory allocations) the same logical lock (pointer)
can end up in a different hash bucket on each invocation of the
application. This in turn means that different applications may share a
hash bucket on the first invocation but not on the second, and it is not
always clear which applications will be involved. This can result in
high latencies to acquire the futex_hash_bucket::lock, especially if the
lock owner is limited to a CPU and cannot be effectively PI boosted.
Introduce basic infrastructure for a process local hash which is shared
by all threads of the process. This hash will only be used for
PROCESS_PRIVATE futex operations.
The hashmap can be allocated via
prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, num)
A `num' of 0 means that the global hash is used instead of a private
hash.
Other values for `num' specify the number of slots for the hash; the
number must be a power of two, starting with two.
The prctl() returns zero on success. It can only be used before a
thread is created.
The current status for the private hash can be queried via
num = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_SLOTS);
which returns the current number of slots. The value 0 means that the
global hash is used. Values greater than 0 indicate the number of slots
that are used. A negative number indicates an error.
For optimisation, the private hash's jhash2() uses only two arguments:
the address and the offset. This omits the VMA, which is always the same.
[peterz: Use 0 for global hash. A bit shuffling and renaming. ]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/futex.h | 26 ++++-
include/linux/mm_types.h | 5 +-
include/uapi/linux/prctl.h | 5 +
init/Kconfig | 5 +
kernel/fork.c | 2 +
kernel/futex/core.c | 206 +++++++++++++++++++++++++++++++++----
kernel/futex/futex.h | 10 ++
kernel/sys.c | 4 +
8 files changed, 242 insertions(+), 21 deletions(-)
diff --git a/include/linux/futex.h b/include/linux/futex.h
index b70df27d7e85c..bbc897e54c864 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -4,11 +4,11 @@
#include <linux/sched.h>
#include <linux/ktime.h>
+#include <linux/mm_types.h>
#include <uapi/linux/futex.h>
struct inode;
-struct mm_struct;
struct task_struct;
/*
@@ -77,7 +77,22 @@ void futex_exec_release(struct task_struct *tsk);
long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
u32 __user *uaddr2, u32 val2, u32 val3);
-#else
+int futex_hash_prctl(unsigned long arg2, unsigned long arg3);
+
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
+void futex_hash_free(struct mm_struct *mm);
+
+static inline void futex_mm_init(struct mm_struct *mm)
+{
+ mm->futex_phash = NULL;
+}
+
+#else /* !CONFIG_FUTEX_PRIVATE_HASH */
+static inline void futex_hash_free(struct mm_struct *mm) { }
+static inline void futex_mm_init(struct mm_struct *mm) { }
+#endif /* CONFIG_FUTEX_PRIVATE_HASH */
+
+#else /* !CONFIG_FUTEX */
static inline void futex_init_task(struct task_struct *tsk) { }
static inline void futex_exit_recursive(struct task_struct *tsk) { }
static inline void futex_exit_release(struct task_struct *tsk) { }
@@ -88,6 +103,13 @@ static inline long do_futex(u32 __user *uaddr, int op, u32 val,
{
return -EINVAL;
}
+static inline int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
+{
+ return -EINVAL;
+}
+static inline void futex_hash_free(struct mm_struct *mm) { }
+static inline void futex_mm_init(struct mm_struct *mm) { }
+
#endif
#endif
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 56d07edd01f91..a4b5661e41770 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -31,6 +31,7 @@
#define INIT_PASID 0
struct address_space;
+struct futex_private_hash;
struct mem_cgroup;
/*
@@ -1031,7 +1032,9 @@ struct mm_struct {
*/
seqcount_t mm_lock_seq;
#endif
-
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
+ struct futex_private_hash *futex_phash;
+#endif
unsigned long hiwater_rss; /* High-watermark of RSS usage */
unsigned long hiwater_vm; /* High-water virtual memory usage */
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 15c18ef4eb11a..3b93fb906e3c5 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -364,4 +364,9 @@ struct prctl_mm_map {
# define PR_TIMER_CREATE_RESTORE_IDS_ON 1
# define PR_TIMER_CREATE_RESTORE_IDS_GET 2
+/* FUTEX hash management */
+#define PR_FUTEX_HASH 78
+# define PR_FUTEX_HASH_SET_SLOTS 1
+# define PR_FUTEX_HASH_GET_SLOTS 2
+
#endif /* _LINUX_PRCTL_H */
diff --git a/init/Kconfig b/init/Kconfig
index dd2ea3b9a7992..b308b98d79347 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1699,6 +1699,11 @@ config FUTEX_PI
depends on FUTEX && RT_MUTEXES
default y
+config FUTEX_PRIVATE_HASH
+ bool
+ depends on FUTEX && !BASE_SMALL && MMU
+ default y
+
config EPOLL
bool "Enable eventpoll support" if EXPERT
default y
diff --git a/kernel/fork.c b/kernel/fork.c
index c4b26cd8998b8..831dfec450544 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1305,6 +1305,7 @@ static struct mm_struct *mm_init(struct mm_struct *mm, struct task_struct *p,
RCU_INIT_POINTER(mm->exe_file, NULL);
mmu_notifier_subscriptions_init(mm);
init_tlb_flush_pending(mm);
+ futex_mm_init(mm);
#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !defined(CONFIG_SPLIT_PMD_PTLOCKS)
mm->pmd_huge_pte = NULL;
#endif
@@ -1387,6 +1388,7 @@ static inline void __mmput(struct mm_struct *mm)
if (mm->binfmt)
module_put(mm->binfmt->module);
lru_gen_del_mm(mm);
+ futex_hash_free(mm);
mmdrop(mm);
}
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index afc66780f84fc..479a629b9aeea 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -39,6 +39,7 @@
#include <linux/memblock.h>
#include <linux/fault-inject.h>
#include <linux/slab.h>
+#include <linux/prctl.h>
#include "futex.h"
#include "../locking/rtmutex_common.h"
@@ -55,6 +56,12 @@ static struct {
#define futex_queues (__futex_data.queues)
#define futex_hashmask (__futex_data.hashmask)
+struct futex_private_hash {
+ unsigned int hash_mask;
+ void *mm;
+ bool custom;
+ struct futex_hash_bucket queues[];
+};
/*
* Fault injections for futexes.
@@ -107,9 +114,17 @@ late_initcall(fail_futex_debugfs);
#endif /* CONFIG_FAIL_FUTEX */
-struct futex_private_hash *futex_private_hash(void)
+static struct futex_hash_bucket *
+__futex_hash(union futex_key *key, struct futex_private_hash *fph);
+
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
+static inline bool futex_key_is_private(union futex_key *key)
{
- return NULL;
+ /*
+ * Relies on get_futex_key() to set either bit for shared
+ * futexes -- see comment with union futex_key.
+ */
+ return !(key->both.offset & (FUT_OFF_INODE | FUT_OFF_MMSHARED));
}
bool futex_private_hash_get(struct futex_private_hash *fph)
@@ -117,21 +132,8 @@ bool futex_private_hash_get(struct futex_private_hash *fph)
return false;
}
-void futex_private_hash_put(struct futex_private_hash *fph) { }
-
-/**
- * futex_hash - Return the hash bucket in the global hash
- * @key: Pointer to the futex key for which the hash is calculated
- *
- * We hash on the keys returned from get_futex_key (see below) and return the
- * corresponding hash bucket in the global hash.
- */
-struct futex_hash_bucket *futex_hash(union futex_key *key)
+void futex_private_hash_put(struct futex_private_hash *fph)
{
- u32 hash = jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4,
- key->both.offset);
-
- return &futex_queues[hash & futex_hashmask];
}
/**
@@ -144,6 +146,84 @@ struct futex_hash_bucket *futex_hash(union futex_key *key)
void futex_hash_get(struct futex_hash_bucket *hb) { }
void futex_hash_put(struct futex_hash_bucket *hb) { }
+static struct futex_hash_bucket *
+__futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
+{
+ u32 hash;
+
+ if (!futex_key_is_private(key))
+ return NULL;
+
+ if (!fph)
+ fph = key->private.mm->futex_phash;
+ if (!fph || !fph->hash_mask)
+ return NULL;
+
+ hash = jhash2((void *)&key->private.address,
+ sizeof(key->private.address) / 4,
+ key->both.offset);
+ return &fph->queues[hash & fph->hash_mask];
+}
+
+struct futex_private_hash *futex_private_hash(void)
+{
+ struct mm_struct *mm = current->mm;
+ struct futex_private_hash *fph;
+
+ fph = mm->futex_phash;
+ return fph;
+}
+
+struct futex_hash_bucket *futex_hash(union futex_key *key)
+{
+ struct futex_hash_bucket *hb;
+
+ hb = __futex_hash(key, NULL);
+ return hb;
+}
+
+#else /* !CONFIG_FUTEX_PRIVATE_HASH */
+
+static struct futex_hash_bucket *
+__futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
+{
+ return NULL;
+}
+
+struct futex_hash_bucket *futex_hash(union futex_key *key)
+{
+ return __futex_hash(key, NULL);
+}
+
+#endif /* CONFIG_FUTEX_PRIVATE_HASH */
+
+/**
+ * __futex_hash - Return the hash bucket
+ * @key: Pointer to the futex key for which the hash is calculated
+ * @fph: Pointer to private hash if known
+ *
+ * We hash on the keys returned from get_futex_key (see below) and return the
+ * corresponding hash bucket.
+ * If the FUTEX is PROCESS_PRIVATE then a per-process hash bucket (from the
+ * private hash) is returned if existing. Otherwise a hash bucket from the
+ * global hash is returned.
+ */
+static struct futex_hash_bucket *
+__futex_hash(union futex_key *key, struct futex_private_hash *fph)
+{
+ struct futex_hash_bucket *hb;
+ u32 hash;
+
+ hb = __futex_hash_private(key, fph);
+ if (hb)
+ return hb;
+
+ hash = jhash2((u32 *)key,
+ offsetof(typeof(*key), both.offset) / 4,
+ key->both.offset);
+ return &futex_queues[hash & futex_hashmask];
+}
+
/**
* futex_setup_timer - set up the sleeping hrtimer.
* @time: ptr to the given timeout value
@@ -985,6 +1065,13 @@ static void exit_pi_state_list(struct task_struct *curr)
struct futex_pi_state *pi_state;
union futex_key key = FUTEX_KEY_INIT;
+ /*
+ * Ensure the hash remains stable (no resize) during the while loop
+ * below. The hb pointer is acquired under the pi_lock so we can't block
+ * on the mutex.
+ */
+ WARN_ON(curr != current);
+ guard(private_hash)();
/*
* We are a ZOMBIE and nobody can enqueue itself on
* pi_state_list anymore, but we have to be careful
@@ -1160,13 +1247,96 @@ void futex_exit_release(struct task_struct *tsk)
futex_cleanup_end(tsk, FUTEX_STATE_DEAD);
}
-static void futex_hash_bucket_init(struct futex_hash_bucket *fhb)
+static void futex_hash_bucket_init(struct futex_hash_bucket *fhb,
+ struct futex_private_hash *fph)
{
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
+ fhb->priv = fph;
+#endif
atomic_set(&fhb->waiters, 0);
plist_head_init(&fhb->chain);
spin_lock_init(&fhb->lock);
}
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
+void futex_hash_free(struct mm_struct *mm)
+{
+ kvfree(mm->futex_phash);
+}
+
+static int futex_hash_allocate(unsigned int hash_slots, bool custom)
+{
+ struct mm_struct *mm = current->mm;
+ struct futex_private_hash *fph;
+ int i;
+
+ if (hash_slots && (hash_slots == 1 || !is_power_of_2(hash_slots)))
+ return -EINVAL;
+
+ if (mm->futex_phash)
+ return -EALREADY;
+
+ if (!thread_group_empty(current))
+ return -EINVAL;
+
+ fph = kvzalloc(struct_size(fph, queues, hash_slots), GFP_KERNEL_ACCOUNT);
+ if (!fph)
+ return -ENOMEM;
+
+ fph->hash_mask = hash_slots ? hash_slots - 1 : 0;
+ fph->custom = custom;
+ fph->mm = mm;
+
+ for (i = 0; i < hash_slots; i++)
+ futex_hash_bucket_init(&fph->queues[i], fph);
+
+ mm->futex_phash = fph;
+ return 0;
+}
+
+static int futex_hash_get_slots(void)
+{
+ struct futex_private_hash *fph;
+
+ fph = current->mm->futex_phash;
+ if (fph && fph->hash_mask)
+ return fph->hash_mask + 1;
+ return 0;
+}
+
+#else
+
+static int futex_hash_allocate(unsigned int hash_slots, bool custom)
+{
+ return -EINVAL;
+}
+
+static int futex_hash_get_slots(void)
+{
+ return 0;
+}
+#endif
+
+int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
+{
+ int ret;
+
+ switch (arg2) {
+ case PR_FUTEX_HASH_SET_SLOTS:
+ ret = futex_hash_allocate(arg3, true);
+ break;
+
+ case PR_FUTEX_HASH_GET_SLOTS:
+ ret = futex_hash_get_slots();
+ break;
+
+ default:
+ ret = -EINVAL;
+ break;
+ }
+ return ret;
+}
+
static int __init futex_init(void)
{
unsigned long hashsize, i;
@@ -1185,7 +1355,7 @@ static int __init futex_init(void)
hashsize = 1UL << futex_shift;
for (i = 0; i < hashsize; i++)
- futex_hash_bucket_init(&futex_queues[i]);
+ futex_hash_bucket_init(&futex_queues[i], NULL);
futex_hashmask = hashsize - 1;
return 0;
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 26e69333cb745..899aed5acde12 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -118,6 +118,7 @@ struct futex_hash_bucket {
atomic_t waiters;
spinlock_t lock;
struct plist_head chain;
+ struct futex_private_hash *priv;
} ____cacheline_aligned_in_smp;
/*
@@ -204,6 +205,7 @@ futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
int flags, u64 range_ns);
extern struct futex_hash_bucket *futex_hash(union futex_key *key);
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
extern void futex_hash_get(struct futex_hash_bucket *hb);
extern void futex_hash_put(struct futex_hash_bucket *hb);
@@ -211,6 +213,14 @@ extern struct futex_private_hash *futex_private_hash(void);
extern bool futex_private_hash_get(struct futex_private_hash *fph);
extern void futex_private_hash_put(struct futex_private_hash *fph);
+#else /* !CONFIG_FUTEX_PRIVATE_HASH */
+static inline void futex_hash_get(struct futex_hash_bucket *hb) { }
+static inline void futex_hash_put(struct futex_hash_bucket *hb) { }
+static inline struct futex_private_hash *futex_private_hash(void) { return NULL; }
+static inline bool futex_private_hash_get(struct futex_private_hash *fph) { return false; }
+static inline void futex_private_hash_put(struct futex_private_hash *fph) { }
+#endif
+
DEFINE_CLASS(hb, struct futex_hash_bucket *,
if (_T) futex_hash_put(_T),
futex_hash(key), union futex_key *key);
diff --git a/kernel/sys.c b/kernel/sys.c
index c434968e9f5dd..d446d8ecb0b33 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -52,6 +52,7 @@
#include <linux/user_namespace.h>
#include <linux/time_namespace.h>
#include <linux/binfmts.h>
+#include <linux/futex.h>
#include <linux/sched.h>
#include <linux/sched/autogroup.h>
@@ -2820,6 +2821,9 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
return -EINVAL;
error = posixtimer_create_prctl(arg2);
break;
+ case PR_FUTEX_HASH:
+ error = futex_hash_prctl(arg2, arg3);
+ break;
default:
trace_task_prctl_unknown(option, arg2, arg3, arg4, arg5);
error = -EINVAL;
--
2.49.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v11 13/19] futex: Allow automatic allocation of process wide futex hash.
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (11 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 12/19] futex: Add basic infrastructure for local task local hash Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 14/19] futex: Allow to resize the private local hash Sebastian Andrzej Siewior
` (8 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
Allocate a private futex hash with 16 slots when a task forks its first
thread.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/futex.h | 6 ++++++
kernel/fork.c | 22 ++++++++++++++++++++++
kernel/futex/core.c | 11 +++++++++++
3 files changed, 39 insertions(+)
diff --git a/include/linux/futex.h b/include/linux/futex.h
index bbc897e54c864..0e147819ae153 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -80,6 +80,7 @@ long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
int futex_hash_prctl(unsigned long arg2, unsigned long arg3);
#ifdef CONFIG_FUTEX_PRIVATE_HASH
+int futex_hash_allocate_default(void);
void futex_hash_free(struct mm_struct *mm);
static inline void futex_mm_init(struct mm_struct *mm)
@@ -88,6 +89,7 @@ static inline void futex_mm_init(struct mm_struct *mm)
}
#else /* !CONFIG_FUTEX_PRIVATE_HASH */
+static inline int futex_hash_allocate_default(void) { return 0; }
static inline void futex_hash_free(struct mm_struct *mm) { }
static inline void futex_mm_init(struct mm_struct *mm) { }
#endif /* CONFIG_FUTEX_PRIVATE_HASH */
@@ -107,6 +109,10 @@ static inline int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
{
return -EINVAL;
}
+static inline int futex_hash_allocate_default(void)
+{
+ return 0;
+}
static inline void futex_hash_free(struct mm_struct *mm) { }
static inline void futex_mm_init(struct mm_struct *mm) { }
diff --git a/kernel/fork.c b/kernel/fork.c
index 831dfec450544..1f5d8083eeb25 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2164,6 +2164,13 @@ static void rv_task_fork(struct task_struct *p)
#define rv_task_fork(p) do {} while (0)
#endif
+static bool need_futex_hash_allocate_default(u64 clone_flags)
+{
+ if ((clone_flags & (CLONE_THREAD | CLONE_VM)) != (CLONE_THREAD | CLONE_VM))
+ return false;
+ return true;
+}
+
/*
* This creates a new process as a copy of the old one,
* but does not actually start it yet.
@@ -2544,6 +2551,21 @@ __latent_entropy struct task_struct *copy_process(
if (retval)
goto bad_fork_cancel_cgroup;
+ /*
+ * Allocate a default futex hash for the user process once the first
+ * thread spawns.
+ */
+ if (need_futex_hash_allocate_default(clone_flags)) {
+ retval = futex_hash_allocate_default();
+ if (retval)
+ goto bad_fork_core_free;
+ /*
+ * If we fail beyond this point we don't free the allocated
+ * futex hash map. We assume that another thread will be created
+ * and make use of it. The hash map will be freed once the main
+ * thread terminates.
+ */
+ }
/*
* From this point on we must avoid any synchronous user-space
* communication until we take the tasklist-lock. In particular, we do
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 479a629b9aeea..a8ede52bf7e63 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -1294,6 +1294,17 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
return 0;
}
+int futex_hash_allocate_default(void)
+{
+ if (!current->mm)
+ return 0;
+
+ if (current->mm->futex_phash)
+ return 0;
+
+ return futex_hash_allocate(16, false);
+}
+
static int futex_hash_get_slots(void)
{
struct futex_private_hash *fph;
--
2.49.0
* [PATCH v11 14/19] futex: Allow to resize the private local hash.
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (12 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 13/19] futex: Allow automatic allocation of process wide futex hash Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 15/19] futex: Implement FUTEX2_NUMA Sebastian Andrzej Siewior
` (7 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
The mm_struct::futex_hash_lock guards the futex_hash_bucket assignment and
replacement. The futex_hash_allocate() / PR_FUTEX_HASH_SET_SLOTS
operation can now be invoked at runtime to resize an already existing
internal private futex_hash_bucket to another size.
The reallocation is based on an idea by Thomas Gleixner: The initial
allocation of struct futex_private_hash sets the reference count
to one. Every user acquires a reference on the local hash before using
it and drops it after it has enqueued itself on the hash bucket. There is no
reference held while the task is scheduled out while waiting for the
wake up.
The resize process allocates a new struct futex_private_hash and drops
the initial reference. Synchronized with mm_struct::futex_hash_lock, it
is checked whether the reference counter of the currently used
mm_struct::futex_phash is marked as DEAD. If so, then all users enqueued
on the current private hash are requeued on the new private hash and the
new private hash is set to mm_struct::futex_phash. Otherwise the newly
allocated private hash is saved as mm_struct::futex_phash_new and the
rehashing and reassigning is delayed to the futex_hash() caller once the
reference counter is marked DEAD.
The replacement is not performed at rcuref_put() time because certain
callers, such as futex_wait_queue(), drop their reference after changing
the task state. That state change would be destroyed by acquiring (and
possibly sleeping on) the futex_hash_lock.
The user can change the number of slots with PR_FUTEX_HASH_SET_SLOTS
multiple times. Both an increase and a decrease are allowed; the request
blocks until the assignment is done.
The private hash allocated at thread creation is changed from 16 to
16 <= 4 * number_of_threads <= global_hash_size
where number_of_threads cannot exceed the number of online CPUs. Should
the user issue PR_FUTEX_HASH_SET_SLOTS then the auto scaling is disabled.
[peterz: reorganize the code to avoid state tracking and simplify new
object handling, block the user until changes are in effect, allow
increase and decrease of the hash].
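The auto-scaling rule above can be sketched as a plain C function. This is an illustrative reimplementation for the description only, not the kernel code; the helper name and parameters are made up here.

```c
#include <assert.h>

/* Smallest power of two >= n (for n >= 1). */
static unsigned int roundup_pow_of_two(unsigned int n)
{
	unsigned int p = 1;

	while (p < n)
		p <<= 1;
	return p;
}

/* Default bucket count: 16 <= roundup_pow_of_two(4 * threads) <= global size. */
static unsigned int default_hash_buckets(unsigned int nr_threads,
					 unsigned int nr_online_cpus,
					 unsigned int global_hash_size)
{
	/* number_of_threads cannot exceed the number of online CPUs */
	unsigned int threads = nr_threads < nr_online_cpus ? nr_threads : nr_online_cpus;
	unsigned int buckets = roundup_pow_of_two(4 * threads);

	if (buckets < 16)
		buckets = 16;
	if (buckets > global_hash_size)
		buckets = global_hash_size;
	return buckets;
}
```

For example, a process with 2 threads stays at the 16-slot minimum, while one with 100 threads on a 16-CPU machine gets 64 buckets.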
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/futex.h | 3 +-
include/linux/mm_types.h | 4 +-
kernel/futex/core.c | 290 ++++++++++++++++++++++++++++++++++++---
kernel/futex/requeue.c | 5 +
4 files changed, 281 insertions(+), 21 deletions(-)
diff --git a/include/linux/futex.h b/include/linux/futex.h
index 0e147819ae153..0cb4c6d512dfb 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -85,7 +85,8 @@ void futex_hash_free(struct mm_struct *mm);
static inline void futex_mm_init(struct mm_struct *mm)
{
- mm->futex_phash = NULL;
+ rcu_assign_pointer(mm->futex_phash, NULL);
+ mutex_init(&mm->futex_hash_lock);
}
#else /* !CONFIG_FUTEX_PRIVATE_HASH */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index a4b5661e41770..32ba5126e2214 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -1033,7 +1033,9 @@ struct mm_struct {
seqcount_t mm_lock_seq;
#endif
#ifdef CONFIG_FUTEX_PRIVATE_HASH
- struct futex_private_hash *futex_phash;
+ struct mutex futex_hash_lock;
+ struct futex_private_hash __rcu *futex_phash;
+ struct futex_private_hash *futex_phash_new;
#endif
unsigned long hiwater_rss; /* High-watermark of RSS usage */
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index a8ede52bf7e63..220ddc7620e49 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -40,6 +40,7 @@
#include <linux/fault-inject.h>
#include <linux/slab.h>
#include <linux/prctl.h>
+#include <linux/rcuref.h>
#include "futex.h"
#include "../locking/rtmutex_common.h"
@@ -57,7 +58,9 @@ static struct {
#define futex_hashmask (__futex_data.hashmask)
struct futex_private_hash {
+ rcuref_t users;
unsigned int hash_mask;
+ struct rcu_head rcu;
void *mm;
bool custom;
struct futex_hash_bucket queues[];
@@ -129,11 +132,14 @@ static inline bool futex_key_is_private(union futex_key *key)
bool futex_private_hash_get(struct futex_private_hash *fph)
{
- return false;
+ return rcuref_get(&fph->users);
}
void futex_private_hash_put(struct futex_private_hash *fph)
{
+ /* Ignore return value, last put is verified via rcuref_is_dead() */
+ if (rcuref_put(&fph->users))
+ wake_up_var(fph->mm);
}
/**
@@ -143,8 +149,23 @@ void futex_private_hash_put(struct futex_private_hash *fph)
* Obtain an additional reference for the already obtained hash bucket. The
* caller must already own a reference.
*/
-void futex_hash_get(struct futex_hash_bucket *hb) { }
-void futex_hash_put(struct futex_hash_bucket *hb) { }
+void futex_hash_get(struct futex_hash_bucket *hb)
+{
+ struct futex_private_hash *fph = hb->priv;
+
+ if (!fph)
+ return;
+ WARN_ON_ONCE(!futex_private_hash_get(fph));
+}
+
+void futex_hash_put(struct futex_hash_bucket *hb)
+{
+ struct futex_private_hash *fph = hb->priv;
+
+ if (!fph)
+ return;
+ futex_private_hash_put(fph);
+}
static struct futex_hash_bucket *
__futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
@@ -155,7 +176,7 @@ __futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
return NULL;
if (!fph)
- fph = key->private.mm->futex_phash;
+ fph = rcu_dereference(key->private.mm->futex_phash);
if (!fph || !fph->hash_mask)
return NULL;
@@ -165,21 +186,119 @@ __futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
return &fph->queues[hash & fph->hash_mask];
}
+static void futex_rehash_private(struct futex_private_hash *old,
+ struct futex_private_hash *new)
+{
+ struct futex_hash_bucket *hb_old, *hb_new;
+ unsigned int slots = old->hash_mask + 1;
+ unsigned int i;
+
+ for (i = 0; i < slots; i++) {
+ struct futex_q *this, *tmp;
+
+ hb_old = &old->queues[i];
+
+ spin_lock(&hb_old->lock);
+ plist_for_each_entry_safe(this, tmp, &hb_old->chain, list) {
+
+ plist_del(&this->list, &hb_old->chain);
+ futex_hb_waiters_dec(hb_old);
+
+ WARN_ON_ONCE(this->lock_ptr != &hb_old->lock);
+
+ hb_new = __futex_hash(&this->key, new);
+ futex_hb_waiters_inc(hb_new);
+ /*
+ * The new pointer isn't published yet but an already
+ * moved user can be unqueued due to timeout or signal.
+ */
+ spin_lock_nested(&hb_new->lock, SINGLE_DEPTH_NESTING);
+ plist_add(&this->list, &hb_new->chain);
+ this->lock_ptr = &hb_new->lock;
+ spin_unlock(&hb_new->lock);
+ }
+ spin_unlock(&hb_old->lock);
+ }
+}
+
+static bool __futex_pivot_hash(struct mm_struct *mm,
+ struct futex_private_hash *new)
+{
+ struct futex_private_hash *fph;
+
+ WARN_ON_ONCE(mm->futex_phash_new);
+
+ fph = rcu_dereference_protected(mm->futex_phash,
+ lockdep_is_held(&mm->futex_hash_lock));
+ if (fph) {
+ if (!rcuref_is_dead(&fph->users)) {
+ mm->futex_phash_new = new;
+ return false;
+ }
+
+ futex_rehash_private(fph, new);
+ }
+ rcu_assign_pointer(mm->futex_phash, new);
+ kvfree_rcu(fph, rcu);
+ return true;
+}
+
+static void futex_pivot_hash(struct mm_struct *mm)
+{
+ scoped_guard(mutex, &mm->futex_hash_lock) {
+ struct futex_private_hash *fph;
+
+ fph = mm->futex_phash_new;
+ if (fph) {
+ mm->futex_phash_new = NULL;
+ __futex_pivot_hash(mm, fph);
+ }
+ }
+}
+
struct futex_private_hash *futex_private_hash(void)
{
struct mm_struct *mm = current->mm;
- struct futex_private_hash *fph;
+ /*
+ * Ideally we don't loop. If there is a replacement in progress
+ * then a new private hash is already prepared and a reference can't be
+ * obtained once the last user dropped theirs.
+ * In that case we block on mm_struct::futex_hash_lock and either have
+ * to perform the replacement or wait while someone else is doing the
+ * job. Either way, on the second iteration we acquire a reference on the
+ * new private hash or loop again because a new replacement has been
+ * requested.
+ */
+again:
+ scoped_guard(rcu) {
+ struct futex_private_hash *fph;
- fph = mm->futex_phash;
- return fph;
+ fph = rcu_dereference(mm->futex_phash);
+ if (!fph)
+ return NULL;
+
+ if (rcuref_get(&fph->users))
+ return fph;
+ }
+ futex_pivot_hash(mm);
+ goto again;
}
struct futex_hash_bucket *futex_hash(union futex_key *key)
{
+ struct futex_private_hash *fph;
struct futex_hash_bucket *hb;
- hb = __futex_hash(key, NULL);
- return hb;
+again:
+ scoped_guard(rcu) {
+ hb = __futex_hash(key, NULL);
+ fph = hb->priv;
+
+ if (!fph || futex_private_hash_get(fph))
+ return hb;
+ }
+ futex_pivot_hash(key->private.mm);
+ goto again;
}
#else /* !CONFIG_FUTEX_PRIVATE_HASH */
@@ -664,6 +783,8 @@ int futex_unqueue(struct futex_q *q)
spinlock_t *lock_ptr;
int ret = 0;
+ /* RCU so lock_ptr is not going away during locking. */
+ guard(rcu)();
/* In the common case we don't take the spinlock, which is nice. */
retry:
/*
@@ -1065,6 +1186,10 @@ static void exit_pi_state_list(struct task_struct *curr)
struct futex_pi_state *pi_state;
union futex_key key = FUTEX_KEY_INIT;
+ /*
+ * The mutex mm_struct::futex_hash_lock might be acquired.
+ */
+ might_sleep();
/*
* Ensure the hash remains stable (no resize) during the while loop
* below. The hb pointer is acquired under the pi_lock so we can't block
@@ -1261,7 +1386,51 @@ static void futex_hash_bucket_init(struct futex_hash_bucket *fhb,
#ifdef CONFIG_FUTEX_PRIVATE_HASH
void futex_hash_free(struct mm_struct *mm)
{
- kvfree(mm->futex_phash);
+ struct futex_private_hash *fph;
+
+ kvfree(mm->futex_phash_new);
+ fph = rcu_dereference_raw(mm->futex_phash);
+ if (fph) {
+ WARN_ON_ONCE(rcuref_read(&fph->users) > 1);
+ kvfree(fph);
+ }
+}
+
+static bool futex_pivot_pending(struct mm_struct *mm)
+{
+ struct futex_private_hash *fph;
+
+ guard(rcu)();
+
+ if (!mm->futex_phash_new)
+ return true;
+
+ fph = rcu_dereference(mm->futex_phash);
+ return rcuref_is_dead(&fph->users);
+}
+
+static bool futex_hash_less(struct futex_private_hash *a,
+ struct futex_private_hash *b)
+{
+ /* user provided always wins */
+ if (!a->custom && b->custom)
+ return true;
+ if (a->custom && !b->custom)
+ return false;
+
+ /* zero-sized hash wins */
+ if (!b->hash_mask)
+ return true;
+ if (!a->hash_mask)
+ return false;
+
+ /* keep the biggest */
+ if (a->hash_mask < b->hash_mask)
+ return true;
+ if (a->hash_mask > b->hash_mask)
+ return false;
+
+ return false; /* equal */
}
static int futex_hash_allocate(unsigned int hash_slots, bool custom)
@@ -1273,16 +1442,23 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
if (hash_slots && (hash_slots == 1 || !is_power_of_2(hash_slots)))
return -EINVAL;
- if (mm->futex_phash)
- return -EALREADY;
-
- if (!thread_group_empty(current))
- return -EINVAL;
+ /*
+ * Once we've disabled the global hash there is no way back.
+ */
+ scoped_guard(rcu) {
+ fph = rcu_dereference(mm->futex_phash);
+ if (fph && !fph->hash_mask) {
+ if (custom)
+ return -EBUSY;
+ return 0;
+ }
+ }
fph = kvzalloc(struct_size(fph, queues, hash_slots), GFP_KERNEL_ACCOUNT);
if (!fph)
return -ENOMEM;
+ rcuref_init(&fph->users, 1);
fph->hash_mask = hash_slots ? hash_slots - 1 : 0;
fph->custom = custom;
fph->mm = mm;
@@ -1290,26 +1466,102 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
for (i = 0; i < hash_slots; i++)
futex_hash_bucket_init(&fph->queues[i], fph);
- mm->futex_phash = fph;
+ if (custom) {
+ /*
+ * Only let prctl() wait / retry; don't unduly delay clone().
+ */
+again:
+ wait_var_event(mm, futex_pivot_pending(mm));
+ }
+
+ scoped_guard(mutex, &mm->futex_hash_lock) {
+ struct futex_private_hash *free __free(kvfree) = NULL;
+ struct futex_private_hash *cur, *new;
+
+ cur = rcu_dereference_protected(mm->futex_phash,
+ lockdep_is_held(&mm->futex_hash_lock));
+ new = mm->futex_phash_new;
+ mm->futex_phash_new = NULL;
+
+ if (fph) {
+ if (cur && !new) {
+ /*
+ * If we have an existing hash, but do not yet have
+ * allocated a replacement hash, drop the initial
+ * reference on the existing hash.
+ */
+ futex_private_hash_put(cur);
+ }
+
+ if (new) {
+ /*
+ * Two updates raced; throw out the lesser one.
+ */
+ if (futex_hash_less(new, fph)) {
+ free = new;
+ new = fph;
+ } else {
+ free = fph;
+ }
+ } else {
+ new = fph;
+ }
+ fph = NULL;
+ }
+
+ if (new) {
+ /*
+ * Will set mm->futex_phash_new on failure;
+ * futex_private_hash_get() will try again.
+ */
+ if (!__futex_pivot_hash(mm, new) && custom)
+ goto again;
+ }
+ }
return 0;
}
int futex_hash_allocate_default(void)
{
+ unsigned int threads, buckets, current_buckets = 0;
+ struct futex_private_hash *fph;
+
if (!current->mm)
return 0;
- if (current->mm->futex_phash)
+ scoped_guard(rcu) {
+ threads = min_t(unsigned int,
+ get_nr_threads(current),
+ num_online_cpus());
+
+ fph = rcu_dereference(current->mm->futex_phash);
+ if (fph) {
+ if (fph->custom)
+ return 0;
+
+ current_buckets = fph->hash_mask + 1;
+ }
+ }
+
+ /*
+ * The default allocation will remain within
+ * 16 <= threads * 4 <= global hash size
+ */
+ buckets = roundup_pow_of_two(4 * threads);
+ buckets = clamp(buckets, 16, futex_hashmask + 1);
+
+ if (current_buckets >= buckets)
return 0;
- return futex_hash_allocate(16, false);
+ return futex_hash_allocate(buckets, false);
}
static int futex_hash_get_slots(void)
{
struct futex_private_hash *fph;
- fph = current->mm->futex_phash;
+ guard(rcu)();
+ fph = rcu_dereference(current->mm->futex_phash);
if (fph && fph->hash_mask)
return fph->hash_mask + 1;
return 0;
diff --git a/kernel/futex/requeue.c b/kernel/futex/requeue.c
index b0e64fd454d96..c716a66f86929 100644
--- a/kernel/futex/requeue.c
+++ b/kernel/futex/requeue.c
@@ -87,6 +87,11 @@ void requeue_futex(struct futex_q *q, struct futex_hash_bucket *hb1,
futex_hb_waiters_inc(hb2);
plist_add(&q->list, &hb2->chain);
q->lock_ptr = &hb2->lock;
+ /*
+ * hb1 and hb2 belong to the same futex_private_hash
+ * because if we managed to get a reference on hb1 then it can't be
+ * replaced. Therefore we avoid put(hb1)+get(hb2) here.
+ */
}
q->key = *key2;
}
--
2.49.0
* [PATCH v11 15/19] futex: Implement FUTEX2_NUMA
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (13 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 14/19] futex: Allow to resize the private local hash Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 16:52 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 16/19] futex: Implement FUTEX2_MPOL Sebastian Andrzej Siewior
` (6 subsequent siblings)
21 siblings, 1 reply; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
From: Peter Zijlstra <peterz@infradead.org>
Extend the futex2 interface to be NUMA aware.
When FUTEX2_NUMA is specified for a futex, the user value is extended
to two words (of the same size). The first is the user value we all
know, the second one will be the node to place this futex on.
struct futex_numa_32 {
u32 val;
u32 node;
};
When node is set to ~0, WAIT will set it to the current node_id such
that WAKE knows where to find it. If userspace corrupts the node value
between WAIT and WAKE, the futex will not be found and no wakeup will
happen.
When FUTEX2_NUMA is not set, the node is simply an extension of the
hash, such that traditional futexes are still interleaved over the
nodes.
This is done to avoid having to maintain a separate !NUMA hash-table.
[bigeasy: ensure to have at least hashsize of 4 in futex_init(), add
pr_info() for size and allocation information.]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/futex.h | 3 ++
include/uapi/linux/futex.h | 8 +++
kernel/futex/core.c | 100 ++++++++++++++++++++++++++++++-------
kernel/futex/futex.h | 33 ++++++++++--
4 files changed, 124 insertions(+), 20 deletions(-)
diff --git a/include/linux/futex.h b/include/linux/futex.h
index 0cb4c6d512dfb..ee48dcfbfe59d 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -34,6 +34,7 @@ union futex_key {
u64 i_seq;
unsigned long pgoff;
unsigned int offset;
+ /* unsigned int node; */
} shared;
struct {
union {
@@ -42,11 +43,13 @@ union futex_key {
};
unsigned long address;
unsigned int offset;
+ /* unsigned int node; */
} private;
struct {
u64 ptr;
unsigned long word;
unsigned int offset;
+ unsigned int node; /* NOT hashed! */
} both;
};
diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h
index d2ee625ea1890..0435025beaae8 100644
--- a/include/uapi/linux/futex.h
+++ b/include/uapi/linux/futex.h
@@ -74,6 +74,14 @@
/* do not use */
#define FUTEX_32 FUTEX2_SIZE_U32 /* historical accident :-( */
+
+/*
+ * When FUTEX2_NUMA doubles the futex word, the second word is a node value.
+ * The special value -1 indicates no-node. This is the same value as
+ * NUMA_NO_NODE, except that value is not ABI, this is.
+ */
+#define FUTEX_NO_NODE (-1)
+
/*
* Max numbers of elements in a futex_waitv array
*/
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 220ddc7620e49..5ab2713ac906f 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -36,6 +36,8 @@
#include <linux/pagemap.h>
#include <linux/debugfs.h>
#include <linux/plist.h>
+#include <linux/gfp.h>
+#include <linux/vmalloc.h>
#include <linux/memblock.h>
#include <linux/fault-inject.h>
#include <linux/slab.h>
@@ -51,11 +53,14 @@
* reside in the same cacheline.
*/
static struct {
- struct futex_hash_bucket *queues;
unsigned long hashmask;
+ unsigned int hashshift;
+ struct futex_hash_bucket *queues[MAX_NUMNODES];
} __futex_data __read_mostly __aligned(2*sizeof(long));
-#define futex_queues (__futex_data.queues)
-#define futex_hashmask (__futex_data.hashmask)
+
+#define futex_hashmask (__futex_data.hashmask)
+#define futex_hashshift (__futex_data.hashshift)
+#define futex_queues (__futex_data.queues)
struct futex_private_hash {
rcuref_t users;
@@ -332,15 +337,35 @@ __futex_hash(union futex_key *key, struct futex_private_hash *fph)
{
struct futex_hash_bucket *hb;
u32 hash;
+ int node;
hb = __futex_hash_private(key, fph);
if (hb)
return hb;
hash = jhash2((u32 *)key,
- offsetof(typeof(*key), both.offset) / 4,
+ offsetof(typeof(*key), both.offset) / sizeof(u32),
key->both.offset);
- return &futex_queues[hash & futex_hashmask];
+ node = key->both.node;
+
+ if (node == FUTEX_NO_NODE) {
+ /*
+ * In case of !FLAGS_NUMA, use some unused hash bits to pick a
+ * node -- this ensures regular futexes are interleaved across
+ * the nodes and avoids having to allocate multiple
+ * hash-tables.
+ *
+ * NOTE: this isn't perfectly uniform, but it is fast and
+ * handles sparse node masks.
+ */
+ node = (hash >> futex_hashshift) % nr_node_ids;
+ if (!node_possible(node)) {
+ node = find_next_bit_wrap(node_possible_map.bits,
+ nr_node_ids, node);
+ }
+ }
+
+ return &futex_queues[node][hash & futex_hashmask];
}
/**
@@ -447,25 +472,49 @@ int get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key,
struct page *page;
struct folio *folio;
struct address_space *mapping;
- int err, ro = 0;
+ int node, err, size, ro = 0;
bool fshared;
fshared = flags & FLAGS_SHARED;
+ size = futex_size(flags);
+ if (flags & FLAGS_NUMA)
+ size *= 2;
/*
* The futex address must be "naturally" aligned.
*/
key->both.offset = address % PAGE_SIZE;
- if (unlikely((address % sizeof(u32)) != 0))
+ if (unlikely((address % size) != 0))
return -EINVAL;
address -= key->both.offset;
- if (unlikely(!access_ok(uaddr, sizeof(u32))))
+ if (unlikely(!access_ok(uaddr, size)))
return -EFAULT;
if (unlikely(should_fail_futex(fshared)))
return -EFAULT;
+ if (flags & FLAGS_NUMA) {
+ u32 __user *naddr = uaddr + size / 2;
+
+ if (futex_get_value(&node, naddr))
+ return -EFAULT;
+
+ if (node == FUTEX_NO_NODE) {
+ node = numa_node_id();
+ if (futex_put_value(node, naddr))
+ return -EFAULT;
+
+ } else if (node >= MAX_NUMNODES || !node_possible(node)) {
+ return -EINVAL;
+ }
+
+ key->both.node = node;
+
+ } else {
+ key->both.node = FUTEX_NO_NODE;
+ }
+
/*
* PROCESS_PRIVATE futexes are fast.
* As the mm cannot disappear under us and the 'key' only needs
@@ -1603,24 +1652,41 @@ int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
static int __init futex_init(void)
{
unsigned long hashsize, i;
- unsigned int futex_shift;
+ unsigned int order, n;
+ unsigned long size;
#ifdef CONFIG_BASE_SMALL
hashsize = 16;
#else
- hashsize = roundup_pow_of_two(256 * num_possible_cpus());
+ hashsize = 256 * num_possible_cpus();
+ hashsize /= num_possible_nodes();
+ hashsize = max(4, hashsize);
+ hashsize = roundup_pow_of_two(hashsize);
#endif
+ futex_hashshift = ilog2(hashsize);
+ size = sizeof(struct futex_hash_bucket) * hashsize;
+ order = get_order(size);
- futex_queues = alloc_large_system_hash("futex", sizeof(*futex_queues),
- hashsize, 0, 0,
- &futex_shift, NULL,
- hashsize, hashsize);
- hashsize = 1UL << futex_shift;
+ for_each_node(n) {
+ struct futex_hash_bucket *table;
- for (i = 0; i < hashsize; i++)
- futex_hash_bucket_init(&futex_queues[i], NULL);
+ if (order > MAX_PAGE_ORDER)
+ table = vmalloc_huge_node(size, GFP_KERNEL, n);
+ else
+ table = alloc_pages_exact_nid(n, size, GFP_KERNEL);
+
+ BUG_ON(!table);
+
+ for (i = 0; i < hashsize; i++)
+ futex_hash_bucket_init(&table[i], NULL);
+
+ futex_queues[n] = table;
+ }
futex_hashmask = hashsize - 1;
+ pr_info("futex hash table entries: %lu (%lu bytes on %d NUMA nodes, total %lu KiB, %s).\n",
+ hashsize, size, num_possible_nodes(), size * num_possible_nodes() / 1024,
+ order > MAX_PAGE_ORDER ? "vmalloc" : "linear");
return 0;
}
core_initcall(futex_init);
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 899aed5acde12..acc7953678898 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -54,7 +54,7 @@ static inline unsigned int futex_to_flags(unsigned int op)
return flags;
}
-#define FUTEX2_VALID_MASK (FUTEX2_SIZE_MASK | FUTEX2_PRIVATE)
+#define FUTEX2_VALID_MASK (FUTEX2_SIZE_MASK | FUTEX2_NUMA | FUTEX2_PRIVATE)
/* FUTEX2_ to FLAGS_ */
static inline unsigned int futex2_to_flags(unsigned int flags2)
@@ -87,6 +87,19 @@ static inline bool futex_flags_valid(unsigned int flags)
if ((flags & FLAGS_SIZE_MASK) != FLAGS_SIZE_32)
return false;
+ /*
+ * Must be able to represent both FUTEX_NO_NODE and every valid nodeid
+ * in a futex word.
+ */
+ if (flags & FLAGS_NUMA) {
+ int bits = 8 * futex_size(flags);
+ u64 max = ~0ULL;
+
+ max >>= 64 - bits;
+ if (nr_node_ids >= max)
+ return false;
+ }
+
return true;
}
@@ -282,7 +295,7 @@ static inline int futex_cmpxchg_value_locked(u32 *curval, u32 __user *uaddr, u32
* This looks a bit overkill, but generally just results in a couple
* of instructions.
*/
-static __always_inline int futex_read_inatomic(u32 *dest, u32 __user *from)
+static __always_inline int futex_get_value(u32 *dest, u32 __user *from)
{
u32 val;
@@ -299,12 +312,26 @@ static __always_inline int futex_read_inatomic(u32 *dest, u32 __user *from)
return -EFAULT;
}
+static __always_inline int futex_put_value(u32 val, u32 __user *to)
+{
+ if (can_do_masked_user_access())
+ to = masked_user_access_begin(to);
+ else if (!user_read_access_begin(to, sizeof(*to)))
+ return -EFAULT;
+ unsafe_put_user(val, to, Efault);
+ user_read_access_end();
+ return 0;
+Efault:
+ user_read_access_end();
+ return -EFAULT;
+}
+
static inline int futex_get_value_locked(u32 *dest, u32 __user *from)
{
int ret;
pagefault_disable();
- ret = futex_read_inatomic(dest, from);
+ ret = futex_get_value(dest, from);
pagefault_enable();
return ret;
--
2.49.0
* [PATCH v11 16/19] futex: Implement FUTEX2_MPOL
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (14 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 15/19] futex: Implement FUTEX2_NUMA Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 17/19] tools headers: Synchronize prctl.h ABI header Sebastian Andrzej Siewior
` (5 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior
From: Peter Zijlstra <peterz@infradead.org>
Extend the futex2 interface to be aware of mempolicy.
When FUTEX2_MPOL is specified and there is an MPOL_PREFERRED policy or
a home_node specified covering the futex address, use that node's
hash-map.
Notably, in this case the futex will go to the global node hashtable,
even if it is a PRIVATE futex.
When FUTEX2_NUMA|FUTEX2_MPOL is specified and the user specified node
value is FUTEX_NO_NODE, the MPOL lookup (as described above) will be
tried first before reverting to setting node to the local node.
[bigeasy: add CONFIG_FUTEX_MPOL ]
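The node-selection precedence described above can be sketched as a pure function. The enum values and the mpol struct here are simplified stand-ins for illustration, not the kernel's struct mempolicy.

```c
#include <assert.h>

#define FUTEX_NO_NODE (-1)
#define NUMA_NO_NODE  (-1)

enum mpol_mode { MPOL_DEFAULT, MPOL_PREFERRED, MPOL_PREFERRED_MANY, MPOL_BIND };

struct mpol {
	enum mpol_mode mode;
	int first_node;		/* first node in the policy's nodemask */
	int home_node;
};

/* Pick a futex node from a (simplified) memory policy, or FUTEX_NO_NODE. */
static int mpol_futex_node(const struct mpol *mpol)
{
	if (!mpol)
		return FUTEX_NO_NODE;

	switch (mpol->mode) {
	case MPOL_PREFERRED:
		return mpol->first_node;
	case MPOL_PREFERRED_MANY:
	case MPOL_BIND:
		/* only honoured when an explicit home node was set */
		if (mpol->home_node != NUMA_NO_NODE)
			return mpol->home_node;
		return FUTEX_NO_NODE;
	default:
		return FUTEX_NO_NODE;
	}
}
```

With FUTEX2_NUMA|FUTEX2_MPOL and a user node value of FUTEX_NO_NODE, a lookup along these lines is tried first before falling back to the local node.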
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/mmap_lock.h | 4 ++
include/uapi/linux/futex.h | 2 +-
init/Kconfig | 5 ++
kernel/futex/core.c | 112 +++++++++++++++++++++++++++++++------
kernel/futex/futex.h | 4 ++
5 files changed, 108 insertions(+), 19 deletions(-)
diff --git a/include/linux/mmap_lock.h b/include/linux/mmap_lock.h
index 4706c67699027..e0eddfd306ef3 100644
--- a/include/linux/mmap_lock.h
+++ b/include/linux/mmap_lock.h
@@ -7,6 +7,7 @@
#include <linux/rwsem.h>
#include <linux/tracepoint-defs.h>
#include <linux/types.h>
+#include <linux/cleanup.h>
#define MMAP_LOCK_INITIALIZER(name) \
.mmap_lock = __RWSEM_INITIALIZER((name).mmap_lock),
@@ -211,6 +212,9 @@ static inline void mmap_read_unlock(struct mm_struct *mm)
up_read(&mm->mmap_lock);
}
+DEFINE_GUARD(mmap_read_lock, struct mm_struct *,
+ mmap_read_lock(_T), mmap_read_unlock(_T))
+
static inline void mmap_read_unlock_non_owner(struct mm_struct *mm)
{
__mmap_lock_trace_released(mm, false);
diff --git a/include/uapi/linux/futex.h b/include/uapi/linux/futex.h
index 0435025beaae8..247c425e175ef 100644
--- a/include/uapi/linux/futex.h
+++ b/include/uapi/linux/futex.h
@@ -63,7 +63,7 @@
#define FUTEX2_SIZE_U32 0x02
#define FUTEX2_SIZE_U64 0x03
#define FUTEX2_NUMA 0x04
- /* 0x08 */
+#define FUTEX2_MPOL 0x08
/* 0x10 */
/* 0x20 */
/* 0x40 */
diff --git a/init/Kconfig b/init/Kconfig
index b308b98d79347..174633bc9810b 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1704,6 +1704,11 @@ config FUTEX_PRIVATE_HASH
depends on FUTEX && !BASE_SMALL && MMU
default y
+config FUTEX_MPOL
+ bool
+ depends on FUTEX && NUMA
+ default y
+
config EPOLL
bool "Enable eventpoll support" if EXPERT
default y
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 5ab2713ac906f..5b8609c8729e7 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -43,6 +43,8 @@
#include <linux/slab.h>
#include <linux/prctl.h>
#include <linux/rcuref.h>
+#include <linux/mempolicy.h>
+#include <linux/mmap_lock.h>
#include "futex.h"
#include "../locking/rtmutex_common.h"
@@ -321,6 +323,73 @@ struct futex_hash_bucket *futex_hash(union futex_key *key)
#endif /* CONFIG_FUTEX_PRIVATE_HASH */
+#ifdef CONFIG_FUTEX_MPOL
+static int __futex_key_to_node(struct mm_struct *mm, unsigned long addr)
+{
+ struct vm_area_struct *vma = vma_lookup(mm, addr);
+ struct mempolicy *mpol;
+ int node = FUTEX_NO_NODE;
+
+ if (!vma)
+ return FUTEX_NO_NODE;
+
+ mpol = vma_policy(vma);
+ if (!mpol)
+ return FUTEX_NO_NODE;
+
+ switch (mpol->mode) {
+ case MPOL_PREFERRED:
+ node = first_node(mpol->nodes);
+ break;
+ case MPOL_PREFERRED_MANY:
+ case MPOL_BIND:
+ if (mpol->home_node != NUMA_NO_NODE)
+ node = mpol->home_node;
+ break;
+ default:
+ break;
+ }
+
+ return node;
+}
+
+static int futex_key_to_node_opt(struct mm_struct *mm, unsigned long addr)
+{
+ int seq, node;
+
+ guard(rcu)();
+
+ if (!mmap_lock_speculate_try_begin(mm, &seq))
+ return -EBUSY;
+
+ node = __futex_key_to_node(mm, addr);
+
+ if (mmap_lock_speculate_retry(mm, seq))
+ return -EAGAIN;
+
+ return node;
+}
+
+static int futex_mpol(struct mm_struct *mm, unsigned long addr)
+{
+ int node;
+
+ node = futex_key_to_node_opt(mm, addr);
+ if (node >= FUTEX_NO_NODE)
+ return node;
+
+ guard(mmap_read_lock)(mm);
+ return __futex_key_to_node(mm, addr);
+}
+#else /* !CONFIG_FUTEX_MPOL */
+
+static int futex_mpol(struct mm_struct *mm, unsigned long addr)
+{
+ return FUTEX_NO_NODE;
+}
+
+#endif /* CONFIG_FUTEX_MPOL */
+
/**
* __futex_hash - Return the hash bucket
* @key: Pointer to the futex key for which the hash is calculated
@@ -335,18 +404,20 @@ struct futex_hash_bucket *futex_hash(union futex_key *key)
static struct futex_hash_bucket *
__futex_hash(union futex_key *key, struct futex_private_hash *fph)
{
- struct futex_hash_bucket *hb;
+ int node = key->both.node;
u32 hash;
- int node;
- hb = __futex_hash_private(key, fph);
- if (hb)
- return hb;
+ if (node == FUTEX_NO_NODE) {
+ struct futex_hash_bucket *hb;
+
+ hb = __futex_hash_private(key, fph);
+ if (hb)
+ return hb;
+ }
hash = jhash2((u32 *)key,
offsetof(typeof(*key), both.offset) / sizeof(u32),
key->both.offset);
- node = key->both.node;
if (node == FUTEX_NO_NODE) {
/*
@@ -494,27 +565,32 @@ int get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key,
if (unlikely(should_fail_futex(fshared)))
return -EFAULT;
+ node = FUTEX_NO_NODE;
+
if (flags & FLAGS_NUMA) {
u32 __user *naddr = uaddr + size / 2;
if (futex_get_value(&node, naddr))
return -EFAULT;
- if (node == FUTEX_NO_NODE) {
- node = numa_node_id();
- if (futex_put_value(node, naddr))
- return -EFAULT;
-
- } else if (node >= MAX_NUMNODES || !node_possible(node)) {
+ if (node >= MAX_NUMNODES || !node_possible(node))
return -EINVAL;
- }
-
- key->both.node = node;
-
- } else {
- key->both.node = FUTEX_NO_NODE;
}
+ if (node == FUTEX_NO_NODE && (flags & FLAGS_MPOL))
+ node = futex_mpol(mm, address);
+
+ if (flags & FLAGS_NUMA) {
+ u32 __user *naddr = uaddr + size / 2;
+
+ if (node == FUTEX_NO_NODE)
+ node = numa_node_id();
+ if (futex_put_value(node, naddr))
+ return -EFAULT;
+ }
+
+ key->both.node = node;
+
/*
* PROCESS_PRIVATE futexes are fast.
* As the mm cannot disappear under us and the 'key' only needs
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index acc7953678898..004e4dbee4f93 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -39,6 +39,7 @@
#define FLAGS_HAS_TIMEOUT 0x0040
#define FLAGS_NUMA 0x0080
#define FLAGS_STRICT 0x0100
+#define FLAGS_MPOL 0x0200
/* FUTEX_ to FLAGS_ */
static inline unsigned int futex_to_flags(unsigned int op)
@@ -67,6 +68,9 @@ static inline unsigned int futex2_to_flags(unsigned int flags2)
if (flags2 & FUTEX2_NUMA)
flags |= FLAGS_NUMA;
+ if (flags2 & FUTEX2_MPOL)
+ flags |= FLAGS_MPOL;
+
return flags;
}
--
2.49.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v11 17/19] tools headers: Synchronize prctl.h ABI header
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (15 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 16/19] futex: Implement FUTEX2_MPOL Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 18/19] tools/perf: Allow to select the number of hash buckets Sebastian Andrzej Siewior
` (4 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior, Liang, Kan, Adrian Hunter,
Alexander Shishkin, Arnaldo Carvalho de Melo, Ian Rogers,
Jiri Olsa, Mark Rutland, Namhyung Kim, linux-perf-users
Synchronize prctl.h with current uapi version after adding
PR_FUTEX_HASH.
Cc: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
tools/include/uapi/linux/prctl.h | 43 +++++++++++++++++++++++++++++++-
1 file changed, 42 insertions(+), 1 deletion(-)
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index 35791791a879b..3b93fb906e3c5 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -230,7 +230,7 @@ struct prctl_mm_map {
# define PR_PAC_APDBKEY (1UL << 3)
# define PR_PAC_APGAKEY (1UL << 4)
-/* Tagged user address controls for arm64 */
+/* Tagged user address controls for arm64 and RISC-V */
#define PR_SET_TAGGED_ADDR_CTRL 55
#define PR_GET_TAGGED_ADDR_CTRL 56
# define PR_TAGGED_ADDR_ENABLE (1UL << 0)
@@ -244,6 +244,9 @@ struct prctl_mm_map {
# define PR_MTE_TAG_MASK (0xffffUL << PR_MTE_TAG_SHIFT)
/* Unused; kept only for source compatibility */
# define PR_MTE_TCF_SHIFT 1
+/* RISC-V pointer masking tag length */
+# define PR_PMLEN_SHIFT 24
+# define PR_PMLEN_MASK (0x7fUL << PR_PMLEN_SHIFT)
/* Control reclaim behavior when allocating memory */
#define PR_SET_IO_FLUSHER 57
@@ -328,4 +331,42 @@ struct prctl_mm_map {
# define PR_PPC_DEXCR_CTRL_CLEAR_ONEXEC 0x10 /* Clear the aspect on exec */
# define PR_PPC_DEXCR_CTRL_MASK 0x1f
+/*
+ * Get the current shadow stack configuration for the current thread,
+ * this will be the value configured via PR_SET_SHADOW_STACK_STATUS.
+ */
+#define PR_GET_SHADOW_STACK_STATUS 74
+
+/*
+ * Set the current shadow stack configuration. Enabling the shadow
+ * stack will cause a shadow stack to be allocated for the thread.
+ */
+#define PR_SET_SHADOW_STACK_STATUS 75
+# define PR_SHADOW_STACK_ENABLE (1UL << 0)
+# define PR_SHADOW_STACK_WRITE (1UL << 1)
+# define PR_SHADOW_STACK_PUSH (1UL << 2)
+
+/*
+ * Prevent further changes to the specified shadow stack
+ * configuration. All bits may be locked via this call, including
+ * undefined bits.
+ */
+#define PR_LOCK_SHADOW_STACK_STATUS 76
+
+/*
+ * Controls the mode of timer_create() for CRIU restore operations.
+ * Enabling this allows CRIU to restore timers with explicit IDs.
+ *
+ * Don't use for normal operations as the result might be undefined.
+ */
+#define PR_TIMER_CREATE_RESTORE_IDS 77
+# define PR_TIMER_CREATE_RESTORE_IDS_OFF 0
+# define PR_TIMER_CREATE_RESTORE_IDS_ON 1
+# define PR_TIMER_CREATE_RESTORE_IDS_GET 2
+
+/* FUTEX hash management */
+#define PR_FUTEX_HASH 78
+# define PR_FUTEX_HASH_SET_SLOTS 1
+# define PR_FUTEX_HASH_GET_SLOTS 2
+
#endif /* _LINUX_PRCTL_H */
--
2.49.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v11 18/19] tools/perf: Allow to select the number of hash buckets.
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (16 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 17/19] tools headers: Synchronize prctl.h ABI header Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 19/19] futex: Allow to make the private hash immutable Sebastian Andrzej Siewior
` (3 subsequent siblings)
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior, Liang, Kan, Adrian Hunter,
Alexander Shishkin, Arnaldo Carvalho de Melo, Ian Rogers,
Jiri Olsa, Mark Rutland, Namhyung Kim, linux-perf-users
Add the -b/--buckets argument to specify the number of hash buckets for
the private futex hash. The value is passed directly to
prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS)
which must succeed if the option was specified. The size of the private
hash is then verified via PR_FUTEX_HASH_GET_SLOTS.
If PR_FUTEX_HASH_GET_SLOTS fails, it is assumed that an older kernel
without the support is running and that the global hash is used.
Cc: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
tools/perf/bench/Build | 1 +
tools/perf/bench/futex-hash.c | 6 +++
tools/perf/bench/futex-lock-pi.c | 4 ++
tools/perf/bench/futex-requeue.c | 5 +++
tools/perf/bench/futex-wake-parallel.c | 8 +++-
tools/perf/bench/futex-wake.c | 3 ++
tools/perf/bench/futex.c | 58 ++++++++++++++++++++++++++
tools/perf/bench/futex.h | 4 ++
8 files changed, 88 insertions(+), 1 deletion(-)
create mode 100644 tools/perf/bench/futex.c
diff --git a/tools/perf/bench/Build b/tools/perf/bench/Build
index 279ab2ab4abe4..b558ab98719f9 100644
--- a/tools/perf/bench/Build
+++ b/tools/perf/bench/Build
@@ -3,6 +3,7 @@ perf-bench-y += sched-pipe.o
perf-bench-y += sched-seccomp-notify.o
perf-bench-y += syscall.o
perf-bench-y += mem-functions.o
+perf-bench-y += futex.o
perf-bench-y += futex-hash.o
perf-bench-y += futex-wake.o
perf-bench-y += futex-wake-parallel.o
diff --git a/tools/perf/bench/futex-hash.c b/tools/perf/bench/futex-hash.c
index b472eded521b1..c843bd8543c74 100644
--- a/tools/perf/bench/futex-hash.c
+++ b/tools/perf/bench/futex-hash.c
@@ -18,9 +18,11 @@
#include <stdlib.h>
#include <linux/compiler.h>
#include <linux/kernel.h>
+#include <linux/prctl.h>
#include <linux/zalloc.h>
#include <sys/time.h>
#include <sys/mman.h>
+#include <sys/prctl.h>
#include <perf/cpumap.h>
#include "../util/mutex.h"
@@ -50,9 +52,11 @@ struct worker {
static struct bench_futex_parameters params = {
.nfutexes = 1024,
.runtime = 10,
+ .nbuckets = -1,
};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", ¶ms.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", ¶ms.nthreads, "Specify amount of threads"),
OPT_UINTEGER('r', "runtime", ¶ms.runtime, "Specify runtime (in seconds)"),
OPT_UINTEGER('f', "futexes", ¶ms.nfutexes, "Specify amount of futexes per threads"),
@@ -118,6 +122,7 @@ static void print_summary(void)
printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n",
!params.silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg),
(int)bench__runtime.tv_sec);
+ futex_print_nbuckets(¶ms);
}
int bench_futex_hash(int argc, const char **argv)
@@ -161,6 +166,7 @@ int bench_futex_hash(int argc, const char **argv)
if (!params.fshared)
futex_flag = FUTEX_PRIVATE_FLAG;
+ futex_set_nbuckets_param(¶ms);
printf("Run summary [PID %d]: %d threads, each operating on %d [%s] futexes for %d secs.\n\n",
getpid(), params.nthreads, params.nfutexes, params.fshared ? "shared":"private", params.runtime);
diff --git a/tools/perf/bench/futex-lock-pi.c b/tools/perf/bench/futex-lock-pi.c
index 0416120c091b2..40640b6744279 100644
--- a/tools/perf/bench/futex-lock-pi.c
+++ b/tools/perf/bench/futex-lock-pi.c
@@ -41,10 +41,12 @@ static struct stats throughput_stats;
static struct cond thread_parent, thread_worker;
static struct bench_futex_parameters params = {
+ .nbuckets = -1,
.runtime = 10,
};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", ¶ms.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", ¶ms.nthreads, "Specify amount of threads"),
OPT_UINTEGER('r', "runtime", ¶ms.runtime, "Specify runtime (in seconds)"),
OPT_BOOLEAN( 'M', "multi", ¶ms.multi, "Use multiple futexes"),
@@ -67,6 +69,7 @@ static void print_summary(void)
printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n",
!params.silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg),
(int)bench__runtime.tv_sec);
+ futex_print_nbuckets(¶ms);
}
static void toggle_done(int sig __maybe_unused,
@@ -203,6 +206,7 @@ int bench_futex_lock_pi(int argc, const char **argv)
mutex_init(&thread_lock);
cond_init(&thread_parent);
cond_init(&thread_worker);
+ futex_set_nbuckets_param(¶ms);
threads_starting = params.nthreads;
gettimeofday(&bench__start, NULL);
diff --git a/tools/perf/bench/futex-requeue.c b/tools/perf/bench/futex-requeue.c
index aad5bfc4fe188..0748b0fd689e8 100644
--- a/tools/perf/bench/futex-requeue.c
+++ b/tools/perf/bench/futex-requeue.c
@@ -42,6 +42,7 @@ static unsigned int threads_starting;
static int futex_flag = 0;
static struct bench_futex_parameters params = {
+ .nbuckets = -1,
/*
* How many tasks to requeue at a time.
* Default to 1 in order to make the kernel work more.
@@ -50,6 +51,7 @@ static struct bench_futex_parameters params = {
};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", ¶ms.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", ¶ms.nthreads, "Specify amount of threads"),
OPT_UINTEGER('q', "nrequeue", ¶ms.nrequeue, "Specify amount of threads to requeue at once"),
OPT_BOOLEAN( 's', "silent", ¶ms.silent, "Silent mode: do not display data/details"),
@@ -77,6 +79,7 @@ static void print_summary(void)
params.nthreads,
requeuetime_avg / USEC_PER_MSEC,
rel_stddev_stats(requeuetime_stddev, requeuetime_avg));
+ futex_print_nbuckets(¶ms);
}
static void *workerfn(void *arg __maybe_unused)
@@ -204,6 +207,8 @@ int bench_futex_requeue(int argc, const char **argv)
if (params.broadcast)
params.nrequeue = params.nthreads;
+ futex_set_nbuckets_param(¶ms);
+
printf("Run summary [PID %d]: Requeuing %d threads (from [%s] %p to %s%p), "
"%d at a time.\n\n", getpid(), params.nthreads,
params.fshared ? "shared":"private", &futex1,
diff --git a/tools/perf/bench/futex-wake-parallel.c b/tools/perf/bench/futex-wake-parallel.c
index 4352e318631e9..6aede7c46b337 100644
--- a/tools/perf/bench/futex-wake-parallel.c
+++ b/tools/perf/bench/futex-wake-parallel.c
@@ -57,9 +57,12 @@ static struct stats waketime_stats, wakeup_stats;
static unsigned int threads_starting;
static int futex_flag = 0;
-static struct bench_futex_parameters params;
+static struct bench_futex_parameters params = {
+ .nbuckets = -1,
+};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", ¶ms.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", ¶ms.nthreads, "Specify amount of threads"),
OPT_UINTEGER('w', "nwakers", ¶ms.nwakes, "Specify amount of waking threads"),
OPT_BOOLEAN( 's', "silent", ¶ms.silent, "Silent mode: do not display data/details"),
@@ -218,6 +221,7 @@ static void print_summary(void)
params.nthreads,
waketime_avg / USEC_PER_MSEC,
rel_stddev_stats(waketime_stddev, waketime_avg));
+ futex_print_nbuckets(¶ms);
}
@@ -291,6 +295,8 @@ int bench_futex_wake_parallel(int argc, const char **argv)
if (!params.fshared)
futex_flag = FUTEX_PRIVATE_FLAG;
+ futex_set_nbuckets_param(¶ms);
+
printf("Run summary [PID %d]: blocking on %d threads (at [%s] "
"futex %p), %d threads waking up %d at a time.\n\n",
getpid(), params.nthreads, params.fshared ? "shared":"private",
diff --git a/tools/perf/bench/futex-wake.c b/tools/perf/bench/futex-wake.c
index 49b3c89b0b35d..a31fc1563862e 100644
--- a/tools/perf/bench/futex-wake.c
+++ b/tools/perf/bench/futex-wake.c
@@ -42,6 +42,7 @@ static unsigned int threads_starting;
static int futex_flag = 0;
static struct bench_futex_parameters params = {
+ .nbuckets = -1,
/*
* How many wakeups to do at a time.
* Default to 1 in order to make the kernel work more.
@@ -50,6 +51,7 @@ static struct bench_futex_parameters params = {
};
static const struct option options[] = {
+ OPT_INTEGER( 'b', "buckets", ¶ms.nbuckets, "Specify amount of hash buckets"),
OPT_UINTEGER('t', "threads", ¶ms.nthreads, "Specify amount of threads"),
OPT_UINTEGER('w', "nwakes", ¶ms.nwakes, "Specify amount of threads to wake at once"),
OPT_BOOLEAN( 's', "silent", ¶ms.silent, "Silent mode: do not display data/details"),
@@ -93,6 +95,7 @@ static void print_summary(void)
params.nthreads,
waketime_avg / USEC_PER_MSEC,
rel_stddev_stats(waketime_stddev, waketime_avg));
+ futex_print_nbuckets(¶ms);
}
static void block_threads(pthread_t *w, struct perf_cpu_map *cpu)
diff --git a/tools/perf/bench/futex.c b/tools/perf/bench/futex.c
new file mode 100644
index 0000000000000..8109d6bf3ede2
--- /dev/null
+++ b/tools/perf/bench/futex.c
@@ -0,0 +1,58 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <err.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <linux/prctl.h>
+#include <sys/prctl.h>
+
+#include "futex.h"
+
+void futex_set_nbuckets_param(struct bench_futex_parameters *params)
+{
+ int ret;
+
+ if (params->nbuckets < 0)
+ return;
+
+ ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, params->nbuckets);
+ if (ret) {
+ printf("Requesting %d hash buckets failed: %d/%m\n",
+ params->nbuckets, ret);
+ err(EXIT_FAILURE, "prctl(PR_FUTEX_HASH)");
+ }
+}
+
+void futex_print_nbuckets(struct bench_futex_parameters *params)
+{
+ char *futex_hash_mode;
+ int ret;
+
+ ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_SLOTS);
+ if (params->nbuckets >= 0) {
+ if (ret != params->nbuckets) {
+ if (ret < 0) {
+ printf("Can't query number of buckets: %d/%m\n", ret);
+ err(EXIT_FAILURE, "prctl(PR_FUTEX_HASH)");
+ }
+ printf("Requested number of hash buckets is not currently used.\n");
+ printf("Requested: %d, in use: %d\n", params->nbuckets, ret);
+ err(EXIT_FAILURE, "prctl(PR_FUTEX_HASH)");
+ }
+ if (params->nbuckets == 0)
+ ret = asprintf(&futex_hash_mode, "Futex hashing: global hash");
+ else
+ ret = asprintf(&futex_hash_mode, "Futex hashing: %d hash buckets",
+ params->nbuckets);
+ } else {
+ if (ret <= 0) {
+ ret = asprintf(&futex_hash_mode, "Futex hashing: global hash");
+ } else {
+ ret = asprintf(&futex_hash_mode, "Futex hashing: auto resized to %d buckets",
+ ret);
+ }
+ }
+ if (ret < 0)
+ err(EXIT_FAILURE, "ENOMEM, futex_hash_mode");
+ printf("%s\n", futex_hash_mode);
+ free(futex_hash_mode);
+}
diff --git a/tools/perf/bench/futex.h b/tools/perf/bench/futex.h
index ebdc2b032afc1..dd295d27044ac 100644
--- a/tools/perf/bench/futex.h
+++ b/tools/perf/bench/futex.h
@@ -25,6 +25,7 @@ struct bench_futex_parameters {
unsigned int nfutexes;
unsigned int nwakes;
unsigned int nrequeue;
+ int nbuckets;
};
/**
@@ -143,4 +144,7 @@ futex_cmp_requeue_pi(u_int32_t *uaddr, u_int32_t val, u_int32_t *uaddr2,
val, opflags);
}
+void futex_set_nbuckets_param(struct bench_futex_parameters *params);
+void futex_print_nbuckets(struct bench_futex_parameters *params);
+
#endif /* _FUTEX_H */
--
2.49.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* [PATCH v11 19/19] futex: Allow to make the private hash immutable.
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (17 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 18/19] tools/perf: Allow to select the number of hash buckets Sebastian Andrzej Siewior
@ 2025-04-07 15:57 ` Sebastian Andrzej Siewior
2025-04-10 10:56 ` Sebastian Andrzej Siewior
2025-04-10 14:52 ` Shrikanth Hegde
2025-04-07 16:00 ` [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (2 subsequent siblings)
21 siblings, 2 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 15:57 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Sebastian Andrzej Siewior, Liang, Kan, Adrian Hunter,
Alexander Shishkin, Arnaldo Carvalho de Melo, Ian Rogers,
Jiri Olsa, Mark Rutland, Namhyung Kim, linux-perf-users
My initial testing showed that
perf bench futex hash
reported fewer operations/sec with the private hash. After using the
same number of buckets in the private hash as in the global hash, the
operations/sec were about the same.
This changed once the private hash became resizable. This feature added
an RCU section and reference counting via atomic inc+dec operations to
the hot path.
The reference counting can be avoided if the private hash is made
immutable.
Extend PR_FUTEX_HASH_SET_SLOTS with a fourth argument which denotes
whether the private hash should be made immutable. Once set (to true), a
further resize is not allowed (the same applies if set to the global hash).
Add PR_FUTEX_HASH_GET_IMMUTABLE which returns true if the hash cannot
be changed.
Update "perf bench" suite.
For comparison, results of "perf bench futex hash -s":
- Xeon CPU E5-2650, 2 NUMA nodes, total 32 CPUs:
- Before introducing the task local hash
shared Averaged 1.487.148 operations/sec (+- 0,53%), total secs = 10
private Averaged 2.192.405 operations/sec (+- 0,07%), total secs = 10
- With the series
shared Averaged 1.326.342 operations/sec (+- 0,41%), total secs = 10
-b128 Averaged 141.394 operations/sec (+- 1,15%), total secs = 10
-Ib128 Averaged 851.490 operations/sec (+- 0,67%), total secs = 10
-b8192 Averaged 131.321 operations/sec (+- 2,13%), total secs = 10
-Ib8192 Averaged 1.923.077 operations/sec (+- 0,61%), total secs = 10
128 is the default allocation of hash buckets.
8192 was the previous amount of allocated hash buckets.
- Xeon(R) CPU E7-8890 v3, 4 NUMA nodes, total 144 CPUs:
- Before introducing the task local hash
shared Averaged 1.810.936 operations/sec (+- 0,26%), total secs = 20
private Averaged 2.505.801 operations/sec (+- 0,05%), total secs = 20
- With the series
shared Averaged 1.589.002 operations/sec (+- 0,25%), total secs = 20
-b1024 Averaged 42.410 operations/sec (+- 0,20%), total secs = 20
-Ib1024 Averaged 740.638 operations/sec (+- 1,51%), total secs = 20
-b65536 Averaged 48.811 operations/sec (+- 1,35%), total secs = 20
-Ib65536 Averaged 1.963.165 operations/sec (+- 0,18%), total secs = 20
1024 is the default allocation of hash buckets.
65536 was the previous amount of allocated hash buckets.
Cc: "Liang, Kan" <kan.liang@linux.intel.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
Cc: Ian Rogers <irogers@google.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: linux-perf-users@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
include/linux/futex.h | 2 +-
include/uapi/linux/prctl.h | 1 +
kernel/futex/core.c | 42 ++++++++++++++++++++++----
kernel/sys.c | 2 +-
tools/include/uapi/linux/prctl.h | 1 +
tools/perf/bench/futex-hash.c | 1 +
tools/perf/bench/futex-lock-pi.c | 1 +
tools/perf/bench/futex-requeue.c | 1 +
tools/perf/bench/futex-wake-parallel.c | 1 +
tools/perf/bench/futex-wake.c | 1 +
tools/perf/bench/futex.c | 8 +++--
tools/perf/bench/futex.h | 1 +
12 files changed, 51 insertions(+), 11 deletions(-)
diff --git a/include/linux/futex.h b/include/linux/futex.h
index ee48dcfbfe59d..96c7229856d97 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -80,7 +80,7 @@ void futex_exec_release(struct task_struct *tsk);
long do_futex(u32 __user *uaddr, int op, u32 val, ktime_t *timeout,
u32 __user *uaddr2, u32 val2, u32 val3);
-int futex_hash_prctl(unsigned long arg2, unsigned long arg3);
+int futex_hash_prctl(unsigned long arg2, unsigned long arg3, unsigned long arg4);
#ifdef CONFIG_FUTEX_PRIVATE_HASH
int futex_hash_allocate_default(void);
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 3b93fb906e3c5..21f30b3ded74b 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -368,5 +368,6 @@ struct prctl_mm_map {
#define PR_FUTEX_HASH 78
# define PR_FUTEX_HASH_SET_SLOTS 1
# define PR_FUTEX_HASH_GET_SLOTS 2
+# define PR_FUTEX_HASH_GET_IMMUTABLE 3
#endif /* _LINUX_PRCTL_H */
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 5b8609c8729e7..44bb9eeb0a9c1 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -70,6 +70,7 @@ struct futex_private_hash {
struct rcu_head rcu;
void *mm;
bool custom;
+ bool immutable;
struct futex_hash_bucket queues[];
};
@@ -139,12 +140,16 @@ static inline bool futex_key_is_private(union futex_key *key)
bool futex_private_hash_get(struct futex_private_hash *fph)
{
+ if (fph->immutable)
+ return true;
return rcuref_get(&fph->users);
}
void futex_private_hash_put(struct futex_private_hash *fph)
{
/* Ignore return value, last put is verified via rcuref_is_dead() */
+ if (fph->immutable)
+ return;
if (rcuref_put(&fph->users))
wake_up_var(fph->mm);
}
@@ -284,6 +289,8 @@ struct futex_private_hash *futex_private_hash(void)
if (!fph)
return NULL;
+ if (fph->immutable)
+ return fph;
if (rcuref_get(&fph->users))
return fph;
}
@@ -1558,7 +1565,7 @@ static bool futex_hash_less(struct futex_private_hash *a,
return false; /* equal */
}
-static int futex_hash_allocate(unsigned int hash_slots, bool custom)
+static int futex_hash_allocate(unsigned int hash_slots, unsigned int immutable, bool custom)
{
struct mm_struct *mm = current->mm;
struct futex_private_hash *fph;
@@ -1572,7 +1579,7 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
*/
scoped_guard(rcu) {
fph = rcu_dereference(mm->futex_phash);
- if (fph && !fph->hash_mask) {
+ if (fph && (!fph->hash_mask || fph->immutable)) {
if (custom)
return -EBUSY;
return 0;
@@ -1586,6 +1593,7 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
rcuref_init(&fph->users, 1);
fph->hash_mask = hash_slots ? hash_slots - 1 : 0;
fph->custom = custom;
+ fph->immutable = !!immutable;
fph->mm = mm;
for (i = 0; i < hash_slots; i++)
@@ -1678,7 +1686,7 @@ int futex_hash_allocate_default(void)
if (current_buckets >= buckets)
return 0;
- return futex_hash_allocate(buckets, false);
+ return futex_hash_allocate(buckets, 0, false);
}
static int futex_hash_get_slots(void)
@@ -1692,9 +1700,22 @@ static int futex_hash_get_slots(void)
return 0;
}
+static int futex_hash_get_immutable(void)
+{
+ struct futex_private_hash *fph;
+
+ guard(rcu)();
+ fph = rcu_dereference(current->mm->futex_phash);
+ if (fph && fph->immutable)
+ return 1;
+ if (fph && !fph->hash_mask)
+ return 1;
+ return 0;
+}
+
#else
-static int futex_hash_allocate(unsigned int hash_slots, bool custom)
+static int futex_hash_allocate(unsigned int hash_slots, unsigned int immutable, bool custom)
{
return -EINVAL;
}
@@ -1703,21 +1724,30 @@ static int futex_hash_get_slots(void)
{
return 0;
}
+
+static int futex_hash_get_immutable(void)
+{
+ return 0;
+}
#endif
-int futex_hash_prctl(unsigned long arg2, unsigned long arg3)
+int futex_hash_prctl(unsigned long arg2, unsigned long arg3, unsigned long arg4)
{
int ret;
switch (arg2) {
case PR_FUTEX_HASH_SET_SLOTS:
- ret = futex_hash_allocate(arg3, true);
+ ret = futex_hash_allocate(arg3, arg4, true);
break;
case PR_FUTEX_HASH_GET_SLOTS:
ret = futex_hash_get_slots();
break;
+ case PR_FUTEX_HASH_GET_IMMUTABLE:
+ ret = futex_hash_get_immutable();
+ break;
+
default:
ret = -EINVAL;
break;
diff --git a/kernel/sys.c b/kernel/sys.c
index d446d8ecb0b33..adc0de0aa364a 100644
--- a/kernel/sys.c
+++ b/kernel/sys.c
@@ -2822,7 +2822,7 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
error = posixtimer_create_prctl(arg2);
break;
case PR_FUTEX_HASH:
- error = futex_hash_prctl(arg2, arg3);
+ error = futex_hash_prctl(arg2, arg3, arg4);
break;
default:
trace_task_prctl_unknown(option, arg2, arg3, arg4, arg5);
diff --git a/tools/include/uapi/linux/prctl.h b/tools/include/uapi/linux/prctl.h
index 3b93fb906e3c5..21f30b3ded74b 100644
--- a/tools/include/uapi/linux/prctl.h
+++ b/tools/include/uapi/linux/prctl.h
@@ -368,5 +368,6 @@ struct prctl_mm_map {
#define PR_FUTEX_HASH 78
# define PR_FUTEX_HASH_SET_SLOTS 1
# define PR_FUTEX_HASH_GET_SLOTS 2
+# define PR_FUTEX_HASH_GET_IMMUTABLE 3
#endif /* _LINUX_PRCTL_H */
diff --git a/tools/perf/bench/futex-hash.c b/tools/perf/bench/futex-hash.c
index c843bd8543c74..fdf133c9520f7 100644
--- a/tools/perf/bench/futex-hash.c
+++ b/tools/perf/bench/futex-hash.c
@@ -57,6 +57,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", ¶ms.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", ¶ms.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", ¶ms.nthreads, "Specify amount of threads"),
OPT_UINTEGER('r', "runtime", ¶ms.runtime, "Specify runtime (in seconds)"),
OPT_UINTEGER('f', "futexes", ¶ms.nfutexes, "Specify amount of futexes per threads"),
diff --git a/tools/perf/bench/futex-lock-pi.c b/tools/perf/bench/futex-lock-pi.c
index 40640b6744279..5144a158512cc 100644
--- a/tools/perf/bench/futex-lock-pi.c
+++ b/tools/perf/bench/futex-lock-pi.c
@@ -47,6 +47,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", ¶ms.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", ¶ms.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", ¶ms.nthreads, "Specify amount of threads"),
OPT_UINTEGER('r', "runtime", ¶ms.runtime, "Specify runtime (in seconds)"),
OPT_BOOLEAN( 'M', "multi", ¶ms.multi, "Use multiple futexes"),
diff --git a/tools/perf/bench/futex-requeue.c b/tools/perf/bench/futex-requeue.c
index 0748b0fd689e8..a2f91ee1950b3 100644
--- a/tools/perf/bench/futex-requeue.c
+++ b/tools/perf/bench/futex-requeue.c
@@ -52,6 +52,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", ¶ms.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", ¶ms.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", ¶ms.nthreads, "Specify amount of threads"),
OPT_UINTEGER('q', "nrequeue", ¶ms.nrequeue, "Specify amount of threads to requeue at once"),
OPT_BOOLEAN( 's', "silent", ¶ms.silent, "Silent mode: do not display data/details"),
diff --git a/tools/perf/bench/futex-wake-parallel.c b/tools/perf/bench/futex-wake-parallel.c
index 6aede7c46b337..ee66482c29fd1 100644
--- a/tools/perf/bench/futex-wake-parallel.c
+++ b/tools/perf/bench/futex-wake-parallel.c
@@ -63,6 +63,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", &params.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('w', "nwakers", &params.nwakes, "Specify amount of waking threads"),
OPT_BOOLEAN( 's', "silent", &params.silent, "Silent mode: do not display data/details"),
diff --git a/tools/perf/bench/futex-wake.c b/tools/perf/bench/futex-wake.c
index a31fc1563862e..8d6107f7cd941 100644
--- a/tools/perf/bench/futex-wake.c
+++ b/tools/perf/bench/futex-wake.c
@@ -52,6 +52,7 @@ static struct bench_futex_parameters params = {
static const struct option options[] = {
OPT_INTEGER( 'b', "buckets", &params.nbuckets, "Specify amount of hash buckets"),
+ OPT_BOOLEAN( 'I', "immutable", &params.buckets_immutable, "Make the hash buckets immutable"),
OPT_UINTEGER('t', "threads", &params.nthreads, "Specify amount of threads"),
OPT_UINTEGER('w', "nwakes", &params.nwakes, "Specify amount of threads to wake at once"),
OPT_BOOLEAN( 's', "silent", &params.silent, "Silent mode: do not display data/details"),
diff --git a/tools/perf/bench/futex.c b/tools/perf/bench/futex.c
index 8109d6bf3ede2..bed3b6e46d109 100644
--- a/tools/perf/bench/futex.c
+++ b/tools/perf/bench/futex.c
@@ -14,7 +14,7 @@ void futex_set_nbuckets_param(struct bench_futex_parameters *params)
if (params->nbuckets < 0)
return;
- ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, params->nbuckets);
+ ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_SET_SLOTS, params->nbuckets, params->buckets_immutable);
if (ret) {
printf("Requesting %d hash buckets failed: %d/%m\n",
params->nbuckets, ret);
@@ -38,11 +38,13 @@ void futex_print_nbuckets(struct bench_futex_parameters *params)
printf("Requested: %d in usage: %d\n", params->nbuckets, ret);
err(EXIT_FAILURE, "prctl(PR_FUTEX_HASH)");
}
+ ret = prctl(PR_FUTEX_HASH, PR_FUTEX_HASH_GET_IMMUTABLE);
if (params->nbuckets == 0)
ret = asprintf(&futex_hash_mode, "Futex hashing: global hash");
else
- ret = asprintf(&futex_hash_mode, "Futex hashing: %d hash buckets",
- params->nbuckets);
+ ret = asprintf(&futex_hash_mode, "Futex hashing: %d hash buckets %s",
+ params->nbuckets,
+ ret == 1 ? "(immutable)" : "");
} else {
if (ret <= 0) {
ret = asprintf(&futex_hash_mode, "Futex hashing: global hash");
diff --git a/tools/perf/bench/futex.h b/tools/perf/bench/futex.h
index dd295d27044ac..9c9a73f9d865e 100644
--- a/tools/perf/bench/futex.h
+++ b/tools/perf/bench/futex.h
@@ -26,6 +26,7 @@ struct bench_futex_parameters {
unsigned int nwakes;
unsigned int nrequeue;
int nbuckets;
+ bool buckets_immutable;
};
/**
--
2.49.0
^ permalink raw reply related [flat|nested] 31+ messages in thread
* Re: [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (18 preceding siblings ...)
2025-04-07 15:57 ` [PATCH v11 19/19] futex: Allow to make the private hash immutable Sebastian Andrzej Siewior
@ 2025-04-07 16:00 ` Sebastian Andrzej Siewior
2025-04-08 13:51 ` André Almeida
2025-04-10 17:51 ` Shrikanth Hegde
21 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 16:00 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long
On 2025-04-07 17:57:23 [+0200], To linux-kernel@vger.kernel.org wrote:
> This is the local hash map series with PeterZ FUTEX2_NUMA and
> FUTEX2_MPOL plus a few fixes on top.
…
> v10…v11: https://lore.kernel.org/all/20250312151634.2183278-1-bigeasy@linutronix.de
> - PeterZ' fixups, changes to the local hash series have been folded
> into the earlier patches so things are not added and renamed later
> and the functionality is changed.
For easier comparison, here is a diff vs v10 excluding patches 17+:
diff --git a/include/linux/futex.h b/include/linux/futex.h
index 19c37afa0432a..ee48dcfbfe59d 100644
--- a/include/linux/futex.h
+++ b/include/linux/futex.h
@@ -4,11 +4,11 @@
#include <linux/sched.h>
#include <linux/ktime.h>
+#include <linux/mm_types.h>
#include <uapi/linux/futex.h>
struct inode;
-struct mm_struct;
struct task_struct;
/*
diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 09c3e3e33f1f8..de95794777ad6 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -168,12 +168,14 @@ void *__vmalloc_node_noprof(unsigned long size, unsigned long align, gfp_t gfp_m
int node, const void *caller) __alloc_size(1);
#define __vmalloc_node(...) alloc_hooks(__vmalloc_node_noprof(__VA_ARGS__))
-void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) __alloc_size(1);
-#define vmalloc_huge(...) alloc_hooks(vmalloc_huge_noprof(__VA_ARGS__))
-
void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node) __alloc_size(1);
#define vmalloc_huge_node(...) alloc_hooks(vmalloc_huge_node_noprof(__VA_ARGS__))
+static inline void *vmalloc_huge(unsigned long size, gfp_t gfp_mask)
+{
+ return vmalloc_huge_node(size, gfp_mask, NUMA_NO_NODE);
+}
+
extern void *__vmalloc_array_noprof(size_t n, size_t size, gfp_t flags) __alloc_size(1, 2);
#define __vmalloc_array(...) alloc_hooks(__vmalloc_array_noprof(__VA_ARGS__))
diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
index 55b843644c51a..2344c1feaa4e3 100644
--- a/include/uapi/linux/prctl.h
+++ b/include/uapi/linux/prctl.h
@@ -354,7 +354,7 @@ struct prctl_mm_map {
#define PR_LOCK_SHADOW_STACK_STATUS 76
/* FUTEX hash management */
-#define PR_FUTEX_HASH 77
+#define PR_FUTEX_HASH 78
# define PR_FUTEX_HASH_SET_SLOTS 1
# define PR_FUTEX_HASH_GET_SLOTS 2
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index 65523f3cfe32e..2f2a92c791def 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -124,6 +124,10 @@ late_initcall(fail_futex_debugfs);
#endif /* CONFIG_FAIL_FUTEX */
+static struct futex_hash_bucket *
+__futex_hash(union futex_key *key, struct futex_private_hash *fph);
+
+#ifdef CONFIG_FUTEX_PRIVATE_HASH
static inline bool futex_key_is_private(union futex_key *key)
{
/*
@@ -133,10 +137,43 @@ static inline bool futex_key_is_private(union futex_key *key)
return !(key->both.offset & (FUT_OFF_INODE | FUT_OFF_MMSHARED));
}
-static struct futex_hash_bucket *
-__futex_hash(union futex_key *key, struct futex_private_hash *fph);
+bool futex_private_hash_get(struct futex_private_hash *fph)
+{
+ return rcuref_get(&fph->users);
+}
+
+void futex_private_hash_put(struct futex_private_hash *fph)
+{
+ /* Ignore return value, last put is verified via rcuref_is_dead() */
+ if (rcuref_put(&fph->users))
+ wake_up_var(fph->mm);
+}
+
+/**
+ * futex_hash_get - Get an additional reference for the local hash.
+ * @hb: ptr to the private local hash.
+ *
+ * Obtain an additional reference for the already obtained hash bucket. The
+ * caller must already own an reference.
+ */
+void futex_hash_get(struct futex_hash_bucket *hb)
+{
+ struct futex_private_hash *fph = hb->priv;
+
+ if (!fph)
+ return;
+ WARN_ON_ONCE(!futex_private_hash_get(fph));
+}
+
+void futex_hash_put(struct futex_hash_bucket *hb)
+{
+ struct futex_private_hash *fph = hb->priv;
+
+ if (!fph)
+ return;
+ futex_private_hash_put(fph);
+}
-#ifdef CONFIG_FUTEX_PRIVATE_HASH
static struct futex_hash_bucket *
__futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
{
@@ -210,13 +247,12 @@ static bool __futex_pivot_hash(struct mm_struct *mm,
}
rcu_assign_pointer(mm->futex_phash, new);
kvfree_rcu(fph, rcu);
- wake_up_var(mm);
return true;
}
static void futex_pivot_hash(struct mm_struct *mm)
{
- scoped_guard (mutex, &mm->futex_hash_lock) {
+ scoped_guard(mutex, &mm->futex_hash_lock) {
struct futex_private_hash *fph;
fph = mm->futex_phash_new;
@@ -255,28 +291,13 @@ struct futex_private_hash *futex_private_hash(void)
goto again;
}
-bool futex_private_hash_get(struct futex_private_hash *fph)
-{
- return rcuref_get(&fph->users);
-}
-
-void futex_private_hash_put(struct futex_private_hash *fph)
-{
- /*
- * Ignore the result; the DEAD state is picked up
- * when rcuref_get() starts failing via rcuref_is_dead().
- */
- if (rcuref_put(&fph->users))
- wake_up_var(fph->mm);
-}
-
struct futex_hash_bucket *futex_hash(union futex_key *key)
{
struct futex_private_hash *fph;
struct futex_hash_bucket *hb;
again:
- scoped_guard (rcu) {
+ scoped_guard(rcu) {
hb = __futex_hash(key, NULL);
fph = hb->priv;
@@ -287,27 +308,9 @@ struct futex_hash_bucket *futex_hash(union futex_key *key)
goto again;
}
-void futex_hash_get(struct futex_hash_bucket *hb)
-{
- struct futex_private_hash *fph = hb->priv;
-
- if (!fph)
- return;
- WARN_ON_ONCE(!futex_private_hash_get(fph));
-}
-
-void futex_hash_put(struct futex_hash_bucket *hb)
-{
- struct futex_private_hash *fph = hb->priv;
-
- if (!fph)
- return;
- futex_private_hash_put(fph);
-}
-
#else /* !CONFIG_FUTEX_PRIVATE_HASH */
-static inline struct futex_hash_bucket *
+static struct futex_hash_bucket *
__futex_hash_private(union futex_key *key, struct futex_private_hash *fph)
{
return NULL;
@@ -388,12 +391,15 @@ static int futex_mpol(struct mm_struct *mm, unsigned long addr)
#endif /* CONFIG_FUTEX_MPOL */
/**
- * futex_hash - Return the hash bucket in the global hash
+ * __futex_hash - Return the hash bucket
* @key: Pointer to the futex key for which the hash is calculated
+ * @fph: Pointer to private hash if known
*
* We hash on the keys returned from get_futex_key (see below) and return the
- * corresponding hash bucket in the global hash. If the FUTEX is private and
- * a local hash table is privated then this one is used.
+ * corresponding hash bucket.
+ * If the FUTEX is PROCESS_PRIVATE then a per-process hash bucket (from the
+ * private hash) is returned if existing. Otherwise a hash bucket from the
+ * global hash is returned.
*/
static struct futex_hash_bucket *
__futex_hash(union futex_key *key, struct futex_private_hash *fph)
@@ -1522,10 +1528,10 @@ static bool futex_pivot_pending(struct mm_struct *mm)
guard(rcu)();
if (!mm->futex_phash_new)
- return false;
+ return true;
fph = rcu_dereference(mm->futex_phash);
- return !rcuref_read(&fph->users);
+ return rcuref_is_dead(&fph->users);
}
static bool futex_hash_less(struct futex_private_hash *a,
@@ -1564,7 +1570,7 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
/*
* Once we've disabled the global hash there is no way back.
*/
- scoped_guard (rcu) {
+ scoped_guard(rcu) {
fph = rcu_dereference(mm->futex_phash);
if (fph && !fph->hash_mask) {
if (custom)
@@ -1631,7 +1637,7 @@ static int futex_hash_allocate(unsigned int hash_slots, bool custom)
if (new) {
/*
* Will set mm->futex_phash_new on failure;
- * futex_get_private_hash() will try again.
+ * futex_private_hash_get() will try again.
*/
if (!__futex_pivot_hash(mm, new) && custom)
goto again;
@@ -1681,7 +1687,7 @@ static int futex_hash_get_slots(void)
guard(rcu)();
fph = rcu_dereference(current->mm->futex_phash);
- if (fph)
+ if (fph && fph->hash_mask)
return fph->hash_mask + 1;
return 0;
}
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 52e9c0c4b6c87..004e4dbee4f93 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -222,7 +222,6 @@ futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
int flags, u64 range_ns);
extern struct futex_hash_bucket *futex_hash(union futex_key *key);
-
#ifdef CONFIG_FUTEX_PRIVATE_HASH
extern void futex_hash_get(struct futex_hash_bucket *hb);
extern void futex_hash_put(struct futex_hash_bucket *hb);
@@ -234,15 +233,8 @@ extern void futex_private_hash_put(struct futex_private_hash *fph);
#else /* !CONFIG_FUTEX_PRIVATE_HASH */
static inline void futex_hash_get(struct futex_hash_bucket *hb) { }
static inline void futex_hash_put(struct futex_hash_bucket *hb) { }
-
-static inline struct futex_private_hash *futex_private_hash(void)
-{
- return NULL;
-}
-static inline bool futex_private_hash_get(struct futex_private_hash *fph)
-{
- return false;
-}
+static inline struct futex_private_hash *futex_private_hash(void) { return NULL; }
+static inline bool futex_private_hash_get(void) { return false; }
static inline void futex_private_hash_put(struct futex_private_hash *fph) { }
#endif
diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index 51c69e8808152..356e52c17d3c5 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -1042,7 +1042,7 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
cleanup:
/*
* If we failed to acquire the lock (deadlock/signal/timeout), we must
- * must unwind the above, however we canont lock hb->lock because
+ * unwind the above, however we canont lock hb->lock because
* rt_mutex already has a waiter enqueued and hb->lock can itself try
* and enqueue an rt_waiter through rtlock.
*
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 74647f6bf75de..bd8fef0f8d180 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -297,7 +297,7 @@ int futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
}
plist_for_each_entry_safe(this, next, &hb1->chain, list) {
- if (futex_match (&this->key, &key1)) {
+ if (futex_match(&this->key, &key1)) {
if (this->pi_state || this->rt_waiter) {
ret = -EINVAL;
goto out_unlock;
@@ -311,7 +311,7 @@ int futex_wake_op(u32 __user *uaddr1, unsigned int flags, u32 __user *uaddr2,
if (op_ret > 0) {
op_ret = 0;
plist_for_each_entry_safe(this, next, &hb2->chain, list) {
- if (futex_match (&this->key, &key2)) {
+ if (futex_match(&this->key, &key2)) {
if (this->pi_state || this->rt_waiter) {
ret = -EINVAL;
goto out_unlock;
@@ -385,7 +385,7 @@ int futex_unqueue_multiple(struct futex_vector *v, int count)
}
/**
- * __futex_wait_multiple_setup - Prepare to wait and enqueue multiple futexes
+ * futex_wait_multiple_setup - Prepare to wait and enqueue multiple futexes
* @vs: The futex list to wait on
* @count: The size of the list
* @woken: Index of the last woken futex, if any. Used to notify the
diff --git a/mm/nommu.c b/mm/nommu.c
index d04e601a8f4d7..aed58ea7398db 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -207,11 +207,22 @@ void *vmalloc_noprof(unsigned long size)
}
EXPORT_SYMBOL(vmalloc_noprof);
-void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask) __weak __alias(__vmalloc_noprof);
-
+/*
+ * vmalloc_huge_node - allocate virtually contiguous memory, on a node
+ *
+ * @size: allocation size
+ * @gfp_mask: flags for the page level allocator
+ * @node: node to use for allocation or NUMA_NO_NODE
+ *
+ * Allocate enough pages to cover @size from the page level
+ * allocator and map them into contiguous kernel virtual space.
+ *
+ * Due to NOMMU implications the node argument and HUGE page attribute is
+ * ignored.
+ */
void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node)
{
- return vmalloc_huge_noprof(size, gfp_mask);
+ return __vmalloc_noprof(size, gfp_mask);
}
/*
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 69247b46413ca..0e2c49aaf84f1 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3947,9 +3947,10 @@ void *vmalloc_noprof(unsigned long size)
EXPORT_SYMBOL(vmalloc_noprof);
/**
- * vmalloc_huge - allocate virtually contiguous memory, allow huge pages
+ * vmalloc_huge_node - allocate virtually contiguous memory, allow huge pages
* @size: allocation size
* @gfp_mask: flags for the page level allocator
+ * @node: node to use for allocation or NUMA_NO_NODE
*
* Allocate enough pages to cover @size from the page level
* allocator and map them into contiguous kernel virtual space.
@@ -3958,20 +3959,13 @@ EXPORT_SYMBOL(vmalloc_noprof);
*
* Return: pointer to the allocated memory or %NULL on error
*/
-void *vmalloc_huge_noprof(unsigned long size, gfp_t gfp_mask)
-{
- return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
- gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
- NUMA_NO_NODE, __builtin_return_address(0));
-}
-EXPORT_SYMBOL_GPL(vmalloc_huge_noprof);
-
void *vmalloc_huge_node_noprof(unsigned long size, gfp_t gfp_mask, int node)
{
return __vmalloc_node_range_noprof(size, 1, VMALLOC_START, VMALLOC_END,
gfp_mask, PAGE_KERNEL, VM_ALLOW_HUGE_VMAP,
node, __builtin_return_address(0));
}
+EXPORT_SYMBOL_GPL(vmalloc_huge_node_noprof);
/**
* vzalloc - allocate virtually contiguous memory with zero fill
* Re: [PATCH v11 15/19] futex: Implement FUTEX2_NUMA
2025-04-07 15:57 ` [PATCH v11 15/19] futex: Implement FUTEX2_NUMA Sebastian Andrzej Siewior
@ 2025-04-07 16:52 ` Sebastian Andrzej Siewior
2025-04-17 15:34 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-07 16:52 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long
On 2025-04-07 17:57:38 [+0200], To linux-kernel@vger.kernel.org wrote:
> --- a/kernel/futex/core.c
> +++ b/kernel/futex/core.c
> @@ -332,15 +337,35 @@ __futex_hash(union futex_key *key, struct futex_private_hash *fph)
…
> + if (node == FUTEX_NO_NODE) {
> + /*
> + * In case of !FLAGS_NUMA, use some unused hash bits to pick a
> + * node -- this ensures regular futexes are interleaved across
> + * the nodes and avoids having to allocate multiple
> + * hash-tables.
> + *
> + * NOTE: this isn't perfectly uniform, but it is fast and
> + * handles sparse node masks.
> + */
> + node = (hash >> futex_hashshift) % nr_node_ids;
Forgot to mention earlier: this % nr_node_ids turns into a div instruction
and is visible in perf top while looking at __futex_hash(). We could round
it down to a power of two (which should be the case in my 1-, 2- and 4-node
NUMA world) and then use AND instead.
ARM does not support NUMA or div so it is not a concern.
Maybe a fast path for 1/2/4 nodes would make sense since those are the most
common cases. In case you consider it I could run tests to see how
significant it is. It might be that it pops up in "perf bench futex hash"
but is not significant in the general use case. I had some hacks and those
did not improve the numbers as much as I hoped.
> + if (!node_possible(node)) {
> + node = find_next_bit_wrap(node_possible_map.bits,
> + nr_node_ids, node);
> + }
> + }
> +
> + return &futex_queues[node][hash & futex_hashmask];
> }
>
> /**
Sebastian
* Re: [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (19 preceding siblings ...)
2025-04-07 16:00 ` [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
@ 2025-04-08 13:51 ` André Almeida
2025-04-08 16:13 ` Sebastian Andrzej Siewior
2025-04-10 17:51 ` Shrikanth Hegde
21 siblings, 1 reply; 31+ messages in thread
From: André Almeida @ 2025-04-08 13:51 UTC (permalink / raw)
To: Sebastian Andrzej Siewior, linux-kernel
Cc: Darren Hart, Davidlohr Bueso, Ingo Molnar, Juri Lelli,
Peter Zijlstra, Thomas Gleixner, Valentin Schneider, Waiman Long
Hi Sebastian,
Thanks for your patchset. I think the perf support is great, but usually
those new uAPI options should come with some selftests too.
Em 07/04/2025 12:57, Sebastian Andrzej Siewior escreveu:
> this is a follow up on
> https://lore.kernel.org/ZwVOMgBMxrw7BU9A@jlelli-thinkpadt14gen4.remote.csb
>
> and adds support for task local futex_hash_bucket.
>
> This is the local hash map series with PeterZ FUTEX2_NUMA and
> FUTEX2_MPOL plus a few fixes on top.
>
> The complete tree is at
> https://git.kernel.org/pub/scm/linux/kernel/git/bigeasy/staging.git/log/?h=futex_local_v11
> https://git.kernel.org/pub/scm/linux/kernel/git/bigeasy/staging.git futex_local_v11
>
> v10…v11: https://lore.kernel.org/all/20250312151634.2183278-1-bigeasy@linutronix.de
> - PeterZ' fixups, changes to the local hash series have been folded
> into the earlier patches so things are not added and renamed later
> and the functionality is changed.
>
> - vmalloc_huge() has been implemented on top of vmalloc_huge_node()
> and the NOMMU bots have been adjusted. akpm asked for this.
>
> - wake_up_var() has been removed from __futex_pivot_hash(). It is
> enough to wake the userspace waiter after the final put so it can
> perform the resize itself.
>
> - Changed to logic in futex_pivot_pending() so it does not block for
> the user. It waits for __futex_pivot_hash() which follows the logic
> in __futex_pivot_hash().
>
> - Updated kernel doc for __futex_hash().
>
> - Patches 17+ are new:
> - Wire up PR_FUTEX_HASH_SET_SLOTS in "perf bench futex"
> - Add "immutable" mode to PR_FUTEX_HASH_SET_SLOTS to avoid resizing
> the local hash any further. This avoids rcuref usage which is
> noticeable in "perf bench futex hash"
>
> Peter Zijlstra (8):
> mm: Add vmalloc_huge_node()
> futex: Move futex_queue() into futex_wait_setup()
> futex: Pull futex_hash() out of futex_q_lock()
> futex: Create hb scopes
> futex: Create futex_hash() get/put class
> futex: Create private_hash() get/put class
> futex: Implement FUTEX2_NUMA
> futex: Implement FUTEX2_MPOL
>
> Sebastian Andrzej Siewior (11):
> rcuref: Provide rcuref_is_dead().
> futex: Acquire a hash reference in futex_wait_multiple_setup().
> futex: Decrease the waiter count before the unlock operation.
> futex: Introduce futex_q_lockptr_lock().
> futex: Create helper function to initialize a hash slot.
> futex: Add basic infrastructure for local task local hash.
> futex: Allow automatic allocation of process wide futex hash.
> futex: Allow to resize the private local hash.
> tools headers: Synchronize prctl.h ABI header
> tools/perf: Allow to select the number of hash buckets.
> futex: Allow to make the private hash immutable.
>
> include/linux/futex.h | 36 +-
> include/linux/mm_types.h | 7 +-
> include/linux/mmap_lock.h | 4 +
> include/linux/rcuref.h | 22 +-
> include/linux/vmalloc.h | 9 +-
> include/uapi/linux/futex.h | 10 +-
> include/uapi/linux/prctl.h | 6 +
> init/Kconfig | 10 +
> io_uring/futex.c | 4 +-
> kernel/fork.c | 24 +
> kernel/futex/core.c | 794 ++++++++++++++++++++++---
> kernel/futex/futex.h | 73 ++-
> kernel/futex/pi.c | 300 +++++-----
> kernel/futex/requeue.c | 480 +++++++--------
> kernel/futex/waitwake.c | 201 ++++---
> kernel/sys.c | 4 +
> mm/nommu.c | 18 +-
> mm/vmalloc.c | 11 +-
> tools/include/uapi/linux/prctl.h | 44 +-
> tools/perf/bench/Build | 1 +
> tools/perf/bench/futex-hash.c | 7 +
> tools/perf/bench/futex-lock-pi.c | 5 +
> tools/perf/bench/futex-requeue.c | 6 +
> tools/perf/bench/futex-wake-parallel.c | 9 +-
> tools/perf/bench/futex-wake.c | 4 +
> tools/perf/bench/futex.c | 60 ++
> tools/perf/bench/futex.h | 5 +
> 27 files changed, 1585 insertions(+), 569 deletions(-)
> create mode 100644 tools/perf/bench/futex.c
>
* Re: [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL
2025-04-08 13:51 ` André Almeida
@ 2025-04-08 16:13 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-08 16:13 UTC (permalink / raw)
To: André Almeida
Cc: linux-kernel, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long
On 2025-04-08 10:51:29 [-0300], André Almeida wrote:
> Hi Sebastian,
Hi,
> Thanks for your patchset. I think the perf support is great, but usually
> those new uAPI options should come with some selftests too.
You mean "tools/testing/selftests/futex/functional" ?
I could add something once it is merged. The API changed in this
submission vs the previous one where the "immutable" argument was added.
And I am waiting for some feedback on that.
The MPOL & NUMA bits could also use some documentation. Slowly, once we
get there…
Sebastian
* Re: [PATCH v11 19/19] futex: Allow to make the private hash immutable.
2025-04-07 15:57 ` [PATCH v11 19/19] futex: Allow to make the private hash immutable Sebastian Andrzej Siewior
@ 2025-04-10 10:56 ` Sebastian Andrzej Siewior
2025-04-10 14:52 ` Shrikanth Hegde
1 sibling, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-10 10:56 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Liang, Kan, Adrian Hunter, Alexander Shishkin,
Arnaldo Carvalho de Melo, Ian Rogers, Jiri Olsa, Mark Rutland,
Namhyung Kim, linux-perf-users
On 2025-04-07 17:57:42 [+0200], To linux-kernel@vger.kernel.org wrote:
> - Xeon CPU E5-2650, 2 NUMA nodes, total 32 CPUs:
> - Before the introducing task local hash
> shared Averaged 1.487.148 operations/sec (+- 0,53%), total secs = 10
> private Averaged 2.192.405 operations/sec (+- 0,07%), total secs = 10
>
> - With the series
> shared Averaged 1.326.342 operations/sec (+- 0,41%), total secs = 10
> -b128 Averaged 141.394 operations/sec (+- 1,15%), total secs = 10
> -Ib128 Averaged 851.490 operations/sec (+- 0,67%), total secs = 10
> -b8192 Averaged 131.321 operations/sec (+- 2,13%), total secs = 10
> -Ib8192 Averaged 1.923.077 operations/sec (+- 0,61%), total secs = 10
> 128 is the default allocation of hash buckets.
> 8192 was the previous amount of allocated hash buckets.
>
> - Xeon(R) CPU E7-8890 v3, 4 NUMA nodes, total 144 CPUs:
> - Before the introducing task local hash
> shared Averaged 1.810.936 operations/sec (+- 0,26%), total secs = 20
> private Averaged 2.505.801 operations/sec (+- 0,05%), total secs = 20
>
> - With the series
> shared Averaged 1.589.002 operations/sec (+- 0,25%), total secs = 20
> -b1024 Averaged 42.410 operations/sec (+- 0,20%), total secs = 20
> -Ib1024 Averaged 740.638 operations/sec (+- 1,51%), total secs = 20
> -b65536 Averaged 48.811 operations/sec (+- 1,35%), total secs = 20
> -Ib65536 Averaged 1.963.165 operations/sec (+- 0,18%), total secs = 20
> 1024 is the default allocation of hash buckets.
> 65536 was the previous amount of allocated hash buckets.
On EPYC 7713, 2 NUMA nodes, 256 CPUs:
ops/sec buckets
-t 250 25.393 (+- 1,51%) 1024
-t 250 -b 1024 -I 1.645.327 (+- 0,34%) 1024
-t 250 -b 65536 25.445 (+- 2,41%) 65536
-t 250 -b 65536 -I 1.733.745 (+- 0,36%) 65536
-t 250 -b 0 1.745.242 (+- 0,21%) 32768 * 2
Sebastian
* Re: [PATCH v11 19/19] futex: Allow to make the private hash immutable.
2025-04-07 15:57 ` [PATCH v11 19/19] futex: Allow to make the private hash immutable Sebastian Andrzej Siewior
2025-04-10 10:56 ` Sebastian Andrzej Siewior
@ 2025-04-10 14:52 ` Shrikanth Hegde
2025-04-10 15:28 ` Sebastian Andrzej Siewior
1 sibling, 1 reply; 31+ messages in thread
From: Shrikanth Hegde @ 2025-04-10 14:52 UTC (permalink / raw)
To: Sebastian Andrzej Siewior, linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long, Liang, Kan, Adrian Hunter, Alexander Shishkin,
Arnaldo Carvalho de Melo, Ian Rogers, Jiri Olsa, Mark Rutland,
Namhyung Kim, linux-perf-users
Hi Sebastian.
On 4/7/25 21:27, Sebastian Andrzej Siewior wrote:
> My initial testing showed that
> perf bench futex hash
>
> reported fewer operations/sec with the private hash. After using the same
> amount of buckets in the private hash as used by the global hash, the
> operations/sec were about the same.
>
> This changed once the private hash became resizable. This feature added
> an RCU section and reference counting via atomic inc+dec operations into
> the hot path.
> The reference counting can be avoided if the private hash is made
> immutable.
> Extend PR_FUTEX_HASH_SET_SLOTS by a fourth argument which denotes if the
> private hash should be made immutable. Once set (to true), a further
> resize is not allowed (the same applies once set to the global hash).
> Add PR_FUTEX_HASH_GET_IMMUTABLE which returns true if the hash can not
> be changed.
> Update "perf bench" suite.
>
It would be a good option for the application to decide if it needs this.
Using this option makes the perf regression go away when using the previous number of buckets.
Acked-by: Shrikanth Hegde <sshegde@linux.ibm.com>
base:
./perf bench futex hash
Averaged 1556023 operations/sec (+- 0.08%), total secs = 10 <<-- 1.5M
with series:
./perf bench futex hash -b32768
Averaged 126499 operations/sec (+- 0.41%), total secs = 10 <<-- .12M
./perf bench futex hash -Ib32768
Averaged 1549339 operations/sec (+- 0.08%), total secs = 10 <<-- 1.5M
> For comparison, results of "perf bench futex hash -s":
> - Xeon CPU E5-2650, 2 NUMA nodes, total 32 CPUs:
> - Before the introducing task local hash
> shared Averaged 1.487.148 operations/sec (+- 0,53%), total secs = 10
> private Averaged 2.192.405 operations/sec (+- 0,07%), total secs = 10
>
> - With the series
> shared Averaged 1.326.342 operations/sec (+- 0,41%), total secs = 10
> -b128 Averaged 141.394 operations/sec (+- 1,15%), total secs = 10
> -Ib128 Averaged 851.490 operations/sec (+- 0,67%), total secs = 10
> -b8192 Averaged 131.321 operations/sec (+- 2,13%), total secs = 10
> -Ib8192 Averaged 1.923.077 operations/sec (+- 0,61%), total secs = 10
> 128 is the default allocation of hash buckets.
> 8192 was the previous amount of allocated hash buckets.
>
> - Xeon(R) CPU E7-8890 v3, 4 NUMA nodes, total 144 CPUs:
> - Before the introducing task local hash
> shared Averaged 1.810.936 operations/sec (+- 0,26%), total secs = 20
> private Averaged 2.505.801 operations/sec (+- 0,05%), total secs = 20
>
> - With the series
> shared Averaged 1.589.002 operations/sec (+- 0,25%), total secs = 20
> -b1024 Averaged 42.410 operations/sec (+- 0,20%), total secs = 20
> -Ib1024 Averaged 740.638 operations/sec (+- 1,51%), total secs = 20
> -b65536 Averaged 48.811 operations/sec (+- 1,35%), total secs = 20
> -Ib65536 Averaged 1.963.165 operations/sec (+- 0,18%), total secs = 20
> 1024 is the default allocation of hash buckets.
> 65536 was the previous amount of allocated hash buckets.
>
> Cc: "Liang, Kan" <kan.liang@linux.intel.com>
> Cc: Adrian Hunter <adrian.hunter@intel.com>
> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
> Cc: Arnaldo Carvalho de Melo <acme@kernel.org>
> Cc: Ian Rogers <irogers@google.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Jiri Olsa <jolsa@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Namhyung Kim <namhyung@kernel.org>
> Cc: linux-perf-users@vger.kernel.org
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> include/linux/futex.h | 2 +-
> include/uapi/linux/prctl.h | 1 +
> kernel/futex/core.c | 42 ++++++++++++++++++++++----
> kernel/sys.c | 2 +-
> tools/include/uapi/linux/prctl.h | 1 +
> tools/perf/bench/futex-hash.c | 1 +
> tools/perf/bench/futex-lock-pi.c | 1 +
> tools/perf/bench/futex-requeue.c | 1 +
> tools/perf/bench/futex-wake-parallel.c | 1 +
> tools/perf/bench/futex-wake.c | 1 +
> tools/perf/bench/futex.c | 8 +++--
> tools/perf/bench/futex.h | 1 +
> 12 files changed, 51 insertions(+), 11 deletions(-)
>
nit: Does it make sense to split this patch into futex and perf?
* Re: [PATCH v11 19/19] futex: Allow to make the private hash immutable.
2025-04-10 14:52 ` Shrikanth Hegde
@ 2025-04-10 15:28 ` Sebastian Andrzej Siewior
2025-04-10 15:48 ` Shrikanth Hegde
0 siblings, 1 reply; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-10 15:28 UTC (permalink / raw)
To: Shrikanth Hegde
Cc: linux-kernel, André Almeida, Darren Hart, Davidlohr Bueso,
Ingo Molnar, Juri Lelli, Peter Zijlstra, Thomas Gleixner,
Valentin Schneider, Waiman Long, Liang, Kan, Adrian Hunter,
Alexander Shishkin, Arnaldo Carvalho de Melo, Ian Rogers,
Jiri Olsa, Mark Rutland, Namhyung Kim, linux-perf-users
On 2025-04-10 20:22:08 [+0530], Shrikanth Hegde wrote:
> Hi Sebastian.
Hi Shrikanth,
> It would be a good option for the application to decide if it needs this.
You mean to have it as I introduced it here or something else?
> Using this option makes the perf regression go away with the previous number of buckets.
Okay, good to know. You tested this on ppc64le?
> Acked-by: Shrikanth Hegde <sshegde@linux.ibm.com>
>
> base:
> ./perf bench futex hash
> Averaged 1556023 operations/sec (+- 0.08%), total secs = 10 <<-- 1.5M
>
> with series:
> ./perf bench futex hash -b32768
> Averaged 126499 operations/sec (+- 0.41%), total secs = 10 <<-- .12M
>
> ./perf bench futex hash -Ib32768
> Averaged 1549339 operations/sec (+- 0.08%), total secs = 10 <<-- 1.5M
Thank you for testing.
…
> nit: Does it make sense to split this patch into futex and perf?
First I wanted to figure out if we really want to do this. I have no
idea whether this regression would show up in a real-world use case or
only here as part of the micro-benchmark.
And if we do it, it would probably make sense to have one perf patch
which introduces -b & -I, and then figure out whether the additional
option to prctl should be part of the resize patch or not. We should
probably enforce 0/1 for arg4 from the beginning, so maybe folding this
in makes sense.
Sebastian
* Re: [PATCH v11 19/19] futex: Allow to make the private hash immutable.
2025-04-10 15:28 ` Sebastian Andrzej Siewior
@ 2025-04-10 15:48 ` Shrikanth Hegde
0 siblings, 0 replies; 31+ messages in thread
From: Shrikanth Hegde @ 2025-04-10 15:48 UTC (permalink / raw)
To: Sebastian Andrzej Siewior
Cc: linux-kernel, André Almeida, Darren Hart, Davidlohr Bueso,
Ingo Molnar, Juri Lelli, Peter Zijlstra, Thomas Gleixner,
Valentin Schneider, Waiman Long, Liang, Kan, Adrian Hunter,
Alexander Shishkin, Arnaldo Carvalho de Melo, Ian Rogers,
Jiri Olsa, Mark Rutland, Namhyung Kim, linux-perf-users
On 4/10/25 20:58, Sebastian Andrzej Siewior wrote:
> On 2025-04-10 20:22:08 [+0530], Shrikanth Hegde wrote:
>> Hi Sebastian.
> Hi Shrikanth,
>
>> It would be a good option for the application to decide if it needs this.
>
> You mean to have it as I introduced it here or something else?
>
as you have introduced it here.
>> Using this option makes the perf regression go away with the previous number of buckets.
>
> Okay, good to know. You tested this on ppc64le?
Yes.
>
>> Acked-by: Shrikanth Hegde <sshegde@linux.ibm.com>
>>
>> base:
>> ./perf bench futex hash
>> Averaged 1556023 operations/sec (+- 0.08%), total secs = 10 <<-- 1.5M
>>
>> with series:
>> ./perf bench futex hash -b32768
>> Averaged 126499 operations/sec (+- 0.41%), total secs = 10 <<-- .12M
>>
>> ./perf bench futex hash -Ib32768
>> Averaged 1549339 operations/sec (+- 0.08%), total secs = 10 <<-- 1.5M
> Thank you for testing.
> …
>> nit: Does it make sense to split this patch into futex and perf?
>
> First I wanted to figure out if we really want to do this. I have no
> idea whether this regression would show up in a real-world use case or
> only here as part of the micro-benchmark.
> And if we do it, it would probably make sense to have one perf patch
> which introduces -b & -I, and then figure out whether the additional
> option to prctl should be part of the resize patch or not. We should
> probably enforce 0/1 for arg4 from the beginning, so maybe folding
> this in makes sense.
>
ok.
> Sebastian
* Re: [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
` (20 preceding siblings ...)
2025-04-08 13:51 ` André Almeida
@ 2025-04-10 17:51 ` Shrikanth Hegde
2025-04-15 9:03 ` Sebastian Andrzej Siewior
21 siblings, 1 reply; 31+ messages in thread
From: Shrikanth Hegde @ 2025-04-10 17:51 UTC (permalink / raw)
To: Sebastian Andrzej Siewior, linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long
On 4/7/25 21:27, Sebastian Andrzej Siewior wrote:
> this is a follow up on
> https://lore.kernel.org/ZwVOMgBMxrw7BU9A@jlelli-thinkpadt14gen4.remote.csb
>
> and adds support for task local futex_hash_bucket.
>
> This is the local hash map series with PeterZ FUTEX2_NUMA and
> FUTEX2_MPOL plus a few fixes on top.
>
> The complete tree is at
> https://git.kernel.org/pub/scm/linux/kernel/git/bigeasy/staging.git/log/?h=futex_local_v11
> https://git.kernel.org/pub/scm/linux/kernel/git/bigeasy/staging.git futex_local_v11
>
> v10…v11: https://lore.kernel.org/all/20250312151634.2183278-1-bigeasy@linutronix.de
> - PeterZ's fixups and changes to the local hash series have been folded
> into the earlier patches, so things are no longer added first and then
> renamed or functionally changed later in the series.
>
> - vmalloc_huge() has been implemented on top of vmalloc_huge_node()
> and the NOMMU bots have been adjusted. akpm asked for this.
>
> - wake_up_var() has been removed from __futex_pivot_hash(). It is
> enough to wake the userspace waiter after the final put so it can
> perform the resize itself.
>
> - Changed the logic in futex_pivot_pending() so it does not block for
> the user. It waits for __futex_pivot_hash() which follows the logic
> in __futex_pivot_hash().
>
> - Updated kernel doc for __futex_hash().
I don't see any change in Documentation/*. Is it updated in some
other patch?
>
> - Patches 17+ are new:
> - Wire up PR_FUTEX_HASH_SET_SLOTS in "perf bench futex"
> - Add "immutable" mode to PR_FUTEX_HASH_SET_SLOTS to avoid resizing
> the local hash any further. This avoids rcuref usage which is
> noticeable in "perf bench futex hash"
>
> Peter Zijlstra (8):
> mm: Add vmalloc_huge_node()
> futex: Move futex_queue() into futex_wait_setup()
> futex: Pull futex_hash() out of futex_q_lock()
> futex: Create hb scopes
> futex: Create futex_hash() get/put class
> futex: Create private_hash() get/put class
> futex: Implement FUTEX2_NUMA
> futex: Implement FUTEX2_MPOL
>
> Sebastian Andrzej Siewior (11):
> rcuref: Provide rcuref_is_dead().
> futex: Acquire a hash reference in futex_wait_multiple_setup().
> futex: Decrease the waiter count before the unlock operation.
> futex: Introduce futex_q_lockptr_lock().
> futex: Create helper function to initialize a hash slot.
> futex: Add basic infrastructure for local task local hash.
> futex: Allow automatic allocation of process wide futex hash.
> futex: Allow to resize the private local hash.
> tools headers: Synchronize prctl.h ABI header
> tools/perf: Allow to select the number of hash buckets.
> futex: Allow to make the private hash immutable.
>
> include/linux/futex.h | 36 +-
> include/linux/mm_types.h | 7 +-
> include/linux/mmap_lock.h | 4 +
> include/linux/rcuref.h | 22 +-
> include/linux/vmalloc.h | 9 +-
> include/uapi/linux/futex.h | 10 +-
> include/uapi/linux/prctl.h | 6 +
> init/Kconfig | 10 +
> io_uring/futex.c | 4 +-
> kernel/fork.c | 24 +
> kernel/futex/core.c | 794 ++++++++++++++++++++++---
> kernel/futex/futex.h | 73 ++-
> kernel/futex/pi.c | 300 +++++-----
> kernel/futex/requeue.c | 480 +++++++--------
> kernel/futex/waitwake.c | 201 ++++---
> kernel/sys.c | 4 +
> mm/nommu.c | 18 +-
> mm/vmalloc.c | 11 +-
> tools/include/uapi/linux/prctl.h | 44 +-
> tools/perf/bench/Build | 1 +
> tools/perf/bench/futex-hash.c | 7 +
> tools/perf/bench/futex-lock-pi.c | 5 +
> tools/perf/bench/futex-requeue.c | 6 +
> tools/perf/bench/futex-wake-parallel.c | 9 +-
> tools/perf/bench/futex-wake.c | 4 +
> tools/perf/bench/futex.c | 60 ++
> tools/perf/bench/futex.h | 5 +
> 27 files changed, 1585 insertions(+), 569 deletions(-)
> create mode 100644 tools/perf/bench/futex.c
>
* Re: [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL
2025-04-10 17:51 ` Shrikanth Hegde
@ 2025-04-15 9:03 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-15 9:03 UTC (permalink / raw)
To: Shrikanth Hegde
Cc: linux-kernel, André Almeida, Darren Hart, Davidlohr Bueso,
Ingo Molnar, Juri Lelli, Peter Zijlstra, Thomas Gleixner,
Valentin Schneider, Waiman Long
On 2025-04-10 23:21:49 [+0530], Shrikanth Hegde wrote:
> > - Updated kernel doc for __futex_hash().
>
>
> I don't see any change in Documentation/*. Is it updated in some
> other path?
Look for "__futex_hash - Return" in
https://lore.kernel.org/all/20250407160007.vOk8ijom@linutronix.de/
This is the change I refer to.
Sebastian
* Re: [PATCH v11 15/19] futex: Implement FUTEX2_NUMA
2025-04-07 16:52 ` Sebastian Andrzej Siewior
@ 2025-04-17 15:34 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 31+ messages in thread
From: Sebastian Andrzej Siewior @ 2025-04-17 15:34 UTC (permalink / raw)
To: linux-kernel
Cc: André Almeida, Darren Hart, Davidlohr Bueso, Ingo Molnar,
Juri Lelli, Peter Zijlstra, Thomas Gleixner, Valentin Schneider,
Waiman Long
On 2025-04-07 18:52:25 [+0200], To linux-kernel@vger.kernel.org wrote:
> On 2025-04-07 17:57:38 [+0200], To linux-kernel@vger.kernel.org wrote:
> > --- a/kernel/futex/core.c
> > +++ b/kernel/futex/core.c
> > @@ -332,15 +337,35 @@ __futex_hash(union futex_key *key, struct futex_private_hash *fph)
> …
> > + if (node == FUTEX_NO_NODE) {
> > + /*
> > + * In case of !FLAGS_NUMA, use some unused hash bits to pick a
> > + * node -- this ensures regular futexes are interleaved across
> > + * the nodes and avoids having to allocate multiple
> > + * hash-tables.
> > + *
> > + * NOTE: this isn't perfectly uniform, but it is fast and
> > + * handles sparse node masks.
> > + */
> > + node = (hash >> futex_hashshift) % nr_node_ids;
>
> forgot to mention earlier: This % nr_node_ids turns into a div and it
> is visible in perf top while looking at __futex_hash(). We could round
> it down to a power of two (which should be the case in my 1, 2 and 4
> node NUMA world) and then use AND instead.
> ARM does not support NUMA or div so it is not a concern.
>
> Maybe a fast path for 1/2/4 would make sense since these are the most
> common cases. In case you consider it I could run tests to see how
> significant it is. It might be that it pops up in "perf bench futex
> hash" but is not significant in the general use case. I had some hacks
> and those did not improve the numbers as much as I hoped for.
Since I'm cleaning up: I'm getting approx 1% improvement with this so I
am not considering it.
Sebastian
end of thread, other threads:[~2025-04-17 15:34 UTC | newest]
Thread overview: 31+ messages
2025-04-07 15:57 [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 01/19] rcuref: Provide rcuref_is_dead() Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 02/19] mm: Add vmalloc_huge_node() Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 03/19] futex: Move futex_queue() into futex_wait_setup() Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 04/19] futex: Pull futex_hash() out of futex_q_lock() Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 05/19] futex: Create hb scopes Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 06/19] futex: Create futex_hash() get/put class Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 07/19] futex: Create private_hash() " Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 08/19] futex: Acquire a hash reference in futex_wait_multiple_setup() Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 09/19] futex: Decrease the waiter count before the unlock operation Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 10/19] futex: Introduce futex_q_lockptr_lock() Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 11/19] futex: Create helper function to initialize a hash slot Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 12/19] futex: Add basic infrastructure for local task local hash Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 13/19] futex: Allow automatic allocation of process wide futex hash Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 14/19] futex: Allow to resize the private local hash Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 15/19] futex: Implement FUTEX2_NUMA Sebastian Andrzej Siewior
2025-04-07 16:52 ` Sebastian Andrzej Siewior
2025-04-17 15:34 ` Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 16/19] futex: Implement FUTEX2_MPOL Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 17/19] tools headers: Synchronize prctl.h ABI header Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 18/19] tools/perf: Allow to select the number of hash buckets Sebastian Andrzej Siewior
2025-04-07 15:57 ` [PATCH v11 19/19] futex: Allow to make the private hash immutable Sebastian Andrzej Siewior
2025-04-10 10:56 ` Sebastian Andrzej Siewior
2025-04-10 14:52 ` Shrikanth Hegde
2025-04-10 15:28 ` Sebastian Andrzej Siewior
2025-04-10 15:48 ` Shrikanth Hegde
2025-04-07 16:00 ` [PATCH v11 00/19] futex: Add support task local hash maps, FUTEX2_NUMA and FUTEX2_MPOL Sebastian Andrzej Siewior
2025-04-08 13:51 ` André Almeida
2025-04-08 16:13 ` Sebastian Andrzej Siewior
2025-04-10 17:51 ` Shrikanth Hegde
2025-04-15 9:03 ` Sebastian Andrzej Siewior