* [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups
2025-11-21 14:34 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Frederic Weisbecker
@ 2025-11-21 14:34 ` Frederic Weisbecker
2025-11-21 19:12 ` Thomas Gleixner
2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker
` (2 subsequent siblings)
3 siblings, 1 reply; 12+ messages in thread
From: Frederic Weisbecker @ 2025-11-21 14:34 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups,
Frederic Weisbecker
From: Thomas Gleixner <tglx@linutronix.de>
During initialization, the IRQ thread is created before the IRQ gets a
chance to be enabled. But the IRQ may be enabled before the first
official kthread wake-up point. As a result, a firing IRQ can wake up
the IRQ thread early, before that first official kthread wake-up point
is reached.
Although this has been harmless so far, this uncontrolled behaviour is
a bug waiting to happen at some point in the future, with the threaded
handler accessing half-initialized state.
Prevent such surprises by performing the wake-up only if the target is
in TASK_INTERRUPTIBLE state. Since the IRQ thread waits in this state
for interrupts to handle only after proper initialization, it is
guaranteed not to be spuriously woken up while waiting in
TASK_UNINTERRUPTIBLE, right after creation in the kthread code, before
the first official wake-up point is reached.
Not-yet-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
---
kernel/irq/handle.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index e103451243a0..786f5570a640 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -133,7 +133,15 @@ void __irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
*/
atomic_inc(&desc->threads_active);
- wake_up_process(action->thread);
+ /*
+ * This might be a premature wakeup before the thread reached the
+ * thread function and set the IRQTF_READY bit. It's waiting in
+ * kthread code with state UNINTERRUPTIBLE. Once it reaches the
+ * thread function it waits with INTERRUPTIBLE. The wakeup is not
+ * lost in that case because the thread is guaranteed to observe
+ * the RUN flag before it goes to sleep in wait_for_interrupt().
+ */
+ wake_up_state(action->thread, TASK_INTERRUPTIBLE);
}
static DEFINE_STATIC_KEY_FALSE(irqhandler_duration_check_enabled);
--
2.51.1
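The state check that makes this safe lives in the scheduler's try_to_wake_up(): a wake-up only succeeds when the sleeper's current state matches the caller's mask. A minimal userspace model of that behaviour (illustrative names and simplified constants, not the kernel API):

```c
#include <stdbool.h>

/* Illustrative userspace model of the try_to_wake_up() state check.
 * The constants and struct are simplified stand-ins, not kernel ABI. */
#define TASK_RUNNING         0x0000u
#define TASK_INTERRUPTIBLE   0x0001u
#define TASK_UNINTERRUPTIBLE 0x0002u

struct task_model { unsigned int state; };

/* Models wake_up_state(): wake only if the task's state matches @mask. */
static bool wake_up_state_model(struct task_model *t, unsigned int mask)
{
	if (!(t->state & mask))
		return false;		/* state mismatch: wake-up is ignored */
	t->state = TASK_RUNNING;
	return true;
}
```

With this model, a thread still parked in TASK_UNINTERRUPTIBLE by the kthread code ignores a wake_up_state(..., TASK_INTERRUPTIBLE), while the same call wakes it once it sleeps interruptibly in wait_for_interrupt().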
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups
2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker
@ 2025-11-21 19:12 ` Thomas Gleixner
2025-11-21 22:04 ` Frederic Weisbecker
0 siblings, 1 reply; 12+ messages in thread
From: Thomas Gleixner @ 2025-11-21 19:12 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups,
Frederic Weisbecker
On Fri, Nov 21 2025 at 15:34, Frederic Weisbecker wrote:
> During initialization, the IRQ thread is created before the IRQ gets a
> chance to be enabled. But the IRQ may be enabled before the first
> official kthread wake-up point. As a result, a firing IRQ can wake up
> the IRQ thread early, before that first official kthread wake-up point
> is reached.
>
> Although this has been harmless so far, this uncontrolled behaviour is
> a bug waiting to happen at some point in the future, with the threaded
> handler accessing half-initialized state.
No. At the point where the first wake up can happen, the state used by
the thread is completely initialized. That's right after setup_irq()
drops the descriptor lock. Even if the hardware raises it immediately on
starting the interrupt up, the handler is stuck on the descriptor lock,
which is not released before everything is ready.
That kthread_bind() issue is a special case as it makes the assumption
that the thread is still in that UNINTERRUPTIBLE state waiting for the
initial wake up. That assumption is only true when the thread creator
guarantees that there is no wake up before kthread_bind() is invoked.
I'll rephrase that a bit. :)
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups
2025-11-21 19:12 ` Thomas Gleixner
@ 2025-11-21 22:04 ` Frederic Weisbecker
0 siblings, 0 replies; 12+ messages in thread
From: Frederic Weisbecker @ 2025-11-21 22:04 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups
Le Fri, Nov 21, 2025 at 08:12:38PM +0100, Thomas Gleixner a écrit :
> On Fri, Nov 21 2025 at 15:34, Frederic Weisbecker wrote:
> > During initialization, the IRQ thread is created before the IRQ gets a
> > chance to be enabled. But the IRQ may be enabled before the first
> > official kthread wake-up point. As a result, a firing IRQ can wake up
> > the IRQ thread early, before that first official kthread wake-up point
> > is reached.
> >
> > Although this has been harmless so far, this uncontrolled behaviour is
> > a bug waiting to happen at some point in the future, with the threaded
> > handler accessing half-initialized state.
>
> No. At the point where the first wake up can happen, the state used by
> the thread is completely initialized. That's right after setup_irq()
> drops the descriptor lock. Even if the hardware raises it immediately on
> starting the interrupt up, the handler is stuck on the descriptor lock,
> which is not released before everything is ready.
>
> That kthread_bind() issue is a special case as it makes the assumption
> that the thread is still in that UNINTERRUPTIBLE state waiting for the
> initial wake up. That assumption is only true when the thread creator
> guarantees that there is no wake up before kthread_bind() is invoked.
>
> I'll rephrase that a bit. :)
Eh, thanks and sorry for the misinterpretation.
--
Frederic Weisbecker
SUSE Labs
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions
2025-11-21 14:34 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Frederic Weisbecker
2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker
@ 2025-11-21 14:34 ` Frederic Weisbecker
2025-11-21 16:29 ` Waiman Long
2025-12-12 1:48 ` Chris Mason
2025-11-21 14:35 ` [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting Frederic Weisbecker
2025-11-21 20:05 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Marek Szyprowski
3 siblings, 2 replies; 12+ messages in thread
From: Frederic Weisbecker @ 2025-11-21 14:34 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Frederic Weisbecker, Marek Szyprowski, Marco Crivellari,
Waiman Long, cgroups
When a cpuset isolated partition is created, updated or destroyed, the
interrupt threads are blindly affined to all the non-isolated CPUs. This
happens without taking the interrupt threads' initial affinity into
account: it is simply ignored.
For example in a system with 8 CPUs, if an interrupt and its kthread are
initially affine to CPU 5, creating an isolated partition with only CPU 2
inside will eventually end up affining the interrupt kthread to all CPUs
but CPU 2 (that is CPUs 0,1,3-7), losing the kthread preference for CPU 5.
Besides the blind re-affining, this doesn't take care of the actual low
level interrupt, which isn't migrated. As of today the only way to isolate
non-managed interrupts, along with their kthreads, is to overwrite their
affinity separately, for example through /proc/irq/
To avoid doing that manually, future development should focus on updating
the interrupt's affinity whenever cpuset isolated partitions are updated.
In the meantime, cpuset shouldn't fiddle with interrupt threads directly.
To prevent that, set the PF_NO_SETAFFINITY flag on them.
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251118143052.68778-2-frederic@kernel.org
---
kernel/irq/manage.c | 23 +++++++++++++++--------
1 file changed, 15 insertions(+), 8 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index c1ce30c9c3ab..98b9b8b4de27 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
* references an already freed task_struct.
*/
new->thread = get_task_struct(t);
+
/*
- * Tell the thread to set its affinity. This is
- * important for shared interrupt handlers as we do
- * not invoke setup_affinity() for the secondary
- * handlers as everything is already set up. Even for
- * interrupts marked with IRQF_NO_BALANCE this is
- * correct as we want the thread to move to the cpu(s)
- * on which the requesting code placed the interrupt.
+ * The affinity may not yet be available, but it will be once
+ * the IRQ will be enabled. Delay and defer the actual setting
+ * to the thread itself once it is ready to run. In the meantime,
+ * prevent it from ever being reaffined directly by cpuset or
+ * housekeeping. The proper way to do it is to reaffine the whole
+ * vector.
*/
- set_bit(IRQTF_AFFINITY, &new->thread_flags);
+ kthread_bind_mask(t, cpu_possible_mask);
+
+ /*
+ * Ensure the thread adjusts the affinity once it reaches the
+ * thread function.
+ */
+ new->thread_flags = BIT(IRQTF_AFFINITY);
+
return 0;
}
--
2.51.1
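The 8-CPU example from the changelog can be sketched with plain bitmasks (a hypothetical model, not the kernel's cpumask API):

```c
#include <stdint.h>

/* Hypothetical 8-bit cpumask model of the re-affining described above:
 * bit N set means CPU N is in the mask. */
#define CPU(n) ((uint8_t)(1u << (n)))

/* cpuset's blind overwrite: affine the thread to every non-isolated
 * CPU, discarding whatever affinity the thread had before. */
static uint8_t cpuset_overwrite(uint8_t all_cpus, uint8_t isolated)
{
	return (uint8_t)(all_cpus & ~isolated);
}
```

For all_cpus = 0xFF and an isolated partition holding only CPU 2, the result is 0xFB (CPUs 0,1,3-7); the original single-CPU affinity on CPU 5 is overwritten rather than preserved.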
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions
2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker
@ 2025-11-21 16:29 ` Waiman Long
2025-12-12 1:48 ` Chris Mason
1 sibling, 0 replies; 12+ messages in thread
From: Waiman Long @ 2025-11-21 16:29 UTC (permalink / raw)
To: Frederic Weisbecker, Thomas Gleixner
Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups
On 11/21/25 9:34 AM, Frederic Weisbecker wrote:
> When a cpuset isolated partition is created, updated or destroyed, the
> interrupt threads are blindly affined to all the non-isolated CPUs. This
> happens without taking the interrupt threads' initial affinity into
> account: it is simply ignored.
>
> For example in a system with 8 CPUs, if an interrupt and its kthread are
> initially affine to CPU 5, creating an isolated partition with only CPU 2
> inside will eventually end up affining the interrupt kthread to all CPUs
> but CPU 2 (that is CPUs 0,1,3-7), losing the kthread preference for CPU 5.
>
> Besides the blind re-affining, this doesn't take care of the actual low
> level interrupt, which isn't migrated. As of today the only way to isolate
> non-managed interrupts, along with their kthreads, is to overwrite their
> affinity separately, for example through /proc/irq/
>
> To avoid doing that manually, future development should focus on updating
> the interrupt's affinity whenever cpuset isolated partitions are updated.
>
> In the meantime, cpuset shouldn't fiddle with interrupt threads directly.
> To prevent that, set the PF_NO_SETAFFINITY flag on them.
>
> Suggested-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Link: https://patch.msgid.link/20251118143052.68778-2-frederic@kernel.org
> ---
> kernel/irq/manage.c | 23 +++++++++++++++--------
> 1 file changed, 15 insertions(+), 8 deletions(-)
>
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index c1ce30c9c3ab..98b9b8b4de27 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
> * references an already freed task_struct.
> */
> new->thread = get_task_struct(t);
> +
> /*
> - * Tell the thread to set its affinity. This is
> - * important for shared interrupt handlers as we do
> - * not invoke setup_affinity() for the secondary
> - * handlers as everything is already set up. Even for
> - * interrupts marked with IRQF_NO_BALANCE this is
> - * correct as we want the thread to move to the cpu(s)
> - * on which the requesting code placed the interrupt.
> + * The affinity may not yet be available, but it will be once
> + * the IRQ will be enabled. Delay and defer the actual setting
> + * to the thread itself once it is ready to run. In the meantime,
> + * prevent it from ever being reaffined directly by cpuset or
> + * housekeeping. The proper way to do it is to reaffine the whole
> + * vector.
> */
> - set_bit(IRQTF_AFFINITY, &new->thread_flags);
> + kthread_bind_mask(t, cpu_possible_mask);
> +
> + /*
> + * Ensure the thread adjusts the affinity once it reaches the
> + * thread function.
> + */
> + new->thread_flags = BIT(IRQTF_AFFINITY);
> +
> return 0;
> }
>
LGTM
Acked-by: Waiman Long <longman@redhat.com>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions
2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker
2025-11-21 16:29 ` Waiman Long
@ 2025-12-12 1:48 ` Chris Mason
2025-12-12 2:26 ` Thomas Gleixner
1 sibling, 1 reply; 12+ messages in thread
From: Chris Mason @ 2025-12-12 1:48 UTC (permalink / raw)
To: Frederic Weisbecker
Cc: Chris Mason, Thomas Gleixner, LKML, Marek Szyprowski,
Marco Crivellari, Waiman Long, cgroups
On Fri, 21 Nov 2025 15:34:59 +0100 Frederic Weisbecker <frederic@kernel.org> wrote:
Hi everyone,
I ran some recent commits through AI patch review and it flagged this one.
The comments below look right to me, but I might be missing something:
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index c1ce30c9c3ab6..61da1c68ff82a 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
> * references an already freed task_struct.
> */
> new->thread = get_task_struct(t);
[ ... ]
> - set_bit(IRQTF_AFFINITY, &new->thread_flags);
> + kthread_bind_mask(t, cpu_possible_mask);
> +
> + /*
> + * Ensure the thread adjusts the affinity once it reaches the
> + * thread function.
> + */
> + new->thread_flags = BIT(IRQTF_AFFINITY);
^^^
Can this clobber IRQTF_FORCED_THREAD?
In __setup_irq(), irq_setup_forced_threading() is called before
setup_irq_thread(). When forced threading is enabled,
irq_setup_forced_threading() sets IRQTF_FORCED_THREAD via set_bit():
set_bit(IRQTF_FORCED_THREAD, &new->thread_flags);
Then setup_irq_thread() overwrites thread_flags with a direct assignment:
new->thread_flags = BIT(IRQTF_AFFINITY);
This clears IRQTF_FORCED_THREAD. Later in irq_thread():
if (force_irqthreads() && test_bit(IRQTF_FORCED_THREAD,
&action->thread_flags))
handler_fn = irq_forced_thread_fn;
else
handler_fn = irq_thread_fn;
Without IRQTF_FORCED_THREAD, the handler uses irq_thread_fn() instead of
irq_forced_thread_fn(). The forced-threaded handler then runs without the
local_bh_disable() and local_irq_disable() protection that non-threaded
interrupt handlers expect.
Should this be:
new->thread_flags |= BIT(IRQTF_AFFINITY);
or:
set_bit(IRQTF_AFFINITY, &new->thread_flags);
> +
> return 0;
> }
-chris
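The clobbering Chris describes is easy to demonstrate with plain bit operations (a sketch; set_bit() additionally provides atomicity, and the bit positions are taken from the kernel's IRQTF_* enum at the time of writing):

```c
/* Sketch of the two ways of setting IRQTF_AFFINITY discussed above. */
#define BIT(n) (1UL << (n))
#define IRQTF_AFFINITY      2
#define IRQTF_FORCED_THREAD 3

/* The buggy variant: a plain assignment discards previously set bits. */
static unsigned long thread_flags_assign(unsigned long flags)
{
	(void)flags;			/* previous flags are simply lost */
	return BIT(IRQTF_AFFINITY);
}

/* What set_bit() does, minus the atomicity: OR the new bit in. */
static unsigned long thread_flags_set_bit(unsigned long flags)
{
	return flags | BIT(IRQTF_AFFINITY);
}
```

Starting from flags with IRQTF_FORCED_THREAD already set, the assignment variant drops it while the OR variant keeps both bits.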
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions
2025-12-12 1:48 ` Chris Mason
@ 2025-12-12 2:26 ` Thomas Gleixner
2025-12-12 4:01 ` [PATCH] genirq: Don't overwrite interrupt thread flags on setup Thomas Gleixner
0 siblings, 1 reply; 12+ messages in thread
From: Thomas Gleixner @ 2025-12-12 2:26 UTC (permalink / raw)
To: Chris Mason, Frederic Weisbecker
Cc: Chris Mason, LKML, Marek Szyprowski, Marco Crivellari,
Waiman Long, cgroups
On Thu, Dec 11 2025 at 17:48, Chris Mason wrote:
> On Fri, 21 Nov 2025 15:34:59 +0100 Frederic Weisbecker <frederic@kernel.org> wrote:
> I ran some recent commits through AI patch review and it flagged this one.
> The comments below look right to me, but I might be missing something:
>
>> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
>> index c1ce30c9c3ab6..61da1c68ff82a 100644
>> --- a/kernel/irq/manage.c
>> +++ b/kernel/irq/manage.c
>> @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
>> * references an already freed task_struct.
>> */
>> new->thread = get_task_struct(t);
>
> [ ... ]
>
>> - set_bit(IRQTF_AFFINITY, &new->thread_flags);
>> + kthread_bind_mask(t, cpu_possible_mask);
>> +
>> + /*
>> + * Ensure the thread adjusts the affinity once it reaches the
>> + * thread function.
>> + */
>> + new->thread_flags = BIT(IRQTF_AFFINITY);
> ^^^
>
> Can this clobber IRQTF_FORCED_THREAD?
>
> In __setup_irq(), irq_setup_forced_threading() is called before
> setup_irq_thread(). When forced threading is enabled,
> irq_setup_forced_threading() sets IRQTF_FORCED_THREAD via set_bit():
>
> set_bit(IRQTF_FORCED_THREAD, &new->thread_flags);
>
> Then setup_irq_thread() overwrites thread_flags with a direct assignment:
>
> new->thread_flags = BIT(IRQTF_AFFINITY);
>
> This clears IRQTF_FORCED_THREAD. Later in irq_thread():
Yep. That's broken. Nice catch.
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH] genirq: Don't overwrite interrupt thread flags on setup
2025-12-12 2:26 ` Thomas Gleixner
@ 2025-12-12 4:01 ` Thomas Gleixner
2025-12-12 11:57 ` Frederic Weisbecker
0 siblings, 1 reply; 12+ messages in thread
From: Thomas Gleixner @ 2025-12-12 4:01 UTC (permalink / raw)
To: Chris Mason, Frederic Weisbecker
Cc: Chris Mason, LKML, Marek Szyprowski, Marco Crivellari,
Waiman Long, cgroups
Chris reported that the recent affinity management changes result in
overwriting the already initialized thread flags.
Use set_bit() to set the affinity bit instead of assigning the bit value to
the flags.
Fixes: 801afdfbfcd9 ("genirq: Fix interrupt threads affinity vs. cpuset isolated partitions")
Reported-by: Chris Mason <clm@meta.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Closes: https://lore.kernel.org/all/20251212014848.3509622-1-clm@meta.com
---
kernel/irq/manage.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1414,7 +1414,7 @@ setup_irq_thread(struct irqaction *new,
* Ensure the thread adjusts the affinity once it reaches the
* thread function.
*/
- new->thread_flags = BIT(IRQTF_AFFINITY);
+ set_bit(IRQTF_AFFINITY, &new->thread_flags);
return 0;
}
^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting
2025-11-21 14:34 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Frederic Weisbecker
2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker
2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker
@ 2025-11-21 14:35 ` Frederic Weisbecker
2025-11-21 20:05 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Marek Szyprowski
3 siblings, 0 replies; 12+ messages in thread
From: Frederic Weisbecker @ 2025-11-21 14:35 UTC (permalink / raw)
To: Thomas Gleixner
Cc: LKML, Frederic Weisbecker, Marek Szyprowski, Marco Crivellari,
Waiman Long, cgroups
Failing to allocate the affinity mask of an interrupt descriptor fails the
whole descriptor initialization. It is then guaranteed that the cpumask is
always available whenever the related interrupt objects, such as the
kthread handler, are alive.
Therefore remove the superfluous check; it is merely a historical
leftover. Also get rid of the comments above it, which are either
obsolete or useless.
Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://patch.msgid.link/20251118143052.68778-3-frederic@kernel.org
---
kernel/irq/manage.c | 17 ++++-------------
1 file changed, 4 insertions(+), 13 deletions(-)
diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
index 98b9b8b4de27..76c7b58f54c8 100644
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1001,7 +1001,6 @@ static irqreturn_t irq_forced_secondary_handler(int irq, void *dev_id)
static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
{
cpumask_var_t mask;
- bool valid = false;
if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags))
return;
@@ -1018,21 +1017,13 @@ static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *a
}
scoped_guard(raw_spinlock_irq, &desc->lock) {
- /*
- * This code is triggered unconditionally. Check the affinity
- * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out.
- */
- if (cpumask_available(desc->irq_common_data.affinity)) {
- const struct cpumask *m;
+ const struct cpumask *m;
- m = irq_data_get_effective_affinity_mask(&desc->irq_data);
- cpumask_copy(mask, m);
- valid = true;
- }
+ m = irq_data_get_effective_affinity_mask(&desc->irq_data);
+ cpumask_copy(mask, m);
}
- if (valid)
- set_cpus_allowed_ptr(current, mask);
+ set_cpus_allowed_ptr(current, mask);
free_cpumask_var(mask);
}
#else
--
2.51.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset
2025-11-21 14:34 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Frederic Weisbecker
` (2 preceding siblings ...)
2025-11-21 14:35 ` [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting Frederic Weisbecker
@ 2025-11-21 20:05 ` Marek Szyprowski
3 siblings, 0 replies; 12+ messages in thread
From: Marek Szyprowski @ 2025-11-21 20:05 UTC (permalink / raw)
To: Frederic Weisbecker, Thomas Gleixner
Cc: LKML, Marco Crivellari, Waiman Long, cgroups
On 21.11.2025 15:34, Frederic Weisbecker wrote:
> Here is another take after some last minutes issues reported by
> Marek Szyprowski <m.szyprowski@samsung.com>:
>
> https://lore.kernel.org/all/73356b5f-ab5c-4e9e-b57f-b80981c35998@samsung.com/
Works fine on my test systems.
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
> Changes since v2:
>
> * Fix early spurious IRQ thread wake-up (to be SOB'ed by Thomas)
>
> * Instead of applying the affinity remotely, set PF_NO_SETAFFINITY
> early, right after kthread creation, and wait for the thread to
> apply the affinity by itself. This prevents an early wake-up from
> messing with kthread_bind_mask(), as reported by
> Marek Szyprowski <m.szyprowski@samsung.com>
>
> Frederic Weisbecker (2):
> genirq: Fix interrupt threads affinity vs. cpuset isolated partitions
> genirq: Remove cpumask availability check on kthread affinity setting
>
> Thomas Gleixner (1):
> genirq: Prevent from early irq thread spurious wake-ups
>
> kernel/irq/handle.c | 10 +++++++++-
> kernel/irq/manage.c | 40 +++++++++++++++++++---------------------
> 2 files changed, 28 insertions(+), 22 deletions(-)
>
Best regards
--
Marek Szyprowski, PhD
Samsung R&D Institute Poland
^ permalink raw reply [flat|nested] 12+ messages in thread