* [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset @ 2025-11-21 14:34 ` Frederic Weisbecker 2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker ` (3 more replies) 0 siblings, 4 replies; 19+ messages in thread From: Frederic Weisbecker @ 2025-11-21 14:34 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, Frederic Weisbecker, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups Hi, Here is another take after some last-minute issues reported by Marek Szyprowski <m.szyprowski@samsung.com>: https://lore.kernel.org/all/73356b5f-ab5c-4e9e-b57f-b80981c35998@samsung.com/ Changes since v2: * Fix early spurious IRQ thread wake-up (to be SOB'ed by Thomas) * Instead of applying the affinity remotely, set PF_NO_SETAFFINITY early, right after kthread creation, and wait for the thread to apply the affinity by itself. This is to prevent early wake-ups from messing with kthread_bind_mask(), as reported by Marek Szyprowski <m.szyprowski@samsung.com> Frederic Weisbecker (2): genirq: Fix interrupt threads affinity vs. cpuset isolated partitions genirq: Remove cpumask availability check on kthread affinity setting Thomas Gleixner (1): genirq: Prevent from early irq thread spurious wake-ups kernel/irq/handle.c | 10 +++++++++- kernel/irq/manage.c | 40 +++++++++++++++++++--------------------- 2 files changed, 28 insertions(+), 22 deletions(-) -- 2.51.1 ^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups 2025-11-21 14:34 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Frederic Weisbecker @ 2025-11-21 14:34 ` Frederic Weisbecker 2025-11-21 19:12 ` Thomas Gleixner ` (2 more replies) 2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker ` (2 subsequent siblings) 3 siblings, 3 replies; 19+ messages in thread From: Frederic Weisbecker @ 2025-11-21 14:34 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups, Frederic Weisbecker From: Thomas Gleixner <tglx@linutronix.de> During initialization, the IRQ thread is created before the IRQ get a chance to be enabled. But the IRQ enablement may happen before the first official kthread wake up point. As a result, the firing IRQ can perform an early wake-up of the IRQ thread before the first official kthread wake up point. Although this has happened to be harmless so far, this uncontrolled behaviour is a bug waiting to happen at some point in the future with the threaded handler accessing halfway initialized states. Prevent such surprises by performing a wake-up only if the target is in TASK_INTERRUPTIBLE state. Since the IRQ thread waits in this state for interrupts to handle only after proper initialization, it is then guaranteed not to be spuriously woken up while waiting in TASK_UNINTERRUPTIBLE, right after creation in the kthread code, before the official first wake up point is reached. Not-yet-Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> --- kernel/irq/handle.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c index e103451243a0..786f5570a640 100644 --- a/kernel/irq/handle.c +++ b/kernel/irq/handle.c @@ -133,7 +133,15 @@ void __irq_wake_thread(struct irq_desc *desc, struct irqaction *action) */ atomic_inc(&desc->threads_active); - wake_up_process(action->thread); + /* + * This might be a premature wakeup before the thread reached the + * thread function and set the IRQTF_READY bit. It's waiting in + * kthread code with state UNINTERRUPTIBLE. Once it reaches the + * thread function it waits with INTERRUPTIBLE. The wakeup is not + * lost in that case because the thread is guaranteed to observe + * the RUN flag before it goes to sleep in wait_for_interrupt(). + */ + wake_up_state(action->thread, TASK_INTERRUPTIBLE); } static DEFINE_STATIC_KEY_FALSE(irqhandler_duration_check_enabled); -- 2.51.1 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups 2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker @ 2025-11-21 19:12 ` Thomas Gleixner 2025-11-21 22:04 ` Frederic Weisbecker 2025-11-21 20:01 ` [tip: irq/core] genirq: Prevent early spurious wake-ups of interrupt threads tip-bot2 for Thomas Gleixner 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker 2 siblings, 1 reply; 19+ messages in thread From: Thomas Gleixner @ 2025-11-21 19:12 UTC (permalink / raw) To: Frederic Weisbecker Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups, Frederic Weisbecker On Fri, Nov 21 2025 at 15:34, Frederic Weisbecker wrote: > During initialization, the IRQ thread is created before the IRQ get a > chance to be enabled. But the IRQ enablement may happen before the first > official kthread wake up point. As a result, the firing IRQ can perform > an early wake-up of the IRQ thread before the first official kthread > wake up point. > > Although this has happened to be harmless so far, this uncontrolled > behaviour is a bug waiting to happen at some point in the future with > the threaded handler accessing halfway initialized states. No. At the point where the first wake up can happen, the state used by the thread is completely initialized. That's right after setup_irq() drops the descriptor lock. Even if the hardware raises it immediately on starting the interrupt up, the handler is stuck on the descriptor lock, which is not released before everything is ready. That kthread_bind() issue is a special case as it makes the assumption that the thread is still in that UNINTERRUPTIBLE state waiting for the initial wake up. That assumption is only true, when the thread creator guarantees that there is no wake up before kthread_bind() is invoked. I'll rephrase that a bit. :) ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups 2025-11-21 19:12 ` Thomas Gleixner @ 2025-11-21 22:04 ` Frederic Weisbecker 0 siblings, 0 replies; 19+ messages in thread From: Frederic Weisbecker @ 2025-11-21 22:04 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups Le Fri, Nov 21, 2025 at 08:12:38PM +0100, Thomas Gleixner a écrit : > On Fri, Nov 21 2025 at 15:34, Frederic Weisbecker wrote: > > During initialization, the IRQ thread is created before the IRQ get a > > chance to be enabled. But the IRQ enablement may happen before the first > > official kthread wake up point. As a result, the firing IRQ can perform > > an early wake-up of the IRQ thread before the first official kthread > > wake up point. > > > > Although this has happened to be harmless so far, this uncontrolled > > behaviour is a bug waiting to happen at some point in the future with > > the threaded handler accessing halfway initialized states. > > No. At the point where the first wake up can happen, the state used by > the thread is completely initialized. That's right after setup_irq() > drops the descriptor lock. Even if the hardware raises it immediately on > starting the interrupt up, the handler is stuck on the descriptor lock, > which is not released before everything is ready. > > That kthread_bind() issue is a special case as it makes the assumption > that the thread is still in that UNINTERRUPTIBLE state waiting for the > initial wake up. That assumption is only true, when the thread creator > guarantees that there is no wake up before kthread_bind() is invoked. > > I'll rephrase that a bit. :) Eh, thanks and sorry for the misinterpretation. -- Frederic Weisbecker SUSE Labs ^ permalink raw reply [flat|nested] 19+ messages in thread
* [tip: irq/core] genirq: Prevent early spurious wake-ups of interrupt threads 2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker 2025-11-21 19:12 ` Thomas Gleixner @ 2025-11-21 20:01 ` tip-bot2 for Thomas Gleixner 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker 2 siblings, 0 replies; 19+ messages in thread From: tip-bot2 for Thomas Gleixner @ 2025-11-21 20:01 UTC (permalink / raw) To: linux-tip-commits Cc: Frederic Weisbecker, Thomas Gleixner, x86, linux-kernel, maz The following commit has been merged into the irq/core branch of tip: Commit-ID: 9d5ca2edd74e479ad09bc7d02820395a9d46e2bd Gitweb: https://git.kernel.org/tip/9d5ca2edd74e479ad09bc7d02820395a9d46e2bd Author: Thomas Gleixner <tglx@linutronix.de> AuthorDate: Fri, 21 Nov 2025 15:34:58 +01:00 Committer: Thomas Gleixner <tglx@linutronix.de> CommitterDate: Fri, 21 Nov 2025 20:50:30 +01:00 genirq: Prevent early spurious wake-ups of interrupt threads During initialization, the interrupt thread is created before the interrupt is enabled. The interrupt enablement happens before the actual kthread wake up point. Once the interrupt is enabled the hardware can raise an interrupt and once setup_irq() drops the descriptor lock a interrupt wake-up can happen. Even when such an interrupt can be considered premature, this is not a problem in general because at the point where the descriptor lock is dropped and the wakeup can happen, the data which is used by the thread is fully initialized. Though from the perspective of least surprise, the initial wakeup really should be performed by the setup code and not randomly by a premature interrupt. Prevent this by performing a wake-up only if the target is in state TASK_INTERRUPTIBLE, which the thread uses in wait_for_interrupt(). If the thread is still in state TASK_UNINTERRUPTIBLE, the wake-up is not lost because after the setup code completed the initial wake-up the thread will observe the IRQTF_RUNTHREAD and proceed with the handling. Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/20251121143500.42111-2-frederic@kernel.org --- kernel/irq/handle.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c index e103451..786f557 100644 --- a/kernel/irq/handle.c +++ b/kernel/irq/handle.c @@ -133,7 +133,15 @@ void __irq_wake_thread(struct irq_desc *desc, struct irqaction *action) */ atomic_inc(&desc->threads_active); - wake_up_process(action->thread); + /* + * This might be a premature wakeup before the thread reached the + * thread function and set the IRQTF_READY bit. It's waiting in + * kthread code with state UNINTERRUPTIBLE. Once it reaches the + * thread function it waits with INTERRUPTIBLE. The wakeup is not + * lost in that case because the thread is guaranteed to observe + * the RUN flag before it goes to sleep in wait_for_interrupt(). + */ + wake_up_state(action->thread, TASK_INTERRUPTIBLE); } static DEFINE_STATIC_KEY_FALSE(irqhandler_duration_check_enabled); ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [tip: irq/core] genirq: Prevent early spurious wake-ups of interrupt threads 2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker 2025-11-21 19:12 ` Thomas Gleixner 2025-11-21 20:01 ` [tip: irq/core] genirq: Prevent early spurious wake-ups of interrupt threads tip-bot2 for Thomas Gleixner @ 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker 2 siblings, 0 replies; 19+ messages in thread From: tip-bot2 for Frederic Weisbecker @ 2025-11-22 8:30 UTC (permalink / raw) To: linux-tip-commits Cc: Frederic Weisbecker, Thomas Gleixner, Ingo Molnar, x86, linux-kernel, maz The following commit has been merged into the irq/core branch of tip: Commit-ID: 68775ca79af3b8d4c147598983ece012d7007bac Gitweb: https://git.kernel.org/tip/68775ca79af3b8d4c147598983ece012d7007bac Author: Frederic Weisbecker <frederic@kernel.org> AuthorDate: Fri, 21 Nov 2025 15:34:58 +01:00 Committer: Ingo Molnar <mingo@kernel.org> CommitterDate: Sat, 22 Nov 2025 09:26:18 +01:00 genirq: Prevent early spurious wake-ups of interrupt threads During initialization, the interrupt thread is created before the interrupt is enabled. The interrupt enablement happens before the actual kthread wake up point. Once the interrupt is enabled the hardware can raise an interrupt and once setup_irq() drops the descriptor lock a interrupt wake-up can happen. Even when such an interrupt can be considered premature, this is not a problem in general because at the point where the descriptor lock is dropped and the wakeup can happen, the data which is used by the thread is fully initialized. Though from the perspective of least surprise, the initial wakeup really should be performed by the setup code and not randomly by a premature interrupt. Prevent this by performing a wake-up only if the target is in state TASK_INTERRUPTIBLE, which the thread uses in wait_for_interrupt(). If the thread is still in state TASK_UNINTERRUPTIBLE, the wake-up is not lost because after the setup code completed the initial wake-up the thread will observe the IRQTF_RUNTHREAD and proceed with the handling. [ tglx: Simplified the changes and extended the changelog. ] Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://patch.msgid.link/20251121143500.42111-2-frederic@kernel.org --- kernel/irq/handle.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c index e103451..786f557 100644 --- a/kernel/irq/handle.c +++ b/kernel/irq/handle.c @@ -133,7 +133,15 @@ void __irq_wake_thread(struct irq_desc *desc, struct irqaction *action) */ atomic_inc(&desc->threads_active); - wake_up_process(action->thread); + /* + * This might be a premature wakeup before the thread reached the + * thread function and set the IRQTF_READY bit. It's waiting in + * kthread code with state UNINTERRUPTIBLE. Once it reaches the + * thread function it waits with INTERRUPTIBLE. The wakeup is not + * lost in that case because the thread is guaranteed to observe + * the RUN flag before it goes to sleep in wait_for_interrupt(). + */ + wake_up_state(action->thread, TASK_INTERRUPTIBLE); } static DEFINE_STATIC_KEY_FALSE(irqhandler_duration_check_enabled); ^ permalink raw reply related [flat|nested] 19+ messages in thread
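For readers coming from the cpuset side, the fix above hinges on the semantics of wake_up_state(). Below is a simplified sketch of the two scheduler helpers involved, paraphrased from kernel/sched/core.c; exact signatures and details may differ between kernel versions:

	int wake_up_state(struct task_struct *p, unsigned int state)
	{
		/* Wakes @p only if its current sleep state matches @state. */
		return try_to_wake_up(p, state, 0);
	}

	int wake_up_process(struct task_struct *p)
	{
		/*
		 * TASK_NORMAL is TASK_INTERRUPTIBLE | TASK_UNINTERRUPTIBLE, so
		 * this also wakes a thread that is still parked in the kthread
		 * startup code before its first official wake-up.
		 */
		return try_to_wake_up(p, TASK_NORMAL, 0);
	}

Hence wake_up_state(action->thread, TASK_INTERRUPTIBLE) is a no-op while the interrupt thread still sleeps in TASK_UNINTERRUPTIBLE inside the kthread startup code, and behaves exactly like wake_up_process() once the thread has reached wait_for_interrupt().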
* [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions 2025-11-21 14:34 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Frederic Weisbecker 2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker @ 2025-11-21 14:34 ` Frederic Weisbecker 2025-11-21 16:29 ` Waiman Long ` (3 more replies) 2025-11-21 14:35 ` [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting Frederic Weisbecker 2025-11-21 20:05 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Marek Szyprowski 3 siblings, 4 replies; 19+ messages in thread From: Frederic Weisbecker @ 2025-11-21 14:34 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, Frederic Weisbecker, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups When a cpuset isolated partition is created / updated or destroyed, the interrupt threads are affine blindly to all the non-isolated CPUs. And this happens without taking into account the interrupt threads initial affinity that becomes ignored. For example in a system with 8 CPUs, if an interrupt and its kthread are initially affine to CPU 5, creating an isolated partition with only CPU 2 inside will eventually end up affining the interrupt kthread to all CPUs but CPU 2 (that is CPUs 0,1,3-7), losing the kthread preference for CPU 5. Besides the blind re-affinity, this doesn't take care of the actual low level interrupt which isn't migrated. As of today the only way to isolate non managed interrupts, along with their kthreads, is to overwrite their affinity separately, for example through /proc/irq/ To avoid doing that manually, future development should focus on updating the interrupt's affinity whenever cpuset isolated partitions are updated. In the meantime, cpuset shouldn't fiddle with interrupt threads directly. To prevent from that, set the PF_NO_SETAFFINITY flag to them. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/20251118143052.68778-2-frederic@kernel.org --- kernel/irq/manage.c | 23 +++++++++++++++-------- 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index c1ce30c9c3ab..98b9b8b4de27 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary) * references an already freed task_struct. */ new->thread = get_task_struct(t); + /* - * Tell the thread to set its affinity. This is - * important for shared interrupt handlers as we do - * not invoke setup_affinity() for the secondary - * handlers as everything is already set up. Even for - * interrupts marked with IRQF_NO_BALANCE this is - * correct as we want the thread to move to the cpu(s) - * on which the requesting code placed the interrupt. + * The affinity may not yet be available, but it will be once + * the IRQ will be enabled. Delay and defer the actual setting + * to the thread itself once it is ready to run. In the meantime, + * prevent it from ever being reaffined directly by cpuset or + * housekeeping. The proper way to do it is to reaffine the whole + * vector. */ - set_bit(IRQTF_AFFINITY, &new->thread_flags); + kthread_bind_mask(t, cpu_possible_mask); + + /* + * Ensure the thread adjusts the affinity once it reaches the + * thread function. 
+ */ + new->thread_flags = BIT(IRQTF_AFFINITY); + return 0; } -- 2.51.1 ^ permalink raw reply related [flat|nested] 19+ messages in thread
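The key mechanism in this patch is kthread_bind_mask(): it applies the given mask and sets PF_NO_SETAFFINITY while the freshly created thread is still parked, which is what keeps cpuset and housekeeping from re-affining the thread later. A paraphrased sketch of its behaviour, based on kernel/kthread.c with locking and some details trimmed (exact code varies across kernel versions):

	void kthread_bind_mask(struct task_struct *p, const struct cpumask *mask)
	{
		/*
		 * Only valid while the new thread still sleeps in the kthread
		 * startup code in TASK_UNINTERRUPTIBLE, i.e. before its first
		 * wake-up -- which is why patch 1/3 matters here.
		 */
		if (!wait_task_inactive(p, TASK_UNINTERRUPTIBLE)) {
			WARN_ON(1);
			return;
		}
		do_set_cpus_allowed(p, mask);
		p->flags |= PF_NO_SETAFFINITY;	/* cpuset/housekeeping leave such threads alone */
	}

This also explains the ordering chosen by the patch: the bind happens right after kthread creation with cpu_possible_mask, and the real per-interrupt affinity is applied later by the thread itself via IRQTF_AFFINITY and irq_thread_check_affinity().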
* Re: [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions 2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker @ 2025-11-21 16:29 ` Waiman Long 2025-11-21 20:01 ` [tip: irq/core] " tip-bot2 for Frederic Weisbecker ` (2 subsequent siblings) 3 siblings, 0 replies; 19+ messages in thread From: Waiman Long @ 2025-11-21 16:29 UTC (permalink / raw) To: Frederic Weisbecker, Thomas Gleixner Cc: LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups On 11/21/25 9:34 AM, Frederic Weisbecker wrote: > When a cpuset isolated partition is created / updated or destroyed, the > interrupt threads are affine blindly to all the non-isolated CPUs. And this > happens without taking into account the interrupt threads initial affinity > that becomes ignored. > > For example in a system with 8 CPUs, if an interrupt and its kthread are > initially affine to CPU 5, creating an isolated partition with only CPU 2 > inside will eventually end up affining the interrupt kthread to all CPUs > but CPU 2 (that is CPUs 0,1,3-7), losing the kthread preference for CPU 5. > > Besides the blind re-affinity, this doesn't take care of the actual low > level interrupt which isn't migrated. As of today the only way to isolate > non managed interrupts, along with their kthreads, is to overwrite their > affinity separately, for example through /proc/irq/ > > To avoid doing that manually, future development should focus on updating > the interrupt's affinity whenever cpuset isolated partitions are updated. > > In the meantime, cpuset shouldn't fiddle with interrupt threads directly. > To prevent from that, set the PF_NO_SETAFFINITY flag to them. > > Suggested-by: Thomas Gleixner <tglx@linutronix.de> > Signed-off-by: Frederic Weisbecker <frederic@kernel.org> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> > Link: https://patch.msgid.link/20251118143052.68778-2-frederic@kernel.org > --- > kernel/irq/manage.c | 23 +++++++++++++++-------- > 1 file changed, 15 insertions(+), 8 deletions(-) > > diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c > index c1ce30c9c3ab..98b9b8b4de27 100644 > --- a/kernel/irq/manage.c > +++ b/kernel/irq/manage.c > @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary) > * references an already freed task_struct. > */ > new->thread = get_task_struct(t); > + > /* > - * Tell the thread to set its affinity. This is > - * important for shared interrupt handlers as we do > - * not invoke setup_affinity() for the secondary > - * handlers as everything is already set up. Even for > - * interrupts marked with IRQF_NO_BALANCE this is > - * correct as we want the thread to move to the cpu(s) > - * on which the requesting code placed the interrupt. > + * The affinity may not yet be available, but it will be once > + * the IRQ will be enabled. Delay and defer the actual setting > + * to the thread itself once it is ready to run. In the meantime, > + * prevent it from ever being reaffined directly by cpuset or > + * housekeeping. The proper way to do it is to reaffine the whole > + * vector. > */ > - set_bit(IRQTF_AFFINITY, &new->thread_flags); > + kthread_bind_mask(t, cpu_possible_mask); > + > + /* > + * Ensure the thread adjusts the affinity once it reaches the > + * thread function. > + */ > + new->thread_flags = BIT(IRQTF_AFFINITY); > + > return 0; > } > LGTM Acked-by: Waiman Long <longman@redhat.com> ^ permalink raw reply [flat|nested] 19+ messages in thread
* [tip: irq/core] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions 2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker 2025-11-21 16:29 ` Waiman Long @ 2025-11-21 20:01 ` tip-bot2 for Frederic Weisbecker 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker 2025-12-12 1:48 ` [PATCH 2/3 v3] " Chris Mason 3 siblings, 0 replies; 19+ messages in thread From: tip-bot2 for Frederic Weisbecker @ 2025-11-21 20:01 UTC (permalink / raw) To: linux-tip-commits Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz The following commit has been merged into the irq/core branch of tip: Commit-ID: 71b89ad36c06603c093f04e142972d67c9272f14 Gitweb: https://git.kernel.org/tip/71b89ad36c06603c093f04e142972d67c9272f14 Author: Frederic Weisbecker <frederic@kernel.org> AuthorDate: Fri, 21 Nov 2025 15:34:59 +01:00 Committer: Thomas Gleixner <tglx@linutronix.de> CommitterDate: Fri, 21 Nov 2025 20:50:30 +01:00 genirq: Fix interrupt threads affinity vs. cpuset isolated partitions When a cpuset isolated partition is created / updated or destroyed, the interrupt threads are affined blindly to all the non-isolated CPUs. This happens without taking into account the interrupt threads initial affinity that becomes ignored. For example in a system with 8 CPUs, if an interrupt and its kthread are initially affine to CPU 5, creating an isolated partition with only CPU 2 inside will eventually end up affining the interrupt kthread to all CPUs but CPU 2 (that is CPUs 0,1,3-7), losing the kthread preference for CPU 5. Besides the blind re-affining, this doesn't take care of the actual low level interrupt which isn't migrated. As of today the only way to isolate non managed interrupts, along with their kthreads, is to overwrite their affinity separately, for example through /proc/irq/ To avoid doing that manually, future development should focus on updating the interrupt's affinity whenever cpuset isolated partitions are updated. In the meantime, cpuset shouldn't fiddle with interrupt threads directly. To prevent from that, set the PF_NO_SETAFFINITY flag to them. This is done through kthread_bind_mask() by affining them initially to all possible CPUs as at that point the interrupt is not started up which means the affinity of the hard interrupt is not known. The thread will adjust that once it reaches the handler, which is guaranteed to happen after the initial affinity of the hard interrupt is established. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/20251121143500.42111-3-frederic@kernel.org --- kernel/irq/manage.c | 23 +++++++++++++++-------- 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index c1ce30c..61da1c6 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary) * references an already freed task_struct. */ new->thread = get_task_struct(t); + /* - * Tell the thread to set its affinity. This is - * important for shared interrupt handlers as we do - * not invoke setup_affinity() for the secondary - * handlers as everything is already set up. Even for - * interrupts marked with IRQF_NO_BALANCE this is - * correct as we want the thread to move to the cpu(s) - * on which the requesting code placed the interrupt. 
+ * The affinity can not be established yet, but it will be once the + * interrupt is enabled. Delay and defer the actual setting to the + * thread itself once it is ready to run. In the meantime, prevent + * it from ever being re-affined directly by cpuset or + * housekeeping. The proper way to do it is to re-affine the whole + * vector. */ - set_bit(IRQTF_AFFINITY, &new->thread_flags); + kthread_bind_mask(t, cpu_possible_mask); + + /* + * Ensure the thread adjusts the affinity once it reaches the + * thread function. + */ + new->thread_flags = BIT(IRQTF_AFFINITY); + return 0; } ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [tip: irq/core] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions 2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker 2025-11-21 16:29 ` Waiman Long 2025-11-21 20:01 ` [tip: irq/core] " tip-bot2 for Frederic Weisbecker @ 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker 2025-12-12 1:48 ` [PATCH 2/3 v3] " Chris Mason 3 siblings, 0 replies; 19+ messages in thread From: tip-bot2 for Frederic Weisbecker @ 2025-11-22 8:30 UTC (permalink / raw) To: linux-tip-commits Cc: Thomas Gleixner, Frederic Weisbecker, Ingo Molnar, x86, linux-kernel, maz The following commit has been merged into the irq/core branch of tip: Commit-ID: 801afdfbfcd90ff62a4b2469bbda1d958f7a5353 Gitweb: https://git.kernel.org/tip/801afdfbfcd90ff62a4b2469bbda1d958f7a5353 Author: Frederic Weisbecker <frederic@kernel.org> AuthorDate: Fri, 21 Nov 2025 15:34:59 +01:00 Committer: Ingo Molnar <mingo@kernel.org> CommitterDate: Sat, 22 Nov 2025 09:26:18 +01:00 genirq: Fix interrupt threads affinity vs. cpuset isolated partitions When a cpuset isolated partition is created / updated or destroyed, the interrupt threads are affined blindly to all the non-isolated CPUs. This happens without taking into account the interrupt threads initial affinity that becomes ignored. For example in a system with 8 CPUs, if an interrupt and its kthread are initially affine to CPU 5, creating an isolated partition with only CPU 2 inside will eventually end up affining the interrupt kthread to all CPUs but CPU 2 (that is CPUs 0,1,3-7), losing the kthread preference for CPU 5. Besides the blind re-affining, this doesn't take care of the actual low level interrupt which isn't migrated. As of today the only way to isolate non managed interrupts, along with their kthreads, is to overwrite their affinity separately, for example through /proc/irq/ To avoid doing that manually, future development should focus on updating the interrupt's affinity whenever cpuset isolated partitions are updated. In the meantime, cpuset shouldn't fiddle with interrupt threads directly. To prevent from that, set the PF_NO_SETAFFINITY flag to them. This is done through kthread_bind_mask() by affining them initially to all possible CPUs as at that point the interrupt is not started up which means the affinity of the hard interrupt is not known. The thread will adjust that once it reaches the handler, which is guaranteed to happen after the initial affinity of the hard interrupt is established. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://patch.msgid.link/20251121143500.42111-3-frederic@kernel.org --- kernel/irq/manage.c | 23 +++++++++++++++-------- 1 file changed, 15 insertions(+), 8 deletions(-) diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index c1ce30c..61da1c6 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary) * references an already freed task_struct. */ new->thread = get_task_struct(t); + /* - * Tell the thread to set its affinity. This is - * important for shared interrupt handlers as we do - * not invoke setup_affinity() for the secondary - * handlers as everything is already set up. 
Even for - * interrupts marked with IRQF_NO_BALANCE this is - * correct as we want the thread to move to the cpu(s) - * on which the requesting code placed the interrupt. + * The affinity can not be established yet, but it will be once the + * interrupt is enabled. Delay and defer the actual setting to the + * thread itself once it is ready to run. In the meantime, prevent + * it from ever being re-affined directly by cpuset or + * housekeeping. The proper way to do it is to re-affine the whole + * vector. */ - set_bit(IRQTF_AFFINITY, &new->thread_flags); + kthread_bind_mask(t, cpu_possible_mask); + + /* + * Ensure the thread adjusts the affinity once it reaches the + * thread function. + */ + new->thread_flags = BIT(IRQTF_AFFINITY); + return 0; } ^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions 2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker ` (2 preceding siblings ...) 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker @ 2025-12-12 1:48 ` Chris Mason 2025-12-12 2:26 ` Thomas Gleixner 3 siblings, 1 reply; 19+ messages in thread From: Chris Mason @ 2025-12-12 1:48 UTC (permalink / raw) To: Frederic Weisbecker Cc: Chris Mason, Thomas Gleixner, LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups On Fri, 21 Nov 2025 15:34:59 +0100 Frederic Weisbecker <frederic@kernel.org> wrote: Hi everyone, I ran some recent commits through AI patch review and it flagged this one. The comments below looks right to me, but I might be missing something: > diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c > index c1ce30c9c3ab6..61da1c68ff82a 100644 > --- a/kernel/irq/manage.c > +++ b/kernel/irq/manage.c > @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary) > * references an already freed task_struct. > */ > new->thread = get_task_struct(t); [ ... ] > - set_bit(IRQTF_AFFINITY, &new->thread_flags); > + kthread_bind_mask(t, cpu_possible_mask); > + > + /* > + * Ensure the thread adjusts the affinity once it reaches the > + * thread function. > + */ > + new->thread_flags = BIT(IRQTF_AFFINITY); ^^^ Can this clobber IRQTF_FORCED_THREAD? In __setup_irq(), irq_setup_forced_threading() is called before setup_irq_thread(). When forced threading is enabled, irq_setup_forced_threading() sets IRQTF_FORCED_THREAD via set_bit(): set_bit(IRQTF_FORCED_THREAD, &new->thread_flags); Then setup_irq_thread() overwrites thread_flags with a direct assignment: new->thread_flags = BIT(IRQTF_AFFINITY); This clears IRQTF_FORCED_THREAD. Later in irq_thread(): if (force_irqthreads() && test_bit(IRQTF_FORCED_THREAD, &action->thread_flags)) handler_fn = irq_forced_thread_fn; else handler_fn = irq_thread_fn; Without IRQTF_FORCED_THREAD, the handler uses irq_thread_fn() instead of irq_forced_thread_fn(). The forced-threaded handler then runs without the local_bh_disable() and local_irq_disable() protection that non-threaded interrupt handlers expect. Should this be: new->thread_flags |= BIT(IRQTF_AFFINITY); or: set_bit(IRQTF_AFFINITY, &new->thread_flags); > + > return 0; > } -chris ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions 2025-12-12 1:48 ` [PATCH 2/3 v3] " Chris Mason @ 2025-12-12 2:26 ` Thomas Gleixner 2025-12-12 4:01 ` [PATCH] genirq: Don't overwrite interrupt thread flags on setup Thomas Gleixner 0 siblings, 1 reply; 19+ messages in thread From: Thomas Gleixner @ 2025-12-12 2:26 UTC (permalink / raw) To: Chris Mason, Frederic Weisbecker Cc: Chris Mason, LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups On Thu, Dec 11 2025 at 17:48, Chris Mason wrote: > On Fri, 21 Nov 2025 15:34:59 +0100 Frederic Weisbecker <frederic@kernel.org> wrote: > I ran some recent commits through AI patch review and it flagged this one. > The comments below looks right to me, but I might be missing something: > >> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c >> index c1ce30c9c3ab6..61da1c68ff82a 100644 >> --- a/kernel/irq/manage.c >> +++ b/kernel/irq/manage.c >> @@ -1408,16 +1408,23 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary) >> * references an already freed task_struct. >> */ >> new->thread = get_task_struct(t); > > [ ... ] > >> - set_bit(IRQTF_AFFINITY, &new->thread_flags); >> + kthread_bind_mask(t, cpu_possible_mask); >> + >> + /* >> + * Ensure the thread adjusts the affinity once it reaches the >> + * thread function. >> + */ >> + new->thread_flags = BIT(IRQTF_AFFINITY); > ^^^ > > Can this clobber IRQTF_FORCED_THREAD? > > In __setup_irq(), irq_setup_forced_threading() is called before > setup_irq_thread(). When forced threading is enabled, > irq_setup_forced_threading() sets IRQTF_FORCED_THREAD via set_bit(): > > set_bit(IRQTF_FORCED_THREAD, &new->thread_flags); > > Then setup_irq_thread() overwrites thread_flags with a direct assignment: > > new->thread_flags = BIT(IRQTF_AFFINITY); > > This clears IRQTF_FORCED_THREAD. Later in irq_thread(): Yep. That's broken. Nice catch. ^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH] genirq: Don't overwrite interrupt thread flags on setup 2025-12-12 2:26 ` Thomas Gleixner @ 2025-12-12 4:01 ` Thomas Gleixner 2025-12-12 11:57 ` Frederic Weisbecker 2025-12-13 1:37 ` [tip: irq/urgent] " tip-bot2 for Thomas Gleixner 0 siblings, 2 replies; 19+ messages in thread From: Thomas Gleixner @ 2025-12-12 4:01 UTC (permalink / raw) To: Chris Mason, Frederic Weisbecker Cc: Chris Mason, LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups Chris reported that the recent affinity management changes result in overwriting the already initialized thread flags. Use set_bit() to set the affinity bit instead of assigning the bit value to the flags. Fixes: 801afdfbfcd9 ("genirq: Fix interrupt threads affinity vs. cpuset isolated partitions") Reported-by: Chris Mason <clm@meta.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Closes: https://lore.kernel.org/all/20251212014848.3509622-1-clm@meta.com --- kernel/irq/manage.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1414,7 +1414,7 @@ setup_irq_thread(struct irqaction *new, * Ensure the thread adjusts the affinity once it reaches the * thread function. */ - new->thread_flags = BIT(IRQTF_AFFINITY); + set_bit(IRQTF_AFFINITY, &new->thread_flags); return 0; } ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH] genirq: Don't overwrite interrupt thread flags on setup 2025-12-12 4:01 ` [PATCH] genirq: Don't overwrite interrupt thread flags on setup Thomas Gleixner @ 2025-12-12 11:57 ` Frederic Weisbecker 2025-12-13 1:37 ` [tip: irq/urgent] " tip-bot2 for Thomas Gleixner 1 sibling, 0 replies; 19+ messages in thread From: Frederic Weisbecker @ 2025-12-12 11:57 UTC (permalink / raw) To: Thomas Gleixner Cc: Chris Mason, LKML, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups Le Fri, Dec 12, 2025 at 01:01:04PM +0900, Thomas Gleixner a écrit : > Chris reported that the recent affinity management changes result in > overwriting the already initialized thread flags. > > Use set_bit() to set the affinity bit instead of assigning the bit value to > the flags. > > Fixes: 801afdfbfcd9 ("genirq: Fix interrupt threads affinity vs. cpuset isolated partitions") > Reported-by: Chris Mason <clm@meta.com> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de> > Closes: https://lore.kernel.org/all/20251212014848.3509622-1-clm@meta.com Whoops! Acked-by: Frederic Weisbecker <frederic@kernel.org> -- Frederic Weisbecker SUSE Labs ^ permalink raw reply [flat|nested] 19+ messages in thread
* [tip: irq/urgent] genirq: Don't overwrite interrupt thread flags on setup 2025-12-12 4:01 ` [PATCH] genirq: Don't overwrite interrupt thread flags on setup Thomas Gleixner 2025-12-12 11:57 ` Frederic Weisbecker @ 2025-12-13 1:37 ` tip-bot2 for Thomas Gleixner 1 sibling, 0 replies; 19+ messages in thread From: tip-bot2 for Thomas Gleixner @ 2025-12-13 1:37 UTC (permalink / raw) To: linux-tip-commits Cc: Chris Mason, Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz The following commit has been merged into the irq/urgent branch of tip: Commit-ID: fbbd7ce627af733ded7971b2495b0d099a0a80da Gitweb: https://git.kernel.org/tip/fbbd7ce627af733ded7971b2495b0d099a0a80da Author: Thomas Gleixner <tglx@linutronix.de> AuthorDate: Fri, 12 Dec 2025 13:01:04 +09:00 Committer: Thomas Gleixner <tglx@linutronix.de> CommitterDate: Sat, 13 Dec 2025 10:29:33 +09:00 genirq: Don't overwrite interrupt thread flags on setup Chris reported that the recent affinity management changes result in overwriting the already initialized thread flags. Use set_bit() to set the affinity bit instead of assigning the bit value to the flags. Fixes: 801afdfbfcd9 ("genirq: Fix interrupt threads affinity vs. cpuset isolated partitions") Reported-by: Chris Mason <clm@meta.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Frederic Weisbecker <frederic@kernel.org> Link: https://patch.msgid.link/87ecp0e4cf.ffs@tglx Closes: https://lore.kernel.org/all/20251212014848.3509622-1-clm@meta.com --- kernel/irq/manage.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index 8b1b4c8..349ae79 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1414,7 +1414,7 @@ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary) * Ensure the thread adjusts the affinity once it reaches the * thread function. */ - new->thread_flags = BIT(IRQTF_AFFINITY); + set_bit(IRQTF_AFFINITY, &new->thread_flags); return 0; } ^ permalink raw reply related [flat|nested] 19+ messages in thread
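To make the failure mode discussed above concrete, here is a small stand-alone userspace demonstration of the difference between a plain assignment and the OR that set_bit() performs. The IRQTF_* positions are illustrative only; the real enum lives in kernel/irq/internals.h:

	#include <stdio.h>

	#define BIT(nr) (1UL << (nr))

	/* Illustrative bit positions mirroring kernel/irq/internals.h */
	enum { IRQTF_RUNTHREAD, IRQTF_WARNED, IRQTF_AFFINITY, IRQTF_FORCED_THREAD, IRQTF_READY };

	int main(void)
	{
		unsigned long thread_flags = 0;

		/* What irq_setup_forced_threading() does first */
		thread_flags |= BIT(IRQTF_FORCED_THREAD);

		/* Broken variant: plain assignment wipes the bit set above */
		thread_flags = BIT(IRQTF_AFFINITY);
		printf("assignment:    FORCED_THREAD=%d AFFINITY=%d\n",
		       !!(thread_flags & BIT(IRQTF_FORCED_THREAD)),
		       !!(thread_flags & BIT(IRQTF_AFFINITY)));

		/* Fixed variant: OR in the new bit, which is what set_bit() does (atomically) */
		thread_flags = BIT(IRQTF_FORCED_THREAD);
		thread_flags |= BIT(IRQTF_AFFINITY);
		printf("set_bit-style: FORCED_THREAD=%d AFFINITY=%d\n",
		       !!(thread_flags & BIT(IRQTF_FORCED_THREAD)),
		       !!(thread_flags & BIT(IRQTF_AFFINITY)));
		return 0;
	}

The first printf reports FORCED_THREAD=0, i.e. the forced-threading marker is silently lost, which is exactly why irq_thread() would then pick irq_thread_fn() instead of irq_forced_thread_fn().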
* [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting 2025-11-21 14:34 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Frederic Weisbecker 2025-11-21 14:34 ` [PATCH 1/3 v3] genirq: Prevent from early irq thread spurious wake-ups Frederic Weisbecker 2025-11-21 14:34 ` [PATCH 2/3 v3] genirq: Fix interrupt threads affinity vs. cpuset isolated partitions Frederic Weisbecker @ 2025-11-21 14:35 ` Frederic Weisbecker 2025-11-21 20:01 ` [tip: irq/core] " tip-bot2 for Frederic Weisbecker 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker 2025-11-21 20:05 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Marek Szyprowski 3 siblings, 2 replies; 19+ messages in thread From: Frederic Weisbecker @ 2025-11-21 14:35 UTC (permalink / raw) To: Thomas Gleixner Cc: LKML, Frederic Weisbecker, Marek Szyprowski, Marco Crivellari, Waiman Long, cgroups Failing to allocate the affinity mask of an interrupt descriptor fails the whole descriptor initialization. It is then guaranteed that the cpumask is always available whenever the related interrupt objects are alive, such as the kthread handler. Therefore remove the superfluous check since it is merely just a historical leftover. Get rid also of the comments above it that are either obsolete or useless. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/20251118143052.68778-3-frederic@kernel.org --- kernel/irq/manage.c | 17 ++++------------- 1 file changed, 4 insertions(+), 13 deletions(-) diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index 98b9b8b4de27..76c7b58f54c8 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1001,7 +1001,6 @@ static irqreturn_t irq_forced_secondary_handler(int irq, void *dev_id) static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action) { cpumask_var_t mask; - bool valid = false; if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags)) return; @@ -1018,21 +1017,13 @@ static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *a } scoped_guard(raw_spinlock_irq, &desc->lock) { - /* - * This code is triggered unconditionally. Check the affinity - * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out. - */ - if (cpumask_available(desc->irq_common_data.affinity)) { - const struct cpumask *m; + const struct cpumask *m; - m = irq_data_get_effective_affinity_mask(&desc->irq_data); - cpumask_copy(mask, m); - valid = true; - } + m = irq_data_get_effective_affinity_mask(&desc->irq_data); + cpumask_copy(mask, m); } - if (valid) - set_cpus_allowed_ptr(current, mask); + set_cpus_allowed_ptr(current, mask); free_cpumask_var(mask); } #else -- 2.51.1 ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [tip: irq/core] genirq: Remove cpumask availability check on kthread affinity setting 2025-11-21 14:35 ` [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting Frederic Weisbecker @ 2025-11-21 20:01 ` tip-bot2 for Frederic Weisbecker 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker 1 sibling, 0 replies; 19+ messages in thread From: tip-bot2 for Frederic Weisbecker @ 2025-11-21 20:01 UTC (permalink / raw) To: linux-tip-commits Cc: Thomas Gleixner, Frederic Weisbecker, x86, linux-kernel, maz The following commit has been merged into the irq/core branch of tip: Commit-ID: 15300e02321850105f9128992f12742f3cd78180 Gitweb: https://git.kernel.org/tip/15300e02321850105f9128992f12742f3cd78180 Author: Frederic Weisbecker <frederic@kernel.org> AuthorDate: Fri, 21 Nov 2025 15:35:00 +01:00 Committer: Thomas Gleixner <tglx@linutronix.de> CommitterDate: Fri, 21 Nov 2025 20:50:30 +01:00 genirq: Remove cpumask availability check on kthread affinity setting Failing to allocate the affinity mask of an interrupt descriptor fails the whole descriptor initialization. It is then guaranteed that the cpumask is always available whenever the related interrupt objects are alive, such as the kthread handler. Therefore remove the superfluous check since it is merely a historical leftover. Get rid also of the comments above it that are obsolete and useless. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Link: https://patch.msgid.link/20251121143500.42111-4-frederic@kernel.org --- kernel/irq/manage.c | 17 ++++------------- 1 file changed, 4 insertions(+), 13 deletions(-) diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index 61da1c6..1615b64 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1001,7 +1001,6 @@ static irqreturn_t irq_forced_secondary_handler(int irq, void *dev_id) static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action) { cpumask_var_t mask; - bool valid = false; if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags)) return; @@ -1018,21 +1017,13 @@ static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *a } scoped_guard(raw_spinlock_irq, &desc->lock) { - /* - * This code is triggered unconditionally. Check the affinity - * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out. - */ - if (cpumask_available(desc->irq_common_data.affinity)) { - const struct cpumask *m; + const struct cpumask *m; - m = irq_data_get_effective_affinity_mask(&desc->irq_data); - cpumask_copy(mask, m); - valid = true; - } + m = irq_data_get_effective_affinity_mask(&desc->irq_data); + cpumask_copy(mask, m); } - if (valid) - set_cpus_allowed_ptr(current, mask); + set_cpus_allowed_ptr(current, mask); free_cpumask_var(mask); } #else ^ permalink raw reply related [flat|nested] 19+ messages in thread
* [tip: irq/core] genirq: Remove cpumask availability check on kthread affinity setting 2025-11-21 14:35 ` [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting Frederic Weisbecker 2025-11-21 20:01 ` [tip: irq/core] " tip-bot2 for Frederic Weisbecker @ 2025-11-22 8:30 ` tip-bot2 for Frederic Weisbecker 1 sibling, 0 replies; 19+ messages in thread From: tip-bot2 for Frederic Weisbecker @ 2025-11-22 8:30 UTC (permalink / raw) To: linux-tip-commits Cc: Thomas Gleixner, Frederic Weisbecker, Ingo Molnar, x86, linux-kernel, maz The following commit has been merged into the irq/core branch of tip: Commit-ID: 3de5e46e50abc01a1cee7e12b657e083fc5ed638 Gitweb: https://git.kernel.org/tip/3de5e46e50abc01a1cee7e12b657e083fc5ed638 Author: Frederic Weisbecker <frederic@kernel.org> AuthorDate: Fri, 21 Nov 2025 15:35:00 +01:00 Committer: Ingo Molnar <mingo@kernel.org> CommitterDate: Sat, 22 Nov 2025 09:26:18 +01:00 genirq: Remove cpumask availability check on kthread affinity setting Failing to allocate the affinity mask of an interrupt descriptor fails the whole descriptor initialization. It is then guaranteed that the cpumask is always available whenever the related interrupt objects are alive, such as the kthread handler. Therefore remove the superfluous check since it is merely a historical leftover. Get rid also of the comments above it that are obsolete and useless. Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Frederic Weisbecker <frederic@kernel.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://patch.msgid.link/20251121143500.42111-4-frederic@kernel.org --- kernel/irq/manage.c | 17 ++++------------- 1 file changed, 4 insertions(+), 13 deletions(-) diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c index 61da1c6..1615b64 100644 --- a/kernel/irq/manage.c +++ b/kernel/irq/manage.c @@ -1001,7 +1001,6 @@ static irqreturn_t irq_forced_secondary_handler(int irq, void *dev_id) static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action) { cpumask_var_t mask; - bool valid = false; if (!test_and_clear_bit(IRQTF_AFFINITY, &action->thread_flags)) return; @@ -1018,21 +1017,13 @@ static void irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *a } scoped_guard(raw_spinlock_irq, &desc->lock) { - /* - * This code is triggered unconditionally. Check the affinity - * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out. - */ - if (cpumask_available(desc->irq_common_data.affinity)) { - const struct cpumask *m; + const struct cpumask *m; - m = irq_data_get_effective_affinity_mask(&desc->irq_data); - cpumask_copy(mask, m); - valid = true; - } + m = irq_data_get_effective_affinity_mask(&desc->irq_data); + cpumask_copy(mask, m); } - if (valid) - set_cpus_allowed_ptr(current, mask); + set_cpus_allowed_ptr(current, mask); free_cpumask_var(mask); } #else ^ permalink raw reply related [flat|nested] 19+ messages in thread
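For background on why the removed check was only ever meaningful on some configurations: cpumask_var_t is a plain embedded array unless CONFIG_CPUMASK_OFFSTACK=y, in which case it is a separately allocated pointer. The following is paraphrased from include/linux/cpumask.h and the exact definitions differ slightly between kernel versions:

	#ifdef CONFIG_CPUMASK_OFFSTACK
	typedef struct cpumask *cpumask_var_t;		/* allocated by alloc_cpumask_var() */

	static inline bool cpumask_available(cpumask_var_t mask)
	{
		return mask != NULL;			/* the check removed above */
	}
	#else
	typedef struct cpumask cpumask_var_t[1];	/* embedded, never NULL */

	static inline bool cpumask_available(cpumask_var_t mask)
	{
		return true;				/* constant, optimized out */
	}
	#endif

Since a failed affinity-mask allocation already fails the whole descriptor allocation, the OFFSTACK=y pointer can no longer be NULL by the time irq_thread_check_affinity() runs, so the check is dead code in both configurations.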
* Re: [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset 2025-11-21 14:34 ` [PATCH 0/3 v3] genirq: Fix IRQ threads VS cpuset Frederic Weisbecker ` (2 preceding siblings ...) 2025-11-21 14:35 ` [PATCH 3/3 v3] genirq: Remove cpumask availability check on kthread affinity setting Frederic Weisbecker @ 2025-11-21 20:05 ` Marek Szyprowski 3 siblings, 0 replies; 19+ messages in thread From: Marek Szyprowski @ 2025-11-21 20:05 UTC (permalink / raw) To: Frederic Weisbecker, Thomas Gleixner Cc: LKML, Marco Crivellari, Waiman Long, cgroups On 21.11.2025 15:34, Frederic Weisbecker wrote: > Here is another take after some last-minute issues reported by > Marek Szyprowski <m.szyprowski@samsung.com>: > > https://lore.kernel.org/all/73356b5f-ab5c-4e9e-b57f-b80981c35998@samsung.com/ Works fine on my test systems. Tested-by: Marek Szyprowski <m.szyprowski@samsung.com> > Changes since v2: > > * Fix early spurious IRQ thread wake-up (to be SOB'ed by Thomas) > > * Instead of applying the affinity remotely, set PF_NO_SETAFFINITY > early, right after kthread creation, and wait for the thread to > apply the affinity by itself. This is to prevent early wake-ups > from messing with kthread_bind_mask(), as reported by > Marek Szyprowski <m.szyprowski@samsung.com> > > Frederic Weisbecker (2): > genirq: Fix interrupt threads affinity vs. cpuset isolated partitions > genirq: Remove cpumask availability check on kthread affinity setting > > Thomas Gleixner (1): > genirq: Prevent from early irq thread spurious wake-ups > > kernel/irq/handle.c | 10 +++++++++- > kernel/irq/manage.c | 40 +++++++++++++++++++--------------------- > 2 files changed, 28 insertions(+), 22 deletions(-) > Best regards -- Marek Szyprowski, PhD Samsung R&D Institute Poland ^ permalink raw reply [flat|nested] 19+ messages in thread