* [PATCH 0/9 v4.9-RT] Backports to fix random core
@ 2022-08-19  9:24 Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 1/9] random: Bring back the local_locks Sebastian Andrzej Siewior
                   ` (10 more replies)
  0 siblings, 11 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

Hi,

In v4.9.320 some random-core patches broke RT. This is a series of
backports to align with later RT versions and get things working again.

Sebastian



* [PATCH 1/9] random: Bring back the local_locks
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 2/9] random: schedule mix_interrupt_randomness() less often Sebastian Andrzej Siewior
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

As part of the backports the random code lost its local_lock_t based
locking and the whole operation became a plain local_irq_{disable|enable}(),
simply because the older kernel did not provide those primitives.

RT as of v4.9 has a slightly different variant of local_locks.
Replace the local_irq_*() operations, which came in with the backport of
commit
   77760fd7f7ae3 ("random: remove batched entropy locking")
with matching local_lock_irq*() operations.
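
A sketch of the locking pattern the hunks below apply to the per-CPU
crng and batched-entropy state (illustrative only; the struct and
function names are made up, and on a non-RT build the local_lock_*()
macros are expected to fall back to plain IRQ disabling):

  #include <linux/locallock.h>
  #include <linux/percpu.h>
  #include <linux/spinlock.h>

  struct pcpu_state {                     /* hypothetical per-CPU state */
          unsigned long generation;
          struct local_irq_lock lock;     /* a per-CPU 'sleeping' lock on RT */
  };

  static DEFINE_PER_CPU(struct pcpu_state, pcpu_state) = {
          .lock.lock = __SPIN_LOCK_UNLOCKED(pcpu_state.lock.lock),
  };

  static void pcpu_state_update(void)
  {
          struct pcpu_state *s;
          unsigned long flags;

          /* on RT this keeps the section preemptible instead of
           * disabling interrupts */
          local_lock_irqsave(pcpu_state.lock, flags);
          s = raw_cpu_ptr(&pcpu_state);
          s->generation++;
          local_unlock_irqrestore(pcpu_state.lock, flags);
  }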

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/char/random.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 8e701ea78b0da..860dc427000e9 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -53,6 +53,7 @@
 #include <linux/uuid.h>
 #include <linux/siphash.h>
 #include <linux/uio.h>
+#include <linux/locallock.h>
 #include <crypto/chacha20.h>
 #include <crypto/blake2s.h>
 #include <asm/processor.h>
@@ -230,10 +231,12 @@ static struct {
 struct crng {
 	u8 key[CHACHA20_KEY_SIZE];
 	unsigned long generation;
+	struct local_irq_lock lock;
 };
 
 static DEFINE_PER_CPU(struct crng, crngs) = {
-	.generation = ULONG_MAX
+	.generation = ULONG_MAX,
+	.lock.lock = __SPIN_LOCK_UNLOCKED(crngs.lock.lock),
 };
 
 /* Used by crng_reseed() and crng_make_state() to extract a new seed from the input pool. */
@@ -363,7 +366,7 @@ static void crng_make_state(u32 chacha_state[CHACHA20_BLOCK_SIZE / sizeof(u32)],
 	if (unlikely(crng_has_old_seed()))
 		crng_reseed();
 
-	local_irq_save(flags);
+	local_lock_irqsave(crngs.lock, flags);
 	crng = raw_cpu_ptr(&crngs);
 
 	/*
@@ -388,7 +391,7 @@ static void crng_make_state(u32 chacha_state[CHACHA20_BLOCK_SIZE / sizeof(u32)],
 	 * should wind up here immediately.
 	 */
 	crng_fast_key_erasure(crng->key, chacha_state, random_data, random_data_len);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(crngs.lock, flags);
 }
 
 static void _get_random_bytes(void *buf, size_t len)
@@ -506,11 +509,13 @@ struct batch_ ##type {								\
 	 * formula of (integer_blocks + 0.5) * CHACHA20_BLOCK_SIZE.		\
 	 */									\
 	type entropy[CHACHA20_BLOCK_SIZE * 3 / (2 * sizeof(type))];		\
+	struct local_irq_lock lock;						\
 	unsigned long generation;						\
 	unsigned int position;							\
 };										\
 										\
 static DEFINE_PER_CPU(struct batch_ ##type, batched_entropy_ ##type) = {	\
+	.lock.lock = __SPIN_LOCK_UNLOCKED(batched_entropy_ ##type.lock.lock),	\
 	.position = UINT_MAX							\
 };										\
 										\
@@ -528,7 +533,7 @@ type get_random_ ##type(void)							\
 		return ret;							\
 	}									\
 										\
-	local_irq_save(flags);		\
+	local_lock_irqsave(batched_entropy_ ##type.lock, flags);		\
 	batch = raw_cpu_ptr(&batched_entropy_##type);				\
 										\
 	next_gen = READ_ONCE(base_crng.generation);				\
@@ -542,7 +547,7 @@ type get_random_ ##type(void)							\
 	ret = batch->entropy[batch->position];					\
 	batch->entropy[batch->position] = 0;					\
 	++batch->position;							\
-	local_irq_restore(flags);		\
+	local_unlock_irqrestore(batched_entropy_ ##type.lock, flags);		\
 	return ret;								\
 }										\
 EXPORT_SYMBOL(get_random_ ##type);
-- 
2.37.2



* [PATCH 2/9] random: schedule mix_interrupt_randomness() less often
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 1/9] random: Bring back the local_locks Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 3/9] Revert "workqueue: Use local irq lock instead of irq disable regions" Sebastian Andrzej Siewior
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

From: "Jason A. Donenfeld" <Jason@zx2c4.com>

Upstream commit 534d2eaf1970274150596fdd2bf552721e65d6b2

It used to be that mix_interrupt_randomness() would credit 1 bit each
time it ran, and so add_interrupt_randomness() would schedule mix() to
run every 64 interrupts, a fairly arbitrary number, but nonetheless
considered to be a decent enough conservative estimate.

Since e3e33fc2ea7f ("random: do not use input pool from hard IRQs"),
mix() is now able to credit multiple bits, depending on the number of
calls to add(). This was done for reasons separate from this commit, but
it has the nice side effect of enabling this patch to schedule mix()
less often.

Currently the rules are:
a) Credit 1 bit for every 64 calls to add().
b) Schedule mix() once per second in which add() is called.
c) Schedule mix() once every 64 calls to add().

Rules (a) and (c) no longer need to be coupled. It's still important to
have _some_ value in (c), so that we don't "over-saturate" the fast
pool, but the once per second we get from rule (b) is a plenty enough
baseline. So, by increasing the 64 in rule (c) to something larger, we
avoid calling queue_work_on() as frequently during irq storms.

This commit changes that 64 in rule (c) to be 1024, which means we
schedule mix() 16 times less often. And it does *not* need to change the
64 in rule (a).
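
As a rough, self-contained model of the resulting gating (a hypothetical
helper, not the kernel function; 'count' stands for the per-CPU fast
pool call counter and 'last' for the jiffies stamp of the last mix):

  #include <linux/jiffies.h>
  #include <linux/types.h>

  /* true when mix_interrupt_randomness() should be queued */
  static bool mix_due(unsigned long count, unsigned long last)
  {
          /* rule (c): 1024 calls accumulated, or rule (b): a second passed */
          return count >= 1024 || time_is_before_jiffies(last + HZ);
  }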

Fixes: 58340f8e952b ("random: defer fast pool mixing to worker")
Cc: stable@vger.kernel.org
Cc: Dominik Brodowski <linux@dominikbrodowski.net>
Acked-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 drivers/char/random.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/char/random.c b/drivers/char/random.c
index 860dc427000e9..40c97d09aeadc 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1002,7 +1002,7 @@ void add_interrupt_randomness(int irq)
 	if (new_count & MIX_INFLIGHT)
 		return;
 
-	if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))
+	if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))
 		return;
 
 	if (unlikely(!fast_pool->mix.func))
-- 
2.37.2



* [PATCH 3/9] Revert "workqueue: Use local irq lock instead of irq disable regions"
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 1/9] random: Bring back the local_locks Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 2/9] random: schedule mix_interrupt_randomness() less often Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 4/9] Revert "workqueue: Prevent deadlock/stall on RT" Sebastian Andrzej Siewior
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

This reverts the PREEMPT_RT related changes to workqueue. It reverts the
usage of local_locks() and cpu_chill().

This is a preparation to pull in the PREEMPT_RT related changes which
were merged upstream.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/workqueue.c | 36 +++++++++++++++---------------------
 1 file changed, 15 insertions(+), 21 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index a4bb53e2e26a1..0c2c383eb7d0e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -48,8 +48,6 @@
 #include <linux/nodemask.h>
 #include <linux/moduleparam.h>
 #include <linux/uaccess.h>
-#include <linux/locallock.h>
-#include <linux/delay.h>
 #include <linux/nmi.h>
 #include <linux/kvm_para.h>
 
@@ -360,8 +358,6 @@ EXPORT_SYMBOL_GPL(system_power_efficient_wq);
 struct workqueue_struct *system_freezable_power_efficient_wq __read_mostly;
 EXPORT_SYMBOL_GPL(system_freezable_power_efficient_wq);
 
-static DEFINE_LOCAL_IRQ_LOCK(pendingb_lock);
-
 static int worker_thread(void *__worker);
 static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
 
@@ -1132,11 +1128,9 @@ static void put_pwq_unlocked(struct pool_workqueue *pwq)
 		 * As both pwqs and pools are RCU protected, the
 		 * following lock operations are safe.
 		 */
-		rcu_read_lock();
-		local_spin_lock_irq(pendingb_lock, &pwq->pool->lock);
+		spin_lock_irq(&pwq->pool->lock);
 		put_pwq(pwq);
-		local_spin_unlock_irq(pendingb_lock, &pwq->pool->lock);
-		rcu_read_unlock();
+		spin_unlock_irq(&pwq->pool->lock);
 	}
 }
 
@@ -1240,7 +1234,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
 	struct worker_pool *pool;
 	struct pool_workqueue *pwq;
 
-	local_lock_irqsave(pendingb_lock, *flags);
+	local_irq_save(*flags);
 
 	/* try to steal the timer if it exists */
 	if (is_dwork) {
@@ -1304,10 +1298,10 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
 	spin_unlock(&pool->lock);
 fail:
 	rcu_read_unlock();
-	local_unlock_irqrestore(pendingb_lock, *flags);
+	local_irq_restore(*flags);
 	if (work_is_canceling(work))
 		return -ENOENT;
-	cpu_chill();
+	cpu_relax();
 	return -EAGAIN;
 }
 
@@ -1409,7 +1403,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
 	 * queued or lose PENDING.  Grabbing PENDING and queueing should
 	 * happen with IRQ disabled.
 	 */
-	WARN_ON_ONCE_NONRT(!irqs_disabled());
+	WARN_ON_ONCE(!irqs_disabled());
 
 
 	/* if draining, only works from the same workqueue are allowed */
@@ -1517,14 +1511,14 @@ bool queue_work_on(int cpu, struct workqueue_struct *wq,
 	bool ret = false;
 	unsigned long flags;
 
-	local_lock_irqsave(pendingb_lock,flags);
+	local_irq_save(flags);
 
 	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
 		__queue_work(cpu, wq, work);
 		ret = true;
 	}
 
-	local_unlock_irqrestore(pendingb_lock, flags);
+	local_irq_restore(flags);
 	return ret;
 }
 EXPORT_SYMBOL(queue_work_on);
@@ -1592,14 +1586,14 @@ bool queue_delayed_work_on(int cpu, struct workqueue_struct *wq,
 	unsigned long flags;
 
 	/* read the comment in __queue_work() */
-	local_lock_irqsave(pendingb_lock, flags);
+	local_irq_save(flags);
 
 	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
 		__queue_delayed_work(cpu, wq, dwork, delay);
 		ret = true;
 	}
 
-	local_unlock_irqrestore(pendingb_lock, flags);
+	local_irq_restore(flags);
 	return ret;
 }
 EXPORT_SYMBOL(queue_delayed_work_on);
@@ -1634,7 +1628,7 @@ bool mod_delayed_work_on(int cpu, struct workqueue_struct *wq,
 
 	if (likely(ret >= 0)) {
 		__queue_delayed_work(cpu, wq, dwork, delay);
-		local_unlock_irqrestore(pendingb_lock, flags);
+		local_irq_restore(flags);
 	}
 
 	/* -ENOENT from try_to_grab_pending() becomes %true */
@@ -2963,7 +2957,7 @@ static bool __cancel_work_timer(struct work_struct *work, bool is_dwork)
 
 	/* tell other tasks trying to grab @work to back off */
 	mark_work_canceling(work);
-	local_unlock_irqrestore(pendingb_lock, flags);
+	local_irq_restore(flags);
 
 	/*
 	 * This allows canceling during early boot.  We know that @work
@@ -3024,10 +3018,10 @@ EXPORT_SYMBOL_GPL(cancel_work_sync);
  */
 bool flush_delayed_work(struct delayed_work *dwork)
 {
-	local_lock_irq(pendingb_lock);
+	local_irq_disable();
 	if (del_timer_sync(&dwork->timer))
 		__queue_work(dwork->cpu, dwork->wq, &dwork->work);
-	local_unlock_irq(pendingb_lock);
+	local_irq_enable();
 	return flush_work(&dwork->work);
 }
 EXPORT_SYMBOL(flush_delayed_work);
@@ -3045,7 +3039,7 @@ static bool __cancel_work(struct work_struct *work, bool is_dwork)
 		return false;
 
 	set_work_pool_and_clear_pending(work, get_work_pool_id(work));
-	local_unlock_irqrestore(pendingb_lock, flags);
+	local_irq_restore(flags);
 	return ret;
 }
 
-- 
2.37.2



* [PATCH 4/9] Revert "workqueue: Prevent deadlock/stall on RT"
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
                   ` (2 preceding siblings ...)
  2022-08-19  9:24 ` [PATCH 3/9] Revert "workqueue: Use local irq lock instead of irq disable regions" Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 5/9] timers: Keep interrupts disabled for TIMER_IRQSAFE timer Sebastian Andrzej Siewior
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

This reverts the PREEMPT_RT related changes to workqueue. It reverts the
extra locking around the worker pool's idle list, which will soon become
obsolete.
The sched/core.c changes, which were introduced in the original commit,
must remain, as the following patches rely on them.

This is a preparation to pull in the PREEMPT_RT related changes which
were merged upstream.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/workqueue.c | 60 ++++++++++------------------------------------
 1 file changed, 13 insertions(+), 47 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0c2c383eb7d0e..6b3f3f54a05e9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -124,11 +124,6 @@ enum {
  *    cpu or grabbing pool->lock is enough for read access.  If
  *    POOL_DISASSOCIATED is set, it's identical to L.
  *
- *    On RT we need the extra protection via rt_lock_idle_list() for
- *    the list manipulations against read access from
- *    wq_worker_sleeping(). All other places are nicely serialized via
- *    pool->lock.
- *
  * A: pool->attach_mutex protected.
  *
  * PL: wq_pool_mutex protected.
@@ -434,31 +429,6 @@ static void workqueue_sysfs_unregister(struct workqueue_struct *wq);
 		if (({ assert_rcu_or_wq_mutex(wq); false; })) { }	\
 		else
 
-#ifdef CONFIG_PREEMPT_RT_BASE
-static inline void rt_lock_idle_list(struct worker_pool *pool)
-{
-	preempt_disable();
-}
-static inline void rt_unlock_idle_list(struct worker_pool *pool)
-{
-	preempt_enable();
-}
-static inline void sched_lock_idle_list(struct worker_pool *pool) { }
-static inline void sched_unlock_idle_list(struct worker_pool *pool) { }
-#else
-static inline void rt_lock_idle_list(struct worker_pool *pool) { }
-static inline void rt_unlock_idle_list(struct worker_pool *pool) { }
-static inline void sched_lock_idle_list(struct worker_pool *pool)
-{
-	spin_lock_irq(&pool->lock);
-}
-static inline void sched_unlock_idle_list(struct worker_pool *pool)
-{
-	spin_unlock_irq(&pool->lock);
-}
-#endif
-
-
 #ifdef CONFIG_DEBUG_OBJECTS_WORK
 
 static struct debug_obj_descr work_debug_descr;
@@ -865,16 +835,10 @@ static struct worker *first_idle_worker(struct worker_pool *pool)
  */
 static void wake_up_worker(struct worker_pool *pool)
 {
-	struct worker *worker;
-
-	rt_lock_idle_list(pool);
-
-	worker = first_idle_worker(pool);
+	struct worker *worker = first_idle_worker(pool);
 
 	if (likely(worker))
 		wake_up_process(worker->task);
-
-	rt_unlock_idle_list(pool);
 }
 
 /**
@@ -903,7 +867,7 @@ void wq_worker_running(struct task_struct *task)
  */
 void wq_worker_sleeping(struct task_struct *task)
 {
-	struct worker *worker = kthread_data(task);
+	struct worker *next, *worker = kthread_data(task);
 	struct worker_pool *pool;
 
 	/*
@@ -920,18 +884,26 @@ void wq_worker_sleeping(struct task_struct *task)
 		return;
 
 	worker->sleeping = 1;
+	spin_lock_irq(&pool->lock);
 
 	/*
 	 * The counterpart of the following dec_and_test, implied mb,
 	 * worklist not empty test sequence is in insert_work().
 	 * Please read comment there.
+	 *
+	 * NOT_RUNNING is clear.  This means that we're bound to and
+	 * running on the local cpu w/ rq lock held and preemption
+	 * disabled, which in turn means that none else could be
+	 * manipulating idle_list, so dereferencing idle_list without pool
+	 * lock is safe.
 	 */
 	if (atomic_dec_and_test(&pool->nr_running) &&
 	    !list_empty(&pool->worklist)) {
-		sched_lock_idle_list(pool);
-		wake_up_worker(pool);
-		sched_unlock_idle_list(pool);
+		next = first_idle_worker(pool);
+		if (next)
+			wake_up_process(next->task);
 	}
+	spin_unlock_irq(&pool->lock);
 }
 
 /**
@@ -1661,9 +1633,7 @@ static void worker_enter_idle(struct worker *worker)
 	worker->last_active = jiffies;
 
 	/* idle_list is LIFO */
-	rt_lock_idle_list(pool);
 	list_add(&worker->entry, &pool->idle_list);
-	rt_unlock_idle_list(pool);
 
 	if (too_many_workers(pool) && !timer_pending(&pool->idle_timer))
 		mod_timer(&pool->idle_timer, jiffies + IDLE_WORKER_TIMEOUT);
@@ -1696,9 +1666,7 @@ static void worker_leave_idle(struct worker *worker)
 		return;
 	worker_clr_flags(worker, WORKER_IDLE);
 	pool->nr_idle--;
-	rt_lock_idle_list(pool);
 	list_del_init(&worker->entry);
-	rt_unlock_idle_list(pool);
 }
 
 static struct worker *alloc_worker(int node)
@@ -1864,9 +1832,7 @@ static void destroy_worker(struct worker *worker)
 	pool->nr_workers--;
 	pool->nr_idle--;
 
-	rt_lock_idle_list(pool);
 	list_del_init(&worker->entry);
-	rt_unlock_idle_list(pool);
 	worker->flags |= WORKER_DIE;
 	wake_up_process(worker->task);
 }
-- 
2.37.2



* [PATCH 5/9] timers: Keep interrupts disabled for TIMER_IRQSAFE timer.
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
                   ` (3 preceding siblings ...)
  2022-08-19  9:24 ` [PATCH 4/9] Revert "workqueue: Prevent deadlock/stall on RT" Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 6/9] timers: Don't block on ->expiry_lock for TIMER_IRQSAFE timers Sebastian Andrzej Siewior
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

Keep interrupts disabled across the callback invocation for
TIMER_IRQSAFE timers, as expected.
This is required for the timer used by the workqueue code after the
upcoming rework.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/time/timer.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index af9d338103a9b..613139c7538eb 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1381,8 +1381,7 @@ static void expire_timers(struct timer_base *base, struct hlist_head *head)
 		fn = timer->function;
 		data = timer->data;
 
-		if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL) &&
-		    timer->flags & TIMER_IRQSAFE) {
+		if (timer->flags & TIMER_IRQSAFE) {
 			raw_spin_unlock(&base->lock);
 			call_timer_fn(timer, fn, data);
 			base->running_timer = NULL;
-- 
2.37.2



* [PATCH 6/9] timers: Don't block on ->expiry_lock for TIMER_IRQSAFE timers
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
                   ` (4 preceding siblings ...)
  2022-08-19  9:24 ` [PATCH 5/9] timers: Keep interrupts disabled for TIMER_IRQSAFE timer Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 7/9] rcu: Introduce rcuwait Sebastian Andrzej Siewior
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

Upstream commit c725dafc95f1b37027840aaeaa8b7e4e9cd20516

PREEMPT_RT does not spin and wait until a running timer completes its
callback but instead it blocks on a sleeping lock to prevent a livelock in
the case that the task waiting for the callback completion preempted the
callback.

This cannot be done for timers flagged with TIMER_IRQSAFE. These timers can
be canceled from an interrupt disabled context even on RT kernels.

The expiry callback of such timers is invoked with interrupts disabled so
there is no need to use the expiry lock mechanism because obviously the
callback cannot be preempted even on RT kernels.

Do not use the timer_base::expiry_lock mechanism when waiting for a running
callback to complete if the timer is flagged with TIMER_IRQSAFE.

Also add a lockdep assertion for RT kernels to validate that the expiry
lock mechanism is always invoked in preemptible context.

[bigeasy: The logic in v4.9 is slightly different but the outcome is the
   same: we must not sleep while waiting for the irqsafe timer to
   complete. The IRQSAFE timer cannot be preempted.
   The "lockdep annotation" is not available and has been replaced with
   might_sleep()]
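
A small usage sketch of the rule being enforced (hypothetical timer,
assuming the v4.9 data-callback timer API with __setup_timer() and the
TIMER_IRQSAFE flag):

  #include <linux/timer.h>

  static void my_timer_fn(unsigned long data)
  {
          /* runs with interrupts disabled because of TIMER_IRQSAFE */
  }

  static struct timer_list my_timer;

  static void my_timer_init(void)
  {
          __setup_timer(&my_timer, my_timer_fn, 0, TIMER_IRQSAFE);
  }

  static void my_timer_stop(void)
  {
          /* may be called with interrupts disabled, even on RT: an
           * IRQSAFE timer takes no expiry_lock and never sleeps here */
          del_timer_sync(&my_timer);
  }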

Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20201103190937.hga67rqhvknki3tp@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/time/timer.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/kernel/time/timer.c b/kernel/time/timer.c
index 613139c7538eb..401917af2abcf 100644
--- a/kernel/time/timer.c
+++ b/kernel/time/timer.c
@@ -1179,9 +1179,9 @@ EXPORT_SYMBOL_GPL(add_timer_on);
 static void wait_for_running_timer(struct timer_list *timer)
 {
 	struct timer_base *base;
-	u32 tf = timer->flags;
+	u32 tf = READ_ONCE(timer->flags);
 
-	if (tf & TIMER_MIGRATING)
+	if (tf & (TIMER_MIGRATING | TIMER_IRQSAFE))
 		return;
 
 	base = get_timer_base(tf);
@@ -1312,6 +1312,13 @@ int del_timer_sync(struct timer_list *timer)
 	 * could lead to deadlock.
 	 */
 	WARN_ON(in_irq() && !(timer->flags & TIMER_IRQSAFE));
+	/*
+	 * Must be able to sleep on PREEMPT_RT because of the slowpath in
+	 * del_timer_wait_running().
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) && !(timer->flags & TIMER_IRQSAFE))
+		might_sleep();
+
 	for (;;) {
 		int ret = try_to_del_timer_sync(timer);
 		if (ret >= 0)
-- 
2.37.2



* [PATCH 7/9] rcu: Introduce rcuwait.
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
                   ` (5 preceding siblings ...)
  2022-08-19  9:24 ` [PATCH 6/9] timers: Don't block on ->expiry_lock for TIMER_IRQSAFE timers Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 8/9] workqueue: Use rcuwait for wq_manager_wait Sebastian Andrzej Siewior
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

This is an all-in-one commit backporting rcuwait:
- update.c, rcuwait.h as of commit
   58d4292bd037b ("rcu: Uninline multi-use function: finish_rcuwait()")
- exit.c as of commit
   9d9a6ebfea329 ("rcuwait: Let rcuwait_wake_up() return whether or not a task was awoken")
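
A minimal usage sketch for the API added below (hypothetical waiter and
waker; the real user in this series is the workqueue manager wait in the
next patch). The caller serializes concurrent waiters, rcuwait only
provides the RCU-safe sleep/wakeup:

  #include <linux/rcuwait.h>
  #include <linux/compiler.h>
  #include <linux/sched.h>
  #include <linux/types.h>

  static struct rcuwait my_wait = __RCUWAIT_INITIALIZER(my_wait);
  static bool my_done;

  static void waiter(void)                /* at most one waiter at a time */
  {
          /* sleeps until my_done is observed true */
          rcuwait_wait_event(&my_wait, READ_ONCE(my_done),
                             TASK_UNINTERRUPTIBLE);
  }

  static void waker(void)
  {
          WRITE_ONCE(my_done, true);
          /* pairs with the (A)/(B) barriers documented in the code below */
          rcuwait_wake_up(&my_wait);
  }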

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 include/linux/rcuwait.h | 76 +++++++++++++++++++++++++++++++++++++++++
 kernel/exit.c           | 30 ++++++++++++++++
 kernel/rcu/update.c     |  8 +++++
 3 files changed, 114 insertions(+)
 create mode 100644 include/linux/rcuwait.h

diff --git a/include/linux/rcuwait.h b/include/linux/rcuwait.h
new file mode 100644
index 0000000000000..12a9d1ad01ccb
--- /dev/null
+++ b/include/linux/rcuwait.h
@@ -0,0 +1,76 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_RCUWAIT_H_
+#define _LINUX_RCUWAIT_H_
+
+#include <linux/rcupdate.h>
+#include <linux/sched.h>
+
+/*
+ * rcuwait provides a way of blocking and waking up a single
+ * task in an rcu-safe manner.
+ *
+ * The only time @task is non-nil is when a user is blocked (or
+ * checking if it needs to) on a condition, and reset as soon as we
+ * know that the condition has succeeded and are awoken.
+ */
+struct rcuwait {
+	struct task_struct __rcu *task;
+};
+
+#define __RCUWAIT_INITIALIZER(name)		\
+	{ .task = NULL, }
+
+static inline void rcuwait_init(struct rcuwait *w)
+{
+	w->task = NULL;
+}
+
+/*
+ * Note: this provides no serialization and, just as with waitqueues,
+ * requires care to estimate as to whether or not the wait is active.
+ */
+static inline int rcuwait_active(struct rcuwait *w)
+{
+	return !!rcu_access_pointer(w->task);
+}
+
+extern int rcuwait_wake_up(struct rcuwait *w);
+
+/*
+ * The caller is responsible for locking around rcuwait_wait_event(),
+ * and [prepare_to/finish]_rcuwait() such that writes to @task are
+ * properly serialized.
+ */
+
+static inline void prepare_to_rcuwait(struct rcuwait *w)
+{
+	rcu_assign_pointer(w->task, current);
+}
+
+extern void finish_rcuwait(struct rcuwait *w);
+
+#define rcuwait_wait_event(w, condition, state)				\
+({									\
+	int __ret = 0;							\
+	prepare_to_rcuwait(w);						\
+	for (;;) {							\
+		/*							\
+		 * Implicit barrier (A) pairs with (B) in		\
+		 * rcuwait_wake_up().					\
+		 */							\
+		set_current_state(state);				\
+		if (condition)						\
+			break;						\
+									\
+		if (signal_pending_state(state, current)) {		\
+			__ret = -EINTR;					\
+			break;						\
+		}							\
+									\
+		schedule();						\
+	}								\
+	finish_rcuwait(w);						\
+	__ret;								\
+})
+
+#endif /* _LINUX_RCUWAIT_H_ */
diff --git a/kernel/exit.c b/kernel/exit.c
index 89e38e5929a9c..6cfbde6a83629 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -54,6 +54,7 @@
 #include <linux/writeback.h>
 #include <linux/shm.h>
 #include <linux/kcov.h>
+#include <linux/rcuwait.h>
 
 #include <asm/uaccess.h>
 #include <asm/unistd.h>
@@ -286,6 +287,35 @@ struct task_struct *try_get_task_struct(struct task_struct **ptask)
 	return task;
 }
 
+int rcuwait_wake_up(struct rcuwait *w)
+{
+	int ret = 0;
+	struct task_struct *task;
+
+	rcu_read_lock();
+
+	/*
+	 * Order condition vs @task, such that everything prior to the load
+	 * of @task is visible. This is the condition as to why the user called
+	 * rcuwait_wake() in the first place. Pairs with set_current_state()
+	 * barrier (A) in rcuwait_wait_event().
+	 *
+	 *    WAIT                WAKE
+	 *    [S] tsk = current   [S] cond = true
+	 *        MB (A)              MB (B)
+	 *    [L] cond            [L] tsk
+	 */
+	smp_mb(); /* (B) */
+
+	task = rcu_dereference(w->task);
+	if (task)
+		ret = wake_up_process(task);
+	rcu_read_unlock();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(rcuwait_wake_up);
+
 /*
  * Determine if a process group is "orphaned", according to the POSIX
  * definition in 2.2.2.52.  Orphaned process groups are not to be affected
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index ee02e1e1b3e57..c4ffd7ead78e2 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -49,6 +49,7 @@
 #include <linux/moduleparam.h>
 #include <linux/kthread.h>
 #include <linux/tick.h>
+#include <linux/rcuwait.h>
 
 #define CREATE_TRACE_POINTS
 
@@ -372,6 +373,13 @@ void __wait_rcu_gp(bool checktiny, int n, call_rcu_func_t *crcu_array,
 }
 EXPORT_SYMBOL_GPL(__wait_rcu_gp);
 
+void finish_rcuwait(struct rcuwait *w)
+{
+	rcu_assign_pointer(w->task, NULL);
+	__set_current_state(TASK_RUNNING);
+}
+EXPORT_SYMBOL_GPL(finish_rcuwait);
+
 #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
 void init_rcu_head(struct rcu_head *head)
 {
-- 
2.37.2



* [PATCH 8/9] workqueue: Use rcuwait for wq_manager_wait
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
                   ` (6 preceding siblings ...)
  2022-08-19  9:24 ` [PATCH 7/9] rcu: Introduce rcuwait Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19  9:24 ` [PATCH 9/9] workqueue: Convert the pool::lock and wq_mayday_lock to raw_spinlock_t Sebastian Andrzej Siewior
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

Upstream commit d8bb65ab70f702531aaaa11d9710f9450078e295

The workqueue code has its internal spinlock (pool::lock) and also
implicit spinlock usage in the wq_manager waitqueue. These spinlocks
are converted to 'sleeping' spinlocks on an RT kernel.

Workqueue functions can be invoked from contexts which are truly atomic
even on a PREEMPT_RT enabled kernel. Taking sleeping locks from such
contexts is forbidden.

pool::lock can be converted to a raw spinlock as the lock hold times
are short. But the workqueue manager waitqueue is handled inside
pool::lock held regions, which again violates the lock nesting rules
of raw and regular spinlocks.

The manager waitqueue has no special requirements like custom wakeup
callbacks or mass wakeups. While it does not use exclusive wait mode
explicitly there is no strict requirement to queue the waiters in a
particular order as there is only one waiter at a time.

This allows replacing the waitqueue with rcuwait, which solves the
locking problem because rcuwait relies on existing locking.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/workqueue.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 6b3f3f54a05e9..f494ffe3551fe 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -50,6 +50,7 @@
 #include <linux/uaccess.h>
 #include <linux/nmi.h>
 #include <linux/kvm_para.h>
+#include <linux/rcuwait.h>
 
 #include "workqueue_internal.h"
 
@@ -301,7 +302,8 @@ static struct workqueue_attrs *wq_update_unbound_numa_attrs_buf;
 
 static DEFINE_MUTEX(wq_pool_mutex);	/* protects pools and workqueues list */
 static DEFINE_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
-static DECLARE_WAIT_QUEUE_HEAD(wq_manager_wait); /* wait for manager to go away */
+/* wait for manager to go away */
+static struct rcuwait manager_wait = __RCUWAIT_INITIALIZER(manager_wait);
 
 static LIST_HEAD(workqueues);		/* PR: list of all workqueues */
 static bool workqueue_freezing;		/* PL: have wqs started freezing? */
@@ -1995,7 +1997,7 @@ static bool manage_workers(struct worker *worker)
 
 	pool->manager = NULL;
 	pool->flags &= ~POOL_MANAGER_ACTIVE;
-	wake_up(&wq_manager_wait);
+	rcuwait_wake_up(&manager_wait);
 	return true;
 }
 
@@ -3258,6 +3260,18 @@ static void rcu_free_pool(struct rcu_head *rcu)
 	kfree(pool);
 }
 
+/* This returns with the lock held on success (pool manager is inactive). */
+static bool wq_manager_inactive(struct worker_pool *pool)
+{
+	spin_lock_irq(&pool->lock);
+
+	if (pool->flags & POOL_MANAGER_ACTIVE) {
+		spin_unlock_irq(&pool->lock);
+		return false;
+	}
+	return true;
+}
+
 /**
  * put_unbound_pool - put a worker_pool
  * @pool: worker_pool to put
@@ -3293,10 +3307,11 @@ static void put_unbound_pool(struct worker_pool *pool)
 	 * Become the manager and destroy all workers.  This prevents
 	 * @pool's workers from blocking on attach_mutex.  We're the last
 	 * manager and @pool gets freed with the flag set.
+	 * Because of how wq_manager_inactive() works, we will hold the
+	 * spinlock after a successful wait.
 	 */
-	spin_lock_irq(&pool->lock);
-	wait_event_lock_irq(wq_manager_wait,
-			    !(pool->flags & POOL_MANAGER_ACTIVE), pool->lock);
+	rcuwait_wait_event(&manager_wait, wq_manager_inactive(pool),
+			   TASK_UNINTERRUPTIBLE);
 	pool->flags |= POOL_MANAGER_ACTIVE;
 
 	while ((worker = first_idle_worker(pool)))
-- 
2.37.2



* [PATCH 9/9] workqueue: Convert the pool::lock and wq_mayday_lock to raw_spinlock_t
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
                   ` (7 preceding siblings ...)
  2022-08-19  9:24 ` [PATCH 8/9] workqueue: Use rcuwait for wq_manager_wait Sebastian Andrzej Siewior
@ 2022-08-19  9:24 ` Sebastian Andrzej Siewior
  2022-08-19 15:14 ` [PATCH 0/9 v4.9-RT] Backports to fix random core Mark Gross
  2022-09-01 21:16 ` Mark Gross
  10 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-08-19  9:24 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

Upstream commit a9b8a985294debae00f6c087dfec8c384d30a3b9

The workqueue code has its internal spinlocks (pool::lock), which
are acquired on most workqueue operations. These spinlocks are
converted to 'sleeping' spinlocks on an RT kernel.

Workqueue functions can be invoked from contexts which are truly atomic
even on a PREEMPT_RT enabled kernel. Taking sleeping locks from such
contexts is forbidden.

The pool::lock hold times are bounded and the code sections are
relatively short, which allows converting pool::lock, and as a
consequence wq_mayday_lock, to raw spinlocks, which are truly spinning
locks even on a PREEMPT_RT kernel.

With the previous conversion of the manager waitqueue to rcuwait,
workqueues are now fully RT compliant.
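
For background, a generic sketch of the distinction this relies on (not
taken from this patch): spinlock_t becomes a sleeping lock on
PREEMPT_RT, while raw_spinlock_t keeps the non-sleeping,
preemption-disabling behaviour and may be taken with interrupts
hard-disabled:

  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(example_lock);       /* hypothetical lock */

  static void example_atomic_path(void)
  {
          unsigned long flags;

          /* spins and disables interrupts even on PREEMPT_RT, so the
           * critical section must stay short and bounded */
          raw_spin_lock_irqsave(&example_lock, flags);
          /* ... short critical section ... */
          raw_spin_unlock_irqrestore(&example_lock, flags);
  }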

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/workqueue.c | 166 ++++++++++++++++++++++-----------------------
 1 file changed, 83 insertions(+), 83 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index f494ffe3551fe..5677417f449f9 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -146,7 +146,7 @@ enum {
 /* struct worker is defined in workqueue_internal.h */
 
 struct worker_pool {
-	spinlock_t		lock;		/* the pool lock */
+	raw_spinlock_t		lock;		/* the pool lock */
 	int			cpu;		/* I: the associated cpu */
 	int			node;		/* I: the associated node ID */
 	int			id;		/* I: pool ID */
@@ -301,7 +301,7 @@ static bool wq_numa_enabled;		/* unbound NUMA affinity enabled */
 static struct workqueue_attrs *wq_update_unbound_numa_attrs_buf;
 
 static DEFINE_MUTEX(wq_pool_mutex);	/* protects pools and workqueues list */
-static DEFINE_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
+static DEFINE_RAW_SPINLOCK(wq_mayday_lock);	/* protects wq->maydays list */
 /* wait for manager to go away */
 static struct rcuwait manager_wait = __RCUWAIT_INITIALIZER(manager_wait);
 
@@ -833,7 +833,7 @@ static struct worker *first_idle_worker(struct worker_pool *pool)
  * Wake up the first idle worker of @pool.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void wake_up_worker(struct worker_pool *pool)
 {
@@ -886,7 +886,7 @@ void wq_worker_sleeping(struct task_struct *task)
 		return;
 
 	worker->sleeping = 1;
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	/*
 	 * The counterpart of the following dec_and_test, implied mb,
@@ -905,7 +905,7 @@ void wq_worker_sleeping(struct task_struct *task)
 		if (next)
 			wake_up_process(next->task);
 	}
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 }
 
 /**
@@ -916,7 +916,7 @@ void wq_worker_sleeping(struct task_struct *task)
  * Set @flags in @worker->flags and adjust nr_running accordingly.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock)
+ * raw_spin_lock_irq(pool->lock)
  */
 static inline void worker_set_flags(struct worker *worker, unsigned int flags)
 {
@@ -941,7 +941,7 @@ static inline void worker_set_flags(struct worker *worker, unsigned int flags)
  * Clear @flags in @worker->flags and adjust nr_running accordingly.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock)
+ * raw_spin_lock_irq(pool->lock)
  */
 static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
 {
@@ -989,7 +989,7 @@ static inline void worker_clr_flags(struct worker *worker, unsigned int flags)
  * actually occurs, it should be easy to locate the culprit work function.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  *
  * Return:
  * Pointer to worker which is executing @work if found, %NULL
@@ -1024,7 +1024,7 @@ static struct worker *find_worker_executing_work(struct worker_pool *pool,
  * nested inside outer list_for_each_entry_safe().
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void move_linked_works(struct work_struct *work, struct list_head *head,
 			      struct work_struct **nextp)
@@ -1102,9 +1102,9 @@ static void put_pwq_unlocked(struct pool_workqueue *pwq)
 		 * As both pwqs and pools are RCU protected, the
 		 * following lock operations are safe.
 		 */
-		spin_lock_irq(&pwq->pool->lock);
+		raw_spin_lock_irq(&pwq->pool->lock);
 		put_pwq(pwq);
-		spin_unlock_irq(&pwq->pool->lock);
+		raw_spin_unlock_irq(&pwq->pool->lock);
 	}
 }
 
@@ -1137,7 +1137,7 @@ static void pwq_activate_first_delayed(struct pool_workqueue *pwq)
  * decrement nr_in_flight of its pwq and handle workqueue flushing.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void pwq_dec_nr_in_flight(struct pool_workqueue *pwq, int color)
 {
@@ -1236,7 +1236,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
 	if (!pool)
 		goto fail;
 
-	spin_lock(&pool->lock);
+	raw_spin_lock(&pool->lock);
 	/*
 	 * work->data is guaranteed to point to pwq only while the work
 	 * item is queued on pwq->wq, and both updating work->data to point
@@ -1265,11 +1265,11 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
 		/* work->data points to pwq iff queued, point to pool */
 		set_work_pool_and_keep_pending(work, pool->id);
 
-		spin_unlock(&pool->lock);
+		raw_spin_unlock(&pool->lock);
 		rcu_read_unlock();
 		return 1;
 	}
-	spin_unlock(&pool->lock);
+	raw_spin_unlock(&pool->lock);
 fail:
 	rcu_read_unlock();
 	local_irq_restore(*flags);
@@ -1290,7 +1290,7 @@ static int try_to_grab_pending(struct work_struct *work, bool is_dwork,
  * work_struct flags.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void insert_work(struct pool_workqueue *pwq, struct work_struct *work,
 			struct list_head *head, unsigned int extra_flags)
@@ -1406,7 +1406,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
 	if (last_pool && last_pool != pwq->pool) {
 		struct worker *worker;
 
-		spin_lock(&last_pool->lock);
+		raw_spin_lock(&last_pool->lock);
 
 		worker = find_worker_executing_work(last_pool, work);
 
@@ -1414,11 +1414,11 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
 			pwq = worker->current_pwq;
 		} else {
 			/* meh... not running there, queue here */
-			spin_unlock(&last_pool->lock);
-			spin_lock(&pwq->pool->lock);
+			raw_spin_unlock(&last_pool->lock);
+			raw_spin_lock(&pwq->pool->lock);
 		}
 	} else {
-		spin_lock(&pwq->pool->lock);
+		raw_spin_lock(&pwq->pool->lock);
 	}
 
 	/*
@@ -1431,7 +1431,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
 	 */
 	if (unlikely(!pwq->refcnt)) {
 		if (wq->flags & WQ_UNBOUND) {
-			spin_unlock(&pwq->pool->lock);
+			raw_spin_unlock(&pwq->pool->lock);
 			cpu_relax();
 			goto retry;
 		}
@@ -1464,7 +1464,7 @@ static void __queue_work(int cpu, struct workqueue_struct *wq,
 	insert_work(pwq, work, worklist, work_flags);
 
 out:
-	spin_unlock(&pwq->pool->lock);
+	raw_spin_unlock(&pwq->pool->lock);
 	rcu_read_unlock();
 }
 
@@ -1618,7 +1618,7 @@ EXPORT_SYMBOL_GPL(mod_delayed_work_on);
  * necessary.
  *
  * LOCKING:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void worker_enter_idle(struct worker *worker)
 {
@@ -1658,7 +1658,7 @@ static void worker_enter_idle(struct worker *worker)
  * @worker is leaving idle state.  Update stats.
  *
  * LOCKING:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void worker_leave_idle(struct worker *worker)
 {
@@ -1794,11 +1794,11 @@ static struct worker *create_worker(struct worker_pool *pool)
 	worker_attach_to_pool(worker, pool);
 
 	/* start the newly created worker */
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 	worker->pool->nr_workers++;
 	worker_enter_idle(worker);
 	wake_up_process(worker->task);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	return worker;
 
@@ -1817,7 +1817,7 @@ static struct worker *create_worker(struct worker_pool *pool)
  * be idle.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void destroy_worker(struct worker *worker)
 {
@@ -1843,7 +1843,7 @@ static void idle_worker_timeout(unsigned long __pool)
 {
 	struct worker_pool *pool = (void *)__pool;
 
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	while (too_many_workers(pool)) {
 		struct worker *worker;
@@ -1861,7 +1861,7 @@ static void idle_worker_timeout(unsigned long __pool)
 		destroy_worker(worker);
 	}
 
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 }
 
 static void send_mayday(struct work_struct *work)
@@ -1892,8 +1892,8 @@ static void pool_mayday_timeout(unsigned long __pool)
 	struct worker_pool *pool = (void *)__pool;
 	struct work_struct *work;
 
-	spin_lock_irq(&pool->lock);
-	spin_lock(&wq_mayday_lock);		/* for wq->maydays */
+	raw_spin_lock_irq(&pool->lock);
+	raw_spin_lock(&wq_mayday_lock);		/* for wq->maydays */
 
 	if (need_to_create_worker(pool)) {
 		/*
@@ -1906,8 +1906,8 @@ static void pool_mayday_timeout(unsigned long __pool)
 			send_mayday(work);
 	}
 
-	spin_unlock(&wq_mayday_lock);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock(&wq_mayday_lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INTERVAL);
 }
@@ -1926,7 +1926,7 @@ static void pool_mayday_timeout(unsigned long __pool)
  * may_start_working() %true.
  *
  * LOCKING:
- * spin_lock_irq(pool->lock) which may be released and regrabbed
+ * raw_spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times.  Does GFP_KERNEL allocations.  Called only from
  * manager.
  */
@@ -1935,7 +1935,7 @@ __releases(&pool->lock)
 __acquires(&pool->lock)
 {
 restart:
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	/* if we don't make progress in MAYDAY_INITIAL_TIMEOUT, call for help */
 	mod_timer(&pool->mayday_timer, jiffies + MAYDAY_INITIAL_TIMEOUT);
@@ -1951,7 +1951,7 @@ __acquires(&pool->lock)
 	}
 
 	del_timer_sync(&pool->mayday_timer);
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 	/*
 	 * This is necessary even after a new worker was just successfully
 	 * created as @pool->lock was dropped and the new worker might have
@@ -1974,7 +1974,7 @@ __acquires(&pool->lock)
  * and may_start_working() is true.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock) which may be released and regrabbed
+ * raw_spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times.  Does GFP_KERNEL allocations.
  *
  * Return:
@@ -2013,7 +2013,7 @@ static bool manage_workers(struct worker *worker)
  * call this function to process a work.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock) which is released and regrabbed.
+ * raw_spin_lock_irq(pool->lock) which is released and regrabbed.
  */
 static void process_one_work(struct worker *worker, struct work_struct *work)
 __releases(&pool->lock)
@@ -2089,7 +2089,7 @@ __acquires(&pool->lock)
 	 */
 	set_work_pool_and_clear_pending(work, pool->id);
 
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	lock_map_acquire_read(&pwq->wq->lockdep_map);
 	lock_map_acquire(&lockdep_map);
@@ -2122,7 +2122,7 @@ __acquires(&pool->lock)
 	 */
 	cond_resched_rcu_qs();
 
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	/* clear cpu intensive status */
 	if (unlikely(cpu_intensive))
@@ -2146,7 +2146,7 @@ __acquires(&pool->lock)
  * fetches a work from the top and executes it.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock) which may be released and regrabbed
+ * raw_spin_lock_irq(pool->lock) which may be released and regrabbed
  * multiple times.
  */
 static void process_scheduled_works(struct worker *worker)
@@ -2178,11 +2178,11 @@ static int worker_thread(void *__worker)
 	/* tell the scheduler that this is a workqueue worker */
 	worker->task->flags |= PF_WQ_WORKER;
 woke_up:
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	/* am I supposed to die? */
 	if (unlikely(worker->flags & WORKER_DIE)) {
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 		WARN_ON_ONCE(!list_empty(&worker->entry));
 		worker->task->flags &= ~PF_WQ_WORKER;
 
@@ -2248,7 +2248,7 @@ static int worker_thread(void *__worker)
 	 */
 	worker_enter_idle(worker);
 	__set_current_state(TASK_INTERRUPTIBLE);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 	schedule();
 	goto woke_up;
 }
@@ -2302,7 +2302,7 @@ static int rescuer_thread(void *__rescuer)
 	should_stop = kthread_should_stop();
 
 	/* see whether any pwq is asking for help */
-	spin_lock_irq(&wq_mayday_lock);
+	raw_spin_lock_irq(&wq_mayday_lock);
 
 	while (!list_empty(&wq->maydays)) {
 		struct pool_workqueue *pwq = list_first_entry(&wq->maydays,
@@ -2314,11 +2314,11 @@ static int rescuer_thread(void *__rescuer)
 		__set_current_state(TASK_RUNNING);
 		list_del_init(&pwq->mayday_node);
 
-		spin_unlock_irq(&wq_mayday_lock);
+		raw_spin_unlock_irq(&wq_mayday_lock);
 
 		worker_attach_to_pool(rescuer, pool);
 
-		spin_lock_irq(&pool->lock);
+		raw_spin_lock_irq(&pool->lock);
 		rescuer->pool = pool;
 
 		/*
@@ -2348,7 +2348,7 @@ static int rescuer_thread(void *__rescuer)
 			 * incur MAYDAY_INTERVAL delay inbetween.
 			 */
 			if (need_to_create_worker(pool)) {
-				spin_lock(&wq_mayday_lock);
+				raw_spin_lock(&wq_mayday_lock);
 				/*
 				 * Queue iff we aren't racing destruction
 				 * and somebody else hasn't queued it already.
@@ -2357,7 +2357,7 @@ static int rescuer_thread(void *__rescuer)
 					get_pwq(pwq);
 					list_add_tail(&pwq->mayday_node, &wq->maydays);
 				}
-				spin_unlock(&wq_mayday_lock);
+				raw_spin_unlock(&wq_mayday_lock);
 			}
 		}
 
@@ -2376,14 +2376,14 @@ static int rescuer_thread(void *__rescuer)
 			wake_up_worker(pool);
 
 		rescuer->pool = NULL;
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 
 		worker_detach_from_pool(rescuer, pool);
 
-		spin_lock_irq(&wq_mayday_lock);
+		raw_spin_lock_irq(&wq_mayday_lock);
 	}
 
-	spin_unlock_irq(&wq_mayday_lock);
+	raw_spin_unlock_irq(&wq_mayday_lock);
 
 	if (should_stop) {
 		__set_current_state(TASK_RUNNING);
@@ -2463,7 +2463,7 @@ static void wq_barrier_func(struct work_struct *work)
  * underneath us, so we can't reliably determine pwq from @target.
  *
  * CONTEXT:
- * spin_lock_irq(pool->lock).
+ * raw_spin_lock_irq(pool->lock).
  */
 static void insert_wq_barrier(struct pool_workqueue *pwq,
 			      struct wq_barrier *barr,
@@ -2548,7 +2548,7 @@ static bool flush_workqueue_prep_pwqs(struct workqueue_struct *wq,
 	for_each_pwq(pwq, wq) {
 		struct worker_pool *pool = pwq->pool;
 
-		spin_lock_irq(&pool->lock);
+		raw_spin_lock_irq(&pool->lock);
 
 		if (flush_color >= 0) {
 			WARN_ON_ONCE(pwq->flush_color != -1);
@@ -2565,7 +2565,7 @@ static bool flush_workqueue_prep_pwqs(struct workqueue_struct *wq,
 			pwq->work_color = work_color;
 		}
 
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 	}
 
 	if (flush_color >= 0 && atomic_dec_and_test(&wq->nr_pwqs_to_flush))
@@ -2765,9 +2765,9 @@ void drain_workqueue(struct workqueue_struct *wq)
 	for_each_pwq(pwq, wq) {
 		bool drained;
 
-		spin_lock_irq(&pwq->pool->lock);
+		raw_spin_lock_irq(&pwq->pool->lock);
 		drained = !pwq->nr_active && list_empty(&pwq->delayed_works);
-		spin_unlock_irq(&pwq->pool->lock);
+		raw_spin_unlock_irq(&pwq->pool->lock);
 
 		if (drained)
 			continue;
@@ -2802,7 +2802,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
 		return false;
 	}
 
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 	/* see the comment in try_to_grab_pending() with the same code */
 	pwq = get_work_pwq(work);
 	if (pwq) {
@@ -2818,7 +2818,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
 	check_flush_dependency(pwq->wq, work);
 
 	insert_wq_barrier(pwq, barr, work, worker);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	/*
 	 * If @max_active is 1 or rescuer is in use, flushing another work
@@ -2834,7 +2834,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr)
 	rcu_read_unlock();
 	return true;
 already_gone:
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 	rcu_read_unlock();
 	return false;
 }
@@ -3206,7 +3206,7 @@ static bool wqattrs_equal(const struct workqueue_attrs *a,
  */
 static int init_worker_pool(struct worker_pool *pool)
 {
-	spin_lock_init(&pool->lock);
+	raw_spin_lock_init(&pool->lock);
 	pool->id = -1;
 	pool->cpu = -1;
 	pool->node = NUMA_NO_NODE;
@@ -3263,10 +3263,10 @@ static void rcu_free_pool(struct rcu_head *rcu)
 /* This returns with the lock held on success (pool manager is inactive). */
 static bool wq_manager_inactive(struct worker_pool *pool)
 {
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	if (pool->flags & POOL_MANAGER_ACTIVE) {
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 		return false;
 	}
 	return true;
@@ -3317,7 +3317,7 @@ static void put_unbound_pool(struct worker_pool *pool)
 	while ((worker = first_idle_worker(pool)))
 		destroy_worker(worker);
 	WARN_ON(pool->nr_workers || pool->nr_idle);
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 
 	mutex_lock(&pool->attach_mutex);
 	if (!list_empty(&pool->workers))
@@ -3477,7 +3477,7 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
 		return;
 
 	/* this function can be called during early boot w/ irq disabled */
-	spin_lock_irqsave(&pwq->pool->lock, flags);
+	raw_spin_lock_irqsave(&pwq->pool->lock, flags);
 
 	/*
 	 * During [un]freezing, the caller is responsible for ensuring that
@@ -3507,7 +3507,7 @@ static void pwq_adjust_max_active(struct pool_workqueue *pwq)
 		pwq->max_active = 0;
 	}
 
-	spin_unlock_irqrestore(&pwq->pool->lock, flags);
+	raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
 }
 
 /* initialize newly alloced @pwq which is associated with @wq and @pool */
@@ -3899,9 +3899,9 @@ static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu,
 
 use_dfl_pwq:
 	mutex_lock(&wq->mutex);
-	spin_lock_irq(&wq->dfl_pwq->pool->lock);
+	raw_spin_lock_irq(&wq->dfl_pwq->pool->lock);
 	get_pwq(wq->dfl_pwq);
-	spin_unlock_irq(&wq->dfl_pwq->pool->lock);
+	raw_spin_unlock_irq(&wq->dfl_pwq->pool->lock);
 	old_pwq = numa_pwq_tbl_install(wq, node, wq->dfl_pwq);
 out_unlock:
 	mutex_unlock(&wq->mutex);
@@ -4097,9 +4097,9 @@ void destroy_workqueue(struct workqueue_struct *wq)
 		struct worker *rescuer = wq->rescuer;
 
 		/* this prevents new queueing */
-		spin_lock_irq(&wq_mayday_lock);
+		raw_spin_lock_irq(&wq_mayday_lock);
 		wq->rescuer = NULL;
-		spin_unlock_irq(&wq_mayday_lock);
+		raw_spin_unlock_irq(&wq_mayday_lock);
 
 		/* rescuer will empty maydays list before exiting */
 		kthread_stop(rescuer->task);
@@ -4292,10 +4292,10 @@ unsigned int work_busy(struct work_struct *work)
 	rcu_read_lock();
 	pool = get_work_pool(work);
 	if (pool) {
-		spin_lock_irqsave(&pool->lock, flags);
+		raw_spin_lock_irqsave(&pool->lock, flags);
 		if (find_worker_executing_work(pool, work))
 			ret |= WORK_BUSY_RUNNING;
-		spin_unlock_irqrestore(&pool->lock, flags);
+		raw_spin_unlock_irqrestore(&pool->lock, flags);
 	}
 	rcu_read_unlock();
 
@@ -4507,10 +4507,10 @@ void show_workqueue_state(void)
 		pr_info("workqueue %s: flags=0x%x\n", wq->name, wq->flags);
 
 		for_each_pwq(pwq, wq) {
-			spin_lock_irqsave(&pwq->pool->lock, flags);
+			raw_spin_lock_irqsave(&pwq->pool->lock, flags);
 			if (pwq->nr_active || !list_empty(&pwq->delayed_works))
 				show_pwq(pwq);
-			spin_unlock_irqrestore(&pwq->pool->lock, flags);
+			raw_spin_unlock_irqrestore(&pwq->pool->lock, flags);
 			/*
 			 * We could be printing a lot from atomic context, e.g.
 			 * sysrq-t -> show_workqueue_state(). Avoid triggering
@@ -4524,7 +4524,7 @@ void show_workqueue_state(void)
 		struct worker *worker;
 		bool first = true;
 
-		spin_lock_irqsave(&pool->lock, flags);
+		raw_spin_lock_irqsave(&pool->lock, flags);
 		if (pool->nr_workers == pool->nr_idle)
 			goto next_pool;
 
@@ -4543,7 +4543,7 @@ void show_workqueue_state(void)
 		}
 		pr_cont("\n");
 	next_pool:
-		spin_unlock_irqrestore(&pool->lock, flags);
+		raw_spin_unlock_irqrestore(&pool->lock, flags);
 		/*
 		 * We could be printing a lot from atomic context, e.g.
 		 * sysrq-t -> show_workqueue_state(). Avoid triggering
@@ -4578,7 +4578,7 @@ static void wq_unbind_fn(struct work_struct *work)
 
 	for_each_cpu_worker_pool(pool, cpu) {
 		mutex_lock(&pool->attach_mutex);
-		spin_lock_irq(&pool->lock);
+		raw_spin_lock_irq(&pool->lock);
 
 		/*
 		 * We've blocked all attach/detach operations. Make all workers
@@ -4592,7 +4592,7 @@ static void wq_unbind_fn(struct work_struct *work)
 
 		pool->flags |= POOL_DISASSOCIATED;
 
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 		mutex_unlock(&pool->attach_mutex);
 
 		/*
@@ -4618,9 +4618,9 @@ static void wq_unbind_fn(struct work_struct *work)
 		 * worker blocking could lead to lengthy stalls.  Kick off
 		 * unbound chain execution of currently pending work items.
 		 */
-		spin_lock_irq(&pool->lock);
+		raw_spin_lock_irq(&pool->lock);
 		wake_up_worker(pool);
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 	}
 }
 
@@ -4647,7 +4647,7 @@ static void rebind_workers(struct worker_pool *pool)
 		WARN_ON_ONCE(set_cpus_allowed_ptr(worker->task,
 						  pool->attrs->cpumask) < 0);
 
-	spin_lock_irq(&pool->lock);
+	raw_spin_lock_irq(&pool->lock);
 
 	/*
 	 * XXX: CPU hotplug notifiers are weird and can call DOWN_FAILED
@@ -4655,7 +4655,7 @@ static void rebind_workers(struct worker_pool *pool)
 	 * being reworked and this can go away in time.
 	 */
 	if (!(pool->flags & POOL_DISASSOCIATED)) {
-		spin_unlock_irq(&pool->lock);
+		raw_spin_unlock_irq(&pool->lock);
 		return;
 	}
 
@@ -4696,7 +4696,7 @@ static void rebind_workers(struct worker_pool *pool)
 		ACCESS_ONCE(worker->flags) = worker_flags;
 	}
 
-	spin_unlock_irq(&pool->lock);
+	raw_spin_unlock_irq(&pool->lock);
 }
 
 /**
-- 
2.37.2



* Re: [PATCH 0/9 v4.9-RT] Backports to fix random core
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
                   ` (8 preceding siblings ...)
  2022-08-19  9:24 ` [PATCH 9/9] workqueue: Convert the pool::lock and wq_mayday_lock to raw_spinlock_t Sebastian Andrzej Siewior
@ 2022-08-19 15:14 ` Mark Gross
  2022-09-01 21:16 ` Mark Gross
  10 siblings, 0 replies; 13+ messages in thread
From: Mark Gross @ 2022-08-19 15:14 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Mark Gross, linux-rt-users, stable-rt, Salvatore Bonaccorso

On Fri, Aug 19, 2022 at 11:24:37AM +0200, Sebastian Andrzej Siewior wrote:
> Hi,
> 
> in v4.9.320 some random-core patches broke RT. This is a series of
> backports to align with later RT versions and keep things working again.
> 
> Sebastian
Thanks so much!

--mark


* Re: [PATCH 0/9 v4.9-RT] Backports to fix random core
  2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
                   ` (9 preceding siblings ...)
  2022-08-19 15:14 ` [PATCH 0/9 v4.9-RT] Backports to fix random core Mark Gross
@ 2022-09-01 21:16 ` Mark Gross
  2022-09-02  6:52   ` Sebastian Andrzej Siewior
  10 siblings, 1 reply; 13+ messages in thread
From: Mark Gross @ 2022-09-01 21:16 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: Mark Gross, linux-rt-users, stable-rt, Salvatore Bonaccorso

On Fri, Aug 19, 2022 at 11:24:37AM +0200, Sebastian Andrzej Siewior wrote:
> Hi,
> 
> in v4.9.320 some random-core patches broke RT. This is a series of
> backports to align with later RT versions and keep things working again.
> 
> Sebastian
Sorry for taking so long to start working on these.  Are these patches to be
applied to v4.9.320, or to the RT patches prior to the linux-stable merge?

Or are these patches the updates to the patches that conflict with v4.9.320 as
I rebase?

I'm guessing the latter as the patches don't seem to work on just the last RT
release or v4.9.320 directly.

Bottom line, I'm not sure how you expected me to use these 9 patches.

Sorry for being dense.

--mark



* Re: [PATCH 0/9 v4.9-RT] Backports to fix random core
  2022-09-01 21:16 ` Mark Gross
@ 2022-09-02  6:52   ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 13+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-09-02  6:52 UTC (permalink / raw)
  To: Mark Gross; +Cc: linux-rt-users, stable-rt, Salvatore Bonaccorso

On 2022-09-01 14:16:22 [-0700], Mark Gross wrote:
> Sorry for taking so long to start working these.  Are these patches to be
> applied to v4.9.320, or to the RT patches prior to the linux-stable merge?
> 
> Or are these patches the updates to the patches that conflict with v4.9.320 as
> I rebase?
> 
> I'm guessing the latter as the patches don't seem to work on just the last RT
> release or v4.9.320 directly.
> 
> Bottom line, I'm not sure how you expected me to use these 9 patches.

The commit I cited was backported into v4.9-stable; I *think* it is part
of v4.9-stable as of v4.9.320. If you apply the RT queue then you should
have conflicts. The "old" RT patches for random are obsolete if I
remember correctly.

> Sorry for being dense.

No worries. If in doubt, make a -rc and let me look over it, or if
completely in doubt, yell and I will prepare a complete queue.

> --mark

Sebastian


Thread overview: 13+ messages
2022-08-19  9:24 [PATCH 0/9 v4.9-RT] Backports to fix random core Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 1/9] random: Bring back the local_locks Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 2/9] random: schedule mix_interrupt_randomness() less often Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 3/9] Revert "workqueue: Use local irq lock instead of irq disable regions" Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 4/9] Revert "workqueue: Prevent deadlock/stall on RT" Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 5/9] timers: Keep interrupts disabled for TIMER_IRQSAFE timer Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 6/9] timers: Don't block on ->expiry_lock for TIMER_IRQSAFE timers Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 7/9] rcu: Introduce rcuwait Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 8/9] workqueue: Use rcuwait for wq_manager_wait Sebastian Andrzej Siewior
2022-08-19  9:24 ` [PATCH 9/9] workqueue: Convert the pool::lock and wq_mayday_lock to raw_spinlock_t Sebastian Andrzej Siewior
2022-08-19 15:14 ` [PATCH 0/9 v4.9-RT] Backports to fix random core Mark Gross
2022-09-01 21:16 ` Mark Gross
2022-09-02  6:52   ` Sebastian Andrzej Siewior
