linux-block.vger.kernel.org archive mirror
* [PATCH 0/3] sbitmap: Fix two issues in the per-bitmap wakeup counter code
@ 2022-11-15 22:45 Gabriel Krisman Bertazi
  2022-11-15 22:45 ` [PATCH 1/3] sbitmap: Advance the queue index before waking up a queue Gabriel Krisman Bertazi
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Gabriel Krisman Bertazi @ 2022-11-15 22:45 UTC (permalink / raw)
  To: axboe
  Cc: mingo, peterz, jack, linux-kernel, linux-block, liusong,
	chaitanyak, Gabriel Krisman Bertazi

Jan reported two issues in the original thread.

The first is that wake_index was not updated after returning from
sbq_wake_ptr which meant we'd have to empty the wq before moving to the
next one.  Patch 1/3 in this series reorders the code to avoid this
condition, increasing fairness of queue selection and preventing
starvation.  I sent this patch already on the other thread and Jan
reviewed it, but since it is a small one, and a dependency for the
other, I'm resending it along with this series.

The second issue is trickier.  When the selected queue is emptied after
the waitqueue_active check and before wake_up_nr, there are no waiters
to be awakened in that queue, even though other queues might have them.
This causes us to lose one wakeup too many, and there might not be
enough requests in flight to wake up every queued request.

The proposed fix is to walk through every queue after doing the atomic
update, such that we ensure any waiters already queued are candidates
for awakening, and that we wake up at least one waiter in any of the
queues.  The patch is a bit more complex than the suggestion, since it
avoids partial updates to wake_index, which measurably hurt performance
unnecessarily.

It survived the same tests done on the original patch.

btw, I'm still missing the latency and utilisation reports.  I haven't
forgotten about it, but I didn't have a chance to collect them
yet. Sorry.  I will follow up with them for completeness, even if the
original patch is already queued.

Gabriel Krisman Bertazi (3):
  sbitmap: Advance the queue index before waking up a queue
  wait: Return number of exclusive waiters awaken
  sbitmap: Try each queue to wake up at least one waiter

 include/linux/wait.h |  2 +-
 kernel/sched/wait.c  | 18 +++++++++++-------
 lib/sbitmap.c        | 36 +++++++++++++++++++-----------------
 3 files changed, 31 insertions(+), 25 deletions(-)

-- 
2.35.3



* [PATCH 1/3] sbitmap: Advance the queue index before waking up a queue
  2022-11-15 22:45 [PATCH 0/3] sbitmap: Fix two issues in the per-bitmap wakeup counter code Gabriel Krisman Bertazi
@ 2022-11-15 22:45 ` Gabriel Krisman Bertazi
  2022-11-15 22:45 ` [PATCH 2/3] wait: Return number of exclusive waiters awaken Gabriel Krisman Bertazi
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Gabriel Krisman Bertazi @ 2022-11-15 22:45 UTC (permalink / raw)
  To: axboe
  Cc: mingo, peterz, jack, linux-kernel, linux-block, liusong,
	chaitanyak, Gabriel Krisman Bertazi

When a queue is woken up, the wake_index written by sbq_wake_ptr
currently keeps pointing to the same queue.  On the next wake up, it
will thus retry the same queue, which is unfair to other queues and can
lead to starvation.  This patch moves the index update to happen before
the queue is returned, such that it will now try a different queue first
on the next wake up, improving fairness.

Fixes: 4f8126bb2308 ("sbitmap: Use single per-bitmap counting to wake up queued tags")
Reported-by: Jan Kara <jack@suse.cz>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
---
 lib/sbitmap.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index eca462cba398..bea7984f7987 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -571,13 +571,19 @@ static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
 	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
 		struct sbq_wait_state *ws = &sbq->ws[wake_index];
 
+		/*
+		 * Advance the index before checking the current queue.
+		 * It improves fairness, by ensuring the queue doesn't
+		 * need to be fully emptied before trying to wake up
+		 * from the next one.
+		 */
+		wake_index = sbq_index_inc(wake_index);
+
 		if (waitqueue_active(&ws->wait)) {
 			if (wake_index != atomic_read(&sbq->wake_index))
 				atomic_set(&sbq->wake_index, wake_index);
 			return ws;
 		}
-
-		wake_index = sbq_index_inc(wake_index);
 	}
 
 	return NULL;
-- 
2.35.3



* [PATCH 2/3] wait: Return number of exclusive waiters awaken
  2022-11-15 22:45 [PATCH 0/3] sbitmap: Fix two issues in the per-bitmap wakeup counter code Gabriel Krisman Bertazi
  2022-11-15 22:45 ` [PATCH 1/3] sbitmap: Advance the queue index before waking up a queue Gabriel Krisman Bertazi
@ 2022-11-15 22:45 ` Gabriel Krisman Bertazi
  2022-11-16 11:06   ` Jan Kara
  2022-11-16 20:22   ` Peter Zijlstra
  2022-11-15 22:45 ` [PATCH 3/3] sbitmap: Try each queue to wake up at least one waiter Gabriel Krisman Bertazi
  2022-11-16 18:49 ` [PATCH 0/3] sbitmap: Fix two issues in the per-bitmap wakeup counter code Jens Axboe
  3 siblings, 2 replies; 8+ messages in thread
From: Gabriel Krisman Bertazi @ 2022-11-15 22:45 UTC (permalink / raw)
  To: axboe
  Cc: mingo, peterz, jack, linux-kernel, linux-block, liusong,
	chaitanyak, Gabriel Krisman Bertazi

Sbitmap code will need to know how many waiters were actually woken for
its batched wakeups implementation.  Return the number of woken
exclusive waiters from __wake_up() to facilitate that.

Suggested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
---
 include/linux/wait.h |  2 +-
 kernel/sched/wait.c  | 18 +++++++++++-------
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/include/linux/wait.h b/include/linux/wait.h
index 7f5a51aae0a7..a0307b516b09 100644
--- a/include/linux/wait.h
+++ b/include/linux/wait.h
@@ -209,7 +209,7 @@ __remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq
 	list_del(&wq_entry->entry);
 }
 
-void __wake_up(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
+int __wake_up(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
 void __wake_up_locked_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
 void __wake_up_locked_key_bookmark(struct wait_queue_head *wq_head,
 		unsigned int mode, void *key, wait_queue_entry_t *bookmark);
diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
index 9860bb9a847c..133b74730738 100644
--- a/kernel/sched/wait.c
+++ b/kernel/sched/wait.c
@@ -121,11 +121,12 @@ static int __wake_up_common(struct wait_queue_head *wq_head, unsigned int mode,
 	return nr_exclusive;
 }
 
-static void __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int mode,
+static int __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int mode,
 			int nr_exclusive, int wake_flags, void *key)
 {
 	unsigned long flags;
 	wait_queue_entry_t bookmark;
+	int remaining = nr_exclusive;
 
 	bookmark.flags = 0;
 	bookmark.private = NULL;
@@ -134,10 +135,12 @@ static void __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int
 
 	do {
 		spin_lock_irqsave(&wq_head->lock, flags);
-		nr_exclusive = __wake_up_common(wq_head, mode, nr_exclusive,
+		remaining = __wake_up_common(wq_head, mode, remaining,
 						wake_flags, key, &bookmark);
 		spin_unlock_irqrestore(&wq_head->lock, flags);
 	} while (bookmark.flags & WQ_FLAG_BOOKMARK);
+
+	return nr_exclusive - remaining;
 }
 
 /**
@@ -147,13 +150,14 @@ static void __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int
  * @nr_exclusive: how many wake-one or wake-many threads to wake up
  * @key: is directly passed to the wakeup function
  *
- * If this function wakes up a task, it executes a full memory barrier before
- * accessing the task state.
+ * If this function wakes up a task, it executes a full memory barrier
+ * before accessing the task state.  Returns the number of exclusive
+ * tasks that were awaken.
  */
-void __wake_up(struct wait_queue_head *wq_head, unsigned int mode,
-			int nr_exclusive, void *key)
+int __wake_up(struct wait_queue_head *wq_head, unsigned int mode,
+	      int nr_exclusive, void *key)
 {
-	__wake_up_common_lock(wq_head, mode, nr_exclusive, 0, key);
+	return __wake_up_common_lock(wq_head, mode, nr_exclusive, 0, key);
 }
 EXPORT_SYMBOL(__wake_up);
 
-- 
2.35.3



* [PATCH 3/3] sbitmap: Try each queue to wake up at least one waiter
  2022-11-15 22:45 [PATCH 0/3] sbitmap: Fix two issues in the per-bitmap wakeup counter code Gabriel Krisman Bertazi
  2022-11-15 22:45 ` [PATCH 1/3] sbitmap: Advance the queue index before waking up a queue Gabriel Krisman Bertazi
  2022-11-15 22:45 ` [PATCH 2/3] wait: Return number of exclusive waiters awaken Gabriel Krisman Bertazi
@ 2022-11-15 22:45 ` Gabriel Krisman Bertazi
  2022-11-16 11:09   ` Jan Kara
  2022-11-16 18:49 ` [PATCH 0/3] sbitmap: Fix two issues in the per-bitmap wakeup counter code Jens Axboe
  3 siblings, 1 reply; 8+ messages in thread
From: Gabriel Krisman Bertazi @ 2022-11-15 22:45 UTC (permalink / raw)
  To: axboe
  Cc: mingo, peterz, jack, linux-kernel, linux-block, liusong,
	chaitanyak, Gabriel Krisman Bertazi

Jan reported that the new algorithm, as merged, might be problematic if
the queue being woken up becomes empty between the waitqueue_active
check inside sbq_wake_ptr and the wake up.  If that happens, wake_up_nr
will not wake up any waiter and we lose too many wake ups.  In order to
guarantee progress, we need to wake up at least one waiter here, if
there are any.  This now requires trying to wake up from every queue.

Instead of walking through all the queues with sbq_wake_ptr, this patch
moves the wake up into that function.  In a previous version of the
patch, I found that updating wake_index several times when walking
through queues had a measurable overhead.  This version ensures we only
update it once, at the end.

Fixes: 4f8126bb2308 ("sbitmap: Use single per-bitmap counting to wake up queued tags")
Reported-by: Jan Kara <jack@suse.cz>
Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>
---
 lib/sbitmap.c | 28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index bea7984f7987..586deb333237 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -560,12 +560,12 @@ void sbitmap_queue_min_shallow_depth(struct sbitmap_queue *sbq,
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_min_shallow_depth);
 
-static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
+static void __sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
 {
 	int i, wake_index;
 
 	if (!atomic_read(&sbq->ws_active))
-		return NULL;
+		return;
 
 	wake_index = atomic_read(&sbq->wake_index);
 	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
@@ -579,20 +579,22 @@ static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
 		 */
 		wake_index = sbq_index_inc(wake_index);
 
-		if (waitqueue_active(&ws->wait)) {
-			if (wake_index != atomic_read(&sbq->wake_index))
-				atomic_set(&sbq->wake_index, wake_index);
-			return ws;
-		}
+		/*
+		 * It is sufficient to wake up at least one waiter to
+		 * guarantee forward progress.
+		 */
+		if (waitqueue_active(&ws->wait) &&
+		    wake_up_nr(&ws->wait, nr))
+			break;
 	}
 
-	return NULL;
+	if (wake_index != atomic_read(&sbq->wake_index))
+		atomic_set(&sbq->wake_index, wake_index);
 }
 
 void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
 {
 	unsigned int wake_batch = READ_ONCE(sbq->wake_batch);
-	struct sbq_wait_state *ws = NULL;
 	unsigned int wakeups;
 
 	if (!atomic_read(&sbq->ws_active))
@@ -604,16 +606,10 @@ void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
 	do {
 		if (atomic_read(&sbq->completion_cnt) - wakeups < wake_batch)
 			return;
-
-		if (!ws) {
-			ws = sbq_wake_ptr(sbq);
-			if (!ws)
-				return;
-		}
 	} while (!atomic_try_cmpxchg(&sbq->wakeup_cnt,
 				     &wakeups, wakeups + wake_batch));
 
-	wake_up_nr(&ws->wait, wake_batch);
+	__sbitmap_queue_wake_up(sbq, wake_batch);
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);
 
-- 
2.35.3



* Re: [PATCH 2/3] wait: Return number of exclusive waiters awaken
  2022-11-15 22:45 ` [PATCH 2/3] wait: Return number of exclusive waiters awaken Gabriel Krisman Bertazi
@ 2022-11-16 11:06   ` Jan Kara
  2022-11-16 20:22   ` Peter Zijlstra
  1 sibling, 0 replies; 8+ messages in thread
From: Jan Kara @ 2022-11-16 11:06 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi
  Cc: axboe, mingo, peterz, jack, linux-kernel, linux-block, liusong,
	chaitanyak

On Tue 15-11-22 17:45:52, Gabriel Krisman Bertazi wrote:
> Sbitmap code will need to know how many waiters were actually woken for
> its batched wakeups implementation.  Return the number of woken
> exclusive waiters from __wake_up() to facilitate that.
> 
> Suggested-by: Jan Kara <jack@suse.cz>
> Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  include/linux/wait.h |  2 +-
>  kernel/sched/wait.c  | 18 +++++++++++-------
>  2 files changed, 12 insertions(+), 8 deletions(-)
> 
> diff --git a/include/linux/wait.h b/include/linux/wait.h
> index 7f5a51aae0a7..a0307b516b09 100644
> --- a/include/linux/wait.h
> +++ b/include/linux/wait.h
> @@ -209,7 +209,7 @@ __remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq
>  	list_del(&wq_entry->entry);
>  }
>  
> -void __wake_up(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
> +int __wake_up(struct wait_queue_head *wq_head, unsigned int mode, int nr, void *key);
>  void __wake_up_locked_key(struct wait_queue_head *wq_head, unsigned int mode, void *key);
>  void __wake_up_locked_key_bookmark(struct wait_queue_head *wq_head,
>  		unsigned int mode, void *key, wait_queue_entry_t *bookmark);
> diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
> index 9860bb9a847c..133b74730738 100644
> --- a/kernel/sched/wait.c
> +++ b/kernel/sched/wait.c
> @@ -121,11 +121,12 @@ static int __wake_up_common(struct wait_queue_head *wq_head, unsigned int mode,
>  	return nr_exclusive;
>  }
>  
> -static void __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int mode,
> +static int __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int mode,
>  			int nr_exclusive, int wake_flags, void *key)
>  {
>  	unsigned long flags;
>  	wait_queue_entry_t bookmark;
> +	int remaining = nr_exclusive;
>  
>  	bookmark.flags = 0;
>  	bookmark.private = NULL;
> @@ -134,10 +135,12 @@ static void __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int
>  
>  	do {
>  		spin_lock_irqsave(&wq_head->lock, flags);
> -		nr_exclusive = __wake_up_common(wq_head, mode, nr_exclusive,
> +		remaining = __wake_up_common(wq_head, mode, remaining,
>  						wake_flags, key, &bookmark);
>  		spin_unlock_irqrestore(&wq_head->lock, flags);
>  	} while (bookmark.flags & WQ_FLAG_BOOKMARK);
> +
> +	return nr_exclusive - remaining;
>  }
>  
>  /**
> @@ -147,13 +150,14 @@ static void __wake_up_common_lock(struct wait_queue_head *wq_head, unsigned int
>   * @nr_exclusive: how many wake-one or wake-many threads to wake up
>   * @key: is directly passed to the wakeup function
>   *
> - * If this function wakes up a task, it executes a full memory barrier before
> - * accessing the task state.
> + * If this function wakes up a task, it executes a full memory barrier
> + * before accessing the task state.  Returns the number of exclusive
> + * tasks that were awaken.
>   */
> -void __wake_up(struct wait_queue_head *wq_head, unsigned int mode,
> -			int nr_exclusive, void *key)
> +int __wake_up(struct wait_queue_head *wq_head, unsigned int mode,
> +	      int nr_exclusive, void *key)
>  {
> -	__wake_up_common_lock(wq_head, mode, nr_exclusive, 0, key);
> +	return __wake_up_common_lock(wq_head, mode, nr_exclusive, 0, key);
>  }
>  EXPORT_SYMBOL(__wake_up);
>  
> -- 
> 2.35.3
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH 3/3] sbitmap: Try each queue to wake up at least one waiter
  2022-11-15 22:45 ` [PATCH 3/3] sbitmap: Try each queue to wake up at least one waiter Gabriel Krisman Bertazi
@ 2022-11-16 11:09   ` Jan Kara
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Kara @ 2022-11-16 11:09 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi
  Cc: axboe, mingo, peterz, jack, linux-kernel, linux-block, liusong,
	chaitanyak

On Tue 15-11-22 17:45:53, Gabriel Krisman Bertazi wrote:
> Jan reported that the new algorithm, as merged, might be problematic if
> the queue being woken up becomes empty between the waitqueue_active
> check inside sbq_wake_ptr and the wake up.  If that happens, wake_up_nr
> will not wake up any waiter and we lose too many wake ups.  In order to
> guarantee progress, we need to wake up at least one waiter here, if
> there are any.  This now requires trying to wake up from every queue.
> 
> Instead of walking through all the queues with sbq_wake_ptr, this patch
> moves the wake up into that function.  In a previous version of the
> patch, I found that updating wake_index several times when walking
> through queues had a measurable overhead.  This version ensures we only
> update it once, at the end.
> 
> Fixes: 4f8126bb2308 ("sbitmap: Use single per-bitmap counting to wake up queued tags")
> Reported-by: Jan Kara <jack@suse.cz>
> Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>

Elegant and looks good to me! Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  lib/sbitmap.c | 28 ++++++++++++----------------
>  1 file changed, 12 insertions(+), 16 deletions(-)
> 
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index bea7984f7987..586deb333237 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -560,12 +560,12 @@ void sbitmap_queue_min_shallow_depth(struct sbitmap_queue *sbq,
>  }
>  EXPORT_SYMBOL_GPL(sbitmap_queue_min_shallow_depth);
>  
> -static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
> +static void __sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
>  {
>  	int i, wake_index;
>  
>  	if (!atomic_read(&sbq->ws_active))
> -		return NULL;
> +		return;
>  
>  	wake_index = atomic_read(&sbq->wake_index);
>  	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
> @@ -579,20 +579,22 @@ static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
>  		 */
>  		wake_index = sbq_index_inc(wake_index);
>  
> -		if (waitqueue_active(&ws->wait)) {
> -			if (wake_index != atomic_read(&sbq->wake_index))
> -				atomic_set(&sbq->wake_index, wake_index);
> -			return ws;
> -		}
> +		/*
> +		 * It is sufficient to wake up at least one waiter to
> +		 * guarantee forward progress.
> +		 */
> +		if (waitqueue_active(&ws->wait) &&
> +		    wake_up_nr(&ws->wait, nr))
> +			break;
>  	}
>  
> -	return NULL;
> +	if (wake_index != atomic_read(&sbq->wake_index))
> +		atomic_set(&sbq->wake_index, wake_index);
>  }
>  
>  void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
>  {
>  	unsigned int wake_batch = READ_ONCE(sbq->wake_batch);
> -	struct sbq_wait_state *ws = NULL;
>  	unsigned int wakeups;
>  
>  	if (!atomic_read(&sbq->ws_active))
> @@ -604,16 +606,10 @@ void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
>  	do {
>  		if (atomic_read(&sbq->completion_cnt) - wakeups < wake_batch)
>  			return;
> -
> -		if (!ws) {
> -			ws = sbq_wake_ptr(sbq);
> -			if (!ws)
> -				return;
> -		}
>  	} while (!atomic_try_cmpxchg(&sbq->wakeup_cnt,
>  				     &wakeups, wakeups + wake_batch));
>  
> -	wake_up_nr(&ws->wait, wake_batch);
> +	__sbitmap_queue_wake_up(sbq, wake_batch);
>  }
>  EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);
>  
> -- 
> 2.35.3
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


* Re: [PATCH 0/3] sbitmap: Fix two issues in the per-bitmap wakeup counter code
  2022-11-15 22:45 [PATCH 0/3] sbitmap: Fix two issues in the per-bitmap wakeup counter code Gabriel Krisman Bertazi
                   ` (2 preceding siblings ...)
  2022-11-15 22:45 ` [PATCH 3/3] sbitmap: Try each queue to wake up at least one waiter Gabriel Krisman Bertazi
@ 2022-11-16 18:49 ` Jens Axboe
  3 siblings, 0 replies; 8+ messages in thread
From: Jens Axboe @ 2022-11-16 18:49 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi
  Cc: mingo, linux-kernel, jack, chaitanyak, peterz, liusong,
	linux-block

On Tue, 15 Nov 2022 17:45:50 -0500, Gabriel Krisman Bertazi wrote:
> Jan reported two issues in the original thread.
> 
> The first is that wake_index was not updated after returning from
> sbq_wake_ptr which meant we'd have to empty the wq before moving to the
> next one.  Patch 1/3 in this series reorders the code to avoid this
> condition, increasing fairness of queue selection and preventing
> starvation.  I sent this patch already on the other thread and Jan
> reviewed it, but since it is a small one, and a dependency for the
> other, I'm resending it along with this series.
> 
> [...]

Applied, thanks!

[1/3] sbitmap: Advance the queue index before waking up a queue
      commit: 976570b4ecd30d3ec6e1b0910da8e5edc591f2b6
[2/3] wait: Return number of exclusive waiters awaken
      commit: ee7dc86b6d3e3b86c2c487f713eda657850de238
[3/3] sbitmap: Try each queue to wake up at least one waiter
      commit: 26edb30dd1c0c9be11fa676b4f330ada7b794ba6

Best regards,
-- 
Jens Axboe




* Re: [PATCH 2/3] wait: Return number of exclusive waiters awaken
  2022-11-15 22:45 ` [PATCH 2/3] wait: Return number of exclusive waiters awaken Gabriel Krisman Bertazi
  2022-11-16 11:06   ` Jan Kara
@ 2022-11-16 20:22   ` Peter Zijlstra
  1 sibling, 0 replies; 8+ messages in thread
From: Peter Zijlstra @ 2022-11-16 20:22 UTC (permalink / raw)
  To: Gabriel Krisman Bertazi
  Cc: axboe, mingo, jack, linux-kernel, linux-block, liusong,
	chaitanyak

On Tue, Nov 15, 2022 at 05:45:52PM -0500, Gabriel Krisman Bertazi wrote:
> Sbitmap code will need to know how many waiters were actually woken for
> its batched wakeups implementation.  Return the number of woken
> exclusive waiters from __wake_up() to facilitate that.
> 
> Suggested-by: Jan Kara <jack@suse.cz>
> Signed-off-by: Gabriel Krisman Bertazi <krisman@suse.de>

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

> ---
>  include/linux/wait.h |  2 +-
>  kernel/sched/wait.c  | 18 +++++++++++-------
>  2 files changed, 12 insertions(+), 8 deletions(-)

