Linux block layer
* [PATCH] blk-mq: fix IO hang from sbitmap wakeup race
@ 2024-01-11 15:54 Ming Lei
  2024-01-12  9:27 ` Kemeng Shi
  0 siblings, 1 reply; 4+ messages in thread
From: Ming Lei @ 2024-01-11 15:54 UTC (permalink / raw)
  To: Jens Axboe, linux-block
  Cc: David Jeffery, Gabriel Krisman Bertazi, Jan Kara, Kemeng Shi,
	Ming Lei, Changhui Zhong

In blk_mq_mark_tag_wait(), __add_wait_queue() may be reordered with the
blk_mq_get_driver_tag() call that follows it when acquiring a driver
tag fails.

waitqueue_active() in __sbitmap_queue_wake_up() may then fail to observe
the waiter added by blk_mq_mark_tag_wait() and wake up nothing, while
blk_mq_mark_tag_wait() also fails to get a driver tag.
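
Schematically, the lost-wakeup interleaving looks like this (the CPU
labels and step ordering are for illustration only):

```
CPU0: blk_mq_mark_tag_wait()            CPU1: tag release path
------------------------------          ------------------------------
blk_mq_get_driver_tag()
  /* sbitmap load reordered before
     the enqueue below; sees no
     free tag */
                                        clear sbitmap tag bit
                                        waitqueue_active()
                                          /* wait queue still looks
                                             empty -> no wakeup */
__add_wait_queue(wq, wait)
  /* nobody will ever wake us: hang */
```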

This issue can be reproduced by running the following test in a loop;
a fio hang can be observed within 30 minutes when running it on my test
VM on a laptop.

	modprobe -r scsi_debug
	modprobe scsi_debug delay=0 dev_size_mb=4096 max_queue=1 host_max_queue=1 submit_queues=4
	dev=`ls -d /sys/bus/pseudo/drivers/scsi_debug/adapter*/host*/target*/*/block/* | head -1 | xargs basename`
	fio --filename=/dev/"$dev" --direct=1 --rw=randrw --bs=4k --iodepth=1 \
       		--runtime=100 --numjobs=40 --time_based --name=test \
        	--ioengine=libaio

Fix the issue by adding an explicit barrier in blk_mq_mark_tag_wait(),
which is acceptable since we have already run out of tags.

Apply the same pattern in blk_mq_get_tag(), which should carry the same
risk.

Reported-by: Changhui Zhong <czhong@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
BTW, Changhui is planning to upstream the test case to blktests.

 block/blk-mq-tag.c | 19 +++++++++++++++++++
 block/blk-mq.c     | 16 ++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index cc57e2dd9a0b..29f77cae8eb2 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -179,6 +179,25 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
 
 		sbitmap_prepare_to_wait(bt, ws, &wait, TASK_UNINTERRUPTIBLE);
 
+		/*
+		 * Add an explicit barrier since __blk_mq_get_tag() may not
+		 * imply a barrier in case of failure.
+		 *
+		 * Order adding us to the wait queue against the following
+		 * tag allocation in __blk_mq_get_tag().
+		 *
+		 * The pairing barrier is the one implied in
+		 * sbitmap_queue_wake_up(), which orders clearing the sbitmap
+		 * tag bits against waitqueue_active() in
+		 * __sbitmap_queue_wake_up(), as waitqueue_active() is
+		 * lockless.
+		 *
+		 * Otherwise, reordering of adding to the wait queue and
+		 * getting the tag may cause __sbitmap_queue_wake_up() to wake
+		 * up nothing, because waitqueue_active() may not observe us
+		 */
+		smp_mb();
+
 		tag = __blk_mq_get_tag(data, bt);
 		if (tag != BLK_MQ_NO_TAG)
 			break;
diff --git a/block/blk-mq.c b/block/blk-mq.c
index fb29ff5cc281..54545a4792bf 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1847,6 +1847,22 @@ static bool blk_mq_mark_tag_wait(struct blk_mq_hw_ctx *hctx,
 	wait->flags &= ~WQ_FLAG_EXCLUSIVE;
 	__add_wait_queue(wq, wait);
 
+	/*
+	 * Add an explicit barrier since blk_mq_get_driver_tag() may
+	 * not imply a barrier in case of failure; order adding us to
+	 * the wait queue against the driver tag allocation that follows.
+	 *
+	 * The pairing barrier is the one implied in
+	 * sbitmap_queue_wake_up(), which orders clearing the sbitmap tag
+	 * bits against waitqueue_active() in __sbitmap_queue_wake_up(),
+	 * as waitqueue_active() is lockless.
+	 *
+	 * Otherwise, the reordering of the wait queue addition and the
+	 * driver tag allocation may cause __sbitmap_queue_wake_up() to
+	 * wake up nothing, as waitqueue_active() may not observe us.
+	 */
+	smp_mb();
+
 	/*
 	 * It's possible that a tag was freed in the window between the
 	 * allocation failure and adding the hardware queue to the wait
-- 
2.42.0



* Re: [PATCH] blk-mq: fix IO hang from sbitmap wakeup race
  2024-01-11 15:54 [PATCH] blk-mq: fix IO hang from sbitmap wakeup race Ming Lei
@ 2024-01-12  9:27 ` Kemeng Shi
  2024-01-12 10:20   ` Jan Kara
  0 siblings, 1 reply; 4+ messages in thread
From: Kemeng Shi @ 2024-01-12  9:27 UTC (permalink / raw)
  To: Ming Lei, Jens Axboe, linux-block
  Cc: David Jeffery, Gabriel Krisman Bertazi, Jan Kara, Changhui Zhong



on 1/11/2024 11:54 PM, Ming Lei wrote:
> [...]
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index cc57e2dd9a0b..29f77cae8eb2 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -179,6 +179,25 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
>  
>  		sbitmap_prepare_to_wait(bt, ws, &wait, TASK_UNINTERRUPTIBLE);
>  
> +		/*
> +		 * Add an explicit barrier since __blk_mq_get_tag() may not
> +		 * imply a barrier in case of failure.
> +		 *
> +		 * Order adding us to the wait queue against the following
> +		 * tag allocation in __blk_mq_get_tag().
> +		 *
> +		 * The pairing barrier is the one implied in
> +		 * sbitmap_queue_wake_up(), which orders clearing the sbitmap
> +		 * tag bits against waitqueue_active() in
> +		 * __sbitmap_queue_wake_up(), as waitqueue_active() is
> +		 * lockless.
> +		 *
> +		 * Otherwise, reordering of adding to the wait queue and
> +		 * getting the tag may cause __sbitmap_queue_wake_up() to wake
> +		 * up nothing, because waitqueue_active() may not observe us
> +		 */
> +		smp_mb();
> +
Hi Ming, thanks for the fix. I'm not sure we need to add an explicit
memory barrier here, as the prepare_to_wait variants normally imply a
general memory barrier (see the section "SLEEP AND WAKE-UP FUNCTIONS"
in [1]). Hope this helps!

[1] https://www.kernel.org/doc/Documentation/memory-barriers.txt





* Re: [PATCH] blk-mq: fix IO hang from sbitmap wakeup race
  2024-01-12  9:27 ` Kemeng Shi
@ 2024-01-12 10:20   ` Jan Kara
  2024-01-12 12:21     ` Ming Lei
  0 siblings, 1 reply; 4+ messages in thread
From: Jan Kara @ 2024-01-12 10:20 UTC (permalink / raw)
  To: Kemeng Shi
  Cc: Ming Lei, Jens Axboe, linux-block, David Jeffery,
	Gabriel Krisman Bertazi, Jan Kara, Changhui Zhong

On Fri 12-01-24 17:27:48, Kemeng Shi wrote:
> [...]
> Hi Ming, thanks for the fix. I'm not sure we need to add an explicit
> memory barrier here, as the prepare_to_wait variants normally imply a
> general memory barrier (see the section "SLEEP AND WAKE-UP FUNCTIONS"
> in [1]). Hope this helps!

Indeed, good spot on the ordering bug, Ming! I agree with Kemeng,
though, that set_current_state(), called from sbitmap_prepare_to_wait(),
is guaranteed to contain a memory barrier, and thus the reads in
__blk_mq_get_tag() are guaranteed to be ordered properly with respect
to the addition to the waitqueue.

So only blk_mq_mark_tag_wait() is vulnerable to the problem you
spotted, AFAICT.
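
For reference, a simplified sketch (not the exact kernel source) of the
waiter-side ordering being discussed:

```
/* blk_mq_get_tag() waiter side, simplified */
sbitmap_prepare_to_wait(bt, ws, &wait, TASK_UNINTERRUPTIBLE);
	/* -> prepare_to_wait_exclusive()
	 *      adds us to the wait queue, then calls
	 *      set_current_state(TASK_UNINTERRUPTIBLE),
	 *      which implies a full memory barrier */
tag = __blk_mq_get_tag(data, bt);
	/* the sbitmap read here therefore cannot be
	 * reordered before the wait-queue addition,
	 * so no extra smp_mb() is needed */
```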

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] blk-mq: fix IO hang from sbitmap wakeup race
  2024-01-12 10:20   ` Jan Kara
@ 2024-01-12 12:21     ` Ming Lei
  0 siblings, 0 replies; 4+ messages in thread
From: Ming Lei @ 2024-01-12 12:21 UTC (permalink / raw)
  To: Jan Kara
  Cc: Kemeng Shi, Jens Axboe, linux-block, David Jeffery,
	Gabriel Krisman Bertazi, Changhui Zhong

On Fri, Jan 12, 2024 at 11:20:04AM +0100, Jan Kara wrote:
> [...]
> Indeed, good spot on the ordering bug, Ming! I agree with Kemeng,
> though, that set_current_state(), called from sbitmap_prepare_to_wait(),
> is guaranteed to contain a memory barrier, and thus the reads in
> __blk_mq_get_tag() are guaranteed to be ordered properly with respect
> to the addition to the waitqueue.
> 
> So only blk_mq_mark_tag_wait() is vulnerable to the problem you
> spotted, AFAICT.

Indeed, I will remove the one in blk_mq_get_tag() in V2.


thanks,
Ming


