public inbox for linux-nvme@lists.infradead.org
* [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
@ 2023-08-09  2:04 ` Ming Lei
  2023-08-09  6:59   ` Kanchan Joshi
                     ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Ming Lei @ 2023-08-09  2:04 UTC (permalink / raw)
  To: Christoph Hellwig, Keith Busch, linux-nvme, Sagi Grimberg
  Cc: Ming Lei, Guangwu Zhang, Kanchan Joshi, Anuj Gupta

Now that nvme_ns_chr_uring_cmd_iopoll() has switched to request-based
io polling, the associated NS is guaranteed to be live while polling,
and the request is guaranteed to be valid because blk-mq uses a
pre-allocated request pool.

Remove the RCU read lock from nvme_ns_chr_uring_cmd_iopoll(); it is no
longer needed after the switch to request-based io polling.

This fixes a "BUG: sleeping function called from invalid context"
splat, because set_page_dirty_lock() called from blk_rq_unmap_user()
may sleep.

Fixes: 585079b6e425 ("nvme: wire up async polling for io passthrough commands")
Reported-by: Guangwu Zhang <guazhang@redhat.com>
Cc: Kanchan Joshi <joshi.k@samsung.com>
Cc: Anuj Gupta <anuj20.g@samsung.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
---
 drivers/nvme/host/ioctl.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 5c3250f36ce7..d39f3219358b 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -786,11 +786,9 @@ int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
 	if (!(ioucmd->flags & IORING_URING_CMD_POLLED))
 		return 0;
 
-	rcu_read_lock();
 	req = READ_ONCE(ioucmd->cookie);
 	if (req && blk_rq_is_poll(req))
 		ret = blk_rq_poll(req, iob, poll_flags);
-	rcu_read_unlock();
 	return ret;
 }
 #ifdef CONFIG_NVME_MULTIPATH
-- 
2.40.1
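
For reference, after this patch the whole function reads roughly as
follows (a sketch: the parameter list and the local declarations of
req and ret are filled in from the hunk context, not quoted from the
tree):

int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
                                 struct io_comp_batch *iob,
                                 unsigned int poll_flags)
{
        struct request *req;
        int ret = 0;

        if (!(ioucmd->flags & IORING_URING_CMD_POLLED))
                return 0;

        /*
         * No rcu_read_lock() around the poll any more: the request
         * comes from blk-mq's pre-allocated pool and the NS is live
         * while polling, so blk_rq_poll() may safely reach code that
         * sleeps (e.g. set_page_dirty_lock() via blk_rq_unmap_user()).
         */
        req = READ_ONCE(ioucmd->cookie);
        if (req && blk_rq_is_poll(req))
                ret = blk_rq_poll(req, iob, poll_flags);
        return ret;
}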




* Re: [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
  2023-08-09  2:04 ` [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll Ming Lei
@ 2023-08-09  6:59   ` Kanchan Joshi
  2023-08-09  7:53     ` Ming Lei
  2023-08-11 14:12   ` Jens Axboe
  2023-08-11 14:12   ` Jens Axboe
  2 siblings, 1 reply; 7+ messages in thread
From: Kanchan Joshi @ 2023-08-09  6:59 UTC (permalink / raw)
  To: Ming Lei
  Cc: Christoph Hellwig, Keith Busch, linux-nvme, Sagi Grimberg,
	Guangwu Zhang, Anuj Gupta


On Wed, Aug 09, 2023 at 10:04:40AM +0800, Ming Lei wrote:
>Now that nvme_ns_chr_uring_cmd_iopoll() has switched to request-based
>io polling, the associated NS is guaranteed to be live while polling,
>and the request is guaranteed to be valid because blk-mq uses a
>pre-allocated request pool.
>
>Remove the RCU read lock from nvme_ns_chr_uring_cmd_iopoll(); it is no
>longer needed after the switch to request-based io polling.

>This fixes a "BUG: sleeping function called from invalid context"
>splat, because set_page_dirty_lock() called from blk_rq_unmap_user()
>may sleep.
>
>Fixes: 585079b6e425 ("nvme: wire up async polling for io passthrough commands")
>Reported-by: Guangwu Zhang <guazhang@redhat.com>

Thanks Ming. Looks fine, but is there a link to this report?
I don't see this breaking in my tests, so I wonder how to reproduce it
and improve the coverage.





* Re: [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
  2023-08-09  6:59   ` Kanchan Joshi
@ 2023-08-09  7:53     ` Ming Lei
  2023-08-10  6:34       ` Kanchan Joshi
  0 siblings, 1 reply; 7+ messages in thread
From: Ming Lei @ 2023-08-09  7:53 UTC (permalink / raw)
  To: Kanchan Joshi
  Cc: Christoph Hellwig, Keith Busch, linux-nvme, Sagi Grimberg,
	Guangwu Zhang, Anuj Gupta

On Wed, Aug 09, 2023 at 12:29:20PM +0530, Kanchan Joshi wrote:
> On Wed, Aug 09, 2023 at 10:04:40AM +0800, Ming Lei wrote:
> > Now that nvme_ns_chr_uring_cmd_iopoll() has switched to request-based
> > io polling, the associated NS is guaranteed to be live while polling,
> > and the request is guaranteed to be valid because blk-mq uses a
> > pre-allocated request pool.
> > 
> > Remove the RCU read lock from nvme_ns_chr_uring_cmd_iopoll(); it is no
> > longer needed after the switch to request-based io polling.
> 
> > This fixes a "BUG: sleeping function called from invalid context"
> > splat, because set_page_dirty_lock() called from blk_rq_unmap_user()
> > may sleep.
> > 
> > Fixes: 585079b6e425 ("nvme: wire up async polling for io passthrough commands")
> > Reported-by: Guangwu Zhang <guazhang@redhat.com>
> 
> Thanks Ming. Looks fine, but is there a link to this report?
> I don't see this breaking in my tests, so I wonder how to reproduce it
> and improve the coverage.

It was reported in RH BZ2227639; the stack trace follows:

[ 3286.960425] BUG: sleeping function called from invalid context at include/linux/pagemap.h:914
[ 3286.960434] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 530910, name: fio
[ 3286.960440] preempt_count: 1, expected: 0
[ 3286.960443] RCU nest depth: 1, expected: 0
[ 3286.960446] 3 locks held by fio/530910:
[ 3286.960450]  #0: ffff8881108e40b0 (&ctx->uring_lock){+.+.}-{3:3}, at: __do_sys_io_uring_enter+0x535/0x980
[ 3286.960476]  #1: ffffffff9b72a320 (rcu_read_lock){....}-{1:2}, at: nvme_ns_chr_uring_cmd_iopoll+0x5/0x270 [nvme_core]
[ 3286.960530]  #2: ffff88837937b098 (&nvmeq->cq_poll_lock){+.+.}-{2:2}, at: nvme_poll+0x129/0x180 [nvme]
[ 3286.960553] Preemption disabled at:
[ 3286.960555] [<0000000000000000>] 0x0
[ 3286.960691] CPU: 1 PID: 530910 Comm: fio Kdump: loaded Tainted: G        W    L X  -------  ---  5.14.0-345.el9.x86_64+debug #1
[ 3286.960700] Hardware name: Dell Inc. PowerEdge R640/06DKY5, BIOS 2.15.1 06/15/2022
[ 3286.960704] Call Trace:
[ 3286.960707]  <TASK>
[ 3286.960720]  dump_stack_lvl+0x57/0x81
[ 3286.960734]  __might_resched.cold+0x222/0x26b
[ 3286.960756]  set_page_dirty_lock+0x1d/0x130
[ 3286.960773]  __bio_release_pages+0x266/0x470
[ 3286.960811]  blk_rq_unmap_user+0x2a8/0x660
[ 3286.960824]  ? lock_acquire+0x1d8/0x640
[ 3286.960839]  ? sched_clock_cpu+0x15/0x1b0
[ 3286.960850]  ? find_held_lock+0x33/0x120
[ 3286.960870]  ? __pfx_blk_rq_unmap_user+0x10/0x10
[ 3286.960876]  ? __lock_release+0x4c1/0xa00
[ 3286.960894]  ? __pfx___lock_release+0x10/0x10
[ 3286.960908]  ? mark_held_locks+0xa5/0xf0
[ 3286.960938]  nvme_uring_cmd_end_io+0x204/0x300 [nvme_core]
[ 3286.960974]  ? __pfx_nvme_uring_cmd_end_io+0x10/0x10 [nvme_core]
[ 3286.961020]  __blk_mq_end_request+0xf6/0x4c0
[ 3286.961042]  nvme_poll_cq+0x71e/0xe40 [nvme]
[ 3286.961102]  nvme_poll+0x134/0x180 [nvme]
[ 3286.961121]  blk_mq_poll_classic+0x179/0x420
[ 3286.961153]  bio_poll+0x1f5/0x440
[ 3286.961182]  nvme_ns_chr_uring_cmd_iopoll+0x16f/0x270 [nvme_core]
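
To make the failure mode concrete, this is roughly the pre-patch
function with the call chain from the trace folded in as comments (a
sketch; the parameter list and locals are filled in from context):

int nvme_ns_chr_uring_cmd_iopoll(struct io_uring_cmd *ioucmd,
                                 struct io_comp_batch *iob,
                                 unsigned int poll_flags)
{
        struct request *req;
        int ret = 0;

        if (!(ioucmd->flags & IORING_URING_CMD_POLLED))
                return 0;

        rcu_read_lock();        /* atomic context: sleeping is forbidden */
        req = READ_ONCE(ioucmd->cookie);
        if (req && blk_rq_is_poll(req))
                /*
                 * The traced (5.14-based) kernel reaches this point via
                 * bio_poll() -> blk_mq_poll_classic(); either way the
                 * poll may complete the request inline:
                 *   nvme_poll() -> nvme_poll_cq()
                 *   -> __blk_mq_end_request() -> nvme_uring_cmd_end_io()
                 *   -> blk_rq_unmap_user() -> __bio_release_pages()
                 *   -> set_page_dirty_lock(), which may sleep.
                 */
                ret = blk_rq_poll(req, iob, poll_flags);
        rcu_read_unlock();
        return ret;
}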

Thanks,
Ming




* Re: [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
  2023-08-09  7:53     ` Ming Lei
@ 2023-08-10  6:34       ` Kanchan Joshi
  2023-08-10  8:14         ` Ming Lei
  0 siblings, 1 reply; 7+ messages in thread
From: Kanchan Joshi @ 2023-08-10  6:34 UTC (permalink / raw)
  To: Ming Lei
  Cc: Christoph Hellwig, Keith Busch, linux-nvme, Sagi Grimberg,
	Guangwu Zhang, Anuj Gupta


On Wed, Aug 09, 2023 at 03:53:35PM +0800, Ming Lei wrote:
>On Wed, Aug 09, 2023 at 12:29:20PM +0530, Kanchan Joshi wrote:
>> On Wed, Aug 09, 2023 at 10:04:40AM +0800, Ming Lei wrote:
>> > Now that nvme_ns_chr_uring_cmd_iopoll() has switched to request-based
>> > io polling, the associated NS is guaranteed to be live while polling,
>> > and the request is guaranteed to be valid because blk-mq uses a
>> > pre-allocated request pool.
>> >
>> > Remove the RCU read lock from nvme_ns_chr_uring_cmd_iopoll(); it is no
>> > longer needed after the switch to request-based io polling.
>>
>> > This fixes a "BUG: sleeping function called from invalid context"
>> > splat, because set_page_dirty_lock() called from blk_rq_unmap_user()
>> > may sleep.
>> >
>> > Fixes: 585079b6e425 ("nvme: wire up async polling for io passthrough commands")
>> > Reported-by: Guangwu Zhang <guazhang@redhat.com>
>>
>> Thanks Ming. Looks fine, but is there a link to this report?
>> I don't see this breaking in my tests, so I wonder how to reproduce it
>> and improve the coverage.
>
>It was reported in RH BZ2227639; the stack trace follows:

Tried to access it, but no luck.
Any chance the steps can be posted here?





* Re: [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
  2023-08-10  6:34       ` Kanchan Joshi
@ 2023-08-10  8:14         ` Ming Lei
  0 siblings, 0 replies; 7+ messages in thread
From: Ming Lei @ 2023-08-10  8:14 UTC (permalink / raw)
  To: Kanchan Joshi
  Cc: Christoph Hellwig, Keith Busch, linux-nvme, Sagi Grimberg,
	Guangwu Zhang, Anuj Gupta, ming.lei

On Thu, Aug 10, 2023 at 12:04:11PM +0530, Kanchan Joshi wrote:
> On Wed, Aug 09, 2023 at 03:53:35PM +0800, Ming Lei wrote:
> > On Wed, Aug 09, 2023 at 12:29:20PM +0530, Kanchan Joshi wrote:
> > > On Wed, Aug 09, 2023 at 10:04:40AM +0800, Ming Lei wrote:
> > > > Now that nvme_ns_chr_uring_cmd_iopoll() has switched to request-based
> > > > io polling, the associated NS is guaranteed to be live while polling,
> > > > and the request is guaranteed to be valid because blk-mq uses a
> > > > pre-allocated request pool.
> > > >
> > > > Remove the RCU read lock from nvme_ns_chr_uring_cmd_iopoll(); it is no
> > > > longer needed after the switch to request-based io polling.
> > > 
> > > > This fixes a "BUG: sleeping function called from invalid context"
> > > > splat, because set_page_dirty_lock() called from blk_rq_unmap_user()
> > > > may sleep.
> > > >
> > > > Fixes: 585079b6e425 ("nvme: wire up async polling for io passthrough commands")
> > > > Reported-by: Guangwu Zhang <guazhang@redhat.com>
> > > 
> > > Thanks Ming. Looks fine, but is there a link to this report?
> > > I don't see this breaking in my tests, so I wonder how to reproduce it
> > > and improve the coverage.
> > 
> > It was reported in RH BZ2227639; the stack trace follows:
> 
> Tried to access it, but no luck.
> Any chance the steps can be posted here?

It was reported by Guangwu Zhang, and I think it can be triggered by:

1) enable CONFIG_DEBUG_ATOMIC_SLEEP

2) run some nvme passthrough read workload; fio should be fine, but
don't pass --fixedbufs
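
For example, something along these lines should do (a hypothetical fio
invocation; the device path and most parameters are illustrative, the
point is polled passthrough reads without --fixedbufs):

# assumes CONFIG_DEBUG_ATOMIC_SLEEP=y and the nvme module loaded with
# poll_queues > 0; /dev/ng0n1 is the passthrough char device under test
fio --name=pt-poll --ioengine=io_uring_cmd --cmd_type=nvme \
    --filename=/dev/ng0n1 --rw=randread --bs=4k --iodepth=16 \
    --hipri=1 --time_based --runtime=30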

I also ran a quick trace on set_page_dirty_lock() on a non-debug kernel,
and it really is called from bio_poll() <- nvme_ns_chr_uring_cmd_iopoll().
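
Something like this hypothetical bpftrace one-liner is enough to see
the offending stacks:

bpftrace -e 'kprobe:set_page_dirty_lock { @[kstack] = count(); }'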

Thanks,
Ming




* Re: [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
  2023-08-09  2:04 ` [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll Ming Lei
  2023-08-09  6:59   ` Kanchan Joshi
@ 2023-08-11 14:12   ` Jens Axboe
  2023-08-11 14:12   ` Jens Axboe
  2 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2023-08-11 14:12 UTC (permalink / raw)
  To: Ming Lei, Christoph Hellwig, Keith Busch, linux-nvme,
	Sagi Grimberg
  Cc: Guangwu Zhang, Kanchan Joshi, Anuj Gupta

On 8/8/23 8:04 PM, Ming Lei wrote:
> Now that nvme_ns_chr_uring_cmd_iopoll() has switched to request-based
> io polling, the associated NS is guaranteed to be live while polling,
> and the request is guaranteed to be valid because blk-mq uses a
> pre-allocated request pool.
> 
> Remove the RCU read lock from nvme_ns_chr_uring_cmd_iopoll(); it is no
> longer needed after the switch to request-based io polling.
> 
> This fixes a "BUG: sleeping function called from invalid context"
> splat, because set_page_dirty_lock() called from blk_rq_unmap_user()
> may sleep.

Reviewed-by: Jens Axboe <axboe@kernel.dk>

Keith, I'll just apply this directly so it can make this week's pull
request.

-- 
Jens Axboe




* Re: [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
  2023-08-09  2:04 ` [PATCH] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll Ming Lei
  2023-08-09  6:59   ` Kanchan Joshi
  2023-08-11 14:12   ` Jens Axboe
@ 2023-08-11 14:12   ` Jens Axboe
  2 siblings, 0 replies; 7+ messages in thread
From: Jens Axboe @ 2023-08-11 14:12 UTC (permalink / raw)
  To: Christoph Hellwig, Keith Busch, linux-nvme, Sagi Grimberg,
	Ming Lei
  Cc: Guangwu Zhang, Kanchan Joshi, Anuj Gupta


On Wed, 09 Aug 2023 10:04:40 +0800, Ming Lei wrote:
> Now that nvme_ns_chr_uring_cmd_iopoll() has switched to request-based
> io polling, the associated NS is guaranteed to be live while polling,
> and the request is guaranteed to be valid because blk-mq uses a
> pre-allocated request pool.
> 
> Remove the RCU read lock from nvme_ns_chr_uring_cmd_iopoll(); it is no
> longer needed after the switch to request-based io polling.
> 
> [...]

Applied, thanks!

[1/1] nvme: core: don't hold rcu read lock in nvme_ns_chr_uring_cmd_iopoll
      (no commit info)

Best regards,
-- 
Jens Axboe





