* [PATCH v2 0/3] nvme: system fault while shutting down fabric controller
@ 2024-10-08 4:13 Nilay Shroff
From: Nilay Shroff @ 2024-10-08 4:13 UTC (permalink / raw)
To: linux-nvme; +Cc: kbusch, hch, sagi, axboe, chaitanyak, gjoyce, Nilay Shroff
We observed a kernel task hang and a kernel crash while shutting down an
NVMe fabric controller. These issues were observed while running blktests
nvme/037. The first two patches in this series address issues encountered
while running this test. The third patch in the series uses the helper
nvme_ctrl_state for accessing the NVMe controller state.
We intermittently observe the below kernel task hang while running
blktests nvme/037. This test sets up an NVMeOF passthru controller using
the loop target, connects to it and then immediately terminates/cleans up
the connection.
dmesg output:
-------------
run blktests nvme/037 at 2024-10-04 00:46:02
nvmet: creating nvm controller 1 for subsystem blktests-subsystem-1 for NQN nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
nvme nvme1: D3 entry latency set to 10 seconds
nvme nvme1: creating 32 I/O queues.
nvme nvme1: new ctrl: "blktests-subsystem-1"
nvme nvme1: Failed to configure AEN (cfg 300)
nvme nvme1: resetting controller
INFO: task nvme:3082 blocked for more than 120 seconds.
Not tainted 6.11.0+ #89
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:nvme state:D stack:0 pid:3082 tgid:3082 ppid:2983 flags:0x00042080
Call Trace:
0xc000000070f5bf90 (unreliable)
__switch_to+0x148/0x230
__schedule+0x260/0x6dc
schedule+0x40/0x100
blk_mq_freeze_queue_wait+0xa4/0xec
blk_mq_destroy_queue+0x68/0xac
nvme_remove_admin_tag_set+0x2c/0xb8 [nvme_core]
nvme_loop_destroy_admin_queue+0x68/0x88 [nvme_loop]
nvme_do_delete_ctrl+0x1e0/0x268 [nvme_core]
nvme_delete_ctrl_sync+0xd4/0x104 [nvme_core]
nvme_sysfs_delete+0x78/0x90 [nvme_core]
dev_attr_store+0x34/0x50
sysfs_kf_write+0x64/0x78
kernfs_fop_write_iter+0x1b0/0x290
vfs_write+0x3bc/0x4f8
ksys_write+0x84/0x140
system_call_exception+0x124/0x320
system_call_vectored_common+0x15c/0x2ec
As we can see from the above trace, the nvme task hangs indefinitely
while shutting down the loop controller. The task cannot make forward
progress because it is waiting for outstanding requests that haven't yet
completed.
The first patch in the series fixes the above hang by ensuring that while
shutting down the nvme loop controller, we flush to completion any pending
I/O which might have been queued after the queue was quiesced.
So the first patch adds the missing unquiesce of the admin and I/O queues
in the nvme loop driver just before the respective queue is destroyed.
The second patch in the series fixes another issue with the nvme keep-alive
operation. The keep-alive operation could potentially sneak in while
the fabric controller is shutting down. We encounter the below intermittent
kernel crash while running blktests nvme/037:
dmesg output:
------------
run blktests nvme/037 at 2024-10-04 03:59:27
<snip>
nvme nvme1: new ctrl: "blktests-subsystem-5"
nvme nvme1: Failed to configure AEN (cfg 300)
nvme nvme1: Removing ctrl: NQN "blktests-subsystem-5"
nvme nvme1: long keepalive RTT (54760 ms)
nvme nvme1: failed nvme_keep_alive_end_io error=4
BUG: Kernel NULL pointer dereference on read at 0x00000080
Faulting instruction address: 0xc00000000091c9f8
Oops: Kernel access of bad area, sig: 7 [#1]
LE PAGE_SIZE=64K MMU=Radix SMP NR_CPUS=2048 NUMA pSeries
<snip>
CPU: 28 UID: 0 PID: 338 Comm: kworker/u263:2 Kdump: loaded Not tainted 6.11.0+ #89
Hardware name: IBM,9043-MRX POWER10 (architected) 0x800200 0xf000006 of:IBM,FW1060.00 (NM1060_028) hv:phyp pSeries
Workqueue: nvme-wq nvme_keep_alive_work [nvme_core]
NIP: c00000000091c9f8 LR: c00000000084150c CTR: 0000000000000004
<snip>
NIP [c00000000091c9f8] sbitmap_any_bit_set+0x68/0xb8
LR [c00000000084150c] blk_mq_do_dispatch_ctx+0xcc/0x280
Call Trace:
autoremove_wake_function+0x0/0xbc (unreliable)
__blk_mq_sched_dispatch_requests+0x114/0x24c
blk_mq_sched_dispatch_requests+0x44/0x84
blk_mq_run_hw_queue+0x140/0x220
nvme_keep_alive_work+0xc8/0x19c [nvme_core]
process_one_work+0x200/0x4e0
worker_thread+0x340/0x504
kthread+0x138/0x140
start_kernel_thread+0x14/0x18
The above crash occurred while shutting down the fabric/loop controller.
While the fabric controller is shutting down, if an nvme keep-alive
request sneaks in and is later flushed, then nvme_keep_alive_end_io() is
invoked asynchronously to handle the end of the keep-alive operation.
nvme_keep_alive_end_io() decrements the admin queue usage ref counter;
assuming this was the last/only request on the admin queue, the counter
drops to zero. If that happens, then the blk-mq destroy queue operation
(blk_mq_destroy_queue()), which could potentially be running
simultaneously on another cpu (as this is the controller shutdown code
path), makes forward progress and deletes the admin queue. However, at
the same time the nvme keep-alive thread running on the other cpu hasn't
yet returned from its async blk-mq request submission (i.e.
blk_execute_rq_nowait()), so it may still access admin queue resources
which have already been released from the controller shutdown code path,
and that causes the observed symptom.
For instance, below is the sequence of operations running simultaneously
on two cpus that causes this issue:
cpu0:
nvme_keep_alive_work()
->blk_execute_rq_no_wait()
->blk_mq_run_hw_queue()
->blk_mq_sched_dispatch_requests()
->__blk_mq_sched_dispatch_requests()
->blk_mq_dispatch_rq_list()
->nvme_loop_queue_rq()
->nvme_fail_nonready_command() -- here keep-alive req fails because admin queue is shutting down
->nvme_complete_rq()
->nvme_end_req()
->blk_mq_end_request()
->__blk_mq_end_request()
->nvme_keep_alive_end_io() -- here we decrement admin-queue-usage-ref-counter
cpu1:
nvme_loop_delete_ctrl_host()
->nvme_loop_shutdown_ctrl()
->nvme_loop_destroy_admin_queue()
->nvme_remove_admin_tag_set()
->blk_mq_destroy_queue() -- here we wait until admin-queue-usage-ref-counter reaches zero
->blk_put_queue() -- here we destroy queue once admin-queue-usage-ref-counter becomes zero
-- From here on we are not supposed to access admin queue
resources; however, the nvme keep-alive thread running on
cpu0 has not yet finished and so may still access the
admin queue pointer, causing the observed crash.
So prima facie, from the above trace it appears that the nvme keep-alive
thread running on one cpu races with the controller shutdown operation
running on another cpu.
The second patch in the series addresses the above issue by making nvme
keep-alive a synchronous operation, so that we decrement the admin queue
usage ref counter only after the keep-alive command finishes and returns
its status. This also ensures that blk_mq_destroy_queue() doesn't return
until the nvme keep-alive thread finishes its work, so it's safe to
destroy the queue.
Moreover, the keep-alive command is lightweight and low-frequency, so
handling it synchronously is reasonable from a performance perspective.
Since this command is infrequent compared to other NVMe operations (like
reads/writes), it does not introduce significant overhead when handled
synchronously.
The third patch in the series addresses the use of ctrl->lock before
accessing the NVMe controller state in the nvme_keep_alive_finish
function. With the introduction of the helper nvme_ctrl_state, we no
longer need to acquire ctrl->lock before accessing the NVMe controller
state. So this patch removes the use of ctrl->lock from
nvme_keep_alive_finish and replaces it with a call to the helper
nvme_ctrl_state.
Changes since v1:
- Split the second patch and move the use of the helper
nvme_ctrl_state into the third patch (Christoph Hellwig)
Nilay Shroff (3):
nvme-loop: flush off pending I/O while shutting down loop controller
nvme: make keep-alive synchronous operation
nvme: use helper nvme_ctrl_state in nvme_keep_alive_finish function
drivers/nvme/host/core.c | 25 ++++++++++---------------
drivers/nvme/target/loop.c | 13 +++++++++++++
2 files changed, 23 insertions(+), 15 deletions(-)
--
2.45.2
* [PATCH v2 1/3] nvme-loop: flush off pending I/O while shutting down loop controller
From: Nilay Shroff @ 2024-10-08 4:13 UTC (permalink / raw)
To: linux-nvme; +Cc: kbusch, hch, sagi, axboe, chaitanyak, gjoyce, Nilay Shroff
While shutting down the loop controller, we first quiesce the admin/IO
queue, delete the admin/IO tag-set and then finally destroy the admin/IO
queue. However, it's quite possible that during the window between
quiescing and destroying the admin/IO queue, some admin/IO request sneaks
in; if that happens, we could potentially encounter a hung task, because
the shutdown operation can't make forward progress until all pending I/O
is flushed.
This commit ensures that before destroying the admin/IO queue, we
unquiesce it so that any outstanding requests, which were added after the
queue was quiesced, are now flushed to completion.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/target/loop.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index e32790d8fc26..a9d112d34d4f 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -265,6 +265,13 @@ static void nvme_loop_destroy_admin_queue(struct nvme_loop_ctrl *ctrl)
{
if (!test_and_clear_bit(NVME_LOOP_Q_LIVE, &ctrl->queues[0].flags))
return;
+ /*
+ * It's possible that some requests might have been added
+ * after admin queue is stopped/quiesced. So now start the
+ * queue to flush these requests to the completion.
+ */
+ nvme_unquiesce_admin_queue(&ctrl->ctrl);
+
nvmet_sq_destroy(&ctrl->queues[0].nvme_sq);
nvme_remove_admin_tag_set(&ctrl->ctrl);
}
@@ -297,6 +304,12 @@ static void nvme_loop_destroy_io_queues(struct nvme_loop_ctrl *ctrl)
nvmet_sq_destroy(&ctrl->queues[i].nvme_sq);
}
ctrl->ctrl.queue_count = 1;
+ /*
+ * It's possible that some requests might have been added
+ * after io queue is stopped/quiesced. So now start the
+ * queue to flush these requests to the completion.
+ */
+ nvme_unquiesce_io_queues(&ctrl->ctrl);
}
static int nvme_loop_init_io_queues(struct nvme_loop_ctrl *ctrl)
--
2.45.2
* [PATCH v2 2/3] nvme: make keep-alive synchronous operation
From: Nilay Shroff @ 2024-10-08 4:13 UTC (permalink / raw)
To: linux-nvme; +Cc: kbusch, hch, sagi, axboe, chaitanyak, gjoyce, Nilay Shroff
The nvme keep-alive operation, which executes at a periodic interval,
could potentially sneak in while a fabric controller is shutting down.
This may lead to a race between the fabric controller admin queue
destroy code path (run while shutting down the controller) and the
blk-mq hw/hctx queueing from the keep-alive thread.
This fix avoids the race by implementing keep-alive as a synchronous
operation, so that the admin queue usage ref counter is decremented only
after the keep-alive command finishes execution and returns its status.
This ensures that we don't inadvertently destroy the fabric admin queue
until we finish processing the nvme keep-alive request and its status,
and hence it's safe to delete the queue.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/core.c | 18 ++++++++----------
1 file changed, 8 insertions(+), 10 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 02897f0564a3..736adbf65ef5 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1292,10 +1292,10 @@ static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
}
-static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
- blk_status_t status)
+static void nvme_keep_alive_finish(struct request *rq,
+ blk_status_t status,
+ struct nvme_ctrl *ctrl)
{
- struct nvme_ctrl *ctrl = rq->end_io_data;
unsigned long flags;
bool startka = false;
unsigned long rtt = jiffies - (rq->deadline - rq->timeout);
@@ -1313,13 +1313,11 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
delay = 0;
}
- blk_mq_free_request(rq);
-
if (status) {
dev_err(ctrl->device,
"failed nvme_keep_alive_end_io error=%d\n",
status);
- return RQ_END_IO_NONE;
+ return;
}
ctrl->ka_last_check_time = jiffies;
@@ -1331,7 +1329,6 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
spin_unlock_irqrestore(&ctrl->lock, flags);
if (startka)
queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
- return RQ_END_IO_NONE;
}
static void nvme_keep_alive_work(struct work_struct *work)
@@ -1340,6 +1337,7 @@ static void nvme_keep_alive_work(struct work_struct *work)
struct nvme_ctrl, ka_work);
bool comp_seen = ctrl->comp_seen;
struct request *rq;
+ blk_status_t status;
ctrl->ka_last_check_time = jiffies;
@@ -1362,9 +1360,9 @@ static void nvme_keep_alive_work(struct work_struct *work)
nvme_init_request(rq, &ctrl->ka_cmd);
rq->timeout = ctrl->kato * HZ;
- rq->end_io = nvme_keep_alive_end_io;
- rq->end_io_data = ctrl;
- blk_execute_rq_nowait(rq, false);
+ status = blk_execute_rq(rq, false);
+ nvme_keep_alive_finish(rq, status, ctrl);
+ blk_mq_free_request(rq);
}
static void nvme_start_keep_alive(struct nvme_ctrl *ctrl)
--
2.45.2
* [PATCH v2 3/3] nvme: use helper nvme_ctrl_state in nvme_keep_alive_finish function
From: Nilay Shroff @ 2024-10-08 4:13 UTC (permalink / raw)
To: linux-nvme; +Cc: kbusch, hch, sagi, axboe, chaitanyak, gjoyce, Nilay Shroff
We no longer need to acquire ctrl->lock for accessing the NVMe controller
state; instead we can now use the helper nvme_ctrl_state. So replace
the use of ctrl->lock in the nvme_keep_alive_finish function with a call
to nvme_ctrl_state.
Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
---
drivers/nvme/host/core.c | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 736adbf65ef5..5a690cf16e5e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1296,10 +1296,10 @@ static void nvme_keep_alive_finish(struct request *rq,
blk_status_t status,
struct nvme_ctrl *ctrl)
{
- unsigned long flags;
bool startka = false;
unsigned long rtt = jiffies - (rq->deadline - rq->timeout);
unsigned long delay = nvme_keep_alive_work_period(ctrl);
+ enum nvme_ctrl_state state = nvme_ctrl_state(ctrl);
/*
* Subtract off the keepalive RTT so nvme_keep_alive_work runs
@@ -1322,11 +1322,8 @@ static void nvme_keep_alive_finish(struct request *rq,
ctrl->ka_last_check_time = jiffies;
ctrl->comp_seen = false;
- spin_lock_irqsave(&ctrl->lock, flags);
- if (ctrl->state == NVME_CTRL_LIVE ||
- ctrl->state == NVME_CTRL_CONNECTING)
+ if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING)
startka = true;
- spin_unlock_irqrestore(&ctrl->lock, flags);
if (startka)
queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
}
--
2.45.2
* Re: [PATCH v2 3/3] nvme: use helper nvme_ctrl_state in nvme_keep_alive_finish function
From: Damien Le Moal @ 2024-10-08 5:19 UTC (permalink / raw)
To: Nilay Shroff, linux-nvme; +Cc: kbusch, hch, sagi, axboe, chaitanyak, gjoyce
On 10/8/24 13:13, Nilay Shroff wrote:
> We no longer need to acquire ctrl->lock for accessing the NVMe controller
> state; instead we can now use the helper nvme_ctrl_state. So replace
> the use of ctrl->lock in the nvme_keep_alive_finish function with a call
> to nvme_ctrl_state.
>
> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
> ---
> drivers/nvme/host/core.c | 7 ++-----
> 1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 736adbf65ef5..5a690cf16e5e 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1296,10 +1296,10 @@ static void nvme_keep_alive_finish(struct request *rq,
> blk_status_t status,
> struct nvme_ctrl *ctrl)
> {
> - unsigned long flags;
> bool startka = false;
> unsigned long rtt = jiffies - (rq->deadline - rq->timeout);
> unsigned long delay = nvme_keep_alive_work_period(ctrl);
> + enum nvme_ctrl_state state = nvme_ctrl_state(ctrl);
>
> /*
> * Subtract off the keepalive RTT so nvme_keep_alive_work runs
> @@ -1322,11 +1322,8 @@ static void nvme_keep_alive_finish(struct request *rq,
>
> ctrl->ka_last_check_time = jiffies;
> ctrl->comp_seen = false;
> - spin_lock_irqsave(&ctrl->lock, flags);
> - if (ctrl->state == NVME_CTRL_LIVE ||
> - ctrl->state == NVME_CTRL_CONNECTING)
> + if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING)
> startka = true;
startka = state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING;
But do you even need that variable now ? The below "if (startka)" could be
replaced with "if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING)", no ?
> - spin_unlock_irqrestore(&ctrl->lock, flags);
> if (startka)
> queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
> }
--
Damien Le Moal
Western Digital Research
* Re: [PATCH v2 3/3] nvme: use helper nvme_ctrl_state in nvme_keep_alive_finish function
From: Nilay Shroff @ 2024-10-08 5:36 UTC (permalink / raw)
To: Damien Le Moal, linux-nvme; +Cc: kbusch, hch, sagi, axboe, chaitanyak, gjoyce
On 10/8/24 10:49, Damien Le Moal wrote:
> On 10/8/24 13:13, Nilay Shroff wrote:
>> We no longer need to acquire ctrl->lock for accessing the NVMe controller
>> state; instead we can now use the helper nvme_ctrl_state. So replace
>> the use of ctrl->lock in the nvme_keep_alive_finish function with a call
>> to nvme_ctrl_state.
>>
>> Signed-off-by: Nilay Shroff <nilay@linux.ibm.com>
>> ---
>> drivers/nvme/host/core.c | 7 ++-----
>> 1 file changed, 2 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 736adbf65ef5..5a690cf16e5e 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -1296,10 +1296,10 @@ static void nvme_keep_alive_finish(struct request *rq,
>> blk_status_t status,
>> struct nvme_ctrl *ctrl)
>> {
>> - unsigned long flags;
>> bool startka = false;
>> unsigned long rtt = jiffies - (rq->deadline - rq->timeout);
>> unsigned long delay = nvme_keep_alive_work_period(ctrl);
>> + enum nvme_ctrl_state state = nvme_ctrl_state(ctrl);
>>
>> /*
>> * Subtract off the keepalive RTT so nvme_keep_alive_work runs
>> @@ -1322,11 +1322,8 @@ static void nvme_keep_alive_finish(struct request *rq,
>>
>> ctrl->ka_last_check_time = jiffies;
>> ctrl->comp_seen = false;
>> - spin_lock_irqsave(&ctrl->lock, flags);
>> - if (ctrl->state == NVME_CTRL_LIVE ||
>> - ctrl->state == NVME_CTRL_CONNECTING)
>> + if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING)
>> startka = true;
>
> startka = state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING;
>
> But do you even need that variable now ? The below "if (startka)" could be
> replaced with "if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING)", no ?
>
>> - spin_unlock_irqrestore(&ctrl->lock, flags);
>> if (startka)
>> queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
>> }
>
>
Oops yeah you were correct, we don't need @startka. I will spin a new version.
Thanks for your comment!
--Nilay