public inbox for linux-nvme@lists.infradead.org
* [PATCH 0/3] keepalive bugfixes
@ 2023-04-17 22:55 Uday Shankar
  2023-04-17 22:55 ` [PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on Uday Shankar
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Uday Shankar @ 2023-04-17 22:55 UTC (permalink / raw)
  To: Costa Sapuntzakis, Randy Jennings, Hannes Reinecke, Sagi Grimberg,
	Keith Busch, Christoph Hellwig, Jens Axboe
  Cc: Uday Shankar, linux-nvme

While reviewing the Linux KATO implementation in an attempt to better
understand the current NVMe Keep Alive specification, we found a
few issues in the host implementation that could contribute to spurious
Keep Alive timeouts being detected by controllers.

Uday Shankar (3):
  nvme: double KA polling frequency to avoid KATO with TBKAS on
  nvme: check IO start time when deciding to defer KA
  nvme: improve handling of long keep alives

 drivers/nvme/host/core.c | 38 +++++++++++++++++++++++++++++++++++---
 drivers/nvme/host/nvme.h |  5 +++--
 2 files changed, 38 insertions(+), 5 deletions(-)

-- 
2.25.1




* [PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on
  2023-04-17 22:55 [PATCH 0/3] keepalive bugfixes Uday Shankar
@ 2023-04-17 22:55 ` Uday Shankar
  2023-04-18 16:03   ` Sagi Grimberg
  2023-04-17 22:55 ` [PATCH 2/3] nvme: check IO start time when deciding to defer KA Uday Shankar
  2023-04-17 22:55 ` [PATCH 3/3] nvme: improve handling of long keep alives Uday Shankar
  2 siblings, 1 reply; 10+ messages in thread
From: Uday Shankar @ 2023-04-17 22:55 UTC (permalink / raw)
  To: Costa Sapuntzakis, Randy Jennings, Hannes Reinecke, Sagi Grimberg,
	Keith Busch, Christoph Hellwig, Jens Axboe
  Cc: Uday Shankar, linux-nvme

With TBKAS on, the completion of one command can defer sending a
keep alive for up to twice the delay between successive runs of
nvme_keep_alive_work. The current delay of KATO / 2 thus makes it
possible for one command to defer sending a keep alive for up to
KATO, which can result in the controller detecting a KATO. The following
trace demonstrates the issue, taking KATO = 8 for simplicity:

1. t = 0: run nvme_keep_alive_work, no keep-alive sent
2. t = ε: I/O completion seen, set comp_seen = true
3. t = 4: run nvme_keep_alive_work, see comp_seen == true,
          skip sending keep-alive, set comp_seen = false
4. t = 8: run nvme_keep_alive_work, see comp_seen == false,
          send a keep-alive command.

Here, there is a delay of 8 - ε between receiving a command completion
and sending the next command. With ε small, the controller is likely to
detect a keep alive timeout.

Fix this by running nvme_keep_alive_work with a delay of KATO / 4
whenever TBKAS is on. Going through the above trace now gives us a
worst-case delay of 4 - ε, which is in line with the recommendation of
sending a command every KATO / 2 in the NVMe specification.

Reported-by: Costa Sapuntzakis <costa@purestorage.com>
Reported-by: Randy Jennings <randyj@purestorage.com>
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/core.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 6c1e7d6709e0..1298c7b9bffb 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1150,10 +1150,16 @@ EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, NVME_TARGET_PASSTHRU);
  * 
  *   The host should send Keep Alive commands at half of the Keep Alive Timeout
  *   accounting for transport roundtrip times [..].
+ * 
+ * When TBKAS is on, we need to run nvme_keep_alive_work at twice this
+ * frequency, as one command completion can postpone sending a keep alive
+ * command by up to twice the delay between runs.
  */
 static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
 {
-	queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ / 2);
+	unsigned long delay = (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) ?
+		ctrl->kato * HZ / 4 : ctrl->kato * HZ / 2;
+	queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
 }
 
 static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
-- 
2.25.1




* [PATCH 2/3] nvme: check IO start time when deciding to defer KA
  2023-04-17 22:55 [PATCH 0/3] keepalive bugfixes Uday Shankar
  2023-04-17 22:55 ` [PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on Uday Shankar
@ 2023-04-17 22:55 ` Uday Shankar
  2023-04-18 16:56   ` Sagi Grimberg
  2023-04-17 22:55 ` [PATCH 3/3] nvme: improve handling of long keep alives Uday Shankar
  2 siblings, 1 reply; 10+ messages in thread
From: Uday Shankar @ 2023-04-17 22:55 UTC (permalink / raw)
  To: Costa Sapuntzakis, Randy Jennings, Hannes Reinecke, Sagi Grimberg,
	Keith Busch, Christoph Hellwig, Jens Axboe
  Cc: Uday Shankar, linux-nvme

When a command completes, we set a flag which will skip sending a
keep alive at the next run of nvme_keep_alive_work when TBKAS is on.
However, if the command was submitted long ago, it's possible that
the controller may have also restarted its keep alive timer (as a
result of receiving the command) long ago. The following trace
demonstrates the issue, assuming TBKAS is on and KATO = 8 for
simplicity:

1. t = 0: submit I/O commands A, B, C, D, E
2. t = 0.5: commands A, B, C, D, E reach controller, restart its keep
            alive timer
3. t = 1: A completes
4. t = 2: run nvme_keep_alive_work, see recent completion, do nothing
5. t = 3: B completes
6. t = 4: run nvme_keep_alive_work, see recent completion, do nothing
7. t = 5: C completes
8. t = 6: run nvme_keep_alive_work, see recent completion, do nothing
9. t = 7: D completes
10. t = 8: run nvme_keep_alive_work, see recent completion, do nothing
11. t = 9: E completes

At this point, 8.5 seconds have passed without restarting the
controller's keep alive timer, so the controller will detect a keep
alive timeout.

Fix this by checking the IO start time when deciding to defer sending a
keep alive command. Only set comp_seen if the command started after the
most recent run of nvme_keep_alive_work. With this change, the
completions of B, C, and D will not set comp_seen and the run of
nvme_keep_alive_work at t = 4 will send a keep alive.
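
For illustration, a small user-space model (an editor's sketch, not kernel
code) replays the trace above with the start-time check applied: all five
commands carry start_time = 0 and keep alive work runs every 2 seconds
(KATO / 4 with KATO = 8).

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	int completion_times[] = { 1, 3, 5, 7, 9 };	/* A..E complete */
	int start_time = 0;		/* all five submitted at t = 0 */
	int ka_last_check_time = 0;
	bool comp_seen = false;
	int i = 0;

	for (int t = 1; t <= 9; t++) {
		if (i < 5 && t == completion_times[i]) {
			/* Only traffic newer than the last check defers. */
			if (start_time >= ka_last_check_time)
				comp_seen = true;
			printf("t=%d: %c completes, comp_seen=%d\n",
			       t, 'A' + i, comp_seen);
			i++;
		}
		if (t % 2 == 0) {	/* nvme_keep_alive_work */
			ka_last_check_time = t;
			printf("t=%d: ka_work: %s keep alive\n", t,
			       comp_seen ? "defer" : "send");
			comp_seen = false;
		}
	}
	return 0;
}

Only A's completion (the first one after the previous check) defers; the
run at t = 4 sends a keep alive, well within the controller's timeout.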

Reported-by: Costa Sapuntzakis <costa@purestorage.com>
Reported-by: Randy Jennings <randyj@purestorage.com>
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/core.c | 14 +++++++++++++-
 drivers/nvme/host/nvme.h |  5 +++--
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 1298c7b9bffb..8a63051d7b5e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -397,7 +397,14 @@ void nvme_complete_rq(struct request *req)
 	trace_nvme_complete_rq(req);
 	nvme_cleanup_cmd(req);
 
-	if (ctrl->kas)
+	/*
+	 * Completions of long-running commands should not be able to
+	 * defer sending of periodic keep alives, since the controller
+	 * may have completed processing such commands a long time ago
+	 * (arbitrarily close to command submission time).
+	 */
+	if (ctrl->kas && !ctrl->comp_seen
+		      && nvme_req(req)->start_time >= ctrl->ka_last_check_time)
 		ctrl->comp_seen = true;
 
 	switch (nvme_decide_disposition(req)) {
@@ -1178,6 +1185,8 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
 		return RQ_END_IO_NONE;
 	}
 
+	WRITE_ONCE(ctrl->ka_last_check_time, jiffies);
+	smp_wmb();
 	ctrl->comp_seen = false;
 	spin_lock_irqsave(&ctrl->lock, flags);
 	if (ctrl->state == NVME_CTRL_LIVE ||
@@ -1196,6 +1205,9 @@ static void nvme_keep_alive_work(struct work_struct *work)
 	bool comp_seen = ctrl->comp_seen;
 	struct request *rq;
 
+	WRITE_ONCE(ctrl->ka_last_check_time, jiffies);
+	smp_wmb();
+
 	if ((ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) && comp_seen) {
 		dev_dbg(ctrl->device,
 			"reschedule traffic based keep-alive timer\n");
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bf46f122e9e1..f044ffb9ce10 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -162,9 +162,7 @@ struct nvme_request {
 	u8			retries;
 	u8			flags;
 	u16			status;
-#ifdef CONFIG_NVME_MULTIPATH
 	unsigned long		start_time;
-#endif
 	struct nvme_ctrl	*ctrl;
 };
 
@@ -323,6 +321,7 @@ struct nvme_ctrl {
 	struct delayed_work ka_work;
 	struct delayed_work failfast_work;
 	struct nvme_command ka_cmd;
+	unsigned long ka_last_check_time;
 	struct work_struct fw_act_work;
 	unsigned long events;
 
@@ -1028,6 +1027,8 @@ static inline void nvme_start_request(struct request *rq)
 {
 	if (rq->cmd_flags & REQ_NVME_MPATH)
 		nvme_mpath_start_request(rq);
+	else
+		nvme_req(rq)->start_time = jiffies;
 	blk_mq_start_request(rq);
 }
 
-- 
2.25.1




* [PATCH 3/3] nvme: improve handling of long keep alives
  2023-04-17 22:55 [PATCH 0/3] keepalive bugfixes Uday Shankar
  2023-04-17 22:55 ` [PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on Uday Shankar
  2023-04-17 22:55 ` [PATCH 2/3] nvme: check IO start time when deciding to defer KA Uday Shankar
@ 2023-04-17 22:55 ` Uday Shankar
  2023-04-18 16:59   ` Sagi Grimberg
  2 siblings, 1 reply; 10+ messages in thread
From: Uday Shankar @ 2023-04-17 22:55 UTC (permalink / raw)
  To: Costa Sapuntzakis, Randy Jennings, Hannes Reinecke, Sagi Grimberg,
	Keith Busch, Christoph Hellwig, Jens Axboe
  Cc: Uday Shankar, linux-nvme

Upon keep alive completion, nvme_keep_alive_work is scheduled with the
same delay every time. If keep alive commands are completing slowly,
this may cause a keep alive timeout. The following trace illustrates the
issue, taking KATO = 8 and TBKAS off for simplicity:

1. t = 0: run nvme_keep_alive_work, send keep alive
2. t = ε: keep alive reaches controller, controller restarts its keep
          alive timer
3. t = 4: host receives keep alive completion, schedules
          nvme_keep_alive_work with delay 4
4. t = 8: run nvme_keep_alive_work, send keep alive

Here, a keep alive having RTT of 4 causes a delay of at least 8 - ε
between the controller receiving successive keep alives. With ε small,
the controller is likely to detect a keep alive timeout.

Fix this by calculating the RTT of the keep alive command, and adjusting
the scheduling delay of the next keep alive work accordingly.
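
As a rough illustration (an editor's sketch in user space, not the patch
itself), the effect of the adjustment on the spacing seen by the
controller, with KATO = 8, TBKAS off, and a 4 second round trip as in the
trace:

#include <stdio.h>

/*
 * Spacing between two successive keep alives as seen by the controller:
 * the first is sent at t = 0 and arrives almost immediately, its
 * completion reaches the host after `rtt`, and the next keep alive is
 * sent `delay` later.
 */
static int controller_gap(int period, int rtt, int adjust)
{
	int delay = period;

	if (adjust)
		delay = rtt <= period ? period - rtt : 0;
	return rtt + delay;
}

int main(void)
{
	int kato = 8, rtt = 4;

	printf("unadjusted: %d s\n", controller_gap(kato / 2, rtt, 0));
	/* prints 8: a full KATO between keep alives reaching the controller */
	printf("adjusted:   %d s\n", controller_gap(kato / 2, rtt, 1));
	/* prints 4: back to the intended KATO / 2 spacing */
	return 0;
}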

Reported-by: Costa Sapuntzakis <costa@purestorage.com>
Reported-by: Randy Jennings <randyj@purestorage.com>
Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/nvme/host/core.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 8a63051d7b5e..fbb8b2f41fe4 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1162,10 +1162,15 @@ EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, NVME_TARGET_PASSTHRU);
  * frequency, as one command completion can postpone sending a keep alive
  * command by up to twice the delay between runs.
  */
+static unsigned long nvme_keep_alive_work_period(struct nvme_ctrl *ctrl)
+{
+	return (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) ?
+		(ctrl->kato * HZ / 4) : (ctrl->kato * HZ / 2);
+}
+
 static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
 {
-	unsigned long delay = (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) ?
-		ctrl->kato * HZ / 4 : ctrl->kato * HZ / 2;
+	unsigned long delay = nvme_keep_alive_work_period(ctrl);
 	queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
 }
 
@@ -1175,6 +1180,15 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
 	struct nvme_ctrl *ctrl = rq->end_io_data;
 	unsigned long flags;
 	bool startka = false;
+	unsigned long rtt = jiffies - nvme_req(rq)->start_time;
+	unsigned long delay = nvme_keep_alive_work_period(ctrl);
+
+	/* Subtract off the keepalive RTT so nvme_keep_alive_work runs
+	 * at the desired frequency. */
+	if (rtt <= delay)
+		delay -= rtt;
+	else
+		delay = 0;
 
 	blk_mq_free_request(rq);
 
@@ -1194,7 +1208,7 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
 		startka = true;
 	spin_unlock_irqrestore(&ctrl->lock, flags);
 	if (startka)
-		nvme_queue_keep_alive_work(ctrl);
+		queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
 	return RQ_END_IO_NONE;
 }
 
-- 
2.25.1




* Re: [PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on
  2023-04-17 22:55 ` [PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on Uday Shankar
@ 2023-04-18 16:03   ` Sagi Grimberg
  2023-04-18 16:48     ` Hannes Reinecke
  0 siblings, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2023-04-18 16:03 UTC (permalink / raw)
  To: Uday Shankar, Costa Sapuntzakis, Randy Jennings, Hannes Reinecke,
	Keith Busch, Christoph Hellwig, Jens Axboe
  Cc: linux-nvme


> With TBKAS on, the completion of one command can defer sending a
> keep alive for up to twice the delay between successive runs of
> nvme_keep_alive_work. The current delay of KATO / 2 thus makes it
> possible for one command to defer sending a keep alive for up to
> KATO, which can result in the controller detecting a KATO. The following
> trace demonstrates the issue, taking KATO = 8 for simplicity:
> 
> 1. t = 0: run nvme_keep_alive_work, no keep-alive sent
> 2. t = ε: I/O completion seen, set comp_seen = true
> 3. t = 4: run nvme_keep_alive_work, see comp_seen == true,
>            skip sending keep-alive, set comp_seen = false
> 4. t = 8: run nvme_keep_alive_work, see comp_seen == false,
>            send a keep-alive command.
> 
> Here, there is a delay of 8 - ε between receiving a command completion
> and sending the next command. With ε small, the controller is likely to
> detect a keep alive timeout.
> 
> Fix this by running nvme_keep_alive_work with a delay of KATO / 4
> whenever TBKAS is on. Going through the above trace now gives us a
> worst-case delay of 4 - ε, which is in line with the recommendation of
> sending a command every KATO / 2 in the NVMe specification.
> 
> Reported-by: Costa Sapuntzakis <costa@purestorage.com>
> Reported-by: Randy Jennings <randyj@purestorage.com>
> Signed-off-by: Uday Shankar <ushankar@purestorage.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/host/core.c | 8 +++++++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 6c1e7d6709e0..1298c7b9bffb 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -1150,10 +1150,16 @@ EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, NVME_TARGET_PASSTHRU);
>    *
>    *   The host should send Keep Alive commands at half of the Keep Alive Timeout
>    *   accounting for transport roundtrip times [..].
> + *
> + * When TBKAS is on, we need to run nvme_keep_alive_work at twice this
> + * frequency, as one command completion can postpone sending a keep alive
> + * command by up to twice the delay between runs.
>    */
>   static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
>   {
> -	queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ / 2);
> +	unsigned long delay = (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) ?
> +		ctrl->kato * HZ / 4 : ctrl->kato * HZ / 2;
> +	queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
>   }
>   
>   static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,

This looks fine to me, the only thing that is a bit concerning is that
we may end up sending keep-alives too frequently (the default kato of 10
divided by 4 gives one every 2.5 seconds).



* Re: [PATCH 1/3] nvme: double KA polling frequency to avoid KATO with TBKAS on
  2023-04-18 16:03   ` Sagi Grimberg
@ 2023-04-18 16:48     ` Hannes Reinecke
  0 siblings, 0 replies; 10+ messages in thread
From: Hannes Reinecke @ 2023-04-18 16:48 UTC (permalink / raw)
  To: Sagi Grimberg, Uday Shankar, Costa Sapuntzakis, Randy Jennings,
	Keith Busch, Christoph Hellwig, Jens Axboe
  Cc: linux-nvme

On 4/18/23 18:03, Sagi Grimberg wrote:
> 
>> With TBKAS on, the completion of one command can defer sending a
>> keep alive for up to twice the delay between successive runs of
>> nvme_keep_alive_work. The current delay of KATO / 2 thus makes it
>> possible for one command to defer sending a keep alive for up to
>> KATO, which can result in the controller detecting a KATO. The following
>> trace demonstrates the issue, taking KATO = 8 for simplicity:
>>
>> 1. t = 0: run nvme_keep_alive_work, no keep-alive sent
>> 2. t = ε: I/O completion seen, set comp_seen = true
>> 3. t = 4: run nvme_keep_alive_work, see comp_seen == true,
>>            skip sending keep-alive, set comp_seen = false
>> 4. t = 8: run nvme_keep_alive_work, see comp_seen == false,
>>            send a keep-alive command.
>>
>> Here, there is a delay of 8 - ε between receiving a command completion
>> and sending the next command. With ε small, the controller is likely to
>> detect a keep alive timeout.
>>
>> Fix this by running nvme_keep_alive_work with a delay of KATO / 4
>> whenever TBKAS is on. Going through the above trace now gives us a
>> worst-case delay of 4 - ε, which is in line with the recommendation of
>> sending a command every KATO / 2 in the NVMe specification.
>>
>> Reported-by: Costa Sapuntzakis <costa@purestorage.com>
>> Reported-by: Randy Jennings <randyj@purestorage.com>
>> Signed-off-by: Uday Shankar <ushankar@purestorage.com>
>> Reviewed-by: Hannes Reinecke <hare@suse.de>
>> ---
>>   drivers/nvme/host/core.c | 8 +++++++-
>>   1 file changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
>> index 6c1e7d6709e0..1298c7b9bffb 100644
>> --- a/drivers/nvme/host/core.c
>> +++ b/drivers/nvme/host/core.c
>> @@ -1150,10 +1150,16 @@ EXPORT_SYMBOL_NS_GPL(nvme_passthru_end, 
>> NVME_TARGET_PASSTHRU);
>>    *
>>    *   The host should send Keep Alive commands at half of the Keep 
>> Alive Timeout
>>    *   accounting for transport roundtrip times [..].
>> + *
>> + * When TBKAS is on, we need to run nvme_keep_alive_work at twice this
>> + * frequency, as one command completion can postpone sending a keep 
>> alive
>> + * command by up to twice the delay between runs.
>>    */
>>   static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
>>   {
>> -    queue_delayed_work(nvme_wq, &ctrl->ka_work, ctrl->kato * HZ / 2);
>> +    unsigned long delay = (ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) ?
>> +        ctrl->kato * HZ / 4 : ctrl->kato * HZ / 2;
>> +    queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
>>   }
>>   static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
> 
> This looks fine to me, the only thing that is a bit concerning is that
> we may end up sending keep-alives too frequently (the default kato of 10
> divided by 4 gives one every 2.5 seconds).

Well, this is with TBKAS on, so we're sending keep-alives only if there 
is no other traffic on the wire. And then sending a keep-alive every two 
seconds is hardly exacting.

Cheers,

Hannes




* Re: [PATCH 2/3] nvme: check IO start time when deciding to defer KA
  2023-04-17 22:55 ` [PATCH 2/3] nvme: check IO start time when deciding to defer KA Uday Shankar
@ 2023-04-18 16:56   ` Sagi Grimberg
  2023-04-20 19:37     ` Uday Shankar
  0 siblings, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2023-04-18 16:56 UTC (permalink / raw)
  To: Uday Shankar, Costa Sapuntzakis, Randy Jennings, Hannes Reinecke,
	Keith Busch, Christoph Hellwig, Jens Axboe
  Cc: linux-nvme



On 4/18/23 01:55, Uday Shankar wrote:
> When a command completes, we set a flag which will skip sending a
> keep alive at the next run of nvme_keep_alive_work when TBKAS is on.
> However, if the command was submitted long ago, it's possible that
> the controller may have also restarted its keep alive timer (as a
> result of receiving the command) long ago. The following trace
> demonstrates the issue, assuming TBKAS is on and KATO = 8 for
> simplicity:
> 
> 1. t = 0: submit I/O commands A, B, C, D, E
> 2. t = 0.5: commands A, B, C, D, E reach controller, restart its keep
>              alive timer
> 3. t = 1: A completes
> 4. t = 2: run nvme_keep_alive_work, see recent completion, do nothing
> 5. t = 3: B completes
> 6. t = 4: run nvme_keep_alive_work, see recent completion, do nothing
> 7. t = 5: C completes
> 8. t = 6: run nvme_keep_alive_work, see recent completion, do nothing
> 9. t = 7: D completes
> 10. t = 8: run nvme_keep_alive_work, see recent completion, do nothing
> 11. t = 9: E completes
> 
> At this point, 8.5 seconds have passed without restarting the
> controller's keep alive timer, so the controller will detect a keep
> alive timeout.
> 
> Fix this by checking the IO start time when deciding to defer sending a
> keep alive command. Only set comp_seen if the command started after the
> most recent run of nvme_keep_alive_work. With this change, the
> completions of B, C, and D will not set comp_seen and the run of
> nvme_keep_alive_work at t = 4 will send a keep alive.
> 
> Reported-by: Costa Sapuntzakis <costa@purestorage.com>
> Reported-by: Randy Jennings <randyj@purestorage.com>
> Signed-off-by: Uday Shankar <ushankar@purestorage.com>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> ---
>   drivers/nvme/host/core.c | 14 +++++++++++++-
>   drivers/nvme/host/nvme.h |  5 +++--
>   2 files changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 1298c7b9bffb..8a63051d7b5e 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -397,7 +397,14 @@ void nvme_complete_rq(struct request *req)
>   	trace_nvme_complete_rq(req);
>   	nvme_cleanup_cmd(req);
>   
> -	if (ctrl->kas)
> +	/*
> +	 * Completions of long-running commands should not be able to
> +	 * defer sending of periodic keep alives, since the controller
> +	 * may have completed processing such commands a long time ago
> +	 * (arbitrarily close to command submission time).
> +	 */
> +	if (ctrl->kas && !ctrl->comp_seen
> +		      && nvme_req(req)->start_time >= ctrl->ka_last_check_time)
>   		ctrl->comp_seen = true;

indentation is wrong here.

>   
>   	switch (nvme_decide_disposition(req)) {
> @@ -1178,6 +1185,8 @@ static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
>   		return RQ_END_IO_NONE;
>   	}
>   
> +	WRITE_ONCE(ctrl->ka_last_check_time, jiffies);
> +	smp_wmb();
>   	ctrl->comp_seen = false;
>   	spin_lock_irqsave(&ctrl->lock, flags);
>   	if (ctrl->state == NVME_CTRL_LIVE ||
> @@ -1196,6 +1205,9 @@ static void nvme_keep_alive_work(struct work_struct *work)
>   	bool comp_seen = ctrl->comp_seen;
>   	struct request *rq;
>   
> +	WRITE_ONCE(ctrl->ka_last_check_time, jiffies);
> +	smp_wmb();
> +
>   	if ((ctrl->ctratt & NVME_CTRL_ATTR_TBKAS) && comp_seen) {
>   		dev_dbg(ctrl->device,
>   			"reschedule traffic based keep-alive timer\n");
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index bf46f122e9e1..f044ffb9ce10 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -162,9 +162,7 @@ struct nvme_request {
>   	u8			retries;
>   	u8			flags;
>   	u16			status;
> -#ifdef CONFIG_NVME_MULTIPATH
>   	unsigned long		start_time;
> -#endif
>   	struct nvme_ctrl	*ctrl;
>   };
>   
> @@ -323,6 +321,7 @@ struct nvme_ctrl {
>   	struct delayed_work ka_work;
>   	struct delayed_work failfast_work;
>   	struct nvme_command ka_cmd;
> +	unsigned long ka_last_check_time;
>   	struct work_struct fw_act_work;
>   	unsigned long events;
>   
> @@ -1028,6 +1027,8 @@ static inline void nvme_start_request(struct request *rq)
>   {
>   	if (rq->cmd_flags & REQ_NVME_MPATH)
>   		nvme_mpath_start_request(rq);
> +	else
> +		nvme_req(rq)->start_time = jiffies;

nvme_mpath_start_request may not set the start_time if stats are
disabled...



* Re: [PATCH 3/3] nvme: improve handling of long keep alives
  2023-04-17 22:55 ` [PATCH 3/3] nvme: improve handling of long keep alives Uday Shankar
@ 2023-04-18 16:59   ` Sagi Grimberg
  2023-04-20 19:34     ` Uday Shankar
  0 siblings, 1 reply; 10+ messages in thread
From: Sagi Grimberg @ 2023-04-18 16:59 UTC (permalink / raw)
  To: Uday Shankar, Costa Sapuntzakis, Randy Jennings, Hannes Reinecke,
	Keith Busch, Christoph Hellwig, Jens Axboe
  Cc: linux-nvme



On 4/18/23 01:55, Uday Shankar wrote:
> Upon keep alive completion, nvme_keep_alive_work is scheduled with the
> same delay every time. If keep alive commands are completing slowly,
> this may cause a keep alive timeout. The following trace illustrates the
> issue, taking KATO = 8 and TBKAS off for simplicity:
> 
> 1. t = 0: run nvme_keep_alive_work, send keep alive
> 2. t = ε: keep alive reaches controller, controller restarts its keep
>            alive timer
> 3. t = 4: host receives keep alive completion, schedules
>            nvme_keep_alive_work with delay 4
> 4. t = 8: run nvme_keep_alive_work, send keep alive
> 
> Here, a keep alive having RTT of 4 causes a delay of at least 8 - ε
> between the controller receiving successive keep alives. With ε small,
> the controller is likely to detect a keep alive timeout.
> 
> Fix this by calculating the RTT of the keep alive command, and adjusting
> the scheduling delay of the next keep alive work accordingly.

Is this something that was met in reality?

it is surprising that host->ctrl is super fast and
ctrl->host is super slow to the extent that this
situation exists in reality...



* Re: [PATCH 3/3] nvme: improve handling of long keep alives
  2023-04-18 16:59   ` Sagi Grimberg
@ 2023-04-20 19:34     ` Uday Shankar
  0 siblings, 0 replies; 10+ messages in thread
From: Uday Shankar @ 2023-04-20 19:34 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Costa Sapuntzakis, Randy Jennings, Hannes Reinecke, Keith Busch,
	Christoph Hellwig, Jens Axboe, linux-nvme

On Tue, Apr 18, 2023 at 07:59:51PM +0300, Sagi Grimberg wrote:
> Is this something that was met in reality?
> 
> it is surprising that host->ctrl is super fast and
> ctrl->host is super slow to the extent that this
> situation exists in reality...

We haven't seen this exact issue in reality, but we more generally have
observed significant delays in host-side keep alive processing with
controllers that do not support TBKA. This patch should help with that
issue.



* Re: [PATCH 2/3] nvme: check IO start time when deciding to defer KA
  2023-04-18 16:56   ` Sagi Grimberg
@ 2023-04-20 19:37     ` Uday Shankar
  0 siblings, 0 replies; 10+ messages in thread
From: Uday Shankar @ 2023-04-20 19:37 UTC (permalink / raw)
  To: Sagi Grimberg
  Cc: Costa Sapuntzakis, Randy Jennings, Hannes Reinecke, Keith Busch,
	Christoph Hellwig, Jens Axboe, linux-nvme

On Tue, Apr 18, 2023 at 07:56:58PM +0300, Sagi Grimberg wrote:
> > +	if (ctrl->kas && !ctrl->comp_seen
> > +		      && nvme_req(req)->start_time >= ctrl->ka_last_check_time)
> >   		ctrl->comp_seen = true;
> 
> indentation is wrong here.
>
> > @@ -1028,6 +1027,8 @@ static inline void nvme_start_request(struct request *rq)
> >   {
> >   	if (rq->cmd_flags & REQ_NVME_MPATH)
> >   		nvme_mpath_start_request(rq);
> > +	else
> > +		nvme_req(rq)->start_time = jiffies;
> 
> nvme_mpath_start_request may not set the start_time if stats are
> disabled...

Thanks for catching these, I'll fix them up in a v2.
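
A hypothetical sketch of what the start-time part of such a fix could look
like (an editor's illustration only, not the actual v2): record a jiffies
timestamp for every request, independent of the multipath I/O-statistics
path, so the check in nvme_complete_rq can always rely on it.

/* Hypothetical illustration only -- not the posted v2. */
static inline void nvme_start_request(struct request *rq)
{
	/*
	 * Always record a timestamp, even when multipath I/O stats are
	 * disabled, so the TBKAS deferral check can rely on it.
	 */
	nvme_req(rq)->start_time = jiffies;
	if (rq->cmd_flags & REQ_NVME_MPATH)
		nvme_mpath_start_request(rq);
	blk_mq_start_request(rq);
}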


