From: Hannes Reinecke <hare@suse.de>
To: Victor Gladkov <Victor.Gladkov@kioxia.com>
Cc: Sagi Grimberg <sagi@grimberg.me>,
James Smart <james.smart@broadcom.com>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
"Ewan D. Milne" <emilne@redhat.com>,
Keith Busch <kbusch@kernel.org>, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v9] nvme-fabrics: reject I/O to offline device
Date: Wed, 30 Sep 2020 15:14:12 +0200 [thread overview]
Message-ID: <05e192d4-b57e-b934-302a-79bdc0e6a36f@suse.de> (raw)
In-Reply-To: <3e9337bfbd7f410eb632e96a44b43924@kioxia.com>
[-- Attachment #1: Type: text/plain, Size: 2476 bytes --]
On 9/30/20 2:31 PM, Victor Gladkov wrote:
> On 9/30/20 8:47 PM, Hannes Reinecke wrote:
>
>>> 		opts->max_reconnects = DIV_ROUND_UP(ctrl_loss_tmo,
>>> 				opts->reconnect_delay);
>>> +		if (ctrl_loss_tmo < opts->fast_io_fail_tmo)
>>> +			pr_warn("failfast tmo (%d) larger than controller "
>>> +				"loss tmo (%d)\n",
>>> +				opts->fast_io_fail_tmo, ctrl_loss_tmo);
>>> +	}
>>
>> If we already check for that condition, shouldn't we disable fast_io_fail_tmo in
>> that situation to clarify things?
>>
>
> OK for me. I don't mind
>
>>>
>>> if (!opts->host) {
>>> kref_get(&nvmf_default_host->ref);
>>> @@ -985,7 +1004,7 @@ void nvmf_free_options(struct nvmf_ctrl_options
>>> *opts)
>>> #define NVMF_ALLOWED_OPTS	(NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
>>> 				 NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
>>> 				 NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
>>> -				 NVMF_OPT_DISABLE_SQFLOW)
>>> +				 NVMF_OPT_DISABLE_SQFLOW | \
>>> +				 NVMF_OPT_FAIL_FAST_TMO)
>>>
>>> static struct nvme_ctrl *
>>> nvmf_create_ctrl(struct device *dev, const char *buf)
>>>
>>> diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
>>> index
>
>>> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
>>> index 54603bd..d8b7f45 100644
>>> --- a/drivers/nvme/host/multipath.c
>>> +++ b/drivers/nvme/host/multipath.c
>>> @@ -278,9 +278,12 @@ static bool nvme_available_path(struct nvme_ns_head *head)
>>>
>>> list_for_each_entry_rcu(ns, &head->list, siblings) {
>>> switch (ns->ctrl->state) {
>>> + case NVME_CTRL_CONNECTING:
>>> + if (test_bit(NVME_CTRL_FAILFAST_EXPIRED,
>>> + &ns->ctrl->flags))
>>> + break;
>>
>> No. We shouldn't select this path, but that doesn't mean that all other paths in
>> this list can't be selected, either; they might be coming from different
>> controllers.
>> Please use 'continue' here.
>>
>
> The 'break' doesn't interrupt the 'for_each' loop; it only exits the 'switch'.
>
Ah. But still, that is not quite correct, as we'll need to intercept
things at nvme_ns_head_submit_bio() to make the correct decision there.
I've attached the modified version I'm working with; please check if
you're okay with it.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer
[-- Attachment #2: 0001-nvme-fabrics-reject-I-O-to-offline-device.patch --]
[-- Type: text/x-patch, Size: 10481 bytes --]
From fc2e2c5ceff5d0807623ccd83d735965ba5b7ec0 Mon Sep 17 00:00:00 2001
From: Victor Gladkov <Victor.Gladkov@kioxia.com>
Date: Tue, 29 Sep 2020 15:27:38 +0000
Subject: [PATCH] nvme-fabrics: reject I/O to offline device
Commands get stuck while the host NVMe-oF controller is in the
reconnecting state. The controller enters this state when it loses its
connection to the target; it then tries to reconnect every 10 seconds
(by default) until the reconnection succeeds or the reconnect timeout is
reached. The default reconnect timeout is 10 minutes.

Applications expect commands to complete with success or error within a
certain timeout (30 seconds by default). The NVMe host enforces that
timeout while it is connected; nevertheless, during reconnection the
timeout is not enforced, and commands may get stuck for a long period or
even forever.

To avoid these long delays, introduce a new session parameter,
"fast_io_fail_tmo". The timeout is measured in seconds from the start of
the controller reconnect; any command outstanding beyond that timeout is
rejected. The new parameter value may be passed during 'connect'. The
default value of -1 means no timeout (matching current behavior).

Add a new controller flag, NVME_CTRL_FAILFAST_EXPIRED, and a
corresponding delayed work item that sets it. When the controller enters
the CONNECTING state, schedule the delayed work based on the failfast
timeout value. If the controller transitions out of CONNECTING, cancel
the delayed work item and clear the flag. If the delayed work item
expires, set the NVME_CTRL_FAILFAST_EXPIRED flag.

Also update nvmf_fail_nonready_command() and nvme_available_path() to
check the NVME_CTRL_FAILFAST_EXPIRED controller flag.

For multipath (nvme_available_path()): a path is not considered
available if the NVME_CTRL_FAILFAST_EXPIRED flag is set and the
controller is in the NVME_CTRL_CONNECTING state. This prevents commands
from getting stuck when the available paths have been trying to
reconnect for too long.
Signed-off-by: Victor Gladkov <victor.gladkov@kioxia.com>
Signed-off-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
drivers/nvme/host/core.c | 42 ++++++++++++++++++++++++++++++++++++++++--
drivers/nvme/host/fabrics.c | 24 ++++++++++++++++++++++--
drivers/nvme/host/fabrics.h | 5 +++++
drivers/nvme/host/multipath.c | 4 ++++
drivers/nvme/host/nvme.h | 3 +++
5 files changed, 74 insertions(+), 4 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 385b10317873..9610d81f2070 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -148,6 +148,37 @@ int nvme_try_sched_reset(struct nvme_ctrl *ctrl)
}
EXPORT_SYMBOL_GPL(nvme_try_sched_reset);
+static void nvme_failfast_work(struct work_struct *work)
+{
+	struct nvme_ctrl *ctrl = container_of(to_delayed_work(work),
+			struct nvme_ctrl, failfast_work);
+
+	if (ctrl->state != NVME_CTRL_CONNECTING)
+		return;
+
+	set_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
+ dev_info(ctrl->device, "failfast expired\n");
+ nvme_kick_requeue_lists(ctrl);
+}
+
+static inline void nvme_start_failfast_work(struct nvme_ctrl *ctrl)
+{
+ if (!ctrl->opts || ctrl->opts->fast_io_fail_tmo == -1)
+ return;
+
+ schedule_delayed_work(&ctrl->failfast_work,
+ ctrl->opts->fast_io_fail_tmo * HZ);
+}
+
+static inline void nvme_stop_failfast_work(struct nvme_ctrl *ctrl)
+{
+ if (!ctrl->opts)
+ return;
+
+ cancel_delayed_work_sync(&ctrl->failfast_work);
+ clear_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
+}
+
int nvme_reset_ctrl(struct nvme_ctrl *ctrl)
{
if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
@@ -360,9 +391,11 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
switch (new_state) {
case NVME_CTRL_LIVE:
switch (old_state) {
+ case NVME_CTRL_CONNECTING:
+ nvme_stop_failfast_work(ctrl);
+ fallthrough;
case NVME_CTRL_NEW:
case NVME_CTRL_RESETTING:
- case NVME_CTRL_CONNECTING:
changed = true;
fallthrough;
default:
@@ -381,8 +414,10 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
break;
case NVME_CTRL_CONNECTING:
switch (old_state) {
- case NVME_CTRL_NEW:
case NVME_CTRL_RESETTING:
+ nvme_start_failfast_work(ctrl);
+ fallthrough;
+ case NVME_CTRL_NEW:
changed = true;
fallthrough;
default:
@@ -4343,6 +4378,7 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
{
nvme_mpath_stop(ctrl);
nvme_stop_keep_alive(ctrl);
+ nvme_stop_failfast_work(ctrl);
flush_work(&ctrl->async_event_work);
cancel_work_sync(&ctrl->fw_act_work);
}
@@ -4408,6 +4444,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
int ret;
ctrl->state = NVME_CTRL_NEW;
+ clear_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags);
spin_lock_init(&ctrl->lock);
mutex_init(&ctrl->scan_lock);
INIT_LIST_HEAD(&ctrl->namespaces);
@@ -4424,6 +4461,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
init_waitqueue_head(&ctrl->state_wq);
INIT_DELAYED_WORK(&ctrl->ka_work, nvme_keep_alive_work);
+ INIT_DELAYED_WORK(&ctrl->failfast_work, nvme_failfast_work);
memset(&ctrl->ka_cmd, 0, sizeof(ctrl->ka_cmd));
ctrl->ka_cmd.common.opcode = nvme_admin_keep_alive;
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 8575724734e0..6404cbf41c96 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -549,6 +549,7 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
{
if (ctrl->state != NVME_CTRL_DELETING_NOIO &&
ctrl->state != NVME_CTRL_DEAD &&
+ !test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ctrl->flags) &&
!blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
return BLK_STS_RESOURCE;
@@ -615,6 +616,7 @@ static const match_table_t opt_tokens = {
{ NVMF_OPT_NR_WRITE_QUEUES, "nr_write_queues=%d" },
{ NVMF_OPT_NR_POLL_QUEUES, "nr_poll_queues=%d" },
{ NVMF_OPT_TOS, "tos=%d" },
+ { NVMF_OPT_FAIL_FAST_TMO, "fast_io_fail_tmo=%d" },
{ NVMF_OPT_ERR, NULL }
};
@@ -634,6 +636,7 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
opts->reconnect_delay = NVMF_DEF_RECONNECT_DELAY;
opts->kato = NVME_DEFAULT_KATO;
opts->duplicate_connect = false;
+ opts->fast_io_fail_tmo = NVMF_DEF_FAIL_FAST_TMO;
opts->hdr_digest = false;
opts->data_digest = false;
opts->tos = -1; /* < 0 == use transport default */
@@ -754,6 +757,17 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
pr_warn("ctrl_loss_tmo < 0 will reconnect forever\n");
ctrl_loss_tmo = token;
break;
+ case NVMF_OPT_FAIL_FAST_TMO:
+ if (match_int(args, &token)) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (token >= 0)
+ pr_warn("I/O will fail on reconnect "
+ "controller after %d sec\n", token);
+ opts->fast_io_fail_tmo = token;
+ break;
case NVMF_OPT_HOSTNQN:
if (opts->host) {
pr_err("hostnqn already user-assigned: %s\n",
@@ -886,9 +900,14 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts,
}
if (ctrl_loss_tmo < 0)
opts->max_reconnects = -1;
- else
+ else {
opts->max_reconnects = DIV_ROUND_UP(ctrl_loss_tmo,
opts->reconnect_delay);
+ if (ctrl_loss_tmo < opts->fast_io_fail_tmo)
+ pr_warn("failfast tmo (%d) larger than controller "
+ "dev loss tmo (%d)\n",
+ opts->fast_io_fail_tmo, ctrl_loss_tmo);
+ }
if (!opts->host) {
kref_get(&nvmf_default_host->ref);
@@ -988,7 +1007,8 @@ EXPORT_SYMBOL_GPL(nvmf_free_options);
#define NVMF_ALLOWED_OPTS (NVMF_OPT_QUEUE_SIZE | NVMF_OPT_NR_IO_QUEUES | \
NVMF_OPT_KATO | NVMF_OPT_HOSTNQN | \
NVMF_OPT_HOST_ID | NVMF_OPT_DUP_CONNECT |\
- NVMF_OPT_DISABLE_SQFLOW)
+ NVMF_OPT_DISABLE_SQFLOW | \
+ NVMF_OPT_FAIL_FAST_TMO)
static struct nvme_ctrl *
nvmf_create_ctrl(struct device *dev, const char *buf)
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index a9c1e3b4585e..2ce0451c00ff 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -15,6 +15,8 @@
#define NVMF_DEF_RECONNECT_DELAY 10
/* default to 600 seconds of reconnect attempts before giving up */
#define NVMF_DEF_CTRL_LOSS_TMO 600
+/* default is -1: the fail fast mechanism is disabled */
+#define NVMF_DEF_FAIL_FAST_TMO -1
/*
* Define a host as seen by the target. We allocate one at boot, but also
@@ -56,6 +58,7 @@ enum {
NVMF_OPT_NR_WRITE_QUEUES = 1 << 17,
NVMF_OPT_NR_POLL_QUEUES = 1 << 18,
NVMF_OPT_TOS = 1 << 19,
+ NVMF_OPT_FAIL_FAST_TMO = 1 << 20,
};
/**
@@ -89,6 +92,7 @@ enum {
* @nr_write_queues: number of queues for write I/O
* @nr_poll_queues: number of queues for polling I/O
* @tos: type of service
+ * @fast_io_fail_tmo: Fast I/O fail timeout in seconds
*/
struct nvmf_ctrl_options {
unsigned mask;
@@ -111,6 +115,7 @@ struct nvmf_ctrl_options {
unsigned int nr_write_queues;
unsigned int nr_poll_queues;
int tos;
+ int fast_io_fail_tmo;
};
/*
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 74896be40c17..0e08ca6e0264 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -157,6 +157,8 @@ static bool nvme_path_is_disabled(struct nvme_ns *ns)
if (test_bit(NVME_NS_ANA_PENDING, &ns->flags) ||
test_bit(NVME_NS_REMOVING, &ns->flags))
return true;
+ if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags))
+ return true;
return false;
}
@@ -279,6 +281,8 @@ static bool nvme_available_path(struct nvme_ns_head *head)
struct nvme_ns *ns;
list_for_each_entry_rcu(ns, &head->list, siblings) {
+ if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags))
+ continue;
switch (ns->ctrl->state) {
case NVME_CTRL_LIVE:
case NVME_CTRL_RESETTING:
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 566776100126..37be7d16ab15 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -304,6 +304,7 @@ struct nvme_ctrl {
struct work_struct scan_work;
struct work_struct async_event_work;
struct delayed_work ka_work;
+ struct delayed_work failfast_work;
struct nvme_command ka_cmd;
struct work_struct fw_act_work;
unsigned long events;
@@ -337,6 +338,8 @@ struct nvme_ctrl {
u16 icdoff;
u16 maxcmd;
int nr_reconnects;
+ unsigned long flags;
+#define NVME_CTRL_FAILFAST_EXPIRED 0
struct nvmf_ctrl_options *opts;
struct page *discard_page;
--
2.16.4
[-- Attachment #3: Type: text/plain, Size: 158 bytes --]
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 10+ messages
[not found] <3e9337bfbd7f410eb632e96a44b43924@kioxia.com>
2020-09-30 13:14 ` Hannes Reinecke [this message]
2020-09-29 15:27 [PATCH v9] nvme-fabrics: reject I/O to offline device Victor Gladkov
2020-09-29 18:19 ` Sagi Grimberg
2020-09-30 5:46 ` Hannes Reinecke
2020-09-30 6:39 ` Christoph Hellwig
2020-10-01 8:55 ` Hannes Reinecke
2020-11-15 15:45 ` Victor Gladkov
2020-11-16 9:54 ` Hannes Reinecke
2020-11-17 8:39 ` Sagi Grimberg
2020-11-20 13:09 ` Hannes Reinecke