From: Hannes Reinecke <hare@suse.de>
To: Sagi Grimberg <sagi@grimberg.me>,
James Smart <james.smart@broadcom.com>,
Victor Gladkov <Victor.Gladkov@kioxia.com>,
"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: Re: [PATCH] nvme-fabrics: reject I/O to offline device
Date: Sun, 8 Dec 2019 13:31:33 +0100 [thread overview]
Message-ID: <e009d8fe-74ec-8c87-30ec-b1ac657b2aa8@suse.de> (raw)
In-Reply-To: <ae027c0c-45ab-d412-11f3-39dcf3217434@grimberg.me>
On 12/6/19 11:18 PM, Sagi Grimberg wrote:
>
>>> ---
>>> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
>>> index 74b8818..b58abc1 100644
>>> --- a/drivers/nvme/host/fabrics.c
>>> +++ b/drivers/nvme/host/fabrics.c
>>> @@ -549,6 +549,8 @@ blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
>>>  {
>>>  	if (ctrl->state != NVME_CTRL_DELETING &&
>>>  	    ctrl->state != NVME_CTRL_DEAD &&
>>> +	    !(ctrl->state == NVME_CTRL_CONNECTING &&
>>> +	      ((ktime_get_ns() - rq->start_time_ns) > jiffies_to_nsecs(rq->timeout))) &&
>>>  	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
>>>  		return BLK_STS_RESOURCE;
>>>
>>
>> Did you test this to ensure it's doing what you expect? I'm not sure
>> that all the timers are set right at this point. Most I/Os time out
>> against a deadline stamped at blk_mq_start_request(). But that
>> routine is actually called by the transports after the
>> nvmf_check_ready/fail_nonready calls, i.e. the I/O is not yet in
>> flight, only queued, and the blk-mq internal queuing doesn't count
>> against the I/O timeout. I can't see anything that guarantees
>> start_time_ns is set.
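Right - to illustrate James' point, the transports do roughly the
following in their queue_rq path. This is only a from-memory sketch
with made-up names (example_queue, EXAMPLE_Q_LIVE, example_queue_rq),
not the actual nvme-tcp/fc code:

#include <linux/blk-mq.h>
#include "nvme.h"	/* drivers/nvme/host/nvme.h */
#include "fabrics.h"

#define EXAMPLE_Q_LIVE	0	/* made-up queue flag */

struct example_queue {		/* made-up per-queue state */
	struct nvme_ctrl *ctrl;
	unsigned long flags;
};

static blk_status_t example_queue_rq(struct blk_mq_hw_ctx *hctx,
				     const struct blk_mq_queue_data *bd)
{
	struct example_queue *queue = hctx->driver_data;
	struct request *rq = bd->rq;
	bool queue_ready = test_bit(EXAMPLE_Q_LIVE, &queue->flags);

	/*
	 * The readiness check runs before the request is started, so
	 * the block-layer timeout has not been armed yet; a request
	 * requeued here just sits in the queue without a deadline.
	 */
	if (!nvmf_check_ready(queue->ctrl, rq, queue_ready))
		return nvmf_fail_nonready_command(queue->ctrl, rq);

	/*
	 * Only here is the request marked in-flight and its timeout
	 * deadline stamped.
	 */
	blk_mq_start_request(rq);

	/* the real transports build and send the command from here */
	return BLK_STS_OK;
}

So when nvmf_fail_nonready_command() runs, blk_mq_start_request() has
not been called for this particular dispatch yet.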
>
> I'm not sure this behavior of failing I/O is always desired? Some
> consumers would actually not want the I/O to fail prematurely if we
> are not multipathing...
>
> I think we need a fail_fast_tmo set when establishing the controller
> to get it right.
>
Agreed. This whole patch looks like someone is trying to reimplement
fast_io_fail_tmo / dev_loss_tmo.
As we're moving into unreliable fabrics, I guess we'll need a similar
mechanism.
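
Something along these lines in nvmf_fail_nonready_command() is what I
have in mind. Note that opts->fast_io_fail_tmo and
ctrl->connecting_since below do not exist today; they are made up just
to sketch the shape:

/*
 * Sketch only: opts->fast_io_fail_tmo (seconds, negative = disabled)
 * and ctrl->connecting_since (jiffies, stamped when the controller
 * enters NVME_CTRL_CONNECTING) are hypothetical fields.
 */
blk_status_t nvmf_fail_nonready_command(struct nvme_ctrl *ctrl,
		struct request *rq)
{
	bool fast_fail = ctrl->state == NVME_CTRL_CONNECTING &&
		ctrl->opts->fast_io_fail_tmo >= 0 &&
		time_after(jiffies, ctrl->connecting_since +
			   ctrl->opts->fast_io_fail_tmo * HZ);

	if (ctrl->state != NVME_CTRL_DELETING &&
	    ctrl->state != NVME_CTRL_DEAD &&
	    !fast_fail &&
	    !blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
		return BLK_STS_RESOURCE;

	/* fail the command with a path error, as the function does today */
	nvme_req(rq)->status = NVME_SC_HOST_PATH_ERROR;
	blk_mq_start_request(rq);
	nvme_complete_rq(rq);
	return BLK_STS_OK;
}

That way only controllers stuck in CONNECTING longer than the
configured timeout start failing I/O, and multipath and non-multipath
setups can pick different values.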
Cheers,
Hannes
--
Dr. Hannes Reinecke Teamlead Storage & Networking
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
Thread overview: 14+ messages
2019-12-01 7:59 [PATCH] nvme-fabrics: reject I/O to offline device Victor Gladkov
2019-12-02 22:26 ` Chaitanya Kulkarni
2019-12-02 22:47 ` James Smart
2019-12-03 10:04 ` Victor Gladkov
2019-12-03 16:19 ` James Smart
2019-12-04 8:28 ` Victor Gladkov
2019-12-06 0:38 ` James Smart
2019-12-06 22:18 ` Sagi Grimberg
2019-12-08 12:31 ` Hannes Reinecke [this message]
2019-12-09 15:30 ` Victor Gladkov
2019-12-17 18:03 ` James Smart
2019-12-17 21:46 ` Sagi Grimberg
2019-12-18 22:20 ` James Smart
2019-12-15 12:33 ` Victor Gladkov