From mboxrd@z Thu Jan 1 00:00:00 1970
From: hch@lst.de (hch@lst.de)
Date: Thu, 20 Sep 2018 08:39:24 +0200
Subject: [PATCH] nvme-fabrics - in case of REQ_NVME_MPATH we should return BLK_STS_RESOURCE
In-Reply-To: <19b26262-686c-c778-7e01-488917537636@broadcom.com>
References: <20180918162526.GA5038@lst.de>
 <2866a0d5-9864-a2fd-572b-6e6f2c581de5@broadcom.com>
 <20180918190834.GA26013@localhost.localdomain>
 <19b26262-686c-c778-7e01-488917537636@broadcom.com>
Message-ID: <20180920063924.GG12913@lst.de>

On Tue, Sep 18, 2018 at 12:43:30PM -0700, James Smart wrote:
> The issue that I believe exists is that the multipath driver/core knows how
> to retry on a different path when the request is accepted and then
> nvme_complete_rq() is called. But it doesn't know how to retry on a
> different path when the initial transport:queue_rq() fails - the io ends up
> failing completely. I think we need to fix that code path, not "busy"
> requeue the mpath io.

The multipath code handles failures from nvme_complete_rq just fine; in
fact, even with this patch we still don't accept the command into
queue_rq. It is just that BLK_STS_RESOURCE is a magic indicator telling
the blk-mq core to retry internally rather than hand the request back to
the next higher level (which would be the multipath code, either nvme or
dm for that matter).