From: hch@lst.de (Christoph Hellwig)
Subject: nvme multipath support V5
Date: Mon, 23 Oct 2017 16:51:09 +0200
Message-ID: <20171023145126.2471-1-hch@lst.de>
Hi all,
this series adds support for multipathing, that is accessing nvme
namespaces through multiple controllers, to the nvme core driver.
It is a very thin and efficient implementation that relies on
close cooperation with other parts of the nvme driver and a few small
and simple block layer helpers.
Compared to dm-multipath the important differences are how management
of the paths is done, and how the I/O path works.
Management of the paths is fully integrated into the nvme driver:
for each newly found nvme controller we check if there are other
controllers that refer to the same subsystem, and if so we link them
up in the nvme driver. Then for each namespace found we check whether
the namespace ID and identifiers match to determine whether multiple
controllers refer to the same namespace. For now path
availability is based entirely on the controller status, which at
least for fabrics will be continuously updated based on the mandatory
keep alive timer. Once the Asymmetric Namespace Access (ANA)
proposal passes in NVMe we will also get per-namespace states in
addition to that, but for now any details of that remain confidential
to NVMe members.
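To make the matching concrete, here is a rough sketch (illustration
only, not the exact code in this series; the structures and helper
names are simplified stand-ins) of how a namespace reported by a new
controller could be tied to an existing shared namespace head based on
the NSID and the EUI-64/NGUID/UUID identifiers:

#include <linux/list.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/uuid.h>

/* simplified stand-ins for the structures used by the series */
struct nvme_ns_ids {
	u8	eui64[8];
	u8	nguid[16];
	uuid_t	uuid;
};

struct nvme_ns_head {
	struct list_head	entry;		/* on the subsystem's list */
	unsigned int		ns_id;		/* NSID within the subsystem */
	struct nvme_ns_ids	ids;
};

struct nvme_subsystem {
	struct list_head	nsheads;	/* all shared namespace heads */
};

static bool nvme_ns_ids_equal(const struct nvme_ns_ids *a,
		const struct nvme_ns_ids *b)
{
	return uuid_equal(&a->uuid, &b->uuid) &&
		memcmp(a->nguid, b->nguid, sizeof(a->nguid)) == 0 &&
		memcmp(a->eui64, b->eui64, sizeof(a->eui64)) == 0;
}

/*
 * A namespace from another controller is only treated as a path to an
 * existing shared namespace if both the NSID and all identifiers match.
 */
static struct nvme_ns_head *find_ns_head(struct nvme_subsystem *subsys,
		unsigned int nsid, const struct nvme_ns_ids *ids)
{
	struct nvme_ns_head *h;

	list_for_each_entry(h, &subsys->nsheads, entry) {
		if (h->ns_id == nsid && nvme_ns_ids_equal(&h->ids, ids))
			return h;
	}
	return NULL;
}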
The I/O path is very different from the existing multipath drivers,
which is enabled by the fact that NVMe (unlike SCSI) does not support
partial completions - a controller will either complete a whole
command or not, but never complete only parts of it. Because of that
there is no need to clone bios or requests - the I/O path simply
redirects the I/O to a suitable path. For successful commands
multipath is not in the completion stack at all. For failed commands
we decide if the error could be a path failure, and if so remove
the bios from the request structure and requeue them before completing
the request. Altogether this means there is no performance
degradation compared to normal nvme operation when using the multipath
device node (at least not until I find a dual-ported, DRAM-backed
device :))
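As a rough illustration of that failover flow (a sketch only, with a
simplified structure, assuming the blk_steal_bios() helper added
earlier in this series; the real patches differ in detail): the bios
of a failed request are moved to a per-namespace-head requeue list,
the original request is completed, and the bios are resubmitted
through the shared device node, whose submission path then picks
another live controller.

#include <linux/bio.h>
#include <linux/blk-mq.h>
#include <linux/blkdev.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

/* simplified shared namespace head, showing only the requeue machinery */
struct nvme_ns_head {
	struct bio_list		requeue_list;	/* bios waiting for another path */
	spinlock_t		requeue_lock;
	struct work_struct	requeue_work;	/* resubmits bios via the mpath gendisk */
};

static void example_failover(struct request *req, struct nvme_ns_head *head)
{
	unsigned long flags;

	/* detach the bios from the failed request - no cloning needed */
	spin_lock_irqsave(&head->requeue_lock, flags);
	blk_steal_bios(&head->requeue_list, req);
	spin_unlock_irqrestore(&head->requeue_lock, flags);

	/* the bios live on in the requeue list, so end the original request */
	blk_mq_end_request(req, BLK_STS_OK);

	/* resubmit the stolen bios; submission picks another live path */
	kblockd_schedule_work(&head->requeue_work);
}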
A git tree is available at:
git://git.infradead.org/users/hch/block.git nvme-mpath
gitweb:
http://git.infradead.org/users/hch/block.git/shortlog/refs/heads/nvme-mpath
Changes since V4:
- add a refcount to release the device in struct nvme_subsystem
- use the instance to name the nvme_subsystems in sysfs
- remove a NULL check before nvme_put_ns_head
- take a ns_head reference in ->open
- improve open protection for GENHD_FL_HIDDEN
- add poll support for the mpath device
Changes since V3:
- new block layer support for hidden gendisks
- a couple new patches to refactor device handling before the
actual multipath support
- don't expose per-controller block device nodes
- use /dev/nvmeXnZ as the device nodes for the whole subsystem.
- expose subsystems in sysfs (Hannes Reinecke)
- fix a subsystem leak when duplicate NQNs are found
- fix up some names
- don't clear current_path if freeing a different namespace
Changes since V2:
- don't create duplicate subsystems on reset (Keith Busch)
- free requests properly when failing over in I/O completion (Keith Busch)
- new device names: /dev/nvm-sub%dn%d
- expose the namespace identification sysfs files for the mpath nodes
Changes since V1:
- introduce new nvme_ns_ids structure to clean up identifier handling
- generic_make_request_fast is now named direct_make_request and calls
generic_make_request_checks
- reset bi_disk on resubmission
- create sysfs links between the existing nvme namespace block devices and
  the new shared mpath device
- temporarily added the timeout patches from James; these should go into
  nvme-4.14, though
Thread overview: 54+ messages
2017-10-23 14:51 Christoph Hellwig [this message]
2017-10-23 14:51 ` [PATCH 01/17] block: move REQ_NOWAIT Christoph Hellwig
2017-10-23 14:51 ` [PATCH 02/17] block: add REQ_DRV bit Christoph Hellwig
2017-10-23 14:51 ` [PATCH 03/17] block: provide a direct_make_request helper Christoph Hellwig
2017-10-24 7:05 ` Hannes Reinecke
2017-10-24 17:57 ` Javier González
2017-10-23 14:51 ` [PATCH 04/17] block: add a blk_steal_bios helper Christoph Hellwig
2017-10-24 7:07 ` Hannes Reinecke
2017-10-24 8:44 ` Max Gurtovoy
2017-10-28 6:13 ` Christoph Hellwig
2017-10-23 14:51 ` [PATCH 05/17] block: don't look at the struct device dev_t in disk_devt Christoph Hellwig
2017-10-24 7:08 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 06/17] block: introduce GENHD_FL_HIDDEN Christoph Hellwig
2017-10-24 7:18 ` Hannes Reinecke
2017-10-28 6:15 ` Christoph Hellwig
2017-10-24 21:40 ` Mike Snitzer
2017-10-28 6:38 ` Christoph Hellwig
2017-10-28 7:20 ` Guan Junxiong
2017-10-28 7:42 ` Christoph Hellwig
2017-10-28 10:09 ` Guan Junxiong
2017-10-29 8:00 ` Anish Jhaveri
2017-10-29 8:57 ` Christoph Hellwig
2017-10-28 14:17 ` Mike Snitzer
2017-10-29 10:01 ` Hannes Reinecke
2017-10-30 4:09 ` Guan Junxiong
2017-10-23 14:51 ` [PATCH 07/17] block: add a poll_fn callback to struct request_queue Christoph Hellwig
2017-10-24 7:20 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 08/17] nvme: use kref_get_unless_zero in nvme_find_get_ns Christoph Hellwig
2017-10-24 7:21 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 09/17] nvme: simplify nvme_open Christoph Hellwig
2017-10-24 7:21 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 10/17] nvme: switch controller refcounting to use struct device Christoph Hellwig
2017-10-24 7:23 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 11/17] nvme: get rid of nvme_ctrl_list Christoph Hellwig
2017-10-24 7:24 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 12/17] nvme: check for a live controller in nvme_dev_open Christoph Hellwig
2017-10-24 7:25 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 13/17] nvme: track subsystems Christoph Hellwig
2017-10-24 7:33 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 14/17] nvme: introduce a nvme_ns_ids structure Christoph Hellwig
2017-10-24 7:27 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 15/17] nvme: track shared namespaces Christoph Hellwig
2017-10-24 7:34 ` Hannes Reinecke
2017-10-23 14:51 ` [PATCH 16/17] nvme: implement multipath access to nvme subsystems Christoph Hellwig
2017-10-23 15:32 ` Sagi Grimberg
2017-10-23 16:57 ` Christoph Hellwig
2017-10-23 17:43 ` Sagi Grimberg
2017-10-24 7:43 ` Hannes Reinecke
2017-10-28 6:32 ` Christoph Hellwig
2017-10-30 3:37 ` Guan Junxiong
2017-11-02 18:22 ` Christoph Hellwig
2017-10-23 14:51 ` [PATCH 17/17] nvme: also expose the namespace identification sysfs files for mpath nodes Christoph Hellwig
2017-10-24 7:45 ` Hannes Reinecke
2017-10-28 6:20 ` Christoph Hellwig