From: jthumshirn@suse.de (Johannes Thumshirn)
Subject: [LSF/MM TOPIC] NVMe over Fabrics auto-discovery in Linux
Date: Tue, 23 Jan 2018 16:11:32 +0100 [thread overview]
Message-ID: <20180123151132.wrbc7dcjjhpzcnba@linux-x5ow.site> (raw)
In NVMe over Fabrics we currently perform target discovery by running either
'nvme discover' or 'nvme connect-all' (with or without an appropriate
/etc/nvme/discovery.conf).
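For reference, the manual setup looks roughly like this (the transport type
and addresses below are made-up examples, not real targets):

```shell
# /etc/nvme/discovery.conf holds one set of discovery arguments per line,
# e.g. (example address only):
#   -t rdma -a 192.168.1.100 -s 4420

# Ask the discovery controller which subsystems it exposes:
nvme discover -t rdma -a 192.168.1.100 -s 4420

# Connect to every subsystem the discovery controller reports; when run
# without arguments, connect-all falls back to /etc/nvme/discovery.conf:
nvme connect-all -t rdma -a 192.168.1.100 -s 4420
```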
This is well suited for the RDMA transport, which has no idea of the
underlying fabric and its connections. To automatically connect to an RDMA
target, Sagi proposed a systemd one-shot service in [1].
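Such a one-shot unit could look roughly like the following (a sketch, not the
exact unit from [1]; the description, paths, and ordering dependencies are
assumptions):

```ini
[Unit]
Description=Connect NVMe-oF subsystems listed in /etc/nvme/discovery.conf
After=network-online.target
Wants=network-online.target

[Service]
# Runs exactly once at boot; there is no built-in retry if the
# target is not reachable at that point in time.
Type=oneshot
ExecStart=/usr/sbin/nvme connect-all

[Install]
WantedBy=default.target
```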
The Fibre Channel transport, on the other hand, already knows its mapping
of rports to lports and thus could automatically connect to the target (with a
little help from udev), as shown in [2].
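The udev-based approach from [2] can be sketched roughly like this (the event
name and the NVMEFC_* environment variables are assumptions based on the
proposal, not a finished kernel interface):

```ini
# Example udev rule: react to an FC transport uevent announcing an NVMe
# discovery controller and connect to it. FC_EVENT and the NVMEFC_*
# variable names are illustrative assumptions.
ACTION=="change", SUBSYSTEM=="fc", ENV{FC_EVENT}=="nvmediscovery", \
  RUN+="/usr/sbin/nvme connect-all --transport=fc \
        --host-traddr=$env{NVMEFC_HOST_TRADDR} --traddr=$env{NVMEFC_TRADDR}"
```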
Unfortunately the FC method is not possible with RDMA, and the currently
used 'nvme discover/connect/connect-all' method is extremely cumbersome with
Fibre Channel, especially as no special setup was (or is) needed for SCSI
devices over Fibre Channel, so administrators expect the same for NVMe.
Other downsides of the "RDMA version" are: 1) once the network topology, and
thus /etc/nvme/discovery.conf, changes, one has to rebuild the initrd if NVMe
is to be started from the initrd, and 2) if we use the one-shot systemd
service there is no way to automatically retry the discovery/connect.
I'm hoping we have developers from the RDMA and Fibre Channel transports, as
well as seasoned storage developers with SCSI, Fibre Channel, and RDMA
knowledge, and distribution maintainers around to discuss a way to address
this problem in a user-friendly way.
Byte,
Johannes
[1] http://lists.infradead.org/pipermail/linux-nvme/2017-September/012976.html
[2] http://lists.infradead.org/pipermail/linux-nvme/2017-December/014324.html
--
Johannes Thumshirn Storage
jthumshirn at suse.de +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850