From: sagi@grimberg.me (Sagi Grimberg)
Subject: [LSF/MM TOPIC] NVMe over Fabrics auto-discovery in Linux
Date: Wed, 24 Jan 2018 20:42:18 +0200 [thread overview]
Message-ID: <64ee446d-118d-364a-e9bb-bcbf80b8135b@grimberg.me> (raw)
In-Reply-To: <20180123151132.wrbc7dcjjhpzcnba@linux-x5ow.site>
Hi Johannes,
> In NVMe over Fabrics we currently perform target discovery by running either
> one of 'nvme discover' or 'nvme connect-all' (with or without the use of an
> appropriate /etc/nvme/discovery.conf).
>
> This is well suited for the RDMA transport, which has no idea of the
> underlying fabric and its connections. To automatically connect to an RDMA
> target Sagi proposed a systemd one-shot service in [1].
>
> The Fibre Channel transport, on the other hand, already knows its mapping
> of rports to lports and thus could automatically connect to the target (with a
> little help from udev), as shown in [2].
>
> Unfortunately the FC method is not possible with RDMA, and the currently
> used 'nvme discover/connect/connect-all' method is extremely cumbersome with
> Fibre Channel, especially as no special setup was/is needed for SCSI devices
> over Fibre Channel, so administrators expect the same for NVMe.
>
> Other downsides of the "RDMA version" are that 1) once the network topology
> and thus /etc/nvme/discovery.conf changes, one has to rebuild the initrd if
> NVMe is to be started from the initrd, and 2) if we use the one-shot systemd
> service there is no way to automatically retry the discovery/connect.
>
> I'm hoping we have developers from the RDMA and Fibre Channel transports,
> seasoned storage developers with SCSI Fibre Channel and RDMA knowledge, and
> distribution maintainers around to discuss a way to address this problem in
> a user-friendly way.
Discovery enhancements are a subject the NVMe TWG will be working on in
the near future, and "discovery of the discovery service" is indeed
a sub-topic, IIRC. I'm not sure LSF would be the appropriate forum for
this.
What we do need is a way to support existing devices. I think
it's acceptable that FC and Ethernet-based transports diverge in their
implementations for this.
For Ethernet-based transports we could follow the open-iscsi model, which
has a discoveryd service that periodically polls predefined addresses. As
for updating the initramfs, maybe we can live with this limitation for the
time being?
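A minimal sketch of that polling model, using a systemd timer instead of a
dedicated daemon (unit names and intervals are illustrative, not a proposal;
this assumes nvme-cli's 'nvme connect-all' reads /etc/nvme/discovery.conf
when invoked without arguments and tolerates already-connected controllers):

```ini
# nvme-poll-discovery.service (hypothetical unit name)
[Unit]
Description=NVMe over Fabrics discovery/connect pass
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Discovery controller addresses come from /etc/nvme/discovery.conf
ExecStart=/usr/sbin/nvme connect-all

# nvme-poll-discovery.timer (hypothetical unit name)
[Unit]
Description=Periodically poll NVMe discovery controllers

[Timer]
OnBootSec=30s
OnUnitActiveSec=60s

[Install]
WantedBy=timers.target
```

Unlike a one-shot service, the timer keeps retrying, which would cover the
"no automatic retry" downside, at the cost of periodic polling traffic.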
FC can keep doing its own thing...
Thread overview: 9+ messages
2018-01-23 15:11 [LSF/MM TOPIC] NVMe over Fabrics auto-discovery in Linux Johannes Thumshirn
2018-01-23 16:09 ` Bart Van Assche
2018-01-24 8:26 ` Hannes Reinecke
2018-01-24 17:17 ` James Smart
2018-01-24 18:46 ` Sagi Grimberg
2018-01-24 18:42 ` Sagi Grimberg [this message]
2018-01-24 18:51 ` James Smart
2018-01-24 18:59 ` Sagi Grimberg
2018-01-29 13:05 ` Johannes Thumshirn