From: hch@lst.de (Christoph Hellwig)
Subject: NVMe over Fabrics target implementation
Date: Wed, 8 Jun 2016 15:46:25 +0200 [thread overview]
Message-ID: <20160608134625.GA337@lst.de> (raw)
In-Reply-To: <575819BB.7010209@lightbits.io>

On Wed, Jun 08, 2016 at 04:12:27PM +0300, Sagi Grimberg wrote:
>> Because it keeps the code simple. If you had actually participated
>> on our development list you might have seen that until not too long
>> ago we had very fine grained locks here. In the end Armen convinced
>> me that it's easier to maintain if we don't bother with fine grained
>> locking outside the fast path, especially as it significantly
>> simplifies the discovery implementation. If it ever turns out to be
>> an issue we can change it easily as the implementation is well
>> encapsulated.
>
> We did change that, and Nic is raising a valid point about having
> a global mutex around all the ports. If the requirement is that nvme
> subsystem and port configuration happens fast enough and scales to
> the numbers that Nic is referring to, we'll need to change that back.
>
> Having said that, I'm not sure this is a real hard requirement for
> RDMA and FC in the mid-term, because from what I've seen the
> workloads Nic is referring to are more typical for iscsi/tcp, where
> connections are cheaper and you need more of them to saturate a
> high-speed interconnect, so we'll probably see this once we have
> nvme over tcp working.

I'm not really worried about connection establishment - that can be
changed to RCU locking really easily. I'm a bit more worried about
the case where a driver would block for a long time in ->add_port.
But let's worry about that if an actual user comes up. The last
thing we need in a new driver is lots of complexity for hypothetical
use cases; I'm much more interested in having the driver simple,
testable and actually tested than in optimizing for something we may
never need.

That is to say, the priorities here are very different from Nic's
goals for the target code.
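
To make the connection-establishment point concrete, here is a rough
sketch of what such a conversion could look like. This is not code
from the nvmet tree; the example_port structure and the list and
mutex names are made up purely for illustration. The idea is that
the connect-time lookup walks the port list under RCU while
configuration keeps a single coarse mutex:

/*
 * Sketch only: example_port, example_ports and example_config_mutex
 * are illustrative names, not the actual nvmet symbols.
 */
#include <linux/kref.h>
#include <linux/mutex.h>
#include <linux/rculist.h>
#include <linux/slab.h>

struct example_port {
        struct list_head        entry;
        u16                     portid;
        struct kref             ref;
        struct rcu_head         rcu;
};

static LIST_HEAD(example_ports);
static DEFINE_MUTEX(example_config_mutex);      /* configuration only */

/* Fast path: connect-time lookup walks the list under RCU, no mutex. */
static struct example_port *example_find_port(u16 portid)
{
        struct example_port *p;

        rcu_read_lock();
        list_for_each_entry_rcu(p, &example_ports, entry) {
                if (p->portid == portid && kref_get_unless_zero(&p->ref)) {
                        rcu_read_unlock();
                        return p;       /* caller drops the reference */
                }
        }
        rcu_read_unlock();
        return NULL;
}

static void example_free_port(struct kref *ref)
{
        struct example_port *p = container_of(ref, struct example_port, ref);

        kfree_rcu(p, rcu);
}

/* Slow path: configuration keeps a single coarse mutex. */
static void example_add_port(struct example_port *p)
{
        kref_init(&p->ref);
        mutex_lock(&example_config_mutex);
        list_add_rcu(&p->entry, &example_ports);
        mutex_unlock(&example_config_mutex);
}

static void example_remove_port(struct example_port *p)
{
        mutex_lock(&example_config_mutex);
        list_del_rcu(&p->entry);
        mutex_unlock(&example_config_mutex);
        kref_put(&p->ref, example_free_port);
}

The lookup never takes the configuration mutex, so a driver that
sleeps for a while in ->add_port would not stall incoming
connections; whether that is worth the extra churn is exactly the
"wait for an actual user" question above.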

Thread overview: 20+ messages
2016-06-06 21:22 NVMe over Fabrics target implementation Christoph Hellwig
2016-06-06 21:22 ` [PATCH 1/3] block: Export blk_poll Christoph Hellwig
2016-06-07 6:49 ` Nicholas A. Bellinger
2016-06-06 21:22 ` [PATCH 2/3] nvmet: add a generic NVMe target Christoph Hellwig
2016-06-06 21:22 ` [PATCH 3/3] nvme-loop: add a NVMe loopback host driver Christoph Hellwig
2016-06-06 22:00 ` kbuild test robot
2016-06-07 6:23 ` NVMe over Fabrics target implementation Nicholas A. Bellinger
2016-06-07 10:55 ` Christoph Hellwig
2016-06-08 5:21 ` Nicholas A. Bellinger
2016-06-08 12:19 ` Christoph Hellwig
2016-06-08 13:12 ` Sagi Grimberg
2016-06-08 13:46 ` Christoph Hellwig [this message]
2016-06-09 4:36 ` Nicholas A. Bellinger
2016-06-09 13:46 ` Christoph Hellwig
2016-06-09 3:32 ` Nicholas A. Bellinger
2016-06-07 21:02 ` Andy Grover
2016-06-07 21:10 ` Ming Lin
2016-06-07 17:01 ` Bart Van Assche
2016-06-07 17:31 ` Christoph Hellwig
2016-06-07 18:11 ` Bart Van Assche