Linux-NVME Archive on lore.kernel.org
From: muneendra.kumar@broadcom.com (Muneendra Kumar M)
Subject: [PATCH 1/1] RDMA over Fibre Channel
Date: Wed, 18 Apr 2018 17:17:25 +0530
Message-ID: <86f66430d0c577792b663226189d31a1@mail.gmail.com> (raw)
In-Reply-To: <20180418102223.GA27364@infradead.org>

Hi Christoph,

The current implementation of RDMA over Fibre Channel uses NVMe for the
following reasons:
1. Existing FC-NVMe HBAs and FC networks can be used without requiring any
changes.
2. NVMe namespace-based discovery is used for RDMA node discovery.
3. FC-NVMe gives us a way to achieve zero-copy TX/RX for non-block
workloads.

Although we concur with the idea of RDMA directly over Fibre Channel, an
actual implementation addressing the above reasons requires
standardization and coordination with FC HBA vendors and other SAN
ecosystem players. This effort is ongoing within our organization (Brocade
at Broadcom). However, there is also a business case for the current soft
RDMA implementation for FC, i.e. RDMA over FC-NVMe, as it gives existing
Fibre Channel customers a way to use their existing FC network to
transport RDMA workloads as well. While doing this we are making sure
that NVMe block traffic can also run on the same FC network.

The link below gives more technical details. We would be glad to discuss
them further.

https://github.com/brocade/RDMAoverFC/blob/master/RDMA%20over%20FC.pdf

Regards,
Muneendra, Amit & Anand.


-----Original Message-----
From: Christoph Hellwig [mailto:hch@infradead.org]
Sent: Wednesday, April 18, 2018 3:52 PM
To: muneendra.kumar@broadcom.com
Cc: linux-rdma@vger.kernel.org; amit.tyagi@broadcom.com;
anand.sundaram@broadcom.com; linux-nvme@lists.infradead.org
Subject: Re: [PATCH 1/1] RDMA over Fibre Channel

On Wed, Apr 18, 2018 at 02:42:40AM -0700, muneendra.kumar@broadcom.com
wrote:
> Even though it is inspired by the Soft RoCE driver, the underlying
> transport layer is FC-NVMe (short for 'NVMe over Fibre Channel').
> The request, response, and completion state machines in the driver have
> been heavily modified to adapt to the exchange-based data transfer
> mechanism of Fibre Channel.

That sounds like a bad joke.  Please stop abusing the NVMe code for this
otherwise reasonable idea.  You should be able to layer this over plain
FCP just fine.


Thread overview: 7+ messages
     [not found] <20180418094240.26371-1-muneendra.kumar@broadcom.com>
2018-04-18 10:22 ` [PATCH 1/1] RDMA over Fibre Channel Christoph Hellwig
2018-04-18 11:47   ` Muneendra Kumar M [this message]
2018-04-18 13:18     ` Christoph Hellwig
2018-04-18 16:53       ` Anand Nataraja Sundaram
2018-04-19  9:39         ` Christoph Hellwig
2018-04-23 11:48           ` Anand Nataraja Sundaram
2018-04-18 13:39     ` Bart Van Assche
