From: Jens Axboe <axboe@kernel.dk>
To: Christoph Hellwig <hch@lst.de>, keith.busch@intel.com
Cc: linux-block@vger.kernel.org, linux-rdma@vger.kernel.org,
	linux-nvme@lists.infradead.org
Subject: Re: NVMe over Fabrics RDMA transport drivers V2
Date: Fri, 8 Jul 2016 08:40:20 -0600
Message-ID: <577FBB54.2010204@kernel.dk>
In-Reply-To: <1467809752-31320-1-git-send-email-hch@lst.de>

On 07/06/2016 06:55 AM, Christoph Hellwig wrote:
> This patch set implements the NVMe over Fabrics RDMA host and the target
> drivers.
>
> The host driver is tied into the NVMe host stack and implements the RDMA
> transport under the NVMe core and Fabrics modules. The NVMe over Fabrics
> RDMA host module is responsible for establishing a connection against a
> given target/controller, RDMA event handling and data-plane command
> processing.
>
> The target driver hooks into the NVMe target core stack and implements
> the RDMA transport. The module is responsible for RDMA connection
> establishment, RDMA event handling and data-plane RDMA command
> processing.
>
> RDMA connection establishment is done using RDMA/CM and IP resolution.
> The data-plane command sequence follows the classic storage model where
> the target pushes/pulls the data.
>
> Changes since V1:
>   - updates for req_op changes in for-next (me)
>   - validate adrfam in nvmet-rdma (Ming)
>   - don't leak rsp structures on connect failure in nvmet-rdma (Steve)
>   - don't use RDMA/CM error codes in the reject path in nvmet-rdma (Steve)
>   - fix nvmet_rdma_delete_ctrl (me)
>   - invoke fatal error on error completion in nvmet-rdma (Sagi)
>   - don't leak rsp structure on disconnected queue in nvmet-rdma (Ming)
>   - properly set the SGL flag on AERs in nvme-rdma (me)
>   - correctly stop the keep alive timer on reconnect in nvme-rdma (Ming)
>   - stop and drain queues before freeing the tagset in nvmet-rdma (Steve)

Added for 4.8, thanks.

-- 
Jens Axboe
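
For context on how the two drivers described above fit together, the target
side is configured through the nvmet configfs tree and the host side connects
with nvme-cli. The NQN, IP address, and backing device below are placeholders,
and attribute names may differ on other kernel versions; take this as a
minimal sketch rather than anything taken from the patches themselves.

  # Target side: export a block device over RDMA (placeholder names).
  modprobe nvmet-rdma
  mkdir /sys/kernel/config/nvmet/subsystems/testnqn
  echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/attr_allow_any_host
  mkdir /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1
  echo -n /dev/nvme0n1 > \
      /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/device_path
  echo 1 > /sys/kernel/config/nvmet/subsystems/testnqn/namespaces/1/enable
  mkdir /sys/kernel/config/nvmet/ports/1
  echo rdma     > /sys/kernel/config/nvmet/ports/1/addr_trtype
  echo ipv4     > /sys/kernel/config/nvmet/ports/1/addr_adrfam
  echo 10.0.0.1 > /sys/kernel/config/nvmet/ports/1/addr_traddr
  echo 4420     > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
  ln -s /sys/kernel/config/nvmet/subsystems/testnqn \
        /sys/kernel/config/nvmet/ports/1/subsystems/testnqn

  # Host side: discover and connect (needs an nvme-cli with fabrics support).
  modprobe nvme-rdma
  nvme discover -t rdma -a 10.0.0.1 -s 4420
  nvme connect  -t rdma -a 10.0.0.1 -s 4420 -n testnqn

Port 4420 is the IANA-assigned NVMe over Fabrics port; everything else above
is made up for illustration.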

Thread overview: 13+ messages
2016-07-06 12:55 NVMe over Fabrics RDMA transport drivers V2 Christoph Hellwig
2016-07-06 12:55 ` [PATCH 1/5] blk-mq: Introduce blk_mq_reinit_tagset Christoph Hellwig
2016-07-08 13:46   ` Steve Wise
2016-07-06 12:55 ` [PATCH 2/5] nvme: add new reconnecting controller state Christoph Hellwig
2016-07-08 13:47   ` Steve Wise
2016-07-06 12:55 ` [PATCH 3/5] nvme-rdma.h: Add includes for nvme rdma_cm negotiation Christoph Hellwig
2016-07-08 13:49   ` Steve Wise
2016-07-06 12:55 ` [PATCH 4/5] nvmet-rdma: add a NVMe over Fabrics RDMA target driver Christoph Hellwig
2016-07-08 13:51   ` Steve Wise
2016-07-06 12:55 ` [PATCH 5/5] nvme-rdma: add a NVMe over Fabrics RDMA host driver Christoph Hellwig
2016-07-08 13:53   ` Steve Wise
2016-07-08 14:32 ` NVMe over Fabrics RDMA transport drivers V2 Steve Wise
2016-07-08 14:40 ` Jens Axboe [this message]
