linux-block.vger.kernel.org archive mirror
From: Keith Busch <keith.busch@intel.com>
To: Christoph Hellwig <hch@lst.de>
Cc: axboe@kernel.dk, linux-block@vger.kernel.org,
	linux-nvme@lists.infradead.org, linux-rdma@vger.kernel.org
Subject: Re: generic NVMe over Fabrics library support V2
Date: Wed, 15 Jun 2016 15:54:01 -0400	[thread overview]
Message-ID: <20160615195401.GA7637@localhost.localdomain> (raw)
In-Reply-To: <1465829128-22993-1-git-send-email-hch@lst.de>

On Mon, Jun 13, 2016 at 04:45:20PM +0200, Christoph Hellwig wrote:
> This patch set adds the necessary infrastructure for the NVMe over
> Fabrics functionality and the NVMe over Fabrics library itself.
> 
> First, we add some needed parameters to NVMe request allocation, such as
> flags (for reserved commands - connect and keep-alive), support for tag
> allocation on a given queue ID (so connect can be executed per-queue),
> and the ability to queue a request at the head of the request queue (so
> reconnects can pass in-flight I/O).
> 
> Second, we add support for additional sysfs attributes that are needed
> or useful for the Fabrics driver.
> 
> Third, we add the NVMe over Fabrics related header definitions and the
> Fabrics library itself, which is transport independent and handles
> Fabrics-specific commands and variables.
> 
> Last, we add support for the periodic keep-alive mechanism, which is
> mandatory for Fabrics.
> 
> Changes from V1:
>  - don't directly free host->opts on connect failure (Sagi)
>  - blk_mq_alloc_request_hctx improvements (Ming and me)
>  - keep alive should not use blk_mq_alloc_request_hctx (me)
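
The per-queue, reserved allocation described in the first point above is
what PATCH 1/8 provides. Below is a minimal sketch of how a transport might
use it for a Connect command; the helper name nvmf_alloc_connect_request
and the qid-to-hctx mapping are assumptions for illustration only, not code
from this series, and the blk-mq signatures shown are the 2016-era ones:

#include <linux/blk-mq.h>
#include <linux/err.h>

static struct request *nvmf_alloc_connect_request(struct request_queue *q,
						  unsigned int qid)
{
	struct request *rq;
	unsigned int flags = BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED;

	if (qid == 0) {
		/* Admin queue: any hctx will do, a reserved tag is enough. */
		rq = blk_mq_alloc_request(q, WRITE, flags);
	} else {
		/*
		 * I/O queues: Connect has to run on the hardware context
		 * that maps to the queue being connected, which is what
		 * the new blk_mq_alloc_request_hctx() provides.
		 */
		rq = blk_mq_alloc_request_hctx(q, WRITE, flags, qid - 1);
	}
	if (IS_ERR(rq))
		return rq;

	/*
	 * Build the Fabrics Connect command here, then execute the request
	 * at the head of the queue so a reconnect can pass in-flight I/O.
	 */
	return rq;
}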
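
The mandatory keep-alive is, at heart, a periodic admin command. The
fragment below is an illustrative sketch only, assuming a delayed work
item re-armed at half the negotiated Keep Alive Timeout (KATO); the names
my_ctrl, my_submit_keep_alive and my_start_keep_alive are invented for
the example, and the actual plumbing in PATCH 8/8 differs:

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct my_ctrl {
	struct delayed_work ka_work;
	unsigned int kato;	/* negotiated Keep Alive Timeout, seconds */
};

static int my_submit_keep_alive(struct my_ctrl *ctrl)
{
	/* Issue the Keep Alive admin command (opcode 0x18) here. */
	return 0;
}

static void my_keep_alive_work(struct work_struct *work)
{
	struct my_ctrl *ctrl =
		container_of(work, struct my_ctrl, ka_work.work);

	if (my_submit_keep_alive(ctrl))
		return;	/* a real driver would start error recovery here */

	/* Re-arm well before KATO expires so the target never sees silence. */
	schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ / 2);
}

static void my_start_keep_alive(struct my_ctrl *ctrl)
{
	INIT_DELAYED_WORK(&ctrl->ka_work, my_keep_alive_work);
	schedule_delayed_work(&ctrl->ka_work, ctrl->kato * HZ / 2);
}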

I only had the one comment, and Sagi says you guys already considered
it but it was more complicated than it was worth. Looking at the patches
that follow, I tend to agree.

The rest looks great, and passes all the sanity tests I can run. This
time with correct email spelling:

Reviewed-by: Keith Busch <keith.busch@intel.com>


Thread overview: 12+ messages
2016-06-13 14:45 generic NVMe over Fabrics library support V2 Christoph Hellwig
2016-06-13 14:45 ` [PATCH 1/8] blk-mq: add blk_mq_alloc_request_hctx Christoph Hellwig
2016-06-13 14:45 ` [PATCH 2/8] nvme: allow transitioning from NEW to LIVE state Christoph Hellwig
2016-06-13 14:45 ` [PATCH 3/8] nvme: Modify and export sync command submission for fabrics Christoph Hellwig
2016-06-13 14:45 ` [PATCH 4/8] nvme: add fabrics sysfs attributes Christoph Hellwig
2016-06-13 14:45 ` [PATCH 5/8] nvme.h: add NVMe over Fabrics definitions Christoph Hellwig
2016-06-13 14:45 ` [PATCH 6/8] nvme-fabrics: add a generic NVMe over Fabrics library Christoph Hellwig
2016-06-15 19:16   ` Keith Busch
2016-06-15 19:16     ` Sagi Grimberg
2016-06-13 14:45 ` [PATCH 7/8] nvme.h: Add keep-alive opcode and identify controller attribute Christoph Hellwig
2016-06-13 14:45 ` [PATCH 8/8] nvme: add keep-alive support Christoph Hellwig
2016-06-15 19:54 ` Keith Busch [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20160615195401.GA7637@localhost.localdomain \
    --to=keith.busch@intel.com \
    --cc=axboe@kernel.dk \
    --cc=hch@lst.de \
    --cc=linux-block@vger.kernel.org \
    --cc=linux-nvme@lists.infradead.org \
    --cc=linux-rdma@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
  Be sure your reply has a Subject: header at the top and a blank line
  before the message body.