public inbox for linux-kernel@vger.kernel.org
From: J Freyensee <james_p_freyensee@linux.intel.com>
To: Helen Koike <helen.koike@collabora.co.uk>,
	hch@infradead.org, mlin@kernel.org, fes@google.com,
	keith.busch@intel.com, rlnelson@google.com, axboe@fb.com,
	digitaleric@google.com, tytso@mit.edu, mikew@google.com,
	monish@google.com
Cc: open list <linux-kernel@vger.kernel.org>,
	"open list:NVM EXPRESS DRIVER" <linux-nvme@lists.infradead.org>,
	"Huffman, Amber" <amber.huffman@intel.com>,
	"Minturn, Dave B" <dave.b.minturn@intel.com>
Subject: Re: [PATCH v3 RFC 2/2] nvme: improve performance for virtual NVMe devices
Date: Tue, 16 Aug 2016 13:45:03 -0700
Message-ID: <1471380303.21107.38.camel@linux.intel.com>
In-Reply-To: <c738f3d3c14907a9074a40b1658461e008fbed9e.1471308889.git.helen.koike@collabora.co.uk>

On Mon, 2016-08-15 at 22:41 -0300, Helen Koike wrote:


>  
> +struct nvme_doorbell_memory {
> +	__u8			opcode;
> +	__u8			flags;
> +	__u16			command_id;
> +	__u32			rsvd1[5];
> +	__le64			prp1;
> +	__le64			prp2;
> +	__u32			rsvd12[6];
> +};
> +
>  struct nvme_command {
>  	union {
>  		struct nvme_common_command common;
> @@ -845,6 +858,7 @@ struct nvme_command {
>  		struct nvmf_connect_command connect;
>  		struct nvmf_property_set_command prop_set;
>  		struct nvmf_property_get_command prop_get;
> +		struct nvme_doorbell_memory doorbell_memory;
>  	};
>  };

This looks like a new NVMe command being introduced, one not found in
the latest NVMe specs (NVMe 1.2.1 or NVMe-over-Fabrics 1.0).

This is a big NACK: the command needs to be part of the NVMe standard
before it is added to the NVMe code base (this is exactly how the NVMe-
over-Fabrics standard got implemented).  I would bring your proposal to
nvmexpress.org.

Jay


>  
> @@ -934,6 +948,9 @@ enum {
>  	/*
>  	 * Media and Data Integrity Errors:
>  	 */
> +#ifdef CONFIG_NVME_VDB
> +	NVME_SC_DOORBELL_MEMORY_INVALID	= 0x1C0,
> +#endif
>  	NVME_SC_WRITE_FAULT		= 0x280,
>  	NVME_SC_READ_ERROR		= 0x281,
>  	NVME_SC_GUARD_CHECK		= 0x282,

Thread overview: 6+ messages
     [not found] <1460657059-21214-1-git-send-email-helen.koike@collabora.co.uk>
2016-08-16  1:41 ` [PATCH v3 RFC 0/2] Virtual NVMe device optimization Helen Koike
2016-08-16  1:41   ` [PATCH v3 RFC 1/2] PCI: Add Google device ID Helen Koike
2016-08-18 21:59     ` Bjorn Helgaas
2016-08-16  1:41   ` [PATCH v3 RFC 2/2] nvme: improve performance for virtual NVMe devices Helen Koike
2016-08-16 20:45     ` J Freyensee [this message]
2016-08-16 23:45       ` Keith Busch
