From: Maxim Levitsky <mlevitsk@redhat.com>
To: Klaus Birkelund Jensen <its@irrelevant.dk>
Cc: Kevin Wolf <kwolf@redhat.com>,
	Beata Michalska <beata.michalska@linaro.org>,
	qemu-block@nongnu.org, Klaus Jensen <k.jensen@samsung.com>,
	qemu-devel@nongnu.org, Max Reitz <mreitz@redhat.com>,
	Keith Busch <kbusch@kernel.org>,
	Javier Gonzalez <javier.gonz@samsung.com>
Subject: Re: [PATCH v5 08/26] nvme: refactor device realization
Date: Wed, 25 Mar 2020 12:21:26 +0200
Message-ID: <d76c59aec319c20f901ecb65734046aede533d91.camel@redhat.com>
In-Reply-To: <20200316074348.wxmxsox6j42r6rw5@apples.localdomain>

On Mon, 2020-03-16 at 00:43 -0700, Klaus Birkelund Jensen wrote:
> On Feb 12 11:27, Maxim Levitsky wrote:
> > On Tue, 2020-02-04 at 10:51 +0100, Klaus Jensen wrote:
> > > This patch splits up nvme_realize into multiple individual functions,
> > > each initializing a different subset of the device.
> > > 
> > > Signed-off-by: Klaus Jensen <klaus.jensen@cnexlabs.com>
> > > ---
> > >  hw/block/nvme.c | 175 +++++++++++++++++++++++++++++++-----------------
> > >  hw/block/nvme.h |  21 ++++++
> > >  2 files changed, 133 insertions(+), 63 deletions(-)
> > > 
> > > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > > index e1810260d40b..81514eaef63a 100644
> > > --- a/hw/block/nvme.c
> > > +++ b/hw/block/nvme.c
> > > @@ -44,6 +44,7 @@
> > >  #include "nvme.h"
> > >  
> > >  #define NVME_SPEC_VER 0x00010201
> > > +#define NVME_MAX_QS PCI_MSIX_FLAGS_QSIZE
> > >  
> > >  #define NVME_GUEST_ERR(trace, fmt, ...) \
> > >      do { \
> > > @@ -1325,67 +1326,106 @@ static const MemoryRegionOps nvme_cmb_ops = {
> > >      },
> > >  };
> > >  
> > > -static void nvme_realize(PCIDevice *pci_dev, Error **errp)
> > > +static int nvme_check_constraints(NvmeCtrl *n, Error **errp)
> > >  {
> > > -    NvmeCtrl *n = NVME(pci_dev);
> > > -    NvmeIdCtrl *id = &n->id_ctrl;
> > > -
> > > -    int i;
> > > -    int64_t bs_size;
> > > -    uint8_t *pci_conf;
> > > -
> > > -    if (!n->params.num_queues) {
> > > -        error_setg(errp, "num_queues can't be zero");
> > > -        return;
> > > -    }
> > > +    NvmeParams *params = &n->params;
> > >  
> > >      if (!n->conf.blk) {
> > > -        error_setg(errp, "drive property not set");
> > > -        return;
> > > +        error_setg(errp, "nvme: block backend not configured");
> > > +        return 1;
> > 
> > As a matter of taste, negative values indicate an error and 0 is the success value.
> > In the Linux kernel this is even an official rule.
> > >      }
> 
> Fixed.
> 
> > >  
> > > -    bs_size = blk_getlength(n->conf.blk);
> > > -    if (bs_size < 0) {
> > > -        error_setg(errp, "could not get backing file size");
> > > -        return;
> > > +    if (!params->serial) {
> > > +        error_setg(errp, "nvme: serial not configured");
> > > +        return 1;
> > >      }
> > >  
> > > -    if (!n->params.serial) {
> > > -        error_setg(errp, "serial property not set");
> > > -        return;
> > > +    if ((params->num_queues < 1 || params->num_queues > NVME_MAX_QS)) {
> > > +        error_setg(errp, "nvme: invalid queue configuration");
> > 
> > Maybe something like "nvme: invalid queue count specified, should be between 1 and ..."?
> > > +        return 1;
> > >      }
> 
> Fixed.
Thanks
> 
> > > +
> > > +    return 0;
> > > +}
> > > +
> > > +static int nvme_init_blk(NvmeCtrl *n, Error **errp)
> > > +{
> > >      blkconf_blocksizes(&n->conf);
> > >      if (!blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk),
> > > -                                       false, errp)) {
> > > -        return;
> > > +        false, errp)) {
> > > +        return 1;
> > >      }
> > >  
> > > -    pci_conf = pci_dev->config;
> > > -    pci_conf[PCI_INTERRUPT_PIN] = 1;
> > > -    pci_config_set_prog_interface(pci_dev->config, 0x2);
> > > -    pci_config_set_class(pci_dev->config, PCI_CLASS_STORAGE_EXPRESS);
> > > -    pcie_endpoint_cap_init(pci_dev, 0x80);
> > > +    return 0;
> > > +}
> > >  
> > > +static void nvme_init_state(NvmeCtrl *n)
> > > +{
> > >      n->num_namespaces = 1;
> > >      n->reg_size = pow2ceil(0x1004 + 2 * (n->params.num_queues + 1) * 4);
> > 
> > Isn't that wrong?
> > The first 4K of MMIO (0x1000) is the registers, and that is followed by the doorbells,
> > and each doorbell takes 8 bytes (assuming the regular doorbell stride),
> > so n->params.num_queues + 1 should be the total number of queues, thus the 0x1004 should be 0x1000 IMHO.
> > I might be missing some rounding magic here though.
> > 
> 
> Yeah. I think you are right. It all becomes slightly more fishy due to
> the num_queues device parameter being 1's based and accounting for the
> admin queue pair.
> 
> But in get/set features, the value has to be 0's based and only account
> for the I/O queues, so we need to subtract 2 from the value. It's
> confusing all around.
Yeah, I can't agree more on that. The zero-based values have bitten
me a few times while I was developing nvme-mdev as well.
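
To make the subtract-by-2 concrete, here is a standalone sketch of the
conversion (the helper name is mine, not the device code; the NSQA/NCQA
packing in the low/high halves of the completion dword is how the Number of
Queues feature reports the allocated counts):

```c
#include <assert.h>
#include <stdint.h>

/* num_queues is the legacy 1's-based parameter that also counts the
 * admin queue pair; the Number of Queues feature wants a 0's-based
 * count of I/O queues only, hence the subtraction of 2. NSQA goes in
 * the low 16 bits of the completion dword, NCQA in the high 16 bits. */
uint32_t nvme_num_queues_feature(uint32_t num_queues)
{
    uint32_t ioq = num_queues - 2;      /* 0's-based I/O queue count */
    return ioq | (ioq << 16);
}
```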

> 
> Since the admin queue pair isn't really optional I think it would be
> better that we introduces a new max_ioqpairs parameter that is 1's
> based, counts number of pairs and obviously only accounts for the io
> queues.
> 
> I guess we need to keep the num_queues parameter around for
> compatibility.
> 
> The doorbells are only 4 bytes btw, but the calculation still looks
I don't understand that. Each doorbell is indeed 4 bytes, but they come
in pairs, so each doorbell pair is 8 bytes.

BTW, the spec has a so-called doorbell stride, which allows each doorbell
slot to be artificially enlarged to a larger power of two. This was intended
for software implementations (like my nvme-mdev), to make sure that each
doorbell takes exactly one cacheline.

I personally wasn't able to notice any measurable difference, but then
nvme-mdev adds so little overhead that it might not be measurable.
You might want to support this sometime in the future to increase the feature
coverage of this nvme device.
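
In case it helps, the stride math from the spec: a doorbell register lives at
0x1000 + (2 * qid + is_cq) * (4 << CAP.DSTRD), so DSTRD = 0 packs the 4-byte
doorbells tightly while DSTRD = 4 pads each one out to a 64-byte cacheline
(standalone sketch, not QEMU code):

```c
#include <assert.h>
#include <stdint.h>

/* Offset of a queue's doorbell within BAR0: submission and completion
 * doorbells alternate, and each slot is 4 << DSTRD bytes wide. */
uint64_t nvme_doorbell_offset(uint16_t qid, int is_cq, unsigned dstrd)
{
    return 0x1000 + (2 * (uint64_t)qid + is_cq) * (4u << dstrd);
}
```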

> wrong. With a max_ioqpairs parameter in place, the reg_size should be
> 
>     pow2ceil(0x1008 + 2 * (n->params.max_ioqpairs) * 4)
> 
> Right? That's 0x1000 for the core registers, 8 bytes for the sq/cq
> doorbells for the admin queue pair, and then room for the I/O queue
> pairs.
Looks great.
BTW, 
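
For what it's worth, the arithmetic checks out; a standalone sketch
(pow2ceil is reimplemented here just for illustration - QEMU has its own
helper):

```c
#include <assert.h>
#include <stdint.h>

/* Round v up to the next power of two (illustrative stand-in for
 * QEMU's pow2ceil helper). */
uint64_t pow2ceil(uint64_t v)
{
    uint64_t p = 1;
    while (p < v) {
        p <<= 1;
    }
    return p;
}

/* BAR0 size as discussed: 0x1000 bytes of core registers, one 8-byte
 * sq/cq doorbell pair for the admin queues, then one pair per I/O
 * queue pair. */
uint64_t nvme_reg_size(unsigned max_ioqpairs)
{
    return pow2ceil(0x1008 + 2 * (uint64_t)max_ioqpairs * 4);
}
```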


> 
> I added a patch for this in v6.
> 
> > > -    n->ns_size = bs_size / (uint64_t)n->num_namespaces;
> > > -
> > >      n->namespaces = g_new0(NvmeNamespace, n->num_namespaces);
> > >      n->sq = g_new0(NvmeSQueue *, n->params.num_queues);
> > >      n->cq = g_new0(NvmeCQueue *, n->params.num_queues);
> > > +}
> > >  
> > > -    memory_region_init_io(&n->iomem, OBJECT(n), &nvme_mmio_ops, n,
> > > -                          "nvme", n->reg_size);
> > > +static void nvme_init_cmb(NvmeCtrl *n, PCIDevice *pci_dev)
> > > +{
> > > +    NVME_CMBLOC_SET_BIR(n->bar.cmbloc, 2);
> > 
> > It would be nice to have #define for CMB bar number
> 
> Added.
Thanks!
> 
> > > +    NVME_CMBLOC_SET_OFST(n->bar.cmbloc, 0);
> > > +
> > > +    NVME_CMBSZ_SET_SQS(n->bar.cmbsz, 1);
> > > +    NVME_CMBSZ_SET_CQS(n->bar.cmbsz, 0);
> > > +    NVME_CMBSZ_SET_LISTS(n->bar.cmbsz, 0);
> > > +    NVME_CMBSZ_SET_RDS(n->bar.cmbsz, 1);
> > > +    NVME_CMBSZ_SET_WDS(n->bar.cmbsz, 1);
> > > +    NVME_CMBSZ_SET_SZU(n->bar.cmbsz, 2);
> > > +    NVME_CMBSZ_SET_SZ(n->bar.cmbsz, n->params.cmb_size_mb);
> > > +
> > > +    n->cmbloc = n->bar.cmbloc;
> > > +    n->cmbsz = n->bar.cmbsz;
> > > +
> > > +    n->cmbuf = g_malloc0(NVME_CMBSZ_GETSIZE(n->bar.cmbsz));
> > > +    memory_region_init_io(&n->ctrl_mem, OBJECT(n), &nvme_cmb_ops, n,
> > > +                            "nvme-cmb", NVME_CMBSZ_GETSIZE(n->bar.cmbsz));
> > > +    pci_register_bar(pci_dev, NVME_CMBLOC_BIR(n->bar.cmbloc),
> > 
> > Same here although since you read it here from the controller register,
> > then maybe leave it as is. I prefer though for this kind of thing
> > to have a #define and use it everywhere. 
> > 
> 
> Done.
> 
> > > +        PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64 |
> > > +        PCI_BASE_ADDRESS_MEM_PREFETCH, &n->ctrl_mem);
> > > +}
> > > +
> > > +static void nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev)
> > > +{
> > > +    uint8_t *pci_conf = pci_dev->config;
> > > +
> > > +    pci_conf[PCI_INTERRUPT_PIN] = 1;
> > > +    pci_config_set_prog_interface(pci_conf, 0x2);
> > 
> > Nitpick: How about adding some #define for that as well?
> > (I know that this code is copied as is but still)
> 
> Yeah. A PCI_PI_NVME or something would be nice. But this should probably
> go to some pci related header file? Any idea where that would fit?

in include/hw/pci/pci_ids.h maybe?

> 
> > > +    pci_config_set_vendor_id(pci_conf, PCI_VENDOR_ID_INTEL);
> > > +    pci_config_set_device_id(pci_conf, 0x5845);
> > > +    pci_config_set_class(pci_conf, PCI_CLASS_STORAGE_EXPRESS);
> > > +    pcie_endpoint_cap_init(pci_dev, 0x80);
> > > +
> > > +    memory_region_init_io(&n->iomem, OBJECT(n), &nvme_mmio_ops, n, "nvme",
> > > +        n->reg_size);
> > 
> > Code on split lines should start at the column right after the '('.
> > Now it's my turn to notice this - our checkpatch.pl doesn't check for this,
> > and I can't say how often I get burnt by this myself.
> > 
> > There are a *lot* of these issues; I pointed out some of them, but you
> > should check all the patches for this.
> > 
> > 
> 
> I fixed all that :)

Thanks, but I bet that some of these remained - speaking from my experience:
like you, I wasn't used to this rule either and haven't yet internalized it,
and our checkpatch.pl doesn't check for it, so I keep violating it in most
patches I send despite going over each patch a few times.
I'll go over v6, and if I spot any of these I'll take a note, now that you
fixed most of these issues.
Thanks again.
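
For the record, the rule in practice (toy function, illustration only -
continuation arguments start in the column right after the '('):

```c
#include <assert.h>

/* Dummy function whose parameter list is split across lines in the
 * QEMU style: 'c' lines up directly under 'a'. */
int sum4(int a, int b,
         int c, int d)
{
    return a + b + c + d;
}
```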

> 
> > 
> > >      pci_register_bar(pci_dev, 0,
> > >          PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64,
> > >          &n->iomem);
> > 
> > Split line alignment issue here as well.
> > >      msix_init_exclusive_bar(pci_dev, n->params.num_queues, 4, NULL);
> > >  
> > > +    if (n->params.cmb_size_mb) {
> > > +        nvme_init_cmb(n, pci_dev);
> > > +    }
> > > +}
> > > +
> > > +static void nvme_init_ctrl(NvmeCtrl *n)
> > > +{
> > > +    NvmeIdCtrl *id = &n->id_ctrl;
> > > +    NvmeParams *params = &n->params;
> > > +    uint8_t *pci_conf = n->parent_obj.config;
> > > +
> > >      id->vid = cpu_to_le16(pci_get_word(pci_conf + PCI_VENDOR_ID));
> > >      id->ssvid = cpu_to_le16(pci_get_word(pci_conf + PCI_SUBSYSTEM_VENDOR_ID));
> > >      strpadcpy((char *)id->mn, sizeof(id->mn), "QEMU NVMe Ctrl", ' ');
> > >      strpadcpy((char *)id->fr, sizeof(id->fr), "1.0", ' ');
> > > -    strpadcpy((char *)id->sn, sizeof(id->sn), n->params.serial, ' ');
> > > +    strpadcpy((char *)id->sn, sizeof(id->sn), params->serial, ' ');
> > >      id->rab = 6;
> > >      id->ieee[0] = 0x00;
> > >      id->ieee[1] = 0x02;
> > > @@ -1431,46 +1471,55 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
> > >  
> > >      n->bar.vs = NVME_SPEC_VER;
> > >      n->bar.intmc = n->bar.intms = 0;
> > > +}
> > >  
> > > -    if (n->params.cmb_size_mb) {
> > > +static int nvme_init_namespace(NvmeCtrl *n, NvmeNamespace *ns, Error **errp)
> > > +{
> > > +    int64_t bs_size;
> > > +    NvmeIdNs *id_ns = &ns->id_ns;
> > >  
> > > -        NVME_CMBLOC_SET_BIR(n->bar.cmbloc, 2);
> > > -        NVME_CMBLOC_SET_OFST(n->bar.cmbloc, 0);
> > > +    bs_size = blk_getlength(n->conf.blk);
> > > +    if (bs_size < 0) {
> > > +        error_setg_errno(errp, -bs_size, "blk_getlength");
> > > +        return 1;
> > > +    }
> > >  
> > > -        NVME_CMBSZ_SET_SQS(n->bar.cmbsz, 1);
> > > -        NVME_CMBSZ_SET_CQS(n->bar.cmbsz, 0);
> > > -        NVME_CMBSZ_SET_LISTS(n->bar.cmbsz, 0);
> > > -        NVME_CMBSZ_SET_RDS(n->bar.cmbsz, 1);
> > > -        NVME_CMBSZ_SET_WDS(n->bar.cmbsz, 1);
> > > -        NVME_CMBSZ_SET_SZU(n->bar.cmbsz, 2); /* MBs */
> > > -        NVME_CMBSZ_SET_SZ(n->bar.cmbsz, n->params.cmb_size_mb);
> > > +    id_ns->lbaf[0].ds = BDRV_SECTOR_BITS;
> > > +    n->ns_size = bs_size;
> > >  
> > > -        n->cmbloc = n->bar.cmbloc;
> > > -        n->cmbsz = n->bar.cmbsz;
> > > +    id_ns->ncap = id_ns->nuse = id_ns->nsze =
> > > +        cpu_to_le64(nvme_ns_nlbas(n, ns));
> > 
> > I myself don't know how to align these splits to be honest.
> > I would just split this into multiple statements.
> > >  
> > > -        n->cmbuf = g_malloc0(NVME_CMBSZ_GETSIZE(n->bar.cmbsz));
> > > -        memory_region_init_io(&n->ctrl_mem, OBJECT(n), &nvme_cmb_ops, n,
> > > -                              "nvme-cmb", NVME_CMBSZ_GETSIZE(n->bar.cmbsz));
> > > -        pci_register_bar(pci_dev, NVME_CMBLOC_BIR(n->bar.cmbloc),
> > > -            PCI_BASE_ADDRESS_SPACE_MEMORY | PCI_BASE_ADDRESS_MEM_TYPE_64 |
> > > -            PCI_BASE_ADDRESS_MEM_PREFETCH, &n->ctrl_mem);
> > > +    return 0;
> > > +}
> > >  
> > > +static void nvme_realize(PCIDevice *pci_dev, Error **errp)
> > > +{
> > > +    NvmeCtrl *n = NVME(pci_dev);
> > > +    Error *local_err = NULL;
> > > +    int i;
> > > +
> > > +    if (nvme_check_constraints(n, &local_err)) {
> > > +        error_propagate_prepend(errp, local_err, "nvme_check_constraints: ");
> > 
> > Do we need that hint for the end user?
> 
> Removed.
> 
> > > +        return;
> > > +    }
> > > +
> > > +    nvme_init_state(n);
> > > +
> > > +    if (nvme_init_blk(n, &local_err)) {
> > > +        error_propagate_prepend(errp, local_err, "nvme_init_blk: ");
> > 
> > Same here
> 
> Done.
> 
> 
> > > +        return;
> > >      }
> > >  
> > >      for (i = 0; i < n->num_namespaces; i++) {
> > > -        NvmeNamespace *ns = &n->namespaces[i];
> > > -        NvmeIdNs *id_ns = &ns->id_ns;
> > > -        id_ns->nsfeat = 0;
> > > -        id_ns->nlbaf = 0;
> > > -        id_ns->flbas = 0;
> > > -        id_ns->mc = 0;
> > > -        id_ns->dpc = 0;
> > > -        id_ns->dps = 0;
> > > -        id_ns->lbaf[0].ds = BDRV_SECTOR_BITS;
> > > -        id_ns->ncap  = id_ns->nuse = id_ns->nsze =
> > > -            cpu_to_le64(n->ns_size >>
> > > -                id_ns->lbaf[NVME_ID_NS_FLBAS_INDEX(ns->id_ns.flbas)].ds);
> > > +        if (nvme_init_namespace(n, &n->namespaces[i], &local_err)) {
> > > +            error_propagate_prepend(errp, local_err, "nvme_init_namespace: ");
> > 
> > And here
> 
> Done.
> 
> 
> > > +            return;
> > > +        }
> > >      }
> > > +
> > > +    nvme_init_pci(n, pci_dev);
> > > +    nvme_init_ctrl(n);
> > >  }
> > >  
> > >  static void nvme_exit(PCIDevice *pci_dev)
> > > diff --git a/hw/block/nvme.h b/hw/block/nvme.h
> > > index 9957c4a200e2..a867bdfabafd 100644
> > > --- a/hw/block/nvme.h
> > > +++ b/hw/block/nvme.h
> > > @@ -65,6 +65,22 @@ typedef struct NvmeNamespace {
> > >      NvmeIdNs        id_ns;
> > >  } NvmeNamespace;
> > >  
> > > +static inline NvmeLBAF nvme_ns_lbaf(NvmeNamespace *ns)
> > > +{
> > 
> > It's not common to return a structure in C; usually a pointer is returned
> > to avoid copying. In this case it doesn't matter that much though.
> 
> It's actually gonna be used a lot. So swapped to pointer.
Thanks.

> 
> > > +    NvmeIdNs *id_ns = &ns->id_ns;
> > > +    return id_ns->lbaf[NVME_ID_NS_FLBAS_INDEX(id_ns->flbas)];
> > > +}
> > > +
> > > +static inline uint8_t nvme_ns_lbads(NvmeNamespace *ns)
> > > +{
> > > +    return nvme_ns_lbaf(ns).ds;
> > > +}
> > > +
> > > +static inline size_t nvme_ns_lbads_bytes(NvmeNamespace *ns)
> > > +{
> > > +    return 1 << nvme_ns_lbads(ns);
> > > +}
> > > +
> > >  #define TYPE_NVME "nvme"
> > >  #define NVME(obj) \
> > >          OBJECT_CHECK(NvmeCtrl, (obj), TYPE_NVME)
> > > @@ -101,4 +117,9 @@ typedef struct NvmeCtrl {
> > >      NvmeIdCtrl      id_ctrl;
> > >  } NvmeCtrl;
> > >  
> > > +static inline uint64_t nvme_ns_nlbas(NvmeCtrl *n, NvmeNamespace *ns)
> > > +{
> > > +    return n->ns_size >> nvme_ns_lbads(ns);
> > > +}
> > 
> > Unless you need all these functions in the future, this feels like
> > it is a bit verbose.
> > 
> 
> These will be used in various places later.
OK, then it is all right.
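
For anyone following along, the relationship between these helpers can be
sketched outside of QEMU like this (standalone sketch with plain parameters
instead of the NvmeNamespace/NvmeCtrl structs):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* ds is the LBA data size exponent from the selected LBA format
 * (lbaf[...].ds), so one LBA is 1 << ds bytes... */
size_t nvme_ns_lbads_bytes(uint8_t ds)
{
    return (size_t)1 << ds;
}

/* ...and the LBA count is the namespace size shifted down by ds. */
uint64_t nvme_ns_nlbas(uint64_t ns_size, uint8_t ds)
{
    return ns_size >> ds;
}
```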

>  
> 

Best regards,
	Maxim Levitsky




