From: Andrzej Jakowski <andrzej.jakowski@linux.intel.com>
To: Klaus Jensen <its@irrelevant.dk>
Cc: kbusch@kernel.org, kwolf@redhat.com, qemu-devel@nongnu.org,
	qemu-block@nongnu.org, mreitz@redhat.com
Subject: Re: [PATCH v1 2/2] nvme: allow cmb and pmr to be enabled on same device
Date: Mon, 8 Jun 2020 12:44:50 -0700	[thread overview]
Message-ID: <c4c6eb4b-ed35-951c-0b00-079070638445@linux.intel.com> (raw)
In-Reply-To: <20200608080750.ahze6ez3gvhginq7@apples.localdomain>

On 6/8/20 1:08 AM, Klaus Jensen wrote:
> On Jun  5 11:10, Andrzej Jakowski wrote:
>> So far it has not been possible to have CMB and PMR emulated on the same
>> device, because BAR2 was used exclusively for either PMR or CMB. This
>> patch places the CMB at an offset in BAR4 so it does not conflict with the
>> MSI-X vectors.
>>
> 
> Hi Andrzej,
> 
> Thanks for doing this, it's a nice addition!
> 
> Though, I would prefer that the table and PBA were located in BAR0,
> keeping BAR4 for exclusive CMB use. I'm no expert on this, but is it ok
> to have the table and PBA in prefetchable memory? Having them "together"
> with the other controller-level configuration memory just feels more
> natural to me, but I'm not gonna put my foot down.
Hi Klaus,

Thanks for your feedback!
I don't think it matters whether the MSI-X table is in prefetchable or
non-prefetchable memory.
My understanding is that the spec allows the MSI-X table and PBA to live in
any BAR, at any offset. I understand your preference, but since the current
placement doesn't violate the spec, why don't we leave it as-is?
Does anybody know what the typical approach is for real devices?
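
To illustrate the point: in QEMU the BAR that holds the table and PBA is just
an argument to msix_init() (hw/pci/msix.c), so placing them in BAR0 or in BAR4
is mechanically the same call. The snippet below is only a sketch; the n->bar4
field name is a placeholder, while NVME_MSIX_BIR and the PBA offset follow the
quoted patch:

    /* Sketch: put the MSI-X table at offset 0 of BAR4 and the PBA behind it.
     * Passing a BAR0 MemoryRegion and BAR number 0 instead would give the
     * BAR0 layout Klaus suggests; the PCI spec allows either placement.
     */
    static int nvme_init_msix_sketch(NvmeCtrl *n, PCIDevice *pci_dev,
                                     uint32_t pba_offset, Error **errp)
    {
        return msix_init(pci_dev, n->num_queues,
                         &n->bar4, NVME_MSIX_BIR, 0,          /* table BAR, offset */
                         &n->bar4, NVME_MSIX_BIR, pba_offset, /* PBA BAR, offset */
                         0 /* let QEMU place the capability */, errp);
    }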
> 
> Using BAR0 would also slightly simplify the patch since no changes would
> be required for the CMB path.
> 
> Also, can you rebase this on Kevin's block branch? There are a bunch of
> refactoring patches that change the realization code, so this patch
> doesn't apply at all.
Yep, will rebase it.
> 
>> Signed-off-by: Andrzej Jakowski <andrzej.jakowski@linux.intel.com>
>> ---
>>  hw/block/nvme.c      | 127 +++++++++++++++++++++++++++++--------------
>>  hw/block/nvme.h      |   3 +-
>>  include/block/nvme.h |   4 +-
>>  3 files changed, 91 insertions(+), 43 deletions(-)
>>
>> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
>> index f0b45704be..353cf20e0a 100644
>> --- a/hw/block/nvme.c
>> +++ b/hw/block/nvme.c
>> @@ -22,12 +22,12 @@
>>   *              [pmrdev=<mem_backend_file_id>,] \
>>   *              num_queues=<N[optional]>
>>   *
>> - * Note cmb_size_mb denotes size of CMB in MB. CMB is assumed to be at
>> - * offset 0 in BAR2 and supports only WDS, RDS and SQS for now.
>> + * Note cmb_size_mb denotes the size of the CMB in MB. When configured, the
>> + * CMB is assumed to be resident in BAR4 at a certain offset, because BAR4 is
>> + * also used for the MSI-X table, which lives at offset 0 of that BAR.
>>   *
>> - * cmb_size_mb= and pmrdev= options are mutually exclusive due to limitation
>> - * in available BAR's. cmb_size_mb= will take precedence over pmrdev= when
>> - * both provided.
>> + * pmrdev is assumed to be resident in BAR2. When configured, it consumes the
>> + * whole of BAR2 exclusively.
> 
> Actually it uses both BAR2 and BAR3 since it's 64 bits.
Correct. That's what I implied here without spelling it out. I can extend
the comment to add that information.
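
For reference, both BARs get consumed because the PMR is registered as a
64-bit prefetchable region. Roughly like this (macro and field names assumed
from the existing PMR code in nvme.c, so treat it as a sketch rather than a
verbatim quote):

    /* Because PCI_BASE_ADDRESS_MEM_TYPE_64 is set, this single registration
     * consumes two 32-bit BAR slots: BAR2 for the low address bits and BAR3
     * for the high bits, so nothing else can live in BAR3.
     */
    pci_register_bar(pci_dev, NVME_PMRCAP_BIR(n->bar.pmrcap),
                     PCI_BASE_ADDRESS_SPACE_MEMORY |
                     PCI_BASE_ADDRESS_MEM_TYPE_64 |
                     PCI_BASE_ADDRESS_MEM_PREFETCH,
                     &n->pmrdev->mr);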
> 
>> @@ -1342,6 +1346,71 @@ static const MemoryRegionOps nvme_cmb_ops = {
>>      },
>>  };
>>  
>> +#define NVME_MSIX_BIR (4)
>> +static void nvme_bar4_init(PCIDevice *pci_dev)
>> +{
>> +    NvmeCtrl *n = NVME(pci_dev);
>> +    int status;
>> +    uint64_t bar_size = 4096;
>> +    uint32_t nvme_pba_offset = bar_size / 2;
>> +    uint32_t nvme_pba_size = QEMU_ALIGN_UP(n->num_queues, 64) / 8;
>> +    uint32_t cmb_size_units;
>> +
>> +    if (n->num_queues * PCI_MSIX_ENTRY_SIZE > nvme_pba_offset) {
>> +        nvme_pba_offset = n->num_queues * PCI_MSIX_ENTRY_SIZE;
>> +    }
>> +
>> +    if (nvme_pba_offset + nvme_pba_size > 4096) {
>> +        bar_size = nvme_pba_offset + nvme_pba_size;
>> +    }
>> +
> 
> This is migration compatibility stuff that is not needed because the
> nvme device is unmigratable anyway.
I don't understand that comment. Could you please explain more?
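
For what it's worth, the sizing logic in the quoted hunk works out as follows
for a default setup. This is a standalone sketch of the same arithmetic;
num_queues = 64 is only an example value and the two macros are redefined
locally rather than taken from QEMU headers:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PCI_MSIX_ENTRY_SIZE 16                        /* bytes per MSI-X entry */
    #define ALIGN_UP(x, a)      (((x) + (a) - 1) / (a) * (a))

    int main(void)
    {
        uint32_t num_queues = 64;                         /* example value */
        uint64_t bar_size = 4096;                         /* minimum BAR4 size */
        uint32_t pba_offset = bar_size / 2;               /* default: 2048 */
        uint32_t pba_size = ALIGN_UP(num_queues, 64) / 8; /* 8 bytes */

        /* Grow the PBA offset if the MSI-X table would overlap it. */
        if (num_queues * PCI_MSIX_ENTRY_SIZE > pba_offset) {
            pba_offset = num_queues * PCI_MSIX_ENTRY_SIZE;
        }
        /* Grow the BAR if table + PBA no longer fit in one page. */
        if (pba_offset + pba_size > 4096) {
            bar_size = pba_offset + pba_size;
        }

        /* Prints: pba_offset=2048 pba_size=8 bar_size=4096 */
        printf("pba_offset=%" PRIu32 " pba_size=%" PRIu32 " bar_size=%" PRIu64 "\n",
               pba_offset, pba_size, bar_size);
        return 0;
    }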
 



Thread overview: 6+ messages
2020-06-05 18:10 [PATCH v1] nvme: allow cmb and pmr emulation on same device Andrzej Jakowski
2020-06-05 18:10 ` [PATCH v1 1/2] nvme: indicate CMB support through controller capabilities register Andrzej Jakowski
2020-06-05 18:10 ` [PATCH v1 2/2] nvme: allow cmb and pmr to be enabled on same device Andrzej Jakowski
2020-06-08  8:08   ` Klaus Jensen
2020-06-08 19:44     ` Andrzej Jakowski [this message]
2020-06-09  9:49       ` Klaus Jensen
