From: "Łukasz Gieryk" <lukasz.gieryk@linux.intel.com>
To: Klaus Jensen <its@irrelevant.dk>
Cc: "Fam Zheng" <fam@euphon.net>, "Kevin Wolf" <kwolf@redhat.com>,
	qemu-block@nongnu.org, qemu-devel@nongnu.org,
	"Lukasz Maniak" <lukasz.maniak@linux.intel.com>,
	"Hanna Reitz" <hreitz@redhat.com>,
	"Stefan Hajnoczi" <stefanha@redhat.com>,
	"Keith Busch" <kbusch@kernel.org>,
	"Philippe Mathieu-Daudé" <philmd@redhat.com>
Subject: Re: [PATCH v2 12/15] hw/nvme: Initialize capability structures for primary/secondary controllers
Date: Thu, 25 Nov 2021 13:02:33 +0100	[thread overview]
Message-ID: <20211125120233.GA27945@lgieryk-VirtualBox> (raw)
In-Reply-To: <20211124142630.GB25350@lgieryk-VirtualBox>

On Wed, Nov 24, 2021 at 03:26:30PM +0100, Łukasz Gieryk wrote:
> On Wed, Nov 24, 2021 at 09:04:31AM +0100, Klaus Jensen wrote:
> > On Nov 16 16:34, Łukasz Gieryk wrote:
> > > With four new properties:
> > >  - sriov_v{i,q}_flexible,
> > >  - sriov_max_v{i,q}_per_vf,
> > > one can configure the number of available flexible resources, as well as
> > > the limits. The primary and secondary controller capability structures
> > > are initialized accordingly.
> > > 
> > > Since the number of available queues (interrupts) now varies between
> > > VF/PF, BAR size calculation is also adjusted.
> > > 
> > > Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
> > > ---
> > >  hw/nvme/ctrl.c       | 138 ++++++++++++++++++++++++++++++++++++++++---
> > >  hw/nvme/nvme.h       |   4 ++
> > >  include/block/nvme.h |   5 ++
> > >  3 files changed, 140 insertions(+), 7 deletions(-)
> > > 
> > > diff --git a/hw/nvme/ctrl.c b/hw/nvme/ctrl.c
> > > index f8f5dfe204..f589ffde59 100644
> > > --- a/hw/nvme/ctrl.c
> > > +++ b/hw/nvme/ctrl.c
> > > @@ -6358,13 +6444,40 @@ static void nvme_init_state(NvmeCtrl *n)
> > >      n->starttime_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL);
> > >      n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1);
> > >  
> > > -    list->numcntl = cpu_to_le16(n->params.sriov_max_vfs);
> > > -    for (i = 0; i < n->params.sriov_max_vfs; i++) {
> > > +    list->numcntl = cpu_to_le16(max_vfs);
> > > +    for (i = 0; i < max_vfs; i++) {
> > >          sctrl = &list->sec[i];
> > >          sctrl->pcid = cpu_to_le16(n->cntlid);
> > >      }
> > >  
> > >      cap->cntlid = cpu_to_le16(n->cntlid);
> > > +    cap->crt = NVME_CRT_VQ | NVME_CRT_VI;
> > > +
> > > +    if (pci_is_vf(&n->parent_obj)) {
> > > +        cap->vqprt = cpu_to_le16(1 + n->conf_ioqpairs);
> > > +    } else {
> > > +        cap->vqprt = cpu_to_le16(1 + n->params.max_ioqpairs -
> > > +                                 n->params.sriov_vq_flexible);
> > > +        cap->vqfrt = cpu_to_le32(n->params.sriov_vq_flexible);
> > > +        cap->vqrfap = cap->vqfrt;
> > > +        cap->vqgran = cpu_to_le16(NVME_VF_RES_GRANULARITY);
> > > +        cap->vqfrsm = n->params.sriov_max_vq_per_vf ?
> > > +                        cpu_to_le16(n->params.sriov_max_vq_per_vf) :
> > > +                        cap->vqprt;
> > 
> > That this defaults to VQPRT doesn't seem right. It should default to
> > VQFRT. Does not make sense to report a maximum number of assignable
> > flexible resources that are bigger than the number of flexible resources
> > available.
> 
> I’ve explained in one of the v1 threads why I think the current default
> is better than VQPRT.
> 
> What you’ve noticed is indeed an inconvenience, but it’s – at least in
> my opinion – part of the design. What matters is the current number of
> unassigned flexible resources. It may be lower than VQFRSM due to
> multiple reasons:
>  1) resources are bound to PF, 
>  2) resources are bound to other VFs,
>  3) resources simply don’t exist (not baked in silicon: VQFRT < VQFRSM).
> 
> If 1) and 2) are allowed to happen, and the user must be aware of them,
> then why shouldn’t 3) be allowed as well?
> 

I’ve done some more thinking, and now I’m happy with neither my version
nor the suggested VQPRT.

How about using this formula instead?:

v{q,i}frsm = sriov_max_v{q,i}_per_vf ? sriov_max_v{q,i}_per_vf :
             floor(sriov_v{q,i}_flexible / sriov_max_vfs)

v{q,i}frsm would then end up with values similar (proportional) to those
reported by an actual SR-IOV-capable device available on the market.


