From: "Michael S. Tsirkin" <mst@redhat.com>
To: Eli Cohen <elic@nvidia.com>
Cc: "virtualization@lists.linux-foundation.org"
	<virtualization@lists.linux-foundation.org>
Subject: Re: RFC: VDPA Interrupt vector distribution
Date: Mon, 30 Jan 2023 06:34:20 -0500
Message-ID: <20230130063247-mutt-send-email-mst@kernel.org>
In-Reply-To: <23806cd9-ffde-778c-5fa5-b95bd1ff0b44@nvidia.com>

On Mon, Jan 30, 2023 at 12:01:23PM +0200, Eli Cohen wrote:
> On 30/01/2023 10:19, Jason Wang wrote:
> > Hi Eli:
> > 
> > On Mon, Jan 23, 2023 at 1:59 PM Eli Cohen <elic@nvidia.com> wrote:
> > > VDPA allows hardware drivers to propagate interrupts from the hardware
> > > directly to the vCPU used by the guest. In a typical implementation, the
> > > hardware driver will assign the interrupt vectors to the virtqueues and report
> > > this information back through the get_vq_irq() callback defined in
> > > struct vdpa_config_ops.
> > > 
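As a minimal sketch of what backing this callback might look like in a parent
driver (the struct my_vdpa_dev layout and field names below are hypothetical;
only the get_vq_irq() signature comes from struct vdpa_config_ops):

    /* Hypothetical per-device state kept by the parent driver. */
    struct my_vdpa_dev {
            struct vdpa_device vdpa;
            int vq_irq[64];         /* Linux IRQ per data VQ, or -1 if none */
    };

    /*
     * Report the IRQ backing virtqueue 'idx' so vhost-vdpa can set up
     * interrupt bypass for it.  A negative return means "no dedicated
     * vector, fall back to the callback mechanism".
     */
    static int my_vdpa_get_vq_irq(struct vdpa_device *vdev, u16 idx)
    {
            struct my_vdpa_dev *dev =
                    container_of(vdev, struct my_vdpa_dev, vdpa);

            if (dev->vq_irq[idx] < 0)
                    return -EINVAL;

            return dev->vq_irq[idx];
    }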
> > > Interrupt vectors can be a scarce, limited resource. For such cases, we can
> > > let the administrator, through the vdpa tool, set the policy defining how to
> > > distribute the available vectors among the data virtqueues.
> > > 
> > > The following policies are proposed:
> > > 
> > > 1. First come, first served. Assign a vector to each data virtqueue by
> > >      virtqueue index. Virtqueues which could not be assigned a dedicated vector
> > >      would rely on the hardware driver to propagate interrupts through the
> > >      callback mechanism.
> > > 
> > >      vdpa dev add name vdpa0 mgmtdev pci/0000:86:00.2 int=all
> > > 
> > >      This is the default mode and works even if "int=all" was not specified.
> > > 
> > > 2. Use round robin distribution so virtqueues can share vectors.
> > >      vdpa dev add name vdpa0 mgmtdev pci/0000:86:00.2 int=all intmode=share
> > > 
> > > 3. Assign vectors to RX virtqueues only.
> > > 3.1 Do not share vectors
> > >       vdpa dev add name vdpa0 mgmtdev pci/0000:86:00.2 int=rx
> > > 3.2 Share vectors
> > >       vdpa dev add name vdpa0 mgmtdev pci/0000:86:00.2 int=rx intmode=share
> > > 
> > > 4. Assign vectors to TX virtqueues only. Can be shared or not, as with RX.
> > > 5. Fail device creation if the requested number of vectors cannot be fulfilled.
> > >      vdpa dev add name vdpa0 mgmtdev pci/0000:86:00.2 max_vq_pairs 8 int=rx intnum=8
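As a rough sketch of how a parent driver might implement these policies when
handing out its pool of vectors (the enum, the distribute_vectors() helper, and
the assumption that even-indexed data virtqueues are RX, as in virtio-net, are
illustrative and not part of any existing API):

    /* Hypothetical policy selector mirroring the proposed "int="/"intmode=" knobs. */
    enum vec_policy { VEC_ALL, VEC_RX_ONLY, VEC_TX_ONLY };

    /*
     * Distribute num_vecs vectors over num_vqs data virtqueues.
     * irq_map[i] receives a vector index, or -1 when VQ i must fall back
     * to the callback mechanism.  With sharing ("intmode=share") the
     * assignment wraps around (round robin) instead of stopping once the
     * vectors run out.
     */
    static void distribute_vectors(int *irq_map, int num_vqs, int num_vecs,
                                   enum vec_policy policy, bool share)
    {
            int vec = 0, i;

            for (i = 0; i < num_vqs; i++)
                    irq_map[i] = -1;        /* default: callback mechanism */

            if (num_vecs <= 0)
                    return;

            for (i = 0; i < num_vqs; i++) {
                    bool is_rx = !(i & 1);  /* virtio-net convention: even index = RX */

                    if (policy == VEC_RX_ONLY && !is_rx)
                            continue;
                    if (policy == VEC_TX_ONLY && is_rx)
                            continue;
                    if (vec == num_vecs) {
                            if (!share)
                                    break;  /* vectors exhausted, rest use callbacks */
                            vec = 0;        /* "intmode=share": wrap around */
                    }
                    irq_map[i] = vec++;
            }
    }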
> > I wonder:
> > 
> > 1) how the administrator can know if there are sufficient resources for
> > one of the above policies.
> There's no established way to know. The idea is to use whatever there is,
> assuming interrupt bypassing is always better than the callback mechanism.
> > 2) how does the administrator know which policy is best, assuming
> > the resources are sufficient? (e.g., vectors to RX only or vectors to TX
> > only)
> I don't think there's a rule of thumb here; the administrator needs to
> experiment to find what works best.
> > 
> > If it requires a vendor-specific way or knowledge, I believe it's
> > better to code it in:
> > 
> > 1) the vDPA parent or
> > 2) the underlying management tool or drivers
> > 
> > Thanks
> 
> I was also wondering about the current mechanism we have. The hardware
> driver reports an IRQ number for each VQ.
> 
> The guest driver sees a virtio PCI device with as many MSI-X vectors as there
> are virtqueues.
> 
> Suppose the hardware driver provided only 5 interrupt vectors while there
> are 16 VQs.
> 
> Which MSI-X vector at the guest really gets posted interrupts, and which ones
> use the callback handled by the hardware driver?
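To make the numbers concrete under the first-come-first-served policy proposed
above (reusing the hypothetical distribute_vectors() helper sketched earlier;
the actual assignment is up to the hardware driver):

    int irq_map[16];

    /* 16 data VQs, only 5 vectors, default policy, no sharing. */
    distribute_vectors(irq_map, 16, 5, VEC_ALL, false);

    /*
     * Result: irq_map = { 0, 1, 2, 3, 4, -1, -1, ..., -1 }
     * VQs 0..4  have a dedicated vector: get_vq_irq() returns a valid IRQ and
     *           their guest MSI-X entries can be driven by posted interrupts.
     * VQs 5..15 have no vector: their interrupts are injected through the
     *           hardware driver's callback path.
     */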

Not sure I understand.
If you get a single interrupt from hardware, callback or posted,
you can only drive one interrupt to the guest, no?


