From: "Michael S. Tsirkin" <mst@redhat.com>
To: Angus Chen <angus.chen@jaguarmicro.com>
Cc: "jasowang@redhat.com" <jasowang@redhat.com>,
"virtualization@lists.linux-foundation.org"
<virtualization@lists.linux-foundation.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config
Date: Fri, 9 Jun 2023 12:12:06 -0400 [thread overview]
Message-ID: <20230609120939-mutt-send-email-mst@kernel.org>
In-Reply-To: <TY2PR06MB34248F29ED36A5DBB4FE0E2E8551A@TY2PR06MB3424.apcprd06.prod.outlook.com>
On Fri, Jun 09, 2023 at 12:42:22AM +0000, Angus Chen wrote:
>
>
> > -----Original Message-----
> > From: Michael S. Tsirkin <mst@redhat.com>
> > Sent: Friday, June 9, 2023 3:45 AM
> > To: Angus Chen <angus.chen@jaguarmicro.com>
> > Cc: jasowang@redhat.com; virtualization@lists.linux-foundation.org;
> > linux-kernel@vger.kernel.org
> > Subject: Re: [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from
> > add_config
> >
> > On Thu, Jun 08, 2023 at 05:01:24PM +0800, Angus Chen wrote:
> > > When adding a virtio_pci vdpa device, check the number of vqs from the
> > > device cap against max_vq_pairs from add_config.
> > > Simply start by failing if the provisioned #qp is not
> > > equal to the one that the hardware has.
> > >
> > > Signed-off-by: Angus Chen <angus.chen@jaguarmicro.com>
> >
> > I am not sure about this one. How does userspace know
> > which values are legal?
> Maybe we can print the device cap in dev_err?
No one reads these except kernel devs. Surely not userspace.
> >
> > If there's no way then maybe we should just cap the value
> > to what device can support but otherwise keep the device
> > working.
> When I use max_vq_pairs to test vp_vdpa, it doesn't work as expected.
> And there is no hint of this.
So things don't work either way, just differently.
Let's come up with a way for userspace to know what's legal
so things can start working.
> >
> > > ---
> > > v1: Use max_vqs from add_config
> > > v2: Just return failure if max_vqs from add_config is not the same as the
> > > device cap. Suggested by Jason.
> > >
> > > drivers/vdpa/virtio_pci/vp_vdpa.c | 35 ++++++++++++++++++-------------
> > > 1 file changed, 21 insertions(+), 14 deletions(-)
> > >
> > > diff --git a/drivers/vdpa/virtio_pci/vp_vdpa.c b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > index 281287fae89f..c1fb6963da12 100644
> > > --- a/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > +++ b/drivers/vdpa/virtio_pci/vp_vdpa.c
> > > @@ -480,32 +480,39 @@ static int vp_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> > > u64 device_features;
> > > int ret, i;
> > >
> > > - vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > > - dev, &vp_vdpa_ops, 1, 1, name, false);
> > > -
> > > - if (IS_ERR(vp_vdpa)) {
> > > - dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > > - return PTR_ERR(vp_vdpa);
> > > + if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP)) {
> > > + if (add_config->net.max_vq_pairs != (v_mdev->max_supported_vqs / 2)) {
> > > + dev_err(&pdev->dev, "max vqs 0x%x should be equal to 0x%x which device has\n",
> > > + add_config->net.max_vq_pairs*2, v_mdev->max_supported_vqs);
> > > + return -EINVAL;
> > > + }
> > > }
> > >
> > > - vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > > -
> > > - vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > > - vp_vdpa->queues = vp_modern_get_num_queues(mdev);
> > > - vp_vdpa->mdev = mdev;
> > > -
> > > device_features = vp_modern_get_features(mdev);
> > > if (add_config->mask & BIT_ULL(VDPA_ATTR_DEV_FEATURES)) {
> > > if (add_config->device_features & ~device_features) {
> > > - ret = -EINVAL;
> > > dev_err(&pdev->dev, "Try to provision features "
> > > "that are not supported by the device: "
> > > "device_features 0x%llx provisioned 0x%llx\n",
> > > device_features, add_config->device_features);
> > > - goto err;
> > > + return -EINVAL;
> > > }
> > > device_features = add_config->device_features;
> > > }
> > > +
> > > + vp_vdpa = vdpa_alloc_device(struct vp_vdpa, vdpa,
> > > + dev, &vp_vdpa_ops, 1, 1, name, false);
> > > +
> > > + if (IS_ERR(vp_vdpa)) {
> > > + dev_err(dev, "vp_vdpa: Failed to allocate vDPA structure\n");
> > > + return PTR_ERR(vp_vdpa);
> > > + }
> > > +
> > > + vp_vdpa_mgtdev->vp_vdpa = vp_vdpa;
> > > +
> > > + vp_vdpa->vdpa.dma_dev = &pdev->dev;
> > > + vp_vdpa->queues = v_mdev->max_supported_vqs;
> > > + vp_vdpa->mdev = mdev;
> > > vp_vdpa->device_features = device_features;
> > >
> > > ret = devm_add_action_or_reset(dev, vp_vdpa_free_irq_vectors, pdev);
> > > --
> > > 2.25.1
>
Thread overview: 10+ messages
2023-06-08 9:01 [PATCH v2] vdpa/vp_vdpa: Check queue number of vdpa device from add_config Angus Chen
2023-06-08 19:44 ` Michael S. Tsirkin
2023-06-09 0:42 ` Angus Chen
2023-06-09 16:12 ` Michael S. Tsirkin [this message]
2023-06-09 2:30 ` Jason Wang
2023-06-26 2:30 ` Jason Wang
2023-06-26 2:42 ` Angus Chen
2023-06-26 2:51 ` Jason Wang
2023-06-26 3:02 ` Angus Chen
2023-06-26 3:08 ` Jason Wang