From: Si-Wei Liu <si-wei.liu@oracle.com>
To: trix@redhat.com, mst@redhat.com, jasowang@redhat.com,
nathan@kernel.org, ndesaulniers@google.com, elic@nvidia.com,
parav@nvidia.com, xieyongji@bytedance.com
Cc: llvm@lists.linux.dev, linux-kernel@vger.kernel.org,
virtualization@lists.linux-foundation.org
Subject: Re: [PATCH] vdpa/mlx5: fix error handling in mlx5_vdpa_dev_add()
Date: Fri, 7 Jan 2022 17:49:42 -0800 [thread overview]
Message-ID: <3740be2d-192f-aeaf-02fe-e309cdb278dc@oracle.com> (raw)
In-Reply-To: <20220107211352.3940570-1-trix@redhat.com>
The proposed fix looks fine, but I would still hope to revert this series
if at all possible, as the review hadn't been done yet.
On 1/7/2022 1:13 PM, trix@redhat.com wrote:
> From: Tom Rix <trix@redhat.com>
>
> Clang build fails with
> mlx5_vnet.c:2574:6: error: variable 'mvdev' is used uninitialized whenever
> 'if' condition is true
> if (!ndev->vqs || !ndev->event_cbs) {
> ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> mlx5_vnet.c:2660:14: note: uninitialized use occurs here
> put_device(&mvdev->vdev.dev);
> ^~~~~
> This is because mvdev is set after trying to allocate ndev->vqs and
> ndev->event_cbs. So move the allocations to after mvdev is set, but
> before the arrays are used in init_mvqs().
>
> Fixes: 7620d51af29a ("vdpa/mlx5: Support configuring max data virtqueue")
> Signed-off-by: Tom Rix <trix@redhat.com>
Reviewed-by: Si-Wei Liu <si-wei.liu@oracle.com>
> ---
> drivers/vdpa/mlx5/net/mlx5_vnet.c | 10 ++++++----
> 1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> index b564c70475815..37220f6db7ad7 100644
> --- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
> +++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
> @@ -2569,16 +2569,18 @@ static int mlx5_vdpa_dev_add(struct vdpa_mgmt_dev *v_mdev, const char *name,
> if (IS_ERR(ndev))
> return PTR_ERR(ndev);
>
> + ndev->mvdev.mlx_features = mgtdev->mgtdev.supported_features;
> + ndev->mvdev.max_vqs = max_vqs;
> + mvdev = &ndev->mvdev;
> + mvdev->mdev = mdev;
> +
> ndev->vqs = kcalloc(max_vqs, sizeof(*ndev->vqs), GFP_KERNEL);
> ndev->event_cbs = kcalloc(max_vqs + 1, sizeof(*ndev->event_cbs), GFP_KERNEL);
> if (!ndev->vqs || !ndev->event_cbs) {
> err = -ENOMEM;
> goto err_alloc;
> }
> - ndev->mvdev.mlx_features = mgtdev->mgtdev.supported_features;
> - ndev->mvdev.max_vqs = max_vqs;
> - mvdev = &ndev->mvdev;
> - mvdev->mdev = mdev;
> +
> init_mvqs(ndev);
> mutex_init(&ndev->reslock);
> config = &ndev->config;
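
For context, the ordering issue the patch fixes can be reduced to a tiny
standalone sketch (hypothetical, simplified names; this is not the driver
code): the back-pointer must be assigned before any allocation that can
fail, so the error path never uses it uninitialized.

```c
/* Minimal sketch of the ordering fix, assuming simplified stand-in
 * structs (struct my_mvdev / struct my_ndev are illustrative only). */
#include <stdlib.h>

struct my_mvdev { int mdev_set; };
struct my_ndev  { struct my_mvdev mvdev; int *vqs; int *event_cbs; };

/* Returns 0 on success. On failure, *out has already been set, so the
 * caller's cleanup path (the analogue of put_device()) stays safe. */
static int my_dev_add(struct my_ndev *ndev, struct my_mvdev **out,
		      int max_vqs)
{
	struct my_mvdev *mvdev;

	/* As in the fix: initialize mvdev before anything can fail. */
	mvdev = &ndev->mvdev;
	mvdev->mdev_set = 1;
	*out = mvdev;

	ndev->vqs = calloc(max_vqs, sizeof(*ndev->vqs));
	ndev->event_cbs = calloc(max_vqs + 1, sizeof(*ndev->event_cbs));
	if (!ndev->vqs || !ndev->event_cbs) {
		free(ndev->vqs);
		free(ndev->event_cbs);
		return -1;	/* *out is still valid for cleanup */
	}
	return 0;
}
```

The point of the reordering is exactly this: the goto err_alloc path in
mlx5_vdpa_dev_add() runs put_device(&mvdev->vdev.dev), so mvdev has to be
assigned before the first failure point, not after.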