* vhost-vdpa potential fd leak (coverity issue)
@ 2025-07-10 15:46 Peter Maydell
  2025-07-10 16:40 ` Michael S. Tsirkin
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Maydell @ 2025-07-10 15:46 UTC (permalink / raw)
  To: QEMU Developers
  Cc: Jason Wang, Eugenio Pérez, Michael S. Tsirkin,
	Stefano Garzarella

Hi; Coverity complains about a potential file descriptor leak in
net/vhost-vdpa.c:net_init_vhost_vdpa(). This is CID 1490785.

Specifically, in this function we do:
    queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
                                                 &has_cvq, errp);
    if (queue_pairs < 0) {
        [exit with failure]
    }
    ...
    ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
    for (i = 0; i < queue_pairs; i++) {
       ...
       ncs[i] = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
       ...
    }
    if (has_cvq) {
       ...
       nc = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
       ...
    }

So if queue_pairs is zero we will g_malloc0(0), which seems dubious;
and if queue_pairs is zero and has_cvq is false then the init
function will return success without ever calling net_vhost_vdpa_init(),
leaking the vdpa_device_fd.

My guess is that queue_pairs == 0 should be an error, or possibly
that (queue_pairs == 0 && !has_cvq) should be an error.
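
If the former, I'd expect the fix to look something like this
(untested sketch; the error message is just a placeholder):

    if (queue_pairs <= 0) {
        if (queue_pairs == 0) {
            /* the helper presumably sets errp only for negative returns */
            error_setg(errp, "vhost-vdpa: device provides no queue pairs");
        }
        qemu_close(vdpa_device_fd);
        /* don't return 0 here: the caller would take it as success */
        return queue_pairs < 0 ? queue_pairs : -1;
    }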

Could somebody who knows more about this code tell me which, and
perhaps produce a patch to make it handle that case?

Q: should this file be listed in the "vhost" subcategory of MAINTAINERS?
At the moment it only gets caught by "Network device backends".

thanks
-- PMM



* Re: vhost-vdpa potential fd leak (coverity issue)
  2025-07-10 15:46 vhost-vdpa potential fd leak (coverity issue) Peter Maydell
@ 2025-07-10 16:40 ` Michael S. Tsirkin
  2025-07-14  9:18   ` Stefano Garzarella
  0 siblings, 1 reply; 7+ messages in thread
From: Michael S. Tsirkin @ 2025-07-10 16:40 UTC (permalink / raw)
  To: Peter Maydell
  Cc: QEMU Developers, Jason Wang, Eugenio Pérez,
	Stefano Garzarella

On Thu, Jul 10, 2025 at 04:46:34PM +0100, Peter Maydell wrote:
> Hi; Coverity complains about a potential file descriptor leak in
> net/vhost-vdpa.c:net_init_vhost_vdpa(). This is CID 1490785.
> 
> Specifically, in this function we do:
>     queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
>                                                  &has_cvq, errp);
>     if (queue_pairs < 0) {
>         [exit with failure]
>     }
>     ...
>     ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
>     for (i = 0; i < queue_pairs; i++) {
>        ...
>        ncs[i] = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
>        ...
>     }
>     if (has_cvq) {
>        ...
>        nc = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
>        ...
>     }
> 
> So if queue_pairs is zero we will g_malloc0(0), which seems dubious;
> and if queue_pairs is zero and has_cvq is false then the init
> function will return success without ever calling net_vhost_vdpa_init(),
> leaking the vdpa_device_fd.
> 
> My guess is that queue_pairs == 0 should be an error, or possibly
> that (queue_pairs == 0 && !has_cvq) should be an error.
> 
> Could somebody who knows more about this code tell me which, and
> perhaps produce a patch to make it handle that case?

Historically, queue_pairs == 0 was always the same as 1, IIRC.
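
If we want to keep that behaviour rather than reject it, the init
code could just normalize the value first (two-line sketch, untested):

    if (queue_pairs == 0) {
        queue_pairs = 1;    /* historical behaviour: 0 means 1 queue pair */
    }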

> Q: should this file be listed in the "vhost" subcategory of MAINTAINERS?
> At the moment it only gets caught by "Network device backends".
> 
> thanks
> -- PMM


I think so.




* Re: vhost-vdpa potential fd leak (coverity issue)
  2025-07-10 16:40 ` Michael S. Tsirkin
@ 2025-07-14  9:18   ` Stefano Garzarella
  2025-07-14  9:48     ` Peter Maydell
                       ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Stefano Garzarella @ 2025-07-14  9:18 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Maydell, QEMU Developers, Jason Wang, Eugenio Pérez

On Thu, Jul 10, 2025 at 12:40:43PM -0400, Michael S. Tsirkin wrote:
>On Thu, Jul 10, 2025 at 04:46:34PM +0100, Peter Maydell wrote:
>> Hi; Coverity complains about a potential file descriptor leak in
>> net/vhost-vdpa.c:net_init_vhost_vdpa(). This is CID 1490785.
>>
>> Specifically, in this function we do:
>>     queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
>>                                                  &has_cvq, errp);
>>     if (queue_pairs < 0) {
>>         [exit with failure]
>>     }
>>     ...
>>     ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
>>     for (i = 0; i < queue_pairs; i++) {
>>        ...
>>        ncs[i] = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
>>        ...
>>     }
>>     if (has_cvq) {
>>        ...
>>        nc = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
>>        ...
>>     }
>>
>> So if queue_pairs is zero we will g_malloc0(0), which seems dubious;
>> and if queue_pairs is zero and has_cvq is false then the init
>> function will return success without ever calling net_vhost_vdpa_init(),
>> leaking the vdpa_device_fd.
>>
>> My guess is that queue_pairs == 0 should be an error, or possibly
>> that (queue_pairs == 0 && !has_cvq) should be an error.
>>
>> Could somebody who knows more about this code tell me which, and
>> perhaps produce a patch to make it handle that case?
>
>Historically, queue_pairs == 0 was always the same as 1, IIRC.

Yep, and looking at vhost_vdpa_get_max_queue_pairs(): it returns 1 if
VIRTIO_NET_F_MQ is not negotiated, or otherwise whatever the device
exposes in the `max_virtqueue_pairs` field of the config space.

In the spec we have:
     The device MUST set max_virtqueue_pairs to between 1 and 0x8000 
     inclusive, if it offers VIRTIO_NET_F_MQ.
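
In other words the helper boils down to something like this
(paraphrased sketch, not the exact code; `features` and `config` stand
for the device feature bits and the virtio-net config space):

    if (!(features & (1ULL << VIRTIO_NET_F_MQ))) {
        return 1;    /* no multiqueue: a single queue pair */
    }
    /* MQ offered: report what the device puts in the config space.
     * The spec bounds this to [1, 0x8000], but a buggy device could
     * still expose 0 here. */
    return le16_to_cpu(config->max_virtqueue_pairs);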

So, IMHO we can just change the error check in this way:
diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
index 58d738945d..8f39e5a983 100644
--- a/net/vhost-vdpa.c
+++ b/net/vhost-vdpa.c
@@ -1813,7 +1813,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
  
      queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
                                                   &has_cvq, errp);
-    if (queue_pairs < 0) {
+    if (queue_pairs <= 0) {
          qemu_close(vdpa_device_fd);
          return queue_pairs;
      }

I'll send a patch if no one complains.

>
>> Q: should this file be listed in the "vhost" subcategory of 
>> MAINTAINERS?
>> At the moment it only gets caught by "Network device backends".

Maybe yes, but it's really virtio-net specific.
@Michael WDYT?

Thanks,
Stefano




* Re: vhost-vdpa potential fd leak (coverity issue)
  2025-07-14  9:18   ` Stefano Garzarella
@ 2025-07-14  9:48     ` Peter Maydell
  2025-07-14 10:33       ` Stefano Garzarella
  2025-07-14  9:57     ` Michael S. Tsirkin
  2025-07-14 10:20     ` Stefano Garzarella
  2 siblings, 1 reply; 7+ messages in thread
From: Peter Maydell @ 2025-07-14  9:48 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Michael S. Tsirkin, QEMU Developers, Jason Wang,
	Eugenio Pérez

On Mon, 14 Jul 2025 at 10:19, Stefano Garzarella <sgarzare@redhat.com> wrote:
> On Thu, Jul 10, 2025 at 12:40:43PM -0400, Michael S. Tsirkin wrote:
> >> Q: should this file be listed in the "vhost" subcategory of
> >> MAINTAINERS?
> >> At the moment it only gets caught by "Network device backends".
>
> Maybe yes, but it's really virtio-net specific.
> @Michael WDYT?

You could create a new virtio-net subsection if that works
better (and we could move the two virtio-net* files
currently listed under "Network devices" into it).
The aim here should really be to ensure that the people
interested in reviewing/queueing patches get cc'd.
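
Something like this, say (just a sketch: whoever picks this up should
fill in the actual maintainers and double-check the file list):

    virtio-net
    M: <maintainer(s) to be decided>
    S: Supported
    F: hw/net/virtio-net.c
    F: include/hw/virtio/virtio-net.h
    F: net/vhost-vdpa.c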

-- PMM



* Re: vhost-vdpa potential fd leak (coverity issue)
  2025-07-14  9:18   ` Stefano Garzarella
  2025-07-14  9:48     ` Peter Maydell
@ 2025-07-14  9:57     ` Michael S. Tsirkin
  2025-07-14 10:20     ` Stefano Garzarella
  2 siblings, 0 replies; 7+ messages in thread
From: Michael S. Tsirkin @ 2025-07-14  9:57 UTC (permalink / raw)
  To: Stefano Garzarella
  Cc: Peter Maydell, QEMU Developers, Jason Wang, Eugenio Pérez

On Mon, Jul 14, 2025 at 11:18:51AM +0200, Stefano Garzarella wrote:
> On Thu, Jul 10, 2025 at 12:40:43PM -0400, Michael S. Tsirkin wrote:
> > On Thu, Jul 10, 2025 at 04:46:34PM +0100, Peter Maydell wrote:
> > > Hi; Coverity complains about a potential file descriptor leak in
> > > net/vhost-vdpa.c:net_init_vhost_vdpa(). This is CID 1490785.
> > > 
> > > Specifically, in this function we do:
> > >     queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
> > >                                                  &has_cvq, errp);
> > >     if (queue_pairs < 0) {
> > >         [exit with failure]
> > >     }
> > >     ...
> > >     ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
> > >     for (i = 0; i < queue_pairs; i++) {
> > >        ...
> > >        ncs[i] = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
> > >        ...
> > >     }
> > >     if (has_cvq) {
> > >        ...
> > >        nc = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
> > >        ...
> > >     }
> > > 
> > > So if queue_pairs is zero we will g_malloc0(0), which seems dubious;
> > > and if queue_pairs is zero and has_cvq is false then the init
> > > function will return success without ever calling net_vhost_vdpa_init(),
> > > leaking the vdpa_device_fd.
> > > 
> > > My guess is that queue_pairs == 0 should be an error, or possibly
> > > that (queue_pairs == 0 && !has_cvq) should be an error.
> > > 
> > > Could somebody who knows more about this code tell me which, and
> > > perhaps produce a patch to make it handle that case?
> > 
> > Historically, queue_pairs == 0 was always the same as 1, IIRC.
> 
> Yep, and looking at vhost_vdpa_get_max_queue_pairs(): it returns 1 if
> VIRTIO_NET_F_MQ is not negotiated, or otherwise whatever the device
> exposes in the `max_virtqueue_pairs` field of the config space.
> 
> In the spec we have:
>     The device MUST set max_virtqueue_pairs to between 1 and 0x8000
> inclusive, if it offers VIRTIO_NET_F_MQ.
> 
> So, IMHO we can just change the error check in this way:
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 58d738945d..8f39e5a983 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -1813,7 +1813,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>      queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
>                                                   &has_cvq, errp);
> -    if (queue_pairs < 0) {
> +    if (queue_pairs <= 0) {
>          qemu_close(vdpa_device_fd);
>          return queue_pairs;
>      }
> 
> I'll send a patch if no one complains.
> 
> > 
> > > Q: should this file be listed in the "vhost" subcategory of
> > > MAINTAINERS?
> > > At the moment it only gets caught by "Network device backends".
> 
> Maybe yes, but it's really virtio-net specific.
> @Michael WDYT?
> 
> Thanks,
> Stefano


The "vhost" section kind of includes all vhost things. Not an issue
if it's in both places, I think.




* Re: vhost-vdpa potential fd leak (coverity issue)
  2025-07-14  9:18   ` Stefano Garzarella
  2025-07-14  9:48     ` Peter Maydell
  2025-07-14  9:57     ` Michael S. Tsirkin
@ 2025-07-14 10:20     ` Stefano Garzarella
  2 siblings, 0 replies; 7+ messages in thread
From: Stefano Garzarella @ 2025-07-14 10:20 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Peter Maydell, QEMU Developers, Jason Wang, Eugenio Pérez

On Mon, 14 Jul 2025 at 11:18, Stefano Garzarella <sgarzare@redhat.com> wrote:
>
> On Thu, Jul 10, 2025 at 12:40:43PM -0400, Michael S. Tsirkin wrote:
> >On Thu, Jul 10, 2025 at 04:46:34PM +0100, Peter Maydell wrote:
> >> Hi; Coverity complains about a potential file descriptor leak in
> >> net/vhost-vdpa.c:net_init_vhost_vdpa(). This is CID 1490785.
> >>
> >> Specifically, in this function we do:
> >>     queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
> >>                                                  &has_cvq, errp);
> >>     if (queue_pairs < 0) {
> >>         [exit with failure]
> >>     }
> >>     ...
> >>     ncs = g_malloc0(sizeof(*ncs) * queue_pairs);
> >>     for (i = 0; i < queue_pairs; i++) {
> >>        ...
> >>        ncs[i] = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
> >>        ...
> >>     }
> >>     if (has_cvq) {
> >>        ...
> >>        nc = net_vhost_vdpa_init(..., vdpa_device_fd, ...)
> >>        ...
> >>     }
> >>
> >> So if queue_pairs is zero we will g_malloc0(0), which seems dubious;
> >> and if queue_pairs is zero and has_cvq is false then the init
> >> function will return success without ever calling net_vhost_vdpa_init(),
> >> leaking the vdpa_device_fd.
> >>
> >> My guess is that queue_pairs == 0 should be an error, or possibly
> >> that (queue_pairs == 0 && !has_cvq) should be an error.
> >>
> >> Could somebody who knows more about this code tell me which, and
> >> perhaps produce a patch to make it handle that case?
> >
> >Historically, queue_pairs == 0 was always the same as 1, IIRC.
>
> Yep, and looking at vhost_vdpa_get_max_queue_pairs(): it returns 1 if
> VIRTIO_NET_F_MQ is not negotiated, or otherwise whatever the device
> exposes in the `max_virtqueue_pairs` field of the config space.
>
> In the spec we have:
>      The device MUST set max_virtqueue_pairs to between 1 and 0x8000
>      inclusive, if it offers VIRTIO_NET_F_MQ.
>
> So, IMHO we can just change the error check in this way:
> diff --git a/net/vhost-vdpa.c b/net/vhost-vdpa.c
> index 58d738945d..8f39e5a983 100644
> --- a/net/vhost-vdpa.c
> +++ b/net/vhost-vdpa.c
> @@ -1813,7 +1813,7 @@ int net_init_vhost_vdpa(const Netdev *netdev, const char *name,
>
>       queue_pairs = vhost_vdpa_get_max_queue_pairs(vdpa_device_fd, features,
>                                                    &has_cvq, errp);
> -    if (queue_pairs < 0) {
> +    if (queue_pairs <= 0) {
>           qemu_close(vdpa_device_fd);
>           return queue_pairs;
>       }
>
> I'll send a patch if no one complains.

Patch sent: https://lore.kernel.org/qemu-devel/20250714101156.30024-1-sgarzare@redhat.com

Thanks,
Stefano




* Re: vhost-vdpa potential fd leak (coverity issue)
  2025-07-14  9:48     ` Peter Maydell
@ 2025-07-14 10:33       ` Stefano Garzarella
  0 siblings, 0 replies; 7+ messages in thread
From: Stefano Garzarella @ 2025-07-14 10:33 UTC (permalink / raw)
  To: Peter Maydell
  Cc: Michael S. Tsirkin, QEMU Developers, Jason Wang,
	Eugenio Pérez

On Mon, 14 Jul 2025 at 11:51, Peter Maydell <peter.maydell@linaro.org> wrote:
>
> On Mon, 14 Jul 2025 at 10:19, Stefano Garzarella <sgarzare@redhat.com> wrote:
> > On Thu, Jul 10, 2025 at 12:40:43PM -0400, Michael S. Tsirkin wrote:
> > >> Q: should this file be listed in the "vhost" subcategory of
> > >> MAINTAINERS?
> > >> At the moment it only gets caught by "Network device backends".
> >
> > Maybe yes, but it's really virtio-net specific.
> > @Michael WDYT?
>
> You could create a new virtio-net subsection if that works
> better (and we could move the two virtio-net* files
> currently listed under "Network devices" into it).
> The aim here should really be to ensure that the people
> interested in reviewing/queueing patches get cc'd.

I've just added the net/vhost* files under the vhost section:
https://lore.kernel.org/qemu-devel/20250714102626.34431-1-sgarzare@redhat.com/T/#u

Not sure about a virtio-net subsection; I'm not really involved
there, so I'd leave that to someone else.

Thanks,
Stefano



