From: Alex Williamson <alex.williamson@redhat.com>
To: Nicolin Chen <nicolinc@nvidia.com>
Cc: Cornelia Huck <cohuck@redhat.com>, <qemu-devel@nongnu.org>,
	<kwankhede@nvidia.com>, <avihaih@nvidia.com>, <shayd@nvidia.com>,
	<jgg@nvidia.com>
Subject: Re: [PATCH] vfio/common: Do not g_free in vfio_get_iommu_info
Date: Wed, 14 Sep 2022 12:10:29 -0600
Message-ID: <20220914121029.1a693e5d.alex.williamson@redhat.com>
In-Reply-To: <Yx+b0t20wtneTry+@Asurada-Nvidia>

On Mon, 12 Sep 2022 13:51:30 -0700
Nicolin Chen <nicolinc@nvidia.com> wrote:

> On Mon, Sep 12, 2022 at 02:38:52PM +0200, Cornelia Huck wrote:
> >
> > On Fri, Sep 09 2022, Nicolin Chen <nicolinc@nvidia.com> wrote:
> >   
> > > Its caller vfio_connect_container() assigns a default value
> > > to info->iova_pgsizes even when vfio_get_iommu_info() fails,
> > > which results in a segmentation fault whenever the
> > > VFIO_IOMMU_GET_INFO ioctl errors out.
> > >
> > > Since the caller already g_frees info, drop the g_free from the
> > > rollback path and add a comment to highlight it.
> > 
> > There are basically two ways to fix this:
> > 
> > - return *info in any case, even on error
> > - free *info on error, and make sure that the caller doesn't try to
> >   access *info if the function returned !0
> > 
> > The problem with the first option is that the caller will access invalid
> > information if it neglects to check the return code, and that might lead
> > to not-that-obvious errors; in the second case, a broken caller would at
> > least fail quickly with a segfault. The current code is easier to fix
> > with the first option.
> > 
> > I think I'd prefer the second option; but obviously maintainer's choice.  
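
To make the two contracts concrete, here is a minimal sketch of both
error-handling styles. This is hypothetical, simplified code: the real
vfio_get_iommu_info() also grows the buffer and retries when the kernel
reports a larger argsz, and the function names below are illustrative
only:

#include <errno.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>
#include <glib.h>

/* Option 1: *info stays valid even on error; the caller always frees it. */
static int get_info_keep_on_error(int fd, struct vfio_iommu_type1_info **info)
{
    *info = g_new0(struct vfio_iommu_type1_info, 1);
    (*info)->argsz = sizeof(**info);

    if (ioctl(fd, VFIO_IOMMU_GET_INFO, *info)) {
        return -errno;  /* caller can still read the zero-filled struct */
    }
    return 0;
}

/*
 * Option 2: *info is freed and cleared on error; a caller that ignores
 * the return code dereferences NULL and fails fast.
 */
static int get_info_free_on_error(int fd, struct vfio_iommu_type1_info **info)
{
    *info = g_new0(struct vfio_iommu_type1_info, 1);
    (*info)->argsz = sizeof(**info);

    if (ioctl(fd, VFIO_IOMMU_GET_INFO, *info)) {
        g_free(*info);
        *info = NULL;
        return -errno;
    }
    return 0;
}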
> 
> The caller does check the rc every time, so I made the smaller fix
> (the first option). Attaching the git diff for the second one.
> 
> Alex, please let me know which one you prefer. Thanks!
> 
> diff --git a/hw/vfio/common.c b/hw/vfio/common.c
> index 51b2e05c76..74431411ab 100644
> --- a/hw/vfio/common.c
> +++ b/hw/vfio/common.c
[snip]

I think we can do better than that; I don't think we need to maintain
the existing grouping, and that FIXME comment is outdated and has
drifted from the relevant line of code.  What about:

diff --git a/hw/vfio/common.c b/hw/vfio/common.c
index ace9562a9ba1..8d8c54d59083 100644
--- a/hw/vfio/common.c
+++ b/hw/vfio/common.c
@@ -2111,29 +2111,31 @@ static int vfio_connect_container(VFIOGroup *group, AddressSpace *as,
     {
         struct vfio_iommu_type1_info *info;
 
-        /*
-         * FIXME: This assumes that a Type1 IOMMU can map any 64-bit
-         * IOVA whatsoever.  That's not actually true, but the current
-         * kernel interface doesn't tell us what it can map, and the
-         * existing Type1 IOMMUs generally support any IOVA we're
-         * going to actually try in practice.
+        /*
+         * Set up defaults for container pgsizes and dma_max_mappings if not
+         * provided by the kernel below.
          */
-        ret = vfio_get_iommu_info(container, &info);
-
-        if (ret || !(info->flags & VFIO_IOMMU_INFO_PGSIZES)) {
-            /* Assume 4k IOVA page size */
-            info->iova_pgsizes = 4096;
-        }
-        vfio_host_win_add(container, 0, (hwaddr)-1, info->iova_pgsizes);
-        container->pgsizes = info->iova_pgsizes;
-
-        /* The default in the kernel ("dma_entry_limit") is 65535. */
+        container->pgsizes = 4096;
         container->dma_max_mappings = 65535;
+
+        ret = vfio_get_iommu_info(container, &info);
         if (!ret) {
+            if (info->flags & VFIO_IOMMU_INFO_PGSIZES) {
+                container->pgsizes = info->iova_pgsizes;
+            }
+
             vfio_get_info_dma_avail(info, &container->dma_max_mappings);
             vfio_get_iommu_info_migration(container, info);
+            g_free(info);
         }
-        g_free(info);
+
+        /*
+         * FIXME: We should parse VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE
+         * information to get the actual window extent rather than assume
+         * a 64-bit IOVA address space.
+         */
+        vfio_host_win_add(container, 0, (hwaddr)-1, container->pgsizes);
+
         break;
     }
     case VFIO_SPAPR_TCE_v2_IOMMU:
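
As a sketch of what parsing VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE could
look like -- hypothetical code modeled on the linux/vfio.h uapi, not
something that exists in hw/vfio/common.c today; vfio_find_info_cap()
is an invented helper name:

#include <stdint.h>
#include <linux/vfio.h>

/*
 * Walk the capability chain: each header's 'next' field is an offset
 * from the start of the info struct, and 0 terminates the chain.
 */
static struct vfio_info_cap_header *
vfio_find_info_cap(struct vfio_iommu_type1_info *info, uint16_t id)
{
    uint32_t offset;

    if (!(info->flags & VFIO_IOMMU_INFO_CAPS)) {
        return NULL;
    }

    for (offset = info->cap_offset; offset; ) {
        struct vfio_info_cap_header *hdr = (void *)info + offset;

        if (hdr->id == id) {
            return hdr;
        }
        offset = hdr->next;
    }
    return NULL;
}

The success branch above could then register one host window per
reported range, before freeing info, instead of assuming a 64-bit IOVA
space:

    struct vfio_info_cap_header *hdr =
        vfio_find_info_cap(info, VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE);

    if (hdr) {
        struct vfio_iommu_type1_info_cap_iova_range *cap = (void *)hdr;
        uint32_t i;

        for (i = 0; i < cap->nr_iovas; i++) {
            vfio_host_win_add(container, cap->iova_ranges[i].start,
                              cap->iova_ranges[i].end, container->pgsizes);
        }
    }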


