From: "Michael S. Tsirkin" <mst@redhat.com>
To: "Longpeng (Mike,
	Cloud Infrastructure Service Product Dept.)"
	<longpeng2@huawei.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	huangzhichao@huawei.com, yechuan@huawei.com
Subject: Re: [PATCH] vhost-vdpa: cleanup memory maps when closing vdpa fds
Date: Wed, 15 Feb 2023 06:45:48 -0500	[thread overview]
Message-ID: <20230215064454-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <af95c38d-fdca-aef0-55ae-bbb0baee6029@huawei.com>

On Wed, Feb 15, 2023 at 01:15:55PM +0800, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) wrote:
> 
> 
> > On 2023/2/15 10:56, Jason Wang wrote:
> > On Wed, Feb 15, 2023 at 10:49 AM Longpeng (Mike, Cloud Infrastructure
> > Service Product Dept.) <longpeng2@huawei.com> wrote:
> > > 
> > > 
> > > 
> > > > On 2023/2/15 10:00, Jason Wang wrote:
> > > > On Tue, Feb 14, 2023 at 2:28 PM Longpeng (Mike, Cloud Infrastructure
> > > > Service Product Dept.) <longpeng2@huawei.com> wrote:
> > > > > 
> > > > > 
> > > > > 
> > > > > > On 2023/2/14 14:16, Jason Wang wrote:
> > > > > > 
> > > > > > On 2023/1/31 22:53, Longpeng(Mike) wrote:
> > > > > > > From: Longpeng <longpeng2@huawei.com>
> > > > > > > 
> > > > > > > We must clean up all memory maps when closing the vdpa fds, otherwise
> > > > > > > some critical resources (e.g. memory, iommu maps) will be leaked if the
> > > > > > > userspace exits unexpectedly (e.g. kill -9).
> > > > > > 
> > > > > > 
> > > > > > Sounds like a bug of the kernel, should we fix there?
> > > > > > 
> > > > > 
> > > > > For example, the iommu map is set up when QEMU calls the VHOST_IOTLB_UPDATE
> > > > > ioctl and it'll be freed when QEMU calls the VHOST_IOTLB_INVALIDATE ioctl.
> > > > > 
> > > > > So maybe releasing these resources in the vdpa framework in the kernel is a
> > > > > suitable choice?
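
For reference, a minimal userspace sketch of that flow (hypothetical helper
names; assumes the device negotiated VHOST_BACKEND_F_IOTLB_MSG_V2, so IOTLB
messages are written to the vhost-vdpa fd as struct vhost_msg_v2):

    #include <string.h>
    #include <unistd.h>
    #include <linux/vhost.h>   /* pulls in linux/vhost_types.h */

    /* Map IOVA range [iova, iova + size) to process address uaddr.
     * The kernel pins the pages and iommu_map()s the range. */
    static int iotlb_update(int vdpa_fd, __u64 iova, __u64 size, __u64 uaddr)
    {
        struct vhost_msg_v2 msg;

        memset(&msg, 0, sizeof(msg));
        msg.type = VHOST_IOTLB_MSG_V2;
        msg.iotlb.iova  = iova;
        msg.iotlb.size  = size;
        msg.iotlb.uaddr = uaddr;
        msg.iotlb.perm  = VHOST_ACCESS_RW;
        msg.iotlb.type  = VHOST_IOTLB_UPDATE;
        return write(vdpa_fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
    }

    /* Tear the mapping down: the kernel unpins the pages and iommu_unmap()s
     * the range. If the process is killed before this runs, only the release
     * path can reclaim those resources. */
    static int iotlb_invalidate(int vdpa_fd, __u64 iova, __u64 size)
    {
        struct vhost_msg_v2 msg;

        memset(&msg, 0, sizeof(msg));
        msg.type = VHOST_IOTLB_MSG_V2;
        msg.iotlb.iova = iova;
        msg.iotlb.size = size;
        msg.iotlb.type = VHOST_IOTLB_INVALIDATE;
        return write(vdpa_fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
    }
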
> > > > 
> > > > I think I need to understand what "resources" means here:
> > > > 
> > > > For iommu mapping, it should be freed by vhost_vdpa_free_domain() in
> > > > vhost_vdpa_release()?
> > > > 
> > > 
> > > Please consider the following lifecycle of the vdpa device:
> > > 
> > > 1. vhost_vdpa_open
> > >       vhost_vdpa_alloc_domain
> > > 
> > > 2. vhost_vdpa_pa_map
> > >       pin_user_pages
> > >       vhost_vdpa_map
> > >         iommu_map
> > > 
> > > 3. kill QEMU
> > > 
> > > 4. vhost_vdpa_release
> > >       vhost_vdpa_free_domain
> > > 
> > > In this case, we have no opportunity to invoke unpin_user_pages or
> > > iommu_unmap to free the memory.
> > 
> > We do:
> > 
> > vhost_vdpa_cleanup()
> >      vhost_vdpa_remove_as()
> >          vhost_vdpa_iotlb_unmap()
> >              vhost_vdpa_pa_unmap()
> >                  unpin_user_pages()
> >                  vhost_vdpa_general_unmap()
> >                      iommu_unmap()
> > ?
> > 
> Oh, my codebase is linux-6.2-rc2; commit c070c1912a8 ("vhost-vdpa: fix
> an iotlb memory leak") already fixed this bug in linux-6.2-rc3.
> 
> > Btw, it looks like we should call vhost_vdpa_free_domain() *after*
> > vhost_vdpa_cleanup() otherwise it's a UAF?
> > 
> I think so. v->domain is set to NULL in vhost_vdpa_free_domain(), so it
> seems it would trigger a null-pointer access in my case.


OK I'll drop this patch for now then?

> > Thanks
> > 
> > > 
> > > > static int vhost_vdpa_release(struct inode *inode, struct file *filep)
> > > > {
> > > >           struct vhost_vdpa *v = filep->private_data;
> > > >           struct vhost_dev *d = &v->vdev;
> > > > 
> > > >           mutex_lock(&d->mutex);
> > > >           filep->private_data = NULL;
> > > >           vhost_vdpa_clean_irq(v);
> > > >           vhost_vdpa_reset(v);
> > > >           vhost_dev_stop(&v->vdev);
> > > >           vhost_vdpa_free_domain(v);
> > > >           vhost_vdpa_config_put(v);
> > > >           vhost_vdpa_cleanup(v);
> > > >           mutex_unlock(&d->mutex);
> > > > 
> > > >           atomic_dec(&v->opened);
> > > >           complete(&v->completion);
> > > > 
> > > >           return 0;
> > > > }
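
A minimal sketch of the reordering discussed above, for reference: calling
vhost_vdpa_free_domain() only after vhost_vdpa_cleanup() so the unmap paths
still see a valid v->domain. This is only an illustration of the idea (the
tail of the release path), not necessarily the eventual upstream fix:

    mutex_lock(&d->mutex);
    filep->private_data = NULL;
    vhost_vdpa_clean_irq(v);
    vhost_vdpa_reset(v);
    vhost_dev_stop(&v->vdev);
    vhost_vdpa_config_put(v);
    vhost_vdpa_cleanup(v);      /* unpins pages and iommu_unmap()s via the iotlb */
    vhost_vdpa_free_domain(v);  /* drop v->domain only after the last unmap */
    mutex_unlock(&d->mutex);
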
> > > > 
> > > > > 
> > > > > By the way, Jason, can you reproduce the problem in your machine?
> > > > > 
> > > > 
> > > > I haven't got time to do this, but it should be the responsibility of
> > > > the author to validate this anyhow.
> > > > 
> > > > Thanks
> > > > 
> > > > > > Thanks
> > > > > > 
> > > > > > 
> > > > > > > 
> > > > > > > Signed-off-by: Longpeng <longpeng2@huawei.com>
> > > > > > > ---
> > > > > > >     drivers/vhost/vdpa.c | 13 +++++++++++++
> > > > > > >     1 file changed, 13 insertions(+)
> > > > > > > 
> > > > > > > diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> > > > > > > index a527eeeac637..37477cffa5aa 100644
> > > > > > > --- a/drivers/vhost/vdpa.c
> > > > > > > +++ b/drivers/vhost/vdpa.c
> > > > > > > @@ -823,6 +823,18 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v,
> > > > > > >             vhost_vdpa_remove_as(v, asid);
> > > > > > >     }
> > > > > > > +static void vhost_vdpa_clean_map(struct vhost_vdpa *v)
> > > > > > > +{
> > > > > > > +    struct vhost_vdpa_as *as;
> > > > > > > +    u32 asid;
> > > > > > > +
> > > > > > > +    for (asid = 0; asid < v->vdpa->nas; asid++) {
> > > > > > > +        as = asid_to_as(v, asid);
> > > > > > > +        if (as)
> > > > > > > +            vhost_vdpa_unmap(v, &as->iotlb, 0ULL, 0ULL - 1);
> > > > > > > +    }
> > > > > > > +}
> > > > > > > +
> > > > > > >     static int vhost_vdpa_va_map(struct vhost_vdpa *v,
> > > > > > >                      struct vhost_iotlb *iotlb,
> > > > > > >                      u64 iova, u64 size, u64 uaddr, u32 perm)
> > > > > > > @@ -1247,6 +1259,7 @@ static int vhost_vdpa_release(struct inode *inode, struct file *filep)
> > > > > > >         vhost_vdpa_clean_irq(v);
> > > > > > >         vhost_vdpa_reset(v);
> > > > > > >         vhost_dev_stop(&v->vdev);
> > > > > > > +    vhost_vdpa_clean_map(v);
> > > > > > >         vhost_vdpa_free_domain(v);
> > > > > > >         vhost_vdpa_config_put(v);
> > > > > > >         vhost_vdpa_cleanup(v);
> > > > > > 
> > > > > > .
> > > > > 
> > > > 
> > > > .
> > > 
> > 
> > .

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

Thread overview: 6+ messages
     [not found] <20230131145310.2069-1-longpeng2@huawei.com>
2023-02-14  6:16 ` [PATCH] vhost-vdpa: cleanup memory maps when closing vdpa fds Jason Wang
     [not found]   ` <35b94992-0c6b-a190-1fce-5dda9c8dcf4b@huawei.com>
2023-02-15  2:00     ` Jason Wang
     [not found]       ` <ad0ab6b8-1e1e-f686-eb5c-78cc63869c54@huawei.com>
2023-02-15  2:56         ` Jason Wang
     [not found]           ` <af95c38d-fdca-aef0-55ae-bbb0baee6029@huawei.com>
2023-02-15 11:45             ` Michael S. Tsirkin [this message]
2023-02-17  5:58             ` Jason Wang
2023-03-10  8:37             ` Michael S. Tsirkin
