From: Alex Williamson <alex.williamson@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
peterx@redhat.com, prime.zeng@hisilicon.com, cohuck@redhat.com
Subject: Re: [PATCH v2] vfio/pci: Handle concurrent vma faults
Date: Mon, 28 Jun 2021 12:36:21 -0600
Message-ID: <20210628123621.7fd36a1b.alex.williamson@redhat.com>
In-Reply-To: <20210628173028.GF4459@nvidia.com>
On Mon, 28 Jun 2021 14:30:28 -0300
Jason Gunthorpe <jgg@nvidia.com> wrote:
> On Mon, Jun 28, 2021 at 10:46:53AM -0600, Alex Williamson wrote:
> > On Wed, 10 Mar 2021 11:58:07 -0700
> > Alex Williamson <alex.williamson@redhat.com> wrote:
> >
> > > vfio_pci_mmap_fault() incorrectly makes use of io_remap_pfn_range()
> > > from within a vm_ops fault handler. This function will trigger a
> > > BUG_ON if it encounters a populated pte within the remapped range,
> > > where any fault is meant to populate the entire vma. Concurrent
> > > inflight faults to the same vma will therefore hit this issue,
> > > triggering traces such as:
>
> If it is just about concurrency can the vma_lock enclose
> io_remap_pfn_range() ?

We could extend vma_lock around io_remap_pfn_range(), but that alone
would only serialize the concurrent faults to the same vma; once we
released them they'd still hit the BUG_ON in io_remap_pfn_range()
because the ptes are no longer pte_none(). We'd need to combine that
with something like __vfio_pci_add_vma() returning -EEXIST to skip the
io_remap_pfn_range() call. I've also been advised that we shouldn't be
calling io_remap_pfn_range() from within the fault handler at all; we
should be using something like vmf_insert_pfn() instead, which I
understand can safely be called in this situation. That's the sort of
testing I was hoping someone who previously reproduced the issue could
validate.
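
For reference, a rough sketch of what that vmf_insert_pfn() approach
could look like. This is only an illustration of the idea, not the
actual patch: the vma_lock usage, the __vfio_pci_add_vma() semantics,
and the error handling here are simplified assumptions.

```c
static vm_fault_t vfio_pci_mmap_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	struct vfio_pci_device *vdev = vma->vm_private_data;
	unsigned long vaddr, pfn;
	vm_fault_t ret = VM_FAULT_SIGBUS;

	mutex_lock(&vdev->vma_lock);

	/*
	 * Track the vma for later zapping; a concurrent fault may have
	 * already added it (assumed -EEXIST return), in which case the
	 * racing fault populates the vma and we can simply back out.
	 */
	if (__vfio_pci_add_vma(vdev, vma)) {
		ret = VM_FAULT_NOPAGE;
		goto out_unlock;
	}

	/*
	 * Populate the entire vma with vmf_insert_pfn(), which copes
	 * with an already-present pte (returning VM_FAULT_NOPAGE)
	 * instead of hitting the BUG_ON in io_remap_pfn_range().
	 */
	for (vaddr = vma->vm_start, pfn = vma->vm_pgoff;
	     vaddr < vma->vm_end; vaddr += PAGE_SIZE, pfn++) {
		ret = vmf_insert_pfn(vma, vaddr, pfn);
		if (ret != VM_FAULT_NOPAGE)
			break;
	}

out_unlock:
	mutex_unlock(&vdev->vma_lock);
	return ret;
}
```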
> > IIRC, there were no blocking issues on this patch as an interim fix to
> > resolve the concurrent fault issues with io_remap_pfn_range().
> > Unfortunately it also got no Reviewed-by or Tested-by feedback. I'd
> > like to put this in for v5.14 (should have gone in earlier). Any final
> > comments? Thanks,
>
> I assume there is a reason why vm_lock can't be used here, so I
> wouldn't object, though I don't especially like the loss of tracking
> either.
There's no loss of tracking here; we only ever expected a single fault
per vma to add the vma to our list. This just skips adding duplicates
in cases where we have multiple faults in flight. Thanks,
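
Concretely, the duplicate skip could look something like the sketch
below. The vma_list/vma_next names and the tracking structure are
assumptions for illustration, not necessarily the actual code.

```c
/*
 * Sketch: record a faulting vma unless it is already tracked.
 * Returns -EEXIST for a duplicate so the caller knows a concurrent
 * fault already added it; the vma itself is still populated safely
 * via vmf_insert_pfn().
 */
static int __vfio_pci_add_vma(struct vfio_pci_device *vdev,
			      struct vm_area_struct *vma)
{
	struct vfio_pci_mmap_vma *mmap_vma;

	lockdep_assert_held(&vdev->vma_lock);

	list_for_each_entry(mmap_vma, &vdev->vma_list, vma_next)
		if (mmap_vma->vma == vma)
			return -EEXIST;	/* concurrent fault won the race */

	mmap_vma = kmalloc(sizeof(*mmap_vma), GFP_KERNEL);
	if (!mmap_vma)
		return -ENOMEM;

	mmap_vma->vma = vma;
	list_add(&mmap_vma->vma_next, &vdev->vma_list);
	return 0;
}
```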
Alex