From: Jason Gunthorpe <jgg@nvidia.com>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
peterx@redhat.com, prime.zeng@hisilicon.com, cohuck@redhat.com
Subject: Re: [PATCH] vfio/pci: Handle concurrent vma faults
Date: Fri, 12 Mar 2021 15:41:47 -0400 [thread overview]
Message-ID: <20210312194147.GH2356281@nvidia.com> (raw)
In-Reply-To: <20210312121611.07a313e3@omen.home.shazbot.org>
On Fri, Mar 12, 2021 at 12:16:11PM -0700, Alex Williamson wrote:
> On Wed, 10 Mar 2021 14:40:11 -0400
> Jason Gunthorpe <jgg@nvidia.com> wrote:
>
> > On Wed, Mar 10, 2021 at 11:34:06AM -0700, Alex Williamson wrote:
> >
> > > > I think after the address_space changes this should try to stick with
> > > > a normal io_remap_pfn_range() done outside the fault handler.
> > >
> > > I assume you're suggesting calling io_remap_pfn_range() when device
> > > memory is enabled,
> >
> > Yes, I think I saw Peter thinking along these lines too
> >
> > Then fault just always causes SIGBUS if it gets called
>
> Trying the address_space approach (since the alternative would just be
> adding back vma list tracking), it looks like we can't call
> io_remap_pfn_range() while holding the address_space i_mmap_rwsem via
> i_mmap_lock_write(), as done in unmap_mapping_range(). lockdep
> identifies a circular lock order issue against fs_reclaim. Minimally we
> also need vma_interval_tree_iter_{first,next} exported in order to use
> vma_interval_tree_foreach(). Suggestions? Thanks,
You are asking how to put the BAR back into every VMA when it is
enabled again after it has been zap'd?
What did the lockdep splat look like? Is it a memory allocation?
Does current_gfp_context()/memalloc_nofs_save()/etc solve it?
The easiest answer is to continue to use fault and
vmf_insert_page().
But it feels like it would be OK to export enough i_mmap machinery to
enable this. Cleaner than building your own tracking, which would
still have the same ugly mmap_sem inversion issue which was preventing
this last time.
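
Roughly, the exported-i_mmap direction would look like the sketch below.
This is pseudocode-level only: it assumes vma_interval_tree_iter_{first,next}
actually get exported so vma_interval_tree_foreach() is usable from the
driver, and the vfio_pci_device layout and vdev_bar_pfn() helper are
hypothetical stand-ins, not the real driver code.

```c
/*
 * Sketch: re-establish the BAR mapping in every VMA after device
 * memory is re-enabled, instead of relying on the fault handler.
 * vdev_bar_pfn() is a hypothetical helper translating a vma's
 * pgoff into the BAR's pfn.
 */
static void vfio_pci_remap_bars(struct vfio_pci_device *vdev)
{
	struct address_space *mapping = vdev->inode->i_mapping;
	struct vm_area_struct *vma;

	i_mmap_lock_write(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, 0, ULONG_MAX) {
		unsigned long pfn = vdev_bar_pfn(vdev, vma->vm_pgoff);

		/*
		 * This call is where the reported lockdep splat against
		 * fs_reclaim would bite: io_remap_pfn_range() may
		 * allocate page tables, which is suspect under a lock
		 * that is also taken in reclaim paths.
		 */
		io_remap_pfn_range(vma, vma->vm_start, pfn,
				   vma->vm_end - vma->vm_start,
				   vma->vm_page_prot);
	}
	i_mmap_unlock_write(mapping);
}
```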
Jason
Thread overview:
2021-03-10 17:53 [PATCH] vfio/pci: Handle concurrent vma faults Alex Williamson
2021-03-10 18:14 ` Jason Gunthorpe
2021-03-10 18:34 ` Alex Williamson
2021-03-10 18:40 ` Jason Gunthorpe
2021-03-10 20:06 ` Peter Xu
2021-03-11 11:35 ` Christoph Hellwig
2021-03-11 16:35 ` Peter Xu
2021-03-12 19:16 ` Alex Williamson
2021-03-12 19:41 ` Jason Gunthorpe [this message]
2021-03-12 20:09 ` Alex Williamson
2021-03-12 20:58 ` Alex Williamson
2021-03-13 0:03 ` Jason Gunthorpe