From: Keith Busch <kbusch@kernel.org>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: Keith Busch <kbusch@meta.com>, kvm@vger.kernel.org
Subject: Re: [PATCH] vfio/type1: conditional rescheduling while pinning
Date: Wed, 9 Jul 2025 14:18:58 -0600
Message-ID: <aG7OspdCPAK2oILR@kbusch-mbp>
In-Reply-To: <20250319121704.7744c73e.alex.williamson@redhat.com>
On Wed, Mar 19, 2025 at 12:17:04PM -0600, Alex Williamson wrote:
> On Wed, 19 Mar 2025 09:47:05 -0600
> > >
> > > Note that we already have a cond_resched() in vfio_iommu_map(), which
> > > we'll hit any time we get a break in a contiguous mapping. We may hit
> > > that regularly enough that it's not an issue for RAM mapping, but I've
> > > certainly seen soft lockups when we have many GiB of contiguous pfnmaps
> > > prior to the series above. Thanks,
> >
> > So far, adding the additional patches has not changed anything. We've
> > ensured we are using an address and length aligned to 2MB, but it sure
> > looks like vfio's fault handler is only getting order-0 faults. I'm not
> > finding anything immediately obvious that we can change to get the
> > desired higher-order behavior, though. Any other hints or information I
> > could provide?
>
> Since you mention folding in the changes, are you working on an upstream
> kernel or a downstream backport? Huge pfnmap support was added in
> v6.12 via [1]. Without that you'd never see better than an order-0
> fault. I hope that's it, because with all the kernel pieces in place it
> should "Just work". Thanks,
I think I'm back to needing a cond_resched(). I'm finding that too many
user space programs, including qemu, for various reasons don't use
hugepage faults, and we ultimately lock up a cpu long enough to cause
other nasty side effects, like OOMs due to blocked rcu free callbacks.
As preferable as it is to get everything aligned to use the faster
faults, I don't think the kernel should depend on that to prevent
prolonged cpu lockups. What do you think?
Thread overview: 11+ messages
2025-03-12 22:52 [PATCH] vfio/type1: conditional rescheduling while pinning Keith Busch
2025-03-17 21:44 ` Alex Williamson
2025-03-17 22:30 ` Keith Busch
2025-03-17 22:53 ` Alex Williamson
2025-03-19 15:47 ` Keith Busch
2025-03-19 18:17 ` Alex Williamson
2025-03-19 18:34 ` Keith Busch
2025-03-19 22:13 ` Keith Busch
2025-07-09 20:18 ` Keith Busch [this message]
2025-07-11 20:16 ` Alex Williamson
2025-07-11 20:40 ` Keith Busch