From: Keith Busch <kbusch@kernel.org>
To: Alex Williamson <alex.williamson@redhat.com>
Cc: linux-pci@vger.kernel.org
Subject: Re: vfio memlock question
Date: Wed, 13 Dec 2023 12:18:02 -0800 [thread overview]
Message-ID: <ZXoRegCK_r5g9NAN@kbusch-mbp> (raw)
In-Reply-To: <20231213102313.1f3955e1.alex.williamson@redhat.com>
On Wed, Dec 13, 2023 at 10:23:13AM -0700, Alex Williamson wrote:
> On Tue, 12 Dec 2023 17:06:39 -0800
> Keith Busch <kbusch@kernel.org> wrote:
>
> > I was examining an issue where a user process utilizing vfio is hitting
> > the RLIMIT_MEMLOCK limit during an ioctl(VFIO_IOMMU_MAP_DMA) call. The
> > amount of memory, though, should have been well below the memlock limit.
> >
> > The test maps the same address range to multiple devices. Each time the
> > same address range is mapped to another device, the locked count
> > increases, multiplying the memory lock accounting, which was
> > unexpected to me.
> >
> > Another strange thing: /proc/PID/status shows VmLck is indeed
> > increasing toward the limit, but /proc/PID/smaps shows that nothing has
> > been locked.
> >
> > The mlock() syscall doesn't doubly account for previously locked ranges
> > when asked to lock them again, so I was initially expecting the same
> > behavior from vfio since both are charged against the same limit.
> >
> > So a few initial questions:
> >
> > Is there a reason vfio is doubly accounting for the locked pages for
> > each device they're mapped to?
> >
> > Is the discrepancy in how much memory is reported locked, depending on
> > which source I consult, expected?
>
> Locked page accounting is at the vfio container level and those
> containers are unaware of other containers owned by the same process,
> so unfortunately this is expected. IOMMUFD resolves this by having
> multiple IO address spaces within the same iommufd context.
Thanks for the reply! Sounds like I need to better familiarize myself
with iommufd. :)
> I don't know the reason smaps is not showing what you expect or if it
> should. Thanks,
It was just unexpected, but not hugely concerning right now. Not sure if
anyone cares, but I think a process could exceed the ulimit by locking
different ranges through vfio and mlock, since the two are accounted
separately.
Thread overview: 3+ messages
2023-12-13 1:06 vfio memlock question Keith Busch
2023-12-13 17:23 ` Alex Williamson
2023-12-13 20:18 ` Keith Busch [this message]