From: Peter Xu <peterx@redhat.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
kvm@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
Alex Williamson <alex.williamson@redhat.com>,
Zi Yan <ziy@nvidia.com>, Alex Mastro <amastro@fb.com>,
David Hildenbrand <david@redhat.com>,
Nico Pache <npache@redhat.com>
Subject: Re: [PATCH 5/5] vfio-pci: Best-effort huge pfnmaps with !MAP_FIXED mappings
Date: Wed, 18 Jun 2025 15:15:50 -0400
Message-ID: <aFMQZru7l2aKVsZm@x1.local>
In-Reply-To: <20250618174641.GB1629589@nvidia.com>
On Wed, Jun 18, 2025 at 02:46:41PM -0300, Jason Gunthorpe wrote:
> On Wed, Jun 18, 2025 at 12:56:01PM -0400, Peter Xu wrote:
> > So I changed my mind, slightly. I can still have the "order" parameter to
> > make the API cleaner (even if it'll be a pure overhead.. because all
> > existing callers will pass in PUD_SIZE as of now),
>
> That doesn't seem right, the callers should report the real value, not
> artificially cap it.. Like ARM does have page sizes greater than PUD
> that might be interesting to enable someday for PFN users.
The caller needs to pass in PUD_SIZE to match what vfio-pci currently
supports in its huge_fault().
>
> > but I think I'll still
> > stick with the ifdef in patch 4, as I mentioned here:
>
> > https://lore.kernel.org/all/aFGMG3763eSv9l8b@x1.local/
> >
> > The problem is I just noticed yet again that exporting
> > huge_mapping_get_va_aligned() for all configs doesn't make sense. At least
> > it'll need something like this to make !MMU compile for VFIO, while this is
> > definitely some ugliness I also want to avoid..
>
> IMHO this ugliness should certainly be contained to the mm code and not
> leak into drivers.
>
> > There's just no way to provide a sane default value for !MMU.
>
> So all this mess seems to say that get_unmapped_area() is just the
> wrong fop to have here. It can't be implemented sanely for !MMU and
> has these weird conditions, like can't fail.
>
> I again suggest to just simplify and add a new fop
>
>   size_t get_best_mapping_order(struct file *filp, pgoff_t pgoff,
>                                 size_t length);
>
> Which will return the largest pgoff-aligned order within pgoff/length
> that the FD could try to install. Very simple for the driver
> side. vfio-pci will just return ilog2(bar_size).
>
> PAGE_SHIFT can be a safe default.
I agree this is a better way. We can make PAGE_SHIFT the default, or
just 0, because it doesn't sound necessary to me to support anything
smaller than PAGE_SIZE.. maybe an "int" retval would suffice to also cover
errors.
So this will introduce a new file operation that, for now, will only be
used in VFIO, playing a similar role until we start to convert the many
get_unmapped_area() users to this one.
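
To make the shape concrete, a rough and purely hypothetical sketch of the
driver side, following the prototype quoted above but using an int return
so errors can be reported; vfio_pci_get_best_mapping_order() and
vfio_pci_bar_size() are invented names for illustration, not existing code:

#include <linux/fs.h>
#include <linux/log2.h>
#include <linux/mm.h>

/*
 * Proposed file_operations hook (hypothetical):
 *   int (*get_best_mapping_order)(struct file *filp, pgoff_t pgoff,
 *                                 size_t length);
 * returning the largest pgoff-aligned order (a shift, following the
 * thread's convention) the file could install in [pgoff, pgoff + length).
 */
static int vfio_pci_get_best_mapping_order(struct file *filp, pgoff_t pgoff,
					   size_t length)
{
	/* Invented helper: size of the BAR backing this pgoff range. */
	resource_size_t bar_size = vfio_pci_bar_size(filp, pgoff);

	if (!bar_size)
		return PAGE_SHIFT;	/* safe default */

	return ilog2(bar_size);
}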
>
> Then put all this maze of conditionals in the mm side replacing the
> call to fops->get_unmapped_area() and don't export anything new. The
> mm will automatically cap the alignment based on what the architecture
> can do and what
>
> !MMU would simply entirely ignore this new stuff.
In the long term, we should move all get_unmapped_area() users to the new
API. For the old !MMU users, we should rename get_unmapped_area() to
something better, like get_mmap_addr(); in those cases it's really not
about looking for something unmapped, but normally about returning exactly
what was requested.
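
For illustration only, a similarly hypothetical sketch of how the mm side
might consult the hook and cap the result; the hook is not in struct
file_operations today, and arch_max_map_shift() is invented to stand for
whatever limit the architecture and pfnmap code actually impose:

#include <linux/fs.h>
#include <linux/minmax.h>
#include <linux/mm.h>

#ifdef CONFIG_MMU
static unsigned int mm_best_mapping_shift(struct file *filp, pgoff_t pgoff,
					  size_t length)
{
	int shift = PAGE_SHIFT;		/* safe default */

	if (filp && filp->f_op->get_best_mapping_order) {
		int ret = filp->f_op->get_best_mapping_order(filp, pgoff,
							     length);

		/* Negative values are errors; keep the default. */
		if (ret > 0)
			shift = ret;
	}

	/* Cap by what this architecture can actually map. */
	return min_t(unsigned int, shift, arch_max_map_shift());
}
#else
/* !MMU simply ignores the new hook and keeps its existing behaviour. */
#endif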
>
> > So going one step back: huge_mapping_get_va_aligned() (or whatever name we
> > prefer) doesn't make sense to be exported always, but only when CONFIG_MMU.
> > It should follow the same way we treat mm_get_unmapped_area().
>
> We just deleted !SMP, I really wonder if it is time for !MMU to go
> away too..
Yes, if that happens first, we can completely drop get_unmapped_area()
after all existing MMU users are converted to the new one.
Any early objections / concerns / comments from anyone else, before I go
and introduce it?
--
Peter Xu