linux-fsdevel.vger.kernel.org archive mirror
* Large folios and filemap_get_folios_contig()
@ 2025-04-03  9:36 Qu Wenruo
  2025-04-03 12:35 ` Matthew Wilcox
  0 siblings, 1 reply; 5+ messages in thread
From: Qu Wenruo @ 2025-04-03  9:36 UTC (permalink / raw)
  To: Linux Memory Management List, linux-fsdevel@vger.kernel.org,
	linux-btrfs
  Cc: vivek.kasireddy, Andrew Morton

Hi,

Recently I hit a bug when developing the large folios support for btrfs.

We call filemap_get_folios_contig(), then lock each returned folio.
(We also have a case where we unlock each returned folio.)

However, since a large folio can be returned several times in the batch,
this leads to a deadlock, as btrfs ends up trying to lock the same
folio more than once.

Then I looked into the caller of filemap_get_folios_contig() inside
mm/gup, and it indeed does the skipping correctly.


This makes me wonder: now that we have large folios, why does
filemap_get_folios_contig() still return the same large folio multiple
times, forcing callers to skip the duplicates?

Isn't the purpose of large folios to handle a much larger range in one
go, without iterating over individual pages?


And there are only 3 call sites: two of them are in nilfs and ramfs,
neither of which supports large folios; the only caller with large
folio support is memfd_pin_folios(), which skips duplicated folios
manually.

I'm wondering whether it's possible to make filemap_get_folios_contig()
avoid filling the batch with duplicated folios entirely?

Thanks,
Qu



Thread overview: 5+ messages
2025-04-03  9:36 Large folios and filemap_get_folios_contig() Qu Wenruo
2025-04-03 12:35 ` Matthew Wilcox
2025-04-03 21:16   ` Qu Wenruo
2025-04-04  0:50     ` Vishal Moola (Oracle)
2025-04-04  4:15       ` Qu Wenruo
