* [PATCH 0/3] Remove copy_page_from_iter_atomic()
From: Matthew Wilcox (Oracle) @ 2025-05-14 17:06 UTC
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), ntfs3, linux-mm, Alexander Viro,
linux-fsdevel, Hugh Dickins
The first patch here is a bug-fix that enables
CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP to work correctly with
filesystems that support large folios. Then we convert ntfs3 to use
copy_folio_from_iter_atomic() instead of copy_page_from_iter_atomic()
and finally remove copy_page_from_iter_atomic().
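For callers, the conversion is mechanical. A minimal before/after sketch
of the pattern (illustrative only; the real change is in patch 2):

	/* Before: page-based copy and flush */
	page = pages[ip];
	cp = copy_page_from_iter_atomic(page, off, min(tail, bytes), from);
	flush_dcache_page(page);

	/* After: the folio-based helper takes an offset into the folio */
	folio = page_folio(pages[ip]);
	cp = copy_folio_from_iter_atomic(folio, off, min(tail, bytes), from);
	flush_dcache_folio(folio);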
Matthew Wilcox (Oracle) (3):
highmem: Add folio_test_partial_kmap()
ntfs3: Use folios more in ntfs_compress_write()
iov: Remove copy_page_from_iter_atomic()
fs/ntfs3/file.c | 31 +++++++++++++------------------
include/linux/highmem.h | 10 +++++-----
include/linux/page-flags.h | 7 +++++++
include/linux/uio.h | 10 ++--------
lib/iov_iter.c | 29 +++++++++++++----------------
5 files changed, 40 insertions(+), 47 deletions(-)
--
2.47.2
* [PATCH 1/3] highmem: Add folio_test_partial_kmap()
From: Matthew Wilcox (Oracle) @ 2025-05-14 17:06 UTC
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), ntfs3, linux-mm, Alexander Viro,
linux-fsdevel, Hugh Dickins, stable
In commit c749d9b7ebbc ("iov_iter: fix copy_page_from_iter_atomic() if
KMAP_LOCAL_FORCE_MAP"), Hugh correctly noted that if KMAP_LOCAL_FORCE_MAP
is enabled, we must limit ourselves to PAGE_SIZE bytes per call
to kmap_local(). The same problem exists in memcpy_from_folio(),
memcpy_to_folio(), folio_zero_tail(), folio_fill_tail() and
memcpy_from_file_folio(), so add folio_test_partial_kmap() to do this
more succinctly.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Fixes: 00cdf76012ab ("mm: add memcpy_from_file_folio()")
Cc: stable@vger.kernel.org
---
include/linux/highmem.h | 10 +++++-----
include/linux/page-flags.h | 7 +++++++
2 files changed, 12 insertions(+), 5 deletions(-)
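To illustrate what the patch guards against, here is a minimal standalone
sketch (hypothetical helper name, mirroring memcpy_from_folio() below):
when kmap_local_folio() can only map one page of the folio at a time, the
copy has to be split at page boundaries and each page mapped separately.

	static void copy_from_large_folio(char *to, struct folio *folio,
					  size_t offset, size_t len)
	{
		while (len > 0) {
			const char *from = kmap_local_folio(folio, offset);
			size_t chunk = len;

			/* Stay within the single page this mapping covers */
			if (folio_test_partial_kmap(folio) &&
			    chunk > PAGE_SIZE - offset_in_page(offset))
				chunk = PAGE_SIZE - offset_in_page(offset);
			memcpy(to, from, chunk);
			kunmap_local(from);

			to += chunk;
			offset += chunk;
			len -= chunk;
		}
	}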
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 5c6bea81a90e..c698f8415675 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -461,7 +461,7 @@ static inline void memcpy_from_folio(char *to, struct folio *folio,
const char *from = kmap_local_folio(folio, offset);
size_t chunk = len;
- if (folio_test_highmem(folio) &&
+ if (folio_test_partial_kmap(folio) &&
chunk > PAGE_SIZE - offset_in_page(offset))
chunk = PAGE_SIZE - offset_in_page(offset);
memcpy(to, from, chunk);
@@ -489,7 +489,7 @@ static inline void memcpy_to_folio(struct folio *folio, size_t offset,
char *to = kmap_local_folio(folio, offset);
size_t chunk = len;
- if (folio_test_highmem(folio) &&
+ if (folio_test_partial_kmap(folio) &&
chunk > PAGE_SIZE - offset_in_page(offset))
chunk = PAGE_SIZE - offset_in_page(offset);
memcpy(to, from, chunk);
@@ -522,7 +522,7 @@ static inline __must_check void *folio_zero_tail(struct folio *folio,
{
size_t len = folio_size(folio) - offset;
- if (folio_test_highmem(folio)) {
+ if (folio_test_partial_kmap(folio)) {
size_t max = PAGE_SIZE - offset_in_page(offset);
while (len > max) {
@@ -560,7 +560,7 @@ static inline void folio_fill_tail(struct folio *folio, size_t offset,
VM_BUG_ON(offset + len > folio_size(folio));
- if (folio_test_highmem(folio)) {
+ if (folio_test_partial_kmap(folio)) {
size_t max = PAGE_SIZE - offset_in_page(offset);
while (len > max) {
@@ -597,7 +597,7 @@ static inline size_t memcpy_from_file_folio(char *to, struct folio *folio,
size_t offset = offset_in_folio(folio, pos);
char *from = kmap_local_folio(folio, offset);
- if (folio_test_highmem(folio)) {
+ if (folio_test_partial_kmap(folio)) {
offset = offset_in_page(offset);
len = min_t(size_t, len, PAGE_SIZE - offset);
} else
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e6a21b62dcce..3b814ce08331 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -615,6 +615,13 @@ FOLIO_FLAG(dropbehind, FOLIO_HEAD_PAGE)
PAGEFLAG_FALSE(HighMem, highmem)
#endif
+/* Does kmap_local_folio() only allow access to one page of the folio? */
+#ifdef CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP
+#define folio_test_partial_kmap(f) true
+#else
+#define folio_test_partial_kmap(f) folio_test_highmem(f)
+#endif
+
#ifdef CONFIG_SWAP
static __always_inline bool folio_test_swapcache(const struct folio *folio)
{
--
2.47.2
* [PATCH 2/3] ntfs3: Use folios more in ntfs_compress_write()
From: Matthew Wilcox (Oracle) @ 2025-05-14 17:06 UTC
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), ntfs3, linux-mm, Alexander Viro,
linux-fsdevel, Hugh Dickins
Remove the local 'page' variable and do everything in terms of folios.
This removes the last user of copy_page_from_iter_atomic() and a hidden
call to compound_head() in ClearPageDirty().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
fs/ntfs3/file.c | 31 +++++++++++++------------------
1 file changed, 13 insertions(+), 18 deletions(-)
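The "hidden call" mentioned above: ClearPageDirty() is a PF_HEAD
page-flag accessor, so it resolves the page to its folio with
compound_head() on every invocation. Schematically, the change is:

	/* Before: two folio lookups, one of them hidden */
	page = pages[ip];
	ClearPageDirty(page);	/* calls compound_head(page) internally */
	folio = page_folio(page);

	/* After: one explicit lookup, folio operations from then on */
	folio = page_folio(pages[ip]);
	folio_clear_dirty(folio);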
diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
index 9b6a3f8d2e7c..bc6062e0668e 100644
--- a/fs/ntfs3/file.c
+++ b/fs/ntfs3/file.c
@@ -998,7 +998,8 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
struct ntfs_inode *ni = ntfs_i(inode);
u64 valid = ni->i_valid;
struct ntfs_sb_info *sbi = ni->mi.sbi;
- struct page *page, **pages = NULL;
+ struct page **pages = NULL;
+ struct folio *folio;
size_t written = 0;
u8 frame_bits = NTFS_LZNT_CUNIT + sbi->cluster_bits;
u32 frame_size = 1u << frame_bits;
@@ -1008,7 +1009,6 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
u64 frame_vbo;
pgoff_t index;
bool frame_uptodate;
- struct folio *folio;
if (frame_size < PAGE_SIZE) {
/*
@@ -1062,8 +1062,7 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
pages_per_frame);
if (err) {
for (ip = 0; ip < pages_per_frame; ip++) {
- page = pages[ip];
- folio = page_folio(page);
+ folio = page_folio(pages[ip]);
folio_unlock(folio);
folio_put(folio);
}
@@ -1074,10 +1073,9 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
ip = off >> PAGE_SHIFT;
off = offset_in_page(valid);
for (; ip < pages_per_frame; ip++, off = 0) {
- page = pages[ip];
- folio = page_folio(page);
- zero_user_segment(page, off, PAGE_SIZE);
- flush_dcache_page(page);
+ folio = page_folio(pages[ip]);
+ folio_zero_segment(folio, off, PAGE_SIZE);
+ flush_dcache_folio(folio);
folio_mark_uptodate(folio);
}
@@ -1086,8 +1084,7 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
ni_unlock(ni);
for (ip = 0; ip < pages_per_frame; ip++) {
- page = pages[ip];
- folio = page_folio(page);
+ folio = page_folio(pages[ip]);
folio_mark_uptodate(folio);
folio_unlock(folio);
folio_put(folio);
@@ -1131,8 +1128,7 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
if (err) {
for (ip = 0; ip < pages_per_frame;
ip++) {
- page = pages[ip];
- folio = page_folio(page);
+ folio = page_folio(pages[ip]);
folio_unlock(folio);
folio_put(folio);
}
@@ -1150,10 +1146,10 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
for (;;) {
size_t cp, tail = PAGE_SIZE - off;
- page = pages[ip];
- cp = copy_page_from_iter_atomic(page, off,
+ folio = page_folio(pages[ip]);
+ cp = copy_folio_from_iter_atomic(folio, off,
min(tail, bytes), from);
- flush_dcache_page(page);
+ flush_dcache_folio(folio);
copied += cp;
bytes -= cp;
@@ -1173,9 +1169,8 @@ static ssize_t ntfs_compress_write(struct kiocb *iocb, struct iov_iter *from)
ni_unlock(ni);
for (ip = 0; ip < pages_per_frame; ip++) {
- page = pages[ip];
- ClearPageDirty(page);
- folio = page_folio(page);
+ folio = page_folio(pages[ip]);
+ folio_clear_dirty(folio);
folio_mark_uptodate(folio);
folio_unlock(folio);
folio_put(folio);
--
2.47.2
* [PATCH 3/3] iov: Remove copy_page_from_iter_atomic()
From: Matthew Wilcox (Oracle) @ 2025-05-14 17:06 UTC
To: Andrew Morton
Cc: Matthew Wilcox (Oracle), ntfs3, linux-mm, Alexander Viro,
linux-fsdevel, Hugh Dickins
All callers now use copy_folio_from_iter_atomic(), so convert
copy_page_from_iter_atomic() into it. While I'm in there, use kmap_local_folio()
and pagefault_disable() instead of kmap_atomic(). That allows preemption
and/or task migration to happen during the copy_from_user(). Also use
the new folio_test_partial_kmap() predicate instead of open-coding it.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/uio.h | 10 ++--------
lib/iov_iter.c | 29 +++++++++++++----------------
2 files changed, 15 insertions(+), 24 deletions(-)
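The mapping-API change in a nutshell (schematic; the full version is in
the hunk below): kmap_atomic() disables preemption as well as page
faults, while kmap_local_folio() plus an explicit pagefault_disable()
leaves the task preemptible, and migratable, for the duration of the copy.

	/* Old: atomic mapping; no preemption until kunmap_atomic() */
	p = kmap_atomic(page) + offset;
	n = __copy_from_iter(p, n, i);
	kunmap_atomic(p);

	/* New: local mapping; only page faults are disabled, so the
	 * scheduler is free to preempt or migrate the task mid-copy.
	 */
	to = kmap_local_folio(folio, offset);
	pagefault_disable();
	n = __copy_from_iter(to, n, i);
	pagefault_enable();
	kunmap_local(to);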
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 49ece9e1888f..e46477482663 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -176,8 +176,6 @@ static inline size_t iov_length(const struct iovec *iov, unsigned long nr_segs)
return ret;
}
-size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
- size_t bytes, struct iov_iter *i);
void iov_iter_advance(struct iov_iter *i, size_t bytes);
void iov_iter_revert(struct iov_iter *i, size_t bytes);
size_t fault_in_iov_iter_readable(const struct iov_iter *i, size_t bytes);
@@ -187,6 +185,8 @@ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
struct iov_iter *i);
size_t copy_page_from_iter(struct page *page, size_t offset, size_t bytes,
struct iov_iter *i);
+size_t copy_folio_from_iter_atomic(struct folio *folio, size_t offset,
+ size_t bytes, struct iov_iter *i);
size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
size_t _copy_from_iter(void *addr, size_t bytes, struct iov_iter *i);
@@ -204,12 +204,6 @@ static inline size_t copy_folio_from_iter(struct folio *folio, size_t offset,
return copy_page_from_iter(&folio->page, offset, bytes, i);
}
-static inline size_t copy_folio_from_iter_atomic(struct folio *folio,
- size_t offset, size_t bytes, struct iov_iter *i)
-{
- return copy_page_from_iter_atomic(&folio->page, offset, bytes, i);
-}
-
size_t copy_page_to_iter_nofault(struct page *page, unsigned offset,
size_t bytes, struct iov_iter *i);
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index d9e19fb2dcf3..969d4ad510df 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -457,38 +457,35 @@ size_t iov_iter_zero(size_t bytes, struct iov_iter *i)
}
EXPORT_SYMBOL(iov_iter_zero);
-size_t copy_page_from_iter_atomic(struct page *page, size_t offset,
+size_t copy_folio_from_iter_atomic(struct folio *folio, size_t offset,
size_t bytes, struct iov_iter *i)
{
size_t n, copied = 0;
- bool uses_kmap = IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL_FORCE_MAP) ||
- PageHighMem(page);
- if (!page_copy_sane(page, offset, bytes))
+ if (!page_copy_sane(&folio->page, offset, bytes))
return 0;
if (WARN_ON_ONCE(!i->data_source))
return 0;
do {
- char *p;
+ char *to = kmap_local_folio(folio, offset);
n = bytes - copied;
- if (uses_kmap) {
- page += offset / PAGE_SIZE;
- offset %= PAGE_SIZE;
- n = min_t(size_t, n, PAGE_SIZE - offset);
- }
-
- p = kmap_atomic(page) + offset;
- n = __copy_from_iter(p, n, i);
- kunmap_atomic(p);
+ if (folio_test_partial_kmap(folio) &&
+ n > PAGE_SIZE - offset_in_page(offset))
+ n = PAGE_SIZE - offset_in_page(offset);
+
+ pagefault_disable();
+ n = __copy_from_iter(to, n, i);
+ pagefault_enable();
+ kunmap_local(to);
copied += n;
offset += n;
- } while (uses_kmap && copied != bytes && n > 0);
+ } while (copied != bytes && n > 0);
return copied;
}
-EXPORT_SYMBOL(copy_page_from_iter_atomic);
+EXPORT_SYMBOL(copy_folio_from_iter_atomic);
static void iov_iter_bvec_advance(struct iov_iter *i, size_t size)
{
--
2.47.2
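Caller-visible semantics are unchanged: the return value may still be
short if the user buffer faults while page faults are disabled, and
callers are expected to fault the iterator in and retry. A hedged sketch
of that retry pattern, modeled loosely on generic_perform_write() (not
part of this series):

	while (bytes) {
		size_t copied = copy_folio_from_iter_atomic(folio, offset,
							    bytes, i);

		offset += copied;
		bytes -= copied;
		/* Nothing copied and nothing faultable: give up */
		if (!copied &&
		    fault_in_iov_iter_readable(i, bytes) == bytes)
			return -EFAULT;
	}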
* Re: [PATCH 2/3] ntfs3: Use folios more in ntfs_compress_write()
From: Konstantin Komarov @ 2025-05-23 11:58 UTC
To: Matthew Wilcox (Oracle)
Cc: ntfs3, linux-mm, viro, linux-fsdevel, hughd, akpm
On 5/14/25 19:06, Matthew Wilcox (Oracle) wrote:
> Remove the local 'page' variable and do everything in terms of folios.
> This removes the last user of copy_page_from_iter_atomic() and a hidden
> call to compound_head() in ClearPageDirty().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> [...]
The changes look fine. Feel free to add them.