From: Hugh Dickins <hughd@google.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
Christoph Hellwig <hch@infradead.org>, Jan Kara <jack@suse.cz>,
William Kucharski <william.kucharski@oracle.com>,
linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH next 2/3] shmem: Fix data loss when folio truncated
Date: Sun, 2 Jan 2022 17:34:05 -0800 (PST)
Message-ID: <24d53dac-d58d-6bb9-82af-c472922e4a31@google.com>
xfstests generic/098 214 263 286 412 used to pass on huge tmpfs (well,
three of those _require_odirect, enabled by a shmem_direct_IO() stub),
but they still fail even with the partial_end fix.
generic/098's output mismatch shows actual data loss:
--- tests/generic/098.out
+++ /home/hughd/xfstests/results//generic/098.out.bad
@@ -4,9 +4,7 @@
wrote 32768/32768 bytes at offset 262144
XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
File content after remount:
-0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
-*
-0400000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
...
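For reference, the failures can be reproduced by pointing xfstests at a
huge tmpfs mount; a sketch of a local.config fragment (the paths, size,
and huge= option are illustrative, adjust to your setup):

```shell
# hypothetical xfstests local.config for tmpfs with huge pages
export FSTYP=tmpfs
export TEST_DEV=tmpfs
export TEST_DIR=/mnt/test
export SCRATCH_DEV=tmpfs
export SCRATCH_MNT=/mnt/scratch
export TMPFS_MOUNT_OPTIONS="-o size=1G,huge=always"
```

Then run the affected tests with ./check generic/098 generic/214
generic/263 generic/286 generic/412.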
The problem here is that shmem_getpage(,,,SGP_READ) intentionally
supplies a NULL page beyond EOF, and truncation and eviction intentionally
lower i_size before shmem_undo_range() is called: so a folio straddling
the new EOF was truncated whole instead of being treated as partial.
That could be solved by adding yet another SGP_mode to select the
required behaviour, but it's cleaner just to handle cache and then swap
in shmem_get_folio() - renamed here to shmem_get_partial_folio(), given
an easier interface, and moved next to its sole user, shmem_undo_range().
We certainly do not want to read data back from swap when evicting an
inode: i_size preset to 0 still ensures that. Nor do we want to zero
folio data when evicting: truncate_inode_partial_folio()'s check for
length == folio_size(folio) already ensures that.
Fixes: 8842c9c23524 ("truncate,shmem: Handle truncates that split large folios")
Signed-off-by: Hugh Dickins <hughd@google.com>
---
mm/shmem.c | 39 ++++++++++++++++++++++++---------------
1 file changed, 24 insertions(+), 15 deletions(-)
--- hughd1/mm/shmem.c
+++ hughd2/mm/shmem.c
@@ -151,19 +151,6 @@ int shmem_getpage(struct inode *inode, p
 			mapping_gfp_mask(inode->i_mapping), NULL, NULL, NULL);
 }
 
-static int shmem_get_folio(struct inode *inode, pgoff_t index,
-		struct folio **foliop, enum sgp_type sgp)
-{
-	struct page *page = NULL;
-	int ret = shmem_getpage(inode, index, &page, sgp);
-
-	if (page)
-		*foliop = page_folio(page);
-	else
-		*foliop = NULL;
-	return ret;
-}
-
 static inline struct shmem_sb_info *SHMEM_SB(struct super_block *sb)
 {
 	return sb->s_fs_info;
@@ -894,6 +881,28 @@ void shmem_unlock_mapping(struct address
 	}
 }
 
+static struct folio *shmem_get_partial_folio(struct inode *inode, pgoff_t index)
+{
+	struct folio *folio;
+	struct page *page;
+
+	/*
+	 * At first avoid shmem_getpage(,,,SGP_READ): that fails
+	 * beyond i_size, and reports fallocated pages as holes.
+	 */
+	folio = __filemap_get_folio(inode->i_mapping, index,
+					FGP_ENTRY | FGP_LOCK, 0);
+	if (!folio || !xa_is_value(folio))
+		return folio;
+	/*
+	 * But read a page back from swap if any of it is within i_size
+	 * (although in some cases this is just a waste of time).
+	 */
+	page = NULL;
+	shmem_getpage(inode, index, &page, SGP_READ);
+	return page ? page_folio(page) : NULL;
+}
+
 /*
  * Remove range of pages and swap entries from page cache, and free them.
  * If !unfalloc, truncate or punch hole; if unfalloc, undo failed fallocate.
@@ -948,7 +957,7 @@ static void shmem_undo_range(struct inod
 	}
 
 	same_folio = (lstart >> PAGE_SHIFT) == (lend >> PAGE_SHIFT);
-	shmem_get_folio(inode, lstart >> PAGE_SHIFT, &folio, SGP_READ);
+	folio = shmem_get_partial_folio(inode, lstart >> PAGE_SHIFT);
 	if (folio) {
 		same_folio = lend < folio_pos(folio) + folio_size(folio);
 		folio_mark_dirty(folio);
@@ -963,7 +972,7 @@ static void shmem_undo_range(struct inod
 	}
 
 	if (!same_folio)
-		folio = shmem_get_partial_folio(inode, lend >> PAGE_SHIFT);
+		folio = shmem_get_partial_folio(inode, lend >> PAGE_SHIFT);
 	if (folio) {
 		folio_mark_dirty(folio);
 		if (!truncate_inode_partial_folio(folio, lstart, lend))