From: Matthew Wilcox <willy@infradead.org>
To: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
Jeff Layton <jlayton@kernel.org>,
Christoph Hellwig <hch@infradead.org>, Chris Mason <clm@fb.com>
Subject: [PATCH v2 9/9] mm: Unify all add_to_page_cache variants
Date: Tue, 14 Jan 2020 18:38:43 -0800
Message-ID: <20200115023843.31325-10-willy@infradead.org>
In-Reply-To: <20200115023843.31325-1-willy@infradead.org>
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
We already execute various pieces of add_to_page_cache() conditionally
on !PageHuge(page); fold the add_to_page_cache_lru() work in as more
code that is skipped for huge pages. This lets us remove the old
add_to_page_cache() wrapper and rename __add_to_page_cache_locked() to
add_to_page_cache(). A compatibility define is included so we don't
have to change the 20+ callers of add_to_page_cache_lru().
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
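A note for reviewers: with the compatibility define, existing callers of
add_to_page_cache_lru() keep compiling and behaving as before. The sketch
below is purely illustrative (example_read_begin() is a made-up caller,
not part of this patch); it just shows the calling convention that is
preserved:

	/*
	 * Hypothetical caller, for illustration only.  The #define expands
	 * add_to_page_cache_lru() to add_to_page_cache(), which now locks
	 * the page, charges it, inserts it into the page cache and puts it
	 * on the LRU itself.
	 */
	static int example_read_begin(struct address_space *mapping,
				      struct page *page, pgoff_t index)
	{
		int err = add_to_page_cache_lru(page, mapping, index,
						mapping_gfp_mask(mapping));
		if (err)
			return err;	/* e.g. -EEXIST or -ENOMEM */

		/* On success the page is locked, referenced and on the LRU. */
		return 0;
	}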
include/linux/pagemap.h | 5 ++--
mm/filemap.c | 65 ++++++++++++-----------------------------
2 files changed, 21 insertions(+), 49 deletions(-)
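For review convenience, this is roughly what the unified function looks
like once the hunks below are applied (an abridged sketch reconstructed
from the diff, with unchanged parts elided; the diff is authoritative):

	int add_to_page_cache(struct page *page, struct address_space *mapping,
			pgoff_t offset, gfp_t gfp_mask)
	{
		XA_STATE(xas, &mapping->i_pages, offset);
		int huge = PageHuge(page);
		struct mem_cgroup *memcg;
		void *old, *shadow = NULL;

		VM_BUG_ON_PAGE(!PageLocked(page), page);
		__SetPageLocked(page);

		/* ... memcg try_charge for !huge pages, as before ... */

		/* ... store the page in the xarray, remembering any shadow
		 *     entry it replaces in 'shadow' ... */

		if (xas_error(&xas))
			goto error;

		if (!huge) {
			mem_cgroup_commit_charge(page, memcg, false, false);
			/* ... workingset_refault() if a shadow entry was
			 *     replaced, unchanged from the old
			 *     add_to_page_cache_lru() ... */
			lru_cache_add(page);
		}
		trace_mm_filemap_add_to_page_cache(page);
		return 0;
	error:
		/* ... cancel the charge and drop the reference, as before ... */
		__ClearPageLocked(page);
		return xas_error(&xas);
	}
	ALLOW_ERROR_INJECTION(add_to_page_cache, ERRNO);
	EXPORT_SYMBOL_GPL(add_to_page_cache);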
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 75075065dd0b..637770fa283f 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -606,14 +606,15 @@ static inline int fault_in_pages_readable(const char __user *uaddr, int size)
int add_to_page_cache(struct page *page, struct address_space *mapping,
pgoff_t index, gfp_t gfp);
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
- pgoff_t index, gfp_t gfp_mask);
extern void delete_from_page_cache(struct page *page);
extern void __delete_from_page_cache(struct page *page, void *shadow);
int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask);
void delete_from_page_cache_batch(struct address_space *mapping,
struct pagevec *pvec);
+#define add_to_page_cache_lru(page, mapping, index, gfp) \
+ add_to_page_cache(page, mapping, index, gfp)
+
/*
* Only call this from a ->readahead implementation.
*/
diff --git a/mm/filemap.c b/mm/filemap.c
index fb87f5fa75e6..83f45f31a00a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -847,19 +847,18 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
}
EXPORT_SYMBOL_GPL(replace_page_cache_page);
-static int __add_to_page_cache_locked(struct page *page,
- struct address_space *mapping,
- pgoff_t offset, gfp_t gfp_mask,
- void **shadowp)
+int add_to_page_cache(struct page *page, struct address_space *mapping,
+ pgoff_t offset, gfp_t gfp_mask)
{
XA_STATE(xas, &mapping->i_pages, offset);
int huge = PageHuge(page);
struct mem_cgroup *memcg;
int error;
- void *old;
+ void *old, *shadow = NULL;
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+ __SetPageLocked(page);
mapping_set_update(&xas, mapping);
if (!huge) {
@@ -884,8 +883,7 @@ static int __add_to_page_cache_locked(struct page *page,
if (xa_is_value(old)) {
mapping->nrexceptional--;
- if (shadowp)
- *shadowp = old;
+ shadow = old;
}
mapping->nrpages++;
@@ -899,45 +897,8 @@ static int __add_to_page_cache_locked(struct page *page,
if (xas_error(&xas))
goto error;
- if (!huge)
+ if (!huge) {
mem_cgroup_commit_charge(page, memcg, false, false);
- trace_mm_filemap_add_to_page_cache(page);
- return 0;
-error:
- page->mapping = NULL;
- /* Leave page->index set: truncation relies upon it */
- if (!huge)
- mem_cgroup_cancel_charge(page, memcg, false);
- put_page(page);
- return xas_error(&xas);
-}
-ALLOW_ERROR_INJECTION(__add_to_page_cache_locked, ERRNO);
-
-int add_to_page_cache(struct page *page, struct address_space *mapping,
- pgoff_t offset, gfp_t gfp_mask)
-{
- int err;
-
- __SetPageLocked(page);
- err = __add_to_page_cache_locked(page, mapping, offset,
- gfp_mask, NULL);
- if (unlikely(err))
- __ClearPageLocked(page);
- return err;
-}
-
-int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
- pgoff_t offset, gfp_t gfp_mask)
-{
- void *shadow = NULL;
- int ret;
-
- __SetPageLocked(page);
- ret = __add_to_page_cache_locked(page, mapping, offset,
- gfp_mask, &shadow);
- if (unlikely(ret))
- __ClearPageLocked(page);
- else {
/*
* The page might have been evicted from cache only
* recently, in which case it should be activated like
@@ -951,9 +912,19 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping,
workingset_refault(page, shadow);
lru_cache_add(page);
}
- return ret;
+ trace_mm_filemap_add_to_page_cache(page);
+ return 0;
+error:
+ page->mapping = NULL;
+ /* Leave page->index set: truncation relies upon it */
+ if (!huge)
+ mem_cgroup_cancel_charge(page, memcg, false);
+ put_page(page);
+ __ClearPageLocked(page);
+ return xas_error(&xas);
}
-EXPORT_SYMBOL_GPL(add_to_page_cache_lru);
+ALLOW_ERROR_INJECTION(add_to_page_cache, ERRNO);
+EXPORT_SYMBOL_GPL(add_to_page_cache);
#ifdef CONFIG_NUMA
struct page *__page_cache_alloc(gfp_t gfp)
--
2.24.1
Thread overview: 23+ messages
2020-01-15 2:38 [RFC v2 0/9] Replacing the readpages a_op Matthew Wilcox
2020-01-15 2:38 ` [PATCH v2 1/9] mm: Fix the return type of __do_page_cache_readahead Matthew Wilcox
2020-01-15 2:38 ` [PATCH v2 2/9] readahead: Ignore return value of ->readpages Matthew Wilcox
2020-01-15 2:38 ` [PATCH v2 3/9] XArray: Add xarray_for_each_range Matthew Wilcox
2020-01-15 2:38 ` [PATCH v2 4/9] readahead: Put pages in cache earlier Matthew Wilcox
2020-01-15 2:38 ` [PATCH v2 5/9] mm: Add readahead address space operation Matthew Wilcox
2020-01-15 2:38 ` [PATCH v2 6/9] iomap,xfs: Convert from readpages to readahead Matthew Wilcox
2020-01-15 7:16 ` Christoph Hellwig
2020-01-15 7:42 ` Matthew Wilcox
2020-01-24 22:53 ` Matthew Wilcox
2020-01-15 2:38 ` [PATCH v2 7/9] cifs: " Matthew Wilcox
2020-01-15 2:38 ` [PATCH v2 8/9] mm: Remove add_to_page_cache_locked Matthew Wilcox
2020-01-15 2:38 ` Matthew Wilcox [this message]
2020-01-15 7:20 ` [PATCH v2 9/9] mm: Unify all add_to_page_cache variants Christoph Hellwig
2020-01-15 7:44 ` Matthew Wilcox
2020-01-18 23:13 ` [RFC v2 0/9] Replacing the readpages a_op Matthew Wilcox
2020-01-21 11:36 ` Jan Kara
2020-01-21 21:48 ` Matthew Wilcox
2020-01-22 9:44 ` Jan Kara
2020-01-23 10:31 ` Jan Kara
2020-01-22 23:47 ` Dave Chinner
2020-01-23 10:21 ` Jan Kara
2020-01-23 22:29 ` Dave Chinner