From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: linux-fsdevel@vger.kernel.org, linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-kernel@vger.kernel.org
Subject: [PATCH 07/25] mm: Add lock_folio_killable
Date: Wed, 16 Dec 2020 18:23:17 +0000
Message-ID: <20201216182335.27227-8-willy@infradead.org>
In-Reply-To: <20201216182335.27227-1-willy@infradead.org>

This is like lock_page_killable() but for use by callers who
know they have a folio. Convert __lock_page_killable() to be
__lock_folio_killable(). This saves one call to compound_head() per
contended call to lock_page_killable().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
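Not part of the patch, just an illustrative sketch of how a caller that
already has a folio might use the new helper. unlock_folio() comes from
patch 05/25 earlier in this series; example_wait_for_folio() is a
made-up name used purely for illustration.

static int example_wait_for_folio(struct folio *folio)
{
	int err;

	/* Returns 0 with the folio locked, or -EINTR on a fatal signal. */
	err = lock_folio_killable(folio);
	if (err)
		return err;

	/* ... operate on the locked folio here ... */

	unlock_folio(folio);
	return 0;
}

Callers that only have a page keep using lock_page_killable(), which now
does the page_folio() lookup once up front instead of calling
compound_head() again in the contended slow path.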
 include/linux/pagemap.h | 15 ++++++++++-----
 mm/filemap.c            | 17 +++++++++--------
 2 files changed, 19 insertions(+), 13 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index c5fe759872b5..5acebbb75d41 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -600,7 +600,7 @@ static inline bool wake_page_match(struct wait_page_queue *wait_page,
 }
 
 extern void __lock_folio(struct folio *folio);
-extern int __lock_page_killable(struct page *page);
+extern int __lock_folio_killable(struct folio *folio);
 extern int __lock_page_async(struct page *page, struct wait_page_queue *wait);
 extern int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 				unsigned int flags);
@@ -648,6 +648,14 @@ static inline void lock_page(struct page *page)
 	lock_folio(page_folio(page));
 }
 
+static inline int lock_folio_killable(struct folio *folio)
+{
+	might_sleep();
+	if (!trylock_folio(folio))
+		return __lock_folio_killable(folio);
+	return 0;
+}
+
 /*
  * lock_page_killable is like lock_page but can be interrupted by fatal
  * signals. It returns 0 if it locked the page and -EINTR if it was
@@ -655,10 +663,7 @@ static inline void lock_page(struct page *page)
  */
 static inline int lock_page_killable(struct page *page)
 {
-	might_sleep();
-	if (!trylock_page(page))
-		return __lock_page_killable(page);
-	return 0;
+	return lock_folio_killable(page_folio(page));
 }
 
 /*
diff --git a/mm/filemap.c b/mm/filemap.c
index 50fdc03590b3..dd26b50e3676 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1534,14 +1534,13 @@ void __lock_folio(struct folio *folio)
 }
 EXPORT_SYMBOL(__lock_folio);
 
-int __lock_page_killable(struct page *__page)
+int __lock_folio_killable(struct folio *folio)
 {
-	struct page *page = compound_head(__page);
-	wait_queue_head_t *q = page_waitqueue(page);
-	return wait_on_page_bit_common(q, page, PG_locked, TASK_KILLABLE,
+	wait_queue_head_t *q = page_waitqueue(&folio->page);
+	return wait_on_page_bit_common(q, &folio->page, PG_locked, TASK_KILLABLE,
 					EXCLUSIVE);
 }
-EXPORT_SYMBOL_GPL(__lock_page_killable);
+EXPORT_SYMBOL_GPL(__lock_folio_killable);
 
 int __lock_page_async(struct page *page, struct wait_page_queue *wait)
 {
@@ -1562,6 +1561,8 @@ int __lock_page_async(struct page *page, struct wait_page_queue *wait)
 int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 			 unsigned int flags)
 {
+	struct folio *folio = page_folio(page);
+
 	if (fault_flag_allow_retry_first(flags)) {
 		/*
 		 * CAUTION! In this case, mmap_lock is not released
@@ -1580,13 +1581,13 @@ int __lock_page_or_retry(struct page *page, struct mm_struct *mm,
 	if (flags & FAULT_FLAG_KILLABLE) {
 		int ret;
 
-		ret = __lock_page_killable(page);
+		ret = __lock_folio_killable(folio);
 		if (ret) {
 			mmap_read_unlock(mm);
 			return 0;
 		}
 	} else {
-		__lock_folio(page_folio(page));
+		__lock_folio(folio);
 	}
 
 	return 1;
@@ -2778,7 +2779,7 @@ static int lock_page_maybe_drop_mmap(struct vm_fault *vmf, struct page *page,
 
 	*fpin = maybe_unlock_mmap_for_io(vmf, *fpin);
 	if (vmf->flags & FAULT_FLAG_KILLABLE) {
-		if (__lock_page_killable(&folio->page)) {
+		if (__lock_folio_killable(folio)) {
 			/*
 			 * We didn't have the right flags to drop the mmap_lock,
 			 * but all fault_handlers only check for fatal signals
--
2.29.2
Thread overview: 35+ messages
2020-12-16 18:23 [PATCH 00/25] Page folios Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 01/25] mm: Introduce struct folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 02/25] mm: Add put_folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 03/25] mm: Add get_folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 04/25] mm: Create FolioFlags Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 05/25] mm: Add unlock_folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 06/25] mm: Add lock_folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` Matthew Wilcox (Oracle) [this message]
2020-12-16 18:23 ` [PATCH 08/25] mm: Add __alloc_folio_node and alloc_folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 09/25] mm: Convert __page_cache_alloc to return a folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 10/25] mm/filemap: Convert end_page_writeback to use a folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 11/25] mm: Convert mapping_get_entry to return a folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 12/25] mm: Add mark_folio_accessed Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 13/25] mm: Add filemap_get_folio and find_get_folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 14/25] mm/filemap: Add folio_add_to_page_cache Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 15/25] mm/swap: Convert rotate_reclaimable_page to folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 16/25] mm: Add folio_mapping Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 17/25] mm: Rename THP_SUPPORT to MULTI_PAGE_FOLIOS Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 18/25] btrfs: Use readahead_batch_length Matthew Wilcox (Oracle)
2020-12-17 9:15 ` John Hubbard
2020-12-17 12:12 ` Matthew Wilcox
2020-12-17 13:42 ` Matthew Wilcox
2020-12-17 19:36 ` John Hubbard
2020-12-16 18:23 ` [PATCH 19/25] fs: Change page refcount rules for readahead Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 20/25] fs: Change readpage to take a folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 21/25] mm: Convert wait_on_page_bit to wait_on_folio_bit Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 22/25] mm: Add wait_on_folio_locked & wait_on_folio_locked_killable Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 23/25] mm: Add flush_dcache_folio Matthew Wilcox (Oracle)
2020-12-16 20:59 ` kernel test robot
2020-12-16 22:01 ` Matthew Wilcox
2020-12-16 18:23 ` [PATCH 24/25] mm: Add read_cache_folio and read_mapping_folio Matthew Wilcox (Oracle)
2020-12-16 18:23 ` [PATCH 25/25] fs: Convert vfs_dedupe_file_range_compare to folios Matthew Wilcox (Oracle)
2020-12-17 12:47 ` [PATCH 00/25] Page folios David Hildenbrand
2020-12-17 13:55 ` Matthew Wilcox
2020-12-17 14:35 ` David Hildenbrand