From: Jan Kara <jack@suse.cz>
To: linux-mm@kvack.org
Cc: linux-fsdevel@vger.kernel.org, linux-nvdimm@lists.01.org,
Andrew Morton <akpm@linux-foundation.org>,
Ross Zwisler <ross.zwisler@linux.intel.com>,
Jan Kara <jack@suse.cz>
Subject: [PATCH 09/21] mm: Factor out functionality to finish page faults
Date: Tue, 1 Nov 2016 23:36:18 +0100 [thread overview]
Message-ID: <1478039794-20253-13-git-send-email-jack@suse.cz> (raw)
In-Reply-To: <1478039794-20253-1-git-send-email-jack@suse.cz>
Introduce finish_fault() as a helper function for finishing page faults. It
is a rather thin wrapper around alloc_set_pte(), but since we want to call it
from DAX code and from filesystems, it is still useful to avoid some
boilerplate code.
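
For illustration, a caller that has already prepared and locked the page
could finish the fault roughly like this. This is a hypothetical sketch
modeled on the do_read_fault() hunk below; example_fault() and
example_prepare_page() are made-up names and are not part of this patch:

static int example_fault(struct vm_fault *vmf)
{
	int ret;

	/* Made-up helper: look up or allocate vmf->page and lock it. */
	ret = example_prepare_page(vmf);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		return ret;

	/*
	 * Map vmf->page (or vmf->cow_page for write faults on private
	 * mappings); finish_fault() handles PTE locking and insertion,
	 * rmap, memcg charge and LRU addition.
	 */
	ret |= finish_fault(vmf);

	unlock_page(vmf->page);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		put_page(vmf->page);
	return ret;
}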
Reviewed-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
include/linux/mm.h | 1 +
mm/memory.c | 44 +++++++++++++++++++++++++++++++++++---------
2 files changed, 36 insertions(+), 9 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 78173d7de007..7ac2bbaab4f4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -620,6 +620,7 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
struct page *page);
+int finish_fault(struct vm_fault *vmf);
#endif
/*
diff --git a/mm/memory.c b/mm/memory.c
index ac901bb02398..d3fc4988f869 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3033,6 +3033,38 @@ int alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
return 0;
}
+
+/**
+ * finish_fault - finish page fault once we have prepared the page to fault
+ *
+ * @vmf: structure describing the fault
+ *
+ * This function handles all that is needed to finish a page fault once the
+ * page to fault in is prepared. It handles locking of PTEs, inserts the PTE
+ * for the given page, adds the reverse page mapping, handles memcg charges
+ * and LRU addition. The function returns 0 on success, or a VM_FAULT_ code
+ * in case of error.
+ *
+ * The function expects the page to be locked and on success it consumes a
+ * reference to the page being mapped (for the PTE which maps it).
+ */
+int finish_fault(struct vm_fault *vmf)
+{
+ struct page *page;
+ int ret;
+
+ /* Did we COW the page? */
+ if ((vmf->flags & FAULT_FLAG_WRITE) &&
+ !(vmf->vma->vm_flags & VM_SHARED))
+ page = vmf->cow_page;
+ else
+ page = vmf->page;
+ ret = alloc_set_pte(vmf, vmf->memcg, page);
+ if (vmf->pte)
+ pte_unmap_unlock(vmf->pte, vmf->ptl);
+ return ret;
+}
+
static unsigned long fault_around_bytes __read_mostly =
rounddown_pow_of_two(65536);
@@ -3178,9 +3210,7 @@ static int do_read_fault(struct vm_fault *vmf)
if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
return ret;
- ret |= alloc_set_pte(vmf, NULL, vmf->page);
- if (vmf->pte)
- pte_unmap_unlock(vmf->pte, vmf->ptl);
+ ret |= finish_fault(vmf);
unlock_page(vmf->page);
if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
put_page(vmf->page);
@@ -3219,9 +3249,7 @@ static int do_cow_fault(struct vm_fault *vmf)
copy_user_highpage(new_page, vmf->page, vmf->address, vma);
__SetPageUptodate(new_page);
- ret |= alloc_set_pte(vmf, memcg, new_page);
- if (vmf->pte)
- pte_unmap_unlock(vmf->pte, vmf->ptl);
+ ret |= finish_fault(vmf);
if (!(ret & VM_FAULT_DAX_LOCKED)) {
unlock_page(vmf->page);
put_page(vmf->page);
@@ -3262,9 +3290,7 @@ static int do_shared_fault(struct vm_fault *vmf)
}
}
- ret |= alloc_set_pte(vmf, NULL, vmf->page);
- if (vmf->pte)
- pte_unmap_unlock(vmf->pte, vmf->ptl);
+ ret |= finish_fault(vmf);
if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE |
VM_FAULT_RETRY))) {
unlock_page(vmf->page);
--
2.6.6
Thread overview: 42+ messages
2016-11-01 22:36 [PATCH 0/21 v4] dax: Clear dirty bits after flushing caches Jan Kara
2016-11-01 22:36 ` [PATCH 01/20] mm: Change type of vmf->virtual_address Jan Kara
2016-11-02 9:55 ` Kirill A. Shutemov
2016-11-01 22:36 ` [PATCH 01/21] mm: Join struct fault_env and vm_fault Jan Kara
2016-11-02 9:58 ` Kirill A. Shutemov
2016-11-04 4:32 ` Jan Kara
2016-11-01 22:36 ` [PATCH 02/20] " Jan Kara
2016-11-01 22:36 ` [PATCH 02/21] mm: Use vmf->address instead of of vmf->virtual_address Jan Kara
2016-11-02 4:18 ` Hillf Danton
2016-11-04 3:46 ` Jan Kara
2016-11-01 22:36 ` [PATCH 03/21] mm: Use pgoff in struct vm_fault instead of passing it separately Jan Kara
2016-11-01 22:36 ` [PATCH 04/21] mm: Use passed vm_fault structure in __do_fault() Jan Kara
2016-11-01 22:36 ` [PATCH 05/21] mm: Trim __do_fault() arguments Jan Kara
2016-11-01 22:36 ` [PATCH 06/21] mm: Use passed vm_fault structure for in wp_pfn_shared() Jan Kara
2016-11-01 22:36 ` [PATCH 06/20] mm: Use pass " Jan Kara
2016-11-01 22:36 ` [PATCH 07/21] mm: Add orig_pte field into vm_fault Jan Kara
2016-11-01 22:36 ` [PATCH 08/21] mm: Allow full handling of COW faults in ->fault handlers Jan Kara
2016-11-01 22:36 ` Jan Kara [this message]
2016-11-01 22:36 ` [PATCH 10/21] mm: Move handling of COW faults into DAX code Jan Kara
2016-11-01 22:36 ` [PATCH 11/21] mm: Remove unnecessary vma->vm_ops check Jan Kara
2016-11-01 22:36 ` [PATCH 12/21] mm: Factor out common parts of write fault handling Jan Kara
2016-11-01 22:36 ` [PATCH 13/21] mm: Pass vm_fault structure into do_page_mkwrite() Jan Kara
2016-11-01 22:36 ` [PATCH 14/21] mm: Use vmf->page during WP faults Jan Kara
2016-11-01 22:36 ` [PATCH 15/21] mm: Move part of wp_page_reuse() into the single call site Jan Kara
2016-11-01 22:36 ` [PATCH 16/21] mm: Provide helper for finishing mkwrite faults Jan Kara
2016-11-01 22:36 ` [PATCH 17/21] mm: Change return values of finish_mkwrite_fault() Jan Kara
2016-11-01 22:36 ` [PATCH 17/20] mm: Export follow_pte() Jan Kara
2016-11-01 22:36 ` [PATCH 18/20] dax: Make cache flushing protected by entry lock Jan Kara
2016-11-01 22:36 ` [PATCH 18/21] mm: Export follow_pte() Jan Kara
2016-11-01 22:36 ` [PATCH 19/21] dax: Make cache flushing protected by entry lock Jan Kara
2016-11-01 22:36 ` [PATCH 19/20] dax: Protect PTE modification on WP fault by radix tree " Jan Kara
2016-11-01 22:36 ` [PATCH 20/20] dax: Clear dirty entry tags on cache flush Jan Kara
2016-11-01 22:36 ` [PATCH 20/21] dax: Protect PTE modification on WP fault by radix tree entry lock Jan Kara
2016-11-01 22:36 ` [PATCH 21/21] dax: Clear dirty entry tags on cache flush Jan Kara
2016-11-01 23:13 ` [PATCH 0/21 v4] dax: Clear dirty bits after flushing caches Jan Kara
2016-11-02 10:02 ` Kirill A. Shutemov
2016-11-03 20:46 ` Jan Kara
2016-11-02 5:17 ` Ross Zwisler
2016-11-04 4:46 ` Jan Kara
2016-11-04 18:14 ` Jan Kara