linux-mm.kvack.org archive mirror
* [PATCH 0/8] Sort out mess in __do_fault()
@ 2014-02-10 20:40 Kirill A. Shutemov
  2014-02-10 20:40 ` [PATCH 1/8] mm, hwpoison: release page on PageHWPoison() " Kirill A. Shutemov
                   ` (7 more replies)
  0 siblings, 8 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:40 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

From: "Kirill A. Shutemov" <kirill@shutemov.name>

The current __do_fault() is awful and unmaintainable. These patches try to
sort it out by splitting __do_fault() into three distinct codepaths:
 - one to handle read page faults;
 - one to handle write page faults to private mappings;
 - one to handle write page faults to shared mappings.

I also found a page refcount leak in the PageHWPoison() path of __do_fault().
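
For reference, the dispatch the series converges on looks like this
(a sketch assembled from the hunks in patches 4-6, not a patch itself):

	/* in do_linear_fault() and do_nonlinear_fault() */
	if (!(flags & FAULT_FLAG_WRITE))
		return do_read_fault(mm, vma, address, pmd, pgoff, flags,
				orig_pte);
	if (!(vma->vm_flags & VM_SHARED))
		return do_cow_fault(mm, vma, address, pmd, pgoff, flags,
				orig_pte);
	return do_shared_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);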

Kirill A. Shutemov (8):
  mm, hwpoison: release page on PageHWPoison() in __do_fault()
  mm: rename __do_fault() -> do_fault()
  mm: do_fault(): extract to call vm_ops->do_fault() to separate
    function
  mm: introduce do_read_fault()
  mm: introduce do_cow_fault()
  mm: introduce do_shared_fault() and drop do_fault()
  mm: consolidate code to call vm_ops->page_mkwrite()
  mm: consolidate code to setup pte

 mm/memory.c | 394 ++++++++++++++++++++++++++++++------------------------------
 1 file changed, 194 insertions(+), 200 deletions(-)

-- 
1.8.5.4

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH 1/8] mm, hwpoison: release page on PageHWPoison() in __do_fault()
  2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
@ 2014-02-10 20:40 ` Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 2/8] mm: rename __do_fault() -> do_fault() Kirill A. Shutemov
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:40 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

It seems we forget to release the page after detecting a hardware error:
->fault() returns the page to us with a reference held, and the
VM_FAULT_HWPOISON path bails out without dropping it.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/mm/memory.c b/mm/memory.c
index be6a0c0d4ae0..5f2001a7ab31 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3348,6 +3348,7 @@ static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (ret & VM_FAULT_LOCKED)
 			unlock_page(vmf.page);
 		ret = VM_FAULT_HWPOISON;
+		page_cache_release(vmf.page);
 		goto uncharge_out;
 	}
 
-- 
1.8.5.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 2/8] mm: rename __do_fault() -> do_fault()
  2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
  2014-02-10 20:40 ` [PATCH 1/8] mm, hwpoison: release page on PageHWPoison() " Kirill A. Shutemov
@ 2014-02-10 20:41 ` Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 3/8] mm: do_fault(): extract to call vm_ops->do_fault() to separate function Kirill A. Shutemov
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:41 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

The name do_fault() is currently unused: there is no reason to keep the
underscores in __do_fault().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 5f2001a7ab31..e626283089ca 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2748,7 +2748,7 @@ reuse:
 		 * bit after it clear all dirty ptes, but before a racing
 		 * do_wp_page installs a dirty pte.
 		 *
-		 * __do_fault is protected similarly.
+		 * do_fault is protected similarly.
 		 */
 		if (!page_mkwrite) {
 			wait_on_page_locked(dirty_page);
@@ -3287,7 +3287,7 @@ oom:
 }
 
 /*
- * __do_fault() tries to create a new page mapping. It aggressively
+ * do_fault() tries to create a new page mapping. It aggressively
  * tries to share with existing pages, but makes a separate copy if
  * the FAULT_FLAG_WRITE is set in the flags parameter in order to avoid
  * the next page fault.
@@ -3299,7 +3299,7 @@ oom:
  * but allow concurrent faults), and pte neither mapped nor locked.
  * We return with mmap_sem still held, but pte unmapped and unlocked.
  */
-static int __do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+static int do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmd,
 		pgoff_t pgoff, unsigned int flags, pte_t orig_pte)
 {
@@ -3496,7 +3496,7 @@ static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			- vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
 
 	pte_unmap(page_table);
-	return __do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
+	return do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
 }
 
 /*
@@ -3528,7 +3528,7 @@ static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 
 	pgoff = pte_to_pgoff(orig_pte);
-	return __do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
+	return do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
 }
 
 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
-- 
1.8.5.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 3/8] mm: do_fault(): extract to call vm_ops->do_fault() to separate function
  2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
  2014-02-10 20:40 ` [PATCH 1/8] mm, hwpoison: release page on PageHWPoison() " Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 2/8] mm: rename __do_fault() -> do_fault() Kirill A. Shutemov
@ 2014-02-10 20:41 ` Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 4/8] mm: introduce do_read_fault() Kirill A. Shutemov
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:41 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

This patch extracts the call to vm_ops->fault() and the basic error
handling into a separate function, __do_fault(). The code will be reused
by subsequent patches.
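
After this patch, every user of the helper follows the same contract
(a sketch matching the hunks below): on any of the error/retry returns
there is nothing to clean up; otherwise the page comes back locked and
with a reference held that the caller must eventually drop.

	struct page *fault_page;
	int ret;

	ret = __do_fault(vma, address, pgoff, flags, &fault_page);
	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
		return ret;		/* nothing to release */

	/* fault_page is locked and referenced here; unlock and release it
	 * once the pte is set up (or on any later error path) */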

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 76 ++++++++++++++++++++++++++++++++++++-------------------------
 1 file changed, 45 insertions(+), 31 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index e626283089ca..d3317ac02a5b 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3286,6 +3286,37 @@ oom:
 	return VM_FAULT_OOM;
 }
 
+static int __do_fault(struct vm_area_struct *vma, unsigned long address,
+		pgoff_t pgoff, unsigned int flags, struct page **page)
+{
+	struct vm_fault vmf;
+	int ret;
+
+	vmf.virtual_address = (void __user *)(address & PAGE_MASK);
+	vmf.pgoff = pgoff;
+	vmf.flags = flags;
+	vmf.page = NULL;
+
+	ret = vma->vm_ops->fault(vma, &vmf);
+	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
+		return ret;
+
+	if (unlikely(PageHWPoison(vmf.page))) {
+		if (ret & VM_FAULT_LOCKED)
+			unlock_page(vmf.page);
+		page_cache_release(vmf.page);
+		return VM_FAULT_HWPOISON;
+	}
+
+	if (unlikely(!(ret & VM_FAULT_LOCKED)))
+		lock_page(vmf.page);
+	else
+		VM_BUG_ON_PAGE(!PageLocked(vmf.page), vmf.page);
+
+	*page = vmf.page;
+	return ret;
+}
+
 /*
  * do_fault() tries to create a new page mapping. It aggressively
  * tries to share with existing pages, but makes a separate copy if
@@ -3305,12 +3336,11 @@ static int do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 {
 	pte_t *page_table;
 	spinlock_t *ptl;
-	struct page *page;
+	struct page *page, *fault_page;
 	struct page *cow_page;
 	pte_t entry;
 	int anon = 0;
 	struct page *dirty_page = NULL;
-	struct vm_fault vmf;
 	int ret;
 	int page_mkwrite = 0;
 
@@ -3334,42 +3364,19 @@ static int do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	} else
 		cow_page = NULL;
 
-	vmf.virtual_address = (void __user *)(address & PAGE_MASK);
-	vmf.pgoff = pgoff;
-	vmf.flags = flags;
-	vmf.page = NULL;
-
-	ret = vma->vm_ops->fault(vma, &vmf);
-	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE |
-			    VM_FAULT_RETRY)))
+	ret = __do_fault(vma, address, pgoff, flags, &fault_page);
+	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
 		goto uncharge_out;
 
-	if (unlikely(PageHWPoison(vmf.page))) {
-		if (ret & VM_FAULT_LOCKED)
-			unlock_page(vmf.page);
-		ret = VM_FAULT_HWPOISON;
-		page_cache_release(vmf.page);
-		goto uncharge_out;
-	}
-
-	/*
-	 * For consistency in subsequent calls, make the faulted page always
-	 * locked.
-	 */
-	if (unlikely(!(ret & VM_FAULT_LOCKED)))
-		lock_page(vmf.page);
-	else
-		VM_BUG_ON_PAGE(!PageLocked(vmf.page), vmf.page);
-
 	/*
 	 * Should we do an early C-O-W break?
 	 */
-	page = vmf.page;
+	page = fault_page;
 	if (flags & FAULT_FLAG_WRITE) {
 		if (!(vma->vm_flags & VM_SHARED)) {
 			page = cow_page;
 			anon = 1;
-			copy_user_highpage(page, vmf.page, address, vma);
+			copy_user_highpage(page, fault_page, address, vma);
 			__SetPageUptodate(page);
 		} else {
 			/*
@@ -3378,8 +3385,15 @@ static int do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			 * to become writable
 			 */
 			if (vma->vm_ops->page_mkwrite) {
+				struct vm_fault vmf;
 				int tmp;
 
+				vmf.virtual_address =
+					(void __user *)(address & PAGE_MASK);
+				vmf.pgoff = pgoff;
+				vmf.flags = flags;
+				vmf.page = fault_page;
+
 				unlock_page(page);
 				vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
 				tmp = vma->vm_ops->page_mkwrite(vma, &vmf);
@@ -3469,9 +3483,9 @@ static int do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		if (vma->vm_file && !page_mkwrite)
 			file_update_time(vma->vm_file);
 	} else {
-		unlock_page(vmf.page);
+		unlock_page(fault_page);
 		if (anon)
-			page_cache_release(vmf.page);
+			page_cache_release(fault_page);
 	}
 
 	return ret;
-- 
1.8.5.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 4/8] mm: introduce do_read_fault()
  2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
                   ` (2 preceding siblings ...)
  2014-02-10 20:41 ` [PATCH 3/8] mm: do_fault(): extract to call vm_ops->do_fault() to separate function Kirill A. Shutemov
@ 2014-02-10 20:41 ` Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 5/8] mm: introduce do_cow_fault() Kirill A. Shutemov
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:41 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

This patch introduces do_read_fault(). The function does what do_fault()
does for read page faults: call __do_fault() to bring the page in, retake
the page table lock, recheck the pte with pte_same() to catch a parallel
fault that completed while we were in ->fault(), and set up the pte.

Unlike do_fault(), do_read_fault() is pretty clean and straightforward.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 43 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 43 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index d3317ac02a5b..cbc17f47df11 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3317,6 +3317,43 @@ static int __do_fault(struct vm_area_struct *vma, unsigned long address,
 	return ret;
 }
 
+static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+		unsigned long address, pmd_t *pmd,
+		pgoff_t pgoff, unsigned int flags, pte_t orig_pte)
+{
+	struct page *fault_page;
+	spinlock_t *ptl;
+	pte_t entry, *pte;
+	int ret;
+
+	ret = __do_fault(vma, address, pgoff, flags, &fault_page);
+	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
+		return ret;
+
+	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
+	if (unlikely(!pte_same(*pte, orig_pte))) {
+		pte_unmap_unlock(pte, ptl);
+		unlock_page(fault_page);
+		page_cache_release(fault_page);
+		return ret;
+	}
+
+	flush_icache_page(vma, fault_page);
+	entry = mk_pte(fault_page, vma->vm_page_prot);
+	if (pte_file(orig_pte) && pte_file_soft_dirty(orig_pte))
+		pte_mksoft_dirty(entry);
+	inc_mm_counter_fast(mm, MM_FILEPAGES);
+	page_add_file_rmap(fault_page);
+	set_pte_at(mm, address, pte, entry);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache(vma, address, pte);
+	pte_unmap_unlock(pte, ptl);
+	unlock_page(fault_page);
+
+	return ret;
+}
+
 /*
  * do_fault() tries to create a new page mapping. It aggressively
  * tries to share with existing pages, but makes a separate copy if
@@ -3510,6 +3547,9 @@ static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			- vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
 
 	pte_unmap(page_table);
+	if (!(flags & FAULT_FLAG_WRITE))
+		return do_read_fault(mm, vma, address, pmd, pgoff, flags,
+				orig_pte);
 	return do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
 }
 
@@ -3542,6 +3582,9 @@ static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 
 	pgoff = pte_to_pgoff(orig_pte);
+	if (!(flags & FAULT_FLAG_WRITE))
+		return do_read_fault(mm, vma, address, pmd, pgoff, flags,
+				orig_pte);
 	return do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
 }
 
-- 
1.8.5.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 5/8] mm: introduce do_cow_fault()
  2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
                   ` (3 preceding siblings ...)
  2014-02-10 20:41 ` [PATCH 4/8] mm: introduce do_read_fault() Kirill A. Shutemov
@ 2014-02-10 20:41 ` Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 6/8] mm: introduce do_shared_fault() and drop do_fault() Kirill A. Shutemov
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:41 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

This patch introduces do_cow_fault(). The function does what do_fault()
does for write page faults to private mappings.

Unlike do_fault(), do_cow_fault() is relatively clean and
straightforward.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 62 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index cbc17f47df11..9ad0754d11ba 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3354,6 +3354,62 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	return ret;
 }
 
+static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+		unsigned long address, pmd_t *pmd,
+		pgoff_t pgoff, unsigned int flags, pte_t orig_pte)
+{
+	struct page *fault_page, *new_page;
+	spinlock_t *ptl;
+	pte_t entry, *pte;
+	int ret;
+
+	if (unlikely(anon_vma_prepare(vma)))
+		return VM_FAULT_OOM;
+
+	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
+	if (!new_page)
+		return VM_FAULT_OOM;
+
+	if (mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL)) {
+		page_cache_release(new_page);
+		return VM_FAULT_OOM;
+	}
+
+	ret = __do_fault(vma, address, pgoff, flags, &fault_page);
+	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
+		goto uncharge_out;
+
+	copy_user_highpage(new_page, fault_page, address, vma);
+	__SetPageUptodate(new_page);
+
+	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
+	if (unlikely(!pte_same(*pte, orig_pte))) {
+		pte_unmap_unlock(pte, ptl);
+		unlock_page(fault_page);
+		page_cache_release(fault_page);
+		goto uncharge_out;
+	}
+
+	flush_icache_page(vma, new_page);
+	entry = mk_pte(new_page, vma->vm_page_prot);
+	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+	inc_mm_counter_fast(mm, MM_ANONPAGES);
+	page_add_new_anon_rmap(new_page, vma, address);
+	set_pte_at(mm, address, pte, entry);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache(vma, address, pte);
+
+	pte_unmap_unlock(pte, ptl);
+	unlock_page(fault_page);
+	page_cache_release(fault_page);
+	return ret;
+uncharge_out:
+	mem_cgroup_uncharge_page(new_page);
+	page_cache_release(new_page);
+	return ret;
+}
+
 /*
  * do_fault() tries to create a new page mapping. It aggressively
  * tries to share with existing pages, but makes a separate copy if
@@ -3550,6 +3606,9 @@ static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!(flags & FAULT_FLAG_WRITE))
 		return do_read_fault(mm, vma, address, pmd, pgoff, flags,
 				orig_pte);
+	if (!(vma->vm_flags & VM_SHARED))
+		return do_cow_fault(mm, vma, address, pmd, pgoff, flags,
+				orig_pte);
 	return do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
 }
 
@@ -3585,6 +3644,9 @@ static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!(flags & FAULT_FLAG_WRITE))
 		return do_read_fault(mm, vma, address, pmd, pgoff, flags,
 				orig_pte);
+	if (!(vma->vm_flags & VM_SHARED))
+		return do_cow_fault(mm, vma, address, pmd, pgoff, flags,
+				orig_pte);
 	return do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
 }
 
-- 
1.8.5.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 6/8] mm: introduce do_shared_fault() and drop do_fault()
  2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
                   ` (4 preceding siblings ...)
  2014-02-10 20:41 ` [PATCH 5/8] mm: introduce do_cow_fault() Kirill A. Shutemov
@ 2014-02-10 20:41 ` Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 7/8] mm: consolidate code to call vm_ops->page_mkwrite() Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 8/8] mm: consolidate code to setup pte Kirill A. Shutemov
  7 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:41 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

This patch introduces do_shared_fault(). The function does what
do_fault() does for write faults to shared mappings.

Unlike do_fault(), do_shared_fault() is relatively clean and
straightforward.

The old do_fault() is not needed anymore. Let it die.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 224 ++++++++++++++++--------------------------------------------
 1 file changed, 60 insertions(+), 164 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 9ad0754d11ba..288a351c6dd0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2748,7 +2748,7 @@ reuse:
 		 * bit after it clear all dirty ptes, but before a racing
 		 * do_wp_page installs a dirty pte.
 		 *
-		 * do_fault is protected similarly.
+		 * do_shared_fault is protected similarly.
 		 */
 		if (!page_mkwrite) {
 			wait_on_page_locked(dirty_page);
@@ -3410,188 +3410,84 @@ uncharge_out:
 	return ret;
 }
 
-/*
- * do_fault() tries to create a new page mapping. It aggressively
- * tries to share with existing pages, but makes a separate copy if
- * the FAULT_FLAG_WRITE is set in the flags parameter in order to avoid
- * the next page fault.
- *
- * As this is called only for pages that do not currently exist, we
- * do not need to flush old virtual caches or the TLB.
- *
- * We enter with non-exclusive mmap_sem (to exclude vma changes,
- * but allow concurrent faults), and pte neither mapped nor locked.
- * We return with mmap_sem still held, but pte unmapped and unlocked.
- */
-static int do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+static int do_shared_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmd,
 		pgoff_t pgoff, unsigned int flags, pte_t orig_pte)
 {
-	pte_t *page_table;
+	struct page *fault_page;
 	spinlock_t *ptl;
-	struct page *page, *fault_page;
-	struct page *cow_page;
-	pte_t entry;
-	int anon = 0;
-	struct page *dirty_page = NULL;
-	int ret;
-	int page_mkwrite = 0;
-
-	/*
-	 * If we do COW later, allocate page befor taking lock_page()
-	 * on the file cache page. This will reduce lock holding time.
-	 */
-	if ((flags & FAULT_FLAG_WRITE) && !(vma->vm_flags & VM_SHARED)) {
-
-		if (unlikely(anon_vma_prepare(vma)))
-			return VM_FAULT_OOM;
-
-		cow_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
-		if (!cow_page)
-			return VM_FAULT_OOM;
-
-		if (mem_cgroup_newpage_charge(cow_page, mm, GFP_KERNEL)) {
-			page_cache_release(cow_page);
-			return VM_FAULT_OOM;
-		}
-	} else
-		cow_page = NULL;
+	pte_t entry, *pte;
+	int dirtied = 0;
+	struct vm_fault vmf;
+	int ret, tmp;
 
 	ret = __do_fault(vma, address, pgoff, flags, &fault_page);
 	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE | VM_FAULT_RETRY)))
-		goto uncharge_out;
+		return ret;
 
 	/*
-	 * Should we do an early C-O-W break?
+	 * Check if the backing address space wants to know that the page is
+	 * about to become writable
 	 */
-	page = fault_page;
-	if (flags & FAULT_FLAG_WRITE) {
-		if (!(vma->vm_flags & VM_SHARED)) {
-			page = cow_page;
-			anon = 1;
-			copy_user_highpage(page, fault_page, address, vma);
-			__SetPageUptodate(page);
-		} else {
-			/*
-			 * If the page will be shareable, see if the backing
-			 * address space wants to know that the page is about
-			 * to become writable
-			 */
-			if (vma->vm_ops->page_mkwrite) {
-				struct vm_fault vmf;
-				int tmp;
-
-				vmf.virtual_address =
-					(void __user *)(address & PAGE_MASK);
-				vmf.pgoff = pgoff;
-				vmf.flags = flags;
-				vmf.page = fault_page;
-
-				unlock_page(page);
-				vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
-				tmp = vma->vm_ops->page_mkwrite(vma, &vmf);
-				if (unlikely(tmp &
-					  (VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
-					ret = tmp;
-					goto unwritable_page;
-				}
-				if (unlikely(!(tmp & VM_FAULT_LOCKED))) {
-					lock_page(page);
-					if (!page->mapping) {
-						ret = 0; /* retry the fault */
-						unlock_page(page);
-						goto unwritable_page;
-					}
-				} else
-					VM_BUG_ON_PAGE(!PageLocked(page), page);
-				page_mkwrite = 1;
-			}
-		}
+	if (!vma->vm_ops->page_mkwrite)
+		goto set_pte;
 
-	}
+	unlock_page(fault_page);
+	vmf.virtual_address = (void __user *)(address & PAGE_MASK);
+	vmf.pgoff = pgoff;
+	vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
+	vmf.page = fault_page;
 
-	page_table = pte_offset_map_lock(mm, pmd, address, &ptl);
+	tmp = vma->vm_ops->page_mkwrite(vma, &vmf);
+	if (unlikely(tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
+		page_cache_release(fault_page);
+		return tmp;
+	}
 
-	/*
-	 * This silly early PAGE_DIRTY setting removes a race
-	 * due to the bad i386 page protection. But it's valid
-	 * for other architectures too.
-	 *
-	 * Note that if FAULT_FLAG_WRITE is set, we either now have
-	 * an exclusive copy of the page, or this is a shared mapping,
-	 * so we can make it writable and dirty to avoid having to
-	 * handle that later.
-	 */
-	/* Only go through if we didn't race with anybody else... */
-	if (likely(pte_same(*page_table, orig_pte))) {
-		flush_icache_page(vma, page);
-		entry = mk_pte(page, vma->vm_page_prot);
-		if (flags & FAULT_FLAG_WRITE)
-			entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-		else if (pte_file(orig_pte) && pte_file_soft_dirty(orig_pte))
-			pte_mksoft_dirty(entry);
-		if (anon) {
-			inc_mm_counter_fast(mm, MM_ANONPAGES);
-			page_add_new_anon_rmap(page, vma, address);
-		} else {
-			inc_mm_counter_fast(mm, MM_FILEPAGES);
-			page_add_file_rmap(page);
-			if (flags & FAULT_FLAG_WRITE) {
-				dirty_page = page;
-				get_page(dirty_page);
-			}
+	if (unlikely(!(tmp & VM_FAULT_LOCKED))) {
+		lock_page(fault_page);
+		if (!fault_page->mapping) {
+			unlock_page(fault_page);
+			page_cache_release(fault_page);
+			return 0; /* retry */
 		}
-		set_pte_at(mm, address, page_table, entry);
-
-		/* no need to invalidate: a not-present page won't be cached */
-		update_mmu_cache(vma, address, page_table);
-	} else {
-		if (cow_page)
-			mem_cgroup_uncharge_page(cow_page);
-		if (anon)
-			page_cache_release(page);
-		else
-			anon = 1; /* no anon but release faulted_page */
+	} else
+		VM_BUG_ON_PAGE(!PageLocked(fault_page), fault_page);
+set_pte:
+	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
+	if (unlikely(!pte_same(*pte, orig_pte))) {
+		pte_unmap_unlock(pte, ptl);
+		unlock_page(fault_page);
+		page_cache_release(fault_page);
+		return ret;
 	}
 
-	pte_unmap_unlock(page_table, ptl);
-
-	if (dirty_page) {
-		struct address_space *mapping = page->mapping;
-		int dirtied = 0;
+	flush_icache_page(vma, fault_page);
+	entry = mk_pte(fault_page, vma->vm_page_prot);
+	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+	inc_mm_counter_fast(mm, MM_FILEPAGES);
+	page_add_file_rmap(fault_page);
+	set_pte_at(mm, address, pte, entry);
 
-		if (set_page_dirty(dirty_page))
-			dirtied = 1;
-		unlock_page(dirty_page);
-		put_page(dirty_page);
-		if ((dirtied || page_mkwrite) && mapping) {
-			/*
-			 * Some device drivers do not set page.mapping but still
-			 * dirty their pages
-			 */
-			balance_dirty_pages_ratelimited(mapping);
-		}
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache(vma, address, pte);
+	pte_unmap_unlock(pte, ptl);
 
-		/* file_update_time outside page_lock */
-		if (vma->vm_file && !page_mkwrite)
-			file_update_time(vma->vm_file);
-	} else {
-		unlock_page(fault_page);
-		if (anon)
-			page_cache_release(fault_page);
+	if (set_page_dirty(fault_page))
+		dirtied = 1;
+	unlock_page(fault_page);
+	if ((dirtied || vma->vm_ops->page_mkwrite) && fault_page->mapping) {
+		/*
+		 * Some device drivers do not set page.mapping but still
+		 * dirty their pages
+		 */
+		balance_dirty_pages_ratelimited(fault_page->mapping);
 	}
 
-	return ret;
+	/* file_update_time outside page_lock */
+	if (vma->vm_file && !vma->vm_ops->page_mkwrite)
+		file_update_time(vma->vm_file);
 
-unwritable_page:
-	page_cache_release(page);
-	return ret;
-uncharge_out:
-	/* fs's fault handler get error */
-	if (cow_page) {
-		mem_cgroup_uncharge_page(cow_page);
-		page_cache_release(cow_page);
-	}
 	return ret;
 }
 
@@ -3609,7 +3505,7 @@ static int do_linear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!(vma->vm_flags & VM_SHARED))
 		return do_cow_fault(mm, vma, address, pmd, pgoff, flags,
 				orig_pte);
-	return do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
+	return do_shared_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
 }
 
 /*
@@ -3647,7 +3543,7 @@ static int do_nonlinear_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!(vma->vm_flags & VM_SHARED))
 		return do_cow_fault(mm, vma, address, pmd, pgoff, flags,
 				orig_pte);
-	return do_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
+	return do_shared_fault(mm, vma, address, pmd, pgoff, flags, orig_pte);
 }
 
 int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
-- 
1.8.5.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 7/8] mm: consolidate code to call vm_ops->page_mkwrite()
  2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
                   ` (5 preceding siblings ...)
  2014-02-10 20:41 ` [PATCH 6/8] mm: introduce do_shared_fault() and drop do_fault() Kirill A. Shutemov
@ 2014-02-10 20:41 ` Kirill A. Shutemov
  2014-02-17 17:17   ` Kirill A. Shutemov
  2014-02-10 20:41 ` [PATCH 8/8] mm: consolidate code to setup pte Kirill A. Shutemov
  7 siblings, 1 reply; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:41 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

There are two functions which need to call vm_ops->page_mkwrite():
do_shared_fault() and do_wp_page(). We can consolidate the preparation
code into a new helper, do_page_mkwrite().
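
A sketch of the resulting call sites (matching the do_wp_page() hunk
below): the helper returns 0 when the page was truncated while unlocked,
meaning the fault should be retried; error bits are passed through; any
other value means the page is locked again.

	tmp = do_page_mkwrite(vma, old_page, address);
	if (unlikely(!tmp || (tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
		page_cache_release(old_page);
		return tmp;
	}
	/* here the page is locked and still part of its mapping */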

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 104 +++++++++++++++++++++++++-----------------------------------
 1 file changed, 44 insertions(+), 60 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 288a351c6dd0..c36a21b912e1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2587,6 +2587,37 @@ static inline void cow_user_page(struct page *dst, struct page *src, unsigned lo
 }
 
 /*
+ * Notify the address space that the page is about to become writable so that
+ * it can prohibit this or wait for the page to get into an appropriate state.
+ *
+ * We do this without the lock held, so that it can sleep if it needs to.
+ */
+static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
+	       unsigned long address)
+{
+	struct vm_fault vmf;
+	int ret;
+
+	vmf.virtual_address = (void __user *)(address & PAGE_MASK);
+	vmf.pgoff = page->index;
+	vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
+	vmf.page = page;
+
+	ret = vma->vm_ops->page_mkwrite(vma, &vmf);
+	if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))
+		return ret;
+	if (unlikely(!(ret & VM_FAULT_LOCKED))) {
+		lock_page(page);
+		if (!page->mapping) {
+			unlock_page(page);
+			return 0; /* retry */
+		}
+	} else
+		VM_BUG_ON_PAGE(!PageLocked(page), page);
+	return ret;
+}
+
+/*
  * This routine handles present pages, when users try to write
  * to a shared page. It is done by copying the page to a new address
  * and decrementing the shared-page counter for the old page.
@@ -2668,42 +2699,15 @@ static int do_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		 * get_user_pages(.write=1, .force=1).
 		 */
 		if (vma->vm_ops && vma->vm_ops->page_mkwrite) {
-			struct vm_fault vmf;
 			int tmp;
-
-			vmf.virtual_address = (void __user *)(address &
-								PAGE_MASK);
-			vmf.pgoff = old_page->index;
-			vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
-			vmf.page = old_page;
-
-			/*
-			 * Notify the address space that the page is about to
-			 * become writable so that it can prohibit this or wait
-			 * for the page to get into an appropriate state.
-			 *
-			 * We do this without the lock held, so that it can
-			 * sleep if it needs to.
-			 */
 			page_cache_get(old_page);
 			pte_unmap_unlock(page_table, ptl);
-
-			tmp = vma->vm_ops->page_mkwrite(vma, &vmf);
-			if (unlikely(tmp &
-					(VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
-				ret = tmp;
-				goto unwritable_page;
+			tmp = do_page_mkwrite(vma, old_page, address);
+			if (unlikely(!tmp || (tmp &
+					(VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
+				page_cache_release(old_page);
+				return tmp;
 			}
-			if (unlikely(!(tmp & VM_FAULT_LOCKED))) {
-				lock_page(old_page);
-				if (!old_page->mapping) {
-					ret = 0; /* retry the fault */
-					unlock_page(old_page);
-					goto unwritable_page;
-				}
-			} else
-				VM_BUG_ON_PAGE(!PageLocked(old_page), old_page);
-
 			/*
 			 * Since we dropped the lock we need to revalidate
 			 * the PTE as someone else may have changed it.  If
@@ -2892,10 +2896,6 @@ oom:
 	if (old_page)
 		page_cache_release(old_page);
 	return VM_FAULT_OOM;
-
-unwritable_page:
-	page_cache_release(old_page);
-	return ret;
 }
 
 static void unmap_mapping_range_vma(struct vm_area_struct *vma,
@@ -3418,7 +3418,6 @@ static int do_shared_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	spinlock_t *ptl;
 	pte_t entry, *pte;
 	int dirtied = 0;
-	struct vm_fault vmf;
 	int ret, tmp;
 
 	ret = __do_fault(vma, address, pgoff, flags, &fault_page);
@@ -3429,31 +3428,16 @@ static int do_shared_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * Check if the backing address space wants to know that the page is
 	 * about to become writable
 	 */
-	if (!vma->vm_ops->page_mkwrite)
-		goto set_pte;
-
-	unlock_page(fault_page);
-	vmf.virtual_address = (void __user *)(address & PAGE_MASK);
-	vmf.pgoff = pgoff;
-	vmf.flags = FAULT_FLAG_WRITE|FAULT_FLAG_MKWRITE;
-	vmf.page = fault_page;
-
-	tmp = vma->vm_ops->page_mkwrite(vma, &vmf);
-	if (unlikely(tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE))) {
-		page_cache_release(fault_page);
-		return tmp;
-	}
-
-	if (unlikely(!(tmp & VM_FAULT_LOCKED))) {
-		lock_page(fault_page);
-		if (!fault_page->mapping) {
-			unlock_page(fault_page);
+	if (vma->vm_ops->page_mkwrite) {
+		unlock_page(fault_page);
+		tmp = do_page_mkwrite(vma, fault_page, address);
+		if (unlikely(!tmp ||
+				(tmp & (VM_FAULT_ERROR | VM_FAULT_NOPAGE)))) {
 			page_cache_release(fault_page);
-			return 0; /* retry */
+			return tmp;
 		}
-	} else
-		VM_BUG_ON_PAGE(!PageLocked(fault_page), fault_page);
-set_pte:
+	}
+
 	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (unlikely(!pte_same(*pte, orig_pte))) {
 		pte_unmap_unlock(pte, ptl);
-- 
1.8.5.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [PATCH 8/8] mm: consolidate code to setup pte
  2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
                   ` (6 preceding siblings ...)
  2014-02-10 20:41 ` [PATCH 7/8] mm: consolidate code to call vm_ops->page_mkwrite() Kirill A. Shutemov
@ 2014-02-10 20:41 ` Kirill A. Shutemov
  7 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-10 20:41 UTC (permalink / raw)
  To: Andrew Morton, Mel Gorman, Rik van Riel
  Cc: Andi Kleen, Matthew Wilcox, Dave Hansen, linux-mm,
	Kirill A. Shutemov

This patch consolidates the code that sets up the pte from
do_read_fault(), do_cow_fault() and do_shared_fault() into a new helper,
do_set_pte().
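
With the helper in place, the three call sites reduce to one line each
(taken from the hunks below), with the (write, anon) pair encoding the
fault type:

	do_set_pte(vma, address, fault_page, pte, false, false); /* read */
	do_set_pte(vma, address, new_page, pte, true, true);	 /* cow */
	do_set_pte(vma, address, fault_page, pte, true, false);	 /* shared */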

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/memory.c | 66 ++++++++++++++++++++++++++++---------------------------------
 1 file changed, 30 insertions(+), 36 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c36a21b912e1..68c3dc141059 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3317,13 +3317,37 @@ static int __do_fault(struct vm_area_struct *vma, unsigned long address,
 	return ret;
 }
 
+static void do_set_pte(struct vm_area_struct *vma, unsigned long address,
+		struct page *page, pte_t *pte, bool write, bool anon)
+{
+	pte_t entry;
+
+	flush_icache_page(vma, page);
+	entry = mk_pte(page, vma->vm_page_prot);
+	if (write)
+		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+	else if (pte_file(*pte) && pte_file_soft_dirty(*pte))
+		pte_mksoft_dirty(entry);
+	if (anon) {
+		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
+		page_add_new_anon_rmap(page, vma, address);
+	} else {
+		inc_mm_counter_fast(vma->vm_mm, MM_FILEPAGES);
+		page_add_file_rmap(page);
+	}
+	set_pte_at(vma->vm_mm, address, pte, entry);
+
+	/* no need to invalidate: a not-present page won't be cached */
+	update_mmu_cache(vma, address, pte);
+}
+
 static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		unsigned long address, pmd_t *pmd,
 		pgoff_t pgoff, unsigned int flags, pte_t orig_pte)
 {
 	struct page *fault_page;
 	spinlock_t *ptl;
-	pte_t entry, *pte;
+	pte_t *pte;
 	int ret;
 
 	ret = __do_fault(vma, address, pgoff, flags, &fault_page);
@@ -3337,20 +3361,9 @@ static int do_read_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		page_cache_release(fault_page);
 		return ret;
 	}
-
-	flush_icache_page(vma, fault_page);
-	entry = mk_pte(fault_page, vma->vm_page_prot);
-	if (pte_file(orig_pte) && pte_file_soft_dirty(orig_pte))
-		pte_mksoft_dirty(entry);
-	inc_mm_counter_fast(mm, MM_FILEPAGES);
-	page_add_file_rmap(fault_page);
-	set_pte_at(mm, address, pte, entry);
-
-	/* no need to invalidate: a not-present page won't be cached */
-	update_mmu_cache(vma, address, pte);
+	do_set_pte(vma, address, fault_page, pte, false, false);
 	pte_unmap_unlock(pte, ptl);
 	unlock_page(fault_page);
-
 	return ret;
 }
 
@@ -3360,7 +3373,7 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 {
 	struct page *fault_page, *new_page;
 	spinlock_t *ptl;
-	pte_t entry, *pte;
+	pte_t *pte;
 	int ret;
 
 	if (unlikely(anon_vma_prepare(vma)))
@@ -3389,17 +3402,7 @@ static int do_cow_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		page_cache_release(fault_page);
 		goto uncharge_out;
 	}
-
-	flush_icache_page(vma, new_page);
-	entry = mk_pte(new_page, vma->vm_page_prot);
-	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-	inc_mm_counter_fast(mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(new_page, vma, address);
-	set_pte_at(mm, address, pte, entry);
-
-	/* no need to invalidate: a not-present page won't be cached */
-	update_mmu_cache(vma, address, pte);
-
+	do_set_pte(vma, address, new_page, pte, true, true);
 	pte_unmap_unlock(pte, ptl);
 	unlock_page(fault_page);
 	page_cache_release(fault_page);
@@ -3416,7 +3419,7 @@ static int do_shared_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 {
 	struct page *fault_page;
 	spinlock_t *ptl;
-	pte_t entry, *pte;
+	pte_t *pte;
 	int dirtied = 0;
 	int ret, tmp;
 
@@ -3445,16 +3448,7 @@ static int do_shared_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		page_cache_release(fault_page);
 		return ret;
 	}
-
-	flush_icache_page(vma, fault_page);
-	entry = mk_pte(fault_page, vma->vm_page_prot);
-	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
-	inc_mm_counter_fast(mm, MM_FILEPAGES);
-	page_add_file_rmap(fault_page);
-	set_pte_at(mm, address, pte, entry);
-
-	/* no need to invalidate: a not-present page won't be cached */
-	update_mmu_cache(vma, address, pte);
+	do_set_pte(vma, address, fault_page, pte, true, false);
 	pte_unmap_unlock(pte, ptl);
 
 	if (set_page_dirty(fault_page))
-- 
1.8.5.4

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* RE: [PATCH 7/8] mm: consolidate code to call vm_ops->page_mkwrite()
  2014-02-10 20:41 ` [PATCH 7/8] mm: consolidate code to call vm_ops->page_mkwrite() Kirill A. Shutemov
@ 2014-02-17 17:17   ` Kirill A. Shutemov
  0 siblings, 0 replies; 10+ messages in thread
From: Kirill A. Shutemov @ 2014-02-17 17:17 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Andrew Morton, Mel Gorman, Rik van Riel, Andi Kleen,
	Matthew Wilcox, Dave Hansen, linux-mm

Hi Andrew,

I forgot to set the VM_FAULT_LOCKED bit in do_page_mkwrite()'s return
code when we take the page lock inside do_page_mkwrite(). That triggers a
deadlock if ->page_mkwrite() doesn't take the page lock on its own.

Please replace the original patch with the patch below.
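
For illustration, the fix described above amounts to something like this
in do_page_mkwrite() (a sketch of the described change, not the actual
replacement patch):

	if (unlikely(!(ret & VM_FAULT_LOCKED))) {
		lock_page(page);
		if (!page->mapping) {
			unlock_page(page);
			return 0; /* retry */
		}
		/* assumed one-line fix: report the lock we just took, so
		 * the caller doesn't try to lock the page again and
		 * deadlock */
		ret |= VM_FAULT_LOCKED;
	}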

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2014-02-17 17:18 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2014-02-10 20:40 [PATCH 0/8] Sort out mess in __do_fault() Kirill A. Shutemov
2014-02-10 20:40 ` [PATCH 1/8] mm, hwpoison: release page on PageHWPoison() " Kirill A. Shutemov
2014-02-10 20:41 ` [PATCH 2/8] mm: rename __do_fault() -> do_fault() Kirill A. Shutemov
2014-02-10 20:41 ` [PATCH 3/8] mm: do_fault(): extract to call vm_ops->do_fault() to separate function Kirill A. Shutemov
2014-02-10 20:41 ` [PATCH 4/8] mm: introduce do_read_fault() Kirill A. Shutemov
2014-02-10 20:41 ` [PATCH 5/8] mm: introduce do_cow_fault() Kirill A. Shutemov
2014-02-10 20:41 ` [PATCH 6/8] mm: introduce do_shared_fault() and drop do_fault() Kirill A. Shutemov
2014-02-10 20:41 ` [PATCH 7/8] mm: consolidate code to call vm_ops->page_mkwrite() Kirill A. Shutemov
2014-02-17 17:17   ` Kirill A. Shutemov
2014-02-10 20:41 ` [PATCH 8/8] mm: consolidate code to setup pte Kirill A. Shutemov

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).