From: Laurent Dufour <ldufour@linux.vnet.ibm.com>
To: paulmck@linux.vnet.ibm.com, peterz@infradead.org,
	akpm@linux-foundation.org, kirill@shutemov.name,
	ak@linux.intel.com, mhocko@kernel.org, dave@stgolabs.net,
	jack@suse.cz, Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	haren@linux.vnet.ibm.com, khandual@linux.vnet.ibm.com,
	npiggin@gmail.com, bsingharora@gmail.com,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [RFC v5 04/11] mm: VMA sequence count
Date: Fri, 16 Jun 2017 19:52:28 +0200
Message-ID: <1497635555-25679-5-git-send-email-ldufour@linux.vnet.ibm.com>
In-Reply-To: <1497635555-25679-1-git-send-email-ldufour@linux.vnet.ibm.com>

From: Peter Zijlstra <peterz@infradead.org>

Wrap the VMA modifications (vma_adjust/unmap_page_range) with sequence
counts so that we can easily test whether a VMA has changed.

The unmap_page_range() one allows us to make assumptions about the
page-tables: as long as the seqcount is found not to have changed, the
page-tables read under it can be assumed to still be valid.

The flip side is that with a single count we cannot distinguish a
vma_adjust() from an unmap_page_range(); with the former we could have
re-checked the VMA bounds against the faulting address and continued.
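
Had the two writers been distinguishable (say, with separate counts,
which this patch deliberately avoids), the vma_adjust() case could have
been salvaged by a simple bounds re-check instead of a full fallback,
roughly:

	/* Hypothetical: only reachable if the writer was vma_adjust(). */
	if (address < vma->vm_start || address >= vma->vm_end)
		goto fallback;	/* the VMA no longer covers the fault */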

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[port to 4.12 kernel]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
 include/linux/mm_types.h |  1 +
 mm/memory.c              |  2 ++
 mm/mmap.c                | 13 +++++++++++++
 3 files changed, 16 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 45cdb27791a3..8945743e4609 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -342,6 +342,7 @@ struct vm_area_struct {
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+	seqcount_t vm_sequence;		/* Sequence count tracking VMA changes */
 };
 
 struct core_thread {
diff --git a/mm/memory.c b/mm/memory.c
index f1132f7931ef..5d259cd67a83 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1379,6 +1379,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 	unsigned long next;
 
 	BUG_ON(addr >= end);
+	write_seqcount_begin(&vma->vm_sequence);
 	tlb_start_vma(tlb, vma);
 	pgd = pgd_offset(vma->vm_mm, addr);
 	do {
@@ -1388,6 +1389,7 @@ void unmap_page_range(struct mmu_gather *tlb,
 		next = zap_p4d_range(tlb, vma, pgd, addr, next, details);
 	} while (pgd++, addr = next, addr != end);
 	tlb_end_vma(tlb, vma);
+	write_seqcount_end(&vma->vm_sequence);
 }
 
 
diff --git a/mm/mmap.c b/mm/mmap.c
index f82741e199c0..9f86356d0012 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -543,6 +543,8 @@ void __vma_link_rb(struct mm_struct *mm, struct vm_area_struct *vma,
 	else
 		mm->highest_vm_end = vma->vm_end;
 
+	seqcount_init(&vma->vm_sequence);
+
 	/*
 	 * vma->vm_prev wasn't known when we followed the rbtree to find the
 	 * correct insertion point for that vma. As a result, we could not
@@ -677,6 +679,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	long adjust_next = 0;
 	int remove_next = 0;
 
+	write_seqcount_begin(&vma->vm_sequence);
+	if (next)
+		write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING);
+
 	if (next && !insert) {
 		struct vm_area_struct *exporter = NULL, *importer = NULL;
 
@@ -888,6 +894,7 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 		mm->map_count--;
 		mpol_put(vma_policy(next));
+		write_seqcount_end(&next->vm_sequence);
 		kmem_cache_free(vm_area_cachep, next);
 		/*
 		 * In mprotect's case 6 (see comments on vma_merge),
 		 * we must remove another next too. It would clutter
@@ -901,6 +908,8 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 			 * "vma->vm_next" gap must be updated.
 			 */
 			next = vma->vm_next;
+			if (next)
+				write_seqcount_begin_nested(&next->vm_sequence, SINGLE_DEPTH_NESTING);
 		} else {
 			/*
 			 * For the scope of the comment "next" and
@@ -947,6 +956,10 @@ int __vma_adjust(struct vm_area_struct *vma, unsigned long start,
 	if (insert && file)
 		uprobe_mmap(insert);
 
+	if (next)
+		write_seqcount_end(&next->vm_sequence);
+	write_seqcount_end(&vma->vm_sequence);
+
 	validate_mm(mm);
 
 	return 0;
-- 
2.7.4
