From: Liam Howlett <liam.howlett@oracle.com>
To: "maple-tree@lists.infradead.org" <maple-tree@lists.infradead.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Hugh Dickins <hughd@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Subject: [PATCH v12 28/69] mm/mmap: reorganize munmap to use maple states
Date: Wed, 20 Jul 2022 02:17:53 +0000
Message-ID: <20220720021727.17018-29-Liam.Howlett@oracle.com>
In-Reply-To: <20220720021727.17018-1-Liam.Howlett@oracle.com>

From: "Liam R. Howlett" <Liam.Howlett@Oracle.com>

Remove __do_munmap() in favour of do_munmap(), do_mas_munmap(), and
do_mas_align_munmap().

do_munmap() is a wrapper that creates a maple state for callers that have
not yet been converted to the maple tree.
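
For callers that still pass only an mm, the wrapper is minimal (this is
the code from the mm/mmap.c hunk below):

	int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
		      struct list_head *uf)
	{
		MA_STATE(mas, &mm->mm_mt, start, start);

		return do_mas_munmap(&mas, mm, start, len, uf, false);
	}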

do_mas_munmap() takes a maple state to munmap a range.  It is a small
function that checks for error conditions and aligns the end of the
range.
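
Concretely, the checks reject unaligned or out-of-range requests and
page-align the length (taken from the do_mas_munmap() hunk below):

	if ((offset_in_page(start)) || start > TASK_SIZE ||
	    len > TASK_SIZE - start)
		return -EINVAL;

	end = start + PAGE_ALIGN(len);
	if (end == start)
		return -EINVAL;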

do_mas_align_munmap() munmaps an already-aligned range.
do_mas_align_munmap() starts with the first VMA in the range, then finds
the last VMA in the range.  Both start and end are split if necessary.
Then the VMAs are removed from the linked list and the mm mlock count is
updated at the same time, followed by a single tree operation that
overwrites the area with NULL.  Finally, the detached list is unmapped
and freed.
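
In outline, the flow is as follows (a simplified sketch, not the literal
code; error handling and the mmap_lock downgrade logic are omitted):

	mas_preallocate(mas, vma, GFP_KERNEL);	/* reserve nodes up front */
	if (start > vma->vm_start)
		__split_vma(mm, vma, start, 0);	/* split the first VMA */
	...
	if (last && end < last->vm_end)
		__split_vma(mm, last, end, 1);	/* split the last VMA */
	mm->map_count -= unlock_range(vma, &last, end);	/* mlock accounting */
	mas_store_prealloc(mas, NULL);	/* single tree write over the range */
	/* unlink vma..last from the mm linked list, then: */
	unmap_region(mm, vma, prev, start, end);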

Reorganizing the munmap calls in this way avoids the extra work of
re-aligning ranges for pre-aligned callers that are known to be safe, and
avoids extra VMA lookups and tree walks when making modifications.

detach_vmas_to_be_unmapped() is no longer used, so drop this code.

vm_brk_flags() can call do_mas_munmap() directly, since do_mas_munmap()
itself checks for intersecting VMAs.
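
The relevant change in vm_brk_flags() (from the hunk below);
do_mas_munmap() returns 0 when no VMA intersects the range, so the
explicit check is redundant:

	-	if (find_vma_intersection(mm, addr, addr + len))
	-		ret = do_munmap(mm, addr, len, &uf);
	-
	+	ret = do_mas_munmap(&mas, mm, addr, len, &uf, 0);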

Link: https://lkml.kernel.org/r/20220504011345.662299-13-Liam.Howlett@oracle.com
Link: https://lkml.kernel.org/r/20220621204632.3370049-29-Liam.Howlett@oracle.com
Signed-off-by: Liam R. Howlett <Liam.Howlett@Oracle.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Howells <dhowells@redhat.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: SeongJae Park <sj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
 include/linux/mm.h |   5 +-
 mm/mmap.c          | 228 ++++++++++++++++++++++++++++-----------------
 mm/mremap.c        |  17 ++--
 3 files changed, 158 insertions(+), 92 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a250fd86fde9..75ac5664af69 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2724,8 +2724,9 @@ extern unsigned long mmap_region(struct file *file, unsigned long addr,
 extern unsigned long do_mmap(struct file *file, unsigned long addr,
 	unsigned long len, unsigned long prot, unsigned long flags,
 	unsigned long pgoff, unsigned long *populate, struct list_head *uf);
-extern int __do_munmap(struct mm_struct *, unsigned long, size_t,
-		       struct list_head *uf, bool downgrade);
+extern int do_mas_munmap(struct ma_state *mas, struct mm_struct *mm,
+			 unsigned long start, size_t len, struct list_head *uf,
+			 bool downgrade);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
 		     struct list_head *uf);
 extern int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior);
diff --git a/mm/mmap.c b/mm/mmap.c
index 0cde534a8f9f..280fc2d2854e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2404,47 +2404,6 @@ static void unmap_region(struct mm_struct *mm,
 	tlb_finish_mmu(&tlb);
 }
 
-/*
- * Create a list of vma's touched by the unmap, removing them from the mm's
- * vma list as we go..
- */
-static bool
-detach_vmas_to_be_unmapped(struct mm_struct *mm, struct ma_state *mas,
-	struct vm_area_struct *vma, struct vm_area_struct *prev,
-	unsigned long end)
-{
-	struct vm_area_struct **insertion_point;
-	struct vm_area_struct *tail_vma = NULL;
-
-	insertion_point = (prev ? &prev->vm_next : &mm->mmap);
-	vma->vm_prev = NULL;
-	vma_mas_szero(mas, vma->vm_start, end);
-	do {
-		if (vma->vm_flags & VM_LOCKED)
-			mm->locked_vm -= vma_pages(vma);
-		mm->map_count--;
-		tail_vma = vma;
-		vma = vma->vm_next;
-	} while (vma && vma->vm_start < end);
-	*insertion_point = vma;
-	if (vma)
-		vma->vm_prev = prev;
-	else
-		mm->highest_vm_end = prev ? vm_end_gap(prev) : 0;
-	tail_vma->vm_next = NULL;
-
-	/*
-	 * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
-	 * VM_GROWSUP VMA. Such VMAs can change their size under
-	 * down_read(mmap_lock) and collide with the VMA we are about to unmap.
-	 */
-	if (vma && (vma->vm_flags & VM_GROWSDOWN))
-		return false;
-	if (prev && (prev->vm_flags & VM_GROWSUP))
-		return false;
-	return true;
-}
-
 /*
  * __split_vma() bypasses sysctl_max_map_count checking.  We use this where it
  * has already been checked or doesn't make sense to fail.
@@ -2527,40 +2486,51 @@ int split_vma(struct mm_struct *mm, struct vm_area_struct *vma,
 	return __split_vma(mm, vma, addr, new_below);
 }
 
-/* Munmap is split into 2 main parts -- this part which finds
- * what needs doing, and the areas themselves, which do the
- * work.  This now handles partial unmappings.
- * Jeremy Fitzhardinge <jeremy@goop.org>
- */
-int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
-		struct list_head *uf, bool downgrade)
+static inline int
+unlock_range(struct vm_area_struct *start, struct vm_area_struct **tail,
+	     unsigned long limit)
 {
-	unsigned long end;
-	struct vm_area_struct *vma, *prev, *last;
-	int error = -ENOMEM;
-	MA_STATE(mas, &mm->mm_mt, 0, 0);
+	struct mm_struct *mm = start->vm_mm;
+	struct vm_area_struct *tmp = start;
+	int count = 0;
 
-	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
-		return -EINVAL;
+	while (tmp && tmp->vm_start < limit) {
+		*tail = tmp;
+		count++;
+		if (tmp->vm_flags & VM_LOCKED)
+			mm->locked_vm -= vma_pages(tmp);
 
-	len = PAGE_ALIGN(len);
-	end = start + len;
-	if (len == 0)
-		return -EINVAL;
+		tmp = tmp->vm_next;
+	}
 
-	 /* arch_unmap() might do unmaps itself.  */
-	arch_unmap(mm, start, end);
+	return count;
+}
 
-	/* Find the first overlapping VMA where start < vma->vm_end */
-	vma = find_vma_intersection(mm, start, end);
-	if (!vma)
-		return 0;
+/*
+ * do_mas_align_munmap() - munmap the aligned region from @start to @end.
+ * @mas: The maple_state, ideally set up to alter the correct tree location.
+ * @vma: The starting vm_area_struct
+ * @mm: The mm_struct
+ * @start: The aligned start address to munmap.
+ * @end: The aligned end address to munmap.
+ * @uf: The userfaultfd list_head
+ * @downgrade: Set to true to attempt a write downgrade of the mmap_sem
+ *
+ * If @downgrade is true, check return code for potential release of the lock.
+ */
+static int
+do_mas_align_munmap(struct ma_state *mas, struct vm_area_struct *vma,
+		    struct mm_struct *mm, unsigned long start,
+		    unsigned long end, struct list_head *uf, bool downgrade)
+{
+	struct vm_area_struct *prev, *last;
+	int error = -ENOMEM;
+	/* we have start < vma->vm_end  */
 
-	if (mas_preallocate(&mas, vma, GFP_KERNEL))
+	if (mas_preallocate(mas, vma, GFP_KERNEL))
 		return -ENOMEM;
-	prev = vma->vm_prev;
-	/* we have start < vma->vm_end  */
 
+	mas->last = end - 1;
 	/*
 	 * If we need to split any vma, do it now to save pain later.
 	 *
@@ -2581,17 +2551,31 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		error = __split_vma(mm, vma, start, 0);
 		if (error)
 			goto split_failed;
+
 		prev = vma;
+		vma = __vma_next(mm, prev);
+		mas->index = start;
+		mas_reset(mas);
+	} else {
+		prev = vma->vm_prev;
 	}
 
+	if (vma->vm_end >= end)
+		last = vma;
+	else
+		last = find_vma_intersection(mm, end - 1, end);
+
 	/* Does it split the last one? */
-	last = find_vma(mm, end);
-	if (last && end > last->vm_start) {
+	if (last && end < last->vm_end) {
 		error = __split_vma(mm, last, end, 1);
+
 		if (error)
 			goto split_failed;
+
+		if (vma == last)
+			vma = __vma_next(mm, prev);
+		mas_reset(mas);
 	}
-	vma = __vma_next(mm, prev);
 
 	if (unlikely(uf)) {
 		/*
@@ -2604,16 +2588,46 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 		 * failure that it's not worth optimizing it for.
 		 */
 		error = userfaultfd_unmap_prep(vma, start, end, uf);
+
 		if (error)
 			goto userfaultfd_error;
 	}
 
-	/* Detach vmas from rbtree */
-	if (!detach_vmas_to_be_unmapped(mm, &mas, vma, prev, end))
-		downgrade = false;
+	/*
+	 * unlock any mlock()ed ranges before detaching vmas, count the number
+	 * of VMAs to be dropped, and return the tail entry of the affected
+	 * area.
+	 */
+	mm->map_count -= unlock_range(vma, &last, end);
+	/* Drop removed area from the tree */
+	mas_store_prealloc(mas, NULL);
 
-	if (downgrade)
-		mmap_write_downgrade(mm);
+	/* Detach vmas from the MM linked list */
+	vma->vm_prev = NULL;
+	if (prev)
+		prev->vm_next = last->vm_next;
+	else
+		mm->mmap = last->vm_next;
+
+	if (last->vm_next) {
+		last->vm_next->vm_prev = prev;
+		last->vm_next = NULL;
+	} else
+		mm->highest_vm_end = prev ? vm_end_gap(prev) : 0;
+
+	/*
+	 * Do not downgrade mmap_lock if we are next to VM_GROWSDOWN or
+	 * VM_GROWSUP VMA. Such VMAs can change their size under
+	 * down_read(mmap_lock) and collide with the VMA we are about to unmap.
+	 */
+	if (downgrade) {
+		if (last && (last->vm_flags & VM_GROWSDOWN))
+			downgrade = false;
+		else if (prev && (prev->vm_flags & VM_GROWSUP))
+			downgrade = false;
+		else
+			mmap_write_downgrade(mm);
+	}
 
 	unmap_region(mm, vma, prev, start, end);
 
@@ -2627,14 +2641,63 @@ int __do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 map_count_exceeded:
 split_failed:
 userfaultfd_error:
-	mas_destroy(&mas);
+	mas_destroy(mas);
 	return error;
 }
 
+/*
+ * do_mas_munmap() - munmap a given range.
+ * @mas: The maple state
+ * @mm: The mm_struct
+ * @start: The start address to munmap
+ * @len: The length of the range to munmap
+ * @uf: The userfaultfd list_head
+ * @downgrade: set to true if the user wants to attempt to write_downgrade the
+ * mmap_sem
+ *
+ * This function takes a @mas that is either pointing to the previous VMA or set
+ * to MA_START and sets it up to remove the mapping(s).  The @len will be
+ * aligned and any arch_unmap work will be performed.
+ *
+ * Returns: -EINVAL on failure, 1 on success and unlock, 0 otherwise.
+ */
+int do_mas_munmap(struct ma_state *mas, struct mm_struct *mm,
+		  unsigned long start, size_t len, struct list_head *uf,
+		  bool downgrade)
+{
+	unsigned long end;
+	struct vm_area_struct *vma;
+
+	if ((offset_in_page(start)) || start > TASK_SIZE || len > TASK_SIZE-start)
+		return -EINVAL;
+
+	end = start + PAGE_ALIGN(len);
+	if (end == start)
+		return -EINVAL;
+
+	 /* arch_unmap() might do unmaps itself.  */
+	arch_unmap(mm, start, end);
+
+	/* Find the first overlapping VMA */
+	vma = mas_find(mas, end - 1);
+	if (!vma)
+		return 0;
+
+	return do_mas_align_munmap(mas, vma, mm, start, end, uf, downgrade);
+}
+
+/* do_munmap() - Wrapper function for non-maple tree aware do_munmap() calls.
+ * @mm: The mm_struct
+ * @start: The start address to munmap
+ * @len: The length to be munmapped.
+ * @uf: The userfaultfd list_head
+ */
 int do_munmap(struct mm_struct *mm, unsigned long start, size_t len,
 	      struct list_head *uf)
 {
-	return __do_munmap(mm, start, len, uf, false);
+	MA_STATE(mas, &mm->mm_mt, start, start);
+
+	return do_mas_munmap(&mas, mm, start, len, uf, false);
 }
 
 unsigned long mmap_region(struct file *file, unsigned long addr,
@@ -2668,7 +2731,7 @@ unsigned long mmap_region(struct file *file, unsigned long addr,
 	}
 
 	/* Unmap any existing mapping in the area */
-	if (do_munmap(mm, addr, len, uf))
+	if (do_mas_munmap(&mas, mm, addr, len, uf, false))
 		return -ENOMEM;
 
 	/*
@@ -2888,11 +2951,12 @@ static int __vm_munmap(unsigned long start, size_t len, bool downgrade)
 	int ret;
 	struct mm_struct *mm = current->mm;
 	LIST_HEAD(uf);
+	MA_STATE(mas, &mm->mm_mt, start, start);
 
 	if (mmap_write_lock_killable(mm))
 		return -EINTR;
 
-	ret = __do_munmap(mm, start, len, &uf, downgrade);
+	ret = do_mas_munmap(&mas, mm, start, len, &uf, downgrade);
 	/*
 	 * Returning 1 indicates mmap_lock is downgraded.
 	 * But 1 is not legal return value of vm_munmap() and munmap(), reset
@@ -3021,7 +3085,7 @@ static int do_brk_munmap(struct ma_state *mas, struct vm_area_struct *vma,
 	int ret;
 
 	arch_unmap(mm, newbrk, oldbrk);
-	ret = __do_munmap(mm, newbrk, oldbrk - newbrk, uf, true);
+	ret = do_mas_munmap(mas, mm, newbrk, oldbrk-newbrk, uf, true);
 	validate_mm_mt(mm);
 	return ret;
 }
@@ -3161,9 +3225,7 @@ int vm_brk_flags(unsigned long addr, unsigned long request, unsigned long flags)
 	if (ret)
 		goto limits_failed;
 
-	if (find_vma_intersection(mm, addr, addr + len))
-		ret = do_munmap(mm, addr, len, &uf);
-
+	ret = do_mas_munmap(&mas, mm, addr, len, &uf, 0);
 	if (ret)
 		goto munmap_failed;
 
diff --git a/mm/mremap.c b/mm/mremap.c
index b522cd0259a0..e0fba9004246 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -975,20 +975,23 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
-	 * __do_munmap does all the needed commit accounting, and
+	 * do_mas_munmap does all the needed commit accounting, and
 	 * downgrades mmap_lock to read if so directed.
 	 */
 	if (old_len >= new_len) {
 		int retval;
+		MA_STATE(mas, &mm->mm_mt, addr + new_len, addr + new_len);
 
-		retval = __do_munmap(mm, addr+new_len, old_len - new_len,
-				  &uf_unmap, true);
-		if (retval < 0 && old_len != new_len) {
-			ret = retval;
-			goto out;
+		retval = do_mas_munmap(&mas, mm, addr + new_len,
+				       old_len - new_len, &uf_unmap, true);
 		/* Returning 1 indicates mmap_lock is downgraded to read. */
-		} else if (retval == 1)
+		if (retval == 1) {
 			downgraded = true;
+		} else if (retval < 0 && old_len != new_len) {
+			ret = retval;
+			goto out;
+		}
+
 		ret = addr;
 		goto out;
 	}
-- 
2.35.1

