From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	patches@lists.linux.dev, Suren Baghdasaryan <surenb@google.com>,
	Linus Torvalds <torvalds@linuxfoundation.org>,
	Jann Horn <jannh@google.com>,
	David Hildenbrand <david@redhat.com>,
	Davidlohr Bueso <dave@stgolabs.net>,
	Hugh Dickins <hughd@google.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Laurent Dufour <ldufour@linux.ibm.com>,
	Liam Howlett <liam.howlett@oracle.com>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Michal Hocko <mhocko@suse.com>,
	Michel Lespinasse <michel@lespinasse.org>,
	Peter Xu <peterx@redhat.com>, Vlastimil Babka <vbabka@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>
Subject: [PATCH 6.4 064/129] mm: enable page walking API to lock vmas during the walk
Date: Mon, 28 Aug 2023 12:12:23 +0200
Message-ID: <20230828101159.485425389@linuxfoundation.org>
In-Reply-To: <20230828101157.383363777@linuxfoundation.org>

6.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Suren Baghdasaryan <surenb@google.com>

commit 49b0638502da097c15d46cd4e871dbaa022caf7c upstream.

walk_page_range() and friends often operate under a write-locked mmap_lock.
With the introduction of vma locks, the vmas have to be locked as well during
such walks to prevent concurrent page faults in these areas.  Add an
additional member to mm_walk_ops to indicate the locking requirements for the
walk.

The change ensures that page walks which prevent concurrent page faults
by write-locking mmap_lock operate correctly after the introduction of
per-vma locks.  With per-vma locks, page faults can be handled under the
vma lock without taking mmap_lock at all, so write-locking mmap_lock
alone would not stop them.  The change ensures vmas are properly locked
during such walks.

A sample issue this solves is do_mbind() performing queue_pages_range()
to queue pages for migration.  Without this change, a page can be
concurrently faulted into the area and left out of the migration.

Link: https://lkml.kernel.org/r/20230804152724.3090321-2-surenb@google.com
Signed-off-by: Suren Baghdasaryan <surenb@google.com>
Suggested-by: Linus Torvalds <torvalds@linuxfoundation.org>
Suggested-by: Jann Horn <jannh@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Laurent Dufour <ldufour@linux.ibm.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Michel Lespinasse <michel@lespinasse.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/powerpc/mm/book3s64/subpage_prot.c |    1 
 arch/riscv/mm/pageattr.c                |    1 
 arch/s390/mm/gmap.c                     |    5 ++++
 fs/proc/task_mmu.c                      |    5 ++++
 include/linux/pagewalk.h                |   11 +++++++++
 mm/damon/vaddr.c                        |    2 +
 mm/hmm.c                                |    1 
 mm/ksm.c                                |   25 ++++++++++++++--------
 mm/madvise.c                            |    3 ++
 mm/memcontrol.c                         |    2 +
 mm/memory-failure.c                     |    1 
 mm/mempolicy.c                          |   22 ++++++++++++-------
 mm/migrate_device.c                     |    1 
 mm/mincore.c                            |    1 
 mm/mlock.c                              |    1 
 mm/mprotect.c                           |    1 
 mm/pagewalk.c                           |   36 +++++++++++++++++++++++++++++---
 mm/vmscan.c                             |    1 
 18 files changed, 100 insertions(+), 20 deletions(-)

--- a/arch/powerpc/mm/book3s64/subpage_prot.c
+++ b/arch/powerpc/mm/book3s64/subpage_prot.c
@@ -143,6 +143,7 @@ static int subpage_walk_pmd_entry(pmd_t
 
 static const struct mm_walk_ops subpage_walk_ops = {
 	.pmd_entry	= subpage_walk_pmd_entry,
+	.walk_lock	= PGWALK_WRLOCK_VERIFY,
 };
 
 static void subpage_mark_vma_nohuge(struct mm_struct *mm, unsigned long addr,
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -102,6 +102,7 @@ static const struct mm_walk_ops pageattr
 	.pmd_entry = pageattr_pmd_entry,
 	.pte_entry = pageattr_pte_entry,
 	.pte_hole = pageattr_pte_hole,
+	.walk_lock = PGWALK_RDLOCK,
 };
 
 static int __set_memory(unsigned long addr, int numpages, pgprot_t set_mask,
--- a/arch/s390/mm/gmap.c
+++ b/arch/s390/mm/gmap.c
@@ -2514,6 +2514,7 @@ static int thp_split_walk_pmd_entry(pmd_
 
 static const struct mm_walk_ops thp_split_walk_ops = {
 	.pmd_entry	= thp_split_walk_pmd_entry,
+	.walk_lock	= PGWALK_WRLOCK_VERIFY,
 };
 
 static inline void thp_split_mm(struct mm_struct *mm)
@@ -2558,6 +2559,7 @@ static int __zap_zero_pages(pmd_t *pmd,
 
 static const struct mm_walk_ops zap_zero_walk_ops = {
 	.pmd_entry	= __zap_zero_pages,
+	.walk_lock	= PGWALK_WRLOCK,
 };
 
 /*
@@ -2648,6 +2650,7 @@ static const struct mm_walk_ops enable_s
 	.hugetlb_entry		= __s390_enable_skey_hugetlb,
 	.pte_entry		= __s390_enable_skey_pte,
 	.pmd_entry		= __s390_enable_skey_pmd,
+	.walk_lock		= PGWALK_WRLOCK,
 };
 
 int s390_enable_skey(void)
@@ -2685,6 +2688,7 @@ static int __s390_reset_cmma(pte_t *pte,
 
 static const struct mm_walk_ops reset_cmma_walk_ops = {
 	.pte_entry		= __s390_reset_cmma,
+	.walk_lock		= PGWALK_WRLOCK,
 };
 
 void s390_reset_cmma(struct mm_struct *mm)
@@ -2721,6 +2725,7 @@ static int s390_gather_pages(pte_t *ptep
 
 static const struct mm_walk_ops gather_pages_ops = {
 	.pte_entry = s390_gather_pages,
+	.walk_lock = PGWALK_RDLOCK,
 };
 
 /*
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -759,12 +759,14 @@ static int smaps_hugetlb_range(pte_t *pt
 static const struct mm_walk_ops smaps_walk_ops = {
 	.pmd_entry		= smaps_pte_range,
 	.hugetlb_entry		= smaps_hugetlb_range,
+	.walk_lock		= PGWALK_RDLOCK,
 };
 
 static const struct mm_walk_ops smaps_shmem_walk_ops = {
 	.pmd_entry		= smaps_pte_range,
 	.hugetlb_entry		= smaps_hugetlb_range,
 	.pte_hole		= smaps_pte_hole,
+	.walk_lock		= PGWALK_RDLOCK,
 };
 
 /*
@@ -1245,6 +1247,7 @@ static int clear_refs_test_walk(unsigned
 static const struct mm_walk_ops clear_refs_walk_ops = {
 	.pmd_entry		= clear_refs_pte_range,
 	.test_walk		= clear_refs_test_walk,
+	.walk_lock		= PGWALK_WRLOCK,
 };
 
 static ssize_t clear_refs_write(struct file *file, const char __user *buf,
@@ -1621,6 +1624,7 @@ static const struct mm_walk_ops pagemap_
 	.pmd_entry	= pagemap_pmd_range,
 	.pte_hole	= pagemap_pte_hole,
 	.hugetlb_entry	= pagemap_hugetlb_range,
+	.walk_lock	= PGWALK_RDLOCK,
 };
 
 /*
@@ -1932,6 +1936,7 @@ static int gather_hugetlb_stats(pte_t *p
 static const struct mm_walk_ops show_numa_ops = {
 	.hugetlb_entry = gather_hugetlb_stats,
 	.pmd_entry = gather_pte_stats,
+	.walk_lock = PGWALK_RDLOCK,
 };
 
 /*
--- a/include/linux/pagewalk.h
+++ b/include/linux/pagewalk.h
@@ -6,6 +6,16 @@
 
 struct mm_walk;
 
+/* Locking requirement during a page walk. */
+enum page_walk_lock {
+	/* mmap_lock should be locked for read to stabilize the vma tree */
+	PGWALK_RDLOCK = 0,
+	/* vma will be write-locked during the walk */
+	PGWALK_WRLOCK = 1,
+	/* vma is expected to be already write-locked during the walk */
+	PGWALK_WRLOCK_VERIFY = 2,
+};
+
 /**
  * struct mm_walk_ops - callbacks for walk_page_range
  * @pgd_entry:		if set, called for each non-empty PGD (top-level) entry
@@ -66,6 +76,7 @@ struct mm_walk_ops {
 	int (*pre_vma)(unsigned long start, unsigned long end,
 		       struct mm_walk *walk);
 	void (*post_vma)(struct mm_walk *walk);
+	enum page_walk_lock walk_lock;
 };
 
 /*
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -384,6 +384,7 @@ out:
 static const struct mm_walk_ops damon_mkold_ops = {
 	.pmd_entry = damon_mkold_pmd_entry,
 	.hugetlb_entry = damon_mkold_hugetlb_entry,
+	.walk_lock = PGWALK_RDLOCK,
 };
 
 static void damon_va_mkold(struct mm_struct *mm, unsigned long addr)
@@ -519,6 +520,7 @@ out:
 static const struct mm_walk_ops damon_young_ops = {
 	.pmd_entry = damon_young_pmd_entry,
 	.hugetlb_entry = damon_young_hugetlb_entry,
+	.walk_lock = PGWALK_RDLOCK,
 };
 
 static bool damon_va_young(struct mm_struct *mm, unsigned long addr,
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -560,6 +560,7 @@ static const struct mm_walk_ops hmm_walk
 	.pte_hole	= hmm_vma_walk_hole,
 	.hugetlb_entry	= hmm_vma_walk_hugetlb_entry,
 	.test_walk	= hmm_vma_walk_test,
+	.walk_lock	= PGWALK_RDLOCK,
 };
 
 /**
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -454,6 +454,12 @@ static int break_ksm_pmd_entry(pmd_t *pm
 
 static const struct mm_walk_ops break_ksm_ops = {
 	.pmd_entry = break_ksm_pmd_entry,
+	.walk_lock = PGWALK_RDLOCK,
+};
+
+static const struct mm_walk_ops break_ksm_lock_vma_ops = {
+	.pmd_entry = break_ksm_pmd_entry,
+	.walk_lock = PGWALK_WRLOCK,
 };
 
 /*
@@ -469,16 +475,17 @@ static const struct mm_walk_ops break_ks
  * of the process that owns 'vma'.  We also do not want to enforce
  * protection keys here anyway.
  */
-static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
+static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_vma)
 {
 	vm_fault_t ret = 0;
+	const struct mm_walk_ops *ops = lock_vma ?
+				&break_ksm_lock_vma_ops : &break_ksm_ops;
 
 	do {
 		int ksm_page;
 
 		cond_resched();
-		ksm_page = walk_page_range_vma(vma, addr, addr + 1,
-					       &break_ksm_ops, NULL);
+		ksm_page = walk_page_range_vma(vma, addr, addr + 1, ops, NULL);
 		if (WARN_ON_ONCE(ksm_page < 0))
 			return ksm_page;
 		if (!ksm_page)
@@ -564,7 +571,7 @@ static void break_cow(struct ksm_rmap_it
 	mmap_read_lock(mm);
 	vma = find_mergeable_vma(mm, addr);
 	if (vma)
-		break_ksm(vma, addr);
+		break_ksm(vma, addr, false);
 	mmap_read_unlock(mm);
 }
 
@@ -870,7 +877,7 @@ static void remove_trailing_rmap_items(s
  * in cmp_and_merge_page on one of the rmap_items we would be removing.
  */
 static int unmerge_ksm_pages(struct vm_area_struct *vma,
-			     unsigned long start, unsigned long end)
+			     unsigned long start, unsigned long end, bool lock_vma)
 {
 	unsigned long addr;
 	int err = 0;
@@ -881,7 +888,7 @@ static int unmerge_ksm_pages(struct vm_a
 		if (signal_pending(current))
 			err = -ERESTARTSYS;
 		else
-			err = break_ksm(vma, addr);
+			err = break_ksm(vma, addr, lock_vma);
 	}
 	return err;
 }
@@ -1028,7 +1035,7 @@ static int unmerge_and_remove_all_rmap_i
 			if (!(vma->vm_flags & VM_MERGEABLE) || !vma->anon_vma)
 				continue;
 			err = unmerge_ksm_pages(vma,
-						vma->vm_start, vma->vm_end);
+						vma->vm_start, vma->vm_end, false);
 			if (err)
 				goto error;
 		}
@@ -2528,7 +2535,7 @@ static int __ksm_del_vma(struct vm_area_
 		return 0;
 
 	if (vma->anon_vma) {
-		err = unmerge_ksm_pages(vma, vma->vm_start, vma->vm_end);
+		err = unmerge_ksm_pages(vma, vma->vm_start, vma->vm_end, true);
 		if (err)
 			return err;
 	}
@@ -2666,7 +2673,7 @@ int ksm_madvise(struct vm_area_struct *v
 			return 0;		/* just ignore the advice */
 
 		if (vma->anon_vma) {
-			err = unmerge_ksm_pages(vma, start, end);
+			err = unmerge_ksm_pages(vma, start, end, true);
 			if (err)
 				return err;
 		}
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -227,6 +227,7 @@ static int swapin_walk_pmd_entry(pmd_t *
 
 static const struct mm_walk_ops swapin_walk_ops = {
 	.pmd_entry		= swapin_walk_pmd_entry,
+	.walk_lock		= PGWALK_RDLOCK,
 };
 
 static void force_shm_swapin_readahead(struct vm_area_struct *vma,
@@ -521,6 +522,7 @@ regular_folio:
 
 static const struct mm_walk_ops cold_walk_ops = {
 	.pmd_entry = madvise_cold_or_pageout_pte_range,
+	.walk_lock = PGWALK_RDLOCK,
 };
 
 static void madvise_cold_page_range(struct mmu_gather *tlb,
@@ -741,6 +743,7 @@ next:
 
 static const struct mm_walk_ops madvise_free_walk_ops = {
 	.pmd_entry		= madvise_free_pte_range,
+	.walk_lock		= PGWALK_RDLOCK,
 };
 
 static int madvise_free_single_vma(struct vm_area_struct *vma,
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6072,6 +6072,7 @@ static int mem_cgroup_count_precharge_pt
 
 static const struct mm_walk_ops precharge_walk_ops = {
 	.pmd_entry	= mem_cgroup_count_precharge_pte_range,
+	.walk_lock	= PGWALK_RDLOCK,
 };
 
 static unsigned long mem_cgroup_count_precharge(struct mm_struct *mm)
@@ -6351,6 +6352,7 @@ put:			/* get_mctgt_type() gets & locks
 
 static const struct mm_walk_ops charge_walk_ops = {
 	.pmd_entry	= mem_cgroup_move_charge_pte_range,
+	.walk_lock	= PGWALK_RDLOCK,
 };
 
 static void mem_cgroup_move_charge(void)
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -836,6 +836,7 @@ static int hwpoison_hugetlb_range(pte_t
 static const struct mm_walk_ops hwp_walk_ops = {
 	.pmd_entry = hwpoison_pte_range,
 	.hugetlb_entry = hwpoison_hugetlb_range,
+	.walk_lock = PGWALK_RDLOCK,
 };
 
 /*
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -715,6 +715,14 @@ static const struct mm_walk_ops queue_pa
 	.hugetlb_entry		= queue_folios_hugetlb,
 	.pmd_entry		= queue_folios_pte_range,
 	.test_walk		= queue_pages_test_walk,
+	.walk_lock		= PGWALK_RDLOCK,
+};
+
+static const struct mm_walk_ops queue_pages_lock_vma_walk_ops = {
+	.hugetlb_entry		= queue_folios_hugetlb,
+	.pmd_entry		= queue_folios_pte_range,
+	.test_walk		= queue_pages_test_walk,
+	.walk_lock		= PGWALK_WRLOCK,
 };
 
 /*
@@ -735,7 +743,7 @@ static const struct mm_walk_ops queue_pa
 static int
 queue_pages_range(struct mm_struct *mm, unsigned long start, unsigned long end,
 		nodemask_t *nodes, unsigned long flags,
-		struct list_head *pagelist)
+		struct list_head *pagelist, bool lock_vma)
 {
 	int err;
 	struct queue_pages qp = {
@@ -746,8 +754,10 @@ queue_pages_range(struct mm_struct *mm,
 		.end = end,
 		.first = NULL,
 	};
+	const struct mm_walk_ops *ops = lock_vma ?
+			&queue_pages_lock_vma_walk_ops : &queue_pages_walk_ops;
 
-	err = walk_page_range(mm, start, end, &queue_pages_walk_ops, &qp);
+	err = walk_page_range(mm, start, end, ops, &qp);
 
 	if (!qp.first)
 		/* whole range in hole */
@@ -1075,7 +1085,7 @@ static int migrate_to_node(struct mm_str
 	vma = find_vma(mm, 0);
 	VM_BUG_ON(!(flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)));
 	queue_pages_range(mm, vma->vm_start, mm->task_size, &nmask,
-			flags | MPOL_MF_DISCONTIG_OK, &pagelist);
+			flags | MPOL_MF_DISCONTIG_OK, &pagelist, false);
 
 	if (!list_empty(&pagelist)) {
 		err = migrate_pages(&pagelist, alloc_migration_target, NULL,
@@ -1321,12 +1331,8 @@ static long do_mbind(unsigned long start
 	 * Lock the VMAs before scanning for pages to migrate, to ensure we don't
 	 * miss a concurrently inserted page.
 	 */
-	vma_iter_init(&vmi, mm, start);
-	for_each_vma_range(vmi, vma, end)
-		vma_start_write(vma);
-
 	ret = queue_pages_range(mm, start, end, nmask,
-			  flags | MPOL_MF_INVERT, &pagelist);
+			  flags | MPOL_MF_INVERT, &pagelist, true);
 
 	if (ret < 0) {
 		err = ret;
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -286,6 +286,7 @@ next:
 static const struct mm_walk_ops migrate_vma_walk_ops = {
 	.pmd_entry		= migrate_vma_collect_pmd,
 	.pte_hole		= migrate_vma_collect_hole,
+	.walk_lock		= PGWALK_RDLOCK,
 };
 
 /*
--- a/mm/mincore.c
+++ b/mm/mincore.c
@@ -177,6 +177,7 @@ static const struct mm_walk_ops mincore_
 	.pmd_entry		= mincore_pte_range,
 	.pte_hole		= mincore_unmapped_range,
 	.hugetlb_entry		= mincore_hugetlb,
+	.walk_lock		= PGWALK_RDLOCK,
 };
 
 /*
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -365,6 +365,7 @@ static void mlock_vma_pages_range(struct
 {
 	static const struct mm_walk_ops mlock_walk_ops = {
 		.pmd_entry = mlock_pte_range,
+		.walk_lock = PGWALK_WRLOCK_VERIFY,
 	};
 
 	/*
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -611,6 +611,7 @@ static const struct mm_walk_ops prot_non
 	.pte_entry		= prot_none_pte_entry,
 	.hugetlb_entry		= prot_none_hugetlb_entry,
 	.test_walk		= prot_none_test,
+	.walk_lock		= PGWALK_WRLOCK,
 };
 
 int
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -384,6 +384,33 @@ static int __walk_page_range(unsigned lo
 	return err;
 }
 
+static inline void process_mm_walk_lock(struct mm_struct *mm,
+					enum page_walk_lock walk_lock)
+{
+	if (walk_lock == PGWALK_RDLOCK)
+		mmap_assert_locked(mm);
+	else
+		mmap_assert_write_locked(mm);
+}
+
+static inline void process_vma_walk_lock(struct vm_area_struct *vma,
+					 enum page_walk_lock walk_lock)
+{
+#ifdef CONFIG_PER_VMA_LOCK
+	switch (walk_lock) {
+	case PGWALK_WRLOCK:
+		vma_start_write(vma);
+		break;
+	case PGWALK_WRLOCK_VERIFY:
+		vma_assert_write_locked(vma);
+		break;
+	case PGWALK_RDLOCK:
+		/* PGWALK_RDLOCK is handled by process_mm_walk_lock */
+		break;
+	}
+#endif
+}
+
 /**
  * walk_page_range - walk page table with caller specific callbacks
  * @mm:		mm_struct representing the target process of page table walk
@@ -443,7 +470,7 @@ int walk_page_range(struct mm_struct *mm
 	if (!walk.mm)
 		return -EINVAL;
 
-	mmap_assert_locked(walk.mm);
+	process_mm_walk_lock(walk.mm, ops->walk_lock);
 
 	vma = find_vma(walk.mm, start);
 	do {
@@ -458,6 +485,7 @@ int walk_page_range(struct mm_struct *mm
 			if (ops->pte_hole)
 				err = ops->pte_hole(start, next, -1, &walk);
 		} else { /* inside vma */
+			process_vma_walk_lock(vma, ops->walk_lock);
 			walk.vma = vma;
 			next = min(end, vma->vm_end);
 			vma = find_vma(mm, vma->vm_end);
@@ -533,7 +561,8 @@ int walk_page_range_vma(struct vm_area_s
 	if (start < vma->vm_start || end > vma->vm_end)
 		return -EINVAL;
 
-	mmap_assert_locked(walk.mm);
+	process_mm_walk_lock(walk.mm, ops->walk_lock);
+	process_vma_walk_lock(vma, ops->walk_lock);
 	return __walk_page_range(start, end, &walk);
 }
 
@@ -550,7 +579,8 @@ int walk_page_vma(struct vm_area_struct
 	if (!walk.mm)
 		return -EINVAL;
 
-	mmap_assert_locked(walk.mm);
+	process_mm_walk_lock(walk.mm, ops->walk_lock);
+	process_vma_walk_lock(vma, ops->walk_lock);
 	return __walk_page_range(vma->vm_start, vma->vm_end, &walk);
 }
 
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4249,6 +4249,7 @@ static void walk_mm(struct lruvec *lruve
 	static const struct mm_walk_ops mm_walk_ops = {
 		.test_walk = should_skip_vma,
 		.p4d_entry = walk_pud_range,
+		.walk_lock = PGWALK_RDLOCK,
 	};
 
 	int err;


