stable.vger.kernel.org archive mirror
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	patches@lists.linux.dev, Juri Lelli <juri.lelli@redhat.com>,
	Waiman Long <longman@redhat.com>, Tejun Heo <tj@kernel.org>,
	"Qais Yousef (Google)" <qyousef@layalina.io>
Subject: [PATCH 6.4 080/129] sched/cpuset: Bring back cpuset_mutex
Date: Mon, 28 Aug 2023 12:12:39 +0200	[thread overview]
Message-ID: <20230828101159.993231479@linuxfoundation.org> (raw)
In-Reply-To: <20230828101157.383363777@linuxfoundation.org>

6.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Juri Lelli <juri.lelli@redhat.com>

commit 111cd11bbc54850f24191c52ff217da88a5e639b upstream.

Turns out percpu_cpuset_rwsem - commit 1243dc518c9d ("cgroup/cpuset:
Convert cpuset_mutex to percpu_rwsem") - wasn't such a brilliant idea,
as it has been reported to cause slowdowns in workloads that need to
change cpuset configuration frequently, and it also does not implement
priority inheritance (which causes trouble for realtime workloads).

Convert percpu_cpuset_rwsem back to regular cpuset_mutex. Also grab it
only for SCHED_DEADLINE tasks (other policies don't care about stable
cpusets anyway).

Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
Reviewed-by: Waiman Long <longman@redhat.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Qais Yousef (Google) <qyousef@layalina.io>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 include/linux/cpuset.h |    8 +-
 kernel/cgroup/cpuset.c |  159 ++++++++++++++++++++++++-------------------------
 kernel/sched/core.c    |   22 ++++--
 3 files changed, 99 insertions(+), 90 deletions(-)

--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -71,8 +71,8 @@ extern void cpuset_init_smp(void);
 extern void cpuset_force_rebuild(void);
 extern void cpuset_update_active_cpus(void);
 extern void cpuset_wait_for_hotplug(void);
-extern void cpuset_read_lock(void);
-extern void cpuset_read_unlock(void);
+extern void cpuset_lock(void);
+extern void cpuset_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
 extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
@@ -189,8 +189,8 @@ static inline void cpuset_update_active_
 
 static inline void cpuset_wait_for_hotplug(void) { }
 
-static inline void cpuset_read_lock(void) { }
-static inline void cpuset_read_unlock(void) { }
+static inline void cpuset_lock(void) { }
+static inline void cpuset_unlock(void) { }
 
 static inline void cpuset_cpus_allowed(struct task_struct *p,
 				       struct cpumask *mask)
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -366,22 +366,23 @@ static struct cpuset top_cpuset = {
 		if (is_cpuset_online(((des_cs) = css_cs((pos_css)))))
 
 /*
- * There are two global locks guarding cpuset structures - cpuset_rwsem and
+ * There are two global locks guarding cpuset structures - cpuset_mutex and
  * callback_lock. We also require taking task_lock() when dereferencing a
  * task's cpuset pointer. See "The task_lock() exception", at the end of this
- * comment.  The cpuset code uses only cpuset_rwsem write lock.  Other
- * kernel subsystems can use cpuset_read_lock()/cpuset_read_unlock() to
- * prevent change to cpuset structures.
+ * comment.  The cpuset code uses only cpuset_mutex. Other kernel subsystems
+ * can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset
+ * structures. Note that cpuset_mutex needs to be a mutex as it is used in
+ * paths that rely on priority inheritance (e.g. scheduler - on RT) for
+ * correctness.
  *
  * A task must hold both locks to modify cpusets.  If a task holds
- * cpuset_rwsem, it blocks others wanting that rwsem, ensuring that it
- * is the only task able to also acquire callback_lock and be able to
- * modify cpusets.  It can perform various checks on the cpuset structure
- * first, knowing nothing will change.  It can also allocate memory while
- * just holding cpuset_rwsem.  While it is performing these checks, various
- * callback routines can briefly acquire callback_lock to query cpusets.
- * Once it is ready to make the changes, it takes callback_lock, blocking
- * everyone else.
+ * cpuset_mutex, it blocks others, ensuring that it is the only task able to
+ * also acquire callback_lock and be able to modify cpusets.  It can perform
+ * various checks on the cpuset structure first, knowing nothing will change.
+ * It can also allocate memory while just holding cpuset_mutex.  While it is
+ * performing these checks, various callback routines can briefly acquire
+ * callback_lock to query cpusets.  Once it is ready to make the changes, it
+ * takes callback_lock, blocking everyone else.
  *
  * Calls to the kernel memory allocator can not be made while holding
  * callback_lock, as that would risk double tripping on callback_lock
@@ -403,16 +404,16 @@ static struct cpuset top_cpuset = {
  * guidelines for accessing subsystem state in kernel/cgroup.c
  */
 
-DEFINE_STATIC_PERCPU_RWSEM(cpuset_rwsem);
+static DEFINE_MUTEX(cpuset_mutex);
 
-void cpuset_read_lock(void)
+void cpuset_lock(void)
 {
-	percpu_down_read(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 }
 
-void cpuset_read_unlock(void)
+void cpuset_unlock(void)
 {
-	percpu_up_read(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 static DEFINE_SPINLOCK(callback_lock);
@@ -496,7 +497,7 @@ static inline bool partition_is_populate
  * One way or another, we guarantee to return some non-empty subset
  * of cpu_online_mask.
  *
- * Call with callback_lock or cpuset_rwsem held.
+ * Call with callback_lock or cpuset_mutex held.
  */
 static void guarantee_online_cpus(struct task_struct *tsk,
 				  struct cpumask *pmask)
@@ -538,7 +539,7 @@ out_unlock:
  * One way or another, we guarantee to return some non-empty subset
  * of node_states[N_MEMORY].
  *
- * Call with callback_lock or cpuset_rwsem held.
+ * Call with callback_lock or cpuset_mutex held.
  */
 static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 {
@@ -550,7 +551,7 @@ static void guarantee_online_mems(struct
 /*
  * update task's spread flag if cpuset's page/slab spread flag is set
  *
- * Call with callback_lock or cpuset_rwsem held. The check can be skipped
+ * Call with callback_lock or cpuset_mutex held. The check can be skipped
  * if on default hierarchy.
  */
 static void cpuset_update_task_spread_flags(struct cpuset *cs,
@@ -575,7 +576,7 @@ static void cpuset_update_task_spread_fl
  *
  * One cpuset is a subset of another if all its allowed CPUs and
  * Memory Nodes are a subset of the other, and its exclusive flags
- * are only set if the other's are set.  Call holding cpuset_rwsem.
+ * are only set if the other's are set.  Call holding cpuset_mutex.
  */
 
 static int is_cpuset_subset(const struct cpuset *p, const struct cpuset *q)
@@ -713,7 +714,7 @@ out:
  * If we replaced the flag and mask values of the current cpuset
  * (cur) with those values in the trial cpuset (trial), would
  * our various subset and exclusive rules still be valid?  Presumes
- * cpuset_rwsem held.
+ * cpuset_mutex held.
  *
  * 'cur' is the address of an actual, in-use cpuset.  Operations
  * such as list traversal that depend on the actual address of the
@@ -829,7 +830,7 @@ static void update_domain_attr_tree(stru
 	rcu_read_unlock();
 }
 
-/* Must be called with cpuset_rwsem held.  */
+/* Must be called with cpuset_mutex held.  */
 static inline int nr_cpusets(void)
 {
 	/* jump label reference count + the top-level cpuset */
@@ -855,7 +856,7 @@ static inline int nr_cpusets(void)
  * domains when operating in the severe memory shortage situations
  * that could cause allocation failures below.
  *
- * Must be called with cpuset_rwsem held.
+ * Must be called with cpuset_mutex held.
  *
  * The three key local variables below are:
  *    cp - cpuset pointer, used (together with pos_css) to perform a
@@ -1084,7 +1085,7 @@ static void dl_rebuild_rd_accounting(voi
 	struct cpuset *cs = NULL;
 	struct cgroup_subsys_state *pos_css;
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 	lockdep_assert_cpus_held();
 	lockdep_assert_held(&sched_domains_mutex);
 
@@ -1134,7 +1135,7 @@ partition_and_rebuild_sched_domains(int
  * 'cpus' is removed, then call this routine to rebuild the
  * scheduler's dynamic sched domains.
  *
- * Call with cpuset_rwsem held.  Takes cpus_read_lock().
+ * Call with cpuset_mutex held.  Takes cpus_read_lock().
  */
 static void rebuild_sched_domains_locked(void)
 {
@@ -1145,7 +1146,7 @@ static void rebuild_sched_domains_locked
 	int ndoms;
 
 	lockdep_assert_cpus_held();
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * If we have raced with CPU hotplug, return early to avoid
@@ -1196,9 +1197,9 @@ static void rebuild_sched_domains_locked
 void rebuild_sched_domains(void)
 {
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	rebuild_sched_domains_locked();
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 }
 
@@ -1208,7 +1209,7 @@ void rebuild_sched_domains(void)
  * @new_cpus: the temp variable for the new effective_cpus mask
  *
  * Iterate through each task of @cs updating its cpus_allowed to the
- * effective cpuset's.  As this function is called with cpuset_rwsem held,
+ * effective cpuset's.  As this function is called with cpuset_mutex held,
  * cpuset membership stays stable. For top_cpuset, task_cpu_possible_mask()
  * is used instead of effective_cpus to make sure all offline CPUs are also
  * included as hotplug code won't update cpumasks for tasks in top_cpuset.
@@ -1322,7 +1323,7 @@ static int update_parent_subparts_cpumas
 	int old_prs, new_prs;
 	int part_error = PERR_NONE;	/* Partition error? */
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * The parent must be a partition root.
@@ -1545,7 +1546,7 @@ static int update_parent_subparts_cpumas
  *
  * On legacy hierarchy, effective_cpus will be the same with cpu_allowed.
  *
- * Called with cpuset_rwsem held
+ * Called with cpuset_mutex held
  */
 static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
 				 bool force)
@@ -1705,7 +1706,7 @@ static void update_sibling_cpumasks(stru
 	struct cpuset *sibling;
 	struct cgroup_subsys_state *pos_css;
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * Check all its siblings and call update_cpumasks_hier()
@@ -1955,12 +1956,12 @@ static void *cpuset_being_rebound;
  * @cs: the cpuset in which each task's mems_allowed mask needs to be changed
  *
  * Iterate through each task of @cs updating its mems_allowed to the
- * effective cpuset's.  As this function is called with cpuset_rwsem held,
+ * effective cpuset's.  As this function is called with cpuset_mutex held,
  * cpuset membership stays stable.
  */
 static void update_tasks_nodemask(struct cpuset *cs)
 {
-	static nodemask_t newmems;	/* protected by cpuset_rwsem */
+	static nodemask_t newmems;	/* protected by cpuset_mutex */
 	struct css_task_iter it;
 	struct task_struct *task;
 
@@ -1973,7 +1974,7 @@ static void update_tasks_nodemask(struct
 	 * take while holding tasklist_lock.  Forks can happen - the
 	 * mpol_dup() cpuset_being_rebound check will catch such forks,
 	 * and rebind their vma mempolicies too.  Because we still hold
-	 * the global cpuset_rwsem, we know that no other rebind effort
+	 * the global cpuset_mutex, we know that no other rebind effort
 	 * will be contending for the global variable cpuset_being_rebound.
 	 * It's ok if we rebind the same mm twice; mpol_rebind_mm()
 	 * is idempotent.  Also migrate pages in each mm to new nodes.
@@ -2019,7 +2020,7 @@ static void update_tasks_nodemask(struct
  *
  * On legacy hierarchy, effective_mems will be the same with mems_allowed.
  *
- * Called with cpuset_rwsem held
+ * Called with cpuset_mutex held
  */
 static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
 {
@@ -2072,7 +2073,7 @@ static void update_nodemasks_hier(struct
  * mempolicies and if the cpuset is marked 'memory_migrate',
  * migrate the tasks pages to the new memory.
  *
- * Call with cpuset_rwsem held. May take callback_lock during call.
+ * Call with cpuset_mutex held. May take callback_lock during call.
  * Will take tasklist_lock, scan tasklist for tasks in cpuset cs,
  * lock each such tasks mm->mmap_lock, scan its vma's and rebind
  * their mempolicies to the cpusets new mems_allowed.
@@ -2164,7 +2165,7 @@ static int update_relax_domain_level(str
  * @cs: the cpuset in which each task's spread flags needs to be changed
  *
  * Iterate through each task of @cs updating its spread flags.  As this
- * function is called with cpuset_rwsem held, cpuset membership stays
+ * function is called with cpuset_mutex held, cpuset membership stays
  * stable.
  */
 static void update_tasks_flags(struct cpuset *cs)
@@ -2184,7 +2185,7 @@ static void update_tasks_flags(struct cp
  * cs:		the cpuset to update
  * turning_on: 	whether the flag is being set or cleared
  *
- * Call with cpuset_rwsem held.
+ * Call with cpuset_mutex held.
  */
 
 static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
@@ -2234,7 +2235,7 @@ out:
  * @new_prs: new partition root state
  * Return: 0 if successful, != 0 if error
  *
- * Call with cpuset_rwsem held.
+ * Call with cpuset_mutex held.
  */
 static int update_prstate(struct cpuset *cs, int new_prs)
 {
@@ -2472,7 +2473,7 @@ static int cpuset_can_attach_check(struc
 	return 0;
 }
 
-/* Called by cgroups to determine if a cpuset is usable; cpuset_rwsem held */
+/* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */
 static int cpuset_can_attach(struct cgroup_taskset *tset)
 {
 	struct cgroup_subsys_state *css;
@@ -2484,7 +2485,7 @@ static int cpuset_can_attach(struct cgro
 	cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset, &css));
 	cs = css_cs(css);
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	/* Check to see if task is allowed in the cpuset */
 	ret = cpuset_can_attach_check(cs);
@@ -2506,7 +2507,7 @@ static int cpuset_can_attach(struct cgro
 	 */
 	cs->attach_in_progress++;
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	return ret;
 }
 
@@ -2518,15 +2519,15 @@ static void cpuset_cancel_attach(struct
 	cgroup_taskset_first(tset, &css);
 	cs = css_cs(css);
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	cs->attach_in_progress--;
 	if (!cs->attach_in_progress)
 		wake_up(&cpuset_attach_wq);
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /*
- * Protected by cpuset_rwsem. cpus_attach is used only by cpuset_attach_task()
+ * Protected by cpuset_mutex. cpus_attach is used only by cpuset_attach_task()
  * but we can't allocate it dynamically there.  Define it global and
  * allocate from cpuset_init().
  */
@@ -2535,7 +2536,7 @@ static nodemask_t cpuset_attach_nodemask
 
 static void cpuset_attach_task(struct cpuset *cs, struct task_struct *task)
 {
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	if (cs != &top_cpuset)
 		guarantee_online_cpus(task, cpus_attach);
@@ -2565,7 +2566,7 @@ static void cpuset_attach(struct cgroup_
 	cs = css_cs(css);
 
 	lockdep_assert_cpus_held();	/* see cgroup_attach_lock() */
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	cpus_updated = !cpumask_equal(cs->effective_cpus,
 				      oldcs->effective_cpus);
 	mems_updated = !nodes_equal(cs->effective_mems, oldcs->effective_mems);
@@ -2626,7 +2627,7 @@ out:
 	if (!cs->attach_in_progress)
 		wake_up(&cpuset_attach_wq);
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /* The various types of files and directories in a cpuset file system */
@@ -2658,7 +2659,7 @@ static int cpuset_write_u64(struct cgrou
 	int retval = 0;
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	if (!is_cpuset_online(cs)) {
 		retval = -ENODEV;
 		goto out_unlock;
@@ -2694,7 +2695,7 @@ static int cpuset_write_u64(struct cgrou
 		break;
 	}
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	return retval;
 }
@@ -2707,7 +2708,7 @@ static int cpuset_write_s64(struct cgrou
 	int retval = -ENODEV;
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
@@ -2720,7 +2721,7 @@ static int cpuset_write_s64(struct cgrou
 		break;
 	}
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	return retval;
 }
@@ -2753,7 +2754,7 @@ static ssize_t cpuset_write_resmask(stru
 	 * operation like this one can lead to a deadlock through kernfs
 	 * active_ref protection.  Let's break the protection.  Losing the
 	 * protection is okay as we check whether @cs is online after
-	 * grabbing cpuset_rwsem anyway.  This only happens on the legacy
+	 * grabbing cpuset_mutex anyway.  This only happens on the legacy
 	 * hierarchies.
 	 */
 	css_get(&cs->css);
@@ -2761,7 +2762,7 @@ static ssize_t cpuset_write_resmask(stru
 	flush_work(&cpuset_hotplug_work);
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
@@ -2785,7 +2786,7 @@ static ssize_t cpuset_write_resmask(stru
 
 	free_cpuset(trialcs);
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	kernfs_unbreak_active_protection(of->kn);
 	css_put(&cs->css);
@@ -2933,13 +2934,13 @@ static ssize_t sched_partition_write(str
 
 	css_get(&cs->css);
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
 	retval = update_prstate(cs, val);
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	css_put(&cs->css);
 	return retval ?: nbytes;
@@ -3156,7 +3157,7 @@ static int cpuset_css_online(struct cgro
 		return 0;
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	set_bit(CS_ONLINE, &cs->flags);
 	if (is_spread_page(parent))
@@ -3207,7 +3208,7 @@ static int cpuset_css_online(struct cgro
 	cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
 	spin_unlock_irq(&callback_lock);
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	return 0;
 }
@@ -3228,7 +3229,7 @@ static void cpuset_css_offline(struct cg
 	struct cpuset *cs = css_cs(css);
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	if (is_partition_valid(cs))
 		update_prstate(cs, 0);
@@ -3247,7 +3248,7 @@ static void cpuset_css_offline(struct cg
 	cpuset_dec();
 	clear_bit(CS_ONLINE, &cs->flags);
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 }
 
@@ -3260,7 +3261,7 @@ static void cpuset_css_free(struct cgrou
 
 static void cpuset_bind(struct cgroup_subsys_state *root_css)
 {
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	spin_lock_irq(&callback_lock);
 
 	if (is_in_v2_mode()) {
@@ -3273,7 +3274,7 @@ static void cpuset_bind(struct cgroup_su
 	}
 
 	spin_unlock_irq(&callback_lock);
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /*
@@ -3294,7 +3295,7 @@ static int cpuset_can_fork(struct task_s
 		return 0;
 
 	lockdep_assert_held(&cgroup_mutex);
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	/* Check to see if task is allowed in the cpuset */
 	ret = cpuset_can_attach_check(cs);
@@ -3315,7 +3316,7 @@ static int cpuset_can_fork(struct task_s
 	 */
 	cs->attach_in_progress++;
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	return ret;
 }
 
@@ -3331,11 +3332,11 @@ static void cpuset_cancel_fork(struct ta
 	if (same_cs)
 		return;
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	cs->attach_in_progress--;
 	if (!cs->attach_in_progress)
 		wake_up(&cpuset_attach_wq);
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /*
@@ -3363,7 +3364,7 @@ static void cpuset_fork(struct task_stru
 	}
 
 	/* CLONE_INTO_CGROUP */
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	guarantee_online_mems(cs, &cpuset_attach_nodemask_to);
 	cpuset_attach_task(cs, task);
 
@@ -3371,7 +3372,7 @@ static void cpuset_fork(struct task_stru
 	if (!cs->attach_in_progress)
 		wake_up(&cpuset_attach_wq);
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 struct cgroup_subsys cpuset_cgrp_subsys = {
@@ -3472,7 +3473,7 @@ hotplug_update_tasks_legacy(struct cpuse
 	is_empty = cpumask_empty(cs->cpus_allowed) ||
 		   nodes_empty(cs->mems_allowed);
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 
 	/*
 	 * Move tasks to the nearest ancestor with execution resources,
@@ -3482,7 +3483,7 @@ hotplug_update_tasks_legacy(struct cpuse
 	if (is_empty)
 		remove_tasks_in_empty_cpuset(cs);
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 }
 
 static void
@@ -3533,14 +3534,14 @@ static void cpuset_hotplug_update_tasks(
 retry:
 	wait_event(cpuset_attach_wq, cs->attach_in_progress == 0);
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	/*
 	 * We have raced with task attaching. We wait until attaching
 	 * is finished, so we won't attach a task to an empty cpuset.
 	 */
 	if (cs->attach_in_progress) {
-		percpu_up_write(&cpuset_rwsem);
+		mutex_unlock(&cpuset_mutex);
 		goto retry;
 	}
 
@@ -3637,7 +3638,7 @@ update_tasks:
 					    cpus_updated, mems_updated);
 
 unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /**
@@ -3667,7 +3668,7 @@ static void cpuset_hotplug_workfn(struct
 	if (on_dfl && !alloc_cpumasks(NULL, &tmp))
 		ptmp = &tmp;
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	/* fetch the available cpus/mems and find out which changed how */
 	cpumask_copy(&new_cpus, cpu_active_mask);
@@ -3724,7 +3725,7 @@ static void cpuset_hotplug_workfn(struct
 		update_tasks_nodemask(&top_cpuset);
 	}
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 
 	/* if cpus or mems changed, we need to propagate to descendants */
 	if (cpus_updated || mems_updated) {
@@ -4155,7 +4156,7 @@ void __cpuset_memory_pressure_bump(void)
  *  - Used for /proc/<pid>/cpuset.
  *  - No need to task_lock(tsk) on this tsk->cpuset reference, as it
  *    doesn't really matter if tsk->cpuset changes after we read it,
- *    and we take cpuset_rwsem, keeping cpuset_attach() from changing it
+ *    and we take cpuset_mutex, keeping cpuset_attach() from changing it
  *    anyway.
  */
 int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7590,6 +7590,7 @@ static int __sched_setscheduler(struct t
 	int reset_on_fork;
 	int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
 	struct rq *rq;
+	bool cpuset_locked = false;
 
 	/* The pi code expects interrupts enabled */
 	BUG_ON(pi && in_interrupt());
@@ -7639,8 +7640,14 @@ recheck:
 			return retval;
 	}
 
-	if (pi)
-		cpuset_read_lock();
+	/*
+	 * SCHED_DEADLINE bandwidth accounting relies on stable cpusets
+	 * information.
+	 */
+	if (dl_policy(policy) || dl_policy(p->policy)) {
+		cpuset_locked = true;
+		cpuset_lock();
+	}
 
 	/*
 	 * Make sure no PI-waiters arrive (or leave) while we are
@@ -7716,8 +7723,8 @@ change:
 	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
 		policy = oldpolicy = -1;
 		task_rq_unlock(rq, p, &rf);
-		if (pi)
-			cpuset_read_unlock();
+		if (cpuset_locked)
+			cpuset_unlock();
 		goto recheck;
 	}
 
@@ -7784,7 +7791,8 @@ change:
 	task_rq_unlock(rq, p, &rf);
 
 	if (pi) {
-		cpuset_read_unlock();
+		if (cpuset_locked)
+			cpuset_unlock();
 		rt_mutex_adjust_pi(p);
 	}
 
@@ -7796,8 +7804,8 @@ change:
 
 unlock:
 	task_rq_unlock(rq, p, &rf);
-	if (pi)
-		cpuset_read_unlock();
+	if (cpuset_locked)
+		cpuset_unlock();
 	return retval;
 }
 



2023-08-28 10:12 ` [PATCH 6.4 086/129] mm: memory-failure: fix unexpected return value in soft_offline_page() Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 087/129] mm: multi-gen LRU: don't spin during memcg release Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 088/129] nilfs2: fix general protection fault in nilfs_lookup_dirty_data_buffers() Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 089/129] NFS: Fix a use after free in nfs_direct_join_group() Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 090/129] nfsd: Fix race to FREE_STATEID and cl_revoked Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 091/129] selinux: set next pointer before attaching to list Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 092/129] batman-adv: Trigger events for auto adjusted MTU Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 093/129] batman-adv: Don't increase MTU when set by user Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 094/129] batman-adv: Do not get eth header before batadv_check_management_packet Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 095/129] batman-adv: Fix TT global entry leak when client roamed back Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 096/129] batman-adv: Fix batadv_v_ogm_aggr_send memory leak Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 097/129] batman-adv: Hold rtnl lock during MTU update via netlink Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 098/129] ACPI: resource: Fix IRQ override quirk for PCSpecialist Elimina Pro 16 M Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 099/129] lib/clz_ctz.c: Fix __clzdi2() and __ctzdi2() for 32-bit kernels Greg Kroah-Hartman
2023-08-28 10:12 ` [PATCH 6.4 100/129] riscv: Handle zicsr/zifencei issue between gcc and binutils Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 101/129] riscv: Fix build errors using binutils2.37 toolchains Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 102/129] radix tree: remove unused variable Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 103/129] of: unittest: Fix EXPECT for parse_phandle_with_args_map() test Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 104/129] of: dynamic: Refactor action prints to not use "%pOF" inside devtree_lock Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 105/129] pinctrl: amd: Mask wake bits on probe again Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 106/129] media: vcodec: Fix potential array out-of-bounds in encoder queue_setup Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 107/129] PCI: acpiphp: Use pci_assign_unassigned_bridge_resources() only for non-root bus Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 108/129] thunderbolt: Fix Thunderbolt 3 display flickering issue on 2nd hot plug onwards Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 109/129] can: raw: add missing refcount for memory leak fix Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 110/129] drm/i915: Fix error handling if driver creation fails during probe Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 111/129] madvise:madvise_cold_or_pageout_pte_range(): don't use mapcount() against large folio for sharing check Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 112/129] madvise:madvise_free_pte_range(): " Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 113/129] scsi: snic: Fix double free in snic_tgt_create() Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 114/129] scsi: ufs: ufs-qcom: Clear qunipro_g4_sel for HW major version > 5 Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 115/129] scsi: core: raid_class: Remove raid_component_add() Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 116/129] clk: Fix undefined reference to `clk_rate_exclusive_{get,put}' Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 117/129] ASoC: SOF: ipc4-pcm: fix possible null pointer deference Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 118/129] ASoC: cs35l56: Read firmware uuid from a device property instead of _SUB Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 119/129] pinctrl: renesas: rzg2l: Fix NULL pointer dereference in rzg2l_dt_subnode_to_map() Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 120/129] pinctrl: renesas: rzv2m: Fix NULL pointer dereference in rzv2m_dt_subnode_to_map() Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 121/129] pinctrl: renesas: rza2: Add lock around pinctrl_generic{{add,remove}_group,{add,remove}_function} Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 122/129] dma-buf/sw_sync: Avoid recursive lock during fence signal Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 123/129] gpio: sim: dispose of irq mappings before destroying the irq_sim domain Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 124/129] gpio: sim: pass the GPIO device's software node to irq domain Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 125/129] ASoC: amd: yc: Fix a non-functional mic on Lenovo 82SJ Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 126/129] maple_tree: disable mas_wr_append() when other readers are possible Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 127/129] ASoC: amd: vangogh: select CONFIG_SND_AMD_ACP_CONFIG Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 128/129] TIOCSTI: Document CAP_SYS_ADMIN behaviour in Kconfig Greg Kroah-Hartman
2023-08-28 10:13 ` [PATCH 6.4 129/129] netfilter: nf_tables: fix kdoc warnings after gc rework Greg Kroah-Hartman
2023-08-28 16:41 ` [PATCH 6.4 000/129] 6.4.13-rc1 review Justin Forbes
2023-08-28 19:14 ` Naresh Kamboju
2023-08-28 22:36 ` Ron Economos
2023-08-28 23:35 ` Joel Fernandes
2023-08-29  1:54 ` SeongJae Park
2023-08-29  6:22 ` Conor Dooley
2023-08-29  8:28 ` Bagas Sanjaya
2023-08-29 12:26 ` Sudip Mukherjee (Codethink)
2023-08-29 14:07 ` Shuah Khan
2023-08-29 20:23 ` Florian Fainelli
2023-08-30  2:04 ` Guenter Roeck
2023-08-30 10:24 ` Jon Hunter

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20230828101159.993231479@linuxfoundation.org \
    --to=gregkh@linuxfoundation.org \
    --cc=juri.lelli@redhat.com \
    --cc=longman@redhat.com \
    --cc=patches@lists.linux.dev \
    --cc=qyousef@layalina.io \
    --cc=stable@vger.kernel.org \
    --cc=tj@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link.

Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox; see the mirroring instructions for how to
clone and mirror all data and code used for this inbox, as well as
URLs for NNTP newsgroup(s).