* [PATCH-cgroup 1/5] cgroup/cpuset: fix panic caused by partcmd_update
2024-08-05 1:30 [PATCH-cgroup 0/5] cgroup/cpuset: Miscellaneous cpuset updates for 6.12 Waiman Long
@ 2024-08-05 1:30 ` Waiman Long
2024-08-05 1:30 ` [PATCH-cgroup 2/5] cgroup/cpuset: Clear effective_xcpus on cpus_allowed clearing only if cpus.exclusive not set Waiman Long
` (3 subsequent siblings)
4 siblings, 0 replies; 9+ messages in thread
From: Waiman Long @ 2024-08-05 1:30 UTC (permalink / raw)
To: Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný,
Shuah Khan
Cc: linux-kernel, cgroups, linux-kselftest, Chen Ridong, Waiman Long
From: Chen Ridong <chenridong@huawei.com>
We found a bug as shown below:
BUG: unable to handle page fault for address: 00000003
PGD 0 P4D 0
Oops: 0000 [#1] PREEMPT SMP NOPTI
CPU: 3 PID: 358 Comm: bash Tainted: G W I 6.6.0-10893-g60d6
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/4
RIP: 0010:partition_sched_domains_locked+0x483/0x600
Code: 01 48 85 d2 74 0d 48 83 05 29 3f f8 03 01 f3 48 0f bc c2 89 c0 48 9
RSP: 0018:ffffc90000fdbc58 EFLAGS: 00000202
RAX: 0000000100000003 RBX: ffff888100b3dfa0 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000002fe80
RBP: ffff888100b3dfb0 R08: 0000000000000001 R09: 0000000000000000
R10: ffffc90000fdbcb0 R11: 0000000000000004 R12: 0000000000000002
R13: ffff888100a92b48 R14: 0000000000000000 R15: 0000000000000000
FS: 00007f44a5425740(0000) GS:ffff888237d80000(0000) knlGS:0000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000100030973 CR3: 000000010722c000 CR4: 00000000000006e0
Call Trace:
<TASK>
? show_regs+0x8c/0xa0
? __die_body+0x23/0xa0
? __die+0x3a/0x50
? page_fault_oops+0x1d2/0x5c0
? partition_sched_domains_locked+0x483/0x600
? search_module_extables+0x2a/0xb0
? search_exception_tables+0x67/0x90
? kernelmode_fixup_or_oops+0x144/0x1b0
? __bad_area_nosemaphore+0x211/0x360
? up_read+0x3b/0x50
? bad_area_nosemaphore+0x1a/0x30
? exc_page_fault+0x890/0xd90
? __lock_acquire.constprop.0+0x24f/0x8d0
? __lock_acquire.constprop.0+0x24f/0x8d0
? asm_exc_page_fault+0x26/0x30
? partition_sched_domains_locked+0x483/0x600
? partition_sched_domains_locked+0xf0/0x600
rebuild_sched_domains_locked+0x806/0xdc0
update_partition_sd_lb+0x118/0x130
cpuset_write_resmask+0xffc/0x1420
cgroup_file_write+0xb2/0x290
kernfs_fop_write_iter+0x194/0x290
new_sync_write+0xeb/0x160
vfs_write+0x16f/0x1d0
ksys_write+0x81/0x180
__x64_sys_write+0x21/0x30
x64_sys_call+0x2f25/0x4630
do_syscall_64+0x44/0xb0
entry_SYSCALL_64_after_hwframe+0x78/0xe2
RIP: 0033:0x7f44a553c887
It can be reproduced with the following commands:
cd /sys/fs/cgroup/
mkdir test
cd test/
echo +cpuset > ../cgroup.subtree_control
echo root > cpuset.cpus.partition
cat /sys/fs/cgroup/cpuset.cpus.effective
0-3
echo 0-3 > cpuset.cpus // taking away all cpus from root
This issue is caused by the incorrect rebuilding of scheduling domains.
In this scenario, test/cpuset.cpus.partition should be an invalid root
and should not trigger the rebuilding of scheduling domains. When calling
update_parent_effective_cpumask() with partcmd_update, if newmask is not
NULL, we should recheck whether newmask still leaves CPUs available for a
parent/cs that has tasks.
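The check added by the patch can be illustrated with a minimal userspace sketch. This is not kernel code: CPU masks are modeled as 64-bit words, and partition_would_starve_parent() is a hypothetical stand-in for the idea behind tasks_nocpu_error() — a child's requested newmask is an error when a parent that has tasks would be left with no effective CPUs.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical sketch, not the kernel's tasks_nocpu_error():
 * masks are 64-bit words, bit i == CPU i. The request is invalid
 * when the parent has tasks and newmask takes all of its CPUs. */
bool partition_would_starve_parent(uint64_t parent_effective,
				   uint64_t newmask,
				   bool parent_has_tasks)
{
	uint64_t remaining = parent_effective & ~newmask;

	return parent_has_tasks && remaining == 0;
}
```

In the reproducer above, the child asks for 0-3 while the parent's effective CPUs are 0-3 and the parent has tasks, so this check fires and the partition stays invalid instead of rebuilding sched domains with a bad state.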
Fixes: 0c7f293efc87 ("cgroup/cpuset: Add cpuset.cpus.exclusive.effective for v2")
Signed-off-by: Chen Ridong <chenridong@huawei.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/cgroup/cpuset.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 9066f9b4af24..f1846a08e245 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1978,6 +1978,8 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
part_error = PERR_CPUSEMPTY;
goto write_error;
}
+ /* Check newmask again, whether cpus are available for parent/cs */
+ nocpu |= tasks_nocpu_error(parent, cs, newmask);
/*
* partcmd_update with newmask:
--
2.43.5
* [PATCH-cgroup 2/5] cgroup/cpuset: Clear effective_xcpus on cpus_allowed clearing only if cpus.exclusive not set
2024-08-05 1:30 [PATCH-cgroup 0/5] cgroup/cpuset: Miscellaneous cpuset updates for 6.12 Waiman Long
2024-08-05 1:30 ` [PATCH-cgroup 1/5] cgroup/cpuset: fix panic caused by partcmd_update Waiman Long
@ 2024-08-05 1:30 ` Waiman Long
2024-08-05 20:53 ` Tejun Heo
2024-08-05 1:30 ` [PATCH-cgroup 3/5] cgroup/cpuset: Eliminate unnecessary sched domains rebuilds in hotplug Waiman Long
` (2 subsequent siblings)
4 siblings, 1 reply; 9+ messages in thread
From: Waiman Long @ 2024-08-05 1:30 UTC (permalink / raw)
To: Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný,
Shuah Khan
Cc: linux-kernel, cgroups, linux-kselftest, Chen Ridong, Waiman Long
Commit e2ffe502ba45 ("cgroup/cpuset: Add cpuset.cpus.exclusive for
v2") adds a user writable cpuset.cpus.exclusive file for setting
exclusive CPUs to be used for the creation of partitions. Since then
effective_xcpus depends on both the cpuset.cpus and cpuset.cpus.exclusive
setting. If cpuset.cpus.exclusive is set, effective_xcpus will depend
only on cpuset.cpus.exclusive. When it is not set, effective_xcpus
will be set according to the cpuset.cpus value when the cpuset becomes
a valid partition root.
When cpuset.cpus is being cleared by the user, effective_xcpus should
only be cleared when cpuset.cpus.exclusive is not set. However, that
is not currently the case.
# cd /sys/fs/cgroup/
# mkdir test
# echo +cpuset > cgroup.subtree_control
# cd test
# echo 3 > cpuset.cpus.exclusive
# cat cpuset.cpus.exclusive.effective
3
# echo > cpuset.cpus
# cat cpuset.cpus.exclusive.effective // was cleared
Fix it by clearing effective_xcpus only if cpuset.cpus.exclusive is
not set.
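The fixed behavior can be sketched in a few lines of userspace C. The struct and function names here are illustrative, not the kernel's struct cpuset or update_cpumask(): clearing cpuset.cpus clears effective_xcpus only when the user has not set cpuset.cpus.exclusive.

```c
#include <stdint.h>

/* Illustrative sketch only; masks are 64-bit words, not cpumasks. */
struct cs_sketch {
	uint64_t cpus_allowed;		/* cpuset.cpus */
	uint64_t exclusive_cpus;	/* user-set cpuset.cpus.exclusive */
	uint64_t effective_xcpus;
};

void clear_cpus_allowed(struct cs_sketch *cs)
{
	cs->cpus_allowed = 0;
	/* Only clear effective_xcpus when no explicit exclusive set. */
	if (cs->exclusive_cpus == 0)
		cs->effective_xcpus = 0;
}
```

With exclusive_cpus set (as in the reproducer, where CPU 3 was written to cpuset.cpus.exclusive), clearing cpuset.cpus now leaves effective_xcpus intact.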
Fixes: e2ffe502ba45 ("cgroup/cpuset: Add cpuset.cpus.exclusive for v2")
Reported-by: Chen Ridong <chenridong@huawei.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/cgroup/cpuset.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index f1846a08e245..7287cecb27d1 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2508,7 +2508,8 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
*/
if (!*buf) {
cpumask_clear(trialcs->cpus_allowed);
- cpumask_clear(trialcs->effective_xcpus);
+ if (cpumask_empty(trialcs->exclusive_cpus))
+ cpumask_clear(trialcs->effective_xcpus);
} else {
retval = cpulist_parse(buf, trialcs->cpus_allowed);
if (retval < 0)
--
2.43.5
* Re: [PATCH-cgroup 2/5] cgroup/cpuset: Clear effective_xcpus on cpus_allowed clearing only if cpus.exclusive not set
2024-08-05 1:30 ` [PATCH-cgroup 2/5] cgroup/cpuset: Clear effective_xcpus on cpus_allowed clearing only if cpus.exclusive not set Waiman Long
@ 2024-08-05 20:53 ` Tejun Heo
0 siblings, 0 replies; 9+ messages in thread
From: Tejun Heo @ 2024-08-05 20:53 UTC (permalink / raw)
To: Waiman Long
Cc: Zefan Li, Johannes Weiner, Michal Koutný, Shuah Khan,
linux-kernel, cgroups, linux-kselftest, Chen Ridong
On Sun, Aug 04, 2024 at 09:30:16PM -0400, Waiman Long wrote:
> Commit e2ffe502ba45 ("cgroup/cpuset: Add cpuset.cpus.exclusive for
> v2") adds a user writable cpuset.cpus.exclusive file for setting
> exclusive CPUs to be used for the creation of partitions. Since then
> effective_xcpus depends on both the cpuset.cpus and cpuset.cpus.exclusive
> setting. If cpuset.cpus.exclusive is set, effective_xcpus will depend
> only on cpuset.cpus.exclusive. When it is not set, effective_xcpus
> will be set according to the cpuset.cpus value when the cpuset becomes
> a valid partition root.
>
> When cpuset.cpus is being cleared by the user, effective_xcpus should
> only be cleared when cpuset.cpus.exclusive is not set. However, that
> is not currently the case.
>
> # cd /sys/fs/cgroup/
> # mkdir test
> # echo +cpuset > cgroup.subtree_control
> # cd test
> # echo 3 > cpuset.cpus.exclusive
> # cat cpuset.cpus.exclusive.effective
> 3
> # echo > cpuset.cpus
> # cat cpuset.cpus.exclusive.effective // was cleared
>
> Fix it by clearing effective_xcpus only if cpuset.cpus.exclusive is
> not set.
>
> Fixes: e2ffe502ba45 ("cgroup/cpuset: Add cpuset.cpus.exclusive for v2")
> Reported-by: Chen Ridong <chenridong@huawei.com>
> Signed-off-by: Waiman Long <longman@redhat.com>
Applied 1-2 to cgroup/for-6.11-fixes w/ stable cc'd.
Thanks.
--
tejun
* [PATCH-cgroup 3/5] cgroup/cpuset: Eliminate unnecessary sched domains rebuilds in hotplug
2024-08-05 1:30 [PATCH-cgroup 0/5] cgroup/cpuset: Miscellaneous cpuset updates for 6.12 Waiman Long
2024-08-05 1:30 ` [PATCH-cgroup 1/5] cgroup/cpuset: fix panic caused by partcmd_update Waiman Long
2024-08-05 1:30 ` [PATCH-cgroup 2/5] cgroup/cpuset: Clear effective_xcpus on cpus_allowed clearing only if cpus.exclusive not set Waiman Long
@ 2024-08-05 1:30 ` Waiman Long
2024-08-05 20:55 ` Tejun Heo
2024-08-05 1:30 ` [PATCH-cgroup 4/5] cgroup/cpuset: Check for partition roots with overlapping CPUs Waiman Long
2024-08-05 1:30 ` [PATCH-cgroup 5/5] selftest/cgroup: Add new test cases to test_cpuset_prs.sh Waiman Long
4 siblings, 1 reply; 9+ messages in thread
From: Waiman Long @ 2024-08-05 1:30 UTC (permalink / raw)
To: Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný,
Shuah Khan
Cc: linux-kernel, cgroups, linux-kselftest, Chen Ridong, Waiman Long
It was found that some hotplug operations may cause multiple
rebuild_sched_domains_locked() calls. Some of those intermediate calls
may use cpuset states that are not yet in their final correct form,
leading to incorrect sched domain settings.
Fix this problem by using the existing force_rebuild flag to inhibit
immediate rebuild_sched_domains_locked() calls when set, doing only
one final call at the end. Also rename the force_rebuild flag to
force_sd_rebuild to make its meaning more clear.
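The deferral pattern the patch applies can be sketched in userspace. This is a hypothetical model, not the kernel functions themselves: rebuild_count stands in for rebuild_sched_domains_locked(), and hotplug_sequence() models cpuset_handle_hotplug() setting the flag up front so intermediate stages are inhibited and exactly one rebuild runs at the end.

```c
#include <stdbool.h>

/* Sketch of the force_sd_rebuild pattern; not kernel code. */
bool force_sd_rebuild;
int rebuild_count;

/* Called at intermediate stages; inhibited while the flag is set. */
void maybe_rebuild(void)
{
	if (!force_sd_rebuild)
		rebuild_count++;
}

/* Models the hotplug path: set the flag, run the intermediate
 * stages, then do one final rebuild. */
void hotplug_sequence(int nsteps)
{
	force_sd_rebuild = true;	/* cpuset_force_rebuild() */
	for (int i = 0; i < nsteps; i++)
		maybe_rebuild();	/* all inhibited */
	if (force_sd_rebuild) {
		force_sd_rebuild = false;
		rebuild_count++;	/* single final rebuild */
	}
}
```

However many intermediate stages run, the sequence ends with exactly one rebuild, and that rebuild sees the final cpuset state.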
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/cgroup/cpuset.c | 33 +++++++++++++++++----------------
1 file changed, 17 insertions(+), 16 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 7287cecb27d1..e070e391d7a8 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -231,6 +231,13 @@ static cpumask_var_t isolated_cpus;
/* List of remote partition root children */
static struct list_head remote_children;
+/*
+ * A flag to force sched domain rebuild at the end of an operation while
+ * inhibiting it in the intermediate stages when set. Currently it is only
+ * set in hotplug code.
+ */
+static bool force_sd_rebuild;
+
/*
* Partition root states:
*
@@ -1467,7 +1474,7 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
}
- if (rebuild_domains)
+ if (rebuild_domains && !force_sd_rebuild)
rebuild_sched_domains_locked();
}
@@ -1820,7 +1827,7 @@ static void remote_partition_check(struct cpuset *cs, struct cpumask *newmask,
remote_partition_disable(child, tmp);
disable_cnt++;
}
- if (disable_cnt)
+ if (disable_cnt && !force_sd_rebuild)
rebuild_sched_domains_locked();
}
@@ -2425,7 +2432,8 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
}
rcu_read_unlock();
- if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD))
+ if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD) &&
+ !force_sd_rebuild)
rebuild_sched_domains_locked();
}
@@ -3087,7 +3095,8 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
cs->flags = trialcs->flags;
spin_unlock_irq(&callback_lock);
- if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed)
+ if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed &&
+ !force_sd_rebuild)
rebuild_sched_domains_locked();
if (spread_flag_changed)
@@ -4468,11 +4477,9 @@ hotplug_update_tasks(struct cpuset *cs,
update_tasks_nodemask(cs);
}
-static bool force_rebuild;
-
void cpuset_force_rebuild(void)
{
- force_rebuild = true;
+ force_sd_rebuild = true;
}
/**
@@ -4620,15 +4627,9 @@ static void cpuset_handle_hotplug(void)
!cpumask_empty(subpartitions_cpus);
mems_updated = !nodes_equal(top_cpuset.effective_mems, new_mems);
- /*
- * In the rare case that hotplug removes all the cpus in
- * subpartitions_cpus, we assumed that cpus are updated.
- */
- if (!cpus_updated && !cpumask_empty(subpartitions_cpus))
- cpus_updated = true;
-
/* For v1, synchronize cpus_allowed to cpu_active_mask */
if (cpus_updated) {
+ cpuset_force_rebuild();
spin_lock_irq(&callback_lock);
if (!on_dfl)
cpumask_copy(top_cpuset.cpus_allowed, &new_cpus);
@@ -4684,8 +4685,8 @@ static void cpuset_handle_hotplug(void)
}
/* rebuild sched domains if cpus_allowed has changed */
- if (cpus_updated || force_rebuild) {
- force_rebuild = false;
+ if (force_sd_rebuild) {
+ force_sd_rebuild = false;
rebuild_sched_domains_cpuslocked();
}
--
2.43.5
* Re: [PATCH-cgroup 3/5] cgroup/cpuset: Eliminate unnecessary sched domains rebuilds in hotplug
2024-08-05 1:30 ` [PATCH-cgroup 3/5] cgroup/cpuset: Eliminate unnecessary sched domains rebuilds in hotplug Waiman Long
@ 2024-08-05 20:55 ` Tejun Heo
0 siblings, 0 replies; 9+ messages in thread
From: Tejun Heo @ 2024-08-05 20:55 UTC (permalink / raw)
To: Waiman Long
Cc: Zefan Li, Johannes Weiner, Michal Koutný, Shuah Khan,
linux-kernel, cgroups, linux-kselftest, Chen Ridong
On Sun, Aug 04, 2024 at 09:30:17PM -0400, Waiman Long wrote:
> It was found that some hotplug operations may cause multiple
> rebuild_sched_domains_locked() calls. Some of those intermediate calls
> may use cpuset states not in the final correct form leading to incorrect
> sched domain setting.
>
> Fix this problem by using the existing force_rebuild flag to inhibit
> immediate rebuild_sched_domains_locked() calls if set and only doing
> one final call at the end. Also renaming the force_rebuild flag to
> force_sd_rebuild to make its meaning more clear.
>
> Signed-off-by: Waiman Long <longman@redhat.com>
Applied to cgroup/for-6.11-fixes.
Thanks.
--
tejun
* [PATCH-cgroup 4/5] cgroup/cpuset: Check for partition roots with overlapping CPUs
2024-08-05 1:30 [PATCH-cgroup 0/5] cgroup/cpuset: Miscellaneous cpuset updates for 6.12 Waiman Long
` (2 preceding siblings ...)
2024-08-05 1:30 ` [PATCH-cgroup 3/5] cgroup/cpuset: Eliminate unnecessary sched domains rebuilds in hotplug Waiman Long
@ 2024-08-05 1:30 ` Waiman Long
2024-08-05 1:30 ` [PATCH-cgroup 5/5] selftest/cgroup: Add new test cases to test_cpuset_prs.sh Waiman Long
4 siblings, 0 replies; 9+ messages in thread
From: Waiman Long @ 2024-08-05 1:30 UTC (permalink / raw)
To: Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný,
Shuah Khan
Cc: linux-kernel, cgroups, linux-kselftest, Chen Ridong, Waiman Long
With the previous commit that eliminates the overlapping partition
root corner cases in the hotplug code, the partition roots passed down
to generate_sched_domains() should not have overlapping CPUs. Enable
the overlapping cpuset check for v2 and warn if overlap does happen.
This patch also has the benefit of extending test coverage of the new
Union-Find cpuset merging code to cgroup v2.
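The merge loop the patch now runs for both v1 and v2 can be modeled in userspace. This is a simplified sketch under stated assumptions — masks are 64-bit words rather than cpumasks, and count_domains() is a hypothetical stand-in for the union-find section of generate_sched_domains(): overlapping masks are merged, and the number of sched domains is the number of union-find roots.

```c
#include <stdint.h>

/* Illustrative sketch of the union-find merge; not kernel code. */
#define MAX_CS 8

int uf_parent[MAX_CS];

int uf_find(int i)
{
	while (uf_parent[i] != i)
		i = uf_parent[i] = uf_parent[uf_parent[i]]; /* path halving */
	return i;
}

int count_domains(const uint64_t *masks, int n)
{
	int i, j, ndoms = 0;

	for (i = 0; i < n; i++)
		uf_parent[i] = i;	/* uf_node_init() */

	/* Merge overlapping cpusets (cpusets_overlap()). */
	for (i = 0; i < n; i++)
		for (j = i + 1; j < n; j++)
			if (masks[i] & masks[j])
				uf_parent[uf_find(i)] = uf_find(j);

	/* Count the total number of domains: one per root. */
	for (i = 0; i < n; i++)
		if (uf_find(i) == i)
			ndoms++;
	return ndoms;
}
```

For v2, the hotplug fix in patch 3 means no two partition roots passed down should overlap, so every mask stays its own root and ndoms == csn — which is why the patch can replace the `ndoms = csn` shortcut with the shared loop plus a WARN_ON_ONCE().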
Signed-off-by: Waiman Long <longman@redhat.com>
---
kernel/cgroup/cpuset.c | 34 ++++++++++++++++++----------------
1 file changed, 18 insertions(+), 16 deletions(-)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index e070e391d7a8..e34fd6108b06 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1127,25 +1127,27 @@ static int generate_sched_domains(cpumask_var_t **domains,
if (root_load_balance && (csn == 1))
goto single_root_domain;
- if (!cgrpv2) {
- for (i = 0; i < csn; i++)
- uf_node_init(&csa[i]->node);
-
- /* Merge overlapping cpusets */
- for (i = 0; i < csn; i++) {
- for (j = i + 1; j < csn; j++) {
- if (cpusets_overlap(csa[i], csa[j]))
- uf_union(&csa[i]->node, &csa[j]->node);
+ for (i = 0; i < csn; i++)
+ uf_node_init(&csa[i]->node);
+
+ /* Merge overlapping cpusets */
+ for (i = 0; i < csn; i++) {
+ for (j = i + 1; j < csn; j++) {
+ if (cpusets_overlap(csa[i], csa[j])) {
+ /*
+ * Cgroup v2 shouldn't pass down overlapping
+ * partition root cpusets.
+ */
+ WARN_ON_ONCE(cgrpv2);
+ uf_union(&csa[i]->node, &csa[j]->node);
}
}
+ }
- /* Count the total number of domains */
- for (i = 0; i < csn; i++) {
- if (uf_find(&csa[i]->node) == &csa[i]->node)
- ndoms++;
- }
- } else {
- ndoms = csn;
+ /* Count the total number of domains */
+ for (i = 0; i < csn; i++) {
+ if (uf_find(&csa[i]->node) == &csa[i]->node)
+ ndoms++;
}
/*
--
2.43.5
* [PATCH-cgroup 5/5] selftest/cgroup: Add new test cases to test_cpuset_prs.sh
2024-08-05 1:30 [PATCH-cgroup 0/5] cgroup/cpuset: Miscellaneous cpuset updates for 6.12 Waiman Long
` (3 preceding siblings ...)
2024-08-05 1:30 ` [PATCH-cgroup 4/5] cgroup/cpuset: Check for partition roots with overlapping CPUs Waiman Long
@ 2024-08-05 1:30 ` Waiman Long
2024-08-05 20:58 ` Tejun Heo
4 siblings, 1 reply; 9+ messages in thread
From: Waiman Long @ 2024-08-05 1:30 UTC (permalink / raw)
To: Tejun Heo, Zefan Li, Johannes Weiner, Michal Koutný,
Shuah Khan
Cc: linux-kernel, cgroups, linux-kselftest, Chen Ridong, Waiman Long
Add new test cases to test_cpuset_prs.sh to cover corner cases reported
in previous fix commits.
Signed-off-by: Waiman Long <longman@redhat.com>
---
tools/testing/selftests/cgroup/test_cpuset_prs.sh | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
index 7c08cc153367..7295424502b9 100755
--- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
@@ -321,7 +321,7 @@ TEST_MATRIX=(
# old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate ISOLCPUS
# ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ --------
#
- # Incorrect change to cpuset.cpus invalidates partition root
+ # Incorrect change to cpuset.cpus[.exclusive] invalidates partition root
#
# Adding CPUs to partition root that are not in parent's
# cpuset.cpus is allowed, but those extra CPUs are ignored.
@@ -365,6 +365,16 @@ TEST_MATRIX=(
# cpuset.cpus can overlap with sibling cpuset.cpus.exclusive but not subsumed by it
" C0-3 . . C4-5 X5 . . . 0 A1:0-3,B1:4-5"
+ # Child partition root that try to take all CPUs from parent partition
+ # with tasks will remain invalid.
+ " C1-4:P1:S+ P1 . . . . . . 0 A1:1-4,A2:1-4 A1:P1,A2:P-1"
+ " C1-4:P1:S+ P1 . . . C1-4 . . 0 A1,A2:1-4 A1:P1,A2:P1"
+ " C1-4:P1:S+ P1 . . T C1-4 . . 0 A1:1-4,A2:1-4 A1:P1,A2:P-1"
+
+ # Clearing of cpuset.cpus with a preset cpuset.cpus.exclusive shouldn't
+ # affect cpuset.cpus.exclusive.effective.
+ " C1-4:X3:S+ C1:X3 . . . C . . 0 A2:1-4,XA2:3"
+
# old-A1 old-A2 old-A3 old-B1 new-A1 new-A2 new-A3 new-B1 fail ECPUs Pstate ISOLCPUS
# ------ ------ ------ ------ ------ ------ ------ ------ ---- ----- ------ --------
# Failure cases:
--
2.43.5
* Re: [PATCH-cgroup 5/5] selftest/cgroup: Add new test cases to test_cpuset_prs.sh
2024-08-05 1:30 ` [PATCH-cgroup 5/5] selftest/cgroup: Add new test cases to test_cpuset_prs.sh Waiman Long
@ 2024-08-05 20:58 ` Tejun Heo
0 siblings, 0 replies; 9+ messages in thread
From: Tejun Heo @ 2024-08-05 20:58 UTC (permalink / raw)
To: Waiman Long
Cc: Zefan Li, Johannes Weiner, Michal Koutný, Shuah Khan,
linux-kernel, cgroups, linux-kselftest, Chen Ridong
On Sun, Aug 04, 2024 at 09:30:19PM -0400, Waiman Long wrote:
> Add new test cases to test_cpuset_prs.sh to cover corner cases reported
> in previous fix commits.
>
> Signed-off-by: Waiman Long <longman@redhat.com>
Applied 4-5 to cgroup/for-6.12 after pulling in cgroup/for-6.11-fixes.
Thanks.
--
tejun