* [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets
2015-02-23 21:45 [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus riel
@ 2015-02-23 21:45 ` riel
2015-02-25 2:10 ` David Rientjes
2015-02-23 21:45 ` [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset riel
2015-02-24 2:18 ` [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus Mike Galbraith
2 siblings, 1 reply; 10+ messages in thread
From: riel @ 2015-02-23 21:45 UTC (permalink / raw)
To: linux-kernel
Cc: Rik van Riel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, cgroups
From: Rik van Riel <riel@redhat.com>
Ensure that cpus specified with the isolcpus= boot commandline
option stay outside of the load balancing in the kernel scheduler.
Operations like load balancing can introduce unwanted latencies,
which is exactly what the isolcpus= commandline is there to prevent.
Previously, simply creating a new cpuset, without even touching the
cpuset.cpus field inside the new cpuset, would undo the effects of
isolcpus=, by creating a scheduler domain spanning the whole system,
and setting up load balancing inside that domain. The cpuset root
cpuset.cpus file is read-only, so there was not even a way to undo
that effect.
This does not impact the majority of cpusets users, since isolcpus=
is a fairly specialized feature used for realtime purposes.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
include/linux/sched.h | 2 ++
kernel/cpuset.c | 13 +++++++++++--
kernel/sched/core.c | 2 +-
3 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index cb5cdc777c8a..af1b32a5ddcc 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1038,6 +1038,8 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
struct sched_domain_attr *dattr_new);
+extern cpumask_var_t cpu_isolated_map;
+
/* Allocate an array of sched domains, for partition_sched_domains(). */
cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 64b257f6bca2..1ad63fa37cb4 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -625,6 +625,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
int csn; /* how many cpuset ptrs in csa so far */
int i, j, k; /* indices for partition finding loops */
cpumask_var_t *doms; /* resulting partition; i.e. sched domains */
+ cpumask_var_t non_isolated_cpus; /* load balanced CPUs */
struct sched_domain_attr *dattr; /* attributes for custom domains */
int ndoms = 0; /* number of sched domains in result */
int nslot; /* next empty doms[] struct cpumask slot */
@@ -634,6 +635,10 @@ static int generate_sched_domains(cpumask_var_t **domains,
dattr = NULL;
csa = NULL;
+ if (!alloc_cpumask_var(&non_isolated_cpus, GFP_KERNEL))
+ goto done;
+ cpumask_andnot(non_isolated_cpus, cpu_possible_mask, cpu_isolated_map);
+
/* Special case for the 99% of systems with one, full, sched domain */
if (is_sched_load_balance(&top_cpuset)) {
ndoms = 1;
@@ -646,7 +651,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
*dattr = SD_ATTR_INIT;
update_domain_attr_tree(dattr, &top_cpuset);
}
- cpumask_copy(doms[0], top_cpuset.effective_cpus);
+ cpumask_and(doms[0], top_cpuset.effective_cpus,
+ non_isolated_cpus);
goto done;
}
@@ -669,7 +675,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
* the corresponding sched domain.
*/
if (!cpumask_empty(cp->cpus_allowed) &&
- !is_sched_load_balance(cp))
+ !(is_sched_load_balance(cp) &&
+ cpumask_intersects(cp->cpus_allowed, non_isolated_cpus)))
continue;
if (is_sched_load_balance(cp))
@@ -751,6 +758,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
if (apn == b->pn) {
cpumask_or(dp, dp, b->effective_cpus);
+ cpumask_and(dp, dp, non_isolated_cpus);
if (dattr)
update_domain_attr_tree(dattr + nslot, b);
@@ -763,6 +771,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
BUG_ON(nslot != ndoms);
done:
+ free_cpumask_var(non_isolated_cpus);
kfree(csa);
/*
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 97fe79cf613e..6069f3703240 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5831,7 +5831,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
}
/* cpus with isolated domains */
-static cpumask_var_t cpu_isolated_map;
+cpumask_var_t cpu_isolated_map;
/* Setup the mask of cpus configured for isolated domains */
static int __init isolated_cpu_setup(char *str)
--
1.9.3
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets
2015-02-23 21:45 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
@ 2015-02-25 2:10 ` David Rientjes
0 siblings, 0 replies; 10+ messages in thread
From: David Rientjes @ 2015-02-25 2:10 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, cgroups
On Mon, 23 Feb 2015, riel@redhat.com wrote:
> From: Rik van Riel <riel@redhat.com>
>
> Ensure that cpus specified with the isolcpus= boot commandline
> option stay outside of the load balancing in the kernel scheduler.
>
> Operations like load balancing can introduce unwanted latencies,
> which is exactly what the isolcpus= commandline is there to prevent.
>
> Previously, simply creating a new cpuset, without even touching the
> cpuset.cpus field inside the new cpuset, would undo the effects of
> isolcpus=, by creating a scheduler domain spanning the whole system,
> and setting up load balancing inside that domain. The cpuset root
> cpuset.cpus file is read-only, so there was not even a way to undo
> that effect.
>
> This does not impact the majority of cpusets users, since isolcpus=
> is a fairly specialized feature used for realtime purposes.
>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>
Tested-by: David Rientjes <rientjes@google.com>
* [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
2015-02-23 21:45 [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus riel
2015-02-23 21:45 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
@ 2015-02-23 21:45 ` riel
2015-02-25 2:15 ` David Rientjes
2015-02-24 2:18 ` [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus Mike Galbraith
2 siblings, 1 reply; 10+ messages in thread
From: riel @ 2015-02-23 21:45 UTC (permalink / raw)
To: linux-kernel
Cc: Rik van Riel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, cgroups
From: Rik van Riel <riel@redhat.com>
The previous patch makes the code skip over isolcpus when
building scheduler load balancing domains. This makes it hard
for a user to see which of the CPUs in a cpuset are participating
in load balancing, and which ones are isolated cpus.
Add a cpuset.isolcpus file with info on which cpus in a cpuset are
isolated CPUs.
This file is read-only for now. In the future we could extend things
so isolcpus can be changed at run time, for the root (system wide)
cpuset only.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
kernel/cpuset.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 1ad63fa37cb4..19ad5d3377f8 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1563,6 +1563,7 @@ typedef enum {
FILE_MEMORY_PRESSURE,
FILE_SPREAD_PAGE,
FILE_SPREAD_SLAB,
+ FILE_ISOLCPUS,
} cpuset_filetype_t;
static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
@@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
return retval ?: nbytes;
}
+static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
+{
+ cpumask_var_t my_isolated_cpus;
+ ssize_t count;
+
+ if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
+ return 0;
+
+ cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
+
+ count = cpulist_scnprintf(s, pos, my_isolated_cpus);
+
+ free_cpumask_var(my_isolated_cpus);
+
+ return count;
+}
+
/*
* These ascii lists should be read in a single call, by using a user
* buffer large enough to hold the entire map. If read in smaller
@@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
case FILE_EFFECTIVE_MEMLIST:
s += nodelist_scnprintf(s, count, cs->effective_mems);
break;
+ case FILE_ISOLCPUS:
+ s += cpuset_sprintf_isolcpus(s, count, cs);
+ break;
default:
ret = -EINVAL;
goto out_unlock;
@@ -1906,6 +1927,12 @@ static struct cftype files[] = {
.private = FILE_MEMORY_PRESSURE_ENABLED,
},
+ {
+ .name = "isolcpus",
+ .seq_show = cpuset_common_seq_show,
+ .private = FILE_ISOLCPUS,
+ },
+
{ } /* terminate */
};
--
1.9.3
* [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
2015-02-23 21:45 ` [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset riel
@ 2015-02-25 2:15 ` David Rientjes
2015-02-25 3:30 ` Rik van Riel
0 siblings, 1 reply; 10+ messages in thread
From: David Rientjes @ 2015-02-25 2:15 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, cgroups
On Mon, 23 Feb 2015, riel@redhat.com wrote:
> From: Rik van Riel <riel@redhat.com>
>
> The previous patch makes the code skip over isolcpus when
> building scheduler load balancing domains. This makes it hard
> for a user to see which of the CPUs in a cpuset are participating
> in load balancing, and which ones are isolated cpus.
>
> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> isolated CPUs.
>
> This file is read-only for now. In the future we could extend things
> so isolcpus can be changed at run time, for the root (system wide)
> cpuset only.
>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>
> ---
> kernel/cpuset.c | 27 +++++++++++++++++++++++++++
> 1 file changed, 27 insertions(+)
>
> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> index 1ad63fa37cb4..19ad5d3377f8 100644
> --- a/kernel/cpuset.c
> +++ b/kernel/cpuset.c
> @@ -1563,6 +1563,7 @@ typedef enum {
> FILE_MEMORY_PRESSURE,
> FILE_SPREAD_PAGE,
> FILE_SPREAD_SLAB,
> + FILE_ISOLCPUS,
> } cpuset_filetype_t;
>
> static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
> @@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
> return retval ?: nbytes;
> }
>
> +static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
> +{
> + cpumask_var_t my_isolated_cpus;
> + ssize_t count;
> +
Whitespace.
> + if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
> + return 0;
> +
> + cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
> +
> + count = cpulist_scnprintf(s, pos, my_isolated_cpus);
> +
> + free_cpumask_var(my_isolated_cpus);
> +
> + return count;
> +}
> +
> /*
> * These ascii lists should be read in a single call, by using a user
> * buffer large enough to hold the entire map. If read in smaller
> @@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
> case FILE_EFFECTIVE_MEMLIST:
> s += nodelist_scnprintf(s, count, cs->effective_mems);
> break;
> + case FILE_ISOLCPUS:
> + s += cpuset_sprintf_isolcpus(s, count, cs);
> + break;
This patch looks fine, and I think cpuset.effective_cpus and
cpuset.isolcpus can be used well together, but will need updating now that
commit e8e6d97c9b ("cpuset: use %*pb[l] to print bitmaps including
cpumasks and nodemasks") has been merged which reworks this function.
It's a little unfortunate, though, that the user sees Cpus_allowed,
cpuset.cpus, and cpuset.effective_cpus that include isolcpus and then have
to check another cpulist for the isolcpus to see their sched domain,
though.
> default:
> ret = -EINVAL;
> goto out_unlock;
> @@ -1906,6 +1927,12 @@ static struct cftype files[] = {
> .private = FILE_MEMORY_PRESSURE_ENABLED,
> },
>
> + {
> + .name = "isolcpus",
> + .seq_show = cpuset_common_seq_show,
> + .private = FILE_ISOLCPUS,
> + },
> +
> { } /* terminate */
> };
>
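[Editor's note: for context on the reviewer's remark, commit e8e6d97c9b switched cpuset_common_seq_show() to seq_printf() with the %*pb[l] bitmap specifiers. A rebased v2 of this hunk would presumably collapse to something like the following kernel-C sketch; it is untested and not compilable stand-alone, and the helper name cpuset_seq_print_isolcpus is made up for illustration.]

```c
/* Sketch only: print the cpuset's isolated CPUs as a cpulist,
 * using the %*pbl style introduced by commit e8e6d97c9b. */
static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
{
	cpumask_var_t isolated;

	if (!alloc_cpumask_var(&isolated, GFP_KERNEL))
		return;
	cpumask_and(isolated, cs->cpus_allowed, cpu_isolated_map);
	seq_printf(sf, "%*pbl\n", cpumask_pr_args(isolated));
	free_cpumask_var(isolated);
}
```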
* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
2015-02-25 2:15 ` David Rientjes
@ 2015-02-25 3:30 ` Rik van Riel
0 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2015-02-25 3:30 UTC (permalink / raw)
To: David Rientjes
Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, cgroups
On 02/24/2015 09:15 PM, David Rientjes wrote:
> On Mon, 23 Feb 2015, riel@redhat.com wrote:
>
>> From: Rik van Riel <riel@redhat.com>
>>
>> The previous patch makes the code skip over isolcpus when
>> building scheduler load balancing domains. This makes it hard
>> for a user to see which of the CPUs in a cpuset are participating
>> in load balancing, and which ones are isolated cpus.
>>
>> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
>> isolated CPUs.
>>
>> This file is read-only for now. In the future we could extend things
>> so isolcpus can be changed at run time, for the root (system wide)
>> cpuset only.
>>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Clark Williams <williams@redhat.com>
>> Cc: Li Zefan <lizefan@huawei.com>
>> Cc: Ingo Molnar <mingo@redhat.com>
>> Cc: Luiz Capitulino <lcapitulino@redhat.com>
>> Cc: cgroups@vger.kernel.org
>> Signed-off-by: Rik van Riel <riel@redhat.com>
>> ---
>> kernel/cpuset.c | 27 +++++++++++++++++++++++++++
>> 1 file changed, 27 insertions(+)
>>
>> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
>> index 1ad63fa37cb4..19ad5d3377f8 100644
>> --- a/kernel/cpuset.c
>> +++ b/kernel/cpuset.c
>> @@ -1563,6 +1563,7 @@ typedef enum {
>> FILE_MEMORY_PRESSURE,
>> FILE_SPREAD_PAGE,
>> FILE_SPREAD_SLAB,
>> + FILE_ISOLCPUS,
>> } cpuset_filetype_t;
>>
>> static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
>> @@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>> return retval ?: nbytes;
>> }
>>
>> +static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
>> +{
>> + cpumask_var_t my_isolated_cpus;
>> + ssize_t count;
>> +
>
> Whitespace.
>
>> + if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
>> + return 0;
>> +
>> + cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
>> +
>> + count = cpulist_scnprintf(s, pos, my_isolated_cpus);
>> +
>> + free_cpumask_var(my_isolated_cpus);
>> +
>> + return count;
>> +}
>> +
>> /*
>> * These ascii lists should be read in a single call, by using a user
>> * buffer large enough to hold the entire map. If read in smaller
>> @@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>> case FILE_EFFECTIVE_MEMLIST:
>> s += nodelist_scnprintf(s, count, cs->effective_mems);
>> break;
>> + case FILE_ISOLCPUS:
>> + s += cpuset_sprintf_isolcpus(s, count, cs);
>> + break;
>
> This patch looks fine, and I think cpuset.effective_cpus and
> cpuset.isolcpus can be used well together, but will need updating now that
> commit e8e6d97c9b ("cpuset: use %*pb[l] to print bitmaps including
> cpumasks and nodemasks") has been merged which reworks this function.
I will take a look at that changeset. It was not in the
tip tree I worked against.
Expect a v2 :)
> It's a little unfortunate, though, that the user sees Cpus_allowed,
> cpuset.cpus, and cpuset.effective_cpus that include isolcpus and then have
> to check another cpulist for the isolcpus to see their sched domain,
> though.
Agreed, but all the alternatives I could think of would break the
userspace API, leaving this as the best way to go.
--
All rights reversed
* Re: [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus
2015-02-23 21:45 [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus riel
2015-02-23 21:45 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
2015-02-23 21:45 ` [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset riel
@ 2015-02-24 2:18 ` Mike Galbraith
2015-02-24 14:13 ` Rik van Riel
2 siblings, 1 reply; 10+ messages in thread
From: Mike Galbraith @ 2015-02-24 2:18 UTC (permalink / raw)
To: riel; +Cc: linux-kernel
On Mon, 2015-02-23 at 16:45 -0500, riel@redhat.com wrote:
> Ensure that cpus specified with the isolcpus= boot commandline
> option stay outside of the load balancing in the kernel scheduler.
>
> Operations like load balancing can introduce unwanted latencies,
> which is exactly what the isolcpus= commandline is there to prevent.
>
> Previously, simply creating a new cpuset, without even touching the
> cpuset.cpus field inside the new cpuset, would undo the effects of
> isolcpus=, by creating a scheduler domain spanning the whole system,
> and setting up load balancing inside that domain. The cpuset root
> cpuset.cpus file is read-only, so there was not even a way to undo
> that effect.
>
> This does not impact the majority of cpusets users, since isolcpus=
> is a fairly specialized feature used for realtime purposes.
3/3: nohz_full cpus become part of that unified isolated map?
-Mike
* Re: [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus
2015-02-24 2:18 ` [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus Mike Galbraith
@ 2015-02-24 14:13 ` Rik van Riel
2015-02-24 14:22 ` Mike Galbraith
2015-02-24 14:29 ` Mike Galbraith
0 siblings, 2 replies; 10+ messages in thread
From: Rik van Riel @ 2015-02-24 14:13 UTC (permalink / raw)
To: Mike Galbraith; +Cc: linux-kernel
On 02/23/2015 09:18 PM, Mike Galbraith wrote:
> On Mon, 2015-02-23 at 16:45 -0500, riel@redhat.com wrote:
>> Ensure that cpus specified with the isolcpus= boot commandline
>> option stay outside of the load balancing in the kernel
>> scheduler.
>>
>> Operations like load balancing can introduce unwanted latencies,
>> which is exactly what the isolcpus= commandline is there to
>> prevent.
>>
>> Previously, simply creating a new cpuset, without even touching
>> the cpuset.cpus field inside the new cpuset, would undo the
>> effects of isolcpus=, by creating a scheduler domain spanning the
>> whole system, and setting up load balancing inside that domain.
>> The cpuset root cpuset.cpus file is read-only, so there was not
>> even a way to undo that effect.
>>
>> This does not impact the majority of cpusets users, since
>> isolcpus= is a fairly specialized feature used for realtime
>> purposes.
>
> 3/3: nohz_full cpus become part of that unified isolated map?
There may be use cases where users want nohz_full, but still
want the scheduler to automatically load balance the CPU.
I am not sure whether we want nohz_full and isolcpus to always
overlap 100%.
On the other hand, any CPU that is isolated with isolcpus=
probably wants nohz_full...
--
All rights reversed
* Re: [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus
2015-02-24 14:13 ` Rik van Riel
@ 2015-02-24 14:22 ` Mike Galbraith
2015-02-24 14:29 ` Mike Galbraith
1 sibling, 0 replies; 10+ messages in thread
From: Mike Galbraith @ 2015-02-24 14:22 UTC (permalink / raw)
To: Rik van Riel; +Cc: linux-kernel
On Tue, 2015-02-24 at 09:13 -0500, Rik van Riel wrote:
> On 02/23/2015 09:18 PM, Mike Galbraith wrote:
> > On Mon, 2015-02-23 at 16:45 -0500, riel@redhat.com wrote:
> >> Ensure that cpus specified with the isolcpus= boot commandline
> >> option stay outside of the load balancing in the kernel
> >> scheduler.
> >>
> >> Operations like load balancing can introduce unwanted latencies,
> >> which is exactly what the isolcpus= commandline is there to
> >> prevent.
> >>
> >> Previously, simply creating a new cpuset, without even touching
> >> the cpuset.cpus field inside the new cpuset, would undo the
> >> effects of isolcpus=, by creating a scheduler domain spanning the
> >> whole system, and setting up load balancing inside that domain.
> >> The cpuset root cpuset.cpus file is read-only, so there was not
> >> even a way to undo that effect.
> >>
> >> This does not impact the majority of cpusets users, since
> >> isolcpus= is a fairly specialized feature used for realtime
> >> purposes.
> >
> > 3/3: nohz_full cpus become part of that unified isolated map?
>
> There may be use cases where users want nohz_full, but still
> want the scheduler to automatically load balance the CPU.
>
> I am not sure whether we want nohz_full and isolcpus to always
> overlap 100%.
>
> On the other hand, any CPU that is isolated with isolcpus=
> probably wants nohz_full...
I can't imagine caring deeply about the tiny interference of the tick,
yet not caring about the massive interference of load balancing.
-Mike
* Re: [PATCH 0/2] cpusets,isolcpus: resolve conflict between cpusets and isolcpus
2015-02-24 14:13 ` Rik van Riel
2015-02-24 14:22 ` Mike Galbraith
@ 2015-02-24 14:29 ` Mike Galbraith
1 sibling, 0 replies; 10+ messages in thread
From: Mike Galbraith @ 2015-02-24 14:29 UTC (permalink / raw)
To: Rik van Riel; +Cc: linux-kernel
On Tue, 2015-02-24 at 09:13 -0500, Rik van Riel wrote:
> On the other hand, any CPU that is isolated with isolcpus=
> probably wants nohz_full...
Not here. I isolate (via cpusets) for a 60 core rt load, but it's not
single task/core, and doesn't like the nohz_full overhead.
-Mike