* [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets
[not found] <1424727906-4460-1-git-send-email-riel@redhat.com>
@ 2015-02-23 21:45 ` riel
2015-02-25 2:10 ` David Rientjes
2015-02-23 21:45 ` [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset riel
1 sibling, 1 reply; 10+ messages in thread
From: riel @ 2015-02-23 21:45 UTC (permalink / raw)
To: linux-kernel
Cc: Rik van Riel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, cgroups
From: Rik van Riel <riel@redhat.com>
Ensure that cpus specified with the isolcpus= boot commandline
option stay outside of the load balancing in the kernel scheduler.
Operations like load balancing can introduce unwanted latencies,
which is exactly what the isolcpus= commandline is there to prevent.
Previously, simply creating a new cpuset, without even touching the
cpuset.cpus field inside the new cpuset, would undo the effects of
isolcpus=, by creating a scheduler domain spanning the whole system,
and setting up load balancing inside that domain. The cpuset root
cpuset.cpus file is read-only, so there was not even a way to undo
that effect.
This does not impact the majority of cpusets users, since isolcpus=
is a fairly specialized feature used for realtime purposes.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
include/linux/sched.h | 2 ++
kernel/cpuset.c | 13 +++++++++++--
kernel/sched/core.c | 2 +-
3 files changed, 14 insertions(+), 3 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index cb5cdc777c8a..af1b32a5ddcc 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1038,6 +1038,8 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
struct sched_domain_attr *dattr_new);
+extern cpumask_var_t cpu_isolated_map;
+
/* Allocate an array of sched domains, for partition_sched_domains(). */
cpumask_var_t *alloc_sched_domains(unsigned int ndoms);
void free_sched_domains(cpumask_var_t doms[], unsigned int ndoms);
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 64b257f6bca2..1ad63fa37cb4 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -625,6 +625,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
int csn; /* how many cpuset ptrs in csa so far */
int i, j, k; /* indices for partition finding loops */
cpumask_var_t *doms; /* resulting partition; i.e. sched domains */
+ cpumask_var_t non_isolated_cpus; /* load balanced CPUs */
struct sched_domain_attr *dattr; /* attributes for custom domains */
int ndoms = 0; /* number of sched domains in result */
int nslot; /* next empty doms[] struct cpumask slot */
@@ -634,6 +635,10 @@ static int generate_sched_domains(cpumask_var_t **domains,
dattr = NULL;
csa = NULL;
+ if (!alloc_cpumask_var(&non_isolated_cpus, GFP_KERNEL))
+ goto done;
+ cpumask_andnot(non_isolated_cpus, cpu_possible_mask, cpu_isolated_map);
+
/* Special case for the 99% of systems with one, full, sched domain */
if (is_sched_load_balance(&top_cpuset)) {
ndoms = 1;
@@ -646,7 +651,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
*dattr = SD_ATTR_INIT;
update_domain_attr_tree(dattr, &top_cpuset);
}
- cpumask_copy(doms[0], top_cpuset.effective_cpus);
+ cpumask_and(doms[0], top_cpuset.effective_cpus,
+ non_isolated_cpus);
goto done;
}
@@ -669,7 +675,8 @@ static int generate_sched_domains(cpumask_var_t **domains,
* the corresponding sched domain.
*/
if (!cpumask_empty(cp->cpus_allowed) &&
- !is_sched_load_balance(cp))
+ !(is_sched_load_balance(cp) &&
+ cpumask_intersects(cp->cpus_allowed, non_isolated_cpus)))
continue;
if (is_sched_load_balance(cp))
@@ -751,6 +758,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
if (apn == b->pn) {
cpumask_or(dp, dp, b->effective_cpus);
+ cpumask_and(dp, dp, non_isolated_cpus);
if (dattr)
update_domain_attr_tree(dattr + nslot, b);
@@ -763,6 +771,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
BUG_ON(nslot != ndoms);
done:
+ free_cpumask_var(non_isolated_cpus);
kfree(csa);
/*
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 97fe79cf613e..6069f3703240 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5831,7 +5831,7 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
}
/* cpus with isolated domains */
-static cpumask_var_t cpu_isolated_map;
+cpumask_var_t cpu_isolated_map;
/* Setup the mask of cpus configured for isolated domains */
static int __init isolated_cpu_setup(char *str)
--
1.9.3
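The cpumask arithmetic that the patch above adds to generate_sched_domains() can be modeled with plain Python sets (a sketch of the logic only, with hypothetical CPU numbers — not kernel code):

```python
# Model of the new cpumask operations: non_isolated_cpus is computed as
# cpu_possible_mask & ~cpu_isolated_map, and the resulting sched domain
# is intersected with it so isolated CPUs never join a domain.

def build_single_domain(possible, isolated, effective):
    """Mirror the single-domain fast path: doms[0] = effective & ~isolated."""
    non_isolated = possible - isolated          # cpumask_andnot()
    return effective & non_isolated             # cpumask_and()

# Hypothetical 8-CPU machine booted with isolcpus=2,3
possible = set(range(8))
isolated = {2, 3}
effective = set(range(8))

print(sorted(build_single_domain(possible, isolated, effective)))
# -> [0, 1, 4, 5, 6, 7]: CPUs 2 and 3 stay out of the load-balanced domain
```

This is why merely creating a cpuset no longer undoes isolcpus=: whatever mask a cpuset ends up with, the isolated CPUs are masked back out before the sched domain is built.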
* [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
[not found] <1424727906-4460-1-git-send-email-riel@redhat.com>
2015-02-23 21:45 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
@ 2015-02-23 21:45 ` riel
[not found] ` <1424727906-4460-3-git-send-email-riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
1 sibling, 1 reply; 10+ messages in thread
From: riel @ 2015-02-23 21:45 UTC (permalink / raw)
To: linux-kernel
Cc: Rik van Riel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, cgroups
From: Rik van Riel <riel@redhat.com>
The previous patch makes the code skip over isolcpus when building
scheduler load balancing domains. This makes it hard for a user to
see which of the CPUs in a cpuset are participating in load
balancing, and which ones are isolated cpus.
Add a cpuset.isolcpus file with info on which cpus in a cpuset are
isolated CPUs.
This file is read-only for now. In the future we could extend things
so isolcpus can be changed at run time, for the root (system wide)
cpuset only.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
kernel/cpuset.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 1ad63fa37cb4..19ad5d3377f8 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1563,6 +1563,7 @@ typedef enum {
FILE_MEMORY_PRESSURE,
FILE_SPREAD_PAGE,
FILE_SPREAD_SLAB,
+ FILE_ISOLCPUS,
} cpuset_filetype_t;
static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
@@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
return retval ?: nbytes;
}
+static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
+{
+ cpumask_var_t my_isolated_cpus;
+ ssize_t count;
+
+ if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
+ return 0;
+
+ cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
+
+ count = cpulist_scnprintf(s, pos, my_isolated_cpus);
+
+ free_cpumask_var(my_isolated_cpus);
+
+ return count;
+}
+
/*
* These ascii lists should be read in a single call, by using a user
* buffer large enough to hold the entire map. If read in smaller
@@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
case FILE_EFFECTIVE_MEMLIST:
s += nodelist_scnprintf(s, count, cs->effective_mems);
break;
+ case FILE_ISOLCPUS:
+ s += cpuset_sprintf_isolcpus(s, count, cs);
+ break;
default:
ret = -EINVAL;
goto out_unlock;
@@ -1906,6 +1927,12 @@ static struct cftype files[] = {
.private = FILE_MEMORY_PRESSURE_ENABLED,
},
+ {
+ .name = "isolcpus",
+ .seq_show = cpuset_common_seq_show,
+ .private = FILE_ISOLCPUS,
+ },
+
{ } /* terminate */
};
--
1.9.3
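cpulist_scnprintf() in the hunk above emits the kernel's comma-separated range syntax (e.g. "0-2,5-6"). A rough Python model of that format, for illustration only:

```python
def cpulist(cpus):
    """Format a set of CPU numbers as kernel-style ranges, e.g. {0,1,2,5} -> '0-2,5'."""
    runs, run = [], []
    for cpu in sorted(cpus):
        if run and cpu == run[-1] + 1:
            run.append(cpu)           # extend the current consecutive run
        else:
            if run:
                runs.append(run)
            run = [cpu]               # start a new run
    if run:
        runs.append(run)
    return ",".join(f"{r[0]}-{r[-1]}" if len(r) > 1 else f"{r[0]}" for r in runs)

print(cpulist({0, 1, 2, 5, 6}))   # -> 0-2,5-6
```

So on a box booted with isolcpus=2,3, a cpuset whose cpus include those CPUs would show "2-3" in cpuset.isolcpus.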
* Re: [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets
2015-02-23 21:45 ` [PATCH 1/2] cpusets,isolcpus: exclude isolcpus from load balancing in cpusets riel
@ 2015-02-25 2:10 ` David Rientjes
0 siblings, 0 replies; 10+ messages in thread
From: David Rientjes @ 2015-02-25 2:10 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, cgroups
On Mon, 23 Feb 2015, riel@redhat.com wrote:
> From: Rik van Riel <riel@redhat.com>
>
> Ensure that cpus specified with the isolcpus= boot commandline
> option stay outside of the load balancing in the kernel scheduler.
>
> Operations like load balancing can introduce unwanted latencies,
> which is exactly what the isolcpus= commandline is there to prevent.
>
> Previously, simply creating a new cpuset, without even touching the
> cpuset.cpus field inside the new cpuset, would undo the effects of
> isolcpus=, by creating a scheduler domain spanning the whole system,
> and setting up load balancing inside that domain. The cpuset root
> cpuset.cpus file is read-only, so there was not even a way to undo
> that effect.
>
> This does not impact the majority of cpusets users, since isolcpus=
> is a fairly specialized feature used for realtime purposes.
>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Clark Williams <williams@redhat.com>
> Cc: Li Zefan <lizefan@huawei.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: Luiz Capitulino <lcapitulino@redhat.com>
> Cc: cgroups@vger.kernel.org
> Signed-off-by: Rik van Riel <riel@redhat.com>
Tested-by: David Rientjes <rientjes@google.com>
* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
[not found] ` <1424727906-4460-3-git-send-email-riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2015-02-25 2:15 ` David Rientjes
[not found] ` <alpine.DEB.2.10.1502241811020.19547-X6Q0R45D7oAcqpCFd4KODRPsWskHk0ljAL8bYrjMMd8@public.gmane.org>
0 siblings, 1 reply; 10+ messages in thread
From: David Rientjes @ 2015-02-25 2:15 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
Clark Williams, Li Zefan, Ingo Molnar, Luiz Capitulino,
cgroups-u79uwXL29TY76Z2rM5mHXA
On Mon, 23 Feb 2015, riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org wrote:
> From: Rik van Riel <riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>
> The previous patch makes the code skip over isolcpus when building
> scheduler load balancing domains. This makes it hard for a user to
> see which of the CPUs in a cpuset are participating in load
> balancing, and which ones are isolated cpus.
>
> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
> isolated CPUs.
>
> This file is read-only for now. In the future we could extend things
> so isolcpus can be changed at run time, for the root (system wide)
> cpuset only.
>
> Cc: Peter Zijlstra <peterz-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
> Cc: Clark Williams <williams-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
> Cc: Ingo Molnar <mingo-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: Luiz Capitulino <lcapitulino-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> Cc: cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> Signed-off-by: Rik van Riel <riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> ---
> kernel/cpuset.c | 27 +++++++++++++++++++++++++++
> 1 file changed, 27 insertions(+)
>
> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> index 1ad63fa37cb4..19ad5d3377f8 100644
> --- a/kernel/cpuset.c
> +++ b/kernel/cpuset.c
> @@ -1563,6 +1563,7 @@ typedef enum {
> FILE_MEMORY_PRESSURE,
> FILE_SPREAD_PAGE,
> FILE_SPREAD_SLAB,
> + FILE_ISOLCPUS,
> } cpuset_filetype_t;
>
> static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
> @@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
> return retval ?: nbytes;
> }
>
> +static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
> +{
> + cpumask_var_t my_isolated_cpus;
> + ssize_t count;
> +
Whitespace.
> + if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
> + return 0;
> +
> + cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
> +
> + count = cpulist_scnprintf(s, pos, my_isolated_cpus);
> +
> + free_cpumask_var(my_isolated_cpus);
> +
> + return count;
> +}
> +
> /*
> * These ascii lists should be read in a single call, by using a user
> * buffer large enough to hold the entire map. If read in smaller
> @@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
> case FILE_EFFECTIVE_MEMLIST:
> s += nodelist_scnprintf(s, count, cs->effective_mems);
> break;
> + case FILE_ISOLCPUS:
> + s += cpuset_sprintf_isolcpus(s, count, cs);
> + break;
This patch looks fine, and I think cpuset.effective_cpus and
cpuset.isolcpus can be used well together, but will need updating now that
commit e8e6d97c9b ("cpuset: use %*pb[l] to print bitmaps including
cpumasks and nodemasks") has been merged which reworks this function.
It's a little unfortunate, though, that the user sees Cpus_allowed,
cpuset.cpus, and cpuset.effective_cpus that include isolcpus and then
has to check another cpulist for the isolcpus to see their sched
domain.
> default:
> ret = -EINVAL;
> goto out_unlock;
> @@ -1906,6 +1927,12 @@ static struct cftype files[] = {
> .private = FILE_MEMORY_PRESSURE_ENABLED,
> },
>
> + {
> + .name = "isolcpus",
> + .seq_show = cpuset_common_seq_show,
> + .private = FILE_ISOLCPUS,
> + },
> +
> { } /* terminate */
> };
>
* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
[not found] ` <alpine.DEB.2.10.1502241811020.19547-X6Q0R45D7oAcqpCFd4KODRPsWskHk0ljAL8bYrjMMd8@public.gmane.org>
@ 2015-02-25 3:30 ` Rik van Riel
0 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2015-02-25 3:30 UTC (permalink / raw)
To: David Rientjes
Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
Clark Williams, Li Zefan, Ingo Molnar, Luiz Capitulino,
cgroups-u79uwXL29TY76Z2rM5mHXA
On 02/24/2015 09:15 PM, David Rientjes wrote:
> On Mon, 23 Feb 2015, riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org wrote:
>
>> From: Rik van Riel <riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>>
>> The previous patch makes the code skip over isolcpus when building
>> scheduler load balancing domains. This makes it hard for a user to
>> see which of the CPUs in a cpuset are participating in load
>> balancing, and which ones are isolated cpus.
>>
>> Add a cpuset.isolcpus file with info on which cpus in a cpuset are
>> isolated CPUs.
>>
>> This file is read-only for now. In the future we could extend things
>> so isolcpus can be changed at run time, for the root (system wide)
>> cpuset only.
>>
>> Cc: Peter Zijlstra <peterz-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
>> Cc: Clark Williams <williams-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>> Cc: Li Zefan <lizefan-hv44wF8Li93QT0dZR+AlfA@public.gmane.org>
>> Cc: Ingo Molnar <mingo-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>> Cc: Luiz Capitulino <lcapitulino-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>> Cc: cgroups-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
>> Signed-off-by: Rik van Riel <riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
>> ---
>> kernel/cpuset.c | 27 +++++++++++++++++++++++++++
>> 1 file changed, 27 insertions(+)
>>
>> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
>> index 1ad63fa37cb4..19ad5d3377f8 100644
>> --- a/kernel/cpuset.c
>> +++ b/kernel/cpuset.c
>> @@ -1563,6 +1563,7 @@ typedef enum {
>> FILE_MEMORY_PRESSURE,
>> FILE_SPREAD_PAGE,
>> FILE_SPREAD_SLAB,
>> + FILE_ISOLCPUS,
>> } cpuset_filetype_t;
>>
>> static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
>> @@ -1704,6 +1705,23 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>> return retval ?: nbytes;
>> }
>>
>> +static size_t cpuset_sprintf_isolcpus(char *s, ssize_t pos, struct cpuset *cs)
>> +{
>> + cpumask_var_t my_isolated_cpus;
>> + ssize_t count;
>> +
>
> Whitespace.
>
>> + if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
>> + return 0;
>> +
>> + cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
>> +
>> + count = cpulist_scnprintf(s, pos, my_isolated_cpus);
>> +
>> + free_cpumask_var(my_isolated_cpus);
>> +
>> + return count;
>> +}
>> +
>> /*
>> * These ascii lists should be read in a single call, by using a user
>> * buffer large enough to hold the entire map. If read in smaller
>> @@ -1738,6 +1756,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>> case FILE_EFFECTIVE_MEMLIST:
>> s += nodelist_scnprintf(s, count, cs->effective_mems);
>> break;
>> + case FILE_ISOLCPUS:
>> + s += cpuset_sprintf_isolcpus(s, count, cs);
>> + break;
>
> This patch looks fine, and I think cpuset.effective_cpus and
> cpuset.isolcpus can be used well together, but will need updating now that
> commit e8e6d97c9b ("cpuset: use %*pb[l] to print bitmaps including
> cpumasks and nodemasks") has been merged which reworks this function.
I will take a look at that changeset. It was not in the
tip tree I worked against.
Expect a v2 :)
> It's a little unfortunate, though, that the user sees Cpus_allowed,
> cpuset.cpus, and cpuset.effective_cpus that include isolcpus and then
> has to check another cpulist for the isolcpus to see their sched
> domain.
Agreed, but all the alternatives I could think of would break the
userspace API, leaving this as the best way to go.
--
All rights reversed
* [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
[not found] <1424882288-2910-1-git-send-email-riel@redhat.com>
@ 2015-02-25 16:38 ` riel
[not found] ` <1424882288-2910-3-git-send-email-riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 10+ messages in thread
From: riel @ 2015-02-25 16:38 UTC (permalink / raw)
To: linux-kernel
Cc: Rik van Riel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, David Rientjes, Mike Galbraith,
cgroups
From: Rik van Riel <riel@redhat.com>
The previous patch makes the code skip over isolcpus when building
scheduler load balancing domains. This makes it hard for a user to
see which of the CPUs in a cpuset are participating in load
balancing, and which ones are isolated cpus.
Add a cpuset.isolcpus file with info on which cpus in a cpuset are
isolated CPUs.
This file is read-only for now. In the future we could extend things
so isolcpus can be changed at run time, for the root (system wide)
cpuset only.
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Clark Williams <williams@redhat.com>
Cc: Li Zefan <lizefan@huawei.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: cgroups@vger.kernel.org
Signed-off-by: Rik van Riel <riel@redhat.com>
---
kernel/cpuset.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index b544e5229d99..94bf59588e23 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1563,6 +1563,7 @@ typedef enum {
FILE_MEMORY_PRESSURE,
FILE_SPREAD_PAGE,
FILE_SPREAD_SLAB,
+ FILE_ISOLCPUS,
} cpuset_filetype_t;
static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
@@ -1704,6 +1705,20 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
return retval ?: nbytes;
}
+static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
+{
+ cpumask_var_t my_isolated_cpus;
+
+ if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
+ return;
+
+ cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
+
+ seq_printf(sf, "%*pbl\n", nodemask_pr_args(my_isolated_cpus));
+
+ free_cpumask_var(my_isolated_cpus);
+}
+
/*
* These ascii lists should be read in a single call, by using a user
* buffer large enough to hold the entire map. If read in smaller
@@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
case FILE_EFFECTIVE_MEMLIST:
seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
break;
+ case FILE_ISOLCPUS:
+ cpuset_seq_print_isolcpus(sf, cs);
+ break;
default:
ret = -EINVAL;
}
@@ -1893,6 +1911,12 @@ static struct cftype files[] = {
.private = FILE_MEMORY_PRESSURE_ENABLED,
},
+ {
+ .name = "isolcpus",
+ .seq_show = cpuset_common_seq_show,
+ .private = FILE_ISOLCPUS,
+ },
+
{ } /* terminate */
};
--
2.1.0
* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
[not found] ` <1424882288-2910-3-git-send-email-riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
@ 2015-02-25 21:09 ` David Rientjes
2015-02-25 21:21 ` Rik van Riel
2015-02-26 11:05 ` Zefan Li
1 sibling, 1 reply; 10+ messages in thread
From: David Rientjes @ 2015-02-25 21:09 UTC (permalink / raw)
To: Rik van Riel
Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
Clark Williams, Li Zefan, Ingo Molnar, Luiz Capitulino,
Mike Galbraith, cgroups-u79uwXL29TY76Z2rM5mHXA
On Wed, 25 Feb 2015, riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org wrote:
> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
> index b544e5229d99..94bf59588e23 100644
> --- a/kernel/cpuset.c
> +++ b/kernel/cpuset.c
> @@ -1563,6 +1563,7 @@ typedef enum {
> FILE_MEMORY_PRESSURE,
> FILE_SPREAD_PAGE,
> FILE_SPREAD_SLAB,
> + FILE_ISOLCPUS,
> } cpuset_filetype_t;
>
> static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
> @@ -1704,6 +1705,20 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
> return retval ?: nbytes;
> }
>
> +static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
> +{
> + cpumask_var_t my_isolated_cpus;
> +
> + if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
> + return;
> +
> + cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
> +
> + seq_printf(sf, "%*pbl\n", nodemask_pr_args(my_isolated_cpus));
That unfortunately won't output anything; it needs to be
cpumask_pr_args(). After that's fixed, feel free to add my
Acked-by: David Rientjes <rientjes-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
> +
> + free_cpumask_var(my_isolated_cpus);
> +}
> +
> /*
> * These ascii lists should be read in a single call, by using a user
> * buffer large enough to hold the entire map. If read in smaller
> @@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
> case FILE_EFFECTIVE_MEMLIST:
> seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
> break;
> + case FILE_ISOLCPUS:
> + cpuset_seq_print_isolcpus(sf, cs);
> + break;
> default:
> ret = -EINVAL;
> }
> @@ -1893,6 +1911,12 @@ static struct cftype files[] = {
> .private = FILE_MEMORY_PRESSURE_ENABLED,
> },
>
> + {
> + .name = "isolcpus",
> + .seq_show = cpuset_common_seq_show,
> + .private = FILE_ISOLCPUS,
> + },
> +
> { } /* terminate */
> };
>
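The failure mode here is a width mismatch: `%*pbl` prints only the first `width` bits of the bitmap, and `nodemask_pr_args()` supplies MAX_NUMNODES where `cpumask_pr_args()` would supply nr_cpu_ids. A toy Python model of the truncation (the widths are hypothetical example values):

```python
def pbl(bits, nbits):
    """Rough model of the kernel's %*pbl: only bits below 'nbits' are visible."""
    return sorted(b for b in bits if b < nbits)

isolated = {2, 3}        # hypothetical isolcpus=2,3
NR_CPU_IDS = 8           # width cpumask_pr_args() would pass on an 8-CPU box
MAX_NUMNODES = 1         # width nodemask_pr_args() passes on a !NUMA config

print(pbl(isolated, NR_CPU_IDS))    # -> [2, 3]  (correct)
print(pbl(isolated, MAX_NUMNODES))  # -> []      ("won't output anything")
```

With a node-sized width, every isolated CPU bit falls outside the printed range, which is why the file came out empty.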
* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
2015-02-25 21:09 ` David Rientjes
@ 2015-02-25 21:21 ` Rik van Riel
0 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2015-02-25 21:21 UTC (permalink / raw)
To: David Rientjes
Cc: linux-kernel, Peter Zijlstra, Clark Williams, Li Zefan,
Ingo Molnar, Luiz Capitulino, Mike Galbraith, cgroups
On 02/25/2015 04:09 PM, David Rientjes wrote:
> On Wed, 25 Feb 2015, riel@redhat.com wrote:
>
>> diff --git a/kernel/cpuset.c b/kernel/cpuset.c
>> index b544e5229d99..94bf59588e23 100644
>> --- a/kernel/cpuset.c
>> +++ b/kernel/cpuset.c
>> @@ -1563,6 +1563,7 @@ typedef enum {
>> FILE_MEMORY_PRESSURE,
>> FILE_SPREAD_PAGE,
>> FILE_SPREAD_SLAB,
>> + FILE_ISOLCPUS,
>> } cpuset_filetype_t;
>>
>> static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
>> @@ -1704,6 +1705,20 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
>> return retval ?: nbytes;
>> }
>>
>> +static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
>> +{
>> + cpumask_var_t my_isolated_cpus;
>> +
>> + if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
>> + return;
>> +
>> + cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
>> +
>> + seq_printf(sf, "%*pbl\n", nodemask_pr_args(my_isolated_cpus));
>
> That unfortunately won't output anything; it needs to be
> cpumask_pr_args(). After that's fixed, feel free to add my
>
> Acked-by: David Rientjes <rientjes@google.com>
Gah. Too many things going on at once.
Let me resend a v3 of just patch 2/2 with your ack.
* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
[not found] ` <1424882288-2910-3-git-send-email-riel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2015-02-25 21:09 ` David Rientjes
@ 2015-02-26 11:05 ` Zefan Li
2015-02-26 15:24 ` Rik van Riel
1 sibling, 1 reply; 10+ messages in thread
From: Zefan Li @ 2015-02-26 11:05 UTC (permalink / raw)
To: riel-H+wXaHxf7aLQT0dZR+AlfA
Cc: linux-kernel-u79uwXL29TY76Z2rM5mHXA, Peter Zijlstra,
Clark Williams, Ingo Molnar, Luiz Capitulino, David Rientjes,
Mike Galbraith, cgroups-u79uwXL29TY76Z2rM5mHXA
> +static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
> +{
> + cpumask_var_t my_isolated_cpus;
> +
> + if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
> + return;
> +
Make it return -ENOMEM? Or make it a global variable and allocate
memory for it in cpuset_init().
> + cpumask_and(my_isolated_cpus, cs->cpus_allowed, cpu_isolated_map);
> +
> + seq_printf(sf, "%*pbl\n", nodemask_pr_args(my_isolated_cpus));
> +
> + free_cpumask_var(my_isolated_cpus);
> +}
> +
> /*
> * These ascii lists should be read in a single call, by using a user
> * buffer large enough to hold the entire map. If read in smaller
> @@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
> case FILE_EFFECTIVE_MEMLIST:
> seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
> break;
> + case FILE_ISOLCPUS:
> + cpuset_seq_print_isolcpus(sf, cs);
> + break;
> default:
> ret = -EINVAL;
> }
> @@ -1893,6 +1911,12 @@ static struct cftype files[] = {
> .private = FILE_MEMORY_PRESSURE_ENABLED,
> },
>
> + {
> + .name = "isolcpus",
> + .seq_show = cpuset_common_seq_show,
> + .private = FILE_ISOLCPUS,
> + },
> +
> { } /* terminate */
> };
>
>
* Re: [PATCH 2/2] cpusets,isolcpus: add file to show isolated cpus in cpuset
2015-02-26 11:05 ` Zefan Li
@ 2015-02-26 15:24 ` Rik van Riel
0 siblings, 0 replies; 10+ messages in thread
From: Rik van Riel @ 2015-02-26 15:24 UTC (permalink / raw)
To: Zefan Li
Cc: linux-kernel, Peter Zijlstra, Clark Williams, Ingo Molnar,
Luiz Capitulino, David Rientjes, Mike Galbraith, cgroups
On 02/26/2015 06:05 AM, Zefan Li wrote:
>> +static void cpuset_seq_print_isolcpus(struct seq_file *sf, struct cpuset *cs)
>> +{
>> + cpumask_var_t my_isolated_cpus;
>> +
>> + if (!alloc_cpumask_var(&my_isolated_cpus, GFP_KERNEL))
>> + return;
>> +
>
> Make it return -ENOMEM? Or make it a global variable and allocate
> memory for it in cpuset_init().
OK, can do.
I see that cpuset_common_seq_show already takes a lock, so having
one global variable for this should not introduce any additional
contention.
I will send a v4.
>> @@ -1733,6 +1748,9 @@ static int cpuset_common_seq_show(struct seq_file *sf, void *v)
>> case FILE_EFFECTIVE_MEMLIST:
>> seq_printf(sf, "%*pbl\n", nodemask_pr_args(&cs->effective_mems));
>> break;
>> + case FILE_ISOLCPUS:
>> + cpuset_seq_print_isolcpus(sf, cs);
>> + break;
>> default:
>> ret = -EINVAL;
>> }
--
All rights reversed