* Re: [PATCH] irq/affinity: fix irq_create_affinity_masks for the pre_vectors case
2016-11-15 9:12 ` [PATCH] irq/affinity: fix irq_create_affinity_masks for the pre_vectors case Christoph Hellwig
@ 2016-11-15 22:41 ` Thomas Gleixner
2016-11-16 11:04 ` Christoph Hellwig
2016-11-16 17:48 ` [tip:irq/core] genirq/affinity: Take reserved vectors into account when spreading irqs tip-bot for Christoph Hellwig
From: Thomas Gleixner @ 2016-11-15 22:41 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: linux-kernel
On Tue, 15 Nov 2016, Christoph Hellwig wrote:
> Adjust the exit condition for assigning the affinity vectors to take the
> pre_vectors into account. Otherwise the last vector will get a cpu mask
> for all CPUs by accidentally hitting the post_vectors case.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
> kernel/irq/affinity.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
> index 17360bd..2ca420a 100644
> --- a/kernel/irq/affinity.c
> +++ b/kernel/irq/affinity.c
> @@ -107,7 +107,9 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
> /* Calculate the number of cpus per vector */
> ncpus = cpumask_weight(nmsk);
>
> - for (v = 0; curvec < affv && v < vecs_to_assign; curvec++, v++) {
> + for (v = 0;
> + curvec < affd->pre_vectors + affv && v < vecs_to_assign;
> + curvec++, v++) {
> cpus_per_vec = ncpus / vecs_to_assign;
>
> /* Account for extra vectors to compensate rounding errors */
We have the same exit condition in the (affv <= nodes) case and further
down in the outer loop. So the complete patch should be something like
this:
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -61,6 +61,7 @@ irq_create_affinity_masks(int nvecs, con
{
int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec;
int affv = nvecs - affd->pre_vectors - affd->post_vectors;
+ int last_affv = affv + affd->pre_vectors;
nodemask_t nodemsk = NODE_MASK_NONE;
struct cpumask *masks;
cpumask_var_t nmsk;
@@ -87,7 +88,7 @@ irq_create_affinity_masks(int nvecs, con
if (affv <= nodes) {
for_each_node_mask(n, nodemsk) {
cpumask_copy(masks + curvec, cpumask_of_node(n));
- if (++curvec == affv)
+ if (++curvec == last_affv)
break;
}
goto done;
@@ -107,7 +108,8 @@ irq_create_affinity_masks(int nvecs, con
/* Calculate the number of cpus per vector */
ncpus = cpumask_weight(nmsk);
- for (v = 0; curvec < affv && v < vecs_to_assign; curvec++, v++) {
+ for (v = 0; curvec < last_affv && v < vecs_to_assign;
+ curvec++, v++) {
cpus_per_vec = ncpus / vecs_to_assign;
/* Account for extra vectors to compensate rounding errors */
@@ -119,7 +121,7 @@ irq_create_affinity_masks(int nvecs, con
irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
}
- if (curvec >= affv)
+ if (curvec >= last_affv)
break;
}
* [tip:irq/core] genirq/affinity: Take reserved vectors into account when spreading irqs
From: tip-bot for Christoph Hellwig @ 2016-11-16 17:48 UTC (permalink / raw)
To: linux-tip-commits; +Cc: tglx, hpa, mingo, hch, linux-kernel
Commit-ID: bfe130773862bb3a02cdc4d4c2169f7f0210a46b
Gitweb: http://git.kernel.org/tip/bfe130773862bb3a02cdc4d4c2169f7f0210a46b
Author: Christoph Hellwig <hch@lst.de>
AuthorDate: Tue, 15 Nov 2016 10:12:58 +0100
Committer: Thomas Gleixner <tglx@linutronix.de>
CommitDate: Wed, 16 Nov 2016 18:44:01 +0100
genirq/affinity: Take reserved vectors into account when spreading irqs
The recent addition of reserved vectors at the beginning or the end of the
vector space did not take the reserved vectors at the beginning into
account in the various loop exit conditions. As a consequence the last
vectors of the spread area are not included in the spread algorithm; they
are treated like the reserved vectors at the end of the vector space and
get the default affinity mask assigned.
Sum up the affinity vectors and the reserved vectors at the beginning and
use the sum as exit condition.
[ tglx: Fixed all conditions instead of only one and massaged changelog ]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: http://lkml.kernel.org/r/1479201178-29604-2-git-send-email-hch@lst.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
kernel/irq/affinity.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 17360bd..49eb38d 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -61,6 +61,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
{
int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec;
int affv = nvecs - affd->pre_vectors - affd->post_vectors;
+ int last_affv = affv + affd->pre_vectors;
nodemask_t nodemsk = NODE_MASK_NONE;
struct cpumask *masks;
cpumask_var_t nmsk;
@@ -87,7 +88,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
if (affv <= nodes) {
for_each_node_mask(n, nodemsk) {
cpumask_copy(masks + curvec, cpumask_of_node(n));
- if (++curvec == affv)
+ if (++curvec == last_affv)
break;
}
goto done;
@@ -107,7 +108,8 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
/* Calculate the number of cpus per vector */
ncpus = cpumask_weight(nmsk);
- for (v = 0; curvec < affv && v < vecs_to_assign; curvec++, v++) {
+ for (v = 0; curvec < last_affv && v < vecs_to_assign;
+ curvec++, v++) {
cpus_per_vec = ncpus / vecs_to_assign;
/* Account for extra vectors to compensate rounding errors */
@@ -119,7 +121,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
}
- if (curvec >= affv)
+ if (curvec >= last_affv)
break;
}