linux-rdma.vger.kernel.org archive mirror
* [PATCH 19/49] RDMA/hfi: replace cpumask_weight with cpumask_empty where appropriate
       [not found] <20220210224933.379149-1-yury.norov@gmail.com>
@ 2022-02-10 22:49 ` Yury Norov
  2022-02-11 19:10   ` Jason Gunthorpe
  2022-02-10 22:49 ` [PATCH 41/49] RDMA/hfi1: replace cpumask_weight with cpumask_weight_{eq, ...} " Yury Norov
  1 sibling, 1 reply; 4+ messages in thread
From: Yury Norov @ 2022-02-10 22:49 UTC (permalink / raw)
  To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
	Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
	David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
	Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
	Mike Marciniszyn, Dennis Dalessandro, Jason Gunthorpe, linux-rdma
  Cc: Leon Romanovsky

The code in drivers/infiniband/hw/hfi1/affinity.c calls cpumask_weight() to
check whether any bit of a given cpumask is set. We can do this more
efficiently with cpumask_empty(), which stops traversing the cpumask as
soon as it finds the first set bit, while cpumask_weight() counts all bits
unconditionally.
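
For illustration, a minimal userspace sketch of the difference; mask_weight()
and mask_empty() are simplified stand-ins for the kernel's bitmap helpers,
not their actual implementation:

#include <stdbool.h>
#include <stddef.h>

/* Count every set bit: always walks the whole mask. */
static unsigned int mask_weight(const unsigned long *mask, size_t nwords)
{
	unsigned int w = 0;
	size_t i;

	for (i = 0; i < nwords; i++)
		w += (unsigned int)__builtin_popcountl(mask[i]);
	return w;
}

/* Emptiness test: returns as soon as any word is non-zero. */
static bool mask_empty(const unsigned long *mask, size_t nwords)
{
	size_t i;

	for (i = 0; i < nwords; i++)
		if (mask[i])
			return false;	/* a set bit exists, stop here */
	return true;
}

On a mask whose first word is non-zero, the emptiness test inspects a single
word where the weight-based check would scan them all.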

Signed-off-by: Yury Norov <yury.norov@gmail.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
---
 drivers/infiniband/hw/hfi1/affinity.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index 706b3b659713..877f8e84a672 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -666,7 +666,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 			 * engines, use the same CPU cores as general/control
 			 * context.
 			 */
-			if (cpumask_weight(&entry->def_intr.mask) == 0)
+			if (cpumask_empty(&entry->def_intr.mask))
 				cpumask_copy(&entry->def_intr.mask,
 					     &entry->general_intr_mask);
 		}
@@ -686,7 +686,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 		 * vectors, use the same CPU core as the general/control
 		 * context.
 		 */
-		if (cpumask_weight(&entry->comp_vect_mask) == 0)
+		if (cpumask_empty(&entry->comp_vect_mask))
 			cpumask_copy(&entry->comp_vect_mask,
 				     &entry->general_intr_mask);
 	}
-- 
2.32.0



* [PATCH 41/49] RDMA/hfi1: replace cpumask_weight with cpumask_weight_{eq, ...} where appropriate
       [not found] <20220210224933.379149-1-yury.norov@gmail.com>
  2022-02-10 22:49 ` [PATCH 19/49] RDMA/hfi: replace cpumask_weight with cpumask_empty where appropriate Yury Norov
@ 2022-02-10 22:49 ` Yury Norov
  2022-02-11 19:11   ` Jason Gunthorpe
  1 sibling, 1 reply; 4+ messages in thread
From: Yury Norov @ 2022-02-10 22:49 UTC (permalink / raw)
  To: Yury Norov, Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
	Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
	David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
	Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
	Mike Marciniszyn, Dennis Dalessandro, Jason Gunthorpe, linux-rdma

The InfiniBand code uses cpumask_weight() to compare the weight of a cpumask
against a given number. We can do this more efficiently with
cpumask_weight_{eq, ...} because the conditional cpumask_weight variants may
stop traversing the cpumask early, as soon as the condition is (or can no
longer be) met.
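
The idea behind the bounded comparisons can be sketched the same way;
mask_weight_gt() below is an illustrative stand-in, not the series' actual
bitmap_weight_{eq,gt,le} implementation:

#include <stdbool.h>
#include <stddef.h>

/*
 * Bounded weight comparison: "is the weight greater than num?" is
 * decided the moment the running count exceeds num, so the loop can
 * stop early.  (Assumes bits past the mask length are zero.)
 */
static bool mask_weight_gt(const unsigned long *mask, size_t nwords,
			   unsigned int num)
{
	unsigned int w = 0;
	size_t i;

	for (i = 0; i < nwords; i++) {
		w += (unsigned int)__builtin_popcountl(mask[i]);
		if (w > num)
			return true;	/* outcome decided, stop early */
	}
	return false;
}

The eq and le variants follow the same pattern: once the running count
exceeds the bound the answer is known and the scan stops; only a count that
never reaches the bound forces a full traversal.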

Signed-off-by: Yury Norov <yury.norov@gmail.com>
---
 drivers/infiniband/hw/hfi1/affinity.c    | 9 ++++-----
 drivers/infiniband/hw/qib/qib_file_ops.c | 2 +-
 drivers/infiniband/hw/qib/qib_iba7322.c  | 2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index 877f8e84a672..a9ad07808dea 100644
--- a/drivers/infiniband/hw/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -506,7 +506,7 @@ static int _dev_comp_vect_cpu_mask_init(struct hfi1_devdata *dd,
 	 * available CPUs divide it by the number of devices in the
 	 * local NUMA node.
 	 */
-	if (cpumask_weight(&entry->comp_vect_mask) == 1) {
+	if (cpumask_weight_eq(&entry->comp_vect_mask, 1)) {
 		possible_cpus_comp_vect = 1;
 		dd_dev_warn(dd,
 			    "Number of kernel receive queues is too large for completion vector affinity to be effective\n");
@@ -592,7 +592,7 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 {
 	struct hfi1_affinity_node *entry;
 	const struct cpumask *local_mask;
-	int curr_cpu, possible, i, ret;
+	int curr_cpu, i, ret;
 	bool new_entry = false;
 
 	local_mask = cpumask_of_node(dd->node);
@@ -625,10 +625,9 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
 			    local_mask);
 
 		/* fill in the receive list */
-		possible = cpumask_weight(&entry->def_intr.mask);
 		curr_cpu = cpumask_first(&entry->def_intr.mask);
 
-		if (possible == 1) {
+		if (cpumask_weight_eq(&entry->def_intr.mask, 1)) {
 			/* only one CPU, everyone will use it */
 			cpumask_set_cpu(curr_cpu, &entry->rcv_intr.mask);
 			cpumask_set_cpu(curr_cpu, &entry->general_intr_mask);
@@ -1016,7 +1015,7 @@ int hfi1_get_proc_affinity(int node)
 		cpu = cpumask_first(proc_mask);
 		cpumask_set_cpu(cpu, &set->used);
 		goto done;
-	} else if (current->nr_cpus_allowed < cpumask_weight(&set->mask)) {
+	} else if (cpumask_weight_gt(&set->mask, current->nr_cpus_allowed)) {
 		hfi1_cdbg(PROC, "PID %u %s affinity set to CPU set(s) %*pbl",
 			  current->pid, current->comm,
 			  cpumask_pr_args(proc_mask));
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index aa290928cf96..add89bc21b0a 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -1151,7 +1151,7 @@ static void assign_ctxt_affinity(struct file *fp, struct qib_devdata *dd)
 	 * reserve a processor for it on the local NUMA node.
 	 */
 	if ((weight >= qib_cpulist_count) &&
-		(cpumask_weight(local_mask) <= qib_cpulist_count)) {
+		(cpumask_weight_le(local_mask, qib_cpulist_count))) {
 		for_each_cpu(local_cpu, local_mask)
 			if (!test_and_set_bit(local_cpu, qib_cpulist)) {
 				fd->rec_cpu_num = local_cpu;
diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index ceed302cf6a0..b17f96509d2c 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -3405,7 +3405,7 @@ static void qib_setup_7322_interrupt(struct qib_devdata *dd, int clearpend)
 	local_mask = cpumask_of_pcibus(dd->pcidev->bus);
 	firstcpu = cpumask_first(local_mask);
 	if (firstcpu >= nr_cpu_ids ||
-			cpumask_weight(local_mask) == num_online_cpus()) {
+			cpumask_weight_eq(local_mask, num_online_cpus())) {
 		local_mask = topology_core_cpumask(0);
 		firstcpu = cpumask_first(local_mask);
 	}
-- 
2.32.0



* Re: [PATCH 19/49] RDMA/hfi: replace cpumask_weight with cpumask_empty where appropriate
  2022-02-10 22:49 ` [PATCH 19/49] RDMA/hfi: replace cpumask_weight with cpumask_empty where appropriate Yury Norov
@ 2022-02-11 19:10   ` Jason Gunthorpe
  0 siblings, 0 replies; 4+ messages in thread
From: Jason Gunthorpe @ 2022-02-11 19:10 UTC (permalink / raw)
  To: Yury Norov
  Cc: Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
	Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
	David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
	Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
	Mike Marciniszyn, Dennis Dalessandro, linux-rdma, Leon Romanovsky

On Thu, Feb 10, 2022 at 02:49:03PM -0800, Yury Norov wrote:
> The code in drivers/infiniband/hw/hfi1/affinity.c calls cpumask_weight() to
> check whether any bit of a given cpumask is set. We can do this more
> efficiently with cpumask_empty(), which stops traversing the cpumask as
> soon as it finds the first set bit, while cpumask_weight() counts all bits
> unconditionally.
> 
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
> ---
>  drivers/infiniband/hw/hfi1/affinity.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)

Applied to the rdma tree

Thanks,
Jason


* Re: [PATCH 41/49] RDMA/hfi1: replace cpumask_weight with cpumask_weight_{eq, ...} where appropriate
  2022-02-10 22:49 ` [PATCH 41/49] RDMA/hfi1: replace cpumask_weight with cpumask_weight_{eq, ...} " Yury Norov
@ 2022-02-11 19:11   ` Jason Gunthorpe
  0 siblings, 0 replies; 4+ messages in thread
From: Jason Gunthorpe @ 2022-02-11 19:11 UTC (permalink / raw)
  To: Yury Norov
  Cc: Andy Shevchenko, Rasmus Villemoes, Andrew Morton,
	Michał Mirosław, Greg Kroah-Hartman, Peter Zijlstra,
	David Laight, Joe Perches, Dennis Zhou, Emil Renner Berthing,
	Nicholas Piggin, Matti Vaittinen, Alexey Klimov, linux-kernel,
	Mike Marciniszyn, Dennis Dalessandro, linux-rdma

On Thu, Feb 10, 2022 at 02:49:25PM -0800, Yury Norov wrote:
> The InfiniBand code uses cpumask_weight() to compare the weight of a cpumask
> against a given number. We can do this more efficiently with
> cpumask_weight_{eq, ...} because the conditional cpumask_weight variants may
> stop traversing the cpumask early, as soon as the condition is (or can no
> longer be) met.
> 
> Signed-off-by: Yury Norov <yury.norov@gmail.com>
> ---
>  drivers/infiniband/hw/hfi1/affinity.c    | 9 ++++-----
>  drivers/infiniband/hw/qib/qib_file_ops.c | 2 +-
>  drivers/infiniband/hw/qib/qib_iba7322.c  | 2 +-
>  3 files changed, 6 insertions(+), 7 deletions(-)

I suppose you'll send this with the prior patch adding the functions,
in which case

Acked-by: Jason Gunthorpe <jgg@nvidia.com>

Jason

