From: Yury Norov <yury.norov@gmail.com>
To: Nick Child <nnac123@linux.ibm.com>
Cc: linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
linuxppc-dev@lists.ozlabs.org, Haren Myneni <haren@linux.ibm.com>,
Rick Lindsley <ricklind@linux.ibm.com>,
Thomas Falcon <tlfalcon@linux.ibm.com>,
Michael Ellerman <mpe@ellerman.id.au>,
Nicholas Piggin <npiggin@gmail.com>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
Naveen N Rao <naveen@kernel.org>,
Madhavan Srinivasan <maddy@linux.ibm.com>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
Rasmus Villemoes <linux@rasmusvillemoes.dk>
Subject: Re: [PATCH 03/14] ibmvnic: simplify ibmvnic_set_queue_affinity()
Date: Tue, 7 Jan 2025 14:42:59 -0800 [thread overview]
Message-ID: <Z32t88W3biaZa7fH@yury-ThinkPad> (raw)
In-Reply-To: <Z32sncx9K4iFLsJN@li-4c4c4544-0047-5210-804b-b8c04f323634.ibm.com>
On Tue, Jan 07, 2025 at 04:37:17PM -0600, Nick Child wrote:
> On Sat, Dec 28, 2024 at 10:49:35AM -0800, Yury Norov wrote:
> > A loop based on cpumask_next_wrap() open-codes the dedicated macro
> > for_each_online_cpu_wrap(). Using the macro avoids setting bits in the
> > affinity mask more than once when stride >= num_online_cpus.
> >
> > This also helps to drop cpumask handling code in the caller function.
> >
> > Signed-off-by: Yury Norov <yury.norov@gmail.com>
> > ---
> > drivers/net/ethernet/ibm/ibmvnic.c | 17 ++++++++++-------
> > 1 file changed, 10 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> > index e95ae0d39948..4cfd90fb206b 100644
> > --- a/drivers/net/ethernet/ibm/ibmvnic.c
> > +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> > @@ -234,11 +234,16 @@ static int ibmvnic_set_queue_affinity(struct ibmvnic_sub_crq_queue *queue,
> > (*stragglers)--;
> > }
> > /* atomic write is safer than writing bit by bit directly */
> > - for (i = 0; i < stride; i++) {
> > - cpumask_set_cpu(*cpu, mask);
> > - *cpu = cpumask_next_wrap(*cpu, cpu_online_mask,
> > - nr_cpu_ids, false);
> > + for_each_online_cpu_wrap(i, *cpu) {
> > + if (!stride--)
> > + break;
> > + cpumask_set_cpu(i, mask);
> > }
> > +
> > + /* For the next queue we start from the first unused CPU in this queue */
> > + if (i < nr_cpu_ids)
> > + *cpu = i + 1;
> > +
> This should read '*cpu = i', since the loop breaks after incrementing i.
> Thanks!
cpumask_next_wrap() does the '+ 1' for you, while for_each_cpu_wrap() starts
exactly at the CPU you point it at. So the '+ 1' needs to be explicit now.
Does that make sense?
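
To make the difference concrete, here is a sketch, not from the patch: it
assumes kernel context with CPUs 0-3 online, cpu == 2, and a hypothetical
local cpumask 'demo_mask' standing in for the queue's affinity mask:

	#include <linux/cpumask.h>
	#include <linux/gfp.h>

	static void demo(void)
	{
		cpumask_var_t demo_mask;	/* hypothetical, for illustration only */
		unsigned int i, cpu = 2;

		if (!zalloc_cpumask_var(&demo_mask, GFP_KERNEL))
			return;

		/* Old helper: returns the next online CPU *after* 'cpu'; here 3 */
		i = cpumask_next_wrap(cpu, cpu_online_mask, nr_cpu_ids, false);

		/* New iterator: starts *at* 'cpu' and wraps: visits 2, 3, 0, 1 */
		for_each_online_cpu_wrap(i, cpu)
			cpumask_set_cpu(i, demo_mask);

		free_cpumask_var(demo_mask);
	}

Hence the explicit '+ 1' after the loop in the new version.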
>
> > /* set queue affinity mask */
> > cpumask_copy(queue->affinity_mask, mask);
> > rc = irq_set_affinity_and_hint(queue->irq, queue->affinity_mask);
> > @@ -256,7 +261,7 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
> > int num_rxqs = adapter->num_active_rx_scrqs, i_rxqs = 0;
> > int num_txqs = adapter->num_active_tx_scrqs, i_txqs = 0;
> > int total_queues, stride, stragglers, i;
> > - unsigned int num_cpu, cpu;
> > + unsigned int num_cpu, cpu = 0;
> > bool is_rx_queue;
> > int rc = 0;
> >
> > @@ -274,8 +279,6 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
> > stride = max_t(int, num_cpu / total_queues, 1);
> > /* number of leftover cpu's */
> > stragglers = num_cpu >= total_queues ? num_cpu % total_queues : 0;
> > - /* next available cpu to assign irq to */
> > - cpu = cpumask_next(-1, cpu_online_mask);
> >
> > for (i = 0; i < total_queues; i++) {
> > is_rx_queue = false;
> > --
> > 2.43.0
> >