public inbox for linux-kernel@vger.kernel.org
From: Kurt Kanzenbach <kurt@linutronix.de>
To: Joe Damato <jdamato@fastly.com>
Cc: netdev@vger.kernel.org, vinicius.gomes@intel.com,
	Tony Nguyen <anthony.l.nguyen@intel.com>,
	Przemek Kitszel <przemyslaw.kitszel@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	John Fastabend <john.fastabend@gmail.com>,
	"moderated list:INTEL ETHERNET DRIVERS"
	<intel-wired-lan@lists.osuosl.org>,
	open list <linux-kernel@vger.kernel.org>,
	"open list:XDP (eXpress Data Path)" <bpf@vger.kernel.org>
Subject: Re: [RFC net-next v2 2/2] igc: Link queues to NAPI instances
Date: Mon, 21 Oct 2024 07:42:02 +0200	[thread overview]
Message-ID: <87o73e2es5.fsf@kurt.kurt.home> (raw)
In-Reply-To: <ZxKVI_DvFWBvRMaf@LQ3V64L9R2>

On Fri Oct 18 2024, Joe Damato wrote:
> On Tue, Oct 15, 2024 at 12:27:01PM +0200, Kurt Kanzenbach wrote:
>> On Mon Oct 14 2024, Joe Damato wrote:
>
> [...]
>
>> > diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
>> > index 7964bbedb16c..59c00acfa0ed 100644
>> > --- a/drivers/net/ethernet/intel/igc/igc_main.c
>> > +++ b/drivers/net/ethernet/intel/igc/igc_main.c
>> > @@ -4948,6 +4948,47 @@ static int igc_sw_init(struct igc_adapter *adapter)
>> >  	return 0;
>> >  }
>> >  
>> > +void igc_set_queue_napi(struct igc_adapter *adapter, int q_idx,
>> > +			struct napi_struct *napi)
>> > +{
>> > +	if (adapter->flags & IGC_FLAG_QUEUE_PAIRS) {
>> > +		netif_queue_set_napi(adapter->netdev, q_idx,
>> > +				     NETDEV_QUEUE_TYPE_RX, napi);
>> > +		netif_queue_set_napi(adapter->netdev, q_idx,
>> > +				     NETDEV_QUEUE_TYPE_TX, napi);
>> > +	} else {
>> > +		if (q_idx < adapter->num_rx_queues) {
>> > +			netif_queue_set_napi(adapter->netdev, q_idx,
>> > +					     NETDEV_QUEUE_TYPE_RX, napi);
>> > +		} else {
>> > +			q_idx -= adapter->num_rx_queues;
>> > +			netif_queue_set_napi(adapter->netdev, q_idx,
>> > +					     NETDEV_QUEUE_TYPE_TX, napi);
>> > +		}
>> > +	}
>> > +}
>> 
>> In addition to what Vinicius said, I think this can be done more
>> simply. Something like this?
>> 
>> void igc_set_queue_napi(struct igc_adapter *adapter, int vector,
>> 			struct napi_struct *napi)
>> {
>> 	struct igc_q_vector *q_vector = adapter->q_vector[vector];
>> 
>> 	if (q_vector->rx.ring)
>> 		netif_queue_set_napi(adapter->netdev, vector, NETDEV_QUEUE_TYPE_RX, napi);
>> 
>> 	if (q_vector->tx.ring)
>> 		netif_queue_set_napi(adapter->netdev, vector, NETDEV_QUEUE_TYPE_TX, napi);
>> }
>
> I tried this suggestion but this does not result in correct output
> in the case where IGC_FLAG_QUEUE_PAIRS is disabled.
>
> The output from netlink:
>
> $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml \
>                              --dump queue-get --json='{"ifindex": 2}'
>
> [{'id': 0, 'ifindex': 2, 'napi-id': 8193, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8194, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'type': 'tx'},
>  {'id': 1, 'ifindex': 2, 'type': 'tx'}]
>
> Note the lack of a napi-id for the TX queues. This typically happens
> when the linking is not done correctly: netif_queue_set_napi takes a
> queue id, not a vector index, as its second parameter.
>
> I believe the suggested code above should be modified to be as
> follows to use ring->queue_index:
>
>   if (q_vector->rx.ring)
>     netif_queue_set_napi(adapter->netdev,
>                          q_vector->rx.ring->queue_index,
>                          NETDEV_QUEUE_TYPE_RX, napi);
>   
>   if (q_vector->tx.ring)
>     netif_queue_set_napi(adapter->netdev,
>                          q_vector->tx.ring->queue_index,
>                          NETDEV_QUEUE_TYPE_TX, napi);

LGTM. Thanks.


Thread overview: 9+ messages
2024-10-14 21:30 [RFC net-next v2 0/2] igc: Link IRQs and queues to NAPIs Joe Damato
2024-10-14 21:30 ` [RFC net-next v2 1/2] igc: Link IRQs to NAPI instances Joe Damato
2024-10-14 21:30 ` [RFC net-next v2 2/2] igc: Link queues " Joe Damato
2024-10-15  1:51   ` Vinicius Costa Gomes
2024-10-15  3:44     ` Joe Damato
2024-10-15 10:27   ` Kurt Kanzenbach
2024-10-16  1:01     ` Joe Damato
2024-10-18 17:04     ` Joe Damato
2024-10-21  5:42       ` Kurt Kanzenbach [this message]
