From: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
To: Jakub Kicinski <kuba@kernel.org>,  davem@davemloft.net
Cc: netdev@vger.kernel.org,  edumazet@google.com,  pabeni@redhat.com,
	 andrew+netdev@lunn.ch,  horms@kernel.org,  willemb@google.com,
	 petrm@nvidia.com,  dw@davidwei.uk,  shuah@kernel.org,
	 linux-kselftest@vger.kernel.org,
	 Jakub Kicinski <kuba@kernel.org>
Subject: Re: [PATCH net-next 4/5] selftests: hw-net: toeplitz: read indirection table from the device
Date: Fri, 21 Nov 2025 18:12:16 -0500	[thread overview]
Message-ID: <willemdebruijn.kernel.224bdf2fac125@gmail.com> (raw)
In-Reply-To: <20251121040259.3647749-5-kuba@kernel.org>

Jakub Kicinski wrote:
> Replace the simple modulo math with the real indirection table
> read from the device. This makes the tests pass for mlx5 and
> bnxt NICs.
> 
> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
> ---
>  .../selftests/drivers/net/hw/toeplitz.c       | 24 ++++++++++++++++++-
>  1 file changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/testing/selftests/drivers/net/hw/toeplitz.c b/tools/testing/selftests/drivers/net/hw/toeplitz.c
> index 7420a4e201cc..a4d04438c313 100644
> --- a/tools/testing/selftests/drivers/net/hw/toeplitz.c
> +++ b/tools/testing/selftests/drivers/net/hw/toeplitz.c
> @@ -68,6 +68,7 @@
>  #define FOUR_TUPLE_MAX_LEN	((sizeof(struct in6_addr) * 2) + (sizeof(uint16_t) * 2))
>  
>  #define RSS_MAX_CPUS (1 << 16)	/* real constraint is PACKET_FANOUT_MAX */
> +#define RSS_MAX_INDIR	(1 << 16)

Only if respinning: maybe also fix the alignment of RSS_MAX_CPUS.
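That is, just the whitespace (a sketch, matching the tab that
RSS_MAX_INDIR uses after the name):

	#define RSS_MAX_CPUS	(1 << 16)	/* real constraint is PACKET_FANOUT_MAX */
	#define RSS_MAX_INDIR	(1 << 16)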
  
>  #define RPS_MAX_CPUS 16UL	/* must be a power of 2 */
>  
> @@ -105,6 +106,8 @@ struct ring_state {
>  static unsigned int rx_irq_cpus[RSS_MAX_CPUS];	/* map from rxq to cpu */
>  static int rps_silo_to_cpu[RPS_MAX_CPUS];
>  static unsigned char toeplitz_key[TOEPLITZ_KEY_MAX_LEN];
> +static unsigned int rss_indir_tbl[RSS_MAX_INDIR];
> +static unsigned int rss_indir_tbl_size;
>  static struct ring_state rings[RSS_MAX_CPUS];
>  
>  static inline uint32_t toeplitz(const unsigned char *four_tuple,
> @@ -133,7 +136,12 @@ static inline uint32_t toeplitz(const unsigned char *four_tuple,
>  /* Compare computed cpu with arrival cpu from packet_fanout_cpu */
>  static void verify_rss(uint32_t rx_hash, int cpu)
>  {
> -	int queue = rx_hash % cfg_num_queues;
> +	int queue;
> +
> +	if (rss_indir_tbl_size)
> +		queue = rss_indir_tbl[rx_hash % rss_indir_tbl_size];
> +	else
> +		queue = rx_hash % cfg_num_queues;
>  
>  	log_verbose(" rxq %d (cpu %d)", queue, rx_irq_cpus[queue]);
>  	if (rx_irq_cpus[queue] != cpu) {
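
Aside, for anyone comparing the two branches: with the kernel's default
table, entry i is i % n_rx_rings (ethtool_rxfh_indir_default()), so the
lookup reduces to the old modulo math whenever the table size S is a
multiple of the queue count N. Sketch of the arithmetic (S and N are
shorthand here, not names from the patch):

	indir[rx_hash % S] == (rx_hash % S) % N == rx_hash % N	/* when N divides S */

So the two paths should only disagree for non-default tables, or for a
table size that is not a multiple of the queue count.
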
> @@ -517,6 +525,20 @@ static void read_rss_dev_info_ynl(void)
>  
>  	memcpy(toeplitz_key, rsp->hkey, rsp->_len.hkey);
>  
> +	if (rsp->_count.indir > RSS_MAX_INDIR)
> +		error(1, 0, "RSS indirection table too large (%u > %u)",
> +		      rsp->_count.indir, RSS_MAX_INDIR);
> +
> +	/* If indir table not available we'll fallback to simple modulo math */
> +	if (rsp->_count.indir) {
> +		memcpy(rss_indir_tbl, rsp->indir,
> +		       rsp->_count.indir * sizeof(rss_indir_tbl[0]));

Can it be assumed that rsp->indir elements are sizeof(rss_indir_tbl[0])?

Is there a way to have the test verify the element size? I'm not that
familiar with YNL.
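
One way to sidestep the element size question entirely (sketch only,
assuming rsp->indir is an array of some integer type, per the
YNL-generated struct):

	unsigned int i;

	/* element-wise copy lets the compiler convert each entry to
	 * unsigned int, whatever sizeof(rsp->indir[0]) turns out to be
	 */
	for (i = 0; i < rsp->_count.indir; i++)
		rss_indir_tbl[i] = rsp->indir[i];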

> +		rss_indir_tbl_size = rsp->_count.indir;
> +
> +		log_verbose("RSS indirection table size: %u\n",
> +			    rss_indir_tbl_size);
> +	}
> +
>  	ethtool_rss_get_rsp_free(rsp);
>  	ethtool_rss_get_req_free(req);
>  	ynl_sock_destroy(ys);
> -- 
> 2.51.1
> 


