public inbox for dev@dpdk.org
From: Stephen Hemminger <stephen@networkplumber.org>
To: David Marchand <david.marchand@redhat.com>
Cc: dev@dpdk.org, rjarry@redhat.com, cfontain@redhat.com
Subject: Re: [PATCH 0/4] Remove limitations coming from legacy VMDq
Date: Sun, 5 Apr 2026 11:47:20 -0700	[thread overview]
Message-ID: <20260405114720.081a3b56@phoenix.local> (raw)
In-Reply-To: <20260403091836.1073484-1-david.marchand@redhat.com>

On Fri,  3 Apr 2026 11:18:31 +0200
David Marchand <david.marchand@redhat.com> wrote:

> Since the commit 88ac4396ad29 ("ethdev: add VMDq support"),
> VMDq has been imposing a maximum number of mac addresses in the
> mac_addr_add/del API.
> 
> Nowadays, new Intel drivers do not support the feature and few other
> drivers implement this feature.
> 
> This series proposes to flag drivers that support the feature, and
> remove the limit of number of mac addresses for others.
> 
> Next step could be to remove the VMDq pool notion from the generic API.
> However I have some concern about this, as changing the quite stable
> mac_addr_add/del API now seems a lot of noise for not much benefit.
> 
> 

Makes sense. The AI review found a couple of things; I had to poke at
it to get a good description.

Patches 1/4 and 3/4 look good to me.

Patch 2/4 has two issues:

1) In bnxt_reps.c, the new line:

  dev_info->dev_capa = RTE_ETH_DEV_CAPA_VMDQ;

overwrites any default capabilities that were previously set before
the driver callback. The other drivers in this patch had the same
pre-existing pattern (plain assignment before &= ~FLOW_RULE_KEEP),
so for them it's no worse. But bnxt_reps.c previously only did the
&= ~ clear, so this is a new regression. Should be |= instead of =.

2) Several drivers that receive RTE_ETH_DEV_CAPA_VMDQ don't actually
support VMDq: e1000/em, bnxt representors, and i40e VF representors
have no max_vmdq_pools or VMDq configuration. Marking them as
VMDq-capable seems incorrect and would allow users to attempt VMDq
configuration on devices that can't handle it.

Patch 4/4 has a stack overflow:

iavf_add_del_all_mac_addr() allocates list_req on the stack with
addr[IAVF_UC_MACADDR_MAX]. At 32768 entries of ~8 bytes each,
that's roughly 256 KiB on the stack. This will blow the stack
on most configurations. Needs to be heap-allocated.


Thread overview: 7+ messages
2026-04-03  9:18 [PATCH 0/4] Remove limitations coming from legacy VMDq David Marchand
2026-04-03  9:18 ` [PATCH 1/4] ethdev: skip VMDq pools unless configured David Marchand
2026-04-03  9:18 ` [PATCH 2/4] ethdev: announce VMDq capability David Marchand
2026-04-06 22:22   ` Kishore Padmanabha
2026-04-03  9:18 ` [PATCH 3/4] ethdev: hide VMDq internal sizes David Marchand
2026-04-03  9:18 ` [PATCH 4/4] net/iavf: accept up to 32k unicast MAC addresses David Marchand
2026-04-05 18:47 ` Stephen Hemminger [this message]
