netdev.vger.kernel.org archive mirror
From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: Michal Kubiak <michal.kubiak@intel.com>,
	<intel-wired-lan@lists.osuosl.org>
Cc: <maciej.fijalkowski@intel.com>, <netdev@vger.kernel.org>,
	"Michal Swiatkowski" <michal.swiatkowski@intel.com>
Subject: Re: [PATCH iwl-next] ice: add a separate Rx handler for flow director commands
Date: Mon, 24 Mar 2025 11:07:01 +0100	[thread overview]
Message-ID: <a3022053-18a9-45fe-af14-cbcede33e94f@intel.com> (raw)
In-Reply-To: <20250321151357.28540-1-michal.kubiak@intel.com>

On 3/21/25 16:13, Michal Kubiak wrote:
> The "ice" driver implementation uses the control VSI to handle
> the flow director configuration for PFs and VFs.
> 
> Unfortunately, although a separate VSI type was created to handle flow
> director queues, the Rx queue handler was shared between the flow
> director and a standard NAPI Rx handler.
> 
> Such a design was not very flexible. First, it mixed hotpath
> and slowpath code, blocking further optimization of both. It also added
> significant overhead to flow director command processing, which is
> descriptor-based only, so there is no need to allocate Rx data buffers.
> 
> For the above reasons, implement a separate Rx handler for the control
> VSI. Also, remove the code dedicated to configuring flow director
> rules on VFs from the NAPI handler.
> Do not allocate Rx data buffers for the flow director queues because
> their processing is descriptor-based only.
> Finally, allow Rx data queues to be allocated only for VSIs that have
> a netdev assigned to them.
> 
> This handler splitting approach is the first step in converting the
> driver to use the Page Pool (which can only be used for data queues).
> 
> Test hints:
>    1. Create a VF for any PF managed by the ice driver.
>    2. In a loop, add and delete flow director rules for the VF, e.g.:
> 
>         for i in {1..128}; do
>             q=$(( i % 16 ))
>             ethtool -N ens802f0v0 flow-type tcp4 dst-port "$i" action "$q"
>         done
> 
>         for i in {0..127}; do
>             ethtool -N ens802f0v0 delete "$i"
>         done
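
For step 1 of the test hints, a minimal sketch (assuming the PF is
ens802f0, matching the VF name ens802f0v0 used in the loops above; the
sriov_numvfs file is the standard SR-IOV sysfs interface):

```shell
# Enable a single VF on the example PF (name is an assumption)
echo 1 > /sys/class/net/ens802f0/device/sriov_numvfs

# The VF netdev (e.g. ens802f0v0) should appear shortly; verify:
ip link show ens802f0v0
```

Writing 0 to the same file removes the VFs again once the test is done.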
> 
> Suggested-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Suggested-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
> Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Michal Kubiak <michal.kubiak@intel.com>
Thank you, a very nice improvement

Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>


Thread overview: 6+ messages
2025-03-21 15:13 [PATCH iwl-next] ice: add a separate Rx handler for flow director commands Michal Kubiak
2025-03-21 22:52 ` [Intel-wired-lan] " Keller, Jacob E
2025-03-24 10:07 ` Przemek Kitszel [this message]
2025-04-09 19:14 ` Simon Horman
2025-05-14 10:26 ` Maciej Fijalkowski
2025-05-14 12:33   ` Michal Kubiak
