From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Paul Menzel <pmenzel@molgen.mpg.de>
Cc: <intel-wired-lan@lists.osuosl.org>,
Tony Nguyen <anthony.l.nguyen@intel.com>,
Przemek Kitszel <przemyslaw.kitszel@intel.com>,
Andrew Lunn <andrew+netdev@lunn.ch>,
"David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>,
"Paolo Abeni" <pabeni@redhat.com>,
Simon Horman <horms@kernel.org>, Kohei Enju <kohei@enjuk.jp>,
Jacob Keller <jacob.e.keller@intel.com>,
"Aleksandr Loktionov" <aleksandr.loktionov@intel.com>,
<nxne.cnse.osdt.itp.upstreaming@intel.com>,
<netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v3 1/5] libeth: pass Rx queue index to PP when creating a fill queue
Date: Tue, 3 Mar 2026 16:42:48 +0100
Message-ID: <ef25723a-84c4-46af-9f54-81945b21e9f4@intel.com>
In-Reply-To: <4dbf4f75-0474-4583-a2ca-77e4886c2dec@molgen.mpg.de>
From: Paul Menzel <pmenzel@molgen.mpg.de>
Date: Tue, 24 Feb 2026 19:53:11 +0100
> Dear Alexander,
>
>
> Thank you for your patch.
>
> Am 24.02.26 um 18:46 schrieb Alexander Lobakin:
>> Since recently, page_pool_create() accepts an optional stack index of
>> the Rx queue the pool is created for. It can then be used on the
>> control path for features such as memory providers.
>> Add the same field to libeth_fq and pass the index from all the
>> drivers that use libeth to manage Rx, to simplify implementing MP
>> support later.
>> idpf has one libeth_fq per buffer/fill queue, and each Rx queue has
>> two fill queues; but since fill queues are never shared, we can
>> store the corresponding Rx queue index there during
>> initialization and pass it to libeth.
>>
>> Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
>> Reviewed-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
[...]
>> diff --git a/include/net/libeth/rx.h b/include/net/libeth/rx.h
>> index 5d991404845e..3b3d7acd13c9 100644
>> --- a/include/net/libeth/rx.h
>> +++ b/include/net/libeth/rx.h
>> @@ -71,6 +71,7 @@ enum libeth_fqe_type {
>> * @xdp: flag indicating whether XDP is enabled
>> * @buf_len: HW-writeable length per each buffer
>> * @nid: ID of the closest NUMA node with memory
>> + * @idx: stack index of the corresponding Rx queue
>> */
>> struct libeth_fq {
>> struct_group_tagged(libeth_fq_fp, fp,
>> @@ -88,6 +89,7 @@ struct libeth_fq {
>> u32 buf_len;
>> int nid;
>> + u32 idx;
>
> The type above and here is different (u16 vs u32), despite the
> description being the same. Could you enlighten me why, and maybe add it
> to the commit message?
The idpf queue index can never exceed U16_MAX, and a u16 field stacks
nicely with the other fields there. libeth is more generic, and in
general I prefer fields at least 4 bytes long, hence u32.
I don't think it matters much either way.
>
>
> Kind regards,
>
> Paul
Thanks,
Olek
2026-02-24 17:46 [PATCH iwl-next v3 0/5] ice: add support for devmem/io_uring Rx and Tx Alexander Lobakin
2026-02-24 17:46 ` [PATCH iwl-next v3 1/5] libeth: pass Rx queue index to PP when creating a fill queue Alexander Lobakin
2026-02-24 18:53 ` [Intel-wired-lan] " Paul Menzel
2026-03-03 15:42 ` Alexander Lobakin [this message]
2026-02-24 17:46 ` [PATCH iwl-next v3 2/5] libeth: handle creating pools with unreadable buffers Alexander Lobakin
2026-03-05 22:04 ` [Intel-wired-lan] " Tantilov, Emil S
2026-03-06 11:57 ` Alexander Lobakin
2026-02-24 17:46 ` [PATCH iwl-next v3 3/5] ice: migrate to netdev ops lock Alexander Lobakin
2026-02-24 17:46 ` [PATCH iwl-next v3 4/5] ice: implement Rx queue management ops Alexander Lobakin
2026-02-24 17:46 ` [PATCH iwl-next v3 5/5] ice: add support for transmitting unreadable frags Alexander Lobakin