public inbox for linux-kernel@vger.kernel.org
From: Alexander Lobakin <aleksander.lobakin@intel.com>
To: Alexander Duyck <alexander.duyck@gmail.com>
Cc: Jakub Kicinski <kuba@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	Paolo Abeni <pabeni@redhat.com>,
	"Maciej Fijalkowski" <maciej.fijalkowski@intel.com>,
	Magnus Karlsson <magnus.karlsson@intel.com>,
	Michal Kubiak <michal.kubiak@intel.com>,
	"Larysa Zaremba" <larysa.zaremba@intel.com>,
	Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	"Christoph Hellwig" <hch@lst.de>,
	Paul Menzel <pmenzel@molgen.mpg.de>, <netdev@vger.kernel.org>,
	<intel-wired-lan@lists.osuosl.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH net-next v3 09/12] iavf: switch to Page Pool
Date: Tue, 6 Jun 2023 15:13:52 +0200
Message-ID: <df0bfcc0-558e-d394-be3e-59264a495e86@intel.com>
In-Reply-To: <CAKgT0Ue7US2wwZXXU6HcGPBZWg+pSZ=PE_HWxJHgF8bmLymkfg@mail.gmail.com>

From: Alexander Duyck <alexander.duyck@gmail.com>
Date: Fri, 2 Jun 2023 11:00:07 -0700

> On Fri, Jun 2, 2023 at 9:31 AM Alexander Lobakin
> <aleksander.lobakin@intel.com> wrote:

[...]

>>> Not a fan of this switching back and forth between being a page pool
>>> pointer and a dev pointer. It seems problematic, as it is easily
>>> misinterpreted. I would say, at a minimum, stick to either page_pool
>>> (Rx) or dev (Tx) on a per-ring-type basis.
>>
>> The problem is that a page_pool's lifetime spans ifup to ifdown, while
>> the ring it serves lives longer. So I had to do something about this,
>> but I also didn't want to keep two pointers at the same time, since
>> that's redundant and adds 8 bytes to the ring for nothing.
> 
> It might be better to just go with NULL rather than populating it with
> two different possible values. Then at least you know that if it is an
> rx_ring it is a page_pool, and if it is a tx_ring it is dev. You can
> reset it to the page pool when you repopulate the rest of the ring.

IIRC I did that to have the struct device pointer available at the
moment the page_pools are created. But that sounds reasonable, I'll take
a look.
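
Something along these lines is what I'd try, I guess. A rough, untested
sketch only: iavf_create_rx_page_pool() is a made-up helper name and the
page_pool params are from the top of my head, but it shows the idea of
taking the struct device from the netdev instead of caching it in the Rx
ring, and leaving ->pool NULL while the interface is down.

/* Untested sketch: the Rx ring member is only ever a page_pool pointer
 * (or NULL between ifdown and ifup); the device needed for DMA mapping
 * is taken from the netdev at pool creation time.
 */
static int iavf_create_rx_page_pool(struct iavf_ring *rx_ring)
{
	struct page_pool_params pp = {
		.flags		= PP_FLAG_DMA_MAP,
		.order		= 0,
		.pool_size	= rx_ring->count,
		.nid		= NUMA_NO_NODE,
		.dev		= rx_ring->netdev->dev.parent,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool;

	pool = page_pool_create(&pp);
	if (IS_ERR(pool))
		return PTR_ERR(pool);

	rx_ring->pool = pool;

	return 0;
}

and on ifdown, after the Rx buffers have been freed:

	page_pool_destroy(rx_ring->pool);
	rx_ring->pool = NULL;	/* the ring outlives its pool */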

> 
>>> This setup works for iavf; however, for i40e/ice you may run into
>>> issues, since the setup_rx_descriptors call is also used to set up
>>> the ethtool loopback test without a napi struct, as I recall, so
>>> there may not be a q_vector.
>>
>> I'll handle that. Somehow :D Thanks for noticing; I'll check whether I
>> should do something right now or whether it can wait until the
>> mentioned drivers are actually switched.
>>
>> [...]
>>
>>>> @@ -240,7 +237,10 @@ struct iavf_rx_queue_stats {
>>>>  struct iavf_ring {
>>>>      struct iavf_ring *next;         /* pointer to next ring in q_vector */
>>>>      void *desc;                     /* Descriptor ring memory */
>>>> -    struct device *dev;             /* Used for DMA mapping */
>>>> +    union {
>>>> +            struct page_pool *pool; /* Used for Rx page management */
>>>> +            struct device *dev;     /* Used for DMA mapping on Tx */
>>>> +    };
>>>>      struct net_device *netdev;      /* netdev ring maps to */
>>>>      union {
>>>>              struct iavf_tx_buffer *tx_bi;
>>>
>>> Would it make more sense to have the page pool in the q_vector rather
>>> than the ring? Essentially, the page pool is associated with a napi
>>> instance, so it seems like it would make more sense to store it with
>>> the napi struct rather than potentially having multiple instances per
>>> napi.
>>
>> As per the Page Pool design, you should have one per ring. Plus there
>> is rxq_info (the XDP-related structure), which is also per-ring and
>> participates in recycling in some cases. So I wouldn't complicate
>> things. I went down the chain and haven't found any place where having
>> more than one PP per NAPI would break anything. If I got it right,
>> Jakub's optimization discourages sharing one PP between several NAPIs
>> (or scheduling one NAPI on different CPUs), but not the other way
>> around. The goal was to exclude concurrent access to one PP from
>> different threads, and that is impossible here.
> 
> The xdp_rxq can be mapped many:1 to the page pool if I am not mistaken.
> 
> The only reason I am a fan of trying to keep the page_pool tightly
> associated with the napi instance is that the napi instance is what
> essentially guarantees the page_pool stays consistent, since it is only
> accessed by that one napi instance.

Here we can't have more than one NAPI instance accessing one page_pool,
so I did that (tied each pool to its ring's NAPI) unconditionally. I'm a
fan of what you've said, too :p
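
FWIW, with per-ring pools, keeping Jakub's napi-aware recycling is just
a matter of pointing each pool at the single NAPI that polls its ring.
A sketch on top of the pool-creation params above (assuming the
->q_vector backreference and its napi member keep their current names):

	/* Sketch: every Rx ring gets its own pool, and every pool is
	 * tied to the one NAPI instance that ever touches it.
	 */
	pp.napi = &rx_ring->q_vector->napi;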

> 
>> Lemme know. I can always disable the NAPI optimization for cases when
>> one vector is shared by several queues -- not a common case for these
>> NICs anyway -- but I haven't found a reason to do so.
> 
> I suppose we should be fine if we have a many-to-one mapping, though.
> As you said, the issue would be if multiple NAPIs were accessing the
> same page pool.
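
Right. And if that ever turns out to be a problem, disabling the
optimization for shared vectors would be trivial on top of the sketch
above, e.g. (assuming num_ringpairs, or whatever counter we end up
with, tells how many ring pairs the vector serves):

	/* Sketch: only advertise the NAPI to the pool when the vector
	 * serves exactly one ring pair, so "one pool, one NAPI context"
	 * always holds.
	 */
	if (rx_ring->q_vector->num_ringpairs == 1)
		pp.napi = &rx_ring->q_vector->napi;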

Thanks,
Olek


Thread overview: 23+ messages
2023-05-30 15:00 [PATCH net-next v3 00/12] net: intel: start The Great Code Dedup + Page Pool for iavf Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 01/12] net: intel: introduce Intel Ethernet common library Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 02/12] iavf: kill "legacy-rx" for good Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 03/12] iavf: optimize Rx buffer allocation a bunch Alexander Lobakin
2023-05-31 15:37   ` Alexander H Duyck
2023-05-31 16:39   ` Maciej Fijalkowski
2023-06-02 14:09     ` Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 04/12] iavf: remove page splitting/recycling Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 05/12] iavf: always use a full order-0 page Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 06/12] net: skbuff: don't include <net/page_pool.h> into <linux/skbuff.h> Alexander Lobakin
2023-05-31 15:21   ` Alexander H Duyck
2023-05-31 15:28     ` Alexander Lobakin
2023-05-31 15:29       ` Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 07/12] net: page_pool: avoid calling no-op externals when possible Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 08/12] net: page_pool: add DMA-sync-for-CPU inline helpers Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 09/12] iavf: switch to Page Pool Alexander Lobakin
2023-05-31 16:19   ` Alexander H Duyck
2023-06-02 16:29     ` Alexander Lobakin
2023-06-02 18:00       ` Alexander Duyck
2023-06-06 13:13         ` Alexander Lobakin [this message]
2023-05-30 15:00 ` [PATCH net-next v3 10/12] libie: add common queue stats Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 11/12] libie: add per-queue Page Pool stats Alexander Lobakin
2023-05-30 15:00 ` [PATCH net-next v3 12/12] iavf: switch queue stats to libie Alexander Lobakin
