From: Petr Machata <petrm@nvidia.com>
To: "David S. Miller" <davem@davemloft.net>,
Eric Dumazet <edumazet@google.com>,
Jakub Kicinski <kuba@kernel.org>, Paolo Abeni <pabeni@redhat.com>,
<netdev@vger.kernel.org>
Cc: Ido Schimmel <idosch@nvidia.com>, Petr Machata <petrm@nvidia.com>,
"Amit Cohen" <amcohen@nvidia.com>, <mlxsw@nvidia.com>
Subject: [PATCH net-next 0/7] mlxsw: Use page pool for Rx buffers allocation
Date: Tue, 18 Jun 2024 13:34:39 +0200
Message-ID: <cover.1718709196.git.petrm@nvidia.com>
Amit Cohen writes:
After the earlier conversion to NAPI for processing hardware events, the
next step is to use a page pool for Rx buffer allocation, which also
enhances performance.

To simplify this change, the page pool is first used to allocate one
contiguous buffer per packet; memory consumption can later be improved by
using fragmented buffers.
This set significantly enhances mlxsw driver performance: the CPU can now
handle about 370% of the packets per second it handled previously.
The next planned improvement is using XDP to optimize telemetry.
Patch set overview:
Patches #1-#2 are small preparations for page pool usage
Patch #3 initializes the page pool, but does not use it yet (a rough sketch
of per-CQ pool creation appears after this list)
Patch #4 converts the driver to use the page pool for buffer allocation
Patch #5 is an optimization for buffer access
Patch #6 cleans up an unused structure
Patch #7 uses napi_consume_skb() as part of Tx completion (also sketched
after this list)
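To make patches #3 and #7 a bit more concrete, below is a minimal sketch of
the two generic APIs involved: page_pool_create() for a per-CQ pool and
napi_consume_skb() for Tx completion. The example_* names, the parameter
choices and the flags are assumptions for illustration, not the driver's
actual code.

#include <linux/device.h>
#include <linux/dma-direction.h>
#include <linux/netdevice.h>
#include <linux/numa.h>
#include <linux/skbuff.h>
#include <net/page_pool/helpers.h>
#include <net/page_pool/types.h>

/* Hypothetical per-CQ init: one page pool sized to the RDQ ring. */
static struct page_pool *example_cq_pool_create(struct device *dev,
						struct napi_struct *napi,
						unsigned int ring_size)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,	/* pool handles DMA mapping */
		.order		= 0,			/* single pages */
		.pool_size	= ring_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.napi		= napi,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}

/* Hypothetical Tx completion: a non-zero budget lets the skb be placed
 * in the per-CPU NAPI cache instead of being freed immediately.
 */
static void example_txq_complete(struct sk_buff *skb, int budget)
{
	napi_consume_skb(skb, budget);
}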
Amit Cohen (7):
mlxsw: pci: Split NAPI setup/teardown into two steps
mlxsw: pci: Store CQ pointer as part of RDQ structure
mlxsw: pci: Initialize page pool per CQ
mlxsw: pci: Use page pool for Rx buffers allocation
mlxsw: pci: Optimize data buffer access
mlxsw: pci: Do not store SKB for RDQ elements
mlxsw: pci: Use napi_consume_skb() to free SKB as part of Tx
completion
drivers/net/ethernet/mellanox/mlxsw/Kconfig | 1 +
drivers/net/ethernet/mellanox/mlxsw/pci.c | 199 ++++++++++++++------
2 files changed, 142 insertions(+), 58 deletions(-)
--
2.45.0
Thread overview: 10+ messages
2024-06-18 11:34 Petr Machata [this message]
2024-06-18 11:34 ` [PATCH net-next 1/7] mlxsw: pci: Split NAPI setup/teardown into two steps Petr Machata
2024-06-18 11:34 ` [PATCH net-next 2/7] mlxsw: pci: Store CQ pointer as part of RDQ structure Petr Machata
2024-06-18 11:34 ` [PATCH net-next 3/7] mlxsw: pci: Initialize page pool per CQ Petr Machata
2024-06-18 11:34 ` [PATCH net-next 4/7] mlxsw: pci: Use page pool for Rx buffers allocation Petr Machata
2024-06-18 11:34 ` [PATCH net-next 5/7] mlxsw: pci: Optimize data buffer access Petr Machata
2024-06-18 11:34 ` [PATCH net-next 6/7] mlxsw: pci: Do not store SKB for RDQ elements Petr Machata
2024-06-18 11:34 ` [PATCH net-next 7/7] mlxsw: pci: Use napi_consume_skb() to free SKB as part of Tx completion Petr Machata
2024-06-18 14:01 ` [PATCH net-next 0/7] mlxsw: Use page pool for Rx buffers allocation Jiri Pirko
2024-06-20 0:50 ` patchwork-bot+netdevbpf