From: Subbaraya Sundeep <sbhatta@marvell.com>
To: <davem@davemloft.net>, <kuba@kernel.org>, <netdev@vger.kernel.org>
Cc: <sgoutham@marvell.com>, <hkelam@marvell.com>,
<gakula@marvell.com>, Subbaraya Sundeep <sbhatta@marvell.com>
Subject: [net-next PATCH v2 0/3] Add devlink params to vary cqe and rbuf
Date: Wed, 6 Oct 2021 12:48:43 +0530
Message-ID: <1633504726-30751-1-git-send-email-sbhatta@marvell.com>
Octeontx2 hardware writes a Completion Queue Entry (CQE) to memory
provided by software when a packet is received or transmitted. A CQE
holds the buffer pointers (IOVAs) where the packet data fragments are
written by hardware. A 128-byte CQE can hold 6 buffer pointers, while
a 512-byte CQE can hold 42. Hence, large packets can be received
either by using 512-byte CQEs or by increasing the size of the
receive buffers. The current driver supports only 128-byte CQEs.
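
For intuition, the 6 and 42 figures above work out if one assumes a
64-byte fixed portion per CQE (CQE header plus RX parse data) followed
by 32-byte scatter/gather units, each carrying one 8-byte SG word and
three 8-byte IOVAs. This is a back-of-the-envelope sketch under that
assumption, not the exact hardware layout:

/* Assumed layout: 64 fixed bytes, then 32-byte SG units with
 * one SG word and three buffer pointers (IOVAs) each.
 */
#define CQE_FIXED_BYTES    64  /* cqe header + rx parse (assumed) */
#define SG_UNIT_BYTES      32  /* 8-byte SG word + 3 x 8-byte IOVAs */
#define PTRS_PER_SG_UNIT    3

static inline int cqe_max_bufs(int cqe_size)
{
	return (cqe_size - CQE_FIXED_BYTES) / SG_UNIT_BYTES *
	       PTRS_PER_SG_UNIT;
}

/* cqe_max_bufs(128) == 6, cqe_max_bufs(512) == 42 */
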
This patchset adds devlink params to change the CQE and receive
buffer sizes, which in turn lets the user tune between many small
buffers and fewer large buffers when receiving large packets. Below
is a description of the patches:
Patch 1 - Prepares for 512-byte CQE operation by separating out the
transmit-side and receive-side configuration. It also simplifies the
existing rbuf size calculation.
Patch 2 - Adds a devlink param to change the CQE size. It sets the
new config and toggles the interface so that cleanup and init happen
properly (see the sketch after this list).
Patch 3 - Similar to patch 2; adds a devlink param to change the
receive buffer size.
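
As a rough sketch of what patch 2 wires up: the enum value, param
name, and hw.xqe_size field below are illustrative, not necessarily
this series' actual identifiers; the callback signatures and the
DEVLINK_PARAM_DRIVER macro are the standard kernel devlink API, and
the devlink_priv() pattern mirrors the existing otx2_devlink.c code.

/* Illustrative only; the real hooks would live in otx2_devlink.c. */
static int otx2_dl_cqe_size_get(struct devlink *devlink, u32 id,
				struct devlink_param_gset_ctx *ctx)
{
	struct otx2_devlink *otx2_dl = devlink_priv(devlink);

	ctx->val.vu16 = otx2_dl->pfvf->hw.xqe_size; /* assumed field */
	return 0;
}

static int otx2_dl_cqe_size_validate(struct devlink *devlink, u32 id,
				     union devlink_param_value val,
				     struct netlink_ext_ack *extack)
{
	if (val.vu16 != 128 && val.vu16 != 512) {
		NL_SET_ERR_MSG_MOD(extack, "CQE size must be 128 or 512");
		return -EINVAL;
	}
	return 0;
}

static int otx2_dl_cqe_size_set(struct devlink *devlink, u32 id,
				struct devlink_param_gset_ctx *ctx)
{
	struct otx2_devlink *otx2_dl = devlink_priv(devlink);
	struct otx2_nic *pfvf = otx2_dl->pfvf;

	pfvf->hw.xqe_size = ctx->val.vu16; /* assumed field */

	/* Apply by toggling the interface so the queues are torn
	 * down and re-initialized with the new CQE size.
	 */
	if (netif_running(pfvf->netdev)) {
		otx2_stop(pfvf->netdev);
		otx2_open(pfvf->netdev);
	}
	return 0;
}

static const struct devlink_param otx2_dl_params[] = {
	DEVLINK_PARAM_DRIVER(OTX2_DEVLINK_PARAM_ID_CQE_SIZE, /* assumed */
			     "cqe_size", DEVLINK_PARAM_TYPE_U16,
			     BIT(DEVLINK_PARAM_CMODE_RUNTIME),
			     otx2_dl_cqe_size_get, otx2_dl_cqe_size_set,
			     otx2_dl_cqe_size_validate),
};

With the hypothetical param name above, usage from userspace might
look like (device address illustrative):
devlink dev param set pci/0002:02:00.0 name cqe_size value 512 cmode runtime
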
v2 changes:
Fixed a compilation error in patch 1:
error: ‘struct otx2_nic’ has no member named ‘max_frs’
Thanks,
Sundeep
Subbaraya Sundeep (3):
octeontx2-pf: Simplify the receive buffer size calculation
octeontx2-pf: Add devlink param to vary cqe size
octeontx2-pf: Add devlink param to vary rbuf size
drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c | 2 +-
.../ethernet/marvell/octeontx2/nic/otx2_common.c | 20 ++--
.../ethernet/marvell/octeontx2/nic/otx2_common.h | 4 +-
.../ethernet/marvell/octeontx2/nic/otx2_devlink.c | 116 +++++++++++++++++++++
.../net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 24 +++--
.../net/ethernet/marvell/octeontx2/nic/otx2_txrx.c | 30 ++++--
.../net/ethernet/marvell/octeontx2/nic/otx2_txrx.h | 4 +-
.../net/ethernet/marvell/octeontx2/nic/otx2_vf.c | 7 ++
8 files changed, 177 insertions(+), 30 deletions(-)
--
2.7.4