From: Jakub Kicinski <kuba@kernel.org>
To: davem@davemloft.net
Cc: netdev@vger.kernel.org, edumazet@google.com, pabeni@redhat.com,
Jakub Kicinski <kuba@kernel.org>,
ecree.xilinx@gmail.com, corbet@lwn.net,
linux-doc@vger.kernel.org
Subject: [PATCH net-next] docs: networking: document multi-RSS context
Date: Tue, 17 Oct 2023 18:07:58 -0700
Message-ID: <20231018010758.2382742-1-kuba@kernel.org>
There seem to be no docs covering the concept of multiple RSS
contexts and how to configure them. I had to explain it three
times recently, the third time being the charm, so document it.
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
---
CC: ecree.xilinx@gmail.com
CC: corbet@lwn.net
CC: linux-doc@vger.kernel.org
---
Documentation/networking/scaling.rst | 58 ++++++++++++++++++++++++++++
1 file changed, 58 insertions(+)
diff --git a/Documentation/networking/scaling.rst b/Documentation/networking/scaling.rst
index 92c9fb46d6a2..03ae19a689fc 100644
--- a/Documentation/networking/scaling.rst
+++ b/Documentation/networking/scaling.rst
@@ -105,6 +105,64 @@ a separate CPU. For interrupt handling, HT has shown no benefit in
initial tests, so limit the number of queues to the number of CPU cores
in the system.
+Dedicated RSS contexts
+~~~~~~~~~~~~~~~~~~~~~~
+
+Modern NICs support creating multiple co-existing RSS configurations
+which are selected based on explicit matching rules. This can be very
+useful when an application wants to constrain the set of queues
+receiving traffic for e.g. a particular destination port or IP address.
+The example below shows how to direct all traffic to TCP port 22
+to queues 0 and 1.
+
+To create an additional RSS context use::
+
+ # ethtool -X eth0 hfunc toeplitz context new
+ New RSS context is 1
+
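+The hash function and an initial indirection table can also be given
+in a single step (a sketch; which option combinations are accepted
+depends on the ethtool version and the driver)::
+
+ # ethtool -X eth0 context new equal 2
+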
+The kernel reports back the ID of the allocated context (the default,
+always-present RSS context has ID 0). The new context can be queried
+and modified using the same APIs as the default context::
+
+ # ethtool -x eth0 context 1
+ RX flow hash indirection table for eth0 with 13 RX ring(s):
+ 0: 0 1 2 3 4 5 6 7
+ 8: 8 9 10 11 12 0 1 2
+ [...]
+ # ethtool -X eth0 equal 2 context 1
+ # ethtool -x eth0 context 1
+ RX flow hash indirection table for eth0 with 13 RX ring(s):
+ 0: 0 1 0 1 0 1 0 1
+ 8: 0 1 0 1 0 1 0 1
+ [...]
+
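+The indirection table does not have to spread traffic equally. To send
+twice as much traffic to queue 0 as to queue 1, for example, an explicit
+weight list can be used (a sketch; weight support varies by driver)::
+
+ # ethtool -X eth0 weight 2 1 context 1
+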
+To make use of the new context, direct traffic to it using an n-tuple
+filter::
+
+ # ethtool -N eth0 flow-type tcp6 dst-port 22 context 1
+ Added rule with ID 1023
+
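+The new rule can be inspected like any other n-tuple filter::
+
+ # ethtool -n eth0 rule 1023
+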
+When done, remove the rule and then the context::
+
+ # ethtool -N eth0 delete 1023
+ # ethtool -X eth0 context 1 delete
+
RPS: Receive Packet Steering
============================
--
2.41.0