* [PATCH 01/29] examples/timer: correct documentation errors
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 02/29] examples/packet_ordering: " Stephen Hemminger
` (27 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Erik Gabriel Carrillo
Address minor issues in timer sample documentation:
- Capitalize "linux" to "Linux"
- Complete incomplete phrase "and also on the main" to
"and also on the main lcore"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/timer.rst | 45 +++++++++++++++---------------
1 file changed, 23 insertions(+), 22 deletions(-)
diff --git a/doc/guides/sample_app_ug/timer.rst b/doc/guides/sample_app_ug/timer.rst
index 7af35d3d67..a1149be6c3 100644
--- a/doc/guides/sample_app_ug/timer.rst
+++ b/doc/guides/sample_app_ug/timer.rst
@@ -4,20 +4,23 @@
Timer Sample Application
========================
-The Timer sample application is a simple application that demonstrates the use of a timer in a DPDK application.
-This application prints some messages from different lcores regularly, demonstrating the use of timers.
+Overview
+--------
+
+The Timer sample application demonstrates the use of a timer in a DPDK application.
+This application prints messages from different lcores at regular intervals using timers.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``timer`` sub-directory.
Running the Application
-----------------------
-To run the example in linux environment:
+To run the example in a Linux environment:
.. code-block:: console
@@ -29,12 +32,10 @@ the Environment Abstraction Layer (EAL) options.
Explanation
-----------
-The following sections provide some explanation of the code.
-
Initialization and Main Loop
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In addition to EAL initialization, the timer subsystem must be initialized, by calling the rte_timer_subsystem_init() function.
+In addition to EAL initialization, the timer subsystem must be initialized by calling the ``rte_timer_subsystem_init()`` function.
.. literalinclude:: ../../../examples/timer/main.c
:language: c
@@ -44,7 +45,7 @@ In addition to EAL initialization, the timer subsystem must be initialized, by c
After timer creation (see the next paragraph), the main loop is
executed on each worker lcore using the well-known
-rte_eal_remote_launch() and also on the main.
+``rte_eal_remote_launch()`` and also on the main lcore.
.. literalinclude:: ../../../examples/timer/main.c
:language: c
@@ -61,14 +62,14 @@ The main loop is very simple in this example:
:dedent: 1
As explained in the comment, it is better to use the TSC register (as it is a per-lcore register) to check if the
-rte_timer_manage() function must be called or not.
+``rte_timer_manage()`` function must be called or not.
In this example, the resolution of the timer is 10 milliseconds.
Managing Timers
~~~~~~~~~~~~~~~
-In the main() function, the two timers are initialized.
-This call to rte_timer_init() is necessary before doing any other operation on the timer structure.
+In the ``main()`` function, the two timers are initialized.
+This call to ``rte_timer_init()`` is necessary before doing any other operation on the timer structure.
.. literalinclude:: ../../../examples/timer/main.c
:language: c
@@ -76,15 +77,15 @@ This call to rte_timer_init() is necessary before doing any other operation on t
:end-before: >8 End of init timer structures.
:dedent: 1
-Then, the two timers are configured:
+Next, the two timers are configured:
-* The first timer (timer0) is loaded on the main lcore and expires every second.
- Since the PERIODICAL flag is provided, the timer is reloaded automatically by the timer subsystem.
- The callback function is timer0_cb().
+* The first timer (``timer0``) is loaded on the main lcore and expires every second.
+ Since the ``PERIODICAL`` flag is provided, the timer is reloaded automatically by the timer subsystem.
+ The callback function is ``timer0_cb()``.
-* The second timer (timer1) is loaded on the next available lcore every 333 ms.
- The SINGLE flag means that the timer expires only once and must be reloaded manually if required.
- The callback function is timer1_cb().
+* The second timer (``timer1``) is loaded on the next available lcore every 333 ms.
+ The ``SINGLE`` flag means that the timer expires only once and must be reloaded manually if required.
+ The callback function is ``timer1_cb()``.
.. literalinclude:: ../../../examples/timer/main.c
:language: c
@@ -92,16 +93,16 @@ Then, the two timers are configured:
:end-before: >8 End of two timers configured.
:dedent: 1
-The callback for the first timer (timer0) only displays a message until a global counter reaches 20 (after 20 seconds).
-In this case, the timer is stopped using the rte_timer_stop() function.
+The callback for the first timer (``timer0``) only displays a message until a global counter reaches 20 (after 20 seconds).
+In this case, the timer is stopped using the ``rte_timer_stop()`` function.
.. literalinclude:: ../../../examples/timer/main.c
:language: c
:start-after: timer0 callback. 8<
:end-before: >8 End of timer0 callback.
-The callback for the second timer (timer1) displays a message and reloads the timer on the next lcore, using the
-rte_timer_reset() function:
+The callback for the second timer (``timer1``) displays a message and reloads the timer on the next lcore, using the
+``rte_timer_reset()`` function:
.. literalinclude:: ../../../examples/timer/main.c
:language: c
--
2.51.0
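The timer patch above documents the two DPDK timer flags (``PERIODICAL`` re-armed by the subsystem, ``SINGLE`` fired once) and the pattern of calling ``rte_timer_manage()`` from each lcore's loop. As a plain-C sketch of that pattern — all names here are hypothetical toy stand-ins, not the real ``rte_timer.h`` API:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the timer pattern the patch describes: a PERIODICAL timer
 * is re-armed automatically by the manager, a SINGLE timer fires once and
 * must be reset manually. The real API is rte_timer_reset()/_manage(). */
enum toy_timer_type { TOY_SINGLE, TOY_PERIODICAL };

struct toy_timer {
	uint64_t expire;               /* next expiry, in ticks */
	uint64_t period;               /* reload interval for PERIODICAL */
	enum toy_timer_type type;
	int armed;
	void (*cb)(struct toy_timer *);
};

static void toy_timer_reset(struct toy_timer *t, uint64_t now, uint64_t ticks,
			    enum toy_timer_type type, void (*cb)(struct toy_timer *))
{
	t->expire = now + ticks;
	t->period = ticks;
	t->type = type;
	t->cb = cb;
	t->armed = 1;
}

/* Like rte_timer_manage(): run callbacks for expired timers. In the real
 * sample this is gated by the per-lcore TSC so it runs only every ~10 ms. */
static void toy_timer_manage(struct toy_timer **timers, int n, uint64_t now)
{
	for (int i = 0; i < n; i++) {
		struct toy_timer *t = timers[i];
		if (t->armed && now >= t->expire) {
			if (t->type == TOY_PERIODICAL)
				t->expire += t->period;   /* auto-reload */
			else
				t->armed = 0;             /* one-shot */
			t->cb(t);
		}
	}
}

static int fires0, fires1;
static void cb0(struct toy_timer *t) { (void)t; fires0++; }
static void cb1(struct toy_timer *t) { (void)t; fires1++; }
```

Driving the manager over 100 ticks with a periodical timer of period 10 and a one-shot timer at 33 shows the reload difference: the first fires ten times, the second once.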
* [PATCH 02/29] examples/packet_ordering: correct documentation errors
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Volodymyr Fialko
Address grammar and clarity issues in packet ordering sample documentation:
- Change "enables output the" to "enables output of the"
- Clarify port mask description from "either 1 or even enabled port
numbers" to "either 1 or an even number of enabled ports"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/packet_ordering.rst | 23 +++++++++++---------
1 file changed, 13 insertions(+), 10 deletions(-)
diff --git a/doc/guides/sample_app_ug/packet_ordering.rst b/doc/guides/sample_app_ug/packet_ordering.rst
index f96c0ad697..057dd932d4 100644
--- a/doc/guides/sample_app_ug/packet_ordering.rst
+++ b/doc/guides/sample_app_ug/packet_ordering.rst
@@ -4,29 +4,29 @@
Packet Ordering Application
============================
-The Packet Ordering sample app simply shows the impact of reordering a stream.
-It's meant to stress the library with different configurations for performance.
+The Packet Ordering sample application shows the impact of reordering a stream.
+It is meant to stress the library with different configurations for performance.
Overview
--------
The application uses at least three CPU cores:
-* RX core (main core) receives traffic from the NIC ports and feeds Worker
+* The RX core (main core) receives traffic from the NIC ports and feeds Worker
cores with traffic through SW queues.
-* Worker (worker core) basically do some light work on the packet.
- Currently it modifies the output port of the packet for configurations with
+* The Worker (worker core) does some light work on the packet.
+ Currently, it modifies the output port of the packet for configurations with
more than one port enabled.
-* TX Core (worker core) receives traffic from Worker cores through software queues,
+* The TX Core (worker core) receives traffic from Worker cores through software queues,
inserts out-of-order packets into reorder buffer, extracts ordered packets
from the reorder buffer and sends them to the NIC ports for transmission.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``packet_ordering`` sub-directory.
@@ -36,6 +36,9 @@ Running the Application
Refer to *DPDK Getting Started Guide* for general information on running applications
and the Environment Abstraction Layer (EAL) options.
+Explanation
+-----------
+
Application Command Line
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -50,12 +53,12 @@ The -l EAL corelist option has to contain at least 3 CPU cores.
The first CPU core in the core mask is the main core and would be assigned to
RX core, the last to TX core and the rest to Worker cores.
-The PORTMASK parameter must contain either 1 or even enabled port numbers.
+The PORTMASK parameter must contain either 1 or an even number of enabled ports.
When setting more than 1 port, traffic would be forwarded in pairs.
For example, if we enable 4 ports, traffic from port 0 to 1 and from 1 to 0,
then the other pair from 2 to 3 and from 3 to 2, having [0,1] and [2,3] pairs.
-The disable-reorder long option does, as its name implies, disable the reordering
+The disable-reorder long option, as its name implies, disables the reordering
of traffic, which should help evaluate reordering performance impact.
-The insight-worker long option enables output the packet statistics of each worker thread.
+The insight-worker long option enables output of the packet statistics of each worker thread.
--
2.51.0
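The packet-ordering doc above describes the TX core inserting out-of-order packets into a reorder buffer and extracting them in order. A minimal plain-C sketch of that drain logic, using a fixed window of sequence numbers — toy names only, the real implementation is DPDK's librte_reorder:

```c
#include <assert.h>
#include <stdint.h>

#define WIN 8  /* reorder window size (toy value) */

/* Toy reorder buffer: slot i tracks whether sequence number next+i has
 * arrived. Packets are reduced to bare sequence numbers here. */
struct toy_reorder {
	uint64_t next;        /* next sequence number to release in order */
	int present[WIN];
};

static int toy_reorder_insert(struct toy_reorder *b, uint64_t seq)
{
	if (seq < b->next || seq >= b->next + WIN)
		return -1;    /* late or too early: outside the window */
	b->present[seq - b->next] = 1;
	return 0;
}

/* Drain the in-order prefix into out[]; returns how many were released.
 * Stops at the first gap, exactly as a reorder buffer must. */
static int toy_reorder_drain(struct toy_reorder *b, uint64_t *out, int max)
{
	int n = 0;
	while (n < max && b->present[0]) {
		out[n++] = b->next++;
		for (int i = 0; i < WIN - 1; i++)
			b->present[i] = b->present[i + 1];
		b->present[WIN - 1] = 0;
	}
	return n;
}
```

Inserting sequence numbers 2, 0, 1, 4 and draining releases only 0, 1, 2: the gap at 3 holds back 4 until it is filled.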
* [PATCH 03/29] examples/service_cores: correct documentation errors
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Address errors in service cores sample documentation:
- Change "registering applications" to "registering services"
- Add missing word: "can CPU cycles to" becomes
"can provide CPU cycles to"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/service_cores.rst | 41 ++++++++++++----------
1 file changed, 23 insertions(+), 18 deletions(-)
diff --git a/doc/guides/sample_app_ug/service_cores.rst b/doc/guides/sample_app_ug/service_cores.rst
index 307a6c5fbb..cc44857d41 100644
--- a/doc/guides/sample_app_ug/service_cores.rst
+++ b/doc/guides/sample_app_ug/service_cores.rst
@@ -4,23 +4,26 @@
Service Cores Sample Application
================================
-The service cores sample application demonstrates the service cores capabilities
-of DPDK. The service cores infrastructure is part of the DPDK EAL, and allows
-any DPDK component to register a service. A service is a work item or task, that
+Overview
+--------
+
+This sample application demonstrates the service core capabilities
+of DPDK. The service core infrastructure is part of the DPDK EAL and allows
+any DPDK component to register a service. A service is a work item or task that
requires CPU time to perform its duty.
-This sample application registers 5 dummy services. These 5 services are used
-to show how the service_cores API can be used to orchestrate these services to
+This sample application registers 5 dummy services that are used
+to show how the service_cores API can orchestrate these services to
run on different service lcores. This orchestration is done by calling the
-service cores APIs, however the sample application introduces a "profile"
-concept to contain the service mapping details. Note that the profile concept
-is application specific, and not a part of the service cores API.
+service cores APIs. However, the sample application introduces a "profile"
+concept to contain service mapping details. Note that the profile concept
+is application-specific, and not a part of the service cores API.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``service_cores`` sub-directory.
@@ -39,8 +42,8 @@ pass a service core-mask as an EAL argument at startup time.
Explanation
-----------
-The following sections provide some explanation of code focusing on
-registering applications from an applications point of view, and modifying the
+The following sections provide explanation of the application code with focus on
+registering services from an application's point of view and modifying the
service core counts and mappings at runtime.
@@ -48,7 +51,7 @@ Registering a Service
~~~~~~~~~~~~~~~~~~~~~
The following code section shows how to register a service as an application.
-Note that the service component header must be included by the application in
+Note: The service component header must be included by the application in
order to register services: ``rte_service_component.h``, in addition
to the ordinary service cores header ``rte_service.h`` which provides
the runtime functions to add, remove and remap service cores.
@@ -80,7 +83,7 @@ Removing A Service Core
~~~~~~~~~~~~~~~~~~~~~~~
To remove a service core, the steps are similar to adding but in reverse order.
-Note that it is not allowed to remove a service core if the service is running,
+Note: It is not allowed to remove a service core if the service is running,
and the service-core is the only core running that service (see documentation
for ``rte_service_lcore_stop`` function for details).
@@ -88,9 +91,11 @@ for ``rte_service_lcore_stop`` function for details).
Conclusion
~~~~~~~~~~
-The service cores infrastructure provides DPDK with two main features. The first
-is to abstract away hardware differences: the service core can CPU cycles to
+The service cores infrastructure provides DPDK with two main features.
+
+The first is to abstract away hardware differences: the service core can provide CPU cycles to
a software fallback implementation, allowing the application to be abstracted
-from the difference in HW / SW availability. The second feature is a flexible
-method of registering functions to be run, allowing the running of the
-functions to be scaled across multiple CPUs.
+from the difference in HW / SW availability.
+
+The second feature is a flexible method of registering functions to be run,
+allowing the running of the functions to be scaled across multiple CPUs.
--
2.51.0
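The service-cores doc above defines a service as a work item needing CPU time, registered by a component and run by mapped lcores that donate cycles. A toy plain-C registry illustrating that shape — hypothetical names throughout; the real entry points are ``rte_service_component_register()`` and the runtime calls in ``rte_service.h``:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_SERVICES 5  /* the sample registers 5 dummy services */

typedef int (*service_fn)(void *);

/* Toy service descriptor: a named callback plus its argument. */
struct toy_service {
	const char *name;
	service_fn run;
	void *arg;
	int registered;
};

static struct toy_service services[MAX_SERVICES];

/* Register a service under a fixed id; refuses double registration. */
static int toy_service_register(int id, const char *name,
				service_fn fn, void *arg)
{
	if (id < 0 || id >= MAX_SERVICES || services[id].registered)
		return -1;
	services[id].name = name;
	services[id].run = fn;
	services[id].arg = arg;
	services[id].registered = 1;
	return 0;
}

/* One iteration of what a mapped service lcore does: donate CPU cycles
 * to the service by invoking its run function. */
static int toy_service_run(int id)
{
	if (id < 0 || id >= MAX_SERVICES || !services[id].registered)
		return -1;
	return services[id].run(services[id].arg);
}

static int dummy_work(void *arg) { (*(int *)arg)++; return 0; }
```

A service lcore's loop would simply call ``toy_service_run()`` for each service mapped to it; remapping a service to another lcore is then just changing which loop calls it.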
* [PATCH 04/29] examples/rxtx_callbacks: correct documentation errors
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Bruce Richardson, John McNamara
Address issues in RX/TX callbacks sample documentation:
- Rewrite confusing paragraph about hardware timestamping for clarity
- Capitalize "linux" to "Linux"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/rxtx_callbacks.rst | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/doc/guides/sample_app_ug/rxtx_callbacks.rst b/doc/guides/sample_app_ug/rxtx_callbacks.rst
index cd6512508b..7cee5bbf8c 100644
--- a/doc/guides/sample_app_ug/rxtx_callbacks.rst
+++ b/doc/guides/sample_app_ug/rxtx_callbacks.rst
@@ -17,9 +17,8 @@ packets to add a timestamp. A separate callback is applied to all packets
prior to transmission to calculate the elapsed time in CPU cycles.
If hardware timestamping is supported by the NIC, the sample application will
-also display the average latency.
-The packet was timestamped in hardware
-on top of the latency since the packet was received and processed by the RX
+also display the average latency since the packet was timestamped in hardware,
+in addition to the latency since the packet was received and processed by the RX
callback.
@@ -34,7 +33,7 @@ The application is located in the ``rxtx_callbacks`` sub-directory.
Running the Application
-----------------------
-To run the example in a ``linux`` environment:
+To run the example in a ``Linux`` environment:
.. code-block:: console
--
2.51.0
* [PATCH 05/29] examples/ip_fragmentation: correct documentation errors
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Konstantin Ananyev
Address formatting issues in IP fragmentation sample documentation:
- Capitalize "linux" to "Linux" in two instances
- Add missing spaces before port lists in example commands
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/ip_frag.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/sample_app_ug/ip_frag.rst b/doc/guides/sample_app_ug/ip_frag.rst
index d2c66683e3..7dcb0177f6 100644
--- a/doc/guides/sample_app_ug/ip_frag.rst
+++ b/doc/guides/sample_app_ug/ip_frag.rst
@@ -62,7 +62,7 @@ where:
* -q NQ: Maximum number of queues per lcore (default is 1)
-To run the example in linux environment with 2 lcores (2,4) over 2 ports(0,2) with 1 RX queue per lcore:
+To run the example in a Linux environment with 2 lcores (2,4) over 2 ports (0,2) with 1 RX queue per lcore:
.. code-block:: console
@@ -84,7 +84,7 @@ To run the example in linux environment with 2 lcores (2,4) over 2 ports(0,2) wi
IP_FRAG: entering main loop on lcore 2
IP_FRAG: -- lcoreid=2 portid=0
-To run the example in linux environment with 1 lcore (4) over 2 ports(0,2) with 2 RX queues per lcore:
+To run the example in a Linux environment with 1 lcore (4) over 2 ports (0,2) with 2 RX queues per lcore:
.. code-block:: console
--
2.51.0
* [PATCH 06/29] examples/eventdev_pipeline: correct documentation errors
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Address minor issues in eventdev pipeline sample documentation:
- Add missing word: "various numbers worker cores" to
"various numbers of worker cores"
- Correct punctuation: "(e.g.;" to "(e.g.,"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
.../sample_app_ug/eventdev_pipeline.rst | 22 +++++++++----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/doc/guides/sample_app_ug/eventdev_pipeline.rst b/doc/guides/sample_app_ug/eventdev_pipeline.rst
index 19ff53803e..c243fa5160 100644
--- a/doc/guides/sample_app_ug/eventdev_pipeline.rst
+++ b/doc/guides/sample_app_ug/eventdev_pipeline.rst
@@ -10,7 +10,7 @@ application can configure a pipeline and assign a set of worker cores to
perform the processing required.
The application has a range of command line arguments allowing it to be
-configured for various numbers worker cores, stages,queue depths and cycles per
+configured for various numbers of worker cores, stages, queue depths and cycles per
stage of work. This is useful for performance testing as well as quickly testing
a particular pipeline configuration.
@@ -18,7 +18,7 @@ a particular pipeline configuration.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``examples`` sub-directory.
@@ -51,7 +51,7 @@ these settings is shown below:
-- -r1 -t1 -e4 -w FF00 -s4 -n0 -c32 -W1000 -D
The application has some sanity checking built-in, so if there is a function
-(e.g.; the RX core) which doesn't have a cpu core mask assigned, the application
+(e.g., the RX core) which doesn't have a cpu core mask assigned, the application
will print an error message:
.. code-block:: console
@@ -61,21 +61,21 @@ will print an error message:
rx: 0
tx: 1
-Configuration of the eventdev is covered in detail in the programmers guide,
-see the Event Device Library section.
+Configuration of the eventdev is covered in detail in the programmers guide.
+See the Event Device Library section.
Observing the Application
--------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
-At runtime the eventdev pipeline application prints out a summary of the
-configuration, and some runtime statistics like packets per second. On exit the
+At runtime, the eventdev pipeline application prints out a summary of the
+configuration, and some runtime statistics like packets per second. On exit, the
worker statistics are printed, along with a full dump of the PMD statistics if
required. The following sections show sample output for each of the output
types.
Configuration
-~~~~~~~~~~~~~
+^^^^^^^^^^^^^
This provides an overview of the pipeline,
scheduling type at each stage, and parameters to options such as how many
@@ -101,7 +101,7 @@ for details:
Stage 3, Type Atomic Priority = 128
Runtime
-~~~~~~~
+^^^^^^^
At runtime, the statistics of the consumer are printed, stating the number of
packets received, runtime in milliseconds, average mpps, and current mpps.
@@ -111,7 +111,7 @@ packets received, runtime in milliseconds, average mpps, and current mpps.
# consumer RX= xxxxxxx, time yyyy ms, avg z.zzz mpps [current w.www mpps]
Shutdown
-~~~~~~~~
+^^^^^^^^
At shutdown, the application prints the number of packets received and
transmitted, and an overview of the distribution of work across worker cores.
--
2.51.0
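The eventdev pipeline doc above mentions configuring worker cores, stages, queue depths, and cycles of work per stage. A plain-C toy of the per-event stage progression — names are made up for illustration; the real application schedules events through an eventdev, not a simple loop:

```c
#include <assert.h>

/* Toy event: tracks which pipeline stage it has reached and how much
 * synthetic work has been done on it. */
struct toy_event {
	int stage;
	unsigned long work_done;
};

/* Stand-in for the configurable "cycles per stage" busy work. */
static unsigned long toy_stage_work(int cycles)
{
	unsigned long acc = 0;
	for (int i = 0; i < cycles; i++)
		acc += (unsigned long)i;
	return acc;
}

/* Push one event through nstages stages, doing `cycles` of work at each;
 * returns the number of stages completed. In the real app each stage is
 * a separate eventdev queue serviced by worker cores. */
static int toy_pipeline_run(struct toy_event *ev, int nstages, int cycles)
{
	while (ev->stage < nstages) {
		ev->work_done += toy_stage_work(cycles);
		ev->stage++;
	}
	return ev->stage;
}
```

Running an event through 4 stages with 10 cycles of work per stage accumulates 4 × 45 units of synthetic work, mirroring how the ``-s`` and ``-W`` options scale total per-packet cost.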
* [PATCH 07/29] doc/guides: improve VMDq sample application documentation
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Revise VMDq and VMDq/DCB Forwarding sample documentation for clarity,
accuracy, and compliance with technical writing standards.
Common changes to both files:
- Add technology overview sections explaining VMDq hardware packet sorting
- Fix contradictory statements about command-line options
- Create dedicated Command-Line Options sections
- Add Supported Configurations sections for hardware details
- Improve sentence structure and readability
- Fix RST formatting issues
- Convert warnings to RST note directives
vmdq_forwarding.rst:
- Update application name from vmdq_app to dpdk-vmdq
vmdq_dcb_forwarding.rst:
- Add DCB/QoS explanation using VLAN user priority fields
- Correct typo: "VMD queues" -> "VMDq queues"
- Correct capitalization: "linux" -> "Linux"
- Add sub-headings for traffic class and MAC address sections
The technology context is based on Intel's VMDq Technology paper and
helps readers understand hardware-based packet classification benefits
in virtualized environments.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
.../sample_app_ug/vmdq_dcb_forwarding.rst | 193 +++++++++++-------
doc/guides/sample_app_ug/vmdq_forwarding.rst | 144 +++++++------
2 files changed, 207 insertions(+), 130 deletions(-)
diff --git a/doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst b/doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst
index efb133c11c..9d01901f0c 100644
--- a/doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst
+++ b/doc/guides/sample_app_ug/vmdq_dcb_forwarding.rst
@@ -1,150 +1,197 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2010-2014 Intel Corporation.
-VMDQ and DCB Forwarding Sample Application
+VMDq and DCB Forwarding Sample Application
==========================================
-The VMDQ and DCB Forwarding sample application is a simple example of packet processing using the DPDK.
-The application performs L2 forwarding using VMDQ and DCB to divide the incoming traffic into queues.
-The traffic splitting is performed in hardware by the VMDQ and DCB features of the Intel® 82599 and X710/XL710 Ethernet Controllers.
+The VMDq and DCB Forwarding sample application demonstrates packet processing using the DPDK.
+The application performs L2 forwarding using Intel VMDq (Virtual Machine Device Queues) combined
+with DCB (Data Center Bridging) to divide incoming traffic into queues. The traffic splitting
+is performed in hardware by the VMDq and DCB features of Intel 82599 and X710/XL710
+Ethernet Controllers.
Overview
--------
-This sample application can be used as a starting point for developing a new application that is based on the DPDK and
-uses VMDQ and DCB for traffic partitioning.
+This sample application can serve as a starting point for developing DPDK applications
+that use VMDq and DCB for traffic partitioning.
-The VMDQ and DCB filters work on MAC and VLAN traffic to divide the traffic into input queues on the basis of the Destination MAC
-address, VLAN ID and VLAN user priority fields.
-VMDQ filters split the traffic into 16 or 32 groups based on the Destination MAC and VLAN ID.
-Then, DCB places each packet into one of queues within that group, based upon the VLAN user priority field.
+About VMDq and DCB Technology
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-All traffic is read from a single incoming port (port 0) and output on port 1, without any processing being performed.
-With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each thread of the application reads from
-multiple queues. When run with 8 threads, that is, with the -c FF option, each thread receives and forwards packets from 16 queues.
+VMDq is a silicon-level technology that offloads network I/O packet sorting from the
+Virtual Machine Monitor (VMM) to the network controller hardware. This reduces CPU
+overhead in virtualized environments by performing Layer 2 classification in hardware.
-As supplied, the sample application configures the VMDQ feature to have 32 pools with 4 queues each as indicated in :numref:`figure_vmdq_dcb_example`.
-The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports the splitting of traffic into 16 pools of 8 queues. While the
-Intel® X710 or XL710 Ethernet Controller NICs support many configurations of VMDQ pools of 4 or 8 queues each. For simplicity, only 16
-or 32 pools is supported in this sample. And queues numbers for each VMDQ pool can be changed by setting RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
-in config/rte_config.h file.
-The nb-pools, nb-tcs and enable-rss parameters can be passed on the command line, after the EAL parameters:
+DCB (Data Center Bridging) extends VMDq by adding Quality of Service (QoS) support.
+DCB uses the VLAN user priority field (also called Priority Code Point or PCP) to
+classify packets into different traffic classes, enabling bandwidth allocation and
+priority-based queuing.
-.. code-block:: console
+How VMDq and DCB Filtering Works
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The VMDq and DCB filters work together on MAC and VLAN traffic to divide packets into
+input queues:
+
+1. **VMDq filtering**: Splits traffic into 16 or 32 groups based on the destination
+ MAC address and VLAN ID.
+
+2. **DCB classification**: Places each packet into one of the queues within its VMDq
+ group based on the VLAN user priority field.
- ./<build_dir>/examples/dpdk-vmdq_dcb [EAL options] -- -p PORTMASK --nb-pools NP --nb-tcs TC --enable-rss
+All traffic is read from a single incoming port (port 0) and output on port 1 without
+modification. For the Intel 82599 NIC, traffic is split into 128 queues on input.
+Each application thread reads from multiple queues. When running with 8 threads
+(using the ``-c FF`` option), each thread receives and forwards packets from 16 queues.
-where, NP can be 16 or 32, TC can be 4 or 8, rss is disabled by default.
+:numref:`figure_vmdq_dcb_example` illustrates the packet flow through the application.
.. _figure_vmdq_dcb_example:
.. figure:: img/vmdq_dcb_example.*
- Packet Flow Through the VMDQ and DCB Sample Application
+ Packet Flow Through the VMDq and DCB Sample Application
+Supported Configurations
+~~~~~~~~~~~~~~~~~~~~~~~~
-In Linux* user space, the application can display statistics with the number of packets received on each queue.
-To have the application display the statistics, send a SIGHUP signal to the running application process.
+The sample application supports the following configurations:
-The VMDQ and DCB Forwarding sample application is in many ways simpler than the L2 Forwarding application
-(see :doc:`l2_forward_real_virtual`)
-as it performs unidirectional L2 forwarding of packets from one port to a second port.
-No command-line options are taken by this application apart from the standard EAL command-line options.
+- **Intel 82599 10 Gigabit Ethernet Controller**: 32 pools with 4 queues each (default),
+ or 16 pools with 8 queues each.
+
+- **Intel X710/XL710 Ethernet Controllers**: Multiple configurations of VMDq pools
+ with 4 or 8 queues each. For simplicity, this sample supports only 16 or 32 pools.
+ The number of queues per VMDq pool can be changed by setting
+ ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` in ``config/rte_config.h``.
.. note::
- Since VMD queues are being used for VMM, this application works correctly
- when VTd is disabled in the BIOS or Linux* kernel (intel_iommu=off).
+ Since VMDq queues are used for virtual machine management, this application works
+ correctly when VT-d is disabled in the BIOS or Linux kernel (``intel_iommu=off``).
Compiling the Application
-------------------------
-
-
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``vmdq_dcb`` sub-directory.
Running the Application
-----------------------
-To run the example in a linux environment:
+To run the example in a Linux environment:
+
+.. code-block:: console
+
+ ./<build_dir>/examples/dpdk-vmdq_dcb -l 0-3 -- -p 0x3 --nb-pools 32 --nb-tcs 4
+
+Command-Line Options
+~~~~~~~~~~~~~~~~~~~~
+
+The following application-specific options are available after the EAL parameters:
+
+``-p PORTMASK``
+ Hexadecimal bitmask of ports to configure.
+
+``--nb-pools NP``
+ Number of VMDq pools. Valid values are 16 or 32.
+
+``--nb-tcs TC``
+ Number of traffic classes. Valid values are 4 or 8.
+
+``--enable-rss``
+ Enable Receive Side Scaling. RSS is disabled by default.
+
+Example:
.. code-block:: console
- user@target:~$ ./<build_dir>/examples/dpdk-vmdq_dcb -l 0-3 -- -p 0x3 --nb-pools 32 --nb-tcs 4
+ ./<build_dir>/examples/dpdk-vmdq_dcb [EAL options] -- -p 0x3 --nb-pools 32 --nb-tcs 4 --enable-rss
-Refer to the *DPDK Getting Started Guide* for general information on running applications and
-the Environment Abstraction Layer (EAL) options.
+Refer to the *DPDK Getting Started Guide* for general information on running applications
+and the Environment Abstraction Layer (EAL) options.
Explanation
-----------
-The following sections provide some explanation of the code.
+The following sections explain the code structure.
Initialization
~~~~~~~~~~~~~~
-The EAL, driver and PCI configuration is performed largely as in the L2 Forwarding sample application,
-as is the creation of the mbuf pool.
-See :doc:`l2_forward_real_virtual`.
-Where this example application differs is in the configuration of the NIC port for RX.
+The EAL, driver, and PCI configuration is performed similarly to the L2 Forwarding sample
+application, as is the creation of the mbuf pool. See :doc:`l2_forward_real_virtual` for details.
+
+This example application differs in the configuration of the NIC port for RX. The VMDq and
+DCB hardware features are configured at port initialization time by setting appropriate values
+in the ``rte_eth_conf`` structure passed to the ``rte_eth_dev_configure()`` API.
-The VMDQ and DCB hardware feature is configured at port initialization time by setting the appropriate values in the
-rte_eth_conf structure passed to the rte_eth_dev_configure() API.
-Initially in the application,
-a default structure is provided for VMDQ and DCB configuration to be filled in later by the application.
+Initially, the application provides a default structure for VMDq and DCB configuration:
.. literalinclude:: ../../../examples/vmdq_dcb/main.c
:language: c
:start-after: Empty vmdq+dcb configuration structure. Filled in programmatically. 8<
:end-before: >8 End of empty vmdq+dcb configuration structure.
-The get_eth_conf() function fills in an rte_eth_conf structure with the appropriate values,
-based on the global vlan_tags array,
-and dividing up the possible user priority values equally among the individual queues
-(also referred to as traffic classes) within each pool. With Intel® 82599 NIC,
-if the number of pools is 32, then the user priority fields are allocated 2 to a queue.
-If 16 pools are used, then each of the 8 user priority fields is allocated to its own queue within the pool.
-With Intel® X710/XL710 NICs, if number of tcs is 4, and number of queues in pool is 8,
-then the user priority fields are allocated 2 to one tc, and a tc has 2 queues mapping to it, then
-RSS will determine the destination queue in 2.
-For the VLAN IDs, each one can be allocated to possibly multiple pools of queues,
-so the pools parameter in the rte_eth_vmdq_dcb_conf structure is specified as a bitmask value.
-For destination MAC, each VMDQ pool will be assigned with a MAC address. In this sample, each VMDQ pool
-is assigned to the MAC like 52:54:00:12:<port_id>:<pool_id>, that is,
-the MAC of VMDQ pool 2 on port 1 is 52:54:00:12:01:02.
+Traffic Class and Queue Assignment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``get_eth_conf()`` function fills in the ``rte_eth_conf`` structure with appropriate
+values based on the global ``vlan_tags`` array. The function divides user priority values
+among individual queues (traffic classes) within each pool.
+
+For Intel 82599 NICs:
+
+- With 32 pools: User priority fields are allocated 2 per queue.
+- With 16 pools: Each of the 8 user priority fields is allocated to its own queue.
+
+For Intel X710/XL710 NICs:
+
+- With 4 traffic classes and 8 queues per pool: User priority fields are allocated
+ 2 per traffic class, with 2 queues mapped to each traffic class. RSS determines
+ the destination queue within each traffic class.
+
+For VLAN IDs, each ID can be allocated to multiple pools of queues, so the ``pools``
+parameter in the ``rte_eth_vmdq_dcb_conf`` structure is specified as a bitmask.
.. literalinclude:: ../../../examples/vmdq_dcb/main.c
:language: c
:start-after: Dividing up the possible user priority values. 8<
:end-before: >8 End of dividing up the possible user priority values.
+MAC Address Assignment
+^^^^^^^^^^^^^^^^^^^^^^
+
+Each VMDq pool is assigned a MAC address using the format ``52:54:00:12:<port_id>:<pool_id>``.
+For example, VMDq pool 2 on port 1 uses the MAC address ``52:54:00:12:01:02``.
+
.. literalinclude:: ../../../examples/vmdq_dcb/main.c
:language: c
:start-after: Set mac for each pool. 8<
:end-before: >8 End of set mac for each pool.
:dedent: 1
-Once the network port has been initialized using the correct VMDQ and DCB values,
-the initialization of the port's RX and TX hardware rings is performed similarly to that
-in the L2 Forwarding sample application.
+After the network port is initialized with VMDq and DCB values, the port's RX and TX
+hardware rings are initialized similarly to the L2 Forwarding sample application.
See :doc:`l2_forward_real_virtual` for more information.
Statistics Display
~~~~~~~~~~~~~~~~~~
-When run in a linux environment,
-the VMDQ and DCB Forwarding sample application can display statistics showing the number of packets read from each RX queue.
-This is provided by way of a signal handler for the SIGHUP signal,
-which simply prints to standard output the packet counts in grid form.
-Each row of the output is a single pool with the columns being the queue number within that pool.
+When running in a Linux environment, the application can display statistics showing the
+number of packets read from each RX queue. The application uses a signal handler for the
+SIGHUP signal that prints packet counts in grid form, with each row representing a single
+pool and each column representing a queue number within that pool.
-To generate the statistics output, use the following command:
+To generate the statistics output:
.. code-block:: console
- user@host$ sudo killall -HUP vmdq_dcb_app
+ sudo killall -HUP dpdk-vmdq_dcb
+
+.. note::
-Please note that the statistics output will appear on the terminal where the vmdq_dcb_app is running,
-rather than the terminal from which the HUP signal was sent.
+ The statistics output appears on the terminal where the application is running,
+ not on the terminal from which the HUP signal was sent.
diff --git a/doc/guides/sample_app_ug/vmdq_forwarding.rst b/doc/guides/sample_app_ug/vmdq_forwarding.rst
index c998a5a223..f100d965cd 100644
--- a/doc/guides/sample_app_ug/vmdq_forwarding.rst
+++ b/doc/guides/sample_app_ug/vmdq_forwarding.rst
@@ -2,50 +2,60 @@
Copyright(c) 2020 Intel Corporation.
VMDq Forwarding Sample Application
-==========================================
+==================================
-The VMDq Forwarding sample application is a simple example of packet processing using the DPDK.
-The application performs L2 forwarding using VMDq to divide the incoming traffic into queues.
-The traffic splitting is performed in hardware by the VMDq feature of the Intel® 82599 and X710/XL710 Ethernet Controllers.
+The VMDq Forwarding sample application demonstrates packet processing using the DPDK.
+The application performs L2 forwarding using Intel VMDq (Virtual Machine Device Queues)
+to divide incoming traffic into queues. The traffic splitting is performed in hardware
+by the VMDq feature of Intel 82599 and X710/XL710 Ethernet Controllers.
Overview
--------
-This sample application can be used as a starting point for developing a new application that is based on the DPDK and
-uses VMDq for traffic partitioning.
+This sample application can serve as a starting point for developing DPDK applications
+that use VMDq for traffic partitioning.
-VMDq filters split the incoming packets up into different "pools" - each with its own set of RX queues - based upon
-the MAC address and VLAN ID within the VLAN tag of the packet.
+About VMDq Technology
+~~~~~~~~~~~~~~~~~~~~~
-All traffic is read from a single incoming port and output on another port, without any processing being performed.
-With Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each thread of the application reads from
-multiple queues. When run with 8 threads, that is, with the -c FF option, each thread receives and forwards packets from 16 queues.
+VMDq is a silicon-level technology designed to improve network I/O performance in
+virtualized environments. In traditional virtualized systems, the Virtual Machine Monitor
+(VMM) must sort incoming packets and route them to the correct virtual machine, consuming
+significant CPU cycles. VMDq offloads this packet sorting to the network controller hardware,
+freeing CPU resources for application workloads.
-As supplied, the sample application configures the VMDq feature to have 32 pools with 4 queues each.
-The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports the splitting of traffic into 16 pools of 2 queues.
-While the Intel® X710 or XL710 Ethernet Controller NICs support many configurations of VMDq pools of 4 or 8 queues each.
-And queues numbers for each VMDq pool can be changed by setting RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
-in config/rte_config.h file.
-The nb-pools and enable-rss parameters can be passed on the command line, after the EAL parameters:
+When packets arrive at a VMDq-enabled network adapter, a Layer 2 classifier in the controller
+sorts packets based on MAC addresses and VLAN tags, then places each packet in the receive
+queue assigned to the appropriate destination. This hardware-based pre-sorting reduces the
+overhead of software-based virtual switches.
-.. code-block:: console
+How VMDq Filtering Works
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+VMDq filters split incoming packets into different pools, each with its own set of RX queues,
+based on the MAC address and VLAN ID within the VLAN tag of the packet.
- ./<build_dir>/examples/dpdk-vmdq [EAL options] -- -p PORTMASK --nb-pools NP --enable-rss
+All traffic is read from a single incoming port and output on another port without modification.
+For the Intel 82599 NIC, traffic is split into 128 queues on input. Each application thread
+reads from multiple queues. When running with 8 threads (using the ``-c FF`` option), each
+thread receives and forwards packets from 16 queues.
-where, NP can be 8, 16 or 32, rss is disabled by default.
+Supported Configurations
+~~~~~~~~~~~~~~~~~~~~~~~~
-In Linux* user space, the application can display statistics with the number of packets received on each queue.
-To have the application display the statistics, send a SIGHUP signal to the running application process.
+The sample application supports the following configurations:
-The VMDq Forwarding sample application is in many ways simpler than the L2 Forwarding application
-(see :doc:`l2_forward_real_virtual`)
-as it performs unidirectional L2 forwarding of packets from one port to a second port.
-No command-line options are taken by this application apart from the standard EAL command-line options.
+- **Intel 82599 10 Gigabit Ethernet Controller**: 32 pools with 4 queues each (default),
+ or 16 pools with 2 queues each.
+
+- **Intel X710/XL710 Ethernet Controllers**: Multiple configurations of VMDq pools
+ with 4 or 8 queues each. The number of queues per VMDq pool can be changed by setting
+ ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` in ``config/rte_config.h``.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``vmdq`` sub-directory.
@@ -56,40 +66,60 @@ To run the example in a Linux environment:
.. code-block:: console
- user@target:~$ ./<build_dir>/examples/dpdk-vmdq -l 0-3 -- -p 0x3 --nb-pools 16
+ ./<build_dir>/examples/dpdk-vmdq -l 0-3 -- -p 0x3 --nb-pools 16
+
+Command-Line Options
+~~~~~~~~~~~~~~~~~~~~
+
+The following application-specific options are available after the EAL parameters:
+
+``-p PORTMASK``
+ Hexadecimal bitmask of ports to configure.
+
+``--nb-pools NP``
+ Number of VMDq pools. Valid values are 8, 16, or 32.
+
+``--enable-rss``
+ Enable Receive Side Scaling. RSS is disabled by default.
-Refer to the *DPDK Getting Started Guide* for general information on running applications and
-the Environment Abstraction Layer (EAL) options.
+Example:
+
+.. code-block:: console
+
+ ./<build_dir>/examples/dpdk-vmdq [EAL options] -- -p 0x3 --nb-pools 32 --enable-rss
+
+Refer to the *DPDK Getting Started Guide* for general information on running applications
+and the Environment Abstraction Layer (EAL) options.
Explanation
-----------
-The following sections provide some explanation of the code.
+The following sections explain the code structure.
Initialization
~~~~~~~~~~~~~~
-The EAL, driver and PCI configuration is performed largely as in the L2 Forwarding sample application,
-as is the creation of the mbuf pool.
-See :doc:`l2_forward_real_virtual`.
-Where this example application differs is in the configuration of the NIC port for RX.
+The EAL, driver, and PCI configuration is performed similarly to the L2 Forwarding sample
+application, as is the creation of the mbuf pool. See :doc:`l2_forward_real_virtual` for details.
-The VMDq hardware feature is configured at port initialization time by setting the appropriate values in the
-rte_eth_conf structure passed to the rte_eth_dev_configure() API.
-Initially in the application,
-a default structure is provided for VMDq configuration to be filled in later by the application.
+This example application differs in the configuration of the NIC port for RX. The VMDq
+hardware feature is configured at port initialization time by setting appropriate values
+in the ``rte_eth_conf`` structure passed to the ``rte_eth_dev_configure()`` API.
+
+Initially, the application provides a default structure for VMDq configuration:
.. literalinclude:: ../../../examples/vmdq/main.c
:language: c
:start-after: Default structure for VMDq. 8<
:end-before: >8 End of Empty vdmq configuration structure.
-The get_eth_conf() function fills in an rte_eth_conf structure with the appropriate values,
-based on the global vlan_tags array.
-For the VLAN IDs, each one can be allocated to possibly multiple pools of queues.
-For destination MAC, each VMDq pool will be assigned with a MAC address. In this sample, each VMDq pool
-is assigned to the MAC like 52:54:00:12:<port_id>:<pool_id>, that is,
-the MAC of VMDq pool 2 on port 1 is 52:54:00:12:01:02.
+The ``get_eth_conf()`` function fills in the ``rte_eth_conf`` structure with appropriate
+values based on the global ``vlan_tags`` array. Each VLAN ID can be allocated to multiple
+pools of queues.
+
+For destination MAC addresses, each VMDq pool is assigned a MAC address using the format
+``52:54:00:12:<port_id>:<pool_id>``. For example, VMDq pool 2 on port 1 uses the MAC address
+``52:54:00:12:01:02``.
.. literalinclude:: ../../../examples/vmdq/main.c
:language: c
@@ -106,25 +136,25 @@ the MAC of VMDq pool 2 on port 1 is 52:54:00:12:01:02.
:start-after: Building correct configuration for vdmq. 8<
:end-before: >8 End of get_eth_conf.
-Once the network port has been initialized using the correct VMDq values,
-the initialization of the port's RX and TX hardware rings is performed similarly to that
-in the L2 Forwarding sample application.
+After the network port is initialized with VMDq values, the port's RX and TX hardware rings
+are initialized similarly to the L2 Forwarding sample application.
See :doc:`l2_forward_real_virtual` for more information.
Statistics Display
~~~~~~~~~~~~~~~~~~
-When run in a Linux environment,
-the VMDq Forwarding sample application can display statistics showing the number of packets read from each RX queue.
-This is provided by way of a signal handler for the SIGHUP signal,
-which simply prints to standard output the packet counts in grid form.
-Each row of the output is a single pool with the columns being the queue number within that pool.
+When running in a Linux environment, the application can display statistics showing the
+number of packets read from each RX queue. The application uses a signal handler for the
+SIGHUP signal that prints packet counts in grid form, with each row representing a single
+pool and each column representing a queue number within that pool.
-To generate the statistics output, use the following command:
+To generate the statistics output:
.. code-block:: console
- user@host$ sudo killall -HUP vmdq_app
+ sudo killall -HUP dpdk-vmdq
+
+.. note::
-Please note that the statistics output will appear on the terminal where the vmdq_app is running,
-rather than the terminal from which the HUP signal was sent.
+ The statistics output appears on the terminal where the application is running,
+ not on the terminal from which the HUP signal was sent.
--
2.51.0
* [PATCH 08/29] examples/distributor: correct documentation errors
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (6 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 07/29] doc/guides: improve VMDq sample application documentation Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 09/29] examples/ipv4_multicast: correct documentation typo Stephen Hemminger
` (20 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, David Hunt
Address issues in distributor sample documentation:
- Capitalize "linux" to "Linux"
- Add missing space before function name in thread description
- Add missing article: "is done in same way" to
"is done in the same way"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/dist_app.rst | 30 +++++++++++++--------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/doc/guides/sample_app_ug/dist_app.rst b/doc/guides/sample_app_ug/dist_app.rst
index 30b4184d40..f4a2de50d6 100644
--- a/doc/guides/sample_app_ug/dist_app.rst
+++ b/doc/guides/sample_app_ug/dist_app.rst
@@ -4,7 +4,7 @@
Distributor Sample Application
==============================
-The distributor sample application is a simple example of packet distribution
+The distributor sample application is an example of packet distribution
to cores using the Data Plane Development Kit (DPDK). It also makes use of
Intel Speed Select Technology - Base Frequency (Intel SST-BF) to pin the
distributor to the higher frequency core if available.
@@ -31,7 +31,7 @@ generator as shown in the figure below.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``distributor`` sub-directory.
@@ -49,7 +49,7 @@ Running the Application
* -p PORTMASK: Hexadecimal bitmask of ports to configure
* -c: Combines the RX core with distribution core
-#. To run the application in linux environment with 10 lcores, 4 ports,
+#. To run the application in a Linux environment with 10 lcores, 4 ports,
issue the command:
.. code-block:: console
@@ -64,9 +64,9 @@ Explanation
The distributor application consists of four types of threads: a receive
thread (``lcore_rx()``), a distributor thread (``lcore_dist()``), a set of
-worker threads (``lcore_worker()``), and a transmit thread(``lcore_tx()``).
+worker threads (``lcore_worker()``), and a transmit thread (``lcore_tx()``).
How these threads work together is shown in :numref:`figure_dist_app` below.
-The ``main()`` function launches threads of these four types. Each thread
+The ``main()`` function launches threads of these four types. Each thread
has a while loop which will be doing processing and which is terminated
only upon SIGINT or ctrl+C.
@@ -86,7 +86,7 @@ the distributor, doing a simple XOR operation on the input port mbuf field
(to indicate the output port which will be used later for packet transmission)
and then finally returning the packets back to the distributor thread.
-The distributor thread will then call the distributor api
+The distributor thread will then call the distributor API
``rte_distributor_returned_pkts()`` to get the processed packets, and will enqueue
them to another rte_ring for transfer to the TX thread for transmission on the
output port. The transmit thread will dequeue the packets from the ring and
@@ -105,7 +105,7 @@ final statistics to the user.
Intel SST-BF Support
---------------------
+~~~~~~~~~~~~~~~~~~~~
In DPDK 19.05, support was added to the power management library for
Intel-SST-BF, a technology that allows some cores to run at a higher
@@ -114,20 +114,20 @@ and is entitled
`Intel Speed Select Technology – Base Frequency - Enhancing Performance <https://builders.intel.com/docs/networkbuilders/intel-speed-select-technology-base-frequency-enhancing-performance.pdf>`_
The distributor application was also enhanced to be aware of these higher
-frequency SST-BF cores, and when starting the application, if high frequency
+frequency SST-BF cores. When starting the application, if high frequency
SST-BF cores are present in the core mask, the application will identify these
cores and pin the workloads appropriately. The distributor core is usually
the bottleneck, so this is given first choice of the high frequency SST-BF
-cores, followed by the rx core and the tx core.
+cores, followed by the Rx core and the Tx core.
Debug Logging Support
----------------------
+~~~~~~~~~~~~~~~~~~~~~
Debug logging is provided as part of the application; the user needs to uncomment
the line "#define DEBUG" defined in start of the application in main.c to enable debug logs.
Statistics
-----------
+~~~~~~~~~~
The main function will print statistics on the console every second. These
statistics include the number of packets enqueued and dequeued at each stage
@@ -135,7 +135,7 @@ in the application, and also key statistics per worker, including how many
packets of each burst size (1-8) were sent to each worker thread.
Application Initialization
---------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~
Command line parsing is done in the same way as it is done in the L2 Forwarding Sample
Application. See :ref:`l2_fwd_app_cmd_arguments`.
@@ -143,11 +143,11 @@ Application. See :ref:`l2_fwd_app_cmd_arguments`.
Mbuf pool initialization is done in the same way as it is done in the L2 Forwarding
Sample Application. See :ref:`l2_fwd_app_mbuf_init`.
-Driver Initialization is done in same way as it is done in the L2 Forwarding Sample
+Driver Initialization is done in the same way as it is done in the L2 Forwarding Sample
Application. See :ref:`l2_fwd_app_dvr_init`.
-RX queue initialization is done in the same way as it is done in the L2 Forwarding
+Rx queue initialization is done in the same way as it is done in the L2 Forwarding
Sample Application. See :ref:`l2_fwd_app_rx_init`.
-TX queue initialization is done in the same way as it is done in the L2 Forwarding
+Tx queue initialization is done in the same way as it is done in the L2 Forwarding
Sample Application. See :ref:`l2_fwd_app_tx_init`.
--
2.51.0
* [PATCH 09/29] examples/ipv4_multicast: correct documentation typo
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (7 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 08/29] examples/distributor: correct documentation errors Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 10/29] examples/test_pipeline: correct documentation errors Stephen Hemminger
` (19 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct typo in IPv4 multicast sample documentation:
- Change "walk though" to "walk through"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/ipv4_multicast.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/sample_app_ug/ipv4_multicast.rst b/doc/guides/sample_app_ug/ipv4_multicast.rst
index 3eb8b95f29..fdccea23d9 100644
--- a/doc/guides/sample_app_ug/ipv4_multicast.rst
+++ b/doc/guides/sample_app_ug/ipv4_multicast.rst
@@ -195,7 +195,7 @@ Although both are based on the data zero-copy idea,
there are some differences in the details.
The first approach creates a clone of the input packet. For example,
-walk though all segments of the input packet and for each of segment,
+walk through all segments of the input packet and, for each segment,
create a new buffer and attach that new buffer to the segment
(refer to ``rte_pktmbuf_clone()`` in the mbuf library for more details).
A new buffer is then allocated for the packet header and is prepended to the cloned buffer.
--
2.51.0
* [PATCH 10/29] examples/test_pipeline: correct documentation errors
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (8 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 09/29] examples/ipv4_multicast: correct documentation typo Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 11/29] examples/qos: improve sample application documentation Stephen Hemminger
` (18 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Cristian Dumitrescu
Address minor issues in test pipeline sample documentation:
- Add missing period after doc reference
- Correct typo: "Lpmcore B" to "core B"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/test_pipeline.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/sample_app_ug/test_pipeline.rst b/doc/guides/sample_app_ug/test_pipeline.rst
index 818be93cd6..40c530af57 100644
--- a/doc/guides/sample_app_ug/test_pipeline.rst
+++ b/doc/guides/sample_app_ug/test_pipeline.rst
@@ -30,7 +30,7 @@ The application uses three CPU cores:
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``dpdk/<build_dir>/app`` directory.
@@ -149,7 +149,7 @@ For hash tables, the following parameters can be selected:
| | | | |
| | | | At run time, core A is creating the following lookup |
| | | | key and storing it into the packet meta data for |
- | | | | Lpmcore B to use for table lookup: |
+ | | | | core B to use for table lookup: |
| | | | |
| | | | [destination IPv4 address, 28 bytes of 0] |
| | | | |
--
2.51.0
* [PATCH 11/29] examples/qos: improve sample application documentation
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (9 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 10/29] examples/test_pipeline: correct documentation errors Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 12/29] examples/vhost: " Stephen Hemminger
` (17 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Cristian Dumitrescu
Rewrite the QoS metering and QoS scheduler documentation for clarity
and correctness.
Changes to qos_metering.rst:
- Correct duplicate "srTCM" entries to "trTCM" in the mode list
- Simplify and clarify introductory text
- Use imperative mood for instructions
- Add Oxford commas for consistency
- Remove unnecessary empty rows in tables
Changes to qos_sched.rst:
- Rewrite command descriptions for clarity
- Improve formatting of command syntax using inline literals
- Streamline bullet point structure
- Remove unnecessary empty rows in tables
- Make language more concise throughout
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/qos_metering.rst | 95 ++++-----
doc/guides/sample_app_ug/qos_scheduler.rst | 214 +++++++++++----------
2 files changed, 158 insertions(+), 151 deletions(-)
diff --git a/doc/guides/sample_app_ug/qos_metering.rst b/doc/guides/sample_app_ug/qos_metering.rst
index e7101559aa..c6ab8f78ab 100644
--- a/doc/guides/sample_app_ug/qos_metering.rst
+++ b/doc/guides/sample_app_ug/qos_metering.rst
@@ -4,19 +4,22 @@
QoS Metering Sample Application
===============================
-The QoS meter sample application is an example that demonstrates the use of DPDK to provide QoS marking and metering,
-as defined by RFC2697 for Single Rate Three Color Marker (srTCM) and RFC 2698 for Two Rate Three Color Marker (trTCM) algorithm.
+The QoS meter sample application demonstrates DPDK QoS marking and metering
+using the Single Rate Three Color Marker (srTCM) algorithm defined in RFC 2697
+and the Two Rate Three Color Marker (trTCM) algorithm defined in RFC 2698.
Overview
--------
-The application uses a single thread for reading the packets from the RX port,
-metering, marking them with the appropriate color (green, yellow or red) and writing them to the TX port.
+The application uses a single thread to read packets from the RX port,
+meter them, mark them with the appropriate color (green, yellow, or red),
+and write them to the TX port.
-A policing scheme can be applied before writing the packets to the TX port by dropping or
-changing the color of the packet in a static manner depending on both the input and output colors of the packets that are processed by the meter.
+A policing scheme can be applied before writing packets to the TX port,
+statically dropping packets or changing their color. The scheme depends on
+both the input and output colors of packets processed by the meter.
-The operation mode can be selected as compile time out of the following options:
+Select the operation mode at compile time from the following options:
* Simple forwarding
@@ -24,60 +27,64 @@ The operation mode can be selected as compile time out of the following options:
* srTCM color aware
-* srTCM color blind
+* trTCM color blind
-* srTCM color aware
+* trTCM color aware
-Please refer to RFC2697 and RFC2698 for details about the srTCM and trTCM configurable parameters
-(CIR, CBS and EBS for srTCM; CIR, PIR, CBS and PBS for trTCM).
+See RFC 2697 and RFC 2698 for details about the srTCM and trTCM configurable
+parameters (CIR, CBS, and EBS for srTCM; CIR, PIR, CBS, and PBS for trTCM).
-The color blind modes are functionally equivalent with the color-aware modes when
-all the incoming packets are colored as green.
+The color blind modes function equivalently to the color aware modes when
+all incoming packets are green.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
-The application is located in the ``qos_meter`` sub-directory.
+The application source resides in the ``qos_meter`` sub-directory.
Running the Application
-----------------------
-The application execution command line is as below:
+Run the application with the following command line:
.. code-block:: console
./dpdk-qos_meter [EAL options] -- -p PORTMASK
-The application is constrained to use a single core in the EAL core mask and 2 ports only in the application port mask
-(first port from the port mask is used for RX and the other port in the core mask is used for TX).
+The application requires a single core in the EAL core mask and exactly
+two ports in the application port mask. The first port in the mask handles RX;
+the second port handles TX.
-Refer to *DPDK Getting Started Guide* for general information on running applications and
-the Environment Abstraction Layer (EAL) options.
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
Explanation
-----------
-Selecting one of the metering modes is done with these defines:
+Select the metering mode with these defines:
.. literalinclude:: ../../../examples/qos_meter/main.c
:language: c
:start-after: Traffic metering configuration. 8<
:end-before: >8 End of traffic metering configuration.
-To simplify debugging (for example, by using the traffic generator RX side MAC address based packet filtering feature),
-the color is defined as the LSB byte of the destination MAC address.
+To simplify debugging (for example, when using the traffic generator's
+MAC address-based packet filtering on the RX side), the application encodes
+the color in the LSB of the destination MAC address.
-The traffic meter parameters are configured in the application source code with following default values:
+The application source code configures traffic meter parameters with the
+following default values:
.. literalinclude:: ../../../examples/qos_meter/main.c
:language: c
:start-after: Traffic meter parameters are configured in the application. 8<
:end-before: >8 End of traffic meter parameters are configured in the application.
-Assuming the input traffic is generated at line rate and all packets are 64 bytes Ethernet frames (IPv4 packet size of 46 bytes)
-and green, the expected output traffic should be marked as shown in the following table:
+Assuming the input traffic arrives at line rate with all packets as
+64-byte Ethernet frames (46-byte IPv4 payload) colored green, the meter
+marks the output traffic as shown in the following table:
.. _table_qos_metering_1:
@@ -85,53 +92,49 @@ and green, the expected output traffic should be marked as shown in the followin
+-------------+------------------+-------------------+----------------+
| **Mode** | **Green (Mpps)** | **Yellow (Mpps)** | **Red (Mpps)** |
- | | | | |
+=============+==================+===================+================+
| srTCM blind | 1 | 1 | 12.88 |
- | | | | |
+-------------+------------------+-------------------+----------------+
| srTCM color | 1 | 1 | 12.88 |
- | | | | |
+-------------+------------------+-------------------+----------------+
| trTCM blind | 1 | 0.5 | 13.38 |
- | | | | |
+-------------+------------------+-------------------+----------------+
| trTCM color | 1 | 0.5 | 13.38 |
- | | | | |
+-------------+------------------+-------------------+----------------+
| FWD | 14.88 | 0 | 0 |
- | | | | |
+-------------+------------------+-------------------+----------------+
-To set up the policing scheme as desired, it is necessary to modify the main.h source file,
-where this policy is implemented as a static structure, as follows:
+To configure the policing scheme, modify the static structure in the main.h
+source file:
.. literalinclude:: ../../../examples/qos_meter/main.h
:language: c
:start-after: Policy implemented as a static structure. 8<
:end-before: >8 End of policy implemented as a static structure.
-Where rows indicate the input color, columns indicate the output color,
-and the value that is stored in the table indicates the action to be taken for that particular case.
+Rows indicate the input color, columns indicate the output color, and each
+table entry specifies the action for that combination.
-There are four different actions:
+The four available actions are:
-* GREEN: The packet's color is changed to green.
+* GREEN: Change the packet color to green.
-* YELLOW: The packet's color is changed to yellow.
+* YELLOW: Change the packet color to yellow.
-* RED: The packet's color is changed to red.
+* RED: Change the packet color to red.
-* DROP: The packet is dropped.
+* DROP: Drop the packet.
In this particular case:
-* Every packet which input and output color are the same, keeps the same color.
+* When input and output colors match, keep the same color.
-* Every packet which color has improved is dropped (this particular case can't happen, so these values will not be used).
+* When the color improves (output greener than input), drop the packet.
+ This case cannot occur in practice, so these values go unused.
-* For the rest of the cases, the color is changed to red.
+* For all other cases, change the color to red.
.. note::
- * In color blind mode, first row GREEN color is only valid.
- * To drop the packet, policer_table action has to be set to DROP.
+
+ In color blind mode, only the GREEN input row applies.
+ To drop packets, set the policer_table action to DROP.
diff --git a/doc/guides/sample_app_ug/qos_scheduler.rst b/doc/guides/sample_app_ug/qos_scheduler.rst
index cd33beecb0..caa2fb84cb 100644
--- a/doc/guides/sample_app_ug/qos_scheduler.rst
+++ b/doc/guides/sample_app_ug/qos_scheduler.rst
@@ -4,12 +4,12 @@
QoS Scheduler Sample Application
================================
-The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.
+The QoS sample application demonstrates DPDK QoS scheduling.
Overview
--------
-The architecture of the QoS scheduler application is shown in the following figure.
+The following figure shows the architecture of the QoS scheduler application.
.. _figure_qos_sched_app_arch:
@@ -18,110 +18,113 @@ The architecture of the QoS scheduler application is shown in the following figu
QoS Scheduler Application Architecture
-There are two flavors of the runtime execution for this application,
-with two or three threads per each packet flow configuration being used.
-The RX thread reads packets from the RX port,
-classifies the packets based on the double VLAN (outer and inner) and
-the lower byte of the IP destination address and puts them into the ring queue.
-The worker thread dequeues the packets from the ring and calls the QoS scheduler enqueue/dequeue functions.
-If a separate TX core is used, these are sent to the TX ring.
-Otherwise, they are sent directly to the TX port.
-The TX thread, if present, reads from the TX ring and write the packets to the TX port.
+The application supports two runtime configurations: two or three threads
+per packet flow.
+
+The RX thread reads packets from the RX port, classifies them based on
+double VLAN tags (outer and inner) and the lower byte of the IP destination
+address, then enqueues them to the ring.
+
+The worker thread dequeues packets from the ring and calls the QoS scheduler
+enqueue/dequeue functions. With a separate TX core, the worker sends packets
+to the TX ring. Otherwise, it sends them directly to the TX port.
+The TX thread, when present, reads from the TX ring and writes packets to
+the TX port.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
-The application is located in the ``qos_sched`` sub-directory.
+The application source resides in the ``qos_sched`` sub-directory.
- .. note::
+.. note::
- This application is intended as a linux only.
+ This application supports Linux only.
.. note::
- Number of grinders is currently set to 8.
- This can be modified by specifying RTE_SCHED_PORT_N_GRINDERS=N
- in CFLAGS, where N is number of grinders.
+ The number of grinders defaults to 8. Modify this value by specifying
+ ``RTE_SCHED_PORT_N_GRINDERS=N`` in CFLAGS, where N is the desired count.
Running the Application
-----------------------
.. note::
- In order to run the application, a total of at least 4
- G of huge pages must be set up for each of the used sockets (depending on the cores in use).
+ The application requires at least 4 GB of huge pages per socket
+ (depending on which cores are in use).
-The application has a number of command line options:
+The application accepts the following command line options:
.. code-block:: console
./<build_dir>/examples/dpdk-qos_sched [EAL options] -- <APP PARAMS>
-Mandatory application parameters include:
+Mandatory application parameters:
-* --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configuration.
- Multiple pfc entities can be configured in the command line,
- having 4 or 5 items (if TX core defined or not).
+* ``--pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE"``: Packet flow
+ configuration. Specify multiple pfc entries on the command line with
+ 4 or 5 items (depending on whether a TX core is defined).
-Optional application parameters include:
+Optional application parameters:
-* -i: It makes the application to start in the interactive mode.
- In this mode, the application shows a command line that can be used for obtaining statistics while
- scheduling is taking place (see interactive mode below for more information).
+* ``-i``: Start the application in interactive mode. This mode displays
+ a command line for obtaining statistics while scheduling runs
+ (see `Interactive mode`_ for details).
-* --mnc n: Main core index (the default value is 1).
+* ``--mnc n``: Main core index (default: 1).
-* --rsz "A, B, C": Ring sizes:
+* ``--rsz "A, B, C"``: Ring sizes:
-* A = Size (in number of buffer descriptors) of each of the NIC RX rings read
- by the I/O RX lcores (the default value is 128).
+ * A = Size (in buffer descriptors) of each NIC RX ring read by I/O RX
+ lcores (default: 128).
-* B = Size (in number of elements) of each of the software rings used
- by the I/O RX lcores to send packets to worker lcores (the default value is 8192).
+ * B = Size (in elements) of each software ring that I/O RX lcores use
+ to send packets to worker lcores (default: 8192).
-* C = Size (in number of buffer descriptors) of each of the NIC TX rings written
- by worker lcores (the default value is 256)
+ * C = Size (in buffer descriptors) of each NIC TX ring written by
+ worker lcores (default: 256).
-* --bsz "A, B, C, D": Burst sizes
+* ``--bsz "A, B, C, D"``: Burst sizes:
-* A = I/O RX lcore read burst size from the NIC RX (the default value is 64)
+ * A = I/O RX lcore read burst size from NIC RX (default: 64).
-* B = I/O RX lcore write burst size to the output software rings,
- worker lcore read burst size from input software rings,QoS enqueue size (the default value is 64)
+ * B = I/O RX lcore write burst size to output software rings, worker
+ lcore read burst size from input software rings, and QoS enqueue
+ size (default: 64).
-* C = QoS dequeue size (the default value is 63)
+ * C = QoS dequeue size (default: 63).
-* D = Worker lcore write burst size to the NIC TX (the default value is 64)
+ * D = Worker lcore write burst size to NIC TX (default: 64).
-* --msz M: Mempool size (in number of mbufs) for each pfc (default 2097152)
+* ``--msz M``: Mempool size (in mbufs) for each pfc (default: 2097152).
-* --rth "A, B, C": The RX queue threshold parameters
+* ``--rth "A, B, C"``: RX queue threshold parameters:
-* A = RX prefetch threshold (the default value is 8)
+ * A = RX prefetch threshold (default: 8).
-* B = RX host threshold (the default value is 8)
+ * B = RX host threshold (default: 8).
-* C = RX write-back threshold (the default value is 4)
+ * C = RX write-back threshold (default: 4).
-* --tth "A, B, C": TX queue threshold parameters
+* ``--tth "A, B, C"``: TX queue threshold parameters:
-* A = TX prefetch threshold (the default value is 36)
+ * A = TX prefetch threshold (default: 36).
-* B = TX host threshold (the default value is 0)
+ * B = TX host threshold (default: 0).
-* C = TX write-back threshold (the default value is 0)
+ * C = TX write-back threshold (default: 0).
-* --cfg FILE: Profile configuration to load
+* ``--cfg FILE``: Profile configuration file to load.
-Refer to *DPDK Getting Started Guide* for general information on running applications and
-the Environment Abstraction Layer (EAL) options.
+Refer to the *DPDK Getting Started Guide* for general information on running
+applications and the Environment Abstraction Layer (EAL) options.
-The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters
-needed for the QoS scheduler configuration.
+The profile configuration file defines all port/subport/pipe/traffic class/queue
+parameters for the QoS scheduler.
-The profile file has the following format:
+The profile file uses the following format:
.. literalinclude:: ../../../examples/qos_sched/profile.cfg
:start-after: Data Plane Development Kit (DPDK) Programmer's Guide
@@ -129,89 +132,94 @@ The profile file has the following format:
Interactive mode
~~~~~~~~~~~~~~~~
-These are the commands that are currently working under the command line interface:
-
-* Control Commands
+The interactive mode supports these commands:
-* --quit: Quits the application.
+* Control commands:
-* General Statistics
+ * ``quit``: Exit the application.
- * stats app: Shows a table with in-app calculated statistics.
+* General statistics:
- * stats port X subport Y: For a specific subport, it shows the number of packets that
- went through the scheduler properly and the number of packets that were dropped.
- The same information is shown in bytes.
- The information is displayed in a table separating it in different traffic classes.
+ * ``stats app``: Display a table of in-application statistics.
- * stats port X subport Y pipe Z: For a specific pipe, it shows the number of packets that
- went through the scheduler properly and the number of packets that were dropped.
- The same information is shown in bytes.
- This information is displayed in a table separating it in individual queues.
+ * ``stats port X subport Y``: For a specific subport, display the number
+ of packets (and bytes) that passed through the scheduler and the
+ number dropped. The table separates results by traffic class.
-* Average queue size
+ * ``stats port X subport Y pipe Z``: For a specific pipe, display the
+ number of packets (and bytes) that passed through the scheduler and
+ the number dropped. The table separates results by queue.
-All of these commands work the same way, averaging the number of packets throughout a specific subset of queues.
+* Average queue size:
-Two parameters can be configured for this prior to calling any of these commands:
+ These commands average packet counts across a subset of queues.
+ Configure two parameters before using these commands:
- * qavg n X: n is the number of times that the calculation will take place.
- Bigger numbers provide higher accuracy. The default value is 10.
+ * ``qavg n X``: Set the number of calculation iterations. Higher values
+ improve accuracy (default: 10).
- * qavg period X: period is the number of microseconds that will be allowed between each calculation.
- The default value is 100.
+ * ``qavg period X``: Set the interval in microseconds between
+ calculations (default: 100).
-The commands that can be used for measuring average queue size are:
+ The queue size measurement commands are:
-* qavg port X subport Y: Show average queue size per subport.
+ * ``qavg port X subport Y``: Display average queue size per subport.
-* qavg port X subport Y tc Z: Show average queue size per subport for a specific traffic class.
+ * ``qavg port X subport Y tc Z``: Display average queue size per subport
+ for a specific traffic class.
-* qavg port X subport Y pipe Z: Show average queue size per pipe.
+ * ``qavg port X subport Y pipe Z``: Display average queue size per pipe.
-* qavg port X subport Y pipe Z tc A: Show average queue size per pipe for a specific traffic class.
+ * ``qavg port X subport Y pipe Z tc A``: Display average queue size per
+ pipe for a specific traffic class.
-* qavg port X subport Y pipe Z tc A q B: Show average queue size of a specific queue.
+ * ``qavg port X subport Y pipe Z tc A q B``: Display average queue size
+ for a specific queue.
Example
~~~~~~~
-The following is an example command with a single packet flow configuration:
+The following command configures a single packet flow:
.. code-block:: console
./<build_dir>/examples/dpdk-qos_sched -l 1,5,7 -- --pfc "3,2,5,7" --cfg ./profile.cfg
-This example uses a single packet flow configuration which creates one RX thread on lcore 5 reading
-from port 3 and a worker thread on lcore 7 writing to port 2.
+This example creates one RX thread on lcore 5 reading from port 3 and a
+worker thread on lcore 7 writing to port 2.
-Another example with 2 packet flow configurations using different ports but sharing the same core for QoS scheduler is given below:
+The following command configures two packet flows using different ports but
+sharing the same QoS scheduler core:
.. code-block:: console
./<build_dir>/examples/dpdk-qos_sched -l 1,2,6,7 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg
-Note that independent cores for the packet flow configurations for each of the RX, WT and TX thread are also supported,
-providing flexibility to balance the work.
+The application also supports independent cores for RX, WT, and TX threads
+in each packet flow configuration, providing flexibility to balance workloads.
-The EAL corelist is constrained to contain the default main core 1 and the RX, WT and TX cores only.
+The EAL corelist must contain only the default main core 1 plus the RX, WT,
+and TX cores.
Explanation
-----------
-The Port/Subport/Pipe/Traffic Class/Queue are the hierarchical entities in a typical QoS application:
+The Port/Subport/Pipe/Traffic Class/Queue hierarchy represents entities in
+a typical QoS application:
* A subport represents a predefined group of users.
-* A pipe represents an individual user/subscriber.
+* A pipe represents an individual user or subscriber.
-* A traffic class is the representation of a different traffic type with a specific loss rate,
- delay and jitter requirements; such as data voice, video or data transfers.
+* A traffic class represents a traffic type with specific loss rate, delay,
+ and jitter requirements, such as voice, video, or data transfers.
-* A queue hosts packets from one or multiple connections of the same type belonging to the same user.
+* A queue hosts packets from one or more connections of the same type
+ belonging to the same user.
-The traffic flows that need to be configured are application dependent.
-This application classifies based on the QinQ double VLAN tags and the IP destination address as indicated in the following table.
+Traffic flow configuration depends on the application. This application
+classifies packets based on QinQ double VLAN tags and IP destination address
+as shown in the following table.
.. _table_qos_scheduler_1:
@@ -219,22 +227,18 @@ This application classifies based on the QinQ double VLAN tags and the IP destin
+----------------+-------------------------+--------------------------------------------------+----------------------------------+
| **Level Name** | **Siblings per Parent** | **QoS Functional Description** | **Selected By** |
- | | | | |
+================+=========================+==================================================+==================================+
| Port | - | Ethernet port | Physical port |
- | | | | |
+----------------+-------------------------+--------------------------------------------------+----------------------------------+
| Subport | Config (8) | Traffic shaped (token bucket) | Outer VLAN tag |
- | | | | |
+----------------+-------------------------+--------------------------------------------------+----------------------------------+
| Pipe | Config (4k) | Traffic shaped (token bucket) | Inner VLAN tag |
- | | | | |
+----------------+-------------------------+--------------------------------------------------+----------------------------------+
| Traffic Class | 13 | TCs of the same pipe services in strict priority | Destination IP address (0.0.0.X) |
- | | | | |
+----------------+-------------------------+--------------------------------------------------+----------------------------------+
| Queue | High Priority TC: 1, | Queue of lowest priority traffic | Destination IP address (0.0.0.X) |
| | Lowest Priority TC: 4 | class (Best effort) serviced in WRR | |
+----------------+-------------------------+--------------------------------------------------+----------------------------------+
-Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.
+For more information about these parameters, see the "QoS Scheduler" chapter
+in the *DPDK Programmer's Guide*.
--
2.51.0
* [PATCH 12/29] examples/vhost: improve sample application documentation
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (10 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 11/29] examples/qos: improve sample application documentation Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 13/29] examples/ptpclient: " Stephen Hemminger
` (16 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Maxime Coquelin, Chenbo Xia
Rewrite the vhost, vhost_blk, and vhost_crypto documentation for
consistency, clarity, and correctness.
Common changes across all files:
- Add Overview sections where missing
- Standardize section structure (Compiling, Running, Explanation)
- Standardize "QEMU" capitalization (was inconsistent "Qemu")
- Add missing commas after introductory clauses
- Use imperative mood for instructions
- Improve parameter formatting using RST definition lists
Changes to vhost.rst:
- Reorganize Testing Steps as a subsection under Overview
- Correct "bond" to "bound" for UIO driver binding
- Improve parameter descriptions with proper indentation
- Streamline packet injection instructions
Changes to vhost_blk.rst:
- Restructure QEMU requirements as a proper bulleted list
- Clarify reconnect and packed ring feature descriptions
Changes to vhost_crypto.rst:
- Reformat command-line options as definition list
- Clarify zero-copy experimental status warning
- Improve device initialization requirements description
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/vhost.rst | 185 +++++++++++-----------
doc/guides/sample_app_ug/vhost_blk.rst | 58 ++++---
doc/guides/sample_app_ug/vhost_crypto.rst | 94 +++++------
3 files changed, 175 insertions(+), 162 deletions(-)
diff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst
index 4c944a844a..40de72be73 100644
--- a/doc/guides/sample_app_ug/vhost.rst
+++ b/doc/guides/sample_app_ug/vhost.rst
@@ -4,6 +4,9 @@
Vhost Sample Application
========================
+Overview
+--------
+
The vhost sample application demonstrates integration of the Data Plane
Development Kit (DPDK) with the Linux* KVM hypervisor by implementing the
vhost-net offload API. The sample application performs simple packet
@@ -13,27 +16,26 @@ traffic from an external switch is performed in hardware by the Virtual
Machine Device Queues (VMDQ) and Data Center Bridging (DCB) features of
the Intel® 82599 10 Gigabit Ethernet Controller.
-Testing steps
--------------
+Testing Steps
+~~~~~~~~~~~~~
-This section shows the steps how to test a typical PVP case with this
-dpdk-vhost sample, whereas packets are received from the physical NIC
-port first and enqueued to the VM's Rx queue. Through the guest testpmd's
-default forwarding mode (io forward), those packets will be put into
-the Tx queue. The dpdk-vhost example, in turn, gets the packets and
-puts back to the same physical NIC port.
+This section shows how to test a typical PVP case with the dpdk-vhost sample,
+where packets are received from the physical NIC port first and enqueued to the
+VM's Rx queue. Through the guest testpmd's default forwarding mode (io forward),
+those packets are put into the Tx queue. The dpdk-vhost example, in turn,
+retrieves the packets and sends them back out the same physical NIC port.
-Build
-~~~~~
+Compiling the Application
+-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``vhost`` sub-directory.
.. note::
- In this example, you need build DPDK both on the host and inside guest.
+ In this example, you need to build DPDK both on the host and inside the guest.
-. _vhost_app_run_vm:
+.. _vhost_app_run_vm:
Start the VM
~~~~~~~~~~~~
@@ -50,12 +52,12 @@ Start the VM
...
.. note::
- For basic vhost-user support, QEMU 2.2 (or above) is required. For
- some specific features, a higher version might be need. Such as
- QEMU 2.7 (or above) for the reconnect feature.
+ For basic vhost-user support, QEMU 2.2 or later is required. For
+ some specific features, a higher version might be needed. For example,
+ QEMU 2.7 or later is required for the reconnect feature.
-Start the vswitch example
+Start the vswitch Example
~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
@@ -64,40 +66,44 @@ Start the vswitch example
-- --socket-file /tmp/sock0 --client \
...
-Check the `Parameters`_ section for the explanations on what do those
-parameters mean.
+See the `Parameters`_ section for explanations of the command-line options.
+
+Running the Application
+-----------------------
.. _vhost_app_run_dpdk_inside_guest:
-Run testpmd inside guest
+Run testpmd Inside Guest
~~~~~~~~~~~~~~~~~~~~~~~~
-Make sure you have DPDK built inside the guest. Also make sure the
-corresponding virtio-net PCI device is bond to a UIO driver, which
-could be done by:
+Ensure DPDK is built inside the guest and that the corresponding virtio-net
+PCI device is bound to a UIO driver. This can be done as follows:
.. code-block:: console
modprobe vfio-pci
dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0
-Then start testpmd for packet forwarding testing.
+Then, start testpmd for packet forwarding testing.
.. code-block:: console
./<build_dir>/app/dpdk-testpmd -l 0-1 -- -i
> start tx_first
-For more information about vIOMMU and NO-IOMMU and VFIO please refer to
-:doc:`/../linux_gsg/linux_drivers` section of the DPDK Getting started guide.
+For more information about vIOMMU, NO-IOMMU, and VFIO, see the
+:doc:`/../linux_gsg/linux_drivers` section of the DPDK Getting Started Guide.
-Inject packets
---------------
+Explanation
+-----------
-While a virtio-net is connected to dpdk-vhost, a VLAN tag starts with
-1000 is assigned to it. So make sure configure your packet generator
-with the right MAC and VLAN tag, you should be able to see following
-log from the dpdk-vhost console. It means you get it work::
+Inject Packets
+~~~~~~~~~~~~~~
+
+When a virtio-net device connects to dpdk-vhost, a VLAN tag starting with
+1000 is assigned to it. Configure your packet generator with the appropriate
+MAC and VLAN tag. The following log message should appear on the dpdk-vhost
+console::
VHOST_DATA: (0) mac 52:54:00:00:00:14 and vlan 1000 registered
@@ -105,88 +111,86 @@ log from the dpdk-vhost console. It means you get it work::
.. _vhost_app_parameters:
Parameters
-----------
+~~~~~~~~~~
**--socket-file path**
-Specifies the vhost-user socket file path.
+ Specifies the vhost-user socket file path.
**--client**
-DPDK vhost-user will act as the client mode when such option is given.
-In the client mode, QEMU will create the socket file. Otherwise, DPDK
-will create it. Put simply, it's the server to create the socket file.
-
+ DPDK vhost-user acts as the client when this option is given.
+ In client mode, QEMU creates the socket file. Otherwise, DPDK
+ creates it. The server always creates the socket file.
**--vm2vm mode**
-The vm2vm parameter sets the mode of packet switching between guests in
-the host.
+ Sets the mode of packet switching between guests in the host.
-- 0 disables vm2vm, implying that VM's packets will always go to the NIC port.
-- 1 means a normal mac lookup packet routing.
-- 2 means hardware mode packet forwarding between guests, it allows packets
- go to the NIC port, hardware L2 switch will determine which guest the
- packet should forward to or need send to external, which bases on the
- packet destination MAC address and VLAN tag.
+ - 0 disables vm2vm, meaning VM packets always go to the NIC port.
+ - 1 enables normal MAC lookup packet routing.
+ - 2 enables hardware mode packet forwarding between guests. Packets
+ can go to the NIC port, and the hardware L2 switch determines which
+ guest the packet should be forwarded to or whether it needs to be
+ sent externally, based on the packet destination MAC address and
+ VLAN tag.
**--mergeable 0|1**
-Set 0/1 to disable/enable the mergeable Rx feature. It's disabled by default.
+ Set to 0 to disable or 1 to enable the mergeable Rx feature.
+ Disabled by default.
**--stats interval**
-The stats parameter controls the printing of virtio-net device statistics.
-The parameter specifies an interval (in unit of seconds) to print statistics,
-with an interval of 0 seconds disabling statistics.
+ Controls the printing of virtio-net device statistics.
+ The parameter specifies an interval in seconds to print statistics.
+ An interval of 0 disables statistics.
**--rx-retry 0|1**
-The rx-retry option enables/disables enqueue retries when the guests Rx queue
-is full. This feature resolves a packet loss that is observed at high data
-rates, by allowing it to delay and retry in the receive path. This option is
-enabled by default.
+ Enables or disables enqueue retries when the guest's Rx queue
+ is full. This feature resolves packet loss observed at high data
+ rates by allowing delay and retry in the receive path. Enabled by default.
**--rx-retry-num num**
-The rx-retry-num option specifies the number of retries on an Rx burst, it
-takes effect only when rx retry is enabled. The default value is 4.
+ Specifies the number of retries on an Rx burst. Takes effect only when
+ rx-retry is enabled. The default value is 4.
**--rx-retry-delay msec**
-The rx-retry-delay option specifies the timeout (in micro seconds) between
-retries on an RX burst, it takes effect only when rx retry is enabled. The
-default value is 15.
+ Specifies the timeout in microseconds between retries on an Rx burst.
+ Takes effect only when rx-retry is enabled. The default value is 15.
**--builtin-net-driver**
-A very simple vhost-user net driver which demonstrates how to use the generic
-vhost APIs will be used when this option is given. It is disabled by default.
+ Uses a simple vhost-user net driver that demonstrates how to use the
+ generic vhost APIs. Disabled by default.
**--dmas**
-This parameter is used to specify the assigned DMA device of a vhost device.
-Async vhost-user net driver will be used if --dmas is set. For example
---dmas [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3] means use
-DMA channel 00:04.0/00:04.2 for vhost device 0 enqueue/dequeue operation
-and use DMA channel 00:04.1/00:04.3 for vhost device 1 enqueue/dequeue
-operation. The index of the device corresponds to the socket file in order,
-that means vhost device 0 is created through the first socket file, vhost
-device 1 is created through the second socket file, and so on.
+ Specifies the assigned DMA device of a vhost device.
+ The async vhost-user net driver is used when --dmas is set. For example,
+ ``--dmas [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3]`` means
+ DMA channel 00:04.0/00:04.2 is used for vhost device 0 enqueue/dequeue
+ operations and DMA channel 00:04.1/00:04.3 is used for vhost device 1
+ enqueue/dequeue operations. The index of the device corresponds to the
+ socket file in order: vhost device 0 is created through the first socket
+ file, vhost device 1 is created through the second socket file, and so on.
**--total-num-mbufs 0-N**
-This parameter sets the number of mbufs to be allocated in mbuf pools,
-the default value is 147456. This is can be used if launch of a port fails
-due to shortage of mbufs.
+ Sets the number of mbufs to be allocated in mbuf pools.
+ The default value is 147456. This option can be used if port launch fails
+ due to shortage of mbufs.
**--tso 0|1**
-Disables/enables TCP segment offload.
+ Disables or enables TCP segment offload.
**--tx-csum 0|1**
-Disables/enables TX checksum offload.
+ Disables or enables TX checksum offload.
**-p mask**
-Port mask which specifies the ports to be used
+ Port mask specifying the ports to be used.
Common Issues
--------------
+~~~~~~~~~~~~~
-* QEMU fails to allocate memory on hugetlbfs, with an error like the
+* QEMU fails to allocate memory on hugetlbfs and shows an error like the
following::
file_ram_alloc: can't mmap RAM pages: Cannot allocate memory
- When running QEMU the above error indicates that it has failed to allocate
+ When running QEMU, the above error indicates that it has failed to allocate
memory for the Virtual Machine on the hugetlbfs. This is typically due to
insufficient hugepages being free to support the allocation request. The
number of free hugepages can be checked as follows:
@@ -200,23 +204,22 @@ Common Issues
* Failed to build DPDK in VM
- Make sure "-cpu host" QEMU option is given.
+ Ensure the ``-cpu host`` QEMU option is given.
-* Device start fails if NIC's max queues > the default number of 128
+* Device start fails if the NIC's max queue count exceeds the default of 128
- mbuf pool size is dependent on the MAX_QUEUES configuration, if NIC's
- max queue number is larger than 128, device start will fail due to
- insufficient mbuf. This can be adjusted using ``--total-num-mbufs``
- parameter.
+ The mbuf pool size depends on the MAX_QUEUES configuration. If the NIC's
+ max queue number is larger than 128, device start fails due to
+ insufficient mbufs. Adjust using the ``--total-num-mbufs`` parameter.
-* Option "builtin-net-driver" is incompatible with QEMU
+* Option ``builtin-net-driver`` is incompatible with QEMU
- QEMU vhost net device start will fail if protocol feature is not negotiated.
- DPDK virtio-user PMD can be the replacement of QEMU.
+ The QEMU vhost net device start fails if the protocol feature is not
+ negotiated. DPDK virtio-user PMD can be used as a replacement for QEMU.
-* Device start fails when enabling "builtin-net-driver" without memory
+* Device start fails when enabling ``builtin-net-driver`` without memory
pre-allocation
- The builtin example doesn't support dynamic memory allocation. When vhost
- backend enables "builtin-net-driver", "--numa-mem" option should be
- added at virtio-user PMD side as a startup item.
+ The builtin example does not support dynamic memory allocation. When the
+ vhost backend enables ``builtin-net-driver``, the ``--numa-mem`` option
+ should be added at the virtio-user PMD side as a startup item.
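The ``--dmas`` index-to-socket mapping described above can be made concrete with a short sketch. This is not DPDK code; the parser below is a hypothetical illustration of the option's semantics, assuming the entry format ``txdN@channel`` / ``rxdN@channel`` shown in the example above:

```python
# Illustrative parser (not DPDK code) for a --dmas value such as
# [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3].
# txdN names vhost device N's enqueue DMA channel, rxdN its dequeue
# channel; device N corresponds to the N-th socket file, in order.
import re

def parse_dmas(value):
    mapping = {}
    for entry in value.strip("[]").split(","):
        m = re.fullmatch(r"(txd|rxd)(\d+)@(\S+)", entry.strip())
        if m is None:
            raise ValueError(f"bad --dmas entry: {entry!r}")
        direction = "enqueue" if m.group(1) == "txd" else "dequeue"
        mapping.setdefault(int(m.group(2)), {})[direction] = m.group(3)
    return mapping

print(parse_dmas("[txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3]"))
# vhost device 0 uses 00:04.0 (enqueue) and 00:04.2 (dequeue), and so on.
```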
diff --git a/doc/guides/sample_app_ug/vhost_blk.rst b/doc/guides/sample_app_ug/vhost_blk.rst
index 788eef0d5f..aedf146375 100644
--- a/doc/guides/sample_app_ug/vhost_blk.rst
+++ b/doc/guides/sample_app_ug/vhost_blk.rst
@@ -2,37 +2,41 @@
Copyright(c) 2010-2017 Intel Corporation.
Vhost_blk Sample Application
-=============================
+============================
-The vhost_blk sample application implemented a simple block device,
-which used as the backend of Qemu vhost-user-blk device. Users can extend
-the exist example to use other type of block device(e.g. AIO) besides
-memory based block device. Similar with vhost-user-net device, the sample
-application used domain socket to communicate with Qemu, and the virtio
-ring (split or packed format) was processed by vhost_blk sample application.
+Overview
+--------
-The sample application reuse lots codes from SPDK(Storage Performance
-Development Kit, https://github.com/spdk/spdk) vhost-user-blk target,
-for DPDK vhost library used in storage area, user can take SPDK as
-reference as well.
+The vhost_blk sample application implements a simple block device
+used as the backend of a QEMU vhost-user-blk device. Users can extend
+the existing example to use other types of block devices (for example, AIO)
+in addition to memory-based block devices. Similar to the vhost-user-net
+device, the sample application uses a domain socket to communicate with QEMU,
+and the virtio ring (split or packed format) is processed by the vhost_blk
+sample application.
-Testing steps
--------------
+The sample application reuses code from SPDK (Storage Performance
+Development Kit, https://github.com/spdk/spdk) vhost-user-blk target.
+For use of the DPDK vhost library in storage applications, SPDK can also
+serve as a reference.
-This section shows the steps how to start a VM with the block device as
-fast data path for critical application.
+This section shows how to start a VM with the block device as a
+fast data path for critical applications.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``examples`` sub-directory.
-You will also need to build DPDK both on the host and inside the guest
+You need to build DPDK both on the host and inside the guest.
-Start the vhost_blk example
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Running the Application
+-----------------------
+
+Start the vhost_blk Example
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
@@ -55,11 +59,13 @@ Start the VM
...
.. note::
- You must check whether your Qemu can support "vhost-user-blk" or not,
- Qemu v4.0 or newer version is required.
- reconnect=1 means live recovery support that qemu can reconnect vhost_blk
- after we restart vhost_blk example.
- packed=on means the device support packed ring but need the guest kernel
- version >= 5.0.
- Now Qemu commit 9bb73502321d46f4d320fa17aa38201445783fc4 both support the
+ Verify that your QEMU supports ``vhost-user-blk``. QEMU v4.0 or later
+ is required.
+
+ * ``reconnect=1`` enables live recovery support, allowing QEMU to reconnect
+ to vhost_blk after the vhost_blk example is restarted.
+ * ``packed=on`` enables packed ring support, which requires guest kernel
+ version 5.0 or later.
+
+ QEMU commit 9bb73502321d46f4d320fa17aa38201445783fc4 supports both
vhost-blk reconnect and packed ring.
diff --git a/doc/guides/sample_app_ug/vhost_crypto.rst b/doc/guides/sample_app_ug/vhost_crypto.rst
index 5c4475342c..0c4ee3f25a 100644
--- a/doc/guides/sample_app_ug/vhost_crypto.rst
+++ b/doc/guides/sample_app_ug/vhost_crypto.rst
@@ -4,66 +4,70 @@
Vhost_Crypto Sample Application
===============================
-The vhost_crypto sample application implemented a simple Crypto device,
-which used as the backend of Qemu vhost-user-crypto device. Similar with
-vhost-user-net and vhost-user-scsi device, the sample application used
-domain socket to communicate with Qemu, and the virtio ring was processed
-by vhost_crypto sample application.
+Overview
+--------
-Testing steps
--------------
+The vhost_crypto sample application implements a crypto device used
+as the backend of a QEMU vhost-user-crypto device. Similar to the
+vhost-user-net and vhost-user-scsi devices, the sample application uses a
+domain socket to communicate with QEMU, and the virtio ring is processed
+by the vhost_crypto sample application.
-This section shows the steps how to start a VM with the crypto device as
-fast data path for critical application.
+This section shows how to start a VM with the crypto device as a
+fast data path for critical applications.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``examples`` sub-directory.
-Start the vhost_crypto example
+Running the Application
+-----------------------
+
+Start the vhost_crypto Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
./dpdk-vhost_crypto [EAL options] --
- --config (lcore,cdev-id,queue-id)[,(lcore,cdev-id,queue-id)]
- --socket-file lcore,PATH
- [--zero-copy]
- [--guest-polling]
- [--asymmetric-crypto]
-
-where,
-
-* config (lcore,cdev-id,queue-id): build the lcore-cryptodev id-queue id
- connection. Once specified, the specified lcore will only work with
- specified cryptodev's queue.
-
-* socket-file lcore,PATH: the path of UNIX socket file to be created and
- the lcore id that will deal with the all workloads of the socket. Multiple
- instances of this config item is supported and one lcore supports processing
- multiple sockets.
-
-* zero-copy: the presence of this item means the ZERO-COPY feature will be
- enabled. Otherwise it is disabled. PLEASE NOTE the ZERO-COPY feature is still
- in experimental stage and may cause the problem like segmentation fault. If
- the user wants to use LKCF in the guest, this feature shall be turned off.
-
-* guest-polling: the presence of this item means the application assumes the
- guest works in polling mode, thus will NOT notify the guest completion of
- processing.
-
-* asymmetric-crypto: the presence of this item means
- the application can handle the asymmetric crypto requests.
- When this option is used,
- symmetric crypto requests can not be handled by the application.
+ --config (lcore,cdev-id,queue-id)[,(lcore,cdev-id,queue-id)]
+ --socket-file lcore,PATH
+ [--zero-copy]
+ [--guest-polling]
+ [--asymmetric-crypto]
+
+where:
+
+**--config (lcore,cdev-id,queue-id)**
+ Builds the lcore-cryptodev-queue connection. When specified, the lcore
+ works only with the specified cryptodev's queue.
+
+**--socket-file lcore,PATH**
+ Specifies the path of the UNIX socket file to be created and the lcore
+ that handles all workloads for the socket. Multiple instances of this
+ option are supported, and one lcore can process multiple sockets.
+
+**--zero-copy**
+ Enables the zero-copy feature when present. Otherwise, zero-copy is
+ disabled. Note that the zero-copy feature is experimental and may cause
+ problems such as segmentation faults. If the user wants to use LKCF
+ (the Linux kernel crypto framework) in the guest, this feature should
+ be disabled.
+
+**--guest-polling**
+ When present, the application assumes the guest works in polling mode
+ and does not notify the guest of processing completion.
+
+**--asymmetric-crypto**
+ When present, the application can handle asymmetric crypto requests.
+ When this option is used, symmetric crypto requests cannot be handled
+ by the application.
The application requires that crypto devices capable of performing
-the specified crypto operation are available on application initialization.
-This means that HW crypto device/s must be bound to a DPDK driver or
-a SW crypto device/s (virtual crypto PMD) must be created (using --vdev).
+the specified crypto operation are available at initialization.
+This means that hardware crypto devices must be bound to a DPDK driver or
+software crypto devices (virtual crypto PMD) must be created using ``--vdev``.
.. _vhost_crypto_app_run_vm:
@@ -83,4 +87,4 @@ Start the VM
...
.. note::
- You must check whether your Qemu can support "vhost-user-crypto" or not.
+ Verify that your QEMU supports ``vhost-user-crypto``.
--
2.51.0
* [PATCH 13/29] examples/ptpclient: improve sample application documentation
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (11 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 12/29] examples/vhost: " Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 14/29] doc/guides: improve vDPA sample application guide Stephen Hemminger
` (15 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Rewrite the PTP client documentation for clarity and correctness.
Structural changes:
- Add Overview section with note about PTP purpose
- Add reference to IEEE 1588 standard
- Demote Limitations and How the Application Works to subsections
- Rename Code Explanation to Explanation for consistency
Technical corrections:
- Use standard PTP message names: Follow_Up, Delay_Req, Delay_Resp
- Document use of IEEE 1588g-2022 alternative terminology
(time transmitter/receiver instead of master/slave)
- Clarify T1-T4 timestamp exchange sequence
Style improvements:
- Capitalize "Linux" consistently
- Use imperative mood for instructions
- Replace informal language ("we", "And than we") with formal style
- Add missing commas and articles
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/ptpclient.rst | 139 +++++++++++++------------
1 file changed, 74 insertions(+), 65 deletions(-)
diff --git a/doc/guides/sample_app_ug/ptpclient.rst b/doc/guides/sample_app_ug/ptpclient.rst
index 0df465bcb4..1940f6e29f 100644
--- a/doc/guides/sample_app_ug/ptpclient.rst
+++ b/doc/guides/sample_app_ug/ptpclient.rst
@@ -4,31 +4,44 @@
PTP Client Sample Application
=============================
-The PTP (Precision Time Protocol) client sample application is a simple
-example of using the DPDK IEEE1588 API to communicate with a PTP time transmitter
-to synchronize the time on the NIC and, optionally, on the Linux system.
+Overview
+--------
-Note, PTP is a time syncing protocol and cannot be used within DPDK as a
-time-stamping mechanism. See the following for an explanation of the protocol:
+The PTP (Precision Time Protocol) client sample application demonstrates
+the DPDK IEEE1588 API for synchronizing time with a PTP time transmitter.
+The application synchronizes the NIC clock and optionally the Linux system clock.
+
+.. note::
+
+ PTP is a time synchronization protocol and cannot serve as a
+ timestamping mechanism within DPDK.
+
+For an explanation of the protocol, see
`Precision Time Protocol
<https://en.wikipedia.org/wiki/Precision_Time_Protocol>`_.
+This application uses the IEEE 1588g-2022 alternative terminology
+("time transmitter" and "time receiver" instead of "master" and "slave").
+For the official standard, see the
+`IEEE 1588 Standard
+<https://standards.ieee.org/ieee/1588/6825/>`_.
+
Limitations
------------
+~~~~~~~~~~~
-The PTP sample application is intended as a simple reference implementation of
+The PTP sample application provides a simple reference implementation of
a PTP client using the DPDK IEEE1588 API.
-In order to keep the application simple the following assumptions are made:
+To keep the application simple, it makes the following assumptions:
-* The first discovered time transmitter is the main for the session.
-* Only L2 PTP packets are supported.
-* Only the PTP v2 protocol is supported.
-* Only the time receiver clock is implemented.
+* The first discovered time transmitter becomes the session's primary transmitter.
+* The application supports only L2 PTP packets.
+* The application supports only the PTP v2 protocol.
+* The application implements only the time receiver clock.
How the Application Works
--------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~
.. _figure_ptpclient_highlevel:
@@ -38,62 +51,61 @@ How the Application Works
The PTP synchronization in the sample application works as follows:
-* Time transmitter sends *Sync* message - the time receiver saves it as T2.
-* Time transmitter sends *Follow Up* message and sends time of T1.
-* Time receiver sends *Delay Request* frame to PTP time transmitter and stores T3.
-* Time transmitter sends *Delay Response* T4 time which is time of received T3.
+* The time transmitter sends a *Sync* message; the time receiver saves the arrival time as T2.
+* The time transmitter sends a *Follow_Up* message containing T1 (the *Sync* transmission time).
+* The time receiver sends a *Delay_Req* message to the time transmitter and records T3.
+* The time transmitter replies with a *Delay_Resp* message containing T4 (when it received the *Delay_Req*).
-The adjustment for time receiver can be represented as:
+The time receiver calculates the adjustment as:
adj = -[(T2-T1)-(T4 - T3)]/2
-If the command line parameter ``-T 1`` is used the application also
-synchronizes the PTP PHC clock with the Linux kernel clock.
+If you specify the command line parameter ``-T 1``, the application also
+synchronizes the Linux kernel clock with the PTP PHC clock.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
-The application is located in the ``ptpclient`` sub-directory.
+The application source resides in the ``ptpclient`` sub-directory.
Running the Application
-----------------------
-To run the example in a ``linux`` environment:
+To run the example in a Linux environment:
.. code-block:: console
./<build_dir>/examples/dpdk-ptpclient -l 1 -- -p 0x1 -T 0
-Refer to *DPDK Getting Started Guide* for general information on running
+Refer to the *DPDK Getting Started Guide* for general information on running
applications and the Environment Abstraction Layer (EAL) options.
* ``-p portmask``: Hexadecimal portmask.
* ``-T 0``: Update only the PTP time receiver clock.
-* ``-T 1``: Update the PTP time receiver clock and synchronize the Linux Kernel to the PTP clock.
+* ``-T 1``: Update the PTP time receiver clock and synchronize the Linux kernel clock to it.
-Code Explanation
-----------------
+Explanation
+-----------
-The following sections provide an explanation of the main components of the
-code.
+The following sections explain the main components of the code.
-All DPDK library functions used in the sample code are prefixed with ``rte_``
-and are explained in detail in the *DPDK API Documentation*.
+All DPDK library functions used in the sample code have the ``rte_`` prefix.
+The *DPDK API Documentation* explains these functions in detail.
The Main Function
~~~~~~~~~~~~~~~~~
-The ``main()`` function performs the initialization and calls the execution
+The ``main()`` function initializes the application and launches execution
threads for each lcore.
-The first task is to initialize the Environment Abstraction Layer (EAL). The
-``argc`` and ``argv`` arguments are provided to the ``rte_eal_init()``
-function. The value returned is the number of parsed arguments:
+The application first initializes the Environment Abstraction Layer (EAL). The
+``rte_eal_init()`` function receives the ``argc`` and ``argv`` arguments
+and returns the number of parsed arguments:
.. literalinclude:: ../../../examples/ptpclient/ptpclient.c
:language: c
@@ -101,7 +113,7 @@ function. The value returned is the number of parsed arguments:
:end-before: >8 End of initialization of EAL.
:dedent: 1
-And than we parse application specific arguments
+Next, the application parses application-specific arguments:
.. literalinclude:: ../../../examples/ptpclient/ptpclient.c
:language: c
@@ -109,8 +121,8 @@ And than we parse application specific arguments
:end-before: >8 End of parsing specific arguments.
:dedent: 1
-The ``main()`` also allocates a mempool to hold the mbufs (Message Buffers)
-used by the application:
+The ``main()`` function also allocates a mempool to hold the mbufs (Message Buffers)
+that the application uses:
.. literalinclude:: ../../../examples/ptpclient/ptpclient.c
:language: c
@@ -118,11 +130,11 @@ used by the application:
:end-before: >8 End of a new mempool in memory to hold the mbufs.
:dedent: 1
-Mbufs are the packet buffer structure used by DPDK. They are explained in
-detail in the "Mbuf Library" section of the *DPDK Programmer's Guide*.
+Mbufs provide the packet buffer structure that DPDK uses. The "Mbuf Library"
+section of the *DPDK Programmer's Guide* explains them in detail.
-The ``main()`` function also initializes all the ports using the user defined
-``port_init()`` function with portmask provided by user:
+The ``main()`` function also initializes all ports using the user-defined
+``port_init()`` function with the user-provided portmask:
.. literalinclude:: ../../../examples/ptpclient/ptpclient.c
:language: c
@@ -131,24 +143,23 @@ The ``main()`` function also initializes all the ports using the user defined
:dedent: 1
-Once the initialization is complete, the application is ready to launch a
-function on an lcore. In this example ``lcore_main()`` is called on a single
-lcore.
+After initialization completes, the application launches a function on an lcore.
+In this example, ``main()`` calls ``lcore_main()`` on a single lcore.
.. code-block:: c
lcore_main();
-The ``lcore_main()`` function is explained below.
+The next section explains the ``lcore_main()`` function.
The Lcores Main
~~~~~~~~~~~~~~~
-As we saw above the ``main()`` function calls an application function on the
+As shown above, the ``main()`` function calls an application function on the
available lcores.
-The main work of the application is done within the loop:
+The application performs its main work within the loop:
.. literalinclude:: ../../../examples/ptpclient/ptpclient.c
:language: c
@@ -156,11 +167,11 @@ The main work of the application is done within the loop:
:end-before: >8 End of read packets from RX queues.
:dedent: 2
-Packets are received one by one on the RX ports and, if required, PTP response
-packets are transmitted on the TX ports.
+The loop receives packets one by one on the RX ports and, when required,
+transmits PTP response packets on the TX ports.
-If the offload flags in the mbuf indicate that the packet is a PTP packet then
-the packet is parsed to determine which type:
+If the mbuf offload flags indicate a PTP packet, the code parses the packet
+to determine its type:
.. literalinclude:: ../../../examples/ptpclient/ptpclient.c
:language: c
@@ -169,30 +180,28 @@ the packet is parsed to determine which type:
:dedent: 3
-All packets are freed explicitly using ``rte_pktmbuf_free()``.
+The code frees all packets explicitly using ``rte_pktmbuf_free()``.
-The forwarding loop can be interrupted and the application closed using
-``Ctrl-C``.
+Press ``Ctrl-C`` to interrupt the forwarding loop and close the application.
PTP parsing
~~~~~~~~~~~
-The ``parse_ptp_frames()`` function processes PTP packets, implementing time receiver
-PTP IEEE1588 L2 functionality.
+The ``parse_ptp_frames()`` function processes PTP packets, implementing the
+PTP IEEE1588 L2 time receiver functionality.
.. literalinclude:: ../../../examples/ptpclient/ptpclient.c
:language: c
:start-after: Parse ptp frames. 8<
:end-before: >8 End of function processes PTP packets.
-There are 3 types of packets on the RX path which we must parse to create a minimal
-implementation of the PTP time receiver client:
+A minimal PTP time receiver client must parse three packet types on the RX path:
-* SYNC packet.
-* FOLLOW UP packet
-* DELAY RESPONSE packet.
+* *Sync* packet
+* *Follow_Up* packet
+* *Delay_Resp* packet
-When we parse the *FOLLOW UP* packet we also create and send a *DELAY_REQUEST* packet.
-Also when we parse the *DELAY RESPONSE* packet, and all conditions are met
-we adjust the PTP time receiver clock.
+When the code parses the *Follow_Up* packet, it also creates and sends a
+*Delay_Req* packet. When it parses the *Delay_Resp* packet and all
+conditions are met, it adjusts the PTP time receiver clock.
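The T1-T4 exchange rewritten above reduces to the single adjustment formula quoted in the guide. A minimal numeric sketch follows; the nanosecond timestamps are made up for illustration and do not come from the application:

```python
# Sketch of the PTP time receiver adjustment from the guide:
#   adj = -[(T2 - T1) - (T4 - T3)] / 2
# T1: Sync sent by the transmitter, T2: Sync received,
# T3: Delay_Req sent by the receiver, T4: Delay_Req received.
def ptp_adjustment(t1, t2, t3, t4):
    return -((t2 - t1) - (t4 - t3)) / 2

# Example: the receiver's clock runs 500 ns ahead of the transmitter's
# and the one-way propagation delay is 100 ns (assumed symmetric).
t1 = 1_000
t2 = t1 + 100 + 500   # propagation + receiver offset
t3 = t2 + 50          # receiver sends Delay_Req a little later
t4 = t3 + 100 - 500   # propagation, on the transmitter's clock

print(ptp_adjustment(t1, t2, t3, t4))  # -500.0: step the clock back 500 ns
```

The propagation delay cancels out, so only the clock offset remains.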
--
2.51.0
* [PATCH 14/29] doc/guides: improve vDPA sample application guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (12 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 13/29] examples/ptpclient: " Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 15/29] doc/guides: improve command line sample app guide Stephen Hemminger
` (14 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Maxime Coquelin, Chenbo Xia
Improve the vDPA sample application documentation:
- add Overview section for better document structure
- use consistent capitalization of vDPA
- replace contractions with full forms
- fix grammar and improve sentence clarity
- use proper article usage and punctuation
- replace abbreviations with full phrases
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/vdpa.rst | 71 +++++++++++++++++--------------
1 file changed, 38 insertions(+), 33 deletions(-)
diff --git a/doc/guides/sample_app_ug/vdpa.rst b/doc/guides/sample_app_ug/vdpa.rst
index cd3ec99054..ec25a57183 100644
--- a/doc/guides/sample_app_ug/vdpa.rst
+++ b/doc/guides/sample_app_ug/vdpa.rst
@@ -4,41 +4,45 @@
Vdpa Sample Application
=======================
-The vdpa sample application creates vhost-user sockets by using the
-vDPA backend. vDPA stands for vhost Data Path Acceleration which utilizes
-virtio ring compatible devices to serve virtio driver directly to enable
-datapath acceleration. As vDPA driver can help to set up vhost datapath,
-this application doesn't need to launch dedicated worker threads for vhost
+Overview
+--------
+
+The vDPA sample application creates vhost-user sockets by using the
+vDPA backend. vDPA (vhost Data Path Acceleration) uses virtio ring
+compatible devices to serve a virtio driver directly, enabling
+datapath acceleration. Because a vDPA driver can set up the vhost datapath,
+this application does not need dedicated worker threads for vhost
enqueue/dequeue operations.
-Testing steps
--------------
+The following shows how to start VMs with a vDPA vhost-user
+backend and verify network connection and live migration.
-This section shows the steps of how to start VMs with vDPA vhost-user
-backend and verify network connection & live migration.
+Compiling the Application
+-------------------------
-Build
-~~~~~
-
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``vdpa`` sub-directory.
-Start the vdpa example
+Running the Application
+-----------------------
+
+Start the vDPA Example
~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console
./dpdk-vdpa [EAL options] -- [--client] [--interactive|-i] or [--iface SOCKET_PATH]
-where
+where:
-* --client means running vdpa app in client mode, in the client mode, QEMU needs
- to run as the server mode and take charge of socket file creation.
-* --iface specifies the path prefix of the UNIX domain socket file, e.g.
- /tmp/vhost-user-, then the socket files will be named as /tmp/vhost-user-<n>
- (n starts from 0).
-* --interactive means run the vDPA sample in interactive mode:
+* --client runs the vDPA application in client mode. In this mode, QEMU
+ runs as the server and is responsible for socket file creation.
+* --iface specifies the path prefix of the UNIX domain socket file (for example,
+ /tmp/vhost-user-). The socket files are named /tmp/vhost-user-<n>
+ where n starts from 0.
+* --interactive runs the vDPA sample in interactive mode with the following
+ commands:
#. help: show help message
@@ -50,7 +54,7 @@ where
#. quit: unregister vhost driver and exit the application
-Take IFCVF driver for example:
+The following example uses the IFCVF driver:
.. code-block:: console
@@ -59,13 +63,13 @@ Take IFCVF driver for example:
-- --interactive
.. note::
- Here 0000:06:00.3 and 0000:06:00.4 refer to virtio ring compatible devices,
- and we need to bind vfio-pci to them before running vdpa sample.
+ Here 0000:06:00.3 and 0000:06:00.4 refer to virtio ring compatible devices.
+ Bind vfio-pci to them before running the vDPA sample:
* modprobe vfio-pci
* ./usertools/dpdk-devbind.py -b vfio-pci 06:00.3 06:00.4
-Then we can create 2 vdpa ports in interactive cmdline.
+Then, create two vDPA ports in the interactive command line.
.. code-block:: console
@@ -92,24 +96,25 @@ Start the VMs
-netdev type=vhost-user,id=vdpa,chardev=char0 \
-device virtio-net-pci,netdev=vdpa,mac=00:aa:bb:cc:dd:ee,page-per-vq=on \
-After the VMs launches, we can login the VMs and configure the ip, verify the
+After the VMs launch, log into the VMs, configure the IP addresses, and verify the
network connection via ping or netperf.
.. note::
- Suggest to use QEMU 3.0.0 which extends vhost-user for vDPA.
+ QEMU 3.0.0 or later is recommended as it extends vhost-user for vDPA.
Live Migration
~~~~~~~~~~~~~~
-vDPA supports cross-backend live migration, user can migrate SW vhost backend
-VM to vDPA backend VM and vice versa. Here are the detailed steps. Assume A is
-the source host with SW vhost VM and B is the destination host with vDPA.
+vDPA supports cross-backend live migration. Users can migrate a SW vhost
+backend VM to a vDPA backend VM and vice versa. The following are the
+detailed steps. Assume A is the source host with the SW vhost VM and B is
+the destination host with vDPA.
-#. Start vdpa sample and launch a VM with exact same parameters as the VM on A,
- in migration-listen mode:
+#. Start the vDPA sample and launch a VM with the same parameters as the VM
+ on A, in migration-listen mode:
.. code-block:: console
- B: <qemu-command-line> -incoming tcp:0:4444 (or other PORT))
+ B: <qemu-command-line> -incoming tcp:0:4444 (or other PORT)
#. Start the migration (on source host):
--
2.51.0
* [PATCH 15/29] doc/guides: improve command line sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (13 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 14/29] doc/guides: improve vDPA sample application guide Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:21 ` [PATCH 16/29] doc/guides: improve DMA " Stephen Hemminger
` (13 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Improve the command line sample application documentation:
- remove trademark asterisks from Linux and Ethernet
- use code formatting for command examples
- add Oxford commas and fix punctuation
- add missing articles
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/cmd_line.rst | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/doc/guides/sample_app_ug/cmd_line.rst b/doc/guides/sample_app_ug/cmd_line.rst
index e038667bd5..f2e506f8f1 100644
--- a/doc/guides/sample_app_ug/cmd_line.rst
+++ b/doc/guides/sample_app_ug/cmd_line.rst
@@ -13,7 +13,7 @@ Overview
The Command Line sample application is a simple application that
demonstrates the use of the command line interface in the DPDK.
This application is a readline-like interface that can be used
-to debug a DPDK application in a Linux* application environment.
+to debug a DPDK application in a Linux application environment.
.. note::
@@ -23,7 +23,7 @@ to debug a DPDK application in a Linux* application environment.
in the "Known Issues" section of the Release Notes.
The Command Line sample application supports some of the features of the GNU readline library
-such as completion, cut/paste and other special bindings
+such as completion, cut/paste, and other special bindings
that make configuration and debug faster and easier.
The application shows how the ``cmdline`` library can be extended
@@ -31,11 +31,11 @@ to handle a list of objects.
There are three simple commands:
-* add obj_name IP: Add a new object with an IP/IPv6 address associated to it.
+* ``add obj_name IP``: Add a new object with an IP/IPv6 address associated to it.
-* del obj_name: Delete the specified object.
+* ``del obj_name``: Delete the specified object.
-* show obj_name: Show the IP associated with the specified object.
+* ``show obj_name``: Show the IP associated with the specified object.
.. note::
@@ -44,7 +44,7 @@ There are three simple commands:
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``cmd_line`` sub-directory.
@@ -63,7 +63,7 @@ and the Environment Abstraction Layer (EAL) options.
Explanation
-----------
-The following sections provide explanation of the code.
+The following sections provide an explanation of the code.
EAL Initialization and cmdline Start
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -100,17 +100,17 @@ For example:
Each command (of type cmdline_parse_inst_t) is defined statically.
It contains a pointer to a callback function that is executed when the command is parsed,
-an opaque pointer, a help string and a list of tokens in a NULL-terminated table.
+an opaque pointer, a help string, and a list of tokens in a NULL-terminated table.
The rte_cmdline application provides a list of pre-defined token types:
-* String Token: Match a static string, a list of static strings or any string.
+* String Token: Match a static string, a list of static strings, or any string.
* Number Token: Match a number that can be signed or unsigned, from 8-bit to 32-bit.
* IP Address Token: Match an IPv4 or IPv6 address or network.
-* Ethernet* Address Token: Match a MAC address.
+* Ethernet Address Token: Match a MAC address.
In this example, a new token type obj_list is defined and implemented
in the parse_obj_list.c and parse_obj_list.h files.
--
2.51.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH 16/29] doc/guides: improve DMA sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (14 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 15/29] doc/guides: improve command line sample app guide Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-19 0:47 ` fengchengwen
2026-01-14 22:21 ` [PATCH 17/29] doc/guides: improve FIPS validation " Stephen Hemminger
` (12 subsequent siblings)
28 siblings, 1 reply; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Chengwen Feng, Kevin Laatz, Bruce Richardson
Improve the DMA sample application documentation:
- use Tx/Rx instead of TX/RX per DPDK style
- use "software" and "hardware" instead of SW/HW abbreviations
- add missing articles and fix punctuation
- use consistent option formatting with leading dash
- fix grammatical errors throughout
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/dma.rst | 104 +++++++++++++++----------------
1 file changed, 52 insertions(+), 52 deletions(-)
diff --git a/doc/guides/sample_app_ug/dma.rst b/doc/guides/sample_app_ug/dma.rst
index 9605996c6c..d59af45e7c 100644
--- a/doc/guides/sample_app_ug/dma.rst
+++ b/doc/guides/sample_app_ug/dma.rst
@@ -15,14 +15,14 @@ copy application.
Also, while forwarding, the MAC addresses are affected as follows:
-* The source MAC address is replaced by the TX port MAC address
+* The source MAC address is replaced by the Tx port MAC address
-* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
+* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID
-This application can be used to compare performance of using software packet
+This application can be used to compare the performance of using software packet
copy with copy done using a DMA device for different sizes of packets.
-The example will print out statistics each second. The stats shows
-received/send packets and packets dropped or failed to copy.
+The example prints out statistics each second. The stats show
+received/sent packets and packets dropped or failed to copy.
Compiling the Application
-------------------------
@@ -36,7 +36,7 @@ Running the Application
-----------------------
In order to run the hardware copy application, the copying device
-needs to be bound to user-space IO driver.
+needs to be bound to a user-space I/O driver.
Refer to the :doc:`../prog_guide/dmadev` for information on using the library.
@@ -49,25 +49,25 @@ The application requires a number of command line options:
where,
-* p MASK: A hexadecimal bitmask of the ports to configure (default is all)
+* -p MASK: A hexadecimal bitmask of the ports to configure (default is all)
-* q NQ: Number of Rx queues used per port equivalent to DMA channels
+* -q NQ: Number of Rx queues used per port equivalent to DMA channels
per port (default is 1)
-* c CT: Performed packet copy type: software (sw) or hardware using
+* -c CT: Performed packet copy type: software (sw) or hardware using
DMA (hw) (default is hw)
-* s RS: Size of dmadev descriptor ring for hardware copy mode or rte_ring for
+* -s RS: Size of DMAdev descriptor ring for hardware copy mode or rte_ring for
software copy mode (default is 2048)
* --[no-]mac-updating: Whether MAC address of packets should be changed
or not (default is mac-updating)
-* b BS: set the DMA batch size
+* -b BS: Set the DMA batch size
-* f FS: set the max frame size
+* -f FS: Set the max frame size
-* i SI: set the interval, in second, between statistics prints (default is 1)
+* -i SI: Set the interval, in seconds, between statistics prints (default is 1)
The application can be launched in various configurations depending on the
provided parameters. The app can use up to 2 lcores: one of them receives
@@ -75,22 +75,22 @@ incoming traffic and makes a copy of each packet. The second lcore then
updates the MAC address and sends the copy. If one lcore per port is used,
both operations are done sequentially. For each configuration, an additional
lcore is needed since the main lcore does not handle traffic but is
-responsible for configuration, statistics printing and safe shutdown of
+responsible for configuration, statistics printing, and safe shutdown of
all ports and devices.
The application can use a maximum of 8 ports.
To run the application in a Linux environment with 3 lcores (the main lcore,
-plus two forwarding cores), a single port (port 0), software copying and MAC
-updating issue the command:
+plus two forwarding cores), a single port (port 0), software copying, and MAC
+updating, issue the command:
.. code-block:: console
$ ./<build_dir>/examples/dpdk-dma -l 0-2 -n 2 -- -p 0x1 --mac-updating -c sw
To run the application in a Linux environment with 2 lcores (the main lcore,
-plus one forwarding core), 2 ports (ports 0 and 1), hardware copying and no MAC
-updating issue the command:
+plus one forwarding core), 2 ports (ports 0 and 1), hardware copying, and no MAC
+updating, issue the command:
.. code-block:: console
@@ -126,7 +126,7 @@ function. The value returned is the number of parsed arguments:
:dedent: 1
-The ``main()`` also allocates a mempool to hold the mbufs (Message Buffers)
+The ``main()`` function also allocates a mempool to hold the mbufs (Message Buffers)
used by the application:
.. literalinclude:: ../../../examples/dma/dmafwd.c
@@ -146,10 +146,10 @@ The ``main()`` function also initializes the ports:
:end-before: >8 End of initializing each port.
:dedent: 1
-Each port is configured using ``port_init()`` function. The Ethernet
+Each port is configured using the ``port_init()`` function. The Ethernet
ports are configured with local settings using the ``rte_eth_dev_configure()``
-function and the ``port_conf`` struct. The RSS is enabled so that
-multiple Rx queues could be used for packet receiving and copying by
+function and the ``port_conf`` struct. RSS is enabled so that
+multiple Rx queues can be used for packet receiving and copying by
multiple DMA channels per port:
.. literalinclude:: ../../../examples/dma/dmafwd.c
@@ -159,7 +159,7 @@ multiple DMA channels per port:
:dedent: 1
For this example, the ports are set up with the number of Rx queues provided
-with -q option and 1 Tx queue using the ``rte_eth_rx_queue_setup()``
+with the -q option and 1 Tx queue using the ``rte_eth_rx_queue_setup()``
and ``rte_eth_tx_queue_setup()`` functions.
The Ethernet port is then started:
@@ -188,8 +188,8 @@ After that, each port application assigns resources needed.
:end-before: >8 End of assigning each port resources.
:dedent: 1
-Ring structures are assigned for exchanging packets between lcores for both SW
-and HW copy modes.
+Ring structures are assigned for exchanging packets between lcores for both software
+and hardware copy modes.
.. literalinclude:: ../../../examples/dma/dmafwd.c
:language: c
@@ -198,7 +198,7 @@ and HW copy modes.
:dedent: 0
-When using hardware copy each Rx queue of the port is assigned a DMA device
+When using hardware copy, each Rx queue of the port is assigned a DMA device
(``assign_dmadevs()``) using DMAdev library API functions:
.. literalinclude:: ../../../examples/dma/dmafwd.c
@@ -210,8 +210,8 @@ When using hardware copy each Rx queue of the port is assigned a DMA device
The initialization of hardware device is done by ``rte_dma_configure()`` and
``rte_dma_vchan_setup()`` functions using the ``rte_dma_conf`` and
-``rte_dma_vchan_conf`` structs. After configuration the device is started
-using ``rte_dma_start()`` function. Each of the above operations is done in
+``rte_dma_vchan_conf`` structs. After configuration, the device is started
+using the ``rte_dma_start()`` function. Each of the above operations is done in
``configure_dmadev_queue()``.
.. literalinclude:: ../../../examples/dma/dmafwd.c
@@ -225,16 +225,16 @@ statistics is allocated.
Finally, the ``main()`` function starts all packet handling lcores and starts
printing stats in a loop on the main lcore. The application can be
-interrupted and closed using ``Ctrl-C``. The main lcore waits for
-all worker lcores to finish, deallocates resources and exits.
+interrupted and closed using ``Ctrl+C``. The main lcore waits for
+all worker lcores to finish, deallocates resources, and exits.
-The processing lcores launching function are described below.
+The processing lcore launching functions are described below.
The Lcores Launching Functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-As described above, ``main()`` function invokes ``start_forwarding_cores()``
-function in order to start processing for each lcore:
+As described above, the ``main()`` function invokes ``start_forwarding_cores()``
+in order to start processing for each lcore:
.. literalinclude:: ../../../examples/dma/dmafwd.c
:language: c
@@ -243,8 +243,8 @@ function in order to start processing for each lcore:
:dedent: 0
The function launches Rx/Tx processing functions on configured lcores
-using ``rte_eal_remote_launch()``. The configured ports, their number
-and number of assigned lcores are stored in user-defined
+using ``rte_eal_remote_launch()``. The configured ports, their number,
+and number of assigned lcores are stored in the user-defined
``rxtx_transmission_config`` struct:
.. literalinclude:: ../../../examples/dma/dmafwd.c
@@ -253,7 +253,7 @@ and number of assigned lcores are stored in user-defined
:end-before: >8 End of configuration of ports and number of assigned lcores.
:dedent: 0
-The structure is initialized in 'main()' function with the values
+The structure is initialized in the ``main()`` function with the values
corresponding to ports and lcores configuration provided by the user.
The Lcores Processing Functions
@@ -261,9 +261,9 @@ The Lcores Processing Functions
For receiving packets on each port, the ``dma_rx_port()`` function is used.
The function receives packets on each configured Rx queue. Depending on the
-mode the user chose, it will enqueue packets to DMA channels and
-then invoke copy process (hardware copy), or perform software copy of each
-packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring:
+mode the user chose, it either enqueues packets to DMA channels and
+then invokes the copy process (hardware copy), or performs software copy of each
+packet using the ``pktmbuf_sw_copy()`` function and enqueues them to an rte_ring:
.. literalinclude:: ../../../examples/dma/dmafwd.c
:language: c
@@ -271,13 +271,13 @@ packet using ``pktmbuf_sw_copy()`` function and enqueue them to an rte_ring:
:end-before: >8 End of receive packets on one port and enqueue to dmadev or rte_ring.
:dedent: 0
-The packets are received in burst mode using ``rte_eth_rx_burst()``
-function. When using hardware copy mode the packets are enqueued in the
-copying device's buffer using ``dma_enqueue_packets()`` which calls
+The packets are received in burst mode using the ``rte_eth_rx_burst()``
+function. When using hardware copy mode, the packets are enqueued in the
+copying device's buffer using ``dma_enqueue_packets()``, which calls
``rte_dma_copy()``. When all received packets are in the
buffer, the copy operations are started by calling ``rte_dma_submit()``.
-Function ``rte_dma_copy()`` operates on physical address of
-the packet. Structure ``rte_mbuf`` contains only physical address to
+The ``rte_dma_copy()`` function operates on the physical address of
+the packet. The ``rte_mbuf`` structure contains only the physical address to the
start of the data buffer (``buf_iova``). Thus, the ``rte_pktmbuf_iova()`` API is
used to get the address of the start of the data within the mbuf.
@@ -289,12 +289,12 @@ used to get the address of the start of the data within the mbuf.
Once the copies have been completed (this includes gathering the completions in
-HW copy mode), the copied packets are enqueued to the ``rx_to_tx_ring``, which
+hardware copy mode), the copied packets are enqueued to the ``rx_to_tx_ring``, which
is used to pass the packets to the Tx function.
-All completed copies are processed by ``dma_tx_port()`` function. This function
+All completed copies are processed by the ``dma_tx_port()`` function. This function
dequeues copied packets from the ``rx_to_tx_ring``. Then, each packet MAC address is changed
-if it was enabled. After that, copies are sent in burst mode using ``rte_eth_tx_burst()``.
+if enabled. After that, copies are sent in burst mode using ``rte_eth_tx_burst()``.
.. literalinclude:: ../../../examples/dma/dmafwd.c
@@ -304,9 +304,9 @@ if it was enabled. After that, copies are sent in burst mode using ``rte_eth_tx_
:dedent: 0
The Packet Copying Functions
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In order to perform SW packet copy, there are user-defined functions to the first copy
+In order to perform software packet copy, there are user-defined functions to first copy
the packet metadata (``pktmbuf_metadata_copy()``) and then the packet data
(``pktmbuf_sw_copy()``):
@@ -316,8 +316,8 @@ the packet metadata (``pktmbuf_metadata_copy()``) and then the packet data
:end-before: >8 End of perform packet copy there is a user-defined function.
:dedent: 0
-The metadata in this example is copied from ``rx_descriptor_fields1`` marker of
-``rte_mbuf`` struct up to ``buf_len`` member.
+The metadata in this example is copied from the ``rx_descriptor_fields1`` marker of
+the ``rte_mbuf`` struct up to the ``buf_len`` member.
In order to understand why software packet copying is done as shown
-above, please refer to the :doc:`../prog_guide/mbuf_lib`.
+above, refer to the :doc:`../prog_guide/mbuf_lib`.
--
2.51.0
* Re: [PATCH 16/29] doc/guides: improve DMA sample app guide
2026-01-14 22:21 ` [PATCH 16/29] doc/guides: improve DMA " Stephen Hemminger
@ 2026-01-19 0:47 ` fengchengwen
0 siblings, 0 replies; 33+ messages in thread
From: fengchengwen @ 2026-01-19 0:47 UTC (permalink / raw)
To: Stephen Hemminger, dev; +Cc: Kevin Laatz, Bruce Richardson
On 1/15/2026 6:21 AM, Stephen Hemminger wrote:
> Improve the DMA sample application documentation:
> - use Tx/Rx instead of TX/RX per DPDK style
> - use "software" and "hardware" instead of SW/HW abbreviations
> - add missing articles and fix punctuation
> - use consistent option formatting with leading dash
> - fix grammatical errors throughout
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
> doc/guides/sample_app_ug/dma.rst | 104 +++++++++++++++----------------
> 1 file changed, 52 insertions(+), 52 deletions(-)
>
...
> .. literalinclude:: ../../../examples/dma/dmafwd.c
> @@ -210,8 +210,8 @@ When using hardware copy each Rx queue of the port is assigned a DMA device
>
> The initialization of hardware device is done by ``rte_dma_configure()`` and
> ``rte_dma_vchan_setup()`` functions using the ``rte_dma_conf`` and
> -``rte_dma_vchan_conf`` structs. After configuration the device is started
> -using ``rte_dma_start()`` function. Each of the above operations is done in
> +``rte_dma_vchan_conf`` structs. After configuration, the device is started
The rte_dma_vchan_conf is used with rte_dma_vchan_setup()
So maybe we could only describe the functions, e.g.
``rte_dma_vchan_setup()`` functions. After configuration, the device is started
with this fixed, please:
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
> +using the ``rte_dma_start()`` function. Each of the above operations is done in
> ``configure_dmadev_queue()``.
>
...
* [PATCH 17/29] doc/guides: improve FIPS validation sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (15 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 16/29] doc/guides: improve DMA " Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-19 5:59 ` [EXTERNAL] " Gowrishankar Muthukrishnan
2026-01-14 22:21 ` [PATCH 18/29] doc/guides: improve Hello World " Stephen Hemminger
` (11 subsequent siblings)
28 siblings, 1 reply; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Gowrishankar Muthukrishnan
Improve the FIPS validation sample application documentation:
- restructure CAVP and ACVP as subsections under Limitations
- fix indentation of code blocks and bullet lists
- improve grammar and clarify explanations
- use consistent formatting for options
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/fips_validation.rst | 117 ++++++++++---------
1 file changed, 60 insertions(+), 57 deletions(-)
diff --git a/doc/guides/sample_app_ug/fips_validation.rst b/doc/guides/sample_app_ug/fips_validation.rst
index 613c5afd19..4c96452b65 100644
--- a/doc/guides/sample_app_ug/fips_validation.rst
+++ b/doc/guides/sample_app_ug/fips_validation.rst
@@ -17,32 +17,35 @@ Automated Crypto Validation Protocol (ACVP) test vectors.
For an algorithm implementation to be listed on a cryptographic module
validation certificate as an Approved security function, the algorithm
-implementation must meet all the requirements of FIPS 140-2 (in case of CAVP)
-and FIPS 140-3 (in case of ACVP) and must successfully complete the
+implementation must meet all the requirements of FIPS 140-2 (in the case of CAVP)
+and FIPS 140-3 (in the case of ACVP) and must successfully complete the
cryptographic algorithm validation process.
Limitations
-----------
+The following sections describe limitations for CAVP and ACVP.
+
CAVP
-----
+~~~~
* The version of request file supported is ``CAVS 21.0``.
-* If the header comment in a ``.req`` file does not contain a Algo tag
- i.e ``AES,TDES,GCM`` you need to manually add it into the header comment for
- example::
+* If the header comment in a ``.req`` file does not contain an Algo tag
+ (i.e., ``AES,TDES,GCM``), you need to manually add it into the header comment.
+ For example::
# VARIABLE KEY - KAT for CBC / # TDES VARIABLE KEY - KAT for CBC
* The application does not supply the test vectors. The user is expected to
- obtain the test vector files from `CAVP
- <https://csrc.nist.gov/projects/cryptographic-algorithm-validation-
- program/block-ciphers>`_ website. To obtain the ``.req`` files you need to
+ obtain the test vector files from the `CAVP
+ <https://csrc.nist.gov/projects/cryptographic-algorithm-validation-program/block-ciphers>`_
+ website. To obtain the ``.req`` files, you need to
email a person from the NIST website and pay for the ``.req`` files.
The ``.rsp`` files from the site can be used to validate and compare with
the ``.rsp`` files created by the FIPS application.
-* Supported test vectors
+* Supported test vectors:
+
* AES-CBC (128,192,256) - GFSbox, KeySbox, MCT, MMT
* AES-GCM (128,192,256) - EncryptExtIV, Decrypt
* AES-CCM (128) - VADT, VNT, VPT, VTT, DVPT
@@ -52,12 +55,14 @@ CAVP
VarText
ACVP
-----
+~~~~
* The application does not supply the test vectors. The user is expected to
- obtain the test vector files from `ACVP <https://pages.nist.gov/ACVP>`_
+ obtain the test vector files from the `ACVP <https://pages.nist.gov/ACVP>`_
website.
-* Supported test vectors
+
+* Supported test vectors:
+
* AES-CBC (128,192,256) - AFT, MCT
* AES-GCM (128,192,256) - AFT
* AES-CCM (128,192,256) - AFT
@@ -78,74 +83,72 @@ ACVP
Application Information
-----------------------
-If a ``.req`` is used as the input file after the application is finished
-running it will generate a response file or ``.rsp``. Differences between the
-two files are, the ``.req`` file has missing information for instance if doing
-encryption you will not have the cipher text and that will be generated in the
-response file. Also if doing decryption it will not have the plain text until it
-finished the work and in the response file it will be added onto the end of each
-operation.
-
-The application can be run with a ``.rsp`` file and what the outcome of that
-will be is it will add a extra line in the generated ``.rsp`` which should be
-the same as the ``.rsp`` used to run the application, this is useful for
-validating if the application has done the operation correctly.
+If a ``.req`` file is used as the input file, after the application finishes
+running it generates a response file (``.rsp``). The differences between
+the two files are as follows: the ``.req`` file has missing information (for instance,
+if performing encryption, you do not have the cipher text, and that is
+generated in the response file); and if performing decryption, it does not
+have plain text until the work has finished. In the response file, this information
+is added onto the end of each operation.
+The application can be run with a ``.rsp`` file as input. The outcome is that
+an extra line in the generated ``.rsp`` file is added. This should be the same
+as the ``.rsp`` used to run the application. This is useful for validating if
+the application has performed the operation correctly.
Compiling the Application
-------------------------
-* Compile Application
+To compile the sample application, see :doc:`compiling`.
- To compile the sample application see :doc:`compiling`.
+Run ``dos2unix`` on the request files:
-* Run ``dos2unix`` on the request files
-
- .. code-block:: console
+.. code-block:: console
- dos2unix AES/req/*
- dos2unix GCM/req/*
- dos2unix CCM/req/*
- dos2unix CMAC/req/*
- dos2unix HMAC/req/*
- dos2unix TDES/req/*
- dos2unix SHA/req/*
+ dos2unix AES/req/*
+ dos2unix GCM/req/*
+ dos2unix CCM/req/*
+ dos2unix CMAC/req/*
+ dos2unix HMAC/req/*
+ dos2unix TDES/req/*
+ dos2unix SHA/req/*
Running the Application
-----------------------
The application requires a number of command line options:
- .. code-block:: console
+.. code-block:: console
+
+ ./dpdk-fips_validation [EAL options]
+ -- --req-file FILE_PATH/FOLDER_PATH
+ --rsp-file FILE_PATH/FOLDER_PATH
+ [--cryptodev DEVICE_NAME] [--cryptodev-id ID] [--path-is-folder]
+ --mbuf-dataroom DATAROOM_SIZE
- ./dpdk-fips_validation [EAL options]
- -- --req-file FILE_PATH/FOLDER_PATH
- --rsp-file FILE_PATH/FOLDER_PATH
- [--cryptodev DEVICE_NAME] [--cryptodev-id ID] [--path-is-folder]
- --mbuf-dataroom DATAROOM_SIZE
+where:
-where,
- * req-file: The path of the request file or folder, separated by
+* req-file: The path of the request file or folder, separated by the
``path-is-folder`` option.
- * rsp-file: The path that the response file or folder is stored. separated by
+* rsp-file: The path that the response file or folder is stored, separated by the
``path-is-folder`` option.
- * cryptodev: The name of the target DPDK Crypto device to be validated.
+* cryptodev: The name of the target DPDK Crypto device to be validated.
- * cryptodev-id: The id of the target DPDK Crypto device to be validated.
+* cryptodev-id: The ID of the target DPDK Crypto device to be validated.
- * path-is-folder: If presented the application expects req-file and rsp-file
- are folder paths.
+* path-is-folder: If present, the application expects req-file and rsp-file
+ to be folder paths.
- * mbuf-dataroom: By default the application creates mbuf pool with maximum
- possible data room (65535 bytes). If the user wants to test scatter-gather
- list feature of the PMD he or she may set this value to reduce the dataroom
+* mbuf-dataroom: By default, the application creates an mbuf pool with maximum
+ possible data room (65535 bytes). If the user wants to test the scatter-gather
+ list feature of the PMD, this value can be set to reduce the dataroom
size so that the input data may be divided into multiple chained mbufs.
-To run the application in linux environment to test one AES FIPS test data
-file for crypto_aesni_mb PMD, issue the command:
+To run the application in a Linux environment to test one AES FIPS test data
+file for the crypto_aesni_mb PMD, issue the command:
.. code-block:: console
@@ -153,8 +156,8 @@ file for crypto_aesni_mb PMD, issue the command:
--req-file /PATH/TO/REQUEST/FILE.req --rsp-file ./PATH/TO/RESPONSE/FILE.rsp
--cryptodev crypto_aesni_mb
-To run the application in linux environment to test all AES-GCM FIPS test
-data files in one folder for crypto_aesni_gcm PMD, issue the command:
+To run the application in a Linux environment to test all AES-GCM FIPS test
+data files in one folder for the crypto_aesni_gcm PMD, issue the command:
.. code-block:: console
--
2.51.0
* RE: [EXTERNAL] [PATCH 17/29] doc/guides: improve FIPS validation sample app guide
2026-01-14 22:21 ` [PATCH 17/29] doc/guides: improve FIPS validation " Stephen Hemminger
@ 2026-01-19 5:59 ` Gowrishankar Muthukrishnan
0 siblings, 0 replies; 33+ messages in thread
From: Gowrishankar Muthukrishnan @ 2026-01-19 5:59 UTC (permalink / raw)
To: Stephen Hemminger, dev@dpdk.org
> Improve the FIPS validation sample application documentation:
> - restructure CAVP and ACVP as subsections under Limitations
> - fix indentation of code blocks and bullet lists
> - improve grammar and clarify explanations
> - use consistent formatting for options
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
* [PATCH 18/29] doc/guides: improve Hello World sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (16 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 17/29] doc/guides: improve FIPS validation " Stephen Hemminger
@ 2026-01-14 22:21 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 19/29] doc/guides: improve sample applications introduction Stephen Hemminger
` (10 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Bruce Richardson
Improve the Hello World sample application documentation:
- capitalize Linux consistently
- fix "helloworld" to "Hello World" in description
- add missing articles and commas
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/hello_world.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/doc/guides/sample_app_ug/hello_world.rst b/doc/guides/sample_app_ug/hello_world.rst
index 5bfd4b3fda..8f22bb2260 100644
--- a/doc/guides/sample_app_ug/hello_world.rst
+++ b/doc/guides/sample_app_ug/hello_world.rst
@@ -9,25 +9,25 @@ Overview
--------
The Hello World sample application is an example of the simplest DPDK application that can be written.
-The application simply prints an "helloworld" message on every enabled lcore.
+The application simply prints a "Hello World" message on every enabled lcore.
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``helloworld`` sub-directory.
Running the Application
-----------------------
-To run the example in a linux environment:
+To run the example in a Linux environment:
.. code-block:: console
$ ./<build_dir>/examples/dpdk-helloworld -l 0-3
-Refer to *DPDK Getting Started Guide* for general information on running applications
+Refer to the *DPDK Getting Started Guide* for general information on running applications
and the Environment Abstraction Layer (EAL) options.
Explanation
@@ -46,7 +46,7 @@ This is done in the main() function using the following code:
:start-after: Initialization of Environment Abstraction Layer (EAL). 8<
:end-before: >8 End of initialization of Environment Abstraction Layer
-This call finishes the initialization process that was started before main() is called (in case of a Linux environment).
+This call finishes the initialization process that was started before main() is called (in the case of a Linux environment).
The argc and argv arguments are provided to the rte_eal_init() function.
The value returned is the number of parsed arguments.
--
2.51.0
* [PATCH 19/29] doc/guides: improve sample applications introduction
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (17 preceding siblings ...)
2026-01-14 22:21 ` [PATCH 18/29] doc/guides: improve Hello World " Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 20/29] doc/guides: improve IP pipeline sample app guide Stephen Hemminger
` (9 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Improve the sample applications introduction documentation:
- use consistent title capitalization in cross-references
- replace contractions with full forms
- use I/O instead of IO
- fix punctuation and grammar
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/intro.rst | 54 +++++++++++++++---------------
1 file changed, 27 insertions(+), 27 deletions(-)
diff --git a/doc/guides/sample_app_ug/intro.rst b/doc/guides/sample_app_ug/intro.rst
index a19c0b8c13..25a15966db 100644
--- a/doc/guides/sample_app_ug/intro.rst
+++ b/doc/guides/sample_app_ug/intro.rst
@@ -4,8 +4,8 @@
Introduction to the DPDK Sample Applications
============================================
-The DPDK Sample Applications are small standalone applications that
-demonstrate various features of DPDK. They can be considered as a cookbook of
+The DPDK sample applications are small standalone applications that
+demonstrate various features of DPDK. They can be considered a cookbook of
DPDK features. Users interested in getting started with DPDK can take the
applications, try out the features, and then extend them to fit their needs.
@@ -14,8 +14,8 @@ Running Sample Applications
---------------------------
Some sample applications may have their own command-line parameters described in
-their respective guides. However, they all also share the same EAL parameters.
-Please refer to :doc:`EAL parameters (Linux) <../linux_gsg/linux_eal_parameters>`
+their respective guides. However, they also all share the same EAL parameters.
+Refer to :doc:`EAL parameters (Linux) <../linux_gsg/linux_eal_parameters>`
or :doc:`EAL parameters (FreeBSD) <../freebsd_gsg/freebsd_eal_parameters>` for
a list of available EAL command-line options.
@@ -32,8 +32,8 @@ examples are highlighted below.
* :doc:`Hello World<hello_world>`: As with most introductions to a
programming framework, a good place to start is with the Hello World
application. The Hello World example sets up the DPDK Environment Abstraction
- Layer (EAL), and prints a simple "Hello World" message to each of the DPDK
- enabled cores. This application doesn't do any packet forwarding, but it is a
+ Layer (EAL) and prints a simple "Hello World" message to each of the DPDK
+ enabled cores. This application does not do any packet forwarding, but it is a
good way to test if the DPDK environment is compiled and set up properly.
* :doc:`Basic Forwarding/Skeleton Application<skeleton>`: The Basic
@@ -41,25 +41,25 @@ examples are highlighted below.
basic packet forwarding with DPDK. This allows you to test if your network
interfaces are working with DPDK.
-* :doc:`Network Layer 2 forwarding<l2_forward_real_virtual>`: The Network Layer 2
- forwarding, or ``l2fwd`` application does forwarding based on Ethernet MAC
+* :doc:`Network Layer 2 Forwarding<l2_forward_real_virtual>`: The Network Layer 2
+ forwarding, or ``l2fwd`` application, does forwarding based on Ethernet MAC
addresses like a simple switch.
-* :doc:`Network Layer 2 forwarding<l2_forward_event>`: The Network Layer 2
- forwarding, or ``l2fwd-event`` application does forwarding based on Ethernet MAC
- addresses like a simple switch. It demonstrates usage of poll and event mode
- IO mechanism under a single application.
+* :doc:`Network Layer 2 Forwarding with Event Mode<l2_forward_event>`: The Network
+ Layer 2 forwarding, or ``l2fwd-event`` application, does forwarding based on
+ Ethernet MAC addresses like a simple switch. It demonstrates usage of poll and
+ event mode I/O mechanisms under a single application.
-* :doc:`Network Layer 3 forwarding<l3_forward>`: The Network Layer3
- forwarding, or ``l3fwd`` application does forwarding based on Internet
- Protocol, IPv4 or IPv6 like a simple router.
+* :doc:`Network Layer 3 Forwarding<l3_forward>`: The Network Layer 3
+ forwarding, or ``l3fwd`` application, does forwarding based on Internet
+ Protocol (IPv4 or IPv6) like a simple router.
-* :doc:`Network Layer 3 forwarding Graph<l3_forward_graph>`: The Network Layer3
- forwarding Graph, or ``l3fwd_graph`` application does forwarding based on IPv4
- like a simple router with DPDK Graph framework.
+* :doc:`Network Layer 3 Forwarding Graph<l3_forward_graph>`: The Network Layer 3
+ forwarding Graph, or ``l3fwd_graph`` application, does forwarding based on IPv4
+ like a simple router using the DPDK Graph framework.
-* :doc:`Hardware packet copying<dma>`: The Hardware packet copying,
- or ``dmafwd`` application demonstrates how to use DMAdev library for
+* :doc:`Hardware Packet Copying<dma>`: The hardware packet copying,
+ or ``dmafwd`` application, demonstrates how to use the DMAdev library for
copying packets between two threads.
* :doc:`Packet Distributor<dist_app>`: The Packet Distributor
@@ -70,27 +70,27 @@ examples are highlighted below.
multi-process application shows how two DPDK processes can work together using
queues and memory pools to share information.
-* :doc:`RX/TX callbacks Application<rxtx_callbacks>`: The RX/TX
+* :doc:`Rx/Tx Callbacks Application<rxtx_callbacks>`: The Rx/Tx
callbacks sample application is a packet forwarding application that
demonstrates the use of user-defined callbacks on received and transmitted
- packets. The application calculates the latency of a packet between RX
- (packet arrival) and TX (packet transmission) by adding callbacks to the RX
- and TX packet processing functions.
+ packets. The application calculates the latency of a packet between Rx
+ (packet arrival) and Tx (packet transmission) by adding callbacks to the Rx
+ and Tx packet processing functions.
* :doc:`IPsec Security Gateway<ipsec_secgw>`: The IPsec Security
Gateway application is a minimal example of something closer to a real world
example. This is also a good example of an application using the DPDK
Cryptodev framework.
-* :doc:`Precision Time Protocol (PTP) client<ptpclient>`: The PTP
+* :doc:`Precision Time Protocol (PTP) Client<ptpclient>`: The PTP
client is another minimal implementation of a real world application.
In this case, the application is a PTP client that communicates with a PTP
time transmitter to synchronize time on a Network Interface Card (NIC) using the
- IEEE1588 protocol.
+ IEEE 1588 protocol.
* :doc:`Quality of Service (QoS) Scheduler<qos_scheduler>`: The QoS
Scheduler application demonstrates the use of DPDK to provide QoS scheduling.
There are many more examples shown in the following chapters. Each of the
-documented sample applications show how to compile, configure and run the
+documented sample applications shows how to compile, configure, and run the
application, as well as explaining the main functionality of the code.
--
2.51.0
* [PATCH 20/29] doc/guides: improve IP pipeline sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (18 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 19/29] doc/guides: improve sample applications introduction Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 21/29] doc/guides: improve IP reassembly " Stephen Hemminger
` (8 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Cristian Dumitrescu
Improve the IP pipeline sample application documentation:
- rename sections for consistency with other guides
- fix port enable/disable command descriptions which
incorrectly referred to input ports instead of output ports
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/ip_pipeline.rst | 18 ++++++++++--------
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/doc/guides/sample_app_ug/ip_pipeline.rst b/doc/guides/sample_app_ug/ip_pipeline.rst
index f9e8caa0a8..153a6e3900 100644
--- a/doc/guides/sample_app_ug/ip_pipeline.rst
+++ b/doc/guides/sample_app_ug/ip_pipeline.rst
@@ -4,8 +4,8 @@
Internet Protocol (IP) Pipeline Application
===========================================
-Application overview
---------------------
+Overview
+--------
The *Internet Protocol (IP) Pipeline* application is intended to be a vehicle for rapid development of packet processing
applications on multi-core CPUs.
@@ -107,8 +107,10 @@ Once application and telnet client start running, messages can be sent from clie
At any stage, telnet client can be terminated using the quit command.
-Application stages
-------------------
+Explanation
+-----------
+
+The following sections explain the stages of the application.
Initialization
~~~~~~~~~~~~~~
@@ -134,7 +136,7 @@ executes two tasks in time-sharing mode:
to/from given table of a specific pipeline owned by the current data plane thread, read statistics, etc.
Examples
---------
+~~~~~~~~
.. _table_examples:
@@ -396,11 +398,11 @@ or table ::
pipeline <pipeline_name> port out <port_id> stats read [clear]
pipeline <pipeline_name> table <table_id> stats read [clear]
-Enable given input port for specific pipeline instance ::
+Enable given output port for specific pipeline instance ::
- pipeline <pipeline_name> port out <port_id> disable
+ pipeline <pipeline_name> port out <port_id> enable
-Disable given input port for specific pipeline instance ::
+Disable given output port for specific pipeline instance ::
pipeline <pipeline_name> port out <port_id> disable
--
2.51.0
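To make the corrected enable/disable wording concrete: with the fixed text, the two commands act on an *output* port of a pipeline instance. A hypothetical session (the pipeline name ``PIPELINE0`` and port id ``0`` are illustrative values, not taken from the patch) would look like:

```console
pipeline PIPELINE0 port out 0 enable
pipeline PIPELINE0 port out 0 disable
```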
* [PATCH 21/29] doc/guides: improve IP reassembly sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (19 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 20/29] doc/guides: improve IP pipeline sample app guide Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 22/29] doc/guides: improve IPsec security gateway guide Stephen Hemminger
` (7 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Konstantin Ananyev
Improve the IP reassembly sample application documentation:
- fix hyphenation of "run-time"
- replace "wouldn't" with "do not"
- fix "mbuf's" to "mbufs"
- add missing articles and spacing
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/ip_reassembly.rst | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/doc/guides/sample_app_ug/ip_reassembly.rst b/doc/guides/sample_app_ug/ip_reassembly.rst
index 04b581a489..092084ba8c 100644
--- a/doc/guides/sample_app_ug/ip_reassembly.rst
+++ b/doc/guides/sample_app_ug/ip_reassembly.rst
@@ -12,7 +12,7 @@ Overview
The application demonstrates the use of the DPDK libraries to implement packet forwarding
with reassembly for IPv4 and IPv6 fragmented packets.
-The initialization and run- time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
+The initialization and run-time paths are very similar to those of the :doc:`l2_forward_real_virtual`.
The main difference from the L2 Forwarding sample application is that
it reassembles fragmented IPv4 and IPv6 packets before forwarding.
The maximum allowed size of reassembled packet is 9.5 KB.
@@ -53,11 +53,11 @@ where:
* --maxflows=FLOWS: determines maximum number of active fragmented flows (1-65535). Default value: 4096.
* --flowttl=TTL[(s|ms)]: determines maximum Time To Live for fragmented packet.
- If all fragments of the packet wouldn't appear within given time-out,
+ If all fragments of the packet do not appear within the given time-out,
then they are considered as invalid and will be dropped.
Valid range is 1ms - 3600s. Default value: 1s.
-To run the example in a Linux environment with 2 lcores (2,4) over 2 ports(0,2)
+To run the example in a Linux environment with 2 lcores (2,4) over 2 ports (0,2)
with 1 Rx queue per lcore:
.. code-block:: console
@@ -80,7 +80,7 @@ with 1 Rx queue per lcore:
IP_RSMBL: entering main loop on lcore 2
IP_RSMBL: -- lcoreid=2 portid=0
-To run the example in a Linux environment with 1 lcore (4) over 2 ports(0,2)
+To run the example in a Linux environment with 1 lcore (4) over 2 ports (0,2)
with 2 Rx queues per lcore:
.. code-block:: console
@@ -110,7 +110,7 @@ The default l3fwd_ipv6_route_array table is:
:end-before: >8 End of default l3fwd_ipv6_route_array table.
For example, for the fragmented input IPv4 packet with destination address: 100.10.1.1,
-a reassembled IPv4 packet be sent out from port #0 to the destination address 100.10.1.1
+a reassembled IPv4 packet will be sent out from port #0 to the destination address 100.10.1.1
once all the fragments are collected.
Explanation
@@ -142,7 +142,7 @@ consisting of up to ``RTE_LIBRTE_IP_FRAG_MAX_FRAG`` fragments.
Mempools Initialization
~~~~~~~~~~~~~~~~~~~~~~~
-The reassembly application demands a lot of mbuf's to be allocated.
+The reassembly application demands a lot of mbufs to be allocated.
At any given time, up to (2 \* max_flow_num \* RTE_LIBRTE_IP_FRAG_MAX_FRAG \* <maximum number of mbufs per packet>)
can be stored inside the fragment table waiting for remaining fragments.
To keep mempool size under reasonable limits
--
2.51.0
* [PATCH 22/29] doc/guides: improve IPsec security gateway guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (20 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 21/29] doc/guides: improve IP reassembly " Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 23/29] doc/guides: improve link status interrupt sample app guide Stephen Hemminger
` (6 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Radu Nicolau, Akhil Goyal
Improve the IPsec security gateway sample application documentation:
- restructure sections as subsections for consistency
- fix aead_algo syntax documentation (was incorrectly cipher_algo)
- fix "Example SP rules" to "Example Routing rules"
- replace "bigger then" with "bigger than"
- fix "depeneds" typo
- improve grammar and clarity throughout
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/ipsec_secgw.rst | 125 +++++++++++------------
1 file changed, 60 insertions(+), 65 deletions(-)
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 7319505fe9..ff08343061 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -11,35 +11,30 @@ application using DPDK cryptodev framework.
Overview
--------
-The application demonstrates the implementation of a Security Gateway
-(not IPsec compliant, see the Constraints section below) using DPDK based on RFC4301,
-RFC4303, RFC3602 and RFC2404.
+This application demonstrates the implementation of a Security Gateway
+(not fully IPsec-compliant; see the Constraints section) using DPDK, based
+on RFC4301, RFC4303, RFC3602, and RFC2404.
-Internet Key Exchange (IKE) is not implemented, so only manual setting of
-Security Policies and Security Associations is supported.
+Currently, DPDK does not support Internet Key Exchange (IKE), so Security Policies
+(SP) and Security Associations (SA) must be configured manually. SPs are implemented
+as ACL rules, SAs are stored in a table, and routing is handled using LPM.
-The Security Policies (SP) are implemented as ACL rules, the Security
-Associations (SA) are stored in a table and the routing is implemented
-using LPM.
+The application classifies ports as *Protected* or *Unprotected*, with traffic
+received on Unprotected ports considered Inbound and traffic on Protected ports
+considered Outbound.
-The application classifies the ports as *Protected* and *Unprotected*.
-Thus, traffic received on an Unprotected or Protected port is consider
-Inbound or Outbound respectively.
+It supports full IPsec protocol offload to hardware (via crypto accelerators or
+Ethernet devices) as well as inline IPsec processing by supported Ethernet
+devices during transmission. These modes can be configured during SA creation.
-The application also supports complete IPsec protocol offload to hardware
-(Look aside crypto accelerator or using ethernet device). It also support
-inline ipsec processing by the supported ethernet device during transmission.
-These modes can be selected during the SA creation configuration.
+For full protocol offload, the hardware processes ESP and outer IP headers,
+so the application does not need to add or remove them during Outbound or
+Inbound processing.
-In case of complete protocol offload, the processing of headers(ESP and outer
-IP header) is done by the hardware and the application does not need to
-add/remove them during outbound/inbound processing.
-
-For inline offloaded outbound traffic, the application will not do the LPM
-lookup for routing, as the port on which the packet has to be forwarded will be
-part of the SA. Security parameters will be configured on that port only, and
-sending the packet on other ports could result in unencrypted packets being
-sent out.
+In the inline offload mode for Outbound traffic, the application skips the
+LPM lookup for routing, as the SA specifies the port for forwarding. Security
+parameters are configured only on the specified port, and sending packets
+through other ports may result in unencrypted packets being transmitted.
The Path for IPsec Inbound traffic is:
@@ -64,27 +59,27 @@ The Path for the IPsec Outbound traffic is:
The application supports two modes of operation: poll mode and event mode.
-* In the poll mode a core receives packets from statically configured list
+* In the poll mode, a core receives packets from statically configured list
of eth ports and eth ports' queues.
-* In the event mode a core receives packets as events. After packet processing
- is done core submits them back as events to an event device. This enables
- multicore scaling and HW assisted scheduling by making use of the event device
- capabilities. The event mode configuration is predefined. All packets reaching
- given eth port will arrive at the same event queue. All event queues are mapped
- to all event ports. This allows all cores to receive traffic from all ports.
- Since the underlying event device might have varying capabilities, the worker
- threads can be drafted differently to maximize performance. For example, if an
- event device - eth device pair has Tx internal port, then application can call
- rte_event_eth_tx_adapter_enqueue() instead of regular rte_event_enqueue_burst().
- So a thread which assumes that the device pair has internal port will not be the
- right solution for another pair. The infrastructure added for the event mode aims
- to help application to have multiple worker threads by maximizing performance from
- every type of event device without affecting existing paths/use cases. The worker
- to be used will be determined by the operating conditions and the underlying device
- capabilities.
+* In event mode, a core processes packets as events. After processing, the
+core submits the packets back to an event device, enabling multicore scaling
+and hardware-assisted scheduling by leveraging the capabilities of the event
+device. The event mode configuration is predefined, where all packets arriving
+at a specific Ethernet port are directed to the same event queue. All event
+queues are mapped to all event ports, allowing any core to receive traffic
+from any port. Since event devices can have varying capabilities, worker threads are designed
+differently to optimize performance. For instance, if an event device and Ethernet
+device pair includes a Tx internal port, the application can use ``rte_event_eth_tx_adapter_enqueue``
+instead of the standard ``rte_event_enqueue_burst``. A thread optimized for a device
+pair with an internal port may not work effectively with another pair. The infrastructure
+for event mode is designed to support multiple worker threads
+while maximizing the performance of each type of event device without impacting
+existing paths or use cases. The worker thread selection depends on the operating
+conditions and the capabilities of the underlying devices.
+
**Currently the application provides non-burst, internal port worker threads.**
- It also provides infrastructure for non-internal port
+ It also provides infrastructure for non-internal port,
however does not define any worker threads.
Event mode also supports event vectorization. The event devices, ethernet device
@@ -97,9 +92,9 @@ The application supports two modes of operation: poll mode and event mode.
option.
For the event devices, crypto device pairs which support the capability
``RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR`` vector aggregation
- could also be enable using event-vector option.
+ could also be enabled using event-vector option.
-Additionally the event mode introduces two submodes of processing packets:
+Additionally, the event mode introduces two submodes of processing packets:
* Driver submode: This submode has bare minimum changes in the application to support
IPsec. There are no lookups, no routing done in the application. And for inline
@@ -115,19 +110,19 @@ Additionally the event mode introduces two submodes of processing packets:
benchmark numbers.
Constraints
------------
+~~~~~~~~~~~
* No IPv6 options headers.
* No AH mode.
* Supported algorithms: AES-CBC, AES-CTR, AES-GCM, 3DES-CBC, DES-CBC,
HMAC-SHA1, HMAC-SHA256, AES-GMAC, AES_CTR, AES_XCBC_MAC, AES_CCM,
CHACHA20_POLY1305 and NULL.
-* Each SA must be handle by a unique lcore (*1 RX queue per port*).
+* Each SA must be handled by a unique lcore (*1 RX queue per port*).
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``ipsec-secgw`` sub-directory.
@@ -170,7 +165,7 @@ Where:
* ``-j FRAMESIZE``: *optional*. data buffer size (in bytes),
in other words maximum data size for one segment.
- Packets with length bigger then FRAMESIZE still can be received,
+ Packets with length bigger than FRAMESIZE can still be received,
but will be segmented.
Default value: RTE_MBUF_DEFAULT_BUF_SIZE (2176)
Minimum value: RTE_MBUF_DEFAULT_BUF_SIZE (2176)
@@ -244,8 +239,8 @@ Where:
Default value: 0.
* ``--mtu MTU``: MTU value (in bytes) on all attached ethernet ports.
- Outgoing packets with length bigger then MTU will be fragmented.
- Incoming packets with length bigger then MTU will be discarded.
+ Outgoing packets with length bigger than MTU will be fragmented.
+ Incoming packets with length bigger than MTU will be discarded.
Default value: 1500.
* ``--frag-ttl FRAG_TTL_NS``: fragment lifetime (in nanoseconds).
@@ -260,7 +255,7 @@ Where:
via Rx queue setup.
* ``--vector-pool-sz``: Number of buffers in vector pool.
- By default, vector pool size depeneds on packet pool size
+ By default, vector pool size depends on packet pool size
and size of each vector.
* ``--desc-nb NUMBER_OF_DESC``: Number of descriptors per queue pair.
@@ -377,11 +372,11 @@ For example, something like the following command line:
Configurations
---------------
+~~~~~~~~~~~~~~
The following sections provide the syntax of configurations to initialize
your SP, SA, Routing, Flow and Neighbour tables.
-Configurations shall be specified in the configuration file to be passed to
+Configurations must be specified in the configuration file to be passed to
the application. The file is then parsed by the application. The successful
parsing will result in the appropriate rules being applied to the tables
accordingly.
@@ -390,11 +385,11 @@ accordingly.
Configuration File Syntax
~~~~~~~~~~~~~~~~~~~~~~~~~
-As mention in the overview, the Security Policies are ACL rules.
-The application parsers the rules specified in the configuration file and
+As mentioned in the overview, the Security Policies are ACL rules.
+The application parses the rules specified in the configuration file and
passes them to the ACL table, and replicates them per socket in use.
-Following are the configuration file syntax.
+The following sections contain the configuration file syntax.
General rule syntax
^^^^^^^^^^^^^^^^^^^
@@ -425,7 +420,7 @@ The SP rule syntax is shown as follows:
<proto> <sport> <dport>
-where each options means:
+where each option means:
``<ip_ver>``
@@ -540,7 +535,7 @@ The SA rule syntax is shown as follows:
<mode> <src_ip> <dst_ip> <action_type> <port_id> <fallback>
<flow-direction> <port_id> <queue_id> <udp-encap> <reassembly_en>
-where each options means:
+where each option means:
``<dir>``
@@ -633,7 +628,7 @@ where each options means:
* *aes-192-gcm*: AES-GCM 192-bit algorithm
* *aes-256-gcm*: AES-GCM 256-bit algorithm
- * Syntax: *cipher_algo <your algorithm>*
+ * Syntax: *aead_algo <your algorithm>*
``<aead_key>``
@@ -847,7 +842,7 @@ The Routing rule syntax is shown as follows:
rt <ip_ver> <src_ip> <dst_ip> <port>
-where each options means:
+where each option means:
``<ip_ver>``
@@ -890,7 +885,7 @@ where each options means:
* Syntax: *port X*
-Example SP rules:
+Example Routing rules:
.. code-block:: console
@@ -912,7 +907,7 @@ The flow rule syntax is shown as follows:
flow <mark> <eth> <ip_ver> <src_ip> <dst_ip> <port> <queue> \
<count> <security> <set_mark>
-where each options means:
+where each option means:
``<mark>``
@@ -1039,7 +1034,7 @@ The Neighbour rule syntax is shown as follows:
neigh <port> <dst_mac>
-where each options means:
+where each option means:
``<port>``
@@ -1143,7 +1138,7 @@ It then tries to perform some data transfer using the scheme described above.
Usage
~~~~~
-In the ipsec-secgw/test directory run
+In the ipsec-secgw/test directory run:
/bin/bash run_test.sh <options> <ipsec_mode>
@@ -1176,4 +1171,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
+list of available modes, please refer to run_test.sh.
--
2.51.0
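To illustrate the ``aead_algo`` syntax this patch corrects (the guide previously showed ``cipher_algo`` in the AEAD section): a hypothetical outbound SA rule using AES-128-GCM might look like the following. The SPI, key bytes, and tunnel addresses are made-up illustration values, not taken from the patch:

```console
sa out 5 aead_algo aes-128-gcm \
aead_key de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef:de:ad:be:ef \
mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5
```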
* [PATCH 23/29] doc/guides: improve link status interrupt sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (21 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 22/29] doc/guides: improve IPsec security gateway guide Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 24/29] doc/guides: improve multi-process " Stephen Hemminger
` (5 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Improve the link status interrupt sample application documentation:
- fix spacing in command line example
- capitalize Linux consistently
- fix "that related" to "that relate"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/link_status_intr.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/sample_app_ug/link_status_intr.rst b/doc/guides/sample_app_ug/link_status_intr.rst
index fd4e32560c..133c8d0b2a 100644
--- a/doc/guides/sample_app_ug/link_status_intr.rst
+++ b/doc/guides/sample_app_ug/link_status_intr.rst
@@ -40,7 +40,7 @@ The application requires a number of command line options:
.. code-block:: console
- ./<build_dir>/examples/dpdk-link_status_interrupt [EAL options] -- -p PORTMASK [-q NQ][-T PERIOD]
+ ./<build_dir>/examples/dpdk-link_status_interrupt [EAL options] -- -p PORTMASK [-q NQ] [-T PERIOD]
where,
@@ -50,7 +50,7 @@ where,
* -T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default)
-To run the application in a linux environment with 4 lcores, 16 ports and 8 RX queues per lcore,
+To run the application in a Linux environment with 4 lcores, 16 ports and 8 RX queues per lcore,
issue the command:
.. code-block:: console
@@ -84,7 +84,7 @@ Driver Initialization
~~~~~~~~~~~~~~~~~~~~~
The main part of the code in the main() function relates to the initialization of the driver.
-To fully understand this code, it is recommended to study the chapters that related to the Poll Mode Driver in the
+To fully understand this code, it is recommended to study the chapters that relate to the Poll Mode Driver in the
*DPDK Programmer's Guide and the DPDK API Reference*.
.. literalinclude:: ../../../examples/link_status_interrupt/main.c
--
2.51.0
* [PATCH 24/29] doc/guides: improve multi-process sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (22 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 23/29] doc/guides: improve link status interrupt sample app guide Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 25/29] doc/guides: improve pipeline " Stephen Hemminger
` (4 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Anatoly Burakov
Improve the multi-process sample application documentation:
- fix section heading levels for consistency
- fix hyphenation of "packet-processing"
- improve grammar and sentence structure
- fix "gets the port information and exported" grammar
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/multi_process.rst | 27 +++++++++++-----------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/doc/guides/sample_app_ug/multi_process.rst b/doc/guides/sample_app_ug/multi_process.rst
index 1bd858bfb5..19e49669ad 100644
--- a/doc/guides/sample_app_ug/multi_process.rst
+++ b/doc/guides/sample_app_ug/multi_process.rst
@@ -42,9 +42,10 @@ passing at least two cores in the corelist:
./<build_dir>/examples/dpdk-simple_mp -l 0-1 --proc-type=primary
-For the first DPDK process run, the proc-type flag can be omitted or set to auto,
-since all DPDK processes will default to being a primary instance,
-meaning they have control over the hugepage shared memory regions.
+For the first DPDK process run, the proc-type flag can be omitted or set to auto
+since all DPDK processes will default to being a primary instance
+(meaning they have control over the hugepage shared memory regions).
+
The process should start successfully and display a command prompt as follows:
.. code-block:: console
@@ -99,7 +100,7 @@ At any stage, either process can be terminated using the quit command.
The secondary process can be stopped and restarted without affecting the primary process.
How the Application Works
-^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~
This application uses two queues and a single memory pool created in the primary process.
The secondary process then uses lookup functions to attach to these objects.
@@ -124,7 +125,7 @@ Symmetric Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The symmetric multi process example demonstrates how a set of processes can run in parallel,
-with each process performing the same set of packet- processing operations.
+with each process performing the same set of packet-processing operations.
The following diagram shows the data-flow through the application, using two processes.
.. _figure_sym_multi_proc_app:
@@ -173,7 +174,7 @@ Example:
In the above example, ``auto`` is used so the first instance becomes the primary process.
How the Application Works
-^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~
The primary instance creates the memory pool and initializes the network ports.
@@ -183,7 +184,7 @@ The primary instance creates the memory pool and initializes the network ports.
:end-before: >8 End of primary instance initialization.
:dedent: 1
-The secondary instance gets the port information and exported by the primary process.
+The secondary instance gets the port information exported by the primary process.
The memory pool is accessed by doing a lookup for it by name:
.. code-block:: c
@@ -198,7 +199,7 @@ Each process reads from each port using the queue corresponding to its proc-id p
and writes to the corresponding transmit queue on the output port.
Client-Server Multi-process Example
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+-----------------------------------
The example multi-process application demonstrates a client-server type multi-process design.
A single server process receives a set of packets from the ports
@@ -216,7 +217,7 @@ The following diagram shows the data-flow through the application, using two cli
Running the Application
-^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~
The server process must be run as the primary process to set up all memory structures.
In addition to the EAL parameters, the application-specific parameters are:
@@ -229,9 +230,9 @@ In addition to the EAL parameters, the application-specific parameters are:
.. note::
- In the server process, has a single thread using the lowest numbered lcore
- in the corelist, performs all packet I/O.
- If corelist parameter specifies with more than a single lcore,
+ In the server process, a single thread using the lowest numbered lcore
+ in the corelist performs all packet I/O.
+ If the corelist parameter specifies more than a single lcore,
an additional lcore will be used for a thread to print packet count periodically.
The server application stores configuration data in shared memory,
@@ -254,7 +255,7 @@ the commands are:
Any client processes that need restarting can be restarted without affecting the server process.
How the Application Works
-^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~
The server (primary) process performs the initialization of network port and data structure
and stores its port configuration data in a memory zone in hugepage shared memory.
--
2.51.0
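The lookup-by-name pattern this patch describes (the secondary instance finding the primary's memory pool purely by its well-known name) can be sketched outside DPDK with a plain POSIX file mapping. This is an illustrative sketch only, assuming a scratch path under ``/tmp``; ``primary_create``, ``secondary_lookup`` and ``DEMO_POOL_PATH`` are hypothetical names, not part of the sample application or of the DPDK API (which uses ``rte_mempool_lookup()``):

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical stand-in for the mempool name shared between processes. */
#define DEMO_POOL_PATH "/tmp/demo_pool_example"
#define DEMO_POOL_SIZE 64

/* "Primary" role: create the named object and publish data in it. */
int primary_create(const char *msg)
{
    int fd = open(DEMO_POOL_PATH, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, DEMO_POOL_SIZE) < 0) {
        close(fd);
        return -1;
    }
    char *p = mmap(NULL, DEMO_POOL_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return -1;
    }
    strncpy(p, msg, DEMO_POOL_SIZE - 1);  /* region is zero-filled, so NUL-terminated */
    munmap(p, DEMO_POOL_SIZE);
    close(fd);
    return 0;
}

/* "Secondary" role: attach to the object purely by its well-known name. */
int secondary_lookup(char *out, size_t len)
{
    int fd = open(DEMO_POOL_PATH, O_RDONLY);
    if (fd < 0)
        return -1;
    char *p = mmap(NULL, DEMO_POOL_SIZE, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        close(fd);
        return -1;
    }
    strncpy(out, p, len - 1);
    out[len - 1] = '\0';
    munmap(p, DEMO_POOL_SIZE);
    close(fd);
    unlink(DEMO_POOL_PATH);  /* clean up the demo object */
    return 0;
}
```

The point of the sketch is only the handshake: the secondary side never receives a pointer, it reconstructs access from the shared name, exactly as the mempool lookup in the guide does.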
^ permalink raw reply related [flat|nested] 33+ messages in thread

* [PATCH 25/29] doc/guides: improve pipeline sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (23 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 24/29] doc/guides: improve multi-process " Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 26/29] doc/guides: improve VM power management " Stephen Hemminger
` (3 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Improve the pipeline sample application documentation:
- rename "Application overview" to "Overview"
- rename "Application stages" to "Explanation"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/pipeline.rst | 10 ++++++----
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/doc/guides/sample_app_ug/pipeline.rst b/doc/guides/sample_app_ug/pipeline.rst
index 2d7c977068..1ea845be2d 100644
--- a/doc/guides/sample_app_ug/pipeline.rst
+++ b/doc/guides/sample_app_ug/pipeline.rst
@@ -4,8 +4,8 @@
Pipeline Application
====================
-Application overview
---------------------
+Overview
+--------
This application showcases the features of the Software Switch (SWX) pipeline that is aligned with the P4 language.
@@ -93,8 +93,10 @@ When running a telnet client as above, command prompt is displayed:
Once application and telnet client start running, messages can be sent from client to application.
-Application stages
-------------------
+Explanation
+-----------
+
+Here is a description of the various stages of the application.
Initialization
~~~~~~~~~~~~~~
--
2.51.0
^ permalink raw reply related [flat|nested] 33+ messages in thread

* [PATCH 26/29] doc/guides: improve VM power management sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (24 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 25/29] doc/guides: improve pipeline " Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 27/29] doc/guides: improve keep alive " Stephen Hemminger
` (2 subsequent siblings)
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Anatoly Burakov, David Hunt,
Sivaprasad Tummala
Improve the VM power management sample application documentation:
- add Overview section heading
- restructure sections as subsections for consistency
- clarify and simplify explanations
- fix "in relation CPU" to "in relation to CPU"
- improve grammar throughout
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
.../sample_app_ug/vm_power_management.rst | 138 ++++++++----------
1 file changed, 63 insertions(+), 75 deletions(-)
diff --git a/doc/guides/sample_app_ug/vm_power_management.rst b/doc/guides/sample_app_ug/vm_power_management.rst
index 1955140bb3..86231c619b 100644
--- a/doc/guides/sample_app_ug/vm_power_management.rst
+++ b/doc/guides/sample_app_ug/vm_power_management.rst
@@ -4,20 +4,21 @@
Virtual Machine Power Management Application
============================================
-Applications running in virtual environments have an abstract view of
-the underlying hardware on the host. Specifically, applications cannot
-see the binding of virtual components to physical hardware. When looking
-at CPU resourcing, the pinning of Virtual CPUs (vCPUs) to Physical CPUs
-(pCPUs) on the host is not apparent to an application and this pinning
-may change over time. In addition, operating systems on Virtual Machines
-(VMs) do not have the ability to govern their own power policy. The
-Machine Specific Registers (MSRs) for enabling P-state transitions are
-not exposed to the operating systems running on the VMs.
-
-The solution demonstrated in this sample application shows an example of
-how a DPDK application can indicate its processing requirements using
-VM-local only information (vCPU/lcore, and so on) to a host resident VM
-Power Manager. The VM Power Manager is responsible for:
+Overview
+--------
+
+Applications in virtual environments have a limited view of the host hardware.
+They cannot see how virtual components map to physical hardware, including the
+pinning of virtual CPUs (vCPUs) to physical CPUs (pCPUs), which may change over time.
+Additionally, virtual machine operating systems cannot manage their own power policies,
+as the necessary Machine Specific Registers (MSRs) for controlling P-state transitions
+are not accessible.
+
+This sample application demonstrates how a DPDK application can communicate its
+processing needs using local VM information (like vCPU or lcore details) to a
+host-based VM Power Manager.
+
+The VM Power Manager is responsible for:
- **Accepting requests for frequency changes for a vCPU**
- **Translating the vCPU to a pCPU using libvirt**
@@ -84,77 +85,64 @@ in the host.
state, manually altering CPU frequency. Also allows for the changings
of vCPU to pCPU pinning
-Sample Application Architecture Overview
-----------------------------------------
-
-The VM power management solution employs ``qemu-kvm`` to provide
-communications channels between the host and VMs in the form of a
-``virtio-serial`` connection that appears as a para-virtualised serial
-device on a VM and can be configured to use various backends on the
-host. For this example, the configuration of each ``virtio-serial`` endpoint
-on the host as an ``AF_UNIX`` file socket, supporting poll/select and
-``epoll`` for event notification. In this example, each channel endpoint on
-the host is monitored for ``EPOLLIN`` events using ``epoll``. Each channel
-is specified as ``qemu-kvm`` arguments or as ``libvirt`` XML for each VM,
-where each VM can have several channels up to a maximum of 64 per VM. In this
-example, each DPDK lcore on a VM has exclusive access to a channel.
-
-To enable frequency changes from within a VM, the VM forwards a
-``librte_power`` request over the ``virtio-serial`` channel to the host. Each
-request contains the vCPU and power command (scale up/down/min/max). The
-API for the host ``librte_power`` and guest ``librte_power`` is consistent
-across environments, with the selection of VM or host implementation
-determined automatically at runtime based on the environment. On
-receiving a request, the host translates the vCPU to a pCPU using the
-libvirt API before forwarding it to the host ``librte_power``.
+Sample Application Architecture
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The VM power management solution uses ``qemu-kvm`` to create communication
+channels between the host and VMs through a ``virtio-serial`` connection.
+This connection appears as a para-virtualized serial device on the VM
+and can use various backends on the host. In this example, each ``virtio-serial``
+endpoint is configured as an ``AF_UNIX`` file socket on the host, supporting
+event notifications via ``poll``, ``select``, or ``epoll``. The host monitors
+each channel for ``EPOLLIN`` events using ``epoll``, with up to 64 channels per VM.
+Each DPDK lcore on a VM has exclusive access to a channel.
+
+To enable frequency scaling from within a VM, the VM sends a ``librte_power``
+request over the ``virtio-serial`` channel to the host. The request specifies
+the vCPU and desired power action (e.g., scale up, scale down, set to min/max).
+The ``librte_power`` API is consistent across environments, automatically selecting
+the appropriate VM or host implementation at runtime. Upon receiving a request,
+the host maps the vCPU to a pCPU using the libvirt API and forwards the command
+to the host’s ``librte_power`` for execution.
.. _figure_vm_power_mgr_vm_request_seq:
.. figure:: img/vm_power_mgr_vm_request_seq.*
-In addition to the ability to send power management requests to the
-host, a VM can send a power management policy to the host. In some
-cases, using a power management policy is a preferred option because it
-can eliminate possible latency issues that can occur when sending power
-management requests. Once the VM sends the policy to the host, the VM no
-longer needs to worry about power management, because the host now
-manages the power for the VM based on the policy. The policy can specify
-power behavior that is based on incoming traffic rates or time-of-day
-power adjustment (busy/quiet hour power adjustment for example). See
-:ref:`sending_policy` for more information.
-
-One method of power management is to sense how busy a core is when
-processing packets and adjusting power accordingly. One technique for
-doing this is to monitor the ratio of the branch miss to branch hits
-counters and scale the core power accordingly. This technique is based
-on the premise that when a core is not processing packets, the ratio of
-branch misses to branch hits is very low, but when the core is
-processing packets, it is measurably higher. The implementation of this
-capability is as a policy of type ``BRANCH_RATIO``.
-See :ref:`sending_policy` for more information on using the
-BRANCH_RATIO policy option.
-
-A JSON interface enables the specification of power management requests
-and policies in JSON format. The JSON interfaces provide a more
-convenient and more easily interpreted interface for the specification
-of requests and policies. See :ref:`power_man_requests` for more information.
+In addition to sending power management requests to the
+host, a VM can send a power management policy to the host.
+Using a policy is often preferred as it avoids potential
+latency issues from frequent requests. Once the policy is
+sent, the host manages the VM's power based on the policy,
+freeing the VM from further involvement. Policies can include
+rules like adjusting power based on traffic rates or setting
+power levels for busy and quiet hours. See :ref:`sending_policy`
+for more information.
+
+One power management method monitors core activity by tracking
+the ratio of branch misses to branch hits. When a core is idle,
+this ratio is low; when it is busy processing packets, the ratio increases.
+This technique, implemented as a ``BRANCH_RATIO`` policy, adjusts core power
+dynamically based on workload. See :ref:`sending_policy` for more information
+on using the BRANCH_RATIO policy option.
+
+Power management requests and policies can also be defined using a JSON interface,
+which provides a simpler and more readable way to specify these configurations.
+For more details, see :ref:`power_man_requests`.
Performance Considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~
-While the Haswell microarchitecture allows for independent power control
-for each core, earlier microarchitectures do not offer such fine-grained
-control. When deploying on pre-Haswell platforms, greater care must be
-taken when selecting which cores are assigned to a VM, for example, a
-core does not scale down in frequency until all of its siblings are
-similarly scaled down.
+The Haswell microarchitecture enables independent power control for each core,
+but earlier microarchitectures lack this level of precision. On pre-Haswell platforms,
+careful consideration is needed when assigning cores to a VM. For instance, a core cannot
+scale down its frequency until all its sibling cores are also scaled down.
Configuration
--------------
+~~~~~~~~~~~~~
BIOS
-~~~~
+^^^^
To use the power management features of the DPDK, you must enable
Enhanced Intel SpeedStep® Technology in the platform BIOS. Otherwise,
@@ -163,7 +151,7 @@ exist, and you cannot use CPU frequency-based power management. Refer to the
relevant BIOS documentation to determine how to access these settings.
Host Operating System
-~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^
The DPDK Power Management library can use either the ``acpi_cpufreq`` or
the ``intel_pstate`` kernel driver for the management of core frequencies. In
@@ -183,7 +171,7 @@ On reboot, load the ``acpi_cpufreq`` module:
``modprobe acpi_cpufreq``
Hypervisor Channel Configuration
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configure ``virtio-serial`` channels using ``libvirt`` XML.
The XML structure is as follows:
@@ -324,7 +312,7 @@ comma-separated list of channel numbers to add. Specifying the keyword
set_query {vm_name} enable|disable
-Manual control and inspection can also be carried in relation CPU frequency scaling:
+Manual control and inspection can also be carried out in relation to CPU frequency scaling:
Get the current frequency for each core specified in the mask:
@@ -479,7 +467,7 @@ correct directory using the following find command:
/usr/lib/i386-linux-gnu/pkgconfig
/usr/lib/x86_64-linux-gnu/pkgconfig
-Then use:
+Then, use:
.. code-block:: console
--
2.51.0
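The architecture section above describes the host monitoring each channel endpoint for ``EPOLLIN`` events using ``epoll``. A minimal sketch of that wait-then-read pattern is below, assuming an ordinary file descriptor stands in for the ``AF_UNIX`` channel socket; ``wait_for_request`` is a hypothetical helper for illustration, not code from the sample application:

```c
#include <string.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Wait up to one second for data on 'chan_fd' and read one request;
 * returns the number of bytes read, or -1 on error/timeout. */
ssize_t wait_for_request(int chan_fd, char *buf, size_t len)
{
    int ep = epoll_create1(0);
    if (ep < 0)
        return -1;

    /* Register the channel endpoint for readability, as the guide describes. */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = chan_fd };
    if (epoll_ctl(ep, EPOLL_CTL_ADD, chan_fd, &ev) < 0) {
        close(ep);
        return -1;
    }

    struct epoll_event got;
    ssize_t n = -1;
    if (epoll_wait(ep, &got, 1, 1000) == 1 && (got.events & EPOLLIN))
        n = read(chan_fd, buf, len);   /* e.g. a vCPU power request */

    close(ep);
    return n;
}
```

In the real manager one ``epoll`` instance would watch many such endpoints (up to 64 channels per VM), dispatching each readable request to the power-management logic.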
^ permalink raw reply related [flat|nested] 33+ messages in thread

* [PATCH 27/29] doc/guides: improve keep alive sample app guide
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (25 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 26/29] doc/guides: improve VM power management " Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 28/29] fix ipsec gw Stephen Hemminger
2026-01-14 22:22 ` [PATCH 29/29] fix pipeline Stephen Hemminger
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
- fix "Cores states" to "Core states"
- add missing space in heartbeat period description
- add article before "Linux environment"
- fix "Keep- Alive" typo and capitalization
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/sample_app_ug/keep_alive.rst | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/doc/guides/sample_app_ug/keep_alive.rst b/doc/guides/sample_app_ug/keep_alive.rst
index 8ae9d7c689..aaefeefe7e 100644
--- a/doc/guides/sample_app_ug/keep_alive.rst
+++ b/doc/guides/sample_app_ug/keep_alive.rst
@@ -19,7 +19,7 @@ The application demonstrates how to protect against 'silent outages'
on packet processing cores. A Keep Alive Monitor Agent Core (main)
monitors the state of packet processing cores (worker cores) by
dispatching pings at a regular time interval (default is 5ms) and
-monitoring the state of the cores. Cores states are: Alive, MIA, Dead
+monitoring the state of the cores. Core states are: Alive, MIA, Dead
or Buried. MIA indicates a missed ping, and Dead indicates two missed
pings within the specified time interval. When a core is Dead, a
callback function is invoked to restart the packet processing core;
@@ -58,12 +58,12 @@ where,
* ``q NQ``: Maximum number of queues per lcore (default is 1)
-* ``K PERIOD``: Heartbeat check period in ms(5ms default; 86400 max)
+* ``K PERIOD``: Heartbeat check period in ms (5ms default; 86400 max)
* ``T PERIOD``: statistics will be refreshed each PERIOD seconds (0 to
disable, 10 default, 86400 maximum).
-To run the application in linux environment with 4 lcores, 16 ports
+To run the application in a Linux environment with 4 lcores, 16 ports,
8 RX queues per lcore and a ping interval of 10ms, issue the command:
.. code-block:: console
@@ -85,7 +85,7 @@ similar to those of the :doc:`l2_forward_real_virtual`.
The Keep-Alive/'Liveliness' conceptual scheme:
-* A Keep- Alive Agent Runs every N Milliseconds.
+* A Keep-Alive Agent runs every N milliseconds.
* DPDK Cores respond to the keep-alive agent.
--
2.51.0
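The Alive/MIA/Dead progression this patch documents (one missed ping moves a core to MIA, a second miss marks it Dead) can be modelled as a small state function. This is a toy illustration of the documented rule only, not the sample application's implementation; ``check_core`` is a hypothetical name:

```c
/* Core liveness states, mirroring the states named in the guide
 * (Buried is omitted for brevity). */
enum core_state { CORE_ALIVE, CORE_MIA, CORE_DEAD };

/* Called by the monitor each check period; 'answered' is nonzero if the
 * worker refreshed its heartbeat since the last check. */
enum core_state check_core(enum core_state cur, int answered)
{
    if (answered)
        return CORE_ALIVE;      /* any answer marks the core alive again */
    if (cur == CORE_ALIVE)
        return CORE_MIA;        /* first missed ping */
    return CORE_DEAD;           /* second miss, or already dead */
}
```

In the sample application, reaching the Dead state is what triggers the restart callback for the failed packet-processing core.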
^ permalink raw reply related [flat|nested] 33+ messages in thread

* [PATCH 28/29] fix ipsec gw
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (26 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 27/29] doc/guides: improve keep alive " Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
2026-01-14 23:00 ` Stephen Hemminger
2026-01-14 22:22 ` [PATCH 29/29] fix pipeline Stephen Hemminger
28 siblings, 1 reply; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Radu Nicolau, Akhil Goyal
---
doc/guides/sample_app_ug/ipsec_secgw.rst | 183 ++++++++++++-----------
1 file changed, 89 insertions(+), 94 deletions(-)
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index ff08343061..1ce2353aee 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -11,35 +11,30 @@ application using DPDK cryptodev framework.
Overview
--------
-The application demonstrates the implementation of a Security Gateway
-(not IPsec compliant, see the Constraints section below) using DPDK based on RFC4301,
-RFC4303, RFC3602 and RFC2404.
+This application demonstrates the implementation of a Security Gateway
+(not fully IPsec-compliant; see the Constraints section) using DPDK, based
+on RFC4301, RFC4303, RFC3602, and RFC2404.
-Internet Key Exchange (IKE) is not implemented, so only manual setting of
-Security Policies and Security Associations is supported.
+Currently, DPDK does not support Internet Key Exchange (IKE), so Security Policies
+(SP) and Security Associations (SA) must be configured manually. SPs are implemented
+as ACL rules, SAs are stored in a table, and routing is handled using LPM.
-The Security Policies (SP) are implemented as ACL rules, the Security
-Associations (SA) are stored in a table and the routing is implemented
-using LPM.
+The application classifies ports as *Protected* or *Unprotected*, with traffic
+received on Unprotected ports considered Inbound and traffic on Protected ports
+considered Outbound.
-The application classifies the ports as *Protected* and *Unprotected*.
-Thus, traffic received on an Unprotected or Protected port is consider
-Inbound or Outbound respectively.
+It supports full IPsec protocol offload to hardware (via crypto accelerators or
+Ethernet devices) as well as inline IPsec processing by supported Ethernet
+devices during transmission. These modes can be configured during SA creation.
-The application also supports complete IPsec protocol offload to hardware
-(Look aside crypto accelerator or using ethernet device). It also support
-inline ipsec processing by the supported ethernet device during transmission.
-These modes can be selected during the SA creation configuration.
+For full protocol offload, the hardware processes ESP and outer IP headers,
+so the application does not need to add or remove them during Outbound or
+Inbound processing.
-In case of complete protocol offload, the processing of headers(ESP and outer
-IP header) is done by the hardware and the application does not need to
-add/remove them during outbound/inbound processing.
-
-For inline offloaded outbound traffic, the application will not do the LPM
-lookup for routing, as the port on which the packet has to be forwarded will be
-part of the SA. Security parameters will be configured on that port only, and
-sending the packet on other ports could result in unencrypted packets being
-sent out.
+In the inline offload mode for Outbound traffic, the application skips the
+LPM lookup for routing, as the SA specifies the port for forwarding. Security
+parameters are configured only on the specified port, and sending packets
+through other ports may result in unencrypted packets being transmitted.
The Path for IPsec Inbound traffic is:
@@ -59,70 +64,70 @@ The Path for the IPsec Outbound traffic is:
The application supports two modes of operation: poll mode and event mode.
-* In the poll mode a core receives packets from statically configured list
- of eth ports and eth ports' queues.
-
-* In the event mode a core receives packets as events. After packet processing
- is done core submits them back as events to an event device. This enables
- multicore scaling and HW assisted scheduling by making use of the event device
- capabilities. The event mode configuration is predefined. All packets reaching
- given eth port will arrive at the same event queue. All event queues are mapped
- to all event ports. This allows all cores to receive traffic from all ports.
- Since the underlying event device might have varying capabilities, the worker
- threads can be drafted differently to maximize performance. For example, if an
- event device - eth device pair has Tx internal port, then application can call
- rte_event_eth_tx_adapter_enqueue() instead of regular rte_event_enqueue_burst().
- So a thread which assumes that the device pair has internal port will not be the
- right solution for another pair. The infrastructure added for the event mode aims
- to help application to have multiple worker threads by maximizing performance from
- every type of event device without affecting existing paths/use cases. The worker
- to be used will be determined by the operating conditions and the underlying device
- capabilities.
- **Currently the application provides non-burst, internal port worker threads.**
- It also provides infrastructure for non-internal port
- however does not define any worker threads.
-
- Event mode also supports event vectorization. The event devices, ethernet device
- pairs which support the capability ``RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR`` can
- aggregate packets based on flow characteristics and generate a ``rte_event``
- containing ``rte_event_vector``.
- The aggregation size and timeout can be given using command line options vector-size
- (default vector-size is 16) and vector-tmo (default vector-tmo is 102400ns).
- By default event vectorization is disabled and it can be enabled using event-vector
- option.
- For the event devices, crypto device pairs which support the capability
- ``RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR`` vector aggregation
- could also be enable using event-vector option.
-
-Additionally the event mode introduces two submodes of processing packets:
-
-* Driver submode: This submode has bare minimum changes in the application to support
- IPsec. There are no lookups, no routing done in the application. And for inline
- protocol use case, the worker thread resembles l2fwd worker thread as the IPsec
- processing is done entirely in HW. This mode can be used to benchmark the raw
- performance of the HW. The driver submode is selected with --single-sa option
- (used also by poll mode). When --single-sa option is used in conjunction with event
- mode then index passed to --single-sa is ignored.
-
-* App submode: This submode has all the features currently implemented with the
- application (non librte_ipsec path). All the lookups, routing follows existing
- methods and report numbers that can be compared against regular poll mode
- benchmark numbers.
+* In the poll mode, a core receives packets from statically configured list
+ of eth ports and eth ports' queues.
+
+* In event mode, a core processes packets as events. After processing, the
+core submits the packets back to an event device, enabling multicore scaling
+and hardware-assisted scheduling by leveraging the capabilities of the event
+device. The event mode configuration is predefined, where all packets arriving
+at a specific Ethernet port are directed to the same event queue. All event
+queues are mapped to all event ports, allowing any core to receive traffic
+from any port. Since event devices can have varying capabilities, worker threads are designed
+differently to optimize performance. For instance, if an event device and Ethernet
+device pair includes a Tx internal port, the application can use `rte_event_eth_tx_adapter_enqueue`
+instead of the standard `rte_event_enqueue_burst`. A thread optimized for a device
+pair with an internal port may not work effectively with another pair. The infrastructure
+for event mode is designed to support multiple worker threads
+while maximizing the performance of each type of event device without impacting
+existing paths or use cases. The worker thread selection depends on the operating
+conditions and the capabilities of the underlying devices.
+
+ **Currently the application provides non-burst, internal port worker threads.**
+ It also provides infrastructure for non-internal port,
+ however does not define any worker threads.
+
+ Event mode also supports event vectorization. The event devices, ethernet device
+ pairs which support the capability ``RTE_EVENT_ETH_RX_ADAPTER_CAP_EVENT_VECTOR`` can
+ aggregate packets based on flow characteristics and generate a ``rte_event``
+ containing ``rte_event_vector``.
+ The aggregation size and timeout can be given using command line options vector-size
+ (default vector-size is 16) and vector-tmo (default vector-tmo is 102400ns).
+ By default event vectorization is disabled and it can be enabled using event-vector
+ option.
+ For the event devices, crypto device pairs which support the capability
+ ``RTE_EVENT_CRYPTO_ADAPTER_CAP_EVENT_VECTOR`` vector aggregation
+ could also be enabled using event-vector option.
+
+Additionally, the event mode introduces two submodes of processing packets:
+
+* Driver submode: This submode has bare minimum changes in the application to support
+ IPsec. There are no lookups, no routing done in the application. And for inline
+ protocol use case, the worker thread resembles l2fwd worker thread as the IPsec
+ processing is done entirely in HW. This mode can be used to benchmark the raw
+ performance of the HW. The driver submode is selected with --single-sa option
+ (used also by poll mode). When --single-sa option is used in conjunction with event
+ mode then index passed to --single-sa is ignored.
+
+* App submode: This submode has all the features currently implemented with the
+ application (non librte_ipsec path). All the lookups, routing follows existing
+ methods and report numbers that can be compared against regular poll mode
+ benchmark numbers.
Constraints
------------
+~~~~~~~~~~~
* No IPv6 options headers.
* No AH mode.
* Supported algorithms: AES-CBC, AES-CTR, AES-GCM, 3DES-CBC, DES-CBC,
HMAC-SHA1, HMAC-SHA256, AES-GMAC, AES_CTR, AES_XCBC_MAC, AES_CCM,
CHACHA20_POLY1305 and NULL.
-* Each SA must be handle by a unique lcore (*1 RX queue per port*).
+* Each SA must be handled by a unique lcore (*1 RX queue per port*).
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`.
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``ipsec-secgw`` sub-directory.
@@ -165,7 +170,7 @@ Where:
* ``-j FRAMESIZE``: *optional*. data buffer size (in bytes),
in other words maximum data size for one segment.
- Packets with length bigger then FRAMESIZE still can be received,
+ Packets with length bigger than FRAMESIZE still can be received,
but will be segmented.
Default value: RTE_MBUF_DEFAULT_BUF_SIZE (2176)
Minimum value: RTE_MBUF_DEFAULT_BUF_SIZE (2176)
@@ -239,8 +244,8 @@ Where:
Default value: 0.
* ``--mtu MTU``: MTU value (in bytes) on all attached ethernet ports.
- Outgoing packets with length bigger then MTU will be fragmented.
- Incoming packets with length bigger then MTU will be discarded.
+ Outgoing packets with length bigger than MTU will be fragmented.
+ Incoming packets with length bigger than MTU will be discarded.
Default value: 1500.
* ``--frag-ttl FRAG_TTL_NS``: fragment lifetime (in nanoseconds).
@@ -255,7 +260,7 @@ Where:
via Rx queue setup.
* ``--vector-pool-sz``: Number of buffers in vector pool.
- By default, vector pool size depeneds on packet pool size
+ By default, vector pool size depends on packet pool size
and size of each vector.
* ``--desc-nb NUMBER_OF_DESC``: Number of descriptors per queue pair.
@@ -372,11 +377,11 @@ For example, something like the following command line:
Configurations
---------------
+~~~~~~~~~~~~~~
The following sections provide the syntax of configurations to initialize
your SP, SA, Routing, Flow and Neighbour tables.
-Configurations shall be specified in the configuration file to be passed to
+Configurations will be specified in the configuration file to be passed to
the application. The file is then parsed by the application. The successful
parsing will result in the appropriate rules being applied to the tables
accordingly.
@@ -385,11 +390,11 @@ accordingly.
Configuration File Syntax
~~~~~~~~~~~~~~~~~~~~~~~~~
-As mention in the overview, the Security Policies are ACL rules.
-The application parsers the rules specified in the configuration file and
+As mentioned in the overview, the Security Policies are ACL rules.
+The application parses the rules specified in the configuration file and
passes them to the ACL table, and replicates them per socket in use.
-Following are the configuration file syntax.
+The following sections contain the configuration file syntax.
General rule syntax
^^^^^^^^^^^^^^^^^^^
@@ -420,7 +425,7 @@ The SP rule syntax is shown as follows:
<proto> <sport> <dport>
-where each options means:
+where each option means:
``<ip_ver>``
@@ -535,7 +540,7 @@ The SA rule syntax is shown as follows:
<mode> <src_ip> <dst_ip> <action_type> <port_id> <fallback>
<flow-direction> <port_id> <queue_id> <udp-encap> <reassembly_en>
-where each options means:
+where each option means:
``<dir>``
@@ -628,7 +633,7 @@ where each option means:
* *aes-192-gcm*: AES-GCM 192-bit algorithm
* *aes-256-gcm*: AES-GCM 256-bit algorithm
- * Syntax: *cipher_algo <your algorithm>*
+ * Syntax: *aead_algo <your algorithm>*
``<aead_key>``
@@ -842,7 +847,7 @@ The Routing rule syntax is shown as follows:
rt <ip_ver> <src_ip> <dst_ip> <port>
-where each options means:
+where each option means:
``<ip_ver>``
@@ -885,7 +890,7 @@ where each option means:
* Syntax: *port X*
-Example SP rules:
+Example Routing rules:
.. code-block:: console
@@ -907,7 +912,7 @@ The flow rule syntax is shown as follows:
flow <mark> <eth> <ip_ver> <src_ip> <dst_ip> <port> <queue> \
<count> <security> <set_mark>
-where each options means:
+where each option means:
``<mark>``
@@ -1034,7 +1039,7 @@ The Neighbour rule syntax is shown as follows:
neigh <port> <dst_mac>
-where each options means:
+where each option means:
``<port>``
@@ -1138,7 +1143,7 @@ It then tries to perform some data transfer using the scheme described above.
Usage
~~~~~
-In the ipsec-secgw/test directory run
+In the ipsec-secgw/test directory run:
/bin/bash run_test.sh <options> <ipsec_mode>
@@ -1171,4 +1176,4 @@ Available options:
* ``-h`` Show usage.
If <ipsec_mode> is specified, only tests for that mode will be invoked. For the
-list of available modes please refer to run_test.sh.
+list of available modes, please refer to run_test.sh.
--
2.51.0
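As a rough illustration of the ``--mtu`` behaviour described in this patch's option list (outgoing packets larger than the MTU are fragmented), the number of fragments can be estimated with a ceiling division. This is a deliberate simplification; real IP fragmentation also replicates headers and aligns fragment offsets, and ``fragment_count`` is a hypothetical helper, not part of the sample application:

```c
/* Estimate how many fragments an outgoing packet of 'pkt_len' bytes
 * needs when each fragment may carry at most 'mtu' bytes of it.
 * Returns 0 for an empty packet or an invalid (zero) MTU. */
unsigned int fragment_count(unsigned int pkt_len, unsigned int mtu)
{
    if (mtu == 0)
        return 0;                       /* invalid configuration */
    return (pkt_len + mtu - 1) / mtu;   /* ceiling division */
}
```

With the default MTU of 1500, a 1500-byte packet goes out unfragmented while a 1501-byte packet would be split in two.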
^ permalink raw reply related [flat|nested] 33+ messages in thread

* [PATCH 29/29] fix pipeline
2026-01-14 22:21 [PATCH 00/29] doc/guides: sample application documentation improvements Stephen Hemminger
` (27 preceding siblings ...)
2026-01-14 22:22 ` [PATCH 28/29] fix ipsec gw Stephen Hemminger
@ 2026-01-14 22:22 ` Stephen Hemminger
28 siblings, 0 replies; 33+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:22 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Cristian Dumitrescu
---
doc/guides/sample_app_ug/test_pipeline.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/sample_app_ug/test_pipeline.rst b/doc/guides/sample_app_ug/test_pipeline.rst
index 40c530af57..686ce152cf 100644
--- a/doc/guides/sample_app_ug/test_pipeline.rst
+++ b/doc/guides/sample_app_ug/test_pipeline.rst
@@ -30,7 +30,7 @@ The application uses three CPU cores:
Compiling the Application
-------------------------
-To compile the sample application see :doc:`compiling`
+To compile the sample application, see :doc:`compiling`.
The application is located in the ``dpdk/<build_dir>/app`` directory.
@@ -149,7 +149,7 @@ For hash tables, the following parameters can be selected:
| | | | |
| | | | At run time, core A is creating the following lookup |
| | | | key and storing it into the packet meta data for |
- | | | | core B to use for table lookup: |
+ | | | | core B to use for table lookup: |
| | | | |
| | | | [destination IPv4 address, 28 bytes of 0] |
| | | | |
--
2.51.0
^ permalink raw reply related [flat|nested] 33+ messages in thread