* [PATCH 0/9] reword in prog guide
@ 2024-05-13 15:59 Nandini Persad
2024-05-13 15:59 ` [PATCH 1/9] doc: reword design section in contributors guidelines Nandini Persad
` (12 more replies)
0 siblings, 13 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
I reviewed and made small syntax and grammatical edits
to sections in the programmer's guide and the design section of
the contributor's guidelines.
Nandini Persad (9):
doc: reword design section in contributors guidelines
doc: reword pmd section in prog guide
doc: reword argparse section in prog guide
doc: reword service cores section in prog guide
doc: reword trace library section in prog guide
doc: reword log library section in prog guide
doc: reword cmdline section in prog guide
doc: reword stack library section in prog guide
doc: reword rcu library section in prog guide
.mailmap | 1 +
doc/guides/contributing/design.rst | 79 ++++++-------
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
doc/guides/prog_guide/argparse_lib.rst | 72 ++++++-----
doc/guides/prog_guide/cmdline.rst | 56 ++++-----
doc/guides/prog_guide/log_lib.rst | 32 ++---
doc/guides/prog_guide/poll_mode_drv.rst | 151 ++++++++++++------------
doc/guides/prog_guide/rcu_lib.rst | 77 ++++++------
doc/guides/prog_guide/service_cores.rst | 32 ++---
doc/guides/prog_guide/stack_lib.rst | 4 +-
doc/guides/prog_guide/trace_lib.rst | 72 +++++------
11 files changed, 284 insertions(+), 294 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 118+ messages in thread
* [PATCH 1/9] doc: reword design section in contributors guidelines
2024-05-13 15:59 [PATCH 0/9] reword in prog guide Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 2/9] doc: reword pmd section in prog guide Nandini Persad
` (11 subsequent siblings)
12 siblings, 0 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Thomas Monjalon
Minor editing for grammar and syntax of the design section.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
.mailmap | 1 +
doc/guides/contributing/design.rst | 79 ++++++++++++++----------------
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
3 files changed, 38 insertions(+), 44 deletions(-)
diff --git a/.mailmap b/.mailmap
index 66ebc20666..7d4929c5d1 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1002,6 +1002,7 @@ Naga Suresh Somarowthu <naga.sureshx.somarowthu@intel.com>
Nalla Pradeep <pnalla@marvell.com>
Na Na <nana.nn@alibaba-inc.com>
Nan Chen <whutchennan@gmail.com>
+Nandini Persad <nandinipersad361@gmail.com>
Nannan Lu <nannan.lu@intel.com>
Nan Zhou <zhounan14@huawei.com>
Narcisa Vasile <navasile@linux.microsoft.com> <navasile@microsoft.com> <narcisa.vasile@microsoft.com>
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index b724177ba1..921578aec5 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -8,22 +8,26 @@ Design
Environment or Architecture-specific Sources
--------------------------------------------
-In DPDK and DPDK applications, some code is specific to an architecture (i686, x86_64) or to an executive environment (freebsd or linux) and so on.
-As far as is possible, all such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+In DPDK and DPDK applications, some code is architecture-specific (i686, x86_64) or environment-specific (FreeBSD, Linux, etc.).
+When feasible, such architecture- or environment-specific code should be provided via standard APIs in the EAL.
-By convention, a file is common if it is not located in a directory indicating that it is specific.
-For instance, a file located in a subdir of "x86_64" directory is specific to this architecture.
+By convention, a file is specific if it is located in a directory named for an architecture or environment. Otherwise, it is common.
+
+For example:
+
+A file located in a subdir of "x86_64" directory is specific to this architecture.
A file located in a subdir of "linux" is specific to this execution environment.
.. note::
Code in DPDK libraries and applications should be generic.
- The correct location for architecture or executive environment specific code is in the EAL.
+ The correct location for architecture or executive environment-specific code is in the EAL.
+
+When necessary, there are several ways to handle specific code:
-When absolutely necessary, there are several ways to handle specific code:
-* Use a ``#ifdef`` with a build definition macro in the C code.
- This can be done when the differences are small and they can be embedded in the same C file:
+* When the differences are small and they can be embedded in the same C file, use a ``#ifdef`` with a build definition macro in the C code.
+
.. code-block:: c
@@ -33,9 +37,9 @@ When absolutely necessary, there are several ways to handle specific code:
titi();
#endif
-* Use build definition macros and conditions in the Meson build file. This is done when the differences are more significant.
- In this case, the code is split into two separate files that are architecture or environment specific.
- This should only apply inside the EAL library.
+* When the differences are more significant, use build definition macros and conditions in the Meson build file.
+  In this case, the code is split into two separate files that are architecture or environment specific.
+  This should only apply inside the EAL library.
Per Architecture Sources
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -43,7 +47,7 @@ Per Architecture Sources
The following macro options can be used:
* ``RTE_ARCH`` is a string that contains the name of the architecture.
-* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if we are building for those architectures.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only when building for these architectures.
Per Execution Environment Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -51,30 +55,21 @@ Per Execution Environment Sources
The following macro options can be used:
* ``RTE_EXEC_ENV`` is a string that contains the name of the executive environment.
-* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only when building for this execution environment.
Mbuf features
-------------
-The ``rte_mbuf`` structure must be kept small (128 bytes).
-
-In order to add new features without wasting buffer space for unused features,
-some fields and flags can be registered dynamically in a shared area.
-The "dynamic" mbuf area is the default choice for the new features.
-
-The "dynamic" area is eating the remaining space in mbuf,
-and some existing "static" fields may need to become "dynamic".
+A designated area in the mbuf stores "dynamically" registered fields and flags. It is the default choice for accommodating new features. The "dynamic" area consumes the remaining space in the mbuf; however, the ``rte_mbuf`` structure itself must be kept small (128 bytes).
-Adding a new static field or flag must be an exception matching many criteria
-like (non exhaustive): wide usage, performance, size.
+As more features are added, the space for existing "static" fields (fields that are allocated statically) may need to be reconsidered and possibly converted to "dynamic" allocation. Adding a new static field or flag should be an exception. It must meet specific criteria, including widespread usage, performance impact, and size, and be justified by its necessity and its impact on the system's efficiency.
Runtime Information - Logging, Tracing and Telemetry
----------------------------------------------------
-It is often desirable to provide information to the end-user
-as to what is happening to the application at runtime.
-DPDK provides a number of built-in mechanisms to provide this introspection:
+It is often useful to give the end user insight into what is happening in the application at runtime.
+DPDK provides several built-in mechanisms for this introspection:
* :ref:`Logging <dynamic_logging>`
* :doc:`Tracing <../prog_guide/trace_lib>`
@@ -82,11 +77,11 @@ DPDK provides a number of built-in mechanisms to provide this introspection:
Each of these has its own strengths and suitabilities for use within DPDK components.
-Below are some guidelines for when each should be used:
+Here are guidelines for when each mechanism should be used:
* For reporting error conditions, or other abnormal runtime issues, *logging* should be used.
- Depending on the severity of the issue, the appropriate log level, for example,
- ``ERROR``, ``WARNING`` or ``NOTICE``, should be used.
+ Depending on the severity of the issue, the appropriate log level should be used,
+ for example ``ERROR``, ``WARNING`` or ``NOTICE``.
.. note::
@@ -96,22 +91,21 @@ Below are some guidelines for when each should be used:
* For component initialization, or other cases where a path through the code
is only likely to be taken once,
- either *logging* at ``DEBUG`` level or *tracing* may be used, or potentially both.
+ either *logging* at ``DEBUG`` level or *tracing* may be used, or both.
In the latter case, tracing can provide basic information as to the code path taken,
with debug-level logging providing additional details on internal state,
- not possible to emit via tracing.
+ which is not possible to emit via tracing.
* For a component's data-path, where a path is to be taken multiple times within a short timeframe,
*tracing* should be used.
Since DPDK tracing uses `Common Trace Format <https://diamon.org/ctf/>`_ for its tracing logs,
post-analysis can be done using a range of external tools.
-* For numerical or statistical data generated by a component, for example, per-packet statistics,
+* For numerical or statistical data generated by a component, such as per-packet statistics,
*telemetry* should be used.
-* For any data where the data may need to be gathered at any point in the execution
- to help assess the state of the application component,
- for example, core configuration, device information, *telemetry* should be used.
+* For any data that may need to be gathered at any point during the execution
+ to help assess the state of the application component (for example, core configuration, device information), *telemetry* should be used.
Telemetry callbacks should not modify any program state, but be "read-only".
Many libraries also include a ``rte_<libname>_dump()`` function as part of their API,
@@ -135,13 +129,12 @@ requirements for preventing ABI changes when implementing statistics.
Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Having runtime support for enabling/disabling library statistics is recommended,
-as build-time options should be avoided. However, if build-time options are used,
-for example as in the table library, the options can be set using c_args.
-When this flag is set, all the counters supported by current library are
+Having runtime support for enabling/disabling library statistics is recommended,
+as build-time options should be avoided. However, if build-time options are used, as in the table library, the options can be set using c_args.
+When this flag is set, all the counters supported by the current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
-are collected for any instance of any object type provided by the library:
+are collected for any instance of any object type provided by the library.
Prevention of ABI changes due to library statistics support
@@ -165,8 +158,8 @@ Motivation to allow the application to turn library statistics on and off
It is highly recommended that each library provides statistics counters to allow
an application to monitor the library-level run-time events. Typical counters
-are: number of packets received/dropped/transmitted, number of buffers
-allocated/freed, number of occurrences for specific events, etc.
+are: the number of packets received/dropped/transmitted, the number of buffers
+allocated/freed, the number of occurrences for specific events, etc.
However, the resources consumed for library-level statistics counter collection
have to be spent out of the application budget and the counters collected by
@@ -229,5 +222,5 @@ Developers should work with the Linux Kernel community to get the required
functionality upstream. PF functionality should only be added to DPDK for
testing and prototyping purposes while the kernel work is ongoing. It should
also be marked with an "EXPERIMENTAL" tag. If the functionality isn't
-upstreamable then a case can be made to maintain the PF functionality in DPDK
+upstreamable, then a case can be made to maintain the PF functionality in DPDK
without the EXPERIMENTAL tag.
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 13be715933..0569c5cae6 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -99,7 +99,7 @@ e.g. :doc:`../nics/index`
Running DPDK Applications
-------------------------
-To run a DPDK application, some customization may be required on the target machine.
+To run a DPDK application, customization may be required on the target machine.
System Software
~~~~~~~~~~~~~~~
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH 2/9] doc: reword pmd section in prog guide
2024-05-13 15:59 [PATCH 0/9] reword in prog guide Nandini Persad
2024-05-13 15:59 ` [PATCH 1/9] doc: reword design section in contributors guidelines Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 3/9] doc: reword argparse " Nandini Persad
` (10 subsequent siblings)
12 siblings, 0 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
Made small edits to sections 15.1 and 15.5.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/poll_mode_drv.rst | 151 ++++++++++++------------
1 file changed, 73 insertions(+), 78 deletions(-)
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 5008b41c60..360af20900 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -6,25 +6,24 @@
Poll Mode Driver
================
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized virtio Poll Mode Drivers.
+The DPDK includes 1 Gigabit, 10 Gigabit, 40 Gigabit and paravirtualized virtio Poll Mode Drivers.
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space,
-to configure the devices and their respective queues.
+A Poll Mode Driver (PMD) consists of APIs (provided through the BSD driver running in user space) to configure the devices and their respective queues.
In addition, a PMD accesses the RX and TX descriptors directly without any interrupts
(with the exception of Link Status Change interrupts) to quickly receive,
process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a generic external API for the Ethernet PMDs.
+This section describes the requirements of the PMDs and
+their global design principles. It also proposes a high-level architecture and a generic external API for the Ethernet PMDs.
Requirements and Assumptions
----------------------------
The DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:
-* In the *run-to-completion* model, a specific port's RX descriptor ring is polled for packets through an API.
- Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.
+* In the *run-to-completion* model, a specific port's Rx descriptor ring is polled for packets through an API.
+ Packets are then processed on the same core and placed on a port's Tx descriptor ring through an API for transmission.
-* In the *pipe-line* model, one core polls one or more port's RX descriptor ring through an API.
+* In the *pipe-line* model, one core polls one or more port's Rx descriptor ring through an API.
Packets are received and passed to another core via a ring.
The other core continues to process the packet which then may be placed on a port's TX descriptor ring through an API for transmission.
@@ -50,14 +49,14 @@ The loop for packet processing includes the following steps:
* Retrieve the received packet from the packet queue
-* Process the received packet, up to its retransmission if forwarded
+* Process the received packet up to its retransmission if forwarded
To avoid any unnecessary interrupt processing overhead, the execution environment must not use any asynchronous notification mechanisms.
Whenever needed and appropriate, asynchronous communication should be introduced as much as possible through the use of rings.
Avoiding lock contention is a key issue in a multi-core environment.
-To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
+To address this issue, PMDs are designed to work with per-core private resources as much as possible.
+For example, a PMD maintains a separate transmit queue per core, per port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -101,9 +100,9 @@ However, an rte_eth_tx_burst function is effectively implemented by the PMD to m
* Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.
-Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
+Burst-oriented functions are also introduced via the API for services that are extensively used by the PMD.
This applies in particular to buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
-For example, an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
+An example of this would be an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
replenishing multiple descriptors of the receive ring.
Logical Cores, Memory and NIC Queues Relationships
@@ -111,7 +110,7 @@ Logical Cores, Memory and NIC Queues Relationships
The DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
Therefore, mbuf allocation associated with local PCIe* interfaces should be allocated from memory pools created in the local memory.
-The buffers should, if possible, remain on the local processor to obtain the best performance results and RX and TX buffer descriptors
+The buffers should, if possible, remain on the local processor to obtain the best performance results and Rx and Tx buffer descriptors
should be populated with mbufs allocated from a mempool allocated from local memory.
The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processors memory.
@@ -120,12 +119,11 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
-concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
+concurrently on the same Tx queue without an SW lock. This PMD feature, found in some NICs, is useful for:
-* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
+* Removing explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
-* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
- enables more scaling as all workers can send the packets.
+* Enabling greater scalability (for example, in the eventdev use case) by removing the requirement for a dedicated Tx core, as all workers can send packets.
See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
@@ -135,8 +133,8 @@ Device Identification, Ownership and Configuration
Device Identification
~~~~~~~~~~~~~~~~~~~~~
-Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI
-identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
+Each NIC port is uniquely designated by its PCI
+identifiers (bus/bridge, device, function) assigned by the PCI probing/enumeration function executed at DPDK initialization.
Based on their PCI identifier, NIC ports are assigned two other identifiers:
* A port index used to designate the NIC port in all functions exported by the PMD API.
@@ -149,14 +147,13 @@ Port Ownership
The Ethernet devices ports can be owned by a single DPDK entity (application, library, PMD, process, etc).
The ownership mechanism is controlled by ethdev APIs and allows to set/remove/get a port owner by DPDK entities.
-It prevents Ethernet ports to be managed by different entities.
+This prevents Ethernet ports from being managed by different entities.
.. note::
- It is the DPDK entity responsibility to set the port owner before using it and to manage the port usage synchronization between different threads or processes.
+ It is the DPDK entity's responsibility to set the port owner before using the port and to manage the port usage synchronization between different threads or processes.
-It is recommended to set port ownership early,
-like during the probing notification ``RTE_ETH_EVENT_NEW``.
+It is recommended to set port ownership early, for instance during the probing notification ``RTE_ETH_EVENT_NEW``.
Device Configuration
~~~~~~~~~~~~~~~~~~~~
@@ -165,7 +162,7 @@ The configuration of each NIC port includes the following operations:
* Allocate PCI resources
-* Reset the hardware (issue a Global Reset) to a well-known default state
+* Reset the hardware to a well-known default state (issue a Global Reset)
* Set up the PHY and the link
@@ -174,7 +171,7 @@ The configuration of each NIC port includes the following operations:
The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.
Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
-This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features for example.
+This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features.
On-the-Fly Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -210,7 +207,7 @@ Each transmit queue is independently configured with the following information:
* The *minimum* transmit packets to free threshold (tx_free_thresh).
When the number of descriptors used to transmit packets exceeds this threshold, the network adaptor should be checked to see if it has written back descriptors.
- A value of 0 can be passed during the TX queue configuration to indicate the default value should be used.
+ A value of 0 can be passed during the Tx queue configuration to indicate the default value should be used.
The default value for tx_free_thresh is 32.
This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.
@@ -222,7 +219,7 @@ Each transmit queue is independently configured with the following information:
A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
The default value for tx_rs_thresh is 32.
This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor.
- This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
+ This saves upstream PCIe* bandwidth resulting from Tx descriptor write-backs.
It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1.
Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.
@@ -244,7 +241,7 @@ One descriptor in the TX ring is used as a sentinel to avoid a hardware race con
.. note::
- When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
+ When configuring for DCB operation at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
Free Tx mbuf on Demand
~~~~~~~~~~~~~~~~~~~~~~
@@ -265,7 +262,7 @@ There are two scenarios when an application may want the mbuf released immediate
One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API
until the reference count on the packet is decremented.
- Then the same packet can be transmitted to the next destination interface.
+ Then, the same packet can be transmitted to the next destination interface.
The application is still responsible for managing any packet manipulations needed
between the different destination interfaces, but a packet copy can be avoided.
This API is independent of whether the packet was transmitted or dropped,
@@ -288,13 +285,13 @@ Hardware Offload
Depending on driver capabilities advertised by
``rte_eth_dev_info_get()``, the PMD may support hardware offloading
feature like checksumming, TCP segmentation, VLAN insertion or
-lockfree multithreaded TX burst on the same TX queue.
+lockfree multithreaded Tx burst on the same Tx queue.
The support of these offload features implies the addition of dedicated
status bit(s) and value field(s) into the rte_mbuf data structure, along
with their appropriate handling by the receive/transmit functions
exported by each PMD. The list of flags and their precise meaning is
-described in the mbuf API documentation and in the in :ref:`Mbuf Library
+described in the mbuf API documentation and in the :ref:`Mbuf Library
<Mbuf_Library>`, section "Meta Information".
Per-Port and Per-Queue Offloads
@@ -303,14 +300,14 @@ Per-Port and Per-Queue Offloads
In the DPDK offload API, offloads are divided into per-port and per-queue offloads as follows:
* A per-queue offloading can be enabled on a queue and disabled on another queue at the same time.
-* A pure per-port offload is the one supported by device but not per-queue type.
-* A pure per-port offloading can't be enabled on a queue and disabled on another queue at the same time.
+* A pure per-port offload is one supported by the device but not of the per-queue type.
+* A pure per-port offloading cannot be enabled on a queue and disabled on another queue at the same time.
* A pure per-port offloading must be enabled or disabled on all queues at the same time.
-* Any offloading is per-queue or pure per-port type, but can't be both types at same devices.
+* An offloading is either per-queue or pure per-port type, but cannot be both on the same device.
* Port capabilities = per-queue capabilities + pure per-port capabilities.
* Any supported offloading can be enabled on all queues.
-The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
+The different offload capabilities can be queried using ``rte_eth_dev_info_get()``.
The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all per-queue offloading capabilities.
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
@@ -329,8 +326,8 @@ per-port type and no matter whether it is set or cleared in
If a per-queue offloading hasn't been enabled in ``rte_eth_dev_configure()``,
it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for individual queue.
A newly added offloads in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()`` input by application
-is the one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
-in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise trigger an error log.
+is the one that hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
+in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise an error log will be triggered.
Poll Mode Driver API
--------------------
@@ -340,8 +337,8 @@ Generalities
By default, all functions exported by a PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
-For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
-Of course, this function can be invoked in parallel by different logical cores on different RX queues.
+For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same Rx queue of the same port.
+This function can be invoked in parallel by different logical cores on different Rx queues.
It is the responsibility of the upper-level application to enforce this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions
@@ -357,7 +354,7 @@ The rte_mbuf data structure includes specific fields to represent, in a generic
For an input packet, most fields of the rte_mbuf structure are filled in by the PMD receive function with the information contained in the receive descriptor.
Conversely, for output packets, most fields of rte_mbuf structures are used by the PMD transmit function to initialize transmit descriptors.
-The mbuf structure is fully described in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
+The mbuf structure is described in depth in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
Ethernet Device API
~~~~~~~~~~~~~~~~~~~
@@ -370,12 +367,12 @@ Ethernet Device Standard Device Arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Standard Ethernet device arguments allow for a set of commonly used arguments/
-parameters which are applicable to all Ethernet devices to be available to for
-specification of specific device and for passing common configuration
+parameters applicable to all Ethernet devices. These arguments/parameters can be used
+to specify particular devices and to pass common configuration
parameters to those ports.
-* ``representor`` for a device which supports the creation of representor ports
- this argument allows user to specify which switch ports to enable port
+* Use ``representor`` for a device which supports the creation of representor ports.
+ This argument allows the user to specify which switch ports to enable port
representors for::
-a DBDF,representor=vf0
@@ -392,7 +389,7 @@ parameters to those ports.
-a DBDF,representor=[pf[0-1],pf2vf[0-2],pf3[3,5-8]]
(Multiple representors in one device argument can be represented as a list)
-Note: PMDs are not required to support the standard device arguments and users
+Note: PMDs are not required to support the standard device arguments. Users
should consult the relevant PMD documentation to see support devargs.
Extended Statistics API
@@ -402,9 +399,9 @@ The extended statistics API allows a PMD to expose all statistics that are
available to it, including statistics that are unique to the device.
Each statistic has three properties ``name``, ``id`` and ``value``:
-* ``name``: A human readable string formatted by the scheme detailed below.
+* ``name``: A human-readable string formatted by the scheme detailed below.
* ``id``: An integer that represents only that statistic.
-* ``value``: A unsigned 64-bit integer that is the value of the statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
@@ -439,7 +436,7 @@ associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc., are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
@@ -466,8 +463,8 @@ lookup of specific statistics. Performant lookup means two things;
The API ensures these requirements are met by mapping the ``name`` of the
statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
The API allows applications to request an array of ``id`` values, so that the
-PMD only performs the required calculations. Expected usage is that the
-application scans the ``name`` of each statistic, and caches the ``id``
+PMD only performs the required calculations. The expected usage is that the
+application scans the ``name`` of each statistic and caches the ``id``
if it has an interest in that statistic. On the fast-path, the integer can be used
to retrieve the actual ``value`` of the statistic that the ``id`` represents.
@@ -486,7 +483,7 @@ statistics.
* ``rte_eth_xstats_get_by_id()``: Fills in an array of ``uint64_t`` values
with matching the provided ``ids`` array. If the ``ids`` array is NULL, it
- returns all statistics that are available.
+ returns all available statistics.
Application Usage
@@ -496,10 +493,10 @@ Imagine an application that wants to view the dropped packet count. If no
packets are dropped, the application does not read any other metrics for
performance reasons. If packets are dropped, the application has a particular
set of statistics that it requests. This "set" of statistics allows the app to
-decide what next steps to perform. The following code-snippets show how the
+decide what next steps to perform. The following code snippets show how the
xstats API can be used to achieve this goal.
-First step is to get all statistics names and list them:
+The first step is to get all statistics names and list them:
.. code-block:: c
@@ -545,7 +542,7 @@ First step is to get all statistics names and list them:
The application has access to the names of all of the statistics that the PMD
exposes. The application can decide which statistics are of interest, cache the
-ids of those statistics by looking up the name as follows:
+IDs of those statistics by looking up the name as follows:
.. code-block:: c
@@ -564,8 +561,7 @@ ids of those statistics by looking up the name as follows:
The API provides flexibility to the application so that it can look up multiple
statistics using an array containing multiple ``id`` numbers. This reduces the
-function call overhead of retrieving statistics, and makes lookup of multiple
-statistics simpler for the application.
+function call overhead of retrieving statistics and simplifies the application's lookup of multiple statistics.
.. code-block:: c
@@ -585,8 +581,8 @@ statistics simpler for the application.
This array lookup API for xstats allows the application create multiple
"groups" of statistics, and look up the values of those IDs using a single API
-call. As an end result, the application is able to achieve its goal of
-monitoring a single statistic ("rx_errors" in this case), and if that shows
+call. As a result, the application can achieve its goal of
+monitoring a single statistic (in this case, "rx_errors"). If that shows
packets being dropped, it can easily retrieve a "set" of statistics using the
IDs array parameter to ``rte_eth_xstats_get_by_id`` function.
@@ -597,23 +593,23 @@ NIC Reset API
int rte_eth_dev_reset(uint16_t port_id);
-Sometimes a port has to be reset passively. For example when a PF is
+There are times when a port has to be reset passively. For example, when a PF is
reset, all its VFs should also be reset by the application to make them
-consistent with the PF. A DPDK application also can call this function
-to trigger a port reset. Normally, a DPDK application would invokes this
+consistent with the PF. A DPDK application can also call this function
+to trigger a port reset. Normally, a DPDK application would invoke this
function when an RTE_ETH_EVENT_INTR_RESET event is detected.
-It is the duty of the PMD to trigger RTE_ETH_EVENT_INTR_RESET events and
-the application should register a callback function to handle these
+The PMD's duty is to trigger RTE_ETH_EVENT_INTR_RESET events.
+The application should register a callback function to handle these
events. When a PMD needs to trigger a reset, it can trigger an
RTE_ETH_EVENT_INTR_RESET event. On receiving an RTE_ETH_EVENT_INTR_RESET
-event, applications can handle it as follows: Stop working queues, stop
+event, applications can proceed as follows: stop working queues, stop
calling Rx and Tx functions, and then call rte_eth_dev_reset(). For
thread safety all these operations should be called from the same thread.
For example when PF is reset, the PF sends a message to notify VFs of
-this event and also trigger an interrupt to VFs. Then in the interrupt
-service routine the VFs detects this notification message and calls
+this event and also trigger an interrupt to VFs. Then, in the interrupt
+service routine, the VFs detect this notification message and call
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
This means that a PF reset triggers an RTE_ETH_EVENT_INTR_RESET
event within VFs. The function rte_eth_dev_callback_process() will
@@ -621,13 +617,12 @@ call the registered callback function. The callback function can trigger
the application to handle all operations the VF reset requires including
stopping Rx/Tx queues and calling rte_eth_dev_reset().
-The rte_eth_dev_reset() itself is a generic function which only does
-some hardware reset operations through calling dev_unint() and
-dev_init(), and itself does not handle synchronization, which is handled
+rte_eth_dev_reset() is a generic function that only performs hardware reset operations by calling dev_uninit() and
+dev_init(). It does not handle synchronization, which is handled
by application.
The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
-the application to handle reset event. It is duty of application to
+the application to handle the reset event. It is the duty of the application to
handle all synchronization before it calls rte_eth_dev_reset().
The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
@@ -635,15 +630,15 @@ The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
Proactive Error Handling Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``,
-different from the application invokes recovery in PASSIVE mode,
-the PMD automatically recovers from error in PROACTIVE mode,
+This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``.
+Unlike PASSIVE mode, where the application invokes recovery,
+in PROACTIVE mode the PMD recovers from errors automatically,
and only a small amount of work is required for the application.
During error detection and automatic recovery,
the PMD sets the data path pointers to dummy functions
(which will prevent the crash),
-and also make sure the control path operations fail with a return code ``-EBUSY``.
+and ensures that control path operations fail with a return code ``-EBUSY``.
Because the PMD recovers automatically,
the application can only sense that the data flow is disconnected for a while
@@ -655,9 +650,9 @@ three events are available:
``RTE_ETH_EVENT_ERR_RECOVERING``
Notify the application that an error is detected
- and the recovery is being started.
+ and the recovery is beginning.
Upon receiving the event, the application should not invoke
- any control path function until receiving
+ any control path function until receiving the
``RTE_ETH_EVENT_RECOVERY_SUCCESS`` or ``RTE_ETH_EVENT_RECOVERY_FAILED`` event.
.. note::
@@ -667,7 +662,7 @@ three events are available:
because a larger error may occur during the recovery.
``RTE_ETH_EVENT_RECOVERY_SUCCESS``
- Notify the application that the recovery from error is successful,
+ Notify the application that the recovery from the error was successful,
the PMD already re-configures the port,
and the effect is the same as a restart operation.
--
2.34.1
* [PATCH 3/9] doc: reword argparse section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
2024-05-13 15:59 ` [PATCH 1/9] doc: reword design section in contributors guidelines Nandini Persad
2024-05-13 15:59 ` [PATCH 2/9] doc: reword pmd section in prog guide Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 19:01 ` Stephen Hemminger
2024-05-13 15:59 ` [PATCH 4/9] doc: reword service cores " Nandini Persad
` (9 subsequent siblings)
12 siblings, 1 reply; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
made small edits to sections 6.1 and 6.2 intro
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/argparse_lib.rst | 72 +++++++++++++-------------
1 file changed, 35 insertions(+), 37 deletions(-)
diff --git a/doc/guides/prog_guide/argparse_lib.rst b/doc/guides/prog_guide/argparse_lib.rst
index a6ac11b1c0..a2af7d49e9 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -4,22 +4,21 @@
Argparse Library
================
-The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+The argparse library provides argument parsing functionality and makes it easy to write user-friendly command-line programs.
Features and Capabilities
-------------------------
-- Support parsing optional argument (which could take with no-value,
- required-value and optional-value).
+- Supports parsing of optional arguments (which may take no value,
+  a required value, or an optional value).
-- Support parsing positional argument (which must take with required-value).
+- Supports parsing of positional arguments (which must take a required value).
-- Support automatic generate usage information.
+- Supports automatic generation of usage information.
-- Support issue errors when provide with invalid arguments.
+- Issues errors when an argument is invalid.
-- Support parsing argument by two ways:
+- Supports parsing arguments in two ways:
#. autosave: used for parsing known value types;
#. callback: will invoke user callback to parse.
@@ -27,7 +26,7 @@ Features and Capabilities
Usage Guide
-----------
-The following code demonstrates how to use:
+The following code demonstrates how to use the library:
.. code-block:: C
@@ -89,12 +88,12 @@ The following code demonstrates how to use:
...
}
-In this example, the arguments which start with a hyphen (-) are optional
-arguments (they're "--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff"); and the
-arguments which don't start with a hyphen (-) are positional arguments
-(they're "ooo"/"ppp").
+In this example, the arguments that start with a hyphen (-) are optional
+arguments ("--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff").
+The arguments that do not start with a hyphen (-) are positional arguments
+("ooo"/"ppp").
-Every argument must be set whether to carry a value (one of
+Every argument must set whether it carries a value (one of
``RTE_ARGPARSE_ARG_NO_VALUE``, ``RTE_ARGPARSE_ARG_REQUIRED_VALUE`` and
``RTE_ARGPARSE_ARG_OPTIONAL_VALUE``).
@@ -105,26 +104,26 @@ Every argument must be set whether to carry a value (one of
User Input Requirements
~~~~~~~~~~~~~~~~~~~~~~~
-For optional arguments which take no-value,
+For optional arguments which take no value,
the following mode is supported (take above "--aaa" as an example):
- The single mode: "--aaa" or "-a".
-For optional arguments which take required-value,
+For optional arguments which take a required value,
the following two modes are supported (take above "--bbb" as an example):
- The kv mode: "--bbb=1234" or "-b=1234".
- The split mode: "--bbb 1234" or "-b 1234".
-For optional arguments which take optional-value,
+For optional arguments which take an optional value,
the following two modes are supported (take above "--ccc" as an example):
- The single mode: "--ccc" or "-c".
- The kv mode: "--ccc=123" or "-c=123".
-For positional arguments which must take required-value,
+For positional arguments, which must take a required value,
their values are parsing in the order defined.
.. note::
@@ -132,15 +131,15 @@ their values are parsing in the order defined.
The compact mode is not supported.
Take above "-a" and "-d" as an example, don't support "-ad" input.
-Parsing by autosave way
+Parsing via Autosave
~~~~~~~~~~~~~~~~~~~~~~~
-Argument of known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
-could be parsed using this autosave way,
-and its result will save in the ``val_saver`` field.
+Arguments of a known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
+can be parsed using the autosave method.
+The result will be saved in the ``val_saver`` field.
In the above example, the arguments "--aaa"/"--bbb"/"--ccc" and "ooo"
-both use this way, the parsing is as follows:
+all use this method. The parsing is as follows:
- For argument "--aaa", it is configured as no-value,
so the ``aaa_val`` will be set to ``val_set`` field
@@ -150,28 +149,27 @@ both use this way, the parsing is as follows:
so the ``bbb_val`` will be set to user input's value
(e.g. will be set to 1234 with input "--bbb 1234").
-- For argument "--ccc", it is configured as optional-value,
- if user only input "--ccc" then the ``ccc_val`` will be set to ``val_set`` field
+- For argument "--ccc", it is configured as optional-value.
+  If the user only inputs "--ccc", then the ``ccc_val`` will be set to the ``val_set`` field
which is 200 in the above example;
- if user input "--ccc=123", then the ``ccc_val`` will be set to 123.
+  If the user inputs "--ccc=123", then the ``ccc_val`` will be set to 123.
- For argument "ooo", it is positional argument,
the ``ooo_val`` will be set to user input's value.
-Parsing by callback way
-~~~~~~~~~~~~~~~~~~~~~~~
-
-It could also choose to use callback to parse,
-just define a unique index for the argument
-and make the ``val_save`` field to be NULL also zero value-type.
+Parsing via Callback
+~~~~~~~~~~~~~~~~~~~~
+
+You may choose to use the callback method to parse.
+To do so, define a unique index for the argument
+and set the ``val_saver`` field to NULL with a zero value-type.
-In the above example, the arguments "--ddd"/"--eee"/"--fff" and "ppp" both use this way.
+In the above example, the arguments "--ddd"/"--eee"/"--fff" and "ppp" all use this method.
-Multiple times argument
+Multiple Times Argument
~~~~~~~~~~~~~~~~~~~~~~~
-If want to support the ability to enter the same argument multiple times,
-then should mark ``RTE_ARGPARSE_ARG_SUPPORT_MULTI`` in the ``flags`` field.
+If you want to support the ability to enter the same argument multiple times,
+then you should mark ``RTE_ARGPARSE_ARG_SUPPORT_MULTI`` in the ``flags`` field.
For example:
.. code-block:: C
@@ -182,5 +180,5 @@ Then the user input could contain multiple "--xyz" arguments.
.. note::
- The multiple times argument only support with optional argument
+   The multiple times argument is only supported with optional arguments
and must be parsed by callback way.
--
2.34.1
* [PATCH 4/9] doc: reword service cores section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (2 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 3/9] doc: reword argparse " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 5/9] doc: reword trace library " Nandini Persad
` (8 subsequent siblings)
12 siblings, 0 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Harry van Haaren
made minor syntax changes to section 8 of programmer's guide, service cores
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/service_cores.rst | 32 ++++++++++++-------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/service_cores.rst b/doc/guides/prog_guide/service_cores.rst
index d4e6c3d6e6..59da3964bf 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,38 +4,38 @@
Service Cores
=============
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
+DPDK has a concept known as service cores. Service cores enable a dynamic way of
+performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided that optionally allows applications to control how the service
cores are used at runtime.
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
require CPU cycles to operate) and service cores (DPDK lcores, tasked with
running services). The power of the service core concept is that the mapping
-between service cores and services can be configured to abstract away the
+between service cores and services can be configured to simplify the
difference between platforms and environments.
-For example, the Eventdev has hardware and software PMDs. Of these the software
+For example, the Eventdev has hardware and software PMDs. Of these, the software
PMD requires an lcore to perform the scheduling operations, while the hardware
PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in software.
For detailed information about the service core API, please refer to the docs.
Service Core Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are two methods to having service cores in a DPDK application, either by
+There are two ways to have service cores in a DPDK application: by
using the service coremask, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-s` coremask argument to EAL, which will
-take any cores available in the main DPDK coremask, and if the bits are also set
-in the service coremask the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-s` coremask argument to the EAL, which will
+take any cores available in the main DPDK coremask. If the bits are also set
+in the service coremask, the cores become service-cores instead of DPDK
application lcores.
Enabling Services on Cores
~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set of
service cores. Enabling a service on a particular core means that the lcore in
question will run the service. Disabling that core on the service stops the
lcore in question from running the service.
@@ -48,8 +48,8 @@ function to run the service.
Service Core Statistics
~~~~~~~~~~~~~~~~~~~~~~~
-The service core library is capable of collecting runtime statistics like number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the number
+of calls to a specific service, and the number of cycles used by the service. The
cycle count collection is dynamically configurable, allowing any application to
profile the services running on the system at any time.
@@ -58,9 +58,9 @@ Service Core Tracing
The service core library is instrumented with tracepoints using the DPDK Trace
Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to use the
+state. To activate tracing when launching a DPDK program, it is necessary to use the
``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core tracing::
+to enable. Here is an example if you want to specify only service core tracing::
./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" --trace="lib.eal.service*"
--
2.34.1
* [PATCH 5/9] doc: reword trace library section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (3 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 4/9] doc: reword service cores " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 6/9] doc: reword log " Nandini Persad
` (7 subsequent siblings)
12 siblings, 0 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Jerin Jacob, Sunil Kumar Kori
made minor syntax edits to sect 9.1-9.7 of prog guide
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/trace_lib.rst | 50 ++++++++++++++---------------
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..e2983017d8 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism will be useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
frequently than log messages, often in the range of thousands per second, with
very little execution overhead.
Logging is more appropriate for a very high-level analysis of less frequent
events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
Simply put, logging is one of the many use cases that can be satisfied with
tracing.
DPDK tracing library features
-----------------------------
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with minimum
impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
- Support ``overwrite`` and ``discard`` trace mode operations.
- String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a tracepoint?
+How to Add a Tracepoint
------------------------
This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
rte_trace_point_emit_string(str);
)
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
The user can choose any name for the tracepoint.
However, when adding a tracepoint in the DPDK library, the
``rte_<library_name>_trace_[<domain>_]<name>`` naming convention must be
followed.
The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition as the following
function template:
.. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the tracepoint
+Register the Tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -122,40 +122,40 @@ convention.
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
- For generic tracepoint or for tracepoint used in public header files,
+ For a generic tracepoint or for the tracepoint used in public header files,
the user must export a ``__<trace_function_name>`` symbol
in the library ``.map`` file for this tracepoint
- to be used out of the library, in shared builds.
+ to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast path tracepoint
+Fast Path Tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event record mode
+Event Record Mode
-----------------
-Event record mode is an attribute of trace buffers. Trace library exposes the
+Event record mode is an attribute of trace buffers. The trace library exposes the
following modes:
Overwrite
- When the trace buffer is full, new trace events overwrites the existing
+ When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
When the trace buffer is full, new trace events will be discarded.
-The mode can be configured either using EAL command line parameter
-``--trace-mode`` on application boot up or use ``rte_trace_mode_set()`` API to
+The mode can be configured either using the EAL command line parameter
+``--trace-mode`` on application boot up or use the ``rte_trace_mode_set()`` API to
configure at runtime.
-Trace file location
+Trace File Location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
@@ -167,7 +167,7 @@ option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and analyze the recorded events
+View and Analyze Recorded Events
------------------------------------
Once the trace directory is available, the user can view/inspect the recorded
@@ -176,7 +176,7 @@ events.
There are many tools you can use to read DPDK traces:
#. ``babeltrace`` is a command-line utility that converts trace formats; it
- supports the format that DPDK trace library produces, CTF, as well as a
+ supports the format that the DPDK trace library produces, CTF, as well as a
basic text output that can be grep'ed.
The babeltrace command is part of the Open Source Babeltrace project.
--
2.34.1
* [PATCH 6/9] doc: reword log library section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (4 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 5/9] doc: reword trace library " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 7/9] doc: reword cmdline " Nandini Persad
` (6 subsequent siblings)
12 siblings, 0 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Jerin Jacob, Sunil Kumar Kori
minor changes made for syntax in the log library section and 7.1
section of the programmer's guide. A couple sentences at the end of the
trace library section were also edited.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/cmdline.rst | 24 +++++++++++-----------
doc/guides/prog_guide/log_lib.rst | 32 ++++++++++++++---------------
doc/guides/prog_guide/trace_lib.rst | 22 ++++++++++----------
3 files changed, 39 insertions(+), 39 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..6b10ab6c99 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -5,8 +5,8 @@ Command-line Library
====================
Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
This chapter covers the basics of the command-line library and how to use it in an application.
Library Features
@@ -18,7 +18,7 @@ The DPDK command-line library supports the following features:
* Ability to read and process commands taken from an input file, e.g. startup script
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
* Strings
* Signed/unsigned 16/32/64-bit integers
@@ -56,7 +56,7 @@ Creating a Command List File
The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
While these can be piped to it via standard input, using a list file is probably best.
-The format of the list file must be:
+The format of the list file must follow these requirements:
* Comment lines start with '#' as first non-whitespace character
@@ -75,7 +75,7 @@ The format of the list file must be:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than a the type name.
+   have the comma-separated option list placed in braces, rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent ".c" file.
Providing the Function Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
and named ``cmd_<cmdname>_parsed``.
The function prototypes can be seen in the generated output header.
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
So, given the commands in our worked example below: ``quit`` and ``show port stats <n>``,
the callback functions would be:
@@ -151,11 +151,11 @@ the callback functions would be:
...
}
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To access the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -179,7 +179,7 @@ To integrate the script output with the application,
we must ``#include`` the generated header into our applications C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (modifiable via script parameter).
The callback functions may be in this same file, or in a separate one -
they just need to be available to the linker at build-time.
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index ff9d1b54a2..05f032dfad 100644
--- a/doc/guides/prog_guide/log_lib.rst
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -5,7 +5,7 @@ Log Library
===========
The DPDK Log library provides the logging functionality for other DPDK libraries and drivers.
-By default, in a Linux application, logs are sent to syslog and also to the console.
+By default, in a Linux application, logs are sent to syslog and the console.
On FreeBSD and Windows applications, logs are sent only to the console.
However, the log function can be overridden by the user to use a different logging mechanism.
@@ -26,14 +26,14 @@ These levels, specified in ``rte_log.h`` are (from most to least important):
At runtime, only messages of a configured level or above (i.e. of higher importance)
will be emitted by the application to the log output.
-That level can be configured either by the application calling the relevant APIs from the logging library,
+That level can be configured either by the application calling relevant APIs from the logging library,
or by the user passing the ``--log-level`` parameter to the EAL via the application.
Setting Global Log Level
~~~~~~~~~~~~~~~~~~~~~~~~
To adjust the global log level for an application,
-just pass a numeric level or a level name to the ``--log-level`` EAL parameter.
+pass a numeric level or a level name to the ``--log-level`` EAL parameter.
For example::
/path/to/app --log-level=error
@@ -47,9 +47,9 @@ Within an application, the log level can be similarly set using the ``rte_log_se
Setting Log Level for a Component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In some cases, for example, for debugging purposes,
-it may be desirable to increase or decrease the log level for only a specific component, or set of components.
-To facilitate this, the ``--log-level`` argument also accepts an, optionally wildcarded, component name,
+In some cases (such as debugging purposes),
+you may want to increase or decrease the log level for only a specific component or set of components.
+To facilitate this, the ``--log-level`` argument also accepts an optionally wildcarded component name,
along with the desired level for that component.
For example::
@@ -57,13 +57,13 @@ For example::
/path/to/app --log-level=lib.*:warning
-Within an application, the same result can be got using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
+Within an application, you can achieve the same result using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
Using Logging APIs to Generate Log Messages
-------------------------------------------
-To output log messages, ``rte_log()`` API function should be used.
-As well as the log message, ``rte_log()`` takes two additional parameters:
+To output log messages, use the ``rte_log()`` API function.
+In addition to the log message, ``rte_log()`` takes two other parameters:
* The log level
* The log component type
@@ -73,16 +73,16 @@ The component type is a unique id that identifies the particular DPDK component
To get this id, each component needs to register itself at startup,
using the macro ``RTE_LOG_REGISTER_DEFAULT``.
This macro takes two parameters, with the second being the default log level for the component.
-The first parameter, called "type", the name of the "logtype", or "component type" variable used in the component.
-This variable will be defined by the macro, and should be passed as the second parameter in calls to ``rte_log()``.
+The first parameter, called "type", is the name of the "logtype", or "component type" variable used in the component.
+This variable will be defined by the macro and should be passed as the second parameter in calls to ``rte_log()``.
In general, most DPDK components define their own logging macros to simplify the calls to the log APIs.
They do this by:
* Hiding the component type parameter inside the macro so it never needs to be passed explicitly.
* Using the log-level definitions given in ``rte_log.h`` to allow short textual names to be used in
- place of the numeric log levels.
+ place of numeric log levels.
-The following code is taken from ``rte_cfgfile.c`` and shows the log registration,
+The following code is taken from ``rte_cfgfile.c`` and shows the log registration
and subsequent definition of a shortcut logging macro.
It can be used as a template for any new components using DPDK logging.
@@ -97,10 +97,10 @@ It can be used as a template for any new components using DPDK logging.
it should be placed near the top of the C file using it.
If not, the logtype variable should be defined as an "extern int" near the top of the file.
- Similarly, if logging is to be done by multiple files in a component,
- only one file should register the logtype via the macro,
+ Similarly, if logging will be done by multiple files in a component,
+ only one file should register the logtype via the macro
and the logtype should be defined as an "extern int" in a common header file.
- Any component-specific logging macro should similarly be defined in that header.
+ Any component-specific logging macro should be similarly defined in that header.
Throughout the cfgfile library, all logging calls are therefore of the form:
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index e2983017d8..4177f8ba15 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -195,12 +195,12 @@ to babeltrace with no options::
all their events, merging them in chronological order.
You can pipe the output of the babeltrace into a tool like grep(1) for further
-filtering. Below example grep the events for ``ethdev`` only::
+filtering. Here's an example of how to grep the events for ``ethdev`` only::
babeltrace /tmp/my-dpdk-trace | grep ethdev
You can pipe the output of babeltrace into a tool like wc(1) to count the
-recorded events. Below example count the number of ``ethdev`` events::
+recorded events. Below is an example of counting the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
@@ -210,14 +210,14 @@ Use the tracecompass GUI tool
``Tracecompass`` is another tool to view/analyze the DPDK traces which gives
a graphical view of events. Like ``babeltrace``, tracecompass also provides
an interface to search for a particular event.
-To use ``tracecompass``, following are the minimum required steps:
+To use ``tracecompass``, the following are the minimum required steps:
- Install ``tracecompass`` to the localhost. Variants are available for Linux,
Windows, and OS-X.
- Launch ``tracecompass`` which will open a graphical window with trace
management interfaces.
-- Open a trace using ``File->Open Trace`` option and select metadata file which
- is to be viewed/analyzed.
+- Open a trace using the ``File->Open Trace`` option and select the metadata file which
+ will be viewed/analyzed.
For more details, refer
`Trace Compass <https://www.eclipse.org/tracecompass/>`_.
@@ -225,7 +225,7 @@ For more details, refer
Quick start
-----------
-This section steps you through the details of generating trace and viewing it.
+This section steps you through the details of generating the trace and viewing it.
- Start the dpdk-test::
@@ -238,8 +238,8 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that uses ``Common Trace
-Format (CTF)``. ``CTF`` specification consists of the following units to create
+The DPDK trace library is designed to generate traces that use the ``Common Trace
+Format (CTF)``. The ``CTF`` specification consists of the following units to create
a trace.
- ``Stream`` Sequence of packets.
@@ -249,7 +249,7 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details broadly divided into the following areas:
+Implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
@@ -277,7 +277,7 @@ per thread to enable lock less trace-emit function.
For non lcore threads, the trace memory is allocated on the first trace
emission.
-For lcore threads, if trace points are enabled through a EAL option, the trace
+For lcore threads, if trace points are enabled through an EAL option, the trace
memory is allocated when the threads are known of DPDK
(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
@@ -348,7 +348,7 @@ trace.header
| timestamp [47:0] |
+----------------------+
-The trace header is 64 bits, it consists of 48 bits of timestamp and 16 bits
+The trace header is 64 bits. It consists of 48 bits of timestamp and 16 bits of
event ID.
The ``packet.header`` and ``packet.context`` will be written in the slow path
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
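To make the registration-plus-macro pattern from the log library section above concrete, here is a self-contained sketch. It deliberately avoids DPDK: ``log_register``, ``mock_log``, the level constants and ``CFG_LOG`` are all invented stand-ins for ``RTE_LOG_REGISTER_DEFAULT``, ``rte_log()`` and the component shortcut macros the section describes.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>

/* Invented level constants; rte_log.h defines RTE_LOG_EMERG..RTE_LOG_DEBUG
 * with the same "lower number = more important" ordering. */
enum { LOG_LEVEL_ERR = 4, LOG_LEVEL_INFO = 7, LOG_LEVEL_DEBUG = 8 };

static int cfgfile_logtype;        /* component id, set at registration */
static int logtype_levels[16];     /* configured level per logtype */
static int next_logtype;

/* Mimics RTE_LOG_REGISTER_DEFAULT(type, level): allocate a component id
 * and record its default log level. */
static int log_register(int default_level)
{
    int id = next_logtype++;
    logtype_levels[id] = default_level;
    return id;
}

/* Mimics rte_log(level, logtype, ...): emit only if the message level is
 * at or above (numerically <=) the component's configured level. */
static int mock_log(int level, int logtype, const char *fmt, ...)
{
    if (level > logtype_levels[logtype])
        return 0;                  /* filtered out */
    va_list ap;
    va_start(ap, fmt);
    vfprintf(stderr, fmt, ap);
    va_end(ap);
    return 1;                      /* emitted */
}

/* Component-specific shortcut macro, in the spirit of rte_cfgfile.c:
 * hides the logtype and lets short level names be used. */
#define CFG_LOG(level, fmt, ...) \
    mock_log(LOG_LEVEL_##level, cfgfile_logtype, "CFGFILE: " fmt "\n", __VA_ARGS__)

static int logging_demo(void)
{
    cfgfile_logtype = log_register(LOG_LEVEL_INFO);
    int emitted  = CFG_LOG(ERR, "bad section name %s", "foo"); /* ERR <= INFO: shown */
    int filtered = CFG_LOG(DEBUG, "parsed %d lines", 42);      /* DEBUG > INFO: dropped */
    return emitted == 1 && filtered == 0;
}
```

The point of the macro layer is exactly what the section states: the component type never needs to be passed explicitly, and textual level names replace numeric levels at call sites.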
* [PATCH 7/9] doc: reword cmdline section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (5 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 6/9] doc: reword log " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 8/9] doc: reword stack library " Nandini Persad
` (5 subsequent siblings)
12 siblings, 0 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
minor syntax edits
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/cmdline.rst | 34 +++++++++++++++----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index 6b10ab6c99..8aa1ef180b 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -62,7 +62,7 @@ The format of the list file must follow these requirements:
* One command per line
-* Variable fields are prefixed by the type-name in angle-brackets, for example:
+* Variable fields are prefixed by the type-name in angle-brackets. For example:
* ``<STRING>message``
@@ -75,7 +75,7 @@ The format of the list file must follow these requirements:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than the type name.
+ have the comma-separated option list placed in braces rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -112,7 +112,7 @@ The generated content includes:
* A command-line context array definition, suitable for passing to ``cmdline_new``
-If so desired, the script can also output function stubs for the callback functions for each command.
+If needed, the script can also output function stubs for the callback functions for each command.
This behaviour is triggered by passing the ``--stubs`` flag to the script.
In this case, an output file must be provided with a filename ending in ".h",
and the callback stubs will be written to an equivalent ".c" file.
@@ -120,7 +120,7 @@ and the callback stubs will be written to an equivalent ".c" file.
.. note::
The stubs are written to a separate file,
- to allow continuous use of the script to regenerate the command-line header,
+ to allow continuous use of the script to regenerate the command-line header
without overwriting any code the user has added to the callback functions.
This makes it easy to incrementally add new commands to an existing application.
@@ -154,7 +154,7 @@ the callback functions would be:
These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
-The same "cmdname" stem is used in the naming of the generated structures too.
+The same "cmdname" stem is used in the naming of the generated structures as well.
To access the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -176,13 +176,12 @@ Integrating with the Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To integrate the script output with the application,
-we must ``#include`` the generated header into our applications C file,
+we must ``#include`` the generated header into our application's C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
``ctx`` by default (modifiable via script parameter).
-The callback functions may be in this same file, or in a separate one -
-they just need to be available to the linker at build-time.
+The callback functions may be in the same or a separate file, as long as they are available to the linker at build-time.
Limitations of the Script Approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -242,19 +241,19 @@ The resulting struct looks like:
As before, we choose names to match the tokens in the command.
Since our numeric parameter is a 16-bit value, we use ``uint16_t`` type for it.
-Any of the standard sized integer types can be used as parameters, depending on the desired result.
+Any of the standard-sized integer types can be used as parameters, depending on the desired result.
Beyond the standard integer types,
-the library also allows variable parameters to be of a number of other types,
+the library also allows variable parameters to be of a number of other types
as called out in the feature list above.
* For variable string parameters,
the type should be ``cmdline_fixed_string_t`` - the same as for fixed tokens,
but these will be initialized differently (as described below).
-* For ethernet addresses use type ``struct rte_ether_addr``
+* For ethernet addresses, use type ``struct rte_ether_addr``
-* For IP addresses use type ``cmdline_ipaddr_t``
+* For IP addresses, use type ``cmdline_ipaddr_t``
Providing Field Initializers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -267,6 +266,7 @@ For fixed string tokens, like "quit", "show" and "port", the initializer will be
static cmdline_parse_token_string_t cmd_quit_quit_tok =
TOKEN_STRING_INITIALIZER(struct cmd_quit_result, quit, "quit");
+
The convention for naming used here is to include the base name of the overall result structure -
``cmd_quit`` in this case,
as well as the name of the field within that structure - ``quit`` in this case, followed by ``_tok``.
@@ -311,8 +311,8 @@ The callback function should have type:
where the first parameter is a pointer to the result structure defined above,
the second parameter is the command-line instance,
and the final parameter is a user-defined pointer provided when we associate the callback with the command.
-Most callback functions only use the first parameter, or none at all,
-but the additional two parameters provide some extra flexibility,
+Most callback functions only use the first parameter or none at all,
+but the additional two parameters provide some extra flexibility
to allow the callback to work with non-global state in your application.
For our two example commands, the relevant callback functions would look very similar in definition.
@@ -341,7 +341,7 @@ Associating Callback and Command
The ``cmdline_parse_inst_t`` type defines a "parse instance",
i.e. a sequence of tokens to be matched and then an associated function to be called.
-Also included in the instance type are a field for help text for the command,
+Also included in the instance type are a field for help text for the command
and any additional user-defined parameter to be passed to the callback functions referenced above.
For example, for our simple "quit" command:
@@ -362,8 +362,8 @@ then set the user-defined parameter to NULL,
provide a help message to be given, on request, to the user explaining the command,
before finally listing out the single token to be matched for this command instance.
-For our second, port stats, example,
-as well as making things a little more complicated by having multiple tokens to be matched,
+For our second "port stats" example,
+as well as making things more complex by having multiple tokens to be matched,
we can also demonstrate passing in a parameter to the function.
Let us suppose that our application does not always use all the ports available to it,
but instead only uses a subset of the ports, stored in an array called ``active_ports``.
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
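For reference, a minimal command list file matching the worked example in this section might look like the following. The ``quit`` and ``show port stats`` commands come from the section's own example; the ``set mode`` line is purely illustrative, showing the brace-delimited option-list syntax.

```
# command list for dpdk-cmdline-gen.py; '#' starts a comment line
quit
show port stats <UINT16>n
set mode <(rx,tx,rxtx)>mode
```

One command per line, with variable fields prefixed by the type name in angle brackets, as the format requirements above describe.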
* [PATCH 8/9] doc: reword stack library section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (6 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 7/9] doc: reword cmdline " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-05-13 15:59 ` [PATCH 9/9] doc: reword rcu " Nandini Persad
` (4 subsequent siblings)
12 siblings, 0 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev
minor change made to wording
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/stack_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..a51df60d13 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -44,8 +44,8 @@ Lock-free Stack
The lock-free stack consists of a linked list of elements, each containing a
data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
impede the forward progress of any other thread.
The lock-free push operation enqueues a linked list of pointers by pointing the
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
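The push/pop mechanism the stack section describes (a linked list with an atomic depth counter, updated via compare-and-swap loops) can be sketched with C11 atomics. This is not ``rte_stack``: all names are invented, and a production lock-free stack additionally needs ABA protection (DPDK pairs the top pointer with a counter for this), so treat it as a single-threaded illustration of the CAS loops only.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Each element carries a data pointer and a next pointer, as in the text. */
struct elem {
    void *data;
    struct elem *next;
};

struct lf_stack {
    _Atomic(struct elem *) top;  /* head of the linked list */
    atomic_size_t depth;         /* atomic stack depth counter */
};

static void lf_push(struct lf_stack *s, struct elem *e)
{
    struct elem *old = atomic_load(&s->top);
    do {
        e->next = old;           /* point the new element at the current top */
    } while (!atomic_compare_exchange_weak(&s->top, &old, e));
    atomic_fetch_add(&s->depth, 1);
}

static struct elem *lf_pop(struct lf_stack *s)
{
    struct elem *old = atomic_load(&s->top);
    do {
        if (old == NULL)
            return NULL;         /* empty stack */
        /* NOTE: dereferencing old->next here is only safe single-threaded;
         * concurrent use requires ABA protection, omitted in this sketch. */
    } while (!atomic_compare_exchange_weak(&s->top, &old, old->next));
    atomic_fetch_sub(&s->depth, 1);
    return old;
}

static int stack_demo(void)
{
    struct lf_stack s = { .top = NULL, .depth = 0 };
    struct elem a = { .data = "a" }, b = { .data = "b" };
    lf_push(&s, &a);
    lf_push(&s, &b);
    if (atomic_load(&s.depth) != 2) return 0;
    if (lf_pop(&s) != &b) return 0;  /* LIFO order */
    if (lf_pop(&s) != &a) return 0;
    return lf_pop(&s) == NULL && atomic_load(&s.depth) == 0;
}
```

The CAS loop is what gives the lock-free property discussed above: a preempted thread leaves the stack in a consistent state, so other threads simply retry and make progress.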
* [PATCH 9/9] doc: reword rcu library section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (7 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 8/9] doc: reword stack library " Nandini Persad
@ 2024-05-13 15:59 ` Nandini Persad
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (3 subsequent siblings)
12 siblings, 0 replies; 118+ messages in thread
From: Nandini Persad @ 2024-05-13 15:59 UTC (permalink / raw)
To: dev; +Cc: Honnappa Nagarahalli
simple syntax changes made to rcu library section in programmer's guide
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/rcu_lib.rst | 77 ++++++++++++++++---------------
1 file changed, 40 insertions(+), 37 deletions(-)
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
index d0aef3bc16..c7ae349184 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -8,17 +8,17 @@ RCU Library
Lockless data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
In the following sections, the term "memory" refers to memory allocated
by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory. An example of this is an index of a free element array.
Since these data structures are lockless, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
separate the operation of removing an element into two steps:
#. Delete: in this step, the writer removes the reference to the element from
@@ -64,19 +64,19 @@ quiescent state. Reader thread 3 was not accessing D1 when the delete
operation happened. So, reader thread 3 will not have a reference to the
deleted entry.
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical section for D2 is a quiescent state
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
Factors affecting the RCU mechanism
-----------------------------------
It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
@@ -119,14 +119,14 @@ How to use this library
The application must allocate memory and initialize a QS variable.
Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes, as a parameter, the maximum number of
+reader threads using this variable.
Further, the application can initialize a QS variable using the API
``rte_rcu_qsbr_init()``.
Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use ``lcore_id`` as the thread ID where applicable.
@@ -134,13 +134,13 @@ The application could also use ``lcore_id`` as the thread ID where applicable.
The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
reporting its quiescent state.
Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
+calls (for example, while using eventdev APIs). The writer thread should not
wait for such reader threads to enter quiescent state. The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
+call ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
returns.
@@ -149,13 +149,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
``rte_rcu_qsbr_start()`` returns a token to each caller.
-The writer thread must call ``rte_rcu_qsbr_check()`` API with the token to
-get the current quiescent state status. Option to block till all the reader
+The writer thread must call the ``rte_rcu_qsbr_check()`` API with the token to
+get the current quiescent state status. The option to block till all the reader
threads enter the quiescent state is provided. If this API indicates that
all the reader threads have entered the quiescent state, the application
can free the deleted entry.
-The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock free.
+The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock-free.
Hence, they can be called concurrently from multiple writers even while
running as worker threads.
@@ -173,7 +173,7 @@ polls till all the readers enter the quiescent state or go offline. This API
does not allow the writer to do useful work while waiting and introduces
additional memory accesses due to continuous polling. However, the application
does not have to store the token or the reference to the deleted resource. The
-resource can be freed immediately after ``rte_rcu_qsbr_synchronize()`` API
+resource can be freed immediately after the ``rte_rcu_qsbr_synchronize()`` API
returns.
The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
@@ -181,9 +181,9 @@ The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
quiescent state. The ``rte_rcu_qsbr_check()`` API will not wait for this reader
thread to report the quiescent state status anymore.
-The reader threads should call ``rte_rcu_qsbr_quiescent()`` API to indicate that
+The reader threads should call the ``rte_rcu_qsbr_quiescent()`` API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
-quiescent state query and update the state accordingly.
+quiescent state query and updates the state accordingly.
The ``rte_rcu_qsbr_lock()`` and ``rte_rcu_qsbr_unlock()`` are empty functions.
However, these APIs can aid in debugging issues. One can mark the access to
@@ -203,40 +203,43 @@ the application. When a writer deletes an entry from a data structure, the write
There are several APIs provided to help with this process. The writer
can create a FIFO to store the references to deleted resources using ``rte_rcu_qsbr_dq_create()``.
The resources can be enqueued to this FIFO using ``rte_rcu_qsbr_dq_enqueue()``.
-If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing. It will also reclaim resources on regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
+If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing.
+It will also reclaim resources on a regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call the ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
However, if this resource reclamation process were to be integrated in lock-free data structure libraries, it
-hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms. The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
+hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms.
+
+The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
In any DPDK application, the resource reclamation process using QSBR can be split into 4 parts:
#. Initialization
#. Quiescent State Reporting
-#. Reclaiming Resources
+#. Reclaiming Resources
#. Shutdown
The design proposed here assigns different parts of this process to client libraries and applications. The term 'client library' refers to lock-free data structure libraries such at rte_hash, rte_lpm etc. in DPDK or similar libraries outside of DPDK. The term 'application' refers to the packet processing application that makes use of DPDK such as L3 Forwarding example application, OVS, VPP etc..
-The application has to handle 'Initialization' and 'Quiescent State Reporting'. So,
+The application must handle 'Initialization' and 'Quiescent State Reporting'. Therefore, the application:
-* the application has to create the RCU variable and register the reader threads to report their quiescent state.
-* the application has to register the same RCU variable with the client library.
-* reader threads in the application have to report the quiescent state. This allows for the application to control the length of the critical section/how frequently the application wants to report the quiescent state.
+* Must create the RCU variable and register the reader threads to report their quiescent state.
+* Must register the same RCU variable with the client library.
+* Note that reader threads in the application have to report the quiescent state. This allows for the application to control the length of the critical section/how frequently the application wants to report the quiescent state.
-The client library will handle 'Reclaiming Resources' part of the process. The
+The client library will handle the 'Reclaiming Resources' part of the process. The
client libraries will make use of the writer thread context to execute the memory
-reclamation algorithm. So,
+reclamation algorithm. So, the client library should:
-* client library should provide an API to register a RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
-* client library should use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
-* if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
+* Provide an API to register an RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
+* Use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
+* Note that if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
The 'Shutdown' process needs to be shared between the application and the
-client library.
+client library. Note that:
-* the application should make sure that the reader threads are not using the shared data structure, unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
+* The application should make sure that the reader threads are not using the shared data structure and unregister them from the QSBR variable before calling the client library's shutdown function.
-* client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
+* The client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
Integrating the resource reclamation with client libraries removes the burden from
the application and makes it easy to use lock-free algorithms.
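The division of responsibility described above can be sketched with a simplified model. The code below mocks DPDK's grace-period FIFO (``rte_rcu_qsbr_dq_*``) with a plain ring of (token, pointer) pairs; all names and sizes are illustrative, not the real DPDK implementation:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for DPDK's rte_rcu_qsbr_dq: a FIFO of
 * (grace-period token, resource pointer) pairs.  Real code would use
 * rte_rcu_qsbr_dq_create()/_enqueue()/_reclaim()/_delete() instead. */
#define DQ_SIZE 8

struct dq_entry { uint64_t token; void *res; };

struct mock_dq {
    struct dq_entry ring[DQ_SIZE];
    unsigned int head, tail;     /* tail = oldest un-reclaimed entry */
    uint64_t wr_token;           /* writer's grace-period counter */
};

/* Writer side: after deleting an entry from the data structure, park
 * the freed memory on the FIFO tagged with a new token. */
static int dq_enqueue(struct mock_dq *dq, void *res)
{
    if (dq->head - dq->tail == DQ_SIZE)
        return -1;               /* FIFO full: caller must reclaim first */
    dq->ring[dq->head % DQ_SIZE] =
        (struct dq_entry){ .token = ++dq->wr_token, .res = res };
    dq->head++;
    return 0;
}

/* Reclaim entries whose token every reader has passed.  'min_reader'
 * models the minimum quiescent-state counter over all reader threads. */
static unsigned int dq_reclaim(struct mock_dq *dq, uint64_t min_reader)
{
    unsigned int freed = 0;
    while (dq->tail != dq->head &&
           dq->ring[dq->tail % DQ_SIZE].token <= min_reader) {
        free(dq->ring[dq->tail % DQ_SIZE].res);
        dq->tail++;
        freed++;
    }
    return freed;
}
```

A client library would call dq_enqueue() from its delete path and dq_reclaim() whenever resource allocation fails; the application supplies the reader progress by having its reader threads report quiescent states.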
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* Re: [PATCH 3/9] doc: reword argparse section in prog guide
2024-05-13 15:59 ` [PATCH 3/9] doc: reword argparse " Nandini Persad
@ 2024-05-13 19:01 ` Stephen Hemminger
0 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-05-13 19:01 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Mon, 13 May 2024 08:59:05 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> @@ -132,15 +131,15 @@ their values are parsing in the order defined.
> The compact mode is not supported.
> Take above "-a" and "-d" as an example, don't support "-ad" input.
>
> -Parsing by autosave way
> +Parsing the Autosave Method
> ~~~~~~~~~~~~~~~~~~~~~~~
RST is picky about the title format.
If you build the docs, this change will generate a warning:
ninja -C build doc
ninja: Entering directory `build'
[4/5] Generating doc/guides/html_guides with a custom command
/home/shemminger/DPDK/doc-rework/doc/guides/contributing/design.rst:41: WARNING: Bullet list ends without a blank line; unexpected unindent.
/home/shemminger/DPDK/doc-rework/doc/guides/prog_guide/argparse_lib.rst:135: WARNING: Title underline too short.
Parsing the Autosave Method
~~~~~~~~~~~~~~~~~~~~~~~
/home/shemminger/DPDK/doc-rework/doc/guides/prog_guide/argparse_lib.rst:135: WARNING: Title underline too short.
Parsing the Autosave Method
~~~~~~~~~~~~~~~~~~~~~~~
/home/shemminger/DPDK/doc-rework/doc/guides/prog_guide/argparse_lib.rst:161: WARNING: Title underline too short.
Parsing by Callback Method
~~~~~~~~~
/home/shemminger/DPDK/doc-rework/doc/guides/prog_guide/argparse_lib.rst:161: WARNING: Title underline too short.
Parsing by Callback Method
~~~~~~~~~
[4/5] Running external command doc (wrapped by meson to set env)
Building docs: Doxygen_API(HTML) Doxygen_API(Manpage) HTML_Guides
^ permalink raw reply [flat|nested] 118+ messages in thread
* [PATCH v2 1/9] doc: reword pmd section in prog guide
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (8 preceding siblings ...)
2024-05-13 15:59 ` [PATCH 9/9] doc: reword rcu " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
` (8 more replies)
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (2 subsequent siblings)
12 siblings, 9 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
I made edits for syntax/grammar in the PMD section of the prog guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/poll_mode_drv.rst | 151 ++++++++++++------------
1 file changed, 73 insertions(+), 78 deletions(-)
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 5008b41c60..360af20900 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -6,25 +6,24 @@
Poll Mode Driver
================
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized virtio Poll Mode Drivers.
+The DPDK includes 1 Gigabit, 10 Gigabit, 40 Gigabit and para virtualized virtio Poll Mode Drivers.
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space,
-to configure the devices and their respective queues.
+A Poll Mode Driver (PMD) consists of APIs (provided through the BSD driver running in user space) to configure the devices and their respective queues.
In addition, a PMD accesses the RX and TX descriptors directly without any interrupts
(with the exception of Link Status Change interrupts) to quickly receive,
process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a generic external API for the Ethernet PMDs.
+This section describes the requirements of the PMDs and
+their global design principles. It also proposes a high-level architecture and a generic external API for the Ethernet PMDs.
Requirements and Assumptions
----------------------------
The DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:
-* In the *run-to-completion* model, a specific port's RX descriptor ring is polled for packets through an API.
- Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.
+* In the *run-to-completion* model, a specific port's Rx descriptor ring is polled for packets through an API.
+ Packets are then processed on the same core and placed on a port's Tx descriptor ring through an API for transmission.
-* In the *pipe-line* model, one core polls one or more port's RX descriptor ring through an API.
+* In the *pipe-line* model, one core polls one or more port's Rx descriptor ring through an API.
Packets are received and passed to another core via a ring.
The other core continues to process the packet which then may be placed on a port's TX descriptor ring through an API for transmission.
@@ -50,14 +49,14 @@ The loop for packet processing includes the following steps:
* Retrieve the received packet from the packet queue
-* Process the received packet, up to its retransmission if forwarded
+* Process the received packet up to its retransmission if forwarded
To avoid any unnecessary interrupt processing overhead, the execution environment must not use any asynchronous notification mechanisms.
Whenever needed and appropriate, asynchronous communication should be introduced as much as possible through the use of rings.
Avoiding lock contention is a key issue in a multi-core environment.
-To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
+To address this issue, PMDs are designed to work with per core private resources as much as possible.
+For example, a PMD maintains a separate transmit queue per core, per port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -101,9 +100,9 @@ However, an rte_eth_tx_burst function is effectively implemented by the PMD to m
* Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.
-Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
+Burst-oriented functions are also introduced via the API for services that are extensively used by the PMD.
This applies in particular to buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
-For example, an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
+An example of this would be an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
replenishing multiple descriptors of the receive ring.
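The amortization described above can be illustrated with a toy model; ``mock_pool`` and ``bulk_alloc`` below are hypothetical stand-ins (DPDK's real analogue is ``rte_pktmbuf_alloc_bulk()``, which has the same all-or-nothing semantics):

```c
#include <stddef.h>

/* Toy mempool: a fixed stack of free buffers.  Stands in for a DPDK
 * mempool so the burst-allocation pattern can be shown self-contained. */
#define POOL_SIZE 16

struct mock_pool {
    void *free_objs[POOL_SIZE];
    size_t nb_free;
};

/* Allocate 'n' buffers in one call, or none at all.  One call replaces
 * n separate alloc calls when replenishing an Rx ring's descriptors. */
static int bulk_alloc(struct mock_pool *p, void **objs, size_t n)
{
    if (p->nb_free < n)
        return -1;                    /* all-or-nothing, like DPDK */
    for (size_t i = 0; i < n; i++)
        objs[i] = p->free_objs[--p->nb_free];
    return 0;
}
```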
Logical Cores, Memory and NIC Queues Relationships
@@ -111,7 +110,7 @@ Logical Cores, Memory and NIC Queues Relationships
The DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
Therefore, mbuf allocation associated with local PCIe* interfaces should be allocated from memory pools created in the local memory.
-The buffers should, if possible, remain on the local processor to obtain the best performance results and RX and TX buffer descriptors
+The buffers should, if possible, remain on the local processor to obtain the best performance results and Rx and Tx buffer descriptors
should be populated with mbufs allocated from a mempool allocated from local memory.
The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processors memory.
@@ -120,12 +119,11 @@ This is also true for the pipe-line model provided all logical cores used are lo
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
-concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
+concurrently on the same Tx queue without an SW lock. This PMD feature found in some NICs is useful for:
-* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
+* Removing explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
-* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
- enables more scaling as all workers can send the packets.
+* Enabling greater scalability, e.g. in the eventdev use case, by removing the requirement to have a dedicated Tx core.
See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
@@ -135,8 +133,8 @@ Device Identification, Ownership and Configuration
Device Identification
~~~~~~~~~~~~~~~~~~~~~
-Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI
-identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
+Each NIC port is uniquely designated by its PCI
+identifiers (bus/bridge, device, function) assigned by the PCI probing/enumeration function executed at DPDK initialization.
Based on their PCI identifier, NIC ports are assigned two other identifiers:
* A port index used to designate the NIC port in all functions exported by the PMD API.
@@ -149,14 +147,13 @@ Port Ownership
The Ethernet devices ports can be owned by a single DPDK entity (application, library, PMD, process, etc).
The ownership mechanism is controlled by ethdev APIs and allows to set/remove/get a port owner by DPDK entities.
-It prevents Ethernet ports to be managed by different entities.
+This prevents Ethernet ports from being managed by different entities.
.. note::
- It is the DPDK entity responsibility to set the port owner before using it and to manage the port usage synchronization between different threads or processes.
+   It is the DPDK entity's responsibility to set the port owner before using the port and to manage the port usage synchronization between different threads or processes.
-It is recommended to set port ownership early,
-like during the probing notification ``RTE_ETH_EVENT_NEW``.
+It is recommended to set port ownership early, for instance during the probing notification ``RTE_ETH_EVENT_NEW``.
Device Configuration
~~~~~~~~~~~~~~~~~~~~
@@ -165,7 +162,7 @@ The configuration of each NIC port includes the following operations:
* Allocate PCI resources
-* Reset the hardware (issue a Global Reset) to a well-known default state
+* Reset the hardware to a well-known default state (issue a Global Reset)
* Set up the PHY and the link
@@ -174,7 +171,7 @@ The configuration of each NIC port includes the following operations:
The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.
Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
-This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features for example.
+This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features.
On-the-Fly Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -210,7 +207,7 @@ Each transmit queue is independently configured with the following information:
* The *minimum* transmit packets to free threshold (tx_free_thresh).
When the number of descriptors used to transmit packets exceeds this threshold, the network adaptor should be checked to see if it has written back descriptors.
- A value of 0 can be passed during the TX queue configuration to indicate the default value should be used.
+ A value of 0 can be passed during the Tx queue configuration to indicate the default value should be used.
The default value for tx_free_thresh is 32.
This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.
@@ -222,7 +219,7 @@ Each transmit queue is independently configured with the following information:
A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
The default value for tx_rs_thresh is 32.
This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor.
- This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
+ This saves upstream PCIe* bandwidth resulting from Tx descriptor write-backs.
It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1.
Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.
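The tx_free_thresh rule described above can be sketched as follows; the structure and function names are illustrative, not part of the ethdev API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Models the tx_free_thresh rule: the driver only scans for descriptors
 * the NIC has written back once the number of in-flight descriptors
 * exceeds the threshold (default 32, as in the text above). */
struct mock_txq {
    uint16_t nb_used;         /* descriptors handed to the NIC */
    uint16_t tx_free_thresh;  /* 0 means "use the default" */
};

static bool txq_should_check_writeback(const struct mock_txq *q)
{
    uint16_t thresh = q->tx_free_thresh ? q->tx_free_thresh : 32;
    return q->nb_used > thresh;
}
```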
@@ -244,7 +241,7 @@ One descriptor in the TX ring is used as a sentinel to avoid a hardware race con
.. note::
- When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
+ When configuring for DCB operation at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
Free Tx mbuf on Demand
~~~~~~~~~~~~~~~~~~~~~~
@@ -265,7 +262,7 @@ There are two scenarios when an application may want the mbuf released immediate
One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API
until the reference count on the packet is decremented.
- Then the same packet can be transmitted to the next destination interface.
+ Then, the same packet can be transmitted to the next destination interface.
The application is still responsible for managing any packet manipulations needed
between the different destination interfaces, but a packet copy can be avoided.
This API is independent of whether the packet was transmitted or dropped,
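The second option, polling until the reference count drops, can be modeled minimally; ``mock_tx`` and ``mock_tx_done_cleanup`` are stand-ins for the real transmit path and ``rte_eth_tx_done_cleanup()``:

```c
#include <stdint.h>

/* Models sending one packet out several interfaces without copying it:
 * each transmit pins the buffer with a reference; cleanup releases
 * references for descriptors the NIC has completed. */
struct mock_pkt { uint16_t refcnt; };   /* app itself holds 1 reference */

static void mock_tx(struct mock_pkt *p) { p->refcnt++; }

/* Stand-in for rte_eth_tx_done_cleanup(): returns how many references
 * were released for the 'nb_done' completed descriptors. */
static int mock_tx_done_cleanup(struct mock_pkt *p, int nb_done)
{
    int freed = 0;
    while (nb_done-- > 0 && p->refcnt > 1) {
        p->refcnt--;
        freed++;
    }
    return freed;
}
```

Once the count is back to the application's own reference, the same packet can safely be queued on the next destination interface.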
@@ -288,13 +285,13 @@ Hardware Offload
Depending on driver capabilities advertised by
``rte_eth_dev_info_get()``, the PMD may support hardware offloading
feature like checksumming, TCP segmentation, VLAN insertion or
-lockfree multithreaded TX burst on the same TX queue.
+lockfree multithreaded Tx burst on the same Tx queue.
The support of these offload features implies the addition of dedicated
status bit(s) and value field(s) into the rte_mbuf data structure, along
with their appropriate handling by the receive/transmit functions
exported by each PMD. The list of flags and their precise meaning is
-described in the mbuf API documentation and in the in :ref:`Mbuf Library
+described in the mbuf API documentation and in the :ref:`Mbuf Library
<Mbuf_Library>`, section "Meta Information".
Per-Port and Per-Queue Offloads
@@ -303,14 +300,14 @@ Per-Port and Per-Queue Offloads
In the DPDK offload API, offloads are divided into per-port and per-queue offloads as follows:
* A per-queue offloading can be enabled on a queue and disabled on another queue at the same time.
-* A pure per-port offload is the one supported by device but not per-queue type.
-* A pure per-port offloading can't be enabled on a queue and disabled on another queue at the same time.
+* A pure per-port offload is supported by the device but not at the per-queue level.
+* A pure per-port offloading cannot be enabled on a queue and disabled on another queue at the same time.
* A pure per-port offloading must be enabled or disabled on all queues at the same time.
-* Any offloading is per-queue or pure per-port type, but can't be both types at same devices.
+* Offloading is per-queue or pure per-port type, but cannot be both types on the same device.
* Port capabilities = per-queue capabilities + pure per-port capabilities.
* Any supported offloading can be enabled on all queues.
-The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
+The different offload capabilities can be queried using ``rte_eth_dev_info_get()``.
The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all per-queue offloading capabilities.
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
@@ -329,8 +326,8 @@ per-port type and no matter whether it is set or cleared in
If a per-queue offloading hasn't been enabled in ``rte_eth_dev_configure()``,
it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for individual queue.
A newly added offloads in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()`` input by application
-is the one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
-in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise trigger an error log.
+is the one that hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
+in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise an error log will be triggered.
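The per-queue/per-port rules above can be condensed into a small check; the flag values and names below are invented for illustration and are not DPDK's:

```c
#include <stdint.h>

/* Models the offload capability split: a queue may only request
 * offloads that are per-queue capable or already enabled port-wide. */
#define OFFLOAD_CKSUM  0x1   /* per-queue capable in this model */
#define OFFLOAD_RSS    0x2   /* pure per-port in this model */

struct mock_caps {
    uint64_t queue_capa;     /* like dev_info->[rt]x_queue_offload_capa */
    uint64_t port_capa;      /* like dev_info->[rt]x_offload_capa */
};

/* Queue setup: offloads newly added relative to the port configuration
 * must be per-queue type, otherwise the request is rejected. */
static int queue_setup_offloads(const struct mock_caps *c,
                                uint64_t port_enabled, uint64_t requested)
{
    uint64_t newly_added = requested & ~port_enabled;
    if (newly_added & ~c->queue_capa)
        return -1;           /* pure per-port offload requested per-queue */
    return 0;
}
```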
Poll Mode Driver API
--------------------
@@ -340,8 +337,8 @@ Generalities
By default, all functions exported by a PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
-For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
-Of course, this function can be invoked in parallel by different logical cores on different RX queues.
+For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same Rx queue of the same port.
+This function can be invoked in parallel by different logical cores on different Rx queues.
It is the responsibility of the upper-level application to enforce this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions
@@ -357,7 +354,7 @@ The rte_mbuf data structure includes specific fields to represent, in a generic
For an input packet, most fields of the rte_mbuf structure are filled in by the PMD receive function with the information contained in the receive descriptor.
Conversely, for output packets, most fields of rte_mbuf structures are used by the PMD transmit function to initialize transmit descriptors.
-The mbuf structure is fully described in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
+The mbuf structure is described in depth in the :ref:`Mbuf Library <Mbuf_Library>` chapter.
Ethernet Device API
~~~~~~~~~~~~~~~~~~~
@@ -370,12 +367,12 @@ Ethernet Device Standard Device Arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Standard Ethernet device arguments allow for a set of commonly used arguments/
-parameters which are applicable to all Ethernet devices to be available to for
-specification of specific device and for passing common configuration
+parameters applicable to all Ethernet devices. These arguments/parameters can be used to
+specify particular devices and to pass common configuration
parameters to those ports.
-* ``representor`` for a device which supports the creation of representor ports
- this argument allows user to specify which switch ports to enable port
+* Use ``representor`` for a device which supports the creation of representor ports.
+  This argument allows the user to specify which switch ports to enable port
representors for::
-a DBDF,representor=vf0
@@ -392,7 +389,7 @@ parameters to those ports.
-a DBDF,representor=[pf[0-1],pf2vf[0-2],pf3[3,5-8]]
(Multiple representors in one device argument can be represented as a list)
-Note: PMDs are not required to support the standard device arguments and users
+Note: PMDs are not required to support the standard device arguments. Users
should consult the relevant PMD documentation to see support devargs.
Extended Statistics API
@@ -402,9 +399,9 @@ The extended statistics API allows a PMD to expose all statistics that are
available to it, including statistics that are unique to the device.
Each statistic has three properties ``name``, ``id`` and ``value``:
-* ``name``: A human readable string formatted by the scheme detailed below.
+* ``name``: A human-readable string formatted by the scheme detailed below.
* ``id``: An integer that represents only that statistic.
-* ``value``: A unsigned 64-bit integer that is the value of the statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
@@ -439,7 +436,7 @@ associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc., are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
@@ -466,8 +463,8 @@ lookup of specific statistics. Performant lookup means two things;
The API ensures these requirements are met by mapping the ``name`` of the
statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
The API allows applications to request an array of ``id`` values, so that the
-PMD only performs the required calculations. Expected usage is that the
-application scans the ``name`` of each statistic, and caches the ``id``
+PMD only performs the required calculations. The expected usage is that the
+application scans the ``name`` of each statistic and caches the ``id``
if it has an interest in that statistic. On the fast-path, the integer can be used
to retrieve the actual ``value`` of the statistic that the ``id`` represents.
@@ -486,7 +483,7 @@ statistics.
* ``rte_eth_xstats_get_by_id()``: Fills in an array of ``uint64_t`` values
with matching the provided ``ids`` array. If the ``ids`` array is NULL, it
- returns all statistics that are available.
+ returns all available statistics.
Application Usage
@@ -496,10 +493,10 @@ Imagine an application that wants to view the dropped packet count. If no
packets are dropped, the application does not read any other metrics for
performance reasons. If packets are dropped, the application has a particular
set of statistics that it requests. This "set" of statistics allows the app to
-decide what next steps to perform. The following code-snippets show how the
+decide what next steps to perform. The following code snippets show how the
xstats API can be used to achieve this goal.
-First step is to get all statistics names and list them:
+The first step is to get all statistics names and list them:
.. code-block:: c
@@ -545,7 +542,7 @@ First step is to get all statistics names and list them:
The application has access to the names of all of the statistics that the PMD
exposes. The application can decide which statistics are of interest, cache the
-ids of those statistics by looking up the name as follows:
+IDs of those statistics by looking up the name as follows:
.. code-block:: c
@@ -564,8 +561,7 @@ ids of those statistics by looking up the name as follows:
The API provides flexibility to the application so that it can look up multiple
statistics using an array containing multiple ``id`` numbers. This reduces the
-function call overhead of retrieving statistics, and makes lookup of multiple
-statistics simpler for the application.
+function call overhead of retrieving statistics and simplifies the application's lookup of multiple statistics.
.. code-block:: c
@@ -585,8 +581,8 @@ statistics simpler for the application.
This array lookup API for xstats allows the application create multiple
"groups" of statistics, and look up the values of those IDs using a single API
-call. As an end result, the application is able to achieve its goal of
-monitoring a single statistic ("rx_errors" in this case), and if that shows
+call. As an end result, the application can achieve its goal of
+monitoring a single statistic (in this case, "rx_errors"). If that shows
packets being dropped, it can easily retrieve a "set" of statistics using the
IDs array parameter to ``rte_eth_xstats_get_by_id`` function.
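The scan-names-once/lookup-by-id pattern can be shown with a mock statistics table; in real code the lookups below would be ``rte_eth_xstats_get_names()``, ``rte_eth_xstats_get_id_by_name()`` and ``rte_eth_xstats_get_by_id()``:

```c
#include <stdint.h>
#include <string.h>

/* Mock xstats table standing in for a PMD's statistics. */
static const char *xstat_names[] = {
    "rx_good_packets", "rx_errors", "tx_good_packets"
};
static const uint64_t xstat_values[] = { 100, 4, 98 };

/* Slow path, done once at setup: resolve a name to its integer id. */
static int xstat_id_by_name(const char *name, uint64_t *id)
{
    for (uint64_t i = 0; i < 3; i++) {
        if (strcmp(xstat_names[i], name) == 0) {
            *id = i;
            return 0;
        }
    }
    return -1;
}

/* Fast path: integer-indexed lookup, no string compares. */
static uint64_t xstat_get_by_id(uint64_t id) { return xstat_values[id]; }
```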
@@ -597,23 +593,23 @@ NIC Reset API
int rte_eth_dev_reset(uint16_t port_id);
-Sometimes a port has to be reset passively. For example when a PF is
+There are times when a port has to be reset passively. For example, when a PF is
reset, all its VFs should also be reset by the application to make them
-consistent with the PF. A DPDK application also can call this function
-to trigger a port reset. Normally, a DPDK application would invokes this
+consistent with the PF. A DPDK application can also call this function
+to trigger a port reset. Normally, a DPDK application would invoke this
function when an RTE_ETH_EVENT_INTR_RESET event is detected.
-It is the duty of the PMD to trigger RTE_ETH_EVENT_INTR_RESET events and
-the application should register a callback function to handle these
+The PMD's duty is to trigger RTE_ETH_EVENT_INTR_RESET events.
+The application should register a callback function to handle these
events. When a PMD needs to trigger a reset, it can trigger an
RTE_ETH_EVENT_INTR_RESET event. On receiving an RTE_ETH_EVENT_INTR_RESET
-event, applications can handle it as follows: Stop working queues, stop
+event, applications can respond as follows: stop the working queues, stop
calling Rx and Tx functions, and then call rte_eth_dev_reset(). For
thread safety all these operations should be called from the same thread.
For example when PF is reset, the PF sends a message to notify VFs of
-this event and also trigger an interrupt to VFs. Then in the interrupt
-service routine the VFs detects this notification message and calls
+this event and also trigger an interrupt to VFs. Then, in the interrupt
+service routine, the VFs detect this notification message and call
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
This means that a PF reset triggers an RTE_ETH_EVENT_INTR_RESET
event within VFs. The function rte_eth_dev_callback_process() will
@@ -621,13 +617,12 @@ call the registered callback function. The callback function can trigger
the application to handle all operations the VF reset requires including
stopping Rx/Tx queues and calling rte_eth_dev_reset().
-The rte_eth_dev_reset() itself is a generic function which only does
-some hardware reset operations through calling dev_unint() and
-dev_init(), and itself does not handle synchronization, which is handled
+The rte_eth_dev_reset() is a generic function that only does hardware reset operations through calling dev_unint() and
+dev_init(). It does not handle synchronization, which is handled
by application.
The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
-the application to handle reset event. It is duty of application to
+the application to handle the reset event. It is the duty of the application to
handle all synchronization before it calls rte_eth_dev_reset().
The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
@@ -635,15 +630,15 @@ The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
Proactive Error Handling Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``,
-different from the application invokes recovery in PASSIVE mode,
-the PMD automatically recovers from error in PROACTIVE mode,
+This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``. Unlike
+PASSIVE mode, where the application invokes the recovery,
+in PROACTIVE mode the PMD recovers from the error automatically,
and only a small amount of work is required for the application.
During error detection and automatic recovery,
the PMD sets the data path pointers to dummy functions
(which will prevent the crash),
-and also make sure the control path operations fail with a return code ``-EBUSY``.
+and ensures the control path operations fail with a return code ``-EBUSY``.
Because the PMD recovers automatically,
the application can only sense that the data flow is disconnected for a while
@@ -655,9 +650,9 @@ three events are available:
``RTE_ETH_EVENT_ERR_RECOVERING``
Notify the application that an error is detected
- and the recovery is being started.
+ and the recovery is beginning.
Upon receiving the event, the application should not invoke
- any control path function until receiving
+ any control path function until receiving the
``RTE_ETH_EVENT_RECOVERY_SUCCESS`` or ``RTE_ETH_EVENT_RECOVERY_FAILED`` event.
.. note::
@@ -667,7 +662,7 @@ three events are available:
because a larger error may occur during the recovery.
``RTE_ETH_EVENT_RECOVERY_SUCCESS``
- Notify the application that the recovery from error is successful,
+ Notify the application that the recovery from the error was successful,
the PMD already re-configures the port,
and the effect is the same as a restart operation.
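The PROACTIVE-mode contract described above can be modeled as a small state machine; the ``MOCK_EBUSY`` value and all names here are illustrative, not DPDK definitions:

```c
/* Models the PROACTIVE contract: while the PMD recovers, control path
 * operations fail with -EBUSY; after the success event they work again. */
#define MOCK_EBUSY 16

enum recovery_state { PORT_OK, PORT_RECOVERING };

struct mock_dev { enum recovery_state state; };

/* A control-path operation, e.g. an MTU change. */
static int mock_ctrl_op(struct mock_dev *d)
{
    if (d->state == PORT_RECOVERING)
        return -MOCK_EBUSY;    /* application must wait for the event */
    return 0;
}

/* PMD side: error detected -> automatic recovery -> success event. */
static void pmd_error_detected(struct mock_dev *d)  { d->state = PORT_RECOVERING; }
static void pmd_recovery_success(struct mock_dev *d) { d->state = PORT_OK; }
```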
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v2 2/9] doc: reword argparse section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:53 ` Stephen Hemminger
2026-03-30 16:08 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Nandini Persad
` (7 subsequent siblings)
8 siblings, 2 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
I have made small edits for syntax in this section.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/argparse_lib.rst | 75 +++++++++++++-------------
1 file changed, 38 insertions(+), 37 deletions(-)
diff --git a/doc/guides/prog_guide/argparse_lib.rst b/doc/guides/prog_guide/argparse_lib.rst
index a6ac11b1c0..1acde60861 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -4,30 +4,31 @@
Argparse Library
================
-The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+The argparse library provides argument parsing functionality and makes it easy to write user-friendly command-line programs.
Features and Capabilities
-------------------------
-- Support parsing optional argument (which could take with no-value,
- required-value and optional-value).
+- Supports parsing of optional arguments (which can take no-value,
+ required-value, or optional-value).
-- Support parsing positional argument (which must take with required-value).
+- Supports parsing of positional arguments (which must take a required-value).
-- Support automatic generate usage information.
+- Supports automatic generation of usage information.
-- Support issue errors when provide with invalid arguments.
+- Issues errors when an invalid argument is provided.
+
+- Supports parsing arguments in two ways:
-- Support parsing argument by two ways:
#. autosave: used for parsing known value types;
#. callback: will invoke user callback to parse.
+
Usage Guide
-----------
-The following code demonstrates how to use:
+The following code demonstrates how to use the library:
.. code-block:: C
@@ -89,12 +90,12 @@ The following code demonstrates how to use:
...
}
-In this example, the arguments which start with a hyphen (-) are optional
-arguments (they're "--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff"); and the
-arguments which don't start with a hyphen (-) are positional arguments
-(they're "ooo"/"ppp").
+In this example, the arguments that start with a hyphen (-) are optional
+arguments ("--aaa"/"--bbb"/"--ccc"/"--ddd"/"--eee"/"--fff").
+The arguments that do not start with a hyphen (-) are positional arguments
+("ooo"/"ppp").
-Every argument must be set whether to carry a value (one of
+Every argument must specify whether it carries a value (one of
``RTE_ARGPARSE_ARG_NO_VALUE``, ``RTE_ARGPARSE_ARG_REQUIRED_VALUE`` and
``RTE_ARGPARSE_ARG_OPTIONAL_VALUE``).
@@ -105,26 +106,26 @@ Every argument must be set whether to carry a value (one of
User Input Requirements
~~~~~~~~~~~~~~~~~~~~~~~
-For optional arguments which take no-value,
+For optional arguments which have no-value,
the following mode is supported (take above "--aaa" as an example):
- The single mode: "--aaa" or "-a".
-For optional arguments which take required-value,
+For optional arguments which have required-value,
the following two modes are supported (take above "--bbb" as an example):
- The kv mode: "--bbb=1234" or "-b=1234".
- The split mode: "--bbb 1234" or "-b 1234".
-For optional arguments which take optional-value,
+For optional arguments which have optional-value,
the following two modes are supported (take above "--ccc" as an example):
- The single mode: "--ccc" or "-c".
- The kv mode: "--ccc=123" or "-c=123".
-For positional arguments which must take required-value,
+For positional arguments which must have required-value,
their values are parsing in the order defined.
.. note::
@@ -132,15 +133,15 @@ their values are parsing in the order defined.
The compact mode is not supported.
Take above "-a" and "-d" as an example, don't support "-ad" input.
-Parsing by autosave way
-~~~~~~~~~~~~~~~~~~~~~~~
+Parsing by the Autosave Method
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Argument of known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
-could be parsed using this autosave way,
-and its result will save in the ``val_saver`` field.
+Arguments of a known value type (e.g. ``RTE_ARGPARSE_ARG_VALUE_INT``)
+can be parsed using the autosave method,
+The result will save in the ``val_saver`` field.
In the above example, the arguments "--aaa"/"--bbb"/"--ccc" and "ooo"
-both use this way, the parsing is as follows:
+both use this method. The parsing is as follows:
- For argument "--aaa", it is configured as no-value,
so the ``aaa_val`` will be set to ``val_set`` field
@@ -150,28 +151,28 @@ both use this way, the parsing is as follows:
so the ``bbb_val`` will be set to user input's value
(e.g. will be set to 1234 with input "--bbb 1234").
-- For argument "--ccc", it is configured as optional-value,
- if user only input "--ccc" then the ``ccc_val`` will be set to ``val_set`` field
+- For argument "--ccc", it is configured as optional-value.
+ If the user only inputs "--ccc", then the ``ccc_val`` will be set to the ``val_set`` field
which is 200 in the above example;
- if user input "--ccc=123", then the ``ccc_val`` will be set to 123.
+ If the user inputs "--ccc=123", then the ``ccc_val`` will be set to 123.
- For argument "ooo", it is positional argument,
the ``ooo_val`` will be set to user input's value.
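The autosave flow described above can be sketched with a simplified stand-in for the argparse entry. The struct and field names below mirror the documentation's terminology (``val_saver``, ``val_set``) but are illustrative only, not the real ``rte_argparse`` definitions:

```c
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for an argparse entry; not the real DPDK layout. */
struct demo_arg {
	const char *name;
	int *val_saver;   /* where the parsed value is stored */
	int val_set;      /* default used when no value is given */
	int has_value;    /* 0 = no-value, 1 = optional-value */
};

/* Autosave: store either the user's value or the preset default. */
static void demo_autosave(struct demo_arg *arg, const char *input)
{
	const char *eq = strchr(input, '=');

	if (arg->has_value && eq != NULL)
		*arg->val_saver = atoi(eq + 1);   /* kv mode: "--ccc=123" */
	else
		*arg->val_saver = arg->val_set;   /* bare flag: "--ccc" */
}

/* Models the "--ccc" optional-value argument from the example above. */
int demo_ccc(const char *input)
{
	int ccc_val = 0;
	struct demo_arg ccc = { "--ccc", &ccc_val, 200, 1 };

	demo_autosave(&ccc, input);
	return ccc_val;
}
```

With this sketch, a bare ``--ccc`` yields the preset 200 while ``--ccc=123`` yields 123, matching the behavior the documentation describes.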
-Parsing by callback way
-~~~~~~~~~~~~~~~~~~~~~~~
+Parsing by the Callback Method
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-It could also choose to use callback to parse,
-just define a unique index for the argument
-and make the ``val_save`` field to be NULL also zero value-type.
+You may choose to use the callback method to parse.
+To do so, define a unique index for the argument
+and set the ``val_save`` field to NULL with a zero value-type.
-In the above example, the arguments "--ddd"/"--eee"/"--fff" and "ppp" both use this way.
+In the above example, the arguments "--ddd"/"--eee"/"--fff" and "ppp" both use this method.
-Multiple times argument
+Multiple Times Argument
~~~~~~~~~~~~~~~~~~~~~~~
-If want to support the ability to enter the same argument multiple times,
-then should mark ``RTE_ARGPARSE_ARG_SUPPORT_MULTI`` in the ``flags`` field.
+If you want to support the ability to enter the same argument multiple times,
+then you should mark ``RTE_ARGPARSE_ARG_SUPPORT_MULTI`` in the ``flags`` field.
For example:
.. code-block:: C
@@ -182,5 +183,5 @@ Then the user input could contain multiple "--xyz" arguments.
.. note::
- The multiple times argument only support with optional argument
+ The multiple times argument is only supported with optional arguments
and must be parsed by callback way.
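The callback method and the multiple-times behavior can be illustrated together: a user callback is invoked once per occurrence of the argument, so a flag such as "--xyz" may appear several times. This mirrors the idea only; the function signature below is hypothetical, not the real ``rte_argparse`` callback interface:

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative callback-style parsing: the callback runs once per
 * occurrence of "--xyz", accumulating state across occurrences. */
static int xyz_count;
static int xyz_sum;

static int xyz_callback(const char *value)
{
	xyz_count++;
	xyz_sum += atoi(value);
	return 0;
}

/* Toy driver: scan the argument list and invoke the callback for each
 * "--xyz=<value>" occurrence found. */
static void demo_parse(int argc, const char **argv)
{
	int i;

	for (i = 0; i < argc; i++) {
		if (strncmp(argv[i], "--xyz=", 6) == 0)
			xyz_callback(argv[i] + 6);
	}
}
```

Passing "--xyz=1 --xyz=2 --xyz=3" would invoke the callback three times, which is why multiple-times arguments must use the callback method: there is no single ``val_saver`` slot that could hold all occurrences.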
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v2 3/9] doc: reword design section in contributors guidelines
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:47 ` [PATCH] doc/design: minor cleanus Stephen Hemminger
2026-03-31 22:53 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 4/9] doc: reword service cores section in prog guide Nandini Persad
` (6 subsequent siblings)
8 siblings, 2 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor editing was made for grammar and syntax of design section.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
.mailmap | 1 +
doc/guides/contributing/design.rst | 86 +++++++++++++++---------------
doc/guides/linux_gsg/sys_reqs.rst | 2 +-
3 files changed, 45 insertions(+), 44 deletions(-)
diff --git a/.mailmap b/.mailmap
index 66ebc20666..7d4929c5d1 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1002,6 +1002,7 @@ Naga Suresh Somarowthu <naga.sureshx.somarowthu@intel.com>
Nalla Pradeep <pnalla@marvell.com>
Na Na <nana.nn@alibaba-inc.com>
Nan Chen <whutchennan@gmail.com>
+Nandini Persad <nandinipersad361@gmail.com>
Nannan Lu <nannan.lu@intel.com>
Nan Zhou <zhounan14@huawei.com>
Narcisa Vasile <navasile@linux.microsoft.com> <navasile@microsoft.com> <narcisa.vasile@microsoft.com>
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index b724177ba1..3d1f5aeb91 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -1,6 +1,7 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright 2018 The DPDK contributors
+
Design
======
@@ -8,22 +9,26 @@ Design
Environment or Architecture-specific Sources
--------------------------------------------
-In DPDK and DPDK applications, some code is specific to an architecture (i686, x86_64) or to an executive environment (freebsd or linux) and so on.
-As far as is possible, all such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+In DPDK and DPDK applications, some code is architecture-specific (i686, x86_64) or environment-specific (FreeBSD, Linux, etc.).
+When feasible, such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+
+By convention, a file is specific if it is located in a directory that indicates an architecture or environment. Otherwise, it is common.
-By convention, a file is common if it is not located in a directory indicating that it is specific.
-For instance, a file located in a subdir of "x86_64" directory is specific to this architecture.
+For example:
+
+A file located in a subdir of "x86_64" directory is specific to this architecture.
A file located in a subdir of "linux" is specific to this execution environment.
.. note::
Code in DPDK libraries and applications should be generic.
- The correct location for architecture or executive environment specific code is in the EAL.
+ The correct location for architecture or executive environment-specific code is in the EAL.
+
+When necessary, there are several ways to handle specific code:
-When absolutely necessary, there are several ways to handle specific code:
-* Use a ``#ifdef`` with a build definition macro in the C code.
- This can be done when the differences are small and they can be embedded in the same C file:
+* When the differences are small and they can be embedded in the same C file, use a ``#ifdef`` with a build definition macro in the C code.
+
.. code-block:: c
@@ -33,9 +38,9 @@ When absolutely necessary, there are several ways to handle specific code:
titi();
#endif
-* Use build definition macros and conditions in the Meson build file. This is done when the differences are more significant.
- In this case, the code is split into two separate files that are architecture or environment specific.
- This should only apply inside the EAL library.
+
+* When the differences are more significant, use build definition macros and conditions in the Meson build file. In this case, the code is split into two separate files that are architecture or environment specific. This should only apply inside the EAL library.
+
Per Architecture Sources
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -43,7 +48,8 @@ Per Architecture Sources
The following macro options can be used:
* ``RTE_ARCH`` is a string that contains the name of the architecture.
-* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if we are building for those architectures.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined when building for these architectures.
+
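The per-architecture macros above are defined by the DPDK build system when targeting the matching architecture. A rough sketch of how code might branch on them follows; the ``#ifndef`` fallback is added only so the snippet compiles standalone, outside a DPDK build:

```c
#include <string.h>

/* RTE_ARCH is normally provided by the DPDK build system; define a
 * fallback here only so this sketch compiles on its own. */
#ifndef RTE_ARCH
#define RTE_ARCH "unknown"
#endif

static const char *arch_name(void)
{
#if defined(RTE_ARCH_X86_64)
	return "x86_64";        /* selected when building for x86_64 */
#elif defined(RTE_ARCH_ARM64)
	return "arm64";         /* selected when building for arm64 */
#else
	return RTE_ARCH;        /* generic fallback */
#endif
}
```

In a real DPDK build the appropriate branch is chosen at compile time, so no runtime dispatch cost is incurred.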
Per Execution Environment Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -51,30 +57,22 @@ Per Execution Environment Sources
The following macro options can be used:
* ``RTE_EXEC_ENV`` is a string that contains the name of the executive environment.
-* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only when building for this execution environment.
+
Mbuf features
-------------
-The ``rte_mbuf`` structure must be kept small (128 bytes).
-
-In order to add new features without wasting buffer space for unused features,
-some fields and flags can be registered dynamically in a shared area.
-The "dynamic" mbuf area is the default choice for the new features.
+A designated area in the mbuf stores "dynamically" registered fields and flags, and it is the default choice for accommodating new features. The "dynamic" area consumes the remaining space in the mbuf. However, the ``rte_mbuf`` structure must be kept small (128 bytes).
-The "dynamic" area is eating the remaining space in mbuf,
-and some existing "static" fields may need to become "dynamic".
-
-Adding a new static field or flag must be an exception matching many criteria
-like (non exhaustive): wide usage, performance, size.
+As more features are added, the space for existing "static" fields (fields that are allocated statically) may need to be reconsidered and possibly converted to "dynamic" allocation. Adding a new static field or flag should be an exception. It must meet specific criteria, including widespread usage, performance impact, and size considerations. Before adding a new static feature, it must be justified by its necessity and its impact on the system's efficiency.
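The space accounting behind dynamic mbuf fields can be sketched with a toy allocator: static fields occupy the front of the fixed 128-byte structure and dynamic fields are carved out of the remainder. The real entry point is along the lines of ``rte_mbuf_dynfield_register()``; the sizes and layout below are assumptions for illustration only:

```c
#include <stddef.h>

/* Toy model of dynamic mbuf fields: a fixed 128-byte structure where
 * static fields occupy the front and dynamic fields are carved from
 * the remaining space until it runs out. */
#define MBUF_SIZE    128
#define STATIC_BYTES  96   /* assumed size of the static part */

static size_t dyn_next = STATIC_BYTES;

/* Returns the offset of a new dynamic field, or -1 if no room is left. */
static int dynfield_register(size_t size, size_t align)
{
	size_t off = (dyn_next + align - 1) & ~(align - 1);

	if (off + size > MBUF_SIZE)
		return -1;
	dyn_next = off + size;
	return (int)off;
}
```

Once the dynamic area is exhausted, registration fails, which is why new static fields (which shrink that area permanently) are held to a high bar.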
Runtime Information - Logging, Tracing and Telemetry
----------------------------------------------------
-It is often desirable to provide information to the end-user
-as to what is happening to the application at runtime.
-DPDK provides a number of built-in mechanisms to provide this introspection:
+It is often desirable to inform the end user of what is happening in the application at runtime.
+DPDK provides several built-in mechanisms to provide these insights:
* :ref:`Logging <dynamic_logging>`
* :doc:`Tracing <../prog_guide/trace_lib>`
@@ -82,11 +80,11 @@ DPDK provides a number of built-in mechanisms to provide this introspection:
Each of these has its own strengths and suitabilities for use within DPDK components.
-Below are some guidelines for when each should be used:
+Here are guidelines for when each mechanism should be used:
* For reporting error conditions, or other abnormal runtime issues, *logging* should be used.
- Depending on the severity of the issue, the appropriate log level, for example,
- ``ERROR``, ``WARNING`` or ``NOTICE``, should be used.
+ Depending on the severity of the issue, the appropriate log level should be used;
+ for example, ``ERROR``, ``WARNING`` or ``NOTICE``.
.. note::
@@ -96,24 +94,24 @@ Below are some guidelines for when each should be used:
* For component initialization, or other cases where a path through the code
is only likely to be taken once,
- either *logging* at ``DEBUG`` level or *tracing* may be used, or potentially both.
+ either *logging* at ``DEBUG`` level or *tracing* may be used, or both.
In the latter case, tracing can provide basic information as to the code path taken,
with debug-level logging providing additional details on internal state,
- not possible to emit via tracing.
+ which is not possible to emit via tracing.
* For a component's data-path, where a path is to be taken multiple times within a short timeframe,
*tracing* should be used.
Since DPDK tracing uses `Common Trace Format <https://diamon.org/ctf/>`_ for its tracing logs,
post-analysis can be done using a range of external tools.
-* For numerical or statistical data generated by a component, for example, per-packet statistics,
+* For numerical or statistical data generated by a component, such as per-packet statistics,
*telemetry* should be used.
-* For any data where the data may need to be gathered at any point in the execution
- to help assess the state of the application component,
- for example, core configuration, device information, *telemetry* should be used.
+* For any data that may need to be gathered at any point during the execution
+ to help assess the state of the application component (for example, core configuration or device information), *telemetry* should be used.
Telemetry callbacks should not modify any program state, but be "read-only".
+
Many libraries also include a ``rte_<libname>_dump()`` function as part of their API,
writing verbose internal details to a given file-handle.
New libraries are encouraged to provide such functions where it makes sense to do so,
@@ -135,13 +133,12 @@ requirements for preventing ABI changes when implementing statistics.
Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Having runtime support for enabling/disabling library statistics is recommended,
-as build-time options should be avoided. However, if build-time options are used,
-for example as in the table library, the options can be set using c_args.
-When this flag is set, all the counters supported by current library are
+Having runtime support for enabling/disabling library statistics is recommended,
+as build-time options should be avoided. However, if build-time options are used, as in the table library, the options can be set using c_args.
+When this flag is set, all the counters supported by the current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
-are collected for any instance of any object type provided by the library:
+are collected for any instance of any object type provided by the library.
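The runtime on/off flag described here can be sketched as a guard around every counter update; the struct and names are illustrative, not a specific DPDK library's statistics API:

```c
#include <stdint.h>

/* Sketch of runtime-switchable library statistics: counters are
 * updated only while collection is enabled, so the cost of disabled
 * statistics is a single branch per event. */
struct demo_stats {
	uint64_t pkts_rx;
	uint64_t pkts_dropped;
};

static int stats_enabled = 1;
static struct demo_stats stats;

static inline void stats_add_rx(uint64_t n)
{
	if (stats_enabled)
		stats.pkts_rx += n;
}
```

Because the flag is checked at runtime, an application can toggle collection per library without rebuilding, which is the behavior the guideline recommends over build-time options.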
Prevention of ABI changes due to library statistics support
@@ -165,8 +162,8 @@ Motivation to allow the application to turn library statistics on and off
It is highly recommended that each library provides statistics counters to allow
an application to monitor the library-level run-time events. Typical counters
-are: number of packets received/dropped/transmitted, number of buffers
-allocated/freed, number of occurrences for specific events, etc.
+are: the number of packets received/dropped/transmitted, the number of buffers
+allocated/freed, the number of occurrences for specific events, etc.
However, the resources consumed for library-level statistics counter collection
have to be spent out of the application budget and the counters collected by
@@ -198,6 +195,7 @@ applications:
the application may decide to turn the collection of statistics counters off for
Library X and on for Library Y.
+
The statistics collection consumes a certain amount of CPU resources (cycles,
cache bandwidth, memory bandwidth, etc) that depends on:
@@ -218,6 +216,7 @@ cache bandwidth, memory bandwidth, etc) that depends on:
validated for header integrity, counting the number of bits set in a bitmask
might be needed.
+
PF and VF Considerations
------------------------
@@ -229,5 +228,6 @@ Developers should work with the Linux Kernel community to get the required
functionality upstream. PF functionality should only be added to DPDK for
testing and prototyping purposes while the kernel work is ongoing. It should
also be marked with an "EXPERIMENTAL" tag. If the functionality isn't
-upstreamable then a case can be made to maintain the PF functionality in DPDK
+upstreamable, then a case can be made to maintain the PF functionality in DPDK
without the EXPERIMENTAL tag.
+
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 13be715933..0569c5cae6 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -99,7 +99,7 @@ e.g. :doc:`../nics/index`
Running DPDK Applications
-------------------------
-To run a DPDK application, some customization may be required on the target machine.
+To run a DPDK application, customization may be required on the target machine.
System Software
~~~~~~~~~~~~~~~
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v2 4/9] doc: reword service cores section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:53 ` Stephen Hemminger
2026-03-31 22:50 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 5/9] doc: reword trace library " Nandini Persad
` (5 subsequent siblings)
8 siblings, 2 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
I've made minor syntax changes to section 8 of the programmer's guide, service cores.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/service_cores.rst | 32 ++++++++++++-------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/service_cores.rst b/doc/guides/prog_guide/service_cores.rst
index d4e6c3d6e6..59da3964bf 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,38 +4,38 @@
Service Cores
=============
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
+DPDK has a concept known as service cores. Service cores enable a dynamic way of
+performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided to optionally allow applications to control how the service
cores are used at runtime.
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
require CPU cycles to operate) and service cores (DPDK lcores, tasked with
running services). The power of the service core concept is that the mapping
-between service cores and services can be configured to abstract away the
+between service cores and services can be configured to simplify the
difference between platforms and environments.
-For example, the Eventdev has hardware and software PMDs. Of these the software
+For example, the Eventdev has hardware and software PMDs. Of these, the software
PMD requires an lcore to perform the scheduling operations, while the hardware
PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in software.
For detailed information about the service core API, please refer to the docs.
Service Core Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are two methods to having service cores in a DPDK application, either by
+There are two methods for using service cores in a DPDK application: by
using the service coremask, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-s` coremask argument to EAL, which will
-take any cores available in the main DPDK coremask, and if the bits are also set
-in the service coremask the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-s` coremask argument to the EAL, which will
+take any cores available in the main DPDK coremask. If the bits are also set
+in the service coremask, the cores become service-cores instead of DPDK
application lcores.
Enabling Services on Cores
~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set of
service cores. Enabling a service on a particular core means that the lcore in
question will run the service. Disabling that core on the service stops the
lcore in question from running the service.
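The mapping between services and lcores described above can be modeled as one bit per lcore for each service. The real control API is along the lines of ``rte_service_map_lcore_set()``; the bitmask representation below is only an illustration of the mapping concept:

```c
#include <stdint.h>

/* Toy model of the service-to-lcore mapping: for each service, bit i
 * set means lcore i is enabled to run that service. */
#define MAX_SERVICES 8

static uint64_t service_map[MAX_SERVICES];

static void map_lcore_set(int service, int lcore, int enabled)
{
	if (enabled)
		service_map[service] |= UINT64_C(1) << lcore;
	else
		service_map[service] &= ~(UINT64_C(1) << lcore);
}

static int lcore_runs_service(int service, int lcore)
{
	return (int)((service_map[service] >> lcore) & 1);
}
```

Reconfiguring the mask at runtime is what lets the same application binary adapt to platforms where a service (such as software event scheduling) is or is not needed.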
@@ -48,8 +48,8 @@ function to run the service.
Service Core Statistics
~~~~~~~~~~~~~~~~~~~~~~~
-The service core library is capable of collecting runtime statistics like number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the number
+of calls to a specific service, and the number of cycles used by the service. The
cycle count collection is dynamically configurable, allowing any application to
profile the services running on the system at any time.
@@ -58,9 +58,9 @@ Service Core Tracing
The service core library is instrumented with tracepoints using the DPDK Trace
Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to use the
+state. To activate tracing when launching a DPDK program, it is necessary to use the
``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core tracing::
+to enable. Here is an example if you want to specify only service core tracing::
./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" --trace="lib.eal.service*"
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v2 5/9] doc: reword trace library section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (2 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 4/9] doc: reword service cores section in prog guide Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:54 ` Stephen Hemminger
2026-03-31 22:49 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 6/9] doc: reword log " Nandini Persad
` (4 subsequent siblings)
8 siblings, 2 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor syntax edits were made to the trace library section of the prog guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/trace_lib.rst | 50 ++++++++++++++---------------
1 file changed, 25 insertions(+), 25 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..e2983017d8 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism is useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
frequently than log messages, often in the range of thousands per second, with
very little execution overhead.
Logging is more appropriate for a very high-level analysis of less frequent
events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
Simply put, logging is one of the many use cases that can be satisfied with
tracing.
DPDK tracing library features
-----------------------------
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with minimum
impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
- Support ``overwrite`` and ``discard`` trace mode operations.
- String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a tracepoint?
+How to Add a Tracepoint
------------------------
This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
rte_trace_point_emit_string(str);
)
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
The user can choose any name for the tracepoint.
However, when adding a tracepoint in the DPDK library, the
``rte_<library_name>_trace_[<domain>_]<name>`` naming convention must be
followed.
The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition into the following
function template:
.. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
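A simplified mock of what the expanded tracepoint function does: when tracing is enabled, the emitted arguments are appended to the trace buffer. The buffer handling here is deliberately naive and is not the real CTF emit path:

```c
#include <string.h>

/* Simplified mock of an expanded tracepoint function: when enabled,
 * it appends its argument to a trace buffer; when disabled, it
 * returns immediately, which keeps the disabled-path cost tiny. */
static int trace_enabled = 1;
static char trace_buf[256];
static size_t trace_len;

static void app_trace_string(const char *str)
{
	size_t n;

	if (!trace_enabled)
		return;
	n = strlen(str);
	if (trace_len + n < sizeof(trace_buf)) {
		memcpy(trace_buf + trace_len, str, n);
		trace_len += n;
	}
}
```

The real expansion additionally records a timestamp and writes in CTF format, but the enabled-check-then-emit shape is the essence of the generated function template.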
-Register the tracepoint
+Register the Tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -122,40 +122,40 @@ convention.
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
- For generic tracepoint or for tracepoint used in public header files,
+ For a generic tracepoint or for the tracepoint used in public header files,
the user must export a ``__<trace_function_name>`` symbol
in the library ``.map`` file for this tracepoint
- to be used out of the library, in shared builds.
+ to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast path tracepoint
+Fast Path Tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event record mode
+Event Record Mode
-----------------
-Event record mode is an attribute of trace buffers. Trace library exposes the
+Event record mode is an attribute of trace buffers. The trace library exposes the
following modes:
Overwrite
- When the trace buffer is full, new trace events overwrites the existing
+ When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
When the trace buffer is full, new trace events will be discarded.
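The two record modes can be demonstrated with a toy ring buffer: in overwrite mode a full buffer wraps and replaces the oldest events, while in discard mode new events are dropped once the buffer is full. This is a conceptual sketch, not the trace library's actual buffer implementation:

```c
/* Toy trace buffer illustrating the two record modes. */
#define BUF_EVENTS 4

enum demo_mode { OVERWRITE, DISCARD };

static int buf[BUF_EVENTS];
static int head;   /* next write position */
static int count;  /* events currently stored */

static void emit(enum demo_mode m, int event)
{
	if (count == BUF_EVENTS && m == DISCARD)
		return;                      /* buffer full: drop the event */
	buf[head] = event;               /* overwrite mode keeps writing */
	head = (head + 1) % BUF_EVENTS;
	if (count < BUF_EVENTS)
		count++;
}
```

With a 4-slot buffer, emitting events 1 through 6 in overwrite mode leaves the most recent four events (3, 4, 5, 6) in the buffer, whereas discard mode would have kept the first four (1, 2, 3, 4).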
-The mode can be configured either using EAL command line parameter
-``--trace-mode`` on application boot up or use ``rte_trace_mode_set()`` API to
+The mode can be configured either using the EAL command line parameter
+``--trace-mode`` on application boot up, or using the ``rte_trace_mode_set()`` API to
configure at runtime.
-Trace file location
+Trace File Location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
@@ -167,7 +167,7 @@ option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and analyze the recorded events
+View and Analyze Recorded Events
------------------------------------
Once the trace directory is available, the user can view/inspect the recorded
@@ -176,7 +176,7 @@ events.
There are many tools you can use to read DPDK traces:
#. ``babeltrace`` is a command-line utility that converts trace formats; it
- supports the format that DPDK trace library produces, CTF, as well as a
+ supports the format that the DPDK trace library produces, CTF, as well as a
basic text output that can be grep'ed.
The babeltrace command is part of the Open Source Babeltrace project.
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v2 6/9] doc: reword log library section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (3 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 5/9] doc: reword trace library " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:47 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 7/9] doc: reword cmdline " Nandini Persad
` (3 subsequent siblings)
8 siblings, 2 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor changes made for syntax in the log library section and 7.1
section of the programmer's guide. A couple sentences at the end of the
trace library section were also edited.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/cmdline.rst | 24 +++++++++++-----------
doc/guides/prog_guide/log_lib.rst | 32 ++++++++++++++---------------
doc/guides/prog_guide/trace_lib.rst | 22 ++++++++++----------
3 files changed, 39 insertions(+), 39 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..6b10ab6c99 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -5,8 +5,8 @@ Command-line Library
====================
Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
This chapter covers the basics of the command-line library and how to use it in an application.
Library Features
@@ -18,7 +18,7 @@ The DPDK command-line library supports the following features:
* Ability to read and process commands taken from an input file, e.g. startup script
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
* Strings
* Signed/unsigned 16/32/64-bit integers
@@ -56,7 +56,7 @@ Creating a Command List File
The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
While these can be piped to it via standard input, using a list file is probably best.
-The format of the list file must be:
+The format of the list file must follow these requirements:
* Comment lines start with '#' as first non-whitespace character
@@ -75,7 +75,7 @@ The format of the list file must be:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than a the type name.
+ have the comma-separated option list placed in braces, rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent ".c" file.
Providing the Function Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
and named ``cmd_<cmdname>_parsed``.
The function prototypes can be seen in the generated output header.
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
So, given the commands in our worked example below: ``quit`` and ``show port stats <n>``,
the callback functions would be:
@@ -151,11 +151,11 @@ the callback functions would be:
...
}
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To get to the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -179,7 +179,7 @@ To integrate the script output with the application,
we must ``#include`` the generated header into our applications C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (modifiable via script parameter).
The callback functions may be in this same file, or in a separate one -
they just need to be available to the linker at build-time.
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index ff9d1b54a2..05f032dfad 100644
--- a/doc/guides/prog_guide/log_lib.rst
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -5,7 +5,7 @@ Log Library
===========
The DPDK Log library provides the logging functionality for other DPDK libraries and drivers.
-By default, in a Linux application, logs are sent to syslog and also to the console.
+By default, in a Linux application, logs are sent to syslog and the console.
On FreeBSD and Windows applications, logs are sent only to the console.
However, the log function can be overridden by the user to use a different logging mechanism.
@@ -26,14 +26,14 @@ These levels, specified in ``rte_log.h`` are (from most to least important):
At runtime, only messages of a configured level or above (i.e. of higher importance)
will be emitted by the application to the log output.
-That level can be configured either by the application calling the relevant APIs from the logging library,
+That level can be configured either by the application calling relevant APIs from the logging library,
or by the user passing the ``--log-level`` parameter to the EAL via the application.
Setting Global Log Level
~~~~~~~~~~~~~~~~~~~~~~~~
To adjust the global log level for an application,
-just pass a numeric level or a level name to the ``--log-level`` EAL parameter.
+pass a numeric level or a level name to the ``--log-level`` EAL parameter.
For example::
/path/to/app --log-level=error
@@ -47,9 +47,9 @@ Within an application, the log level can be similarly set using the ``rte_log_se
Setting Log Level for a Component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In some cases, for example, for debugging purposes,
-it may be desirable to increase or decrease the log level for only a specific component, or set of components.
-To facilitate this, the ``--log-level`` argument also accepts an, optionally wildcarded, component name,
+In some cases (such as debugging purposes),
+you may want to increase or decrease the log level for only a specific component or set of components.
+To facilitate this, the ``--log-level`` argument also accepts an optionally wildcarded component name,
along with the desired level for that component.
For example::
@@ -57,13 +57,13 @@ For example::
/path/to/app --log-level=lib.*:warning
-Within an application, the same result can be got using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
+Within an application, you can achieve the same result using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
Using Logging APIs to Generate Log Messages
-------------------------------------------
-To output log messages, ``rte_log()`` API function should be used.
-As well as the log message, ``rte_log()`` takes two additional parameters:
+To output log messages, the ``rte_log()`` API function should be used.
+In addition to the log message, ``rte_log()`` takes two additional parameters:
* The log level
* The log component type
@@ -73,16 +73,16 @@ The component type is a unique id that identifies the particular DPDK component
To get this id, each component needs to register itself at startup,
using the macro ``RTE_LOG_REGISTER_DEFAULT``.
This macro takes two parameters, with the second being the default log level for the component.
-The first parameter, called "type", the name of the "logtype", or "component type" variable used in the component.
-This variable will be defined by the macro, and should be passed as the second parameter in calls to ``rte_log()``.
+The first parameter, called "type", is the name of the "logtype", or "component type" variable used in the component.
+This variable will be defined by the macro and should be passed as the second parameter in calls to ``rte_log()``.
In general, most DPDK components define their own logging macros to simplify the calls to the log APIs.
They do this by:
* Hiding the component type parameter inside the macro so it never needs to be passed explicitly.
* Using the log-level definitions given in ``rte_log.h`` to allow short textual names to be used in
- place of the numeric log levels.
+ place of numeric log levels.
-The following code is taken from ``rte_cfgfile.c`` and shows the log registration,
+The following code is taken from ``rte_cfgfile.c`` and shows the log registration
and subsequent definition of a shortcut logging macro.
It can be used as a template for any new components using DPDK logging.
@@ -97,10 +97,10 @@ It can be used as a template for any new components using DPDK logging.
it should be placed near the top of the C file using it.
If not, the logtype variable should be defined as an "extern int" near the top of the file.
- Similarly, if logging is to be done by multiple files in a component,
- only one file should register the logtype via the macro,
+ Similarly, if logging will be done by multiple files in a component,
+ only one file should register the logtype via the macro
and the logtype should be defined as an "extern int" in a common header file.
- Any component-specific logging macro should similarly be defined in that header.
+ Any component-specific logging macro should be similarly defined in that header.
Throughout the cfgfile library, all logging calls are therefore of the form:
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index e2983017d8..4177f8ba15 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -195,12 +195,12 @@ to babeltrace with no options::
all their events, merging them in chronological order.
You can pipe the output of the babeltrace into a tool like grep(1) for further
-filtering. Below example grep the events for ``ethdev`` only::
+filtering. Here's an example of how to grep the events for ``ethdev`` only::
babeltrace /tmp/my-dpdk-trace | grep ethdev
You can pipe the output of babeltrace into a tool like wc(1) to count the
-recorded events. Below example count the number of ``ethdev`` events::
+recorded events. Below is an example of counting the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
@@ -210,14 +210,14 @@ Use the tracecompass GUI tool
``Tracecompass`` is another tool to view/analyze the DPDK traces which gives
a graphical view of events. Like ``babeltrace``, tracecompass also provides
an interface to search for a particular event.
-To use ``tracecompass``, following are the minimum required steps:
+To use ``tracecompass``, the following are the minimum required steps:
- Install ``tracecompass`` to the localhost. Variants are available for Linux,
Windows, and OS-X.
- Launch ``tracecompass`` which will open a graphical window with trace
management interfaces.
-- Open a trace using ``File->Open Trace`` option and select metadata file which
- is to be viewed/analyzed.
+- Open a trace using the ``File->Open Trace`` option and select the metadata file
+  to be viewed/analyzed.
For more details, refer
`Trace Compass <https://www.eclipse.org/tracecompass/>`_.
@@ -225,7 +225,7 @@ For more details, refer
Quick start
-----------
-This section steps you through the details of generating trace and viewing it.
+This section steps you through the details of generating the trace and viewing it.
- Start the dpdk-test::
@@ -238,8 +238,8 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that uses ``Common Trace
-Format (CTF)``. ``CTF`` specification consists of the following units to create
+The DPDK trace library is designed to generate traces that use the ``Common Trace
+Format (CTF)``. The ``CTF`` specification consists of the following units to create
a trace.
- ``Stream`` Sequence of packets.
@@ -249,7 +249,7 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details broadly divided into the following areas:
+Implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
@@ -277,7 +277,7 @@ per thread to enable lock less trace-emit function.
For non lcore threads, the trace memory is allocated on the first trace
emission.
-For lcore threads, if trace points are enabled through a EAL option, the trace
+For lcore threads, if trace points are enabled through an EAL option, the trace
memory is allocated when the threads are known of DPDK
(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
@@ -348,7 +348,7 @@ trace.header
| timestamp [47:0] |
+----------------------+
-The trace header is 64 bits, it consists of 48 bits of timestamp and 16 bits
+The trace header is 64 bits. It consists of a 48-bit timestamp and a 16-bit
event ID.
The ``packet.header`` and ``packet.context`` will be written in the slow path
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v2 7/9] doc: reword cmdline section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (4 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 6/9] doc: reword log " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:45 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 8/9] doc: reword stack library " Nandini Persad
` (2 subsequent siblings)
8 siblings, 2 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor syntax edits made to the cmdline section.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/cmdline.rst | 34 +++++++++++++++----------------
1 file changed, 17 insertions(+), 17 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index 6b10ab6c99..8aa1ef180b 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -62,7 +62,7 @@ The format of the list file must follow these requirements:
* One command per line
-* Variable fields are prefixed by the type-name in angle-brackets, for example:
+* Variable fields are prefixed by the type-name in angle-brackets. For example:
* ``<STRING>message``
@@ -75,7 +75,7 @@ The format of the list file must follow these requirements:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than by the type name.
+ have the comma-separated option list placed in braces rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -112,7 +112,7 @@ The generated content includes:
* A command-line context array definition, suitable for passing to ``cmdline_new``
-If so desired, the script can also output function stubs for the callback functions for each command.
+If needed, the script can also output function stubs for the callback functions for each command.
This behaviour is triggered by passing the ``--stubs`` flag to the script.
In this case, an output file must be provided with a filename ending in ".h",
and the callback stubs will be written to an equivalent ".c" file.
@@ -120,7 +120,7 @@ and the callback stubs will be written to an equivalent ".c" file.
.. note::
The stubs are written to a separate file,
- to allow continuous use of the script to regenerate the command-line header,
+ to allow continuous use of the script to regenerate the command-line header
without overwriting any code the user has added to the callback functions.
This makes it easy to incrementally add new commands to an existing application.
@@ -154,7 +154,7 @@ the callback functions would be:
These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
-The same "cmdname" stem is used in the naming of the generated structures too.
+The same "cmdname" stem is used in the naming of the generated structures as well.
To get to the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -176,13 +176,12 @@ Integrating with the Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To integrate the script output with the application,
-we must ``#include`` the generated header into our applications C file,
+we must ``#include`` the generated header into our application's C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
``ctx`` by default (modifiable via script parameter).
-The callback functions may be in this same file, or in a separate one -
-they just need to be available to the linker at build-time.
+The callback functions may be in the same or a separate file, as long as they are available to the linker at build-time.
Limitations of the Script Approach
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -242,19 +241,19 @@ The resulting struct looks like:
As before, we choose names to match the tokens in the command.
Since our numeric parameter is a 16-bit value, we use ``uint16_t`` type for it.
-Any of the standard sized integer types can be used as parameters, depending on the desired result.
+Any of the standard-sized integer types can be used as parameters depending on the desired result.
Beyond the standard integer types,
-the library also allows variable parameters to be of a number of other types,
+the library also allows variable parameters to be of a number of other types
as called out in the feature list above.
* For variable string parameters,
the type should be ``cmdline_fixed_string_t`` - the same as for fixed tokens,
but these will be initialized differently (as described below).
-* For ethernet addresses use type ``struct rte_ether_addr``
+* For ethernet addresses, use type ``struct rte_ether_addr``
-* For IP addresses use type ``cmdline_ipaddr_t``
+* For IP addresses, use type ``cmdline_ipaddr_t``
Providing Field Initializers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -267,6 +266,7 @@ For fixed string tokens, like "quit", "show" and "port", the initializer will be
static cmdline_parse_token_string_t cmd_quit_quit_tok =
TOKEN_STRING_INITIALIZER(struct cmd_quit_result, quit, "quit");
+
The convention for naming used here is to include the base name of the overall result structure -
``cmd_quit`` in this case,
as well as the name of the field within that structure - ``quit`` in this case, followed by ``_tok``.
@@ -311,8 +311,8 @@ The callback function should have type:
where the first parameter is a pointer to the result structure defined above,
the second parameter is the command-line instance,
and the final parameter is a user-defined pointer provided when we associate the callback with the command.
-Most callback functions only use the first parameter, or none at all,
-but the additional two parameters provide some extra flexibility,
+Most callback functions only use the first parameter or none at all,
+but the additional two parameters provide some extra flexibility
to allow the callback to work with non-global state in your application.
For our two example commands, the relevant callback functions would look very similar in definition.
@@ -341,7 +341,7 @@ Associating Callback and Command
The ``cmdline_parse_inst_t`` type defines a "parse instance",
i.e. a sequence of tokens to be matched and then an associated function to be called.
-Also included in the instance type are a field for help text for the command,
+Also included in the instance type are a field for help text for the command
and any additional user-defined parameter to be passed to the callback functions referenced above.
For example, for our simple "quit" command:
@@ -362,8 +362,8 @@ then set the user-defined parameter to NULL,
provide a help message to be given, on request, to the user explaining the command,
before finally listing out the single token to be matched for this command instance.
-For our second, port stats, example,
-as well as making things a little more complicated by having multiple tokens to be matched,
+For our second "port stats" example,
+as well as making things more complex by having multiple tokens to be matched,
we can also demonstrate passing in a parameter to the function.
Let us suppose that our application does not always use all the ports available to it,
but instead only uses a subset of the ports, stored in an array called ``active_ports``.
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v2 8/9] doc: reword stack library section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (5 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 7/9] doc: reword cmdline " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:36 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 9/9] doc: reword rcu " Nandini Persad
2024-06-22 14:52 ` [PATCH v2 1/9] doc: reword pmd " Stephen Hemminger
8 siblings, 2 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Minor changes made to wording of the stack library section in prog guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/stack_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..a51df60d13 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -44,8 +44,8 @@ Lock-free Stack
The lock-free stack consists of a linked list of elements, each containing a
data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
impede the forward progress of any other thread.
The lock-free push operation enqueues a linked list of pointers by pointing the
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v2 9/9] doc: reword rcu library section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (6 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 8/9] doc: reword stack library " Nandini Persad
@ 2024-06-21 2:32 ` Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:35 ` Stephen Hemminger
2024-06-22 14:52 ` [PATCH v2 1/9] doc: reword pmd " Stephen Hemminger
8 siblings, 2 replies; 118+ messages in thread
From: Nandini Persad @ 2024-06-21 2:32 UTC (permalink / raw)
To: dev
Simple syntax changes made to the rcu library section in programmer's guide.
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
---
doc/guides/prog_guide/rcu_lib.rst | 77 ++++++++++++++++---------------
1 file changed, 40 insertions(+), 37 deletions(-)
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
index d0aef3bc16..c7ae349184 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -8,17 +8,17 @@ RCU Library
Lockless data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
In the following sections, the term "memory" refers to memory allocated
by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory. An example of this is an index of a free element array.
Since these data structures are lockless, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
separate the operation of removing an element into two steps:
#. Delete: in this step, the writer removes the reference to the element from
@@ -64,19 +64,19 @@ quiescent state. Reader thread 3 was not accessing D1 when the delete
operation happened. So, reader thread 3 will not have a reference to the
deleted entry.
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical sections for D2 are quiescent states
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
Factors affecting the RCU mechanism
-----------------------------------
It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of the grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how grace period and critical
section affect this overhead.
The writer has to poll the readers to identify the end of grace period.
@@ -119,14 +119,14 @@ How to use this library
The application must allocate memory and initialize a QS variable.
Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes, as a parameter, the maximum number of
+reader threads that will use this variable.
Further, the application can initialize a QS variable using the API
``rte_rcu_qsbr_init()``.
Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use ``lcore_id`` as the thread ID where applicable.
@@ -134,13 +134,13 @@ The application could also use ``lcore_id`` as the thread ID where applicable.
The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
reporting its quiescent state.
Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
+calls (for example, while using eventdev APIs). The writer thread should not
wait for such reader threads to enter quiescent state. The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
+call the ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
returns.
@@ -149,13 +149,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
``rte_rcu_qsbr_start()`` returns a token to each caller.
-The writer thread must call ``rte_rcu_qsbr_check()`` API with the token to
-get the current quiescent state status. Option to block till all the reader
+The writer thread must call the ``rte_rcu_qsbr_check()`` API with the token to
+get the current quiescent state status. The option to block till all the reader
threads enter the quiescent state is provided. If this API indicates that
all the reader threads have entered the quiescent state, the application
can free the deleted entry.
-The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock free.
+The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock-free.
Hence, they can be called concurrently from multiple writers even while
running as worker threads.
@@ -173,7 +173,7 @@ polls till all the readers enter the quiescent state or go offline. This API
does not allow the writer to do useful work while waiting and introduces
additional memory accesses due to continuous polling. However, the application
does not have to store the token or the reference to the deleted resource. The
-resource can be freed immediately after ``rte_rcu_qsbr_synchronize()`` API
+resource can be freed immediately after the ``rte_rcu_qsbr_synchronize()`` API
returns.
The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
@@ -181,9 +181,9 @@ The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
quiescent state. The ``rte_rcu_qsbr_check()`` API will not wait for this reader
thread to report the quiescent state status anymore.
-The reader threads should call ``rte_rcu_qsbr_quiescent()`` API to indicate that
+The reader threads should call the ``rte_rcu_qsbr_quiescent()`` API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
-quiescent state query and update the state accordingly.
+quiescent state query and updates the state accordingly.
The ``rte_rcu_qsbr_lock()`` and ``rte_rcu_qsbr_unlock()`` are empty functions.
However, these APIs can aid in debugging issues. One can mark the access to
@@ -203,40 +203,43 @@ the application. When a writer deletes an entry from a data structure, the write
There are several APIs provided to help with this process. The writer
can create a FIFO to store the references to deleted resources using ``rte_rcu_qsbr_dq_create()``.
The resources can be enqueued to this FIFO using ``rte_rcu_qsbr_dq_enqueue()``.
-If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing. It will also reclaim resources on regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
+If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing.
+It will also reclaim resources on a regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call the ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
However, if this resource reclamation process were to be integrated in lock-free data structure libraries, it
-hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms. The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
+hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms.
+
+The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
In any DPDK application, the resource reclamation process using QSBR can be split into 4 parts:
#. Initialization
#. Quiescent State Reporting
-#. Reclaiming Resources
+#. Reclaiming Resources
#. Shutdown
The design proposed here assigns different parts of this process to client libraries and applications. The term 'client library' refers to lock-free data structure libraries such as rte_hash, rte_lpm etc. in DPDK or similar libraries outside of DPDK. The term 'application' refers to the packet processing application that makes use of DPDK such as the L3 Forwarding example application, OVS, VPP etc.
-The application has to handle 'Initialization' and 'Quiescent State Reporting'. So,
+The application must handle 'Initialization' and 'Quiescent State Reporting'. Therefore, the application:
-* the application has to create the RCU variable and register the reader threads to report their quiescent state.
-* the application has to register the same RCU variable with the client library.
-* reader threads in the application have to report the quiescent state. This allows for the application to control the length of the critical section/how frequently the application wants to report the quiescent state.
+* Must create the RCU variable and register the reader threads to report their quiescent state.
+* Must register the same RCU variable with the client library.
+* Note that reader threads in the application have to report the quiescent state. This allows the application to control the length of the critical section and how frequently it reports the quiescent state.
-The client library will handle 'Reclaiming Resources' part of the process. The
+The client library will handle the 'Reclaiming Resources' part of the process. The
client libraries will make use of the writer thread context to execute the memory
-reclamation algorithm. So,
+reclamation algorithm. So, the client library should:
-* client library should provide an API to register a RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
-* client library should use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
-* if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
+* Provide an API to register an RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
+* Use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
+* Note that if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
The 'Shutdown' process needs to be shared between the application and the
-client library.
+client library. Note that:
-* the application should make sure that the reader threads are not using the shared data structure, unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
+* The application should make sure that the reader threads are not using the shared data structure, unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
-* client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
+* The client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
Integrating the resource reclamation with client libraries removes the burden from
the application and makes it easy to use lock-free algorithms.
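To make the token mechanics described in this section concrete, here is a single-threaded toy sketch of the QSBR idea. It is illustrative only and is not the DPDK implementation: ``rte_rcu_qsbr`` uses atomics, per-thread cache-line-aligned counters, and online/offline state, none of which are modeled here. All ``toy_*`` names are made up for this example.

```c
#include <assert.h>
#include <stdint.h>

/* Toy sketch of QSBR token mechanics: a writer bumps a global token to
 * start a grace period; each reader records the latest token it has
 * observed while quiescent; the writer may free a deleted entry once
 * every reader's counter has caught up to the token. */

#define TOY_MAX_READERS 4

struct toy_qsbr {
	uint64_t token;                /* latest grace period requested */
	uint64_t cnt[TOY_MAX_READERS]; /* last token each reader acknowledged */
};

/* Writer side: start a grace period (cf. rte_rcu_qsbr_start()). */
static uint64_t toy_qsbr_start(struct toy_qsbr *v)
{
	return ++v->token;
}

/* Reader side: report a quiescent state (cf. rte_rcu_qsbr_quiescent()). */
static void toy_qsbr_quiescent(struct toy_qsbr *v, unsigned int id)
{
	v->cnt[id] = v->token;
}

/* Writer side: check whether all readers passed the token
 * (cf. rte_rcu_qsbr_check() with wait == false). */
static int toy_qsbr_check(const struct toy_qsbr *v, uint64_t token)
{
	for (unsigned int i = 0; i < TOY_MAX_READERS; i++)
		if (v->cnt[i] < token)
			return 0; /* reader i has not yet entered quiescence */
	return 1; /* safe to free the deleted entry */
}
```

The real API differs in signatures and memory-ordering guarantees, but the writer flow is the same shape: delete the entry, start a grace period, and free only after the check succeeds.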
--
2.34.1
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH] doc/design: minor cleanups
2024-06-21 2:32 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Nandini Persad
@ 2024-06-22 14:47 ` Stephen Hemminger
2024-06-24 15:07 ` Thomas Monjalon
2026-03-31 22:53 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Stephen Hemminger
1 sibling, 1 reply; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:47 UTC (permalink / raw)
To: nandinipersad361; +Cc: dev, Stephen Hemminger
Minor fixes to previous edit:
1. remove blank line at end of file, causes git complaint
2. fix minor typo (UTF-8?)
3. break long lines, although rst doesn't care it is nicer
for future editors to keep to 100 characters or less.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
Depends-on: patch-141466 ("doc: reword design section in contributors guideline")
doc/guides/contributing/design.rst | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index 3d1f5aeb91..77c4d3d823 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -63,9 +63,16 @@ The following macro options can be used:
Mbuf features
-------------
-A designated area in mbuf stores "dynamically" registered fields and flags. It is the default choice for accommodating new features. The "dynamic" area consumes the remaining space in the mbuf, indicating that it's being efficiently utilized. However, the ``rte_mbuf`` structure must be kept small (128 bytes).
+A designated area in mbuf stores "dynamically" registered fields and flags. It is the default choice
+for accommodating new features. The "dynamic" area consumes the remaining space in the mbuf,
+indicating that it's being efficiently utilized. However, the ``rte_mbuf`` structure must be kept
+small (128 bytes).
-As more features are added, the space for existinG=g "static" fields (fields that are allocated statically) may need to be reconsidered and possibly converted to "dynamic" allocation. Adding a new static field or flag should be an exception. It must meet specific criteria including widespread usage, performance impact, and size considerations. Before adding a new static feature, it must be justified by its necessity and its impact on the system's efficiency.
+As more features are added, the space for existing "static" fields (fields that are allocated
+statically) may need to be reconsidered and possibly converted to "dynamic" allocation. Adding a new
+static field or flag should be an exception. It must meet specific criteria including widespread
+usage, performance impact, and size considerations. Before adding a new static feature, it must be
+justified by its necessity and its impact on the system's efficiency.
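As a rough illustration of the "dynamic area" idea this paragraph describes, the sketch below hands out offsets inside a fixed leftover region at runtime instead of compiling fields in statically. The names, sizes, and allocator logic are invented for illustration; the real mechanism is DPDK's ``rte_mbuf_dynfield_register()``, which also handles flags, naming, and multi-process sharing.

```c
#include <assert.h>
#include <stddef.h>

/* Toy offset allocator for a fixed "dynamic area": each registration
 * returns an aligned offset into the remaining space, or -1 when the
 * area is exhausted. DYN_AREA_SIZE is a hypothetical leftover size. */

#define DYN_AREA_SIZE 64

static size_t dyn_used; /* bytes of the dynamic area handed out so far */

static int toy_dynfield_register(size_t size, size_t align)
{
	size_t off = (dyn_used + align - 1) & ~(align - 1); /* align up */

	if (off + size > DYN_AREA_SIZE)
		return -1; /* no room left: the feature cannot be enabled */
	dyn_used = off + size;
	return (int)off;
}
```

This is why the dynamic area "consumes the remaining space": features only pay for the bytes they register, and unrelated features can coexist until the area runs out.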
Runtime Information - Logging, Tracing and Telemetry
@@ -134,7 +141,8 @@ Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Having runtime support for enabling/disabling library statistics is recommended
-as build-time options should be avoided. However, if build-time options are used, as in the table library, the options can be set using c_args.
+as build-time options should be avoided. However, if build-time options are used,
+as in the table library, the options can be set using c_args.
When this flag is set, all the counters supported by the current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
@@ -230,4 +238,3 @@ testing and prototyping purposes while the kernel work is ongoing. It should
also be marked with an "EXPERIMENTAL" tag. If the functionality isn't
upstreamable, then a case can be made to maintain the PF functionality in DPDK
without the EXPERIMENTAL tag.
-
--
2.43.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* Re: [PATCH v2 1/9] doc: reword pmd section in prog guide
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
` (7 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 9/9] doc: reword rcu " Nandini Persad
@ 2024-06-22 14:52 ` Stephen Hemminger
8 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:52 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:46 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> I made edits for syntax/grammar in the PMD section of the prog guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 2/9] doc: reword argparse section in prog guide
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
@ 2024-06-22 14:53 ` Stephen Hemminger
2026-03-30 16:08 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:53 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:47 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> I have made small edits for syntax in this section.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 4/9] doc: reword service cores section in prog guide
2024-06-21 2:32 ` [PATCH v2 4/9] doc: reword service cores section in prog guide Nandini Persad
@ 2024-06-22 14:53 ` Stephen Hemminger
2026-03-31 22:50 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:53 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:49 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> I've made minor syntax changes to section 8 of programmer's guide, service cores.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 5/9] doc: reword trace library section in prog guide
2024-06-21 2:32 ` [PATCH v2 5/9] doc: reword trace library " Nandini Persad
@ 2024-06-22 14:54 ` Stephen Hemminger
2026-03-31 22:49 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:54 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:50 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor syntax edits were made to the trace library section of the prog guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
> doc/guides/prog_guide/trace_lib.rst | 50 ++++++++++++++---------------
> 1 file changed, 25 insertions(+), 25 deletions(-)
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 7/9] doc: reword cmdline section in prog guide
2024-06-21 2:32 ` [PATCH v2 7/9] doc: reword cmdline " Nandini Persad
@ 2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:45 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:55 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:52 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor syntax edits made to the cmdline section.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 6/9] doc: reword log library section in prog guide
2024-06-21 2:32 ` [PATCH v2 6/9] doc: reword log " Nandini Persad
@ 2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:47 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:55 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:51 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor changes made for syntax in the log library section and 7.1
> section of the programmer's guide. A couple sentences at the end of the
> trace library section were also edited.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 8/9] doc: reword stack library section in prog guide
2024-06-21 2:32 ` [PATCH v2 8/9] doc: reword stack library " Nandini Persad
@ 2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:36 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:55 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:53 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor changes made to wording of the stack library section in prog guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 9/9] doc: reword rcu library section in prog guide
2024-06-21 2:32 ` [PATCH v2 9/9] doc: reword rcu " Nandini Persad
@ 2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:35 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2024-06-22 14:55 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:54 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Simple syntax changes made to the rcu library section in programmer's guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH] doc/design: minor cleanups
2024-06-22 14:47 ` [PATCH] doc/design: minor cleanups Stephen Hemminger
@ 2024-06-24 15:07 ` Thomas Monjalon
0 siblings, 0 replies; 118+ messages in thread
From: Thomas Monjalon @ 2024-06-24 15:07 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: nandinipersad361, dev
22/06/2024 16:47, Stephen Hemminger:
> Minor fixes to previous edit:
> 1. remove blank line at end of file, causes git complaint
> 2. fix minor typo (UTF-8?)
> 3. break long lines, although rst doesn't care it is nicer
> for future editors to keep to 100 characters or less.
While changing lines, please split logically after dots, commas, etc,
so each line talks about something different.
It is easier to read/review, and it makes future changes even easier to review.
^ permalink raw reply [flat|nested] 118+ messages in thread
* [PATCH v3 00/11] doc: programmers guide corrections
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (9 preceding siblings ...)
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 01/11] doc: correct grammar and punctuation errors in ethdev guide Stephen Hemminger
` (10 more replies)
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
12 siblings, 11 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This is a revision of earlier corrections to the programmers guide.
At this point, it is a collaborative work of myself (Stephen), the
technical writer (Nandini) and AI (Claude).
The corrections were found while reviewing the DPDK programmer's guide documentation.
Types of corrections include:
- Grammar fixes (subject-verb agreement, articles, prepositions)
- Typos and spelling errors
- Function name corrections (cmdline_new_stdin -> cmdline_stdin_new)
- Type name corrections (cmdline_parse_t -> cmdline_parse_inst_t)
- Word errors (exasperate -> exacerbate)
- RST formatting issues (section hierarchy, underline lengths)
- Code formatting consistency (adding backticks around function names)
- Critical content fixes (restored missing "out" in "compiled out by default")
- Command syntax fixes (hugetlbfs mount command)
- Consistency improvements (capitalization, hyphenation, terminology)
Affected documentation:
- ethdev guide: extensive grammar and punctuation cleanup
- argparse library guide: articles, verb forms, typo fixes
- design guide: capitalization, terminology consistency
- Linux system requirements: mount command fix, spacing
- service cores guide: clarity improvements
- trace library guide: critical fix and grammar (2 patches)
- log library guide: spelling, sentence structure
- command-line library guide: function/type name corrections
- stack library guide: RST hierarchy fix, grammar
- RCU library guide: word errors, formatting, clarity
Stephen Hemminger (11):
doc: correct grammar and punctuation errors in ethdev guide
doc: correct grammar and typos in argparse library guide
doc: correct grammar and typos in design guide
doc: correct errors in Linux system requirements guide
doc: correct grammar in service cores guide
doc: correct grammar and errors in trace library guide
doc: correct typos in log library guide
doc: correct errors in command-line library guide
doc: correct errors in trace library guide
doc: correct errors in stack library guide
doc: correct errors in RCU library guide
doc/guides/contributing/design.rst | 71 +++++-----
doc/guides/linux_gsg/sys_reqs.rst | 14 +-
doc/guides/prog_guide/argparse_lib.rst | 24 ++--
doc/guides/prog_guide/cmdline.rst | 42 +++---
doc/guides/prog_guide/ethdev/ethdev.rst | 168 ++++++++++++------------
doc/guides/prog_guide/log_lib.rst | 32 ++---
doc/guides/prog_guide/rcu_lib.rst | 143 ++++++++++++--------
doc/guides/prog_guide/service_cores.rst | 30 ++---
doc/guides/prog_guide/stack_lib.rst | 32 ++---
doc/guides/prog_guide/trace_lib.rst | 118 ++++++++---------
10 files changed, 353 insertions(+), 321 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 118+ messages in thread
* [PATCH v3 01/11] doc: correct grammar and punctuation errors in ethdev guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 02/11] doc: correct grammar and typos in argparse library guide Stephen Hemminger
` (9 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fix various grammar, punctuation, and typographical errors throughout
the Poll Mode Driver documentation.
- Fix extra spaces after emphasized terms
(*run-to-completion*, *pipe-line*)
- Correct possessive forms
("port's" -> "ports'", "processors" -> "processor's")
- Fix subject-verb agreement ("VFs detects" -> "VFs detect")
- Add missing articles and words ("It is duty" -> "It is the duty",
"allows the application create" -> "allows the application to create")
- Remove extraneous words ("release of all" -> "release all",
"ensures sure" -> "ensures")
- Fix typos ("dev_unint()" -> "dev_uninit()", "receive of transmit" ->
"receive or transmit", "UDP/TCP/ SCTP" -> "UDP/TCP/SCTP")
- Add missing punctuation (period at end of bullet point)
- Fix spacing around inline code markup
- Clarify awkward sentence about PROACTIVE vs PASSIVE error
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/ethdev/ethdev.rst | 168 ++++++++++++------------
1 file changed, 82 insertions(+), 86 deletions(-)
diff --git a/doc/guides/prog_guide/ethdev/ethdev.rst b/doc/guides/prog_guide/ethdev/ethdev.rst
index daaf43ea3b..68b0985033 100644
--- a/doc/guides/prog_guide/ethdev/ethdev.rst
+++ b/doc/guides/prog_guide/ethdev/ethdev.rst
@@ -4,25 +4,25 @@
Poll Mode Driver
================
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized virtio Poll Mode Drivers.
+The DPDK includes support for multiple physical speeds as well as
+pure virtualized Poll Mode Drivers.
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space,
-to configure the devices and their respective queues.
+A Poll Mode Driver (PMD) consists of APIs (provided through the BSD driver running in user space) to configure the devices and their respective queues.
In addition, a PMD accesses the RX and TX descriptors directly without any interrupts
(with the exception of Link Status Change interrupts) to quickly receive,
process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a generic external API for the Ethernet PMDs.
+This section describes the requirements of the PMDs and
+their global design principles. It also proposes a high-level architecture and a generic external API for the Ethernet PMDs.
Requirements and Assumptions
----------------------------
The DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:
-* In the *run-to-completion* model, a specific port's RX descriptor ring is polled for packets through an API.
- Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.
+* In the *run-to-completion* model, a specific port's Rx descriptor ring is polled for packets through an API.
+ Packets are then processed on the same core and placed on a port's Tx descriptor ring through an API for transmission.
-* In the *pipe-line* model, one core polls one or more port's RX descriptor ring through an API.
+* In the *pipe-line* model, one core polls one or more ports' Rx descriptor rings through an API.
Packets are received and passed to another core via a ring.
The other core continues to process the packet which then may be placed on a port's TX descriptor ring through an API for transmission.
@@ -48,14 +48,14 @@ The loop for packet processing includes the following steps:
* Retrieve the received packet from the packet queue
-* Process the received packet, up to its retransmission if forwarded
+* Process the received packet up to its retransmission if forwarded
To avoid any unnecessary interrupt processing overhead, the execution environment must not use any asynchronous notification mechanisms.
Whenever needed and appropriate, asynchronous communication should be introduced as much as possible through the use of rings.
Avoiding lock contention is a key issue in a multi-core environment.
-To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
+To address this issue, PMDs are designed to work with per core private resources as much as possible.
+For example, a PMD maintains a separate transmit queue per core, per port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -68,7 +68,7 @@ See :doc:`../mempool_lib`.
Design Principles
-----------------
-The API and architecture of the Ethernet* PMDs are designed with the following guidelines in mind.
+The API and architecture of the Ethernet PMDs are designed with the following guidelines in mind.
PMDs must help global policy-oriented decisions to be enforced at the upper application level.
Conversely, NIC PMD functions should not impede the benefits expected by upper-level global policies,
@@ -87,11 +87,11 @@ to dynamically adapt its overall behavior through different global loop policies
To achieve optimal performance, overall software design choices and pure software optimization techniques must be considered and
balanced against available low-level hardware-based optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on).
The case of packet transmission is an example of this software/hardware tradeoff issue when optimizing burst-oriented network packet processing engines.
-In the initial case, the PMD could export only an rte_eth_tx_one function to transmit one packet at a time on a given queue.
-On top of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one function to transmit several packets at a time.
+In the initial case, the PMD could export only an ``rte_eth_tx_one`` function to transmit one packet at a time on a given queue.
+On top of that, one can easily build an ``rte_eth_tx_burst`` function that loops invoking the ``rte_eth_tx_one`` function to transmit several packets at a time.
However, an rte_eth_tx_burst function is effectively implemented by the PMD to minimize the driver-level transmit cost per packet through the following optimizations:
-* Share among multiple packets the un-amortized cost of invoking the rte_eth_tx_one function.
+* Share among multiple packets the un-amortized cost of invoking the ``rte_eth_tx_one`` function.
* Enable the rte_eth_tx_burst function to take advantage of burst-oriented hardware features (prefetch data in cache, use of NIC head/tail registers)
to minimize the number of CPU cycles per packet, for example by avoiding unnecessary read memory accesses to ring transmit descriptors,
@@ -99,9 +99,9 @@ However, an rte_eth_tx_burst function is effectively implemented by the PMD to m
* Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.
-Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
+Burst-oriented functions are also introduced via the API for services that are extensively used by the PMD.
This applies in particular to buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
-For example, an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
+An example of this would be an ``mbuf_multiple_alloc`` function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
replenishing multiple descriptors of the receive ring.
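The burst amortization argument above can be sketched with a toy descriptor ring: descriptors are filled in a loop, but the per-call overhead (the tail-register "doorbell" write) is paid once per burst rather than once per packet. This is purely illustrative, not a real PMD; all ``toy_*`` names are invented.

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 8

struct toy_txq {
	uint32_t desc[RING_SIZE]; /* stand-in for NIC Tx descriptors */
	unsigned int tail;        /* next free slot (software view) */
	unsigned int doorbells;   /* number of tail-register writes */
};

/* Enqueue n packets; assumes n fits in the ring for simplicity.
 * Note the single doorbell write regardless of burst size. */
static unsigned int toy_tx_burst(struct toy_txq *q,
				 const uint32_t *pkts, unsigned int n)
{
	for (unsigned int i = 0; i < n; i++)
		q->desc[(q->tail + i) % RING_SIZE] = pkts[i];
	q->tail = (q->tail + n) % RING_SIZE;
	q->doorbells++; /* un-amortized cost paid once per burst */
	return n;
}
```

Sending 32 packets as one burst costs one doorbell; sending them one at a time through the equivalent of ``rte_eth_tx_one`` would cost 32 - this is the per-packet driver-level cost the burst API removes.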
Logical Cores, Memory and NIC Queues Relationships
@@ -109,21 +109,20 @@ Logical Cores, Memory and NIC Queues Relationships
The DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
Therefore, mbuf allocation associated with local PCIe* interfaces should be allocated from memory pools created in the local memory.
-The buffers should, if possible, remain on the local processor to obtain the best performance results and RX and TX buffer descriptors
+The buffers should, if possible, remain on the local processor to obtain the best performance results and Rx and Tx buffer descriptors
should be populated with mbufs allocated from a mempool allocated from local memory.
-The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processors memory.
+The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processor's memory.
This is also true for the pipe-line model provided all logical cores used are located on the same processor.
Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
-concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
+concurrently on the same Tx queue without an SW lock. This PMD feature, found in some NICs, is useful for:
-* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
+* Removing explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
-* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
- enables more scaling as all workers can send the packets.
+* Enabling greater scalability by removing the requirement to have a dedicated Tx core.
See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
@@ -133,8 +132,8 @@ Device Identification, Ownership and Configuration
Device Identification
~~~~~~~~~~~~~~~~~~~~~
-Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI
-identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
+Each NIC port is uniquely designated by its PCI
+identifiers (bus/bridge, device, function) assigned by the PCI probing/enumeration function executed at DPDK initialization.
Based on their PCI identifier, NIC ports are assigned two other identifiers:
* A port index used to designate the NIC port in all functions exported by the PMD API.
@@ -146,15 +145,14 @@ Port Ownership
~~~~~~~~~~~~~~
The Ethernet devices ports can be owned by a single DPDK entity (application, library, PMD, process, etc).
-The ownership mechanism is controlled by ethdev APIs and allows to set/remove/get a port owner by DPDK entities.
-It prevents Ethernet ports to be managed by different entities.
+The ownership mechanism is controlled by ethdev APIs and allows setting/removing/getting a port owner by DPDK entities.
+This prevents Ethernet ports from being managed by different entities.
.. note::
- It is the DPDK entity responsibility to set the port owner before using it and to manage the port usage synchronization between different threads or processes.
+ It is the DPDK entity's responsibility to set the port owner before using the port and to manage the port usage synchronization between different threads or processes.
-It is recommended to set port ownership early,
-like during the probing notification ``RTE_ETH_EVENT_NEW``.
+It is recommended to set port ownership early, for instance during the probing notification ``RTE_ETH_EVENT_NEW``.
Device Configuration
~~~~~~~~~~~~~~~~~~~~
@@ -163,7 +161,7 @@ The configuration of each NIC port includes the following operations:
* Allocate PCI resources
-* Reset the hardware (issue a Global Reset) to a well-known default state
+* Reset the hardware to a well-known default state (issue a Global Reset)
* Set up the PHY and the link
@@ -172,7 +170,7 @@ The configuration of each NIC port includes the following operations:
The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.
Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
-This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features for example.
+This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features.
On-the-Fly Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -193,7 +191,7 @@ the Intel® 82599 10 Gigabit Ethernet Controller controllers in the testpmd appl
Other features such as the L3/L4 5-Tuple packet filtering feature of a port can be configured in the same way.
Ethernet* flow control (pause frame) can be configured on the individual port.
Refer to the testpmd source code for details.
-Also, L4 (UDP/TCP/ SCTP) checksum offload by the NIC can be enabled for an individual packet as long as the packet mbuf is set up correctly. See `Hardware Offload`_ for details.
+Also, L4 (UDP/TCP/SCTP) checksum offload by the NIC can be enabled for an individual packet as long as the packet mbuf is set up correctly. See `Hardware Offload`_ for details.
Configuration of Transmit Queues
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -208,7 +206,7 @@ Each transmit queue is independently configured with the following information:
* The *minimum* transmit packets to free threshold (tx_free_thresh).
When the number of descriptors used to transmit packets exceeds this threshold, the network adaptor should be checked to see if it has written back descriptors.
- A value of 0 can be passed during the TX queue configuration to indicate the default value should be used.
+ A value of 0 can be passed during the Tx queue configuration to indicate the default value should be used.
The default value for tx_free_thresh is 32.
This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.
@@ -220,7 +218,7 @@ Each transmit queue is independently configured with the following information:
A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
The default value for tx_rs_thresh is 32.
This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor.
- This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
+ This saves upstream PCIe* bandwidth resulting from Tx descriptor write-backs.
It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1.
Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.
@@ -242,7 +240,7 @@ One descriptor in the TX ring is used as a sentinel to avoid a hardware race con
.. note::
- When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
+ When configuring for DCB operation at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
Free Tx mbuf on Demand
~~~~~~~~~~~~~~~~~~~~~~
@@ -263,7 +261,7 @@ There are two scenarios when an application may want the mbuf released immediate
One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API
until the reference count on the packet is decremented.
- Then the same packet can be transmitted to the next destination interface.
+ Then, the same packet can be transmitted to the next destination interface.
The application is still responsible for managing any packet manipulations needed
between the different destination interfaces, but a packet copy can be avoided.
This API is independent of whether the packet was transmitted or dropped,
@@ -275,7 +273,7 @@ There are two scenarios when an application may want the mbuf released immediate
between each run, where all mbufs are returned to the mempool.
In this case, it can call the ``rte_eth_tx_done_cleanup()`` API
for each destination interface it has been using
- to request it to release of all its used mbufs.
+ to request it to release all its used mbufs.
To determine if a driver supports this API, check for the *Free Tx mbuf on demand* feature
in the *Network Interface Controller Drivers* document.
@@ -286,7 +284,7 @@ Hardware Offload
Depending on driver capabilities advertised by
``rte_eth_dev_info_get()``, the PMD may support hardware offloading
feature like checksumming, TCP segmentation, VLAN insertion or
-lockfree multithreaded TX burst on the same TX queue.
+lockfree multithreaded Tx burst on the same Tx queue.
The support of these offload features implies the addition of dedicated
status bit(s) and value field(s) into the rte_mbuf data structure, along
@@ -300,14 +298,14 @@ Per-Port and Per-Queue Offloads
In the DPDK offload API, offloads are divided into per-port and per-queue offloads as follows:
* A per-queue offloading can be enabled on a queue and disabled on another queue at the same time.
-* A pure per-port offload is the one supported by device but not per-queue type.
-* A pure per-port offloading can't be enabled on a queue and disabled on another queue at the same time.
+* A pure per-port offload is one supported by the device but not on a per-queue basis.
+* A pure per-port offloading cannot be enabled on a queue and disabled on another queue at the same time.
* A pure per-port offloading must be enabled or disabled on all queues at the same time.
-* Any offloading is per-queue or pure per-port type, but can't be both types at same devices.
+* An offload is either per-queue or pure per-port type; it cannot be both on the same device.
* Port capabilities = per-queue capabilities + pure per-port capabilities.
* Any supported offloading can be enabled on all queues.
-The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
+The different offload capabilities can be queried using ``rte_eth_dev_info_get()``.
The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all per-queue offloading capabilities.
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
@@ -326,8 +324,8 @@ per-port type and no matter whether it is set or cleared in
If a per-queue offloading hasn't been enabled in ``rte_eth_dev_configure()``,
it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for individual queue.
A newly added offloads in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()`` input by application
-is the one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
-in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise trigger an error log.
+is the one that hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
+in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise an error log will be triggered.
Poll Mode Driver API
--------------------
@@ -337,8 +335,8 @@ Generalities
By default, all functions exported by a PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
-For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
-Of course, this function can be invoked in parallel by different logical cores on different RX queues.
+For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same Rx queue of the same port.
+This function can be invoked in parallel by different logical cores on different Rx queues.
It is the responsibility of the upper-level application to enforce this rule.
If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions
@@ -367,12 +365,12 @@ Ethernet Device Standard Device Arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Standard Ethernet device arguments allow for a set of commonly used arguments/
-parameters which are applicable to all Ethernet devices to be available to for
-specification of specific device and for passing common configuration
+parameters applicable to all Ethernet devices. These arguments/parameters can be used to
+specify particular devices and to pass common configuration
parameters to those ports.
-* ``representor`` for a device which supports the creation of representor ports
- this argument allows user to specify which switch ports to enable port
+* Use ``representor`` for a device which supports the creation of representor ports.
+  This argument allows the user to specify which switch ports to enable port
representors for::
-a DBDF,representor=vf0
@@ -416,9 +414,9 @@ The extended statistics API allows a PMD to expose all statistics that are
available to it, including statistics that are unique to the device.
Each statistic has three properties ``name``, ``id`` and ``value``:
-* ``name``: A human readable string formatted by the scheme detailed below.
+* ``name``: A human-readable string formatted by the scheme detailed below.
* ``id``: An integer that represents only that statistic.
-* ``value``: A unsigned 64-bit integer that is the value of the statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
@@ -447,19 +445,19 @@ proposed above:
The scheme, although quite simple, allows flexibility in presenting and reading
information from the statistic strings. The following example illustrates the
-naming scheme:``rx_packets``. In this example, the string is split into two
+naming scheme: ``rx_packets``. In this example, the string is split into two
components. The first component ``rx`` indicates that the statistic is
associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` and so on are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
* If the first part does not match ``rx`` or ``tx``, the statistic does not
- have an affinity with either receive of transmit.
+ have an affinity with either receive or transmit.
* If the first letter of the second part is ``q`` and this ``q`` is followed
by a number, this statistic is part of a specific queue.
@@ -480,8 +478,8 @@ lookup of specific statistics. Performant lookup means two things;
The API ensures these requirements are met by mapping the ``name`` of the
statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
The API allows applications to request an array of ``id`` values, so that the
-PMD only performs the required calculations. Expected usage is that the
-application scans the ``name`` of each statistic, and caches the ``id``
+PMD only performs the required calculations. The expected usage is that the
+application scans the ``name`` of each statistic and caches the ``id``
if it has an interest in that statistic. On the fast-path, the integer can be used
to retrieve the actual ``value`` of the statistic that the ``id`` represents.
@@ -500,7 +498,7 @@ statistics.
* ``rte_eth_xstats_get_by_id()``: Fills in an array of ``uint64_t`` values
with matching the provided ``ids`` array. If the ``ids`` array is NULL, it
- returns all statistics that are available.
+ returns all available statistics.
Application Usage
@@ -510,10 +508,10 @@ Imagine an application that wants to view the dropped packet count. If no
packets are dropped, the application does not read any other metrics for
performance reasons. If packets are dropped, the application has a particular
set of statistics that it requests. This "set" of statistics allows the app to
-decide what next steps to perform. The following code-snippets show how the
+decide what next steps to perform. The following code snippets show how the
xstats API can be used to achieve this goal.
-First step is to get all statistics names and list them:
+The first step is to get all statistics names and list them:
.. code-block:: c
@@ -559,7 +557,7 @@ First step is to get all statistics names and list them:
The application has access to the names of all of the statistics that the PMD
exposes. The application can decide which statistics are of interest, cache the
-ids of those statistics by looking up the name as follows:
+IDs of those statistics by looking up the name as follows:
.. code-block:: c
@@ -578,8 +576,7 @@ ids of those statistics by looking up the name as follows:
The API provides flexibility to the application so that it can look up multiple
statistics using an array containing multiple ``id`` numbers. This reduces the
-function call overhead of retrieving statistics, and makes lookup of multiple
-statistics simpler for the application.
+function call overhead of retrieving statistics and simplifies the application's lookup of multiple statistics.
.. code-block:: c
@@ -597,10 +594,10 @@ statistics simpler for the application.
}
-This array lookup API for xstats allows the application create multiple
+This array lookup API for xstats allows the application to create multiple
"groups" of statistics, and look up the values of those IDs using a single API
-call. As an end result, the application is able to achieve its goal of
-monitoring a single statistic ("rx_errors" in this case), and if that shows
+call. As an end result, the application can achieve its goal of
+monitoring a single statistic (in this case, "rx_errors"). If that shows
packets being dropped, it can easily retrieve a "set" of statistics using the
IDs array parameter to ``rte_eth_xstats_get_by_id`` function.
@@ -611,23 +608,23 @@ NIC Reset API
int rte_eth_dev_reset(uint16_t port_id);
-Sometimes a port has to be reset passively. For example when a PF is
+There are times when a port has to be reset passively. For example, when a PF is
reset, all its VFs should also be reset by the application to make them
-consistent with the PF. A DPDK application also can call this function
-to trigger a port reset. Normally, a DPDK application would invokes this
+consistent with the PF. A DPDK application can also call this function
+to trigger a port reset. Normally, a DPDK application would invoke this
function when an RTE_ETH_EVENT_INTR_RESET event is detected.
-It is the duty of the PMD to trigger RTE_ETH_EVENT_INTR_RESET events and
-the application should register a callback function to handle these
+The PMD's duty is to trigger RTE_ETH_EVENT_INTR_RESET events.
+The application should register a callback function to handle these
events. When a PMD needs to trigger a reset, it can trigger an
RTE_ETH_EVENT_INTR_RESET event. On receiving an RTE_ETH_EVENT_INTR_RESET
-event, applications can handle it as follows: Stop working queues, stop
+event, applications can proceed as follows: stop working queues, stop
calling Rx and Tx functions, and then call rte_eth_dev_reset(). For
thread safety all these operations should be called from the same thread.
For example when PF is reset, the PF sends a message to notify VFs of
-this event and also trigger an interrupt to VFs. Then in the interrupt
-service routine the VFs detects this notification message and calls
+this event and also trigger an interrupt to VFs. Then, in the interrupt
+service routine, the VFs detect this notification message and call
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
This means that a PF reset triggers an RTE_ETH_EVENT_INTR_RESET
event within VFs. The function rte_eth_dev_callback_process() will
@@ -635,13 +632,12 @@ call the registered callback function. The callback function can trigger
the application to handle all operations the VF reset requires including
stopping Rx/Tx queues and calling rte_eth_dev_reset().
-The rte_eth_dev_reset() itself is a generic function which only does
-some hardware reset operations through calling dev_unint() and
-dev_init(), and itself does not handle synchronization, which is handled
+rte_eth_dev_reset() is a generic function that only performs hardware reset operations by calling dev_uninit() and
+dev_init(). It does not handle synchronization, which is handled
by application.
The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
-the application to handle reset event. It is duty of application to
+the application to handle the reset event. It is the duty of the application to
handle all synchronization before it calls rte_eth_dev_reset().
The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
@@ -649,15 +645,15 @@ The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
Proactive Error Handling Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``,
-different from the application invokes recovery in PASSIVE mode,
-the PMD automatically recovers from error in PROACTIVE mode,
+This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``, which
+differs from PASSIVE mode where the application invokes recovery.
+The PMD automatically recovers from errors in PROACTIVE mode,
and only a small amount of work is required for the application.
During error detection and automatic recovery,
the PMD sets the data path pointers to dummy functions
(which will prevent the crash),
-and also make sure the control path operations fail with a return code ``-EBUSY``.
+and ensures the control path operations fail with a return code ``-EBUSY``.
Because the PMD recovers automatically,
the application can only sense that the data flow is disconnected for a while
@@ -669,9 +665,9 @@ three events are available:
``RTE_ETH_EVENT_ERR_RECOVERING``
Notify the application that an error is detected
- and the recovery is being started.
+ and the recovery is beginning.
Upon receiving the event, the application should not invoke
- any control path function until receiving
+ any control path function until receiving the
``RTE_ETH_EVENT_RECOVERY_SUCCESS`` or ``RTE_ETH_EVENT_RECOVERY_FAILED`` event.
.. note::
@@ -681,7 +677,7 @@ three events are available:
because a larger error may occur during the recovery.
``RTE_ETH_EVENT_RECOVERY_SUCCESS``
- Notify the application that the recovery from error is successful,
+ Notify the application that the recovery from the error was successful,
the PMD already re-configures the port,
and the effect is the same as a restart operation.
--
2.51.0
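The xstats usage pattern described in this patch (scan the statistic names once, cache the integer ids of interesting statistics, then use only the ids on the fast path) can be sketched as a self-contained snippet. The name table and the ``find_xstat_id()`` helper below are illustrative stand-ins, not the ethdev API; a real application would obtain the names via ``rte_eth_xstats_get_names()`` and fetch values with ``rte_eth_xstats_get_by_id()``.

```c
#include <stddef.h>
#include <string.h>

/* Stub statistic names standing in for what rte_eth_xstats_get_names()
 * would return for a port; purely illustrative data. */
static const char *xstat_names[] = {
	"rx_packets",
	"tx_packets",
	"rx_errors",
	"tx_size_128_to_255_packets",
};
#define NUM_XSTATS (sizeof(xstat_names) / sizeof(xstat_names[0]))

/* Scan the name list once at setup time and cache the returned id;
 * on the fast path only the integer id is used, avoiding any
 * string comparison when values are read. */
static int
find_xstat_id(const char *name)
{
	size_t i;

	for (i = 0; i < NUM_XSTATS; i++)
		if (strcmp(xstat_names[i], name) == 0)
			return (int)i;
	return -1; /* driver does not expose this statistic */
}
```

The id cached this way would then be placed in the ``ids`` array passed to ``rte_eth_xstats_get_by_id()`` to retrieve a whole "group" of statistics in one call.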
* [PATCH v3 02/11] doc: correct grammar and typos in argparse library guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 01/11] doc: correct grammar and punctuation errors in ethdev guide Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 03/11] doc: correct grammar and typos in design guide Stephen Hemminger
` (8 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Add missing articles ("a user-friendly", "a long_name field")
- Fix awkward phrasing ("take with" -> "have")
- Correct verb forms ("automatic generate" -> "automatic generation of",
"are parsing" -> "are parsed", "don't" -> "doesn't")
- Fix typo in field name (val_save -> val_saver)
- Fix stray backtick in code example
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/argparse_lib.rst | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/argparse_lib.rst b/doc/guides/prog_guide/argparse_lib.rst
index 4a4214e00f..b0907cfc07 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -5,21 +5,21 @@ Argparse Library
================
The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+making it easy to write a user-friendly command-line program.
Features and Capabilities
-------------------------
-- Support parsing optional argument (which could take with no-value,
- required-value and optional-value).
+- Support parsing optional arguments (which could have no-value,
+ required-value or optional-value).
-- Support parsing positional argument (which must take with required-value).
+- Support parsing positional arguments (which must have required-value).
- Support getopt-style argument reordering for non-flag arguments as an alternative to positional arguments.
-- Support automatic generate usage information.
+- Support automatic generation of usage information.
-- Support issue errors when provide with invalid arguments.
+- Support issuing errors when provided with invalid arguments.
- Support parsing argument by two ways:
@@ -126,15 +126,15 @@ the following two modes are supported (take above ``--ccc`` as an example):
- The single mode: ``--ccc`` or ``-c``.
-- The kv mode: ``--ccc=123`` or ``-c=123`` or ``-c123```.
+- The kv mode: ``--ccc=123`` or ``-c=123`` or ``-c123``.
For positional arguments which must take required-value,
-their values are parsing in the order defined.
+their values are parsed in the order defined.
.. note::
The compact mode is not supported.
- Take above ``-a`` and ``-d`` as an example, don't support ``-ad`` input.
+  Taking the above ``-a`` and ``-d`` as an example, the ``-ad`` input is not supported.
Parsing by autosave way
~~~~~~~~~~~~~~~~~~~~~~~
@@ -169,7 +169,7 @@ For arguments which are not flags (i.e. don't start with a hyphen '-'),
there are two ways in which they can be handled by the library:
#. Positional arguments: these are defined in the ``args`` array with a NULL ``short_name`` field,
- and long_name field that does not start with a hyphen '-'.
+ and a ``long_name`` field that does not start with a hyphen '-'.
They are parsed as required-value arguments.
#. As ignored, or unhandled arguments: if the ``ignore_non_flag_args`` field in the ``rte_argparse`` object is set to true,
@@ -283,7 +283,7 @@ Parsing by callback way
It could also choose to use callback to parse,
just define a unique index for the argument
-and make the ``val_save`` field to be NULL also zero value-type.
+and set the ``val_saver`` field to NULL with a zero value-type.
In the example at the top of this section,
the arguments ``--ddd``/``--eee``/``--fff`` and ``ppp`` all use this way.
@@ -311,7 +311,7 @@ Then the user input could contain multiple ``--xyz`` arguments.
.. note::
- The multiple times argument only support with optional argument
+  The multiple times argument is only supported with optional arguments
and must be parsed by callback way.
Help and Usage Information
--
2.51.0
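The single mode and kv mode described in this patch (``--ccc`` / ``-c`` versus ``--ccc=123`` / ``-c=123`` / ``-c123``) can be illustrated by splitting one option token into its name and value. This is only a sketch of the idea, not the argparse library's parser; the ``split_option()`` helper and its signature are hypothetical.

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Split one optional-argument token into name and value:
 *   single mode: "--ccc", "-c"          -> *value is NULL
 *   kv mode:     "--ccc=123", "-c=123",
 *                "-c123"                -> *value points at "123"
 * Returns 0 on success, -1 if the token is not a flag
 * (a positional argument, handled elsewhere). */
static int
split_option(const char *tok, char *name, size_t nsz, const char **value)
{
	const char *eq;

	if (tok[0] != '-')
		return -1;

	*value = NULL;
	eq = strchr(tok, '=');
	if (eq != NULL) {
		/* kv mode with '=': "--ccc=123" or "-c=123" */
		size_t n = (size_t)(eq - tok);

		if (n >= nsz)
			n = nsz - 1;
		memcpy(name, tok, n);
		name[n] = '\0';
		*value = eq + 1;
	} else if (tok[1] != '-' && strlen(tok) > 2) {
		/* kv mode, short option with attached value: "-c123" */
		snprintf(name, nsz, "%.2s", tok);
		*value = tok + 2;
	} else {
		/* single mode: "--ccc" or "-c" */
		snprintf(name, nsz, "%s", tok);
	}
	return 0;
}
```

Note how ``-ad`` would be indistinguishable from ``-a`` with attached value ``d`` here, which is one reason the compact mode mentioned in the note above is not supported.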
* [PATCH v3 03/11] doc: correct grammar and typos in design guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 01/11] doc: correct grammar and punctuation errors in ethdev guide Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 02/11] doc: correct grammar and typos in argparse library guide Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 04/11] doc: correct errors in Linux system requirements guide Stephen Hemminger
` (7 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fixes:
- Use "execution environment" consistently (not "executive environment")
- Fix FreeBSD capitalization
- Use plural "these execution environments" to match multiple options
- Add hyphen to "non-exhaustive"
- Add missing comma before *telemetry*
- Remove double space
- Minor rewording for clarity
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/contributing/design.rst | 71 ++++++++++++++++--------------
1 file changed, 39 insertions(+), 32 deletions(-)
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index 5517613424..a966da2ddd 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -1,6 +1,7 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright 2018 The DPDK contributors
+
Design
======
@@ -8,22 +9,26 @@ Design
Environment or Architecture-specific Sources
--------------------------------------------
-In DPDK and DPDK applications, some code is specific to an architecture (i686, x86_64) or to an executive environment (freebsd or linux) and so on.
-As far as is possible, all such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+In DPDK and DPDK applications, some code is architecture-specific (i686, x86_64) or environment-specific (FreeBSD or Linux, etc.).
+When feasible, such architecture or environment-specific code should be provided via standard APIs in the EAL.
+
+By convention, a file is specific if it is located in a directory indicating so. Otherwise, it is common.
+
+For example:
-By convention, a file is common if it is not located in a directory indicating that it is specific.
-For instance, a file located in a subdir of "x86_64" directory is specific to this architecture.
+A file located in a subdir of the "x86_64" directory is specific to this architecture.
A file located in a subdir of "linux" is specific to this execution environment.
.. note::
Code in DPDK libraries and applications should be generic.
- The correct location for architecture or executive environment specific code is in the EAL.
+ The correct location for architecture or execution environment-specific code is in the EAL.
-When absolutely necessary, there are several ways to handle specific code:
+When necessary, there are several ways to handle specific code:
+
+
+* When the differences are small and they can be embedded in the same C file, use a ``#ifdef`` with a build definition macro in the C code.
-* Use a ``#ifdef`` with a build definition macro in the C code.
- This can be done when the differences are small and they can be embedded in the same C file:
.. code-block:: c
@@ -33,9 +38,9 @@ When absolutely necessary, there are several ways to handle specific code:
titi();
#endif
-* Use build definition macros and conditions in the Meson build file. This is done when the differences are more significant.
- In this case, the code is split into two separate files that are architecture or environment specific.
- This should only apply inside the EAL library.
+
+* When the differences are more significant, use build definition macros and conditions in the Meson build file. In this case, the code is split into two separate files that are architecture or environment specific. This should only apply inside the EAL library.
+
Per Architecture Sources
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -43,15 +48,16 @@ Per Architecture Sources
The following macro options can be used:
* ``RTE_ARCH`` is a string that contains the name of the architecture.
-* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if we are building for those architectures.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined when building for these architectures.
+
Per Execution Environment Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following macro options can be used:
-* ``RTE_EXEC_ENV_NAME`` is a string that contains the name of the executive environment.
-* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV_NAME`` is a string that contains the name of the execution environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for these execution environments.
Mbuf features
-------------
@@ -66,7 +72,7 @@ The "dynamic" area is eating the remaining space in mbuf,
and some existing "static" fields may need to become "dynamic".
Adding a new static field or flag must be an exception matching many criteria
-like (non exhaustive): wide usage, performance, size.
+like (non-exhaustive): wide usage, performance, size.
Runtime Information - Logging, Tracing and Telemetry
@@ -82,11 +88,11 @@ DPDK provides a number of built-in mechanisms to provide this introspection:
Each of these has its own strengths and suitabilities for use within DPDK components.
-Below are some guidelines for when each should be used:
+Here are guidelines for when each mechanism should be used:
* For reporting error conditions, or other abnormal runtime issues, *logging* should be used.
- Depending on the severity of the issue, the appropriate log level, for example,
- ``ERROR``, ``WARNING`` or ``NOTICE``, should be used.
+  Depending on the severity of the issue, the appropriate log level
+  (for example, ``ERROR``, ``WARNING`` or ``NOTICE``) should be used.
.. note::
@@ -96,24 +102,24 @@ Below are some guidelines for when each should be used:
* For component initialization, or other cases where a path through the code
is only likely to be taken once,
- either *logging* at ``DEBUG`` level or *tracing* may be used, or potentially both.
+ either *logging* at ``DEBUG`` level or *tracing* may be used, or both.
In the latter case, tracing can provide basic information as to the code path taken,
with debug-level logging providing additional details on internal state,
- not possible to emit via tracing.
+ which is not possible to emit via tracing.
* For a component's data-path, where a path is to be taken multiple times within a short timeframe,
*tracing* should be used.
Since DPDK tracing uses `Common Trace Format <https://diamon.org/ctf/>`_ for its tracing logs,
post-analysis can be done using a range of external tools.
-* For numerical or statistical data generated by a component, for example, per-packet statistics,
+* For numerical or statistical data generated by a component, such as per-packet statistics,
*telemetry* should be used.
-* For any data where the data may need to be gathered at any point in the execution
- to help assess the state of the application component,
- for example, core configuration, device information, *telemetry* should be used.
+* For any data that may need to be gathered at any point during the execution
+ to help assess the state of the application component (for example, core configuration, device information), *telemetry* should be used.
Telemetry callbacks should not modify any program state, but be "read-only".
+
Many libraries also include a ``rte_<libname>_dump()`` function as part of their API,
writing verbose internal details to a given file-handle.
New libraries are encouraged to provide such functions where it makes sense to do so,
@@ -135,13 +141,12 @@ requirements for preventing ABI changes when implementing statistics.
Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Having runtime support for enabling/disabling library statistics is recommended,
-as build-time options should be avoided. However, if build-time options are used,
-for example as in the table library, the options can be set using c_args.
-When this flag is set, all the counters supported by current library are
+Having runtime support for enabling/disabling library statistics is recommended
+as build-time options should be avoided. However, if build-time options are used, as in the table library, the options can be set using c_args.
+When this flag is set, all the counters supported by the current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
-are collected for any instance of any object type provided by the library:
+are collected for any instance of any object type provided by the library.
Prevention of ABI changes due to library statistics support
@@ -165,8 +170,8 @@ Motivation to allow the application to turn library statistics on and off
It is highly recommended that each library provides statistics counters to allow
an application to monitor the library-level run-time events. Typical counters
-are: number of packets received/dropped/transmitted, number of buffers
-allocated/freed, number of occurrences for specific events, etc.
+are: the number of packets received/dropped/transmitted, the number of buffers
+allocated/freed, the number of occurrences for specific events, etc.
However, the resources consumed for library-level statistics counter collection
have to be spent out of the application budget and the counters collected by
@@ -198,6 +203,7 @@ applications:
the application may decide to turn the collection of statistics counters off for
Library X and on for Library Y.
+
The statistics collection consumes a certain amount of CPU resources (cycles,
cache bandwidth, memory bandwidth, etc) that depends on:
@@ -218,6 +224,7 @@ cache bandwidth, memory bandwidth, etc) that depends on:
validated for header integrity, counting the number of bits set in a bitmask
might be needed.
+
PF and VF Considerations
------------------------
@@ -229,5 +236,5 @@ Developers should work with the Linux Kernel community to get the required
functionality upstream. PF functionality should only be added to DPDK for
testing and prototyping purposes while the kernel work is ongoing. It should
also be marked with an "EXPERIMENTAL" tag. If the functionality isn't
-upstreamable then a case can be made to maintain the PF functionality in DPDK
+upstreamable, then a case can be made to maintain the PF functionality in DPDK
without the EXPERIMENTAL tag.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v3 04/11] doc: correct errors in Linux system requirements guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (2 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 03/11] doc: correct grammar and typos in design guide Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 05/11] doc: correct grammar in service cores guide Stephen Hemminger
` (6 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Fix capitalization of IBM Advance Toolchain
- Remove double spaces before "support"
- Add missing preposition "in" and article "the"
- Fix hugetlbfs mount command syntax (add -o flag and device)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/linux_gsg/sys_reqs.rst | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 52a840fbe9..869584c344 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -68,7 +68,7 @@ Compilation of the DPDK
* Intel\ |reg| oneAPI DPC++/C++ Compiler.
-* IBM\ |reg| Advance ToolChain for Powerlinux. This is a set of open source development tools and runtime libraries
+* IBM\ |reg| Advance Toolchain for Powerlinux. This is a set of open source development tools and runtime libraries
which allows users to take leading edge advantage of IBM's latest POWER hardware features on Linux. To install
it, see the IBM official installation document.
@@ -93,7 +93,7 @@ e.g. :doc:`../nics/index`
Running DPDK Applications
-------------------------
-To run a DPDK application, some customization may be required on the target machine.
+To run a DPDK application, customization may be required on the target machine.
System Software
~~~~~~~~~~~~~~~
@@ -127,9 +127,9 @@ System Software
* HUGETLBFS
- * PROC_PAGE_MONITOR support
+ * PROC_PAGE_MONITOR support
- * HPET and HPET_MMAP configuration options should also be enabled if HPET support is required.
+ * HPET and HPET_MMAP configuration options should also be enabled if HPET support is required.
See the section on :ref:`High Precision Event Timer (HPET) Functionality <High_Precision_Event_Timer>` for more details.
.. _linux_gsg_hugepages:
@@ -138,7 +138,7 @@ Use of Hugepages in the Linux Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hugepage support is required for the large memory pool allocation used for packet buffers
-(the HUGETLBFS option must be enabled in the running kernel as indicated the previous section).
+(the HUGETLBFS option must be enabled in the running kernel as indicated in the previous section).
By using hugepage allocations, performance is increased since fewer pages are needed,
and therefore less Translation Lookaside Buffers (TLBs, high speed translation caches),
which reduce the time it takes to translate a virtual page address to a physical page address.
@@ -225,10 +225,10 @@ However, in order to use hugepage sizes other than the default, it is necessary
to manually create mount points for those hugepage sizes (e.g. 1GB pages).
To make the hugepages of size 1GB available for DPDK use,
-following steps must be performed::
+the following steps must be performed::
mkdir /mnt/huge
- mount -t hugetlbfs pagesize=1GB /mnt/huge
+ mount -t hugetlbfs -o pagesize=1GB none /mnt/huge
The mount point can be made permanent across reboots, by adding the following line to the ``/etc/fstab`` file::
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
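Putting the corrected hugetlbfs commands from the patch above together, the full 1 GB hugepage setup looks like the configuration sketch below (requires root; the `nodev` device name and fstab layout follow the surrounding getting-started guide, so treat the exact fstab field order as an assumption to check against your distribution):

```shell
# Create the mount point and mount hugetlbfs with an explicit 1GB page size.
# The -o flag and the "none" pseudo-device are required; the old form
# "mount -t hugetlbfs pagesize=1GB /mnt/huge" is invalid syntax.
mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=1GB none /mnt/huge

# To make the mount persistent across reboots, add a line like this
# to /etc/fstab:
#   nodev /mnt/huge hugetlbfs pagesize=1GB 0 0
```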
* [PATCH v3 05/11] doc: correct grammar in service cores guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (3 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 04/11] doc: correct errors in Linux system requirements guide Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 06/11] doc: correct grammar and errors in trace library guide Stephen Hemminger
` (5 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Use "abstract away the differences" instead of unclear
"simplify the difference"
- Fix preposition: "two methods for having" not "to having"
- Fix reversed phrasing: "Disabling that service on the core"
not "that core on the service"
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/service_cores.rst | 30 ++++++++++++-------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/doc/guides/prog_guide/service_cores.rst b/doc/guides/prog_guide/service_cores.rst
index 5284eeb96a..ee47a85a90 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,12 +4,12 @@
Service Cores
=============
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
-cores are used at runtime.
+DPDK has a concept known as service cores. Service cores enable a dynamic way of
+performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided to give you the option of allowing applications to control
+how the service cores are used at runtime.
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
require CPU cycles to operate) and service cores (DPDK lcores, tasked with
running services). The power of the service core concept is that the mapping
between service cores and services can be configured to abstract away the
@@ -18,24 +18,24 @@ difference between platforms and environments.
For example, the Eventdev has hardware and software PMDs. Of these the software
PMD requires an lcore to perform the scheduling operations, while the hardware
PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in the software.
For detailed information about the service core API, please refer to the docs.
Service Core Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are two methods to having service cores in a DPDK application, either by
+There are two methods to having service cores in a DPDK application: either by
using the service corelist, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-S` corelist argument to EAL, which will
-take any cores available in the main DPDK corelist, and if also set
-in the service corelist the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-S` coremask argument to the EAL, which will
+take any cores available in the main DPDK coremask. If the bits are also set
+in the service coremask, the cores become service-cores instead of DPDK
application lcores.
Enabling Services on Cores
~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set of
service cores. Enabling a service on a particular core means that the lcore in
question will run the service. Disabling that core on the service stops the
lcore in question from running the service.
@@ -48,8 +48,8 @@ function to run the service.
Service Core Statistics
~~~~~~~~~~~~~~~~~~~~~~~
-The service core library is capable of collecting runtime statistics like number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the number
+of calls to a specific service, and the number of cycles used by the service. The
cycle count collection is dynamically configurable, allowing any application to
profile the services running on the system at any time.
@@ -58,9 +58,9 @@ Service Core Tracing
The service core library is instrumented with tracepoints using the DPDK Trace
Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to use the
+state. To activate tracing when launching a DPDK program, it is necessary to use the
``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core tracing::
+to enable. Here is an example if you want to specify only service core tracing::
./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" --trace="lib.eal.service*"
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v3 06/11] doc: correct grammar and errors in trace library guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (4 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 05/11] doc: correct grammar in service cores guide Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 07/11] doc: correct typos in log " Stephen Hemminger
` (4 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- CRITICAL: restore missing "out" in "compiled out by default"
(RTE_TRACE_POINT_FP is disabled by default, not enabled)
- Add missing article and verb ("a framework", "are broadly divided")
- Fix subject-verb agreement ("traces that use", "example greps/counts")
- Fix article before vowel sound ("an EAL")
- Fix preposition ("known to DPDK" not "known of DPDK")
- Use standard spelling "lockless" and "non-lcore"
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/trace_lib.rst | 68 ++++++++++++++---------------
1 file changed, 34 insertions(+), 34 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..829a061074 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism will be useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
frequently than log messages, often in the range of thousands per second, with
very little execution overhead.
Logging is more appropriate for a very high-level analysis of less frequent
events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
Simply put, logging is one of the many use cases that can be satisfied with
tracing.
DPDK tracing library features
-----------------------------
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with minimum
impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
- Support ``overwrite`` and ``discard`` trace mode operations.
- String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a tracepoint?
+How to add a Tracepoint
------------------------
This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
rte_trace_point_emit_string(str);
)
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
The user can choose any name for the tracepoint.
However, when adding a tracepoint in the DPDK library, the
``rte_<library_name>_trace_[<domain>_]<name>`` naming convention must be
followed.
The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition as the following
function template:
.. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the tracepoint
+Register the Tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -122,40 +122,40 @@ convention.
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
- For generic tracepoint or for tracepoint used in public header files,
+ For a generic tracepoint or for the tracepoint used in public header files,
the user must export a ``__<trace_function_name>`` symbol
in the library ``.map`` file for this tracepoint
- to be used out of the library, in shared builds.
+ to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast path tracepoint
+Fast Path Tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event record mode
+Event Record Mode
-----------------
-Event record mode is an attribute of trace buffers. Trace library exposes the
+Event record mode is an attribute of trace buffers. The trace library exposes the
following modes:
Overwrite
- When the trace buffer is full, new trace events overwrites the existing
+ When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
When the trace buffer is full, new trace events will be discarded.
-The mode can be configured either using EAL command line parameter
-``--trace-mode`` on application boot up or use ``rte_trace_mode_set()`` API to
+The mode can be configured either using the EAL command line parameter
+``--trace-mode`` on application boot up or use the ``rte_trace_mode_set()`` API to
configure at runtime.
-Trace file location
+Trace File Location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
@@ -167,7 +167,7 @@ option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and analyze the recorded events
+View and Analyze Recorded Events
------------------------------------
Once the trace directory is available, the user can view/inspect the recorded
@@ -176,7 +176,7 @@ events.
There are many tools you can use to read DPDK traces:
#. ``babeltrace`` is a command-line utility that converts trace formats; it
- supports the format that DPDK trace library produces, CTF, as well as a
+ supports the format that the DPDK trace library produces, CTF, as well as a
basic text output that can be grep'ed.
The babeltrace command is part of the Open Source Babeltrace project.
@@ -195,12 +195,12 @@ to babeltrace with no options::
all their events, merging them in chronological order.
You can pipe the output of the babeltrace into a tool like grep(1) for further
-filtering. Below example grep the events for ``ethdev`` only::
+filtering. The example below greps the events for ``ethdev`` only::
babeltrace /tmp/my-dpdk-trace | grep ethdev
You can pipe the output of babeltrace into a tool like wc(1) to count the
-recorded events. Below example count the number of ``ethdev`` events::
+recorded events. The example below counts the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
@@ -238,7 +238,7 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that uses ``Common Trace
+As DPDK trace library is designed to generate traces that use ``Common Trace
Format (CTF)``. ``CTF`` specification consists of the following units to create
a trace.
@@ -249,7 +249,7 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details broadly divided into the following areas:
+The implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
@@ -272,16 +272,16 @@ Trace memory
The trace memory will be allocated through an internal function
``__rte_trace_mem_per_thread_alloc()``. The trace memory will be allocated
-per thread to enable lock less trace-emit function.
+per thread to enable lockless trace-emit function.
-For non lcore threads, the trace memory is allocated on the first trace
+For non-lcore threads, the trace memory is allocated on the first trace
emission.
-For lcore threads, if trace points are enabled through a EAL option, the trace
-memory is allocated when the threads are known of DPDK
+For lcore threads, if trace points are enabled through an EAL option, the trace
+memory is allocated when the threads are known to DPDK
(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
-the behavior is the same as non lcore threads and the trace memory is allocated
+the behavior is the same as non-lcore threads and the trace memory is allocated
on the first trace emission.
Trace memory layout
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v3 07/11] doc: correct typos in log library guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (5 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 06/11] doc: correct grammar and errors in trace library guide Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 08/11] doc: correct errors in command-line " Stephen Hemminger
` (3 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Fix spelling errors (stystem -> system, acheived -> achieved)
- Fix sentence structure for rte_log() parameter description
- Use consistent spelling "timestamp" (one word)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/log_lib.rst | 32 +++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index 3e888b8965..a3d6104e72 100644
--- a/doc/guides/prog_guide/log_lib.rst
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -6,7 +6,7 @@ Log Library
The DPDK Log library provides the logging functionality for other DPDK libraries and drivers.
By default, logs are sent only to standard error output of the process.
-The syslog EAL option can be used to redirect to the stystem logger on Linux and FreeBSD.
+The syslog EAL option can be used to redirect to the system logger on Linux and FreeBSD.
In addition, the log can be redirected to a different stdio file stream.
Log Levels
@@ -26,14 +26,14 @@ These levels, specified in ``rte_log.h`` are (from most to least important):
At runtime, only messages of a configured level or above (i.e. of higher importance)
will be emitted by the application to the log output.
-That level can be configured either by the application calling the relevant APIs from the logging library,
+That level can be configured either by the application calling relevant APIs from the logging library,
or by the user passing the ``--log-level`` parameter to the EAL via the application.
Setting Global Log Level
~~~~~~~~~~~~~~~~~~~~~~~~
To adjust the global log level for an application,
-just pass a numeric level or a level name to the ``--log-level`` EAL parameter.
+pass a numeric level or a level name to the ``--log-level`` EAL parameter.
For example::
/path/to/app --log-level=error
@@ -47,9 +47,9 @@ Within an application, the log level can be similarly set using the ``rte_log_se
Setting Log Level for a Component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In some cases, for example, for debugging purposes,
-it may be desirable to increase or decrease the log level for only a specific component, or set of components.
-To facilitate this, the ``--log-level`` argument also accepts an, optionally wildcarded, component name,
+In some cases (such as debugging purposes),
+you may want to increase or decrease the log level for only a specific component or set of components.
+To facilitate this, the ``--log-level`` argument also accepts an optionally wildcarded component name,
along with the desired level for that component.
For example::
@@ -57,13 +57,13 @@ For example::
/path/to/app --log-level=lib.*:warning
-Within an application, the same result can be got using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
+Within an application, the same result can be achieved by using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
Using Logging APIs to Generate Log Messages
-------------------------------------------
-To output log messages, ``rte_log()`` API function should be used.
+To output log messages, the ``rte_log()`` API function should be used.
As well as the log message, ``rte_log()`` takes two additional parameters:
* The log level
@@ -74,16 +74,16 @@ The component type is a unique id that identifies the particular DPDK component
To get this id, each component needs to register itself at startup,
using the macro ``RTE_LOG_REGISTER_DEFAULT``.
This macro takes two parameters, with the second being the default log level for the component.
-The first parameter, called "type", the name of the "logtype", or "component type" variable used in the component.
-This variable will be defined by the macro, and should be passed as the second parameter in calls to ``rte_log()``.
+The first parameter, called "type", is the name of the "logtype", or "component type" variable used in the component.
+This variable will be defined by the macro and should be passed as the second parameter in calls to ``rte_log()``.
In general, most DPDK components define their own logging macros to simplify the calls to the log APIs.
They do this by:
* Hiding the component type parameter inside the macro so it never needs to be passed explicitly.
* Using the log-level definitions given in ``rte_log.h`` to allow short textual names to be used in
- place of the numeric log levels.
+ place of numeric log levels.
-The following code is taken from ``rte_cfgfile.c`` and shows the log registration,
+The following code is taken from ``rte_cfgfile.c`` and shows the log registration
and subsequent definition of a shortcut logging macro.
It can be used as a template for any new components using DPDK logging.
@@ -98,10 +98,10 @@ It can be used as a template for any new components using DPDK logging.
it should be placed near the top of the C file using it.
If not, the logtype variable should be defined as an "extern int" near the top of the file.
- Similarly, if logging is to be done by multiple files in a component,
- only one file should register the logtype via the macro,
+ Similarly, if logging will be done by multiple files in a component,
+ only one file should register the logtype via the macro
and the logtype should be defined as an "extern int" in a common header file.
- Any component-specific logging macro should similarly be defined in that header.
+ Any component-specific logging macro should be similarly defined in that header.
Throughout the cfgfile library, all logging calls are therefore of the form:
@@ -122,7 +122,7 @@ For example::
Multiple alternative timestamp formats are available:
-.. csv-table:: Log time stamp format
+.. csv-table:: Log timestamp format
:header: "Format", "Description", "Example"
:widths: 6, 30, 32
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v3 08/11] doc: correct errors in command-line library guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (6 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 07/11] doc: correct typos in log " Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 09/11] doc: correct errors in trace " Stephen Hemminger
` (2 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fix several errors in the cmdline library documentation:
- Fix function name typo: cmdline_new_stdin -> cmdline_stdin_new
- Fix type name: cmdline_parse_t -> cmdline_parse_inst_t
- Fix grammar: "that others" -> "than others"
- Fix spelling: "boiler plate" -> "boilerplate"
- Clarify wording: "multiplex" -> "direct" for command routing
- Fix misleading phrase: "call a separate function" -> "call a single
function" (multiplexing routes multiple commands to one callback)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/cmdline.rst | 42 +++++++++++++++----------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..c794ec826f 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -4,9 +4,9 @@
Command-line Library
====================
-Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+Since its earliest versions, DPDK has included a command-line library,
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
This chapter covers the basics of the command-line library and how to use it in an application.
Library Features
@@ -18,14 +18,14 @@ The DPDK command-line library supports the following features:
* Ability to read and process commands taken from an input file, e.g. startup script
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
* Strings
* Signed/unsigned 16/32/64-bit integers
* IP Addresses
* Ethernet Addresses
-* Ability to multiplex multiple commands to a single callback function
+* Ability to direct multiple commands to a single callback function
Adding Command-line to an Application
-------------------------------------
@@ -46,7 +46,7 @@ Adding a command-line instance to an application involves a number of coding ste
Many of these steps can be automated using the script ``dpdk-cmdline-gen.py`` installed by DPDK,
and found in the ``buildtools`` folder in the source tree.
-This section covers adding a command-line using this script to generate the boiler plate,
+This section covers adding a command-line using this script to generate the boilerplate,
while the following section,
`Worked Example of Adding Command-line to an Application`_ covers the steps to do so manually.
@@ -56,7 +56,7 @@ Creating a Command List File
The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
While these can be piped to it via standard input, using a list file is probably best.
-The format of the list file must be:
+The format of the list file must follow these requirements:
* Comment lines start with '#' as first non-whitespace character
@@ -75,7 +75,7 @@ The format of the list file must be:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than a the type name.
+ have the comma-separated option list placed in braces, rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent ".c" file.
Providing the Function Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
and named ``cmd_<cmdname>_parsed``.
The function prototypes can be seen in the generated output header.
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
So, given the commands in our worked example below: ``quit`` and ``show port stats <n>``,
the callback functions would be:
@@ -151,11 +151,11 @@ the callback functions would be:
...
}
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To access the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -179,7 +179,7 @@ To integrate the script output with the application,
we must ``#include`` the generated header into our applications C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (modifiable via a script parameter).
The callback functions may be in this same file, or in a separate one -
they just need to be available to the linker at build-time.
@@ -190,7 +190,7 @@ Limitations of the Script Approach
The script approach works for most commands that a user may wish to add to an application.
However, it does not support the full range of functions possible with the DPDK command-line library.
For example,
-it is not possible using the script to multiplex multiple commands into a single callback function.
+it is not possible using the script to direct multiple commands to a single callback function.
To use this functionality, the user should follow the instructions in the next section
`Worked Example of Adding Command-line to an Application`_ to manually configure a command-line instance.
@@ -416,7 +416,7 @@ Once we have our ``ctx`` variable defined,
we now just need to call the API to create the new command-line instance in our application.
The basic API is ``cmdline_new`` which will create an interactive command-line with all commands available.
However, if additional features for interactive use - such as tab-completion -
-are desired, it is recommended that ``cmdline_new_stdin`` be used instead.
+are desired, it is recommended that ``cmdline_stdin_new`` be used instead.
A pattern that can be used in applications is to use ``cmdline_new`` for processing any startup commands,
either from file or from the environment (as is done in the "dpdk-test" application),
@@ -449,8 +449,8 @@ For example, to handle a startup file and then provide an interactive prompt:
Multiplexing Multiple Commands to a Single Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To reduce the amount of boiler-plate code needed when creating a command-line for an application,
-it is possible to merge a number of commands together to have them call a separate function.
+To reduce the amount of boilerplate code needed when creating a command-line for an application,
+it is possible to merge a number of commands together to have them call a single function.
This can be done in a number of different ways:
* A callback function can be used as the target for a number of different commands.
@@ -463,7 +463,7 @@ This can be done in a number of different ways:
As a concrete example,
these two techniques are used in the DPDK unit test application ``dpdk-test``,
-where a single command ``cmdline_parse_t`` instance is used for all the "dump_<item>" test cases.
+where a single ``cmdline_parse_inst_t`` instance is used for all the "dump_<item>" test cases.
.. literalinclude:: ../../../app/test/commands.c
:language: c
@@ -481,7 +481,7 @@ the following DPDK files can be consulted for examples of command-line use.
This is not an exhaustive list of examples of command-line use in DPDK.
It is simply a list of a few files that may be of use to the application developer.
- Some of these referenced files contain more complex examples of use that others.
+ Some of these referenced files contain more complex examples of use than others.
* ``commands.c/.h`` in ``examples/cmdline``
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v3 09/11] doc: correct errors in trace library guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (7 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 08/11] doc: correct errors in command-line " Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 10/11] doc: correct errors in stack " Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 11/11] doc: correct errors in RCU " Stephen Hemminger
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Fix broken sentence in implementation details: "As DPDK trace library
is designed..." was a sentence fragment
- Fix tense inconsistency: "will be useful" -> "is useful",
"will be allocated" -> "is allocated",
"will be discarded" -> "are discarded"
- Fix grammatical parallelism: "or use the API" -> "or by using the API"
- Fix inconsistent capitalization in section headers: "Tracepoint" ->
"tracepoint" to match body text usage throughout
- Fix tool name consistency: "Tracecompass" -> "Trace Compass"
(official name, two words)
- Fix awkward phrasing: "one of a CTF trace's streams" ->
"one of the streams in a CTF trace"
- Fix trace header description for parallelism: "48 bits of timestamp
and 16 bits event ID" -> "a 48-bit timestamp and a 16-bit event ID"
- Add missing article: "to trace library" -> "to the trace library"
- Remove incorrect article:
"the ``my_tracepoint.h``" -> "``my_tracepoint.h``"
- Add function call parentheses: rte_eal_init -> rte_eal_init(),
rte_thread_register -> rte_thread_register()
- Fix ambiguous pronoun: "It can be overridden" -> "This location can
be overridden"
- Fix line wrap for readability in features list
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/trace_lib.rst | 80 ++++++++++++++---------------
1 file changed, 40 insertions(+), 40 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index 829a061074..33b6232ce3 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,7 +14,7 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-This mechanism will be useful for resolving a wide range of problems such as
+This mechanism is useful for resolving a wide range of problems such as
multi-core synchronization issues, latency measurements, and finding
post analysis information like CPU idle time, etc., that would otherwise be
extremely challenging to gather.
@@ -33,8 +33,8 @@ tracing.
DPDK tracing library features
-----------------------------
-- Provides a framework to add tracepoints in control and fast path APIs with minimum
- impact on performance.
+- Provides a framework to add tracepoints in control and fast path APIs with
+ minimal impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
@@ -47,8 +47,8 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a Tracepoint
-------------------------
+How to add a tracepoint
+-----------------------
This section steps you through the details of adding a simple tracepoint.
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the Tracepoint
+Register the tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -103,7 +103,7 @@ Register the Tracepoint
RTE_TRACE_POINT_REGISTER(app_trace_string, app.trace.string)
The above code snippet registers the ``app_trace_string`` tracepoint to
-trace library. Here, the ``my_tracepoint.h`` is the header file
+the trace library. Here, ``my_tracepoint.h`` is the header file
that the user created in the first step :ref:`create_tracepoint_header_file`.
The second argument for the ``RTE_TRACE_POINT_REGISTER`` is the name for the
@@ -129,7 +129,7 @@ convention.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast Path Tracepoint
+Fast path tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
@@ -139,7 +139,7 @@ the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event Record Mode
+Event record mode
-----------------
Event record mode is an attribute of trace buffers. The trace library exposes the
@@ -149,26 +149,26 @@ Overwrite
When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
- When the trace buffer is full, new trace events will be discarded.
+ When the trace buffer is full, new trace events are discarded.
The mode can be configured either using the EAL command line parameter
-``--trace-mode`` on application boot up or use the ``rte_trace_mode_set()`` API to
-configure at runtime.
+``--trace-mode`` on application boot up or by using the ``rte_trace_mode_set()``
+API at runtime.
-Trace File Location
+Trace file location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
the trace buffers to the filesystem. By default, the trace files are stored in
``$HOME/dpdk-traces/rte-yyyy-mm-dd-[AP]M-hh-mm-ss/``.
-It can be overridden by the ``--trace-dir=<directory path>`` EAL command line
-option.
+This location can be overridden by the ``--trace-dir=<directory path>`` EAL
+command line option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and Analyze Recorded Events
-------------------------------------
+View and analyze recorded events
+--------------------------------
Once the trace directory is available, the user can view/inspect the recorded
events.
@@ -204,28 +204,28 @@ recorded events. The example below counts the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
-Use the tracecompass GUI tool
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Use the Trace Compass GUI tool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-``Tracecompass`` is another tool to view/analyze the DPDK traces which gives
-a graphical view of events. Like ``babeltrace``, tracecompass also provides
+``Trace Compass`` is another tool to view/analyze DPDK traces, offering
+a graphical view of events. Like ``babeltrace``, Trace Compass also provides
an interface to search for a particular event.
-To use ``tracecompass``, following are the minimum required steps:
+To use ``Trace Compass``, the following are the minimum required steps:
-- Install ``tracecompass`` to the localhost. Variants are available for Linux,
+- Install ``Trace Compass`` on the local machine. Variants are available for Linux,
Windows, and OS-X.
-- Launch ``tracecompass`` which will open a graphical window with trace
+- Launch ``Trace Compass``, which will open a graphical window with trace
management interfaces.
-- Open a trace using ``File->Open Trace`` option and select metadata file which
- is to be viewed/analyzed.
+- Open a trace using the ``File->Open Trace`` option and select the metadata file which
+ will be viewed/analyzed.
-For more details, refer
+For more details, refer to
`Trace Compass <https://www.eclipse.org/tracecompass/>`_.
Quick start
-----------
-This section steps you through the details of generating trace and viewing it.
+This section steps you through the details of generating the trace and viewing it.
- Start the dpdk-test::
@@ -238,9 +238,9 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that use ``Common Trace
-Format (CTF)``. ``CTF`` specification consists of the following units to create
-a trace.
+The DPDK trace library is designed to generate traces that use
+``Common Trace Format (CTF)``. The ``CTF`` specification consists of the
+following units to create a trace.
- ``Stream`` Sequence of packets.
- ``Packet`` Header and one or more events.
@@ -249,15 +249,15 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details are broadly divided into the following areas:
+Implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
-Based on the ``CTF`` specification, one of a CTF trace's streams is mandatory:
-the metadata stream. It contains exactly what you would expect: data about the
-trace itself. The metadata stream contains a textual description of the binary
-layouts of all the other streams.
+Based on the ``CTF`` specification, one of the streams in a CTF trace is
+mandatory: the metadata stream. It contains exactly what you would expect:
+data about the trace itself. The metadata stream contains a textual description
+of the binary layouts of all the other streams.
This description is written using the Trace Stream Description Language (TSDL),
a declarative language that exists only in the realm of CTF.
@@ -270,8 +270,8 @@ The internal ``trace_metadata_create()`` function generates the metadata.
Trace memory
~~~~~~~~~~~~
-The trace memory will be allocated through an internal function
-``__rte_trace_mem_per_thread_alloc()``. The trace memory will be allocated
+The trace memory is allocated through an internal function
+``__rte_trace_mem_per_thread_alloc()``. The trace memory is allocated
per thread to enable lockless trace-emit function.
For non-lcore threads, the trace memory is allocated on the first trace
@@ -279,7 +279,7 @@ emission.
For lcore threads, if trace points are enabled through an EAL option, the trace
memory is allocated when the threads are known to DPDK
-(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
+(``rte_eal_init()`` for EAL lcores, ``rte_thread_register()`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
the behavior is the same as non-lcore threads and the trace memory is allocated
on the first trace emission.
@@ -348,7 +348,7 @@ trace.header
| timestamp [47:0] |
+----------------------+
-The trace header is 64 bits, it consists of 48 bits of timestamp and 16 bits
+The trace header is 64 bits, consisting of a 48-bit timestamp and a 16-bit
event ID.
The ``packet.header`` and ``packet.context`` will be written in the slow path
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v3 10/11] doc: correct errors in stack library guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (8 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 09/11] doc: correct errors in trace " Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 11/11] doc: correct errors in RCU " Stephen Hemminger
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fix several errors in the stack library documentation:
- Fix grammar: "These function are" -> "These functions are"
- Fix grammar: "incorrect change" -> "incorrectly change"
- Fix RST section hierarchy: "Implementation" was marked as a
subsection (~~~~) but contained sections (----); corrected so
Implementation is a section and Lock-based/Lock-free stack are
subsections beneath it
- Fix inconsistent header capitalization: "Lock-based Stack" and
"Lock-free Stack" -> lowercase "stack" to match other DPDK docs
- Fix awkward wording: "this algorithm stack uses" -> "this algorithm
uses"
- Fix inconsistent underline lengths in RST headers
- Add code formatting to function name: rte_stack_create() with
backticks and parentheses
- Fix hyphenation: "multi-threading safe" -> "multi-thread safe"
(matches line 37 usage)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/stack_lib.rst | 32 ++++++++++++++---------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..1ca9d73bc0 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -13,8 +13,8 @@ The stack library provides the following basic operations:
user-specified socket, with either standard (lock-based) or lock-free
behavior.
-* Push and pop a burst of one or more stack objects (pointers). These function
- are multi-threading safe.
+* Push and pop a burst of one or more stack objects (pointers). These functions
+ are multi-thread safe.
* Free a previously created stack.
@@ -23,15 +23,15 @@ The stack library provides the following basic operations:
* Query a stack's current depth and number of free entries.
Implementation
-~~~~~~~~~~~~~~
+--------------
The library supports two types of stacks: standard (lock-based) and lock-free.
Both types use the same set of interfaces, but their implementations differ.
.. _Stack_Library_Std_Stack:
-Lock-based Stack
-----------------
+Lock-based stack
+~~~~~~~~~~~~~~~~
The lock-based stack consists of a contiguous array of pointers, a current
index, and a spinlock. Accesses to the stack are made multi-thread safe by the
@@ -39,13 +39,13 @@ spinlock.
.. _Stack_Library_LF_Stack:
-Lock-free Stack
-------------------
+Lock-free stack
+~~~~~~~~~~~~~~~
The lock-free stack consists of a linked list of elements, each containing a
data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
impede the forward progress of any other thread.
The lock-free push operation enqueues a linked list of pointers by pointing the
@@ -65,15 +65,15 @@ allocated before stack pushes and freed after stack pops. Since the stack has a
fixed maximum depth, these elements do not need to be dynamically created.
The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
-rte_stack_create().
+``rte_stack_create()``.
-Preventing the ABA Problem
+Preventing the ABA problem
^^^^^^^^^^^^^^^^^^^^^^^^^^
-To prevent the ABA problem, this algorithm stack uses a 128-bit
-compare-and-swap instruction to atomically update both the stack top pointer
-and a modification counter. The ABA problem can occur without a modification
-counter if, for example:
+To prevent the ABA problem, this algorithm uses a 128-bit compare-and-swap
+instruction to atomically update both the stack top pointer and a modification
+counter. The ABA problem can occur without a modification counter if, for
+example:
#. Thread A reads head pointer X and stores the pointed-to list element.
@@ -83,7 +83,7 @@ counter if, for example:
#. Thread A changes the head pointer with a compare-and-swap and succeeds.
In this case thread A would not detect that the list had changed, and would
-both pop stale data and incorrect change the head pointer. By adding a
+both pop stale data and incorrectly change the head pointer. By adding a
modification counter that is updated on every push and pop as part of the
compare-and-swap, the algorithm can detect when the list changes even if the
head pointer remains the same.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v3 11/11] doc: correct errors in RCU library guide
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
` (9 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 10/11] doc: correct errors in stack " Stephen Hemminger
@ 2026-01-13 22:51 ` Stephen Hemminger
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-13 22:51 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fix several errors in the RCU library documentation:
- Fix wrong word: "exasperate" -> "exacerbate" (exasperate means to
annoy; exacerbate means to make worse)
- Fix typo: "such at rte_hash" -> "such as rte_hash"
- Fix grammar: "critical sections for D2 is" -> "critical section for
D2 is"
- Fix subject-verb agreement: "length... and number... is proportional"
-> "are proportional"
- Fix inconsistent abbreviation: "RT1" -> "reader thread 1" (RT1 was
never defined)
- Fix missing articles: add "the" before "grace period", "API"
- Fix double space before "The reader thread must"
- Add missing function parentheses throughout for consistency
- Add missing code formatting (backticks) around function names:
rte_rcu_qsbr_dq_enqueue, rte_rcu_qsbr_dq_reclaim,
rte_rcu_qsbr_dq_delete, rte_hash, rte_lpm
- Fix unexplained asterisk after "Reclaiming Resources*"
- Fix inconsistent capitalization in numbered list items
- Rewrap overly long lines for readability
- Clarify awkward sentence about memory examples
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/rcu_lib.rst | 143 ++++++++++++++++++------------
1 file changed, 86 insertions(+), 57 deletions(-)
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
index 9f3654f398..ac573eecbc 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -6,17 +6,17 @@ Read-Copy-Update (RCU) Library
Lockless data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
In the following sections, the term "memory" refers to memory allocated
by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory, such as an index into a free element array.
Since these data structures are lockless, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
separate the operation of removing an element into two steps:
#. Delete: in this step, the writer removes the reference to the element from
@@ -51,7 +51,7 @@ As shown in :numref:`figure_quiescent_state`, reader thread 1 accesses data
structures D1 and D2. When it is accessing D1, if the writer has to remove an
element from D1, the writer cannot free the memory associated with that
element immediately. The writer can return the memory to the allocator only
-after the reader stops referencing D1. In other words, reader thread RT1 has
+after the reader stops referencing D1. In other words, reader thread 1 has
to enter a quiescent state.
Similarly, since reader thread 2 is also accessing D1, the writer has to
@@ -62,28 +62,28 @@ quiescent state. Reader thread 3 was not accessing D1 when the delete
operation happened. So, reader thread 3 will not have a reference to the
deleted entry.
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical section for D2 is a quiescent state
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
Factors affecting the RCU mechanism
-----------------------------------
It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of the grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how the grace period and critical
section affect this overhead.
-The writer has to poll the readers to identify the end of grace period.
+The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
-exasperate these conditions.
+exacerbate these conditions.
The length of the critical section and the number of reader threads
-is proportional to the duration of the grace period. Keeping the critical
+are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
@@ -117,14 +117,14 @@ How to use this library
The application must allocate memory and initialize a QS variable.
Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes, as a parameter, the maximum number of
+reader threads that will use this variable.
Further, the application can initialize a QS variable using the API
``rte_rcu_qsbr_init()``.
Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use ``lcore_id`` as the thread ID where applicable.
@@ -132,14 +132,14 @@ The application could also use ``lcore_id`` as the thread ID where applicable.
The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
reporting its quiescent state.
Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
-wait for such reader threads to enter quiescent state. The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
-can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
+calls (for example, while using eventdev APIs). The writer thread should not
+wait for such reader threads to enter quiescent state. The reader thread must
+call the ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
+can call the ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
returns.
The writer thread can trigger the reader threads to report their quiescent
@@ -147,13 +147,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
``rte_rcu_qsbr_start()`` returns a token to each caller.
-The writer thread must call ``rte_rcu_qsbr_check()`` API with the token to
-get the current quiescent state status. Option to block till all the reader
+The writer thread must call the ``rte_rcu_qsbr_check()`` API with the token to
+get the current quiescent state status. The option to block till all the reader
threads enter the quiescent state is provided. If this API indicates that
all the reader threads have entered the quiescent state, the application
can free the deleted entry.
-The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock free.
+The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock-free.
Hence, they can be called concurrently from multiple writers even while
running as worker threads.
@@ -171,7 +171,7 @@ polls till all the readers enter the quiescent state or go offline. This API
does not allow the writer to do useful work while waiting and introduces
additional memory accesses due to continuous polling. However, the application
does not have to store the token or the reference to the deleted resource. The
-resource can be freed immediately after ``rte_rcu_qsbr_synchronize()`` API
+resource can be freed immediately after the ``rte_rcu_qsbr_synchronize()`` API
returns.
The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
@@ -179,9 +179,9 @@ The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
quiescent state. The ``rte_rcu_qsbr_check()`` API will not wait for this reader
thread to report the quiescent state status anymore.
-The reader threads should call ``rte_rcu_qsbr_quiescent()`` API to indicate that
+The reader threads should call the ``rte_rcu_qsbr_quiescent()`` API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
-quiescent state query and update the state accordingly.
+quiescent state query and updates the state accordingly.
The ``rte_rcu_qsbr_lock()`` and ``rte_rcu_qsbr_unlock()`` are empty functions.
However, these APIs can aid in debugging issues. One can mark the access to
@@ -199,42 +199,71 @@ the application. When a writer deletes an entry from a data structure, the write
#. Should check if the readers have completed a grace period and free the resources.
There are several APIs provided to help with this process. The writer
-can create a FIFO to store the references to deleted resources using ``rte_rcu_qsbr_dq_create()``.
+can create a FIFO to store the references to deleted resources using
+``rte_rcu_qsbr_dq_create()``.
The resources can be enqueued to this FIFO using ``rte_rcu_qsbr_dq_enqueue()``.
-If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing. It will also reclaim resources on regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
+If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue()`` will reclaim the resources
+before enqueuing.
+It will also reclaim resources on a regular basis to keep the FIFO from growing
+too large. If the writer runs out of resources, the writer can call
+the ``rte_rcu_qsbr_dq_reclaim()`` API to reclaim resources.
+``rte_rcu_qsbr_dq_delete()`` is provided to reclaim any remaining resources and
+free the FIFO while shutting down.
-However, if this resource reclamation process were to be integrated in lock-free data structure libraries, it
-hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms. The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
+However, if this resource reclamation process were integrated into lock-free
+data structure libraries, it would hide this complexity from the application
+and make it easier for the application to adopt lock-free algorithms.
-In any DPDK application, the resource reclamation process using QSBR can be split into 4 parts:
+The following paragraphs discuss how the reclamation process can be integrated
+in DPDK libraries.
+
+In any DPDK application, the resource reclamation process using QSBR can be
+split into 4 parts:
#. Initialization
-#. Quiescent State Reporting
-#. Reclaiming Resources
+#. Quiescent state reporting
+#. Reclaiming resources
#. Shutdown
-The design proposed here assigns different parts of this process to client libraries and applications. The term 'client library' refers to lock-free data structure libraries such at rte_hash, rte_lpm etc. in DPDK or similar libraries outside of DPDK. The term 'application' refers to the packet processing application that makes use of DPDK such as L3 Forwarding example application, OVS, VPP etc..
-
-The application has to handle 'Initialization' and 'Quiescent State Reporting'. So,
-
-* the application has to create the RCU variable and register the reader threads to report their quiescent state.
-* the application has to register the same RCU variable with the client library.
-* reader threads in the application have to report the quiescent state. This allows for the application to control the length of the critical section/how frequently the application wants to report the quiescent state.
-
-The client library will handle 'Reclaiming Resources' part of the process. The
-client libraries will make use of the writer thread context to execute the memory
-reclamation algorithm. So,
-
-* client library should provide an API to register a RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
-* client library should use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
-* if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
-
-The 'Shutdown' process needs to be shared between the application and the
-client library.
-
-* the application should make sure that the reader threads are not using the shared data structure, unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
-
-* client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
+The design proposed here assigns different parts of this process to client
+libraries and applications. The term "client library" refers to lock-free data
+structure libraries such as ``rte_hash``, ``rte_lpm``, etc. in DPDK or similar
+libraries outside of DPDK. The term "application" refers to the packet
+processing application that makes use of DPDK, such as the L3 Forwarding
+example application, OVS, VPP, etc.
+
+The application must handle "Initialization" and "Quiescent State Reporting".
+Therefore, the application:
+
+* Must create the RCU variable and register the reader threads to report their
+ quiescent state.
+* Must register the same RCU variable with the client library.
+* Note that reader threads in the application have to report the quiescent
+  state. This allows the application to control the length of the critical
+  section and how frequently it reports the quiescent state.
+
+The client library will handle the "Reclaiming Resources" part of the process.
+The client libraries will make use of the writer thread context to execute the
+memory reclamation algorithm. So, the client library should:
+
+* Provide an API to register an RCU variable that it will use. It should call
+ ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to
+ deleted entries.
+* Use ``rte_rcu_qsbr_dq_enqueue()`` to enqueue the deleted resources on the FIFO
+ and start the grace period.
+* Note that if the library runs out of resources while adding entries, it should
+ call ``rte_rcu_qsbr_dq_reclaim()`` to reclaim the resources and try the
+ resource allocation again.
+
+The "Shutdown" process needs to be shared between the application and the
+client library. Note that:
+
+* The application should make sure that the reader threads are not using the
+  shared data structure and unregister the reader threads from the QSBR
+  variable before calling the client library's shutdown function.
+
+* The client library should call ``rte_rcu_qsbr_dq_delete()`` to reclaim any
+ remaining resources and free the FIFO.
Integrating the resource reclamation with client libraries removes the burden from
the application and makes it easy to use lock-free algorithms.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 00/11] doc: programmers guide corrections
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (10 preceding siblings ...)
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
@ 2026-01-14 22:26 ` Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 01/11] doc: correct grammar and punctuation errors in ethdev guide Stephen Hemminger
` (10 more replies)
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
12 siblings, 11 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:26 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This is a revision of earlier corrections to the programmers guide.
At this point, it is a collaborative work of myself (Stephen), the
technical writer (Nandini) and AI (Claude).
Found while reviewing the DPDK programmer's guide documentation.
v4 - incorporate additional earlier changes to ethdev doc.
Stephen Hemminger (11):
doc: correct grammar and punctuation errors in ethdev guide
doc: correct grammar and typos in argparse library guide
doc: correct grammar and typos in design guide
doc: correct errors in Linux system requirements guide
doc: correct grammar in service cores guide
doc: correct grammar and errors in trace library guide
doc: correct typos in log library guide
doc: correct errors in command-line library guide
doc: correct errors in trace library guide
doc: correct errors in stack library guide
doc: correct errors in RCU library guide
doc/guides/contributing/design.rst | 71 +--
doc/guides/linux_gsg/sys_reqs.rst | 14 +-
doc/guides/prog_guide/argparse_lib.rst | 24 +-
doc/guides/prog_guide/cmdline.rst | 42 +-
doc/guides/prog_guide/ethdev/ethdev.rst | 557 ++++++++++++------------
doc/guides/prog_guide/log_lib.rst | 32 +-
doc/guides/prog_guide/rcu_lib.rst | 143 +++---
doc/guides/prog_guide/service_cores.rst | 30 +-
doc/guides/prog_guide/stack_lib.rst | 32 +-
doc/guides/prog_guide/trace_lib.rst | 118 ++---
10 files changed, 548 insertions(+), 515 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 118+ messages in thread
* [PATCH v4 01/11] doc: correct grammar and punctuation errors in ethdev guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
@ 2026-01-14 22:26 ` Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 02/11] doc: correct grammar and typos in argparse library guide Stephen Hemminger
` (9 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:26 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fix various grammar, punctuation, and typographical errors throughout
the Poll Mode Driver documentation.
- Fix extra spaces after emphasized terms
(*run-to-completion*, *pipe-line*)
- Correct possessive forms
("port's" -> "ports'", "processors" -> "processor's")
- Fix subject-verb agreement ("VFs detects" -> "VFs detect")
- Add missing articles and words ("It is duty" -> "It is the duty",
"allows the application create" -> "allows the application to create")
- Remove extraneous words ("release of all" -> "release all",
"ensures sure" -> "ensures")
- Fix typos ("dev_unint()" -> "dev_uninit()", "receive of transmit" ->
"receive or transmit", "UDP/TCP/ SCTP" -> "UDP/TCP/SCTP")
- Add missing punctuation (period at end of bullet point)
- Fix spacing around inline code markup
- Clarify awkward sentence about PROACTIVE vs PASSIVE error
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/ethdev/ethdev.rst | 557 ++++++++++++------------
1 file changed, 277 insertions(+), 280 deletions(-)
diff --git a/doc/guides/prog_guide/ethdev/ethdev.rst b/doc/guides/prog_guide/ethdev/ethdev.rst
index daaf43ea3b..31b5e973ae 100644
--- a/doc/guides/prog_guide/ethdev/ethdev.rst
+++ b/doc/guides/prog_guide/ethdev/ethdev.rst
@@ -4,126 +4,134 @@
Poll Mode Driver
================
-The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit and para virtualized virtio Poll Mode Drivers.
+The Data Plane Development Kit (DPDK) supports a wide range of Ethernet speeds,
+from 10 Megabits to 400 Gigabits, depending on hardware capability.
-A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space,
-to configure the devices and their respective queues.
-In addition, a PMD accesses the RX and TX descriptors directly without any interrupts
-(with the exception of Link Status Change interrupts) to quickly receive,
-process and deliver packets in the user's application.
-This section describes the requirements of the PMDs,
-their global design principles and proposes a high-level architecture and a generic external API for the Ethernet PMDs.
+DPDK's Poll Mode Drivers (PMDs) are high-performance, optimized drivers for various
+network interface cards that bypass the traditional kernel network stack to reduce
+latency and improve throughput. They access Rx and Tx descriptors directly in a polling
+mode without relying on interrupts (except for Link Status Change notifications), enabling
+efficient packet reception and transmission in user-space applications.
+
+This section outlines the requirements of Ethernet PMDs, their design principles,
+and presents a high-level architecture along with a generic external API.
Requirements and Assumptions
----------------------------
-The DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:
+The DPDK environment for packet processing applications supports two models: run-to-completion and pipeline.
-* In the *run-to-completion* model, a specific port's RX descriptor ring is polled for packets through an API.
- Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.
+* In the *run-to-completion* model, a specific port's Rx descriptor ring is polled for packets through an API.
+ The application then processes packets on the same core and transmits them via the port's Tx descriptor ring using another API.
-* In the *pipe-line* model, one core polls one or more port's RX descriptor ring through an API.
- Packets are received and passed to another core via a ring.
- The other core continues to process the packet which then may be placed on a port's TX descriptor ring through an API for transmission.
+* In the *pipeline* model, one core polls the Rx descriptor ring(s) of one or more ports via an API.
+ The application then passes received packets to another core via a ring for further processing,
+ which may include transmission through the Tx descriptor ring using an API.
-In a synchronous run-to-completion model,
-each logical core assigned to the DPDK executes a packet processing loop that includes the following steps:
+In a synchronous run-to-completion model, a logical core (lcore)
+assigned to DPDK executes a packet processing loop that includes the following steps:
-* Retrieve input packets through the PMD receive API
+* Retrieve input packets using the PMD receive API
-* Process each received packet one at a time, up to its forwarding
+* Process each received packet individually, up to its forwarding
-* Send pending output packets through the PMD transmit API
+* Transmit output packets using the PMD transmit API
-Conversely, in an asynchronous pipe-line model, some logical cores may be dedicated to the retrieval of received packets and
-other logical cores to the processing of previously received packets.
-Received packets are exchanged between logical cores through rings.
-The loop for packet retrieval includes the following steps:
+In contrast, the asynchronous pipeline model assigns some logical cores to retrieve packets
+and others to process them. The application exchanges packets between cores via rings.
+
+The packet retrieval loop includes:
* Retrieve input packets through the PMD receive API
* Provide received packets to processing lcores through packet queues
-The loop for packet processing includes the following steps:
-
-* Retrieve the received packet from the packet queue
+The packet processing loop includes:
-* Process the received packet, up to its retransmission if forwarded
+* Dequeue received packets from the packet queue
-To avoid any unnecessary interrupt processing overhead, the execution environment must not use any asynchronous notification mechanisms.
-Whenever needed and appropriate, asynchronous communication should be introduced as much as possible through the use of rings.
+* Process packets, including retransmission if forwarded
-Avoiding lock contention is a key issue in a multi-core environment.
-To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
-In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
+To minimize interrupt-related overhead, the execution environment should avoid asynchronous
+notification mechanisms. When asynchronous communication is required, implement it
+using rings where possible. Minimizing lock contention is critical in multi-core environments.
+To support this, PMDs use per-core private resources whenever possible.
+For example, if a PMD does not support ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``, it maintains a separate
+transmit queue per core and per port. Similarly, each receive queue is assigned to and polled by a single lcore.
-To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
-a private buffer pool in local memory to minimize remote memory access.
-The configuration of packet buffer pools should take into account the underlying physical memory architecture in terms of DIMMS,
-channels and ranks.
-The application must ensure that appropriate parameters are given at memory pool creation time.
+To support Non-Uniform Memory Access (NUMA), the memory management design assigns each logical
+core a private buffer pool in local memory to reduce remote memory access. Configuration of packet
+buffer pools should consider the underlying physical memory layout, such as DIMMs, channels, and ranks.
+The application must set proper parameters during memory pool creation.
See :doc:`../mempool_lib`.
Design Principles
-----------------
-The API and architecture of the Ethernet* PMDs are designed with the following guidelines in mind.
+The API and architecture of the Ethernet PMDs follow these design principles:
-PMDs must help global policy-oriented decisions to be enforced at the upper application level.
-Conversely, NIC PMD functions should not impede the benefits expected by upper-level global policies,
-or worse prevent such policies from being applied.
+PMDs should support the enforcement of global, policy-driven decisions at the upper application level.
+At the same time, NIC PMD functions must not hinder the performance gains expected by these higher-level policies,
+or worse, prevent them from being implemented.
-For instance, both the receive and transmit functions of a PMD have a maximum number of packets/descriptors to poll.
-This allows a run-to-completion processing stack to statically fix or
-to dynamically adapt its overall behavior through different global loop policies, such as:
+For example, both the receive and transmit functions of a PMD define a maximum number of packets to poll.
+This enables a run-to-completion processing stack to either statically configure or dynamically adjust its
+behavior according to different global loop strategies, such as:
-* Receive, process immediately and transmit packets one at a time in a piecemeal fashion.
+* Receiving, processing, and transmitting packets one at a time in a piecemeal fashion
-* Receive as many packets as possible, then process all received packets, transmitting them immediately.
+* Receiving as many packets as possible, then processing and transmitting them all immediately
-* Receive a given maximum number of packets, process the received packets, accumulate them and finally send all accumulated packets to transmit.
+* Receiving a set number of packets, processing them, and batching them for transmission at once
-To achieve optimal performance, overall software design choices and pure software optimization techniques must be considered and
-balanced against available low-level hardware-based optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on).
-The case of packet transmission is an example of this software/hardware tradeoff issue when optimizing burst-oriented network packet processing engines.
-In the initial case, the PMD could export only an rte_eth_tx_one function to transmit one packet at a time on a given queue.
-On top of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one function to transmit several packets at a time.
-However, an rte_eth_tx_burst function is effectively implemented by the PMD to minimize the driver-level transmit cost per packet through the following optimizations:
+To maximize performance, developers must consider overall software architecture and optimization techniques
+alongside available low-level hardware optimizations (e.g., CPU cache behavior, bus speed, and NIC PCI bandwidth).
-* Share among multiple packets the un-amortized cost of invoking the rte_eth_tx_one function.
+Packet transmission in burst-oriented network engines illustrates this software/hardware tradeoff.
+A PMD could expose only the ``rte_eth_tx_one`` function to transmit a single packet at a time on a given queue.
+While it is possible to build an ``rte_eth_tx_burst`` function by repeatedly calling ``rte_eth_tx_one``,
+most PMDs implement ``rte_eth_tx_burst`` directly to reduce per-packet transmission overhead.
-* Enable the rte_eth_tx_burst function to take advantage of burst-oriented hardware features (prefetch data in cache, use of NIC head/tail registers)
- to minimize the number of CPU cycles per packet, for example by avoiding unnecessary read memory accesses to ring transmit descriptors,
- or by systematically using arrays of pointers that exactly fit cache line boundaries and sizes.
+This implementation includes several key optimizations:
-* Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.
+* Sharing the fixed cost of invoking ``rte_eth_tx_one`` across multiple packets
-Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
-This applies in particular to buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
-For example, an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers which speeds up the receive poll function of the PMD when
-replenishing multiple descriptors of the receive ring.
+* Leveraging burst-oriented hardware features (e.g., data prefetching, NIC head/tail registers, vector extensions)
+ to reduce CPU cycles per packet, including minimizing unnecessary memory accesses and aligning pointer arrays
+ with cache line boundaries and sizes
+
+* Applying software-level burst optimizations to eliminate otherwise unavoidable overheads, such as ring index wrap-around handling
+
+The API also introduces burst-oriented functions for PMD-intensive services, such as buffer allocation.
+For instance, buffer allocators used to populate NIC rings often support functions that allocate or free multiple buffers in a single call.
+An example is ``rte_pktmbuf_alloc_bulk``, which returns an array of rte_mbuf pointers, significantly improving PMD performance
+when replenishing multiple descriptors in the receive ring.
Logical Cores, Memory and NIC Queues Relationships
--------------------------------------------------
-The DPDK supports NUMA allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
-Therefore, mbuf allocation associated with local PCIe* interfaces should be allocated from memory pools created in the local memory.
-The buffers should, if possible, remain on the local processor to obtain the best performance results and RX and TX buffer descriptors
-should be populated with mbufs allocated from a mempool allocated from local memory.
+DPDK supports NUMA, which improves performance when a processor's logical cores and network interfaces
+use memory local to that processor. To maximize this benefit, allocate mbufs associated with local PCIe* interfaces
+from memory pools located in the same NUMA node.
+
+Ideally, keep these buffers on the local processor to achieve optimal performance. Populate Rx and Tx buffer
+descriptors with mbufs from mempools created in local memory.
-The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processors memory.
-This is also true for the pipe-line model provided all logical cores used are located on the same processor.
+The run-to-completion model also benefits from performing packet data operations in local memory,
+rather than accessing remote memory across NUMA nodes.
+The same applies to the pipeline model, provided all logical cores involved reside on the same processor.
-Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
+Never share receive or transmit queues between multiple logical cores, as doing so requires
+global locks and severely impacts performance.
-If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
-concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
+If the PMD supports the ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` offload,
+multiple threads can call ``rte_eth_tx_burst()`` concurrently on the same Tx queue without a software lock.
+This capability, available in some NICs, proves advantageous in these scenarios:
-* Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
+* Eliminating explicit spinlocks in applications where Tx queues do not map 1:1 to logical cores
-* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
- enables more scaling as all workers can send the packets.
+* In eventdev-based workloads, allowing all worker threads to transmit packets, removing the need for a dedicated
+ Tx core and enabling greater scalability
See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
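The decision the text describes, taking the software Tx lock only when the device does not advertise the lock-free capability, can be sketched as a simple capability-mask check. The flag value below is illustrative only; in real DPDK the capability bit is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` as reported via ``rte_eth_dev_info_get()``:

```c
/* Hypothetical sketch: decide whether lcores sharing one Tx queue still
 * need a software lock, given the device's Tx offload capability mask.
 * The bit position is invented for illustration. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TX_OFFLOAD_MT_LOCKFREE (1ULL << 14) /* illustrative bit value */

/* true when concurrent rte_eth_tx_burst() calls on one queue would
 * require an application-level spinlock */
static bool shared_txq_needs_lock(uint64_t tx_offload_capa)
{
    return (tx_offload_capa & TX_OFFLOAD_MT_LOCKFREE) == 0;
}
```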
@@ -133,11 +141,10 @@ Device Identification, Ownership and Configuration
Device Identification
~~~~~~~~~~~~~~~~~~~~~
-Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI
-identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
-Based on their PCI identifier, NIC ports are assigned two other identifiers:
+The PCI probing/enumeration function executed at DPDK initialization assigns each NIC port a unique PCI
+identifier (bus/bridge, device, function). Based on this PCI identifier, DPDK assigns each NIC port two additional identifiers:
-* A port index used to designate the NIC port in all functions exported by the PMD API.
+* A port index used to designate the NIC port in all functions exported by the PMD API
* A port name used to designate the port in console messages, for administration or debugging purposes.
For ease of use, the port name includes the port index.
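The naming rule stated above (the console name embeds the port index) can be illustrated with a trivial helper. The ``"port_%u"`` format string is invented for illustration and is not the literal name DPDK generates:

```c
/* Hypothetical sketch of a port name that embeds the port index, as the
 * text describes. Format string is illustrative, not DPDK's actual one. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

static void make_port_name(char *buf, size_t len, unsigned int port_index)
{
    snprintf(buf, len, "port_%u", port_index);
}
```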
@@ -145,83 +152,82 @@ Based on their PCI identifier, NIC ports are assigned two other identifiers:
Port Ownership
~~~~~~~~~~~~~~
-The Ethernet devices ports can be owned by a single DPDK entity (application, library, PMD, process, etc).
-The ownership mechanism is controlled by ethdev APIs and allows to set/remove/get a port owner by DPDK entities.
-It prevents Ethernet ports to be managed by different entities.
+A single DPDK entity (application, library, PMD, process, etc.) can own Ethernet device ports.
+The ethdev APIs control the ownership mechanism and allow DPDK entities to set, remove, or get a port owner.
+This prevents different entities from managing the same Ethernet ports.
.. note::
- It is the DPDK entity responsibility to set the port owner before using it and to manage the port usage synchronization between different threads or processes.
+ The DPDK entity must set port ownership before using the port and manage usage synchronization between different threads or processes.
-It is recommended to set port ownership early,
-like during the probing notification ``RTE_ETH_EVENT_NEW``.
+Set port ownership early, for instance during the probing notification ``RTE_ETH_EVENT_NEW``.
Device Configuration
~~~~~~~~~~~~~~~~~~~~
-The configuration of each NIC port includes the following operations:
+Configuring each NIC port includes the following operations:
* Allocate PCI resources
-* Reset the hardware (issue a Global Reset) to a well-known default state
+* Reset the hardware to a well-known default state (issue a Global Reset)
* Set up the PHY and the link
* Initialize statistics counters
-The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.
+The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset promiscuous mode.
-Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
-This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features for example.
+Some hardware offload features require individual configuration at port initialization through specific parameters.
+This includes Receive Side Scaling (RSS) and Data Center Bridging (DCB) features.
On-the-Fly Configuration
~~~~~~~~~~~~~~~~~~~~~~~~
-All device features that can be started or stopped "on the fly" (that is, without stopping the device) do not require the PMD API to export dedicated functions for this purpose.
+Device features that can start or stop "on the fly" (without stopping the device) do not require the PMD API to export dedicated functions.
-All that is required is the mapping address of the device PCI registers to implement the configuration of these features in specific functions outside of the drivers.
+Implementing the configuration of these features in specific functions outside of the drivers requires only the mapping address of the device PCI registers.
For this purpose,
-the PMD API exports a function that provides all the information associated with a device that can be used to set up a given device feature outside of the driver.
-This includes the PCI vendor identifier, the PCI device identifier, the mapping address of the PCI device registers, and the name of the driver.
+the PMD API exports a function that provides all device information needed to set up a given feature outside of the driver.
+This includes the PCI vendor identifier, the PCI device identifier, the mapping address of the PCI device registers, and the driver name.
-The main advantage of this approach is that it gives complete freedom on the choice of the API used to configure, to start, and to stop such features.
+The main advantage of this approach is complete freedom in choosing the API to configure, start, and stop such features.
As an example, refer to the configuration of the IEEE1588 feature for the Intel® 82576 Gigabit Ethernet Controller and
-the Intel® 82599 10 Gigabit Ethernet Controller controllers in the testpmd application.
+the Intel® 82599 10 Gigabit Ethernet Controller in the testpmd application.
-Other features such as the L3/L4 5-Tuple packet filtering feature of a port can be configured in the same way.
-Ethernet* flow control (pause frame) can be configured on the individual port.
+Configure other features such as the L3/L4 5-Tuple packet filtering feature of a port in the same way.
+Configure Ethernet* flow control (pause frame) on the individual port.
Refer to the testpmd source code for details.
-Also, L4 (UDP/TCP/ SCTP) checksum offload by the NIC can be enabled for an individual packet as long as the packet mbuf is set up correctly. See `Hardware Offload`_ for details.
+Also, enable L4 (UDP/TCP/SCTP) checksum offload by the NIC for an individual packet by setting up the packet mbuf correctly. See `Hardware Offload`_ for details.
Configuration of Transmit Queues
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each transmit queue is independently configured with the following information:
+Configure each transmit queue independently with the following information:
* The number of descriptors of the transmit ring
-* The socket identifier used to identify the appropriate DMA memory zone from which to allocate the transmit ring in NUMA architectures
+* The socket identifier used to identify the appropriate DMA memory zone for allocating the transmit ring in NUMA architectures
* The values of the Prefetch, Host and Write-Back threshold registers of the transmit queue
* The *minimum* transmit packets to free threshold (tx_free_thresh).
- When the number of descriptors used to transmit packets exceeds this threshold, the network adaptor should be checked to see if it has written back descriptors.
- A value of 0 can be passed during the TX queue configuration to indicate the default value should be used.
+ When the number of descriptors used to transmit packets exceeds this threshold, the PMD checks the network adaptor to see if it has written back descriptors.
+ Pass a value of 0 during Tx queue configuration to use the default value.
The default value for tx_free_thresh is 32.
- This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.
+ This ensures the PMD does not search for completed descriptors until the NIC has processed at least 32 for this queue.
-* The *minimum* RS bit threshold. The minimum number of transmit descriptors to use before setting the Report Status (RS) bit in the transmit descriptor.
+* The *minimum* RS bit threshold. The minimum number of transmit descriptors to use before setting the Report Status (RS) bit in the transmit descriptor.
Note that this parameter may only be valid for Intel 10 GbE network adapters.
- The RS bit is set on the last descriptor used to transmit a packet if the number of descriptors used since the last RS bit setting,
+ The PMD sets the RS bit on the last descriptor used to transmit a packet if the number of descriptors used since the last RS bit setting,
up to the first descriptor used to transmit the packet, exceeds the transmit RS bit threshold (tx_rs_thresh).
- In short, this parameter controls which transmit descriptors are written back to host memory by the network adapter.
- A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
+ In short, this parameter controls which transmit descriptors the network adapter writes back to host memory.
+ Pass a value of 0 during Tx queue configuration to use the default value.
The default value for tx_rs_thresh is 32.
- This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor.
- This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
- It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1.
+ This ensures the PMD uses at least 32 descriptors before the network adapter writes back the most recently used descriptor.
+ This saves upstream PCIe* bandwidth that would be used for Tx descriptor write-backs.
+ Set the Tx Write-back threshold (Tx wthresh) to 0 when tx_rs_thresh is greater than 1.
Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.
The following constraints must be satisfied for tx_free_thresh and tx_rs_thresh:
@@ -236,46 +242,45 @@ The following constraints must be satisfied for tx_free_thresh and tx_rs_thresh:
* tx_free_thresh must be less than the size of the ring minus 3.
-* For optimal performance, TX wthresh should be set to 0 when tx_rs_thresh is greater than 1.
+* For optimal performance, set Tx wthresh to 0 when tx_rs_thresh is greater than 1.
-One descriptor in the TX ring is used as a sentinel to avoid a hardware race condition, hence the maximum threshold constraints.
+One descriptor in the Tx ring serves as a sentinel to avoid a hardware race condition, hence the maximum threshold constraints.
.. note::
- When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.
+ When configuring for DCB operation at port initialization, set both the number of transmit queues and the number of receive queues to 128.
Free Tx mbuf on Demand
~~~~~~~~~~~~~~~~~~~~~~
-Many of the drivers do not release the mbuf back to the mempool, or local cache,
-immediately after the packet has been transmitted.
+Many drivers do not release the mbuf back to the mempool or local cache immediately after packet transmission.
Instead, they leave the mbuf in their Tx ring and
either perform a bulk release when the ``tx_rs_thresh`` has been crossed
or free the mbuf when a slot in the Tx ring is needed.
An application can request the driver to release used mbufs with the ``rte_eth_tx_done_cleanup()`` API.
-This API requests the driver to release mbufs that are no longer in use,
-independent of whether or not the ``tx_rs_thresh`` has been crossed.
-There are two scenarios when an application may want the mbuf released immediately:
+This API requests the driver to release mbufs no longer in use,
+independent of whether the ``tx_rs_thresh`` has been crossed.
+Two scenarios exist where an application may want the mbuf released immediately:
* When a given packet needs to be sent to multiple destination interfaces
(either for Layer 2 flooding or Layer 3 multi-cast).
- One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
+ One option is to copy the packet or the header portion that needs manipulation.
A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API
- until the reference count on the packet is decremented.
- Then the same packet can be transmitted to the next destination interface.
- The application is still responsible for managing any packet manipulations needed
- between the different destination interfaces, but a packet copy can be avoided.
- This API is independent of whether the packet was transmitted or dropped,
+ until the reference count on the packet has been decremented.
+ Then, transmit the same packet to the next destination interface.
+ The application remains responsible for managing any packet manipulations needed
+ between the different destination interfaces, but avoids a packet copy.
+ This API operates independently of whether the interface transmitted or dropped the packet,
only that the mbuf is no longer in use by the interface.
-* Some applications are designed to make multiple runs, like a packet generator.
+* Some applications make multiple runs, like a packet generator.
For performance reasons and consistency between runs,
the application may want to reset back to an initial state
between each run, where all mbufs are returned to the mempool.
- In this case, it can call the ``rte_eth_tx_done_cleanup()`` API
- for each destination interface it has been using
- to request it to release of all its used mbufs.
+ In this case, call the ``rte_eth_tx_done_cleanup()`` API
+ for each destination interface used
+ to request it to release all used mbufs.
To determine if a driver supports this API, check for the *Free Tx mbuf on demand* feature
in the *Network Interface Controller Drivers* document.
@@ -285,49 +290,49 @@ Hardware Offload
Depending on driver capabilities advertised by
``rte_eth_dev_info_get()``, the PMD may support hardware offloading
-feature like checksumming, TCP segmentation, VLAN insertion or
-lockfree multithreaded TX burst on the same TX queue.
+features like checksumming, TCP segmentation, VLAN insertion, or
+lockfree multithreaded Tx burst on the same Tx queue.
-The support of these offload features implies the addition of dedicated
-status bit(s) and value field(s) into the rte_mbuf data structure, along
-with their appropriate handling by the receive/transmit functions
-exported by each PMD. The list of flags and their precise meaning is
-described in the mbuf API documentation and in the :ref:`mbuf_meta` chapter.
+Supporting these offload features requires adding dedicated
+status bit(s) and value field(s) to the rte_mbuf data structure, along
+with appropriate handling by the receive/transmit functions
+exported by each PMD. The mbuf API documentation and the :ref:`mbuf_meta` chapter
+describe the list of flags and their precise meanings.
Per-Port and Per-Queue Offloads
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In the DPDK offload API, offloads are divided into per-port and per-queue offloads as follows:
+In the DPDK offload API, offloads divide into per-port and per-queue offloads as follows:
-* A per-queue offloading can be enabled on a queue and disabled on another queue at the same time.
-* A pure per-port offload is the one supported by device but not per-queue type.
-* A pure per-port offloading can't be enabled on a queue and disabled on another queue at the same time.
-* A pure per-port offloading must be enabled or disabled on all queues at the same time.
-* Any offloading is per-queue or pure per-port type, but can't be both types at same devices.
+* A per-queue offload can be enabled on one queue and disabled on another queue simultaneously.
+* A pure per-port offload is supported by a device but not as a per-queue type.
+* A pure per-port offload cannot be enabled on one queue and disabled on another queue simultaneously.
+* A pure per-port offload must be enabled or disabled on all queues simultaneously.
+* An offload is either per-queue or pure per-port type; it cannot be both types on the same device.
* Port capabilities = per-queue capabilities + pure per-port capabilities.
-* Any supported offloading can be enabled on all queues.
+* Any supported offload can be enabled on all queues.
-The different offloads capabilities can be queried using ``rte_eth_dev_info_get()``.
+Query the different offload capabilities using ``rte_eth_dev_info_get()``.
The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all per-queue offloading capabilities.
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.
-Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
-Any requested offloading by an application must be within the device capabilities.
-Any offloading is disabled by default if it is not set in the parameter
+Enable offloads using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
+Any offload requested by an application must be within the device capabilities.
+Any offload is disabled by default if it is not set in the parameter
``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()``.
-If any offloading is enabled in ``rte_eth_dev_configure()`` by an application,
-it is enabled on all queues no matter whether it is per-queue or
-per-port type and no matter whether it is set or cleared in
+If an application enables any offload in ``rte_eth_dev_configure()``,
+it is enabled on all queues regardless of whether it is per-queue or
+per-port type and regardless of whether it is set or cleared in
``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()``.
-If a per-queue offloading hasn't been enabled in ``rte_eth_dev_configure()``,
-it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for individual queue.
-A newly added offloads in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()`` input by application
-is the one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
-in ``rte_eth_[rt]x_queue_setup()``. It must be per-queue type, otherwise trigger an error log.
+If a per-queue offload has not been enabled in ``rte_eth_dev_configure()``,
+it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for an individual queue.
+A newly added offload in ``[rt]x_conf->offloads`` passed by the application to ``rte_eth_[rt]x_queue_setup()``
+is one that has not been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
+in ``rte_eth_[rt]x_queue_setup()``. It must be a per-queue type offload; otherwise, an error is logged.
Poll Mode Driver API
--------------------
@@ -335,44 +340,43 @@ Poll Mode Driver API
Generalities
~~~~~~~~~~~~
-By default, all functions exported by a PMD are lock-free functions that are assumed
-not to be invoked in parallel on different logical cores to work on the same target object.
-For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
-Of course, this function can be invoked in parallel by different logical cores on different RX queues.
-It is the responsibility of the upper-level application to enforce this rule.
+By default, all functions exported by a PMD are lock-free functions assumed
+not to be invoked in parallel on different logical cores working on the same target object.
+For instance, a PMD receive function cannot be invoked in parallel on two logical cores polling the same Rx queue of the same port.
+This function can be invoked in parallel by different logical cores on different Rx queues.
+The upper-level application must enforce this rule.
-If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions
+If needed, explicitly protect parallel accesses by multiple logical cores to shared queues using dedicated inline lock-aware functions
built on top of their corresponding lock-free functions of the PMD API.
Generic Packet Representation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-A packet is represented by an rte_mbuf structure, which is a generic metadata structure containing all necessary housekeeping information.
-This includes fields and status bits corresponding to offload hardware features, such as checksum computation of IP headers or VLAN tags.
+An rte_mbuf structure represents a packet. This generic metadata structure contains all necessary housekeeping information,
+including fields and status bits corresponding to offload hardware features, such as checksum computation of IP headers or VLAN tags.
The rte_mbuf data structure includes specific fields to represent, in a generic way, the offload features provided by network controllers.
-For an input packet, most fields of the rte_mbuf structure are filled in by the PMD receive function with the information contained in the receive descriptor.
-Conversely, for output packets, most fields of rte_mbuf structures are used by the PMD transmit function to initialize transmit descriptors.
+For an input packet, the PMD receive function fills in most fields of the rte_mbuf structure with information contained in the receive descriptor.
+Conversely, for output packets, the PMD transmit function uses most fields of rte_mbuf structures to initialize transmit descriptors.
See :doc:`../mbuf_lib` chapter for more details.
Ethernet Device API
~~~~~~~~~~~~~~~~~~~
-The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK API Reference*.
+The *DPDK API Reference* describes the Ethernet device API exported by the Ethernet PMDs.
.. _ethernet_device_standard_device_arguments:
Ethernet Device Standard Device Arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Standard Ethernet device arguments allow for a set of commonly used arguments/
-parameters which are applicable to all Ethernet devices to be available to for
-specification of specific device and for passing common configuration
-parameters to those ports.
+Standard Ethernet device arguments provide a set of commonly used arguments/
+parameters applicable to all Ethernet devices. Use these arguments/parameters to
+identify particular devices and pass common configuration parameters to those ports.
-* ``representor`` for a device which supports the creation of representor ports
- this argument allows user to specify which switch ports to enable port
+* Use ``representor`` for a device that supports creating representor ports.
+ This argument allows the user to specify which switch ports to enable port
representors for::
-a DBDF,representor=vf0
@@ -380,8 +384,8 @@ parameters to those ports.
-a DBDF,representor=vf[0-31]
-a DBDF,representor=vf[0,2-4,7,9-11]
- These examples will attach VF representors relative to DBDF.
- The VF IDs can be a list, a range or a mix.
+ These examples attach VF representors relative to DBDF.
+ The VF IDs can be a list, a range, or a mix.
SF representors follow the same syntax::
-a DBDF,representor=sf0
@@ -389,47 +393,47 @@ parameters to those ports.
-a DBDF,representor=sf[0-1023]
-a DBDF,representor=sf[0,2-4,7,9-11]
- If there are multiple PFs associated with the same PCI device,
- the PF ID must be used to distinguish between representors relative to different PFs::
+ If multiple PFs are associated with the same PCI device,
+ use the PF ID to distinguish between representors relative to different PFs::
-a DBDF,representor=pf1vf0
-a DBDF,representor=pf[0-1]vf0
- The example above will attach 4 representors pf0vf0, pf1vf0, pf0 and pf1.
- If only VF representors are required, the PF part must be enclosed with parentheses::
+ The example above attaches 4 representors pf0vf0, pf1vf0, pf0, and pf1.
+ If only VF representors are required, enclose the PF part in parentheses::
-a DBDF,representor=(pf[0-1])vf0
- The example above will attach 2 representors pf0vf0, pf1vf0.
+ The example above attaches 2 representors pf0vf0 and pf1vf0.
- List of representors for the same PCI device is enclosed in square brackets::
+ Enclose the list of representors for the same PCI device in square brackets::
-a DBDF,representor=[pf[0-1],pf2vf[0-2],pf3[3,5-8]]
- Note: PMDs may have additional extensions for the representor parameter, and users
- should consult the relevant PMD documentation to see support devargs.
+ Note: PMDs may have additional extensions for the representor parameter. Consult
+ the relevant PMD documentation for supported devargs.
Extended Statistics API
~~~~~~~~~~~~~~~~~~~~~~~
-The extended statistics API allows a PMD to expose all statistics that are
-available to it, including statistics that are unique to the device.
-Each statistic has three properties ``name``, ``id`` and ``value``:
+The extended statistics API allows a PMD to expose all available statistics,
+including statistics unique to the device.
+Each statistic has three properties: ``name``, ``id``, and ``value``:
-* ``name``: A human readable string formatted by the scheme detailed below.
+* ``name``: A human-readable string formatted by the scheme detailed below.
* ``id``: An integer that represents only that statistic.
-* ``value``: A unsigned 64-bit integer that is the value of the statistic.
+* ``value``: An unsigned 64-bit integer that is the value of the statistic.
-Note that extended statistic identifiers are
-driver-specific, and hence might not be the same for different ports.
-The API consists of various ``rte_eth_xstats_*()`` functions, and allows an
-application to be flexible in how it retrieves statistics.
+Note that extended statistic identifiers are driver-specific,
+and therefore might not be the same for different ports.
+The API consists of various ``rte_eth_xstats_*()`` functions and provides
+applications flexibility in how they retrieve statistics.
Scheme for Human Readable Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-A naming scheme exists for the strings exposed to clients of the API. This is
-to allow scraping of the API for statistics of interest. The naming scheme uses
+A naming scheme governs the strings exposed to clients of the API. This scheme
+allows scraping of the API for statistics of interest. The naming scheme uses
strings split by a single underscore ``_``. The scheme is as follows:
* direction
@@ -438,69 +442,67 @@ strings split by a single underscore ``_``. The scheme is as follows:
* detail n
* unit
-Examples of common statistics xstats strings, formatted to comply to the scheme
+Examples of common statistics xstats strings, formatted to comply with the scheme
proposed above:
* ``rx_bytes``
* ``rx_crc_errors``
* ``tx_multicast_packets``
-The scheme, although quite simple, allows flexibility in presenting and reading
+The scheme, although simple, provides flexibility in presenting and reading
information from the statistic strings. The following example illustrates the
-naming scheme:``rx_packets``. In this example, the string is split into two
-components. The first component ``rx`` indicates that the statistic is
-associated with the receive side of the NIC. The second component ``packets``
+naming scheme: ``rx_packets``. In this example, the string splits into two
+components. The first component ``rx`` indicates that the statistic
+is associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.
A more complicated example: ``tx_size_128_to_255_packets``. In this example,
-``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc are
+``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc. are
more details, and ``packets`` indicates that this is a packet counter.
Some additions in the metadata scheme are as follows:
* If the first part does not match ``rx`` or ``tx``, the statistic does not
- have an affinity with either receive of transmit.
+ have an affinity with either receive or transmit.
* If the first letter of the second part is ``q`` and this ``q`` is followed
by a number, this statistic is part of a specific queue.
-An example where queue numbers are used is as follows: ``tx_q7_bytes`` which
-indicates this statistic applies to queue number 7, and represents the number
+An example where queue numbers are used: ``tx_q7_bytes`` indicates this statistic applies to queue number 7 and represents the number
of transmitted bytes on that queue.
API Design
^^^^^^^^^^
-The xstats API uses the ``name``, ``id``, and ``value`` to allow performant
-lookup of specific statistics. Performant lookup means two things;
+The xstats API uses ``name``, ``id``, and ``value`` to allow performant
+lookup of specific statistics. Performant lookup means two things:
-* No string comparisons with the ``name`` of the statistic in fast-path
-* Allow requesting of only the statistics of interest
+* No string comparisons with the ``name`` of the statistic in the fast path
+* Allow requesting only the statistics of interest
-The API ensures these requirements are met by mapping the ``name`` of the
-statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
-The API allows applications to request an array of ``id`` values, so that the
-PMD only performs the required calculations. Expected usage is that the
-application scans the ``name`` of each statistic, and caches the ``id``
-if it has an interest in that statistic. On the fast-path, the integer can be used
+The API meets these requirements by mapping the ``name`` of the
+statistic to a unique ``id``, which serves as a key for lookup in the fast path.
+The API allows applications to request an array of ``id`` values, so the
+PMD only performs the required calculations. The expected usage is that the
+application scans the ``name`` of each statistic and caches the ``id``
+if it has an interest in that statistic. On the fast path, the integer can be used
to retrieve the actual ``value`` of the statistic that the ``id`` represents.
API Functions
^^^^^^^^^^^^^
-The API is built out of a small number of functions, which can be used to
-retrieve the number of statistics and the names, IDs and values of those
-statistics.
+The API is built from a small number of functions, which retrieve the number of statistics
+and the names, IDs, and values of those statistics.
-* ``rte_eth_xstats_get_names_by_id()``: returns the names of the statistics. When given a
- ``NULL`` parameter the function returns the number of statistics that are available.
+* ``rte_eth_xstats_get_names_by_id()``: Returns the names of the statistics. When given a
+ ``NULL`` parameter, the function returns the number of available statistics.
* ``rte_eth_xstats_get_id_by_name()``: Searches for the statistic ID that matches
- ``xstat_name``. If found, the ``id`` integer is set.
+ ``xstat_name``. If found, sets the ``id`` integer.
* ``rte_eth_xstats_get_by_id()``: Fills in an array of ``uint64_t`` values
- with matching the provided ``ids`` array. If the ``ids`` array is NULL, it
- returns all statistics that are available.
+ matching the provided ``ids`` array. If the ``ids`` array is NULL, it
+ returns all available statistics.
Application Usage
@@ -509,11 +511,11 @@ Application Usage
Imagine an application that wants to view the dropped packet count. If no
packets are dropped, the application does not read any other metrics for
performance reasons. If packets are dropped, the application has a particular
-set of statistics that it requests. This "set" of statistics allows the app to
-decide what next steps to perform. The following code-snippets show how the
-xstats API can be used to achieve this goal.
+set of statistics that it requests. This "set" of statistics allows the application to
+decide what next steps to perform. The following code snippets show how the
+xstats API achieves this goal.
-First step is to get all statistics names and list them:
+The first step is to get all statistics names and list them:
.. code-block:: c
@@ -557,9 +559,9 @@ First step is to get all statistics names and list them:
printf("%s: %"PRIu64"\n", xstats_names[i].name, values[i]);
}
-The application has access to the names of all of the statistics that the PMD
-exposes. The application can decide which statistics are of interest, cache the
-ids of those statistics by looking up the name as follows:
+The application has access to the names of all statistics that the PMD
+exposes. The application can decide which statistics are of interest and cache the
+IDs of those statistics by looking up the name as follows:
.. code-block:: c
@@ -576,10 +578,8 @@ ids of those statistics by looking up the name as follows:
goto err;
}
-The API provides flexibility to the application so that it can look up multiple
-statistics using an array containing multiple ``id`` numbers. This reduces the
-function call overhead of retrieving statistics, and makes lookup of multiple
-statistics simpler for the application.
+The API allows the application to look up multiple statistics using an array containing multiple ``id`` numbers.
+This reduces function call overhead when retrieving statistics and simplifies looking up multiple statistics.
.. code-block:: c
@@ -597,12 +597,12 @@ statistics simpler for the application.
}
-This array lookup API for xstats allows the application create multiple
-"groups" of statistics, and look up the values of those IDs using a single API
-call. As an end result, the application is able to achieve its goal of
-monitoring a single statistic ("rx_errors" in this case), and if that shows
+This array lookup API for xstats allows the application to create multiple
+"groups" of statistics and look up the values of those IDs using a single API
+call. As a result, the application achieves its goal of
+monitoring a single statistic (in this case, "rx_errors"). If that shows
packets being dropped, it can easily retrieve a "set" of statistics using the
-IDs array parameter to ``rte_eth_xstats_get_by_id`` function.
+IDs array parameter to the ``rte_eth_xstats_get_by_id`` function.
NIC Reset API
~~~~~~~~~~~~~
@@ -611,84 +611,81 @@ NIC Reset API
int rte_eth_dev_reset(uint16_t port_id);
-Sometimes a port has to be reset passively. For example when a PF is
-reset, all its VFs should also be reset by the application to make them
-consistent with the PF. A DPDK application also can call this function
-to trigger a port reset. Normally, a DPDK application would invokes this
-function when an RTE_ETH_EVENT_INTR_RESET event is detected.
+Sometimes a port must be reset passively. For example, when a PF is
+reset, the application should also reset all its VFs to maintain consistency
+with the PF. A DPDK application can also call this function
+to trigger a port reset. Normally, a DPDK application invokes this
+function when it detects an RTE_ETH_EVENT_INTR_RESET event.
-It is the duty of the PMD to trigger RTE_ETH_EVENT_INTR_RESET events and
-the application should register a callback function to handle these
-events. When a PMD needs to trigger a reset, it can trigger an
+The PMD triggers RTE_ETH_EVENT_INTR_RESET events.
+The application should register a callback function to handle these
+events. When a PMD needs to trigger a reset, it triggers an
RTE_ETH_EVENT_INTR_RESET event. On receiving an RTE_ETH_EVENT_INTR_RESET
-event, applications can handle it as follows: Stop working queues, stop
+event, applications should: stop working queues, stop
calling Rx and Tx functions, and then call rte_eth_dev_reset(). For
-thread safety all these operations should be called from the same thread.
+thread safety, call all these operations from the same thread.
-For example when PF is reset, the PF sends a message to notify VFs of
-this event and also trigger an interrupt to VFs. Then in the interrupt
-service routine the VFs detects this notification message and calls
+For example, when a PF is reset, it sends a message to notify VFs of
+this event and also triggers an interrupt to VFs. Then, in the interrupt
+service routine, the VFs detect this notification message and call
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
This means that a PF reset triggers an RTE_ETH_EVENT_INTR_RESET
-event within VFs. The function rte_eth_dev_callback_process() will
-call the registered callback function. The callback function can trigger
-the application to handle all operations the VF reset requires including
+event within VFs. The function rte_eth_dev_callback_process()
+calls the registered callback function. The callback function can trigger
+the application to handle all operations the VF reset requires, including
stopping Rx/Tx queues and calling rte_eth_dev_reset().
-The rte_eth_dev_reset() itself is a generic function which only does
-some hardware reset operations through calling dev_unint() and
-dev_init(), and itself does not handle synchronization, which is handled
-by application.
+The rte_eth_dev_reset() function is a generic function that only performs hardware reset operations by calling dev_uninit() and
+dev_init(). It does not handle synchronization; the application handles that.
-The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
-the application to handle reset event. It is duty of application to
-handle all synchronization before it calls rte_eth_dev_reset().
+The PMD should not call rte_eth_dev_reset(). The PMD can trigger
+the application to handle the reset event. The application must
+handle all synchronization before calling rte_eth_dev_reset().
The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.
Proactive Error Handling Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``,
-different from the application invokes recovery in PASSIVE mode,
-the PMD automatically recovers from error in PROACTIVE mode,
-and only a small amount of work is required for the application.
+This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``, which
+differs from PASSIVE mode where the application invokes recovery.
+In PROACTIVE mode, the PMD automatically recovers from errors,
+and only minimal work is required from the application.
During error detection and automatic recovery,
the PMD sets the data path pointers to dummy functions
-(which will prevent the crash),
-and also make sure the control path operations fail with a return code ``-EBUSY``.
+(which prevent crashes)
+and ensures control path operations fail with return code ``-EBUSY``.
Because the PMD recovers automatically,
-the application can only sense that the data flow is disconnected for a while
-and the control API returns an error in this period.
+the application only senses that the data flow is disconnected for a while
+and that the control API returns an error during this period.
-In order to sense the error happening/recovering,
-as well as to restore some additional configuration,
+To sense error occurrence and recovery,
+as well as to restore additional configuration,
three events are available:
``RTE_ETH_EVENT_ERR_RECOVERING``
- Notify the application that an error is detected
- and the recovery is being started.
+ Notifies the application that an error is detected
+ and recovery is beginning.
Upon receiving the event, the application should not invoke
- any control path function until receiving
+ any control path function until receiving the
``RTE_ETH_EVENT_RECOVERY_SUCCESS`` or ``RTE_ETH_EVENT_RECOVERY_FAILED`` event.
.. note::
Before the PMD reports the recovery result,
- the PMD may report the ``RTE_ETH_EVENT_ERR_RECOVERING`` event again,
- because a larger error may occur during the recovery.
+ it may report the ``RTE_ETH_EVENT_ERR_RECOVERING`` event again
+ because a larger error may occur during recovery.
``RTE_ETH_EVENT_RECOVERY_SUCCESS``
- Notify the application that the recovery from error is successful,
- the PMD already re-configures the port,
+ Notifies the application that recovery from the error was successful.
+ The PMD has reconfigured the port,
and the effect is the same as a restart operation.
``RTE_ETH_EVENT_RECOVERY_FAILED``
- Notify the application that the recovery from error failed,
- the port should not be usable anymore.
+ Notifies the application that recovery from the error failed.
+ The port should not be usable anymore.
The application should close the port.
-The error handling mode supported by the PMD can be reported through
-``rte_eth_dev_info_get``.
+Query the error handling mode supported by the PMD using ``rte_eth_dev_info_get()``.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 02/11] doc: correct grammar and typos in argparse library guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 01/11] doc: correct grammar and punctuation errors in ethdev guide Stephen Hemminger
@ 2026-01-14 22:26 ` Stephen Hemminger
2026-01-19 0:50 ` fengchengwen
2026-01-14 22:26 ` [PATCH v4 03/11] doc: correct grammar and typos in design guide Stephen Hemminger
` (8 subsequent siblings)
10 siblings, 1 reply; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:26 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad, Chengwen Feng
Changes:
- Add missing articles ("a user-friendly", "a long_name field")
- Fix awkward phrasing ("take with" -> "have")
- Correct verb forms ("automatic generate" -> "automatic generation of",
"are parsing" -> "are parsed", "don't" -> "doesn't")
- Fix typo in field name (val_save -> val_saver)
- Fix stray backtick in code example
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/argparse_lib.rst | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/argparse_lib.rst b/doc/guides/prog_guide/argparse_lib.rst
index 4a4214e00f..b0907cfc07 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -5,21 +5,21 @@ Argparse Library
================
The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+making it easy to write a user-friendly command-line program.
Features and Capabilities
-------------------------
-- Support parsing optional argument (which could take with no-value,
- required-value and optional-value).
+- Support parsing optional argument (which could have no-value,
+ required-value or optional-value).
-- Support parsing positional argument (which must take with required-value).
+- Support parsing positional argument (which must have required-value).
- Support getopt-style argument reordering for non-flag arguments as an alternative to positional arguments.
-- Support automatic generate usage information.
+- Support automatic generation of usage information.
-- Support issue errors when provide with invalid arguments.
+- Support issuing errors when provided with invalid arguments.
- Support parsing argument by two ways:
@@ -126,15 +126,15 @@ the following two modes are supported (take above ``--ccc`` as an example):
- The single mode: ``--ccc`` or ``-c``.
-- The kv mode: ``--ccc=123`` or ``-c=123`` or ``-c123```.
+- The kv mode: ``--ccc=123`` or ``-c=123`` or ``-c123``.
For positional arguments which must take required-value,
-their values are parsing in the order defined.
+their values are parsed in the order defined.
.. note::
The compact mode is not supported.
- Take above ``-a`` and ``-d`` as an example, don't support ``-ad`` input.
+ Taking the above ``-a`` and ``-d`` as an example, ``-ad`` input is not supported.
Parsing by autosave way
~~~~~~~~~~~~~~~~~~~~~~~
@@ -169,7 +169,7 @@ For arguments which are not flags (i.e. don't start with a hyphen '-'),
there are two ways in which they can be handled by the library:
#. Positional arguments: these are defined in the ``args`` array with a NULL ``short_name`` field,
- and long_name field that does not start with a hyphen '-'.
+ and a ``long_name`` field that does not start with a hyphen '-'.
They are parsed as required-value arguments.
#. As ignored, or unhandled arguments: if the ``ignore_non_flag_args`` field in the ``rte_argparse`` object is set to true,
@@ -283,7 +283,7 @@ Parsing by callback way
It could also choose to use callback to parse,
just define a unique index for the argument
-and make the ``val_save`` field to be NULL also zero value-type.
+and set the ``val_saver`` field to NULL with a zero value-type.
In the example at the top of this section,
the arguments ``--ddd``/``--eee``/``--fff`` and ``ppp`` all use this way.
@@ -311,7 +311,7 @@ Then the user input could contain multiple ``--xyz`` arguments.
.. note::
- The multiple times argument only support with optional argument
+ The multiple times argument is only supported with optional arguments
and must be parsed by callback way.
Help and Usage Information
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 03/11] doc: correct grammar and typos in design guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 01/11] doc: correct grammar and punctuation errors in ethdev guide Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 02/11] doc: correct grammar and typos in argparse library guide Stephen Hemminger
@ 2026-01-14 22:26 ` Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 04/11] doc: correct errors in Linux system requirements guide Stephen Hemminger
` (7 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:26 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fixes:
- Use "execution environment" consistently (not "executive environment")
- Fix FreeBSD capitalization
- Use plural "these execution environments" to match multiple options
- Add hyphen to "non-exhaustive"
- Add missing comma before *telemetry*
- Remove double space
- Minor rewording for clarity
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/contributing/design.rst | 71 ++++++++++++++++--------------
1 file changed, 39 insertions(+), 32 deletions(-)
diff --git a/doc/guides/contributing/design.rst b/doc/guides/contributing/design.rst
index 5517613424..a966da2ddd 100644
--- a/doc/guides/contributing/design.rst
+++ b/doc/guides/contributing/design.rst
@@ -1,6 +1,7 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright 2018 The DPDK contributors
+
Design
======
@@ -8,22 +9,26 @@ Design
Environment or Architecture-specific Sources
--------------------------------------------
-In DPDK and DPDK applications, some code is specific to an architecture (i686, x86_64) or to an executive environment (freebsd or linux) and so on.
-As far as is possible, all such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+In DPDK and DPDK applications, some code is architecture-specific (i686, x86_64) or environment-specific (FreeBSD or Linux, etc.).
+When feasible, such instances of architecture or env-specific code should be provided via standard APIs in the EAL.
+
+By convention, a file is specific if its directory indicates an architecture or execution environment. Otherwise, it is common.
+
+For example:
-By convention, a file is common if it is not located in a directory indicating that it is specific.
-For instance, a file located in a subdir of "x86_64" directory is specific to this architecture.
+A file located in a subdirectory of the "x86_64" directory is specific to that architecture.
A file located in a subdir of "linux" is specific to this execution environment.
.. note::
Code in DPDK libraries and applications should be generic.
- The correct location for architecture or executive environment specific code is in the EAL.
+ The correct location for architecture or execution environment-specific code is in the EAL.
-When absolutely necessary, there are several ways to handle specific code:
+When necessary, there are several ways to handle specific code:
+
+
+* When the differences are small and they can be embedded in the same C file, use a ``#ifdef`` with a build definition macro in the C code.
-* Use a ``#ifdef`` with a build definition macro in the C code.
- This can be done when the differences are small and they can be embedded in the same C file:
.. code-block:: c
@@ -33,9 +38,9 @@ When absolutely necessary, there are several ways to handle specific code:
titi();
#endif
-* Use build definition macros and conditions in the Meson build file. This is done when the differences are more significant.
- In this case, the code is split into two separate files that are architecture or environment specific.
- This should only apply inside the EAL library.
+
+* When the differences are more significant, use build definition macros and conditions in the Meson build file. In this case, the code is split into two separate files that are architecture or environment specific. This should only apply inside the EAL library.
+
Per Architecture Sources
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -43,15 +48,16 @@ Per Architecture Sources
The following macro options can be used:
* ``RTE_ARCH`` is a string that contains the name of the architecture.
-* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined only if we are building for those architectures.
+* ``RTE_ARCH_I686``, ``RTE_ARCH_X86_64``, ``RTE_ARCH_X86_X32``, ``RTE_ARCH_PPC_64``, ``RTE_ARCH_RISCV``, ``RTE_ARCH_LOONGARCH``, ``RTE_ARCH_ARM``, ``RTE_ARCH_ARMv7`` or ``RTE_ARCH_ARM64`` are defined when building for these architectures.
+
Per Execution Environment Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The following macro options can be used:
-* ``RTE_EXEC_ENV_NAME`` is a string that contains the name of the executive environment.
-* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for this execution environment.
+* ``RTE_EXEC_ENV_NAME`` is a string that contains the name of the execution environment.
+* ``RTE_EXEC_ENV_FREEBSD``, ``RTE_EXEC_ENV_LINUX`` or ``RTE_EXEC_ENV_WINDOWS`` are defined only if we are building for these execution environments.
Mbuf features
-------------
@@ -66,7 +72,7 @@ The "dynamic" area is eating the remaining space in mbuf,
and some existing "static" fields may need to become "dynamic".
Adding a new static field or flag must be an exception matching many criteria
-like (non exhaustive): wide usage, performance, size.
+like (non-exhaustive): wide usage, performance, size.
Runtime Information - Logging, Tracing and Telemetry
@@ -82,11 +88,11 @@ DPDK provides a number of built-in mechanisms to provide this introspection:
Each of these has its own strengths and suitabilities for use within DPDK components.
-Below are some guidelines for when each should be used:
+Here are guidelines for when each mechanism should be used:
* For reporting error conditions, or other abnormal runtime issues, *logging* should be used.
- Depending on the severity of the issue, the appropriate log level, for example,
- ``ERROR``, ``WARNING`` or ``NOTICE``, should be used.
+ Use the appropriate log level depending on the severity of the issue,
+ for example ``ERROR``, ``WARNING`` or ``NOTICE``.
.. note::
@@ -96,24 +102,24 @@ Below are some guidelines for when each should be used:
* For component initialization, or other cases where a path through the code
is only likely to be taken once,
- either *logging* at ``DEBUG`` level or *tracing* may be used, or potentially both.
+ either *logging* at ``DEBUG`` level or *tracing* may be used, or both.
In the latter case, tracing can provide basic information as to the code path taken,
with debug-level logging providing additional details on internal state,
- not possible to emit via tracing.
+ which is not possible to emit via tracing.
* For a component's data-path, where a path is to be taken multiple times within a short timeframe,
*tracing* should be used.
Since DPDK tracing uses `Common Trace Format <https://diamon.org/ctf/>`_ for its tracing logs,
post-analysis can be done using a range of external tools.
-* For numerical or statistical data generated by a component, for example, per-packet statistics,
+* For numerical or statistical data generated by a component, such as per-packet statistics,
*telemetry* should be used.
-* For any data where the data may need to be gathered at any point in the execution
- to help assess the state of the application component,
- for example, core configuration, device information, *telemetry* should be used.
+* For any data that may need to be gathered at any point during the execution
+ to help assess the state of the application component (for example, core configuration, device information), *telemetry* should be used.
Telemetry callbacks should not modify any program state, but be "read-only".
+
Many libraries also include a ``rte_<libname>_dump()`` function as part of their API,
writing verbose internal details to a given file-handle.
New libraries are encouraged to provide such functions where it makes sense to do so,
@@ -135,13 +141,12 @@ requirements for preventing ABI changes when implementing statistics.
Mechanism to allow the application to turn library statistics on and off
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Having runtime support for enabling/disabling library statistics is recommended,
-as build-time options should be avoided. However, if build-time options are used,
-for example as in the table library, the options can be set using c_args.
-When this flag is set, all the counters supported by current library are
+Having runtime support for enabling/disabling library statistics is recommended,
+as build-time options should be avoided. However, if build-time options are used, as in the table library, the options can be set using c_args.
+When this flag is set, all the counters supported by the current library are
collected for all the instances of every object type provided by the library.
When this flag is cleared, none of the counters supported by the current library
-are collected for any instance of any object type provided by the library:
+are collected for any instance of any object type provided by the library.
Prevention of ABI changes due to library statistics support
@@ -165,8 +170,8 @@ Motivation to allow the application to turn library statistics on and off
It is highly recommended that each library provides statistics counters to allow
an application to monitor the library-level run-time events. Typical counters
-are: number of packets received/dropped/transmitted, number of buffers
-allocated/freed, number of occurrences for specific events, etc.
+are: the number of packets received/dropped/transmitted, the number of buffers
+allocated/freed, the number of occurrences for specific events, etc.
However, the resources consumed for library-level statistics counter collection
have to be spent out of the application budget and the counters collected by
@@ -198,6 +203,7 @@ applications:
the application may decide to turn the collection of statistics counters off for
Library X and on for Library Y.
+
The statistics collection consumes a certain amount of CPU resources (cycles,
cache bandwidth, memory bandwidth, etc) that depends on:
@@ -218,6 +224,7 @@ cache bandwidth, memory bandwidth, etc) that depends on:
validated for header integrity, counting the number of bits set in a bitmask
might be needed.
+
PF and VF Considerations
------------------------
@@ -229,5 +236,5 @@ Developers should work with the Linux Kernel community to get the required
functionality upstream. PF functionality should only be added to DPDK for
testing and prototyping purposes while the kernel work is ongoing. It should
also be marked with an "EXPERIMENTAL" tag. If the functionality isn't
-upstreamable then a case can be made to maintain the PF functionality in DPDK
+upstreamable, then a case can be made to maintain the PF functionality in DPDK
without the EXPERIMENTAL tag.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 04/11] doc: correct errors in Linux system requirements guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
` (2 preceding siblings ...)
2026-01-14 22:26 ` [PATCH v4 03/11] doc: correct grammar and typos in design guide Stephen Hemminger
@ 2026-01-14 22:26 ` Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 05/11] doc: correct grammar in service cores guide Stephen Hemminger
` (6 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:26 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Fix capitalization of IBM Advance Toolchain
- Remove double spaces before "support"
- Add missing preposition "in" and article "the"
- Fix hugetlbfs mount command syntax (add -o flag and device)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/linux_gsg/sys_reqs.rst | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/doc/guides/linux_gsg/sys_reqs.rst b/doc/guides/linux_gsg/sys_reqs.rst
index 52a840fbe9..869584c344 100644
--- a/doc/guides/linux_gsg/sys_reqs.rst
+++ b/doc/guides/linux_gsg/sys_reqs.rst
@@ -68,7 +68,7 @@ Compilation of the DPDK
* Intel\ |reg| oneAPI DPC++/C++ Compiler.
-* IBM\ |reg| Advance ToolChain for Powerlinux. This is a set of open source development tools and runtime libraries
+* IBM\ |reg| Advance Toolchain for Powerlinux. This is a set of open source development tools and runtime libraries
which allows users to take leading edge advantage of IBM's latest POWER hardware features on Linux. To install
it, see the IBM official installation document.
@@ -93,7 +93,7 @@ e.g. :doc:`../nics/index`
Running DPDK Applications
-------------------------
-To run a DPDK application, some customization may be required on the target machine.
+To run a DPDK application, customization may be required on the target machine.
System Software
~~~~~~~~~~~~~~~
@@ -127,9 +127,9 @@ System Software
* HUGETLBFS
- * PROC_PAGE_MONITOR support
+ * PROC_PAGE_MONITOR support
- * HPET and HPET_MMAP configuration options should also be enabled if HPET support is required.
+ * HPET and HPET_MMAP configuration options should also be enabled if HPET support is required.
See the section on :ref:`High Precision Event Timer (HPET) Functionality <High_Precision_Event_Timer>` for more details.
.. _linux_gsg_hugepages:
@@ -138,7 +138,7 @@ Use of Hugepages in the Linux Environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hugepage support is required for the large memory pool allocation used for packet buffers
-(the HUGETLBFS option must be enabled in the running kernel as indicated the previous section).
+(the HUGETLBFS option must be enabled in the running kernel as indicated in the previous section).
By using hugepage allocations, performance is increased since fewer pages are needed,
and therefore less Translation Lookaside Buffers (TLBs, high speed translation caches),
which reduce the time it takes to translate a virtual page address to a physical page address.
@@ -225,10 +225,10 @@ However, in order to use hugepage sizes other than the default, it is necessary
to manually create mount points for those hugepage sizes (e.g. 1GB pages).
To make the hugepages of size 1GB available for DPDK use,
-following steps must be performed::
+the following steps must be performed::
mkdir /mnt/huge
- mount -t hugetlbfs pagesize=1GB /mnt/huge
+ mount -t hugetlbfs -o pagesize=1GB none /mnt/huge
The mount point can be made permanent across reboots, by adding the following line to the ``/etc/fstab`` file::
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 05/11] doc: correct grammar in service cores guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
` (3 preceding siblings ...)
2026-01-14 22:26 ` [PATCH v4 04/11] doc: correct errors in Linux system requirements guide Stephen Hemminger
@ 2026-01-14 22:26 ` Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 06/11] doc: correct grammar and errors in trace library guide Stephen Hemminger
` (5 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:26 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Use "abstract away the differences" instead of unclear
"simplify the difference"
- Fix preposition: "two methods for having" not "to having"
- Fix reversed phrasing: "Disabling that service on the core"
not "that core on the service"
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/service_cores.rst | 30 ++++++++++++-------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/doc/guides/prog_guide/service_cores.rst b/doc/guides/prog_guide/service_cores.rst
index 5284eeb96a..ee47a85a90 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,12 +4,12 @@
Service Cores
=============
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
-cores are used at runtime.
+DPDK has a concept known as service cores. Service cores enable a dynamic way of
+performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided so that applications can optionally control
+how the service cores are used at runtime.
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
require CPU cycles to operate) and service cores (DPDK lcores, tasked with
running services). The power of the service core concept is that the mapping
between service cores and services can be configured to abstract away the
@@ -18,24 +18,24 @@ difference between platforms and environments.
For example, the Eventdev has hardware and software PMDs. Of these the software
PMD requires an lcore to perform the scheduling operations, while the hardware
PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in software.
For detailed information about the service core API, please refer to the docs.
Service Core Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are two methods to having service cores in a DPDK application, either by
+There are two methods to having service cores in a DPDK application: either by
using the service corelist, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-S` corelist argument to EAL, which will
-take any cores available in the main DPDK corelist, and if also set
-in the service corelist the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-S` corelist argument to the EAL, which will
+take any cores available in the main DPDK corelist. If the cores are also listed
+in the service corelist, the cores become service-cores instead of DPDK
application lcores.
Enabling Services on Cores
~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set of
service cores. Enabling a service on a particular core means that the lcore in
question will run the service. Disabling that core on the service stops the
lcore in question from running the service.
@@ -48,8 +48,8 @@ function to run the service.
Service Core Statistics
~~~~~~~~~~~~~~~~~~~~~~~
-The service core library is capable of collecting runtime statistics like number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the number
+of calls to a specific service, and the number of cycles used by the service. The
cycle count collection is dynamically configurable, allowing any application to
profile the services running on the system at any time.
@@ -58,9 +58,9 @@ Service Core Tracing
The service core library is instrumented with tracepoints using the DPDK Trace
Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to use the
+state. To activate tracing when launching a DPDK program, it is necessary to use the
``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core tracing::
+to enable. Here is an example if you want to specify only service core tracing::
./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" --trace="lib.eal.service*"
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 06/11] doc: correct grammar and errors in trace library guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
` (4 preceding siblings ...)
2026-01-14 22:26 ` [PATCH v4 05/11] doc: correct grammar in service cores guide Stephen Hemminger
@ 2026-01-14 22:26 ` Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 07/11] doc: correct typos in log " Stephen Hemminger
` (4 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:26 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad, Jerin Jacob, Sunil Kumar Kori
Changes:
- CRITICAL: restore missing "out" in "compiled out by default"
(RTE_TRACE_POINT_FP is disabled by default, not enabled)
- Add missing article and verb ("a framework", "are broadly divided")
- Fix subject-verb agreement ("traces that use", "example greps/counts")
- Fix article before vowel sound ("an EAL")
- Fix preposition ("known to DPDK" not "known of DPDK")
- Use standard spelling "lockless" and "non-lcore"
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/trace_lib.rst | 68 ++++++++++++++---------------
1 file changed, 34 insertions(+), 34 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..829a061074 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism is useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
frequently than log messages, often in the range of thousands per second, with
very little execution overhead.
Logging is more appropriate for a very high-level analysis of less frequent
events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
Simply put, logging is one of the many use cases that can be satisfied with
tracing.
DPDK tracing library features
-----------------------------
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with minimum
impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
- Support ``overwrite`` and ``discard`` trace mode operations.
- String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a tracepoint?
+How to Add a Tracepoint
------------------------
This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
rte_trace_point_emit_string(str);
)
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
The user can choose any name for the tracepoint.
However, when adding a tracepoint in the DPDK library, the
``rte_<library_name>_trace_[<domain>_]<name>`` naming convention must be
followed.
The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition as the following
function template:
.. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the tracepoint
+Register the Tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -122,40 +122,40 @@ convention.
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
- For generic tracepoint or for tracepoint used in public header files,
+ For a generic tracepoint or for a tracepoint used in public header files,
the user must export a ``__<trace_function_name>`` symbol
in the library ``.map`` file for this tracepoint
- to be used out of the library, in shared builds.
+ to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast path tracepoint
+Fast Path Tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event record mode
+Event Record Mode
-----------------
-Event record mode is an attribute of trace buffers. Trace library exposes the
+Event record mode is an attribute of trace buffers. The trace library exposes the
following modes:
Overwrite
- When the trace buffer is full, new trace events overwrites the existing
+ When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
When the trace buffer is full, new trace events will be discarded.
-The mode can be configured either using EAL command line parameter
-``--trace-mode`` on application boot up or use ``rte_trace_mode_set()`` API to
+The mode can be configured either using the EAL command line parameter
+``--trace-mode`` on application boot up or use the ``rte_trace_mode_set()`` API to
configure at runtime.
-Trace file location
+Trace File Location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
@@ -167,7 +167,7 @@ option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and analyze the recorded events
+View and Analyze Recorded Events
------------------------------------
Once the trace directory is available, the user can view/inspect the recorded
@@ -176,7 +176,7 @@ events.
There are many tools you can use to read DPDK traces:
#. ``babeltrace`` is a command-line utility that converts trace formats; it
- supports the format that DPDK trace library produces, CTF, as well as a
+ supports the format that the DPDK trace library produces, CTF, as well as a
basic text output that can be grep'ed.
The babeltrace command is part of the Open Source Babeltrace project.
@@ -195,12 +195,12 @@ to babeltrace with no options::
all their events, merging them in chronological order.
You can pipe the output of the babeltrace into a tool like grep(1) for further
-filtering. Below example grep the events for ``ethdev`` only::
+filtering. The example below greps the events for ``ethdev`` only::
babeltrace /tmp/my-dpdk-trace | grep ethdev
You can pipe the output of babeltrace into a tool like wc(1) to count the
-recorded events. Below example count the number of ``ethdev`` events::
+recorded events. The example below counts the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
@@ -238,7 +238,7 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that uses ``Common Trace
+As DPDK trace library is designed to generate traces that use ``Common Trace
Format (CTF)``. ``CTF`` specification consists of the following units to create
a trace.
@@ -249,7 +249,7 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details broadly divided into the following areas:
+The implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
@@ -272,16 +272,16 @@ Trace memory
The trace memory will be allocated through an internal function
``__rte_trace_mem_per_thread_alloc()``. The trace memory will be allocated
-per thread to enable lock less trace-emit function.
+per thread to enable lockless trace-emit function.
-For non lcore threads, the trace memory is allocated on the first trace
+For non-lcore threads, the trace memory is allocated on the first trace
emission.
-For lcore threads, if trace points are enabled through a EAL option, the trace
-memory is allocated when the threads are known of DPDK
+For lcore threads, if trace points are enabled through an EAL option, the trace
+memory is allocated when the threads are known to DPDK
(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
-the behavior is the same as non lcore threads and the trace memory is allocated
+the behavior is the same as non-lcore threads and the trace memory is allocated
on the first trace emission.
Trace memory layout
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 07/11] doc: correct typos in log library guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
` (5 preceding siblings ...)
2026-01-14 22:26 ` [PATCH v4 06/11] doc: correct grammar and errors in trace library guide Stephen Hemminger
@ 2026-01-14 22:26 ` Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 08/11] doc: correct errors in command-line " Stephen Hemminger
` (3 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:26 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Fix spelling errors (stystem -> system, acheived -> achieved)
- Fix sentence structure for rte_log() parameter description
- Use consistent spelling "timestamp" (one word)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/log_lib.rst | 32 +++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index 3e888b8965..a3d6104e72 100644
--- a/doc/guides/prog_guide/log_lib.rst
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -6,7 +6,7 @@ Log Library
The DPDK Log library provides the logging functionality for other DPDK libraries and drivers.
By default, logs are sent only to standard error output of the process.
-The syslog EAL option can be used to redirect to the stystem logger on Linux and FreeBSD.
+The syslog EAL option can be used to redirect to the system logger on Linux and FreeBSD.
In addition, the log can be redirected to a different stdio file stream.
Log Levels
@@ -26,14 +26,14 @@ These levels, specified in ``rte_log.h`` are (from most to least important):
At runtime, only messages of a configured level or above (i.e. of higher importance)
will be emitted by the application to the log output.
-That level can be configured either by the application calling the relevant APIs from the logging library,
+That level can be configured either by the application calling relevant APIs from the logging library,
or by the user passing the ``--log-level`` parameter to the EAL via the application.
Setting Global Log Level
~~~~~~~~~~~~~~~~~~~~~~~~
To adjust the global log level for an application,
-just pass a numeric level or a level name to the ``--log-level`` EAL parameter.
+pass a numeric level or a level name to the ``--log-level`` EAL parameter.
For example::
/path/to/app --log-level=error
@@ -47,9 +47,9 @@ Within an application, the log level can be similarly set using the ``rte_log_se
Setting Log Level for a Component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In some cases, for example, for debugging purposes,
-it may be desirable to increase or decrease the log level for only a specific component, or set of components.
-To facilitate this, the ``--log-level`` argument also accepts an, optionally wildcarded, component name,
+In some cases (such as debugging purposes),
+you may want to increase or decrease the log level for only a specific component or set of components.
+To facilitate this, the ``--log-level`` argument also accepts an optionally wildcarded component name,
along with the desired level for that component.
For example::
@@ -57,13 +57,13 @@ For example::
/path/to/app --log-level=lib.*:warning
-Within an application, the same result can be got using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
+Within an application, the same result can be achieved by using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
Using Logging APIs to Generate Log Messages
-------------------------------------------
-To output log messages, ``rte_log()`` API function should be used.
+To output log messages, the ``rte_log()`` API function should be used.
As well as the log message, ``rte_log()`` takes two additional parameters:
* The log level
@@ -74,16 +74,16 @@ The component type is a unique id that identifies the particular DPDK component
To get this id, each component needs to register itself at startup,
using the macro ``RTE_LOG_REGISTER_DEFAULT``.
This macro takes two parameters, with the second being the default log level for the component.
-The first parameter, called "type", the name of the "logtype", or "component type" variable used in the component.
-This variable will be defined by the macro, and should be passed as the second parameter in calls to ``rte_log()``.
+The first parameter, called "type", is the name of the "logtype", or "component type" variable used in the component.
+This variable will be defined by the macro and should be passed as the second parameter in calls to ``rte_log()``.
In general, most DPDK components define their own logging macros to simplify the calls to the log APIs.
They do this by:
* Hiding the component type parameter inside the macro so it never needs to be passed explicitly.
* Using the log-level definitions given in ``rte_log.h`` to allow short textual names to be used in
- place of the numeric log levels.
+ place of numeric log levels.
-The following code is taken from ``rte_cfgfile.c`` and shows the log registration,
+The following code is taken from ``rte_cfgfile.c`` and shows the log registration
and subsequent definition of a shortcut logging macro.
It can be used as a template for any new components using DPDK logging.
@@ -98,10 +98,10 @@ It can be used as a template for any new components using DPDK logging.
it should be placed near the top of the C file using it.
If not, the logtype variable should be defined as an "extern int" near the top of the file.
- Similarly, if logging is to be done by multiple files in a component,
- only one file should register the logtype via the macro,
+ Similarly, if logging will be done by multiple files in a component,
+ only one file should register the logtype via the macro
and the logtype should be defined as an "extern int" in a common header file.
- Any component-specific logging macro should similarly be defined in that header.
+ Any component-specific logging macro should be similarly defined in that header.
Throughout the cfgfile library, all logging calls are therefore of the form:
@@ -122,7 +122,7 @@ For example::
Multiple alternative timestamp formats are available:
-.. csv-table:: Log time stamp format
+.. csv-table:: Log timestamp format
:header: "Format", "Description", "Example"
:widths: 6, 30, 32
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 08/11] doc: correct errors in command-line library guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
` (6 preceding siblings ...)
2026-01-14 22:26 ` [PATCH v4 07/11] doc: correct typos in log " Stephen Hemminger
@ 2026-01-14 22:27 ` Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 09/11] doc: correct errors in trace " Stephen Hemminger
` (2 subsequent siblings)
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:27 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fix several errors in the cmdline library documentation:
- Fix function name typo: cmdline_new_stdin -> cmdline_stdin_new
- Fix type name: cmdline_parse_t -> cmdline_parse_inst_t
- Fix grammar: "that others" -> "than others"
- Fix spelling: "boiler plate" -> "boilerplate"
- Clarify wording: "multiplex" -> "direct" for command routing
- Fix misleading phrase: "call a separate function" -> "call a single
function" (multiplexing routes multiple commands to one callback)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/cmdline.rst | 42 +++++++++++++++----------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..c794ec826f 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -4,9 +4,9 @@
Command-line Library
====================
-Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+Since its earliest versions, DPDK has included a command-line library,
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
This chapter covers the basics of the command-line library and how to use it in an application.
Library Features
@@ -18,14 +18,14 @@ The DPDK command-line library supports the following features:
* Ability to read and process commands taken from an input file, e.g. startup script
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
* Strings
* Signed/unsigned 16/32/64-bit integers
* IP Addresses
* Ethernet Addresses
-* Ability to multiplex multiple commands to a single callback function
+* Ability to direct multiple commands to a single callback function
Adding Command-line to an Application
-------------------------------------
@@ -46,7 +46,7 @@ Adding a command-line instance to an application involves a number of coding ste
Many of these steps can be automated using the script ``dpdk-cmdline-gen.py`` installed by DPDK,
and found in the ``buildtools`` folder in the source tree.
-This section covers adding a command-line using this script to generate the boiler plate,
+This section covers adding a command-line using this script to generate the boilerplate,
while the following section,
`Worked Example of Adding Command-line to an Application`_ covers the steps to do so manually.
@@ -56,7 +56,7 @@ Creating a Command List File
The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
While these can be piped to it via standard input, using a list file is probably best.
-The format of the list file must be:
+The format of the list file must follow these requirements:
* Comment lines start with '#' as first non-whitespace character
@@ -75,7 +75,7 @@ The format of the list file must be:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than a the type name.
+ have the comma-separated option list placed in braces, rather than by the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent ".c" file.
Providing the Function Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
and named ``cmd_<cmdname>_parsed``.
The function prototypes can be seen in the generated output header.
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
So, given the commands in our worked example below: ``quit`` and ``show port stats <n>``,
the callback functions would be:
@@ -151,11 +151,11 @@ the callback functions would be:
...
}
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To get to the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -179,7 +179,7 @@ To integrate the script output with the application,
we must ``#include`` the generated header into our applications C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (Modifiable via script parameter).
The callback functions may be in this same file, or in a separate one -
they just need to be available to the linker at build-time.
@@ -190,7 +190,7 @@ Limitations of the Script Approach
The script approach works for most commands that a user may wish to add to an application.
However, it does not support the full range of functions possible with the DPDK command-line library.
For example,
-it is not possible using the script to multiplex multiple commands into a single callback function.
+it is not possible using the script to direct multiple commands to a single callback function.
To use this functionality, the user should follow the instructions in the next section
`Worked Example of Adding Command-line to an Application`_ to manually configure a command-line instance.
@@ -416,7 +416,7 @@ Once we have our ``ctx`` variable defined,
we now just need to call the API to create the new command-line instance in our application.
The basic API is ``cmdline_new`` which will create an interactive command-line with all commands available.
However, if additional features for interactive use - such as tab-completion -
-are desired, it is recommended that ``cmdline_new_stdin`` be used instead.
+are desired, it is recommended that ``cmdline_stdin_new`` be used instead.
A pattern that can be used in applications is to use ``cmdline_new`` for processing any startup commands,
either from file or from the environment (as is done in the "dpdk-test" application),
@@ -449,8 +449,8 @@ For example, to handle a startup file and then provide an interactive prompt:
Multiplexing Multiple Commands to a Single Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To reduce the amount of boiler-plate code needed when creating a command-line for an application,
-it is possible to merge a number of commands together to have them call a separate function.
+To reduce the amount of boilerplate code needed when creating a command-line for an application,
+it is possible to merge a number of commands together to have them call a single function.
This can be done in a number of different ways:
* A callback function can be used as the target for a number of different commands.
@@ -463,7 +463,7 @@ This can be done in a number of different ways:
As a concrete example,
these two techniques are used in the DPDK unit test application ``dpdk-test``,
-where a single command ``cmdline_parse_t`` instance is used for all the "dump_<item>" test cases.
+where a single ``cmdline_parse_inst_t`` instance is used for all the "dump_<item>" test cases.
.. literalinclude:: ../../../app/test/commands.c
:language: c
@@ -481,7 +481,7 @@ the following DPDK files can be consulted for examples of command-line use.
This is not an exhaustive list of examples of command-line use in DPDK.
It is simply a list of a few files that may be of use to the application developer.
- Some of these referenced files contain more complex examples of use that others.
+ Some of these referenced files contain more complex examples of use than others.
* ``commands.c/.h`` in ``examples/cmdline``
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 09/11] doc: correct errors in trace library guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
` (7 preceding siblings ...)
2026-01-14 22:27 ` [PATCH v4 08/11] doc: correct errors in command-line " Stephen Hemminger
@ 2026-01-14 22:27 ` Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 10/11] doc: correct errors in stack " Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 11/11] doc: correct errors in RCU " Stephen Hemminger
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:27 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad, Jerin Jacob, Sunil Kumar Kori
Changes:
- Fix broken sentence in implementation details: "As DPDK trace library
is designed..." was a sentence fragment
- Fix tense inconsistency: "will be useful" -> "is useful",
"will be allocated" -> "is allocated",
"will be discarded" -> "are discarded"
- Fix grammatical parallelism: "or use the API" -> "or by using the API"
- Fix inconsistent capitalization in section headers: "Tracepoint" ->
"tracepoint" to match body text usage throughout
- Fix tool name consistency: "Tracecompass" -> "Trace Compass"
(official name, two words)
- Fix awkward phrasing: "one of a CTF trace's streams" ->
"one of the streams in a CTF trace"
- Fix trace header description for parallelism: "48 bits of timestamp
and 16 bits event ID" -> "a 48-bit timestamp and a 16-bit event ID"
- Add missing article: "to trace library" -> "to the trace library"
- Remove incorrect article:
"the ``my_tracepoint.h``" -> "``my_tracepoint.h``"
- Add function call parentheses: rte_eal_init -> rte_eal_init(),
rte_thread_register -> rte_thread_register()
- Fix ambiguous pronoun: "It can be overridden" -> "This location can
be overridden"
- Fix line wrap for readability in features list
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/trace_lib.rst | 80 ++++++++++++++---------------
1 file changed, 40 insertions(+), 40 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index 829a061074..33b6232ce3 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,7 +14,7 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-This mechanism will be useful for resolving a wide range of problems such as
+This mechanism is useful for resolving a wide range of problems such as
multi-core synchronization issues, latency measurements, and finding
post analysis information like CPU idle time, etc., that would otherwise be
extremely challenging to gather.
@@ -33,8 +33,8 @@ tracing.
DPDK tracing library features
-----------------------------
-- Provides a framework to add tracepoints in control and fast path APIs with minimum
- impact on performance.
+- Provides a framework to add tracepoints in control and fast path APIs with
+ minimal impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
@@ -47,8 +47,8 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a Tracepoint
-------------------------
+How to add a tracepoint
+-----------------------
This section steps you through the details of adding a simple tracepoint.
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the Tracepoint
+Register the tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -103,7 +103,7 @@ Register the Tracepoint
RTE_TRACE_POINT_REGISTER(app_trace_string, app.trace.string)
The above code snippet registers the ``app_trace_string`` tracepoint to
-trace library. Here, the ``my_tracepoint.h`` is the header file
+the trace library. Here, ``my_tracepoint.h`` is the header file
that the user created in the first step :ref:`create_tracepoint_header_file`.
The second argument for the ``RTE_TRACE_POINT_REGISTER`` is the name for the
@@ -129,7 +129,7 @@ convention.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast Path Tracepoint
+Fast path tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
@@ -139,7 +139,7 @@ the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event Record Mode
+Event record mode
-----------------
Event record mode is an attribute of trace buffers. The trace library exposes the
@@ -149,26 +149,26 @@ Overwrite
When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
- When the trace buffer is full, new trace events will be discarded.
+ When the trace buffer is full, new trace events are discarded.
The mode can be configured either using the EAL command line parameter
-``--trace-mode`` on application boot up or use the ``rte_trace_mode_set()`` API to
-configure at runtime.
+``--trace-mode`` on application boot up or by using the ``rte_trace_mode_set()``
+API at runtime.
-Trace File Location
+Trace file location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
the trace buffers to the filesystem. By default, the trace files are stored in
``$HOME/dpdk-traces/rte-yyyy-mm-dd-[AP]M-hh-mm-ss/``.
-It can be overridden by the ``--trace-dir=<directory path>`` EAL command line
-option.
+This location can be overridden by the ``--trace-dir=<directory path>`` EAL
+command line option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and Analyze Recorded Events
-------------------------------------
+View and analyze recorded events
+--------------------------------
Once the trace directory is available, the user can view/inspect the recorded
events.
@@ -204,28 +204,28 @@ recorded events. The example below counts the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
-Use the tracecompass GUI tool
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Use the Trace Compass GUI tool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-``Tracecompass`` is another tool to view/analyze the DPDK traces which gives
-a graphical view of events. Like ``babeltrace``, tracecompass also provides
+``Trace Compass`` is another tool to view/analyze the DPDK traces which gives
+a graphical view of events. Like ``babeltrace``, Trace Compass also provides
an interface to search for a particular event.
-To use ``tracecompass``, following are the minimum required steps:
+To use ``Trace Compass``, the following are the minimum required steps:
-- Install ``tracecompass`` to the localhost. Variants are available for Linux,
+- Install ``Trace Compass`` to the localhost. Variants are available for Linux,
Windows, and OS-X.
-- Launch ``tracecompass`` which will open a graphical window with trace
+- Launch ``Trace Compass`` which will open a graphical window with trace
management interfaces.
-- Open a trace using ``File->Open Trace`` option and select metadata file which
- is to be viewed/analyzed.
+- Open a trace using the ``File->Open Trace`` option and select the metadata file which
+ will be viewed/analyzed.
-For more details, refer
+For more details, refer to
`Trace Compass <https://www.eclipse.org/tracecompass/>`_.
Quick start
-----------
-This section steps you through the details of generating trace and viewing it.
+This section steps you through the details of generating the trace and viewing it.
- Start the dpdk-test::
@@ -238,9 +238,9 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that use ``Common Trace
-Format (CTF)``. ``CTF`` specification consists of the following units to create
-a trace.
+The DPDK trace library is designed to generate traces that use
+``Common Trace Format (CTF)``. The ``CTF`` specification consists of the
+following units to create a trace.
- ``Stream`` Sequence of packets.
- ``Packet`` Header and one or more events.
@@ -249,15 +249,15 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details are broadly divided into the following areas:
+Implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
-Based on the ``CTF`` specification, one of a CTF trace's streams is mandatory:
-the metadata stream. It contains exactly what you would expect: data about the
-trace itself. The metadata stream contains a textual description of the binary
-layouts of all the other streams.
+Based on the ``CTF`` specification, one of the streams in a CTF trace is
+mandatory: the metadata stream. It contains exactly what you would expect:
+data about the trace itself. The metadata stream contains a textual description
+of the binary layouts of all the other streams.
This description is written using the Trace Stream Description Language (TSDL),
a declarative language that exists only in the realm of CTF.
@@ -270,8 +270,8 @@ The internal ``trace_metadata_create()`` function generates the metadata.
Trace memory
~~~~~~~~~~~~
-The trace memory will be allocated through an internal function
-``__rte_trace_mem_per_thread_alloc()``. The trace memory will be allocated
+The trace memory is allocated through an internal function
+``__rte_trace_mem_per_thread_alloc()``. The trace memory is allocated
per thread to enable lockless trace-emit function.
For non-lcore threads, the trace memory is allocated on the first trace
@@ -279,7 +279,7 @@ emission.
For lcore threads, if trace points are enabled through an EAL option, the trace
memory is allocated when the threads are known to DPDK
-(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
+(``rte_eal_init()`` for EAL lcores, ``rte_thread_register()`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
the behavior is the same as non-lcore threads and the trace memory is allocated
on the first trace emission.
@@ -348,7 +348,7 @@ trace.header
| timestamp [47:0] |
+----------------------+
-The trace header is 64 bits, it consists of 48 bits of timestamp and 16 bits
+The trace header is 64 bits, consisting of a 48-bit timestamp and a 16-bit
event ID.
The ``packet.header`` and ``packet.context`` will be written in the slow path
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 10/11] doc: correct errors in stack library guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
` (8 preceding siblings ...)
2026-01-14 22:27 ` [PATCH v4 09/11] doc: correct errors in trace " Stephen Hemminger
@ 2026-01-14 22:27 ` Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 11/11] doc: correct errors in RCU " Stephen Hemminger
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:27 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Fix several errors in the stack library documentation:
- Fix grammar: "These function are" -> "These functions are"
- Fix grammar: "incorrect change" -> "incorrectly change"
- Fix RST section hierarchy: "Implementation" was marked as a
subsection (~~~~) but contained sections (----); corrected so
Implementation is a section and Lock-based/Lock-free stack are
subsections beneath it
- Fix inconsistent header capitalization: "Lock-based Stack" and
"Lock-free Stack" -> lowercase "stack" to match other DPDK docs
- Fix awkward wording: "this algorithm stack uses" -> "this algorithm
uses"
- Fix inconsistent underline lengths in RST headers
- Add code formatting to function name: rte_stack_create() with
backticks and parentheses
- Fix hyphenation: "multi-threading safe" -> "multi-thread safe"
(matches line 37 usage)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/stack_lib.rst | 32 ++++++++++++++---------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..1ca9d73bc0 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -13,8 +13,8 @@ The stack library provides the following basic operations:
user-specified socket, with either standard (lock-based) or lock-free
behavior.
-* Push and pop a burst of one or more stack objects (pointers). These function
- are multi-threading safe.
+* Push and pop a burst of one or more stack objects (pointers). These functions
+ are multi-thread safe.
* Free a previously created stack.
@@ -23,15 +23,15 @@ The stack library provides the following basic operations:
* Query a stack's current depth and number of free entries.
Implementation
-~~~~~~~~~~~~~~
+--------------
The library supports two types of stacks: standard (lock-based) and lock-free.
Both types use the same set of interfaces, but their implementations differ.
.. _Stack_Library_Std_Stack:
-Lock-based Stack
-----------------
+Lock-based stack
+~~~~~~~~~~~~~~~~
The lock-based stack consists of a contiguous array of pointers, a current
index, and a spinlock. Accesses to the stack are made multi-thread safe by the
@@ -39,13 +39,13 @@ spinlock.
.. _Stack_Library_LF_Stack:
-Lock-free Stack
-------------------
+Lock-free stack
+~~~~~~~~~~~~~~~
The lock-free stack consists of a linked list of elements, each containing a
data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
impede the forward progress of any other thread.
The lock-free push operation enqueues a linked list of pointers by pointing the
@@ -65,15 +65,15 @@ allocated before stack pushes and freed after stack pops. Since the stack has a
fixed maximum depth, these elements do not need to be dynamically created.
The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
-rte_stack_create().
+``rte_stack_create()``.
-Preventing the ABA Problem
+Preventing the ABA problem
^^^^^^^^^^^^^^^^^^^^^^^^^^
-To prevent the ABA problem, this algorithm stack uses a 128-bit
-compare-and-swap instruction to atomically update both the stack top pointer
-and a modification counter. The ABA problem can occur without a modification
-counter if, for example:
+To prevent the ABA problem, this algorithm uses a 128-bit compare-and-swap
+instruction to atomically update both the stack top pointer and a modification
+counter. The ABA problem can occur without a modification counter if, for
+example:
#. Thread A reads head pointer X and stores the pointed-to list element.
@@ -83,7 +83,7 @@ counter if, for example:
#. Thread A changes the head pointer with a compare-and-swap and succeeds.
In this case thread A would not detect that the list had changed, and would
-both pop stale data and incorrect change the head pointer. By adding a
+both pop stale data and incorrectly change the head pointer. By adding a
modification counter that is updated on every push and pop as part of the
compare-and-swap, the algorithm can detect when the list changes even if the
head pointer remains the same.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v4 11/11] doc: correct errors in RCU library guide
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
` (9 preceding siblings ...)
2026-01-14 22:27 ` [PATCH v4 10/11] doc: correct errors in stack " Stephen Hemminger
@ 2026-01-14 22:27 ` Stephen Hemminger
10 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-14 22:27 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad, Honnappa Nagarahalli
Fix several errors in the RCU library documentation:
- Fix wrong word: "exasperate" -> "exacerbate" (exasperate means to
annoy; exacerbate means to make worse)
- Fix typo: "such at rte_hash" -> "such as rte_hash"
- Fix grammar: "critical sections for D2 is" -> "critical section for
D2 is"
- Fix subject-verb agreement: "length... and number... is proportional"
-> "are proportional"
- Fix inconsistent abbreviation: "RT1" -> "reader thread 1" (RT1 was
never defined)
- Fix missing articles: add "the" before "grace period", "API"
- Fix double space before "The reader thread must"
- Add missing function parentheses throughout for consistency
- Add missing code formatting (backticks) around function names:
rte_rcu_qsbr_dq_enqueue, rte_rcu_qsbr_dq_reclaim,
rte_rcu_qsbr_dq_delete, rte_hash, rte_lpm
- Fix unexplained asterisk after "Reclaiming Resources*"
- Fix inconsistent capitalization in numbered list items
- Rewrap overly long lines for readability
- Clarify awkward sentence about memory examples
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/rcu_lib.rst | 143 ++++++++++++++++++------------
1 file changed, 86 insertions(+), 57 deletions(-)
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
index 9f3654f398..ac573eecbc 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -6,17 +6,17 @@ Read-Copy-Update (RCU) Library
Lockless data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
In the following sections, the term "memory" refers to memory allocated
by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory, such as an index into a free element array.
Since these data structures are lockless, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
separate the operation of removing an element into two steps:
#. Delete: in this step, the writer removes the reference to the element from
@@ -51,7 +51,7 @@ As shown in :numref:`figure_quiescent_state`, reader thread 1 accesses data
structures D1 and D2. When it is accessing D1, if the writer has to remove an
element from D1, the writer cannot free the memory associated with that
element immediately. The writer can return the memory to the allocator only
-after the reader stops referencing D1. In other words, reader thread RT1 has
+after the reader stops referencing D1. In other words, reader thread 1 has
to enter a quiescent state.
Similarly, since reader thread 2 is also accessing D1, the writer has to
@@ -62,28 +62,28 @@ quiescent state. Reader thread 3 was not accessing D1 when the delete
operation happened. So, reader thread 3 will not have a reference to the
deleted entry.
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical section for D2 is a quiescent state
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
Factors affecting the RCU mechanism
-----------------------------------
It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of the grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how grace period and critical
section affect this overhead.
-The writer has to poll the readers to identify the end of grace period.
+The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
-exasperate these conditions.
+exacerbate these conditions.
The length of the critical section and the number of reader threads
-is proportional to the duration of the grace period. Keeping the critical
+are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
@@ -117,14 +117,14 @@ How to use this library
The application must allocate memory and initialize a QS variable.
Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes a maximum number of reader threads
+using this variable as a parameter.
Further, the application can initialize a QS variable using the API
``rte_rcu_qsbr_init()``.
Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use ``lcore_id`` as the thread ID where applicable.
@@ -132,14 +132,14 @@ The application could also use ``lcore_id`` as the thread ID where applicable.
The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
reporting its quiescent state.
Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
-wait for such reader threads to enter quiescent state. The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
-can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
+calls (for example, while using eventdev APIs). The writer thread should not
+wait for such reader threads to enter quiescent state. The reader thread must
+call the ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
+can call the ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
returns.
The writer thread can trigger the reader threads to report their quiescent
@@ -147,13 +147,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
``rte_rcu_qsbr_start()`` returns a token to each caller.
-The writer thread must call ``rte_rcu_qsbr_check()`` API with the token to
-get the current quiescent state status. Option to block till all the reader
+The writer thread must call the ``rte_rcu_qsbr_check()`` API with the token to
+get the current quiescent state status. The option to block till all the reader
threads enter the quiescent state is provided. If this API indicates that
all the reader threads have entered the quiescent state, the application
can free the deleted entry.
-The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock free.
+The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock-free.
Hence, they can be called concurrently from multiple writers even while
running as worker threads.
@@ -171,7 +171,7 @@ polls till all the readers enter the quiescent state or go offline. This API
does not allow the writer to do useful work while waiting and introduces
additional memory accesses due to continuous polling. However, the application
does not have to store the token or the reference to the deleted resource. The
-resource can be freed immediately after ``rte_rcu_qsbr_synchronize()`` API
+resource can be freed immediately after the ``rte_rcu_qsbr_synchronize()`` API
returns.
The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
@@ -179,9 +179,9 @@ The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
quiescent state. The ``rte_rcu_qsbr_check()`` API will not wait for this reader
thread to report the quiescent state status anymore.
-The reader threads should call ``rte_rcu_qsbr_quiescent()`` API to indicate that
+The reader threads should call the ``rte_rcu_qsbr_quiescent()`` API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
-quiescent state query and update the state accordingly.
+quiescent state query and updates the state accordingly.
The ``rte_rcu_qsbr_lock()`` and ``rte_rcu_qsbr_unlock()`` are empty functions.
However, these APIs can aid in debugging issues. One can mark the access to
@@ -199,42 +199,71 @@ the application. When a writer deletes an entry from a data structure, the write
#. Should check if the readers have completed a grace period and free the resources.
There are several APIs provided to help with this process. The writer
-can create a FIFO to store the references to deleted resources using ``rte_rcu_qsbr_dq_create()``.
+can create a FIFO to store the references to deleted resources using
+``rte_rcu_qsbr_dq_create()``.
The resources can be enqueued to this FIFO using ``rte_rcu_qsbr_dq_enqueue()``.
-If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing. It will also reclaim resources on regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
+If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue()`` will reclaim the resources
+before enqueuing.
+It will also reclaim resources on a regular basis to keep the FIFO from growing
+too large. If the writer runs out of resources, the writer can call
+``rte_rcu_qsbr_dq_reclaim()`` API to reclaim resources.
+``rte_rcu_qsbr_dq_delete()`` is provided to reclaim any remaining resources and
+free the FIFO while shutting down.
-However, if this resource reclamation process were to be integrated in lock-free data structure libraries, it
-hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms. The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
+However, if this resource reclamation process were to be integrated in lock-free
+data structure libraries, it hides this complexity from the application and
+makes it easier for the application to adopt lock-free algorithms.
-In any DPDK application, the resource reclamation process using QSBR can be split into 4 parts:
+The following paragraphs discuss how the reclamation process can be integrated
+in DPDK libraries.
+
+In any DPDK application, the resource reclamation process using QSBR can be
+split into 4 parts:
#. Initialization
-#. Quiescent State Reporting
-#. Reclaiming Resources
+#. Quiescent state reporting
+#. Reclaiming resources
#. Shutdown
-The design proposed here assigns different parts of this process to client libraries and applications. The term 'client library' refers to lock-free data structure libraries such at rte_hash, rte_lpm etc. in DPDK or similar libraries outside of DPDK. The term 'application' refers to the packet processing application that makes use of DPDK such as L3 Forwarding example application, OVS, VPP etc..
-
-The application has to handle 'Initialization' and 'Quiescent State Reporting'. So,
-
-* the application has to create the RCU variable and register the reader threads to report their quiescent state.
-* the application has to register the same RCU variable with the client library.
-* reader threads in the application have to report the quiescent state. This allows for the application to control the length of the critical section/how frequently the application wants to report the quiescent state.
-
-The client library will handle 'Reclaiming Resources' part of the process. The
-client libraries will make use of the writer thread context to execute the memory
-reclamation algorithm. So,
-
-* client library should provide an API to register a RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
-* client library should use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
-* if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
-
-The 'Shutdown' process needs to be shared between the application and the
-client library.
-
-* the application should make sure that the reader threads are not using the shared data structure, unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
-
-* client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
+The design proposed here assigns different parts of this process to client
+libraries and applications. The term "client library" refers to lock-free data
+structure libraries such as ``rte_hash``, ``rte_lpm`` etc. in DPDK or similar
+libraries outside of DPDK. The term "application" refers to the packet
+processing application that makes use of DPDK such as L3 Forwarding example
+application, OVS, VPP etc.
+
+The application must handle "Initialization" and "Quiescent State Reporting".
+Therefore, the application:
+
+* Must create the RCU variable and register the reader threads to report their
+ quiescent state.
+* Must register the same RCU variable with the client library.
+* Note that reader threads in the application have to report the quiescent
+ state. This allows for the application to control the length of the critical
+ section/how frequently the application wants to report the quiescent state.
+
+The client library will handle the "Reclaiming Resources" part of the process.
+The client libraries will make use of the writer thread context to execute the
+memory reclamation algorithm. So, the client library should:
+
+* Provide an API to register an RCU variable that it will use. It should call
+ ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to
+ deleted entries.
+* Use ``rte_rcu_qsbr_dq_enqueue()`` to enqueue the deleted resources on the FIFO
+ and start the grace period.
+* Note that if the library runs out of resources while adding entries, it should
+ call ``rte_rcu_qsbr_dq_reclaim()`` to reclaim the resources and try the
+ resource allocation again.
+
+The "Shutdown" process needs to be shared between the application and the
+client library. Note that:
+
+* The application should make sure that the reader threads are not using the
+ shared data structure, unregister the reader threads from the QSBR variable
+ before calling the client library's shutdown function.
+
+* The client library should call ``rte_rcu_qsbr_dq_delete()`` to reclaim any
+ remaining resources and free the FIFO.
Integrating the resource reclamation with client libraries removes the burden from
the application and makes it easy to use lock-free algorithms.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 00/54] doc: programmers guide corrections
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
` (11 preceding siblings ...)
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 01/54] doc: correct grammar and typos in argparse library guide Stephen Hemminger
` (53 more replies)
12 siblings, 54 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
This is a revision of earlier corrections to the programmer's guide.

At this point, it is a collaborative work of myself (Stephen), the
technical writer (Nandini) and AI (Claude).
This patch series contains 54 patches that improve the
quality and clarity of DPDK programmer guide documentation. The changes
address grammar errors, typos, awkward phrasing, and consistency issues
across multiple documentation files.
The improvements include:
* Grammar corrections (subject-verb agreement, verb tenses, article usage)
* Typo fixes (field names, function names, general spelling)
* Clarity improvements (rephrasing awkward or unclear sentences)
* Consistency improvements (terminology, formatting)
* Minor whitespace and formatting corrections
Files affected include documentation for:
- argparse library
- service cores
- vhost library
- toeplitz hash
- writing efficient code
- Various other programmer guides
Stephen Hemminger (54):
doc: correct grammar and typos in argparse library guide
doc: correct grammar in service cores guide
doc: correct grammar and errors in trace library guide
doc: correct typos in log library guide
doc: correct errors in command-line library guide
doc: correct errors in trace library guide
doc: correct errors in stack library guide
doc: correct errors in RCU library guide
doc: correct grammar and formatting in ASan guide
doc: correct grammar and typos in bbdev guide
doc: correct grammar and formatting in bpf lib guide
doc: correct grammar and typos in meson build guide
doc: correct grammar and typos in cryptodev guide
doc: correct grammar and formatting in compressdev guide
doc: correct grammar in dmadev guide
doc: correct grammar in efd guide
doc: correct grammar in EAL guide
doc: correct double space in FIB guide
doc: correct grammar in GRO guide
doc: correct grammar in GSO guide
doc: correct typos and grammar in graph guide
doc: correct grammar in hash guide
doc: correct grammar and typos in IP fragment guide
doc: correct double spaces in IPsec guide
doc: correct grammar in lcore variables guide
doc: correct typo in link bonding guide
doc: correct grammar in LTO guide
doc: correct grammar in LPM guide
doc: correct grammar and typo in LPM6 guide
doc: correct grammar in introduction
doc: correct grammar in mbuf library guide
doc: correct grammar in membership library guide
doc: correct errors in mempool library guide
doc: correct style in meson unit tests guide
doc: correct errors in metrics library guide
doc: correct grammar in mldev library guide
doc: correct grammar in multi-process guide
doc: correct grammar in overview
doc: correct grammar in ACL library guide
doc: correct typos in packet distributor guide
doc: correct grammar in packet framework guide
doc: correct grammar in PDCP library guide
doc: correct grammar in pdump library guide
doc: correct typos in power management guide
doc: correct grammar in profiling guide
doc: correct errors in regexdev guide
doc: correct grammar in reorder library guide
doc: correct whitespace in RIB library guide
doc: correct incomplete sentence in ring library guide
doc: correct grammar in security library guide
doc: correct hyphenation in thread safety guide
doc: correct errors in toeplitz hash library guide
doc: correct errors in vhost library guide
doc: correct whitespace in efficient code guide
doc/guides/prog_guide/argparse_lib.rst | 24 +--
doc/guides/prog_guide/asan.rst | 14 +-
doc/guides/prog_guide/bbdev.rst | 6 +-
doc/guides/prog_guide/bpf_lib.rst | 46 +++++-
doc/guides/prog_guide/build-sdk-meson.rst | 10 +-
doc/guides/prog_guide/cmdline.rst | 42 ++---
doc/guides/prog_guide/compressdev.rst | 10 +-
doc/guides/prog_guide/cryptodev_lib.rst | 21 +--
doc/guides/prog_guide/dmadev.rst | 6 +-
doc/guides/prog_guide/efd_lib.rst | 8 +-
.../prog_guide/env_abstraction_layer.rst | 4 +-
doc/guides/prog_guide/fib_lib.rst | 2 +-
.../generic_receive_offload_lib.rst | 20 +--
.../generic_segmentation_offload_lib.rst | 2 +-
doc/guides/prog_guide/graph_lib.rst | 16 +-
doc/guides/prog_guide/hash_lib.rst | 4 +-
doc/guides/prog_guide/intro.rst | 4 +-
.../prog_guide/ip_fragment_reassembly_lib.rst | 12 +-
doc/guides/prog_guide/ipsec_lib.rst | 4 +-
doc/guides/prog_guide/lcore_var.rst | 2 +-
.../link_bonding_poll_mode_drv_lib.rst | 2 +-
doc/guides/prog_guide/log_lib.rst | 32 ++--
doc/guides/prog_guide/lpm6_lib.rst | 4 +-
doc/guides/prog_guide/lpm_lib.rst | 2 +-
doc/guides/prog_guide/lto.rst | 2 +-
doc/guides/prog_guide/mbuf_lib.rst | 26 ++--
doc/guides/prog_guide/member_lib.rst | 26 ++--
doc/guides/prog_guide/mempool_lib.rst | 8 +-
doc/guides/prog_guide/meson_ut.rst | 12 +-
doc/guides/prog_guide/metrics_lib.rst | 26 ++--
doc/guides/prog_guide/mldev.rst | 20 +--
doc/guides/prog_guide/multi_proc_support.rst | 28 ++--
doc/guides/prog_guide/overview.rst | 10 +-
.../prog_guide/packet_classif_access_ctrl.rst | 10 +-
doc/guides/prog_guide/packet_distrib_lib.rst | 6 +-
doc/guides/prog_guide/packet_framework.rst | 6 +-
doc/guides/prog_guide/pdcp_lib.rst | 24 +--
doc/guides/prog_guide/pdump_lib.rst | 6 +-
doc/guides/prog_guide/power_man.rst | 8 +-
doc/guides/prog_guide/profile_app.rst | 2 +-
doc/guides/prog_guide/rcu_lib.rst | 143 +++++++++++-------
doc/guides/prog_guide/regexdev.rst | 24 +--
doc/guides/prog_guide/reorder_lib.rst | 4 +-
doc/guides/prog_guide/rib_lib.rst | 2 +-
doc/guides/prog_guide/ring_lib.rst | 2 +-
doc/guides/prog_guide/rte_security.rst | 4 +-
doc/guides/prog_guide/service_cores.rst | 30 ++--
doc/guides/prog_guide/stack_lib.rst | 32 ++--
doc/guides/prog_guide/thread_safety.rst | 2 +-
doc/guides/prog_guide/toeplitz_hash_lib.rst | 4 +-
doc/guides/prog_guide/trace_lib.rst | 118 +++++++--------
doc/guides/prog_guide/vhost_lib.rst | 22 +--
.../prog_guide/writing_efficient_code.rst | 2 +-
53 files changed, 488 insertions(+), 418 deletions(-)
--
2.51.0
^ permalink raw reply [flat|nested] 118+ messages in thread
* [PATCH v5 01/54] doc: correct grammar and typos in argparse library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 02/54] doc: correct grammar in service cores guide Stephen Hemminger
` (52 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Add missing articles ("a user-friendly", "a long_name field")
- Change awkward phrasing ("take with" -> "have")
- Correct verb forms ("automatic generate" -> "automatic generation of",
"are parsing" -> "are parsed", "don't" -> "doesn't")
- Fix typo in field name (val_save -> val_saver)
- Remove stray backtick in code example
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/argparse_lib.rst | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/argparse_lib.rst b/doc/guides/prog_guide/argparse_lib.rst
index 4a4214e00f..b0907cfc07 100644
--- a/doc/guides/prog_guide/argparse_lib.rst
+++ b/doc/guides/prog_guide/argparse_lib.rst
@@ -5,21 +5,21 @@ Argparse Library
================
The argparse library provides argument parsing functionality,
-this library makes it easy to write user-friendly command-line program.
+this library makes it easy to write a user-friendly command-line program.
Features and Capabilities
-------------------------
-- Support parsing optional argument (which could take with no-value,
- required-value and optional-value).
+- Support parsing optional argument (which could have no-value,
+ required-value or optional-value).
-- Support parsing positional argument (which must take with required-value).
+- Support parsing positional argument (which must have required-value).
- Support getopt-style argument reordering for non-flag arguments as an alternative to positional arguments.
-- Support automatic generate usage information.
+- Support automatic generation of usage information.
-- Support issue errors when provide with invalid arguments.
+- Support issuing errors when provided with invalid arguments.
- Support parsing argument by two ways:
@@ -126,15 +126,15 @@ the following two modes are supported (take above ``--ccc`` as an example):
- The single mode: ``--ccc`` or ``-c``.
-- The kv mode: ``--ccc=123`` or ``-c=123`` or ``-c123```.
+- The kv mode: ``--ccc=123`` or ``-c=123`` or ``-c123``.
For positional arguments which must take required-value,
-their values are parsing in the order defined.
+their values are parsed in the order defined.
.. note::
The compact mode is not supported.
- Take above ``-a`` and ``-d`` as an example, don't support ``-ad`` input.
+ Take above ``-a`` and ``-d`` as an example, doesn't support ``-ad`` input.
Parsing by autosave way
~~~~~~~~~~~~~~~~~~~~~~~
@@ -169,7 +169,7 @@ For arguments which are not flags (i.e. don't start with a hyphen '-'),
there are two ways in which they can be handled by the library:
#. Positional arguments: these are defined in the ``args`` array with a NULL ``short_name`` field,
- and long_name field that does not start with a hyphen '-'.
+ and a ``long_name`` field that does not start with a hyphen '-'.
They are parsed as required-value arguments.
#. As ignored, or unhandled arguments: if the ``ignore_non_flag_args`` field in the ``rte_argparse`` object is set to true,
@@ -283,7 +283,7 @@ Parsing by callback way
It could also choose to use callback to parse,
just define a unique index for the argument
-and make the ``val_save`` field to be NULL also zero value-type.
+and make the ``val_saver`` field be NULL also zero value-type.
In the example at the top of this section,
the arguments ``--ddd``/``--eee``/``--fff`` and ``ppp`` all use this way.
@@ -311,7 +311,7 @@ Then the user input could contain multiple ``--xyz`` arguments.
.. note::
- The multiple times argument only support with optional argument
+ The multiple times argument is only supported with optional arguments
and must be parsed by callback way.
Help and Usage Information
--
2.51.0
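The single-mode and kv-mode forms described in the patch above can be sketched in plain C. The helper below is hypothetical and for illustration only; it is not the ``rte_argparse`` implementation, and the name ``match_option`` is invented here:

```c
#include <string.h>
#include <stddef.h>

/* Illustrative sketch of the two argument forms described above.
 * Returns 1 if arg matches the option, 0 otherwise.  On a kv-mode match,
 * *value points at the attached value; on a single-mode match it is NULL. */
static int
match_option(const char *arg, const char *long_name, char short_name,
             const char **value)
{
    size_t llen = strlen(long_name);

    *value = NULL;
    if (strncmp(arg, long_name, llen) == 0) {
        if (arg[llen] == '\0')          /* --ccc     (single mode) */
            return 1;
        if (arg[llen] == '=') {         /* --ccc=123 (kv mode)     */
            *value = arg + llen + 1;
            return 1;
        }
        return 0;                       /* prefix only, no match   */
    }
    if (arg[0] == '-' && arg[1] == short_name) {
        if (arg[2] == '\0')             /* -c        (single mode) */
            return 1;
        if (arg[2] == '=')              /* -c=123    (kv mode)     */
            *value = arg + 3;
        else                            /* -c123     (kv mode)     */
            *value = arg + 2;
        return 1;
    }
    return 0;
}
```

With ``--ccc``/``-c``, the inputs ``--ccc=123``, ``-c=123`` and ``-c123`` all produce the value ``123``, while a bare ``--ccc`` matches with no attached value, mirroring the kv and single modes described above.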
* [PATCH v5 02/54] doc: correct grammar in service cores guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 01/54] doc: correct grammar and typos in argparse library guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 03/54] doc: correct grammar and errors in trace library guide Stephen Hemminger
` (51 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Use "abstract away the differences" instead of unclear
"simplify the difference"
- Change preposition: "two methods for having" not "to having"
- Change reversed phrasing: "Disabling that service on the core"
not "that core on the service"
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/service_cores.rst | 30 ++++++++++++-------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/doc/guides/prog_guide/service_cores.rst b/doc/guides/prog_guide/service_cores.rst
index 5284eeb96a..ee47a85a90 100644
--- a/doc/guides/prog_guide/service_cores.rst
+++ b/doc/guides/prog_guide/service_cores.rst
@@ -4,12 +4,12 @@
Service Cores
=============
-DPDK has a concept known as service cores, which enables a dynamic way of
-performing work on DPDK lcores. Service core support is built into the EAL, and
-an API is provided to optionally allow applications to control how the service
-cores are used at runtime.
+DPDK has a concept known as service cores. Service cores enable a dynamic way of
+performing work on DPDK lcores. Service core support is built into the EAL.
+An API is provided so that applications can optionally control
+how the service cores are used at runtime.
-The service cores concept is built up out of services (components of DPDK that
+The service cores concept is built out of services (components of DPDK that
require CPU cycles to operate) and service cores (DPDK lcores, tasked with
running services). The power of the service core concept is that the mapping
between service cores and services can be configured to abstract away the
@@ -18,24 +18,24 @@ difference between platforms and environments.
For example, the Eventdev has hardware and software PMDs. Of these the software
PMD requires an lcore to perform the scheduling operations, while the hardware
PMD does not. With service cores, the application would not directly notice
-that the scheduling is done in software.
+that the scheduling is done in the software.
For detailed information about the service core API, please refer to the docs.
Service Core Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are two methods to having service cores in a DPDK application, either by
+There are two methods to having service cores in a DPDK application: either by
using the service corelist, or by dynamically adding cores using the API.
-The simpler of the two is to pass the `-S` corelist argument to EAL, which will
-take any cores available in the main DPDK corelist, and if also set
-in the service corelist the cores become service-cores instead of DPDK
+The simpler of the two is to pass the `-s` coremask argument to the EAL, which will
+take any cores available in the main DPDK coremask. If the bits are also set
+in the service coremask, the cores become service-cores instead of DPDK
application lcores.
Enabling Services on Cores
~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each registered service can be individually mapped to a service core, or set of
+Each registered service can be individually mapped to a service core, or a set of
service cores. Enabling a service on a particular core means that the lcore in
question will run the service. Disabling that service on the core stops the
lcore in question from running the service.
@@ -48,8 +48,8 @@ function to run the service.
Service Core Statistics
~~~~~~~~~~~~~~~~~~~~~~~
-The service core library is capable of collecting runtime statistics like number
-of calls to a specific service, and number of cycles used by the service. The
+The service core library is capable of collecting runtime statistics like the number
+of calls to a specific service, and the number of cycles used by the service. The
cycle count collection is dynamically configurable, allowing any application to
profile the services running on the system at any time.
@@ -58,9 +58,9 @@ Service Core Tracing
The service core library is instrumented with tracepoints using the DPDK Trace
Library. These tracepoints allow you to track the service and logical cores
-state. To activate tracing when launching a DPDK program it is necessary to use the
+state. To activate tracing when launching a DPDK program, it is necessary to use the
``--trace`` option to specify a regular expression to select which tracepoints
-to enable. Here is an example if you want to only specify service core tracing::
+to enable. Here is an example if you want to specify only service core tracing::
./dpdk/examples/service_cores/build/service_cores --trace="lib.eal.thread*" --trace="lib.eal.service*"
--
2.51.0
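The core-set intersection described in the patch above (cores present in both the main set and the service set become service cores) can be sketched with plain bitmask arithmetic. This is an illustrative sketch, not EAL code; the function names are invented here:

```c
#include <stdint.h>

/* Cores set in both masks become service cores. */
static uint64_t
service_lcores(uint64_t main_mask, uint64_t service_mask)
{
    return main_mask & service_mask;
}

/* The remaining main-mask cores stay application lcores. */
static uint64_t
application_lcores(uint64_t main_mask, uint64_t service_mask)
{
    return main_mask & ~service_mask;
}

/* Enabling a service on a core = setting that core's bit in the
 * service's run mask; the lcore runs every service mapped to it. */
static int
lcore_runs_service(uint64_t service_run_mask, unsigned int lcore_id)
{
    return (service_run_mask >> lcore_id) & 1;
}
```

For a main mask of 0xF and a service mask of 0x6, lcores 1 and 2 become service cores while lcores 0 and 3 remain application lcores.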
* [PATCH v5 03/54] doc: correct grammar and errors in trace library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 01/54] doc: correct grammar and typos in argparse library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 02/54] doc: correct grammar in service cores guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 04/54] doc: correct typos in log " Stephen Hemminger
` (50 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- CRITICAL: restore missing "out" in "compiled out by default"
(RTE_TRACE_POINT_FP is disabled by default, not enabled)
- Add missing article and verb ("a framework", "are broadly divided")
- Change subject-verb agreement ("traces that use", "example greps/counts")
- Change article before vowel sound ("an EAL")
- Change preposition ("known to DPDK" not "known of DPDK")
- Use standard spelling "lockless" and "non-lcore"
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/trace_lib.rst | 68 ++++++++++++++---------------
1 file changed, 34 insertions(+), 34 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index d9b17abe90..829a061074 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,29 +14,29 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-Such a mechanism will be useful for resolving a wide range of problems such as
-multi-core synchronization issues, latency measurements, finding out the
-post analysis information like CPU idle time, etc that would otherwise be
-extremely challenging to get.
+This mechanism will be useful for resolving a wide range of problems such as
+multi-core synchronization issues, latency measurements, and finding
+post analysis information like CPU idle time, etc., that would otherwise be
+extremely challenging to gather.
Tracing is often compared to *logging*. However, tracers and loggers are two
-different tools, serving two different purposes.
-Tracers are designed to record much lower-level events that occur much more
+different tools serving two different purposes.
+Tracers are designed to record much lower-level events that occur more
frequently than log messages, often in the range of thousands per second, with
very little execution overhead.
Logging is more appropriate for a very high-level analysis of less frequent
events: user accesses, exceptional conditions (errors and warnings, for
-example), database transactions, instant messaging communications, and such.
+example), database transactions, instant messaging communications, etc.
Simply put, logging is one of the many use cases that can be satisfied with
tracing.
DPDK tracing library features
-----------------------------
-- A framework to add tracepoints in control and fast path APIs with minimum
+- Provides a framework to add tracepoints in control and fast path APIs with minimum
impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
-- Enable and disable the tracepoints at runtime.
+- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
- Support ``overwrite`` and ``discard`` trace mode operations.
- String-based tracepoint object lookup.
@@ -47,7 +47,7 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a tracepoint?
+How to Add a Tracepoint
------------------------
This section steps you through the details of adding a simple tracepoint.
@@ -67,14 +67,14 @@ Create the tracepoint header file
rte_trace_point_emit_string(str);
)
-The above macro creates ``app_trace_string`` tracepoint.
+The above macro creates the ``app_trace_string`` tracepoint.
The user can choose any name for the tracepoint.
However, when adding a tracepoint in the DPDK library, the
``rte_<library_name>_trace_[<domain>_]<name>`` naming convention must be
followed.
The examples are ``rte_eal_trace_generic_str``, ``rte_mempool_trace_create``.
-The ``RTE_TRACE_POINT`` macro expands from above definition as the following
+The ``RTE_TRACE_POINT`` macro expands from the above definition as the following
function template:
.. code-block:: c
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the tracepoint
+Register the Tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -122,40 +122,40 @@ convention.
The ``RTE_TRACE_POINT_REGISTER`` defines the placeholder for the
``rte_trace_point_t`` tracepoint object.
- For generic tracepoint or for tracepoint used in public header files,
+ For a generic tracepoint or for the tracepoint used in public header files,
the user must export a ``__<trace_function_name>`` symbol
in the library ``.map`` file for this tracepoint
- to be used out of the library, in shared builds.
+ to be used out of the library in shared builds.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast path tracepoint
+Fast Path Tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
``RTE_TRACE_POINT_FP``. When adding the tracepoint in fast path code,
the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
-``RTE_TRACE_POINT_FP`` is compiled out by default and it can be enabled using
+``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event record mode
+Event Record Mode
-----------------
-Event record mode is an attribute of trace buffers. Trace library exposes the
+Event record mode is an attribute of trace buffers. The trace library exposes the
following modes:
Overwrite
- When the trace buffer is full, new trace events overwrites the existing
+ When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
When the trace buffer is full, new trace events will be discarded.
-The mode can be configured either using EAL command line parameter
-``--trace-mode`` on application boot up or use ``rte_trace_mode_set()`` API to
+The mode can be configured either using the EAL command line parameter
+``--trace-mode`` on application boot up or use the ``rte_trace_mode_set()`` API to
configure at runtime.
-Trace file location
+Trace File Location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
@@ -167,7 +167,7 @@ option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and analyze the recorded events
+View and Analyze Recorded Events
------------------------------------
Once the trace directory is available, the user can view/inspect the recorded
@@ -176,7 +176,7 @@ events.
There are many tools you can use to read DPDK traces:
#. ``babeltrace`` is a command-line utility that converts trace formats; it
- supports the format that DPDK trace library produces, CTF, as well as a
+ supports the format that the DPDK trace library produces, CTF, as well as a
basic text output that can be grep'ed.
The babeltrace command is part of the Open Source Babeltrace project.
@@ -195,12 +195,12 @@ to babeltrace with no options::
all their events, merging them in chronological order.
You can pipe the output of the babeltrace into a tool like grep(1) for further
-filtering. Below example grep the events for ``ethdev`` only::
+filtering. The example below greps the events for ``ethdev`` only::
babeltrace /tmp/my-dpdk-trace | grep ethdev
You can pipe the output of babeltrace into a tool like wc(1) to count the
-recorded events. Below example count the number of ``ethdev`` events::
+recorded events. The example below counts the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
@@ -238,7 +238,7 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that uses ``Common Trace
+The DPDK trace library is designed to generate traces that use the ``Common Trace
Format (CTF)``. ``CTF`` specification consists of the following units to create
a trace.
@@ -249,7 +249,7 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details broadly divided into the following areas:
+The implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
@@ -272,16 +272,16 @@ Trace memory
The trace memory will be allocated through an internal function
``__rte_trace_mem_per_thread_alloc()``. The trace memory will be allocated
-per thread to enable lock less trace-emit function.
+per thread to enable lockless trace-emit function.
-For non lcore threads, the trace memory is allocated on the first trace
+For non-lcore threads, the trace memory is allocated on the first trace
emission.
-For lcore threads, if trace points are enabled through a EAL option, the trace
-memory is allocated when the threads are known of DPDK
+For lcore threads, if trace points are enabled through an EAL option, the trace
+memory is allocated when the threads are known to DPDK
(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
-the behavior is the same as non lcore threads and the trace memory is allocated
+the behavior is the same as non-lcore threads and the trace memory is allocated
on the first trace emission.
Trace memory layout
--
2.51.0
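The ``overwrite`` and ``discard`` event-record modes discussed in the patch above can be sketched as a tiny ring buffer. This is an illustrative sketch of the two modes only, not the DPDK trace buffer implementation:

```c
#define TRACE_BUF_EVENTS 4

enum trace_mode { TRACE_MODE_OVERWRITE, TRACE_MODE_DISCARD };

struct trace_buf {
    int events[TRACE_BUF_EVENTS];
    unsigned int count;         /* total events emitted so far */
    enum trace_mode mode;
};

/* Overwrite mode: a full buffer wraps around and overwrites the oldest
 * slot.  Discard mode: once full, new events are dropped. */
static void
trace_emit(struct trace_buf *tb, int event)
{
    if (tb->count >= TRACE_BUF_EVENTS && tb->mode == TRACE_MODE_DISCARD)
        return;                               /* buffer full: drop */
    tb->events[tb->count % TRACE_BUF_EVENTS] = event;  /* wrap on overwrite */
    tb->count++;
}
```

Emitting six events into a four-slot buffer keeps the newest events in overwrite mode, but keeps the oldest four in discard mode.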
* [PATCH v5 04/54] doc: correct typos in log library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (2 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 03/54] doc: correct grammar and errors in trace library guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 05/54] doc: correct errors in command-line " Stephen Hemminger
` (49 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Change spelling errors (stystem -> system, acheived -> achieved)
- Change sentence structure for rte_log() parameter description
- Use consistent spelling "timestamp" (one word)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/log_lib.rst | 32 +++++++++++++++----------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/log_lib.rst b/doc/guides/prog_guide/log_lib.rst
index 3e888b8965..a3d6104e72 100644
--- a/doc/guides/prog_guide/log_lib.rst
+++ b/doc/guides/prog_guide/log_lib.rst
@@ -6,7 +6,7 @@ Log Library
The DPDK Log library provides the logging functionality for other DPDK libraries and drivers.
By default, logs are sent only to standard error output of the process.
-The syslog EAL option can be used to redirect to the stystem logger on Linux and FreeBSD.
+The syslog EAL option can be used to redirect to the system logger on Linux and FreeBSD.
In addition, the log can be redirected to a different stdio file stream.
Log Levels
@@ -26,14 +26,14 @@ These levels, specified in ``rte_log.h`` are (from most to least important):
At runtime, only messages of a configured level or above (i.e. of higher importance)
will be emitted by the application to the log output.
-That level can be configured either by the application calling the relevant APIs from the logging library,
+That level can be configured either by the application calling relevant APIs from the logging library,
or by the user passing the ``--log-level`` parameter to the EAL via the application.
Setting Global Log Level
~~~~~~~~~~~~~~~~~~~~~~~~
To adjust the global log level for an application,
-just pass a numeric level or a level name to the ``--log-level`` EAL parameter.
+pass a numeric level or a level name to the ``--log-level`` EAL parameter.
For example::
/path/to/app --log-level=error
@@ -47,9 +47,9 @@ Within an application, the log level can be similarly set using the ``rte_log_se
Setting Log Level for a Component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In some cases, for example, for debugging purposes,
-it may be desirable to increase or decrease the log level for only a specific component, or set of components.
-To facilitate this, the ``--log-level`` argument also accepts an, optionally wildcarded, component name,
+In some cases (such as debugging purposes),
+you may want to increase or decrease the log level for only a specific component or set of components.
+To facilitate this, the ``--log-level`` argument also accepts an optionally wildcarded component name,
along with the desired level for that component.
For example::
@@ -57,13 +57,13 @@ For example::
/path/to/app --log-level=lib.*:warning
-Within an application, the same result can be got using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
+Within an application, the same result can be achieved by using the ``rte_log_set_level_pattern()`` or ``rte_log_set_level_regex()`` APIs.
Using Logging APIs to Generate Log Messages
-------------------------------------------
-To output log messages, ``rte_log()`` API function should be used.
+To output log messages, the ``rte_log()`` API function should be used.
As well as the log message, ``rte_log()`` takes two additional parameters:
* The log level
@@ -74,16 +74,16 @@ The component type is a unique id that identifies the particular DPDK component
To get this id, each component needs to register itself at startup,
using the macro ``RTE_LOG_REGISTER_DEFAULT``.
This macro takes two parameters, with the second being the default log level for the component.
-The first parameter, called "type", the name of the "logtype", or "component type" variable used in the component.
-This variable will be defined by the macro, and should be passed as the second parameter in calls to ``rte_log()``.
+The first parameter, called "type", is the name of the "logtype", or "component type" variable used in the component.
+This variable will be defined by the macro and should be passed as the second parameter in calls to ``rte_log()``.
In general, most DPDK components define their own logging macros to simplify the calls to the log APIs.
They do this by:
* Hiding the component type parameter inside the macro so it never needs to be passed explicitly.
* Using the log-level definitions given in ``rte_log.h`` to allow short textual names to be used in
- place of the numeric log levels.
+ place of numeric log levels.
-The following code is taken from ``rte_cfgfile.c`` and shows the log registration,
+The following code is taken from ``rte_cfgfile.c`` and shows the log registration
and subsequent definition of a shortcut logging macro.
It can be used as a template for any new components using DPDK logging.
@@ -98,10 +98,10 @@ It can be used as a template for any new components using DPDK logging.
it should be placed near the top of the C file using it.
If not, the logtype variable should be defined as an "extern int" near the top of the file.
- Similarly, if logging is to be done by multiple files in a component,
- only one file should register the logtype via the macro,
+ Similarly, if logging will be done by multiple files in a component,
+ only one file should register the logtype via the macro
and the logtype should be defined as an "extern int" in a common header file.
- Any component-specific logging macro should similarly be defined in that header.
+ Any component-specific logging macro should be similarly defined in that header.
Throughout the cfgfile library, all logging calls are therefore of the form:
@@ -122,7 +122,7 @@ For example::
Multiple alternative timestamp formats are available:
-.. csv-table:: Log time stamp format
+.. csv-table:: Log timestamp format
:header: "Format", "Description", "Example"
:widths: 6, 30, 32
--
2.51.0
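The component-pattern filtering described in the log patch above (e.g. ``--log-level=lib.*:warning``) can be sketched as follows. The numeric values mirror the ordering in ``rte_log.h`` (1 = emergency, 8 = debug; lower is more important), but the helper itself is hypothetical and uses POSIX ``fnmatch()`` as a stand-in for the library's own pattern matching:

```c
#include <fnmatch.h>

/* Levels mirroring the rte_log.h ordering: lower number = more important. */
enum { LVL_EMERG = 1, LVL_ERR = 4, LVL_WARNING = 5, LVL_INFO = 7, LVL_DEBUG = 8 };

/* Hypothetical helper, not the DPDK implementation: a message is emitted
 * only if its component matches the configured pattern and its level is
 * at or above (numerically <=) the configured level. */
static int
should_log(const char *component, int msg_level,
           const char *pattern, int configured_level)
{
    if (fnmatch(pattern, component, 0) != 0)
        return 0;               /* pattern does not cover this component */
    return msg_level <= configured_level;
}
```

With the pattern ``lib.*`` set to warning level, an error from ``lib.cfgfile`` is emitted, a debug message from the same component is suppressed, and components outside the pattern are untouched.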
* [PATCH v5 05/54] doc: correct errors in command-line library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (3 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 04/54] doc: correct typos in log " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 06/54] doc: correct errors in trace " Stephen Hemminger
` (48 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Change several errors in the cmdline library documentation:
- Change function name typo: cmdline_new_stdin -> cmdline_stdin_new
- Change type name: cmdline_parse_t -> cmdline_parse_inst_t
- Change grammar: "that others" -> "than others"
- Change spelling: "boiler plate" -> "boilerplate"
- Clarify wording: "multiplex" -> "direct" for command routing
- Change misleading phrase: "call a separate function" -> "call a single
function" (multiplexing routes multiple commands to one callback)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/cmdline.rst | 42 +++++++++++++++----------------
1 file changed, 21 insertions(+), 21 deletions(-)
diff --git a/doc/guides/prog_guide/cmdline.rst b/doc/guides/prog_guide/cmdline.rst
index e20281ceb5..c794ec826f 100644
--- a/doc/guides/prog_guide/cmdline.rst
+++ b/doc/guides/prog_guide/cmdline.rst
@@ -4,9 +4,9 @@
Command-line Library
====================
-Since its earliest versions, DPDK has included a command-line library -
-primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries,
-but the library is also exported on install and can be used by any end application.
+Since its earliest versions, DPDK has included a command-line library,
+primarily for internal use by, for example, ``dpdk-testpmd`` and the ``dpdk-test`` binaries.
+However, the library is also exported on install and can be used by any end application.
This chapter covers the basics of the command-line library and how to use it in an application.
Library Features
@@ -18,14 +18,14 @@ The DPDK command-line library supports the following features:
* Ability to read and process commands taken from an input file, e.g. startup script
-* Parameterized commands able to take multiple parameters with different datatypes:
+* Parameterized commands that can take multiple parameters with different datatypes:
* Strings
* Signed/unsigned 16/32/64-bit integers
* IP Addresses
* Ethernet Addresses
-* Ability to multiplex multiple commands to a single callback function
+* Ability to direct multiple commands to a single callback function
Adding Command-line to an Application
-------------------------------------
@@ -46,7 +46,7 @@ Adding a command-line instance to an application involves a number of coding ste
Many of these steps can be automated using the script ``dpdk-cmdline-gen.py`` installed by DPDK,
and found in the ``buildtools`` folder in the source tree.
-This section covers adding a command-line using this script to generate the boiler plate,
+This section covers adding a command-line using this script to generate the boilerplate,
while the following section,
`Worked Example of Adding Command-line to an Application`_ covers the steps to do so manually.
@@ -56,7 +56,7 @@ Creating a Command List File
The ``dpdk-cmdline-gen.py`` script takes as input a list of commands to be used by the application.
While these can be piped to it via standard input, using a list file is probably best.
-The format of the list file must be:
+The format of the list file must follow these requirements:
* Comment lines start with '#' as first non-whitespace character
@@ -75,7 +75,7 @@ The format of the list file must be:
* ``<IPv6>dst_ip6``
* Variable fields, which take their values from a list of options,
- have the comma-separated option list placed in braces, rather than a the type name.
+ have the comma-separated option list placed in braces, rather than the type name.
For example,
* ``<(rx,tx,rxtx)>mode``
@@ -127,13 +127,13 @@ and the callback stubs will be written to an equivalent ".c" file.
Providing the Function Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-As discussed above, the script output is a header file, containing structure definitions,
-but the callback functions themselves obviously have to be provided by the user.
-These callback functions must be provided as non-static functions in a C file,
+As discussed above, the script output is a header file containing structure definitions,
+but the callback functions must be provided by the user.
+These callback functions must be provided as non-static functions in a C file
and named ``cmd_<cmdname>_parsed``.
The function prototypes can be seen in the generated output header.
-The "cmdname" part of the function name is built up by combining the non-variable initial tokens in the command.
+The "cmdname" part of the function name is built by combining the non-variable initial tokens in the command.
So, given the commands in our worked example below: ``quit`` and ``show port stats <n>``,
the callback functions would be:
@@ -151,11 +151,11 @@ the callback functions would be:
...
}
-These functions must be provided by the developer, but, as stated above,
+These functions must be provided by the developer. However, as stated above,
stub functions may be generated by the script automatically using the ``--stubs`` parameter.
The same "cmdname" stem is used in the naming of the generated structures too.
-To get at the results structure for each command above,
+To access the results structure for each command above,
the ``parsed_result`` parameter should be cast to ``struct cmd_quit_result``
or ``struct cmd_show_port_stats_result`` respectively.
@@ -179,7 +179,7 @@ To integrate the script output with the application,
we must ``#include`` the generated header into our applications C file,
and then have the command-line created via either ``cmdline_new`` or ``cmdline_stdin_new``.
The first parameter to the function call should be the context array in the generated header file,
-``ctx`` by default. (Modifiable via script parameter).
+``ctx`` by default (modifiable via a script parameter).
The callback functions may be in this same file, or in a separate one -
they just need to be available to the linker at build-time.
@@ -190,7 +190,7 @@ Limitations of the Script Approach
The script approach works for most commands that a user may wish to add to an application.
However, it does not support the full range of functions possible with the DPDK command-line library.
For example,
-it is not possible using the script to multiplex multiple commands into a single callback function.
+it is not possible using the script to direct multiple commands to a single callback function.
To use this functionality, the user should follow the instructions in the next section
`Worked Example of Adding Command-line to an Application`_ to manually configure a command-line instance.
@@ -416,7 +416,7 @@ Once we have our ``ctx`` variable defined,
we now just need to call the API to create the new command-line instance in our application.
The basic API is ``cmdline_new`` which will create an interactive command-line with all commands available.
However, if additional features for interactive use - such as tab-completion -
-are desired, it is recommended that ``cmdline_new_stdin`` be used instead.
+are desired, it is recommended that ``cmdline_stdin_new`` be used instead.
A pattern that can be used in applications is to use ``cmdline_new`` for processing any startup commands,
either from file or from the environment (as is done in the "dpdk-test" application),
@@ -449,8 +449,8 @@ For example, to handle a startup file and then provide an interactive prompt:
Multiplexing Multiple Commands to a Single Function
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-To reduce the amount of boiler-plate code needed when creating a command-line for an application,
-it is possible to merge a number of commands together to have them call a separate function.
+To reduce the amount of boilerplate code needed when creating a command-line for an application,
+it is possible to merge a number of commands together to have them call a single function.
This can be done in a number of different ways:
* A callback function can be used as the target for a number of different commands.
@@ -463,7 +463,7 @@ This can be done in a number of different ways:
As a concrete example,
these two techniques are used in the DPDK unit test application ``dpdk-test``,
-where a single command ``cmdline_parse_t`` instance is used for all the "dump_<item>" test cases.
+where a single ``cmdline_parse_inst_t`` instance is used for all the "dump_<item>" test cases.
.. literalinclude:: ../../../app/test/commands.c
:language: c
@@ -481,7 +481,7 @@ the following DPDK files can be consulted for examples of command-line use.
This is not an exhaustive list of examples of command-line use in DPDK.
It is simply a list of a few files that may be of use to the application developer.
- Some of these referenced files contain more complex examples of use that others.
+ Some of these referenced files contain more complex examples of use than others.
* ``commands.c/.h`` in ``examples/cmdline``
--
2.51.0
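The "cmdname" naming rule described in the patch above, concatenating the non-variable leading tokens of a command and appending ``_parsed``, can be sketched as a small helper. This is a hypothetical illustration, not code from ``dpdk-cmdline-gen.py``:

```c
#include <stdio.h>
#include <string.h>

/* Build the callback name for a command string such as
 * "show port stats <n>": fixed leading tokens are joined with '_',
 * variable tokens (starting with '<') end the scan, and "_parsed"
 * is appended. */
static void
make_callback_name(const char *command, char *out, size_t outlen)
{
    char buf[256];
    char *tok;

    snprintf(out, outlen, "cmd");
    snprintf(buf, sizeof(buf), "%s", command);
    for (tok = strtok(buf, " "); tok != NULL; tok = strtok(NULL, " ")) {
        if (tok[0] == '<')              /* variable token: stop here */
            break;
        strncat(out, "_", outlen - strlen(out) - 1);
        strncat(out, tok, outlen - strlen(out) - 1);
    }
    strncat(out, "_parsed", outlen - strlen(out) - 1);
}
```

``show port stats <n>`` yields ``cmd_show_port_stats_parsed`` and ``quit`` yields ``cmd_quit_parsed``, matching the naming convention in the worked example above.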
* [PATCH v5 06/54] doc: correct errors in trace library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (4 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 05/54] doc: correct errors in command-line " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 07/54] doc: correct errors in stack " Stephen Hemminger
` (47 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Changes:
- Change broken sentence in implementation details: "As DPDK trace library
is designed..." was a sentence fragment
- Change tense inconsistency: "will be useful" -> "is useful",
"will be allocated" -> "is allocated",
"will be discarded" -> "are discarded"
- Change grammatical parallelism: "or use the API" -> "or by using the API"
- Change inconsistent capitalization in section headers: "Tracepoint" ->
"tracepoint" to match body text usage throughout
- Change tool name consistency: "Tracecompass" -> "Trace Compass"
(official name, two words)
- Change awkward phrasing: "one of a CTF trace's streams" ->
"one of the streams in a CTF trace"
- Change trace header description for parallelism: "48 bits of timestamp
and 16 bits event ID" -> "a 48-bit timestamp and a 16-bit event ID"
- Add missing article: "to trace library" -> "to the trace library"
- Remove incorrect article:
"the ``my_tracepoint.h``" -> "``my_tracepoint.h``"
- Add function call parentheses: rte_eal_init -> rte_eal_init(),
rte_thread_register -> rte_thread_register()
- Change ambiguous pronoun: "It can be overridden" -> "This location can
be overridden"
- Change line wrap for readability in features list
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/trace_lib.rst | 80 ++++++++++++++---------------
1 file changed, 40 insertions(+), 40 deletions(-)
diff --git a/doc/guides/prog_guide/trace_lib.rst b/doc/guides/prog_guide/trace_lib.rst
index 829a061074..33b6232ce3 100644
--- a/doc/guides/prog_guide/trace_lib.rst
+++ b/doc/guides/prog_guide/trace_lib.rst
@@ -14,7 +14,7 @@ When recording, specific instrumentation points placed in the software source
code generate events that are saved on a giant tape: a trace file.
The trace file then later can be opened in *trace viewers* to visualize and
analyze the trace events with timestamps and multi-core views.
-This mechanism will be useful for resolving a wide range of problems such as
+This mechanism is useful for resolving a wide range of problems such as
multi-core synchronization issues, latency measurements, and finding
post analysis information like CPU idle time, etc., that would otherwise be
extremely challenging to gather.
@@ -33,8 +33,8 @@ tracing.
DPDK tracing library features
-----------------------------
-- Provides a framework to add tracepoints in control and fast path APIs with minimum
- impact on performance.
+- Provides a framework to add tracepoints in control and fast path APIs with
+ minimal impact on performance.
Typical trace overhead is ~20 cycles and instrumentation overhead is 1 cycle.
- Enable and disable tracepoints at runtime.
- Save the trace buffer to the filesystem at any point in time.
@@ -47,8 +47,8 @@ DPDK tracing library features
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-How to add a Tracepoint
-------------------------
+How to add a tracepoint
+-----------------------
This section steps you through the details of adding a simple tracepoint.
@@ -91,7 +91,7 @@ The consumer of this tracepoint can invoke
``app_trace_string(const char *str)`` to emit the trace event to the trace
buffer.
-Register the Tracepoint
+Register the tracepoint
~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: c
@@ -103,7 +103,7 @@ Register the Tracepoint
RTE_TRACE_POINT_REGISTER(app_trace_string, app.trace.string)
The above code snippet registers the ``app_trace_string`` tracepoint to
-trace library. Here, the ``my_tracepoint.h`` is the header file
+the trace library. Here, ``my_tracepoint.h`` is the header file
that the user created in the first step :ref:`create_tracepoint_header_file`.
The second argument for the ``RTE_TRACE_POINT_REGISTER`` is the name for the
@@ -129,7 +129,7 @@ convention.
For example, ``__app_trace_string`` will be the exported symbol in the
above example.
-Fast Path Tracepoint
+Fast path tracepoint
--------------------
In order to avoid performance impact in fast path code, the library introduced
@@ -139,7 +139,7 @@ the user must use ``RTE_TRACE_POINT_FP`` instead of ``RTE_TRACE_POINT``.
``RTE_TRACE_POINT_FP`` is compiled out by default and can be enabled using
the ``enable_trace_fp`` option for meson build.
-Event Record Mode
+Event record mode
-----------------
Event record mode is an attribute of trace buffers. The trace library exposes the
@@ -149,26 +149,26 @@ Overwrite
When the trace buffer is full, new trace events overwrite the existing
captured events in the trace buffer.
Discard
- When the trace buffer is full, new trace events will be discarded.
+ When the trace buffer is full, new trace events are discarded.
The mode can be configured either using the EAL command line parameter
-``--trace-mode`` on application boot up or use the ``rte_trace_mode_set()`` API to
-configure at runtime.
+``--trace-mode`` on application boot up or by using the ``rte_trace_mode_set()``
+API at runtime.
-Trace File Location
+Trace file location
-------------------
On ``rte_trace_save()`` or ``rte_eal_cleanup()`` invocation, the library saves
the trace buffers to the filesystem. By default, the trace files are stored in
``$HOME/dpdk-traces/rte-yyyy-mm-dd-[AP]M-hh-mm-ss/``.
-It can be overridden by the ``--trace-dir=<directory path>`` EAL command line
-option.
+This location can be overridden by the ``--trace-dir=<directory path>`` EAL
+command line option.
For more information, refer to :doc:`../linux_gsg/linux_eal_parameters` for
trace EAL command line options.
-View and Analyze Recorded Events
-------------------------------------
+View and analyze recorded events
+--------------------------------
Once the trace directory is available, the user can view/inspect the recorded
events.
@@ -204,28 +204,28 @@ recorded events. The example below counts the number of ``ethdev`` events::
babeltrace /tmp/my-dpdk-trace | grep ethdev | wc --lines
-Use the tracecompass GUI tool
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Use the Trace Compass GUI tool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-``Tracecompass`` is another tool to view/analyze the DPDK traces which gives
-a graphical view of events. Like ``babeltrace``, tracecompass also provides
+``Trace Compass`` is another tool to view/analyze the DPDK traces which gives
+a graphical view of events. Like ``babeltrace``, Trace Compass also provides
an interface to search for a particular event.
-To use ``tracecompass``, following are the minimum required steps:
+To use ``Trace Compass``, the following are the minimum required steps:
-- Install ``tracecompass`` to the localhost. Variants are available for Linux,
+- Install ``Trace Compass`` to the localhost. Variants are available for Linux,
Windows, and OS-X.
-- Launch ``tracecompass`` which will open a graphical window with trace
+- Launch ``Trace Compass`` which will open a graphical window with trace
management interfaces.
-- Open a trace using ``File->Open Trace`` option and select metadata file which
- is to be viewed/analyzed.
+- Open a trace using the ``File->Open Trace`` option and select the metadata file which
+ will be viewed/analyzed.
-For more details, refer
+For more details, refer to
`Trace Compass <https://www.eclipse.org/tracecompass/>`_.
Quick start
-----------
-This section steps you through the details of generating trace and viewing it.
+This section steps you through the details of generating the trace and viewing it.
- Start the dpdk-test::
@@ -238,9 +238,9 @@ This section steps you through the details of generating trace and viewing it.
Implementation details
----------------------
-As DPDK trace library is designed to generate traces that use ``Common Trace
-Format (CTF)``. ``CTF`` specification consists of the following units to create
-a trace.
+The DPDK trace library is designed to generate traces that use
+``Common Trace Format (CTF)``. The ``CTF`` specification consists of the
+following units to create a trace.
- ``Stream`` Sequence of packets.
- ``Packet`` Header and one or more events.
@@ -249,15 +249,15 @@ a trace.
For detailed information, refer to
`Common Trace Format <https://diamon.org/ctf/>`_.
-The implementation details are broadly divided into the following areas:
+Implementation details are broadly divided into the following areas:
Trace metadata creation
~~~~~~~~~~~~~~~~~~~~~~~
-Based on the ``CTF`` specification, one of a CTF trace's streams is mandatory:
-the metadata stream. It contains exactly what you would expect: data about the
-trace itself. The metadata stream contains a textual description of the binary
-layouts of all the other streams.
+Based on the ``CTF`` specification, one of the streams in a CTF trace is
+mandatory: the metadata stream. It contains exactly what you would expect:
+data about the trace itself. The metadata stream contains a textual description
+of the binary layouts of all the other streams.
This description is written using the Trace Stream Description Language (TSDL),
a declarative language that exists only in the realm of CTF.
@@ -270,8 +270,8 @@ The internal ``trace_metadata_create()`` function generates the metadata.
Trace memory
~~~~~~~~~~~~
-The trace memory will be allocated through an internal function
-``__rte_trace_mem_per_thread_alloc()``. The trace memory will be allocated
+The trace memory is allocated through an internal function
+``__rte_trace_mem_per_thread_alloc()``. The trace memory is allocated
per thread to enable lockless trace-emit function.
For non-lcore threads, the trace memory is allocated on the first trace
@@ -279,7 +279,7 @@ emission.
For lcore threads, if trace points are enabled through an EAL option, the trace
memory is allocated when the threads are known to DPDK
-(``rte_eal_init`` for EAL lcores, ``rte_thread_register`` for non-EAL lcores).
+(``rte_eal_init()`` for EAL lcores, ``rte_thread_register()`` for non-EAL lcores).
Otherwise, when trace points are enabled later in the life of the application,
the behavior is the same as non-lcore threads and the trace memory is allocated
on the first trace emission.
@@ -348,7 +348,7 @@ trace.header
| timestamp [47:0] |
+----------------------+
-The trace header is 64 bits, it consists of 48 bits of timestamp and 16 bits
+The trace header is 64 bits, consisting of a 48-bit timestamp and a 16-bit
event ID.
The ``packet.header`` and ``packet.context`` will be written in the slow path
--
2.51.0
* [PATCH v5 07/54] doc: correct errors in stack library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (5 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 06/54] doc: correct errors in trace " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 08/54] doc: correct errors in RCU " Stephen Hemminger
` (46 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Correct several errors in the stack library documentation:
- Change grammar: "These function are" -> "These functions are"
- Change grammar: "incorrect change" -> "incorrectly change"
- Change RST section hierarchy: "Implementation" was marked as a
subsection (~~~~) but contained sections (----); corrected so
Implementation is a section and Lock-based/Lock-free stack are
subsections beneath it
- Change inconsistent header capitalization: "Lock-based Stack" and
"Lock-free Stack" -> lowercase "stack" to match other DPDK docs
- Change awkward wording: "this algorithm stack uses" -> "this algorithm
uses"
- Change inconsistent underline lengths in RST headers
- Add code formatting to function name: rte_stack_create() with
backticks and parentheses
- Change hyphenation: "multi-threading safe" -> "multi-thread safe"
(matches line 37 usage)
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/stack_lib.rst | 32 ++++++++++++++---------------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/doc/guides/prog_guide/stack_lib.rst b/doc/guides/prog_guide/stack_lib.rst
index 975d3ad796..1ca9d73bc0 100644
--- a/doc/guides/prog_guide/stack_lib.rst
+++ b/doc/guides/prog_guide/stack_lib.rst
@@ -13,8 +13,8 @@ The stack library provides the following basic operations:
user-specified socket, with either standard (lock-based) or lock-free
behavior.
-* Push and pop a burst of one or more stack objects (pointers). These function
- are multi-threading safe.
+* Push and pop a burst of one or more stack objects (pointers). These functions
+ are multi-thread safe.
* Free a previously created stack.
@@ -23,15 +23,15 @@ The stack library provides the following basic operations:
* Query a stack's current depth and number of free entries.
Implementation
-~~~~~~~~~~~~~~
+--------------
The library supports two types of stacks: standard (lock-based) and lock-free.
Both types use the same set of interfaces, but their implementations differ.
.. _Stack_Library_Std_Stack:
-Lock-based Stack
-----------------
+Lock-based stack
+~~~~~~~~~~~~~~~~
The lock-based stack consists of a contiguous array of pointers, a current
index, and a spinlock. Accesses to the stack are made multi-thread safe by the
@@ -39,13 +39,13 @@ spinlock.
.. _Stack_Library_LF_Stack:
-Lock-free Stack
-------------------
+Lock-free stack
+~~~~~~~~~~~~~~~
The lock-free stack consists of a linked list of elements, each containing a
data pointer and a next pointer, and an atomic stack depth counter. The
-lock-free property means that multiple threads can push and pop simultaneously,
-and one thread being preempted/delayed in a push or pop operation will not
+lock-free property means that multiple threads can push and pop simultaneously.
+One thread being preempted/delayed in a push or pop operation will not
impede the forward progress of any other thread.
The lock-free push operation enqueues a linked list of pointers by pointing the
@@ -65,15 +65,15 @@ allocated before stack pushes and freed after stack pops. Since the stack has a
fixed maximum depth, these elements do not need to be dynamically created.
The lock-free behavior is selected by passing the *RTE_STACK_F_LF* flag to
-rte_stack_create().
+``rte_stack_create()``.
-Preventing the ABA Problem
+Preventing the ABA problem
^^^^^^^^^^^^^^^^^^^^^^^^^^
-To prevent the ABA problem, this algorithm stack uses a 128-bit
-compare-and-swap instruction to atomically update both the stack top pointer
-and a modification counter. The ABA problem can occur without a modification
-counter if, for example:
+To prevent the ABA problem, this algorithm uses a 128-bit compare-and-swap
+instruction to atomically update both the stack top pointer and a modification
+counter. The ABA problem can occur without a modification counter if, for
+example:
#. Thread A reads head pointer X and stores the pointed-to list element.
@@ -83,7 +83,7 @@ counter if, for example:
#. Thread A changes the head pointer with a compare-and-swap and succeeds.
In this case thread A would not detect that the list had changed, and would
-both pop stale data and incorrect change the head pointer. By adding a
+both pop stale data and incorrectly change the head pointer. By adding a
modification counter that is updated on every push and pop as part of the
compare-and-swap, the algorithm can detect when the list changes even if the
head pointer remains the same.
--
2.51.0
* [PATCH v5 08/54] doc: correct errors in RCU library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (6 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 07/54] doc: correct errors in stack " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 09/54] doc: correct grammar and formatting in ASan guide Stephen Hemminger
` (45 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Nandini Persad
Correct several errors in the RCU library documentation:
- Change wrong word: "exasperate" -> "exacerbate" (exasperate means to
annoy; exacerbate means to make worse)
- Change typo: "such at rte_hash" -> "such as rte_hash"
- Change grammar: "critical sections for D2 is" -> "critical section for
D2 is"
- Change subject-verb agreement: "length... and number... is proportional"
-> "are proportional"
- Change inconsistent abbreviation: "RT1" -> "reader thread 1" (RT1 was
never defined)
- Add missing articles: "the" before "grace period", "API"
- Remove double space before "The reader thread must"
- Add missing function parentheses throughout for consistency
- Add missing code formatting (backticks) around function names:
rte_rcu_qsbr_dq_enqueue, rte_rcu_qsbr_dq_reclaim,
rte_rcu_qsbr_dq_delete, rte_hash, rte_lpm
- Remove unexplained asterisk after "Reclaiming Resources"
- Change inconsistent capitalization in numbered list items
- Rewrap overly long lines for readability
- Clarify awkward sentence about memory examples
Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/rcu_lib.rst | 143 ++++++++++++++++++------------
1 file changed, 86 insertions(+), 57 deletions(-)
diff --git a/doc/guides/prog_guide/rcu_lib.rst b/doc/guides/prog_guide/rcu_lib.rst
index 9f3654f398..ac573eecbc 100644
--- a/doc/guides/prog_guide/rcu_lib.rst
+++ b/doc/guides/prog_guide/rcu_lib.rst
@@ -6,17 +6,17 @@ Read-Copy-Update (RCU) Library
Lockless data structures provide scalability and determinism.
They enable use cases where locking may not be allowed
-(for example real-time applications).
+(for example, real-time applications).
In the following sections, the term "memory" refers to memory allocated
by typical APIs like malloc() or anything that is representative of
-memory, for example an index of a free element array.
+memory, such as an index into a free element array.
Since these data structures are lockless, the writers and readers
are accessing the data structures concurrently. Hence, while removing
an element from a data structure, the writers cannot return the memory
-to the allocator, without knowing that the readers are not
-referencing that element/memory anymore. Hence, it is required to
+to the allocator without knowing that the readers are not
+referencing that element/memory anymore. Therefore, it is required to
separate the operation of removing an element into two steps:
#. Delete: in this step, the writer removes the reference to the element from
@@ -51,7 +51,7 @@ As shown in :numref:`figure_quiescent_state`, reader thread 1 accesses data
structures D1 and D2. When it is accessing D1, if the writer has to remove an
element from D1, the writer cannot free the memory associated with that
element immediately. The writer can return the memory to the allocator only
-after the reader stops referencing D1. In other words, reader thread RT1 has
+after the reader stops referencing D1. In other words, reader thread 1 has
to enter a quiescent state.
Similarly, since reader thread 2 is also accessing D1, the writer has to
@@ -62,28 +62,28 @@ quiescent state. Reader thread 3 was not accessing D1 when the delete
operation happened. So, reader thread 3 will not have a reference to the
deleted entry.
-It can be noted that, the critical sections for D2 is a quiescent state
-for D1. i.e. for a given data structure Dx, any point in the thread execution
-that does not reference Dx is a quiescent state.
+Note that the critical section for D2 is a quiescent state
+for D1 (i.e. for a given data structure Dx, any point in the thread execution
+that does not reference Dx is a quiescent state).
Since memory is not freed immediately, there might be a need for
-provisioning of additional memory, depending on the application requirements.
+provisioning additional memory depending on the application requirements.
Factors affecting the RCU mechanism
-----------------------------------
It is important to make sure that this library keeps the overhead of
-identifying the end of grace period and subsequent freeing of memory,
-to a minimum. The following paras explain how grace period and critical
+identifying the end of the grace period and subsequent freeing of memory
+to a minimum. The following paragraphs explain how grace period and critical
section affect this overhead.
-The writer has to poll the readers to identify the end of grace period.
+The writer has to poll the readers to identify the end of the grace period.
Polling introduces memory accesses and wastes CPU cycles. The memory
is not available for reuse during the grace period. Longer grace periods
-exasperate these conditions.
+exacerbate these conditions.
The length of the critical section and the number of reader threads
-is proportional to the duration of the grace period. Keeping the critical
+are proportional to the duration of the grace period. Keeping the critical
sections smaller will keep the grace period smaller. However, keeping the
critical sections smaller requires additional CPU cycles (due to additional
reporting) in the readers.
@@ -117,14 +117,14 @@ How to use this library
The application must allocate memory and initialize a QS variable.
Applications can call ``rte_rcu_qsbr_get_memsize()`` to calculate the size
-of memory to allocate. This API takes a maximum number of reader threads,
-using this variable, as a parameter.
+of memory to allocate. This API takes a maximum number of reader threads
+using this variable as a parameter.
Further, the application can initialize a QS variable using the API
``rte_rcu_qsbr_init()``.
Each reader thread is assumed to have a unique thread ID. Currently, the
-management of the thread ID (for example allocation/free) is left to the
+management of the thread ID (for example, allocation/free) is left to the
application. The thread ID should be in the range of 0 to
maximum number of threads provided while creating the QS variable.
The application could also use ``lcore_id`` as the thread ID where applicable.
@@ -132,14 +132,14 @@ The application could also use ``lcore_id`` as the thread ID where applicable.
The ``rte_rcu_qsbr_thread_register()`` API will register a reader thread
to report its quiescent state. This can be called from a reader thread.
A control plane thread can also call this on behalf of a reader thread.
-The reader thread must call ``rte_rcu_qsbr_thread_online()`` API to start
+The reader thread must call the ``rte_rcu_qsbr_thread_online()`` API to start
reporting its quiescent state.
Some of the use cases might require the reader threads to make blocking API
-calls (for example while using eventdev APIs). The writer thread should not
-wait for such reader threads to enter quiescent state. The reader thread must
-call ``rte_rcu_qsbr_thread_offline()`` API, before calling blocking APIs. It
-can call ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
+calls (for example, while using eventdev APIs). The writer thread should not
+wait for such reader threads to enter quiescent state. The reader thread must
+call the ``rte_rcu_qsbr_thread_offline()`` API before calling blocking APIs. It
+can call the ``rte_rcu_qsbr_thread_online()`` API once the blocking API call
returns.
The writer thread can trigger the reader threads to report their quiescent
@@ -147,13 +147,13 @@ state by calling the API ``rte_rcu_qsbr_start()``. It is possible for multiple
writer threads to query the quiescent state status simultaneously. Hence,
``rte_rcu_qsbr_start()`` returns a token to each caller.
-The writer thread must call ``rte_rcu_qsbr_check()`` API with the token to
-get the current quiescent state status. Option to block till all the reader
+The writer thread must call the ``rte_rcu_qsbr_check()`` API with the token to
+get the current quiescent state status. The option to block till all the reader
threads enter the quiescent state is provided. If this API indicates that
all the reader threads have entered the quiescent state, the application
can free the deleted entry.
-The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock free.
+The APIs ``rte_rcu_qsbr_start()`` and ``rte_rcu_qsbr_check()`` are lock-free.
Hence, they can be called concurrently from multiple writers even while
running as worker threads.
@@ -171,7 +171,7 @@ polls till all the readers enter the quiescent state or go offline. This API
does not allow the writer to do useful work while waiting and introduces
additional memory accesses due to continuous polling. However, the application
does not have to store the token or the reference to the deleted resource. The
-resource can be freed immediately after ``rte_rcu_qsbr_synchronize()`` API
+resource can be freed immediately after the ``rte_rcu_qsbr_synchronize()`` API
returns.
The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
@@ -179,9 +179,9 @@ The reader thread must call ``rte_rcu_qsbr_thread_offline()`` and
quiescent state. The ``rte_rcu_qsbr_check()`` API will not wait for this reader
thread to report the quiescent state status anymore.
-The reader threads should call ``rte_rcu_qsbr_quiescent()`` API to indicate that
+The reader threads should call the ``rte_rcu_qsbr_quiescent()`` API to indicate that
they entered a quiescent state. This API checks if a writer has triggered a
-quiescent state query and update the state accordingly.
+quiescent state query and updates the state accordingly.
The ``rte_rcu_qsbr_lock()`` and ``rte_rcu_qsbr_unlock()`` are empty functions.
However, these APIs can aid in debugging issues. One can mark the access to
@@ -199,42 +199,71 @@ the application. When a writer deletes an entry from a data structure, the write
#. Should check if the readers have completed a grace period and free the resources.
There are several APIs provided to help with this process. The writer
-can create a FIFO to store the references to deleted resources using ``rte_rcu_qsbr_dq_create()``.
+can create a FIFO to store the references to deleted resources using
+``rte_rcu_qsbr_dq_create()``.
The resources can be enqueued to this FIFO using ``rte_rcu_qsbr_dq_enqueue()``.
-If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue`` will reclaim the resources before enqueuing. It will also reclaim resources on regular basis to keep the FIFO from growing too large. If the writer runs out of resources, the writer can call ``rte_rcu_qsbr_dq_reclaim`` API to reclaim resources. ``rte_rcu_qsbr_dq_delete`` is provided to reclaim any remaining resources and free the FIFO while shutting down.
+If the FIFO is full, ``rte_rcu_qsbr_dq_enqueue()`` will reclaim the resources
+before enqueuing.
+It will also reclaim resources on a regular basis to keep the FIFO from growing
+too large. If the writer runs out of resources, the writer can call
+``rte_rcu_qsbr_dq_reclaim()`` API to reclaim resources.
+``rte_rcu_qsbr_dq_delete()`` is provided to reclaim any remaining resources and
+free the FIFO while shutting down.
-However, if this resource reclamation process were to be integrated in lock-free data structure libraries, it
-hides this complexity from the application and makes it easier for the application to adopt lock-free algorithms. The following paragraphs discuss how the reclamation process can be integrated in DPDK libraries.
+However, if this resource reclamation process were to be integrated in lock-free
+data structure libraries, it hides this complexity from the application and
+makes it easier for the application to adopt lock-free algorithms.
-In any DPDK application, the resource reclamation process using QSBR can be split into 4 parts:
+The following paragraphs discuss how the reclamation process can be integrated
+in DPDK libraries.
+
+In any DPDK application, the resource reclamation process using QSBR can be
+split into 4 parts:
#. Initialization
-#. Quiescent State Reporting
-#. Reclaiming Resources
+#. Quiescent state reporting
+#. Reclaiming resources
#. Shutdown
-The design proposed here assigns different parts of this process to client libraries and applications. The term 'client library' refers to lock-free data structure libraries such at rte_hash, rte_lpm etc. in DPDK or similar libraries outside of DPDK. The term 'application' refers to the packet processing application that makes use of DPDK such as L3 Forwarding example application, OVS, VPP etc..
-
-The application has to handle 'Initialization' and 'Quiescent State Reporting'. So,
-
-* the application has to create the RCU variable and register the reader threads to report their quiescent state.
-* the application has to register the same RCU variable with the client library.
-* reader threads in the application have to report the quiescent state. This allows for the application to control the length of the critical section/how frequently the application wants to report the quiescent state.
-
-The client library will handle 'Reclaiming Resources' part of the process. The
-client libraries will make use of the writer thread context to execute the memory
-reclamation algorithm. So,
-
-* client library should provide an API to register a RCU variable that it will use. It should call ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to deleted entries.
-* client library should use ``rte_rcu_qsbr_dq_enqueue`` to enqueue the deleted resources on the FIFO and start the grace period.
-* if the library runs out of resources while adding entries, it should call ``rte_rcu_qsbr_dq_reclaim`` to reclaim the resources and try the resource allocation again.
-
-The 'Shutdown' process needs to be shared between the application and the
-client library.
-
-* the application should make sure that the reader threads are not using the shared data structure, unregister the reader threads from the QSBR variable before calling the client library's shutdown function.
-
-* client library should call ``rte_rcu_qsbr_dq_delete`` to reclaim any remaining resources and free the FIFO.
+The design proposed here assigns different parts of this process to client
+libraries and applications. The term "client library" refers to lock-free data
+structure libraries such as ``rte_hash``, ``rte_lpm`` etc. in DPDK or similar
+libraries outside of DPDK. The term "application" refers to the packet
+processing application that makes use of DPDK such as L3 Forwarding example
+application, OVS, VPP etc.
+
+The application must handle "Initialization" and "Quiescent State Reporting".
+Therefore, the application:
+
+* Must create the RCU variable and register the reader threads to report their
+ quiescent state.
+* Must register the same RCU variable with the client library.
+* Note that reader threads in the application have to report the quiescent
+ state. This allows for the application to control the length of the critical
+ section/how frequently the application wants to report the quiescent state.
+
+The client library will handle the "Reclaiming Resources" part of the process.
+The client libraries will make use of the writer thread context to execute the
+memory reclamation algorithm. So, the client library should:
+
+* Provide an API to register an RCU variable that it will use. It should call
+ ``rte_rcu_qsbr_dq_create()`` to create the FIFO to store the references to
+ deleted entries.
+* Use ``rte_rcu_qsbr_dq_enqueue()`` to enqueue the deleted resources on the FIFO
+ and start the grace period.
+* Note that if the library runs out of resources while adding entries, it should
+ call ``rte_rcu_qsbr_dq_reclaim()`` to reclaim the resources and try the
+ resource allocation again.
+
+The "Shutdown" process needs to be shared between the application and the
+client library. Note that:
+
+* The application should make sure that the reader threads are not using the
+ shared data structure and unregister the reader threads from the QSBR variable
+ before calling the client library's shutdown function.
+
+* The client library should call ``rte_rcu_qsbr_dq_delete()`` to reclaim any
+ remaining resources and free the FIFO.
Integrating the resource reclamation with client libraries removes the burden from
the application and makes it easy to use lock-free algorithms.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 09/54] doc: correct grammar and formatting in ASan guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (7 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 08/54] doc: correct errors in RCU " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 10/54] doc: correct grammar and typos in bbdev guide Stephen Hemminger
` (44 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "Add below unit test code" to "Add the following unit test code"
- Change grammar in error descriptions to use proper subject-verb agreement
- Correct RST directive syntax from "Note::" to ".. Note::"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/asan.rst | 14 +++++++++-----
1 file changed, 9 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/asan.rst b/doc/guides/prog_guide/asan.rst
index 9a6c5a7e4b..b770ac9029 100644
--- a/doc/guides/prog_guide/asan.rst
+++ b/doc/guides/prog_guide/asan.rst
@@ -39,7 +39,7 @@ to define ASAN_SHADOW_OFFSET.
Example heap-buffer-overflow error
----------------------------------
-Add below unit test code in examples/helloworld/main.c::
+Add the following unit test code to examples/helloworld/main.c::
Add code to helloworld:
char *p = rte_zmalloc(NULL, 9, 0);
@@ -49,7 +49,9 @@ Add below unit test code in examples/helloworld/main.c::
}
p[9] = 'a';
-Above code will result in heap-buffer-overflow error if ASan is enabled, because apply 9 bytes of memory but access the tenth byte, detailed error log as below::
+This code will result in a heap-buffer-overflow error if ASan is enabled,
+because it allocates 9 bytes of memory but accesses the tenth byte.
+Detailed error log::
==369953==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x7fb17f465809 at pc 0x5652e6707b84 bp 0x7ffea70eea20 sp 0x7ffea70eea10 WRITE of size 1 at 0x7fb17f465809 thread T0
#0 0x5652e6707b83 in main ../examples/helloworld/main.c:47
@@ -59,7 +61,7 @@ Above code will result in heap-buffer-overflow error if ASan is enabled, because
Address 0x7fb17f465809 is a wild pointer.
SUMMARY: AddressSanitizer: heap-buffer-overflow ../examples/helloworld/main.c:47 in main
-Note::
+.. Note::
- Some of the features of ASan (for example, 'Display memory application location, currently
displayed as a wild pointer') are not currently supported with DPDK allocations.
@@ -67,7 +69,7 @@ Note::
Example use-after-free error
----------------------------
-Add below unit test code in examples/helloworld/main.c::
+Add the following unit test code to examples/helloworld/main.c::
Add code to helloworld:
char *p = rte_zmalloc(NULL, 9, 0);
@@ -78,7 +80,9 @@ Add below unit test code in examples/helloworld/main.c::
rte_free(p);
*p = 'a';
-Above code will result in use-after-free error if ASan is enabled, because apply 9 bytes of memory but access the first byte after release, detailed error log as below::
+This code will result in a use-after-free error if ASan is enabled,
+because it accesses the first byte of memory after it has been freed.
+Detailed error log::
==417048==ERROR: AddressSanitizer: heap-use-after-free on address 0x7fc83f465800 at pc 0x564308a39b89 bp 0x7ffc8c85bf50 sp 0x7ffc8c85bf40 WRITE of size 1 at 0x7fc83f465800 thread T0
#0 0x564308a39b88 in main ../examples/helloworld/main.c:48
--
2.51.0
* [PATCH v5 10/54] doc: correct grammar and typos in bbdev guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (8 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 09/54] doc: correct grammar and formatting in ASan guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 11/54] doc: correct grammar and formatting in bpf lib guide Stephen Hemminger
` (43 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "Each bbdev devices queue" to "Each bbdev device's queue"
- Change "it's" to "its" for possessive form
- Correct typo "ressampling" to "resampling"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/bbdev.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
index 8abb0e9c88..4edcdb967b 100644
--- a/doc/guides/prog_guide/bbdev.rst
+++ b/doc/guides/prog_guide/bbdev.rst
@@ -125,7 +125,7 @@ device, if supported by the driver. Should be called before starting the device.
Queues Configuration
~~~~~~~~~~~~~~~~~~~~
-Each bbdev devices queue is individually configured through the
+Each bbdev device's queue is individually configured through the
``rte_bbdev_queue_configure()`` API.
Each queue resources may be allocated on a specified socket.
@@ -168,7 +168,7 @@ Logical Cores, Memory and Queues Relationships
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The bbdev poll mode device driver library supports NUMA architecture, in which
-a processor's logical cores and interfaces utilize it's local memory. Therefore
+a processor's logical cores and interfaces utilize its local memory. Therefore
with baseband operations, the mbuf being operated on should be allocated from memory
pools created in the local memory. The buffers should, if possible, remain on
the local processor to obtain the best performance results and buffer
@@ -1232,7 +1232,7 @@ The FFT parameters are set out in the table below.
+-------------------------+--------------------------------------------------------------+
|fp16_exp_adjust |value added to FP16 exponent at conversion from INT16 |
+-------------------------+--------------------------------------------------------------+
-|freq_resample_mode |frequency ressampling mode (0:transparent, 1-2: resample) |
+|freq_resample_mode |frequency resampling mode (0:transparent, 1-2: resample) |
+-------------------------+--------------------------------------------------------------+
| output_depadded_size |output depadded size prior to frequency resampling |
+-------------------------+--------------------------------------------------------------+
--
2.51.0
* [PATCH v5 11/54] doc: correct grammar and formatting in bpf lib guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (9 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 10/54] doc: correct grammar and typos in bbdev guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 12/54] doc: correct grammar and typos in meson build guide Stephen Hemminger
` (42 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "an BPF" to "a BPF" (correct article usage)
- Capitalize "dpdk" to "DPDK"
- Change inconsistent bullet point spacing
The BPF library only supports v1 and v2 instructions.
Add a more complete description of that restriction.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/bpf_lib.rst | 46 +++++++++++++++++++++++++++----
1 file changed, 40 insertions(+), 6 deletions(-)
diff --git a/doc/guides/prog_guide/bpf_lib.rst b/doc/guides/prog_guide/bpf_lib.rst
index 8c820328b9..b915ed31b8 100644
--- a/doc/guides/prog_guide/bpf_lib.rst
+++ b/doc/guides/prog_guide/bpf_lib.rst
@@ -4,9 +4,9 @@
Berkeley Packet Filter (BPF) Library
====================================
-The DPDK provides an BPF library that gives the ability
+The DPDK provides a BPF library that gives the ability
to load and execute Enhanced Berkeley Packet Filter (eBPF) bytecode within
-user-space dpdk application.
+a user-space DPDK application.
It supports basic set of features from eBPF spec.
Please refer to the
@@ -19,13 +19,13 @@ The library API provides the following basic operations:
* Create a new BPF execution context and load user provided eBPF code into it.
-* Destroy an BPF execution context and its runtime structures and free the associated memory.
+* Destroy a BPF execution context and its runtime structures and free the associated memory.
-* Execute eBPF bytecode associated with provided input parameter.
+* Execute eBPF bytecode associated with provided input parameter.
-* Provide information about natively compiled code for given BPF context.
+* Provide information about natively compiled code for given BPF context.
-* Load BPF program from the ELF file and install callback to execute it on given ethdev port/queue.
+* Load BPF program from the ELF file and install callback to execute it on given ethdev port/queue.
Packet data load instructions
-----------------------------
@@ -64,3 +64,37 @@ Not currently supported eBPF features
- tail-pointer call
- eBPF MAP
- external function calls for 32-bit platforms
+
+Supported BPF instruction set
+-----------------------------
+
+The DPDK BPF library supports eBPF instruction set versions **v1** and **v2**.
+Instructions introduced in v3 and later (such as JMP32, extended atomics,
+signed division, and sign-extending loads) are **not supported**.
+
+When compiling BPF programs with clang, use ``-mcpu=v2`` or earlier to ensure
+compatibility:
+
+.. code-block:: console
+
+ clang -target bpf -mcpu=v2 -O2 -c filter.c -o filter.o
+
+.. warning::
+
+ LLVM 20 and later default to ``-mcpu=v3``, which generates JMP32
+ instructions that DPDK cannot execute. Always specify ``-mcpu=v2``
+ explicitly when compiling BPF programs for use with DPDK.
+
+The following instruction classes are **not supported**:
+
+ - ``BPF_JMP32`` (class 0x06) - 32-bit conditional jumps (v3)
+ - ``BPF_ATOMIC`` with ``BPF_FETCH`` - atomic fetch-and-op, XCHG, CMPXCHG (v3)
+ - ``BPF_SDIV`` / ``BPF_SMOD`` - signed division and modulo (v4)
+ - ``BPF_MOVSX`` - sign-extending register moves (v4)
+ - ``BPF_MEMSX`` - sign-extending memory loads (v4)
+ - ``BPF_JA`` with 32-bit offset (GOTOL) (v4)
+ - ``BPF_BSWAP`` - new byte-swap encoding (v4)
+
+If you encounter validation errors such as ``invalid opcode at pc: N``,
+verify that your BPF program was compiled with a compatible instruction
+set version.
--
2.51.0
* [PATCH v5 12/54] doc: correct grammar and typos in meson build guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (10 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 11/54] doc: correct grammar and formatting in bpf lib guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 13/54] doc: correct grammar and typos in cryptodev guide Stephen Hemminger
` (41 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Remove extra space after "libdpdk.pc"
- Replace awkward "can be got" with clearer alternatives
- Change typo "are passed used" to "are passed using"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/build-sdk-meson.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/build-sdk-meson.rst b/doc/guides/prog_guide/build-sdk-meson.rst
index fdb5d484fa..84ad64494a 100644
--- a/doc/guides/prog_guide/build-sdk-meson.rst
+++ b/doc/guides/prog_guide/build-sdk-meson.rst
@@ -16,23 +16,23 @@ following set of commands::
This will compile DPDK in the ``build`` subdirectory, and then install the
resulting libraries, drivers and header files onto the system - generally
-in /usr/local. A package-config file, ``libdpdk.pc``, for DPDK will also
+in /usr/local. A package-config file, ``libdpdk.pc``, for DPDK will also
be installed to allow ease of compiling and linking with applications.
After installation, to use DPDK, the necessary CFLAG and LDFLAG variables
-can be got from pkg-config::
+can be obtained from pkg-config::
pkg-config --cflags libdpdk
pkg-config --libs libdpdk
-More detail on each of these steps can be got from the following sections.
+More detail on each of these steps can be found in the following sections.
Getting the Tools
------------------
The ``meson`` tool is used to configure a DPDK build. On most Linux
-distributions this can be got using the local package management system,
+distributions this can be installed using the local package management system,
e.g. ``dnf install meson`` or ``apt-get install meson``. If meson is not
available as a suitable package, it can also be installed using the Python
3 ``pip`` tool, e.g. ``pip3 install meson``. Version 0.57 or later of meson is
@@ -78,7 +78,7 @@ examples to build, are DPDK-specific. To have a list of all options
available run ``meson configure`` in the build directory.
Examples of adjusting the defaults when doing initial meson configuration.
-Project-specific options are passed used -Doption=value::
+Project-specific options are passed using -Doption=value::
# build with warnings as errors
meson setup --werror werrorbuild
--
2.51.0
* [PATCH v5 13/54] doc: correct grammar and typos in cryptodev guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (11 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 12/54] doc: correct grammar and typos in meson build guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 14/54] doc: correct grammar and formatting in compressdev guide Stephen Hemminger
` (40 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change possessive forms: "device's queue pair", "queue pair's resources"
- Change typo "the same other different" to "the same or different"
- Change "PMDs supports" to "PMDs support"
- Change "Queues Pair" to "Queue Pair" in section title
- Change "library support NUMA" to "library supports NUMA"
- Change "Global devices features" to "Global device features"
- Remove extra space in "are defined"
- Change "Sessions typically stores" to "Sessions typically store"
- Change "a optimal" to "an optimal"
- Add required blank line before bullet list
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/cryptodev_lib.rst | 21 +++++++++++----------
1 file changed, 11 insertions(+), 10 deletions(-)
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index f0ee44eb54..dcf3323c9e 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -108,9 +108,9 @@ parameters for socket selection and number of queue pairs.
Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each Crypto devices queue pair is individually configured through the
+Each Crypto device's queue pair is individually configured through the
``rte_cryptodev_queue_pair_setup`` API.
-Each queue pairs resources may be allocated on a specified socket.
+Each queue pair's resources may be allocated on a specified socket.
.. code-block:: c
@@ -127,14 +127,14 @@ Each queue pairs resources may be allocated on a specified socket.
The field ``mp_session`` is used for creating temporary session to process
the crypto operations in the session-less mode.
-They can be the same other different mempools. Please note not all Cryptodev
-PMDs supports session-less mode.
+They can be the same or different mempools. Please note not all Cryptodev
+PMDs support session-less mode.
-Logical Cores, Memory and Queues Pair Relationships
+Logical Cores, Memory and Queue Pair Relationships
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The Crypto device Library as the Poll Mode Driver library support NUMA for when
+The Crypto device Library, like the Poll Mode Driver library, supports NUMA for when
a processor’s logical cores and interfaces utilize its local memory. Therefore
Crypto operations, and in the case of symmetric Crypto operations, the session
and the mbuf being operated on, should be allocated from memory pools created
@@ -161,7 +161,7 @@ Device Features and Capabilities
---------------------------------
Crypto devices define their functionality through two mechanisms, global device
-features and algorithm capabilities. Global devices features identify device
+features and algorithm capabilities. Global device features identify device
wide level features which are applicable to the whole device such as
the device having hardware acceleration or supporting symmetric and/or asymmetric
Crypto operations.
@@ -191,7 +191,7 @@ Device Operation Capabilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Crypto capabilities which identify particular algorithm which the Crypto PMD
-supports are defined by the operation type, the operation transform, the
+supports are defined by the operation type, the operation transform, the
transform identifier and then the particulars of the transform. For the full
scope of the Crypto capability see the definition of the structure in the
*DPDK API Reference*.
@@ -936,9 +936,9 @@ Session and Session Management
Sessions are used in asymmetric cryptographic processing to store the immutable
data defined in asymmetric cryptographic transform which is further used in the
-operation processing. Sessions typically stores information, such as, public
+operation processing. Sessions typically store information such as public
and private key information or domain params or prime modulus data i.e. immutable
-across data sets. Crypto sessions cache this immutable data in a optimal way for the
+across data sets. Crypto sessions cache this immutable data in an optimal way for the
underlying PMD and this allows further acceleration of the offload of Crypto workloads.
Like symmetric, the Crypto device framework provides APIs to allocate and initialize
@@ -993,6 +993,7 @@ public generation. Also, currently API does not support chaining of symmetric an
asymmetric crypto xforms.
Each xform defines specific asymmetric crypto algo. Currently supported are:
+
* RSA
* Modular operations (Exponentiation and Inverse)
* Diffie-Hellman
--
2.51.0
* [PATCH v5 14/54] doc: correct grammar and formatting in compressdev guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (12 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 13/54] doc: correct grammar and typos in cryptodev guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 15/54] doc: correct grammar in dmadev guide Stephen Hemminger
` (39 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Add missing spaces in "physical (hardware)" and "virtual (software)"
- Change "For e.g." to "For example,"
- Change "checksums operation" to "checksum operations"
- Remove double space in code comment
- Change possessive form "device's processed queue"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/compressdev.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/compressdev.rst b/doc/guides/prog_guide/compressdev.rst
index 2a59c434c1..a878010c4c 100644
--- a/doc/guides/prog_guide/compressdev.rst
+++ b/doc/guides/prog_guide/compressdev.rst
@@ -5,7 +5,7 @@ Compression Device Library
==========================
The compression framework provides a generic set of APIs to perform compression services
-as well as to query and configure compression devices both physical(hardware) and virtual(software)
+as well as to query and configure compression devices both physical (hardware) and virtual (software)
to perform those services. The framework currently only supports lossless compression schemes:
Deflate and LZS.
@@ -17,7 +17,7 @@ Device Creation
Physical compression devices are discovered during the bus probe of the EAL function
which is executed at DPDK initialization, based on their unique device identifier.
-For e.g. PCI devices can be identified using PCI BDF (bus/bridge, device, function).
+For example, PCI devices can be identified using PCI BDF (bus/bridge, device, function).
Specific physical compression devices, like other physical devices in DPDK can be
listed using the EAL command line options.
@@ -113,7 +113,7 @@ acceleration and CPU features. List of compression device features can be seen i
RTE_COMPDEV_FF_XXX macros.
The algorithm features are features which the device supports per-algorithm,
-such as a stateful compression/decompression, checksums operation etc.
+such as stateful compression/decompression, checksum operations, etc.
The list of algorithm features can be seen in the RTE_COMP_FF_XXX macros.
Capabilities
@@ -488,7 +488,7 @@ with each chunk size of CHUNK_LEN, would look like:
uint8_t cdev_id = rte_compressdev_get_dev_id(<PMD name>);
- /* configure the device. */
+ /* configure the device. */
if (rte_compressdev_configure(cdev_id, &conf) < 0)
rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);
@@ -592,7 +592,7 @@ on the device's hardware input queue, for virtual devices the processing of the
operations is usually completed during the enqueue call to the compression
device. The dequeue burst API will retrieve any processed operations available
from the queue pair on the compression device, from physical devices this is usually
-directly from the devices processed queue, and for virtual device's from an
+directly from the device's processed queue, and for virtual devices from an
``rte_ring`` where processed operations are placed after being processed on the
enqueue call.
--
2.51.0
* [PATCH v5 15/54] doc: correct grammar in dmadev guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (13 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 14/54] doc: correct grammar and formatting in compressdev guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 16/54] doc: correct grammar in efd guide Stephen Hemminger
` (38 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "defining generic API which support" to "defining generic APIs
which support"
- Change "can be got via" to "can be obtained via"
- Change "the necessary API are" to "the necessary APIs are"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/dmadev.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/prog_guide/dmadev.rst b/doc/guides/prog_guide/dmadev.rst
index df25b52461..ad0598a8c0 100644
--- a/doc/guides/prog_guide/dmadev.rst
+++ b/doc/guides/prog_guide/dmadev.rst
@@ -5,7 +5,7 @@ Direct Memory Access (DMA) Device Library
=========================================
The DMA library provides a DMA device framework for management and provisioning
-of hardware and software DMA poll mode drivers, defining generic API which
+of hardware and software DMA poll mode drivers, defining generic APIs which
support a number of different DMA operations.
@@ -144,7 +144,7 @@ The following example demonstrates the usage of enqueue and dequeue operations:
Querying Device Statistics
~~~~~~~~~~~~~~~~~~~~~~~~~~
-The statistics from a dmadev device can be got via the statistics functions,
+The statistics from a dmadev device can be obtained via the statistics functions,
i.e. ``rte_dma_stats_get()``. The statistics returned for each device instance are:
* ``submitted``: The number of operations submitted to the device.
@@ -192,7 +192,7 @@ DMA devices used for inter-domain data transfer can be categorized as follows:
- Class B: Only one endpoint requires a DMA device; the other does not.
- Class C: Other device types not currently classified.
-Currently the necessary API for Class A DMA devices are available
+Currently the necessary APIs for Class A DMA devices are available
for exchanging the handler details.
Devices can create or join access groups using token-based authentication,
ensuring that only authorized devices within the same group
--
2.51.0
* [PATCH v5 16/54] doc: correct grammar in efd guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (14 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 15/54] doc: correct grammar in dmadev guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 17/54] doc: correct grammar in EAL guide Stephen Hemminger
` (37 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "flow table need to store" to "flow table needs to store"
- Add missing verb "is performed" in brute force search sentence
- Change "it can used as" to "it can be used as"
- Change "keys for each group is stored" to "keys for each group are
stored"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/efd_lib.rst | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/doc/guides/prog_guide/efd_lib.rst b/doc/guides/prog_guide/efd_lib.rst
index 68404d5f33..71f58b00e0 100644
--- a/doc/guides/prog_guide/efd_lib.rst
+++ b/doc/guides/prog_guide/efd_lib.rst
@@ -100,7 +100,7 @@ are retrieved. The retrieved key(s) is matched with the input flow key
and if there is a match the value (target id) is returned.
The drawback of using a hash table for flow distribution/load balancing
-is the storage requirement, since the flow table need to store keys,
+is the storage requirement, since the flow table needs to store keys,
signatures and target values. This doesn't allow this scheme to scale to
millions of flow keys. Large tables will usually not fit in
the CPU cache, and hence, the lookup performance is degraded because of
@@ -142,7 +142,7 @@ impossible, as a result EFD, as shown in :numref:`figure_efd5`,
breaks the problem into smaller pieces (divide and conquer).
EFD divides the entire input key set into many small groups.
Each group consists of approximately 20-28 keys (a configurable parameter
-for the library), then, for each small group, a brute force search to find
+for the library), then, for each small group, a brute force search is performed to find
a hash function that produces the correct outputs for each key in the group.
It should be mentioned that, since the online lookup table for EFD
@@ -161,7 +161,7 @@ Example of EFD Library Usage
----------------------------
EFD can be used along the data path of many network functions and middleboxes.
-As previously mentioned, it can used as an index table for
+As previously mentioned, it can be used as an index table for
<key,value> pairs, meta-data for objects, a flow-level load balancer, etc.
:numref:`figure_efd6` shows an example of using EFD as a flow-level load
balancer, where flows are received at a front end server before being forwarded
@@ -301,7 +301,7 @@ stores two version of the <key,value> table:
- Offline Version (in memory): Only used for the insertion/update
operation, which is less frequent than the lookup operation. In the
- offline version the exact keys for each group is stored. When a new
+ offline version the exact keys for each group are stored. When a new
key is added, the hash function is updated that will satisfy the
value for the new key together with the all old keys already inserted
in this group.
--
2.51.0
* [PATCH v5 17/54] doc: correct grammar in EAL guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (15 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 16/54] doc: correct grammar in efd guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 18/54] doc: correct double space in FIB guide Stephen Hemminger
` (36 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "It consist of calls" to "It consists of calls"
- Add missing comma after "EAL" for proper sentence structure
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/env_abstraction_layer.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index d716895c1d..170c34576d 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -50,7 +50,7 @@ Initialization and Core Launching
Part of the initialization is done by the start function of glibc.
A check is also performed at initialization time to ensure that the micro architecture type chosen in the config file is supported by the CPU.
Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation).
-It consist of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).
+It consists of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).
.. _figure_linux_launch:
@@ -69,7 +69,7 @@ It consist of calls to the pthread library (more specifically, pthread_self(), p
Shutdown and Cleanup
~~~~~~~~~~~~~~~~~~~~
-During the initialization of EAL resources such as hugepage backed memory can be
+During the initialization of EAL, resources such as hugepage backed memory can be
allocated by core components. The memory allocated during ``rte_eal_init()``
can be released by calling the ``rte_eal_cleanup()`` function. Refer to the
API documentation for details.
--
2.51.0
* [PATCH v5 18/54] doc: correct double space in FIB guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (16 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 17/54] doc: correct grammar in EAL guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 19/54] doc: correct grammar in GRO guide Stephen Hemminger
` (35 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Remove extra space before "the" in API applicability note.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/fib_lib.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/fib_lib.rst b/doc/guides/prog_guide/fib_lib.rst
index a81da2d491..c357dbc4e5 100644
--- a/doc/guides/prog_guide/fib_lib.rst
+++ b/doc/guides/prog_guide/fib_lib.rst
@@ -12,7 +12,7 @@ the most typical of which is IPv4/IPv6 forwarding.
The API and implementation are very similar for IPv4 ``rte_fib`` API and IPv6 ``rte_fib6``
API, therefore only the ``rte_fib`` API will be discussed here.
- Everything within this document except for the size of the prefixes is applicable to the
+ Everything within this document except for the size of the prefixes is applicable to the
``rte_fib6`` API.
--
2.51.0
* [PATCH v5 19/54] doc: correct grammar in GRO guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (17 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 18/54] doc: correct double space in FIB guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 20/54] doc: correct grammar in GSO guide Stephen Hemminger
` (34 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "is in charge of process" to "is in charge of processing"
- Change "which process N packets" to "which processes N packets"
- Change "existed packets" to "existing packets"
- Change "require our algorithm is" to "require that our algorithm is"
- Change "If find" to "If it finds" and "If can't find" to "If it cannot
find"
- Change "can't merge" to "cannot merge"
- Change "doesn't support to process" to "doesn't support processing"
- Change "just supports to process the packet" to "only supports
processing packets"
- Change "before call GRO APIs" to "before calling GRO APIs"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
.../generic_receive_offload_lib.rst | 20 +++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/doc/guides/prog_guide/generic_receive_offload_lib.rst b/doc/guides/prog_guide/generic_receive_offload_lib.rst
index f2b5ff9eed..66e07d47f4 100644
--- a/doc/guides/prog_guide/generic_receive_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_receive_offload_lib.rst
@@ -17,7 +17,7 @@ Overview
--------
In the GRO library, there are many GRO types which are defined by packet
-types. One GRO type is in charge of process one kind of packets. For
+types. One GRO type is in charge of processing one kind of packets. For
example, TCP/IPv4 GRO processes TCP/IPv4 packets.
Each GRO type has a reassembly function, which defines own algorithm and
@@ -47,7 +47,7 @@ Lightweight Mode API
~~~~~~~~~~~~~~~~~~~~
The lightweight mode only has one function ``rte_gro_reassemble_burst()``,
-which process N packets at a time. Using the lightweight mode API to
+which processes N packets at a time. Using the lightweight mode API to
merge packets is very simple. Calling ``rte_gro_reassemble_burst()`` is
enough. The GROed packets are returned to applications as soon as it
finishes.
@@ -69,7 +69,7 @@ Secondly, applications use ``rte_gro_reassemble()`` to merge packets.
If input packets have invalid parameters, ``rte_gro_reassemble()``
returns them to applications. For example, packets of unsupported GRO
types or TCP SYN packets are returned. Otherwise, the input packets are
-either merged with the existed packets in the tables or inserted into the
+either merged with the existing packets in the tables or inserted into the
tables. Finally, applications use ``rte_gro_timeout_flush()`` to flush
packets from the tables, when they want to get the GROed packets.
@@ -98,7 +98,7 @@ challenges in the algorithm design:
- packet reordering makes it hard to merge packets. For example, Linux
GRO fails to merge packets when encounters packet reordering.
-The above two challenges require our algorithm is:
+The above two challenges require that our algorithm is:
- lightweight enough to scale fast networking speed
@@ -114,13 +114,13 @@ key-based algorithm. Packets are classified into "flows" by some header
fields (we call them as "key"). To process an input packet, the algorithm
searches for a matched "flow" (i.e., the same value of key) for the
packet first, then checks all packets in the "flow" and tries to find a
-"neighbor" for it. If find a "neighbor", merge the two packets together.
-If can't find a "neighbor", store the packet into its "flow". If can't
+"neighbor" for it. If it finds a "neighbor", merge the two packets together.
+If it cannot find a "neighbor", store the packet into its "flow". If it cannot
find a matched "flow", insert a new "flow" and store the packet into the
"flow".
.. note::
- Packets in the same "flow" that can't merge are always caused
+ Packets in the same "flow" that cannot merge are always caused
by packet reordering.
The key-based algorithm has two characters:
@@ -199,15 +199,15 @@ GRO Library Limitations
- GRO library uses MBUF->l2_len/l3_len/l4_len/outer_l2_len/
outer_l3_len/packet_type to get protocol headers for the
input packet, rather than parsing the packet header. Therefore,
- before call GRO APIs to merge packets, user applications
+ before calling GRO APIs to merge packets, user applications
must set MBUF->l2_len/l3_len/l4_len/outer_l2_len/outer_l3_len/
packet_type to the same values as the protocol headers of the
packet.
-- GRO library doesn't support to process the packets with IPv4
+- GRO library doesn't support processing packets with IPv4
Options or VLAN tagged.
-- GRO library just supports to process the packet organized
+- GRO library only supports processing packets organized
in a single MBUF. If the input packet consists of multiple
MBUFs (i.e. chained MBUFs), GRO reassembly behaviors are
unknown.
--
2.51.0
* [PATCH v5 20/54] doc: correct grammar in GSO guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (18 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 19/54] doc: correct grammar in GRO guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 21/54] doc: correct typos and grammar in graph guide Stephen Hemminger
` (33 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct "GSO library use" to "GSO library uses".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/generic_segmentation_offload_lib.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 618297044b..90ef80765b 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -206,7 +206,7 @@ To segment an outgoing packet, an application must:
#. Set the appropriate ol_flags in the mbuf.
- - The GSO library use the value of an mbuf's ``ol_flags`` attribute to
+ - The GSO library uses the value of an mbuf's ``ol_flags`` attribute to
determine how a packet should be segmented. It is the application's
responsibility to ensure that these flags are set.
--
2.51.0
* [PATCH v5 21/54] doc: correct typos and grammar in graph guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (19 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 20/54] doc: correct grammar in GSO guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 22/54] doc: correct grammar in hash guide Stephen Hemminger
` (32 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Remove trailing hyphen from bullet point
- Change "detached to a graph" to "detached from a graph"
- Change typo "edege" to "edge" (two occurrences)
- Change typo "feilds" to "fields"
- Change "attach" to "attaches" and "create" to "creates"
- Change "with custom port with and" to "with custom port and"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/graph_lib.rst | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/doc/guides/prog_guide/graph_lib.rst b/doc/guides/prog_guide/graph_lib.rst
index 8409e7666e..8e3fd227d3 100644
--- a/doc/guides/prog_guide/graph_lib.rst
+++ b/doc/guides/prog_guide/graph_lib.rst
@@ -37,7 +37,7 @@ Advantages of Graph architecture
caches misses.
- Exploits the probability that most packets will follow the same nodes in the
graph.
-- Allow SIMD instructions for packet processing of the node.-
+- Allow SIMD instructions for packet processing of the node.
- The modular scheme allows having reusable nodes for the consumers.
- The modular scheme allows us to abstract the vendor HW specific
optimizations as a node.
@@ -92,7 +92,7 @@ fini():
^^^^^^^
The callback function will be invoked by ``rte_graph_destroy()`` on when a
-node gets detached to a graph.
+node gets detached from a graph.
Node name:
^^^^^^^^^^
@@ -1095,17 +1095,17 @@ This node is an intermediate node that does udp destination port lookup for
the received ipv4 packets and the result determines each packets next node.
User registers a new node ``udp4_input`` into graph library during initialization
-and attach user specified node as edege to this node using
-``rte_node_udp4_usr_node_add()``, and create empty hash table with destination
-port and node id as its feilds.
+and attaches user specified node as edge to this node using
+``rte_node_udp4_usr_node_add()``, and creates empty hash table with destination
+port and node id as its fields.
-After successful addition of user node as edege, edge id is returned to the user.
+After successful addition of user node as edge, edge id is returned to the user.
User would register ``ip4_lookup`` table with specified ip address and 32 bit as mask
for ip filtration using api ``rte_node_ip4_route_add()``.
-After graph is created user would update hash table with custom port with
-and previously obtained edge id using API ``rte_node_udp4_dst_port_add()``.
+After graph is created user would update hash table with custom port and
+previously obtained edge id using API ``rte_node_udp4_dst_port_add()``.
When packet is received lpm look up is performed if ip is matched the packet
is handed over to ip4_local node, then packet is verified for udp proto and
--
2.51.0
* [PATCH v5 22/54] doc: correct grammar in hash guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (20 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 21/54] doc: correct typos and grammar in graph guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 23/54] doc: correct grammar and typos in IP fragment guide Stephen Hemminger
` (31 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "a 8-byte integer" to "an 8-byte integer"
- Change "the later will increase" to "the latter will increase"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/hash_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/hash_lib.rst b/doc/guides/prog_guide/hash_lib.rst
index fdbb99fd5e..bc257b4ffb 100644
--- a/doc/guides/prog_guide/hash_lib.rst
+++ b/doc/guides/prog_guide/hash_lib.rst
@@ -43,7 +43,7 @@ Apart from the basic methods explained above, the Hash Library API provides a fe
the user to perform these operations faster, as the hash value is already computed.
* Add / lookup entry with key and data: A data is provided as input for add. Add allows the user to store
- not only the key, but also the data which may be either a 8-byte integer or a pointer to external data (if data size is more than 8 bytes).
+ not only the key, but also the data which may be either an 8-byte integer or a pointer to external data (if data size is more than 8 bytes).
* Combination of the two options above: User can provide key, precomputed hash, and data.
@@ -201,7 +201,7 @@ if there is a new entry to be added which primary location coincides with their
being pushed to their alternative location.
Therefore, as user adds more entries to the hash table, distribution of the hash values
in the buckets will change, being most of them in their primary location and a few in
-their secondary location, which the later will increase, as table gets busier.
+their secondary location, which the latter will increase, as table gets busier.
This information is quite useful, as performance may be lower as more entries
are evicted to their secondary location.
--
2.51.0
* [PATCH v5 23/54] doc: correct grammar and typos in IP fragment guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (21 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 22/54] doc: correct grammar in hash guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 24/54] doc: correct double spaces in IPsec guide Stephen Hemminger
` (30 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "It's data field" to "Its data field" (possessive not
contraction)
- Change typo "filed" to "field"
- Change "mechanism have to be" to "mechanism has to be"
- Change "mbuf's to be allocated" to "mbufs to be allocated"
- Change misplaced period in function name reference
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/ip_fragment_reassembly_lib.rst | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
index b14289eb73..aa05ce8a46 100644
--- a/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
+++ b/doc/guides/prog_guide/ip_fragment_reassembly_lib.rst
@@ -18,12 +18,12 @@ For each fragment two new mbufs are created:
* Direct mbuf -- mbuf that will contain L3 header of the new fragment.
* Indirect mbuf -- mbuf that is attached to the mbuf with the original packet.
- It's data field points to the start of the original packets data plus fragment offset.
+ Its data field points to the start of the original packets data plus fragment offset.
Then L3 header is copied from the original mbuf into the 'direct' mbuf and updated to reflect new fragmented status.
Note that for IPv4, header checksum is not recalculated and is set to zero.
-Finally 'direct' and 'indirect' mbufs for each fragment are linked together via mbuf's next filed to compose a packet for the new fragment.
+Finally 'direct' and 'indirect' mbufs for each fragment are linked together via mbuf's next field to compose a packet for the new fragment.
The caller has an ability to explicitly specify which mempools should be used to allocate 'direct' and 'indirect' mbufs from.
@@ -41,7 +41,7 @@ Each IP packet is uniquely identified by triple <Source IP address>, <Destinatio
Note that all update/lookup operations on Fragment Table are not thread safe.
So if different execution contexts (threads/processes) will access the same table simultaneously,
-then some external syncing mechanism have to be provided.
+then some external syncing mechanism has to be provided.
Each table entry can hold information about packets consisting of up to RTE_LIBRTE_IP_FRAG_MAX (by default: 8) fragments.
@@ -62,15 +62,15 @@ instead of reinserting existing keys into alternative locations, ip_frag_tbl_add
Also, entries that resides in the table longer then <max_cycles> are considered as invalid,
and could be removed/replaced by the new ones.
-Note that reassembly demands a lot of mbuf's to be allocated.
+Note that reassembly demands a lot of mbufs to be allocated.
At any given time up to (2 \* bucket_entries \* RTE_LIBRTE_IP_FRAG_MAX \* <maximum number of mbufs per packet>)
can be stored inside Fragment Table waiting for remaining fragments.
Packet Reassembly
~~~~~~~~~~~~~~~~~
-Fragmented packets processing and reassembly is done by the rte_ipv4_frag_reassemble_packet()/rte_ipv6_frag_reassemble_packet.
-Functions. They either return a pointer to valid mbuf that contains reassembled packet,
+Fragmented packets processing and reassembly is done by the rte_ipv4_frag_reassemble_packet()/rte_ipv6_frag_reassemble_packet()
+functions. They either return a pointer to valid mbuf that contains reassembled packet,
or NULL (if the packet can't be reassembled for some reason).
These functions are responsible for:
--
2.51.0
* [PATCH v5 24/54] doc: correct double spaces in IPsec guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (22 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 23/54] doc: correct grammar and typos in IP fragment guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 25/54] doc: correct grammar in lcore variables guide Stephen Hemminger
` (29 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct double spaces in "struct rte_mbuf" (two occurrences).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/ipsec_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/ipsec_lib.rst b/doc/guides/prog_guide/ipsec_lib.rst
index 458a82828c..bc3bc5525b 100644
--- a/doc/guides/prog_guide/ipsec_lib.rst
+++ b/doc/guides/prog_guide/ipsec_lib.rst
@@ -109,7 +109,7 @@ In that mode the library functions perform
- generate SQN and IV
- add outer IP header (tunnel mode) / update IP header (transport mode)
- add ESP header and trailer, padding and IV data
- - update *ol_flags* inside *struct rte_mbuf* to indicate that
+ - update *ol_flags* inside *struct rte_mbuf* to indicate that
inline-crypto processing has to be performed by HW on this packet
- invoke *rte_security* device specific *set_pkt_metadata()* to associate
security device specific data with the packet
@@ -126,7 +126,7 @@ In that mode the library functions perform
* for outbound packets:
- - update *ol_flags* inside *struct rte_mbuf* to indicate that
+ - update *ol_flags* inside *struct rte_mbuf* to indicate that
inline-crypto processing has to be performed by HW on this packet
- invoke *rte_security* device specific *set_pkt_metadata()* to associate
security device specific data with the packet
--
2.51.0
* [PATCH v5 25/54] doc: correct grammar in lcore variables guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (23 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 24/54] doc: correct double spaces in IPsec guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 26/54] doc: correct typo in link bonding guide Stephen Hemminger
` (28 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct "Non-owner accesses results in" to "Non-owner accesses result in".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/lcore_var.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/lcore_var.rst b/doc/guides/prog_guide/lcore_var.rst
index 3d9384bc33..2605dcf0d7 100644
--- a/doc/guides/prog_guide/lcore_var.rst
+++ b/doc/guides/prog_guide/lcore_var.rst
@@ -75,7 +75,7 @@ but it should only be *frequently* read from or written to by the *owner*.
A thread is considered the owner of a particular lcore variable value instance
if it has the lcore id associated with that instance.
-Non-owner accesses results in *false sharing*.
+Non-owner accesses result in *false sharing*.
As long as non-owner accesses are rare,
they will have only a very slight effect on performance.
This property of lcore variables memory organization is intentional.
--
2.51.0
* [PATCH v5 26/54] doc: correct typo in link bonding guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (24 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 25/54] doc: correct grammar in lcore variables guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 27/54] doc: correct grammar in LTO guide Stephen Hemminger
` (27 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct "down stream" to "downstream".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
index 4c1d69175e..8e5a651e39 100644
--- a/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
+++ b/doc/guides/prog_guide/link_bonding_poll_mode_drv_lib.rst
@@ -48,7 +48,7 @@ Currently the Link Bonding PMD library supports following modes of operation:
packets in sequential order from the first available member device through
the last. Packets are bulk dequeued from devices then serviced in a
round-robin manner. This mode does not guarantee in order reception of
- packets and down stream should be able to handle out of order packets.
+ packets and downstream should be able to handle out of order packets.
* **Active Backup (Mode 1):**
--
2.51.0
* [PATCH v5 27/54] doc: correct grammar in LTO guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (25 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 26/54] doc: correct typo in link bonding guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 28/54] doc: correct grammar in LPM guide Stephen Hemminger
` (26 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct "compiler have to support" to "compiler has to support".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/lto.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/lto.rst b/doc/guides/prog_guide/lto.rst
index 5791e35494..e6628b11a8 100644
--- a/doc/guides/prog_guide/lto.rst
+++ b/doc/guides/prog_guide/lto.rst
@@ -8,7 +8,7 @@ The DPDK supports compilation with link time optimization turned on.
This depends obviously on the ability of the compiler to do "whole
program" optimization at link time and is available only for compilers
that support that feature.
-To be more specific, compiler (in addition to performing LTO) have to
+To be more specific, compiler (in addition to performing LTO) has to
support creation of ELF objects containing both normal code and internal
representation (called fat-lto-objects in gcc).
This is required since during build some code is generated by parsing
--
2.51.0
* [PATCH v5 28/54] doc: correct grammar in LPM guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (26 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 27/54] doc: correct grammar in LTO guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 29/54] doc: correct grammar and typo in LPM6 guide Stephen Hemminger
` (25 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct "search process have finished" to "search process has finished".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/lpm_lib.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/lpm_lib.rst b/doc/guides/prog_guide/lpm_lib.rst
index 316ac835a4..483173f5cd 100644
--- a/doc/guides/prog_guide/lpm_lib.rst
+++ b/doc/guides/prog_guide/lpm_lib.rst
@@ -82,7 +82,7 @@ An entry in tbl24 contains the following fields:
The first field can either contain a number indicating the tbl8 in which the lookup process should continue
or the next hop itself if the longest prefix match has already been found.
The two flags are used to determine whether the entry is valid or not and
-whether the search process have finished or not respectively.
+whether the search process has finished or not respectively.
The depth or length of the rule is the number of bits of the rule that is stored in a specific entry.
An entry in a tbl8 contains the following fields:
--
2.51.0
* [PATCH v5 29/54] doc: correct grammar and typo in LPM6 guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (27 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 28/54] doc: correct grammar in LPM guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 30/54] doc: correct grammar in introduction Stephen Hemminger
` (24 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct several documentation issues:
- Change "IP address be looked up" to "IP address to be looked up"
- Change "search process have finished" to "search process has finished"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/lpm6_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/lpm6_lib.rst b/doc/guides/prog_guide/lpm6_lib.rst
index 2c3eb10857..41b9d8a837 100644
--- a/doc/guides/prog_guide/lpm6_lib.rst
+++ b/doc/guides/prog_guide/lpm6_lib.rst
@@ -68,7 +68,7 @@ The main data structure is built using the following elements:
* A number of tables, configurable by the user through the API, with 2^8 entries
-The first table, called tbl24, is indexed using the first 24 bits of the IP address be looked up,
+The first table, called tbl24, is indexed using the first 24 bits of the IP address to be looked up,
while the rest of the tables, called tbl8s,
are indexed using the rest of the bytes of the IP address, in chunks of 8 bits.
This means that depending on the outcome of trying to match the IP address of an incoming packet to the rule stored in the tbl24
@@ -103,7 +103,7 @@ The first field can either contain a number indicating the tbl8 in which the loo
or the next hop itself if the longest prefix match has already been found.
The depth or length of the rule is the number of bits of the rule that is stored in a specific entry.
The flags are used to determine whether the entry/table is valid or not
-and whether the search process have finished or not respectively.
+and whether the search process has finished or not respectively.
Both types of tables share the same structure.
--
2.51.0
* [PATCH v5 30/54] doc: correct grammar in introduction
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (28 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 29/54] doc: correct grammar and typo in LPM6 guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 31/54] doc: correct grammar in mbuf library guide Stephen Hemminger
` (23 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct missing subject in sentence describing release notes content.
Correct inconsistent capitalization of Linux in the documentation
roadmap section.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/intro.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/intro.rst b/doc/guides/prog_guide/intro.rst
index 44877bd3e3..5055574206 100644
--- a/doc/guides/prog_guide/intro.rst
+++ b/doc/guides/prog_guide/intro.rst
@@ -19,7 +19,7 @@ The following is a list of DPDK documents in the suggested reading order:
* **Release Notes** : Provides release-specific information, including supported features,
limitations, fixed issues, known issues and so on.
- Also, provides the answers to frequently asked questions in FAQ format.
+ It also provides the answers to frequently asked questions in FAQ format.
* **Getting Started Guide** : Describes how to install and configure the DPDK software;
designed to get users up and running quickly with the software.
@@ -31,7 +31,7 @@ The following is a list of DPDK documents in the suggested reading order:
* **Programmer's Guide** (this document): Describes:
* The software architecture and how to use it (through examples),
- specifically in a Linux* application (linux) environment
+ specifically in a Linux* application (Linux) environment
* The content of the DPDK, the build system
(including the commands that can be used in the root DPDK to build the development kit and an application)
--
2.51.0
* [PATCH v5 31/54] doc: correct grammar in mbuf library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (29 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 30/54] doc: correct grammar in introduction Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 32/54] doc: correct grammar in membership " Stephen Hemminger
` (22 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various grammar and style issues in the mbuf library
documentation:
- make Mbuf capitalization consistent with title
- fix garbled sentence about embedding metadata in buffers
- correct awkward phrasing for newly-allocated mbuf description
- use correct article "an mbuf" before vowel sound
- fix redundant "prepend data before data" phrasing
- add missing punctuation and normalize list formatting
- correct idiom "allows to offload" to "allows offloading"
- fix subject-verb agreement for "information finds"
- fix plural "applications" in use cases section
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/mbuf_lib.rst | 26 ++++++++++++++------------
1 file changed, 14 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 382bfbdca4..ee88b13766 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -4,7 +4,7 @@
Packet (Mbuf) Library
=====================
-The Packet (MBuf) library provides the ability to allocate and free buffers (mbufs)
+The Packet (Mbuf) library provides the ability to allocate and free buffers (mbufs)
that may be used by the DPDK application to store message buffers.
The message buffers are stored in a mempool, using the :doc:`mempool_lib`.
@@ -19,7 +19,7 @@ Design of Packet Buffers
For the storage of the packet data (including protocol headers), two approaches were considered:
-#. Embed metadata within a single memory buffer the structure followed by a fixed size area for the packet data.
+#. Embed metadata within a single memory buffer, with the structure followed by a fixed-size area for the packet data.
#. Use separate memory buffers for the metadata structure and for the packet data.
@@ -79,10 +79,10 @@ Allocating and Freeing mbufs
----------------------------
Allocating a new mbuf requires the user to specify the mempool from which the mbuf should be taken.
-For any newly-allocated mbuf, it contains one segment, with a length of 0.
+Any newly-allocated mbuf contains one segment, with a length of 0.
The offset to data is initialized to have some bytes of headroom in the buffer (RTE_PKTMBUF_HEADROOM).
-Freeing a mbuf means returning it into its original mempool.
+Freeing an mbuf means returning it into its original mempool.
The content of an mbuf is not modified when it is stored in a pool (as a free mbuf).
Fields initialized by the constructor do not need to be re-initialized at mbuf allocation.
@@ -93,17 +93,19 @@ Manipulating mbufs
This library provides some functions for manipulating the data in a packet mbuf. For instance:
- * Get data length
+ * Get data length
- * Get a pointer to the start of data
+ * Get a pointer to the start of data
- * Prepend data before data
+ * Prepend data before the payload
- * Append data after data
+ * Append data after the payload
* Remove data at the beginning of the buffer (rte_pktmbuf_adj())
- * Remove data at the end of the buffer (rte_pktmbuf_trim()) Refer to the *DPDK API Reference* for details.
+ * Remove data at the end of the buffer (rte_pktmbuf_trim())
+
+Refer to the *DPDK API Reference* for details.
.. _mbuf_meta:
@@ -123,7 +125,7 @@ timestamp mechanism, the VLAN tagging and the IP checksum computation.
On TX side, it is also possible for an application to delegate some
processing to the hardware if it supports it. For instance, the
-RTE_MBUF_F_TX_IP_CKSUM flag allows to offload the computation of the IPv4
+RTE_MBUF_F_TX_IP_CKSUM flag allows offloading the computation of the IPv4
checksum.
The following examples explain how to configure different TX offloads on
@@ -212,7 +214,7 @@ Dynamic fields and flags
The size of the mbuf is constrained and limited;
while the amount of metadata to save for each packet is quite unlimited.
-The most basic networking information already find their place
+The most basic networking information already finds its place
in the existing mbuf fields and flags.
If new features need to be added, the new fields and flags should fit
@@ -286,4 +288,4 @@ by the script ``dpdk-mbuf-history-parser.py``.
Use Cases
---------
-All networking application should use mbufs to transport network packets.
+All networking applications should use mbufs to transport network packets.
--
2.51.0
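The mbuf patch above corrects the wording around headroom, prepend/append, and `rte_pktmbuf_adj()`/`rte_pktmbuf_trim()`. A minimal plain-C sketch of that single-buffer design (metadata followed by a fixed-size data area, payload offset starting at the headroom) — illustrative only, not DPDK's `struct rte_mbuf`; all `toy_*` names and the `BUF_LEN`/`HEADROOM` sizes are hypothetical:

```c
#include <assert.h>
#include <stddef.h>

#define BUF_LEN  128
#define HEADROOM 32   /* stands in for RTE_PKTMBUF_HEADROOM */

struct toy_mbuf {
    unsigned data_off;   /* offset of payload start within buf */
    unsigned data_len;   /* payload length */
    char buf[BUF_LEN];
};

/* Newly allocated mbuf: one segment, length 0, offset at the headroom. */
static void toy_mbuf_reset(struct toy_mbuf *m)
{
    m->data_off = HEADROOM;
    m->data_len = 0;
}

/* Prepend data before the payload: grow into the headroom. */
static char *toy_prepend(struct toy_mbuf *m, unsigned len)
{
    if (len > m->data_off)
        return NULL;
    m->data_off -= len;
    m->data_len += len;
    return m->buf + m->data_off;
}

/* Append data after the payload: grow at the tail. */
static char *toy_append(struct toy_mbuf *m, unsigned len)
{
    char *tail = m->buf + m->data_off + m->data_len;
    if (m->data_off + m->data_len + len > BUF_LEN)
        return NULL;
    m->data_len += len;
    return tail;
}

/* Remove data at the beginning (like rte_pktmbuf_adj()). */
static int toy_adj(struct toy_mbuf *m, unsigned len)
{
    if (len > m->data_len)
        return -1;
    m->data_off += len;
    m->data_len -= len;
    return 0;
}

/* Remove data at the end (like rte_pktmbuf_trim()). */
static int toy_trim(struct toy_mbuf *m, unsigned len)
{
    if (len > m->data_len)
        return -1;
    m->data_len -= len;
    return 0;
}
```

The headroom is what makes prepending cheap: adding a protocol header is just moving `data_off` back, with no copy of the payload.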
* [PATCH v5 32/54] doc: correct grammar in membership library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (30 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 31/54] doc: correct grammar in mbuf library guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 33/54] doc: correct errors in mempool " Stephen Hemminger
` (21 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various grammar and style issues in the membership library
documentation:
- use "that" instead of "who" for applications
- fix subject-verb agreement in multiple places
- use consistent capitalization for Sub-figure references
- add missing articles before nouns
- correct grammar for element insertion description
- add missing relative pronouns for clarity
- fix "two time" to "two times the"
- correct function names to use rte_member_* pattern consistently
(rte_membership_lookup_multi_bulk -> rte_member_lookup_multi_bulk,
rte_membership_delete -> rte_member_delete)
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/member_lib.rst | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/doc/guides/prog_guide/member_lib.rst b/doc/guides/prog_guide/member_lib.rst
index d2f76de35c..fde7204ad4 100644
--- a/doc/guides/prog_guide/member_lib.rst
+++ b/doc/guides/prog_guide/member_lib.rst
@@ -31,7 +31,7 @@ is a fundamental data aggregation component that can be used in many network
(and other) applications. It is a crucial structure to address performance and
scalability issues of diverse network applications including overlay networks,
data-centric networks, flow table summaries, network statistics and
-traffic monitoring. A set-summary is useful for applications who need to
+traffic monitoring. A set-summary is useful for applications that need to
include a list of elements while a complete list requires too much space
and/or too much processing cost. In these situations, the set-summary works as
a lossy hash-based representation of a set of members. It can dramatically
@@ -47,7 +47,7 @@ probability.
There are various usages for a Membership Library in a very
large set of applications and workloads. Interested readers can refer to
[Member-survey] for a survey of possible networking usages. The above figure
-provide a small set of examples of using the Membership Library:
+provides a small set of examples of using the Membership Library:
* Sub-figure (a)
depicts a distributed web cache architecture where a collection of proxies
@@ -66,7 +66,7 @@ provide a small set of examples of using the Membership Library:
whether its id is a member of the set of visited nodes, and if it is, then a
routing loop is detected.
-* Sub-Figure (c) presents another usage of the Membership
+* Sub-figure (c) presents another usage of the Membership
Library to load-balance flows to worker threads with in-order guarantee where a
set-summary is used to query if a packet belongs to an existing flow or a new
flow. Packets belonging to a new flow are forwarded to the current least loaded
@@ -79,7 +79,7 @@ provide a small set of examples of using the Membership Library:
element of a set against the other elements in a different set, a join is done
on the summaries since they can efficiently encode members of a given set.
-Membership Library is a configurable library that is optimized to cover set
+The Membership Library is a configurable library that is optimized to cover set
membership functionality for both a single set and multi-set scenarios. Two set-summary
schemes are presented including (a) vector of Bloom Filters and (b) Hash-Table based
set-summary schemes with and without false negative probability.
@@ -99,8 +99,8 @@ The BF is a method for representing a set of ``n`` elements (for example flow ke
in network applications domain) to support membership queries. The idea of BF is
to allocate a bit-vector ``v`` with ``m`` bits, which are initially all set to 0. Then
it chooses ``k`` independent hash functions ``h1``, ``h2``, ... ``hk`` with hash values range from
-``0`` to ``m-1`` to perform hashing calculations on each element to be inserted. Every time when an
-element ``X`` being inserted into the set, the bits at positions ``h1(X)``, ``h2(X)``, ...
+``0`` to ``m-1`` to perform hashing calculations on each element to be inserted. Every time an
+element ``X`` is inserted into the set, the bits at positions ``h1(X)``, ``h2(X)``, ...
``hk(X)`` in ``v`` are set to 1 (any particular bit might be set to 1 multiple times
for multiple different inserted elements). Given a query for any element ``Y``, the
bits at positions ``h1(Y)``, ``h2(Y)``, ... ``hk(Y)`` are checked. If any of them is 0,
@@ -126,9 +126,9 @@ lookup with element comparison.
Detecting Routing Loops Using BF
-BF is used for applications that need only one set, and the
+A BF is used for applications that need only one set, and the
membership of elements is checked against the BF. The example discussed
-in the above figure is one example of potential applications that uses only one
+in the above figure is one example of potential applications that use only one
set to capture the node IDs that have been visited so far by the packet. Each
node will then check this embedded BF in the packet header for its own id, and
if the BF indicates that the current node is definitely not in the set then a
@@ -280,7 +280,7 @@ The general input arguments used when creating the set-summary should include ``
which is the name of the created set-summary, *type* which is one of the types
supported by the library (e.g. ``RTE_MEMBER_TYPE_HT`` for HTSS or ``RTE_MEMBER_TYPE_VBF`` for vBF), and ``key_len``
which is the length of the element/key. There are other parameters
-are only used for certain type of set-summary, or which have a slightly different meaning for different types of set-summary.
+that are only used for a certain type of set-summary, or that have a slightly different meaning for different types of set-summary.
For example, ``num_keys`` parameter means the maximum number of entries for Hash table based set-summary.
However, for bloom filter, this value means the expected number of keys that could be
inserted into the bloom filter(s). The value is used to calculate the size of each
@@ -293,7 +293,7 @@ set-summary. For HTSS, another parameter ``is_cache`` is used to indicate
if this set-summary is a cache (i.e. with false negative probability) or not.
For vBF, extra parameters are needed. For example, ``num_set`` is the number of
sets needed to initialize the vector bloom filters. This number is equal to the
-number of bloom filters will be created.
+number of bloom filters that will be created.
``false_pos_rate`` is the false positive rate. num_keys and false_pos_rate will be used to determine
the number of hash functions and the bloom filter size.
@@ -346,11 +346,11 @@ element/key that needs to be looked up, ``max_match_per_key`` which is to indica
the user expects to find for each key, and ``set_id`` which is used to return all
target set ids where the key has matched, if any. The ``set_id`` array should be sized
according to ``max_match_per_key``. For vBF, the maximum number of matches per key is equal
-to the number of sets. For HTSS, the maximum number of matches per key is equal to two time
+to the number of sets. For HTSS, the maximum number of matches per key is equal to two times the
entry count per bucket. ``max_match_per_key`` should be equal or smaller than the maximum number of
possible matches.
-The ``rte_membership_lookup_multi_bulk()`` function looks up a bulk of keys/elements in the
+The ``rte_member_lookup_multi_bulk()`` function looks up a bulk of keys/elements in the
set-summary structure for multiple matches, each key lookup returns ALL the matches (possibly more
than one) found for this key when it is matched against all target sets (cache mode HTSS
matches at most one target set). The
@@ -370,7 +370,7 @@ possible matches, similar to ``rte_member_lookup_multi``.
Set-summary Element Delete
~~~~~~~~~~~~~~~~~~~~~~~~~~
-The ``rte_membership_delete()`` function deletes an element/key from a set-summary structure, if it fails
+The ``rte_member_delete()`` function deletes an element/key from a set-summary structure, if it fails
an error is returned. The input arguments should include ``key`` which is a pointer to the
element/key that needs to be deleted from the set-summary, and ``set_id``
which is the set id associated with the key to delete. It is worth noting that current
--
2.51.0
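The membership patch above tightens the Bloom filter description: on insert, bits at positions `h1(X)…hk(X)` are set; a query answers "possibly in set" only if all k bits are set, so false positives are possible but false negatives are not. A small self-contained sketch of those semantics — not the DPDK Membership Library implementation; the `toy_bf` structure, the salted FNV-style hash, and the `BF_BITS`/`BF_HASHES` parameters are all illustrative choices:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BF_BITS   1024   /* m: size of the bit-vector v */
#define BF_HASHES 3      /* k: number of independent hash functions */

struct toy_bf { uint8_t v[BF_BITS / 8]; };

/* FNV-1a style string hash, salted per hash-function index to get
 * k "independent" functions from one base hash. */
static uint32_t bf_hash(const char *key, uint32_t salt)
{
    uint32_t h = 2166136261u ^ (salt * 16777619u);
    for (; *key; key++) {
        h ^= (uint8_t)*key;
        h *= 16777619u;
    }
    return h % BF_BITS;
}

/* Insert: set bits h1(X)..hk(X); a bit may be set by many elements. */
static void bf_insert(struct toy_bf *bf, const char *key)
{
    for (uint32_t i = 0; i < BF_HASHES; i++) {
        uint32_t b = bf_hash(key, i);
        bf->v[b / 8] |= (uint8_t)(1u << (b % 8));
    }
}

/* Query: any clear bit means "definitely not in the set". */
static int bf_maybe_contains(const struct toy_bf *bf, const char *key)
{
    for (uint32_t i = 0; i < BF_HASHES; i++) {
        uint32_t b = bf_hash(key, i);
        if (!(bf->v[b / 8] & (1u << (b % 8))))
            return 0;
    }
    return 1; /* possibly in the set (false positive is possible) */
}
```

This is the single-set case the routing-loop example in the patch describes: each node checks its own id against the BF carried in the packet header.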
* [PATCH v5 33/54] doc: correct errors in mempool library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (31 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 32/54] doc: correct grammar in membership " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 34/54] doc: correct style in meson unit tests guide Stephen Hemminger
` (20 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various issues in the mempool library documentation:
- use consistent lowercase "x86" architecture naming
- fix subject-verb agreement for "size is" not "size are"
- correct "(with locks)" to "(without locks)" since the benefit of
per-core caches is avoiding locks on the shared ring
- fix function name rte_pktmbuf_create to rte_pktmbuf_pool_create
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/mempool_lib.rst | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index 8b4793afff..c10bc7bb43 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -33,7 +33,7 @@ but can be enabled by setting ``RTE_LIBRTE_MEMPOOL_STATS`` in ``config/rte_confi
Memory Alignment Constraints on x86 architecture
------------------------------------------------
-Depending on hardware memory configuration on X86 architecture, performance can be greatly improved by adding a specific padding between objects.
+Depending on hardware memory configuration on x86 architecture, performance can be greatly improved by adding a specific padding between objects.
The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded.
This is particularly true for packet buffers when doing L3 forwarding or flow classification.
@@ -59,7 +59,7 @@ Examples of alignment for different DIMM architectures are shown in
In this case, the assumption is that a packet is 16 blocks of 64 bytes, which is not true.
The Intel® 5520 chipset has three channels, so in most cases,
-no padding is required between objects (except for objects whose size are n x 3 x 64 bytes blocks).
+no padding is required between objects (except for objects whose size is n x 3 x 64 bytes blocks).
.. _figure_memory-management2:
@@ -89,7 +89,7 @@ since each access requires a compare-and-set (CAS) operation.
To avoid having too many access requests to the memory pool's ring,
the memory pool allocator can maintain a per-core cache and do bulk requests to the memory pool's ring,
via the cache with many fewer locks on the actual memory pool structure.
-In this way, each core has full access to its own cache (with locks) of free objects and
+In this way, each core has full access to its own cache (without locks) of free objects and
only when the cache fills does the core need to shuffle some of the free objects back to the pools ring or
obtain more objects when the cache is empty.
@@ -140,7 +140,7 @@ Legacy applications may continue to use the old ``rte_mempool_create()`` API
call, which uses a ring based mempool handler by default. These applications
will need to be modified to use a new mempool handler.
-For applications that use ``rte_pktmbuf_create()``, there is a config setting
+For applications that use ``rte_pktmbuf_pool_create()``, there is a config setting
(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
an alternative mempool handler.
--
2.51.0
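The mempool correction above ("(with locks)" to "(without locks)") is the whole point of the per-core cache: the core touches only its private cache, and the shared ring — where a CAS or lock would be needed — is accessed only in bulk, on refill or flush. A plain-C sketch of that refill/flush pattern, not DPDK's `rte_mempool` (the `toy_*` names, sizes, and the elided lock are illustrative):

```c
#include <assert.h>

#define POOL_SIZE  64
#define CACHE_SIZE 8

struct toy_pool  { int objs[POOL_SIZE]; int n; };  /* shared; real code
                                                      guards this with CAS */
struct toy_cache { int objs[CACHE_SIZE]; int n; }; /* per-core, lock-free */

/* Bulk get/put on the shared pool: this is the only place where the
 * (elided) synchronization cost would be paid. */
static int pool_get_bulk(struct toy_pool *p, int *out, int cnt)
{
    if (p->n < cnt)
        return -1;
    for (int i = 0; i < cnt; i++)
        out[i] = p->objs[--p->n];
    return 0;
}

static void pool_put_bulk(struct toy_pool *p, const int *in, int cnt)
{
    for (int i = 0; i < cnt; i++)
        p->objs[p->n++] = in[i];
}

/* Per-core alloc: served from the cache without locks; when the cache is
 * empty, refill it with one bulk request to the shared pool. */
static int cache_alloc(struct toy_cache *c, struct toy_pool *p, int *obj)
{
    if (c->n == 0 && pool_get_bulk(p, c->objs, CACHE_SIZE / 2) == 0)
        c->n = CACHE_SIZE / 2;
    if (c->n == 0)
        return -1;
    *obj = c->objs[--c->n];
    return 0;
}

/* Per-core free: goes back to the cache; when the cache fills, flush
 * half of it back to the shared pool in one bulk operation. */
static void cache_free(struct toy_cache *c, struct toy_pool *p, int obj)
{
    if (c->n == CACHE_SIZE) {
        c->n = CACHE_SIZE / 2;
        pool_put_bulk(p, c->objs + c->n, CACHE_SIZE / 2);
    }
    c->objs[c->n++] = obj;
}
```

In steady state most alloc/free calls never reach the shared pool at all, which is why the cache must be described as lock-free.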
* [PATCH v5 34/54] doc: correct style in meson unit tests guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (32 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 33/54] doc: correct errors in mempool " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 35/54] doc: correct errors in metrics library guide Stephen Hemminger
` (19 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct style issues in the meson unit tests documentation:
- improve awkward passive construction for build steps reference
- normalize list punctuation by removing unnecessary periods
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/meson_ut.rst | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/doc/guides/prog_guide/meson_ut.rst b/doc/guides/prog_guide/meson_ut.rst
index 9bc52a30fc..2740184cde 100644
--- a/doc/guides/prog_guide/meson_ut.rst
+++ b/doc/guides/prog_guide/meson_ut.rst
@@ -6,19 +6,19 @@ Running DPDK Unit Tests with Meson
This section describes how to run test cases with the DPDK meson build system.
-Steps to build and install DPDK using meson can be referred
-in :doc:`build-sdk-meson`
+For steps to build and install DPDK using meson, refer to
+:doc:`build-sdk-meson`.
Grouping of test cases
----------------------
Test cases have been classified into four different groups.
-* Fast tests.
-* Performance tests.
-* Driver tests.
+* Fast tests
+* Performance tests
+* Driver tests
* Tests which produce lists of objects as output, and therefore that need
- manual checking.
+ manual checking
These tests can be run using the argument to ``meson test`` as
``--suite project_name:label``.
--
2.51.0
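The meson patch above leaves the invocation abstract (``--suite project_name:label``). As a concrete usage sketch — the build directory name is an assumption, and the suite labels (``fast-tests`` etc.) should be checked against the labels actually defined in your tree:

```shell
# Configure and build first ("build" as the directory name is an assumption):
meson setup build && ninja -C build

# Run only one group of tests; "DPDK" is the meson project name and
# "fast-tests" is the label for the fast test group.
meson test -C build --suite DPDK:fast-tests
```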
* [PATCH v5 35/54] doc: correct errors in metrics library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (33 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 34/54] doc: correct style in meson unit tests guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 36/54] doc: correct grammar in mldev " Stephen Hemminger
` (18 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various issues in the metrics library documentation:
- capitalize sentence start
- fix subject-verb agreement for "function that returns"
- add missing variable declarations (ret, i) in code example
- remove extra space in malloc line
- use consistent American spelling "Deinitializing"
- invalid C syntax: rte_metrics_deinit(void) -> ()
- normalize spacing in metric name lists
- fix typo mac_latency_ns to max_latency_ns
- remove unused variable declaration from example
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/metrics_lib.rst | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/doc/guides/prog_guide/metrics_lib.rst b/doc/guides/prog_guide/metrics_lib.rst
index 98fc8947c6..6f534534c0 100644
--- a/doc/guides/prog_guide/metrics_lib.rst
+++ b/doc/guides/prog_guide/metrics_lib.rst
@@ -84,7 +84,7 @@ using ``rte_metrics_get_names()``.
rte_metrics_update_value(port_id, id_3, values[2]);
rte_metrics_update_value(port_id, id_4, values[3]);
-if metrics were registered as a single set, they can either be updated
+If metrics were registered as a single set, they can either be updated
individually using ``rte_metrics_update_value()``, or updated together
using the ``rte_metrics_update_values()`` function:
@@ -105,7 +105,7 @@ Querying metrics
----------------
Consumers can obtain metric values by querying the metrics library using
-the ``rte_metrics_get_values()`` function that return an array of
+the ``rte_metrics_get_values()`` function that returns an array of
``struct rte_metric_value``. Each entry within this array contains a metric
value and its associated key. A key-name mapping can be obtained using the
``rte_metrics_get_names()`` function that returns an array of
@@ -118,6 +118,8 @@ print out all metrics for a given port:
struct rte_metric_value *metrics;
struct rte_metric_name *names;
int len;
+ int ret;
+ int i;
len = rte_metrics_get_names(NULL, 0);
if (len < 0) {
@@ -129,7 +131,7 @@ print out all metrics for a given port:
return;
}
metrics = malloc(sizeof(struct rte_metric_value) * len);
- names = malloc(sizeof(struct rte_metric_name) * len);
+ names = malloc(sizeof(struct rte_metric_name) * len);
if (metrics == NULL || names == NULL) {
printf("Cannot allocate memory\n");
free(metrics);
@@ -152,7 +154,7 @@ print out all metrics for a given port:
}
-Deinitialising the library
+Deinitializing the library
--------------------------
Once the library usage is done, it must be deinitialized by calling
@@ -161,7 +163,7 @@ during initialization.
.. code-block:: c
- err = rte_metrics_deinit(void);
+ err = rte_metrics_deinit();
If the return value is negative, it means deinitialization failed.
This function **must** be called from a primary process.
@@ -175,11 +177,11 @@ These statistics are reported via the metrics library using the
following names:
- ``mean_bits_in``: Average inbound bit-rate
- - ``mean_bits_out``: Average outbound bit-rate
+ - ``mean_bits_out``: Average outbound bit-rate
- ``ewma_bits_in``: Average inbound bit-rate (EWMA smoothed)
- - ``ewma_bits_out``: Average outbound bit-rate (EWMA smoothed)
- - ``peak_bits_in``: Peak inbound bit-rate
- - ``peak_bits_out``: Peak outbound bit-rate
+ - ``ewma_bits_out``: Average outbound bit-rate (EWMA smoothed)
+ - ``peak_bits_in``: Peak inbound bit-rate
+ - ``peak_bits_out``: Peak outbound bit-rate
Once initialised and clocked at the appropriate frequency, these
statistics can be obtained by querying the metrics library.
@@ -241,8 +243,8 @@ the jitter in processing delay. These statistics are then reported
via the metrics library using the following names:
- ``min_latency_ns``: Minimum processing latency (nano-seconds)
- - ``avg_latency_ns``: Average processing latency (nano-seconds)
- - ``mac_latency_ns``: Maximum processing latency (nano-seconds)
+ - ``avg_latency_ns``: Average processing latency (nano-seconds)
+ - ``max_latency_ns``: Maximum processing latency (nano-seconds)
- ``jitter_ns``: Variance in processing latency (nano-seconds)
Once initialised and clocked at the appropriate frequency, these
@@ -256,8 +258,6 @@ Before the library can be used, it has to be initialised by calling
.. code-block:: c
- lcoreid_t latencystats_lcore_id = -1;
-
int ret = rte_latencystats_init(1, NULL);
if (ret)
rte_exit(EXIT_FAILURE, "Could not allocate latency data.\n");
--
2.51.0
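The metrics example fixed above relies on a common C idiom: call the query function with a NULL buffer to learn the required length, allocate, then call again to fill. A self-contained sketch of just that idiom — `toy_get_values()` and its fixed sample data are hypothetical stand-ins, not the `rte_metrics_*` API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Pretend metric values; in the real library these come from producers. */
static const int toy_metric_vals[] = { 10, 20, 30 };

/* NULL buffer (or zero capacity): return the required element count.
 * Otherwise fill the caller's buffer, or fail if it is too small. */
static int toy_get_values(int *buf, int capacity)
{
    int len = (int)(sizeof(toy_metric_vals) / sizeof(toy_metric_vals[0]));

    if (buf == NULL || capacity == 0)
        return len;            /* size query */
    if (capacity < len)
        return -1;             /* caller's buffer too small */
    memcpy(buf, toy_metric_vals, sizeof(toy_metric_vals));
    return len;
}
```

Usage mirrors the doc's corrected example: query the length, check it for errors, allocate, then fetch — which is also why the patch has to add the missing `ret` and `i` declarations to make the original snippet compile.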
* [PATCH v5 36/54] doc: correct grammar in mldev library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (34 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 35/54] doc: correct errors in metrics library guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 37/54] doc: correct grammar in multi-process guide Stephen Hemminger
` (17 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various grammar and style issues in the ML device library
documentation:
- fix subject-verb agreement for "API which supports"
- use compound word "Workflow" instead of "Work flow"
- fix parallel construction for model load and start
- use plural "feature sets"
- rewrite grammatically broken sentence about rte_ml_dev_info_get
- add missing article before "number of queue pairs"
- use consistent terminology "operations" not "packets"
- fix malformed sentence about dequeue API format
- add missing word "with" in quantize section
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/mldev.rst | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/doc/guides/prog_guide/mldev.rst b/doc/guides/prog_guide/mldev.rst
index 61661b998b..094a67cbdb 100644
--- a/doc/guides/prog_guide/mldev.rst
+++ b/doc/guides/prog_guide/mldev.rst
@@ -6,7 +6,7 @@ Machine Learning (ML) Device Library
The Machine Learning (ML) Device library provides a Machine Learning device framework for the management and
provisioning of hardware and software ML poll mode drivers,
-defining an API which support a number of ML operations
+defining an API which supports a number of ML operations
including device handling and inference processing.
The ML model creation and training is outside of the scope of this library.
@@ -16,7 +16,7 @@ The ML framework is built on the following model:
.. figure:: img/mldev_flow.*
- Work flow of inference on MLDEV
+ Workflow of inference on MLDEV
ML Device
A hardware or software-based implementation of ML device API
@@ -28,7 +28,7 @@ ML Model
required to make predictions on live data.
Once the model is created and trained outside of the DPDK scope,
the model can be loaded via ``rte_ml_model_load()``
- and then start it using ``rte_ml_model_start()`` API function.
+ and then started using ``rte_ml_model_start()`` API function.
The ``rte_ml_model_params_update()`` can be used to update the model parameters
such as weights and bias without unloading the model using ``rte_ml_model_unload()``.
@@ -79,9 +79,9 @@ Each device, whether virtual or physical is uniquely designated by two identifie
Device Features and Capabilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-ML devices may support different feature set.
-In order to get the supported PMD feature ``rte_ml_dev_info_get()`` API
-which return the info of the device and its supported features.
+ML devices may support different feature sets.
+To get the supported PMD features, use the ``rte_ml_dev_info_get()`` API,
+which returns the info of the device and its supported features.
Device Configuration
@@ -106,7 +106,7 @@ maximum size of model and so on.
Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each ML device can be configured with number of queue pairs.
+Each ML device can be configured with a number of queue pairs.
Each queue pair is configured using ``rte_ml_dev_queue_pair_setup()``
@@ -162,9 +162,9 @@ to specify the device queue pair to schedule the processing on.
The ``nb_ops`` parameter is the number of operations to process
which are supplied in the ``ops`` array of ``rte_ml_op`` structures.
The enqueue function returns the number of operations it enqueued for processing,
-a return value equal to ``nb_ops`` means that all packets have been enqueued.
+a return value equal to ``nb_ops`` means that all operations have been enqueued.
-The dequeue API uses the same format as the enqueue API of processed
+The dequeue API uses the same format as the enqueue API,
but the ``nb_ops`` and ``ops`` parameters are now used to specify
the max processed operations the user wishes to retrieve
and the location in which to store them.
@@ -193,7 +193,7 @@ from a higher precision type to a lower precision type and vice-versa.
ML library provides the functions ``rte_ml_io_quantize()`` and ``rte_ml_io_dequantize()``
to enable data type conversions.
User needs to provide the address of the quantized and dequantized data buffers
-to the functions, along the number of the batches in the buffers.
+to the functions, along with the number of batches in the buffers.
For quantization, the dequantized data is assumed to be
of the type ``dtype`` provided by the ``rte_ml_model_info::input``
--
2.51.0
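The mldev patch above clarifies burst semantics: enqueue returns how many of `nb_ops` operations were accepted (equal to `nb_ops` means all were enqueued), and dequeue uses the same shape but `nb_ops` caps how many completed operations to retrieve. A minimal sketch of that contract — a toy queue pair, not the `rte_ml_enqueue_burst()`/`rte_ml_dequeue_burst()` implementation; names and the depth are illustrative:

```c
#include <assert.h>

#define QP_DEPTH 4

struct toy_qp { int ops[QP_DEPTH]; int n; };

/* Enqueue up to nb_ops operations; return how many were accepted.
 * A return value < nb_ops means the queue pair filled up. */
static unsigned toy_enqueue_burst(struct toy_qp *qp, const int *ops,
                                  unsigned nb_ops)
{
    unsigned i;
    for (i = 0; i < nb_ops && qp->n < QP_DEPTH; i++)
        qp->ops[qp->n++] = ops[i];
    return i;
}

/* Dequeue up to nb_ops completed operations into ops; return the count
 * actually retrieved (same call shape as enqueue). */
static unsigned toy_dequeue_burst(struct toy_qp *qp, int *ops,
                                  unsigned nb_ops)
{
    unsigned i;
    for (i = 0; i < nb_ops && qp->n > 0; i++)
        ops[i] = qp->ops[--qp->n];
    return i;
}
```

The caller therefore always checks the return value against `nb_ops` rather than assuming the whole burst went through — the distinction the "packets" vs "operations" wording fix above is protecting.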
* [PATCH v5 37/54] doc: correct grammar in multi-process guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (35 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 36/54] doc: correct grammar in mldev " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 38/54] doc: correct grammar in overview Stephen Hemminger
` (16 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various grammar and style issues in the multi-process support
documentation:
- remove spurious spaces in hyphenated words (pre-initialized,
Multi-process, side-by-side)
- add missing article "the" before "same DPDK version"
- fix subject-verb agreement in multiple places
- correct book title from "Application's" to "Applications"
- remove duplicate word "process" in startup description
- fix typo "pass long" to "pass along"
- add missing past participle "triggered"
- expand abbreviation "Misc" to "Miscellaneous"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/multi_proc_support.rst | 28 ++++++++++----------
1 file changed, 14 insertions(+), 14 deletions(-)
diff --git a/doc/guides/prog_guide/multi_proc_support.rst b/doc/guides/prog_guide/multi_proc_support.rst
index a73918a5da..722506e385 100644
--- a/doc/guides/prog_guide/multi_proc_support.rst
+++ b/doc/guides/prog_guide/multi_proc_support.rst
@@ -17,7 +17,7 @@ For now, there are two types of process specified:
* primary processes, which can initialize and which have full permissions on shared memory
* secondary processes, which cannot initialize shared memory,
- but can attach to pre- initialized shared memory and create objects in it.
+ but can attach to pre-initialized shared memory and create objects in it.
Standalone DPDK processes are primary processes,
while secondary processes can only run alongside a primary process or
@@ -25,9 +25,9 @@ after a primary process has already configured the hugepage shared memory for th
.. note::
- Secondary processes should run alongside primary process with same DPDK version.
+ Secondary processes should run alongside primary process with the same DPDK version.
- Secondary processes which requires access to physical devices in Primary process, must
+ Secondary processes which require access to physical devices in Primary process, must
be passed with the same allow and block options.
To support these two process types, and other multi-process setups described later,
@@ -38,8 +38,8 @@ two additional command-line parameters are available to the EAL:
* ``--file-prefix:`` to allow processes that do not want to co-operate to have different memory regions
A number of example applications are provided that demonstrate how multiple DPDK processes can be used together.
-These are more fully documented in the "Multi- process Sample Application" chapter
-in the *DPDK Sample Application's User Guide*.
+These are more fully documented in the "Multi-process Sample Application" chapter
+in the *DPDK Sample Applications User Guide*.
Memory Sharing
--------------
@@ -47,7 +47,7 @@ Memory Sharing
The key element in getting a multi-process application working using the DPDK is to ensure that
memory resources are properly shared among the processes making up the multi-process application.
Once there are blocks of shared memory available that can be accessed by multiple processes,
-then issues such as inter-process communication (IPC) becomes much simpler.
+then issues such as inter-process communication (IPC) become much simpler.
On application start-up in a primary or standalone process,
the DPDK records to memory-mapped files the details of the memory configuration it is using - hugepages in use,
@@ -88,7 +88,7 @@ In this model, the first of the processes spawned should be spawned using the ``
while all subsequent instances should be spawned using the ``--proc-type=secondary`` flag.
The simple_mp and symmetric_mp sample applications demonstrate this usage model.
-They are described in the "Multi-process Sample Application" chapter in the *DPDK Sample Application's User Guide*.
+They are described in the "Multi-process Sample Application" chapter in the *DPDK Sample Applications User Guide*.
Asymmetric/Non-Peer Processes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -99,7 +99,7 @@ server distributing received packets among worker or client threads, which are r
In this case, extensive use of rte_ring objects is made, which are located in shared hugepage memory.
The client_server_mp sample application shows this usage model.
-It is described in the "Multi-process Sample Application" chapter in the *DPDK Sample Application's User Guide*.
+It is described in the "Multi-process Sample Application" chapter in the *DPDK Sample Applications User Guide*.
Running Multiple Independent DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -138,7 +138,7 @@ can use).
Running Multiple Independent Groups of DPDK Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In the same way that it is possible to run independent DPDK applications side- by-side on a single system,
+In the same way that it is possible to run independent DPDK applications side-by-side on a single system,
this can be trivially extended to multi-process groups of DPDK applications running side-by-side.
In this case, the secondary processes must use the same ``--file-prefix`` parameter
as the primary process whose shared memory they are connecting to.
@@ -155,7 +155,7 @@ There are a number of limitations to what can be done when running DPDK multi-pr
Some of these are documented below:
* The multi-process feature requires that the exact same hugepage memory mappings be present in all applications.
- This makes secondary process startup process generally unreliable. Disabling
+ This makes secondary process startup generally unreliable. Disabling
Linux security feature - Address-Space Layout Randomization (ASLR) may
help getting more consistent mappings, but not necessarily more reliable -
if the mappings are wrong, they will be consistently wrong!
@@ -247,7 +247,7 @@ of fields to be populated are as follows:
* ``name`` - message name. This name must match receivers' callback name.
* ``param`` - message data (up to 256 bytes).
* ``len_param`` - length of message data.
-* ``fds`` - file descriptors to pass long with the data (up to 8 fd's).
+* ``fds`` - file descriptors to pass along with the data (up to 8 fd's).
* ``num_fds`` - number of file descriptors to send.
Once the structure is populated, calling ``rte_mp_sendmsg()`` will send the
@@ -301,7 +301,7 @@ Receiving and responding to messages
To receive a message, a name callback must be registered using the
``rte_mp_action_register()`` function. The name of the callback must match the
``name`` field in sender's ``rte_mp_msg`` message descriptor in order for this
-message to be delivered and for the callback to be trigger.
+message to be delivered and for the callback to be triggered.
The callback's definition is ``rte_mp_t``, and consists of the incoming message
pointer ``msg``, and an opaque pointer ``peer``. Contents of ``msg`` will be
@@ -318,8 +318,8 @@ pointer. The resulting response will then be delivered to the correct requestor.
there is no built-in way to indicate success or error for a request. Failing
to do so will cause the requestor to time out while waiting on a response.
-Misc considerations
-~~~~~~~~~~~~~~~~~~~~~~~~
+Miscellaneous considerations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Due to the underlying IPC implementation being single-threaded, recursive
requests (i.e. sending a request while responding to another request) is not
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 38/54] doc: correct grammar in overview
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (36 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 37/54] doc: correct grammar in multi-process guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 39/54] doc: correct grammar in ACL library guide Stephen Hemminger
` (15 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various issues in the overview documentation:
- fix typo "Mos" to "Most"
- fix grammatically broken sentence about interrupt driven model
- use consistent spacing in "40 GbE" to match "1 GbE" and "10 GbE"
- add missing space after comma in LPM library reference
- remove extra space before doc reference
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/overview.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/overview.rst b/doc/guides/prog_guide/overview.rst
index c70023e8a1..a22f51e8ca 100644
--- a/doc/guides/prog_guide/overview.rst
+++ b/doc/guides/prog_guide/overview.rst
@@ -23,8 +23,8 @@ Longest Prefix Match (LPM) and rings libraries are also provided.
Sample applications are provided to help show the user how to use various features of the DPDK.
The DPDK supports multiple programming models for packet processing.
-Mos of the sample applications use a polling mode for performance but
-some of the samples use interrupt driven model is useful for saving power
+Most of the sample applications use a polling mode for performance but
+some of the samples use an interrupt driven model that is useful for saving power
but has additional performance overhead. If available, it is possible
to use the DPDK with event based hardware support.
@@ -140,16 +140,16 @@ The library documentation is available in :doc:`timer_lib`.
Ethernet* Poll Mode Driver Architecture
---------------------------------------
-The DPDK includes Poll Mode Drivers (PMDs) for 1 GbE, 10 GbE and 40GbE, and para virtualized virtio
+The DPDK includes Poll Mode Drivers (PMDs) for 1 GbE, 10 GbE and 40 GbE, and paravirtualized virtio
Ethernet controllers which are designed to work without asynchronous, interrupt-based signaling mechanisms.
Packet Forwarding Algorithm Support
-----------------------------------
-The DPDK includes Hash (librte_hash) and Longest Prefix Match (LPM,librte_lpm)
+The DPDK includes Hash (librte_hash) and Longest Prefix Match (LPM, librte_lpm)
libraries to support the corresponding packet forwarding algorithms.
-See :doc:`hash_lib` and :doc:`lpm_lib` for more information.
+See :doc:`hash_lib` and :doc:`lpm_lib` for more information.
librte_net
----------
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 39/54] doc: correct grammar in ACL library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (37 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 38/54] doc: correct grammar in overview Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 40/54] doc: correct typos in packet distributor guide Stephen Hemminger
` (14 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various grammar issues in the ACL library documentation:
- fix subject-verb agreement "fields has" to "fields have"
- fix awkward phrasing "to which...belongs to"
- fix typo "is a follows" to "is as follows" in two places
- fix typo "less then" to "less than" in code comment
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/packet_classif_access_ctrl.rst | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/doc/guides/prog_guide/packet_classif_access_ctrl.rst b/doc/guides/prog_guide/packet_classif_access_ctrl.rst
index 172f443f6e..00e2fdbef8 100644
--- a/doc/guides/prog_guide/packet_classif_access_ctrl.rst
+++ b/doc/guides/prog_guide/packet_classif_access_ctrl.rst
@@ -32,7 +32,7 @@ over which packet classification will be performed.
Though there are few restrictions on the rule fields layout:
* First field in the rule definition has to be one byte long.
-* All subsequent fields has to be grouped into sets of 4 consecutive bytes.
+* All subsequent fields have to be grouped into sets of 4 consecutive bytes.
This is done mainly for performance reasons - search function processes the first input byte as part of the flow setup and then the inner loop of the search function is unrolled to process four input bytes at a time.
@@ -69,7 +69,7 @@ To define each field inside an AC rule, the following structure is used:
* input_index
As mentioned above, all input fields, except the very first one, must be in groups of 4 consecutive bytes.
- The input index specifies to which input group that field belongs to.
+ The input index specifies which input group that field belongs to.
* offset
The offset field defines the offset for the field.
@@ -140,7 +140,7 @@ The following array of field definitions can be used:
},
};
-A typical example of such an IPv4 5-tuple rule is a follows:
+A typical example of such an IPv4 5-tuple rule is as follows:
::
@@ -209,7 +209,7 @@ The following array of field definitions can be used:
},
};
-A typical example of such an IPv6 2-tuple rule is a follows:
+A typical example of such an IPv6 2-tuple rule is as follows:
::
@@ -346,7 +346,7 @@ For example:
* populated with rules AC context and cfg filled properly.
*/
- /* try to build AC context, with RT structures less then 8MB. */
+ /* try to build AC context, with RT structures less than 8MB. */
cfg.max_size = 0x800000;
ret = rte_acl_build(acx, &cfg);
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 40/54] doc: correct typos in packet distributor guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (38 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 39/54] doc: correct grammar in ACL library guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 41/54] doc: correct grammar in packet framework guide Stephen Hemminger
` (13 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various issues in the packet distributor documentation:
- remove extra space after "each"
- fix typo "work" to "worker" in queue description
- fix "of less use that" to "of less use than"
- fix inconsistent capitalization "APIS" to "APIs"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/packet_distrib_lib.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/prog_guide/packet_distrib_lib.rst b/doc/guides/prog_guide/packet_distrib_lib.rst
index 3c3b746aca..10e187be04 100644
--- a/doc/guides/prog_guide/packet_distrib_lib.rst
+++ b/doc/guides/prog_guide/packet_distrib_lib.rst
@@ -33,11 +33,11 @@ The operation of the distributor is as follows:
#. As workers request packets, the distributor takes packets from the set of packets passed in and distributes them to the workers.
As it does so, it examines the "tag" -- stored in the RSS hash field in the mbuf -- for each packet
- and records what tags are being processed by each worker.
+ and records what tags are being processed by each worker.
#. If the next packet in the input set has a tag which is already being processed by a worker,
then that packet will be queued up for processing by that worker
- and given to it in preference to other packets when that work next makes a request for work.
+ and given to it in preference to other packets when that worker next makes a request for work.
This ensures that no two packets with the same tag are processed in parallel,
and that all packets with the same tag are processed in input order.
@@ -78,7 +78,7 @@ while allowing packet order within a packet flow -- identified by a tag -- to be
The flush and clear_returns API calls, mentioned previously,
-are likely of less use that the process and returned_pkts APIS, and are principally provided to aid in unit testing of the library.
+are likely of less use than the process and returned_pkts APIs, and are principally provided to aid in unit testing of the library.
Descriptions of these functions and their use can be found in the DPDK API Reference document.
Worker Operation
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 41/54] doc: correct grammar in packet framework guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (39 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 40/54] doc: correct typos in packet distributor guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 42/54] doc: correct grammar in PDCP library guide Stephen Hemminger
` (12 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various issues in the packet framework documentation:
- add missing article "a" in "As a result of lookup"
- fix incomplete sentence "described in." to "described below."
- fix confusing mirror/main copy swap description that had
duplicate text and fix subject-verb agreement
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/packet_framework.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/prog_guide/packet_framework.rst b/doc/guides/prog_guide/packet_framework.rst
index 17010b07dc..cacda9add2 100644
--- a/doc/guides/prog_guide/packet_framework.rst
+++ b/doc/guides/prog_guide/packet_framework.rst
@@ -34,7 +34,7 @@ as well as providing libraries of reusable templates for the commonly used pipel
The pipeline is constructed by connecting the set of input ports with the set of output ports
through the set of tables in a tree-like topology.
-As result of lookup operation for the current packet in the current table,
+As a result of lookup operation for the current packet in the current table,
one of the table entries (on lookup hit) or the default table entry (on lookup miss)
provides the set of actions to be applied on the current packet,
as well as the next hop for the packet, which can be either another table, an output port or packet drop.
@@ -106,7 +106,7 @@ Port Interface
Each port is unidirectional, i.e. either input port or output port.
Each input/output port is required to implement an abstract interface that
defines the initialization and run-time operation of the port.
-The port abstract interface is described in.
+The port abstract interface is described below.
.. _table_qos_20:
@@ -1128,7 +1128,7 @@ Mechanisms to share the same table between multiple threads:
#. **Single writer thread performing table entry add/delete operations and multiple reader threads that perform table lookup operations with read-only access to the table entries.**
The reader threads use the main table copy while the writer is updating the mirror copy.
- Once the writer update is done, the writer can signal to the readers and busy wait until all readers swaps between the mirror copy (which now becomes the main copy) and
+ Once the writer update is done, the writer can signal to the readers and busy wait until all readers swap between the main copy (which now becomes the mirror copy) and
the mirror copy (which now becomes the main copy).
Interfacing with Accelerators
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 42/54] doc: correct grammar in PDCP library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (40 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 41/54] doc: correct grammar in packet framework guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 43/54] doc: correct grammar in pdump " Stephen Hemminger
` (11 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various grammar issues in the PDCP library documentation:
- use "high-performance" instead of nonstandard "high performant"
- add missing articles and fix pluralization for APIs
- use present tense consistently instead of "would" conditional
since the library is implemented and we're describing current
behavior
- fix subject-verb agreement in multiple places
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/pdcp_lib.rst | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/pdcp_lib.rst b/doc/guides/prog_guide/pdcp_lib.rst
index 266abb8574..4123dcfde9 100644
--- a/doc/guides/prog_guide/pdcp_lib.rst
+++ b/doc/guides/prog_guide/pdcp_lib.rst
@@ -7,7 +7,7 @@ PDCP Protocol Processing Library
DPDK provides a library for PDCP protocol processing.
The library utilizes other DPDK libraries such as cryptodev, reorder, etc.,
to provide the application with a transparent and
-high performant PDCP protocol processing library.
+high-performance PDCP protocol processing library.
The library abstracts complete PDCP protocol processing conforming to
`ETSI TS 138 323 V17.1.0 (2022-08)
@@ -34,27 +34,27 @@ to work with cryptodev irrespective of the protocol offload features supported.
PDCP entity API
---------------
-PDCP library provides following control path API that is used to
+PDCP library provides the following control path APIs that are used to
configure various PDCP entities:
- ``rte_pdcp_entity_establish()``
- ``rte_pdcp_entity_suspend()``
- ``rte_pdcp_entity_release()``
-A PDCP entity would translate to one ``rte_cryptodev_sym_session`` or
+A PDCP entity translates to one ``rte_cryptodev_sym_session`` or
``rte_security_session`` based on the config.
The sessions would be created/destroyed
while corresponding PDCP entity operations are performed.
When upper layers request a PDCP entity suspend (``rte_pdcp_entity_suspend()``),
-it would result in flushing out of all cached packets and
+it results in flushing out of all cached packets and
internal state variables are updated as described in 5.1.4.
When upper layers request a PDCP entity release (``rte_pdcp_entity_release()``),
-it would result in flushing out of all cached packets
+it results in flushing out of all cached packets
and releasing of all memory associated with the entity.
-It would internally free any crypto/security sessions created.
-All procedures mentioned in 5.1.3 would be performed.
+It internally frees any crypto/security sessions created.
+All procedures mentioned in 5.1.3 are performed.
PDCP PDU (Protocol Data Unit) API
---------------------------------
@@ -84,8 +84,8 @@ PDCP packet processing API for control PDU
Control PDUs are used in PDCP as a communication channel
between transmitting and receiving entities.
-When upper layer request for operations such as re-establishment,
-receiving PDCP entity need to prepare a status report
+When upper layers request operations such as re-establishment,
+the receiving PDCP entity needs to prepare a status report
and send it to the other end.
The API ``rte_pdcp_control_pdu_create()`` allows application to request the same.
@@ -99,13 +99,13 @@ Since cryptodev dequeue can return crypto operations
belonging to multiple entities, ``rte_pdcp_pkt_crypto_group()``
is added to help grouping crypto operations belonging to same PDCP entity.
-Lib PDCP would allow application to use same API sequence
+Lib PDCP allows the application to use the same API sequence
while leveraging protocol offload features enabled by ``rte_security`` library.
-Lib PDCP would internally change the handles registered
+Lib PDCP internally changes the handles registered
for ``pre_process`` and ``post_process`` based on features enabled in the entity.
-Lib PDCP would create the required sessions on the device
+Lib PDCP creates the required sessions on the device
provided in entity to minimize the application requirements.
Also, the ``rte_crypto_op`` allocation and free would also be done internally
by lib PDCP to allow the library to create crypto ops as required for the input packets.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 43/54] doc: correct grammar in pdump library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (41 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 42/54] doc: correct grammar in PDCP library guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 44/54] doc: correct typos in power management guide Stephen Hemminger
` (10 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various grammar issues in the pdump library documentation:
- fix "both...or" to "both...and"
- fix "maybe" to "may be" (two separate words)
- add missing article "the" before "application"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/pdump_lib.rst | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/doc/guides/prog_guide/pdump_lib.rst b/doc/guides/prog_guide/pdump_lib.rst
index 5bacb49ffc..d27385fef1 100644
--- a/doc/guides/prog_guide/pdump_lib.rst
+++ b/doc/guides/prog_guide/pdump_lib.rst
@@ -7,7 +7,7 @@ Packet Capture Library
The DPDK ``pdump`` library provides a framework
for capturing packets within DPDK applications.
It enables a **secondary process** to monitor packets
-being processed by both **primary** or **secondary** processes.
+being processed by both **primary** and **secondary** processes.
Overview
@@ -76,7 +76,7 @@ The library exposes API for:
.. function:: int rte_pdump_stats(uint16_t port_id, struct rte_dump_stats *stats)
Reports the number of packets captured, filtered, and missed.
- Packets maybe missed due to mbuf pool being exhausted or the ring being full.
+ Packets may be missed due to mbuf pool being exhausted or the ring being full.
Operation
@@ -128,7 +128,7 @@ What is the performance impact of pdump?
What happens if process does not call pdump init?
- If application does not call ``rte_pdump_init``
+ If the application does not call ``rte_pdump_init``
then the request to enable (in the capture command)
will timeout and an error is returned.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 44/54] doc: correct typos in power management guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (42 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 43/54] doc: correct grammar in pdump " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 45/54] doc: correct grammar in profiling guide Stephen Hemminger
` (9 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various typos in the power management documentation:
- fix "users space" to "user space"
- fix "according the" to "according to the"
- fix section title "User Cases" to "Use Cases"
- fix "die's" to "dies" (no apostrophe for plural)
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/power_man.rst | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/doc/guides/prog_guide/power_man.rst b/doc/guides/prog_guide/power_man.rst
index 74039e5786..b4c9271e0b 100644
--- a/doc/guides/prog_guide/power_man.rst
+++ b/doc/guides/prog_guide/power_man.rst
@@ -4,7 +4,7 @@
Power Management
================
-The DPDK Power Management feature allows users space applications to save power
+The DPDK Power Management feature allows user space applications to save power
by dynamically adjusting CPU frequency or entering into different C-States.
* Adjusting the CPU frequency dynamically according to the utilization of RX queue.
@@ -60,7 +60,7 @@ Core-load Throttling through C-States
Core state can be altered by speculative sleeps whenever the specified lcore has nothing to do.
In the DPDK, if no packet is received after polling,
-speculative sleeps can be triggered according the strategies defined by the user space application.
+speculative sleeps can be triggered according to the strategies defined by the user space application.
Per-core Turbo Boost
--------------------
@@ -101,7 +101,7 @@ The main methods exported by power library are for CPU frequency scaling and inc
* **Disable turbo**: Prompt the kernel to disable Turbo Boost for the specific lcore.
-User Cases
+Use Cases
----------
The power management mechanism is used to save power when performing L3 forwarding.
@@ -285,7 +285,7 @@ Get Num Pkgs
Get the number of packages (CPU's) on the system.
Get Num Dies
- Get the number of die's on a given package.
+ Get the number of dies on a given package.
References
----------
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 45/54] doc: correct grammar in profiling guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (43 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 44/54] doc: correct typos in power management guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 46/54] doc: correct errors in regexdev guide Stephen Hemminger
` (8 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct subject-verb agreement: "architecture provides" not "provide".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/profile_app.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/profile_app.rst b/doc/guides/prog_guide/profile_app.rst
index 2f47680d5d..b2950c2a61 100644
--- a/doc/guides/prog_guide/profile_app.rst
+++ b/doc/guides/prog_guide/profile_app.rst
@@ -88,7 +88,7 @@ Profiling on ARM64
Using Linux perf
~~~~~~~~~~~~~~~~
-The ARM64 architecture provide performance counters to monitor events. The
+The ARM64 architecture provides performance counters to monitor events. The
Linux ``perf`` tool can be used to profile and benchmark an application. In
addition to the standard events, ``perf`` can be used to profile arm64
specific PMU (Performance Monitor Unit) events through raw events (``-e``
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 46/54] doc: correct errors in regexdev guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (44 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 45/54] doc: correct grammar in profiling guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 47/54] doc: correct grammar in reorder library guide Stephen Hemminger
` (7 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various issues in the RegEx device library documentation:
- fix "Crypto" to "RegEx" in framework description (copy-paste error)
- fix "ReEx" typo to "RegEx"
- remove extra space after "for example"
- fix "in being compiled" to "is being compiled"
- fix "depended" to "dependent"
- fix subject-verb agreement "add / remove" to "adds / removes"
- add missing article "a" before "number of queue pairs"
- fix "feature set" to "feature sets" and complete broken sentence
- fix "it's" to "its"
- fix garbled sentence about data release timing
- fix awkward "API of processed but" phrasing
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/regexdev.rst | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/doc/guides/prog_guide/regexdev.rst b/doc/guides/prog_guide/regexdev.rst
index 3bf3b154b4..2b4f26263e 100644
--- a/doc/guides/prog_guide/regexdev.rst
+++ b/doc/guides/prog_guide/regexdev.rst
@@ -14,7 +14,7 @@ Design Principles
The RegEx library follows the same basic principles as those used in DPDK's
Ethernet Device framework and the Crypto framework. The RegEx framework provides
-a generic Crypto device framework which supports both physical (hardware)
+a generic RegEx device framework which supports both physical (hardware)
and virtual (software) RegEx devices as well as a generic RegEx API which allows
RegEx devices to be managed and configured and supports RegEx operations to be
provisioned on RegEx poll mode driver.
@@ -29,7 +29,7 @@ Device Creation
Physical RegEx devices are discovered during the PCI probe/enumeration of the
EAL function which is executed at DPDK initialization, based on
their PCI device identifier, each unique PCI BDF (bus/bridge, device,
-function). Specific physical ReEx devices, like other physical devices in DPDK
+function). Specific physical RegEx devices, like other physical devices in DPDK
can be listed using the EAL command line options.
@@ -63,7 +63,7 @@ The rte_regexdev_configure API is used to configure a RegEx device.
const struct rte_regexdev_config *cfg);
The ``rte_regexdev_config`` structure is used to pass the configuration
-parameters for the RegEx device for example number of queue pairs, number of
+parameters for the RegEx device, for example, number of queue pairs, number of
groups, max number of matches and so on.
.. code-block:: c
@@ -117,13 +117,13 @@ Configuration of Rules Database
Each Regex device should be configured with the rule database.
There are two modes of setting the rule database, online or offline.
-The online mode means, that the rule database in being compiled by the
+The online mode means that the rule database is being compiled by the
RegEx PMD while in the offline mode the rule database is compiled by external
compiler, and is being loaded to the PMD as a buffer.
-The configuration mode is depended on the PMD capabilities.
+The configuration mode is dependent on the PMD capabilities.
Online rule configuration is done using the following API functions:
-``rte_regexdev_rule_db_update`` which add / remove rules from the rules
+``rte_regexdev_rule_db_update`` which adds / removes rules from the rules
precompiled list.
Offline rule configuration can be done by adding a pointer to the compiled
@@ -134,7 +134,7 @@ rule database in the configuration step, or by using
Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Each RegEx device can be configured with number of queue pairs.
+Each RegEx device can be configured with a number of queue pairs.
Each queue pair is configured using ``rte_regexdev_queue_pair_setup``
@@ -149,9 +149,9 @@ require global locks and hinder performance.
Device Features and Capabilities
---------------------------------
-RegEx devices may support different feature set.
-In order to get the supported PMD feature ``rte_regexdev_info_get``
-API which return the info of the device and it's supported features.
+RegEx devices may support different feature sets.
+In order to get the supported PMD features, use the ``rte_regexdev_info_get``
+API which returns the info of the device and its supported features.
Enqueue / Dequeue Burst APIs
@@ -165,10 +165,10 @@ The enqueue function returns the number of operations it actually enqueued for
processing, a return value equal to ``nb_ops`` means that all packets have been
enqueued.
-Data pointed in each op, should not be released until the dequeue of for that
+Data pointed to by each op should not be released until the dequeue for that
op.
-The dequeue API uses the same format as the enqueue API of processed but
+The dequeue API uses the same format as the enqueue API but
the ``nb_ops`` and ``ops`` parameters are now used to specify the max processed
operations the user wishes to retrieve and the location in which to store them.
The API call returns the actual number of processed operations returned, this
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
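The burst-API contract corrected above (enqueue returns how many ops were actually accepted, which may be fewer than ``nb_ops``; dequeue returns up to ``nb_ops`` completed ops) can be sketched as a tiny standalone queue. This is not the DPDK regexdev implementation; the queue depth, the ``struct op`` stand-in, and the function names are invented for illustration only.

```c
/* Minimal standalone sketch of the burst enqueue/dequeue contract:
 * enqueue may accept fewer ops than requested when the queue pair is
 * full, and the caller must check the return value. All names and the
 * queue depth are hypothetical, not part of the DPDK API. */
#include <assert.h>
#include <stddef.h>

#define QP_CAP 4                /* hypothetical queue-pair depth */

struct op { int id; };          /* stand-in for a real op structure */

struct queue_pair {
    struct op *ring[QP_CAP];
    size_t count;
};

/* Enqueue up to nb_ops; return the number actually enqueued. */
static size_t burst_enqueue(struct queue_pair *qp, struct op **ops, size_t nb_ops)
{
    size_t i;
    for (i = 0; i < nb_ops && qp->count < QP_CAP; i++)
        qp->ring[qp->count++] = ops[i];
    return i;
}

/* Dequeue up to nb_ops completed ops into 'ops'; return the number returned.
 * (Completion ordering is not modeled here.) */
static size_t burst_dequeue(struct queue_pair *qp, struct op **ops, size_t nb_ops)
{
    size_t i;
    for (i = 0; i < nb_ops && qp->count > 0; i++)
        ops[i] = qp->ring[--qp->count];
    return i;
}
```

As the guide text notes, the data referenced by each op must stay valid from enqueue until that op is dequeued, since only a pointer crosses the queue.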
* [PATCH v5 47/54] doc: correct grammar in reorder library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (45 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 46/54] doc: correct errors in regexdev guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 48/54] doc: correct whitespace in RIB " Stephen Hemminger
` (6 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct minor grammar issues in the reorder library documentation:
- add missing "are" in "which are referred to as"
- remove extra space before "mbufs"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/reorder_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/reorder_lib.rst b/doc/guides/prog_guide/reorder_lib.rst
index 3fb5df5570..a5f529145d 100644
--- a/doc/guides/prog_guide/reorder_lib.rst
+++ b/doc/guides/prog_guide/reorder_lib.rst
@@ -36,7 +36,7 @@ mbufs.
Implementation Details
-------------------------
-The reorder library is implemented as a pair of buffers, which referred to as
+The reorder library is implemented as a pair of buffers, which are referred to as
the *Order* buffer and the *Ready* buffer.
On an insert call, valid mbufs are inserted directly into the Order buffer and
@@ -62,7 +62,7 @@ be reported as late packets when they arrive. The process of moving packets
to the Ready buffer continues beyond the minimum required until a gap,
i.e. missing mbuf, in the Order buffer is encountered.
-When draining mbufs, the reorder buffer would return mbufs in the Ready
+When draining mbufs, the reorder buffer would return mbufs in the Ready
buffer first and then from the Order buffer until a gap is found (mbufs that
have not arrived yet).
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
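The drain behaviour described in the corrected text (return mbufs in sequence until a gap, i.e. a missing mbuf, is encountered) can be sketched as a small standalone sequence-number buffer. The real library splits state across the Order and Ready buffers; here both are collapsed into one window, and the window size, types, and names are invented for illustration.

```c
/* Simplified standalone sketch of reorder-library behaviour: packets
 * carrying sequence numbers are inserted out of order, and draining
 * returns them in sequence until a not-yet-arrived packet is hit.
 * Not the DPDK implementation; all names are hypothetical. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define REORDER_WIN 8           /* hypothetical window size */

struct reorder_buf {
    void *slot[REORDER_WIN];
    bool present[REORDER_WIN];
    uint32_t next_seq;          /* next sequence number expected out */
};

/* Insert a packet by sequence number; false if late or too far ahead. */
static bool reorder_insert(struct reorder_buf *rb, uint32_t seq, void *pkt)
{
    if (seq < rb->next_seq || seq >= rb->next_seq + REORDER_WIN)
        return false;
    rb->slot[seq % REORDER_WIN] = pkt;
    rb->present[seq % REORDER_WIN] = true;
    return true;
}

/* Drain in-order packets until the first gap; return how many drained. */
static size_t reorder_drain(struct reorder_buf *rb, void **out, size_t max)
{
    size_t n = 0;
    while (n < max && rb->present[rb->next_seq % REORDER_WIN]) {
        uint32_t idx = rb->next_seq % REORDER_WIN;
        out[n++] = rb->slot[idx];
        rb->present[idx] = false;
        rb->next_seq++;
    }
    return n;
}
```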
* [PATCH v5 48/54] doc: correct whitespace in RIB library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (46 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 47/54] doc: correct grammar in reorder library guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 49/54] doc: correct incomplete sentence in ring " Stephen Hemminger
` (5 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Remove extra space before "the" in API applicability note.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/rib_lib.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/rib_lib.rst b/doc/guides/prog_guide/rib_lib.rst
index 40b7de3f1d..e95d85b2d1 100644
--- a/doc/guides/prog_guide/rib_lib.rst
+++ b/doc/guides/prog_guide/rib_lib.rst
@@ -24,7 +24,7 @@ Next hop IDs are represented by ``uint64_t`` values.
The API and implementation are very similar for IPv4 ``rte_rib`` API and IPv6 ``rte_rib6``
API, therefore only the ``rte_rib`` API will be discussed here.
- Everything within this document except for the size of the prefixes is applicable to the
+ Everything within this document except for the size of the prefixes is applicable to the
``rte_rib6`` API.
Internally RIB is represented as a binary tree as shown in :numref:`figure_rib_internals`:
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 49/54] doc: correct incomplete sentence in ring library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (47 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 48/54] doc: correct whitespace in RIB " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 50/54] doc: correct grammar in security " Stephen Hemminger
` (4 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct incomplete sentence "is shown in with" to "is shown in the figure
below, with".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/ring_lib.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/ring_lib.rst b/doc/guides/prog_guide/ring_lib.rst
index 98ef003aac..8e1dec288d 100644
--- a/doc/guides/prog_guide/ring_lib.rst
+++ b/doc/guides/prog_guide/ring_lib.rst
@@ -43,7 +43,7 @@ The disadvantages:
* Having many rings costs more in terms of memory than a linked list queue. An empty ring contains at least N objects.
-A simplified representation of a Ring is shown in with consumer and producer head and tail pointers to objects stored in the data structure.
+A simplified representation of a Ring is shown in the figure below, with consumer and producer head and tail pointers to objects stored in the data structure.
.. _figure_ring1:
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
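The producer/consumer head and tail pointers that the corrected sentence refers to can be sketched as a single-producer/single-consumer ring over a power-of-two array. This is only a sketch of the concept; the real ``rte_ring`` additionally supports multi-producer/multi-consumer access using separate head/tail pairs and compare-and-swap, and the names below are invented.

```c
/* Standalone sketch of a ring: a fixed-size array with producer and
 * consumer indices; empty when the indices are equal, full when they
 * differ by the ring size. RING_SZ must be a power of two so indices
 * wrap with a mask. Not the DPDK rte_ring implementation. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SZ 8               /* must be a power of two */

struct ring {
    void *objs[RING_SZ];
    uint32_t prod;              /* producer index (next free slot) */
    uint32_t cons;              /* consumer index (next object out) */
};

static bool ring_enqueue(struct ring *r, void *obj)
{
    if (r->prod - r->cons == RING_SZ)
        return false;           /* full */
    r->objs[r->prod++ & (RING_SZ - 1)] = obj;
    return true;
}

static bool ring_dequeue(struct ring *r, void **obj)
{
    if (r->prod == r->cons)
        return false;           /* empty */
    *obj = r->objs[r->cons++ & (RING_SZ - 1)];
    return true;
}
```

Note how the "empty ring contains at least N objects" disadvantage from the guide is visible here: the ``objs`` array is allocated at full size regardless of occupancy.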
* [PATCH v5 50/54] doc: correct grammar in security library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (48 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 49/54] doc: correct incomplete sentence in ring " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 51/54] doc: correct hyphenation in thread safety guide Stephen Hemminger
` (3 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct grammar issues in the security library documentation:
- fix "will contains" to "will contain"
- fix "in a optimal way" to "in an optimal way"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/rte_security.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index 5cfa39a71d..837acadfde 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -120,7 +120,7 @@ crypto processing the packet is presented to the host as a regular Rx packet
but all security protocol related headers are optionally removed from the
packet. e.g. in the case of IPsec, the IPsec tunnel headers (if any),
ESP/AH headers will be removed from the packet and the received packet
-will contains the decrypted packet only. The driver Rx path checks the
+will contain the decrypted packet only. The driver Rx path checks the
descriptors and based on the crypto status sets additional flags in
``rte_mbuf.ol_flags`` field. The driver would also set device-specific
metadata in ``RTE_SECURITY_DYNFIELD_NAME`` field.
@@ -696,7 +696,7 @@ Security Sessions are created to store the immutable fields of a particular Secu
Association for a particular protocol which is defined by a security session
configuration structure which is used in the operation processing of a packet flow.
Sessions are used to manage protocol specific information as well as crypto parameters.
-Security sessions cache this immutable data in a optimal way for the underlying PMD
+Security sessions cache this immutable data in an optimal way for the underlying PMD
and this allows further acceleration of the offload of Crypto workloads.
The Security framework provides APIs to create and free sessions for crypto/ethernet
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 51/54] doc: correct hyphenation in thread safety guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (49 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 50/54] doc: correct grammar in security " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 52/54] doc: correct errors in toeplitz hash library guide Stephen Hemminger
` (2 subsequent siblings)
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct spurious space in "thread- safe" to "thread-safe".
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/thread_safety.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/thread_safety.rst b/doc/guides/prog_guide/thread_safety.rst
index f7cda8bb32..e3e1a567e9 100644
--- a/doc/guides/prog_guide/thread_safety.rst
+++ b/doc/guides/prog_guide/thread_safety.rst
@@ -11,7 +11,7 @@ This section allows the developer to take these issues into account when buildin
The run-time environment of the DPDK is typically a single thread per logical core.
In some cases, it is not only multi-threaded, but multi-process.
Typically, it is best to avoid sharing data structures between threads and/or processes where possible.
-Where this is not possible, then the execution blocks must access the data in a thread- safe manner.
+Where this is not possible, then the execution blocks must access the data in a thread-safe manner.
Mechanisms such as atomics or locking can be used that will allow execution blocks to operate serially.
However, this can have an effect on the performance of the application.
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
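The atomics mechanism the corrected passage mentions can be illustrated with C11 ``stdatomic.h``: a shared counter updated through atomic operations instead of a plain ``++``, so concurrent increments cannot be lost. The demonstration below is single-threaded for brevity; the same calls are what make the update thread-safe when several execution blocks run it concurrently. The function name is invented for illustration.

```c
/* Sketch of thread-safe access via C11 atomics: each execution block
 * calls counter_inc() instead of incrementing a plain variable. */
#include <assert.h>
#include <stdatomic.h>

static atomic_uint shared_counter;

/* Atomically add 1 and return the previous value. */
static unsigned int counter_inc(void)
{
    return atomic_fetch_add_explicit(&shared_counter, 1,
                                     memory_order_relaxed);
}
```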
* [PATCH v5 52/54] doc: correct errors in toeplitz hash library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (50 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 51/54] doc: correct hyphenation in thread safety guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 53/54] doc: correct errors in vhost " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 54/54] doc: correct whitespace in efficient code guide Stephen Hemminger
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct sentence fragment "Could be used" to "They can be used".
Correct typo in code example: rte_thash_get_compliment should be
rte_thash_get_complement to match the actual API.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/toeplitz_hash_lib.rst | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/doc/guides/prog_guide/toeplitz_hash_lib.rst b/doc/guides/prog_guide/toeplitz_hash_lib.rst
index 61eaafd169..cc22236836 100644
--- a/doc/guides/prog_guide/toeplitz_hash_lib.rst
+++ b/doc/guides/prog_guide/toeplitz_hash_lib.rst
@@ -40,7 +40,7 @@ The ``rte_softrss_be`` function is a faster implementation,
but it expects ``rss_key`` to be converted to the host byte order.
The last two functions are vectorized implementations using
-Galois Fields New Instructions. Could be used if ``rte_thash_gfni_supported`` is true.
+Galois Fields New Instructions. They can be used if ``rte_thash_gfni_supported`` is true.
They expect the tuple to be in network byte order.
``rte_thash_gfni()`` calculates the hash value for a single tuple, and
@@ -306,7 +306,7 @@ collision. This is shown in the code below.
uint32_t rev_hash = rte_softrss((uint32_t *)&rev_tuple, RTE_THASH_V4_L4_LEN, new_key);
/* Get the adjustment bits for the src port to get a new port. */
- uint32_t adj = rte_thash_get_compliment(h, rev_hash, orig_hash);
+ uint32_t adj = rte_thash_get_complement(h, rev_hash, orig_hash);
/* Adjust the source port bits. */
uint16_t new_sport = tuple.v4.sport ^ adj;
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
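The hash that ``rte_softrss`` computes can be sketched from its definition: for every set bit of the input tuple, XOR into the result the 32-bit window of the RSS key starting at that bit position. The scalar version below is an illustrative reimplementation, not the DPDK or GFNI code; it assumes the key is at least 4 bytes longer than the tuple so the window never runs out of key bits, and both key and tuple are raw bytes in network byte order.

```c
/* Illustrative scalar Toeplitz hash. The top 32 bits of 'v' hold the
 * current key window; one more key byte is staged below the window
 * before each input byte is consumed. Assumes key_len >= data_len + 4. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static uint32_t toeplitz_hash(const uint8_t *key, size_t key_len,
                              const uint8_t *data, size_t data_len)
{
    uint32_t hash = 0;
    /* Seed the window with the first four key bytes. */
    uint64_t v = ((uint64_t)key[0] << 56) | ((uint64_t)key[1] << 48) |
                 ((uint64_t)key[2] << 40) | ((uint64_t)key[3] << 32);
    size_t kpos = 4;

    for (size_t i = 0; i < data_len; i++) {
        /* Stage the next key byte just below the 32-bit window. */
        if (kpos < key_len)
            v |= (uint64_t)key[kpos] << 24;
        kpos++;
        for (int b = 7; b >= 0; b--) {
            if ((data[i] >> b) & 1)
                hash ^= (uint32_t)(v >> 32);
            v <<= 1;        /* slide the window left by one bit */
        }
    }
    return hash;
}
```

With a key whose first four bytes are ``0A 0B 0C 0D`` (rest zero), an input whose only set bit is the very first one hashes to exactly those 32 key bits, ``0x0A0B0C0D``, which makes the sliding-window definition easy to check by hand.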
* [PATCH v5 53/54] doc: correct errors in vhost library guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (51 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 52/54] doc: correct errors in toeplitz hash library guide Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 54/54] doc: correct whitespace in efficient code guide Stephen Hemminger
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Correct various issues in the vhost library documentation:
- fix "In another words" to "In other words"
- fix word order "for given a packet" to "for a given packet"
- fix "to be dequeue" to "to be dequeued"
- fix typo "pkmbuf" to "pktmbuf"
- fix plural agreement "application that doesn't" to "applications
that don't"
- add missing period after NET_STATS_ENABLE description
- fix subject-verb agreement "features is" to "features are"
- fix verb tense "stored them at" to "stores them in"
- fix awkward phrasing in DMA vChannel cleanup description
- fix "which responses to create" to "which is responsible for creating"
- fix "to response" to "to respond"
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/vhost_lib.rst | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
index 0c2b4d020a..345a621716 100644
--- a/doc/guides/prog_guide/vhost_lib.rst
+++ b/doc/guides/prog_guide/vhost_lib.rst
@@ -5,7 +5,7 @@ Vhost Library
=============
The vhost library implements a user space virtio net server allowing the user
-to manipulate the virtio ring directly. In another words, it allows the user
+to manipulate the virtio ring directly. In other words, it allows the user
to fetch/put packets from/to the VM virtio net device. To achieve this, a
vhost library should be able to:
@@ -73,7 +73,7 @@ The following is an overview of some key Vhost API functions:
Enabling this flag forces vhost dequeue function to only provide linear
pktmbuf (no multi-segmented pktmbuf).
- The vhost library by default provides a single pktmbuf for given a
+ The vhost library by default provides a single pktmbuf for a given
packet, but if for some reason the data doesn't fit into a single
pktmbuf (e.g., TSO is enabled), the library will allocate additional
pktmbufs from the same mempool and chain them together to create a
@@ -81,7 +81,7 @@ The following is an overview of some key Vhost API functions:
However, the vhost application needs to support multi-segmented format.
If the vhost application does not support that format and requires large
- buffers to be dequeue, this flag should be enabled to force only linear
+ buffers to be dequeued, this flag should be enabled to force only linear
buffers (see RTE_VHOST_USER_EXTBUF_SUPPORT) or drop the packet.
It is disabled by default.
@@ -89,7 +89,7 @@ The following is an overview of some key Vhost API functions:
- ``RTE_VHOST_USER_EXTBUF_SUPPORT``
Enabling this flag allows vhost dequeue function to allocate and attach
- an external buffer to a pktmbuf if the pkmbuf doesn't provide enough
+ an external buffer to a pktmbuf if the pktmbuf doesn't provide enough
space to store all data.
This is useful when the vhost application wants to support large packets
@@ -99,7 +99,7 @@ The following is an overview of some key Vhost API functions:
rte_pktmbuf_attach_extbuf().
See RTE_VHOST_USER_LINEARBUF_SUPPORT as well to disable multi-segmented
- mbufs for application that doesn't support chained mbufs.
+ mbufs for applications that don't support chained mbufs.
It is disabled by default.
@@ -137,7 +137,7 @@ The following is an overview of some key Vhost API functions:
rte_vhost_stats_get() to collect statistics, and rte_vhost_stats_reset() to
reset them.
- It is disabled by default
+ It is disabled by default.
* ``rte_vhost_driver_set_features(path, features)``
@@ -167,7 +167,7 @@ The following is an overview of some key Vhost API functions:
* ``features_changed(int vid, uint64_t features)``
- This callback is invoked when the features is changed. For example,
+ This callback is invoked when the features are changed. For example,
``VHOST_F_LOG_ALL`` will be set/cleared at the start/end of live
migration, respectively.
@@ -200,7 +200,7 @@ The following is an overview of some key Vhost API functions:
* ``rte_vhost_dequeue_burst(vid, queue_id, mbuf_pool, pkts, count)``
- Receives (dequeues) ``count`` packets from guest, and stored them at ``pkts``.
+ Receives (dequeues) ``count`` packets from guest, and stores them in ``pkts``.
* ``rte_vhost_crypto_create(vid, cryptodev_id, sess_mempool, socket_id)``
@@ -331,7 +331,7 @@ The following is an overview of some key Vhost API functions:
* ``rte_vhost_async_dma_unconfigure(dma_id, vchan_id)``
- Clean DMA vChannel finished to use. After this function is called,
+ Clean up a DMA vChannel after use is finished. After this function is called,
the specified DMA vChannel should no longer be used by the Vhost library.
* ``rte_vhost_notify_guest(int vid, uint16_t queue_id)``
@@ -360,7 +360,7 @@ vhost-user implementation has two options:
* DPDK vhost-user acts as the client.
Unlike the server mode, this mode doesn't create the socket file;
- it just tries to connect to the server (which responses to create the
+ it just tries to connect to the server (which is responsible for creating the
file instead).
When the DPDK vhost-user application restarts, DPDK vhost-user will try to
@@ -469,7 +469,7 @@ Finally, a set of device ops is defined for device specific operations:
* ``migration_done``
- Called to allow the device to response to RARP sending.
+ Called to allow the device to respond to RARP sending.
* ``get_vfio_group_fd``
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
* [PATCH v5 54/54] doc: correct whitespace in efficient code guide
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
` (52 preceding siblings ...)
2026-01-18 19:10 ` [PATCH v5 53/54] doc: correct errors in vhost " Stephen Hemminger
@ 2026-01-18 19:10 ` Stephen Hemminger
53 siblings, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-01-18 19:10 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Remove extra space in "rte_malloc ()" function name reference.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
doc/guides/prog_guide/writing_efficient_code.rst | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst
index b63fa8e459..6fbdb57fa8 100644
--- a/doc/guides/prog_guide/writing_efficient_code.rst
+++ b/doc/guides/prog_guide/writing_efficient_code.rst
@@ -41,7 +41,7 @@ If you really need dynamic allocation in the data plane, it is better to use a m
This API is provided by librte_mempool.
This data structure provides several services that increase performance, such as memory alignment of objects,
lockless access to objects, NUMA awareness, bulk get/put and per-lcore cache.
-The rte_malloc () function uses a similar concept to mempools.
+The rte_malloc() function uses a similar concept to mempools.
Concurrent Access to the Same Memory Area
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
--
2.51.0
^ permalink raw reply related [flat|nested] 118+ messages in thread
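The mempool idea behind the corrected ``rte_malloc()`` passage (preallocate objects once, then get/put them in bulk on the data path instead of calling the allocator) can be sketched as a simple free-object stack. The sizes and names below are invented for illustration; the real ``librte_mempool`` adds memory alignment, NUMA awareness, lockless access, and per-lcore caches on top of this shape.

```c
/* Standalone sketch of a preallocated object pool with bulk get/put.
 * All allocation happens once in pool_init() (control path); the data
 * path only moves pointers. Error-path cleanup is omitted for brevity. */
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define POOL_OBJS 32            /* hypothetical pool size */

struct pool {
    void *free_objs[POOL_OBJS];
    size_t nb_free;
};

/* One-time setup: preallocate every object up front. */
static int pool_init(struct pool *p, size_t obj_size)
{
    p->nb_free = 0;
    for (size_t i = 0; i < POOL_OBJS; i++) {
        void *obj = malloc(obj_size);
        if (obj == NULL)
            return -1;
        p->free_objs[p->nb_free++] = obj;
    }
    return 0;
}

/* Data path: grab n objects at once, all-or-nothing. */
static int pool_get_bulk(struct pool *p, void **objs, size_t n)
{
    if (p->nb_free < n)
        return -1;
    for (size_t i = 0; i < n; i++)
        objs[i] = p->free_objs[--p->nb_free];
    return 0;
}

/* Data path: return n objects to the pool. */
static void pool_put_bulk(struct pool *p, void *const *objs, size_t n)
{
    for (size_t i = 0; i < n; i++)
        p->free_objs[p->nb_free++] = objs[i];
}
```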
* Re: [PATCH v4 02/11] doc: correct grammar and typos in argparse library guide
2026-01-14 22:26 ` [PATCH v4 02/11] doc: correct grammar and typos in argparse library guide Stephen Hemminger
@ 2026-01-19 0:50 ` fengchengwen
0 siblings, 0 replies; 118+ messages in thread
From: fengchengwen @ 2026-01-19 0:50 UTC (permalink / raw)
To: Stephen Hemminger, dev; +Cc: Nandini Persad
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
On 1/15/2026 6:26 AM, Stephen Hemminger wrote:
> Changes:
> - Add missing articles ("a user-friendly", "a long_name field")
> - Fix awkward phrasing ("take with" -> "have")
> - Correct verb forms ("automatic generate" -> "automatic generation of",
> "are parsing" -> "are parsed", "don't" -> "doesn't")
> - Fix typo in field name (val_save -> val_saver)
> - Fix stray backtick in code example
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 2/9] doc: reword argparse section in prog guide
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
2024-06-22 14:53 ` Stephen Hemminger
@ 2026-03-30 16:08 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-03-30 16:08 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:47 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> I have made small edits for syntax in this section.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Superseded by https://patchwork.dpdk.org/project/dpdk/patch/20260118191323.241013-2-stephen@networkplumber.org/
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 9/9] doc: reword rcu library section in prog guide
2024-06-21 2:32 ` [PATCH v2 9/9] doc: reword rcu " Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
@ 2026-03-31 22:35 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-03-31 22:35 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:54 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Simple syntax changes made to the rcu library section in programmer's guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
These changes were integrated into https://patchwork.dpdk.org/project/dpdk/patch/20260118191323.241013-9-stephen@networkplumber.org/
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 8/9] doc: reword stack library section in prog guide
2024-06-21 2:32 ` [PATCH v2 8/9] doc: reword stack library " Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
@ 2026-03-31 22:36 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-03-31 22:36 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:53 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor changes made to wording of the stack library section in prog guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
Integrated into https://patchwork.dpdk.org/project/dpdk/patch/20260118191323.241013-8-stephen@networkplumber.org/
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 7/9] doc: reword cmdline section in prog guide
2024-06-21 2:32 ` [PATCH v2 7/9] doc: reword cmdline " Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
@ 2026-03-31 22:45 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-03-31 22:45 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:52 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor syntax edits made to the cmdline section.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
These changes got folded into https://patchwork.dpdk.org/project/dpdk/patch/20260118191323.241013-6-stephen@networkplumber.org/
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 6/9] doc: reword log library section in prog guide
2024-06-21 2:32 ` [PATCH v2 6/9] doc: reword log " Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
@ 2026-03-31 22:47 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-03-31 22:47 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:51 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor changes made for syntax in the log library section and 7.1
> section of the programmer's guide. A couple sentences at the end of the
> trace library section were also edited.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
Integrated into https://patchwork.dpdk.org/project/dpdk/patch/20260118191323.241013-5-stephen@networkplumber.org/
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 5/9] doc: reword trace library section in prog guide
2024-06-21 2:32 ` [PATCH v2 5/9] doc: reword trace library " Nandini Persad
2024-06-22 14:54 ` Stephen Hemminger
@ 2026-03-31 22:49 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-03-31 22:49 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:50 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor syntax edits were made to sect the trace library section of prog guide.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
Integrated this into https://patchwork.dpdk.org/project/dpdk/patch/20260118191323.241013-7-stephen@networkplumber.org/
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 4/9] doc: reword service cores section in prog guide
2024-06-21 2:32 ` [PATCH v2 4/9] doc: reword service cores section in prog guide Nandini Persad
2024-06-22 14:53 ` Stephen Hemminger
@ 2026-03-31 22:50 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-03-31 22:50 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:49 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> I've made minor syntax changes to section 8 of programmer's guide, service cores.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
Integrated into other patch https://patchwork.dpdk.org/project/dpdk/patch/20260118191323.241013-3-stephen@networkplumber.org/
^ permalink raw reply [flat|nested] 118+ messages in thread
* Re: [PATCH v2 3/9] doc: reword design section in contributors guidelines
2024-06-21 2:32 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Nandini Persad
2024-06-22 14:47 ` [PATCH] doc/design: minor cleanus Stephen Hemminger
@ 2026-03-31 22:53 ` Stephen Hemminger
1 sibling, 0 replies; 118+ messages in thread
From: Stephen Hemminger @ 2026-03-31 22:53 UTC (permalink / raw)
To: Nandini Persad; +Cc: dev
On Thu, 20 Jun 2024 19:32:48 -0700
Nandini Persad <nandinipersad361@gmail.com> wrote:
> Minor editing was made for grammar and syntax of design section.
>
> Signed-off-by: Nandini Persad <nandinipersad361@gmail.com>
> ---
Integrated into https://patchwork.dpdk.org/project/dpdk/patch/20260116201738.74578-2-stephen@networkplumber.org/
^ permalink raw reply [flat|nested] 118+ messages in thread
end of thread, other threads:[~2026-03-31 22:53 UTC | newest]
Thread overview: 118+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-05-13 15:59 [PATCH 0/9] reowrd in prog guide Nandini Persad
2024-05-13 15:59 ` [PATCH 1/9] doc: reword design section in contributors guidelines Nandini Persad
2024-05-13 15:59 ` [PATCH 2/9] doc: reword pmd section in prog guide Nandini Persad
2024-05-13 15:59 ` [PATCH 3/9] doc: reword argparse " Nandini Persad
2024-05-13 19:01 ` Stephen Hemminger
2024-05-13 15:59 ` [PATCH 4/9] doc: reword service cores " Nandini Persad
2024-05-13 15:59 ` [PATCH 5/9] doc: reword trace library " Nandini Persad
2024-05-13 15:59 ` [PATCH 6/9] doc: reword log " Nandini Persad
2024-05-13 15:59 ` [PATCH 7/9] doc: reword cmdline " Nandini Persad
2024-05-13 15:59 ` [PATCH 8/9] doc: reword stack library " Nandini Persad
2024-05-13 15:59 ` [PATCH 9/9] doc: reword rcu " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 1/9] doc: reword pmd " Nandini Persad
2024-06-21 2:32 ` [PATCH v2 2/9] doc: reword argparse " Nandini Persad
2024-06-22 14:53 ` Stephen Hemminger
2026-03-30 16:08 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Nandini Persad
2024-06-22 14:47 ` [PATCH] doc/design: minor cleanus Stephen Hemminger
2024-06-24 15:07 ` Thomas Monjalon
2026-03-31 22:53 ` [PATCH v2 3/9] doc: reword design section in contributors guidelines Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 4/9] doc: reword service cores section in prog guide Nandini Persad
2024-06-22 14:53 ` Stephen Hemminger
2026-03-31 22:50 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 5/9] doc: reword trace library " Nandini Persad
2024-06-22 14:54 ` Stephen Hemminger
2026-03-31 22:49 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 6/9] doc: reword log " Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:47 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 7/9] doc: reword cmdline " Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:45 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 8/9] doc: reword stack library " Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:36 ` Stephen Hemminger
2024-06-21 2:32 ` [PATCH v2 9/9] doc: reword rcu " Nandini Persad
2024-06-22 14:55 ` Stephen Hemminger
2026-03-31 22:35 ` Stephen Hemminger
2024-06-22 14:52 ` [PATCH v2 1/9] doc: reword pmd " Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 00/11] doc: programmers guide corrections Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 01/11] doc: correct grammar and punctuation errors in ethdev guide Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 02/11] doc: correct grammar and typos in argparse library guide Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 03/11] doc: correct grammar and typos in design guide Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 04/11] doc: correct errors in Linux system requirements guide Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 05/11] doc: correct grammar in service cores guide Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 06/11] doc: correct grammar and errors in trace library guide Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 07/11] doc: correct typos in log " Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 08/11] doc: correct errors in command-line " Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 09/11] doc: correct errors in trace " Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 10/11] doc: correct errors in stack " Stephen Hemminger
2026-01-13 22:51 ` [PATCH v3 11/11] doc: correct errors in RCU " Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 00/11] doc: programmers guide corrections Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 01/11] doc: correct grammar and punctuation errors in ethdev guide Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 02/11] doc: correct grammar and typos in argparse library guide Stephen Hemminger
2026-01-19 0:50 ` fengchengwen
2026-01-14 22:26 ` [PATCH v4 03/11] doc: correct grammar and typos in design guide Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 04/11] doc: correct errors in Linux system requirements guide Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 05/11] doc: correct grammar in service cores guide Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 06/11] doc: correct grammar and errors in trace library guide Stephen Hemminger
2026-01-14 22:26 ` [PATCH v4 07/11] doc: correct typos in log " Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 08/11] doc: correct errors in command-line " Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 09/11] doc: correct errors in trace " Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 10/11] doc: correct errors in stack " Stephen Hemminger
2026-01-14 22:27 ` [PATCH v4 11/11] doc: correct errors in RCU " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 00/54] doc: programmers guide corrections Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 01/54] doc: correct grammar and typos in argparse library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 02/54] doc: correct grammar in service cores guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 03/54] doc: correct grammar and errors in trace library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 04/54] doc: correct typos in log " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 05/54] doc: correct errors in command-line " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 06/54] doc: correct errors in trace " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 07/54] doc: correct errors in stack " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 08/54] doc: correct errors in RCU " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 09/54] doc: correct grammar and formatting in ASan guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 10/54] doc: correct grammar and typos in bbdev guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 11/54] doc: correct grammar and formatting in bpf lib guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 12/54] doc: correct grammar and typos in meson build guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 13/54] doc: correct grammar and typos in cryptodev guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 14/54] doc: correct grammar and formatting in compressdev guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 15/54] doc: correct grammar in dmadev guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 16/54] doc: correct grammar in efd guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 17/54] doc: correct grammar in EAL guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 18/54] doc: correct double space in FIB guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 19/54] doc: correct grammar in GRO guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 20/54] doc: correct grammar in GSO guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 21/54] doc: correct typos and grammar in graph guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 22/54] doc: correct grammar in hash guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 23/54] doc: correct grammar and typos in IP fragment guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 24/54] doc: correct double spaces in IPsec guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 25/54] doc: correct grammar in lcore variables guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 26/54] doc: correct typo in link bonding guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 27/54] doc: correct grammar in LTO guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 28/54] doc: correct grammar in LPM guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 29/54] doc: correct grammar and typo in LPM6 guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 30/54] doc: correct grammar in introduction Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 31/54] doc: correct grammar in mbuf library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 32/54] doc: correct grammar in membership " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 33/54] doc: correct errors in mempool " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 34/54] doc: correct style in meson unit tests guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 35/54] doc: correct errors in metrics library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 36/54] doc: correct grammar in mldev " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 37/54] doc: correct grammar in multi-process guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 38/54] doc: correct grammar in overview Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 39/54] doc: correct grammar in ACL library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 40/54] doc: correct typos in packet distributor guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 41/54] doc: correct grammar in packet framework guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 42/54] doc: correct grammar in PDCP library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 43/54] doc: correct grammar in pdump " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 44/54] doc: correct typos in power management guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 45/54] doc: correct grammar in profiling guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 46/54] doc: correct errors in regexdev guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 47/54] doc: correct grammar in reorder library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 48/54] doc: correct whitespace in RIB " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 49/54] doc: correct incomplete sentence in ring " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 50/54] doc: correct grammar in security " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 51/54] doc: correct hyphenation in thread safety guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 52/54] doc: correct errors in toeplitz hash library guide Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 53/54] doc: correct errors in vhost " Stephen Hemminger
2026-01-18 19:10 ` [PATCH v5 54/54] doc: correct whitespace in efficient code guide Stephen Hemminger