* [RFC PATCH 01/44] eal: define new functionally distinct config structs
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
@ 2026-04-29 16:57 ` Bruce Richardson
2026-04-29 19:03 ` Stephen Hemminger
2026-04-29 16:57 ` [RFC PATCH 02/44] eal: move memory request fields to user config Bruce Richardson
` (44 subsequent siblings)
45 siblings, 1 reply; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:57 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Rather than having a generic internal_config structure which mixes
system configuration, memory configuration, and user options in one
struct, create separate structures for the platform info, the system
runtime state and the user-provided config. Later patches will populate
these structures with fields moved from their current location.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 31 ++++++++++++++++++++++++++++++
lib/eal/common/eal_internal_cfg.h | 28 +++++++++++++++++++++++++++
2 files changed, 59 insertions(+)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index e2e69a75fb..5427b30659 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -5,6 +5,7 @@
#include <rte_string_fns.h>
#include <eal_export.h>
+#include "eal_internal_cfg.h"
#include "eal_private.h"
#include "eal_filesystem.h"
#include "eal_memcfg.h"
@@ -27,6 +28,15 @@ static struct rte_config rte_config = {
/* platform-specific runtime dir */
static char runtime_dir[UNIX_PATH_MAX];
+/* user-provided EAL configuration */
+static struct eal_user_cfg eal_user_cfg;
+
+/* platform-discovered and runtime EAL state */
+static struct eal_platform_info eal_platform_info;
+
+/* internal runtime configuration */
+static struct eal_runtime_state eal_runtime_state;
+
/* internal configuration */
static struct internal_config internal_config;
@@ -63,6 +73,27 @@ eal_get_internal_configuration(void)
return &internal_config;
}
+/* Return a pointer to the user configuration structure */
+struct eal_user_cfg *
+eal_get_user_configuration(void)
+{
+ return &eal_user_cfg;
+}
+
+/* Return a pointer to the platform state structure */
+struct eal_platform_info *
+eal_get_platform_info(void)
+{
+ return &eal_platform_info;
+}
+
+/* Return a pointer to the runtime state structure */
+struct eal_runtime_state *
+eal_get_runtime_state(void)
+{
+ return &eal_runtime_state;
+}
+
RTE_EXPORT_SYMBOL(rte_eal_iova_mode)
enum rte_iova_mode
rte_eal_iova_mode(void)
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index fac45cbe66..fb4afca5b8 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -48,6 +48,31 @@ struct hugepage_file_discipline {
bool unlink_existing;
};
+/**
+ * User-provided EAL initialization configuration.
+ * Immutable after initialization, so no need for atomic types or locks.
+ */
+struct eal_user_cfg {
+ uint8_t reserved;
+};
+
+/**
+ * Discovered information about cores, memory, etc. on the system.
+ * Immutable after initialization, so no need for atomic types or locks.
+ */
+struct eal_platform_info {
+ uint8_t reserved;
+};
+
+/**
+ * Internal EAL runtime state
+ * May be modified at runtime, so access must be protected by locks or atomic types
+ * as appropriate.
+ */
+struct eal_runtime_state {
+ uint8_t reserved;
+};
+
/**
* internal configuration
*/
@@ -107,6 +132,9 @@ struct internal_config {
unsigned int no_auto_probing; /**< true to switch from block-listing to allow-listing */
};
+struct eal_user_cfg *eal_get_user_configuration(void);
+struct eal_platform_info *eal_get_platform_info(void);
+struct eal_runtime_state *eal_get_runtime_state(void);
void eal_reset_internal_config(struct internal_config *internal_cfg);
#endif /* EAL_INTERNAL_CFG_H */
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread
* Re: [RFC PATCH 01/44] eal: define new functionally distinct config structs
2026-04-29 16:57 ` [RFC PATCH 01/44] eal: define new functionally distinct config structs Bruce Richardson
@ 2026-04-29 19:03 ` Stephen Hemminger
2026-04-30 7:56 ` Bruce Richardson
0 siblings, 1 reply; 50+ messages in thread
From: Stephen Hemminger @ 2026-04-29 19:03 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, techboard
On Wed, 29 Apr 2026 17:57:53 +0100
Bruce Richardson <bruce.richardson@intel.com> wrote:
> +/**
> + * User-provided EAL initialization configuration.
> + * Immutable after initialization, so no need for atomic types or locks.
> + */
> +struct eal_user_cfg {
> + uint8_t reserved;
> +};
> +
I assume reserved is only a placeholder to be clobbered in later patches.
Internal structures should not have reserved fields.
* Re: [RFC PATCH 01/44] eal: define new functionally distinct config structs
2026-04-29 19:03 ` Stephen Hemminger
@ 2026-04-30 7:56 ` Bruce Richardson
0 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-30 7:56 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, techboard
On Wed, Apr 29, 2026 at 12:03:48PM -0700, Stephen Hemminger wrote:
> On Wed, 29 Apr 2026 17:57:53 +0100
> Bruce Richardson <bruce.richardson@intel.com> wrote:
>
> > +/**
> > + * User-provided EAL initialization configuration.
> > + * Immutable after initialization, so no need for atomic types or locks.
> > + */
> > +struct eal_user_cfg {
> > + uint8_t reserved;
> > +};
> > +
>
> I assume reserved is only placeholder to be clobbered in later patches.
> Internal structures should not have reserved fields
Yep. As soon as I start moving in actual fields the reserved ones
disappear. I think some compilers complained about the empty structs so I
had to stick something in!
The other alternative was just to introduce the structs as they start
being populated. However, I feel the intent is far clearer when all
three are added up front.
/Bruce
* [RFC PATCH 02/44] eal: move memory request fields to user config
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
2026-04-29 16:57 ` [RFC PATCH 01/44] eal: define new functionally distinct config structs Bruce Richardson
@ 2026-04-29 16:57 ` Bruce Richardson
2026-04-29 16:57 ` [RFC PATCH 03/44] eal: move NUMA " Bruce Richardson
` (43 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:57 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Move the basic memory request information supplied by the user from the
internal config to the user config structure in EAL. For the number of
memory channels or ranks, limit the field size to uint8_t rather than
storing a full 32-bit value per entry.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_dynmem.c | 11 ++++++-----
lib/eal/common/eal_common_memory.c | 8 ++++----
lib/eal/common/eal_common_options.c | 29 ++++++++++++++++++-----------
lib/eal/common/eal_internal_cfg.h | 8 ++++----
lib/eal/freebsd/eal.c | 7 ++++---
lib/eal/freebsd/eal_memory.c | 11 ++++++-----
lib/eal/linux/eal.c | 5 +++--
lib/eal/linux/eal_memory.c | 12 ++++++------
lib/eal/windows/eal.c | 5 +++--
lib/eal/windows/eal_memory.c | 3 ++-
10 files changed, 56 insertions(+), 43 deletions(-)
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index 8f51d6dd4a..5bd22f6ef0 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -376,7 +376,8 @@ eal_dynmem_calc_num_pages_per_socket(
uint64_t remaining_mem, cur_mem;
const struct internal_config *internal_conf =
eal_get_internal_configuration();
- uint64_t total_mem = internal_conf->memory;
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ uint64_t total_mem = user_cfg->memory;
if (num_hp_info == 0)
return -1;
@@ -400,12 +401,12 @@ eal_dynmem_calc_num_pages_per_socket(
* sockets according to number of cores from CPU mask present
* on each socket.
*/
- total_size = internal_conf->memory;
+ total_size = user_cfg->memory;
for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
socket++) {
/* Set memory amount per socket */
- default_size = internal_conf->memory *
+ default_size = user_cfg->memory *
cpu_per_socket[socket] / rte_lcore_count();
/* Limit to maximum available memory on socket */
@@ -436,7 +437,7 @@ eal_dynmem_calc_num_pages_per_socket(
/* in 32-bit mode, allocate all of the memory only on main
* lcore socket
*/
- total_size = internal_conf->memory;
+ total_size = user_cfg->memory;
for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
socket++) {
struct rte_config *cfg = rte_eal_get_configuration();
@@ -520,7 +521,7 @@ eal_dynmem_calc_num_pages_per_socket(
/* if we didn't satisfy total memory requirements */
if (total_mem > 0) {
- requested = internal_conf->memory / 0x100000;
+ requested = user_cfg->memory / 0x100000;
available = requested - (total_mem / 0x100000);
EAL_LOG(ERR, "Not enough memory available! Requested: %uMB, available: %uMB",
requested, available);
diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index dccf9406c5..208e3583b0 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -662,15 +662,15 @@ static int
rte_eal_memdevice_init(void)
{
struct rte_config *config;
- const struct internal_config *internal_conf;
+ const struct eal_user_cfg *user_cfg;
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
return 0;
- internal_conf = eal_get_internal_configuration();
+ user_cfg = eal_get_user_configuration();
config = rte_eal_get_configuration();
- config->mem_config->nchannel = internal_conf->force_nchannel;
- config->mem_config->nrank = internal_conf->force_nrank;
+ config->mem_config->nchannel = user_cfg->force_nchannel;
+ config->mem_config->nrank = user_cfg->force_nrank;
return 0;
}
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 290386dc63..11b77bc9dd 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -493,11 +493,12 @@ eal_get_hugefile_prefix(void)
void
eal_reset_internal_config(struct internal_config *internal_cfg)
{
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
int i;
- internal_cfg->memory = 0;
- internal_cfg->force_nrank = 0;
- internal_cfg->force_nchannel = 0;
+ user_cfg->memory = 0;
+ user_cfg->force_nrank = 0;
+ user_cfg->force_nchannel = 0;
internal_cfg->hugefile_prefix = NULL;
internal_cfg->hugepage_dir = NULL;
internal_cfg->hugepage_file.unlink_before_mapping = false;
@@ -1957,6 +1958,7 @@ int
eal_parse_args(void)
{
struct internal_config *int_cfg = eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct rte_config *rte_cfg = rte_eal_get_configuration();
bool remap_lcores = (args.remap_lcore_ids != NULL);
struct arg_list_elem *arg;
@@ -2095,23 +2097,27 @@ eal_parse_args(void)
/* memory options */
if (args.memory_size != NULL) {
- int_cfg->memory = atoi(args.memory_size);
- int_cfg->memory *= 1024ULL;
- int_cfg->memory *= 1024ULL;
+ user_cfg->memory = atoi(args.memory_size);
+ user_cfg->memory *= 1024ULL;
+ user_cfg->memory *= 1024ULL;
}
if (args.memory_channels != NULL) {
- int_cfg->force_nchannel = atoi(args.memory_channels);
- if (int_cfg->force_nchannel == 0) {
+ int nchannel = atoi(args.memory_channels);
+
+ if (nchannel <= 0 || nchannel > UINT8_MAX) {
EAL_LOG(ERR, "invalid memory channel parameter");
return -1;
}
+ user_cfg->force_nchannel = (uint8_t)nchannel;
}
if (args.memory_ranks != NULL) {
- int_cfg->force_nrank = atoi(args.memory_ranks);
- if (int_cfg->force_nrank == 0 || int_cfg->force_nrank > 16) {
+ int nrank = atoi(args.memory_ranks);
+
+ if (nrank <= 0 || nrank > 16 || nrank > UINT8_MAX) {
EAL_LOG(ERR, "invalid memory rank parameter");
return -1;
}
+ user_cfg->force_nrank = (uint8_t)nrank;
}
if (args.no_huge) {
int_cfg->no_hugetlbfs = 1;
@@ -2354,6 +2360,7 @@ eal_cleanup_config(struct internal_config *internal_cfg)
int
eal_adjust_config(struct internal_config *internal_cfg)
{
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
int i;
if (internal_cfg->process_type == RTE_PROC_AUTO)
@@ -2364,7 +2371,7 @@ eal_adjust_config(struct internal_config *internal_cfg)
/* if no memory amounts were requested, this will result in 0 and
* will be overridden later, right after eal_hugepage_info_init() */
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- internal_cfg->memory += internal_cfg->numa_mem[i];
+ user_cfg->memory += internal_cfg->numa_mem[i];
return 0;
}
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index fb4afca5b8..1625c697b2 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -13,6 +13,7 @@
#include <rte_eal.h>
#include <rte_os_shim.h>
#include <rte_pci_dev_feature_defs.h>
+#include <stdint.h>
#include "eal_thread.h"
@@ -53,7 +54,9 @@ struct hugepage_file_discipline {
* Immutable after initialization, so no need for atomic types or locks.
*/
struct eal_user_cfg {
- uint8_t reserved;
+ size_t memory; /**< amount of asked memory */
+ uint8_t force_nchannel; /**< force number of channels */
+ uint8_t force_nrank; /**< force number of ranks */
};
/**
@@ -77,9 +80,6 @@ struct eal_runtime_state {
* internal configuration
*/
struct internal_config {
- volatile size_t memory; /**< amount of asked memory */
- volatile unsigned force_nchannel; /**< force number of channels */
- volatile unsigned force_nrank; /**< force number of ranks */
volatile unsigned no_hugetlbfs; /**< true to disable hugetlbfs */
struct hugepage_file_discipline hugepage_file;
volatile unsigned no_pci; /**< true to disable PCI */
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 60f5e676a8..1779362686 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -415,6 +415,7 @@ rte_eal_init(int argc, char **argv)
const struct rte_config *config = rte_eal_get_configuration();
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
bool has_phys_addr;
enum rte_iova_mode iova_mode;
@@ -587,11 +588,11 @@ rte_eal_init(int argc, char **argv)
}
}
- if (internal_conf->memory == 0 && internal_conf->force_numa == 0) {
+ if (user_cfg->memory == 0 && internal_conf->force_numa == 0) {
if (internal_conf->no_hugetlbfs)
- internal_conf->memory = MEMSIZE_IF_NO_HUGE_PAGE;
+ user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
else
- internal_conf->memory = eal_get_hugepage_mem_size();
+ user_cfg->memory = eal_get_hugepage_mem_size();
}
if (internal_conf->vmware_tsc_map == 1) {
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index cd608db9f9..a0a398ab55 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -70,11 +70,12 @@ rte_eal_hugepage_init(void)
struct rte_memseg_list *msl;
uint64_t mem_sz, page_sz;
int n_segs;
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* create a memseg list */
msl = &mcfg->memsegs[0];
- mem_sz = internal_conf->memory;
+ mem_sz = user_cfg->memory;
page_sz = RTE_PGSIZE_4K;
n_segs = mem_sz / page_sz;
@@ -109,7 +110,7 @@ rte_eal_hugepage_init(void)
hpi = &internal_conf->hugepage_info[i];
page_sz = hpi->hugepage_sz;
max_pages = hpi->num_pages[0];
- mem_needed = RTE_ALIGN_CEIL(internal_conf->memory - total_mem,
+ mem_needed = RTE_ALIGN_CEIL(eal_get_user_configuration()->memory - total_mem,
page_sz);
n_pages = RTE_MIN(mem_needed / page_sz, max_pages);
@@ -229,14 +230,14 @@ rte_eal_hugepage_init(void)
total_mem += seg->len;
}
- if (total_mem >= internal_conf->memory)
+ if (total_mem >= eal_get_user_configuration()->memory)
break;
}
- if (total_mem < internal_conf->memory) {
+ if (total_mem < eal_get_user_configuration()->memory) {
EAL_LOG(ERR, "Couldn't reserve requested memory, "
"requested: %" PRIu64 "M "
"available: %" PRIu64 "M",
- internal_conf->memory >> 20, total_mem >> 20);
+ eal_get_user_configuration()->memory >> 20, total_mem >> 20);
return -1;
}
return 0;
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index d848de03d8..a15e4dd598 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -577,6 +577,7 @@ rte_eal_init(int argc, char **argv)
const struct rte_config *config = rte_eal_get_configuration();
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* first check if we have been run before */
if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
@@ -750,9 +751,9 @@ rte_eal_init(int argc, char **argv)
}
}
- if (internal_conf->memory == 0 && internal_conf->force_numa == 0) {
+ if (user_cfg->memory == 0 && internal_conf->force_numa == 0) {
if (internal_conf->no_hugetlbfs)
- internal_conf->memory = MEMSIZE_IF_NO_HUGE_PAGE;
+ user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
}
if (internal_conf->vmware_tsc_map == 1) {
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index bf783e3c76..695596668f 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -1182,7 +1182,7 @@ eal_legacy_hugepage_init(void)
/* create a memseg list */
msl = &mcfg->memsegs[0];
- mem_sz = internal_conf->memory;
+ mem_sz = eal_get_user_configuration()->memory;
page_sz = RTE_PGSIZE_4K;
n_segs = mem_sz / page_sz;
@@ -1203,7 +1203,7 @@ eal_legacy_hugepage_init(void)
EAL_LOG(DEBUG, "Falling back to anonymous map");
} else {
/* we got an fd - now resize it */
- if (ftruncate(memfd, internal_conf->memory) < 0) {
+ if (ftruncate(memfd, eal_get_user_configuration()->memory) < 0) {
EAL_LOG(ERR, "Cannot resize memfd: %s",
strerror(errno));
EAL_LOG(ERR, "Falling back to anonymous map");
@@ -1359,8 +1359,8 @@ eal_legacy_hugepage_init(void)
huge_recover_sigbus();
- if (internal_conf->memory == 0 && internal_conf->force_numa == 0)
- internal_conf->memory = eal_get_hugepage_mem_size();
+ if (eal_get_user_configuration()->memory == 0 && internal_conf->force_numa == 0)
+ eal_get_user_configuration()->memory = eal_get_hugepage_mem_size();
nr_hugefiles = nr_hugepages;
@@ -1758,7 +1758,7 @@ memseg_primary_init_32(void)
total_requested_mem += mem;
}
else
- total_requested_mem = internal_conf->memory;
+ total_requested_mem = eal_get_user_configuration()->memory;
max_mem = (uint64_t)RTE_MAX_MEM_MB << 20;
if (total_requested_mem > max_mem) {
@@ -1823,7 +1823,7 @@ memseg_primary_init_32(void)
/* max amount of memory on this socket */
max_socket_mem = (active_sockets != 0 ?
internal_conf->numa_mem[socket_id] :
- internal_conf->memory) +
+ eal_get_user_configuration()->memory) +
extra_mem_per_socket;
cur_socket_mem = 0;
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index f06375a624..2e1cd88189 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -161,6 +161,7 @@ rte_eal_init(int argc, char **argv)
const struct rte_config *config = rte_eal_get_configuration();
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
bool has_phys_addr;
enum rte_iova_mode iova_mode;
int ret;
@@ -230,9 +231,9 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (internal_conf->memory == 0 && !internal_conf->force_numa) {
+ if (user_cfg->memory == 0 && !internal_conf->force_numa) {
if (internal_conf->no_hugetlbfs)
- internal_conf->memory = MEMSIZE_IF_NO_HUGE_PAGE;
+ user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
}
if (rte_eal_intr_init() < 0) {
diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c
index 9f85191016..caaa557694 100644
--- a/lib/eal/windows/eal_memory.c
+++ b/lib/eal/windows/eal_memory.c
@@ -680,13 +680,14 @@ eal_nohuge_init(void)
mcfg = rte_eal_get_configuration()->mem_config;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* nohuge mode is legacy mode */
internal_conf->legacy_mem = 1;
msl = &mcfg->memsegs[0];
- mem_sz = internal_conf->memory;
+ mem_sz = user_cfg->memory;
page_sz = RTE_PGSIZE_4K;
n_segs = mem_sz / page_sz;
--
2.51.0
* [RFC PATCH 03/44] eal: move NUMA request fields to user config
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
2026-04-29 16:57 ` [RFC PATCH 01/44] eal: define new functionally distinct config structs Bruce Richardson
2026-04-29 16:57 ` [RFC PATCH 02/44] eal: move memory request fields to user config Bruce Richardson
@ 2026-04-29 16:57 ` Bruce Richardson
2026-04-29 16:57 ` [RFC PATCH 04/44] eal: move hugepage policy " Bruce Richardson
` (42 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:57 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
As with basic memory user parameters, move the NUMA-specific parameters
to the user config struct. Update flag types to bool in the process.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_dynmem.c | 15 +++++++--------
lib/eal/common/eal_common_options.c | 24 +++++++++++-------------
lib/eal/common/eal_internal_cfg.h | 11 ++++-------
lib/eal/common/malloc_heap.c | 2 +-
lib/eal/freebsd/eal.c | 2 +-
lib/eal/linux/eal.c | 2 +-
lib/eal/linux/eal_memory.c | 23 ++++++++++++-----------
lib/eal/windows/eal.c | 2 +-
8 files changed, 38 insertions(+), 43 deletions(-)
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index 5bd22f6ef0..38e3ff7bcb 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -230,6 +230,7 @@ eal_dynmem_hugepage_init(void)
int hp_sz_idx, socket_id;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
memset(used_hp, 0, sizeof(used_hp));
@@ -266,7 +267,7 @@ eal_dynmem_hugepage_init(void)
/* make a copy of numa_mem, needed for balanced allocation. */
for (hp_sz_idx = 0; hp_sz_idx < RTE_MAX_NUMA_NODES; hp_sz_idx++)
- memory[hp_sz_idx] = internal_conf->numa_mem[hp_sz_idx];
+ memory[hp_sz_idx] = user_cfg->numa_mem[hp_sz_idx];
/* calculate final number of pages */
if (eal_dynmem_calc_num_pages_per_socket(memory,
@@ -334,10 +335,10 @@ eal_dynmem_hugepage_init(void)
}
/* if socket limits were specified, set them */
- if (internal_conf->force_numa_limits) {
+ if (user_cfg->force_numa_limits) {
unsigned int i;
for (i = 0; i < RTE_MAX_NUMA_NODES; i++) {
- uint64_t limit = internal_conf->numa_limit[i];
+ uint64_t limit = user_cfg->numa_limit[i];
if (limit == 0)
continue;
if (rte_mem_alloc_validator_register("socket-limit",
@@ -374,8 +375,6 @@ eal_dynmem_calc_num_pages_per_socket(
unsigned int requested, available;
int total_num_pages = 0;
uint64_t remaining_mem, cur_mem;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
uint64_t total_mem = user_cfg->memory;
@@ -383,7 +382,7 @@ eal_dynmem_calc_num_pages_per_socket(
return -1;
/* if specific memory amounts per socket weren't requested */
- if (internal_conf->force_numa == 0) {
+ if (!user_cfg->force_numa) {
size_t total_size;
#ifdef RTE_ARCH_64
int cpu_per_socket[RTE_MAX_NUMA_NODES];
@@ -510,8 +509,8 @@ eal_dynmem_calc_num_pages_per_socket(
/* if we didn't satisfy all memory requirements per socket */
if (memory[socket] > 0 &&
- internal_conf->numa_mem[socket] != 0) {
- requested = internal_conf->numa_mem[socket] / 0x100000;
+ user_cfg->numa_mem[socket] != 0) {
+ requested = user_cfg->numa_mem[socket] / 0x100000;
available = requested - (memory[socket] / 0x100000);
EAL_LOG(ERR, "Not enough memory available on socket %u! Requested: %uMB, available: %uMB",
socket, requested, available);
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 11b77bc9dd..03c0aed4e2 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -499,18 +499,16 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
user_cfg->memory = 0;
user_cfg->force_nrank = 0;
user_cfg->force_nchannel = 0;
+ user_cfg->force_numa = false;
+ for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
+ user_cfg->numa_mem[i] = 0;
+ user_cfg->force_numa_limits = false;
+ for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
+ user_cfg->numa_limit[i] = 0;
internal_cfg->hugefile_prefix = NULL;
internal_cfg->hugepage_dir = NULL;
internal_cfg->hugepage_file.unlink_before_mapping = false;
internal_cfg->hugepage_file.unlink_existing = true;
- internal_cfg->force_numa = 0;
- /* zero out the NUMA config */
- for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- internal_cfg->numa_mem[i] = 0;
- internal_cfg->force_numa_limits = 0;
- /* zero out the NUMA limits config */
- for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- internal_cfg->numa_limit[i] = 0;
/* zero out hugedir descriptors */
for (i = 0; i < MAX_HUGEPAGE_SIZES; i++) {
memset(&internal_cfg->hugepage_info[i], 0,
@@ -2174,18 +2172,18 @@ eal_parse_args(void)
}
}
if (args.numa_mem != NULL) {
- if (eal_parse_socket_arg(args.numa_mem, int_cfg->numa_mem) < 0) {
+ if (eal_parse_socket_arg(args.numa_mem, user_cfg->numa_mem) < 0) {
EAL_LOG(ERR, "invalid numa-mem parameter: '%s'", args.numa_mem);
return -1;
}
- int_cfg->force_numa = 1;
+ user_cfg->force_numa = true;
}
if (args.numa_limit != NULL) {
- if (eal_parse_socket_arg(args.numa_limit, int_cfg->numa_limit) < 0) {
+ if (eal_parse_socket_arg(args.numa_limit, user_cfg->numa_limit) < 0) {
EAL_LOG(ERR, "invalid numa-limit parameter: '%s'", args.numa_limit);
return -1;
}
- int_cfg->force_numa_limits = 1;
+ user_cfg->force_numa_limits = true;
}
/* tracing settings, not supported on windows */
@@ -2371,7 +2369,7 @@ eal_adjust_config(struct internal_config *internal_cfg)
/* if no memory amounts were requested, this will result in 0 and
* will be overridden later, right after eal_hugepage_info_init() */
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- user_cfg->memory += internal_cfg->numa_mem[i];
+ user_cfg->memory += user_cfg->numa_mem[i];
return 0;
}
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 1625c697b2..e99e74cecd 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -57,6 +57,10 @@ struct eal_user_cfg {
size_t memory; /**< amount of asked memory */
uint8_t force_nchannel; /**< force number of channels */
uint8_t force_nrank; /**< force number of ranks */
+ bool force_numa; /**< true to request memory on specific NUMA nodes */
+ bool force_numa_limits; /**< true to apply per-NUMA memory limits */
+ uint64_t numa_mem[RTE_MAX_NUMA_NODES]; /**< amount of memory per NUMA node */
+ uint64_t numa_limit[RTE_MAX_NUMA_NODES]; /**< limit amount of memory per NUMA node */
};
/**
@@ -93,13 +97,6 @@ struct internal_config {
*/
volatile unsigned create_uio_dev; /**< true to create /dev/uioX devices */
volatile enum rte_proc_type_t process_type; /**< multi-process proc type */
- /** true to try allocating memory on specific NUMA nodes */
- volatile unsigned force_numa;
- /** amount of memory per NUMA node */
- volatile uint64_t numa_mem[RTE_MAX_NUMA_NODES];
- volatile unsigned force_numa_limits;
- /** limit amount of memory per NUMA node */
- volatile uint64_t numa_limit[RTE_MAX_NUMA_NODES];
uintptr_t base_virtaddr; /**< base address to try and reserve memory from */
volatile unsigned legacy_mem;
/**< true to enable legacy memory behavior (no dynamic allocation,
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 39240c261c..77f364158a 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -701,7 +701,7 @@ malloc_heap_alloc_on_heap_id(size_t size, unsigned int heap_id, unsigned int fla
static unsigned int
malloc_get_numa_socket(void)
{
- const struct internal_config *conf = eal_get_internal_configuration();
+ const struct eal_user_cfg *conf = eal_get_user_configuration();
unsigned int socket_id = rte_socket_id();
unsigned int idx;
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 1779362686..d890b899e1 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -588,7 +588,7 @@ rte_eal_init(int argc, char **argv)
}
}
- if (user_cfg->memory == 0 && internal_conf->force_numa == 0) {
+ if (user_cfg->memory == 0 && !user_cfg->force_numa) {
if (internal_conf->no_hugetlbfs)
user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
else
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index a15e4dd598..ae0f42b15e 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -751,7 +751,7 @@ rte_eal_init(int argc, char **argv)
}
}
- if (user_cfg->memory == 0 && internal_conf->force_numa == 0) {
+ if (user_cfg->memory == 0 && !user_cfg->force_numa) {
if (internal_conf->no_hugetlbfs)
user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
}
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 695596668f..9532dfc5cb 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -274,8 +274,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
struct bitmask *oldmask = NULL;
bool have_numa = true;
unsigned long maxnode = 0;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* Check if kernel supports NUMA. */
if (numa_available() != 0) {
@@ -294,7 +293,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
oldpolicy = MPOL_DEFAULT;
}
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- if (internal_conf->numa_mem[i])
+ if (user_cfg->numa_mem[i])
maxnode = i + 1;
}
#endif
@@ -313,7 +312,7 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi,
if (j == maxnode) {
node_id = (node_id + 1) % maxnode;
- while (!internal_conf->numa_mem[node_id]) {
+ while (!user_cfg->numa_mem[node_id]) {
node_id++;
node_id %= maxnode;
}
@@ -1151,6 +1150,7 @@ eal_legacy_hugepage_init(void)
struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
uint64_t memory[RTE_MAX_NUMA_NODES];
@@ -1291,7 +1291,7 @@ eal_legacy_hugepage_init(void)
/* make a copy of numa_mem, needed for balanced allocation. */
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- memory[i] = internal_conf->numa_mem[i];
+ memory[i] = user_cfg->numa_mem[i];
/* map all hugepages and sort them */
for (i = 0; i < (int)internal_conf->num_hugepage_sizes; i++) {
@@ -1359,7 +1359,7 @@ eal_legacy_hugepage_init(void)
huge_recover_sigbus();
- if (eal_get_user_configuration()->memory == 0 && internal_conf->force_numa == 0)
+ if (eal_get_user_configuration()->memory == 0 && !user_cfg->force_numa)
eal_get_user_configuration()->memory = eal_get_hugepage_mem_size();
nr_hugefiles = nr_hugepages;
@@ -1387,7 +1387,7 @@ eal_legacy_hugepage_init(void)
/* make a copy of numa_mem, needed for number of pages calculation */
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- memory[i] = internal_conf->numa_mem[i];
+ memory[i] = user_cfg->numa_mem[i];
/* calculate final number of pages */
nr_hugepages = eal_dynmem_calc_num_pages_per_socket(memory,
@@ -1720,6 +1720,7 @@ memseg_primary_init_32(void)
uint64_t max_mem;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* no-huge does not need this at all */
if (internal_conf->no_hugetlbfs)
@@ -1744,12 +1745,12 @@ memseg_primary_init_32(void)
*/
active_sockets = 0;
total_requested_mem = 0;
- if (internal_conf->force_numa)
+ if (user_cfg->force_numa)
for (i = 0; i < rte_socket_count(); i++) {
uint64_t mem;
socket_id = rte_socket_id_by_idx(i);
- mem = internal_conf->numa_mem[socket_id];
+ mem = user_cfg->numa_mem[socket_id];
if (mem == 0)
continue;
@@ -1807,7 +1808,7 @@ memseg_primary_init_32(void)
/* if we didn't specifically request memory on this socket */
skip = active_sockets != 0 &&
- internal_conf->numa_mem[socket_id] == 0;
+ user_cfg->numa_mem[socket_id] == 0;
/* ...or if we didn't specifically request memory on *any*
* socket, and this is not main lcore
*/
@@ -1822,7 +1823,7 @@ memseg_primary_init_32(void)
/* max amount of memory on this socket */
max_socket_mem = (active_sockets != 0 ?
- internal_conf->numa_mem[socket_id] :
+ user_cfg->numa_mem[socket_id] :
eal_get_user_configuration()->memory) +
extra_mem_per_socket;
cur_socket_mem = 0;
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 2e1cd88189..f8a536bb97 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -231,7 +231,7 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (user_cfg->memory == 0 && !internal_conf->force_numa) {
+ if (user_cfg->memory == 0 && !user_cfg->force_numa) {
if (internal_conf->no_hugetlbfs)
user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
}
--
2.51.0
* [RFC PATCH 04/44] eal: move hugepage policy fields to user config
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (2 preceding siblings ...)
2026-04-29 16:57 ` [RFC PATCH 03/44] eal: move NUMA " Bruce Richardson
@ 2026-04-29 16:57 ` Bruce Richardson
2026-04-29 16:57 ` [RFC PATCH 05/44] eal: move process " Bruce Richardson
` (41 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:57 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Continue moving fields from the internal config to the user config, in
this case the hugepage policy fields. As with the previous fields, the
flags are converted from unsigned to bool.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 2 +-
lib/eal/common/eal_common_dynmem.c | 3 ++-
lib/eal/common/eal_common_options.c | 39 +++++++++++++++--------------
lib/eal/common/eal_internal_cfg.h | 9 ++++---
lib/eal/freebsd/eal.c | 6 ++---
lib/eal/freebsd/eal_memory.c | 9 ++++---
lib/eal/linux/eal.c | 7 +++---
lib/eal/linux/eal_hugepage_info.c | 12 ++++-----
lib/eal/linux/eal_memalloc.c | 25 ++++++++++--------
lib/eal/linux/eal_memory.c | 6 ++---
lib/eal/windows/eal.c | 4 +--
lib/eal/windows/eal_memory.c | 5 ++--
12 files changed, 67 insertions(+), 60 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 5427b30659..4ebf938f31 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -131,7 +131,7 @@ RTE_EXPORT_SYMBOL(rte_eal_has_hugepages)
int
rte_eal_has_hugepages(void)
{
- return !internal_config.no_hugetlbfs;
+ return !eal_user_cfg.no_hugetlbfs;
}
RTE_EXPORT_SYMBOL(rte_eal_has_pci)
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index 38e3ff7bcb..7913509eb9 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -32,9 +32,10 @@ eal_dynmem_memseg_lists_init(void)
unsigned int n_memtypes, cur_type;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* no-huge does not need this at all */
- if (internal_conf->no_hugetlbfs)
+ if (user_cfg->no_hugetlbfs)
return 0;
/*
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 03c0aed4e2..f7f305e302 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -3,6 +3,7 @@
* Copyright(c) 2014 6WIND S.A.
*/
+#include <stdbool.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
@@ -482,11 +483,10 @@ eal_option_device_parse(void)
const char *
eal_get_hugefile_prefix(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->hugefile_prefix != NULL)
- return internal_conf->hugefile_prefix;
+ if (user_cfg->hugefile_prefix != NULL)
+ return user_cfg->hugefile_prefix;
return HUGEFILE_PREFIX_DEFAULT;
}
@@ -505,10 +505,11 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
user_cfg->force_numa_limits = false;
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
user_cfg->numa_limit[i] = 0;
- internal_cfg->hugefile_prefix = NULL;
- internal_cfg->hugepage_dir = NULL;
- internal_cfg->hugepage_file.unlink_before_mapping = false;
- internal_cfg->hugepage_file.unlink_existing = true;
+ user_cfg->no_hugetlbfs = false;
+ user_cfg->hugefile_prefix = NULL;
+ user_cfg->hugepage_dir = NULL;
+ user_cfg->hugepage_file.unlink_before_mapping = false;
+ user_cfg->hugepage_file.unlink_existing = true;
/* zero out hugedir descriptors */
for (i = 0; i < MAX_HUGEPAGE_SIZES; i++) {
memset(&internal_cfg->hugepage_info[i], 0,
@@ -2118,7 +2119,7 @@ eal_parse_args(void)
user_cfg->force_nrank = (uint8_t)nrank;
}
if (args.no_huge) {
- int_cfg->no_hugetlbfs = 1;
+ user_cfg->no_hugetlbfs = true;
/* no-huge is legacy mem */
int_cfg->legacy_mem = 1;
}
@@ -2126,7 +2127,7 @@ eal_parse_args(void)
int_cfg->in_memory = 1;
/* in-memory is a superset of noshconf and huge-unlink */
int_cfg->no_shconf = 1;
- int_cfg->hugepage_file.unlink_before_mapping = true;
+ user_cfg->hugepage_file.unlink_before_mapping = true;
}
if (args.legacy_mem) {
int_cfg->legacy_mem = 1;
@@ -2140,9 +2141,9 @@ eal_parse_args(void)
EAL_LOG(ERR, "Invalid hugepage dir parameter");
return -1;
}
- free(int_cfg->hugepage_dir); /* free old hugepage dir */
- int_cfg->hugepage_dir = strdup(args.huge_dir);
- if (int_cfg->hugepage_dir == NULL) {
+ free(user_cfg->hugepage_dir); /* free old hugepage dir */
+ user_cfg->hugepage_dir = strdup(args.huge_dir);
+ if (user_cfg->hugepage_dir == NULL) {
EAL_LOG(ERR, "failed to allocate memory for hugepage dir parameter");
return -1;
}
@@ -2156,9 +2157,9 @@ eal_parse_args(void)
EAL_LOG(ERR, "Invalid char, '%%', in file_prefix parameter");
return -1;
}
- free(int_cfg->hugefile_prefix); /* free old file prefix */
- int_cfg->hugefile_prefix = strdup(args.file_prefix);
- if (int_cfg->hugefile_prefix == NULL) {
+ free(user_cfg->hugefile_prefix); /* free old file prefix */
+ user_cfg->hugefile_prefix = strdup(args.file_prefix);
+ if (user_cfg->hugefile_prefix == NULL) {
EAL_LOG(ERR, "failed to allocate memory for file prefix parameter");
return -1;
}
@@ -2166,7 +2167,7 @@ eal_parse_args(void)
if (args.huge_unlink != NULL) {
if (args.huge_unlink == (void *)1)
args.huge_unlink = NULL;
- if (eal_parse_huge_unlink(args.huge_unlink, &int_cfg->hugepage_file) < 0) {
+ if (eal_parse_huge_unlink(args.huge_unlink, &user_cfg->hugepage_file) < 0) {
EAL_LOG(ERR, "invalid huge-unlink parameter");
return -1;
}
@@ -2348,8 +2349,8 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
int
eal_cleanup_config(struct internal_config *internal_cfg)
{
- free(internal_cfg->hugefile_prefix);
- free(internal_cfg->hugepage_dir);
+ free(eal_get_user_configuration()->hugefile_prefix);
+ free(eal_get_user_configuration()->hugepage_dir);
free(internal_cfg->user_mbuf_pool_ops_name);
return 0;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index e99e74cecd..0516e10ebb 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -14,6 +14,7 @@
#include <rte_os_shim.h>
#include <rte_pci_dev_feature_defs.h>
#include <stdint.h>
+#include <stdbool.h>
#include "eal_thread.h"
@@ -59,6 +60,10 @@ struct eal_user_cfg {
uint8_t force_nrank; /**< force number of ranks */
bool force_numa; /**< true to request memory on specific NUMA nodes */
bool force_numa_limits; /**< true to apply per-NUMA memory limits */
+ bool no_hugetlbfs; /**< true to disable hugetlbfs */
+ struct hugepage_file_discipline hugepage_file;
+ char *hugefile_prefix; /**< the base filename of hugetlbfs files */
+ char *hugepage_dir; /**< specific hugetlbfs directory to use */
uint64_t numa_mem[RTE_MAX_NUMA_NODES]; /**< amount of memory per NUMA node */
uint64_t numa_limit[RTE_MAX_NUMA_NODES]; /**< limit amount of memory per NUMA node */
};
@@ -84,8 +89,6 @@ struct eal_runtime_state {
* internal configuration
*/
struct internal_config {
- volatile unsigned no_hugetlbfs; /**< true to disable hugetlbfs */
- struct hugepage_file_discipline hugepage_file;
volatile unsigned no_pci; /**< true to disable PCI */
volatile unsigned no_hpet; /**< true to disable HPET */
volatile unsigned vmware_tsc_map; /**< true to use VMware TSC mapping
@@ -112,8 +115,6 @@ struct internal_config {
volatile enum rte_intr_mode vfio_intr_mode;
/** the shared VF token for VFIO-PCI bound PF and VFs devices */
rte_uuid_t vfio_vf_token;
- char *hugefile_prefix; /**< the base filename of hugetlbfs files */
- char *hugepage_dir; /**< specific hugetlbfs directory to use */
char *user_mbuf_pool_ops_name;
/**< user defined mbuf pool ops name */
unsigned num_hugepage_sizes; /**< how many sizes on this system */
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index d890b899e1..bff0e4615a 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -536,7 +536,7 @@ rte_eal_init(int argc, char **argv)
* If contigmem is inaccessible, rte_eal_hugepage_init() will fail
* with a message describing the cause.
*/
- has_phys_addr = internal_conf->no_hugetlbfs == 0;
+ has_phys_addr = !user_cfg->no_hugetlbfs;
/* Always call rte_bus_get_iommu_class() to trigger DMA mask detection and validation */
enum rte_iova_mode bus_iova_mode = rte_bus_get_iommu_class();
@@ -576,7 +576,7 @@ rte_eal_init(int argc, char **argv)
EAL_LOG(INFO, "Selected IOVA mode '%s'",
rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA");
- if (internal_conf->no_hugetlbfs == 0) {
+ if (!user_cfg->no_hugetlbfs) {
/* rte_config isn't initialized yet */
ret = internal_conf->process_type == RTE_PROC_PRIMARY ?
eal_hugepage_info_init() :
@@ -589,7 +589,7 @@ rte_eal_init(int argc, char **argv)
}
if (user_cfg->memory == 0 && !user_cfg->force_numa) {
- if (internal_conf->no_hugetlbfs)
+ if (user_cfg->no_hugetlbfs)
user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
else
user_cfg->memory = eal_get_hugepage_mem_size();
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index a0a398ab55..cfb17fb3fa 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -61,16 +61,16 @@ rte_eal_hugepage_init(void)
unsigned int i, j, seg_idx = 0;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* get pointer to global configuration */
mcfg = rte_eal_get_configuration()->mem_config;
/* for debug purposes, hugetlbfs can be disabled */
- if (internal_conf->no_hugetlbfs) {
+ if (user_cfg->no_hugetlbfs) {
struct rte_memseg_list *msl;
uint64_t mem_sz, page_sz;
int n_segs;
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* create a memseg list */
msl = &mcfg->memsegs[0];
@@ -110,7 +110,7 @@ rte_eal_hugepage_init(void)
hpi = &internal_conf->hugepage_info[i];
page_sz = hpi->hugepage_sz;
max_pages = hpi->num_pages[0];
- mem_needed = RTE_ALIGN_CEIL(eal_get_user_configuration()->memory - total_mem,
+ mem_needed = RTE_ALIGN_CEIL(user_cfg->memory - total_mem,
page_sz);
n_pages = RTE_MIN(mem_needed / page_sz, max_pages);
@@ -358,9 +358,10 @@ memseg_primary_init(void)
uint64_t max_mem, total_mem;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* no-huge does not need this at all */
- if (internal_conf->no_hugetlbfs)
+ if (user_cfg->no_hugetlbfs)
return 0;
/* FreeBSD has an issue where core dump will dump the entire memory
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index ae0f42b15e..c51aa7e3b4 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -739,7 +739,7 @@ rte_eal_init(int argc, char **argv)
EAL_LOG(INFO, "Selected IOVA mode '%s'",
rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA");
- if (internal_conf->no_hugetlbfs == 0) {
+ if (!user_cfg->no_hugetlbfs) {
/* rte_config isn't initialized yet */
ret = internal_conf->process_type == RTE_PROC_PRIMARY ?
eal_hugepage_info_init() :
@@ -752,7 +752,7 @@ rte_eal_init(int argc, char **argv)
}
if (user_cfg->memory == 0 && !user_cfg->force_numa) {
- if (internal_conf->no_hugetlbfs)
+ if (user_cfg->no_hugetlbfs)
user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
}
@@ -974,9 +974,10 @@ rte_eal_cleanup(void)
*/
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
- internal_conf->hugepage_file.unlink_existing)
+ user_cfg->hugepage_file.unlink_existing)
rte_memseg_walk(mark_freeable, NULL);
rte_service_finalize();
diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c
index 05c5b3f613..2f889b291e 100644
--- a/lib/eal/linux/eal_hugepage_info.c
+++ b/lib/eal/linux/eal_hugepage_info.c
@@ -215,22 +215,21 @@ get_hugepage_dir(uint64_t hugepage_sz, char *hugedir, int len)
static uint64_t default_size = 0;
const char pagesize_opt[] = "pagesize=";
const size_t pagesize_opt_len = sizeof(pagesize_opt) - 1;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct mntent *mnt;
FILE *fp;
/* Fast path: hugepage_dir explicitly specified */
- if (internal_conf->hugepage_dir != NULL) {
+ if (user_cfg->hugepage_dir != NULL) {
struct statfs sfs;
/* Query info about mounted filesystem */
- if (statfs(internal_conf->hugepage_dir, &sfs) != 0 ||
+ if (statfs(user_cfg->hugepage_dir, &sfs) != 0 ||
(uint32_t)sfs.f_type != HUGETLBFS_MAGIC ||
(uint64_t)sfs.f_bsize != hugepage_sz)
return -1;
- strlcpy(hugedir, internal_conf->hugepage_dir, len);
+ strlcpy(hugedir, user_cfg->hugepage_dir, len);
return 0;
}
@@ -457,6 +456,7 @@ hugepage_info_init(void)
struct dirent *dirent;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
dir = opendir(sys_dir_path);
if (dir == NULL) {
@@ -527,7 +527,7 @@ hugepage_info_init(void)
* or count how many of them can be reused.
*/
reusable_pages = 0;
- if (!internal_conf->hugepage_file.unlink_existing) {
+ if (!user_cfg->hugepage_file.unlink_existing) {
reusable_bytes = 0;
if (inspect_hugedir(hpi->hugedir,
&reusable_bytes) < 0)
diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c
index a39bc31c7b..37f5da8d1f 100644
--- a/lib/eal/linux/eal_memalloc.c
+++ b/lib/eal/linux/eal_memalloc.c
@@ -267,6 +267,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
int ret;
const struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (dirty != NULL)
*dirty = false;
@@ -305,7 +306,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
__func__, path, strerror(errno));
return -1;
}
- if (!internal_conf->hugepage_file.unlink_existing && ret == 0 &&
+ if (!user_cfg->hugepage_file.unlink_existing && ret == 0 &&
dirty != NULL)
*dirty = true;
@@ -322,7 +323,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
* whether they will be dirty depends on the part that is mapped.
*/
if (!internal_conf->single_file_segments &&
- internal_conf->hugepage_file.unlink_existing &&
+ user_cfg->hugepage_file.unlink_existing &&
rte_eal_process_type() == RTE_PROC_PRIMARY &&
ret == 0) {
/* coverity[toctou] */
@@ -375,8 +376,7 @@ static int
resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz,
bool grow, bool *dirty)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
bool again = false;
do {
@@ -448,7 +448,7 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz,
* dirty, unless the file is a fresh one.
*/
if (dirty != NULL)
- *dirty &= !internal_conf->hugepage_file.unlink_existing;
+ *dirty &= !user_cfg->hugepage_file.unlink_existing;
}
}
} while (again);
@@ -516,6 +516,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
void *new_addr;
const struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
alloc_sz = hi->hugepage_sz;
@@ -548,7 +549,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
EAL_LOG(DEBUG, "%s(): ftruncate() failed: %s", __func__, strerror(errno));
goto resized;
}
- if (internal_conf->hugepage_file.unlink_before_mapping &&
+ if (user_cfg->hugepage_file.unlink_before_mapping &&
!internal_conf->in_memory) {
if (unlink(path)) {
EAL_LOG(DEBUG, "%s(): unlink() failed: %s",
@@ -681,7 +682,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
close_hugefile(fd, path, list_idx);
} else {
/* only remove file if we can take out a write lock */
- if (!internal_conf->hugepage_file.unlink_before_mapping &&
+ if (!user_cfg->hugepage_file.unlink_before_mapping &&
internal_conf->in_memory == 0 &&
lock(fd, LOCK_EX) == 1)
unlink(path);
@@ -700,6 +701,7 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi,
int fd, ret = 0;
const struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* erase page data */
memset(ms->addr, 0, ms->len);
@@ -735,8 +737,8 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi,
* holding onto this page.
*/
if (!internal_conf->in_memory &&
- internal_conf->hugepage_file.unlink_existing &&
- !internal_conf->hugepage_file.unlink_before_mapping) {
+ user_cfg->hugepage_file.unlink_existing &&
+ !user_cfg->hugepage_file.unlink_before_mapping) {
ret = lock(fd, LOCK_EX);
if (ret >= 0) {
/* no one else is using this page */
@@ -1656,6 +1658,7 @@ eal_memalloc_init(void)
{
const struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
/* memory_hotplug_lock is held during initialization, so it's
@@ -1672,8 +1675,8 @@ eal_memalloc_init(void)
return -1;
}
/* safety net, should be impossible to configure */
- if (internal_conf->hugepage_file.unlink_before_mapping &&
- !internal_conf->hugepage_file.unlink_existing) {
+ if (user_cfg->hugepage_file.unlink_before_mapping &&
+ !user_cfg->hugepage_file.unlink_existing) {
EAL_LOG(ERR, "Unlinking existing hugepage files is prohibited, cannot unlink them before mapping.");
return -1;
}
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 9532dfc5cb..a53fe65c60 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -1165,7 +1165,7 @@ eal_legacy_hugepage_init(void)
mcfg = rte_eal_get_configuration()->mem_config;
/* hugetlbfs can be disabled */
- if (internal_conf->no_hugetlbfs) {
+ if (user_cfg->no_hugetlbfs) {
void *prealloc_addr;
size_t mem_sz;
struct rte_memseg_list *msl;
@@ -1462,7 +1462,7 @@ eal_legacy_hugepage_init(void)
}
/* free the hugepage backing files */
- if (internal_conf->hugepage_file.unlink_before_mapping &&
+ if (user_cfg->hugepage_file.unlink_before_mapping &&
unlink_hugepage_files(tmp_hp, internal_conf->num_hugepage_sizes) < 0) {
EAL_LOG(ERR, "Unlinking hugepage files failed!");
goto fail;
@@ -1723,7 +1723,7 @@ memseg_primary_init_32(void)
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* no-huge does not need this at all */
- if (internal_conf->no_hugetlbfs)
+ if (user_cfg->no_hugetlbfs)
return 0;
/* this is a giant hack, but desperate times call for desperate
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index f8a536bb97..622bda7578 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -225,14 +225,14 @@ rte_eal_init(int argc, char **argv)
internal_conf->no_shconf = 1;
}
- if (!internal_conf->no_hugetlbfs && (eal_hugepage_info_init() < 0)) {
+ if (!user_cfg->no_hugetlbfs && (eal_hugepage_info_init() < 0)) {
rte_eal_init_alert("Cannot get hugepage information");
rte_errno = EACCES;
goto err_out;
}
if (user_cfg->memory == 0 && !user_cfg->force_numa) {
- if (internal_conf->no_hugetlbfs)
+ if (user_cfg->no_hugetlbfs)
user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
}
diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c
index caaa557694..3140d7b9c3 100644
--- a/lib/eal/windows/eal_memory.c
+++ b/lib/eal/windows/eal_memory.c
@@ -723,10 +723,9 @@ eal_nohuge_init(void)
int
rte_eal_hugepage_init(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- return internal_conf->no_hugetlbfs ?
+ return user_cfg->no_hugetlbfs ?
eal_nohuge_init() : eal_dynmem_hugepage_init();
}
--
2.51.0
* [RFC PATCH 05/44] eal: move process policy fields to user config
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (3 preceding siblings ...)
2026-04-29 16:57 ` [RFC PATCH 04/44] eal: move hugepage policy " Bruce Richardson
@ 2026-04-29 16:57 ` Bruce Richardson
2026-04-29 16:57 ` [RFC PATCH 06/44] eal: move advanced user config options to user cfg struct Bruce Richardson
` (40 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:57 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Move process/runtime policy request fields from internal_config into
eal_user_cfg, updating flag types to bool in the process.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 2 +-
lib/eal/common/eal_common_fbarray.c | 10 +++---
lib/eal/common/eal_common_memory.c | 12 +++-----
lib/eal/common/eal_common_options.c | 47 ++++++++++++++++-------------
lib/eal/common/eal_common_proc.c | 35 +++++++++------------
lib/eal/common/eal_internal_cfg.h | 20 +++++-------
lib/eal/freebsd/eal.c | 40 +++++++++++-------------
lib/eal/freebsd/eal_hugepage_info.c | 3 +-
lib/eal/linux/eal.c | 36 ++++++++++------------
lib/eal/linux/eal_hugepage_info.c | 5 +--
lib/eal/linux/eal_memalloc.c | 37 +++++++++--------------
lib/eal/linux/eal_memory.c | 5 ++-
lib/eal/linux/eal_timer_hpet.c | 21 ++++++-------
lib/eal/linux/eal_vfio.c | 14 ++++-----
lib/eal/windows/eal.c | 6 ++--
15 files changed, 132 insertions(+), 161 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 4ebf938f31..5efc6623d6 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -138,5 +138,5 @@ RTE_EXPORT_SYMBOL(rte_eal_has_pci)
int
rte_eal_has_pci(void)
{
- return !internal_config.no_pci;
+ return !eal_user_cfg.no_pci;
}
diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c
index 8bdcefb717..f76bb5353d 100644
--- a/lib/eal/common/eal_common_fbarray.c
+++ b/lib/eal/common/eal_common_fbarray.c
@@ -697,8 +697,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
struct mem_area *ma = NULL;
void *data = NULL;
int fd = -1;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (arr == NULL) {
rte_errno = EINVAL;
@@ -734,7 +733,7 @@ rte_fbarray_init(struct rte_fbarray *arr, const char *name, unsigned int len,
fd = -1;
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
/* remap virtual area as writable */
static const int flags = RTE_MAP_FORCE_ADDRESS |
RTE_MAP_PRIVATE | RTE_MAP_ANONYMOUS;
@@ -964,8 +963,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr)
size_t mmap_len;
int fd, ret;
char path[PATH_MAX];
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (arr == NULL) {
rte_errno = EINVAL;
@@ -999,7 +997,7 @@ rte_fbarray_destroy(struct rte_fbarray *arr)
goto out;
}
/* with no shconf, there were never any files to begin with */
- if (!internal_conf->no_shconf) {
+ if (!user_cfg->no_shconf) {
/*
* attempt to get an exclusive lock on the file, to ensure it
* has been detached by all other processes
diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index 208e3583b0..b6a737b1ab 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -1054,13 +1054,12 @@ rte_extmem_detach(void *va_addr, size_t len)
int
rte_eal_memory_detach(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
size_t page_sz = rte_mem_page_size();
unsigned int i;
- if (internal_conf->in_memory == 1)
+ if (user_cfg->in_memory)
return 0;
rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
@@ -1103,7 +1102,7 @@ rte_eal_memory_detach(void)
* config - we can't zero it out because it might still be referenced
* by other processes.
*/
- if (internal_conf->no_shconf == 0 && mcfg->mem_cfg_addr != 0) {
+ if (!user_cfg->no_shconf && mcfg->mem_cfg_addr != 0) {
if (rte_mem_unmap(mcfg, RTE_ALIGN(sizeof(*mcfg), page_sz)) != 0)
EAL_LOG(ERR, "Could not unmap shared memory config: %s",
rte_strerror(rte_errno));
@@ -1117,8 +1116,7 @@ rte_eal_memory_detach(void)
int
rte_eal_memory_init(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
int retval;
EAL_LOG(DEBUG, "Setting up physically contiguous memory...");
@@ -1135,7 +1133,7 @@ rte_eal_memory_init(void)
if (retval < 0)
goto fail;
- if (internal_conf->no_shconf == 0 && rte_eal_memdevice_init() < 0)
+ if (!user_cfg->no_shconf && rte_eal_memdevice_init() < 0)
goto fail;
return 0;
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index f7f305e302..48b004258a 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -505,7 +505,9 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
user_cfg->force_numa_limits = false;
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
user_cfg->numa_limit[i] = 0;
+ user_cfg->process_type = RTE_PROC_AUTO;
user_cfg->no_hugetlbfs = false;
+ user_cfg->no_pci = false;
user_cfg->hugefile_prefix = NULL;
user_cfg->hugepage_dir = NULL;
user_cfg->hugepage_file.unlink_before_mapping = false;
@@ -526,12 +528,15 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
internal_cfg->no_auto_probing = 0;
#ifdef RTE_LIBEAL_USE_HPET
- internal_cfg->no_hpet = 0;
+ user_cfg->no_hpet = false;
#else
- internal_cfg->no_hpet = 1;
+ user_cfg->no_hpet = true;
#endif
- internal_cfg->vmware_tsc_map = 0;
- internal_cfg->create_uio_dev = 0;
+ user_cfg->vmware_tsc_map = false;
+ user_cfg->no_shconf = false;
+ user_cfg->in_memory = false;
+ user_cfg->create_uio_dev = false;
+ user_cfg->no_telemetry = false;
internal_cfg->iova_mode = RTE_IOVA_DC;
internal_cfg->user_mbuf_pool_ops_name = NULL;
CPU_ZERO(&internal_cfg->ctrl_cpuset);
@@ -1973,8 +1978,8 @@ eal_parse_args(void)
/* parse the process type */
if (args.proc_type != NULL) {
- int_cfg->process_type = eal_parse_proc_type(args.proc_type);
- if (int_cfg->process_type == RTE_PROC_INVALID) {
+ user_cfg->process_type = eal_parse_proc_type(args.proc_type);
+ if (user_cfg->process_type == RTE_PROC_INVALID) {
EAL_LOG(ERR, "invalid process type: %s", args.proc_type);
return -1;
}
@@ -2121,21 +2126,21 @@ eal_parse_args(void)
if (args.no_huge) {
user_cfg->no_hugetlbfs = true;
/* no-huge is legacy mem */
- int_cfg->legacy_mem = 1;
+ int_cfg->legacy_mem = true;
}
if (args.in_memory) {
- int_cfg->in_memory = 1;
+ user_cfg->in_memory = true;
/* in-memory is a superset of noshconf and huge-unlink */
- int_cfg->no_shconf = 1;
+ user_cfg->no_shconf = true;
user_cfg->hugepage_file.unlink_before_mapping = true;
}
if (args.legacy_mem) {
- int_cfg->legacy_mem = 1;
+ int_cfg->legacy_mem = true;
if (args.memory_size == NULL && args.numa_mem == NULL)
EAL_LOG(NOTICE, "Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem");
}
if (args.single_file_segments)
- int_cfg->single_file_segments = 1;
+ int_cfg->single_file_segments = true;
if (args.huge_dir != NULL) {
if (strlen(args.huge_dir) < 1) {
EAL_LOG(ERR, "Invalid hugepage dir parameter");
@@ -2226,19 +2231,19 @@ eal_parse_args(void)
* other options above have already set them.
*/
if (args.no_pci)
- int_cfg->no_pci = 1;
+ user_cfg->no_pci = true;
if (args.no_hpet)
- int_cfg->no_hpet = 1;
+ user_cfg->no_hpet = true;
if (args.vmware_tsc_map)
- int_cfg->vmware_tsc_map = 1;
+ user_cfg->vmware_tsc_map = true;
if (args.no_shconf)
- int_cfg->no_shconf = 1;
+ user_cfg->no_shconf = true;
if (args.no_telemetry)
- int_cfg->no_telemetry = 1;
+ user_cfg->no_telemetry = true;
if (args.match_allocations)
- int_cfg->match_allocations = 1;
+ int_cfg->match_allocations = true;
if (args.create_uio_dev)
- int_cfg->create_uio_dev = 1;
+ user_cfg->create_uio_dev = true;
/* other misc settings */
if (args.iova_mode != NULL) {
@@ -2297,7 +2302,7 @@ eal_parse_args(void)
#ifndef RTE_EXEC_ENV_WINDOWS
/* create runtime data directory. In no_shconf mode, skip any errors */
if (eal_create_runtime_dir() < 0) {
- if (int_cfg->no_shconf == 0) {
+ if (!user_cfg->no_shconf) {
EAL_LOG(ERR, "Cannot create runtime directory");
return -1;
}
@@ -2362,8 +2367,8 @@ eal_adjust_config(struct internal_config *internal_cfg)
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
int i;
- if (internal_cfg->process_type == RTE_PROC_AUTO)
- internal_cfg->process_type = eal_proc_type_detect();
+ if (user_cfg->process_type == RTE_PROC_AUTO)
+ user_cfg->process_type = eal_proc_type_detect();
compute_ctrl_threads_cpuset(internal_cfg);
diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c
index 06f151818c..74f4f60b0a 100644
--- a/lib/eal/common/eal_common_proc.c
+++ b/lib/eal/common/eal_common_proc.c
@@ -208,13 +208,12 @@ int
rte_mp_action_register(const char *name, rte_mp_t action)
{
struct action_entry *entry;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (validate_action_name(name) != 0)
return -1;
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled");
rte_errno = ENOTSUP;
return -1;
@@ -245,13 +244,12 @@ void
rte_mp_action_unregister(const char *name)
{
struct action_entry *entry;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (validate_action_name(name) != 0)
return;
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled");
return;
}
@@ -619,13 +617,12 @@ rte_mp_channel_init(void)
{
char path[UNIX_PATH_MAX];
int dir_fd;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* in no shared files mode, we do not have secondary processes support,
* so no need to initialize IPC.
*/
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
EAL_LOG(DEBUG, "No shared files mode enabled, IPC will be disabled");
rte_errno = ENOTSUP;
return -1;
@@ -856,13 +853,12 @@ RTE_EXPORT_SYMBOL(rte_mp_sendmsg)
int
rte_mp_sendmsg(struct rte_mp_msg *msg)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (check_input(msg) != 0)
return -1;
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled");
rte_errno = ENOTSUP;
return -1;
@@ -1015,8 +1011,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
DIR *mp_dir;
struct dirent *ent;
struct timespec now, end;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
EAL_LOG(DEBUG, "request: %s", req->name);
@@ -1027,7 +1022,7 @@ rte_mp_request_sync(struct rte_mp_msg *req, struct rte_mp_reply *reply,
if (check_input(req) != 0)
goto end;
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled");
rte_errno = ENOTSUP;
return -1;
@@ -1124,15 +1119,14 @@ rte_mp_request_async(struct rte_mp_msg *req, const struct timespec *ts,
struct timespec now;
struct timespec *end;
bool dummy_used = false;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
EAL_LOG(DEBUG, "request: %s", req->name);
if (check_input(req) != 0)
return -1;
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled");
rte_errno = ENOTSUP;
return -1;
@@ -1268,8 +1262,7 @@ int
rte_mp_reply(struct rte_mp_msg *msg, const char *peer)
{
EAL_LOG(DEBUG, "reply: %s", msg->name);
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (check_input(msg) != 0)
return -1;
@@ -1280,7 +1273,7 @@ rte_mp_reply(struct rte_mp_msg *msg, const char *peer)
return -1;
}
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
EAL_LOG(DEBUG, "No shared files mode enabled, IPC is disabled");
return 0;
}
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 0516e10ebb..4ba43eb5ca 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -56,11 +56,19 @@ struct hugepage_file_discipline {
*/
struct eal_user_cfg {
size_t memory; /**< amount of asked memory */
+ enum rte_proc_type_t process_type; /**< requested process type */
uint8_t force_nchannel; /**< force number of channels */
uint8_t force_nrank; /**< force number of ranks */
bool force_numa; /**< true to request memory on specific NUMA nodes */
bool force_numa_limits; /**< true to apply per-NUMA memory limits */
bool no_hugetlbfs; /**< true to disable hugetlbfs */
+ bool no_pci; /**< true to disable PCI */
+ bool no_hpet; /**< true to disable HPET */
+ bool vmware_tsc_map; /**< true to use VMware TSC mapping */
+ bool no_shconf; /**< true if there is no shared config */
+ bool in_memory; /**< true to run with no shared runtime files */
+ bool create_uio_dev; /**< true to create /dev/uioX devices */
+ bool no_telemetry; /**< true to disable telemetry */
struct hugepage_file_discipline hugepage_file;
char *hugefile_prefix; /**< the base filename of hugetlbfs files */
char *hugepage_dir; /**< specific hugetlbfs directory to use */
@@ -89,17 +97,6 @@ struct eal_runtime_state {
* internal configuration
*/
struct internal_config {
- volatile unsigned no_pci; /**< true to disable PCI */
- volatile unsigned no_hpet; /**< true to disable HPET */
- volatile unsigned vmware_tsc_map; /**< true to use VMware TSC mapping
- * instead of native TSC */
- volatile unsigned no_shconf; /**< true if there is no shared config */
- volatile unsigned in_memory;
- /**< true if DPDK should operate entirely in-memory and not create any
- * shared files or runtime data.
- */
- volatile unsigned create_uio_dev; /**< true to create /dev/uioX devices */
- volatile enum rte_proc_type_t process_type; /**< multi-process proc type */
uintptr_t base_virtaddr; /**< base address to try and reserve memory from */
volatile unsigned legacy_mem;
/**< true to enable legacy memory behavior (no dynamic allocation,
@@ -123,7 +120,6 @@ struct internal_config {
rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
- unsigned int no_telemetry; /**< true to disable Telemetry */
struct simd_bitwidth max_simd_bitwidth;
/**< max simd bitwidth path to use */
size_t huge_worker_stack_size; /**< worker thread stack size */
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index bff0e4615a..7f8fa6e1c0 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -99,6 +99,7 @@ rte_eal_config_create(void)
struct rte_config *config = rte_eal_get_configuration();
const struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
size_t page_sz = rte_mem_page_size();
size_t cfg_len = sizeof(struct rte_mem_config);
size_t cfg_len_aligned = RTE_ALIGN(cfg_len, page_sz);
@@ -107,7 +108,7 @@ rte_eal_config_create(void)
const char *pathname = eal_runtime_config_path();
- if (internal_conf->no_shconf)
+ if (user_cfg->no_shconf)
return 0;
/* map the config before base address so that we don't waste a page */
@@ -184,11 +185,10 @@ rte_eal_config_attach(void)
void *rte_mem_cfg_addr;
const char *pathname = eal_runtime_config_path();
struct rte_config *config = rte_eal_get_configuration();
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->no_shconf)
+ if (user_cfg->no_shconf)
return 0;
if (mem_cfg_fd < 0){
@@ -223,10 +223,9 @@ rte_eal_config_reattach(void)
struct rte_mem_config *mem_config;
void *rte_mem_cfg_addr;
struct rte_config *config = rte_eal_get_configuration();
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->no_shconf)
+ if (user_cfg->no_shconf)
return 0;
/* save the address primary process has mapped shared config to */
@@ -266,11 +265,10 @@ eal_proc_type_detect(void)
{
enum rte_proc_type_t ptype = RTE_PROC_PRIMARY;
const char *pathname = eal_runtime_config_path();
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* if there no shared config, there can be no secondary processes */
- if (!internal_conf->no_shconf) {
+ if (!user_cfg->no_shconf) {
/* if we can open the file but not get a write-lock we are a
* secondary process. NOTE: if we get a file handle back, we
* keep that open and don't close it to prevent a race condition
@@ -292,10 +290,9 @@ static int
rte_config_init(void)
{
struct rte_config *config = rte_eal_get_configuration();
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- config->process_type = internal_conf->process_type;
+ config->process_type = user_cfg->process_type;
switch (config->process_type) {
case RTE_PROC_PRIMARY:
@@ -476,9 +473,9 @@ rte_eal_init(int argc, char **argv)
/* FreeBSD always uses legacy memory model */
internal_conf->legacy_mem = true;
- if (internal_conf->in_memory) {
+ if (user_cfg->in_memory) {
EAL_LOG(WARNING, "Warning: ignoring unsupported flag, '--in-memory'");
- internal_conf->in_memory = false;
+ user_cfg->in_memory = false;
}
if (eal_plugins_init() < 0) {
@@ -578,7 +575,7 @@ rte_eal_init(int argc, char **argv)
if (!user_cfg->no_hugetlbfs) {
/* rte_config isn't initialized yet */
- ret = internal_conf->process_type == RTE_PROC_PRIMARY ?
+ ret = user_cfg->process_type == RTE_PROC_PRIMARY ?
eal_hugepage_info_init() :
eal_hugepage_info_read();
if (ret < 0) {
@@ -595,7 +592,7 @@ rte_eal_init(int argc, char **argv)
user_cfg->memory = eal_get_hugepage_mem_size();
}
- if (internal_conf->vmware_tsc_map == 1) {
+ if (user_cfg->vmware_tsc_map) {
#ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT
rte_cycles_vmware_tsc_map = 1;
EAL_LOG(DEBUG, "Using VMWARE TSC MAP, "
@@ -744,11 +741,11 @@ rte_eal_init(int argc, char **argv)
* In no_shconf mode, no runtime directory is created in the first
* place, so no cleanup needed.
*/
- if (!internal_conf->no_shconf && eal_clean_runtime_dir() < 0) {
+ if (!user_cfg->no_shconf && eal_clean_runtime_dir() < 0) {
rte_eal_init_alert("Cannot clear runtime directory");
goto err_out;
}
- if (rte_eal_process_type() == RTE_PROC_PRIMARY && !internal_conf->no_telemetry) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY && !user_cfg->no_telemetry) {
if (rte_telemetry_init(rte_eal_get_runtime_dir(),
rte_version(),
&internal_conf->ctrl_cpuset) != 0)
@@ -796,9 +793,8 @@ rte_eal_cleanup(void)
RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev)
int rte_eal_create_uio_dev(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
- return internal_conf->create_uio_dev;
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ return user_cfg->create_uio_dev;
}
RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode)
diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c
index b6772e0701..586c5d9f17 100644
--- a/lib/eal/freebsd/eal_hugepage_info.c
+++ b/lib/eal/freebsd/eal_hugepage_info.c
@@ -62,6 +62,7 @@ eal_hugepage_info_init(void)
/* re-use the linux "internal config" structure for our memory data */
struct hugepage_info *hpi = &internal_conf->hugepage_info[0];
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct hugepage_info *tmp_hpi;
unsigned int i;
@@ -111,7 +112,7 @@ eal_hugepage_info_init(void)
hpi->lock_descriptor = fd;
/* for no shared files mode, do not create shared memory config */
- if (internal_conf->no_shconf)
+ if (user_cfg->no_shconf)
return 0;
tmp_hpi = create_shared_memory(eal_hugepage_info_path(),
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index c51aa7e3b4..c9c30e15fd 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -184,10 +184,11 @@ rte_eal_config_create(void)
int retval;
const struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
const char *pathname = eal_runtime_config_path();
- if (internal_conf->no_shconf)
+ if (user_cfg->no_shconf)
return 0;
/* map the config before hugepage address so that we don't waste a page */
@@ -265,12 +266,11 @@ rte_eal_config_attach(void)
{
struct rte_config *config = rte_eal_get_configuration();
struct rte_mem_config *mem_config;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
const char *pathname = eal_runtime_config_path();
- if (internal_conf->no_shconf)
+ if (user_cfg->no_shconf)
return 0;
if (mem_cfg_fd < 0){
@@ -305,10 +305,9 @@ rte_eal_config_reattach(void)
struct rte_config *config = rte_eal_get_configuration();
struct rte_mem_config *mem_config;
void *rte_mem_cfg_addr;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->no_shconf)
+ if (user_cfg->no_shconf)
return 0;
/* save the address primary process has mapped shared config to */
@@ -350,11 +349,10 @@ eal_proc_type_detect(void)
{
enum rte_proc_type_t ptype = RTE_PROC_PRIMARY;
const char *pathname = eal_runtime_config_path();
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* if there no shared config, there can be no secondary processes */
- if (!internal_conf->no_shconf) {
+ if (!user_cfg->no_shconf) {
/* if we can open the file but not get a write-lock we are a
* secondary process. NOTE: if we get a file handle back, we
* keep that open and don't close it to prevent a race condition
@@ -376,10 +374,9 @@ static int
rte_config_init(void)
{
struct rte_config *config = rte_eal_get_configuration();
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- config->process_type = internal_conf->process_type;
+ config->process_type = user_cfg->process_type;
switch (config->process_type) {
case RTE_PROC_PRIMARY:
@@ -741,7 +738,7 @@ rte_eal_init(int argc, char **argv)
if (!user_cfg->no_hugetlbfs) {
/* rte_config isn't initialized yet */
- ret = internal_conf->process_type == RTE_PROC_PRIMARY ?
+ ret = user_cfg->process_type == RTE_PROC_PRIMARY ?
eal_hugepage_info_init() :
eal_hugepage_info_read();
if (ret < 0) {
@@ -756,7 +753,7 @@ rte_eal_init(int argc, char **argv)
user_cfg->memory = MEMSIZE_IF_NO_HUGE_PAGE;
}
- if (internal_conf->vmware_tsc_map == 1) {
+ if (user_cfg->vmware_tsc_map) {
#ifdef RTE_LIBRTE_EAL_VMWARE_TSC_MAP_SUPPORT
rte_cycles_vmware_tsc_map = 1;
EAL_LOG(DEBUG, "Using VMWARE TSC MAP, "
@@ -917,11 +914,11 @@ rte_eal_init(int argc, char **argv)
* In no_shconf mode, no runtime directory is created in the first
* place, so no cleanup needed.
*/
- if (!internal_conf->no_shconf && eal_clean_runtime_dir() < 0) {
+ if (!user_cfg->no_shconf && eal_clean_runtime_dir() < 0) {
rte_eal_init_alert("Cannot clear runtime directory");
goto err_out;
}
- if (rte_eal_process_type() == RTE_PROC_PRIMARY && !internal_conf->no_telemetry) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY && !user_cfg->no_telemetry) {
if (rte_telemetry_init(rte_eal_get_runtime_dir(),
rte_version(),
&internal_conf->ctrl_cpuset) != 0)
@@ -1000,10 +997,9 @@ rte_eal_cleanup(void)
RTE_EXPORT_SYMBOL(rte_eal_create_uio_dev)
int rte_eal_create_uio_dev(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- return internal_conf->create_uio_dev;
+ return user_cfg->create_uio_dev;
}
RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode)
diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c
index 2f889b291e..44dafa5292 100644
--- a/lib/eal/linux/eal_hugepage_info.c
+++ b/lib/eal/linux/eal_hugepage_info.c
@@ -500,7 +500,7 @@ hugepage_info_init(void)
* init process.
*/
#ifdef MAP_HUGE_SHIFT
- if (internal_conf->in_memory) {
+ if (user_cfg->in_memory) {
EAL_LOG(DEBUG, "In-memory mode enabled, "
"hugepages of size %" PRIu64 " bytes "
"will be allocated anonymously",
@@ -581,12 +581,13 @@ eal_hugepage_info_init(void)
unsigned int i;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (hugepage_info_init() < 0)
return -1;
/* for no shared files mode, we're done */
- if (internal_conf->no_shconf)
+ if (user_cfg->no_shconf)
return 0;
hpi = &internal_conf->hugepage_info[0];
diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c
index 37f5da8d1f..d2fb08e625 100644
--- a/lib/eal/linux/eal_memalloc.c
+++ b/lib/eal/linux/eal_memalloc.c
@@ -275,7 +275,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
/* for in-memory mode, we only make it here when we're sure we support
* memfd, and this is a special case.
*/
- if (internal_conf->in_memory)
+ if (user_cfg->in_memory)
return get_seg_memfd(hi, list_idx, seg_idx);
if (internal_conf->single_file_segments) {
@@ -459,13 +459,12 @@ resize_hugefile_in_filesystem(int fd, uint64_t fa_offset, uint64_t page_sz,
static void
close_hugefile(int fd, char *path, int list_idx)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/*
* primary process must unlink the file, but only when not in in-memory
* mode (as in that case there is no file to unlink).
*/
- if (!internal_conf->in_memory &&
+ if (!user_cfg->in_memory &&
rte_eal_process_type() == RTE_PROC_PRIMARY &&
unlink(path))
EAL_LOG(ERR, "%s(): unlinking '%s' failed: %s",
@@ -482,10 +481,9 @@ resize_hugefile(int fd, uint64_t fa_offset, uint64_t page_sz, bool grow,
/* in-memory mode is a special case, because we can be sure that
* fallocate() is supported.
*/
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->in_memory) {
+ if (user_cfg->in_memory) {
if (dirty != NULL)
*dirty = false;
return resize_hugefile_in_memory(fd, fa_offset,
@@ -521,7 +519,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
alloc_sz = hi->hugepage_sz;
/* these are checked at init, but code analyzers don't know that */
- if (internal_conf->in_memory && !anonymous_hugepages_supported) {
+ if (user_cfg->in_memory && !anonymous_hugepages_supported) {
EAL_LOG(ERR, "Anonymous hugepages not supported, in-memory mode cannot allocate memory");
return -1;
}
@@ -550,7 +548,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
goto resized;
}
if (user_cfg->hugepage_file.unlink_before_mapping &&
- !internal_conf->in_memory) {
+ !user_cfg->in_memory) {
if (unlink(path)) {
EAL_LOG(DEBUG, "%s(): unlink() failed: %s",
__func__, strerror(errno));
@@ -683,7 +681,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
} else {
/* only remove file if we can take out a write lock */
if (!user_cfg->hugepage_file.unlink_before_mapping &&
- internal_conf->in_memory == 0 &&
+ !user_cfg->in_memory &&
lock(fd, LOCK_EX) == 1)
unlink(path);
close(fd);
@@ -736,7 +734,7 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi,
/* if we're able to take out a write lock, we're the last one
* holding onto this page.
*/
- if (!internal_conf->in_memory &&
+ if (!user_cfg->in_memory &&
user_cfg->hugepage_file.unlink_existing &&
!user_cfg->hugepage_file.unlink_before_mapping) {
ret = lock(fd, LOCK_EX);
@@ -774,8 +772,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
size_t page_sz;
int cur_idx, start_idx, j, dir_fd = -1;
unsigned int msl_idx, need, i;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (msl->page_sz != wa->page_sz)
return 0;
@@ -825,7 +822,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
* during init, we already hold a write lock, so don't try to take out
* another one.
*/
- if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) {
+ if (wa->hi->lock_descriptor == -1 && !user_cfg->in_memory) {
dir_fd = open(wa->hi->hugedir, O_RDONLY);
if (dir_fd < 0) {
EAL_LOG(ERR, "%s(): Cannot open '%s': %s",
@@ -908,8 +905,7 @@ free_seg_walk(const struct rte_memseg_list *msl, void *arg)
struct free_walk_param *wa = arg;
uintptr_t start_addr, end_addr;
int msl_idx, seg_idx, ret, dir_fd = -1;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
start_addr = (uintptr_t) msl->base_va;
end_addr = start_addr + msl->len;
@@ -932,7 +928,7 @@ free_seg_walk(const struct rte_memseg_list *msl, void *arg)
* during init, we already hold a write lock, so don't try to take out
* another one.
*/
- if (wa->hi->lock_descriptor == -1 && !internal_conf->in_memory) {
+ if (wa->hi->lock_descriptor == -1 && !user_cfg->in_memory) {
dir_fd = open(wa->hi->hugedir, O_RDONLY);
if (dir_fd < 0) {
EAL_LOG(ERR, "%s(): Cannot open '%s': %s",
@@ -1097,8 +1093,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
int
eal_memalloc_free_seg(struct rte_memseg *ms)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct internal_config *internal_conf = eal_get_internal_configuration();
/* dynamic free not supported in legacy mode */
if (internal_conf->legacy_mem)
@@ -1656,8 +1651,6 @@ eal_memalloc_cleanup(void)
int
eal_memalloc_init(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
@@ -1667,7 +1660,7 @@ eal_memalloc_init(void)
if (rte_memseg_list_walk_thread_unsafe(secondary_msl_create_walk, NULL) < 0)
return -1;
if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
- internal_conf->in_memory) {
+ user_cfg->in_memory) {
EAL_LOG(DEBUG, "Using memfd for anonymous memory");
/* this cannot ever happen but better safe than sorry */
if (!anonymous_hugepages_supported) {
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index a53fe65c60..f52206e698 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -545,11 +545,10 @@ create_shared_memory(const char *filename, const size_t mem_size)
{
void *retval;
int fd;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* if no shared files mode is used, create anonymous memory instead */
- if (internal_conf->no_shconf) {
+ if (user_cfg->no_shconf) {
retval = mmap(NULL, mem_size, PROT_READ | PROT_WRITE,
MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
if (retval == MAP_FAILED)
diff --git a/lib/eal/linux/eal_timer_hpet.c b/lib/eal/linux/eal_timer_hpet.c
index 63e38bd53e..c1a3993e8f 100644
--- a/lib/eal/linux/eal_timer_hpet.c
+++ b/lib/eal/linux/eal_timer_hpet.c
@@ -88,10 +88,9 @@ RTE_EXPORT_SYMBOL(rte_get_hpet_hz)
uint64_t
rte_get_hpet_hz(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->no_hpet)
+ if (user_cfg->no_hpet)
rte_panic("Error, HPET called, but no HPET present\n");
return eal_hpet_resolution_hz;
@@ -103,10 +102,9 @@ rte_get_hpet_cycles(void)
{
uint32_t t, msb;
uint64_t ret;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->no_hpet)
+ if (user_cfg->no_hpet)
rte_panic("Error, HPET called, but no HPET present\n");
t = eal_hpet->counter_l;
@@ -126,10 +124,9 @@ int
rte_eal_hpet_init(int make_default)
{
int fd, ret;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->no_hpet) {
+ if (user_cfg->no_hpet) {
EAL_LOG(NOTICE, "HPET is disabled");
return -1;
}
@@ -138,14 +135,14 @@ rte_eal_hpet_init(int make_default)
if (fd < 0) {
EAL_LOG(ERR, "ERROR: Cannot open "DEV_HPET": %s!",
strerror(errno));
- internal_conf->no_hpet = 1;
+ user_cfg->no_hpet = true;
return -1;
}
eal_hpet = mmap(NULL, 1024, PROT_READ, MAP_SHARED, fd, 0);
if (eal_hpet == MAP_FAILED) {
EAL_LOG(ERR, "ERROR: Cannot mmap "DEV_HPET"!");
close(fd);
- internal_conf->no_hpet = 1;
+ user_cfg->no_hpet = true;
return -1;
}
close(fd);
@@ -169,7 +166,7 @@ rte_eal_hpet_init(int make_default)
hpet_msb_inc, NULL);
if (ret != 0) {
EAL_LOG(ERR, "ERROR: Cannot create HPET timer thread!");
- internal_conf->no_hpet = 1;
+ user_cfg->no_hpet = true;
return -1;
}
diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index f1050ffa60..678ac57e87 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -482,8 +482,8 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg,
* knowledge of them. Requesting a group fd from the primary for a
* container it doesn't know about would be incorrect.
*/
- const struct internal_config *internal_conf = eal_get_internal_configuration();
- bool mp_request = (internal_conf->process_type == RTE_PROC_SECONDARY) &&
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ bool mp_request = (user_cfg->process_type == RTE_PROC_SECONDARY) &&
(vfio_cfg == default_vfio_cfg);
vfio_group_fd = vfio_open_group_fd(iommu_group_num, mp_request);
@@ -770,8 +770,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
int iommu_group_num;
rte_uuid_t vf_token;
int i, ret;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* get group number */
ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num);
@@ -853,7 +852,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
* Note this can happen several times with the hotplug
* functionality.
*/
- if (internal_conf->process_type == RTE_PROC_PRIMARY &&
+ if (user_cfg->process_type == RTE_PROC_PRIMARY &&
vfio_cfg->vfio_active_groups == 1 &&
vfio_group_device_count(vfio_group_fd) == 0) {
const struct vfio_iommu_type *t;
@@ -1106,8 +1105,7 @@ rte_vfio_enable(const char *modname)
unsigned int i, j;
int vfio_available;
DIR *dir;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
rte_spinlock_recursive_t lock = RTE_SPINLOCK_RECURSIVE_INITIALIZER;
@@ -1151,7 +1149,7 @@ rte_vfio_enable(const char *modname)
}
closedir(dir);
- if (internal_conf->process_type == RTE_PROC_PRIMARY) {
+ if (user_cfg->process_type == RTE_PROC_PRIMARY) {
if (vfio_mp_sync_setup() == -1) {
default_vfio_cfg->vfio_container_fd = -1;
} else {
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 622bda7578..9ec4892fdb 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -218,11 +218,11 @@ rte_eal_init(int argc, char **argv)
}
/* Prevent creation of shared memory files. */
- if (internal_conf->in_memory == 0) {
+ if (!user_cfg->in_memory) {
EAL_LOG(WARNING, "Multi-process support is requested, "
"but not available.");
- internal_conf->in_memory = 1;
- internal_conf->no_shconf = 1;
+ user_cfg->in_memory = true;
+ user_cfg->no_shconf = true;
}
if (!user_cfg->no_hugetlbfs && (eal_hugepage_info_init() < 0)) {
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [RFC PATCH 06/44] eal: move advanced user config options to user cfg struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (4 preceding siblings ...)
2026-04-29 16:57 ` [RFC PATCH 05/44] eal: move process " Bruce Richardson
@ 2026-04-29 16:57 ` Bruce Richardson
2026-04-29 16:57 ` [RFC PATCH 07/44] eal: move hugepage size info to platform info struct Bruce Richardson
` (39 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:57 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Move the more advanced configuration options, such as those for virtual
base addresses, VFIO interrupt mode, etc., to the user configuration
structure, so that all user-provided config options are in a single
struct containing no fields other than user config options.
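As a standalone illustration of the call-site pattern this series applies
(replacing reads of the monolithic internal_config with the new user-config
accessor, and `volatile unsigned` flags with plain `bool` fields): the struct,
accessor, and field names below are taken from the diffs, but the field set is
abridged and this is a self-contained sketch, not the actual EAL code.

```c
#include <stdbool.h>
#include <stdint.h>

/* abridged sketch of the user-provided configuration struct */
struct eal_user_cfg {
	bool no_shconf;          /* true if there is no shared config */
	bool in_memory;          /* true to run with no shared runtime files */
	uintptr_t base_virtaddr; /* user-requested base VA, 0 means default */
};

/* single process-wide instance, zero-initialized (all flags off) */
static struct eal_user_cfg eal_user_cfg;

/* accessor mirroring eal_get_user_configuration() from the patches */
static struct eal_user_cfg *
eal_get_user_configuration(void)
{
	return &eal_user_cfg;
}

/* call sites test the bool directly instead of "== 1" on an unsigned */
static int
ipc_init(void)
{
	const struct eal_user_cfg *user_cfg = eal_get_user_configuration();

	if (user_cfg->no_shconf)
		return -1; /* IPC disabled in no-shared-files mode */
	return 0;
}
```

With the flags held as `bool` in an init-time-only struct, call sites can take
a `const` pointer and branch on the field directly, as the converted hunks above
do for `no_shconf`, `in_memory`, and `no_hpet`.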
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_bus.c | 4 +-
lib/eal/common/eal_common_config.c | 6 +-
lib/eal/common/eal_common_dynmem.c | 2 +-
lib/eal/common/eal_common_mcfg.c | 14 ++---
lib/eal/common/eal_common_memalloc.c | 5 +-
lib/eal/common/eal_common_memory.c | 36 +++++------
lib/eal/common/eal_common_options.c | 90 +++++++++++++---------------
lib/eal/common/eal_internal_cfg.h | 33 ++++------
lib/eal/common/eal_options.h | 3 +-
lib/eal/common/malloc_elem.c | 15 ++---
lib/eal/common/malloc_heap.c | 17 +++---
lib/eal/freebsd/eal.c | 15 ++---
lib/eal/linux/eal.c | 26 ++++----
lib/eal/linux/eal_hugepage_info.c | 5 +-
lib/eal/linux/eal_memalloc.c | 63 ++++++++-----------
lib/eal/linux/eal_memory.c | 30 +++++-----
lib/eal/windows/eal.c | 9 +--
lib/eal/windows/eal_memalloc.c | 6 +-
lib/eal/windows/eal_memory.c | 6 +-
19 files changed, 165 insertions(+), 220 deletions(-)
diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c
index b33f5b4bf4..9682136129 100644
--- a/lib/eal/common/eal_common_bus.c
+++ b/lib/eal/common/eal_common_bus.c
@@ -258,12 +258,12 @@ RTE_EXPORT_INTERNAL_SYMBOL(rte_bus_device_is_ignored)
bool
rte_bus_device_is_ignored(const struct rte_bus *bus, const char *dev_name)
{
- const struct internal_config *internal_conf = eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct rte_devargs *devargs = rte_bus_find_devargs(bus, dev_name);
enum rte_bus_scan_mode scan_mode = bus->conf.scan_mode;
if (scan_mode == RTE_BUS_SCAN_UNDEFINED) {
- if (internal_conf->no_auto_probing != 0)
+ if (user_cfg->no_auto_probing)
scan_mode = RTE_BUS_SCAN_ALLOWLIST;
else
scan_mode = RTE_BUS_SCAN_BLOCKLIST;
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 5efc6623d6..50cba4fa1a 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -106,8 +106,8 @@ RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_get_baseaddr)
uint64_t
rte_eal_get_baseaddr(void)
{
- return (internal_config.base_virtaddr != 0) ?
- (uint64_t) internal_config.base_virtaddr :
+ return (eal_user_cfg.base_virtaddr != 0) ?
+ (uint64_t) eal_user_cfg.base_virtaddr :
eal_get_baseaddr();
}
@@ -123,7 +123,7 @@ RTE_EXPORT_SYMBOL(rte_eal_mbuf_user_pool_ops)
const char *
rte_eal_mbuf_user_pool_ops(void)
{
- return internal_config.user_mbuf_pool_ops_name;
+ return eal_user_cfg.user_mbuf_pool_ops_name;
}
/* return non-zero if hugepages are enabled. */
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index 7913509eb9..73a55794e0 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -96,7 +96,7 @@ eal_dynmem_memseg_lists_init(void)
#ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES
/* we can still sort pages by socket in legacy mode */
- if (!internal_conf->legacy_mem && socket_id > 0)
+ if (!user_cfg->legacy_mem && socket_id > 0)
break;
#endif
memtypes[cur_type].page_sz = hugepage_sz;
diff --git a/lib/eal/common/eal_common_mcfg.c b/lib/eal/common/eal_common_mcfg.c
index 84ee3f3959..fddeae255e 100644
--- a/lib/eal/common/eal_common_mcfg.c
+++ b/lib/eal/common/eal_common_mcfg.c
@@ -50,22 +50,20 @@ void
eal_mcfg_update_internal(void)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- internal_conf->legacy_mem = mcfg->legacy_mem;
- internal_conf->single_file_segments = mcfg->single_file_segments;
+ user_cfg->legacy_mem = mcfg->legacy_mem;
+ user_cfg->single_file_segments = mcfg->single_file_segments;
}
void
eal_mcfg_update_from_internal(void)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- mcfg->legacy_mem = internal_conf->legacy_mem;
- mcfg->single_file_segments = internal_conf->single_file_segments;
+ mcfg->legacy_mem = user_cfg->legacy_mem;
+ mcfg->single_file_segments = user_cfg->single_file_segments;
/* record current DPDK version */
mcfg->version = RTE_VERSION;
}
diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c
index 47e782f395..e3eadf0237 100644
--- a/lib/eal/common/eal_common_memalloc.c
+++ b/lib/eal/common/eal_common_memalloc.c
@@ -72,15 +72,14 @@ eal_memalloc_is_contig(const struct rte_memseg_list *msl, void *start,
void *end, *aligned_start, *aligned_end;
size_t pgsz = (size_t)msl->page_sz;
const struct rte_memseg *ms;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* for IOVA_VA, it's always contiguous */
if (rte_eal_iova_mode() == RTE_IOVA_VA && !msl->external)
return true;
/* for legacy memory, it's always contiguous */
- if (internal_conf->legacy_mem)
+ if (user_cfg->legacy_mem)
return true;
end = RTE_PTR_ADD(start, len);
diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index b6a737b1ab..42ddc34b01 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -54,8 +54,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size,
uint64_t map_sz;
void *mapped_addr, *aligned_addr;
uint8_t try = 0;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (system_page_sz == 0)
system_page_sz = rte_mem_page_size();
@@ -66,12 +65,12 @@ eal_get_virtual_area(void *requested_addr, size_t *size,
allow_shrink = (flags & EAL_VIRTUAL_AREA_ALLOW_SHRINK) > 0;
unmap = (flags & EAL_VIRTUAL_AREA_UNMAP) > 0;
- if (next_baseaddr == NULL && internal_conf->base_virtaddr != 0 &&
+ if (next_baseaddr == NULL && user_cfg->base_virtaddr != 0 &&
rte_eal_process_type() == RTE_PROC_PRIMARY)
- next_baseaddr = (void *) internal_conf->base_virtaddr;
+ next_baseaddr = (void *) user_cfg->base_virtaddr;
#ifdef RTE_ARCH_64
- if (next_baseaddr == NULL && internal_conf->base_virtaddr == 0 &&
+ if (next_baseaddr == NULL && user_cfg->base_virtaddr == 0 &&
rte_eal_process_type() == RTE_PROC_PRIMARY)
next_baseaddr = (void *) eal_get_baseaddr();
#endif
@@ -152,7 +151,7 @@ eal_get_virtual_area(void *requested_addr, size_t *size,
* demote this warning to debug if we did not explicitly request
* a base virtual address.
*/
- if (internal_conf->base_virtaddr != 0) {
+ if (user_cfg->base_virtaddr != 0) {
EAL_LOG(WARNING, "WARNING! Base virtual address hint (%p != %p) not respected!",
requested_addr, aligned_addr);
EAL_LOG(WARNING, " This may cause issues with mapping memory into secondary processes");
@@ -385,8 +384,7 @@ void *
rte_mem_iova2virt(rte_iova_t iova)
{
struct virtiova vi;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
memset(&vi, 0, sizeof(vi));
@@ -394,7 +392,7 @@ rte_mem_iova2virt(rte_iova_t iova)
/* for legacy mem, we can get away with scanning VA-contiguous segments,
* as we know they are PA-contiguous as well
*/
- if (internal_conf->legacy_mem)
+ if (user_cfg->legacy_mem)
rte_memseg_contig_walk(find_virt_legacy, &vi);
else
rte_memseg_walk(find_virt, &vi);
@@ -478,11 +476,10 @@ int
rte_mem_event_callback_register(const char *name, rte_mem_event_callback_t clb,
void *arg)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* FreeBSD boots with legacy mem enabled by default */
- if (internal_conf->legacy_mem) {
+ if (user_cfg->legacy_mem) {
EAL_LOG(DEBUG, "Registering mem event callbacks not supported");
rte_errno = ENOTSUP;
return -1;
@@ -494,11 +491,10 @@ RTE_EXPORT_SYMBOL(rte_mem_event_callback_unregister)
int
rte_mem_event_callback_unregister(const char *name, void *arg)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* FreeBSD boots with legacy mem enabled by default */
- if (internal_conf->legacy_mem) {
+ if (user_cfg->legacy_mem) {
EAL_LOG(DEBUG, "Registering mem event callbacks not supported");
rte_errno = ENOTSUP;
return -1;
@@ -511,11 +507,10 @@ int
rte_mem_alloc_validator_register(const char *name,
rte_mem_alloc_validator_t clb, int socket_id, size_t limit)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* FreeBSD boots with legacy mem enabled by default */
- if (internal_conf->legacy_mem) {
+ if (user_cfg->legacy_mem) {
EAL_LOG(DEBUG, "Registering mem alloc validators not supported");
rte_errno = ENOTSUP;
return -1;
@@ -528,11 +523,10 @@ RTE_EXPORT_SYMBOL(rte_mem_alloc_validator_unregister)
int
rte_mem_alloc_validator_unregister(const char *name, int socket_id)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* FreeBSD boots with legacy mem enabled by default */
- if (internal_conf->legacy_mem) {
+ if (user_cfg->legacy_mem) {
EAL_LOG(DEBUG, "Registering mem alloc validators not supported");
rte_errno = ENOTSUP;
return -1;
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 48b004258a..a2f305fc68 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -505,7 +505,7 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
user_cfg->force_numa_limits = false;
for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
user_cfg->numa_limit[i] = 0;
- user_cfg->process_type = RTE_PROC_AUTO;
+ user_cfg->process_type = RTE_PROC_PRIMARY;
user_cfg->no_hugetlbfs = false;
user_cfg->no_pci = false;
user_cfg->hugefile_prefix = NULL;
@@ -518,14 +518,14 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
sizeof(internal_cfg->hugepage_info[0]));
internal_cfg->hugepage_info[i].lock_descriptor = -1;
}
- internal_cfg->base_virtaddr = 0;
+ user_cfg->base_virtaddr = 0;
/* if set to NONE, interrupt mode is determined automatically */
- internal_cfg->vfio_intr_mode = RTE_INTR_MODE_NONE;
- memset(internal_cfg->vfio_vf_token, 0,
- sizeof(internal_cfg->vfio_vf_token));
+ user_cfg->vfio_intr_mode = RTE_INTR_MODE_NONE;
+ memset(user_cfg->vfio_vf_token, 0,
+ sizeof(user_cfg->vfio_vf_token));
- internal_cfg->no_auto_probing = 0;
+ user_cfg->no_auto_probing = false;
#ifdef RTE_LIBEAL_USE_HPET
user_cfg->no_hpet = false;
@@ -537,12 +537,12 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
user_cfg->in_memory = false;
user_cfg->create_uio_dev = false;
user_cfg->no_telemetry = false;
- internal_cfg->iova_mode = RTE_IOVA_DC;
- internal_cfg->user_mbuf_pool_ops_name = NULL;
+ user_cfg->iova_mode = RTE_IOVA_DC;
+ user_cfg->user_mbuf_pool_ops_name = NULL;
CPU_ZERO(&internal_cfg->ctrl_cpuset);
internal_cfg->init_complete = 0;
- internal_cfg->max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH;
- internal_cfg->max_simd_bitwidth.forced = 0;
+ user_cfg->max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH;
+ user_cfg->max_simd_bitwidth.forced = 0;
}
static int
@@ -1605,8 +1605,7 @@ static int
eal_parse_iova_mode(const char *name)
{
int mode;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (name == NULL)
return -1;
@@ -1618,7 +1617,7 @@ eal_parse_iova_mode(const char *name)
else
return -1;
- internal_conf->iova_mode = mode;
+ user_cfg->iova_mode = mode;
return 0;
}
@@ -1628,8 +1627,7 @@ eal_parse_simd_bitwidth(const char *arg)
char *end;
unsigned long bitwidth;
int ret;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (arg == NULL || arg[0] == '\0')
return -1;
@@ -1646,7 +1644,7 @@ eal_parse_simd_bitwidth(const char *arg)
ret = rte_vect_set_max_simd_bitwidth(bitwidth);
if (ret < 0)
return -1;
- internal_conf->max_simd_bitwidth.forced = 1;
+ user_cfg->max_simd_bitwidth.forced = 1;
return 0;
}
@@ -1655,8 +1653,7 @@ eal_parse_base_virtaddr(const char *arg)
{
char *end;
uint64_t addr;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
errno = 0;
addr = strtoull(arg, &end, 16);
@@ -1676,7 +1673,7 @@ eal_parse_base_virtaddr(const char *arg)
* it can align to 2MB for x86. So this alignment can also be used
* on x86 and other architectures.
*/
- internal_conf->base_virtaddr =
+ user_cfg->base_virtaddr =
RTE_PTR_ALIGN_CEIL((uintptr_t)addr, (size_t)RTE_PGSIZE_16M);
return 0;
@@ -1881,8 +1878,7 @@ eal_parse_socket_arg(char *strval, volatile uint64_t *socket_arg)
static int
eal_parse_vfio_intr(const char *mode)
{
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
static struct {
const char *name;
enum rte_intr_mode value;
@@ -1894,7 +1890,7 @@ eal_parse_vfio_intr(const char *mode)
for (size_t i = 0; i < RTE_DIM(map); i++) {
if (!strcmp(mode, map[i].name)) {
- internal_conf->vfio_intr_mode = map[i].value;
+ user_cfg->vfio_intr_mode = map[i].value;
return 0;
}
}
@@ -1904,11 +1900,11 @@ eal_parse_vfio_intr(const char *mode)
static int
eal_parse_vfio_vf_token(const char *vf_token)
{
- struct internal_config *cfg = eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
rte_uuid_t uuid;
if (!rte_uuid_parse(vf_token, uuid)) {
- rte_uuid_copy(cfg->vfio_vf_token, uuid);
+ rte_uuid_copy(user_cfg->vfio_vf_token, uuid);
return 0;
}
@@ -1922,7 +1918,7 @@ eal_parse_huge_worker_stack(const char *arg)
EAL_LOG(WARNING, "Cannot set worker stack size on Windows, parameter ignored");
RTE_SET_USED(arg);
#else
- struct internal_config *cfg = eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (arg == NULL || arg[0] == '\0') {
pthread_attr_t attr;
@@ -1932,7 +1928,7 @@ eal_parse_huge_worker_stack(const char *arg)
EAL_LOG(ERR, "Could not retrieve default stack size");
return -1;
}
- ret = pthread_attr_getstacksize(&attr, &cfg->huge_worker_stack_size);
+ ret = pthread_attr_getstacksize(&attr, &user_cfg->huge_worker_stack_size);
pthread_attr_destroy(&attr);
if (ret != 0) {
EAL_LOG(ERR, "Could not retrieve default stack size");
@@ -1948,11 +1944,11 @@ eal_parse_huge_worker_stack(const char *arg)
stack_size >= (size_t)-1 / 1024)
return -1;
- cfg->huge_worker_stack_size = stack_size * 1024;
+ user_cfg->huge_worker_stack_size = stack_size * 1024;
}
EAL_LOG(DEBUG, "Each worker thread will use %zu kB of DPDK memory as stack",
- cfg->huge_worker_stack_size / 1024);
+ user_cfg->huge_worker_stack_size / 1024);
#endif
return 0;
}
@@ -1986,7 +1982,7 @@ eal_parse_args(void)
}
if (args.no_auto_probing)
- int_cfg->no_auto_probing = 1;
+ user_cfg->no_auto_probing = true;
/* device -a/-b/-vdev options*/
TAILQ_FOREACH(arg, &args.allow, next)
@@ -2126,7 +2122,7 @@ eal_parse_args(void)
if (args.no_huge) {
user_cfg->no_hugetlbfs = true;
/* no-huge is legacy mem */
- int_cfg->legacy_mem = true;
+ user_cfg->legacy_mem = true;
}
if (args.in_memory) {
user_cfg->in_memory = true;
@@ -2135,12 +2131,12 @@ eal_parse_args(void)
user_cfg->hugepage_file.unlink_before_mapping = true;
}
if (args.legacy_mem) {
- int_cfg->legacy_mem = true;
+ user_cfg->legacy_mem = true;
if (args.memory_size == NULL && args.numa_mem == NULL)
EAL_LOG(NOTICE, "Static memory layout is selected, amount of reserved memory can be adjusted with -m or --socket-mem");
}
if (args.single_file_segments)
- int_cfg->single_file_segments = true;
+ user_cfg->single_file_segments = true;
if (args.huge_dir != NULL) {
if (strlen(args.huge_dir) < 1) {
EAL_LOG(ERR, "Invalid hugepage dir parameter");
@@ -2241,7 +2237,7 @@ eal_parse_args(void)
if (args.no_telemetry)
user_cfg->no_telemetry = true;
if (args.match_allocations)
- int_cfg->match_allocations = true;
+ user_cfg->match_allocations = true;
if (args.create_uio_dev)
user_cfg->create_uio_dev = true;
@@ -2287,13 +2283,13 @@ eal_parse_args(void)
}
}
if (args.mbuf_pool_ops_name != NULL) {
- free(int_cfg->user_mbuf_pool_ops_name); /* free old ops name */
- int_cfg->user_mbuf_pool_ops_name = strdup(args.mbuf_pool_ops_name);
- if (int_cfg->user_mbuf_pool_ops_name == NULL) {
+ free(user_cfg->user_mbuf_pool_ops_name); /* free old ops name */
+ user_cfg->user_mbuf_pool_ops_name = strdup(args.mbuf_pool_ops_name);
+ if (user_cfg->user_mbuf_pool_ops_name == NULL) {
EAL_LOG(ERR, "failed to allocate memory for mbuf pool ops name parameter");
return -1;
}
- if (strlen(int_cfg->user_mbuf_pool_ops_name) < 1) {
+ if (strlen(user_cfg->user_mbuf_pool_ops_name) < 1) {
EAL_LOG(ERR, "Invalid mbuf pool ops name parameter");
return -1;
}
@@ -2352,11 +2348,11 @@ compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
}
int
-eal_cleanup_config(struct internal_config *internal_cfg)
+eal_cleanup_config(const struct eal_user_cfg *user_cfg)
{
- free(eal_get_user_configuration()->hugefile_prefix);
- free(eal_get_user_configuration()->hugepage_dir);
- free(internal_cfg->user_mbuf_pool_ops_name);
+ free(user_cfg->hugefile_prefix);
+ free(user_cfg->hugepage_dir);
+ free(user_cfg->user_mbuf_pool_ops_name);
return 0;
}
@@ -2384,18 +2380,16 @@ RTE_EXPORT_SYMBOL(rte_vect_get_max_simd_bitwidth)
uint16_t
rte_vect_get_max_simd_bitwidth(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
- return internal_conf->max_simd_bitwidth.bitwidth;
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ return user_cfg->max_simd_bitwidth.bitwidth;
}
RTE_EXPORT_SYMBOL(rte_vect_set_max_simd_bitwidth)
int
rte_vect_set_max_simd_bitwidth(uint16_t bitwidth)
{
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
- if (internal_conf->max_simd_bitwidth.forced) {
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ if (user_cfg->max_simd_bitwidth.forced) {
EAL_LOG(NOTICE, "Cannot set max SIMD bitwidth - user runtime override enabled");
return -EPERM;
}
@@ -2404,6 +2398,6 @@ rte_vect_set_max_simd_bitwidth(uint16_t bitwidth)
EAL_LOG(ERR, "Invalid bitwidth value!");
return -EINVAL;
}
- internal_conf->max_simd_bitwidth.bitwidth = bitwidth;
+ user_cfg->max_simd_bitwidth.bitwidth = bitwidth;
return 0;
}
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 4ba43eb5ca..3aec3b0020 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -56,7 +56,12 @@ struct hugepage_file_discipline {
*/
struct eal_user_cfg {
size_t memory; /**< amount of asked memory */
+ size_t huge_worker_stack_size; /**< worker thread stack size */
enum rte_proc_type_t process_type; /**< requested process type */
+ enum rte_intr_mode vfio_intr_mode; /**< default interrupt mode for VFIO */
+ enum rte_iova_mode iova_mode; /**< requested IOVA mode */
+ struct simd_bitwidth max_simd_bitwidth; /**< max simd bitwidth path to use */
+ rte_uuid_t vfio_vf_token; /**< shared VF token for VFIO-PCI bound PF and VFs */
uint8_t force_nchannel; /**< force number of channels */
uint8_t force_nrank; /**< force number of ranks */
bool force_numa; /**< true to request memory on specific NUMA nodes */
@@ -69,9 +74,15 @@ struct eal_user_cfg {
bool in_memory; /**< true to run with no shared runtime files */
bool create_uio_dev; /**< true to create /dev/uioX devices */
bool no_telemetry; /**< true to disable telemetry */
+ bool legacy_mem; /**< true to enable legacy memory behavior */
+ bool match_allocations; /**< true to free hugepages exactly as allocated */
+ bool no_auto_probing; /**< true to switch from block-listing to allow-listing */
+ bool single_file_segments; /**< true if storing all pages within single files */
struct hugepage_file_discipline hugepage_file;
char *hugefile_prefix; /**< the base filename of hugetlbfs files */
char *hugepage_dir; /**< specific hugetlbfs directory to use */
+ char *user_mbuf_pool_ops_name; /**< user defined mbuf pool ops name */
+ uintptr_t base_virtaddr; /**< base address to try and reserve memory from */
uint64_t numa_mem[RTE_MAX_NUMA_NODES]; /**< amount of memory per NUMA node */
uint64_t numa_limit[RTE_MAX_NUMA_NODES]; /**< limit amount of memory per NUMA node */
};
@@ -97,33 +108,11 @@ struct eal_runtime_state {
* internal configuration
*/
struct internal_config {
- uintptr_t base_virtaddr; /**< base address to try and reserve memory from */
- volatile unsigned legacy_mem;
- /**< true to enable legacy memory behavior (no dynamic allocation,
- * IOVA-contiguous segments).
- */
- volatile unsigned match_allocations;
- /**< true to free hugepages exactly as allocated */
- volatile unsigned single_file_segments;
- /**< true if storing all pages within single files (per-page-size,
- * per-node) non-legacy mode only.
- */
- /** default interrupt mode for VFIO */
- volatile enum rte_intr_mode vfio_intr_mode;
- /** the shared VF token for VFIO-PCI bound PF and VFs devices */
- rte_uuid_t vfio_vf_token;
- char *user_mbuf_pool_ops_name;
- /**< user defined mbuf pool ops name */
unsigned num_hugepage_sizes; /**< how many sizes on this system */
struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
- enum rte_iova_mode iova_mode ; /**< Set IOVA mode on this system */
rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
- struct simd_bitwidth max_simd_bitwidth;
- /**< max simd bitwidth path to use */
- size_t huge_worker_stack_size; /**< worker thread stack size */
- unsigned int no_auto_probing; /**< true to switch from block-listing to allow-listing */
};
struct eal_user_cfg *eal_get_user_configuration(void);
diff --git a/lib/eal/common/eal_options.h b/lib/eal/common/eal_options.h
index f5e7905609..5ad347b61d 100644
--- a/lib/eal/common/eal_options.h
+++ b/lib/eal/common/eal_options.h
@@ -8,12 +8,13 @@
#include "getopt.h"
struct rte_tel_data;
+struct eal_user_cfg;
int eal_parse_log_options(void);
int eal_parse_args(void);
int eal_option_device_parse(void);
int eal_adjust_config(struct internal_config *internal_cfg);
-int eal_cleanup_config(struct internal_config *internal_cfg);
+int eal_cleanup_config(const struct eal_user_cfg *user_cfg);
enum rte_proc_type_t eal_proc_type_detect(void);
int eal_plugins_init(void);
int eal_save_args(int argc, char **argv);
diff --git a/lib/eal/common/malloc_elem.c b/lib/eal/common/malloc_elem.c
index 452b119c20..7a10a66779 100644
--- a/lib/eal/common/malloc_elem.c
+++ b/lib/eal/common/malloc_elem.c
@@ -37,8 +37,7 @@ malloc_elem_find_max_iova_contig(struct malloc_elem *elem, size_t align)
rte_iova_t expected_iova;
struct rte_memseg *ms;
size_t page_sz, cur, max;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
page_sz = (size_t)elem->msl->page_sz;
data_start = RTE_PTR_ADD(elem, MALLOC_ELEM_HEADER_LEN);
@@ -57,7 +56,7 @@ malloc_elem_find_max_iova_contig(struct malloc_elem *elem, size_t align)
*/
if (!elem->msl->external &&
(rte_eal_iova_mode() == RTE_IOVA_VA ||
- (internal_conf->legacy_mem &&
+ (user_cfg->legacy_mem &&
rte_eal_has_hugepages())))
return RTE_PTR_DIFF(data_end, contig_seg_start);
@@ -338,24 +337,22 @@ remove_elem(struct malloc_elem *elem)
static int
next_elem_is_adjacent(struct malloc_elem *elem)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
return elem->next == RTE_PTR_ADD(elem, elem->size) &&
elem->next->msl == elem->msl &&
- (!internal_conf->match_allocations ||
+ (!user_cfg->match_allocations ||
elem->orig_elem == elem->next->orig_elem);
}
static int
prev_elem_is_adjacent(struct malloc_elem *elem)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
return elem == RTE_PTR_ADD(elem->prev, elem->prev->size) &&
elem->prev->msl == elem->msl &&
- (!internal_conf->match_allocations ||
+ (!user_cfg->match_allocations ||
elem->orig_elem == elem->prev->orig_elem);
}
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 77f364158a..bd25496275 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -647,15 +647,14 @@ malloc_heap_alloc_on_heap_id(size_t size, unsigned int heap_id, unsigned int fla
unsigned int size_flags = flags & ~RTE_MEMZONE_SIZE_HINT_ONLY;
int socket_id;
void *ret;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
rte_spinlock_lock(&(heap->lock));
align = align == 0 ? 1 : align;
/* for legacy mode, try once and with all flags */
- if (internal_conf->legacy_mem) {
+ if (user_cfg->legacy_mem) {
ret = heap_alloc(heap, size, flags, align, bound, contig);
goto alloc_unlock;
}
@@ -865,8 +864,7 @@ malloc_heap_free(struct malloc_elem *elem)
unsigned int i, n_segs, before_space, after_space;
int ret;
bool unmapped = false;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (!malloc_elem_cookies_ok(elem) || elem->state != ELEM_BUSY)
return -1;
@@ -894,7 +892,7 @@ malloc_heap_free(struct malloc_elem *elem)
/* ...of which we can't avail if we are in legacy mode, or if this is an
* externally allocated segment.
*/
- if (internal_conf->legacy_mem || (msl->external > 0))
+ if (user_cfg->legacy_mem || (msl->external > 0))
goto free_unlock;
/* check if we can free any memory back to the system */
@@ -905,7 +903,7 @@ malloc_heap_free(struct malloc_elem *elem)
* we will defer freeing these hugepages until the entire original allocation
* can be freed
*/
- if (internal_conf->match_allocations && elem->size != elem->orig_size)
+ if (user_cfg->match_allocations && elem->size != elem->orig_size)
goto free_unlock;
/* probably, but let's make sure, as we may not be using up full page */
@@ -1401,10 +1399,9 @@ rte_eal_malloc_heap_init(void)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
unsigned int i;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->match_allocations)
+ if (user_cfg->match_allocations)
EAL_LOG(DEBUG, "Hugepages will be freed exactly as allocated.");
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 7f8fa6e1c0..7e00010771 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -97,8 +97,6 @@ static int
rte_eal_config_create(void)
{
struct rte_config *config = rte_eal_get_configuration();
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
size_t page_sz = rte_mem_page_size();
size_t cfg_len = sizeof(struct rte_mem_config);
@@ -112,9 +110,9 @@ rte_eal_config_create(void)
return 0;
/* map the config before base address so that we don't waste a page */
- if (internal_conf->base_virtaddr != 0)
+ if (user_cfg->base_virtaddr != 0)
rte_mem_cfg_addr = (void *)
- RTE_ALIGN_FLOOR(internal_conf->base_virtaddr -
+ RTE_ALIGN_FLOOR(user_cfg->base_virtaddr -
sizeof(struct rte_mem_config), page_sz);
else
rte_mem_cfg_addr = NULL;
@@ -472,7 +470,7 @@ rte_eal_init(int argc, char **argv)
}
/* FreeBSD always uses legacy memory model */
- internal_conf->legacy_mem = true;
+ user_cfg->legacy_mem = true;
if (user_cfg->in_memory) {
EAL_LOG(WARNING, "Warning: ignoring unsupported flag, '--in-memory'");
user_cfg->in_memory = false;
@@ -538,7 +536,7 @@ rte_eal_init(int argc, char **argv)
/* Always call rte_bus_get_iommu_class() to trigger DMA mask detection and validation */
enum rte_iova_mode bus_iova_mode = rte_bus_get_iommu_class();
- iova_mode = internal_conf->iova_mode;
+ iova_mode = user_cfg->iova_mode;
if (iova_mode == RTE_IOVA_DC) {
EAL_LOG(DEBUG, "Specific IOVA mode is not requested, autodetecting");
if (has_phys_addr) {
@@ -775,8 +773,7 @@ rte_eal_cleanup(void)
return -1;
}
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
rte_service_finalize();
eal_bus_cleanup();
rte_mp_channel_cleanup();
@@ -785,7 +782,7 @@ rte_eal_cleanup(void)
eal_trace_fini();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
- eal_cleanup_config(internal_conf);
+ eal_cleanup_config(user_cfg);
eal_lcore_var_cleanup();
return 0;
}
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index c9c30e15fd..4b33e461fd 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -182,8 +182,6 @@ rte_eal_config_create(void)
size_t cfg_len_aligned = RTE_ALIGN(cfg_len, page_sz);
void *rte_mem_cfg_addr, *mapped_mem_cfg_addr;
int retval;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
const char *pathname = eal_runtime_config_path();
@@ -192,9 +190,9 @@ rte_eal_config_create(void)
return 0;
/* map the config before hugepage address so that we don't waste a page */
- if (internal_conf->base_virtaddr != 0)
+ if (user_cfg->base_virtaddr != 0)
rte_mem_cfg_addr = (void *)
- RTE_ALIGN_FLOOR(internal_conf->base_virtaddr -
+ RTE_ALIGN_FLOOR(user_cfg->base_virtaddr -
sizeof(struct rte_mem_config), page_sz);
else
rte_mem_cfg_addr = NULL;
@@ -522,8 +520,9 @@ eal_worker_thread_create(unsigned int lcore_id)
pthread_attr_t attr;
size_t stack_size;
int ret = -1;
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- stack_size = eal_get_internal_configuration()->huge_worker_stack_size;
+ stack_size = user_cfg->huge_worker_stack_size;
if (stack_size != 0) {
/* Allocate NUMA aware stack memory and set pthread attributes */
stack_ptr = rte_zmalloc_socket("lcore_stack", stack_size,
@@ -687,7 +686,7 @@ rte_eal_init(int argc, char **argv)
enum rte_iova_mode bus_iova_mode = rte_bus_get_iommu_class();
/* if no EAL option "--iova-mode=<pa|va>", use bus IOVA scheme */
- if (internal_conf->iova_mode == RTE_IOVA_DC) {
+ if (user_cfg->iova_mode == RTE_IOVA_DC) {
/* autodetect the IOVA mapping mode */
enum rte_iova_mode iova_mode = bus_iova_mode;
@@ -718,7 +717,7 @@ rte_eal_init(int argc, char **argv)
rte_eal_get_configuration()->iova_mode = iova_mode;
} else {
rte_eal_get_configuration()->iova_mode =
- internal_conf->iova_mode;
+ user_cfg->iova_mode;
}
if (rte_eal_iova_mode() == RTE_IOVA_PA && !phys_addrs) {
@@ -969,8 +968,6 @@ rte_eal_cleanup(void)
/* if we're in a primary process, we need to mark hugepages as freeable
* so that finalization can release them back to the system.
*/
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
@@ -988,7 +985,7 @@ rte_eal_cleanup(void)
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
rte_eal_malloc_heap_cleanup();
- eal_cleanup_config(internal_conf);
+ eal_cleanup_config(user_cfg);
eal_lcore_var_cleanup();
rte_eal_log_cleanup();
return 0;
@@ -1006,19 +1003,18 @@ RTE_EXPORT_SYMBOL(rte_eal_vfio_intr_mode)
enum rte_intr_mode
rte_eal_vfio_intr_mode(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- return internal_conf->vfio_intr_mode;
+ return user_cfg->vfio_intr_mode;
}
RTE_EXPORT_SYMBOL(rte_eal_vfio_get_vf_token)
void
rte_eal_vfio_get_vf_token(rte_uuid_t vf_token)
{
- struct internal_config *cfg = eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- rte_uuid_copy(vf_token, cfg->vfio_vf_token);
+ rte_uuid_copy(vf_token, user_cfg->vfio_vf_token);
}
int
diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c
index 44dafa5292..74c55327ff 100644
--- a/lib/eal/linux/eal_hugepage_info.c
+++ b/lib/eal/linux/eal_hugepage_info.c
@@ -401,8 +401,7 @@ calc_num_pages(struct hugepage_info *hpi, struct dirent *dirent,
{
uint64_t total_pages = 0;
unsigned int i;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/*
* first, try to put all hugepages into relevant sockets, but
@@ -418,7 +417,7 @@ calc_num_pages(struct hugepage_info *hpi, struct dirent *dirent,
* This could be determined by mapping,
* but it is precisely what hugepage file reuse is trying to avoid.
*/
- if (!internal_conf->legacy_mem && reusable_pages == 0)
+ if (!user_cfg->legacy_mem && reusable_pages == 0)
for (i = 0; i < rte_socket_count(); i++) {
int socket = rte_socket_id_by_idx(i);
unsigned int num_pages =
diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c
index d2fb08e625..7121f933ea 100644
--- a/lib/eal/linux/eal_memalloc.c
+++ b/lib/eal/linux/eal_memalloc.c
@@ -221,10 +221,9 @@ get_seg_memfd(struct hugepage_info *hi __rte_unused,
char segname[250]; /* as per manpage, limit is 249 bytes plus null */
int flags = MFD_HUGETLB | pagesz_flags(hi->hugepage_sz);
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->single_file_segments) {
+ if (user_cfg->single_file_segments) {
fd = fd_list[list_idx].memseg_list_fd;
if (fd < 0) {
@@ -265,8 +264,6 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
const char *huge_path;
struct stat st;
int ret;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (dirty != NULL)
@@ -278,7 +275,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
if (user_cfg->in_memory)
return get_seg_memfd(hi, list_idx, seg_idx);
- if (internal_conf->single_file_segments) {
+ if (user_cfg->single_file_segments) {
out_fd = &fd_list[list_idx].memseg_list_fd;
huge_path = eal_get_hugefile_path(path, buflen, hi->hugedir, list_idx);
} else {
@@ -322,7 +319,7 @@ get_seg_fd(char *path, int buflen, struct hugepage_info *hi,
* When multiple hugepages are mapped from the same file,
* whether they will be dirty depends on the part that is mapped.
*/
- if (!internal_conf->single_file_segments &&
+ if (!user_cfg->single_file_segments &&
user_cfg->hugepage_file.unlink_existing &&
rte_eal_process_type() == RTE_PROC_PRIMARY &&
ret == 0) {
@@ -512,8 +509,6 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
size_t alloc_sz;
int flags;
void *new_addr;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
alloc_sz = hi->hugepage_sz;
@@ -534,7 +529,7 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
return -1;
}
- if (internal_conf->single_file_segments) {
+ if (user_cfg->single_file_segments) {
map_offset = seg_idx * alloc_sz;
ret = resize_hugefile(fd, map_offset, alloc_sz, true, &dirty);
if (ret < 0)
@@ -664,14 +659,14 @@ alloc_seg(struct rte_memseg *ms, void *addr, int socket_id,
EAL_LOG(CRIT, "Can't mmap holes in our virtual address space");
}
/* roll back the ref count */
- if (internal_conf->single_file_segments)
+ if (user_cfg->single_file_segments)
fd_list[list_idx].count--;
resized:
/* some codepaths will return negative fd, so exit early */
if (fd < 0)
return -1;
- if (internal_conf->single_file_segments) {
+ if (user_cfg->single_file_segments) {
resize_hugefile(fd, map_offset, alloc_sz, false, NULL);
/* ignore failure, can't make it any worse */
@@ -697,8 +692,6 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi,
uint64_t map_offset;
char path[PATH_MAX];
int fd, ret = 0;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* erase page data */
@@ -721,7 +714,7 @@ free_seg(struct rte_memseg *ms, struct hugepage_info *hi,
if (fd < 0)
return -1;
- if (internal_conf->single_file_segments) {
+ if (user_cfg->single_file_segments) {
map_offset = seg_idx * ms->len;
if (resize_hugefile(fd, map_offset, ms->len, false, NULL))
return -1;
@@ -973,11 +966,12 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz,
struct hugepage_info *hi = NULL;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
memset(&wa, 0, sizeof(wa));
/* dynamic allocation not supported in legacy mode */
- if (internal_conf->legacy_mem)
+ if (user_cfg->legacy_mem)
return -1;
for (i = 0; i < (int) RTE_DIM(internal_conf->hugepage_info); i++) {
@@ -1042,9 +1036,10 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
int seg, ret = 0;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* dynamic free not supported in legacy mode */
- if (internal_conf->legacy_mem)
+ if (user_cfg->legacy_mem)
return -1;
for (seg = 0; seg < n_segs; seg++) {
@@ -1093,10 +1088,10 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
int
eal_memalloc_free_seg(struct rte_memseg *ms)
{
- const struct internal_config *internal_conf = eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* dynamic free not supported in legacy mode */
- if (internal_conf->legacy_mem)
+ if (user_cfg->legacy_mem)
return -1;
return eal_memalloc_free_seg_bulk(&ms, 1);
@@ -1459,11 +1454,10 @@ alloc_list(int list_idx, int len)
{
int *data;
int i;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* single-file segments mode does not need fd list */
- if (!internal_conf->single_file_segments) {
+ if (!user_cfg->single_file_segments) {
/* ensure we have space to store fd per each possible segment */
data = malloc(sizeof(int) * len);
if (data == NULL) {
@@ -1489,11 +1483,10 @@ alloc_list(int list_idx, int len)
static int
destroy_list(int list_idx)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* single-file segments mode does not need fd list */
- if (!internal_conf->single_file_segments) {
+ if (!user_cfg->single_file_segments) {
int *fds = fd_list[list_idx].fds;
int i;
/* go through each fd and ensure it's closed */
@@ -1549,11 +1542,10 @@ int
eal_memalloc_set_seg_fd(int list_idx, int seg_idx, int fd)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* single file segments mode doesn't support individual segment fd's */
- if (internal_conf->single_file_segments)
+ if (user_cfg->single_file_segments)
return -ENOTSUP;
/* if list is not allocated, allocate it */
@@ -1571,11 +1563,10 @@ eal_memalloc_set_seg_fd(int list_idx, int seg_idx, int fd)
int
eal_memalloc_set_seg_list_fd(int list_idx, int fd)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* non-single file segment mode doesn't support segment list fd's */
- if (!internal_conf->single_file_segments)
+ if (!user_cfg->single_file_segments)
return -ENOTSUP;
fd_list[list_idx].memseg_list_fd = fd;
@@ -1587,10 +1578,9 @@ int
eal_memalloc_get_seg_fd(int list_idx, int seg_idx)
{
int fd;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->single_file_segments) {
+ if (user_cfg->single_file_segments) {
fd = fd_list[list_idx].memseg_list_fd;
} else if (fd_list[list_idx].len == 0) {
/* list not initialized */
@@ -1607,10 +1597,9 @@ int
eal_memalloc_get_seg_fd_offset(int list_idx, int seg_idx, size_t *offset)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->single_file_segments) {
+ if (user_cfg->single_file_segments) {
size_t pgsz = mcfg->memsegs[list_idx].page_sz;
/* segment not active? */
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index f52206e698..69314656c2 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -786,7 +786,7 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
/* we have a new address, so unmap previous one */
#ifndef RTE_ARCH_64
/* in 32-bit legacy mode, we have already unmapped the page */
- if (!internal_conf->legacy_mem)
+ if (!user_cfg->legacy_mem)
munmap(hfile->orig_va, page_sz);
#else
munmap(hfile->orig_va, page_sz);
@@ -1149,7 +1149,7 @@ eal_legacy_hugepage_init(void)
struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
struct internal_config *internal_conf =
eal_get_internal_configuration();
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
uint64_t memory[RTE_MAX_NUMA_NODES];
@@ -1173,10 +1173,10 @@ eal_legacy_hugepage_init(void)
uint64_t page_sz;
/* nohuge mode is legacy mode */
- internal_conf->legacy_mem = 1;
+ user_cfg->legacy_mem = 1;
/* nohuge mode is single-file segments mode */
- internal_conf->single_file_segments = 1;
+ user_cfg->single_file_segments = 1;
/* create a memseg list */
msl = &mcfg->memsegs[0];
@@ -1445,7 +1445,7 @@ eal_legacy_hugepage_init(void)
#ifndef RTE_ARCH_64
/* for legacy 32-bit mode, we did not preallocate VA space, so do it */
- if (internal_conf->legacy_mem &&
+ if (user_cfg->legacy_mem &&
prealloc_segments(hugepage, nr_hugefiles)) {
EAL_LOG(ERR, "Could not preallocate VA space for hugepages");
goto fail;
@@ -1673,10 +1673,9 @@ eal_hugepage_attach(void)
int
rte_eal_hugepage_init(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- return internal_conf->legacy_mem ?
+ return user_cfg->legacy_mem ?
eal_legacy_hugepage_init() :
eal_dynmem_hugepage_init();
}
@@ -1684,10 +1683,9 @@ rte_eal_hugepage_init(void)
int
rte_eal_hugepage_attach(void)
{
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- return internal_conf->legacy_mem ?
+ return user_cfg->legacy_mem ?
eal_legacy_hugepage_attach() :
eal_hugepage_attach();
}
@@ -1735,7 +1733,7 @@ memseg_primary_init_32(void)
* unneeded pages. this will not affect secondary processes, as those
* should be able to mmap the space without (too many) problems.
*/
- if (internal_conf->legacy_mem)
+ if (user_cfg->legacy_mem)
return 0;
/* 32-bit mode is a very special case. we cannot know in advance where
@@ -1801,7 +1799,7 @@ memseg_primary_init_32(void)
#ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES
/* we can still sort pages by socket in legacy mode */
- if (!internal_conf->legacy_mem && socket_id > 0)
+ if (!user_cfg->legacy_mem && socket_id > 0)
break;
#endif
@@ -1950,8 +1948,8 @@ rte_eal_memseg_init(void)
struct rlimit lim;
#ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg =
+ eal_get_user_configuration();
#endif
if (getrlimit(RLIMIT_NOFILE, &lim) == 0) {
/* set limit to maximum */
@@ -1969,7 +1967,7 @@ rte_eal_memseg_init(void)
EAL_LOG(ERR, "Cannot get current resource limits");
}
#ifndef RTE_EAL_NUMA_AWARE_HUGEPAGES
- if (!internal_conf->legacy_mem && rte_socket_count() > 1) {
+ if (!user_cfg->legacy_mem && rte_socket_count() > 1) {
EAL_LOG(WARNING, "DPDK is running on a NUMA system, but is compiled without NUMA support.");
EAL_LOG(WARNING, "This will have adverse consequences for performance and usability.");
EAL_LOG(WARNING, "Please use --legacy-mem option, or recompile with NUMA support.");
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 9ec4892fdb..6e40c3d6d3 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -139,15 +139,14 @@ RTE_EXPORT_SYMBOL(rte_eal_cleanup)
int
rte_eal_cleanup(void)
{
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
eal_intr_thread_cancel();
eal_mem_virt2iova_cleanup();
eal_bus_cleanup();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
- eal_cleanup_config(internal_conf);
+ eal_cleanup_config(user_cfg);
eal_lcore_var_cleanup();
return 0;
}
@@ -159,8 +158,6 @@ rte_eal_init(int argc, char **argv)
{
int i, fctret, bscan;
const struct rte_config *config = rte_eal_get_configuration();
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
bool has_phys_addr;
enum rte_iova_mode iova_mode;
@@ -271,7 +268,7 @@ rte_eal_init(int argc, char **argv)
/* Always call rte_bus_get_iommu_class() to trigger DMA mask detection and validation */
enum rte_iova_mode bus_iova_mode = rte_bus_get_iommu_class();
- iova_mode = internal_conf->iova_mode;
+ iova_mode = user_cfg->iova_mode;
if (iova_mode == RTE_IOVA_DC) {
EAL_LOG(DEBUG, "Specific IOVA mode is not requested, autodetecting");
if (has_phys_addr) {
diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c
index 5db5a474cc..26d9cae54c 100644
--- a/lib/eal/windows/eal_memalloc.c
+++ b/lib/eal/windows/eal_memalloc.c
@@ -316,8 +316,9 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
struct hugepage_info *hi = NULL;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- if (internal_conf->legacy_mem) {
+ if (user_cfg->legacy_mem) {
EAL_LOG(ERR, "dynamic allocation not supported in legacy mode");
return -ENOTSUP;
}
@@ -369,9 +370,10 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
int seg, ret = 0;
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* dynamic free not supported in legacy mode */
- if (internal_conf->legacy_mem)
+ if (user_cfg->legacy_mem)
return -1;
for (seg = 0; seg < n_segs; seg++) {
diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c
index 3140d7b9c3..8fcd636a3a 100644
--- a/lib/eal/windows/eal_memory.c
+++ b/lib/eal/windows/eal_memory.c
@@ -678,12 +678,10 @@ eal_nohuge_init(void)
void *addr;
mcfg = rte_eal_get_configuration()->mem_config;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* nohuge mode is legacy mode */
- internal_conf->legacy_mem = 1;
+ user_cfg->legacy_mem = 1;
msl = &mcfg->memsegs[0];
--
2.51.0
* [RFC PATCH 07/44] eal: move hugepage size info to platform info struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (5 preceding siblings ...)
2026-04-29 16:57 ` [RFC PATCH 06/44] eal: move advanced user config options to user cfg struct Bruce Richardson
@ 2026-04-29 16:57 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 08/44] telemetry: make cpuset init parameter const Bruce Richardson
` (38 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:57 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The hugepage information should not change for the lifetime of the
process, so store it in the platform_info structure in EAL, moving it
out of the internal_config.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_dynmem.c | 29 ++++++-------
lib/eal/common/eal_common_options.c | 7 +--
lib/eal/common/eal_internal_cfg.h | 5 +--
lib/eal/freebsd/eal.c | 7 ++-
lib/eal/freebsd/eal_hugepage_info.c | 28 ++++++------
lib/eal/freebsd/eal_memory.c | 21 ++++-----
lib/eal/linux/eal.c | 12 +++---
lib/eal/linux/eal_hugepage_info.c | 37 ++++++++--------
lib/eal/linux/eal_memalloc.c | 27 ++++++------
lib/eal/linux/eal_memory.c | 66 +++++++++++++----------------
lib/eal/windows/eal_hugepages.c | 7 ++-
lib/eal/windows/eal_memalloc.c | 26 ++++++------
12 files changed, 122 insertions(+), 150 deletions(-)
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index 73a55794e0..c1c72499c4 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -30,8 +30,7 @@ eal_dynmem_memseg_lists_init(void)
uint64_t max_mem, max_mem_per_type;
unsigned int max_seglists_per_type;
unsigned int n_memtypes, cur_type;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* no-huge does not need this at all */
@@ -74,7 +73,7 @@ eal_dynmem_memseg_lists_init(void)
*/
/* create space for mem types */
- n_memtypes = internal_conf->num_hugepage_sizes * rte_socket_count();
+ n_memtypes = platform_info->num_hugepage_sizes * rte_socket_count();
memtypes = calloc(n_memtypes, sizeof(*memtypes));
if (memtypes == NULL) {
EAL_LOG(ERR, "Cannot allocate space for memory types");
@@ -83,12 +82,12 @@ eal_dynmem_memseg_lists_init(void)
/* populate mem types */
cur_type = 0;
- for (hpi_idx = 0; hpi_idx < (int) internal_conf->num_hugepage_sizes;
+ for (hpi_idx = 0; hpi_idx < (int) platform_info->num_hugepage_sizes;
hpi_idx++) {
struct hugepage_info *hpi;
uint64_t hugepage_sz;
- hpi = &internal_conf->hugepage_info[hpi_idx];
+ hpi = &platform_info->hugepage_info[hpi_idx];
hugepage_sz = hpi->hugepage_sz;
for (i = 0; i < (int) rte_socket_count(); i++, cur_type++) {
@@ -229,14 +228,13 @@ eal_dynmem_hugepage_init(void)
struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
uint64_t memory[RTE_MAX_NUMA_NODES];
int hp_sz_idx, socket_id;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
memset(used_hp, 0, sizeof(used_hp));
for (hp_sz_idx = 0;
- hp_sz_idx < (int) internal_conf->num_hugepage_sizes;
+ hp_sz_idx < (int) platform_info->num_hugepage_sizes;
hp_sz_idx++) {
#ifndef RTE_ARCH_64
struct hugepage_info dummy;
@@ -244,7 +242,7 @@ eal_dynmem_hugepage_init(void)
#endif
/* also initialize used_hp hugepage sizes in used_hp */
struct hugepage_info *hpi;
- hpi = &internal_conf->hugepage_info[hp_sz_idx];
+ hpi = &platform_info->hugepage_info[hp_sz_idx];
used_hp[hp_sz_idx].hugepage_sz = hpi->hugepage_sz;
#ifndef RTE_ARCH_64
@@ -272,12 +270,12 @@ eal_dynmem_hugepage_init(void)
/* calculate final number of pages */
if (eal_dynmem_calc_num_pages_per_socket(memory,
- internal_conf->hugepage_info, used_hp,
- internal_conf->num_hugepage_sizes) < 0)
+ platform_info->hugepage_info, used_hp,
+ platform_info->num_hugepage_sizes) < 0)
return -1;
for (hp_sz_idx = 0;
- hp_sz_idx < (int)internal_conf->num_hugepage_sizes;
+ hp_sz_idx < (int)platform_info->num_hugepage_sizes;
hp_sz_idx++) {
for (socket_id = 0; socket_id < RTE_MAX_NUMA_NODES;
socket_id++) {
@@ -356,11 +354,10 @@ get_socket_mem_size(int socket)
{
uint64_t size = 0;
unsigned int i;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
- for (i = 0; i < internal_conf->num_hugepage_sizes; i++) {
- struct hugepage_info *hpi = &internal_conf->hugepage_info[i];
+ for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
+ struct hugepage_info *hpi = &platform_info->hugepage_info[i];
size += hpi->hugepage_sz * hpi->num_pages[socket];
}
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index a2f305fc68..0750a52373 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -494,6 +494,7 @@ void
eal_reset_internal_config(struct internal_config *internal_cfg)
{
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
int i;
user_cfg->memory = 0;
@@ -514,9 +515,9 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
user_cfg->hugepage_file.unlink_existing = true;
/* zero out hugedir descriptors */
for (i = 0; i < MAX_HUGEPAGE_SIZES; i++) {
- memset(&internal_cfg->hugepage_info[i], 0,
- sizeof(internal_cfg->hugepage_info[0]));
- internal_cfg->hugepage_info[i].lock_descriptor = -1;
+ memset(&platform_info->hugepage_info[i], 0,
+ sizeof(platform_info->hugepage_info[0]));
+ platform_info->hugepage_info[i].lock_descriptor = -1;
}
user_cfg->base_virtaddr = 0;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 3aec3b0020..fbbe5dce82 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -92,7 +92,8 @@ struct eal_user_cfg {
* Immutable after initialization, so no need for atomic types or locks.
*/
struct eal_platform_info {
- uint8_t reserved;
+ uint8_t num_hugepage_sizes; /**< how many sizes on this system */
+ struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
};
/**
@@ -108,8 +109,6 @@ struct eal_runtime_state {
* internal configuration
*/
struct internal_config {
- unsigned num_hugepage_sizes; /**< how many sizes on this system */
- struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 7e00010771..996a2de9ff 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -329,11 +329,10 @@ eal_get_hugepage_mem_size(void)
{
uint64_t size = 0;
unsigned i, j;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
- for (i = 0; i < internal_conf->num_hugepage_sizes; i++) {
- struct hugepage_info *hpi = &internal_conf->hugepage_info[i];
+ for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
+ struct hugepage_info *hpi = &platform_info->hugepage_info[i];
if (strnlen(hpi->hugedir, sizeof(hpi->hugedir)) != 0) {
for (j = 0; j < RTE_MAX_NUMA_NODES; j++) {
size += hpi->hugepage_sz * hpi->num_pages[j];
diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c
index 586c5d9f17..b46ae4b689 100644
--- a/lib/eal/freebsd/eal_hugepage_info.c
+++ b/lib/eal/freebsd/eal_hugepage_info.c
@@ -57,16 +57,15 @@ eal_hugepage_info_init(void)
size_t sysctl_size;
int num_buffers, fd, error;
int64_t buffer_size;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
/* re-use the linux "internal config" structure for our memory data */
- struct hugepage_info *hpi = &internal_conf->hugepage_info[0];
+ struct hugepage_info *hpi = &platform_info->hugepage_info[0];
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct hugepage_info *tmp_hpi;
unsigned int i;
- internal_conf->num_hugepage_sizes = 1;
+ platform_info->num_hugepage_sizes = 1;
sysctl_size = sizeof(num_buffers);
error = sysctlbyname("hw.contigmem.num_buffers", &num_buffers,
@@ -116,23 +115,23 @@ eal_hugepage_info_init(void)
return 0;
tmp_hpi = create_shared_memory(eal_hugepage_info_path(),
- sizeof(internal_conf->hugepage_info));
+ sizeof(platform_info->hugepage_info));
if (tmp_hpi == NULL ) {
EAL_LOG(ERR, "Failed to create shared memory!");
return -1;
}
- memcpy(tmp_hpi, hpi, sizeof(internal_conf->hugepage_info));
+ memcpy(tmp_hpi, hpi, sizeof(platform_info->hugepage_info));
/* we've copied file descriptors along with everything else, but they
* will be invalid in secondary process, so overwrite them
*/
- for (i = 0; i < RTE_DIM(internal_conf->hugepage_info); i++) {
+ for (i = 0; i < RTE_DIM(platform_info->hugepage_info); i++) {
struct hugepage_info *tmp = &tmp_hpi[i];
tmp->lock_descriptor = -1;
}
- if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) {
+ if (munmap(tmp_hpi, sizeof(platform_info->hugepage_info)) < 0) {
EAL_LOG(ERR, "Failed to unmap shared memory!");
return -1;
}
@@ -144,24 +143,23 @@ eal_hugepage_info_init(void)
int
eal_hugepage_info_read(void)
{
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
- struct hugepage_info *hpi = &internal_conf->hugepage_info[0];
+ struct hugepage_info *hpi = &platform_info->hugepage_info[0];
struct hugepage_info *tmp_hpi;
- internal_conf->num_hugepage_sizes = 1;
+ platform_info->num_hugepage_sizes = 1;
tmp_hpi = open_shared_memory(eal_hugepage_info_path(),
- sizeof(internal_conf->hugepage_info));
+ sizeof(platform_info->hugepage_info));
if (tmp_hpi == NULL) {
EAL_LOG(ERR, "Failed to open shared memory!");
return -1;
}
- memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info));
+ memcpy(hpi, tmp_hpi, sizeof(platform_info->hugepage_info));
- if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) {
+ if (munmap(tmp_hpi, sizeof(platform_info->hugepage_info)) < 0) {
EAL_LOG(ERR, "Failed to unmap shared memory!");
return -1;
}
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index cfb17fb3fa..e925fa9743 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -59,9 +59,8 @@ rte_eal_hugepage_init(void)
uint64_t total_mem = 0;
void *addr;
unsigned int i, j, seg_idx = 0;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
/* get pointer to global configuration */
mcfg = rte_eal_get_configuration()->mem_config;
@@ -101,13 +100,13 @@ rte_eal_hugepage_init(void)
}
/* map all hugepages and sort them */
- for (i = 0; i < internal_conf->num_hugepage_sizes; i++) {
+ for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
struct hugepage_info *hpi;
rte_iova_t prev_end = 0;
uint64_t page_sz, mem_needed;
unsigned int n_pages, max_pages;
- hpi = &internal_conf->hugepage_info[i];
+ hpi = &platform_info->hugepage_info[i];
page_sz = hpi->hugepage_sz;
max_pages = hpi->num_pages[0];
mem_needed = RTE_ALIGN_CEIL(user_cfg->memory - total_mem,
@@ -270,15 +269,14 @@ attach_segment(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
int
rte_eal_hugepage_attach(void)
{
+ struct eal_platform_info *platform_info = eal_get_platform_info();
struct hugepage_info *hpi;
int fd_hugepage = -1;
unsigned int i;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
- hpi = &internal_conf->hugepage_info[0];
+ hpi = &platform_info->hugepage_info[0];
- for (i = 0; i < internal_conf->num_hugepage_sizes; i++) {
+ for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
const struct hugepage_info *cur_hpi = &hpi[i];
struct attach_walk_args wa;
@@ -356,9 +354,8 @@ memseg_primary_init(void)
int hpi_idx, msl_idx = 0;
struct rte_memseg_list *msl;
uint64_t max_mem, total_mem;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
/* no-huge does not need this at all */
if (user_cfg->no_hugetlbfs)
@@ -378,7 +375,7 @@ memseg_primary_init(void)
total_mem = 0;
/* create memseg lists */
- for (hpi_idx = 0; hpi_idx < (int) internal_conf->num_hugepage_sizes;
+ for (hpi_idx = 0; hpi_idx < (int) platform_info->num_hugepage_sizes;
hpi_idx++) {
uint64_t max_type_mem, total_type_mem = 0;
uint64_t avail_mem;
@@ -386,7 +383,7 @@ memseg_primary_init(void)
struct hugepage_info *hpi;
uint64_t hugepage_sz;
- hpi = &internal_conf->hugepage_info[hpi_idx];
+ hpi = &platform_info->hugepage_info[hpi_idx];
hugepage_sz = hpi->hugepage_sz;
/* no NUMA support on FreeBSD */
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 4b33e461fd..f692521fe7 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -412,20 +412,18 @@ rte_config_init(void)
static void
eal_hugedirs_unlock(void)
{
+ struct eal_platform_info *platform_info = eal_get_platform_info();
int i;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
-
for (i = 0; i < MAX_HUGEPAGE_SIZES; i++)
{
/* skip uninitialized */
- if (internal_conf->hugepage_info[i].lock_descriptor < 0)
+ if (platform_info->hugepage_info[i].lock_descriptor < 0)
continue;
/* unlock hugepage file */
- flock(internal_conf->hugepage_info[i].lock_descriptor, LOCK_UN);
- close(internal_conf->hugepage_info[i].lock_descriptor);
+ flock(platform_info->hugepage_info[i].lock_descriptor, LOCK_UN);
+ close(platform_info->hugepage_info[i].lock_descriptor);
/* reset the field */
- internal_conf->hugepage_info[i].lock_descriptor = -1;
+ platform_info->hugepage_info[i].lock_descriptor = -1;
}
}
diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c
index 74c55327ff..f35446cdc9 100644
--- a/lib/eal/linux/eal_hugepage_info.c
+++ b/lib/eal/linux/eal_hugepage_info.c
@@ -453,8 +453,7 @@ hugepage_info_init(void)
unsigned int reusable_pages;
DIR *dir;
struct dirent *dirent;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
dir = opendir(sys_dir_path);
@@ -475,7 +474,7 @@ hugepage_info_init(void)
if (num_sizes >= MAX_HUGEPAGE_SIZES)
break;
- hpi = &internal_conf->hugepage_info[num_sizes];
+ hpi = &platform_info->hugepage_info[num_sizes];
hpi->hugepage_sz =
rte_str_to_size(&dirent->d_name[dirent_start_len]);
@@ -546,17 +545,17 @@ hugepage_info_init(void)
if (dirent != NULL)
return -1;
- internal_conf->num_hugepage_sizes = num_sizes;
+ platform_info->num_hugepage_sizes = num_sizes;
/* sort the page directory entries by size, largest to smallest */
- qsort(&internal_conf->hugepage_info[0], num_sizes,
- sizeof(internal_conf->hugepage_info[0]), compare_hpi);
+ qsort(&platform_info->hugepage_info[0], num_sizes,
+ sizeof(platform_info->hugepage_info[0]), compare_hpi);
/* now we have all info, check we have at least one valid size */
for (i = 0; i < num_sizes; i++) {
/* pages may no longer all be on socket 0, so check all */
unsigned int j, num_pages = 0;
- struct hugepage_info *hpi = &internal_conf->hugepage_info[i];
+ struct hugepage_info *hpi = &platform_info->hugepage_info[i];
for (j = 0; j < RTE_MAX_NUMA_NODES; j++)
num_pages += hpi->num_pages[j];
@@ -578,8 +577,7 @@ eal_hugepage_info_init(void)
{
struct hugepage_info *hpi, *tmp_hpi;
unsigned int i;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (hugepage_info_init() < 0)
@@ -589,26 +587,26 @@ eal_hugepage_info_init(void)
if (user_cfg->no_shconf)
return 0;
- hpi = &internal_conf->hugepage_info[0];
+ hpi = &platform_info->hugepage_info[0];
tmp_hpi = create_shared_memory(eal_hugepage_info_path(),
- sizeof(internal_conf->hugepage_info));
+ sizeof(platform_info->hugepage_info));
if (tmp_hpi == NULL) {
EAL_LOG(ERR, "Failed to create shared memory!");
return -1;
}
- memcpy(tmp_hpi, hpi, sizeof(internal_conf->hugepage_info));
+ memcpy(tmp_hpi, hpi, sizeof(platform_info->hugepage_info));
/* we've copied file descriptors along with everything else, but they
* will be invalid in secondary process, so overwrite them
*/
- for (i = 0; i < RTE_DIM(internal_conf->hugepage_info); i++) {
+ for (i = 0; i < RTE_DIM(platform_info->hugepage_info); i++) {
struct hugepage_info *tmp = &tmp_hpi[i];
tmp->lock_descriptor = -1;
}
- if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) {
+ if (munmap(tmp_hpi, sizeof(platform_info->hugepage_info)) < 0) {
EAL_LOG(ERR, "Failed to unmap shared memory!");
return -1;
}
@@ -617,21 +615,20 @@ eal_hugepage_info_init(void)
int eal_hugepage_info_read(void)
{
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
- struct hugepage_info *hpi = &internal_conf->hugepage_info[0];
+ struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct hugepage_info *hpi = &platform_info->hugepage_info[0];
struct hugepage_info *tmp_hpi;
tmp_hpi = open_shared_memory(eal_hugepage_info_path(),
- sizeof(internal_conf->hugepage_info));
+ sizeof(platform_info->hugepage_info));
if (tmp_hpi == NULL) {
EAL_LOG(ERR, "Failed to open shared memory!");
return -1;
}
- memcpy(hpi, tmp_hpi, sizeof(internal_conf->hugepage_info));
+ memcpy(hpi, tmp_hpi, sizeof(platform_info->hugepage_info));
- if (munmap(tmp_hpi, sizeof(internal_conf->hugepage_info)) < 0) {
+ if (munmap(tmp_hpi, sizeof(platform_info->hugepage_info)) < 0) {
EAL_LOG(ERR, "Failed to unmap shared memory!");
return -1;
}
diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c
index 7121f933ea..5ae81429d9 100644
--- a/lib/eal/linux/eal_memalloc.c
+++ b/lib/eal/linux/eal_memalloc.c
@@ -964,9 +964,8 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz,
#endif
struct alloc_walk_param wa;
struct hugepage_info *hi = NULL;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
memset(&wa, 0, sizeof(wa));
@@ -974,10 +973,10 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz,
if (user_cfg->legacy_mem)
return -1;
- for (i = 0; i < (int) RTE_DIM(internal_conf->hugepage_info); i++) {
+ for (i = 0; i < (int) RTE_DIM(platform_info->hugepage_info); i++) {
if (page_sz ==
- internal_conf->hugepage_info[i].hugepage_sz) {
- hi = &internal_conf->hugepage_info[i];
+ platform_info->hugepage_info[i].hugepage_sz) {
+ hi = &platform_info->hugepage_info[i];
break;
}
}
@@ -1034,9 +1033,8 @@ int
eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
{
int seg, ret = 0;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
/* dynamic free not supported in legacy mode */
if (user_cfg->legacy_mem)
@@ -1057,13 +1055,13 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
memset(&wa, 0, sizeof(wa));
- for (i = 0; i < (int)RTE_DIM(internal_conf->hugepage_info);
+ for (i = 0; i < (int)RTE_DIM(platform_info->hugepage_info);
i++) {
- hi = &internal_conf->hugepage_info[i];
+ hi = &platform_info->hugepage_info[i];
if (cur->hugepage_sz == hi->hugepage_sz)
break;
}
- if (i == (int)RTE_DIM(internal_conf->hugepage_info)) {
+ if (i == (int)RTE_DIM(platform_info->hugepage_info)) {
EAL_LOG(ERR, "Can't find relevant hugepage_info entry");
ret = -1;
continue;
@@ -1327,11 +1325,10 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused)
{
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
struct rte_memseg_list *primary_msl, *local_msl;
+ struct eal_platform_info *platform_info = eal_get_platform_info();
struct hugepage_info *hi = NULL;
unsigned int i;
int msl_idx;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
if (msl->external)
return 0;
@@ -1340,12 +1337,12 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused)
primary_msl = &mcfg->memsegs[msl_idx];
local_msl = &local_memsegs[msl_idx];
- for (i = 0; i < RTE_DIM(internal_conf->hugepage_info); i++) {
+ for (i = 0; i < RTE_DIM(platform_info->hugepage_info); i++) {
uint64_t cur_sz =
- internal_conf->hugepage_info[i].hugepage_sz;
+ platform_info->hugepage_info[i].hugepage_sz;
uint64_t msl_sz = primary_msl->page_sz;
if (msl_sz == cur_sz) {
- hi = &internal_conf->hugepage_info[i];
+ hi = &platform_info->hugepage_info[i];
break;
}
}
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 69314656c2..36763bb44f 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -598,14 +598,13 @@ unlink_hugepage_files(struct hugepage_file *hugepg_tbl,
{
unsigned socket, size;
int page, nrpages = 0;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
/* get total number of hugepages */
for (size = 0; size < num_hp_info; size++)
for (socket = 0; socket < RTE_MAX_NUMA_NODES; socket++)
nrpages +=
- internal_conf->hugepage_info[size].num_pages[socket];
+ platform_info->hugepage_info[size].num_pages[socket];
for (page = 0; page < nrpages; page++) {
struct hugepage_file *hp = &hugepg_tbl[page];
@@ -629,13 +628,12 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl,
{
unsigned socket, size;
int page, nrpages = 0;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
/* get total number of hugepages */
for (size = 0; size < num_hp_info; size++)
for (socket = 0; socket < RTE_MAX_NUMA_NODES; socket++)
- nrpages += internal_conf->hugepage_info[size].num_pages[socket];
+ nrpages += platform_info->hugepage_info[size].num_pages[socket];
for (size = 0; size < num_hp_info; size++) {
for (socket = 0; socket < RTE_MAX_NUMA_NODES; socket++) {
@@ -691,8 +689,6 @@ remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
size_t memseg_len;
int socket_id;
#ifndef RTE_ARCH_64
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
#endif
page_sz = hugepages[seg_start].size;
socket_id = hugepages[seg_start].socket_id;
@@ -859,13 +855,12 @@ memseg_list_free(struct rte_memseg_list *msl)
static int __rte_unused
prealloc_segments(struct hugepage_file *hugepages, int n_pages)
{
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
int cur_page, seg_start_page, end_seg, new_memseg;
unsigned int hpi_idx, socket, i;
int n_contig_segs, n_segs;
int msl_idx;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
/* before we preallocate segments, we need to free up our VA space.
* we're not removing files, and we already have information about
@@ -880,10 +875,10 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages)
/* we cannot know how many page sizes and sockets we have discovered, so
* loop over all of them
*/
- for (hpi_idx = 0; hpi_idx < internal_conf->num_hugepage_sizes;
+ for (hpi_idx = 0; hpi_idx < platform_info->num_hugepage_sizes;
hpi_idx++) {
uint64_t page_sz =
- internal_conf->hugepage_info[hpi_idx].hugepage_sz;
+ platform_info->hugepage_info[hpi_idx].hugepage_sz;
for (i = 0; i < rte_socket_count(); i++) {
struct rte_memseg_list *msl;
@@ -1088,11 +1083,10 @@ eal_get_hugepage_mem_size(void)
{
uint64_t size = 0;
unsigned i, j;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
- for (i = 0; i < internal_conf->num_hugepage_sizes; i++) {
- struct hugepage_info *hpi = &internal_conf->hugepage_info[i];
+ for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
+ struct hugepage_info *hpi = &platform_info->hugepage_info[i];
if (strnlen(hpi->hugedir, sizeof(hpi->hugedir)) != 0) {
for (j = 0; j < RTE_MAX_NUMA_NODES; j++) {
size += hpi->hugepage_sz * hpi->num_pages[j];
@@ -1147,8 +1141,7 @@ eal_legacy_hugepage_init(void)
struct rte_mem_config *mcfg;
struct hugepage_file *hugepage = NULL, *tmp_hp = NULL;
struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
uint64_t memory[RTE_MAX_NUMA_NODES];
@@ -1265,11 +1258,11 @@ eal_legacy_hugepage_init(void)
/* calculate total number of hugepages available. at this point we haven't
* yet started sorting them so they all are on socket 0 */
- for (i = 0; i < (int) internal_conf->num_hugepage_sizes; i++) {
+ for (i = 0; i < (int) platform_info->num_hugepage_sizes; i++) {
/* meanwhile, also initialize used_hp hugepage sizes in used_hp */
- used_hp[i].hugepage_sz = internal_conf->hugepage_info[i].hugepage_sz;
+ used_hp[i].hugepage_sz = platform_info->hugepage_info[i].hugepage_sz;
- nr_hugepages += internal_conf->hugepage_info[i].num_pages[0];
+ nr_hugepages += platform_info->hugepage_info[i].num_pages[0];
}
/*
@@ -1293,7 +1286,7 @@ eal_legacy_hugepage_init(void)
memory[i] = user_cfg->numa_mem[i];
/* map all hugepages and sort them */
- for (i = 0; i < (int)internal_conf->num_hugepage_sizes; i++) {
+ for (i = 0; i < (int)platform_info->num_hugepage_sizes; i++) {
unsigned pages_old, pages_new;
struct hugepage_info *hpi;
@@ -1302,7 +1295,7 @@ eal_legacy_hugepage_init(void)
* we just map all hugepages available to the system
* all hugepages are still located on socket 0
*/
- hpi = &internal_conf->hugepage_info[i];
+ hpi = &platform_info->hugepage_info[i];
if (hpi->num_pages[0] == 0)
continue;
@@ -1365,9 +1358,9 @@ eal_legacy_hugepage_init(void)
/* clean out the numbers of pages */
- for (i = 0; i < (int) internal_conf->num_hugepage_sizes; i++)
+ for (i = 0; i < (int) platform_info->num_hugepage_sizes; i++)
for (j = 0; j < RTE_MAX_NUMA_NODES; j++)
- internal_conf->hugepage_info[i].num_pages[j] = 0;
+ platform_info->hugepage_info[i].num_pages[j] = 0;
/* get hugepages for each socket */
for (i = 0; i < nr_hugefiles; i++) {
@@ -1375,11 +1368,11 @@ eal_legacy_hugepage_init(void)
/* find a hugepage info with right size and increment num_pages */
const int nb_hpsizes = RTE_MIN(MAX_HUGEPAGE_SIZES,
- (int)internal_conf->num_hugepage_sizes);
+ (int)platform_info->num_hugepage_sizes);
for (j = 0; j < nb_hpsizes; j++) {
if (tmp_hp[i].size ==
- internal_conf->hugepage_info[j].hugepage_sz) {
- internal_conf->hugepage_info[j].num_pages[socket]++;
+ platform_info->hugepage_info[j].hugepage_sz) {
+ platform_info->hugepage_info[j].num_pages[socket]++;
}
}
}
@@ -1390,15 +1383,15 @@ eal_legacy_hugepage_init(void)
/* calculate final number of pages */
nr_hugepages = eal_dynmem_calc_num_pages_per_socket(memory,
- internal_conf->hugepage_info, used_hp,
- internal_conf->num_hugepage_sizes);
+ platform_info->hugepage_info, used_hp,
+ platform_info->num_hugepage_sizes);
/* error if not enough memory available */
if (nr_hugepages < 0)
goto fail;
/* reporting in! */
- for (i = 0; i < (int) internal_conf->num_hugepage_sizes; i++) {
+ for (i = 0; i < (int) platform_info->num_hugepage_sizes; i++) {
for (j = 0; j < RTE_MAX_NUMA_NODES; j++) {
if (used_hp[i].num_pages[j] > 0) {
EAL_LOG(DEBUG,
@@ -1427,7 +1420,7 @@ eal_legacy_hugepage_init(void)
* also, sets final_va to NULL on pages that were unmapped.
*/
if (unmap_unneeded_hugepages(tmp_hp, used_hp,
- internal_conf->num_hugepage_sizes) < 0) {
+ platform_info->num_hugepage_sizes) < 0) {
EAL_LOG(ERR, "Unmapping and locking hugepages failed!");
goto fail;
}
@@ -1462,7 +1455,7 @@ eal_legacy_hugepage_init(void)
/* free the hugepage backing files */
if (user_cfg->hugepage_file.unlink_before_mapping &&
- unlink_hugepage_files(tmp_hp, internal_conf->num_hugepage_sizes) < 0) {
+ unlink_hugepage_files(tmp_hp, platform_info->num_hugepage_sizes) < 0) {
EAL_LOG(ERR, "Unlinking hugepage files failed!");
goto fail;
}
@@ -1715,8 +1708,7 @@ memseg_primary_init_32(void)
struct rte_memseg_list *msl;
uint64_t extra_mem_per_socket, total_extra_mem, total_requested_mem;
uint64_t max_mem;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* no-huge does not need this at all */
@@ -1783,7 +1775,7 @@ memseg_primary_init_32(void)
/* create memseg lists */
for (i = 0; i < rte_socket_count(); i++) {
- int hp_sizes = (int) internal_conf->num_hugepage_sizes;
+ int hp_sizes = (int) platform_info->num_hugepage_sizes;
uint64_t max_socket_mem, cur_socket_mem;
unsigned int main_lcore_socket;
struct rte_config *cfg = rte_eal_get_configuration();
@@ -1831,7 +1823,7 @@ memseg_primary_init_32(void)
struct hugepage_info *hpi;
int type_msl_idx, max_segs, total_segs = 0;
- hpi = &internal_conf->hugepage_info[hpi_idx];
+ hpi = &platform_info->hugepage_info[hpi_idx];
hugepage_sz = hpi->hugepage_sz;
/* check if pages are actually available */
diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c
index ff72b8ee38..4e9e958c65 100644
--- a/lib/eal/windows/eal_hugepages.c
+++ b/lib/eal/windows/eal_hugepages.c
@@ -62,12 +62,11 @@ hugepage_info_init(void)
struct hugepage_info *hpi;
unsigned int socket_id;
int ret = 0;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
/* Only one hugepage size available on Windows. */
- internal_conf->num_hugepage_sizes = 1;
- hpi = &internal_conf->hugepage_info[0];
+ platform_info->num_hugepage_sizes = 1;
+ hpi = &platform_info->hugepage_info[0];
hpi->hugepage_sz = GetLargePageMinimum();
if (hpi->hugepage_sz == 0)
diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c
index 26d9cae54c..35eaf3a180 100644
--- a/lib/eal/windows/eal_memalloc.c
+++ b/lib/eal/windows/eal_memalloc.c
@@ -33,7 +33,7 @@ eal_memalloc_get_seg_fd_offset(int list_idx, int seg_idx, size_t *offset)
static int
alloc_seg(struct rte_memseg *ms, void *requested_addr, int socket_id,
- struct hugepage_info *hi)
+ const struct hugepage_info *hi)
{
HANDLE current_process;
unsigned int numa_node;
@@ -166,7 +166,7 @@ free_seg(struct rte_memseg *ms)
}
struct alloc_walk_param {
- struct hugepage_info *hi;
+ const struct hugepage_info *hi;
struct rte_memseg **ms;
size_t page_sz;
unsigned int segs_allocated;
@@ -273,7 +273,7 @@ alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
}
struct free_walk_param {
- struct hugepage_info *hi;
+ const struct hugepage_info *hi;
struct rte_memseg *ms;
};
static int
@@ -313,9 +313,8 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
unsigned int i;
int ret = -1;
struct alloc_walk_param wa;
- struct hugepage_info *hi = NULL;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct hugepage_info *hi = NULL;
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (user_cfg->legacy_mem) {
@@ -323,8 +322,8 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
return -ENOTSUP;
}
- for (i = 0; i < internal_conf->num_hugepage_sizes; i++) {
- struct hugepage_info *hpi = &internal_conf->hugepage_info[i];
+ for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
+ const struct hugepage_info *hpi = &platform_info->hugepage_info[i];
if (page_sz == hpi->hugepage_sz) {
hi = hpi;
break;
@@ -368,8 +367,7 @@ int
eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
{
int seg, ret = 0;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* dynamic free not supported in legacy mode */
@@ -378,7 +376,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
for (seg = 0; seg < n_segs; seg++) {
struct rte_memseg *cur = ms[seg];
- struct hugepage_info *hi = NULL;
+ const struct hugepage_info *hi = NULL;
struct free_walk_param wa;
size_t i;
int walk_res;
@@ -392,12 +390,12 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
memset(&wa, 0, sizeof(wa));
- for (i = 0; i < RTE_DIM(internal_conf->hugepage_info); i++) {
- hi = &internal_conf->hugepage_info[i];
+ for (i = 0; i < RTE_DIM(platform_info->hugepage_info); i++) {
+ hi = &platform_info->hugepage_info[i];
if (cur->hugepage_sz == hi->hugepage_sz)
break;
}
- if (i == RTE_DIM(internal_conf->hugepage_info)) {
+ if (i == RTE_DIM(platform_info->hugepage_info)) {
EAL_LOG(ERR, "Can't find relevant hugepage_info entry");
ret = -1;
continue;
--
2.51.0
^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 08/44] telemetry: make cpuset init parameter const
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (6 preceding siblings ...)
2026-04-29 16:57 ` [RFC PATCH 07/44] eal: move hugepage size info to platform info struct Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 09/44] eal: move runtime state to appropriate structure Bruce Richardson
` (37 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The cpuset parameter to telemetry_init is not modified, so it can be
marked as const. This allows callers to pass the parameter from a const
structure rather than a modifiable one.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
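As an illustrative sketch of what the const qualifier enables (the types and names below are simplified stand-ins, not the real DPDK definitions): a caller holding only a const pointer to the runtime state can pass its cpuset without a cast or a const-correctness warning.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for rte_cpuset_t and the EAL runtime state. */
typedef struct { unsigned long bits; } cpuset_t;

struct runtime_state {
	cpuset_t ctrl_cpuset;
};

static const cpuset_t *saved_cpuset;

/* With a const-qualified parameter, the pointer may originate from a
 * const-qualified structure; the callee promises not to modify it. */
static int
telemetry_init_sketch(const cpuset_t *cpuset)
{
	if (cpuset == NULL)
		return -1;
	saved_cpuset = cpuset;
	return 0;
}

static unsigned long
demo(void)
{
	/* rs is const, so &rs.ctrl_cpuset is a const cpuset_t *;
	 * this only compiles cleanly because the parameter is const. */
	static const struct runtime_state rs = {
		.ctrl_cpuset = { .bits = 0x3 },
	};

	if (telemetry_init_sketch(&rs.ctrl_cpuset) != 0)
		return 0;
	return saved_cpuset->bits;
}
```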
lib/telemetry/telemetry.c | 4 ++--
lib/telemetry/telemetry_internal.h | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/lib/telemetry/telemetry.c b/lib/telemetry/telemetry.c
index b109d076d4..70c00d5940 100644
--- a/lib/telemetry/telemetry.c
+++ b/lib/telemetry/telemetry.c
@@ -56,7 +56,7 @@ static struct socket v1_socket; /* socket for v1 telemetry */
static const char *telemetry_version; /* save rte_version */
static const char *socket_dir; /* runtime directory */
-static rte_cpuset_t *thread_cpuset;
+static const rte_cpuset_t *thread_cpuset;
RTE_LOG_REGISTER_DEFAULT(logtype, WARNING);
#define RTE_LOGTYPE_TELEMETRY logtype
@@ -657,7 +657,7 @@ telemetry_v2_init(void)
RTE_EXPORT_INTERNAL_SYMBOL(rte_telemetry_init)
int32_t
-rte_telemetry_init(const char *runtime_dir, const char *rte_version, rte_cpuset_t *cpuset)
+rte_telemetry_init(const char *runtime_dir, const char *rte_version, const rte_cpuset_t *cpuset)
{
telemetry_version = rte_version;
socket_dir = runtime_dir;
diff --git a/lib/telemetry/telemetry_internal.h b/lib/telemetry/telemetry_internal.h
index 2fd9fbd7c1..4a6b2e9838 100644
--- a/lib/telemetry/telemetry_internal.h
+++ b/lib/telemetry/telemetry_internal.h
@@ -119,6 +119,6 @@ typedef int (*rte_log_fn)(uint32_t level, uint32_t logtype, const char *format,
*/
__rte_internal
int
-rte_telemetry_init(const char *runtime_dir, const char *rte_version, rte_cpuset_t *cpuset);
+rte_telemetry_init(const char *runtime_dir, const char *rte_version, const rte_cpuset_t *cpuset);
#endif
--
2.51.0
^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 09/44] eal: move runtime state to appropriate structure
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (7 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 08/44] telemetry: make cpuset init parameter const Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 10/44] eal: record details of all cpus in platform info Bruce Richardson
` (36 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Move the last of the fields from the internal config to the runtime
state structure, allowing us to remove the general internal_config
struct and its accessor function.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
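A minimal sketch of the accessor pattern this patch completes, using simplified stand-ins rather than the real EAL code: each state category lives behind its own getter, and read-only callers take a const pointer, as in the rte_thread_register() guard that checks init_complete.

```c
#include <assert.h>

/* Stand-in for the per-category state singleton plus its accessor. */
struct eal_runtime_state {
	volatile unsigned int init_complete;
};

static struct eal_runtime_state runtime_state;

struct eal_runtime_state *
eal_get_runtime_state(void)
{
	return &runtime_state;
}

/* A caller that only reads the state uses a const pointer; this
 * mirrors the rte_thread_register() guard in the patch. */
int
thread_register_sketch(void)
{
	const struct eal_runtime_state *rs = eal_get_runtime_state();

	if (rs->init_complete != 1)
		return -1; /* EAL not yet initialized */
	return 0;
}
```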
lib/eal/common/eal_common_config.c | 10 ----------
lib/eal/common/eal_common_mcfg.c | 5 ++---
lib/eal/common/eal_common_memzone.c | 4 +++-
lib/eal/common/eal_common_options.c | 19 ++++++++++---------
lib/eal/common/eal_common_proc.c | 6 +++---
lib/eal/common/eal_common_thread.c | 8 ++++----
lib/eal/common/eal_internal_cfg.h | 9 +--------
lib/eal/common/eal_options.h | 2 +-
lib/eal/common/eal_private.h | 9 ---------
lib/eal/freebsd/eal.c | 7 +++----
lib/eal/linux/eal.c | 7 +++----
11 files changed, 30 insertions(+), 56 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 50cba4fa1a..f1a5e84aa9 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -37,9 +37,6 @@ static struct eal_platform_info eal_platform_info;
/* internal runtime configuration */
static struct eal_runtime_state eal_runtime_state;
-/* internal configuration */
-static struct internal_config internal_config;
-
RTE_EXPORT_SYMBOL(rte_eal_get_runtime_dir)
const char *
rte_eal_get_runtime_dir(void)
@@ -66,13 +63,6 @@ rte_eal_get_configuration(void)
return &rte_config;
}
-/* Return a pointer to the internal configuration structure */
-struct internal_config *
-eal_get_internal_configuration(void)
-{
- return &internal_config;
-}
-
/* Return a pointer to the user configuration structure */
struct eal_user_cfg *
eal_get_user_configuration(void)
diff --git a/lib/eal/common/eal_common_mcfg.c b/lib/eal/common/eal_common_mcfg.c
index fddeae255e..497b0933c7 100644
--- a/lib/eal/common/eal_common_mcfg.c
+++ b/lib/eal/common/eal_common_mcfg.c
@@ -15,14 +15,13 @@ eal_mcfg_complete(void)
{
struct rte_config *cfg = rte_eal_get_configuration();
struct rte_mem_config *mcfg = cfg->mem_config;
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* ALL shared mem_config related INIT DONE */
if (cfg->process_type == RTE_PROC_PRIMARY)
mcfg->magic = RTE_MAGIC;
- internal_conf->init_complete = 1;
+ runtime_state->init_complete = 1;
}
void
diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c
index db43af13a8..1207d524c9 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -20,6 +20,7 @@
#include "malloc_heap.h"
#include "malloc_elem.h"
+#include "eal_internal_cfg.h"
#include "eal_private.h"
#include "eal_memcfg.h"
@@ -30,9 +31,10 @@ RTE_EXPORT_SYMBOL(rte_memzone_max_set)
int
rte_memzone_max_set(size_t max)
{
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct rte_mem_config *mcfg;
- if (eal_get_internal_configuration()->init_complete > 0) {
+ if (runtime_state->init_complete > 0) {
EAL_LOG(ERR, "Max memzone cannot be set after EAL init");
return -1;
}
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 0750a52373..2d6d4dc9bc 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -491,10 +491,11 @@ eal_get_hugefile_prefix(void)
}
void
-eal_reset_internal_config(struct internal_config *internal_cfg)
+eal_reset_internal_config(void)
{
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int i;
user_cfg->memory = 0;
@@ -540,8 +541,8 @@ eal_reset_internal_config(struct internal_config *internal_cfg)
user_cfg->no_telemetry = false;
user_cfg->iova_mode = RTE_IOVA_DC;
user_cfg->user_mbuf_pool_ops_name = NULL;
- CPU_ZERO(&internal_cfg->ctrl_cpuset);
- internal_cfg->init_complete = 0;
+ CPU_ZERO(&runtime_state->ctrl_cpuset);
+ runtime_state->init_complete = 0;
user_cfg->max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH;
user_cfg->max_simd_bitwidth.forced = 0;
}
@@ -1958,7 +1959,6 @@ eal_parse_huge_worker_stack(const char *arg)
int
eal_parse_args(void)
{
- struct internal_config *int_cfg = eal_get_internal_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct rte_config *rte_cfg = rte_eal_get_configuration();
bool remap_lcores = (args.remap_lcore_ids != NULL);
@@ -2307,7 +2307,7 @@ eal_parse_args(void)
}
#endif
- if (eal_adjust_config(int_cfg) != 0) {
+ if (eal_adjust_config() != 0) {
EAL_LOG(ERR, "Invalid configuration");
return -1;
}
@@ -2316,9 +2316,10 @@ eal_parse_args(void)
}
static void
-compute_ctrl_threads_cpuset(struct internal_config *internal_cfg)
+compute_ctrl_threads_cpuset(void)
{
- rte_cpuset_t *cpuset = &internal_cfg->ctrl_cpuset;
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ rte_cpuset_t *cpuset = &runtime_state->ctrl_cpuset;
rte_cpuset_t default_set;
unsigned int lcore_id;
@@ -2359,7 +2360,7 @@ eal_cleanup_config(const struct eal_user_cfg *user_cfg)
}
int
-eal_adjust_config(struct internal_config *internal_cfg)
+eal_adjust_config(void)
{
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
int i;
@@ -2367,7 +2368,7 @@ eal_adjust_config(struct internal_config *internal_cfg)
if (user_cfg->process_type == RTE_PROC_AUTO)
user_cfg->process_type = eal_proc_type_detect();
- compute_ctrl_threads_cpuset(internal_cfg);
+ compute_ctrl_threads_cpuset();
/* if no memory amounts were requested, this will result in 0 and
* will be overridden later, right after eal_hugepage_info_init() */
diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c
index 74f4f60b0a..dcf18ebf4c 100644
--- a/lib/eal/common/eal_common_proc.c
+++ b/lib/eal/common/eal_common_proc.c
@@ -343,8 +343,8 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s)
struct action_entry *entry;
struct rte_mp_msg *msg = &m->msg;
rte_mp_t action = NULL;
- const struct internal_config *internal_conf =
- eal_get_internal_configuration();
+ const struct eal_runtime_state *runtime_state =
+ eal_get_runtime_state();
EAL_LOG(DEBUG, "msg: %s", msg->name);
@@ -382,7 +382,7 @@ process_msg(struct mp_msg_internal *m, struct sockaddr_un *s)
pthread_mutex_unlock(&mp_mutex_action);
if (!action) {
- if (m->type == MP_REQ && !internal_conf->init_complete) {
+ if (m->type == MP_REQ && !runtime_state->init_complete) {
/* if this is a request, and init is not yet complete,
* and callback wasn't registered, we should tell the
* requester to ignore our existence because we're not
diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index dcd81f9e32..c2e7315bf4 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -234,9 +234,8 @@ struct control_thread_params {
static int control_thread_init(void *arg)
{
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
- rte_cpuset_t *cpuset = &internal_conf->ctrl_cpuset;
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ rte_cpuset_t *cpuset = &runtime_state->ctrl_cpuset;
struct control_thread_params *params = arg;
__rte_thread_init(rte_lcore_id(), cpuset);
@@ -354,11 +353,12 @@ RTE_EXPORT_SYMBOL(rte_thread_register)
int
rte_thread_register(void)
{
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
unsigned int lcore_id;
rte_cpuset_t cpuset;
/* EAL init flushes all lcores, we can't register before. */
- if (eal_get_internal_configuration()->init_complete != 1) {
+ if (runtime_state->init_complete != 1) {
EAL_LOG(DEBUG, "Called %s before EAL init.", __func__);
rte_errno = EINVAL;
return -1;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index fbbe5dce82..9a898e676e 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -102,13 +102,6 @@ struct eal_platform_info {
* as appropriate.
*/
struct eal_runtime_state {
- uint8_t reserved;
-};
-
-/**
- * internal configuration
- */
-struct internal_config {
rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
@@ -117,6 +110,6 @@ struct internal_config {
struct eal_user_cfg *eal_get_user_configuration(void);
struct eal_platform_info *eal_get_platform_info(void);
struct eal_runtime_state *eal_get_runtime_state(void);
-void eal_reset_internal_config(struct internal_config *internal_cfg);
+void eal_reset_internal_config(void);
#endif /* EAL_INTERNAL_CFG_H */
diff --git a/lib/eal/common/eal_options.h b/lib/eal/common/eal_options.h
index 5ad347b61d..a70c5b0c05 100644
--- a/lib/eal/common/eal_options.h
+++ b/lib/eal/common/eal_options.h
@@ -13,7 +13,7 @@ struct eal_user_cfg;
int eal_parse_log_options(void);
int eal_parse_args(void);
int eal_option_device_parse(void);
-int eal_adjust_config(struct internal_config *internal_cfg);
+int eal_adjust_config(void);
int eal_cleanup_config(const struct eal_user_cfg *user_cfg);
enum rte_proc_type_t eal_proc_type_detect(void);
int eal_plugins_init(void);
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index e032dd10c9..d9807fb2fb 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -710,15 +710,6 @@ eal_mem_set_dump(void *virt, size_t size, bool dump);
int
eal_set_runtime_dir(const char *run_dir);
-/**
- * Get the internal configuration structure.
- *
- * @return
- * A pointer to the internal configuration structure.
- */
-struct internal_config *
-eal_get_internal_configuration(void);
-
/**
* Get the current value of the rte_application_usage pointer
*
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 996a2de9ff..f41a700125 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -407,9 +407,8 @@ rte_eal_init(int argc, char **argv)
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
char thread_name[RTE_THREAD_NAME_SIZE];
const struct rte_config *config = rte_eal_get_configuration();
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
bool has_phys_addr;
enum rte_iova_mode iova_mode;
@@ -454,7 +453,7 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- eal_reset_internal_config(internal_conf);
+ eal_reset_internal_config();
if (rte_eal_cpu_init() < 0) {
rte_eal_init_alert("Cannot detect lcores.");
@@ -745,7 +744,7 @@ rte_eal_init(int argc, char **argv)
if (rte_eal_process_type() == RTE_PROC_PRIMARY && !user_cfg->no_telemetry) {
if (rte_telemetry_init(rte_eal_get_runtime_dir(),
rte_version(),
- &internal_conf->ctrl_cpuset) != 0)
+ &runtime_state->ctrl_cpuset) != 0)
goto err_out;
}
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index f692521fe7..ffe930155a 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -569,9 +569,8 @@ rte_eal_init(int argc, char **argv)
char thread_name[RTE_THREAD_NAME_SIZE];
bool phys_addrs;
const struct rte_config *config = rte_eal_get_configuration();
- struct internal_config *internal_conf =
- eal_get_internal_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* first check if we have been run before */
if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
@@ -614,7 +613,7 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- eal_reset_internal_config(internal_conf);
+ eal_reset_internal_config();
if (rte_eal_cpu_init() < 0) {
rte_eal_init_alert("Cannot detect lcores.");
@@ -918,7 +917,7 @@ rte_eal_init(int argc, char **argv)
if (rte_eal_process_type() == RTE_PROC_PRIMARY && !user_cfg->no_telemetry) {
if (rte_telemetry_init(rte_eal_get_runtime_dir(),
rte_version(),
- &internal_conf->ctrl_cpuset) != 0)
+ &runtime_state->ctrl_cpuset) != 0)
goto err_out;
}
--
2.51.0
^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 10/44] eal: record details of all cpus in platform info
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (8 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 09/44] eal: move runtime state to appropriate structure Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 11/44] eal: use platform info for lcore lookups Bruce Richardson
` (35 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Populate the platform info structure with details of all the cores on
the system.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_lcore.c | 19 +++++++++++++++++++
lib/eal/common/eal_internal_cfg.h | 16 ++++++++++++++--
lib/eal/common/eal_private.h | 8 ++++++++
lib/eal/freebsd/eal_lcore.c | 16 +++++++++++-----
lib/eal/linux/eal_lcore.c | 13 +++++++++++++
lib/eal/windows/eal_lcore.c | 6 ++++++
6 files changed, 71 insertions(+), 7 deletions(-)
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 39411f9370..d8cc5e1a91 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -152,6 +152,7 @@ rte_eal_cpu_init(void)
{
/* pointer to global configuration */
struct rte_config *config = rte_eal_get_configuration();
+ struct eal_platform_info *platform_info = eal_get_platform_info();
unsigned lcore_id;
unsigned count = 0;
unsigned int socket_id, prev_socket_id;
@@ -161,6 +162,24 @@ rte_eal_cpu_init(void)
int lcore_to_socket_id[RTE_MAX_LCORE] = {0};
#endif
+ /* allocate cpu_info for all CPUs visible to the OS */
+ platform_info->cpu_count = eal_cpu_max();
+ platform_info->cpu_info = calloc(platform_info->cpu_count,
+ sizeof(*platform_info->cpu_info));
+ if (platform_info->cpu_info == NULL) {
+ EAL_LOG(ERR, "Cannot allocate cpu_info array");
+ return -1;
+ }
+
+ /* populate cpu_info with hardware topology for all detected CPUs */
+ for (size_t cpu_id = 0; cpu_id < platform_info->cpu_count; cpu_id++) {
+ if (eal_cpu_detected(cpu_id) == 0)
+ continue;
+ platform_info->cpu_info[cpu_id].detected = true;
+ platform_info->cpu_info[cpu_id].numa_id = eal_cpu_socket_id(cpu_id);
+ platform_info->cpu_info[cpu_id].core_id = eal_cpu_core_id(cpu_id);
+ }
+
/*
* Parse the maximum set of logical cores, detect the subset of running
* ones and enable them by default.
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 9a898e676e..8ed7171bdc 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -88,10 +88,22 @@ struct eal_user_cfg {
};
/**
- * Discovered information about cores, memory, etc. on the system.
- * Immutable after initialization, so no need for atomic types or locks.
+ * Hardware facts about a single physical CPU, populated during CPU discovery.
+ * Indexed by physical CPU ID (not DPDK lcore ID).
+ */
+struct eal_cpu_info {
+ bool detected; /**< true if this CPU ID is valid and visible to the OS */
+ unsigned int numa_id; /**< NUMA node this CPU belongs to */
+ unsigned int core_id; /**< physical core number on its NUMA node */
+};
+
+/**
+ * Discovered information about the system hardware.
+ * Immutable after discovery.
*/
struct eal_platform_info {
+ size_t cpu_count; /**< number of entries in cpu_info[] */
+ struct eal_cpu_info *cpu_info; /**< per-physical-CPU hardware facts */
uint8_t num_hugepage_sizes; /**< how many sizes on this system */
struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
};
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index d9807fb2fb..00a73a9d61 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -388,6 +388,14 @@ unsigned eal_cpu_core_id(unsigned lcore_id);
*/
int eal_cpu_detected(unsigned lcore_id);
+/**
+ * Get the number of CPU IDs to allocate for platform CPU info.
+ * Returns max_cpu_id + 1: all valid CPU IDs are in [0, eal_cpu_max()).
+ *
+ * This function is private to the EAL.
+ */
+size_t eal_cpu_max(void);
+
/**
* Set TSC frequency from precise value or estimation
*
diff --git a/lib/eal/freebsd/eal_lcore.c b/lib/eal/freebsd/eal_lcore.c
index 1d3d1b67b9..c20d25358b 100644
--- a/lib/eal/freebsd/eal_lcore.c
+++ b/lib/eal/freebsd/eal_lcore.c
@@ -2,7 +2,10 @@
* Copyright(c) 2010-2014 Intel Corporation
*/
+#include <sched.h>
+#include <errno.h>
#include <unistd.h>
+#include <string.h>
#include <sys/sysctl.h>
#include <rte_log.h>
@@ -21,18 +24,21 @@ eal_cpu_core_id(__rte_unused unsigned lcore_id)
return 0;
}
-static int
-eal_get_ncpus(void)
+size_t
+eal_cpu_max(void)
{
static int ncpu = -1;
int mib[2] = {CTL_HW, HW_NCPU};
size_t len = sizeof(ncpu);
if (ncpu < 0) {
- sysctl(mib, 2, &ncpu, &len, NULL, 0);
+ if (sysctl(mib, 2, &ncpu, &len, NULL, 0) != 0) {
+ EAL_LOG(ERR, "sysctl failed to get number of CPUs: %s", strerror(errno));
+ return CPU_SETSIZE; /* fallback to CPU_SETSIZE */
+ }
EAL_LOG(INFO, "Sysctl reports %d cpus", ncpu);
}
- return ncpu;
+ return (size_t)ncpu;
}
unsigned
@@ -47,6 +53,6 @@ eal_cpu_socket_id(__rte_unused unsigned cpu_id)
int
eal_cpu_detected(unsigned lcore_id)
{
- const unsigned ncpus = eal_get_ncpus();
+ const unsigned ncpus = eal_cpu_max();
return lcore_id < ncpus;
}
diff --git a/lib/eal/linux/eal_lcore.c b/lib/eal/linux/eal_lcore.c
index 29b36dd610..6a806336bd 100644
--- a/lib/eal/linux/eal_lcore.c
+++ b/lib/eal/linux/eal_lcore.c
@@ -72,3 +72,16 @@ eal_cpu_core_id(unsigned lcore_id)
"for lcore %u - assuming core 0", SYS_CPU_DIR, lcore_id);
return 0;
}
+
+size_t
+eal_cpu_max(void)
+{
+ long n = sysconf(_SC_NPROCESSORS_CONF);
+
+ if (n <= 0) {
+ EAL_LOG(WARNING, "sysconf(_SC_NPROCESSORS_CONF) failed, "
+ "falling back to CPU_SETSIZE");
+ return CPU_SETSIZE;
+ }
+ return (size_t)n;
+}
diff --git a/lib/eal/windows/eal_lcore.c b/lib/eal/windows/eal_lcore.c
index a498044620..dcbba383f7 100644
--- a/lib/eal/windows/eal_lcore.c
+++ b/lib/eal/windows/eal_lcore.c
@@ -239,6 +239,12 @@ eal_cpu_core_id(unsigned int lcore_id)
return cpu_map.lcores[lcore_id].core_id;
}
+size_t
+eal_cpu_max(void)
+{
+ return (size_t)cpu_map.lcore_count;
+}
+
unsigned int
eal_socket_numa_node(unsigned int socket_id)
{
--
2.51.0
* [RFC PATCH 11/44] eal: use platform info for lcore lookups
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (9 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 10/44] eal: record details of all cpus in platform info Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 12/44] eal: add RTE_CPU_FFS macro Bruce Richardson
` (34 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Rather than storing the core and NUMA fields in the lcore_config
structure - which is indexed by logical core ID rather than by physical
CPU - use the platform info struct as the single source of truth for
CPU topology information.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_lcore.c | 42 ++++++++++++++++--------
lib/eal/common/eal_common_thread.c | 51 +++++++++---------------------
lib/eal/common/eal_private.h | 2 --
3 files changed, 43 insertions(+), 52 deletions(-)
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index d8cc5e1a91..ba3a0c8a92 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -50,6 +50,9 @@ int rte_lcore_index(int lcore_id)
RTE_EXPORT_SYMBOL(rte_lcore_to_cpu_id)
int rte_lcore_to_cpu_id(int lcore_id)
{
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ unsigned int cpu;
+
if (unlikely(lcore_id >= RTE_MAX_LCORE))
return -1;
@@ -60,7 +63,11 @@ int rte_lcore_to_cpu_id(int lcore_id)
lcore_id = (int)rte_lcore_id();
}
- return lcore_config[lcore_id].core_id;
+ for (cpu = 0; cpu < CPU_SETSIZE && cpu < platform_info->cpu_count; cpu++) {
+ if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
+ return (int)platform_info->cpu_info[cpu].core_id;
+ }
+ return -1;
}
RTE_EXPORT_SYMBOL(rte_lcore_cpuset)
@@ -126,7 +133,14 @@ RTE_EXPORT_SYMBOL(rte_lcore_to_socket_id)
unsigned int
rte_lcore_to_socket_id(unsigned int lcore_id)
{
- return lcore_config[lcore_id].numa_id;
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ unsigned int cpu;
+
+ for (cpu = 0; cpu < CPU_SETSIZE && cpu < platform_info->cpu_count; cpu++) {
+ if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
+ return platform_info->cpu_info[cpu].numa_id;
+ }
+ return 0;
}
static int
@@ -190,38 +204,38 @@ rte_eal_cpu_init(void)
/* init cpuset for per lcore config */
CPU_ZERO(&lcore_config[lcore_id].cpuset);
- /* find socket first */
- socket_id = eal_cpu_socket_id(lcore_id);
- lcore_to_socket_id[lcore_id] = socket_id;
-
if (eal_cpu_detected(lcore_id) == 0) {
config->lcore_role[lcore_id] = ROLE_OFF;
lcore_config[lcore_id].core_index = -1;
continue;
}
+ /* find socket first */
+ socket_id = platform_info->cpu_info[lcore_id].numa_id;
+ lcore_to_socket_id[lcore_id] = socket_id;
+
/* By default, lcore 1:1 map to cpu id */
CPU_SET(lcore_id, &lcore_config[lcore_id].cpuset);
/* By default, each detected core is enabled */
config->lcore_role[lcore_id] = ROLE_RTE;
lcore_config[lcore_id].core_role = ROLE_RTE;
- lcore_config[lcore_id].core_id = eal_cpu_core_id(lcore_id);
- lcore_config[lcore_id].numa_id = socket_id;
EAL_LOG(DEBUG, "Detected lcore %u as "
"core %u on NUMA node %u",
- lcore_id, lcore_config[lcore_id].core_id,
- lcore_config[lcore_id].numa_id);
+ lcore_id,
+ platform_info->cpu_info[lcore_id].core_id,
+ platform_info->cpu_info[lcore_id].numa_id);
count++;
}
for (; lcore_id < CPU_SETSIZE; lcore_id++) {
if (eal_cpu_detected(lcore_id) == 0)
continue;
- socket_id = eal_cpu_socket_id(lcore_id);
- lcore_to_socket_id[lcore_id] = socket_id;
+ if (unlikely(lcore_id >= platform_info->cpu_count))
+ break;
+ lcore_to_socket_id[lcore_id] = platform_info->cpu_info[lcore_id].numa_id;
EAL_LOG(DEBUG, "Skipped lcore %u as core %u on NUMA node %u",
- lcore_id, eal_cpu_core_id(lcore_id),
- socket_id);
+ lcore_id, platform_info->cpu_info[lcore_id].core_id,
+ platform_info->cpu_info[lcore_id].numa_id);
}
/* Set the count of enabled logical cores of the EAL configuration */
diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index c2e7315bf4..774344013d 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -37,52 +37,31 @@ unsigned rte_socket_id(void)
return RTE_PER_LCORE(_numa_id);
}
-static int
-eal_cpuset_socket_id(rte_cpuset_t *cpusetp)
-{
- unsigned cpu = 0;
- int socket_id = SOCKET_ID_ANY;
- int sid;
-
- if (cpusetp == NULL)
- return SOCKET_ID_ANY;
-
- do {
- if (!CPU_ISSET(cpu, cpusetp))
- continue;
-
- if (socket_id == SOCKET_ID_ANY)
- socket_id = eal_cpu_socket_id(cpu);
-
- sid = eal_cpu_socket_id(cpu);
- if (socket_id != sid) {
- socket_id = SOCKET_ID_ANY;
- break;
- }
-
- } while (++cpu < CPU_SETSIZE);
-
- return socket_id;
-}
-
static void
thread_update_affinity(rte_cpuset_t *cpusetp)
{
unsigned int lcore_id = rte_lcore_id();
- /* store numa_id in TLS for quick access */
- RTE_PER_LCORE(_numa_id) =
- eal_cpuset_socket_id(cpusetp);
-
/* store cpuset in TLS for quick access */
- memmove(&RTE_PER_LCORE(_cpuset), cpusetp,
- sizeof(rte_cpuset_t));
+ memmove(&RTE_PER_LCORE(_cpuset), cpusetp, sizeof(rte_cpuset_t));
if (lcore_id != (unsigned)LCORE_ID_ANY) {
- /* EAL thread will update lcore_config */
- lcore_config[lcore_id].numa_id = RTE_PER_LCORE(_numa_id);
+ /* EAL thread: update lcore_config cpuset first then find numa based on that */
memmove(&lcore_config[lcore_id].cpuset, cpusetp,
sizeof(rte_cpuset_t));
+ RTE_PER_LCORE(_numa_id) = rte_lcore_to_socket_id(lcore_id);
+ } else {
+ /* Non-EAL thread: derive NUMA node from first CPU in cpuset. */
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ unsigned int cpu;
+
+ RTE_PER_LCORE(_numa_id) = SOCKET_ID_ANY;
+ for (cpu = 0; cpu < CPU_SETSIZE && cpu < platform_info->cpu_count; cpu++) {
+ if (CPU_ISSET(cpu, cpusetp)) {
+ RTE_PER_LCORE(_numa_id) = platform_info->cpu_info[cpu].numa_id;
+ break;
+ }
+ }
}
}
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 00a73a9d61..48569f2ed7 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -30,8 +30,6 @@ struct lcore_config {
volatile int ret; /**< return value of function */
volatile RTE_ATOMIC(enum rte_lcore_state_t) state; /**< lcore state */
- unsigned int numa_id; /**< NUMA node ID for this lcore */
- unsigned int core_id; /**< core number on socket for this lcore */
int core_index; /**< relative index, starting from 0 */
uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
--
2.51.0
* [RFC PATCH 12/44] eal: add RTE_CPU_FFS macro
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (10 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 11/44] eal: use platform info for lcore lookups Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 13/44] eal: store lcore configuration in runtime data Bruce Richardson
` (33 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
FreeBSD provides the CPU_FFS macro as part of its regular cpuset
functions. Add it as RTE_CPU_FFS for all OSes, with a fallback
implementation where the OS does not provide it.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/freebsd/include/rte_os.h | 2 ++
lib/eal/linux/include/rte_os.h | 10 ++++++++++
lib/eal/windows/include/rte_os.h | 1 +
lib/eal/windows/include/sched.h | 10 ++++++++++
4 files changed, 23 insertions(+)
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 94b9275beb..38fd0e7f05 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -75,4 +75,6 @@ typedef cpuset_t rte_cpuset_t;
#endif /* RTE_EAL_FREEBSD_CPUSET_LEGACY */
+#define RTE_CPU_FFS CPU_FFS
+
#endif /* _RTE_OS_H_ */
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 20eff0409a..6e299cc7da 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -41,6 +41,16 @@ typedef cpu_set_t rte_cpuset_t;
RTE_CPU_FILL(&tmp); \
CPU_XOR(dst, &tmp, src); \
} while (0)
+
+static inline int
+_cpu_ffs(const rte_cpuset_t *s)
+{
+ for (unsigned int _i = 0; _i < CPU_SETSIZE; _i++)
+ if (CPU_ISSET(_i, s))
+ return (int)(_i + 1);
+ return 0;
+}
+#define RTE_CPU_FFS(s) _cpu_ffs(s)
#endif
#endif /* _RTE_OS_H_ */
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 2a43cb1f9b..25701c7906 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -49,6 +49,7 @@ struct { \
#define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
#define RTE_CPU_FILL(set) CPU_FILL(set)
#define RTE_CPU_NOT(dst, src) CPU_NOT(dst, src)
+#define RTE_CPU_FFS(s) CPU_FFS(s)
/* This is an exception without "rte_" prefix, because Windows does have
* ssize_t, but it's defined in <windows.h> which we avoid to expose.
diff --git a/lib/eal/windows/include/sched.h b/lib/eal/windows/include/sched.h
index 912fed12c2..230de19c76 100644
--- a/lib/eal/windows/include/sched.h
+++ b/lib/eal/windows/include/sched.h
@@ -86,6 +86,16 @@ do { \
(dst)->_bits[_i] = (src)->_bits[_i] ^ -1LL; \
} while (0)
+static inline int
+cpu_ffs(const rte_cpuset_t *s)
+{
+ for (unsigned int _i = 0; _i < CPU_SETSIZE; _i++)
+ if (CPU_ISSET(_i, s))
+ return (int)(_i + 1);
+ return 0;
+}
+#define CPU_FFS(s) cpu_ffs(s)
+
#ifdef __cplusplus
}
#endif
--
2.51.0
* [RFC PATCH 13/44] eal: store lcore configuration in runtime data
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (11 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 12/44] eal: add RTE_CPU_FFS macro Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 14/44] eal: cleanup CPU init function Bruce Richardson
` (32 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Remove the standalone lcore_config array and instead manage the lcore
configuration as part of the runtime state.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_launch.c | 25 +++++++++++------
lib/eal/common/eal_common_lcore.c | 43 +++++++++++++++++------------
lib/eal/common/eal_common_options.c | 38 +++++++++++++------------
lib/eal/common/eal_common_thread.c | 24 +++++++++-------
lib/eal/common/eal_internal_cfg.h | 19 +++++++++++++
lib/eal/common/eal_private.h | 21 --------------
lib/eal/common/rte_service.c | 8 ++----
lib/eal/freebsd/eal.c | 23 +++++++--------
lib/eal/linux/eal.c | 24 ++++++++--------
lib/eal/unix/eal_unix_thread.c | 11 +++++---
lib/eal/windows/eal.c | 22 +++++++--------
lib/eal/windows/eal_thread.c | 11 +++++---
12 files changed, 142 insertions(+), 127 deletions(-)
diff --git a/lib/eal/common/eal_common_launch.c b/lib/eal/common/eal_common_launch.c
index a7deac6ecd..a0f2a43b2a 100644
--- a/lib/eal/common/eal_common_launch.c
+++ b/lib/eal/common/eal_common_launch.c
@@ -20,11 +20,13 @@ RTE_EXPORT_SYMBOL(rte_eal_wait_lcore)
int
rte_eal_wait_lcore(unsigned worker_id)
{
- while (rte_atomic_load_explicit(&lcore_config[worker_id].state,
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+
+ while (rte_atomic_load_explicit(&runtime_state->lcore_cfg[worker_id].state,
rte_memory_order_acquire) != WAIT)
rte_pause();
- return lcore_config[worker_id].ret;
+ return runtime_state->lcore_cfg[worker_id].ret;
}
/*
@@ -36,21 +38,23 @@ RTE_EXPORT_SYMBOL(rte_eal_remote_launch)
int
rte_eal_remote_launch(lcore_function_t *f, void *arg, unsigned int worker_id)
{
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int rc = -EBUSY;
/* Check if the worker is in 'WAIT' state. Use acquire order
* since 'state' variable is used as the guard variable.
*/
- if (rte_atomic_load_explicit(&lcore_config[worker_id].state,
+ if (rte_atomic_load_explicit(&runtime_state->lcore_cfg[worker_id].state,
rte_memory_order_acquire) != WAIT)
goto finish;
- lcore_config[worker_id].arg = arg;
+ runtime_state->lcore_cfg[worker_id].arg = arg;
/* Ensure that all the memory operations are completed
* before the worker thread starts running the function.
* Use worker thread function as the guard variable.
*/
- rte_atomic_store_explicit(&lcore_config[worker_id].f, f, rte_memory_order_release);
+ rte_atomic_store_explicit(&runtime_state->lcore_cfg[worker_id].f, f,
+ rte_memory_order_release);
rc = eal_thread_wake_worker(worker_id);
@@ -69,12 +73,13 @@ int
rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
enum rte_rmt_call_main_t call_main)
{
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int lcore_id;
int main_lcore = rte_get_main_lcore();
/* check state of lcores */
RTE_LCORE_FOREACH_WORKER(lcore_id) {
- if (lcore_config[lcore_id].state != WAIT)
+ if (runtime_state->lcore_cfg[lcore_id].state != WAIT)
return -EBUSY;
}
@@ -84,8 +89,8 @@ rte_eal_mp_remote_launch(int (*f)(void *), void *arg,
}
if (call_main == CALL_MAIN) {
- lcore_config[main_lcore].ret = f(arg);
- lcore_config[main_lcore].state = WAIT;
+ runtime_state->lcore_cfg[main_lcore].ret = f(arg);
+ runtime_state->lcore_cfg[main_lcore].state = WAIT;
}
return 0;
@@ -98,7 +103,9 @@ RTE_EXPORT_SYMBOL(rte_eal_get_lcore_state)
enum rte_lcore_state_t
rte_eal_get_lcore_state(unsigned lcore_id)
{
- return lcore_config[lcore_id].state;
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+
+ return runtime_state->lcore_cfg[lcore_id].state;
}
/*
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index ba3a0c8a92..ca5106c623 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -34,6 +34,8 @@ unsigned int rte_lcore_count(void)
RTE_EXPORT_SYMBOL(rte_lcore_index)
int rte_lcore_index(int lcore_id)
{
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+
if (unlikely(lcore_id >= RTE_MAX_LCORE))
return -1;
@@ -44,12 +46,13 @@ int rte_lcore_index(int lcore_id)
lcore_id = (int)rte_lcore_id();
}
- return lcore_config[lcore_id].core_index;
+ return runtime_state->lcore_cfg[lcore_id].core_index;
}
RTE_EXPORT_SYMBOL(rte_lcore_to_cpu_id)
int rte_lcore_to_cpu_id(int lcore_id)
{
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_platform_info *platform_info = eal_get_platform_info();
unsigned int cpu;
@@ -63,17 +66,18 @@ int rte_lcore_to_cpu_id(int lcore_id)
lcore_id = (int)rte_lcore_id();
}
- for (cpu = 0; cpu < CPU_SETSIZE && cpu < platform_info->cpu_count; cpu++) {
- if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
- return (int)platform_info->cpu_info[cpu].core_id;
- }
+ cpu = runtime_state->lcore_cfg[lcore_id].first_cpu;
+ if (cpu < platform_info->cpu_count)
+ return (int)platform_info->cpu_info[cpu].core_id;
return -1;
}
RTE_EXPORT_SYMBOL(rte_lcore_cpuset)
rte_cpuset_t rte_lcore_cpuset(unsigned int lcore_id)
{
- return lcore_config[lcore_id].cpuset;
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+
+ return runtime_state->lcore_cfg[lcore_id].cpuset;
}
RTE_EXPORT_SYMBOL(rte_eal_lcore_role)
@@ -133,13 +137,12 @@ RTE_EXPORT_SYMBOL(rte_lcore_to_socket_id)
unsigned int
rte_lcore_to_socket_id(unsigned int lcore_id)
{
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_platform_info *platform_info = eal_get_platform_info();
- unsigned int cpu;
+ unsigned int cpu = runtime_state->lcore_cfg[lcore_id].first_cpu;
- for (cpu = 0; cpu < CPU_SETSIZE && cpu < platform_info->cpu_count; cpu++) {
- if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
- return platform_info->cpu_info[cpu].numa_id;
- }
+ if (cpu < platform_info->cpu_count)
+ return platform_info->cpu_info[cpu].numa_id;
return 0;
}
@@ -167,6 +170,7 @@ rte_eal_cpu_init(void)
/* pointer to global configuration */
struct rte_config *config = rte_eal_get_configuration();
struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
unsigned lcore_id;
unsigned count = 0;
unsigned int socket_id, prev_socket_id;
@@ -199,14 +203,15 @@ rte_eal_cpu_init(void)
* ones and enable them by default.
*/
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- lcore_config[lcore_id].core_index = count;
+ runtime_state->lcore_cfg[lcore_id].core_index = count;
/* init cpuset for per lcore config */
- CPU_ZERO(&lcore_config[lcore_id].cpuset);
+ CPU_ZERO(&runtime_state->lcore_cfg[lcore_id].cpuset);
+ runtime_state->lcore_cfg[lcore_id].first_cpu = UINT16_MAX;
if (eal_cpu_detected(lcore_id) == 0) {
config->lcore_role[lcore_id] = ROLE_OFF;
- lcore_config[lcore_id].core_index = -1;
+ runtime_state->lcore_cfg[lcore_id].core_index = -1;
continue;
}
@@ -215,11 +220,11 @@ rte_eal_cpu_init(void)
lcore_to_socket_id[lcore_id] = socket_id;
/* By default, lcore 1:1 map to cpu id */
- CPU_SET(lcore_id, &lcore_config[lcore_id].cpuset);
+ CPU_SET(lcore_id, &runtime_state->lcore_cfg[lcore_id].cpuset);
+ runtime_state->lcore_cfg[lcore_id].first_cpu = lcore_id;
/* By default, each detected core is enabled */
config->lcore_role[lcore_id] = ROLE_RTE;
- lcore_config[lcore_id].core_role = ROLE_RTE;
EAL_LOG(DEBUG, "Detected lcore %u as "
"core %u on NUMA node %u",
lcore_id,
@@ -513,6 +518,7 @@ calc_usage_ratio(const struct rte_lcore_usage *usage)
static int
lcore_dump_cb(unsigned int lcore_id, void *arg)
{
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct rte_config *cfg = rte_eal_get_configuration();
char *cpuset;
struct rte_lcore_usage usage;
@@ -531,7 +537,7 @@ lcore_dump_cb(unsigned int lcore_id, void *arg)
return -ENOMEM;
}
}
- cpuset = eal_cpuset_to_str(&lcore_config[lcore_id].cpuset);
+ cpuset = eal_cpuset_to_str(&runtime_state->lcore_cfg[lcore_id].cpuset);
fprintf(f, "lcore %u, socket %u, role %s, cpuset %s\n", lcore_id,
rte_lcore_to_socket_id(lcore_id),
lcore_role_str(cfg->lcore_role[lcore_id]),
@@ -586,6 +592,7 @@ format_usage_ratio(char *buf, uint16_t size, const struct rte_lcore_usage *usage
static int
lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
{
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct rte_config *cfg = rte_eal_get_configuration();
struct lcore_telemetry_info *info = arg;
char ratio_str[RTE_TEL_MAX_STRING_LEN];
@@ -606,7 +613,7 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
return -ENOMEM;
rte_tel_data_start_array(cpuset, RTE_TEL_INT_VAL);
for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
- if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
+ if (CPU_ISSET(cpu, &runtime_state->lcore_cfg[lcore_id].cpuset))
rte_tel_data_add_array_int(cpuset, cpu);
}
rte_tel_data_add_dict_container(info->d, "cpuset", cpuset, 0);
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 2d6d4dc9bc..02c40e5ce1 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -888,7 +888,7 @@ eal_parse_service_coremask(const char *coremask)
if (cfg->lcore_role[idx] == ROLE_RTE)
taken_lcore_count++;
- lcore_config[idx].core_role = ROLE_SERVICE;
+ cfg->lcore_role[idx] = ROLE_SERVICE;
count++;
}
}
@@ -898,9 +898,6 @@ eal_parse_service_coremask(const char *coremask)
if (coremask[i] != '0')
return -1;
- for (; idx < RTE_MAX_LCORE; idx++)
- lcore_config[idx].core_index = -1;
-
if (count == 0)
return -1;
@@ -918,6 +915,7 @@ static int
update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
{
struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
unsigned int lcore_id = remap_base;
unsigned int count = 0;
unsigned int i;
@@ -926,7 +924,7 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
/* set everything to disabled first, then set up values */
for (i = 0; i < RTE_MAX_LCORE; i++) {
cfg->lcore_role[i] = ROLE_OFF;
- lcore_config[i].core_index = -1;
+ runtime_state->lcore_cfg[i].core_index = -1;
}
/* now go through the cpuset */
@@ -954,9 +952,10 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
}
cfg->lcore_role[lcore_id] = ROLE_RTE;
- lcore_config[lcore_id].core_index = count;
- CPU_ZERO(&lcore_config[lcore_id].cpuset);
- CPU_SET(i, &lcore_config[lcore_id].cpuset);
+ runtime_state->lcore_cfg[lcore_id].core_index = count;
+ CPU_ZERO(&runtime_state->lcore_cfg[lcore_id].cpuset);
+ CPU_SET(i, &runtime_state->lcore_cfg[lcore_id].cpuset);
+ runtime_state->lcore_cfg[lcore_id].first_cpu = i;
EAL_LOG(DEBUG, "lcore %u mapped to physical core %u", lcore_id, i);
lcore_id++;
count++;
@@ -1129,8 +1128,7 @@ eal_parse_service_corelist(const char *corelist)
if (cfg->lcore_role[idx] == ROLE_RTE)
taken_lcore_count++;
- lcore_config[idx].core_role =
- ROLE_SERVICE;
+ cfg->lcore_role[idx] = ROLE_SERVICE;
count++;
}
}
@@ -1153,7 +1151,7 @@ eal_parse_service_corelist(const char *corelist)
rte_cpuset_t service_cpuset;
CPU_ZERO(&service_cpuset);
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (lcore_config[i].core_role == ROLE_SERVICE)
+ if (cfg->lcore_role[i] == ROLE_SERVICE)
CPU_SET(i, &service_cpuset);
}
if (CPU_COUNT(&service_cpuset) > 0) {
@@ -1182,7 +1180,7 @@ eal_parse_main_lcore(const char *arg)
return -1;
/* ensure main core is not used as service core */
- if (lcore_config[cfg->main_lcore].core_role == ROLE_SERVICE) {
+ if (cfg->lcore_role[cfg->main_lcore] == ROLE_SERVICE) {
EAL_LOG(ERR, "Error: Main lcore is used as a service core");
return -1;
}
@@ -1354,6 +1352,7 @@ static int
eal_parse_lcores(const char *lcores)
{
struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
rte_cpuset_t lcore_set;
unsigned int set_count;
unsigned idx = 0;
@@ -1377,8 +1376,9 @@ eal_parse_lcores(const char *lcores)
/* Reset lcore config */
for (idx = 0; idx < RTE_MAX_LCORE; idx++) {
cfg->lcore_role[idx] = ROLE_OFF;
- lcore_config[idx].core_index = -1;
- CPU_ZERO(&lcore_config[idx].cpuset);
+ runtime_state->lcore_cfg[idx].core_index = -1;
+ CPU_ZERO(&runtime_state->lcore_cfg[idx].cpuset);
+ runtime_state->lcore_cfg[idx].first_cpu = UINT16_MAX;
}
/* Get list of cores */
@@ -1439,7 +1439,7 @@ eal_parse_lcores(const char *lcores)
set_count--;
if (cfg->lcore_role[idx] != ROLE_RTE) {
- lcore_config[idx].core_index = count;
+ runtime_state->lcore_cfg[idx].core_index = count;
cfg->lcore_role[idx] = ROLE_RTE;
count++;
}
@@ -1451,8 +1451,10 @@ eal_parse_lcores(const char *lcores)
if (check_cpuset(&cpuset) < 0)
goto err;
- rte_memcpy(&lcore_config[idx].cpuset, &cpuset,
+ rte_memcpy(&runtime_state->lcore_cfg[idx].cpuset, &cpuset,
sizeof(rte_cpuset_t));
+ runtime_state->lcore_cfg[idx].first_cpu =
+ (uint16_t)(RTE_CPU_FFS(&cpuset) - 1);
}
/* some cores from the lcore_set can't be handled by EAL */
@@ -2326,7 +2328,7 @@ compute_ctrl_threads_cpuset(void)
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
if (rte_lcore_has_role(lcore_id, ROLE_OFF))
continue;
- RTE_CPU_OR(cpuset, cpuset, &lcore_config[lcore_id].cpuset);
+ RTE_CPU_OR(cpuset, cpuset, &runtime_state->lcore_cfg[lcore_id].cpuset);
}
RTE_CPU_NOT(cpuset, cpuset);
@@ -2337,7 +2339,7 @@ compute_ctrl_threads_cpuset(void)
/* if no remaining cpu, use main lcore cpu affinity */
if (!CPU_COUNT(cpuset)) {
- memcpy(cpuset, &lcore_config[rte_get_main_lcore()].cpuset,
+ memcpy(cpuset, &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset,
sizeof(*cpuset));
}
diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index 774344013d..7256d06d0a 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -46,9 +46,12 @@ thread_update_affinity(rte_cpuset_t *cpusetp)
memmove(&RTE_PER_LCORE(_cpuset), cpusetp, sizeof(rte_cpuset_t));
if (lcore_id != (unsigned)LCORE_ID_ANY) {
- /* EAL thread: update lcore_config cpuset first then find numa based on that */
- memmove(&lcore_config[lcore_id].cpuset, cpusetp,
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+
+ /* EAL thread: update lcore_cfg cpuset first, then find NUMA based on that */
+ memmove(&runtime_state->lcore_cfg[lcore_id].cpuset, cpusetp,
sizeof(rte_cpuset_t));
+ runtime_state->lcore_cfg[lcore_id].first_cpu = (uint16_t)(RTE_CPU_FFS(cpusetp) - 1);
RTE_PER_LCORE(_numa_id) = rte_lcore_to_socket_id(lcore_id);
} else {
/* Non-EAL thread: derive NUMA node from first CPU in cpuset. */
@@ -135,10 +138,11 @@ __rte_noreturn uint32_t
eal_thread_loop(void *arg)
{
unsigned int lcore_id = (uintptr_t)arg;
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
int ret;
- __rte_thread_init(lcore_id, &lcore_config[lcore_id].cpuset);
+ __rte_thread_init(lcore_id, &runtime_state->lcore_cfg[lcore_id].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
EAL_LOG(DEBUG, "lcore %u is ready (tid=%zx;cpuset=[%s%s])",
@@ -157,7 +161,7 @@ eal_thread_loop(void *arg)
/* Set the state to 'RUNNING'. Use release order
* since 'state' variable is used as the guard variable.
*/
- rte_atomic_store_explicit(&lcore_config[lcore_id].state, RUNNING,
+ rte_atomic_store_explicit(&runtime_state->lcore_cfg[lcore_id].state, RUNNING,
rte_memory_order_release);
eal_thread_ack_command();
@@ -167,25 +171,25 @@ eal_thread_loop(void *arg)
* are accessed only after update to 'f' is visible.
* Wait till the update to 'f' is visible to the worker.
*/
- while ((f = rte_atomic_load_explicit(&lcore_config[lcore_id].f,
+ while ((f = rte_atomic_load_explicit(&runtime_state->lcore_cfg[lcore_id].f,
rte_memory_order_acquire)) == NULL)
rte_pause();
rte_eal_trace_thread_lcore_running(lcore_id, f);
/* call the function and store the return value */
- fct_arg = lcore_config[lcore_id].arg;
+ fct_arg = runtime_state->lcore_cfg[lcore_id].arg;
ret = f(fct_arg);
- lcore_config[lcore_id].ret = ret;
- lcore_config[lcore_id].f = NULL;
- lcore_config[lcore_id].arg = NULL;
+ runtime_state->lcore_cfg[lcore_id].ret = ret;
+ runtime_state->lcore_cfg[lcore_id].f = NULL;
+ runtime_state->lcore_cfg[lcore_id].arg = NULL;
/* Store the state with release order to ensure that
* the memory operations from the worker thread
* are completed before the state is updated.
* Use 'state' as the guard variable.
*/
- rte_atomic_store_explicit(&lcore_config[lcore_id].state, WAIT,
+ rte_atomic_store_explicit(&runtime_state->lcore_cfg[lcore_id].state, WAIT,
rte_memory_order_release);
rte_eal_trace_thread_lcore_stopped(lcore_id);
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 8ed7171bdc..ef4bcfc01a 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -16,6 +16,7 @@
#include <stdint.h>
#include <stdbool.h>
+#include <rte_stdatomic.h>
#include "eal_thread.h"
#if defined(RTE_ARCH_ARM)
@@ -108,6 +109,23 @@ struct eal_platform_info {
struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
};
+/**
+ * Per-lcore runtime state, owned by EAL.
+ */
+struct lcore_cfg {
+ int core_index; /**< relative index, starting from 0 */
+ rte_cpuset_t cpuset; /**< cpu set to which the lcore has affinity */
+ uint16_t first_cpu; /**< lowest CPU set in cpuset, UINT16_MAX if none */
+ /* Fields for executing code on a remote lcore */
+ rte_thread_t thread_id; /**< thread identifier */
+ int pipe_main2worker[2]; /**< communication pipe with main */
+ int pipe_worker2main[2]; /**< communication pipe with main */
+ RTE_ATOMIC(lcore_function_t *) volatile f; /**< function to call */
+ void * volatile arg; /**< argument of function */
+ volatile int ret; /**< return value of function */
+ volatile RTE_ATOMIC(enum rte_lcore_state_t) state; /**< lcore state */
+};
+
/**
* Internal EAL runtime state
* May be modified at runtime, so access must be protected by locks or atomic types
@@ -117,6 +135,7 @@ struct eal_runtime_state {
rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
+ struct lcore_cfg lcore_cfg[RTE_MAX_LCORE];
};
struct eal_user_cfg *eal_get_user_configuration(void);
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 48569f2ed7..bd9c9f2b70 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -17,27 +17,6 @@
#include "eal_internal_cfg.h"
-/**
- * Structure storing internal configuration (per-lcore)
- */
-struct lcore_config {
- rte_thread_t thread_id; /**< thread identifier */
- int pipe_main2worker[2]; /**< communication pipe with main */
- int pipe_worker2main[2]; /**< communication pipe with main */
-
- RTE_ATOMIC(lcore_function_t *) volatile f; /**< function to call */
- void * volatile arg; /**< argument of function */
- volatile int ret; /**< return value of function */
-
- volatile RTE_ATOMIC(enum rte_lcore_state_t) state; /**< lcore state */
- int core_index; /**< relative index, starting from 0 */
- uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
-
- rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
-};
-
-extern struct lcore_config lcore_config[RTE_MAX_LCORE];
-
/**
* The global RTE configuration structure.
*/
diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index d2ac9d3f14..dbf4fe153b 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -107,7 +107,7 @@ rte_service_init(void)
int i;
struct rte_config *cfg = rte_eal_get_configuration();
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (lcore_config[i].core_role == ROLE_SERVICE) {
+ if (cfg->lcore_role[i] == ROLE_SERVICE) {
if ((unsigned int)i == cfg->main_lcore)
continue;
rte_service_lcore_add(i);
@@ -714,9 +714,6 @@ set_lcore_state(uint32_t lcore, int32_t state)
struct core_state *cs = RTE_LCORE_VAR_LCORE(lcore, lcore_states);
cfg->lcore_role[lcore] = state;
- /* mark state in process local lcore_config */
- lcore_config[lcore].core_role = state;
-
/* update per-lcore optimized state tracking */
cs->is_service_core = (state == ROLE_SERVICE);
@@ -1104,6 +1101,7 @@ RTE_EXPORT_SYMBOL(rte_service_dump)
int32_t
rte_service_dump(FILE *f, uint32_t id)
{
+ struct rte_config *cfg = rte_eal_get_configuration();
uint32_t i;
int print_one = (id != UINT32_MAX);
@@ -1126,7 +1124,7 @@ rte_service_dump(FILE *f, uint32_t id)
fprintf(f, "Service Cores Summary\n");
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (lcore_config[i].core_role != ROLE_SERVICE)
+ if (cfg->lcore_role[i] != ROLE_SERVICE)
continue;
service_dump_calls_per_lcore(f, i);
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index f41a700125..a75af85a7c 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -70,9 +70,6 @@ static struct flock wr_lock = {
.l_len = RTE_SIZEOF_FIELD(struct rte_mem_config, memsegs),
};
-/* internal configuration (per-core) */
-struct lcore_config lcore_config[RTE_MAX_LCORE];
-
/* used by rte_rdtsc() */
RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map)
int rte_cycles_vmware_tsc_map;
@@ -408,7 +405,7 @@ rte_eal_init(int argc, char **argv)
char thread_name[RTE_THREAD_NAME_SIZE];
const struct rte_config *config = rte_eal_get_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
bool has_phys_addr;
enum rte_iova_mode iova_mode;
@@ -650,13 +647,13 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (rte_thread_set_affinity_by_id(rte_thread_self(),
- &lcore_config[config->main_lcore].cpuset) != 0) {
+ &runtime_state->lcore_cfg[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
goto err_out;
}
__rte_thread_init(config->main_lcore,
- &lcore_config[config->main_lcore].cpuset);
+ &runtime_state->lcore_cfg[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
@@ -670,15 +667,15 @@ rte_eal_init(int argc, char **argv)
* create communication pipes between main thread
* and children
*/
- if (pipe(lcore_config[i].pipe_main2worker) < 0)
+ if (pipe(runtime_state->lcore_cfg[i].pipe_main2worker) < 0)
rte_panic("Cannot create pipe\n");
- if (pipe(lcore_config[i].pipe_worker2main) < 0)
+ if (pipe(runtime_state->lcore_cfg[i].pipe_worker2main) < 0)
rte_panic("Cannot create pipe\n");
- lcore_config[i].state = WAIT;
+ runtime_state->lcore_cfg[i].state = WAIT;
/* create a thread for each lcore */
- ret = rte_thread_create(&lcore_config[i].thread_id, NULL,
+ ret = rte_thread_create(&runtime_state->lcore_cfg[i].thread_id, NULL,
eal_thread_loop, (void *)(uintptr_t)i);
if (ret != 0)
rte_panic("Cannot create thread\n");
@@ -688,10 +685,10 @@ rte_eal_init(int argc, char **argv)
if (ret >= RTE_THREAD_NAME_SIZE)
EAL_LOG(INFO, "Worker thread name %s truncated", thread_name);
- rte_thread_set_name(lcore_config[i].thread_id, thread_name);
+ rte_thread_set_name(runtime_state->lcore_cfg[i].thread_id, thread_name);
- ret = rte_thread_set_affinity_by_id(lcore_config[i].thread_id,
- &lcore_config[i].cpuset);
+ ret = rte_thread_set_affinity_by_id(runtime_state->lcore_cfg[i].thread_id,
+ &runtime_state->lcore_cfg[i].cpuset);
if (ret != 0)
rte_panic("Cannot set affinity\n");
}
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index ffe930155a..9ef4b4e6f5 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -72,9 +72,6 @@ static struct flock wr_lock = {
.l_len = RTE_SIZEOF_FIELD(struct rte_mem_config, memsegs),
};
-/* internal configuration (per-core) */
-struct lcore_config lcore_config[RTE_MAX_LCORE];
-
/* used by rte_rdtsc() */
RTE_EXPORT_SYMBOL(rte_cycles_vmware_tsc_map)
int rte_cycles_vmware_tsc_map;
@@ -519,6 +516,7 @@ eal_worker_thread_create(unsigned int lcore_id)
size_t stack_size;
int ret = -1;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
stack_size = user_cfg->huge_worker_stack_size;
if (stack_size != 0) {
@@ -545,7 +543,7 @@ eal_worker_thread_create(unsigned int lcore_id)
}
}
- if (pthread_create((pthread_t *)&lcore_config[lcore_id].thread_id.opaque_id,
+ if (pthread_create((pthread_t *)&runtime_state->lcore_cfg[lcore_id].thread_id.opaque_id,
attrp, eal_worker_thread_loop, (void *)(uintptr_t)lcore_id) == 0)
ret = 0;
@@ -570,7 +568,7 @@ rte_eal_init(int argc, char **argv)
bool phys_addrs;
const struct rte_config *config = rte_eal_get_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* first check if we have been run before */
if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
@@ -825,13 +823,13 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (rte_thread_set_affinity_by_id(rte_thread_self(),
- &lcore_config[config->main_lcore].cpuset) != 0) {
+ &runtime_state->lcore_cfg[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
goto err_out;
}
__rte_thread_init(config->main_lcore,
- &lcore_config[config->main_lcore].cpuset);
+ &runtime_state->lcore_cfg[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
EAL_LOG(DEBUG, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])",
@@ -844,12 +842,12 @@ rte_eal_init(int argc, char **argv)
* create communication pipes between main thread
* and children
*/
- if (pipe(lcore_config[i].pipe_main2worker) < 0)
+ if (pipe(runtime_state->lcore_cfg[i].pipe_main2worker) < 0)
rte_panic("Cannot create pipe\n");
- if (pipe(lcore_config[i].pipe_worker2main) < 0)
+ if (pipe(runtime_state->lcore_cfg[i].pipe_worker2main) < 0)
rte_panic("Cannot create pipe\n");
- lcore_config[i].state = WAIT;
+ runtime_state->lcore_cfg[i].state = WAIT;
/* create a thread for each lcore */
ret = eal_worker_thread_create(i);
@@ -861,10 +859,10 @@ rte_eal_init(int argc, char **argv)
if (ret >= RTE_THREAD_NAME_SIZE)
EAL_LOG(INFO, "Worker thread name %s truncated", thread_name);
- rte_thread_set_name(lcore_config[i].thread_id, thread_name);
+ rte_thread_set_name(runtime_state->lcore_cfg[i].thread_id, thread_name);
- ret = rte_thread_set_affinity_by_id(lcore_config[i].thread_id,
- &lcore_config[i].cpuset);
+ ret = rte_thread_set_affinity_by_id(runtime_state->lcore_cfg[i].thread_id,
+ &runtime_state->lcore_cfg[i].cpuset);
if (ret != 0)
rte_panic("Cannot set affinity\n");
}
diff --git a/lib/eal/unix/eal_unix_thread.c b/lib/eal/unix/eal_unix_thread.c
index ef6cbff0ee..1555078f96 100644
--- a/lib/eal/unix/eal_unix_thread.c
+++ b/lib/eal/unix/eal_unix_thread.c
@@ -12,8 +12,9 @@
int
eal_thread_wake_worker(unsigned int worker_id)
{
- int m2w = lcore_config[worker_id].pipe_main2worker[1];
- int w2m = lcore_config[worker_id].pipe_worker2main[0];
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ int m2w = runtime_state->lcore_cfg[worker_id].pipe_main2worker[1];
+ int w2m = runtime_state->lcore_cfg[worker_id].pipe_worker2main[0];
char c = 0;
int n;
@@ -35,11 +36,12 @@ void
eal_thread_wait_command(void)
{
unsigned int lcore_id = rte_lcore_id();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int m2w;
char c;
int n;
- m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ m2w = runtime_state->lcore_cfg[lcore_id].pipe_main2worker[0];
do {
n = read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
@@ -51,11 +53,12 @@ void
eal_thread_ack_command(void)
{
unsigned int lcore_id = rte_lcore_id();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
char c = 0;
int w2m;
int n;
- w2m = lcore_config[lcore_id].pipe_worker2main[1];
+ w2m = runtime_state->lcore_cfg[lcore_id].pipe_worker2main[1];
do {
n = write(w2m, &c, 1);
} while (n == 0 || (n < 0 && errno == EINTR));
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 6e40c3d6d3..988352f867 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -39,9 +39,6 @@
*/
static int mem_cfg_fd = -1;
-/* internal configuration (per-core) */
-struct lcore_config lcore_config[RTE_MAX_LCORE];
-
/* Detect if we are a primary or a secondary process */
enum rte_proc_type_t
eal_proc_type_detect(void)
@@ -159,6 +156,7 @@ rte_eal_init(int argc, char **argv)
int i, fctret, bscan;
const struct rte_config *config = rte_eal_get_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
bool has_phys_addr;
enum rte_iova_mode iova_mode;
int ret;
@@ -342,13 +340,13 @@ rte_eal_init(int argc, char **argv)
eal_rand_init();
if (rte_thread_set_affinity_by_id(rte_thread_self(),
- &lcore_config[config->main_lcore].cpuset) != 0) {
+ &runtime_state->lcore_cfg[config->main_lcore].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
goto err_out;
}
__rte_thread_init(config->main_lcore,
- &lcore_config[config->main_lcore].cpuset);
+ &runtime_state->lcore_cfg[config->main_lcore].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
EAL_LOG(DEBUG, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])",
@@ -361,17 +359,17 @@ rte_eal_init(int argc, char **argv)
* create communication pipes between main thread
* and children
*/
- if (_pipe(lcore_config[i].pipe_main2worker,
+ if (_pipe(runtime_state->lcore_cfg[i].pipe_main2worker,
sizeof(char), _O_BINARY) < 0)
rte_panic("Cannot create pipe\n");
- if (_pipe(lcore_config[i].pipe_worker2main,
+ if (_pipe(runtime_state->lcore_cfg[i].pipe_worker2main,
sizeof(char), _O_BINARY) < 0)
rte_panic("Cannot create pipe\n");
- lcore_config[i].state = WAIT;
+ runtime_state->lcore_cfg[i].state = WAIT;
/* create a thread for each lcore */
- if (rte_thread_create(&lcore_config[i].thread_id, NULL,
+ if (rte_thread_create(&runtime_state->lcore_cfg[i].thread_id, NULL,
eal_thread_loop, (void *)(uintptr_t)i) != 0)
rte_panic("Cannot create thread\n");
@@ -380,10 +378,10 @@ rte_eal_init(int argc, char **argv)
if (ret >= RTE_THREAD_NAME_SIZE)
EAL_LOG(INFO, "Worker thread name %s truncated", thread_name);
- rte_thread_set_name(lcore_config[i].thread_id, thread_name);
+ rte_thread_set_name(runtime_state->lcore_cfg[i].thread_id, thread_name);
- ret = rte_thread_set_affinity_by_id(lcore_config[i].thread_id,
- &lcore_config[i].cpuset);
+ ret = rte_thread_set_affinity_by_id(runtime_state->lcore_cfg[i].thread_id,
+ &runtime_state->lcore_cfg[i].cpuset);
if (ret != 0)
EAL_LOG(DEBUG, "Cannot set affinity");
}
diff --git a/lib/eal/windows/eal_thread.c b/lib/eal/windows/eal_thread.c
index 3eeb94a589..7dbba48ecb 100644
--- a/lib/eal/windows/eal_thread.c
+++ b/lib/eal/windows/eal_thread.c
@@ -20,8 +20,9 @@
int
eal_thread_wake_worker(unsigned int worker_id)
{
- int m2w = lcore_config[worker_id].pipe_main2worker[1];
- int w2m = lcore_config[worker_id].pipe_worker2main[0];
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ int m2w = runtime_state->lcore_cfg[worker_id].pipe_main2worker[1];
+ int w2m = runtime_state->lcore_cfg[worker_id].pipe_worker2main[0];
char c = 0;
int n;
@@ -43,11 +44,12 @@ void
eal_thread_wait_command(void)
{
unsigned int lcore_id = rte_lcore_id();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int m2w;
char c;
int n;
- m2w = lcore_config[lcore_id].pipe_main2worker[0];
+ m2w = runtime_state->lcore_cfg[lcore_id].pipe_main2worker[0];
do {
n = _read(m2w, &c, 1);
} while (n < 0 && errno == EINTR);
@@ -59,11 +61,12 @@ void
eal_thread_ack_command(void)
{
unsigned int lcore_id = rte_lcore_id();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
char c = 0;
int w2m;
int n;
- w2m = lcore_config[lcore_id].pipe_worker2main[1];
+ w2m = runtime_state->lcore_cfg[lcore_id].pipe_worker2main[1];
do {
n = _write(w2m, &c, 1);
} while (n == 0 || (n < 0 && errno == EINTR));
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 14/44] eal: cleanup CPU init function
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (12 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 13/44] eal: store lcore configuration in runtime data Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 15/44] eal: move numa node information to platform info struct Bruce Richardson
` (31 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The CPU init function did extra work zeroing the runtime config for
lcores, something that was done a second time at the argument-parsing
stage when preparing to set up the correct CPU affinities for each
lcore. Therefore we can remove that unnecessary initialization, and
have the function modify only "platform_info", making no changes to
"runtime_state".
In the process we can make some other cleanups too:
* remove the limit on the lcore_to_socket_id array, and dynamically
allocate it to match the number of present cores.
* with the runtime lcore init gone, we can move the assignment to
lcore_to_socket_id into the first loop, allowing us to remove the
second loop entirely.
* the log message about "skipping" lcores was incorrect for modern DPDK,
since we no longer have a fixed 1:1 mapping between physical core
numbers and lcore ids. Therefore just report which cores are detected
on which sockets.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_lcore.c | 83 +++++++------------------------
1 file changed, 18 insertions(+), 65 deletions(-)
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index ca5106c623..808fffbb6c 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -167,18 +167,9 @@ socket_id_cmp(const void *a, const void *b)
int
rte_eal_cpu_init(void)
{
- /* pointer to global configuration */
struct rte_config *config = rte_eal_get_configuration();
struct eal_platform_info *platform_info = eal_get_platform_info();
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- unsigned lcore_id;
- unsigned count = 0;
- unsigned int socket_id, prev_socket_id;
-#if CPU_SETSIZE > RTE_MAX_LCORE
- int lcore_to_socket_id[CPU_SETSIZE] = {0};
-#else
- int lcore_to_socket_id[RTE_MAX_LCORE] = {0};
-#endif
+ int *lcore_to_socket_id;
/* allocate cpu_info for all CPUs visible to the OS */
platform_info->cpu_count = eal_cpu_max();
@@ -188,6 +179,12 @@ rte_eal_cpu_init(void)
EAL_LOG(ERR, "Cannot allocate cpu_info array");
return -1;
}
+ lcore_to_socket_id = calloc(platform_info->cpu_count, sizeof(*lcore_to_socket_id));
+ if (lcore_to_socket_id == NULL) {
+ EAL_LOG(ERR, "Cannot allocate lcore_to_socket_id array");
+ free(platform_info->cpu_info);
+ return -1;
+ }
/* populate cpu_info with hardware topology for all detected CPUs */
for (size_t cpu_id = 0; cpu_id < platform_info->cpu_count; cpu_id++) {
@@ -196,68 +193,23 @@ rte_eal_cpu_init(void)
platform_info->cpu_info[cpu_id].detected = true;
platform_info->cpu_info[cpu_id].numa_id = eal_cpu_socket_id(cpu_id);
platform_info->cpu_info[cpu_id].core_id = eal_cpu_core_id(cpu_id);
- }
+ /* store numa id for later processing to determine all unique numa nodes */
+ lcore_to_socket_id[cpu_id] = platform_info->cpu_info[cpu_id].numa_id;
- /*
- * Parse the maximum set of logical cores, detect the subset of running
- * ones and enable them by default.
- */
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- runtime_state->lcore_cfg[lcore_id].core_index = count;
-
- /* init cpuset for per lcore config */
- CPU_ZERO(&runtime_state->lcore_cfg[lcore_id].cpuset);
- runtime_state->lcore_cfg[lcore_id].first_cpu = UINT16_MAX;
-
- if (eal_cpu_detected(lcore_id) == 0) {
- config->lcore_role[lcore_id] = ROLE_OFF;
- runtime_state->lcore_cfg[lcore_id].core_index = -1;
- continue;
- }
-
- /* find socket first */
- socket_id = platform_info->cpu_info[lcore_id].numa_id;
- lcore_to_socket_id[lcore_id] = socket_id;
-
- /* By default, lcore 1:1 map to cpu id */
- CPU_SET(lcore_id, &runtime_state->lcore_cfg[lcore_id].cpuset);
- runtime_state->lcore_cfg[lcore_id].first_cpu = lcore_id;
-
- /* By default, each detected core is enabled */
- config->lcore_role[lcore_id] = ROLE_RTE;
- EAL_LOG(DEBUG, "Detected lcore %u as "
- "core %u on NUMA node %u",
- lcore_id,
- platform_info->cpu_info[lcore_id].core_id,
- platform_info->cpu_info[lcore_id].numa_id);
- count++;
+ EAL_LOG(DEBUG, "Detected CPU %zu as core %u on NUMA node %u",
+ cpu_id,
+ platform_info->cpu_info[cpu_id].core_id,
+ platform_info->cpu_info[cpu_id].numa_id);
}
- for (; lcore_id < CPU_SETSIZE; lcore_id++) {
- if (eal_cpu_detected(lcore_id) == 0)
- continue;
- if (unlikely(lcore_id >= platform_info->cpu_count))
- break;
- lcore_to_socket_id[lcore_id] = platform_info->cpu_info[lcore_id].numa_id;
- EAL_LOG(DEBUG, "Skipped lcore %u as core %u on NUMA node %u",
- lcore_id, platform_info->cpu_info[lcore_id].core_id,
- platform_info->cpu_info[lcore_id].numa_id);
- }
-
- /* Set the count of enabled logical cores of the EAL configuration */
- config->lcore_count = count;
- EAL_LOG(DEBUG,
- "Maximum logical cores by configuration: %u",
- RTE_MAX_LCORE);
- EAL_LOG(INFO, "Detected CPU lcores: %u", config->lcore_count);
/* sort all socket id's in ascending order */
- qsort(lcore_to_socket_id, RTE_DIM(lcore_to_socket_id),
+ qsort(lcore_to_socket_id, platform_info->cpu_count,
sizeof(lcore_to_socket_id[0]), socket_id_cmp);
- prev_socket_id = -1;
+ int prev_socket_id = -1;
config->numa_node_count = 0;
- for (lcore_id = 0; lcore_id < RTE_DIM(lcore_to_socket_id); lcore_id++) {
- socket_id = lcore_to_socket_id[lcore_id];
+ for (size_t cpu_id = 0; cpu_id < platform_info->cpu_count; cpu_id++) {
+ int socket_id = lcore_to_socket_id[cpu_id];
if (socket_id != prev_socket_id)
config->numa_nodes[config->numa_node_count++] = socket_id;
prev_socket_id = socket_id;
@@ -266,6 +218,7 @@ rte_eal_cpu_init(void)
}
EAL_LOG(INFO, "Detected NUMA nodes: %u", config->numa_node_count);
+ free(lcore_to_socket_id);
return 0;
}
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 15/44] eal: move numa node information to platform info struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (13 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 14/44] eal: cleanup CPU init function Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 16/44] eal: move lcore role and count to runtime state Bruce Richardson
` (30 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The NUMA node information for the platform is stored in rte_config but
belongs more in the platform info struct. Move the data to that
location. In the process, remove the hard-coded RTE_MAX_NUMA_NODES
limit on the array; at least for platform info, we can record details
about all NUMA nodes, however many there are.
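The "allocate worst-case, then shrink" idiom this patch uses (and the reason the realloc-failure path keeps the original pointer) can be sketched as below. `shrink_or_keep` is an illustrative name, not a DPDK function; it assumes `used > 0`, since `realloc(p, 0)` may free the block and return NULL:

```c
#include <stdint.h>
#include <stdlib.h>

/* Shrink an over-allocated array to its used size. On realloc failure
 * the original allocation is larger than needed but still valid, so we
 * simply keep it rather than treating the shrink as a fatal error.
 * Caller must guarantee used > 0.
 */
static uint32_t *
shrink_or_keep(uint32_t *arr, size_t used)
{
	uint32_t *tmp = realloc(arr, used * sizeof(*arr));
	return tmp != NULL ? tmp : arr;
}
```

A caller would overwrite its pointer with the return value, e.g. `nodes = shrink_or_keep(nodes, count);` — either way the returned pointer owns the data and is eventually passed to free().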
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_lcore.c | 45 +++++++++++++++++++++----------
lib/eal/common/eal_internal_cfg.h | 2 ++
lib/eal/common/eal_private.h | 2 --
3 files changed, 33 insertions(+), 16 deletions(-)
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 808fffbb6c..01cbaf572b 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -167,7 +167,6 @@ socket_id_cmp(const void *a, const void *b)
int
rte_eal_cpu_init(void)
{
- struct rte_config *config = rte_eal_get_configuration();
struct eal_platform_info *platform_info = eal_get_platform_info();
int *lcore_to_socket_id;
@@ -206,19 +205,37 @@ rte_eal_cpu_init(void)
qsort(lcore_to_socket_id, platform_info->cpu_count,
sizeof(lcore_to_socket_id[0]), socket_id_cmp);
+ /* allocate worst-case (one NUMA node per CPU), then dedup and shrink */
+ platform_info->numa_nodes = malloc(platform_info->cpu_count *
+ sizeof(*platform_info->numa_nodes));
+ if (platform_info->numa_nodes == NULL) {
+ EAL_LOG(ERR, "Cannot allocate numa_nodes array");
+ free(lcore_to_socket_id);
+ free(platform_info->cpu_info);
+ return -1;
+ }
+
+ uint32_t numa_node_count = 0;
int prev_socket_id = -1;
- config->numa_node_count = 0;
for (size_t cpu_id = 0; cpu_id < platform_info->cpu_count; cpu_id++) {
int socket_id = lcore_to_socket_id[cpu_id];
- if (socket_id != prev_socket_id)
- config->numa_nodes[config->numa_node_count++] = socket_id;
- prev_socket_id = socket_id;
- if (config->numa_node_count >= RTE_MAX_NUMA_NODES)
- break;
+ if (socket_id != prev_socket_id) {
+ platform_info->numa_nodes[numa_node_count++] = socket_id;
+ prev_socket_id = socket_id;
+ }
}
- EAL_LOG(INFO, "Detected NUMA nodes: %u", config->numa_node_count);
-
+ platform_info->numa_node_count = numa_node_count;
free(lcore_to_socket_id);
+
+ /* shrink to the actual number of unique NUMA nodes found,
+ * realloc may fail, in that case we keep the original allocation
+ */
+ uint32_t *tmp = realloc(platform_info->numa_nodes,
+ numa_node_count * sizeof(*platform_info->numa_nodes));
+ if (tmp != NULL)
+ platform_info->numa_nodes = tmp;
+ EAL_LOG(INFO, "Detected NUMA nodes: %u", platform_info->numa_node_count);
+
return 0;
}
@@ -226,20 +243,20 @@ RTE_EXPORT_SYMBOL(rte_socket_count)
unsigned int
rte_socket_count(void)
{
- const struct rte_config *config = rte_eal_get_configuration();
- return config->numa_node_count;
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ return platform_info->numa_node_count;
}
RTE_EXPORT_SYMBOL(rte_socket_id_by_idx)
int
rte_socket_id_by_idx(unsigned int idx)
{
- const struct rte_config *config = rte_eal_get_configuration();
- if (idx >= config->numa_node_count) {
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ if (idx >= platform_info->numa_node_count) {
rte_errno = EINVAL;
return -1;
}
- return config->numa_nodes[idx];
+ return platform_info->numa_nodes[idx];
}
static rte_rwlock_t lcore_lock = RTE_RWLOCK_INITIALIZER;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index ef4bcfc01a..17e96ab634 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -105,6 +105,8 @@ struct eal_cpu_info {
struct eal_platform_info {
size_t cpu_count; /**< number of entries in cpu_info[] */
struct eal_cpu_info *cpu_info; /**< per-physical-CPU hardware facts */
+ uint32_t numa_node_count; /**< number of detected NUMA nodes */
+ uint32_t *numa_nodes; /**< sorted list of detected NUMA node IDs, heap-allocated */
uint8_t num_hugepage_sizes; /**< how many sizes on this system */
struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
};
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index bd9c9f2b70..86500cbc5b 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -23,8 +23,6 @@
struct rte_config {
uint32_t main_lcore; /**< Id of the main lcore */
uint32_t lcore_count; /**< Number of available logical cores. */
- uint32_t numa_node_count; /**< Number of detected NUMA nodes. */
- uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
uint32_t service_lcore_count;/**< Number of available service cores. */
enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 16/44] eal: move lcore role and count to runtime state
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (14 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 15/44] eal: move numa node information to platform info struct Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 17/44] eal: make lcore role a field in lcore config struct Bruce Richardson
` (29 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Move the lcore role and lcore count values from rte_config struct to the
runtime state struct. Update all call sites. The service_lcore_count
value was set but never actually used, so just remove it from rte_config
rather than moving it to the new structure.
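The access pattern used throughout this series — a file-private state struct reachable only through a getter such as eal_get_runtime_state() — can be sketched as below. All names and the MAX_LCORE value are illustrative stand-ins, not the actual DPDK definitions:

```c
#include <assert.h>

#define MAX_LCORE 4 /* illustrative; DPDK uses RTE_MAX_LCORE */

enum lcore_role { ROLE_OFF, ROLE_RTE, ROLE_SERVICE };

struct runtime_state {
	unsigned int lcore_count;
	enum lcore_role lcore_role[MAX_LCORE];
};

/* single private instance; zero-initialized, so every role starts as
 * ROLE_OFF (enumerator 0) and is reachable only through the accessor
 */
static struct runtime_state runtime_state;

static struct runtime_state *
get_runtime_state(void)
{
	return &runtime_state;
}
```

Keeping the instance static and handing out a pointer via one function means every call site updates the same state, and later refactors (moving a field between structs, as these patches do) only need to change the call sites, not the storage or linkage.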
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_lcore.c | 54 ++++++++++++++---------------
lib/eal/common/eal_common_options.c | 38 ++++++++++----------
lib/eal/common/eal_internal_cfg.h | 2 ++
lib/eal/common/eal_private.h | 3 --
lib/eal/common/rte_service.c | 14 ++++----
5 files changed, 53 insertions(+), 58 deletions(-)
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 01cbaf572b..457c4c91a0 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -28,7 +28,7 @@ unsigned int rte_get_main_lcore(void)
RTE_EXPORT_SYMBOL(rte_lcore_count)
unsigned int rte_lcore_count(void)
{
- return rte_eal_get_configuration()->lcore_count;
+ return eal_get_runtime_state()->lcore_count;
}
RTE_EXPORT_SYMBOL(rte_lcore_index)
@@ -84,33 +84,33 @@ RTE_EXPORT_SYMBOL(rte_eal_lcore_role)
enum rte_lcore_role_t
rte_eal_lcore_role(unsigned int lcore_id)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
if (lcore_id >= RTE_MAX_LCORE)
return ROLE_OFF;
- return cfg->lcore_role[lcore_id];
+ return runtime_state->lcore_role[lcore_id];
}
RTE_EXPORT_SYMBOL(rte_lcore_has_role)
int
rte_lcore_has_role(unsigned int lcore_id, enum rte_lcore_role_t role)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
if (lcore_id >= RTE_MAX_LCORE)
return 0;
- return cfg->lcore_role[lcore_id] == role;
+ return runtime_state->lcore_role[lcore_id] == role;
}
RTE_EXPORT_SYMBOL(rte_lcore_is_enabled)
int rte_lcore_is_enabled(unsigned int lcore_id)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
if (lcore_id >= RTE_MAX_LCORE)
return 0;
- return cfg->lcore_role[lcore_id] == ROLE_RTE;
+ return runtime_state->lcore_role[lcore_id] == ROLE_RTE;
}
RTE_EXPORT_SYMBOL(rte_get_next_lcore)
@@ -302,7 +302,7 @@ void *
rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
rte_lcore_uninit_cb uninit, void *arg)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct lcore_callback *callback;
unsigned int lcore_id;
@@ -322,7 +322,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
if (callback->init == NULL)
goto no_init;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (cfg->lcore_role[lcore_id] == ROLE_OFF)
+ if (runtime_state->lcore_role[lcore_id] == ROLE_OFF)
continue;
if (callback_init(callback, lcore_id) == 0)
continue;
@@ -330,7 +330,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
* previous lcore.
*/
while (lcore_id-- != 0) {
- if (cfg->lcore_role[lcore_id] == ROLE_OFF)
+ if (runtime_state->lcore_role[lcore_id] == ROLE_OFF)
continue;
callback_uninit(callback, lcore_id);
}
@@ -352,7 +352,7 @@ RTE_EXPORT_SYMBOL(rte_lcore_callback_unregister)
void
rte_lcore_callback_unregister(void *handle)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct lcore_callback *callback = handle;
unsigned int lcore_id;
@@ -362,7 +362,7 @@ rte_lcore_callback_unregister(void *handle)
if (callback->uninit == NULL)
goto no_uninit;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (cfg->lcore_role[lcore_id] == ROLE_OFF)
+ if (runtime_state->lcore_role[lcore_id] == ROLE_OFF)
continue;
callback_uninit(callback, lcore_id);
}
@@ -377,17 +377,17 @@ rte_lcore_callback_unregister(void *handle)
unsigned int
eal_lcore_non_eal_allocate(void)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct lcore_callback *callback;
struct lcore_callback *prev;
unsigned int lcore_id;
rte_rwlock_write_lock(&lcore_lock);
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (cfg->lcore_role[lcore_id] != ROLE_OFF)
+ if (runtime_state->lcore_role[lcore_id] != ROLE_OFF)
continue;
- cfg->lcore_role[lcore_id] = ROLE_NON_EAL;
- cfg->lcore_count++;
+ runtime_state->lcore_role[lcore_id] = ROLE_NON_EAL;
+ runtime_state->lcore_count++;
break;
}
if (lcore_id == RTE_MAX_LCORE) {
@@ -407,8 +407,8 @@ eal_lcore_non_eal_allocate(void)
}
EAL_LOG(DEBUG, "Initialization refused for lcore %u.",
lcore_id);
- cfg->lcore_role[lcore_id] = ROLE_OFF;
- cfg->lcore_count--;
+ runtime_state->lcore_role[lcore_id] = ROLE_OFF;
+ runtime_state->lcore_count--;
lcore_id = RTE_MAX_LCORE;
goto out;
}
@@ -420,16 +420,16 @@ eal_lcore_non_eal_allocate(void)
void
eal_lcore_non_eal_release(unsigned int lcore_id)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct lcore_callback *callback;
rte_rwlock_write_lock(&lcore_lock);
- if (cfg->lcore_role[lcore_id] != ROLE_NON_EAL)
+ if (runtime_state->lcore_role[lcore_id] != ROLE_NON_EAL)
goto out;
TAILQ_FOREACH(callback, &lcore_callbacks, next)
callback_uninit(callback, lcore_id);
- cfg->lcore_role[lcore_id] = ROLE_OFF;
- cfg->lcore_count--;
+ runtime_state->lcore_role[lcore_id] = ROLE_OFF;
+ runtime_state->lcore_count--;
out:
rte_rwlock_write_unlock(&lcore_lock);
}
@@ -438,13 +438,13 @@ RTE_EXPORT_SYMBOL(rte_lcore_iterate)
int
rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
unsigned int lcore_id;
int ret = 0;
rte_rwlock_read_lock(&lcore_lock);
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (cfg->lcore_role[lcore_id] == ROLE_OFF)
+ if (runtime_state->lcore_role[lcore_id] == ROLE_OFF)
continue;
ret = cb(lcore_id, arg);
if (ret != 0)
@@ -489,7 +489,6 @@ static int
lcore_dump_cb(unsigned int lcore_id, void *arg)
{
const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- struct rte_config *cfg = rte_eal_get_configuration();
char *cpuset;
struct rte_lcore_usage usage;
rte_lcore_usage_cb usage_cb;
@@ -510,7 +509,7 @@ lcore_dump_cb(unsigned int lcore_id, void *arg)
cpuset = eal_cpuset_to_str(&runtime_state->lcore_cfg[lcore_id].cpuset);
fprintf(f, "lcore %u, socket %u, role %s, cpuset %s\n", lcore_id,
rte_lcore_to_socket_id(lcore_id),
- lcore_role_str(cfg->lcore_role[lcore_id]),
+ lcore_role_str(runtime_state->lcore_role[lcore_id]),
cpuset != NULL ? cpuset : "<unknown>");
free(cpuset);
free(usage_str);
@@ -563,7 +562,6 @@ static int
lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
{
const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- struct rte_config *cfg = rte_eal_get_configuration();
struct lcore_telemetry_info *info = arg;
char ratio_str[RTE_TEL_MAX_STRING_LEN];
struct rte_lcore_usage usage;
@@ -577,7 +575,7 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
rte_tel_data_start_dict(info->d);
rte_tel_data_add_dict_int(info->d, "lcore_id", lcore_id);
rte_tel_data_add_dict_int(info->d, "socket", rte_lcore_to_socket_id(lcore_id));
- rte_tel_data_add_dict_string(info->d, "role", lcore_role_str(cfg->lcore_role[lcore_id]));
+ rte_tel_data_add_dict_string(info->d, "role", lcore_role_str(runtime_state->lcore_role[lcore_id]));
cpuset = rte_tel_data_alloc();
if (cpuset == NULL)
return -ENOMEM;
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 02c40e5ce1..3535e467c6 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -841,7 +841,7 @@ static int xdigit2val(unsigned char c)
static int
eal_parse_service_coremask(const char *coremask)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int i, j, idx = 0;
unsigned int count = 0;
char c;
@@ -885,10 +885,10 @@ eal_parse_service_coremask(const char *coremask)
return -1;
}
- if (cfg->lcore_role[idx] == ROLE_RTE)
+ if (runtime_state->lcore_role[idx] == ROLE_RTE)
taken_lcore_count++;
- cfg->lcore_role[idx] = ROLE_SERVICE;
+ runtime_state->lcore_role[idx] = ROLE_SERVICE;
count++;
}
}
@@ -907,14 +907,12 @@ eal_parse_service_coremask(const char *coremask)
"Please ensure -c or -l includes service cores");
}
- cfg->service_lcore_count = count;
return 0;
}
static int
update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
{
- struct rte_config *cfg = rte_eal_get_configuration();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
unsigned int lcore_id = remap_base;
unsigned int count = 0;
@@ -923,7 +921,7 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
/* set everything to disabled first, then set up values */
for (i = 0; i < RTE_MAX_LCORE; i++) {
- cfg->lcore_role[i] = ROLE_OFF;
+ runtime_state->lcore_role[i] = ROLE_OFF;
runtime_state->lcore_cfg[i].core_index = -1;
}
@@ -951,7 +949,7 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
continue;
}
- cfg->lcore_role[lcore_id] = ROLE_RTE;
+ runtime_state->lcore_role[lcore_id] = ROLE_RTE;
runtime_state->lcore_cfg[lcore_id].core_index = count;
CPU_ZERO(&runtime_state->lcore_cfg[lcore_id].cpuset);
CPU_SET(i, &runtime_state->lcore_cfg[lcore_id].cpuset);
@@ -966,7 +964,7 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
ret = -1;
}
if (!ret)
- cfg->lcore_count = count;
+ runtime_state->lcore_count = count;
return ret;
}
@@ -1085,7 +1083,7 @@ rte_eal_parse_coremask(const char *coremask, rte_cpuset_t *cpuset, bool limit_ra
static int
eal_parse_service_corelist(const char *corelist)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int i;
unsigned count = 0;
char *end = NULL;
@@ -1124,11 +1122,11 @@ eal_parse_service_corelist(const char *corelist)
if (min == RTE_MAX_LCORE)
min = idx;
for (idx = min; idx <= max; idx++) {
- if (cfg->lcore_role[idx] != ROLE_SERVICE) {
- if (cfg->lcore_role[idx] == ROLE_RTE)
+ if (runtime_state->lcore_role[idx] != ROLE_SERVICE) {
+ if (runtime_state->lcore_role[idx] == ROLE_RTE)
taken_lcore_count++;
- cfg->lcore_role[idx] = ROLE_SERVICE;
+ runtime_state->lcore_role[idx] = ROLE_SERVICE;
count++;
}
}
@@ -1151,7 +1149,7 @@ eal_parse_service_corelist(const char *corelist)
rte_cpuset_t service_cpuset;
CPU_ZERO(&service_cpuset);
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (cfg->lcore_role[i] == ROLE_SERVICE)
+ if (runtime_state->lcore_role[i] == ROLE_SERVICE)
CPU_SET(i, &service_cpuset);
}
if (CPU_COUNT(&service_cpuset) > 0) {
@@ -1171,6 +1169,7 @@ eal_parse_main_lcore(const char *arg)
{
char *parsing_end;
struct rte_config *cfg = rte_eal_get_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
errno = 0;
cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
@@ -1180,12 +1179,12 @@ eal_parse_main_lcore(const char *arg)
return -1;
/* ensure main core is not used as service core */
- if (cfg->lcore_role[cfg->main_lcore] == ROLE_SERVICE) {
+ if (runtime_state->lcore_role[cfg->main_lcore] == ROLE_SERVICE) {
EAL_LOG(ERR, "Error: Main lcore is used as a service core");
return -1;
}
/* check that we have the core recorded in the core list */
- if (cfg->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+ if (runtime_state->lcore_role[cfg->main_lcore] != ROLE_RTE) {
EAL_LOG(ERR, "Error: Main lcore is not enabled for DPDK");
return -1;
}
@@ -1351,7 +1350,6 @@ check_cpuset(rte_cpuset_t *set)
static int
eal_parse_lcores(const char *lcores)
{
- struct rte_config *cfg = rte_eal_get_configuration();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
rte_cpuset_t lcore_set;
unsigned int set_count;
@@ -1375,7 +1373,7 @@ eal_parse_lcores(const char *lcores)
/* Reset lcore config */
for (idx = 0; idx < RTE_MAX_LCORE; idx++) {
- cfg->lcore_role[idx] = ROLE_OFF;
+ runtime_state->lcore_role[idx] = ROLE_OFF;
runtime_state->lcore_cfg[idx].core_index = -1;
CPU_ZERO(&runtime_state->lcore_cfg[idx].cpuset);
runtime_state->lcore_cfg[idx].first_cpu = UINT16_MAX;
@@ -1438,9 +1436,9 @@ eal_parse_lcores(const char *lcores)
continue;
set_count--;
- if (cfg->lcore_role[idx] != ROLE_RTE) {
+ if (runtime_state->lcore_role[idx] != ROLE_RTE) {
runtime_state->lcore_cfg[idx].core_index = count;
- cfg->lcore_role[idx] = ROLE_RTE;
+ runtime_state->lcore_role[idx] = ROLE_RTE;
count++;
}
@@ -1467,7 +1465,7 @@ eal_parse_lcores(const char *lcores)
if (count == 0)
goto err;
- cfg->lcore_count = count;
+ runtime_state->lcore_count = count;
ret = 0;
err:
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 17e96ab634..d46d3f59d3 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -137,6 +137,8 @@ struct eal_runtime_state {
rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
+ uint32_t lcore_count; /**< Number of active lcore IDs (role != ROLE_OFF). */
+ enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
struct lcore_cfg lcore_cfg[RTE_MAX_LCORE];
};
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 86500cbc5b..80fa49ce8b 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -22,9 +22,6 @@
*/
struct rte_config {
uint32_t main_lcore; /**< Id of the main lcore */
- uint32_t lcore_count; /**< Number of available logical cores. */
- uint32_t service_lcore_count;/**< Number of available service cores. */
- enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
/** Primary or secondary configuration */
enum rte_proc_type_t process_type;
diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index dbf4fe153b..36ef2d32a7 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -105,9 +105,10 @@ rte_service_init(void)
RTE_LCORE_VAR_ALLOC(lcore_states);
int i;
- struct rte_config *cfg = rte_eal_get_configuration();
+ const struct rte_config *cfg = rte_eal_get_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (cfg->lcore_role[i] == ROLE_SERVICE) {
+ if (runtime_state->lcore_role[i] == ROLE_SERVICE) {
if ((unsigned int)i == cfg->main_lcore)
continue;
rte_service_lcore_add(i);
@@ -709,10 +710,9 @@ rte_service_map_lcore_get(uint32_t id, uint32_t lcore)
static void
set_lcore_state(uint32_t lcore, int32_t state)
{
- /* mark core state in hugepage backed config */
- struct rte_config *cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct core_state *cs = RTE_LCORE_VAR_LCORE(lcore, lcore_states);
- cfg->lcore_role[lcore] = state;
+ runtime_state->lcore_role[lcore] = state;
/* update per-lcore optimized state tracking */
cs->is_service_core = (state == ROLE_SERVICE);
@@ -1101,7 +1101,7 @@ RTE_EXPORT_SYMBOL(rte_service_dump)
int32_t
rte_service_dump(FILE *f, uint32_t id)
{
- struct rte_config *cfg = rte_eal_get_configuration();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
uint32_t i;
int print_one = (id != UINT32_MAX);
@@ -1124,7 +1124,7 @@ rte_service_dump(FILE *f, uint32_t id)
fprintf(f, "Service Cores Summary\n");
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (cfg->lcore_role[i] != ROLE_SERVICE)
+ if (runtime_state->lcore_role[i] != ROLE_SERVICE)
continue;
service_dump_calls_per_lcore(f, i);
--
2.51.0
^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 17/44] eal: make lcore role a field in lcore config struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (15 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 16/44] eal: move lcore role and count to runtime state Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 18/44] eal: move main lcore setting to runtime " Bruce Richardson
` (28 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Rather than keeping lcore roles in a separate array, make the role a
field in the lcore_cfg struct, so that all per-lcore state lives in one
place.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_lcore.c | 28 ++++++++++++++--------------
lib/eal/common/eal_common_options.c | 26 +++++++++++++-------------
lib/eal/common/eal_internal_cfg.h | 2 +-
lib/eal/common/rte_service.c | 6 +++---
4 files changed, 31 insertions(+), 31 deletions(-)
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 457c4c91a0..1ec1dc080a 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -88,7 +88,7 @@ rte_eal_lcore_role(unsigned int lcore_id)
if (lcore_id >= RTE_MAX_LCORE)
return ROLE_OFF;
- return runtime_state->lcore_role[lcore_id];
+ return runtime_state->lcore_cfg[lcore_id].role;
}
RTE_EXPORT_SYMBOL(rte_lcore_has_role)
@@ -100,7 +100,7 @@ rte_lcore_has_role(unsigned int lcore_id, enum rte_lcore_role_t role)
if (lcore_id >= RTE_MAX_LCORE)
return 0;
- return runtime_state->lcore_role[lcore_id] == role;
+ return runtime_state->lcore_cfg[lcore_id].role == role;
}
RTE_EXPORT_SYMBOL(rte_lcore_is_enabled)
@@ -110,7 +110,7 @@ int rte_lcore_is_enabled(unsigned int lcore_id)
if (lcore_id >= RTE_MAX_LCORE)
return 0;
- return runtime_state->lcore_role[lcore_id] == ROLE_RTE;
+ return runtime_state->lcore_cfg[lcore_id].role == ROLE_RTE;
}
RTE_EXPORT_SYMBOL(rte_get_next_lcore)
@@ -322,7 +322,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
if (callback->init == NULL)
goto no_init;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (runtime_state->lcore_role[lcore_id] == ROLE_OFF)
+ if (runtime_state->lcore_cfg[lcore_id].role == ROLE_OFF)
continue;
if (callback_init(callback, lcore_id) == 0)
continue;
@@ -330,7 +330,7 @@ rte_lcore_callback_register(const char *name, rte_lcore_init_cb init,
* previous lcore.
*/
while (lcore_id-- != 0) {
- if (runtime_state->lcore_role[lcore_id] == ROLE_OFF)
+ if (runtime_state->lcore_cfg[lcore_id].role == ROLE_OFF)
continue;
callback_uninit(callback, lcore_id);
}
@@ -362,7 +362,7 @@ rte_lcore_callback_unregister(void *handle)
if (callback->uninit == NULL)
goto no_uninit;
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (runtime_state->lcore_role[lcore_id] == ROLE_OFF)
+ if (runtime_state->lcore_cfg[lcore_id].role == ROLE_OFF)
continue;
callback_uninit(callback, lcore_id);
}
@@ -384,9 +384,9 @@ eal_lcore_non_eal_allocate(void)
rte_rwlock_write_lock(&lcore_lock);
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (runtime_state->lcore_role[lcore_id] != ROLE_OFF)
+ if (runtime_state->lcore_cfg[lcore_id].role != ROLE_OFF)
continue;
- runtime_state->lcore_role[lcore_id] = ROLE_NON_EAL;
+ runtime_state->lcore_cfg[lcore_id].role = ROLE_NON_EAL;
runtime_state->lcore_count++;
break;
}
@@ -407,7 +407,7 @@ eal_lcore_non_eal_allocate(void)
}
EAL_LOG(DEBUG, "Initialization refused for lcore %u.",
lcore_id);
- runtime_state->lcore_role[lcore_id] = ROLE_OFF;
+ runtime_state->lcore_cfg[lcore_id].role = ROLE_OFF;
runtime_state->lcore_count--;
lcore_id = RTE_MAX_LCORE;
goto out;
@@ -424,11 +424,11 @@ eal_lcore_non_eal_release(unsigned int lcore_id)
struct lcore_callback *callback;
rte_rwlock_write_lock(&lcore_lock);
- if (runtime_state->lcore_role[lcore_id] != ROLE_NON_EAL)
+ if (runtime_state->lcore_cfg[lcore_id].role != ROLE_NON_EAL)
goto out;
TAILQ_FOREACH(callback, &lcore_callbacks, next)
callback_uninit(callback, lcore_id);
- runtime_state->lcore_role[lcore_id] = ROLE_OFF;
+ runtime_state->lcore_cfg[lcore_id].role = ROLE_OFF;
runtime_state->lcore_count--;
out:
rte_rwlock_write_unlock(&lcore_lock);
@@ -444,7 +444,7 @@ rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg)
rte_rwlock_read_lock(&lcore_lock);
for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (runtime_state->lcore_role[lcore_id] == ROLE_OFF)
+ if (runtime_state->lcore_cfg[lcore_id].role == ROLE_OFF)
continue;
ret = cb(lcore_id, arg);
if (ret != 0)
@@ -509,7 +509,7 @@ lcore_dump_cb(unsigned int lcore_id, void *arg)
cpuset = eal_cpuset_to_str(&runtime_state->lcore_cfg[lcore_id].cpuset);
fprintf(f, "lcore %u, socket %u, role %s, cpuset %s\n", lcore_id,
rte_lcore_to_socket_id(lcore_id),
- lcore_role_str(runtime_state->lcore_role[lcore_id]),
+ lcore_role_str(runtime_state->lcore_cfg[lcore_id].role),
cpuset != NULL ? cpuset : "<unknown>");
free(cpuset);
free(usage_str);
@@ -575,7 +575,7 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
rte_tel_data_start_dict(info->d);
rte_tel_data_add_dict_int(info->d, "lcore_id", lcore_id);
rte_tel_data_add_dict_int(info->d, "socket", rte_lcore_to_socket_id(lcore_id));
- rte_tel_data_add_dict_string(info->d, "role", lcore_role_str(runtime_state->lcore_role[lcore_id]));
+ rte_tel_data_add_dict_string(info->d, "role", lcore_role_str(runtime_state->lcore_cfg[lcore_id].role));
cpuset = rte_tel_data_alloc();
if (cpuset == NULL)
return -ENOMEM;
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 3535e467c6..83e0d986a5 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -885,10 +885,10 @@ eal_parse_service_coremask(const char *coremask)
return -1;
}
- if (runtime_state->lcore_role[idx] == ROLE_RTE)
+ if (runtime_state->lcore_cfg[idx].role == ROLE_RTE)
taken_lcore_count++;
- runtime_state->lcore_role[idx] = ROLE_SERVICE;
+ runtime_state->lcore_cfg[idx].role = ROLE_SERVICE;
count++;
}
}
@@ -921,7 +921,7 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
/* set everything to disabled first, then set up values */
for (i = 0; i < RTE_MAX_LCORE; i++) {
- runtime_state->lcore_role[i] = ROLE_OFF;
+ runtime_state->lcore_cfg[i].role = ROLE_OFF;
runtime_state->lcore_cfg[i].core_index = -1;
}
@@ -949,7 +949,7 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
continue;
}
- runtime_state->lcore_role[lcore_id] = ROLE_RTE;
+ runtime_state->lcore_cfg[lcore_id].role = ROLE_RTE;
runtime_state->lcore_cfg[lcore_id].core_index = count;
CPU_ZERO(&runtime_state->lcore_cfg[lcore_id].cpuset);
CPU_SET(i, &runtime_state->lcore_cfg[lcore_id].cpuset);
@@ -1122,11 +1122,11 @@ eal_parse_service_corelist(const char *corelist)
if (min == RTE_MAX_LCORE)
min = idx;
for (idx = min; idx <= max; idx++) {
- if (runtime_state->lcore_role[idx] != ROLE_SERVICE) {
- if (runtime_state->lcore_role[idx] == ROLE_RTE)
+ if (runtime_state->lcore_cfg[idx].role != ROLE_SERVICE) {
+ if (runtime_state->lcore_cfg[idx].role == ROLE_RTE)
taken_lcore_count++;
- runtime_state->lcore_role[idx] = ROLE_SERVICE;
+ runtime_state->lcore_cfg[idx].role = ROLE_SERVICE;
count++;
}
}
@@ -1149,7 +1149,7 @@ eal_parse_service_corelist(const char *corelist)
rte_cpuset_t service_cpuset;
CPU_ZERO(&service_cpuset);
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (runtime_state->lcore_role[i] == ROLE_SERVICE)
+ if (runtime_state->lcore_cfg[i].role == ROLE_SERVICE)
CPU_SET(i, &service_cpuset);
}
if (CPU_COUNT(&service_cpuset) > 0) {
@@ -1179,12 +1179,12 @@ eal_parse_main_lcore(const char *arg)
return -1;
/* ensure main core is not used as service core */
- if (runtime_state->lcore_role[cfg->main_lcore] == ROLE_SERVICE) {
+ if (runtime_state->lcore_cfg[cfg->main_lcore].role == ROLE_SERVICE) {
EAL_LOG(ERR, "Error: Main lcore is used as a service core");
return -1;
}
/* check that we have the core recorded in the core list */
- if (runtime_state->lcore_role[cfg->main_lcore] != ROLE_RTE) {
+ if (runtime_state->lcore_cfg[cfg->main_lcore].role != ROLE_RTE) {
EAL_LOG(ERR, "Error: Main lcore is not enabled for DPDK");
return -1;
}
@@ -1373,7 +1373,7 @@ eal_parse_lcores(const char *lcores)
/* Reset lcore config */
for (idx = 0; idx < RTE_MAX_LCORE; idx++) {
- runtime_state->lcore_role[idx] = ROLE_OFF;
+ runtime_state->lcore_cfg[idx].role = ROLE_OFF;
runtime_state->lcore_cfg[idx].core_index = -1;
CPU_ZERO(&runtime_state->lcore_cfg[idx].cpuset);
runtime_state->lcore_cfg[idx].first_cpu = UINT16_MAX;
@@ -1436,9 +1436,9 @@ eal_parse_lcores(const char *lcores)
continue;
set_count--;
- if (runtime_state->lcore_role[idx] != ROLE_RTE) {
+ if (runtime_state->lcore_cfg[idx].role != ROLE_RTE) {
runtime_state->lcore_cfg[idx].core_index = count;
- runtime_state->lcore_role[idx] = ROLE_RTE;
+ runtime_state->lcore_cfg[idx].role = ROLE_RTE;
count++;
}
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index d46d3f59d3..41a1437435 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -116,6 +116,7 @@ struct eal_platform_info {
*/
struct lcore_cfg {
int core_index; /**< relative index, starting from 0 */
+ enum rte_lcore_role_t role; /**< role assigned to this lcore */
rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
uint16_t first_cpu; /**< lowest CPU set in cpuset, UINT16_MAX if none */
/* Fields for executing code on a remote lcore */
@@ -138,7 +139,6 @@ struct eal_runtime_state {
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
uint32_t lcore_count; /**< Number of active lcore IDs (role != ROLE_OFF). */
- enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
struct lcore_cfg lcore_cfg[RTE_MAX_LCORE];
};
diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index 36ef2d32a7..e28e17f8d5 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -108,7 +108,7 @@ rte_service_init(void)
const struct rte_config *cfg = rte_eal_get_configuration();
const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (runtime_state->lcore_role[i] == ROLE_SERVICE) {
+ if (runtime_state->lcore_cfg[i].role == ROLE_SERVICE) {
if ((unsigned int)i == cfg->main_lcore)
continue;
rte_service_lcore_add(i);
@@ -712,7 +712,7 @@ set_lcore_state(uint32_t lcore, int32_t state)
{
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct core_state *cs = RTE_LCORE_VAR_LCORE(lcore, lcore_states);
- runtime_state->lcore_role[lcore] = state;
+ runtime_state->lcore_cfg[lcore].role = state;
/* update per-lcore optimized state tracking */
cs->is_service_core = (state == ROLE_SERVICE);
@@ -1124,7 +1124,7 @@ rte_service_dump(FILE *f, uint32_t id)
fprintf(f, "Service Cores Summary\n");
for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (runtime_state->lcore_role[i] != ROLE_SERVICE)
+ if (runtime_state->lcore_cfg[i].role != ROLE_SERVICE)
continue;
service_dump_calls_per_lcore(f, i);
--
2.51.0
^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 18/44] eal: move main lcore setting to runtime config struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (16 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 17/44] eal: make lcore role a field in lcore config struct Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 19/44] eal: move iova mode and process type to runtime cfg Bruce Richardson
` (27 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Move the main lcore setting from the rte_config struct to the new
runtime state structure. One additional wrinkle: since this is an
optional user setting, it also needs to be stored in eal_user_cfg when
specified. The difference is that the eal_user_cfg field holds the raw
value, which may be -1 if unspecified, while the runtime state value
must always be a valid lcore id.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_dynmem.c | 4 +---
lib/eal/common/eal_common_lcore.c | 2 +-
lib/eal/common/eal_common_options.c | 27 +++++++++++++++------------
lib/eal/common/eal_internal_cfg.h | 2 ++
lib/eal/common/eal_private.h | 1 -
lib/eal/common/rte_service.c | 3 +--
lib/eal/freebsd/eal.c | 12 +++++-------
lib/eal/linux/eal.c | 12 +++++-------
lib/eal/linux/eal_memory.c | 3 +--
lib/eal/windows/eal.c | 9 ++++-----
10 files changed, 35 insertions(+), 40 deletions(-)
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index c1c72499c4..674ea5ec42 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -437,11 +437,9 @@ eal_dynmem_calc_num_pages_per_socket(
total_size = user_cfg->memory;
for (socket = 0; socket < RTE_MAX_NUMA_NODES && total_size != 0;
socket++) {
- struct rte_config *cfg = rte_eal_get_configuration();
unsigned int main_lcore_socket;
- main_lcore_socket =
- rte_lcore_to_socket_id(cfg->main_lcore);
+ main_lcore_socket = rte_lcore_to_socket_id(rte_get_main_lcore());
if (main_lcore_socket != socket)
continue;
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 1ec1dc080a..bafcfe78d2 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -22,7 +22,7 @@
RTE_EXPORT_SYMBOL(rte_get_main_lcore)
unsigned int rte_get_main_lcore(void)
{
- return rte_eal_get_configuration()->main_lcore;
+ return eal_get_runtime_state()->main_lcore;
}
RTE_EXPORT_SYMBOL(rte_lcore_count)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 83e0d986a5..9b3ba64c4c 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -1168,23 +1168,23 @@ static int
eal_parse_main_lcore(const char *arg)
{
char *parsing_end;
- struct rte_config *cfg = rte_eal_get_configuration();
- const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
errno = 0;
- cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
+ user_cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
if (errno || parsing_end[0] != 0)
return -1;
- if (cfg->main_lcore >= RTE_MAX_LCORE)
+ if (user_cfg->main_lcore >= RTE_MAX_LCORE)
return -1;
/* ensure main core is not used as service core */
- if (runtime_state->lcore_cfg[cfg->main_lcore].role == ROLE_SERVICE) {
+ if (runtime_state->lcore_cfg[user_cfg->main_lcore].role == ROLE_SERVICE) {
EAL_LOG(ERR, "Error: Main lcore is used as a service core");
return -1;
}
/* check that we have the core recorded in the core list */
- if (runtime_state->lcore_cfg[cfg->main_lcore].role != ROLE_RTE) {
+ if (runtime_state->lcore_cfg[user_cfg->main_lcore].role != ROLE_RTE) {
EAL_LOG(ERR, "Error: Main lcore is not enabled for DPDK");
return -1;
}
@@ -1960,7 +1960,7 @@ int
eal_parse_args(void)
{
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct rte_config *rte_cfg = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
bool remap_lcores = (args.remap_lcore_ids != NULL);
struct arg_list_elem *arg;
uint16_t lcore_id_base = 0;
@@ -2084,13 +2084,16 @@ eal_parse_args(void)
return -1;
}
}
- if (args.main_lcore != NULL) {
- if (eal_parse_main_lcore(args.main_lcore) < 0)
- return -1;
+ user_cfg->main_lcore = -1;
+ if (args.main_lcore != NULL && eal_parse_main_lcore(args.main_lcore) < 0)
+ return -1;
+
+ if (user_cfg->main_lcore != -1) {
+ runtime_state->main_lcore = user_cfg->main_lcore;
} else {
/* default main lcore is the first one */
- rte_cfg->main_lcore = rte_get_next_lcore(-1, 0, 0);
- if (rte_cfg->main_lcore >= RTE_MAX_LCORE) {
+ runtime_state->main_lcore = rte_get_next_lcore(-1, 0, 0);
+ if (runtime_state->main_lcore >= RTE_MAX_LCORE) {
EAL_LOG(ERR, "Main lcore is not enabled for DPDK");
return -1;
}
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 41a1437435..5572af28af 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -86,6 +86,7 @@ struct eal_user_cfg {
uintptr_t base_virtaddr; /**< base address to try and reserve memory from */
uint64_t numa_mem[RTE_MAX_NUMA_NODES]; /**< amount of memory per NUMA node */
uint64_t numa_limit[RTE_MAX_NUMA_NODES]; /**< limit amount of memory per NUMA node */
+ int main_lcore; /**< ID of the main lcore */
};
/**
@@ -138,6 +139,7 @@ struct eal_runtime_state {
rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
+ uint32_t main_lcore; /**< ID of the main lcore */
uint32_t lcore_count; /**< Number of active lcore IDs (role != ROLE_OFF). */
struct lcore_cfg lcore_cfg[RTE_MAX_LCORE];
};
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 80fa49ce8b..ffbaba6add 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -21,7 +21,6 @@
* The global RTE configuration structure.
*/
struct rte_config {
- uint32_t main_lcore; /**< Id of the main lcore */
/** Primary or secondary configuration */
enum rte_proc_type_t process_type;
diff --git a/lib/eal/common/rte_service.c b/lib/eal/common/rte_service.c
index e28e17f8d5..aa068f88ea 100644
--- a/lib/eal/common/rte_service.c
+++ b/lib/eal/common/rte_service.c
@@ -105,11 +105,10 @@ rte_service_init(void)
RTE_LCORE_VAR_ALLOC(lcore_states);
int i;
- const struct rte_config *cfg = rte_eal_get_configuration();
const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
for (i = 0; i < RTE_MAX_LCORE; i++) {
if (runtime_state->lcore_cfg[i].role == ROLE_SERVICE) {
- if ((unsigned int)i == cfg->main_lcore)
+ if ((unsigned int)i == runtime_state->main_lcore)
continue;
rte_service_lcore_add(i);
}
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index a75af85a7c..30702f5b20 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -358,9 +358,8 @@ static void
eal_check_mem_on_local_socket(void)
{
int socket_id;
- const struct rte_config *config = rte_eal_get_configuration();
- socket_id = rte_lcore_to_socket_id(config->main_lcore);
+ socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
EAL_LOG(WARNING, "WARNING: Main core has no memory on local socket!");
@@ -403,7 +402,6 @@ rte_eal_init(int argc, char **argv)
uint32_t has_run = 0;
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
char thread_name[RTE_THREAD_NAME_SIZE];
- const struct rte_config *config = rte_eal_get_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
bool has_phys_addr;
@@ -647,18 +645,18 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (rte_thread_set_affinity_by_id(rte_thread_self(),
- &runtime_state->lcore_cfg[config->main_lcore].cpuset) != 0) {
+ &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
goto err_out;
}
- __rte_thread_init(config->main_lcore,
- &runtime_state->lcore_cfg[config->main_lcore].cpuset);
+ __rte_thread_init(rte_get_main_lcore(),
+ &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
EAL_LOG(DEBUG, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])",
- config->main_lcore, (uintptr_t)pthread_self(), cpuset,
+ rte_get_main_lcore(), (uintptr_t)pthread_self(), cpuset,
ret == 0 ? "" : "...");
RTE_LCORE_FOREACH_WORKER(i) {
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 9ef4b4e6f5..71c15d1ad5 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -439,9 +439,8 @@ static void
eal_check_mem_on_local_socket(void)
{
int socket_id;
- const struct rte_config *config = rte_eal_get_configuration();
- socket_id = rte_lcore_to_socket_id(config->main_lcore);
+ socket_id = rte_lcore_to_socket_id(rte_get_main_lcore());
if (rte_memseg_list_walk(check_socket, &socket_id) == 0)
EAL_LOG(WARNING, "WARNING: Main core has no memory on local socket!");
@@ -566,7 +565,6 @@ rte_eal_init(int argc, char **argv)
char cpuset[RTE_CPU_AFFINITY_STR_LEN];
char thread_name[RTE_THREAD_NAME_SIZE];
bool phys_addrs;
- const struct rte_config *config = rte_eal_get_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
@@ -823,17 +821,17 @@ rte_eal_init(int argc, char **argv)
eal_check_mem_on_local_socket();
if (rte_thread_set_affinity_by_id(rte_thread_self(),
- &runtime_state->lcore_cfg[config->main_lcore].cpuset) != 0) {
+ &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
goto err_out;
}
- __rte_thread_init(config->main_lcore,
- &runtime_state->lcore_cfg[config->main_lcore].cpuset);
+ __rte_thread_init(rte_get_main_lcore(),
+ &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
EAL_LOG(DEBUG, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])",
- config->main_lcore, (uintptr_t)pthread_self(), cpuset,
+ rte_get_main_lcore(), (uintptr_t)pthread_self(), cpuset,
ret == 0 ? "" : "...");
RTE_LCORE_FOREACH_WORKER(i) {
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 36763bb44f..c341e9a599 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -1778,7 +1778,6 @@ memseg_primary_init_32(void)
int hp_sizes = (int) platform_info->num_hugepage_sizes;
uint64_t max_socket_mem, cur_socket_mem;
unsigned int main_lcore_socket;
- struct rte_config *cfg = rte_eal_get_configuration();
bool skip;
int ret;
@@ -1801,7 +1800,7 @@ memseg_primary_init_32(void)
/* ...or if we didn't specifically request memory on *any*
* socket, and this is not main lcore
*/
- main_lcore_socket = rte_lcore_to_socket_id(cfg->main_lcore);
+ main_lcore_socket = rte_lcore_to_socket_id(rte_get_main_lcore());
skip |= active_sockets == 0 && socket_id != main_lcore_socket;
if (skip) {
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 988352f867..0d1ba3aaeb 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -154,7 +154,6 @@ int
rte_eal_init(int argc, char **argv)
{
int i, fctret, bscan;
- const struct rte_config *config = rte_eal_get_configuration();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
bool has_phys_addr;
@@ -340,17 +339,17 @@ rte_eal_init(int argc, char **argv)
eal_rand_init();
if (rte_thread_set_affinity_by_id(rte_thread_self(),
- &runtime_state->lcore_cfg[config->main_lcore].cpuset) != 0) {
+ &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset) != 0) {
rte_eal_init_alert("Cannot set affinity");
rte_errno = EINVAL;
goto err_out;
}
- __rte_thread_init(config->main_lcore,
- &runtime_state->lcore_cfg[config->main_lcore].cpuset);
+ __rte_thread_init(rte_get_main_lcore(),
+ &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset);
ret = eal_thread_dump_current_affinity(cpuset, sizeof(cpuset));
EAL_LOG(DEBUG, "Main lcore %u is ready (tid=%zx;cpuset=[%s%s])",
- config->main_lcore, rte_thread_self().opaque_id, cpuset,
+ rte_get_main_lcore(), rte_thread_self().opaque_id, cpuset,
ret == 0 ? "" : "...");
RTE_LCORE_FOREACH_WORKER(i) {
--
2.51.0
^ permalink raw reply related	[flat|nested] 50+ messages in thread

* [RFC PATCH 19/44] eal: move iova mode and process type to runtime cfg
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (17 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 18/44] eal: move main lcore setting to runtime " Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 20/44] eal: move memory config pointer to runtime state struct Bruce Richardson
` (26 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Move two more fields, iova_mode and process_type, from the old rte_config
struct to the eal_runtime_state struct.
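For reference, a minimal standalone sketch of the accessor pattern this patch lands on. The enum and struct definitions below are simplified stand-ins for the real EAL types, not the actual definitions:

```c
#include <assert.h>

/* simplified stand-ins for the real EAL enums */
enum rte_proc_type_t { RTE_PROC_PRIMARY, RTE_PROC_SECONDARY };
enum rte_iova_mode { RTE_IOVA_PA, RTE_IOVA_VA };

/* runtime state now owns process type and IOVA mode */
struct eal_runtime_state {
	enum rte_proc_type_t process_type; /* primary or secondary process */
	enum rte_iova_mode iova_mode;      /* PA or VA IOVA mapping mode */
};

static struct eal_runtime_state runtime_state;

static struct eal_runtime_state *
eal_get_runtime_state(void)
{
	return &runtime_state;
}

/* public getters read from the runtime state, not rte_config */
enum rte_iova_mode
rte_eal_iova_mode(void)
{
	return eal_get_runtime_state()->iova_mode;
}

enum rte_proc_type_t
rte_eal_process_type(void)
{
	return eal_get_runtime_state()->process_type;
}
```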
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 4 ++--
lib/eal/common/eal_common_mcfg.c | 2 +-
lib/eal/common/eal_internal_cfg.h | 2 ++
lib/eal/common/eal_private.h | 7 -------
lib/eal/freebsd/eal.c | 11 +++++------
lib/eal/linux/eal.c | 13 ++++++-------
lib/eal/windows/eal.c | 2 +-
7 files changed, 17 insertions(+), 24 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index f1a5e84aa9..7e6704a32d 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -88,7 +88,7 @@ RTE_EXPORT_SYMBOL(rte_eal_iova_mode)
enum rte_iova_mode
rte_eal_iova_mode(void)
{
- return rte_eal_get_configuration()->iova_mode;
+ return eal_get_runtime_state()->iova_mode;
}
/* Get the EAL base address */
@@ -105,7 +105,7 @@ RTE_EXPORT_SYMBOL(rte_eal_process_type)
enum rte_proc_type_t
rte_eal_process_type(void)
{
- return rte_config.process_type;
+ return eal_get_runtime_state()->process_type;
}
/* Return user provided mbuf pool ops name */
diff --git a/lib/eal/common/eal_common_mcfg.c b/lib/eal/common/eal_common_mcfg.c
index 497b0933c7..cc4107bbca 100644
--- a/lib/eal/common/eal_common_mcfg.c
+++ b/lib/eal/common/eal_common_mcfg.c
@@ -18,7 +18,7 @@ eal_mcfg_complete(void)
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* ALL shared mem_config related INIT DONE */
- if (cfg->process_type == RTE_PROC_PRIMARY)
+ if (runtime_state->process_type == RTE_PROC_PRIMARY)
mcfg->magic = RTE_MAGIC;
runtime_state->init_complete = 1;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 5572af28af..e229f82c1e 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -139,6 +139,8 @@ struct eal_runtime_state {
rte_cpuset_t ctrl_cpuset; /**< cpuset for ctrl threads */
volatile unsigned int init_complete;
/**< indicates whether EAL has completed initialization */
+ enum rte_proc_type_t process_type; /**< primary or secondary process */
+ enum rte_iova_mode iova_mode; /**< PA or VA IOVA mapping mode */
uint32_t main_lcore; /**< ID of the main lcore */
uint32_t lcore_count; /**< Number of active lcore IDs (role != ROLE_OFF). */
struct lcore_cfg lcore_cfg[RTE_MAX_LCORE];
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index ffbaba6add..a905632cbe 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -21,13 +21,6 @@
* The global RTE configuration structure.
*/
struct rte_config {
-
- /** Primary or secondary configuration */
- enum rte_proc_type_t process_type;
-
- /** PA or VA mapping mode */
- enum rte_iova_mode iova_mode;
-
/**
* Pointer to memory configuration, which may be shared across multiple
* DPDK instances
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 30702f5b20..5271614e4a 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -284,12 +284,12 @@ eal_proc_type_detect(void)
static int
rte_config_init(void)
{
- struct rte_config *config = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- config->process_type = user_cfg->process_type;
+ runtime_state->process_type = user_cfg->process_type;
- switch (config->process_type) {
+ switch (runtime_state->process_type) {
case RTE_PROC_PRIMARY:
if (rte_eal_config_create() < 0)
return -1;
@@ -313,8 +313,7 @@ rte_config_init(void)
break;
case RTE_PROC_AUTO:
case RTE_PROC_INVALID:
- EAL_LOG(ERR, "Invalid process type %d",
- config->process_type);
+ EAL_LOG(ERR, "Invalid process type %d", runtime_state->process_type);
return -1;
}
@@ -560,7 +559,7 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- rte_eal_get_configuration()->iova_mode = iova_mode;
+ runtime_state->iova_mode = iova_mode;
EAL_LOG(INFO, "Selected IOVA mode '%s'",
rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA");
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 71c15d1ad5..2ac2546391 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -368,12 +368,12 @@ eal_proc_type_detect(void)
static int
rte_config_init(void)
{
- struct rte_config *config = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- config->process_type = user_cfg->process_type;
+ runtime_state->process_type = user_cfg->process_type;
- switch (config->process_type) {
+ switch (runtime_state->process_type) {
case RTE_PROC_PRIMARY:
if (rte_eal_config_create() < 0)
return -1;
@@ -398,7 +398,7 @@ rte_config_init(void)
case RTE_PROC_AUTO:
case RTE_PROC_INVALID:
EAL_LOG(ERR, "Invalid process type %d",
- config->process_type);
+ runtime_state->process_type);
return -1;
}
@@ -707,10 +707,9 @@ rte_eal_init(int argc, char **argv)
EAL_LOG(DEBUG, "IOMMU is not available, selecting IOVA as PA mode.");
}
}
- rte_eal_get_configuration()->iova_mode = iova_mode;
+ runtime_state->iova_mode = iova_mode;
} else {
- rte_eal_get_configuration()->iova_mode =
- user_cfg->iova_mode;
+ runtime_state->iova_mode = user_cfg->iova_mode;
}
if (rte_eal_iova_mode() == RTE_IOVA_PA && !phys_addrs) {
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 0d1ba3aaeb..72df9163f0 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -298,7 +298,7 @@ rte_eal_init(int argc, char **argv)
EAL_LOG(DEBUG, "Selected IOVA mode '%s'",
iova_mode == RTE_IOVA_PA ? "PA" : "VA");
- rte_eal_get_configuration()->iova_mode = iova_mode;
+ runtime_state->iova_mode = iova_mode;
if (rte_eal_memzone_init() < 0) {
rte_eal_init_alert("Cannot init memzone");
--
2.51.0
* [RFC PATCH 20/44] eal: move memory config pointer to runtime state struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (18 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 19/44] eal: move iova mode and process type to runtime cfg Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 21/44] eal: remove rte_config structure Bruce Richardson
` (25 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Remove the final field from the rte_config struct - the mem_config
pointer - and put it in the runtime state structure instead. The
rte_config struct is now unused and ready for removal in a later commit.
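For reference, a minimal standalone sketch of the resulting arrangement: the runtime state holds the mem_config pointer (initialized to a static early config, as in eal_common_config.c), and eal_get_mcfg() is the single accessor replacing rte_eal_get_configuration()->mem_config. The struct contents here are simplified stand-ins:

```c
#include <assert.h>

/* simplified stand-in for the shared memory configuration */
struct rte_mem_config {
	unsigned int nchannel;
	unsigned int nrank;
};

/* static early config, used until the real shared config is set up */
static struct rte_mem_config early_mem_config;

struct eal_runtime_state {
	struct rte_mem_config *mem_config; /* may point to shared memory */
};

static struct eal_runtime_state runtime_state = {
	.mem_config = &early_mem_config,
};

static struct eal_runtime_state *
eal_get_runtime_state(void)
{
	return &runtime_state;
}

/* single accessor replacing rte_eal_get_configuration()->mem_config */
struct rte_mem_config *
eal_get_mcfg(void)
{
	return eal_get_runtime_state()->mem_config;
}
```

Callers such as rte_memory_get_nchannel() then reduce to one-liners like `return eal_get_mcfg()->nchannel;`, which is the bulk of the mechanical churn in this patch.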
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 15 ++++++---
lib/eal/common/eal_common_dynmem.c | 2 +-
lib/eal/common/eal_common_mcfg.c | 23 ++++++-------
lib/eal/common/eal_common_memory.c | 52 ++++++++++++++---------------
lib/eal/common/eal_common_memzone.c | 20 +++++------
lib/eal/common/eal_common_proc.c | 2 +-
lib/eal/common/eal_common_tailqs.c | 6 ++--
lib/eal/common/eal_common_timer.c | 2 +-
lib/eal/common/eal_internal_cfg.h | 4 +++
lib/eal/common/eal_memcfg.h | 3 ++
lib/eal/common/eal_private.h | 6 +---
lib/eal/common/malloc_heap.c | 22 ++++++------
lib/eal/common/malloc_mp.c | 2 +-
lib/eal/common/rte_malloc.c | 14 ++++----
lib/eal/freebsd/eal.c | 22 ++++++------
lib/eal/freebsd/eal_memory.c | 6 ++--
lib/eal/linux/eal.c | 24 ++++++-------
lib/eal/linux/eal_memalloc.c | 18 +++++-----
lib/eal/linux/eal_memory.c | 12 +++----
lib/eal/windows/eal.c | 6 ++--
lib/eal/windows/eal_memalloc.c | 4 +--
lib/eal/windows/eal_memory.c | 2 +-
22 files changed, 137 insertions(+), 130 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 7e6704a32d..ebb7c222d9 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -21,9 +21,7 @@ static struct rte_mem_config early_mem_config = {
};
/* Address of global and public configuration */
-static struct rte_config rte_config = {
- .mem_config = &early_mem_config,
-};
+static struct rte_config rte_config;
/* platform-specific runtime dir */
static char runtime_dir[UNIX_PATH_MAX];
@@ -35,7 +33,9 @@ static struct eal_user_cfg eal_user_cfg;
static struct eal_platform_info eal_platform_info;
/* internal runtime configuration */
-static struct eal_runtime_state eal_runtime_state;
+static struct eal_runtime_state eal_runtime_state = {
+ .mem_config = &early_mem_config,
+};
RTE_EXPORT_SYMBOL(rte_eal_get_runtime_dir)
const char *
@@ -63,6 +63,13 @@ rte_eal_get_configuration(void)
return &rte_config;
}
+/* Return a pointer to the memory config structure */
+struct rte_mem_config *
+eal_get_mcfg(void)
+{
+ return eal_get_runtime_state()->mem_config;
+}
+
/* Return a pointer to the user configuration structure */
struct eal_user_cfg *
eal_get_user_configuration(void)
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index 674ea5ec42..629cec7ccc 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -20,7 +20,7 @@
int
eal_dynmem_memseg_lists_init(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct memtype {
uint64_t page_sz;
int socket_id;
diff --git a/lib/eal/common/eal_common_mcfg.c b/lib/eal/common/eal_common_mcfg.c
index cc4107bbca..1e09bd650d 100644
--- a/lib/eal/common/eal_common_mcfg.c
+++ b/lib/eal/common/eal_common_mcfg.c
@@ -13,8 +13,7 @@
void
eal_mcfg_complete(void)
{
- struct rte_config *cfg = rte_eal_get_configuration();
- struct rte_mem_config *mcfg = cfg->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* ALL shared mem_config related INIT DONE */
@@ -27,7 +26,7 @@ eal_mcfg_complete(void)
void
eal_mcfg_wait_complete(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
/* wait until shared mem_config finish initialising */
rte_wait_until_equal_32(&mcfg->magic, RTE_MAGIC, rte_memory_order_relaxed);
@@ -36,7 +35,7 @@ eal_mcfg_wait_complete(void)
int
eal_mcfg_check_version(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
/* check if version from memconfig matches compiled in macro */
if (mcfg->version != RTE_VERSION)
@@ -48,7 +47,7 @@ eal_mcfg_check_version(void)
void
eal_mcfg_update_internal(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
user_cfg->legacy_mem = mcfg->legacy_mem;
@@ -58,7 +57,7 @@ eal_mcfg_update_internal(void)
void
eal_mcfg_update_from_internal(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
mcfg->legacy_mem = user_cfg->legacy_mem;
@@ -71,7 +70,7 @@ RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mem_get_lock)
rte_rwlock_t *
rte_mcfg_mem_get_lock(void)
{
- return &rte_eal_get_configuration()->mem_config->memory_hotplug_lock;
+ return &eal_get_mcfg()->memory_hotplug_lock;
}
RTE_EXPORT_SYMBOL(rte_mcfg_mem_read_lock)
@@ -106,7 +105,7 @@ RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_tailq_get_lock)
rte_rwlock_t *
rte_mcfg_tailq_get_lock(void)
{
- return &rte_eal_get_configuration()->mem_config->qlock;
+ return &eal_get_mcfg()->qlock;
}
RTE_EXPORT_SYMBOL(rte_mcfg_tailq_read_lock)
@@ -141,7 +140,7 @@ RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_mempool_get_lock)
rte_rwlock_t *
rte_mcfg_mempool_get_lock(void)
{
- return &rte_eal_get_configuration()->mem_config->mplock;
+ return &eal_get_mcfg()->mplock;
}
RTE_EXPORT_SYMBOL(rte_mcfg_mempool_read_lock)
@@ -176,7 +175,7 @@ RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_timer_get_lock)
rte_spinlock_t *
rte_mcfg_timer_get_lock(void)
{
- return &rte_eal_get_configuration()->mem_config->tlock;
+ return &eal_get_mcfg()->tlock;
}
RTE_EXPORT_SYMBOL(rte_mcfg_timer_lock)
@@ -197,13 +196,13 @@ RTE_EXPORT_INTERNAL_SYMBOL(rte_mcfg_ethdev_get_lock)
rte_spinlock_t *
rte_mcfg_ethdev_get_lock(void)
{
- return &rte_eal_get_configuration()->mem_config->ethdev_lock;
+ return &eal_get_mcfg()->ethdev_lock;
}
RTE_EXPORT_SYMBOL(rte_mcfg_get_single_file_segments)
bool
rte_mcfg_get_single_file_segments(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
return (bool)mcfg->single_file_segments;
}
diff --git a/lib/eal/common/eal_common_memory.c b/lib/eal/common/eal_common_memory.c
index 42ddc34b01..cc7ee56b64 100644
--- a/lib/eal/common/eal_common_memory.c
+++ b/lib/eal/common/eal_common_memory.c
@@ -322,7 +322,7 @@ virt2memseg(const void *addr, const struct rte_memseg_list *msl)
static struct rte_memseg_list *
virt2memseg_list(const void *addr)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *msl;
int msl_idx;
@@ -437,7 +437,7 @@ static int
dump_memseg(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int msl_idx, ms_idx, fd;
FILE *f = arg;
@@ -569,7 +569,7 @@ check_iova(const struct rte_memseg_list *msl __rte_unused,
static int
check_dma_mask(uint8_t maskbits, bool thread_unsafe)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
uint64_t mask;
int ret;
@@ -632,7 +632,7 @@ RTE_EXPORT_SYMBOL(rte_mem_set_dma_mask)
void
rte_mem_set_dma_mask(uint8_t maskbits)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
mcfg->dma_maskbits = mcfg->dma_maskbits == 0 ? maskbits :
RTE_MIN(mcfg->dma_maskbits, maskbits);
@@ -642,29 +642,27 @@ rte_mem_set_dma_mask(uint8_t maskbits)
RTE_EXPORT_SYMBOL(rte_memory_get_nchannel)
unsigned rte_memory_get_nchannel(void)
{
- return rte_eal_get_configuration()->mem_config->nchannel;
+ return eal_get_mcfg()->nchannel;
}
/* return the number of memory rank */
RTE_EXPORT_SYMBOL(rte_memory_get_nrank)
unsigned rte_memory_get_nrank(void)
{
- return rte_eal_get_configuration()->mem_config->nrank;
+ return eal_get_mcfg()->nrank;
}
static int
rte_eal_memdevice_init(void)
{
- struct rte_config *config;
const struct eal_user_cfg *user_cfg;
if (rte_eal_process_type() == RTE_PROC_SECONDARY)
return 0;
user_cfg = eal_get_user_configuration();
- config = rte_eal_get_configuration();
- config->mem_config->nchannel = user_cfg->force_nchannel;
- config->mem_config->nrank = user_cfg->force_nrank;
+ eal_get_mcfg()->nchannel = user_cfg->force_nchannel;
+ eal_get_mcfg()->nrank = user_cfg->force_nrank;
return 0;
}
@@ -684,7 +682,7 @@ RTE_EXPORT_SYMBOL(rte_memseg_contig_walk_thread_unsafe)
int
rte_memseg_contig_walk_thread_unsafe(rte_memseg_contig_walk_t func, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int i, ms_idx, ret = 0;
for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
@@ -738,7 +736,7 @@ RTE_EXPORT_SYMBOL(rte_memseg_walk_thread_unsafe)
int
rte_memseg_walk_thread_unsafe(rte_memseg_walk_t func, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int i, ms_idx, ret = 0;
for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
@@ -781,7 +779,7 @@ RTE_EXPORT_SYMBOL(rte_memseg_list_walk_thread_unsafe)
int
rte_memseg_list_walk_thread_unsafe(rte_memseg_list_walk_t func, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int i, ret = 0;
for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
@@ -815,7 +813,7 @@ RTE_EXPORT_SYMBOL(rte_memseg_get_fd_thread_unsafe)
int
rte_memseg_get_fd_thread_unsafe(const struct rte_memseg *ms)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *msl;
struct rte_fbarray *arr;
int msl_idx, seg_idx, ret;
@@ -872,7 +870,7 @@ int
rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
size_t *offset)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *msl;
struct rte_fbarray *arr;
int msl_idx, seg_idx, ret;
@@ -929,7 +927,7 @@ int
rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
unsigned int n_pages, size_t page_sz)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
unsigned int socket_id, n;
int ret = 0;
@@ -1049,7 +1047,7 @@ int
rte_eal_memory_detach(void)
{
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
size_t page_sz = rte_mem_page_size();
unsigned int i;
@@ -1101,7 +1099,7 @@ rte_eal_memory_detach(void)
EAL_LOG(ERR, "Could not unmap shared memory config: %s",
rte_strerror(rte_errno));
}
- rte_eal_get_configuration()->mem_config = NULL;
+ eal_get_runtime_state()->mem_config = NULL;
return 0;
}
@@ -1154,7 +1152,7 @@ static int
handle_eal_heap_info_request(const char *cmd __rte_unused, const char *params,
struct rte_tel_data *d)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_malloc_socket_stats sock_stats;
struct malloc_heap *heap;
unsigned int heap_id;
@@ -1191,7 +1189,7 @@ handle_eal_heap_list_request(const char *cmd __rte_unused,
const char *params __rte_unused,
struct rte_tel_data *d)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_malloc_socket_stats sock_stats;
unsigned int heap_id;
@@ -1213,7 +1211,7 @@ static int
handle_eal_memzone_info_request(const char *cmd __rte_unused,
const char *params, struct rte_tel_data *d)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *msl = NULL;
int ms_idx, ms_count = 0;
void *cur_addr, *mz_end;
@@ -1275,7 +1273,7 @@ static void
memzone_list_cb(const struct rte_memzone *mz __rte_unused,
void *arg __rte_unused)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_tel_data *d = arg;
int mz_idx;
@@ -1340,7 +1338,7 @@ handle_eal_memseg_lists_request(const char *cmd __rte_unused,
rte_tel_data_start_array(d, RTE_TEL_INT_VAL);
rte_mcfg_mem_read_lock();
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
for (i = 0; i < RTE_MAX_MEMSEG_LISTS; i++) {
struct rte_memseg_list *msl = &mcfg->memsegs[i];
@@ -1376,7 +1374,7 @@ handle_eal_memseg_list_info_request(const char *cmd __rte_unused,
rte_tel_data_start_array(d, RTE_TEL_INT_VAL);
rte_mcfg_mem_read_lock();
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
msl = &mcfg->memsegs[ms_list_idx];
if (msl->memseg_arr.count == 0)
goto done;
@@ -1423,7 +1421,7 @@ handle_eal_memseg_info_request(const char *cmd __rte_unused,
rte_mcfg_mem_read_lock();
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
msl = &mcfg->memsegs[ms_list_idx];
if (msl->memseg_arr.count == 0) {
rte_mcfg_mem_read_unlock();
@@ -1502,7 +1500,7 @@ handle_eal_element_list_request(const char *cmd __rte_unused,
rte_mcfg_mem_read_lock();
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
msl = &mcfg->memsegs[ms_list_idx];
ms = rte_fbarray_get(&msl->memseg_arr, ms_idx);
if (ms == NULL) {
@@ -1580,7 +1578,7 @@ handle_eal_element_info_request(const char *cmd __rte_unused,
rte_mcfg_mem_read_lock();
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
msl = &mcfg->memsegs[ms_list_idx];
ms = rte_fbarray_get(&msl->memseg_arr, ms_idx);
if (ms == NULL) {
diff --git a/lib/eal/common/eal_common_memzone.c b/lib/eal/common/eal_common_memzone.c
index 1207d524c9..570cb60757 100644
--- a/lib/eal/common/eal_common_memzone.c
+++ b/lib/eal/common/eal_common_memzone.c
@@ -39,7 +39,7 @@ rte_memzone_max_set(size_t max)
return -1;
}
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
if (mcfg == NULL) {
EAL_LOG(ERR, "Failed to set max memzone count");
return -1;
@@ -56,7 +56,7 @@ rte_memzone_max_get(void)
{
struct rte_mem_config *mcfg;
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
if (mcfg == NULL || mcfg->max_memzone == 0)
return DEFAULT_MAX_MEMZONE_COUNT;
@@ -72,7 +72,7 @@ memzone_lookup_thread_unsafe(const char *name)
int i = 0;
/* get pointer to global configuration */
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
arr = &mcfg->memzones;
/*
@@ -116,7 +116,7 @@ memzone_reserve_aligned_thread_unsafe(const char *name, size_t len,
bool contig;
/* get pointer to global configuration */
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
arr = &mcfg->memzones;
/* no more room in config */
@@ -248,7 +248,7 @@ rte_memzone_reserve_thread_safe(const char *name, size_t len, int socket_id,
const struct rte_memzone *mz = NULL;
/* get pointer to global configuration */
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
rte_rwlock_write_lock(&mcfg->mlock);
@@ -319,7 +319,7 @@ rte_memzone_free(const struct rte_memzone *mz)
return -EINVAL;
rte_strlcpy(name, mz->name, RTE_MEMZONE_NAMESIZE);
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
arr = &mcfg->memzones;
rte_rwlock_write_lock(&mcfg->mlock);
@@ -357,7 +357,7 @@ rte_memzone_lookup(const char *name)
struct rte_mem_config *mcfg;
const struct rte_memzone *memzone = NULL;
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
rte_rwlock_read_lock(&mcfg->mlock);
@@ -377,7 +377,7 @@ struct memzone_info {
static void
dump_memzone(const struct rte_memzone *mz, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *msl = NULL;
struct memzone_info *info = arg;
void *cur_addr, *mz_end;
@@ -448,7 +448,7 @@ rte_eal_memzone_init(void)
int ret = 0;
/* get pointer to global configuration */
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
rte_rwlock_write_lock(&mcfg->mlock);
@@ -477,7 +477,7 @@ void rte_memzone_walk(void (*func)(const struct rte_memzone *, void *),
struct rte_fbarray *arr;
int i;
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
arr = &mcfg->memzones;
rte_rwlock_read_lock(&mcfg->mlock);
diff --git a/lib/eal/common/eal_common_proc.c b/lib/eal/common/eal_common_proc.c
index dcf18ebf4c..448089d024 100644
--- a/lib/eal/common/eal_common_proc.c
+++ b/lib/eal/common/eal_common_proc.c
@@ -1296,7 +1296,7 @@ enum mp_status {
static bool
set_mp_status(enum mp_status status)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
uint8_t expected;
uint8_t desired;
diff --git a/lib/eal/common/eal_common_tailqs.c b/lib/eal/common/eal_common_tailqs.c
index c581f43b6f..8f8a2d2417 100644
--- a/lib/eal/common/eal_common_tailqs.c
+++ b/lib/eal/common/eal_common_tailqs.c
@@ -29,7 +29,7 @@ struct rte_tailq_head *
rte_eal_tailq_lookup(const char *name)
{
unsigned i;
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
if (name == NULL) {
rte_errno = EINVAL;
@@ -53,7 +53,7 @@ rte_dump_tailq(FILE *f)
struct rte_mem_config *mcfg;
unsigned i = 0;
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
rte_mcfg_tailq_read_lock();
for (i = 0; i < RTE_MAX_TAILQ; i++) {
@@ -81,7 +81,7 @@ rte_eal_tailq_create(const char *name)
(rte_tailqs_count + 1 < RTE_MAX_TAILQ)) {
struct rte_mem_config *mcfg;
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
head = &mcfg->tailq_head[rte_tailqs_count];
strlcpy(head->name, name, sizeof(head->name) - 1);
TAILQ_INIT(&head->tailq_head);
diff --git a/lib/eal/common/eal_common_timer.c b/lib/eal/common/eal_common_timer.c
index bbf8b8b11b..4051096784 100644
--- a/lib/eal/common/eal_common_timer.c
+++ b/lib/eal/common/eal_common_timer.c
@@ -55,7 +55,7 @@ estimate_tsc_freq(void)
void
set_tsc_freq(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
uint64_t freq;
if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index e229f82c1e..47af403c27 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -19,6 +19,9 @@
#include <rte_stdatomic.h>
#include "eal_thread.h"
+/* Forward declaration — full definition is in eal_memcfg.h */
+struct rte_mem_config;
+
#if defined(RTE_ARCH_ARM)
#define MAX_HUGEPAGE_SIZES 4 /**< support up to 4 page sizes */
#else
@@ -144,6 +147,7 @@ struct eal_runtime_state {
uint32_t main_lcore; /**< ID of the main lcore */
uint32_t lcore_count; /**< Number of active lcore IDs (role != ROLE_OFF). */
struct lcore_cfg lcore_cfg[RTE_MAX_LCORE];
+ struct rte_mem_config *mem_config; /**< pointer to memory config (in shared memory) */
};
struct eal_user_cfg *eal_get_user_configuration(void);
diff --git a/lib/eal/common/eal_memcfg.h b/lib/eal/common/eal_memcfg.h
index 60e2089797..8302a7c8e5 100644
--- a/lib/eal/common/eal_memcfg.h
+++ b/lib/eal/common/eal_memcfg.h
@@ -80,6 +80,9 @@ struct rte_mem_config {
size_t max_memzone; /**< Maximum number of allocated memzones. */
};
+/* Return a pointer to the shared memory config */
+struct rte_mem_config *eal_get_mcfg(void);
+
/* update internal config from shared mem config */
void
eal_mcfg_update_internal(void);
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index a905632cbe..2bb5c6c402 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -21,11 +21,7 @@
* The global RTE configuration structure.
*/
struct rte_config {
- /**
- * Pointer to memory configuration, which may be shared across multiple
- * DPDK instances
- */
- struct rte_mem_config *mem_config;
+ int _unused; /**< dummy field to prevent empty struct */
};
/**
diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index bd25496275..90534c9cbc 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -71,7 +71,7 @@ check_hugepage_sz(unsigned flags, uint64_t hugepage_sz)
int
malloc_socket_to_heap_id(unsigned int socket_id)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int i;
for (i = 0; i < RTE_MAX_HEAPS; i++) {
@@ -107,7 +107,7 @@ static int
malloc_add_seg(const struct rte_memseg_list *msl,
const struct rte_memseg *ms, size_t len, void *arg __rte_unused)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *found_msl;
struct malloc_heap *heap;
int msl_idx, heap_idx;
@@ -294,7 +294,7 @@ alloc_pages_on_heap(struct malloc_heap *heap, uint64_t pg_sz, size_t elt_size,
int socket, unsigned int flags, size_t align, size_t bound,
bool contig, struct rte_memseg **ms, int n_segs)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *msl;
struct malloc_elem *elem = NULL;
size_t alloc_sz;
@@ -465,7 +465,7 @@ try_expand_heap_secondary(struct malloc_heap *heap, uint64_t pg_sz,
size_t elt_size, int socket, unsigned int flags, size_t align,
size_t bound, bool contig)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct malloc_mp_req req;
int req_result;
@@ -534,7 +534,7 @@ static int
alloc_more_mem_on_socket(struct malloc_heap *heap, size_t size, int socket,
unsigned int flags, size_t align, size_t bound, bool contig)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *requested_msls[RTE_MAX_MEMSEG_LISTS];
struct rte_memseg_list *other_msls[RTE_MAX_MEMSEG_LISTS];
uint64_t requested_pg_sz[RTE_MAX_MEMSEG_LISTS];
@@ -642,7 +642,7 @@ static void *
malloc_heap_alloc_on_heap_id(size_t size, unsigned int heap_id, unsigned int flags, size_t align,
size_t bound, bool contig)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct malloc_heap *heap = &mcfg->malloc_heaps[heap_id];
unsigned int size_flags = flags & ~RTE_MEMZONE_SIZE_HINT_ONLY;
int socket_id;
@@ -773,7 +773,7 @@ static void *
heap_alloc_biggest_on_heap_id(unsigned int heap_id,
unsigned int flags, size_t align, bool contig)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct malloc_heap *heap = &mcfg->malloc_heaps[heap_id];
void *ret;
@@ -1175,7 +1175,7 @@ malloc_heap_create_external_seg(void *va_addr, rte_iova_t iova_addrs[],
unsigned int n_pages, size_t page_sz, const char *seg_name,
unsigned int socket_id)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
char fbarray_name[RTE_FBARRAY_NAME_LEN];
struct rte_memseg_list *msl = NULL;
struct rte_fbarray *arr;
@@ -1242,7 +1242,7 @@ struct extseg_walk_arg {
static int
extseg_walk(const struct rte_memseg_list *msl, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct extseg_walk_arg *wa = arg;
if (msl->base_va == wa->va_addr && msl->len == wa->len) {
@@ -1343,7 +1343,7 @@ malloc_heap_remove_external_memory(struct malloc_heap *heap, void *va_addr,
int
malloc_heap_create(struct malloc_heap *heap, const char *heap_name)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
uint32_t next_socket_id = mcfg->next_socket_id;
/* prevent overflow. did you really create 2 billion heaps??? */
@@ -1397,7 +1397,7 @@ malloc_heap_destroy(struct malloc_heap *heap)
int
rte_eal_malloc_heap_init(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
unsigned int i;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c
index 9765277f5d..d225b22bfd 100644
--- a/lib/eal/common/malloc_mp.c
+++ b/lib/eal/common/malloc_mp.c
@@ -217,7 +217,7 @@ static int
handle_alloc_request(const struct malloc_mp_req *m,
struct mp_request *req)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
const struct malloc_req_alloc *ar = &m->alloc_req;
struct malloc_heap *heap;
struct malloc_elem *elem;
diff --git a/lib/eal/common/rte_malloc.c b/lib/eal/common/rte_malloc.c
index 388e5a63b6..9e1c7adfed 100644
--- a/lib/eal/common/rte_malloc.c
+++ b/lib/eal/common/rte_malloc.c
@@ -265,7 +265,7 @@ int
rte_malloc_get_socket_stats(int socket,
struct rte_malloc_socket_stats *socket_stats)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int heap_idx;
heap_idx = malloc_socket_to_heap_id(socket);
@@ -283,7 +283,7 @@ RTE_EXPORT_SYMBOL(rte_malloc_dump_heaps)
void
rte_malloc_dump_heaps(FILE *f)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
unsigned int idx;
for (idx = 0; idx < RTE_MAX_HEAPS; idx++) {
@@ -296,7 +296,7 @@ RTE_EXPORT_SYMBOL(rte_malloc_heap_get_socket)
int
rte_malloc_heap_get_socket(const char *name)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct malloc_heap *heap = NULL;
unsigned int idx;
int ret;
@@ -333,7 +333,7 @@ RTE_EXPORT_SYMBOL(rte_malloc_heap_socket_is_external)
int
rte_malloc_heap_socket_is_external(int socket_id)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
unsigned int idx;
int ret = -1;
@@ -362,7 +362,7 @@ RTE_EXPORT_SYMBOL(rte_malloc_dump_stats)
void
rte_malloc_dump_stats(FILE *f, __rte_unused const char *type)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
unsigned int heap_id;
struct rte_malloc_socket_stats sock_stats;
@@ -414,7 +414,7 @@ rte_malloc_virt2iova(const void *addr)
static struct malloc_heap *
find_named_heap(const char *name)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
unsigned int i;
for (i = 0; i < RTE_MAX_HEAPS; i++) {
@@ -616,7 +616,7 @@ RTE_EXPORT_SYMBOL(rte_malloc_heap_create)
int
rte_malloc_heap_create(const char *heap_name)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct malloc_heap *heap = NULL;
int i, ret;
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 5271614e4a..98da77acc1 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -93,7 +93,7 @@ eal_clean_runtime_dir(void)
static int
rte_eal_config_create(void)
{
- struct rte_config *config = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
size_t page_sz = rte_mem_page_size();
size_t cfg_len = sizeof(struct rte_mem_config);
@@ -163,13 +163,13 @@ rte_eal_config_create(void)
return -1;
}
- memcpy(rte_mem_cfg_addr, config->mem_config, sizeof(struct rte_mem_config));
- config->mem_config = rte_mem_cfg_addr;
+ memcpy(rte_mem_cfg_addr, runtime_state->mem_config, sizeof(struct rte_mem_config));
+ runtime_state->mem_config = rte_mem_cfg_addr;
/* store address of the config in the config itself so that secondary
* processes could later map the config into this exact location
*/
- config->mem_config->mem_cfg_addr = (uintptr_t) rte_mem_cfg_addr;
+ runtime_state->mem_config->mem_cfg_addr = (uintptr_t) rte_mem_cfg_addr;
return 0;
}
@@ -179,7 +179,7 @@ rte_eal_config_attach(void)
{
void *rte_mem_cfg_addr;
const char *pathname = eal_runtime_config_path();
- struct rte_config *config = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
@@ -195,7 +195,7 @@ rte_eal_config_attach(void)
}
}
- rte_mem_cfg_addr = mmap(NULL, sizeof(*config->mem_config),
+ rte_mem_cfg_addr = mmap(NULL, sizeof(*runtime_state->mem_config),
PROT_READ, MAP_SHARED, mem_cfg_fd, 0);
/* don't close the fd here, it will be closed on reattach */
if (rte_mem_cfg_addr == MAP_FAILED) {
@@ -206,7 +206,7 @@ rte_eal_config_attach(void)
return -1;
}
- config->mem_config = rte_mem_cfg_addr;
+ runtime_state->mem_config = rte_mem_cfg_addr;
return 0;
}
@@ -217,7 +217,7 @@ rte_eal_config_reattach(void)
{
struct rte_mem_config *mem_config;
void *rte_mem_cfg_addr;
- struct rte_config *config = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (user_cfg->no_shconf)
@@ -225,10 +225,10 @@ rte_eal_config_reattach(void)
/* save the address primary process has mapped shared config to */
rte_mem_cfg_addr =
- (void *)(uintptr_t)config->mem_config->mem_cfg_addr;
+ (void *)(uintptr_t)runtime_state->mem_config->mem_cfg_addr;
/* unmap original config */
- munmap(config->mem_config, sizeof(struct rte_mem_config));
+ munmap(runtime_state->mem_config, sizeof(struct rte_mem_config));
/* remap the config at proper address */
mem_config = (struct rte_mem_config *) mmap(rte_mem_cfg_addr,
@@ -249,7 +249,7 @@ rte_eal_config_reattach(void)
return -1;
}
- config->mem_config = mem_config;
+ runtime_state->mem_config = mem_config;
return 0;
}
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index e925fa9743..cf0c5b7332 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -63,7 +63,7 @@ rte_eal_hugepage_init(void)
struct eal_platform_info *platform_info = eal_get_platform_info();
/* get pointer to global configuration */
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
/* for debug purposes, hugetlbfs can be disabled */
if (user_cfg->no_hugetlbfs) {
@@ -350,7 +350,7 @@ memseg_list_alloc(struct rte_memseg_list *msl)
static int
memseg_primary_init(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int hpi_idx, msl_idx = 0;
struct rte_memseg_list *msl;
uint64_t max_mem, total_mem;
@@ -454,7 +454,7 @@ memseg_primary_init(void)
static int
memseg_secondary_init(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int msl_idx = 0;
struct rte_memseg_list *msl;
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 2ac2546391..04affc6a28 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -173,9 +173,9 @@ eal_clean_runtime_dir(void)
static int
rte_eal_config_create(void)
{
- struct rte_config *config = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
size_t page_sz = rte_mem_page_size();
- size_t cfg_len = sizeof(*config->mem_config);
+ size_t cfg_len = sizeof(*runtime_state->mem_config);
size_t cfg_len_aligned = RTE_ALIGN(cfg_len, page_sz);
void *rte_mem_cfg_addr, *mapped_mem_cfg_addr;
int retval;
@@ -243,14 +243,14 @@ rte_eal_config_create(void)
return -1;
}
- memcpy(rte_mem_cfg_addr, config->mem_config, sizeof(struct rte_mem_config));
- config->mem_config = rte_mem_cfg_addr;
+ memcpy(rte_mem_cfg_addr, runtime_state->mem_config, sizeof(struct rte_mem_config));
+ runtime_state->mem_config = rte_mem_cfg_addr;
/* store address of the config in the config itself so that secondary
* processes could later map the config into this exact location
*/
- config->mem_config->mem_cfg_addr = (uintptr_t) rte_mem_cfg_addr;
- config->mem_config->dma_maskbits = 0;
+ runtime_state->mem_config->mem_cfg_addr = (uintptr_t) rte_mem_cfg_addr;
+ runtime_state->mem_config->dma_maskbits = 0;
return 0;
}
@@ -259,7 +259,7 @@ rte_eal_config_create(void)
static int
rte_eal_config_attach(void)
{
- struct rte_config *config = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct rte_mem_config *mem_config;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
@@ -288,7 +288,7 @@ rte_eal_config_attach(void)
return -1;
}
- config->mem_config = mem_config;
+ runtime_state->mem_config = mem_config;
return 0;
}
@@ -297,7 +297,7 @@ rte_eal_config_attach(void)
static int
rte_eal_config_reattach(void)
{
- struct rte_config *config = rte_eal_get_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct rte_mem_config *mem_config;
void *rte_mem_cfg_addr;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
@@ -307,10 +307,10 @@ rte_eal_config_reattach(void)
/* save the address primary process has mapped shared config to */
rte_mem_cfg_addr =
- (void *) (uintptr_t) config->mem_config->mem_cfg_addr;
+ (void *) (uintptr_t) runtime_state->mem_config->mem_cfg_addr;
/* unmap original config */
- munmap(config->mem_config, sizeof(struct rte_mem_config));
+ munmap(runtime_state->mem_config, sizeof(struct rte_mem_config));
/* remap the config at proper address */
mem_config = (struct rte_mem_config *) mmap(rte_mem_cfg_addr,
@@ -333,7 +333,7 @@ rte_eal_config_reattach(void)
return -1;
}
- config->mem_config = mem_config;
+ runtime_state->mem_config = mem_config;
return 0;
}
diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c
index 5ae81429d9..035e5da08a 100644
--- a/lib/eal/linux/eal_memalloc.c
+++ b/lib/eal/linux/eal_memalloc.c
@@ -759,7 +759,7 @@ struct alloc_walk_param {
static int
alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct alloc_walk_param *wa = arg;
struct rte_memseg_list *cur_msl;
size_t page_sz;
@@ -893,7 +893,7 @@ struct free_walk_param {
static int
free_seg_walk(const struct rte_memseg_list *msl, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *found_msl;
struct free_walk_param *wa = arg;
uintptr_t start_addr, end_addr;
@@ -1323,7 +1323,7 @@ sync_existing(struct rte_memseg_list *primary_msl,
static int
sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *primary_msl, *local_msl;
struct eal_platform_info *platform_info = eal_get_platform_info();
struct hugepage_info *hi = NULL;
@@ -1376,7 +1376,7 @@ static int
secondary_msl_create_walk(const struct rte_memseg_list *msl,
void *arg __rte_unused)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *primary_msl, *local_msl;
char name[RTE_FBARRAY_NAME_LEN];
int msl_idx, ret;
@@ -1425,7 +1425,7 @@ static int
secondary_msl_destroy_walk(const struct rte_memseg_list *msl,
void *arg __rte_unused)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *local_msl;
int msl_idx, ret;
@@ -1508,7 +1508,7 @@ static int
fd_list_create_walk(const struct rte_memseg_list *msl,
void *arg __rte_unused)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
unsigned int len;
int msl_idx;
@@ -1524,7 +1524,7 @@ fd_list_create_walk(const struct rte_memseg_list *msl,
static int
fd_list_destroy_walk(const struct rte_memseg_list *msl, void *arg __rte_unused)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int msl_idx;
if (msl->external)
@@ -1538,7 +1538,7 @@ fd_list_destroy_walk(const struct rte_memseg_list *msl, void *arg __rte_unused)
int
eal_memalloc_set_seg_fd(int list_idx, int seg_idx, int fd)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* single file segments mode doesn't support individual segment fd's */
@@ -1593,7 +1593,7 @@ eal_memalloc_get_seg_fd(int list_idx, int seg_idx)
int
eal_memalloc_get_seg_fd_offset(int list_idx, int seg_idx, size_t *offset)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (user_cfg->single_file_segments) {
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index c341e9a599..1bfea89021 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -679,7 +679,7 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl,
static int
remap_segment(struct hugepage_file *hugepages, int seg_start, int seg_end)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *msl;
struct rte_fbarray *arr;
int cur_page, seg_len;
@@ -856,7 +856,7 @@ static int __rte_unused
prealloc_segments(struct hugepage_file *hugepages, int n_pages)
{
const struct eal_platform_info *platform_info = eal_get_platform_info();
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int cur_page, seg_start_page, end_seg, new_memseg;
unsigned int hpi_idx, socket, i;
int n_contig_segs, n_segs;
@@ -1154,7 +1154,7 @@ eal_legacy_hugepage_init(void)
memset(used_hp, 0, sizeof(used_hp));
/* get pointer to global configuration */
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
/* hugetlbfs can be disabled */
if (user_cfg->no_hugetlbfs) {
@@ -1531,7 +1531,7 @@ getFileSize(int fd)
static int
eal_legacy_hugepage_attach(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct hugepage_file *hp = NULL;
unsigned int num_hp = 0;
unsigned int i = 0;
@@ -1702,7 +1702,7 @@ rte_eal_using_phys_addrs(void)
static int __rte_unused
memseg_primary_init_32(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int active_sockets, hpi_idx, msl_idx = 0;
unsigned int socket_id, i;
struct rte_memseg_list *msl;
@@ -1905,7 +1905,7 @@ memseg_primary_init(void)
static int
memseg_secondary_init(void)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
int msl_idx = 0;
struct rte_memseg_list *msl;
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 72df9163f0..8de7d6d715 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -45,7 +45,7 @@ eal_proc_type_detect(void)
{
enum rte_proc_type_t ptype = RTE_PROC_PRIMARY;
const char *pathname = eal_runtime_config_path();
- const struct rte_config *config = rte_eal_get_configuration();
+ const struct rte_mem_config *config = eal_get_mcfg();
/* if we can open the file but not get a write-lock we are a secondary
* process. NOTE: if we get a file handle back, we keep that open
@@ -55,14 +55,14 @@ eal_proc_type_detect(void)
_O_RDWR, _SH_DENYNO, _S_IREAD | _S_IWRITE);
if (err == 0) {
OVERLAPPED soverlapped = { 0 };
- soverlapped.Offset = sizeof(*config->mem_config);
+ soverlapped.Offset = sizeof(*config);
soverlapped.OffsetHigh = 0;
HANDLE hwinfilehandle = (HANDLE)_get_osfhandle(mem_cfg_fd);
if (!LockFileEx(hwinfilehandle,
LOCKFILE_EXCLUSIVE_LOCK | LOCKFILE_FAIL_IMMEDIATELY, 0,
- sizeof(*config->mem_config), 0, &soverlapped))
+ sizeof(*config), 0, &soverlapped))
ptype = RTE_PROC_SECONDARY;
}
diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c
index 35eaf3a180..7eaae467d8 100644
--- a/lib/eal/windows/eal_memalloc.c
+++ b/lib/eal/windows/eal_memalloc.c
@@ -178,7 +178,7 @@ struct alloc_walk_param {
static int
alloc_seg_walk(const struct rte_memseg_list *msl, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct alloc_walk_param *wa = arg;
struct rte_memseg_list *cur_msl;
size_t page_sz;
@@ -279,7 +279,7 @@ struct free_walk_param {
static int
free_seg_walk(const struct rte_memseg_list *msl, void *arg)
{
- struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+ struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *found_msl;
struct free_walk_param *wa = arg;
uintptr_t start_addr, end_addr;
diff --git a/lib/eal/windows/eal_memory.c b/lib/eal/windows/eal_memory.c
index 8fcd636a3a..8a267a9215 100644
--- a/lib/eal/windows/eal_memory.c
+++ b/lib/eal/windows/eal_memory.c
@@ -677,7 +677,7 @@ eal_nohuge_init(void)
uint64_t mem_sz, page_sz;
void *addr;
- mcfg = rte_eal_get_configuration()->mem_config;
+ mcfg = eal_get_mcfg();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* nohuge mode is legacy mode */
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 21/44] eal: remove rte_config structure
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (19 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 20/44] eal: move memory config pointer to runtime state struct Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 22/44] eal: separate runtime state update from arg parsing Bruce Richardson
` (24 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The rte_config structure is no longer used, so remove it, along with the
function returning a pointer to it.
In the process, as a general cleanup, remove references to rte_config in
error and info messages where they no longer make sense.
For example, messages around creating or attaching to the shared memory
config were referencing "rte_config", even though rte_config itself is
not the structure stored in shared memory.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 10 ----------
lib/eal/common/eal_private.h | 15 ---------------
lib/eal/freebsd/eal.c | 18 +++++++++---------
lib/eal/include/rte_memzone.h | 10 +++++-----
lib/eal/include/rte_tailq.h | 2 +-
lib/eal/linux/eal.c | 18 +++++++++---------
6 files changed, 24 insertions(+), 49 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index ebb7c222d9..35654cc71f 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -20,9 +20,6 @@ static struct rte_mem_config early_mem_config = {
.memory_hotplug_lock = RTE_RWLOCK_INITIALIZER,
};
-/* Address of global and public configuration */
-static struct rte_config rte_config;
-
/* platform-specific runtime dir */
static char runtime_dir[UNIX_PATH_MAX];
@@ -56,13 +53,6 @@ eal_set_runtime_dir(const char *run_dir)
return 0;
}
-/* Return a pointer to the configuration structure */
-struct rte_config *
-rte_eal_get_configuration(void)
-{
- return &rte_config;
-}
-
/* Return a pointer to the memory config structure */
struct rte_mem_config *
eal_get_mcfg(void)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 2bb5c6c402..c5efdb070a 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -17,21 +17,6 @@
#include "eal_internal_cfg.h"
-/**
- * The global RTE configuration structure.
- */
-struct rte_config {
- int _unused; /**< dummy field to prevent empty struct */
-};
-
-/**
- * Get the global configuration structure.
- *
- * @return
- * A pointer to the global configuration structure.
- */
-struct rte_config *rte_eal_get_configuration(void);
-
/**
* Put the argument list into a structure.
*
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 98da77acc1..0fafb2c295 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -145,7 +145,7 @@ rte_eal_config_create(void)
rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr,
&cfg_len_aligned, page_sz, 0, 0);
if (rte_mem_cfg_addr == NULL) {
- EAL_LOG(ERR, "Cannot mmap memory for rte_config");
+ EAL_LOG(ERR, "Cannot mmap shared memory config");
close(mem_cfg_fd);
mem_cfg_fd = -1;
return -1;
@@ -156,7 +156,7 @@ rte_eal_config_create(void)
cfg_len_aligned, PROT_READ | PROT_WRITE,
MAP_SHARED | MAP_FIXED, mem_cfg_fd, 0);
if (mapped_mem_cfg_addr == MAP_FAILED) {
- EAL_LOG(ERR, "Cannot remap memory for rte_config: %s", strerror(errno));
+ EAL_LOG(ERR, "Cannot remap shared memory config: %s", strerror(errno));
munmap(rte_mem_cfg_addr, cfg_len);
close(mem_cfg_fd);
mem_cfg_fd = -1;
@@ -201,7 +201,7 @@ rte_eal_config_attach(void)
if (rte_mem_cfg_addr == MAP_FAILED) {
close(mem_cfg_fd);
mem_cfg_fd = -1;
- EAL_LOG(ERR, "Cannot mmap memory for rte_config! error %i (%s)",
+ EAL_LOG(ERR, "Cannot mmap shared memory config! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -239,12 +239,12 @@ rte_eal_config_reattach(void)
if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) {
if (mem_config != MAP_FAILED) {
- EAL_LOG(ERR, "Cannot mmap memory for rte_config at [%p], got [%p] - please use '--base-virtaddr' option",
+ EAL_LOG(ERR, "Cannot mmap shared memory config at [%p], got [%p] - please use '--base-virtaddr' option",
rte_mem_cfg_addr, mem_config);
munmap(mem_config, sizeof(struct rte_mem_config));
return -1;
}
- EAL_LOG(ERR, "Cannot mmap memory for rte_config! error %i (%s)",
+ EAL_LOG(ERR, "Cannot mmap shared memory config! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -280,9 +280,9 @@ eal_proc_type_detect(void)
return ptype;
}
-/* Sets up rte_config structure with the pointer to shared memory config.*/
+/* Attaches to or creates the shared memory config for this process. */
static int
-rte_config_init(void)
+eal_mem_config_init(void)
{
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
@@ -485,7 +485,7 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (rte_config_init() < 0) {
+ if (eal_mem_config_init() < 0) {
rte_eal_init_alert("Cannot init config");
goto err_out;
}
@@ -564,7 +564,7 @@ rte_eal_init(int argc, char **argv)
rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA");
if (!user_cfg->no_hugetlbfs) {
- /* rte_config isn't initialized yet */
+ /* shared mem config not yet attached */
ret = user_cfg->process_type == RTE_PROC_PRIMARY ?
eal_hugepage_info_init() :
eal_hugepage_info_read();
diff --git a/lib/eal/include/rte_memzone.h b/lib/eal/include/rte_memzone.h
index 5a0e1b8a15..7f94b6ff71 100644
--- a/lib/eal/include/rte_memzone.h
+++ b/lib/eal/include/rte_memzone.h
@@ -13,8 +13,8 @@
* portions of physical memory. These zones are identified by a name.
*
* The memzone descriptors are shared by all partitions and are
- * located in a known place of physical memory. This zone is accessed
- * using rte_eal_get_configuration(). The lookup (by name) of a
+ * located in a known place of physical memory accessible via the
+ * shared memory config. The lookup (by name) of a
* memory zone can be done in any partition and returns the same
* physical address.
*
@@ -137,7 +137,7 @@ size_t rte_memzone_max_get(void);
* A pointer to a correctly-filled read-only memzone descriptor, or NULL
* on error.
* On error case, rte_errno will be set appropriately:
- * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ * - E_RTE_NO_CONFIG - function could not get pointer to shared memory config
* - ENOSPC - the maximum number of memzones has already been allocated
* - EEXIST - a memzone with the same name already exists
* - ENOMEM - no appropriate memory area found in which to create memzone
@@ -202,7 +202,7 @@ const struct rte_memzone *rte_memzone_reserve(const char *name,
* A pointer to a correctly-filled read-only memzone descriptor, or NULL
* on error.
* On error case, rte_errno will be set appropriately:
- * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ * - E_RTE_NO_CONFIG - function could not get pointer to shared memory config
* - ENOSPC - the maximum number of memzones has already been allocated
* - EEXIST - a memzone with the same name already exists
* - ENOMEM - no appropriate memory area found in which to create memzone
@@ -273,7 +273,7 @@ const struct rte_memzone *rte_memzone_reserve_aligned(const char *name,
* A pointer to a correctly-filled read-only memzone descriptor, or NULL
* on error.
* On error case, rte_errno will be set appropriately:
- * - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ * - E_RTE_NO_CONFIG - function could not get pointer to shared memory config
* - ENOSPC - the maximum number of memzones has already been allocated
* - EEXIST - a memzone with the same name already exists
* - ENOMEM - no appropriate memory area found in which to create memzone
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index e7caed6812..dced107368 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -29,7 +29,7 @@ RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
/**
* The structure defining a tailq header entry for storing
- * in the rte_config structure in shared memory. Each tailq
+ * in shared memory. Each tailq
* is identified by name.
* Any library storing a set of objects e.g. rings, mempools, hash-tables,
* is recommended to use an entry here, so as to make it easy for
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 04affc6a28..8d3559aa37 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -225,7 +225,7 @@ rte_eal_config_create(void)
rte_mem_cfg_addr = eal_get_virtual_area(rte_mem_cfg_addr,
&cfg_len_aligned, page_sz, 0, 0);
if (rte_mem_cfg_addr == NULL) {
- EAL_LOG(ERR, "Cannot mmap memory for rte_config");
+ EAL_LOG(ERR, "Cannot mmap shared memory config");
close(mem_cfg_fd);
mem_cfg_fd = -1;
return -1;
@@ -239,7 +239,7 @@ rte_eal_config_create(void)
munmap(rte_mem_cfg_addr, cfg_len);
close(mem_cfg_fd);
mem_cfg_fd = -1;
- EAL_LOG(ERR, "Cannot remap memory for rte_config");
+ EAL_LOG(ERR, "Cannot remap shared memory config");
return -1;
}
@@ -283,7 +283,7 @@ rte_eal_config_attach(void)
if (mem_config == MAP_FAILED) {
close(mem_cfg_fd);
mem_cfg_fd = -1;
- EAL_LOG(ERR, "Cannot mmap memory for rte_config! error %i (%s)",
+ EAL_LOG(ERR, "Cannot mmap shared memory config! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -323,12 +323,12 @@ rte_eal_config_reattach(void)
if (mem_config == MAP_FAILED || mem_config != rte_mem_cfg_addr) {
if (mem_config != MAP_FAILED) {
/* errno is stale, don't use */
- EAL_LOG(ERR, "Cannot mmap memory for rte_config at [%p], got [%p] - please use '--base-virtaddr' option",
+ EAL_LOG(ERR, "Cannot mmap shared memory config at [%p], got [%p] - please use '--base-virtaddr' option",
rte_mem_cfg_addr, mem_config);
munmap(mem_config, sizeof(struct rte_mem_config));
return -1;
}
- EAL_LOG(ERR, "Cannot mmap memory for rte_config! error %i (%s)",
+ EAL_LOG(ERR, "Cannot mmap shared memory config! error %i (%s)",
errno, strerror(errno));
return -1;
}
@@ -364,9 +364,9 @@ eal_proc_type_detect(void)
return ptype;
}
-/* Sets up rte_config structure with the pointer to shared memory config.*/
+/* Attaches to or creates the shared memory config for this process. */
static int
-rte_config_init(void)
+eal_mem_config_init(void)
{
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
@@ -640,7 +640,7 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (rte_config_init() < 0) {
+ if (eal_mem_config_init() < 0) {
rte_eal_init_alert("Cannot init config");
goto err_out;
}
@@ -728,7 +728,7 @@ rte_eal_init(int argc, char **argv)
rte_eal_iova_mode() == RTE_IOVA_PA ? "PA" : "VA");
if (!user_cfg->no_hugetlbfs) {
- /* rte_config isn't initialized yet */
+ /* shared mem config not yet attached */
ret = user_cfg->process_type == RTE_PROC_PRIMARY ?
eal_hugepage_info_init() :
eal_hugepage_info_read();
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 22/44] eal: separate runtime state update from arg parsing
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (20 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 21/44] eal: remove rte_config structure Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 23/44] eal: move devopt_list staging list into user_cfg Bruce Richardson
` (23 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The eal_parse_args function was performing several updates to the
runtime state that do not logically belong in an argument-parsing
function. Move these into the "eal_adjust_config()" function, which is
renamed to the more accurate "eal_apply_runtime_state()". Also ensure
that all queries for the multi-process mode use the runtime state rather
than the original user-provided value, which needs resolution when it is
RTE_PROC_AUTO.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_options.c | 68 +++++++++++++----------------
lib/eal/common/eal_options.h | 2 +-
lib/eal/freebsd/eal.c | 5 +--
lib/eal/linux/eal.c | 5 +--
lib/eal/linux/eal_vfio.c | 9 ++--
5 files changed, 37 insertions(+), 52 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 9b3ba64c4c..dc6f4643c4 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -1960,7 +1960,6 @@ int
eal_parse_args(void)
{
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
bool remap_lcores = (args.remap_lcore_ids != NULL);
struct arg_list_elem *arg;
uint16_t lcore_id_base = 0;
@@ -2088,17 +2087,6 @@ eal_parse_args(void)
if (args.main_lcore != NULL && eal_parse_main_lcore(args.main_lcore) < 0)
return -1;
- if (user_cfg->main_lcore != -1) {
- runtime_state->main_lcore = user_cfg->main_lcore;
- } else {
- /* default main lcore is the first one */
- runtime_state->main_lcore = rte_get_next_lcore(-1, 0, 0);
- if (runtime_state->main_lcore >= RTE_MAX_LCORE) {
- EAL_LOG(ERR, "Main lcore is not enabled for DPDK");
- return -1;
- }
- }
-
/* memory options */
if (args.memory_size != NULL) {
user_cfg->memory = atoi(args.memory_size);
@@ -2299,23 +2287,11 @@ eal_parse_args(void)
}
}
-#ifndef RTE_EXEC_ENV_WINDOWS
- /* create runtime data directory. In no_shconf mode, skip any errors */
- if (eal_create_runtime_dir() < 0) {
- if (!user_cfg->no_shconf) {
- EAL_LOG(ERR, "Cannot create runtime directory");
- return -1;
- }
- EAL_LOG(WARNING, "No DPDK runtime directory created");
- }
-#endif
-
- if (eal_adjust_config() != 0) {
- EAL_LOG(ERR, "Invalid configuration");
- return -1;
- }
+ /* sum per-NUMA memory requests into user_cfg->memory */
+ for (int i = 0; i < RTE_MAX_NUMA_NODES; i++)
+ user_cfg->memory += user_cfg->numa_mem[i];
- return 0;
+ return eal_apply_runtime_state();
}
static void
@@ -2363,20 +2339,38 @@ eal_cleanup_config(const struct eal_user_cfg *user_cfg)
}
int
-eal_adjust_config(void)
+eal_apply_runtime_state(void)
{
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- int i;
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- if (user_cfg->process_type == RTE_PROC_AUTO)
- user_cfg->process_type = eal_proc_type_detect();
+ /* set the main lcore */
+ if (user_cfg->main_lcore != -1) {
+ runtime_state->main_lcore = user_cfg->main_lcore;
+ } else {
+ /* default main lcore is the first one */
+ runtime_state->main_lcore = rte_get_next_lcore(-1, 0, 0);
+ if (runtime_state->main_lcore >= RTE_MAX_LCORE) {
+ EAL_LOG(ERR, "Main lcore is not enabled for DPDK");
+ return -1;
+ }
+ }
- compute_ctrl_threads_cpuset();
+#ifndef RTE_EXEC_ENV_WINDOWS
+ /* create runtime data directory. In no_shconf mode, skip any errors */
+ if (eal_create_runtime_dir() < 0) {
+ if (!user_cfg->no_shconf) {
+ EAL_LOG(ERR, "Cannot create runtime directory");
+ return -1;
+ }
+ EAL_LOG(WARNING, "No DPDK runtime directory created");
+ }
+#endif
- /* if no memory amounts were requested, this will result in 0 and
- * will be overridden later, right after eal_hugepage_info_init() */
- for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- user_cfg->memory += user_cfg->numa_mem[i];
+ runtime_state->process_type = (user_cfg->process_type == RTE_PROC_AUTO) ?
+ eal_proc_type_detect() : user_cfg->process_type;
+
+ compute_ctrl_threads_cpuset();
return 0;
}
diff --git a/lib/eal/common/eal_options.h b/lib/eal/common/eal_options.h
index a70c5b0c05..d5ad7a4720 100644
--- a/lib/eal/common/eal_options.h
+++ b/lib/eal/common/eal_options.h
@@ -13,7 +13,7 @@ struct eal_user_cfg;
int eal_parse_log_options(void);
int eal_parse_args(void);
int eal_option_device_parse(void);
-int eal_adjust_config(void);
+int eal_apply_runtime_state(void);
int eal_cleanup_config(const struct eal_user_cfg *user_cfg);
enum rte_proc_type_t eal_proc_type_detect(void);
int eal_plugins_init(void);
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 0fafb2c295..16748f965e 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -285,9 +285,6 @@ static int
eal_mem_config_init(void)
{
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
-
- runtime_state->process_type = user_cfg->process_type;
switch (runtime_state->process_type) {
case RTE_PROC_PRIMARY:
@@ -565,7 +562,7 @@ rte_eal_init(int argc, char **argv)
if (!user_cfg->no_hugetlbfs) {
/* shared mem config not yet attached */
- ret = user_cfg->process_type == RTE_PROC_PRIMARY ?
+ ret = rte_eal_process_type() == RTE_PROC_PRIMARY ?
eal_hugepage_info_init() :
eal_hugepage_info_read();
if (ret < 0) {
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 8d3559aa37..8d67d6744f 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -369,9 +369,6 @@ static int
eal_mem_config_init(void)
{
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
-
- runtime_state->process_type = user_cfg->process_type;
switch (runtime_state->process_type) {
case RTE_PROC_PRIMARY:
@@ -729,7 +726,7 @@ rte_eal_init(int argc, char **argv)
if (!user_cfg->no_hugetlbfs) {
/* shared mem config not yet attached */
- ret = user_cfg->process_type == RTE_PROC_PRIMARY ?
+ ret = rte_eal_process_type() == RTE_PROC_PRIMARY ?
eal_hugepage_info_init() :
eal_hugepage_info_read();
if (ret < 0) {
diff --git a/lib/eal/linux/eal_vfio.c b/lib/eal/linux/eal_vfio.c
index 678ac57e87..019966593e 100644
--- a/lib/eal/linux/eal_vfio.c
+++ b/lib/eal/linux/eal_vfio.c
@@ -482,8 +482,7 @@ vfio_get_group_fd(struct vfio_config *vfio_cfg,
* knowledge of them. Requesting a group fd from the primary for a
* container it doesn't know about would be incorrect.
*/
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- bool mp_request = (user_cfg->process_type == RTE_PROC_SECONDARY) &&
+ bool mp_request = (rte_eal_process_type() == RTE_PROC_SECONDARY) &&
(vfio_cfg == default_vfio_cfg);
vfio_group_fd = vfio_open_group_fd(iommu_group_num, mp_request);
@@ -770,7 +769,6 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
int iommu_group_num;
rte_uuid_t vf_token;
int i, ret;
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* get group number */
ret = rte_vfio_get_group_num(sysfs_base, dev_addr, &iommu_group_num);
@@ -852,7 +850,7 @@ rte_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
* Note this can happen several times with the hotplug
* functionality.
*/
- if (user_cfg->process_type == RTE_PROC_PRIMARY &&
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY &&
vfio_cfg->vfio_active_groups == 1 &&
vfio_group_device_count(vfio_group_fd) == 0) {
const struct vfio_iommu_type *t;
@@ -1105,7 +1103,6 @@ rte_vfio_enable(const char *modname)
unsigned int i, j;
int vfio_available;
DIR *dir;
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
rte_spinlock_recursive_t lock = RTE_SPINLOCK_RECURSIVE_INITIALIZER;
@@ -1149,7 +1146,7 @@ rte_vfio_enable(const char *modname)
}
closedir(dir);
- if (user_cfg->process_type == RTE_PROC_PRIMARY) {
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
if (vfio_mp_sync_setup() == -1) {
default_vfio_cfg->vfio_container_fd = -1;
} else {
--
2.51.0
* [RFC PATCH 23/44] eal: move devopt_list staging list into user_cfg
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (21 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 22/44] eal: separate runtime state update from arg parsing Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 24/44] eal: separate plugin paths from loaded plugin objects Bruce Richardson
` (22 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The file-static devopt_list in eal_common_options.c holds device
options (-a/-b/--vdev) staged during arg parsing for later consumption
by eal_option_device_parse(). Rather than keeping this data in
file-scope statics, store it in the user_cfg struct.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_options.c | 21 ++++++---------------
lib/eal/common/eal_internal_cfg.h | 15 +++++++++++++++
2 files changed, 21 insertions(+), 15 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index dc6f4643c4..71fc69e80d 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -289,18 +289,6 @@ static const char *default_solib_dir = RTE_EAL_PMD_PATH;
RTE_PMD_EXPORT_SYMBOL(const char, dpdk_solib_path)[] =
"DPDK_PLUGIN_PATH=" RTE_EAL_PMD_PATH;
-TAILQ_HEAD(device_option_list, device_option);
-
-struct device_option {
- TAILQ_ENTRY(device_option) next;
-
- enum rte_devtype type;
- char arg[];
-};
-
-static struct device_option_list devopt_list =
-TAILQ_HEAD_INITIALIZER(devopt_list);
-
/* Returns rte_usage_hook_t */
rte_usage_hook_t
eal_get_application_usage_hook(void)
@@ -438,6 +426,7 @@ eal_clean_saved_args(void)
static int
eal_option_device_add(enum rte_devtype type, const char *arg)
{
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct device_option *devopt;
size_t arglen;
int ret;
@@ -456,25 +445,26 @@ eal_option_device_add(enum rte_devtype type, const char *arg)
free(devopt);
return -EINVAL;
}
- TAILQ_INSERT_TAIL(&devopt_list, devopt, next);
+ TAILQ_INSERT_TAIL(&user_cfg->devopt_list, devopt, next);
return 0;
}
int
eal_option_device_parse(void)
{
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct device_option *devopt;
void *tmp;
int ret = 0;
- RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+ RTE_TAILQ_FOREACH_SAFE(devopt, &user_cfg->devopt_list, next, tmp) {
if (ret == 0) {
ret = rte_devargs_add(devopt->type, devopt->arg);
if (ret)
EAL_LOG(ERR, "Unable to parse device '%s'",
devopt->arg);
}
- TAILQ_REMOVE(&devopt_list, devopt, next);
+ TAILQ_REMOVE(&user_cfg->devopt_list, devopt, next);
free(devopt);
}
return ret;
@@ -498,6 +488,7 @@ eal_reset_internal_config(void)
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int i;
+ TAILQ_INIT(&user_cfg->devopt_list);
user_cfg->memory = 0;
user_cfg->force_nrank = 0;
user_cfg->force_nchannel = 0;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 47af403c27..4decc26d2c 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -10,6 +10,9 @@
#ifndef EAL_INTERNAL_CFG_H
#define EAL_INTERNAL_CFG_H
+#include <sys/queue.h>
+
+#include <rte_devargs.h>
#include <rte_eal.h>
#include <rte_os_shim.h>
#include <rte_pci_dev_feature_defs.h>
@@ -54,11 +57,23 @@ struct hugepage_file_discipline {
bool unlink_existing;
};
+/**
+ * A single device option (-a/-b/--vdev) staged during arg parsing.
+ * Lives in user_cfg->devopt_list; drained by eal_option_device_parse().
+ */
+struct device_option {
+ TAILQ_ENTRY(device_option) next;
+ enum rte_devtype type;
+ char arg[];
+};
+TAILQ_HEAD(eal_devopt_list, device_option);
+
/**
* User-provided EAL initialization configuration.
* Immutable after initialization, so no need for atomic types or locks.
*/
struct eal_user_cfg {
+ struct eal_devopt_list devopt_list; /**< staged device options (-a/-b/--vdev) */
size_t memory; /**< amount of asked memory */
size_t huge_worker_stack_size; /**< worker thread stack size */
enum rte_proc_type_t process_type; /**< requested process type */
--
2.51.0
* [RFC PATCH 24/44] eal: separate plugin paths from loaded plugin objects
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (22 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 23/44] eal: move devopt_list staging list into user_cfg Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 25/44] eal: simplify internal driver path iteration APIs Bruce Richardson
` (21 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
A single data structure in EAL managed both the loaded plugins and the
list of plugin paths provided by the user, distinguishing the two rather
awkwardly with a flag value. Since we now have separate user_cfg and
runtime_state structures, split the plugin info between the two structs:
user-provided directory and file paths in one, and the actually loaded
.so paths and handles in the other.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_options.c | 248 +++++++++++++++-------------
lib/eal/common/eal_internal_cfg.h | 22 +++
2 files changed, 158 insertions(+), 112 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 71fc69e80d..63ab7980c1 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -261,21 +261,6 @@ eal_collate_args(int argc, char **argv)
return retval - 1;
}
-TAILQ_HEAD(shared_driver_list, shared_driver);
-
-/* Definition for shared object drivers. */
-struct shared_driver {
- TAILQ_ENTRY(shared_driver) next;
-
- char name[PATH_MAX];
- void* lib_handle;
- bool from_cmdline; /**< true if from -d flag, false if driver found in a directory */
-};
-
-/* List of external loadable drivers */
-static struct shared_driver_list solib_list =
-TAILQ_HEAD_INITIALIZER(solib_list);
-
#ifndef RTE_EXEC_ENV_WINDOWS
/* Default path of external loadable drivers */
static const char *default_solib_dir = RTE_EAL_PMD_PATH;
@@ -489,6 +474,8 @@ eal_reset_internal_config(void)
int i;
TAILQ_INIT(&user_cfg->devopt_list);
+ TAILQ_INIT(&user_cfg->plugin_list);
+ TAILQ_INIT(&runtime_state->loaded_plugins);
user_cfg->memory = 0;
user_cfg->force_nrank = 0;
user_cfg->force_nchannel = 0;
@@ -539,19 +526,18 @@ eal_reset_internal_config(void)
}
static int
-eal_plugin_add(const char *path, bool from_cmdline)
+eal_plugin_path_add(const char *path)
{
- struct shared_driver *solib;
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_plugin_path *p;
- solib = malloc(sizeof(*solib));
- if (solib == NULL) {
- EAL_LOG(ERR, "malloc(solib) failed");
+ p = malloc(sizeof(*p));
+ if (p == NULL) {
+ EAL_LOG(ERR, "malloc(plugin_path) failed");
return -1;
}
- memset(solib, 0, sizeof(*solib));
- strlcpy(solib->name, path, PATH_MAX);
- solib->from_cmdline = from_cmdline;
- TAILQ_INSERT_TAIL(&solib_list, solib, next);
+ strlcpy(p->name, path, PATH_MAX);
+ TAILQ_INSERT_TAIL(&user_cfg->plugin_list, p, next);
return 0;
}
@@ -573,53 +559,6 @@ ends_with(const char *str, const char *tail)
return str_len >= tail_len && strcmp(&str[str_len - tail_len], tail) == 0;
}
-static int
-eal_plugindir_init(const char *path)
-{
- struct dirent *dent = NULL;
- DIR *d = NULL;
-
- if (path == NULL || *path == '\0')
- return 0;
-
- d = opendir(path);
- if (d == NULL) {
- EAL_LOG(ERR, "failed to open directory %s: %s",
- path, strerror(errno));
- return -1;
- }
-
- while ((dent = readdir(d)) != NULL) {
- char *sopath = NULL;
- struct stat sb;
-
- if (!ends_with(dent->d_name, ".so") && !ends_with(dent->d_name, ".so."ABI_VERSION))
- continue;
-
- if (asprintf(&sopath, "%s/%s", path, dent->d_name) < 0) {
- EAL_LOG(ERR, "failed to create full path %s/%s",
- path, dent->d_name);
- continue;
- }
-
- /* if a regular file, add to list to load */
- if (!(stat(sopath, &sb) == 0 && S_ISREG(sb.st_mode))) {
- free(sopath);
- continue;
- }
-
- if (eal_plugin_add(sopath, false) == -1) {
- free(sopath);
- break;
- }
- free(sopath);
- }
-
- closedir(d);
- /* XXX this ignores failures from readdir() itself */
- return (dent == NULL) ? 0 : -1;
-}
-
static int
verify_perms(const char *dirpath)
{
@@ -691,6 +630,65 @@ eal_dlopen(const char *pathname)
return retval;
}
+static int
+eal_plugindir_init(const char *path)
+{
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ struct dirent *dent = NULL;
+ DIR *d = NULL;
+
+ if (path == NULL || *path == '\0')
+ return 0;
+
+ d = opendir(path);
+ if (d == NULL) {
+ EAL_LOG(ERR, "failed to open directory %s: %s",
+ path, strerror(errno));
+ return -1;
+ }
+
+ while ((dent = readdir(d)) != NULL) {
+ char *sopath = NULL;
+ struct shared_driver *solib;
+ struct stat sb;
+
+ if (!ends_with(dent->d_name, ".so") && !ends_with(dent->d_name, ".so."ABI_VERSION))
+ continue;
+
+ if (asprintf(&sopath, "%s/%s", path, dent->d_name) < 0) {
+ EAL_LOG(ERR, "failed to create full path %s/%s",
+ path, dent->d_name);
+ continue;
+ }
+
+ /* if not a regular file, skip */
+ if (!(stat(sopath, &sb) == 0 && S_ISREG(sb.st_mode))) {
+ free(sopath);
+ continue;
+ }
+
+ solib = calloc(1, sizeof(*solib));
+ if (solib == NULL) {
+ free(sopath);
+ break;
+ }
+ strlcpy(solib->name, sopath, PATH_MAX);
+ free(sopath);
+
+ EAL_LOG(DEBUG, "open shared lib %s", solib->name);
+ solib->lib_handle = eal_dlopen(solib->name);
+ if (solib->lib_handle == NULL) {
+ free(solib);
+ break;
+ }
+ TAILQ_INSERT_TAIL(&runtime_state->loaded_plugins, solib, next);
+ }
+
+ closedir(d);
+ /* XXX this ignores failures from readdir() itself */
+ return (dent == NULL) ? 0 : -1;
+}
+
static int
is_shared_build(void)
{
@@ -731,37 +729,45 @@ is_shared_build(void)
int
eal_plugins_init(void)
{
- struct shared_driver *solib = NULL;
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ struct eal_plugin_path *p;
struct stat sb;
- /* If we are not statically linked, add default driver loading
- * path if it exists as a directory.
- * (Using dlopen with NOLOAD flag on EAL, will return NULL if the EAL
- * shared library is not already loaded i.e. it's statically linked.)
- */
+ TAILQ_INIT(&runtime_state->loaded_plugins);
+
+ /* If we are not statically linked, scan the default driver directory. */
if (is_shared_build() &&
*default_solib_dir != '\0' &&
stat(default_solib_dir, &sb) == 0 &&
- S_ISDIR(sb.st_mode))
- eal_plugin_add(default_solib_dir, false);
-
- TAILQ_FOREACH(solib, &solib_list, next) {
+ S_ISDIR(sb.st_mode)) {
+ if (eal_plugindir_init(default_solib_dir) == -1) {
+ EAL_LOG(ERR, "Cannot init plugin directory %s",
+ default_solib_dir);
+ return -1;
+ }
+ }
- if (stat(solib->name, &sb) == 0 && S_ISDIR(sb.st_mode)) {
- if (eal_plugindir_init(solib->name) == -1) {
- EAL_LOG(ERR,
- "Cannot init plugin directory %s",
- solib->name);
+ TAILQ_FOREACH(p, &user_cfg->plugin_list, next) {
+ if (stat(p->name, &sb) == 0 && S_ISDIR(sb.st_mode)) {
+ if (eal_plugindir_init(p->name) == -1) {
+ EAL_LOG(ERR, "Cannot init plugin directory %s",
+ p->name);
return -1;
}
} else {
- EAL_LOG(DEBUG, "open shared lib %s",
- solib->name);
+ struct shared_driver *solib = calloc(1, sizeof(*solib));
+ if (solib == NULL)
+ return -1;
+ strlcpy(solib->name, p->name, PATH_MAX);
+ EAL_LOG(DEBUG, "open shared lib %s", solib->name);
solib->lib_handle = eal_dlopen(solib->name);
- if (solib->lib_handle == NULL)
+ if (solib->lib_handle == NULL) {
+ free(solib);
return -1;
+ }
+ TAILQ_INSERT_TAIL(&runtime_state->loaded_plugins, solib, next);
}
-
}
return 0;
}
@@ -771,40 +777,58 @@ RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_driver_path_next)
const char *
rte_eal_driver_path_next(const char *start, bool cmdline_only)
{
- struct shared_driver *solib;
+ if (cmdline_only) {
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_plugin_path *p;
- if (start == NULL) {
- solib = TAILQ_FIRST(&solib_list);
- } else {
- /* Find the current entry based on the name string */
- TAILQ_FOREACH(solib, &solib_list, next) {
- if (start == solib->name) {
- solib = TAILQ_NEXT(solib, next);
- break;
+ if (start == NULL) {
+ p = TAILQ_FIRST(&user_cfg->plugin_list);
+ } else {
+ TAILQ_FOREACH(p, &user_cfg->plugin_list, next) {
+ if (start == p->name) {
+ p = TAILQ_NEXT(p, next);
+ break;
+ }
}
+ if (p == NULL)
+ return NULL;
}
- if (solib == NULL)
- return NULL;
- }
+ return p ? p->name : NULL;
+ } else {
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ struct shared_driver *solib;
- /* Skip entries that were expanded from directories if cmdline_only is true */
- if (cmdline_only) {
- while (solib != NULL && !solib->from_cmdline)
- solib = TAILQ_NEXT(solib, next);
+ if (start == NULL) {
+ solib = TAILQ_FIRST(&runtime_state->loaded_plugins);
+ } else {
+ TAILQ_FOREACH(solib, &runtime_state->loaded_plugins, next) {
+ if (start == solib->name) {
+ solib = TAILQ_NEXT(solib, next);
+ break;
+ }
+ }
+ if (solib == NULL)
+ return NULL;
+ }
+ return solib ? solib->name : NULL;
}
-
- return solib ? solib->name : NULL;
}
RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_driver_path_count)
unsigned int
rte_eal_driver_path_count(bool cmdline_only)
{
- struct shared_driver *solib;
unsigned int count = 0;
- TAILQ_FOREACH(solib, &solib_list, next) {
- if (!cmdline_only || solib->from_cmdline)
+ if (cmdline_only) {
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_plugin_path *p;
+ TAILQ_FOREACH(p, &user_cfg->plugin_list, next)
+ count++;
+ } else {
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ struct shared_driver *solib;
+ TAILQ_FOREACH(solib, &runtime_state->loaded_plugins, next)
count++;
}
@@ -1987,7 +2011,7 @@ eal_parse_args(void)
return -1;
/* driver loading options */
TAILQ_FOREACH(arg, &args.driver_path, next)
- if (eal_plugin_add(arg->arg, true) < 0)
+ if (eal_plugin_path_add(arg->arg) < 0)
return -1;
if (remap_lcores && args.remap_lcore_ids != (void *)1) {
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 4decc26d2c..6894bbf9d5 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -57,6 +57,16 @@ struct hugepage_file_discipline {
bool unlink_existing;
};
+/**
+ * A plugin path provided by the user via -d, staged during arg parsing.
+ * Lives in user_cfg->plugin_list; consumed by eal_plugins_init().
+ */
+struct eal_plugin_path {
+ TAILQ_ENTRY(eal_plugin_path) next;
+ char name[PATH_MAX];
+};
+TAILQ_HEAD(eal_plugin_path_list, eal_plugin_path);
+
/**
* A single device option (-a/-b/--vdev) staged during arg parsing.
* Lives in user_cfg->devopt_list; drained by eal_option_device_parse().
@@ -74,6 +84,7 @@ TAILQ_HEAD(eal_devopt_list, device_option);
*/
struct eal_user_cfg {
struct eal_devopt_list devopt_list; /**< staged device options (-a/-b/--vdev) */
+ struct eal_plugin_path_list plugin_list; /**< user-provided plugin paths (-d) */
size_t memory; /**< amount of asked memory */
size_t huge_worker_stack_size; /**< worker thread stack size */
enum rte_proc_type_t process_type; /**< requested process type */
@@ -148,6 +159,16 @@ struct lcore_cfg {
volatile RTE_ATOMIC(enum rte_lcore_state_t) state; /**< lcore state */
};
+/**
+ * A plugin loaded by EAL, including directory-expanded entries.
+ */
+struct shared_driver {
+ TAILQ_ENTRY(shared_driver) next;
+ char name[PATH_MAX];
+ void *lib_handle;
+};
+TAILQ_HEAD(eal_solib_list, shared_driver);
+
/**
* Internal EAL runtime state
* May be modified at runtime, so access must be protected by locks or atomic types
@@ -163,6 +184,7 @@ struct eal_runtime_state {
uint32_t lcore_count; /**< Number of active lcore IDs (role != ROLE_OFF). */
struct lcore_cfg lcore_cfg[RTE_MAX_LCORE];
struct rte_mem_config *mem_config; /**< pointer to memory config (in shared memory) */
+ struct eal_solib_list loaded_plugins; /**< all plugins loaded by eal_plugins_init() */
};
struct eal_user_cfg *eal_get_user_configuration(void);
--
2.51.0
* [RFC PATCH 25/44] eal: simplify internal driver path iteration APIs
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (23 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 24/44] eal: separate plugin paths from loaded plugin objects Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 26/44] eal: move trace config into user config struct Bruce Richardson
` (20 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The APIs for counting and iterating the driver paths originally iterated
the mixed list of both user-provided paths and loaded .so paths.
However, in practice only the user-provided paths were ever queried.
Since supporting both types of iteration now means duplicating the work
in code, as we have two different lists of different types to iterate,
just drop the unused support and update the API to iterate only the
user-provided list. This simplifies the API (dropping a parameter) and
the implementation (removing iteration of the runtime state list), and
has no ABI impact since the APIs are explicitly marked as internal. The
iteration macro is updated to match the APIs.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/process.h | 4 +-
lib/eal/common/eal_common_options.c | 60 ++++++++---------------------
lib/eal/include/rte_eal.h | 35 ++++++-----------
3 files changed, 31 insertions(+), 68 deletions(-)
diff --git a/app/test/process.h b/app/test/process.h
index df43966a2a..a4085e98fb 100644
--- a/app/test/process.h
+++ b/app/test/process.h
@@ -70,7 +70,7 @@ add_parameter_driver_path(char **argv, int max_capacity)
const char *driver_path;
int count = 0;
- RTE_EAL_DRIVER_PATH_FOREACH(driver_path, true) {
+ RTE_EAL_DRIVER_PATH_FOREACH(driver_path) {
if (asprintf(&argv[count], PREFIX_DRIVER_PATH"%s", driver_path) < 0)
break;
@@ -109,7 +109,7 @@ process_dup(const char *const argv[], int numargs, const char *env_value)
return -1;
else if (pid == 0) {
allow_num = rte_devargs_type_count(RTE_DEVTYPE_ALLOWED);
- driver_path_num = rte_eal_driver_path_count(true);
+ driver_path_num = rte_eal_driver_path_count();
argv_num = numargs + allow_num + driver_path_num + 1;
argv_cpy = calloc(argv_num, sizeof(char *));
if (!argv_cpy)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 63ab7980c1..835e518e2c 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -775,62 +775,36 @@ eal_plugins_init(void)
RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_driver_path_next)
const char *
-rte_eal_driver_path_next(const char *start, bool cmdline_only)
+rte_eal_driver_path_next(const char *start)
{
- if (cmdline_only) {
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_plugin_path *p;
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_plugin_path *p;
- if (start == NULL) {
- p = TAILQ_FIRST(&user_cfg->plugin_list);
- } else {
- TAILQ_FOREACH(p, &user_cfg->plugin_list, next) {
- if (start == p->name) {
- p = TAILQ_NEXT(p, next);
- break;
- }
- }
- if (p == NULL)
- return NULL;
- }
- return p ? p->name : NULL;
+ if (start == NULL) {
+ p = TAILQ_FIRST(&user_cfg->plugin_list);
} else {
- const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- struct shared_driver *solib;
-
- if (start == NULL) {
- solib = TAILQ_FIRST(&runtime_state->loaded_plugins);
- } else {
- TAILQ_FOREACH(solib, &runtime_state->loaded_plugins, next) {
- if (start == solib->name) {
- solib = TAILQ_NEXT(solib, next);
- break;
- }
+ TAILQ_FOREACH(p, &user_cfg->plugin_list, next) {
+ if (start == p->name) {
+ p = TAILQ_NEXT(p, next);
+ break;
}
- if (solib == NULL)
- return NULL;
}
- return solib ? solib->name : NULL;
+ if (p == NULL)
+ return NULL;
}
+ return p ? p->name : NULL;
}
RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_driver_path_count)
unsigned int
-rte_eal_driver_path_count(bool cmdline_only)
+rte_eal_driver_path_count(void)
{
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_plugin_path *p;
unsigned int count = 0;
- if (cmdline_only) {
- const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_plugin_path *p;
- TAILQ_FOREACH(p, &user_cfg->plugin_list, next)
- count++;
- } else {
- const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- struct shared_driver *solib;
- TAILQ_FOREACH(solib, &runtime_state->loaded_plugins, next)
- count++;
- }
+ TAILQ_FOREACH(p, &user_cfg->plugin_list, next)
+ count++;
return count;
}
diff --git a/lib/eal/include/rte_eal.h b/lib/eal/include/rte_eal.h
index 7241f3be5d..6711ee4440 100644
--- a/lib/eal/include/rte_eal.h
+++ b/lib/eal/include/rte_eal.h
@@ -493,57 +493,46 @@ rte_eal_get_runtime_dir(void);
/**
* @internal
- * Iterate to the next driver path.
+ * Iterate to the next user-provided driver path.
*
- * This function iterates through the list of dynamically loaded drivers,
- * or driver paths that were specified via -d or --driver-path command-line
- * options during EAL initialization.
+ * This function iterates through the driver paths that were specified
+ * via -d or --driver-path command-line options during EAL initialization.
*
* @param start
* Starting iteration point. The iteration will start at the first driver path if NULL.
- * @param cmdline_only
- * If true, only iterate paths from command line (-d flags).
- * If false, iterate all paths including those expanded from directories.
*
* @return
* Next driver path string, NULL if there is none.
*/
__rte_internal
const char *
-rte_eal_driver_path_next(const char *start, bool cmdline_only);
+rte_eal_driver_path_next(const char *start);
/**
* @internal
- * Iterate over all driver paths.
+ * Iterate over all user-provided driver paths.
*
* This macro provides a convenient way to iterate through all driver paths
- * that were loaded via -d flags during EAL initialization.
+ * that were specified via -d flags during EAL initialization.
*
* @param path
* Iterator variable of type const char *
- * @param cmdline_only
- * If true, only iterate paths from command line (-d flags).
- * If false, iterate all paths including those expanded from directories.
*/
-#define RTE_EAL_DRIVER_PATH_FOREACH(path, cmdline_only) \
- for (path = rte_eal_driver_path_next(NULL, cmdline_only); \
+#define RTE_EAL_DRIVER_PATH_FOREACH(path) \
+ for (path = rte_eal_driver_path_next(NULL); \
path != NULL; \
- path = rte_eal_driver_path_next(path, cmdline_only))
+ path = rte_eal_driver_path_next(path))
/**
* @internal
- * Get count of driver paths.
- *
- * @param cmdline_only
- * If true, only count paths from command line (-d flags).
- * If false, count all paths including those expanded from directories.
+ * Get count of user-provided driver paths.
*
* @return
- * Number of driver paths.
+ * Number of driver paths specified via -d flags during EAL initialization.
*/
__rte_internal
unsigned int
-rte_eal_driver_path_count(bool cmdline_only);
+rte_eal_driver_path_count(void);
#ifdef __cplusplus
}
--
2.51.0
* [RFC PATCH 26/44] eal: move trace config into user config struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (24 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 25/44] eal: simplify internal driver path iteration APIs Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 27/44] eal: record service cores in " Bruce Richardson
` (19 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The trace settings (pattern list, directory, buffer size, mode) were
previously stored directly in the internal struct trace singleton during
arg parsing, via helper functions. Move these settings into eal_user_cfg
and drop the helper functions.
NOTE: after these changes, eal_cleanup_config needs a non-const user
config pointer to do its cleanup. Rather than making the pointer in
rte_eal_cleanup non-const, have eal_cleanup_config take no parameters,
like all the other functions in the cleanup chain; it can then obtain
its own non-const user config pointer internally.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_options.c | 53 ++++++++++--
lib/eal/common/eal_common_trace.c | 30 +++++--
lib/eal/common/eal_common_trace_utils.c | 104 ------------------------
lib/eal/common/eal_internal_cfg.h | 15 ++++
lib/eal/common/eal_options.h | 2 +-
lib/eal/common/eal_trace.h | 11 ---
lib/eal/freebsd/eal.c | 3 +-
lib/eal/linux/eal.c | 2 +-
lib/eal/windows/eal.c | 4 +-
9 files changed, 87 insertions(+), 137 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 835e518e2c..18d6ee3f5a 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -14,6 +14,7 @@
#include <sys/queue.h>
#ifndef RTE_EXEC_ENV_WINDOWS
#include <dlfcn.h>
+#include <fnmatch.h>
#include <libgen.h>
#endif
#include <sys/stat.h>
@@ -475,6 +476,7 @@ eal_reset_internal_config(void)
TAILQ_INIT(&user_cfg->devopt_list);
TAILQ_INIT(&user_cfg->plugin_list);
+ STAILQ_INIT(&user_cfg->trace_patterns);
TAILQ_INIT(&runtime_state->loaded_plugins);
user_cfg->memory = 0;
user_cfg->force_nrank = 0;
@@ -2178,28 +2180,56 @@ eal_parse_args(void)
EAL_LOG(WARNING, "Tracing is not supported on Windows, ignoring tracing parameters");
#else
TAILQ_FOREACH(arg, &args.trace, next) {
- if (eal_trace_args_save(arg->arg) < 0) {
- EAL_LOG(ERR, "invalid trace parameter, '%s'", arg->arg);
+ struct eal_trace_arg *ta = malloc(sizeof(*ta));
+ if (ta == NULL) {
+ EAL_LOG(ERR, "failed to allocate trace arg for '%s'", arg->arg);
return -1;
}
+ ta->val = strdup(arg->arg);
+ if (ta->val == NULL) {
+ EAL_LOG(ERR, "failed to allocate trace arg for '%s'", arg->arg);
+ free(ta);
+ return -1;
+ }
+ STAILQ_INSERT_TAIL(&user_cfg->trace_patterns, ta, next);
}
if (args.trace_dir != NULL) {
- if (eal_trace_dir_args_save(args.trace_dir) < 0) {
+ if (asprintf(&user_cfg->trace_dir, "%s/", args.trace_dir) == -1) {
EAL_LOG(ERR, "invalid trace directory, '%s'", args.trace_dir);
return -1;
}
}
if (args.trace_bufsz != NULL) {
- if (eal_trace_bufsz_args_save(args.trace_bufsz) < 0) {
+ uint64_t bufsz = rte_str_to_size(args.trace_bufsz);
+ if (bufsz == 0) {
EAL_LOG(ERR, "invalid trace buffer size, '%s'", args.trace_bufsz);
return -1;
}
+ user_cfg->trace_bufsz = bufsz;
}
if (args.trace_mode != NULL) {
- if (eal_trace_mode_args_save(args.trace_mode) < 0) {
+ size_t len = strlen(args.trace_mode);
+ char *pattern;
+ if (len == 0) {
+ EAL_LOG(ERR, "trace mode value is empty");
+ return -1;
+ }
+ pattern = calloc(1, len + 2);
+ if (pattern == NULL) {
+ EAL_LOG(ERR, "failed to allocate memory for trace mode");
+ return -1;
+ }
+ sprintf(pattern, "%s*", args.trace_mode);
+ if (fnmatch(pattern, "overwrite", 0) == 0)
+ user_cfg->trace_mode = RTE_TRACE_MODE_OVERWRITE;
+ else if (fnmatch(pattern, "discard", 0) == 0)
+ user_cfg->trace_mode = RTE_TRACE_MODE_DISCARD;
+ else {
EAL_LOG(ERR, "invalid trace mode, '%s'", args.trace_mode);
+ free(pattern);
return -1;
}
+ free(pattern);
}
#endif
@@ -2318,8 +2348,19 @@ compute_ctrl_threads_cpuset(void)
}
int
-eal_cleanup_config(const struct eal_user_cfg *user_cfg)
+eal_cleanup_config(void)
{
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_trace_arg *ta;
+
+ /* free trace patterns list */
+ while (!STAILQ_EMPTY(&user_cfg->trace_patterns)) {
+ ta = STAILQ_FIRST(&user_cfg->trace_patterns);
+ STAILQ_REMOVE_HEAD(&user_cfg->trace_patterns, next);
+ free(ta->val);
+ free(ta);
+ }
+ free(user_cfg->trace_dir);
free(user_cfg->hugefile_prefix);
free(user_cfg->hugepage_dir);
free(user_cfg->user_mbuf_pool_ops_name);
diff --git a/lib/eal/common/eal_common_trace.c b/lib/eal/common/eal_common_trace.c
index a76dff0017..3d984ac8b1 100644
--- a/lib/eal/common/eal_common_trace.c
+++ b/lib/eal/common/eal_common_trace.c
@@ -24,7 +24,7 @@ RTE_DEFINE_PER_LCORE(void *, trace_mem);
static RTE_DEFINE_PER_LCORE(char *, ctf_field);
static struct trace_point_head tp_list = STAILQ_HEAD_INITIALIZER(tp_list);
-static struct trace trace = { .args = STAILQ_HEAD_INITIALIZER(trace.args), };
+static struct trace trace;
struct trace *
trace_obj_get(void)
@@ -41,7 +41,8 @@ trace_list_head_get(void)
int
eal_trace_init(void)
{
- struct trace_arg *arg;
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ const struct eal_trace_arg *arg;
/* Trace memory should start with 8B aligned for natural alignment */
RTE_BUILD_BUG_ON((offsetof(struct __rte_trace_header, mem) % 8) != 0);
@@ -58,14 +59,24 @@ eal_trace_init(void)
if (trace_has_duplicate_entry())
goto fail;
+ /* Copy trace directory from user config (trace.dir may be reallocated later) */
+ if (user_cfg->trace_dir != NULL) {
+ trace.dir = strdup(user_cfg->trace_dir);
+ if (trace.dir == NULL) {
+ rte_errno = ENOMEM;
+ goto fail;
+ }
+ }
+
+ /* Apply buffer size from user config, then fill in default if still 0 */
+ trace.buff_len = user_cfg->trace_bufsz;
+ trace_bufsz_args_apply();
+
/* Generate UUID ver 4 with total size of events and number of
* events
*/
trace_uuid_generate();
- /* Apply buffer size configuration for trace output */
- trace_bufsz_args_apply();
-
/* Generate CTF TDSL metadata */
if (trace_metadata_create() < 0)
goto fail;
@@ -74,11 +85,11 @@ eal_trace_init(void)
if (trace_epoch_time_save() < 0)
goto free_meta;
- /* Apply global configurations */
- STAILQ_FOREACH(arg, &trace.args, next)
+ /* Apply trace pattern filters from user config */
+ STAILQ_FOREACH(arg, &user_cfg->trace_patterns, next)
trace_args_apply(arg->val);
- rte_trace_mode_set(trace.mode);
+ rte_trace_mode_set(user_cfg->trace_mode);
return 0;
@@ -94,7 +105,8 @@ eal_trace_fini(void)
{
trace_mem_free();
trace_metadata_destroy();
- eal_trace_args_free();
+ free(trace.dir);
+ trace.dir = NULL;
}
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_trace_is_enabled, 20.05)
diff --git a/lib/eal/common/eal_common_trace_utils.c b/lib/eal/common/eal_common_trace_utils.c
index e1996433b7..821036b4bb 100644
--- a/lib/eal/common/eal_common_trace_utils.c
+++ b/lib/eal/common/eal_common_trace_utils.c
@@ -2,7 +2,6 @@
* Copyright(C) 2020 Marvell International Ltd.
*/
-#include <fnmatch.h>
#include <pwd.h>
#include <sys/stat.h>
#include <time.h>
@@ -132,42 +131,6 @@ trace_dir_update(const char *str)
return rc;
}
-int
-eal_trace_args_save(const char *val)
-{
- struct trace *trace = trace_obj_get();
- struct trace_arg *arg = malloc(sizeof(*arg));
-
- if (arg == NULL) {
- trace_err("failed to allocate memory for %s", val);
- return -ENOMEM;
- }
-
- arg->val = strdup(val);
- if (arg->val == NULL) {
- trace_err("failed to allocate memory for %s", val);
- free(arg);
- return -ENOMEM;
- }
-
- STAILQ_INSERT_TAIL(&trace->args, arg, next);
- return 0;
-}
-
-void
-eal_trace_args_free(void)
-{
- struct trace *trace = trace_obj_get();
- struct trace_arg *arg;
-
- while (!STAILQ_EMPTY(&trace->args)) {
- arg = STAILQ_FIRST(&trace->args);
- STAILQ_REMOVE_HEAD(&trace->args, next);
- free(arg->val);
- free(arg);
- }
-}
-
int
trace_args_apply(const char *arg)
{
@@ -179,22 +142,6 @@ trace_args_apply(const char *arg)
return 0;
}
-int
-eal_trace_bufsz_args_save(char const *val)
-{
- struct trace *trace = trace_obj_get();
- uint64_t bufsz;
-
- bufsz = rte_str_to_size(val);
- if (bufsz == 0) {
- trace_err("buffer size cannot be zero");
- return -EINVAL;
- }
-
- trace->buff_len = bufsz;
- return 0;
-}
-
void
trace_bufsz_args_apply(void)
{
@@ -204,57 +151,6 @@ trace_bufsz_args_apply(void)
trace->buff_len = 1024 * 1024; /* 1MB */
}
-int
-eal_trace_mode_args_save(const char *val)
-{
- struct trace *trace = trace_obj_get();
- size_t len = strlen(val);
- unsigned long tmp;
- char *pattern;
-
- if (len == 0) {
- trace_err("value is not provided with option");
- return -EINVAL;
- }
-
- pattern = (char *)calloc(1, len + 2);
- if (pattern == NULL) {
- trace_err("fail to allocate memory");
- return -ENOMEM;
- }
-
- sprintf(pattern, "%s*", val);
-
- if (fnmatch(pattern, "overwrite", 0) == 0)
- tmp = RTE_TRACE_MODE_OVERWRITE;
- else if (fnmatch(pattern, "discard", 0) == 0)
- tmp = RTE_TRACE_MODE_DISCARD;
- else {
- free(pattern);
- return -EINVAL;
- }
-
- trace->mode = tmp;
- free(pattern);
- return 0;
-}
-
-int
-eal_trace_dir_args_save(char const *val)
-{
- char *dir_path;
- int rc;
-
- if (asprintf(&dir_path, "%s/", val) == -1) {
- trace_err("failed to copy directory: %s", strerror(errno));
- return -ENOMEM;
- }
-
- rc = trace_dir_update(dir_path);
- free(dir_path);
- return rc;
-}
-
int
trace_epoch_time_save(void)
{
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 6894bbf9d5..79722577a5 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -16,6 +16,7 @@
#include <rte_eal.h>
#include <rte_os_shim.h>
#include <rte_pci_dev_feature_defs.h>
+#include <rte_trace.h>
#include <stdint.h>
#include <stdbool.h>
@@ -57,6 +58,16 @@ struct hugepage_file_discipline {
bool unlink_existing;
};
+/**
+ * A saved trace pattern string from --trace, staged during arg parsing.
+ * Lives in user_cfg->trace_patterns; applied during eal_trace_init().
+ */
+struct eal_trace_arg {
+ STAILQ_ENTRY(eal_trace_arg) next;
+ char *val;
+};
+STAILQ_HEAD(eal_trace_arg_list, eal_trace_arg);
+
/**
* A plugin path provided by the user via -d, staged during arg parsing.
* Lives in user_cfg->plugin_list; consumed by eal_plugins_init().
@@ -85,6 +96,10 @@ TAILQ_HEAD(eal_devopt_list, device_option);
struct eal_user_cfg {
struct eal_devopt_list devopt_list; /**< staged device options (-a/-b/--vdev) */
struct eal_plugin_path_list plugin_list; /**< user-provided plugin paths (-d) */
+ struct eal_trace_arg_list trace_patterns; /**< saved --trace patterns */
+ char *trace_dir; /**< trace output directory (NULL = use default) */
+ uint64_t trace_bufsz; /**< trace buffer size in bytes (0 = use default 1 MB) */
+ enum rte_trace_mode trace_mode; /**< trace mode (default RTE_TRACE_MODE_OVERWRITE) */
size_t memory; /**< amount of asked memory */
size_t huge_worker_stack_size; /**< worker thread stack size */
enum rte_proc_type_t process_type; /**< requested process type */
diff --git a/lib/eal/common/eal_options.h b/lib/eal/common/eal_options.h
index d5ad7a4720..d20381a48f 100644
--- a/lib/eal/common/eal_options.h
+++ b/lib/eal/common/eal_options.h
@@ -14,7 +14,7 @@ int eal_parse_log_options(void);
int eal_parse_args(void);
int eal_option_device_parse(void);
int eal_apply_runtime_state(void);
-int eal_cleanup_config(const struct eal_user_cfg *user_cfg);
+int eal_cleanup_config(void);
enum rte_proc_type_t eal_proc_type_detect(void);
int eal_plugins_init(void);
int eal_save_args(int argc, char **argv);
diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h
index 55262677e0..c7ef7d12f7 100644
--- a/lib/eal/common/eal_trace.h
+++ b/lib/eal/common/eal_trace.h
@@ -42,11 +42,6 @@ struct thread_mem_meta {
enum trace_area_e area;
};
-struct trace_arg {
- STAILQ_ENTRY(trace_arg) next;
- char *val;
-};
-
struct trace {
char *dir;
int register_errno;
@@ -54,7 +49,6 @@ struct trace {
enum rte_trace_mode mode;
rte_uuid_t uuid;
uint32_t buff_len;
- STAILQ_HEAD(, trace_arg) args;
uint32_t nb_trace_points;
uint32_t nb_trace_mem_list;
struct thread_mem_meta *lcore_meta;
@@ -107,10 +101,5 @@ void trace_mem_per_thread_free(void);
/* EAL interface */
int eal_trace_init(void);
void eal_trace_fini(void);
-int eal_trace_args_save(const char *val);
-void eal_trace_args_free(void);
-int eal_trace_dir_args_save(const char *val);
-int eal_trace_mode_args_save(const char *val);
-int eal_trace_bufsz_args_save(const char *val);
#endif /* __EAL_TRACE_H */
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 16748f965e..b1155dfc2c 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -762,7 +762,6 @@ rte_eal_cleanup(void)
return -1;
}
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
rte_service_finalize();
eal_bus_cleanup();
rte_mp_channel_cleanup();
@@ -771,7 +770,7 @@ rte_eal_cleanup(void)
eal_trace_fini();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
- eal_cleanup_config(user_cfg);
+ eal_cleanup_config();
eal_lcore_var_cleanup();
return 0;
}
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 8d67d6744f..4c716f2a09 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -974,7 +974,7 @@ rte_eal_cleanup(void)
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
rte_eal_malloc_heap_cleanup();
- eal_cleanup_config(user_cfg);
+ eal_cleanup_config();
eal_lcore_var_cleanup();
rte_eal_log_cleanup();
return 0;
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 8de7d6d715..e0d7c4e612 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -136,14 +136,12 @@ RTE_EXPORT_SYMBOL(rte_eal_cleanup)
int
rte_eal_cleanup(void)
{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
-
eal_intr_thread_cancel();
eal_mem_virt2iova_cleanup();
eal_bus_cleanup();
/* after this point, any DPDK pointers will become dangling */
rte_eal_memory_detach();
- eal_cleanup_config(user_cfg);
+ eal_cleanup_config();
eal_lcore_var_cleanup();
return 0;
}
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 27/44] eal: record service cores in user config struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (25 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 26/44] eal: move trace config into user config struct Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 28/44] eal: store user-provided lcore info " Bruce Richardson
` (18 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The user-provided service coremask or service corelist needs to be
recorded in the user config struct, so store it there as a cpuset.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_options.c | 155 ++++++++--------------------
lib/eal/common/eal_internal_cfg.h | 1 +
2 files changed, 43 insertions(+), 113 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 18d6ee3f5a..076e939292 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -523,6 +523,7 @@ eal_reset_internal_config(void)
user_cfg->user_mbuf_pool_ops_name = NULL;
CPU_ZERO(&runtime_state->ctrl_cpuset);
runtime_state->init_complete = 0;
+ CPU_ZERO(&user_cfg->service_cpuset);
user_cfg->max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH;
user_cfg->max_simd_bitwidth.forced = 0;
}
@@ -830,21 +831,19 @@ static int xdigit2val(unsigned char c)
}
static int
-eal_parse_service_coremask(const char *coremask)
+eal_parse_service_coremask(const char *coremask, rte_cpuset_t *cpuset)
{
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int i, j, idx = 0;
unsigned int count = 0;
char c;
int val;
- uint32_t taken_lcore_count = 0;
EAL_LOG(WARNING, "'-s <service-coremask>' is deprecated, and will be removed in a future release.");
EAL_LOG(WARNING, "\tUse '-S <service-corelist>' option instead.");
if (coremask == NULL)
return -1;
- /* Remove all blank characters ahead and after .
+ /* Remove all blank characters ahead and after.
* Remove 0x/0X if exists.
*/
while (isblank(*coremask))
@@ -866,20 +865,9 @@ eal_parse_service_coremask(const char *coremask)
return -1;
}
val = xdigit2val(c);
- for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE;
- j++, idx++) {
+ for (j = 0; j < BITS_PER_HEX && idx < RTE_MAX_LCORE; j++, idx++) {
if ((1 << j) & val) {
-
- if (eal_cpu_detected(idx) == 0) {
- EAL_LOG(ERR,
- "lcore %u unavailable", idx);
- return -1;
- }
-
- if (runtime_state->lcore_cfg[idx].role == ROLE_RTE)
- taken_lcore_count++;
-
- runtime_state->lcore_cfg[idx].role = ROLE_SERVICE;
+ CPU_SET(idx, cpuset);
count++;
}
}
@@ -889,16 +877,15 @@ eal_parse_service_coremask(const char *coremask)
if (coremask[i] != '0')
return -1;
- if (count == 0)
- return -1;
-
- if (taken_lcore_count != count) {
- EAL_LOG(WARNING,
- "Not all service cores are in the coremask. "
- "Please ensure -c or -l includes service cores");
- }
+ return count > 0 ? 0 : -1;
+}
- return 0;
+static int
+eal_parse_service_corelist(const char *corelist, rte_cpuset_t *cpuset)
+{
+ if (rte_argparse_parse_type(corelist, RTE_ARGPARSE_VALUE_TYPE_CORELIST, cpuset) != 0)
+ return -1;
+ return CPU_COUNT(cpuset) > 0 ? 0 : -1;
}
static int
@@ -1071,89 +1058,6 @@ rte_eal_parse_coremask(const char *coremask, rte_cpuset_t *cpuset, bool limit_ra
return 0;
}
-static int
-eal_parse_service_corelist(const char *corelist)
-{
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- int i;
- unsigned count = 0;
- char *end = NULL;
- uint32_t min, max, idx;
- uint32_t taken_lcore_count = 0;
-
- if (corelist == NULL)
- return -1;
-
- /* Remove all blank characters ahead and after */
- while (isblank(*corelist))
- corelist++;
- i = strlen(corelist);
- while ((i > 0) && isblank(corelist[i - 1]))
- i--;
-
- /* Get list of cores */
- min = RTE_MAX_LCORE;
- do {
- while (isblank(*corelist))
- corelist++;
- if (*corelist == '\0')
- return -1;
- errno = 0;
- idx = strtoul(corelist, &end, 10);
- if (errno || end == NULL)
- return -1;
- if (idx >= RTE_MAX_LCORE)
- return -1;
- while (isblank(*end))
- end++;
- if (*end == '-') {
- min = idx;
- } else if ((*end == ',') || (*end == '\0')) {
- max = idx;
- if (min == RTE_MAX_LCORE)
- min = idx;
- for (idx = min; idx <= max; idx++) {
- if (runtime_state->lcore_cfg[idx].role != ROLE_SERVICE) {
- if (runtime_state->lcore_cfg[idx].role == ROLE_RTE)
- taken_lcore_count++;
-
- runtime_state->lcore_cfg[idx].role = ROLE_SERVICE;
- count++;
- }
- }
- min = RTE_MAX_LCORE;
- } else
- return -1;
- corelist = end + 1;
- } while (*end != '\0');
-
- if (count == 0)
- return -1;
-
- if (taken_lcore_count != count) {
- EAL_LOG(WARNING,
- "Not all service cores were in the coremask. "
- "Please ensure -c or -l includes service cores");
- }
-
- /* log the configured service cores for debugging */
- rte_cpuset_t service_cpuset;
- CPU_ZERO(&service_cpuset);
- for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (runtime_state->lcore_cfg[i].role == ROLE_SERVICE)
- CPU_SET(i, &service_cpuset);
- }
- if (CPU_COUNT(&service_cpuset) > 0) {
- char *cpuset_str = eal_cpuset_to_str(&service_cpuset);
- if (cpuset_str != NULL) {
- EAL_LOG(DEBUG, "Service cores configured: %s", cpuset_str);
- free(cpuset_str);
- }
- }
-
- return 0;
-}
-
/* Changes the lcore id of the main thread */
static int
eal_parse_main_lcore(const char *arg)
@@ -1169,8 +1073,8 @@ eal_parse_main_lcore(const char *arg)
if (user_cfg->main_lcore >= RTE_MAX_LCORE)
return -1;
- /* ensure main core is not used as service core */
- if (runtime_state->lcore_cfg[user_cfg->main_lcore].role == ROLE_SERVICE) {
+ /* check that the main core is not already assigned as a service core */
+ if (CPU_ISSET(user_cfg->main_lcore, &user_cfg->service_cpuset)) {
EAL_LOG(ERR, "Error: Main lcore is used as a service core");
return -1;
}
@@ -2062,13 +1966,15 @@ eal_parse_args(void)
/* service core options */
if (args.service_coremask != NULL) {
- if (eal_parse_service_coremask(args.service_coremask) < 0) {
+ if (eal_parse_service_coremask(args.service_coremask,
+ &user_cfg->service_cpuset) < 0) {
EAL_LOG(ERR, "invalid service coremask: '%s'",
args.service_coremask);
return -1;
}
} else if (args.service_corelist != NULL) {
- if (eal_parse_service_corelist(args.service_corelist) < 0) {
+ if (eal_parse_service_corelist(args.service_corelist,
+ &user_cfg->service_cpuset) < 0) {
EAL_LOG(ERR, "invalid service core list: '%s'",
args.service_corelist);
return -1;
@@ -2374,6 +2280,29 @@ eal_apply_runtime_state(void)
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ /* Apply service core roles: service_cpuset bits are lcore IDs */
+ if (CPU_COUNT(&user_cfg->service_cpuset) > 0) {
+ unsigned int i;
+ char *cpuset_str;
+
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (!CPU_ISSET(i, &user_cfg->service_cpuset))
+ continue;
+ if (runtime_state->lcore_cfg[i].role != ROLE_RTE) {
+ EAL_LOG(WARNING,
+ "service lcore %u is not in the enabled lcore set; ignoring",
+ i);
+ continue;
+ }
+ runtime_state->lcore_cfg[i].role = ROLE_SERVICE;
+ }
+ cpuset_str = eal_cpuset_to_str(&user_cfg->service_cpuset);
+ if (cpuset_str != NULL) {
+ EAL_LOG(DEBUG, "Service cores configured: %s", cpuset_str);
+ free(cpuset_str);
+ }
+ }
+
/* set the main lcore */
if (user_cfg->main_lcore != -1) {
runtime_state->main_lcore = user_cfg->main_lcore;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 79722577a5..99ffde5c8b 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -130,6 +130,7 @@ struct eal_user_cfg {
uintptr_t base_virtaddr; /**< base address to try and reserve memory from */
uint64_t numa_mem[RTE_MAX_NUMA_NODES]; /**< amount of memory per NUMA node */
uint64_t numa_limit[RTE_MAX_NUMA_NODES]; /**< limit amount of memory per NUMA node */
+ rte_cpuset_t service_cpuset; /**< service lcore IDs (bits = lcore IDs to use as service cores) */
int main_lcore; /**< ID of the main lcore */
};
--
2.51.0
* [RFC PATCH 28/44] eal: store user-provided lcore info in user config struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (26 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 27/44] eal: record service cores in " Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 29/44] eal: clarify docs on params taking lcore IDs Bruce Richardson
` (17 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The user provides details of which lcores are to run on which CPUs in a
variety of ways. Map all of those to a single array of cpusets in the
user_cfg struct, such that each lcore ID that is to be used has a cpuset
of the physical CPU IDs it may run on. Then, after argument parsing, that
array can be used to populate the runtime configuration appropriately.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_options.c | 146 ++++++++++++++++++----------
lib/eal/common/eal_internal_cfg.h | 11 ++-
lib/eal/freebsd/eal.c | 1 +
lib/eal/linux/eal.c | 1 +
lib/eal/windows/eal.c | 1 +
5 files changed, 108 insertions(+), 52 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 076e939292..bd08d29e1d 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -524,6 +524,10 @@ eal_reset_internal_config(void)
CPU_ZERO(&runtime_state->ctrl_cpuset);
runtime_state->init_complete = 0;
CPU_ZERO(&user_cfg->service_cpuset);
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ free(user_cfg->lcore_cpusets[i]);
+ user_cfg->lcore_cpusets[i] = NULL;
+ }
user_cfg->max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH;
user_cfg->max_simd_bitwidth.forced = 0;
}
@@ -888,22 +892,19 @@ eal_parse_service_corelist(const char *corelist, rte_cpuset_t *cpuset)
return CPU_COUNT(cpuset) > 0 ? 0 : -1;
}
+/* Expand a flat cpuset into lcore_cpusets[], assigning lcore IDs.
+ * If remap is false: lcore_id == physical CPU id (identity mapping).
+ * If remap is true: lcore IDs are assigned sequentially from remap_base.
+ * Returns the number of lcores configured, or -1 on error. */
static int
-update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
+eal_expand_cpuset_to_map(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base,
+ rte_cpuset_t **lcore_cpusets)
{
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
unsigned int lcore_id = remap_base;
unsigned int count = 0;
unsigned int i;
int ret = 0;
- /* set everything to disabled first, then set up values */
- for (i = 0; i < RTE_MAX_LCORE; i++) {
- runtime_state->lcore_cfg[i].role = ROLE_OFF;
- runtime_state->lcore_cfg[i].core_index = -1;
- }
-
- /* now go through the cpuset */
for (i = 0; i < CPU_SETSIZE; i++) {
if (CPU_ISSET(i, cpuset)) {
if (eal_cpu_detected(i) == 0) {
@@ -927,11 +928,17 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
continue;
}
- runtime_state->lcore_cfg[lcore_id].role = ROLE_RTE;
- runtime_state->lcore_cfg[lcore_id].core_index = count;
- CPU_ZERO(&runtime_state->lcore_cfg[lcore_id].cpuset);
- CPU_SET(i, &runtime_state->lcore_cfg[lcore_id].cpuset);
- runtime_state->lcore_cfg[lcore_id].first_cpu = i;
+ lcore_cpusets[lcore_id] = malloc(sizeof(rte_cpuset_t));
+ if (lcore_cpusets[lcore_id] == NULL) {
+ EAL_LOG(ERR, "failed to allocate cpuset for lcore %u", lcore_id);
+ for (unsigned int j = 0; j < lcore_id; j++) {
+ free(lcore_cpusets[j]);
+ lcore_cpusets[j] = NULL;
+ }
+ return -1;
+ }
+ CPU_ZERO(lcore_cpusets[lcore_id]);
+ CPU_SET(i, lcore_cpusets[lcore_id]);
EAL_LOG(DEBUG, "lcore %u mapped to physical core %u", lcore_id, i);
lcore_id++;
count++;
@@ -941,9 +948,9 @@ update_lcore_config(const rte_cpuset_t *cpuset, bool remap, uint16_t remap_base)
EAL_LOG(ERR, "No valid lcores in core list");
ret = -1;
}
- if (!ret)
- runtime_state->lcore_count = count;
- return ret;
+ if (ret == -1)
+ return -1;
+ return (int)count;
}
static int
@@ -1064,7 +1071,6 @@ eal_parse_main_lcore(const char *arg)
{
char *parsing_end;
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
errno = 0;
user_cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
@@ -1078,8 +1084,9 @@ eal_parse_main_lcore(const char *arg)
EAL_LOG(ERR, "Error: Main lcore is used as a service core");
return -1;
}
- /* check that we have the core recorded in the core list */
- if (runtime_state->lcore_cfg[user_cfg->main_lcore].role != ROLE_RTE) {
+
+ /* lcore_cpusets is always populated before eal_parse_main_lcore is called */
+ if (user_cfg->lcore_cpusets[user_cfg->main_lcore] == NULL) {
EAL_LOG(ERR, "Error: Main lcore is not enabled for DPDK");
return -1;
}
@@ -1241,15 +1248,18 @@ check_cpuset(rte_cpuset_t *set)
* lcore 6 runs on cpuset 0x41 (cpu 0,6)
* lcore 7 runs on cpuset 0x80 (cpu 7)
* lcore 8 runs on cpuset 0x100 (cpu 8)
+ *
+ * Writes the physical-CPU affinity for each mentioned lcore_id into
+ * cpusets[lcore_id]. Slots not mentioned are left as NULL.
+ * Returns the number of distinct lcore IDs configured, or -1 on error.
*/
static int
-eal_parse_lcores(const char *lcores)
+eal_parse_lcores_to_map(const char *lcores, rte_cpuset_t **cpusets)
{
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
rte_cpuset_t lcore_set;
unsigned int set_count;
- unsigned idx = 0;
- unsigned count = 0;
+ unsigned int idx;
+ int count = 0;
const char *lcore_start = NULL;
const char *end = NULL;
int offset;
@@ -1266,14 +1276,6 @@ eal_parse_lcores(const char *lcores)
CPU_ZERO(&cpuset);
- /* Reset lcore config */
- for (idx = 0; idx < RTE_MAX_LCORE; idx++) {
- runtime_state->lcore_cfg[idx].role = ROLE_OFF;
- runtime_state->lcore_cfg[idx].core_index = -1;
- CPU_ZERO(&runtime_state->lcore_cfg[idx].cpuset);
- runtime_state->lcore_cfg[idx].first_cpu = UINT16_MAX;
- }
-
/* Get list of cores */
do {
while (isblank(*lcores))
@@ -1322,7 +1324,7 @@ eal_parse_lcores(const char *lcores)
/* without '@', by default using lcore_set as cpuset */
if (*lcores != '@')
- rte_memcpy(&cpuset, &lcore_set, sizeof(cpuset));
+ memcpy(&cpuset, &lcore_set, sizeof(cpuset));
set_count = CPU_COUNT(&lcore_set);
/* start to update lcore_set */
@@ -1331,12 +1333,6 @@ eal_parse_lcores(const char *lcores)
continue;
set_count--;
- if (runtime_state->lcore_cfg[idx].role != ROLE_RTE) {
- runtime_state->lcore_cfg[idx].core_index = count;
- runtime_state->lcore_cfg[idx].role = ROLE_RTE;
- count++;
- }
-
if (lflags) {
CPU_ZERO(&cpuset);
CPU_SET(idx, &cpuset);
@@ -1344,10 +1340,16 @@ eal_parse_lcores(const char *lcores)
if (check_cpuset(&cpuset) < 0)
goto err;
- rte_memcpy(&runtime_state->lcore_cfg[idx].cpuset, &cpuset,
- sizeof(rte_cpuset_t));
- runtime_state->lcore_cfg[idx].first_cpu =
- (uint16_t)(RTE_CPU_FFS(&cpuset) - 1);
+ if (cpusets[idx] == NULL) {
+ cpusets[idx] = malloc(sizeof(rte_cpuset_t));
+ if (cpusets[idx] == NULL) {
+ EAL_LOG(ERR, "failed to allocate cpuset for lcore %u", idx);
+ ret = -1;
+ goto err;
+ }
+ count++;
+ }
+ memcpy(cpusets[idx], &cpuset, sizeof(rte_cpuset_t));
}
/* some cores from the lcore_set can't be handled by EAL */
@@ -1360,11 +1362,14 @@ eal_parse_lcores(const char *lcores)
if (count == 0)
goto err;
- runtime_state->lcore_count = count;
- ret = 0;
-
+ ret = count;
err:
-
+ if (ret == -1) {
+ for (unsigned int j = 0; j < RTE_MAX_LCORE; j++) {
+ free(cpusets[j]);
+ cpusets[j] = NULL;
+ }
+ }
return ret;
}
@@ -1916,7 +1921,7 @@ eal_parse_args(void)
/* First handle the special case where we have explicit core mapping/remapping */
if (manual_lcore_mapping) {
- if (eal_parse_lcores(args.lcores) < 0) {
+ if (eal_parse_lcores_to_map(args.lcores, user_cfg->lcore_cpusets) < 0) {
EAL_LOG(ERR, "invalid lcore mapping list: '%s'", args.lcores);
return -1;
}
@@ -1954,7 +1959,8 @@ eal_parse_args(void)
EAL_LOG(DEBUG, "Cores selected by %s: %s", cpuset_source, cpuset_str);
free(cpuset_str);
}
- if (update_lcore_config(&cpuset, remap_lcores, lcore_id_base) < 0) {
+ if (eal_expand_cpuset_to_map(&cpuset, remap_lcores, lcore_id_base,
+ user_cfg->lcore_cpusets) < 0) {
char *available = available_cores();
EAL_LOG(ERR, "invalid coremask or core-list parameter, please check specified cores are part of %s",
@@ -2270,7 +2276,44 @@ eal_cleanup_config(void)
free(user_cfg->hugefile_prefix);
free(user_cfg->hugepage_dir);
free(user_cfg->user_mbuf_pool_ops_name);
+ for (unsigned int i = 0; i < RTE_MAX_LCORE; i++) {
+ free(user_cfg->lcore_cpusets[i]);
+ user_cfg->lcore_cpusets[i] = NULL;
+ }
+
+ return 0;
+}
+
+static int
+eal_apply_lcore_config(void)
+{
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+
+ /* lcore_cpusets[] is always populated at parse time for all input forms */
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ unsigned int i;
+ unsigned int count = 0;
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (user_cfg->lcore_cpusets[i] == NULL) {
+ runtime_state->lcore_cfg[i].role = ROLE_OFF;
+ runtime_state->lcore_cfg[i].core_index = -1;
+ CPU_ZERO(&runtime_state->lcore_cfg[i].cpuset);
+ runtime_state->lcore_cfg[i].first_cpu = UINT16_MAX;
+ continue;
+ }
+ runtime_state->lcore_cfg[i].role = ROLE_RTE;
+ runtime_state->lcore_cfg[i].core_index = count++;
+ memcpy(&runtime_state->lcore_cfg[i].cpuset,
+ user_cfg->lcore_cpusets[i], sizeof(rte_cpuset_t));
+ runtime_state->lcore_cfg[i].first_cpu =
+ (uint16_t)(RTE_CPU_FFS(&runtime_state->lcore_cfg[i].cpuset) - 1);
+ }
+ if (count == 0) {
+ EAL_LOG(ERR, "No valid lcores in core list");
+ return -1;
+ }
+ runtime_state->lcore_count = count;
return 0;
}
@@ -2280,6 +2323,9 @@ eal_apply_runtime_state(void)
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ if (eal_apply_lcore_config() < 0)
+ return -1;
+
/* Apply service core roles: service_cpuset bits are lcore IDs */
if (CPU_COUNT(&user_cfg->service_cpuset) > 0) {
unsigned int i;
@@ -2289,7 +2335,7 @@ eal_apply_runtime_state(void)
if (!CPU_ISSET(i, &user_cfg->service_cpuset))
continue;
if (runtime_state->lcore_cfg[i].role != ROLE_RTE) {
- EAL_LOG(WARNING,
+ EAL_LOG(WARNING,
"service lcore %u is not in the enabled lcore set; ignoring",
i);
continue;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 99ffde5c8b..239fe2a7ac 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -130,8 +130,15 @@ struct eal_user_cfg {
uintptr_t base_virtaddr; /**< base address to try and reserve memory from */
uint64_t numa_mem[RTE_MAX_NUMA_NODES]; /**< amount of memory per NUMA node */
uint64_t numa_limit[RTE_MAX_NUMA_NODES]; /**< limit amount of memory per NUMA node */
- rte_cpuset_t service_cpuset; /**< service lcore IDs (bits = lcore IDs to use as service cores) */
- int main_lcore; /**< ID of the main lcore */
+ rte_cpuset_t service_cpuset; /**< service lcore IDs (bits = lcore IDs to use as service cores) */
+
+ /** Per-lcore cpuset array, always populated at arg-parse time for all input forms
+ * (-c coremask, -l corelist, --lcores with or without '@'/'()').
+ * Each non-NULL slot is an individually heap-allocated rte_cpuset_t.
+ * NULL means the corresponding lcore ID is not configured.
+ */
+ rte_cpuset_t *lcore_cpusets[RTE_MAX_LCORE];
+ int main_lcore; /**< ID of the main lcore */
};
/**
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index b1155dfc2c..120425d425 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -744,6 +744,7 @@ rte_eal_init(int argc, char **argv)
return fctret;
err_out:
rte_atomic_store_explicit(&run_once, 0, rte_memory_order_relaxed);
+ eal_cleanup_config();
eal_clean_saved_args();
return -1;
}
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 4c716f2a09..3f2ad98425 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -919,6 +919,7 @@ rte_eal_init(int argc, char **argv)
err_out:
rte_atomic_store_explicit(&run_once, 0, rte_memory_order_relaxed);
+ eal_cleanup_config();
eal_clean_saved_args();
return -1;
}
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index e0d7c4e612..b8034dceed 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -408,6 +408,7 @@ rte_eal_init(int argc, char **argv)
return fctret;
err_out:
+ eal_cleanup_config();
eal_clean_saved_args();
return -1;
}
--
2.51.0
* [RFC PATCH 29/44] eal: clarify docs on params taking lcore IDs
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (27 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 28/44] eal: store user-provided lcore info " Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 30/44] eal: remove internal config reset function Bruce Richardson
` (16 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The documentation doesn't make clear how the values given to the service
coremask and corelist parameters (or the main lcore parameter, for that
matter) relate to the provided lcore lists or lcore mappings.
When using the "-R" flag, for example, as "/path/to/app -R -l 40-50"
(which configures lcores 0-10 to run on CPUs 40-50 respectively), it's
not entirely clear whether a service-core parameter should be "-S 49-50",
i.e. physical CPU IDs, or "-S 9-10", i.e. logical lcore IDs. Update the
docs to make it clear it's the latter.
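The remapping semantics described above can be sketched in plain C. This is an illustrative stand-in, not EAL code: the function name and array representation are hypothetical, and it only models the ID translation that the docs clarify.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in: with "-R", the enabled physical CPUs are given
 * sequential lcore IDs starting from 0. Options such as "-S" and
 * "--main-lcore" then take these lcore IDs, not the physical CPU IDs. */
static int remapped_cpu_for_lcore(const int *enabled_cpus, size_t n,
				  unsigned int lcore_id)
{
	/* lcore ID is simply an index into the enabled-CPU list */
	return lcore_id < n ? enabled_cpus[lcore_id] : -1;
}
```

With "-l 40-50 -R", CPUs 40..50 become lcores 0..10, so "-S 9-10" selects the threads running on physical CPUs 49 and 50.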
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
doc/guides/linux_gsg/eal_args.include.rst | 38 +++++++++++++++++++++--
lib/eal/common/eal_option_list.h | 6 ++--
2 files changed, 38 insertions(+), 6 deletions(-)
diff --git a/doc/guides/linux_gsg/eal_args.include.rst b/doc/guides/linux_gsg/eal_args.include.rst
index 32c24c8e41..33281ecec3 100644
--- a/doc/guides/linux_gsg/eal_args.include.rst
+++ b/doc/guides/linux_gsg/eal_args.include.rst
@@ -89,13 +89,45 @@ Lcore-related options
and the use of ``()`` for core groupings,
are not allowed when ``-R`` or ``--remap-lcore-ids`` is also used.
-* ``--main-lcore <core ID>``
+* ``--main-lcore <lcore ID>``
- Core ID that is used as main.
+ Set the lcore ID to use for the main thread.
+ The value is a DPDK lcore ID, not a physical CPU ID,
+ and must be present in the enabled lcore set (as configured by ``-l``/``--lcores``).
+
+ In the simple case, without any explicit lcore-to-CPU mapping,
+ lcore IDs equal physical CPU IDs so the distinction does not matter.
+ The two differ when ``-R``/``--remap-lcore-ids`` assigns sequential lcore IDs to higher-numbered physical CPUs,
+ or when the ``@`` mapping syntax in ``--lcores`` is used.
+ In those cases the lcore ID must be specified here, not the physical CPU ID.
+
+ Example using ``--lcores`` explicit mapping:
+ ``--lcores=1@31,2@32,3@33 --main-lcore 2`` selects the thread with lcore ID 2,
+ running on physical CPU 32, as the main thread.
+
+ Example using ``-R`` remapping:
+ ``-l 31-33 -R --main-lcore 1`` starts three threads on physical CPUs 31, 32 and 33, remapped to lcore IDs 0, 1 and 2.
+ ``--main-lcore 1`` selects the thread remapped to lcore ID 1, which runs on physical CPU 32.
* ``-S, --service-corelist <service core list>``
- List of cores to be used as service cores.
+ List of lcore IDs to be used as service cores.
+ The list format is the same as for ``-l``/``--lcores``:
+ a comma-separated set of lcore IDs or ranges (e.g. ``2,3`` or ``2-5``).
+ Each specified lcore ID must be present in the enabled lcore set.
+
+ The values are lcore IDs, not physical CPU IDs.
+ In the simple case, without any explicit lcore-to-CPU mapping, the two are equal so the distinction does not matter.
+ When using ``-R``/``--remap-lcore-ids`` or the ``@`` mapping syntax in ``--lcores``, lcore IDs and physical CPU IDs differ,
+ and the lcore IDs must be used here.
+
+ Example using ``--lcores`` explicit mapping:
+ ``--lcores=1@31,2@32,3@33 -S 2,3`` assigns the threads with
+ lcore IDs 2 and 3 (running on physical CPUs 32 and 33) as service cores.
+
+ Example using ``-R`` remapping:
+ ``-l 31-33 -R -S 1,2`` starts three threads on physical CPUs 31, 32 and 33, remapped to lcore IDs 0, 1 and 2.
+ ``-S 1,2`` assigns the threads with lcore IDs 1 and 2 (running on physical CPUs 32 and 33) as service cores.
Device-related options
diff --git a/lib/eal/common/eal_option_list.h b/lib/eal/common/eal_option_list.h
index 6a5ddfd8d1..7ac2e8eadd 100644
--- a/lib/eal/common/eal_option_list.h
+++ b/lib/eal/common/eal_option_list.h
@@ -47,7 +47,7 @@ BOOL_ARG("--legacy-mem", NULL, "Enable legacy memory behavior", legacy_mem)
OPT_STR_ARG("--log-color", NULL, "Enable/disable color in log output", log_color)
LIST_ARG("--log-level", NULL, "Log level for loggers; use log-level=help for list of log types and levels", log_level)
OPT_STR_ARG("--log-timestamp", NULL, "Enable/disable timestamp in log output", log_timestamp)
-STR_ARG("--main-lcore", NULL, "Select which core to use for the main thread", main_lcore)
+STR_ARG("--main-lcore", NULL, "Lcore ID to use for the main thread", main_lcore)
STR_ARG("--mbuf-pool-ops-name", NULL, "User defined mbuf default pool ops name", mbuf_pool_ops_name)
STR_ARG("--memory-channels", "-n", "Number of memory channels per socket", memory_channels)
STR_ARG("--memory-ranks", "-r", "Force number of memory ranks (don't detect)", memory_ranks)
@@ -60,8 +60,8 @@ BOOL_ARG("--no-shconf", NULL, "Disable shared config file generation", no_shconf
BOOL_ARG("--no-telemetry", NULL, "Disable telemetry", no_telemetry)
STR_ARG("--proc-type", NULL, "Type of process (primary|secondary|auto)", proc_type)
OPT_STR_ARG("--remap-lcore-ids", "-R", "Remap lcore IDs to be contiguous starting from 0, or supplied value", remap_lcore_ids)
-STR_ARG("--service-corelist", "-S", "List of cores to use for service threads", service_corelist)
-STR_ARG("--service-coremask", "-s", "[Deprecated] Bitmask of cores to use for service threads", service_coremask)
+STR_ARG("--service-corelist", "-S", "List of lcore IDs to use for service threads", service_corelist)
+STR_ARG("--service-coremask", "-s", "[Deprecated] Bitmask of lcore IDs to use for service threads", service_coremask)
BOOL_ARG("--single-file-segments", NULL, "Store all pages within single files (per-page-size, per-node)", single_file_segments)
BOOL_ARG("--telemetry", NULL, "Enable telemetry", telemetry)
LIST_ARG("--vdev", NULL, "Add a virtual device to the system; format=<driver><id>[,key=val,...]", vdev)
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [RFC PATCH 30/44] eal: remove internal config reset function
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (28 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 29/44] eal: clarify docs on params taking lcore IDs Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 31/44] eal: move functions setting runtime state Bruce Richardson
` (15 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Rather than having a single function which attempts to reset multiple
internal config structures, initialize each structure in the function
that configures it. Use structure assignment for initialization rather
than a series of individual assignment statements.
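The structure-assignment pattern this patch adopts can be shown with a small stand-alone sketch. The struct and field names below are simplified placeholders, not the real eal_user_cfg layout:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for a user config struct */
struct user_cfg {
	bool unlink_existing;
	int main_lcore;
	unsigned int simd_bitwidth;
	int iova_mode;	/* 0 is the "don't care" default */
};

static void cfg_reset(struct user_cfg *cfg)
{
	/* One compound-literal assignment replaces a long run of
	 * individual statements: every field not named here is
	 * implicitly zero-initialized, which is the correct default
	 * whenever the default is 0/false/NULL. */
	*cfg = (struct user_cfg){
		.unlink_existing = true,
		.main_lcore = -1,
		.simd_bitwidth = 256,
	};
}
```

The design benefit is that new fields with zero defaults need no reset code at all, so the reset cannot silently fall out of sync with the struct definition.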
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_options.c | 85 ++++++-----------------------
lib/eal/common/eal_internal_cfg.h | 1 -
lib/eal/freebsd/eal.c | 2 -
lib/eal/linux/eal.c | 7 +--
4 files changed, 21 insertions(+), 74 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index bd08d29e1d..292ac7378e 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -466,72 +466,6 @@ eal_get_hugefile_prefix(void)
return HUGEFILE_PREFIX_DEFAULT;
}
-void
-eal_reset_internal_config(void)
-{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_platform_info *platform_info = eal_get_platform_info();
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- int i;
-
- TAILQ_INIT(&user_cfg->devopt_list);
- TAILQ_INIT(&user_cfg->plugin_list);
- STAILQ_INIT(&user_cfg->trace_patterns);
- TAILQ_INIT(&runtime_state->loaded_plugins);
- user_cfg->memory = 0;
- user_cfg->force_nrank = 0;
- user_cfg->force_nchannel = 0;
- user_cfg->force_numa = false;
- for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- user_cfg->numa_mem[i] = 0;
- user_cfg->force_numa_limits = false;
- for (i = 0; i < RTE_MAX_NUMA_NODES; i++)
- user_cfg->numa_limit[i] = 0;
- user_cfg->process_type = RTE_PROC_PRIMARY;
- user_cfg->no_hugetlbfs = false;
- user_cfg->no_pci = false;
- user_cfg->hugefile_prefix = NULL;
- user_cfg->hugepage_dir = NULL;
- user_cfg->hugepage_file.unlink_before_mapping = false;
- user_cfg->hugepage_file.unlink_existing = true;
- /* zero out hugedir descriptors */
- for (i = 0; i < MAX_HUGEPAGE_SIZES; i++) {
- memset(&platform_info->hugepage_info[i], 0,
- sizeof(platform_info->hugepage_info[0]));
- platform_info->hugepage_info[i].lock_descriptor = -1;
- }
- user_cfg->base_virtaddr = 0;
-
- /* if set to NONE, interrupt mode is determined automatically */
- user_cfg->vfio_intr_mode = RTE_INTR_MODE_NONE;
- memset(user_cfg->vfio_vf_token, 0,
- sizeof(user_cfg->vfio_vf_token));
-
- user_cfg->no_auto_probing = false;
-
-#ifdef RTE_LIBEAL_USE_HPET
- user_cfg->no_hpet = false;
-#else
- user_cfg->no_hpet = true;
-#endif
- user_cfg->vmware_tsc_map = false;
- user_cfg->no_shconf = false;
- user_cfg->in_memory = false;
- user_cfg->create_uio_dev = false;
- user_cfg->no_telemetry = false;
- user_cfg->iova_mode = RTE_IOVA_DC;
- user_cfg->user_mbuf_pool_ops_name = NULL;
- CPU_ZERO(&runtime_state->ctrl_cpuset);
- runtime_state->init_complete = 0;
- CPU_ZERO(&user_cfg->service_cpuset);
- for (i = 0; i < RTE_MAX_LCORE; i++) {
- free(user_cfg->lcore_cpusets[i]);
- user_cfg->lcore_cpusets[i] = NULL;
- }
- user_cfg->max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH;
- user_cfg->max_simd_bitwidth.forced = 0;
-}
-
static int
eal_plugin_path_add(const char *path)
{
@@ -1860,6 +1794,24 @@ int
eal_parse_args(void)
{
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+
+ /*
+ * Initialise user_cfg to defaults. Fields not listed here are zero,
+ * false or NULL, which is the correct default (RTE_PROC_PRIMARY,
+ * RTE_INTR_MODE_NONE, RTE_IOVA_DC, etc. are all defined as 0).
+ */
+ *user_cfg = (struct eal_user_cfg){
+ .devopt_list = TAILQ_HEAD_INITIALIZER(user_cfg->devopt_list),
+ .plugin_list = TAILQ_HEAD_INITIALIZER(user_cfg->plugin_list),
+ .trace_patterns = STAILQ_HEAD_INITIALIZER(user_cfg->trace_patterns),
+ .hugepage_file.unlink_existing = true,
+ .main_lcore = -1,
+#ifndef RTE_LIBEAL_USE_HPET
+ .no_hpet = true,
+#endif
+ .max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH,
+ };
+
bool remap_lcores = (args.remap_lcore_ids != NULL);
struct arg_list_elem *arg;
uint16_t lcore_id_base = 0;
@@ -1986,7 +1938,6 @@ eal_parse_args(void)
return -1;
}
}
- user_cfg->main_lcore = -1;
if (args.main_lcore != NULL && eal_parse_main_lcore(args.main_lcore) < 0)
return -1;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 239fe2a7ac..979c1320e8 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -213,6 +213,5 @@ struct eal_runtime_state {
struct eal_user_cfg *eal_get_user_configuration(void);
struct eal_platform_info *eal_get_platform_info(void);
struct eal_runtime_state *eal_get_runtime_state(void);
-void eal_reset_internal_config(void);
#endif /* EAL_INTERNAL_CFG_H */
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 120425d425..13bbd8b868 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -444,8 +444,6 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- eal_reset_internal_config();
-
if (rte_eal_cpu_init() < 0) {
rte_eal_init_alert("Cannot detect lcores.");
rte_errno = ENOTSUP;
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 3f2ad98425..328c74ae4d 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -410,8 +410,9 @@ eal_hugedirs_unlock(void)
int i;
for (i = 0; i < MAX_HUGEPAGE_SIZES; i++)
{
- /* skip uninitialized */
- if (platform_info->hugepage_info[i].lock_descriptor < 0)
+ /* skip uninitialized or unlocked entries */
+ if (platform_info->hugepage_info[i].hugepage_sz == 0 ||
+ platform_info->hugepage_info[i].lock_descriptor < 0)
continue;
/* unlock hugepage file */
flock(platform_info->hugepage_info[i].lock_descriptor, LOCK_UN);
@@ -606,8 +607,6 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- eal_reset_internal_config();
-
if (rte_eal_cpu_init() < 0) {
rte_eal_init_alert("Cannot detect lcores.");
rte_errno = ENOTSUP;
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [RFC PATCH 31/44] eal: move functions setting runtime state
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (29 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 30/44] eal: remove internal config reset function Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 32/44] eal: initialize platform info on first use Bruce Richardson
` (14 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The functions which configure some of the runtime state based on the
user-provided options no longer belong in eal_common_options.c, which
should instead focus on processing the user-provided options.
Move the functions to eal_common_config.c instead, and have the runtime
state setup function called explicitly from eal_init, rather than having
it hidden as a final step of argument parsing.
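The reordered init flow can be sketched as below. This is an illustrative stand-in, not real EAL code: the function names are hypothetical and only model the sequencing, with runtime-state application now an explicit step between argument parsing and plugin init:

```c
#include <assert.h>

static int n_steps;
static int step_order[3];

/* each phase records its position in the init sequence */
static int parse_args_sketch(void)          { step_order[n_steps++] = 1; return 0; }
static int apply_runtime_state_sketch(void) { step_order[n_steps++] = 2; return 0; }
static int plugins_init_sketch(void)        { step_order[n_steps++] = 3; return 0; }

static int eal_init_sketch(void)
{
	if (parse_args_sketch() < 0)
		return -1;
	/* previously invoked from within argument parsing; now an
	 * explicit, visible step in the init sequence */
	if (apply_runtime_state_sketch() < 0)
		return -1;
	if (plugins_init_sketch() < 0)
		return -1;
	return 0;
}
```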
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 135 +++++++++++++++++++++++++++-
lib/eal/common/eal_common_options.c | 132 +--------------------------
lib/eal/common/eal_options.h | 2 -
lib/eal/common/eal_private.h | 22 +++++
lib/eal/freebsd/eal.c | 6 ++
lib/eal/linux/eal.c | 6 ++
lib/eal/windows/eal.c | 6 ++
7 files changed, 175 insertions(+), 134 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 35654cc71f..60eeea6439 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -2,8 +2,10 @@
* Copyright(c) 2020 Mellanox Technologies, Ltd
*/
-#include <rte_string_fns.h>
+#include <pthread.h>
+#include <rte_string_fns.h>
+#include <rte_thread.h>
#include <eal_export.h>
#include "eal_internal_cfg.h"
#include "eal_private.h"
@@ -127,3 +129,134 @@ rte_eal_has_pci(void)
{
return !eal_user_cfg.no_pci;
}
+
+static void
+compute_ctrl_threads_cpuset(void)
+{
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ rte_cpuset_t *cpuset = &runtime_state->ctrl_cpuset;
+ rte_cpuset_t default_set;
+ unsigned int lcore_id;
+
+ for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
+ if (rte_lcore_has_role(lcore_id, ROLE_OFF))
+ continue;
+ RTE_CPU_OR(cpuset, cpuset, &runtime_state->lcore_cfg[lcore_id].cpuset);
+ }
+ RTE_CPU_NOT(cpuset, cpuset);
+
+ if (rte_thread_get_affinity_by_id(rte_thread_self(), &default_set) != 0)
+ CPU_ZERO(&default_set);
+
+ RTE_CPU_AND(cpuset, cpuset, &default_set);
+
+ /* if no remaining cpu, use main lcore cpu affinity */
+ if (!CPU_COUNT(cpuset)) {
+ memcpy(cpuset, &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset,
+ sizeof(*cpuset));
+ }
+
+ /* log the computed control thread cpuset for debugging */
+ char *cpuset_str = eal_cpuset_to_str(cpuset);
+ if (cpuset_str != NULL) {
+ EAL_LOG(DEBUG, "Control threads will use cores: %s", cpuset_str);
+ free(cpuset_str);
+ }
+}
+
+static int
+eal_apply_lcore_config(void)
+{
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+
+ /* lcore_cpusets[] is always populated at parse time for all input forms */
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ unsigned int i;
+ unsigned int count = 0;
+
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (user_cfg->lcore_cpusets[i] == NULL) {
+ runtime_state->lcore_cfg[i].role = ROLE_OFF;
+ runtime_state->lcore_cfg[i].core_index = -1;
+ CPU_ZERO(&runtime_state->lcore_cfg[i].cpuset);
+ runtime_state->lcore_cfg[i].first_cpu = UINT16_MAX;
+ continue;
+ }
+ runtime_state->lcore_cfg[i].role = ROLE_RTE;
+ runtime_state->lcore_cfg[i].core_index = count++;
+ memcpy(&runtime_state->lcore_cfg[i].cpuset,
+ user_cfg->lcore_cpusets[i], sizeof(rte_cpuset_t));
+ runtime_state->lcore_cfg[i].first_cpu =
+ (uint16_t)(RTE_CPU_FFS(&runtime_state->lcore_cfg[i].cpuset) - 1);
+ }
+ if (count == 0) {
+ EAL_LOG(ERR, "No valid lcores in core list");
+ return -1;
+ }
+ runtime_state->lcore_count = count;
+ return 0;
+}
+
+int
+eal_apply_runtime_state(void)
+{
+ const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+
+ if (eal_apply_lcore_config() < 0)
+ return -1;
+
+ /* Apply service core roles: service_cpuset bits are lcore IDs */
+ if (CPU_COUNT(&user_cfg->service_cpuset) > 0) {
+ unsigned int i;
+ char *cpuset_str;
+
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (!CPU_ISSET(i, &user_cfg->service_cpuset))
+ continue;
+ if (runtime_state->lcore_cfg[i].role != ROLE_RTE) {
+ EAL_LOG(WARNING,
+ "service lcore %u is not in the enabled lcore set; ignoring",
+ i);
+ continue;
+ }
+ runtime_state->lcore_cfg[i].role = ROLE_SERVICE;
+ }
+ cpuset_str = eal_cpuset_to_str(&user_cfg->service_cpuset);
+ if (cpuset_str != NULL) {
+ EAL_LOG(DEBUG, "Service cores configured: %s", cpuset_str);
+ free(cpuset_str);
+ }
+ }
+
+ /* set the main lcore */
+ if (user_cfg->main_lcore != -1) {
+ runtime_state->main_lcore = user_cfg->main_lcore;
+ } else {
+ /* default main lcore is the first one */
+ runtime_state->main_lcore = rte_get_next_lcore(-1, 0, 0);
+ if (runtime_state->main_lcore >= RTE_MAX_LCORE) {
+ EAL_LOG(ERR, "Main lcore is not enabled for DPDK");
+ return -1;
+ }
+ }
+
+#ifndef RTE_EXEC_ENV_WINDOWS
+ /* create runtime data directory. In no_shconf mode, skip any errors */
+ if (eal_create_runtime_dir() < 0) {
+ if (!user_cfg->no_shconf) {
+ EAL_LOG(ERR, "Cannot create runtime directory");
+ return -1;
+ }
+ EAL_LOG(WARNING, "No DPDK runtime directory created");
+ }
+#endif
+
+ runtime_state->process_type = (user_cfg->process_type == RTE_PROC_AUTO) ?
+ eal_proc_type_detect() :
+ user_cfg->process_type;
+
+ compute_ctrl_threads_cpuset();
+
+ return 0;
+}
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 292ac7378e..605c5a59d1 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -2173,41 +2173,7 @@ eal_parse_args(void)
for (int i = 0; i < RTE_MAX_NUMA_NODES; i++)
user_cfg->memory += user_cfg->numa_mem[i];
- return eal_apply_runtime_state();
-}
-
-static void
-compute_ctrl_threads_cpuset(void)
-{
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- rte_cpuset_t *cpuset = &runtime_state->ctrl_cpuset;
- rte_cpuset_t default_set;
- unsigned int lcore_id;
-
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {
- if (rte_lcore_has_role(lcore_id, ROLE_OFF))
- continue;
- RTE_CPU_OR(cpuset, cpuset, &runtime_state->lcore_cfg[lcore_id].cpuset);
- }
- RTE_CPU_NOT(cpuset, cpuset);
-
- if (rte_thread_get_affinity_by_id(rte_thread_self(), &default_set) != 0)
- CPU_ZERO(&default_set);
-
- RTE_CPU_AND(cpuset, cpuset, &default_set);
-
- /* if no remaining cpu, use main lcore cpu affinity */
- if (!CPU_COUNT(cpuset)) {
- memcpy(cpuset, &runtime_state->lcore_cfg[rte_get_main_lcore()].cpuset,
- sizeof(*cpuset));
- }
-
- /* log the computed control thread cpuset for debugging */
- char *cpuset_str = eal_cpuset_to_str(cpuset);
- if (cpuset_str != NULL) {
- EAL_LOG(DEBUG, "Control threads will use cores: %s", cpuset_str);
- free(cpuset_str);
- }
+ return 0;
}
int
@@ -2235,102 +2201,6 @@ eal_cleanup_config(void)
return 0;
}
-static int
-eal_apply_lcore_config(void)
-{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
-
- /* lcore_cpusets[] is always populated at parse time for all input forms */
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- unsigned int i;
- unsigned int count = 0;
-
- for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (user_cfg->lcore_cpusets[i] == NULL) {
- runtime_state->lcore_cfg[i].role = ROLE_OFF;
- runtime_state->lcore_cfg[i].core_index = -1;
- CPU_ZERO(&runtime_state->lcore_cfg[i].cpuset);
- runtime_state->lcore_cfg[i].first_cpu = UINT16_MAX;
- continue;
- }
- runtime_state->lcore_cfg[i].role = ROLE_RTE;
- runtime_state->lcore_cfg[i].core_index = count++;
- memcpy(&runtime_state->lcore_cfg[i].cpuset,
- user_cfg->lcore_cpusets[i], sizeof(rte_cpuset_t));
- runtime_state->lcore_cfg[i].first_cpu =
- (uint16_t)(RTE_CPU_FFS(&runtime_state->lcore_cfg[i].cpuset) - 1);
- }
- if (count == 0) {
- EAL_LOG(ERR, "No valid lcores in core list");
- return -1;
- }
- runtime_state->lcore_count = count;
- return 0;
-}
-
-int
-eal_apply_runtime_state(void)
-{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
-
- if (eal_apply_lcore_config() < 0)
- return -1;
-
- /* Apply service core roles: service_cpuset bits are lcore IDs */
- if (CPU_COUNT(&user_cfg->service_cpuset) > 0) {
- unsigned int i;
- char *cpuset_str;
-
- for (i = 0; i < RTE_MAX_LCORE; i++) {
- if (!CPU_ISSET(i, &user_cfg->service_cpuset))
- continue;
- if (runtime_state->lcore_cfg[i].role != ROLE_RTE) {
- EAL_LOG(WARNING,
- "service lcore %u is not in the enabled lcore set; ignoring",
- i);
- continue;
- }
- runtime_state->lcore_cfg[i].role = ROLE_SERVICE;
- }
- cpuset_str = eal_cpuset_to_str(&user_cfg->service_cpuset);
- if (cpuset_str != NULL) {
- EAL_LOG(DEBUG, "Service cores configured: %s", cpuset_str);
- free(cpuset_str);
- }
- }
-
- /* set the main lcore */
- if (user_cfg->main_lcore != -1) {
- runtime_state->main_lcore = user_cfg->main_lcore;
- } else {
- /* default main lcore is the first one */
- runtime_state->main_lcore = rte_get_next_lcore(-1, 0, 0);
- if (runtime_state->main_lcore >= RTE_MAX_LCORE) {
- EAL_LOG(ERR, "Main lcore is not enabled for DPDK");
- return -1;
- }
- }
-
-#ifndef RTE_EXEC_ENV_WINDOWS
- /* create runtime data directory. In no_shconf mode, skip any errors */
- if (eal_create_runtime_dir() < 0) {
- if (!user_cfg->no_shconf) {
- EAL_LOG(ERR, "Cannot create runtime directory");
- return -1;
- }
- EAL_LOG(WARNING, "No DPDK runtime directory created");
- }
-#endif
-
- runtime_state->process_type = (user_cfg->process_type == RTE_PROC_AUTO) ?
- eal_proc_type_detect() : user_cfg->process_type;
-
- compute_ctrl_threads_cpuset();
-
- return 0;
-}
-
RTE_EXPORT_SYMBOL(rte_vect_get_max_simd_bitwidth)
uint16_t
rte_vect_get_max_simd_bitwidth(void)
diff --git a/lib/eal/common/eal_options.h b/lib/eal/common/eal_options.h
index d20381a48f..77a6a4405f 100644
--- a/lib/eal/common/eal_options.h
+++ b/lib/eal/common/eal_options.h
@@ -13,9 +13,7 @@ struct eal_user_cfg;
int eal_parse_log_options(void);
int eal_parse_args(void);
int eal_option_device_parse(void);
-int eal_apply_runtime_state(void);
int eal_cleanup_config(void);
-enum rte_proc_type_t eal_proc_type_detect(void);
int eal_plugins_init(void);
int eal_save_args(int argc, char **argv);
void eal_clean_saved_args(void);
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index c5efdb070a..877c0840ec 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -28,6 +28,28 @@
*/
int eal_collate_args(int argc, char **argv);
+/**
+ * Apply user configuration to runtime state.
+ *
+ * Translates the populated eal_user_cfg into the eal_runtime_state,
+ * including lcore roles, main lcore, service cores, process type
+ * detection, and the runtime directory.
+ *
+ * @return
+ * 0 on success, negative on error
+ */
+int eal_apply_runtime_state(void);
+
+/**
+ * Detect the process type.
+ *
+ * Used to detect process type when the user requests process type auto-detection,
+ * rather than manually specifying primary or secondary.
+ * @return
+ * The detected process type.
+ */
+enum rte_proc_type_t eal_proc_type_detect(void);
+
/**
* Convert an rte_cpuset_t to string form suitable for parsing by argparse.
*
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 13bbd8b868..2245ffc5ac 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -463,6 +463,12 @@ rte_eal_init(int argc, char **argv)
user_cfg->in_memory = false;
}
+ if (eal_apply_runtime_state() < 0) {
+ rte_eal_init_alert("Cannot apply runtime state.");
+ rte_errno = EINVAL;
+ goto err_out;
+ }
+
if (eal_plugins_init() < 0) {
rte_eal_init_alert("Cannot init plugins");
rte_errno = EINVAL;
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 328c74ae4d..d3f1748297 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -619,6 +619,12 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
+ if (eal_apply_runtime_state() < 0) {
+ rte_eal_init_alert("Cannot apply runtime state.");
+ rte_errno = EINVAL;
+ goto err_out;
+ }
+
if (eal_plugins_init() < 0) {
rte_eal_init_alert("Cannot init plugins");
rte_errno = EINVAL;
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index b8034dceed..e03ba18c4b 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -217,6 +217,12 @@ rte_eal_init(int argc, char **argv)
user_cfg->no_shconf = true;
}
+ if (eal_apply_runtime_state() < 0) {
+ rte_eal_init_alert("Cannot apply runtime state.");
+ rte_errno = EINVAL;
+ goto err_out;
+ }
+
if (!user_cfg->no_hugetlbfs && (eal_hugepage_info_init() < 0)) {
rte_eal_init_alert("Cannot get hugepage information");
rte_errno = EACCES;
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [RFC PATCH 32/44] eal: initialize platform info on first use
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (30 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 31/44] eal: move functions setting runtime state Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 33/44] eal: remove duplicated scan of sysfs for hugepage details Bruce Richardson
` (13 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
The platform information should be made available as early in the init
process as possible, since it is fixed and does not depend on any
application- or user-provided data. Therefore, rework the hugepage info
so that only the basic hugepage info is stored in the platform info
struct, while mutable items such as lock fds live in the runtime state
struct.
Then enhance the eal_get_platform_info() function so that the structure
is initialized on first use, rather than as part of the normal flow of
rte_eal_init(). The objective is to have the platform info, covering
both CPU and memory, fully available when processing the user
configuration parameters, in order to properly validate them.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 31 +++++++-
lib/eal/common/eal_common_dynmem.c | 26 +++----
lib/eal/common/eal_common_lcore.c | 8 +-
lib/eal/common/eal_hugepages.h | 8 ++
lib/eal/common/eal_internal_cfg.h | 15 +++-
lib/eal/common/eal_private.h | 18 ++++-
lib/eal/freebsd/eal.c | 11 +--
lib/eal/freebsd/eal_hugepage_info.c | 60 +++++++++++----
lib/eal/freebsd/eal_memory.c | 15 ++--
lib/eal/linux/eal.c | 18 ++---
lib/eal/linux/eal_hugepage_info.c | 114 +++++++++++++++++++++++-----
lib/eal/linux/eal_memalloc.c | 24 +++---
lib/eal/linux/eal_memory.c | 58 +++++++-------
lib/eal/windows/eal.c | 12 ---
lib/eal/windows/eal_hugepages.c | 35 ++++++++-
lib/eal/windows/eal_memalloc.c | 11 +--
lib/eal/windows/eal_windows.h | 8 --
17 files changed, 323 insertions(+), 149 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 60eeea6439..9afe836903 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -10,6 +10,7 @@
#include "eal_internal_cfg.h"
#include "eal_private.h"
#include "eal_filesystem.h"
+#include "eal_hugepages.h"
#include "eal_memcfg.h"
/* early configuration structure, when memory config is not mmapped */
@@ -28,9 +29,6 @@ static char runtime_dir[UNIX_PATH_MAX];
/* user-provided EAL configuration */
static struct eal_user_cfg eal_user_cfg;
-/* platform-discovered and runtime EAL state */
-static struct eal_platform_info eal_platform_info;
-
/* internal runtime configuration */
static struct eal_runtime_state eal_runtime_state = {
.mem_config = &early_mem_config,
@@ -70,9 +68,34 @@ eal_get_user_configuration(void)
}
/* Return a pointer to the platform state structure */
-struct eal_platform_info *
+const struct eal_platform_info *
eal_get_platform_info(void)
{
+ /* platform-discovered and runtime EAL state */
+ static struct eal_platform_info eal_platform_info;
+ static rte_spinlock_t init_lock = RTE_SPINLOCK_INITIALIZER;
+ static RTE_ATOMIC(bool) initialized;
+
+ if (unlikely(!rte_atomic_load_explicit(&initialized, rte_memory_order_acquire))) {
+ rte_spinlock_lock(&init_lock);
+ if (rte_atomic_load_explicit(&initialized, rte_memory_order_relaxed)) {
+ rte_spinlock_unlock(&init_lock);
+ return &eal_platform_info;
+ }
+ if (rte_eal_cpu_init(&eal_platform_info) < 0) {
+ EAL_LOG(ERR, "Failed to initialise CPU information");
+ rte_spinlock_unlock(&init_lock);
+ return NULL;
+ }
+ if (eal_get_platform_hp_info(&eal_platform_info) < 0) {
+ EAL_LOG(ERR, "Failed to get platform hugepage information");
+ rte_spinlock_unlock(&init_lock);
+ return NULL;
+ }
+ rte_atomic_store_explicit(&initialized, true, rte_memory_order_release);
+ rte_spinlock_unlock(&init_lock);
+ }
+
return &eal_platform_info;
}
diff --git a/lib/eal/common/eal_common_dynmem.c b/lib/eal/common/eal_common_dynmem.c
index 629cec7ccc..a1471bdff0 100644
--- a/lib/eal/common/eal_common_dynmem.c
+++ b/lib/eal/common/eal_common_dynmem.c
@@ -30,7 +30,7 @@ eal_dynmem_memseg_lists_init(void)
uint64_t max_mem, max_mem_per_type;
unsigned int max_seglists_per_type;
unsigned int n_memtypes, cur_type;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* no-huge does not need this at all */
@@ -73,7 +73,7 @@ eal_dynmem_memseg_lists_init(void)
*/
/* create space for mem types */
- n_memtypes = platform_info->num_hugepage_sizes * rte_socket_count();
+ n_memtypes = runtime_state->num_hugepage_sizes * rte_socket_count();
memtypes = calloc(n_memtypes, sizeof(*memtypes));
if (memtypes == NULL) {
EAL_LOG(ERR, "Cannot allocate space for memory types");
@@ -82,12 +82,12 @@ eal_dynmem_memseg_lists_init(void)
/* populate mem types */
cur_type = 0;
- for (hpi_idx = 0; hpi_idx < (int) platform_info->num_hugepage_sizes;
+ for (hpi_idx = 0; hpi_idx < (int) runtime_state->num_hugepage_sizes;
hpi_idx++) {
struct hugepage_info *hpi;
uint64_t hugepage_sz;
- hpi = &platform_info->hugepage_info[hpi_idx];
+ hpi = &runtime_state->hugepage_info[hpi_idx];
hugepage_sz = hpi->hugepage_sz;
for (i = 0; i < (int) rte_socket_count(); i++, cur_type++) {
@@ -228,13 +228,13 @@ eal_dynmem_hugepage_init(void)
struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
uint64_t memory[RTE_MAX_NUMA_NODES];
int hp_sz_idx, socket_id;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
memset(used_hp, 0, sizeof(used_hp));
for (hp_sz_idx = 0;
- hp_sz_idx < (int) platform_info->num_hugepage_sizes;
+ hp_sz_idx < (int) runtime_state->num_hugepage_sizes;
hp_sz_idx++) {
#ifndef RTE_ARCH_64
struct hugepage_info dummy;
@@ -242,7 +242,7 @@ eal_dynmem_hugepage_init(void)
#endif
/* also initialize used_hp hugepage sizes in used_hp */
struct hugepage_info *hpi;
- hpi = &platform_info->hugepage_info[hp_sz_idx];
+ hpi = &runtime_state->hugepage_info[hp_sz_idx];
used_hp[hp_sz_idx].hugepage_sz = hpi->hugepage_sz;
#ifndef RTE_ARCH_64
@@ -270,12 +270,12 @@ eal_dynmem_hugepage_init(void)
/* calculate final number of pages */
if (eal_dynmem_calc_num_pages_per_socket(memory,
- platform_info->hugepage_info, used_hp,
- platform_info->num_hugepage_sizes) < 0)
+ runtime_state->hugepage_info, used_hp,
+ runtime_state->num_hugepage_sizes) < 0)
return -1;
for (hp_sz_idx = 0;
- hp_sz_idx < (int)platform_info->num_hugepage_sizes;
+ hp_sz_idx < (int)runtime_state->num_hugepage_sizes;
hp_sz_idx++) {
for (socket_id = 0; socket_id < RTE_MAX_NUMA_NODES;
socket_id++) {
@@ -354,10 +354,10 @@ get_socket_mem_size(int socket)
{
uint64_t size = 0;
unsigned int i;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
- struct hugepage_info *hpi = &platform_info->hugepage_info[i];
+ for (i = 0; i < runtime_state->num_hugepage_sizes; i++) {
+ struct hugepage_info *hpi = &runtime_state->hugepage_info[i];
size += hpi->hugepage_sz * hpi->num_pages[socket];
}
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index bafcfe78d2..98857c0e34 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -165,11 +165,15 @@ socket_id_cmp(const void *a, const void *b)
* structure.
*/
int
-rte_eal_cpu_init(void)
+rte_eal_cpu_init(struct eal_platform_info *platform_info)
{
- struct eal_platform_info *platform_info = eal_get_platform_info();
int *lcore_to_socket_id;
+ if (eal_create_cpu_map() < 0) {
+ EAL_LOG(ERR, "Failed to create CPU map");
+ return -1;
+ }
+
/* allocate cpu_info for all CPUs visible to the OS */
platform_info->cpu_count = eal_cpu_max();
platform_info->cpu_info = calloc(platform_info->cpu_count,
diff --git a/lib/eal/common/eal_hugepages.h b/lib/eal/common/eal_hugepages.h
index 1b560d3379..b30ff12f2d 100644
--- a/lib/eal/common/eal_hugepages.h
+++ b/lib/eal/common/eal_hugepages.h
@@ -11,6 +11,8 @@
#define MAX_HUGEPAGE_PATH PATH_MAX
+struct eal_platform_info;
+
/**
* Structure used to store information about hugepages that we mapped
* through the files in hugetlbfs.
@@ -25,6 +27,12 @@ struct hugepage_file {
char filepath[MAX_HUGEPAGE_PATH]; /**< path to backing file on filesystem */
};
+/**
+ * Discover the hugepage sizes and hugetlbfs mounts available on this platform,
+ * populating the hugepage_sizes[] array and num_hugepage_sizes count of the
+ * provided platform_info struct.
+ */
+int eal_get_platform_hp_info(struct eal_platform_info *platform_info);
+
/**
* Read the information on what hugepages are available for the EAL to use,
* clearing out any unused ones.
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 979c1320e8..b5962d6081 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -151,6 +151,14 @@ struct eal_cpu_info {
unsigned int core_id; /**< physical core number on its NUMA node */
};
+struct hp_sizes {
+ uint64_t size; /**< hugepage size in bytes */
+ char dir[PATH_MAX]; /**< dir where hugetlbfs is mounted for this size */
+ uint32_t total_pages; /**< total hugepages of this size across all NUMA nodes */
+ uint32_t max_pages[RTE_MAX_NUMA_NODES];
+ /**< maximum hugepages of this size available on each NUMA node */
+};
+
/**
* Discovered information about the system hardware.
* Immutable after discovery.
@@ -161,7 +169,7 @@ struct eal_platform_info {
uint32_t numa_node_count; /**< number of detected NUMA nodes */
uint32_t *numa_nodes; /**< sorted list of detected NUMA node IDs, heap-allocated */
uint8_t num_hugepage_sizes; /**< how many sizes on this system */
- struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
+ struct hp_sizes hugepage_sizes[MAX_HUGEPAGE_SIZES];
};
/**
@@ -206,12 +214,15 @@ struct eal_runtime_state {
uint32_t main_lcore; /**< ID of the main lcore */
uint32_t lcore_count; /**< Number of active lcore IDs (role != ROLE_OFF). */
struct lcore_cfg lcore_cfg[RTE_MAX_LCORE];
+
+ uint32_t num_hugepage_sizes; /**< how many sizes stored in hugepage_info[] */
+ struct hugepage_info hugepage_info[MAX_HUGEPAGE_SIZES];
struct rte_mem_config *mem_config; /**< pointer to memory config (in shared memory) */
struct eal_solib_list loaded_plugins; /**< all plugins loaded by eal_plugins_init() */
};
+const struct eal_platform_info *eal_get_platform_info(void);
struct eal_user_cfg *eal_get_user_configuration(void);
-struct eal_platform_info *eal_get_platform_info(void);
struct eal_runtime_state *eal_get_runtime_state(void);
#endif /* EAL_INTERNAL_CFG_H */
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 877c0840ec..5d9cc0886f 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -77,10 +77,12 @@ int rte_eal_memzone_init(void);
* Parse /proc/cpuinfo to get the number of physical and logical
* processors on the machine.
*
+ * @param platform_info
+ * Platform info struct to populate with CPU topology.
* @return
* 0 on success, negative on error
*/
-int rte_eal_cpu_init(void);
+int rte_eal_cpu_init(struct eal_platform_info *platform_info);
/**
* Check for architecture supported MMU.
@@ -731,6 +733,20 @@ int eal_asprintf(char **buffer, const char *format, ...);
eal_asprintf(buffer, format, ##__VA_ARGS__)
#endif
+/**
+ * Create a map of processors and cores on the system.
+ *
+ * @return
+ * 0 on success, (-1) on failure and rte_errno is set.
+ */
+#ifdef RTE_EXEC_ENV_WINDOWS
+int eal_create_cpu_map(void);
+#else
+/* non-Windows platforms do not require CPU map creation, so define a no-op stub */
+static inline int
+eal_create_cpu_map(void) { return 0; }
+#endif
+
#define EAL_LOG(level, ...) \
RTE_LOG_LINE(level, EAL, "" __VA_ARGS__)
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 2245ffc5ac..4f4b9accfe 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -322,10 +322,11 @@ eal_get_hugepage_mem_size(void)
{
uint64_t size = 0;
unsigned i, j;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
- struct hugepage_info *hpi = &platform_info->hugepage_info[i];
+ struct hugepage_info *hpi = &runtime_state->hugepage_info[i];
if (strnlen(hpi->hugedir, sizeof(hpi->hugedir)) != 0) {
for (j = 0; j < RTE_MAX_NUMA_NODES; j++) {
size += hpi->hugepage_sz * hpi->num_pages[j];
@@ -444,12 +445,6 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (rte_eal_cpu_init() < 0) {
- rte_eal_init_alert("Cannot detect lcores.");
- rte_errno = ENOTSUP;
- goto err_out;
- }
-
if (eal_parse_args() < 0) {
rte_eal_init_alert("Error parsing command-line arguments.");
rte_errno = EINVAL;
diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c
index b46ae4b689..63dc734142 100644
--- a/lib/eal/freebsd/eal_hugepage_info.c
+++ b/lib/eal/freebsd/eal_hugepage_info.c
@@ -51,22 +51,53 @@ create_shared_memory(const char *filename, const size_t mem_size)
/*
* No hugepage support on freebsd, but we dummy it, using contigmem driver
*/
+int
+eal_get_platform_hp_info(struct eal_platform_info *platform_info)
+{
+ size_t sysctl_size;
+ int num_buffers, error;
+ int64_t buffer_size;
+
+ sysctl_size = sizeof(num_buffers);
+ error = sysctlbyname("hw.contigmem.num_buffers", &num_buffers,
+ &sysctl_size, NULL, 0);
+ if (error != 0) {
+ EAL_LOG(ERR, "could not read sysctl hw.contigmem.num_buffers");
+ return -1;
+ }
+
+ sysctl_size = sizeof(buffer_size);
+ error = sysctlbyname("hw.contigmem.buffer_size", &buffer_size,
+ &sysctl_size, NULL, 0);
+ if (error != 0) {
+ EAL_LOG(ERR, "could not read sysctl hw.contigmem.buffer_size");
+ return -1;
+ }
+
+ platform_info->num_hugepage_sizes = 1;
+ platform_info->hugepage_sizes[0].size = buffer_size;
+ strlcpy(platform_info->hugepage_sizes[0].dir, CONTIGMEM_DEV,
+ sizeof(platform_info->hugepage_sizes[0].dir));
+ platform_info->hugepage_sizes[0].max_pages[0] = num_buffers;
+ platform_info->hugepage_sizes[0].total_pages = num_buffers;
+
+ return 0;
+}
+
int
eal_hugepage_info_init(void)
{
size_t sysctl_size;
int num_buffers, fd, error;
int64_t buffer_size;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* re-use the linux "internal config" structure for our memory data */
- struct hugepage_info *hpi = &platform_info->hugepage_info[0];
+ struct hugepage_info *hpi = &runtime_state->hugepage_info[0];
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct hugepage_info *tmp_hpi;
unsigned int i;
- platform_info->num_hugepage_sizes = 1;
-
sysctl_size = sizeof(num_buffers);
error = sysctlbyname("hw.contigmem.num_buffers", &num_buffers,
&sysctl_size, NULL, 0);
@@ -109,29 +140,30 @@ eal_hugepage_info_init(void)
hpi->hugepage_sz = buffer_size;
hpi->num_pages[0] = num_buffers;
hpi->lock_descriptor = fd;
+ runtime_state->num_hugepage_sizes = 1;
/* for no shared files mode, do not create shared memory config */
if (user_cfg->no_shconf)
return 0;
tmp_hpi = create_shared_memory(eal_hugepage_info_path(),
- sizeof(platform_info->hugepage_info));
+ sizeof(runtime_state->hugepage_info));
if (tmp_hpi == NULL ) {
EAL_LOG(ERR, "Failed to create shared memory!");
return -1;
}
- memcpy(tmp_hpi, hpi, sizeof(platform_info->hugepage_info));
+ memcpy(tmp_hpi, hpi, sizeof(runtime_state->hugepage_info));
/* we've copied file descriptors along with everything else, but they
* will be invalid in secondary process, so overwrite them
*/
- for (i = 0; i < RTE_DIM(platform_info->hugepage_info); i++) {
+ for (i = 0; i < RTE_DIM(runtime_state->hugepage_info); i++) {
struct hugepage_info *tmp = &tmp_hpi[i];
tmp->lock_descriptor = -1;
}
- if (munmap(tmp_hpi, sizeof(platform_info->hugepage_info)) < 0) {
+ if (munmap(tmp_hpi, sizeof(runtime_state->hugepage_info)) < 0) {
EAL_LOG(ERR, "Failed to unmap shared memory!");
return -1;
}
@@ -143,23 +175,23 @@ eal_hugepage_info_init(void)
int
eal_hugepage_info_read(void)
{
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- struct hugepage_info *hpi = &platform_info->hugepage_info[0];
+ struct hugepage_info *hpi = &runtime_state->hugepage_info[0];
struct hugepage_info *tmp_hpi;
- platform_info->num_hugepage_sizes = 1;
+ runtime_state->num_hugepage_sizes = 1;
tmp_hpi = open_shared_memory(eal_hugepage_info_path(),
- sizeof(platform_info->hugepage_info));
+ sizeof(runtime_state->hugepage_info));
if (tmp_hpi == NULL) {
EAL_LOG(ERR, "Failed to open shared memory!");
return -1;
}
- memcpy(hpi, tmp_hpi, sizeof(platform_info->hugepage_info));
+ memcpy(hpi, tmp_hpi, sizeof(runtime_state->hugepage_info));
- if (munmap(tmp_hpi, sizeof(platform_info->hugepage_info)) < 0) {
+ if (munmap(tmp_hpi, sizeof(runtime_state->hugepage_info)) < 0) {
EAL_LOG(ERR, "Failed to unmap shared memory!");
return -1;
}
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index cf0c5b7332..18e388ef08 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -60,7 +60,8 @@ rte_eal_hugepage_init(void)
void *addr;
unsigned int i, j, seg_idx = 0;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* get pointer to global configuration */
mcfg = eal_get_mcfg();
@@ -106,7 +107,7 @@ rte_eal_hugepage_init(void)
uint64_t page_sz, mem_needed;
unsigned int n_pages, max_pages;
- hpi = &platform_info->hugepage_info[i];
+ hpi = &runtime_state->hugepage_info[i];
page_sz = hpi->hugepage_sz;
max_pages = hpi->num_pages[0];
mem_needed = RTE_ALIGN_CEIL(user_cfg->memory - total_mem,
@@ -269,12 +270,13 @@ attach_segment(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
int
rte_eal_hugepage_attach(void)
{
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct hugepage_info *hpi;
int fd_hugepage = -1;
unsigned int i;
- hpi = &platform_info->hugepage_info[0];
+ hpi = &runtime_state->hugepage_info[0];
for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
const struct hugepage_info *cur_hpi = &hpi[i];
@@ -355,7 +357,8 @@ memseg_primary_init(void)
struct rte_memseg_list *msl;
uint64_t max_mem, total_mem;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* no-huge does not need this at all */
if (user_cfg->no_hugetlbfs)
@@ -383,7 +386,7 @@ memseg_primary_init(void)
struct hugepage_info *hpi;
uint64_t hugepage_sz;
- hpi = &platform_info->hugepage_info[hpi_idx];
+ hpi = &runtime_state->hugepage_info[hpi_idx];
hugepage_sz = hpi->hugepage_sz;
/* no NUMA support on FreeBSD */
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index d3f1748297..1bf519eb10 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -406,19 +406,19 @@ eal_mem_config_init(void)
static void
eal_hugedirs_unlock(void)
{
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
int i;
for (i = 0; i < MAX_HUGEPAGE_SIZES; i++)
{
/* skip uninitialized or unlocked entries */
- if (platform_info->hugepage_info[i].hugepage_sz == 0 ||
- platform_info->hugepage_info[i].lock_descriptor < 0)
+ if (runtime_state->hugepage_info[i].hugepage_sz == 0 ||
+ runtime_state->hugepage_info[i].lock_descriptor < 0)
continue;
/* unlock hugepage file */
- flock(platform_info->hugepage_info[i].lock_descriptor, LOCK_UN);
- close(platform_info->hugepage_info[i].lock_descriptor);
+ flock(runtime_state->hugepage_info[i].lock_descriptor, LOCK_UN);
+ close(runtime_state->hugepage_info[i].lock_descriptor);
/* reset the field */
- platform_info->hugepage_info[i].lock_descriptor = -1;
+ runtime_state->hugepage_info[i].lock_descriptor = -1;
}
}
@@ -607,12 +607,6 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (rte_eal_cpu_init() < 0) {
- rte_eal_init_alert("Cannot detect lcores.");
- rte_errno = ENOTSUP;
- goto err_out;
- }
-
if (eal_parse_args() < 0) {
rte_eal_init_alert("Error parsing command line arguments.");
rte_errno = EINVAL;
diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c
index f35446cdc9..28e4584ddf 100644
--- a/lib/eal/linux/eal_hugepage_info.c
+++ b/lib/eal/linux/eal_hugepage_info.c
@@ -395,6 +395,79 @@ compare_hpi(const void *a, const void *b)
return hpi_b->hugepage_sz - hpi_a->hugepage_sz;
}
+static int
+compare_hp_sizes(const void *a, const void *b)
+{
+ const struct hp_sizes *ha = a;
+ const struct hp_sizes *hb = b;
+
+ if (hb->size > ha->size)
+ return 1;
+ if (hb->size < ha->size)
+ return -1;
+ return 0;
+}
+
+int
+eal_get_platform_hp_info(struct eal_platform_info *platform_info)
+{
+ const char dirent_start_text[] = "hugepages-";
+ const size_t dirent_start_len = sizeof(dirent_start_text) - 1;
+ unsigned int num_sizes = 0;
+ DIR *dir;
+ struct dirent *dirent;
+
+ dir = opendir(sys_dir_path);
+ if (dir == NULL) {
+ EAL_LOG(ERR, "Cannot open directory %s to read system hugepage info",
+ sys_dir_path);
+ return -1;
+ }
+
+ for (dirent = readdir(dir); dirent != NULL; dirent = readdir(dir)) {
+ struct hp_sizes *hps;
+ uint64_t sz;
+ unsigned int i;
+
+ if (strncmp(dirent->d_name, dirent_start_text,
+ dirent_start_len) != 0)
+ continue;
+
+ if (num_sizes >= MAX_HUGEPAGE_SIZES)
+ break;
+
+ sz = rte_str_to_size(&dirent->d_name[dirent_start_len]);
+ hps = &platform_info->hugepage_sizes[num_sizes];
+ hps->size = sz;
+
+ /* fill per-socket page counts; fall back to socket 0 total */
+ hps->total_pages = 0;
+ for (i = 0; i < platform_info->numa_node_count; i++) {
+ int socket = (int)platform_info->numa_nodes[i];
+ hps->max_pages[socket] = get_num_hugepages_on_node(dirent->d_name,
+ socket, sz);
+ hps->total_pages += hps->max_pages[socket];
+ }
+ if (hps->total_pages == 0) {
+ hps->max_pages[0] = get_num_hugepages(dirent->d_name, sz, 0);
+ hps->total_pages = hps->max_pages[0];
+ }
+
+ if (get_hugepage_dir(sz, hps->dir, sizeof(hps->dir)) < 0)
+ hps->dir[0] = '\0';
+
+ num_sizes++;
+ }
+ closedir(dir);
+
+ /* sort largest to smallest, matching hugepage_info ordering */
+ qsort(&platform_info->hugepage_sizes[0], num_sizes,
+ sizeof(platform_info->hugepage_sizes[0]), compare_hp_sizes);
+
+ platform_info->num_hugepage_sizes = num_sizes;
+ return 0;
+}
+
static void
calc_num_pages(struct hugepage_info *hpi, struct dirent *dirent,
unsigned int reusable_pages)
@@ -453,7 +526,7 @@ hugepage_info_init(void)
unsigned int reusable_pages;
DIR *dir;
struct dirent *dirent;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
dir = opendir(sys_dir_path);
@@ -474,7 +547,7 @@ hugepage_info_init(void)
if (num_sizes >= MAX_HUGEPAGE_SIZES)
break;
- hpi = &platform_info->hugepage_info[num_sizes];
+ hpi = &runtime_state->hugepage_info[num_sizes];
hpi->hugepage_sz =
rte_str_to_size(&dirent->d_name[dirent_start_len]);
@@ -545,17 +618,17 @@ hugepage_info_init(void)
if (dirent != NULL)
return -1;
- platform_info->num_hugepage_sizes = num_sizes;
+ runtime_state->num_hugepage_sizes = num_sizes;
/* sort the page directory entries by size, largest to smallest */
- qsort(&platform_info->hugepage_info[0], num_sizes,
- sizeof(platform_info->hugepage_info[0]), compare_hpi);
+ qsort(&runtime_state->hugepage_info[0], num_sizes,
+ sizeof(runtime_state->hugepage_info[0]), compare_hpi);
/* now we have all info, check we have at least one valid size */
for (i = 0; i < num_sizes; i++) {
/* pages may no longer all be on socket 0, so check all */
unsigned int j, num_pages = 0;
- struct hugepage_info *hpi = &platform_info->hugepage_info[i];
+ struct hugepage_info *hpi = &runtime_state->hugepage_info[i];
for (j = 0; j < RTE_MAX_NUMA_NODES; j++)
num_pages += hpi->num_pages[j];
@@ -577,7 +650,7 @@ eal_hugepage_info_init(void)
{
struct hugepage_info *hpi, *tmp_hpi;
unsigned int i;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (hugepage_info_init() < 0)
@@ -587,26 +660,26 @@ eal_hugepage_info_init(void)
if (user_cfg->no_shconf)
return 0;
- hpi = &platform_info->hugepage_info[0];
+ hpi = &runtime_state->hugepage_info[0];
tmp_hpi = create_shared_memory(eal_hugepage_info_path(),
- sizeof(platform_info->hugepage_info));
+ sizeof(runtime_state->hugepage_info));
if (tmp_hpi == NULL) {
EAL_LOG(ERR, "Failed to create shared memory!");
return -1;
}
- memcpy(tmp_hpi, hpi, sizeof(platform_info->hugepage_info));
+ memcpy(tmp_hpi, hpi, sizeof(runtime_state->hugepage_info));
/* we've copied file descriptors along with everything else, but they
* will be invalid in secondary process, so overwrite them
*/
- for (i = 0; i < RTE_DIM(platform_info->hugepage_info); i++) {
+ for (i = 0; i < RTE_DIM(runtime_state->hugepage_info); i++) {
struct hugepage_info *tmp = &tmp_hpi[i];
tmp->lock_descriptor = -1;
}
- if (munmap(tmp_hpi, sizeof(platform_info->hugepage_info)) < 0) {
+ if (munmap(tmp_hpi, sizeof(runtime_state->hugepage_info)) < 0) {
EAL_LOG(ERR, "Failed to unmap shared memory!");
return -1;
}
@@ -615,22 +688,29 @@ eal_hugepage_info_init(void)
int eal_hugepage_info_read(void)
{
- struct eal_platform_info *platform_info = eal_get_platform_info();
- struct hugepage_info *hpi = &platform_info->hugepage_info[0];
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ struct hugepage_info *hpi = &runtime_state->hugepage_info[0];
struct hugepage_info *tmp_hpi;
tmp_hpi = open_shared_memory(eal_hugepage_info_path(),
- sizeof(platform_info->hugepage_info));
+ sizeof(runtime_state->hugepage_info));
if (tmp_hpi == NULL) {
EAL_LOG(ERR, "Failed to open shared memory!");
return -1;
}
- memcpy(hpi, tmp_hpi, sizeof(platform_info->hugepage_info));
+ memcpy(hpi, tmp_hpi, sizeof(runtime_state->hugepage_info));
- if (munmap(tmp_hpi, sizeof(platform_info->hugepage_info)) < 0) {
+ if (munmap(tmp_hpi, sizeof(runtime_state->hugepage_info)) < 0) {
EAL_LOG(ERR, "Failed to unmap shared memory!");
return -1;
}
+
+ /* count valid entries copied from primary process */
+ for (unsigned int i = 0; i < MAX_HUGEPAGE_SIZES; i++) {
+ if (runtime_state->hugepage_info[i].hugepage_sz == 0)
+ break;
+ runtime_state->num_hugepage_sizes = i + 1;
+ }
return 0;
}
diff --git a/lib/eal/linux/eal_memalloc.c b/lib/eal/linux/eal_memalloc.c
index 035e5da08a..a8b3d97fca 100644
--- a/lib/eal/linux/eal_memalloc.c
+++ b/lib/eal/linux/eal_memalloc.c
@@ -965,7 +965,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz,
struct alloc_walk_param wa;
struct hugepage_info *hi = NULL;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
memset(&wa, 0, sizeof(wa));
@@ -973,10 +973,10 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs, size_t page_sz,
if (user_cfg->legacy_mem)
return -1;
- for (i = 0; i < (int) RTE_DIM(platform_info->hugepage_info); i++) {
+ for (i = 0; i < (int) RTE_DIM(runtime_state->hugepage_info); i++) {
if (page_sz ==
- platform_info->hugepage_info[i].hugepage_sz) {
- hi = &platform_info->hugepage_info[i];
+ runtime_state->hugepage_info[i].hugepage_sz) {
+ hi = &runtime_state->hugepage_info[i];
break;
}
}
@@ -1034,7 +1034,7 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
{
int seg, ret = 0;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* dynamic free not supported in legacy mode */
if (user_cfg->legacy_mem)
@@ -1055,13 +1055,13 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
memset(&wa, 0, sizeof(wa));
- for (i = 0; i < (int)RTE_DIM(platform_info->hugepage_info);
+ for (i = 0; i < (int)RTE_DIM(runtime_state->hugepage_info);
i++) {
- hi = &platform_info->hugepage_info[i];
+ hi = &runtime_state->hugepage_info[i];
if (cur->hugepage_sz == hi->hugepage_sz)
break;
}
- if (i == (int)RTE_DIM(platform_info->hugepage_info)) {
+ if (i == (int)RTE_DIM(runtime_state->hugepage_info)) {
EAL_LOG(ERR, "Can't find relevant hugepage_info entry");
ret = -1;
continue;
@@ -1325,7 +1325,7 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused)
{
struct rte_mem_config *mcfg = eal_get_mcfg();
struct rte_memseg_list *primary_msl, *local_msl;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct hugepage_info *hi = NULL;
unsigned int i;
int msl_idx;
@@ -1337,12 +1337,12 @@ sync_walk(const struct rte_memseg_list *msl, void *arg __rte_unused)
primary_msl = &mcfg->memsegs[msl_idx];
local_msl = &local_memsegs[msl_idx];
- for (i = 0; i < RTE_DIM(platform_info->hugepage_info); i++) {
+ for (i = 0; i < RTE_DIM(runtime_state->hugepage_info); i++) {
uint64_t cur_sz =
- platform_info->hugepage_info[i].hugepage_sz;
+ runtime_state->hugepage_info[i].hugepage_sz;
uint64_t msl_sz = primary_msl->page_sz;
if (msl_sz == cur_sz) {
- hi = &platform_info->hugepage_info[i];
+ hi = &runtime_state->hugepage_info[i];
break;
}
}
diff --git a/lib/eal/linux/eal_memory.c b/lib/eal/linux/eal_memory.c
index 1bfea89021..d5d30baf61 100644
--- a/lib/eal/linux/eal_memory.c
+++ b/lib/eal/linux/eal_memory.c
@@ -598,13 +598,13 @@ unlink_hugepage_files(struct hugepage_file *hugepg_tbl,
{
unsigned socket, size;
int page, nrpages = 0;
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* get total number of hugepages */
for (size = 0; size < num_hp_info; size++)
for (socket = 0; socket < RTE_MAX_NUMA_NODES; socket++)
nrpages +=
- platform_info->hugepage_info[size].num_pages[socket];
+ runtime_state->hugepage_info[size].num_pages[socket];
for (page = 0; page < nrpages; page++) {
struct hugepage_file *hp = &hugepg_tbl[page];
@@ -628,12 +628,12 @@ unmap_unneeded_hugepages(struct hugepage_file *hugepg_tbl,
{
unsigned socket, size;
int page, nrpages = 0;
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* get total number of hugepages */
for (size = 0; size < num_hp_info; size++)
for (socket = 0; socket < RTE_MAX_NUMA_NODES; socket++)
- nrpages += platform_info->hugepage_info[size].num_pages[socket];
+ nrpages += runtime_state->hugepage_info[size].num_pages[socket];
for (size = 0; size < num_hp_info; size++) {
for (socket = 0; socket < RTE_MAX_NUMA_NODES; socket++) {
@@ -855,7 +855,7 @@ memseg_list_free(struct rte_memseg_list *msl)
static int __rte_unused
prealloc_segments(struct hugepage_file *hugepages, int n_pages)
{
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct rte_mem_config *mcfg = eal_get_mcfg();
int cur_page, seg_start_page, end_seg, new_memseg;
unsigned int hpi_idx, socket, i;
@@ -875,10 +875,10 @@ prealloc_segments(struct hugepage_file *hugepages, int n_pages)
/* we cannot know how many page sizes and sockets we have discovered, so
* loop over all of them
*/
- for (hpi_idx = 0; hpi_idx < platform_info->num_hugepage_sizes;
+ for (hpi_idx = 0; hpi_idx < runtime_state->num_hugepage_sizes;
hpi_idx++) {
uint64_t page_sz =
- platform_info->hugepage_info[hpi_idx].hugepage_sz;
+ runtime_state->hugepage_info[hpi_idx].hugepage_sz;
for (i = 0; i < rte_socket_count(); i++) {
struct rte_memseg_list *msl;
@@ -1083,10 +1083,10 @@ eal_get_hugepage_mem_size(void)
{
uint64_t size = 0;
unsigned i, j;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
- struct hugepage_info *hpi = &platform_info->hugepage_info[i];
+ for (i = 0; i < runtime_state->num_hugepage_sizes; i++) {
+ struct hugepage_info *hpi = &runtime_state->hugepage_info[i];
if (strnlen(hpi->hugedir, sizeof(hpi->hugedir)) != 0) {
for (j = 0; j < RTE_MAX_NUMA_NODES; j++) {
size += hpi->hugepage_sz * hpi->num_pages[j];
@@ -1141,7 +1141,7 @@ eal_legacy_hugepage_init(void)
struct rte_mem_config *mcfg;
struct hugepage_file *hugepage = NULL, *tmp_hp = NULL;
struct hugepage_info used_hp[MAX_HUGEPAGE_SIZES];
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct eal_user_cfg *user_cfg = eal_get_user_configuration();
uint64_t memory[RTE_MAX_NUMA_NODES];
@@ -1258,11 +1258,11 @@ eal_legacy_hugepage_init(void)
/* calculate total number of hugepages available. at this point we haven't
* yet started sorting them so they all are on socket 0 */
- for (i = 0; i < (int) platform_info->num_hugepage_sizes; i++) {
+ for (i = 0; i < (int) runtime_state->num_hugepage_sizes; i++) {
/* meanwhile, also initialize used_hp hugepage sizes in used_hp */
- used_hp[i].hugepage_sz = platform_info->hugepage_info[i].hugepage_sz;
+ used_hp[i].hugepage_sz = runtime_state->hugepage_info[i].hugepage_sz;
- nr_hugepages += platform_info->hugepage_info[i].num_pages[0];
+ nr_hugepages += runtime_state->hugepage_info[i].num_pages[0];
}
/*
@@ -1286,7 +1286,7 @@ eal_legacy_hugepage_init(void)
memory[i] = user_cfg->numa_mem[i];
/* map all hugepages and sort them */
- for (i = 0; i < (int)platform_info->num_hugepage_sizes; i++) {
+ for (i = 0; i < (int)runtime_state->num_hugepage_sizes; i++) {
unsigned pages_old, pages_new;
struct hugepage_info *hpi;
@@ -1295,7 +1295,7 @@ eal_legacy_hugepage_init(void)
* we just map all hugepages available to the system
* all hugepages are still located on socket 0
*/
- hpi = &platform_info->hugepage_info[i];
+ hpi = &runtime_state->hugepage_info[i];
if (hpi->num_pages[0] == 0)
continue;
@@ -1358,9 +1358,9 @@ eal_legacy_hugepage_init(void)
/* clean out the numbers of pages */
- for (i = 0; i < (int) platform_info->num_hugepage_sizes; i++)
+ for (i = 0; i < (int) runtime_state->num_hugepage_sizes; i++)
for (j = 0; j < RTE_MAX_NUMA_NODES; j++)
- platform_info->hugepage_info[i].num_pages[j] = 0;
+ runtime_state->hugepage_info[i].num_pages[j] = 0;
/* get hugepages for each socket */
for (i = 0; i < nr_hugefiles; i++) {
@@ -1368,11 +1368,11 @@ eal_legacy_hugepage_init(void)
/* find a hugepage info with right size and increment num_pages */
const int nb_hpsizes = RTE_MIN(MAX_HUGEPAGE_SIZES,
- (int)platform_info->num_hugepage_sizes);
+ (int)runtime_state->num_hugepage_sizes);
for (j = 0; j < nb_hpsizes; j++) {
if (tmp_hp[i].size ==
- platform_info->hugepage_info[j].hugepage_sz) {
- platform_info->hugepage_info[j].num_pages[socket]++;
+ runtime_state->hugepage_info[j].hugepage_sz) {
+ runtime_state->hugepage_info[j].num_pages[socket]++;
}
}
}
@@ -1383,15 +1383,15 @@ eal_legacy_hugepage_init(void)
/* calculate final number of pages */
nr_hugepages = eal_dynmem_calc_num_pages_per_socket(memory,
- platform_info->hugepage_info, used_hp,
- platform_info->num_hugepage_sizes);
+ runtime_state->hugepage_info, used_hp,
+ runtime_state->num_hugepage_sizes);
/* error if not enough memory available */
if (nr_hugepages < 0)
goto fail;
/* reporting in! */
- for (i = 0; i < (int) platform_info->num_hugepage_sizes; i++) {
+ for (i = 0; i < (int) runtime_state->num_hugepage_sizes; i++) {
for (j = 0; j < RTE_MAX_NUMA_NODES; j++) {
if (used_hp[i].num_pages[j] > 0) {
EAL_LOG(DEBUG,
@@ -1420,7 +1420,7 @@ eal_legacy_hugepage_init(void)
* also, sets final_va to NULL on pages that were unmapped.
*/
if (unmap_unneeded_hugepages(tmp_hp, used_hp,
- platform_info->num_hugepage_sizes) < 0) {
+ runtime_state->num_hugepage_sizes) < 0) {
EAL_LOG(ERR, "Unmapping and locking hugepages failed!");
goto fail;
}
@@ -1455,7 +1455,7 @@ eal_legacy_hugepage_init(void)
/* free the hugepage backing files */
if (user_cfg->hugepage_file.unlink_before_mapping &&
- unlink_hugepage_files(tmp_hp, platform_info->num_hugepage_sizes) < 0) {
+ unlink_hugepage_files(tmp_hp, runtime_state->num_hugepage_sizes) < 0) {
EAL_LOG(ERR, "Unlinking hugepage files failed!");
goto fail;
}
@@ -1708,7 +1708,7 @@ memseg_primary_init_32(void)
struct rte_memseg_list *msl;
uint64_t extra_mem_per_socket, total_extra_mem, total_requested_mem;
uint64_t max_mem;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* no-huge does not need this at all */
@@ -1775,7 +1775,7 @@ memseg_primary_init_32(void)
/* create memseg lists */
for (i = 0; i < rte_socket_count(); i++) {
- int hp_sizes = (int) platform_info->num_hugepage_sizes;
+ int hp_sizes = (int) runtime_state->num_hugepage_sizes;
uint64_t max_socket_mem, cur_socket_mem;
unsigned int main_lcore_socket;
bool skip;
@@ -1822,7 +1822,7 @@ memseg_primary_init_32(void)
struct hugepage_info *hpi;
int type_msl_idx, max_segs, total_segs = 0;
- hpi = &platform_info->hugepage_info[hpi_idx];
+ hpi = &runtime_state->hugepage_info[hpi_idx];
hugepage_sz = hpi->hugepage_sz;
/* check if pages are actually available */
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index e03ba18c4b..ed293ada8f 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -179,12 +179,6 @@ rte_eal_init(int argc, char **argv)
eal_log_init(NULL);
- if (eal_create_cpu_map() < 0) {
- rte_eal_init_alert("Cannot discover CPU and NUMA.");
- /* rte_errno is set */
- goto err_out;
- }
-
/* verify if DPDK supported on architecture MMU */
if (!eal_mmu_supported()) {
rte_eal_init_alert("Unsupported MMU type.");
@@ -192,12 +186,6 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (rte_eal_cpu_init() < 0) {
- rte_eal_init_alert("Cannot detect lcores.");
- rte_errno = ENOTSUP;
- goto err_out;
- }
-
if (eal_parse_args() < 0) {
rte_eal_init_alert("Invalid command line arguments.");
rte_errno = EINVAL;
diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c
index 4e9e958c65..fa19b7b77c 100644
--- a/lib/eal/windows/eal_hugepages.c
+++ b/lib/eal/windows/eal_hugepages.c
@@ -62,11 +62,9 @@ hugepage_info_init(void)
struct hugepage_info *hpi;
unsigned int socket_id;
int ret = 0;
- struct eal_platform_info *platform_info = eal_get_platform_info();
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- /* Only one hugepage size available on Windows. */
- platform_info->num_hugepage_sizes = 1;
- hpi = &platform_info->hugepage_info[0];
+ hpi = &runtime_state->hugepage_info[0];
hpi->hugepage_sz = GetLargePageMinimum();
if (hpi->hugepage_sz == 0)
@@ -96,10 +94,39 @@ hugepage_info_init(void)
/* No hugepage filesystem on Windows. */
hpi->lock_descriptor = -1;
memset(hpi->hugedir, 0, sizeof(hpi->hugedir));
+ runtime_state->num_hugepage_sizes = 1;
return ret;
}
+int
+eal_get_platform_hp_info(struct eal_platform_info *platform_info)
+{
+ size_t hp_sz;
+ unsigned int socket_id;
+
+ hp_sz = GetLargePageMinimum();
+ if (hp_sz == 0)
+ return -ENOTSUP;
+
+ for (socket_id = 0; socket_id < platform_info->numa_node_count; socket_id++) {
+ ULONGLONG bytes;
+ unsigned int numa_node;
+
+ numa_node = eal_socket_numa_node(socket_id);
+ if (GetNumaAvailableMemoryNodeEx(numa_node, &bytes)) {
+ platform_info->hugepage_sizes[0].max_pages[socket_id] = bytes / hp_sz;
+ platform_info->hugepage_sizes[0].total_pages += bytes / hp_sz;
+ }
+ }
+
+ platform_info->num_hugepage_sizes = 1;
+ platform_info->hugepage_sizes[0].size = hp_sz;
+ platform_info->hugepage_sizes[0].dir[0] = '\0'; /* no hugetlbfs on Windows */
+
+ return 0;
+}
+
int
eal_hugepage_info_init(void)
{
diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c
index 7eaae467d8..3ab8af0466 100644
--- a/lib/eal/windows/eal_memalloc.c
+++ b/lib/eal/windows/eal_memalloc.c
@@ -315,6 +315,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
struct alloc_walk_param wa;
const struct hugepage_info *hi = NULL;
const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (user_cfg->legacy_mem) {
@@ -323,7 +324,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
}
for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
- const struct hugepage_info *hpi = &platform_info->hugepage_info[i];
+ const struct hugepage_info *hpi = &runtime_state->hugepage_info[i];
if (page_sz == hpi->hugepage_sz) {
hi = hpi;
break;
@@ -367,7 +368,7 @@ int
eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
{
int seg, ret = 0;
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
/* dynamic free not supported in legacy mode */
@@ -390,12 +391,12 @@ eal_memalloc_free_seg_bulk(struct rte_memseg **ms, int n_segs)
memset(&wa, 0, sizeof(wa));
- for (i = 0; i < RTE_DIM(platform_info->hugepage_info); i++) {
- hi = &platform_info->hugepage_info[i];
+ for (i = 0; i < RTE_DIM(runtime_state->hugepage_info); i++) {
+ hi = &runtime_state->hugepage_info[i];
if (cur->hugepage_sz == hi->hugepage_sz)
break;
}
- if (i == RTE_DIM(platform_info->hugepage_info)) {
+ if (i == RTE_DIM(runtime_state->hugepage_info)) {
EAL_LOG(ERR, "Can't find relevant hugepage_info entry");
ret = -1;
continue;
diff --git a/lib/eal/windows/eal_windows.h b/lib/eal/windows/eal_windows.h
index 91cf15eaaa..64662e957a 100644
--- a/lib/eal/windows/eal_windows.h
+++ b/lib/eal/windows/eal_windows.h
@@ -29,14 +29,6 @@
#define EAL_LOG_STUB() \
EAL_LOG(DEBUG, "Windows: %s() is a stub", __func__)
-/**
- * Create a map of processors and cores on the system.
- *
- * @return
- * 0 on success, (-1) on failure and rte_errno is set.
- */
-int eal_create_cpu_map(void);
-
/**
* Get system NUMA node number for a socket ID.
*
--
2.51.0
* [RFC PATCH 33/44] eal: remove duplicated scan of sysfs for hugepage details
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (31 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 32/44] eal: initialize platform info on first use Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 34/44] eal: add utilities for working with user config struct Bruce Richardson
` (12 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Since the platform_info struct is already populated by scanning the
system for hugepage details, reuse that information in the hugepage
init functions rather than rescanning sysfs.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/freebsd/eal_hugepage_info.c | 22 ++-----
lib/eal/linux/eal_hugepage_info.c | 94 +++++++++--------------------
lib/eal/windows/eal_hugepages.c | 26 +++-----
3 files changed, 40 insertions(+), 102 deletions(-)
diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c
index 63dc734142..9c97897cc3 100644
--- a/lib/eal/freebsd/eal_hugepage_info.c
+++ b/lib/eal/freebsd/eal_hugepage_info.c
@@ -87,8 +87,8 @@ eal_get_platform_hp_info(struct eal_platform_info *platform_info)
int
eal_hugepage_info_init(void)
{
- size_t sysctl_size;
- int num_buffers, fd, error;
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ int num_buffers, fd;
int64_t buffer_size;
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
@@ -98,23 +98,13 @@ eal_hugepage_info_init(void)
struct hugepage_info *tmp_hpi;
unsigned int i;
- sysctl_size = sizeof(num_buffers);
- error = sysctlbyname("hw.contigmem.num_buffers", &num_buffers,
- &sysctl_size, NULL, 0);
-
- if (error != 0) {
- EAL_LOG(ERR, "could not read sysctl hw.contigmem.num_buffers");
+ if (platform_info->num_hugepage_sizes == 0) {
+ EAL_LOG(ERR, "could not read hugepage info from platform");
return -1;
}
- sysctl_size = sizeof(buffer_size);
- error = sysctlbyname("hw.contigmem.buffer_size", &buffer_size,
- &sysctl_size, NULL, 0);
-
- if (error != 0) {
- EAL_LOG(ERR, "could not read sysctl hw.contigmem.buffer_size");
- return -1;
- }
+ buffer_size = (int64_t)platform_info->hugepage_sizes[0].size;
+ num_buffers = (int)platform_info->hugepage_sizes[0].max_pages[0];
fd = open(CONTIGMEM_DEV, O_RDWR);
if (fd < 0) {
diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c
index 28e4584ddf..738632bc20 100644
--- a/lib/eal/linux/eal_hugepage_info.c
+++ b/lib/eal/linux/eal_hugepage_info.c
@@ -386,15 +386,6 @@ inspect_hugedir(const char *hugedir, uint64_t *total_size)
return walk_hugedir(hugedir, inspect_hugedir_cb, total_size);
}
-static int
-compare_hpi(const void *a, const void *b)
-{
- const struct hugepage_info *hpi_a = a;
- const struct hugepage_info *hpi_b = b;
-
- return hpi_b->hugepage_sz - hpi_a->hugepage_sz;
-}
-
static int
compare_hp_sizes(const void *a, const void *b)
{
@@ -469,20 +460,13 @@ eal_get_platform_hp_info(struct eal_platform_info *platform_info)
}
static void
-calc_num_pages(struct hugepage_info *hpi, struct dirent *dirent,
+calc_num_pages(struct hugepage_info *hpi, const struct hp_sizes *hps,
unsigned int reusable_pages)
{
uint64_t total_pages = 0;
unsigned int i;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- /*
- * first, try to put all hugepages into relevant sockets, but
- * if first attempts fails, fall back to collecting all pages
- * in one socket and sorting them later
- */
- total_pages = 0;
-
/*
* We also don't want to do this for legacy init.
* When there are hugepage files to reuse it is unknown
@@ -490,23 +474,20 @@ calc_num_pages(struct hugepage_info *hpi, struct dirent *dirent,
* This could be determined by mapping,
* but it is precisely what hugepage file reuse is trying to avoid.
*/
- if (!user_cfg->legacy_mem && reusable_pages == 0)
- for (i = 0; i < rte_socket_count(); i++) {
- int socket = rte_socket_id_by_idx(i);
- unsigned int num_pages =
- get_num_hugepages_on_node(
- dirent->d_name, socket,
- hpi->hugepage_sz);
- hpi->num_pages[socket] = num_pages;
- total_pages += num_pages;
+ if (!user_cfg->legacy_mem && reusable_pages == 0) {
+ for (i = 0; i < RTE_MAX_NUMA_NODES; i++) {
+ hpi->num_pages[i] = hps->max_pages[i];
+ total_pages += hps->max_pages[i];
}
+ }
/*
* we failed to sort memory from the get go, so fall
* back to old way
*/
if (total_pages == 0) {
- hpi->num_pages[0] = get_num_hugepages(dirent->d_name,
- hpi->hugepage_sz, reusable_pages);
+ hpi->num_pages[0] = hps->total_pages > 0 ? hps->total_pages :
+ get_num_hugepages("hugepages", hpi->hugepage_sz,
+ reusable_pages);
#ifndef RTE_ARCH_64
/* for 32-bit systems, limit number of hugepages to
@@ -519,51 +500,35 @@ calc_num_pages(struct hugepage_info *hpi, struct dirent *dirent,
static int
hugepage_info_init(void)
-{ const char dirent_start_text[] = "hugepages-";
- const size_t dirent_start_len = sizeof(dirent_start_text) - 1;
+{
unsigned int i, num_sizes = 0;
uint64_t reusable_bytes;
unsigned int reusable_pages;
- DIR *dir;
- struct dirent *dirent;
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
+ int failed = 0;
- dir = opendir(sys_dir_path);
- if (dir == NULL) {
- EAL_LOG(ERR,
- "Cannot open directory %s to read system hugepage info",
- sys_dir_path);
- return -1;
- }
-
- for (dirent = readdir(dir); dirent != NULL; dirent = readdir(dir)) {
+ /* platform_info->hugepage_sizes[] is already sorted largest to smallest */
+ for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
+ const struct hp_sizes *hps = &platform_info->hugepage_sizes[i];
struct hugepage_info *hpi;
- if (strncmp(dirent->d_name, dirent_start_text,
- dirent_start_len) != 0)
- continue;
-
if (num_sizes >= MAX_HUGEPAGE_SIZES)
break;
hpi = &runtime_state->hugepage_info[num_sizes];
- hpi->hugepage_sz =
- rte_str_to_size(&dirent->d_name[dirent_start_len]);
+ hpi->hugepage_sz = hps->size;
/* first, check if we have a mountpoint */
if (get_hugepage_dir(hpi->hugepage_sz,
hpi->hugedir, sizeof(hpi->hugedir)) < 0) {
- uint32_t num_pages;
-
- num_pages = get_num_hugepages(dirent->d_name,
- hpi->hugepage_sz, 0);
- if (num_pages > 0)
+ if (hps->total_pages > 0)
EAL_LOG(NOTICE,
"%" PRIu32 " hugepages of size "
"%" PRIu64 " reserved, but no mounted "
"hugetlbfs found for that size",
- num_pages, hpi->hugepage_sz);
+ hps->total_pages, hpi->hugepage_sz);
/* if we have kernel support for reserving hugepages
* through mmap, and we're in in-memory mode, treat this
* page size as valid. we cannot be in legacy mode at
@@ -572,11 +537,9 @@ hugepage_info_init(void)
*/
#ifdef MAP_HUGE_SHIFT
if (user_cfg->in_memory) {
- EAL_LOG(DEBUG, "In-memory mode enabled, "
- "hugepages of size %" PRIu64 " bytes "
- "will be allocated anonymously",
+ EAL_LOG(DEBUG, "In-memory mode enabled, hugepages of size %" PRIu64 " bytes will be allocated anonymously",
hpi->hugepage_sz);
- calc_num_pages(hpi, dirent, 0);
+ calc_num_pages(hpi, hps, 0);
num_sizes++;
}
#endif
@@ -590,6 +553,7 @@ hugepage_info_init(void)
if (flock(hpi->lock_descriptor, LOCK_EX) == -1) {
EAL_LOG(CRIT,
"Failed to lock hugepage directory!");
+ failed = 1;
break;
}
@@ -600,30 +564,26 @@ hugepage_info_init(void)
reusable_pages = 0;
if (!user_cfg->hugepage_file.unlink_existing) {
reusable_bytes = 0;
- if (inspect_hugedir(hpi->hugedir,
- &reusable_bytes) < 0)
+ if (inspect_hugedir(hpi->hugedir, &reusable_bytes) < 0) {
+ failed = 1;
break;
+ }
RTE_ASSERT(reusable_bytes % hpi->hugepage_sz == 0);
reusable_pages = reusable_bytes / hpi->hugepage_sz;
} else if (clear_hugedir(hpi->hugedir) < 0) {
+ failed = 1;
break;
}
- calc_num_pages(hpi, dirent, reusable_pages);
+ calc_num_pages(hpi, hps, reusable_pages);
num_sizes++;
}
- closedir(dir);
- /* something went wrong, and we broke from the for loop above */
- if (dirent != NULL)
+ if (failed)
return -1;
runtime_state->num_hugepage_sizes = num_sizes;
- /* sort the page directory entries by size, largest to smallest */
- qsort(&runtime_state->hugepage_info[0], num_sizes,
- sizeof(runtime_state->hugepage_info[0]), compare_hpi);
-
/* now we have all info, check we have at least one valid size */
for (i = 0; i < num_sizes; i++) {
/* pages may no longer all be on socket 0, so check all */
diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c
index fa19b7b77c..0c62f5ff48 100644
--- a/lib/eal/windows/eal_hugepages.c
+++ b/lib/eal/windows/eal_hugepages.c
@@ -59,33 +59,21 @@ hugepage_claim_privilege(void)
static int
hugepage_info_init(void)
{
+ const struct eal_platform_info *platform_info = eal_get_platform_info();
struct hugepage_info *hpi;
unsigned int socket_id;
int ret = 0;
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- hpi = &runtime_state->hugepage_info[0];
-
- hpi->hugepage_sz = GetLargePageMinimum();
- if (hpi->hugepage_sz == 0)
+ if (platform_info->num_hugepage_sizes == 0)
return -ENOTSUP;
- /* Assume all memory on each NUMA node available for hugepages,
- * because Windows neither advertises additional limits,
- * nor provides an API to query them.
- */
- for (socket_id = 0; socket_id < rte_socket_count(); socket_id++) {
- ULONGLONG bytes;
- unsigned int numa_node;
-
- numa_node = eal_socket_numa_node(socket_id);
- if (!GetNumaAvailableMemoryNodeEx(numa_node, &bytes)) {
- RTE_LOG_WIN32_ERR("GetNumaAvailableMemoryNodeEx(%u)",
- numa_node);
- continue;
- }
+ hpi = &runtime_state->hugepage_info[0];
+ hpi->hugepage_sz = platform_info->hugepage_sizes[0].size;
- hpi->num_pages[socket_id] = bytes / hpi->hugepage_sz;
+ for (socket_id = 0; socket_id < rte_socket_count(); socket_id++) {
+ hpi->num_pages[socket_id] =
+ platform_info->hugepage_sizes[0].max_pages[socket_id];
EAL_LOG(DEBUG,
"Found %u hugepages of %zu bytes on socket %u",
hpi->num_pages[socket_id], hpi->hugepage_sz, socket_id);
--
2.51.0
* [RFC PATCH 34/44] eal: add utilities for working with user config struct
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (32 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 33/44] eal: remove duplicated scan of sysfs for hugepage details Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 35/44] eal: split EAL init into two stages Bruce Richardson
` (11 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Since the user-config struct has tailq elements, string elements and
variable-length arrays, it cannot be initialized via a simple "= {0}",
and copying and cleanup require looping through the various non-basic
elements. Add functions below the struct definition to handle these
tasks correctly, adding a comment on the struct to remind any future
editors to also adjust the functions if necessary.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_options.c | 12 +--
lib/eal/common/eal_internal_cfg.h | 152 ++++++++++++++++++++++++++++
2 files changed, 153 insertions(+), 11 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 605c5a59d1..32984cb8eb 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -1800,17 +1800,7 @@ eal_parse_args(void)
* false or NULL, which is the correct default (RTE_PROC_PRIMARY,
* RTE_INTR_MODE_NONE, RTE_IOVA_DC, etc. are all defined as 0).
*/
- *user_cfg = (struct eal_user_cfg){
- .devopt_list = TAILQ_HEAD_INITIALIZER(user_cfg->devopt_list),
- .plugin_list = TAILQ_HEAD_INITIALIZER(user_cfg->plugin_list),
- .trace_patterns = STAILQ_HEAD_INITIALIZER(user_cfg->trace_patterns),
- .hugepage_file.unlink_existing = true,
- .main_lcore = -1,
-#ifndef RTE_LIBEAL_USE_HPET
- .no_hpet = true,
-#endif
- .max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH,
- };
+ *user_cfg = EAL_USER_CFG_INITIALIZER(*user_cfg);
bool remap_lcores = (args.remap_lcore_ids != NULL);
struct arg_list_elem *arg;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index b5962d6081..31f2c2cf72 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -10,6 +10,7 @@
#ifndef EAL_INTERNAL_CFG_H
#define EAL_INTERNAL_CFG_H
+#include <malloc.h>
#include <sys/queue.h>
#include <rte_devargs.h>
@@ -17,6 +18,7 @@
#include <rte_os_shim.h>
#include <rte_pci_dev_feature_defs.h>
#include <rte_trace.h>
+#include <rte_vect.h>
#include <stdint.h>
#include <stdbool.h>
@@ -92,6 +94,8 @@ TAILQ_HEAD(eal_devopt_list, device_option);
/**
* User-provided EAL initialization configuration.
* Immutable after initialization, so no need for atomic types or locks.
+ *
+ * NOTE: when modifying this struct, always update the initializer, copy and cleanup functions below.
*/
struct eal_user_cfg {
struct eal_devopt_list devopt_list; /**< staged device options (-a/-b/--vdev) */
@@ -141,6 +145,154 @@ struct eal_user_cfg {
int main_lcore; /**< ID of the main lcore */
};
+#ifdef RTE_LIBEAL_USE_HPET
+#define EAL_NO_HPET_DEFAULT false
+#else
+#define EAL_NO_HPET_DEFAULT true
+#endif
+
+#define EAL_USER_CFG_INITIALIZER(self) (struct eal_user_cfg){ \
+ .devopt_list = TAILQ_HEAD_INITIALIZER((self).devopt_list), \
+ .plugin_list = TAILQ_HEAD_INITIALIZER((self).plugin_list), \
+ .trace_patterns = STAILQ_HEAD_INITIALIZER((self).trace_patterns), \
+ .hugepage_file.unlink_existing = true, \
+ .main_lcore = -1, \
+ .no_hpet = EAL_NO_HPET_DEFAULT, \
+ .max_simd_bitwidth.bitwidth = RTE_VECT_DEFAULT_SIMD_BITWIDTH, \
+}
+
+static inline void
+eal_user_cfg_cleanup(struct eal_user_cfg *cfg)
+{
+ while (!TAILQ_EMPTY(&cfg->devopt_list)) {
+ struct device_option *devopt = TAILQ_FIRST(&cfg->devopt_list);
+ TAILQ_REMOVE(&cfg->devopt_list, devopt, next);
+ free(devopt);
+ }
+
+ while (!TAILQ_EMPTY(&cfg->plugin_list)) {
+ struct eal_plugin_path *p = TAILQ_FIRST(&cfg->plugin_list);
+ TAILQ_REMOVE(&cfg->plugin_list, p, next);
+ free(p);
+ }
+
+ while (!STAILQ_EMPTY(&cfg->trace_patterns)) {
+ struct eal_trace_arg *ta = STAILQ_FIRST(&cfg->trace_patterns);
+ STAILQ_REMOVE_HEAD(&cfg->trace_patterns, next);
+ free(ta->val);
+ free(ta);
+ }
+
+ free(cfg->trace_dir);
+ cfg->trace_dir = NULL;
+ free(cfg->hugefile_prefix);
+ cfg->hugefile_prefix = NULL;
+ free(cfg->hugepage_dir);
+ cfg->hugepage_dir = NULL;
+ free(cfg->user_mbuf_pool_ops_name);
+ cfg->user_mbuf_pool_ops_name = NULL;
+
+ for (unsigned int i = 0; i < RTE_MAX_LCORE; i++) {
+ free(cfg->lcore_cpusets[i]);
+ cfg->lcore_cpusets[i] = NULL;
+ }
+}
+
+static inline int
+eal_user_cfg_copy(struct eal_user_cfg *dst, const struct eal_user_cfg *src)
+{
+
+ /* copy all scalar/fixed-size fields */
+ *dst = *src;
+
+ /* re-initialise list heads — the shallow copy above has stale pointers */
+ TAILQ_INIT(&dst->devopt_list);
+ TAILQ_INIT(&dst->plugin_list);
+ STAILQ_INIT(&dst->trace_patterns);
+
+ /* zero heap string pointers so cleanup is safe on partial failure */
+ dst->trace_dir = NULL;
+ dst->hugefile_prefix = NULL;
+ dst->hugepage_dir = NULL;
+ dst->user_mbuf_pool_ops_name = NULL;
+ for (unsigned int i = 0; i < RTE_MAX_LCORE; i++)
+ dst->lcore_cpusets[i] = NULL;
+
+ /* deep-copy device option list (device_option has a flexible array member) */
+ struct device_option *devopt, *devopt_copy;
+ TAILQ_FOREACH(devopt, &src->devopt_list, next) {
+ size_t arglen = strlen(devopt->arg) + 1;
+ devopt_copy = calloc(1, sizeof(*devopt_copy) + arglen);
+ if (devopt_copy == NULL)
+ goto err;
+ devopt_copy->type = devopt->type;
+ memcpy(devopt_copy->arg, devopt->arg, arglen);
+ TAILQ_INSERT_TAIL(&dst->devopt_list, devopt_copy, next);
+ }
+
+ /* deep-copy plugin path list */
+ struct eal_plugin_path *p, *p_copy;
+ TAILQ_FOREACH(p, &src->plugin_list, next) {
+ p_copy = malloc(sizeof(*p_copy));
+ if (p_copy == NULL)
+ goto err;
+ memcpy(p_copy->name, p->name, sizeof(p_copy->name));
+ TAILQ_INSERT_TAIL(&dst->plugin_list, p_copy, next);
+ }
+
+ /* deep-copy trace pattern list */
+ struct eal_trace_arg *ta, *ta_copy;
+ STAILQ_FOREACH(ta, &src->trace_patterns, next) {
+ ta_copy = malloc(sizeof(*ta_copy));
+ if (ta_copy == NULL)
+ goto err;
+ ta_copy->val = strdup(ta->val);
+ if (ta_copy->val == NULL) {
+ free(ta_copy);
+ goto err;
+ }
+ STAILQ_INSERT_TAIL(&dst->trace_patterns, ta_copy, next);
+ }
+
+ /* deep-copy heap strings */
+ if (src->trace_dir != NULL) {
+ dst->trace_dir = strdup(src->trace_dir);
+ if (dst->trace_dir == NULL)
+ goto err;
+ }
+ if (src->hugefile_prefix != NULL) {
+ dst->hugefile_prefix = strdup(src->hugefile_prefix);
+ if (dst->hugefile_prefix == NULL)
+ goto err;
+ }
+ if (src->hugepage_dir != NULL) {
+ dst->hugepage_dir = strdup(src->hugepage_dir);
+ if (dst->hugepage_dir == NULL)
+ goto err;
+ }
+ if (src->user_mbuf_pool_ops_name != NULL) {
+ dst->user_mbuf_pool_ops_name = strdup(src->user_mbuf_pool_ops_name);
+ if (dst->user_mbuf_pool_ops_name == NULL)
+ goto err;
+ }
+
+ /* deep-copy per-lcore cpusets */
+ for (unsigned int i = 0; i < RTE_MAX_LCORE; i++) {
+ if (src->lcore_cpusets[i] == NULL)
+ continue;
+ dst->lcore_cpusets[i] = malloc(sizeof(rte_cpuset_t));
+ if (dst->lcore_cpusets[i] == NULL)
+ goto err;
+ *dst->lcore_cpusets[i] = *src->lcore_cpusets[i];
+ }
+
+ return 0;
+
+err:
+ eal_user_cfg_cleanup(dst);
+ return -1;
+}
+
/**
* Hardware facts about a single physical CPU, populated during CPU discovery.
* Indexed by physical CPU ID (not DPDK lcore ID).
--
2.51.0
* [RFC PATCH 35/44] eal: split EAL init into two stages
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (33 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 34/44] eal: add utilities for working with user config struct Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 36/44] eal: provide hooks for init with externally supplied config Bruce Richardson
` (10 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Split the rte_eal_init function into two parts: the first handles
argument parsing and then calls the second, which does the actual
subsystem initialization. To keep the split clean and make the user_cfg
struct the point of data transfer between the two, update functions in
eal_common_options.c to take the user_cfg as a parameter rather than
relying on the EAL global.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
Note: to avoid having a massive diff - and therefore make reviewing
easier - I have kept the second function below the first, rather than
putting it above. A forward declaration of the function is enough to
keep the compiler happy, and the result shows clearly the split of the
function into two.
---
lib/eal/common/eal_common_options.c | 69 +++++++++--------------------
lib/eal/common/eal_options.h | 2 +-
lib/eal/freebsd/eal.c | 56 +++++++++++++++++------
lib/eal/linux/eal.c | 54 +++++++++++++++++-----
lib/eal/windows/eal.c | 55 +++++++++++++++++------
5 files changed, 149 insertions(+), 87 deletions(-)
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index 32984cb8eb..4cae4cd43f 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -410,9 +410,8 @@ eal_clean_saved_args(void)
#endif /* !RTE_EXEC_ENV_WINDOWS */
static int
-eal_option_device_add(enum rte_devtype type, const char *arg)
+eal_option_device_add(struct eal_user_cfg *user_cfg, enum rte_devtype type, const char *arg)
{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct device_option *devopt;
size_t arglen;
int ret;
@@ -467,9 +466,8 @@ eal_get_hugefile_prefix(void)
}
static int
-eal_plugin_path_add(const char *path)
+eal_plugin_path_add(struct eal_user_cfg *user_cfg, const char *path)
{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
struct eal_plugin_path *p;
p = malloc(sizeof(*p));
@@ -1001,10 +999,9 @@ rte_eal_parse_coremask(const char *coremask, rte_cpuset_t *cpuset, bool limit_ra
/* Changes the lcore id of the main thread */
static int
-eal_parse_main_lcore(const char *arg)
+eal_parse_main_lcore(struct eal_user_cfg *user_cfg, const char *arg)
{
char *parsing_end;
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
errno = 0;
user_cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
@@ -1438,10 +1435,9 @@ eal_parse_proc_type(const char *arg)
}
static int
-eal_parse_iova_mode(const char *name)
+eal_parse_iova_mode(struct eal_user_cfg *user_cfg, const char *name)
{
int mode;
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (name == NULL)
return -1;
@@ -1485,11 +1481,10 @@ eal_parse_simd_bitwidth(const char *arg)
}
static int
-eal_parse_base_virtaddr(const char *arg)
+eal_parse_base_virtaddr(struct eal_user_cfg *user_cfg, const char *arg)
{
char *end;
uint64_t addr;
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
errno = 0;
addr = strtoull(arg, &end, 16);
@@ -1712,9 +1707,8 @@ eal_parse_socket_arg(char *strval, volatile uint64_t *socket_arg)
}
static int
-eal_parse_vfio_intr(const char *mode)
+eal_parse_vfio_intr(struct eal_user_cfg *user_cfg, const char *mode)
{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
static struct {
const char *name;
enum rte_intr_mode value;
@@ -1734,9 +1728,8 @@ eal_parse_vfio_intr(const char *mode)
}
static int
-eal_parse_vfio_vf_token(const char *vf_token)
+eal_parse_vfio_vf_token(struct eal_user_cfg *user_cfg, const char *vf_token)
{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
rte_uuid_t uuid;
if (!rte_uuid_parse(vf_token, uuid)) {
@@ -1748,13 +1741,13 @@ eal_parse_vfio_vf_token(const char *vf_token)
}
static int
-eal_parse_huge_worker_stack(const char *arg)
+eal_parse_huge_worker_stack(struct eal_user_cfg *user_cfg, const char *arg)
{
#ifdef RTE_EXEC_ENV_WINDOWS
EAL_LOG(WARNING, "Cannot set worker stack size on Windows, parameter ignored");
+ RTE_SET_USED(user_cfg);
RTE_SET_USED(arg);
#else
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
if (arg == NULL || arg[0] == '\0') {
pthread_attr_t attr;
@@ -1791,10 +1784,8 @@ eal_parse_huge_worker_stack(const char *arg)
/* Parse the arguments given in the command line of the application */
int
-eal_parse_args(void)
+eal_parse_args(struct eal_user_cfg *user_cfg)
{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
-
/*
* Initialise user_cfg to defaults. Fields not listed here are zero,
* false or NULL, which is the correct default (RTE_PROC_PRIMARY,
@@ -1828,17 +1819,17 @@ eal_parse_args(void)
/* device -a/-b/-vdev options*/
TAILQ_FOREACH(arg, &args.allow, next)
- if (eal_option_device_add(RTE_DEVTYPE_ALLOWED, arg->arg) < 0)
+ if (eal_option_device_add(user_cfg, RTE_DEVTYPE_ALLOWED, arg->arg) < 0)
return -1;
TAILQ_FOREACH(arg, &args.block, next)
- if (eal_option_device_add(RTE_DEVTYPE_BLOCKED, arg->arg) < 0)
+ if (eal_option_device_add(user_cfg, RTE_DEVTYPE_BLOCKED, arg->arg) < 0)
return -1;
TAILQ_FOREACH(arg, &args.vdev, next)
- if (eal_option_device_add(RTE_DEVTYPE_VIRTUAL, arg->arg) < 0)
+ if (eal_option_device_add(user_cfg, RTE_DEVTYPE_VIRTUAL, arg->arg) < 0)
return -1;
/* driver loading options */
TAILQ_FOREACH(arg, &args.driver_path, next)
- if (eal_plugin_path_add(arg->arg) < 0)
+ if (eal_plugin_path_add(user_cfg, arg->arg) < 0)
return -1;
if (remap_lcores && args.remap_lcore_ids != (void *)1) {
@@ -1928,7 +1919,7 @@ eal_parse_args(void)
return -1;
}
}
- if (args.main_lcore != NULL && eal_parse_main_lcore(args.main_lcore) < 0)
+ if (args.main_lcore != NULL && eal_parse_main_lcore(user_cfg, args.main_lcore) < 0)
return -1;
/* memory options */
@@ -2107,13 +2098,13 @@ eal_parse_args(void)
/* other misc settings */
if (args.iova_mode != NULL) {
- if (eal_parse_iova_mode(args.iova_mode) < 0) {
+ if (eal_parse_iova_mode(user_cfg, args.iova_mode) < 0) {
EAL_LOG(ERR, "invalid iova mode parameter '%s'", args.iova_mode);
return -1;
}
};
if (args.base_virtaddr != NULL) {
- if (eal_parse_base_virtaddr(args.base_virtaddr) < 0) {
+ if (eal_parse_base_virtaddr(user_cfg, args.base_virtaddr) < 0) {
EAL_LOG(ERR, "invalid base virtaddr '%s'", args.base_virtaddr);
return -1;
}
@@ -2126,13 +2117,13 @@ eal_parse_args(void)
}
}
if (args.vfio_intr != NULL) {
- if (eal_parse_vfio_intr(args.vfio_intr) < 0) {
+ if (eal_parse_vfio_intr(user_cfg, args.vfio_intr) < 0) {
EAL_LOG(ERR, "invalid vfio interrupt parameter: '%s'", args.vfio_intr);
return -1;
}
}
if (args.vfio_vf_token != NULL) {
- if (eal_parse_vfio_vf_token(args.vfio_vf_token) < 0) {
+ if (eal_parse_vfio_vf_token(user_cfg, args.vfio_vf_token) < 0) {
EAL_LOG(ERR, "invalid vfio vf token parameter: '%s'", args.vfio_vf_token);
return -1;
}
@@ -2141,7 +2132,7 @@ eal_parse_args(void)
if (args.huge_worker_stack != NULL) {
if (args.huge_worker_stack == (void *)1)
args.huge_worker_stack = NULL;
- if (eal_parse_huge_worker_stack(args.huge_worker_stack) < 0) {
+ if (eal_parse_huge_worker_stack(user_cfg, args.huge_worker_stack) < 0) {
EAL_LOG(ERR, "invalid huge worker stack parameter");
return -1;
}
@@ -2169,25 +2160,7 @@ eal_parse_args(void)
int
eal_cleanup_config(void)
{
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_trace_arg *ta;
-
- /* free trace patterns list */
- while (!STAILQ_EMPTY(&user_cfg->trace_patterns)) {
- ta = STAILQ_FIRST(&user_cfg->trace_patterns);
- STAILQ_REMOVE_HEAD(&user_cfg->trace_patterns, next);
- free(ta->val);
- free(ta);
- }
- free(user_cfg->trace_dir);
- free(user_cfg->hugefile_prefix);
- free(user_cfg->hugepage_dir);
- free(user_cfg->user_mbuf_pool_ops_name);
- for (unsigned int i = 0; i < RTE_MAX_LCORE; i++) {
- free(user_cfg->lcore_cpusets[i]);
- user_cfg->lcore_cpusets[i] = NULL;
- }
-
+ eal_user_cfg_cleanup(eal_get_user_configuration());
return 0;
}
diff --git a/lib/eal/common/eal_options.h b/lib/eal/common/eal_options.h
index 77a6a4405f..afa8449ee7 100644
--- a/lib/eal/common/eal_options.h
+++ b/lib/eal/common/eal_options.h
@@ -11,7 +11,7 @@ struct rte_tel_data;
struct eal_user_cfg;
int eal_parse_log_options(void);
-int eal_parse_args(void);
+int eal_parse_args(struct eal_user_cfg *user_cfg);
int eal_option_device_parse(void);
int eal_cleanup_config(void);
int eal_plugins_init(void);
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 4f4b9accfe..5e9348a2cd 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -389,20 +389,16 @@ static void rte_eal_init_alert(const char *msg)
EAL_LOG(ALERT, "%s", msg);
}
+static int eal_runtime_init(const struct eal_user_cfg *user_provided_cfg);
+
/* Launch threads, called at application init(). */
RTE_EXPORT_SYMBOL(rte_eal_init)
int
rte_eal_init(int argc, char **argv)
{
- int i, fctret, ret;
static uint32_t run_once;
+ struct eal_user_cfg user_cfg_from_args = EAL_USER_CFG_INITIALIZER(user_cfg_from_args);
uint32_t has_run = 0;
- char cpuset[RTE_CPU_AFFINITY_STR_LEN];
- char thread_name[RTE_THREAD_NAME_SIZE];
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- bool has_phys_addr;
- enum rte_iova_mode iova_mode;
/* first check if we have been run before */
if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
@@ -415,7 +411,7 @@ rte_eal_init(int argc, char **argv)
/* Save and collate args at the top */
eal_save_args(argc, argv);
- fctret = eal_collate_args(argc, argv);
+ int fctret = eal_collate_args(argc, argv);
if (fctret < 0) {
rte_eal_init_alert("invalid command-line arguments.");
rte_errno = EINVAL;
@@ -431,6 +427,39 @@ rte_eal_init(int argc, char **argv)
eal_log_init(getprogname());
+ if (eal_parse_args(&user_cfg_from_args) < 0) {
+ rte_eal_init_alert("Error parsing command-line arguments.");
+ rte_errno = EINVAL;
+ goto err_out;
+ }
+
+ if (eal_runtime_init(&user_cfg_from_args) < 0)
+ goto err_out; /* log message and rte_errno set by eal_runtime_init() */
+
+ eal_user_cfg_cleanup(&user_cfg_from_args);
+ return fctret;
+
+err_out:
+ rte_atomic_store_explicit(&run_once, 0, rte_memory_order_relaxed);
+ eal_clean_saved_args();
+ eal_user_cfg_cleanup(&user_cfg_from_args);
+ return -1;
+}
+
+/**
+ * Take a provided user config, copy it to the internal user config structure
+ * and then use it to initialize the DPDK runtime.
+ */
+static int
+eal_runtime_init(const struct eal_user_cfg *user_provided_cfg)
+{
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ char cpuset[RTE_CPU_AFFINITY_STR_LEN];
+ char thread_name[RTE_THREAD_NAME_SIZE];
+ bool has_phys_addr;
+ enum rte_iova_mode iova_mode;
+ int i, ret;
+
/* checks if the machine is adequate */
if (!rte_cpu_is_supported()) {
rte_eal_init_alert("unsupported cpu type.");
@@ -445,8 +474,10 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (eal_parse_args() < 0) {
- rte_eal_init_alert("Error parsing command-line arguments.");
+ /* Copy user-provided configuration to EAL global configuration */
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ if (eal_user_cfg_copy(user_cfg, user_provided_cfg) < 0) {
+ rte_eal_init_alert("Cannot copy user configuration.");
rte_errno = EINVAL;
goto err_out;
}
@@ -740,11 +771,10 @@ rte_eal_init(int argc, char **argv)
eal_mcfg_complete();
- return fctret;
+ return 0;
+
err_out:
- rte_atomic_store_explicit(&run_once, 0, rte_memory_order_relaxed);
eal_cleanup_config();
- eal_clean_saved_args();
return -1;
}
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 1bf519eb10..75d03e3b3c 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -552,19 +552,16 @@ eal_worker_thread_create(unsigned int lcore_id)
return ret;
}
+static int eal_runtime_init(const struct eal_user_cfg *user_provided_cfg);
+
/* Launch threads, called at application init(). */
RTE_EXPORT_SYMBOL(rte_eal_init)
int
rte_eal_init(int argc, char **argv)
{
- int i, fctret, ret;
static RTE_ATOMIC(uint32_t) run_once;
+ struct eal_user_cfg user_cfg_from_args = EAL_USER_CFG_INITIALIZER(user_cfg_from_args);
uint32_t has_run = 0;
- char cpuset[RTE_CPU_AFFINITY_STR_LEN];
- char thread_name[RTE_THREAD_NAME_SIZE];
- bool phys_addrs;
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* first check if we have been run before */
if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
@@ -577,7 +574,7 @@ rte_eal_init(int argc, char **argv)
/* clone argv to report out later in telemetry */
eal_save_args(argc, argv);
- fctret = eal_collate_args(argc, argv);
+ int fctret = eal_collate_args(argc, argv);
if (fctret < 0) {
rte_eal_init_alert("Invalid command line arguments.");
rte_errno = EINVAL;
@@ -593,6 +590,39 @@ rte_eal_init(int argc, char **argv)
eal_log_init(program_invocation_short_name);
+ if (eal_parse_args(&user_cfg_from_args) < 0) {
+ rte_eal_init_alert("Error parsing command line arguments.");
+ rte_errno = EINVAL;
+ goto err_out;
+ }
+
+ if (eal_runtime_init(&user_cfg_from_args) < 0)
+ goto err_out; /* log message and rte_errno set by eal_runtime_init() */
+
+ eal_user_cfg_cleanup(&user_cfg_from_args);
+ return fctret;
+
+err_out:
+ rte_atomic_store_explicit(&run_once, 0, rte_memory_order_relaxed);
+ eal_clean_saved_args();
+ eal_user_cfg_cleanup(&user_cfg_from_args);
+
+ return -1;
+}
+
+/**
+ * Take a provided user config, copy it to the internal user config structure
+ * and then use it to initialize the DPDK runtime.
+ */
+static int
+eal_runtime_init(const struct eal_user_cfg *user_provided_cfg)
+{
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ char cpuset[RTE_CPU_AFFINITY_STR_LEN];
+ char thread_name[RTE_THREAD_NAME_SIZE];
+ bool phys_addrs;
+ int i, ret;
+
/* checks if the machine is adequate */
if (!rte_cpu_is_supported()) {
rte_eal_init_alert("unsupported cpu type.");
@@ -607,8 +637,10 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (eal_parse_args() < 0) {
- rte_eal_init_alert("Error parsing command line arguments.");
+ /* Copy user-provided configuration to EAL global configuration */
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ if (eal_user_cfg_copy(user_cfg, user_provided_cfg) < 0) {
+ rte_eal_init_alert("Cannot copy user configuration.");
rte_errno = EINVAL;
goto err_out;
}
@@ -914,12 +946,10 @@ rte_eal_init(int argc, char **argv)
eal_mcfg_complete();
- return fctret;
+ return 0;
err_out:
- rte_atomic_store_explicit(&run_once, 0, rte_memory_order_relaxed);
eal_cleanup_config();
- eal_clean_saved_args();
return -1;
}
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index ed293ada8f..021361a802 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -146,24 +146,19 @@ rte_eal_cleanup(void)
return 0;
}
+static int eal_runtime_init(const struct eal_user_cfg *user_provided_cfg);
+
/* Launch threads, called at application init(). */
RTE_EXPORT_SYMBOL(rte_eal_init)
int
rte_eal_init(int argc, char **argv)
{
- int i, fctret, bscan;
- struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- bool has_phys_addr;
- enum rte_iova_mode iova_mode;
- int ret;
- char cpuset[RTE_CPU_AFFINITY_STR_LEN];
- char thread_name[RTE_THREAD_NAME_SIZE];
+ struct eal_user_cfg user_cfg_from_args = EAL_USER_CFG_INITIALIZER(user_cfg_from_args);
/* clone argv to report out later in telemetry */
eal_save_args(argc, argv);
- fctret = eal_collate_args(argc, argv);
+ int fctret = eal_collate_args(argc, argv);
if (fctret < 0) {
rte_eal_init_alert("Invalid command line arguments.");
rte_errno = EINVAL;
@@ -179,6 +174,38 @@ rte_eal_init(int argc, char **argv)
eal_log_init(NULL);
+ if (eal_parse_args(&user_cfg_from_args) < 0) {
+ rte_eal_init_alert("Invalid command line arguments.");
+ rte_errno = EINVAL;
+ goto err_out;
+ }
+
+ if (eal_runtime_init(&user_cfg_from_args) < 0)
+ goto err_out; /* log message and rte_errno set by eal_runtime_init() */
+
+ eal_user_cfg_cleanup(&user_cfg_from_args);
+ return fctret;
+
+err_out:
+ eal_clean_saved_args();
+ eal_user_cfg_cleanup(&user_cfg_from_args);
+ return -1;
+}
+
+/**
+ * Take a provided user config, copy it to the internal user config structure
+ * and then use it to initialize the DPDK runtime.
+ */
+static int
+eal_runtime_init(const struct eal_user_cfg *user_provided_cfg)
+{
+ struct eal_runtime_state *runtime_state = eal_get_runtime_state();
+ bool has_phys_addr;
+ enum rte_iova_mode iova_mode;
+ int i, ret, bscan;
+ char cpuset[RTE_CPU_AFFINITY_STR_LEN];
+ char thread_name[RTE_THREAD_NAME_SIZE];
+
/* verify if DPDK supported on architecture MMU */
if (!eal_mmu_supported()) {
rte_eal_init_alert("Unsupported MMU type.");
@@ -186,8 +213,10 @@ rte_eal_init(int argc, char **argv)
goto err_out;
}
- if (eal_parse_args() < 0) {
- rte_eal_init_alert("Invalid command line arguments.");
+ /* Copy user-provided configuration to EAL global configuration */
+ struct eal_user_cfg *user_cfg = eal_get_user_configuration();
+ if (eal_user_cfg_copy(user_cfg, user_provided_cfg) < 0) {
+ rte_eal_init_alert("Cannot copy user configuration.");
rte_errno = EINVAL;
goto err_out;
}
@@ -400,10 +429,10 @@ rte_eal_init(int argc, char **argv)
eal_mcfg_complete();
- return fctret;
+ return 0;
+
err_out:
eal_cleanup_config();
- eal_clean_saved_args();
return -1;
}
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 36/44] eal: provide hooks for init with externally supplied config
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (34 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 35/44] eal: split EAL init into two stages Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 37/44] eal_cfg: add new library to programmatically init DPDK Bruce Richardson
` (9 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Now that we have a split EAL init, where most of the work is done on a
configured eal_user_cfg structure, add hooks to allow other libraries
to initialize EAL by passing in that structure pre-configured.
We export an internal function that wraps the second-stage init, taking
care of the run_once flag (since this flag needs to live in the top-level
init function to avoid issues with trying to save off the EAL args
twice), before calling the main initialization function with a
prepopulated config. Also export internally the platform info function,
so that any callers of this new function can do any necessary validation
of parameters as part of their setup.
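The run_once handling described here is a standard compare-and-swap guard. A
standalone sketch of the same idiom, using C11 atomics in place of DPDK's
rte_atomic wrappers so it compiles without any EAL headers:

```c
#include <stdatomic.h>

/* flag to prevent double-init: 0 = never run, 1 = init started */
static atomic_uint init_has_run;

/* returns 0 to the first caller, -1 (already initialized) to all others */
static int
runtime_init_once(void)
{
	unsigned int has_run = 0;

	/* only the first caller swaps 0 -> 1 and proceeds with init */
	if (!atomic_compare_exchange_strong_explicit(&init_has_run, &has_run, 1,
			memory_order_relaxed, memory_order_relaxed))
		return -1;

	/* one-time initialization would go here; on failure, store 0 back
	 * to the flag (as the patch does on its err_out path) to allow retry */
	return 0;
}
```

This is why the error paths in the patch store 0 back to init_has_run before
returning: a failed first attempt must not permanently block later ones.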
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal/common/eal_common_config.c | 3 +-
lib/eal/common/eal_common_lcore.c | 8 ++---
lib/eal/common/eal_common_thread.c | 2 +-
lib/eal/common/eal_internal_cfg.h | 8 ++++-
lib/eal/freebsd/eal.c | 51 ++++++++++++++++++++++++---
lib/eal/freebsd/eal_hugepage_info.c | 2 +-
lib/eal/freebsd/eal_memory.c | 6 ++--
lib/eal/linux/eal.c | 49 ++++++++++++++++++++++++--
lib/eal/linux/eal_hugepage_info.c | 2 +-
lib/eal/windows/eal.c | 54 +++++++++++++++++++++++++++++
lib/eal/windows/eal_hugepages.c | 2 +-
lib/eal/windows/eal_memalloc.c | 2 +-
12 files changed, 168 insertions(+), 21 deletions(-)
diff --git a/lib/eal/common/eal_common_config.c b/lib/eal/common/eal_common_config.c
index 9afe836903..9fff42b0b5 100644
--- a/lib/eal/common/eal_common_config.c
+++ b/lib/eal/common/eal_common_config.c
@@ -68,8 +68,9 @@ eal_get_user_configuration(void)
}
/* Return a pointer to the platform state structure */
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_get_platform_info)
const struct eal_platform_info *
-eal_get_platform_info(void)
+rte_eal_get_platform_info(void)
{
/* platform-discovered and runtime EAL state */
static struct eal_platform_info eal_platform_info;
diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 98857c0e34..304e99eff2 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -53,7 +53,7 @@ RTE_EXPORT_SYMBOL(rte_lcore_to_cpu_id)
int rte_lcore_to_cpu_id(int lcore_id)
{
const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
unsigned int cpu;
if (unlikely(lcore_id >= RTE_MAX_LCORE))
@@ -138,7 +138,7 @@ unsigned int
rte_lcore_to_socket_id(unsigned int lcore_id)
{
const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
unsigned int cpu = runtime_state->lcore_cfg[lcore_id].first_cpu;
if (cpu < platform_info->cpu_count)
@@ -247,7 +247,7 @@ RTE_EXPORT_SYMBOL(rte_socket_count)
unsigned int
rte_socket_count(void)
{
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
return platform_info->numa_node_count;
}
@@ -255,7 +255,7 @@ RTE_EXPORT_SYMBOL(rte_socket_id_by_idx)
int
rte_socket_id_by_idx(unsigned int idx)
{
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
if (idx >= platform_info->numa_node_count) {
rte_errno = EINVAL;
return -1;
diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index 7256d06d0a..c3df4cd9ae 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -55,7 +55,7 @@ thread_update_affinity(rte_cpuset_t *cpusetp)
RTE_PER_LCORE(_numa_id) = rte_lcore_to_socket_id(lcore_id);
} else {
/* Non-EAL thread: derive NUMA node from first CPU in cpuset. */
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
unsigned int cpu;
RTE_PER_LCORE(_numa_id) = SOCKET_ID_ANY;
diff --git a/lib/eal/common/eal_internal_cfg.h b/lib/eal/common/eal_internal_cfg.h
index 31f2c2cf72..5a72b4c15e 100644
--- a/lib/eal/common/eal_internal_cfg.h
+++ b/lib/eal/common/eal_internal_cfg.h
@@ -24,6 +24,7 @@
#include <rte_stdatomic.h>
#include "eal_thread.h"
+#include "rte_compat.h"
/* Forward declaration — full definition is in eal_memcfg.h */
struct rte_mem_config;
@@ -373,8 +374,13 @@ struct eal_runtime_state {
struct eal_solib_list loaded_plugins; /**< all plugins loaded by eal_plugins_init() */
};
-const struct eal_platform_info *eal_get_platform_info(void);
+__rte_internal
+const struct eal_platform_info *rte_eal_get_platform_info(void);
struct eal_user_cfg *eal_get_user_configuration(void);
struct eal_runtime_state *eal_get_runtime_state(void);
+__rte_internal
+int
+rte_eal_runtime_init(const char *progname, const struct eal_user_cfg *user_provided_cfg);
+
#endif /* EAL_INTERNAL_CFG_H */
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index 5e9348a2cd..1295e4e06b 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -59,6 +59,9 @@
#define MEMSIZE_IF_NO_HUGE_PAGE (64ULL * 1024ULL * 1024ULL)
+/* flag to prevent double-init of EAL */
+static RTE_ATOMIC(uint32_t) init_has_run;
+
/* define fd variable here, because file needs to be kept open for the
* duration of the program, as we hold a write lock on it in the primary proc */
static int mem_cfg_fd = -1;
@@ -322,7 +325,7 @@ eal_get_hugepage_mem_size(void)
{
uint64_t size = 0;
unsigned i, j;
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
for (i = 0; i < platform_info->num_hugepage_sizes; i++) {
@@ -396,12 +399,11 @@ RTE_EXPORT_SYMBOL(rte_eal_init)
int
rte_eal_init(int argc, char **argv)
{
- static uint32_t run_once;
struct eal_user_cfg user_cfg_from_args = EAL_USER_CFG_INITIALIZER(user_cfg_from_args);
uint32_t has_run = 0;
/* first check if we have been run before */
- if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
+ if (!rte_atomic_compare_exchange_strong_explicit(&init_has_run, &has_run, 1,
rte_memory_order_relaxed, rte_memory_order_relaxed)) {
rte_eal_init_alert("already called initialization.");
rte_errno = EALREADY;
@@ -440,7 +442,7 @@ rte_eal_init(int argc, char **argv)
return fctret;
err_out:
- rte_atomic_store_explicit(&run_once, 0, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&init_has_run, 0, rte_memory_order_relaxed);
eal_clean_saved_args();
eal_user_cfg_cleanup(&user_cfg_from_args);
return -1;
@@ -778,6 +780,47 @@ eal_runtime_init(const struct eal_user_cfg *user_provided_cfg)
return -1;
}
+/**
+ * Initialize the DPDK runtime with a user-provided configuration.
+ * This is an alternative to rte_eal_init() that allows the caller to provide
+ * a configuration struct directly, instead of parsing command line arguments.
+ * Both parameters to the function must be non-NULL,
+ * and the user_provided_cfg must be fully initialized by the caller.
+ */
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_runtime_init)
+int
+rte_eal_runtime_init(const char *progname, const struct eal_user_cfg *user_provided_cfg)
+{
+ uint32_t has_run = 0;
+
+ if (progname == NULL || user_provided_cfg == NULL) {
+ rte_eal_init_alert("Invalid arguments to rte_eal_runtime_init.");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (rte_eal_get_platform_info() == NULL) {
+ rte_eal_init_alert("Platform information is not available.");
+ return -1;
+ }
+
+ /* first check if we have been run before */
+ if (!rte_atomic_compare_exchange_strong_explicit(&init_has_run, &has_run, 1,
+ rte_memory_order_relaxed, rte_memory_order_relaxed)) {
+ rte_eal_init_alert("already called initialization.");
+ rte_errno = EALREADY;
+ return -1;
+ }
+
+ eal_log_init(progname);
+
+ if (eal_runtime_init(user_provided_cfg) < 0) {
+ rte_atomic_store_explicit(&init_has_run, 0, rte_memory_order_relaxed);
+ return -1;
+ }
+ return 0;
+}
+
RTE_EXPORT_SYMBOL(rte_eal_cleanup)
int
rte_eal_cleanup(void)
diff --git a/lib/eal/freebsd/eal_hugepage_info.c b/lib/eal/freebsd/eal_hugepage_info.c
index 9c97897cc3..28e1a04528 100644
--- a/lib/eal/freebsd/eal_hugepage_info.c
+++ b/lib/eal/freebsd/eal_hugepage_info.c
@@ -87,7 +87,7 @@ eal_get_platform_hp_info(struct eal_platform_info *platform_info)
int
eal_hugepage_info_init(void)
{
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
int num_buffers, fd;
int64_t buffer_size;
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
diff --git a/lib/eal/freebsd/eal_memory.c b/lib/eal/freebsd/eal_memory.c
index 18e388ef08..c23e203ec2 100644
--- a/lib/eal/freebsd/eal_memory.c
+++ b/lib/eal/freebsd/eal_memory.c
@@ -60,7 +60,7 @@ rte_eal_hugepage_init(void)
void *addr;
unsigned int i, j, seg_idx = 0;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* get pointer to global configuration */
@@ -270,7 +270,7 @@ attach_segment(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
int
rte_eal_hugepage_attach(void)
{
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
struct hugepage_info *hpi;
int fd_hugepage = -1;
@@ -357,7 +357,7 @@ memseg_primary_init(void)
struct rte_memseg_list *msl;
uint64_t max_mem, total_mem;
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
/* no-huge does not need this at all */
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 75d03e3b3c..b0cc16a221 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -61,6 +61,9 @@
#define MEMSIZE_IF_NO_HUGE_PAGE (64ULL * 1024ULL * 1024ULL)
#define KERNEL_IOMMU_GROUPS_PATH "/sys/kernel/iommu_groups"
+/* flag to prevent double-init of EAL */
+static RTE_ATOMIC(uint32_t) init_has_run;
+
/* define fd variable here, because file needs to be kept open for the
* duration of the program, as we hold a write lock on it in the primary proc */
static int mem_cfg_fd = -1;
@@ -559,12 +562,11 @@ RTE_EXPORT_SYMBOL(rte_eal_init)
int
rte_eal_init(int argc, char **argv)
{
- static RTE_ATOMIC(uint32_t) run_once;
struct eal_user_cfg user_cfg_from_args = EAL_USER_CFG_INITIALIZER(user_cfg_from_args);
uint32_t has_run = 0;
/* first check if we have been run before */
- if (!rte_atomic_compare_exchange_strong_explicit(&run_once, &has_run, 1,
+ if (!rte_atomic_compare_exchange_strong_explicit(&init_has_run, &has_run, 1,
rte_memory_order_relaxed, rte_memory_order_relaxed)) {
rte_eal_init_alert("already called initialization.");
rte_errno = EALREADY;
@@ -603,7 +605,7 @@ rte_eal_init(int argc, char **argv)
return fctret;
err_out:
- rte_atomic_store_explicit(&run_once, 0, rte_memory_order_relaxed);
+ rte_atomic_store_explicit(&init_has_run, 0, rte_memory_order_relaxed);
eal_clean_saved_args();
eal_user_cfg_cleanup(&user_cfg_from_args);
@@ -953,6 +955,47 @@ eal_runtime_init(const struct eal_user_cfg *user_provided_cfg)
return -1;
}
+/**
+ * Initialize the DPDK runtime with a user-provided configuration.
+ * This is an alternative to rte_eal_init() that allows the caller to provide
+ * a configuration struct directly, instead of parsing command line arguments.
+ * Both parameters to the function must be non-NULL,
+ * and the user_provided_cfg must be fully initialized by the caller.
+ */
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_runtime_init)
+int
+rte_eal_runtime_init(const char *progname, const struct eal_user_cfg *user_provided_cfg)
+{
+ uint32_t has_run = 0;
+
+ if (progname == NULL || user_provided_cfg == NULL) {
+ rte_eal_init_alert("Invalid arguments to rte_eal_runtime_init.");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (rte_eal_get_platform_info() == NULL) {
+ rte_eal_init_alert("Platform information is not available.");
+ return -1;
+ }
+
+ /* first check if we have been run before */
+ if (!rte_atomic_compare_exchange_strong_explicit(&init_has_run, &has_run, 1,
+ rte_memory_order_relaxed, rte_memory_order_relaxed)) {
+ rte_eal_init_alert("already called initialization.");
+ rte_errno = EALREADY;
+ return -1;
+ }
+
+ eal_log_init(progname);
+
+ if (eal_runtime_init(user_provided_cfg) < 0) {
+ rte_atomic_store_explicit(&init_has_run, 0, rte_memory_order_relaxed);
+ return -1;
+ }
+ return 0;
+}
+
static int
mark_freeable(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
void *arg __rte_unused)
diff --git a/lib/eal/linux/eal_hugepage_info.c b/lib/eal/linux/eal_hugepage_info.c
index 738632bc20..482064acb8 100644
--- a/lib/eal/linux/eal_hugepage_info.c
+++ b/lib/eal/linux/eal_hugepage_info.c
@@ -506,7 +506,7 @@ hugepage_info_init(void)
unsigned int reusable_pages;
struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
int failed = 0;
/* platform_info->hugepage_sizes[] is already sorted largest to smallest */
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 021361a802..d24d58701b 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -34,6 +34,9 @@
#define MEMSIZE_IF_NO_HUGE_PAGE (64ULL * 1024ULL * 1024ULL)
+/* flag to prevent double-init of EAL */
+static RTE_ATOMIC(uint32_t) init_has_run;
+
/* define fd variable here, because file needs to be kept open for the
* duration of the program, as we hold a write lock on it in the primary proc
*/
@@ -154,6 +157,15 @@ int
rte_eal_init(int argc, char **argv)
{
struct eal_user_cfg user_cfg_from_args = EAL_USER_CFG_INITIALIZER(user_cfg_from_args);
+ uint32_t has_run = 0;
+
+ /* first check if we have been run before */
+ if (!rte_atomic_compare_exchange_strong_explicit(&init_has_run, &has_run, 1,
+ rte_memory_order_relaxed, rte_memory_order_relaxed)) {
+ rte_eal_init_alert("already called initialization.");
+ rte_errno = EALREADY;
+ return -1;
+ }
/* clone argv to report out later in telemetry */
eal_save_args(argc, argv);
@@ -187,6 +199,7 @@ rte_eal_init(int argc, char **argv)
return fctret;
err_out:
+ rte_atomic_store_explicit(&init_has_run, 0, rte_memory_order_relaxed);
eal_clean_saved_args();
eal_user_cfg_cleanup(&user_cfg_from_args);
return -1;
@@ -436,6 +449,47 @@ eal_runtime_init(const struct eal_user_cfg *user_provided_cfg)
return -1;
}
+/**
+ * Initialize the DPDK runtime with a user-provided configuration.
+ * This is an alternative to rte_eal_init() that allows the caller to provide
+ * a configuration struct directly, instead of parsing command line arguments.
+ * Both parameters to the function must be non-NULL,
+ * and the user_provided_cfg must be fully initialized by the caller.
+ */
+RTE_EXPORT_INTERNAL_SYMBOL(rte_eal_runtime_init)
+int
+rte_eal_runtime_init(const char *progname, const struct eal_user_cfg *user_provided_cfg)
+{
+ uint32_t has_run = 0;
+
+ if (progname == NULL || user_provided_cfg == NULL) {
+ rte_eal_init_alert("Invalid arguments to rte_eal_runtime_init.");
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (rte_eal_get_platform_info() == NULL) {
+ rte_eal_init_alert("Platform information is not available.");
+ return -1;
+ }
+
+ /* first check if we have been run before */
+ if (!rte_atomic_compare_exchange_strong_explicit(&init_has_run, &has_run, 1,
+ rte_memory_order_relaxed, rte_memory_order_relaxed)) {
+ rte_eal_init_alert("already called initialization.");
+ rte_errno = EALREADY;
+ return -1;
+ }
+
+ eal_log_init(progname);
+
+ if (eal_runtime_init(user_provided_cfg) < 0) {
+ rte_atomic_store_explicit(&init_has_run, 0, rte_memory_order_relaxed);
+ return -1;
+ }
+ return 0;
+}
+
/* Don't use MinGW asprintf() to have identical code with all toolchains. */
int
eal_asprintf(char **buffer, const char *format, ...)
diff --git a/lib/eal/windows/eal_hugepages.c b/lib/eal/windows/eal_hugepages.c
index 0c62f5ff48..0870495c78 100644
--- a/lib/eal/windows/eal_hugepages.c
+++ b/lib/eal/windows/eal_hugepages.c
@@ -59,7 +59,7 @@ hugepage_claim_privilege(void)
static int
hugepage_info_init(void)
{
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
struct hugepage_info *hpi;
unsigned int socket_id;
int ret = 0;
diff --git a/lib/eal/windows/eal_memalloc.c b/lib/eal/windows/eal_memalloc.c
index 3ab8af0466..0c34c65603 100644
--- a/lib/eal/windows/eal_memalloc.c
+++ b/lib/eal/windows/eal_memalloc.c
@@ -314,7 +314,7 @@ eal_memalloc_alloc_seg_bulk(struct rte_memseg **ms, int n_segs,
int ret = -1;
struct alloc_walk_param wa;
const struct hugepage_info *hi = NULL;
- const struct eal_platform_info *platform_info = eal_get_platform_info();
+ const struct eal_platform_info *platform_info = rte_eal_get_platform_info();
const struct eal_runtime_state *runtime_state = eal_get_runtime_state();
const struct eal_user_cfg *user_cfg = eal_get_user_configuration();
--
2.51.0
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 37/44] eal_cfg: add new library to programmatically init DPDK
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (35 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 36/44] eal: provide hooks for init with externally supplied config Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 38/44] eal_cfg: configure defaults for easier testing and use Bruce Richardson
` (8 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Rather than relying on apps to populate an argc/argv array, add a new
eal_cfg library that allows the configuration to be built
programmatically and then used to initialize DPDK. Start with basic
alloc and free functions for the config context, plus the init function
itself to call EAL init.
Include basic unit tests for these too, although limitations in the test
framework currently prevent testing actual EAL init.
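The alloc/free pair for the config context follows the familiar opaque-handle
idiom. A rough standalone model of that lifecycle (the struct and function
names here are illustrative only, not the actual eal_cfg API):

```c
#include <stdlib.h>
#include <string.h>

/* opaque config context: callers hold only a pointer, never see the layout */
struct cfg {
	char *huge_dir; /* example of an owned string field */
};

static struct cfg *
cfg_create(void)
{
	/* zero-initialized, so freeing never-set fields is safe */
	return calloc(1, sizeof(struct cfg));
}

static int
cfg_set_huge_dir(struct cfg *c, const char *dir)
{
	char *copy = malloc(strlen(dir) + 1);

	if (copy == NULL)
		return -1;
	strcpy(copy, dir);
	free(c->huge_dir); /* replace any previously-set value */
	c->huge_dir = copy;
	return 0;
}

static void
cfg_free(struct cfg *c)
{
	if (c == NULL)
		return; /* free(NULL) is a safe no-op, as the unit tests expect */
	free(c->huge_dir);
	free(c);
}
```

The NULL check in cfg_free mirrors the behaviour the unit tests below require
of rte_eal_cfg_free().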
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/meson.build | 1 +
app/test/test_eal_cfg.c | 99 +++++++++++++++++++++++++++
doc/api/doxy-api-index.md | 1 +
doc/api/doxy-api.conf.in | 1 +
doc/guides/prog_guide/eal_cfg_lib.rst | 23 +++++++
doc/guides/prog_guide/index.rst | 1 +
lib/eal_cfg/eal_cfg.c | 64 +++++++++++++++++
lib/eal_cfg/meson.build | 6 ++
lib/eal_cfg/rte_eal_cfg.h | 79 +++++++++++++++++++++
lib/meson.build | 1 +
10 files changed, 276 insertions(+)
create mode 100644 app/test/test_eal_cfg.c
create mode 100644 doc/guides/prog_guide/eal_cfg_lib.rst
create mode 100644 lib/eal_cfg/eal_cfg.c
create mode 100644 lib/eal_cfg/meson.build
create mode 100644 lib/eal_cfg/rte_eal_cfg.h
diff --git a/app/test/meson.build b/app/test/meson.build
index 7d458f9c07..3d39e82dd8 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -66,6 +66,7 @@ source_file_deps = {
'test_distributor_perf.c': ['distributor'],
'test_dmadev.c': ['dmadev', 'bus_vdev'],
'test_dmadev_api.c': ['dmadev'],
+ 'test_eal_cfg.c': ['eal_cfg'],
'test_eal_flags.c': [],
'test_eal_fs.c': [],
'test_efd.c': ['efd', 'net'],
diff --git a/app/test/test_eal_cfg.c b/app/test/test_eal_cfg.c
new file mode 100644
index 0000000000..3def760b50
--- /dev/null
+++ b/app/test/test_eal_cfg.c
@@ -0,0 +1,99 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <errno.h>
+
+#include <rte_errno.h>
+
+#include <rte_eal_cfg.h>
+
+#include "test.h"
+
+/* Test that a config handle can be created and freed without error. */
+static int
+test_eal_cfg_create_free(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* create with no arguments */
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* free must not crash */
+ rte_eal_cfg_free(cfg);
+
+ /* free(NULL) must be a safe no-op */
+ rte_eal_cfg_free(NULL);
+
+ return TEST_SUCCESS;
+}
+
+/*
+ * Test initialising EAL with a freshly created (empty/default) config.
+ * Since the test binary has already initialised EAL, we expect the call to
+ * fail with EALREADY rather than succeed — but the function must forward
+ * the call through to rte_eal_runtime_init() and return its error correctly.
+ */
+static int
+test_eal_cfg_init_empty(void)
+{
+ struct rte_eal_cfg *cfg;
+ int ret;
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ ret = rte_eal_init_from_cfg("test_prog", cfg);
+ TEST_ASSERT(ret == -1,
+ "Expected -1 from rte_eal_init_from_cfg (EAL already init), got %d", ret);
+ TEST_ASSERT(rte_errno == EALREADY,
+ "Expected EALREADY, got %d", rte_errno);
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test that passing NULL cfg to rte_eal_init_from_cfg uses default config.
+ * Since EAL is already running, we still expect EALREADY.
+ */
+static int
+test_eal_cfg_init_null(void)
+{
+ int ret;
+
+ ret = rte_eal_init_from_cfg("test_prog", NULL);
+ TEST_ASSERT(ret == -1,
+ "Expected -1 from rte_eal_init_from_cfg with NULL cfg, got %d", ret);
+ TEST_ASSERT(rte_errno == EALREADY,
+ "Expected EALREADY for NULL cfg, got %d", rte_errno);
+
+ /* NULL progname must be rejected regardless of cfg */
+ ret = rte_eal_init_from_cfg(NULL, NULL);
+ TEST_ASSERT(ret == -1,
+ "Expected -1 from rte_eal_init_from_cfg(NULL, NULL), got %d", ret);
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL progname, got %d", rte_errno);
+
+ return TEST_SUCCESS;
+}
+
+static struct unit_test_suite eal_cfg_testsuite = {
+ .suite_name = "EAL cfg API tests",
+ .setup = NULL,
+ .teardown = NULL,
+ .unit_test_cases = {
+ TEST_CASE(test_eal_cfg_create_free),
+ TEST_CASE(test_eal_cfg_init_empty),
+ TEST_CASE(test_eal_cfg_init_null),
+ TEST_CASES_END()
+ }
+};
+
+static int
+test_eal_cfg(void)
+{
+ return unit_test_suite_runner(&eal_cfg_testsuite);
+}
+
+REGISTER_FAST_TEST(eal_cfg_autotest, NOHUGE_OK, ASAN_OK, test_eal_cfg);
diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md
index 9296042119..491ce1a958 100644
--- a/doc/api/doxy-api-index.md
+++ b/doc/api/doxy-api-index.md
@@ -251,6 +251,7 @@ The public API headers are grouped by topics:
- **misc**:
[EAL config](@ref rte_eal.h),
+ [EAL programmatic init](@ref rte_eal_cfg.h),
[common](@ref rte_common.h),
[experimental APIs](@ref rte_compat.h),
[version](@ref rte_version.h)
diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in
index bedd944681..305714773e 100644
--- a/doc/api/doxy-api.conf.in
+++ b/doc/api/doxy-api.conf.in
@@ -28,6 +28,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \
@TOPDIR@/drivers/raw/ifpga \
@TOPDIR@/lib/eal/include \
@TOPDIR@/lib/eal/include/generic \
+ @TOPDIR@/lib/eal_cfg \
@TOPDIR@/lib/acl \
@TOPDIR@/lib/argparse \
@TOPDIR@/lib/bbdev \
diff --git a/doc/guides/prog_guide/eal_cfg_lib.rst b/doc/guides/prog_guide/eal_cfg_lib.rst
new file mode 100644
index 0000000000..34ca5a95ad
--- /dev/null
+++ b/doc/guides/prog_guide/eal_cfg_lib.rst
@@ -0,0 +1,23 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright(c) 2026 Intel Corporation.
+
+EAL Programmatic Configuration Library
+=======================================
+
+The EAL programmatic configuration library provides an alternative to the ``rte_eal_init()`` argc/argv interface.
+An application creates an ``rte_eal_cfg`` handle, populates it via setter functions,
+and passes it to ``rte_eal_init_from_cfg()`` in place of the standard init call.
+This avoids the need to construct a synthetic argument vector
+when the EAL is embedded in a larger framework or launched without a conventional command line.
+
+
+Configuration Handle Lifecycle
+-------------------------------
+
+A configuration handle is created with ``rte_eal_cfg_create()``,
+which accepts the program name used for logging.
+The handle is initialised with the same defaults that apply when ``rte_eal_init()`` is called with no options.
+
+Once the application has populated the handle, it is passed to ``rte_eal_init_from_cfg()``.
+After that call the handle may be freed with ``rte_eal_cfg_free()``, which releases all memory owned by the handle.
+It is safe to call ``rte_eal_cfg_free()`` with a NULL pointer.
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst
index e6f24945b0..f5e6ee69d2 100644
--- a/doc/guides/prog_guide/index.rst
+++ b/doc/guides/prog_guide/index.rst
@@ -38,6 +38,7 @@ CPU Management
:numbered:
env_abstraction_layer
+ eal_cfg_lib
power_man
thread_safety
service_cores
diff --git a/lib/eal_cfg/eal_cfg.c b/lib/eal_cfg/eal_cfg.c
new file mode 100644
index 0000000000..70f0122b81
--- /dev/null
+++ b/lib/eal_cfg/eal_cfg.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#include <errno.h>
+#include <stdlib.h>
+
+#include <eal_export.h>
+#include <rte_errno.h>
+#include <rte_log.h>
+
+#include "eal_internal_cfg.h"
+#include "rte_eal_cfg.h"
+
+struct rte_eal_cfg {
+ struct eal_user_cfg user_cfg;
+};
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_create, 26.07)
+struct rte_eal_cfg *
+rte_eal_cfg_create(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ cfg = calloc(1, sizeof(*cfg));
+ if (cfg == NULL) {
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+
+ cfg->user_cfg = EAL_USER_CFG_INITIALIZER(cfg->user_cfg);
+
+ return cfg;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_free, 26.07)
+void
+rte_eal_cfg_free(struct rte_eal_cfg *cfg)
+{
+ if (cfg == NULL)
+ return;
+
+ eal_user_cfg_cleanup(&cfg->user_cfg);
+ free(cfg);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_init_from_cfg, 26.07)
+int
+rte_eal_init_from_cfg(const char *progname, struct rte_eal_cfg *cfg)
+{
+ struct rte_eal_cfg local_cfg = {
+ .user_cfg = EAL_USER_CFG_INITIALIZER(local_cfg.user_cfg),
+ };
+
+ if (progname == NULL || progname[0] == '\0') {
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (cfg == NULL)
+ cfg = &local_cfg;
+
+ return rte_eal_runtime_init(progname, &cfg->user_cfg);
+}
diff --git a/lib/eal_cfg/meson.build b/lib/eal_cfg/meson.build
new file mode 100644
index 0000000000..280a85b93e
--- /dev/null
+++ b/lib/eal_cfg/meson.build
@@ -0,0 +1,6 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2026 Intel Corporation
+
+sources = files('eal_cfg.c')
+headers = files('rte_eal_cfg.h')
+includes += include_directories('../eal/common')
diff --git a/lib/eal_cfg/rte_eal_cfg.h b/lib/eal_cfg/rte_eal_cfg.h
new file mode 100644
index 0000000000..c0d316a6cb
--- /dev/null
+++ b/lib/eal_cfg/rte_eal_cfg.h
@@ -0,0 +1,79 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2026 Intel Corporation
+ */
+
+#ifndef RTE_EAL_CFG_H
+#define RTE_EAL_CFG_H
+
+/**
+ * @file
+ *
+ * EAL programmatic configuration API.
+ *
+ * This API allows applications to configure and initialize the EAL without
+ * passing argc/argv. A configuration handle is created, populated via setter
+ * functions, and passed to rte_eal_init_from_cfg() in place of rte_eal_init().
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <rte_compat.h>
+
+/**
+ * Opaque EAL configuration handle.
+ */
+struct rte_eal_cfg;
+
+/**
+ * Create a new EAL configuration handle.
+ *
+ * Allocates and initialises a configuration struct with default values
+ * equivalent to those used when rte_eal_init() is called with no options.
+ *
+ * @return
+ * Pointer to a new configuration handle, or NULL on failure (rte_errno set).
+ */
+__rte_experimental
+struct rte_eal_cfg *
+rte_eal_cfg_create(void);
+
+/**
+ * Free an EAL configuration handle.
+ *
+ * Releases all resources owned by the handle. Safe to call on NULL.
+ *
+ * @param cfg
+ * Configuration handle to free. If NULL, this function is a no-op.
+ */
+__rte_experimental
+void
+rte_eal_cfg_free(struct rte_eal_cfg *cfg);
+
+/**
+ * Initialise the EAL using a programmatic configuration handle.
+ *
+ * This function is a programmatic alternative to rte_eal_init().
+ * The caller optionally creates a configuration handle with rte_eal_cfg_create(),
+ * populates it via setter functions, and passes it to this function
+ * in place of argc/argv. If @p cfg is NULL, default EAL configuration is used.
+ *
+ * @param progname
+ * The program name, used for logging. Must not be NULL or empty.
+ * @param cfg
+ * Configuration handle created with rte_eal_cfg_create(),
+ * or NULL to use default configuration.
+ * @return
+ * 0 on success, or -1 on failure (rte_errno is set).
+ */
+__rte_experimental
+int
+rte_eal_init_from_cfg(const char *progname, struct rte_eal_cfg *cfg);
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_EAL_CFG_H */
diff --git a/lib/meson.build b/lib/meson.build
index 8f5cfd28a5..9b9ebcf5db 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -15,6 +15,7 @@ libraries = [
'telemetry', # basic info querying
'pmu',
'eal', # everything depends on eal
+ 'eal_cfg',
'ptr_compress',
'ring',
'rcu', # rcu depends on ring
--
2.51.0
* [RFC PATCH 38/44] eal_cfg: configure defaults for easier testing and use
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (36 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 37/44] eal_cfg: add new library to programmatically init DPDK Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 39/44] app/test: enable testing init using EAL config lib Bruce Richardson
` (7 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Add a function to configure the lcore settings from the current thread's
CPU affinity. Use that function when EAL is being initialized without
any lcores otherwise specified. This mimics existing DPDK behaviour
when no "-l" or "-c" flags are passed on the EAL command line. Also set
the "in-memory" flag by default, which makes testing easier, and should
be a sensible default for most applications except when multi-process
support is required.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
lib/eal_cfg/eal_cfg.c | 91 ++++++++++++++++++++++++++++++++++++---
lib/eal_cfg/rte_eal_cfg.h | 48 ++++++++++++++++++++-
2 files changed, 133 insertions(+), 6 deletions(-)
diff --git a/lib/eal_cfg/eal_cfg.c b/lib/eal_cfg/eal_cfg.c
index 70f0122b81..18a508e7ad 100644
--- a/lib/eal_cfg/eal_cfg.c
+++ b/lib/eal_cfg/eal_cfg.c
@@ -7,7 +7,9 @@
#include <eal_export.h>
#include <rte_errno.h>
+#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_thread.h>
#include "eal_internal_cfg.h"
#include "rte_eal_cfg.h"
@@ -29,6 +31,9 @@ rte_eal_cfg_create(void)
}
cfg->user_cfg = EAL_USER_CFG_INITIALIZER(cfg->user_cfg);
+ /* default to in-memory mode without a shared config */
+ cfg->user_cfg.in_memory = true;
+ cfg->user_cfg.no_shconf = true;
return cfg;
}
@@ -44,21 +49,97 @@ rte_eal_cfg_free(struct rte_eal_cfg *cfg)
free(cfg);
}
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_lcores_from_affinity, 26.07)
+int
+rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap)
+{
+ rte_cpuset_t cpuset;
+ unsigned int lcore_id = 0;
+ int count = 0;
+
+ if (cfg == NULL) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (rte_thread_get_affinity_by_id(rte_thread_self(), &cpuset) != 0) {
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
+ /* clear any existing lcore configuration before populating */
+ for (unsigned int i = 0; i < RTE_MAX_LCORE; i++) {
+ free(cfg->user_cfg.lcore_cpusets[i]);
+ cfg->user_cfg.lcore_cpusets[i] = NULL;
+ }
+
+ for (unsigned int cpu = 0; cpu < CPU_SETSIZE; cpu++) {
+ if (!CPU_ISSET(cpu, &cpuset))
+ continue;
+ if (!remap)
+ lcore_id = cpu;
+ if (lcore_id >= RTE_MAX_LCORE)
+ break;
+ cfg->user_cfg.lcore_cpusets[lcore_id] = malloc(sizeof(rte_cpuset_t));
+ if (cfg->user_cfg.lcore_cpusets[lcore_id] == NULL) {
+ for (unsigned int i = 0; i < lcore_id; i++) {
+ free(cfg->user_cfg.lcore_cpusets[i]);
+ cfg->user_cfg.lcore_cpusets[i] = NULL;
+ }
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ CPU_ZERO(cfg->user_cfg.lcore_cpusets[lcore_id]);
+ CPU_SET(cpu, cfg->user_cfg.lcore_cpusets[lcore_id]);
+ count++;
+ lcore_id++;
+ }
+
+ if (count == 0) {
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
+ return 0;
+}
+
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_init_from_cfg, 26.07)
int
rte_eal_init_from_cfg(const char *progname, struct rte_eal_cfg *cfg)
{
- struct rte_eal_cfg local_cfg = {
- .user_cfg = EAL_USER_CFG_INITIALIZER(local_cfg.user_cfg),
- };
+ unsigned int i;
if (progname == NULL || progname[0] == '\0') {
rte_errno = EINVAL;
return -1;
}
- if (cfg == NULL)
- cfg = &local_cfg;
+ if (cfg == NULL) {
+ struct rte_eal_cfg local_cfg = {
+ .user_cfg = EAL_USER_CFG_INITIALIZER(local_cfg.user_cfg),
+ };
+ local_cfg.user_cfg.in_memory = true;
+ local_cfg.user_cfg.no_shconf = true;
+ if (rte_eal_cfg_set_lcores_from_affinity(&local_cfg, true) < 0) {
+ eal_user_cfg_cleanup(&local_cfg.user_cfg);
+ return -1;
+ }
+
+ int ret = rte_eal_runtime_init(progname, &local_cfg.user_cfg);
+ eal_user_cfg_cleanup(&local_cfg.user_cfg);
+ return ret;
+ }
+
+ /* If no lcores have been configured, default to the current thread's
+ * CPU affinity, matching rte_eal_init() behaviour when neither -c nor
+ * -l is passed.
+ */
+ for (i = 0; i < RTE_MAX_LCORE; i++) {
+ if (cfg->user_cfg.lcore_cpusets[i] != NULL)
+ break;
+ }
+ if (i == RTE_MAX_LCORE && rte_eal_cfg_set_lcores_from_affinity(cfg, true) < 0)
+ return -1;
return rte_eal_runtime_init(progname, &cfg->user_cfg);
}
diff --git a/lib/eal_cfg/rte_eal_cfg.h b/lib/eal_cfg/rte_eal_cfg.h
index c0d316a6cb..ecabe136d8 100644
--- a/lib/eal_cfg/rte_eal_cfg.h
+++ b/lib/eal_cfg/rte_eal_cfg.h
@@ -19,6 +19,8 @@
extern "C" {
#endif
+#include <stdbool.h>
+
#include <rte_compat.h>
/**
@@ -32,6 +34,11 @@ struct rte_eal_cfg;
* Allocates and initialises a configuration struct with default values
* equivalent to those used when rte_eal_init() is called with no options.
*
+ * Returned defaults include:
+ * - empty lcore configuration (no lcores configured)
+ * - in-memory mode enabled, which disables multi-process mode,
+ * but allows multiple identical application instances to run concurrently.
+ *
* @return
* Pointer to a new configuration handle, or NULL on failure (rte_errno set).
*/
@@ -51,13 +58,52 @@ __rte_experimental
void
rte_eal_cfg_free(struct rte_eal_cfg *cfg);
+/**
+ * Populate lcore configuration from the calling thread's CPU affinity.
+ *
+ * Queries the current thread's CPU affinity and replaces the lcore
+ * configuration in @p cfg with one DPDK lcore per CPU in the affinity
+ * set. Any existing lcore configuration in the handle is cleared before
+ * the new mapping is applied.
+ *
+ * When @p remap is false, an identity mapping is used: lcore ID equals
+ * the physical CPU number. CPUs with numbers at or above RTE_MAX_LCORE
+ * are silently skipped. This matches the behaviour of rte_eal_init()
+ * when neither -c nor -l is passed on the command line.
+ *
+ * When @p remap is true, lcore IDs are assigned sequentially starting
+ * from 0 regardless of physical CPU numbers, so CPUs above RTE_MAX_LCORE
+ * are usable as long as the total count stays within RTE_MAX_LCORE.
+ *
+ * rte_eal_init_from_cfg() calls this automatically with @p remap set to
+ * true when no lcores have been configured in the handle.
+ *
+ * @param cfg
+ * Configuration handle created with rte_eal_cfg_create(). Must not be NULL.
+ * @param remap
+ * If true, assign sequential lcore IDs (0, 1, 2, ...) to affinitised CPUs.
+ * If false, use identity mapping (lcore ID == physical CPU ID).
+ * @return
+ * 0 on success, or -1 on failure (rte_errno is set to EINVAL if cfg is NULL,
+ * ENOTSUP if the affinity could not be queried or yielded no usable CPUs,
+ * or ENOMEM on allocation failure).
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap);
+
/**
* Initialise the EAL using a programmatic configuration handle.
*
* This function is a programmatic alternative to rte_eal_init().
* The caller optionally creates a configuration handle with rte_eal_cfg_create(),
* populates it via setter functions, and passes it to this function
- * in place of argc/argv. If @p cfg is NULL, default EAL configuration is used.
+ * in place of argc/argv.
+ *
+ * If @p cfg is NULL, default EAL configuration is used, where:
+ * - lcores are assigned based on the current thread's CPU affinity
+ * - in-memory mode is enabled, which disables multi-process mode,
+ * but allows multiple identical application instances to run concurrently.
*
* @param progname
* The program name, used for logging. Must not be NULL or empty.
--
2.51.0
* [RFC PATCH 39/44] app/test: enable testing init using EAL config lib
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (37 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 38/44] eal_cfg: configure defaults for easier testing and use Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 40/44] eal_cfg: add basic setters and getters Bruce Richardson
` (6 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
To test the eal_cfg library initialization we need to bypass the regular
rte_eal_init() call in the test binary, which requires a little rework
of the existing test harness. Make those changes and then add some very
basic init testing using the rte_eal_init_from_cfg() function.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/test.c | 14 +++---
app/test/test.h | 1 +
app/test/test_eal_cfg.c | 99 +++++++++++++++++++++++++++++++----------
3 files changed, 86 insertions(+), 28 deletions(-)
diff --git a/app/test/test.c b/app/test/test.c
index 58ef52f312..dba3c581a4 100644
--- a/app/test/test.c
+++ b/app/test/test.c
@@ -47,13 +47,11 @@ extern cmdline_parse_ctx_t main_ctx[];
const char *prgname; /* to be set to argv[0] */
-static const char *recursive_call; /* used in linux for MP and other tests */
-
static int
no_action(void){ return 0; }
static int
-do_recursive_call(void)
+do_recursive_call(const char *recursive_call)
{
unsigned i;
struct {
@@ -117,6 +115,13 @@ main(int argc, char **argv)
int i;
char *extra_args;
int ret;
+ const char *recursive_call = getenv(RECURSIVE_ENV_VAR);
+
+#ifdef RTE_LIB_EAL_CFG
+ /* Pre-EAL-init dispatch: for tests that test EAL init itself. */
+ if (recursive_call != NULL && strcmp(recursive_call, "test_eal_cfg_init") == 0)
+ return test_eal_cfg_init();
+#endif
extra_args = getenv("DPDK_TEST_PARAMS");
if (extra_args != NULL && strlen(extra_args) > 0) {
@@ -174,9 +179,8 @@ main(int argc, char **argv)
goto out;
}
- recursive_call = getenv(RECURSIVE_ENV_VAR);
if (recursive_call != NULL) {
- ret = do_recursive_call();
+ ret = do_recursive_call(recursive_call);
goto out;
}
diff --git a/app/test/test.h b/app/test/test.h
index 1f12fc5397..b80c818249 100644
--- a/app/test/test.h
+++ b/app/test/test.h
@@ -179,6 +179,7 @@ int test_exit(void);
int test_mp_secondary(void);
int test_panic(void);
int test_timer_secondary(void);
+int test_eal_cfg_init(void);
int test_set_rxtx_conf(cmdline_fixed_string_t mode);
int test_set_rxtx_anchor(cmdline_fixed_string_t type);
diff --git a/app/test/test_eal_cfg.c b/app/test/test_eal_cfg.c
index 3def760b50..f714be3fd4 100644
--- a/app/test/test_eal_cfg.c
+++ b/app/test/test_eal_cfg.c
@@ -4,12 +4,19 @@
#include <errno.h>
+#include <rte_eal.h>
+#include <rte_debug.h>
#include <rte_errno.h>
#include <rte_eal_cfg.h>
+#include <stdlib.h>
#include "test.h"
+#ifndef RTE_EXEC_ENV_WINDOWS
+#include "process.h"
+#endif
+
/* Test that a config handle can be created and freed without error. */
static int
test_eal_cfg_create_free(void)
@@ -29,14 +36,13 @@ test_eal_cfg_create_free(void)
return TEST_SUCCESS;
}
-/*
- * Test initialising EAL with a freshly created (empty/default) config.
- * Since the test binary has already initialised EAL, we expect the call to
- * fail with EALREADY rather than succeed — but the function must forward
- * the call through to rte_eal_runtime_init() and return its error correctly.
- */
+#ifdef RTE_EXEC_ENV_WINDOWS
+int
+test_eal_cfg_init(void) { return 0; }
+#else
+/* Test initialising EAL with a freshly created (empty/default) config. */
static int
-test_eal_cfg_init_empty(void)
+subtest_eal_cfg_init_empty(void)
{
struct rte_eal_cfg *cfg;
int ret;
@@ -45,29 +51,21 @@ test_eal_cfg_init_empty(void)
TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
ret = rte_eal_init_from_cfg("test_prog", cfg);
- TEST_ASSERT(ret == -1,
- "Expected -1 from rte_eal_init_from_cfg (EAL already init), got %d", ret);
- TEST_ASSERT(rte_errno == EALREADY,
- "Expected EALREADY, got %d", rte_errno);
+ TEST_ASSERT(ret == 0,
+ "Expected 0 from rte_eal_init_from_cfg, got %d", ret);
rte_eal_cfg_free(cfg);
+
+ rte_eal_cleanup();
return TEST_SUCCESS;
}
-/* Test that passing NULL cfg to rte_eal_init_from_cfg uses default config.
- * Since EAL is already running, we still expect EALREADY.
- */
+/* Test that passing NULL cfg to rte_eal_init_from_cfg uses default config. */
static int
-test_eal_cfg_init_null(void)
+subtest_eal_cfg_init_null(void)
{
int ret;
- ret = rte_eal_init_from_cfg("test_prog", NULL);
- TEST_ASSERT(ret == -1,
- "Expected -1 from rte_eal_init_from_cfg with NULL cfg, got %d", ret);
- TEST_ASSERT(rte_errno == EALREADY,
- "Expected EALREADY for NULL cfg, got %d", rte_errno);
-
/* NULL progname must be rejected regardless of cfg */
ret = rte_eal_init_from_cfg(NULL, NULL);
TEST_ASSERT(ret == -1,
@@ -75,17 +73,72 @@ test_eal_cfg_init_null(void)
TEST_ASSERT(rte_errno == EINVAL,
"Expected EINVAL for NULL progname, got %d", rte_errno);
+ ret = rte_eal_init_from_cfg("test_prog", NULL);
+ TEST_ASSERT(ret == 0,
+ "Expected 0 from rte_eal_init_from_cfg with NULL cfg, got %d", ret);
+
+ rte_eal_cleanup();
return TEST_SUCCESS;
}
+/*
+ * Test EAL initialisation from a default config in a fresh subprocess.
+ * Called before rte_eal_init() so that it can exercise the very first call
+ * through rte_eal_init_from_cfg(). Returns 0 on success, 1 on failure.
+ */
+int
+test_eal_cfg_init(void)
+{
+#define EAL_CFG_TEST_FN "EAL_CFG_TEST_FN"
+ struct test_fns {
+ const char *name;
+ int (*fn)(void);
+ } test_fns[] = {
+#define TEST_CFG_FN(X) { #X, X }
+ TEST_CFG_FN(subtest_eal_cfg_init_null),
+ TEST_CFG_FN(subtest_eal_cfg_init_empty),
+ { NULL, NULL }
+ };
+
+ const char *test_fn = getenv(EAL_CFG_TEST_FN);
+ if (test_fn == NULL) {
+ /* This is the parent process: spawn a child to run the test. */
+ const char *argv[] = { prgname, NULL };
+
+ for (size_t i = 0; test_fns[i].name != NULL; i++) {
+ setenv(EAL_CFG_TEST_FN, test_fns[i].name, 1);
+ int ret = process_dup(argv, 1, __func__);
+ if (ret != 0) {
+ printf("Test '%s' failed with return code %d\n",
+ test_fns[i].name, ret);
+ return -1;
+ }
+ printf("Test '%s' passed\n", test_fns[i].name);
+ }
+ } else {
+ /* This is the child process: run the specified test function. */
+ for (size_t i = 0; test_fns[i].name != NULL; i++) {
+ if (strcmp(test_fn, test_fns[i].name) == 0) {
+ printf("Running test '%s'\n", test_fns[i].name);
+ return test_fns[i].fn();
+ }
+ }
+
+ printf("Unknown test function '%s'\n", test_fn);
+ return -1;
+ }
+
+ return 0;
+}
+#endif /* RTE_EXEC_ENV_WINDOWS */
+
static struct unit_test_suite eal_cfg_testsuite = {
.suite_name = "EAL cfg API tests",
.setup = NULL,
.teardown = NULL,
.unit_test_cases = {
TEST_CASE(test_eal_cfg_create_free),
- TEST_CASE(test_eal_cfg_init_empty),
- TEST_CASE(test_eal_cfg_init_null),
+ TEST_CASE(test_eal_cfg_init),
TEST_CASES_END()
}
};
--
2.51.0
* [RFC PATCH 40/44] eal_cfg: add basic setters and getters
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (38 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 39/44] app/test: enable testing init using EAL config lib Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 41/44] eal_cfg: add hugepage memory configuration Bruce Richardson
` (5 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
For simple fields in the EAL config struct we can add basic setters and
getters. For boolean fields, both can be auto-generated by macros. The
other basic fields may need validation of their values, so add explicit
setter functions for those, though the getter functions can still be
auto-generated.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/test_eal_cfg.c | 372 ++++++++++++++++++++++++++++++++++++++
lib/eal_cfg/eal_cfg.c | 357 +++++++++++++++++++++++++++++++++++-
lib/eal_cfg/rte_eal_cfg.h | 309 +++++++++++++++++++++++++++++++
3 files changed, 1034 insertions(+), 4 deletions(-)
diff --git a/app/test/test_eal_cfg.c b/app/test/test_eal_cfg.c
index f714be3fd4..4424c42533 100644
--- a/app/test/test_eal_cfg.c
+++ b/app/test/test_eal_cfg.c
@@ -3,10 +3,13 @@
*/
#include <errno.h>
+#include <inttypes.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_errno.h>
+#include <rte_lcore.h>
+#include <rte_vect.h>
#include <rte_eal_cfg.h>
#include <stdlib.h>
@@ -40,6 +43,59 @@ test_eal_cfg_create_free(void)
int
test_eal_cfg_init(void) { return 0; }
#else
+/* Test that specific cfg values are visible through EAL query APIs post-init. */
+static int
+subtest_eal_cfg_init_with_values(void)
+{
+ struct rte_eal_cfg *cfg;
+ int ret;
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ TEST_ASSERT(rte_eal_cfg_set_no_pci(cfg, true) == 0,
+ "Failed to set no_pci");
+ TEST_ASSERT(rte_eal_cfg_set_iova_mode(cfg, RTE_IOVA_VA) == 0,
+ "Failed to set iova_mode");
+
+#ifdef RTE_ARCH_64
+ /*
+ * 0x200000000 (8 GiB) is above the 32-bit address space; only set and
+ * check this on 64-bit builds. rte_eal_get_baseaddr() returns
+ * eal_user_cfg.base_virtaddr directly when non-zero, so it is
+ * observable immediately after init.
+ */
+#define TEST_BASE_VIRTADDR ((uintptr_t)0x200000000ULL)
+ TEST_ASSERT(rte_eal_cfg_set_base_virtaddr(cfg, TEST_BASE_VIRTADDR) == 0,
+ "Failed to set base_virtaddr");
+#endif
+
+ ret = rte_eal_init_from_cfg("test_prog", cfg);
+ TEST_ASSERT(ret == 0,
+ "rte_eal_init_from_cfg failed: ret=%d rte_errno=%d", ret, rte_errno);
+
+ rte_eal_cfg_free(cfg);
+
+ /* no_pci=true means rte_eal_has_pci() must return 0 */
+ TEST_ASSERT(rte_eal_has_pci() == 0,
+ "Expected rte_eal_has_pci()==0 with no_pci=true, got %d",
+ rte_eal_has_pci());
+
+#ifdef RTE_ARCH_64
+ /* base_virtaddr was set non-zero so rte_eal_get_baseaddr() returns it */
+ TEST_ASSERT(rte_eal_get_baseaddr() == (uint64_t)TEST_BASE_VIRTADDR,
+ "Expected base addr 0x%" PRIx64 ", got 0x%" PRIx64,
+ (uint64_t)TEST_BASE_VIRTADDR, rte_eal_get_baseaddr());
+#endif
+
+ /* iova_mode=VA is stored directly to runtime state when not DC */
+ TEST_ASSERT(rte_eal_iova_mode() == RTE_IOVA_VA,
+ "Expected RTE_IOVA_VA after init, got %d", rte_eal_iova_mode());
+
+ rte_eal_cleanup();
+ return TEST_SUCCESS;
+}
+
/* Test initialising EAL with a freshly created (empty/default) config. */
static int
subtest_eal_cfg_init_empty(void)
@@ -97,6 +153,7 @@ test_eal_cfg_init(void)
#define TEST_CFG_FN(X) { #X, X }
TEST_CFG_FN(subtest_eal_cfg_init_null),
TEST_CFG_FN(subtest_eal_cfg_init_empty),
+ TEST_CFG_FN(subtest_eal_cfg_init_with_values),
{ NULL, NULL }
};
@@ -132,6 +189,315 @@ test_eal_cfg_init(void)
}
#endif /* RTE_EXEC_ENV_WINDOWS */
+/* Test a representative boolean field (no_pci): NULL, default, roundtrip. */
+static int
+test_eal_cfg_bool(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_no_pci(NULL, true) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_no_pci(NULL) == false,
+ "Expected false from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* default is false */
+ TEST_ASSERT(rte_eal_cfg_get_no_pci(cfg) == false,
+ "Expected default no_pci == false");
+
+ /* set true, get true */
+ TEST_ASSERT(rte_eal_cfg_set_no_pci(cfg, true) == 0,
+ "Expected 0 setting no_pci = true");
+ TEST_ASSERT(rte_eal_cfg_get_no_pci(cfg) == true,
+ "Expected no_pci == true after set");
+
+ /* set false, get false */
+ TEST_ASSERT(rte_eal_cfg_set_no_pci(cfg, false) == 0,
+ "Expected 0 setting no_pci = false");
+ TEST_ASSERT(rte_eal_cfg_get_no_pci(cfg) == false,
+ "Expected no_pci == false after reset");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test max_simd_bitwidth: NULL, valid values, boundary, and invalid inputs. */
+static int
+test_eal_cfg_max_simd_bitwidth(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_max_simd_bitwidth(NULL, 256) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_max_simd_bitwidth(NULL) == 0,
+ "Expected 0 from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* valid values: a selection of named powers of two */
+ static const uint16_t valid[] = {
+ RTE_VECT_SIMD_DISABLED,
+ RTE_VECT_SIMD_128,
+ RTE_VECT_SIMD_256,
+ RTE_VECT_SIMD_512,
+ RTE_VECT_SIMD_MAX, /* upper boundary */
+ };
+ for (size_t i = 0; i < RTE_DIM(valid); i++) {
+ TEST_ASSERT(rte_eal_cfg_set_max_simd_bitwidth(cfg, valid[i]) == 0,
+ "Expected 0 for bitwidth %u", valid[i]);
+ TEST_ASSERT(rte_eal_cfg_get_max_simd_bitwidth(cfg) == valid[i],
+ "get returned wrong value for bitwidth %u", valid[i]);
+ }
+
+ /* == RTE_VECT_SIMD_DISABLED/2 (32): must fail (need strictly greater) */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_max_simd_bitwidth(cfg, RTE_VECT_SIMD_DISABLED / 2) == -1,
+ "Expected -1 for bitwidth == SIMD_DISABLED/2");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for bitwidth == SIMD_DISABLED/2, got %d", rte_errno);
+
+ /* non-power-of-two (192 = 128 + 64) */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_max_simd_bitwidth(cfg, 192) == -1,
+ "Expected -1 for non-power-of-two bitwidth 192");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for bitwidth 192, got %d", rte_errno);
+
+ /* > RTE_VECT_SIMD_MAX */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_max_simd_bitwidth(cfg,
+ (uint16_t)(RTE_VECT_SIMD_MAX + 1)) == -1,
+ "Expected -1 for bitwidth > SIMD_MAX");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for bitwidth > SIMD_MAX, got %d", rte_errno);
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test iova_mode: NULL, valid modes, and an invalid value. */
+static int
+test_eal_cfg_iova_mode(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_iova_mode(NULL, RTE_IOVA_VA) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_iova_mode(NULL) == RTE_IOVA_DC,
+ "Expected RTE_IOVA_DC from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* valid modes */
+ static const enum rte_iova_mode valid[] = {
+ RTE_IOVA_DC, RTE_IOVA_PA, RTE_IOVA_VA,
+ };
+ for (size_t i = 0; i < RTE_DIM(valid); i++) {
+ TEST_ASSERT(rte_eal_cfg_set_iova_mode(cfg, valid[i]) == 0,
+ "Expected 0 for iova_mode %d", valid[i]);
+ TEST_ASSERT(rte_eal_cfg_get_iova_mode(cfg) == valid[i],
+ "get returned wrong value for iova_mode %d", valid[i]);
+ }
+
+ /* invalid: combination of flags that isn't a named mode */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_iova_mode(cfg, (enum rte_iova_mode)(RTE_IOVA_PA | RTE_IOVA_VA)) == -1,
+ "Expected -1 for invalid iova_mode");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for invalid iova_mode, got %d", rte_errno);
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test process_type: NULL, valid types, and RTE_PROC_INVALID. */
+static int
+test_eal_cfg_process_type(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_process_type(NULL, RTE_PROC_PRIMARY) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_process_type(NULL) == RTE_PROC_PRIMARY,
+ "Expected RTE_PROC_PRIMARY from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* valid types */
+ static const enum rte_proc_type_t valid[] = {
+ RTE_PROC_AUTO, RTE_PROC_PRIMARY, RTE_PROC_SECONDARY,
+ };
+ for (size_t i = 0; i < RTE_DIM(valid); i++) {
+ TEST_ASSERT(rte_eal_cfg_set_process_type(cfg, valid[i]) == 0,
+ "Expected 0 for process_type %d", valid[i]);
+ TEST_ASSERT(rte_eal_cfg_get_process_type(cfg) == valid[i],
+ "get returned wrong value for process_type %d", valid[i]);
+ }
+
+ /* RTE_PROC_INVALID must be rejected */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_process_type(cfg, RTE_PROC_INVALID) == -1,
+ "Expected -1 for RTE_PROC_INVALID");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for RTE_PROC_INVALID, got %d", rte_errno);
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/*
+ * Returns the first NUMA node ID in [0, RTE_MAX_NUMA_NODES) that does not
+ * exist on this system, or -1 if every ID in that range is occupied.
+ */
+static int
+find_absent_numa_node(void)
+{
+ unsigned int count = rte_socket_count();
+ bool present[RTE_MAX_NUMA_NODES] = {};
+
+ for (unsigned int i = 0; i < count; i++) {
+ int id = rte_socket_id_by_idx(i);
+ if (id >= 0 && (unsigned int)id < RTE_MAX_NUMA_NODES)
+ present[id] = true;
+ }
+ for (unsigned int id = 0; id < RTE_MAX_NUMA_NODES; id++) {
+ if (!present[id])
+ return (int)id;
+ }
+ return -1;
+}
+
+/* Test set/get numa_mem: valid node, ERANGE, ENODEV, NULL, roundtrip. */
+static int
+test_eal_cfg_numa_mem(void)
+{
+ struct rte_eal_cfg *cfg;
+ unsigned int valid_node;
+ int absent;
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* NULL cfg → EINVAL */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_numa_mem(NULL, 0, 1024) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_numa_mem(NULL, 0) == 0,
+ "Expected 0 from get with NULL cfg");
+
+ /* node >= RTE_MAX_NUMA_NODES → ERANGE */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_numa_mem(cfg, RTE_MAX_NUMA_NODES, 1024) == -1,
+ "Expected -1 for out-of-range node");
+ TEST_ASSERT(rte_errno == ERANGE,
+ "Expected ERANGE for node >= RTE_MAX_NUMA_NODES, got %d", rte_errno);
+
+ /* non-existent node -> ENODEV */
+ absent = find_absent_numa_node();
+ if (absent >= 0) {
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_numa_mem(cfg, (unsigned int)absent, 1024) == -1,
+ "Expected -1 for absent NUMA node %d", absent);
+ TEST_ASSERT(rte_errno == ENODEV,
+ "Expected ENODEV for absent node %d, got %d", absent, rte_errno);
+ } else {
+ printf(" Skipping ENODEV test: all %u NUMA node IDs occupied\n",
+ RTE_MAX_NUMA_NODES);
+ }
+
+ /* valid node -> roundtrip */
+ valid_node = (unsigned int)rte_socket_id_by_idx(0);
+ TEST_ASSERT(rte_eal_cfg_set_numa_mem(cfg, valid_node, 2048) == 0,
+ "Expected 0 for valid NUMA node %u", valid_node);
+ TEST_ASSERT(rte_eal_cfg_get_numa_mem(cfg, valid_node) == 2048,
+ "get_numa_mem returned wrong value for node %u", valid_node);
+
+ /* get with out-of-range node -> 0 */
+ TEST_ASSERT(rte_eal_cfg_get_numa_mem(cfg, RTE_MAX_NUMA_NODES) == 0,
+ "Expected 0 from get with out-of-range node");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test set/get numa_limit: valid node, ERANGE, ENODEV, NULL, roundtrip. */
+static int
+test_eal_cfg_numa_limit(void)
+{
+ struct rte_eal_cfg *cfg;
+ unsigned int valid_node;
+ int absent;
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* NULL cfg -> EINVAL */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_numa_limit(NULL, 0, 1024) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_numa_limit(NULL, 0) == 0,
+ "Expected 0 from get with NULL cfg");
+
+ /* node >= RTE_MAX_NUMA_NODES -> ERANGE */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_numa_limit(cfg, RTE_MAX_NUMA_NODES, 1024) == -1,
+ "Expected -1 for out-of-range node");
+ TEST_ASSERT(rte_errno == ERANGE,
+ "Expected ERANGE for node >= RTE_MAX_NUMA_NODES, got %d", rte_errno);
+
+ /* non-existent node -> ENODEV */
+ absent = find_absent_numa_node();
+ if (absent >= 0) {
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_numa_limit(cfg, (unsigned int)absent, 1024) == -1,
+ "Expected -1 for absent NUMA node %d", absent);
+ TEST_ASSERT(rte_errno == ENODEV,
+ "Expected ENODEV for absent node %d, got %d", absent, rte_errno);
+ } else {
+ printf(" Skipping ENODEV test: all %u NUMA node IDs occupied\n",
+ RTE_MAX_NUMA_NODES);
+ }
+
+ /* valid node -> roundtrip */
+ valid_node = (unsigned int)rte_socket_id_by_idx(0);
+ TEST_ASSERT(rte_eal_cfg_set_numa_limit(cfg, valid_node, 4096) == 0,
+ "Expected 0 for valid NUMA node %u", valid_node);
+ TEST_ASSERT(rte_eal_cfg_get_numa_limit(cfg, valid_node) == 4096,
+ "get_numa_limit returned wrong value for node %u", valid_node);
+
+ /* get with out-of-range node -> 0 */
+ TEST_ASSERT(rte_eal_cfg_get_numa_limit(cfg, RTE_MAX_NUMA_NODES) == 0,
+ "Expected 0 from get with out-of-range node");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
static struct unit_test_suite eal_cfg_testsuite = {
.suite_name = "EAL cfg API tests",
.setup = NULL,
@@ -139,6 +505,12 @@ static struct unit_test_suite eal_cfg_testsuite = {
.unit_test_cases = {
TEST_CASE(test_eal_cfg_create_free),
TEST_CASE(test_eal_cfg_init),
+ TEST_CASE(test_eal_cfg_bool),
+ TEST_CASE(test_eal_cfg_max_simd_bitwidth),
+ TEST_CASE(test_eal_cfg_iova_mode),
+ TEST_CASE(test_eal_cfg_process_type),
+ TEST_CASE(test_eal_cfg_numa_mem),
+ TEST_CASE(test_eal_cfg_numa_limit),
TEST_CASES_END()
}
};
diff --git a/lib/eal_cfg/eal_cfg.c b/lib/eal_cfg/eal_cfg.c
index 18a508e7ad..ce3be8201b 100644
--- a/lib/eal_cfg/eal_cfg.c
+++ b/lib/eal_cfg/eal_cfg.c
@@ -3,17 +3,70 @@
*/
#include <errno.h>
+#include <stdint.h>
#include <stdlib.h>
#include <eal_export.h>
+#include <rte_bitops.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_log.h>
#include <rte_thread.h>
+#include <rte_vect.h>
#include "eal_internal_cfg.h"
#include "rte_eal_cfg.h"
+RTE_LOG_REGISTER_DEFAULT(eal_cfg_logtype, INFO);
+
+#define RTE_LOGTYPE_EAL_CFG eal_cfg_logtype
+#define EAL_CFG_LOG(level, ...) \
+ RTE_LOG_LINE(level, EAL_CFG, "" __VA_ARGS__)
+/*
+ * Convenience macros for the repetitive bool/integer/enum setter and getter
+ * bodies. EAL_CFG_BOOL expands to a complete setter/getter pair, while
+ * EAL_CFG_GETTER expands to a single getter.
+ */
+
+/*
+ * CFG_REQUIRE_NOT_NULL(cfg): standard NULL guard for all setter functions.
+ * Logs a function-name-tagged error, sets rte_errno = EINVAL, and returns -1.
+ */
+#define CFG_REQUIRE_NOT_NULL(cfg) do { \
+ if ((cfg) == NULL) { \
+ EAL_CFG_LOG(ERR, "%s: cfg is NULL", __func__); \
+ rte_errno = EINVAL; \
+ return -1; \
+ } \
+} while (0)
+/*
+ * Simple getter: cfg NULL -> return null_val; else return cfg->user_cfg.name.
+ * Only usable when the function name suffix equals the struct field name.
+ * The matching RTE_EXPORT_EXPERIMENTAL_SYMBOL line must precede this macro.
+ */
+#define EAL_CFG_GETTER(type, name, null_val) \
+type \
+rte_eal_cfg_get_##name(const struct rte_eal_cfg *cfg) \
+{ \
+ if (cfg == NULL) \
+ return (null_val); \
+ return cfg->user_cfg.name; \
+}
+/* bool field: sym_suffix == struct field name */
+#define EAL_CFG_BOOL(name) \
+int \
+rte_eal_cfg_set_##name(struct rte_eal_cfg *cfg, bool val) \
+{ \
+ CFG_REQUIRE_NOT_NULL(cfg); \
+ cfg->user_cfg.name = val; \
+ return 0; \
+} \
+bool \
+rte_eal_cfg_get_##name(const struct rte_eal_cfg *cfg) \
+{ \
+ if (cfg == NULL) \
+ return false; \
+ return cfg->user_cfg.name; \
+}
+
struct rte_eal_cfg {
struct eal_user_cfg user_cfg;
};
@@ -49,6 +102,303 @@ rte_eal_cfg_free(struct rte_eal_cfg *cfg)
free(cfg);
}
+/* --- Boolean fields --- */
+/* Export declarations must be standalone lines for gen-version-map.py */
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_no_hugetlbfs, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_no_hugetlbfs, 26.07)
+EAL_CFG_BOOL(no_hugetlbfs)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_no_pci, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_no_pci, 26.07)
+EAL_CFG_BOOL(no_pci)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_no_hpet, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_no_hpet, 26.07)
+EAL_CFG_BOOL(no_hpet)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_vmware_tsc_map, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_vmware_tsc_map, 26.07)
+EAL_CFG_BOOL(vmware_tsc_map)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_no_shconf, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_no_shconf, 26.07)
+EAL_CFG_BOOL(no_shconf)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_in_memory, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_in_memory, 26.07)
+EAL_CFG_BOOL(in_memory)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_create_uio_dev, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_create_uio_dev, 26.07)
+EAL_CFG_BOOL(create_uio_dev)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_no_telemetry, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_no_telemetry, 26.07)
+EAL_CFG_BOOL(no_telemetry)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_legacy_mem, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_legacy_mem, 26.07)
+EAL_CFG_BOOL(legacy_mem)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_match_allocations, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_match_allocations, 26.07)
+EAL_CFG_BOOL(match_allocations)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_single_file_segments, 26.07)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_single_file_segments, 26.07)
+EAL_CFG_BOOL(single_file_segments)
+
+/* --- Integer fields --- */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_force_nchannel, 26.07)
+int
+rte_eal_cfg_set_force_nchannel(struct rte_eal_cfg *cfg, uint8_t val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ cfg->user_cfg.force_nchannel = val;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_force_nchannel, 26.07)
+EAL_CFG_GETTER(uint8_t, force_nchannel, 0)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_force_nrank, 26.07)
+int
+rte_eal_cfg_set_force_nrank(struct rte_eal_cfg *cfg, uint8_t val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ cfg->user_cfg.force_nrank = val;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_force_nrank, 26.07)
+EAL_CFG_GETTER(uint8_t, force_nrank, 0)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_huge_worker_stack_size, 26.07)
+int
+rte_eal_cfg_set_huge_worker_stack_size(struct rte_eal_cfg *cfg, size_t val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ cfg->user_cfg.huge_worker_stack_size = val;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_huge_worker_stack_size, 26.07)
+EAL_CFG_GETTER(size_t, huge_worker_stack_size, 0)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_base_virtaddr, 26.07)
+int
+rte_eal_cfg_set_base_virtaddr(struct rte_eal_cfg *cfg, uintptr_t val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ cfg->user_cfg.base_virtaddr = val;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_base_virtaddr, 26.07)
+EAL_CFG_GETTER(uintptr_t, base_virtaddr, 0)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_main_lcore, 26.07)
+int
+rte_eal_cfg_set_main_lcore(struct rte_eal_cfg *cfg, int val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (val != -1 && (val < 0 || (unsigned int)val >= RTE_MAX_LCORE)) {
+ EAL_CFG_LOG(ERR, "%s: main_lcore %d out of range [0, %u)",
+ __func__, val, RTE_MAX_LCORE);
+ rte_errno = ERANGE;
+ return -1;
+ }
+ cfg->user_cfg.main_lcore = val;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_main_lcore, 26.07)
+EAL_CFG_GETTER(int, main_lcore, -1)
+
+/* --- Enum fields --- */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_process_type, 26.07)
+int
+rte_eal_cfg_set_process_type(struct rte_eal_cfg *cfg, enum rte_proc_type_t val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (val < RTE_PROC_AUTO || val >= RTE_PROC_INVALID) {
+ EAL_CFG_LOG(ERR, "%s: invalid process type %d", __func__, val);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ cfg->user_cfg.process_type = val;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_process_type, 26.07)
+EAL_CFG_GETTER(enum rte_proc_type_t, process_type, RTE_PROC_PRIMARY)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_vfio_intr_mode, 26.07)
+int
+rte_eal_cfg_set_vfio_intr_mode(struct rte_eal_cfg *cfg, enum rte_intr_mode val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (val < RTE_INTR_MODE_NONE || val > RTE_INTR_MODE_MSIX) {
+ EAL_CFG_LOG(ERR, "%s: invalid vfio interrupt mode %d", __func__, val);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ cfg->user_cfg.vfio_intr_mode = val;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_vfio_intr_mode, 26.07)
+EAL_CFG_GETTER(enum rte_intr_mode, vfio_intr_mode, RTE_INTR_MODE_NONE)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_iova_mode, 26.07)
+int
+rte_eal_cfg_set_iova_mode(struct rte_eal_cfg *cfg, enum rte_iova_mode val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (val != RTE_IOVA_DC && val != RTE_IOVA_PA && val != RTE_IOVA_VA) {
+ EAL_CFG_LOG(ERR, "%s: invalid IOVA mode %d", __func__, val);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ cfg->user_cfg.iova_mode = val;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_iova_mode, 26.07)
+EAL_CFG_GETTER(enum rte_iova_mode, iova_mode, RTE_IOVA_DC)
+
+/* max_simd_bitwidth: the user-visible value is the bitwidth field; setting it
+ * also marks the value as forced, matching the CLI --force-max-simd-bitwidth
+ * semantics. */
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_max_simd_bitwidth, 26.07)
+int
+rte_eal_cfg_set_max_simd_bitwidth(struct rte_eal_cfg *cfg, uint16_t bitwidth)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (bitwidth < RTE_VECT_SIMD_DISABLED || bitwidth > RTE_VECT_SIMD_MAX ||
+ !rte_is_power_of_2(bitwidth)) {
+ EAL_CFG_LOG(ERR, "%s: invalid SIMD bitwidth %u (must be a power of two in [%u, %u])",
+ __func__, bitwidth, RTE_VECT_SIMD_DISABLED, RTE_VECT_SIMD_MAX);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ cfg->user_cfg.max_simd_bitwidth.bitwidth = bitwidth;
+ cfg->user_cfg.max_simd_bitwidth.forced = true;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_max_simd_bitwidth, 26.07)
+uint16_t
+rte_eal_cfg_get_max_simd_bitwidth(const struct rte_eal_cfg *cfg)
+{
+ if (cfg == NULL)
+ return 0;
+ return cfg->user_cfg.max_simd_bitwidth.bitwidth;
+}
+
+/*
+ * Check that @node is a NUMA node ID that actually exists on this system.
+ *
+ * The node argument in rte_eal_cfg_set_numa_mem / set_numa_limit is a NUMA
+ * node ID, not a sequential index. numa_mem[] is indexed by that same ID.
+ * rte_socket_id_by_idx() can convert an index to an ID when the caller only
+ * knows the ordinal position.
+ *
+ * Returns true when the node exists (or when platform info is unavailable and
+ * we cannot verify), false when the node is definitely absent.
+ */
+static bool
+numa_node_exists(unsigned int node)
+{
+ const struct eal_platform_info *pi = rte_eal_get_platform_info();
+
+ if (pi == NULL)
+ return true; /* discovery failed; allow and let EAL sort it out */
+
+ for (uint32_t i = 0; i < pi->numa_node_count; i++) {
+ if (pi->numa_nodes[i] == node)
+ return true;
+ }
+ return false;
+}
+
+/* --- Per-NUMA memory --- */
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_numa_mem, 26.07)
+int
+rte_eal_cfg_set_numa_mem(struct rte_eal_cfg *cfg, unsigned int node, uint64_t mb)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (node >= RTE_MAX_NUMA_NODES) {
+ EAL_CFG_LOG(ERR, "%s: node %u out of range (max %u)",
+ __func__, node, RTE_MAX_NUMA_NODES - 1);
+ rte_errno = ERANGE;
+ return -1;
+ }
+ if (!numa_node_exists(node)) {
+ EAL_CFG_LOG(ERR, "%s: NUMA node %u does not exist on this system",
+ __func__, node);
+ rte_errno = ENODEV;
+ return -1;
+ }
+ cfg->user_cfg.numa_mem[node] = mb;
+ cfg->user_cfg.force_numa = true;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_numa_mem, 26.07)
+uint64_t
+rte_eal_cfg_get_numa_mem(const struct rte_eal_cfg *cfg, unsigned int node)
+{
+ if (cfg == NULL || node >= RTE_MAX_NUMA_NODES)
+ return 0;
+ return cfg->user_cfg.numa_mem[node];
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_numa_limit, 26.07)
+int
+rte_eal_cfg_set_numa_limit(struct rte_eal_cfg *cfg, unsigned int node, uint64_t mb)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (node >= RTE_MAX_NUMA_NODES) {
+ EAL_CFG_LOG(ERR, "%s: node %u out of range (max %u)",
+ __func__, node, RTE_MAX_NUMA_NODES - 1);
+ rte_errno = ERANGE;
+ return -1;
+ }
+ if (!numa_node_exists(node)) {
+ EAL_CFG_LOG(ERR, "%s: NUMA node %u does not exist on this system",
+ __func__, node);
+ rte_errno = ENODEV;
+ return -1;
+ }
+ cfg->user_cfg.numa_limit[node] = mb;
+ cfg->user_cfg.force_numa_limits = true;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_numa_limit, 26.07)
+uint64_t
+rte_eal_cfg_get_numa_limit(const struct rte_eal_cfg *cfg, unsigned int node)
+{
+ if (cfg == NULL || node >= RTE_MAX_NUMA_NODES)
+ return 0;
+ return cfg->user_cfg.numa_limit[node];
+}
+
+/* --- memory: set is unsupported; get sums all per-NUMA allocations --- */
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_memory, 26.07)
+int
+rte_eal_cfg_set_memory(struct rte_eal_cfg *cfg __rte_unused,
+ size_t mb __rte_unused)
+{
+ EAL_CFG_LOG(ERR, "%s: setting total memory directly is not supported, use rte_eal_cfg_set_numa_mem()",
+ __func__);
+ rte_errno = ENOTSUP;
+ return -1;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_memory, 26.07)
+size_t
+rte_eal_cfg_get_memory(const struct rte_eal_cfg *cfg)
+{
+ uint64_t total = 0;
+
+ if (cfg == NULL)
+ return 0;
+ for (unsigned int i = 0; i < RTE_MAX_NUMA_NODES; i++)
+ total += cfg->user_cfg.numa_mem[i];
+ return (size_t)total;
+}
+
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_lcores_from_affinity, 26.07)
int
rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap)
@@ -57,12 +407,10 @@ rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap)
unsigned int lcore_id = 0;
int count = 0;
- if (cfg == NULL) {
- rte_errno = EINVAL;
- return -1;
- }
+ CFG_REQUIRE_NOT_NULL(cfg);
if (rte_thread_get_affinity_by_id(rte_thread_self(), &cpuset) != 0) {
+ EAL_CFG_LOG(ERR, "%s: failed to get thread CPU affinity", __func__);
rte_errno = ENOTSUP;
return -1;
}
@@ -96,6 +444,7 @@ rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap)
}
if (count == 0) {
+ EAL_CFG_LOG(ERR, "%s: no CPUs in thread affinity mask", __func__);
rte_errno = ENOTSUP;
return -1;
}
diff --git a/lib/eal_cfg/rte_eal_cfg.h b/lib/eal_cfg/rte_eal_cfg.h
index ecabe136d8..1ffdcffa49 100644
--- a/lib/eal_cfg/rte_eal_cfg.h
+++ b/lib/eal_cfg/rte_eal_cfg.h
@@ -20,8 +20,12 @@ extern "C" {
#endif
#include <stdbool.h>
+#include <stddef.h>
+#include <stdint.h>
#include <rte_compat.h>
+#include <rte_eal.h>
+#include <rte_pci_dev_feature_defs.h>
/**
* Opaque EAL configuration handle.
@@ -58,6 +62,311 @@ __rte_experimental
void
rte_eal_cfg_free(struct rte_eal_cfg *cfg);
+/**
+ * @name Boolean configuration fields
+ *
+ * Each pair of functions gets or sets one boolean flag in the configuration
+ * handle. The setter returns 0 on success or -1 with rte_errno set to EINVAL
+ * if @p cfg is NULL. The getter returns false if @p cfg is NULL.
+ *
+ * @{
+ */
+/** Disable use of hugepages (equivalent to --no-huge). */
+__rte_experimental
+int
+rte_eal_cfg_set_no_hugetlbfs(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_no_hugetlbfs(const struct rte_eal_cfg *cfg);
+
+/** Disable PCI bus scanning (equivalent to --no-pci). */
+__rte_experimental
+int
+rte_eal_cfg_set_no_pci(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_no_pci(const struct rte_eal_cfg *cfg);
+
+/** Disable HPET timer (equivalent to --no-hpet). */
+__rte_experimental
+int
+rte_eal_cfg_set_no_hpet(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_no_hpet(const struct rte_eal_cfg *cfg);
+
+/** Use VMware TSC mapping (equivalent to --vmware-tsc-map). */
+__rte_experimental
+int
+rte_eal_cfg_set_vmware_tsc_map(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_vmware_tsc_map(const struct rte_eal_cfg *cfg);
+
+/** Disable creation of a shared config file (equivalent to --no-shconf). */
+__rte_experimental
+int
+rte_eal_cfg_set_no_shconf(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_no_shconf(const struct rte_eal_cfg *cfg);
+
+/** Run without any shared runtime files (equivalent to --in-memory). */
+__rte_experimental
+int
+rte_eal_cfg_set_in_memory(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_in_memory(const struct rte_eal_cfg *cfg);
+
+/** Create /dev/uioX devices (equivalent to --create-uio-dev). */
+__rte_experimental
+int
+rte_eal_cfg_set_create_uio_dev(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_create_uio_dev(const struct rte_eal_cfg *cfg);
+
+/** Disable telemetry (equivalent to --no-telemetry). */
+__rte_experimental
+int
+rte_eal_cfg_set_no_telemetry(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_no_telemetry(const struct rte_eal_cfg *cfg);
+
+/** Use legacy memory layout (equivalent to --legacy-mem). */
+__rte_experimental
+int
+rte_eal_cfg_set_legacy_mem(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_legacy_mem(const struct rte_eal_cfg *cfg);
+
+/** Free hugepages exactly as allocated (equivalent to --match-allocations). */
+__rte_experimental
+int
+rte_eal_cfg_set_match_allocations(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_match_allocations(const struct rte_eal_cfg *cfg);
+
+/** Store all hugepages in single files per size (equivalent to --single-file-segments). */
+__rte_experimental
+int
+rte_eal_cfg_set_single_file_segments(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_single_file_segments(const struct rte_eal_cfg *cfg);
+/** @} */
+
+/**
+ * @name Integer configuration fields
+ *
+ * Setters return 0 on success, or -1 with rte_errno set to EINVAL if @p cfg
+ * is NULL, or ERANGE / EINVAL if the value is out of the accepted range.
+ * Getters return 0 (or a suitable zero-value) if @p cfg is NULL.
+ *
+ * @{
+ */
+/** Force number of memory channels (equivalent to -n). */
+__rte_experimental
+int
+rte_eal_cfg_set_force_nchannel(struct rte_eal_cfg *cfg, uint8_t val);
+__rte_experimental
+uint8_t
+rte_eal_cfg_get_force_nchannel(const struct rte_eal_cfg *cfg);
+
+/** Force number of memory ranks (equivalent to -r). */
+__rte_experimental
+int
+rte_eal_cfg_set_force_nrank(struct rte_eal_cfg *cfg, uint8_t val);
+__rte_experimental
+uint8_t
+rte_eal_cfg_get_force_nrank(const struct rte_eal_cfg *cfg);
+
+/** Worker lcore thread stack size in bytes (equivalent to --huge-worker-stack). */
+__rte_experimental
+int
+rte_eal_cfg_set_huge_worker_stack_size(struct rte_eal_cfg *cfg, size_t val);
+__rte_experimental
+size_t
+rte_eal_cfg_get_huge_worker_stack_size(const struct rte_eal_cfg *cfg);
+
+/** Base virtual address for memory mapping (equivalent to --base-virtaddr). */
+__rte_experimental
+int
+rte_eal_cfg_set_base_virtaddr(struct rte_eal_cfg *cfg, uintptr_t val);
+__rte_experimental
+uintptr_t
+rte_eal_cfg_get_base_virtaddr(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the ID of the main lcore (equivalent to --main-lcore).
+ *
+ * @param cfg Configuration handle. Must not be NULL.
+ * @param val Lcore ID in the range [0, RTE_MAX_LCORE), or -1 for auto-select.
+ * @return 0 on success, -1 with rte_errno set to EINVAL or ERANGE on error.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_main_lcore(struct rte_eal_cfg *cfg, int val);
+/** Returns -1 (auto-select) if @p cfg is NULL. */
+__rte_experimental
+int
+rte_eal_cfg_get_main_lcore(const struct rte_eal_cfg *cfg);
+/** @} */
+
+/**
+ * @name Enum configuration fields
+ *
+ * Setters return 0 on success, or -1 with rte_errno set to EINVAL if @p cfg
+ * is NULL or the value is not a recognised member of the enum.
+ * Getters return a suitable default if @p cfg is NULL.
+ *
+ * @{
+ */
+/**
+ * Set the process type (equivalent to --proc-type).
+ *
+ * @p val must be RTE_PROC_AUTO, RTE_PROC_PRIMARY, or RTE_PROC_SECONDARY.
+ * RTE_PROC_INVALID is rejected.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_process_type(struct rte_eal_cfg *cfg, enum rte_proc_type_t val);
+__rte_experimental
+enum rte_proc_type_t
+rte_eal_cfg_get_process_type(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the default VFIO interrupt mode (equivalent to --vfio-intr).
+ *
+ * @p val must be one of RTE_INTR_MODE_NONE, RTE_INTR_MODE_LEGACY,
+ * RTE_INTR_MODE_MSI, or RTE_INTR_MODE_MSIX.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_vfio_intr_mode(struct rte_eal_cfg *cfg, enum rte_intr_mode val);
+__rte_experimental
+enum rte_intr_mode
+rte_eal_cfg_get_vfio_intr_mode(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the requested IOVA mode (equivalent to --iova-mode).
+ *
+ * @p val must be RTE_IOVA_DC (auto-detect), RTE_IOVA_PA, or RTE_IOVA_VA.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_iova_mode(struct rte_eal_cfg *cfg, enum rte_iova_mode val);
+__rte_experimental
+enum rte_iova_mode
+rte_eal_cfg_get_iova_mode(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the maximum SIMD bitwidth for vector code paths.
+ *
+ * Marks the value as forced, equivalent to --force-max-simd-bitwidth.
+ *
+ * @param cfg Configuration handle. Must not be NULL.
+ * @param bitwidth Maximum SIMD bitwidth. Must be a power of two, at least
+ * RTE_VECT_SIMD_DISABLED and at most RTE_VECT_SIMD_MAX
+ * (e.g. 128, 256, 512).
+ * @return 0 on success, -1 with rte_errno set to EINVAL if @p cfg is NULL
+ * or the bitwidth is invalid.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_max_simd_bitwidth(struct rte_eal_cfg *cfg, uint16_t bitwidth);
+__rte_experimental
+uint16_t
+rte_eal_cfg_get_max_simd_bitwidth(const struct rte_eal_cfg *cfg);
+/** @} */
+
+/**
+ * @name Per-NUMA memory configuration
+ * @{
+ */
+/**
+ * Set the requested memory amount for a NUMA node, in megabytes.
+ *
+ * Equivalent to the per-socket value in --socket-mem.
+ *
+ * @param cfg Configuration handle. Must not be NULL.
+ * @param node NUMA node ID (must be < RTE_MAX_NUMA_NODES and present on this
+ * system; use rte_socket_id_by_idx() to convert a sequential index).
+ * @param mb Memory in megabytes to request on this node.
+ * @return 0 on success, -1 with rte_errno set to EINVAL, ERANGE, or ENODEV on error.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_numa_mem(struct rte_eal_cfg *cfg, unsigned int node, uint64_t mb);
+/**
+ * Get the requested memory amount for a NUMA node, in megabytes.
+ *
+ * @param cfg Configuration handle.
+ * @param node NUMA node ID (must be < RTE_MAX_NUMA_NODES).
+ * @return Configured memory in megabytes, or 0 if cfg is NULL or node is out of range.
+ */
+__rte_experimental
+uint64_t
+rte_eal_cfg_get_numa_mem(const struct rte_eal_cfg *cfg, unsigned int node);
+
+/**
+ * Set the memory limit for a NUMA node, in megabytes.
+ *
+ * Equivalent to the per-socket value in --socket-limit.
+ *
+ * @param cfg Configuration handle. Must not be NULL.
+ * @param node NUMA node ID (must be < RTE_MAX_NUMA_NODES and present on this
+ * system; use rte_socket_id_by_idx() to convert a sequential index).
+ * @param mb Memory limit in megabytes for this node.
+ * @return 0 on success, -1 with rte_errno set to EINVAL, ERANGE, or ENODEV on error.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_numa_limit(struct rte_eal_cfg *cfg, unsigned int node, uint64_t mb);
+/**
+ * Get the memory limit for a NUMA node, in megabytes.
+ *
+ * @param cfg Configuration handle.
+ * @param node NUMA node ID (must be < RTE_MAX_NUMA_NODES).
+ * @return Configured memory limit in megabytes, or 0 if cfg is NULL or node is out of range.
+ */
+__rte_experimental
+uint64_t
+rte_eal_cfg_get_numa_limit(const struct rte_eal_cfg *cfg, unsigned int node);
+/** @} */
+
+/**
+ * @name Total memory
+ * @{
+ */
+/**
+ * Setting total memory directly is not supported.
+ *
+ * Use rte_eal_cfg_set_numa_mem() to configure per-NUMA memory instead.
+ *
+ * @return Always -1 with rte_errno set to ENOTSUP.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_memory(struct rte_eal_cfg *cfg, size_t mb);
+
+/**
+ * Get the total configured memory across all NUMA nodes, in megabytes.
+ *
+ * Returns the sum of all per-NUMA memory values set via
+ * rte_eal_cfg_set_numa_mem().
+ *
+ * @param cfg Configuration handle, or NULL (returns 0).
+ * @return Total memory in megabytes.
+ */
+__rte_experimental
+size_t
+rte_eal_cfg_get_memory(const struct rte_eal_cfg *cfg);
+/** @} */
+
/**
* Populate lcore configuration from the calling thread's CPU affinity.
*
--
2.51.0
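The setters and getters added in this patch all follow one shape: an opaque handle, NULL-guarded setters that report errors through rte_errno, and getters that return a safe default for a NULL handle. A minimal self-contained sketch of that shape in plain C, using illustrative names (cfg_*, cfg_errno) rather than the DPDK API:

```c
#include <errno.h>
#include <stdbool.h>
#include <stdlib.h>

/* Illustrative stand-ins; real code uses struct rte_eal_cfg and rte_errno. */
struct cfg {
	bool no_huge;
	int main_lcore; /* -1 means auto-select, as in the patch */
};

static int cfg_errno;

struct cfg *
cfg_create(void)
{
	struct cfg *c = calloc(1, sizeof(*c));

	if (c != NULL)
		c->main_lcore = -1; /* default: auto-select */
	return c;
}

void
cfg_free(struct cfg *c)
{
	free(c);
}

/* setter: NULL guard reports EINVAL, range check reports ERANGE */
int
cfg_set_main_lcore(struct cfg *c, int val)
{
	if (c == NULL) {
		cfg_errno = EINVAL;
		return -1;
	}
	if (val != -1 && (val < 0 || val >= 128)) { /* 128 stands in for RTE_MAX_LCORE */
		cfg_errno = ERANGE;
		return -1;
	}
	c->main_lcore = val;
	return 0;
}

/* getter: tolerate NULL by returning the auto-select default */
int
cfg_get_main_lcore(const struct cfg *c)
{
	return c == NULL ? -1 : c->main_lcore;
}
```

Keeping the error reporting in the setter (rather than at init time) is what lets the unit tests above assert on rte_errno immediately after each call.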
^ permalink raw reply related [flat|nested] 50+ messages in thread

* [RFC PATCH 41/44] eal_cfg: add hugepage memory configuration
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (39 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 40/44] eal_cfg: add basic setters and getters Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 42/44] eal_cfg: support configuring lcores Bruce Richardson
` (4 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Add support for options relating to hugepage memory configuration, such
as the hugepage directory to use and when to clean up hugepage files. As
part of this, also update the in-memory setter so that it has the same
side-effects as the --in-memory command-line flag.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/test_eal_cfg.c | 231 ++++++++++++++++++++++++++++++++++++++
lib/eal_cfg/eal_cfg.c | 143 ++++++++++++++++++++++-
lib/eal_cfg/rte_eal_cfg.h | 126 +++++++++++++++++++--
3 files changed, 489 insertions(+), 11 deletions(-)
diff --git a/app/test/test_eal_cfg.c b/app/test/test_eal_cfg.c
index 4424c42533..8b90afefac 100644
--- a/app/test/test_eal_cfg.c
+++ b/app/test/test_eal_cfg.c
@@ -13,6 +13,9 @@
#include <rte_eal_cfg.h>
#include <stdlib.h>
+#include <string.h>
+
+#include "eal_internal_cfg.h"
#include "test.h"
@@ -498,6 +501,230 @@ test_eal_cfg_numa_limit(void)
return TEST_SUCCESS;
}
+/* Test hugefile_prefix: NULL cfg, empty, '%' char, valid, overwrite. */
+static int
+test_eal_cfg_hugefile_prefix(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_hugefile_prefix(NULL, "pfx") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_hugefile_prefix(NULL) == NULL,
+ "Expected NULL from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* default is NULL (unset) */
+ TEST_ASSERT(rte_eal_cfg_get_hugefile_prefix(cfg) == NULL,
+ "Expected default hugefile_prefix == NULL");
+
+ /* empty string rejected */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_hugefile_prefix(cfg, "") == -1,
+ "Expected -1 for empty prefix");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for empty prefix, got %d", rte_errno);
+
+ /* '%' character rejected */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_hugefile_prefix(cfg, "bad%prefix") == -1,
+ "Expected -1 for prefix containing '%%'");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for '%%' in prefix, got %d", rte_errno);
+
+ /* valid prefix */
+ TEST_ASSERT(rte_eal_cfg_set_hugefile_prefix(cfg, "myapp") == 0,
+ "Expected 0 for valid prefix");
+ TEST_ASSERT(strcmp(rte_eal_cfg_get_hugefile_prefix(cfg), "myapp") == 0,
+ "get_hugefile_prefix returned wrong value");
+
+ /* overwrite: getter must reflect the new value */
+ TEST_ASSERT(rte_eal_cfg_set_hugefile_prefix(cfg, "newpfx") == 0,
+ "Expected 0 overwriting prefix");
+ TEST_ASSERT(strcmp(rte_eal_cfg_get_hugefile_prefix(cfg), "newpfx") == 0,
+ "get_hugefile_prefix did not reflect overwritten value");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test hugepage_dir: NULL cfg, empty string, non-hugetlbfs path, valid hugetlbfs path. */
+static int
+test_eal_cfg_hugepage_dir(void)
+{
+ struct rte_eal_cfg *cfg;
+ const struct eal_platform_info *pi;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_hugepage_dir(NULL, "/mnt/huge") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_hugepage_dir(NULL) == NULL,
+ "Expected NULL from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* default is NULL (unset) */
+ TEST_ASSERT(rte_eal_cfg_get_hugepage_dir(cfg) == NULL,
+ "Expected default hugepage_dir == NULL");
+
+ /* empty string rejected */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_hugepage_dir(cfg, "") == -1,
+ "Expected -1 for empty dir");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for empty dir, got %d", rte_errno);
+
+ /* Tests gated on platform having hugepage mount points */
+ pi = rte_eal_get_platform_info();
+ if (pi != NULL && pi->num_hugepage_sizes > 0 &&
+ pi->hugepage_sizes[0].dir[0] != '\0') {
+ const char *valid_dir = pi->hugepage_sizes[0].dir;
+
+ /* positive test: valid hugetlbfs mount point must succeed */
+ TEST_ASSERT(rte_eal_cfg_set_hugepage_dir(cfg, valid_dir) == 0,
+ "Expected 0 for valid hugetlbfs dir '%s'", valid_dir);
+ TEST_ASSERT(strcmp(rte_eal_cfg_get_hugepage_dir(cfg), valid_dir) == 0,
+ "get_hugepage_dir returned wrong value");
+
+ /* overwrite with same valid dir */
+ TEST_ASSERT(rte_eal_cfg_set_hugepage_dir(cfg, valid_dir) == 0,
+ "Expected 0 overwriting with same dir");
+ TEST_ASSERT(strcmp(rte_eal_cfg_get_hugepage_dir(cfg), valid_dir) == 0,
+ "get_hugepage_dir did not reflect overwritten value");
+
+ /* negative test: "/" is never a hugetlbfs mount point */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_hugepage_dir(cfg, "/") == -1,
+ "Expected -1 for non-hugetlbfs dir '/'");
+ TEST_ASSERT(rte_errno == ENODEV,
+ "Expected ENODEV for non-hugetlbfs '/', got errno=%d", rte_errno);
+ } else {
+ printf(" Skipping hugetlbfs dir tests: no hugepage mount points discovered\n");
+ }
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test huge_unlink: NULL cfg, default, all valid modes, invalid value. */
+static int
+test_eal_cfg_huge_unlink(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_huge_unlink(NULL, RTE_EAL_HUGE_UNLINK_ALWAYS) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_huge_unlink(NULL) == RTE_EAL_HUGE_UNLINK_EXISTING,
+ "Expected EXISTING from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* default */
+ TEST_ASSERT(rte_eal_cfg_get_huge_unlink(cfg) == RTE_EAL_HUGE_UNLINK_EXISTING,
+ "Expected default huge_unlink == EXISTING");
+
+ /* all three valid modes roundtrip */
+ static const enum rte_eal_huge_unlink valid[] = {
+ RTE_EAL_HUGE_UNLINK_EXISTING,
+ RTE_EAL_HUGE_UNLINK_ALWAYS,
+ RTE_EAL_HUGE_UNLINK_NEVER,
+ };
+ for (size_t i = 0; i < RTE_DIM(valid); i++) {
+ TEST_ASSERT(rte_eal_cfg_set_huge_unlink(cfg, valid[i]) == 0,
+ "Expected 0 for huge_unlink mode %d", (int)valid[i]);
+ TEST_ASSERT(rte_eal_cfg_get_huge_unlink(cfg) == valid[i],
+ "get returned wrong value for huge_unlink mode %d", (int)valid[i]);
+ }
+
+ /* invalid enum value */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_huge_unlink(cfg, (enum rte_eal_huge_unlink)99) == -1,
+ "Expected -1 for invalid huge_unlink mode");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for invalid mode, got %d", rte_errno);
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/*
+ * Test in_memory setter side-effects: enabling it must also set no_shconf
+ * and flip huge_unlink to ALWAYS; disabling it must not reverse those.
+ */
+static int
+test_eal_cfg_in_memory(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_in_memory(NULL, true) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_in_memory(NULL) == false,
+ "Expected false from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* defaults */
+ TEST_ASSERT(rte_eal_cfg_get_in_memory(cfg) == true,
+ "Expected default in_memory == true (set by rte_eal_cfg_create)");
+
+ /* reset to false to test the set-true side-effects cleanly */
+ TEST_ASSERT(rte_eal_cfg_set_no_shconf(cfg, false) == 0,
+ "Failed to reset no_shconf");
+ TEST_ASSERT(rte_eal_cfg_set_huge_unlink(cfg, RTE_EAL_HUGE_UNLINK_EXISTING) == 0,
+ "Failed to reset huge_unlink");
+ TEST_ASSERT(rte_eal_cfg_set_in_memory(cfg, false) == 0,
+ "Failed to set in_memory = false");
+
+ TEST_ASSERT(rte_eal_cfg_get_in_memory(cfg) == false,
+ "Expected in_memory == false after reset");
+ TEST_ASSERT(rte_eal_cfg_get_no_shconf(cfg) == false,
+ "Expected no_shconf == false after reset");
+ TEST_ASSERT(rte_eal_cfg_get_huge_unlink(cfg) == RTE_EAL_HUGE_UNLINK_EXISTING,
+ "Expected huge_unlink == EXISTING after reset");
+
+ /* set in_memory = true: must pull no_shconf and huge_unlink with it */
+ TEST_ASSERT(rte_eal_cfg_set_in_memory(cfg, true) == 0,
+ "Expected 0 setting in_memory = true");
+ TEST_ASSERT(rte_eal_cfg_get_in_memory(cfg) == true,
+ "Expected in_memory == true after set");
+ TEST_ASSERT(rte_eal_cfg_get_no_shconf(cfg) == true,
+ "Expected no_shconf == true after in_memory = true");
+ TEST_ASSERT(rte_eal_cfg_get_huge_unlink(cfg) == RTE_EAL_HUGE_UNLINK_ALWAYS,
+ "Expected huge_unlink == ALWAYS after in_memory = true");
+
+ /* set in_memory = false: side-effects are NOT reversed */
+ TEST_ASSERT(rte_eal_cfg_set_in_memory(cfg, false) == 0,
+ "Expected 0 setting in_memory = false");
+ TEST_ASSERT(rte_eal_cfg_get_in_memory(cfg) == false,
+ "Expected in_memory == false after clear");
+ TEST_ASSERT(rte_eal_cfg_get_no_shconf(cfg) == true,
+ "Expected no_shconf to remain true after in_memory cleared");
+ TEST_ASSERT(rte_eal_cfg_get_huge_unlink(cfg) == RTE_EAL_HUGE_UNLINK_ALWAYS,
+ "Expected huge_unlink to remain ALWAYS after in_memory cleared");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
static struct unit_test_suite eal_cfg_testsuite = {
.suite_name = "EAL cfg API tests",
.setup = NULL,
@@ -511,6 +738,10 @@ static struct unit_test_suite eal_cfg_testsuite = {
TEST_CASE(test_eal_cfg_process_type),
TEST_CASE(test_eal_cfg_numa_mem),
TEST_CASE(test_eal_cfg_numa_limit),
+ TEST_CASE(test_eal_cfg_hugefile_prefix),
+ TEST_CASE(test_eal_cfg_hugepage_dir),
+ TEST_CASE(test_eal_cfg_huge_unlink),
+ TEST_CASE(test_eal_cfg_in_memory),
TEST_CASES_END()
}
};
diff --git a/lib/eal_cfg/eal_cfg.c b/lib/eal_cfg/eal_cfg.c
index ce3be8201b..3e3e6bfb59 100644
--- a/lib/eal_cfg/eal_cfg.c
+++ b/lib/eal_cfg/eal_cfg.c
@@ -5,6 +5,12 @@
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
+#include <string.h>
+
+#ifdef RTE_EXEC_ENV_LINUX
+#include <sys/vfs.h>
+#include <linux/magic.h>
+#endif
#include <eal_export.h>
#include <rte_bitops.h>
@@ -119,9 +125,6 @@ EAL_CFG_BOOL(vmware_tsc_map)
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_no_shconf, 26.07)
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_no_shconf, 26.07)
EAL_CFG_BOOL(no_shconf)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_in_memory, 26.07)
-RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_in_memory, 26.07)
-EAL_CFG_BOOL(in_memory)
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_create_uio_dev, 26.07)
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_create_uio_dev, 26.07)
EAL_CFG_BOOL(create_uio_dev)
@@ -287,6 +290,28 @@ rte_eal_cfg_get_max_simd_bitwidth(const struct rte_eal_cfg *cfg)
return cfg->user_cfg.max_simd_bitwidth.bitwidth;
}
+/*
+ * Check that @dir is a hugetlbfs mount point.
+ *
+ * Uses statfs(2) on Linux to verify the filesystem type. On non-Linux
+ * platforms where this check is not available, the function always returns
+ * true (permissive fallback — EAL will catch the error at init time).
+ */
+static bool
+hugedir_is_hugetlbfs(const char *dir)
+{
+#ifdef RTE_EXEC_ENV_LINUX
+ struct statfs sfs;
+
+ if (statfs(dir, &sfs) != 0)
+ return false;
+ return (uint32_t)sfs.f_type == HUGETLBFS_MAGIC;
+#else
+ (void)dir;
+ return true;
+#endif
+}
+
/*
* Check that @node is a NUMA node ID that actually exists on this system.
*
@@ -399,6 +424,118 @@ rte_eal_cfg_get_memory(const struct rte_eal_cfg *cfg)
return (size_t)total;
}
+/* --- Hugepage configuration --- */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_hugefile_prefix, 26.07)
+int
+rte_eal_cfg_set_hugefile_prefix(struct rte_eal_cfg *cfg, const char *prefix)
+{
+ char *copy;
+
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (prefix == NULL || prefix[0] == '\0' || strchr(prefix, '%') != NULL) {
+ EAL_CFG_LOG(ERR, "%s: invalid hugefile prefix", __func__);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ copy = strdup(prefix);
+ if (copy == NULL) {
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ free(cfg->user_cfg.hugefile_prefix);
+ cfg->user_cfg.hugefile_prefix = copy;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_hugefile_prefix, 26.07)
+EAL_CFG_GETTER(const char *, hugefile_prefix, NULL)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_hugepage_dir, 26.07)
+int
+rte_eal_cfg_set_hugepage_dir(struct rte_eal_cfg *cfg, const char *dir)
+{
+ char *copy;
+
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (dir == NULL || dir[0] == '\0') {
+ EAL_CFG_LOG(ERR, "%s: invalid hugepage dir", __func__);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ if (!hugedir_is_hugetlbfs(dir)) {
+ EAL_CFG_LOG(ERR, "%s: '%s' is not a hugetlbfs mount point", __func__, dir);
+ rte_errno = ENODEV;
+ return -1;
+ }
+ copy = strdup(dir);
+ if (copy == NULL) {
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ free(cfg->user_cfg.hugepage_dir);
+ cfg->user_cfg.hugepage_dir = copy;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_hugepage_dir, 26.07)
+EAL_CFG_GETTER(const char *, hugepage_dir, NULL)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_huge_unlink, 26.07)
+int
+rte_eal_cfg_set_huge_unlink(struct rte_eal_cfg *cfg, enum rte_eal_huge_unlink mode)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ switch (mode) {
+ case RTE_EAL_HUGE_UNLINK_EXISTING:
+ cfg->user_cfg.hugepage_file.unlink_existing = true;
+ cfg->user_cfg.hugepage_file.unlink_before_mapping = false;
+ break;
+ case RTE_EAL_HUGE_UNLINK_ALWAYS:
+ cfg->user_cfg.hugepage_file.unlink_existing = true;
+ cfg->user_cfg.hugepage_file.unlink_before_mapping = true;
+ break;
+ case RTE_EAL_HUGE_UNLINK_NEVER:
+ cfg->user_cfg.hugepage_file.unlink_existing = false;
+ cfg->user_cfg.hugepage_file.unlink_before_mapping = false;
+ break;
+ default:
+ EAL_CFG_LOG(ERR, "%s: invalid huge_unlink mode %d", __func__, (int)mode);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_huge_unlink, 26.07)
+enum rte_eal_huge_unlink
+rte_eal_cfg_get_huge_unlink(const struct rte_eal_cfg *cfg)
+{
+ if (cfg == NULL)
+ return RTE_EAL_HUGE_UNLINK_EXISTING;
+ if (cfg->user_cfg.hugepage_file.unlink_before_mapping)
+ return RTE_EAL_HUGE_UNLINK_ALWAYS;
+ if (!cfg->user_cfg.hugepage_file.unlink_existing)
+ return RTE_EAL_HUGE_UNLINK_NEVER;
+ return RTE_EAL_HUGE_UNLINK_EXISTING;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_in_memory, 26.07)
+int
+rte_eal_cfg_set_in_memory(struct rte_eal_cfg *cfg, bool val)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ cfg->user_cfg.in_memory = val;
+ if (val) {
+ /* in-memory is a superset of no_shconf and huge-unlink=always */
+ cfg->user_cfg.no_shconf = true;
+ cfg->user_cfg.hugepage_file.unlink_before_mapping = true;
+ }
+ return 0;
+}
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_in_memory, 26.07)
+EAL_CFG_GETTER(bool, in_memory, false)
+
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_lcores_from_affinity, 26.07)
int
rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap)
diff --git a/lib/eal_cfg/rte_eal_cfg.h b/lib/eal_cfg/rte_eal_cfg.h
index 1ffdcffa49..59eacfe64d 100644
--- a/lib/eal_cfg/rte_eal_cfg.h
+++ b/lib/eal_cfg/rte_eal_cfg.h
@@ -27,6 +27,21 @@ extern "C" {
#include <rte_eal.h>
#include <rte_pci_dev_feature_defs.h>
+/**
+ * Hugepage file unlink behaviour (equivalent to --huge-unlink).
+ */
+enum rte_eal_huge_unlink {
+ /** Remove stale hugepage files at startup; keep newly created files (default). */
+ RTE_EAL_HUGE_UNLINK_EXISTING = 0,
+ /**
+ * Unlink hugepage files immediately before mapping, leaving no trace in hugetlbfs.
+ * Equivalent to --huge-unlink or --huge-unlink=always.
+ */
+ RTE_EAL_HUGE_UNLINK_ALWAYS,
+ /** Never unlink hugepage files. May leave stale files on abnormal exit (warns). */
+ RTE_EAL_HUGE_UNLINK_NEVER,
+};
+
/**
* Opaque EAL configuration handle.
*/
@@ -111,14 +126,6 @@ __rte_experimental
bool
rte_eal_cfg_get_no_shconf(const struct rte_eal_cfg *cfg);
-/** Run without any shared runtime files (equivalent to --in-memory). */
-__rte_experimental
-int
-rte_eal_cfg_set_in_memory(struct rte_eal_cfg *cfg, bool val);
-__rte_experimental
-bool
-rte_eal_cfg_get_in_memory(const struct rte_eal_cfg *cfg);
-
/** Create /dev/uioX devices (equivalent to --create-uio-dev). */
__rte_experimental
int
@@ -367,6 +374,109 @@ size_t
rte_eal_cfg_get_memory(const struct rte_eal_cfg *cfg);
/** @} */
+/**
+ * @name Hugepage configuration
+ * @{
+ */
+/**
+ * Set the hugetlbfs file prefix (equivalent to --file-prefix).
+ *
+ * Overrides the base name used for hugepage backing files and runtime
+ * directory. Allows multiple independent DPDK processes to coexist on
+ * the same system without sharing hugepage files.
+ *
+ * @param cfg Configuration handle. Must not be NULL.
+ * @param prefix File prefix string. Must not be NULL or empty and must
+ * not contain the '%' character.
+ * @return 0 on success, -1 with rte_errno set to EINVAL or ENOMEM on error.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_hugefile_prefix(struct rte_eal_cfg *cfg, const char *prefix);
+/**
+ * Get the configured hugetlbfs file prefix.
+ *
+ * @param cfg Configuration handle.
+ * @return Pointer to the prefix string, or NULL if not set or cfg is NULL.
+ * The returned pointer is valid until the next call to
+ * rte_eal_cfg_set_hugefile_prefix() or rte_eal_cfg_free().
+ */
+__rte_experimental
+const char *
+rte_eal_cfg_get_hugefile_prefix(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the hugetlbfs mount directory (equivalent to --huge-dir).
+ *
+ * @param cfg Configuration handle. Must not be NULL.
+ * @param dir Path to a hugetlbfs mount point. Must not be NULL or empty, and
+ * must refer to a directory that is actually mounted as hugetlbfs
+ * on the current system.
+ * @return 0 on success, -1 with rte_errno set to EINVAL if cfg is NULL or
+ * the path is empty; ENODEV if the path is not a hugetlbfs mount;
+ * ENOMEM on allocation failure.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_hugepage_dir(struct rte_eal_cfg *cfg, const char *dir);
+/**
+ * Get the configured hugetlbfs directory.
+ *
+ * @param cfg Configuration handle.
+ * @return Pointer to the directory string, or NULL if not set or cfg is NULL.
+ * The returned pointer is valid until the next call to
+ * rte_eal_cfg_set_hugepage_dir() or rte_eal_cfg_free().
+ */
+__rte_experimental
+const char *
+rte_eal_cfg_get_hugepage_dir(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the hugepage file unlink behaviour (equivalent to --huge-unlink).
+ *
+ * Controls when hugepage backing files are removed:
+ * - RTE_EAL_HUGE_UNLINK_EXISTING (default): remove stale files at startup,
+ * keep newly created files.
+ * - RTE_EAL_HUGE_UNLINK_ALWAYS: unlink files immediately before mapping,
+ * leaving no trace in hugetlbfs.
+ * - RTE_EAL_HUGE_UNLINK_NEVER: never remove any hugepage files. This may
+ * leave stale files on abnormal exit and is logged as a warning at init.
+ *
+ * @param cfg Configuration handle. Must not be NULL.
+ * @param mode Unlink mode.
+ * @return 0 on success, -1 with rte_errno set to EINVAL on error.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_huge_unlink(struct rte_eal_cfg *cfg, enum rte_eal_huge_unlink mode);
+/**
+ * Get the configured hugepage file unlink behaviour.
+ *
+ * @param cfg Configuration handle.
+ * @return The configured mode, or RTE_EAL_HUGE_UNLINK_EXISTING if cfg is NULL.
+ */
+__rte_experimental
+enum rte_eal_huge_unlink
+rte_eal_cfg_get_huge_unlink(const struct rte_eal_cfg *cfg);
+
+/**
+ * Run without any shared runtime files (equivalent to --in-memory).
+ *
+ * NOTE: as with the command-line option, this implies no_shconf and huge-unlink=always.
+ * Passing true therefore also enables no_shconf and sets huge_unlink to ALWAYS.
+ *
+ * @param cfg Configuration handle. Must not be NULL.
+ * @param val true to enable in-memory mode.
+ * @return 0 on success, -1 with rte_errno set to EINVAL if cfg is NULL.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_in_memory(struct rte_eal_cfg *cfg, bool val);
+__rte_experimental
+bool
+rte_eal_cfg_get_in_memory(const struct rte_eal_cfg *cfg);
+/** @} */
+
/**
* Populate lcore configuration from the calling thread's CPU affinity.
*
--
2.51.0
* [RFC PATCH 42/44] eal_cfg: support configuring lcores
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (40 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 41/44] eal_cfg: add hugepage memory configuration Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 43/44] eal_cfg: support device and driver lists Bruce Richardson
` (3 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Add functions to configure lcores, assigning each a cpuset that specifies
which physical CPUs it may run on. Also support marking certain cores as
service cores.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/test_eal_cfg.c | 235 ++++++++++++++++++++++++++++++++++++++
lib/eal_cfg/eal_cfg.c | 78 +++++++++++++
lib/eal_cfg/rte_eal_cfg.h | 95 +++++++++++++++
3 files changed, 408 insertions(+)
diff --git a/app/test/test_eal_cfg.c b/app/test/test_eal_cfg.c
index 8b90afefac..ceaa42260a 100644
--- a/app/test/test_eal_cfg.c
+++ b/app/test/test_eal_cfg.c
@@ -2,6 +2,7 @@
* Copyright(c) 2026 Intel Corporation
*/
+#include <time.h>
#include <errno.h>
#include <inttypes.h>
@@ -12,6 +13,7 @@
#include <rte_vect.h>
#include <rte_eal_cfg.h>
+#include <rte_thread.h>
#include <stdlib.h>
#include <string.h>
@@ -140,6 +142,101 @@ subtest_eal_cfg_init_null(void)
return TEST_SUCCESS;
}
+/* Test that lcore cpusets configured via set_lcore are visible post-init. */
+static int
+subtest_eal_cfg_init_lcore_affinity(void)
+{
+ struct rte_eal_cfg *cfg;
+ const struct eal_platform_info *pi;
+ unsigned int all_cpus[CPU_SETSIZE];
+ unsigned int ncpus = 0;
+ unsigned int idx0, idx1;
+ rte_cpuset_t cs0, cs1;
+ int ret;
+
+ /*
+ * Collect all platform-detected CPUs. Picking from this list avoids
+ * any dependency on the calling thread's CPU affinity, which in a
+ * dpdk-test subprocess is typically pinned to a single CPU.
+ */
+ pi = rte_eal_get_platform_info();
+ if (pi == NULL) {
+ printf(" Skipping: platform info unavailable\n");
+ return TEST_SUCCESS;
+ }
+ for (unsigned int i = 0; i < pi->cpu_count; i++) {
+ if (pi->cpu_info[i].detected)
+ all_cpus[ncpus++] = i;
+ }
+ if (ncpus < 2) {
+ printf(" Skipping: need at least 2 detected CPUs, found %u\n", ncpus);
+ return TEST_SUCCESS;
+ }
+
+ /* Pick two distinct CPUs at random. */
+ srand((unsigned int)time(NULL));
+ idx0 = (unsigned int)rand() % ncpus;
+ do {
+ idx1 = (unsigned int)rand() % ncpus;
+ } while (idx1 == idx0);
+
+ CPU_ZERO(&cs0);
+ CPU_SET(all_cpus[idx0], &cs0);
+ CPU_ZERO(&cs1);
+ CPU_SET(all_cpus[idx1], &cs1);
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /*
+ * Pin lcore 0 to cs0 and lcore 1 to cs1. Use lcore 1 as the main
+ * lcore so we can verify the live thread affinity on the calling thread.
+ */
+ TEST_ASSERT(rte_eal_cfg_set_lcore(cfg, 0, &cs0, false) == 0,
+ "Failed to configure lcore 0");
+ TEST_ASSERT(rte_eal_cfg_set_lcore(cfg, 1, &cs1, false) == 0,
+ "Failed to configure lcore 1");
+ TEST_ASSERT(rte_eal_cfg_set_main_lcore(cfg, 1) == 0,
+ "Failed to set main_lcore to 1");
+
+ ret = rte_eal_init_from_cfg("test_prog", cfg);
+ TEST_ASSERT(ret == 0,
+ "rte_eal_init_from_cfg failed: ret=%d rte_errno=%d", ret, rte_errno);
+
+ rte_eal_cfg_free(cfg);
+
+ /* Two lcores (0 and 1) should be active. */
+ TEST_ASSERT(rte_lcore_count() == 2,
+ "Expected lcore_count=2, got %u", rte_lcore_count());
+ TEST_ASSERT(rte_lcore_is_enabled(0),
+ "Expected lcore 0 to be enabled");
+ TEST_ASSERT(rte_lcore_is_enabled(1),
+ "Expected lcore 1 to be enabled");
+ TEST_ASSERT(!rte_lcore_is_enabled(2),
+ "Expected lcore 2 to be disabled");
+
+ /* Worker lcore 0: verify the stored cpuset matches configuration. */
+ rte_cpuset_t got0 = rte_lcore_cpuset(0);
+ TEST_ASSERT(CPU_EQUAL(&got0, &cs0),
+ "lcore 0 cpuset mismatch: expected CPU %u", all_cpus[idx0]);
+
+ /*
+ * Main lcore 1: this is the current thread after init. Verify both
+ * that rte_lcore_id() identifies us as lcore 1 and that the actual
+ * thread CPU affinity matches what we configured.
+ */
+ TEST_ASSERT(rte_lcore_id() == 1,
+ "Expected rte_lcore_id()==1 on main thread, got %u", rte_lcore_id());
+ rte_cpuset_t live_affinity;
+ TEST_ASSERT(rte_thread_get_affinity_by_id(rte_thread_self(),
+ &live_affinity) == 0,
+ "Failed to get main thread affinity");
+ TEST_ASSERT(CPU_EQUAL(&live_affinity, &cs1),
+ "main lcore 1 live affinity mismatch: expected CPU %u", all_cpus[idx1]);
+
+ rte_eal_cleanup();
+ return TEST_SUCCESS;
+}
/*
* Test EAL initialisation from a default config in a fresh subprocess.
* Called before rte_eal_init() so that it can exercise the very first call
@@ -157,6 +254,7 @@ test_eal_cfg_init(void)
TEST_CFG_FN(subtest_eal_cfg_init_null),
TEST_CFG_FN(subtest_eal_cfg_init_empty),
TEST_CFG_FN(subtest_eal_cfg_init_with_values),
+ TEST_CFG_FN(subtest_eal_cfg_init_lcore_affinity),
{ NULL, NULL }
};
@@ -725,6 +823,141 @@ test_eal_cfg_in_memory(void)
return TEST_SUCCESS;
}
+#ifndef RTE_EXEC_ENV_WINDOWS /* Windows lacks the macros needed to compare cpusets */
+/* Test set/get_lcore and set/is_service_lcore. */
+static int
+test_eal_cfg_lcore(void)
+{
+ struct rte_eal_cfg *cfg;
+ rte_cpuset_t cpuset;
+
+ CPU_ZERO(&cpuset);
+ CPU_SET(0, &cpuset);
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_lcore(NULL, 0, &cpuset, false) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_lcore_cpuset(NULL, 0) == NULL,
+ "Expected NULL from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* lcore_id out of range */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_lcore(cfg, RTE_MAX_LCORE, &cpuset, false) == -1,
+ "Expected -1 for lcore_id == RTE_MAX_LCORE");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for out-of-range lcore_id, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_lcore_cpuset(cfg, RTE_MAX_LCORE) == NULL,
+ "Expected NULL from get with out-of-range lcore_id");
+
+ /* NULL cpuset */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_lcore(cfg, 0, NULL, false) == -1,
+ "Expected -1 for NULL cpuset");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cpuset, got %d", rte_errno);
+
+ /* default: lcore 0 not yet configured */
+ TEST_ASSERT(rte_eal_cfg_get_lcore_cpuset(cfg, 0) == NULL,
+ "Expected default lcore 0 cpuset to be NULL");
+
+ /* first set: replace=false, slot is empty, must succeed */
+ TEST_ASSERT(rte_eal_cfg_set_lcore(cfg, 0, &cpuset, false) == 0,
+ "Expected 0 for first set of lcore 0");
+ TEST_ASSERT(rte_eal_cfg_get_lcore_cpuset(cfg, 0) != NULL,
+ "Expected non-NULL cpuset after set");
+ TEST_ASSERT(CPU_EQUAL(rte_eal_cfg_get_lcore_cpuset(cfg, 0), &cpuset),
+ "Expected cpuset to match what was set");
+
+ /* second set with replace=false must fail */
+ rte_cpuset_t cpuset2;
+ CPU_ZERO(&cpuset2);
+ CPU_SET(1, &cpuset2);
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_lcore(cfg, 0, &cpuset2, false) == -1,
+ "Expected -1 when replace=false and lcore already set");
+ TEST_ASSERT(rte_errno == EEXIST,
+ "Expected EEXIST, got %d", rte_errno);
+
+ /* second set with replace=true must succeed and update cpuset */
+ TEST_ASSERT(rte_eal_cfg_set_lcore(cfg, 0, &cpuset2, true) == 0,
+ "Expected 0 for replace=true");
+ TEST_ASSERT(CPU_EQUAL(rte_eal_cfg_get_lcore_cpuset(cfg, 0), &cpuset2),
+ "Expected cpuset to be updated after replace");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test set_service_lcore and is_service_lcore. */
+static int
+test_eal_cfg_service_lcore(void)
+{
+ struct rte_eal_cfg *cfg;
+ rte_cpuset_t cpuset;
+
+ CPU_ZERO(&cpuset);
+ CPU_SET(0, &cpuset);
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_service_lcore(NULL, 0) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_is_service_lcore(NULL, 0) == false,
+ "Expected false from is_service_lcore with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* lcore_id out of range */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_service_lcore(cfg, RTE_MAX_LCORE) == -1,
+ "Expected -1 for lcore_id == RTE_MAX_LCORE");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for out-of-range lcore_id, got %d", rte_errno);
+
+ /* lcore not yet configured: must fail with ENOENT */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_service_lcore(cfg, 0) == -1,
+ "Expected -1 for unconfigured lcore");
+ TEST_ASSERT(rte_errno == ENOENT,
+ "Expected ENOENT for unconfigured lcore, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_is_service_lcore(cfg, 0) == false,
+ "Expected false before service lcore is set");
+
+ /* configure the lcore first, then designate as service */
+ TEST_ASSERT(rte_eal_cfg_set_lcore(cfg, 0, &cpuset, false) == 0,
+ "Expected 0 setting up lcore 0");
+ TEST_ASSERT(rte_eal_cfg_set_service_lcore(cfg, 0) == 0,
+ "Expected 0 designating lcore 0 as service");
+ TEST_ASSERT(rte_eal_cfg_is_service_lcore(cfg, 0) == true,
+ "Expected is_service_lcore to return true after set");
+
+ /* second call is idempotent */
+ TEST_ASSERT(rte_eal_cfg_set_service_lcore(cfg, 0) == 0,
+ "Expected 0 on repeated set_service_lcore");
+ TEST_ASSERT(rte_eal_cfg_is_service_lcore(cfg, 0) == true,
+ "Expected is_service_lcore still true after repeated set");
+
+ /* a different lcore that was not configured is not a service lcore */
+ TEST_ASSERT(rte_eal_cfg_is_service_lcore(cfg, 1) == false,
+ "Expected false for unconfigured lcore 1");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+#else
+static int test_eal_cfg_lcore(void) { return TEST_SUCCESS; }
+static int test_eal_cfg_service_lcore(void) { return TEST_SUCCESS; }
+#endif /* RTE_EXEC_ENV_WINDOWS */
+
static struct unit_test_suite eal_cfg_testsuite = {
.suite_name = "EAL cfg API tests",
.setup = NULL,
@@ -742,6 +975,8 @@ static struct unit_test_suite eal_cfg_testsuite = {
TEST_CASE(test_eal_cfg_hugepage_dir),
TEST_CASE(test_eal_cfg_huge_unlink),
TEST_CASE(test_eal_cfg_in_memory),
+ TEST_CASE(test_eal_cfg_lcore),
+ TEST_CASE(test_eal_cfg_service_lcore),
TEST_CASES_END()
}
};
diff --git a/lib/eal_cfg/eal_cfg.c b/lib/eal_cfg/eal_cfg.c
index 3e3e6bfb59..78f130c257 100644
--- a/lib/eal_cfg/eal_cfg.c
+++ b/lib/eal_cfg/eal_cfg.c
@@ -536,6 +536,84 @@ rte_eal_cfg_set_in_memory(struct rte_eal_cfg *cfg, bool val)
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_in_memory, 26.07)
EAL_CFG_GETTER(bool, in_memory, false)
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_lcore, 26.07)
+int
+rte_eal_cfg_set_lcore(struct rte_eal_cfg *cfg, unsigned int lcore_id,
+ const rte_cpuset_t *cpuset, bool replace)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+
+ if (lcore_id >= RTE_MAX_LCORE) {
+ EAL_CFG_LOG(ERR, "%s: lcore_id %u out of range [0, %u)",
+ __func__, lcore_id, RTE_MAX_LCORE);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (cpuset == NULL) {
+ EAL_CFG_LOG(ERR, "%s: cpuset is NULL", __func__);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (cfg->user_cfg.lcore_cpusets[lcore_id] != NULL && !replace) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+
+ if (cfg->user_cfg.lcore_cpusets[lcore_id] == NULL) {
+ cfg->user_cfg.lcore_cpusets[lcore_id] = malloc(sizeof(rte_cpuset_t));
+ if (cfg->user_cfg.lcore_cpusets[lcore_id] == NULL) {
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ }
+
+ memcpy(cfg->user_cfg.lcore_cpusets[lcore_id], cpuset, sizeof(rte_cpuset_t));
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_lcore_cpuset, 26.07)
+const rte_cpuset_t *
+rte_eal_cfg_get_lcore_cpuset(const struct rte_eal_cfg *cfg, unsigned int lcore_id)
+{
+ if (cfg == NULL || lcore_id >= RTE_MAX_LCORE)
+ return NULL;
+ return cfg->user_cfg.lcore_cpusets[lcore_id];
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_service_lcore, 26.07)
+int
+rte_eal_cfg_set_service_lcore(struct rte_eal_cfg *cfg, unsigned int lcore_id)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+
+ if (lcore_id >= RTE_MAX_LCORE) {
+ EAL_CFG_LOG(ERR, "%s: lcore_id %u out of range [0, %u)",
+ __func__, lcore_id, RTE_MAX_LCORE);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ if (cfg->user_cfg.lcore_cpusets[lcore_id] == NULL) {
+ EAL_CFG_LOG(ERR, "%s: lcore %u is not configured", __func__, lcore_id);
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ CPU_SET(lcore_id, &cfg->user_cfg.service_cpuset);
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_is_service_lcore, 26.07)
+bool
+rte_eal_cfg_is_service_lcore(const struct rte_eal_cfg *cfg, unsigned int lcore_id)
+{
+ if (cfg == NULL || lcore_id >= RTE_MAX_LCORE)
+ return false;
+ return CPU_ISSET(lcore_id, &cfg->user_cfg.service_cpuset);
+}
+
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_lcores_from_affinity, 26.07)
int
rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap)
diff --git a/lib/eal_cfg/rte_eal_cfg.h b/lib/eal_cfg/rte_eal_cfg.h
index 59eacfe64d..34a52d70e8 100644
--- a/lib/eal_cfg/rte_eal_cfg.h
+++ b/lib/eal_cfg/rte_eal_cfg.h
@@ -23,6 +23,7 @@ extern "C" {
#include <stddef.h>
#include <stdint.h>
+#include <rte_os.h>
#include <rte_compat.h>
#include <rte_eal.h>
#include <rte_pci_dev_feature_defs.h>
@@ -477,6 +478,100 @@ bool
rte_eal_cfg_get_in_memory(const struct rte_eal_cfg *cfg);
/** @} */
+/**
+ * @name Per-lcore CPU affinity configuration
+ * @{
+ */
+
+/**
+ * Set the CPU affinity for a specific lcore.
+ *
+ * Assigns the CPU affinity set @p cpuset to lcore @p lcore_id.
+ * If the lcore already has a cpuset configured and @p replace is false,
+ * the call fails. If @p replace is true, the existing cpuset is overwritten.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param lcore_id
+ * Lcore ID to configure. Must be in [0, RTE_MAX_LCORE).
+ * @param cpuset
+ * CPU affinity set to assign to this lcore. Must not be NULL.
+ * @param replace
+ * If true, overwrite any existing cpuset for this lcore.
+ * If false, return -1 with rte_errno set to EEXIST if already configured.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL (NULL cfg or cpuset,
+ * or lcore_id out of range), EEXIST (already configured and replace is
+ * false), or ENOMEM on allocation failure.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_lcore(struct rte_eal_cfg *cfg, unsigned int lcore_id,
+ const rte_cpuset_t *cpuset, bool replace);
+
+/**
+ * Get the CPU affinity set for a specific lcore.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param lcore_id
+ * Lcore ID to query. Must be in [0, RTE_MAX_LCORE).
+ * @return
+ * Pointer to the cpuset for the lcore, or NULL if the lcore is not
+ * configured or cfg is NULL or lcore_id is out of range.
+ */
+__rte_experimental
+const rte_cpuset_t *
+rte_eal_cfg_get_lcore_cpuset(const struct rte_eal_cfg *cfg, unsigned int lcore_id);
+
+/**
+ * @}
+ */
+
+/**
+ * @name Service lcore configuration
+ * @{
+ */
+
+/**
+ * Designate a configured lcore as a service lcore.
+ *
+ * Sets the bit for @p lcore_id in the service lcore cpuset, marking it
+ * as a service core. The lcore must already be configured via
+ * rte_eal_cfg_set_lcore() or rte_eal_cfg_set_lcores_from_affinity();
+ * it is an error to designate an unconfigured lcore as a service core.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param lcore_id
+ * Lcore ID to designate as a service lcore.
+ * Must be in [0, RTE_MAX_LCORE).
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL (NULL cfg or lcore_id
+ * out of range) or ENOENT (lcore not configured in lcore_cpusets).
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_service_lcore(struct rte_eal_cfg *cfg, unsigned int lcore_id);
+
+/**
+ * Query whether a lcore is designated as a service lcore.
+ *
+ * @param cfg
+ * Configuration handle. If NULL, returns false.
+ * @param lcore_id
+ * Lcore ID to query.
+ * @return
+ * true if the lcore is marked as a service lcore, false otherwise.
+ */
+__rte_experimental
+bool
+rte_eal_cfg_is_service_lcore(const struct rte_eal_cfg *cfg, unsigned int lcore_id);
+
+/**
+ * @}
+ */
+
/**
* Populate lcore configuration from the calling thread's CPU affinity.
*
--
2.51.0
* [RFC PATCH 43/44] eal_cfg: support device and driver lists
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (41 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 42/44] eal_cfg: support configuring lcores Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 16:58 ` [RFC PATCH 44/44] eal_cfg: add APIs for configuring remaining init settings Bruce Richardson
` (2 subsequent siblings)
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Support configuring the device lists (allowlist, blocklist and vdev
list), as well as lists of plugin paths.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/meson.build | 2 +-
app/test/test_eal_cfg.c | 128 ++++++++++++++++++++++++++++++++++++++
lib/eal_cfg/eal_cfg.c | 75 ++++++++++++++++++++++
lib/eal_cfg/rte_eal_cfg.h | 89 ++++++++++++++++++++++++++
4 files changed, 293 insertions(+), 1 deletion(-)
diff --git a/app/test/meson.build b/app/test/meson.build
index 3d39e82dd8..dccc27c18d 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -66,7 +66,7 @@ source_file_deps = {
'test_distributor_perf.c': ['distributor'],
'test_dmadev.c': ['dmadev', 'bus_vdev'],
'test_dmadev_api.c': ['dmadev'],
- 'test_eal_cfg.c': ['eal_cfg'],
+ 'test_eal_cfg.c': ['eal_cfg', 'ethdev', 'bus_vdev'],
'test_eal_flags.c': [],
'test_eal_fs.c': [],
'test_efd.c': ['efd', 'net'],
diff --git a/app/test/test_eal_cfg.c b/app/test/test_eal_cfg.c
index ceaa42260a..cf197e3eaa 100644
--- a/app/test/test_eal_cfg.c
+++ b/app/test/test_eal_cfg.c
@@ -13,6 +13,7 @@
#include <rte_vect.h>
#include <rte_eal_cfg.h>
+#include <rte_ethdev.h>
#include <rte_thread.h>
#include <stdlib.h>
#include <string.h>
@@ -142,6 +143,42 @@ subtest_eal_cfg_init_null(void)
return TEST_SUCCESS;
}
+/* Test that two net_null vdevs configured via add_vdev appear as ethdevs after init. */
+static int
+subtest_eal_cfg_init_null_pmd_vdevs(void)
+{
+#ifdef RTE_NET_NULL
+ struct rte_eal_cfg *cfg;
+ uint16_t port_id;
+ int ret;
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ TEST_ASSERT(rte_eal_cfg_add_vdev(cfg, "net_null0") == 0,
+ "Failed to add net_null0 vdev");
+ TEST_ASSERT(rte_eal_cfg_add_vdev(cfg, "net_null1") == 0,
+ "Failed to add net_null1 vdev");
+
+ ret = rte_eal_init_from_cfg("test_prog", cfg);
+ TEST_ASSERT(ret == 0,
+ "rte_eal_init_from_cfg failed: ret=%d rte_errno=%d", ret, rte_errno);
+
+ rte_eal_cfg_free(cfg);
+
+ TEST_ASSERT(rte_eth_dev_count_avail() == 2,
+ "Expected 2 ethdevs, got %u", rte_eth_dev_count_avail());
+
+ TEST_ASSERT(rte_eth_dev_get_port_by_name("net_null0", &port_id) == 0,
+ "Expected to find port net_null0");
+ TEST_ASSERT(rte_eth_dev_get_port_by_name("net_null1", &port_id) == 0,
+ "Expected to find port net_null1");
+
+ rte_eal_cleanup();
+#endif /* RTE_NET_NULL */
+ return TEST_SUCCESS;
+}
+
/* Test that lcore cpusets configured via set_lcore are visible post-init. */
static int
subtest_eal_cfg_init_lcore_affinity(void)
@@ -255,6 +292,7 @@ test_eal_cfg_init(void)
TEST_CFG_FN(subtest_eal_cfg_init_empty),
TEST_CFG_FN(subtest_eal_cfg_init_with_values),
TEST_CFG_FN(subtest_eal_cfg_init_lcore_affinity),
+ TEST_CFG_FN(subtest_eal_cfg_init_null_pmd_vdevs),
{ NULL, NULL }
};
@@ -823,6 +861,94 @@ test_eal_cfg_in_memory(void)
return TEST_SUCCESS;
}
+/* Test add_device_allow, add_device_block, add_vdev, add_plugin. */
+static int
+test_eal_cfg_devopt(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_add_device_allow(NULL, "0000:00:01.0") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_add_device_block(NULL, "0000:00:01.0") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_add_vdev(NULL, "net_ring0") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* empty string rejected */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_add_device_allow(cfg, "") == -1,
+ "Expected -1 for empty devargs");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for empty devargs, got %d", rte_errno);
+
+ /* valid allow entries */
+ TEST_ASSERT(rte_eal_cfg_add_device_allow(cfg, "0000:00:01.0") == 0,
+ "Expected 0 for add_device_allow");
+ TEST_ASSERT(rte_eal_cfg_add_device_allow(cfg, "0000:00:02.0,key=val") == 0,
+ "Expected 0 for add_device_allow with args");
+
+ /* valid block entry */
+ TEST_ASSERT(rte_eal_cfg_add_device_block(cfg, "0000:00:03.0") == 0,
+ "Expected 0 for add_device_block");
+
+ /* valid vdev entry */
+ TEST_ASSERT(rte_eal_cfg_add_vdev(cfg, "net_ring0") == 0,
+ "Expected 0 for add_vdev");
+ TEST_ASSERT(rte_eal_cfg_add_vdev(cfg, "net_ring1,size=1024") == 0,
+ "Expected 0 for add_vdev with args");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test add_plugin. */
+static int
+test_eal_cfg_plugin(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_add_plugin(NULL, "/path/to/lib.so") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* empty string rejected */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_add_plugin(cfg, "") == -1,
+ "Expected -1 for empty path");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for empty path, got %d", rte_errno);
+
+ /* valid plugin paths */
+ TEST_ASSERT(rte_eal_cfg_add_plugin(cfg, "/usr/lib/dpdk/pmds/librte_net_ring.so") == 0,
+ "Expected 0 for add_plugin");
+ TEST_ASSERT(rte_eal_cfg_add_plugin(cfg, "/usr/lib/dpdk/pmds") == 0,
+ "Expected 0 for add_plugin with directory/glob");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
#ifndef RTE_EXEC_ENV_WINDOWS /* windows is missing the necessary macros for comparing CPUSETs etc. */
/* Test set/get_lcore and set/is_service_lcore. */
static int
@@ -975,6 +1101,8 @@ static struct unit_test_suite eal_cfg_testsuite = {
TEST_CASE(test_eal_cfg_hugepage_dir),
TEST_CASE(test_eal_cfg_huge_unlink),
TEST_CASE(test_eal_cfg_in_memory),
+ TEST_CASE(test_eal_cfg_devopt),
+ TEST_CASE(test_eal_cfg_plugin),
TEST_CASE(test_eal_cfg_lcore),
TEST_CASE(test_eal_cfg_service_lcore),
TEST_CASES_END()
diff --git a/lib/eal_cfg/eal_cfg.c b/lib/eal_cfg/eal_cfg.c
index 78f130c257..258e4fa11b 100644
--- a/lib/eal_cfg/eal_cfg.c
+++ b/lib/eal_cfg/eal_cfg.c
@@ -14,9 +14,11 @@
#include <eal_export.h>
#include <rte_bitops.h>
+#include <rte_devargs.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_log.h>
+#include <rte_string_fns.h>
#include <rte_thread.h>
#include <rte_vect.h>
@@ -536,6 +538,79 @@ rte_eal_cfg_set_in_memory(struct rte_eal_cfg *cfg, bool val)
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_in_memory, 26.07)
EAL_CFG_GETTER(bool, in_memory, false)
+static int
+cfg_add_devopt(struct rte_eal_cfg *cfg, enum rte_devtype type, const char *devargs)
+{
+ struct device_option *devopt;
+ size_t arglen;
+
+ CFG_REQUIRE_NOT_NULL(cfg);
+
+ if (devargs == NULL || devargs[0] == '\0') {
+ EAL_CFG_LOG(ERR, "%s: devargs is NULL or empty", __func__);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ arglen = strlen(devargs) + 1;
+ devopt = calloc(1, sizeof(*devopt) + arglen);
+ if (devopt == NULL) {
+ rte_errno = ENOMEM;
+ return -1;
+ }
+
+ devopt->type = type;
+ strlcpy(devopt->arg, devargs, arglen);
+ TAILQ_INSERT_TAIL(&cfg->user_cfg.devopt_list, devopt, next);
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_add_device_allow, 26.07)
+int
+rte_eal_cfg_add_device_allow(struct rte_eal_cfg *cfg, const char *devargs)
+{
+ return cfg_add_devopt(cfg, RTE_DEVTYPE_ALLOWED, devargs);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_add_device_block, 26.07)
+int
+rte_eal_cfg_add_device_block(struct rte_eal_cfg *cfg, const char *devargs)
+{
+ return cfg_add_devopt(cfg, RTE_DEVTYPE_BLOCKED, devargs);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_add_vdev, 26.07)
+int
+rte_eal_cfg_add_vdev(struct rte_eal_cfg *cfg, const char *devargs)
+{
+ return cfg_add_devopt(cfg, RTE_DEVTYPE_VIRTUAL, devargs);
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_add_plugin, 26.07)
+int
+rte_eal_cfg_add_plugin(struct rte_eal_cfg *cfg, const char *path)
+{
+ struct eal_plugin_path *p;
+
+ CFG_REQUIRE_NOT_NULL(cfg);
+
+ if (path == NULL || path[0] == '\0') {
+ EAL_CFG_LOG(ERR, "%s: path is NULL or empty", __func__);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ p = calloc(1, sizeof(*p));
+ if (p == NULL) {
+ rte_errno = ENOMEM;
+ return -1;
+ }
+
+ strlcpy(p->name, path, sizeof(p->name));
+ TAILQ_INSERT_TAIL(&cfg->user_cfg.plugin_list, p, next);
+ return 0;
+}
+
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_lcore, 26.07)
int
rte_eal_cfg_set_lcore(struct rte_eal_cfg *cfg, unsigned int lcore_id,
diff --git a/lib/eal_cfg/rte_eal_cfg.h b/lib/eal_cfg/rte_eal_cfg.h
index 34a52d70e8..200010c40a 100644
--- a/lib/eal_cfg/rte_eal_cfg.h
+++ b/lib/eal_cfg/rte_eal_cfg.h
@@ -25,6 +25,7 @@ extern "C" {
#include <rte_os.h>
#include <rte_compat.h>
+#include <rte_devargs.h>
#include <rte_eal.h>
#include <rte_pci_dev_feature_defs.h>
@@ -568,6 +569,94 @@ __rte_experimental
bool
rte_eal_cfg_is_service_lcore(const struct rte_eal_cfg *cfg, unsigned int lcore_id);
+/**
+ * @}
+ */
+
+/**
+ * @name Device allow/block/virtual configuration
+ * @{
+ */
+
+/**
+ * Add a device allow entry (equivalent to -a / --allow).
+ *
+ * Appends a PCI or other bus device address to the allow list.
+ * Only devices on the allow list are probed when an allow list is present.
+ * Multiple calls may be made to allow multiple devices.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param devargs
+ * Device argument string, e.g. "0000:00:01.0" or "0000:00:01.0,key=val".
+ * Must not be NULL or empty.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL (NULL or empty devargs
+ * or NULL cfg) or ENOMEM on allocation failure.
+ */
+__rte_experimental
+int
+rte_eal_cfg_add_device_allow(struct rte_eal_cfg *cfg, const char *devargs);
+
+/**
+ * Add a device block entry (equivalent to -b / --block).
+ *
+ * Appends a device address to the block list, preventing it from being probed.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param devargs
+ * Device argument string. Must not be NULL or empty.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL or ENOMEM.
+ */
+__rte_experimental
+int
+rte_eal_cfg_add_device_block(struct rte_eal_cfg *cfg, const char *devargs);
+
+/**
+ * Add a virtual device (equivalent to --vdev).
+ *
+ * Appends a virtual device entry to the configuration.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param devargs
+ * Virtual device argument string, e.g. "net_ring0" or
+ * "net_ring0,size=1024". Must not be NULL or empty.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL or ENOMEM.
+ */
+__rte_experimental
+int
+rte_eal_cfg_add_vdev(struct rte_eal_cfg *cfg, const char *devargs);
+
+/**
+ * @}
+ */
+
+/**
+ * @name Plugin configuration
+ * @{
+ */
+
+/**
+ * Add a plugin (shared library) to load at EAL initialisation (equivalent to -d).
+ *
+ * Each call appends one entry. Glob patterns are supported for directory
+ * expansion as with the -d command-line option.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param path
+ * Path to the shared library or a glob pattern. Must not be NULL or empty.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL or ENOMEM.
+ */
+__rte_experimental
+int
+rte_eal_cfg_add_plugin(struct rte_eal_cfg *cfg, const char *path);
+
/**
* @}
*/
--
2.51.0
* [RFC PATCH 44/44] eal_cfg: add APIs for configuring remaining init settings
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (42 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 43/44] eal_cfg: support device and driver lists Bruce Richardson
@ 2026-04-29 16:58 ` Bruce Richardson
2026-04-29 21:40 ` [RFC PATCH 00/44] Allow initializing EAL without argc/argv Stephen Hemminger
2026-04-29 22:04 ` Stephen Hemminger
45 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-29 16:58 UTC (permalink / raw)
To: dev; +Cc: techboard, Bruce Richardson
Add APIs to configure the tracing settings for EAL, and the two
remaining settings: the default mbuf pool ops name and the VFIO VF UUID token.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/test_eal_cfg.c | 205 ++++++++++++++++++++++++++++++++++++++
lib/eal_cfg/eal_cfg.c | 134 +++++++++++++++++++++++++
lib/eal_cfg/rte_eal_cfg.h | 177 +++++++++++++++++++++++++++++++-
3 files changed, 515 insertions(+), 1 deletion(-)
diff --git a/app/test/test_eal_cfg.c b/app/test/test_eal_cfg.c
index cf197e3eaa..ef93a858e6 100644
--- a/app/test/test_eal_cfg.c
+++ b/app/test/test_eal_cfg.c
@@ -949,6 +949,118 @@ test_eal_cfg_plugin(void)
return TEST_SUCCESS;
}
+/* Test tracing configuration APIs. */
+static int
+test_eal_cfg_trace(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_add_trace_pattern(NULL, "lib.eal.*") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_trace_dir(NULL, "/tmp/traces") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_trace_dir(NULL) == NULL,
+ "Expected NULL from get_trace_dir with NULL cfg");
+
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_trace_bufsz(NULL, 1024) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_trace_bufsz(NULL) == 0,
+ "Expected 0 from get_trace_bufsz with NULL cfg");
+
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_trace_mode(NULL, RTE_TRACE_MODE_DISCARD) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_trace_mode(NULL) == RTE_TRACE_MODE_OVERWRITE,
+ "Expected default OVERWRITE from get_trace_mode with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* --- add_trace_pattern --- */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_add_trace_pattern(cfg, "") == -1,
+ "Expected -1 for empty pattern");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for empty pattern, got %d", rte_errno);
+
+ TEST_ASSERT(rte_eal_cfg_add_trace_pattern(cfg, "lib.eal.*") == 0,
+ "Expected 0 for valid pattern");
+ TEST_ASSERT(rte_eal_cfg_add_trace_pattern(cfg, "lib.mempool.*") == 0,
+ "Expected 0 for second pattern");
+
+ /* --- set/get trace_dir --- */
+ TEST_ASSERT(rte_eal_cfg_get_trace_dir(cfg) == NULL,
+ "Expected default trace_dir == NULL");
+
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_trace_dir(cfg, "") == -1,
+ "Expected -1 for empty dir");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for empty dir, got %d", rte_errno);
+
+ TEST_ASSERT(rte_eal_cfg_set_trace_dir(cfg, "/tmp/traces") == 0,
+ "Expected 0 for valid trace_dir");
+ TEST_ASSERT(strcmp(rte_eal_cfg_get_trace_dir(cfg), "/tmp/traces") == 0,
+ "get_trace_dir returned wrong value");
+
+ /* overwrite */
+ TEST_ASSERT(rte_eal_cfg_set_trace_dir(cfg, "/var/log/traces") == 0,
+ "Expected 0 overwriting trace_dir");
+ TEST_ASSERT(strcmp(rte_eal_cfg_get_trace_dir(cfg), "/var/log/traces") == 0,
+ "get_trace_dir did not reflect overwrite");
+
+ /* --- set/get trace_bufsz --- */
+ TEST_ASSERT(rte_eal_cfg_get_trace_bufsz(cfg) == 0,
+ "Expected default trace_bufsz == 0");
+
+ TEST_ASSERT(rte_eal_cfg_set_trace_bufsz(cfg, 2 * 1024 * 1024) == 0,
+ "Expected 0 setting trace_bufsz");
+ TEST_ASSERT(rte_eal_cfg_get_trace_bufsz(cfg) == 2 * 1024 * 1024,
+ "get_trace_bufsz returned wrong value");
+
+ /* 0 is valid (means use default) */
+ TEST_ASSERT(rte_eal_cfg_set_trace_bufsz(cfg, 0) == 0,
+ "Expected 0 setting trace_bufsz to 0");
+ TEST_ASSERT(rte_eal_cfg_get_trace_bufsz(cfg) == 0,
+ "Expected 0 after setting trace_bufsz to 0");
+
+ /* --- set/get trace_mode --- */
+ TEST_ASSERT(rte_eal_cfg_get_trace_mode(cfg) == RTE_TRACE_MODE_OVERWRITE,
+ "Expected default trace_mode == OVERWRITE");
+
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_trace_mode(cfg, (enum rte_trace_mode)99) == -1,
+ "Expected -1 for invalid trace_mode");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for invalid mode, got %d", rte_errno);
+
+ TEST_ASSERT(rte_eal_cfg_set_trace_mode(cfg, RTE_TRACE_MODE_DISCARD) == 0,
+ "Expected 0 setting DISCARD mode");
+ TEST_ASSERT(rte_eal_cfg_get_trace_mode(cfg) == RTE_TRACE_MODE_DISCARD,
+ "Expected DISCARD after set");
+
+ TEST_ASSERT(rte_eal_cfg_set_trace_mode(cfg, RTE_TRACE_MODE_OVERWRITE) == 0,
+ "Expected 0 setting OVERWRITE mode");
+ TEST_ASSERT(rte_eal_cfg_get_trace_mode(cfg) == RTE_TRACE_MODE_OVERWRITE,
+ "Expected OVERWRITE after set");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
#ifndef RTE_EXEC_ENV_WINDOWS /* windows is missing the necessary macros for comparing CPUSETs etc. */
/* Test set/get_lcore and set/is_service_lcore. */
static int
@@ -1084,6 +1196,96 @@ static int test_eal_cfg_lcore(void) { return TEST_SUCCESS; }
static int test_eal_cfg_service_lcore(void) { return TEST_SUCCESS; }
#endif /* RTE_EXEC_ENV_WINDOWS */
+/* Test set/get_vfio_vf_token. */
+static int
+test_eal_cfg_vfio_vf_token(void)
+{
+ struct rte_eal_cfg *cfg;
+ rte_uuid_t token, out;
+ const rte_uuid_t *p;
+ static const uint8_t bytes[16] = {
+ 0x01, 0x23, 0x45, 0x67, 0x89, 0xab, 0xcd, 0xef,
+ 0xfe, 0xdc, 0xba, 0x98, 0x76, 0x54, 0x32, 0x10,
+ };
+
+ /* NULL cfg: set returns -1/EINVAL; get returns NULL */
+ rte_errno = 0;
+ memcpy(token, bytes, sizeof(token));
+ TEST_ASSERT(rte_eal_cfg_set_vfio_vf_token(NULL, token) == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_vfio_vf_token(NULL) == NULL,
+ "Expected NULL from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* freshly created cfg: token should be all-zero */
+ p = rte_eal_cfg_get_vfio_vf_token(cfg);
+ TEST_ASSERT_NOT_NULL(p, "Expected non-NULL pointer from get");
+ memset(out, 0xff, sizeof(out));
+ rte_uuid_copy(out, *p);
+ for (int i = 0; i < (int)sizeof(out); i++)
+ TEST_ASSERT(out[i] == 0,
+ "fresh token byte[%d] should be 0, got 0x%02x", i, out[i]);
+
+ /* set and get roundtrip */
+ TEST_ASSERT(rte_eal_cfg_set_vfio_vf_token(cfg, token) == 0,
+ "Expected 0 setting token");
+ p = rte_eal_cfg_get_vfio_vf_token(cfg);
+ TEST_ASSERT(memcmp(*p, bytes, sizeof(bytes)) == 0,
+ "Token roundtrip mismatch");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
+/* Test set/get_user_mbuf_pool_ops_name. */
+static int
+test_eal_cfg_mbuf_pool_ops(void)
+{
+ struct rte_eal_cfg *cfg;
+
+ /* NULL cfg */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_user_mbuf_pool_ops_name(NULL, "ring_mp_mc") == -1,
+ "Expected -1 for NULL cfg");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for NULL cfg, got %d", rte_errno);
+ TEST_ASSERT(rte_eal_cfg_get_user_mbuf_pool_ops_name(NULL) == NULL,
+ "Expected NULL from get with NULL cfg");
+
+ cfg = rte_eal_cfg_create();
+ TEST_ASSERT_NOT_NULL(cfg, "rte_eal_cfg_create returned NULL");
+
+ /* default: NULL */
+ TEST_ASSERT(rte_eal_cfg_get_user_mbuf_pool_ops_name(cfg) == NULL,
+ "Expected NULL before any set");
+
+ /* empty string → EINVAL */
+ rte_errno = 0;
+ TEST_ASSERT(rte_eal_cfg_set_user_mbuf_pool_ops_name(cfg, "") == -1,
+ "Expected -1 for empty name");
+ TEST_ASSERT(rte_errno == EINVAL,
+ "Expected EINVAL for empty name, got %d", rte_errno);
+
+ /* valid set and get roundtrip */
+ TEST_ASSERT(rte_eal_cfg_set_user_mbuf_pool_ops_name(cfg, "ring_mp_mc") == 0,
+ "Expected 0 for valid name");
+ TEST_ASSERT(strcmp(rte_eal_cfg_get_user_mbuf_pool_ops_name(cfg), "ring_mp_mc") == 0,
+ "Name roundtrip mismatch");
+
+ /* overwrite */
+ TEST_ASSERT(rte_eal_cfg_set_user_mbuf_pool_ops_name(cfg, "stack") == 0,
+ "Expected 0 for overwrite");
+ TEST_ASSERT(strcmp(rte_eal_cfg_get_user_mbuf_pool_ops_name(cfg), "stack") == 0,
+ "Overwrite name mismatch");
+
+ rte_eal_cfg_free(cfg);
+ return TEST_SUCCESS;
+}
+
static struct unit_test_suite eal_cfg_testsuite = {
.suite_name = "EAL cfg API tests",
.setup = NULL,
@@ -1103,8 +1305,11 @@ static struct unit_test_suite eal_cfg_testsuite = {
TEST_CASE(test_eal_cfg_in_memory),
TEST_CASE(test_eal_cfg_devopt),
TEST_CASE(test_eal_cfg_plugin),
+ TEST_CASE(test_eal_cfg_trace),
TEST_CASE(test_eal_cfg_lcore),
TEST_CASE(test_eal_cfg_service_lcore),
+ TEST_CASE(test_eal_cfg_vfio_vf_token),
+ TEST_CASE(test_eal_cfg_mbuf_pool_ops),
TEST_CASES_END()
}
};
diff --git a/lib/eal_cfg/eal_cfg.c b/lib/eal_cfg/eal_cfg.c
index 258e4fa11b..6bc4975b8d 100644
--- a/lib/eal_cfg/eal_cfg.c
+++ b/lib/eal_cfg/eal_cfg.c
@@ -15,6 +15,7 @@
#include <eal_export.h>
#include <rte_bitops.h>
#include <rte_devargs.h>
+#include <rte_uuid.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_log.h>
@@ -611,6 +612,92 @@ rte_eal_cfg_add_plugin(struct rte_eal_cfg *cfg, const char *path)
return 0;
}
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_add_trace_pattern, 26.07)
+int
+rte_eal_cfg_add_trace_pattern(struct rte_eal_cfg *cfg, const char *pattern)
+{
+ struct eal_trace_arg *ta;
+
+ CFG_REQUIRE_NOT_NULL(cfg);
+
+ if (pattern == NULL || pattern[0] == '\0') {
+ EAL_CFG_LOG(ERR, "%s: pattern is NULL or empty", __func__);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ ta = malloc(sizeof(*ta));
+ if (ta == NULL) {
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ ta->val = strdup(pattern);
+ if (ta->val == NULL) {
+ free(ta);
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ STAILQ_INSERT_TAIL(&cfg->user_cfg.trace_patterns, ta, next);
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_trace_dir, 26.07)
+int
+rte_eal_cfg_set_trace_dir(struct rte_eal_cfg *cfg, const char *dir)
+{
+ char *copy;
+
+ CFG_REQUIRE_NOT_NULL(cfg);
+
+ if (dir == NULL || dir[0] == '\0') {
+ EAL_CFG_LOG(ERR, "%s: dir is NULL or empty", __func__);
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ copy = strdup(dir);
+ if (copy == NULL) {
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ free(cfg->user_cfg.trace_dir);
+ cfg->user_cfg.trace_dir = copy;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_trace_dir, 26.07)
+EAL_CFG_GETTER(const char *, trace_dir, NULL)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_trace_bufsz, 26.07)
+int
+rte_eal_cfg_set_trace_bufsz(struct rte_eal_cfg *cfg, uint64_t bufsz)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ cfg->user_cfg.trace_bufsz = bufsz;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_trace_bufsz, 26.07)
+EAL_CFG_GETTER(uint64_t, trace_bufsz, 0)
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_trace_mode, 26.07)
+int
+rte_eal_cfg_set_trace_mode(struct rte_eal_cfg *cfg, enum rte_trace_mode mode)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+
+ if (mode != RTE_TRACE_MODE_OVERWRITE && mode != RTE_TRACE_MODE_DISCARD) {
+ EAL_CFG_LOG(ERR, "%s: invalid trace mode %d", __func__, (int)mode);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ cfg->user_cfg.trace_mode = mode;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_trace_mode, 26.07)
+EAL_CFG_GETTER(enum rte_trace_mode, trace_mode, RTE_TRACE_MODE_OVERWRITE)
+
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_lcore, 26.07)
int
rte_eal_cfg_set_lcore(struct rte_eal_cfg *cfg, unsigned int lcore_id,
@@ -742,6 +829,53 @@ rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap)
return 0;
}
+/* --- VFIO VF token --- */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_vfio_vf_token, 26.07)
+int
+rte_eal_cfg_set_vfio_vf_token(struct rte_eal_cfg *cfg, const rte_uuid_t token)
+{
+ CFG_REQUIRE_NOT_NULL(cfg);
+ rte_uuid_copy(cfg->user_cfg.vfio_vf_token, token);
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_vfio_vf_token, 26.07)
+const rte_uuid_t *
+rte_eal_cfg_get_vfio_vf_token(const struct rte_eal_cfg *cfg)
+{
+ if (cfg == NULL)
+ return NULL;
+ return &cfg->user_cfg.vfio_vf_token;
+}
+
+/* --- User mbuf pool ops name --- */
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_set_user_mbuf_pool_ops_name, 26.07)
+int
+rte_eal_cfg_set_user_mbuf_pool_ops_name(struct rte_eal_cfg *cfg, const char *name)
+{
+ char *copy;
+
+ CFG_REQUIRE_NOT_NULL(cfg);
+ if (name == NULL || name[0] == '\0') {
+ EAL_CFG_LOG(ERR, "%s: name is NULL or empty", __func__);
+ rte_errno = EINVAL;
+ return -1;
+ }
+ copy = strdup(name);
+ if (copy == NULL) {
+ rte_errno = ENOMEM;
+ return -1;
+ }
+ free(cfg->user_cfg.user_mbuf_pool_ops_name);
+ cfg->user_cfg.user_mbuf_pool_ops_name = copy;
+ return 0;
+}
+
+RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_user_mbuf_pool_ops_name, 26.07)
+EAL_CFG_GETTER(const char *, user_mbuf_pool_ops_name, NULL)
+
RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_init_from_cfg, 26.07)
int
rte_eal_init_from_cfg(const char *progname, struct rte_eal_cfg *cfg)
diff --git a/lib/eal_cfg/rte_eal_cfg.h b/lib/eal_cfg/rte_eal_cfg.h
index 200010c40a..84dde085a0 100644
--- a/lib/eal_cfg/rte_eal_cfg.h
+++ b/lib/eal_cfg/rte_eal_cfg.h
@@ -25,9 +25,10 @@ extern "C" {
#include <rte_os.h>
#include <rte_compat.h>
-#include <rte_devargs.h>
#include <rte_eal.h>
#include <rte_pci_dev_feature_defs.h>
+#include <rte_trace.h>
+#include <rte_uuid.h>
/**
* Hugepage file unlink behaviour (equivalent to --huge-unlink).
@@ -657,6 +658,120 @@ __rte_experimental
int
rte_eal_cfg_add_plugin(struct rte_eal_cfg *cfg, const char *path);
+/**
+ * @}
+ */
+
+/**
+ * @name Tracing configuration
+ * @{
+ */
+
+/**
+ * Add a trace pattern (equivalent to --trace).
+ *
+ * Appends a glob pattern to select which trace points are enabled.
+ * Multiple patterns may be added; they are applied in order.
+ * Tracing is not supported on Windows; patterns are accepted but ignored.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param pattern
+ * Glob pattern selecting trace points, e.g. "lib.eal.*".
+ * Must not be NULL or empty.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL or ENOMEM.
+ */
+__rte_experimental
+int
+rte_eal_cfg_add_trace_pattern(struct rte_eal_cfg *cfg, const char *pattern);
+
+/**
+ * Set the trace output directory (equivalent to --trace-dir).
+ *
+ * Sets the directory where trace files are written. The string is copied
+ * internally; the caller does not need to keep it alive.
+ * Tracing is not supported on Windows; the value is accepted but ignored.
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param dir
+ * Path to the output directory. Must not be NULL or empty.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL or ENOMEM.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_trace_dir(struct rte_eal_cfg *cfg, const char *dir);
+
+/**
+ * Get the configured trace output directory.
+ *
+ * @param cfg
+ * Configuration handle. If NULL, returns NULL.
+ * @return
+ * The trace directory string, or NULL if not set.
+ * Valid until the next call to rte_eal_cfg_set_trace_dir() or rte_eal_cfg_free().
+ */
+__rte_experimental
+const char *
+rte_eal_cfg_get_trace_dir(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the trace buffer size in bytes (equivalent to --trace-bufsz).
+ *
+ * A value of 0 means use the default (1 MB per lcore).
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param bufsz
+ * Buffer size in bytes.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL.
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_trace_bufsz(struct rte_eal_cfg *cfg, uint64_t bufsz);
+
+/**
+ * Get the configured trace buffer size.
+ *
+ * @param cfg
+ * Configuration handle. If NULL, returns 0.
+ * @return
+ * Trace buffer size in bytes, or 0 if not set (default).
+ */
+__rte_experimental
+uint64_t
+rte_eal_cfg_get_trace_bufsz(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the trace mode (equivalent to --trace-mode).
+ *
+ * @param cfg
+ * Configuration handle. Must not be NULL.
+ * @param mode
+ * RTE_TRACE_MODE_OVERWRITE or RTE_TRACE_MODE_DISCARD.
+ * @return
+ * 0 on success, or -1 with rte_errno set to EINVAL (NULL cfg or
+ * unrecognised mode value).
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_trace_mode(struct rte_eal_cfg *cfg, enum rte_trace_mode mode);
+
+/**
+ * Get the configured trace mode.
+ *
+ * @param cfg
+ * Configuration handle. If NULL, returns RTE_TRACE_MODE_OVERWRITE.
+ * @return
+ * The trace mode.
+ */
+__rte_experimental
+enum rte_trace_mode
+rte_eal_cfg_get_trace_mode(const struct rte_eal_cfg *cfg);
+
/**
* @}
*/
@@ -695,6 +810,66 @@ __rte_experimental
int
rte_eal_cfg_set_lcores_from_affinity(struct rte_eal_cfg *cfg, bool remap);
+/**
+ * Set the VFIO VF token (shared secret for VFIO-PCI bound PF and VFs).
+ *
+ * Equivalent to the --vfio-vf-token EAL option.
+ *
+ * @param cfg
+ * Configuration handle created with rte_eal_cfg_create().
+ * @param token
+ * UUID token to set.
+ * @return
+ * 0 on success, -1 on error (rte_errno is set).
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_vfio_vf_token(struct rte_eal_cfg *cfg, const rte_uuid_t token);
+
+/**
+ * Get the VFIO VF token.
+ *
+ * Returns a pointer to the token stored inside @p cfg.
+ * The pointer is valid for the lifetime of @p cfg.
+ *
+ * @param cfg
+ * Configuration handle, or NULL.
+ * @return
+ * Pointer to the token, or NULL if @p cfg is NULL.
+ */
+__rte_experimental
+const rte_uuid_t *
+rte_eal_cfg_get_vfio_vf_token(const struct rte_eal_cfg *cfg);
+
+/**
+ * Set the user-defined mbuf pool ops name.
+ *
+ * Equivalent to the --mbuf-pool-ops-name EAL option.
+ *
+ * @param cfg
+ * Configuration handle created with rte_eal_cfg_create().
+ * @param name
+ * Pool ops name string. Must not be NULL or empty.
+ * @return
+ * 0 on success, -1 on error (rte_errno is set to EINVAL or ENOMEM).
+ */
+__rte_experimental
+int
+rte_eal_cfg_set_user_mbuf_pool_ops_name(struct rte_eal_cfg *cfg, const char *name);
+
+/**
+ * Get the user-defined mbuf pool ops name.
+ *
+ * @param cfg
+ * Configuration handle, or NULL.
+ * @return
+ * Pointer to the ops name string, or NULL if not set or @p cfg is NULL.
+ * The returned pointer is owned by @p cfg; do not free it.
+ */
+__rte_experimental
+const char *
+rte_eal_cfg_get_user_mbuf_pool_ops_name(const struct rte_eal_cfg *cfg);
+
/**
* Initialise the EAL using a programmatic configuration handle.
*
--
2.51.0
* Re: [RFC PATCH 00/44] Allow initializing EAL without argc/argv
2026-04-29 16:57 [RFC PATCH 00/44] Allow initializing EAL without argc/argv Bruce Richardson
` (43 preceding siblings ...)
2026-04-29 16:58 ` [RFC PATCH 44/44] eal_cfg: add APIs for configuring remaining init settings Bruce Richardson
@ 2026-04-29 21:40 ` Stephen Hemminger
2026-04-29 22:04 ` Stephen Hemminger
45 siblings, 0 replies; 50+ messages in thread
From: Stephen Hemminger @ 2026-04-29 21:40 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, techboard
On Wed, 29 Apr 2026 17:57:52 +0100
Bruce Richardson <bruce.richardson@intel.com> wrote:
> Part 3: Prototype of an eal_cfg library
>
> Once we have the internal C API to init eal using a struct
> rte_eal_user_cfg, we can create new libraries which provide alternate
> ways to build up the user_cfg and initialize DPDK. Patches 37-44 have a
> rough example of such a library.
>
> - The lib allows a user to create an opaque rte_eal_user_cfg struct,
> which can then be modified by APIs to get/set various parameters
> before calling rte_cfg_eal_init().
> - An alternative way to do things (not prototyped), may be to have a
> library that creates an eal_user_cfg struct based on the contents of
> an ini file using the configfile library.
> [Both these options could be used in parallel. Note too that both have
> no ABI implications for adding new flags, or making old ones no-ops!]
Ideally the cfg would be an ini file.
But the existing ini file parser has lots of issues and is really
not that usable compared to other equivalent non-DPDK libraries.
* Re: [RFC PATCH 00/44] Allow intitializing EAL without argc/argv
2026-04-29 16:57 [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Bruce Richardson
` (44 preceding siblings ...)
2026-04-29 21:40 ` [RFC PATCH 00/44] Allow intitializing EAL without argc/argv Stephen Hemminger
@ 2026-04-29 22:04 ` Stephen Hemminger
2026-04-30 8:00 ` Bruce Richardson
45 siblings, 1 reply; 50+ messages in thread
From: Stephen Hemminger @ 2026-04-29 22:04 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, techboard
On Wed, 29 Apr 2026 17:57:52 +0100
Bruce Richardson <bruce.richardson@intel.com> wrote:
> The ultimate end goal of this RFC is, as stated in the title, to move
> away from argc/argv as the sole means of configuring EAL on init. This
> set therefore looks to:
> * rework EAL so that we have a generic EAL init function taking a
> struct-based configuration
> * provide a rough *example* of how that might be used to create a new
> set of C APIs to be used by apps to initialize EAL without having to
> create dummy argc/argv pairs. [A side benefit of this is that it
> makes it a lot easier to initialize EAL from other languages like Rust
> or Python too, because arrays of C strings are not the most
> user-friendly for a foreign interface]
>
> Therefore this set can be considered in 3 parts:
>
> Part 1: Struct rework.
>
> Largely for legacy reasons, we have a number of different structs and
> arrays holding eal configuration, without having clear guides as to why
> certain fields are stored where. The first ~30 patches rework the
> existing structs - rte_config, internal_config, lcore_config etc.
> into three defined-purpose structs:
>
> - eal_platform_info - contains the raw HW info for the system, details
> of CPUs and hugepage mounts. This is initialized on first use - even
> before EAL init is called - and is then immutable, since our HW should
> not change much underneath us. Its early availability means that it
> can be used to sanity check the contents of the other structs as they
> are being built up.
>
> - eal_user_cfg - contains the config settings passed in by the user. For
> existing rte_eal_init, this is built up in the arg parse stage, and
> its contents are verified against the platform info, e.g. to check core
> masks are valid etc. Once argument parsing is completed, it is also
> immutable.
>
> - runtime_cfg - basically all the runtime settings that need to be there
> for DPDK to run, or which change over time. Largely combined content
> of the old rte_config, internal_config and lcore_config structs. This
> is initialized from the other two structs by eal initialization and
> can be modified by EAL at any time.
>
> Part 2: Cleanup and Split EAL init
>
> With the 3 structs clearly separated by purpose, we can then do some
> cleanup of the code, before - in patches 35 & 36 - splitting EAL into
> two and providing an *internal* struct-based alternative API for
> initializing DPDK. The old rte_eal_init function still exists, just the
> contents of it after the argument parsing are put into a new, static
> eal_runtime_init() function, which takes the argparse output (user_cfg
> struct). Patch 36 adds a thin internal-API wrapper around this static,
> which is necessary to take care of things like the run-once flag. [This
> wrapper should never be the public API, since the struct will change and
> therefore be an ABI break. It's designed to be used by other wrapper
> libs which provide a stable ABI for config]
>
> Part 3: Prototype of an eal_cfg library
>
> Once we have the internal C API to init eal using a struct
> rte_eal_user_cfg, we can create new libraries which provide alternate
> ways to build up the user_cfg and initialize DPDK. Patches 37-44 have a
> rough example of such a library.
>
> - The lib allows a user to create an opaque rte_eal_user_cfg struct,
> which can then be modified by APIs to get/set various parameters
> before calling rte_cfg_eal_init().
> - An alternative way to do things (not prototyped), may be to have a
> library that creates an eal_user_cfg struct based on the contents of
> an ini file using the configfile library.
> [Both these options could be used in parallel. Note too that both have
> no ABI implications for adding new flags, or making old ones no-ops!]
>
> Bruce Richardson (44):
> eal: define new functionally distinct config structs
> eal: move memory request fields to user config
> eal: move NUMA request fields to user config
> eal: move hugepage policy fields to user config
> eal: move process policy fields to user config
> eal: move advanced user config options to user cfg struct
> eal: move hugepage size info to platform info struct
> telemetry: make cpuset init parameter const
> eal: move runtime state to appropriate structure
> eal: record details of all cpus in platform info
> eal: use platform info for lcore lookups
> eal: add RTE_CPU_FFS macro
> eal: store lcore configuration in runtime data
> eal: cleanup CPU init function
> eal: move numa node information to platform info struct
> eal: move lcore role and count to runtime state
> eal: make lcore role a field in lcore config struct
> eal: move main lcore setting to runtime config struct
> eal: move iova mode and process type to runtime cfg
> eal: move memory config pointer to runtime state struct
> eal: remove rte_config structure
> eal: separate runtime state update from arg parsing
> eal: move devopt_list staging list into user_cfg
> eal: separate plugin paths from loaded plugin objects
> eal: simplify internal driver path iteration APIs
> eal: move trace config into user config struct
> eal: record service cores in user config struct
> eal: store user-provided lcore info in user config struct
> eal: clarify docs on params taking lcore IDs
> eal: remove internal config reset function
> eal: move functions setting runtime state
> eal: initialize platform info on first use
> eal: remove duplicated scan of sysfs for hugepage details
> eal: add utilities for working with user config struct
> eal: split EAL init into two stages
> eal: provide hooks for init with externally supplied config
> eal_cfg: add new library to programmatically init DPDK
> eal_cfg: configure defaults for easier testing and use
> app/test: enable testing init using EAL config lib
> eal_cfg: add basic setters and getters
> eal_cfg: add hugepage memory configuration
> eal_cfg: support configuring lcores
> eal_cfg: support device and driver lists
> eal_cfg: add APIs for configuring remaining init settings
>
> app/test/meson.build | 1 +
> app/test/process.h | 4 +-
> app/test/test.c | 14 +-
> app/test/test.h | 1 +
> app/test/test_eal_cfg.c | 1323 +++++++++++++++++++++
> doc/api/doxy-api-index.md | 1 +
> doc/api/doxy-api.conf.in | 1 +
> doc/guides/linux_gsg/eal_args.include.rst | 38 +-
> doc/guides/prog_guide/eal_cfg_lib.rst | 23 +
> doc/guides/prog_guide/index.rst | 1 +
> lib/eal/common/eal_common_bus.c | 4 +-
> lib/eal/common/eal_common_config.c | 221 +++-
> lib/eal/common/eal_common_dynmem.c | 66 +-
> lib/eal/common/eal_common_fbarray.c | 10 +-
> lib/eal/common/eal_common_launch.c | 25 +-
> lib/eal/common/eal_common_lcore.c | 230 ++--
> lib/eal/common/eal_common_mcfg.c | 44 +-
> lib/eal/common/eal_common_memalloc.c | 5 +-
> lib/eal/common/eal_common_memory.c | 104 +-
> lib/eal/common/eal_common_memzone.c | 24 +-
> lib/eal/common/eal_common_options.c | 823 +++++--------
> lib/eal/common/eal_common_proc.c | 43 +-
> lib/eal/common/eal_common_tailqs.c | 6 +-
> lib/eal/common/eal_common_thread.c | 81 +-
> lib/eal/common/eal_common_timer.c | 2 +-
> lib/eal/common/eal_common_trace.c | 30 +-
> lib/eal/common/eal_common_trace_utils.c | 104 --
> lib/eal/common/eal_hugepages.h | 8 +
> lib/eal/common/eal_internal_cfg.h | 376 +++++-
> lib/eal/common/eal_memcfg.h | 3 +
> lib/eal/common/eal_option_list.h | 6 +-
> lib/eal/common/eal_options.h | 7 +-
> lib/eal/common/eal_private.h | 108 +-
> lib/eal/common/eal_trace.h | 11 -
> lib/eal/common/malloc_elem.c | 15 +-
> lib/eal/common/malloc_heap.c | 41 +-
> lib/eal/common/malloc_mp.c | 2 +-
> lib/eal/common/rte_malloc.c | 14 +-
> lib/eal/common/rte_service.c | 17 +-
> lib/eal/freebsd/eal.c | 271 +++--
> lib/eal/freebsd/eal_hugepage_info.c | 71 +-
> lib/eal/freebsd/eal_lcore.c | 16 +-
> lib/eal/freebsd/eal_memory.c | 46 +-
> lib/eal/freebsd/include/rte_os.h | 2 +
> lib/eal/include/rte_eal.h | 35 +-
> lib/eal/include/rte_memzone.h | 10 +-
> lib/eal/include/rte_tailq.h | 2 +-
> lib/eal/linux/eal.c | 280 +++--
> lib/eal/linux/eal_hugepage_info.c | 219 ++--
> lib/eal/linux/eal_lcore.c | 13 +
> lib/eal/linux/eal_memalloc.c | 168 ++-
> lib/eal/linux/eal_memory.c | 153 ++-
> lib/eal/linux/eal_timer_hpet.c | 21 +-
> lib/eal/linux/eal_vfio.c | 11 +-
> lib/eal/linux/include/rte_os.h | 10 +
> lib/eal/unix/eal_unix_thread.c | 11 +-
> lib/eal/windows/eal.c | 177 ++-
> lib/eal/windows/eal_hugepages.c | 60 +-
> lib/eal/windows/eal_lcore.c | 6 +
> lib/eal/windows/eal_memalloc.c | 37 +-
> lib/eal/windows/eal_memory.c | 14 +-
> lib/eal/windows/eal_thread.c | 11 +-
> lib/eal/windows/eal_windows.h | 8 -
> lib/eal/windows/include/rte_os.h | 1 +
> lib/eal/windows/include/sched.h | 10 +
> lib/eal_cfg/eal_cfg.c | 918 ++++++++++++++
> lib/eal_cfg/meson.build | 6 +
> lib/eal_cfg/rte_eal_cfg.h | 903 ++++++++++++++
> lib/meson.build | 1 +
> lib/telemetry/telemetry.c | 4 +-
> lib/telemetry/telemetry_internal.h | 2 +-
> 71 files changed, 5450 insertions(+), 1884 deletions(-)
> create mode 100644 app/test/test_eal_cfg.c
> create mode 100644 doc/guides/prog_guide/eal_cfg_lib.rst
> create mode 100644 lib/eal_cfg/eal_cfg.c
> create mode 100644 lib/eal_cfg/meson.build
> create mode 100644 lib/eal_cfg/rte_eal_cfg.h
>
> --
> 2.51.0
>
Lots of AI feedback below.
I would also add that use of atoi() is on the naughty list and any new
code using it is going to get bad marks.
Review of RFC PATCH 0/44: EAL init args series
=================================================
This is a substantial and well-motivated rework. Splitting internal_config
into eal_user_cfg / eal_platform_info / eal_runtime_state is the right shape,
and the new eal_cfg library is a clean way to expose programmatic init.
Below are the issues found across the series, organised by patch.
Patch 02/44 (eal: move memory request fields to user config)
------------------------------------------------------------
Error: eal_parse_args() reads the memory size with atoi() and then promotes
via *= 1024ULL into user_cfg->memory (a size_t). atoi() returns int and has
no error reporting; values that don't fit in int, are negative, or are
non-numeric are silently turned into garbage and then multiplied. This is a
pre-existing pattern, but you are touching the exact line - please switch
to strtoull() with errno check while you are here, mirroring the validation
you already do for nchannel/nrank in this same hunk.
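For concreteness, one possible shape for that replacement - the function and
constant names below are made up for illustration, not the actual EAL code:

```c
#include <ctype.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative replacement for the atoi() call when parsing "-m <MB>";
 * parse_mem_size_mb is a made-up name, not the real EAL function. */
static int
parse_mem_size_mb(const char *arg, size_t *out_bytes)
{
	char *end = NULL;
	unsigned long long mb;

	if (!isdigit((unsigned char)arg[0]))
		return -1;              /* rejects empty and negative input */
	errno = 0;
	mb = strtoull(arg, &end, 10);
	if (errno != 0 || end == arg || *end != '\0')
		return -1;              /* non-numeric or out of range */
	if (mb > SIZE_MAX / (1024ULL * 1024ULL))
		return -1;              /* would overflow size_t after scaling */
	*out_bytes = (size_t)mb * 1024 * 1024;
	return 0;
}
```

The overflow check before the multiply is the part atoi() can never give you.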
Warning: in the new nrank validation, "nrank > 16 || nrank > UINT8_MAX" -
the second clause is dead because 16 < UINT8_MAX. Drop the > UINT8_MAX
check (the > 16 covers it).
Info: throughout the eal_memory.c hunks you alternate between caching
"const struct eal_user_cfg *user_cfg = eal_get_user_configuration();" and
inlining "eal_get_user_configuration()->memory" repeatedly in the same
function. Pick one style; the inlined repeated calls also add a function
call per access.
Patch 05/44 (eal: move process policy fields to user config)
------------------------------------------------------------
Info: several "int_cfg->X = true;" assignments where X is still
"volatile unsigned" in internal_config (e.g. legacy_mem,
single_file_segments, match_allocations). Compiles, but mixes bool and
unsigned - convert these fields in this same patch or stay with 1/0 until
they move.
Patch 09/44 (eal: move runtime state to appropriate structure)
--------------------------------------------------------------
Warning: eal_reset_internal_config() loses its "struct internal_config *"
parameter, so existing out-of-tree callers break silently (no compile
error, since the function exists). Worth a release-note line.
Patch 11/44 (eal: use platform info for lcore lookups)
------------------------------------------------------
Error: behavioural regression that is not called out in the commit
message. The old eal_cpuset_socket_id() returned SOCKET_ID_ANY when a
cpuset spanned multiple NUMA nodes; the new thread_update_affinity() for
non-EAL threads, and the new rte_lcore_to_socket_id(), both pick the NUMA
id of the first CPU in the set. After this patch, rte_socket_id() for a
control thread pinned to CPUs on two sockets will silently return one
socket id instead of SOCKET_ID_ANY. Drivers that rely on SOCKET_ID_ANY to
pick a per-call NUMA node will be affected. Either preserve the prior
semantics or call out the change.
Patch 12/44 (eal: add RTE_CPU_FFS macro)
----------------------------------------
Warning: _cpu_ffs() in lib/eal/linux/include/rte_os.h uses a leading
underscore, which is reserved at file scope by C11 7.1.3. Use a different
name (e.g. rte_cpu_ffs_impl).
Warning: cpu_ffs() in lib/eal/windows/include/sched.h has no rte_ or RTE_
prefix and is exposed via a header that is included broadly on Windows
builds. Likely to clash with anything else that defines cpu_ffs. Rename
or static inline with a prefix.
Patch 14/44 (eal: cleanup CPU init function)
--------------------------------------------
Warning: lcore_to_socket_id is calloc()-allocated but only filled for CPUs
where eal_cpu_detected(cpu_id) != 0. Skipped slots remain 0 from calloc,
so the subsequent qsort + dedup loop counts socket 0 even on systems where
no detected CPU is on socket 0. This was also true of the prior stack
array (= {0}), so no regression - but it is worth fixing while the code is
being rewritten: only push detected sockets into the array, then qsort
and dedup only the populated prefix.
Info: the commit message says the function should "not make any changes
to runtime_state but only the platform_info", but it still writes
config->numa_node_count and config->numa_nodes (rte_config). That
ownership move happens in patch 15; consider tightening the commit
message, or merge the two patches.
Patch 15/44 (eal: move numa node information to platform info)
--------------------------------------------------------------
Error: realloc(p, 0) is implementation-defined in C17 and explicitly
undefined in C23. The code:
uint32_t *tmp = realloc(platform_info->numa_nodes,
numa_node_count * sizeof(*platform_info->numa_nodes));
if (tmp != NULL)
platform_info->numa_nodes = tmp;
If numa_node_count is 0 (a system with cpu_count > 0 but no detected
CPUs - degenerate but reachable on some virtualised setups), some libcs
free the buffer and return NULL, leaving platform_info->numa_nodes
pointing at freed memory - a use-after-free on the next access. Add an
"if (numa_node_count == 0)" early-out, or skip the realloc-shrink
entirely when 0.
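A sketch of that early-out, with the pointer/count pair standing in for the
platform_info->numa_nodes fields from the patch:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative shrink helper; not the actual patch code. */
static void
shrink_numa_array(uint32_t **nodes, size_t new_count)
{
	if (new_count == 0) {
		/* realloc(p, 0) is implementation-defined (UB in C23):
		 * free explicitly so no dangling pointer survives */
		free(*nodes);
		*nodes = NULL;
		return;
	}
	uint32_t *tmp = realloc(*nodes, new_count * sizeof(**nodes));
	if (tmp != NULL)
		*nodes = tmp;   /* on failure keep the old, larger buffer */
}
```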
Warning: malloc(platform_info->cpu_count * sizeof(...)) with no overflow
check. cpu_count comes from sysconf / sysctl and is bounded in practice,
but a single safer expression - e.g. calloc(cpu_count, sizeof(*numa_nodes))
- is just as cheap.
Patch 18/44 (eal: move main lcore setting to runtime config)
------------------------------------------------------------
Warning: in eal_parse_main_lcore():
user_cfg->main_lcore = (uint32_t) strtol(arg, &parsing_end, 0);
user_cfg->main_lcore is declared int (added in this patch). The (uint32_t)
cast then store into int is dead in the success path and confusing - looks
like a leftover from when the field was uint32_t. Either drop the cast,
or use strtoul() and remove the sentinel/range games. The downstream
sentinel check "user_cfg->main_lcore != -1" is fragile because nothing in
this function's positive path is prevented from producing -1 (e.g. user
passing 4294967295); the subsequent ">= RTE_MAX_LCORE" check happens to
catch it via unsigned promotion, but the chain is brittle.
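One way the strtoul() variant could look - MAX_LCORE here stands in for the
build-configured RTE_MAX_LCORE, and the function name is illustrative:

```c
#include <errno.h>
#include <stdlib.h>

#define MAX_LCORE 128   /* stands in for RTE_MAX_LCORE (build-config) */

/* Sketch: one explicit range check, no cast/sentinel games. */
static int
parse_main_lcore(const char *arg, int *out)
{
	char *end = NULL;
	unsigned long v;

	errno = 0;
	v = strtoul(arg, &end, 0);
	if (errno != 0 || end == arg || *end != '\0' || v >= MAX_LCORE)
		return -1;      /* catches junk, 4294967295, "-1", etc. */
	*out = (int)v;          /* narrowing is now provably safe */
	return 0;
}
```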
Patch 22/44 (eal: separate runtime state update from arg parsing)
-----------------------------------------------------------------
Error: this patch moves the per-NUMA -> total-memory summing out of
eal_apply_runtime_state() and into the body of eal_parse_args():
/* sum per-NUMA memory requests into user_cfg->memory */
for (int i = 0; i < RTE_MAX_NUMA_NODES; i++)
user_cfg->memory += user_cfg->numa_mem[i];
In patch 31, eal_apply_runtime_state() is no longer called from
eal_parse_args() - it is invoked explicitly from rte_eal_init(). After
patch 36, the second init path rte_eal_runtime_init() (used by
rte_eal_init_from_cfg()) calls eal_apply_runtime_state() but never goes
through eal_parse_args(), so the summing loop never runs. A user who
configures numa_mem[] via rte_eal_cfg_set_numa_mem() ends up with
user_cfg->memory == 0 going into eal_dynmem_calc_num_pages_per_socket()
and the linux/eal.c "no-huge fallback to 64MB" check, which is wrong.
Move the summing into eal_apply_runtime_state() (or into a small helper
that both paths call) so the eal_cfg path produces the same
user_cfg->memory as the CLI path.
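A minimal sketch of such a shared helper - the struct mirrors only the two
fields involved, not the real eal_user_cfg layout, and the "explicit total
wins" guard is an assumption about how the two options should interact:

```c
#include <stddef.h>

#define MAX_NUMA_NODES 32   /* stands in for RTE_MAX_NUMA_NODES */

struct mem_cfg {
	size_t memory;                      /* total, in bytes */
	size_t numa_mem[MAX_NUMA_NODES];    /* per-node requests */
};

/* Hypothetical helper both rte_eal_init() and rte_eal_init_from_cfg()
 * could call from eal_apply_runtime_state(). */
static void
eal_sum_numa_mem(struct mem_cfg *cfg)
{
	if (cfg->memory != 0)
		return;         /* an explicit -m style total takes priority */
	for (int i = 0; i < MAX_NUMA_NODES; i++)
		cfg->memory += cfg->numa_mem[i];
}
```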
Patch 32/44 (eal: initialize platform info on first use)
--------------------------------------------------------
Error: resource leak / double-allocation on retry in the lazy initializer:
if (rte_eal_cpu_init(&eal_platform_info) < 0) { ...; return NULL; }
if (eal_get_platform_hp_info(&eal_platform_info) < 0) { ...; return NULL; }
Both functions assign to platform_info->cpu_info, numa_nodes, etc. with
calloc/malloc and never free on the failure path. If
eal_get_platform_hp_info() fails, cpu_info and numa_nodes are leaked,
"initialized" is left false, and a subsequent call retries
rte_eal_cpu_init() which overwrites the leaked pointers. Either set
"initialized = true" only once everything succeeds and add explicit
cleanup of the partially-built struct on failure, or refuse to retry
after the first failure.
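The "mark initialized only on full success, free partial state on failure"
shape can be modelled like this - names, struct layout and the injected-failure
stubs are all illustrative, not the series' code:

```c
#include <stdbool.h>
#include <stdlib.h>

struct pi_sketch {
	int *cpu_info;
	int *numa_nodes;
	bool initialized;
};

static int fail_hp;     /* test hook: force the second stage to fail */

static int
cpu_init_stub(struct pi_sketch *pi)
{
	pi->cpu_info = calloc(4, sizeof(*pi->cpu_info));
	return pi->cpu_info != NULL ? 0 : -1;
}

static int
hp_init_stub(struct pi_sketch *pi)
{
	if (fail_hp)
		return -1;
	pi->numa_nodes = calloc(2, sizeof(*pi->numa_nodes));
	return pi->numa_nodes != NULL ? 0 : -1;
}

static void
pi_free(struct pi_sketch *pi)
{
	free(pi->cpu_info);
	free(pi->numa_nodes);
	pi->cpu_info = NULL;
	pi->numa_nodes = NULL;
}

static int
pi_lazy_init(struct pi_sketch *pi, int (*cpu_init)(struct pi_sketch *),
		int (*hp_init)(struct pi_sketch *))
{
	if (pi->initialized)
		return 0;
	if (cpu_init(pi) < 0 || hp_init(pi) < 0) {
		pi_free(pi);            /* no leak; a retry starts clean */
		return -1;
	}
	pi->initialized = true;         /* only after everything succeeded */
	return 0;
}
```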
Warning: eal_get_platform_hp_info() (Linux) writes hps->max_pages[socket]
where socket comes from platform_info->numa_nodes[i] (a discovered NUMA
node id) and max_pages[] is sized RTE_MAX_NUMA_NODES. On systems where
the kernel reports a NUMA node id >= RTE_MAX_NUMA_NODES (sparse NUMA
topologies on large boxes), this is an out-of-bounds write. Bound-check
before indexing.
Warning: eal_get_platform_hp_info() calls rte_str_to_size(...) and stores
the result as the page size without checking for 0 (rte_str_to_size
returns 0 on failure). A malformed "hugepages-XYZ" directory name would
silently install a phantom page-size-0 entry and make later loops divide
by zero or count infinite pages.
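A sketch of validating the directory name before installing a page size - the
strtoull parse here stands in for the rte_str_to_size() call site and assumes
the Linux "hugepages-<N>kB" naming:

```c
#include <stdlib.h>
#include <string.h>

/* Returns the page size in bytes, or 0 for a malformed name that the
 * caller must skip instead of recording. Illustrative only. */
static unsigned long long
parse_hugepage_dir(const char *dirname)
{
	static const char prefix[] = "hugepages-";
	char *end = NULL;
	unsigned long long kb;

	if (strncmp(dirname, prefix, sizeof(prefix) - 1) != 0)
		return 0;
	kb = strtoull(dirname + sizeof(prefix) - 1, &end, 10);
	if (strcmp(end, "kB") != 0)
		return 0;               /* bad suffix or no digits */
	return kb * 1024;
}
```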
Warning: eal_get_platform_info() is renamed to rte_eal_get_platform_info()
and exported as __rte_internal in patch 36, but in patch 32 itself the
function is still called eal_get_platform_info() from many call sites
that get switched again in patch 36. Two renames of the same set of call
sites in adjacent patches makes bisecting noisy - consider squashing patch
36's rename into 32, or doing the rename in its own no-op patch ahead of
both.
Warning: in eal_hugepage_info_read() (Linux), the new tail-counting loop:
for (unsigned int i = 0; i < MAX_HUGEPAGE_SIZES; i++) {
if (runtime_state->hugepage_info[i].hugepage_sz == 0)
break;
runtime_state->num_hugepage_sizes = i + 1;
}
assumes the primary process always writes a contiguous prefix of valid
entries followed by zeros. The shared-memory layout is sizeof
hugepage_info[MAX_HUGEPAGE_SIZES] and the primary writes via memcpy after
qsort - so any unused tail entries do come through as zero, but only
because runtime_state is statically zero-initialised on the primary and
qsort sorts the populated prefix. A defensive primary-side memset of
unused tail entries would make this contract explicit instead of implicit.
Info: in the lazy init, the "unlikely(!atomic_load_acquire())" outer check
followed by spinlock_lock then re-check with relaxed is the standard
double-checked init pattern, but the inner re-check should use acquire
for symmetry - a relaxed load inside the lock is OK because the unlock
pairs with the next acquire, but readability suffers.
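For reference, the symmetric form of the pattern looks like this - a
self-contained model with a C11 atomic_flag spinlock, where init_count stands
in for the real one-time work:

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_flag init_lock = ATOMIC_FLAG_INIT;
static atomic_bool init_done;
static int init_count;

static void
lazy_init(void)
{
	/* fast path: pairs with the release store below */
	if (atomic_load_explicit(&init_done, memory_order_acquire))
		return;
	while (atomic_flag_test_and_set_explicit(&init_lock,
			memory_order_acquire))
		;       /* spin */
	/* re-check under the lock; acquire keeps the pairing obvious even
	 * though relaxed would be correct here thanks to the lock */
	if (!atomic_load_explicit(&init_done, memory_order_acquire)) {
		init_count++;           /* the one-time work */
		atomic_store_explicit(&init_done, true, memory_order_release);
	}
	atomic_flag_clear_explicit(&init_lock, memory_order_release);
}
```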
Patch 33/44 (eal: remove duplicated scan of sysfs for hugepage)
---------------------------------------------------------------
Info: this patch is the natural place to delete get_hugepage_dir() in
eal_hugepage_info.c if it is now only referenced from
eal_get_platform_hp_info(). If it still has another caller, fine - but
worth a grep and a note.
Patch 34/44 (eal: add utilities for working with user config)
-------------------------------------------------------------
Warning: eal_user_cfg_copy() does "*dst = *src;" then "TAILQ_INIT(&dst->...)".
Between those two statements, dst's list head pointers transiently alias
src's list head storage. Any signal handler or concurrent reader (none
today, but the global user_cfg is reachable via eal_get_user_configuration())
would observe a corrupt list. The safer order is: zero / TAILQ_INIT all
the list heads and pointer fields first, then copy the scalars
field-by-field, then deep-copy the lists. This also documents intent
better than relying on the shallow copy followed by surgery.
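A toy model of that ordering - struct names and fields are illustrative, only
the init-heads-first / scalars / deep-copy sequence is the point:

```c
#include <stdlib.h>
#include <sys/queue.h>

struct opt { TAILQ_ENTRY(opt) next; int val; };
TAILQ_HEAD(opt_list, opt);

struct cfg_sketch {
	int scalar_a, scalar_b;
	struct opt_list opts;
};

static int
cfg_copy(struct cfg_sketch *dst, const struct cfg_sketch *src)
{
	TAILQ_INIT(&dst->opts);         /* never alias src's head storage */
	dst->scalar_a = src->scalar_a;  /* scalars copied field by field */
	dst->scalar_b = src->scalar_b;
	struct opt *o;
	TAILQ_FOREACH(o, &src->opts, next) {    /* deep-copy the entries */
		struct opt *n = malloc(sizeof(*n));
		if (n == NULL)
			return -1;
		n->val = o->val;
		TAILQ_INSERT_TAIL(&dst->opts, n, next);
	}
	return 0;
}
```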
Warning: <malloc.h> is non-portable - glibc and Windows ship it; FreeBSD
does not provide it on the include path used by DPDK builds without
_BSD_SOURCE gymnastics. The existing pattern (see eal_common_lcore_var.c)
is "#ifdef RTE_EXEC_ENV_WINDOWS #include <malloc.h> #endif". Do the same
here, since <stdlib.h> is sufficient for free()/malloc() on POSIX.
Warning: eal_user_cfg_copy() is a static inline in a header that pulls in
<malloc.h>, <sys/queue.h>, rte_devargs.h, rte_trace.h, rte_vect.h,
rte_compat.h, rte_stdatomic.h, eal_thread.h. It is also large (~80 LoC of
deep-copy logic). For a function called once per init path, static inline
in a private header buys nothing and inflates compile times for every
translation unit that includes eal_internal_cfg.h. Move the body to a .c
file.
Patch 35/44 (eal: split EAL init into two stages)
-------------------------------------------------
Info: subject has a stray double space after the colon
("[RFC PATCH 35/44] eal: ..."). Cosmetic.
Patch 36/44 (eal: provide hooks for init with externally-supplied config)
-------------------------------------------------------------------------
Warning: rte_eal_runtime_init() calls rte_eal_get_platform_info() to "make
sure platform info is available" before the init_has_run CAS. If two
threads call rte_eal_init_from_cfg() concurrently, both will trigger the
platform-info lazy init (which is internally locked, so safe), but only
one will pass the CAS. The other will leave with EALREADY, which is fine
- but you should document that rte_eal_init_from_cfg() is not callable
concurrently.
Warning: the eal_log_init(progname) call happens after the init_has_run
CAS in rte_eal_runtime_init() but before it in rte_eal_init() (where it
is implicit via the existing flow). On the EALREADY path the new function
logs nothing about who is racing, while the existing rte_eal_init() does.
Minor inconsistency.
Patch 37/44 (eal_cfg: add new library to programmatically init)
---------------------------------------------------------------
Warning: lib/eal_cfg/meson.build is silent about its dependency on eal.
Since eal_cfg calls rte_eal_runtime_init, eal_get_user_configuration,
etc., and uses eal_internal_cfg.h, the meson file should list
"deps += ['eal']" explicitly rather than relying on the library order in
lib/meson.build. The current file is just "sources = files(...)" plus an
"includes +=" line, which works only because eal happens to be earlier
in the list.
Warning: rte_eal_init_from_cfg() falls back to a stack-allocated local_cfg
when cfg == NULL. Because EAL_USER_CFG_INITIALIZER uses
&local_cfg.user_cfg.devopt_list etc., the TAILQ heads point into the
stack, which is correct - but eal_user_cfg_copy() then shallow-copies
"*dst = *src", which means dst's list heads briefly contain the address
of the source (stack) struct's first/last pointers before
TAILQ_INIT(&dst->...) overwrites them. That window is fine because the
lists are empty, but if you later add a non-empty default in
EAL_USER_CFG_INITIALIZER this becomes a bug. Worth a short comment in
eal_user_cfg_copy() calling out the dependency on empty default lists.
Patch 39/44 (app/test: enable testing init using EAL config lib)
----------------------------------------------------------------
Warning: the new do_recursive_call() reads RECURSIVE_ENV_VAR via getenv()
early in main(), then again later. Two getenv() calls on the same env var
in the same process are fine functionally, but the second read of
recursive_call was previously a file-scope static - consider passing the
value through instead, since you already changed the function signature.
Warning: the pre-EAL-init dispatch in main():
if (recursive_call != NULL && strcmp(recursive_call, "test_eal_cfg_init") == 0)
return test_eal_cfg_init();
bypasses all of the existing test setup (DPDK_TEST_PARAMS, signal
handlers, log redirection). That is intentional, but the test then calls
process_dup() from inside the spawned child to fan out subtests, and
process_dup() re-execs the binary with argv[0] only. That will not
propagate DPDK_TEST_PARAMS etc. Worth a comment that this dispatch path
is deliberately bare.
Patch 40/44 (eal_cfg: add basic setters and getters)
----------------------------------------------------
Error: rte_eal_cfg_set_max_simd_bitwidth() validates
"bitwidth >= RTE_VECT_SIMD_DISABLED && rte_is_power_of_2(bitwidth)" but
does not enforce "bitwidth <= RTE_VECT_SIMD_MAX" (= INT16_MAX + 1 =
32768). A caller passing 16384 or 32768 is accepted, but only 32768 is
actually a recognised value; 16384 is a valid power of two but never
produced by the parser and may not be handled by SIMD-bitwidth consumers.
Either mirror the parser's behaviour (only accept the named enum values
plus DISABLED) or document the accepted range.
Warning: the field type mismatch from patch 18 propagates here.
EAL_CFG_GETTER(int, main_lcore, -1) returns int correctly, and the setter
range-checks [0, RTE_MAX_LCORE) plus the -1 sentinel - and rejects any
other negative value through "val != -1 && (val < 0 || ...)", returning
ERANGE. Good. However the comparison in eal_apply_runtime_state()
(patch 31) is still "if (user_cfg->main_lcore != -1)". If you ever change
the sentinel, you have two places to update.
Warning: EAL_CFG_BOOL macro defines rte_eal_cfg_get_<name>() with
"if (cfg == NULL) return false;" on a single line. Some style checkers
complain; minor.
Warning: find_absent_numa_node() in the test scans for an unused NUMA id
by walking [0, RTE_MAX_NUMA_NODES) and returning the first id not in
pi->numa_nodes[]. On systems where NUMA ids are sparse (e.g. ids 0, 1,
8, 9), this returns 2, which the platform-info-driven validator may also
reject as "absent" - good for the test. But on a system that genuinely
has ids 0..RTE_MAX_NUMA_NODES-1, the test prints "Skipping ENODEV test"
- which is correct, but worth a printf indicating which platform-detected
ids are present so the skip is debuggable.
Info: the EAL_CFG_GETTER and EAL_CFG_BOOL macros are clever but make
RTE_EXPORT_EXPERIMENTAL_SYMBOL lines noisy and harder to grep for. A
short comment block at the macro definition explaining the
symbol-extraction requirements (no spaces, no statement before the
export) would help future maintainers.
Patch 41/44 (eal_cfg: add hugepage memory configuration)
--------------------------------------------------------
Error: rte_eal_cfg_set_in_memory(cfg, true) sets no_shconf = true and
hugepage_file.unlink_before_mapping = true, but the test
(test_eal_cfg_in_memory) explicitly notes "disabling it must not reverse
those" - and the implementation respects that by only setting those
fields when val == true. Good. However rte_eal_cfg_get_huge_unlink() will
then permanently report RTE_EAL_HUGE_UNLINK_ALWAYS even after the user
calls set_in_memory(cfg, false). The user has no way to undo "in_memory
implied huge_unlink=always" without explicitly calling
set_huge_unlink(EXISTING). This is asymmetric and surprising. Either
document it loudly in the in_memory setter doc comment, or auto-restore
the prior huge_unlink value when in_memory is cleared.
Warning: hugedir_is_hugetlbfs() uses statfs() only on Linux; on
FreeBSD/Windows it returns true permissively. The test
(test_eal_cfg_hugepage_dir) negative case (passing "/") will then fail on
FreeBSD since EAL_CFG won't reject it. Skip the negative test there too,
or implement statfs for FreeBSD.
Warning: rte_eal_cfg_set_hugefile_prefix() rejects '%' to match the
parser, but allows other path-affecting characters like '/'. Worth either
a clear doc note ("the prefix is used as a filename component, not a
path") or a stricter validator.
Warning: rte_eal_cfg_set_huge_unlink(RTE_EAL_HUGE_UNLINK_NEVER) sets
unlink_existing = false and unlink_before_mapping = false. But the
default EAL_USER_CFG_INITIALIZER sets unlink_existing = true. So the only
way to reach NEVER is via the new setter - fine. However, the inverse
mapping in get_huge_unlink() returns EXISTING when both flags are at
their default (existing=true, before_mapping=false), and EXISTING
corresponds to "remove stale at startup, keep new". A user who reads
get_huge_unlink() on a fresh handle gets EXISTING, which matches the CLI
default, but the doc comment for RTE_EAL_HUGE_UNLINK_EXISTING and the
EAL_USER_CFG_INITIALIZER should cross-reference each other so this
contract is obvious.
Patch 42/44 (eal_cfg: support configuring lcores)
-------------------------------------------------
Error: subtest_eal_cfg_init_lcore_affinity picks two random CPUs with
"srand((unsigned int)time(NULL)); idx = rand() % ncpus;". Using
time()-seeded rand() in a unit test makes the test flaky-by-design when
CI runs many tests within the same second, and it produces
non-reproducible failures. Pin to deterministic indices (e.g. first two
detected CPUs).
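The deterministic selection is a few lines - here "detected" models the
pi->cpu_info[i].detected flags from the series:

```c
#include <stdbool.h>

/* Pick the first two detected CPUs; return -1 so the test can skip
 * cleanly on machines with fewer than two. Illustrative only. */
static int
pick_two_cpus(const bool *detected, int ncpu, int out[2])
{
	int found = 0;

	for (int i = 0; i < ncpu && found < 2; i++)
		if (detected[i])
			out[found++] = i;
	return found == 2 ? 0 : -1;
}
```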
Warning: rte_eal_cfg_set_lcore() allocates a new rte_cpuset_t only when
the slot is currently NULL, then memcpy's the input - so on replace=true
for an already-populated slot, it correctly reuses the buffer. But on
replace=false returning EEXIST, the function leaves
lcore_cpusets[lcore_id] untouched, which is the right behaviour but worth
an explicit comment because the early-return is doing double duty.
Warning: rte_eal_cfg_set_service_lcore() requires the lcore to already be
configured via set_lcore() and returns ENOENT otherwise. That ordering
requirement is documented in the doc comment, but the test
(test_eal_cfg_service_lcore) doesn't appear to cover the
"set_service_lcore before set_lcore" failure case in this patch - only
via add_lcores_from_affinity paths. Add the negative test.
Warning: subtest_eal_cfg_init_lcore_affinity calls rte_eal_cleanup() at
the end and returns TEST_SUCCESS. But this is running inside a subprocess
spawned by process_dup(). If rte_eal_cleanup() ever exits non-zero or
aborts (e.g. on shared-memory teardown failure), the subprocess returns
non-zero and the parent reports the test as failed. Capture the cleanup
return code and fail explicitly rather than implicitly.
Warning: subtest_eal_cfg_init_lcore_affinity reads pi->cpu_info[i].detected
on a pi that may have only been initialised by the lazy
rte_eal_get_platform_info() triggered inside the test - but the test
calls rte_eal_get_platform_info() before rte_eal_init_from_cfg(). That
works because the platform-info init is now decoupled from EAL init
(patch 32), but it makes the test order-sensitive in a non-obvious way.
A one-line comment explaining that rte_eal_get_platform_info() is
callable pre-init would help future readers.
Info: rte_eal_cfg_is_service_lcore() returns false for NULL cfg - but a
caller cannot distinguish "you passed NULL" from "lcore is not a service
core". Returning false is fine here, but mention it in the doc.
Patch 43/44 (eal_cfg: support device and driver lists)
------------------------------------------------------
Warning: cfg_add_devopt() does TAILQ_INSERT_TAIL on the user-supplied
cfg's list. That list is a struct eal_devopt_list, but its head was
initialised either by EAL_USER_CFG_INITIALIZER (when created via
rte_eal_cfg_create()) or by zero-fill (if the caller embedded one). The
latter case will crash on insert. The function should either require
EAL_USER_CFG_INITIALIZER-initialised handles only (which
rte_eal_cfg_create() guarantees) or assert the head is initialised. A
short comment is enough.
Warning: rte_eal_cfg_add_plugin() accepts any non-empty path string and
silently truncates via strlcpy(p->name, path, sizeof(p->name)) if longer
than PATH_MAX. Either reject >= PATH_MAX, or document the truncation.
Warning: subtest_eal_cfg_init_null_pmd_vdevs is gated on #ifdef
RTE_NET_NULL. The #ifdef is around the entire test body but not the
"return TEST_SUCCESS;", so when RTE_NET_NULL is undefined the subtest
unconditionally returns TEST_SUCCESS without doing anything - and
without printing a "skipping" message. Add a
printf("Skipping: RTE_NET_NULL not built\n"); for parity with the other
skip paths in this file.
Patch 44/44 (eal_cfg: add APIs for configuring remaining init)
--------------------------------------------------------------
Warning: rte_eal_cfg_set_vfio_vf_token() accepts any 16 bytes without
parsing or validation. The parser side (eal_parse_vfio_vf_token) rejects
malformed UUIDs. The programmatic API is more permissive than the CLI
here, which is OK by design, but the doc comment should say so
explicitly.
Warning: rte_eal_cfg_set_trace_dir() does not check that the directory
exists or is writable - it only rejects NULL/empty. The CLI path also
doesn't (it defers to trace init), so this is consistent. Worth a doc
note that errors are deferred to rte_eal_init_from_cfg().
Warning: rte_eal_cfg_set_trace_bufsz(cfg, 0) is documented as "0 means
use the default" and the test verifies that. But
rte_eal_cfg_get_trace_bufsz(NULL) also returns 0. So a caller who reads
back 0 cannot tell "I have a NULL handle" from "I asked for default".
Same readability point as is_service_lcore above - an explicit
EAL_CFG_GETTER(uint64_t, trace_bufsz, 0) is fine, but document the
ambiguity.
Series-wide notes
-----------------
Several patches change behaviour subtly without saying so in the commit
log (patches 11, 14, 22). DPDK reviewers are picky about behavioural
changes hiding in "refactor" patches; please call them out.
The new EAL_USER_CFG_INITIALIZER macro is a compound literal in
initializer position. With strict pedantic compilers (-Wpedantic) you may
see warnings about C99 compound literals being used in static contexts.
The macro is only ever used in non-static contexts in this series, so
this is fine, but worth a comment so a future user doesn't try to use it
for a file-scope "static struct eal_user_cfg foo = EAL_USER_CFG_INITIALIZER(foo);".
lib/eal_cfg/eal_cfg.c includes <linux/magic.h> directly. That header
comes from the kernel headers, not libc, and may be absent on musl-based
or otherwise minimal build environments; consider the upstream DPDK
pattern of "#define HUGETLBFS_MAGIC 0x958458f6" as a fallback when the
header is missing.
The series has no MAINTAINERS update for lib/eal_cfg/ and no
release_notes entry. Both will be required before merge.
The new public header lib/eal_cfg/rte_eal_cfg.h uses __rte_experimental
on every function. Good. But the enum rte_eal_huge_unlink is also new,
and enums cannot be marked experimental in DPDK's libabigail config. Add
this enum to devtools/libabigail.abignore if you intend to grow it (e.g.
add RTE_EAL_HUGE_UNLINK_AUTO later).
Several patches add new headers and don't update lib/eal/version.map.
With RTE_EXPORT_* macros that's expected - but I want to confirm the
export-extraction tooling picks up the macro form when used inside an
EAL_CFG_GETTER / EAL_CFG_BOOL macro expansion. The gen-version-map.py
extractor I last looked at parses textually and may not see
"RTE_EXPORT_EXPERIMENTAL_SYMBOL(rte_eal_cfg_get_no_pci, 26.07)" followed
on the next physical line by "EAL_CFG_BOOL(no_pci)". Worth a
build-and-nm check that all the expected symbols actually appear in the
resulting .so.
Doxygen: the new "/** @name */" ... "/** @} */" groupings in
rte_eal_cfg.h will only render as groups if the file is in
doxy-api.conf.in and the @name blocks are paired with closing @} on
every group. I count one group ("Per-lcore CPU affinity configuration")
with two "/** @} */" closes for one open. Likely a stray @}.
Many of the test files use C99 designated initialisers and mid-block
declarations heavily; that is consistent with current DPDK style, just
noting it for older toolchains.
Net assessment
--------------
The structural split is sound and worth landing; the eal_cfg API design
is reasonable. The blockers I would call out for v2 are the numa_mem
summing gap (patch 22) for the eal_cfg path, the lazy-init failure
cleanup leak (patch 32), the realloc(0) UB (patch 15), and the non-EAL
thread NUMA-id semantics change (patch 11). Most of the rest is
doc/comment/style polish that can ride along in v2.
No Reviewed-by - there are real correctness issues to address first.
^ permalink raw reply [flat|nested] 50+ messages in thread

* Re: [RFC PATCH 00/44] Allow intitializing EAL without argc/argv
2026-04-29 22:04 ` Stephen Hemminger
@ 2026-04-30 8:00 ` Bruce Richardson
0 siblings, 0 replies; 50+ messages in thread
From: Bruce Richardson @ 2026-04-30 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev, techboard
On Wed, Apr 29, 2026 at 03:04:49PM -0700, Stephen Hemminger wrote:
> On Wed, 29 Apr 2026 17:57:52 +0100
> Bruce Richardson <bruce.richardson@intel.com> wrote:
>
> > The ultimate end goal of this RFC is, as stated in the title, to move
> > away from argc/argv as the sole means of configuring EAL on init. This
> > set therefore looks to:
> > * rework EAL so that we have a generic EAL init function taking a
> > struct-based configuration
> > * provide a rough *example* of how that might be used to create a new
> > set of C APIs to be used by apps to initialize EAL without having to
> > create dummy argc/argv pairs. [A side benefit of this is that it
> > makes it a lot easier to initialize EAL from other languages like Rust
> > or Python too, because arrays of C strings are not the most
> > user-friendly for a foreign interface]
> >
> > Therefore this set can be considered in 3 parts:
> >
> > Part 1: Struct rework.
> >
> > Largely for legacy reasons, we have a number of different structs and
> > arrays holding eal configuration, without having clear guides as to why
> > certain fields are stored where. The first ~30 patches rework the
> > existing structs - rte_config, internal_config, lcore_config etc.
> > into three defined-purpose structs:
> >
> > - eal_platform_info - contains the raw HW info for the system, details
> > of CPUs and hugepage mounts. This is initialized on first use - even
> > before EAL init is called - and is then immutable, since our HW should
> > not change much underneath us. Its early availability means that it
> > can be used to sanity check the contents of the other structs as they
> > are being built up.
> >
> > - eal_user_cfg - contains the config settings passed in by the user. For
> > existing rte_eal_init, this is built up in the arg parse stage, and
> > it's contents verified against the platform info, e.g. to check core
> > masks are valid etc. Once argument parsing is completed, is also
> > immutable.
> >
> > - runtime_cfg - basically all the runtime settings that need to be there
> > for DPDK to run, or which change over time. Largely combined content
> > of the old rte_config, internal_config and lcore_config structs. This
> > is initialized from the other two structs by eal initialization and
> > can be modified by EAL at any time.
> >
> > Part 2: Cleanup and Split EAL init
> >
> > With the 3 structs clearly separated by purpose, we can then do some
> > cleanup of the code, before - in patches 35 & 36 - splitting EAL into
> > two and providing an *internal* struct-based alternative API for
> > initializing DPDK. The old rte_eal_init function still exists, just the
> > contents of it after the argument parsing are put into a new, static
> > eal_runtime_init() function, which take the argparse output (user_cfg
> > struct). Patch 36, adds a thin internal-API wrapper around this static,
> > which is necessary to take care of things like the run-once flag. [This
> > wrapper should never be the public API, since the struct will change and
> > therefore be an ABI break. It's designed to be used by other wrapper
> > libs which provide a stable ABI for config]
> >
> > Part 3: Prototype of an eal_cfg library
> >
> > Once we have the internal C API to init eal using a struct
> > rte_eal_user_cfg, we can create new libraries which provide alternate
> > ways to build up the user_cfg and initialize DPDK. Patches 37-44 have a
> > rough example of such a library.
> >
> > - The lib allows a user to create an opaque rte_eal_user_cfg struct,
> > which can then be modified by APIs to get/set various parameters
> > before calling rte_cfg_eal_init().
> > - An alternative way to do things (not prototyped), may be to have a
> > library that creates an eal_user_cfg struct based on the contents of
> > an ini file using the configfile library.
> > [Both these options could be used in parallel. Note too that both have
> > no ABI implications for adding new flags, or making old ones no-ops!]
> >
> > Bruce Richardson (44):
> > eal: define new functionally distinct config structs
> > eal: move memory request fields to user config
> > eal: move NUMA request fields to user config
> > eal: move hugepage policy fields to user config
> > eal: move process policy fields to user config
> > eal: move advanced user config options to user cfg struct
> > eal: move hugepage size info to platform info struct
> > telemetry: make cpuset init parameter const
> > eal: move runtime state to appropriate structure
> > eal: record details of all cpus in platform info
> > eal: use platform info for lcore lookups
> > eal: add RTE_CPU_FFS macro
> > eal: store lcore configuration in runtime data
> > eal: cleanup CPU init function
> > eal: move numa node information to platform info struct
> > eal: move lcore role and count to runtime state
> > eal: make lcore role a field in lcore config struct
> > eal: move main lcore setting to runtime config struct
> > eal: move iova mode and process type to runtime cfg
> > eal: move memory config pointer to runtime state struct
> > eal: remove rte_config structure
> > eal: separate runtime state update from arg parsing
> > eal: move devopt_list staging list into user_cfg
> > eal: separate plugin paths from loaded plugin objects
> > eal: simplify internal driver path iteration APIs
> > eal: move trace config into user config struct
> > eal: record service cores in user config struct
> > eal: store user-provided lcore info in user config struct
> > eal: clarify docs on params taking lcore IDs
> > eal: remove internal config reset function
> > eal: move functions setting runtime state
> > eal: initialize platform info on first use
> > eal: remove duplicated scan of sysfs for hugepage details
> > eal: add utilities for working with user config struct
> > eal: split EAL init into two stages
> > eal: provide hooks for init with externally supplied config
> > eal_cfg: add new library to programmatically init DPDK
> > eal_cfg: configure defaults for easier testing and use
> > app/test: enable testing init using EAL config lib
> > eal_cfg: add basic setters and getters
> > eal_cfg: add hugepage memory configuration
> > eal_cfg: support configuring lcores
> > eal_cfg: support device and driver lists
> > eal_cfg: add APIs for configuring remaining init settings
> >
> > app/test/meson.build | 1 +
> > app/test/process.h | 4 +-
> > app/test/test.c | 14 +-
> > app/test/test.h | 1 +
> > app/test/test_eal_cfg.c | 1323 +++++++++++++++++++++
> > doc/api/doxy-api-index.md | 1 +
> > doc/api/doxy-api.conf.in | 1 +
> > doc/guides/linux_gsg/eal_args.include.rst | 38 +-
> > doc/guides/prog_guide/eal_cfg_lib.rst | 23 +
> > doc/guides/prog_guide/index.rst | 1 +
> > lib/eal/common/eal_common_bus.c | 4 +-
> > lib/eal/common/eal_common_config.c | 221 +++-
> > lib/eal/common/eal_common_dynmem.c | 66 +-
> > lib/eal/common/eal_common_fbarray.c | 10 +-
> > lib/eal/common/eal_common_launch.c | 25 +-
> > lib/eal/common/eal_common_lcore.c | 230 ++--
> > lib/eal/common/eal_common_mcfg.c | 44 +-
> > lib/eal/common/eal_common_memalloc.c | 5 +-
> > lib/eal/common/eal_common_memory.c | 104 +-
> > lib/eal/common/eal_common_memzone.c | 24 +-
> > lib/eal/common/eal_common_options.c | 823 +++++--------
> > lib/eal/common/eal_common_proc.c | 43 +-
> > lib/eal/common/eal_common_tailqs.c | 6 +-
> > lib/eal/common/eal_common_thread.c | 81 +-
> > lib/eal/common/eal_common_timer.c | 2 +-
> > lib/eal/common/eal_common_trace.c | 30 +-
> > lib/eal/common/eal_common_trace_utils.c | 104 --
> > lib/eal/common/eal_hugepages.h | 8 +
> > lib/eal/common/eal_internal_cfg.h | 376 +++++-
> > lib/eal/common/eal_memcfg.h | 3 +
> > lib/eal/common/eal_option_list.h | 6 +-
> > lib/eal/common/eal_options.h | 7 +-
> > lib/eal/common/eal_private.h | 108 +-
> > lib/eal/common/eal_trace.h | 11 -
> > lib/eal/common/malloc_elem.c | 15 +-
> > lib/eal/common/malloc_heap.c | 41 +-
> > lib/eal/common/malloc_mp.c | 2 +-
> > lib/eal/common/rte_malloc.c | 14 +-
> > lib/eal/common/rte_service.c | 17 +-
> > lib/eal/freebsd/eal.c | 271 +++--
> > lib/eal/freebsd/eal_hugepage_info.c | 71 +-
> > lib/eal/freebsd/eal_lcore.c | 16 +-
> > lib/eal/freebsd/eal_memory.c | 46 +-
> > lib/eal/freebsd/include/rte_os.h | 2 +
> > lib/eal/include/rte_eal.h | 35 +-
> > lib/eal/include/rte_memzone.h | 10 +-
> > lib/eal/include/rte_tailq.h | 2 +-
> > lib/eal/linux/eal.c | 280 +++--
> > lib/eal/linux/eal_hugepage_info.c | 219 ++--
> > lib/eal/linux/eal_lcore.c | 13 +
> > lib/eal/linux/eal_memalloc.c | 168 ++-
> > lib/eal/linux/eal_memory.c | 153 ++-
> > lib/eal/linux/eal_timer_hpet.c | 21 +-
> > lib/eal/linux/eal_vfio.c | 11 +-
> > lib/eal/linux/include/rte_os.h | 10 +
> > lib/eal/unix/eal_unix_thread.c | 11 +-
> > lib/eal/windows/eal.c | 177 ++-
> > lib/eal/windows/eal_hugepages.c | 60 +-
> > lib/eal/windows/eal_lcore.c | 6 +
> > lib/eal/windows/eal_memalloc.c | 37 +-
> > lib/eal/windows/eal_memory.c | 14 +-
> > lib/eal/windows/eal_thread.c | 11 +-
> > lib/eal/windows/eal_windows.h | 8 -
> > lib/eal/windows/include/rte_os.h | 1 +
> > lib/eal/windows/include/sched.h | 10 +
> > lib/eal_cfg/eal_cfg.c | 918 ++++++++++++++
> > lib/eal_cfg/meson.build | 6 +
> > lib/eal_cfg/rte_eal_cfg.h | 903 ++++++++++++++
> > lib/meson.build | 1 +
> > lib/telemetry/telemetry.c | 4 +-
> > lib/telemetry/telemetry_internal.h | 2 +-
> > 71 files changed, 5450 insertions(+), 1884 deletions(-)
> > create mode 100644 app/test/test_eal_cfg.c
> > create mode 100644 doc/guides/prog_guide/eal_cfg_lib.rst
> > create mode 100644 lib/eal_cfg/eal_cfg.c
> > create mode 100644 lib/eal_cfg/meson.build
> > create mode 100644 lib/eal_cfg/rte_eal_cfg.h
> >
> > --
> > 2.51.0
> >
>
> Lots of AI feedback below.
Thanks. This is still RFC quality in places. If folks feel this rework
is worth doing, some further cleanup will likely be done beyond even
what the AI called out.
If I do progress this to a V1, I will likely focus only on the patches
reworking EAL. The example "eal_cfg" library is included more as an
example of what's possible than for real consideration. Dropping it
also makes the patchset smaller, thankfully, because it is rather
large, I realise.
^ permalink raw reply [flat|nested] 50+ messages in thread