* [PATCH 2/3] test: Convert tick-* probes to ioctl:entry for tst.trunc[quant].d
2025-05-01 18:22 [PATCH 1/3] Cache per-CPU agg map IDs eugene.loh
@ 2025-05-01 18:22 ` eugene.loh
2025-07-16 13:21 ` Nick Alcock
2025-05-01 18:22 ` [PATCH 3/3] test: Mimic dtrace arithmetic more closely for avg/stddev eugene.loh
` (2 subsequent siblings)
3 siblings, 1 reply; 10+ messages in thread
From: eugene.loh @ 2025-05-01 18:22 UTC (permalink / raw)
To: dtrace, dtrace-devel
From: Eugene Loh <eugene.loh@oracle.com>
Historically, many tests have used tick-* probes to get multiple
probe firings, but those probes can be unreliable, depending on
how a kernel is configured. Tests that required very many probe
firings have been converted to ioctl:entry. Tests that required
very few have been left alone.
Convert more of these tests. They normally pass, but their execution
time is erratic and they sometimes time out.
Signed-off-by: Eugene Loh <eugene.loh@oracle.com>
---
test/unittest/aggs/tst.trunc.d | 12 +++++++-----
test/unittest/aggs/tst.truncquant.d | 12 +++++++-----
2 files changed, 14 insertions(+), 10 deletions(-)
diff --git a/test/unittest/aggs/tst.trunc.d b/test/unittest/aggs/tst.trunc.d
index dbaeb09a8..044afb3a6 100644
--- a/test/unittest/aggs/tst.trunc.d
+++ b/test/unittest/aggs/tst.trunc.d
@@ -1,23 +1,25 @@
/*
* Oracle Linux DTrace.
- * Copyright (c) 2006, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2006, 2025, Oracle and/or its affiliates. All rights reserved.
* Licensed under the Universal Permissive License v 1.0 as shown at
* http://oss.oracle.com/licenses/upl.
*/
+/* @@trigger: bogus-ioctl */
+/* @@nosort */
#pragma D option quiet
int i;
-tick-10ms
-/i < 100/
+syscall::ioctl:entry
+/pid == $target/
{
@[i] = sum(i);
i++;
}
-tick-10ms
-/i == 100/
+syscall::ioctl:entry
+/pid == $target && i == 100/
{
exit(0);
}
diff --git a/test/unittest/aggs/tst.truncquant.d b/test/unittest/aggs/tst.truncquant.d
index 9a8b707e5..9b9794e40 100644
--- a/test/unittest/aggs/tst.truncquant.d
+++ b/test/unittest/aggs/tst.truncquant.d
@@ -1,16 +1,18 @@
/*
* Oracle Linux DTrace.
- * Copyright (c) 2006, 2020, Oracle and/or its affiliates. All rights reserved.
+ * Copyright (c) 2006, 2025, Oracle and/or its affiliates. All rights reserved.
* Licensed under the Universal Permissive License v 1.0 as shown at
* http://oss.oracle.com/licenses/upl.
*/
+/* @@trigger: bogus-ioctl */
+/* @@nosort */
#pragma D option quiet
int i;
-tick-10ms
-/i < 100/
+syscall::ioctl:entry
+/pid == $target/
{
@[i] = lquantize(i, 0, 150);
@[i] = lquantize(i + 1, 0, 150);
@@ -19,8 +21,8 @@ tick-10ms
i++;
}
-tick-10ms
-/i == 100/
+syscall::ioctl:entry
+/pid == $target && i == 100/
{
exit(0);
}
--
2.43.5
^ permalink raw reply related [flat|nested] 10+ messages in thread
* Re: [PATCH 2/3] test: Convert tick-* probes to ioctl:entry for tst.trunc[quant].d
2025-05-01 18:22 ` [PATCH 2/3] test: Convert tick-* probes to ioctl:entry for tst.trunc[quant].d eugene.loh
@ 2025-07-16 13:21 ` Nick Alcock
0 siblings, 0 replies; 10+ messages in thread
From: Nick Alcock @ 2025-07-16 13:21 UTC (permalink / raw)
To: eugene.loh; +Cc: dtrace, dtrace-devel
On 1 May 2025, eugene loh said:
> From: Eugene Loh <eugene.loh@oracle.com>
>
> Historically, many tests have used tick-* probes to get multiple
> probe firings, but those probes can be unreliable, depending on
> how a kernel is configured. Tests that required very many probe
> firings have been converted to ioctl:entry. Tests that required
> very few have been left alone.
>
> Convert more of these tests. They normally pass, but with erratic
> execution time and sometimes time out.
>
> Signed-off-by: Eugene Loh <eugene.loh@oracle.com>
Reviewed-by: Nick Alcock <nick.alcock@oracle.com>
* [PATCH 3/3] test: Mimic dtrace arithmetic more closely for avg/stddev
2025-05-01 18:22 [PATCH 1/3] Cache per-CPU agg map IDs eugene.loh
2025-05-01 18:22 ` [PATCH 2/3] test: Convert tick-* probes to ioctl:entry for tst.trunc[quant].d eugene.loh
@ 2025-05-01 18:22 ` eugene.loh
2025-07-16 13:22 ` [DTrace-devel] " Nick Alcock
2025-07-16 13:20 ` [DTrace-devel] [PATCH 1/3] Cache per-CPU agg map IDs Nick Alcock
2025-07-16 19:10 ` Kris Van Hees
3 siblings, 1 reply; 10+ messages in thread
From: eugene.loh @ 2025-05-01 18:22 UTC (permalink / raw)
To: dtrace, dtrace-devel
From: Eugene Loh <eugene.loh@oracle.com>
The multicpus test checks that data from multiple CPUs is aggregated
properly. Operations like avg() and stddev() require division.
DTrace uses integer division, while awk does not.
Change the awk check to truncate intermediate results after division.
In practice, intermediate results are typically integer values anyhow,
so the test has generally passed. Non-integer values could arise if,
for example, CPUs are not numbered consecutively. More typically, they
indicate that the profile probe is not firing on every expected CPU.
So a test failure probably was a sign of a real problem, but one
beyond the scope of this test.
Signed-off-by: Eugene Loh <eugene.loh@oracle.com>
---
test/unittest/aggs/tst.multicpus.sh | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/test/unittest/aggs/tst.multicpus.sh b/test/unittest/aggs/tst.multicpus.sh
index 50eeaae44..46c668867 100755
--- a/test/unittest/aggs/tst.multicpus.sh
+++ b/test/unittest/aggs/tst.multicpus.sh
@@ -79,10 +79,10 @@ gawk '
# first we finish computing our estimates for avg and stddev
# (the other results require no further action)
- xavg /= xcnt;
+ xavg /= xcnt; xavg = int(xavg);
- xstm /= xcnt;
- xstd /= xcnt;
+ xstm /= xcnt; xstm = int(xstm);
+ xstd /= xcnt; xstd = int(xstd);
xstd -= xstm * xstm;
xstd = int(sqrt(xstd));
--
2.43.5
* Re: [DTrace-devel] [PATCH 3/3] test: Mimic dtrace arithmetic more closely for avg/stddev
2025-05-01 18:22 ` [PATCH 3/3] test: Mimic dtrace arithmetic more closely for avg/stddev eugene.loh
@ 2025-07-16 13:22 ` Nick Alcock
2025-07-16 19:51 ` Eugene Loh
0 siblings, 1 reply; 10+ messages in thread
From: Nick Alcock @ 2025-07-16 13:22 UTC (permalink / raw)
To: eugene.loh--- via DTrace-devel; +Cc: dtrace, eugene.loh
On 1 May 2025, eugene loh stated:
> From: Eugene Loh <eugene.loh@oracle.com>
>
> The multicpus test checks that data from multiple CPUs is aggregated
> properly. Operations like avg() and stddev() require division.
> DTrace uses integer division, while awk does not.
Oops!
> for example, CPUs are not numbered consecutively. More typically,
> there may be a problem that the profile probe is not firing on every
> expected CPU. So a test failure probably was a sign of a problem,
> but that's a different problem, one beyond the scope of this test.
... is there anything else that tests for those other problems?
> Signed-off-by: Eugene Loh <eugene.loh@oracle.com>
Reviewed-by: Nick Alcock <nick.alcock@oracle.com>
* Re: [DTrace-devel] [PATCH 3/3] test: Mimic dtrace arithmetic more closely for avg/stddev
2025-07-16 13:22 ` [DTrace-devel] " Nick Alcock
@ 2025-07-16 19:51 ` Eugene Loh
0 siblings, 0 replies; 10+ messages in thread
From: Eugene Loh @ 2025-07-16 19:51 UTC (permalink / raw)
To: Nick Alcock, eugene.loh--- via DTrace-devel; +Cc: dtrace
On 7/16/25 09:22, Nick Alcock wrote:
> On 1 May 2025, eugene loh stated:
>
>> From: Eugene Loh <eugene.loh@oracle.com>
>>
>> The multicpus test checks that data from multiple CPUs is aggregated
>> properly. Operations like avg() and stddev() require division.
>> DTrace uses integer division, while awk does not.
> Oops!
>
>> for example, CPUs are not numbered consecutively. More typically,
>> there may be a problem that the profile probe is not firing on every
>> expected CPU. So a test failure probably was a sign of a problem,
>> but that's a different problem, one beyond the scope of this test.
> ... is there anything else that tests for those other problems?
I sure thought so, but I can't find anything and the obvious place to
look is test/unittest/profile. Frankly, I thought there were some
"quality of profile" tests (like if I want a probe firing every 500
msec, how often does the probe actually fire?).
>> Signed-off-by: Eugene Loh <eugene.loh@oracle.com>
> Reviewed-by: Nick Alcock <nick.alcock@oracle.com>
* Re: [DTrace-devel] [PATCH 1/3] Cache per-CPU agg map IDs
2025-05-01 18:22 [PATCH 1/3] Cache per-CPU agg map IDs eugene.loh
2025-05-01 18:22 ` [PATCH 2/3] test: Convert tick-* probes to ioctl:entry for tst.trunc[quant].d eugene.loh
2025-05-01 18:22 ` [PATCH 3/3] test: Mimic dtrace arithmetic more closely for avg/stddev eugene.loh
@ 2025-07-16 13:20 ` Nick Alcock
2025-07-16 18:42 ` Eugene Loh
2025-07-16 19:10 ` Kris Van Hees
3 siblings, 1 reply; 10+ messages in thread
From: Nick Alcock @ 2025-07-16 13:20 UTC (permalink / raw)
To: eugene.loh, dtrace, dtrace-devel
On 1 May 2025, eugene loh outgrape:
> From: Eugene Loh <eugene.loh@oracle.com>
>
> The dt_bpf_map_lookup_fd(dtp->dt_aggmap_fd, &cpu) call that is used to
> snap or truncate aggregations takes a few milliseconds, which seems all
> right. For large systems (e.g., 100 CPUs) and many truncations (e.g.,
> tst.trunc.d, etc.), however, a trunc() might end up costing a minute on
> the consumer side, which is unreasonable and causes such tests to time
> out. The run time is due almost exclusively to looking up the per-CPU
> agg map ID.
... how on earth did you figure this out? Some sort of profiler,
presumably...
> Cache the per-CPU agg map IDs.
LGTM, doing the slow operation only once rather than once per snap seems
sensible. (If this turns out to slow down the no-aggs case too much,
we could always populate the map just once on the first call to
dtrace_aggregate_snap() and dt_aggwalk_remove(). But presumably just one
operation is fast enough.)
> Signed-off-by: Eugene Loh <eugene.loh@oracle.com>
Reviewed-by: Nick Alcock <nick.alcock@oracle.com>
(as long as Kris is happy with this probably-theoretical slowdown.)
--
NULL && (void)
* Re: [DTrace-devel] [PATCH 1/3] Cache per-CPU agg map IDs
2025-07-16 13:20 ` [DTrace-devel] [PATCH 1/3] Cache per-CPU agg map IDs Nick Alcock
@ 2025-07-16 18:42 ` Eugene Loh
0 siblings, 0 replies; 10+ messages in thread
From: Eugene Loh @ 2025-07-16 18:42 UTC (permalink / raw)
To: Nick Alcock, dtrace, dtrace-devel
On 7/16/25 09:20, Nick Alcock wrote:
> On 1 May 2025, eugene loh outgrape:
>> From: Eugene Loh <eugene.loh@oracle.com>
>>
>> The dt_bpf_map_lookup_fd(dtp->dt_aggmap_fd, &cpu) call that is used to
>> snap or truncate aggregations takes a few milliseconds, which seems all
>> right. For large systems (e.g., 100 CPUs) and many truncations (e.g.,
>> tst.trunc.d, etc.), however, a trunc() might end up costing a minute on
>> the consumer side, which is unreasonable and causes such tests to time
>> out. The run time is due almost exclusively to looking up the per-CPU
>> agg map ID.
> ... how on earth did you figure this out? Some sort of profiler,
> presumably...
I wish I could say "gprofng", but the first step was figuring out what
was happening and under what conditions. Once I knew where to look, I
just stuck in some timing calls to nail down the culprit. It could
have been gprofng, I suppose.
* Re: [DTrace-devel] [PATCH 1/3] Cache per-CPU agg map IDs
2025-05-01 18:22 [PATCH 1/3] Cache per-CPU agg map IDs eugene.loh
` (2 preceding siblings ...)
2025-07-16 13:20 ` [DTrace-devel] [PATCH 1/3] Cache per-CPU agg map IDs Nick Alcock
@ 2025-07-16 19:10 ` Kris Van Hees
2025-07-16 20:05 ` Eugene Loh
3 siblings, 1 reply; 10+ messages in thread
From: Kris Van Hees @ 2025-07-16 19:10 UTC (permalink / raw)
To: eugene.loh; +Cc: dtrace, dtrace-devel
On Thu, May 01, 2025 at 02:22:50PM -0400, eugene.loh--- via DTrace-devel wrote:
> From: Eugene Loh <eugene.loh@oracle.com>
>
> The dt_bpf_map_lookup_fd(dtp->dt_aggmap_fd, &cpu) call that is used to
> snap or truncate aggregations takes a few milliseconds, which seems all
> right. For large systems (e.g., 100 CPUs) and many truncations (e.g.,
> tst.trunc.d, etc.), however, a trunc() might end up costing a minute on
> the consumer side, which is unreasonable and causes such tests to time
> out. The run time is due almost exclusively to looking up the per-CPU
> agg map ID.
Of course, we only go from 2 bpf() syscalls per call to 1 bpf() syscall per
call. Have you collected timings for the two kinds of bpf calls we use?
I would actually expect that the larger cost lies with the
bpf_map_get_fd_by_id() one rather than the bpf_map_lookup() since the
fd_by_id needs to allocate an fd and link it with a map.
In all, how bad would it be if we were to just keep the fds open for the
per-CPU aggregation data and not have to do these bpf syscalls at all?
But we could end up with 100s of fds open, of course...
> Cache the per-CPU agg map IDs.
>
> Signed-off-by: Eugene Loh <eugene.loh@oracle.com>
> ---
> libdtrace/dt_aggregate.c | 5 ++---
> libdtrace/dt_bpf.c | 6 ++++++
> libdtrace/dt_impl.h | 1 +
> libdtrace/dt_open.c | 1 +
> 4 files changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/libdtrace/dt_aggregate.c b/libdtrace/dt_aggregate.c
> index 9e47fcab7..86f9d4d5b 100644
> --- a/libdtrace/dt_aggregate.c
> +++ b/libdtrace/dt_aggregate.c
> @@ -800,7 +800,7 @@ dtrace_aggregate_snap(dtrace_hdl_t *dtp)
>
> for (i = 0; i < dtp->dt_conf.num_online_cpus; i++) {
> int cpu = dtp->dt_conf.cpus[i].cpu_id;
> - int fd = dt_bpf_map_lookup_fd(dtp->dt_aggmap_fd, &cpu);
> + int fd = dt_bpf_map_get_fd_by_id(dtp->dt_aggmap_ids[i]);
>
> if (fd < 0)
> return DTRACE_WORKSTATUS_ERROR;
> @@ -1232,8 +1232,7 @@ dt_aggwalk_remove(dtrace_hdl_t *dtp, dt_ahashent_t *h)
> memcpy(key, agd->dtada_key, agd->dtada_desc->dtagd_ksize);
>
> for (i = 0; i < ncpus; i++) {
> - int cpu = dtp->dt_conf.cpus[i].cpu_id;
> - int fd = dt_bpf_map_lookup_fd(dtp->dt_aggmap_fd, &cpu);
> + int fd = dt_bpf_map_get_fd_by_id(dtp->dt_aggmap_ids[i]);
>
> if (fd < 0)
> return DTRACE_WORKSTATUS_ERROR;
> diff --git a/libdtrace/dt_bpf.c b/libdtrace/dt_bpf.c
> index d6722cbd1..635780738 100644
> --- a/libdtrace/dt_bpf.c
> +++ b/libdtrace/dt_bpf.c
> @@ -689,6 +689,10 @@ gmap_create_aggs(dtrace_hdl_t *dtp)
> if (dtp->dt_aggmap_fd == -1)
> return -1;
>
> + dtp->dt_aggmap_ids = dt_calloc(dtp, dtp->dt_conf.num_online_cpus, sizeof(int));
> + if (dtp->dt_aggmap_ids == NULL)
> + return dt_set_errno(dtp, EDT_NOMEM);
> +
> for (i = 0; i < dtp->dt_conf.num_online_cpus; i++) {
> int cpu = dtp->dt_conf.cpus[i].cpu_id;
> char name[16];
> @@ -702,6 +706,8 @@ gmap_create_aggs(dtrace_hdl_t *dtp)
> return map_create_error(dtp, name, errno);
>
> dt_bpf_map_update(dtp->dt_aggmap_fd, &cpu, &fd);
> + if (dt_bpf_map_lookup(dtp->dt_aggmap_fd, &cpu, &dtp->dt_aggmap_ids[i]) < 0)
> + return -1;
> }
>
> /* Create the agg generation value array. */
> diff --git a/libdtrace/dt_impl.h b/libdtrace/dt_impl.h
> index 68fb8ec53..1033154d9 100644
> --- a/libdtrace/dt_impl.h
> +++ b/libdtrace/dt_impl.h
> @@ -388,6 +388,7 @@ struct dtrace_hdl {
> int dt_proc_fd; /* file descriptor for proc eventfd */
> int dt_stmap_fd; /* file descriptor for the 'state' BPF map */
> int dt_aggmap_fd; /* file descriptor for the 'aggs' BPF map */
> + int *dt_aggmap_ids; /* ids for the 'aggN' BPF maps */
> int dt_genmap_fd; /* file descriptor for the 'agggen' BPF map */
> int dt_cpumap_fd; /* file descriptor for the 'cpuinfo' BPF map */
> int dt_usdt_pridsmap_fd; /* file descriptor for the 'usdt_prids' BPF map */
> diff --git a/libdtrace/dt_open.c b/libdtrace/dt_open.c
> index 71ee21467..7da4c82cc 100644
> --- a/libdtrace/dt_open.c
> +++ b/libdtrace/dt_open.c
> @@ -1233,6 +1233,7 @@ dtrace_close(dtrace_hdl_t *dtp)
> dt_probe_detach_all(dtp);
>
> dt_free(dtp, dtp->dt_conf.cpus);
> + dt_free(dtp, dtp->dt_aggmap_ids);
>
> if (dtp->dt_procs != NULL)
> dt_proc_hash_destroy(dtp);
> --
> 2.43.5
>
>
> _______________________________________________
> DTrace-devel mailing list
> DTrace-devel@oss.oracle.com
> https://oss.oracle.com/mailman/listinfo/dtrace-devel
* Re: [DTrace-devel] [PATCH 1/3] Cache per-CPU agg map IDs
2025-07-16 19:10 ` Kris Van Hees
@ 2025-07-16 20:05 ` Eugene Loh
0 siblings, 0 replies; 10+ messages in thread
From: Eugene Loh @ 2025-07-16 20:05 UTC (permalink / raw)
To: Kris Van Hees; +Cc: dtrace, dtrace-devel
On 7/16/25 15:10, Kris Van Hees wrote:
> On Thu, May 01, 2025 at 02:22:50PM -0400, eugene.loh--- via DTrace-devel wrote:
>> From: Eugene Loh <eugene.loh@oracle.com>
>>
>> The dt_bpf_map_lookup_fd(dtp->dt_aggmap_fd, &cpu) call that is used to
>> snap or truncate aggregations takes a few milliseconds, which seems all
>> right. For large systems (e.g., 100 CPUs) and many truncations (e.g.,
>> tst.trunc.d, etc.), however, a trunc() might end up costing a minute on
>> the consumer side, which is unreasonable and causes such tests to time
>> out. The run time is due almost exclusively to looking up the per-CPU
>> agg map ID.
> Of course, we only go from 2 bpf() syscalls per call to 1 bpf() syscall per
> call. Have you collected timings for the two kinds of bpf calls we use?
I'm rusty on this, but as I remember I collected timings on the BPF
calls and it was clear who the culprit was.
> I would actually expect that the larger cost lies with the
> bpf_map_get_fd_by_id() one rather than the bpf_map_lookup() since the
> fd_by_id needs to allocate an fd and link it with a map.
>
> In all, how bad would it be if we were to just keep the fds open for the
> per-CPU aggregation data, and not having to do these bpf syscalls at all.
>
> But we could end up with 100s of fds open of course...
Yeah, I think I decided against that approach for that reason (iirc).
I'm happy with Nick's R-b, but let me know if you want more
investigation on this patch.
^ permalink raw reply [flat|nested] 10+ messages in thread