* [PATCH 0/7] libtracefs: More fixes for memory mapping ring buffer API
From: Steven Rostedt @ 2024-01-10 2:51 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
Now all the read APIs support the memory mapped ring buffer.
For those that use splice and pipe, if the tcpu is memory mapped, they will
instead fall back to copying directly from the memory map.
Also added a tracefs_mapped_is_supported() API to test whether the kernel and
library support memory mapping.
And updated the tests to use memory mapped ring buffers when possible.
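For illustration, a minimal sketch of how an application might tie these
together (not part of the series; the CPU choice and error handling are only
for the example):

  #include <stdio.h>
  #include <stdlib.h>
  #include <tracefs.h>

  int main(void)
  {
          struct tracefs_cpu *tcpu;

          if (!tracefs_mapped_is_supported())
                  printf("No mmap support, reads and splice will be used\n");

          /* Open CPU 0 of the top level buffer, mapping it if the kernel allows */
          tcpu = tracefs_cpu_open_mapped(NULL, 0, false);
          if (!tcpu)
                  exit(EXIT_FAILURE);

          /* Works the same whether or not the mapping succeeded */
          if (tracefs_cpu_buffered_read_buf(tcpu, true))
                  printf("Read a sub-buffer from CPU 0\n");

          tracefs_cpu_close(tcpu);
          return 0;
  }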
Steven Rostedt (Google) (7):
libtracefs: Have mapping work with the other tracefs_cpu* functions
libtracefs: Have tracefs_mmap_read() include subbuf meta data
libtracefs: Have nonblock tracefs_cpu reads set errno EAGAIN
libtracefs: Fix tracefs_mmap() kbuf usage
libtracefs: Call mmap ioctl if a refresh happens
libtracefs: Add tracefs_mapped_is_supported() API
libtracefs utest: Add tests to use mapping if supported
Documentation/libtracefs-cpu-map.txt | 38 ++++++++++++---
Documentation/libtracefs.txt | 1 +
include/tracefs.h | 1 +
src/tracefs-events.c | 8 +---
src/tracefs-mmap.c | 28 +++++++++--
src/tracefs-record.c | 72 +++++++++++++++++++++++++++-
utest/tracefs-utest.c | 69 ++++++++++++++++++--------
7 files changed, 176 insertions(+), 41 deletions(-)
--
2.43.0
* [PATCH 1/7] libtracefs: Have mapping work with the other tracefs_cpu* functions
From: Steven Rostedt @ 2024-01-10 2:51 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
If the tracefs_cpu is opened with tracefs_cpu_open_mapped() and it
successfully maps, then still allow the other tracefs_cpu*() functions to
work with the mapping. That is:
tracefs_cpu_buffered_read() will act like tracefs_cpu_read()
tracefs_cpu_buffered_read_buf() will act like tracefs_cpu_read_buf()
and tracefs_cpu_write() and tracefs_cpu_pipe() will read from the mapping
instead of using splice.
Update tracefs_iterate_raw_events() to always use
tracefs_cpu_buffered_read_buf(), as it now does the right thing whether or not
the ring buffer is memory mapped.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
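[ Illustrative note, not part of the patch: with this change a caller no
  longer needs to special case a mapped tcpu. A hypothetical read loop such
  as the one below behaves the same whether or not the mapping succeeded. ]

  #include <tracefs.h>
  #include <kbuffer.h>    /* from libtraceevent */

  /* Hypothetical helper: count the events currently readable on one CPU */
  static int drain_cpu(struct tracefs_cpu *tcpu)
  {
          unsigned long long ts;
          struct kbuffer *kbuf;
          void *event;
          int count = 0;

          /* Acts like tracefs_cpu_read_buf() when the tcpu is memory mapped */
          while ((kbuf = tracefs_cpu_buffered_read_buf(tcpu, true))) {
                  for (event = kbuffer_read_event(kbuf, &ts); event;
                       event = kbuffer_next_event(kbuf, &ts))
                          count++;
          }
          return count;
  }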
Documentation/libtracefs-cpu-map.txt | 29 +++++++++++++++++++++------
src/tracefs-events.c | 8 +-------
src/tracefs-record.c | 30 ++++++++++++++++++++++++++++
3 files changed, 54 insertions(+), 13 deletions(-)
diff --git a/Documentation/libtracefs-cpu-map.txt b/Documentation/libtracefs-cpu-map.txt
index 2c80d0894f87..ed62c96f83a1 100644
--- a/Documentation/libtracefs-cpu-map.txt
+++ b/Documentation/libtracefs-cpu-map.txt
@@ -29,11 +29,16 @@ memory mapping via either *tracefs_cpu_map()* or by *tracefs_cpu_open_mapped()*
then the functions *tracefs_cpu_read*(3) and *tracefs_cpu_read_buf*(3) will use
the mapping directly instead of calling the read system call.
-Note, mapping can also slow down *tracefs_cpu_buffered_read*(3) and
-*tracefs_cpu_buffered_read_buf*(3), as those use splice piping and when the
-kernel ring buffer is memory mapped, splice does a copy instead of using the
-ring buffer directly. Thus care must be used when determining to map the
-ring buffer or not, and why it does not get mapped by default.
+Note, mapping will cause *tracefs_cpu_buffered_read*(3) and *tracefs_cpu_buffered_read_buf*(3)
+to act just like *tracefs_cpu_read*(3) and *tracefs_cpu_read_buf*(3) respectively,
+as it does not make sense to use a splice pipe when mapped. When the ring buffer
+is memory mapped, the kernel copies it for splice reads and the function would
+then have to copy it again, whereas reading the mapping directly avoids that.
+
+If the _tcpu_ is memory mapped, it will also force *tracefs_cpu_write*(3) and
+*tracefs_cpu_pipe*(3) to copy from the mapping instead of using splice.
+Thus care must be used when deciding whether or not to map the ring buffer,
+which is why it does not get mapped by default.
The *tracefs_cpu_is_mapped()* function will return true if _tcpu_ currently has
its ring buffer memory mapped and false otherwise. This does not return whether or
@@ -78,6 +83,7 @@ static void read_subbuf(struct tep_handle *tep, struct kbuffer *kbuf)
{
static struct trace_seq seq;
struct tep_record record;
+ int missed_events;
if (seq.buffer)
trace_seq_reset(&seq);
@@ -86,6 +92,14 @@ static void read_subbuf(struct tep_handle *tep, struct kbuffer *kbuf)
while ((record.data = kbuffer_read_event(kbuf, &record.ts))) {
record.size = kbuffer_event_size(kbuf);
+ missed_events = kbuffer_missed_events(kbuf);
+ if (missed_events) {
+ printf("[MISSED EVENTS");
+ if (missed_events > 0)
+ printf(": %d]\n", missed_events);
+ else
+ printf("]\n");
+ }
kbuffer_next_event(kbuf, NULL);
tep_print_event(tep, &seq, &record,
"%s-%d %6.1000d\t%s: %s\n",
@@ -128,7 +142,10 @@ int main (int argc, char **argv)
/*
* If this kernel supports mapping, use normal read,
- * otherwise use the piped buffer read.
+ * otherwise use the piped buffer read, although if
+ * the mapping succeeded, tracefs_cpu_buffered_read_buf()
+ * acts the same as tracefs_cpu_read_buf(). But this is just
+ * an example of how to use tracefs_cpu_is_mapped().
*/
mapped = tracefs_cpu_is_mapped(tcpu);
if (!mapped)
diff --git a/src/tracefs-events.c b/src/tracefs-events.c
index 9f620abebdda..1b1693cf0923 100644
--- a/src/tracefs-events.c
+++ b/src/tracefs-events.c
@@ -32,7 +32,6 @@ struct cpu_iterate {
struct tep_event *event;
struct kbuffer *kbuf;
int cpu;
- bool mapped;
};
static int read_kbuf_record(struct cpu_iterate *cpu)
@@ -67,11 +66,7 @@ int read_next_page(struct tep_handle *tep, struct cpu_iterate *cpu)
if (!cpu->tcpu)
return -1;
- /* Do not do buffered reads if it is mapped */
- if (cpu->mapped)
- kbuf = tracefs_cpu_read_buf(cpu->tcpu, true);
- else
- kbuf = tracefs_cpu_buffered_read_buf(cpu->tcpu, true);
+ kbuf = tracefs_cpu_buffered_read_buf(cpu->tcpu, true);
/*
* tracefs_cpu_buffered_read_buf() only reads in full subbuffer size,
* but this wants partial buffers as well. If the function returns
@@ -295,7 +290,6 @@ static int open_cpu_files(struct tracefs_instance *instance, cpu_set_t *cpus,
tmp[i].tcpu = tcpu;
tmp[i].cpu = cpu;
- tmp[i].mapped = tracefs_cpu_is_mapped(tcpu);
i++;
}
*count = i;
diff --git a/src/tracefs-record.c b/src/tracefs-record.c
index fca3ddf9afbe..881f49256dc2 100644
--- a/src/tracefs-record.c
+++ b/src/tracefs-record.c
@@ -571,6 +571,9 @@ int tracefs_cpu_buffered_read(struct tracefs_cpu *tcpu, void *buffer, bool nonbl
if (ret <= 0)
return ret;
+ if (tcpu->mapping)
+ return trace_mmap_read(tcpu->mapping, buffer);
+
if (tcpu->flags & TC_NONBLOCK)
mode |= SPLICE_F_NONBLOCK;
@@ -617,6 +620,16 @@ struct kbuffer *tracefs_cpu_buffered_read_buf(struct tracefs_cpu *tcpu, bool non
{
int ret;
+ /* If mapping is enabled, just use it directly */
+ if (tcpu->mapping) {
+ ret = wait_on_input(tcpu, nonblock);
+ if (ret <= 0)
+ return NULL;
+
+ ret = trace_mmap_load_subbuf(tcpu->mapping, tcpu->kbuf);
+ return ret > 0 ? tcpu->kbuf : NULL;
+ }
+
if (!get_buffer(tcpu))
return NULL;
@@ -795,6 +808,20 @@ int tracefs_cpu_write(struct tracefs_cpu *tcpu, int wfd, bool nonblock)
int tot;
int ret;
+ if (tcpu->mapping) {
+ int r = tracefs_cpu_read(tcpu, buffer, nonblock);
+ if (r < 0)
+ return r;
+ do {
+ ret = write(wfd, buffer, r);
+ if (ret < 0)
+ return ret;
+ r -= ret;
+ tot_write += ret;
+ } while (r > 0);
+ return tot_write;
+ }
+
ret = wait_on_input(tcpu, nonblock);
if (ret <= 0)
return ret;
@@ -860,6 +887,9 @@ int tracefs_cpu_pipe(struct tracefs_cpu *tcpu, int wfd, bool nonblock)
int mode = SPLICE_F_MOVE;
int ret;
+ if (tcpu->mapping)
+ return tracefs_cpu_write(tcpu, wfd, nonblock);
+
ret = wait_on_input(tcpu, nonblock);
if (ret <= 0)
return ret;
--
2.43.0
* [PATCH 2/7] libtracefs: Have tracefs_mmap_read() include subbuf meta data
From: Steven Rostedt @ 2024-01-10 2:51 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
tracefs_cpu_read() returns the result of tracefs_mmap_read() when the tcpu is
mapped. But tracefs_cpu_read() is supposed to return the amount that was read,
which includes the sub-buffer meta data, whereas kbuffer_read_buffer() only
returns the amount of data that was read and does not include the sub-buffer
meta data.
Fixes: 2ed14b59 ("libtracefs: Add ring buffer memory mapping APIs")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
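[ Illustrative note, not part of the patch: the buffer filled by
  tracefs_cpu_read() is a full sub-buffer image, so a caller handing it to
  kbuffer_load_subbuffer() needs the returned length to account for the
  header sitting in front of the data. A rough sketch: ]

  #include <stdio.h>
  #include <tracefs.h>
  #include <kbuffer.h>    /* from libtraceevent */

  /* Hypothetical caller of tracefs_cpu_read() */
  static void dump_one_subbuf(struct tracefs_cpu *tcpu)
  {
          char buf[tracefs_cpu_read_size(tcpu)];
          struct kbuffer *kbuf;
          int len;

          len = tracefs_cpu_read(tcpu, buf, true);
          if (len <= 0)
                  return;

          /* Assuming a 64-bit little endian target just for the example */
          kbuf = kbuffer_alloc(KBUFFER_LSIZE_8, KBUFFER_ENDIAN_LITTLE);
          if (!kbuf)
                  return;

          /* Parses the sub-buffer header that the returned length now covers */
          kbuffer_load_subbuffer(kbuf, buf);
          printf("%d bytes returned, event data starts at offset %d\n",
                 len, kbuffer_start_of_data(kbuf));
          kbuffer_free(kbuf);
  }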
src/tracefs-mmap.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/src/tracefs-mmap.c b/src/tracefs-mmap.c
index 0f90f51e9339..f481bda37eaf 100644
--- a/src/tracefs-mmap.c
+++ b/src/tracefs-mmap.c
@@ -207,5 +207,10 @@ __hidden int trace_mmap_read(void *mapping, void *buffer)
return ret;
/* Update the buffer */
- return kbuffer_read_buffer(kbuf, buffer, tmap->map->subbuf_size);
+ ret = kbuffer_read_buffer(kbuf, buffer, tmap->map->subbuf_size);
+ if (ret <= 0)
+ return ret;
+
+ /* This needs to include the size of the meta data too */
+ return ret + kbuffer_start_of_data(kbuf);
}
--
2.43.0
* [PATCH 3/7] libtracefs: Have nonblock tracefs_cpu reads set errno EAGAIN
From: Steven Rostedt @ 2024-01-10 2:51 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
If ring buffer memory mapping is used, and nothing is available to read on
the buffer, and nonblock is set, still update errno to EAGAIN.
Also always return subbuf_size when any data was read, and zero out the
rest of the buffer just like the kernel would do.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
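[ Illustrative note, not part of the patch: a hypothetical nonblocking poll
  loop relying on the EAGAIN behavior added here. ]

  #include <errno.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <tracefs.h>

  /* Hypothetical: poll one CPU without ever blocking in the library */
  static void poll_cpu(struct tracefs_cpu *tcpu)
  {
          char buf[tracefs_cpu_read_size(tcpu)];
          int ret;

          for (;;) {
                  errno = 0;
                  ret = tracefs_cpu_read(tcpu, buf, true);
                  if (ret > 0) {
                          /* Now always a full sub-buffer, zero filled past the data */
                          printf("read %d bytes\n", ret);
                          continue;
                  }
                  if (errno == EAGAIN) {
                          /* Nothing to read yet, come back later */
                          sleep(1);
                          continue;
                  }
                  break;  /* error, or nothing more will show up */
          }
  }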
src/tracefs-record.c | 25 ++++++++++++++++++++++---
1 file changed, 22 insertions(+), 3 deletions(-)
diff --git a/src/tracefs-record.c b/src/tracefs-record.c
index 881f49256dc2..59260da0e32c 100644
--- a/src/tracefs-record.c
+++ b/src/tracefs-record.c
@@ -402,6 +402,25 @@ static int wait_on_input(struct tracefs_cpu *tcpu, bool nonblock)
return FD_ISSET(tcpu->fd, &rfds);
}
+/* If nonblock is set, set errno to EAGAIN on no data */
+static int mmap_read(struct tracefs_cpu *tcpu, void *buffer, bool nonblock)
+{
+ void *mapping = tcpu->mapping;
+ int ret;
+
+ ret = trace_mmap_read(mapping, buffer);
+ if (ret <= 0) {
+ if (!ret && nonblock)
+ errno = EAGAIN;
+ return ret;
+ }
+
+ /* Write full sub-buffer size, but zero out empty space */
+ if (ret < tcpu->subbuf_size)
+ memset(buffer + ret, 0, tcpu->subbuf_size - ret);
+ return tcpu->subbuf_size;
+}
+
/**
* tracefs_cpu_read - read from the raw trace file
* @tcpu: The descriptor representing the raw trace file
@@ -433,7 +452,7 @@ int tracefs_cpu_read(struct tracefs_cpu *tcpu, void *buffer, bool nonblock)
return ret;
if (tcpu->mapping)
- return trace_mmap_read(tcpu->mapping, buffer);
+ return mmap_read(tcpu, buffer, nonblock);
ret = read(tcpu->fd, buffer, tcpu->subbuf_size);
@@ -572,7 +591,7 @@ int tracefs_cpu_buffered_read(struct tracefs_cpu *tcpu, void *buffer, bool nonbl
return ret;
if (tcpu->mapping)
- return trace_mmap_read(tcpu->mapping, buffer);
+ return mmap_read(tcpu, buffer, nonblock);
if (tcpu->flags & TC_NONBLOCK)
mode |= SPLICE_F_NONBLOCK;
@@ -704,7 +723,7 @@ int tracefs_cpu_flush(struct tracefs_cpu *tcpu, void *buffer)
tcpu->buffered = 0;
if (tcpu->mapping)
- return trace_mmap_read(tcpu->mapping, buffer);
+ return mmap_read(tcpu, buffer, false);
if (tcpu->buffered) {
ret = read(tcpu->splice_pipe[0], buffer, tcpu->subbuf_size);
--
2.43.0
* [PATCH 4/7] libtracefs: Fix tracefs_mmap() kbuf usage
From: Steven Rostedt @ 2024-01-10 2:51 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
The tracefs_mmap() function used the "kbuf" variable instead of the
"tmap->kbuf" variable which caused confusion later on.
The kbuf passed in is only used to make the "duplicate" for tmap->kbuf, so
once that is done, just assign kbuf = tmap->kbuf and use "kbuf" throughout
the rest of the function to avoid confusion.
Fixes: 2ed14b594f669 ("libtracefs: Add ring buffer memory mapping APIs")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
src/tracefs-mmap.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/tracefs-mmap.c b/src/tracefs-mmap.c
index f481bda37eaf..e0e37681e019 100644
--- a/src/tracefs-mmap.c
+++ b/src/tracefs-mmap.c
@@ -81,6 +81,7 @@ __hidden void *trace_mmap(int fd, struct kbuffer *kbuf)
munmap(meta, page_size);
free(tmap);
}
+ kbuf = tmap->kbuf;
tmap->fd = fd;
@@ -91,7 +92,7 @@ __hidden void *trace_mmap(int fd, struct kbuffer *kbuf)
munmap(meta, page_size);
meta = mmap(NULL, tmap->meta_len, PROT_READ, MAP_SHARED, fd, 0);
if (meta == MAP_FAILED) {
- kbuffer_free(tmap->kbuf);
+ kbuffer_free(kbuf);
free(tmap);
return NULL;
}
@@ -106,7 +107,7 @@ __hidden void *trace_mmap(int fd, struct kbuffer *kbuf)
fd, tmap->meta_len);
if (tmap->data == MAP_FAILED) {
munmap(meta, tmap->meta_len);
- kbuffer_free(tmap->kbuf);
+ kbuffer_free(kbuf);
free(tmap);
return NULL;
}
--
2.43.0
* [PATCH 5/7] libtracefs: Call mmap ioctl if a refresh happens
From: Steven Rostedt @ 2024-01-10 2:51 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
If the reader sub-buffer gets more data added to it because a writer is still
on the reader sub-buffer, just updating the kbuf is not enough. An ioctl()
needs to be called again to update the "read" pointer, otherwise, after
reading the full reader sub-buffer and calling the next ioctl(), the kernel
will not swap in a new reader sub-buffer as it still thinks there is more to
be read.
Fixes: 2ed14b594f669 ("libtracefs: Add ring buffer memory mapping APIs")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
src/tracefs-mmap.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/src/tracefs-mmap.c b/src/tracefs-mmap.c
index e0e37681e019..499233ae396c 100644
--- a/src/tracefs-mmap.c
+++ b/src/tracefs-mmap.c
@@ -143,6 +143,11 @@ __hidden void trace_unmap(void *mapping)
free(tmap);
}
+static int get_reader(struct trace_mmap *tmap)
+{
+ return ioctl(tmap->fd, TRACE_MMAP_IOCTL_GET_READER);
+}
+
__hidden int trace_mmap_load_subbuf(void *mapping, struct kbuffer *kbuf)
{
struct trace_mmap *tmap = mapping;
@@ -171,11 +176,18 @@ __hidden int trace_mmap_load_subbuf(void *mapping, struct kbuffer *kbuf)
kbuffer_refresh(kbuf);
/* Are there still events to read? */
- if (kbuffer_curr_size(kbuf))
+ if (kbuffer_curr_size(kbuf)) {
+ /* If current is greater than what was read, refresh */
+ if (kbuffer_curr_offset(kbuf) + kbuffer_curr_size(kbuf) >
+ tmap->map->reader.read) {
+ if (get_reader(tmap) < 0)
+ return -1;
+ }
return 1;
+ }
/* See if a new page is ready? */
- if (ioctl(tmap->fd, TRACE_MMAP_IOCTL_GET_READER) < 0)
+ if (get_reader(tmap) < 0)
return -1;
id = tmap->map->reader.id;
data = tmap->data + tmap->map->subbuf_size * id;
--
2.43.0
* [PATCH 6/7] libtracefs: Add tracefs_mapped_is_supported() API
From: Steven Rostedt @ 2024-01-10 2:51 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
Add tracefs_mapped_is_supported(), which returns true if tracefs mapping is
supported, and false if it is not or if an error occurred.
This is useful so that an application can decide to do things differently
depending on whether or not mapping is supported.
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
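[ Illustrative note, not part of the patch: a sketch of how an application
  might use the new API to pick its reading strategy up front. The CPU loop
  and error handling are only for the example. ]

  #include <stdio.h>
  #include <stdlib.h>
  #include <stdbool.h>
  #include <unistd.h>
  #include <tracefs.h>

  int main(void)
  {
          int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
          struct tracefs_cpu **tcpus;
          bool use_map;
          int cpu;

          use_map = tracefs_mapped_is_supported();
          printf("memory mapping is %ssupported\n", use_map ? "" : "not ");

          tcpus = calloc(nr_cpus, sizeof(*tcpus));
          if (!tcpus)
                  exit(EXIT_FAILURE);

          /* Map the per CPU buffers only when the kernel supports it */
          for (cpu = 0; cpu < nr_cpus; cpu++)
                  tcpus[cpu] = use_map ?
                          tracefs_cpu_open_mapped(NULL, cpu, false) :
                          tracefs_cpu_open(NULL, cpu, false);

          /* ... read from the per CPU buffers ... */

          for (cpu = 0; cpu < nr_cpus; cpu++) {
                  if (tcpus[cpu])
                          tracefs_cpu_close(tcpus[cpu]);
          }
          free(tcpus);
          return 0;
  }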
Documentation/libtracefs-cpu-map.txt | 9 ++++++++-
Documentation/libtracefs.txt | 1 +
include/tracefs.h | 1 +
src/tracefs-record.c | 19 +++++++++++++++++++
4 files changed, 29 insertions(+), 1 deletion(-)
diff --git a/Documentation/libtracefs-cpu-map.txt b/Documentation/libtracefs-cpu-map.txt
index ed62c96f83a1..d961123b55a8 100644
--- a/Documentation/libtracefs-cpu-map.txt
+++ b/Documentation/libtracefs-cpu-map.txt
@@ -3,7 +3,7 @@ libtracefs(3)
NAME
----
-tracefs_cpu_open_mapped, tracefs_cpu_is_mapped, tracefs_cpu_map, tracefs_cpu_unmap - Memory mapping of the ring buffer
+tracefs_cpu_open_mapped, tracefs_cpu_is_mapped, tracefs_mapped_is_supported, tracefs_cpu_map, tracefs_cpu_unmap - Memory mapping of the ring buffer
SYNOPSIS
--------
@@ -12,6 +12,7 @@ SYNOPSIS
*#include <tracefs.h>*
bool *tracefs_cpu_is_mapped*(struct tracefs_cpu pass:[*]tcpu);
+bool *tracefs_mapped_is_supported*(void);
int *tracefs_cpu_map*(struct tracefs_cpu pass:[*]tcpu);
void *tracefs_cpu_unmap*(struct tracefs_cpu pass:[*]tcpu);
struct tracefs_cpu pass:[*]*tracefs_cpu_open_mapped*(struct tracefs_instance pass:[*]instance,
@@ -45,6 +46,9 @@ its ring buffer memory mapped and false otherwise. This does not return whether
not that the kernel supports memory mapping, but that can usually be determined
by calling *tracefs_cpu_map()*.
+The *tracefs_mapped_is_supported()* function returns true if the ring buffer
+can be memory mapped.
+
The *tracefs_cpu_map()* function will attempt to map the ring buffer associated
to _tcpu_ if it is not already mapped.
@@ -62,6 +66,9 @@ RETURN VALUE
*tracefs_cpu_is_mapped()* returns true if the given _tcpu_ has its ring buffer
memory mapped or false otherwise.
+*tracefs_mapped_is_supported()* returns true if the tracing ring buffer can be
+memory mapped or false if it cannot be or an error occurred.
+
*tracefs_cpu_map()* returns 0 on success and -1 on error in mapping. If 0 is
returned then *tracefs_cpu_is_mapped()* will return true afterward, or false
if the mapping failed.
diff --git a/Documentation/libtracefs.txt b/Documentation/libtracefs.txt
index 9c3cc3ea3c06..860e2be7d96a 100644
--- a/Documentation/libtracefs.txt
+++ b/Documentation/libtracefs.txt
@@ -148,6 +148,7 @@ Trace stream:
Memory mapping the ring buffer:
bool *tracefs_cpu_is_mapped*(struct tracefs_cpu pass:[*]tcpu);
+ bool *tracefs_mapped_is_supported*(void);
int *tracefs_cpu_map*(struct tracefs_cpu pass:[*]tcpu);
void *tracefs_cpu_unmap*(struct tracefs_cpu pass:[*]tcpu);
struct tracefs_cpu pass:[*]*tracefs_cpu_open_mapped*(struct tracefs_instance pass:[*]instance,
diff --git a/include/tracefs.h b/include/tracefs.h
index 8569171247b7..b6e0f6b3c851 100644
--- a/include/tracefs.h
+++ b/include/tracefs.h
@@ -695,6 +695,7 @@ int tracefs_snapshot_free(struct tracefs_instance *instance);
/* Memory mapping of ring buffer */
bool tracefs_cpu_is_mapped(struct tracefs_cpu *tcpu);
+bool tracefs_mapped_is_supported(void);
int tracefs_cpu_map(struct tracefs_cpu *tcpu);
void tracefs_cpu_unmap(struct tracefs_cpu *tcpu);
struct tracefs_cpu *tracefs_cpu_open_mapped(struct tracefs_instance *instance,
diff --git a/src/tracefs-record.c b/src/tracefs-record.c
index 59260da0e32c..932e8b44775b 100644
--- a/src/tracefs-record.c
+++ b/src/tracefs-record.c
@@ -317,6 +317,25 @@ bool tracefs_cpu_is_mapped(struct tracefs_cpu *tcpu)
return tcpu->mapping != NULL;
}
+/**
+ * tracefs_mapped_is_supported - find out if memory mapping is supported
+ *
+ * Return true if the ring buffer can be memory mapped, or false if it
+ * cannot be or an error occurred.
+ */
+bool tracefs_mapped_is_supported(void)
+{
+ struct tracefs_cpu *tcpu;
+ bool ret;
+
+ tcpu = tracefs_cpu_open_mapped(NULL, 0, false);
+ if (!tcpu)
+ return false;
+ ret = tracefs_cpu_is_mapped(tcpu);
+ tracefs_cpu_close(tcpu);
+ return ret;
+}
+
int tracefs_cpu_map(struct tracefs_cpu *tcpu)
{
if (tcpu->mapping)
--
2.43.0
* [PATCH 7/7] libtracefs utest: Add tests to use mapping if supported
From: Steven Rostedt @ 2024-01-10 2:51 UTC (permalink / raw)
To: linux-trace-devel; +Cc: Vincent Donnefort, Steven Rostedt (Google)
From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
If memory mapping of the ring buffer is supported, also run the tests that
use tracefs_cpu_open() with tracefs_cpu_open_mapped().
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
---
utest/tracefs-utest.c | 69 ++++++++++++++++++++++++++++++-------------
1 file changed, 49 insertions(+), 20 deletions(-)
diff --git a/utest/tracefs-utest.c b/utest/tracefs-utest.c
index f5afec0e338a..963fac70846d 100644
--- a/utest/tracefs-utest.c
+++ b/utest/tracefs-utest.c
@@ -122,6 +122,8 @@ static struct test_sample test_array[TEST_ARRAY_SIZE];
static int test_found;
static unsigned long long last_ts;
+static bool mapping_is_supported;
+
static void msleep(int ms)
{
struct timespec tspec;
@@ -1098,7 +1100,7 @@ static int make_trace_temp_file(void)
return fd;
}
-static int setup_trace_cpu(struct tracefs_instance *instance, struct test_cpu_data *data, bool nonblock)
+static int setup_trace_cpu(struct tracefs_instance *instance, struct test_cpu_data *data, bool nonblock, bool map)
{
struct tep_format_field **fields;
struct tep_event *event;
@@ -1121,7 +1123,11 @@ static int setup_trace_cpu(struct tracefs_instance *instance, struct test_cpu_da
data->tep = test_tep;
- data->tcpu = tracefs_cpu_open(instance, 0, nonblock);
+ if (map)
+ data->tcpu = tracefs_cpu_open_mapped(instance, 0, nonblock);
+ else
+ data->tcpu = tracefs_cpu_open(instance, 0, nonblock);
+
CU_TEST(data->tcpu != NULL);
if (!data->tcpu)
goto fail;
@@ -1205,14 +1211,17 @@ static void shutdown_trace_cpu(struct test_cpu_data *data)
cleanup_trace_cpu(data);
}
-static void reset_trace_cpu(struct test_cpu_data *data, bool nonblock)
+static void reset_trace_cpu(struct test_cpu_data *data, bool nonblock, bool map)
{
close(data->fd);
tracefs_cpu_close(data->tcpu);
data->fd = make_trace_temp_file();
CU_TEST(data->fd >= 0);
- data->tcpu = tracefs_cpu_open(data->instance, 0, nonblock);
+ if (map)
+ data->tcpu = tracefs_cpu_open_mapped(data->instance, 0, nonblock);
+ else
+ data->tcpu = tracefs_cpu_open(data->instance, 0, nonblock);
CU_TEST(data->tcpu != NULL);
}
@@ -1252,11 +1261,11 @@ static void test_cpu_read(struct test_cpu_data *data, int expect)
CU_TEST(cnt == expect);
}
-static void test_instance_trace_cpu_read(struct tracefs_instance *instance)
+static void test_instance_trace_cpu_read(struct tracefs_instance *instance, bool map)
{
struct test_cpu_data data;
- if (setup_trace_cpu(instance, &data, true))
+ if (setup_trace_cpu(instance, &data, true, map))
return;
test_cpu_read(&data, 1);
@@ -1270,8 +1279,13 @@ static void test_instance_trace_cpu_read(struct tracefs_instance *instance)
static void test_trace_cpu_read(void)
{
- test_instance_trace_cpu_read(NULL);
- test_instance_trace_cpu_read(test_instance);
+ test_instance_trace_cpu_read(NULL, false);
+ if (mapping_is_supported)
+ test_instance_trace_cpu_read(NULL, true);
+
+ test_instance_trace_cpu_read(test_instance, false);
+ if (mapping_is_supported)
+ test_instance_trace_cpu_read(test_instance, true);
}
static void *trace_cpu_read_thread(void *arg)
@@ -1292,6 +1306,7 @@ static void *trace_cpu_read_thread(void *arg)
static void test_cpu_read_buf_percent(struct test_cpu_data *data, int percent)
{
+ char buffer[tracefs_cpu_read_size(data->tcpu)];
pthread_t thread;
int save_percent;
ssize_t expect;
@@ -1343,7 +1358,7 @@ static void test_cpu_read_buf_percent(struct test_cpu_data *data, int percent)
CU_TEST(data->done == true);
- while (tracefs_cpu_flush_buf(data->tcpu))
+ while (tracefs_cpu_flush(data->tcpu, buffer))
;
tracefs_cpu_stop(data->tcpu);
@@ -1353,24 +1368,24 @@ static void test_cpu_read_buf_percent(struct test_cpu_data *data, int percent)
CU_TEST(ret == 0);
}
-static void test_instance_trace_cpu_read_buf_percent(struct tracefs_instance *instance)
+static void test_instance_trace_cpu_read_buf_percent(struct tracefs_instance *instance, bool map)
{
struct test_cpu_data data;
- if (setup_trace_cpu(instance, &data, false))
+ if (setup_trace_cpu(instance, &data, false, map))
return;
test_cpu_read_buf_percent(&data, 0);
- reset_trace_cpu(&data, false);
+ reset_trace_cpu(&data, false, map);
test_cpu_read_buf_percent(&data, 1);
- reset_trace_cpu(&data, false);
+ reset_trace_cpu(&data, false, map);
test_cpu_read_buf_percent(&data, 50);
- reset_trace_cpu(&data, false);
+ reset_trace_cpu(&data, false, map);
test_cpu_read_buf_percent(&data, 100);
@@ -1379,8 +1394,12 @@ static void test_instance_trace_cpu_read_buf_percent(struct tracefs_instance *in
static void test_trace_cpu_read_buf_percent(void)
{
- test_instance_trace_cpu_read_buf_percent(NULL);
- test_instance_trace_cpu_read_buf_percent(test_instance);
+ test_instance_trace_cpu_read_buf_percent(NULL, false);
+ if (mapping_is_supported)
+ test_instance_trace_cpu_read_buf_percent(NULL, true);
+ test_instance_trace_cpu_read_buf_percent(test_instance, false);
+ if (mapping_is_supported)
+ test_instance_trace_cpu_read_buf_percent(test_instance, true);
}
struct follow_data {
@@ -1916,11 +1935,11 @@ static void test_cpu_pipe(struct test_cpu_data *data, int expect)
CU_TEST(cnt == expect);
}
-static void test_instance_trace_cpu_pipe(struct tracefs_instance *instance)
+static void test_instance_trace_cpu_pipe(struct tracefs_instance *instance, bool map)
{
struct test_cpu_data data;
- if (setup_trace_cpu(instance, &data, true))
+ if (setup_trace_cpu(instance, &data, true, map))
return;
test_cpu_pipe(&data, 1);
@@ -1934,8 +1953,12 @@ static void test_instance_trace_cpu_pipe(struct tracefs_instance *instance)
static void test_trace_cpu_pipe(void)
{
- test_instance_trace_cpu_pipe(NULL);
- test_instance_trace_cpu_pipe(test_instance);
+ test_instance_trace_cpu_pipe(NULL, false);
+ if (mapping_is_supported)
+ test_instance_trace_cpu_pipe(NULL, true);
+ test_instance_trace_cpu_pipe(test_instance, false);
+ if (mapping_is_supported)
+ test_instance_trace_cpu_pipe(test_instance, true);
}
static struct tracefs_dynevent **get_dynevents_check(enum tracefs_dynevent_type types, int count)
@@ -3557,6 +3580,12 @@ static int test_suite_init(void)
if (!test_instance)
return 1;
+ mapping_is_supported = tracefs_mapped_is_supported();
+ if (mapping_is_supported)
+ printf("Testing mmapped buffers too\n");
+ else
+ printf("Memory mapped buffers not supported\n");
+
/* Start with a new slate */
tracefs_instance_reset(NULL);
--
2.43.0