* [for-next][PATCH 01/13] x86_64: Add gap to int3 to allow for call emulation
From: Steven Rostedt @ 2019-05-08 20:24 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, stable, Nicolai Stange,
Masami Hiramatsu, Josh Poimboeuf
From: Josh Poimboeuf <jpoimboe@redhat.com>
To allow an int3 handler to emulate a call instruction, it must be able to
push a return address onto the stack. Add a gap to the stack to allow the
int3 handler to push the return address and to change the return from the
int3 so that it jumps straight to the target of the emulated call.
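As an illustration only (not part of this patch), here is a minimal C sketch
of how an int3 handler can use such a gap; the helper below is hypothetical
and merely mirrors the int3_emulate_*() helpers added later in this series:

	/*
	 * Hypothetical sketch: with a gap below the saved pt_regs, the int3
	 * handler can grow the interrupted stack and redirect the return,
	 * which together emulate "call func" at the breakpoint address.
	 */
	static void sketch_emulate_call(struct pt_regs *regs, unsigned long func)
	{
		/* Push the return address into the gap created by entry_64.S. */
		regs->sp -= sizeof(unsigned long);
		*(unsigned long *)regs->sp = regs->ip - 1 + 5;	/* just past the 5-byte call */

		/* Returning from the int3 now lands in the emulated call target. */
		regs->ip = func;
	}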
Link: http://lkml.kernel.org/r/20181130183917.hxmti5josgq4clti@treble
Link: http://lkml.kernel.org/r/20190502162133.GX2623@hirez.programming.kicks-ass.net
[
Note, this is needed to allow Live Kernel Patching to not miss calling a
patched function when tracing is enabled. -- Steven Rostedt
]
Cc: stable@vger.kernel.org
Fixes: b700e7f03df5 ("livepatch: kernel: add support for live patching")
Tested-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
arch/x86/entry/entry_64.S | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 1f0efdb7b629..27fcc6fbdd52 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -879,7 +879,7 @@ apicinterrupt IRQ_WORK_VECTOR irq_work_interrupt smp_irq_work_interrupt
* @paranoid == 2 is special: the stub will never switch stacks. This is for
* #DF: if the thread stack is somehow unusable, we'll still get a useful OOPS.
*/
-.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
+.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1 create_gap=0
ENTRY(\sym)
UNWIND_HINT_IRET_REGS offset=\has_error_code*8
@@ -899,6 +899,20 @@ ENTRY(\sym)
jnz .Lfrom_usermode_switch_stack_\@
.endif
+ .if \create_gap == 1
+ /*
+ * If coming from kernel space, create a 6-word gap to allow the
+ * int3 handler to emulate a call instruction.
+ */
+ testb $3, CS-ORIG_RAX(%rsp)
+ jnz .Lfrom_usermode_no_gap_\@
+ .rept 6
+ pushq 5*8(%rsp)
+ .endr
+ UNWIND_HINT_IRET_REGS offset=8
+.Lfrom_usermode_no_gap_\@:
+ .endif
+
.if \paranoid
call paranoid_entry
.else
@@ -1130,7 +1144,7 @@ apicinterrupt3 HYPERV_STIMER0_VECTOR \
#endif /* CONFIG_HYPERV */
idtentry debug do_debug has_error_code=0 paranoid=1 shift_ist=DEBUG_STACK
-idtentry int3 do_int3 has_error_code=0
+idtentry int3 do_int3 has_error_code=0 create_gap=1
idtentry stack_segment do_stack_segment has_error_code=1
#ifdef CONFIG_XEN_PV
--
2.20.1
* [for-next][PATCH 02/13] x86_64: Allow breakpoints to emulate call instructions
From: Steven Rostedt @ 2019-05-08 20:24 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Andy Lutomirski, Nicolai Stange,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
the arch/x86 maintainers, Josh Poimboeuf, Jiri Kosina,
Miroslav Benes, Petr Mladek, Joe Lawrence, Shuah Khan,
Konrad Rzeszutek Wilk, Tim Chen, Sebastian Andrzej Siewior,
Mimi Zohar, Juergen Gross, Nick Desaulniers, Nayna Jain,
Masahiro Yamada, Joerg Roedel,
open list:KERNEL SELFTEST FRAMEWORK, stable, Masami Hiramatsu,
Peter Zijlstra (Intel)
From: Peter Zijlstra <peterz@infradead.org>
In order to allow breakpoints to emulate call instructions, they need to push
a return address onto the stack. The x86_64 int3 handler adds a small gap to
allow the stack to grow. Use this gap to push the return address, making it
possible to emulate a call instruction at the breakpoint location.
These helper functions are added:
int3_emulate_jmp():  changes regs->ip so the return from int3 jumps there
int3_emulate_push(): pushes a value into the gap in the stack (x86_64 only)
int3_emulate_call(): pushes the return address and changes regs->ip (x86_64 only)
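A hedged usage sketch (simplified from how the next patch in this series uses
these helpers from ftrace's int3 handler; the function below is illustrative,
not the exact kernel code):

	#include <asm/text-patching.h>

	/*
	 * Redirect a breakpointed call site so that the intended target is
	 * still called instead of nothing being called at all.
	 */
	static int sketch_handle_int3(struct pt_regs *regs, unsigned long target)
	{
		/*
		 * regs->ip points one byte past the int3.  int3_emulate_call()
		 * pushes the address just after the original 5-byte call and
		 * then jumps to the target, so when the target returns,
		 * execution resumes after the patched call site.
		 */
		int3_emulate_call(regs, target);
		return 1;	/* handled */
	}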
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Nicolai Stange <nstange@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: the arch/x86 maintainers <x86@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Joe Lawrence <joe.lawrence@redhat.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Mimi Zohar <zohar@linux.ibm.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Nayna Jain <nayna@linux.ibm.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: "open list:KERNEL SELFTEST FRAMEWORK" <linux-kselftest@vger.kernel.org>
Cc: stable@vger.kernel.org
Fixes: b700e7f03df5 ("livepatch: kernel: add support for live patching")
Tested-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[ Modified to only work for x86_64 and added comment to int3_emulate_push() ]
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
arch/x86/include/asm/text-patching.h | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
index e85ff65c43c3..05861cc08787 100644
--- a/arch/x86/include/asm/text-patching.h
+++ b/arch/x86/include/asm/text-patching.h
@@ -39,4 +39,32 @@ extern int poke_int3_handler(struct pt_regs *regs);
extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
extern int after_bootmem;
+static inline void int3_emulate_jmp(struct pt_regs *regs, unsigned long ip)
+{
+ regs->ip = ip;
+}
+
+#define INT3_INSN_SIZE 1
+#define CALL_INSN_SIZE 5
+
+#ifdef CONFIG_X86_64
+static inline void int3_emulate_push(struct pt_regs *regs, unsigned long val)
+{
+ /*
+ * The int3 handler in entry_64.S adds a gap between the
+ * stack where the break point happened, and the saving of
+ * pt_regs. We can extend the original stack because of
+ * this gap. See the idtentry macro's create_gap option.
+ */
+ regs->sp -= sizeof(unsigned long);
+ *(unsigned long *)regs->sp = val;
+}
+
+static inline void int3_emulate_call(struct pt_regs *regs, unsigned long func)
+{
+ int3_emulate_push(regs, regs->ip - INT3_INSN_SIZE + CALL_INSN_SIZE);
+ int3_emulate_jmp(regs, func);
+}
+#endif
+
#endif /* _ASM_X86_TEXT_PATCHING_H */
--
2.20.1
* [for-next][PATCH 03/13] ftrace/x86_64: Emulate call function while updating in breakpoint handler
From: Steven Rostedt @ 2019-05-08 20:24 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Andy Lutomirski, Nicolai Stange,
Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
the arch/x86 maintainers, Josh Poimboeuf, Jiri Kosina,
Miroslav Benes, Petr Mladek, Joe Lawrence, Shuah Khan,
Konrad Rzeszutek Wilk, Tim Chen, Sebastian Andrzej Siewior,
Mimi Zohar, Juergen Gross, Nick Desaulniers, Nayna Jain,
Masahiro Yamada, Joerg Roedel,
open list:KERNEL SELFTEST FRAMEWORK, stable, Masami Hiramatsu,
Peter Zijlstra (Intel)
From: Peter Zijlstra <peterz@infradead.org>
Nicolai Stange discovered[1] that if live kernel patching is enabled, and the
function tracer starts tracing the same function that was patched, then while
the fentry call site is being converted from calling the live kernel patch
trampoline to calling the ftrace iterator trampoline, there is a slight window
where it does not call anything at all.
Live kernel patching depends on ftrace always calling its code (to prevent
the original, buggy function from being executed, since ftrace redirects the
call), so this small window allows the old buggy function to be called, and
this can cause undesirable results.
Nicolai submitted new patches[2], but these were controversial, as the problem
is similar to the static call emulation issues that came up a while ago[3].
After some debate[4][5], the solution chosen was to add a gap to the stack
when entering the breakpoint handler, which allows a return address to be
pushed onto the stack and thus makes it easy to emulate a call.
[1] http://lkml.kernel.org/r/20180726104029.7736-1-nstange@suse.de
[2] http://lkml.kernel.org/r/20190427100639.15074-1-nstange@suse.de
[3] http://lkml.kernel.org/r/3cf04e113d71c9f8e4be95fb84a510f085aa4afa.1541711457.git.jpoimboe@redhat.com
[4] http://lkml.kernel.org/r/CAHk-=wh5OpheSU8Em_Q3Hg8qw_JtoijxOdPtHru6d+5K8TWM=A@mail.gmail.com
[5] http://lkml.kernel.org/r/CAHk-=wjvQxY4DvPrJ6haPgAa6b906h=MwZXO6G8OtiTGe=N7_w@mail.gmail.com
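The following is an illustrative sketch (not the literal kernel code) of what
the breakpoint handler does while a call site is being rewritten; the rewrite
installs an int3 over the first byte, updates the tail bytes, then writes the
new first byte, syncing CPUs between the steps, and any CPU that hits the int3
in that window ends up here:

	/* Sketch only; the real handler lives in arch/x86/kernel/ftrace.c. */
	static int sketch_ftrace_int3_handler(struct pt_regs *regs)
	{
		unsigned long ip = regs->ip - INT3_INSN_SIZE;

		if (!ftrace_location(ip))
			return 0;

		/* Old behaviour: skip the call site entirely -- the window. */
		/* regs->ip = ip + CALL_INSN_SIZE; */

		/* New behaviour: emulate the call so nothing is ever missed. */
		int3_emulate_call(regs, (unsigned long)ftrace_regs_caller);
		return 1;
	}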
[
Live kernel patching is not implemented on x86_32, thus the emulate
calls are only for x86_64.
]
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Nicolai Stange <nstange@suse.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: the arch/x86 maintainers <x86@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Jiri Kosina <jikos@kernel.org>
Cc: Miroslav Benes <mbenes@suse.cz>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Joe Lawrence <joe.lawrence@redhat.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Mimi Zohar <zohar@linux.ibm.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Nick Desaulniers <ndesaulniers@google.com>
Cc: Nayna Jain <nayna@linux.ibm.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: "open list:KERNEL SELFTEST FRAMEWORK" <linux-kselftest@vger.kernel.org>
Cc: stable@vger.kernel.org
Fixes: b700e7f03df5 ("livepatch: kernel: add support for live patching")
Tested-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Nicolai Stange <nstange@suse.de>
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
[ Changed to only implement emulated calls for x86_64 ]
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
arch/x86/kernel/ftrace.c | 32 +++++++++++++++++++++++++++-----
1 file changed, 27 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index ef49517f6bb2..bd553b3af22e 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -29,6 +29,7 @@
#include <asm/kprobes.h>
#include <asm/ftrace.h>
#include <asm/nops.h>
+#include <asm/text-patching.h>
#ifdef CONFIG_DYNAMIC_FTRACE
@@ -231,6 +232,7 @@ int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
}
static unsigned long ftrace_update_func;
+static unsigned long ftrace_update_func_call;
static int update_ftrace_func(unsigned long ip, void *new)
{
@@ -259,6 +261,8 @@ int ftrace_update_ftrace_func(ftrace_func_t func)
unsigned char *new;
int ret;
+ ftrace_update_func_call = (unsigned long)func;
+
new = ftrace_call_replace(ip, (unsigned long)func);
ret = update_ftrace_func(ip, new);
@@ -294,13 +298,28 @@ int ftrace_int3_handler(struct pt_regs *regs)
if (WARN_ON_ONCE(!regs))
return 0;
- ip = regs->ip - 1;
- if (!ftrace_location(ip) && !is_ftrace_caller(ip))
- return 0;
+ ip = regs->ip - INT3_INSN_SIZE;
- regs->ip += MCOUNT_INSN_SIZE - 1;
+#ifdef CONFIG_X86_64
+ if (ftrace_location(ip)) {
+ int3_emulate_call(regs, (unsigned long)ftrace_regs_caller);
+ return 1;
+ } else if (is_ftrace_caller(ip)) {
+ if (!ftrace_update_func_call) {
+ int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
+ return 1;
+ }
+ int3_emulate_call(regs, ftrace_update_func_call);
+ return 1;
+ }
+#else
+ if (ftrace_location(ip) || is_ftrace_caller(ip)) {
+ int3_emulate_jmp(regs, ip + CALL_INSN_SIZE);
+ return 1;
+ }
+#endif
- return 1;
+ return 0;
}
NOKPROBE_SYMBOL(ftrace_int3_handler);
@@ -859,6 +878,8 @@ void arch_ftrace_update_trampoline(struct ftrace_ops *ops)
func = ftrace_ops_get_func(ops);
+ ftrace_update_func_call = (unsigned long)func;
+
/* Do a safe modify in case the trampoline is executing */
new = ftrace_call_replace(ip, (unsigned long)func);
ret = update_ftrace_func(ip, new);
@@ -960,6 +981,7 @@ static int ftrace_mod_jmp(unsigned long ip, void *func)
{
unsigned char *new;
+ ftrace_update_func_call = 0UL;
new = ftrace_jmp_replace(ip, (unsigned long)func);
return update_ftrace_func(ip, new);
--
2.20.1
* [for-next][PATCH 06/13] tracing: probeevent: Fix to make the type of $comm string
From: Steven Rostedt @ 2019-05-08 20:24 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Andreas Ziegler, Ingo Molnar, stable,
Masami Hiramatsu
From: Masami Hiramatsu <mhiramat@kernel.org>
Fix the type of $comm so that it is always "string". If any other type is set
on the $comm argument, a meaningless value or wrong data is shown. Currently
probe events allow setting a string array type (e.g. ":string[2]") or numeric
types such as x8 on $comm. But since $comm is clearly string data, it should
not be fetched with any other type, including arrays.
Link: http://lkml.kernel.org/r/155723736241.9149.14582064184468574539.stgit@devnote2
Cc: Andreas Ziegler <andreas.ziegler@fau.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: stable@vger.kernel.org
Fixes: 533059281ee5 ("tracing: probeevent: Introduce new argument fetching code")
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
kernel/trace/trace_probe.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
index 4cc2d467d34c..e0d1d5353464 100644
--- a/kernel/trace/trace_probe.c
+++ b/kernel/trace/trace_probe.c
@@ -533,13 +533,14 @@ static int traceprobe_parse_probe_arg_body(char *arg, ssize_t *size,
}
}
}
- /*
- * The default type of $comm should be "string", and it can't be
- * dereferenced.
- */
- if (!t && strcmp(arg, "$comm") == 0)
+
+ /* Since $comm can not be dereferred, we can find $comm by strcmp */
+ if (strcmp(arg, "$comm") == 0) {
+ /* The type of $comm must be "string", and not an array. */
+ if (parg->count || (t && strcmp(t, "string")))
+ return -EINVAL;
parg->type = find_fetch_type("string");
- else
+ } else
parg->type = find_fetch_type(t);
if (!parg->type) {
trace_probe_log_err(offset + (t ? (t - arg) : 0), BAD_TYPE);
--
2.20.1
* [for-next][PATCH 11/13] tracing: Fix partial reading of trace events id file
From: Steven Rostedt @ 2019-05-08 20:24 UTC
To: linux-kernel
Cc: Ingo Molnar, Andrew Morton, Orit Wasserman, Oleg Nesterov,
Ingo Molnar, stable, Elazar Leibovich
From: Elazar Leibovich <elazar@lightbitslabs.com>
When reading only part of the id file, ppos isn't tracked correctly. This is
taken care of by simple_read_from_buffer().
Reading a single byte, and then the next byte, would result in EOF. While this
may seem like a minor issue, it breaks abstractions that read information from
files unbuffered. See for example
https://github.com/golang/go/issues/29399
This code was mentioned as problematic in commit cd458ba9d5a5
("tracing: Do not (ab)use trace_seq in event_id_read()")
An example C program that shows this bug is:

	#include <stdio.h>
	#include <stdint.h>
	#include <sys/types.h>
	#include <sys/stat.h>
	#include <fcntl.h>
	#include <unistd.h>

	int main(int argc, char **argv) {
		if (argc < 2)
			return 1;
		int fd = open(argv[1], O_RDONLY);
		char c;
		read(fd, &c, 1);
		printf("First %c\n", c);
		read(fd, &c, 1);
		printf("Second %c\n", c);
	}
Then run with, e.g.
sudo ./a.out /sys/kernel/debug/tracing/events/tcp/tcp_set_state/id
You'll notice you're getting the first character twice, instead of the
first two characters in the id file.
Link: http://lkml.kernel.org/r/20181231115837.4932-1-elazar@lightbitslabs.com
Cc: Orit Wasserman <orit.was@gmail.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: stable@vger.kernel.org
Fixes: 23725aeeab10b ("ftrace: provide an id file for each event")
Signed-off-by: Elazar Leibovich <elazar@lightbitslabs.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
---
kernel/trace/trace_events.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
index 81c038ed6cee..0ce3db67f556 100644
--- a/kernel/trace/trace_events.c
+++ b/kernel/trace/trace_events.c
@@ -1319,9 +1319,6 @@ event_id_read(struct file *filp, char __user *ubuf, size_t cnt, loff_t *ppos)
char buf[32];
int len;
- if (*ppos)
- return 0;
-
if (unlikely(!id))
return -ENODEV;
--
2.20.1