From: Indu Bhagat <indu.bhagat@oracle.com>
To: Josh Poimboeuf <jpoimboe@kernel.org>,
Andrii Nakryiko <andrii.nakryiko@gmail.com>
Cc: x86@kernel.org, Peter Zijlstra <peterz@infradead.org>,
Steven Rostedt <rostedt@goodmis.org>,
Ingo Molnar <mingo@kernel.org>,
Arnaldo Carvalho de Melo <acme@kernel.org>,
linux-kernel@vger.kernel.org, Mark Rutland <mark.rutland@arm.com>,
Alexander Shishkin <alexander.shishkin@linux.intel.com>,
Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
Ian Rogers <irogers@google.com>,
Adrian Hunter <adrian.hunter@intel.com>,
linux-perf-users@vger.kernel.org, Mark Brown <broonie@kernel.org>,
linux-toolchains@vger.kernel.org, Jordan Rome <jordalgo@meta.com>,
Sam James <sam@gentoo.org>,
linux-trace-kernel@vger.kernel.org,
Jens Remus <jremus@linux.ibm.com>,
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
Florian Weimer <fweimer@redhat.com>,
Andy Lutomirski <luto@kernel.org>,
Masami Hiramatsu <mhiramat@kernel.org>,
Weinan Liu <wnliu@google.com>
Subject: Re: [PATCH v4 17/39] unwind_user/sframe: Add support for reading .sframe headers
Date: Thu, 30 Jan 2025 13:39:52 -0800
Message-ID: <e4a52247-af59-4c0f-9215-00ee83a61339@oracle.com>
In-Reply-To: <20250129020249.owmklacvuvss7z7n@jpoimboe>
On 1/28/25 6:02 PM, Josh Poimboeuf wrote:
> On Mon, Jan 27, 2025 at 05:10:27PM -0800, Andrii Nakryiko wrote:
>>> Yes, in theory, it is allowed (as per the specification) to have an
>>> SFrame section with zero number of FDEs/FREs. But since such a section
>>> will not be useful, I share the opinion that it makes sense to disallow
>>> it in the current unwinding contexts, for now (JIT usecase may change
>>> things later).
>>>
>>
>> I disagree, actually. If it's a legal thing, it shouldn't be randomly
>> rejected. If we later make use of that, we'd have to worry not to
>> accidentally cause problems on older kernels that arbitrarily rejected
>> empty FDE just because it didn't make sense at some point (without
>> causing any issues).
>
> If such older kernels don't do anything with the section anyway, what's
> the point of pretending they do?
>
> Returning an error would actually make more sense as it communicates
> that the kernel doesn't support whatever hypothetical thing you're
> trying to do with 0 FDEs.
>
>>> SFRAME_F_FRAME_POINTER flag is not being set currently by GAS/GNU ld at all.
>>>
>>>>>> + dbg("no fde/fre entries\n");
>>>>>> + return -EINVAL;
>>>>>> + }
>>>>>> +
>>>>>> + header_end = sec->sframe_start + SFRAME_HEADER_SIZE(shdr);
>>>>>> + if (header_end >= sec->sframe_end) {
>>>>>
>>>>> if we allow zero FDEs/FREs, header_end == sec->sframe_end is legal, right?
>>>>
>>>> I suppose so, but again I'm not seeing any reason to support that.
>>
>> Let's invert this. Is there any reason why it shouldn't be supported? ;)
>
> It's simple, we don't add code to "support" some vague hypothetical.
>
> For whatever definition of "support", since there's literally nothing
> the kernel can do with that.
>
>> But what if we take a struct-of-arrays approach and represent it more like:
>>
>> struct FDE_and_FREs {
>> struct sframe_func_desc_entry fde_metadata;
>> u8|u16|u32 start_addrs[N]; /* can extend to u64 as well */
>> u8 sfre_infos[N];
>> u8 offsets8[M8];
>> u16 offsets16[M16] __aligned(2);
>> u32 offsets32[M32] __aligned(4);
>> /* we can naturally extend to support also u64 offsets */
>> };
>>
>> i.e., we split all FRE records into their three constituents: start
>> addresses, info bytes, and then each FRE can fall into either 8-, 16-,
>> or 32-bit offsets "bucket". We collect all the offsets, depending on
>> their size, into these aligned offsets{8,16,32} arrays (with natural
>> extension to 64 bits, if necessary), with at most wasting 1-3 bytes to
>> ensure proper alignment everywhere.
>
> Makes sense. Though I also have another idea below.
>
>> Note, at this point we need to decide if we want to make FREs binary
>> searchable or not.
>>
>> If not, we don't really need anything extra. As we process each
>> start_addrs[i] and sfre_infos[i] to find matching FRE, we keep track
>> of how many 8-, 16-, and 32-bit offsets already processed FREs
>> consumed, and when we find the right one, we know exactly the starting
>> index within offset{8,16,32}. Done.
>>
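FWIW, a minimal sketch of that bookkeeping could look something like the below (fixed u32 start addresses for brevity; sfre_info_width()/sfre_info_count() are hypothetical decode helpers, not existing SFrame APIs):

struct fre_loc {
        u8 width;               /* 1, 2 or 4 bytes per offset */
        u32 first_offset;       /* index into offsets8/16/32[] */
        u8 nr_offsets;
};

static int find_fre_linear(const u32 *start_addrs, const u8 *sfre_infos,
                           u32 nr_fres, u32 ip_off, struct fre_loc *loc)
{
        u32 consumed[3] = { 0 };        /* running totals for the 8/16/32-bit buckets */
        int match = -1;
        u32 i;

        for (i = 0; i < nr_fres && start_addrs[i] <= ip_off; i++) {
                u8 w = sfre_info_width(sfre_infos[i]);  /* 1, 2 or 4 */
                u8 n = sfre_info_count(sfre_infos[i]);  /* nr of offsets in FRE i */

                match = i;
                loc->width = w;
                loc->first_offset = consumed[w / 2];    /* 1 -> [0], 2 -> [1], 4 -> [2] */
                loc->nr_offsets = n;
                consumed[w / 2] += n;
        }
        return match;   /* -1: ip_off precedes the first FRE */
}
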
>> But if we were to make FREs binary searchable, we need to basically
>> have an index of offset pointers to quickly find offsetsX[j] position
>> corresponding to FRE #i. For that, we can have an extra array right
>> next to start_addrs, "semantically parallel" to it:
>>
>> u8|u16|u32 start_addrs[N];
>> u8|u16|u32 offset_idxs[N];
>
> Binary search would definitely help. I did a crude histogram for "FREs
> per FDE" for a few binaries on my test system:
>
> gdb (the biggest binary on my test system):
>
> 10th Percentile: 1
> 20th Percentile: 1
> 30th Percentile: 1
> 40th Percentile: 3
> 50th Percentile: 5
> 60th Percentile: 8
> 70th Percentile: 11
> 80th Percentile: 14
> 90th Percentile: 16
> 100th Percentile: 472
>
> bash:
>
> 10th Percentile: 1
> 20th Percentile: 1
> 30th Percentile: 3
> 40th Percentile: 5
> 50th Percentile: 7
> 60th Percentile: 9
> 70th Percentile: 12
> 80th Percentile: 16
> 90th Percentile: 17
> 100th Percentile: 46
>
> glibc:
>
> 10th Percentile: 1
> 20th Percentile: 1
> 30th Percentile: 1
> 40th Percentile: 1
> 50th Percentile: 4
> 60th Percentile: 6
> 70th Percentile: 9
> 80th Percentile: 14
> 90th Percentile: 16
> 100th Percentile: 44
>
> libpython:
>
> 10th Percentile: 1
> 20th Percentile: 3
> 30th Percentile: 4
> 40th Percentile: 6
> 50th Percentile: 8
> 60th Percentile: 11
> 70th Percentile: 12
> 80th Percentile: 16
> 90th Percentile: 20
> 100th Percentile: 112
>
> So binary search would help in a lot of cases.
>
Thanks for gathering this.

I suspect the numbers on aarch64 will be quite different (leaning towards a very low number of FREs per FDE).

Making SFrame FREs amenable to binary search is something we can target. Both your and Andrii's proposals address that...
> However, if we're going that route, we might want to even consider a
> completely revamped data layout. For example:
>
> One insight is that the vast majority of (cfa, fp, ra) tuples aren't
> unique. They could be deduped by storing the unique tuples in a
> standalone 'fre_data' array which is referenced by another
> address-specific array.
>
> struct fre_data {
> s8|s16|s32 cfa, fp, ra;
> u8 info;
> };
> struct fre_data fre_data[num_fre_data];
>
We had the same observation at the time of SFrame V1, and this method of compaction (deduped tuples) was brainstormed a bit. Back then, the costs were thought to be:
- more work at build time.
- an additional data access once the FRE is found (as there is an indirection).

So it was really compaction at the costs above. We steered towards simplicity, and the SFrame FRE is what it is today.

What has changed in the pros and cons since then:
- pros: helps mitigate unaligned accesses.
- cons: interferes slightly with the design goal of efficient addition and removal of stack trace information per function for JIT. Think of "removal" as the set of actions necessary for addressing fragmentation in the SFrame section data in the JIT usecase.
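Just to spell out the indirection: with the deduped fre_data[] above and an address-keyed entry like the one you sketch further down (fixing its widths at s32/u16 for brevity), every frame needs two dependent accesses, roughly:

struct fre {
        s32 start_address;
        u16 fre_data_idx;
} __packed;

/* find_last_le() is a stand-in for the binary search over start_address */
static const struct fre_data *lookup_fre_data(const struct fre *fres, u32 nr_fres,
                                              const struct fre_data *fre_data,
                                              s32 ip_off)
{
        int i = find_last_le(fres, nr_fres, ip_off);    /* 1st access: address search */

        if (i < 0)
                return NULL;

        return &fre_data[fres[i].fre_data_idx];         /* 2nd access: the indirection */
}
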
> The storage sizes of cfa/fp/ra can be a constant specified in the global
> sframe header. It's fine all being the same size as it looks like this
> array wouldn't typically be more than a few hundred entries anyway.
>
> Then there would be an array of sorted FRE entries which reference the
> fre_data[] entries:
>
> struct fre {
> s32|s64 start_address;
> u8|u16 fre_data_idx;
>
> } __packed;
> struct fre fres[num_fres];
>
> (For alignment reasons that should probably be two separate arrays, even
> though not ideal for cache locality)
>
> Here again the field storage sizes would be specified in the global
> sframe header.
>
> Note FDEs aren't even needed here as the unwinder doesn't need to know
> when a function begins/ends. The only info needed by the unwinder is
> just the fre_data struct. So a simple binary search of fres[] is all
> that's really needed.
>
Splitting the start_address out into an FDE (as done in V1/V2) has the benefit that a job like relocating the information is O(NumFunctions). In the proposal above, IIUC, where the start_address moves into the FRE, that cost becomes (much) higher.

In addition, not being able to identify the stack trace information per function will affect the JIT usecase: we need to be able to mark the stack trace information for a function stale in a JIT environment.

I think the first conceptual landing point in the information layout should be a per-function entry.
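To illustrate the relocation point with a rough sketch (struct and field names are illustrative, not the exact SFrame definitions): with per-function entries holding the only section-relative addresses, moving a blob of code by some delta only touches the FDE array, while the FREs stay untouched because their start addresses are function-relative:

struct sfde {
        s32 func_start_address;         /* the only address that moves */
        u32 func_size;
        u32 func_num_fres;
        /* ... */
};

static void sframe_relocate(struct sfde *fdes, u32 num_fdes, s32 delta)
{
        u32 i;

        for (i = 0; i < num_fdes; i++)  /* O(NumFunctions); FREs untouched */
                fdes[i].func_start_address += delta;
}
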
> But wait, there's more...
>
> The binary search could be made significantly faster using a small fast
> lookup array which is indexed evenly across the text address offset
> space, similar to what ORC does:
>
> u32 fre_chunks[num_chunks];
>
> The text address space (starting at offset 0) can be split into
> 'num_chunks' chunks of size 'chunk_size'. The value of
> fre_chunks[offset/chunk_size] is an index into the fres[] array.
>
> Taking my gdb binary as an example:
>
> .text is 6417058 bytes, with 146997 total sframe FREs. Assuming a chunk
> size of 1024, fre_chunks[] needs 6417058/1024 = 6267 entries.
>
> For unwinding at text offset 0x400000, the index into fre_chunks[] would
> be 0x400000/1024 = 4096. If fre_chunks[4096] == 96074 and
> fre_chunks[4096+1] == 96098, you need only do a binary search of the 24
> entries between fres[96074] and fres[96098] rather than searching the
> entire 146997-entry array.
>
> .sframe size calculation:
>
> 374 unique fre_data entries (out of 146997 total FREs!)
> = 374 * (2 * 3) = 2244 bytes
>
> 146997 fre entries
> = 146997 * (4 + 2) = 881982 bytes
>
> .text size 6417058 (chunk_size = 1024, num_chunks=6267)
> = 6267 * 8 = 43869 bytes
>
> Approximate total .sframe size would be 2244 + 881982 + 43869 = 906k,
> plus negligible header size. Which is smaller than the v2 .sframe on my
> gdb binary (985k).
>
> With the chunked lookup table, the avg lookup is:
>
> log2(146997/6267) = ~4.5 iterations
>
> whereas a full binary search would be:
>
> log2(146997) = 17 iterations
>
> So assuming I got all that math right, it's over 3x faster and the
> binary is smaller (or at least should be roughly comparable).
>
> Of course the downside is it's an all new format. Presumably the linker
> would need to do more work than it's currently doing, e.g., find all the
> duplicates and set up the data accordingly.
>
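For completeness, my reading of the chunk-assisted lookup is roughly the below (a sketch only; it assumes, ORC-style, that fre_chunks[c] holds the index of the fres[] entry covering address c * chunk_size, and takes struct fre from your layout above with s32 start addresses):

static int find_fre_chunked(const struct fre *fres, u32 num_fres,
                            const u32 *fre_chunks, u32 num_chunks,
                            u32 chunk_size, u32 ip_off)
{
        u32 chunk = ip_off / chunk_size;
        u32 lo, hi;

        if (chunk >= num_chunks)
                return -1;

        lo = fre_chunks[chunk];
        hi = (chunk + 1 < num_chunks) ? fre_chunks[chunk + 1] + 1 : num_fres;
        if (hi > num_fres)
                hi = num_fres;

        /* last entry with start_address <= ip_off, searched only within [lo, hi) */
        while (lo < hi) {
                u32 mid = lo + (hi - lo) / 2;

                if (fres[mid].start_address <= (s32)ip_off)
                        lo = mid + 1;
                else
                        hi = mid;
        }
        return lo ? (int)(lo - 1) : -1;
}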