* Re: [PATCH v3 11/19] unwind: Add deferred user space unwinding API [not found] ` <20241029183440.fbwoistveyxneezt@treble.attlocal.net> @ 2024-10-30 13:44 ` Mathieu Desnoyers 2024-10-30 17:47 ` Josh Poimboeuf 0 siblings, 1 reply; 7+ messages in thread From: Mathieu Desnoyers @ 2024-10-30 13:44 UTC (permalink / raw) To: Josh Poimboeuf Cc: Peter Zijlstra, x86, Steven Rostedt, Ingo Molnar, Arnaldo Carvalho de Melo, linux-kernel, Indu Bhagat, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, linux-perf-users, Mark Brown, linux-toolchains, Jordan Rome, Sam James, linux-trace-kernel, Andrii Nakryiko, Jens Remus, Florian Weimer, Andy Lutomirski On 2024-10-29 14:34, Josh Poimboeuf wrote: > On Tue, Oct 29, 2024 at 01:47:59PM -0400, Mathieu Desnoyers wrote: >>>> If ftrace needs brain damage like this, can't we push this to the user? >>>> >>>> That is, do away with the per-cpu sequence crap, and add a per-task >>>> counter that is incremented for every return-to-userspace. >>> >>> That would definitely make things easier for me, though IIRC Steven and >>> Mathieu had some concerns about TID wrapping over days/months/years. >>> >>> With that mindset I suppose the per-CPU counter could also wrap, though >>> that could be mitigated by making the cookie a struct with more bits. >>> >> >> AFAIR, the scheme we discussed in Prague was different than the >> implementation here. > > It does differ a bit. I'll explain why below. > >> We discussed having a free-running counter per-cpu, and combining it >> with the cpu number as top (or low) bits, to effectively make a 64-bit >> value that is unique across the entire system, but without requiring a >> global counter with its associated cache line bouncing. >> >> Here is part where the implementation here differs from our discussion: >> I recall we discussed keeping a snapshot of the counter value within >> the task struct of the thread. 
So we only snapshot the per-cpu value >> on first use after entering the kernel, and after that we use the same >> per-cpu value snapshot (from task struct) up until return to userspace. >> We clear the task struct cookie snapshot on return to userspace. > > Right, so adding some details to this, what I remember specifically > deciding on: > > - In unwind_user_deferred(), if task cookie is 0, it snapshots the > per-cpu counter, stores the old value in the task cookie, and > increments the new value (with CPU # in top 48 bits). > > - Future calls to unwind_user_deferred() see the task cookie is set and > reuse the same cookie. > > - Then the task work (which runs right before exiting the kernel) > clears the task cookie (with interrupts disabled), does the unwind, > and calls the callbacks. It clears the task cookie so that any > future calls to unwind_user_deferred() (either before exiting the > kernel or after next entry) are guaranteed to get an unwind. This is where I think we should improve the logic. If an unwind_user_deferred() is called right over/after the unwind, I don't think we want to issue another stack walk: it would be redundant with the one already in progress or completed. What we'd want is to make sure that the cookie returned to that unwind_user_deferred() is the same as the cookie associated with the in-progress or completed stack unwind. This way, trace analysis can look at the surrounding events (before and after) the unwind request and find the associated call stack. > > That's what I had implemented originally. It had a major performance > issue, particularly for long stacks (bash sometimes had 300+ stack > frames in my testing). That's probably because long stacks require a lot of work (user pages accesses) to be done when stack walking, which increases the likeliness of re-issuing unwind_user_deferred() while the stack walk is being done. 
That's basically a lack-of-progress issue: a sufficiently long stack walk with a sufficiently frequent unwind_user_deferred() trigger could make the system stall forever trying to service stack walks again and again. That's bad.

> The task work runs with interrupts enabled. So if another PMU interrupt
> and call to unwind_user_deferred() happens after the task work clears
> the task cookie but before kernel exit, a new cookie is generated and an
> additional task work is scheduled. For long stacks, performance gets
> really bad, dominated by all the extra unnecessary unwinding.

What you want here is to move the point where you clear the task cookie to _after_ completion of the stack unwind. There are a few ways this can be done:

A) Clear the task cookie in the task_work() right after the unwind_user_deferred() is completed. Downside: if some long task work happen to be done after the stack walk, a new unwind_user_deferred() could be issued again and we may end up looping forever taking stack unwind and never actually making forward progress.

B) Clear the task cookie after the exit_to_user_mode_loop is done, before returning to user-space, while interrupts are disabled.

C) Clear the task cookie when entering kernel space again.

I think (B) and (C) could be interesting approaches, perhaps with (B) slightly more interesting because it cleans up after itself before returning to userspace. Also (B) allows us to return to a purely lazy scheme where there is nothing to do when entering the kernel. This is therefore more efficient in terms of cache locality, because we can expect our per-task state to be cache hot when returning to userspace, but not necessarily when entering the kernel.

> So I changed the design a bit:
>
> - Increment a per-cpu counter at kernel entry before interrupts are
>   enabled.
>
> - In unwind_user_deferred(), if task cookie is 0, it sets the task
>   cookie based on the per-cpu counter, like before. However if this
>   cookie was the last one used by this callback+task, it skips the
>   callback altogether.
>
> So it eliminates duplicate unwinds except for the CPU migration case.

This sounds complicated and fragile. And why would we care about duplicated unwinds on CPU migration? What prevents us from moving the per-task cookie clearing to after exit_to_user_mode_loop instead? Then there is no need to do a per-cpu counter increment on every kernel entry and we can go back to a lazy scheme.

> If I change the entry code to increment a per-task counter instead of a
> per-cpu counter then this problem goes away. I was just concerned about
> the performance impacts of doing that on every entry.

Moving from per-cpu to per-task makes this cookie task-specific and not global anymore, I don't think we want this for a stack walking infrastructure meant to be used by various tracers. Also a global cookie is more robust and does not depend on guaranteeing that all the trace data is present to guarantee current thread ID accuracy and thus that cookies match between deferred unwind request and their fulfillment.

Thanks,

Mathieu

-- 
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH v3 11/19] unwind: Add deferred user space unwinding API 2024-10-30 13:44 ` [PATCH v3 11/19] unwind: Add deferred user space unwinding API Mathieu Desnoyers @ 2024-10-30 17:47 ` Josh Poimboeuf 2024-10-30 17:55 ` Josh Poimboeuf 2024-10-30 18:25 ` Josh Poimboeuf 0 siblings, 2 replies; 7+ messages in thread From: Josh Poimboeuf @ 2024-10-30 17:47 UTC (permalink / raw) To: Mathieu Desnoyers Cc: Peter Zijlstra, x86, Steven Rostedt, Ingo Molnar, Arnaldo Carvalho de Melo, linux-kernel, Indu Bhagat, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, linux-perf-users, Mark Brown, linux-toolchains, Jordan Rome, Sam James, linux-trace-kernel, Andrii Nakryiko, Jens Remus, Florian Weimer, Andy Lutomirski On Wed, Oct 30, 2024 at 09:44:14AM -0400, Mathieu Desnoyers wrote: > What you want here is to move the point where you clear the task > cookie to _after_ completion of stack unwind. There are a few ways > this can be done: > > A) Clear the task cookie in the task_work() right after the > unwind_user_deferred() is completed. Downside: if some long task work > happen to be done after the stack walk, a new unwind_user_deferred() > could be issued again and we may end up looping forever taking stack > unwind and never actually making forward progress. > > B) Clear the task cookie after the exit_to_user_mode_loop is done, > before returning to user-space, while interrupts are disabled. Problem is, if another tracer calls unwind_user_deferred() for the first time, after the task work but before the task cookie gets cleared, it will see the cookie is non-zero and will fail to schedule another task work. So its callback never gets called. > > If I change the entry code to increment a per-task counter instead of a > > per-cpu counter then this problem goes away. I was just concerned about > > the performance impacts of doing that on every entry. 
> > Moving from per-cpu to per-task makes this cookie task-specific and not > global anymore, I don't think we want this for a stack walking > infrastructure meant to be used by various tracers. Also a global cookie > is more robust and does not depend on guaranteeing that all the > trace data is present to guarantee current thread ID accuracy and > thus that cookies match between deferred unwind request and their > fulfillment. I don't disagree. What I meant was, on entry (or exit), increment the task cookie *with* the CPU bits included. Or as you suggested previously, the "cookie" just be a struct with two fields: CPU # and per-task entry counter. -- Josh ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH v3 11/19] unwind: Add deferred user space unwinding API 2024-10-30 17:47 ` Josh Poimboeuf @ 2024-10-30 17:55 ` Josh Poimboeuf 2024-10-30 18:25 ` Josh Poimboeuf 1 sibling, 0 replies; 7+ messages in thread From: Josh Poimboeuf @ 2024-10-30 17:55 UTC (permalink / raw) To: Mathieu Desnoyers Cc: Peter Zijlstra, x86, Steven Rostedt, Ingo Molnar, Arnaldo Carvalho de Melo, linux-kernel, Indu Bhagat, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, linux-perf-users, Mark Brown, linux-toolchains, Jordan Rome, Sam James, linux-trace-kernel, Andrii Nakryiko, Jens Remus, Florian Weimer, Andy Lutomirski On Wed, Oct 30, 2024 at 10:47:55AM -0700, Josh Poimboeuf wrote: > > Moving from per-cpu to per-task makes this cookie task-specific and not > > global anymore, I don't think we want this for a stack walking > > infrastructure meant to be used by various tracers. Also a global cookie > > is more robust and does not depend on guaranteeing that all the > > trace data is present to guarantee current thread ID accuracy and > > thus that cookies match between deferred unwind request and their > > fulfillment. > > I don't disagree. What I meant was, on entry (or exit), increment the > task cookie *with* the CPU bits included. > > Or as you suggested previously, the "cookie" just be a struct with two > fields: CPU # and per-task entry counter. Never mind, that wouldn't work... Two tasks could have identical per-task counters while being scheduled on the same CPU. -- Josh ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH v3 11/19] unwind: Add deferred user space unwinding API 2024-10-30 17:47 ` Josh Poimboeuf 2024-10-30 17:55 ` Josh Poimboeuf @ 2024-10-30 18:25 ` Josh Poimboeuf 1 sibling, 0 replies; 7+ messages in thread From: Josh Poimboeuf @ 2024-10-30 18:25 UTC (permalink / raw) To: Mathieu Desnoyers Cc: Peter Zijlstra, x86, Steven Rostedt, Ingo Molnar, Arnaldo Carvalho de Melo, linux-kernel, Indu Bhagat, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, linux-perf-users, Mark Brown, linux-toolchains, Jordan Rome, Sam James, linux-trace-kernel, Andrii Nakryiko, Jens Remus, Florian Weimer, Andy Lutomirski On Wed, Oct 30, 2024 at 10:47:55AM -0700, Josh Poimboeuf wrote: > On Wed, Oct 30, 2024 at 09:44:14AM -0400, Mathieu Desnoyers wrote: > > What you want here is to move the point where you clear the task > > cookie to _after_ completion of stack unwind. There are a few ways > > this can be done: > > > > A) Clear the task cookie in the task_work() right after the > > unwind_user_deferred() is completed. Downside: if some long task work > > happen to be done after the stack walk, a new unwind_user_deferred() > > could be issued again and we may end up looping forever taking stack > > unwind and never actually making forward progress. > > > > B) Clear the task cookie after the exit_to_user_mode_loop is done, > > before returning to user-space, while interrupts are disabled. > > Problem is, if another tracer calls unwind_user_deferred() for the first > time, after the task work but before the task cookie gets cleared, it > will see the cookie is non-zero and will fail to schedule another task > work. So its callback never gets called. How about we: - clear task cookie on entry or exit from user space - use a different variable (rather than clearing of task cookie) to communicate whether task work is pending - keep track of which callbacks have been called for this cookie something like so? 
int unwind_user_deferred(struct unwind_callback *callback,
			 u64 *ctx_cookie, void *data)
{
	struct unwind_task_info *info = &current->unwind_task_info;
	u64 cookie = info->ctx_cookie;
	int idx = callback->idx;

	if (WARN_ON_ONCE(in_nmi()))
		return -EINVAL;

	if (!current->mm)
		return -EINVAL;

	guard(irqsave)();

	if (cookie && (info->pending_callbacks & (1 << idx))) {
		/* callback already scheduled */
		goto done;
	}

	/*
	 * If this is the first call from any caller since the most recent
	 * entry from user space, initialize the task context cookie.
	 */
	if (!cookie) {
		u64 cpu = raw_smp_processor_id();
		u64 ctx_ctr;

		__this_cpu_inc(unwind_ctx_ctr);
		ctx_ctr = __this_cpu_read(unwind_ctx_ctr);
		cookie = ctx_to_cookie(cpu, ctx_ctr);
		info->ctx_cookie = cookie;
	} else {
		if (cookie == info->last_cookies[idx]) {
			/*
			 * The callback has already been called with this
			 * unwind, nothing to do.
			 */
			goto done;
		}

		if (info->pending_callbacks & (1 << idx)) {
			/* callback already scheduled */
			goto done;
		}
	}

	info->pending_callbacks |= (1 << idx);
	info->privs[idx] = data;
	info->last_cookies[idx] = cookie;

	if (!info->task_pending) {
		info->task_pending = 1;	/* cleared by unwind_user_task_work() */
		task_work_add(current, &info->work, TWA_RESUME);
	}

done:
	if (ctx_cookie)
		*ctx_cookie = cookie;
	return 0;
}

-- 
Josh

^ permalink raw reply	[flat|nested] 7+ messages in thread
* Re: [PATCH v3 09/19] unwind: Introduce sframe user space unwinding [not found] ` <20241105124053.523e93dd@gandalf.local.home> @ 2024-11-05 17:45 ` Steven Rostedt 0 siblings, 0 replies; 7+ messages in thread From: Steven Rostedt @ 2024-11-05 17:45 UTC (permalink / raw) To: Josh Poimboeuf Cc: x86, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-kernel, Indu Bhagat, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, linux-perf-users, Mark Brown, linux-toolchains, Jordan Rome, Sam James, linux-trace-kernel, Andrii Nakryiko, Jens Remus, Mathieu Desnoyers, Florian Weimer, Andy Lutomirski On Tue, 5 Nov 2024 12:40:53 -0500 Steven Rostedt <rostedt@goodmis.org> wrote: > On Mon, 28 Oct 2024 14:47:56 -0700 > Josh Poimboeuf <jpoimboe@kernel.org> wrote: <linux-trace-kernel@vger.kerne.org>: Host or domain name not found. Name service error for name=vger.kerne.org type=AAAA: Host not found Hmm, no wonder this didn't show up in patchwork :-/ Please fix on your next version. Thanks, -- Steve ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH v3 09/19] unwind: Introduce sframe user space unwinding [not found] ` <20241113171326.6d1ddc83@gandalf.local.home> @ 2024-11-13 22:21 ` Steven Rostedt 2024-11-13 22:25 ` Steven Rostedt 0 siblings, 1 reply; 7+ messages in thread From: Steven Rostedt @ 2024-11-13 22:21 UTC (permalink / raw) To: Josh Poimboeuf Cc: Jens Remus, x86, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-kernel, Indu Bhagat, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, linux-perf-users, Mark Brown, linux-toolchains, Jordan Rome, Sam James, linux-trace-kernel, Andrii Nakryiko, Mathieu Desnoyers, Florian Weimer, Andy Lutomirski, Heiko Carstens, Vasily Gorbik [ This reply fixes the linux-trace-kernel email :-p ] On Wed, 13 Nov 2024 17:13:26 -0500 Steven Rostedt <rostedt@goodmis.org> wrote: > BTW, the following changes were needed to make it work for me: > > -- Steve > > diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c > index 434c548f0837..64cc3c1188ca 100644 > --- a/fs/binfmt_elf.c > +++ b/fs/binfmt_elf.c > @@ -842,7 +842,8 @@ static int load_elf_binary(struct linux_binprm *bprm) > int first_pt_load = 1; > unsigned long error; > struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL; > - struct elf_phdr *elf_property_phdata = NULL, *sframe_phdr = NULL; > + struct elf_phdr *elf_property_phdata = NULL; > + unsigned long sframe_vaddr = 0; Could not just save the pointer to the sframe phd, as it gets freed before we need it. > unsigned long elf_brk; > int retval, i; > unsigned long elf_entry; > @@ -951,7 +952,7 @@ static int load_elf_binary(struct linux_binprm *bprm) > break; > > case PT_GNU_SFRAME: > - sframe_phdr = elf_ppnt; > + sframe_vaddr = elf_ppnt->p_vaddr; > break; > > case PT_LOPROC ... 
PT_HIPROC: > @@ -1344,8 +1345,8 @@ static int load_elf_binary(struct linux_binprm *bprm) > task_pid_nr(current), retval); > } > > - if (sframe_phdr) > - sframe_add_section(load_bias + sframe_phdr->p_vaddr, > + if (sframe_vaddr) > + sframe_add_section(load_bias + sframe_vaddr, > start_code, end_code); > > regs = current_pt_regs(); > diff --git a/kernel/unwind/sframe.c b/kernel/unwind/sframe.c > index 933e47696e29..ca4ef0b72772 100644 > --- a/kernel/unwind/sframe.c > +++ b/kernel/unwind/sframe.c > @@ -73,15 +73,15 @@ static int find_fde(struct sframe_section *sec, unsigned long ip, > struct sframe_fde *fde) > { > struct sframe_fde __user *first, *last, *found = NULL; > - u32 ip_off, func_off_low = 0, func_off_high = -1; > + s32 ip_off, func_off_low = INT_MIN, func_off_high = INT_MAX; The ip_off is a signed it. I wrote a program to dump out the sframe section of files, and I had: ffffed88: (1020) size: 16 off: 146 num: 2 info: 1 rep:16 ffffed98: (1030) size: 336 off: 154 num: 2 info:17 rep:16 ffffefe1: (1279) size: 113 off: 0 num: 4 info: 0 rep: 0 fffff052: (12ea) size: 54 off: 15 num: 3 info: 0 rep: 0 fffff088: (1320) size: 167 off: 26 num: 3 info: 0 rep: 0 fffff12f: (13c7) size: 167 off: 37 num: 4 info: 0 rep: 0 fffff1d6: (146e) size: 167 off: 52 num: 4 info: 0 rep: 0 fffff27d: (1515) size: 22 off: 67 num: 4 info: 0 rep: 0 fffff293: (152b) size: 141 off: 82 num: 4 info: 0 rep: 0 fffff320: (15b8) size: 81 off: 97 num: 4 info: 0 rep: 0 fffff371: (1609) size: 671 off: 112 num: 4 info: 1 rep: 0 fffff610: (18a8) size: 171 off: 131 num: 4 info: 0 rep: 0 The above turns was created by a loop of: fde = (void *)sframes + sizeof(*sframes) + sframes->sfh_fdeoff; for (s = 0; s < sframes->sfh_num_fdes; s++, fde++) { printf("\t%x: (%lx) size:%8u off:%8u num:%8u info:%2u rep:%2u\n", fde->sfde_func_start_address, fde->sfde_func_start_address + shdr->sh_offset, fde->sfde_func_size, fde->sfde_func_start_fre_off, fde->sfde_func_num_fres, fde->sfde_func_info, fde->sfde_func_rep_size); 
} As you can see, all the ip_off are negative. > > ip_off = ip - sec->sframe_addr; > > first = (void __user *)sec->fdes_addr; > - last = first + sec->fdes_nr; > + last = first + sec->fdes_nr - 1; The above was mentioned before. > while (first <= last) { > struct sframe_fde __user *mid; > - u32 func_off; > + s32 func_off; > > mid = first + ((last - first) / 2); > > diff --git a/kernel/unwind/user.c b/kernel/unwind/user.c > index 11aadfade005..d9cd820150c5 100644 > --- a/kernel/unwind/user.c > +++ b/kernel/unwind/user.c > @@ -97,7 +97,7 @@ int unwind_user_start(struct unwind_user_state *state) > > if (current_has_sframe()) > state->type = UNWIND_USER_TYPE_SFRAME; > - else if (IS_ENABLED(CONFIG_UNWIND_USER_FP)) > + else if (IS_ENABLED(CONFIG_HAVE_UNWIND_USER_FP)) This was mentioned too. > state->type = UNWIND_USER_TYPE_FP; > else > state->type = UNWIND_USER_TYPE_NONE; > @@ -138,7 +138,7 @@ int unwind_user(struct unwind_stacktrace *trace, unsigned int max_entries) > static u64 ctx_to_cookie(u64 cpu, u64 ctx) > { > BUILD_BUG_ON(NR_CPUS > 65535); > - return (ctx & ((1UL << 48) - 1)) | cpu; > + return (ctx & ((1UL << 48) - 1)) | (cpu << 48); And so was this. -- Steve > } > > /* ^ permalink raw reply [flat|nested] 7+ messages in thread
* Re: [PATCH v3 09/19] unwind: Introduce sframe user space unwinding 2024-11-13 22:21 ` Steven Rostedt @ 2024-11-13 22:25 ` Steven Rostedt 0 siblings, 0 replies; 7+ messages in thread From: Steven Rostedt @ 2024-11-13 22:25 UTC (permalink / raw) To: Josh Poimboeuf Cc: Jens Remus, x86, Peter Zijlstra, Ingo Molnar, Arnaldo Carvalho de Melo, linux-kernel, Indu Bhagat, Mark Rutland, Alexander Shishkin, Jiri Olsa, Namhyung Kim, Ian Rogers, Adrian Hunter, linux-perf-users, Mark Brown, linux-toolchains, Jordan Rome, Sam James, linux-trace-kernel, Andrii Nakryiko, Mathieu Desnoyers, Florian Weimer, Andy Lutomirski, Heiko Carstens, Vasily Gorbik On Wed, 13 Nov 2024 17:21:18 -0500 Steven Rostedt <rostedt@goodmis.org> wrote: > > diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c > > index 434c548f0837..64cc3c1188ca 100644 > > --- a/fs/binfmt_elf.c > > +++ b/fs/binfmt_elf.c > > @@ -842,7 +842,8 @@ static int load_elf_binary(struct linux_binprm *bprm) > > int first_pt_load = 1; > > unsigned long error; > > struct elf_phdr *elf_ppnt, *elf_phdata, *interp_elf_phdata = NULL; > > - struct elf_phdr *elf_property_phdata = NULL, *sframe_phdr = NULL; > > + struct elf_phdr *elf_property_phdata = NULL; > > + unsigned long sframe_vaddr = 0; > > Could not just save the pointer to the sframe phd, as it gets freed before we need it. ^^^ phdr > > diff --git a/kernel/unwind/sframe.c b/kernel/unwind/sframe.c > > index 933e47696e29..ca4ef0b72772 100644 > > --- a/kernel/unwind/sframe.c > > +++ b/kernel/unwind/sframe.c > > @@ -73,15 +73,15 @@ static int find_fde(struct sframe_section *sec, unsigned long ip, > > struct sframe_fde *fde) > > { > > struct sframe_fde __user *first, *last, *found = NULL; > > - u32 ip_off, func_off_low = 0, func_off_high = -1; > > + s32 ip_off, func_off_low = INT_MIN, func_off_high = INT_MAX; > > The ip_off is a signed it. 
I wrote a program to dump out the sframe section ^^ int > of files, and I had: > > ffffed88: (1020) size: 16 off: 146 num: 2 info: 1 rep:16 > ffffed98: (1030) size: 336 off: 154 num: 2 info:17 rep:16 > ffffefe1: (1279) size: 113 off: 0 num: 4 info: 0 rep: 0 > fffff052: (12ea) size: 54 off: 15 num: 3 info: 0 rep: 0 > fffff088: (1320) size: 167 off: 26 num: 3 info: 0 rep: 0 > fffff12f: (13c7) size: 167 off: 37 num: 4 info: 0 rep: 0 > fffff1d6: (146e) size: 167 off: 52 num: 4 info: 0 rep: 0 > fffff27d: (1515) size: 22 off: 67 num: 4 info: 0 rep: 0 > fffff293: (152b) size: 141 off: 82 num: 4 info: 0 rep: 0 > fffff320: (15b8) size: 81 off: 97 num: 4 info: 0 rep: 0 > fffff371: (1609) size: 671 off: 112 num: 4 info: 1 rep: 0 > fffff610: (18a8) size: 171 off: 131 num: 4 info: 0 rep: 0 > > The above turns was created by a loop of: ^^^^^^^^^ items were No idea why I typed that :-p I can't blame jetlag anymore. -- Steve ^ permalink raw reply [flat|nested] 7+ messages in thread