* [RFC patch 11/41] mm/slub: Remove the ULONG_MAX stack trace hackery
From: Thomas Gleixner @ 2019-04-10 10:28 UTC
To: LKML
Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
Alexander Potapenko, Andrew Morton, Pekka Enberg, linux-mm,
David Rientjes, Christoph Lameter
No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.
While at it, remove the pointless loop which clears the whole stack array.
It's sufficient to zero the entry after the last saved one, as the consumers
break out on the first zeroed entry anyway.
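For illustration, the single-terminator scheme can be modeled in plain C. This is a hypothetical userspace sketch, not the kernel code; TRACK_ADDRS_COUNT and the helper names are stand-ins:

```c
#include <assert.h>
#include <string.h>

#define TRACK_ADDRS_COUNT 8	/* stand-in for the slub constant */

/* Sketch of the consumer side: it stops at the first zero entry,
 * so any stale data behind the terminator is never looked at. */
static unsigned int count_entries(const unsigned long *addrs)
{
	unsigned int i;

	for (i = 0; i < TRACK_ADDRS_COUNT; i++)
		if (!addrs[i])
			break;
	return i;
}

/* Sketch of the producer side after the patch: instead of looping
 * over the whole tail, only the entry after the last saved one is
 * zeroed, and only when the array is not completely full. */
static void store_trace(unsigned long *addrs, const unsigned long *trace,
			unsigned int nr_entries)
{
	memcpy(addrs, trace, nr_entries * sizeof(*trace));
	if (nr_entries < TRACK_ADDRS_COUNT)
		addrs[nr_entries] = 0;
}
```

Stale data left behind the terminator is harmless because the consumer never reads past the first zero.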
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: linux-mm@kvack.org
Cc: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -553,7 +553,6 @@ static void set_track(struct kmem_cache
if (addr) {
#ifdef CONFIG_STACKTRACE
struct stack_trace trace;
- int i;
trace.nr_entries = 0;
trace.max_entries = TRACK_ADDRS_COUNT;
@@ -563,20 +562,16 @@ static void set_track(struct kmem_cache
save_stack_trace(&trace);
metadata_access_disable();
- /* See rant in lockdep.c */
- if (trace.nr_entries != 0 &&
- trace.entries[trace.nr_entries - 1] == ULONG_MAX)
- trace.nr_entries--;
-
- for (i = trace.nr_entries; i < TRACK_ADDRS_COUNT; i++)
- p->addrs[i] = 0;
+ if (trace.nr_entries < TRACK_ADDRS_COUNT)
+ p->addrs[trace.nr_entries] = 0;
#endif
p->addr = addr;
p->cpu = smp_processor_id();
p->pid = current->pid;
p->when = jiffies;
- } else
+ } else {
memset(p, 0, sizeof(struct track));
+ }
}
static void init_tracking(struct kmem_cache *s, void *object)
* [RFC patch 12/41] mm/page_owner: Remove the ULONG_MAX stack trace hackery
From: Thomas Gleixner @ 2019-04-10 10:28 UTC
To: LKML
Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
Alexander Potapenko, Michal Hocko, linux-mm, Mike Rapoport,
Andrew Morton
No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Michal Hocko <mhocko@suse.com>
Cc: linux-mm@kvack.org
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
mm/page_owner.c | 3 ---
1 file changed, 3 deletions(-)
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -148,9 +148,6 @@ static noinline depot_stack_handle_t sav
depot_stack_handle_t handle;
save_stack_trace(&trace);
- if (trace.nr_entries != 0 &&
- trace.entries[trace.nr_entries-1] == ULONG_MAX)
- trace.nr_entries--;
/*
* We need to check recursion here because our request to stackdepot
* [RFC patch 13/41] mm/kasan: Remove the ULONG_MAX stack trace hackery
From: Thomas Gleixner @ 2019-04-10 10:28 UTC
To: LKML
Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
Alexander Potapenko, Andrey Ryabinin, kasan-dev, Dmitry Vyukov,
linux-mm
No architecture terminates the stack trace with ULONG_MAX anymore. Remove
the cruft.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: kasan-dev@googlegroups.com
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: linux-mm@kvack.org
---
mm/kasan/common.c | 3 ---
1 file changed, 3 deletions(-)
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -74,9 +74,6 @@ static inline depot_stack_handle_t save_
save_stack_trace(&trace);
filter_irq_stacks(&trace);
- if (trace.nr_entries != 0 &&
- trace.entries[trace.nr_entries-1] == ULONG_MAX)
- trace.nr_entries--;
return depot_save_stack(&trace, flags);
}
* [RFC patch 23/41] mm/slub: Simplify stack trace retrieval
From: Thomas Gleixner @ 2019-04-10 10:28 UTC
To: LKML
Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
Alexander Potapenko, Andrew Morton, Pekka Enberg, linux-mm,
David Rientjes, Christoph Lameter
Replace the indirection through struct stack_trace with an invocation of
the storage array based interface.
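The shape of the new interface can be made visible with a userspace mock. This is hypothetical: the real stack_trace_save() walks the kernel stack, and here a fixed fake call chain stands in for it. The point is that the skip count and the return value replace the four struct fields of the old interface:

```c
#include <assert.h>

/* Mock of the array-based interface: fills 'store' with up to 'size'
 * return addresses, skipping the first 'skipnr' frames, and returns
 * the number of entries saved.  The fake chain below (innermost
 * frame first) stands in for a real kernel stack walk. */
static unsigned int stack_trace_save_mock(unsigned long *store,
					  unsigned int size,
					  unsigned int skipnr)
{
	static const unsigned long fake_stack[] = {
		0x10, 0x20, 0x30, 0x40, 0x50
	};
	const unsigned int depth = sizeof(fake_stack) / sizeof(fake_stack[0]);
	unsigned int i, n = 0;

	for (i = skipnr; i < depth && n < size; i++)
		store[n++] = fake_stack[i];
	return n;
}
```

With skip == 3 (as in set_track()), the three innermost frames are dropped and the count of the remaining entries is returned directly, so no struct setup or nr_entries read-back is needed.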
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: linux-mm@kvack.org
Cc: David Rientjes <rientjes@google.com>
Cc: Christoph Lameter <cl@linux.com>
---
mm/slub.c | 12 ++++--------
1 file changed, 4 insertions(+), 8 deletions(-)
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -552,18 +552,14 @@ static void set_track(struct kmem_cache
if (addr) {
#ifdef CONFIG_STACKTRACE
- struct stack_trace trace;
+ unsigned int nent;
- trace.nr_entries = 0;
- trace.max_entries = TRACK_ADDRS_COUNT;
- trace.entries = p->addrs;
- trace.skip = 3;
metadata_access_enable();
- save_stack_trace(&trace);
+ nent = stack_trace_save(p->addrs, TRACK_ADDRS_COUNT, 3);
metadata_access_disable();
- if (trace.nr_entries < TRACK_ADDRS_COUNT)
- p->addrs[trace.nr_entries] = 0;
+ if (nent < TRACK_ADDRS_COUNT)
+ p->addrs[nent] = 0;
#endif
p->addr = addr;
p->cpu = smp_processor_id();
* [RFC patch 24/41] mm/kmemleak: Simplify stacktrace handling
From: Thomas Gleixner @ 2019-04-10 10:28 UTC
To: LKML
Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
Alexander Potapenko, Catalin Marinas, linux-mm
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: linux-mm@kvack.org
---
mm/kmemleak.c | 24 +++---------------------
1 file changed, 3 insertions(+), 21 deletions(-)
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -410,11 +410,6 @@ static void print_unreferenced(struct se
*/
static void dump_object_info(struct kmemleak_object *object)
{
- struct stack_trace trace;
-
- trace.nr_entries = object->trace_len;
- trace.entries = object->trace;
-
pr_notice("Object 0x%08lx (size %zu):\n",
object->pointer, object->size);
pr_notice(" comm \"%s\", pid %d, jiffies %lu\n",
@@ -424,7 +419,7 @@ static void dump_object_info(struct kmem
pr_notice(" flags = 0x%x\n", object->flags);
pr_notice(" checksum = %u\n", object->checksum);
pr_notice(" backtrace:\n");
- print_stack_trace(&trace, 4);
+ stack_trace_print(object->trace, object->trace_len, 4);
}
/*
@@ -553,15 +548,7 @@ static struct kmemleak_object *find_and_
*/
static int __save_stack_trace(unsigned long *trace)
{
- struct stack_trace stack_trace;
-
- stack_trace.max_entries = MAX_TRACE;
- stack_trace.nr_entries = 0;
- stack_trace.entries = trace;
- stack_trace.skip = 2;
- save_stack_trace(&stack_trace);
-
- return stack_trace.nr_entries;
+ return stack_trace_save(trace, MAX_TRACE, 2);
}
/*
@@ -2019,13 +2006,8 @@ early_param("kmemleak", kmemleak_boot_co
static void __init print_log_trace(struct early_log *log)
{
- struct stack_trace trace;
-
- trace.nr_entries = log->trace_len;
- trace.entries = log->trace;
-
pr_notice("Early log backtrace:\n");
- print_stack_trace(&trace, 2);
+ stack_trace_print(log->trace, log->trace_len, 2);
}
/*
* [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
From: Thomas Gleixner @ 2019-04-10 10:28 UTC
To: LKML
Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
Alexander Potapenko, Andrey Ryabinin, Dmitry Vyukov, kasan-dev,
linux-mm
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
---
mm/kasan/common.c | 30 ++++++++++++------------------
mm/kasan/report.c | 7 ++++---
2 files changed, 16 insertions(+), 21 deletions(-)
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
ptr < (unsigned long)&__softirqentry_text_end);
}
-static inline void filter_irq_stacks(struct stack_trace *trace)
+static inline unsigned int filter_irq_stacks(unsigned long *entries,
+ unsigned int nr_entries)
{
- int i;
+ unsigned int i;
- if (!trace->nr_entries)
- return;
- for (i = 0; i < trace->nr_entries; i++)
- if (in_irqentry_text(trace->entries[i])) {
+ for (i = 0; i < nr_entries; i++) {
+ if (in_irqentry_text(entries[i])) {
/* Include the irqentry function into the stack. */
- trace->nr_entries = i + 1;
- break;
+ return i + 1;
}
+ }
+ return nr_entries;
}
static inline depot_stack_handle_t save_stack(gfp_t flags)
{
unsigned long entries[KASAN_STACK_DEPTH];
- struct stack_trace trace = {
- .nr_entries = 0,
- .entries = entries,
- .max_entries = KASAN_STACK_DEPTH,
- .skip = 0
- };
+ unsigned int nent;
- save_stack_trace(&trace);
- filter_irq_stacks(&trace);
-
- return depot_save_stack(&trace, flags);
+ nent = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+ nent = filter_irq_stacks(entries, nent);
+ return stack_depot_save(entries, nent, flags);
}
static inline void set_track(struct kasan_track *track, gfp_t flags)
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -100,10 +100,11 @@ static void print_track(struct kasan_tra
{
pr_err("%s by task %u:\n", prefix, track->pid);
if (track->stack) {
- struct stack_trace trace;
+ unsigned long *entries;
+ unsigned int nent;
- depot_fetch_stack(track->stack, &trace);
- print_stack_trace(&trace, 0);
+ nent = stack_depot_fetch(track->stack, &entries);
+ stack_trace_print(entries, nent, 0);
} else {
pr_err("(stack is not available)\n");
}
* [RFC patch 26/41] mm/page_owner: Simplify stack trace handling
From: Thomas Gleixner @ 2019-04-10 10:28 UTC
To: LKML
Cc: Josh Poimboeuf, x86, Andy Lutomirski, Steven Rostedt,
Alexander Potapenko, linux-mm, Mike Rapoport, David Rientjes,
Andrew Morton
Replace the indirection through struct stack_trace by using the storage
array based interfaces.
The original code in all printing functions is really wrong. It allocates a
storage array on the stack, which is unused because depot_fetch_stack() does
not store anything in it; instead it overwrites the entries pointer in the
stack_trace struct so that it points to the depot storage.
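The dead on-stack array can be demonstrated with a hypothetical stand-in for the depot API. The types and names below are simplified sketches, but the behavior with respect to the entries pointer matches what the commit message describes:

```c
#include <assert.h>

struct stack_trace {
	unsigned long *entries;
	unsigned int nr_entries;
};

/* The depot's internal storage for one recorded stack. */
static unsigned long depot_storage[] = { 0xa1, 0xb2, 0xc3 };

/* Mimics depot_fetch_stack(): nothing is copied into the array the
 * caller set up; the entries pointer is simply redirected to the
 * depot's own storage, leaving the on-stack array untouched. */
static void depot_fetch_stack_sketch(struct stack_trace *trace)
{
	trace->entries = depot_storage;
	trace->nr_entries = 3;
}
```

After the call, trace.entries no longer points at the caller's array, which is why allocating PAGE_OWNER_STACK_DEPTH entries on the stack before fetching was pure waste.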
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-mm@kvack.org
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
---
mm/page_owner.c | 79 +++++++++++++++++++-------------------------------------
1 file changed, 28 insertions(+), 51 deletions(-)
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -58,15 +58,10 @@ static bool need_page_owner(void)
static __always_inline depot_stack_handle_t create_dummy_stack(void)
{
unsigned long entries[4];
- struct stack_trace dummy;
+ unsigned int nent;
- dummy.nr_entries = 0;
- dummy.max_entries = ARRAY_SIZE(entries);
- dummy.entries = &entries[0];
- dummy.skip = 0;
-
- save_stack_trace(&dummy);
- return depot_save_stack(&dummy, GFP_KERNEL);
+ nent = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
+ return stack_depot_save(entries, nent, GFP_KERNEL);
}
static noinline void register_dummy_stack(void)
@@ -120,46 +115,39 @@ void __reset_page_owner(struct page *pag
}
}
-static inline bool check_recursive_alloc(struct stack_trace *trace,
- unsigned long ip)
+static inline bool check_recursive_alloc(unsigned long *entries,
+ unsigned int nr_entries,
+ unsigned long ip)
{
- int i;
+ unsigned int i;
- if (!trace->nr_entries)
- return false;
-
- for (i = 0; i < trace->nr_entries; i++) {
- if (trace->entries[i] == ip)
+ for (i = 0; i < nr_entries; i++) {
+ if (entries[i] == ip)
return true;
}
-
return false;
}
static noinline depot_stack_handle_t save_stack(gfp_t flags)
{
unsigned long entries[PAGE_OWNER_STACK_DEPTH];
- struct stack_trace trace = {
- .nr_entries = 0,
- .entries = entries,
- .max_entries = PAGE_OWNER_STACK_DEPTH,
- .skip = 2
- };
depot_stack_handle_t handle;
+ unsigned int nent;
- save_stack_trace(&trace);
+ nent = stack_trace_save(entries, ARRAY_SIZE(entries), 2);
/*
- * We need to check recursion here because our request to stackdepot
- * could trigger memory allocation to save new entry. New memory
- * allocation would reach here and call depot_save_stack() again
- * if we don't catch it. There is still not enough memory in stackdepot
- * so it would try to allocate memory again and loop forever.
+ * We need to check recursion here because our request to
+ * stackdepot could trigger memory allocation to save new
+ * entry. New memory allocation would reach here and call
+ * stack_depot_save_entries() again if we don't catch it. There is
+ * still not enough memory in stackdepot so it would try to
+ * allocate memory again and loop forever.
*/
- if (check_recursive_alloc(&trace, _RET_IP_))
+ if (check_recursive_alloc(entries, nent, _RET_IP_))
return dummy_handle;
- handle = depot_save_stack(&trace, flags);
+ handle = stack_depot_save(entries, nent, flags);
if (!handle)
handle = failure_handle;
@@ -337,16 +325,10 @@ print_page_owner(char __user *buf, size_
struct page *page, struct page_owner *page_owner,
depot_stack_handle_t handle)
{
- int ret;
- int pageblock_mt, page_mt;
+ int ret, pageblock_mt, page_mt;
+ unsigned long *entries;
+ unsigned int nent;
char *kbuf;
- unsigned long entries[PAGE_OWNER_STACK_DEPTH];
- struct stack_trace trace = {
- .nr_entries = 0,
- .entries = entries,
- .max_entries = PAGE_OWNER_STACK_DEPTH,
- .skip = 0
- };
count = min_t(size_t, count, PAGE_SIZE);
kbuf = kmalloc(count, GFP_KERNEL);
@@ -375,8 +357,8 @@ print_page_owner(char __user *buf, size_
if (ret >= count)
goto err;
- depot_fetch_stack(handle, &trace);
- ret += snprint_stack_trace(kbuf + ret, count - ret, &trace, 0);
+ nent = stack_depot_fetch(handle, &entries);
+ ret += stack_trace_snprint(kbuf + ret, count - ret, entries, nent, 0);
if (ret >= count)
goto err;
@@ -407,14 +389,9 @@ void __dump_page_owner(struct page *page
{
struct page_ext *page_ext = lookup_page_ext(page);
struct page_owner *page_owner;
- unsigned long entries[PAGE_OWNER_STACK_DEPTH];
- struct stack_trace trace = {
- .nr_entries = 0,
- .entries = entries,
- .max_entries = PAGE_OWNER_STACK_DEPTH,
- .skip = 0
- };
depot_stack_handle_t handle;
+ unsigned long *entries;
+ unsigned int nent;
gfp_t gfp_mask;
int mt;
@@ -438,10 +415,10 @@ void __dump_page_owner(struct page *page
return;
}
- depot_fetch_stack(handle, &trace);
+ nent = stack_depot_fetch(handle, &entries);
pr_alert("page allocated via order %u, migratetype %s, gfp_mask %#x(%pGg)\n",
page_owner->order, migratetype_names[mt], gfp_mask, &gfp_mask);
- print_stack_trace(&trace, 0);
+ stack_trace_print(entries, nent, 0);
if (page_owner->last_migrate_reason != -1)
pr_alert("page has been migrated, last migrate reason: %s\n",
* Re: [RFC patch 13/41] mm/kasan: Remove the ULONG_MAX stack trace hackery
From: Dmitry Vyukov @ 2019-04-10 11:31 UTC
To: Thomas Gleixner
Cc: LKML, Josh Poimboeuf, the arch/x86 maintainers, Andy Lutomirski,
Steven Rostedt, Alexander Potapenko, Andrey Ryabinin, kasan-dev,
Linux-MM
On Wed, Apr 10, 2019 at 1:05 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> No architecture terminates the stack trace with ULONG_MAX anymore. Remove
> the cruft.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: kasan-dev@googlegroups.com
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: linux-mm@kvack.org
> ---
> mm/kasan/common.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -74,9 +74,6 @@ static inline depot_stack_handle_t save_
>
> save_stack_trace(&trace);
> filter_irq_stacks(&trace);
> - if (trace.nr_entries != 0 &&
> - trace.entries[trace.nr_entries-1] == ULONG_MAX)
> - trace.nr_entries--;
>
> return depot_save_stack(&trace, flags);
> }
Acked-by: Dmitry Vyukov <dvyukov@google.com>
* Re: [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
From: Dmitry Vyukov @ 2019-04-10 11:33 UTC
To: Thomas Gleixner
Cc: LKML, Josh Poimboeuf, the arch/x86 maintainers, Andy Lutomirski,
Steven Rostedt, Alexander Potapenko, Andrey Ryabinin, kasan-dev,
Linux-MM
On Wed, Apr 10, 2019 at 1:06 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org
> ---
> mm/kasan/common.c | 30 ++++++++++++------------------
> mm/kasan/report.c | 7 ++++---
> 2 files changed, 16 insertions(+), 21 deletions(-)
>
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
> ptr < (unsigned long)&__softirqentry_text_end);
> }
>
> -static inline void filter_irq_stacks(struct stack_trace *trace)
> +static inline unsigned int filter_irq_stacks(unsigned long *entries,
> + unsigned int nr_entries)
> {
> - int i;
> + unsigned int i;
>
> - if (!trace->nr_entries)
> - return;
> - for (i = 0; i < trace->nr_entries; i++)
> - if (in_irqentry_text(trace->entries[i])) {
> + for (i = 0; i < nr_entries; i++) {
> + if (in_irqentry_text(entries[i])) {
> /* Include the irqentry function into the stack. */
> - trace->nr_entries = i + 1;
> - break;
> + return i + 1;
> }
> + }
> + return nr_entries;
> }
>
> static inline depot_stack_handle_t save_stack(gfp_t flags)
> {
> unsigned long entries[KASAN_STACK_DEPTH];
> - struct stack_trace trace = {
> - .nr_entries = 0,
> - .entries = entries,
> - .max_entries = KASAN_STACK_DEPTH,
> - .skip = 0
> - };
> + unsigned int nent;
>
> - save_stack_trace(&trace);
> - filter_irq_stacks(&trace);
> -
> - return depot_save_stack(&trace, flags);
> + nent = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
> + nent = filter_irq_stacks(entries, nent);
> + return stack_depot_save(entries, nent, flags);
> }
>
> static inline void set_track(struct kasan_track *track, gfp_t flags)
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -100,10 +100,11 @@ static void print_track(struct kasan_tra
> {
> pr_err("%s by task %u:\n", prefix, track->pid);
> if (track->stack) {
> - struct stack_trace trace;
> + unsigned long *entries;
> + unsigned int nent;
>
> - depot_fetch_stack(track->stack, &trace);
> - print_stack_trace(&trace, 0);
> + nent = stack_depot_fetch(track->stack, &entries);
> + stack_trace_print(entries, nent, 0);
> } else {
> pr_err("(stack is not available)\n");
> }
Acked-by: Dmitry Vyukov <dvyukov@google.com>
* Re: [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
From: Josh Poimboeuf @ 2019-04-11 2:55 UTC
To: Thomas Gleixner
Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
Andrey Ryabinin, Dmitry Vyukov, kasan-dev, linux-mm
On Wed, Apr 10, 2019 at 12:28:19PM +0200, Thomas Gleixner wrote:
> Replace the indirection through struct stack_trace by using the storage
> array based interfaces.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Alexander Potapenko <glider@google.com>
> Cc: Dmitry Vyukov <dvyukov@google.com>
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org
> ---
> mm/kasan/common.c | 30 ++++++++++++------------------
> mm/kasan/report.c | 7 ++++---
> 2 files changed, 16 insertions(+), 21 deletions(-)
>
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
> ptr < (unsigned long)&__softirqentry_text_end);
> }
>
> -static inline void filter_irq_stacks(struct stack_trace *trace)
> +static inline unsigned int filter_irq_stacks(unsigned long *entries,
> + unsigned int nr_entries)
> {
> - int i;
> + unsigned int i;
>
> - if (!trace->nr_entries)
> - return;
> - for (i = 0; i < trace->nr_entries; i++)
> - if (in_irqentry_text(trace->entries[i])) {
> + for (i = 0; i < nr_entries; i++) {
> + if (in_irqentry_text(entries[i])) {
> /* Include the irqentry function into the stack. */
> - trace->nr_entries = i + 1;
> - break;
> + return i + 1;
Isn't this an off-by-one error if "i" points to the last entry of the
array?
--
Josh
* Re: [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
From: Thomas Gleixner @ 2019-04-14 16:54 UTC
To: Josh Poimboeuf
Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
Andrey Ryabinin, Dmitry Vyukov, kasan-dev, linux-mm
On Wed, 10 Apr 2019, Josh Poimboeuf wrote:
> On Wed, Apr 10, 2019 at 12:28:19PM +0200, Thomas Gleixner wrote:
> > Replace the indirection through struct stack_trace by using the storage
> > array based interfaces.
> >
> > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> > Cc: Alexander Potapenko <glider@google.com>
> > Cc: Dmitry Vyukov <dvyukov@google.com>
> > Cc: kasan-dev@googlegroups.com
> > Cc: linux-mm@kvack.org
> > ---
> > mm/kasan/common.c | 30 ++++++++++++------------------
> > mm/kasan/report.c | 7 ++++---
> > 2 files changed, 16 insertions(+), 21 deletions(-)
> >
> > --- a/mm/kasan/common.c
> > +++ b/mm/kasan/common.c
> > @@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
> > ptr < (unsigned long)&__softirqentry_text_end);
> > }
> >
> > -static inline void filter_irq_stacks(struct stack_trace *trace)
> > +static inline unsigned int filter_irq_stacks(unsigned long *entries,
> > + unsigned int nr_entries)
> > {
> > - int i;
> > + unsigned int i;
> >
> > - if (!trace->nr_entries)
> > - return;
> > - for (i = 0; i < trace->nr_entries; i++)
> > - if (in_irqentry_text(trace->entries[i])) {
> > + for (i = 0; i < nr_entries; i++) {
> > + if (in_irqentry_text(entries[i])) {
> > /* Include the irqentry function into the stack. */
> > - trace->nr_entries = i + 1;
> > - break;
> > + return i + 1;
>
> Isn't this an off-by-one error if "i" points to the last entry of the
> array?
Yes, copied one ...
* Re: [RFC patch 25/41] mm/kasan: Simplify stacktrace handling
From: Thomas Gleixner @ 2019-04-14 17:00 UTC
To: Josh Poimboeuf
Cc: LKML, x86, Andy Lutomirski, Steven Rostedt, Alexander Potapenko,
Andrey Ryabinin, Dmitry Vyukov, kasan-dev, linux-mm
On Sun, 14 Apr 2019, Thomas Gleixner wrote:
> On Wed, 10 Apr 2019, Josh Poimboeuf wrote:
> > On Wed, Apr 10, 2019 at 12:28:19PM +0200, Thomas Gleixner wrote:
> > > Replace the indirection through struct stack_trace by using the storage
> > > array based interfaces.
> > >
> > > Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> > > Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> > > Cc: Alexander Potapenko <glider@google.com>
> > > Cc: Dmitry Vyukov <dvyukov@google.com>
> > > Cc: kasan-dev@googlegroups.com
> > > Cc: linux-mm@kvack.org
> > > ---
> > > mm/kasan/common.c | 30 ++++++++++++------------------
> > > mm/kasan/report.c | 7 ++++---
> > > 2 files changed, 16 insertions(+), 21 deletions(-)
> > >
> > > --- a/mm/kasan/common.c
> > > +++ b/mm/kasan/common.c
> > > @@ -48,34 +48,28 @@ static inline int in_irqentry_text(unsig
> > > ptr < (unsigned long)&__softirqentry_text_end);
> > > }
> > >
> > > -static inline void filter_irq_stacks(struct stack_trace *trace)
> > > +static inline unsigned int filter_irq_stacks(unsigned long *entries,
> > > + unsigned int nr_entries)
> > > {
> > > - int i;
> > > + unsigned int i;
> > >
> > > - if (!trace->nr_entries)
> > > - return;
> > > - for (i = 0; i < trace->nr_entries; i++)
> > > - if (in_irqentry_text(trace->entries[i])) {
> > > + for (i = 0; i < nr_entries; i++) {
> > > + if (in_irqentry_text(entries[i])) {
> > > /* Include the irqentry function into the stack. */
> > > - trace->nr_entries = i + 1;
> > > - break;
> > > + return i + 1;
> >
> > Isn't this an off-by-one error if "i" points to the last entry of the
> > array?
>
> Yes, copied one ...
Oh, no. The point is that it returns the number of stack entries to
store. So if i == nr_entries - 1, then it returns nr_entries, i.e. all
entries are stored.
Thanks,
tglx
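The boundary case under discussion can be checked directly with a userspace sketch of the patched filter_irq_stacks(). The address-range predicate below is a made-up stand-in for the real irqentry text section check:

```c
#include <assert.h>

/* Hypothetical predicate: pretend the irqentry text section spans
 * 0x1000..0x1fff.  The real check compares against section symbols. */
static int in_irqentry_text_sketch(unsigned long ptr)
{
	return ptr >= 0x1000 && ptr < 0x2000;
}

/* Same shape as the patched function: the return value is the number
 * of entries to keep.  Hitting the last slot (i == nr_entries - 1)
 * yields nr_entries, i.e. everything is kept -- no off-by-one. */
static unsigned int filter_irq_stacks_sketch(const unsigned long *entries,
					     unsigned int nr_entries)
{
	unsigned int i;

	for (i = 0; i < nr_entries; i++) {
		if (in_irqentry_text_sketch(entries[i]))
			return i + 1;	/* include the irqentry function */
	}
	return nr_entries;
}
```

Since the return value is a count rather than an index, the "i + 1" at the last array slot is exactly the full length, which is what tglx's follow-up points out.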
Thread overview: 12+ messages
[not found] <20190410102754.387743324@linutronix.de>
2019-04-10 10:28 ` [RFC patch 11/41] mm/slub: Remove the ULONG_MAX stack trace hackery Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 12/41] mm/page_owner: " Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 13/41] mm/kasan: " Thomas Gleixner
2019-04-10 11:31 ` Dmitry Vyukov
2019-04-10 10:28 ` [RFC patch 23/41] mm/slub: Simplify stack trace retrieval Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 24/41] mm/kmemleak: Simplify stacktrace handling Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 25/41] mm/kasan: " Thomas Gleixner
2019-04-10 11:33 ` Dmitry Vyukov
2019-04-11 2:55 ` Josh Poimboeuf
2019-04-14 16:54 ` Thomas Gleixner
2019-04-14 17:00 ` Thomas Gleixner
2019-04-10 10:28 ` [RFC patch 26/41] mm/page_owner: Simplify stack trace handling Thomas Gleixner