public inbox for linux-kernel@vger.kernel.org
* [PATCH v2 0/2]  SLUB memory debugging improvements.
@ 2011-06-29  0:47 greearb
  2011-06-29  0:47 ` [PATCH v2 1/2] slub: Enable backtrace for create/delete points greearb
  2011-06-29  0:47 ` [PATCH v2 2/2] slub: Add method to verify memory is not freed greearb
  0 siblings, 2 replies; 8+ messages in thread
From: greearb @ 2011-06-29  0:47 UTC (permalink / raw)
  To: penberg, cl, linux-kernel; +Cc: Ben Greear

From: Ben Greear <greearb@candelatech.com>

The idea is to record a usefully sized stack trace at the object
creation/deletion tracking points instead of just a single caller address.

A second patch adds a helper to check whether memory is expected to be
in use, and to print warnings if it is not.  Calls to it can be placed
at locations suspected of accessing freed data.

Example output from debugging some funky nfs/rpc behaviour:

=============================================================================
BUG kmalloc-64: Object is on free-list
-----------------------------------------------------------------------------

INFO: Allocated in rpcb_getport_async+0x39c/0x5a5 [sunrpc] age=381 cpu=3 pid=3750
        __slab_alloc+0x348/0x3ba
        kmem_cache_alloc_trace+0x67/0xe7
        rpcb_getport_async+0x39c/0x5a5 [sunrpc]
        call_bind+0x70/0x75 [sunrpc]
        __rpc_execute+0x78/0x24b [sunrpc]
        rpc_execute+0x3d/0x42 [sunrpc]
        rpc_run_task+0x79/0x81 [sunrpc]
        rpc_call_sync+0x3f/0x60 [sunrpc]
        rpc_ping+0x42/0x58 [sunrpc]
        rpc_create+0x4aa/0x527 [sunrpc]
        nfs_create_rpc_client+0xb1/0xf6 [nfs]
        nfs_init_client+0x3b/0x7d [nfs]
        nfs_get_client+0x453/0x5ab [nfs]
        nfs_create_server+0x10b/0x437 [nfs]
        nfs_fs_mount+0x4ca/0x708 [nfs]
        mount_fs+0x6b/0x152
INFO: Freed in rpcb_map_release+0x3f/0x44 [sunrpc] age=30 cpu=2 pid=29049
        __slab_free+0x57/0x150
        kfree+0x107/0x13a
        rpcb_map_release+0x3f/0x44 [sunrpc]
        rpc_release_calldata+0x12/0x14 [sunrpc]
        rpc_free_task+0x59/0x61 [sunrpc]
        rpc_final_put_task+0x82/0x8a [sunrpc]
        __rpc_execute+0x23c/0x24b [sunrpc]
        rpc_async_schedule+0x10/0x12 [sunrpc]
        process_one_work+0x230/0x41d
        worker_thread+0x133/0x217
        kthread+0x7d/0x85
        kernel_thread_helper+0x4/0x10
INFO: Slab 0xffffea00029aa470 objects=20 used=9 fp=0xffff8800be7830d8 flags=0x20000000004081
INFO: Object 0xffff8800be7830d8 @offset=4312 fp=0xffff8800be7827a8

Bytes b4 0xffff8800be7830c8:  87 a8 96 00 01 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
  Object 0xffff8800be7830d8:  6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
  Object 0xffff8800be7830e8:  6b 6b 6b 6b 01 08 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkk..kkkkkkkkkk
  Object 0xffff8800be7830f8:  6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
  Object 0xffff8800be783108:  6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkkkkkkkkkk.
 Redzone 0xffff8800be783118:  bb bb bb bb bb bb bb bb                         ........
 Padding 0xffff8800be783258:  5a 5a 5a 5a 5a 5a 5a 5a                         ZZZZZZZZ
Pid: 29049, comm: kworker/2:2 Not tainted 3.0.0-rc4+ #8
Call Trace:
 [<ffffffff811055c3>] print_trailer+0x131/0x13a
 [<ffffffff81105601>] object_err+0x35/0x3e
 [<ffffffff8110746f>] verify_mem_not_deleted+0x7a/0xb7
 [<ffffffffa02851b5>] rpcb_getport_done+0x23/0x126 [sunrpc]
 [<ffffffffa027d0ba>] rpc_exit_task+0x3f/0x6d [sunrpc]
 [<ffffffffa027d4ab>] __rpc_execute+0x78/0x24b [sunrpc]
 [<ffffffffa027d6c0>] ? rpc_execute+0x42/0x42 [sunrpc]
 [<ffffffffa027d6d0>] rpc_async_schedule+0x10/0x12 [sunrpc]
 [<ffffffff810611b7>] process_one_work+0x230/0x41d
 [<ffffffff81061102>] ? process_one_work+0x17b/0x41d
 [<ffffffff81063613>] worker_thread+0x133/0x217
 [<ffffffff810634e0>] ? manage_workers+0x191/0x191
 [<ffffffff81066e10>] kthread+0x7d/0x85
 [<ffffffff81485924>] kernel_thread_helper+0x4/0x10
 [<ffffffff8147eb18>] ? retint_restore_args+0x13/0x13
 [<ffffffff81066d93>] ? __init_kthread_worker+0x56/0x56
 [<ffffffff81485920>] ? gs_change+0x13/0x13

Ben Greear (2):
  slub: Enable backtrace for create/delete points.
  slub:  Add method to verify memory is not freed.

 include/linux/slab.h |   13 ++++++++
 mm/slub.c            |   80 +++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 86 insertions(+), 7 deletions(-)

-- 
1.7.3.4


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v2 1/2] slub: Enable backtrace for create/delete points.
  2011-06-29  0:47 [PATCH v2 0/2] SLUB memory debugging improvements greearb
@ 2011-06-29  0:47 ` greearb
  2011-07-01 14:08   ` Christoph Lameter
  2011-06-29  0:47 ` [PATCH v2 2/2] slub: Add method to verify memory is not freed greearb
  1 sibling, 1 reply; 8+ messages in thread
From: greearb @ 2011-06-29  0:47 UTC (permalink / raw)
  To: penberg, cl, linux-kernel; +Cc: Ben Greear

From: Ben Greear <greearb@candelatech.com>

This patch attempts to grab a backtrace for the creation
and deletion points of the slub object.  When a fault is
detected, we can then get a better idea of where the item
was deleted.

Signed-off-by: Ben Greear <greearb@candelatech.com>
---

v2:  Fix compile when CONFIG_STACKTRACE is not enabled
    (It is required for this particular feature.)

:100644 100644 35f351f... 3477ce5... M	mm/slub.c
 mm/slub.c |   44 +++++++++++++++++++++++++++++++++++++-------
 1 files changed, 37 insertions(+), 7 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 35f351f..3477ce5 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -191,8 +191,12 @@ static LIST_HEAD(slab_caches);
 /*
  * Tracking user of a slab.
  */
+#define TRACK_ADDRS_COUNT 16
 struct track {
-	unsigned long addr;	/* Called from address */
+	unsigned long caddr;
+#ifdef CONFIG_STACKTRACE
+	unsigned long addrs[TRACK_ADDRS_COUNT];	/* Called from address */
+#endif
 	int cpu;		/* Was running on cpu */
 	int pid;		/* Pid context */
 	unsigned long when;	/* When did the operation occur */
@@ -420,7 +424,25 @@ static void set_track(struct kmem_cache *s, void *object,
 	struct track *p = get_track(s, object, alloc);
 
 	if (addr) {
-		p->addr = addr;
+#ifdef CONFIG_STACKTRACE
+		struct stack_trace trace;
+		int i;
+
+		trace.nr_entries = 0;
+		trace.max_entries = TRACK_ADDRS_COUNT;
+		trace.entries = p->addrs;
+		trace.skip = 3;
+		save_stack_trace(&trace);
+
+		/* See rant in lockdep.c */
+		if (trace.nr_entries != 0 &&
+		    trace.entries[trace.nr_entries - 1] == ULONG_MAX)
+			trace.nr_entries--;
+
+		for (i = trace.nr_entries; i < TRACK_ADDRS_COUNT; i++)
+			p->addrs[i] = 0;
+#endif
+		p->caddr = addr;
 		p->cpu = smp_processor_id();
 		p->pid = current->pid;
 		p->when = jiffies;
@@ -439,11 +461,19 @@ static void init_tracking(struct kmem_cache *s, void *object)
 
 static void print_track(const char *s, struct track *t)
 {
-	if (!t->addr)
+	int i;
+	if (!t->caddr)
 		return;
 
 	printk(KERN_ERR "INFO: %s in %pS age=%lu cpu=%u pid=%d\n",
-		s, (void *)t->addr, jiffies - t->when, t->cpu, t->pid);
+		s, (void *)t->caddr, jiffies - t->when, t->cpu, t->pid);
+#ifdef CONFIG_STACKTRACE
+	for (i = 0; i < TRACK_ADDRS_COUNT; i++)
+		if (t->addrs[i])
+			printk(KERN_ERR "\t%pS\n", (void *)t->addrs[i]);
+		else
+			break;
+#endif
 }
 
 static void print_tracking(struct kmem_cache *s, void *object)
@@ -3721,7 +3751,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 			break;
 
 		caddr = t->loc[pos].addr;
-		if (track->addr == caddr) {
+		if (track->caddr == caddr) {
 
 			l = &t->loc[pos];
 			l->count++;
@@ -3744,7 +3774,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 			return 1;
 		}
 
-		if (track->addr < caddr)
+		if (track->caddr < caddr)
 			end = pos;
 		else
 			start = pos;
@@ -3762,7 +3792,7 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 			(t->count - pos) * sizeof(struct location));
 	t->count++;
 	l->count = 1;
-	l->addr = track->addr;
+	l->addr = track->caddr;
 	l->sum_time = age;
 	l->min_time = age;
 	l->max_time = age;
-- 
1.7.3.4



* [PATCH v2 2/2] slub:  Add method to verify memory is not freed.
  2011-06-29  0:47 [PATCH v2 0/2] SLUB memory debugging improvements greearb
  2011-06-29  0:47 ` [PATCH v2 1/2] slub: Enable backtrace for create/delete points greearb
@ 2011-06-29  0:47 ` greearb
  2011-07-01 14:11   ` Christoph Lameter
  1 sibling, 1 reply; 8+ messages in thread
From: greearb @ 2011-06-29  0:47 UTC (permalink / raw)
  To: penberg, cl, linux-kernel; +Cc: Ben Greear

From: Ben Greear <greearb@candelatech.com>

This is for tracking down suspect memory usage.

Signed-off-by: Ben Greear <greearb@candelatech.com>
---

v2:  Fix issues brought up in review.

:100644 100644 ad4dd1c... 916ce0f... M	include/linux/slab.h
:100644 100644 3477ce5... f64caca... M	mm/slub.c
 include/linux/slab.h |   13 +++++++++++++
 mm/slub.c            |   36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+), 0 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index ad4dd1c..916ce0f 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -302,6 +302,19 @@ extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long);
 
 #endif /* CONFIG_NUMA */
 
+/**
+ * Calling this on allocated memory will check that the memory
+ * is expected to be in use, and print warnings if not.
+ */
+#if defined(CONFIG_SLUB) && defined(CONFIG_SLUB_DEBUG)
+extern bool verify_mem_not_deleted(const void *x);
+#else
+static inline bool verify_mem_not_deleted(const void *x)
+{
+	return true;
+}
+#endif
+
 /*
  * Shortcuts
  */
diff --git a/mm/slub.c b/mm/slub.c
index 3477ce5..f64caca 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2958,6 +2958,42 @@ size_t ksize(const void *object)
 }
 EXPORT_SYMBOL(ksize);
 
+#ifdef CONFIG_SLUB_DEBUG
+bool verify_mem_not_deleted(const void *x)
+{
+	struct page *page;
+	void *object = (void *)x;
+	unsigned long flags;
+	bool rv;
+
+	if (unlikely(ZERO_OR_NULL_PTR(x)))
+		return false;
+
+	local_irq_save(flags);
+
+	page = virt_to_head_page(x);
+	if (unlikely(!PageSlab(page))) {
+		/* maybe it was from stack? */
+		rv = true;
+		goto out_unlock;
+	}
+
+	slab_lock(page);
+	if (on_freelist(page->slab, page, object)) {
+		object_err(page->slab, page, object, "Object is on free-list");
+		rv = false;
+	} else {
+		rv = true;
+	}
+	slab_unlock(page);
+
+out_unlock:
+	local_irq_restore(flags);
+	return rv;
+}
+EXPORT_SYMBOL(verify_mem_not_deleted);
+#endif
+
 void kfree(const void *x)
 {
 	struct page *page;
-- 
1.7.3.4



* Re: [PATCH v2 1/2] slub: Enable backtrace for create/delete points.
  2011-06-29  0:47 ` [PATCH v2 1/2] slub: Enable backtrace for create/delete points greearb
@ 2011-07-01 14:08   ` Christoph Lameter
  2011-07-01 14:20     ` Ben Greear
  0 siblings, 1 reply; 8+ messages in thread
From: Christoph Lameter @ 2011-07-01 14:08 UTC (permalink / raw)
  To: Ben Greear; +Cc: penberg, linux-kernel

On Tue, 28 Jun 2011, greearb@candelatech.com wrote:

> diff --git a/mm/slub.c b/mm/slub.c
> index 35f351f..3477ce5 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -191,8 +191,12 @@ static LIST_HEAD(slab_caches);
>  /*
>   * Tracking user of a slab.
>   */
> +#define TRACK_ADDRS_COUNT 16
>  struct track {
> -	unsigned long addr;	/* Called from address */
> +	unsigned long caddr;

Keep the name the same since its role does not change. That makes the
patch touch less code.

> +#ifdef CONFIG_STACKTRACE
> +	unsigned long addrs[TRACK_ADDRS_COUNT];	/* Called from address */
> +#endif
>  	int cpu;		/* Was running on cpu */
>  	int pid;		/* Pid context */
>  	unsigned long when;	/* When did the operation occur */
> @@ -420,7 +424,25 @@ static void set_track(struct kmem_cache *s, void *object,
>  	struct track *p = get_track(s, object, alloc);
>
>  	if (addr) {
> -		p->addr = addr;
> +#ifdef CONFIG_STACKTRACE
> +		struct stack_trace trace;
> +		int i;
> +
> +		trace.nr_entries = 0;
> +		trace.max_entries = TRACK_ADDRS_COUNT;
> +		trace.entries = p->addrs;
> +		trace.skip = 3;
> +		save_stack_trace(&trace);
> +
> +		/* See rant in lockdep.c */
> +		if (trace.nr_entries != 0 &&
> +		    trace.entries[trace.nr_entries - 1] == ULONG_MAX)
> +			trace.nr_entries--;
> +
> +		for (i = trace.nr_entries; i < TRACK_ADDRS_COUNT; i++)
> +			p->addrs[i] = 0;
> +#endif

Hmmm... Is this really working on all architectures? I remember from years
past that we had issues with adding the same functionality to slab.

Otherwise the patch is quite straightforward (and it will be much cleaner
if you do not change the name of "addr").



* Re: [PATCH v2 2/2] slub:  Add method to verify memory is not freed.
  2011-06-29  0:47 ` [PATCH v2 2/2] slub: Add method to verify memory is not freed greearb
@ 2011-07-01 14:11   ` Christoph Lameter
  2011-07-01 14:23     ` Ben Greear
  2011-07-07 18:00     ` Ben Greear
  0 siblings, 2 replies; 8+ messages in thread
From: Christoph Lameter @ 2011-07-01 14:11 UTC (permalink / raw)
  To: Ben Greear; +Cc: penberg, linux-kernel

On Tue, 28 Jun 2011, greearb@candelatech.com wrote:

> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index ad4dd1c..916ce0f 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -302,6 +302,19 @@ extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long);
>
>  #endif /* CONFIG_NUMA */
>
> +/**
> + * Calling this on allocated memory will check that the memory
> + * is expected to be in use, and print warnings if not.
> + */
> +#if defined(CONFIG_SLUB) && defined(CONFIG_SLUB_DEBUG)
> +extern bool verify_mem_not_deleted(const void *x);
> +#else
> +static inline bool verify_mem_not_deleted(const void *x)
> +{
> +	return true;
> +}
> +#endif
> +

Why is this in slab.h and not slub_def.h?

> diff --git a/mm/slub.c b/mm/slub.c
> index 3477ce5..f64caca 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2958,6 +2958,42 @@ size_t ksize(const void *object)
>  }
>  EXPORT_SYMBOL(ksize);
>
> +#ifdef CONFIG_SLUB_DEBUG
> +bool verify_mem_not_deleted(const void *x)
> +{
> +	struct page *page;
> +	void *object = (void *)x;
> +	unsigned long flags;
> +	bool rv;
> +
> +	if (unlikely(ZERO_OR_NULL_PTR(x)))
> +		return false;
> +
> +	local_irq_save(flags);
> +
> +	page = virt_to_head_page(x);
> +	if (unlikely(!PageSlab(page))) {
> +		/* maybe it was from stack? */
> +		rv = true;
> +		goto out_unlock;
> +	}

The above check is problematic for slabs that have SLAB_DESTROY_BY_RCU
set. PageSlab(page) may remain true until the next rcu interval.


* Re: [PATCH v2 1/2] slub: Enable backtrace for create/delete points.
  2011-07-01 14:08   ` Christoph Lameter
@ 2011-07-01 14:20     ` Ben Greear
  0 siblings, 0 replies; 8+ messages in thread
From: Ben Greear @ 2011-07-01 14:20 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: penberg, linux-kernel

On 07/01/2011 07:08 AM, Christoph Lameter wrote:
> On Tue, 28 Jun 2011, greearb@candelatech.com wrote:
>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 35f351f..3477ce5 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -191,8 +191,12 @@ static LIST_HEAD(slab_caches);
>>   /*
>>    * Tracking user of a slab.
>>    */
>> +#define TRACK_ADDRS_COUNT 16
>>   struct track {
>> -	unsigned long addr;	/* Called from address */
>> +	unsigned long caddr;
>
> Keep the name the same since its role does not change. That makes the
> patch touch less code.
>
>> +#ifdef CONFIG_STACKTRACE
>> +	unsigned long addrs[TRACK_ADDRS_COUNT];	/* Called from address */
>> +#endif
>>   	int cpu;		/* Was running on cpu */
>>   	int pid;		/* Pid context */
>>   	unsigned long when;	/* When did the operation occur */
>> @@ -420,7 +424,25 @@ static void set_track(struct kmem_cache *s, void *object,
>>   	struct track *p = get_track(s, object, alloc);
>>
>>   	if (addr) {
>> -		p->addr = addr;
>> +#ifdef CONFIG_STACKTRACE
>> +		struct stack_trace trace;
>> +		int i;
>> +
>> +		trace.nr_entries = 0;
>> +		trace.max_entries = TRACK_ADDRS_COUNT;
>> +		trace.entries = p->addrs;
>> +		trace.skip = 3;
>> +		save_stack_trace(&trace);
>> +
>> +		/* See rant in lockdep.c */
>> +		if (trace.nr_entries != 0 &&
>> +		    trace.entries[trace.nr_entries - 1] == ULONG_MAX)
>> +			trace.nr_entries--;
>> +
>> +		for (i = trace.nr_entries; i < TRACK_ADDRS_COUNT; i++)
>> +			p->addrs[i] = 0;
>> +#endif
>
> Hmmm... Is this really working on all architectures? I remember from years
> past that we had issues with adding the same functionality to slab.

The lockdep code seems to use the same basic logic, and I didn't notice
any arch checks in it.  I think someone must recently have fixed the backtrace
logic to be more easily usable... this API didn't exist a few kernels
ago.

>
> Otherwise the patch is quite straightforward (and it will be much cleaner
> if you do not change the name of "addr").

Ok, will change that.

Thanks,
Ben



-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com


* Re: [PATCH v2 2/2] slub:  Add method to verify memory is not freed.
  2011-07-01 14:11   ` Christoph Lameter
@ 2011-07-01 14:23     ` Ben Greear
  2011-07-07 18:00     ` Ben Greear
  1 sibling, 0 replies; 8+ messages in thread
From: Ben Greear @ 2011-07-01 14:23 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: penberg, linux-kernel

On 07/01/2011 07:11 AM, Christoph Lameter wrote:
> On Tue, 28 Jun 2011, greearb@candelatech.com wrote:
>
>> diff --git a/include/linux/slab.h b/include/linux/slab.h
>> index ad4dd1c..916ce0f 100644
>> --- a/include/linux/slab.h
>> +++ b/include/linux/slab.h
>> @@ -302,6 +302,19 @@ extern void *__kmalloc_node_track_caller(size_t, gfp_t, int, unsigned long);
>>
>>   #endif /* CONFIG_NUMA */
>>
>> +/**
>> + * Calling this on allocated memory will check that the memory
>> + * is expected to be in use, and print warnings if not.
>> + */
>> +#if defined(CONFIG_SLUB) && defined(CONFIG_SLUB_DEBUG)
>> +extern bool verify_mem_not_deleted(const void *x);
>> +#else
>> +static inline bool verify_mem_not_deleted(const void *x)
>> +{
>> +	return true;
>> +}
>> +#endif
>> +
>
> Why is this in slab.h and not slub_def.h?

No reason... I'll move it.

>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 3477ce5..f64caca 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -2958,6 +2958,42 @@ size_t ksize(const void *object)
>>   }
>>   EXPORT_SYMBOL(ksize);
>>
>> +#ifdef CONFIG_SLUB_DEBUG
>> +bool verify_mem_not_deleted(const void *x)
>> +{
>> +	struct page *page;
>> +	void *object = (void *)x;
>> +	unsigned long flags;
>> +	bool rv;
>> +
>> +	if (unlikely(ZERO_OR_NULL_PTR(x)))
>> +		return false;
>> +
>> +	local_irq_save(flags);
>> +
>> +	page = virt_to_head_page(x);
>> +	if (unlikely(!PageSlab(page))) {
>> +		/* maybe it was from stack? */
>> +		rv = true;
>> +		goto out_unlock;
>> +	}
>
> The above check is problematic for slabs that have SLAB_DESTROY_BY_RCU
> set. PageSlab(page) may remain true until the next rcu interval.

Well, any suggestions on how to do this better?

Thanks,
Ben



-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com


* Re: [PATCH v2 2/2] slub:  Add method to verify memory is not freed.
  2011-07-01 14:11   ` Christoph Lameter
  2011-07-01 14:23     ` Ben Greear
@ 2011-07-07 18:00     ` Ben Greear
  1 sibling, 0 replies; 8+ messages in thread
From: Ben Greear @ 2011-07-07 18:00 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: penberg, linux-kernel

On 07/01/2011 07:11 AM, Christoph Lameter wrote:

>> +#ifdef CONFIG_SLUB_DEBUG
>> +bool verify_mem_not_deleted(const void *x)
>> +{
>> +	struct page *page;
>> +	void *object = (void *)x;
>> +	unsigned long flags;
>> +	bool rv;
>> +
>> +	if (unlikely(ZERO_OR_NULL_PTR(x)))
>> +		return false;
>> +
>> +	local_irq_save(flags);
>> +
>> +	page = virt_to_head_page(x);
>> +	if (unlikely(!PageSlab(page))) {
>> +		/* maybe it was from stack? */
>> +		rv = true;
>> +		goto out_unlock;
>> +	}
>
> The above check is problematic for slabs that have SLAB_DESTROY_BY_RCU
> set. PageSlab(page) may remain true until the next rcu interval.

I do not see the problem here.  If PageSlab(page) is true, we fall
through and do the check for on-free-list, under the slab_lock().

We only return an error if the object is on the free list.  Would pages
waiting out the RCU interval show up as on the free list?

Thanks,
Ben

-- 
Ben Greear <greearb@candelatech.com>
Candela Technologies Inc  http://www.candelatech.com



end of thread, other threads:[~2011-07-07 18:00 UTC | newest]

Thread overview: 8+ messages
2011-06-29  0:47 [PATCH v2 0/2] SLUB memory debugging improvements greearb
2011-06-29  0:47 ` [PATCH v2 1/2] slub: Enable backtrace for create/delete points greearb
2011-07-01 14:08   ` Christoph Lameter
2011-07-01 14:20     ` Ben Greear
2011-06-29  0:47 ` [PATCH v2 2/2] slub: Add method to verify memory is not freed greearb
2011-07-01 14:11   ` Christoph Lameter
2011-07-01 14:23     ` Ben Greear
2011-07-07 18:00     ` Ben Greear
