linux-mm.kvack.org archive mirror
* [PATCH] Add debugging boundary check to pfn_to_page
@ 2011-06-08 19:18 Eric B Munson
  2011-06-08 19:31 ` Randy Dunlap
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Eric B Munson @ 2011-06-08 19:18 UTC (permalink / raw)
  To: arnd
  Cc: akpm, paulmck, mingo, randy.dunlap, josh, linux-arch,
	linux-kernel, mgorman, linux-mm, Eric B Munson

Bugzilla 36192 showed a problem where pages were being accessed outside of
a node boundary.  It would be helpful in diagnosing this kind of problem to
have pfn_to_page complain when a page outside of the node boundary is
accessed.  This patch adds a new debug config option that adds a WARN_ON to
pfn_to_page for exactly that purpose.

Signed-off-by: Eric B Munson <emunson@mgebm.net>
---
 include/asm-generic/memory_model.h |   19 +++++++++++++++----
 lib/Kconfig.debug                  |   10 ++++++++++
 2 files changed, 25 insertions(+), 4 deletions(-)

diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
index fb2d63f..a0f1d19 100644
--- a/include/asm-generic/memory_model.h
+++ b/include/asm-generic/memory_model.h
@@ -62,11 +62,22 @@
 	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec)));	\
 })
 
-#define __pfn_to_page(pfn)				\
-({	unsigned long __pfn = (pfn);			\
-	struct mem_section *__sec = __pfn_to_section(__pfn);	\
-	__section_mem_map_addr(__sec) + __pfn;		\
+#ifdef CONFIG_DEBUG_MEMORY_MODEL
+#define __pfn_to_page(pfn)						\
+({	unsigned long __pfn = (pfn);					\
+	struct mem_section *__sec = __pfn_to_section(__pfn);		\
+	struct page *__page = __section_mem_map_addr(__sec) + __pfn;	\
+	WARN_ON(__page->flags == 0);					\
+	__page;								\
 })
+#else
+#define __pfn_to_page(pfn)						\
+({	unsigned long __pfn = (pfn);					\
+	struct mem_section *__sec = __pfn_to_section(__pfn);		\
+	__section_mem_map_addr(__sec) + __pfn;	\
+})
+#endif /* CONFIG_DEBUG_MEMORY_MODEL */
+
 #endif /* CONFIG_FLATMEM/DISCONTIGMEM/SPARSEMEM */
 
 #define page_to_pfn __page_to_pfn
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index dd373c8..d932cbf 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -777,6 +777,16 @@ config DEBUG_MEMORY_INIT
 
 	  If unsure, say Y
 
+config DEBUG_MEMORY_MODEL
+	bool "Debug memory model" if SPARSEMEM || DISCONTIGMEM
+	depends on SPARSEMEM || DISCONTIGMEM
+	help
+	  Enable this to check that page accesses are done within node
+	  boundaries.  The check will warn each time a page is requested
+	  outside node boundaries.
+
+	  If unsure, say N
+
 config DEBUG_LIST
 	bool "Debug linked list manipulation"
 	depends on DEBUG_KERNEL
-- 
1.7.4.1


* Re: [PATCH] Add debugging boundary check to pfn_to_page
  2011-06-08 19:18 [PATCH] Add debugging boundary check to pfn_to_page Eric B Munson
@ 2011-06-08 19:31 ` Randy Dunlap
  2011-06-08 19:56 ` Paul E. McKenney
  2011-06-08 20:49 ` Dave Hansen
  2 siblings, 0 replies; 5+ messages in thread
From: Randy Dunlap @ 2011-06-08 19:31 UTC (permalink / raw)
  To: Eric B Munson
  Cc: arnd, akpm, paulmck, mingo, josh, linux-arch, linux-kernel,
	mgorman, linux-mm

On 06/08/11 12:18, Eric B Munson wrote:
> Bugzilla 36192 showed a problem where pages were being accessed outside of
> a node boundary.  It would be helpful in diagnosing this kind of problem to
> have pfn_to_page complain when a page outside of the node boundary is
> accessed.  This patch adds a new debug config option that adds a WARN_ON to
> pfn_to_page for exactly that purpose.
> 
> Signed-off-by: Eric B Munson <emunson@mgebm.net>
> ---
>  include/asm-generic/memory_model.h |   19 +++++++++++++++----
>  lib/Kconfig.debug                  |   10 ++++++++++
>  2 files changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
> index fb2d63f..a0f1d19 100644
> --- a/include/asm-generic/memory_model.h
> +++ b/include/asm-generic/memory_model.h
> @@ -62,11 +62,22 @@
>  	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec)));	\
>  })
>  
> -#define __pfn_to_page(pfn)				\
> -({	unsigned long __pfn = (pfn);			\
> -	struct mem_section *__sec = __pfn_to_section(__pfn);	\
> -	__section_mem_map_addr(__sec) + __pfn;		\
> +#ifdef CONFIG_DEBUG_MEMORY_MODEL
> +#define __pfn_to_page(pfn)						\
> +({	unsigned long __pfn = (pfn);					\
> +	struct mem_section *__sec = __pfn_to_section(__pfn);		\
> +	struct page *__page = __section_mem_map_addr(__sec) + __pfn;	\
> +	WARN_ON(__page->flags == 0);					\
> +	__page;								\
>  })
> +#else
> +#define __pfn_to_page(pfn)						\
> +({	unsigned long __pfn = (pfn);					\
> +	struct mem_section *__sec = __pfn_to_section(__pfn);		\
> +	__section_mem_map_addr(__sec) + __pfn;	\
> +})
> +#endif /* CONFIG_DEBUG_MEMORY_MODEL */
> +
>  #endif /* CONFIG_FLATMEM/DISCONTIGMEM/SPARSEMEM */
>  
>  #define page_to_pfn __page_to_pfn
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index dd373c8..d932cbf 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -777,6 +777,16 @@ config DEBUG_MEMORY_INIT
>  
>  	  If unsure, say Y
>  
> +config DEBUG_MEMORY_MODEL
> +	bool "Debug memory model" if SPARSEMEM || DISCONTIGMEM
> +	depends on SPARSEMEM || DISCONTIGMEM

	bool <prompt> <expr>
creates the dependency, so the "depends on" line is not needed.
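
i.e., the entry could then be reduced to something like this (same prompt
and help text, just without the redundant line):

config DEBUG_MEMORY_MODEL
	bool "Debug memory model" if SPARSEMEM || DISCONTIGMEM
	help
	  Enable this to check that page accesses are done within node
	  boundaries.  The check will warn each time a page is requested
	  outside node boundaries.

	  If unsure, say N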

> +	help
> +	  Enable this to check that page accesses are done within node
> +	  boundaries.  The check will warn each time a page is requested
> +	  outside node boundaries.
> +
> +	  If unsure, say N
> +
>  config DEBUG_LIST
>  	bool "Debug linked list manipulation"
>  	depends on DEBUG_KERNEL


-- 
~Randy
*** Remember to use Documentation/SubmitChecklist when testing your code ***


* Re: [PATCH] Add debugging boundary check to pfn_to_page
  2011-06-08 19:18 [PATCH] Add debugging boundary check to pfn_to_page Eric B Munson
  2011-06-08 19:31 ` Randy Dunlap
@ 2011-06-08 19:56 ` Paul E. McKenney
  2011-06-08 20:49 ` Dave Hansen
  2 siblings, 0 replies; 5+ messages in thread
From: Paul E. McKenney @ 2011-06-08 19:56 UTC (permalink / raw)
  To: Eric B Munson
  Cc: arnd, akpm, mingo, randy.dunlap, josh, linux-arch, linux-kernel,
	mgorman, linux-mm

On Wed, Jun 08, 2011 at 03:18:54PM -0400, Eric B Munson wrote:
> Bugzilla 36192 showed a problem where pages were being accessed outside of
> a node boundary.  It would be helpful in diagnosing this kind of problem to
> have pfn_to_page complain when a page outside of the node boundary is
> accessed.  This patch adds a new debug config option that adds a WARN_ON to
> pfn_to_page for exactly that purpose.
> 
> Signed-off-by: Eric B Munson <emunson@mgebm.net>
> ---
>  include/asm-generic/memory_model.h |   19 +++++++++++++++----
>  lib/Kconfig.debug                  |   10 ++++++++++
>  2 files changed, 25 insertions(+), 4 deletions(-)
> 
> diff --git a/include/asm-generic/memory_model.h b/include/asm-generic/memory_model.h
> index fb2d63f..a0f1d19 100644
> --- a/include/asm-generic/memory_model.h
> +++ b/include/asm-generic/memory_model.h
> @@ -62,11 +62,22 @@
>  	(unsigned long)(__pg - __section_mem_map_addr(__nr_to_section(__sec)));	\
>  })
> 
> -#define __pfn_to_page(pfn)				\
> -({	unsigned long __pfn = (pfn);			\
> -	struct mem_section *__sec = __pfn_to_section(__pfn);	\
> -	__section_mem_map_addr(__sec) + __pfn;		\
> +#ifdef CONFIG_DEBUG_MEMORY_MODEL
> +#define __pfn_to_page(pfn)						\
> +({	unsigned long __pfn = (pfn);					\
> +	struct mem_section *__sec = __pfn_to_section(__pfn);		\
> +	struct page *__page = __section_mem_map_addr(__sec) + __pfn;	\
> +	WARN_ON(__page->flags == 0);					\
> +	__page;								\
>  })
> +#else
> +#define __pfn_to_page(pfn)						\
> +({	unsigned long __pfn = (pfn);					\
> +	struct mem_section *__sec = __pfn_to_section(__pfn);		\
> +	__section_mem_map_addr(__sec) + __pfn;	\
> +})
> +#endif /* CONFIG_DEBUG_MEMORY_MODEL */
> +

The following variant would avoid the duplicate code, FWIW.

#define __pfn_to_page_nodebug(pfn)					\
({	unsigned long __pfn = (pfn);					\
	struct mem_section *__sec = __pfn_to_section(__pfn);		\
	__section_mem_map_addr(__sec) + __pfn;				\
})
#ifdef CONFIG_DEBUG_MEMORY_MODEL
#define __pfn_to_page(pfn)						\
({									\
	struct page *__page = __pfn_to_page_nodebug(pfn);		\
	WARN_ON(__page->flags == 0);					\
	__page;								\
})
#else
#define __pfn_to_page(pfn) __pfn_to_page_nodebug(pfn)
#endif /* CONFIG_DEBUG_MEMORY_MODEL */

							Thanx, Paul

>  #endif /* CONFIG_FLATMEM/DISCONTIGMEM/SPARSEMEM */
> 
>  #define page_to_pfn __page_to_pfn
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index dd373c8..d932cbf 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -777,6 +777,16 @@ config DEBUG_MEMORY_INIT
> 
>  	  If unsure, say Y
> 
> +config DEBUG_MEMORY_MODEL
> +	bool "Debug memory model" if SPARSEMEM || DISCONTIGMEM
> +	depends on SPARSEMEM || DISCONTIGMEM
> +	help
> +	  Enable this to check that page accesses are done within node
> +	  boundaries.  The check will warn each time a page is requested
> +	  outside node boundaries.
> +
> +	  If unsure, say N
> +
>  config DEBUG_LIST
>  	bool "Debug linked list manipulation"
>  	depends on DEBUG_KERNEL
> -- 
> 1.7.4.1
> 


* Re: [PATCH] Add debugging boundary check to pfn_to_page
  2011-06-08 19:18 [PATCH] Add debugging boundary check to pfn_to_page Eric B Munson
  2011-06-08 19:31 ` Randy Dunlap
  2011-06-08 19:56 ` Paul E. McKenney
@ 2011-06-08 20:49 ` Dave Hansen
  2011-06-10 13:27   ` Eric B Munson
  2 siblings, 1 reply; 5+ messages in thread
From: Dave Hansen @ 2011-06-08 20:49 UTC (permalink / raw)
  To: Eric B Munson
  Cc: arnd, akpm, paulmck, mingo, randy.dunlap, josh, linux-arch,
	linux-kernel, mgorman, linux-mm

On Wed, 2011-06-08 at 15:18 -0400, Eric B Munson wrote:
> -#define __pfn_to_page(pfn)                             \
> -({     unsigned long __pfn = (pfn);                    \
> -       struct mem_section *__sec = __pfn_to_section(__pfn);    \
> -       __section_mem_map_addr(__sec) + __pfn;          \
> +#ifdef CONFIG_DEBUG_MEMORY_MODEL
> +#define __pfn_to_page(pfn)                                             \
> +({     unsigned long __pfn = (pfn);                                    \
> +       struct mem_section *__sec = __pfn_to_section(__pfn);            \
> +       struct page *__page = __section_mem_map_addr(__sec) + __pfn;    \
> +       WARN_ON(__page->flags == 0);                                    \
> +       __page;                                                         \

What was the scenario you're trying to catch here?  If you give a really
crummy __pfn, you'll probably go off the end of one of the mem_section[]
arrays, and get garbage back for __sec.  You might also get a NULL back
from __section_mem_map_addr() if the section is possibly valid, but just
not present on this particular system.

I _think_ the only kind of bug this will catch is if you have a valid
section, with a valid section_mem_map[] but still manage to find
yourself with a 'struct page' unclaimed by any zone and thus
uninitialized.

You could catch a lot more cases by being a bit more paranoid:

void check_pfn(unsigned long pfn)
{
	int nid;
	
	// hacked in from pfn_to_nid:
	// Don't actually do this, add a new helper near pfn_to_nid()
	// Can this even fit in the physnode_map?
	if (pfn / PAGES_PER_ELEMENT > ARRAY_SIZE(physnode_map))
		WARN();

	// Is there a valid nid there?
	nid = pfn_to_nid(pfn);
	if (nid == -1)
		WARN();
	
	// check against NODE_DATA(nid)->node_start_pfn;
	// check against NODE_DATA(nid)->node_spanned_pages;
}
>  })
> +#else
> +#define __pfn_to_page(pfn)                                             \
> +({     unsigned long __pfn = (pfn);                                    \
> +       struct mem_section *__sec = __pfn_to_section(__pfn);            \
> +       __section_mem_map_addr(__sec) + __pfn;  \
> +})
> +#endif /* CONFIG_DEBUG_MEMORY_MODEL */ 

Instead of making a completely new __pfn_to_page() in the debugging
case, I'd probably do something like this:

#ifdef CONFIG_DEBUG_MEMORY_MODEL
#define check_foo(foo) {\
	some_check_here(foo);\
	WARN_ON((foo)->flags == 0);\
}
#else
#define check_foo(foo) do { } while (0)
#endif

#define __pfn_to_page(pfn)                                             \
({     unsigned long __pfn = (pfn);                                    \
       struct mem_section *__sec = __pfn_to_section(__pfn);            \
       struct page *__page = __section_mem_map_addr(__sec) + __pfn;    \
       check_foo(__page);						\
       __page;                                                         \
 })

That'll make sure that the two copies of __pfn_to_page() don't
accidentally diverge.  It also makes it a lot easier to read, I think.

-- Dave


* Re: [PATCH] Add debugging boundary check to pfn_to_page
  2011-06-08 20:49 ` Dave Hansen
@ 2011-06-10 13:27   ` Eric B Munson
  0 siblings, 0 replies; 5+ messages in thread
From: Eric B Munson @ 2011-06-10 13:27 UTC (permalink / raw)
  To: Dave Hansen
  Cc: arnd, akpm, paulmck, mingo, randy.dunlap, josh, linux-arch,
	linux-kernel, mgorman, linux-mm

On Wed, 08 Jun 2011, Dave Hansen wrote:

> On Wed, 2011-06-08 at 15:18 -0400, Eric B Munson wrote:
> > -#define __pfn_to_page(pfn)                             \
> > -({     unsigned long __pfn = (pfn);                    \
> > -       struct mem_section *__sec = __pfn_to_section(__pfn);    \
> > -       __section_mem_map_addr(__sec) + __pfn;          \
> > +#ifdef CONFIG_DEBUG_MEMORY_MODEL
> > +#define __pfn_to_page(pfn)                                             \
> > +({     unsigned long __pfn = (pfn);                                    \
> > +       struct mem_section *__sec = __pfn_to_section(__pfn);            \
> > +       struct page *__page = __section_mem_map_addr(__sec) + __pfn;    \
> > +       WARN_ON(__page->flags == 0);                                    \
> > +       __page;                                                         \
> 
> What was the scenario you're trying to catch here?  If you give a really
> crummy __pfn, you'll probably go off the end of one of the mem_section[]
> arrays, and get garbage back for __sec.  You might also get a NULL back
> from __section_mem_map_addr() if the section is possibly valid, but just
> not present on this particular system.
> 
> I _think_ the only kind of bug this will catch is if you have a valid
> section, with a valid section_mem_map[] but still manage to find
> yourself with a 'struct page' unclaimed by any zone and thus
> uninitialized.

This is the case I was going after.  I will rework for a V2 based on the
feedback here.
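
Something along these lines, perhaps (a rough sketch only, untested; the
check_memory_model_page() name is just a placeholder):

#ifdef CONFIG_DEBUG_MEMORY_MODEL
/* Warn if the struct page looks uninitialized (flags still zero). */
#define check_memory_model_page(page)	WARN_ON((page)->flags == 0)
#else
#define check_memory_model_page(page)	do { } while (0)
#endif

#define __pfn_to_page(pfn)						\
({	unsigned long __pfn = (pfn);					\
	struct mem_section *__sec = __pfn_to_section(__pfn);		\
	struct page *__page = __section_mem_map_addr(__sec) + __pfn;	\
	check_memory_model_page(__page);				\
	__page;								\
})

That keeps a single copy of __pfn_to_page() and leaves room to extend the
debug check toward what Dave's check_pfn() sketch does.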

> 
> You could catch a lot more cases by being a bit more paranoid:
> 
> void check_pfn(unsigned long pfn)
> {
> 	int nid;
> 	
> 	// hacked in from pfn_to_nid:
> 	// Don't actually do this, add a new helper near pfn_to_nid()
> 	// Can this even fit in the physnode_map?
> 	if (pfn / PAGES_PER_ELEMENT > ARRAY_SIZE(physnode_map))
> 		WARN();
> 
> 	// Is there a valid nid there?
> 	nid = pfn_to_nid(pfn);
> 	if (nid == -1)
> 		WARN();
> 	
> 	// check against NODE_DATA(nid)->node_start_pfn;
> 	// check against NODE_DATA(nid)->node_spanned_pages;
> }
> >  })
> > +#else
> > +#define __pfn_to_page(pfn)                                             \
> > +({     unsigned long __pfn = (pfn);                                    \
> > +       struct mem_section *__sec = __pfn_to_section(__pfn);            \
> > +       __section_mem_map_addr(__sec) + __pfn;  \
> > +})
> > +#endif /* CONFIG_DEBUG_MEMORY_MODEL */ 
> 
> Instead of making a completely new __pfn_to_page() in the debugging
> case, I'd probably do something like this:
> 
> #ifdef CONFIG_DEBUG_MEMORY_MODEL
> #define check_foo(foo) {\
> 	some_check_here(foo);\
> 	WARN_ON((foo)->flags == 0);\
> }
> #else
> #define check_foo(foo) do { } while (0)
> #endif
> 
> #define __pfn_to_page(pfn)                                             \
> ({     unsigned long __pfn = (pfn);                                    \
>        struct mem_section *__sec = __pfn_to_section(__pfn);            \
>        struct page *__page = __section_mem_map_addr(__sec) + __pfn;    \
>        check_foo(__page);						\
>        __page;                                                         \
>  })
> 
> That'll make sure that the two copies of __pfn_to_page() don't
> accidentally diverge.  It also makes it a lot easier to read, I think.
> 
> -- Dave
> 

