public inbox for linux-kernel@vger.kernel.org
* [PATCH] mm/vmalloc: interchange the implementation of vmalloc_to_{pfn,page}
@ 2013-11-28 16:29 Jianyu Zhan
  2013-11-28 17:41 ` Vladimir Murzin
  0 siblings, 1 reply; 4+ messages in thread
From: Jianyu Zhan @ 2013-11-28 16:29 UTC (permalink / raw)
  To: linux-mm
  Cc: akpm, iamjoonsoo.kim, zhangyanfei, liwanp, rientjes, linux-kernel

Currently vmalloc_to_pfn() is implemented as a wrapper around
vmalloc_to_page(), which:

 1. walks the page tables to find the corresponding pfn,
 2. then wraps the pfn into a struct page,
 3. and returns it.

vmalloc_to_pfn() then converts that struct page back into a pfn.

This seems too circuitous, so this patch reverses the relationship:
vmalloc_to_page() is now implemented as a wrapper around vmalloc_to_pfn().
This makes both vmalloc_to_pfn() and vmalloc_to_page() slightly more
efficient.

No functional change.

Signed-off-by: Jianyu Zhan <nasa4836@gmail.com>
---
 mm/vmalloc.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0fdf968..a335e21 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -220,12 +220,12 @@ int is_vmalloc_or_module_addr(const void *x)
 }
 
 /*
- * Walk a vmap address to the struct page it maps.
+ * Walk a vmap address to the physical pfn it maps to.
  */
-struct page *vmalloc_to_page(const void *vmalloc_addr)
+unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
 {
 	unsigned long addr = (unsigned long) vmalloc_addr;
-	struct page *page = NULL;
+	unsigned long pfn;
 	pgd_t *pgd = pgd_offset_k(addr);
 
 	/*
@@ -244,23 +244,23 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 				ptep = pte_offset_map(pmd, addr);
 				pte = *ptep;
 				if (pte_present(pte))
-					page = pte_page(pte);
+					pfn = pte_page(pte);
 				pte_unmap(ptep);
 			}
 		}
 	}
-	return page;
+	return pfn;
 }
-EXPORT_SYMBOL(vmalloc_to_page);
+EXPORT_SYMBOL(vmalloc_to_pfn);
 
 /*
- * Map a vmalloc()-space virtual address to the physical page frame number.
+ * Map a vmalloc()-space virtual address to the struct page.
  */
-unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
+struct page *vmalloc_to_page(const void *vmalloc_addr)
 {
-	return page_to_pfn(vmalloc_to_page(vmalloc_addr));
+	return pfn_to_page(vmalloc_to_pfn(vmalloc_addr));
 }
-EXPORT_SYMBOL(vmalloc_to_pfn);
+EXPORT_SYMBOL(vmalloc_to_page);
 
 
 /*** Global kva allocator ***/


* Re: [PATCH] mm/vmalloc: interchange the implementation of vmalloc_to_{pfn,page}
  2013-11-28 16:29 Jianyu Zhan
@ 2013-11-28 17:41 ` Vladimir Murzin
  0 siblings, 0 replies; 4+ messages in thread
From: Vladimir Murzin @ 2013-11-28 17:41 UTC (permalink / raw)
  To: Jianyu Zhan
  Cc: linux-mm, akpm, iamjoonsoo.kim, zhangyanfei, liwanp, rientjes,
	linux-kernel

On Fri, Nov 29, 2013 at 12:29:13AM +0800, Jianyu Zhan wrote:
> Currently vmalloc_to_pfn() is implemented as a wrapper around
> vmalloc_to_page(), which:
> 
>  1. walks the page tables to find the corresponding pfn,
>  2. then wraps the pfn into a struct page,
>  3. and returns it.
> 
> vmalloc_to_pfn() then converts that struct page back into a pfn.
> 
> This seems too circuitous, so this patch reverses the relationship:
> vmalloc_to_page() is now implemented as a wrapper around vmalloc_to_pfn().
> This makes both vmalloc_to_pfn() and vmalloc_to_page() slightly more efficient.

Any numbers for efficiency?

> 

> No functional change. 
> 
> Signed-off-by: Jianyu Zhan <nasa4836@gmail.com>
> ---
> mm/vmalloc.c | 20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 0fdf968..a335e21 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -220,12 +220,12 @@ int is_vmalloc_or_module_addr(const void *x)
>  }
>  
>  /*
> - * Walk a vmap address to the struct page it maps.
> + * Walk a vmap address to the physical pfn it maps to.
>   */
> -struct page *vmalloc_to_page(const void *vmalloc_addr)
> +unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
>  {
>  	unsigned long addr = (unsigned long) vmalloc_addr;
> -	struct page *page = NULL;
> +	unsigned long pfn;

uninitialized pfn will lead to a bug.

>  	pgd_t *pgd = pgd_offset_k(addr);
>  
>  	/*
> @@ -244,23 +244,23 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>  				ptep = pte_offset_map(pmd, addr);
>  				pte = *ptep;
>  				if (pte_present(pte))
> -					page = pte_page(pte);
> +					pfn = pte_page(pte);

page_to_pfn() is missing here.

Have you ever tested that there are no functional changes?

Vladimir

>  				pte_unmap(ptep);
>  			}
>  		}
>  	}
> -	return page;
> +	return pfn;
>  }
> -EXPORT_SYMBOL(vmalloc_to_page);
> +EXPORT_SYMBOL(vmalloc_to_pfn);
>  
>  /*
> - * Map a vmalloc()-space virtual address to the physical page frame number.
> + * Map a vmalloc()-space virtual address to the struct page.
>   */
> -unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
> +struct page *vmalloc_to_page(const void *vmalloc_addr)
>  {
> -	return page_to_pfn(vmalloc_to_page(vmalloc_addr));
> +	return pfn_to_page(vmalloc_to_pfn(vmalloc_addr));
>  }
> -EXPORT_SYMBOL(vmalloc_to_pfn);
> +EXPORT_SYMBOL(vmalloc_to_page);
>  
>  
>  /*** Global kva allocator ***/
> 


* Re: [PATCH] mm/vmalloc: interchange the implementation of vmalloc_to_{pfn,page}
@ 2013-11-28 18:11 Jianyu Zhan
  2013-12-01 18:23 ` Vladimir Murzin
  0 siblings, 1 reply; 4+ messages in thread
From: Jianyu Zhan @ 2013-11-28 18:11 UTC (permalink / raw)
  To: murzin.v; +Cc: akpm, iamjoonsoo.kim, zhangyanfei, rientjes, linux-kernel


Hi, Vladimir,

On Fri, Nov 29, 2013 at 1:41 AM, Vladimir Murzin <murzin.v@gmail.com> wrote:
>
> Any numbers for efficiency?
>

In the original implementation, vmalloc_to_pfn() wraps vmalloc_to_page(),
which means:

     pfn   ------>   struct page   ------>   pfn
      |                                       |
  vmalloc_to_page()                    vmalloc_to_pfn()

This patch interchanges the implementation: the page-table walking is done
in vmalloc_to_pfn(), and vmalloc_to_page() wraps it. The graph now becomes:

     pfn   ------>   struct page
      |                   |
  vmalloc_to_pfn()   vmalloc_to_page()


>>  /*
>> - * Walk a vmap address to the struct page it maps.
>> + * Walk a vmap address to the physical pfn it maps to.
>>   */
>> -struct page *vmalloc_to_page(const void *vmalloc_addr)
>> +unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
>>  {
>>       unsigned long addr = (unsigned long) vmalloc_addr;
>> -     struct page *page = NULL;
>> +     unsigned long pfn;
>
> uninitialized pfn will lead to a bug.
>

Why? Coding practice mandates that a variable be used only after it is
initialized. And if we do initialize it, what value would guarantee there
is no bug? It is unlikely that a rubbish initial value will creep in.


>>       /*
>> @@ -244,23 +244,23 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>>                               ptep = pte_offset_map(pmd, addr);
>>                               pte = *ptep;
>>                               if (pte_present(pte))
>> -                                     page = pte_page(pte);
>> +                                     pfn = pte_page(pte);
>
> page_to_pfn() is missing here.
> 
> Have you ever tested that there are no functional changes?

Oh, gods. My fault. It was indeed meant to introduce no functional change.

I just sent an incorrect patch...

It should be:

 -   page = pte_page(pte);
 +   pfn = pte_pfn(pte);

Here is the resent patch:


---
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 0fdf968..e4f0db2 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -220,12 +220,12 @@ int is_vmalloc_or_module_addr(const void *x)
 }
 
 /*
- * Walk a vmap address to the struct page it maps.
+ * Walk a vmap address to the physical pfn it maps to.
  */
-struct page *vmalloc_to_page(const void *vmalloc_addr)
+unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
 {
 	unsigned long addr = (unsigned long) vmalloc_addr;
-	struct page *page = NULL;
+	unsigned long pfn = 0;
 	pgd_t *pgd = pgd_offset_k(addr);
 
 	/*
@@ -244,23 +244,23 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
 				ptep = pte_offset_map(pmd, addr);
 				pte = *ptep;
 				if (pte_present(pte))
-					page = pte_page(pte);
+					pfn = pte_pfn(pte);
 				pte_unmap(ptep);
 			}
 		}
 	}
-	return page;
+	return pfn;
 }
-EXPORT_SYMBOL(vmalloc_to_page);
+EXPORT_SYMBOL(vmalloc_to_pfn);
 
 /*
- * Map a vmalloc()-space virtual address to the physical page frame number.
+ * Map a vmalloc()-space virtual address to the struct page.
  */
-unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
+struct page *vmalloc_to_page(const void *vmalloc_addr)
 {
-	return page_to_pfn(vmalloc_to_page(vmalloc_addr));
+	return pfn_to_page(vmalloc_to_pfn(vmalloc_addr));
 }
-EXPORT_SYMBOL(vmalloc_to_pfn);
+EXPORT_SYMBOL(vmalloc_to_page);
 
 
 /*** Global kva allocator ***/


* Re: [PATCH] mm/vmalloc: interchange the implementation of vmalloc_to_{pfn,page}
  2013-11-28 18:11 [PATCH] mm/vmalloc: interchange the implementation of vmalloc_to_{pfn,page} Jianyu Zhan
@ 2013-12-01 18:23 ` Vladimir Murzin
  0 siblings, 0 replies; 4+ messages in thread
From: Vladimir Murzin @ 2013-12-01 18:23 UTC (permalink / raw)
  To: Jianyu Zhan; +Cc: akpm, iamjoonsoo.kim, zhangyanfei, rientjes, linux-kernel

On Fri, Nov 29, 2013 at 02:11:14AM +0800, Jianyu Zhan wrote:
> 
> Hi, Vladimir,
> 
> On Fri, Nov 29, 2013 at 1:41 AM, Vladimir Murzin <murzin.v@gmail.com> wrote:
> >
> > Any numbers for efficiency?
> >
> 
> In the original implementation, vmalloc_to_pfn() wraps vmalloc_to_page(),
> which means:
> 
>      pfn   ------>   struct page   ------>   pfn
>       |                                       |
>   vmalloc_to_page()                    vmalloc_to_pfn()
> 
> This patch interchanges the implementation: the page-table walking is done
> in vmalloc_to_pfn(), and vmalloc_to_page() wraps it. The graph now becomes:
> 
>      pfn   ------>   struct page
>       |                   |
>   vmalloc_to_pfn()   vmalloc_to_page()
> 
> 
> >>  /*
> >> - * Walk a vmap address to the struct page it maps.
> >> + * Walk a vmap address to the physical pfn it maps to.
> >>   */
> >> -struct page *vmalloc_to_page(const void *vmalloc_addr)
> >> +unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
> >>  {
> >>       unsigned long addr = (unsigned long) vmalloc_addr;
> >> -     struct page *page = NULL;
> >> +     unsigned long pfn;
> >
> > uninitialized pfn will lead to a bug.
> >
> 
> Why? Coding practice mandates that a variable be used only after it is
> initialized. And if we do initialize it, what value would guarantee there is no bug?

Unless you initialize it conditionally. I bet gcc warned you about this ;)

> It is unlikely that a rubbish initial value will creep in.
> 
> 
> >>       /*
> >> @@ -244,23 +244,23 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
> >>                               ptep = pte_offset_map(pmd, addr);
> >>                               pte = *ptep;
> >>                               if (pte_present(pte))
> >> -                                     page = pte_page(pte);
> >> +                                     pfn = pte_page(pte);
> >
> > page_to_pfn() is missing here.
> >
> > Have you ever tested that there are no functional changes?
> 
> Oh, gods. My fault. It was indeed meant to introduce no functional change.
> 
> I just sent an incorrect patch...
> 
> It should be:
> 
>  -   page = pte_page(pte);
>  +   pfn = pte_pfn(pte);
> 
> Here is the resent patch:
> 

I think it is incorrect too. Originally, vmalloc_to_page() might return NULL
under some conditions. With your implementation it will return pfn_to_page(0),
which is not the same as NULL.

Vladimir

> 
> ---
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 0fdf968..e4f0db2 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -220,12 +220,12 @@ int is_vmalloc_or_module_addr(const void *x)
>  }
>  
>  /*
> - * Walk a vmap address to the struct page it maps.
> + * Walk a vmap address to the physical pfn it maps to.
>   */
> -struct page *vmalloc_to_page(const void *vmalloc_addr)
> +unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
>  {
>  	unsigned long addr = (unsigned long) vmalloc_addr;
> -	struct page *page = NULL;
> +	unsigned long pfn = 0;
>  	pgd_t *pgd = pgd_offset_k(addr);
>  
>  	/*
> @@ -244,23 +244,23 @@ struct page *vmalloc_to_page(const void *vmalloc_addr)
>  				ptep = pte_offset_map(pmd, addr);
>  				pte = *ptep;
>  				if (pte_present(pte))
> -					page = pte_page(pte);
> +					pfn = pte_pfn(pte);
>  				pte_unmap(ptep);
>  			}
>  		}
>  	}
> -	return page;
> +	return pfn;
>  }
> -EXPORT_SYMBOL(vmalloc_to_page);
> +EXPORT_SYMBOL(vmalloc_to_pfn);
>  
>  /*
> - * Map a vmalloc()-space virtual address to the physical page frame number.
> + * Map a vmalloc()-space virtual address to the struct page.
>   */
> -unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
> +struct page *vmalloc_to_page(const void *vmalloc_addr)
>  {
> -	return page_to_pfn(vmalloc_to_page(vmalloc_addr));
> +	return pfn_to_page(vmalloc_to_pfn(vmalloc_addr));
>  }
> -EXPORT_SYMBOL(vmalloc_to_pfn);
> +EXPORT_SYMBOL(vmalloc_to_page);
>  
>  
>  /*** Global kva allocator ***/

