linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH] powerpc/mm: Check gigantic page range correctly inside memblock
@ 2017-07-03  7:31 Anshuman Khandual
  2017-07-05  4:33 ` Aneesh Kumar K.V
  0 siblings, 1 reply; 2+ messages in thread
From: Anshuman Khandual @ 2017-07-03  7:31 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: aneesh.kumar, mpe

The gigantic page range received from the platform actually extends
up to (block_size * expected_pages) starting at any given address,
instead of just a single 16GB page.

Fixes: 4792adbac9eb ("powerpc: Don't use a 16G page if beyond mem= limits")
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
---
Though in actual experiments I have never seen multiple gigantic pages
(16GB) starting at the same address, it is very much possible given the
device tree property interfaces, depending upon what PowerVM provides.

 arch/powerpc/mm/hash_utils_64.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index f2095ce..a3f1e7d 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -507,7 +507,7 @@ static int __init htab_dt_scan_hugepage_blocks(unsigned long node,
 	printk(KERN_INFO "Huge page(16GB) memory: "
 			"addr = 0x%lX size = 0x%lX pages = %d\n",
 			phys_addr, block_size, expected_pages);
-	if (phys_addr + (16 * GB) <= memblock_end_of_DRAM()) {
+	if (phys_addr + block_size * expected_pages <= memblock_end_of_DRAM()) {
 		memblock_reserve(phys_addr, block_size * expected_pages);
 		add_gpage(phys_addr, block_size, expected_pages);
 	}
-- 
1.8.5.6


* Re: [PATCH] powerpc/mm: Check gigantic page range correctly inside memblock
  2017-07-03  7:31 [PATCH] powerpc/mm: Check gigantic page range correctly inside memblock Anshuman Khandual
@ 2017-07-05  4:33 ` Aneesh Kumar K.V
  0 siblings, 0 replies; 2+ messages in thread
From: Aneesh Kumar K.V @ 2017-07-05  4:33 UTC (permalink / raw)
  To: Anshuman Khandual, linuxppc-dev



On Monday 03 July 2017 01:01 PM, Anshuman Khandual wrote:
> The gigantic page range received from the platform actually extends
> up to (block_size * expected_pages) starting at any given address,
> instead of just a single 16GB page.
> 
> Fixes: 4792adbac9eb ("powerpc: Don't use a 16G page if beyond mem= limits")
> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> ---
> Though in actual experiments I have never seen multiple gigantic pages
> (16GB) starting at the same address, it is very much possible given the
> device tree property interfaces, depending upon what PowerVM provides.
> 
>   arch/powerpc/mm/hash_utils_64.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index f2095ce..a3f1e7d 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -507,7 +507,7 @@ static int __init htab_dt_scan_hugepage_blocks(unsigned long node,
>   	printk(KERN_INFO "Huge page(16GB) memory: "
>   			"addr = 0x%lX size = 0x%lX pages = %d\n",
>   			phys_addr, block_size, expected_pages);
> -	if (phys_addr + (16 * GB) <= memblock_end_of_DRAM()) {
> +	if (phys_addr + block_size * expected_pages <= memblock_end_of_DRAM()) {
>   		memblock_reserve(phys_addr, block_size * expected_pages);
>   		add_gpage(phys_addr, block_size, expected_pages);
>   	}
> 

This was already posted by another person.

https://lkml.kernel.org/r/20170112090906.17864-1-rui.teng@linux.vnet.ibm.com


-aneesh

