linux-mm.kvack.org archive mirror
* [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
@ 2013-03-13  7:08 Wanpeng Li
  2013-03-13  8:02 ` Hillf Danton
  2013-03-14  9:44 ` Michal Hocko
  0 siblings, 2 replies; 8+ messages in thread
From: Wanpeng Li @ 2013-03-13  7:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Michal Hocko, Aneesh Kumar K.V, Hillf Danton, KAMEZAWA Hiroyuki,
	linux-mm, linux-kernel, Wanpeng Li

Since commit 42d7395f ("mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB")
was merged, the kernel permits multiple huge page sizes, and when the system
administrator has configured the system to provide huge page pools of different
sizes, applications can choose the page size used for their allocations.
However, only the default-size huge page pool is counted in the memory
overcommit accounting; the result is that innocent processes may later be
killed by the oom-killer. Fix this by counting all the huge page pools of
different sizes provided by the administrator.

Testcase:
boot: hugepagesz=1G hugepages=1
before patch:
egrep 'CommitLimit' /proc/meminfo
CommitLimit:     55434168 kB
after patch:
egrep 'CommitLimit' /proc/meminfo
CommitLimit:     54909880 kB

Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
---
 mm/hugetlb.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cdb64e4..9e25040 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
 /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
 unsigned long hugetlb_total_pages(void)
 {
-	struct hstate *h = &default_hstate;
-	return h->nr_huge_pages * pages_per_huge_page(h);
+	struct hstate *h;
+	unsigned long nr_total_pages = 0;
+	for_each_hstate(h)
+		nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
+	return nr_total_pages;
 }
 
 static int hugetlb_acct_memory(struct hstate *h, long delta)
-- 
1.7.11.7

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
  2013-03-13  7:08 [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting Wanpeng Li
@ 2013-03-13  8:02 ` Hillf Danton
  2013-03-13  8:32   ` Wanpeng Li
                     ` (2 more replies)
  2013-03-14  9:44 ` Michal Hocko
  1 sibling, 3 replies; 8+ messages in thread
From: Hillf Danton @ 2013-03-13  8:02 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Andrew Morton, Michal Hocko, Aneesh Kumar K.V, KAMEZAWA Hiroyuki,
	linux-mm, LKML, Andi Kleen

[cc Andi]
On Wed, Mar 13, 2013 at 3:08 PM, Wanpeng Li <liwanp@linux.vnet.ibm.com> wrote:
> After commit 42d7395f ("mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB")
> be merged, kernel permit multiple huge page sizes, and when the system administrator
> has configured the system to provide huge page pools of different sizes, application
> can choose the page size used for their allocation. However, just default size of
> huge page pool is statistical when memory overcommit accouting, the bad is that this
> will result in innocent processes be killed by oom-killer later. Fix it by statistic
> all huge page pools of different sizes provided by administrator.
>
Can we enrich the output of hugetlb_report_meminfo()?

thanks
Hillf

> Testcase:
> boot: hugepagesz=1G hugepages=1
> before patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit:     55434168 kB
> after patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit:     54909880 kB
>
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
> ---
>  mm/hugetlb.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index cdb64e4..9e25040 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
>  /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
>  unsigned long hugetlb_total_pages(void)
>  {
> -       struct hstate *h = &default_hstate;
> -       return h->nr_huge_pages * pages_per_huge_page(h);
> +       struct hstate *h;
> +       unsigned long nr_total_pages = 0;
> +       for_each_hstate(h)
> +               nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
> +       return nr_total_pages;
>  }
>
>  static int hugetlb_acct_memory(struct hstate *h, long delta)
> --
> 1.7.11.7
>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
  2013-03-13  8:02 ` Hillf Danton
@ 2013-03-13  8:32   ` Wanpeng Li
  2013-03-13  8:32   ` Wanpeng Li
  2013-03-13 16:59   ` Andi Kleen
  2 siblings, 0 replies; 8+ messages in thread
From: Wanpeng Li @ 2013-03-13  8:32 UTC (permalink / raw)
  To: Hillf Danton, Andi Kleen
  Cc: Andrew Morton, Michal Hocko, Aneesh Kumar K.V, KAMEZAWA Hiroyuki,
	linux-mm, LKML

On Wed, Mar 13, 2013 at 04:02:03PM +0800, Hillf Danton wrote:
>[cc Andi]
>On Wed, Mar 13, 2013 at 3:08 PM, Wanpeng Li <liwanp@linux.vnet.ibm.com> wrote:
>> After commit 42d7395f ("mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB")
>> be merged, kernel permit multiple huge page sizes, and when the system administrator
>> has configured the system to provide huge page pools of different sizes, application
>> can choose the page size used for their allocation. However, just default size of
>> huge page pool is statistical when memory overcommit accouting, the bad is that this
>> will result in innocent processes be killed by oom-killer later. Fix it by statistic
>> all huge page pools of different sizes provided by administrator.
>>

Hi Hillf,

>Can we enrich the output of hugetlb_report_meminfo() ?
>

Yes, I have already thought about this; we could dump information for the
multiple huge page pools in /proc/meminfo and /sys/devices/system/node/node*/meminfo.
I can do that in a separate patch. What's your opinion, Andi?

Regards,
Wanpeng Li 

>thanks
>Hillf
>
>> Testcase:
>> boot: hugepagesz=1G hugepages=1
>> before patch:
>> egrep 'CommitLimit' /proc/meminfo
>> CommitLimit:     55434168 kB
>> after patch:
>> egrep 'CommitLimit' /proc/meminfo
>> CommitLimit:     54909880 kB
>>
>> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
>> ---
>>  mm/hugetlb.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index cdb64e4..9e25040 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
>>  /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
>>  unsigned long hugetlb_total_pages(void)
>>  {
>> -       struct hstate *h = &default_hstate;
>> -       return h->nr_huge_pages * pages_per_huge_page(h);
>> +       struct hstate *h;
>> +       unsigned long nr_total_pages = 0;
>> +       for_each_hstate(h)
>> +               nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
>> +       return nr_total_pages;
>>  }
>>
>>  static int hugetlb_acct_memory(struct hstate *h, long delta)
>> --
>> 1.7.11.7
>>


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
  2013-03-13  8:02 ` Hillf Danton
  2013-03-13  8:32   ` Wanpeng Li
  2013-03-13  8:32   ` Wanpeng Li
@ 2013-03-13 16:59   ` Andi Kleen
  2 siblings, 0 replies; 8+ messages in thread
From: Andi Kleen @ 2013-03-13 16:59 UTC (permalink / raw)
  To: Hillf Danton
  Cc: Wanpeng Li, Andrew Morton, Michal Hocko, Aneesh Kumar K.V,
	KAMEZAWA Hiroyuki, linux-mm, LKML

> Can we enrich the output of hugetlb_report_meminfo() ?

The data is reported separately in sysfs. It was originally decided
to not extend meminfo for them.

-Andi


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
  2013-03-13  7:08 [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting Wanpeng Li
  2013-03-13  8:02 ` Hillf Danton
@ 2013-03-14  9:44 ` Michal Hocko
  2013-03-14 10:15   ` Wanpeng Li
  2013-03-14 10:15   ` Wanpeng Li
  1 sibling, 2 replies; 8+ messages in thread
From: Michal Hocko @ 2013-03-14  9:44 UTC (permalink / raw)
  To: Wanpeng Li
  Cc: Andrew Morton, Aneesh Kumar K.V, Hillf Danton, KAMEZAWA Hiroyuki,
	linux-mm, linux-kernel

On Wed 13-03-13 15:08:31, Wanpeng Li wrote:
> After commit 42d7395f ("mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB")
> be merged, kernel permit multiple huge page sizes,

multiple huge page sizes were possible long before this commit. The
above mentioned patch just made their usage via IPC much easier. You
could do the same previously (since a137e1cc) by mounting hugetlbfs with
a specific page size as a parameter and using mmap.

> and when the system administrator has configured the system to provide
> huge page pools of different sizes, application can choose the page
> size used for their allocation.

> However, just default size of huge page pool is statistical when
> memory overcommit accouting, the bad is that this will result in
> innocent processes be killed by oom-killer later.

Why would an innocent process be killed? The overcommit calculation
is incorrect, that is true, but this just means that an unexpected
ENOMEM/EFAULT or SIGSEGV would be returned, no? How could an OOM
result?

> Fix it by statistic all huge page pools of different sizes provided by
> administrator.

The patch makes sense but the description is misleading AFAICS.

> Testcase:
> boot: hugepagesz=1G hugepages=1
> before patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit:     55434168 kB
> after patch:
> egrep 'CommitLimit' /proc/meminfo
> CommitLimit:     54909880 kB
> 
> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
> ---
>  mm/hugetlb.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index cdb64e4..9e25040 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
>  /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
>  unsigned long hugetlb_total_pages(void)
>  {
> -	struct hstate *h = &default_hstate;
> -	return h->nr_huge_pages * pages_per_huge_page(h);
> +	struct hstate *h;
> +	unsigned long nr_total_pages = 0;
> +	for_each_hstate(h)
> +		nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
> +	return nr_total_pages;
>  }
>  
>  static int hugetlb_acct_memory(struct hstate *h, long delta)
> -- 
> 1.7.11.7
> 

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting
  2013-03-14  9:44 ` Michal Hocko
@ 2013-03-14 10:15   ` Wanpeng Li
  2013-03-14 10:15   ` Wanpeng Li
  1 sibling, 0 replies; 8+ messages in thread
From: Wanpeng Li @ 2013-03-14 10:15 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, Aneesh Kumar K.V, Hillf Danton, KAMEZAWA Hiroyuki,
	linux-mm, linux-kernel, Wanpeng Li

On Thu, Mar 14, 2013 at 10:44:19AM +0100, Michal Hocko wrote:
>On Wed 13-03-13 15:08:31, Wanpeng Li wrote:
>> After commit 42d7395f ("mm: support more pagesizes for MAP_HUGETLB/SHM_HUGETLB")
>> be merged, kernel permit multiple huge page sizes,
>

Hi Michal,

>multiple huge page sizes were possible long before this commit. The
>above mentioned patch just made their usage via IPC much easier. You
>could do the same previously (since a137e1cc) by mounting hugetlbfs with
>a specific page size as a parameter and using mmap.
>

Agreed.

>> and when the system administrator has configured the system to provide
>> huge page pools of different sizes, application can choose the page
>> size used for their allocation.
>
>> However, just default size of huge page pool is statistical when
>> memory overcommit accouting, the bad is that this will result in
>> innocent processes be killed by oom-killer later.
>
>Why would an innnocent process be killed? The overcommit calculation
>is incorrect, that is true, but this just means that an unexpected
>ENOMEM/EFAULT or SIGSEGV would be returned, no? How an OOM could be a
>result?

Agreed.

>
>> Fix it by statistic all huge page pools of different sizes provided by
>> administrator.
>
>The patch makes sense but the description is misleading AFAICS.
>

Thanks for pointing that out, Michal; I will update the description. :-)

Regards,
Wanpeng Li 

>> Testcase:
>> boot: hugepagesz=1G hugepages=1
>> before patch:
>> egrep 'CommitLimit' /proc/meminfo
>> CommitLimit:     55434168 kB
>> after patch:
>> egrep 'CommitLimit' /proc/meminfo
>> CommitLimit:     54909880 kB
>> 
>> Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
>> ---
>>  mm/hugetlb.c | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>> 
>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>> index cdb64e4..9e25040 100644
>> --- a/mm/hugetlb.c
>> +++ b/mm/hugetlb.c
>> @@ -2124,8 +2124,11 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
>>  /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
>>  unsigned long hugetlb_total_pages(void)
>>  {
>> -	struct hstate *h = &default_hstate;
>> -	return h->nr_huge_pages * pages_per_huge_page(h);
>> +	struct hstate *h;
>> +	unsigned long nr_total_pages = 0;
>> +	for_each_hstate(h)
>> +		nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
>> +	return nr_total_pages;
>>  }
>>  
>>  static int hugetlb_acct_memory(struct hstate *h, long delta)
>> -- 
>> 1.7.11.7
>> 
>
>-- 
>Michal Hocko
>SUSE Labs


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2013-03-14 10:16 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-03-13  7:08 [PATCH] mm/hugetlb: fix total hugetlbfs pages count when memory overcommit accounting Wanpeng Li
2013-03-13  8:02 ` Hillf Danton
2013-03-13  8:32   ` Wanpeng Li
2013-03-13  8:32   ` Wanpeng Li
2013-03-13 16:59   ` Andi Kleen
2013-03-14  9:44 ` Michal Hocko
2013-03-14 10:15   ` Wanpeng Li
2013-03-14 10:15   ` Wanpeng Li
