* [PATCH V2] Do not allow pagesize >= MAX_ORDER pool adjustment
@ 2011-01-05 20:29 Eric B Munson
  2011-01-05 21:11 ` Andrew Morton
  2011-01-06  2:51 ` CAI Qian
  0 siblings, 2 replies; 4+ messages in thread

From: Eric B Munson @ 2011-01-05 20:29 UTC
To: akpm; +Cc: linux-kernel, linux-mm, mel, caiqian, mhocko, Eric B Munson

Huge pages with order >= MAX_ORDER must be allocated at boot via
the kernel command line; they cannot be allocated or freed once
the kernel is up and running.  Currently we allow values to be
written to the sysfs and sysctl files controlling pool size for these
huge page sizes.  This patch makes the store functions for nr_hugepages
and nr_overcommit_hugepages return -EINVAL when the pool for a
page size >= MAX_ORDER is changed.

Reported-by: CAI Qian <caiqian@redhat.com>

Signed-off-by: Eric B Munson <emunson@mgebm.net>
---
Changes from V1:
	Add check to sysctl handler

 mm/hugetlb.c |   12 ++++++++++++
 1 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5cb71a9..15bd633 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1443,6 +1443,12 @@ static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
 		return -EINVAL;
 
 	h = kobj_to_hstate(kobj, &nid);
+
+	if (h->order >= MAX_ORDER) {
+		NODEMASK_FREE(nodes_allowed);
+		return -EINVAL;
+	}
+
 	if (nid == NUMA_NO_NODE) {
 		/*
 		 * global hstate attribute
@@ -1517,6 +1523,9 @@ static ssize_t nr_overcommit_hugepages_store(struct kobject *kobj,
 	unsigned long input;
 	struct hstate *h = kobj_to_hstate(kobj, NULL);
 
+	if (h->order >= MAX_ORDER)
+		return -EINVAL;
+
 	err = strict_strtoul(buf, 10, &input);
 	if (err)
 		return -EINVAL;
@@ -1926,6 +1935,9 @@ static int hugetlb_sysctl_handler_common(bool obey_mempolicy,
 	if (!write)
 		tmp = h->max_huge_pages;
 
+	if (write && h->order >= MAX_ORDER)
+		return -EINVAL;
+
 	table->data = &tmp;
 	table->maxlen = sizeof(unsigned long);
 	proc_doulongvec_minmax(table, write, buffer, length, ppos);
-- 
1.7.1
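Background for the check above, not stated in the changelog: the buddy allocator cannot
satisfy allocations of order >= MAX_ORDER after boot, so pools of such "gigantic" pages
can only be populated from memory reserved on the kernel command line (for example
hugepagesz=1G hugepages=N on x86_64 hardware with 1GB page support). A minimal sketch of
the guard the patch adds, written as a hypothetical helper; the function name is
illustrative and does not exist in mm/hugetlb.c:

/*
 * Hypothetical helper mirroring the check added by the patch.
 * h->order is the allocation order backing this huge page size;
 * anything at or above MAX_ORDER cannot come from the buddy
 * allocator at runtime, so its pool size must not be adjustable
 * through sysfs or sysctl.
 */
static inline bool hstate_pool_adjustable(const struct hstate *h)
{
	return h->order < MAX_ORDER;
}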
* Re: [PATCH V2] Do not allow pagesize >= MAX_ORDER pool adjustment
  2011-01-05 20:29 [PATCH V2] Do not allow pagesize >= MAX_ORDER pool adjustment Eric B Munson
@ 2011-01-05 21:11 ` Andrew Morton
  2011-01-06  6:54   ` CAI Qian
  0 siblings, 1 reply; 4+ messages in thread

From: Andrew Morton @ 2011-01-05 21:11 UTC
To: Eric B Munson; +Cc: linux-kernel, linux-mm, mel, caiqian, mhocko

On Wed, 5 Jan 2011 13:29:57 -0700
Eric B Munson <emunson@mgebm.net> wrote:

> Huge pages with order >= MAX_ORDER must be allocated at boot via
> the kernel command line; they cannot be allocated or freed once
> the kernel is up and running.  Currently we allow values to be
> written to the sysfs and sysctl files controlling pool size for these
> huge page sizes.  This patch makes the store functions for nr_hugepages
> and nr_overcommit_hugepages return -EINVAL when the pool for a
> page size >= MAX_ORDER is changed.

gack, you people keep on making me look at the hugetlb code :(

> index 5cb71a9..15bd633 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1443,6 +1443,12 @@ static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> 		return -EINVAL;

Why do these functions do a `return 0' if strict_strtoul() failed?

>
> 	h = kobj_to_hstate(kobj, &nid);
> +
> +	if (h->order >= MAX_ORDER) {
> +		NODEMASK_FREE(nodes_allowed);
> +		return -EINVAL;
> +	}

Let's avoid having multiple unwind-and-return paths in a function,
please.  It often leads to resource leaks and locking errors as the
code evolves.

--- a/mm/hugetlb.c~hugetlb-do-not-allow-pagesize-=-max_order-pool-adjustment-fix
+++ a/mm/hugetlb.c
@@ -1363,6 +1363,7 @@ static ssize_t nr_hugepages_show_common(
 
 	return sprintf(buf, "%lu\n", nr_huge_pages);
 }
+
 static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
 			struct kobject *kobj, struct kobj_attribute *attr,
 			const char *buf, size_t len)
@@ -1375,15 +1376,14 @@ static ssize_t nr_hugepages_store_common
 
 	err = strict_strtoul(buf, 10, &count);
 	if (err) {
-		NODEMASK_FREE(nodes_allowed);
-		return 0;
+		err = 0;	/* This seems wrong */
+		goto out;
 	}
 
 	h = kobj_to_hstate(kobj, &nid);
-
 	if (h->order >= MAX_ORDER) {
-		NODEMASK_FREE(nodes_allowed);
-		return -EINVAL;
+		err = -EINVAL;
+		goto out;
 	}
 
 	if (nid == NUMA_NO_NODE) {
@@ -1411,6 +1411,9 @@ static ssize_t nr_hugepages_store_common
 	NODEMASK_FREE(nodes_allowed);
 
 	return len;
+out:
+	NODEMASK_FREE(nodes_allowed);
+	return err;
 }
 
 static ssize_t nr_hugepages_show(struct kobject *kobj,
_
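To make the idiom Andrew asks for concrete, here is a generic sketch of the single-exit
unwind pattern. The helper names example_store() and do_the_update() are invented for
illustration and are not functions in mm/hugetlb.c; NODEMASK_ALLOC/NODEMASK_FREE and
strict_strtoul() are the interfaces the real code above uses:

/*
 * Illustrative sketch only: every failure path funnels through the
 * "out" label, so error unwinding happens in exactly one place and
 * stays correct as new error cases are added to the function.
 */
static ssize_t example_store(struct hstate *h, const char *buf, size_t len)
{
	NODEMASK_ALLOC(nodemask_t, nodes_allowed, GFP_KERNEL | __GFP_NORETRY);
	unsigned long count;
	int err;

	err = strict_strtoul(buf, 10, &count);
	if (err)
		goto out;			/* propagate the parse error */

	if (h->order >= MAX_ORDER) {
		err = -EINVAL;			/* gigantic pool: not adjustable */
		goto out;
	}

	do_the_update(h, count, nodes_allowed);	/* stand-in for the real work */

	NODEMASK_FREE(nodes_allowed);
	return len;				/* success path */
out:
	NODEMASK_FREE(nodes_allowed);		/* single cleanup point for errors */
	return err;
}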
* Re: [PATCH V2] Do not allow pagesize >= MAX_ORDER pool adjustment
  2011-01-05 21:11 ` Andrew Morton
@ 2011-01-06  6:54   ` CAI Qian
  0 siblings, 0 replies; 4+ messages in thread

From: CAI Qian @ 2011-01-06 6:54 UTC
To: Andrew Morton; +Cc: linux-kernel, linux-mm, mel, mhocko, Eric B Munson

----- Original Message -----
> On Wed, 5 Jan 2011 13:29:57 -0700
> Eric B Munson <emunson@mgebm.net> wrote:
>
> > Huge pages with order >= MAX_ORDER must be allocated at boot via
> > the kernel command line; they cannot be allocated or freed once
> > the kernel is up and running.  Currently we allow values to be
> > written to the sysfs and sysctl files controlling pool size for these
> > huge page sizes.  This patch makes the store functions for nr_hugepages
> > and nr_overcommit_hugepages return -EINVAL when the pool for a
> > page size >= MAX_ORDER is changed.
>
> gack, you people keep on making me look at the hugetlb code :(
>
> > index 5cb71a9..15bd633 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1443,6 +1443,12 @@ static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> > 		return -EINVAL;
>
> Why do these functions do a `return 0' if strict_strtoul() failed?
>
> >
> > 	h = kobj_to_hstate(kobj, &nid);
> > +
> > +	if (h->order >= MAX_ORDER) {
> > +		NODEMASK_FREE(nodes_allowed);
> > +		return -EINVAL;
> > +	}
>
> Let's avoid having multiple unwind-and-return paths in a function,
> please.  It often leads to resource leaks and locking errors as the
> code evolves.
>
> --- a/mm/hugetlb.c~hugetlb-do-not-allow-pagesize-=-max_order-pool-adjustment-fix
> +++ a/mm/hugetlb.c
> @@ -1363,6 +1363,7 @@ static ssize_t nr_hugepages_show_common(
>
> 	return sprintf(buf, "%lu\n", nr_huge_pages);
> }
> +
> static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> 			struct kobject *kobj, struct kobj_attribute *attr,
> 			const char *buf, size_t len)
> @@ -1375,15 +1376,14 @@ static ssize_t nr_hugepages_store_common
>
> 	err = strict_strtoul(buf, 10, &count);
> 	if (err) {
> -		NODEMASK_FREE(nodes_allowed);
> -		return 0;
> +		err = 0;	/* This seems wrong */
> +		goto out;
> 	}
>
> 	h = kobj_to_hstate(kobj, &nid);
> -
> 	if (h->order >= MAX_ORDER) {
> -		NODEMASK_FREE(nodes_allowed);
> -		return -EINVAL;
> +		err = -EINVAL;
> +		goto out;
> 	}
>
> 	if (nid == NUMA_NO_NODE) {
> @@ -1411,6 +1411,9 @@ static ssize_t nr_hugepages_store_common
> 	NODEMASK_FREE(nodes_allowed);
>
> 	return len;
> +out:
> +	NODEMASK_FREE(nodes_allowed);
> +	return err;
> }
>
> static ssize_t nr_hugepages_show(struct kobject *kobj,
> _

As I mentioned in another thread, this is still missing the check in
hugetlb_overcommit_handler().

CAI Qian
-------------
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c4a3558..60740bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1510,6 +1510,7 @@ static ssize_t nr_overcommit_hugepages_show(struct kobject *kobj,
 	struct hstate *h = kobj_to_hstate(kobj, NULL);
 	return sprintf(buf, "%lu\n", h->nr_overcommit_huge_pages);
 }
+
 static ssize_t nr_overcommit_hugepages_store(struct kobject *kobj,
 		struct kobj_attribute *attr, const char *buf, size_t count)
 {
@@ -1986,6 +1987,9 @@ int hugetlb_overcommit_handler(struct ctl_table *table, int write,
 	if (!write)
 		tmp = h->nr_overcommit_huge_pages;
 
+	if (write && h->order >= MAX_ORDER)
+		return -EINVAL;
+
 	table->data = &tmp;
 	table->maxlen = sizeof(unsigned long);
 	proc_doulongvec_minmax(table, write, buffer, length, ppos);
* Re: [PATCH V2] Do not allow pagesize >= MAX_ORDER pool adjustment
  2011-01-05 20:29 [PATCH V2] Do not allow pagesize >= MAX_ORDER pool adjustment Eric B Munson
  2011-01-05 21:11 ` Andrew Morton
@ 2011-01-06  2:51 ` CAI Qian
  1 sibling, 0 replies; 4+ messages in thread

From: CAI Qian @ 2011-01-06 2:51 UTC
To: Eric B Munson; +Cc: linux-kernel, linux-mm, mel, mhocko, akpm

----- Original Message -----
> Huge pages with order >= MAX_ORDER must be allocated at boot via
> the kernel command line; they cannot be allocated or freed once
> the kernel is up and running.  Currently we allow values to be
> written to the sysfs and sysctl files controlling pool size for these
> huge page sizes.  This patch makes the store functions for nr_hugepages
> and nr_overcommit_hugepages return -EINVAL when the pool for a
> page size >= MAX_ORDER is changed.
>
> Reported-by: CAI Qian <caiqian@redhat.com>
>
> Signed-off-by: Eric B Munson <emunson@mgebm.net>
> ---
> Changes from V1:
> 	Add check to sysctl handler
>
>  mm/hugetlb.c |   12 ++++++++++++
>  1 files changed, 12 insertions(+), 0 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 5cb71a9..15bd633 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1443,6 +1443,12 @@ static ssize_t nr_hugepages_store_common(bool obey_mempolicy,
> 		return -EINVAL;
>
> 	h = kobj_to_hstate(kobj, &nid);
> +
> +	if (h->order >= MAX_ORDER) {
> +		NODEMASK_FREE(nodes_allowed);
> +		return -EINVAL;
> +	}
> +
> 	if (nid == NUMA_NO_NODE) {
> 		/*
> 		 * global hstate attribute
> @@ -1517,6 +1523,9 @@ static ssize_t nr_overcommit_hugepages_store(struct kobject *kobj,
> 	unsigned long input;
> 	struct hstate *h = kobj_to_hstate(kobj, NULL);
>
> +	if (h->order >= MAX_ORDER)
> +		return -EINVAL;
> +
> 	err = strict_strtoul(buf, 10, &input);
> 	if (err)
> 		return -EINVAL;
> @@ -1926,6 +1935,9 @@ static int hugetlb_sysctl_handler_common(bool obey_mempolicy,
> 	if (!write)
> 		tmp = h->max_huge_pages;
>
> +	if (write && h->order >= MAX_ORDER)
> +		return -EINVAL;
> +
> 	table->data = &tmp;
> 	table->maxlen = sizeof(unsigned long);
> 	proc_doulongvec_minmax(table, write, buffer, length, ppos);
> --
> 1.7.1

I believe this is still not enough.  hugetlb_overcommit_handler() does
not use hugetlb_sysctl_handler_common(), so it also needs this check.
Otherwise, people can still abuse /proc/sys/vm/nr_overcommit_hugepages.

CAI Qian
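For readers unfamiliar with the sysctl plumbing involved, a rough sketch of the entry
points follows (simplified from mm/hugetlb.c of that era; treat the exact signatures as
approximate). Only the first two are thin wrappers around hugetlb_sysctl_handler_common(),
which is why a check placed in the common helper does not cover
/proc/sys/vm/nr_overcommit_hugepages:

/* Sketch only: vm.nr_hugepages goes through the common helper ... */
int hugetlb_sysctl_handler(struct ctl_table *table, int write,
			   void __user *buffer, size_t *length, loff_t *ppos)
{
	return hugetlb_sysctl_handler_common(false, table, write,
					     buffer, length, ppos);
}

#ifdef CONFIG_NUMA
/* ... as does vm.nr_hugepages_mempolicy, which obeys the task's mempolicy ... */
int hugetlb_mempolicy_sysctl_handler(struct ctl_table *table, int write,
				     void __user *buffer, size_t *length,
				     loff_t *ppos)
{
	return hugetlb_sysctl_handler_common(true, table, write,
					     buffer, length, ppos);
}
#endif /* CONFIG_NUMA */

/*
 * ... but vm.nr_overcommit_hugepages is served by hugetlb_overcommit_handler(),
 * a separate function that never calls the common helper, so it needs its own
 * h->order >= MAX_ORDER check, exactly as CAI Qian's patch above adds.
 */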
Thread overview: 4+ messages
  2011-01-05 20:29 [PATCH V2] Do not allow pagesize >= MAX_ORDER pool adjustment Eric B Munson
  2011-01-05 21:11 ` Andrew Morton
  2011-01-06  6:54   ` CAI Qian
  2011-01-06  2:51 ` CAI Qian