* [PATCH] ipc,mqueue: remove limits for the amount of system-wide queues

From: Davidlohr Bueso @ 2014-02-09 21:06 UTC
To: m, stable, akpm; +Cc: linux-kernel, Manfred Spraul, dledford

From: Davidlohr Bueso <davidlohr@hp.com>

Commit 93e6f119 (ipc/mqueue: cleanup definition names and locations) added
global hardcoded limits to the number of message queues that can be created.
While these limits are per-namespace, in reality they end up breaking
userspace applications. Historically users have, at least in theory, been
able to create up to INT_MAX queues, and limiting it to just 1024 is far
too low for some workloads and use cases. For instance, Madars reports:

"This update imposes bad limits on our multi-process application. As our
app uses approaches that each process opens its own set of queues (usually
something about 3-5 queues per process). In some scenarios we might run up
to 3000 processes or more (which of-course for linux is not a problem).
Thus we might need up to 9000 queues or more. All processes run under one
user."

Other affected users can be found in launchpad bug #1155695:
https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695

Instead of increasing this limit, revert it entirely and fall back to the
original way of dealing with queue limits -- where once a user's resource
limit is reached, and all memory is used, new queues cannot be created.

Reported-by: m@silodev.com
Cc: Doug Ledford <dledford@redhat.com>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: stable@vger.kernel.org # v3.5+
Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
---
 include/linux/ipc_namespace.h |  2 --
 ipc/mq_sysctl.c               | 18 ++++++++++++------
 ipc/mqueue.c                  |  6 +++---
 3 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/include/linux/ipc_namespace.h b/include/linux/ipc_namespace.h
index e7831d2..35e7eca 100644
--- a/include/linux/ipc_namespace.h
+++ b/include/linux/ipc_namespace.h
@@ -118,9 +118,7 @@ extern int mq_init_ns(struct ipc_namespace *ns);
  * the new maximum will handle anyone else. I may have to revisit this
  * in the future.
  */
-#define MIN_QUEUESMAX 1
 #define DFLT_QUEUESMAX 256
-#define HARD_QUEUESMAX 1024
 #define MIN_MSGMAX 1
 #define DFLT_MSG 10U
 #define DFLT_MSGMAX 10
diff --git a/ipc/mq_sysctl.c b/ipc/mq_sysctl.c
index 383d638..5bb8bfe 100644
--- a/ipc/mq_sysctl.c
+++ b/ipc/mq_sysctl.c
@@ -22,6 +22,16 @@ static void *get_mq(ctl_table *table)
 	return which;
 }
 
+static int proc_mq_dointvec(ctl_table *table, int write,
+			    void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	struct ctl_table mq_table;
+	memcpy(&mq_table, table, sizeof(mq_table));
+	mq_table.data = get_mq(table);
+
+	return proc_dointvec(&mq_table, write, buffer, lenp, ppos);
+}
+
 static int proc_mq_dointvec_minmax(ctl_table *table, int write,
 	void __user *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -33,12 +43,10 @@ static int proc_mq_dointvec_minmax(ctl_table *table, int write,
 					lenp, ppos);
 }
 #else
+#define proc_mq_dointvec NULL
 #define proc_mq_dointvec_minmax NULL
 #endif
 
-static int msg_queues_limit_min = MIN_QUEUESMAX;
-static int msg_queues_limit_max = HARD_QUEUESMAX;
-
 static int msg_max_limit_min = MIN_MSGMAX;
 static int msg_max_limit_max = HARD_MSGMAX;
 
@@ -51,9 +59,7 @@ static ctl_table mq_sysctls[] = {
 		.data		= &init_ipc_ns.mq_queues_max,
 		.maxlen		= sizeof(int),
 		.mode		= 0644,
-		.proc_handler	= proc_mq_dointvec_minmax,
-		.extra1		= &msg_queues_limit_min,
-		.extra2		= &msg_queues_limit_max,
+		.proc_handler	= proc_mq_dointvec,
 	},
 	{
 		.procname	= "msg_max",
diff --git a/ipc/mqueue.c b/ipc/mqueue.c
index ccf1f9f..c3b3117 100644
--- a/ipc/mqueue.c
+++ b/ipc/mqueue.c
@@ -433,9 +433,9 @@ static int mqueue_create(struct inode *dir, struct dentry *dentry,
 		error = -EACCES;
 		goto out_unlock;
 	}
-	if (ipc_ns->mq_queues_count >= HARD_QUEUESMAX ||
-	    (ipc_ns->mq_queues_count >= ipc_ns->mq_queues_max &&
-	     !capable(CAP_SYS_RESOURCE))) {
+
+	if (ipc_ns->mq_queues_count >= ipc_ns->mq_queues_max &&
+	    !capable(CAP_SYS_RESOURCE)) {
 		error = -ENOSPC;
 		goto out_unlock;
 	}
-- 
1.8.1.4
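
For context on the behavior the patch restores: queue creation is again
bounded only by the per-namespace queues_max sysctl (DFLT_QUEUESMAX, i.e.
256, by default) plus the usual per-user resource limits, and an
administrator can now raise fs.mqueue.queues_max past the old 1024 ceiling.
The program below is a hypothetical userspace sketch, not part of the
patch; the queue name prefix and the 10000 iteration count are arbitrary
choices. Build with -lrt. An unprivileged run should stop with ENOSPC once
queues_max is reached.

/* Hypothetical test (not from the patch): create POSIX message queues
 * until the kernel refuses, report why, then clean up. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct mq_attr attr = { .mq_maxmsg = 1, .mq_msgsize = 64 };
	char name[64];
	mqd_t q;
	int i, created = 0;

	for (i = 0; i < 10000; i++) {
		snprintf(name, sizeof(name), "/mq-limit-test-%d", i);
		q = mq_open(name, O_CREAT | O_EXCL | O_RDWR, 0600, &attr);
		if (q == (mqd_t)-1) {
			/* ENOSPC: the namespace's queues_max was reached.
			 * Other values (e.g. EMFILE) reflect per-user
			 * resource limits such as RLIMIT_MSGQUEUE. */
			fprintf(stderr, "stopped after %d queues: %s\n",
				created, strerror(errno));
			break;
		}
		mq_close(q);	/* the queue itself persists until unlinked */
		created++;
	}

	while (created-- > 0) {
		snprintf(name, sizeof(name), "/mq-limit-test-%d", created);
		mq_unlink(name);
	}
	return 0;
}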
* Re: [PATCH] ipc,mqueue: remove limits for the amount of system-wide queues

From: Doug Ledford @ 2014-02-11 18:14 UTC
To: Davidlohr Bueso, m, stable, akpm; +Cc: linux-kernel, Manfred Spraul

On 2/9/2014 4:06 PM, Davidlohr Bueso wrote:
> From: Davidlohr Bueso <davidlohr@hp.com>
>
> Commit 93e6f119 (ipc/mqueue: cleanup definition names and locations) added
> global hardcoded limits to the number of message queues that can be created.
> While these limits are per-namespace, in reality they end up breaking
> userspace applications. Historically users have, at least in theory, been
> able to create up to INT_MAX queues, and limiting it to just 1024 is far
> too low for some workloads and use cases. For instance, Madars reports:
>
> "This update imposes bad limits on our multi-process application. As our
> app uses approaches that each process opens its own set of queues (usually
> something about 3-5 queues per process). In some scenarios we might run up
> to 3000 processes or more (which of-course for linux is not a problem).
> Thus we might need up to 9000 queues or more. All processes run under one
> user."
>
> Other affected users can be found in launchpad bug #1155695:
> https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695
>
> Instead of increasing this limit, revert it entirely and fall back to the
> original way of dealing with queue limits -- where once a user's resource
> limit is reached, and all memory is used, new queues cannot be created.
>
> Reported-by: m@silodev.com
> Cc: Doug Ledford <dledford@redhat.com>

Acked-by: Doug Ledford <dledford@redhat.com>

> Cc: Manfred Spraul <manfred@colorfullife.com>
> Cc: stable@vger.kernel.org # v3.5+
> Signed-off-by: Davidlohr Bueso <davidlohr@hp.com>
> ---
>  include/linux/ipc_namespace.h |  2 --
>  ipc/mq_sysctl.c               | 18 ++++++++++++------
>  ipc/mqueue.c                  |  6 +++---
>  3 files changed, 15 insertions(+), 11 deletions(-)
* Re: [PATCH] ipc,mqueue: remove limits for the amount of system-wide queues

From: Andrew Morton @ 2014-02-11 22:16 UTC
To: Davidlohr Bueso; +Cc: m, stable, linux-kernel, Manfred Spraul, dledford

On Sun, 09 Feb 2014 13:06:03 -0800 Davidlohr Bueso <davidlohr@hp.com> wrote:

> From: Davidlohr Bueso <davidlohr@hp.com>
>
> Commit 93e6f119 (ipc/mqueue: cleanup definition names and locations) added
> global hardcoded limits to the number of message queues that can be created.
> While these limits are per-namespace, in reality they end up breaking
> userspace applications. Historically users have, at least in theory, been
> able to create up to INT_MAX queues, and limiting it to just 1024 is far
> too low for some workloads and use cases. For instance, Madars reports:
>
> "This update imposes bad limits on our multi-process application. As our
> app uses approaches that each process opens its own set of queues (usually
> something about 3-5 queues per process). In some scenarios we might run up
> to 3000 processes or more (which of-course for linux is not a problem).
> Thus we might need up to 9000 queues or more. All processes run under one
> user."
>
> Other affected users can be found in launchpad bug #1155695:
> https://bugs.launchpad.net/ubuntu/+source/manpages/+bug/1155695
>
> Instead of increasing this limit, revert it entirely and fall back to the
> original way of dealing with queue limits -- where once a user's resource
> limit is reached, and all memory is used, new queues cannot be created.
>
> --- a/ipc/mq_sysctl.c
> +++ b/ipc/mq_sysctl.c
> @@ -22,6 +22,16 @@ static void *get_mq(ctl_table *table)
>  	return which;
>  }
>
> +static int proc_mq_dointvec(ctl_table *table, int write,
> +			    void __user *buffer, size_t *lenp, loff_t *ppos)
> +{
> +	struct ctl_table mq_table;
> +	memcpy(&mq_table, table, sizeof(mq_table));
> +	mq_table.data = get_mq(table);
> +
> +	return proc_dointvec(&mq_table, write, buffer, lenp, ppos);
> +}
> +
>  static int proc_mq_dointvec_minmax(ctl_table *table, int write,
>  	void __user *buffer, size_t *lenp, loff_t *ppos)
>  {
>
> ...
>
> @@ -51,9 +59,7 @@ static ctl_table mq_sysctls[] = {
>  		.data		= &init_ipc_ns.mq_queues_max,
>  		.maxlen		= sizeof(int),
>  		.mode		= 0644,
> -		.proc_handler	= proc_mq_dointvec_minmax,
> -		.extra1		= &msg_queues_limit_min,
> -		.extra2		= &msg_queues_limit_max,
> +		.proc_handler	= proc_mq_dointvec,
>  	},

hm, afaict proc_mq_dointvec() isn't needed - proc_dointvec_minmax() will
do the right thing if ->extra1 and/or ->extra2 are NULL, so we can still
use proc_mq_dointvec_minmax().

Which has absolutely nothing at all to do with your patch, but makes me
think we could take a sharp instrument to the sysctl code...
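
A minimal sketch of the simplification Andrew describes (an assumption of
what it would look like; it was never posted as an actual patch): the
queues_max entry keeps the existing proc_mq_dointvec_minmax() wrapper and
simply leaves .extra1/.extra2 unset, since proc_dointvec_minmax() only
clamps when those pointers are non-NULL.

/* Sketch only, not a merged change: no new proc_mq_dointvec() needed. */
	{
		.procname	= "queues_max",
		.data		= &init_ipc_ns.mq_queues_max,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_mq_dointvec_minmax,
		/* .extra1/.extra2 left NULL: no min/max clamping */
	},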