From: Albert Vaca Cintora <albertvaka@gmail.com>
To: albertvaka@gmail.com, akpm@linux-foundation.org,
rdunlap@infradead.org, mingo@kernel.org, jack@suse.cz,
ebiederm@xmission.com, nsaenzjulienne@suse.de,
linux-kernel@vger.kernel.org, corbet@lwn.net,
linux-doc@vger.kernel.org, mbrugger@suse.com
Subject: [PATCH v3 2/3] kernel/ucounts: expose count of inotify watches in use
Date: Fri, 31 May 2019 21:50:15 +0200 [thread overview]
Message-ID: <20190531195016.4430-2-albertvaka@gmail.com> (raw)
In-Reply-To: <20190531195016.4430-1-albertvaka@gmail.com>

Add a read-only 'current_inotify_watches' entry to the user sysctl table.
The handler for this entry is a custom function that ends up calling
proc_dointvec. This sysctl table already contains 'max_inotify_watches'
and is mounted under /proc/sys/user/.

Inotify watches are a finite resource, much like open file descriptors.
The motivation for this patch is to make it possible to set up monitoring
and alerting before an application starts failing because it has run out
of inotify watches.
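As a sketch of the monitoring this enables, a userspace check could compare the new counter against the existing limit (C; the 90% threshold, the helper names, and the hardcoded values in the comment are illustrative, and /proc/sys/user/current_inotify_watches only exists with this patch applied):

```c
#include <stdio.h>

/* Decide whether watch usage has crossed an alerting threshold.
 * Returns 1 to alert, 0 otherwise; threshold is a fraction (e.g. 0.9). */
int should_alert(int current, int maximum, double threshold)
{
	if (maximum <= 0)
		return 1;	/* misread limit: treat as alert-worthy */
	return (double)current / (double)maximum >= threshold;
}

/* Read a single integer sysctl value, e.g.
 * "/proc/sys/user/current_inotify_watches" (this patch) or
 * "/proc/sys/user/max_inotify_watches" (pre-existing).
 * Returns -1 on error. */
int read_sysctl_int(const char *path)
{
	FILE *f = fopen(path, "r");
	int value;

	if (f == NULL)
		return -1;
	if (fscanf(f, "%d", &value) != 1)
		value = -1;
	fclose(f);
	return value;
}
```

A monitor would call read_sysctl_int() on both files periodically and raise an alarm from should_alert() before inotify_add_watch() starts failing with ENOSPC.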
Signed-off-by: Albert Vaca Cintora <albertvaka@gmail.com>
Acked-by: Jan Kara <jack@suse.cz>
Reviewed-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
---
kernel/ucount.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/kernel/ucount.c b/kernel/ucount.c
index 909c856e809f..05b0e76208d3 100644
--- a/kernel/ucount.c
+++ b/kernel/ucount.c
@@ -118,6 +118,26 @@ static void put_ucounts(struct ucounts *ucounts)
 	kfree(ucounts);
 }
+#ifdef CONFIG_INOTIFY_USER
+int proc_read_inotify_watches(struct ctl_table *table, int write,
+			      void __user *buffer, size_t *lenp, loff_t *ppos)
+{
+	struct ucounts *ucounts;
+	struct ctl_table fake_table;
+	int count = -1;
+
+	ucounts = get_ucounts(current_user_ns(), current_euid());
+	if (ucounts != NULL) {
+		count = atomic_read(&ucounts->ucount[UCOUNT_INOTIFY_WATCHES]);
+		put_ucounts(ucounts);
+	}
+
+	fake_table.data = &count;
+	fake_table.maxlen = sizeof(count);
+	return proc_dointvec(&fake_table, write, buffer, lenp, ppos);
+}
+#endif
+
 static int zero = 0;
 static int int_max = INT_MAX;
 
 #define UCOUNT_ENTRY(name) \
@@ -140,6 +160,12 @@ static struct ctl_table user_table[] = {
 #ifdef CONFIG_INOTIFY_USER
 	UCOUNT_ENTRY("max_inotify_instances"),
 	UCOUNT_ENTRY("max_inotify_watches"),
+	{
+		.procname = "current_inotify_watches",
+		.maxlen = sizeof(int),
+		.mode = 0444,
+		.proc_handler = proc_read_inotify_watches,
+	},
 #endif
 	{ }
 };
--
2.21.0
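For readers unfamiliar with the fake-table trick in the handler above, a userspace analogue might look like this (struct ctl_desc, format_int(), and the constant 42 are hypothetical stand-ins, not the kernel's struct ctl_table, proc_dointvec(), or a real count):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical userspace analogue of struct ctl_table: a descriptor
 * telling a shared formatting helper where the data lives. */
struct ctl_desc {
	void *data;	/* where the value lives */
	size_t maxlen;	/* size of the value */
};

/* Generic formatter, standing in for proc_dointvec(). */
int format_int(const struct ctl_desc *d, char *buf, size_t bufsz)
{
	int value;

	if (d->maxlen < sizeof(value))
		return -1;
	memcpy(&value, d->data, sizeof(value));
	return snprintf(buf, bufsz, "%d\n", value);
}

/* Handler-style wrapper, like proc_read_inotify_watches(): compute the
 * value on the fly, point a stack "fake table" at it, and reuse the
 * generic formatter instead of duplicating its logic. */
int read_current_count(char *buf, size_t bufsz)
{
	int count = 42;	/* stand-in for the atomic_read() of the ucount */
	struct ctl_desc fake = { .data = &count, .maxlen = sizeof(count) };

	return format_int(&fake, buf, bufsz);
}
```

The design point is that only the descriptor is faked: all user-visible formatting stays in one helper, so the read-only entry behaves exactly like the other integer sysctls.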
Thread overview: 6+ messages
2019-05-31 19:50 [PATCH v3 1/3] Move *_ucounts functions above Albert Vaca Cintora
2019-05-31 19:50 ` Albert Vaca Cintora [this message]
2019-06-01 0:00 ` [PATCH v3 2/3] kernel/ucounts: expose count of inotify watches in use Andrew Morton
2019-06-01 18:20 ` Albert Vaca Cintora
2019-10-16 18:47 ` Albert Vaca Cintora
2019-05-31 19:50 ` [PATCH v3 3/3] Documentation for /proc/sys/user/*_inotify_* Albert Vaca Cintora