* [PATCH v3 0/7] Process connector bug fixes & enhancements
@ 2023-03-29 18:25 Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 1/7] netlink: Reverse the patch which removed filtering Anjali Kulkarni
` (6 more replies)
0 siblings, 7 replies; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-29 18:25 UTC (permalink / raw)
To: davem
Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev,
anjali.k.kulkarni
From: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
In this series, we add filtering to the proc connector module. This is
required to fix some bugs, and it also enables the addition of event-based
filtering, which improves performance for anyone interested in only a
subset of process events, as compared to the current approach of sending
all event notifications.
A client can thus register to listen for only exit, or fork, or any mix of
events, or all of them. This greatly improves performance - currently, a
listener has to receive all events, and there are 9 different types of
events. For example, handling 3 types of events (8K forks + 8K exits +
8K execs) takes 200ms, whereas handling 2 types (8K forks + 8K exits)
takes about 150ms, and handling just one type (8K exits) takes about 70ms.
The reason we need the above changes, as well as a new event type
PROC_EVENT_NONZERO_EXIT (which is sent by the kernel to a listening
application only when a process exits with a non-zero exit status), is the
following: Oracle DB runs at a large scale with 100000s of short-lived
processes, starting up and exiting quickly. A process-monitoring DB daemon
which tracks and cleans up after processes that have died without a proper
exit needs notifications only when a process dies with a non-zero exit
code (which should be rare).
This change will give Oracle DB substantial performance savings - it takes
50ms to scan about 8K PIDs in /proc, and about 500ms for 100K PIDs. The DB
does this check every 3 seconds, so over an hour we save 10 seconds for
100K PIDs. Measuring the time using pidfds for monitoring 8K process exits
took 4 times longer - 200ms, as compared to 70ms using only exit
notifications from the proc connector. Hence, we cannot use pidfd for our
use case.
This kind of a new event could also be useful to other applications like
Google's lmkd daemon, which needs a killed process's exit notification.
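For illustration, below is a minimal sketch of the registration a client
would do with the new interface to receive only non-zero-exit
notifications. It is only a sketch, not code from this series: it assumes
a netlink socket nl_sock already created with NETLINK_CONNECTOR and bound
to the CN_IDX_PROC group, as done in the proc_filter.c sample added in
patch 4; struct proc_input and PROC_EVENT_NONZERO_EXIT are the uapi
additions from patches 3 and 5.

	char buff[NLMSG_SPACE(sizeof(struct cn_msg) + sizeof(struct proc_input))];
	struct nlmsghdr *hdr = (struct nlmsghdr *)buff;
	struct cn_msg *msg = (struct cn_msg *)NLMSG_DATA(hdr);
	struct proc_input *input = (struct proc_input *)msg->data;

	memset(buff, 0, sizeof(buff));
	hdr->nlmsg_len = NLMSG_LENGTH(sizeof(*msg) + sizeof(*input));
	hdr->nlmsg_type = NLMSG_DONE;
	hdr->nlmsg_pid = getpid();

	msg->id.idx = CN_IDX_PROC;
	msg->id.val = CN_VAL_PROC;
	msg->len = sizeof(*input);

	/* listen, but only for processes exiting with a non-zero status */
	input->mcast_op = PROC_CN_MCAST_LISTEN;
	input->event_type = PROC_EVENT_NONZERO_EXIT;

	if (send(nl_sock, hdr, hdr->nlmsg_len, 0) == -1)
		perror("send failed");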
This patch series is organized as follows -
Patch 1 : Needed for patch 3 to work.
Patch 2 : Needed for patch 3 to work.
Patch 3 : Fixes some bugs in proc connector, details in the patch.
Patch 4 : Test code for proc connector.
Patch 5 : Adds event based filtering for performance enhancements.
Patch 6 : Needed for patch 7 to work.
Patch 7 : Allow non-root users access to proc connector events.
v2->v3 changes:
- Address Jakub Kicinski's comments by separating the netlink changes
  (patch 2, after layering) from the connector fixes (patch 3).
- Minor fixes suggested by Jakub.
- Add a new multicast group level permissions check at the netlink layer,
  split across the netlink & connector layers (patches 6 & 7).
v1->v2 changes:
- Address Jakub Kicinski's comments to keep layering within netlink and
  update kdocs.
- Move the non-root users access patch last in the series so the remaining
  patches can go in first.
v0->v1 changes:
- Changed commit log in patch 4 as suggested by Christian Brauner
- Changed patch 4 to give non-root users more fine-grained access
- Fixed warning in cn_proc.c,
Reported-by: kernel test robot <lkp@intel.com>
- Fixed some existing warnings in cn_proc.c
Anjali Kulkarni (7):
netlink: Reverse the patch which removed filtering
netlink: Add new netlink_release function
connector/cn_proc: Add filtering to fix some bugs
connector/cn_proc: Test code for proc connector
connector/cn_proc: Performance improvements
netlink: Add multicast group level permissions
connector/cn_proc: Allow non-root users access
drivers/connector/cn_proc.c | 105 +++++++++--
drivers/connector/connector.c | 22 ++-
drivers/w1/w1_netlink.c | 6 +-
include/linux/connector.h | 8 +-
include/linux/netlink.h | 7 +
include/uapi/linux/cn_proc.h | 62 +++++--
net/netlink/af_netlink.c | 56 +++++-
net/netlink/af_netlink.h | 6 +
samples/connector/proc_filter.c | 301 ++++++++++++++++++++++++++++++++
9 files changed, 528 insertions(+), 45 deletions(-)
create mode 100644 samples/connector/proc_filter.c
--
2.40.0
* [PATCH v3 1/7] netlink: Reverse the patch which removed filtering
2023-03-29 18:25 [PATCH v3 0/7] Process connector bug fixes & enhancements Anjali Kulkarni
@ 2023-03-29 18:25 ` Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 2/7] netlink: Add new netlink_release function Anjali Kulkarni
` (5 subsequent siblings)
6 siblings, 0 replies; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-29 18:25 UTC (permalink / raw)
To: davem
Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev,
anjali.k.kulkarni
To use filtering at the connector & cn_proc layers, we need to enable
filtering in the netlink layer. This reverts the patch which removed
netlink filtering.
Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
include/linux/netlink.h | 5 +++++
net/netlink/af_netlink.c | 25 +++++++++++++++++++++++--
2 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/include/linux/netlink.h b/include/linux/netlink.h
index c43ac7690eca..866bbc5a4c8d 100644
--- a/include/linux/netlink.h
+++ b/include/linux/netlink.h
@@ -206,6 +206,11 @@ bool netlink_strict_get_check(struct sk_buff *skb);
int netlink_unicast(struct sock *ssk, struct sk_buff *skb, __u32 portid, int nonblock);
int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, __u32 portid,
__u32 group, gfp_t allocation);
+int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb,
+ __u32 portid, __u32 group, gfp_t allocation,
+ int (*filter)(struct sock *dsk,
+ struct sk_buff *skb, void *data),
+ void *filter_data);
int netlink_set_err(struct sock *ssk, __u32 portid, __u32 group, int code);
int netlink_register_notifier(struct notifier_block *nb);
int netlink_unregister_notifier(struct notifier_block *nb);
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index c64277659753..003c7e6ec9be 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -1432,6 +1432,8 @@ struct netlink_broadcast_data {
int delivered;
gfp_t allocation;
struct sk_buff *skb, *skb2;
+ int (*tx_filter)(struct sock *dsk, struct sk_buff *skb, void *data);
+ void *tx_data;
};
static void do_one_broadcast(struct sock *sk,
@@ -1485,6 +1487,11 @@ static void do_one_broadcast(struct sock *sk,
p->delivery_failure = 1;
goto out;
}
+ if (p->tx_filter && p->tx_filter(sk, p->skb2, p->tx_data)) {
+ kfree_skb(p->skb2);
+ p->skb2 = NULL;
+ goto out;
+ }
if (sk_filter(sk, p->skb2)) {
kfree_skb(p->skb2);
p->skb2 = NULL;
@@ -1507,8 +1514,12 @@ static void do_one_broadcast(struct sock *sk,
sock_put(sk);
}
-int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid,
- u32 group, gfp_t allocation)
+int netlink_broadcast_filtered(struct sock *ssk, struct sk_buff *skb,
+ u32 portid,
+ u32 group, gfp_t allocation,
+ int (*filter)(struct sock *dsk,
+ struct sk_buff *skb, void *data),
+ void *filter_data)
{
struct net *net = sock_net(ssk);
struct netlink_broadcast_data info;
@@ -1527,6 +1538,8 @@ int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid,
info.allocation = allocation;
info.skb = skb;
info.skb2 = NULL;
+ info.tx_filter = filter;
+ info.tx_data = filter_data;
/* While we sleep in clone, do not allow to change socket list */
@@ -1552,6 +1565,14 @@ int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid,
}
return -ESRCH;
}
+EXPORT_SYMBOL(netlink_broadcast_filtered);
+
+int netlink_broadcast(struct sock *ssk, struct sk_buff *skb, u32 portid,
+ u32 group, gfp_t allocation)
+{
+ return netlink_broadcast_filtered(ssk, skb, portid, group, allocation,
+ NULL, NULL);
+}
EXPORT_SYMBOL(netlink_broadcast);
struct netlink_set_err_data {
--
2.40.0
* [PATCH v3 2/7] netlink: Add new netlink_release function
2023-03-29 18:25 [PATCH v3 0/7] Process connector bug fixes & enhancements Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 1/7] netlink: Reverse the patch which removed filtering Anjali Kulkarni
@ 2023-03-29 18:25 ` Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 3/7] connector/cn_proc: Add filtering to fix some bugs Anjali Kulkarni
` (4 subsequent siblings)
6 siblings, 0 replies; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-29 18:25 UTC (permalink / raw)
To: davem
Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev,
anjali.k.kulkarni
A new function pointer, netlink_release, is added to netlink_sock to store
the protocol's release function. It is called when the socket is deleted.
The protocol can supply it via the release field in netlink_kernel_cfg.
This is being added for the NETLINK_CONNECTOR protocol, so it can free its
data when the socket is deleted.
Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
include/linux/netlink.h | 1 +
net/netlink/af_netlink.c | 6 ++++++
net/netlink/af_netlink.h | 4 ++++
3 files changed, 11 insertions(+)
diff --git a/include/linux/netlink.h b/include/linux/netlink.h
index 866bbc5a4c8d..05a316aa93b4 100644
--- a/include/linux/netlink.h
+++ b/include/linux/netlink.h
@@ -51,6 +51,7 @@ struct netlink_kernel_cfg {
int (*bind)(struct net *net, int group);
void (*unbind)(struct net *net, int group);
bool (*compare)(struct net *net, struct sock *sk);
+ void (*release) (struct sock *sk, unsigned long *groups);
};
struct sock *__netlink_kernel_create(struct net *net, int unit,
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index 003c7e6ec9be..dc7880055705 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -677,6 +677,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol,
struct netlink_sock *nlk;
int (*bind)(struct net *net, int group);
void (*unbind)(struct net *net, int group);
+ void (*release)(struct sock *sock, unsigned long *groups);
int err = 0;
sock->state = SS_UNCONNECTED;
@@ -704,6 +705,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol,
cb_mutex = nl_table[protocol].cb_mutex;
bind = nl_table[protocol].bind;
unbind = nl_table[protocol].unbind;
+ release = nl_table[protocol].release;
netlink_unlock_table();
if (err < 0)
@@ -719,6 +721,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol,
nlk->module = module;
nlk->netlink_bind = bind;
nlk->netlink_unbind = unbind;
+ nlk->netlink_release = release;
out:
return err;
@@ -763,6 +766,8 @@ static int netlink_release(struct socket *sock)
* OK. Socket is unlinked, any packets that arrive now
* will be purged.
*/
+ if (nlk->netlink_release)
+ nlk->netlink_release(sk, nlk->groups);
/* must not acquire netlink_table_lock in any way again before unbind
* and notifying genetlink is done as otherwise it might deadlock
@@ -2117,6 +2122,7 @@ __netlink_kernel_create(struct net *net, int unit, struct module *module,
if (cfg) {
nl_table[unit].bind = cfg->bind;
nl_table[unit].unbind = cfg->unbind;
+ nl_table[unit].release = cfg->release;
nl_table[unit].flags = cfg->flags;
if (cfg->compare)
nl_table[unit].compare = cfg->compare;
diff --git a/net/netlink/af_netlink.h b/net/netlink/af_netlink.h
index 5f454c8de6a4..054335a34804 100644
--- a/net/netlink/af_netlink.h
+++ b/net/netlink/af_netlink.h
@@ -42,6 +42,8 @@ struct netlink_sock {
void (*netlink_rcv)(struct sk_buff *skb);
int (*netlink_bind)(struct net *net, int group);
void (*netlink_unbind)(struct net *net, int group);
+ void (*netlink_release)(struct sock *sk,
+ unsigned long *groups);
struct module *module;
struct rhash_head node;
@@ -65,6 +67,8 @@ struct netlink_table {
int (*bind)(struct net *net, int group);
void (*unbind)(struct net *net, int group);
bool (*compare)(struct net *net, struct sock *sock);
+ void (*release)(struct sock *sk,
+ unsigned long *groups);
int registered;
};
--
2.40.0
* [PATCH v3 3/7] connector/cn_proc: Add filtering to fix some bugs
2023-03-29 18:25 [PATCH v3 0/7] Process connector bug fixes & enhancements Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 1/7] netlink: Reverse the patch which removed filtering Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 2/7] netlink: Add new netlink_release function Anjali Kulkarni
@ 2023-03-29 18:25 ` Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 4/7] connector/cn_proc: Test code for proc connector Anjali Kulkarni
` (3 subsequent siblings)
6 siblings, 0 replies; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-29 18:25 UTC (permalink / raw)
To: davem
Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev,
anjali.k.kulkarni
The current proc connector code has the following bugs - if there is more
than one listener for proc connector messages, and one of them deregisters
using PROC_CN_MCAST_IGNORE, it will still receive all proc connector
messages as long as there is another listener.
Another issue is that if one client calls PROC_CN_MCAST_LISTEN and another
one calls PROC_CN_MCAST_IGNORE, then both end up not getting any messages.
This patch adds filtering and drops a packet if the client has sent
PROC_CN_MCAST_IGNORE. This state is stored in the client socket's
sk_user_data. In addition, we now increment or decrement
proc_event_num_listeners only once per client. This fixes the above issues.
cn_release is the release function added for NETLINK_CONNECTOR. It uses
the newly added netlink_release callback in netlink_sock, and it frees
sk_user_data.
Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
drivers/connector/cn_proc.c | 53 ++++++++++++++++++++++++++++-------
drivers/connector/connector.c | 21 +++++++++++---
drivers/w1/w1_netlink.c | 6 ++--
include/linux/connector.h | 8 +++++-
include/uapi/linux/cn_proc.h | 43 ++++++++++++++++------------
5 files changed, 96 insertions(+), 35 deletions(-)
diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index ccac1c453080..84f38d2bd4b9 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -48,6 +48,21 @@ static DEFINE_PER_CPU(struct local_event, local_event) = {
.lock = INIT_LOCAL_LOCK(lock),
};
+static int cn_filter(struct sock *dsk, struct sk_buff *skb, void *data)
+{
+ enum proc_cn_mcast_op mc_op;
+
+ if (!dsk)
+ return 0;
+
+ mc_op = ((struct proc_input *)(dsk->sk_user_data))->mcast_op;
+
+ if (mc_op == PROC_CN_MCAST_IGNORE)
+ return 1;
+
+ return 0;
+}
+
static inline void send_msg(struct cn_msg *msg)
{
local_lock(&local_event.lock);
@@ -61,7 +76,8 @@ static inline void send_msg(struct cn_msg *msg)
*
* If cn_netlink_send() fails, the data is not sent.
*/
- cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT);
+ cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
+ cn_filter, NULL);
local_unlock(&local_event.lock);
}
@@ -346,11 +362,9 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack)
static void cn_proc_mcast_ctl(struct cn_msg *msg,
struct netlink_skb_parms *nsp)
{
- enum proc_cn_mcast_op *mc_op = NULL;
- int err = 0;
-
- if (msg->len != sizeof(*mc_op))
- return;
+ enum proc_cn_mcast_op mc_op = 0, prev_mc_op = 0;
+ int err = 0, initial = 0;
+ struct sock *sk = NULL;
/*
* Events are reported with respect to the initial pid
@@ -367,13 +381,32 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
goto out;
}
- mc_op = (enum proc_cn_mcast_op *)msg->data;
- switch (*mc_op) {
+ if (msg->len == sizeof(mc_op))
+ mc_op = *((enum proc_cn_mcast_op *)msg->data);
+ else
+ return;
+
+ if (nsp->sk) {
+ sk = nsp->sk;
+ if (sk->sk_user_data == NULL) {
+ sk->sk_user_data = kzalloc(sizeof(struct proc_input),
+ GFP_KERNEL);
+ initial = 1;
+ } else {
+ prev_mc_op =
+ ((struct proc_input *)(sk->sk_user_data))->mcast_op;
+ }
+ ((struct proc_input *)(sk->sk_user_data))->mcast_op = mc_op;
+ }
+
+ switch (mc_op) {
case PROC_CN_MCAST_LISTEN:
- atomic_inc(&proc_event_num_listeners);
+ if (initial || (prev_mc_op != PROC_CN_MCAST_LISTEN))
+ atomic_inc(&proc_event_num_listeners);
break;
case PROC_CN_MCAST_IGNORE:
- atomic_dec(&proc_event_num_listeners);
+ if (!initial && (prev_mc_op != PROC_CN_MCAST_IGNORE))
+ atomic_dec(&proc_event_num_listeners);
break;
default:
err = EINVAL;
diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
index 48ec7ce6ecac..d1179df2b0ba 100644
--- a/drivers/connector/connector.c
+++ b/drivers/connector/connector.c
@@ -59,7 +59,9 @@ static int cn_already_initialized;
* both, or if both are zero then the group is looked up and sent there.
*/
int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group,
- gfp_t gfp_mask)
+ gfp_t gfp_mask,
+ int (*filter)(struct sock *dsk, struct sk_buff *skb, void *data),
+ void *filter_data)
{
struct cn_callback_entry *__cbq;
unsigned int size;
@@ -110,8 +112,9 @@ int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 __group,
NETLINK_CB(skb).dst_group = group;
if (group)
- return netlink_broadcast(dev->nls, skb, portid, group,
- gfp_mask);
+ return netlink_broadcast_filtered(dev->nls, skb, portid, group,
+ gfp_mask, filter,
+ (void *)filter_data);
return netlink_unicast(dev->nls, skb, portid,
!gfpflags_allow_blocking(gfp_mask));
}
@@ -121,7 +124,8 @@ EXPORT_SYMBOL_GPL(cn_netlink_send_mult);
int cn_netlink_send(struct cn_msg *msg, u32 portid, u32 __group,
gfp_t gfp_mask)
{
- return cn_netlink_send_mult(msg, msg->len, portid, __group, gfp_mask);
+ return cn_netlink_send_mult(msg, msg->len, portid, __group, gfp_mask,
+ NULL, NULL);
}
EXPORT_SYMBOL_GPL(cn_netlink_send);
@@ -162,6 +166,14 @@ static int cn_call_callback(struct sk_buff *skb)
return err;
}
+static void cn_release(struct sock *sk, unsigned long *groups)
+{
+ if (groups && test_bit(CN_IDX_PROC - 1, groups)) {
+ kfree(sk->sk_user_data);
+ sk->sk_user_data = NULL;
+ }
+}
+
/*
* Main netlink receiving function.
*
@@ -249,6 +261,7 @@ static int cn_init(void)
struct netlink_kernel_cfg cfg = {
.groups = CN_NETLINK_USERS + 0xf,
.input = cn_rx_skb,
+ .release = cn_release,
};
dev->nls = netlink_kernel_create(&init_net, NETLINK_CONNECTOR, &cfg);
diff --git a/drivers/w1/w1_netlink.c b/drivers/w1/w1_netlink.c
index db110cc442b1..691978cddab7 100644
--- a/drivers/w1/w1_netlink.c
+++ b/drivers/w1/w1_netlink.c
@@ -65,7 +65,8 @@ static void w1_unref_block(struct w1_cb_block *block)
u16 len = w1_reply_len(block);
if (len) {
cn_netlink_send_mult(block->first_cn, len,
- block->portid, 0, GFP_KERNEL);
+ block->portid, 0,
+ GFP_KERNEL, NULL, NULL);
}
kfree(block);
}
@@ -83,7 +84,8 @@ static void w1_reply_make_space(struct w1_cb_block *block, u16 space)
{
u16 len = w1_reply_len(block);
if (len + space >= block->maxlen) {
- cn_netlink_send_mult(block->first_cn, len, block->portid, 0, GFP_KERNEL);
+ cn_netlink_send_mult(block->first_cn, len, block->portid,
+ 0, GFP_KERNEL, NULL, NULL);
block->first_cn->len = 0;
block->cn = NULL;
block->msg = NULL;
diff --git a/include/linux/connector.h b/include/linux/connector.h
index 487350bb19c3..cec2d99ae902 100644
--- a/include/linux/connector.h
+++ b/include/linux/connector.h
@@ -90,13 +90,19 @@ void cn_del_callback(const struct cb_id *id);
* If @group is not zero, then message will be delivered
* to the specified group.
* @gfp_mask: GFP mask.
+ * @filter: Filter function to be used at netlink layer.
+ * @filter_data:Filter data to be supplied to the filter function
*
* It can be safely called from softirq context, but may silently
* fail under strong memory pressure.
*
* If there are no listeners for given group %-ESRCH can be returned.
*/
-int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid, u32 group, gfp_t gfp_mask);
+int cn_netlink_send_mult(struct cn_msg *msg, u16 len, u32 portid,
+ u32 group, gfp_t gfp_mask,
+ int (*filter)(struct sock *dsk, struct sk_buff *skb,
+ void *data),
+ void *filter_data);
/**
* cn_netlink_send - Sends message to the specified groups.
diff --git a/include/uapi/linux/cn_proc.h b/include/uapi/linux/cn_proc.h
index db210625cee8..6a06fb424313 100644
--- a/include/uapi/linux/cn_proc.h
+++ b/include/uapi/linux/cn_proc.h
@@ -30,6 +30,30 @@ enum proc_cn_mcast_op {
PROC_CN_MCAST_IGNORE = 2
};
+enum proc_cn_event {
+ /* Use successive bits so the enums can be used to record
+ * sets of events as well
+ */
+ PROC_EVENT_NONE = 0x00000000,
+ PROC_EVENT_FORK = 0x00000001,
+ PROC_EVENT_EXEC = 0x00000002,
+ PROC_EVENT_UID = 0x00000004,
+ PROC_EVENT_GID = 0x00000040,
+ PROC_EVENT_SID = 0x00000080,
+ PROC_EVENT_PTRACE = 0x00000100,
+ PROC_EVENT_COMM = 0x00000200,
+ /* "next" should be 0x00000400 */
+ /* "last" is the last process event: exit,
+ * while "next to last" is coredumping event
+ */
+ PROC_EVENT_COREDUMP = 0x40000000,
+ PROC_EVENT_EXIT = 0x80000000
+};
+
+struct proc_input {
+ enum proc_cn_mcast_op mcast_op;
+};
+
/*
* From the user's point of view, the process
* ID is the thread group ID and thread ID is the internal
@@ -44,24 +68,7 @@ enum proc_cn_mcast_op {
*/
struct proc_event {
- enum what {
- /* Use successive bits so the enums can be used to record
- * sets of events as well
- */
- PROC_EVENT_NONE = 0x00000000,
- PROC_EVENT_FORK = 0x00000001,
- PROC_EVENT_EXEC = 0x00000002,
- PROC_EVENT_UID = 0x00000004,
- PROC_EVENT_GID = 0x00000040,
- PROC_EVENT_SID = 0x00000080,
- PROC_EVENT_PTRACE = 0x00000100,
- PROC_EVENT_COMM = 0x00000200,
- /* "next" should be 0x00000400 */
- /* "last" is the last process event: exit,
- * while "next to last" is coredumping event */
- PROC_EVENT_COREDUMP = 0x40000000,
- PROC_EVENT_EXIT = 0x80000000
- } what;
+ enum proc_cn_event what;
__u32 cpu;
__u64 __attribute__((aligned(8))) timestamp_ns;
/* Number of nano seconds since system boot */
--
2.40.0
* [PATCH v3 4/7] connector/cn_proc: Test code for proc connector
2023-03-29 18:25 [PATCH v3 0/7] Process connector bug fixes & enhancements Anjali Kulkarni
` (2 preceding siblings ...)
2023-03-29 18:25 ` [PATCH v3 3/7] connector/cn_proc: Add filtering to fix some bugs Anjali Kulkarni
@ 2023-03-29 18:25 ` Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 5/7] connector/cn_proc: Performance improvements Anjali Kulkarni
` (2 subsequent siblings)
6 siblings, 0 replies; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-29 18:25 UTC (permalink / raw)
To: davem
Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev,
anjali.k.kulkarni
Test code for proc connector.
Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
samples/connector/proc_filter.c | 262 ++++++++++++++++++++++++++++++++
1 file changed, 262 insertions(+)
create mode 100644 samples/connector/proc_filter.c
diff --git a/samples/connector/proc_filter.c b/samples/connector/proc_filter.c
new file mode 100644
index 000000000000..25202f5bc126
--- /dev/null
+++ b/samples/connector/proc_filter.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <sys/types.h>
+#include <sys/epoll.h>
+#include <sys/socket.h>
+#include <linux/netlink.h>
+#include <linux/connector.h>
+#include <linux/cn_proc.h>
+
+#include <stddef.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <strings.h>
+#include <errno.h>
+#include <signal.h>
+
+#define NL_MESSAGE_SIZE (sizeof(struct nlmsghdr) + sizeof(struct cn_msg) + \
+ sizeof(int))
+
+#define MAX_EVENTS 1
+
+#ifdef ENABLE_PRINTS
+#define Printf printf
+#else
+#define Printf
+#endif
+
+volatile static int interrupted;
+static int nl_sock, ret_errno, tcount;
+static struct epoll_event evn;
+
+int send_message(enum proc_cn_mcast_op mcast_op)
+{
+ char buff[NL_MESSAGE_SIZE];
+ struct nlmsghdr *hdr;
+ struct cn_msg *msg;
+
+ hdr = (struct nlmsghdr *)buff;
+ hdr->nlmsg_len = NL_MESSAGE_SIZE;
+ hdr->nlmsg_type = NLMSG_DONE;
+ hdr->nlmsg_flags = 0;
+ hdr->nlmsg_seq = 0;
+ hdr->nlmsg_pid = getpid();
+
+ msg = (struct cn_msg *)NLMSG_DATA(hdr);
+ msg->id.idx = CN_IDX_PROC;
+ msg->id.val = CN_VAL_PROC;
+ msg->seq = 0;
+ msg->ack = 0;
+ msg->flags = 0;
+
+ msg->len = sizeof(int);
+ *(int *)msg->data = mcast_op;
+
+ if (send(nl_sock, hdr, hdr->nlmsg_len, 0) == -1) {
+ ret_errno = errno;
+ perror("send failed");
+ return -3;
+ }
+ return 0;
+}
+
+int register_proc_netlink(int *efd, enum proc_cn_mcast_op mcast_op)
+{
+ struct sockaddr_nl sa_nl;
+ int err = 0, epoll_fd;
+
+ nl_sock = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_CONNECTOR);
+
+ if (nl_sock == -1) {
+ ret_errno = errno;
+ perror("socket failed");
+ return -1;
+ }
+
+ bzero(&sa_nl, sizeof(sa_nl));
+ sa_nl.nl_family = AF_NETLINK;
+ sa_nl.nl_groups = CN_IDX_PROC;
+ sa_nl.nl_pid = getpid();
+
+ if (bind(nl_sock, (struct sockaddr *)&sa_nl, sizeof(sa_nl)) == -1) {
+ ret_errno = errno;
+ perror("bind failed");
+ return -2;
+ }
+
+ epoll_fd = epoll_create1(EPOLL_CLOEXEC);
+ if (epoll_fd < 0) {
+ ret_errno = errno;
+ perror("epoll_create1 failed");
+ return -2;
+ }
+
+ err = send_message(mcast_op);
+ if (err < 0)
+ return err;
+
+ evn.events = EPOLLIN;
+ evn.data.fd = nl_sock;
+ if (epoll_ctl(epoll_fd, EPOLL_CTL_ADD, nl_sock, &evn) < 0) {
+ ret_errno = errno;
+ perror("epoll_ctl failed");
+ return -3;
+ }
+ *efd = epoll_fd;
+ return 0;
+}
+
+static void sigint(__attribute__((__always_unused)) int sig)
+{
+ interrupted = 1;
+}
+
+int handle_packet(char *buff, int fd, struct proc_event *event)
+{
+ struct nlmsghdr *hdr;
+
+ hdr = (struct nlmsghdr *)buff;
+
+ if (hdr->nlmsg_type == NLMSG_ERROR) {
+ perror("NLMSG_ERROR error\n");
+ return -3;
+ } else if (hdr->nlmsg_type == NLMSG_DONE) {
+ event = (struct proc_event *)
+ ((struct cn_msg *)NLMSG_DATA(hdr))->data;
+ tcount++;
+ switch (event->what) {
+ case PROC_EVENT_EXIT:
+ Printf("Exit process %d (tgid %d) with code %d, signal %d\n",
+ event->event_data.exit.process_pid,
+ event->event_data.exit.process_tgid,
+ event->event_data.exit.exit_code,
+ event->event_data.exit.exit_signal);
+ break;
+ case PROC_EVENT_FORK:
+ Printf("Fork process %d (tgid %d), parent %d (tgid %d)\n",
+ event->event_data.fork.child_pid,
+ event->event_data.fork.child_tgid,
+ event->event_data.fork.parent_pid,
+ event->event_data.fork.parent_tgid);
+ break;
+ case PROC_EVENT_EXEC:
+ Printf("Exec process %d (tgid %d)\n",
+ event->event_data.exec.process_pid,
+ event->event_data.exec.process_tgid);
+ break;
+ case PROC_EVENT_UID:
+ Printf("UID process %d (tgid %d) uid %d euid %d\n",
+ event->event_data.id.process_pid,
+ event->event_data.id.process_tgid,
+ event->event_data.id.r.ruid,
+ event->event_data.id.e.euid);
+ break;
+ case PROC_EVENT_GID:
+ Printf("GID process %d (tgid %d) gid %d egid %d\n",
+ event->event_data.id.process_pid,
+ event->event_data.id.process_tgid,
+ event->event_data.id.r.rgid,
+ event->event_data.id.e.egid);
+ break;
+ case PROC_EVENT_SID:
+ Printf("SID process %d (tgid %d)\n",
+ event->event_data.sid.process_pid,
+ event->event_data.sid.process_tgid);
+ break;
+ case PROC_EVENT_PTRACE:
+ Printf("Ptrace process %d (tgid %d), Tracer %d (tgid %d)\n",
+ event->event_data.ptrace.process_pid,
+ event->event_data.ptrace.process_tgid,
+ event->event_data.ptrace.tracer_pid,
+ event->event_data.ptrace.tracer_tgid);
+ break;
+ case PROC_EVENT_COMM:
+ Printf("Comm process %d (tgid %d) comm %s\n",
+ event->event_data.comm.process_pid,
+ event->event_data.comm.process_tgid,
+ event->event_data.comm.comm);
+ break;
+ case PROC_EVENT_COREDUMP:
+ Printf("Coredump process %d (tgid %d) parent %d, (tgid %d)\n",
+ event->event_data.coredump.process_pid,
+ event->event_data.coredump.process_tgid,
+ event->event_data.coredump.parent_pid,
+ event->event_data.coredump.parent_tgid);
+ break;
+ default:
+ break;
+ }
+ }
+ return 0;
+}
+
+int handle_events(int epoll_fd, struct proc_event *pev)
+{
+ char buff[CONNECTOR_MAX_MSG_SIZE];
+ struct epoll_event ev[MAX_EVENTS];
+ int i, event_count = 0, err = 0;
+
+ event_count = epoll_wait(epoll_fd, ev, MAX_EVENTS, -1);
+ if (event_count < 0) {
+ ret_errno = errno;
+ if (ret_errno != EINTR)
+ perror("epoll_wait failed");
+ return -3;
+ }
+ for (i = 0; i < event_count; i++) {
+ if (!(ev[i].events & EPOLLIN))
+ continue;
+ if (recv(ev[i].data.fd, buff, sizeof(buff), 0) == -1) {
+ ret_errno = errno;
+ perror("recv failed");
+ return -3;
+ }
+ err = handle_packet(buff, ev[i].data.fd, pev);
+ if (err < 0)
+ return err;
+ }
+ return 0;
+}
+
+int main(int argc, char *argv[])
+{
+ int epoll_fd, err;
+ struct proc_event proc_ev;
+
+ signal(SIGINT, sigint);
+
+ err = register_proc_netlink(&epoll_fd, PROC_CN_MCAST_LISTEN);
+ if (err < 0) {
+ if (err == -2)
+ close(nl_sock);
+ if (err == -3) {
+ close(nl_sock);
+ close(epoll_fd);
+ }
+ exit(1);
+ }
+
+ while (!interrupted) {
+ err = handle_events(epoll_fd, &proc_ev);
+ if (err < 0) {
+ if (ret_errno == EINTR)
+ continue;
+ if (err == -2)
+ close(nl_sock);
+ if (err == -3) {
+ close(nl_sock);
+ close(epoll_fd);
+ }
+ exit(1);
+ }
+ }
+
+ send_message(PROC_CN_MCAST_IGNORE);
+
+ close(epoll_fd);
+ close(nl_sock);
+
+ printf("Done total count: %d\n", tcount);
+ exit(0);
+}
--
2.40.0
* [PATCH v3 5/7] connector/cn_proc: Performance improvements
2023-03-29 18:25 [PATCH v3 0/7] Process connector bug fixes & enhancements Anjali Kulkarni
` (3 preceding siblings ...)
2023-03-29 18:25 ` [PATCH v3 4/7] connector/cn_proc: Test code for proc connector Anjali Kulkarni
@ 2023-03-29 18:25 ` Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 6/7] netlink: Add multicast group level permissions Anjali Kulkarni
2023-03-29 18:25 ` [PATCH v3 7/7] connector/cn_proc: Allow non-root users access Anjali Kulkarni
6 siblings, 0 replies; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-29 18:25 UTC (permalink / raw)
To: davem
Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev,
anjali.k.kulkarni
This patch adds the capability to filter messages sent by the proc
connector based on the event type supplied in the message from the client
to the connector. The client can register to listen for an event type
given in struct proc_input.
This event-based filtering will greatly enhance performance - handling
8K exits takes about 70ms, whereas 8K forks + 8K exits takes about 150ms,
and handling 8K forks + 8K exits + 8K execs takes 200ms. There are
currently 9 different types of events, and we need to listen to all of
them. Also, measuring the time using pidfds for monitoring 8K process
exits took much longer - 200ms, as compared to 70ms using only exit
notifications of the proc connector.
We also add a new event type - PROC_EVENT_NONZERO_EXIT, which is only sent
by the kernel to a listening application when a process exits with a
non-zero exit status. This will help clients like Oracle DB, where a
monitoring process wants notifications for non-zero process exits so it
can clean up after them.
This kind of a new event could also be useful to other applications like
Google's lmkd daemon, which needs a killed process's exit notification.
The patch takes care that existing clients using the old mechanism of not
sending the event type keep working without any changes.
The cn_filter function checks whether the event type being notified via
the proc connector matches the event type requested by the client, and
sends the packet on a match or drops it otherwise.
The proc_filter.c test file is updated to reflect the new filtering.
Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
drivers/connector/cn_proc.c | 59 +++++++++++++++++++++++++++++----
include/uapi/linux/cn_proc.h | 19 +++++++++++
samples/connector/proc_filter.c | 47 +++++++++++++++++++++++---
3 files changed, 115 insertions(+), 10 deletions(-)
diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index 84f38d2bd4b9..35bec1fd7ee0 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -50,21 +50,44 @@ static DEFINE_PER_CPU(struct local_event, local_event) = {
static int cn_filter(struct sock *dsk, struct sk_buff *skb, void *data)
{
+ uintptr_t val;
+ __u32 what, exit_code, *ptr;
enum proc_cn_mcast_op mc_op;
- if (!dsk)
+ if (!dsk || !data)
return 0;
+ ptr = (__u32 *)data;
+ what = *ptr++;
+ exit_code = *ptr;
+ val = ((struct proc_input *)(dsk->sk_user_data))->event_type;
mc_op = ((struct proc_input *)(dsk->sk_user_data))->mcast_op;
if (mc_op == PROC_CN_MCAST_IGNORE)
return 1;
- return 0;
+ if ((__u32)val == PROC_EVENT_ALL)
+ return 0;
+ /*
+ * Drop packet if we have to report only non-zero exit status
+ * (PROC_EVENT_NONZERO_EXIT) and exit status is 0
+ */
+ if (((__u32)val & PROC_EVENT_NONZERO_EXIT) &&
+ (what == PROC_EVENT_EXIT)) {
+ if (exit_code)
+ return 0;
+ else
+ return 1;
+ }
+ if ((__u32)val & what)
+ return 0;
+ return 1;
}
static inline void send_msg(struct cn_msg *msg)
{
+ __u32 filter_data[2];
+
local_lock(&local_event.lock);
msg->seq = __this_cpu_inc_return(local_event.count) - 1;
@@ -76,8 +99,15 @@ static inline void send_msg(struct cn_msg *msg)
*
* If cn_netlink_send() fails, the data is not sent.
*/
+ filter_data[0] = ((struct proc_event *)msg->data)->what;
+ if (filter_data[0] == PROC_EVENT_EXIT) {
+ filter_data[1] =
+ ((struct proc_event *)msg->data)->event_data.exit.exit_code;
+ } else {
+ filter_data[1] = 0;
+ }
cn_netlink_send_mult(msg, msg->len, 0, CN_IDX_PROC, GFP_NOWAIT,
- cn_filter, NULL);
+ cn_filter, (void *)filter_data);
local_unlock(&local_event.lock);
}
@@ -357,12 +387,15 @@ static void cn_proc_ack(int err, int rcvd_seq, int rcvd_ack)
/**
* cn_proc_mcast_ctl
- * @data: message sent from userspace via the connector
+ * @msg: message sent from userspace via the connector
+ * @nsp: NETLINK_CB of the client's socket buffer
*/
static void cn_proc_mcast_ctl(struct cn_msg *msg,
struct netlink_skb_parms *nsp)
{
enum proc_cn_mcast_op mc_op = 0, prev_mc_op = 0;
+ struct proc_input *pinput = NULL;
+ enum proc_cn_event ev_type = 0;
int err = 0, initial = 0;
struct sock *sk = NULL;
@@ -381,11 +414,21 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
goto out;
}
- if (msg->len == sizeof(mc_op))
+ if (msg->len == sizeof(*pinput)) {
+ pinput = (struct proc_input *)msg->data;
+ mc_op = pinput->mcast_op;
+ ev_type = pinput->event_type;
+ } else if (msg->len == sizeof(mc_op)) {
mc_op = *((enum proc_cn_mcast_op *)msg->data);
- else
+ ev_type = PROC_EVENT_ALL;
+ } else
return;
+ ev_type = valid_event((enum proc_cn_event)ev_type);
+
+ if (ev_type == PROC_EVENT_NONE)
+ ev_type = PROC_EVENT_ALL;
+
if (nsp->sk) {
sk = nsp->sk;
if (sk->sk_user_data == NULL) {
@@ -396,6 +439,8 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
prev_mc_op =
((struct proc_input *)(sk->sk_user_data))->mcast_op;
}
+ ((struct proc_input *)(sk->sk_user_data))->event_type =
+ ev_type;
((struct proc_input *)(sk->sk_user_data))->mcast_op = mc_op;
}
@@ -407,6 +452,8 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
case PROC_CN_MCAST_IGNORE:
if (!initial && (prev_mc_op != PROC_CN_MCAST_IGNORE))
atomic_dec(&proc_event_num_listeners);
+ ((struct proc_input *)(sk->sk_user_data))->event_type =
+ PROC_EVENT_NONE;
break;
default:
err = EINVAL;
diff --git a/include/uapi/linux/cn_proc.h b/include/uapi/linux/cn_proc.h
index 6a06fb424313..f2afb7cc4926 100644
--- a/include/uapi/linux/cn_proc.h
+++ b/include/uapi/linux/cn_proc.h
@@ -30,6 +30,15 @@ enum proc_cn_mcast_op {
PROC_CN_MCAST_IGNORE = 2
};
+#define PROC_EVENT_ALL (PROC_EVENT_FORK | PROC_EVENT_EXEC | PROC_EVENT_UID | \
+ PROC_EVENT_GID | PROC_EVENT_SID | PROC_EVENT_PTRACE | \
+ PROC_EVENT_COMM | PROC_EVENT_NONZERO_EXIT | \
+ PROC_EVENT_COREDUMP | PROC_EVENT_EXIT)
+
+/*
+ * If you add an entry in proc_cn_event, make sure you add it in
+ * PROC_EVENT_ALL above as well.
+ */
enum proc_cn_event {
/* Use successive bits so the enums can be used to record
* sets of events as well
@@ -45,15 +54,25 @@ enum proc_cn_event {
/* "next" should be 0x00000400 */
/* "last" is the last process event: exit,
* while "next to last" is coredumping event
+ * before that is report only if process dies
+ * with non-zero exit status
*/
+ PROC_EVENT_NONZERO_EXIT = 0x20000000,
PROC_EVENT_COREDUMP = 0x40000000,
PROC_EVENT_EXIT = 0x80000000
};
struct proc_input {
enum proc_cn_mcast_op mcast_op;
+ enum proc_cn_event event_type;
};
+static inline enum proc_cn_event valid_event(enum proc_cn_event ev_type)
+{
+ ev_type &= PROC_EVENT_ALL;
+ return ev_type;
+}
+
/*
* From the user's point of view, the process
* ID is the thread group ID and thread ID is the internal
diff --git a/samples/connector/proc_filter.c b/samples/connector/proc_filter.c
index 25202f5bc126..63504fc5f002 100644
--- a/samples/connector/proc_filter.c
+++ b/samples/connector/proc_filter.c
@@ -15,22 +15,33 @@
#include <errno.h>
#include <signal.h>
+#define FILTER
+
+#ifdef FILTER
+#define NL_MESSAGE_SIZE (sizeof(struct nlmsghdr) + sizeof(struct cn_msg) + \
+ sizeof(struct proc_input))
+#else
#define NL_MESSAGE_SIZE (sizeof(struct nlmsghdr) + sizeof(struct cn_msg) + \
sizeof(int))
+#endif
#define MAX_EVENTS 1
+volatile static int interrupted;
+static int nl_sock, ret_errno, tcount;
+static struct epoll_event evn;
+
#ifdef ENABLE_PRINTS
#define Printf printf
#else
#define Printf
#endif
-volatile static int interrupted;
-static int nl_sock, ret_errno, tcount;
-static struct epoll_event evn;
-
+#ifdef FILTER
+int send_message(struct proc_input *pinp)
+#else
int send_message(enum proc_cn_mcast_op mcast_op)
+#endif
{
char buff[NL_MESSAGE_SIZE];
struct nlmsghdr *hdr;
@@ -50,8 +61,14 @@ int send_message(enum proc_cn_mcast_op mcast_op)
msg->ack = 0;
msg->flags = 0;
+#ifdef FILTER
+ msg->len = sizeof(struct proc_input);
+ ((struct proc_input *)msg->data)->mcast_op = pinp->mcast_op;
+ ((struct proc_input *)msg->data)->event_type = pinp->event_type;
+#else
msg->len = sizeof(int);
*(int *)msg->data = mcast_op;
+#endif
if (send(nl_sock, hdr, hdr->nlmsg_len, 0) == -1) {
ret_errno = errno;
@@ -61,7 +78,11 @@ int send_message(enum proc_cn_mcast_op mcast_op)
return 0;
}
+#ifdef FILTER
+int register_proc_netlink(int *efd, struct proc_input *input)
+#else
int register_proc_netlink(int *efd, enum proc_cn_mcast_op mcast_op)
+#endif
{
struct sockaddr_nl sa_nl;
int err = 0, epoll_fd;
@@ -92,7 +113,11 @@ int register_proc_netlink(int *efd, enum proc_cn_mcast_op mcast_op)
return -2;
}
+#ifdef FILTER
+ err = send_message(input);
+#else
err = send_message(mcast_op);
+#endif
if (err < 0)
return err;
@@ -223,10 +248,19 @@ int main(int argc, char *argv[])
{
int epoll_fd, err;
struct proc_event proc_ev;
+#ifdef FILTER
+ struct proc_input input;
+#endif
signal(SIGINT, sigint);
+#ifdef FILTER
+ input.event_type = PROC_EVENT_NONZERO_EXIT;
+ input.mcast_op = PROC_CN_MCAST_LISTEN;
+ err = register_proc_netlink(&epoll_fd, &input);
+#else
err = register_proc_netlink(&epoll_fd, PROC_CN_MCAST_LISTEN);
+#endif
if (err < 0) {
if (err == -2)
close(nl_sock);
@@ -252,7 +286,12 @@ int main(int argc, char *argv[])
}
}
+#ifdef FILTER
+ input.mcast_op = PROC_CN_MCAST_IGNORE;
+ send_message(&input);
+#else
send_message(PROC_CN_MCAST_IGNORE);
+#endif
close(epoll_fd);
close(nl_sock);
--
2.40.0
* [PATCH v3 6/7] netlink: Add multicast group level permissions
2023-03-29 18:25 [PATCH v3 0/7] Process connector bug fixes & enhancements Anjali Kulkarni
` (4 preceding siblings ...)
2023-03-29 18:25 ` [PATCH v3 5/7] connector/cn_proc: Performance improvements Anjali Kulkarni
@ 2023-03-29 18:25 ` Anjali Kulkarni
2023-03-31 6:39 ` Jakub Kicinski
2023-03-29 18:25 ` [PATCH v3 7/7] connector/cn_proc: Allow non-root users access Anjali Kulkarni
6 siblings, 1 reply; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-29 18:25 UTC (permalink / raw)
To: davem
Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev,
anjali.k.kulkarni
A new field, perm_groups, is added to netlink_sock to store the protocol's
multicast group access permissions. This allows for finer-grained access
control than just at the protocol level. These permissions can be supplied
by the protocol via netlink_kernel_cfg.
A new function, netlink_multicast_allowed(), is added, which checks whether
the protocol's multicast group has non-root access before allowing bind.
Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
include/linux/netlink.h | 1 +
net/netlink/af_netlink.c | 25 +++++++++++++++++++++++--
net/netlink/af_netlink.h | 2 ++
3 files changed, 26 insertions(+), 2 deletions(-)
diff --git a/include/linux/netlink.h b/include/linux/netlink.h
index 05a316aa93b4..253cbcd7a290 100644
--- a/include/linux/netlink.h
+++ b/include/linux/netlink.h
@@ -46,6 +46,7 @@ void netlink_table_ungrab(void);
struct netlink_kernel_cfg {
unsigned int groups;
unsigned int flags;
+ long unsigned perm_groups;
void (*input)(struct sk_buff *skb);
struct mutex *cb_mutex;
int (*bind)(struct net *net, int group);
diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
index dc7880055705..f31173d28dd0 100644
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c
@@ -679,6 +679,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol,
void (*unbind)(struct net *net, int group);
void (*release)(struct sock *sock, unsigned long *groups);
int err = 0;
+ unsigned long perm_groups;
sock->state = SS_UNCONNECTED;
@@ -706,6 +707,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol,
bind = nl_table[protocol].bind;
unbind = nl_table[protocol].unbind;
release = nl_table[protocol].release;
+ perm_groups = nl_table[protocol].perm_groups;
netlink_unlock_table();
if (err < 0)
@@ -722,6 +724,7 @@ static int netlink_create(struct net *net, struct socket *sock, int protocol,
nlk->netlink_bind = bind;
nlk->netlink_unbind = unbind;
nlk->netlink_release = release;
+ nlk->perm_groups = perm_groups;
out:
return err;
@@ -938,6 +941,20 @@ bool netlink_net_capable(const struct sk_buff *skb, int cap)
}
EXPORT_SYMBOL(netlink_net_capable);
+static inline bool netlink_multicast_allowed(unsigned long perm_groups,
+ unsigned long groups)
+{
+ int group;
+
+ for (group = 0; group < BITS_PER_TYPE(u32); group++) {
+ if (test_bit(group, &groups)) {
+ if (!test_bit(group, &perm_groups))
+ return false;
+ }
+ }
+ return true;
+}
+
static inline int netlink_allowed(const struct socket *sock, unsigned int flag)
{
return (nl_table[sock->sk->sk_protocol].flags & flag) ||
@@ -1023,8 +1040,11 @@ static int netlink_bind(struct socket *sock, struct sockaddr *addr,
/* Only superuser is allowed to listen multicasts */
if (groups) {
- if (!netlink_allowed(sock, NL_CFG_F_NONROOT_RECV))
- return -EPERM;
+ if (!netlink_allowed(sock, NL_CFG_F_NONROOT_RECV)) {
+ if (!netlink_multicast_allowed(nlk->perm_groups,
+ groups))
+ return -EPERM;
+ }
err = netlink_realloc_groups(sk);
if (err)
return err;
@@ -2124,6 +2144,7 @@ __netlink_kernel_create(struct net *net, int unit, struct module *module,
nl_table[unit].unbind = cfg->unbind;
nl_table[unit].release = cfg->release;
nl_table[unit].flags = cfg->flags;
+ nl_table[unit].perm_groups = cfg->perm_groups;
if (cfg->compare)
nl_table[unit].compare = cfg->compare;
}
diff --git a/net/netlink/af_netlink.h b/net/netlink/af_netlink.h
index 054335a34804..b7880254c716 100644
--- a/net/netlink/af_netlink.h
+++ b/net/netlink/af_netlink.h
@@ -29,6 +29,7 @@ struct netlink_sock {
u32 flags;
u32 subscriptions;
u32 ngroups;
+ unsigned long perm_groups;
unsigned long *groups;
unsigned long state;
size_t max_recvmsg_len;
@@ -62,6 +63,7 @@ struct netlink_table {
struct listeners __rcu *listeners;
unsigned int flags;
unsigned int groups;
+ unsigned long perm_groups;
struct mutex *cb_mutex;
struct module *module;
int (*bind)(struct net *net, int group);
--
2.40.0
* [PATCH v3 7/7] connector/cn_proc: Allow non-root users access
2023-03-29 18:25 [PATCH v3 0/7] Process connector bug fixes & enhancements Anjali Kulkarni
` (5 preceding siblings ...)
2023-03-29 18:25 ` [PATCH v3 6/7] netlink: Add multicast group level permissions Anjali Kulkarni
@ 2023-03-29 18:25 ` Anjali Kulkarni
6 siblings, 0 replies; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-29 18:25 UTC (permalink / raw)
To: davem
Cc: edumazet, kuba, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev,
anjali.k.kulkarni
There were a couple of reasons for not allowing non-root users access
initially. One is that at some point there was no proper receive buffer
management in place for netlink multicast, but that should be long fixed.
See the link below for more context.
The second is that some of the messages may contain data that is
root-only. But this should be handled with a finer granularity, which is
being done at the protocol layer. The only problematic protocols are
nf_queue and the firewall netlink. Hence, this restriction on non-root
access was relaxed for NETLINK_ROUTE initially:
https://lore.kernel.org/all/20020612013101.A22399@wotan.suse.de/
This restriction has also been removed for the following protocols:
NETLINK_KOBJECT_UEVENT, NETLINK_AUDIT, NETLINK_SOCK_DIAG,
NETLINK_GENERIC, NETLINK_SELINUX.
Since process connector messages are not sensitive (process fork, exit
notifications, etc.), and anyone can read /proc data, we can allow
non-root access here. However, since process event notification is not the
only consumer of NETLINK_CONNECTOR, we make this change even more fine
grained than the protocol level, by checking for the multicast group
within the protocol.
Set perm_groups for CN_IDX_PROC to 1, indicating it has non-root access.
The reason we need this change is that we cannot run our DB application
as root.
Signed-off-by: Anjali Kulkarni <anjali.k.kulkarni@oracle.com>
---
drivers/connector/cn_proc.c | 7 -------
drivers/connector/connector.c | 1 +
2 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/connector/cn_proc.c b/drivers/connector/cn_proc.c
index 35bec1fd7ee0..046a8c1d8577 100644
--- a/drivers/connector/cn_proc.c
+++ b/drivers/connector/cn_proc.c
@@ -408,12 +408,6 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
!task_is_in_init_pid_ns(current))
return;
- /* Can only change if privileged. */
- if (!__netlink_ns_capable(nsp, &init_user_ns, CAP_NET_ADMIN)) {
- err = EPERM;
- goto out;
- }
-
if (msg->len == sizeof(*pinput)) {
pinput = (struct proc_input *)msg->data;
mc_op = pinput->mcast_op;
@@ -460,7 +454,6 @@ static void cn_proc_mcast_ctl(struct cn_msg *msg,
break;
}
-out:
cn_proc_ack(err, msg->seq, msg->ack);
}
diff --git a/drivers/connector/connector.c b/drivers/connector/connector.c
index d1179df2b0ba..a9e5ed36732d 100644
--- a/drivers/connector/connector.c
+++ b/drivers/connector/connector.c
@@ -262,6 +262,7 @@ static int cn_init(void)
.groups = CN_NETLINK_USERS + 0xf,
.input = cn_rx_skb,
.release = cn_release,
+ .perm_groups = 0x1,
};
dev->nls = netlink_kernel_create(&init_net, NETLINK_CONNECTOR, &cfg);
--
2.40.0
* Re: [PATCH v3 6/7] netlink: Add multicast group level permissions
2023-03-29 18:25 ` [PATCH v3 6/7] netlink: Add multicast group level permissions Anjali Kulkarni
@ 2023-03-31 6:39 ` Jakub Kicinski
2023-03-31 17:00 ` Anjali Kulkarni
0 siblings, 1 reply; 14+ messages in thread
From: Jakub Kicinski @ 2023-03-31 6:39 UTC (permalink / raw)
To: Anjali Kulkarni
Cc: davem, edumazet, pabeni, zbr, brauner, johannes, ecree.xilinx,
leon, keescook, socketcan, petrm, linux-kernel, netdev
On Wed, 29 Mar 2023 11:25:42 -0700 Anjali Kulkarni wrote:
> A new field perm_groups is added in netlink_sock to store the protocol's
> multicast group access permissions. This is to allow for a more fine
> grained access control than just at the protocol level. These
> permissions can be supplied by the protocol via the netlink_kernel_cfg.
> A new function netlink_multicast_allowed() is added, which checks if
> the protocol's multicast group has non-root access before allowing bind.
Is there a reason this is better than implementing .bind
in the connector family and filtering there?
* Re: [PATCH v3 6/7] netlink: Add multicast group level permissions
2023-03-31 6:39 ` Jakub Kicinski
@ 2023-03-31 17:00 ` Anjali Kulkarni
2023-03-31 17:24 ` Jakub Kicinski
0 siblings, 1 reply; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-31 17:00 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net,
ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org,
socketcan@hartkopp.net, petrm@nvidia.com,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org
> On Mar 30, 2023, at 11:39 PM, Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Wed, 29 Mar 2023 11:25:42 -0700 Anjali Kulkarni wrote:
>> A new field perm_groups is added in netlink_sock to store the protocol's
>> multicast group access permissions. This is to allow for a more fine
>> grained access control than just at the protocol level. These
>> permissions can be supplied by the protocol via the netlink_kernel_cfg.
>> A new function netlink_multicast_allowed() is added, which checks if
>> the protocol's multicast group has non-root access before allowing bind.
>
> Is there a reason this is better than implementing .bind
> in the connector family and filtering there?
Are you suggesting adding something like a new struct proto_ops for the connector family? I have not looked into that, though that would seem like a lot of work, and also I have not seen any infrastructure to call into a protocol-specific bind from netlink bind?
* Re: [PATCH v3 6/7] netlink: Add multicast group level permissions
2023-03-31 17:00 ` Anjali Kulkarni
@ 2023-03-31 17:24 ` Jakub Kicinski
2023-03-31 17:48 ` Anjali Kulkarni
0 siblings, 1 reply; 14+ messages in thread
From: Jakub Kicinski @ 2023-03-31 17:24 UTC (permalink / raw)
To: Anjali Kulkarni
Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net,
ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org,
socketcan@hartkopp.net, petrm@nvidia.com,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org
On Fri, 31 Mar 2023 17:00:27 +0000 Anjali Kulkarni wrote:
> > Is there a reason this is better than implementing .bind
> > in the connector family and filtering there?
>
> Are you suggesting adding something like a new struct proto_ops for
> the connector family? I have not looked into that, though that would
> seem like a lot of work, and also I have not seen any infra structure
> to call into protocol specific bind from netlink bind?
Where you're adding a release callback in patch 2 - there's a bind
callback already three lines above. What am I missing?
* Re: [PATCH v3 6/7] netlink: Add multicast group level permissions
2023-03-31 17:24 ` Jakub Kicinski
@ 2023-03-31 17:48 ` Anjali Kulkarni
2023-03-31 18:13 ` Jakub Kicinski
0 siblings, 1 reply; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-31 17:48 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net,
ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org,
socketcan@hartkopp.net, petrm@nvidia.com,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org
> On Mar 31, 2023, at 10:24 AM, Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Fri, 31 Mar 2023 17:00:27 +0000 Anjali Kulkarni wrote:
>>> Is there a reason this is better than implementing .bind
>>> in the connector family and filtering there?
>>
>> Are you suggesting adding something like a new struct proto_ops for
>> the connector family? I have not looked into that, though that would
>> seem like a lot of work, and also I have not seen any infra structure
>> to call into protocol specific bind from netlink bind?
>
> Where you're adding a release callback in patch 2 - there's a bind
> callback already three lines above. What am I missing?
Ah yes, that one is actually meant to be used for adding (bind) and deleting (unbind) multicast group memberships. It is also called from setsockopt(), so I think just checking for root access permission changes the semantics of what it is meant to be used for? Besides, we would need to change some of the ordering there (the permission check & the netlink_bind call), and changing it for all users of netlink might not be a good idea…?
Anjali
* Re: [PATCH v3 6/7] netlink: Add multicast group level permissions
2023-03-31 17:48 ` Anjali Kulkarni
@ 2023-03-31 18:13 ` Jakub Kicinski
2023-03-31 18:38 ` Anjali Kulkarni
0 siblings, 1 reply; 14+ messages in thread
From: Jakub Kicinski @ 2023-03-31 18:13 UTC (permalink / raw)
To: Anjali Kulkarni
Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net,
ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org,
socketcan@hartkopp.net, petrm@nvidia.com,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org
On Fri, 31 Mar 2023 17:48:18 +0000 Anjali Kulkarni wrote:
> > On Mar 31, 2023, at 10:24 AM, Jakub Kicinski <kuba@kernel.org> wrote:
> > On Fri, 31 Mar 2023 17:00:27 +0000 Anjali Kulkarni wrote:
> >> Are you suggesting adding something like a new struct proto_ops for
> >> the connector family? I have not looked into that, though that would
> >> seem like a lot of work, and also I have not seen any infra structure
> >> to call into protocol specific bind from netlink bind?
> >
> > Where you're adding a release callback in patch 2 - there's a bind
> > callback already three lines above. What am I missing?
> Ah yes, that one is actually meant to be used for adding(bind) and
> deleting(unbind) multicast group memberships. So it is also called
> from setsockopt() - so I think just checking for root access
> permission changes the semantics of what it is meant to be used for?
> Besides we would need to change some of that ordering there (check
> for permissions & netlink_bind call) and changing it for all users of
> netlink might not be a good idea…?
AFAICT genetlink uses that callback in the way I'm suggesting already
(see genl_bind()) so if you can spot a bug or a problem - we need to
fix it :S
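For orientation, the shape being suggested is roughly the sketch below.
This is purely illustrative, not code from this series: cn_bind is a
hypothetical callback, and it assumes the connector would also set
NL_CFG_F_NONROOT_RECV (as genetlink does), so that the per-group check in
the bind callback is what actually gates non-root subscribers.

	/* Hypothetical per-group check in the connector, like genl_bind() */
	static int cn_bind(struct net *net, int group)
	{
		if (ns_capable(net->user_ns, CAP_NET_ADMIN))
			return 0;

		/* group is the 1-based group number passed by netlink_bind();
		 * the proc events group, CN_IDX_PROC, ends up as 1 here.
		 */
		if (group == CN_IDX_PROC)
			return 0;

		return -EPERM;
	}

	struct netlink_kernel_cfg cfg = {
		.groups  = CN_NETLINK_USERS + 0xf,
		.input   = cn_rx_skb,
		.flags   = NL_CFG_F_NONROOT_RECV,
		.bind    = cn_bind,
		.release = cn_release,
	};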
* Re: [PATCH v3 6/7] netlink: Add multicast group level permissions
2023-03-31 18:13 ` Jakub Kicinski
@ 2023-03-31 18:38 ` Anjali Kulkarni
0 siblings, 0 replies; 14+ messages in thread
From: Anjali Kulkarni @ 2023-03-31 18:38 UTC (permalink / raw)
To: Jakub Kicinski
Cc: davem@davemloft.net, edumazet@google.com, pabeni@redhat.com,
zbr@ioremap.net, brauner@kernel.org, johannes@sipsolutions.net,
ecree.xilinx@gmail.com, leon@kernel.org, keescook@chromium.org,
socketcan@hartkopp.net, petrm@nvidia.com,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org
> On Mar 31, 2023, at 11:13 AM, Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Fri, 31 Mar 2023 17:48:18 +0000 Anjali Kulkarni wrote:
>>> On Mar 31, 2023, at 10:24 AM, Jakub Kicinski <kuba@kernel.org> wrote:
>>> On Fri, 31 Mar 2023 17:00:27 +0000 Anjali Kulkarni wrote:
>>>> Are you suggesting adding something like a new struct proto_ops for
>>>> the connector family? I have not looked into that, though that would
>>>> seem like a lot of work, and also I have not seen any infra structure
>>>> to call into protocol specific bind from netlink bind?
>>>
>>> Where you're adding a release callback in patch 2 - there's a bind
>>> callback already three lines above. What am I missing?
>> Ah yes, that one is actually meant to be used for adding(bind) and
>> deleting(unbind) multicast group memberships. So it is also called
>> from setsockopt() - so I think just checking for root access
>> permission changes the semantics of what it is meant to be used for?
>> Besides we would need to change some of that ordering there (check
>> for permissions & netlink_bind call) and changing it for all users of
>> netlink might not be a good idea…?
>
> AFAICT genetlink uses that callback in the way I'm suggesting already
> (see genl_bind()) so if you can spot a bug or a problem - we need to
> fix it :S
Ok, I will take a look and make the change.
Anjali