From: Shailabh Nagar <nagar@watson.ibm.com>
To: Andrew Morton <akpm@osdl.org>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
Jay Lan <jlan@sgi.com>, Chris Sturtivant <csturtiv@sgi.com>,
Paul Jackson <pj@sgi.com>, Balbir Singh <balbir@in.ibm.com>,
Chandra Seetharaman <sekharan@us.ibm.com>,
Jamal <hadi@cyberus.ca>, netdev <netdev@vger.kernel.org>
Subject: [Patch 6/6] per task delay accounting taskstats interface: fix clone skbs for each listener
Date: Tue, 11 Jul 2006 00:36:39 -0400 [thread overview]
Message-ID: <1152592599.14142.136.camel@localhost.localdomain> (raw)
In-Reply-To: <1152591838.14142.114.camel@localhost.localdomain>
Use a cloned sk_buff for each netlink message sent to multiple listeners.

Earlier, the same skb, representing one netlink message, was erroneously
reused for the genlmsg_unicast() calls (effectively netlink_unicast())
made to each listener on the per-cpu list of listeners. Since
netlink_unicast() frees the skb passed to it regardless of whether the
send succeeds, reusing the same skb is a use-after-free.
Thanks to Chandra Seetharaman for discovering this bug.
Signed-off-by: Shailabh Nagar <nagar@watson.ibm.com>
Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
kernel/taskstats.c | 13 ++++++++++++-
1 file changed, 12 insertions(+), 1 deletion(-)
Index: linux-2.6.18-rc1/kernel/taskstats.c
===================================================================
--- linux-2.6.18-rc1.orig/kernel/taskstats.c 2006-07-10 23:44:56.000000000 -0400
+++ linux-2.6.18-rc1/kernel/taskstats.c 2006-07-10 23:45:15.000000000 -0400
@@ -125,6 +125,7 @@ static int send_cpu_listeners(struct sk_
struct genlmsghdr *genlhdr = nlmsg_data((struct nlmsghdr *)skb->data);
struct listener_list *listeners;
struct listener *s, *tmp;
+ struct sk_buff *skb_next, *skb_cur = skb;
void *reply = genlmsg_data(genlhdr);
int rc, ret;
@@ -138,12 +139,22 @@ static int send_cpu_listeners(struct sk_
listeners = &per_cpu(listener_array, cpu);
down_write(&listeners->sem);
list_for_each_entry_safe(s, tmp, &listeners->list, list) {
- ret = genlmsg_unicast(skb, s->pid);
+ skb_next = NULL;
+ if (!list_islast(&s->list, &listeners->list)) {
+ skb_next = skb_clone(skb_cur, GFP_KERNEL);
+ if (!skb_next) {
+ nlmsg_free(skb_cur);
+ rc = -ENOMEM;
+ break;
+ }
+ }
+ ret = genlmsg_unicast(skb_cur, s->pid);
if (ret == -ECONNREFUSED) {
list_del(&s->list);
kfree(s);
rc = ret;
}
+ skb_cur = skb_next;
}
up_write(&listeners->sem);