* [PATCH 0/20] mib: add struct net to IP, TCP and NET macros
@ 2008-07-16 8:05 Pavel Emelyanov
2008-07-16 8:08 ` [PATCH 1/20] ipv4: prepare net initialization for IP accounting Pavel Emelyanov
` (20 more replies)
0 siblings, 21 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:05 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
This is actually the last heavily scattered set: it changes many calls
to the MIB accounting macros. What is left requires less patching.
David, I hope this (and a finalizing set, which I will prepare after this
one is accepted) can go into the current 2.6.27 merge window, but it is
OK with me if you decide to delay it until 2.6.28.
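The conversion is transitional: each macro grows a struct net argument,
but the statistics are still the global ones, so for now the argument is
only referenced, not used. A minimal sketch of the pattern (the real
hunks are in the individual patches):

	/* before */
	#define IP_INC_STATS(field)	SNMP_INC_STATS(ip_statistics, field)

	/* after: net is accepted but deliberately ignored for now;
	 * (void)net keeps the argument referenced, presumably so a local
	 * computed only for the macro does not become an unused-variable
	 * warning */
	#define IP_INC_STATS(net, field) \
		do { (void)net; SNMP_INC_STATS(ip_statistics, field); } while (0)

Switching the macros to per-namespace counters is left to the finalizing
set mentioned above.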
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
* [PATCH 1/20] ipv4: prepare net initialization for IP accounting
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
@ 2008-07-16 8:08 ` Pavel Emelyanov
2008-07-16 8:09 ` [PATCH 2/20] mib: drop unused IP_INC_STATS_USER Pavel Emelyanov
` (19 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:08 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Some places that deal with IP statistics already have somewhere to
get a struct net from, but use it inline, without declaring
a separate variable on the stack.
So, save this net on the stack for the upcoming IP_XXX_STATS macros.
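Condensed, the change in each hunk below has this shape (sketch only):

	/* before: the net is computed inline for a single call */
	if (ip_route_output_flow(sock_net(sk), &rt, &fl, sk, 0)) {
		IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);

	/* after: the net sits in a local, ready to be handed to
	 * IP_INC_STATS_BH() once that macro takes a net argument */
	struct net *net = sock_net(sk);
	...
	if (ip_route_output_flow(net, &rt, &fl, sk, 0)) {
		IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);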
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
net/ipv4/inet_connection_sock.c | 3 ++-
net/ipv4/ip_fragment.c | 6 +++---
net/ipv4/udp.c | 4 +++-
3 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 5bbf000..8338e10 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -338,9 +338,10 @@ struct dst_entry* inet_csk_route_req(struct sock *sk,
.uli_u = { .ports =
{ .sport = inet_sk(sk)->sport,
.dport = ireq->rmt_port } } };
+ struct net *net = sock_net(sk);
security_req_classify_flow(req, &fl);
- if (ip_route_output_flow(sock_net(sk), &rt, &fl, sk, 0)) {
+ if (ip_route_output_flow(net, &rt, &fl, sk, 0)) {
IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
return NULL;
}
diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
index fbd5804..e23be5b 100644
--- a/net/ipv4/ip_fragment.c
+++ b/net/ipv4/ip_fragment.c
@@ -187,8 +187,10 @@ static void ip_evictor(struct net *net)
static void ip_expire(unsigned long arg)
{
struct ipq *qp;
+ struct net *net;
qp = container_of((struct inet_frag_queue *) arg, struct ipq, q);
+ net = container_of(qp->q.net, struct net, ipv4.frags);
spin_lock(&qp->q.lock);
@@ -202,9 +204,7 @@ static void ip_expire(unsigned long arg)
if ((qp->q.last_in & INET_FRAG_FIRST_IN) && qp->q.fragments != NULL) {
struct sk_buff *head = qp->q.fragments;
- struct net *net;
- net = container_of(qp->q.net, struct net, ipv4.frags);
/* Send an ICMP "Fragment Reassembly Timeout" message. */
if ((head->dev = dev_get_by_index(net, qp->iif)) != NULL) {
icmp_send(head, ICMP_TIME_EXCEEDED, ICMP_EXC_FRAGTIME, 0);
@@ -570,9 +570,9 @@ int ip_defrag(struct sk_buff *skb, u32 user)
struct ipq *qp;
struct net *net;
+ net = skb->dev ? dev_net(skb->dev) : dev_net(skb->dst->dev);
IP_INC_STATS_BH(IPSTATS_MIB_REASMREQDS);
- net = skb->dev ? dev_net(skb->dev) : dev_net(skb->dst->dev);
/* Start by cleaning up the memory. */
if (atomic_read(&net->ipv4.frags.mem) > net->ipv4.frags.high_thresh)
ip_evictor(net);
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index fdd4faa..048ef57 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -656,8 +656,10 @@ int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
.uli_u = { .ports =
{ .sport = inet->sport,
.dport = dport } } };
+ struct net *net = sock_net(sk);
+
security_sk_classify_flow(sk, &fl);
- err = ip_route_output_flow(sock_net(sk), &rt, &fl, sk, 1);
+ err = ip_route_output_flow(net, &rt, &fl, sk, 1);
if (err) {
if (err == -ENETUNREACH)
IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
--
1.5.5.1
* [PATCH 2/20] mib: drop unused IP_INC_STATS_USER
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
2008-07-16 8:08 ` [PATCH 1/20] ipv4: prepare net initialization for IP accounting Pavel Emelyanov
@ 2008-07-16 8:09 ` Pavel Emelyanov
2008-07-16 8:10 ` [PATCH 3/20] mib: add net to IP_INC_STATS Pavel Emelyanov
` (18 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:09 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index 3b40bc2..673ecdb 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -159,7 +159,6 @@ extern struct ipv4_config ipv4_config;
DECLARE_SNMP_STAT(struct ipstats_mib, ip_statistics);
#define IP_INC_STATS(field) SNMP_INC_STATS(ip_statistics, field)
#define IP_INC_STATS_BH(field) SNMP_INC_STATS_BH(ip_statistics, field)
-#define IP_INC_STATS_USER(field) SNMP_INC_STATS_USER(ip_statistics, field)
#define IP_ADD_STATS_BH(field, val) SNMP_ADD_STATS_BH(ip_statistics, field, val)
DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
#define NET_INC_STATS(field) SNMP_INC_STATS(net_statistics, field)
--
1.5.5.1
* [PATCH 3/20] mib: add net to IP_INC_STATS
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
2008-07-16 8:08 ` [PATCH 1/20] ipv4: prepare net initialization for IP accounting Pavel Emelyanov
2008-07-16 8:09 ` [PATCH 2/20] mib: drop unused IP_INC_STATS_USER Pavel Emelyanov
@ 2008-07-16 8:10 ` Pavel Emelyanov
2008-07-16 8:12 ` [PATCH 4/20] mib: add net to IP_INC_STATS_BH Pavel Emelyanov
` (17 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:10 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
All the callers already have either the net itself or a place to get
it from.
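The typical sources of the net at these call sites are (sketch; the
hunks below show the concrete ones):

	IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTDISCARDS);         /* a socket is at hand */
	IP_INC_STATS(dev_net(dev), IPSTATS_MIB_OUTREQUESTS);         /* only a device is known */
	IP_INC_STATS(dev_net(rt->u.dst.dev), IPSTATS_MIB_FRAGFAILS); /* via the route's device */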
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 2 +-
net/ipv4/ip_forward.c | 2 +-
net/ipv4/ip_output.c | 30 +++++++++++++++---------------
net/ipv4/raw.c | 2 +-
4 files changed, 18 insertions(+), 18 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index 673ecdb..b9aaa32 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -157,7 +157,7 @@ struct ipv4_config
extern struct ipv4_config ipv4_config;
DECLARE_SNMP_STAT(struct ipstats_mib, ip_statistics);
-#define IP_INC_STATS(field) SNMP_INC_STATS(ip_statistics, field)
+#define IP_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(ip_statistics, field); } while (0)
#define IP_INC_STATS_BH(field) SNMP_INC_STATS_BH(ip_statistics, field)
#define IP_ADD_STATS_BH(field, val) SNMP_ADD_STATS_BH(ip_statistics, field, val)
DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
index da14725..7f78a5a 100644
--- a/net/ipv4/ip_forward.c
+++ b/net/ipv4/ip_forward.c
@@ -88,7 +88,7 @@ int ip_forward(struct sk_buff *skb)
if (unlikely(skb->len > dst_mtu(&rt->u.dst) && !skb_is_gso(skb) &&
(ip_hdr(skb)->frag_off & htons(IP_DF))) && !skb->local_df) {
- IP_INC_STATS(IPSTATS_MIB_FRAGFAILS);
+ IP_INC_STATS(dev_net(rt->u.dst.dev), IPSTATS_MIB_FRAGFAILS);
icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
htonl(dst_mtu(&rt->u.dst)));
goto drop;
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index f003186..465544f 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -182,9 +182,9 @@ static inline int ip_finish_output2(struct sk_buff *skb)
unsigned int hh_len = LL_RESERVED_SPACE(dev);
if (rt->rt_type == RTN_MULTICAST)
- IP_INC_STATS(IPSTATS_MIB_OUTMCASTPKTS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_OUTMCASTPKTS);
else if (rt->rt_type == RTN_BROADCAST)
- IP_INC_STATS(IPSTATS_MIB_OUTBCASTPKTS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_OUTBCASTPKTS);
/* Be paranoid, rather than too clever. */
if (unlikely(skb_headroom(skb) < hh_len && dev->header_ops)) {
@@ -244,7 +244,7 @@ int ip_mc_output(struct sk_buff *skb)
/*
* If the indicated interface is up and running, send the packet.
*/
- IP_INC_STATS(IPSTATS_MIB_OUTREQUESTS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_OUTREQUESTS);
skb->dev = dev;
skb->protocol = htons(ETH_P_IP);
@@ -298,7 +298,7 @@ int ip_output(struct sk_buff *skb)
{
struct net_device *dev = skb->dst->dev;
- IP_INC_STATS(IPSTATS_MIB_OUTREQUESTS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_OUTREQUESTS);
skb->dev = dev;
skb->protocol = htons(ETH_P_IP);
@@ -389,7 +389,7 @@ packet_routed:
return ip_local_out(skb);
no_route:
- IP_INC_STATS(IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
kfree_skb(skb);
return -EHOSTUNREACH;
}
@@ -451,7 +451,7 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))
iph = ip_hdr(skb);
if (unlikely((iph->frag_off & htons(IP_DF)) && !skb->local_df)) {
- IP_INC_STATS(IPSTATS_MIB_FRAGFAILS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGFAILS);
icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
htonl(ip_skb_dst_mtu(skb)));
kfree_skb(skb);
@@ -542,7 +542,7 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))
err = output(skb);
if (!err)
- IP_INC_STATS(IPSTATS_MIB_FRAGCREATES);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGCREATES);
if (err || !frag)
break;
@@ -552,7 +552,7 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))
}
if (err == 0) {
- IP_INC_STATS(IPSTATS_MIB_FRAGOKS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGOKS);
return 0;
}
@@ -561,7 +561,7 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff*))
kfree_skb(frag);
frag = skb;
}
- IP_INC_STATS(IPSTATS_MIB_FRAGFAILS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGFAILS);
return err;
}
@@ -673,15 +673,15 @@ slow_path:
if (err)
goto fail;
- IP_INC_STATS(IPSTATS_MIB_FRAGCREATES);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGCREATES);
}
kfree_skb(skb);
- IP_INC_STATS(IPSTATS_MIB_FRAGOKS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGOKS);
return err;
fail:
kfree_skb(skb);
- IP_INC_STATS(IPSTATS_MIB_FRAGFAILS);
+ IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGFAILS);
return err;
}
@@ -1047,7 +1047,7 @@ alloc_new_skb:
error:
inet->cork.length -= length;
- IP_INC_STATS(IPSTATS_MIB_OUTDISCARDS);
+ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTDISCARDS);
return err;
}
@@ -1189,7 +1189,7 @@ ssize_t ip_append_page(struct sock *sk, struct page *page,
error:
inet->cork.length -= size;
- IP_INC_STATS(IPSTATS_MIB_OUTDISCARDS);
+ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_OUTDISCARDS);
return err;
}
@@ -1298,7 +1298,7 @@ out:
return err;
error:
- IP_INC_STATS(IPSTATS_MIB_OUTDISCARDS);
+ IP_INC_STATS(net, IPSTATS_MIB_OUTDISCARDS);
goto out;
}
diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
index 7f39ea4..cd97574 100644
--- a/net/ipv4/raw.c
+++ b/net/ipv4/raw.c
@@ -385,7 +385,7 @@ error_fault:
err = -EFAULT;
kfree_skb(skb);
error:
- IP_INC_STATS(IPSTATS_MIB_OUTDISCARDS);
+ IP_INC_STATS(net, IPSTATS_MIB_OUTDISCARDS);
return err;
}
--
1.5.5.1
* [PATCH 4/20] mib: add net to IP_INC_STATS_BH
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (2 preceding siblings ...)
2008-07-16 8:10 ` [PATCH 3/20] mib: add net to IP_INC_STATS Pavel Emelyanov
@ 2008-07-16 8:12 ` Pavel Emelyanov
2008-07-16 8:14 ` [PATCH 5/20] mib: add net to IP_ADD_STATS_BH Pavel Emelyanov
` (16 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:12 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 2 +-
net/dccp/ipv4.c | 2 +-
net/ipv4/datagram.c | 2 +-
net/ipv4/inet_connection_sock.c | 4 ++--
net/ipv4/ip_forward.c | 4 ++--
net/ipv4/ip_fragment.c | 17 ++++++++++-------
net/ipv4/ip_input.c | 30 ++++++++++++++++--------------
net/ipv4/ipmr.c | 4 ++--
net/ipv4/route.c | 3 ++-
net/ipv4/tcp_ipv4.c | 2 +-
net/ipv4/udp.c | 2 +-
net/sctp/output.c | 3 ++-
12 files changed, 41 insertions(+), 34 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index b9aaa32..9973ce0 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -158,7 +158,7 @@ struct ipv4_config
extern struct ipv4_config ipv4_config;
DECLARE_SNMP_STAT(struct ipstats_mib, ip_statistics);
#define IP_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(ip_statistics, field); } while (0)
-#define IP_INC_STATS_BH(field) SNMP_INC_STATS_BH(ip_statistics, field)
+#define IP_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(ip_statistics, field); } while (0)
#define IP_ADD_STATS_BH(field, val) SNMP_ADD_STATS_BH(ip_statistics, field, val)
DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
#define NET_INC_STATS(field) SNMP_INC_STATS(net_statistics, field)
diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
index 2b03c47..9f760a1 100644
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -465,7 +465,7 @@ static struct dst_entry* dccp_v4_route_skb(struct net *net, struct sock *sk,
security_skb_classify_flow(skb, &fl);
if (ip_route_output_flow(net, &rt, &fl, sk, 0)) {
- IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
return NULL;
}
diff --git a/net/ipv4/datagram.c b/net/ipv4/datagram.c
index 0c0c73f..5e6c5a0 100644
--- a/net/ipv4/datagram.c
+++ b/net/ipv4/datagram.c
@@ -52,7 +52,7 @@ int ip4_datagram_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
inet->sport, usin->sin_port, sk, 1);
if (err) {
if (err == -ENETUNREACH)
- IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS_BH(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
return err;
}
diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
index 8338e10..bb81c95 100644
--- a/net/ipv4/inet_connection_sock.c
+++ b/net/ipv4/inet_connection_sock.c
@@ -342,12 +342,12 @@ struct dst_entry* inet_csk_route_req(struct sock *sk,
security_req_classify_flow(req, &fl);
if (ip_route_output_flow(net, &rt, &fl, sk, 0)) {
- IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
return NULL;
}
if (opt && opt->is_strictroute && rt->rt_dst != rt->rt_gateway) {
ip_rt_put(rt);
- IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
return NULL;
}
return &rt->u.dst;
diff --git a/net/ipv4/ip_forward.c b/net/ipv4/ip_forward.c
index 7f78a5a..450016b 100644
--- a/net/ipv4/ip_forward.c
+++ b/net/ipv4/ip_forward.c
@@ -42,7 +42,7 @@ static int ip_forward_finish(struct sk_buff *skb)
{
struct ip_options * opt = &(IPCB(skb)->opt);
- IP_INC_STATS_BH(IPSTATS_MIB_OUTFORWDATAGRAMS);
+ IP_INC_STATS_BH(dev_net(skb->dst->dev), IPSTATS_MIB_OUTFORWDATAGRAMS);
if (unlikely(opt->optlen))
ip_forward_options(skb);
@@ -123,7 +123,7 @@ sr_failed:
too_many_hops:
/* Tell the sender its packet died... */
- IP_INC_STATS_BH(IPSTATS_MIB_INHDRERRORS);
+ IP_INC_STATS_BH(dev_net(skb->dst->dev), IPSTATS_MIB_INHDRERRORS);
icmp_send(skb, ICMP_TIME_EXCEEDED, ICMP_EXC_TTL, 0);
drop:
kfree_skb(skb);
diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
index e23be5b..65364c0 100644
--- a/net/ipv4/ip_fragment.c
+++ b/net/ipv4/ip_fragment.c
@@ -199,8 +199,8 @@ static void ip_expire(unsigned long arg)
ipq_kill(qp);
- IP_INC_STATS_BH(IPSTATS_MIB_REASMTIMEOUT);
- IP_INC_STATS_BH(IPSTATS_MIB_REASMFAILS);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_REASMTIMEOUT);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_REASMFAILS);
if ((qp->q.last_in & INET_FRAG_FIRST_IN) && qp->q.fragments != NULL) {
struct sk_buff *head = qp->q.fragments;
@@ -261,7 +261,10 @@ static inline int ip_frag_too_far(struct ipq *qp)
rc = qp->q.fragments && (end - start) > max;
if (rc) {
- IP_INC_STATS_BH(IPSTATS_MIB_REASMFAILS);
+ struct net *net;
+
+ net = container_of(qp->q.net, struct net, ipv4.frags);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_REASMFAILS);
}
return rc;
@@ -545,7 +548,7 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *prev,
iph = ip_hdr(head);
iph->frag_off = 0;
iph->tot_len = htons(len);
- IP_INC_STATS_BH(IPSTATS_MIB_REASMOKS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_REASMOKS);
qp->q.fragments = NULL;
return 0;
@@ -560,7 +563,7 @@ out_oversize:
"Oversized IP packet from " NIPQUAD_FMT ".\n",
NIPQUAD(qp->saddr));
out_fail:
- IP_INC_STATS_BH(IPSTATS_MIB_REASMFAILS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_REASMFAILS);
return err;
}
@@ -571,7 +574,7 @@ int ip_defrag(struct sk_buff *skb, u32 user)
struct net *net;
net = skb->dev ? dev_net(skb->dev) : dev_net(skb->dst->dev);
- IP_INC_STATS_BH(IPSTATS_MIB_REASMREQDS);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_REASMREQDS);
/* Start by cleaning up the memory. */
if (atomic_read(&net->ipv4.frags.mem) > net->ipv4.frags.high_thresh)
@@ -590,7 +593,7 @@ int ip_defrag(struct sk_buff *skb, u32 user)
return ret;
}
- IP_INC_STATS_BH(IPSTATS_MIB_REASMFAILS);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_REASMFAILS);
kfree_skb(skb);
return -ENOMEM;
}
diff --git a/net/ipv4/ip_input.c b/net/ipv4/ip_input.c
index 7c26428..043f640 100644
--- a/net/ipv4/ip_input.c
+++ b/net/ipv4/ip_input.c
@@ -230,16 +230,16 @@ static int ip_local_deliver_finish(struct sk_buff *skb)
protocol = -ret;
goto resubmit;
}
- IP_INC_STATS_BH(IPSTATS_MIB_INDELIVERS);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_INDELIVERS);
} else {
if (!raw) {
if (xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
- IP_INC_STATS_BH(IPSTATS_MIB_INUNKNOWNPROTOS);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_INUNKNOWNPROTOS);
icmp_send(skb, ICMP_DEST_UNREACH,
ICMP_PROT_UNREACH, 0);
}
} else
- IP_INC_STATS_BH(IPSTATS_MIB_INDELIVERS);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_INDELIVERS);
kfree_skb(skb);
}
}
@@ -281,7 +281,7 @@ static inline int ip_rcv_options(struct sk_buff *skb)
--ANK (980813)
*/
if (skb_cow(skb, skb_headroom(skb))) {
- IP_INC_STATS_BH(IPSTATS_MIB_INDISCARDS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INDISCARDS);
goto drop;
}
@@ -290,7 +290,7 @@ static inline int ip_rcv_options(struct sk_buff *skb)
opt->optlen = iph->ihl*4 - sizeof(struct iphdr);
if (ip_options_compile(dev_net(dev), opt, skb)) {
- IP_INC_STATS_BH(IPSTATS_MIB_INHDRERRORS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INHDRERRORS);
goto drop;
}
@@ -334,9 +334,11 @@ static int ip_rcv_finish(struct sk_buff *skb)
skb->dev);
if (unlikely(err)) {
if (err == -EHOSTUNREACH)
- IP_INC_STATS_BH(IPSTATS_MIB_INADDRERRORS);
+ IP_INC_STATS_BH(dev_net(skb->dev),
+ IPSTATS_MIB_INADDRERRORS);
else if (err == -ENETUNREACH)
- IP_INC_STATS_BH(IPSTATS_MIB_INNOROUTES);
+ IP_INC_STATS_BH(dev_net(skb->dev),
+ IPSTATS_MIB_INNOROUTES);
goto drop;
}
}
@@ -357,9 +359,9 @@ static int ip_rcv_finish(struct sk_buff *skb)
rt = skb->rtable;
if (rt->rt_type == RTN_MULTICAST)
- IP_INC_STATS_BH(IPSTATS_MIB_INMCASTPKTS);
+ IP_INC_STATS_BH(dev_net(rt->u.dst.dev), IPSTATS_MIB_INMCASTPKTS);
else if (rt->rt_type == RTN_BROADCAST)
- IP_INC_STATS_BH(IPSTATS_MIB_INBCASTPKTS);
+ IP_INC_STATS_BH(dev_net(rt->u.dst.dev), IPSTATS_MIB_INBCASTPKTS);
return dst_input(skb);
@@ -382,10 +384,10 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
if (skb->pkt_type == PACKET_OTHERHOST)
goto drop;
- IP_INC_STATS_BH(IPSTATS_MIB_INRECEIVES);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INRECEIVES);
if ((skb = skb_share_check(skb, GFP_ATOMIC)) == NULL) {
- IP_INC_STATS_BH(IPSTATS_MIB_INDISCARDS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INDISCARDS);
goto out;
}
@@ -418,7 +420,7 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
len = ntohs(iph->tot_len);
if (skb->len < len) {
- IP_INC_STATS_BH(IPSTATS_MIB_INTRUNCATEDPKTS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INTRUNCATEDPKTS);
goto drop;
} else if (len < (iph->ihl*4))
goto inhdr_error;
@@ -428,7 +430,7 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
* Note this now means skb->len holds ntohs(iph->tot_len).
*/
if (pskb_trim_rcsum(skb, len)) {
- IP_INC_STATS_BH(IPSTATS_MIB_INDISCARDS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INDISCARDS);
goto drop;
}
@@ -439,7 +441,7 @@ int ip_rcv(struct sk_buff *skb, struct net_device *dev, struct packet_type *pt,
ip_rcv_finish);
inhdr_error:
- IP_INC_STATS_BH(IPSTATS_MIB_INHDRERRORS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_INHDRERRORS);
drop:
kfree_skb(skb);
out:
diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
index c9ab47b..033c712 100644
--- a/net/ipv4/ipmr.c
+++ b/net/ipv4/ipmr.c
@@ -1178,7 +1178,7 @@ static inline int ipmr_forward_finish(struct sk_buff *skb)
{
struct ip_options * opt = &(IPCB(skb)->opt);
- IP_INC_STATS_BH(IPSTATS_MIB_OUTFORWDATAGRAMS);
+ IP_INC_STATS_BH(dev_net(skb->dst->dev), IPSTATS_MIB_OUTFORWDATAGRAMS);
if (unlikely(opt->optlen))
ip_forward_options(skb);
@@ -1241,7 +1241,7 @@ static void ipmr_queue_xmit(struct sk_buff *skb, struct mfc_cache *c, int vifi)
to blackhole.
*/
- IP_INC_STATS_BH(IPSTATS_MIB_FRAGFAILS);
+ IP_INC_STATS_BH(dev_net(dev), IPSTATS_MIB_FRAGFAILS);
ip_rt_put(rt);
goto out_free;
}
diff --git a/net/ipv4/route.c b/net/ipv4/route.c
index 79c1e74..e4ab0ac 100644
--- a/net/ipv4/route.c
+++ b/net/ipv4/route.c
@@ -1429,7 +1429,8 @@ static int ip_error(struct sk_buff *skb)
break;
case ENETUNREACH:
code = ICMP_NET_UNREACH;
- IP_INC_STATS_BH(IPSTATS_MIB_INNOROUTES);
+ IP_INC_STATS_BH(dev_net(rt->u.dst.dev),
+ IPSTATS_MIB_INNOROUTES);
break;
case EACCES:
code = ICMP_PKT_FILTERED;
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 0fefd44..7797d52 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -175,7 +175,7 @@ int tcp_v4_connect(struct sock *sk, struct sockaddr *uaddr, int addr_len)
inet->sport, usin->sin_port, sk, 1);
if (tmp < 0) {
if (tmp == -ENETUNREACH)
- IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS_BH(sock_net(sk), IPSTATS_MIB_OUTNOROUTES);
return tmp;
}
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 048ef57..1560d11 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -662,7 +662,7 @@ int udp_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
err = ip_route_output_flow(net, &rt, &fl, sk, 1);
if (err) {
if (err == -ENETUNREACH)
- IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS_BH(net, IPSTATS_MIB_OUTNOROUTES);
goto out;
}
diff --git a/net/sctp/output.c b/net/sctp/output.c
index abcd00d..9a63a3f 100644
--- a/net/sctp/output.c
+++ b/net/sctp/output.c
@@ -50,6 +50,7 @@
#include <linux/init.h>
#include <net/inet_ecn.h>
#include <net/icmp.h>
+#include <net/net_namespace.h>
#ifndef TEST_FRAME
#include <net/tcp.h>
@@ -595,7 +596,7 @@ out:
return err;
no_route:
kfree_skb(nskb);
- IP_INC_STATS_BH(IPSTATS_MIB_OUTNOROUTES);
+ IP_INC_STATS_BH(&init_net, IPSTATS_MIB_OUTNOROUTES);
/* FIXME: Returning the 'err' will effect all the associations
* associated with a socket, although only one of the paths of the
--
1.5.5.1
* [PATCH 5/20] mib: add net to IP_ADD_STATS_BH
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (3 preceding siblings ...)
2008-07-16 8:12 ` [PATCH 4/20] mib: add net to IP_INC_STATS_BH Pavel Emelyanov
@ 2008-07-16 8:14 ` Pavel Emelyanov
2008-07-16 8:16 ` [PATCH 6/20] inet: prepare struct net for TCP MIB accounting Pavel Emelyanov
` (15 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:14 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Very simple - only ip_evictor() (fragment eviction) requires such a change.
This patch finishes the IP_XXX_STATS patching.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 2 +-
net/ipv4/ip_fragment.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index 9973ce0..93d0e09 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -159,7 +159,7 @@ extern struct ipv4_config ipv4_config;
DECLARE_SNMP_STAT(struct ipstats_mib, ip_statistics);
#define IP_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(ip_statistics, field); } while (0)
#define IP_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(ip_statistics, field); } while (0)
-#define IP_ADD_STATS_BH(field, val) SNMP_ADD_STATS_BH(ip_statistics, field, val)
+#define IP_ADD_STATS_BH(net, field, val) SNMP_ADD_STATS_BH(ip_statistics, field, val)
DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
#define NET_INC_STATS(field) SNMP_INC_STATS(net_statistics, field)
#define NET_INC_STATS_BH(field) SNMP_INC_STATS_BH(net_statistics, field)
diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
index 65364c0..38d38f0 100644
--- a/net/ipv4/ip_fragment.c
+++ b/net/ipv4/ip_fragment.c
@@ -178,7 +178,7 @@ static void ip_evictor(struct net *net)
evicted = inet_frag_evictor(&net->ipv4.frags, &ip4_frags);
if (evicted)
- IP_ADD_STATS_BH(IPSTATS_MIB_REASMFAILS, evicted);
+ IP_ADD_STATS_BH(net, IPSTATS_MIB_REASMFAILS, evicted);
}
/*
--
1.5.5.1
* [PATCH 6/20] inet: prepare struct net for TCP MIB accounting
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (4 preceding siblings ...)
2008-07-16 8:14 ` [PATCH 5/20] mib: add net to IP_ADD_STATS_BH Pavel Emelyanov
@ 2008-07-16 8:16 ` Pavel Emelyanov
2008-07-16 8:19 ` [PATCH 8/20] tcp: add net to tcp_mib_init Pavel Emelyanov
` (14 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:16 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
This is the same as the first patch in the set, but it prepares the
net for TCP_XXX_STATS - saving the struct net on the stack where
required and possible.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
net/ipv4/tcp_ipv4.c | 10 +++++++---
net/ipv6/tcp_ipv6.c | 3 ++-
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 7797d52..db3bf9b 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -544,6 +544,7 @@ static void tcp_v4_send_reset(struct sock *sk, struct sk_buff *skb)
#ifdef CONFIG_TCP_MD5SIG
struct tcp_md5sig_key *key;
#endif
+ struct net *net;
/* Never send a reset in response to a reset. */
if (th->rst)
@@ -594,7 +595,8 @@ static void tcp_v4_send_reset(struct sock *sk, struct sk_buff *skb)
sizeof(struct tcphdr), IPPROTO_TCP, 0);
arg.csumoffset = offsetof(struct tcphdr, check) / 2;
- ip_send_reply(dev_net(skb->dst->dev)->ipv4.tcp_sock, skb,
+ net = dev_net(skb->dst->dev);
+ ip_send_reply(net->ipv4.tcp_sock, skb,
&arg, arg.iov[0].iov_len);
TCP_INC_STATS_BH(TCP_MIB_OUTSEGS);
@@ -619,6 +621,7 @@ static void tcp_v4_send_ack(struct sk_buff *skb, u32 seq, u32 ack,
];
} rep;
struct ip_reply_arg arg;
+ struct net *net = dev_net(skb->dev);
memset(&rep.th, 0, sizeof(struct tcphdr));
memset(&arg, 0, sizeof(arg));
@@ -668,7 +671,7 @@ static void tcp_v4_send_ack(struct sk_buff *skb, u32 seq, u32 ack,
if (oif)
arg.bound_dev_if = oif;
- ip_send_reply(dev_net(skb->dev)->ipv4.tcp_sock, skb,
+ ip_send_reply(net->ipv4.tcp_sock, skb,
&arg, arg.iov[0].iov_len);
TCP_INC_STATS_BH(TCP_MIB_OUTSEGS);
@@ -1505,6 +1508,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
struct tcphdr *th;
struct sock *sk;
int ret;
+ struct net *net = dev_net(skb->dev);
if (skb->pkt_type != PACKET_HOST)
goto discard_it;
@@ -1539,7 +1543,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
TCP_SKB_CB(skb)->flags = iph->tos;
TCP_SKB_CB(skb)->sacked = 0;
- sk = __inet_lookup(dev_net(skb->dev), &tcp_hashinfo, iph->saddr,
+ sk = __inet_lookup(net, &tcp_hashinfo, iph->saddr,
th->source, iph->daddr, th->dest, inet_iif(skb));
if (!sk)
goto no_tcp_socket;
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 30dbab7..fc5f716 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1617,6 +1617,7 @@ static int tcp_v6_rcv(struct sk_buff *skb)
struct tcphdr *th;
struct sock *sk;
int ret;
+ struct net *net = dev_net(skb->dev);
if (skb->pkt_type != PACKET_HOST)
goto discard_it;
@@ -1648,7 +1649,7 @@ static int tcp_v6_rcv(struct sk_buff *skb)
TCP_SKB_CB(skb)->flags = ipv6_get_dsfield(ipv6_hdr(skb));
TCP_SKB_CB(skb)->sacked = 0;
- sk = __inet6_lookup(dev_net(skb->dev), &tcp_hashinfo,
+ sk = __inet6_lookup(net, &tcp_hashinfo,
&ipv6_hdr(skb)->saddr, th->source,
&ipv6_hdr(skb)->daddr, ntohs(th->dest),
inet6_iif(skb));
--
1.5.5.1
* [PATCH 8/20] tcp: add net to tcp_mib_init
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (5 preceding siblings ...)
2008-07-16 8:16 ` [PATCH 6/20] inet: prepare struct net for TCP MIB accounting Pavel Emelyanov
@ 2008-07-16 8:19 ` Pavel Emelyanov
2008-07-16 8:21 ` [PATCH 9/20] mib: add net to TCP_INC_STATS Pavel Emelyanov
` (13 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:19 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
This one sets the TCP MIBs after zeroing them, and thus requires
the net.
The single existing caller can use init_net (temporarily).
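Purely as an illustration of where this is heading (not part of this
series, and the names below are made up), a later per-net conversion
could call it from a pernet init hook:

	static int __net_init tcp_net_mib_init(struct net *net)
	{
		/* ...allocate the per-net TCP MIB here... */
		tcp_mib_init(net);
		return 0;
	}

	static struct pernet_operations tcp_net_mib_ops = {
		.init = tcp_net_mib_init,
	};

	/* somewhere in the ipv4 init path */
	register_pernet_subsys(&tcp_net_mib_ops);

Until then, init_ipv4_mibs() simply passes &init_net.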
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/tcp.h | 2 +-
net/ipv4/af_inet.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 048f8de..b9d0ba6 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1024,7 +1024,7 @@ static inline int tcp_paws_check(const struct tcp_options_received *rx_opt, int
#define TCP_CHECK_TIMER(sk) do { } while (0)
-static inline void tcp_mib_init(void)
+static inline void tcp_mib_init(struct net *net)
{
/* See RFC 2012 */
TCP_ADD_STATS_USER(TCP_MIB_RTOALGORITHM, 1);
diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
index dc41133..95a966d 100644
--- a/net/ipv4/af_inet.c
+++ b/net/ipv4/af_inet.c
@@ -1363,7 +1363,7 @@ static int __init init_ipv4_mibs(void)
sizeof(struct udp_mib)) < 0)
goto err_udplite_mib;
- tcp_mib_init();
+ tcp_mib_init(&init_net);
return 0;
--
1.5.5.1
* [PATCH 9/20] mib: add net to TCP_INC_STATS
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (6 preceding siblings ...)
2008-07-16 8:19 ` [PATCH 8/20] tcp: add net to tcp_mib_init Pavel Emelyanov
@ 2008-07-16 8:21 ` Pavel Emelyanov
2008-07-16 8:22 ` [PATCH 10/20] mib: add net to TCP_INC_STATS_BH Pavel Emelyanov
` (12 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:21 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Fortunately, (almost) all the TCP code has a sock to get the net from :)
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/tcp.h | 2 +-
net/ipv4/tcp.c | 4 ++--
net/ipv4/tcp_output.c | 10 +++++-----
3 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index b9d0ba6..1506515 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -266,7 +266,7 @@ static inline int tcp_too_many_orphans(struct sock *sk, int num)
extern struct proto tcp_prot;
DECLARE_SNMP_STAT(struct tcp_mib, tcp_statistics);
-#define TCP_INC_STATS(field) SNMP_INC_STATS(tcp_statistics, field)
+#define TCP_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(tcp_statistics, field); } while (0)
#define TCP_INC_STATS_BH(field) SNMP_INC_STATS_BH(tcp_statistics, field)
#define TCP_DEC_STATS(field) SNMP_DEC_STATS(tcp_statistics, field)
#define TCP_ADD_STATS_USER(field, val) SNMP_ADD_STATS_USER(tcp_statistics, field, val)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 56a133c..fe042c6 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1668,12 +1668,12 @@ void tcp_set_state(struct sock *sk, int state)
switch (state) {
case TCP_ESTABLISHED:
if (oldstate != TCP_ESTABLISHED)
- TCP_INC_STATS(TCP_MIB_CURRESTAB);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_CURRESTAB);
break;
case TCP_CLOSE:
if (oldstate == TCP_CLOSE_WAIT || oldstate == TCP_ESTABLISHED)
- TCP_INC_STATS(TCP_MIB_ESTABRESETS);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_ESTABRESETS);
sk->sk_prot->unhash(sk);
if (inet_csk(sk)->icsk_bind_hash &&
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index edef2af..c3b0da8 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -618,7 +618,7 @@ static int tcp_transmit_skb(struct sock *sk, struct sk_buff *skb, int clone_it,
tcp_event_data_sent(tp, skb, sk);
if (after(tcb->end_seq, tp->snd_nxt) || tcb->seq == tcb->end_seq)
- TCP_INC_STATS(TCP_MIB_OUTSEGS);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
err = icsk->icsk_af_ops->queue_xmit(skb, 0);
if (likely(err <= 0))
@@ -1910,7 +1910,7 @@ int tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb)
if (err == 0) {
/* Update global TCP statistics. */
- TCP_INC_STATS(TCP_MIB_RETRANSSEGS);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_RETRANSSEGS);
tp->total_retrans++;
@@ -2132,7 +2132,7 @@ void tcp_send_active_reset(struct sock *sk, gfp_t priority)
if (tcp_transmit_skb(sk, skb, 0, priority))
NET_INC_STATS(LINUX_MIB_TCPABORTFAILED);
- TCP_INC_STATS(TCP_MIB_OUTRSTS);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTRSTS);
}
/* WARNING: This routine must only be called when we have already sent
@@ -2258,7 +2258,7 @@ struct sk_buff *tcp_make_synack(struct sock *sk, struct dst_entry *dst,
);
th->doff = (tcp_header_size >> 2);
- TCP_INC_STATS(TCP_MIB_OUTSEGS);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTSEGS);
#ifdef CONFIG_TCP_MD5SIG
/* Okay, we have all we need - do the md5 hash if needed */
@@ -2367,7 +2367,7 @@ int tcp_connect(struct sock *sk)
*/
tp->snd_nxt = tp->write_seq;
tp->pushed_seq = tp->write_seq;
- TCP_INC_STATS(TCP_MIB_ACTIVEOPENS);
+ TCP_INC_STATS(sock_net(sk), TCP_MIB_ACTIVEOPENS);
/* Timer for repeating the SYN until an answer. */
inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
--
1.5.5.1
* [PATCH 10/20] mib: add net to TCP_INC_STATS_BH
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (7 preceding siblings ...)
2008-07-16 8:21 ` [PATCH 9/20] mib: add net to TCP_INC_STATS Pavel Emelyanov
@ 2008-07-16 8:22 ` Pavel Emelyanov
2008-07-16 8:23 ` [PATCH 11/20] mib: add net to TCP_DEC_STATS Pavel Emelyanov
` (11 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:22 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Same as before - the sock is always there to get the net from, and in
some places the net is already saved on the stack.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/tcp.h | 2 +-
net/ipv4/tcp.c | 2 +-
net/ipv4/tcp_input.c | 6 +++---
net/ipv4/tcp_ipv4.c | 14 +++++++-------
net/ipv4/tcp_minisocks.c | 4 ++--
net/ipv6/tcp_ipv6.c | 14 +++++++-------
6 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 1506515..8a2cd07 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -267,7 +267,7 @@ extern struct proto tcp_prot;
DECLARE_SNMP_STAT(struct tcp_mib, tcp_statistics);
#define TCP_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(tcp_statistics, field); } while (0)
-#define TCP_INC_STATS_BH(field) SNMP_INC_STATS_BH(tcp_statistics, field)
+#define TCP_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(tcp_statistics, field); } while (0)
#define TCP_DEC_STATS(field) SNMP_DEC_STATS(tcp_statistics, field)
#define TCP_ADD_STATS_USER(field, val) SNMP_ADD_STATS_USER(tcp_statistics, field, val)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index fe042c6..9caaf83 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -2663,7 +2663,7 @@ EXPORT_SYMBOL(__tcp_put_md5sig_pool);
void tcp_done(struct sock *sk)
{
if(sk->sk_state == TCP_SYN_SENT || sk->sk_state == TCP_SYN_RECV)
- TCP_INC_STATS_BH(TCP_MIB_ATTEMPTFAILS);
+ TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
tcp_set_state(sk, TCP_CLOSE);
tcp_clear_xmit_timers(sk);
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index d6ea970..01e8004 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -4795,7 +4795,7 @@ int tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
tcp_data_snd_check(sk);
return 0;
} else { /* Header too small */
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
goto discard;
}
} else {
@@ -4934,7 +4934,7 @@ slow_path:
tcp_replace_ts_recent(tp, TCP_SKB_CB(skb)->seq);
if (th->syn && !before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
NET_INC_STATS_BH(LINUX_MIB_TCPABORTONSYN);
tcp_reset(sk);
return 1;
@@ -4957,7 +4957,7 @@ step5:
return 0;
csum_error:
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
discard:
__kfree_skb(skb);
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index db3bf9b..e876312 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -599,8 +599,8 @@ static void tcp_v4_send_reset(struct sock *sk, struct sk_buff *skb)
ip_send_reply(net->ipv4.tcp_sock, skb,
&arg, arg.iov[0].iov_len);
- TCP_INC_STATS_BH(TCP_MIB_OUTSEGS);
- TCP_INC_STATS_BH(TCP_MIB_OUTRSTS);
+ TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
+ TCP_INC_STATS_BH(net, TCP_MIB_OUTRSTS);
}
/* The code following below sending ACKs in SYN-RECV and TIME-WAIT states
@@ -674,7 +674,7 @@ static void tcp_v4_send_ack(struct sk_buff *skb, u32 seq, u32 ack,
ip_send_reply(net->ipv4.tcp_sock, skb,
&arg, arg.iov[0].iov_len);
- TCP_INC_STATS_BH(TCP_MIB_OUTSEGS);
+ TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
}
static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
@@ -1494,7 +1494,7 @@ discard:
return 0;
csum_err:
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
goto discard;
}
@@ -1514,7 +1514,7 @@ int tcp_v4_rcv(struct sk_buff *skb)
goto discard_it;
/* Count it even if it's bad */
- TCP_INC_STATS_BH(TCP_MIB_INSEGS);
+ TCP_INC_STATS_BH(net, TCP_MIB_INSEGS);
if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
goto discard_it;
@@ -1590,7 +1590,7 @@ no_tcp_socket:
if (skb->len < (th->doff << 2) || tcp_checksum_complete(skb)) {
bad_packet:
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(net, TCP_MIB_INERRS);
} else {
tcp_v4_send_reset(NULL, skb);
}
@@ -1611,7 +1611,7 @@ do_time_wait:
}
if (skb->len < (th->doff << 2) || tcp_checksum_complete(skb)) {
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(net, TCP_MIB_INERRS);
inet_twsk_put(inet_twsk(sk));
goto discard_it;
}
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index ea68a47..8b02b10 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -480,7 +480,7 @@ struct sock *tcp_create_openreq_child(struct sock *sk, struct request_sock *req,
newtp->rx_opt.mss_clamp = req->mss;
TCP_ECN_openreq_child(newtp, req);
- TCP_INC_STATS_BH(TCP_MIB_PASSIVEOPENS);
+ TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_PASSIVEOPENS);
}
return newsk;
}
@@ -630,7 +630,7 @@ struct sock *tcp_check_req(struct sock *sk,struct sk_buff *skb,
* "fourth, check the SYN bit"
*/
if (flg & (TCP_FLAG_RST|TCP_FLAG_SYN)) {
- TCP_INC_STATS_BH(TCP_MIB_ATTEMPTFAILS);
+ TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_ATTEMPTFAILS);
goto embryonic_reset;
}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index fc5f716..3895d91 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1004,8 +1004,8 @@ static void tcp_v6_send_reset(struct sock *sk, struct sk_buff *skb)
if (xfrm_lookup(&buff->dst, &fl, NULL, 0) >= 0) {
ip6_xmit(ctl_sk, buff, &fl, NULL, 0);
- TCP_INC_STATS_BH(TCP_MIB_OUTSEGS);
- TCP_INC_STATS_BH(TCP_MIB_OUTRSTS);
+ TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
+ TCP_INC_STATS_BH(net, TCP_MIB_OUTRSTS);
return;
}
}
@@ -1089,7 +1089,7 @@ static void tcp_v6_send_ack(struct sk_buff *skb, u32 seq, u32 ack, u32 win, u32
if (!ip6_dst_lookup(ctl_sk, &buff->dst, &fl)) {
if (xfrm_lookup(&buff->dst, &fl, NULL, 0) >= 0) {
ip6_xmit(ctl_sk, buff, &fl, NULL, 0);
- TCP_INC_STATS_BH(TCP_MIB_OUTSEGS);
+ TCP_INC_STATS_BH(net, TCP_MIB_OUTSEGS);
return;
}
}
@@ -1579,7 +1579,7 @@ discard:
kfree_skb(skb);
return 0;
csum_err:
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
goto discard;
@@ -1625,7 +1625,7 @@ static int tcp_v6_rcv(struct sk_buff *skb)
/*
* Count it even if it's bad.
*/
- TCP_INC_STATS_BH(TCP_MIB_INSEGS);
+ TCP_INC_STATS_BH(net, TCP_MIB_INSEGS);
if (!pskb_may_pull(skb, sizeof(struct tcphdr)))
goto discard_it;
@@ -1697,7 +1697,7 @@ no_tcp_socket:
if (skb->len < (th->doff<<2) || tcp_checksum_complete(skb)) {
bad_packet:
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(net, TCP_MIB_INERRS);
} else {
tcp_v6_send_reset(NULL, skb);
}
@@ -1722,7 +1722,7 @@ do_time_wait:
}
if (skb->len < (th->doff<<2) || tcp_checksum_complete(skb)) {
- TCP_INC_STATS_BH(TCP_MIB_INERRS);
+ TCP_INC_STATS_BH(net, TCP_MIB_INERRS);
inet_twsk_put(inet_twsk(sk));
goto discard_it;
}
--
1.5.5.1
* [PATCH 11/20] mib: add net to TCP_DEC_STATS
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (8 preceding siblings ...)
2008-07-16 8:22 ` [PATCH 10/20] mib: add net to TCP_INC_STATS_BH Pavel Emelyanov
@ 2008-07-16 8:23 ` Pavel Emelyanov
2008-07-16 8:24 ` [PATCH 12/20] mib: add net to TCP_ADD_STATS_USER Pavel Emelyanov
` (10 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:23 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/tcp.h | 2 +-
net/ipv4/tcp.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 8a2cd07..92d8769 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -268,7 +268,7 @@ extern struct proto tcp_prot;
DECLARE_SNMP_STAT(struct tcp_mib, tcp_statistics);
#define TCP_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(tcp_statistics, field); } while (0)
#define TCP_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(tcp_statistics, field); } while (0)
-#define TCP_DEC_STATS(field) SNMP_DEC_STATS(tcp_statistics, field)
+#define TCP_DEC_STATS(net, field) do { (void)net; SNMP_DEC_STATS(tcp_statistics, field); } while (0)
#define TCP_ADD_STATS_USER(field, val) SNMP_ADD_STATS_USER(tcp_statistics, field, val)
extern void tcp_v4_err(struct sk_buff *skb, u32);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 9caaf83..315b82e 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1682,7 +1682,7 @@ void tcp_set_state(struct sock *sk, int state)
/* fall through */
default:
if (oldstate==TCP_ESTABLISHED)
- TCP_DEC_STATS(TCP_MIB_CURRESTAB);
+ TCP_DEC_STATS(sock_net(sk), TCP_MIB_CURRESTAB);
}
/* Change state AFTER socket is unhashed to avoid closed
--
1.5.5.1
* [PATCH 12/20] mib: add net to TCP_ADD_STATS_USER
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (9 preceding siblings ...)
2008-07-16 8:23 ` [PATCH 11/20] mib: add net to TCP_DEC_STATS Pavel Emelyanov
@ 2008-07-16 8:24 ` Pavel Emelyanov
2008-07-16 8:27 ` [PATCH 13/20] sock: add net to prot->enter_memory_pressure callback Pavel Emelyanov
` (9 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:24 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Now we're done with the TCP_XXX_STATS macros.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/tcp.h | 10 +++++-----
1 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 92d8769..4d78818 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -269,7 +269,7 @@ DECLARE_SNMP_STAT(struct tcp_mib, tcp_statistics);
#define TCP_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(tcp_statistics, field); } while (0)
#define TCP_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(tcp_statistics, field); } while (0)
#define TCP_DEC_STATS(net, field) do { (void)net; SNMP_DEC_STATS(tcp_statistics, field); } while (0)
-#define TCP_ADD_STATS_USER(field, val) SNMP_ADD_STATS_USER(tcp_statistics, field, val)
+#define TCP_ADD_STATS_USER(net, field, val) do { (void)net; SNMP_ADD_STATS_USER(tcp_statistics, field, val); } while (0)
extern void tcp_v4_err(struct sk_buff *skb, u32);
@@ -1027,10 +1027,10 @@ static inline int tcp_paws_check(const struct tcp_options_received *rx_opt, int
static inline void tcp_mib_init(struct net *net)
{
/* See RFC 2012 */
- TCP_ADD_STATS_USER(TCP_MIB_RTOALGORITHM, 1);
- TCP_ADD_STATS_USER(TCP_MIB_RTOMIN, TCP_RTO_MIN*1000/HZ);
- TCP_ADD_STATS_USER(TCP_MIB_RTOMAX, TCP_RTO_MAX*1000/HZ);
- TCP_ADD_STATS_USER(TCP_MIB_MAXCONN, -1);
+ TCP_ADD_STATS_USER(net, TCP_MIB_RTOALGORITHM, 1);
+ TCP_ADD_STATS_USER(net, TCP_MIB_RTOMIN, TCP_RTO_MIN*1000/HZ);
+ TCP_ADD_STATS_USER(net, TCP_MIB_RTOMAX, TCP_RTO_MAX*1000/HZ);
+ TCP_ADD_STATS_USER(net, TCP_MIB_MAXCONN, -1);
}
/* from STCP */
--
1.5.5.1
* [PATCH 13/20] sock: add net to prot->enter_memory_pressure callback
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (10 preceding siblings ...)
2008-07-16 8:24 ` [PATCH 12/20] mib: add net to TCP_ADD_STATS_USER Pavel Emelyanov
@ 2008-07-16 8:27 ` Pavel Emelyanov
2008-07-16 8:29 ` [PATCH 14/20] inet: prepare net on the stack for NET accounting macros Pavel Emelyanov
` (8 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:27 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
tcp_enter_memory_pressure() calls NET_INC_STATS, but has nowhere to
get the net from.
I decided to add an sk argument rather than the net itself, so that
all the required sock_net(sk) calls are factored inside the
enter_memory_pressure callback itself.
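The net effect on the callback, connecting this patch with patch 16
(sketch assembled from the hunks, not new code):

	/* struct proto */
	-	void	(*enter_memory_pressure)(void);
	+	void	(*enter_memory_pressure)(struct sock *sk);

	/* which lets tcp do, once NET_INC_STATS takes a net: */
	void tcp_enter_memory_pressure(struct sock *sk)
	{
		if (!tcp_memory_pressure) {
			NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMEMORYPRESSURES);
			tcp_memory_pressure = 1;
		}
	}

Callers such as __sk_mem_schedule() and sk_stream_alloc_skb() already
have the sk, so they just pass it through.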
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/sock.h | 4 ++--
include/net/tcp.h | 2 +-
net/core/sock.c | 2 +-
net/decnet/af_decnet.c | 2 +-
net/ipv4/tcp.c | 4 ++--
net/sctp/socket.c | 2 +-
6 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/include/net/sock.h b/include/net/sock.h
index 3f4897a..06c5259 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -565,7 +565,7 @@ struct proto {
#endif
/* Memory pressure */
- void (*enter_memory_pressure)(void);
+ void (*enter_memory_pressure)(struct sock *sk);
atomic_t *memory_allocated; /* Current allocated memory. */
atomic_t *sockets_allocated; /* Current number of sockets. */
/*
@@ -1210,7 +1210,7 @@ static inline struct page *sk_stream_alloc_page(struct sock *sk)
page = alloc_pages(sk->sk_allocation, 0);
if (!page) {
- sk->sk_prot->enter_memory_pressure();
+ sk->sk_prot->enter_memory_pressure(sk);
sk_stream_moderate_sndbuf(sk);
}
return page;
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 4d78818..c25cb52 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -975,7 +975,7 @@ static inline void tcp_openreq_init(struct request_sock *req,
ireq->rmt_port = tcp_hdr(skb)->source;
}
-extern void tcp_enter_memory_pressure(void);
+extern void tcp_enter_memory_pressure(struct sock *sk);
static inline int keepalive_intvl_when(const struct tcp_sock *tp)
{
diff --git a/net/core/sock.c b/net/core/sock.c
index 2c0ba52..10a64d5 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1442,7 +1442,7 @@ int __sk_mem_schedule(struct sock *sk, int size, int kind)
/* Under pressure. */
if (allocated > prot->sysctl_mem[1])
if (prot->enter_memory_pressure)
- prot->enter_memory_pressure();
+ prot->enter_memory_pressure(sk);
/* Over hard limit. */
if (allocated > prot->sysctl_mem[2])
diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
index 931bdf9..61b7df5 100644
--- a/net/decnet/af_decnet.c
+++ b/net/decnet/af_decnet.c
@@ -451,7 +451,7 @@ static void dn_destruct(struct sock *sk)
static int dn_memory_pressure;
-static void dn_enter_memory_pressure(void)
+static void dn_enter_memory_pressure(struct sock *sk)
{
if (!dn_memory_pressure) {
dn_memory_pressure = 1;
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 315b82e..44b2fda 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -316,7 +316,7 @@ int tcp_memory_pressure __read_mostly;
EXPORT_SYMBOL(tcp_memory_pressure);
-void tcp_enter_memory_pressure(void)
+void tcp_enter_memory_pressure(struct sock *sk)
{
if (!tcp_memory_pressure) {
NET_INC_STATS(LINUX_MIB_TCPMEMORYPRESSURES);
@@ -649,7 +649,7 @@ struct sk_buff *sk_stream_alloc_skb(struct sock *sk, int size, gfp_t gfp)
}
__kfree_skb(skb);
} else {
- sk->sk_prot->enter_memory_pressure();
+ sk->sk_prot->enter_memory_pressure(sk);
sk_stream_moderate_sndbuf(sk);
}
return NULL;
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index df5572c..6aba01b 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -116,7 +116,7 @@ static int sctp_memory_pressure;
static atomic_t sctp_memory_allocated;
static atomic_t sctp_sockets_allocated;
-static void sctp_enter_memory_pressure(void)
+static void sctp_enter_memory_pressure(struct sock *sk)
{
sctp_memory_pressure = 1;
}
--
1.5.5.1
* [PATCH 14/20] inet: prepare net on the stack for NET accounting macros
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (11 preceding siblings ...)
2008-07-16 8:27 ` [PATCH 13/20] sock: add net to prot->enter_memory_pressure callback Pavel Emelyanov
@ 2008-07-16 8:29 ` Pavel Emelyanov
2008-07-16 8:31 ` [PATCH 15/20] tcp: replace tcp_sock argument with sock in some places Pavel Emelyanov
` (7 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:29 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
net/dccp/ipv6.c | 3 ++-
net/ipv4/arp.c | 3 ++-
net/ipv6/tcp_ipv6.c | 3 ++-
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index eec3c47..83cc9bb 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -93,8 +93,9 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
struct sock *sk;
int err;
__u64 seq;
+ struct net *net = dev_net(skb->dev);
- sk = inet6_lookup(dev_net(skb->dev), &dccp_hashinfo,
+ sk = inet6_lookup(net, &dccp_hashinfo,
&hdr->daddr, dh->dccph_dport,
&hdr->saddr, dh->dccph_sport, inet6_iif(skb));
diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
index 29df75a..aab98b8 100644
--- a/net/ipv4/arp.c
+++ b/net/ipv4/arp.c
@@ -421,8 +421,9 @@ static int arp_filter(__be32 sip, __be32 tip, struct net_device *dev)
struct rtable *rt;
int flag = 0;
/*unsigned long now; */
+ struct net *net = dev_net(dev);
- if (ip_route_output_key(dev_net(dev), &rt, &fl) < 0)
+ if (ip_route_output_key(net, &rt, &fl) < 0)
return 1;
if (rt->u.dst.dev != dev) {
NET_INC_STATS_BH(LINUX_MIB_ARPFILTER);
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 3895d91..d58b83a 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -323,8 +323,9 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
int err;
struct tcp_sock *tp;
__u32 seq;
+ struct net *net = dev_net(skb->dev);
- sk = inet6_lookup(dev_net(skb->dev), &tcp_hashinfo, &hdr->daddr,
+ sk = inet6_lookup(net, &tcp_hashinfo, &hdr->daddr,
th->dest, &hdr->saddr, th->source, skb->dev->ifindex);
if (sk == NULL) {
--
1.5.5.1
* [PATCH 15/20] tcp: replace tcp_sock argument with sock in some places
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (12 preceding siblings ...)
2008-07-16 8:29 ` [PATCH 14/20] inet: prepare net on the stack for NET accounting macros Pavel Emelyanov
@ 2008-07-16 8:31 ` Pavel Emelyanov
2008-07-16 8:32 ` [PATCH 16/20] mib: add net to NET_INC_STATS Pavel Emelyanov
` (6 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:31 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
These places have a tcp_sock, but we'd prefer the sock itself so that
we can get the net from it. Fortunately, the tcp_sk() macro is just a
type cast, so this replacement is really cheap.
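For reference, tcp_sk() is (approximately, in the kernels of that era)
nothing more than:

	static inline struct tcp_sock *tcp_sk(const struct sock *sk)
	{
		return (struct tcp_sock *)sk;
	}

so passing the sock and casting inside the helper costs the same as
passing the tcp_sock directly.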
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
net/ipv4/tcp_input.c | 31 ++++++++++++++++++-------------
1 files changed, 18 insertions(+), 13 deletions(-)
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 01e8004..f50d843 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -1169,10 +1169,11 @@ static void tcp_mark_lost_retrans(struct sock *sk)
tp->lost_retrans_low = new_low_seq;
}
-static int tcp_check_dsack(struct tcp_sock *tp, struct sk_buff *ack_skb,
+static int tcp_check_dsack(struct sock *sk, struct sk_buff *ack_skb,
struct tcp_sack_block_wire *sp, int num_sacks,
u32 prior_snd_una)
{
+ struct tcp_sock *tp = tcp_sk(sk);
u32 start_seq_0 = get_unaligned_be32(&sp[0].start_seq);
u32 end_seq_0 = get_unaligned_be32(&sp[0].end_seq);
int dup_sack = 0;
@@ -1434,7 +1435,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
tcp_highest_sack_reset(sk);
}
- found_dup_sack = tcp_check_dsack(tp, ack_skb, sp_wire,
+ found_dup_sack = tcp_check_dsack(sk, ack_skb, sp_wire,
num_sacks, prior_snd_una);
if (found_dup_sack)
flag |= FLAG_DSACKING_ACK;
@@ -3711,8 +3712,10 @@ static inline int tcp_sack_extend(struct tcp_sack_block *sp, u32 seq,
return 0;
}
-static void tcp_dsack_set(struct tcp_sock *tp, u32 seq, u32 end_seq)
+static void tcp_dsack_set(struct sock *sk, u32 seq, u32 end_seq)
{
+ struct tcp_sock *tp = tcp_sk(sk);
+
if (tcp_is_sack(tp) && sysctl_tcp_dsack) {
int mib_idx;
@@ -3731,10 +3734,12 @@ static void tcp_dsack_set(struct tcp_sock *tp, u32 seq, u32 end_seq)
}
}
-static void tcp_dsack_extend(struct tcp_sock *tp, u32 seq, u32 end_seq)
+static void tcp_dsack_extend(struct sock *sk, u32 seq, u32 end_seq)
{
+ struct tcp_sock *tp = tcp_sk(sk);
+
if (!tp->rx_opt.dsack)
- tcp_dsack_set(tp, seq, end_seq);
+ tcp_dsack_set(sk, seq, end_seq);
else
tcp_sack_extend(tp->duplicate_sack, seq, end_seq);
}
@@ -3753,7 +3758,7 @@ static void tcp_send_dupack(struct sock *sk, struct sk_buff *skb)
if (after(TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt))
end_seq = tp->rcv_nxt;
- tcp_dsack_set(tp, TCP_SKB_CB(skb)->seq, end_seq);
+ tcp_dsack_set(sk, TCP_SKB_CB(skb)->seq, end_seq);
}
}
@@ -3906,7 +3911,7 @@ static void tcp_ofo_queue(struct sock *sk)
__u32 dsack = dsack_high;
if (before(TCP_SKB_CB(skb)->end_seq, dsack_high))
dsack_high = TCP_SKB_CB(skb)->end_seq;
- tcp_dsack_extend(tp, TCP_SKB_CB(skb)->seq, dsack);
+ tcp_dsack_extend(sk, TCP_SKB_CB(skb)->seq, dsack);
}
if (!after(TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt)) {
@@ -4035,7 +4040,7 @@ queue_and_out:
if (!after(TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt)) {
/* A retransmit, 2nd most common case. Force an immediate ack. */
NET_INC_STATS_BH(LINUX_MIB_DELAYEDACKLOST);
- tcp_dsack_set(tp, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);
+ tcp_dsack_set(sk, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);
out_of_window:
tcp_enter_quickack_mode(sk);
@@ -4057,7 +4062,7 @@ drop:
tp->rcv_nxt, TCP_SKB_CB(skb)->seq,
TCP_SKB_CB(skb)->end_seq);
- tcp_dsack_set(tp, TCP_SKB_CB(skb)->seq, tp->rcv_nxt);
+ tcp_dsack_set(sk, TCP_SKB_CB(skb)->seq, tp->rcv_nxt);
/* If window is closed, drop tail of packet. But after
* remembering D-SACK for its head made in previous line.
@@ -4122,12 +4127,12 @@ drop:
if (!after(end_seq, TCP_SKB_CB(skb1)->end_seq)) {
/* All the bits are present. Drop. */
__kfree_skb(skb);
- tcp_dsack_set(tp, seq, end_seq);
+ tcp_dsack_set(sk, seq, end_seq);
goto add_sack;
}
if (after(seq, TCP_SKB_CB(skb1)->seq)) {
/* Partial overlap. */
- tcp_dsack_set(tp, seq,
+ tcp_dsack_set(sk, seq,
TCP_SKB_CB(skb1)->end_seq);
} else {
skb1 = skb1->prev;
@@ -4140,12 +4145,12 @@ drop:
(struct sk_buff *)&tp->out_of_order_queue &&
after(end_seq, TCP_SKB_CB(skb1)->seq)) {
if (before(end_seq, TCP_SKB_CB(skb1)->end_seq)) {
- tcp_dsack_extend(tp, TCP_SKB_CB(skb1)->seq,
+ tcp_dsack_extend(sk, TCP_SKB_CB(skb1)->seq,
end_seq);
break;
}
__skb_unlink(skb1, &tp->out_of_order_queue);
- tcp_dsack_extend(tp, TCP_SKB_CB(skb1)->seq,
+ tcp_dsack_extend(sk, TCP_SKB_CB(skb1)->seq,
TCP_SKB_CB(skb1)->end_seq);
__kfree_skb(skb1);
}
--
1.5.5.1
* [PATCH 16/20] mib: add net to NET_INC_STATS
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (13 preceding siblings ...)
2008-07-16 8:31 ` [PATCH 15/20] tcp: replace tcp_sock argument with sock in some places Pavel Emelyanov
@ 2008-07-16 8:32 ` Pavel Emelyanov
2008-07-16 8:33 ` [PATCH 17/20] mib: add net to NET_INC_STATS_BH Pavel Emelyanov
` (5 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:32 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 2 +-
net/ipv4/tcp.c | 2 +-
net/ipv4/tcp_output.c | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index 93d0e09..b42a434 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -161,7 +161,7 @@ DECLARE_SNMP_STAT(struct ipstats_mib, ip_statistics);
#define IP_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(ip_statistics, field); } while (0)
#define IP_ADD_STATS_BH(net, field, val) SNMP_ADD_STATS_BH(ip_statistics, field, val)
DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
-#define NET_INC_STATS(field) SNMP_INC_STATS(net_statistics, field)
+#define NET_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(net_statistics, field); } while (0)
#define NET_INC_STATS_BH(field) SNMP_INC_STATS_BH(net_statistics, field)
#define NET_INC_STATS_USER(field) SNMP_INC_STATS_USER(net_statistics, field)
#define NET_ADD_STATS_BH(field, adnd) SNMP_ADD_STATS_BH(net_statistics, field, adnd)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 44b2fda..9c420da 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -319,7 +319,7 @@ EXPORT_SYMBOL(tcp_memory_pressure);
void tcp_enter_memory_pressure(struct sock *sk)
{
if (!tcp_memory_pressure) {
- NET_INC_STATS(LINUX_MIB_TCPMEMORYPRESSURES);
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPMEMORYPRESSURES);
tcp_memory_pressure = 1;
}
}
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index c3b0da8..176f070 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -2119,7 +2119,7 @@ void tcp_send_active_reset(struct sock *sk, gfp_t priority)
/* NOTE: No TCP options attached and we never retransmit this. */
skb = alloc_skb(MAX_TCP_HEADER, priority);
if (!skb) {
- NET_INC_STATS(LINUX_MIB_TCPABORTFAILED);
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTFAILED);
return;
}
@@ -2130,7 +2130,7 @@ void tcp_send_active_reset(struct sock *sk, gfp_t priority)
/* Send it off. */
TCP_SKB_CB(skb)->when = tcp_time_stamp;
if (tcp_transmit_skb(sk, skb, 0, priority))
- NET_INC_STATS(LINUX_MIB_TCPABORTFAILED);
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTFAILED);
TCP_INC_STATS(sock_net(sk), TCP_MIB_OUTRSTS);
}
--
1.5.5.1
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH 17/20] mib: add net to NET_INC_STATS_BH
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (14 preceding siblings ...)
2008-07-16 8:32 ` [PATCH 16/20] mib: add net to NET_INC_STATS Pavel Emelyanov
@ 2008-07-16 8:33 ` Pavel Emelyanov
2008-07-16 8:34 ` [PATCH 18/20] mib: add net to NET_INC_STATS_USER Pavel Emelyanov
` (4 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:33 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 2 +-
include/net/tcp.h | 2 +-
net/dccp/ipv4.c | 10 +++---
net/dccp/ipv6.c | 8 ++--
net/dccp/timer.c | 4 +-
net/ipv4/arp.c | 2 +-
net/ipv4/inet_hashtables.c | 4 +-
net/ipv4/syncookies.c | 6 ++--
net/ipv4/tcp.c | 6 ++-
net/ipv4/tcp_input.c | 65 ++++++++++++++++++++++---------------------
net/ipv4/tcp_ipv4.c | 12 ++++----
net/ipv4/tcp_minisocks.c | 6 ++--
net/ipv4/tcp_output.c | 4 +-
net/ipv4/tcp_timer.c | 12 ++++----
net/ipv6/inet6_hashtables.c | 4 +-
net/ipv6/syncookies.c | 6 ++--
net/ipv6/tcp_ipv6.c | 10 +++---
net/sctp/input.c | 2 +-
18 files changed, 84 insertions(+), 81 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index b42a434..21df167 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -162,7 +162,7 @@ DECLARE_SNMP_STAT(struct ipstats_mib, ip_statistics);
#define IP_ADD_STATS_BH(net, field, val) SNMP_ADD_STATS_BH(ip_statistics, field, val)
DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
#define NET_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(net_statistics, field); } while (0)
-#define NET_INC_STATS_BH(field) SNMP_INC_STATS_BH(net_statistics, field)
+#define NET_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(net_statistics, field); } while (0)
#define NET_INC_STATS_USER(field) SNMP_INC_STATS_USER(net_statistics, field)
#define NET_ADD_STATS_BH(field, adnd) SNMP_ADD_STATS_BH(net_statistics, field, adnd)
#define NET_ADD_STATS_USER(field, adnd) SNMP_ADD_STATS_USER(net_statistics, field, adnd)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index c25cb52..60e5be8 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -894,7 +894,7 @@ static inline int tcp_prequeue(struct sock *sk, struct sk_buff *skb)
while ((skb1 = __skb_dequeue(&tp->ucopy.prequeue)) != NULL) {
sk->sk_backlog_rcv(sk, skb1);
- NET_INC_STATS_BH(LINUX_MIB_TCPPREQUEUEDROPPED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPPREQUEUEDROPPED);
}
tp->ucopy.memory = 0;
diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
index 9f760a1..2622ace 100644
--- a/net/dccp/ipv4.c
+++ b/net/dccp/ipv4.c
@@ -230,7 +230,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
* servers this needs to be solved differently.
*/
if (sock_owned_by_user(sk))
- NET_INC_STATS_BH(LINUX_MIB_LOCKDROPPEDICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
if (sk->sk_state == DCCP_CLOSED)
goto out;
@@ -239,7 +239,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
seq = dccp_hdr_seq(dh);
if ((1 << sk->sk_state) & ~(DCCPF_REQUESTING | DCCPF_LISTEN) &&
!between48(seq, dp->dccps_swl, dp->dccps_swh)) {
- NET_INC_STATS_BH(LINUX_MIB_OUTOFWINDOWICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
goto out;
}
@@ -286,7 +286,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
BUG_TRAP(!req->sk);
if (seq != dccp_rsk(req)->dreq_iss) {
- NET_INC_STATS_BH(LINUX_MIB_OUTOFWINDOWICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
goto out;
}
/*
@@ -409,9 +409,9 @@ struct sock *dccp_v4_request_recv_sock(struct sock *sk, struct sk_buff *skb,
return newsk;
exit_overflow:
- NET_INC_STATS_BH(LINUX_MIB_LISTENOVERFLOWS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
exit:
- NET_INC_STATS_BH(LINUX_MIB_LISTENDROPS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
dst_release(dst);
return NULL;
}
diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
index 83cc9bb..b74e8b2 100644
--- a/net/dccp/ipv6.c
+++ b/net/dccp/ipv6.c
@@ -111,7 +111,7 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
bh_lock_sock(sk);
if (sock_owned_by_user(sk))
- NET_INC_STATS_BH(LINUX_MIB_LOCKDROPPEDICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
if (sk->sk_state == DCCP_CLOSED)
goto out;
@@ -189,7 +189,7 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
BUG_TRAP(req->sk == NULL);
if (seq != dccp_rsk(req)->dreq_iss) {
- NET_INC_STATS_BH(LINUX_MIB_OUTOFWINDOWICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
goto out;
}
@@ -630,9 +630,9 @@ static struct sock *dccp_v6_request_recv_sock(struct sock *sk,
return newsk;
out_overflow:
- NET_INC_STATS_BH(LINUX_MIB_LISTENOVERFLOWS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
out:
- NET_INC_STATS_BH(LINUX_MIB_LISTENDROPS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
if (opt != NULL && opt != np->opt)
sock_kfree_s(sk, opt, opt->tot_len);
dst_release(dst);
diff --git a/net/dccp/timer.c b/net/dccp/timer.c
index 8703a79..3608d53 100644
--- a/net/dccp/timer.c
+++ b/net/dccp/timer.c
@@ -224,7 +224,7 @@ static void dccp_delack_timer(unsigned long data)
if (sock_owned_by_user(sk)) {
/* Try again later. */
icsk->icsk_ack.blocked = 1;
- NET_INC_STATS_BH(LINUX_MIB_DELAYEDACKLOCKED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOCKED);
sk_reset_timer(sk, &icsk->icsk_delack_timer,
jiffies + TCP_DELACK_MIN);
goto out;
@@ -254,7 +254,7 @@ static void dccp_delack_timer(unsigned long data)
icsk->icsk_ack.ato = TCP_ATO_MIN;
}
dccp_send_ack(sk);
- NET_INC_STATS_BH(LINUX_MIB_DELAYEDACKS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKS);
}
out:
bh_unlock_sock(sk);
diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
index aab98b8..b043eda 100644
--- a/net/ipv4/arp.c
+++ b/net/ipv4/arp.c
@@ -426,7 +426,7 @@ static int arp_filter(__be32 sip, __be32 tip, struct net_device *dev)
if (ip_route_output_key(net, &rt, &fl) < 0)
return 1;
if (rt->u.dst.dev != dev) {
- NET_INC_STATS_BH(LINUX_MIB_ARPFILTER);
+ NET_INC_STATS_BH(net, LINUX_MIB_ARPFILTER);
flag = 1;
}
ip_rt_put(rt);
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index eca5899..115f537 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -312,11 +312,11 @@ unique:
if (twp) {
*twp = tw;
- NET_INC_STATS_BH(LINUX_MIB_TIMEWAITRECYCLED);
+ NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
} else if (tw) {
/* Silly. Should hash-dance instead... */
inet_twsk_deschedule(tw, death_row);
- NET_INC_STATS_BH(LINUX_MIB_TIMEWAITRECYCLED);
+ NET_INC_STATS_BH(net, LINUX_MIB_TIMEWAITRECYCLED);
inet_twsk_put(tw);
}
diff --git a/net/ipv4/syncookies.c b/net/ipv4/syncookies.c
index fdde2ae..51bc24d 100644
--- a/net/ipv4/syncookies.c
+++ b/net/ipv4/syncookies.c
@@ -173,7 +173,7 @@ __u32 cookie_v4_init_sequence(struct sock *sk, struct sk_buff *skb, __u16 *mssp)
;
*mssp = msstab[mssind] + 1;
- NET_INC_STATS_BH(LINUX_MIB_SYNCOOKIESSENT);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESSENT);
return secure_tcp_syn_cookie(iph->saddr, iph->daddr,
th->source, th->dest, ntohl(th->seq),
@@ -269,11 +269,11 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb,
if (time_after(jiffies, tp->last_synq_overflow + TCP_TIMEOUT_INIT) ||
(mss = cookie_check(skb, cookie)) == 0) {
- NET_INC_STATS_BH(LINUX_MIB_SYNCOOKIESFAILED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESFAILED);
goto out;
}
- NET_INC_STATS_BH(LINUX_MIB_SYNCOOKIESRECV);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESRECV);
/* check for timestamp cookie support */
memset(&tcp_opt, 0, sizeof(tcp_opt));
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 9c420da..9aa4a0e 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1871,7 +1871,8 @@ adjudge_to_death:
if (tp->linger2 < 0) {
tcp_set_state(sk, TCP_CLOSE);
tcp_send_active_reset(sk, GFP_ATOMIC);
- NET_INC_STATS_BH(LINUX_MIB_TCPABORTONLINGER);
+ NET_INC_STATS_BH(sock_net(sk),
+ LINUX_MIB_TCPABORTONLINGER);
} else {
const int tmo = tcp_fin_time(sk);
@@ -1893,7 +1894,8 @@ adjudge_to_death:
"sockets\n");
tcp_set_state(sk, TCP_CLOSE);
tcp_send_active_reset(sk, GFP_ATOMIC);
- NET_INC_STATS_BH(LINUX_MIB_TCPABORTONMEMORY);
+ NET_INC_STATS_BH(sock_net(sk),
+ LINUX_MIB_TCPABORTONMEMORY);
}
}
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index f50d843..fac49a6 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -961,7 +961,7 @@ static void tcp_update_reordering(struct sock *sk, const int metric,
else
mib_idx = LINUX_MIB_TCPSACKREORDER;
- NET_INC_STATS_BH(mib_idx);
+ NET_INC_STATS_BH(sock_net(sk), mib_idx);
#if FASTRETRANS_DEBUG > 1
printk(KERN_DEBUG "Disorder%d %d %u f%u s%u rr%d\n",
tp->rx_opt.sack_ok, inet_csk(sk)->icsk_ca_state,
@@ -1157,7 +1157,7 @@ static void tcp_mark_lost_retrans(struct sock *sk)
tp->lost_out += tcp_skb_pcount(skb);
TCP_SKB_CB(skb)->sacked |= TCPCB_LOST;
}
- NET_INC_STATS_BH(LINUX_MIB_TCPLOSTRETRANSMIT);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPLOSTRETRANSMIT);
} else {
if (before(ack_seq, new_low_seq))
new_low_seq = ack_seq;
@@ -1181,7 +1181,7 @@ static int tcp_check_dsack(struct sock *sk, struct sk_buff *ack_skb,
if (before(start_seq_0, TCP_SKB_CB(ack_skb)->ack_seq)) {
dup_sack = 1;
tcp_dsack_seen(tp);
- NET_INC_STATS_BH(LINUX_MIB_TCPDSACKRECV);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPDSACKRECV);
} else if (num_sacks > 1) {
u32 end_seq_1 = get_unaligned_be32(&sp[1].end_seq);
u32 start_seq_1 = get_unaligned_be32(&sp[1].start_seq);
@@ -1190,7 +1190,8 @@ static int tcp_check_dsack(struct sock *sk, struct sk_buff *ack_skb,
!before(start_seq_0, start_seq_1)) {
dup_sack = 1;
tcp_dsack_seen(tp);
- NET_INC_STATS_BH(LINUX_MIB_TCPDSACKOFORECV);
+ NET_INC_STATS_BH(sock_net(sk),
+ LINUX_MIB_TCPDSACKOFORECV);
}
}
@@ -1476,7 +1477,7 @@ tcp_sacktag_write_queue(struct sock *sk, struct sk_buff *ack_skb,
mib_idx = LINUX_MIB_TCPSACKDISCARD;
}
- NET_INC_STATS_BH(mib_idx);
+ NET_INC_STATS_BH(sock_net(sk), mib_idx);
if (i == 0)
first_sack_index = -1;
continue;
@@ -1969,7 +1970,7 @@ static int tcp_check_sack_reneging(struct sock *sk, int flag)
{
if (flag & FLAG_SACK_RENEGING) {
struct inet_connection_sock *icsk = inet_csk(sk);
- NET_INC_STATS_BH(LINUX_MIB_TCPSACKRENEGING);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSACKRENEGING);
tcp_enter_loss(sk, 1);
icsk->icsk_retransmits++;
@@ -2401,7 +2402,7 @@ static int tcp_try_undo_recovery(struct sock *sk)
else
mib_idx = LINUX_MIB_TCPFULLUNDO;
- NET_INC_STATS_BH(mib_idx);
+ NET_INC_STATS_BH(sock_net(sk), mib_idx);
tp->undo_marker = 0;
}
if (tp->snd_una == tp->high_seq && tcp_is_reno(tp)) {
@@ -2424,7 +2425,7 @@ static void tcp_try_undo_dsack(struct sock *sk)
DBGUNDO(sk, "D-SACK");
tcp_undo_cwr(sk, 1);
tp->undo_marker = 0;
- NET_INC_STATS_BH(LINUX_MIB_TCPDSACKUNDO);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPDSACKUNDO);
}
}
@@ -2447,7 +2448,7 @@ static int tcp_try_undo_partial(struct sock *sk, int acked)
DBGUNDO(sk, "Hoe");
tcp_undo_cwr(sk, 0);
- NET_INC_STATS_BH(LINUX_MIB_TCPPARTIALUNDO);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPPARTIALUNDO);
/* So... Do not make Hoe's retransmit yet.
* If the first packet was delayed, the rest
@@ -2476,7 +2477,7 @@ static int tcp_try_undo_loss(struct sock *sk)
DBGUNDO(sk, "partial loss");
tp->lost_out = 0;
tcp_undo_cwr(sk, 1);
- NET_INC_STATS_BH(LINUX_MIB_TCPLOSSUNDO);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPLOSSUNDO);
inet_csk(sk)->icsk_retransmits = 0;
tp->undo_marker = 0;
if (tcp_is_sack(tp))
@@ -2595,7 +2596,7 @@ static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked, int flag)
icsk->icsk_ca_state != TCP_CA_Open &&
tp->fackets_out > tp->reordering) {
tcp_mark_head_lost(sk, tp->fackets_out - tp->reordering);
- NET_INC_STATS_BH(LINUX_MIB_TCPLOSS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPLOSS);
}
/* D. Check consistency of the current state. */
@@ -2700,7 +2701,7 @@ static void tcp_fastretrans_alert(struct sock *sk, int pkts_acked, int flag)
else
mib_idx = LINUX_MIB_TCPSACKRECOVERY;
- NET_INC_STATS_BH(mib_idx);
+ NET_INC_STATS_BH(sock_net(sk), mib_idx);
tp->high_seq = tp->snd_nxt;
tp->prior_ssthresh = 0;
@@ -3211,7 +3212,7 @@ static int tcp_process_frto(struct sock *sk, int flag)
}
tp->frto_counter = 0;
tp->undo_marker = 0;
- NET_INC_STATS_BH(LINUX_MIB_TCPSPURIOUSRTOS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSPURIOUSRTOS);
}
return 0;
}
@@ -3264,12 +3265,12 @@ static int tcp_ack(struct sock *sk, struct sk_buff *skb, int flag)
tcp_ca_event(sk, CA_EVENT_FAST_ACK);
- NET_INC_STATS_BH(LINUX_MIB_TCPHPACKS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPACKS);
} else {
if (ack_seq != TCP_SKB_CB(skb)->end_seq)
flag |= FLAG_DATA;
else
- NET_INC_STATS_BH(LINUX_MIB_TCPPUREACKS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPPUREACKS);
flag |= tcp_ack_update_window(sk, skb, ack, ack_seq);
@@ -3724,7 +3725,7 @@ static void tcp_dsack_set(struct sock *sk, u32 seq, u32 end_seq)
else
mib_idx = LINUX_MIB_TCPDSACKOFOSENT;
- NET_INC_STATS_BH(mib_idx);
+ NET_INC_STATS_BH(sock_net(sk), mib_idx);
tp->rx_opt.dsack = 1;
tp->duplicate_sack[0].start_seq = seq;
@@ -3750,7 +3751,7 @@ static void tcp_send_dupack(struct sock *sk, struct sk_buff *skb)
if (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
- NET_INC_STATS_BH(LINUX_MIB_DELAYEDACKLOST);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
tcp_enter_quickack_mode(sk);
if (tcp_is_sack(tp) && sysctl_tcp_dsack) {
@@ -4039,7 +4040,7 @@ queue_and_out:
if (!after(TCP_SKB_CB(skb)->end_seq, tp->rcv_nxt)) {
/* A retransmit, 2nd most common case. Force an immediate ack. */
- NET_INC_STATS_BH(LINUX_MIB_DELAYEDACKLOST);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOST);
tcp_dsack_set(sk, TCP_SKB_CB(skb)->seq, TCP_SKB_CB(skb)->end_seq);
out_of_window:
@@ -4181,7 +4182,7 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list,
struct sk_buff *next = skb->next;
__skb_unlink(skb, list);
__kfree_skb(skb);
- NET_INC_STATS_BH(LINUX_MIB_TCPRCVCOLLAPSED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRCVCOLLAPSED);
skb = next;
continue;
}
@@ -4249,7 +4250,7 @@ tcp_collapse(struct sock *sk, struct sk_buff_head *list,
struct sk_buff *next = skb->next;
__skb_unlink(skb, list);
__kfree_skb(skb);
- NET_INC_STATS_BH(LINUX_MIB_TCPRCVCOLLAPSED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPRCVCOLLAPSED);
skb = next;
if (skb == tail ||
tcp_hdr(skb)->syn ||
@@ -4312,7 +4313,7 @@ static int tcp_prune_ofo_queue(struct sock *sk)
int res = 0;
if (!skb_queue_empty(&tp->out_of_order_queue)) {
- NET_INC_STATS_BH(LINUX_MIB_OFOPRUNED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_OFOPRUNED);
__skb_queue_purge(&tp->out_of_order_queue);
/* Reset SACK state. A conforming SACK implementation will
@@ -4341,7 +4342,7 @@ static int tcp_prune_queue(struct sock *sk)
SOCK_DEBUG(sk, "prune_queue: c=%x\n", tp->copied_seq);
- NET_INC_STATS_BH(LINUX_MIB_PRUNECALLED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PRUNECALLED);
if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
tcp_clamp_window(sk);
@@ -4370,7 +4371,7 @@ static int tcp_prune_queue(struct sock *sk)
* drop receive data on the floor. It will get retransmitted
* and hopefully then we'll have sufficient space.
*/
- NET_INC_STATS_BH(LINUX_MIB_RCVPRUNED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_RCVPRUNED);
/* Massive buffer overcommit. */
tp->pred_flags = 0;
@@ -4837,7 +4838,7 @@ int tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
__skb_pull(skb, tcp_header_len);
tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
- NET_INC_STATS_BH(LINUX_MIB_TCPHPHITSTOUSER);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPHITSTOUSER);
}
if (copied_early)
tcp_cleanup_rbuf(sk, skb->len);
@@ -4860,7 +4861,7 @@ int tcp_rcv_established(struct sock *sk, struct sk_buff *skb,
if ((int)skb->truesize > sk->sk_forward_alloc)
goto step5;
- NET_INC_STATS_BH(LINUX_MIB_TCPHPHITS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPHITS);
/* Bulk data transfer: receiver */
__skb_pull(skb, tcp_header_len);
@@ -4904,7 +4905,7 @@ slow_path:
if (tcp_fast_parse_options(skb, th, tp) && tp->rx_opt.saw_tstamp &&
tcp_paws_discard(sk, skb)) {
if (!th->rst) {
- NET_INC_STATS_BH(LINUX_MIB_PAWSESTABREJECTED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
tcp_send_dupack(sk, skb);
goto discard;
}
@@ -4940,7 +4941,7 @@ slow_path:
if (th->syn && !before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
TCP_INC_STATS_BH(sock_net(sk), TCP_MIB_INERRS);
- NET_INC_STATS_BH(LINUX_MIB_TCPABORTONSYN);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONSYN);
tcp_reset(sk);
return 1;
}
@@ -4996,7 +4997,7 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
if (tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr &&
!between(tp->rx_opt.rcv_tsecr, tp->retrans_stamp,
tcp_time_stamp)) {
- NET_INC_STATS_BH(LINUX_MIB_PAWSACTIVEREJECTED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSACTIVEREJECTED);
goto reset_and_undo;
}
@@ -5280,7 +5281,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
if (tcp_fast_parse_options(skb, th, tp) && tp->rx_opt.saw_tstamp &&
tcp_paws_discard(sk, skb)) {
if (!th->rst) {
- NET_INC_STATS_BH(LINUX_MIB_PAWSESTABREJECTED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
tcp_send_dupack(sk, skb);
goto discard;
}
@@ -5309,7 +5310,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
* Check for a SYN in window.
*/
if (th->syn && !before(TCP_SKB_CB(skb)->seq, tp->rcv_nxt)) {
- NET_INC_STATS_BH(LINUX_MIB_TCPABORTONSYN);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONSYN);
tcp_reset(sk);
return 1;
}
@@ -5391,7 +5392,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
(TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
after(TCP_SKB_CB(skb)->end_seq - th->fin, tp->rcv_nxt))) {
tcp_done(sk);
- NET_INC_STATS_BH(LINUX_MIB_TCPABORTONDATA);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
return 1;
}
@@ -5451,7 +5452,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
if (sk->sk_shutdown & RCV_SHUTDOWN) {
if (TCP_SKB_CB(skb)->end_seq != TCP_SKB_CB(skb)->seq &&
after(TCP_SKB_CB(skb)->end_seq - th->fin, tp->rcv_nxt)) {
- NET_INC_STATS_BH(LINUX_MIB_TCPABORTONDATA);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
tcp_reset(sk);
return 1;
}
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index e876312..29adc66 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -366,7 +366,7 @@ void tcp_v4_err(struct sk_buff *skb, u32 info)
* servers this needs to be solved differently.
*/
if (sock_owned_by_user(sk))
- NET_INC_STATS_BH(LINUX_MIB_LOCKDROPPEDICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
if (sk->sk_state == TCP_CLOSE)
goto out;
@@ -375,7 +375,7 @@ void tcp_v4_err(struct sk_buff *skb, u32 info)
seq = ntohl(th->seq);
if (sk->sk_state != TCP_LISTEN &&
!between(seq, tp->snd_una, tp->snd_nxt)) {
- NET_INC_STATS_BH(LINUX_MIB_OUTOFWINDOWICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
goto out;
}
@@ -422,7 +422,7 @@ void tcp_v4_err(struct sk_buff *skb, u32 info)
BUG_TRAP(!req->sk);
if (seq != tcp_rsk(req)->snt_isn) {
- NET_INC_STATS_BH(LINUX_MIB_OUTOFWINDOWICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
goto out;
}
@@ -1251,7 +1251,7 @@ int tcp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
if (get_seconds() < peer->tcp_ts_stamp + TCP_PAWS_MSL &&
(s32)(peer->tcp_ts - req->ts_recent) >
TCP_PAWS_WINDOW) {
- NET_INC_STATS_BH(LINUX_MIB_PAWSPASSIVEREJECTED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSPASSIVEREJECTED);
goto drop_and_release;
}
}
@@ -1365,9 +1365,9 @@ struct sock *tcp_v4_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
return newsk;
exit_overflow:
- NET_INC_STATS_BH(LINUX_MIB_LISTENOVERFLOWS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
exit:
- NET_INC_STATS_BH(LINUX_MIB_LISTENDROPS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
dst_release(dst);
return NULL;
}
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 8b02b10..204c421 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -244,7 +244,7 @@ kill:
}
if (paws_reject)
- NET_INC_STATS_BH(LINUX_MIB_PAWSESTABREJECTED);
+ NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_PAWSESTABREJECTED);
if (!th->rst) {
/* In this case we must reset the TIMEWAIT timer.
@@ -611,7 +611,7 @@ struct sock *tcp_check_req(struct sock *sk,struct sk_buff *skb,
if (!(flg & TCP_FLAG_RST))
req->rsk_ops->send_ack(skb, req);
if (paws_reject)
- NET_INC_STATS_BH(LINUX_MIB_PAWSESTABREJECTED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_PAWSESTABREJECTED);
return NULL;
}
@@ -695,7 +695,7 @@ struct sock *tcp_check_req(struct sock *sk,struct sk_buff *skb,
}
embryonic_reset:
- NET_INC_STATS_BH(LINUX_MIB_EMBRYONICRSTS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_EMBRYONICRSTS);
if (!(flg & TCP_FLAG_RST))
req->rsk_ops->send_reset(sk, skb);
diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
index 176f070..36a1970 100644
--- a/net/ipv4/tcp_output.c
+++ b/net/ipv4/tcp_output.c
@@ -1995,7 +1995,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
mib_idx = LINUX_MIB_TCPFASTRETRANS;
else
mib_idx = LINUX_MIB_TCPSLOWSTARTRETRANS;
- NET_INC_STATS_BH(mib_idx);
+ NET_INC_STATS_BH(sock_net(sk), mib_idx);
if (skb == tcp_write_queue_head(sk))
inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS,
@@ -2065,7 +2065,7 @@ void tcp_xmit_retransmit_queue(struct sock *sk)
inet_csk(sk)->icsk_rto,
TCP_RTO_MAX);
- NET_INC_STATS_BH(LINUX_MIB_TCPFORWARDRETRANS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPFORWARDRETRANS);
}
}
diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
index 6a480d1..328e0cf 100644
--- a/net/ipv4/tcp_timer.c
+++ b/net/ipv4/tcp_timer.c
@@ -48,7 +48,7 @@ static void tcp_write_err(struct sock *sk)
sk->sk_error_report(sk);
tcp_done(sk);
- NET_INC_STATS_BH(LINUX_MIB_TCPABORTONTIMEOUT);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONTIMEOUT);
}
/* Do not allow orphaned sockets to eat all our resources.
@@ -89,7 +89,7 @@ static int tcp_out_of_resources(struct sock *sk, int do_reset)
if (do_reset)
tcp_send_active_reset(sk, GFP_ATOMIC);
tcp_done(sk);
- NET_INC_STATS_BH(LINUX_MIB_TCPABORTONMEMORY);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPABORTONMEMORY);
return 1;
}
return 0;
@@ -179,7 +179,7 @@ static void tcp_delack_timer(unsigned long data)
if (sock_owned_by_user(sk)) {
/* Try again later. */
icsk->icsk_ack.blocked = 1;
- NET_INC_STATS_BH(LINUX_MIB_DELAYEDACKLOCKED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKLOCKED);
sk_reset_timer(sk, &icsk->icsk_delack_timer, jiffies + TCP_DELACK_MIN);
goto out_unlock;
}
@@ -198,7 +198,7 @@ static void tcp_delack_timer(unsigned long data)
if (!skb_queue_empty(&tp->ucopy.prequeue)) {
struct sk_buff *skb;
- NET_INC_STATS_BH(LINUX_MIB_TCPSCHEDULERFAILED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPSCHEDULERFAILED);
while ((skb = __skb_dequeue(&tp->ucopy.prequeue)) != NULL)
sk->sk_backlog_rcv(sk, skb);
@@ -218,7 +218,7 @@ static void tcp_delack_timer(unsigned long data)
icsk->icsk_ack.ato = TCP_ATO_MIN;
}
tcp_send_ack(sk);
- NET_INC_STATS_BH(LINUX_MIB_DELAYEDACKS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_DELAYEDACKS);
}
TCP_CHECK_TIMER(sk);
@@ -346,7 +346,7 @@ static void tcp_retransmit_timer(struct sock *sk)
} else {
mib_idx = LINUX_MIB_TCPTIMEOUTS;
}
- NET_INC_STATS_BH(mib_idx);
+ NET_INC_STATS_BH(sock_net(sk), mib_idx);
}
if (tcp_use_frto(sk)) {
diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
index a9cc8ab..00a8a5f 100644
--- a/net/ipv6/inet6_hashtables.c
+++ b/net/ipv6/inet6_hashtables.c
@@ -210,11 +210,11 @@ unique:
if (twp != NULL) {
*twp = tw;
- NET_INC_STATS_BH(LINUX_MIB_TIMEWAITRECYCLED);
+ NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITRECYCLED);
} else if (tw != NULL) {
/* Silly. Should hash-dance instead... */
inet_twsk_deschedule(tw, death_row);
- NET_INC_STATS_BH(LINUX_MIB_TIMEWAITRECYCLED);
+ NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITRECYCLED);
inet_twsk_put(tw);
}
diff --git a/net/ipv6/syncookies.c b/net/ipv6/syncookies.c
index 3ecc115..6a68eeb 100644
--- a/net/ipv6/syncookies.c
+++ b/net/ipv6/syncookies.c
@@ -137,7 +137,7 @@ __u32 cookie_v6_init_sequence(struct sock *sk, struct sk_buff *skb, __u16 *mssp)
;
*mssp = msstab[mssind] + 1;
- NET_INC_STATS_BH(LINUX_MIB_SYNCOOKIESSENT);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESSENT);
return secure_tcp_syn_cookie(&iph->saddr, &iph->daddr, th->source,
th->dest, ntohl(th->seq),
@@ -177,11 +177,11 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb)
if (time_after(jiffies, tp->last_synq_overflow + TCP_TIMEOUT_INIT) ||
(mss = cookie_check(skb, cookie)) == 0) {
- NET_INC_STATS_BH(LINUX_MIB_SYNCOOKIESFAILED);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESFAILED);
goto out;
}
- NET_INC_STATS_BH(LINUX_MIB_SYNCOOKIESRECV);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_SYNCOOKIESRECV);
/* check for timestamp cookie support */
memset(&tcp_opt, 0, sizeof(tcp_opt));
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index d58b83a..ca5b93a 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -340,7 +340,7 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
bh_lock_sock(sk);
if (sock_owned_by_user(sk))
- NET_INC_STATS_BH(LINUX_MIB_LOCKDROPPEDICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_LOCKDROPPEDICMPS);
if (sk->sk_state == TCP_CLOSE)
goto out;
@@ -349,7 +349,7 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
seq = ntohl(th->seq);
if (sk->sk_state != TCP_LISTEN &&
!between(seq, tp->snd_una, tp->snd_nxt)) {
- NET_INC_STATS_BH(LINUX_MIB_OUTOFWINDOWICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
goto out;
}
@@ -424,7 +424,7 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
BUG_TRAP(req->sk == NULL);
if (seq != tcp_rsk(req)->snt_isn) {
- NET_INC_STATS_BH(LINUX_MIB_OUTOFWINDOWICMPS);
+ NET_INC_STATS_BH(net, LINUX_MIB_OUTOFWINDOWICMPS);
goto out;
}
@@ -1449,9 +1449,9 @@ static struct sock * tcp_v6_syn_recv_sock(struct sock *sk, struct sk_buff *skb,
return newsk;
out_overflow:
- NET_INC_STATS_BH(LINUX_MIB_LISTENOVERFLOWS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENOVERFLOWS);
out:
- NET_INC_STATS_BH(LINUX_MIB_LISTENDROPS);
+ NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_LISTENDROPS);
if (opt && opt != np->opt)
sock_kfree_s(sk, opt, opt->tot_len);
dst_release(dst);
diff --git a/net/sctp/input.c b/net/sctp/input.c
index ed8834e..5ed93c0 100644
--- a/net/sctp/input.c
+++ b/net/sctp/input.c
@@ -486,7 +486,7 @@ struct sock *sctp_err_lookup(int family, struct sk_buff *skb,
* servers this needs to be solved differently.
*/
if (sock_owned_by_user(sk))
- NET_INC_STATS_BH(LINUX_MIB_LOCKDROPPEDICMPS);
+ NET_INC_STATS_BH(&init_net, LINUX_MIB_LOCKDROPPEDICMPS);
*app = asoc;
*tpp = transport;
--
1.5.5.1
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH 18/20] mib: add net to NET_INC_STATS_USER
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (15 preceding siblings ...)
2008-07-16 8:33 ` [PATCH 17/20] mib: add net to NET_INC_STATS_BH Pavel Emelyanov
@ 2008-07-16 8:34 ` Pavel Emelyanov
2008-07-16 8:37 ` [PATCH 19/20] mib: add net to NET_ADD_STATS_BH Pavel Emelyanov
` (3 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:34 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 2 +-
net/ipv4/tcp.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index 21df167..79d1319 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -163,7 +163,7 @@ DECLARE_SNMP_STAT(struct ipstats_mib, ip_statistics);
DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
#define NET_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(net_statistics, field); } while (0)
#define NET_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(net_statistics, field); } while (0)
-#define NET_INC_STATS_USER(field) SNMP_INC_STATS_USER(net_statistics, field)
+#define NET_INC_STATS_USER(net, field) do { (void)net; SNMP_INC_STATS_USER(net_statistics, field); } while (0)
#define NET_ADD_STATS_BH(field, adnd) SNMP_ADD_STATS_BH(net_statistics, field, adnd)
#define NET_ADD_STATS_USER(field, adnd) SNMP_ADD_STATS_USER(net_statistics, field, adnd)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 9aa4a0e..bbbd2fc 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1153,7 +1153,7 @@ static void tcp_prequeue_process(struct sock *sk)
struct sk_buff *skb;
struct tcp_sock *tp = tcp_sk(sk);
- NET_INC_STATS_USER(LINUX_MIB_TCPPREQUEUED);
+ NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPPREQUEUED);
/* RX process wants to run with disabled BHs, though it is not
* necessary */
@@ -1793,13 +1793,13 @@ void tcp_close(struct sock *sk, long timeout)
*/
if (data_was_unread) {
/* Unread data was tossed, zap the connection. */
- NET_INC_STATS_USER(LINUX_MIB_TCPABORTONCLOSE);
+ NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONCLOSE);
tcp_set_state(sk, TCP_CLOSE);
tcp_send_active_reset(sk, GFP_KERNEL);
} else if (sock_flag(sk, SOCK_LINGER) && !sk->sk_lingertime) {
/* Check zero linger _after_ checking for unread data. */
sk->sk_prot->disconnect(sk, 0);
- NET_INC_STATS_USER(LINUX_MIB_TCPABORTONDATA);
+ NET_INC_STATS_USER(sock_net(sk), LINUX_MIB_TCPABORTONDATA);
} else if (tcp_close_state(sk)) {
/* We FIN if the application ate all the data before
* zapping the connection.
--
1.5.5.1
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH 19/20] mib: add net to NET_ADD_STATS_BH
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (16 preceding siblings ...)
2008-07-16 8:34 ` [PATCH 18/20] mib: add net to NET_INC_STATS_USER Pavel Emelyanov
@ 2008-07-16 8:37 ` Pavel Emelyanov
2008-07-16 8:38 ` [PATCH 20/20] mib: add net to NET_ADD_STATS_USER Pavel Emelyanov
` (2 subsequent siblings)
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:37 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
This one is tricky.
The thing is that this macro is only used when killing tw buckets,
but since this killer is promiscuous with respect to which net each
particular tw belongs to, I can use the batched add only when NET_NS
is off. When net namespaces are on, I use NET_INC_STATS_BH for each
bucket instead.
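A minimal, self-contained userspace sketch of the resulting pattern, for
illustration only: every identifier in it (struct netns, tw_bucket,
kill_tw_buckets, the "timewaited" counter) is made up, and only the
CONFIG_NET_NS split mirrors the real hunks in inet_timewait_sock.c below.

    #include <stdio.h>

    #define CONFIG_NET_NS 1            /* set to 0 to model a !NET_NS build */

    struct netns { long timewaited; }; /* stand-in for a per-net SNMP mib */

    static struct netns init_net;

    struct tw_bucket { struct netns *net; };

    static int kill_tw_buckets(struct tw_bucket *tw, int n)
    {
            int killed = 0;

            for (int i = 0; i < n; i++) {
    #if CONFIG_NET_NS
                    /* buckets from different netns: charge each one as it dies */
                    tw[i].net->timewaited++;
    #endif
                    killed++;
            }
    #if !CONFIG_NET_NS
            /* every bucket belongs to init_net: one batched add is enough */
            init_net.timewaited += killed;
    #endif
            return killed;
    }

    int main(void)
    {
            struct netns a = { 0 }, b = { 0 };
            struct tw_bucket tws[3] = { { &a }, { &b }, { &a } };
            int killed = kill_tw_buckets(tws, 3);

            printf("killed=%d a=%ld b=%ld init_net=%ld\n",
                   killed, a.timewaited, b.timewaited, init_net.timewaited);
            return 0;
    }

Built with CONFIG_NET_NS set to 1 the per-net counters get charged one by
one; set to 0, only init_net receives the single batched add after the loop.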
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 2 +-
net/ipv4/inet_timewait_sock.c | 15 ++++++++++++---
2 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index 79d1319..a8275b1 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -164,7 +164,7 @@ DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
#define NET_INC_STATS(net, field) do { (void)net; SNMP_INC_STATS(net_statistics, field); } while (0)
#define NET_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(net_statistics, field); } while (0)
#define NET_INC_STATS_USER(net, field) do { (void)net; SNMP_INC_STATS_USER(net_statistics, field); } while (0)
-#define NET_ADD_STATS_BH(field, adnd) SNMP_ADD_STATS_BH(net_statistics, field, adnd)
+#define NET_ADD_STATS_BH(net, field, adnd) do { (void)net; SNMP_ADD_STATS_BH(net_statistics, field, adnd); } while (0)
#define NET_ADD_STATS_USER(field, adnd) SNMP_ADD_STATS_USER(net_statistics, field, adnd)
extern unsigned long snmp_fold_field(void *mib[], int offt);
diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
index 06006a5..75c2def 100644
--- a/net/ipv4/inet_timewait_sock.c
+++ b/net/ipv4/inet_timewait_sock.c
@@ -160,6 +160,9 @@ rescan:
__inet_twsk_del_dead_node(tw);
spin_unlock(&twdr->death_lock);
__inet_twsk_kill(tw, twdr->hashinfo);
+#ifdef CONFIG_NET_NS
+ NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITED);
+#endif
inet_twsk_put(tw);
killed++;
spin_lock(&twdr->death_lock);
@@ -178,8 +181,9 @@ rescan:
}
twdr->tw_count -= killed;
- NET_ADD_STATS_BH(LINUX_MIB_TIMEWAITED, killed);
-
+#ifndef CONFIG_NET_NS
+ NET_ADD_STATS_BH(&init_net, LINUX_MIB_TIMEWAITED, killed);
+#endif
return ret;
}
@@ -372,6 +376,9 @@ void inet_twdr_twcal_tick(unsigned long data)
&twdr->twcal_row[slot]) {
__inet_twsk_del_dead_node(tw);
__inet_twsk_kill(tw, twdr->hashinfo);
+#ifdef CONFIG_NET_NS
+ NET_INC_STATS_BH(twsk_net(tw), LINUX_MIB_TIMEWAITKILLED);
+#endif
inet_twsk_put(tw);
killed++;
}
@@ -395,7 +402,9 @@ void inet_twdr_twcal_tick(unsigned long data)
out:
if ((twdr->tw_count -= killed) == 0)
del_timer(&twdr->tw_timer);
- NET_ADD_STATS_BH(LINUX_MIB_TIMEWAITKILLED, killed);
+#ifndef CONFIG_NET_NS
+ NET_ADD_STATS_BH(&init_net, LINUX_MIB_TIMEWAITKILLED, killed);
+#endif
spin_unlock(&twdr->death_lock);
}
--
1.5.5.1
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH 20/20] mib: add net to NET_ADD_STATS_USER
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (17 preceding siblings ...)
2008-07-16 8:37 ` [PATCH 19/20] mib: add net to NET_ADD_STATS_BH Pavel Emelyanov
@ 2008-07-16 8:38 ` Pavel Emelyanov
[not found] ` <487DAE95.4090608@openvz.org>
2008-07-17 3:38 ` [PATCH 0/20] mib: add struct net to IP, TCP and NET macros David Miller
20 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:38 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
Done with NET_XXX_STATS macros :)
To be continued...
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/ip.h | 2 +-
net/ipv4/tcp.c | 6 +++---
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/include/net/ip.h b/include/net/ip.h
index a8275b1..02924fb 100644
--- a/include/net/ip.h
+++ b/include/net/ip.h
@@ -165,7 +165,7 @@ DECLARE_SNMP_STAT(struct linux_mib, net_statistics);
#define NET_INC_STATS_BH(net, field) do { (void)net; SNMP_INC_STATS_BH(net_statistics, field); } while (0)
#define NET_INC_STATS_USER(net, field) do { (void)net; SNMP_INC_STATS_USER(net_statistics, field); } while (0)
#define NET_ADD_STATS_BH(net, field, adnd) do { (void)net; SNMP_ADD_STATS_BH(net_statistics, field, adnd); } while (0)
-#define NET_ADD_STATS_USER(field, adnd) SNMP_ADD_STATS_USER(net_statistics, field, adnd)
+#define NET_ADD_STATS_USER(net, field, adnd) do { (void)net; SNMP_ADD_STATS_USER(net_statistics, field, adnd); } while (0)
extern unsigned long snmp_fold_field(void *mib[], int offt);
extern int snmp_mib_init(void *ptr[2], size_t mibsize);
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index bbbd2fc..e00ef15 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -1475,7 +1475,7 @@ int tcp_recvmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *msg,
/* __ Restore normal policy in scheduler __ */
if ((chunk = len - tp->ucopy.len) != 0) {
- NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMBACKLOG, chunk);
+ NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMBACKLOG, chunk);
len -= chunk;
copied += chunk;
}
@@ -1486,7 +1486,7 @@ do_prequeue:
tcp_prequeue_process(sk);
if ((chunk = len - tp->ucopy.len) != 0) {
- NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
+ NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
len -= chunk;
copied += chunk;
}
@@ -1601,7 +1601,7 @@ skip_copy:
tcp_prequeue_process(sk);
if (copied > 0 && (chunk = len - tp->ucopy.len) != 0) {
- NET_ADD_STATS_USER(LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
+ NET_ADD_STATS_USER(sock_net(sk), LINUX_MIB_TCPDIRECTCOPYFROMPREQUEUE, chunk);
len -= chunk;
copied += chunk;
}
--
1.5.5.1
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [PATCH (resend) 7/20] mib: drop unused TCP MIB accounting macros
[not found] ` <487DAE95.4090608@openvz.org>
@ 2008-07-16 8:43 ` Pavel Emelyanov
0 siblings, 0 replies; 22+ messages in thread
From: Pavel Emelyanov @ 2008-07-16 8:43 UTC (permalink / raw)
To: David Miller; +Cc: Linux Netdev List
The vger list rejected the triple X in the subject of the first version,
so I am resending the patch.
Log:
TCP_INC_STATS_USER and TCP_ADD_STATS_BH are currently unused.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
---
include/net/tcp.h | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/include/net/tcp.h b/include/net/tcp.h
index 6b827cc..048f8de 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -268,9 +268,7 @@ extern struct proto tcp_prot;
DECLARE_SNMP_STAT(struct tcp_mib, tcp_statistics);
#define TCP_INC_STATS(field) SNMP_INC_STATS(tcp_statistics, field)
#define TCP_INC_STATS_BH(field) SNMP_INC_STATS_BH(tcp_statistics, field)
-#define TCP_INC_STATS_USER(field) SNMP_INC_STATS_USER(tcp_statistics, field)
#define TCP_DEC_STATS(field) SNMP_DEC_STATS(tcp_statistics, field)
-#define TCP_ADD_STATS_BH(field, val) SNMP_ADD_STATS_BH(tcp_statistics, field, val)
#define TCP_ADD_STATS_USER(field, val) SNMP_ADD_STATS_USER(tcp_statistics, field, val)
extern void tcp_v4_err(struct sk_buff *skb, u32);
--
1.5.5.1
^ permalink raw reply related [flat|nested] 22+ messages in thread
* Re: [PATCH 0/20] mib: add struct net to IP, TCP and NET macros
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
` (19 preceding siblings ...)
[not found] ` <487DAE95.4090608@openvz.org>
@ 2008-07-17 3:38 ` David Miller
20 siblings, 0 replies; 22+ messages in thread
From: David Miller @ 2008-07-17 3:38 UTC (permalink / raw)
To: xemul; +Cc: netdev
From: Pavel Emelyanov <xemul@openvz.org>
Date: Wed, 16 Jul 2008 12:05:10 +0400
> This is actually the last very scattered set, that changes many calls
> to MIB accounting macros. What is left requires less patching.
>
> David, I hope this (and a finalizing set, I will prepare after this
> one is accepted) can go to current 2.6.27 merge window, but it is OK
> with me if your decision is to delay this till 2.6.28.
>
> Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Ok, all applied to net-next-2.6. I'll push after I do some build
testing.
Patch 19 is ugly :( But I can't recommend another approach.
If you could find a way to clean that stuff up, it would
really be appreciated.
If you can get your "finalizing set" to me today or tomorrow,
I guess I can apply it.
I plan to merge net-next to Linus either Sunday or Monday.
^ permalink raw reply [flat|nested] 22+ messages in thread
Thread overview: 22+ messages
2008-07-16 8:05 [PATCH 0/20] mib: add struct net to IP, TCP and NET macros Pavel Emelyanov
2008-07-16 8:08 ` [PATCH 1/20] ipv4: prepare net initialization for IP accounting Pavel Emelyanov
2008-07-16 8:09 ` [PATCH 2/20] mib: drop unused IP_INC_STATS_USER Pavel Emelyanov
2008-07-16 8:10 ` [PATCH 3/20] mib: add net to IP_INC_STATS Pavel Emelyanov
2008-07-16 8:12 ` [PATCH 4/20] mib: add net to IP_INC_STATS_BH Pavel Emelyanov
2008-07-16 8:14 ` [PATCH 5/20] mib: add net to IP_ADD_STATS_BH Pavel Emelyanov
2008-07-16 8:16 ` [PATCH 6/20] inet: prepare struct net for TCP MIB accounting Pavel Emelyanov
2008-07-16 8:19 ` [PATCH 8/20] tcp: add net to tcp_mib_init Pavel Emelyanov
2008-07-16 8:21 ` [PATCH 9/20] mib: add net to TCP_INC_STATS Pavel Emelyanov
2008-07-16 8:22 ` [PATCH 10/20] mib: add net to TCP_INC_STATS_BH Pavel Emelyanov
2008-07-16 8:23 ` [PATCH 11/20] mib: add net to TCP_DEC_STATS Pavel Emelyanov
2008-07-16 8:24 ` [PATCH 12/20] mib: add net to TCP_ADD_STATS_USER Pavel Emelyanov
2008-07-16 8:27 ` [PATCH 13/20] sock: add net to prot->enter_memory_pressure callback Pavel Emelyanov
2008-07-16 8:29 ` [PATCH 14/20] inet: prepare net on the stack for NET accounting macros Pavel Emelyanov
2008-07-16 8:31 ` [PATCH 15/20] tcp: replace tcp_sock argument with sock in some places Pavel Emelyanov
2008-07-16 8:32 ` [PATCH 16/20] mib: add net to NET_INC_STATS Pavel Emelyanov
2008-07-16 8:33 ` [PATCH 17/20] mib: add net to NET_INC_STATS_BH Pavel Emelyanov
2008-07-16 8:34 ` [PATCH 18/20] mib: add net to NET_INC_STATS_USER Pavel Emelyanov
2008-07-16 8:37 ` [PATCH 19/20] mib: add net to NET_ADD_STATS_BH Pavel Emelyanov
2008-07-16 8:38 ` [PATCH 20/20] mib: add net to NET_ADD_STATS_USER Pavel Emelyanov
[not found] ` <487DAE95.4090608@openvz.org>
2008-07-16 8:43 ` [PATCH (resend) 7/20] mib: drop unused TCP MIB accounting macros Pavel Emelyanov
2008-07-17 3:38 ` [PATCH 0/20] mib: add struct net to IP, TCP and NET macros David Miller