netdev.vger.kernel.org archive mirror
* [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large
@ 2011-11-13 20:13 Dan Carpenter
  2011-11-13 20:17 ` [patch 2/8 2.6.32] udp: use limited socket backlog Dan Carpenter
                   ` (7 more replies)
  0 siblings, 8 replies; 12+ messages in thread
From: Dan Carpenter @ 2011-11-13 20:13 UTC (permalink / raw)
  To: stable; +Cc: Greg Kroah-Hartman, netdev, Zhu Yi, Eric Dumazet

I'm still very new to this whole -stable business so please let me
know if I do something wrong.

This patch series is to address CVE-2010-4251 for the 2.6.32 stable
kernel.  Here is the CVE summary:

 "The socket implementation in net/core/sock.c in the Linux kernel
  before 2.6.34 does not properly manage a backlog of received
  packets, which allows remote attackers to cause a denial of service
  (memory consumption) by sending a large amount of network traffic,
  as demonstrated by netperf UDP tests."

[patch 1/8] introduces sk_add_backlog_limited()
[patch 2-7/8] change each network protocol to use sk_add_backlog_limited()
	where appropriate.
[patch 8/8] renames sk_add_backlog() to __sk_add_backlog() and
	sk_add_backlog_limited() to sk_add_backlog().

The patches mostly apply without changes.  The exception is
[patch 2/8] udp: use limited socket backlog; the rename in
[patch 8/8] then needed matching changes as well.

regards,
dan carpenter


* [patch 2/8 2.6.32] udp: use limited socket backlog
  2011-11-13 20:13 [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large Dan Carpenter
@ 2011-11-13 20:17 ` Dan Carpenter
  2011-11-13 20:18 ` [patch 3/8 2.6.32] x25: " Dan Carpenter
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Carpenter @ 2011-11-13 20:17 UTC (permalink / raw)
  To: stable; +Cc: Greg Kroah-Hartman, Zhu Yi, netdev, Eric Dumazet

I had to make some changes to the first chunk in net/ipv6/udp.c to
make this apply.

From 55349790d7cbf0d381873a7ece1dcafcffd4aaa9 Mon Sep 17 00:00:00 2001
From: Zhu Yi <yi.zhu@intel.com>
Date: Thu, 4 Mar 2010 18:01:42 +0000
Subject: [PATCH] udp: use limited socket backlog

Make udp adapt to the limited socket backlog change.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: "Pekka Savola (ipv6)" <pekkas@netcore.fi>
Cc: Patrick McHardy <kaber@trash.net>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 0ac8833..2eaeaf1 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1177,8 +1177,10 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk))
 		rc = __udp_queue_rcv_skb(sk, skb);
-	else
-		sk_add_backlog(sk, skb);
+	else if (sk_add_backlog_limited(sk, skb)) {
+		bh_unlock_sock(sk);
+		goto drop;
+	}
 	bh_unlock_sock(sk);
 
 	return rc;
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index ca520d4..4400eb0 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -473,16 +473,19 @@ static int __udp6_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
 			bh_lock_sock(sk2);
 			if (!sock_owned_by_user(sk2))
 				udpv6_queue_rcv_skb(sk2, buff);
-			else
-				sk_add_backlog(sk2, buff);
+			else if (sk_add_backlog_limited(sk2, buff)) {
+				kfree_skb(buff);
+				bh_unlock_sock(sk2);
+				goto out;
+			}
 			bh_unlock_sock(sk2);
 		}
 	}
 	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk))
 		udpv6_queue_rcv_skb(sk, skb);
-	else
-		sk_add_backlog(sk, skb);
+	else if (sk_add_backlog_limited(sk, skb))
+		kfree_skb(skb);
 	bh_unlock_sock(sk);
 out:
 	spin_unlock(&hslot->lock);
@@ -601,8 +604,12 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk))
 		udpv6_queue_rcv_skb(sk, skb);
-	else
-		sk_add_backlog(sk, skb);
+	else if (sk_add_backlog_limited(sk, skb)) {
+		atomic_inc(&sk->sk_drops);
+		bh_unlock_sock(sk);
+		sock_put(sk);
+		goto discard;
+	}
 	bh_unlock_sock(sk);
 	sock_put(sk);
 	return 0;


* [patch 3/8 2.6.32] x25: use limited socket backlog
  2011-11-13 20:13 [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large Dan Carpenter
  2011-11-13 20:17 ` [patch 2/8 2.6.32] udp: use limited socket backlog Dan Carpenter
@ 2011-11-13 20:18 ` Dan Carpenter
  2011-11-13 20:18 ` [patch 4/8 2.6.32] sctp: " Dan Carpenter
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Carpenter @ 2011-11-13 20:18 UTC (permalink / raw)
  To: stable; +Cc: Greg Kroah-Hartman, Zhu Yi, netdev, Eric Dumazet

The original patch applied without changes.

From 2499849ee8f513e795b9f2c19a42d6356e4943a4 Mon Sep 17 00:00:00 2001
From: Zhu Yi <yi.zhu@intel.com>
Date: Thu, 4 Mar 2010 18:01:46 +0000
Subject: [PATCH] x25: use limited socket backlog

Make x25 adapt to the limited socket backlog change.

Cc: Andrew Hendry <andrew.hendry@gmail.com>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 net/x25/x25_dev.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/x25/x25_dev.c b/net/x25/x25_dev.c
index 3e1efe5..a9da0dc 100644
--- a/net/x25/x25_dev.c
+++ b/net/x25/x25_dev.c
@@ -53,7 +53,7 @@ static int x25_receive_data(struct sk_buff *skb, struct x25_neigh *nb)
 		if (!sock_owned_by_user(sk)) {
 			queued = x25_process_rx_frame(sk, skb);
 		} else {
-			sk_add_backlog(sk, skb);
+			queued = !sk_add_backlog_limited(sk, skb);
 		}
 		bh_unlock_sock(sk);
 		sock_put(sk);
-- 
1.7.8.rc0.dirty


* [patch 4/8 2.6.32] sctp: use limited socket backlog
  2011-11-13 20:13 [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large Dan Carpenter
  2011-11-13 20:17 ` [patch 2/8 2.6.32] udp: use limited socket backlog Dan Carpenter
  2011-11-13 20:18 ` [patch 3/8 2.6.32] x25: " Dan Carpenter
@ 2011-11-13 20:18 ` Dan Carpenter
  2011-11-13 20:18 ` [patch 5/8 2.6.32] tipc: " Dan Carpenter
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Carpenter @ 2011-11-13 20:18 UTC (permalink / raw)
  To: stable; +Cc: Greg Kroah-Hartman, Zhu Yi, netdev, Eric Dumazet

The original patch applied without changes.

From 50b1a782f845140f4138f14a1ce8a4a6dd0cc82f Mon Sep 17 00:00:00 2001
From: Zhu Yi <yi.zhu@intel.com>
Date: Thu, 4 Mar 2010 18:01:44 +0000
Subject: [PATCH] sctp: use limited socket backlog

Make sctp adapt to the limited socket backlog change.

Cc: Vlad Yasevich <vladislav.yasevich@hp.com>
Cc: Sridhar Samudrala <sri@us.ibm.com>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 net/sctp/input.c  |   42 +++++++++++++++++++++++++++---------------
 net/sctp/socket.c |    3 +++
 2 files changed, 30 insertions(+), 15 deletions(-)

diff --git a/net/sctp/input.c b/net/sctp/input.c
index c0c973e..cbc0636 100644
--- a/net/sctp/input.c
+++ b/net/sctp/input.c
@@ -75,7 +75,7 @@ static struct sctp_association *__sctp_lookup_association(
 					const union sctp_addr *peer,
 					struct sctp_transport **pt);
 
-static void sctp_add_backlog(struct sock *sk, struct sk_buff *skb);
+static int sctp_add_backlog(struct sock *sk, struct sk_buff *skb);
 
 
 /* Calculate the SCTP checksum of an SCTP packet.  */
@@ -265,8 +265,13 @@ int sctp_rcv(struct sk_buff *skb)
 	}
 
 	if (sock_owned_by_user(sk)) {
+		if (sctp_add_backlog(sk, skb)) {
+			sctp_bh_unlock_sock(sk);
+			sctp_chunk_free(chunk);
+			skb = NULL; /* sctp_chunk_free already freed the skb */
+			goto discard_release;
+		}
 		SCTP_INC_STATS_BH(SCTP_MIB_IN_PKT_BACKLOG);
-		sctp_add_backlog(sk, skb);
 	} else {
 		SCTP_INC_STATS_BH(SCTP_MIB_IN_PKT_SOFTIRQ);
 		sctp_inq_push(&chunk->rcvr->inqueue, chunk);
@@ -336,8 +341,10 @@ int sctp_backlog_rcv(struct sock *sk, struct sk_buff *skb)
 		sctp_bh_lock_sock(sk);
 
 		if (sock_owned_by_user(sk)) {
-			sk_add_backlog(sk, skb);
-			backloged = 1;
+			if (sk_add_backlog_limited(sk, skb))
+				sctp_chunk_free(chunk);
+			else
+				backloged = 1;
 		} else
 			sctp_inq_push(inqueue, chunk);
 
@@ -362,22 +369,27 @@ done:
 	return 0;
 }
 
-static void sctp_add_backlog(struct sock *sk, struct sk_buff *skb)
+static int sctp_add_backlog(struct sock *sk, struct sk_buff *skb)
 {
 	struct sctp_chunk *chunk = SCTP_INPUT_CB(skb)->chunk;
 	struct sctp_ep_common *rcvr = chunk->rcvr;
+	int ret;
 
-	/* Hold the assoc/ep while hanging on the backlog queue.
-	 * This way, we know structures we need will not disappear from us
-	 */
-	if (SCTP_EP_TYPE_ASSOCIATION == rcvr->type)
-		sctp_association_hold(sctp_assoc(rcvr));
-	else if (SCTP_EP_TYPE_SOCKET == rcvr->type)
-		sctp_endpoint_hold(sctp_ep(rcvr));
-	else
-		BUG();
+	ret = sk_add_backlog_limited(sk, skb);
+	if (!ret) {
+		/* Hold the assoc/ep while hanging on the backlog queue.
+		 * This way, we know structures we need will not disappear
+		 * from us
+		 */
+		if (SCTP_EP_TYPE_ASSOCIATION == rcvr->type)
+			sctp_association_hold(sctp_assoc(rcvr));
+		else if (SCTP_EP_TYPE_SOCKET == rcvr->type)
+			sctp_endpoint_hold(sctp_ep(rcvr));
+		else
+			BUG();
+	}
+	return ret;
 
-	sk_add_backlog(sk, skb);
 }
 
 /* Handle icmp frag needed error. */
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index f6d1e59..dfc5c12 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -3720,6 +3720,9 @@ SCTP_STATIC int sctp_init_sock(struct sock *sk)
 	SCTP_DBG_OBJCNT_INC(sock);
 	percpu_counter_inc(&sctp_sockets_allocated);
 
+	/* Set socket backlog limit. */
+	sk->sk_backlog.limit = sysctl_sctp_rmem[1];
+
 	local_bh_disable();
 	sock_prot_inuse_add(sock_net(sk), sk->sk_prot, 1);
 	local_bh_enable();
-- 
1.7.8.rc0.dirty


* [patch 5/8 2.6.32] tipc: use limited socket backlog
  2011-11-13 20:13 [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large Dan Carpenter
                   ` (2 preceding siblings ...)
  2011-11-13 20:18 ` [patch 4/8 2.6.32] sctp: " Dan Carpenter
@ 2011-11-13 20:18 ` Dan Carpenter
  2011-11-13 20:19 ` [patch 6/8 2.6.32] tcp: " Dan Carpenter
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Carpenter @ 2011-11-13 20:18 UTC (permalink / raw)
  To: stable; +Cc: Greg Kroah-Hartman, Zhu Yi, netdev, Eric Dumazet

The original patch applied without changes.

From 53eecb1be5ae499d399d2923933937a9ea1a284f Mon Sep 17 00:00:00 2001
From: Zhu Yi <yi.zhu@intel.com>
Date: Thu, 4 Mar 2010 18:01:45 +0000
Subject: [PATCH] tipc: use limited socket backlog

Make tipc adapt to the limited socket backlog change.

Cc: Jon Maloy <jon.maloy@ericsson.com>
Cc: Allan Stephens <allan.stephens@windriver.com>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Allan Stephens <allan.stephens@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 net/tipc/socket.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index 1ea64f0..22bfbc3 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1322,8 +1322,10 @@ static u32 dispatch(struct tipc_port *tport, struct sk_buff *buf)
 	if (!sock_owned_by_user(sk)) {
 		res = filter_rcv(sk, buf);
 	} else {
-		sk_add_backlog(sk, buf);
-		res = TIPC_OK;
+		if (sk_add_backlog_limited(sk, buf))
+			res = TIPC_ERR_OVERLOAD;
+		else
+			res = TIPC_OK;
 	}
 	bh_unlock_sock(sk);
 
-- 
1.7.8.rc0.dirty


* [patch 6/8 2.6.32] tcp: use limited socket backlog
  2011-11-13 20:13 [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large Dan Carpenter
                   ` (3 preceding siblings ...)
  2011-11-13 20:18 ` [patch 5/8 2.6.32] tipc: " Dan Carpenter
@ 2011-11-13 20:19 ` Dan Carpenter
  2011-11-13 20:19 ` [patch 7/8 2.6.32] llc: " Dan Carpenter
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Carpenter @ 2011-11-13 20:19 UTC (permalink / raw)
  To: stable; +Cc: Greg Kroah-Hartman, Zhu Yi, netdev, Eric Dumazet

The original patch applied without changes.

From 6b03a53a5ab7ccf2d5d69f96cf1c739c4d2a8fb9 Mon Sep 17 00:00:00 2001
From: Zhu Yi <yi.zhu@intel.com>
Date: Thu, 4 Mar 2010 18:01:41 +0000
Subject: [PATCH] tcp: use limited socket backlog

Make tcp adapt to the limited socket backlog change.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: "Pekka Savola (ipv6)" <pekkas@netcore.fi>
Cc: Patrick McHardy <kaber@trash.net>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 net/ipv4/tcp_ipv4.c |    6 ++++--
 net/ipv6/tcp_ipv6.c |    6 ++++--
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index c3588b4..4baf194 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1682,8 +1682,10 @@ process:
 			if (!tcp_prequeue(sk, skb))
 				ret = tcp_v4_do_rcv(sk, skb);
 		}
-	} else
-		sk_add_backlog(sk, skb);
+	} else if (sk_add_backlog_limited(sk, skb)) {
+		bh_unlock_sock(sk);
+		goto discard_and_relse;
+	}
 	bh_unlock_sock(sk);
 
 	sock_put(sk);
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 6963a6b..c4ea9d5 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1740,8 +1740,10 @@ process:
 			if (!tcp_prequeue(sk, skb))
 				ret = tcp_v6_do_rcv(sk, skb);
 		}
-	} else
-		sk_add_backlog(sk, skb);
+	} else if (sk_add_backlog_limited(sk, skb)) {
+		bh_unlock_sock(sk);
+		goto discard_and_relse;
+	}
 	bh_unlock_sock(sk);
 
 	sock_put(sk);
-- 
1.7.8.rc0.dirty


* [patch 7/8 2.6.32] llc: use limited socket backlog
  2011-11-13 20:13 [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large Dan Carpenter
                   ` (4 preceding siblings ...)
  2011-11-13 20:19 ` [patch 6/8 2.6.32] tcp: " Dan Carpenter
@ 2011-11-13 20:19 ` Dan Carpenter
  2011-11-13 20:19 ` [patch 8/8 2.6.32] net: backlog functions rename Dan Carpenter
  2011-11-13 20:58 ` [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large David Miller
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Carpenter @ 2011-11-13 20:19 UTC (permalink / raw)
  To: stable; +Cc: Greg Kroah-Hartman, Zhu Yi, netdev, Eric Dumazet

The original patch applied without changes.

From 79545b681961d7001c1f4c3eb9ffb87bed4485db Mon Sep 17 00:00:00 2001
From: Zhu Yi <yi.zhu@intel.com>
Date: Thu, 4 Mar 2010 18:01:43 +0000
Subject: [PATCH] llc: use limited socket backlog

Make llc adapt to the limited socket backlog change.

Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 net/llc/llc_conn.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
index a8dde9b..c0539ff 100644
--- a/net/llc/llc_conn.c
+++ b/net/llc/llc_conn.c
@@ -827,7 +827,8 @@ void llc_conn_handler(struct llc_sap *sap, struct sk_buff *skb)
 	else {
 		dprintk("%s: adding to backlog...\n", __func__);
 		llc_set_backlog_type(skb, LLC_PACKET);
-		sk_add_backlog(sk, skb);
+		if (sk_add_backlog_limited(sk, skb))
+			goto drop_unlock;
 	}
 out:
 	bh_unlock_sock(sk);
-- 
1.7.8.rc0.dirty


* [patch 8/8 2.6.32] net: backlog functions rename
  2011-11-13 20:13 [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large Dan Carpenter
                   ` (5 preceding siblings ...)
  2011-11-13 20:19 ` [patch 7/8 2.6.32] llc: " Dan Carpenter
@ 2011-11-13 20:19 ` Dan Carpenter
  2011-11-13 20:58 ` [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large David Miller
  7 siblings, 0 replies; 12+ messages in thread
From: Dan Carpenter @ 2011-11-13 20:19 UTC (permalink / raw)
  To: stable; +Cc: Greg Kroah-Hartman, Zhu Yi, netdev, Eric Dumazet

I had to make some trivial changes here as a result of the changes I
made in [patch 2/8 2.6.32] udp: use limited socket backlog.

From a3a858ff18a72a8d388e31ab0d98f7e944841a62 Mon Sep 17 00:00:00 2001
From: Zhu Yi <yi.zhu@intel.com>
Date: Thu, 4 Mar 2010 18:01:47 +0000
Subject: [PATCH] net: backlog functions rename

sk_add_backlog -> __sk_add_backlog
sk_add_backlog_limited -> sk_add_backlog

Signed-off-by: Zhu Yi <yi.zhu@intel.com>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---

diff --git a/include/net/sock.h b/include/net/sock.h
index 5a4844a..12fdc27 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -564,7 +564,7 @@ static inline int sk_stream_memory_free(struct sock *sk)
 }
 
 /* OOB backlog add */
-static inline void sk_add_backlog(struct sock *sk, struct sk_buff *skb)
+static inline void __sk_add_backlog(struct sock *sk, struct sk_buff *skb)
 {
 	if (!sk->sk_backlog.tail) {
 		sk->sk_backlog.head = sk->sk_backlog.tail = skb;
@@ -576,12 +576,12 @@ static inline void sk_add_backlog(struct sock *sk, struct sk_buff *skb)
 }
 
 /* The per-socket spinlock must be held here. */
-static inline int sk_add_backlog_limited(struct sock *sk, struct sk_buff *skb)
+static inline int sk_add_backlog(struct sock *sk, struct sk_buff *skb)
 {
 	if (sk->sk_backlog.len >= max(sk->sk_backlog.limit, sk->sk_rcvbuf << 1))
 		return -ENOBUFS;
 
-	sk_add_backlog(sk, skb);
+	__sk_add_backlog(sk, skb);
 	sk->sk_backlog.len += skb->truesize;
 	return 0;
 }
diff --git a/net/core/sock.c b/net/core/sock.c
index 5797dab..eb191dd 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -336,7 +336,7 @@ int sk_receive_skb(struct sock *sk, struct sk_buff *skb, const int nested)
 		rc = sk_backlog_rcv(sk, skb);
 
 		mutex_release(&sk->sk_lock.dep_map, 1, _RET_IP_);
-	} else if (sk_add_backlog_limited(sk, skb)) {
+	} else if (sk_add_backlog(sk, skb)) {
 		bh_unlock_sock(sk);
 		atomic_inc(&sk->sk_drops);
 		goto discard_and_relse;
diff --git a/net/dccp/minisocks.c b/net/dccp/minisocks.c
index 5ca49ce..1d5abd2 100644
--- a/net/dccp/minisocks.c
+++ b/net/dccp/minisocks.c
@@ -254,7 +254,7 @@ int dccp_child_process(struct sock *parent, struct sock *child,
 		 * in main socket hash table and lock on listening
 		 * socket does not protect us more.
 		 */
-		sk_add_backlog(child, skb);
+		__sk_add_backlog(child, skb);
 	}
 
 	bh_unlock_sock(child);
diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 6e824ac1..c64db81 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1635,7 +1635,7 @@ process:
 			if (!tcp_prequeue(sk, skb))
 				ret = tcp_v4_do_rcv(sk, skb);
 		}
-	} else if (sk_add_backlog_limited(sk, skb)) {
+	} else if (sk_add_backlog(sk, skb)) {
 		bh_unlock_sock(sk);
 		goto discard_and_relse;
 	}
diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index 4c03598..ab7ac46 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -702,7 +702,7 @@ int tcp_child_process(struct sock *parent, struct sock *child,
 		 * in main socket hash table and lock on listening
 		 * socket does not protect us more.
 		 */
-		sk_add_backlog(child, skb);
+		__sk_add_backlog(child, skb);
 	}
 
 	bh_unlock_sock(child);
diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
index 2eaeaf1..be49cd4 100644
--- a/net/ipv4/udp.c
+++ b/net/ipv4/udp.c
@@ -1177,7 +1177,7 @@ int udp_queue_rcv_skb(struct sock *sk, struct sk_buff *skb)
 	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk))
 		rc = __udp_queue_rcv_skb(sk, skb);
-	else if (sk_add_backlog_limited(sk, skb)) {
+	else if (sk_add_backlog(sk, skb)) {
 		bh_unlock_sock(sk);
 		goto drop;
 	}
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index dc06299..a1114ae 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -1686,7 +1686,7 @@ process:
 			if (!tcp_prequeue(sk, skb))
 				ret = tcp_v6_do_rcv(sk, skb);
 		}
-	} else if (sk_add_backlog_limited(sk, skb)) {
+	} else if (sk_add_backlog(sk, skb)) {
 		bh_unlock_sock(sk);
 		goto discard_and_relse;
 	}
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 4400eb0..e0cf8f3 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -473,7 +473,7 @@ static int __udp6_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
 			bh_lock_sock(sk2);
 			if (!sock_owned_by_user(sk2))
 				udpv6_queue_rcv_skb(sk2, buff);
-			else if (sk_add_backlog_limited(sk2, buff)) {
+			else if (sk_add_backlog(sk2, buff)) {
 				kfree_skb(buff);
 				bh_unlock_sock(sk2);
 				goto out;
@@ -484,7 +484,7 @@ static int __udp6_lib_mcast_deliver(struct net *net, struct sk_buff *skb,
 	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk))
 		udpv6_queue_rcv_skb(sk, skb);
-	else if (sk_add_backlog_limited(sk, skb))
+	else if (sk_add_backlog(sk, skb))
 		kfree_skb(skb);
 	bh_unlock_sock(sk);
 out:
@@ -604,7 +604,7 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 	bh_lock_sock(sk);
 	if (!sock_owned_by_user(sk))
 		udpv6_queue_rcv_skb(sk, skb);
-	else if (sk_add_backlog_limited(sk, skb)) {
+	else if (sk_add_backlog(sk, skb)) {
 		atomic_inc(&sk->sk_drops);
 		bh_unlock_sock(sk);
 		sock_put(sk);
diff --git a/net/llc/llc_c_ac.c b/net/llc/llc_c_ac.c
index 019c780..86d6985 100644
--- a/net/llc/llc_c_ac.c
+++ b/net/llc/llc_c_ac.c
@@ -1437,7 +1437,7 @@ static void llc_process_tmr_ev(struct sock *sk, struct sk_buff *skb)
 			llc_conn_state_process(sk, skb);
 		else {
 			llc_set_backlog_type(skb, LLC_EVENT);
-			sk_add_backlog(sk, skb);
+			__sk_add_backlog(sk, skb);
 		}
 	}
 }
diff --git a/net/llc/llc_conn.c b/net/llc/llc_conn.c
index 8f97546..c61ca88 100644
--- a/net/llc/llc_conn.c
+++ b/net/llc/llc_conn.c
@@ -756,7 +756,7 @@ void llc_conn_handler(struct llc_sap *sap, struct sk_buff *skb)
 	else {
 		dprintk("%s: adding to backlog...\n", __func__);
 		llc_set_backlog_type(skb, LLC_PACKET);
-		if (sk_add_backlog_limited(sk, skb))
+		if (sk_add_backlog(sk, skb))
 			goto drop_unlock;
 	}
 out:
diff --git a/net/sctp/input.c b/net/sctp/input.c
index 3271c7b..eaacf76 100644
--- a/net/sctp/input.c
+++ b/net/sctp/input.c
@@ -341,7 +341,7 @@ int sctp_backlog_rcv(struct sock *sk, struct sk_buff *skb)
 		sctp_bh_lock_sock(sk);
 
 		if (sock_owned_by_user(sk)) {
-			if (sk_add_backlog_limited(sk, skb))
+			if (sk_add_backlog(sk, skb))
 				sctp_chunk_free(chunk);
 			else
 				backloged = 1;
@@ -375,7 +375,7 @@ static int sctp_add_backlog(struct sock *sk, struct sk_buff *skb)
 	struct sctp_ep_common *rcvr = chunk->rcvr;
 	int ret;
 
-	ret = sk_add_backlog_limited(sk, skb);
+	ret = sk_add_backlog(sk, skb);
 	if (!ret) {
 		/* Hold the assoc/ep while hanging on the backlog queue.
 		 * This way, we know structures we need will not disappear
diff --git a/net/tipc/socket.c b/net/tipc/socket.c
index bf4b320..f1e6629 100644
--- a/net/tipc/socket.c
+++ b/net/tipc/socket.c
@@ -1323,7 +1323,7 @@ static u32 dispatch(struct tipc_port *tport, struct sk_buff *buf)
 	if (!sock_owned_by_user(sk)) {
 		res = filter_rcv(sk, buf);
 	} else {
-		if (sk_add_backlog_limited(sk, buf))
+		if (sk_add_backlog(sk, buf))
 			res = TIPC_ERR_OVERLOAD;
 		else
 			res = TIPC_OK;
diff --git a/net/x25/x25_dev.c b/net/x25/x25_dev.c
index a9da0dc..52e3042 100644
--- a/net/x25/x25_dev.c
+++ b/net/x25/x25_dev.c
@@ -53,7 +53,7 @@ static int x25_receive_data(struct sk_buff *skb, struct x25_neigh *nb)
 		if (!sock_owned_by_user(sk)) {
 			queued = x25_process_rx_frame(sk, skb);
 		} else {
-			queued = !sk_add_backlog_limited(sk, skb);
+			queued = !sk_add_backlog(sk, skb);
 		}
 		bh_unlock_sock(sk);
 		sock_put(sk);


* Re: [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large
  2011-11-13 20:13 [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large Dan Carpenter
                   ` (6 preceding siblings ...)
  2011-11-13 20:19 ` [patch 8/8 2.6.32] net: backlog functions rename Dan Carpenter
@ 2011-11-13 20:58 ` David Miller
  2011-11-13 23:29   ` Ben Hutchings
  7 siblings, 1 reply; 12+ messages in thread
From: David Miller @ 2011-11-13 20:58 UTC (permalink / raw)
  To: dan.carpenter; +Cc: stable, greg, netdev, yi.zhu, eric.dumazet

From: Dan Carpenter <dan.carpenter@oracle.com>
Date: Sun, 13 Nov 2011 23:13:36 +0300

> This patch series is to address CVE-2010-4251 for the 2.6.32 stable
> kernel.  Here is the CVE summary:
> 
>  "The socket implementation in net/core/sock.c in the Linux kernel
>   before 2.6.34 does not properly manage a backlog of received
>   packets, which allows remote attackers to cause a denial of service
>   (memory consumption) by sending a large amount of network traffic,
>   as demonstrated by netperf UDP tests."
> 
> [patch 1/8] introduces sk_add_backlog_limited()
> [patch 2-7/8] change each network protocol to use sk_add_backlog_limited()
> 	where appropriate.
> [patch 8/8] renames sk_add_backlog() to __sk_add_backlog() and
> 	sk_add_backlog_limited() to sk_add_backlog().
> 
> The patches mostly apply without changes.  The exception is:
> [patch 2/8] udp: use limited socket backlog
> Then the rename [patch 8/8] needed to be changed as well to match.

These changes are way too intrusive and potentially regression
inducing for -stable inclusion, especially for a kernel that is in
such deep maintenance mode as 2.6.32.

Also, I tend to personally submit networking -stable patches, so please
do not bypass me in this manner and instead recommend such submissions
on the netdev list so I can evaluate the request.

Thanks.


* Re: [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large
  2011-11-13 20:58 ` [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large David Miller
@ 2011-11-13 23:29   ` Ben Hutchings
  2011-11-14  3:24     ` David Miller
  0 siblings, 1 reply; 12+ messages in thread
From: Ben Hutchings @ 2011-11-13 23:29 UTC (permalink / raw)
  To: David Miller; +Cc: dan.carpenter, stable, greg, netdev, yi.zhu, eric.dumazet


On Sun, 2011-11-13 at 15:58 -0500, David Miller wrote:
> From: Dan Carpenter <dan.carpenter@oracle.com>
> Date: Sun, 13 Nov 2011 23:13:36 +0300
> 
> > This patch series is to address CVE-2010-4251 for the 2.6.32 stable
> > kernel.  Here is the CVE summary:
> > 
> >  "The socket implementation in net/core/sock.c in the Linux kernel
> >   before 2.6.34 does not properly manage a backlog of received
> >   packets, which allows remote attackers to cause a denial of service
> >   (memory consumption) by sending a large amount of network traffic,
> >   as demonstrated by netperf UDP tests."
> > 
> > [patch 1/8] introduces sk_add_backlog_limited()
> > [patch 2-7/8] change each network protocol to use sk_add_backlog_limited()
> > 	where appropriate.
> > [patch 8/8] renames sk_add_backlog() to __sk_add_backlog() and
> > 	sk_add_backlog_limited() to sk_add_backlog().
> > 
> > The patches mostly apply without changes.  The exception is:
> > [patch 2/8] udp: use limited socket backlog
> > Then the rename [patch 8/8] needed to be changed as well to match.
> 
> These changes are way too intrusive and potentially regression
> inducing for -stable inclusion, especially a kernel that is in such
> deep maintainence mode as 2.6.32 is.

Debian 6.0, based on Linux 2.6.32, has patches #1-7, except that our
backport of #2 (for udp) looks a bit different.

Apparently several other distributions have also applied these.

> Also, I tend to personally submit networking -stable patches, so please
> do not bypass me in this manner and instead recommend such submissions
> on the netdev list so I can evaluate the request.

But you've previously said that you are not submitting networking
patches to the longterm series.  Did you change your mind?

Ben.

-- 
Ben Hutchings
Never attribute to conspiracy what can adequately be explained by stupidity.



* Re: [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large
  2011-11-13 23:29   ` Ben Hutchings
@ 2011-11-14  3:24     ` David Miller
  2011-11-14 18:11       ` Greg KH
  0 siblings, 1 reply; 12+ messages in thread
From: David Miller @ 2011-11-14  3:24 UTC (permalink / raw)
  To: ben; +Cc: dan.carpenter, stable, greg, netdev, yi.zhu, eric.dumazet

From: Ben Hutchings <ben@decadent.org.uk>
Date: Sun, 13 Nov 2011 23:29:09 +0000

> But you've previously said that you are not submitting networking
> patches to the longterm series.  Did you change your mind?

No, I in fact haven't.

But I will say that if distributions want to apply this thing, that's
fine, but it doesn't automatically make it a good idea for -stable
to take it too.


* Re: [patch 0/8 2.6.32] CVE-2010-4251: packet backlog can get too large
  2011-11-14  3:24     ` David Miller
@ 2011-11-14 18:11       ` Greg KH
  0 siblings, 0 replies; 12+ messages in thread
From: Greg KH @ 2011-11-14 18:11 UTC (permalink / raw)
  To: David Miller; +Cc: ben, dan.carpenter, stable, netdev, yi.zhu, eric.dumazet

On Sun, Nov 13, 2011 at 10:24:10PM -0500, David Miller wrote:
> From: Ben Hutchings <ben@decadent.org.uk>
> Date: Sun, 13 Nov 2011 23:29:09 +0000
> 
> > But you've previously said that you are not submitting networking
> > patches to the longterm series.  Did you change your mind?
> 
> No, I in fact haven't.
> 
> But I will say that if distributions want to apply this thing, that's
> fine, but it doesn't automatically make it a good idea for -stable
> to take it too.

Thanks for letting me know, I'll drop these from my to-apply mbox.

greg k-h

