netdev.vger.kernel.org archive mirror
* [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management
@ 2007-09-12 19:46 Vlad Yasevich
  2007-09-12 19:46 ` [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list Vlad Yasevich
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-12 19:46 UTC (permalink / raw)
  To: netdev; +Cc: lksctp-developers

Ok, this is version 2 of the patch that incorporates comments from
Sridhar Samudrala and Paul McKenney.

The changes incorporated are:
 1.  Add locking around the modification of the global sctp_local_addr_list
 when processing the notifiers.  After looking around, it is possible for
 the IPv4 and IPv6 notifiers to be called at the same time, which means that
 we need a spin lock.

 2.  After Paul's explanation of why writers would want to call
 rcu_read_lock, it's apparent that we really don't need that in our usage.
 I've removed all the uses that I could find and consider safe.

 3.  I took Paul's suggestion of passing an explicit rcu callback when
 removing entries from the list, since removals can happen in different
 contexts.  This made the removal code rather simple (sketched below).
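
For reference, this is roughly what the IPv4 NETDEV_DOWN path ends up
looking like with points 1 and 3 applied (just a sketch here; see patch
1/2 for the real code):

	spin_lock_bh(&sctp_local_addr_lock);
	list_for_each_entry_safe(addr, temp, &sctp_local_addr_list, list) {
		if (addr->a.v4.sin_addr.s_addr == ifa->ifa_local) {
			/* Hide the entry from new readers and unlink it.
			 * The spin lock only serializes the IPv4 and IPv6
			 * notifier writers; readers are never blocked.
			 */
			addr->valid = 0;
			list_del_rcu(&addr->list);
			break;
		}
	}
	spin_unlock_bh(&sctp_local_addr_lock);

	/* Defer the kfree() until all current readers are done. */
	if (addr && !addr->valid)
		call_rcu(&addr->rcu, sctp_local_addr_free);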

Things I've left behind:
 1.  The valid flag remains.  After discussing the virtues with Paul Moore
 (who used the same functionality in Netlabel code), I think that the
 valid flag slightly reduces the possibility that the reader will use
 an entry that's about to be removed.  It's a good thing in our case.
 It doesn't really harm anything if a reader uses a !valid entry, but
 I'd like to reduce that chance.
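
The reader side then looks roughly like this (again only a sketch).  The
valid flag just narrows the window; the deferred free above is what
guarantees the entry itself can't disappear under the reader:

	rcu_read_lock();
	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
		if (!addr->valid)
			continue;	/* being removed, skip it */
		/* ... safe to use addr->a here ... */
	}
	rcu_read_unlock();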

I would appreciate any further comments.

Thanks
-vlad

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list
  2007-09-12 19:46 [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management Vlad Yasevich
@ 2007-09-12 19:46 ` Vlad Yasevich
  2007-09-12 22:26   ` Paul E. McKenney
  2007-09-12 23:03   ` Sridhar Samudrala
  2007-09-12 19:46 ` [RFC v2 PATCH 2/2] SCTP: Convert bind_addr_list locking to RCU Vlad Yasevich
  2007-09-12 20:59 ` [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management Vlad Yasevich
  2 siblings, 2 replies; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-12 19:46 UTC (permalink / raw)
  To: netdev; +Cc: lksctp-developers, Vlad Yasevich

sctp_localaddr_list is modified dynamically via NETDEV_UP
and NETDEV_DOWN events, but there is no synchronization
between the writer (the event handler) and the readers.  As a
result, readers can access an entry that has been freed and
crash the system.

Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
---
 include/net/sctp/sctp.h    |    1 +
 include/net/sctp/structs.h |    6 +++++
 net/sctp/bind_addr.c       |    2 +
 net/sctp/ipv6.c            |   34 ++++++++++++++++++++++---------
 net/sctp/protocol.c        |   46 ++++++++++++++++++++++++++++++-------------
 net/sctp/socket.c          |   38 +++++++++++++++++++++++------------
 6 files changed, 90 insertions(+), 37 deletions(-)

diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
index d529045..c9cc00c 100644
--- a/include/net/sctp/sctp.h
+++ b/include/net/sctp/sctp.h
@@ -123,6 +123,7 @@
  * sctp/protocol.c
  */
 extern struct sock *sctp_get_ctl_sock(void);
+extern void sctp_local_addr_free(struct rcu_head *head);
 extern int sctp_copy_local_addr_list(struct sctp_bind_addr *,
 				     sctp_scope_t, gfp_t gfp,
 				     int flags);
diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
index c0d5848..a89e361 100644
--- a/include/net/sctp/structs.h
+++ b/include/net/sctp/structs.h
@@ -207,6 +207,9 @@ extern struct sctp_globals {
 	 * It is a list of sctp_sockaddr_entry.
 	 */
 	struct list_head local_addr_list;
+
+	/* Lock that protects the local_addr_list writers */
+	spinlock_t addr_list_lock;
 	
 	/* Flag to indicate if addip is enabled. */
 	int addip_enable;
@@ -242,6 +245,7 @@ extern struct sctp_globals {
 #define sctp_port_alloc_lock		(sctp_globals.port_alloc_lock)
 #define sctp_port_hashtable		(sctp_globals.port_hashtable)
 #define sctp_local_addr_list		(sctp_globals.local_addr_list)
+#define sctp_local_addr_lock		(sctp_globals.addr_list_lock)
 #define sctp_addip_enable		(sctp_globals.addip_enable)
 #define sctp_prsctp_enable		(sctp_globals.prsctp_enable)
 
@@ -737,8 +741,10 @@ const union sctp_addr *sctp_source(const struct sctp_chunk *chunk);
 /* This is a structure for holding either an IPv6 or an IPv4 address.  */
 struct sctp_sockaddr_entry {
 	struct list_head list;
+	struct rcu_head	rcu;
 	union sctp_addr a;
 	__u8 use_as_src;
+	__u8 valid;
 };
 
 typedef struct sctp_chunk *(sctp_packet_phandler_t)(struct sctp_association *);
diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
index fdb287a..7fc369f 100644
--- a/net/sctp/bind_addr.c
+++ b/net/sctp/bind_addr.c
@@ -163,8 +163,10 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
 		addr->a.v4.sin_port = htons(bp->port);
 
 	addr->use_as_src = use_as_src;
+	addr->valid = 1;
 
 	INIT_LIST_HEAD(&addr->list);
+	INIT_RCU_HEAD(&addr->rcu);
 	list_add_tail(&addr->list, &bp->address_list);
 	SCTP_DBG_OBJCNT_INC(addr);
 
diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
index f8aa23d..54ff472 100644
--- a/net/sctp/ipv6.c
+++ b/net/sctp/ipv6.c
@@ -77,13 +77,18 @@
 
 #include <asm/uaccess.h>
 
-/* Event handler for inet6 address addition/deletion events.  */
+/* Event handler for inet6 address addition/deletion events.
+ * This event is part of the atomic notifier call chain
+ * and thus happens atomically and can NOT sleep.  As a result
+ * we can't and really don't need to add any locks to guard the
+ * RCU.
+ */
 static int sctp_inet6addr_event(struct notifier_block *this, unsigned long ev,
 				void *ptr)
 {
 	struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr;
-	struct sctp_sockaddr_entry *addr;
-	struct list_head *pos, *temp;
+	struct sctp_sockaddr_entry *addr = NULL;
+	struct sctp_sockaddr_entry *temp;
 
 	switch (ev) {
 	case NETDEV_UP:
@@ -94,19 +99,26 @@ static int sctp_inet6addr_event(struct notifier_block *this, unsigned long ev,
 			memcpy(&addr->a.v6.sin6_addr, &ifa->addr,
 				 sizeof(struct in6_addr));
 			addr->a.v6.sin6_scope_id = ifa->idev->dev->ifindex;
-			list_add_tail(&addr->list, &sctp_local_addr_list);
+			addr->valid = 1;
+			spin_lock_bh(&sctp_local_addr_lock);
+			list_add_tail_rcu(&addr->list, &sctp_local_addr_list);
+			spin_unlock_bh(&sctp_local_addr_lock);
 		}
 		break;
 	case NETDEV_DOWN:
-		list_for_each_safe(pos, temp, &sctp_local_addr_list) {
-			addr = list_entry(pos, struct sctp_sockaddr_entry, list);
-			if (ipv6_addr_equal(&addr->a.v6.sin6_addr, &ifa->addr)) {
-				list_del(pos);
-				kfree(addr);
+		spin_lock_bh(&sctp_local_addr_lock);
+		list_for_each_entry_safe(addr, temp,
+					&sctp_local_addr_list, list) {
+			if (ipv6_addr_equal(&addr->a.v6.sin6_addr,
+					     &ifa->addr)) {
+				addr->valid = 0;
+				list_del_rcu(&addr->list);
 				break;
 			}
 		}
-
+		spin_unlock_bh(&sctp_local_addr_lock);
+		if (addr && !addr->valid)
+			call_rcu(&addr->rcu, sctp_local_addr_free);
 		break;
 	}
 
@@ -367,7 +379,9 @@ static void sctp_v6_copy_addrlist(struct list_head *addrlist,
 			addr->a.v6.sin6_port = 0;
 			addr->a.v6.sin6_addr = ifp->addr;
 			addr->a.v6.sin6_scope_id = dev->ifindex;
+			addr->valid = 1;
 			INIT_LIST_HEAD(&addr->list);
+			INIT_RCU_HEAD(&addr->rcu);
 			list_add_tail(&addr->list, addrlist);
 		}
 	}
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index e98579b..4688559 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -153,6 +153,8 @@ static void sctp_v4_copy_addrlist(struct list_head *addrlist,
 			addr->a.v4.sin_family = AF_INET;
 			addr->a.v4.sin_port = 0;
 			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
+			addr->valid = 1;
+			INIT_RCU_HEAD(&addr->rcu);
 			list_add_tail(&addr->list, addrlist);
 		}
 	}
@@ -192,16 +194,24 @@ static void sctp_free_local_addr_list(void)
 	}
 }
 
+void sctp_local_addr_free(struct rcu_head *head)
+{
+	struct sctp_sockaddr_entry *e = container_of(head,
+				struct sctp_sockaddr_entry, rcu);
+	kfree(e);
+}
+
 /* Copy the local addresses which are valid for 'scope' into 'bp'.  */
 int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
 			      gfp_t gfp, int copy_flags)
 {
 	struct sctp_sockaddr_entry *addr;
 	int error = 0;
-	struct list_head *pos, *temp;
 
-	list_for_each_safe(pos, temp, &sctp_local_addr_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	rcu_read_lock();
+	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
+		if (!addr->valid)
+			continue;
 		if (sctp_in_scope(&addr->a, scope)) {
 			/* Now that the address is in scope, check to see if
 			 * the address type is really supported by the local
@@ -221,6 +231,7 @@ int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
 	}
 
 end_copy:
+	rcu_read_unlock();
 	return error;
 }
 
@@ -605,8 +616,8 @@ static int sctp_inetaddr_event(struct notifier_block *this, unsigned long ev,
 			       void *ptr)
 {
 	struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
-	struct sctp_sockaddr_entry *addr;
-	struct list_head *pos, *temp;
+	struct sctp_sockaddr_entry *addr = NULL;
+	struct sctp_sockaddr_entry *temp;
 
 	switch (ev) {
 	case NETDEV_UP:
@@ -615,19 +626,25 @@ static int sctp_inetaddr_event(struct notifier_block *this, unsigned long ev,
 			addr->a.v4.sin_family = AF_INET;
 			addr->a.v4.sin_port = 0;
 			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
-			list_add_tail(&addr->list, &sctp_local_addr_list);
+			addr->valid = 1;
+			spin_lock_bh(&sctp_local_addr_lock);
+			list_add_tail_rcu(&addr->list, &sctp_local_addr_list);
+			spin_unlock_bh(&sctp_local_addr_lock);
 		}
 		break;
 	case NETDEV_DOWN:
-		list_for_each_safe(pos, temp, &sctp_local_addr_list) {
-			addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+		spin_lock_bh(&sctp_local_addr_lock);
+		list_for_each_entry_safe(addr, temp,
+					&sctp_local_addr_list, list) {
 			if (addr->a.v4.sin_addr.s_addr == ifa->ifa_local) {
-				list_del(pos);
-				kfree(addr);
+				addr->valid = 0;
+				list_del_rcu(&addr->list);
 				break;
 			}
 		}
-
+		spin_unlock_bh(&sctp_local_addr_lock);
+		if (addr && !addr->valid)
+			call_rcu(&addr->rcu, sctp_local_addr_free);
 		break;
 	}
 
@@ -1160,6 +1177,7 @@ SCTP_STATIC __init int sctp_init(void)
 
 	/* Initialize the local address list. */
 	INIT_LIST_HEAD(&sctp_local_addr_list);
+	spin_lock_init(&sctp_local_addr_lock);
 	sctp_get_local_addr_list();
 
 	/* Register notifier for inet address additions/deletions. */
@@ -1227,6 +1245,9 @@ SCTP_STATIC __exit void sctp_exit(void)
 	sctp_v6_del_protocol();
 	inet_del_protocol(&sctp_protocol, IPPROTO_SCTP);
 
+	/* Unregister notifier for inet address additions/deletions. */
+	unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
+
 	/* Free the local address list.  */
 	sctp_free_local_addr_list();
 
@@ -1240,9 +1261,6 @@ SCTP_STATIC __exit void sctp_exit(void)
 	inet_unregister_protosw(&sctp_stream_protosw);
 	inet_unregister_protosw(&sctp_seqpacket_protosw);
 
-	/* Unregister notifier for inet address additions/deletions. */
-	unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
-
 	sctp_sysctl_unregister();
 	list_del(&sctp_ipv4_specific.list);
 
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index 3335460..a3acf78 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -4057,9 +4057,9 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
 					       int __user *optlen)
 {
 	sctp_assoc_t id;
+	struct list_head *pos;
 	struct sctp_bind_addr *bp;
 	struct sctp_association *asoc;
-	struct list_head *pos, *temp;
 	struct sctp_sockaddr_entry *addr;
 	rwlock_t *addr_lock;
 	int cnt = 0;
@@ -4096,15 +4096,19 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
 		addr = list_entry(bp->address_list.next,
 				  struct sctp_sockaddr_entry, list);
 		if (sctp_is_any(&addr->a)) {
-			list_for_each_safe(pos, temp, &sctp_local_addr_list) {
-				addr = list_entry(pos,
-						  struct sctp_sockaddr_entry,
-						  list);
+			rcu_read_lock();
+			list_for_each_entry_rcu(addr,
+						&sctp_local_addr_list, list) {
+				if (!addr->valid)
+					continue;
+
 				if ((PF_INET == sk->sk_family) &&
 				    (AF_INET6 == addr->a.sa.sa_family))
 					continue;
+
 				cnt++;
 			}
+			rcu_read_unlock();
 		} else {
 			cnt = 1;
 		}
@@ -4127,14 +4131,16 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
 					int max_addrs, void *to,
 					int *bytes_copied)
 {
-	struct list_head *pos, *next;
 	struct sctp_sockaddr_entry *addr;
 	union sctp_addr temp;
 	int cnt = 0;
 	int addrlen;
 
-	list_for_each_safe(pos, next, &sctp_local_addr_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	rcu_read_lock();
+	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
+		if (!addr->valid)
+			continue;
+
 		if ((PF_INET == sk->sk_family) &&
 		    (AF_INET6 == addr->a.sa.sa_family))
 			continue;
@@ -4149,6 +4155,7 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
 		cnt ++;
 		if (cnt >= max_addrs) break;
 	}
+	rcu_read_unlock();
 
 	return cnt;
 }
@@ -4156,14 +4163,16 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
 static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
 			    size_t space_left, int *bytes_copied)
 {
-	struct list_head *pos, *next;
 	struct sctp_sockaddr_entry *addr;
 	union sctp_addr temp;
 	int cnt = 0;
 	int addrlen;
 
-	list_for_each_safe(pos, next, &sctp_local_addr_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	rcu_read_lock();
+	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
+		if (!addr->valid)
+			continue;
+
 		if ((PF_INET == sk->sk_family) &&
 		    (AF_INET6 == addr->a.sa.sa_family))
 			continue;
@@ -4171,8 +4180,10 @@ static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
 		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sctp_sk(sk),
 								&temp);
 		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
-		if (space_left < addrlen)
-			return -ENOMEM;
+		if (space_left < addrlen) {
+			cnt =  -ENOMEM;
+			break;
+		}
 		memcpy(to, &temp, addrlen);
 
 		to += addrlen;
@@ -4180,6 +4191,7 @@ static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
 		space_left -= addrlen;
 		*bytes_copied += addrlen;
 	}
+	rcu_read_unlock();
 
 	return cnt;
 }
-- 
1.5.2.4


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [RFC v2 PATCH 2/2] SCTP: Convert bind_addr_list locking to RCU
  2007-09-12 19:46 [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management Vlad Yasevich
  2007-09-12 19:46 ` [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list Vlad Yasevich
@ 2007-09-12 19:46 ` Vlad Yasevich
  2007-09-12 20:59 ` [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management Vlad Yasevich
  2 siblings, 0 replies; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-12 19:46 UTC (permalink / raw)
  To: netdev; +Cc: lksctp-developers, Vlad Yasevich

Since the sctp_sockaddr_entry is now RCU enabled as part of
the patch to synchronize sctp_localaddr_list, it makes sense to
change all handling of these entries to RCU.  This includes the
sctp_bind_addrs structure and its list of bound addresses.

This list is currently protected by an external rw_lock and that
looks like overkill.  There are only 2 writers to the list:
bind()/bindx() calls, and BH processing of ASCONF-ACK chunks.
These are already serialized via the socket lock, so they will
not step on each other.  These are also relatively rare, so we
should be good with RCU.

The readers are varied and they are easily converted to RCU.
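
To illustrate the different contexts (this is not part of the diff
itself): sctp_del_bind_addr() now takes the grace period flavor from the
caller, so each writer passes the callback that matches its context:

	/* BH context, ASCONF-ACK processing: */
	retval = sctp_del_bind_addr(bp, &addr, call_rcu_bh);

	/* Process context, bindx() under the socket lock: */
	retval = sctp_del_bind_addr(bp, sa_addr, call_rcu);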

Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
---
 include/net/sctp/structs.h |    7 +--
 net/sctp/associola.c       |   14 +-----
 net/sctp/bind_addr.c       |   68 ++++++++++++++++++++----------
 net/sctp/endpointola.c     |   27 +++---------
 net/sctp/ipv6.c            |   12 ++---
 net/sctp/protocol.c        |   25 ++++-------
 net/sctp/sm_make_chunk.c   |   18 +++-----
 net/sctp/socket.c          |   98 ++++++++++++-------------------------------
 8 files changed, 106 insertions(+), 163 deletions(-)

diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
index a89e361..c2fe2dc 100644
--- a/include/net/sctp/structs.h
+++ b/include/net/sctp/structs.h
@@ -1155,7 +1155,9 @@ int sctp_bind_addr_copy(struct sctp_bind_addr *dest,
 			int flags);
 int sctp_add_bind_addr(struct sctp_bind_addr *, union sctp_addr *,
 		       __u8 use_as_src, gfp_t gfp);
-int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *);
+int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *,
+			void (*rcu_call)(struct rcu_head *,
+					  void (*func)(struct rcu_head *)));
 int sctp_bind_addr_match(struct sctp_bind_addr *, const union sctp_addr *,
 			 struct sctp_sock *);
 union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr	*bp,
@@ -1226,9 +1228,6 @@ struct sctp_ep_common {
 	 * bind_addr.address_list is our set of local IP addresses.
 	 */
 	struct sctp_bind_addr bind_addr;
-
-	/* Protection during address list comparisons. */
-	rwlock_t   addr_lock;
 };
 
 
diff --git a/net/sctp/associola.c b/net/sctp/associola.c
index 2ad1caf..9bad8ba 100644
--- a/net/sctp/associola.c
+++ b/net/sctp/associola.c
@@ -99,7 +99,6 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a
 
 	/* Initialize the bind addr area.  */
 	sctp_bind_addr_init(&asoc->base.bind_addr, ep->base.bind_addr.port);
-	rwlock_init(&asoc->base.addr_lock);
 
 	asoc->state = SCTP_STATE_CLOSED;
 
@@ -937,8 +936,6 @@ struct sctp_transport *sctp_assoc_is_match(struct sctp_association *asoc,
 {
 	struct sctp_transport *transport;
 
-	sctp_read_lock(&asoc->base.addr_lock);
-
 	if ((htons(asoc->base.bind_addr.port) == laddr->v4.sin_port) &&
 	    (htons(asoc->peer.port) == paddr->v4.sin_port)) {
 		transport = sctp_assoc_lookup_paddr(asoc, paddr);
@@ -952,7 +949,6 @@ struct sctp_transport *sctp_assoc_is_match(struct sctp_association *asoc,
 	transport = NULL;
 
 out:
-	sctp_read_unlock(&asoc->base.addr_lock);
 	return transport;
 }
 
@@ -1376,19 +1372,13 @@ int sctp_assoc_set_bind_addr_from_cookie(struct sctp_association *asoc,
 int sctp_assoc_lookup_laddr(struct sctp_association *asoc,
 			    const union sctp_addr *laddr)
 {
-	int found;
+	int found = 0;
 
-	sctp_read_lock(&asoc->base.addr_lock);
 	if ((asoc->base.bind_addr.port == ntohs(laddr->v4.sin_port)) &&
 	    sctp_bind_addr_match(&asoc->base.bind_addr, laddr,
-				 sctp_sk(asoc->base.sk))) {
+				 sctp_sk(asoc->base.sk)))
 		found = 1;
-		goto out;
-	}
 
-	found = 0;
-out:
-	sctp_read_unlock(&asoc->base.addr_lock);
 	return found;
 }
 
diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
index 7fc369f..14f4c02 100644
--- a/net/sctp/bind_addr.c
+++ b/net/sctp/bind_addr.c
@@ -167,7 +167,11 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
 
 	INIT_LIST_HEAD(&addr->list);
 	INIT_RCU_HEAD(&addr->rcu);
-	list_add_tail(&addr->list, &bp->address_list);
+
+	/* We always hold a socket lock when calling this function,
+	 * so rcu_read_lock is not needed.
+	 */
+	list_add_tail_rcu(&addr->list, &bp->address_list);
 	SCTP_DBG_OBJCNT_INC(addr);
 
 	return 0;
@@ -176,23 +180,35 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
 /* Delete an address from the bind address list in the SCTP_bind_addr
  * structure.
  */
-int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr)
+int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr,
+			void (*rcu_call)(struct rcu_head *head,
+					 void (*func)(struct rcu_head *head)))
 {
-	struct list_head *pos, *temp;
-	struct sctp_sockaddr_entry *addr;
+	struct sctp_sockaddr_entry *addr, *temp;
 
-	list_for_each_safe(pos, temp, &bp->address_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	/* We hold the socket lock when calling this function, so
+	 * rcu_read_lock is not needed.
+	 */
+	list_for_each_entry_safe(addr, temp, &bp->address_list, list) {
 		if (sctp_cmp_addr_exact(&addr->a, del_addr)) {
 			/* Found the exact match. */
-			list_del(pos);
-			kfree(addr);
-			SCTP_DBG_OBJCNT_DEC(addr);
-
-			return 0;
+			addr->valid = 0;
+			list_del_rcu(&addr->list);
+			break;
 		}
 	}
 
+	/* Call the rcu callback provided in the args.  This function is
+	 * called by both BH packet processing and user side socket option
+	 * processing, but it works on different lists in those 2 contexts.
+	 * Each context provides its own callback, either call_rcu_bh()
+	 * or call_rcu(), to make sure that we wait for an appropriate time.
+	 */
+	if (addr && !addr->valid) {
+		rcu_call(&addr->rcu, sctp_local_addr_free);
+		SCTP_DBG_OBJCNT_DEC(addr);
+	}
+
 	return -EINVAL;
 }
 
@@ -302,15 +318,20 @@ int sctp_bind_addr_match(struct sctp_bind_addr *bp,
 			 struct sctp_sock *opt)
 {
 	struct sctp_sockaddr_entry *laddr;
-	struct list_head *pos;
-
-	list_for_each(pos, &bp->address_list) {
-		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
-		if (opt->pf->cmp_addr(&laddr->a, addr, opt))
-			return 1;
+	int match = 0;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
+		if (!laddr->valid)
+			continue;
+		if (opt->pf->cmp_addr(&laddr->a, addr, opt)) {
+			match = 1;
+			break;
+		}
 	}
+	rcu_read_unlock();
 
-	return 0;
+	return match;
 }
 
 /* Find the first address in the bind address list that is not present in
@@ -325,18 +346,19 @@ union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr	*bp,
 	union sctp_addr			*addr;
 	void 				*addr_buf;
 	struct sctp_af			*af;
-	struct list_head		*pos;
 	int				i;
 
-	list_for_each(pos, &bp->address_list) {
-		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
-
+	/* This is only called by sctp_send_asconf_del_ip() and we hold
+	 * the socket lock in that code path, so the address list
+	 * can't change.
+	 */
+	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
 		addr_buf = (union sctp_addr *)addrs;
 		for (i = 0; i < addrcnt; i++) {
 			addr = (union sctp_addr *)addr_buf;
 			af = sctp_get_af_specific(addr->v4.sin_family);
 			if (!af)
-				return NULL;
+				break;
 
 			if (opt->pf->cmp_addr(&laddr->a, addr, opt))
 				break;
diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
index 1404a9e..d888332 100644
--- a/net/sctp/endpointola.c
+++ b/net/sctp/endpointola.c
@@ -92,7 +92,6 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep,
 
 	/* Initialize the bind addr area */
 	sctp_bind_addr_init(&ep->base.bind_addr, 0);
-	rwlock_init(&ep->base.addr_lock);
 
 	/* Remember who we are attached to.  */
 	ep->base.sk = sk;
@@ -225,21 +224,14 @@ void sctp_endpoint_put(struct sctp_endpoint *ep)
 struct sctp_endpoint *sctp_endpoint_is_match(struct sctp_endpoint *ep,
 					       const union sctp_addr *laddr)
 {
-	struct sctp_endpoint *retval;
+	struct sctp_endpoint *retval = NULL;
 
-	sctp_read_lock(&ep->base.addr_lock);
 	if (htons(ep->base.bind_addr.port) == laddr->v4.sin_port) {
 		if (sctp_bind_addr_match(&ep->base.bind_addr, laddr,
-					 sctp_sk(ep->base.sk))) {
+					 sctp_sk(ep->base.sk)))
 			retval = ep;
-			goto out;
-		}
 	}
 
-	retval = NULL;
-
-out:
-	sctp_read_unlock(&ep->base.addr_lock);
 	return retval;
 }
 
@@ -261,9 +253,7 @@ static struct sctp_association *__sctp_endpoint_lookup_assoc(
 	list_for_each(pos, &ep->asocs) {
 		asoc = list_entry(pos, struct sctp_association, asocs);
 		if (rport == asoc->peer.port) {
-			sctp_read_lock(&asoc->base.addr_lock);
 			*transport = sctp_assoc_lookup_paddr(asoc, paddr);
-			sctp_read_unlock(&asoc->base.addr_lock);
 
 			if (*transport)
 				return asoc;
@@ -295,20 +285,17 @@ struct sctp_association *sctp_endpoint_lookup_assoc(
 int sctp_endpoint_is_peeled_off(struct sctp_endpoint *ep,
 				const union sctp_addr *paddr)
 {
-	struct list_head *pos;
 	struct sctp_sockaddr_entry *addr;
 	struct sctp_bind_addr *bp;
 
-	sctp_read_lock(&ep->base.addr_lock);
 	bp = &ep->base.bind_addr;
-	list_for_each(pos, &bp->address_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
-		if (sctp_has_association(&addr->a, paddr)) {
-			sctp_read_unlock(&ep->base.addr_lock);
+	/* This function is called with the socket lock held,
+	 * so the address_list can not change.
+	 */
+	list_for_each_entry_rcu(addr, &bp->address_list, list) {
+		if (sctp_has_association(&addr->a, paddr))
 			return 1;
-		}
 	}
-	sctp_read_unlock(&ep->base.addr_lock);
 
 	return 0;
 }
diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
index 54ff472..c8b0115 100644
--- a/net/sctp/ipv6.c
+++ b/net/sctp/ipv6.c
@@ -302,9 +302,7 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
 			      union sctp_addr *saddr)
 {
 	struct sctp_bind_addr *bp;
-	rwlock_t *addr_lock;
 	struct sctp_sockaddr_entry *laddr;
-	struct list_head *pos;
 	sctp_scope_t scope;
 	union sctp_addr *baddr = NULL;
 	__u8 matchlen = 0;
@@ -324,14 +322,14 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
 	scope = sctp_scope(daddr);
 
 	bp = &asoc->base.bind_addr;
-	addr_lock = &asoc->base.addr_lock;
 
 	/* Go through the bind address list and find the best source address
 	 * that matches the scope of the destination address.
 	 */
-	sctp_read_lock(addr_lock);
-	list_for_each(pos, &bp->address_list) {
-		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	rcu_read_lock();
+	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
+		if (!laddr->valid)
+			continue;
 		if ((laddr->use_as_src) &&
 		    (laddr->a.sa.sa_family == AF_INET6) &&
 		    (scope <= sctp_scope(&laddr->a))) {
@@ -353,7 +351,7 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
 		       __FUNCTION__, asoc, NIP6(daddr->v6.sin6_addr));
 	}
 
-	sctp_read_unlock(addr_lock);
+	rcu_read_unlock();
 }
 
 /* Make a copy of all potential local addresses. */
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index 4688559..35af75b 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -223,7 +223,7 @@ int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
 			      (copy_flags & SCTP_ADDR6_ALLOWED) &&
 			      (copy_flags & SCTP_ADDR6_PEERSUPP)))) {
 				error = sctp_add_bind_addr(bp, &addr->a, 1,
-							   GFP_ATOMIC);
+						    GFP_ATOMIC);
 				if (error)
 					goto end_copy;
 			}
@@ -427,9 +427,7 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
 	struct rtable *rt;
 	struct flowi fl;
 	struct sctp_bind_addr *bp;
-	rwlock_t *addr_lock;
 	struct sctp_sockaddr_entry *laddr;
-	struct list_head *pos;
 	struct dst_entry *dst = NULL;
 	union sctp_addr dst_saddr;
 
@@ -458,23 +456,20 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
 		goto out;
 
 	bp = &asoc->base.bind_addr;
-	addr_lock = &asoc->base.addr_lock;
 
 	if (dst) {
 		/* Walk through the bind address list and look for a bind
 		 * address that matches the source address of the returned dst.
 		 */
-		sctp_read_lock(addr_lock);
-		list_for_each(pos, &bp->address_list) {
-			laddr = list_entry(pos, struct sctp_sockaddr_entry,
-					   list);
-			if (!laddr->use_as_src)
+		rcu_read_lock();
+		list_for_each_entry_rcu(laddr, &bp->address_list, list) {
+			if (!laddr->valid || !laddr->use_as_src)
 				continue;
 			sctp_v4_dst_saddr(&dst_saddr, dst, htons(bp->port));
 			if (sctp_v4_cmp_addr(&dst_saddr, &laddr->a))
 				goto out_unlock;
 		}
-		sctp_read_unlock(addr_lock);
+		rcu_read_unlock();
 
 		/* None of the bound addresses match the source address of the
 		 * dst. So release it.
@@ -486,10 +481,10 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
 	/* Walk through the bind address list and try to get a dst that
 	 * matches a bind address as the source address.
 	 */
-	sctp_read_lock(addr_lock);
-	list_for_each(pos, &bp->address_list) {
-		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
-
+	rcu_read_lock();
+	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
+		if (!laddr->valid)
+			continue;
 		if ((laddr->use_as_src) &&
 		    (AF_INET == laddr->a.sa.sa_family)) {
 			fl.fl4_src = laddr->a.v4.sin_addr.s_addr;
@@ -501,7 +496,7 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
 	}
 
 out_unlock:
-	sctp_read_unlock(addr_lock);
+	rcu_read_unlock();
 out:
 	if (dst)
 		SCTP_DEBUG_PRINTK("rt_dst:%u.%u.%u.%u, rt_src:%u.%u.%u.%u\n",
diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
index 79856c9..0dc965c 100644
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -1531,7 +1531,7 @@ no_hmac:
 	/* Also, add the destination address. */
 	if (list_empty(&retval->base.bind_addr.address_list)) {
 		sctp_add_bind_addr(&retval->base.bind_addr, &chunk->dest, 1,
-				   GFP_ATOMIC);
+				GFP_ATOMIC);
 	}
 
 	retval->next_tsn = retval->c.initial_tsn;
@@ -2613,22 +2613,16 @@ static int sctp_asconf_param_success(struct sctp_association *asoc,
 
 	switch (asconf_param->param_hdr.type) {
 	case SCTP_PARAM_ADD_IP:
-		sctp_local_bh_disable();
-		sctp_write_lock(&asoc->base.addr_lock);
-		list_for_each(pos, &bp->address_list) {
-			saddr = list_entry(pos, struct sctp_sockaddr_entry, list);
+		/* This is always done in BH context with a socket lock
+		 * held, so the list can not change.
+		 */
+		list_for_each_entry_rcu(saddr, &bp->address_list, list) {
 			if (sctp_cmp_addr_exact(&saddr->a, &addr))
 				saddr->use_as_src = 1;
 		}
-		sctp_write_unlock(&asoc->base.addr_lock);
-		sctp_local_bh_enable();
 		break;
 	case SCTP_PARAM_DEL_IP:
-		sctp_local_bh_disable();
-		sctp_write_lock(&asoc->base.addr_lock);
-		retval = sctp_del_bind_addr(bp, &addr);
-		sctp_write_unlock(&asoc->base.addr_lock);
-		sctp_local_bh_enable();
+		retval = sctp_del_bind_addr(bp, &addr, call_rcu_bh);
 		list_for_each(pos, &asoc->peer.transport_addr_list) {
 			transport = list_entry(pos, struct sctp_transport,
 						 transports);
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index a3acf78..cb253ab 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -367,14 +367,10 @@ SCTP_STATIC int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
 	if (!bp->port)
 		bp->port = inet_sk(sk)->num;
 
-	/* Add the address to the bind address list.  */
-	sctp_local_bh_disable();
-	sctp_write_lock(&ep->base.addr_lock);
-
-	/* Use GFP_ATOMIC since BHs are disabled.  */
+	/* Add the address to the bind address list.
+	 * Use GFP_ATOMIC since BHs will be disabled.
+	 */
 	ret = sctp_add_bind_addr(bp, addr, 1, GFP_ATOMIC);
-	sctp_write_unlock(&ep->base.addr_lock);
-	sctp_local_bh_enable();
 
 	/* Copy back into socket for getsockname() use. */
 	if (!ret) {
@@ -544,15 +540,12 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
 		if (i < addrcnt)
 			continue;
 
-		/* Use the first address in bind addr list of association as
-		 * Address Parameter of ASCONF CHUNK.
+		/* Use the first valid address in bind addr list of
+		 * association as Address Parameter of ASCONF CHUNK.
 		 */
-		sctp_read_lock(&asoc->base.addr_lock);
 		bp = &asoc->base.bind_addr;
 		p = bp->address_list.next;
 		laddr = list_entry(p, struct sctp_sockaddr_entry, list);
-		sctp_read_unlock(&asoc->base.addr_lock);
-
 		chunk = sctp_make_asconf_update_ip(asoc, &laddr->a, addrs,
 						   addrcnt, SCTP_PARAM_ADD_IP);
 		if (!chunk) {
@@ -567,8 +560,6 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
 		/* Add the new addresses to the bind address list with
 		 * use_as_src set to 0.
 		 */
-		sctp_local_bh_disable();
-		sctp_write_lock(&asoc->base.addr_lock);
 		addr_buf = addrs;
 		for (i = 0; i < addrcnt; i++) {
 			addr = (union sctp_addr *)addr_buf;
@@ -578,8 +569,6 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
 						    GFP_ATOMIC);
 			addr_buf += af->sockaddr_len;
 		}
-		sctp_write_unlock(&asoc->base.addr_lock);
-		sctp_local_bh_enable();
 	}
 
 out:
@@ -651,13 +640,7 @@ static int sctp_bindx_rem(struct sock *sk, struct sockaddr *addrs, int addrcnt)
 		 * socket routing and failover schemes. Refer to comments in
 		 * sctp_do_bind(). -daisy
 		 */
-		sctp_local_bh_disable();
-		sctp_write_lock(&ep->base.addr_lock);
-
-		retval = sctp_del_bind_addr(bp, sa_addr);
-
-		sctp_write_unlock(&ep->base.addr_lock);
-		sctp_local_bh_enable();
+		retval = sctp_del_bind_addr(bp, sa_addr, call_rcu);
 
 		addr_buf += af->sockaddr_len;
 err_bindx_rem:
@@ -748,14 +731,16 @@ static int sctp_send_asconf_del_ip(struct sock		*sk,
 		 * make sure that we do not delete all the addresses in the
 		 * association.
 		 */
-		sctp_read_lock(&asoc->base.addr_lock);
 		bp = &asoc->base.bind_addr;
 		laddr = sctp_find_unmatch_addr(bp, (union sctp_addr *)addrs,
 					       addrcnt, sp);
-		sctp_read_unlock(&asoc->base.addr_lock);
 		if (!laddr)
 			continue;
 
+		/* We do not need RCU protection throughout this loop
+		 * because this is done under a socket lock from the
+		 * setsockopt call.
+		 */
 		chunk = sctp_make_asconf_update_ip(asoc, laddr, addrs, addrcnt,
 						   SCTP_PARAM_DEL_IP);
 		if (!chunk) {
@@ -766,23 +751,16 @@ static int sctp_send_asconf_del_ip(struct sock		*sk,
 		/* Reset use_as_src flag for the addresses in the bind address
 		 * list that are to be deleted.
 		 */
-		sctp_local_bh_disable();
-		sctp_write_lock(&asoc->base.addr_lock);
 		addr_buf = addrs;
 		for (i = 0; i < addrcnt; i++) {
 			laddr = (union sctp_addr *)addr_buf;
 			af = sctp_get_af_specific(laddr->v4.sin_family);
-			list_for_each(pos1, &bp->address_list) {
-				saddr = list_entry(pos1,
-						   struct sctp_sockaddr_entry,
-						   list);
+			list_for_each_entry_rcu(saddr, &bp->address_list, list) {
 				if (sctp_cmp_addr_exact(&saddr->a, laddr))
 					saddr->use_as_src = 0;
 			}
 			addr_buf += af->sockaddr_len;
 		}
-		sctp_write_unlock(&asoc->base.addr_lock);
-		sctp_local_bh_enable();
 
 		/* Update the route and saddr entries for all the transports
 		 * as some of the addresses in the bind address list are
@@ -4057,11 +4035,9 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
 					       int __user *optlen)
 {
 	sctp_assoc_t id;
-	struct list_head *pos;
 	struct sctp_bind_addr *bp;
 	struct sctp_association *asoc;
 	struct sctp_sockaddr_entry *addr;
-	rwlock_t *addr_lock;
 	int cnt = 0;
 
 	if (len < sizeof(sctp_assoc_t))
@@ -4078,17 +4054,13 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
 	 */
 	if (0 == id) {
 		bp = &sctp_sk(sk)->ep->base.bind_addr;
-		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
 	} else {
 		asoc = sctp_id2assoc(sk, id);
 		if (!asoc)
 			return -EINVAL;
 		bp = &asoc->base.bind_addr;
-		addr_lock = &asoc->base.addr_lock;
 	}
 
-	sctp_read_lock(addr_lock);
-
 	/* If the endpoint is bound to 0.0.0.0 or ::0, count the valid
 	 * addresses from the global local address list.
 	 */
@@ -4115,12 +4087,14 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
 		goto done;
 	}
 
-	list_for_each(pos, &bp->address_list) {
+	/* Protection on the bound address list is not needed,
+	 * since in the socket option context we hold the socket lock,
+	 * so there is no way that the bound address list can change.
+	 */
+	list_for_each_entry_rcu(addr, &bp->address_list, list) {
 		cnt ++;
 	}
-
 done:
-	sctp_read_unlock(addr_lock);
 	return cnt;
 }
 
@@ -4204,7 +4178,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 {
 	struct sctp_bind_addr *bp;
 	struct sctp_association *asoc;
-	struct list_head *pos;
 	int cnt = 0;
 	struct sctp_getaddrs_old getaddrs;
 	struct sctp_sockaddr_entry *addr;
@@ -4212,7 +4185,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	union sctp_addr temp;
 	struct sctp_sock *sp = sctp_sk(sk);
 	int addrlen;
-	rwlock_t *addr_lock;
 	int err = 0;
 	void *addrs;
 	void *buf;
@@ -4234,13 +4206,11 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	 */
 	if (0 == getaddrs.assoc_id) {
 		bp = &sctp_sk(sk)->ep->base.bind_addr;
-		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
 	} else {
 		asoc = sctp_id2assoc(sk, getaddrs.assoc_id);
 		if (!asoc)
 			return -EINVAL;
 		bp = &asoc->base.bind_addr;
-		addr_lock = &asoc->base.addr_lock;
 	}
 
 	to = getaddrs.addrs;
@@ -4254,8 +4224,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	if (!addrs)
 		return -ENOMEM;
 
-	sctp_read_lock(addr_lock);
-
 	/* If the endpoint is bound to 0.0.0.0 or ::0, get the valid
 	 * addresses from the global local address list.
 	 */
@@ -4271,8 +4239,11 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	}
 
 	buf = addrs;
-	list_for_each(pos, &bp->address_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	/* Protection on the bound address list is not needed since
+	 * in the socket option context we hold a socket lock and
+	 * thus the bound address list can't change.
+	 */
+	list_for_each_entry_rcu(addr, &bp->address_list, list) {
 		memcpy(&temp, &addr->a, sizeof(temp));
 		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sp, &temp);
 		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
@@ -4284,8 +4255,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	}
 
 copy_getaddrs:
-	sctp_read_unlock(addr_lock);
-
 	/* copy the entire address list into the user provided space */
 	if (copy_to_user(to, addrs, bytes_copied)) {
 		err = -EFAULT;
@@ -4307,7 +4276,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 {
 	struct sctp_bind_addr *bp;
 	struct sctp_association *asoc;
-	struct list_head *pos;
 	int cnt = 0;
 	struct sctp_getaddrs getaddrs;
 	struct sctp_sockaddr_entry *addr;
@@ -4315,7 +4283,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 	union sctp_addr temp;
 	struct sctp_sock *sp = sctp_sk(sk);
 	int addrlen;
-	rwlock_t *addr_lock;
 	int err = 0;
 	size_t space_left;
 	int bytes_copied = 0;
@@ -4336,13 +4303,11 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 	 */
 	if (0 == getaddrs.assoc_id) {
 		bp = &sctp_sk(sk)->ep->base.bind_addr;
-		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
 	} else {
 		asoc = sctp_id2assoc(sk, getaddrs.assoc_id);
 		if (!asoc)
 			return -EINVAL;
 		bp = &asoc->base.bind_addr;
-		addr_lock = &asoc->base.addr_lock;
 	}
 
 	to = optval + offsetof(struct sctp_getaddrs,addrs);
@@ -4352,8 +4317,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 	if (!addrs)
 		return -ENOMEM;
 
-	sctp_read_lock(addr_lock);
-
 	/* If the endpoint is bound to 0.0.0.0 or ::0, get the valid
 	 * addresses from the global local address list.
 	 */
@@ -4365,21 +4328,24 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 						space_left, &bytes_copied);
 			if (cnt < 0) {
 				err = cnt;
-				goto error_lock;
+				goto out;
 			}
 			goto copy_getaddrs;
 		}
 	}
 
 	buf = addrs;
-	list_for_each(pos, &bp->address_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	/* Protection on the bound address list is not needed since
+	 * in the socket option context we hold a socket lock and
+	 * thus the bound address list can't change.
+	 */
+	list_for_each_entry_rcu(addr, &bp->address_list, list) {
 		memcpy(&temp, &addr->a, sizeof(temp));
 		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sp, &temp);
 		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
 		if (space_left < addrlen) {
 			err =  -ENOMEM; /*fixme: right error?*/
-			goto error_lock;
+			goto out;
 		}
 		memcpy(buf, &temp, addrlen);
 		buf += addrlen;
@@ -4389,8 +4355,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 	}
 
 copy_getaddrs:
-	sctp_read_unlock(addr_lock);
-
 	if (copy_to_user(to, addrs, bytes_copied)) {
 		err = -EFAULT;
 		goto out;
@@ -4401,12 +4365,6 @@ copy_getaddrs:
 	}
 	if (put_user(bytes_copied, optlen))
 		err = -EFAULT;
-
-	goto out;
-
-error_lock:
-	sctp_read_unlock(addr_lock);
-
 out:
 	kfree(addrs);
 	return err;
-- 
1.5.2.4


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management
  2007-09-12 19:46 [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management Vlad Yasevich
  2007-09-12 19:46 ` [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list Vlad Yasevich
  2007-09-12 19:46 ` [RFC v2 PATCH 2/2] SCTP: Convert bind_addr_list locking to RCU Vlad Yasevich
@ 2007-09-12 20:59 ` Vlad Yasevich
  2007-09-12 21:03   ` [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU Vlad Yasevich
  2 siblings, 1 reply; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-12 20:59 UTC (permalink / raw)
  To: netdev; +Cc: lksctp-developers

Vlad Yasevich wrote:
> Ok, this is version 2 of the patch that incorporates comments from
> Sridhar Samudrala and Paul McKenney.
> 
> The changes incorporated are:
>  1.  Add locking around the modification of the global sctp_local_addr_list
>  when processing the notifiers.  After looking around, it is possible for
>  the IPv4 and IPv6 notifiers to be called at the same time, which means that
>  we need a spin lock.
> 
>  2.  After Paul's explanation of why writers would want to call
>  rcu_read_lock, it's apparent that we really don't need that in our usage.
>  I've removed all the uses that I could find and consider safe.
> 
>  3.  I took Paul's suggestion of passing an explicit rcu callback when
>  removing entries from the list, since removals can happen in different
>  contexts.  This made the removal code rather simple (sketched below).

Paul Moore just pointed out that using the rcu versions of the list
traversal macros inside writer-protected sections is just plain silly, so
consider that another change.

I'll update patch 2/2 with that comment.
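
In other words, the split looks roughly like this (a sketch, assuming the
writer is already serialized by the socket lock):

	/* Writer: plain traversal, but unlink with the _rcu primitive
	 * so concurrent readers stay safe.
	 */
	list_for_each_entry_safe(addr, temp, &bp->address_list, list) {
		if (sctp_cmp_addr_exact(&addr->a, del_addr)) {
			addr->valid = 0;
			list_del_rcu(&addr->list);
			break;
		}
	}

	/* Reader: no lock at all, RCU traversal. */
	rcu_read_lock();
	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
		if (!laddr->valid)
			continue;
		/* ... */
	}
	rcu_read_unlock();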

-vlad

> 
> Things I've left behind:
>  1.  The valid flag remains.  After discussing the virtues with Paul Moore
>  (who used the same functionality in Netlabel code), I think that the
>  valid flag slightly reduces the possibility that the reader will use
>  an entry that's about to be removed.  It's a good thing in our case.
>  It doesn't really harm anything if a reader uses a !valid entry, but
>  I'd like to reduce that chance.
> 
> I would appreciate any further comments.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU
  2007-09-12 20:59 ` [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management Vlad Yasevich
@ 2007-09-12 21:03   ` Vlad Yasevich
  2007-09-12 22:33     ` Paul E. McKenney
  0 siblings, 1 reply; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-12 21:03 UTC (permalink / raw)
  To: netdev; +Cc: lksctp-developers, Vlad Yasevich

[... and here is the updated version as promised ...]

Since the sctp_sockaddr_entry is now RCU enabled as part of
the patch to synchronize sctp_localaddr_list, it makes sense to
change all handling of these entries to RCU.  This includes the
sctp_bind_addrs structure and its list of bound addresses.

This list is currently protected by an external rw_lock and that
looks like overkill.  There are only 2 writers to the list:
bind()/bindx() calls, and BH processing of ASCONF-ACK chunks.
These are already serialized via the socket lock, so they will
not step on each other.  These are also relatively rare, so we
should be good with RCU.

The readers are varied and they are easily converted to RCU.

Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
---
 include/net/sctp/structs.h |    7 +--
 net/sctp/associola.c       |   14 +-----
 net/sctp/bind_addr.c       |   68 ++++++++++++++++++++----------
 net/sctp/endpointola.c     |   27 +++---------
 net/sctp/ipv6.c            |   12 ++---
 net/sctp/protocol.c        |   25 ++++-------
 net/sctp/sm_make_chunk.c   |   18 +++-----
 net/sctp/socket.c          |   98 ++++++++++++-------------------------------
 8 files changed, 106 insertions(+), 163 deletions(-)

diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
index a89e361..c2fe2dc 100644
--- a/include/net/sctp/structs.h
+++ b/include/net/sctp/structs.h
@@ -1155,7 +1155,9 @@ int sctp_bind_addr_copy(struct sctp_bind_addr *dest,
 			int flags);
 int sctp_add_bind_addr(struct sctp_bind_addr *, union sctp_addr *,
 		       __u8 use_as_src, gfp_t gfp);
-int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *);
+int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *,
+			void (*rcu_call)(struct rcu_head *,
+					  void (*func)(struct rcu_head *)));
 int sctp_bind_addr_match(struct sctp_bind_addr *, const union sctp_addr *,
 			 struct sctp_sock *);
 union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr	*bp,
@@ -1226,9 +1228,6 @@ struct sctp_ep_common {
 	 * bind_addr.address_list is our set of local IP addresses.
 	 */
 	struct sctp_bind_addr bind_addr;
-
-	/* Protection during address list comparisons. */
-	rwlock_t   addr_lock;
 };
 
 
diff --git a/net/sctp/associola.c b/net/sctp/associola.c
index 2ad1caf..9bad8ba 100644
--- a/net/sctp/associola.c
+++ b/net/sctp/associola.c
@@ -99,7 +99,6 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a
 
 	/* Initialize the bind addr area.  */
 	sctp_bind_addr_init(&asoc->base.bind_addr, ep->base.bind_addr.port);
-	rwlock_init(&asoc->base.addr_lock);
 
 	asoc->state = SCTP_STATE_CLOSED;
 
@@ -937,8 +936,6 @@ struct sctp_transport *sctp_assoc_is_match(struct sctp_association *asoc,
 {
 	struct sctp_transport *transport;
 
-	sctp_read_lock(&asoc->base.addr_lock);
-
 	if ((htons(asoc->base.bind_addr.port) == laddr->v4.sin_port) &&
 	    (htons(asoc->peer.port) == paddr->v4.sin_port)) {
 		transport = sctp_assoc_lookup_paddr(asoc, paddr);
@@ -952,7 +949,6 @@ struct sctp_transport *sctp_assoc_is_match(struct sctp_association *asoc,
 	transport = NULL;
 
 out:
-	sctp_read_unlock(&asoc->base.addr_lock);
 	return transport;
 }
 
@@ -1376,19 +1372,13 @@ int sctp_assoc_set_bind_addr_from_cookie(struct sctp_association *asoc,
 int sctp_assoc_lookup_laddr(struct sctp_association *asoc,
 			    const union sctp_addr *laddr)
 {
-	int found;
+	int found = 0;
 
-	sctp_read_lock(&asoc->base.addr_lock);
 	if ((asoc->base.bind_addr.port == ntohs(laddr->v4.sin_port)) &&
 	    sctp_bind_addr_match(&asoc->base.bind_addr, laddr,
-				 sctp_sk(asoc->base.sk))) {
+				 sctp_sk(asoc->base.sk)))
 		found = 1;
-		goto out;
-	}
 
-	found = 0;
-out:
-	sctp_read_unlock(&asoc->base.addr_lock);
 	return found;
 }
 
diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
index 7fc369f..d16055f 100644
--- a/net/sctp/bind_addr.c
+++ b/net/sctp/bind_addr.c
@@ -167,7 +167,11 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
 
 	INIT_LIST_HEAD(&addr->list);
 	INIT_RCU_HEAD(&addr->rcu);
-	list_add_tail(&addr->list, &bp->address_list);
+
+	/* We always hold a socket lock when calling this function,
+	 * so rcu_read_lock is not needed.
+	 */
+	list_add_tail_rcu(&addr->list, &bp->address_list);
 	SCTP_DBG_OBJCNT_INC(addr);
 
 	return 0;
@@ -176,23 +180,35 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
 /* Delete an address from the bind address list in the SCTP_bind_addr
  * structure.
  */
-int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr)
+int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr,
+			void (*rcu_call)(struct rcu_head *head,
+					 void (*func)(struct rcu_head *head)))
 {
-	struct list_head *pos, *temp;
-	struct sctp_sockaddr_entry *addr;
+	struct sctp_sockaddr_entry *addr, *temp;
 
-	list_for_each_safe(pos, temp, &bp->address_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	/* We hold the socket lock when calling this function, so
+	 * rcu_read_lock is not needed.
+	 */
+	list_for_each_entry_safe(addr, temp, &bp->address_list, list) {
 		if (sctp_cmp_addr_exact(&addr->a, del_addr)) {
 			/* Found the exact match. */
-			list_del(pos);
-			kfree(addr);
-			SCTP_DBG_OBJCNT_DEC(addr);
-
-			return 0;
+			addr->valid = 0;
+			list_del_rcu(&addr->list);
+			break;
 		}
 	}
 
+	/* Call the rcu callback provided in the args.  This function is
+	 * called by both BH packet processing and user side socket option
+	 * processing, but it works on different lists in those 2 contexts.
+	 * Each context provides its own callback, either call_rcu_bh()
+	 * or call_rcu(), to make sure that we wait for an appropriate time.
+	 */
+	if (addr && !addr->valid) {
+		rcu_call(&addr->rcu, sctp_local_addr_free);
+		SCTP_DBG_OBJCNT_DEC(addr);
+	}
+
 	return -EINVAL;
 }
 
@@ -302,15 +318,20 @@ int sctp_bind_addr_match(struct sctp_bind_addr *bp,
 			 struct sctp_sock *opt)
 {
 	struct sctp_sockaddr_entry *laddr;
-	struct list_head *pos;
-
-	list_for_each(pos, &bp->address_list) {
-		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
-		if (opt->pf->cmp_addr(&laddr->a, addr, opt))
-			return 1;
+	int match = 0;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
+		if (!laddr->valid)
+			continue;
+		if (opt->pf->cmp_addr(&laddr->a, addr, opt)) {
+			match = 1;
+			break;
+		}
 	}
+	rcu_read_unlock();
 
-	return 0;
+	return match;
 }
 
 /* Find the first address in the bind address list that is not present in
@@ -325,18 +346,19 @@ union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr	*bp,
 	union sctp_addr			*addr;
 	void 				*addr_buf;
 	struct sctp_af			*af;
-	struct list_head		*pos;
 	int				i;
 
-	list_for_each(pos, &bp->address_list) {
-		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
-
+	/* This is only called by sctp_send_asconf_del_ip() and we hold
+	 * the socket lock in that code path, so the address list
+	 * can't change.
+	 */
+	list_for_each_entry(laddr, &bp->address_list, list) {
 		addr_buf = (union sctp_addr *)addrs;
 		for (i = 0; i < addrcnt; i++) {
 			addr = (union sctp_addr *)addr_buf;
 			af = sctp_get_af_specific(addr->v4.sin_family);
 			if (!af)
-				return NULL;
+				break;
 
 			if (opt->pf->cmp_addr(&laddr->a, addr, opt))
 				break;
diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
index 1404a9e..110d912 100644
--- a/net/sctp/endpointola.c
+++ b/net/sctp/endpointola.c
@@ -92,7 +92,6 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep,
 
 	/* Initialize the bind addr area */
 	sctp_bind_addr_init(&ep->base.bind_addr, 0);
-	rwlock_init(&ep->base.addr_lock);
 
 	/* Remember who we are attached to.  */
 	ep->base.sk = sk;
@@ -225,21 +224,14 @@ void sctp_endpoint_put(struct sctp_endpoint *ep)
 struct sctp_endpoint *sctp_endpoint_is_match(struct sctp_endpoint *ep,
 					       const union sctp_addr *laddr)
 {
-	struct sctp_endpoint *retval;
+	struct sctp_endpoint *retval = NULL;
 
-	sctp_read_lock(&ep->base.addr_lock);
 	if (htons(ep->base.bind_addr.port) == laddr->v4.sin_port) {
 		if (sctp_bind_addr_match(&ep->base.bind_addr, laddr,
-					 sctp_sk(ep->base.sk))) {
+					 sctp_sk(ep->base.sk)))
 			retval = ep;
-			goto out;
-		}
 	}
 
-	retval = NULL;
-
-out:
-	sctp_read_unlock(&ep->base.addr_lock);
 	return retval;
 }
 
@@ -261,9 +253,7 @@ static struct sctp_association *__sctp_endpoint_lookup_assoc(
 	list_for_each(pos, &ep->asocs) {
 		asoc = list_entry(pos, struct sctp_association, asocs);
 		if (rport == asoc->peer.port) {
-			sctp_read_lock(&asoc->base.addr_lock);
 			*transport = sctp_assoc_lookup_paddr(asoc, paddr);
-			sctp_read_unlock(&asoc->base.addr_lock);
 
 			if (*transport)
 				return asoc;
@@ -295,20 +285,17 @@ struct sctp_association *sctp_endpoint_lookup_assoc(
 int sctp_endpoint_is_peeled_off(struct sctp_endpoint *ep,
 				const union sctp_addr *paddr)
 {
-	struct list_head *pos;
 	struct sctp_sockaddr_entry *addr;
 	struct sctp_bind_addr *bp;
 
-	sctp_read_lock(&ep->base.addr_lock);
 	bp = &ep->base.bind_addr;
-	list_for_each(pos, &bp->address_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
-		if (sctp_has_association(&addr->a, paddr)) {
-			sctp_read_unlock(&ep->base.addr_lock);
+	/* This function is called with the socket lock held,
+	 * so the address_list can not change.
+	 */
+	list_for_each_entry(addr, &bp->address_list, list) {
+		if (sctp_has_association(&addr->a, paddr))
 			return 1;
-		}
 	}
-	sctp_read_unlock(&ep->base.addr_lock);
 
 	return 0;
 }
diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
index 54ff472..c8b0115 100644
--- a/net/sctp/ipv6.c
+++ b/net/sctp/ipv6.c
@@ -302,9 +302,7 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
 			      union sctp_addr *saddr)
 {
 	struct sctp_bind_addr *bp;
-	rwlock_t *addr_lock;
 	struct sctp_sockaddr_entry *laddr;
-	struct list_head *pos;
 	sctp_scope_t scope;
 	union sctp_addr *baddr = NULL;
 	__u8 matchlen = 0;
@@ -324,14 +322,14 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
 	scope = sctp_scope(daddr);
 
 	bp = &asoc->base.bind_addr;
-	addr_lock = &asoc->base.addr_lock;
 
 	/* Go through the bind address list and find the best source address
 	 * that matches the scope of the destination address.
 	 */
-	sctp_read_lock(addr_lock);
-	list_for_each(pos, &bp->address_list) {
-		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	rcu_read_lock();
+	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
+		if (!laddr->valid)
+			continue;
 		if ((laddr->use_as_src) &&
 		    (laddr->a.sa.sa_family == AF_INET6) &&
 		    (scope <= sctp_scope(&laddr->a))) {
@@ -353,7 +351,7 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
 		       __FUNCTION__, asoc, NIP6(daddr->v6.sin6_addr));
 	}
 
-	sctp_read_unlock(addr_lock);
+	rcu_read_unlock();
 }
 
 /* Make a copy of all potential local addresses. */
diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
index 4688559..35af75b 100644
--- a/net/sctp/protocol.c
+++ b/net/sctp/protocol.c
@@ -223,7 +223,7 @@ int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
 			      (copy_flags & SCTP_ADDR6_ALLOWED) &&
 			      (copy_flags & SCTP_ADDR6_PEERSUPP)))) {
 				error = sctp_add_bind_addr(bp, &addr->a, 1,
-							   GFP_ATOMIC);
+						    GFP_ATOMIC);
 				if (error)
 					goto end_copy;
 			}
@@ -427,9 +427,7 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
 	struct rtable *rt;
 	struct flowi fl;
 	struct sctp_bind_addr *bp;
-	rwlock_t *addr_lock;
 	struct sctp_sockaddr_entry *laddr;
-	struct list_head *pos;
 	struct dst_entry *dst = NULL;
 	union sctp_addr dst_saddr;
 
@@ -458,23 +456,20 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
 		goto out;
 
 	bp = &asoc->base.bind_addr;
-	addr_lock = &asoc->base.addr_lock;
 
 	if (dst) {
 		/* Walk through the bind address list and look for a bind
 		 * address that matches the source address of the returned dst.
 		 */
-		sctp_read_lock(addr_lock);
-		list_for_each(pos, &bp->address_list) {
-			laddr = list_entry(pos, struct sctp_sockaddr_entry,
-					   list);
-			if (!laddr->use_as_src)
+		rcu_read_lock();
+		list_for_each_entry_rcu(laddr, &bp->address_list, list) {
+			if (!laddr->valid || !laddr->use_as_src)
 				continue;
 			sctp_v4_dst_saddr(&dst_saddr, dst, htons(bp->port));
 			if (sctp_v4_cmp_addr(&dst_saddr, &laddr->a))
 				goto out_unlock;
 		}
-		sctp_read_unlock(addr_lock);
+		rcu_read_unlock();
 
 		/* None of the bound addresses match the source address of the
 		 * dst. So release it.
@@ -486,10 +481,10 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
 	/* Walk through the bind address list and try to get a dst that
 	 * matches a bind address as the source address.
 	 */
-	sctp_read_lock(addr_lock);
-	list_for_each(pos, &bp->address_list) {
-		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
-
+	rcu_read_lock();
+	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
+		if (!laddr->valid)
+			continue;
 		if ((laddr->use_as_src) &&
 		    (AF_INET == laddr->a.sa.sa_family)) {
 			fl.fl4_src = laddr->a.v4.sin_addr.s_addr;
@@ -501,7 +496,7 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
 	}
 
 out_unlock:
-	sctp_read_unlock(addr_lock);
+	rcu_read_unlock();
 out:
 	if (dst)
 		SCTP_DEBUG_PRINTK("rt_dst:%u.%u.%u.%u, rt_src:%u.%u.%u.%u\n",
diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
index 79856c9..2e34220 100644
--- a/net/sctp/sm_make_chunk.c
+++ b/net/sctp/sm_make_chunk.c
@@ -1531,7 +1531,7 @@ no_hmac:
 	/* Also, add the destination address. */
 	if (list_empty(&retval->base.bind_addr.address_list)) {
 		sctp_add_bind_addr(&retval->base.bind_addr, &chunk->dest, 1,
-				   GFP_ATOMIC);
+				GFP_ATOMIC);
 	}
 
 	retval->next_tsn = retval->c.initial_tsn;
@@ -2613,22 +2613,16 @@ static int sctp_asconf_param_success(struct sctp_association *asoc,
 
 	switch (asconf_param->param_hdr.type) {
 	case SCTP_PARAM_ADD_IP:
-		sctp_local_bh_disable();
-		sctp_write_lock(&asoc->base.addr_lock);
-		list_for_each(pos, &bp->address_list) {
-			saddr = list_entry(pos, struct sctp_sockaddr_entry, list);
+		/* This is always done in BH context with a socket lock
+		 * held, so the list can not change.
+		 */
+		list_for_each_entry(saddr, &bp->address_list, list) {
 			if (sctp_cmp_addr_exact(&saddr->a, &addr))
 				saddr->use_as_src = 1;
 		}
-		sctp_write_unlock(&asoc->base.addr_lock);
-		sctp_local_bh_enable();
 		break;
 	case SCTP_PARAM_DEL_IP:
-		sctp_local_bh_disable();
-		sctp_write_lock(&asoc->base.addr_lock);
-		retval = sctp_del_bind_addr(bp, &addr);
-		sctp_write_unlock(&asoc->base.addr_lock);
-		sctp_local_bh_enable();
+		retval = sctp_del_bind_addr(bp, &addr, call_rcu_bh);
 		list_for_each(pos, &asoc->peer.transport_addr_list) {
 			transport = list_entry(pos, struct sctp_transport,
 						 transports);
diff --git a/net/sctp/socket.c b/net/sctp/socket.c
index a3acf78..772fbfb 100644
--- a/net/sctp/socket.c
+++ b/net/sctp/socket.c
@@ -367,14 +367,10 @@ SCTP_STATIC int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
 	if (!bp->port)
 		bp->port = inet_sk(sk)->num;
 
-	/* Add the address to the bind address list.  */
-	sctp_local_bh_disable();
-	sctp_write_lock(&ep->base.addr_lock);
-
-	/* Use GFP_ATOMIC since BHs are disabled.  */
+	/* Add the address to the bind address list.
+	 * Use GFP_ATOMIC since BHs will be disabled.
+	 */
 	ret = sctp_add_bind_addr(bp, addr, 1, GFP_ATOMIC);
-	sctp_write_unlock(&ep->base.addr_lock);
-	sctp_local_bh_enable();
 
 	/* Copy back into socket for getsockname() use. */
 	if (!ret) {
@@ -544,15 +540,12 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
 		if (i < addrcnt)
 			continue;
 
-		/* Use the first address in bind addr list of association as
-		 * Address Parameter of ASCONF CHUNK.
+		/* Use the first valid address in bind addr list of
+		 * association as Address Parameter of ASCONF CHUNK.
 		 */
-		sctp_read_lock(&asoc->base.addr_lock);
 		bp = &asoc->base.bind_addr;
 		p = bp->address_list.next;
 		laddr = list_entry(p, struct sctp_sockaddr_entry, list);
-		sctp_read_unlock(&asoc->base.addr_lock);
-
 		chunk = sctp_make_asconf_update_ip(asoc, &laddr->a, addrs,
 						   addrcnt, SCTP_PARAM_ADD_IP);
 		if (!chunk) {
@@ -567,8 +560,6 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
 		/* Add the new addresses to the bind address list with
 		 * use_as_src set to 0.
 		 */
-		sctp_local_bh_disable();
-		sctp_write_lock(&asoc->base.addr_lock);
 		addr_buf = addrs;
 		for (i = 0; i < addrcnt; i++) {
 			addr = (union sctp_addr *)addr_buf;
@@ -578,8 +569,6 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
 						    GFP_ATOMIC);
 			addr_buf += af->sockaddr_len;
 		}
-		sctp_write_unlock(&asoc->base.addr_lock);
-		sctp_local_bh_enable();
 	}
 
 out:
@@ -651,13 +640,7 @@ static int sctp_bindx_rem(struct sock *sk, struct sockaddr *addrs, int addrcnt)
 		 * socket routing and failover schemes. Refer to comments in
 		 * sctp_do_bind(). -daisy
 		 */
-		sctp_local_bh_disable();
-		sctp_write_lock(&ep->base.addr_lock);
-
-		retval = sctp_del_bind_addr(bp, sa_addr);
-
-		sctp_write_unlock(&ep->base.addr_lock);
-		sctp_local_bh_enable();
+		retval = sctp_del_bind_addr(bp, sa_addr, call_rcu);
 
 		addr_buf += af->sockaddr_len;
 err_bindx_rem:
@@ -748,14 +731,16 @@ static int sctp_send_asconf_del_ip(struct sock		*sk,
 		 * make sure that we do not delete all the addresses in the
 		 * association.
 		 */
-		sctp_read_lock(&asoc->base.addr_lock);
 		bp = &asoc->base.bind_addr;
 		laddr = sctp_find_unmatch_addr(bp, (union sctp_addr *)addrs,
 					       addrcnt, sp);
-		sctp_read_unlock(&asoc->base.addr_lock);
 		if (!laddr)
 			continue;
 
+		/* We do not need RCU protection throughout this loop
+		 * because this is done under a socket lock from the
+		 * setsockopt call.
+		 */
 		chunk = sctp_make_asconf_update_ip(asoc, laddr, addrs, addrcnt,
 						   SCTP_PARAM_DEL_IP);
 		if (!chunk) {
@@ -766,23 +751,16 @@ static int sctp_send_asconf_del_ip(struct sock		*sk,
 		/* Reset use_as_src flag for the addresses in the bind address
 		 * list that are to be deleted.
 		 */
-		sctp_local_bh_disable();
-		sctp_write_lock(&asoc->base.addr_lock);
 		addr_buf = addrs;
 		for (i = 0; i < addrcnt; i++) {
 			laddr = (union sctp_addr *)addr_buf;
 			af = sctp_get_af_specific(laddr->v4.sin_family);
-			list_for_each(pos1, &bp->address_list) {
-				saddr = list_entry(pos1,
-						   struct sctp_sockaddr_entry,
-						   list);
+			list_for_each_entry(saddr, &bp->address_list, list) {
 				if (sctp_cmp_addr_exact(&saddr->a, laddr))
 					saddr->use_as_src = 0;
 			}
 			addr_buf += af->sockaddr_len;
 		}
-		sctp_write_unlock(&asoc->base.addr_lock);
-		sctp_local_bh_enable();
 
 		/* Update the route and saddr entries for all the transports
 		 * as some of the addresses in the bind address list are
@@ -4057,11 +4035,9 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
 					       int __user *optlen)
 {
 	sctp_assoc_t id;
-	struct list_head *pos;
 	struct sctp_bind_addr *bp;
 	struct sctp_association *asoc;
 	struct sctp_sockaddr_entry *addr;
-	rwlock_t *addr_lock;
 	int cnt = 0;
 
 	if (len < sizeof(sctp_assoc_t))
@@ -4078,17 +4054,13 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
 	 */
 	if (0 == id) {
 		bp = &sctp_sk(sk)->ep->base.bind_addr;
-		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
 	} else {
 		asoc = sctp_id2assoc(sk, id);
 		if (!asoc)
 			return -EINVAL;
 		bp = &asoc->base.bind_addr;
-		addr_lock = &asoc->base.addr_lock;
 	}
 
-	sctp_read_lock(addr_lock);
-
 	/* If the endpoint is bound to 0.0.0.0 or ::0, count the valid
 	 * addresses from the global local address list.
 	 */
@@ -4115,12 +4087,14 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
 		goto done;
 	}
 
-	list_for_each(pos, &bp->address_list) {
+	/* Protection on the bound address list is not needed,
+	 * since in the socket option context we hold the socket lock,
+	 * so there is no way that the bound address list can change.
+	 */
+	list_for_each_entry(addr, &bp->address_list, list) {
 		cnt ++;
 	}
-
 done:
-	sctp_read_unlock(addr_lock);
 	return cnt;
 }
 
@@ -4204,7 +4178,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 {
 	struct sctp_bind_addr *bp;
 	struct sctp_association *asoc;
-	struct list_head *pos;
 	int cnt = 0;
 	struct sctp_getaddrs_old getaddrs;
 	struct sctp_sockaddr_entry *addr;
@@ -4212,7 +4185,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	union sctp_addr temp;
 	struct sctp_sock *sp = sctp_sk(sk);
 	int addrlen;
-	rwlock_t *addr_lock;
 	int err = 0;
 	void *addrs;
 	void *buf;
@@ -4234,13 +4206,11 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	 */
 	if (0 == getaddrs.assoc_id) {
 		bp = &sctp_sk(sk)->ep->base.bind_addr;
-		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
 	} else {
 		asoc = sctp_id2assoc(sk, getaddrs.assoc_id);
 		if (!asoc)
 			return -EINVAL;
 		bp = &asoc->base.bind_addr;
-		addr_lock = &asoc->base.addr_lock;
 	}
 
 	to = getaddrs.addrs;
@@ -4254,8 +4224,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	if (!addrs)
 		return -ENOMEM;
 
-	sctp_read_lock(addr_lock);
-
 	/* If the endpoint is bound to 0.0.0.0 or ::0, get the valid
 	 * addresses from the global local address list.
 	 */
@@ -4271,8 +4239,11 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	}
 
 	buf = addrs;
-	list_for_each(pos, &bp->address_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	/* Protection on the bound address list is not needed since
+	 * in the socket option context we hold a socket lock and
+	 * thus the bound address list can't change.
+	 */
+	list_for_each_entry(addr, &bp->address_list, list) {
 		memcpy(&temp, &addr->a, sizeof(temp));
 		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sp, &temp);
 		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
@@ -4284,8 +4255,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
 	}
 
 copy_getaddrs:
-	sctp_read_unlock(addr_lock);
-
 	/* copy the entire address list into the user provided space */
 	if (copy_to_user(to, addrs, bytes_copied)) {
 		err = -EFAULT;
@@ -4307,7 +4276,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 {
 	struct sctp_bind_addr *bp;
 	struct sctp_association *asoc;
-	struct list_head *pos;
 	int cnt = 0;
 	struct sctp_getaddrs getaddrs;
 	struct sctp_sockaddr_entry *addr;
@@ -4315,7 +4283,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 	union sctp_addr temp;
 	struct sctp_sock *sp = sctp_sk(sk);
 	int addrlen;
-	rwlock_t *addr_lock;
 	int err = 0;
 	size_t space_left;
 	int bytes_copied = 0;
@@ -4336,13 +4303,11 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 	 */
 	if (0 == getaddrs.assoc_id) {
 		bp = &sctp_sk(sk)->ep->base.bind_addr;
-		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
 	} else {
 		asoc = sctp_id2assoc(sk, getaddrs.assoc_id);
 		if (!asoc)
 			return -EINVAL;
 		bp = &asoc->base.bind_addr;
-		addr_lock = &asoc->base.addr_lock;
 	}
 
 	to = optval + offsetof(struct sctp_getaddrs,addrs);
@@ -4352,8 +4317,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 	if (!addrs)
 		return -ENOMEM;
 
-	sctp_read_lock(addr_lock);
-
 	/* If the endpoint is bound to 0.0.0.0 or ::0, get the valid
 	 * addresses from the global local address list.
 	 */
@@ -4365,21 +4328,24 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 						space_left, &bytes_copied);
 			if (cnt < 0) {
 				err = cnt;
-				goto error_lock;
+				goto out;
 			}
 			goto copy_getaddrs;
 		}
 	}
 
 	buf = addrs;
-	list_for_each(pos, &bp->address_list) {
-		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
+	/* Protection on the bound address list is not needed since
+	 * in the socket option context we hold a socket lock and
+	 * thus the bound address list can't change.
+	 */
+	list_for_each_entry(addr, &bp->address_list, list) {
 		memcpy(&temp, &addr->a, sizeof(temp));
 		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sp, &temp);
 		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
 		if (space_left < addrlen) {
 			err =  -ENOMEM; /*fixme: right error?*/
-			goto error_lock;
+			goto out;
 		}
 		memcpy(buf, &temp, addrlen);
 		buf += addrlen;
@@ -4389,8 +4355,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
 	}
 
 copy_getaddrs:
-	sctp_read_unlock(addr_lock);
-
 	if (copy_to_user(to, addrs, bytes_copied)) {
 		err = -EFAULT;
 		goto out;
@@ -4401,12 +4365,6 @@ copy_getaddrs:
 	}
 	if (put_user(bytes_copied, optlen))
 		err = -EFAULT;
-
-	goto out;
-
-error_lock:
-	sctp_read_unlock(addr_lock);
-
 out:
 	kfree(addrs);
 	return err;
-- 
1.5.2.4


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list
  2007-09-12 19:46 ` [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list Vlad Yasevich
@ 2007-09-12 22:26   ` Paul E. McKenney
  2007-09-12 23:03   ` Sridhar Samudrala
  1 sibling, 0 replies; 14+ messages in thread
From: Paul E. McKenney @ 2007-09-12 22:26 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: netdev, lksctp-developers

On Wed, Sep 12, 2007 at 03:46:37PM -0400, Vlad Yasevich wrote:
> sctp_localaddr_list is modified dynamically via NETDEV_UP
> and NETDEV_DOWN events, but there is no synchronization
> between the writer (event handler) and readers.  As a result,
> the readers can access an entry that has been freed and
> crash the system.

Looks much better!!!  Typo in comment noted below.

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
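
For anyone following along, the reader side throughout this patch is the
usual RCU list pattern.  A made-up helper, just to illustrate the shape
(sctp_local_addr_in_use() is not part of the patch):

/* Illustrative only: does any valid local address exactly match 'a'? */
static int sctp_local_addr_in_use(const union sctp_addr *a)
{
	struct sctp_sockaddr_entry *addr;
	int found = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
		if (!addr->valid)	/* entry is on its way out */
			continue;
		if (sctp_cmp_addr_exact(&addr->a, a)) {
			found = 1;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}

No locks on the read side; the writer's list_del_rcu()/call_rcu() pair
guarantees the entry stays around until readers like this one are done
with it.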

> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
> ---
>  include/net/sctp/sctp.h    |    1 +
>  include/net/sctp/structs.h |    6 +++++
>  net/sctp/bind_addr.c       |    2 +
>  net/sctp/ipv6.c            |   34 ++++++++++++++++++++++---------
>  net/sctp/protocol.c        |   46 ++++++++++++++++++++++++++++++-------------
>  net/sctp/socket.c          |   38 +++++++++++++++++++++++------------
>  6 files changed, 90 insertions(+), 37 deletions(-)
> 
> diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
> index d529045..c9cc00c 100644
> --- a/include/net/sctp/sctp.h
> +++ b/include/net/sctp/sctp.h
> @@ -123,6 +123,7 @@
>   * sctp/protocol.c
>   */
>  extern struct sock *sctp_get_ctl_sock(void);
> +extern void sctp_local_addr_free(struct rcu_head *head);
>  extern int sctp_copy_local_addr_list(struct sctp_bind_addr *,
>  				     sctp_scope_t, gfp_t gfp,
>  				     int flags);
> diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
> index c0d5848..a89e361 100644
> --- a/include/net/sctp/structs.h
> +++ b/include/net/sctp/structs.h
> @@ -207,6 +207,9 @@ extern struct sctp_globals {
>  	 * It is a list of sctp_sockaddr_entry.
>  	 */
>  	struct list_head local_addr_list;
> +
> +	/* Lock that protects the local_addr_list writers */
> +	spinlock_t addr_list_lock;
>  	
>  	/* Flag to indicate if addip is enabled. */
>  	int addip_enable;
> @@ -242,6 +245,7 @@ extern struct sctp_globals {
>  #define sctp_port_alloc_lock		(sctp_globals.port_alloc_lock)
>  #define sctp_port_hashtable		(sctp_globals.port_hashtable)
>  #define sctp_local_addr_list		(sctp_globals.local_addr_list)
> +#define sctp_local_addr_lock		(sctp_globals.addr_list_lock)
>  #define sctp_addip_enable		(sctp_globals.addip_enable)
>  #define sctp_prsctp_enable		(sctp_globals.prsctp_enable)
> 
> @@ -737,8 +741,10 @@ const union sctp_addr *sctp_source(const struct sctp_chunk *chunk);
>  /* This is a structure for holding either an IPv6 or an IPv4 address.  */
>  struct sctp_sockaddr_entry {
>  	struct list_head list;
> +	struct rcu_head	rcu;
>  	union sctp_addr a;
>  	__u8 use_as_src;
> +	__u8 valid;
>  };
> 
>  typedef struct sctp_chunk *(sctp_packet_phandler_t)(struct sctp_association *);
> diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
> index fdb287a..7fc369f 100644
> --- a/net/sctp/bind_addr.c
> +++ b/net/sctp/bind_addr.c
> @@ -163,8 +163,10 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
>  		addr->a.v4.sin_port = htons(bp->port);
> 
>  	addr->use_as_src = use_as_src;
> +	addr->valid = 1;
> 
>  	INIT_LIST_HEAD(&addr->list);
> +	INIT_RCU_HEAD(&addr->rcu);
>  	list_add_tail(&addr->list, &bp->address_list);
>  	SCTP_DBG_OBJCNT_INC(addr);
> 
> diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
> index f8aa23d..54ff472 100644
> --- a/net/sctp/ipv6.c
> +++ b/net/sctp/ipv6.c
> @@ -77,13 +77,18 @@
> 
>  #include <asm/uaccess.h>
> 
> -/* Event handler for inet6 address addition/deletion events.  */
> +/* Event handler for inet6 address addition/deletion events.
> + * This even is part of the atomic notifier call chain

s/even/event/

> + * and thus happens atomically and can NOT sleep.  As a result
> + * we can't and really don't need to add any locks to guard the
> + * RCU.
> + */
>  static int sctp_inet6addr_event(struct notifier_block *this, unsigned long ev,
>  				void *ptr)
>  {
>  	struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr;
> -	struct sctp_sockaddr_entry *addr;
> -	struct list_head *pos, *temp;
> +	struct sctp_sockaddr_entry *addr = NULL;
> +	struct sctp_sockaddr_entry *temp;
> 
>  	switch (ev) {
>  	case NETDEV_UP:
> @@ -94,19 +99,26 @@ static int sctp_inet6addr_event(struct notifier_block *this, unsigned long ev,
>  			memcpy(&addr->a.v6.sin6_addr, &ifa->addr,
>  				 sizeof(struct in6_addr));
>  			addr->a.v6.sin6_scope_id = ifa->idev->dev->ifindex;
> -			list_add_tail(&addr->list, &sctp_local_addr_list);
> +			addr->valid = 1;
> +			spin_lock_bh(&sctp_local_addr_lock);
> +			list_add_tail_rcu(&addr->list, &sctp_local_addr_list);
> +			spin_unlock_bh(&sctp_local_addr_lock);
>  		}
>  		break;
>  	case NETDEV_DOWN:
> -		list_for_each_safe(pos, temp, &sctp_local_addr_list) {
> -			addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> -			if (ipv6_addr_equal(&addr->a.v6.sin6_addr, &ifa->addr)) {
> -				list_del(pos);
> -				kfree(addr);
> +		spin_lock_bh(&sctp_local_addr_lock);
> +		list_for_each_entry_safe(addr, temp,
> +					&sctp_local_addr_list, list) {
> +			if (ipv6_addr_equal(&addr->a.v6.sin6_addr,
> +					     &ifa->addr)) {
> +				addr->valid = 0;
> +				list_del_rcu(&addr->list);
>  				break;
>  			}
>  		}
> -
> +		spin_unlock_bh(&sctp_local_addr_lock);
> +		if (addr && !addr->valid)
> +			call_rcu(&addr->rcu, sctp_local_addr_free);
>  		break;
>  	}
> 
> @@ -367,7 +379,9 @@ static void sctp_v6_copy_addrlist(struct list_head *addrlist,
>  			addr->a.v6.sin6_port = 0;
>  			addr->a.v6.sin6_addr = ifp->addr;
>  			addr->a.v6.sin6_scope_id = dev->ifindex;
> +			addr->valid = 1;
>  			INIT_LIST_HEAD(&addr->list);
> +			INIT_RCU_HEAD(&addr->rcu);
>  			list_add_tail(&addr->list, addrlist);
>  		}
>  	}
> diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
> index e98579b..4688559 100644
> --- a/net/sctp/protocol.c
> +++ b/net/sctp/protocol.c
> @@ -153,6 +153,8 @@ static void sctp_v4_copy_addrlist(struct list_head *addrlist,
>  			addr->a.v4.sin_family = AF_INET;
>  			addr->a.v4.sin_port = 0;
>  			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
> +			addr->valid = 1;
> +			INIT_RCU_HEAD(&addr->rcu);
>  			list_add_tail(&addr->list, addrlist);
>  		}
>  	}
> @@ -192,16 +194,24 @@ static void sctp_free_local_addr_list(void)
>  	}
>  }
> 
> +void sctp_local_addr_free(struct rcu_head *head)
> +{
> +	struct sctp_sockaddr_entry *e = container_of(head,
> +				struct sctp_sockaddr_entry, rcu);
> +	kfree(e);
> +}
> +
>  /* Copy the local addresses which are valid for 'scope' into 'bp'.  */
>  int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
>  			      gfp_t gfp, int copy_flags)
>  {
>  	struct sctp_sockaddr_entry *addr;
>  	int error = 0;
> -	struct list_head *pos, *temp;
> 
> -	list_for_each_safe(pos, temp, &sctp_local_addr_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
> +		if (!addr->valid)
> +			continue;
>  		if (sctp_in_scope(&addr->a, scope)) {
>  			/* Now that the address is in scope, check to see if
>  			 * the address type is really supported by the local
> @@ -221,6 +231,7 @@ int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
>  	}
> 
>  end_copy:
> +	rcu_read_unlock();
>  	return error;
>  }
> 
> @@ -605,8 +616,8 @@ static int sctp_inetaddr_event(struct notifier_block *this, unsigned long ev,
>  			       void *ptr)
>  {
>  	struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
> -	struct sctp_sockaddr_entry *addr;
> -	struct list_head *pos, *temp;
> +	struct sctp_sockaddr_entry *addr = NULL;
> +	struct sctp_sockaddr_entry *temp;
> 
>  	switch (ev) {
>  	case NETDEV_UP:
> @@ -615,19 +626,25 @@ static int sctp_inetaddr_event(struct notifier_block *this, unsigned long ev,
>  			addr->a.v4.sin_family = AF_INET;
>  			addr->a.v4.sin_port = 0;
>  			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
> -			list_add_tail(&addr->list, &sctp_local_addr_list);
> +			addr->valid = 1;
> +			spin_lock_bh(&sctp_local_addr_lock);
> +			list_add_tail_rcu(&addr->list, &sctp_local_addr_list);
> +			spin_unlock_bh(&sctp_local_addr_lock);
>  		}
>  		break;
>  	case NETDEV_DOWN:
> -		list_for_each_safe(pos, temp, &sctp_local_addr_list) {
> -			addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +		spin_lock_bh(&sctp_local_addr_lock);
> +		list_for_each_entry_safe(addr, temp,
> +					&sctp_local_addr_list, list) {
>  			if (addr->a.v4.sin_addr.s_addr == ifa->ifa_local) {
> -				list_del(pos);
> -				kfree(addr);
> +				addr->valid = 0;
> +				list_del_rcu(&addr->list);
>  				break;
>  			}
>  		}
> -
> +		spin_unlock_bh(&sctp_local_addr_lock);
> +		if (addr && !addr->valid)
> +			call_rcu(&addr->rcu, sctp_local_addr_free);
>  		break;
>  	}
> 
> @@ -1160,6 +1177,7 @@ SCTP_STATIC __init int sctp_init(void)
> 
>  	/* Initialize the local address list. */
>  	INIT_LIST_HEAD(&sctp_local_addr_list);
> +	spin_lock_init(&sctp_local_addr_lock);
>  	sctp_get_local_addr_list();
> 
>  	/* Register notifier for inet address additions/deletions. */
> @@ -1227,6 +1245,9 @@ SCTP_STATIC __exit void sctp_exit(void)
>  	sctp_v6_del_protocol();
>  	inet_del_protocol(&sctp_protocol, IPPROTO_SCTP);
> 
> +	/* Unregister notifier for inet address additions/deletions. */
> +	unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
> +
>  	/* Free the local address list.  */
>  	sctp_free_local_addr_list();
> 
> @@ -1240,9 +1261,6 @@ SCTP_STATIC __exit void sctp_exit(void)
>  	inet_unregister_protosw(&sctp_stream_protosw);
>  	inet_unregister_protosw(&sctp_seqpacket_protosw);
> 
> -	/* Unregister notifier for inet address additions/deletions. */
> -	unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
> -
>  	sctp_sysctl_unregister();
>  	list_del(&sctp_ipv4_specific.list);
> 
> diff --git a/net/sctp/socket.c b/net/sctp/socket.c
> index 3335460..a3acf78 100644
> --- a/net/sctp/socket.c
> +++ b/net/sctp/socket.c
> @@ -4057,9 +4057,9 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
>  					       int __user *optlen)
>  {
>  	sctp_assoc_t id;
> +	struct list_head *pos;
>  	struct sctp_bind_addr *bp;
>  	struct sctp_association *asoc;
> -	struct list_head *pos, *temp;
>  	struct sctp_sockaddr_entry *addr;
>  	rwlock_t *addr_lock;
>  	int cnt = 0;
> @@ -4096,15 +4096,19 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
>  		addr = list_entry(bp->address_list.next,
>  				  struct sctp_sockaddr_entry, list);
>  		if (sctp_is_any(&addr->a)) {
> -			list_for_each_safe(pos, temp, &sctp_local_addr_list) {
> -				addr = list_entry(pos,
> -						  struct sctp_sockaddr_entry,
> -						  list);
> +			rcu_read_lock();
> +			list_for_each_entry_rcu(addr,
> +						&sctp_local_addr_list, list) {
> +				if (!addr->valid)
> +					continue;
> +
>  				if ((PF_INET == sk->sk_family) &&
>  				    (AF_INET6 == addr->a.sa.sa_family))
>  					continue;
> +
>  				cnt++;
>  			}
> +			rcu_read_unlock();
>  		} else {
>  			cnt = 1;
>  		}
> @@ -4127,14 +4131,16 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
>  					int max_addrs, void *to,
>  					int *bytes_copied)
>  {
> -	struct list_head *pos, *next;
>  	struct sctp_sockaddr_entry *addr;
>  	union sctp_addr temp;
>  	int cnt = 0;
>  	int addrlen;
> 
> -	list_for_each_safe(pos, next, &sctp_local_addr_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
> +		if (!addr->valid)
> +			continue;
> +
>  		if ((PF_INET == sk->sk_family) &&
>  		    (AF_INET6 == addr->a.sa.sa_family))
>  			continue;
> @@ -4149,6 +4155,7 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
>  		cnt ++;
>  		if (cnt >= max_addrs) break;
>  	}
> +	rcu_read_unlock();
> 
>  	return cnt;
>  }
> @@ -4156,14 +4163,16 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
>  static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
>  			    size_t space_left, int *bytes_copied)
>  {
> -	struct list_head *pos, *next;
>  	struct sctp_sockaddr_entry *addr;
>  	union sctp_addr temp;
>  	int cnt = 0;
>  	int addrlen;
> 
> -	list_for_each_safe(pos, next, &sctp_local_addr_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
> +		if (!addr->valid)
> +			continue;
> +
>  		if ((PF_INET == sk->sk_family) &&
>  		    (AF_INET6 == addr->a.sa.sa_family))
>  			continue;
> @@ -4171,8 +4180,10 @@ static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
>  		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sctp_sk(sk),
>  								&temp);
>  		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
> -		if (space_left < addrlen)
> -			return -ENOMEM;
> +		if (space_left < addrlen) {
> +			cnt =  -ENOMEM;
> +			break;
> +		}
>  		memcpy(to, &temp, addrlen);
> 
>  		to += addrlen;
> @@ -4180,6 +4191,7 @@ static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
>  		space_left -= addrlen;
>  		*bytes_copied += addrlen;
>  	}
> +	rcu_read_unlock();
> 
>  	return cnt;
>  }
> -- 
> 1.5.2.4
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU
  2007-09-12 21:03   ` [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU Vlad Yasevich
@ 2007-09-12 22:33     ` Paul E. McKenney
  2007-09-13 17:59       ` Sridhar Samudrala
  0 siblings, 1 reply; 14+ messages in thread
From: Paul E. McKenney @ 2007-09-12 22:33 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: netdev, lksctp-developers

On Wed, Sep 12, 2007 at 05:03:42PM -0400, Vlad Yasevich wrote:
> [... and here is the updated version as promised ...]
> 
> Since the sctp_sockaddr_entry is now RCU enabled as part of
> the patch to synchronize sctp_localaddr_list, it makes sense to
> change all handling of these entries to RCU.  This includes the
> sctp_bind_addrs structure and its list of bound addresses.
> 
> This list is currently protected by an external rw_lock and that
> looks like overkill.  There are only 2 writers to the list:
> bind()/bindx() calls, and BH processing of ASCONF-ACK chunks.
> These are already serialized via the socket lock, so they will
> not step on each other.  These are also relatively rare, so we
> should be good with RCU.
> 
> The readers are varied and they are easily converted to RCU.

Looks good from an RCU viewpoint -- I must defer to others on
the networking aspects.

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
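
One subtle point worth noting: sctp_del_bind_addr() now takes the RCU
callback as an argument, so each caller can wait for the grace period
that matches its context.  Roughly (the real call sites are in the patch
below):

	/* BH context: ASCONF-ACK processing in sctp_asconf_param_success() */
	retval = sctp_del_bind_addr(bp, &addr, call_rcu_bh);

	/* process context: sctp_bindx_rem() under the socket lock */
	retval = sctp_del_bind_addr(bp, sa_addr, call_rcu);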

> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
> ---
>  include/net/sctp/structs.h |    7 +--
>  net/sctp/associola.c       |   14 +-----
>  net/sctp/bind_addr.c       |   68 ++++++++++++++++++++----------
>  net/sctp/endpointola.c     |   27 +++---------
>  net/sctp/ipv6.c            |   12 ++---
>  net/sctp/protocol.c        |   25 ++++-------
>  net/sctp/sm_make_chunk.c   |   18 +++-----
>  net/sctp/socket.c          |   98 ++++++++++++-------------------------------
>  8 files changed, 106 insertions(+), 163 deletions(-)
> 
> diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
> index a89e361..c2fe2dc 100644
> --- a/include/net/sctp/structs.h
> +++ b/include/net/sctp/structs.h
> @@ -1155,7 +1155,9 @@ int sctp_bind_addr_copy(struct sctp_bind_addr *dest,
>  			int flags);
>  int sctp_add_bind_addr(struct sctp_bind_addr *, union sctp_addr *,
>  		       __u8 use_as_src, gfp_t gfp);
> -int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *);
> +int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *,
> +			void (*rcu_call)(struct rcu_head *,
> +					  void (*func)(struct rcu_head *)));
>  int sctp_bind_addr_match(struct sctp_bind_addr *, const union sctp_addr *,
>  			 struct sctp_sock *);
>  union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr	*bp,
> @@ -1226,9 +1228,6 @@ struct sctp_ep_common {
>  	 * bind_addr.address_list is our set of local IP addresses.
>  	 */
>  	struct sctp_bind_addr bind_addr;
> -
> -	/* Protection during address list comparisons. */
> -	rwlock_t   addr_lock;
>  };
> 
> 
> diff --git a/net/sctp/associola.c b/net/sctp/associola.c
> index 2ad1caf..9bad8ba 100644
> --- a/net/sctp/associola.c
> +++ b/net/sctp/associola.c
> @@ -99,7 +99,6 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a
> 
>  	/* Initialize the bind addr area.  */
>  	sctp_bind_addr_init(&asoc->base.bind_addr, ep->base.bind_addr.port);
> -	rwlock_init(&asoc->base.addr_lock);
> 
>  	asoc->state = SCTP_STATE_CLOSED;
> 
> @@ -937,8 +936,6 @@ struct sctp_transport *sctp_assoc_is_match(struct sctp_association *asoc,
>  {
>  	struct sctp_transport *transport;
> 
> -	sctp_read_lock(&asoc->base.addr_lock);
> -
>  	if ((htons(asoc->base.bind_addr.port) == laddr->v4.sin_port) &&
>  	    (htons(asoc->peer.port) == paddr->v4.sin_port)) {
>  		transport = sctp_assoc_lookup_paddr(asoc, paddr);
> @@ -952,7 +949,6 @@ struct sctp_transport *sctp_assoc_is_match(struct sctp_association *asoc,
>  	transport = NULL;
> 
>  out:
> -	sctp_read_unlock(&asoc->base.addr_lock);
>  	return transport;
>  }
> 
> @@ -1376,19 +1372,13 @@ int sctp_assoc_set_bind_addr_from_cookie(struct sctp_association *asoc,
>  int sctp_assoc_lookup_laddr(struct sctp_association *asoc,
>  			    const union sctp_addr *laddr)
>  {
> -	int found;
> +	int found = 0;
> 
> -	sctp_read_lock(&asoc->base.addr_lock);
>  	if ((asoc->base.bind_addr.port == ntohs(laddr->v4.sin_port)) &&
>  	    sctp_bind_addr_match(&asoc->base.bind_addr, laddr,
> -				 sctp_sk(asoc->base.sk))) {
> +				 sctp_sk(asoc->base.sk)))
>  		found = 1;
> -		goto out;
> -	}
> 
> -	found = 0;
> -out:
> -	sctp_read_unlock(&asoc->base.addr_lock);
>  	return found;
>  }
> 
> diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
> index 7fc369f..d16055f 100644
> --- a/net/sctp/bind_addr.c
> +++ b/net/sctp/bind_addr.c
> @@ -167,7 +167,11 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
> 
>  	INIT_LIST_HEAD(&addr->list);
>  	INIT_RCU_HEAD(&addr->rcu);
> -	list_add_tail(&addr->list, &bp->address_list);
> +
> +	/* We always hold a socket lock when calling this function,
> +	 * so rcu_read_lock is not needed.
> +	 */
> +	list_add_tail_rcu(&addr->list, &bp->address_list);
>  	SCTP_DBG_OBJCNT_INC(addr);
> 
>  	return 0;
> @@ -176,23 +180,35 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
>  /* Delete an address from the bind address list in the SCTP_bind_addr
>   * structure.
>   */
> -int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr)
> +int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr,
> +			void (*rcu_call)(struct rcu_head *head,
> +					 void (*func)(struct rcu_head *head)))
>  {
> -	struct list_head *pos, *temp;
> -	struct sctp_sockaddr_entry *addr;
> +	struct sctp_sockaddr_entry *addr, *temp;
> 
> -	list_for_each_safe(pos, temp, &bp->address_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	/* We hold the socket lock when calling this function, so
> +	 * rcu_read_lock is not needed.
> +	 */
> +	list_for_each_entry_safe(addr, temp, &bp->address_list, list) {
>  		if (sctp_cmp_addr_exact(&addr->a, del_addr)) {
>  			/* Found the exact match. */
> -			list_del(pos);
> -			kfree(addr);
> -			SCTP_DBG_OBJCNT_DEC(addr);
> -
> -			return 0;
> +			addr->valid = 0;
> +			list_del_rcu(&addr->list);
> +			break;
>  		}
>  	}
> 
> +	/* Call the rcu callback provided in the args.  This function is
> +	 * called by both BH packet processing and user side socket option
> +	 * processing, but it works on different lists in those 2 contexts.
> +	 * Each context provides its own callback, whether call_rcu_bh()
> +	 * or call_rcu(), to make sure that we wait for an appropriate time.
> +	 */
> +	if (addr && !addr->valid) {
> +		rcu_call(&addr->rcu, sctp_local_addr_free);
> +		SCTP_DBG_OBJCNT_DEC(addr);
> +	}
> +
>  	return -EINVAL;
>  }
> 
> @@ -302,15 +318,20 @@ int sctp_bind_addr_match(struct sctp_bind_addr *bp,
>  			 struct sctp_sock *opt)
>  {
>  	struct sctp_sockaddr_entry *laddr;
> -	struct list_head *pos;
> -
> -	list_for_each(pos, &bp->address_list) {
> -		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
> -		if (opt->pf->cmp_addr(&laddr->a, addr, opt))
> -			return 1;
> +	int match = 0;
> +
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
> +		if (!laddr->valid)
> +			continue;
> +		if (opt->pf->cmp_addr(&laddr->a, addr, opt)) {
> +			match = 1;
> +			break;
> +		}
>  	}
> +	rcu_read_unlock();
> 
> -	return 0;
> +	return match;
>  }
> 
>  /* Find the first address in the bind address list that is not present in
> @@ -325,18 +346,19 @@ union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr	*bp,
>  	union sctp_addr			*addr;
>  	void 				*addr_buf;
>  	struct sctp_af			*af;
> -	struct list_head		*pos;
>  	int				i;
> 
> -	list_for_each(pos, &bp->address_list) {
> -		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
> -
> +	/* This is only called by sctp_send_asconf_del_ip() and we hold
> +	 * the socket lock in that code path, so the address list
> +	 * can't change.
> +	 */
> +	list_for_each_entry(laddr, &bp->address_list, list) {
>  		addr_buf = (union sctp_addr *)addrs;
>  		for (i = 0; i < addrcnt; i++) {
>  			addr = (union sctp_addr *)addr_buf;
>  			af = sctp_get_af_specific(addr->v4.sin_family);
>  			if (!af)
> -				return NULL;
> +				break;
> 
>  			if (opt->pf->cmp_addr(&laddr->a, addr, opt))
>  				break;
> diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
> index 1404a9e..110d912 100644
> --- a/net/sctp/endpointola.c
> +++ b/net/sctp/endpointola.c
> @@ -92,7 +92,6 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep,
> 
>  	/* Initialize the bind addr area */
>  	sctp_bind_addr_init(&ep->base.bind_addr, 0);
> -	rwlock_init(&ep->base.addr_lock);
> 
>  	/* Remember who we are attached to.  */
>  	ep->base.sk = sk;
> @@ -225,21 +224,14 @@ void sctp_endpoint_put(struct sctp_endpoint *ep)
>  struct sctp_endpoint *sctp_endpoint_is_match(struct sctp_endpoint *ep,
>  					       const union sctp_addr *laddr)
>  {
> -	struct sctp_endpoint *retval;
> +	struct sctp_endpoint *retval = NULL;
> 
> -	sctp_read_lock(&ep->base.addr_lock);
>  	if (htons(ep->base.bind_addr.port) == laddr->v4.sin_port) {
>  		if (sctp_bind_addr_match(&ep->base.bind_addr, laddr,
> -					 sctp_sk(ep->base.sk))) {
> +					 sctp_sk(ep->base.sk)))
>  			retval = ep;
> -			goto out;
> -		}
>  	}
> 
> -	retval = NULL;
> -
> -out:
> -	sctp_read_unlock(&ep->base.addr_lock);
>  	return retval;
>  }
> 
> @@ -261,9 +253,7 @@ static struct sctp_association *__sctp_endpoint_lookup_assoc(
>  	list_for_each(pos, &ep->asocs) {
>  		asoc = list_entry(pos, struct sctp_association, asocs);
>  		if (rport == asoc->peer.port) {
> -			sctp_read_lock(&asoc->base.addr_lock);
>  			*transport = sctp_assoc_lookup_paddr(asoc, paddr);
> -			sctp_read_unlock(&asoc->base.addr_lock);
> 
>  			if (*transport)
>  				return asoc;
> @@ -295,20 +285,17 @@ struct sctp_association *sctp_endpoint_lookup_assoc(
>  int sctp_endpoint_is_peeled_off(struct sctp_endpoint *ep,
>  				const union sctp_addr *paddr)
>  {
> -	struct list_head *pos;
>  	struct sctp_sockaddr_entry *addr;
>  	struct sctp_bind_addr *bp;
> 
> -	sctp_read_lock(&ep->base.addr_lock);
>  	bp = &ep->base.bind_addr;
> -	list_for_each(pos, &bp->address_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> -		if (sctp_has_association(&addr->a, paddr)) {
> -			sctp_read_unlock(&ep->base.addr_lock);
> +	/* This function is called with the socket lock held,
> +	 * so the address_list can not change.
> +	 */
> +	list_for_each_entry(addr, &bp->address_list, list) {
> +		if (sctp_has_association(&addr->a, paddr))
>  			return 1;
> -		}
>  	}
> -	sctp_read_unlock(&ep->base.addr_lock);
> 
>  	return 0;
>  }
> diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
> index 54ff472..c8b0115 100644
> --- a/net/sctp/ipv6.c
> +++ b/net/sctp/ipv6.c
> @@ -302,9 +302,7 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
>  			      union sctp_addr *saddr)
>  {
>  	struct sctp_bind_addr *bp;
> -	rwlock_t *addr_lock;
>  	struct sctp_sockaddr_entry *laddr;
> -	struct list_head *pos;
>  	sctp_scope_t scope;
>  	union sctp_addr *baddr = NULL;
>  	__u8 matchlen = 0;
> @@ -324,14 +322,14 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
>  	scope = sctp_scope(daddr);
> 
>  	bp = &asoc->base.bind_addr;
> -	addr_lock = &asoc->base.addr_lock;
> 
>  	/* Go through the bind address list and find the best source address
>  	 * that matches the scope of the destination address.
>  	 */
> -	sctp_read_lock(addr_lock);
> -	list_for_each(pos, &bp->address_list) {
> -		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
> +		if (!laddr->valid)
> +			continue;
>  		if ((laddr->use_as_src) &&
>  		    (laddr->a.sa.sa_family == AF_INET6) &&
>  		    (scope <= sctp_scope(&laddr->a))) {
> @@ -353,7 +351,7 @@ static void sctp_v6_get_saddr(struct sctp_association *asoc,
>  		       __FUNCTION__, asoc, NIP6(daddr->v6.sin6_addr));
>  	}
> 
> -	sctp_read_unlock(addr_lock);
> +	rcu_read_unlock();
>  }
> 
>  /* Make a copy of all potential local addresses. */
> diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
> index 4688559..35af75b 100644
> --- a/net/sctp/protocol.c
> +++ b/net/sctp/protocol.c
> @@ -223,7 +223,7 @@ int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
>  			      (copy_flags & SCTP_ADDR6_ALLOWED) &&
>  			      (copy_flags & SCTP_ADDR6_PEERSUPP)))) {
>  				error = sctp_add_bind_addr(bp, &addr->a, 1,
> -							   GFP_ATOMIC);
> +						    GFP_ATOMIC);
>  				if (error)
>  					goto end_copy;
>  			}
> @@ -427,9 +427,7 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
>  	struct rtable *rt;
>  	struct flowi fl;
>  	struct sctp_bind_addr *bp;
> -	rwlock_t *addr_lock;
>  	struct sctp_sockaddr_entry *laddr;
> -	struct list_head *pos;
>  	struct dst_entry *dst = NULL;
>  	union sctp_addr dst_saddr;
> 
> @@ -458,23 +456,20 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
>  		goto out;
> 
>  	bp = &asoc->base.bind_addr;
> -	addr_lock = &asoc->base.addr_lock;
> 
>  	if (dst) {
>  		/* Walk through the bind address list and look for a bind
>  		 * address that matches the source address of the returned dst.
>  		 */
> -		sctp_read_lock(addr_lock);
> -		list_for_each(pos, &bp->address_list) {
> -			laddr = list_entry(pos, struct sctp_sockaddr_entry,
> -					   list);
> -			if (!laddr->use_as_src)
> +		rcu_read_lock();
> +		list_for_each_entry_rcu(laddr, &bp->address_list, list) {
> +			if (!laddr->valid || !laddr->use_as_src)
>  				continue;
>  			sctp_v4_dst_saddr(&dst_saddr, dst, htons(bp->port));
>  			if (sctp_v4_cmp_addr(&dst_saddr, &laddr->a))
>  				goto out_unlock;
>  		}
> -		sctp_read_unlock(addr_lock);
> +		rcu_read_unlock();
> 
>  		/* None of the bound addresses match the source address of the
>  		 * dst. So release it.
> @@ -486,10 +481,10 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
>  	/* Walk through the bind address list and try to get a dst that
>  	 * matches a bind address as the source address.
>  	 */
> -	sctp_read_lock(addr_lock);
> -	list_for_each(pos, &bp->address_list) {
> -		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
> -
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
> +		if (!laddr->valid)
> +			continue;
>  		if ((laddr->use_as_src) &&
>  		    (AF_INET == laddr->a.sa.sa_family)) {
>  			fl.fl4_src = laddr->a.v4.sin_addr.s_addr;
> @@ -501,7 +496,7 @@ static struct dst_entry *sctp_v4_get_dst(struct sctp_association *asoc,
>  	}
> 
>  out_unlock:
> -	sctp_read_unlock(addr_lock);
> +	rcu_read_unlock();
>  out:
>  	if (dst)
>  		SCTP_DEBUG_PRINTK("rt_dst:%u.%u.%u.%u, rt_src:%u.%u.%u.%u\n",
> diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
> index 79856c9..2e34220 100644
> --- a/net/sctp/sm_make_chunk.c
> +++ b/net/sctp/sm_make_chunk.c
> @@ -1531,7 +1531,7 @@ no_hmac:
>  	/* Also, add the destination address. */
>  	if (list_empty(&retval->base.bind_addr.address_list)) {
>  		sctp_add_bind_addr(&retval->base.bind_addr, &chunk->dest, 1,
> -				   GFP_ATOMIC);
> +				GFP_ATOMIC);
>  	}
> 
>  	retval->next_tsn = retval->c.initial_tsn;
> @@ -2613,22 +2613,16 @@ static int sctp_asconf_param_success(struct sctp_association *asoc,
> 
>  	switch (asconf_param->param_hdr.type) {
>  	case SCTP_PARAM_ADD_IP:
> -		sctp_local_bh_disable();
> -		sctp_write_lock(&asoc->base.addr_lock);
> -		list_for_each(pos, &bp->address_list) {
> -			saddr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +		/* This is always done in BH context with a socket lock
> +		 * held, so the list can not change.
> +		 */
> +		list_for_each_entry(saddr, &bp->address_list, list) {
>  			if (sctp_cmp_addr_exact(&saddr->a, &addr))
>  				saddr->use_as_src = 1;
>  		}
> -		sctp_write_unlock(&asoc->base.addr_lock);
> -		sctp_local_bh_enable();
>  		break;
>  	case SCTP_PARAM_DEL_IP:
> -		sctp_local_bh_disable();
> -		sctp_write_lock(&asoc->base.addr_lock);
> -		retval = sctp_del_bind_addr(bp, &addr);
> -		sctp_write_unlock(&asoc->base.addr_lock);
> -		sctp_local_bh_enable();
> +		retval = sctp_del_bind_addr(bp, &addr, call_rcu_bh);
>  		list_for_each(pos, &asoc->peer.transport_addr_list) {
>  			transport = list_entry(pos, struct sctp_transport,
>  						 transports);
> diff --git a/net/sctp/socket.c b/net/sctp/socket.c
> index a3acf78..772fbfb 100644
> --- a/net/sctp/socket.c
> +++ b/net/sctp/socket.c
> @@ -367,14 +367,10 @@ SCTP_STATIC int sctp_do_bind(struct sock *sk, union sctp_addr *addr, int len)
>  	if (!bp->port)
>  		bp->port = inet_sk(sk)->num;
> 
> -	/* Add the address to the bind address list.  */
> -	sctp_local_bh_disable();
> -	sctp_write_lock(&ep->base.addr_lock);
> -
> -	/* Use GFP_ATOMIC since BHs are disabled.  */
> +	/* Add the address to the bind address list.
> +	 * Use GFP_ATOMIC since BHs will be disabled.
> +	 */
>  	ret = sctp_add_bind_addr(bp, addr, 1, GFP_ATOMIC);
> -	sctp_write_unlock(&ep->base.addr_lock);
> -	sctp_local_bh_enable();
> 
>  	/* Copy back into socket for getsockname() use. */
>  	if (!ret) {
> @@ -544,15 +540,12 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
>  		if (i < addrcnt)
>  			continue;
> 
> -		/* Use the first address in bind addr list of association as
> -		 * Address Parameter of ASCONF CHUNK.
> +		/* Use the first valid address in bind addr list of
> +		 * association as Address Parameter of ASCONF CHUNK.
>  		 */
> -		sctp_read_lock(&asoc->base.addr_lock);
>  		bp = &asoc->base.bind_addr;
>  		p = bp->address_list.next;
>  		laddr = list_entry(p, struct sctp_sockaddr_entry, list);
> -		sctp_read_unlock(&asoc->base.addr_lock);
> -
>  		chunk = sctp_make_asconf_update_ip(asoc, &laddr->a, addrs,
>  						   addrcnt, SCTP_PARAM_ADD_IP);
>  		if (!chunk) {
> @@ -567,8 +560,6 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
>  		/* Add the new addresses to the bind address list with
>  		 * use_as_src set to 0.
>  		 */
> -		sctp_local_bh_disable();
> -		sctp_write_lock(&asoc->base.addr_lock);
>  		addr_buf = addrs;
>  		for (i = 0; i < addrcnt; i++) {
>  			addr = (union sctp_addr *)addr_buf;
> @@ -578,8 +569,6 @@ static int sctp_send_asconf_add_ip(struct sock		*sk,
>  						    GFP_ATOMIC);
>  			addr_buf += af->sockaddr_len;
>  		}
> -		sctp_write_unlock(&asoc->base.addr_lock);
> -		sctp_local_bh_enable();
>  	}
> 
>  out:
> @@ -651,13 +640,7 @@ static int sctp_bindx_rem(struct sock *sk, struct sockaddr *addrs, int addrcnt)
>  		 * socket routing and failover schemes. Refer to comments in
>  		 * sctp_do_bind(). -daisy
>  		 */
> -		sctp_local_bh_disable();
> -		sctp_write_lock(&ep->base.addr_lock);
> -
> -		retval = sctp_del_bind_addr(bp, sa_addr);
> -
> -		sctp_write_unlock(&ep->base.addr_lock);
> -		sctp_local_bh_enable();
> +		retval = sctp_del_bind_addr(bp, sa_addr, call_rcu);
> 
>  		addr_buf += af->sockaddr_len;
>  err_bindx_rem:
> @@ -748,14 +731,16 @@ static int sctp_send_asconf_del_ip(struct sock		*sk,
>  		 * make sure that we do not delete all the addresses in the
>  		 * association.
>  		 */
> -		sctp_read_lock(&asoc->base.addr_lock);
>  		bp = &asoc->base.bind_addr;
>  		laddr = sctp_find_unmatch_addr(bp, (union sctp_addr *)addrs,
>  					       addrcnt, sp);
> -		sctp_read_unlock(&asoc->base.addr_lock);
>  		if (!laddr)
>  			continue;
> 
> +		/* We do not need RCU protection throughout this loop
> +		 * because this is done under a socket lock from the
> +		 * setsockopt call.
> +		 */
>  		chunk = sctp_make_asconf_update_ip(asoc, laddr, addrs, addrcnt,
>  						   SCTP_PARAM_DEL_IP);
>  		if (!chunk) {
> @@ -766,23 +751,16 @@ static int sctp_send_asconf_del_ip(struct sock		*sk,
>  		/* Reset use_as_src flag for the addresses in the bind address
>  		 * list that are to be deleted.
>  		 */
> -		sctp_local_bh_disable();
> -		sctp_write_lock(&asoc->base.addr_lock);
>  		addr_buf = addrs;
>  		for (i = 0; i < addrcnt; i++) {
>  			laddr = (union sctp_addr *)addr_buf;
>  			af = sctp_get_af_specific(laddr->v4.sin_family);
> -			list_for_each(pos1, &bp->address_list) {
> -				saddr = list_entry(pos1,
> -						   struct sctp_sockaddr_entry,
> -						   list);
> +			list_for_each_entry(saddr, &bp->address_list, list) {
>  				if (sctp_cmp_addr_exact(&saddr->a, laddr))
>  					saddr->use_as_src = 0;
>  			}
>  			addr_buf += af->sockaddr_len;
>  		}
> -		sctp_write_unlock(&asoc->base.addr_lock);
> -		sctp_local_bh_enable();
> 
>  		/* Update the route and saddr entries for all the transports
>  		 * as some of the addresses in the bind address list are
> @@ -4057,11 +4035,9 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
>  					       int __user *optlen)
>  {
>  	sctp_assoc_t id;
> -	struct list_head *pos;
>  	struct sctp_bind_addr *bp;
>  	struct sctp_association *asoc;
>  	struct sctp_sockaddr_entry *addr;
> -	rwlock_t *addr_lock;
>  	int cnt = 0;
> 
>  	if (len < sizeof(sctp_assoc_t))
> @@ -4078,17 +4054,13 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
>  	 */
>  	if (0 == id) {
>  		bp = &sctp_sk(sk)->ep->base.bind_addr;
> -		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
>  	} else {
>  		asoc = sctp_id2assoc(sk, id);
>  		if (!asoc)
>  			return -EINVAL;
>  		bp = &asoc->base.bind_addr;
> -		addr_lock = &asoc->base.addr_lock;
>  	}
> 
> -	sctp_read_lock(addr_lock);
> -
>  	/* If the endpoint is bound to 0.0.0.0 or ::0, count the valid
>  	 * addresses from the global local address list.
>  	 */
> @@ -4115,12 +4087,14 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
>  		goto done;
>  	}
> 
> -	list_for_each(pos, &bp->address_list) {
> +	/* Protection on the bound address list is not needed,
> +	 * since in the socket option context we hold the socket lock,
> +	 * so there is no way that the bound address list can change.
> +	 */
> +	list_for_each_entry(addr, &bp->address_list, list) {
>  		cnt ++;
>  	}
> -
>  done:
> -	sctp_read_unlock(addr_lock);
>  	return cnt;
>  }
> 
> @@ -4204,7 +4178,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
>  {
>  	struct sctp_bind_addr *bp;
>  	struct sctp_association *asoc;
> -	struct list_head *pos;
>  	int cnt = 0;
>  	struct sctp_getaddrs_old getaddrs;
>  	struct sctp_sockaddr_entry *addr;
> @@ -4212,7 +4185,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
>  	union sctp_addr temp;
>  	struct sctp_sock *sp = sctp_sk(sk);
>  	int addrlen;
> -	rwlock_t *addr_lock;
>  	int err = 0;
>  	void *addrs;
>  	void *buf;
> @@ -4234,13 +4206,11 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
>  	 */
>  	if (0 == getaddrs.assoc_id) {
>  		bp = &sctp_sk(sk)->ep->base.bind_addr;
> -		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
>  	} else {
>  		asoc = sctp_id2assoc(sk, getaddrs.assoc_id);
>  		if (!asoc)
>  			return -EINVAL;
>  		bp = &asoc->base.bind_addr;
> -		addr_lock = &asoc->base.addr_lock;
>  	}
> 
>  	to = getaddrs.addrs;
> @@ -4254,8 +4224,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
>  	if (!addrs)
>  		return -ENOMEM;
> 
> -	sctp_read_lock(addr_lock);
> -
>  	/* If the endpoint is bound to 0.0.0.0 or ::0, get the valid
>  	 * addresses from the global local address list.
>  	 */
> @@ -4271,8 +4239,11 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
>  	}
> 
>  	buf = addrs;
> -	list_for_each(pos, &bp->address_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	/* Protection on the bound address list is not needed since
> +	 * in the socket option context we hold a socket lock and
> +	 * thus the bound address list can't change.
> +	 */
> +	list_for_each_entry(addr, &bp->address_list, list) {
>  		memcpy(&temp, &addr->a, sizeof(temp));
>  		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sp, &temp);
>  		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
> @@ -4284,8 +4255,6 @@ static int sctp_getsockopt_local_addrs_old(struct sock *sk, int len,
>  	}
> 
>  copy_getaddrs:
> -	sctp_read_unlock(addr_lock);
> -
>  	/* copy the entire address list into the user provided space */
>  	if (copy_to_user(to, addrs, bytes_copied)) {
>  		err = -EFAULT;
> @@ -4307,7 +4276,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
>  {
>  	struct sctp_bind_addr *bp;
>  	struct sctp_association *asoc;
> -	struct list_head *pos;
>  	int cnt = 0;
>  	struct sctp_getaddrs getaddrs;
>  	struct sctp_sockaddr_entry *addr;
> @@ -4315,7 +4283,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
>  	union sctp_addr temp;
>  	struct sctp_sock *sp = sctp_sk(sk);
>  	int addrlen;
> -	rwlock_t *addr_lock;
>  	int err = 0;
>  	size_t space_left;
>  	int bytes_copied = 0;
> @@ -4336,13 +4303,11 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
>  	 */
>  	if (0 == getaddrs.assoc_id) {
>  		bp = &sctp_sk(sk)->ep->base.bind_addr;
> -		addr_lock = &sctp_sk(sk)->ep->base.addr_lock;
>  	} else {
>  		asoc = sctp_id2assoc(sk, getaddrs.assoc_id);
>  		if (!asoc)
>  			return -EINVAL;
>  		bp = &asoc->base.bind_addr;
> -		addr_lock = &asoc->base.addr_lock;
>  	}
> 
>  	to = optval + offsetof(struct sctp_getaddrs,addrs);
> @@ -4352,8 +4317,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
>  	if (!addrs)
>  		return -ENOMEM;
> 
> -	sctp_read_lock(addr_lock);
> -
>  	/* If the endpoint is bound to 0.0.0.0 or ::0, get the valid
>  	 * addresses from the global local address list.
>  	 */
> @@ -4365,21 +4328,24 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
>  						space_left, &bytes_copied);
>  			if (cnt < 0) {
>  				err = cnt;
> -				goto error_lock;
> +				goto out;
>  			}
>  			goto copy_getaddrs;
>  		}
>  	}
> 
>  	buf = addrs;
> -	list_for_each(pos, &bp->address_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	/* Protection on the bound address list is not needed since
> +	 * in the socket option context we hold a socket lock and
> +	 * thus the bound address list can't change.
> +	 */
> +	list_for_each_entry(addr, &bp->address_list, list) {
>  		memcpy(&temp, &addr->a, sizeof(temp));
>  		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sp, &temp);
>  		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
>  		if (space_left < addrlen) {
>  			err =  -ENOMEM; /*fixme: right error?*/
> -			goto error_lock;
> +			goto out;
>  		}
>  		memcpy(buf, &temp, addrlen);
>  		buf += addrlen;
> @@ -4389,8 +4355,6 @@ static int sctp_getsockopt_local_addrs(struct sock *sk, int len,
>  	}
> 
>  copy_getaddrs:
> -	sctp_read_unlock(addr_lock);
> -
>  	if (copy_to_user(to, addrs, bytes_copied)) {
>  		err = -EFAULT;
>  		goto out;
> @@ -4401,12 +4365,6 @@ copy_getaddrs:
>  	}
>  	if (put_user(bytes_copied, optlen))
>  		err = -EFAULT;
> -
> -	goto out;
> -
> -error_lock:
> -	sctp_read_unlock(addr_lock);
> -
>  out:
>  	kfree(addrs);
>  	return err;
> -- 
> 1.5.2.4
> 

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list
  2007-09-12 19:46 ` [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list Vlad Yasevich
  2007-09-12 22:26   ` Paul E. McKenney
@ 2007-09-12 23:03   ` Sridhar Samudrala
  2007-09-13 13:46     ` Vlad Yasevich
  1 sibling, 1 reply; 14+ messages in thread
From: Sridhar Samudrala @ 2007-09-12 23:03 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: netdev, lksctp-developers

Vlad,

A few minor comments inline.
Otherwise, looks good.

Thanks
Sridhar

On Wed, 2007-09-12 at 15:46 -0400, Vlad Yasevich wrote:
> sctp_localaddr_list is modified dynamically via NETDEV_UP
> and NETDEV_DOWN events, but there is no synchronization
> between the writer (event handler) and readers.  As a result,
> the readers can access an entry that has been freed and
> crash the system.
> 
> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
> ---
>  include/net/sctp/sctp.h    |    1 +
>  include/net/sctp/structs.h |    6 +++++
>  net/sctp/bind_addr.c       |    2 +
>  net/sctp/ipv6.c            |   34 ++++++++++++++++++++++---------
>  net/sctp/protocol.c        |   46 ++++++++++++++++++++++++++++++-------------
>  net/sctp/socket.c          |   38 +++++++++++++++++++++++------------
>  6 files changed, 90 insertions(+), 37 deletions(-)
> 
> diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h
> index d529045..c9cc00c 100644
> --- a/include/net/sctp/sctp.h
> +++ b/include/net/sctp/sctp.h
> @@ -123,6 +123,7 @@
>   * sctp/protocol.c
>   */
>  extern struct sock *sctp_get_ctl_sock(void);
> +extern void sctp_local_addr_free(struct rcu_head *head);
>  extern int sctp_copy_local_addr_list(struct sctp_bind_addr *,
>  				     sctp_scope_t, gfp_t gfp,
>  				     int flags);
> diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
> index c0d5848..a89e361 100644
> --- a/include/net/sctp/structs.h
> +++ b/include/net/sctp/structs.h
> @@ -207,6 +207,9 @@ extern struct sctp_globals {
>  	 * It is a list of sctp_sockaddr_entry.
>  	 */
>  	struct list_head local_addr_list;
> +
> +	/* Lock that protects the local_addr_list writers */
> +	spinlock_t addr_list_lock;
>  	
>  	/* Flag to indicate if addip is enabled. */
>  	int addip_enable;
> @@ -242,6 +245,7 @@ extern struct sctp_globals {
>  #define sctp_port_alloc_lock		(sctp_globals.port_alloc_lock)
>  #define sctp_port_hashtable		(sctp_globals.port_hashtable)
>  #define sctp_local_addr_list		(sctp_globals.local_addr_list)
> +#define sctp_local_addr_lock		(sctp_globals.addr_list_lock)
>  #define sctp_addip_enable		(sctp_globals.addip_enable)
>  #define sctp_prsctp_enable		(sctp_globals.prsctp_enable)
> 
> @@ -737,8 +741,10 @@ const union sctp_addr *sctp_source(const struct sctp_chunk *chunk);
>  /* This is a structure for holding either an IPv6 or an IPv4 address.  */
>  struct sctp_sockaddr_entry {
>  	struct list_head list;
> +	struct rcu_head	rcu;
>  	union sctp_addr a;
>  	__u8 use_as_src;
> +	__u8 valid;
>  };
> 
>  typedef struct sctp_chunk *(sctp_packet_phandler_t)(struct sctp_association *);
> diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
> index fdb287a..7fc369f 100644
> --- a/net/sctp/bind_addr.c
> +++ b/net/sctp/bind_addr.c
> @@ -163,8 +163,10 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
>  		addr->a.v4.sin_port = htons(bp->port);
> 
>  	addr->use_as_src = use_as_src;
> +	addr->valid = 1;
> 
>  	INIT_LIST_HEAD(&addr->list);
> +	INIT_RCU_HEAD(&addr->rcu);
>  	list_add_tail(&addr->list, &bp->address_list);
>  	SCTP_DBG_OBJCNT_INC(addr);
> 
> diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
> index f8aa23d..54ff472 100644
> --- a/net/sctp/ipv6.c
> +++ b/net/sctp/ipv6.c
> @@ -77,13 +77,18 @@
> 
>  #include <asm/uaccess.h>
> 
> -/* Event handler for inet6 address addition/deletion events.  */
> +/* Event handler for inet6 address addition/deletion events.
> + * This even is part of the atomic notifier call chain
> + * and thus happens atomically and can NOT sleep.  As a result
> + * we can't and really don't need to add any locks to guard the
> + * RCU.
> + */

Now that we are adding a spin_lock, the above comment is no longer valid.
It should be fixed to say that we still need a lock because we use the
same list for both inet and inet6 address events, and they can happen in
parallel.
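
Something along these lines, perhaps (wording is only a suggestion):

/* Event handler for inet6 address addition/deletion events.
 * The notifier runs atomically and cannot sleep, but sctp_local_addr_list
 * is shared with the IPv4 notifier and the two can run in parallel, so
 * writers still need the spin lock.
 */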

>  static int sctp_inet6addr_event(struct notifier_block *this, unsigned long ev,
>  				void *ptr)
>  {
>  	struct inet6_ifaddr *ifa = (struct inet6_ifaddr *)ptr;
> -	struct sctp_sockaddr_entry *addr;
> -	struct list_head *pos, *temp;
> +	struct sctp_sockaddr_entry *addr = NULL;
> +	struct sctp_sockaddr_entry *temp;
> 
>  	switch (ev) {
>  	case NETDEV_UP:
> @@ -94,19 +99,26 @@ static int sctp_inet6addr_event(struct notifier_block *this, unsigned long ev,
>  			memcpy(&addr->a.v6.sin6_addr, &ifa->addr,
>  				 sizeof(struct in6_addr));
>  			addr->a.v6.sin6_scope_id = ifa->idev->dev->ifindex;
> -			list_add_tail(&addr->list, &sctp_local_addr_list);
> +			addr->valid = 1;
> +			spin_lock_bh(&sctp_local_addr_lock);
> +			list_add_tail_rcu(&addr->list, &sctp_local_addr_list);
> +			spin_unlock_bh(&sctp_local_addr_lock);
>  		}
>  		break;
>  	case NETDEV_DOWN:
> -		list_for_each_safe(pos, temp, &sctp_local_addr_list) {
> -			addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> -			if (ipv6_addr_equal(&addr->a.v6.sin6_addr, &ifa->addr)) {
> -				list_del(pos);
> -				kfree(addr);
> +		spin_lock_bh(&sctp_local_addr_lock);
> +		list_for_each_entry_safe(addr, temp,
> +					&sctp_local_addr_list, list) {
> +			if (ipv6_addr_equal(&addr->a.v6.sin6_addr,
> +					     &ifa->addr)) {
> +				addr->valid = 0;
> +				list_del_rcu(&addr->list);
>  				break;
>  			}
>  		}
> -
> +		spin_unlock_bh(&sctp_local_addr_lock);
> +		if (addr && !addr->valid)
> +			call_rcu(&addr->rcu, sctp_local_addr_free);
>  		break;
>  	}
> 
> @@ -367,7 +379,9 @@ static void sctp_v6_copy_addrlist(struct list_head *addrlist,
>  			addr->a.v6.sin6_port = 0;
>  			addr->a.v6.sin6_addr = ifp->addr;
>  			addr->a.v6.sin6_scope_id = dev->ifindex;
> +			addr->valid = 1;
>  			INIT_LIST_HEAD(&addr->list);
> +			INIT_RCU_HEAD(&addr->rcu);
>  			list_add_tail(&addr->list, addrlist);
>  		}
>  	}
> diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
> index e98579b..4688559 100644
> --- a/net/sctp/protocol.c
> +++ b/net/sctp/protocol.c
> @@ -153,6 +153,8 @@ static void sctp_v4_copy_addrlist(struct list_head *addrlist,
>  			addr->a.v4.sin_family = AF_INET;
>  			addr->a.v4.sin_port = 0;
>  			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
> +			addr->valid = 1;
> +			INIT_RCU_HEAD(&addr->rcu);

This has nothing to do with this patch, but I noticed that
INIT_LIST_HEAD(&addr->list) is missing here when comparing with the
earlier v6 version of this routine.

>  			list_add_tail(&addr->list, addrlist);
>  		}
>  	}
> @@ -192,16 +194,24 @@ static void sctp_free_local_addr_list(void)
>  	}
>  }
> 
> +void sctp_local_addr_free(struct rcu_head *head)
> +{
> +	struct sctp_sockaddr_entry *e = container_of(head,
> +				struct sctp_sockaddr_entry, rcu);
> +	kfree(e);
> +}
> +
>  /* Copy the local addresses which are valid for 'scope' into 'bp'.  */
>  int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
>  			      gfp_t gfp, int copy_flags)
>  {
>  	struct sctp_sockaddr_entry *addr;
>  	int error = 0;
> -	struct list_head *pos, *temp;
> 
> -	list_for_each_safe(pos, temp, &sctp_local_addr_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
> +		if (!addr->valid)
> +			continue;
>  		if (sctp_in_scope(&addr->a, scope)) {
>  			/* Now that the address is in scope, check to see if
>  			 * the address type is really supported by the local
> @@ -221,6 +231,7 @@ int sctp_copy_local_addr_list(struct sctp_bind_addr *bp, sctp_scope_t scope,
>  	}
> 
>  end_copy:
> +	rcu_read_unlock();
>  	return error;
>  }
> 
> @@ -605,8 +616,8 @@ static int sctp_inetaddr_event(struct notifier_block *this, unsigned long ev,
>  			       void *ptr)
>  {
>  	struct in_ifaddr *ifa = (struct in_ifaddr *)ptr;
> -	struct sctp_sockaddr_entry *addr;
> -	struct list_head *pos, *temp;
> +	struct sctp_sockaddr_entry *addr = NULL;
> +	struct sctp_sockaddr_entry *temp;
> 
>  	switch (ev) {
>  	case NETDEV_UP:
> @@ -615,19 +626,25 @@ static int sctp_inetaddr_event(struct notifier_block *this, unsigned long ev,
>  			addr->a.v4.sin_family = AF_INET;
>  			addr->a.v4.sin_port = 0;
>  			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
> -			list_add_tail(&addr->list, &sctp_local_addr_list);
> +			addr->valid = 1;
> +			spin_lock_bh(&sctp_local_addr_lock);
> +			list_add_tail_rcu(&addr->list, &sctp_local_addr_list);
> +			spin_unlock_bh(&sctp_local_addr_lock);
>  		}
>  		break;
>  	case NETDEV_DOWN:
> -		list_for_each_safe(pos, temp, &sctp_local_addr_list) {
> -			addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +		spin_lock_bh(&sctp_local_addr_lock);
> +		list_for_each_entry_safe(addr, temp,
> +					&sctp_local_addr_list, list) {
>  			if (addr->a.v4.sin_addr.s_addr == ifa->ifa_local) {
> -				list_del(pos);
> -				kfree(addr);
> +				addr->valid = 0;
> +				list_del_rcu(&addr->list);
>  				break;
>  			}
>  		}
> -
> +		spin_unlock_bh(&sctp_local_addr_lock);
> +		if (addr && !addr->valid)
> +			call_rcu(&addr->rcu, sctp_local_addr_free);
>  		break;
>  	}
> 
> @@ -1160,6 +1177,7 @@ SCTP_STATIC __init int sctp_init(void)
> 
>  	/* Initialize the local address list. */
>  	INIT_LIST_HEAD(&sctp_local_addr_list);
> +	spin_lock_init(&sctp_local_addr_lock);
>  	sctp_get_local_addr_list();
> 
>  	/* Register notifier for inet address additions/deletions. */
> @@ -1227,6 +1245,9 @@ SCTP_STATIC __exit void sctp_exit(void)
>  	sctp_v6_del_protocol();
>  	inet_del_protocol(&sctp_protocol, IPPROTO_SCTP);
> 
> +	/* Unregister notifier for inet address additions/deletions. */
> +	unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
> +
>  	/* Free the local address list.  */
>  	sctp_free_local_addr_list();
> 
> @@ -1240,9 +1261,6 @@ SCTP_STATIC __exit void sctp_exit(void)
>  	inet_unregister_protosw(&sctp_stream_protosw);
>  	inet_unregister_protosw(&sctp_seqpacket_protosw);
> 
> -	/* Unregister notifier for inet address additions/deletions. */
> -	unregister_inetaddr_notifier(&sctp_inetaddr_notifier);
> -
>  	sctp_sysctl_unregister();
>  	list_del(&sctp_ipv4_specific.list);
> 
> diff --git a/net/sctp/socket.c b/net/sctp/socket.c
> index 3335460..a3acf78 100644
> --- a/net/sctp/socket.c
> +++ b/net/sctp/socket.c
> @@ -4057,9 +4057,9 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
>  					       int __user *optlen)
>  {
>  	sctp_assoc_t id;
> +	struct list_head *pos;
>  	struct sctp_bind_addr *bp;
>  	struct sctp_association *asoc;
> -	struct list_head *pos, *temp;
>  	struct sctp_sockaddr_entry *addr;
>  	rwlock_t *addr_lock;
>  	int cnt = 0;
> @@ -4096,15 +4096,19 @@ static int sctp_getsockopt_local_addrs_num_old(struct sock *sk, int len,
>  		addr = list_entry(bp->address_list.next,
>  				  struct sctp_sockaddr_entry, list);
>  		if (sctp_is_any(&addr->a)) {
> -			list_for_each_safe(pos, temp, &sctp_local_addr_list) {
> -				addr = list_entry(pos,
> -						  struct sctp_sockaddr_entry,
> -						  list);
> +			rcu_read_lock();
> +			list_for_each_entry_rcu(addr,
> +						&sctp_local_addr_list, list) {
> +				if (!addr->valid)
> +					continue;
> +
>  				if ((PF_INET == sk->sk_family) &&
>  				    (AF_INET6 == addr->a.sa.sa_family))
>  					continue;
> +
>  				cnt++;
>  			}
> +			rcu_read_unlock();
>  		} else {
>  			cnt = 1;
>  		}
> @@ -4127,14 +4131,16 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
>  					int max_addrs, void *to,
>  					int *bytes_copied)
>  {
> -	struct list_head *pos, *next;
>  	struct sctp_sockaddr_entry *addr;
>  	union sctp_addr temp;
>  	int cnt = 0;
>  	int addrlen;
> 
> -	list_for_each_safe(pos, next, &sctp_local_addr_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
> +		if (!addr->valid)
> +			continue;
> +
>  		if ((PF_INET == sk->sk_family) &&
>  		    (AF_INET6 == addr->a.sa.sa_family))
>  			continue;
> @@ -4149,6 +4155,7 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
>  		cnt ++;
>  		if (cnt >= max_addrs) break;
>  	}
> +	rcu_read_unlock();
> 
>  	return cnt;
>  }
> @@ -4156,14 +4163,16 @@ static int sctp_copy_laddrs_old(struct sock *sk, __u16 port,
>  static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
>  			    size_t space_left, int *bytes_copied)
>  {
> -	struct list_head *pos, *next;
>  	struct sctp_sockaddr_entry *addr;
>  	union sctp_addr temp;
>  	int cnt = 0;
>  	int addrlen;
> 
> -	list_for_each_safe(pos, next, &sctp_local_addr_list) {
> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(addr, &sctp_local_addr_list, list) {
> +		if (!addr->valid)
> +			continue;
> +
>  		if ((PF_INET == sk->sk_family) &&
>  		    (AF_INET6 == addr->a.sa.sa_family))
>  			continue;
> @@ -4171,8 +4180,10 @@ static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
>  		sctp_get_pf_specific(sk->sk_family)->addr_v4map(sctp_sk(sk),
>  								&temp);
>  		addrlen = sctp_get_af_specific(temp.sa.sa_family)->sockaddr_len;
> -		if (space_left < addrlen)
> -			return -ENOMEM;
> +		if (space_left < addrlen) {
> +			cnt =  -ENOMEM;
> +			break;
> +		}
>  		memcpy(to, &temp, addrlen);
> 
>  		to += addrlen;
> @@ -4180,6 +4191,7 @@ static int sctp_copy_laddrs(struct sock *sk, __u16 port, void *to,
>  		space_left -= addrlen;
>  		*bytes_copied += addrlen;
>  	}
> +	rcu_read_unlock();
> 
>  	return cnt;
>  }


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list
  2007-09-12 23:03   ` Sridhar Samudrala
@ 2007-09-13 13:46     ` Vlad Yasevich
  0 siblings, 0 replies; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-13 13:46 UTC (permalink / raw)
  To: Sridhar Samudrala; +Cc: netdev, lksctp-developers

Hi Sridhar

Sridhar Samudrala wrote:
> Vlad,
> 
> few minor comments inline.
> otherwise, looks good.
> 
> Thanks
> Sridhar
> 
>> diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
>> index f8aa23d..54ff472 100644
>> --- a/net/sctp/ipv6.c
>> +++ b/net/sctp/ipv6.c
>> @@ -77,13 +77,18 @@
>>
>>  #include <asm/uaccess.h>
>>
>> -/* Event handler for inet6 address addition/deletion events.  */
>> +/* Event handler for inet6 address addition/deletion events.
>> + * This even is part of the atomic notifier call chain
>> + * and thus happens atomically and can NOT sleep.  As a result
>> + * we can't and really don't need to add any locks to guard the
>> + * RCU.
>> + */
> 
> Now that we are adding a spin_lock, the above comment is no longer valid.
> It should be fixed to say that we still need a lock because we use the
> same list for both inet and inet6 address events and they can happen in
> parallel.

Yes, I forgot to fix this comment.  Will do.


>> diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
>> index e98579b..4688559 100644
>> --- a/net/sctp/protocol.c
>> +++ b/net/sctp/protocol.c
>> @@ -153,6 +153,8 @@ static void sctp_v4_copy_addrlist(struct list_head *addrlist,
>>  			addr->a.v4.sin_family = AF_INET;
>>  			addr->a.v4.sin_port = 0;
>>  			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
>> +			addr->valid = 1;
>> +			INIT_RCU_HEAD(&addr->rcu);
> 
> This has nothing to do with this patch, but I noticed that
> INIT_LIST_HEAD(&addr->list) is missing here when comparing with the
> earlier v6 version of this routine.

Hmm...  I thought it looked a little different, but didn't pay too much
attention to it.  I'll add a follow-on patch to fix this.
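Probably something along these lines (untested sketch of the follow-on,
just mirroring the v6 routine):

	--- a/net/sctp/protocol.c
	+++ b/net/sctp/protocol.c
	@@ ... sctp_v4_copy_addrlist() ... @@
	 			addr->a.v4.sin_addr.s_addr = ifa->ifa_local;
	 			addr->valid = 1;
	+			INIT_LIST_HEAD(&addr->list);
	 			INIT_RCU_HEAD(&addr->rcu);
	 			list_add_tail(&addr->list, addrlist);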

Thanks
-vlad

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU
  2007-09-12 22:33     ` Paul E. McKenney
@ 2007-09-13 17:59       ` Sridhar Samudrala
  2007-09-13 18:15         ` Vlad Yasevich
  2007-09-13 19:33         ` [Lksctp-developers] " Vlad Yasevich
  0 siblings, 2 replies; 14+ messages in thread
From: Sridhar Samudrala @ 2007-09-13 17:59 UTC (permalink / raw)
  To: paulmck; +Cc: Vlad Yasevich, netdev, lksctp-developers

On Wed, 2007-09-12 at 15:33 -0700, Paul E. McKenney wrote:
> On Wed, Sep 12, 2007 at 05:03:42PM -0400, Vlad Yasevich wrote:
> > [... and here is the updated version as promissed ...]
> > 
> > Since the sctp_sockaddr_entry is now RCU enabled as part of
> > the patch to synchronize sctp_localaddr_list, it makes sense to
> > change all handling of these entries to RCU.  This includes the
> > sctp_bind_addrs structure and it's list of bound addresses.
> > 
> > This list is currently protected by an external rw_lock and that
> > looks like an overkill.  There are only 2 writers to the list:
> > bind()/bindx() calls, and BH processing of ASCONF-ACK chunks.
> > These are already seriealized via the socket lock, so they will
> > not step on each other.  These are also relatively rare, so we
> > should be good with RCU.
> > 
> > The readers are varied and they are easily converted to RCU.
> 
> Looks good from an RCU viewpoint -- I must defer to others on
> the networking aspects.
> 
> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

looks good to me too. some minor typos and some comments on
RCU usage comments inline.

Also, I guess we can remove the sctp_[read/write]_[un]lock macros
from sctp.h now that you removed all the users of rwlocks
in SCTP
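(Assuming those are still just the thin wrappers in include/net/sctp/sctp.h,
i.e. roughly:

	#define sctp_write_lock(lock)		write_lock(lock)
	#define sctp_write_unlock(lock)		write_unlock(lock)
	#define sctp_read_lock(lock)		read_lock(lock)
	#define sctp_read_unlock(lock)		read_unlock(lock)

any remaining users can simply switch to the bare rwlock primitives.)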

Thanks
Sridhar
> 
> > Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
> > ---
> >  include/net/sctp/structs.h |    7 +--
> >  net/sctp/associola.c       |   14 +-----
> >  net/sctp/bind_addr.c       |   68 ++++++++++++++++++++----------
> >  net/sctp/endpointola.c     |   27 +++---------
> >  net/sctp/ipv6.c            |   12 ++---
> >  net/sctp/protocol.c        |   25 ++++-------
> >  net/sctp/sm_make_chunk.c   |   18 +++-----
> >  net/sctp/socket.c          |   98 ++++++++++++-------------------------------
> >  8 files changed, 106 insertions(+), 163 deletions(-)
> > 
> > diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
> > index a89e361..c2fe2dc 100644
> > --- a/include/net/sctp/structs.h
> > +++ b/include/net/sctp/structs.h
> > @@ -1155,7 +1155,9 @@ int sctp_bind_addr_copy(struct sctp_bind_addr *dest,
> >  			int flags);
> >  int sctp_add_bind_addr(struct sctp_bind_addr *, union sctp_addr *,
> >  		       __u8 use_as_src, gfp_t gfp);
> > -int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *);
> > +int sctp_del_bind_addr(struct sctp_bind_addr *, union sctp_addr *,
> > +			void (*rcu_call)(struct rcu_head *,
> > +					  void (*func)(struct rcu_head *)));
> >  int sctp_bind_addr_match(struct sctp_bind_addr *, const union sctp_addr *,
> >  			 struct sctp_sock *);
> >  union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr	*bp,
> > @@ -1226,9 +1228,6 @@ struct sctp_ep_common {
> >  	 * bind_addr.address_list is our set of local IP addresses.
> >  	 */
> >  	struct sctp_bind_addr bind_addr;
> > -
> > -	/* Protection during address list comparisons. */
> > -	rwlock_t   addr_lock;
> >  };
> > 
> > 
> > diff --git a/net/sctp/associola.c b/net/sctp/associola.c
> > index 2ad1caf..9bad8ba 100644
> > --- a/net/sctp/associola.c
> > +++ b/net/sctp/associola.c
> > @@ -99,7 +99,6 @@ static struct sctp_association *sctp_association_init(struct sctp_association *a
> > 
> >  	/* Initialize the bind addr area.  */
> >  	sctp_bind_addr_init(&asoc->base.bind_addr, ep->base.bind_addr.port);
> > -	rwlock_init(&asoc->base.addr_lock);
> > 
> >  	asoc->state = SCTP_STATE_CLOSED;
> > 
> > @@ -937,8 +936,6 @@ struct sctp_transport *sctp_assoc_is_match(struct sctp_association *asoc,
> >  {
> >  	struct sctp_transport *transport;
> > 
> > -	sctp_read_lock(&asoc->base.addr_lock);
> > -
> >  	if ((htons(asoc->base.bind_addr.port) == laddr->v4.sin_port) &&
> >  	    (htons(asoc->peer.port) == paddr->v4.sin_port)) {
> >  		transport = sctp_assoc_lookup_paddr(asoc, paddr);
> > @@ -952,7 +949,6 @@ struct sctp_transport *sctp_assoc_is_match(struct sctp_association *asoc,
> >  	transport = NULL;
> > 
> >  out:
> > -	sctp_read_unlock(&asoc->base.addr_lock);
> >  	return transport;
> >  }
> > 
> > @@ -1376,19 +1372,13 @@ int sctp_assoc_set_bind_addr_from_cookie(struct sctp_association *asoc,
> >  int sctp_assoc_lookup_laddr(struct sctp_association *asoc,
> >  			    const union sctp_addr *laddr)
> >  {
> > -	int found;
> > +	int found = 0;
> > 
> > -	sctp_read_lock(&asoc->base.addr_lock);
> >  	if ((asoc->base.bind_addr.port == ntohs(laddr->v4.sin_port)) &&
> >  	    sctp_bind_addr_match(&asoc->base.bind_addr, laddr,
> > -				 sctp_sk(asoc->base.sk))) {
> > +				 sctp_sk(asoc->base.sk)))
> >  		found = 1;
> > -		goto out;
> > -	}
> > 
> > -	found = 0;
> > -out:
> > -	sctp_read_unlock(&asoc->base.addr_lock);
> >  	return found;
> >  }
> > 
> > diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
> > index 7fc369f..d16055f 100644
> > --- a/net/sctp/bind_addr.c
> > +++ b/net/sctp/bind_addr.c
> > @@ -167,7 +167,11 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
> > 
> >  	INIT_LIST_HEAD(&addr->list);
> >  	INIT_RCU_HEAD(&addr->rcu);
> > -	list_add_tail(&addr->list, &bp->address_list);
> > +
> > +	/* We always hold a socket lock when calling this function,
> > +	 * so rcu_read_lock is not needed.
> > +	 */
> > +	list_add_tail_rcu(&addr->list, &bp->address_list);

I am a little confused by the comment above.
Isn't this the update side of RCU?  If so, this should be protected
by a spin-lock or a mutex rather than rcu_read_lock().

> >  	SCTP_DBG_OBJCNT_INC(addr);
> > 
> >  	return 0;
> > @@ -176,23 +180,35 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
> >  /* Delete an address from the bind address list in the SCTP_bind_addr
> >   * structure.
> >   */
> > -int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr)
> > +int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr,
> > +			void (*rcu_call)(struct rcu_head *head,
> > +					 void (*func)(struct rcu_head *head)))
> >  {
> > -	struct list_head *pos, *temp;
> > -	struct sctp_sockaddr_entry *addr;
> > +	struct sctp_sockaddr_entry *addr, *temp;
> > 
> > -	list_for_each_safe(pos, temp, &bp->address_list) {
> > -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> > +	/* We hold the socket lock when calling this function, so
> > +	 * rcu_read_lock is not needed.
> > +	 */

Same as above.  This is also the update side of RCU, protected by
the socket lock.

> > +	list_for_each_entry_safe(addr, temp, &bp->address_list, list) {
> >  		if (sctp_cmp_addr_exact(&addr->a, del_addr)) {
> >  			/* Found the exact match. */
> > -			list_del(pos);
> > -			kfree(addr);
> > -			SCTP_DBG_OBJCNT_DEC(addr);
> > -
> > -			return 0;
> > +			addr->valid = 0;
> > +			list_del_rcu(&addr->list);
> > +			break;
> >  		}
> >  	}
> > 
> > +	/* Call the rcu callback provided in the args.  This function is
> > +	 * called by both BH packet processing and user side socket option
> > +	 * processing, but it works on different lists in those 2 contexts.
> > +	 * Each context provides it's own callback, whether call_rc_bh()
s/call_rc_bh/call_rcu_bh

> > +	 * or call_rcu(), to make sure that we wait an for appropriate time.
s/an for/for an

> > +	 */
> > +	if (addr && !addr->valid) {
> > +		rcu_call(&addr->rcu, sctp_local_addr_free);
> > +		SCTP_DBG_OBJCNT_DEC(addr);
> > +	}
> > +
> >  	return -EINVAL;
> >  }
> > 
> > @@ -302,15 +318,20 @@ int sctp_bind_addr_match(struct sctp_bind_addr *bp,
> >  			 struct sctp_sock *opt)
> >  {
> >  	struct sctp_sockaddr_entry *laddr;
> > -	struct list_head *pos;
> > -
> > -	list_for_each(pos, &bp->address_list) {
> > -		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
> > -		if (opt->pf->cmp_addr(&laddr->a, addr, opt))
> > -			return 1;
> > +	int match = 0;
> > +
> > +	rcu_read_lock();
> > +	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
> > +		if (!laddr->valid)
> > +			continue;
> > +		if (opt->pf->cmp_addr(&laddr->a, addr, opt)) {
> > +			match = 1;
> > +			break;
> > +		}
> >  	}
> > +	rcu_read_unlock();
> > 
> > -	return 0;
> > +	return match;
> >  }
> > 
> >  /* Find the first address in the bind address list that is not present in
> > @@ -325,18 +346,19 @@ union sctp_addr *sctp_find_unmatch_addr(struct sctp_bind_addr	*bp,
> >  	union sctp_addr			*addr;
> >  	void 				*addr_buf;
> >  	struct sctp_af			*af;
> > -	struct list_head		*pos;
> >  	int				i;
> > 
> > -	list_for_each(pos, &bp->address_list) {
> > -		laddr = list_entry(pos, struct sctp_sockaddr_entry, list);
> > -
> > +	/* This is only called sctp_send_asconf_del_ip() and we hold
> > +	 * the socket lock in that code patch, so that address list
> > +	 * can't change.
> > +	 */
> > +	list_for_each_entry(laddr, &bp->address_list, list) {
> >  		addr_buf = (union sctp_addr *)addrs;
> >  		for (i = 0; i < addrcnt; i++) {
> >  			addr = (union sctp_addr *)addr_buf;
> >  			af = sctp_get_af_specific(addr->v4.sin_family);
> >  			if (!af)
> > -				return NULL;
> > +				break;
> > 
> >  			if (opt->pf->cmp_addr(&laddr->a, addr, opt))
> >  				break;
> > diff --git a/net/sctp/endpointola.c b/net/sctp/endpointola.c
> > index 1404a9e..110d912 100644
> > --- a/net/sctp/endpointola.c
> > +++ b/net/sctp/endpointola.c
> > @@ -92,7 +92,6 @@ static struct sctp_endpoint *sctp_endpoint_init(struct sctp_endpoint *ep,
> > 
> >  	/* Initialize the bind addr area */
> >  	sctp_bind_addr_init(&ep->base.bind_addr, 0);
> > -	rwlock_init(&ep->base.addr_lock);
> > 
> >  	/* Remember who we are attached to.  */
> >  	ep->base.sk = sk;
> > @@ -225,21 +224,14 @@ void sctp_endpoint_put(struct sctp_endpoint *ep)
> >  struct sctp_endpoint *sctp_endpoint_is_match(struct sctp_endpoint *ep,
> >  					       const union sctp_addr *laddr)
> >  {
> > -	struct sctp_endpoint *retval;
> > +	struct sctp_endpoint *retval = NULL;
> > 
> > -	sctp_read_lock(&ep->base.addr_lock);
> >  	if (htons(ep->base.bind_addr.port) == laddr->v4.sin_port) {
> >  		if (sctp_bind_addr_match(&ep->base.bind_addr, laddr,
> > -					 sctp_sk(ep->base.sk))) {
> > +					 sctp_sk(ep->base.sk)))
> >  			retval = ep;
> > -			goto out;
> > -		}
> >  	}
> > 
> > -	retval = NULL;
> > -
> > -out:
> > -	sctp_read_unlock(&ep->base.addr_lock);
> >  	return retval;
> >  }
> > 
> > @@ -261,9 +253,7 @@ static struct sctp_association *__sctp_endpoint_lookup_assoc(
> >  	list_for_each(pos, &ep->asocs) {
> >  		asoc = list_entry(pos, struct sctp_association, asocs);
> >  		if (rport == asoc->peer.port) {
> > -			sctp_read_lock(&asoc->base.addr_lock);
> >  			*transport = sctp_assoc_lookup_paddr(asoc, paddr);
> > -			sctp_read_unlock(&asoc->base.addr_lock);
> > 
> >  			if (*transport)
> >  				return asoc;
> > @@ -295,20 +285,17 @@ struct sctp_association *sctp_endpoint_lookup_assoc(
> >  int sctp_endpoint_is_peeled_off(struct sctp_endpoint *ep,
> >  				const union sctp_addr *paddr)
> >  {
> > -	struct list_head *pos;
> >  	struct sctp_sockaddr_entry *addr;
> >  	struct sctp_bind_addr *bp;
> > 
> > -	sctp_read_lock(&ep->base.addr_lock);
> >  	bp = &ep->base.bind_addr;
> > -	list_for_each(pos, &bp->address_list) {
> > -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
> > -		if (sctp_has_association(&addr->a, paddr)) {
> > -			sctp_read_unlock(&ep->base.addr_lock);
> > +	/* This function is called whith the socket lock held,
s/whith/with

> > +	 * so the address_list can not change.
> > +	 */
> > +	list_for_each_entry(addr, &bp->address_list, list) {
> > +		if (sctp_has_association(&addr->a, paddr))
> >  			return 1;
> > -		}
> >  	}
> > -	sctp_read_unlock(&ep->base.addr_lock);
> > 
> >  	return 0;
> >  }

<snip>
Deleted the rest of the patch as it looks good and I have no
comments.


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU
  2007-09-13 17:59       ` Sridhar Samudrala
@ 2007-09-13 18:15         ` Vlad Yasevich
  2007-09-13 19:33         ` [Lksctp-developers] " Vlad Yasevich
  1 sibling, 0 replies; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-13 18:15 UTC (permalink / raw)
  To: Sridhar Samudrala; +Cc: paulmck, netdev, lksctp-developers

Hi Sridhar

Sridhar Samudrala wrote:
> 
> looks good to me too. some minor typos and some comments on
> RCU usage comments inline.
> 
> Also, I guess we can remove the sctp_[read/write]_[un]lock macros
> from sctp.h now that you removed all the users of rwlocks
> in SCTP

Ok.  I guess I'll pull them.

> 
> Thanks
> Sridhar
>>> Signed-off-by: Vlad Yasevich <vladislav.yasevich@hp.com>
>>> ---
>>>  include/net/sctp/structs.h |    7 +--
>>>  net/sctp/associola.c       |   14 +-----
>>>  net/sctp/bind_addr.c       |   68 ++++++++++++++++++++----------
>>>  net/sctp/endpointola.c     |   27 +++---------
>>>  net/sctp/ipv6.c            |   12 ++---
>>>  net/sctp/protocol.c        |   25 ++++-------
>>>  net/sctp/sm_make_chunk.c   |   18 +++-----
>>>  net/sctp/socket.c          |   98 ++++++++++++-------------------------------
>>>  8 files changed, 106 insertions(+), 163 deletions(-)
>>>
>>>
>>> diff --git a/net/sctp/bind_addr.c b/net/sctp/bind_addr.c
>>> index 7fc369f..d16055f 100644
>>> --- a/net/sctp/bind_addr.c
>>> +++ b/net/sctp/bind_addr.c
>>> @@ -167,7 +167,11 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
>>>
>>>  	INIT_LIST_HEAD(&addr->list);
>>>  	INIT_RCU_HEAD(&addr->rcu);
>>> -	list_add_tail(&addr->list, &bp->address_list);
>>> +
>>> +	/* We always hold a socket lock when calling this function,
>>> +	 * so rcu_read_lock is not needed.
>>> +	 */
>>> +	list_add_tail_rcu(&addr->list, &bp->address_list);
> 
> I am a little confused by the comment above.
> Isn't this the update side of RCU?  If so, this should be protected
> by a spin-lock or a mutex rather than rcu_read_lock().
> 

Yes, the comment is confusing.  I put it there because I removed the rcu_read_lock() that
was also taken in a prior version of the patch.  The comment should really say that since
the socket lock is held, we don't need another synchronizing spin lock in this case.
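In other words, the intended pattern is roughly the following (a sketch only,
the variable names are illustrative):

	/* Update side: bind()/bindx() and ASCONF processing are already
	 * serialized by the socket lock (taken by their respective
	 * callers), so no additional lock is needed around the list
	 * update itself.
	 */
	list_add_tail_rcu(&addr->list, &bp->address_list);

	/* Read side: may run concurrently with the update above. */
	rcu_read_lock();
	list_for_each_entry_rcu(laddr, &bp->address_list, list) {
		if (!laddr->valid)
			continue;
		/* ... use laddr ... */
	}
	rcu_read_unlock();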

>>>  	SCTP_DBG_OBJCNT_INC(addr);
>>>
>>>  	return 0;
>>> @@ -176,23 +180,35 @@ int sctp_add_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *new,
>>>  /* Delete an address from the bind address list in the SCTP_bind_addr
>>>   * structure.
>>>   */
>>> -int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr)
>>> +int sctp_del_bind_addr(struct sctp_bind_addr *bp, union sctp_addr *del_addr,
>>> +			void (*rcu_call)(struct rcu_head *head,
>>> +					 void (*func)(struct rcu_head *head)))
>>>  {
>>> -	struct list_head *pos, *temp;
>>> -	struct sctp_sockaddr_entry *addr;
>>> +	struct sctp_sockaddr_entry *addr, *temp;
>>>
>>> -	list_for_each_safe(pos, temp, &bp->address_list) {
>>> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
>>> +	/* We hold the socket lock when calling this function, so
>>> +	 * rcu_read_lock is not needed.
>>> +	 */
> 
> Same as above.  This is also the update side of RCU, protected by
> the socket lock.

Same reason.  Prior versions used rcu_read_lock() and I was just making a note that
it's not needed.  I'll remove it.

> 
>>> +	list_for_each_entry_safe(addr, temp, &bp->address_list, list) {
>>>  		if (sctp_cmp_addr_exact(&addr->a, del_addr)) {
>>>  			/* Found the exact match. */
>>> -			list_del(pos);
>>> -			kfree(addr);
>>> -			SCTP_DBG_OBJCNT_DEC(addr);
>>> -
>>> -			return 0;
>>> +			addr->valid = 0;
>>> +			list_del_rcu(&addr->list);
>>> +			break;
>>>  		}
>>>  	}
>>>
>>> +	/* Call the rcu callback provided in the args.  This function is
>>> +	 * called by both BH packet processing and user side socket option
>>> +	 * processing, but it works on different lists in those 2 contexts.
>>> +	 * Each context provides it's own callback, whether call_rc_bh()
> s/call_rc_bh/call_rcu_bh

yep.

> 
>>> +	 * or call_rcu(), to make sure that we wait an for appropriate time.
> s/an for/for an

yep.  fat fingered...
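(For reference, the intent of the rcu_call argument is that each caller
passes the variant matching its own context, e.g. roughly:

	/* process context: socket option / bindx() path */
	sctp_del_bind_addr(bp, &addr, call_rcu);

	/* BH context: ASCONF-ACK processing */
	sctp_del_bind_addr(bp, &addr, call_rcu_bh);

so the grace period waited for matches the context the deletion ran in.)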

>>> @@ -295,20 +285,17 @@ struct sctp_association *sctp_endpoint_lookup_assoc(
>>>  int sctp_endpoint_is_peeled_off(struct sctp_endpoint *ep,
>>>  				const union sctp_addr *paddr)
>>>  {
>>> -	struct list_head *pos;
>>>  	struct sctp_sockaddr_entry *addr;
>>>  	struct sctp_bind_addr *bp;
>>>
>>> -	sctp_read_lock(&ep->base.addr_lock);
>>>  	bp = &ep->base.bind_addr;
>>> -	list_for_each(pos, &bp->address_list) {
>>> -		addr = list_entry(pos, struct sctp_sockaddr_entry, list);
>>> -		if (sctp_has_association(&addr->a, paddr)) {
>>> -			sctp_read_unlock(&ep->base.addr_lock);
>>> +	/* This function is called whith the socket lock held,
> s/whith/with

ok


Thanks
-vlad


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Lksctp-developers] [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU
  2007-09-13 17:59       ` Sridhar Samudrala
  2007-09-13 18:15         ` Vlad Yasevich
@ 2007-09-13 19:33         ` Vlad Yasevich
  2007-09-13 19:56           ` Sridhar Samudrala
  1 sibling, 1 reply; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-13 19:33 UTC (permalink / raw)
  To: Sridhar Samudrala; +Cc: paulmck, netdev, lksctp-developers

Hi Sridhar

Sridhar Samudrala wrote:
> On Wed, 2007-09-12 at 15:33 -0700, Paul E. McKenney wrote:
>> On Wed, Sep 12, 2007 at 05:03:42PM -0400, Vlad Yasevich wrote:
>>> [... and here is the updated version as promissed ...]
>>>
>>> Since the sctp_sockaddr_entry is now RCU enabled as part of
>>> the patch to synchronize sctp_localaddr_list, it makes sense to
>>> change all handling of these entries to RCU.  This includes the
>>> sctp_bind_addrs structure and it's list of bound addresses.
>>>
>>> This list is currently protected by an external rw_lock and that
>>> looks like an overkill.  There are only 2 writers to the list:
>>> bind()/bindx() calls, and BH processing of ASCONF-ACK chunks.
>>> These are already seriealized via the socket lock, so they will
>>> not step on each other.  These are also relatively rare, so we
>>> should be good with RCU.
>>>
>>> The readers are varied and they are easily converted to RCU.
>> Looks good from an RCU viewpoint -- I must defer to others on
>> the networking aspects.
>>
>> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> looks good to me too. some minor typos and some comments on
> RCU usage comments inline.
> 
> Also, I guess we can remove the sctp_[read/write]_[un]lock macros
> from sctp.h now that you removed all the users of rwlocks
> in SCTP
> 

Looks like some of the hashing calls still use sctp_write_[un]lock
macros, but use normal read_lock() for the read side.
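The mismatch looks roughly like this (illustrative only, not the exact code):

	sctp_write_lock(&head->lock);	/* hash insert/remove path */
	...
	sctp_write_unlock(&head->lock);

	read_lock(&head->lock);		/* lookup path uses the bare primitive */
	...
	read_unlock(&head->lock);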

I'll clean that up after these patches are accepted.

-vlad

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Lksctp-developers] [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU
  2007-09-13 19:33         ` [Lksctp-developers] " Vlad Yasevich
@ 2007-09-13 19:56           ` Sridhar Samudrala
  2007-09-13 20:14             ` Vlad Yasevich
  0 siblings, 1 reply; 14+ messages in thread
From: Sridhar Samudrala @ 2007-09-13 19:56 UTC (permalink / raw)
  To: Vlad Yasevich; +Cc: paulmck, netdev, lksctp-developers

On Thu, 2007-09-13 at 15:33 -0400, Vlad Yasevich wrote:
> Hi Sridhar
> 
> Sridhar Samudrala wrote:
> > On Wed, 2007-09-12 at 15:33 -0700, Paul E. McKenney wrote:
> >> On Wed, Sep 12, 2007 at 05:03:42PM -0400, Vlad Yasevich wrote:
> >>> [... and here is the updated version as promissed ...]
> >>>
> >>> Since the sctp_sockaddr_entry is now RCU enabled as part of
> >>> the patch to synchronize sctp_localaddr_list, it makes sense to
> >>> change all handling of these entries to RCU.  This includes the
> >>> sctp_bind_addrs structure and it's list of bound addresses.
> >>>
> >>> This list is currently protected by an external rw_lock and that
> >>> looks like an overkill.  There are only 2 writers to the list:
> >>> bind()/bindx() calls, and BH processing of ASCONF-ACK chunks.
> >>> These are already seriealized via the socket lock, so they will
> >>> not step on each other.  These are also relatively rare, so we
> >>> should be good with RCU.
> >>>
> >>> The readers are varied and they are easily converted to RCU.
> >> Looks good from an RCU viewpoint -- I must defer to others on
> >> the networking aspects.
> >>
> >> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > 
> > looks good to me too. some minor typos and some comments on
> > RCU usage comments inline.
> > 
> > Also, I guess we can remove the sctp_[read/write]_[un]lock macros
> > from sctp.h now that you removed all the users of rwlocks
> > in SCTP
> > 
> 
> Looks like some of the hashing calls still use sctp_write_[un]lock
> macros, but use normal read_lock() for the read side.
> 
> I'll clean that up after these patches are accepted.

OK. You may also consider looking into the generic inet_hashtable
infrastructure and see if we can use it for SCTP.

Thanks
Sridhar


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [Lksctp-developers] [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU
  2007-09-13 19:56           ` Sridhar Samudrala
@ 2007-09-13 20:14             ` Vlad Yasevich
  0 siblings, 0 replies; 14+ messages in thread
From: Vlad Yasevich @ 2007-09-13 20:14 UTC (permalink / raw)
  To: Sridhar Samudrala; +Cc: paulmck, netdev, lksctp-developers

Sridhar Samudrala wrote:
> On Thu, 2007-09-13 at 15:33 -0400, Vlad Yasevich wrote:
>> Hi Sridhar
>>
>> Sridhar Samudrala wrote:
>>> On Wed, 2007-09-12 at 15:33 -0700, Paul E. McKenney wrote:
>>>> On Wed, Sep 12, 2007 at 05:03:42PM -0400, Vlad Yasevich wrote:
>>>>> [... and here is the updated version as promissed ...]
>>>>>
>>>>> Since the sctp_sockaddr_entry is now RCU enabled as part of
>>>>> the patch to synchronize sctp_localaddr_list, it makes sense to
>>>>> change all handling of these entries to RCU.  This includes the
>>>>> sctp_bind_addrs structure and it's list of bound addresses.
>>>>>
>>>>> This list is currently protected by an external rw_lock and that
>>>>> looks like an overkill.  There are only 2 writers to the list:
>>>>> bind()/bindx() calls, and BH processing of ASCONF-ACK chunks.
>>>>> These are already seriealized via the socket lock, so they will
>>>>> not step on each other.  These are also relatively rare, so we
>>>>> should be good with RCU.
>>>>>
>>>>> The readers are varied and they are easily converted to RCU.
>>>> Looks good from an RCU viewpoint -- I must defer to others on
>>>> the networking aspects.
>>>>
>>>> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>>> looks good to me too. some minor typos and some comments on
>>> RCU usage comments inline.
>>>
>>> Also, I guess we can remove the sctp_[read/write]_[un]lock macros
>>> from sctp.h now that you removed all the users of rwlocks
>>> in SCTP
>>>
>> Looks like some of the hashing calls still use sctp_write_[un]lock
>> macros, but use normal read_lock() for the read side.
>>
>> I'll clean that up after these patches are accepted.
> 
> OK. You may also consider looking into the generic inet_hashtable
> infrastructure and see if we can use it for SCTP.
> 
> 

I've had a patch set brewing for a while.  I had everything done except the
association hash.  Have been trying to figure out how to plug that one in...

If you want to take a look, I can send you what I have so far. :)

-vlad

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2007-09-13 20:15 UTC | newest]

Thread overview: 14+ messages
2007-09-12 19:46 [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management Vlad Yasevich
2007-09-12 19:46 ` [RFC v2 PATCH 1/2] SCTP: Add RCU synchronization around sctp_localaddr_list Vlad Yasevich
2007-09-12 22:26   ` Paul E. McKenney
2007-09-12 23:03   ` Sridhar Samudrala
2007-09-13 13:46     ` Vlad Yasevich
2007-09-12 19:46 ` [RFC v2 PATCH 2/2] SCTP: Convert bind_addr_list locking to RCU Vlad Yasevich
2007-09-12 20:59 ` [RFC v2 PATCH 0/2] Add RCU locking to SCTP address management Vlad Yasevich
2007-09-12 21:03   ` [RFC v3 PATCH 2/21] SCTP: Convert bind_addr_list locking to RCU Vlad Yasevich
2007-09-12 22:33     ` Paul E. McKenney
2007-09-13 17:59       ` Sridhar Samudrala
2007-09-13 18:15         ` Vlad Yasevich
2007-09-13 19:33         ` [Lksctp-developers] " Vlad Yasevich
2007-09-13 19:56           ` Sridhar Samudrala
2007-09-13 20:14             ` Vlad Yasevich
