netdev.vger.kernel.org archive mirror
* [PATCH 0/9] RDS updates
@ 2009-03-30 18:44 Andy Grover
  2009-03-30 18:44 ` [PATCH 1/9] RDS: Fix m_rs_lock deadlock Andy Grover
                   ` (8 more replies)
  0 siblings, 9 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel

Hi, here are some recent fixes and cleanups for RDS.

Thanks -- Regards -- Andy




* [PATCH 1/9] RDS: Fix m_rs_lock deadlock
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  2009-03-30 18:44 ` [PATCH 2/9] RDS/IW+IB: Set recv ring low water mark to 1/2 full Andy Grover
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel

rds_send_drop_to() is called during socket close. If it takes
m_rs_lock without disabling interrupts, then
rds_send_remove_from_sock() can run from the rx completion
handler on the same CPU and deadlock on the same lock.
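
For illustration, the failure mode is the classic process-context vs.
interrupt-context inversion on one CPU (a condensed sketch of the
assumed call paths, not a literal excerpt):

        /*
         *   CPU0, process context           CPU0, rx completion handler
         *   ---------------------           ---------------------------
         *   rds_send_drop_to()
         *     spin_lock(&rm->m_rs_lock)
         *       <interrupt arrives> ----->  rds_send_remove_from_sock()
         *                                     spin_lock(&rm->m_rs_lock)
         *                                     spins forever: the lock is
         *                                     held by the interrupted task
         *
         * Taking the lock with spin_lock_irqsave() in the process-context
         * path, as the hunk below does, keeps the completion handler off
         * this CPU while the lock is held.
         */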

Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/send.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index 1b37364..104fe03 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -615,7 +615,7 @@ void rds_send_drop_to(struct rds_sock *rs, struct sockaddr_in *dest)
 {
 	struct rds_message *rm, *tmp;
 	struct rds_connection *conn;
-	unsigned long flags;
+	unsigned long flags, flags2;
 	LIST_HEAD(list);
 	int wake = 0;
 
@@ -651,9 +651,9 @@ void rds_send_drop_to(struct rds_sock *rs, struct sockaddr_in *dest)
 	list_for_each_entry(rm, &list, m_sock_item) {
 		/* We do this here rather than in the loop above, so that
 		 * we don't have to nest m_rs_lock under rs->rs_lock */
-		spin_lock(&rm->m_rs_lock);
+		spin_lock_irqsave(&rm->m_rs_lock, flags2);
 		rm->m_rs = NULL;
-		spin_unlock(&rm->m_rs_lock);
+		spin_unlock_irqrestore(&rm->m_rs_lock, flags2);
 
 		/*
 		 * If we see this flag cleared then we're *sure* that someone
-- 
1.5.6.3



* [PATCH 2/9] RDS/IW+IB: Set recv ring low water mark to 1/2 full.
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
  2009-03-30 18:44 ` [PATCH 1/9] RDS: Fix m_rs_lock deadlock Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  2009-03-30 18:44 ` [PATCH 3/9] RDS: Correct some iw references in rdma_transport.c Andy Grover
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel, Steve Wise

From: Steve Wise <swise@opengridcomputing.com>

Currently the recv ring low water mark is 1/4 of the ring depth.  Performance
measurements show that this limits iWARP throughput by flow-controlling
the rds-stress senders.  Setting it to 1/2 appears to maximize T3
performance.  I tried even higher levels, but they didn't help and
started to increase the RDS thread CPU utilization.
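
For reference, the threshold in the hunks below is just a right shift of
the ring depth (same code as the + lines, with the arithmetic spelled out):

        /* ring->w_nr is the ring depth and __rds_ib_ring_used() the number
         * of entries currently outstanding.  "Low" used to mean at or below
         * depth/4 (w_nr >> 2); this patch makes it depth/2 (w_nr >> 1), so
         * the ring is considered low, and acted on, while it still has half
         * of its entries. */
        return __rds_ib_ring_used(ring) <= (ring->w_nr >> 1);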

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/ib_ring.c |    2 +-
 net/rds/iw_ring.c |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/rds/ib_ring.c b/net/rds/ib_ring.c
index 99a6cca..ff97e8e 100644
--- a/net/rds/ib_ring.c
+++ b/net/rds/ib_ring.c
@@ -137,7 +137,7 @@ int rds_ib_ring_empty(struct rds_ib_work_ring *ring)
 
 int rds_ib_ring_low(struct rds_ib_work_ring *ring)
 {
-	return __rds_ib_ring_used(ring) <= (ring->w_nr >> 2);
+	return __rds_ib_ring_used(ring) <= (ring->w_nr >> 1);
 }
 
 /*
diff --git a/net/rds/iw_ring.c b/net/rds/iw_ring.c
index d422d4b..da8e3b6 100644
--- a/net/rds/iw_ring.c
+++ b/net/rds/iw_ring.c
@@ -137,7 +137,7 @@ int rds_iw_ring_empty(struct rds_iw_work_ring *ring)
 
 int rds_iw_ring_low(struct rds_iw_work_ring *ring)
 {
-	return __rds_iw_ring_used(ring) <= (ring->w_nr >> 2);
+	return __rds_iw_ring_used(ring) <= (ring->w_nr >> 1);
 }
 
 
-- 
1.5.6.3



* [PATCH 3/9] RDS: Correct some iw references in rdma_transport.c
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
  2009-03-30 18:44 ` [PATCH 1/9] RDS: Fix m_rs_lock deadlock Andy Grover
  2009-03-30 18:44 ` [PATCH 2/9] RDS/IW+IB: Set recv ring low water mark to 1/2 full Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  2009-03-30 18:44 ` [PATCH 4/9] RDS/IW+IB: Set the RDS_LL_SEND_FULL bit when we're throttled Andy Grover
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel

There were some lingering _iw_ variable names left over from when
the listen code was centralized into rdma_transport.c.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/rdma_transport.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/rds/rdma_transport.c b/net/rds/rdma_transport.c
index 7b19024..7d0f901 100644
--- a/net/rds/rdma_transport.c
+++ b/net/rds/rdma_transport.c
@@ -34,7 +34,7 @@
 
 #include "rdma_transport.h"
 
-static struct rdma_cm_id *rds_iw_listen_id;
+static struct rdma_cm_id *rds_rdma_listen_id;
 
 int rds_rdma_cm_event_handler(struct rdma_cm_id *cm_id,
 			      struct rdma_cm_event *event)
@@ -161,7 +161,7 @@ static int __init rds_rdma_listen_init(void)
 
 	rdsdebug("cm %p listening on port %u\n", cm_id, RDS_PORT);
 
-	rds_iw_listen_id = cm_id;
+	rds_rdma_listen_id = cm_id;
 	cm_id = NULL;
 out:
 	if (cm_id)
@@ -171,10 +171,10 @@ out:
 
 static void rds_rdma_listen_stop(void)
 {
-	if (rds_iw_listen_id) {
-		rdsdebug("cm %p\n", rds_iw_listen_id);
-		rdma_destroy_id(rds_iw_listen_id);
-		rds_iw_listen_id = NULL;
+	if (rds_rdma_listen_id) {
+		rdsdebug("cm %p\n", rds_rdma_listen_id);
+		rdma_destroy_id(rds_rdma_listen_id);
+		rds_rdma_listen_id = NULL;
 	}
 }
 
-- 
1.5.6.3



* [PATCH 4/9] RDS/IW+IB: Set the RDS_LL_SEND_FULL bit when we're throttled.
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
                   ` (2 preceding siblings ...)
  2009-03-30 18:44 ` [PATCH 3/9] RDS: Correct some iw references in rdma_transport.c Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  2009-03-30 18:44 ` [PATCH 5/9] RDS/IW+IB: Allow max credit advertise window Andy Grover
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel, Steve Wise

From: Steve Wise <swise@opengridcomputing.com>

The RDS_LL_SEND_FULL bit should be set when we stop transmitting due to
flow control.  Otherwise the send worker will keep retrying instead of
sleeping until we are unthrottled.  Saves CPU.
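
The intended handshake looks roughly like this (a hedged sketch; rds_wq
and c_send_w are assumed names for the RDS work queue and the
per-connection send work, and the second block is not part of this patch):

        /* xmit path (this patch): the send ring is exhausted, so mark the
         * connection full and back off instead of burning CPU retrying. */
        if (work_alloc == 0) {
                set_bit(RDS_LL_SEND_FULL, &conn->c_flags);
                rds_ib_stats_inc(s_ib_tx_throttle);
                ret = -ENOMEM;
                goto out;
        }

        /* elsewhere (sketch): the send worker only re-queues itself while
         * the bit is clear; the completion/credit path clears the bit and
         * wakes the worker once ring entries are freed. */
        if (!test_bit(RDS_LL_SEND_FULL, &conn->c_flags))
                queue_delayed_work(rds_wq, &conn->c_send_w, 0);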

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/ib_send.c |    2 +-
 net/rds/iw_send.c |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
index cb6c52c..fa684b7 100644
--- a/net/rds/ib_send.c
+++ b/net/rds/ib_send.c
@@ -506,7 +506,7 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
 			flow_controlled++;
 		}
 		if (work_alloc == 0) {
-			rds_ib_ring_unalloc(&ic->i_send_ring, work_alloc);
+			set_bit(RDS_LL_SEND_FULL, &conn->c_flags);
 			rds_ib_stats_inc(s_ib_tx_throttle);
 			ret = -ENOMEM;
 			goto out;
diff --git a/net/rds/iw_send.c b/net/rds/iw_send.c
index 22dd38f..626290b 100644
--- a/net/rds/iw_send.c
+++ b/net/rds/iw_send.c
@@ -549,7 +549,7 @@ int rds_iw_xmit(struct rds_connection *conn, struct rds_message *rm,
 			flow_controlled++;
 		}
 		if (work_alloc == 0) {
-			rds_iw_ring_unalloc(&ic->i_send_ring, work_alloc);
+			set_bit(RDS_LL_SEND_FULL, &conn->c_flags);
 			rds_iw_stats_inc(s_iw_tx_throttle);
 			ret = -ENOMEM;
 			goto out;
-- 
1.5.6.3



* [PATCH 5/9] RDS/IW+IB: Allow max credit advertise window.
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
                   ` (3 preceding siblings ...)
  2009-03-30 18:44 ` [PATCH 4/9] RDS/IW+IB: Set the RDS_LL_SEND_FULL bit when we're throttled Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  2009-03-30 18:44 ` [PATCH 6/9] RDS: Fix ordering in a conditional Andy Grover
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel, Steve Wise

From: Steve Wise <swise@opengridcomputing.com>

Fix the hack that restricted credit advertisements to 127: raise
RDS_MAX_ADV_CREDIT to 255 and pass an explicit max_posted limit to the
grab-credits helpers so each caller can cap how much it advertises.
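
The per-call cap matters because the credits advertised in one message
must fit in what appears to be a one-byte header field (hence the
existing BUG_ON at 255).  A condensed sketch of the two grabs in
rds_ib_xmit(), pieced together from the hunks below (work_alloc and ic
as in that function; not compilable as-is):

        u32 posted, adv_credits = 0;

        /* First grab, when allocating send work entries: advertise up
         * to the full window. */
        rds_ib_send_grab_credits(ic, work_alloc, &posted, 0,
                                 RDS_MAX_ADV_CREDIT);
        adv_credits += posted;

        /* Second grab, on the ACK_REQUIRED path: only take what still
         * fits next to what was already advertised. */
        rds_ib_send_grab_credits(ic, 0, &posted, 1,
                                 RDS_MAX_ADV_CREDIT - adv_credits);
        adv_credits += posted;
        BUG_ON(adv_credits > 255);      /* must fit in one byte */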

Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/ib.h      |    2 +-
 net/rds/ib_recv.c |    2 +-
 net/rds/ib_send.c |    8 ++++----
 net/rds/iw.h      |    2 +-
 net/rds/iw_recv.c |    2 +-
 net/rds/iw_send.c |    8 ++++----
 net/rds/rds.h     |    2 +-
 7 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/net/rds/ib.h b/net/rds/ib.h
index 8be563a..7ff9ea0 100644
--- a/net/rds/ib.h
+++ b/net/rds/ib.h
@@ -320,7 +320,7 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rds_rdma_op *op);
 void rds_ib_send_add_credits(struct rds_connection *conn, unsigned int credits);
 void rds_ib_advertise_credits(struct rds_connection *conn, unsigned int posted);
 int rds_ib_send_grab_credits(struct rds_ib_connection *ic, u32 wanted,
-			     u32 *adv_credits, int need_posted);
+			     u32 *adv_credits, int need_posted, int max_posted);
 
 /* ib_stats.c */
 DECLARE_PER_CPU(struct rds_ib_statistics, rds_ib_stats);
diff --git a/net/rds/ib_recv.c b/net/rds/ib_recv.c
index 5061b55..71b032b 100644
--- a/net/rds/ib_recv.c
+++ b/net/rds/ib_recv.c
@@ -491,7 +491,7 @@ void rds_ib_attempt_ack(struct rds_ib_connection *ic)
 	}
 
 	/* Can we get a send credit? */
-	if (!rds_ib_send_grab_credits(ic, 1, &adv_credits, 0)) {
+	if (!rds_ib_send_grab_credits(ic, 1, &adv_credits, 0, RDS_MAX_ADV_CREDIT)) {
 		rds_ib_stats_inc(s_ib_tx_throttle);
 		clear_bit(IB_ACK_IN_FLIGHT, &ic->i_ack_flags);
 		return;
diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
index fa684b7..23bf830 100644
--- a/net/rds/ib_send.c
+++ b/net/rds/ib_send.c
@@ -311,7 +311,7 @@ void rds_ib_send_cq_comp_handler(struct ib_cq *cq, void *context)
  * and using atomic_cmpxchg when updating the two counters.
  */
 int rds_ib_send_grab_credits(struct rds_ib_connection *ic,
-			     u32 wanted, u32 *adv_credits, int need_posted)
+			     u32 wanted, u32 *adv_credits, int need_posted, int max_posted)
 {
 	unsigned int avail, posted, got = 0, advertise;
 	long oldval, newval;
@@ -351,7 +351,7 @@ try_again:
 	 * available.
 	 */
 	if (posted && (got || need_posted)) {
-		advertise = min_t(unsigned int, posted, RDS_MAX_ADV_CREDIT);
+		advertise = min_t(unsigned int, posted, max_posted);
 		newval -= IB_SET_POST_CREDITS(advertise);
 	}
 
@@ -498,7 +498,7 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
 
 	credit_alloc = work_alloc;
 	if (ic->i_flowctl) {
-		credit_alloc = rds_ib_send_grab_credits(ic, work_alloc, &posted, 0);
+		credit_alloc = rds_ib_send_grab_credits(ic, work_alloc, &posted, 0, RDS_MAX_ADV_CREDIT);
 		adv_credits += posted;
 		if (credit_alloc < work_alloc) {
 			rds_ib_ring_unalloc(&ic->i_send_ring, work_alloc - credit_alloc);
@@ -571,7 +571,7 @@ int rds_ib_xmit(struct rds_connection *conn, struct rds_message *rm,
 		/*
 		 * Update adv_credits since we reset the ACK_REQUIRED bit.
 		 */
-		rds_ib_send_grab_credits(ic, 0, &posted, 1);
+		rds_ib_send_grab_credits(ic, 0, &posted, 1, RDS_MAX_ADV_CREDIT - adv_credits);
 		adv_credits += posted;
 		BUG_ON(adv_credits > 255);
 	} else if (ic->i_rm != rm)
diff --git a/net/rds/iw.h b/net/rds/iw.h
index 0ddda34..6bbe459 100644
--- a/net/rds/iw.h
+++ b/net/rds/iw.h
@@ -348,7 +348,7 @@ int rds_iw_xmit_rdma(struct rds_connection *conn, struct rds_rdma_op *op);
 void rds_iw_send_add_credits(struct rds_connection *conn, unsigned int credits);
 void rds_iw_advertise_credits(struct rds_connection *conn, unsigned int posted);
 int rds_iw_send_grab_credits(struct rds_iw_connection *ic, u32 wanted,
-			     u32 *adv_credits, int need_posted);
+			     u32 *adv_credits, int need_posted, int max_posted);
 
 /* ib_stats.c */
 DECLARE_PER_CPU(struct rds_iw_statistics, rds_iw_stats);
diff --git a/net/rds/iw_recv.c b/net/rds/iw_recv.c
index a1931f0..44cc293 100644
--- a/net/rds/iw_recv.c
+++ b/net/rds/iw_recv.c
@@ -491,7 +491,7 @@ void rds_iw_attempt_ack(struct rds_iw_connection *ic)
 	}
 
 	/* Can we get a send credit? */
-	if (!rds_iw_send_grab_credits(ic, 1, &adv_credits, 0)) {
+	if (!rds_iw_send_grab_credits(ic, 1, &adv_credits, 0, RDS_MAX_ADV_CREDIT)) {
 		rds_iw_stats_inc(s_iw_tx_throttle);
 		clear_bit(IB_ACK_IN_FLIGHT, &ic->i_ack_flags);
 		return;
diff --git a/net/rds/iw_send.c b/net/rds/iw_send.c
index 626290b..44a6a05 100644
--- a/net/rds/iw_send.c
+++ b/net/rds/iw_send.c
@@ -347,7 +347,7 @@ void rds_iw_send_cq_comp_handler(struct ib_cq *cq, void *context)
  * and using atomic_cmpxchg when updating the two counters.
  */
 int rds_iw_send_grab_credits(struct rds_iw_connection *ic,
-			     u32 wanted, u32 *adv_credits, int need_posted)
+			     u32 wanted, u32 *adv_credits, int need_posted, int max_posted)
 {
 	unsigned int avail, posted, got = 0, advertise;
 	long oldval, newval;
@@ -387,7 +387,7 @@ try_again:
 	 * available.
 	 */
 	if (posted && (got || need_posted)) {
-		advertise = min_t(unsigned int, posted, RDS_MAX_ADV_CREDIT);
+		advertise = min_t(unsigned int, posted, max_posted);
 		newval -= IB_SET_POST_CREDITS(advertise);
 	}
 
@@ -541,7 +541,7 @@ int rds_iw_xmit(struct rds_connection *conn, struct rds_message *rm,
 
 	credit_alloc = work_alloc;
 	if (ic->i_flowctl) {
-		credit_alloc = rds_iw_send_grab_credits(ic, work_alloc, &posted, 0);
+		credit_alloc = rds_iw_send_grab_credits(ic, work_alloc, &posted, 0, RDS_MAX_ADV_CREDIT);
 		adv_credits += posted;
 		if (credit_alloc < work_alloc) {
 			rds_iw_ring_unalloc(&ic->i_send_ring, work_alloc - credit_alloc);
@@ -614,7 +614,7 @@ int rds_iw_xmit(struct rds_connection *conn, struct rds_message *rm,
 		/*
 		 * Update adv_credits since we reset the ACK_REQUIRED bit.
 		 */
-		rds_iw_send_grab_credits(ic, 0, &posted, 1);
+		rds_iw_send_grab_credits(ic, 0, &posted, 1, RDS_MAX_ADV_CREDIT - adv_credits);
 		adv_credits += posted;
 		BUG_ON(adv_credits > 255);
 	} else if (ic->i_rm != rm)
diff --git a/net/rds/rds.h b/net/rds/rds.h
index 0604007..a6c8c43 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -128,7 +128,7 @@ struct rds_connection {
 #define RDS_FLAG_CONG_BITMAP	0x01
 #define RDS_FLAG_ACK_REQUIRED	0x02
 #define RDS_FLAG_RETRANSMITTED	0x04
-#define RDS_MAX_ADV_CREDIT	127
+#define RDS_MAX_ADV_CREDIT	255
 
 /*
  * Maximum space available for extension headers.
-- 
1.5.6.3



* [PATCH 6/9] RDS: Fix ordering in a conditional
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
                   ` (4 preceding siblings ...)
  2009-03-30 18:44 ` [PATCH 5/9] RDS/IW+IB: Allow max credit advertise window Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  2009-03-31  4:27   ` Roland Dreier
  2009-03-30 18:44 ` [PATCH 7/9] RDS: Establish connection before parsing CMSGs Andy Grover
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel

Putting the constant first is a supposed "best practice" that actually makes
the code harder to read.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/rdma.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/net/rds/rdma.c b/net/rds/rdma.c
index eaeeb91..584eac3 100644
--- a/net/rds/rdma.c
+++ b/net/rds/rdma.c
@@ -155,7 +155,7 @@ static int rds_pin_pages(unsigned long user_addr, unsigned int nr_pages,
 			     nr_pages, write, 0, pages, NULL);
 	up_read(&current->mm->mmap_sem);
 
-	if (0 <= ret && (unsigned) ret < nr_pages) {
+	if (ret > 0 && (unsigned) ret < nr_pages) {
 		while (ret--)
 			put_page(pages[ret]);
 		ret = -EFAULT;
-- 
1.5.6.3



* [PATCH 7/9] RDS: Establish connection before parsing CMSGs
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
                   ` (5 preceding siblings ...)
  2009-03-30 18:44 ` [PATCH 6/9] RDS: Fix ordering in a conditional Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  2009-03-30 18:44 ` [PATCH 8/9] RDS: Rewrite connection cleanup Andy Grover
  2009-03-30 18:44 ` [PATCH 9/9] RDS: use get_user_pages_fast() Andy Grover
  8 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel

The first message to a remote node should prompt a new connection,
even if it is only an RDMA op passed in via a CMSG. Therefore, move
CMSG parsing to after connection establishment.
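
A condensed view of the reordered rds_sendmsg() flow in the hunks below
(a sketch, not a literal excerpt):

        /*
         *  1. find or create the connection (cached rs->rs_conn when the
         *     destination matches, otherwise the rds_conn_create path);
         *  2. rds_cmsg_send(rs, rm, msg, &allocated_mr) -- parse CMSGs;
         *  3. reject RDMA requests if conn->c_trans->xmit_rdma == NULL.
         *
         * So a sendmsg() whose only payload is an RDMA CMSG now triggers
         * connection establishment before its control data is examined.
         */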

Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/send.c |   10 +++++-----
 1 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/net/rds/send.c b/net/rds/send.c
index 104fe03..a4a7f42 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -854,11 +854,6 @@ int rds_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
 
 	rm->m_daddr = daddr;
 
-	/* Parse any control messages the user may have included. */
-	ret = rds_cmsg_send(rs, rm, msg, &allocated_mr);
-	if (ret)
-		goto out;
-
 	/* rds_conn_create has a spinlock that runs with IRQ off.
 	 * Caching the conn in the socket helps a lot. */
 	if (rs->rs_conn && rs->rs_conn->c_faddr == daddr)
@@ -874,6 +869,11 @@ int rds_sendmsg(struct kiocb *iocb, struct socket *sock, struct msghdr *msg,
 		rs->rs_conn = conn;
 	}
 
+	/* Parse any control messages the user may have included. */
+	ret = rds_cmsg_send(rs, rm, msg, &allocated_mr);
+	if (ret)
+		goto out;
+
 	if ((rm->m_rdma_cookie || rm->m_rdma_op)
 	 && conn->c_trans->xmit_rdma == NULL) {
 		if (printk_ratelimit())
-- 
1.5.6.3



* [PATCH 8/9] RDS: Rewrite connection cleanup
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
                   ` (6 preceding siblings ...)
  2009-03-30 18:44 ` [PATCH 7/9] RDS: Establish connection before parsing CMSGs Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  2009-03-30 18:44 ` [PATCH 9/9] RDS: use get_user_pages_fast() Andy Grover
  8 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel

This fixes a bug where a connection was unexpectedly
not on *any* list while being destroyed. It also
cleans up some code duplication and regularizes some
function names.

* Grab appropriate lock in conn_free() and explain in comment
* Ensure via locking that a conn is always on either
  a dev's list or the nodev list
* Add rds_xx_remove_conn() to match rds_xx_add_conn()
* Make rds_xx_add_conn() return void
* Rename remove_{,nodev_}conns() to
  destroy_{,nodev_}conns() and unify their implementation
  in a helper function
* Document lock ordering as nodev conn_lock before
  dev_conn_lock (see the sketch below)
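
The lock-ordering rule is easiest to see in the new rds_ib_remove_conn()
(the iWARP variant has the same shape); condensed from the hunks below:

        /* Move a conn from its device's list back to the nodev list.
         * The global nodev lock is the outer lock; the per-device lock
         * nests inside it, so every path agrees on the same order. */
        spin_lock(&ib_nodev_conns_lock);        /* outer */
        spin_lock_irq(&rds_ibdev->spinlock);    /* inner, per-device */
        list_del(&ic->ib_node);
        spin_unlock_irq(&rds_ibdev->spinlock);
        list_add_tail(&ic->ib_node, &ib_nodev_conns);
        spin_unlock(&ib_nodev_conns_lock);
        ic->rds_ibdev = NULL;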

Reported-by: Yosef Etigin <yosefe@voltaire.com>
Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/ib.c      |    5 +++--
 net/rds/ib.h      |   14 +++++++++++---
 net/rds/ib_cm.c   |   34 +++++++++++++++++++---------------
 net/rds/ib_rdma.c |   43 +++++++++++++++++++++----------------------
 net/rds/iw.c      |    5 +++--
 net/rds/iw.h      |   14 +++++++++++---
 net/rds/iw_cm.c   |   35 +++++++++++++++++++----------------
 net/rds/iw_rdma.c |   44 ++++++++++++++++++++++----------------------
 8 files changed, 109 insertions(+), 85 deletions(-)

diff --git a/net/rds/ib.c b/net/rds/ib.c
index 06a7b79..4933b38 100644
--- a/net/rds/ib.c
+++ b/net/rds/ib.c
@@ -51,6 +51,7 @@ MODULE_PARM_DESC(fmr_message_size, " Max size of a RDMA transfer");
 
 struct list_head rds_ib_devices;
 
+/* NOTE: if also grabbing ibdev lock, grab this first */
 DEFINE_SPINLOCK(ib_nodev_conns_lock);
 LIST_HEAD(ib_nodev_conns);
 
@@ -137,7 +138,7 @@ void rds_ib_remove_one(struct ib_device *device)
 		kfree(i_ipaddr);
 	}
 
-	rds_ib_remove_conns(rds_ibdev);
+	rds_ib_destroy_conns(rds_ibdev);
 
 	if (rds_ibdev->mr_pool)
 		rds_ib_destroy_mr_pool(rds_ibdev->mr_pool);
@@ -249,7 +250,7 @@ static int rds_ib_laddr_check(__be32 addr)
 void rds_ib_exit(void)
 {
 	rds_info_deregister_func(RDS_INFO_IB_CONNECTIONS, rds_ib_ic_info);
-	rds_ib_remove_nodev_conns();
+	rds_ib_destroy_nodev_conns();
 	ib_unregister_client(&rds_ib_client);
 	rds_ib_sysctl_exit();
 	rds_ib_recv_exit();
diff --git a/net/rds/ib.h b/net/rds/ib.h
index 7ff9ea0..4f82a1d 100644
--- a/net/rds/ib.h
+++ b/net/rds/ib.h
@@ -267,9 +267,17 @@ void rds_ib_cm_connect_complete(struct rds_connection *conn,
 
 /* ib_rdma.c */
 int rds_ib_update_ipaddr(struct rds_ib_device *rds_ibdev, __be32 ipaddr);
-int rds_ib_add_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn);
-void rds_ib_remove_nodev_conns(void);
-void rds_ib_remove_conns(struct rds_ib_device *rds_ibdev);
+void rds_ib_add_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn);
+void rds_ib_remove_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn);
+void __rds_ib_destroy_conns(struct list_head *list, spinlock_t *list_lock);
+static inline void rds_ib_destroy_nodev_conns(void)
+{
+	__rds_ib_destroy_conns(&ib_nodev_conns, &ib_nodev_conns_lock);
+}
+static inline void rds_ib_destroy_conns(struct rds_ib_device *rds_ibdev)
+{
+	__rds_ib_destroy_conns(&rds_ibdev->conn_list, &rds_ibdev->spinlock);
+}
 struct rds_ib_mr_pool *rds_ib_create_mr_pool(struct rds_ib_device *);
 void rds_ib_get_mr_info(struct rds_ib_device *rds_ibdev, struct rds_info_rdma_connection *iinfo);
 void rds_ib_destroy_mr_pool(struct rds_ib_mr_pool *);
diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
index 0532237..889ab04 100644
--- a/net/rds/ib_cm.c
+++ b/net/rds/ib_cm.c
@@ -126,9 +126,7 @@ void rds_ib_cm_connect_complete(struct rds_connection *conn, struct rdma_cm_even
 	err = rds_ib_update_ipaddr(rds_ibdev, conn->c_laddr);
 	if (err)
 		printk(KERN_ERR "rds_ib_update_ipaddr failed (%d)\n", err);
-	err = rds_ib_add_conn(rds_ibdev, conn);
-	if (err)
-		printk(KERN_ERR "rds_ib_add_conn failed (%d)\n", err);
+	rds_ib_add_conn(rds_ibdev, conn);
 
 	/* If the peer gave us the last packet it saw, process this as if
 	 * we had received a regular ACK. */
@@ -616,18 +614,8 @@ void rds_ib_conn_shutdown(struct rds_connection *conn)
 		/*
 		 * Move connection back to the nodev list.
 		 */
-		if (ic->rds_ibdev) {
-
-			spin_lock_irq(&ic->rds_ibdev->spinlock);
-			BUG_ON(list_empty(&ic->ib_node));
-			list_del(&ic->ib_node);
-			spin_unlock_irq(&ic->rds_ibdev->spinlock);
-
-			spin_lock_irq(&ib_nodev_conns_lock);
-			list_add_tail(&ic->ib_node, &ib_nodev_conns);
-			spin_unlock_irq(&ib_nodev_conns_lock);
-			ic->rds_ibdev = NULL;
-		}
+		if (ic->rds_ibdev)
+			rds_ib_remove_conn(ic->rds_ibdev, conn);
 
 		ic->i_cm_id = NULL;
 		ic->i_pd = NULL;
@@ -701,11 +689,27 @@ int rds_ib_conn_alloc(struct rds_connection *conn, gfp_t gfp)
 	return 0;
 }
 
+/*
+ * Free a connection. Connection must be shut down and not set for reconnect.
+ */
 void rds_ib_conn_free(void *arg)
 {
 	struct rds_ib_connection *ic = arg;
+	spinlock_t	*lock_ptr;
+
 	rdsdebug("ic %p\n", ic);
+
+	/*
+	 * Conn is either on a dev's list or on the nodev list.
+	 * A race with shutdown() or connect() would cause problems
+	 * (since rds_ibdev would change) but that should never happen.
+	 */
+	lock_ptr = ic->rds_ibdev ? &ic->rds_ibdev->spinlock : &ib_nodev_conns_lock;
+
+	spin_lock_irq(lock_ptr);
 	list_del(&ic->ib_node);
+	spin_unlock_irq(lock_ptr);
+
 	kfree(ic);
 }
 
diff --git a/net/rds/ib_rdma.c b/net/rds/ib_rdma.c
index 69a6289..81033af 100644
--- a/net/rds/ib_rdma.c
+++ b/net/rds/ib_rdma.c
@@ -139,7 +139,7 @@ int rds_ib_update_ipaddr(struct rds_ib_device *rds_ibdev, __be32 ipaddr)
 	return rds_ib_add_ipaddr(rds_ibdev, ipaddr);
 }
 
-int rds_ib_add_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn)
+void rds_ib_add_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn)
 {
 	struct rds_ib_connection *ic = conn->c_transport_data;
 
@@ -148,45 +148,44 @@ int rds_ib_add_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn
 	BUG_ON(list_empty(&ib_nodev_conns));
 	BUG_ON(list_empty(&ic->ib_node));
 	list_del(&ic->ib_node);
-	spin_unlock_irq(&ib_nodev_conns_lock);
 
 	spin_lock_irq(&rds_ibdev->spinlock);
 	list_add_tail(&ic->ib_node, &rds_ibdev->conn_list);
 	spin_unlock_irq(&rds_ibdev->spinlock);
+	spin_unlock_irq(&ib_nodev_conns_lock);
 
 	ic->rds_ibdev = rds_ibdev;
-
-	return 0;
 }
 
-void rds_ib_remove_nodev_conns(void)
+void rds_ib_remove_conn(struct rds_ib_device *rds_ibdev, struct rds_connection *conn)
 {
-	struct rds_ib_connection *ic, *_ic;
-	LIST_HEAD(tmp_list);
+	struct rds_ib_connection *ic = conn->c_transport_data;
 
-	/* avoid calling conn_destroy with irqs off */
-	spin_lock_irq(&ib_nodev_conns_lock);
-	list_splice(&ib_nodev_conns, &tmp_list);
-	INIT_LIST_HEAD(&ib_nodev_conns);
-	spin_unlock_irq(&ib_nodev_conns_lock);
+	/* place conn on nodev_conns_list */
+	spin_lock(&ib_nodev_conns_lock);
 
-	list_for_each_entry_safe(ic, _ic, &tmp_list, ib_node) {
-		if (ic->conn->c_passive)
-			rds_conn_destroy(ic->conn->c_passive);
-		rds_conn_destroy(ic->conn);
-	}
+	spin_lock_irq(&rds_ibdev->spinlock);
+	BUG_ON(list_empty(&ic->ib_node));
+	list_del(&ic->ib_node);
+	spin_unlock_irq(&rds_ibdev->spinlock);
+
+	list_add_tail(&ic->ib_node, &ib_nodev_conns);
+
+	spin_unlock(&ib_nodev_conns_lock);
+
+	ic->rds_ibdev = NULL;
 }
 
-void rds_ib_remove_conns(struct rds_ib_device *rds_ibdev)
+void __rds_ib_destroy_conns(struct list_head *list, spinlock_t *list_lock)
 {
 	struct rds_ib_connection *ic, *_ic;
 	LIST_HEAD(tmp_list);
 
 	/* avoid calling conn_destroy with irqs off */
-	spin_lock_irq(&rds_ibdev->spinlock);
-	list_splice(&rds_ibdev->conn_list, &tmp_list);
-	INIT_LIST_HEAD(&rds_ibdev->conn_list);
-	spin_unlock_irq(&rds_ibdev->spinlock);
+	spin_lock_irq(list_lock);
+	list_splice(list, &tmp_list);
+	INIT_LIST_HEAD(list);
+	spin_unlock_irq(list_lock);
 
 	list_for_each_entry_safe(ic, _ic, &tmp_list, ib_node) {
 		if (ic->conn->c_passive)
diff --git a/net/rds/iw.c b/net/rds/iw.c
index 1b56905..b732efb 100644
--- a/net/rds/iw.c
+++ b/net/rds/iw.c
@@ -51,6 +51,7 @@ MODULE_PARM_DESC(fastreg_message_size, " Max size of a RDMA transfer (fastreg MR
 
 struct list_head rds_iw_devices;
 
+/* NOTE: if also grabbing iwdev lock, grab this first */
 DEFINE_SPINLOCK(iw_nodev_conns_lock);
 LIST_HEAD(iw_nodev_conns);
 
@@ -145,7 +146,7 @@ void rds_iw_remove_one(struct ib_device *device)
 	}
 	spin_unlock_irq(&rds_iwdev->spinlock);
 
-	rds_iw_remove_conns(rds_iwdev);
+	rds_iw_destroy_conns(rds_iwdev);
 
 	if (rds_iwdev->mr_pool)
 		rds_iw_destroy_mr_pool(rds_iwdev->mr_pool);
@@ -258,7 +259,7 @@ static int rds_iw_laddr_check(__be32 addr)
 void rds_iw_exit(void)
 {
 	rds_info_deregister_func(RDS_INFO_IWARP_CONNECTIONS, rds_iw_ic_info);
-	rds_iw_remove_nodev_conns();
+	rds_iw_destroy_nodev_conns();
 	ib_unregister_client(&rds_iw_client);
 	rds_iw_sysctl_exit();
 	rds_iw_recv_exit();
diff --git a/net/rds/iw.h b/net/rds/iw.h
index 6bbe459..afc4b4b 100644
--- a/net/rds/iw.h
+++ b/net/rds/iw.h
@@ -294,9 +294,17 @@ void rds_iw_cm_connect_complete(struct rds_connection *conn,
 
 /* ib_rdma.c */
 int rds_iw_update_cm_id(struct rds_iw_device *rds_iwdev, struct rdma_cm_id *cm_id);
-int rds_iw_add_conn(struct rds_iw_device *rds_iwdev, struct rds_connection *conn);
-void rds_iw_remove_nodev_conns(void);
-void rds_iw_remove_conns(struct rds_iw_device *rds_iwdev);
+void rds_iw_add_conn(struct rds_iw_device *rds_iwdev, struct rds_connection *conn);
+void rds_iw_remove_conn(struct rds_iw_device *rds_iwdev, struct rds_connection *conn);
+void __rds_iw_destroy_conns(struct list_head *list, spinlock_t *list_lock);
+static inline void rds_iw_destroy_nodev_conns(void)
+{
+	__rds_iw_destroy_conns(&iw_nodev_conns, &iw_nodev_conns_lock);
+}
+static inline void rds_iw_destroy_conns(struct rds_iw_device *rds_iwdev)
+{
+	__rds_iw_destroy_conns(&rds_iwdev->conn_list, &rds_iwdev->spinlock);
+}
 struct rds_iw_mr_pool *rds_iw_create_mr_pool(struct rds_iw_device *);
 void rds_iw_get_mr_info(struct rds_iw_device *rds_iwdev, struct rds_info_rdma_connection *iinfo);
 void rds_iw_destroy_mr_pool(struct rds_iw_mr_pool *);
diff --git a/net/rds/iw_cm.c b/net/rds/iw_cm.c
index 57ecb3d..0ffaa3e 100644
--- a/net/rds/iw_cm.c
+++ b/net/rds/iw_cm.c
@@ -86,9 +86,7 @@ void rds_iw_cm_connect_complete(struct rds_connection *conn, struct rdma_cm_even
 	err = rds_iw_update_cm_id(rds_iwdev, ic->i_cm_id);
 	if (err)
 		printk(KERN_ERR "rds_iw_update_ipaddr failed (%d)\n", err);
-	err = rds_iw_add_conn(rds_iwdev, conn);
-	if (err)
-		printk(KERN_ERR "rds_iw_add_conn failed (%d)\n", err);
+	rds_iw_add_conn(rds_iwdev, conn);
 
 	/* If the peer gave us the last packet it saw, process this as if
 	 * we had received a regular ACK. */
@@ -637,19 +635,8 @@ void rds_iw_conn_shutdown(struct rds_connection *conn)
 		 * 	Move connection back to the nodev list.
 		 * 	Remove cm_id from the device cm_id list.
 		 */
-		if (ic->rds_iwdev) {
-
-			spin_lock_irq(&ic->rds_iwdev->spinlock);
-			BUG_ON(list_empty(&ic->iw_node));
-			list_del(&ic->iw_node);
-			spin_unlock_irq(&ic->rds_iwdev->spinlock);
-
-			spin_lock_irq(&iw_nodev_conns_lock);
-			list_add_tail(&ic->iw_node, &iw_nodev_conns);
-			spin_unlock_irq(&iw_nodev_conns_lock);
-			rds_iw_remove_cm_id(ic->rds_iwdev, ic->i_cm_id);
-			ic->rds_iwdev = NULL;
-		}
+		if (ic->rds_iwdev)
+			rds_iw_remove_conn(ic->rds_iwdev, conn);
 
 		rdma_destroy_id(ic->i_cm_id);
 
@@ -726,11 +713,27 @@ int rds_iw_conn_alloc(struct rds_connection *conn, gfp_t gfp)
 	return 0;
 }
 
+/*
+ * Free a connection. Connection must be shut down and not set for reconnect.
+ */
 void rds_iw_conn_free(void *arg)
 {
 	struct rds_iw_connection *ic = arg;
+	spinlock_t	*lock_ptr;
+
 	rdsdebug("ic %p\n", ic);
+
+	/*
+	 * Conn is either on a dev's list or on the nodev list.
+	 * A race with shutdown() or connect() would cause problems
+	 * (since rds_iwdev would change) but that should never happen.
+	 */
+	lock_ptr = ic->rds_iwdev ? &ic->rds_iwdev->spinlock : &iw_nodev_conns_lock;
+
+	spin_lock_irq(lock_ptr);
 	list_del(&ic->iw_node);
+	spin_unlock_irq(lock_ptr);
+
 	kfree(ic);
 }
 
diff --git a/net/rds/iw_rdma.c b/net/rds/iw_rdma.c
index 1c02a8f..dcdb37d 100644
--- a/net/rds/iw_rdma.c
+++ b/net/rds/iw_rdma.c
@@ -196,7 +196,7 @@ int rds_iw_update_cm_id(struct rds_iw_device *rds_iwdev, struct rdma_cm_id *cm_i
 	return rds_iw_add_cm_id(rds_iwdev, cm_id);
 }
 
-int rds_iw_add_conn(struct rds_iw_device *rds_iwdev, struct rds_connection *conn)
+void rds_iw_add_conn(struct rds_iw_device *rds_iwdev, struct rds_connection *conn)
 {
 	struct rds_iw_connection *ic = conn->c_transport_data;
 
@@ -205,45 +205,45 @@ int rds_iw_add_conn(struct rds_iw_device *rds_iwdev, struct rds_connection *conn
 	BUG_ON(list_empty(&iw_nodev_conns));
 	BUG_ON(list_empty(&ic->iw_node));
 	list_del(&ic->iw_node);
-	spin_unlock_irq(&iw_nodev_conns_lock);
 
 	spin_lock_irq(&rds_iwdev->spinlock);
 	list_add_tail(&ic->iw_node, &rds_iwdev->conn_list);
 	spin_unlock_irq(&rds_iwdev->spinlock);
+	spin_unlock_irq(&iw_nodev_conns_lock);
 
 	ic->rds_iwdev = rds_iwdev;
-
-	return 0;
 }
 
-void rds_iw_remove_nodev_conns(void)
+void rds_iw_remove_conn(struct rds_iw_device *rds_iwdev, struct rds_connection *conn)
 {
-	struct rds_iw_connection *ic, *_ic;
-	LIST_HEAD(tmp_list);
+	struct rds_iw_connection *ic = conn->c_transport_data;
 
-	/* avoid calling conn_destroy with irqs off */
-	spin_lock_irq(&iw_nodev_conns_lock);
-	list_splice(&iw_nodev_conns, &tmp_list);
-	INIT_LIST_HEAD(&iw_nodev_conns);
-	spin_unlock_irq(&iw_nodev_conns_lock);
+	/* place conn on nodev_conns_list */
+	spin_lock(&iw_nodev_conns_lock);
 
-	list_for_each_entry_safe(ic, _ic, &tmp_list, iw_node) {
-		if (ic->conn->c_passive)
-			rds_conn_destroy(ic->conn->c_passive);
-		rds_conn_destroy(ic->conn);
-	}
+	spin_lock_irq(&rds_iwdev->spinlock);
+	BUG_ON(list_empty(&ic->iw_node));
+	list_del(&ic->iw_node);
+	spin_unlock_irq(&rds_iwdev->spinlock);
+
+	list_add_tail(&ic->iw_node, &iw_nodev_conns);
+
+	spin_unlock(&iw_nodev_conns_lock);
+
+	rds_iw_remove_cm_id(ic->rds_iwdev, ic->i_cm_id);
+	ic->rds_iwdev = NULL;
 }
 
-void rds_iw_remove_conns(struct rds_iw_device *rds_iwdev)
+void __rds_iw_destroy_conns(struct list_head *list, spinlock_t *list_lock)
 {
 	struct rds_iw_connection *ic, *_ic;
 	LIST_HEAD(tmp_list);
 
 	/* avoid calling conn_destroy with irqs off */
-	spin_lock_irq(&rds_iwdev->spinlock);
-	list_splice(&rds_iwdev->conn_list, &tmp_list);
-	INIT_LIST_HEAD(&rds_iwdev->conn_list);
-	spin_unlock_irq(&rds_iwdev->spinlock);
+	spin_lock_irq(list_lock);
+	list_splice(list, &tmp_list);
+	INIT_LIST_HEAD(list);
+	spin_unlock_irq(list_lock);
 
 	list_for_each_entry_safe(ic, _ic, &tmp_list, iw_node) {
 		if (ic->conn->c_passive)
-- 
1.5.6.3



* [PATCH 9/9] RDS: use get_user_pages_fast()
  2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
                   ` (7 preceding siblings ...)
  2009-03-30 18:44 ` [PATCH 8/9] RDS: Rewrite connection cleanup Andy Grover
@ 2009-03-30 18:44 ` Andy Grover
  8 siblings, 0 replies; 13+ messages in thread
From: Andy Grover @ 2009-03-30 18:44 UTC (permalink / raw)
  To: netdev; +Cc: rds-devel

Use the new get_user_pages_fast(), which is simpler and faster than
taking mmap_sem and calling get_user_pages() directly.

Signed-off-by: Andy Grover <andy.grover@oracle.com>
---
 net/rds/info.c |    5 +----
 net/rds/rdma.c |    5 +----
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/net/rds/info.c b/net/rds/info.c
index 1d88553..62aeef3 100644
--- a/net/rds/info.c
+++ b/net/rds/info.c
@@ -188,10 +188,7 @@ int rds_info_getsockopt(struct socket *sock, int optname, char __user *optval,
 		ret = -ENOMEM;
 		goto out;
 	}
-	down_read(&current->mm->mmap_sem);
-	ret = get_user_pages(current, current->mm, start, nr_pages, 1, 0,
-			     pages, NULL);
-	up_read(&current->mm->mmap_sem);
+	ret = get_user_pages_fast(start, nr_pages, 1, pages);
 	if (ret != nr_pages) {
 		if (ret > 0)
 			nr_pages = ret;
diff --git a/net/rds/rdma.c b/net/rds/rdma.c
index 584eac3..6ecea09 100644
--- a/net/rds/rdma.c
+++ b/net/rds/rdma.c
@@ -150,10 +150,7 @@ static int rds_pin_pages(unsigned long user_addr, unsigned int nr_pages,
 {
 	int ret;
 
-	down_read(&current->mm->mmap_sem);
-	ret = get_user_pages(current, current->mm, user_addr,
-			     nr_pages, write, 0, pages, NULL);
-	up_read(&current->mm->mmap_sem);
+	ret = get_user_pages_fast(user_addr, nr_pages, write, pages);
 
 	if (ret > 0 && (unsigned) ret < nr_pages) {
 		while (ret--)
-- 
1.5.6.3



* Re: [PATCH 6/9] RDS: Fix ordering in a conditional
  2009-03-30 18:44 ` [PATCH 6/9] RDS: Fix ordering in a conditional Andy Grover
@ 2009-03-31  4:27   ` Roland Dreier
  2009-03-31  6:56     ` Andrew Grover
  0 siblings, 1 reply; 13+ messages in thread
From: Roland Dreier @ 2009-03-31  4:27 UTC (permalink / raw)
  To: Andy Grover; +Cc: netdev, rds-devel

 > -	if (0 <= ret && (unsigned) ret < nr_pages) {
 > +	if (ret > 0 && (unsigned) ret < nr_pages) {

This is not an equivalent transformation -- the original code is true if
ret == 0, while the new code is false.

Also it seems you don't need the unsigned cast here, since the clause
before just checked that ret is positive?
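
To make the first point concrete (a worked example, assuming nr_pages > 0):

        int ret = 0;    /* get_user_pages() pinned nothing */

        /* old: (0 <= ret && (unsigned) ret < nr_pages) is true,
         *      so the cleanup branch runs and -EFAULT is returned */
        /* new: (ret > 0 && (unsigned) ret < nr_pages) is false,
         *      so ret stays 0 and the caller may mistake it for
         *      success with no pages pinned */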

 - R.


* Re: [PATCH 6/9] RDS: Fix ordering in a conditional
  2009-03-31  4:27   ` Roland Dreier
@ 2009-03-31  6:56     ` Andrew Grover
  2009-03-31 21:50       ` David Miller
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Grover @ 2009-03-31  6:56 UTC (permalink / raw)
  To: Roland Dreier; +Cc: Andy Grover, netdev, rds-devel

On Mon, Mar 30, 2009 at 9:27 PM, Roland Dreier <rdreier@cisco.com> wrote:
>  > -    if (0 <= ret && (unsigned) ret < nr_pages) {
>  > +    if (ret > 0 && (unsigned) ret < nr_pages) {
>
> This is not an equivalent transformation -- the original code is true if
> ret == 0, while the new code is false.

Ah! Good point.

> Also it seems you don't need the unsigned cast here, since the clause
> before just checked that ret is positive?

True, but I'd bet the compiler will warn if we remove it. I'll try it
tomorrow and see.

Thanks! -- Regards -- Andy


* Re: [PATCH 6/9] RDS: Fix ordering in a conditional
  2009-03-31  6:56     ` Andrew Grover
@ 2009-03-31 21:50       ` David Miller
  0 siblings, 0 replies; 13+ messages in thread
From: David Miller @ 2009-03-31 21:50 UTC (permalink / raw)
  To: andy.grover; +Cc: rdreier, andy.grover, netdev, rds-devel

From: Andrew Grover <andy.grover@gmail.com>
Date: Mon, 30 Mar 2009 23:56:14 -0700

> On Mon, Mar 30, 2009 at 9:27 PM, Roland Dreier <rdreier@cisco.com> wrote:
> >  > -    if (0 <= ret && (unsigned) ret < nr_pages) {
> >  > +    if (ret > 0 && (unsigned) ret < nr_pages) {
> >
> > This is not an equivalent transformation -- the original code is true if
> > ret == 0, while the new code is false.
> 
> Ah! Good point.
> 
> > Also it seems you don't need the unsigned cast here, since the clause
> > before just checked that ret is positive?
> 
> True, but I'd bet the compiler will warn if we remove it. I'll try it
> tomorrow and see.

Andy, also please resubmit only the real honest-to-goodness bug fixes
in this patch series.

I don't want to see cleanups, or optimizations like the transformation
over to using get_user_pages_fast().

You could have sent that kind of stuff to me weeks ago.

Thanks.



Thread overview: 13+ messages
2009-03-30 18:44 [PATCH 0/9] RDS updates Andy Grover
2009-03-30 18:44 ` [PATCH 1/9] RDS: Fix m_rs_lock deadlock Andy Grover
2009-03-30 18:44 ` [PATCH 2/9] RDS/IW+IB: Set recv ring low water mark to 1/2 full Andy Grover
2009-03-30 18:44 ` [PATCH 3/9] RDS: Correct some iw references in rdma_transport.c Andy Grover
2009-03-30 18:44 ` [PATCH 4/9] RDS/IW+IB: Set the RDS_LL_SEND_FULL bit when we're throttled Andy Grover
2009-03-30 18:44 ` [PATCH 5/9] RDS/IW+IB: Allow max credit advertise window Andy Grover
2009-03-30 18:44 ` [PATCH 6/9] RDS: Fix ordering in a conditional Andy Grover
2009-03-31  4:27   ` Roland Dreier
2009-03-31  6:56     ` Andrew Grover
2009-03-31 21:50       ` David Miller
2009-03-30 18:44 ` [PATCH 7/9] RDS: Establish connection before parsing CMSGs Andy Grover
2009-03-30 18:44 ` [PATCH 8/9] RDS: Rewrite connection cleanup Andy Grover
2009-03-30 18:44 ` [PATCH 9/9] RDS: use get_user_pages_fast() Andy Grover
