From: Sowmini Varadhan <sowmini.varadhan@oracle.com>
To: netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Cc: chien.yen@oracle.com, davem@davemloft.net,
rds-devel@oss.oracle.com, agrover@redhat.com, clm@fb.com,
zab@zabbo.net, ajaykumar.hotchandani@oracle.com,
sergei.shtylyov@cogentembedded.com,
Sowmini Varadhan <sowmini.varadhan@oracle.com>
Subject: [PATCH v5 3/3] RDS: make sure not to loop forever inside rds_send_xmit
Date: Wed, 8 Apr 2015 09:46:11 -0400
Message-ID: <ba57e7b84e5ca8389ecf08e2e3618d4c14552b74.1428500006.git.sowmini.varadhan@oracle.com>
In-Reply-To: <cover.1428500006.git.sowmini.varadhan@oracle.com>
If a determined set of concurrent senders keeps the send queue full,
we can loop forever inside rds_send_xmit. This fix has two parts.

First, we drop out of the while(1) loop once we have processed a large
batch of messages.

Second, we add a generation number that is bumped each time the xmit
bit lock is acquired. If someone else has jumped in and made progress
on the queue, we skip our goto restart.
Original patch by Chris Mason.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
---
v5: Modified comment per DaveM feedback
 net/rds/connection.c |  1 +
 net/rds/rds.h        |  1 +
 net/rds/send.c       | 33 +++++++++++++++++++++++++++++++--
 3 files changed, 33 insertions(+), 2 deletions(-)
diff --git a/net/rds/connection.c b/net/rds/connection.c
index 7952a5b..14f0413 100644
--- a/net/rds/connection.c
+++ b/net/rds/connection.c
@@ -193,6 +193,7 @@ static struct rds_connection *__rds_conn_create(__be32 laddr, __be32 faddr,
 	}
 
 	atomic_set(&conn->c_state, RDS_CONN_DOWN);
+	conn->c_send_gen = 0;
 	conn->c_reconnect_jiffies = 0;
 	INIT_DELAYED_WORK(&conn->c_send_w, rds_send_worker);
 	INIT_DELAYED_WORK(&conn->c_recv_w, rds_recv_worker);
diff --git a/net/rds/rds.h b/net/rds/rds.h
index c3f2855..0d41155 100644
--- a/net/rds/rds.h
+++ b/net/rds/rds.h
@@ -110,6 +110,7 @@ struct rds_connection {
 	void			*c_transport_data;
 
 	atomic_t		c_state;
+	unsigned long		c_send_gen;
 	unsigned long		c_flags;
 	unsigned long		c_reconnect_jiffies;
 	struct delayed_work	c_send_w;
diff --git a/net/rds/send.c b/net/rds/send.c
index 44672be..e9430f5 100644
--- a/net/rds/send.c
+++ b/net/rds/send.c
@@ -140,8 +140,11 @@ int rds_send_xmit(struct rds_connection *conn)
 	struct scatterlist *sg;
 	int ret = 0;
 	LIST_HEAD(to_be_dropped);
+	int batch_count;
+	unsigned long send_gen = 0;
 
 restart:
+	batch_count = 0;
 
 	/*
 	 * sendmsg calls here after having queued its message on the send
@@ -157,6 +160,17 @@ int rds_send_xmit(struct rds_connection *conn)
 	}
 
 	/*
+	 * we record the send generation after doing the xmit acquire.
+	 * if someone else manages to jump in and do some work, we'll use
+	 * this to avoid a goto restart farther down.
+	 *
+	 * The acquire_in_xmit() check above ensures that only one
+	 * caller can increment c_send_gen at any time.
+	 */
+	conn->c_send_gen++;
+	send_gen = conn->c_send_gen;
+
+	/*
 	 * rds_conn_shutdown() sets the conn state and then tests RDS_IN_XMIT,
 	 * we do the opposite to avoid races.
 	 */
@@ -202,6 +216,16 @@ int rds_send_xmit(struct rds_connection *conn)
 		if (!rm) {
 			unsigned int len;
 
+			batch_count++;
+
+			/* we want to process as big a batch as we can, but
+			 * we also want to avoid softlockups. If we've been
+			 * through a lot of messages, lets back off and see
+			 * if anyone else jumps in
+			 */
+			if (batch_count >= 1024)
+				goto over_batch;
+
 			spin_lock_irqsave(&conn->c_lock, flags);
 
 			if (!list_empty(&conn->c_send_queue)) {
@@ -357,9 +381,9 @@ int rds_send_xmit(struct rds_connection *conn)
 		}
 	}
 
+over_batch:
 	if (conn->c_trans->xmit_complete)
 		conn->c_trans->xmit_complete(conn);
-
 	release_in_xmit(conn);
 
 	/* Nuke any messages we decided not to retransmit. */
@@ -380,10 +404,15 @@ int rds_send_xmit(struct rds_connection *conn)
 	 * If the transport cannot continue (i.e ret != 0), then it must
 	 * call us when more room is available, such as from the tx
 	 * completion handler.
+	 *
+	 * We have an extra generation check here so that if someone manages
+	 * to jump in after our release_in_xmit, we'll see that they have done
+	 * some work and we will skip our goto
 	 */
 	if (ret == 0) {
 		smp_mb();
-		if (!list_empty(&conn->c_send_queue)) {
+		if (!list_empty(&conn->c_send_queue) &&
+		    send_gen == conn->c_send_gen) {
 			rds_stats_inc(s_send_lock_queue_raced);
 			goto restart;
 		}
--
1.7.1
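
The pattern the patch applies — a capped batch inside the transmit loop,
plus a generation counter re-checked after releasing the bit lock so a
restart only happens when nobody else has made progress — is easy to see
in isolation. Below is a minimal userspace C11 sketch of that pattern. It
is not RDS code: queue_nonempty(), transmit_one(), and the toy pending
counter are hypothetical stand-ins for the real send queue and transport,
and C11 atomics stand in for the kernel's RDS_IN_XMIT bit lock and
smp_mb().

/*
 * Minimal sketch of the batch-cap + send-generation pattern.
 * Hypothetical stand-ins: queue_nonempty(), transmit_one(), pending.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define SEND_BATCH_LIMIT 1024          /* same cap the patch uses */

struct conn {
	atomic_flag in_xmit;           /* models the RDS_IN_XMIT bit lock */
	atomic_ulong send_gen;         /* models c_send_gen */
	long pending;                  /* toy stand-in for c_send_queue;
	                                  only the lock holder touches it */
};

static bool queue_nonempty(struct conn *c) { return c->pending > 0; }
static void transmit_one(struct conn *c)   { c->pending--; }

static void send_xmit(struct conn *c)
{
restart:
	/* Only one caller may transmit at a time; everyone else bails. */
	if (atomic_flag_test_and_set(&c->in_xmit))
		return;

	/*
	 * Bump the generation while holding the bit lock (so only one
	 * caller increments it at a time) and remember our value.
	 */
	unsigned long my_gen = atomic_fetch_add(&c->send_gen, 1) + 1;

	int batch_count = 0;
	while (queue_nonempty(c)) {
		/* Cap the batch so one sender cannot spin forever. */
		if (++batch_count >= SEND_BATCH_LIMIT)
			break;
		transmit_one(c);
	}

	atomic_flag_clear(&c->in_xmit);

	/*
	 * If work remains but the generation changed, another sender
	 * grabbed the lock after our release and is making progress,
	 * so we skip the restart instead of looping.
	 */
	if (queue_nonempty(c) && atomic_load(&c->send_gen) == my_gen)
		goto restart;
}

int main(void)
{
	struct conn c = { .in_xmit = ATOMIC_FLAG_INIT, .pending = 5000 };
	atomic_init(&c.send_gen, 0);
	send_xmit(&c);         /* drains 5000 messages in capped batches */
	printf("pending after xmit: %ld\n", c.pending);
	return 0;
}

In the single-threaded demo above, each acquisition of the lock transmits
at most 1023 messages before breaking out, releasing the lock, and (since
the generation is unchanged) restarting; with concurrent senders, the
generation check is what lets a thread hand off the remaining work instead
of looping forever.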