From: Dust Li <dust.li@linux.alibaba.com>
To: Karsten Graul <kgraul@linux.ibm.com>,
Tony Lu <tonylu@linux.alibaba.com>,
Guangguan Wang <guangguan.wang@linux.alibaba.com>
Cc: davem@davemloft.net, kuba@kernel.org, netdev@vger.kernel.org,
linux-s390@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: [PATCH net-next 5/7] net/smc: correct settings of RMB window update limit
Date: Tue, 1 Mar 2022 17:44:00 +0800 [thread overview]
Message-ID: <20220301094402.14992-6-dust.li@linux.alibaba.com> (raw)
In-Reply-To: <20220301094402.14992-1-dust.li@linux.alibaba.com>
rmbe_update_limit is used to limit announcing receive
window updates too frequently. RFC 7609 requests a minimal
increase in the window size of 10% of the receive buffer
space. But the current implementation uses:
min_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2)
and SOCK_MIN_SNDBUF / 2 == 2304 bytes, which is almost
always less than 10% of the receive buffer space.
This causes the receiver to send a CDC message to update
its consumer cursor whenever it consumes more than 2K of
data. As a result, we may encounter something like
"TCP silly window syndrome" when sending 2.5K~8K messages.
This patch fixes this by using
max_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2) instead.
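For illustration, here is a minimal userspace sketch (not part of
the patch; the helper names limit_old/limit_new and the sample
buffer sizes are hypothetical) comparing the old min_t behavior
with the fixed max_t behavior, using the
SOCK_MIN_SNDBUF / 2 == 2304 value quoted above:

#include <stdio.h>

/* SOCK_MIN_SNDBUF / 2, per the commit message */
#define SOCK_MIN_SNDBUF_HALF 2304

static int limit_old(int rmbe_size)	/* old behavior: min_t */
{
	return rmbe_size / 10 < SOCK_MIN_SNDBUF_HALF ?
	       rmbe_size / 10 : SOCK_MIN_SNDBUF_HALF;
}

static int limit_new(int rmbe_size)	/* fixed behavior: max_t */
{
	return rmbe_size / 10 > SOCK_MIN_SNDBUF_HALF ?
	       rmbe_size / 10 : SOCK_MIN_SNDBUF_HALF;
}

int main(void)
{
	int sizes[] = { 16384, 65536, 524288 };	/* hypothetical RMB sizes */
	int i;

	for (i = 0; i < 3; i++)
		printf("rmbe_size=%6d old=%5d new=%6d\n",
		       sizes[i], limit_old(sizes[i]), limit_new(sizes[i]));
	return 0;
}

For a 64K RMB this prints old=2304 but new=6553: the old code caps
the limit at 2304 bytes (~3.5% of the buffer), while the fixed code
yields the 10% that RFC 7609 asks for.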
With this patch and SMC autocorking enabled, the qperf 2K/4K/8K
tcp_bw tests show 45%/75%/40% increases in throughput, respectively.
Signed-off-by: Dust Li <dust.li@linux.alibaba.com>
---
net/smc/smc_core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
index 29525d03b253..1674b2549f8b 100644
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c
@@ -1988,7 +1988,7 @@ static struct smc_buf_desc *smc_buf_get_slot(int compressed_bufsize,
*/
static inline int smc_rmb_wnd_update_limit(int rmbe_size)
{
- return min_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2);
+ return max_t(int, rmbe_size / 10, SOCK_MIN_SNDBUF / 2);
}
/* map an rmb buf to a link */
--
2.19.1.3.ge56e4f7
Thread overview: 13+ messages
2022-03-01 9:43 [PATCH net-next 0/7] net/smc: some datapath performance optimizations Dust Li
2022-03-01 9:43 ` [PATCH net-next 1/7] net/smc: add sysctl interface for SMC Dust Li
2022-03-01 9:43 ` [PATCH net-next 2/7] net/smc: add autocorking support Dust Li
2022-03-01 9:43 ` [PATCH net-next 3/7] net/smc: add sysctl for autocorking Dust Li
2022-03-01 22:20 ` Jakub Kicinski
2022-03-01 9:43 ` [PATCH net-next 4/7] net/smc: send directly on setting TCP_NODELAY Dust Li
2022-03-01 9:44 ` Dust Li [this message]
2022-03-01 9:44 ` [PATCH net-next 6/7] net/smc: don't req_notify until all CQEs drained Dust Li
2022-03-01 10:14 ` Leon Romanovsky
2022-03-01 10:53 ` dust.li
2022-03-04 8:19 ` Karsten Graul
2022-03-04 8:23 ` dust.li
2022-03-01 9:44 ` [PATCH net-next 7/7] net/smc: don't send in the BH context if sock_owned_by_user Dust Li