From: "D. Wythe" <alibuda@linux.alibaba.com>
To: Wen Gu <guwen@linux.alibaba.com>,
kgraul@linux.ibm.com, wenjia@linux.ibm.com
Cc: kuba@kernel.org, davem@davemloft.net, netdev@vger.kernel.org,
linux-s390@vger.kernel.org, linux-rdma@vger.kernel.org
Subject: Re: [PATCH net-next v2 10/10] net/smc: fix application data exception
Date: Fri, 16 Sep 2022 13:24:17 +0800 [thread overview]
Message-ID: <93eddaa5-082c-c3d2-8bc0-f6aa912c9398@linux.alibaba.com> (raw)
In-Reply-To: <9f67d8b3-e813-6bc6-ca1f-e387288e9df4@linux.alibaba.com>
Hi Wen Gu,
This is indeed the same issue; I will fix it in the next version.
Thanks
D. Wythe
On 9/8/22 5:37 PM, Wen Gu wrote:
>
>
> On 2022/8/26 17:51, D. Wythe wrote:
>
>> From: "D. Wythe" <alibuda@linux.alibaba.com>
>>
>> After optimizing the parallelism of SMC-R connection establishment,
>> the following exceptions occur with a certain probability in the
>> wrk benchmark test:
>>
>> Running 10s test @ http://11.213.45.6:80
>> 8 threads and 64 connections
>> Thread Stats Avg Stdev Max +/- Stdev
>> Latency 3.72ms 13.94ms 245.33ms 94.17%
>> Req/Sec 1.96k 713.67 5.41k 75.16%
>> 155262 requests in 10.10s, 23.10MB read
>> Non-2xx or 3xx responses: 3
>>
>> The failing responses turn out to be HTTP 400 errors, a serious
>> problem in our test: it means the application data was corrupted.
>>
>> Consider the following scenarios:
>>
>> CPU0                                  CPU1
>>
>> buf_desc->used = 0;
>>                                       cmpxchg(buf_desc->used, 0, 1)
>>                                       deal_with(buf_desc)
>>
>> memset(buf_desc->cpu_addr, 0);
>>
>> This will cause the data received by a victim connection to be cleared,
>> thus triggering an HTTP 400 error in the server.
>>
>> This patch swaps the order of clearing ->used and zeroing the buffer,
>> and adds a barrier to ensure correct memory ordering.
>>
>> Fixes: 1c5526968e27 ("net/smc: Clear memory when release and reuse buffer")
>> Signed-off-by: D. Wythe <alibuda@linux.alibaba.com>
>> ---
>> net/smc/smc_core.c | 5 +++--
>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
>> index 84bf84c..fdad953 100644
>> --- a/net/smc/smc_core.c
>> +++ b/net/smc/smc_core.c
>> @@ -1380,8 +1380,9 @@ static void smcr_buf_unuse(struct smc_buf_desc *buf_desc, bool is_rmb,
>>  		smc_buf_free(lgr, is_rmb, buf_desc);
>>  	} else {
>> -		buf_desc->used = 0;
>> -		memset(buf_desc->cpu_addr, 0, buf_desc->len);
>> +		/* memzero_explicit provides potential memory barrier semantics */
>> +		memzero_explicit(buf_desc->cpu_addr, buf_desc->len);
>> +		WRITE_ONCE(buf_desc->used, 0);
>>  	}
>>  }
>
> It seems that the same issue exists in smc_buf_unuse(). Maybe it also needs to be fixed?
>
>
> static void smc_buf_unuse(struct smc_connection *conn,
> 			  struct smc_link_group *lgr)
> {
> 	if (conn->sndbuf_desc) {
> 		if (!lgr->is_smcd && conn->sndbuf_desc->is_vm) {
> 			smcr_buf_unuse(conn->sndbuf_desc, false, lgr);
> 		} else {
> 			conn->sndbuf_desc->used = 0;
> 			memset(conn->sndbuf_desc->cpu_addr, 0,
> 			       conn->sndbuf_desc->len);
> 			^...................
> 		}
> 	}
> 	if (conn->rmb_desc) {
> 		if (!lgr->is_smcd) {
> 			smcr_buf_unuse(conn->rmb_desc, true, lgr);
> 		} else {
> 			conn->rmb_desc->used = 0;
> 			memset(conn->rmb_desc->cpu_addr, 0,
> 			       conn->rmb_desc->len +
> 			       sizeof(struct smcd_cdc_msg));
> 			^...................
> 		}
> 	}
> }
>
> Thanks,
> Wen Gu
Thread overview: 22+ messages
2022-08-26 9:51 [PATCH net-next v2 00/10] optimize the parallelism of SMC-R connections D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 01/10] net/smc: remove locks smc_client_lgr_pending and smc_server_lgr_pending D. Wythe
2022-08-29 14:48 ` Jan Karcher
2022-08-31 15:04 ` Jan Karcher
2022-09-02 11:25 ` D. Wythe
2022-09-07 8:10 ` Jan Karcher
2022-09-16 5:16 ` D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 02/10] net/smc: fix SMC_CLC_DECL_ERR_REGRMB without smc_server_lgr_pending D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 03/10] net/smc: allow confirm/delete rkey response deliver multiplex D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 04/10] net/smc: make SMC_LLC_FLOW_RKEY run concurrently D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 05/10] net/smc: llc_conf_mutex refactor, replace it with rw_semaphore D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 06/10] net/smc: use read semaphores to reduce unnecessary blocking in smc_buf_create() & smcr_buf_unuse() D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 07/10] net/smc: reduce unnecessary blocking in smcr_lgr_reg_rmbs() D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 08/10] net/smc: replace mutex rmbs_lock and sndbufs_lock with rw_semaphore D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 09/10] net/smc: Fix potential panic dues to unprotected smc_llc_srv_add_link() D. Wythe
2022-08-26 9:51 ` [PATCH net-next v2 10/10] net/smc: fix application data exception D. Wythe
2022-09-08 9:37 ` Wen Gu
2022-09-16 5:24 ` D. Wythe [this message]
2022-08-27 1:32 ` [PATCH net-next v2 00/10] optimize the parallelism of SMC-R connections Jakub Kicinski
2022-08-29 3:25 ` Tony Lu
2022-08-29 3:28 ` D. Wythe
2022-09-09 6:59 ` Jan Karcher