* [PATCH mptcp-next v4 0/4] MP_FAIL echo and retrans
@ 2022-03-10 8:09 Geliang Tang
From: Geliang Tang @ 2022-03-10 8:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang
v4:
- start and stop sk_timer instead of setting and clearing msk->flags.
- add a new patch.
v3:
- use msk->sk_timer and a msk->flags bit.
- use READ_ONCE/WRITE_ONCE for subflow->mp_fail_response_expect.
- update selftest.
v2:
- don't clear mp_fail_response_expect flag.
- add a helper mp_fail_response_expect_subflow to get the subflow,
instead of using msk->first.
- add locks as Mat suggested.
- add a selftest.
Closes: https://github.com/multipath-tcp/mptcp_net-next/issues/261
Geliang Tang (4):
mptcp: add MP_FAIL response support
mptcp: reset subflow when MP_FAIL doesn't respond
selftests: mptcp: check MP_FAIL response mibs
selftests: mptcp: print extra msg in chk_csum_nr
net/mptcp/pm.c | 17 ++++++-
net/mptcp/protocol.c | 44 +++++++++++++++++
net/mptcp/protocol.h | 2 +
net/mptcp/subflow.c | 11 +++++
.../testing/selftests/net/mptcp/mptcp_join.sh | 49 +++++++++++++++++--
5 files changed, 118 insertions(+), 5 deletions(-)
--
2.34.1
* [PATCH mptcp-next v4 1/4] mptcp: add MP_FAIL response support
@ 2022-03-10 8:09 ` Geliang Tang
From: Geliang Tang @ 2022-03-10 8:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang

This patch added a new struct member mp_fail_response_expect in struct
mptcp_subflow_context to support MP_FAIL responses. In the special case
of a single subflow with checksum errors and contiguous data, an
MP_FAIL is sent in response to another MP_FAIL.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/pm.c       | 10 +++++++++-
 net/mptcp/protocol.h |  1 +
 net/mptcp/subflow.c  |  2 ++
 3 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
index d0d31d5c198a..f5f4561f332a 100644
--- a/net/mptcp/pm.c
+++ b/net/mptcp/pm.c
@@ -279,8 +279,16 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
 
 	pr_debug("fail_seq=%llu", fail_seq);
 
-	if (!mptcp_has_another_subflow(sk) && READ_ONCE(msk->allow_infinite_fallback))
+	if (mptcp_has_another_subflow(sk) || !READ_ONCE(msk->allow_infinite_fallback))
+		return;
+
+	if (!READ_ONCE(subflow->mp_fail_response_expect)) {
+		pr_debug("send MP_FAIL response and infinite map");
+
+		subflow->send_mp_fail = 1;
+		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX);
 		subflow->send_infinite_map = 1;
+	}
 }
 
 /* path manager helpers */
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index c8bada4537e2..83f0205f0d95 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -456,6 +456,7 @@ struct mptcp_subflow_context {
 		stale : 1,	    /* unable to snd/rcv data, do not use for xmit */
 		local_id_valid : 1; /* local_id is correctly initialized */
 	enum mptcp_data_avail data_avail;
+	bool	mp_fail_response_expect;
 	u32	remote_nonce;
 	u64	thmac;
 	u32	local_nonce;
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index 30ffb00661bb..ca2352ad20d4 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -1217,6 +1217,8 @@ static bool subflow_check_data_avail(struct sock *ssk)
 		tcp_send_active_reset(ssk, GFP_ATOMIC);
 		while ((skb = skb_peek(&ssk->sk_receive_queue)))
 			sk_eat_skb(ssk, skb);
+	} else {
+		WRITE_ONCE(subflow->mp_fail_response_expect, true);
 	}
 	WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
 	return true;
-- 
2.34.1
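The MPTCP_MIB_MPFAILTX counter bumped above is what the selftests later in this series read back through nstat. A minimal sketch of that read-back idiom (grep the counter name, take field 2, default to 0) is below; the sample nstat lines are hand-written for illustration, not captured from a real run:

```shell
#!/bin/sh
# Sketch of the counter-extraction idiom used by mptcp_join.sh.
# sample_nstat_output stands in for `ip netns exec $ns nstat -as`.
sample_nstat_output() {
	cat <<'EOF'
MPTcpExtMPFailTx                1                  0.0
MPTcpExtMPFailRx                0                  0.0
EOF
}

get_counter() {
	local name=$1
	local count

	# Same grep | awk pattern as the selftests: field 2 is the value.
	count=$(sample_nstat_output | grep "$name" | awk '{print $2}')
	[ -z "$count" ] && count=0
	echo "$count"
}

echo "MPFailTx=$(get_counter MPTcpExtMPFailTx)"	# prints "MPFailTx=1"
echo "MPFailRx=$(get_counter MPTcpExtMPFailRx)"	# prints "MPFailRx=0"
```

A missing counter name falls through the `[ -z "$count" ]` guard and reports 0, which is why the selftests can compare the result numerically without extra checks.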
* [PATCH mptcp-next v4 2/4] mptcp: reset subflow when MP_FAIL doesn't respond
@ 2022-03-10 8:09 ` Geliang Tang
From: Geliang Tang @ 2022-03-10 8:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang

This patch added a new msk->flags bit MPTCP_FAIL_NO_RESPONSE, then reused
sk_timer to trigger a check if we have not received a response from the
peer after sending MP_FAIL. If the peer doesn't respond properly, reset
the subflow.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 net/mptcp/pm.c       |  7 +++++++
 net/mptcp/protocol.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 net/mptcp/protocol.h |  1 +
 net/mptcp/subflow.c  |  9 +++++++++
 4 files changed, 61 insertions(+)

diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
index f5f4561f332a..e3a541ffebbd 100644
--- a/net/mptcp/pm.c
+++ b/net/mptcp/pm.c
@@ -276,6 +276,7 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
 	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+	struct sock *s = (struct sock *)msk;
 
 	pr_debug("fail_seq=%llu", fail_seq);
 
@@ -288,6 +289,12 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
 		subflow->send_mp_fail = 1;
 		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX);
 		subflow->send_infinite_map = 1;
+	} else if (s && inet_sk_state_load(s) != TCP_CLOSE) {
+		pr_debug("MP_FAIL response received");
+
+		mptcp_data_lock(s);
+		sk_stop_timer(s, &s->sk_timer);
+		mptcp_data_unlock(s);
 	}
 }
 
diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
index 3cb975227d12..d1a62e2a1d29 100644
--- a/net/mptcp/protocol.c
+++ b/net/mptcp/protocol.c
@@ -2166,9 +2166,33 @@ static void mptcp_retransmit_timer(struct timer_list *t)
 	sock_put(sk);
 }
 
+static struct mptcp_subflow_context *
+mp_fail_response_expect_subflow(struct mptcp_sock *msk)
+{
+	struct mptcp_subflow_context *subflow, *ret = NULL;
+
+	mptcp_for_each_subflow(msk, subflow) {
+		if (READ_ONCE(subflow->mp_fail_response_expect)) {
+			ret = subflow;
+			break;
+		}
+	}
+
+	return ret;
+}
+
 static void mptcp_timeout_timer(struct timer_list *t)
 {
 	struct sock *sk = from_timer(sk, t, sk_timer);
+	struct mptcp_sock *msk = mptcp_sk(sk);
+	struct mptcp_subflow_context *subflow;
+
+	subflow = mp_fail_response_expect_subflow(msk);
+	if (subflow) {
+		bh_lock_sock(sk);
+		__set_bit(MPTCP_FAIL_NO_RESPONSE, &msk->flags);
+		bh_unlock_sock(sk);
+	}
 
 	mptcp_schedule_work(sk);
 	sock_put(sk);
@@ -2494,6 +2518,23 @@ static void __mptcp_retrans(struct sock *sk)
 	mptcp_reset_timer(sk);
 }
 
+static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
+{
+	struct mptcp_subflow_context *subflow;
+	struct sock *ssk;
+	bool slow;
+
+	subflow = mp_fail_response_expect_subflow(msk);
+	if (subflow) {
+		pr_debug("MP_FAIL doesn't respond, reset the subflow");
+
+		ssk = mptcp_subflow_tcp_sock(subflow);
+		slow = lock_sock_fast(ssk);
+		mptcp_subflow_reset(ssk);
+		unlock_sock_fast(ssk, slow);
+	}
+}
+
 static void mptcp_worker(struct work_struct *work)
 {
 	struct mptcp_sock *msk = container_of(work, struct mptcp_sock, work);
@@ -2534,6 +2575,9 @@ static void mptcp_worker(struct work_struct *work)
 	if (test_and_clear_bit(MPTCP_WORK_RTX, &msk->flags))
 		__mptcp_retrans(sk);
 
+	if (test_and_clear_bit(MPTCP_FAIL_NO_RESPONSE, &msk->flags))
+		mptcp_mp_fail_no_response(msk);
+
 unlock:
 	release_sock(sk);
 	sock_put(sk);
diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
index 83f0205f0d95..bf58d3c886f5 100644
--- a/net/mptcp/protocol.h
+++ b/net/mptcp/protocol.h
@@ -116,6 +116,7 @@
 #define MPTCP_WORK_EOF		3
 #define MPTCP_FALLBACK_DONE	4
 #define MPTCP_WORK_CLOSE_SUBFLOW 5
+#define MPTCP_FAIL_NO_RESPONSE	6
 
 /* MPTCP socket release cb flags */
 #define MPTCP_PUSH_PENDING	1
diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
index ca2352ad20d4..d38b8089988e 100644
--- a/net/mptcp/subflow.c
+++ b/net/mptcp/subflow.c
@@ -968,6 +968,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
 {
 	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
 	bool csum_reqd = READ_ONCE(msk->csum_enabled);
+	struct sock *sk = (struct sock *)msk;
 	struct mptcp_ext *mpext;
 	struct sk_buff *skb;
 	u16 data_len;
@@ -1009,6 +1010,11 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
 		pr_debug("infinite mapping received");
 		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
 		subflow->map_data_len = 0;
+		if (sk && inet_sk_state_load(sk) != TCP_CLOSE) {
+			mptcp_data_lock(sk);
+			sk_stop_timer(sk, &sk->sk_timer);
+			mptcp_data_unlock(sk);
+		}
 		return MAPPING_INVALID;
 	}
 
@@ -1219,6 +1225,9 @@ static bool subflow_check_data_avail(struct sock *ssk)
 			sk_eat_skb(ssk, skb);
 	} else {
 		WRITE_ONCE(subflow->mp_fail_response_expect, true);
+		sk_reset_timer((struct sock *)msk,
+			       &((struct sock *)msk)->sk_timer,
+			       jiffies + TCP_RTO_MAX);
 	}
 	WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
 	return true;
-- 
2.34.1
* Re: [PATCH mptcp-next v4 2/4] mptcp: reset subflow when MP_FAIL doesn't respond
@ 2022-03-14 23:28 ` Mat Martineau
From: Mat Martineau @ 2022-03-14 23:28 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp

On Thu, 10 Mar 2022, Geliang Tang wrote:

> This patch added a new msk->flags bit MPTCP_FAIL_NO_RESPONSE, then reused
> sk_timer to trigger a check if we have not received a response from the
> peer after sending MP_FAIL. If the peer doesn't respond properly, reset
> the subflow.
>
> Signed-off-by: Geliang Tang <geliang.tang@suse.com>
> ---
>  net/mptcp/pm.c       |  7 +++++++
>  net/mptcp/protocol.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  net/mptcp/protocol.h |  1 +
>  net/mptcp/subflow.c  |  9 +++++++++
>  4 files changed, 61 insertions(+)
>
> diff --git a/net/mptcp/pm.c b/net/mptcp/pm.c
> index f5f4561f332a..e3a541ffebbd 100644
> --- a/net/mptcp/pm.c
> +++ b/net/mptcp/pm.c
> @@ -276,6 +276,7 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
>  {
>  	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);
>  	struct mptcp_sock *msk = mptcp_sk(subflow->conn);
> +	struct sock *s = (struct sock *)msk;
>
>  	pr_debug("fail_seq=%llu", fail_seq);
>
> @@ -288,6 +289,12 @@ void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq)
>  		subflow->send_mp_fail = 1;
>  		MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX);
>  		subflow->send_infinite_map = 1;
> +	} else if (s && inet_sk_state_load(s) != TCP_CLOSE) {
> +		pr_debug("MP_FAIL response received");
> +
> +		mptcp_data_lock(s);

I think the TCP_CLOSE state should be checked again after acquiring the
lock to be sure the state hasn't changed - another thread could have
acquired the lock and changed the socket state.

> +		sk_stop_timer(s, &s->sk_timer);
> +		mptcp_data_unlock(s);
>  	}
>  }
>
> diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
> index 3cb975227d12..d1a62e2a1d29 100644
> --- a/net/mptcp/protocol.c
> +++ b/net/mptcp/protocol.c
> @@ -2166,9 +2166,33 @@ static void mptcp_retransmit_timer(struct timer_list *t)
>  	sock_put(sk);
>  }
>
> +static struct mptcp_subflow_context *
> +mp_fail_response_expect_subflow(struct mptcp_sock *msk)
> +{
> +	struct mptcp_subflow_context *subflow, *ret = NULL;
> +
> +	mptcp_for_each_subflow(msk, subflow) {
> +		if (READ_ONCE(subflow->mp_fail_response_expect)) {
> +			ret = subflow;
> +			break;
> +		}
> +	}
> +
> +	return ret;
> +}
> +
>  static void mptcp_timeout_timer(struct timer_list *t)
>  {
>  	struct sock *sk = from_timer(sk, t, sk_timer);
> +	struct mptcp_sock *msk = mptcp_sk(sk);
> +	struct mptcp_subflow_context *subflow;
> +
> +	subflow = mp_fail_response_expect_subflow(msk);

The msk needs to be locked before iterating over the subflows.

> +	if (subflow) {
> +		bh_lock_sock(sk);
> +		__set_bit(MPTCP_FAIL_NO_RESPONSE, &msk->flags);
> +		bh_unlock_sock(sk);
> +	}
>
>  	mptcp_schedule_work(sk);
>  	sock_put(sk);
> @@ -2494,6 +2518,23 @@ static void __mptcp_retrans(struct sock *sk)
>  	mptcp_reset_timer(sk);
>  }
>
> +static void mptcp_mp_fail_no_response(struct mptcp_sock *msk)
> +{
> +	struct mptcp_subflow_context *subflow;
> +	struct sock *ssk;
> +	bool slow;
> +
> +	subflow = mp_fail_response_expect_subflow(msk);
> +	if (subflow) {
> +		pr_debug("MP_FAIL doesn't respond, reset the subflow");
> +
> +		ssk = mptcp_subflow_tcp_sock(subflow);
> +		slow = lock_sock_fast(ssk);
> +		mptcp_subflow_reset(ssk);
> +		unlock_sock_fast(ssk, slow);
> +	}
> +}
> +
>  static void mptcp_worker(struct work_struct *work)
>  {
>  	struct mptcp_sock *msk = container_of(work, struct mptcp_sock, work);
> @@ -2534,6 +2575,9 @@ static void mptcp_worker(struct work_struct *work)
>  	if (test_and_clear_bit(MPTCP_WORK_RTX, &msk->flags))
>  		__mptcp_retrans(sk);
>
> +	if (test_and_clear_bit(MPTCP_FAIL_NO_RESPONSE, &msk->flags))
> +		mptcp_mp_fail_no_response(msk);
> +
>  unlock:
>  	release_sock(sk);
>  	sock_put(sk);
> diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
> index 83f0205f0d95..bf58d3c886f5 100644
> --- a/net/mptcp/protocol.h
> +++ b/net/mptcp/protocol.h
> @@ -116,6 +116,7 @@
>  #define MPTCP_WORK_EOF		3
>  #define MPTCP_FALLBACK_DONE	4
>  #define MPTCP_WORK_CLOSE_SUBFLOW 5
> +#define MPTCP_FAIL_NO_RESPONSE	6
>
>  /* MPTCP socket release cb flags */
>  #define MPTCP_PUSH_PENDING	1
> diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
> index ca2352ad20d4..d38b8089988e 100644
> --- a/net/mptcp/subflow.c
> +++ b/net/mptcp/subflow.c
> @@ -968,6 +968,7 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
>  {
>  	struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
>  	bool csum_reqd = READ_ONCE(msk->csum_enabled);
> +	struct sock *sk = (struct sock *)msk;
>  	struct mptcp_ext *mpext;
>  	struct sk_buff *skb;
>  	u16 data_len;
> @@ -1009,6 +1010,11 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
>  		pr_debug("infinite mapping received");
>  		MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
>  		subflow->map_data_len = 0;
> +		if (sk && inet_sk_state_load(sk) != TCP_CLOSE) {
> +			mptcp_data_lock(sk);

Similar to above, best to re-check for TCP_CLOSE.

> +			sk_stop_timer(sk, &sk->sk_timer);
> +			mptcp_data_unlock(sk);
> +		}
>  		return MAPPING_INVALID;
>  	}
>
> @@ -1219,6 +1225,9 @@ static bool subflow_check_data_avail(struct sock *ssk)
>  			sk_eat_skb(ssk, skb);
>  	} else {
>  		WRITE_ONCE(subflow->mp_fail_response_expect, true);
> +		sk_reset_timer((struct sock *)msk,
> +			       &((struct sock *)msk)->sk_timer,
> +			       jiffies + TCP_RTO_MAX);

Need to use mptcp_data_lock() here too.

Can you also double-check other uses of sk_timer to make sure it's
protected by the lock everywhere? The existing calls for using the timer
at msk close might not all do this locking.

>  	}
>  	WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA);
>  	return true;
> --
> 2.34.1

--
Mat Martineau
Intel
* [PATCH mptcp-next v4 3/4] selftests: mptcp: check MP_FAIL response mibs
@ 2022-03-10 8:09 ` Geliang Tang
From: Geliang Tang @ 2022-03-10 8:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang

This patch extended chk_fail_nr to check the MP_FAIL response mibs.
Added a new argument invert for chk_fail_nr to allow it to check the
MP_FAIL TX and RX mibs from the opposite direction.

When the infinite mapping is received before the MP_FAIL response, the
response will be lost. A '-' can be added to fail_tx or fail_rx to
indicate that the MP_FAIL response TX or RX may be lost when doing the
checks.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 .../testing/selftests/net/mptcp/mptcp_join.sh | 38 +++++++++++++++++--
 1 file changed, 35 insertions(+), 3 deletions(-)

diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
index 959e46122a84..2b77576d773c 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -1055,13 +1055,38 @@ chk_fail_nr()
 {
 	local fail_tx=$1
 	local fail_rx=$2
+	local ns_invert=${3:-""}
 	local count
 	local dump_stats
+	local ns_tx=$ns1
+	local ns_rx=$ns2
+	local extra_msg=""
+	local allow_tx_lost=0
+	local allow_rx_lost=0
+
+	if [[ $ns_invert = "invert" ]]; then
+		ns_tx=$ns2
+		ns_rx=$ns1
+		extra_msg=" invert"
+	fi
+
+	if [[ "${fail_tx}" = "-"* ]]; then
+		allow_tx_lost=1
+		fail_tx=${fail_tx:1}
+	fi
+	if [[ "${fail_rx}" = "-"* ]]; then
+		allow_rx_lost=1
+		fail_rx=${fail_rx:1}
+	fi
 
 	printf "%-${nr_blank}s %s" " " "ftx"
-	count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtMPFailTx | awk '{print $2}')
+	count=$(ip netns exec $ns_tx nstat -as | grep MPTcpExtMPFailTx | awk '{print $2}')
 	[ -z "$count" ] && count=0
 	if [ "$count" != "$fail_tx" ]; then
+		extra_msg="$extra_msg,tx=$count"
+	fi
+	if { [ "$count" != "$fail_tx" ] && [ $allow_tx_lost -eq 0 ]; } ||
+	   { [ "$count" -gt "$fail_tx" ] && [ $allow_tx_lost -eq 1 ]; }; then
 		echo "[fail] got $count MP_FAIL[s] TX expected $fail_tx"
 		fail_test
 		dump_stats=1
@@ -1070,17 +1095,23 @@ chk_fail_nr()
 	fi
 
 	echo -n " - failrx"
-	count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtMPFailRx | awk '{print $2}')
+	count=$(ip netns exec $ns_rx nstat -as | grep MPTcpExtMPFailRx | awk '{print $2}')
 	[ -z "$count" ] && count=0
 	if [ "$count" != "$fail_rx" ]; then
+		extra_msg="$extra_msg,rx=$count"
+	fi
+	if { [ "$count" != "$fail_rx" ] && [ $allow_rx_lost -eq 0 ]; } ||
+	   { [ "$count" -gt "$fail_rx" ] && [ $allow_rx_lost -eq 1 ]; }; then
 		echo "[fail] got $count MP_FAIL[s] RX expected $fail_rx"
 		fail_test
 		dump_stats=1
 	else
-		echo "[ ok ]"
+		echo -n "[ ok ]"
 	fi
 
 	[ "${dump_stats}" = 1 ] && dump_stats
+
+	echo "$extra_msg"
 }
 
 chk_fclose_nr()
@@ -2672,6 +2703,7 @@ fail_tests()
 	if reset_with_fail "Infinite map" 1; then
 		run_tests $ns1 $ns2 10.0.1.1 128
 		chk_join_nr 0 0 0 +1 +0 1 0 1 "$(pedit_action_pkts)"
+		chk_fail_nr 1 -1 invert
 	fi
 }
-- 
2.34.1
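The '-' prefix convention this patch adds to chk_fail_nr can be exercised on its own. The sketch below isolates that logic in a standalone function (check_counter is a made-up name, and POSIX `${expect#-}` stands in for the bash `[[ ... = "-"* ]]` test used in the selftest):

```shell
#!/bin/sh
# Sketch of the "-" prefix convention from chk_fail_nr: a leading "-"
# on an expected counter means "up to this many, the event may be
# lost", so the check relaxes from "count == expected" to
# "count <= expected".
check_counter() {
	local count=$1
	local expect=$2
	local allow_lost=0

	# Strip a leading "-" and remember that losses are allowed.
	if [ "${expect#-}" != "$expect" ]; then
		allow_lost=1
		expect=${expect#-}
	fi

	if { [ "$count" != "$expect" ] && [ $allow_lost -eq 0 ]; } ||
	   { [ "$count" -gt "$expect" ] && [ $allow_lost -eq 1 ]; }; then
		echo "fail"
	else
		echo "ok"
	fi
}

check_counter 1 1	# prints "ok": exact match
check_counter 0 -1	# prints "ok": the MP_FAIL may have been lost
check_counter 2 -1	# prints "fail": more events than expected
```

This is why `chk_fail_nr 1 -1 invert` in the "Infinite map" test accepts either 0 or 1 MP_FAIL RX events: the response can race with the infinite mapping and be dropped.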
* [PATCH mptcp-next v4 4/4] selftests: mptcp: print extra msg in chk_csum_nr
@ 2022-03-10 8:09 ` Geliang Tang
From: Geliang Tang @ 2022-03-10 8:09 UTC (permalink / raw)
To: mptcp; +Cc: Geliang Tang

When multiple checksum errors occur in chk_csum_nr(), print the number
of errors as an extra message.

Signed-off-by: Geliang Tang <geliang.tang@suse.com>
---
 tools/testing/selftests/net/mptcp/mptcp_join.sh | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/tools/testing/selftests/net/mptcp/mptcp_join.sh b/tools/testing/selftests/net/mptcp/mptcp_join.sh
index 2b77576d773c..375b087edc7b 100755
--- a/tools/testing/selftests/net/mptcp/mptcp_join.sh
+++ b/tools/testing/selftests/net/mptcp/mptcp_join.sh
@@ -1014,6 +1014,7 @@ chk_csum_nr()
 	local csum_ns2=${2:-0}
 	local count
 	local dump_stats
+	local extra_msg=""
 	local allow_multi_errors_ns1=0
 	local allow_multi_errors_ns2=0
 
@@ -1029,6 +1030,9 @@ chk_csum_nr()
 	printf "%-${nr_blank}s %s" " " "sum"
 	count=$(ip netns exec $ns1 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}')
 	[ -z "$count" ] && count=0
+	if [ "$count" != "$csum_ns1" ]; then
+		extra_msg="$extra_msg ns1=$count"
+	fi
 	if { [ "$count" != $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 0 ]; } ||
 	   { [ "$count" -lt $csum_ns1 ] && [ $allow_multi_errors_ns1 -eq 1 ]; }; then
 		echo "[fail] got $count data checksum error[s] expected $csum_ns1"
@@ -1040,15 +1044,20 @@ chk_csum_nr()
 	echo -n " - csum "
 	count=$(ip netns exec $ns2 nstat -as | grep MPTcpExtDataCsumErr | awk '{print $2}')
 	[ -z "$count" ] && count=0
+	if [ "$count" != "$csum_ns2" ]; then
+		extra_msg="$extra_msg ns2=$count"
+	fi
 	if { [ "$count" != $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 0 ]; } ||
 	   { [ "$count" -lt $csum_ns2 ] && [ $allow_multi_errors_ns2 -eq 1 ]; }; then
 		echo "[fail] got $count data checksum error[s] expected $csum_ns2"
 		fail_test
 		dump_stats=1
 	else
-		echo "[ ok ]"
+		echo -n "[ ok ]"
 	fi
 
 	[ "${dump_stats}" = 1 ] && dump_stats
+
+	echo "$extra_msg"
 }
 
 chk_fail_nr()
-- 
2.34.1
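The extra_msg pattern this patch introduces can be sketched as a standalone function. Here chk_counters is a made-up name and its last two arguments stand in for the nstat lookups, so the reporting logic can be run without network namespaces:

```shell
#!/bin/sh
# Sketch of the extra_msg pattern from chk_csum_nr: instead of only
# printing [ ok ] or [fail], accumulate the observed counters that
# differ from the expectation and append them after the result line.
chk_counters() {
	local csum_ns1=$1
	local csum_ns2=$2
	local count_ns1=$3	# stands in for the ns1 nstat read
	local count_ns2=$4	# stands in for the ns2 nstat read
	local extra_msg=""

	if [ "$count_ns1" != "$csum_ns1" ]; then
		extra_msg="$extra_msg ns1=$count_ns1"
	fi
	if [ "$count_ns2" != "$csum_ns2" ]; then
		extra_msg="$extra_msg ns2=$count_ns2"
	fi

	# The selftest uses `echo -n "[ ok ]"` plus a final
	# `echo "$extra_msg"`; a single echo is equivalent here.
	echo "[ ok ]$extra_msg"
}

chk_counters 0 0 0 0	# prints "[ ok ]"
chk_counters 0 0 2 0	# prints "[ ok ] ns1=2"
```

The point of the change is debuggability: a test marked "allow multiple errors" still passes, but the actual counts are visible in the log.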
* Re: selftests: mptcp: print extra msg in chk_csum_nr: Tests Results
@ 2022-03-10 9:58 ` MPTCP CI
From: MPTCP CI @ 2022-03-10 9:58 UTC (permalink / raw)
To: Geliang Tang; +Cc: mptcp

Hi Geliang,

Thank you for your modifications, that's great!

Our CI did some validations and here is its report:

- KVM Validation: normal:
  - Success! ✅:
  - Task: https://cirrus-ci.com/task/5981420215599104
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5981420215599104/summary/summary.txt

- KVM Validation: debug:
  - Unstable: 2 failed test(s): selftest_diag selftest_mptcp_join 🔴:
  - Task: https://cirrus-ci.com/task/5418470262177792
  - Summary: https://api.cirrus-ci.com/v1/artifact/task/5418470262177792/summary/summary.txt

Initiator: Patchew Applier
Commits: https://github.com/multipath-tcp/mptcp_net-next/commits/0f7e3bc359ca

Please note that despite all the efforts that have been already done to
have a stable tests suite when executed on a public CI like here, it is
possible some reported issues are not due to your modifications. Still,
do not hesitate to help us improve that ;-)

Cheers,
MPTCP GH Action bot
Bot operated by Matthieu Baerts (Tessares)