From mboxrd@z Thu Jan 1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Eric Dumazet,
	Jakub Kicinski, Sasha Levin
Subject: [PATCH 5.4 047/202] tcp: avoid premature drops in tcp_add_backlog()
Date: Thu, 13 Jun 2024 13:32:25 +0200
Message-ID: <20240613113229.588647299@linuxfoundation.org>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240613113227.759341286@linuxfoundation.org>
References: <20240613113227.759341286@linuxfoundation.org>
User-Agent: quilt/0.67
X-stable: review
X-Patchwork-Hint: ignore
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

5.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Eric Dumazet

[ Upstream commit ec00ed472bdb7d0af840da68c8c11bff9f4d9caa ]

While testing TCP performance with the latest trees, I saw suspicious
SOCKET_BACKLOG drops.

tcp_add_backlog() computes its limit with:

    limit = (u32)READ_ONCE(sk->sk_rcvbuf) +
            (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
    limit += 64 * 1024;

This does not take into account that sk->sk_backlog.len is reset only
at the very end of __release_sock(): both sk->sk_backlog.len and
sk->sk_rmem_alloc can reach sk_rcvbuf under normal conditions.

We should double the sk->sk_rcvbuf contribution in the formula to
absorb these bubbles in the backlog, which happen more often for very
fast flows.

This change maintains decent protection against abuse.
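
To make the arithmetic concrete, here is a minimal standalone userspace
sketch of the old and new limit computations. The 4 MB buffer sizes are
illustrative assumptions, not values taken from this patch; real sockets
read sk_rcvbuf and sk_sndbuf at run time.

  /*
   * Standalone sketch (not kernel code) of tcp_add_backlog()'s limit,
   * before and after this patch. Buffer sizes are assumed for
   * illustration only.
   */
  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint32_t sk_rcvbuf = 4u << 20;  /* assumed 4 MB receive buffer */
          uint32_t sk_sndbuf = 4u << 20;  /* assumed 4 MB send buffer */

          /* Old formula: a single sk_rcvbuf contribution. */
          uint64_t old_limit = (uint64_t)sk_rcvbuf + (sk_sndbuf >> 1)
                               + 64 * 1024;

          /*
           * Worst case while the socket owner runs __release_sock():
           * sk_backlog.len and sk_rmem_alloc can each sit near sk_rcvbuf.
           */
          uint64_t pending = 2ull * sk_rcvbuf;

          /*
           * New formula: double the sk_rcvbuf contribution, compute in
           * 64 bits to avoid overflow, then clamp as the patch does with
           * min_t(u64, limit, UINT_MAX).
           */
          uint64_t new_limit = ((uint64_t)sk_rcvbuf << 1) + (sk_sndbuf >> 1)
                               + 64 * 1024;
          if (new_limit > UINT32_MAX)
                  new_limit = UINT32_MAX;

          printf("old limit: %llu\n", (unsigned long long)old_limit);  /*  6356992 */
          printf("pending  : %llu\n", (unsigned long long)pending);    /*  8388608 */
          printf("new limit: %llu\n", (unsigned long long)new_limit);  /* 10551296 */
          return 0;
  }

With these assumed buffers, nearly 8 MB can legitimately be pending,
well above the old ~6.1 MB limit (hence the premature drops), but
comfortably within the new ~10.1 MB limit.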
Fixes: c377411f2494 ("net: sk_add_backlog() take rmem_alloc into account")
Signed-off-by: Eric Dumazet
Link: https://lore.kernel.org/r/20240423125620.3309458-1-edumazet@google.com
Signed-off-by: Jakub Kicinski
Signed-off-by: Sasha Levin
---
 net/ipv4/tcp_ipv4.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index d29d4b8192643..c18ad443ca7db 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1678,7 +1678,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
 
 bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
 {
-	u32 limit, tail_gso_size, tail_gso_segs;
+	u32 tail_gso_size, tail_gso_segs;
 	struct skb_shared_info *shinfo;
 	const struct tcphdr *th;
 	struct tcphdr *thtail;
@@ -1687,6 +1687,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
 	bool fragstolen;
 	u32 gso_segs;
 	u32 gso_size;
+	u64 limit;
 	int delta;
 
 	/* In case all data was pulled from skb frags (in __pskb_pull_tail()),
@@ -1781,7 +1782,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
 	__skb_push(skb, hdrlen);
 
 no_coalesce:
-	limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
+	/* sk->sk_backlog.len is reset only at the end of __release_sock().
+	 * Both sk->sk_backlog.len and sk->sk_rmem_alloc could reach
+	 * sk_rcvbuf in normal conditions.
+	 */
+	limit = ((u64)READ_ONCE(sk->sk_rcvbuf)) << 1;
+
+	limit += ((u32)READ_ONCE(sk->sk_sndbuf)) >> 1;
 
 	/* Only socket owner can try to collapse/prune rx queues
 	 * to reduce memory overhead, so add a little headroom here.
@@ -1789,6 +1796,8 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb)
 	 */
 	limit += 64 * 1024;
 
+	limit = min_t(u64, limit, UINT_MAX);
+
 	if (unlikely(sk_add_backlog(sk, skb, limit))) {
 		bh_unlock_sock(sk);
 		__NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPBACKLOGDROP);
-- 
2.43.0