From mboxrd@z Thu Jan  1 00:00:00 1970
From: Greg Kroah-Hartman
To: stable@vger.kernel.org
Cc: Greg Kroah-Hartman, patches@lists.linux.dev, Eric Dumazet, Jakub Kicinski, Sasha Levin
Subject: [PATCH 6.6 202/744] tcp: avoid premature drops in tcp_add_backlog()
Date: Thu, 6 Jun 2024 15:57:54 +0200
Message-ID: <20240606131738.891958323@linuxfoundation.org>
In-Reply-To: <20240606131732.440653204@linuxfoundation.org>
References: <20240606131732.440653204@linuxfoundation.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

6.6-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Eric Dumazet

[ Upstream commit ec00ed472bdb7d0af840da68c8c11bff9f4d9caa ]

While testing TCP performance with latest trees, I saw suspect
SOCKET_BACKLOG drops.

tcp_add_backlog() computes its limit with:

    limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
    limit += 64 * 1024;

This does not take into account that sk->sk_backlog.len is reset only
at the very end of __release_sock(). Both sk->sk_backlog.len and
sk->sk_rmem_alloc could reach sk_rcvbuf in normal conditions.

We should double the sk->sk_rcvbuf contribution in the formula to absorb
bubbles in the backlog, which happen more often for very fast flows.

This change maintains decent protection against abuses.
Fixes: c377411f2494 ("net: sk_add_backlog() take rmem_alloc into account")
Signed-off-by: Eric Dumazet
Link: https://lore.kernel.org/r/20240423125620.3309458-1-edumazet@google.com
Signed-off-by: Jakub Kicinski
Signed-off-by: Sasha Levin
---
 net/ipv4/tcp_ipv4.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index c464ced7137ee..7c2ca4df0daa3 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -1822,7 +1822,7 @@ int tcp_v4_early_demux(struct sk_buff *skb)
 bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
		     enum skb_drop_reason *reason)
 {
-	u32 limit, tail_gso_size, tail_gso_segs;
+	u32 tail_gso_size, tail_gso_segs;
 	struct skb_shared_info *shinfo;
 	const struct tcphdr *th;
 	struct tcphdr *thtail;
@@ -1831,6 +1831,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
 	bool fragstolen;
 	u32 gso_segs;
 	u32 gso_size;
+	u64 limit;
 	int delta;
 
 	/* In case all data was pulled from skb frags (in __pskb_pull_tail()),
@@ -1928,7 +1929,13 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
 	__skb_push(skb, hdrlen);
 
 no_coalesce:
-	limit = (u32)READ_ONCE(sk->sk_rcvbuf) + (u32)(READ_ONCE(sk->sk_sndbuf) >> 1);
+	/* sk->sk_backlog.len is reset only at the end of __release_sock().
+	 * Both sk->sk_backlog.len and sk->sk_rmem_alloc could reach
+	 * sk_rcvbuf in normal conditions.
+	 */
+	limit = ((u64)READ_ONCE(sk->sk_rcvbuf)) << 1;
+
+	limit += ((u32)READ_ONCE(sk->sk_sndbuf)) >> 1;
 
 	/* Only socket owner can try to collapse/prune rx queues
 	 * to reduce memory overhead, so add a little headroom here.
@@ -1936,6 +1943,8 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
 	 */
 	limit += 64 * 1024;
 
+	limit = min_t(u64, limit, UINT_MAX);
+
 	if (unlikely(sk_add_backlog(sk, skb, limit))) {
 		bh_unlock_sock(sk);
 		*reason = SKB_DROP_REASON_SOCKET_BACKLOG;
-- 
2.43.0
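
[ A minimal userspace sketch of the limit arithmetic in the patch above,
  added here for illustration only. The helper name compute_backlog_limit
  and its standalone form are hypothetical; in the kernel the inputs come
  from READ_ONCE(sk->sk_rcvbuf) / READ_ONCE(sk->sk_sndbuf) and the result
  feeds sk_add_backlog(). ]

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

static uint32_t compute_backlog_limit(int rcvbuf, int sndbuf)
{
	/* Double the rcvbuf contribution: sk->sk_backlog.len is reset only
	 * at the end of __release_sock(), so both sk_backlog.len and
	 * sk_rmem_alloc may each reach sk_rcvbuf before it drains.
	 */
	uint64_t limit = ((uint64_t)(uint32_t)rcvbuf) << 1;

	limit += ((uint32_t)sndbuf) >> 1;

	/* Headroom so the socket owner can collapse/prune rx queues. */
	limit += 64 * 1024;

	/* The backlog limit is a 32-bit quantity, so clamp the 64-bit sum
	 * instead of letting it wrap.
	 */
	if (limit > UINT32_MAX)
		limit = UINT32_MAX;

	return (uint32_t)limit;
}

int main(void)
{
	/* Buffers large enough that a 32-bit sum would have wrapped. */
	printf("%" PRIu32 "\n", compute_backlog_limit(INT32_MAX, INT32_MAX));
	return 0;
}

The 64-bit intermediate and the final clamp mirror the patch: with the
rcvbuf contribution doubled, very large buffer sizes could otherwise push
the sum past UINT_MAX, and sk_add_backlog() takes a 32-bit limit.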