From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Eric Dumazet <edumazet@google.com>,
	"David S. Miller" <davem@davemloft.net>,
	Sasha Levin <sashal@kernel.org>,
	kuba@kernel.org, pabeni@redhat.com, netdev@vger.kernel.org
Subject: [PATCH AUTOSEL 5.10 06/11] net: annotate data-races around sk->sk_tx_queue_mapping
Date: Tue, 7 Nov 2023 07:12:21 -0500
Message-ID: <20231107121230.3758617-6-sashal@kernel.org>
In-Reply-To: <20231107121230.3758617-1-sashal@kernel.org>
References: <20231107121230.3758617-1-sashal@kernel.org>
X-stable: review
X-stable-base: Linux 5.10.199

From: Eric Dumazet <edumazet@google.com>

[ Upstream commit 0bb4d124d34044179b42a769a0c76f389ae973b6 ]

This field can be read or written without the socket lock being held.

Add annotations to avoid load-store tearing.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 include/net/sock.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index 234196d904238..9d5e603a10f5a 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1853,21 +1853,33 @@ static inline void sk_tx_queue_set(struct sock *sk, int tx_queue)
 	/* sk_tx_queue_mapping accept only upto a 16-bit value */
 	if (WARN_ON_ONCE((unsigned short)tx_queue >= USHRT_MAX))
 		return;
-	sk->sk_tx_queue_mapping = tx_queue;
+	/* Paired with READ_ONCE() in sk_tx_queue_get() and
+	 * other WRITE_ONCE() because socket lock might be not held.
+	 */
+	WRITE_ONCE(sk->sk_tx_queue_mapping, tx_queue);
 }
 
 #define NO_QUEUE_MAPPING	USHRT_MAX
 
 static inline void sk_tx_queue_clear(struct sock *sk)
 {
-	sk->sk_tx_queue_mapping = NO_QUEUE_MAPPING;
+	/* Paired with READ_ONCE() in sk_tx_queue_get() and
+	 * other WRITE_ONCE() because socket lock might be not held.
+	 */
+	WRITE_ONCE(sk->sk_tx_queue_mapping, NO_QUEUE_MAPPING);
 }
 
 static inline int sk_tx_queue_get(const struct sock *sk)
 {
-	if (sk && sk->sk_tx_queue_mapping != NO_QUEUE_MAPPING)
-		return sk->sk_tx_queue_mapping;
+	if (sk) {
+		/* Paired with WRITE_ONCE() in sk_tx_queue_clear()
+		 * and sk_tx_queue_set().
+		 */
+		int val = READ_ONCE(sk->sk_tx_queue_mapping);
 
+		if (val != NO_QUEUE_MAPPING)
+			return val;
+	}
 	return -1;
 }
 
-- 
2.42.0
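
For readers outside the kernel tree, below is a minimal, self-contained userspace C sketch of the pattern this patch applies: mark every lockless access to sk_tx_queue_mapping so the compiler must emit a single whole-width load or store, and read the field once into a local before testing and returning it. This is an illustrative approximation only, not the kernel code: READ_ONCE()/WRITE_ONCE() here are simplified volatile-based stand-ins for the kernel macros, and struct fake_sock is an invented placeholder for struct sock so the example compiles on its own (GNU C __typeof__ assumed, i.e. gcc or clang).

/*
 * Illustrative userspace sketch only -- not kernel code.
 *
 * READ_ONCE()/WRITE_ONCE() below are simplified stand-ins for the
 * kernel macros: a volatile access forces the compiler to emit one
 * whole-width load or store instead of tearing, fusing or re-fetching
 * it.  struct fake_sock is an invented placeholder for struct sock.
 */
#include <limits.h>
#include <stdio.h>

#define READ_ONCE(x)		(*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, val)	(*(volatile __typeof__(x) *)&(x) = (val))

#define NO_QUEUE_MAPPING	USHRT_MAX

struct fake_sock {
	unsigned short sk_tx_queue_mapping;
};

static void sk_tx_queue_set(struct fake_sock *sk, int tx_queue)
{
	/* Mirror the kernel's range check: only 16-bit values fit. */
	if ((unsigned short)tx_queue >= USHRT_MAX)
		return;
	/* Marked store: readers may run concurrently without a lock. */
	WRITE_ONCE(sk->sk_tx_queue_mapping, tx_queue);
}

static void sk_tx_queue_clear(struct fake_sock *sk)
{
	WRITE_ONCE(sk->sk_tx_queue_mapping, NO_QUEUE_MAPPING);
}

static int sk_tx_queue_get(const struct fake_sock *sk)
{
	if (sk) {
		/* Load the field exactly once into a local, so the
		 * NO_QUEUE_MAPPING test and the returned value cannot
		 * observe two different concurrent writes.
		 */
		int val = READ_ONCE(sk->sk_tx_queue_mapping);

		if (val != NO_QUEUE_MAPPING)
			return val;
	}
	return -1;
}

int main(void)
{
	struct fake_sock sk;

	sk_tx_queue_clear(&sk);
	printf("after clear: %d\n", sk_tx_queue_get(&sk));	/* -1 */
	sk_tx_queue_set(&sk, 3);
	printf("after set:   %d\n", sk_tx_queue_get(&sk));	/* 3 */
	return 0;
}

Built with something like "gcc -O2 example.c" (the file name is arbitrary), this prints -1 and then 3, exactly as the unannotated version would. That is the point of the patch: the annotations constrain the compiler on concurrent paths; they do not change single-threaded semantics.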