From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman,
	stable@vger.kernel.org,
	Jiri Pirko,
	Jiri Benc,
	"David S. Miller"
Subject: [PATCH 3.18 024/183] team: avoid possible underflow of count_pending value for notify_peers and mcast_rejoin
Date: Sun, 25 Jan 2015 10:05:46 -0800
Message-Id: <20150125180811.220058099@linuxfoundation.org>
In-Reply-To: <20150125180810.160428929@linuxfoundation.org>
References: <20150125180810.160428929@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-15
Sender: linux-kernel-owner@vger.kernel.org
List-ID:

3.18-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Jiri Pirko

[ Upstream commit b0d11b42785b70e19bc6a3122eead3f7969a7589 ]

This patch fixes a race condition that can set count_pending to -1,
which results in an unwanted large burst of ARP messages (in the
"notify peers" case).

Consider the following scenario:

count_pending == 2
   CPU0                                        CPU1
                                      team_notify_peers_work
                                        atomic_dec_and_test (dec count_pending to 1)
                                        schedule_delayed_work
 team_notify_peers
   atomic_add (adding 1 to count_pending)
                                      team_notify_peers_work
                                        atomic_dec_and_test (dec count_pending to 1)
                                        schedule_delayed_work
                                      team_notify_peers_work
                                        atomic_dec_and_test (dec count_pending to 0)
                                        schedule_delayed_work
                                      team_notify_peers_work
                                        atomic_dec_and_test (dec count_pending to -1)

Fix this race by using atomic_dec_if_positive(), which prevents
count_pending from ever dropping below 0.

Fixes: fc423ff00df3a1955441 ("team: add peer notification")
Fixes: 492b200efdd20b8fcfd ("team: add support for sending multicast rejoins")
Signed-off-by: Jiri Pirko
Signed-off-by: Jiri Benc
Signed-off-by: David S. Miller
Signed-off-by: Greg Kroah-Hartman
---
 drivers/net/team/team.c |   16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

--- a/drivers/net/team/team.c
+++ b/drivers/net/team/team.c
@@ -629,6 +629,7 @@ static int team_change_mode(struct team
 static void team_notify_peers_work(struct work_struct *work)
 {
 	struct team *team;
+	int val;
 
 	team = container_of(work, struct team, notify_peers.dw.work);
 
@@ -636,9 +637,14 @@ static void team_notify_peers_work(struc
 		schedule_delayed_work(&team->notify_peers.dw, 0);
 		return;
 	}
+	val = atomic_dec_if_positive(&team->notify_peers.count_pending);
+	if (val < 0) {
+		rtnl_unlock();
+		return;
+	}
 	call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, team->dev);
 	rtnl_unlock();
-	if (!atomic_dec_and_test(&team->notify_peers.count_pending))
+	if (val)
 		schedule_delayed_work(&team->notify_peers.dw,
 				      msecs_to_jiffies(team->notify_peers.interval));
 }
@@ -669,6 +675,7 @@ static void team_notify_peers_fini(struc
 static void team_mcast_rejoin_work(struct work_struct *work)
 {
 	struct team *team;
+	int val;
 
 	team = container_of(work, struct team, mcast_rejoin.dw.work);
 
@@ -676,9 +683,14 @@ static void team_mcast_rejoin_work(struc
 		schedule_delayed_work(&team->mcast_rejoin.dw, 0);
 		return;
 	}
+	val = atomic_dec_if_positive(&team->mcast_rejoin.count_pending);
+	if (val < 0) {
+		rtnl_unlock();
+		return;
+	}
 	call_netdevice_notifiers(NETDEV_RESEND_IGMP, team->dev);
 	rtnl_unlock();
-	if (!atomic_dec_and_test(&team->mcast_rejoin.count_pending))
+	if (val)
 		schedule_delayed_work(&team->mcast_rejoin.dw,
 				      msecs_to_jiffies(team->mcast_rejoin.interval));
 }
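
For context on why the new test is race-free: atomic_dec_if_positive()
performs the decrement only when the result would stay non-negative and
returns the new value, so a racing extra decrement is simply refused
instead of pushing the counter to -1. Below is a minimal user-space C11
sketch of that compare-and-swap pattern; dec_if_positive() and the demo
in main() are illustrative stand-ins, not the kernel implementation or
its API.

#include <stdatomic.h>
#include <stdio.h>

/* Illustrative user-space analogue of the kernel's
 * atomic_dec_if_positive(): decrement *v only if the result stays
 * >= 0, and return the new value (a negative return means the
 * decrement was refused).
 */
static int dec_if_positive(atomic_int *v)
{
	int c = atomic_load(v);

	for (;;) {
		int dec = c - 1;

		if (dec < 0)
			return dec;	/* refuse: would underflow */
		/* On success *v moves from c to dec; on failure (or a
		 * spurious weak-CAS miss) c is reloaded with the
		 * current value and we retry.
		 */
		if (atomic_compare_exchange_weak(v, &c, dec))
			return dec;
	}
}

int main(void)
{
	atomic_int count_pending = 2;

	/* Two successful decrements, then refusals: never below 0. */
	for (int i = 0; i < 4; i++)
		printf("dec -> %d\n", dec_if_positive(&count_pending));
	return 0;
}

Starting from count_pending = 2, the four calls print 1, 0, -1, -1:
once the counter reaches zero, a racing worker sees a negative return
and bails out instead of underflowing, which is exactly the check the
patch adds (with rtnl_unlock()) before call_netdevice_notifiers().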