From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 06 Apr 2026 03:03:43 +0000
In-Reply-To: <20260406-teaming-driver-internal-v5-0-e8a3f348a1c5@google.com>
X-Mailing-List: linux-kselftest@vger.kernel.org
References: <20260406-teaming-driver-internal-v5-0-e8a3f348a1c5@google.com>
X-Mailer: b4 0.14.3
Message-ID: <20260406-teaming-driver-internal-v5-7-e8a3f348a1c5@google.com>
Subject: [PATCH net-next v5 07/10] net: team: Track rx enablement separately from tx enablement
From: Marc Harvey
To: Jiri Pirko, Andrew Lunn, "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni, Shuah Khan, Simon Horman
Cc: netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, Marc Harvey

Separate rx and tx enablement/disablement into different functions so that
it is easier to interact with them independently later.

Although this patch touches the receive and transmit paths, the actual
behavior of the teaming driver should remain unchanged, since no option to
change rx or tx enablement independently is introduced yet. Those options
will be added in follow-up patches.

Signed-off-by: Marc Harvey
---
Changes in v5:
- Reorder function calls in team_port_enable() to make sure the call order
  stays the same as before.
- Link to v4: https://lore.kernel.org/netdev/20260403-teaming-driver-internal-v4-7-d3032f33ca25@google.com/

Changes in v4:
- New patch: split from the original monolithic v3 patch "net: team:
  Decouple rx and tx enablement in the team driver".
- Link to v3: https://lore.kernel.org/netdev/20260402-teaming-driver-internal-v3-6-e8cfdec3b5c2@google.com/
---
 drivers/net/team/team_core.c             | 104 ++++++++++++++++++++++++-------
 drivers/net/team/team_mode_loadbalance.c |   2 +-
 include/linux/if_team.h                  |  16 ++++-
 3 files changed, 95 insertions(+), 27 deletions(-)

diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
index 826769473878..e437099a5a17 100644
--- a/drivers/net/team/team_core.c
+++ b/drivers/net/team/team_core.c
@@ -87,7 +87,7 @@ static void team_lower_state_changed(struct team_port *port)
 	struct netdev_lag_lower_state_info info;
 
 	info.link_up = port->linkup;
-	info.tx_enabled = team_port_enabled(port);
+	info.tx_enabled = team_port_tx_enabled(port);
 	netdev_lower_state_changed(port->dev, &info);
 }
 
@@ -538,7 +538,7 @@ static void team_adjust_ops(struct team *team)
 	else
 		team->ops.transmit = team->mode->ops->transmit;
 
-	if (!team->tx_en_port_count || !team_is_mode_set(team) ||
+	if (!team->rx_en_port_count || !team_is_mode_set(team) ||
 	    !team->mode->ops->receive)
 		team->ops.receive = team_dummy_receive;
 	else
@@ -734,7 +734,7 @@ static rx_handler_result_t team_handle_frame(struct sk_buff **pskb)
 	port = team_port_get_rcu(skb->dev);
 	team = port->team;
 
-	if (!team_port_enabled(port)) {
+	if (!team_port_rx_enabled(port)) {
 		if (is_link_local_ether_addr(eth_hdr(skb)->h_dest))
 			/* link-local packets are mostly useful when stack receives them
 			 * with the link they arrive on.
@@ -876,7 +876,7 @@ static void __team_queue_override_enabled_check(struct team *team)
 static void team_queue_override_port_prio_changed(struct team *team,
 						  struct team_port *port)
 {
-	if (!port->queue_id || !team_port_enabled(port))
+	if (!port->queue_id || !team_port_tx_enabled(port))
 		return;
 	__team_queue_override_port_del(team, port);
 	__team_queue_override_port_add(team, port);
@@ -887,7 +887,7 @@ static void team_queue_override_port_change_queue_id(struct team *team,
 						     struct team_port *port,
 						     u16 new_queue_id)
 {
-	if (team_port_enabled(port)) {
+	if (team_port_tx_enabled(port)) {
 		__team_queue_override_port_del(team, port);
 		port->queue_id = new_queue_id;
 		__team_queue_override_port_add(team, port);
@@ -927,26 +927,33 @@ static bool team_port_find(const struct team *team,
 	return false;
 }
 
+static void __team_port_enable_rx(struct team *team,
+				  struct team_port *port)
+{
+	team->rx_en_port_count++;
+	WRITE_ONCE(port->rx_enabled, true);
+}
+
+static void __team_port_disable_rx(struct team *team,
+				   struct team_port *port)
+{
+	team->rx_en_port_count--;
+	WRITE_ONCE(port->rx_enabled, false);
+}
+
 /*
- * Enable/disable port by adding to enabled port hashlist and setting
- * port->tx_index (Might be racy so reader could see incorrect ifindex when
- * processing a flying packet, but that is not a problem). Write guarded
- * by RTNL.
+ * Enable just TX on the port by adding to tx-enabled port hashlist and
+ * setting port->tx_index (Might be racy so reader could see incorrect
+ * ifindex when processing a flying packet, but that is not a problem).
+ * Write guarded by RTNL.
  */
-static void team_port_enable(struct team *team,
-			     struct team_port *port)
+static void __team_port_enable_tx(struct team *team,
+				  struct team_port *port)
 {
-	if (team_port_enabled(port))
-		return;
 	WRITE_ONCE(port->tx_index, team->tx_en_port_count);
 	WRITE_ONCE(team->tx_en_port_count, team->tx_en_port_count + 1);
 	hlist_add_head_rcu(&port->tx_hlist,
 			   team_tx_port_index_hash(team, port->tx_index));
-	team_adjust_ops(team);
-	team_queue_override_port_add(team, port);
-	team_notify_peers(team);
-	team_mcast_rejoin(team);
-	team_lower_state_changed(port);
 }
 
 static void __reconstruct_port_hlist(struct team *team, int rm_index)
@@ -965,20 +972,69 @@ static void __reconstruct_port_hlist(struct team *team, int rm_index)
 	}
 }
 
-static void team_port_disable(struct team *team,
-			      struct team_port *port)
+static void __team_port_disable_tx(struct team *team,
+				   struct team_port *port)
 {
-	if (!team_port_enabled(port))
-		return;
 	if (team->ops.port_tx_disabled)
 		team->ops.port_tx_disabled(team, port);
+
 	hlist_del_rcu(&port->tx_hlist);
 	__reconstruct_port_hlist(team, port->tx_index);
+	WRITE_ONCE(port->tx_index, -1);
 	WRITE_ONCE(team->tx_en_port_count, team->tx_en_port_count - 1);
-	team_queue_override_port_del(team, port);
+}
+
+/*
+ * Enable TX AND RX on the port.
+ */
+static void team_port_enable(struct team *team,
+			     struct team_port *port)
+{
+	bool rx_was_enabled;
+	bool tx_was_enabled;
+
+	if (team_port_enabled(port))
+		return;
+
+	rx_was_enabled = team_port_rx_enabled(port);
+	tx_was_enabled = team_port_tx_enabled(port);
+
+	if (!rx_was_enabled)
+		__team_port_enable_rx(team, port);
+	if (!tx_was_enabled)
+		__team_port_enable_tx(team, port);
+
+	team_adjust_ops(team);
+	if (!tx_was_enabled)
+		team_queue_override_port_add(team, port);
+	team_notify_peers(team);
+	if (!rx_was_enabled)
+		team_mcast_rejoin(team);
+	if (!tx_was_enabled)
+		team_lower_state_changed(port);
+}
+
+static void team_port_disable(struct team *team,
+			      struct team_port *port)
+{
+	bool rx_was_enabled = team_port_rx_enabled(port);
+	bool tx_was_enabled = team_port_tx_enabled(port);
+
+	if (!tx_was_enabled && !rx_was_enabled)
+		return;
+
+	if (tx_was_enabled) {
+		__team_port_disable_tx(team, port);
+		team_queue_override_port_del(team, port);
+	}
+	if (rx_was_enabled)
+		__team_port_disable_rx(team, port);
+
 	team_adjust_ops(team);
-	team_lower_state_changed(port);
+
+	if (tx_was_enabled)
+		team_lower_state_changed(port);
 }
 
 static int team_port_enter(struct team *team, struct team_port *port)
diff --git a/drivers/net/team/team_mode_loadbalance.c b/drivers/net/team/team_mode_loadbalance.c
index 4833fbfe241e..38a459649569 100644
--- a/drivers/net/team/team_mode_loadbalance.c
+++ b/drivers/net/team/team_mode_loadbalance.c
@@ -380,7 +380,7 @@ static int lb_tx_hash_to_port_mapping_set(struct team *team,
 
 	list_for_each_entry(port, &team->port_list, list) {
 		if (ctx->data.u32_val == port->dev->ifindex &&
-		    team_port_enabled(port)) {
+		    team_port_tx_enabled(port)) {
 			rcu_assign_pointer(LB_HTPM_PORT_BY_HASH(lb_priv, hash),
 					   port);
 			return 0;
diff --git a/include/linux/if_team.h b/include/linux/if_team.h
index c777170ef552..3d21e06fda67 100644
--- a/include/linux/if_team.h
+++ b/include/linux/if_team.h
@@ -31,6 +31,7 @@ struct team_port {
 	struct list_head list; /* node in ordinary list */
 	struct team *team;
 	int tx_index; /* index of tx enabled port. If disabled, -1 */
+	bool rx_enabled;
 
 	bool linkup; /* either state.linkup or user.linkup */
 
@@ -75,14 +76,24 @@ static inline struct team_port *team_port_get_rcu(const struct net_device *dev)
 	return rcu_dereference(dev->rx_handler_data);
 }
 
-static inline bool team_port_enabled(struct team_port *port)
+static inline bool team_port_rx_enabled(struct team_port *port)
+{
+	return READ_ONCE(port->rx_enabled);
+}
+
+static inline bool team_port_tx_enabled(struct team_port *port)
 {
 	return READ_ONCE(port->tx_index) != -1;
 }
 
+static inline bool team_port_enabled(struct team_port *port)
+{
+	return team_port_rx_enabled(port) && team_port_tx_enabled(port);
+}
+
 static inline bool team_port_txable(struct team_port *port)
 {
-	return port->linkup && team_port_enabled(port);
+	return port->linkup && team_port_tx_enabled(port);
 }
 
 static inline bool team_port_dev_txable(const struct net_device *port_dev)
@@ -193,6 +204,7 @@ struct team {
 	 * List of tx-enabled ports and counts of rx and tx-enabled ports.
 	 */
 	int tx_en_port_count;
+	int rx_en_port_count;
 	struct hlist_head tx_en_port_hlist[TEAM_PORT_HASHENTRIES];
 
 	struct list_head port_list; /* list of all ports */

-- 
2.53.0.1185.g05d4b7b318-goog
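
For readers following along, the enable/disable ordering the patch introduces can be modeled in a few lines of plain C. This is a simplified userspace sketch, not driver code: all names here (model_team, model_port, model_port_enable, ...) are illustrative, and the hashlist, RCU, and notification machinery are omitted so only the counter and flag transitions remain.

```c
#include <assert.h>
#include <stdbool.h>

struct model_port {
	int  tx_index;    /* -1 when tx is disabled, mirroring port->tx_index */
	bool rx_enabled;  /* mirroring port->rx_enabled */
};

struct model_team {
	int tx_en_port_count;
	int rx_en_port_count;
};

static bool port_rx_enabled(const struct model_port *p)
{
	return p->rx_enabled;
}

static bool port_tx_enabled(const struct model_port *p)
{
	return p->tx_index != -1;
}

/* After the split, "enabled" means enabled in both directions. */
static bool port_enabled(const struct model_port *p)
{
	return port_rx_enabled(p) && port_tx_enabled(p);
}

static void model_port_enable(struct model_team *t, struct model_port *p)
{
	if (port_enabled(p))
		return;
	if (!port_rx_enabled(p)) {		/* cf. __team_port_enable_rx */
		t->rx_en_port_count++;
		p->rx_enabled = true;
	}
	if (!port_tx_enabled(p))		/* cf. __team_port_enable_tx */
		p->tx_index = t->tx_en_port_count++;
}

static void model_port_disable(struct model_team *t, struct model_port *p)
{
	if (!port_tx_enabled(p) && !port_rx_enabled(p))
		return;
	if (port_tx_enabled(p)) {		/* cf. __team_port_disable_tx */
		t->tx_en_port_count--;
		p->tx_index = -1;
	}
	if (port_rx_enabled(p)) {		/* cf. __team_port_disable_rx */
		t->rx_en_port_count--;
		p->rx_enabled = false;
	}
}
```

The invariant the real helpers maintain carries over: a port reports enabled only when both the rx flag is set and the tx index is valid, and each direction bumps its own per-team counter exactly once.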