From: Nikolay Aleksandrov <razor@blackwall.org>
To: netdev@vger.kernel.org
Cc: roopa@nvidia.com, bridge@lists.linux-foundation.org,
Nikolay Aleksandrov <nikolay@nvidia.com>
Subject: [PATCH net-next 1/6] net: bridge: mcast: record querier port device ifindex instead of pointer
Date: Fri, 13 Aug 2021 17:59:57 +0300
Message-ID: <20210813150002.673579-2-razor@blackwall.org>
In-Reply-To: <20210813150002.673579-1-razor@blackwall.org>
From: Nikolay Aleksandrov <nikolay@nvidia.com>
Currently, when a querier port is detected, its net_bridge_port pointer is
recorded, but the pointer is used only for comparisons, so a stale value is
harmless. To actually dereference and use the port pointer, we would have
to properly account for its usage, which adds unnecessary complexity.
Instead, store the netdevice ifindex rather than the port pointer and look
the bridge port up on demand. The lookup is best effort: before use, the
device must be validated as still being part of the bridge, but that is a
small price to pay for avoiding querier reference counting on every
port/vlan.
Signed-off-by: Nikolay Aleksandrov <nikolay@nvidia.com>
---
net/bridge/br_multicast.c | 19 ++++++++++++-------
net/bridge/br_private.h | 2 +-
2 files changed, 13 insertions(+), 8 deletions(-)
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index df6bf6a237aa..853b947edf87 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -2850,7 +2850,8 @@ static bool br_ip4_multicast_select_querier(struct net_bridge_mcast *brmctx,
brmctx->ip4_querier.addr.src.ip4 = saddr;
/* update protected by general multicast_lock by caller */
- rcu_assign_pointer(brmctx->ip4_querier.port, port);
+ if (port)
+ brmctx->ip4_querier.port_ifidx = port->dev->ifindex;
return true;
}
@@ -2875,7 +2876,8 @@ static bool br_ip6_multicast_select_querier(struct net_bridge_mcast *brmctx,
brmctx->ip6_querier.addr.src.ip6 = *saddr;
/* update protected by general multicast_lock by caller */
- rcu_assign_pointer(brmctx->ip6_querier.port, port);
+ if (port)
+ brmctx->ip6_querier.port_ifidx = port->dev->ifindex;
return true;
}
@@ -3675,7 +3677,7 @@ static void br_multicast_query_expired(struct net_bridge_mcast *brmctx,
if (query->startup_sent < brmctx->multicast_startup_query_count)
query->startup_sent++;
- RCU_INIT_POINTER(querier->port, NULL);
+ querier->port_ifidx = 0;
br_multicast_send_query(brmctx, NULL, query);
out:
spin_unlock(&brmctx->br->multicast_lock);
@@ -3732,12 +3734,12 @@ void br_multicast_ctx_init(struct net_bridge *br,
brmctx->multicast_membership_interval = 260 * HZ;
brmctx->ip4_other_query.delay_time = 0;
- brmctx->ip4_querier.port = NULL;
+ brmctx->ip4_querier.port_ifidx = 0;
brmctx->multicast_igmp_version = 2;
#if IS_ENABLED(CONFIG_IPV6)
brmctx->multicast_mld_version = 1;
brmctx->ip6_other_query.delay_time = 0;
- brmctx->ip6_querier.port = NULL;
+ brmctx->ip6_querier.port_ifidx = 0;
#endif
timer_setup(&brmctx->ip4_mc_router_timer,
@@ -4479,6 +4481,7 @@ bool br_multicast_has_querier_adjacent(struct net_device *dev, int proto)
struct net_bridge *br;
struct net_bridge_port *port;
bool ret = false;
+ int port_ifidx;
rcu_read_lock();
if (!netif_is_bridge_port(dev))
@@ -4493,14 +4496,16 @@ bool br_multicast_has_querier_adjacent(struct net_device *dev, int proto)
switch (proto) {
case ETH_P_IP:
+ port_ifidx = brmctx->ip4_querier.port_ifidx;
if (!timer_pending(&brmctx->ip4_other_query.timer) ||
- rcu_dereference(brmctx->ip4_querier.port) == port)
+ port_ifidx == port->dev->ifindex)
goto unlock;
break;
#if IS_ENABLED(CONFIG_IPV6)
case ETH_P_IPV6:
+ port_ifidx = brmctx->ip6_querier.port_ifidx;
if (!timer_pending(&brmctx->ip6_other_query.timer) ||
- rcu_dereference(brmctx->ip6_querier.port) == port)
+ port_ifidx == port->dev->ifindex)
goto unlock;
break;
#endif
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index 32c218aa3f36..b92fab5ae0fb 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -81,7 +81,7 @@ struct bridge_mcast_other_query {
/* selected querier */
struct bridge_mcast_querier {
struct br_ip addr;
- struct net_bridge_port __rcu *port;
+ int port_ifidx;
};
/* IGMP/MLD statistics */
--
2.31.1
Thread overview:
2021-08-13 14:59 [PATCH net-next 0/6] net: bridge: mcast: dump querier state Nikolay Aleksandrov
2021-08-13 14:59 ` Nikolay Aleksandrov [this message]
2021-08-13 14:59 ` [PATCH net-next 2/6] net: bridge: mcast: make sure querier port/address updates are consistent Nikolay Aleksandrov
2021-08-13 14:59 ` [PATCH net-next 3/6] net: bridge: mcast: consolidate querier selection for ipv4 and ipv6 Nikolay Aleksandrov
2021-08-13 15:00 ` [PATCH net-next 4/6] net: bridge: mcast: dump ipv4 querier state Nikolay Aleksandrov
2021-08-16 8:33 ` Dan Carpenter
2021-08-16 8:37 ` Nikolay Aleksandrov
2021-08-13 15:00 ` [PATCH net-next 5/6] net: bridge: mcast: dump ipv6 " Nikolay Aleksandrov
2021-08-13 15:00 ` [PATCH net-next 6/6] net: bridge: vlan: dump mcast ctx " Nikolay Aleksandrov
2021-08-14 13:10 ` [PATCH net-next 0/6] net: bridge: mcast: dump " patchwork-bot+netdevbpf