* [PATCH net-next 01/13] bridge: mcast: Dump MDB entries even when snooping is disabled
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:04 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 02/13] bridge: mcast: Account for missing attributes Ido Schimmel
` (11 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Currently, the bridge driver does not dump MDB entries when multicast
snooping is disabled although the entries are present in the kernel:
# bridge mdb add dev br0 port swp1 grp 239.1.1.1 permanent
# bridge mdb show dev br0
dev br0 port swp1 grp 239.1.1.1 permanent
dev br0 port br0 grp ff02::6a temp
dev br0 port br0 grp ff02::1:ff9d:e61b temp
# ip link set dev br0 type bridge mcast_snooping 0
# bridge mdb show dev br0
# ip link set dev br0 type bridge mcast_snooping 1
# bridge mdb show dev br0
dev br0 port swp1 grp 239.1.1.1 permanent
dev br0 port br0 grp ff02::6a temp
dev br0 port br0 grp ff02::1:ff9d:e61b temp
This behavior differs from other netlink dump interfaces that dump
entries regardless of whether they are used. For example, VLANs are
dumped even when VLAN filtering is disabled:
# ip link set dev br0 type bridge vlan_filtering 0
# bridge vlan show dev swp1
port vlan-id
swp1 1 PVID Egress Untagged
Remove the check and always dump MDB entries:
# bridge mdb add dev br0 port swp1 grp 239.1.1.1 permanent
# bridge mdb show dev br0
dev br0 port swp1 grp 239.1.1.1 permanent
dev br0 port br0 grp ff02::6a temp
dev br0 port br0 grp ff02::1:ffeb:1a4d temp
# ip link set dev br0 type bridge mcast_snooping 0
# bridge mdb show dev br0
dev br0 port swp1 grp 239.1.1.1 permanent
dev br0 port br0 grp ff02::6a temp
dev br0 port br0 grp ff02::1:ffeb:1a4d temp
# ip link set dev br0 type bridge mcast_snooping 1
# bridge mdb show dev br0
dev br0 port swp1 grp 239.1.1.1 permanent
dev br0 port br0 grp ff02::6a temp
dev br0 port br0 grp ff02::1:ffeb:1a4d temp
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
net/bridge/br_mdb.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 7305f5f8215c..fb58bb1b60e8 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -323,9 +323,6 @@ static int br_mdb_fill_info(struct sk_buff *skb, struct netlink_callback *cb,
struct net_bridge_mdb_entry *mp;
struct nlattr *nest, *nest2;
- if (!br_opt_get(br, BROPT_MULTICAST_ENABLED))
- return 0;
-
nest = nla_nest_start_noflag(skb, MDBA_MDB);
if (nest == NULL)
return -EMSGSIZE;
--
2.40.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 01/13] bridge: mcast: Dump MDB entries even when snooping is disabled
2023-10-16 13:12 ` [PATCH net-next 01/13] bridge: mcast: Dump MDB entries even when snooping is disabled Ido Schimmel
@ 2023-10-17 9:04 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:04 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Currently, the bridge driver does not dump MDB entries when multicast
> snooping is disabled although the entries are present in the kernel:
>
> # bridge mdb add dev br0 port swp1 grp 239.1.1.1 permanent
> # bridge mdb show dev br0
> dev br0 port swp1 grp 239.1.1.1 permanent
> dev br0 port br0 grp ff02::6a temp
> dev br0 port br0 grp ff02::1:ff9d:e61b temp
> # ip link set dev br0 type bridge mcast_snooping 0
> # bridge mdb show dev br0
> # ip link set dev br0 type bridge mcast_snooping 1
> # bridge mdb show dev br0
> dev br0 port swp1 grp 239.1.1.1 permanent
> dev br0 port br0 grp ff02::6a temp
> dev br0 port br0 grp ff02::1:ff9d:e61b temp
>
> This behavior differs from other netlink dump interfaces that dump
> entries regardless of whether they are used. For example, VLANs are
> dumped even when VLAN filtering is disabled:
>
> # ip link set dev br0 type bridge vlan_filtering 0
> # bridge vlan show dev swp1
> port vlan-id
> swp1 1 PVID Egress Untagged
>
> Remove the check and always dump MDB entries:
>
> # bridge mdb add dev br0 port swp1 grp 239.1.1.1 permanent
> # bridge mdb show dev br0
> dev br0 port swp1 grp 239.1.1.1 permanent
> dev br0 port br0 grp ff02::6a temp
> dev br0 port br0 grp ff02::1:ffeb:1a4d temp
> # ip link set dev br0 type bridge mcast_snooping 0
> # bridge mdb show dev br0
> dev br0 port swp1 grp 239.1.1.1 permanent
> dev br0 port br0 grp ff02::6a temp
> dev br0 port br0 grp ff02::1:ffeb:1a4d temp
> # ip link set dev br0 type bridge mcast_snooping 1
> # bridge mdb show dev br0
> dev br0 port swp1 grp 239.1.1.1 permanent
> dev br0 port br0 grp ff02::6a temp
> dev br0 port br0 grp ff02::1:ffeb:1a4d temp
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> net/bridge/br_mdb.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
> index 7305f5f8215c..fb58bb1b60e8 100644
> --- a/net/bridge/br_mdb.c
> +++ b/net/bridge/br_mdb.c
> @@ -323,9 +323,6 @@ static int br_mdb_fill_info(struct sk_buff *skb, struct netlink_callback *cb,
> struct net_bridge_mdb_entry *mp;
> struct nlattr *nest, *nest2;
>
> - if (!br_opt_get(br, BROPT_MULTICAST_ENABLED))
> - return 0;
> -
> nest = nla_nest_start_noflag(skb, MDBA_MDB);
> if (nest == NULL)
> return -EMSGSIZE;
Finally! Thanks :) this has been a long-standing annoyance.
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 02/13] bridge: mcast: Account for missing attributes
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
2023-10-16 13:12 ` [PATCH net-next 01/13] bridge: mcast: Dump MDB entries even when snooping is disabled Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:05 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 03/13] bridge: mcast: Factor out a helper for PG entry size calculation Ido Schimmel
` (10 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
The 'MDBA_MDB' and 'MDBA_MDB_ENTRY' nest attributes are not accounted
for when calculating the size of MDB notifications. Add them along with
comments for existing attributes.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
net/bridge/br_mdb.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index fb58bb1b60e8..08de94bffc12 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -452,11 +452,18 @@ static int nlmsg_populate_mdb_fill(struct sk_buff *skb,
static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
{
- size_t nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
- nla_total_size(sizeof(struct br_mdb_entry)) +
- nla_total_size(sizeof(u32));
struct net_bridge_group_src *ent;
- size_t addr_size = 0;
+ size_t nlmsg_size, addr_size = 0;
+
+ nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY_INFO */
+ nla_total_size(sizeof(struct br_mdb_entry)) +
+ /* MDBA_MDB_EATTR_TIMER */
+ nla_total_size(sizeof(u32));
if (!pg)
goto out;
--
2.40.1
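For reference, the two added nla_total_size(0) terms account only for the attribute headers of the MDBA_MDB and MDBA_MDB_ENTRY nests; the nested payload is sized separately. Below is a minimal user space sketch of the arithmetic, illustrative only and not part of the patch: the helper mirrors the kernel's nla_total_size() from include/net/netlink.h using the macros exported in <linux/netlink.h>.

/* Illustrative sketch: why an empty nest attribute such as MDBA_MDB
 * accounts for exactly one 4-byte attribute header.
 */
#include <linux/if_bridge.h>	/* struct br_mdb_entry */
#include <linux/netlink.h>	/* NLA_HDRLEN, NLA_ALIGN */
#include <stdio.h>

static int nla_total_size(int payload)
{
	/* attribute header plus payload, padded to a 4-byte boundary */
	return NLA_ALIGN(NLA_HDRLEN + payload);
}

int main(void)
{
	printf("nest header (MDBA_MDB): %d bytes\n", nla_total_size(0));
	printf("MDBA_MDB_ENTRY_INFO: %d bytes\n",
	       nla_total_size((int)sizeof(struct br_mdb_entry)));
	return 0;
}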
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 02/13] bridge: mcast: Account for missing attributes
2023-10-16 13:12 ` [PATCH net-next 02/13] bridge: mcast: Account for missing attributes Ido Schimmel
@ 2023-10-17 9:05 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:05 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> The 'MDBA_MDB' and 'MDBA_MDB_ENTRY' nest attributes are not accounted
> for when calculating the size of MDB notifications. Add them along with
> comments for existing attributes.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> net/bridge/br_mdb.c | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
> index fb58bb1b60e8..08de94bffc12 100644
> --- a/net/bridge/br_mdb.c
> +++ b/net/bridge/br_mdb.c
> @@ -452,11 +452,18 @@ static int nlmsg_populate_mdb_fill(struct sk_buff *skb,
>
> static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
> {
> - size_t nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
> - nla_total_size(sizeof(struct br_mdb_entry)) +
> - nla_total_size(sizeof(u32));
> struct net_bridge_group_src *ent;
> - size_t addr_size = 0;
> + size_t nlmsg_size, addr_size = 0;
> +
> + nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
> + /* MDBA_MDB */
> + nla_total_size(0) +
> + /* MDBA_MDB_ENTRY */
> + nla_total_size(0) +
> + /* MDBA_MDB_ENTRY_INFO */
> + nla_total_size(sizeof(struct br_mdb_entry)) +
> + /* MDBA_MDB_EATTR_TIMER */
> + nla_total_size(sizeof(u32));
>
> if (!pg)
> goto out;
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 03/13] bridge: mcast: Factor out a helper for PG entry size calculation
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
2023-10-16 13:12 ` [PATCH net-next 01/13] bridge: mcast: Dump MDB entries even when snooping is disabled Ido Schimmel
2023-10-16 13:12 ` [PATCH net-next 02/13] bridge: mcast: Account for missing attributes Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:05 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 04/13] bridge: mcast: Rename MDB entry get function Ido Schimmel
` (9 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Currently, netlink notifications are sent for individual port group
entries and not for the entire MDB entry itself.
Subsequent patches are going to add MDB get support which will require
the bridge driver to reply with an entire MDB entry.
Therefore, as a preparation, factor out a helper to calculate the size
of an individual port group entry. When determining the size of the
reply this helper will be invoked for each port group entry in the MDB
entry.
No functional changes intended.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
net/bridge/br_mdb.c | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 08de94bffc12..42983f6a0abd 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -450,18 +450,13 @@ static int nlmsg_populate_mdb_fill(struct sk_buff *skb,
return -EMSGSIZE;
}
-static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
+static size_t rtnl_mdb_nlmsg_pg_size(const struct net_bridge_port_group *pg)
{
struct net_bridge_group_src *ent;
size_t nlmsg_size, addr_size = 0;
- nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
- /* MDBA_MDB */
- nla_total_size(0) +
- /* MDBA_MDB_ENTRY */
- nla_total_size(0) +
/* MDBA_MDB_ENTRY_INFO */
- nla_total_size(sizeof(struct br_mdb_entry)) +
+ nlmsg_size = nla_total_size(sizeof(struct br_mdb_entry)) +
/* MDBA_MDB_EATTR_TIMER */
nla_total_size(sizeof(u32));
@@ -511,6 +506,17 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
return nlmsg_size;
}
+static size_t rtnl_mdb_nlmsg_size(const struct net_bridge_port_group *pg)
+{
+ return NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0) +
+ /* Port group entry */
+ rtnl_mdb_nlmsg_pg_size(pg);
+}
+
void br_mdb_notify(struct net_device *dev,
struct net_bridge_mdb_entry *mp,
struct net_bridge_port_group *pg,
--
2.40.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 03/13] bridge: mcast: Factor out a helper for PG entry size calculation
2023-10-16 13:12 ` [PATCH net-next 03/13] bridge: mcast: Factor out a helper for PG entry size calculation Ido Schimmel
@ 2023-10-17 9:05 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:05 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Currently, netlink notifications are sent for individual port group
> entries and not for the entire MDB entry itself.
>
> Subsequent patches are going to add MDB get support which will require
> the bridge driver to reply with an entire MDB entry.
>
> Therefore, as a preparation, factor out a helper to calculate the size
> of an individual port group entry. When determining the size of the
> reply this helper will be invoked for each port group entry in the MDB
> entry.
>
> No functional changes intended.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> net/bridge/br_mdb.c | 20 +++++++++++++-------
> 1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
> index 08de94bffc12..42983f6a0abd 100644
> --- a/net/bridge/br_mdb.c
> +++ b/net/bridge/br_mdb.c
> @@ -450,18 +450,13 @@ static int nlmsg_populate_mdb_fill(struct sk_buff *skb,
> return -EMSGSIZE;
> }
>
> -static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
> +static size_t rtnl_mdb_nlmsg_pg_size(const struct net_bridge_port_group *pg)
> {
> struct net_bridge_group_src *ent;
> size_t nlmsg_size, addr_size = 0;
>
> - nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
> - /* MDBA_MDB */
> - nla_total_size(0) +
> - /* MDBA_MDB_ENTRY */
> - nla_total_size(0) +
> /* MDBA_MDB_ENTRY_INFO */
> - nla_total_size(sizeof(struct br_mdb_entry)) +
> + nlmsg_size = nla_total_size(sizeof(struct br_mdb_entry)) +
> /* MDBA_MDB_EATTR_TIMER */
> nla_total_size(sizeof(u32));
>
> @@ -511,6 +506,17 @@ static size_t rtnl_mdb_nlmsg_size(struct net_bridge_port_group *pg)
> return nlmsg_size;
> }
>
> +static size_t rtnl_mdb_nlmsg_size(const struct net_bridge_port_group *pg)
> +{
> + return NLMSG_ALIGN(sizeof(struct br_port_msg)) +
> + /* MDBA_MDB */
> + nla_total_size(0) +
> + /* MDBA_MDB_ENTRY */
> + nla_total_size(0) +
> + /* Port group entry */
> + rtnl_mdb_nlmsg_pg_size(pg);
> +}
> +
> void br_mdb_notify(struct net_device *dev,
> struct net_bridge_mdb_entry *mp,
> struct net_bridge_port_group *pg,
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 04/13] bridge: mcast: Rename MDB entry get function
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (2 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 03/13] bridge: mcast: Factor out a helper for PG entry size calculation Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:06 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 05/13] vxlan: mdb: Adjust function arguments Ido Schimmel
` (8 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
The current name is going to conflict with the upcoming net device
operation for the MDB get operation.
Rename the function to br_mdb_entry_skb_get(). No functional changes
intended.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
net/bridge/br_device.c | 2 +-
net/bridge/br_input.c | 2 +-
net/bridge/br_multicast.c | 5 +++--
net/bridge/br_private.h | 10 ++++++----
4 files changed, 11 insertions(+), 8 deletions(-)
diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
index 9a5ea06236bd..d624710b384a 100644
--- a/net/bridge/br_device.c
+++ b/net/bridge/br_device.c
@@ -92,7 +92,7 @@ netdev_tx_t br_dev_xmit(struct sk_buff *skb, struct net_device *dev)
goto out;
}
- mdst = br_mdb_get(brmctx, skb, vid);
+ mdst = br_mdb_entry_skb_get(brmctx, skb, vid);
if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst))
br_multicast_flood(mdst, skb, brmctx, false, true);
diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
index c729528b5e85..f21097e73482 100644
--- a/net/bridge/br_input.c
+++ b/net/bridge/br_input.c
@@ -175,7 +175,7 @@ int br_handle_frame_finish(struct net *net, struct sock *sk, struct sk_buff *skb
switch (pkt_type) {
case BR_PKT_MULTICAST:
- mdst = br_mdb_get(brmctx, skb, vid);
+ mdst = br_mdb_entry_skb_get(brmctx, skb, vid);
if ((mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) &&
br_multicast_querier_exists(brmctx, eth_hdr(skb), mdst)) {
if ((mdst && mdst->host_joined) ||
diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c
index 96d1fc78dd39..d7d021af1029 100644
--- a/net/bridge/br_multicast.c
+++ b/net/bridge/br_multicast.c
@@ -145,8 +145,9 @@ static struct net_bridge_mdb_entry *br_mdb_ip6_get(struct net_bridge *br,
}
#endif
-struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx,
- struct sk_buff *skb, u16 vid)
+struct net_bridge_mdb_entry *
+br_mdb_entry_skb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb,
+ u16 vid)
{
struct net_bridge *br = brmctx->br;
struct br_ip ip;
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index cbbe35278459..3220898424ce 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -951,8 +951,9 @@ int br_multicast_rcv(struct net_bridge_mcast **brmctx,
struct net_bridge_mcast_port **pmctx,
struct net_bridge_vlan *vlan,
struct sk_buff *skb, u16 vid);
-struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx,
- struct sk_buff *skb, u16 vid);
+struct net_bridge_mdb_entry *
+br_mdb_entry_skb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb,
+ u16 vid);
int br_multicast_add_port(struct net_bridge_port *port);
void br_multicast_del_port(struct net_bridge_port *port);
void br_multicast_enable_port(struct net_bridge_port *port);
@@ -1341,8 +1342,9 @@ static inline int br_multicast_rcv(struct net_bridge_mcast **brmctx,
return 0;
}
-static inline struct net_bridge_mdb_entry *br_mdb_get(struct net_bridge_mcast *brmctx,
- struct sk_buff *skb, u16 vid)
+static inline struct net_bridge_mdb_entry *
+br_mdb_entry_skb_get(struct net_bridge_mcast *brmctx, struct sk_buff *skb,
+ u16 vid)
{
return NULL;
}
--
2.40.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 04/13] bridge: mcast: Rename MDB entry get function
2023-10-16 13:12 ` [PATCH net-next 04/13] bridge: mcast: Rename MDB entry get function Ido Schimmel
@ 2023-10-17 9:06 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:06 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> The current name is going to conflict with the upcoming net device
> operation for the MDB get operation.
>
> Rename the function to br_mdb_entry_skb_get(). No functional changes
> intended.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> net/bridge/br_device.c | 2 +-
> net/bridge/br_input.c | 2 +-
> net/bridge/br_multicast.c | 5 +++--
> net/bridge/br_private.h | 10 ++++++----
> 4 files changed, 11 insertions(+), 8 deletions(-)
>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 05/13] vxlan: mdb: Adjust function arguments
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (3 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 04/13] bridge: mcast: Rename MDB entry get function Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:06 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 06/13] vxlan: mdb: Factor out a helper for remote entry size calculation Ido Schimmel
` (7 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Adjust the function's arguments and rename it to allow it to be reused
by future call sites that only have access to 'struct
vxlan_mdb_entry_key', but not to 'struct vxlan_mdb_config'.
No functional changes intended.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
drivers/net/vxlan/vxlan_mdb.c | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_mdb.c b/drivers/net/vxlan/vxlan_mdb.c
index 5e041622261a..0b6043e1473b 100644
--- a/drivers/net/vxlan/vxlan_mdb.c
+++ b/drivers/net/vxlan/vxlan_mdb.c
@@ -370,12 +370,10 @@ static bool vxlan_mdb_is_valid_source(const struct nlattr *attr, __be16 proto,
return true;
}
-static void vxlan_mdb_config_group_set(struct vxlan_mdb_config *cfg,
- const struct br_mdb_entry *entry,
- const struct nlattr *source_attr)
+static void vxlan_mdb_group_set(struct vxlan_mdb_entry_key *group,
+ const struct br_mdb_entry *entry,
+ const struct nlattr *source_attr)
{
- struct vxlan_mdb_entry_key *group = &cfg->group;
-
switch (entry->addr.proto) {
case htons(ETH_P_IP):
group->dst.sa.sa_family = AF_INET;
@@ -503,7 +501,7 @@ static int vxlan_mdb_config_attrs_init(struct vxlan_mdb_config *cfg,
entry->addr.proto, extack))
return -EINVAL;
- vxlan_mdb_config_group_set(cfg, entry, mdbe_attrs[MDBE_ATTR_SOURCE]);
+ vxlan_mdb_group_set(&cfg->group, entry, mdbe_attrs[MDBE_ATTR_SOURCE]);
/* rtnetlink code only validates that IPv4 group address is
* multicast.
--
2.40.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 05/13] vxlan: mdb: Adjust function arguments
2023-10-16 13:12 ` [PATCH net-next 05/13] vxlan: mdb: Adjust function arguments Ido Schimmel
@ 2023-10-17 9:06 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:06 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Adjust the function's arguments and rename it to allow it to be reused
> by future call sites that only have access to 'struct
> vxlan_mdb_entry_key', but not to 'struct vxlan_mdb_config'.
>
> No functional changes intended.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> drivers/net/vxlan/vxlan_mdb.c | 10 ++++------
> 1 file changed, 4 insertions(+), 6 deletions(-)
>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 06/13] vxlan: mdb: Factor out a helper for remote entry size calculation
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (4 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 05/13] vxlan: mdb: Adjust function arguments Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:06 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 07/13] bridge: add MDB get uAPI attributes Ido Schimmel
` (6 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Currently, netlink notifications are sent for individual remote entries
and not for the entire MDB entry itself.
Subsequent patches are going to add MDB get support which will require
the VXLAN driver to reply with an entire MDB entry.
Therefore, as a preparation, factor out a helper to calculate the size
of an individual remote entry. When determining the size of the reply
this helper will be invoked for each remote entry in the MDB entry.
No functional changes intended.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
drivers/net/vxlan/vxlan_mdb.c | 28 +++++++++++++++++++---------
1 file changed, 19 insertions(+), 9 deletions(-)
diff --git a/drivers/net/vxlan/vxlan_mdb.c b/drivers/net/vxlan/vxlan_mdb.c
index 0b6043e1473b..19640f7e3a88 100644
--- a/drivers/net/vxlan/vxlan_mdb.c
+++ b/drivers/net/vxlan/vxlan_mdb.c
@@ -925,23 +925,20 @@ vxlan_mdb_nlmsg_src_list_size(const struct vxlan_mdb_entry_key *group,
return nlmsg_size;
}
-static size_t vxlan_mdb_nlmsg_size(const struct vxlan_dev *vxlan,
- const struct vxlan_mdb_entry *mdb_entry,
- const struct vxlan_mdb_remote *remote)
+static size_t
+vxlan_mdb_nlmsg_remote_size(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote)
{
const struct vxlan_mdb_entry_key *group = &mdb_entry->key;
struct vxlan_rdst *rd = rtnl_dereference(remote->rd);
size_t nlmsg_size;
- nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
- /* MDBA_MDB */
- nla_total_size(0) +
- /* MDBA_MDB_ENTRY */
- nla_total_size(0) +
/* MDBA_MDB_ENTRY_INFO */
- nla_total_size(sizeof(struct br_mdb_entry)) +
+ nlmsg_size = nla_total_size(sizeof(struct br_mdb_entry)) +
/* MDBA_MDB_EATTR_TIMER */
nla_total_size(sizeof(u32));
+
/* MDBA_MDB_EATTR_SOURCE */
if (vxlan_mdb_is_sg(group))
nlmsg_size += nla_total_size(vxlan_addr_size(&group->dst));
@@ -969,6 +966,19 @@ static size_t vxlan_mdb_nlmsg_size(const struct vxlan_dev *vxlan,
return nlmsg_size;
}
+static size_t vxlan_mdb_nlmsg_size(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry,
+ const struct vxlan_mdb_remote *remote)
+{
+ return NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0) +
+ /* Remote entry */
+ vxlan_mdb_nlmsg_remote_size(vxlan, mdb_entry, remote);
+}
+
static int vxlan_mdb_nlmsg_fill(const struct vxlan_dev *vxlan,
struct sk_buff *skb,
const struct vxlan_mdb_entry *mdb_entry,
--
2.40.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 06/13] vxlan: mdb: Factor out a helper for remote entry size calculation
2023-10-16 13:12 ` [PATCH net-next 06/13] vxlan: mdb: Factor out a helper for remote entry size calculation Ido Schimmel
@ 2023-10-17 9:06 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:06 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Currently, netlink notifications are sent for individual remote entries
> and not for the entire MDB entry itself.
>
> Subsequent patches are going to add MDB get support which will require
> the VXLAN driver to reply with an entire MDB entry.
>
> Therefore, as a preparation, factor out a helper to calculate the size
> of an individual remote entry. When determining the size of the reply
> this helper will be invoked for each remote entry in the MDB entry.
>
> No functional changes intended.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> drivers/net/vxlan/vxlan_mdb.c | 28 +++++++++++++++++++---------
> 1 file changed, 19 insertions(+), 9 deletions(-)
>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 07/13] bridge: add MDB get uAPI attributes
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (5 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 06/13] vxlan: mdb: Factor out a helper for remote entry size calculation Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:08 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 08/13] net: Add MDB get device operation Ido Schimmel
` (5 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Add MDB get attributes that correspond to the MDB set attributes used in
RTM_NEWMDB messages. Specifically, add 'MDBA_GET_ENTRY' which will hold
a 'struct br_mdb_entry' and 'MDBA_GET_ENTRY_ATTRS' which will hold
'MDBE_ATTR_*' attributes that are used as indexes (source IP and source
VNI).
An example request will look as follows:
[ struct nlmsghdr ]
[ struct br_port_msg ]
[ MDBA_GET_ENTRY ]
struct br_mdb_entry
[ MDBA_GET_ENTRY_ATTRS ]
[ MDBE_ATTR_SOURCE ]
struct in_addr / struct in6_addr
[ MDBE_ATTR_SRC_VNI ]
u32
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
include/uapi/linux/if_bridge.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h
index f95326fce6bb..7e1bf080b414 100644
--- a/include/uapi/linux/if_bridge.h
+++ b/include/uapi/linux/if_bridge.h
@@ -723,6 +723,14 @@ enum {
};
#define MDBA_SET_ENTRY_MAX (__MDBA_SET_ENTRY_MAX - 1)
+enum {
+ MDBA_GET_ENTRY_UNSPEC,
+ MDBA_GET_ENTRY,
+ MDBA_GET_ENTRY_ATTRS,
+ __MDBA_GET_ENTRY_MAX,
+};
+#define MDBA_GET_ENTRY_MAX (__MDBA_GET_ENTRY_MAX - 1)
+
/* [MDBA_SET_ENTRY_ATTRS] = {
* [MDBE_ATTR_xxx]
* ...
--
2.40.1
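To make the layout above concrete, here is a rough user space sketch of how such a request could be assembled with nothing but the uAPI headers. This is hypothetical helper code, not part of this series; it assumes headers that already define the new MDBA_GET_* values, the bridge ifindex and group address are placeholders, and error handling is omitted.

/* Hypothetical sketch: assemble an RTM_GETMDB request with the layout
 * shown above (struct br_port_msg, MDBA_GET_ENTRY, optional
 * MDBA_GET_ENTRY_ATTRS nest).
 */
#include <arpa/inet.h>
#include <sys/socket.h>		/* AF_BRIDGE */
#include <linux/if_bridge.h>
#include <linux/if_ether.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <string.h>

struct mdb_get_req {
	struct nlmsghdr nlh;
	struct br_port_msg bpm;
	char attrbuf[256];	/* attributes are appended here */
};

static void req_add_attr(struct mdb_get_req *req, unsigned short type,
			 const void *data, int len)
{
	struct nlattr *nla;

	/* Append one attribute at the current message tail. */
	nla = (struct nlattr *)((char *)&req->nlh +
				NLMSG_ALIGN(req->nlh.nlmsg_len));
	nla->nla_type = type;
	nla->nla_len = NLA_HDRLEN + len;
	memcpy((char *)nla + NLA_HDRLEN, data, len);
	req->nlh.nlmsg_len = NLMSG_ALIGN(req->nlh.nlmsg_len) +
			     NLA_ALIGN(nla->nla_len);
}

static void mdb_get_req_build(struct mdb_get_req *req, int br_ifindex)
{
	struct br_mdb_entry entry = {};

	memset(req, 0, sizeof(*req));
	req->nlh.nlmsg_len = NLMSG_LENGTH(sizeof(req->bpm));
	req->nlh.nlmsg_type = RTM_GETMDB;
	req->nlh.nlmsg_flags = NLM_F_REQUEST;
	req->bpm.family = AF_BRIDGE;
	req->bpm.ifindex = br_ifindex;

	/* MDBA_GET_ENTRY: the group to look up, e.g. 239.1.1.1 */
	entry.addr.proto = htons(ETH_P_IP);
	inet_pton(AF_INET, "239.1.1.1", &entry.addr.u.ip4);
	req_add_attr(req, MDBA_GET_ENTRY, &entry, sizeof(entry));

	/* An optional MDBA_GET_ENTRY_ATTRS nest carrying MDBE_ATTR_SOURCE
	 * and/or MDBE_ATTR_SRC_VNI would be appended in the same way.
	 */
}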
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 07/13] bridge: add MDB get uAPI attributes
2023-10-16 13:12 ` [PATCH net-next 07/13] bridge: add MDB get uAPI attributes Ido Schimmel
@ 2023-10-17 9:08 ` Nikolay Aleksandrov
2023-10-17 10:58 ` Ido Schimmel
0 siblings, 1 reply; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:08 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Add MDB get attributes that correspond to the MDB set attributes used in
> RTM_NEWMDB messages. Specifically, add 'MDBA_GET_ENTRY' which will hold
> a 'struct br_mdb_entry' and 'MDBA_GET_ENTRY_ATTRS' which will hold
> 'MDBE_ATTR_*' attributes that are used as indexes (source IP and source
> VNI).
>
> An example request will look as follows:
>
> [ struct nlmsghdr ]
> [ struct br_port_msg ]
> [ MDBA_GET_ENTRY ]
> struct br_mdb_entry
> [ MDBA_GET_ENTRY_ATTRS ]
> [ MDBE_ATTR_SOURCE ]
> struct in_addr / struct in6_addr
> [ MDBE_ATTR_SRC_VNI ]
> u32
>
Could you please add this info as a comment above the enum?
Similar to the enum below it. It'd be nice to have an example
of what's expected.
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> include/uapi/linux/if_bridge.h | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/include/uapi/linux/if_bridge.h b/include/uapi/linux/if_bridge.h
> index f95326fce6bb..7e1bf080b414 100644
> --- a/include/uapi/linux/if_bridge.h
> +++ b/include/uapi/linux/if_bridge.h
> @@ -723,6 +723,14 @@ enum {
> };
> #define MDBA_SET_ENTRY_MAX (__MDBA_SET_ENTRY_MAX - 1)
>
> +enum {
> + MDBA_GET_ENTRY_UNSPEC,
> + MDBA_GET_ENTRY,
> + MDBA_GET_ENTRY_ATTRS,
> + __MDBA_GET_ENTRY_MAX,
> +};
> +#define MDBA_GET_ENTRY_MAX (__MDBA_GET_ENTRY_MAX - 1)
> +
> /* [MDBA_SET_ENTRY_ATTRS] = {
> * [MDBE_ATTR_xxx]
> * ...
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 07/13] bridge: add MDB get uAPI attributes
2023-10-17 9:08 ` Nikolay Aleksandrov
@ 2023-10-17 10:58 ` Ido Schimmel
0 siblings, 0 replies; 30+ messages in thread
From: Ido Schimmel @ 2023-10-17 10:58 UTC (permalink / raw)
To: Nikolay Aleksandrov
Cc: netdev, bridge, davem, kuba, edumazet, pabeni, roopa, mlxsw
On Tue, Oct 17, 2023 at 12:08:30PM +0300, Nikolay Aleksandrov wrote:
> On 10/16/23 16:12, Ido Schimmel wrote:
> > Add MDB get attributes that correspond to the MDB set attributes used in
> > RTM_NEWMDB messages. Specifically, add 'MDBA_GET_ENTRY' which will hold
> > a 'struct br_mdb_entry' and 'MDBA_GET_ENTRY_ATTRS' which will hold
> > 'MDBE_ATTR_*' attributes that are used as indexes (source IP and source
> > VNI).
> >
> > An example request will look as follows:
> >
> > [ struct nlmsghdr ]
> > [ struct br_port_msg ]
> > [ MDBA_GET_ENTRY ]
> > struct br_mdb_entry
> > [ MDBA_GET_ENTRY_ATTRS ]
> > [ MDBE_ATTR_SOURCE ]
> > struct in_addr / struct in6_addr
> > [ MDBE_ATTR_SRC_VNI ]
> > u32
> >
>
> Could you please add this info as a comment above the enum?
> Similar to the enum below it. It'd be nice to have an example
> of what's expected.
Yes, will add in v2
Thanks
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 08/13] net: Add MDB get device operation
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (6 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 07/13] bridge: add MDB get uAPI attributes Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:08 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 09/13] bridge: mcast: Add MDB get support Ido Schimmel
` (4 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Add MDB net device operation that will be invoked by rtnetlink code in
response to received RTM_GETMDB messages. Subsequent patches will
implement the operation in the bridge and VXLAN drivers.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
include/linux/netdevice.h | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 1c7681263d30..18376b65dc61 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -1586,6 +1586,10 @@ struct net_device_ops {
int (*ndo_mdb_dump)(struct net_device *dev,
struct sk_buff *skb,
struct netlink_callback *cb);
+ int (*ndo_mdb_get)(struct net_device *dev,
+ struct nlattr *tb[], u32 portid,
+ u32 seq,
+ struct netlink_ext_ack *extack);
int (*ndo_bridge_setlink)(struct net_device *dev,
struct nlmsghdr *nlh,
u16 flags,
--
2.40.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 08/13] net: Add MDB get device operation
2023-10-16 13:12 ` [PATCH net-next 08/13] net: Add MDB get device operation Ido Schimmel
@ 2023-10-17 9:08 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:08 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Add MDB net device operation that will be invoked by rtnetlink code in
> response to received RTM_GETMDB messages. Subsequent patches will
> implement the operation in the bridge and VXLAN drivers.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> include/linux/netdevice.h | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index 1c7681263d30..18376b65dc61 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -1586,6 +1586,10 @@ struct net_device_ops {
> int (*ndo_mdb_dump)(struct net_device *dev,
> struct sk_buff *skb,
> struct netlink_callback *cb);
> + int (*ndo_mdb_get)(struct net_device *dev,
> + struct nlattr *tb[], u32 portid,
> + u32 seq,
> + struct netlink_ext_ack *extack);
> int (*ndo_bridge_setlink)(struct net_device *dev,
> struct nlmsghdr *nlh,
> u16 flags,
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 09/13] bridge: mcast: Add MDB get support
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (7 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 08/13] net: Add MDB get device operation Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:24 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 10/13] vxlan: mdb: " Ido Schimmel
` (3 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Implement support for MDB get operation by looking up a matching MDB
entry, allocating the skb according to the entry's size and then filling
in the response. The operation is performed under the bridge multicast
lock to ensure that the entry does not change between the time the reply
size is determined and when the reply is filled in.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
net/bridge/br_device.c | 1 +
net/bridge/br_mdb.c | 154 ++++++++++++++++++++++++++++++++++++++++
net/bridge/br_private.h | 9 +++
3 files changed, 164 insertions(+)
diff --git a/net/bridge/br_device.c b/net/bridge/br_device.c
index d624710b384a..8f40de3af154 100644
--- a/net/bridge/br_device.c
+++ b/net/bridge/br_device.c
@@ -472,6 +472,7 @@ static const struct net_device_ops br_netdev_ops = {
.ndo_mdb_add = br_mdb_add,
.ndo_mdb_del = br_mdb_del,
.ndo_mdb_dump = br_mdb_dump,
+ .ndo_mdb_get = br_mdb_get,
.ndo_bridge_getlink = br_getlink,
.ndo_bridge_setlink = br_setlink,
.ndo_bridge_dellink = br_dellink,
diff --git a/net/bridge/br_mdb.c b/net/bridge/br_mdb.c
index 42983f6a0abd..973e27fe3498 100644
--- a/net/bridge/br_mdb.c
+++ b/net/bridge/br_mdb.c
@@ -1411,3 +1411,157 @@ int br_mdb_del(struct net_device *dev, struct nlattr *tb[],
br_mdb_config_fini(&cfg);
return err;
}
+
+static const struct nla_policy br_mdbe_attrs_get_pol[MDBE_ATTR_MAX + 1] = {
+ [MDBE_ATTR_SOURCE] = NLA_POLICY_RANGE(NLA_BINARY,
+ sizeof(struct in_addr),
+ sizeof(struct in6_addr)),
+};
+
+static int br_mdb_get_parse(struct net_device *dev, struct nlattr *tb[],
+ struct br_ip *group, struct netlink_ext_ack *extack)
+{
+ struct br_mdb_entry *entry = nla_data(tb[MDBA_GET_ENTRY]);
+ struct nlattr *mdbe_attrs[MDBE_ATTR_MAX + 1];
+ int err;
+
+ if (!tb[MDBA_GET_ENTRY_ATTRS]) {
+ __mdb_entry_to_br_ip(entry, group, NULL);
+ return 0;
+ }
+
+ err = nla_parse_nested(mdbe_attrs, MDBE_ATTR_MAX,
+ tb[MDBA_GET_ENTRY_ATTRS], br_mdbe_attrs_get_pol,
+ extack);
+ if (err)
+ return err;
+
+ if (mdbe_attrs[MDBE_ATTR_SOURCE] &&
+ !is_valid_mdb_source(mdbe_attrs[MDBE_ATTR_SOURCE],
+ entry->addr.proto, extack))
+ return -EINVAL;
+
+ __mdb_entry_to_br_ip(entry, group, mdbe_attrs);
+
+ return 0;
+}
+
+static struct sk_buff *
+br_mdb_get_reply_alloc(const struct net_bridge_mdb_entry *mp)
+{
+ struct net_bridge_port_group *pg;
+ size_t nlmsg_size;
+
+ nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0);
+
+ if (mp->host_joined)
+ nlmsg_size += rtnl_mdb_nlmsg_pg_size(NULL);
+
+ for (pg = mlock_dereference(mp->ports, mp->br); pg;
+ pg = mlock_dereference(pg->next, mp->br))
+ nlmsg_size += rtnl_mdb_nlmsg_pg_size(pg);
+
+ return nlmsg_new(nlmsg_size, GFP_ATOMIC);
+}
+
+static int br_mdb_get_reply_fill(struct sk_buff *skb,
+ struct net_bridge_mdb_entry *mp, u32 portid,
+ u32 seq)
+{
+ struct nlattr *mdb_nest, *mdb_entry_nest;
+ struct net_bridge_port_group *pg;
+ struct br_port_msg *bpm;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = nlmsg_put(skb, portid, seq, RTM_NEWMDB, sizeof(*bpm), 0);
+ if (!nlh)
+ return -EMSGSIZE;
+
+ bpm = nlmsg_data(nlh);
+ memset(bpm, 0, sizeof(*bpm));
+ bpm->family = AF_BRIDGE;
+ bpm->ifindex = mp->br->dev->ifindex;
+ mdb_nest = nla_nest_start_noflag(skb, MDBA_MDB);
+ if (!mdb_nest) {
+ err = -EMSGSIZE;
+ goto cancel;
+ }
+ mdb_entry_nest = nla_nest_start_noflag(skb, MDBA_MDB_ENTRY);
+ if (!mdb_entry_nest) {
+ err = -EMSGSIZE;
+ goto cancel;
+ }
+
+ if (mp->host_joined) {
+ err = __mdb_fill_info(skb, mp, NULL);
+ if (err)
+ goto cancel;
+ }
+
+ for (pg = mlock_dereference(mp->ports, mp->br); pg;
+ pg = mlock_dereference(pg->next, mp->br)) {
+ err = __mdb_fill_info(skb, mp, pg);
+ if (err)
+ goto cancel;
+ }
+
+ nla_nest_end(skb, mdb_entry_nest);
+ nla_nest_end(skb, mdb_nest);
+ nlmsg_end(skb, nlh);
+
+ return 0;
+
+cancel:
+ nlmsg_cancel(skb, nlh);
+ return err;
+}
+
+int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
+ struct netlink_ext_ack *extack)
+{
+ struct net_bridge *br = netdev_priv(dev);
+ struct net_bridge_mdb_entry *mp;
+ struct sk_buff *skb;
+ struct br_ip group;
+ int err;
+
+ err = br_mdb_get_parse(dev, tb, &group, extack);
+ if (err)
+ return err;
+
+ spin_lock_bh(&br->multicast_lock);
+
+ mp = br_mdb_ip_get(br, &group);
+ if (!mp) {
+ NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
+ err = -ENOENT;
+ goto unlock;
+ }
+
+ skb = br_mdb_get_reply_alloc(mp);
+ if (!skb) {
+ err = -ENOMEM;
+ goto unlock;
+ }
+
+ err = br_mdb_get_reply_fill(skb, mp, portid, seq);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
+ goto free;
+ }
+
+ spin_unlock_bh(&br->multicast_lock);
+
+ return rtnl_unicast(skb, dev_net(dev), portid);
+
+free:
+ kfree_skb(skb);
+unlock:
+ spin_unlock_bh(&br->multicast_lock);
+ return err;
+}
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index 3220898424ce..ad49d5008ec2 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -1018,6 +1018,8 @@ int br_mdb_del(struct net_device *dev, struct nlattr *tb[],
struct netlink_ext_ack *extack);
int br_mdb_dump(struct net_device *dev, struct sk_buff *skb,
struct netlink_callback *cb);
+int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
+ struct netlink_ext_ack *extack);
void br_multicast_host_join(const struct net_bridge_mcast *brmctx,
struct net_bridge_mdb_entry *mp, bool notify);
void br_multicast_host_leave(struct net_bridge_mdb_entry *mp, bool notify);
@@ -1428,6 +1430,13 @@ static inline int br_mdb_dump(struct net_device *dev, struct sk_buff *skb,
return 0;
}
+static inline int br_mdb_get(struct net_device *dev, struct nlattr *tb[],
+ u32 portid, u32 seq,
+ struct netlink_ext_ack *extack)
+{
+ return -EOPNOTSUPP;
+}
+
static inline int br_mdb_hash_init(struct net_bridge *br)
{
return 0;
--
2.40.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 09/13] bridge: mcast: Add MDB get support
2023-10-16 13:12 ` [PATCH net-next 09/13] bridge: mcast: Add MDB get support Ido Schimmel
@ 2023-10-17 9:24 ` Nikolay Aleksandrov
2023-10-17 11:03 ` Ido Schimmel
0 siblings, 1 reply; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:24 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Implement support for MDB get operation by looking up a matching MDB
> entry, allocating the skb according to the entry's size and then filling
> in the response. The operation is performed under the bridge multicast
> lock to ensure that the entry does not change between the time the reply
> size is determined and when the reply is filled in.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> net/bridge/br_device.c | 1 +
> net/bridge/br_mdb.c | 154 ++++++++++++++++++++++++++++++++++++++++
> net/bridge/br_private.h | 9 +++
> 3 files changed, 164 insertions(+)
>
[snip]
> +int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
> + struct netlink_ext_ack *extack)
> +{
> + struct net_bridge *br = netdev_priv(dev);
> + struct net_bridge_mdb_entry *mp;
> + struct sk_buff *skb;
> + struct br_ip group;
> + int err;
> +
> + err = br_mdb_get_parse(dev, tb, &group, extack);
> + if (err)
> + return err;
> +
> + spin_lock_bh(&br->multicast_lock);
Since this is only reading, could we use rcu to avoid blocking mcast
processing?
> +
> + mp = br_mdb_ip_get(br, &group);
> + if (!mp) {
> + NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
> + err = -ENOENT;
> + goto unlock;
> + }
> +
> + skb = br_mdb_get_reply_alloc(mp);
> + if (!skb) {
> + err = -ENOMEM;
> + goto unlock;
> + }
> +
> + err = br_mdb_get_reply_fill(skb, mp, portid, seq);
> + if (err) {
> + NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
> + goto free;
> + }
> +
> + spin_unlock_bh(&br->multicast_lock);
> +
> + return rtnl_unicast(skb, dev_net(dev), portid);
> +
> +free:
> + kfree_skb(skb);
> +unlock:
> + spin_unlock_bh(&br->multicast_lock);
> + return err;
> +}
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 09/13] bridge: mcast: Add MDB get support
2023-10-17 9:24 ` Nikolay Aleksandrov
@ 2023-10-17 11:03 ` Ido Schimmel
2023-10-17 12:53 ` Nikolay Aleksandrov
0 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-17 11:03 UTC (permalink / raw)
To: Nikolay Aleksandrov
Cc: netdev, bridge, davem, kuba, edumazet, pabeni, roopa, mlxsw
On Tue, Oct 17, 2023 at 12:24:44PM +0300, Nikolay Aleksandrov wrote:
> On 10/16/23 16:12, Ido Schimmel wrote:
> > Implement support for MDB get operation by looking up a matching MDB
> > entry, allocating the skb according to the entry's size and then filling
> > in the response. The operation is performed under the bridge multicast
> > lock to ensure that the entry does not change between the time the reply
> > size is determined and when the reply is filled in.
> >
> > Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> > ---
> > net/bridge/br_device.c | 1 +
> > net/bridge/br_mdb.c | 154 ++++++++++++++++++++++++++++++++++++++++
> > net/bridge/br_private.h | 9 +++
> > 3 files changed, 164 insertions(+)
> >
> [snip]
> > +int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
> > + struct netlink_ext_ack *extack)
> > +{
> > + struct net_bridge *br = netdev_priv(dev);
> > + struct net_bridge_mdb_entry *mp;
> > + struct sk_buff *skb;
> > + struct br_ip group;
> > + int err;
> > +
> > + err = br_mdb_get_parse(dev, tb, &group, extack);
> > + if (err)
> > + return err;
> > +
> > + spin_lock_bh(&br->multicast_lock);
>
> Since this is only reading, could we use rcu to avoid blocking mcast
> processing?
I tried to explain this choice in the commit message. Do you think it's
a non-issue?
>
> > +
> > + mp = br_mdb_ip_get(br, &group);
> > + if (!mp) {
> > + NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
> > + err = -ENOENT;
> > + goto unlock;
> > + }
> > +
> > + skb = br_mdb_get_reply_alloc(mp);
> > + if (!skb) {
> > + err = -ENOMEM;
> > + goto unlock;
> > + }
> > +
> > + err = br_mdb_get_reply_fill(skb, mp, portid, seq);
> > + if (err) {
> > + NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
> > + goto free;
> > + }
> > +
> > + spin_unlock_bh(&br->multicast_lock);
> > +
> > + return rtnl_unicast(skb, dev_net(dev), portid);
> > +
> > +free:
> > + kfree_skb(skb);
> > +unlock:
> > + spin_unlock_bh(&br->multicast_lock);
> > + return err;
> > +}
>
^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 09/13] bridge: mcast: Add MDB get support
2023-10-17 11:03 ` Ido Schimmel
@ 2023-10-17 12:53 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 12:53 UTC (permalink / raw)
To: Ido Schimmel; +Cc: netdev, bridge, davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/17/23 14:03, Ido Schimmel wrote:
> On Tue, Oct 17, 2023 at 12:24:44PM +0300, Nikolay Aleksandrov wrote:
>> On 10/16/23 16:12, Ido Schimmel wrote:
>>> Implement support for MDB get operation by looking up a matching MDB
>>> entry, allocating the skb according to the entry's size and then filling
>>> in the response. The operation is performed under the bridge multicast
>>> lock to ensure that the entry does not change between the time the reply
>>> size is determined and when the reply is filled in.
>>>
>>> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
>>> ---
>>> net/bridge/br_device.c | 1 +
>>> net/bridge/br_mdb.c | 154 ++++++++++++++++++++++++++++++++++++++++
>>> net/bridge/br_private.h | 9 +++
>>> 3 files changed, 164 insertions(+)
>>>
>> [snip]
>>> +int br_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid, u32 seq,
>>> + struct netlink_ext_ack *extack)
>>> +{
>>> + struct net_bridge *br = netdev_priv(dev);
>>> + struct net_bridge_mdb_entry *mp;
>>> + struct sk_buff *skb;
>>> + struct br_ip group;
>>> + int err;
>>> +
>>> + err = br_mdb_get_parse(dev, tb, &group, extack);
>>> + if (err)
>>> + return err;
>>> +
>>> + spin_lock_bh(&br->multicast_lock);
>>
>> Since this is only reading, could we use rcu to avoid blocking mcast
>> processing?
>
> I tried to explain this choice in the commit message. Do you think it's
> a non-issue?
>
Unless you really need a stable snapshot, I think it's worth
not blocking igmp processing for a read. It's not critical;
if you do need a stable snapshot then it's ok.
>>
>>> +
>>> + mp = br_mdb_ip_get(br, &group);
>>> + if (!mp) {
>>> + NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
>>> + err = -ENOENT;
>>> + goto unlock;
>>> + }
>>> +
>>> + skb = br_mdb_get_reply_alloc(mp);
>>> + if (!skb) {
>>> + err = -ENOMEM;
>>> + goto unlock;
>>> + }
>>> +
>>> + err = br_mdb_get_reply_fill(skb, mp, portid, seq);
>>> + if (err) {
>>> + NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
>>> + goto free;
>>> + }
>>> +
>>> + spin_unlock_bh(&br->multicast_lock);
>>> +
>>> + return rtnl_unicast(skb, dev_net(dev), portid);
>>> +
>>> +free:
>>> + kfree_skb(skb);
>>> +unlock:
>>> + spin_unlock_bh(&br->multicast_lock);
>>> + return err;
>>> +}
>>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 10/13] vxlan: mdb: Add MDB get support
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (8 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 09/13] bridge: mcast: Add MDB get support Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:28 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 11/13] rtnetlink: " Ido Schimmel
` (2 subsequent siblings)
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Implement support for MDB get operation by looking up a matching MDB
entry, allocating the skb according to the entry's size and then filling
in the response.
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
drivers/net/vxlan/vxlan_core.c | 1 +
drivers/net/vxlan/vxlan_mdb.c | 150 ++++++++++++++++++++++++++++++
drivers/net/vxlan/vxlan_private.h | 2 +
3 files changed, 153 insertions(+)
diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
index 6f7d45e3cfa2..7ed19f2cf6f5 100644
--- a/drivers/net/vxlan/vxlan_core.c
+++ b/drivers/net/vxlan/vxlan_core.c
@@ -3302,6 +3302,7 @@ static const struct net_device_ops vxlan_netdev_ether_ops = {
.ndo_mdb_add = vxlan_mdb_add,
.ndo_mdb_del = vxlan_mdb_del,
.ndo_mdb_dump = vxlan_mdb_dump,
+ .ndo_mdb_get = vxlan_mdb_get,
.ndo_fill_metadata_dst = vxlan_fill_metadata_dst,
};
diff --git a/drivers/net/vxlan/vxlan_mdb.c b/drivers/net/vxlan/vxlan_mdb.c
index 19640f7e3a88..e472fd67fc2e 100644
--- a/drivers/net/vxlan/vxlan_mdb.c
+++ b/drivers/net/vxlan/vxlan_mdb.c
@@ -1306,6 +1306,156 @@ int vxlan_mdb_del(struct net_device *dev, struct nlattr *tb[],
return err;
}
+static const struct nla_policy vxlan_mdbe_attrs_get_pol[MDBE_ATTR_MAX + 1] = {
+ [MDBE_ATTR_SOURCE] = NLA_POLICY_RANGE(NLA_BINARY,
+ sizeof(struct in_addr),
+ sizeof(struct in6_addr)),
+ [MDBE_ATTR_SRC_VNI] = NLA_POLICY_FULL_RANGE(NLA_U32, &vni_range),
+};
+
+static int vxlan_mdb_get_parse(struct net_device *dev, struct nlattr *tb[],
+ struct vxlan_mdb_entry_key *group,
+ struct netlink_ext_ack *extack)
+{
+ struct br_mdb_entry *entry = nla_data(tb[MDBA_GET_ENTRY]);
+ struct nlattr *mdbe_attrs[MDBE_ATTR_MAX + 1];
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ int err;
+
+ memset(group, 0, sizeof(*group));
+ group->vni = vxlan->default_dst.remote_vni;
+
+ if (!tb[MDBA_GET_ENTRY_ATTRS]) {
+ vxlan_mdb_group_set(group, entry, NULL);
+ return 0;
+ }
+
+ err = nla_parse_nested(mdbe_attrs, MDBE_ATTR_MAX,
+ tb[MDBA_GET_ENTRY_ATTRS],
+ vxlan_mdbe_attrs_get_pol, extack);
+ if (err)
+ return err;
+
+ if (mdbe_attrs[MDBE_ATTR_SOURCE] &&
+ !vxlan_mdb_is_valid_source(mdbe_attrs[MDBE_ATTR_SOURCE],
+ entry->addr.proto, extack))
+ return -EINVAL;
+
+ vxlan_mdb_group_set(group, entry, mdbe_attrs[MDBE_ATTR_SOURCE]);
+
+ if (mdbe_attrs[MDBE_ATTR_SRC_VNI])
+ group->vni =
+ cpu_to_be32(nla_get_u32(mdbe_attrs[MDBE_ATTR_SRC_VNI]));
+
+ return 0;
+}
+
+static struct sk_buff *
+vxlan_mdb_get_reply_alloc(const struct vxlan_dev *vxlan,
+ const struct vxlan_mdb_entry *mdb_entry)
+{
+ struct vxlan_mdb_remote *remote;
+ size_t nlmsg_size;
+
+ nlmsg_size = NLMSG_ALIGN(sizeof(struct br_port_msg)) +
+ /* MDBA_MDB */
+ nla_total_size(0) +
+ /* MDBA_MDB_ENTRY */
+ nla_total_size(0);
+
+ list_for_each_entry(remote, &mdb_entry->remotes, list)
+ nlmsg_size += vxlan_mdb_nlmsg_remote_size(vxlan, mdb_entry,
+ remote);
+
+ return nlmsg_new(nlmsg_size, GFP_KERNEL);
+}
+
+static int
+vxlan_mdb_get_reply_fill(const struct vxlan_dev *vxlan,
+ struct sk_buff *skb,
+ const struct vxlan_mdb_entry *mdb_entry,
+ u32 portid, u32 seq)
+{
+ struct nlattr *mdb_nest, *mdb_entry_nest;
+ struct vxlan_mdb_remote *remote;
+ struct br_port_msg *bpm;
+ struct nlmsghdr *nlh;
+ int err;
+
+ nlh = nlmsg_put(skb, portid, seq, RTM_NEWMDB, sizeof(*bpm), 0);
+ if (!nlh)
+ return -EMSGSIZE;
+
+ bpm = nlmsg_data(nlh);
+ memset(bpm, 0, sizeof(*bpm));
+ bpm->family = AF_BRIDGE;
+ bpm->ifindex = vxlan->dev->ifindex;
+ mdb_nest = nla_nest_start_noflag(skb, MDBA_MDB);
+ if (!mdb_nest) {
+ err = -EMSGSIZE;
+ goto cancel;
+ }
+ mdb_entry_nest = nla_nest_start_noflag(skb, MDBA_MDB_ENTRY);
+ if (!mdb_entry_nest) {
+ err = -EMSGSIZE;
+ goto cancel;
+ }
+
+ list_for_each_entry(remote, &mdb_entry->remotes, list) {
+ err = vxlan_mdb_entry_info_fill(vxlan, skb, mdb_entry, remote);
+ if (err)
+ goto cancel;
+ }
+
+ nla_nest_end(skb, mdb_entry_nest);
+ nla_nest_end(skb, mdb_nest);
+ nlmsg_end(skb, nlh);
+
+ return 0;
+
+cancel:
+ nlmsg_cancel(skb, nlh);
+ return err;
+}
+
+int vxlan_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid,
+ u32 seq, struct netlink_ext_ack *extack)
+{
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ struct vxlan_mdb_entry *mdb_entry;
+ struct vxlan_mdb_entry_key group;
+ struct sk_buff *skb;
+ int err;
+
+ ASSERT_RTNL();
+
+ err = vxlan_mdb_get_parse(dev, tb, &group, extack);
+ if (err)
+ return err;
+
+ mdb_entry = vxlan_mdb_entry_lookup(vxlan, &group);
+ if (!mdb_entry) {
+ NL_SET_ERR_MSG_MOD(extack, "MDB entry not found");
+ return -ENOENT;
+ }
+
+ skb = vxlan_mdb_get_reply_alloc(vxlan, mdb_entry);
+ if (!skb)
+ return -ENOMEM;
+
+ err = vxlan_mdb_get_reply_fill(vxlan, skb, mdb_entry, portid, seq);
+ if (err) {
+ NL_SET_ERR_MSG_MOD(extack, "Failed to fill MDB get reply");
+ goto free;
+ }
+
+ return rtnl_unicast(skb, dev_net(dev), portid);
+
+free:
+ kfree_skb(skb);
+ return err;
+}
+
struct vxlan_mdb_entry *vxlan_mdb_entry_skb_get(struct vxlan_dev *vxlan,
struct sk_buff *skb,
__be32 src_vni)
diff --git a/drivers/net/vxlan/vxlan_private.h b/drivers/net/vxlan/vxlan_private.h
index 817fa3075842..db679c380955 100644
--- a/drivers/net/vxlan/vxlan_private.h
+++ b/drivers/net/vxlan/vxlan_private.h
@@ -235,6 +235,8 @@ int vxlan_mdb_add(struct net_device *dev, struct nlattr *tb[], u16 nlmsg_flags,
struct netlink_ext_ack *extack);
int vxlan_mdb_del(struct net_device *dev, struct nlattr *tb[],
struct netlink_ext_ack *extack);
+int vxlan_mdb_get(struct net_device *dev, struct nlattr *tb[], u32 portid,
+ u32 seq, struct netlink_ext_ack *extack);
struct vxlan_mdb_entry *vxlan_mdb_entry_skb_get(struct vxlan_dev *vxlan,
struct sk_buff *skb,
__be32 src_vni);
--
2.40.1
^ permalink raw reply related [flat|nested] 30+ messages in thread
* Re: [PATCH net-next 10/13] vxlan: mdb: Add MDB get support
2023-10-16 13:12 ` [PATCH net-next 10/13] vxlan: mdb: " Ido Schimmel
@ 2023-10-17 9:28 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:28 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Implement support for MDB get operation by looking up a matching MDB
> entry, allocating the skb according to the entry's size and then filling
> in the response.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> drivers/net/vxlan/vxlan_core.c | 1 +
> drivers/net/vxlan/vxlan_mdb.c | 150 ++++++++++++++++++++++++++++++
> drivers/net/vxlan/vxlan_private.h | 2 +
> 3 files changed, 153 insertions(+)
>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
^ permalink raw reply [flat|nested] 30+ messages in thread
* [PATCH net-next 11/13] rtnetlink: Add MDB get support
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (9 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 10/13] vxlan: mdb: " Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:29 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 12/13] selftests: bridge_mdb: Use MDB get instead of dump Ido Schimmel
2023-10-16 13:12 ` [PATCH net-next 13/13] selftests: vxlan_mdb: " Ido Schimmel
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Now that both the bridge and VXLAN drivers implement the MDB get net
device operation, expose the functionality to user space by registering
a handler for RTM_GETMDB messages. Derive the net device from the
ifindex specified in the ancillary header and invoke its MDB get NDO.
Note that unlike other get handlers, the allocation of the skb
containing the response is not performed in the common rtnetlink code as
the size is variable and needs to be determined by the respective
driver.
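For illustration only, a minimal sketch of the user-visible flow this enables,
assuming an iproute2 build with "bridge mdb get" support and an existing
br0/swp1 setup (the device names, group and VLAN below are arbitrary examples,
not part of this patch):
# Add an entry, then query it directly instead of dumping the whole MDB
# and filtering with grep:
bridge mdb add dev br0 port swp1 grp 239.1.1.1 permanent vid 10
bridge -d -s mdb get dev br0 grp 239.1.1.1 vid 10
# A request for a group that was never configured is expected to fail
# with an error (e.g. ENOENT) reported via extack instead of simply
# producing no output:
bridge mdb get dev br0 grp 239.2.2.2 vid 10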
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
net/core/rtnetlink.c | 89 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 88 insertions(+), 1 deletion(-)
diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
index eef7f7788996..e4fb242655b4 100644
--- a/net/core/rtnetlink.c
+++ b/net/core/rtnetlink.c
@@ -6221,6 +6221,93 @@ static int rtnl_mdb_dump(struct sk_buff *skb, struct netlink_callback *cb)
return skb->len;
}
+static int rtnl_validate_mdb_entry_get(const struct nlattr *attr,
+ struct netlink_ext_ack *extack)
+{
+ struct br_mdb_entry *entry = nla_data(attr);
+
+ if (nla_len(attr) != sizeof(struct br_mdb_entry)) {
+ NL_SET_ERR_MSG_ATTR(extack, attr, "Invalid attribute length");
+ return -EINVAL;
+ }
+
+ if (entry->ifindex) {
+ NL_SET_ERR_MSG(extack, "Entry ifindex cannot be specified");
+ return -EINVAL;
+ }
+
+ if (entry->state) {
+ NL_SET_ERR_MSG(extack, "Entry state cannot be specified");
+ return -EINVAL;
+ }
+
+ if (entry->flags) {
+ NL_SET_ERR_MSG(extack, "Entry flags cannot be specified");
+ return -EINVAL;
+ }
+
+ if (entry->vid >= VLAN_VID_MASK) {
+ NL_SET_ERR_MSG(extack, "Invalid entry VLAN id");
+ return -EINVAL;
+ }
+
+ if (entry->addr.proto != htons(ETH_P_IP) &&
+ entry->addr.proto != htons(ETH_P_IPV6) &&
+ entry->addr.proto != 0) {
+ NL_SET_ERR_MSG(extack, "Unknown entry protocol");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const struct nla_policy mdba_get_policy[MDBA_GET_ENTRY_MAX + 1] = {
+ [MDBA_GET_ENTRY] = NLA_POLICY_VALIDATE_FN(NLA_BINARY,
+ rtnl_validate_mdb_entry_get,
+ sizeof(struct br_mdb_entry)),
+ [MDBA_GET_ENTRY_ATTRS] = { .type = NLA_NESTED },
+};
+
+static int rtnl_mdb_get(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ struct netlink_ext_ack *extack)
+{
+ struct nlattr *tb[MDBA_GET_ENTRY_MAX + 1];
+ struct net *net = sock_net(in_skb->sk);
+ struct br_port_msg *bpm;
+ struct net_device *dev;
+ int err;
+
+ err = nlmsg_parse(nlh, sizeof(struct br_port_msg), tb,
+ MDBA_GET_ENTRY_MAX, mdba_get_policy, extack);
+ if (err)
+ return err;
+
+ bpm = nlmsg_data(nlh);
+ if (!bpm->ifindex) {
+ NL_SET_ERR_MSG(extack, "Invalid ifindex");
+ return -EINVAL;
+ }
+
+ dev = __dev_get_by_index(net, bpm->ifindex);
+ if (!dev) {
+ NL_SET_ERR_MSG(extack, "Device doesn't exist");
+ return -ENODEV;
+ }
+
+ if (NL_REQ_ATTR_CHECK(extack, NULL, tb, MDBA_GET_ENTRY)) {
+ NL_SET_ERR_MSG(extack, "Missing MDBA_GET_ENTRY attribute");
+ return -EINVAL;
+ }
+
+ if (!dev->netdev_ops->ndo_mdb_get) {
+ NL_SET_ERR_MSG(extack, "Device does not support MDB operations");
+ return -EOPNOTSUPP;
+ }
+
+ return dev->netdev_ops->ndo_mdb_get(dev, tb, NETLINK_CB(in_skb).portid,
+ nlh->nlmsg_seq, extack);
+}
+
static int rtnl_validate_mdb_entry(const struct nlattr *attr,
struct netlink_ext_ack *extack)
{
@@ -6597,7 +6684,7 @@ void __init rtnetlink_init(void)
0);
rtnl_register(PF_UNSPEC, RTM_SETSTATS, rtnl_stats_set, NULL, 0);
- rtnl_register(PF_BRIDGE, RTM_GETMDB, NULL, rtnl_mdb_dump, 0);
+ rtnl_register(PF_BRIDGE, RTM_GETMDB, rtnl_mdb_get, rtnl_mdb_dump, 0);
rtnl_register(PF_BRIDGE, RTM_NEWMDB, rtnl_mdb_add, NULL, 0);
rtnl_register(PF_BRIDGE, RTM_DELMDB, rtnl_mdb_del, NULL, 0);
}
--
2.40.1
* Re: [PATCH net-next 11/13] rtnetlink: Add MDB get support
2023-10-16 13:12 ` [PATCH net-next 11/13] rtnetlink: " Ido Schimmel
@ 2023-10-17 9:29 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:29 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Now that both the bridge and VXLAN drivers implement the MDB get net
> device operation, expose the functionality to user space by registering
> a handler for RTM_GETMDB messages. Derive the net device from the
> ifindex specified in the ancillary header and invoke its MDB get NDO.
>
> Note that unlike other get handlers, the allocation of the skb
> containing the response is not performed in the common rtnetlink code as
> the size is variable and needs to be determined by the respective
> driver.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> net/core/rtnetlink.c | 89 +++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 88 insertions(+), 1 deletion(-)
>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
* [PATCH net-next 12/13] selftests: bridge_mdb: Use MDB get instead of dump
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (10 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 11/13] rtnetlink: " Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:29 ` Nikolay Aleksandrov
2023-10-16 13:12 ` [PATCH net-next 13/13] selftests: vxlan_mdb: " Ido Schimmel
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Test the new MDB get functionality by converting dump and grep to MDB
get.
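The conversion follows one pattern throughout the script; a condensed sketch
using the variables and helpers already defined in the test (see the hunks
below):
# Before: dump the whole VLAN and filter with grep, using grep -v "src"
# to tell (*, G) entries apart from (S, G) entries.
bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | grep -q "permanent"
# After: request the specific entry directly; an (S, G) entry is
# selected with the "src" keyword and plain existence is checked via
# the exit status of the get request itself.
bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "permanent"
bridge mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null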
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
.../selftests/net/forwarding/bridge_mdb.sh | 184 +++++++-----------
1 file changed, 71 insertions(+), 113 deletions(-)
diff --git a/tools/testing/selftests/net/forwarding/bridge_mdb.sh b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
index d0c6c499d5da..e4e3e9405056 100755
--- a/tools/testing/selftests/net/forwarding/bridge_mdb.sh
+++ b/tools/testing/selftests/net/forwarding/bridge_mdb.sh
@@ -145,14 +145,14 @@ cfg_test_host_common()
# Check basic add, replace and delete behavior.
bridge mdb add dev br0 port br0 grp $grp $state vid 10
- bridge mdb show dev br0 vid 10 | grep -q "$grp"
+ bridge mdb get dev br0 grp $grp vid 10 &> /dev/null
check_err $? "Failed to add $name host entry"
bridge mdb replace dev br0 port br0 grp $grp $state vid 10 &> /dev/null
check_fail $? "Managed to replace $name host entry"
bridge mdb del dev br0 port br0 grp $grp $state vid 10
- bridge mdb show dev br0 vid 10 | grep -q "$grp"
+ bridge mdb get dev br0 grp $grp vid 10 &> /dev/null
check_fail $? "Failed to delete $name host entry"
# Check error cases.
@@ -200,7 +200,7 @@ cfg_test_port_common()
# Check basic add, replace and delete behavior.
bridge mdb add dev br0 port $swp1 $grp_key permanent vid 10
- bridge mdb show dev br0 vid 10 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 10 &> /dev/null
check_err $? "Failed to add $name entry"
bridge mdb replace dev br0 port $swp1 $grp_key permanent vid 10 \
@@ -208,31 +208,31 @@ cfg_test_port_common()
check_err $? "Failed to replace $name entry"
bridge mdb del dev br0 port $swp1 $grp_key permanent vid 10
- bridge mdb show dev br0 vid 10 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 10 &> /dev/null
check_fail $? "Failed to delete $name entry"
# Check default protocol and replacement.
bridge mdb add dev br0 port $swp1 $grp_key permanent vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | grep -q "static"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "static"
check_err $? "$name entry not added with default \"static\" protocol"
bridge mdb replace dev br0 port $swp1 $grp_key permanent vid 10 \
proto 123
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | grep -q "123"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "123"
check_err $? "Failed to replace protocol of $name entry"
bridge mdb del dev br0 port $swp1 $grp_key permanent vid 10
# Check behavior when VLAN is not specified.
bridge mdb add dev br0 port $swp1 $grp_key permanent
- bridge mdb show dev br0 vid 10 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 10 &> /dev/null
check_err $? "$name entry with VLAN 10 not added when VLAN was not specified"
- bridge mdb show dev br0 vid 20 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 20 &> /dev/null
check_err $? "$name entry with VLAN 20 not added when VLAN was not specified"
bridge mdb del dev br0 port $swp1 $grp_key permanent
- bridge mdb show dev br0 vid 10 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 10 &> /dev/null
check_fail $? "$name entry with VLAN 10 not deleted when VLAN was not specified"
- bridge mdb show dev br0 vid 20 | grep -q "$grp_key"
+ bridge mdb get dev br0 $grp_key vid 20 &> /dev/null
check_fail $? "$name entry with VLAN 20 not deleted when VLAN was not specified"
# Check behavior when bridge port is down.
@@ -298,21 +298,21 @@ __cfg_test_port_ip_star_g()
RET=0
bridge mdb add dev br0 port $swp1 grp $grp vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "exclude"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "exclude"
check_err $? "Default filter mode is not \"exclude\""
bridge mdb del dev br0 port $swp1 grp $grp vid 10
# Check basic add and delete behavior.
bridge mdb add dev br0 port $swp1 grp $grp vid 10 filter_mode exclude \
source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q -v "src"
+ bridge -d mdb get dev br0 grp $grp vid 10 &> /dev/null
check_err $? "(*, G) entry not created"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src1"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null
check_err $? "(S, G) entry not created"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q -v "src"
+ bridge -d mdb get dev br0 grp $grp vid 10 &> /dev/null
check_fail $? "(*, G) entry not deleted"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src1"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null
check_fail $? "(S, G) entry not deleted"
## State (permanent / temp) tests.
@@ -321,18 +321,15 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp permanent vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "permanent"
check_err $? "(*, G) entry not added as \"permanent\" when should"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | \
grep -q "permanent"
check_err $? "(S, G) entry not added as \"permanent\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
check_err $? "(*, G) \"permanent\" entry has a pending group timer"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "\/0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
check_err $? "\"permanent\" source entry has a pending source timer"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -342,18 +339,14 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "temp"
check_err $? "(*, G) EXCLUDE entry not added as \"temp\" when should"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "temp"
check_err $? "(S, G) \"blocked\" entry not added as \"temp\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
check_fail $? "(*, G) EXCLUDE entry does not have a pending group timer"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "\/0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
check_err $? "\"blocked\" source entry has a pending source timer"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -363,18 +356,14 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode include source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "temp"
check_err $? "(*, G) INCLUDE entry not added as \"temp\" when should"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "temp"
check_err $? "(S, G) entry not added as \"temp\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q " 0.00"
check_err $? "(*, G) INCLUDE entry has a pending group timer"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "\/0.00"
+ bridge -d -s mdb get dev br0 grp $grp vid 10 | grep -q "\/0.00"
check_fail $? "Source entry does not have a pending source timer"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -383,8 +372,7 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode include source_list $src1
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 grp $grp src $src1 vid 10 | grep -q " 0.00"
check_err $? "(S, G) entry has a pending group timer"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -396,11 +384,9 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp vid 10 \
filter_mode include source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "include"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "include"
check_err $? "(*, G) INCLUDE not added with \"include\" filter mode"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "blocked"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "blocked"
check_fail $? "(S, G) entry marked as \"blocked\" when should not"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -410,11 +396,9 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "exclude"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "exclude"
check_err $? "(*, G) EXCLUDE not added with \"exclude\" filter mode"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "blocked"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "blocked"
check_err $? "(S, G) entry not marked as \"blocked\" when should"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -426,11 +410,9 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp1 grp $grp vid 10 \
filter_mode exclude source_list $src1 proto zebra
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "zebra"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "zebra"
check_err $? "(*, G) entry not added with \"zebra\" protocol"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "zebra"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "zebra"
check_err $? "(S, G) entry not marked added with \"zebra\" protocol"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -443,20 +425,16 @@ __cfg_test_port_ip_star_g()
bridge mdb replace dev br0 port $swp1 grp $grp permanent vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "permanent"
check_err $? "(*, G) entry not marked as \"permanent\" after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "permanent"
check_err $? "(S, G) entry not marked as \"permanent\" after replace"
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "temp"
check_err $? "(*, G) entry not marked as \"temp\" after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "temp"
check_err $? "(S, G) entry not marked as \"temp\" after replace"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -467,20 +445,16 @@ __cfg_test_port_ip_star_g()
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode include source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "include"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "include"
check_err $? "(*, G) not marked with \"include\" filter mode after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "blocked"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "blocked"
check_fail $? "(S, G) marked as \"blocked\" after replace"
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "exclude"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "exclude"
check_err $? "(*, G) not marked with \"exclude\" filter mode after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "blocked"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "blocked"
check_err $? "(S, G) not marked as \"blocked\" after replace"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -491,20 +465,20 @@ __cfg_test_port_ip_star_g()
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1,$src2,$src3
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src1"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src1 not created after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src2"
+ bridge -d mdb get dev br0 grp $grp src $src2 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src2 not created after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src3"
+ bridge -d mdb get dev br0 grp $grp src $src3 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src3 not created after replace"
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1,$src3
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src1"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src1 not created after second replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src2"
+ bridge -d mdb get dev br0 grp $grp src $src2 vid 10 &> /dev/null
check_fail $? "(S, G) entry for source $src2 created after second replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -q "src $src3"
+ bridge -d mdb get dev br0 grp $grp src $src3 vid 10 &> /dev/null
check_err $? "(S, G) entry for source $src3 not created after second replace"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -515,11 +489,9 @@ __cfg_test_port_ip_star_g()
bridge mdb replace dev br0 port $swp1 grp $grp temp vid 10 \
filter_mode exclude source_list $src1 proto bgp
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep -v "src" | \
- grep -q "bgp"
+ bridge -d mdb get dev br0 grp $grp vid 10 | grep -q "bgp"
check_err $? "(*, G) protocol not changed to \"bgp\" after replace"
- bridge -d mdb show dev br0 vid 10 | grep "$grp" | grep "src" | \
- grep -q "bgp"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep -q "bgp"
check_err $? "(S, G) protocol not changed to \"bgp\" after replace"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
@@ -532,8 +504,8 @@ __cfg_test_port_ip_star_g()
bridge mdb add dev br0 port $swp2 grp $grp vid 10 \
filter_mode include source_list $src1
bridge mdb add dev br0 port $swp1 grp $grp vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$swp1" | grep "$grp" | \
- grep "$src1" | grep -q "added_by_star_ex"
+ bridge -d mdb get dev br0 grp $grp src $src1 vid 10 | grep "$swp1" | \
+ grep -q "added_by_star_ex"
check_err $? "\"added_by_star_ex\" entry not created after adding (*, G) entry"
bridge mdb del dev br0 port $swp1 grp $grp vid 10
bridge mdb del dev br0 port $swp2 grp $grp src $src1 vid 10
@@ -606,27 +578,23 @@ __cfg_test_port_ip_sg()
RET=0
bridge mdb add dev br0 port $swp1 $grp_key vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | grep -q "include"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "include"
check_err $? "Default filter mode is not \"include\""
bridge mdb del dev br0 port $swp1 $grp_key vid 10
# Check that entries can be added as both permanent and temp and that
# group timer is set correctly.
bridge mdb add dev br0 port $swp1 $grp_key permanent vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "permanent"
check_err $? "Entry not added as \"permanent\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 $grp_key vid 10 | grep -q " 0.00"
check_err $? "\"permanent\" entry has a pending group timer"
bridge mdb del dev br0 port $swp1 $grp_key vid 10
bridge mdb add dev br0 port $swp1 $grp_key temp vid 10
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "temp"
check_err $? "Entry not added as \"temp\" when should"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 $grp_key vid 10 | grep -q " 0.00"
check_fail $? "\"temp\" entry has an unpending group timer"
bridge mdb del dev br0 port $swp1 $grp_key vid 10
@@ -650,24 +618,19 @@ __cfg_test_port_ip_sg()
# Check that we can replace available attributes.
bridge mdb add dev br0 port $swp1 $grp_key vid 10 proto 123
bridge mdb replace dev br0 port $swp1 $grp_key vid 10 proto 111
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "111"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "111"
check_err $? "Failed to replace protocol"
bridge mdb replace dev br0 port $swp1 $grp_key vid 10 permanent
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "permanent"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "permanent"
check_err $? "Entry not marked as \"permanent\" after replace"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 $grp_key vid 10 | grep -q " 0.00"
check_err $? "Entry has a pending group timer after replace"
bridge mdb replace dev br0 port $swp1 $grp_key vid 10 temp
- bridge -d mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q "temp"
+ bridge -d mdb get dev br0 $grp_key vid 10 | grep -q "temp"
check_err $? "Entry not marked as \"temp\" after replace"
- bridge -d -s mdb show dev br0 vid 10 | grep "$grp_key" | \
- grep -q " 0.00"
+ bridge -d -s mdb get dev br0 $grp_key vid 10 | grep -q " 0.00"
check_fail $? "Entry has an unpending group timer after replace"
bridge mdb del dev br0 port $swp1 $grp_key vid 10
@@ -675,7 +638,7 @@ __cfg_test_port_ip_sg()
# (*, G) ports need to be added to it.
bridge mdb add dev br0 port $swp2 grp $grp vid 10
bridge mdb add dev br0 port $swp1 $grp_key vid 10
- bridge mdb show dev br0 vid 10 | grep "$grp_key" | grep $swp2 | \
+ bridge mdb get dev br0 $grp_key vid 10 | grep $swp2 | \
grep -q "added_by_star_ex"
check_err $? "\"added_by_star_ex\" entry not created after adding (S, G) entry"
bridge mdb del dev br0 port $swp1 $grp_key vid 10
@@ -1132,7 +1095,7 @@ ctrl_igmpv3_is_in_test()
$MZ $h1.10 -c 1 -a own -b 01:00:5e:01:01:01 -A 192.0.2.1 -B 239.1.1.1 \
-t ip proto=2,p=$(igmpv3_is_in_get 239.1.1.1 192.0.2.2) -q
- bridge -d mdb show dev br0 vid 10 | grep 239.1.1.1 | grep -q 192.0.2.2
+ bridge mdb get dev br0 grp 239.1.1.1 src 192.0.2.2 vid 10 &> /dev/null
check_fail $? "Permanent entry affected by IGMP packet"
# Replace the permanent entry with a temporary one and check that after
@@ -1145,12 +1108,10 @@ ctrl_igmpv3_is_in_test()
$MZ $h1.10 -a own -b 01:00:5e:01:01:01 -c 1 -A 192.0.2.1 -B 239.1.1.1 \
-t ip proto=2,p=$(igmpv3_is_in_get 239.1.1.1 192.0.2.2) -q
- bridge -d mdb show dev br0 vid 10 | grep 239.1.1.1 | grep -v "src" | \
- grep -q 192.0.2.2
+ bridge -d mdb get dev br0 grp 239.1.1.1 vid 10 | grep -q 192.0.2.2
check_err $? "Source not add to source list"
- bridge -d mdb show dev br0 vid 10 | grep 239.1.1.1 | \
- grep -q "src 192.0.2.2"
+ bridge mdb get dev br0 grp 239.1.1.1 src 192.0.2.2 vid 10 &> /dev/null
check_err $? "(S, G) entry not created for new source"
bridge mdb del dev br0 port $swp1 grp 239.1.1.1 vid 10
@@ -1172,8 +1133,7 @@ ctrl_mldv2_is_in_test()
$MZ -6 $h1.10 -a own -b 33:33:00:00:00:01 -c 1 -A fe80::1 -B ff0e::1 \
-t ip hop=1,next=0,p="$p" -q
- bridge -d mdb show dev br0 vid 10 | grep ff0e::1 | \
- grep -q 2001:db8:1::2
+ bridge mdb get dev br0 grp ff0e::1 src 2001:db8:1::2 vid 10 &> /dev/null
check_fail $? "Permanent entry affected by MLD packet"
# Replace the permanent entry with a temporary one and check that after
@@ -1186,12 +1146,10 @@ ctrl_mldv2_is_in_test()
$MZ -6 $h1.10 -a own -b 33:33:00:00:00:01 -c 1 -A fe80::1 -B ff0e::1 \
-t ip hop=1,next=0,p="$p" -q
- bridge -d mdb show dev br0 vid 10 | grep ff0e::1 | grep -v "src" | \
- grep -q 2001:db8:1::2
+ bridge -d mdb get dev br0 grp ff0e::1 vid 10 | grep -q 2001:db8:1::2
check_err $? "Source not add to source list"
- bridge -d mdb show dev br0 vid 10 | grep ff0e::1 | \
- grep -q "src 2001:db8:1::2"
+ bridge mdb get dev br0 grp ff0e::1 src 2001:db8:1::2 vid 10 &> /dev/null
check_err $? "(S, G) entry not created for new source"
bridge mdb del dev br0 port $swp1 grp ff0e::1 vid 10
@@ -1208,8 +1166,8 @@ ctrl_test()
ctrl_mldv2_is_in_test
}
-if ! bridge mdb help 2>&1 | grep -q "replace"; then
- echo "SKIP: iproute2 too old, missing bridge mdb replace support"
+if ! bridge mdb help 2>&1 | grep -q "get"; then
+ echo "SKIP: iproute2 too old, missing bridge mdb get support"
exit $ksft_skip
fi
--
2.40.1
* Re: [PATCH net-next 12/13] selftests: bridge_mdb: Use MDB get instead of dump
2023-10-16 13:12 ` [PATCH net-next 12/13] selftests: bridge_mdb: Use MDB get instead of dump Ido Schimmel
@ 2023-10-17 9:29 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:29 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Test the new MDB get functionality by converting dump and grep to MDB
> get.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> .../selftests/net/forwarding/bridge_mdb.sh | 184 +++++++-----------
> 1 file changed, 71 insertions(+), 113 deletions(-)
>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>
* [PATCH net-next 13/13] selftests: vxlan_mdb: Use MDB get instead of dump
2023-10-16 13:12 [PATCH net-next 00/13] Add MDB get support Ido Schimmel
` (11 preceding siblings ...)
2023-10-16 13:12 ` [PATCH net-next 12/13] selftests: bridge_mdb: Use MDB get instead of dump Ido Schimmel
@ 2023-10-16 13:12 ` Ido Schimmel
2023-10-17 9:30 ` Nikolay Aleksandrov
12 siblings, 1 reply; 30+ messages in thread
From: Ido Schimmel @ 2023-10-16 13:12 UTC (permalink / raw)
To: netdev, bridge
Cc: davem, kuba, edumazet, pabeni, roopa, razor, mlxsw, Ido Schimmel
Test the new MDB get functionality by converting dump and grep to MDB
get.
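A condensed sketch of the conversion for the VXLAN case, using the variables
and helpers already defined in the test (see the hunks below):
# VXLAN MDB entries are additionally keyed by source VNI, so "src_vni"
# is part of the get request along with the group (and optional source).
run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010"
log_test $? 0 "(*, G) MDB entry presence after addition"
# For absent entries the failure now comes from the get request itself
# rather than from grep, so the expected exit status in these checks
# changes from 1 to 254.
run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010"
log_test $? 254 "(*, G) MDB entry presence after deletion"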
Signed-off-by: Ido Schimmel <idosch@nvidia.com>
---
tools/testing/selftests/net/test_vxlan_mdb.sh | 108 +++++++++---------
1 file changed, 54 insertions(+), 54 deletions(-)
diff --git a/tools/testing/selftests/net/test_vxlan_mdb.sh b/tools/testing/selftests/net/test_vxlan_mdb.sh
index 31e5f0f8859d..6e996f8063cd 100755
--- a/tools/testing/selftests/net/test_vxlan_mdb.sh
+++ b/tools/testing/selftests/net/test_vxlan_mdb.sh
@@ -337,62 +337,62 @@ basic_common()
# Basic add, replace and delete behavior.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
log_test $? 0 "MDB entry addition"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010"
log_test $? 0 "MDB entry presence after addition"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
log_test $? 0 "MDB entry replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010"
log_test $? 0 "MDB entry presence after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
log_test $? 0 "MDB entry deletion"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\""
- log_test $? 1 "MDB entry presence after deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010"
+ log_test $? 254 "MDB entry presence after deletion"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
log_test $? 255 "Non-existent MDB entry deletion"
# Default protocol and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"proto static\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"proto static\""
log_test $? 0 "MDB entry default protocol"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent proto 123 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"proto 123\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"proto 123\""
log_test $? 0 "MDB entry protocol replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
# Default destination port and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" dst_port \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \" dst_port \""
log_test $? 1 "MDB entry default destination port"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip dst_port 1234 src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"dst_port 1234\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"dst_port 1234\""
log_test $? 0 "MDB entry destination port replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
# Default destination VNI and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" vni \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \" vni \""
log_test $? 1 "MDB entry default destination VNI"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip vni 1234 src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"vni 1234\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"vni 1234\""
log_test $? 0 "MDB entry destination VNI replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
# Default outgoing interface and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \" via \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \" via \""
log_test $? 1 "MDB entry default outgoing interface"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 $grp_key permanent dst $vtep_ip src_vni 10010 via veth0"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep \"$grp_key\" | grep \"via veth0\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 $grp_key src_vni 10010 | grep \"via veth0\""
log_test $? 0 "MDB entry outgoing interface replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 $grp_key dst $vtep_ip src_vni 10010"
@@ -550,127 +550,127 @@ star_g_common()
# Basic add, replace and delete behavior.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
log_test $? 0 "(*, G) MDB entry addition with source list"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010"
log_test $? 0 "(*, G) MDB entry presence after addition"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry presence after addition"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
log_test $? 0 "(*, G) MDB entry replacement with source list"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010"
log_test $? 0 "(*, G) MDB entry presence after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry presence after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
log_test $? 0 "(*, G) MDB entry deletion"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \""
- log_test $? 1 "(*, G) MDB entry presence after deletion"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
- log_test $? 1 "(S, G) MDB entry presence after deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010"
+ log_test $? 254 "(*, G) MDB entry presence after deletion"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
+ log_test $? 254 "(S, G) MDB entry presence after deletion"
# Default filter mode and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep exclude"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep exclude"
log_test $? 0 "(*, G) MDB entry default filter mode"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode include source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep include"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep include"
log_test $? 0 "(*, G) MDB entry after replacing filter mode to \"include\""
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry after replacing filter mode to \"include\""
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\" | grep blocked"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep blocked"
log_test $? 1 "\"blocked\" flag after replacing filter mode to \"include\""
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep exclude"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep exclude"
log_test $? 0 "(*, G) MDB entry after replacing filter mode to \"exclude\""
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grep grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry after replacing filter mode to \"exclude\""
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\" | grep blocked"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep blocked"
log_test $? 0 "\"blocked\" flag after replacing filter mode to \"exclude\""
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default source list and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep source_list"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep source_list"
log_test $? 1 "(*, G) MDB entry default source list"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1,$src2,$src3 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 1st source after replacing source list"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src2\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src2 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 2nd source after replacing source list"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src3\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src3 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 3rd source after replacing source list"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1,$src3 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src1\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 1st source after removing source"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src2\""
- log_test $? 1 "(S, G) MDB entry of 2nd source after removing source"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \"src $src3\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src2 src_vni 10010"
+ log_test $? 254 "(S, G) MDB entry of 2nd source after removing source"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src3 src_vni 10010"
log_test $? 0 "(S, G) MDB entry of 3rd source after removing source"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default protocol and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \"proto static\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \"proto static\""
log_test $? 0 "(*, G) MDB entry default protocol"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \"proto static\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \"proto static\""
log_test $? 0 "(S, G) MDB entry default protocol"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 proto bgp dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \"proto bgp\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \"proto bgp\""
log_test $? 0 "(*, G) MDB entry protocol after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \"proto bgp\""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \"proto bgp\""
log_test $? 0 "(S, G) MDB entry protocol after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default destination port and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" dst_port \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" dst_port \""
log_test $? 1 "(*, G) MDB entry default destination port"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" dst_port \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" dst_port \""
log_test $? 1 "(S, G) MDB entry default destination port"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip dst_port 1234 src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" dst_port 1234 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" dst_port 1234 \""
log_test $? 0 "(*, G) MDB entry destination port after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" dst_port 1234 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" dst_port 1234 \""
log_test $? 0 "(S, G) MDB entry destination port after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default destination VNI and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" vni \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" vni \""
log_test $? 1 "(*, G) MDB entry default destination VNI"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" vni \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" vni \""
log_test $? 1 "(S, G) MDB entry default destination VNI"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip vni 1234 src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" vni 1234 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" vni 1234 \""
log_test $? 0 "(*, G) MDB entry destination VNI after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" vni 1234 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" vni 1234 \""
log_test $? 0 "(S, G) MDB entry destination VNI after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
# Default outgoing interface and replacement.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" via \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" via \""
log_test $? 1 "(*, G) MDB entry default outgoing interface"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" via \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" via \""
log_test $? 1 "(S, G) MDB entry default outgoing interface"
run_cmd "bridge -n $ns1 mdb replace dev vx0 port vx0 grp $grp permanent filter_mode exclude source_list $src1 dst $vtep_ip src_vni 10010 via veth0"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep -v \" src \" | grep \" via veth0 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src_vni 10010 | grep \" via veth0 \""
log_test $? 0 "(*, G) MDB entry outgoing interface after replacement"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep \" src \" | grep \" via veth0 \""
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src1 src_vni 10010 | grep \" via veth0 \""
log_test $? 0 "(S, G) MDB entry outgoing interface after replacement"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp dst $vtep_ip src_vni 10010"
@@ -772,7 +772,7 @@ sg_common()
# Default filter mode.
run_cmd "bridge -n $ns1 mdb add dev vx0 port vx0 grp $grp src $src permanent dst $vtep_ip src_vni 10010"
- run_cmd "bridge -n $ns1 -d -s mdb show dev vx0 | grep $grp | grep include"
+ run_cmd "bridge -n $ns1 -d -s mdb get dev vx0 grp $grp src $src src_vni 10010 | grep include"
log_test $? 0 "(S, G) MDB entry default filter mode"
run_cmd "bridge -n $ns1 mdb del dev vx0 port vx0 grp $grp src $src permanent dst $vtep_ip src_vni 10010"
@@ -2296,9 +2296,9 @@ if [ ! -x "$(command -v jq)" ]; then
exit $ksft_skip
fi
-bridge mdb help 2>&1 | grep -q "src_vni"
+bridge mdb help 2>&1 | grep -q "get"
if [ $? -ne 0 ]; then
- echo "SKIP: iproute2 bridge too old, missing VXLAN MDB support"
+ echo "SKIP: iproute2 bridge too old, missing VXLAN MDB get support"
exit $ksft_skip
fi
--
2.40.1
* Re: [PATCH net-next 13/13] selftests: vxlan_mdb: Use MDB get instead of dump
2023-10-16 13:12 ` [PATCH net-next 13/13] selftests: vxlan_mdb: " Ido Schimmel
@ 2023-10-17 9:30 ` Nikolay Aleksandrov
0 siblings, 0 replies; 30+ messages in thread
From: Nikolay Aleksandrov @ 2023-10-17 9:30 UTC (permalink / raw)
To: Ido Schimmel, netdev, bridge; +Cc: davem, kuba, edumazet, pabeni, roopa, mlxsw
On 10/16/23 16:12, Ido Schimmel wrote:
> Test the new MDB get functionality by converting dump and grep to MDB
> get.
>
> Signed-off-by: Ido Schimmel <idosch@nvidia.com>
> ---
> tools/testing/selftests/net/test_vxlan_mdb.sh | 108 +++++++++---------
> 1 file changed, 54 insertions(+), 54 deletions(-)
>
Acked-by: Nikolay Aleksandrov <razor@blackwall.org>