* [net-next v6 0/2] net: sched: allow user to select txqueue
@ 2021-12-22 12:08 xiangxia.m.yue
2021-12-22 12:08 ` [net-next v6 1/2] net: sched: use queue_mapping to pick tx queue xiangxia.m.yue
2021-12-22 12:08 ` [net-next v6 2/2] net: sched: support hash/classid/cpuid selecting " xiangxia.m.yue
From: xiangxia.m.yue @ 2021-12-22 12:08 UTC (permalink / raw)
To: netdev
Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
David S. Miller, Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
Antoine Tenart, Wei Wang, Arnd Bergmann
From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
Patch 1 allows the user to select the tx queue in the clsact hook.
Patch 2 supports using skbhash, classid, or cpuid to select the tx queue.
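A combined usage sketch of both patches (the device name, flower matches,
and queue numbers are illustrative assumptions, not taken from the patches;
the "skbhash A B" form is the syntax proposed in patch 2):
  $ tc qdisc add dev eth0 clsact
  # pin one flow to a single tx queue (patch 1)
  $ tc filter add dev eth0 egress protocol ip flower ip_proto tcp dst_port 80 \
        action skbedit queue_mapping 1
  # spread another flow across tx queues 2-5 by skb hash (patch 2)
  $ tc filter add dev eth0 egress protocol ip flower ip_proto udp \
        action skbedit queue_mapping skbhash 2 5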
Tonghao Zhang (2):
net: sched: use queue_mapping to pick tx queue
net: sched: support hash/classid/cpuid selecting tx queue
include/linux/netdevice.h | 3 +
include/linux/rtnetlink.h | 3 +
include/net/tc_act/tc_skbedit.h | 1 +
include/uapi/linux/tc_act/tc_skbedit.h | 8 +++
net/core/dev.c | 44 +++++++++++-
net/sched/act_skbedit.c | 96 ++++++++++++++++++++++++--
6 files changed, 149 insertions(+), 6 deletions(-)
--
v6:
* 1/2 use a static key; the code is compiled only when CONFIG_NET_EGRESS is configured.
v5:
* 1/2 merge netdev_xmit_reset_txqueue(void) and netdev_xmit_skip_txqueue(void)
  into netdev_xmit_skip_txqueue(bool skip).
v4:
* 1/2 introduce netdev_xmit_reset_txqueue(), invoked in
  __dev_queue_xmit(), so xmit.skip_txqueue does not affect tx queue
  selection in the next netdev or for subsequent packets.
  See the commit log for more details.
* 2/2 fix the coding style and rename:
SKBEDIT_F_QUEUE_MAPPING_HASH -> SKBEDIT_F_TXQ_SKBHASH
SKBEDIT_F_QUEUE_MAPPING_CLASSID -> SKBEDIT_F_TXQ_CLASSID
SKBEDIT_F_QUEUE_MAPPING_CPUID -> SKBEDIT_F_TXQ_CPUID
* 2/2 refactor tcf_skbedit_hash(): if the hash type is not specified, use
  queue_mapping directly, since hash % mapping_mod == 0 in the "case 0:" branch.
* 2/2 merge the checks and add extack messages.
v3:
* 2/2 fix the warning and add the cpuid hash type.
v2:
* 1/2 change skb->tc_skip_txqueue to a per-CPU variable and expand the commit message.
* 2/2 optimize the code.
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Lobakin <alobakin@pm.me>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Talal Ahmad <talalahmad@google.com>
Cc: Kevin Hao <haokexin@gmail.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Antoine Tenart <atenart@kernel.org>
Cc: Wei Wang <weiwan@google.com>
Cc: Arnd Bergmann <arnd@arndb.de>
--
2.27.0
* [net-next v6 1/2] net: sched: use queue_mapping to pick tx queue
2021-12-22 12:08 [net-next v6 0/2] net: sched: allow user to select txqueue xiangxia.m.yue
@ 2021-12-22 12:08 ` xiangxia.m.yue
2021-12-22 22:44 ` kernel test robot
2021-12-22 12:08 ` [net-next v6 2/2] net: sched: support hash/classid/cpuid selecting " xiangxia.m.yue
From: xiangxia.m.yue @ 2021-12-22 12:08 UTC (permalink / raw)
To: netdev
Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
David S. Miller, Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
Antoine Tenart, Wei Wang, Arnd Bergmann
From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
This patch fixes the following issue:
* If we install tc filters with act_skbedit in the clsact hook,
  they don't take effect, because netdev_core_pick_tx() overwrites
  the queue_mapping.
$ tc filter ... action skbedit queue_mapping 1
This patch is also useful:
* We can use FQ + EDT to implement efficient policies. Tx queues
  are picked by XPS, the netdev driver's ndo_select_queue(), or the skb
  hash in netdev_core_pick_tx(). In practice, the netdev driver and the
  skb hash are _not_ under our control. XPS uses the CPU map to select Tx
  queues, but in most cases we can't tell which pod/container task_struct
  is running on a given CPU. We can instead use clsact filters to classify
  one pod/container's traffic to one Tx queue. Why?
In a container networking environment, there are two kinds of pod/
container/net-namespace. For the first kind (e.g. P1, P2), high throughput
is key for the applications, but to avoid exhausting network resources,
the outbound traffic of these pods is limited, using or sharing dedicated
Tx queues assigned an HTB/TBF/FQ Qdisc. For the other kind of pods
(e.g. Pn), low data-access latency is key and the traffic is not limited;
these pods use or share other dedicated Tx queues assigned a FIFO Qdisc.
This choice provides two benefits. First, contention on the HTB/FQ Qdisc
lock is significantly reduced since fewer CPUs contend for the same queue.
More importantly, Qdisc contention can be eliminated completely if each
CPU has its own FIFO Qdisc for the second kind of pods.
There must be a mechanism in place to classify traffic from different
pods/containers to different Tx queues. Note that clsact sits outside the
Qdisc, while a Qdisc can run a classifier to select a sub-queue under its
lock. In general, recording the decision in the skb seems a little
heavy-handed, so this patch introduces a per-CPU variable instead, as
suggested by Eric.
The xmit.skip_txqueue flag is cleared first in __dev_queue_xmit():
- A Tx Qdisc may also run skbedit actions, so the xmit.skip_txqueue flag
  can be set in qdisc->enqueue() even though the tx queue has already been
  selected by netdev_tx_queue_mapping() or netdev_core_pick_tx(). Clearing
  the flag at the start of __dev_queue_xmit() is useful because it:
- Avoids picking the Tx queue with netdev_tx_queue_mapping() in the next
  netdev in a stack such as eth0 (macvlan) - eth0.3 (vlan) - eth0 (ixgbe phy).
  For example, eth0 is a macvlan in a pod whose root Qdisc installs skbedit
  queue_mapping and sends packets to eth0.3, a vlan in the host. In
  __dev_queue_xmit() of eth0.3, the flag is cleared, so the tx queue is not
  selected according to skb->queue_mapping, because there are no filters in
  the clsact or tx Qdisc of that netdev. The same happens on eth0, the
  ixgbe device in the host.
- Avoids picking the Tx queue based on a stale flag for the next packet.
  If we set xmit.skip_txqueue in a tx Qdisc (qdisc->enqueue()), the proper
  way to clear it is to clear it in __dev_queue_xmit() when processing the
  next packet.
For performance reasons, a static key is used; if CONFIG_NET_EGRESS is not
configured, this code is not compiled.
+----+     +----+     +----+
| P1 |     | P2 |     | Pn |
+----+     +----+     +----+
  |          |          |
  +----------+----------+
             |
             | clsact/skbedit
             |     MQ
             v
  +----------+----------+
  | q0       | q1       | qn
  v          v          v
HTB/FQ     HTB/FQ  ...  FIFO
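A minimal sketch of how the topology above could be built (the interface
name, queue count, classids, and pod IP matches are assumptions for
illustration only):
  # one root mq qdisc, one child qdisc per hardware tx queue
  $ tc qdisc add dev eth0 root handle 100: mq
  $ tc qdisc add dev eth0 parent 100:1 handle 101: htb    # q0, throughput pods
  $ tc qdisc add dev eth0 parent 100:2 handle 102: fq     # q1
  $ tc qdisc add dev eth0 parent 100:3 handle 103: pfifo  # qn, latency pods
  # classify pod traffic to a tx queue in the clsact egress hook
  $ tc qdisc add dev eth0 clsact
  $ tc filter add dev eth0 egress protocol ip flower src_ip 10.0.0.1 \
        action skbedit queue_mapping 0
  $ tc filter add dev eth0 egress protocol ip flower src_ip 10.0.0.9 \
        action skbedit queue_mapping 2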
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Lobakin <alobakin@pm.me>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Talal Ahmad <talalahmad@google.com>
Cc: Kevin Hao <haokexin@gmail.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Antoine Tenart <atenart@kernel.org>
Cc: Wei Wang <weiwan@google.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Suggested-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
---
include/linux/netdevice.h | 3 +++
include/linux/rtnetlink.h | 3 +++
net/core/dev.c | 44 ++++++++++++++++++++++++++++++++++++++-
net/sched/act_skbedit.c | 18 ++++++++++++++--
4 files changed, 65 insertions(+), 3 deletions(-)
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 8b0bdeb4734e..708e9f4cca01 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -3010,6 +3010,9 @@ struct softnet_data {
struct {
u16 recursion;
u8 more;
+#ifdef CONFIG_NET_EGRESS
+ u8 skip_txqueue;
+#endif
} xmit;
#ifdef CONFIG_RPS
/* input_queue_head should be written by cpu owning this struct,
diff --git a/include/linux/rtnetlink.h b/include/linux/rtnetlink.h
index bb9cb84114c1..256bf78daea6 100644
--- a/include/linux/rtnetlink.h
+++ b/include/linux/rtnetlink.h
@@ -100,6 +100,9 @@ void net_dec_ingress_queue(void);
#ifdef CONFIG_NET_EGRESS
void net_inc_egress_queue(void);
void net_dec_egress_queue(void);
+void net_inc_queue_mapping(void);
+void net_dec_queue_mapping(void);
+void netdev_xmit_skip_txqueue(bool skip);
#endif
void rtnetlink_init(void);
diff --git a/net/core/dev.c b/net/core/dev.c
index a855e41bbe39..b197dabcd721 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -1998,6 +1998,20 @@ void net_dec_egress_queue(void)
static_branch_dec(&egress_needed_key);
}
EXPORT_SYMBOL_GPL(net_dec_egress_queue);
+
+static DEFINE_STATIC_KEY_FALSE(txqueue_needed_key);
+
+void net_inc_queue_mapping(void)
+{
+ static_branch_inc(&txqueue_needed_key);
+}
+EXPORT_SYMBOL_GPL(net_inc_queue_mapping);
+
+void net_dec_queue_mapping(void)
+{
+ static_branch_dec(&txqueue_needed_key);
+}
+EXPORT_SYMBOL_GPL(net_dec_queue_mapping);
#endif
static DEFINE_STATIC_KEY_FALSE(netstamp_needed_key);
@@ -3860,6 +3874,25 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
return skb;
}
+
+static inline struct netdev_queue *
+netdev_tx_queue_mapping(struct net_device *dev, struct sk_buff *skb)
+{
+ int qm = skb_get_queue_mapping(skb);
+
+ return netdev_get_tx_queue(dev, netdev_cap_txqueue(dev, qm));
+}
+
+static inline bool netdev_xmit_txqueue_skipped(void)
+{
+ return __this_cpu_read(softnet_data.xmit.skip_txqueue);
+}
+
+void netdev_xmit_skip_txqueue(bool skip)
+{
+ __this_cpu_write(softnet_data.xmit.skip_txqueue, skip);
+}
+EXPORT_SYMBOL_GPL(netdev_xmit_skip_txqueue);
#endif /* CONFIG_NET_EGRESS */
#ifdef CONFIG_XPS
@@ -4052,6 +4085,9 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
skb->tc_at_ingress = 0;
#endif
#ifdef CONFIG_NET_EGRESS
+ if (static_branch_unlikely(&txqueue_needed_key))
+ netdev_xmit_skip_txqueue(false);
+
if (static_branch_unlikely(&egress_needed_key)) {
if (nf_hook_egress_active()) {
skb = nf_hook_egress(skb, &rc, dev);
@@ -4064,7 +4100,14 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
goto out;
nf_skip_egress(skb, false);
}
+
+ if (static_branch_unlikely(&txqueue_needed_key) &&
+ netdev_xmit_txqueue_skipped())
+ txq = netdev_tx_queue_mapping(dev, skb);
+ else
#endif
+ txq = netdev_core_pick_tx(dev, skb, sb_dev);
+
/* If device/qdisc don't need skb->dst, release it right now while
* its hot in this cpu cache.
*/
@@ -4073,7 +4116,6 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
else
skb_dst_force(skb);
- txq = netdev_core_pick_tx(dev, skb, sb_dev);
q = rcu_dereference_bh(txq->qdisc);
trace_net_dev_queue(skb);
diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
index ceba11b198bb..325991080a8a 100644
--- a/net/sched/act_skbedit.c
+++ b/net/sched/act_skbedit.c
@@ -58,8 +58,12 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
}
}
if (params->flags & SKBEDIT_F_QUEUE_MAPPING &&
- skb->dev->real_num_tx_queues > params->queue_mapping)
+ skb->dev->real_num_tx_queues > params->queue_mapping) {
+#ifdef CONFIG_NET_EGRESS
+ netdev_xmit_skip_txqueue(true);
+#endif
skb_set_queue_mapping(skb, params->queue_mapping);
+ }
if (params->flags & SKBEDIT_F_MARK) {
skb->mark &= ~params->mask;
skb->mark |= params->mark & params->mask;
@@ -225,6 +229,11 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
if (goto_ch)
tcf_chain_put_by_act(goto_ch);
+#ifdef CONFIG_NET_EGRESS
+ if (flags & SKBEDIT_F_QUEUE_MAPPING)
+ net_inc_queue_mapping();
+#endif
+
return ret;
put_chain:
if (goto_ch)
@@ -295,8 +304,13 @@ static void tcf_skbedit_cleanup(struct tc_action *a)
struct tcf_skbedit_params *params;
params = rcu_dereference_protected(d->params, 1);
- if (params)
+ if (params) {
+#ifdef CONFIG_NET_EGRESS
+ if (params->flags & SKBEDIT_F_QUEUE_MAPPING)
+ net_dec_queue_mapping();
+#endif
kfree_rcu(params, rcu);
+ }
}
static int tcf_skbedit_walker(struct net *net, struct sk_buff *skb,
--
2.27.0
* [net-next v6 2/2] net: sched: support hash/classid/cpuid selecting tx queue
2021-12-22 12:08 [net-next v6 0/2] net: sched: allow user to select txqueue xiangxia.m.yue
2021-12-22 12:08 ` [net-next v6 1/2] net: sched: use queue_mapping to pick tx queue xiangxia.m.yue
@ 2021-12-22 12:08 ` xiangxia.m.yue
From: xiangxia.m.yue @ 2021-12-22 12:08 UTC (permalink / raw)
To: netdev
Cc: Tonghao Zhang, Jamal Hadi Salim, Cong Wang, Jiri Pirko,
David S. Miller, Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
Alexander Lobakin, Paolo Abeni, Talal Ahmad, Kevin Hao,
Ilias Apalodimas, Kees Cook, Kumar Kartikeya Dwivedi,
Antoine Tenart, Wei Wang, Arnd Bergmann
From: Tonghao Zhang <xiangxia.m.yue@gmail.com>
This patch allows the user to select a queue_mapping range, from A to B,
and to use the skb hash, cgroup classid, or cpuid to pick a Tx queue
within that range, so packets can be load balanced across queues A to B.
The range bounds are unsigned 16-bit values in decimal format.
$ tc filter ... action skbedit queue_mapping skbhash A B
"skbedit queue_mapping QUEUE_MAPPING" (from "man 8 tc-skbedit")
is enhanced with flags:
* SKBEDIT_F_TXQ_SKBHASH
* SKBEDIT_F_TXQ_CLASSID
* SKBEDIT_F_TXQ_CPUID
These flags use skb->hash, the cgroup classid, or the cpuid to distribute
packets, so the same range of tx queues can be shared by different flows,
cgroups, or CPUs in a variety of scenarios.
For example, flow F1 may share range R1 with flow F2. The best way to do
that is to set the SKBEDIT_F_TXQ_SKBHASH flag, using skb->hash to spread
traffic across the queues. If cgroup C1 wants to share R1 with cgroups
C2 .. Cn, use SKBEDIT_F_TXQ_CLASSID. Of course, in other scenarios, C1 can
use R1 while Cn uses Rn.
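As a concrete illustration of the flow-sharing case, using the syntax shown
at the top of this commit message (the device, match, and queue range 2-5
are illustrative assumptions):
  # flows matching this filter share tx queues 2-5, spread by skb->hash
  $ tc filter add dev eth0 egress protocol ip flower ip_proto tcp \
        action skbedit queue_mapping skbhash 2 5
The cgroup classid and cpuid variants select SKBEDIT_F_TXQ_CLASSID and
SKBEDIT_F_TXQ_CPUID in the same way, keyed by task_get_classid(skb) and
raw_smp_processor_id() respectively, as implemented in tcf_skbedit_hash()
below.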
Cc: Jamal Hadi Salim <jhs@mojatatu.com>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Jiri Pirko <jiri@resnulli.us>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jonathan Lemon <jonathan.lemon@gmail.com>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Alexander Lobakin <alobakin@pm.me>
Cc: Paolo Abeni <pabeni@redhat.com>
Cc: Talal Ahmad <talalahmad@google.com>
Cc: Kevin Hao <haokexin@gmail.com>
Cc: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Antoine Tenart <atenart@kernel.org>
Cc: Wei Wang <weiwan@google.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Tonghao Zhang <xiangxia.m.yue@gmail.com>
---
include/net/tc_act/tc_skbedit.h | 1 +
include/uapi/linux/tc_act/tc_skbedit.h | 8 +++
net/sched/act_skbedit.c | 78 +++++++++++++++++++++++++-
3 files changed, 84 insertions(+), 3 deletions(-)
diff --git a/include/net/tc_act/tc_skbedit.h b/include/net/tc_act/tc_skbedit.h
index 00bfee70609e..ee96e0fa6566 100644
--- a/include/net/tc_act/tc_skbedit.h
+++ b/include/net/tc_act/tc_skbedit.h
@@ -17,6 +17,7 @@ struct tcf_skbedit_params {
u32 mark;
u32 mask;
u16 queue_mapping;
+ u16 mapping_mod;
u16 ptype;
struct rcu_head rcu;
};
diff --git a/include/uapi/linux/tc_act/tc_skbedit.h b/include/uapi/linux/tc_act/tc_skbedit.h
index 800e93377218..5ea1438a4d88 100644
--- a/include/uapi/linux/tc_act/tc_skbedit.h
+++ b/include/uapi/linux/tc_act/tc_skbedit.h
@@ -29,6 +29,13 @@
#define SKBEDIT_F_PTYPE 0x8
#define SKBEDIT_F_MASK 0x10
#define SKBEDIT_F_INHERITDSFIELD 0x20
+#define SKBEDIT_F_TXQ_SKBHASH 0x40
+#define SKBEDIT_F_TXQ_CLASSID 0x80
+#define SKBEDIT_F_TXQ_CPUID 0x100
+
+#define SKBEDIT_F_TXQ_HASH_MASK (SKBEDIT_F_TXQ_SKBHASH | \
+ SKBEDIT_F_TXQ_CLASSID | \
+ SKBEDIT_F_TXQ_CPUID)
struct tc_skbedit {
tc_gen;
@@ -45,6 +52,7 @@ enum {
TCA_SKBEDIT_PTYPE,
TCA_SKBEDIT_MASK,
TCA_SKBEDIT_FLAGS,
+ TCA_SKBEDIT_QUEUE_MAPPING_MAX,
__TCA_SKBEDIT_MAX
};
#define TCA_SKBEDIT_MAX (__TCA_SKBEDIT_MAX - 1)
diff --git a/net/sched/act_skbedit.c b/net/sched/act_skbedit.c
index 325991080a8a..9493b3102923 100644
--- a/net/sched/act_skbedit.c
+++ b/net/sched/act_skbedit.c
@@ -10,6 +10,7 @@
#include <linux/kernel.h>
#include <linux/skbuff.h>
#include <linux/rtnetlink.h>
+#include <net/cls_cgroup.h>
#include <net/netlink.h>
#include <net/pkt_sched.h>
#include <net/ip.h>
@@ -23,6 +24,38 @@
static unsigned int skbedit_net_id;
static struct tc_action_ops act_skbedit_ops;
+static u16 tcf_skbedit_hash(struct tcf_skbedit_params *params,
+ struct sk_buff *skb)
+{
+ u32 mapping_hash_type = params->flags & SKBEDIT_F_TXQ_HASH_MASK;
+ u16 queue_mapping = params->queue_mapping;
+ u16 mapping_mod = params->mapping_mod;
+ u32 hash = 0;
+
+ switch (mapping_hash_type) {
+ case SKBEDIT_F_TXQ_CLASSID:
+ hash = task_get_classid(skb);
+ break;
+ case SKBEDIT_F_TXQ_SKBHASH:
+ hash = skb_get_hash(skb);
+ break;
+ case SKBEDIT_F_TXQ_CPUID:
+ hash = raw_smp_processor_id();
+ break;
+ case 0:
+ /* Hash type isn't specified. In this case:
+ * hash % mapping_mod == 0
+ */
+ break;
+ default:
+ net_warn_ratelimited("The type of queue_mapping hash is not supported. 0x%x\n",
+ mapping_hash_type);
+ }
+
+ queue_mapping = queue_mapping + hash % mapping_mod;
+ return netdev_cap_txqueue(skb->dev, queue_mapping);
+}
+
static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
struct tcf_result *res)
{
@@ -62,7 +95,7 @@ static int tcf_skbedit_act(struct sk_buff *skb, const struct tc_action *a,
#ifdef CONFIG_NET_EGRESS
netdev_xmit_skip_txqueue(true);
#endif
- skb_set_queue_mapping(skb, params->queue_mapping);
+ skb_set_queue_mapping(skb, tcf_skbedit_hash(params, skb));
}
if (params->flags & SKBEDIT_F_MARK) {
skb->mark &= ~params->mask;
@@ -96,6 +129,7 @@ static const struct nla_policy skbedit_policy[TCA_SKBEDIT_MAX + 1] = {
[TCA_SKBEDIT_PTYPE] = { .len = sizeof(u16) },
[TCA_SKBEDIT_MASK] = { .len = sizeof(u32) },
[TCA_SKBEDIT_FLAGS] = { .len = sizeof(u64) },
+ [TCA_SKBEDIT_QUEUE_MAPPING_MAX] = { .len = sizeof(u16) },
};
static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
@@ -112,6 +146,7 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
struct tcf_skbedit *d;
u32 flags = 0, *priority = NULL, *mark = NULL, *mask = NULL;
u16 *queue_mapping = NULL, *ptype = NULL;
+ u16 mapping_mod = 1;
bool exists = false;
int ret = 0, err;
u32 index;
@@ -156,7 +191,34 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
if (tb[TCA_SKBEDIT_FLAGS] != NULL) {
u64 *pure_flags = nla_data(tb[TCA_SKBEDIT_FLAGS]);
-
+ u64 mapping_hash_type;
+
+ mapping_hash_type = *pure_flags & SKBEDIT_F_TXQ_HASH_MASK;
+ if (mapping_hash_type) {
+ u16 *queue_mapping_max;
+
+ /* Hash types are mutually exclusive. */
+ if (mapping_hash_type & (mapping_hash_type - 1)) {
+ NL_SET_ERR_MSG_MOD(extack, "Multi types of hash are specified.");
+ return -EINVAL;
+ }
+
+ if (!tb[TCA_SKBEDIT_QUEUE_MAPPING] ||
+ !tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]) {
+ NL_SET_ERR_MSG_MOD(extack, "Missing required range of queue_mapping.");
+ return -EINVAL;
+ }
+
+ queue_mapping_max =
+ nla_data(tb[TCA_SKBEDIT_QUEUE_MAPPING_MAX]);
+ if (*queue_mapping_max < *queue_mapping) {
+ NL_SET_ERR_MSG_MOD(extack, "The range of queue_mapping is invalid, max < min.");
+ return -EINVAL;
+ }
+
+ mapping_mod = *queue_mapping_max - *queue_mapping + 1;
+ flags |= mapping_hash_type;
+ }
if (*pure_flags & SKBEDIT_F_INHERITDSFIELD)
flags |= SKBEDIT_F_INHERITDSFIELD;
}
@@ -208,8 +270,10 @@ static int tcf_skbedit_init(struct net *net, struct nlattr *nla,
params_new->flags = flags;
if (flags & SKBEDIT_F_PRIORITY)
params_new->priority = *priority;
- if (flags & SKBEDIT_F_QUEUE_MAPPING)
+ if (flags & SKBEDIT_F_QUEUE_MAPPING) {
params_new->queue_mapping = *queue_mapping;
+ params_new->mapping_mod = mapping_mod;
+ }
if (flags & SKBEDIT_F_MARK)
params_new->mark = *mark;
if (flags & SKBEDIT_F_PTYPE)
@@ -281,6 +345,13 @@ static int tcf_skbedit_dump(struct sk_buff *skb, struct tc_action *a,
goto nla_put_failure;
if (params->flags & SKBEDIT_F_INHERITDSFIELD)
pure_flags |= SKBEDIT_F_INHERITDSFIELD;
+ if (params->flags & SKBEDIT_F_TXQ_HASH_MASK) {
+ if (nla_put_u16(skb, TCA_SKBEDIT_QUEUE_MAPPING_MAX,
+ params->queue_mapping + params->mapping_mod - 1))
+ goto nla_put_failure;
+
+ pure_flags |= params->flags & SKBEDIT_F_TXQ_HASH_MASK;
+ }
if (pure_flags != 0 &&
nla_put(skb, TCA_SKBEDIT_FLAGS, sizeof(pure_flags), &pure_flags))
goto nla_put_failure;
@@ -335,6 +406,7 @@ static size_t tcf_skbedit_get_fill_size(const struct tc_action *act)
return nla_total_size(sizeof(struct tc_skbedit))
+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_PRIORITY */
+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING */
+ + nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_QUEUE_MAPPING_MAX */
+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_MARK */
+ nla_total_size(sizeof(u16)) /* TCA_SKBEDIT_PTYPE */
+ nla_total_size(sizeof(u32)) /* TCA_SKBEDIT_MASK */
--
2.27.0
* Re: [net-next v6 1/2] net: sched: use queue_mapping to pick tx queue
2021-12-22 12:08 ` [net-next v6 1/2] net: sched: use queue_mapping to pick tx queue xiangxia.m.yue
@ 2021-12-22 22:44 ` kernel test robot
From: kernel test robot @ 2021-12-22 22:44 UTC (permalink / raw)
To: xiangxia.m.yue, netdev
Cc: kbuild-all, Tonghao Zhang, Jamal Hadi Salim, Cong Wang,
Jiri Pirko, Jakub Kicinski, Jonathan Lemon, Eric Dumazet,
Alexander Lobakin, Paolo Abeni
Hi,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on net-next/master]
url: https://github.com/0day-ci/linux/commits/xiangxia-m-yue-gmail-com/net-sched-allow-user-to-select-txqueue/20211222-201128
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git f4f2970dfd87e5132c436e6125148914596a9863
config: i386-randconfig-m031-20211222 (https://download.01.org/0day-ci/archive/20211223/202112230652.grNJivfH-lkp@intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>
smatch warnings:
net/core/dev.c:4114 __dev_queue_xmit() warn: inconsistent indenting
vim +4114 net/core/dev.c
638b2a699fd3ec9 Jiri Pirko 2015-05-12 4036
d29f749e252bcdb Dave Jones 2008-07-22 4037 /**
9d08dd3d320fab4 Jason Wang 2014-01-20 4038 * __dev_queue_xmit - transmit a buffer
d29f749e252bcdb Dave Jones 2008-07-22 4039 * @skb: buffer to transmit
eadec877ce9ca46 Alexander Duyck 2018-07-09 4040 * @sb_dev: suboordinate device used for L2 forwarding offload
d29f749e252bcdb Dave Jones 2008-07-22 4041 *
d29f749e252bcdb Dave Jones 2008-07-22 4042 * Queue a buffer for transmission to a network device. The caller must
d29f749e252bcdb Dave Jones 2008-07-22 4043 * have set the device and priority and built the buffer before calling
d29f749e252bcdb Dave Jones 2008-07-22 4044 * this function. The function can be called from an interrupt.
d29f749e252bcdb Dave Jones 2008-07-22 4045 *
d29f749e252bcdb Dave Jones 2008-07-22 4046 * A negative errno code is returned on a failure. A success does not
d29f749e252bcdb Dave Jones 2008-07-22 4047 * guarantee the frame will be transmitted as it may be dropped due
d29f749e252bcdb Dave Jones 2008-07-22 4048 * to congestion or traffic shaping.
d29f749e252bcdb Dave Jones 2008-07-22 4049 *
d29f749e252bcdb Dave Jones 2008-07-22 4050 * -----------------------------------------------------------------------------------
d29f749e252bcdb Dave Jones 2008-07-22 4051 * I notice this method can also return errors from the queue disciplines,
d29f749e252bcdb Dave Jones 2008-07-22 4052 * including NET_XMIT_DROP, which is a positive value. So, errors can also
d29f749e252bcdb Dave Jones 2008-07-22 4053 * be positive.
d29f749e252bcdb Dave Jones 2008-07-22 4054 *
d29f749e252bcdb Dave Jones 2008-07-22 4055 * Regardless of the return value, the skb is consumed, so it is currently
d29f749e252bcdb Dave Jones 2008-07-22 4056 * difficult to retry a send to this method. (You can bump the ref count
d29f749e252bcdb Dave Jones 2008-07-22 4057 * before sending to hold a reference for retry if you are careful.)
d29f749e252bcdb Dave Jones 2008-07-22 4058 *
d29f749e252bcdb Dave Jones 2008-07-22 4059 * When calling this method, interrupts MUST be enabled. This is because
d29f749e252bcdb Dave Jones 2008-07-22 4060 * the BH enable code must have IRQs enabled so that it will not deadlock.
d29f749e252bcdb Dave Jones 2008-07-22 4061 * --BLG
d29f749e252bcdb Dave Jones 2008-07-22 4062 */
eadec877ce9ca46 Alexander Duyck 2018-07-09 4063 static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
^1da177e4c3f415 Linus Torvalds 2005-04-16 4064 {
^1da177e4c3f415 Linus Torvalds 2005-04-16 4065 struct net_device *dev = skb->dev;
dc2b48475a0a36f David S. Miller 2008-07-08 4066 struct netdev_queue *txq;
^1da177e4c3f415 Linus Torvalds 2005-04-16 4067 struct Qdisc *q;
^1da177e4c3f415 Linus Torvalds 2005-04-16 4068 int rc = -ENOMEM;
f53c723902d1ac5 Steffen Klassert 2017-12-20 4069 bool again = false;
^1da177e4c3f415 Linus Torvalds 2005-04-16 4070
6d1ccff62780682 Eric Dumazet 2013-02-05 4071 skb_reset_mac_header(skb);
6d1ccff62780682 Eric Dumazet 2013-02-05 4072
e7fd2885385157d Willem de Bruijn 2014-08-04 4073 if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_SCHED_TSTAMP))
e7ed11ee945438b Yousuk Seung 2021-01-20 4074 __skb_tstamp_tx(skb, NULL, NULL, skb->sk, SCM_TSTAMP_SCHED);
e7fd2885385157d Willem de Bruijn 2014-08-04 4075
^1da177e4c3f415 Linus Torvalds 2005-04-16 4076 /* Disable soft irqs for various locks below. Also
^1da177e4c3f415 Linus Torvalds 2005-04-16 4077 * stops preemption for RCU.
^1da177e4c3f415 Linus Torvalds 2005-04-16 4078 */
d4828d85d188dc7 Herbert Xu 2006-06-22 4079 rcu_read_lock_bh();
^1da177e4c3f415 Linus Torvalds 2005-04-16 4080
5bc1421e34ecfe0 Neil Horman 2011-11-22 4081 skb_update_prio(skb);
5bc1421e34ecfe0 Neil Horman 2011-11-22 4082
1f211a1b929c804 Daniel Borkmann 2016-01-07 4083 qdisc_pkt_len_init(skb);
1f211a1b929c804 Daniel Borkmann 2016-01-07 4084 #ifdef CONFIG_NET_CLS_ACT
8dc07fdbf2054f1 Willem de Bruijn 2017-01-07 4085 skb->tc_at_ingress = 0;
42df6e1d221dddc Lukas Wunner 2021-10-08 4086 #endif
1f211a1b929c804 Daniel Borkmann 2016-01-07 4087 #ifdef CONFIG_NET_EGRESS
3435309167e51b1 Tonghao Zhang 2021-12-22 4088 if (static_branch_unlikely(&txqueue_needed_key))
3435309167e51b1 Tonghao Zhang 2021-12-22 4089 netdev_xmit_skip_txqueue(false);
3435309167e51b1 Tonghao Zhang 2021-12-22 4090
aabf6772cc745f9 Davidlohr Bueso 2018-05-08 4091 if (static_branch_unlikely(&egress_needed_key)) {
42df6e1d221dddc Lukas Wunner 2021-10-08 4092 if (nf_hook_egress_active()) {
42df6e1d221dddc Lukas Wunner 2021-10-08 4093 skb = nf_hook_egress(skb, &rc, dev);
42df6e1d221dddc Lukas Wunner 2021-10-08 4094 if (!skb)
42df6e1d221dddc Lukas Wunner 2021-10-08 4095 goto out;
42df6e1d221dddc Lukas Wunner 2021-10-08 4096 }
42df6e1d221dddc Lukas Wunner 2021-10-08 4097 nf_skip_egress(skb, true);
1f211a1b929c804 Daniel Borkmann 2016-01-07 4098 skb = sch_handle_egress(skb, &rc, dev);
1f211a1b929c804 Daniel Borkmann 2016-01-07 4099 if (!skb)
1f211a1b929c804 Daniel Borkmann 2016-01-07 4100 goto out;
42df6e1d221dddc Lukas Wunner 2021-10-08 4101 nf_skip_egress(skb, false);
1f211a1b929c804 Daniel Borkmann 2016-01-07 4102 }
3435309167e51b1 Tonghao Zhang 2021-12-22 4103
3435309167e51b1 Tonghao Zhang 2021-12-22 4104 if (static_branch_unlikely(&txqueue_needed_key) &&
3435309167e51b1 Tonghao Zhang 2021-12-22 4105 netdev_xmit_txqueue_skipped())
3435309167e51b1 Tonghao Zhang 2021-12-22 4106 txq = netdev_tx_queue_mapping(dev, skb);
3435309167e51b1 Tonghao Zhang 2021-12-22 4107 else
1f211a1b929c804 Daniel Borkmann 2016-01-07 4108 #endif
3435309167e51b1 Tonghao Zhang 2021-12-22 4109 txq = netdev_core_pick_tx(dev, skb, sb_dev);
3435309167e51b1 Tonghao Zhang 2021-12-22 4110
0287587884b1504 Eric Dumazet 2014-10-05 4111 /* If device/qdisc don't need skb->dst, release it right now while
0287587884b1504 Eric Dumazet 2014-10-05 4112 * its hot in this cpu cache.
0287587884b1504 Eric Dumazet 2014-10-05 4113 */
0287587884b1504 Eric Dumazet 2014-10-05 @4114 if (dev->priv_flags & IFF_XMIT_DST_RELEASE)
0287587884b1504 Eric Dumazet 2014-10-05 4115 skb_dst_drop(skb);
0287587884b1504 Eric Dumazet 2014-10-05 4116 else
0287587884b1504 Eric Dumazet 2014-10-05 4117 skb_dst_force(skb);
0287587884b1504 Eric Dumazet 2014-10-05 4118
a898def29e4119b Paul E. McKenney 2010-02-22 4119 q = rcu_dereference_bh(txq->qdisc);
37437bb2e1ae8af David S. Miller 2008-07-16 4120
cf66ba58b5cb8b1 Koki Sanagi 2010-08-23 4121 trace_net_dev_queue(skb);
^1da177e4c3f415 Linus Torvalds 2005-04-16 4122 if (q->enqueue) {
bbd8a0d3a3b65d3 Krishna Kumar 2009-08-06 4123 rc = __dev_xmit_skb(skb, q, dev, txq);
^1da177e4c3f415 Linus Torvalds 2005-04-16 4124 goto out;
^1da177e4c3f415 Linus Torvalds 2005-04-16 4125 }
^1da177e4c3f415 Linus Torvalds 2005-04-16 4126
^1da177e4c3f415 Linus Torvalds 2005-04-16 4127 /* The device has no queue. Common case for software devices:
eb13da1a103a808 tcharding 2017-02-09 4128 * loopback, all the sorts of tunnels...
^1da177e4c3f415 Linus Torvalds 2005-04-16 4129
eb13da1a103a808 tcharding 2017-02-09 4130 * Really, it is unlikely that netif_tx_lock protection is necessary
eb13da1a103a808 tcharding 2017-02-09 4131 * here. (f.e. loopback and IP tunnels are clean ignoring statistics
eb13da1a103a808 tcharding 2017-02-09 4132 * counters.)
eb13da1a103a808 tcharding 2017-02-09 4133 * However, it is possible, that they rely on protection
eb13da1a103a808 tcharding 2017-02-09 4134 * made by us here.
^1da177e4c3f415 Linus Torvalds 2005-04-16 4135
eb13da1a103a808 tcharding 2017-02-09 4136 * Check this and shot the lock. It is not prone from deadlocks.
eb13da1a103a808 tcharding 2017-02-09 4137 *Either shot noqueue qdisc, it is even simpler 8)
^1da177e4c3f415 Linus Torvalds 2005-04-16 4138 */
^1da177e4c3f415 Linus Torvalds 2005-04-16 4139 if (dev->flags & IFF_UP) {
^1da177e4c3f415 Linus Torvalds 2005-04-16 4140 int cpu = smp_processor_id(); /* ok because BHs are off */
^1da177e4c3f415 Linus Torvalds 2005-04-16 4141
7a10d8c810cfad3 Eric Dumazet 2021-11-30 4142 /* Other cpus might concurrently change txq->xmit_lock_owner
7a10d8c810cfad3 Eric Dumazet 2021-11-30 4143 * to -1 or to their cpu id, but not to our id.
7a10d8c810cfad3 Eric Dumazet 2021-11-30 4144 */
7a10d8c810cfad3 Eric Dumazet 2021-11-30 4145 if (READ_ONCE(txq->xmit_lock_owner) != cpu) {
97cdcf37b57e3f2 Florian Westphal 2019-04-01 4146 if (dev_xmit_recursion())
745e20f1b626b1b Eric Dumazet 2010-09-29 4147 goto recursion_alert;
745e20f1b626b1b Eric Dumazet 2010-09-29 4148
f53c723902d1ac5 Steffen Klassert 2017-12-20 4149 skb = validate_xmit_skb(skb, dev, &again);
1f59533f9ca5634 Jesper Dangaard Brouer 2014-09-03 4150 if (!skb)
d21fd63ea385620 Eric Dumazet 2016-04-12 4151 goto out;
1f59533f9ca5634 Jesper Dangaard Brouer 2014-09-03 4152
3744741adab6d91 Willy Tarreau 2020-08-10 4153 PRANDOM_ADD_NOISE(skb, dev, txq, jiffies);
c773e847ea8f681 David S. Miller 2008-07-08 4154 HARD_TX_LOCK(dev, txq, cpu);
^1da177e4c3f415 Linus Torvalds 2005-04-16 4155
7346649826382b7 Tom Herbert 2011-11-28 4156 if (!netif_xmit_stopped(txq)) {
97cdcf37b57e3f2 Florian Westphal 2019-04-01 4157 dev_xmit_recursion_inc();
ce93718fb7cdbc0 David S. Miller 2014-08-30 4158 skb = dev_hard_start_xmit(skb, dev, txq, &rc);
97cdcf37b57e3f2 Florian Westphal 2019-04-01 4159 dev_xmit_recursion_dec();
572a9d7b6fc7f20 Patrick McHardy 2009-11-10 4160 if (dev_xmit_complete(rc)) {
c773e847ea8f681 David S. Miller 2008-07-08 4161 HARD_TX_UNLOCK(dev, txq);
^1da177e4c3f415 Linus Torvalds 2005-04-16 4162 goto out;
^1da177e4c3f415 Linus Torvalds 2005-04-16 4163 }
^1da177e4c3f415 Linus Torvalds 2005-04-16 4164 }
c773e847ea8f681 David S. Miller 2008-07-08 4165 HARD_TX_UNLOCK(dev, txq);
e87cc4728f0e2fb Joe Perches 2012-05-13 4166 net_crit_ratelimited("Virtual device %s asks to queue packet!\n",
7b6cd1ce72176e2 Joe Perches 2012-02-01 4167 dev->name);
^1da177e4c3f415 Linus Torvalds 2005-04-16 4168 } else {
^1da177e4c3f415 Linus Torvalds 2005-04-16 4169 /* Recursion is detected! It is possible,
745e20f1b626b1b Eric Dumazet 2010-09-29 4170 * unfortunately
745e20f1b626b1b Eric Dumazet 2010-09-29 4171 */
745e20f1b626b1b Eric Dumazet 2010-09-29 4172 recursion_alert:
e87cc4728f0e2fb Joe Perches 2012-05-13 4173 net_crit_ratelimited("Dead loop on virtual device %s, fix it urgently!\n",
7b6cd1ce72176e2 Joe Perches 2012-02-01 4174 dev->name);
^1da177e4c3f415 Linus Torvalds 2005-04-16 4175 }
^1da177e4c3f415 Linus Torvalds 2005-04-16 4176 }
^1da177e4c3f415 Linus Torvalds 2005-04-16 4177
^1da177e4c3f415 Linus Torvalds 2005-04-16 4178 rc = -ENETDOWN;
d4828d85d188dc7 Herbert Xu 2006-06-22 4179 rcu_read_unlock_bh();
^1da177e4c3f415 Linus Torvalds 2005-04-16 4180
015f0688f57ca4d Eric Dumazet 2014-03-27 4181 atomic_long_inc(&dev->tx_dropped);
1f59533f9ca5634 Jesper Dangaard Brouer 2014-09-03 4182 kfree_skb_list(skb);
^1da177e4c3f415 Linus Torvalds 2005-04-16 4183 return rc;
^1da177e4c3f415 Linus Torvalds 2005-04-16 4184 out:
d4828d85d188dc7 Herbert Xu 2006-06-22 4185 rcu_read_unlock_bh();
^1da177e4c3f415 Linus Torvalds 2005-04-16 4186 return rc;
^1da177e4c3f415 Linus Torvalds 2005-04-16 4187 }
f663dd9aaf9ed12 Jason Wang 2014-01-10 4188
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org