netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket
@ 2016-09-01  0:10 Tom Herbert
  2016-09-01  0:10 ` [PATCH net-next 1/4] net: Set SW hash in skb_set_hash_from_sk Tom Herbert
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Tom Herbert @ 2016-09-01  0:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: kernel-team, rick.jones2

This patch set introduces transmit flow steering for socketless packets.
The idea is that we record the transmit queues in a flow table that is
indexed by skbuff hash.  The flow table entries have two values: the
queue_index and the head cnt of packets from the TX queue. We only allow
a queue to change for a flow if the tail cnt in the TX queue advances
beyond the recorded head cnt. That is the condition that should indicate
that all outstanding packets for the flow have completed transmission so
the queue can change.
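The steering condition described above can be sketched roughly as follows. This is a simplified user-space model, not the patch code itself; the struct layout, function name, and callers are hypothetical, though the names mirror the patch:

```c
#include <assert.h>

/* Hypothetical model of one xps_flows table entry. */
struct flow_ent {
	int queue_index;	/* last queue used by this flow (-1 if none) */
	unsigned int queue_ptr;	/* head cnt recorded when last enqueued */
};

/* tail_cnt is the completion count on the flow's previously recorded
 * queue. We may only move the flow to xps_queue once tail_cnt has
 * advanced past the recorded head cnt, i.e. all of the flow's
 * outstanding packets have completed transmission. The signed cast
 * makes the comparison safe across counter wraparound.
 */
static int pick_queue(struct flow_ent *ent, int xps_queue,
		      unsigned int tail_cnt, unsigned int head_cnt_new_q)
{
	if (ent->queue_index >= 0 && ent->queue_index != xps_queue &&
	    (int)(tail_cnt - ent->queue_ptr) < 0) {
		/* Packets still in flight on the old queue: stay put. */
		return ent->queue_index;
	}
	ent->queue_index = xps_queue;
	ent->queue_ptr = head_cnt_new_q;
	return xps_queue;
}
```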

Tracking the inflight queue is performed as part of DQL. Two fields are
added to the dql structure: num_enqueue_ops and num_completed_ops.
num_enqueue_ops is incremented in dql_queued and num_completed_ops is
incremented in dql_completed by the number of operations completed (a
new argument to the function).

This patch set creates /sys/class/net/eth*/xps_dev_flow_table_cnt,
which sets the number of entries in the XPS flow table.
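The table could then be sized from user space along these lines. This is a hypothetical usage sketch: the interface name is an assumption, and per the patch the requested count is rounded up to a power of two by the kernel:

```shell
# Assumes an interface named eth0 and CAP_NET_ADMIN.
echo 1024 > /sys/class/net/eth0/xps_dev_flow_table_cnt  # allocate a 1024-entry flow table
cat /sys/class/net/eth0/xps_dev_flow_table_cnt          # read back the entry count
echo 0 > /sys/class/net/eth0/xps_dev_flow_table_cnt     # free the table
```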

Note that the functionality here is technically best effort (for
instance we don't obtain a lock while processing a flow table entry).
Under high load it is possible that OOO packets can still be generated
due to XPS if two threads are hammering on the same flow table entry.
The assumption of these patches is that occasional OOO packets are not
the end of the world, and this mechanism should prevent OOO in the most
common use cases with XPS.

This is a followup to previous RFC version. Fixes from RFC are:

  - Move counters to DQL
  - Fixed typo
  - Simplified get flow index function
  - Fixed sysfs flow_table_cnt to properly use DEVICE_ATTR_RW
  - Renamed the mechanism

Tested:
  Manually forced all packets to go through the xps_flows path.
  Observed that some flows were deferred to change queues because
  packets were in flight within the flow bucket.

Tom Herbert (4):
  net: Set SW hash in skb_set_hash_from_sk
  dql: Add counters for number of queuing and completion operations
  net: Add xps_dev_flow_table_cnt
  xps_flows: XPS for packets that don't have a socket

 include/linux/dynamic_queue_limits.h |   7 ++-
 include/linux/netdevice.h            |  26 ++++++++-
 include/net/sock.h                   |   6 +-
 lib/dynamic_queue_limits.c           |   3 +-
 net/Kconfig                          |   6 ++
 net/core/dev.c                       |  85 ++++++++++++++++++++++++-----
 net/core/net-sysfs.c                 | 103 +++++++++++++++++++++++++++++++++++
 7 files changed, 214 insertions(+), 22 deletions(-)

-- 
2.8.0.rc2

^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH net-next 1/4] net: Set SW hash in skb_set_hash_from_sk
  2016-09-01  0:10 [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Tom Herbert
@ 2016-09-01  0:10 ` Tom Herbert
  2016-09-01  0:10 ` [PATCH net-next 2/4] dql: Add counters for number of queuing and completion operations Tom Herbert
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Tom Herbert @ 2016-09-01  0:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: kernel-team, rick.jones2

Use __skb_set_sw_hash to set the hash in an skbuff from the socket
txhash.

Signed-off-by: Tom Herbert <tom@herbertland.com>
---
 include/net/sock.h | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/net/sock.h b/include/net/sock.h
index c797c57..12e585c 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -1910,10 +1910,8 @@ static inline void sock_poll_wait(struct file *filp,
 
 static inline void skb_set_hash_from_sk(struct sk_buff *skb, struct sock *sk)
 {
-	if (sk->sk_txhash) {
-		skb->l4_hash = 1;
-		skb->hash = sk->sk_txhash;
-	}
+	if (sk->sk_txhash)
+		__skb_set_sw_hash(skb, sk->sk_txhash, true);
 }
 
 void skb_set_owner_w(struct sk_buff *skb, struct sock *sk);
-- 
2.8.0.rc2


* [PATCH net-next 2/4] dql: Add counters for number of queuing and completion operations
  2016-09-01  0:10 [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Tom Herbert
  2016-09-01  0:10 ` [PATCH net-next 1/4] net: Set SW hash in skb_set_hash_from_sk Tom Herbert
@ 2016-09-01  0:10 ` Tom Herbert
  2016-09-01  0:10 ` [PATCH net-next 3/4] net: Add xps_dev_flow_table_cnt Tom Herbert
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Tom Herbert @ 2016-09-01  0:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: kernel-team, rick.jones2

Add two new counters to struct dql: num_enqueue_ops and
num_completed_ops. num_enqueue_ops is incremented by one in each call to
dql_queued. num_completed_ops is incremented in dql_completed, which
takes a new argument indicating the number of operations completed.
These counters are only intended for statistics and do not impact the
BQL algorithm.

We add a new sysfs entry in byte_queue_limits named inflight_pkts.
This reports the number of packets in flight for the queue, computed
as dql->num_enqueue_ops - dql->num_completed_ops.
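Because both counters are unsigned and monotonically increasing, that subtraction stays correct even after a counter wraps. A small sketch of the arithmetic (illustrative only, not patch code):

```c
#include <assert.h>

/* num_enqueue_ops - num_completed_ops in unsigned arithmetic: the
 * difference is the in-flight count even after either counter has
 * wrapped past UINT_MAX, as long as fewer than 2^32 ops are in flight.
 */
static unsigned int inflight(unsigned int enq, unsigned int comp)
{
	return enq - comp;
}
```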

Signed-off-by: Tom Herbert <tom@herbertland.com>
---
 include/linux/dynamic_queue_limits.h |  7 ++++++-
 include/linux/netdevice.h            |  2 +-
 lib/dynamic_queue_limits.c           |  3 ++-
 net/core/net-sysfs.c                 | 14 ++++++++++++++
 4 files changed, 23 insertions(+), 3 deletions(-)

diff --git a/include/linux/dynamic_queue_limits.h b/include/linux/dynamic_queue_limits.h
index a4be703..b6a4804 100644
--- a/include/linux/dynamic_queue_limits.h
+++ b/include/linux/dynamic_queue_limits.h
@@ -43,6 +43,8 @@ struct dql {
 	unsigned int	adj_limit;		/* limit + num_completed */
 	unsigned int	last_obj_cnt;		/* Count at last queuing */
 
+	unsigned int	num_enqueue_ops;	/* Number of queue operations */
+
 	/* Fields accessed only by completion path (dql_completed) */
 
 	unsigned int	limit ____cacheline_aligned_in_smp; /* Current limit */
@@ -55,6 +57,8 @@ struct dql {
 	unsigned int	lowest_slack;		/* Lowest slack found */
 	unsigned long	slack_start_time;	/* Time slacks seen */
 
+	unsigned int	num_completed_ops;	/* Number of complete ops */
+
 	/* Configuration */
 	unsigned int	max_limit;		/* Max limit */
 	unsigned int	min_limit;		/* Minimum limit */
@@ -83,6 +87,7 @@ static inline void dql_queued(struct dql *dql, unsigned int count)
 	barrier();
 
 	dql->num_queued += count;
+	dql->num_enqueue_ops++;
 }
 
 /* Returns how many objects can be queued, < 0 indicates over limit. */
@@ -92,7 +97,7 @@ static inline int dql_avail(const struct dql *dql)
 }
 
 /* Record number of completed objects and recalculate the limit. */
-void dql_completed(struct dql *dql, unsigned int count);
+void dql_completed(struct dql *dql, unsigned int count, unsigned int ops);
 
 /* Reset dql state */
 void dql_reset(struct dql *dql);
diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index d122be9..0d1d748 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -2999,7 +2999,7 @@ static inline void netdev_tx_completed_queue(struct netdev_queue *dev_queue,
 	if (unlikely(!bytes))
 		return;
 
-	dql_completed(&dev_queue->dql, bytes);
+	dql_completed(&dev_queue->dql, bytes, pkts);
 
 	/*
 	 * Without the memory barrier there is a small possiblity that
diff --git a/lib/dynamic_queue_limits.c b/lib/dynamic_queue_limits.c
index f346715..d5e7a27 100644
--- a/lib/dynamic_queue_limits.c
+++ b/lib/dynamic_queue_limits.c
@@ -14,7 +14,7 @@
 #define AFTER_EQ(A, B) ((int)((A) - (B)) >= 0)
 
 /* Records completed count and recalculates the queue limit */
-void dql_completed(struct dql *dql, unsigned int count)
+void dql_completed(struct dql *dql, unsigned int count, unsigned int ops)
 {
 	unsigned int inprogress, prev_inprogress, limit;
 	unsigned int ovlimit, completed, num_queued;
@@ -108,6 +108,7 @@ void dql_completed(struct dql *dql, unsigned int count)
 	dql->prev_ovlimit = ovlimit;
 	dql->prev_last_obj_cnt = dql->last_obj_cnt;
 	dql->num_completed = completed;
+	dql->num_completed_ops += ops;
 	dql->prev_num_queued = num_queued;
 }
 EXPORT_SYMBOL(dql_completed);
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 6e4f347..ab7b0b6 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -1147,6 +1147,19 @@ static ssize_t bql_show_inflight(struct netdev_queue *queue,
 static struct netdev_queue_attribute bql_inflight_attribute =
 	__ATTR(inflight, S_IRUGO, bql_show_inflight, NULL);
 
+static ssize_t bql_show_inflight_pkts(struct netdev_queue *queue,
+				      struct netdev_queue_attribute *attr,
+				      char *buf)
+{
+	struct dql *dql = &queue->dql;
+
+	return sprintf(buf, "%u\n",
+		       dql->num_enqueue_ops - dql->num_completed_ops);
+}
+
+static struct netdev_queue_attribute bql_inflight_pkts_attribute =
+	__ATTR(inflight_pkts, S_IRUGO, bql_show_inflight_pkts, NULL);
+
 #define BQL_ATTR(NAME, FIELD)						\
 static ssize_t bql_show_ ## NAME(struct netdev_queue *queue,		\
 				 struct netdev_queue_attribute *attr,	\
@@ -1176,6 +1189,7 @@ static struct attribute *dql_attrs[] = {
 	&bql_limit_min_attribute.attr,
 	&bql_hold_time_attribute.attr,
 	&bql_inflight_attribute.attr,
+	&bql_inflight_pkts_attribute.attr,
 	NULL
 };
 
-- 
2.8.0.rc2


* [PATCH net-next 3/4] net: Add xps_dev_flow_table_cnt
  2016-09-01  0:10 [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Tom Herbert
  2016-09-01  0:10 ` [PATCH net-next 1/4] net: Set SW hash in skb_set_hash_from_sk Tom Herbert
  2016-09-01  0:10 ` [PATCH net-next 2/4] dql: Add counters for number of queuing and completion operations Tom Herbert
@ 2016-09-01  0:10 ` Tom Herbert
  2016-09-01  0:10 ` [PATCH net-next 4/4] xps_flows: XPS for packets that don't have a socket Tom Herbert
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 13+ messages in thread
From: Tom Herbert @ 2016-09-01  0:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: kernel-team, rick.jones2

Add infrastructure and definitions to create XPS flow tables. This
creates the new sysfs entry /sys/class/net/eth*/xps_dev_flow_table_cnt.
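The store handler rounds the requested count up to a power of two with a bit-smearing loop; extracted into a standalone sketch (illustrative, mirrors the loop in change_xps_dev_flow_table_cnt in the diff below):

```c
#include <assert.h>

/* Computes roundup_pow_of_two(count) - 1 without risking overflow, by
 * OR-ing the mask with itself shifted right until every bit below the
 * top set bit is one.
 */
static unsigned long count_to_mask(unsigned long count)
{
	unsigned long mask = count - 1;

	while ((mask | (mask >> 1)) != mask)
		mask |= (mask >> 1);
	return mask;
}
```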

Signed-off-by: Tom Herbert <tom@herbertland.com>
---
 include/linux/netdevice.h | 24 +++++++++++++
 net/core/net-sysfs.c      | 89 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 113 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index 0d1d748..0164c47 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -736,6 +736,27 @@ struct xps_dev_maps {
     (nr_cpu_ids * sizeof(struct xps_map *)))
 #endif /* CONFIG_XPS */
 
+#ifdef CONFIG_XPS_FLOWS
+struct xps_dev_flow {
+	union {
+		u64	v64;
+		struct {
+			int		queue_index;
+			unsigned int	queue_ptr;
+		};
+	};
+};
+
+struct xps_dev_flow_table {
+	unsigned int mask;
+	struct rcu_head rcu;
+	struct xps_dev_flow flows[0];
+};
+#define XPS_DEV_FLOW_TABLE_SIZE(_num) (sizeof(struct xps_dev_flow_table) + \
+	((_num) * sizeof(struct xps_dev_flow)))
+
+#endif /* CONFIG_XPS_FLOWS */
+
 #define TC_MAX_QUEUE	16
 #define TC_BITMASK	15
 /* HW offloaded queuing disciplines txq count and offset maps */
@@ -1809,6 +1830,9 @@ struct net_device {
 #ifdef CONFIG_XPS
 	struct xps_dev_maps __rcu *xps_maps;
 #endif
+#ifdef CONFIG_XPS_FLOWS
+	struct xps_dev_flow_table __rcu *xps_flow_table;
+#endif
 #ifdef CONFIG_NET_CLS_ACT
 	struct tcf_proto __rcu  *egress_cl_list;
 #endif
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index ab7b0b6..0d00b9c 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -503,6 +503,92 @@ static ssize_t phys_switch_id_show(struct device *dev,
 }
 static DEVICE_ATTR_RO(phys_switch_id);
 
+#ifdef CONFIG_XPS_FLOWS
+static void xps_dev_flow_table_release(struct rcu_head *rcu)
+{
+	struct xps_dev_flow_table *table = container_of(rcu,
+	    struct xps_dev_flow_table, rcu);
+	vfree(table);
+}
+
+static int change_xps_dev_flow_table_cnt(struct net_device *dev,
+					 unsigned long count)
+{
+	unsigned long mask;
+	struct xps_dev_flow_table *table, *old_table;
+	static DEFINE_SPINLOCK(xps_dev_flow_lock);
+
+	if (!capable(CAP_NET_ADMIN))
+		return -EPERM;
+
+	if (count) {
+		mask = count - 1;
+		/* mask = roundup_pow_of_two(count) - 1;
+		 * without overflows...
+		 */
+		while ((mask | (mask >> 1)) != mask)
+			mask |= (mask >> 1);
+		/* On 64 bit arches, must check mask fits in table->mask (u32),
+		 * and on 32bit arches, must check
+		 * XPS_DEV_FLOW_TABLE_SIZE(mask + 1) doesn't overflow.
+		 */
+#if BITS_PER_LONG > 32
+		if (mask > (unsigned long)(u32)mask)
+			return -EINVAL;
+#else
+		if (mask > (ULONG_MAX - XPS_DEV_FLOW_TABLE_SIZE(1))
+				/ sizeof(struct xps_dev_flow)) {
+			/* Enforce a limit to prevent overflow */
+			return -EINVAL;
+		}
+#endif
+		table = vmalloc(XPS_DEV_FLOW_TABLE_SIZE(mask + 1));
+		if (!table)
+			return -ENOMEM;
+
+		table->mask = mask;
+		for (count = 0; count <= mask; count++)
+			table->flows[count].queue_index = -1;
+	} else
+		table = NULL;
+
+	spin_lock(&xps_dev_flow_lock);
+	old_table = rcu_dereference_protected(dev->xps_flow_table,
+					      lockdep_is_held(&xps_dev_flow_lock));
+	rcu_assign_pointer(dev->xps_flow_table, table);
+	spin_unlock(&xps_dev_flow_lock);
+
+	if (old_table)
+		call_rcu(&old_table->rcu, xps_dev_flow_table_release);
+
+	return 0;
+}
+
+static ssize_t xps_dev_flow_table_cnt_store(struct device *dev,
+					    struct device_attribute *attr,
+					    const char *buf, size_t len)
+{
+	return netdev_store(dev, attr, buf, len, change_xps_dev_flow_table_cnt);
+}
+
+static ssize_t xps_dev_flow_table_cnt_show(struct device *dev,
+			    struct device_attribute *attr, char *buf)
+{
+	struct net_device *netdev = to_net_dev(dev);
+	struct xps_dev_flow_table *table;
+	unsigned int cnt = 0;
+
+	rcu_read_lock();
+	table = rcu_dereference(netdev->xps_flow_table);
+	if (table)
+		cnt = table->mask + 1;
+	rcu_read_unlock();
+
+	return sprintf(buf, fmt_dec, cnt);
+}
+DEVICE_ATTR_RW(xps_dev_flow_table_cnt);
+#endif /* CONFIG_XPS_FLOWS */
+
 static struct attribute *net_class_attrs[] = {
 	&dev_attr_netdev_group.attr,
 	&dev_attr_type.attr,
@@ -531,6 +617,9 @@ static struct attribute *net_class_attrs[] = {
 	&dev_attr_phys_port_name.attr,
 	&dev_attr_phys_switch_id.attr,
 	&dev_attr_proto_down.attr,
+#ifdef CONFIG_XPS_FLOWS
+	&dev_attr_xps_dev_flow_table_cnt.attr,
+#endif
 	NULL,
 };
 ATTRIBUTE_GROUPS(net_class);
-- 
2.8.0.rc2


* [PATCH net-next 4/4] xps_flows: XPS for packets that don't have a socket
  2016-09-01  0:10 [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Tom Herbert
                   ` (2 preceding siblings ...)
  2016-09-01  0:10 ` [PATCH net-next 3/4] net: Add xps_dev_flow_table_cnt Tom Herbert
@ 2016-09-01  0:10 ` Tom Herbert
  2016-09-01 15:36   ` Alexander Duyck
  2016-09-01  0:37 ` [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Eric Dumazet
  2016-09-01 19:25 ` Florian Fainelli
  5 siblings, 1 reply; 13+ messages in thread
From: Tom Herbert @ 2016-09-01  0:10 UTC (permalink / raw)
  To: davem, netdev; +Cc: kernel-team, rick.jones2

xps_flows maintains a per device flow table that is indexed by the
skbuff hash. The table is only consulted when there is no queue saved in
a transmit socket for an skbuff.

Each entry in the flow table contains a queue index and a queue
pointer. The queue pointer is set when a queue is chosen using a
flow table entry. This pointer is set to the head pointer in the
transmit queue (which is maintained by BQL).

The new function get_xps_flows_index looks up flows in the xps_flows
table. The entry returned gives the last queue a matching flow
used. The returned queue is compared against the normal XPS queue. If
they are different, then we only switch if the tail pointer in the TX
queue has advanced past the pointer saved in the entry. In this
way OOO should be avoided when XPS wants to use a different queue.
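The resulting selection order in __netdev_pick_tx reduces to a three-way branch. Here is a hypothetical, simplified model with plain ints standing in for the real lookups:

```c
#include <assert.h>

/* Mirrors the branch structure of the patched __netdev_pick_tx:
 * sk_queue    - queue index cached in the socket (or -1)
 * xps_flows_q - result of get_xps_flows_index() (or -1)
 * xps_q       - result of get_xps_queue() (or -1)
 * hash_q      - fallback from skb_tx_hash()
 */
static int pick_tx(int sk_queue, int ooo_okay, int real_num_tx_queues,
		   int xps_flows_q, int xps_q, int hash_q)
{
	int new_index;

	if (sk_queue < 0)
		new_index = xps_flows_q;	/* no socket queue: try xps_flows */
	else if (ooo_okay || sk_queue >= real_num_tx_queues)
		new_index = xps_q;		/* reordering allowed: plain XPS */
	else
		return sk_queue;		/* valid queue, must keep order */

	if (new_index < 0)
		new_index = hash_q;		/* fall back to hashing */
	return new_index;
}
```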

Signed-off-by: Tom Herbert <tom@herbertland.com>
---
 net/Kconfig    |  6 +++++
 net/core/dev.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++-----------
 2 files changed, 76 insertions(+), 15 deletions(-)

diff --git a/net/Kconfig b/net/Kconfig
index 7b6cd34..f77fad1 100644
--- a/net/Kconfig
+++ b/net/Kconfig
@@ -255,6 +255,12 @@ config XPS
 	depends on SMP
 	default y
 
+config XPS_FLOWS
+	bool
+	depends on XPS
+	depends on BQL
+	default y
+
 config HWBM
        bool
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 34b5322..fc68d19 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3210,6 +3210,7 @@ sch_handle_egress(struct sk_buff *skb, int *ret, struct net_device *dev)
 }
 #endif /* CONFIG_NET_EGRESS */
 
+/* Must be called with RCU read_lock */
 static inline int get_xps_queue(struct net_device *dev, struct sk_buff *skb)
 {
 #ifdef CONFIG_XPS
@@ -3217,7 +3218,6 @@ static inline int get_xps_queue(struct net_device *dev, struct sk_buff *skb)
 	struct xps_map *map;
 	int queue_index = -1;
 
-	rcu_read_lock();
 	dev_maps = rcu_dereference(dev->xps_maps);
 	if (dev_maps) {
 		map = rcu_dereference(
@@ -3232,7 +3232,6 @@ static inline int get_xps_queue(struct net_device *dev, struct sk_buff *skb)
 				queue_index = -1;
 		}
 	}
-	rcu_read_unlock();
 
 	return queue_index;
 #else
@@ -3240,26 +3239,82 @@ static inline int get_xps_queue(struct net_device *dev, struct sk_buff *skb)
 #endif
 }
 
-static u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
+/* Must be called with RCU read_lock */
+static int get_xps_flows_index(struct net_device *dev, struct sk_buff *skb)
 {
-	struct sock *sk = skb->sk;
-	int queue_index = sk_tx_queue_get(sk);
+#ifdef CONFIG_XPS_FLOWS
+	struct xps_dev_flow_table *flow_table;
+	struct xps_dev_flow ent;
+	int queue_index;
+	struct netdev_queue *txq;
+	u32 hash;
 
-	if (queue_index < 0 || skb->ooo_okay ||
-	    queue_index >= dev->real_num_tx_queues) {
-		int new_index = get_xps_queue(dev, skb);
-		if (new_index < 0)
-			new_index = skb_tx_hash(dev, skb);
+	flow_table = rcu_dereference(dev->xps_flow_table);
+	if (!flow_table)
+		return -1;
 
-		if (queue_index != new_index && sk &&
-		    sk_fullsock(sk) &&
-		    rcu_access_pointer(sk->sk_dst_cache))
-			sk_tx_queue_set(sk, new_index);
+	queue_index = get_xps_queue(dev, skb);
+	if (queue_index < 0)
+		return -1;
 
-		queue_index = new_index;
+	hash = skb_get_hash(skb);
+	if (!hash)
+		return -1;
+
+	ent.v64 = flow_table->flows[hash & flow_table->mask].v64;
+
+	if (queue_index != ent.queue_index &&
+	    ent.queue_index >= 0 &&
+	    ent.queue_index < dev->real_num_tx_queues) {
+		txq = netdev_get_tx_queue(dev, ent.queue_index);
+		if ((int)(txq->dql.num_completed_ops - ent.queue_ptr) < 0)  {
+			/* The current queue's tail has not advanced beyond the
+			 * last packet that was enqueued using the table entry.
+			 * We can't change queues without risking OOO. Stick
+			 * with the queue listed in the flow table.
+			 */
+			queue_index = ent.queue_index;
+		}
 	}
 
+	/* Save the updated entry */
+	txq = netdev_get_tx_queue(dev, queue_index);
+	ent.queue_index = queue_index;
+	ent.queue_ptr = txq->dql.num_enqueue_ops;
+	flow_table->flows[hash & flow_table->mask].v64 = ent.v64;
+
 	return queue_index;
+#else
+	return get_xps_queue(dev, skb);
+#endif
+}
+
+static u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
+{
+	struct sock *sk = skb->sk;
+	int queue_index = sk_tx_queue_get(sk);
+	int new_index;
+
+	if (queue_index < 0) {
+		/* Socket did not provide a queue index, try xps_flows */
+		new_index = get_xps_flows_index(dev, skb);
+	} else if (skb->ooo_okay || queue_index >= dev->real_num_tx_queues) {
+		/* Queue index in socket, see if we can find a better one */
+		new_index = get_xps_queue(dev, skb);
+	} else {
+		/* Valid queue in socket and can't send OOO. Just return it */
+		return queue_index;
+	}
+
+	/* No queue index from flow steering, fallback to hash */
+	if (new_index < 0)
+		new_index = skb_tx_hash(dev, skb);
+
+	if (queue_index != new_index && sk && sk_fullsock(sk) &&
+	    rcu_access_pointer(sk->sk_dst_cache))
+		sk_tx_queue_set(sk, new_index);
+
+	return new_index;
 }
 
 struct netdev_queue *netdev_pick_tx(struct net_device *dev,
-- 
2.8.0.rc2


* Re: [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket
  2016-09-01  0:10 [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Tom Herbert
                   ` (3 preceding siblings ...)
  2016-09-01  0:10 ` [PATCH net-next 4/4] xps_flows: XPS for packets that don't have a socket Tom Herbert
@ 2016-09-01  0:37 ` Eric Dumazet
  2016-09-01 16:14   ` Tom Herbert
  2016-09-01 19:25 ` Florian Fainelli
  5 siblings, 1 reply; 13+ messages in thread
From: Eric Dumazet @ 2016-09-01  0:37 UTC (permalink / raw)
  To: Tom Herbert; +Cc: davem, netdev, kernel-team, rick.jones2

On Wed, 2016-08-31 at 17:10 -0700, Tom Herbert wrote:

> Tested:
>   Manually forced all packets to go through the xps_flows path.
>   Observed that some flows were deferred to change queues because
>   packets were in flight within the flow bucket.

I did not realize you were ready to submit this new infra !

Please add performance tests and documentation.
( Documentation/networking/scaling.txt should be a nice place ) 

Unconnected UDP packets are candidates for this selection, even
locally generated ones, while the applications may be pinning their
thread(s) to cpu(s); TX completion will then happen on multiple cpus.

Not sure about af_packet and/or pktgen ?

- The new hash table is vmalloc()ed on a single NUMA node. (in
comparison RFS table (per rx queue) can be properly accessed by a single
cpu servicing queue interrupts)

- Each packet will likely get an additional cache miss in a DDOS
forwarding workload.

Thanks.


* Re: [PATCH net-next 4/4] xps_flows: XPS for packets that don't have a socket
  2016-09-01  0:10 ` [PATCH net-next 4/4] xps_flows: XPS for packets that don't have a socket Tom Herbert
@ 2016-09-01 15:36   ` Alexander Duyck
  2016-09-01 15:56     ` Tom Herbert
  0 siblings, 1 reply; 13+ messages in thread
From: Alexander Duyck @ 2016-09-01 15:36 UTC (permalink / raw)
  To: Tom Herbert; +Cc: David Miller, Netdev, Kernel Team, Rick Jones

On Wed, Aug 31, 2016 at 5:10 PM, Tom Herbert <tom@herbertland.com> wrote:
> xps_flows maintains a per device flow table that is indexed by the
> skbuff hash. The table is only consulted when there is no queue saved in
> a transmit socket for an skbuff.
>
> Each entry in the flow table contains a queue index and a queue
> pointer. The queue pointer is set when a queue is chosen using a
> flow table entry. This pointer is set to the head pointer in the
> transmit queue (which is maintained by BQL).
>
> The new function get_xps_flows_index looks up flows in the
> xps_flows table. The entry returned gives the last queue a matching flow
> used. The returned queue is compared against the normal XPS queue. If
> they are different, then we only switch if the tail pointer in the TX
> queue has advanced past the pointer saved in the entry. In this
> way OOO should be avoided when XPS wants to use a different queue.
>
> Signed-off-by: Tom Herbert <tom@herbertland.com>
> ---
>  net/Kconfig    |  6 +++++
>  net/core/dev.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++-----------
>  2 files changed, 76 insertions(+), 15 deletions(-)
>

So it looks like you didn't address the two issues I called out with
this patch last time.  I have called them out again below.

> diff --git a/net/core/dev.c b/net/core/dev.c
> index 34b5322..fc68d19 100644
> --- a/net/core/dev.c
> +++ b/net/core/dev.c

<snip>

> @@ -3240,26 +3239,82 @@ static inline int get_xps_queue(struct net_device *dev, struct sk_buff *skb)
>  #endif
>  }
>
> -static u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
> +/* Must be called with RCU read_lock */
> +static int get_xps_flows_index(struct net_device *dev, struct sk_buff *skb)
>  {
> -       struct sock *sk = skb->sk;
> -       int queue_index = sk_tx_queue_get(sk);
> +#ifdef CONFIG_XPS_FLOWS
> +       struct xps_dev_flow_table *flow_table;
> +       struct xps_dev_flow ent;
> +       int queue_index;
> +       struct netdev_queue *txq;
> +       u32 hash;
>
> -       if (queue_index < 0 || skb->ooo_okay ||
> -           queue_index >= dev->real_num_tx_queues) {
> -               int new_index = get_xps_queue(dev, skb);
> -               if (new_index < 0)
> -                       new_index = skb_tx_hash(dev, skb);
> +       flow_table = rcu_dereference(dev->xps_flow_table);
> +       if (!flow_table)
> +               return -1;
>
> -               if (queue_index != new_index && sk &&
> -                   sk_fullsock(sk) &&
> -                   rcu_access_pointer(sk->sk_dst_cache))
> -                       sk_tx_queue_set(sk, new_index);
> +       queue_index = get_xps_queue(dev, skb);
> +       if (queue_index < 0)
> +               return -1;

I really think what would make more sense here is to just call
skb_tx_hash to acquire the queue_index instead of just exiting.  That
way we don't have the flows toggling back and forth between XPS and
non-XPS cpus.

> -               queue_index = new_index;
> +       hash = skb_get_hash(skb);
> +       if (!hash)
> +               return -1;

So a hash of 0 is perfectly valid.  So this line doesn't make any
sense.  You could just drop these two lines and work with the hash you
generated.

The rest of this looks good.

- Alex


* Re: [PATCH net-next 4/4] xps_flows: XPS for packets that don't have a socket
  2016-09-01 15:36   ` Alexander Duyck
@ 2016-09-01 15:56     ` Tom Herbert
  2016-09-01 23:18       ` Alexander Duyck
  0 siblings, 1 reply; 13+ messages in thread
From: Tom Herbert @ 2016-09-01 15:56 UTC (permalink / raw)
  To: Alexander Duyck; +Cc: David Miller, Netdev, Kernel Team, Rick Jones

On Thu, Sep 1, 2016 at 8:36 AM, Alexander Duyck
<alexander.duyck@gmail.com> wrote:
> On Wed, Aug 31, 2016 at 5:10 PM, Tom Herbert <tom@herbertland.com> wrote:
>> xps_flows maintains a per device flow table that is indexed by the
>> skbuff hash. The table is only consulted when there is no queue saved in
>> a transmit socket for an skbuff.
>>
>> Each entry in the flow table contains a queue index and a queue
>> pointer. The queue pointer is set when a queue is chosen using a
>> flow table entry. This pointer is set to the head pointer in the
>> transmit queue (which is maintained by BQL).
>>
>> The new function get_xps_flows_index looks up flows in the
>> xps_flows table. The entry returned gives the last queue a matching flow
>> used. The returned queue is compared against the normal XPS queue. If
>> they are different, then we only switch if the tail pointer in the TX
>> queue has advanced past the pointer saved in the entry. In this
>> way OOO should be avoided when XPS wants to use a different queue.
>>
>> Signed-off-by: Tom Herbert <tom@herbertland.com>
>> ---
>>  net/Kconfig    |  6 +++++
>>  net/core/dev.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++-----------
>>  2 files changed, 76 insertions(+), 15 deletions(-)
>>
>
> So it looks like you didn't address the two issues I called out with
> this patch last time.  I have called them out again below.
>
>> diff --git a/net/core/dev.c b/net/core/dev.c
>> index 34b5322..fc68d19 100644
>> --- a/net/core/dev.c
>> +++ b/net/core/dev.c
>
> <snip>
>
>> @@ -3240,26 +3239,82 @@ static inline int get_xps_queue(struct net_device *dev, struct sk_buff *skb)
>>  #endif
>>  }
>>
>> -static u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
>> +/* Must be called with RCU read_lock */
>> +static int get_xps_flows_index(struct net_device *dev, struct sk_buff *skb)
>>  {
>> -       struct sock *sk = skb->sk;
>> -       int queue_index = sk_tx_queue_get(sk);
>> +#ifdef CONFIG_XPS_FLOWS
>> +       struct xps_dev_flow_table *flow_table;
>> +       struct xps_dev_flow ent;
>> +       int queue_index;
>> +       struct netdev_queue *txq;
>> +       u32 hash;
>>
>> -       if (queue_index < 0 || skb->ooo_okay ||
>> -           queue_index >= dev->real_num_tx_queues) {
>> -               int new_index = get_xps_queue(dev, skb);
>> -               if (new_index < 0)
>> -                       new_index = skb_tx_hash(dev, skb);
>> +       flow_table = rcu_dereference(dev->xps_flow_table);
>> +       if (!flow_table)
>> +               return -1;
>>
>> -               if (queue_index != new_index && sk &&
>> -                   sk_fullsock(sk) &&
>> -                   rcu_access_pointer(sk->sk_dst_cache))
>> -                       sk_tx_queue_set(sk, new_index);
>> +       queue_index = get_xps_queue(dev, skb);
>> +       if (queue_index < 0)
>> +               return -1;
>
> I really think what would make more sense here is to just call
> skb_tx_hash to acquire the queue_index instead of just exiting.  That
> way we don't have the flows toggling back and forth between XPS and
> non-XPS cpus.
>
__netdev_pick_tx checks the return value to be < 0 and will call
skb_tx_hash if it is.

>> -               queue_index = new_index;
>> +       hash = skb_get_hash(skb);
>> +       if (!hash)
>> +               return -1;
>
> So a hash of 0 is perfectly valid.  So this line doesn't make any
> sense.  You could just drop these two lines and work with the hash you
> generated.
>
A hash of zero indicates an invalid hash. See __flow_hash_from_keys
for instance.

> The rest of this looks good.
>
> - Alex


* Re: [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket
  2016-09-01  0:37 ` [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Eric Dumazet
@ 2016-09-01 16:14   ` Tom Herbert
  0 siblings, 0 replies; 13+ messages in thread
From: Tom Herbert @ 2016-09-01 16:14 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: David S. Miller, Linux Kernel Network Developers, Kernel Team,
	Rick Jones

On Wed, Aug 31, 2016 at 5:37 PM, Eric Dumazet <eric.dumazet@gmail.com> wrote:
> On Wed, 2016-08-31 at 17:10 -0700, Tom Herbert wrote:
>
>> Tested:
>>   Manually forced all packets to go through the xps_flows path.
>>   Observed that some flows were deferred to change queues because
>>   packets were in flight within the flow bucket.
>
> I did not realize you were ready to submit this new infra !
>
Sorry, I was assuming there would be some more revisions :-).

> Please add performance tests and documentation.
> ( Documentation/networking/scaling.txt should be a nice place )
>
Waiting to see if this mitigates Rick's problem.

> Unconnected UDP packets are candidates to this selection,
> even locally generated, while maybe the applications are pinning their
> thread(s) to cpu(s)
> TX completion will then happen on multiple cpus.
>
They are now, but I am not certain that is the way to go. Not all
unconnected UDP has in-order delivery requirements; I suspect most
don't, so this might become configurable. I do wonder about something
like QUIC though: do you know if they are using unconnected sockets and
depend on in-order delivery?

> Not sure about af_packet and/or pktgen ?
>
> - The new hash table is vmalloc()ed on a single NUMA node. (in
> comparison RFS table (per rx queue) can be properly accessed by a single
> cpu servicing queue interrupts)
>
Yeah, that's kind of unpleasant. Since we're starting from the
application side this is more like rps_sock_flow_table, but we are
writing to it on every packet. Other than sizing the table to prevent
collisions between flows, I don't readily see a way to get the same
sort of isolation we have in RPS. Any ideas?
> - Each packet will likely get an additional cache miss in a DDOS
> forwarding workload.

We don't need xps_flows in forwarding. It looks like the only
situation where we need it is when the host is sourcing a flow but
there is no connected socket available. I'll make the mechanism opt-in
in the next rev.

Thanks,
Tom

>
> Thanks.
>
>


* Re: [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket
  2016-09-01  0:10 [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Tom Herbert
                   ` (4 preceding siblings ...)
  2016-09-01  0:37 ` [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Eric Dumazet
@ 2016-09-01 19:25 ` Florian Fainelli
  2016-09-01 19:32   ` Tom Herbert
  5 siblings, 1 reply; 13+ messages in thread
From: Florian Fainelli @ 2016-09-01 19:25 UTC (permalink / raw)
  To: Tom Herbert, davem, netdev; +Cc: kernel-team, rick.jones2

On 08/31/2016 05:10 PM, Tom Herbert wrote:
> This patch set introduces transmit flow steering for socketless packets.
> The idea is that we record the transmit queues in a flow table that is
> indexed by skbuff hash.  The flow table entries have two values: the
> queue_index and the head cnt of packets from the TX queue. We only allow
> a queue to change for a flow if the tail cnt in the TX queue advances
> beyond the recorded head cnt. That is the condition that should indicate
> that all outstanding packets for the flow have completed transmission so
> the queue can change.
> 
> Tracking the inflight queue is performed as part of DQL. Two fields are
> added to the dql structure: num_enqueue_ops and num_completed_ops.
> num_enqueue_ops is incremented in dql_queued and num_completed_ops is
> incremented in dql_completed by the number of operations completed (a
> new argument to the function).
> 
> This patch set creates /sys/class/net/eth*/xps_dev_flow_table_cnt
> which gives the number of entries in the XPS flow table.

If you respin, do you mind updating the sysfs documentation at
Documentation/ABI/testing/sysfs-class-net-queues with the new entries
you are adding? Thanks!
-- 
Florian
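The DQL op-counter scheme described in the quoted cover letter can be sketched roughly as below. The field names come from the patch, but the bodies are a simplified user-space illustration, not the kernel code: a flow records the enqueue counter as its "head cnt", and is allowed to switch TX queues only once the completion counter has caught up with that value.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal sketch of the two counters added to struct dql. */
struct dql_sketch {
	unsigned int num_enqueue_ops;    /* bumped once per dql_queued() */
	unsigned int num_completed_ops;  /* bumped by 'ops' in dql_completed() */
};

static unsigned int sketch_dql_queued(struct dql_sketch *dql)
{
	/* Caller records the returned value as the flow's head cnt. */
	return ++dql->num_enqueue_ops;
}

static void sketch_dql_completed(struct dql_sketch *dql, unsigned int ops)
{
	dql->num_completed_ops += ops;
}

/* A flow may switch TX queues once everything it enqueued has
 * completed, i.e. the completion counter has caught up with the head
 * cnt recorded for the flow (signed compare tolerates wraparound). */
static int sketch_flow_may_move(const struct dql_sketch *dql,
				unsigned int head_cnt)
{
	return (int)(dql->num_completed_ops - head_cnt) >= 0;
}
```

The queue-switch condition is the heart of the OOO avoidance: until the completion count reaches the recorded head cnt, packets from the flow may still be in flight on the old queue.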


* Re: [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket
  2016-09-01 19:25 ` Florian Fainelli
@ 2016-09-01 19:32   ` Tom Herbert
  2016-09-01 19:46     ` Florian Fainelli
  0 siblings, 1 reply; 13+ messages in thread
From: Tom Herbert @ 2016-09-01 19:32 UTC (permalink / raw)
  To: Florian Fainelli
  Cc: David S. Miller, Linux Kernel Network Developers, Kernel Team,
	Rick Jones

On Thu, Sep 1, 2016 at 12:25 PM, Florian Fainelli <f.fainelli@gmail.com> wrote:
> On 08/31/2016 05:10 PM, Tom Herbert wrote:
>> This patch set introduces transmit flow steering for socketless packets.
>> The idea is that we record the transmit queues in a flow table that is
>> indexed by skbuff hash.  The flow table entries have two values: the
>> queue_index and the head cnt of packets from the TX queue. We only allow
>> a queue to change for a flow if the tail cnt in the TX queue advances
>> beyond the recorded head cnt. That is the condition that should indicate
>> that all outstanding packets for the flow have completed transmission so
>> the queue can change.
>>
>> Tracking the inflight queue is performed as part of DQL. Two fields are
>> added to the dql structure: num_enqueue_ops and num_completed_ops.
>> num_enqueue_ops is incremented in dql_queued and num_completed_ops is
>> incremented in dql_completed by the number of operations completed (a
>> new argument to the function).
>>
>> This patch set creates /sys/class/net/eth*/xps_dev_flow_table_cnt
>> which gives the number of entries in the XPS flow table.
>
> If you respin, do you mind updating the sysfs documentation at
> Documentation/ABI/testing/sysfs-class-net-queues with the new entries
> you are adding? Thanks!

There are no per-queue sysfs entries being added.

Tom

> --
> Florian


* Re: [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket
  2016-09-01 19:32   ` Tom Herbert
@ 2016-09-01 19:46     ` Florian Fainelli
  0 siblings, 0 replies; 13+ messages in thread
From: Florian Fainelli @ 2016-09-01 19:46 UTC (permalink / raw)
  To: Tom Herbert
  Cc: David S. Miller, Linux Kernel Network Developers, Kernel Team,
	Rick Jones

On 09/01/2016 12:32 PM, Tom Herbert wrote:
> On Thu, Sep 1, 2016 at 12:25 PM, Florian Fainelli <f.fainelli@gmail.com> wrote:
>> On 08/31/2016 05:10 PM, Tom Herbert wrote:
>>> This patch set introduces transmit flow steering for socketless packets.
>>> The idea is that we record the transmit queues in a flow table that is
>>> indexed by skbuff hash.  The flow table entries have two values: the
>>> queue_index and the head cnt of packets from the TX queue. We only allow
>>> a queue to change for a flow if the tail cnt in the TX queue advances
>>> beyond the recorded head cnt. That is the condition that should indicate
>>> that all outstanding packets for the flow have completed transmission so
>>> the queue can change.
>>>
>>> Tracking the inflight queue is performed as part of DQL. Two fields are
>>> added to the dql structure: num_enqueue_ops and num_completed_ops.
>>> num_enqueue_ops is incremented in dql_queued and num_completed_ops is
>>> incremented in dql_completed by the number of operations completed (a
>>> new argument to the function).
>>>
>>> This patch set creates /sys/class/net/eth*/xps_dev_flow_table_cnt
>>> which gives the number of entries in the XPS flow table.
>>
>> If you respin, do you mind updating the sysfs documentation at
>> Documentation/ABI/testing/sysfs-class-net-queues with the new entries
>> you are adding? Thanks!
> 
> There are no per-queue sysfs entries being added.

OK, I pasted the wrong file; how about this one:
Documentation/ABI/testing/sysfs-class-net

The point being: if you add a new attribute, you document it.
-- 
Florian


* Re: [PATCH net-next 4/4] xps_flows: XPS for packets that don't have a socket
  2016-09-01 15:56     ` Tom Herbert
@ 2016-09-01 23:18       ` Alexander Duyck
  0 siblings, 0 replies; 13+ messages in thread
From: Alexander Duyck @ 2016-09-01 23:18 UTC (permalink / raw)
  To: Tom Herbert; +Cc: David Miller, Netdev, Kernel Team, Rick Jones

On Thu, Sep 1, 2016 at 8:56 AM, Tom Herbert <tom@herbertland.com> wrote:
> On Thu, Sep 1, 2016 at 8:36 AM, Alexander Duyck
> <alexander.duyck@gmail.com> wrote:
>> On Wed, Aug 31, 2016 at 5:10 PM, Tom Herbert <tom@herbertland.com> wrote:
>>> xps_flows maintains a per device flow table that is indexed by the
>>> skbuff hash. The table is only consulted when there is no queue saved in
>>> a transmit socket for an skbuff.
>>>
>>> Each entry in the flow table contains a queue index and a queue
>>> pointer. The queue pointer is set when a queue is chosen using a
>>> flow table entry. This pointer is set to the head pointer in the
>>> transmit queue (which is maintained by BQL).
>>>
>>> The new function get_xps_flows_index looks up flows in the
>>> xps_flows table. The entry returned gives the last queue a matching flow
>>> used. The returned queue is compared against the normal XPS queue. If
>>> they are different, then we only switch if the tail pointer in the TX
>>> queue has advanced past the pointer saved in the entry. In this
>>> way OOO should be avoided when XPS wants to use a different queue.
>>>
>>> Signed-off-by: Tom Herbert <tom@herbertland.com>
>>> ---
>>>  net/Kconfig    |  6 +++++
>>>  net/core/dev.c | 85 +++++++++++++++++++++++++++++++++++++++++++++++-----------
>>>  2 files changed, 76 insertions(+), 15 deletions(-)
>>>
>>
>> So it looks like you didn't address the two issues I called out with
>> this patch last time.  I have called them out again below.
>>
>>> diff --git a/net/core/dev.c b/net/core/dev.c
>>> index 34b5322..fc68d19 100644
>>> --- a/net/core/dev.c
>>> +++ b/net/core/dev.c
>>
>> <snip>
>>
>>> @@ -3240,26 +3239,82 @@ static inline int get_xps_queue(struct net_device *dev, struct sk_buff *skb)
>>>  #endif
>>>  }
>>>
>>> -static u16 __netdev_pick_tx(struct net_device *dev, struct sk_buff *skb)
>>> +/* Must be called with RCU read_lock */
>>> +static int get_xps_flows_index(struct net_device *dev, struct sk_buff *skb)
>>>  {
>>> -       struct sock *sk = skb->sk;
>>> -       int queue_index = sk_tx_queue_get(sk);
>>> +#ifdef CONFIG_XPS_FLOWS
>>> +       struct xps_dev_flow_table *flow_table;
>>> +       struct xps_dev_flow ent;
>>> +       int queue_index;
>>> +       struct netdev_queue *txq;
>>> +       u32 hash;
>>>
>>> -       if (queue_index < 0 || skb->ooo_okay ||
>>> -           queue_index >= dev->real_num_tx_queues) {
>>> -               int new_index = get_xps_queue(dev, skb);
>>> -               if (new_index < 0)
>>> -                       new_index = skb_tx_hash(dev, skb);
>>> +       flow_table = rcu_dereference(dev->xps_flow_table);
>>> +       if (!flow_table)
>>> +               return -1;
>>>
>>> -               if (queue_index != new_index && sk &&
>>> -                   sk_fullsock(sk) &&
>>> -                   rcu_access_pointer(sk->sk_dst_cache))
>>> -                       sk_tx_queue_set(sk, new_index);
>>> +       queue_index = get_xps_queue(dev, skb);
>>> +       if (queue_index < 0)
>>> +               return -1;
>>
>> I really think what would make more sense here is to just call
>> skb_tx_hash to acquire the queue_index instead of just exiting.  That
>> way we don't have the flows toggling back and forth between XPS and
>> non-XPS cpus.
>>
> __netdev_pick_tx checks the return value to be < 0 and will call
> skb_tx_hash if it is.

Right, but we might be bouncing between a CPU that is listed in the
XPS maps and one that is not.  We should be performing the same checks
in either case to avoid having the traffic bounce between queues.

>>> -               queue_index = new_index;
>>> +       hash = skb_get_hash(skb);
>>> +       if (!hash)
>>> +               return -1;
>>
>> So a hash of 0 is perfectly valid.  So this line doesn't make any
>> sense.  You could just drop these two lines and work with the hash you
>> generated.
>>
> A hash of zero indicates an invalid hash. See __flow_hash_from_keys
> for instance.
>

So this is going to force all non-hashable traffic not from a socket
onto queue 0 since you are rejecting it here and that will force it
into skb_tx_hash which should give you a result of 0.  Is that what
you intend to do?

Also I would recommend moving this check up to before you call
get_xps_queue.  There isn't much point in getting the XPS queue for
hash 0 if you are just going to throw away the value anyway.
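The reordering suggested here can be sketched in user space as follows. The stub function is a hypothetical stand-in for get_xps_queue(); the point is simply that rejecting an invalid (zero) skb hash first avoids paying for an XPS lookup whose result would be discarded.

```c
#include <assert.h>
#include <stdint.h>

/* Counts how often the "expensive" XPS lookup ran, so the test can
 * verify it is skipped for an invalid hash. */
static int xps_lookups;

/* Hypothetical stand-in for get_xps_queue(): pretend XPS maps the
 * flow to queue 3. */
static int stub_get_xps_queue(void)
{
	xps_lookups++;
	return 3;
}

/* Validate the hash before the lookup, as suggested above. A return
 * of -1 means the caller should fall back to skb_tx_hash(). */
static int pick_flow_queue(uint32_t skb_hash)
{
	if (!skb_hash)
		return -1;	/* invalid hash: skip the XPS lookup entirely */
	return stub_get_xps_queue();
}
```

This keeps the cheap validity check on the fast path and defers the table walk to flows that can actually use its result.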

- Alex


end of thread

Thread overview: 13+ messages
2016-09-01  0:10 [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Tom Herbert
2016-09-01  0:10 ` [PATCH net-next 1/4] net: Set SW hash in skb_set_hash_from_sk Tom Herbert
2016-09-01  0:10 ` [PATCH net-next 2/4] dql: Add counters for number of queuing and completion operations Tom Herbert
2016-09-01  0:10 ` [PATCH net-next 3/4] net: Add xps_dev_flow_table_cnt Tom Herbert
2016-09-01  0:10 ` [PATCH net-next 4/4] xps_flows: XPS for packets that don't have a socket Tom Herbert
2016-09-01 15:36   ` Alexander Duyck
2016-09-01 15:56     ` Tom Herbert
2016-09-01 23:18       ` Alexander Duyck
2016-09-01  0:37 ` [PATCH net-next 0/4] xps_flows: XPS flow steering when there is no socket Eric Dumazet
2016-09-01 16:14   ` Tom Herbert
2016-09-01 19:25 ` Florian Fainelli
2016-09-01 19:32   ` Tom Herbert
2016-09-01 19:46     ` Florian Fainelli
