* [PATCH net-next v3 0/5] virtio-net: support dynamic coalescing moderation
@ 2023-11-14  5:55 Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 1/5] virtio-net: returns whether napi is complete Heng Qi
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Heng Qi @ 2023-11-14  5:55 UTC (permalink / raw)
  To: netdev, virtualization
  Cc: Jason Wang, Michael S . Tsirkin, Eric Dumazet, Paolo Abeni,
	David S . Miller, Jesper Dangaard Brouer, John Fastabend,
	Alexei Starovoitov, Simon Horman, Jakub Kicinski, xuanzhuo

virtio-net already supports per-queue moderation parameter settings.
Building on that, this series uses the Linux dimlib to add dynamic
interrupt coalescing moderation to virtio-net.

Due to some scheduling issues, only rx dim is supported and tested for now.
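
For reference, here is the general dimlib pattern we follow, as a minimal
sketch (assuming the standard linux/dim.h API; the example_* names are
placeholders, the real driver hooks are added in patch 4):

#include <linux/dim.h>
#include <linux/workqueue.h>

/* Feed dimlib a sample once a napi run has fully completed. */
static void example_rx_dim_update(struct dim *dim, u16 calls,
                                  u64 packets, u64 bytes)
{
        struct dim_sample sample = {};

        dim_update_sample(calls, packets, bytes, &sample);
        net_dim(dim, sample);           /* may schedule dim->work */
}

/* dim->work handler: apply the profile dimlib selected, then re-arm. */
static void example_rx_dim_work(struct work_struct *work)
{
        struct dim *dim = container_of(work, struct dim, work);
        struct dim_cq_moder m;

        m = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
        /* ...push m.usec / m.pkts to the device here... */
        dim->state = DIM_START_MEASURE;
}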

Some test results:

I. Sockperf UDP
=================================================
1. Env
rxq_0 with affinity to cpu_0.

2. Cmd
client: taskset -c 0 sockperf tp -p 8989 -i $IP -t 10 -m 16B
server: taskset -c 0 sockperf sr -p 8989

3. Result
dim off: 1143277.00 rxpps, throughput 17.844 MBps, cpu is 100%.
dim on:  1124161.00 rxpps, throughput 17.610 MBps, cpu is 83.5%.
=================================================

II. Redis
=================================================
1. Env
There are 8 rxqs, and rxq_i with affinity to cpu_i.

2. Result
When all cpus are 100%, ops/sec of memtier_benchmark client is
dim off:  978437.23
dim on:  1143638.28
=================================================

III. Nginx
=================================================
1. Env
There are 8 rxqs and rxq_i with affinity to cpu_i.

2. Result
When all cpus are 100%, requests/sec of wrk client is
dim off:  877931.67
dim on:  1019160.31
=================================================

IV. Latency of sockperf udp
=================================================
1. Rx cmd
taskset -c 0 sockperf sr -p 8989

2. Tx cmd
taskset -c 0 sockperf pp -i ${ip} -p 8989 -t 10

Each result below is the average of 5 runs of this command.

3. Result
dim off: 17.7735 usec
dim on:  18.0110 usec
=================================================

Changelog:
v2->v3:
- Patch(4/5): some minor modifications.

v1->v2:
- Patch(2/5): a minor fix.
- Patch(4/5):
   - improve the checks for the dim switch conditions.
   - fix a safety issue with the dim work thread.
- Patch(5/5): drop the tx dim implementation.


Heng Qi (5):
  virtio-net: returns whether napi is complete
  virtio-net: separate rx/tx coalescing moderation cmds
  virtio-net: extract virtqueue coalescing cmd for reuse
  virtio-net: support rx netdim
  virtio-net: return -EOPNOTSUPP for adaptive-tx

 drivers/net/virtio_net.c | 335 ++++++++++++++++++++++++++++++++-------
 1 file changed, 278 insertions(+), 57 deletions(-)

-- 
2.19.1.6.gb485710b


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH net-next v3 1/5] virtio-net: returns whether napi is complete
  2023-11-14  5:55 [PATCH net-next v3 0/5] virtio-net: support dynamic coalescing moderation Heng Qi
@ 2023-11-14  5:55 ` Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 2/5] virtio-net: separate rx/tx coalescing moderation cmds Heng Qi
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Heng Qi @ 2023-11-14  5:55 UTC (permalink / raw)
  To: netdev, virtualization
  Cc: Jason Wang, Michael S . Tsirkin, Eric Dumazet, Paolo Abeni,
	David S . Miller, Jesper Dangaard Brouer, John Fastabend,
	Alexei Starovoitov, Simon Horman, Jakub Kicinski, xuanzhuo

rx netdim needs to count the traffic handled during a complete napi run,
and only update and compare samples to make a decision once that napi
run ends. Let virtqueue_napi_complete() return true if napi is done,
and false otherwise.
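
For context, the intended caller (added in patch 4) only feeds dimlib
once napi has truly completed; roughly:

	if (received < budget) {
		/* true only when napi_complete_done() succeeded and no
		 * further work was pending after re-enabling callbacks
		 */
		if (virtqueue_napi_complete(napi, rq->vq, received) &&
		    rq->dim_enabled)
			virtnet_rx_dim_update(vi, rq);
	}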

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d16f592c2061..0ad2894e6a5e 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -431,7 +431,7 @@ static void virtqueue_napi_schedule(struct napi_struct *napi,
 	}
 }
 
-static void virtqueue_napi_complete(struct napi_struct *napi,
+static bool virtqueue_napi_complete(struct napi_struct *napi,
 				    struct virtqueue *vq, int processed)
 {
 	int opaque;
@@ -440,9 +440,13 @@ static void virtqueue_napi_complete(struct napi_struct *napi,
 	if (napi_complete_done(napi, processed)) {
 		if (unlikely(virtqueue_poll(vq, opaque)))
 			virtqueue_napi_schedule(napi, vq);
+		else
+			return true;
 	} else {
 		virtqueue_disable_cb(vq);
 	}
+
+	return false;
 }
 
 static void skb_xmit_done(struct virtqueue *vq)
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH net-next v3 2/5] virtio-net: separate rx/tx coalescing moderation cmds
  2023-11-14  5:55 [PATCH net-next v3 0/5] virtio-net: support dynamic coalescing moderation Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 1/5] virtio-net: returns whether napi is complete Heng Qi
@ 2023-11-14  5:55 ` Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 3/5] virtio-net: extract virtqueue coalescing cmd for reuse Heng Qi
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Heng Qi @ 2023-11-14  5:55 UTC (permalink / raw)
  To: netdev, virtualization
  Cc: Jason Wang, Michael S . Tsirkin, Eric Dumazet, Paolo Abeni,
	David S . Miller, Jesper Dangaard Brouer, John Fastabend,
	Alexei Starovoitov, Simon Horman, Jakub Kicinski, xuanzhuo

This patch separates the rx and tx global coalescing moderation
commands to support netdim switches in subsequent patches.

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0ad2894e6a5e..0285301caf78 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3266,10 +3266,10 @@ static int virtnet_get_link_ksettings(struct net_device *dev,
 	return 0;
 }
 
-static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
-				       struct ethtool_coalesce *ec)
+static int virtnet_send_tx_notf_coal_cmds(struct virtnet_info *vi,
+					  struct ethtool_coalesce *ec)
 {
-	struct scatterlist sgs_tx, sgs_rx;
+	struct scatterlist sgs_tx;
 	int i;
 
 	vi->ctrl->coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs);
@@ -3289,6 +3289,15 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 		vi->sq[i].intr_coal.max_packets = ec->tx_max_coalesced_frames;
 	}
 
+	return 0;
+}
+
+static int virtnet_send_rx_notf_coal_cmds(struct virtnet_info *vi,
+					  struct ethtool_coalesce *ec)
+{
+	struct scatterlist sgs_rx;
+	int i;
+
 	vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
 	vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
 	sg_init_one(&sgs_rx, &vi->ctrl->coal_rx, sizeof(vi->ctrl->coal_rx));
@@ -3309,6 +3318,22 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
+static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
+				       struct ethtool_coalesce *ec)
+{
+	int err;
+
+	err = virtnet_send_tx_notf_coal_cmds(vi, ec);
+	if (err)
+		return err;
+
+	err = virtnet_send_rx_notf_coal_cmds(vi, ec);
+	if (err)
+		return err;
+
+	return 0;
+}
+
 static int virtnet_send_ctrl_coal_vq_cmd(struct virtnet_info *vi,
 					 u16 vqn, u32 max_usecs, u32 max_packets)
 {
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH net-next v3 3/5] virtio-net: extract virtqueue coalescing cmd for reuse
  2023-11-14  5:55 [PATCH net-next v3 0/5] virtio-net: support dynamic coalescing moderation Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 1/5] virtio-net: returns whether napi is complete Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 2/5] virtio-net: separate rx/tx coalescing moderation cmds Heng Qi
@ 2023-11-14  5:55 ` Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 4/5] virtio-net: support rx netdim Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 5/5] virtio-net: return -EOPNOTSUPP for adaptive-tx Heng Qi
  4 siblings, 0 replies; 8+ messages in thread
From: Heng Qi @ 2023-11-14  5:55 UTC (permalink / raw)
  To: netdev, virtualization
  Cc: Jason Wang, Michael S . Tsirkin, Eric Dumazet, Paolo Abeni,
	David S . Miller, Jesper Dangaard Brouer, John Fastabend,
	Alexei Starovoitov, Simon Horman, Jakub Kicinski, xuanzhuo

Extract the commands that set per-virtqueue coalescing parameters into
helpers, for reuse by ethtool -Q, vq resize and netdim.
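
For example, the netdim worker added in patch 4 reuses the rx helper to
push the parameters dimlib selected (simplified):

	/* update_moder comes from net_dim_get_rx_moderation() */
	err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
					       update_moder.usec,
					       update_moder.pkts);
	if (err)
		pr_debug("%s: Failed to send dim parameters on rxq%d\n",
			 dev->name, qnum);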

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 106 +++++++++++++++++++++++----------------
 1 file changed, 64 insertions(+), 42 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0285301caf78..69fe09e99b3c 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2849,6 +2849,58 @@ static void virtnet_cpu_notif_remove(struct virtnet_info *vi)
 					    &vi->node_dead);
 }
 
+static int virtnet_send_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+					 u16 vqn, u32 max_usecs, u32 max_packets)
+{
+	struct scatterlist sgs;
+
+	vi->ctrl->coal_vq.vqn = cpu_to_le16(vqn);
+	vi->ctrl->coal_vq.coal.max_usecs = cpu_to_le32(max_usecs);
+	vi->ctrl->coal_vq.coal.max_packets = cpu_to_le32(max_packets);
+	sg_init_one(&sgs, &vi->ctrl->coal_vq, sizeof(vi->ctrl->coal_vq));
+
+	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
+				  VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET,
+				  &sgs))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int virtnet_send_rx_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+					    u16 queue, u32 max_usecs,
+					    u32 max_packets)
+{
+	int err;
+
+	err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(queue),
+					    max_usecs, max_packets);
+	if (err)
+		return err;
+
+	vi->rq[queue].intr_coal.max_usecs = max_usecs;
+	vi->rq[queue].intr_coal.max_packets = max_packets;
+
+	return 0;
+}
+
+static int virtnet_send_tx_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+					    u16 queue, u32 max_usecs,
+					    u32 max_packets)
+{
+	int err;
+
+	err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(queue),
+					    max_usecs, max_packets);
+	if (err)
+		return err;
+
+	vi->sq[queue].intr_coal.max_usecs = max_usecs;
+	vi->sq[queue].intr_coal.max_packets = max_packets;
+
+	return 0;
+}
+
 static void virtnet_get_ringparam(struct net_device *dev,
 				  struct ethtool_ringparam *ring,
 				  struct kernel_ethtool_ringparam *kernel_ring,
@@ -2906,14 +2958,11 @@ static int virtnet_set_ringparam(struct net_device *dev,
 			 * through the VIRTIO_NET_CTRL_NOTF_COAL_TX_SET command, or, if the driver
 			 * did not set any TX coalescing parameters, to 0.
 			 */
-			err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(i),
-							    vi->intr_coal_tx.max_usecs,
-							    vi->intr_coal_tx.max_packets);
+			err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, i,
+							       vi->intr_coal_tx.max_usecs,
+							       vi->intr_coal_tx.max_packets);
 			if (err)
 				return err;
-
-			vi->sq[i].intr_coal.max_usecs = vi->intr_coal_tx.max_usecs;
-			vi->sq[i].intr_coal.max_packets = vi->intr_coal_tx.max_packets;
 		}
 
 		if (ring->rx_pending != rx_pending) {
@@ -2922,14 +2971,11 @@ static int virtnet_set_ringparam(struct net_device *dev,
 				return err;
 
 			/* The reason is same as the transmit virtqueue reset */
-			err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(i),
-							    vi->intr_coal_rx.max_usecs,
-							    vi->intr_coal_rx.max_packets);
+			err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, i,
+							       vi->intr_coal_rx.max_usecs,
+							       vi->intr_coal_rx.max_packets);
 			if (err)
 				return err;
-
-			vi->rq[i].intr_coal.max_usecs = vi->intr_coal_rx.max_usecs;
-			vi->rq[i].intr_coal.max_packets = vi->intr_coal_rx.max_packets;
 		}
 	}
 
@@ -3334,48 +3380,24 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
-static int virtnet_send_ctrl_coal_vq_cmd(struct virtnet_info *vi,
-					 u16 vqn, u32 max_usecs, u32 max_packets)
-{
-	struct scatterlist sgs;
-
-	vi->ctrl->coal_vq.vqn = cpu_to_le16(vqn);
-	vi->ctrl->coal_vq.coal.max_usecs = cpu_to_le32(max_usecs);
-	vi->ctrl->coal_vq.coal.max_packets = cpu_to_le32(max_packets);
-	sg_init_one(&sgs, &vi->ctrl->coal_vq, sizeof(vi->ctrl->coal_vq));
-
-	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
-				  VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET,
-				  &sgs))
-		return -EINVAL;
-
-	return 0;
-}
-
 static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
 					  struct ethtool_coalesce *ec,
 					  u16 queue)
 {
 	int err;
 
-	err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(queue),
-					    ec->rx_coalesce_usecs,
-					    ec->rx_max_coalesced_frames);
+	err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, queue,
+					       ec->rx_coalesce_usecs,
+					       ec->rx_max_coalesced_frames);
 	if (err)
 		return err;
 
-	vi->rq[queue].intr_coal.max_usecs = ec->rx_coalesce_usecs;
-	vi->rq[queue].intr_coal.max_packets = ec->rx_max_coalesced_frames;
-
-	err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(queue),
-					    ec->tx_coalesce_usecs,
-					    ec->tx_max_coalesced_frames);
+	err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, queue,
+					       ec->tx_coalesce_usecs,
+					       ec->tx_max_coalesced_frames);
 	if (err)
 		return err;
 
-	vi->sq[queue].intr_coal.max_usecs = ec->tx_coalesce_usecs;
-	vi->sq[queue].intr_coal.max_packets = ec->tx_max_coalesced_frames;
-
 	return 0;
 }
 
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH net-next v3 4/5] virtio-net: support rx netdim
  2023-11-14  5:55 [PATCH net-next v3 0/5] virtio-net: support dynamic coalescing moderation Heng Qi
                   ` (2 preceding siblings ...)
  2023-11-14  5:55 ` [PATCH net-next v3 3/5] virtio-net: extract virtqueue coalescing cmd for reuse Heng Qi
@ 2023-11-14  5:55 ` Heng Qi
  2023-11-14  5:55 ` [PATCH net-next v3 5/5] virtio-net: return -EOPNOTSUPP for adaptive-tx Heng Qi
  4 siblings, 0 replies; 8+ messages in thread
From: Heng Qi @ 2023-11-14  5:55 UTC (permalink / raw)
  To: netdev, virtualization
  Cc: Jason Wang, Michael S . Tsirkin, Eric Dumazet, Paolo Abeni,
	David S . Miller, Jesper Dangaard Brouer, John Fastabend,
	Alexei Starovoitov, Simon Horman, Jakub Kicinski, xuanzhuo

By comparing the traffic statistics collected across completed napi
runs, the virtio-net driver automatically adjusts the interrupt
coalescing parameters of each receive queue.

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
---
v2->v3:
- Some minor modifications.

v1->v2:
- Improved the checks for the dim switch conditions.
- Cancel the work when vq reset.

 drivers/net/virtio_net.c | 191 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 169 insertions(+), 22 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 69fe09e99b3c..bc32d5aae005 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -19,6 +19,7 @@
 #include <linux/average.h>
 #include <linux/filter.h>
 #include <linux/kernel.h>
+#include <linux/dim.h>
 #include <net/route.h>
 #include <net/xdp.h>
 #include <net/net_failover.h>
@@ -172,6 +173,17 @@ struct receive_queue {
 
 	struct virtnet_rq_stats stats;
 
+	/* The number of rx notifications */
+	u16 calls;
+
+	/* Is dynamic interrupt moderation enabled? */
+	bool dim_enabled;
+
+	/* Dynamic Interrupt Moderation */
+	struct dim dim;
+
+	u32 packets_in_napi;
+
 	struct virtnet_interrupt_coalesce intr_coal;
 
 	/* Chain pages by the private ptr. */
@@ -305,6 +317,9 @@ struct virtnet_info {
 	u8 duplex;
 	u32 speed;
 
+	/* Is rx dynamic interrupt moderation enabled? */
+	bool rx_dim_enabled;
+
 	/* Interrupt coalescing settings */
 	struct virtnet_interrupt_coalesce intr_coal_tx;
 	struct virtnet_interrupt_coalesce intr_coal_rx;
@@ -2001,6 +2016,7 @@ static void skb_recv_done(struct virtqueue *rvq)
 	struct virtnet_info *vi = rvq->vdev->priv;
 	struct receive_queue *rq = &vi->rq[vq2rxq(rvq)];
 
+	rq->calls++;
 	virtqueue_napi_schedule(&rq->napi, rvq);
 }
 
@@ -2141,6 +2157,26 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 	}
 }
 
+static void virtnet_rx_dim_work(struct work_struct *work);
+
+static void virtnet_rx_dim_update(struct virtnet_info *vi, struct receive_queue *rq)
+{
+	struct dim_sample cur_sample = {};
+
+	if (!rq->packets_in_napi)
+		return;
+
+	u64_stats_update_begin(&rq->stats.syncp);
+	dim_update_sample(rq->calls,
+			  u64_stats_read(&rq->stats.packets),
+			  u64_stats_read(&rq->stats.bytes),
+			  &cur_sample);
+	u64_stats_update_end(&rq->stats.syncp);
+
+	net_dim(&rq->dim, cur_sample);
+	rq->packets_in_napi = 0;
+}
+
 static int virtnet_poll(struct napi_struct *napi, int budget)
 {
 	struct receive_queue *rq =
@@ -2149,17 +2185,22 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 	struct send_queue *sq;
 	unsigned int received;
 	unsigned int xdp_xmit = 0;
+	bool napi_complete;
 
 	virtnet_poll_cleantx(rq);
 
 	received = virtnet_receive(rq, budget, &xdp_xmit);
+	rq->packets_in_napi += received;
 
 	if (xdp_xmit & VIRTIO_XDP_REDIR)
 		xdp_do_flush();
 
 	/* Out of packets? */
-	if (received < budget)
-		virtqueue_napi_complete(napi, rq->vq, received);
+	if (received < budget) {
+		napi_complete = virtqueue_napi_complete(napi, rq->vq, received);
+		if (napi_complete && rq->dim_enabled)
+			virtnet_rx_dim_update(vi, rq);
+	}
 
 	if (xdp_xmit & VIRTIO_XDP_TX) {
 		sq = virtnet_xdp_get_sq(vi);
@@ -2179,6 +2220,7 @@ static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index)
 	virtnet_napi_tx_disable(&vi->sq[qp_index].napi);
 	napi_disable(&vi->rq[qp_index].napi);
 	xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
+	cancel_work_sync(&vi->rq[qp_index].dim.work);
 }
 
 static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
@@ -2196,6 +2238,9 @@ static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
 	if (err < 0)
 		goto err_xdp_reg_mem_model;
 
+	INIT_WORK(&vi->rq[qp_index].dim.work, virtnet_rx_dim_work);
+	vi->rq[qp_index].dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+
 	virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
 	virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
 
@@ -2393,8 +2438,10 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
 
 	qindex = rq - vi->rq;
 
-	if (running)
+	if (running) {
 		napi_disable(&rq->napi);
+		cancel_work_sync(&rq->dim.work);
+	}
 
 	err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_buf);
 	if (err)
@@ -2403,8 +2450,10 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
 	if (!try_fill_recv(vi, rq, GFP_KERNEL))
 		schedule_delayed_work(&vi->refill, 0);
 
-	if (running)
+	if (running) {
+		INIT_WORK(&rq->dim.work, virtnet_rx_dim_work);
 		virtnet_napi_enable(rq->vq, &rq->napi);
+	}
 	return err;
 }
 
@@ -3341,24 +3390,55 @@ static int virtnet_send_tx_notf_coal_cmds(struct virtnet_info *vi,
 static int virtnet_send_rx_notf_coal_cmds(struct virtnet_info *vi,
 					  struct ethtool_coalesce *ec)
 {
+	bool rx_ctrl_dim_on = !!ec->use_adaptive_rx_coalesce;
+	bool update = false, switch_dim;
 	struct scatterlist sgs_rx;
 	int i;
 
-	vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
-	vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
-	sg_init_one(&sgs_rx, &vi->ctrl->coal_rx, sizeof(vi->ctrl->coal_rx));
-
-	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
-				  VIRTIO_NET_CTRL_NOTF_COAL_RX_SET,
-				  &sgs_rx))
-		return -EINVAL;
+	switch_dim = rx_ctrl_dim_on != vi->rx_dim_enabled;
+	if (switch_dim) {
+		if (rx_ctrl_dim_on) {
+			if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL)) {
+				vi->rx_dim_enabled = true;
+				for (i = 0; i < vi->max_queue_pairs; i++)
+					vi->rq[i].dim_enabled = true;
+			} else {
+				return -EOPNOTSUPP;
+			}
+		} else {
+			vi->rx_dim_enabled = false;
+			for (i = 0; i < vi->max_queue_pairs; i++)
+				vi->rq[i].dim_enabled = false;
+		}
+	} else {
+		if (ec->rx_coalesce_usecs != vi->intr_coal_rx.max_usecs ||
+		    ec->rx_max_coalesced_frames != vi->intr_coal_rx.max_packets)
+			update = true;
 
-	/* Save parameters */
-	vi->intr_coal_rx.max_usecs = ec->rx_coalesce_usecs;
-	vi->intr_coal_rx.max_packets = ec->rx_max_coalesced_frames;
-	for (i = 0; i < vi->max_queue_pairs; i++) {
-		vi->rq[i].intr_coal.max_usecs = ec->rx_coalesce_usecs;
-		vi->rq[i].intr_coal.max_packets = ec->rx_max_coalesced_frames;
+		if (vi->rx_dim_enabled) {
+			if (update)
+				return -EINVAL;
+		} else {
+			/* Since the per-queue coalescing params can be set,
+			 * we need apply the global new params even if they
+			 * are not updated.
+			 */
+			vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
+			vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
+			sg_init_one(&sgs_rx, &vi->ctrl->coal_rx, sizeof(vi->ctrl->coal_rx));
+
+			if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
+						  VIRTIO_NET_CTRL_NOTF_COAL_RX_SET,
+						  &sgs_rx))
+				return -EINVAL;
+
+			vi->intr_coal_rx.max_usecs = ec->rx_coalesce_usecs;
+			vi->intr_coal_rx.max_packets = ec->rx_max_coalesced_frames;
+			for (i = 0; i < vi->max_queue_pairs; i++) {
+				vi->rq[i].intr_coal.max_usecs = ec->rx_coalesce_usecs;
+				vi->rq[i].intr_coal.max_packets = ec->rx_max_coalesced_frames;
+			}
+		}
 	}
 
 	return 0;
@@ -3380,15 +3460,54 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
+static int virtnet_send_rx_notf_coal_vq_cmds(struct virtnet_info *vi,
+					     struct ethtool_coalesce *ec,
+					     u16 queue)
+{
+	bool rx_ctrl_dim_on = !!ec->use_adaptive_rx_coalesce;
+	bool cur_rx_dim = vi->rq[queue].dim_enabled;
+	bool update = false, switch_dim;
+	u32 max_usecs, max_packets;
+	int err;
+
+	switch_dim = rx_ctrl_dim_on != cur_rx_dim;
+	if (switch_dim) {
+		if (rx_ctrl_dim_on)
+			vi->rq[queue].dim_enabled = true;
+		else
+			vi->rq[queue].dim_enabled = false;
+	} else {
+		max_usecs = vi->rq[queue].intr_coal.max_usecs;
+		max_packets = vi->rq[queue].intr_coal.max_packets;
+		if (ec->rx_coalesce_usecs != max_usecs ||
+		    ec->rx_max_coalesced_frames != max_packets)
+			update = true;
+
+		if (cur_rx_dim) {
+			if (update)
+				return -EINVAL;
+		} else {
+			if (!update)
+				return 0;
+
+			err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, queue,
+							       ec->rx_coalesce_usecs,
+							       ec->rx_max_coalesced_frames);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
+
 static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
 					  struct ethtool_coalesce *ec,
 					  u16 queue)
 {
 	int err;
 
-	err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, queue,
-					       ec->rx_coalesce_usecs,
-					       ec->rx_max_coalesced_frames);
+	err = virtnet_send_rx_notf_coal_vq_cmds(vi, ec, queue);
 	if (err)
 		return err;
 
@@ -3401,6 +3520,32 @@ static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
+static void virtnet_rx_dim_work(struct work_struct *work)
+{
+	struct dim *dim = container_of(work, struct dim, work);
+	struct receive_queue *rq = container_of(dim,
+			struct receive_queue, dim);
+	struct virtnet_info *vi = rq->vq->vdev->priv;
+	struct net_device *dev = vi->dev;
+	struct dim_cq_moder update_moder;
+	int qnum = rq - vi->rq, err;
+
+	update_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+	if (update_moder.usec != vi->rq[qnum].intr_coal.max_usecs ||
+	    update_moder.pkts != vi->rq[qnum].intr_coal.max_packets) {
+		rtnl_lock();
+		err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
+						       update_moder.usec,
+						       update_moder.pkts);
+		if (err)
+			pr_debug("%s: Failed to send dim parameters on rxq%d\n",
+				 dev->name, (int)(rq - vi->rq));
+		rtnl_unlock();
+	}
+
+	dim->state = DIM_START_MEASURE;
+}
+
 static int virtnet_coal_params_supported(struct ethtool_coalesce *ec)
 {
 	/* usecs coalescing is supported only if VIRTIO_NET_F_NOTF_COAL
@@ -3482,6 +3627,7 @@ static int virtnet_get_coalesce(struct net_device *dev,
 		ec->tx_coalesce_usecs = vi->intr_coal_tx.max_usecs;
 		ec->tx_max_coalesced_frames = vi->intr_coal_tx.max_packets;
 		ec->rx_max_coalesced_frames = vi->intr_coal_rx.max_packets;
+		ec->use_adaptive_rx_coalesce = vi->rx_dim_enabled;
 	} else {
 		ec->rx_max_coalesced_frames = 1;
 
@@ -3539,6 +3685,7 @@ static int virtnet_get_per_queue_coalesce(struct net_device *dev,
 		ec->tx_coalesce_usecs = vi->sq[queue].intr_coal.max_usecs;
 		ec->tx_max_coalesced_frames = vi->sq[queue].intr_coal.max_packets;
 		ec->rx_max_coalesced_frames = vi->rq[queue].intr_coal.max_packets;
+		ec->use_adaptive_rx_coalesce = vi->rq[queue].dim_enabled;
 	} else {
 		ec->rx_max_coalesced_frames = 1;
 
@@ -3664,7 +3811,7 @@ static int virtnet_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info)
 
 static const struct ethtool_ops virtnet_ethtool_ops = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_MAX_FRAMES |
-		ETHTOOL_COALESCE_USECS,
+		ETHTOOL_COALESCE_USECS | ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
 	.get_drvinfo = virtnet_get_drvinfo,
 	.get_link = ethtool_op_get_link,
 	.get_ringparam = virtnet_get_ringparam,
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH net-next v3 5/5] virtio-net: return -EOPNOTSUPP for adaptive-tx
  2023-11-14  5:55 [PATCH net-next v3 0/5] virtio-net: support dynamic coalescing moderation Heng Qi
                   ` (3 preceding siblings ...)
  2023-11-14  5:55 ` [PATCH net-next v3 4/5] virtio-net: support rx netdim Heng Qi
@ 2023-11-14  5:55 ` Heng Qi
  2023-11-15  4:23   ` Jakub Kicinski
  4 siblings, 1 reply; 8+ messages in thread
From: Heng Qi @ 2023-11-14  5:55 UTC (permalink / raw)
  To: netdev, virtualization
  Cc: Jason Wang, Michael S . Tsirkin, Eric Dumazet, Paolo Abeni,
	David S . Miller, Jesper Dangaard Brouer, John Fastabend,
	Alexei Starovoitov, Simon Horman, Jakub Kicinski, xuanzhuo

We do not currently support tx dim, so respond with -EOPNOTSUPP.

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
---
v1->v2:
- Use -EOPNOTSUPP instead of a specific implementation.

 drivers/net/virtio_net.c | 29 ++++++++++++++++++++++++++---
 1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index bc32d5aae005..b082f2acbb22 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3364,9 +3364,15 @@ static int virtnet_get_link_ksettings(struct net_device *dev,
 static int virtnet_send_tx_notf_coal_cmds(struct virtnet_info *vi,
 					  struct ethtool_coalesce *ec)
 {
+	bool tx_ctrl_dim_on = !!ec->use_adaptive_tx_coalesce;
 	struct scatterlist sgs_tx;
 	int i;
 
+	if (tx_ctrl_dim_on) {
+		pr_debug("Failed to enable adaptive-tx, which is not supported\n");
+		return -EOPNOTSUPP;
+	}
+
 	vi->ctrl->coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs);
 	vi->ctrl->coal_tx.tx_max_packets = cpu_to_le32(ec->tx_max_coalesced_frames);
 	sg_init_one(&sgs_tx, &vi->ctrl->coal_tx, sizeof(vi->ctrl->coal_tx));
@@ -3501,6 +3507,25 @@ static int virtnet_send_rx_notf_coal_vq_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
+static int virtnet_send_tx_notf_coal_vq_cmds(struct virtnet_info *vi,
+					     struct ethtool_coalesce *ec,
+					     u16 queue)
+{
+	bool tx_ctrl_dim_on = !!ec->use_adaptive_tx_coalesce;
+	int err;
+
+	if (tx_ctrl_dim_on) {
+		pr_debug("Enabling adaptive-tx for txq%d is not supported\n", queue);
+		return -EOPNOTSUPP;
+	}
+
+	err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, queue,
+					       ec->tx_coalesce_usecs,
+					       ec->tx_max_coalesced_frames);
+
+	return err;
+}
+
 static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
 					  struct ethtool_coalesce *ec,
 					  u16 queue)
@@ -3511,9 +3536,7 @@ static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
 	if (err)
 		return err;
 
-	err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, queue,
-					       ec->tx_coalesce_usecs,
-					       ec->tx_max_coalesced_frames);
+	err = virtnet_send_tx_notf_coal_vq_cmds(vi, ec, queue);
 	if (err)
 		return err;
 
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH net-next v3 5/5] virtio-net: return -EOPNOTSUPP for adaptive-tx
  2023-11-14  5:55 ` [PATCH net-next v3 5/5] virtio-net: return -EOPNOTSUPP for adaptive-tx Heng Qi
@ 2023-11-15  4:23   ` Jakub Kicinski
  2023-11-15  4:52     ` Heng Qi
  0 siblings, 1 reply; 8+ messages in thread
From: Jakub Kicinski @ 2023-11-15  4:23 UTC (permalink / raw)
  To: Heng Qi
  Cc: netdev, virtualization, Jason Wang, Michael S . Tsirkin,
	Eric Dumazet, Paolo Abeni, David S . Miller,
	Jesper Dangaard Brouer, John Fastabend, Alexei Starovoitov,
	Simon Horman, xuanzhuo

On Tue, 14 Nov 2023 13:55:47 +0800 Heng Qi wrote:
> We do not currently support tx dim, so respond with -EOPNOTSUPP.

Hm, why do you need this? You don't set ADAPTIVE_TX in
.supported_coalesce_params, so core should prevent attempts
to enable ADAPTIVE_TX.
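
For reference, the core-side check works roughly like this (a paraphrased
sketch, not the exact kernel source):

	/* Every non-zero field in the user's request must have its bit
	 * set in ops->supported_coalesce_params, otherwise the ethtool
	 * core rejects the request before the driver callback runs.
	 */
	u32 wanted = 0;

	if (ec->use_adaptive_rx_coalesce)
		wanted |= ETHTOOL_COALESCE_USE_ADAPTIVE_RX;
	if (ec->use_adaptive_tx_coalesce)
		wanted |= ETHTOOL_COALESCE_USE_ADAPTIVE_TX;
	/* ...and so on for every other coalesce field... */

	if (wanted & ~ops->supported_coalesce_params)
		return -EOPNOTSUPP;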

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH net-next v3 5/5] virtio-net: return -EOPNOTSUPP for adaptive-tx
  2023-11-15  4:23   ` Jakub Kicinski
@ 2023-11-15  4:52     ` Heng Qi
  0 siblings, 0 replies; 8+ messages in thread
From: Heng Qi @ 2023-11-15  4:52 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, virtualization, Jason Wang, Michael S . Tsirkin,
	Eric Dumazet, Paolo Abeni, David S . Miller,
	Jesper Dangaard Brouer, John Fastabend, Alexei Starovoitov,
	Simon Horman, xuanzhuo



On 2023/11/15 12:23 PM, Jakub Kicinski wrote:
> On Tue, 14 Nov 2023 13:55:47 +0800 Heng Qi wrote:
>> We do not currently support tx dim, so respond with -EOPNOTSUPP.
> Hm, why do you need this? You don't set ADAPTIVE_TX in
> .supported_coalesce_params, so core should prevent attempts
> to enable ADAPTIVE_TX.

Indeed. Will drop this.

Thanks!



^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread

Thread overview: 8+ messages
2023-11-14  5:55 [PATCH net-next v3 0/5] virtio-net: support dynamic coalescing moderation Heng Qi
2023-11-14  5:55 ` [PATCH net-next v3 1/5] virtio-net: returns whether napi is complete Heng Qi
2023-11-14  5:55 ` [PATCH net-next v3 2/5] virtio-net: separate rx/tx coalescing moderation cmds Heng Qi
2023-11-14  5:55 ` [PATCH net-next v3 3/5] virtio-net: extract virtqueue coalescing cmd for reuse Heng Qi
2023-11-14  5:55 ` [PATCH net-next v3 4/5] virtio-net: support rx netdim Heng Qi
2023-11-14  5:55 ` [PATCH net-next v3 5/5] virtio-net: return -EOPNOTSUPP for adaptive-tx Heng Qi
2023-11-15  4:23   ` Jakub Kicinski
2023-11-15  4:52     ` Heng Qi
