* [PATCH net-next V4 0/3] virtio_net: add per queue interrupt coalescing support
@ 2023-07-25 13:07 Gavin Li
2023-07-25 13:07 ` [PATCH net-next V4 1/3] virtio_net: extract interrupt coalescing settings to a structure Gavin Li
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Gavin Li @ 2023-07-25 13:07 UTC (permalink / raw)
To: mst, jasowang, xuanzhuo, davem, edumazet, kuba, pabeni, ast,
daniel, hawk, john.fastabend, jiri, dtatulea
Cc: gavi, virtualization, netdev, linux-kernel, bpf
Currently, coalescing parameters apply globally to all transmit and receive
virtqueues. This patch series adds support for setting or getting the
parameters of a specified virtqueue.
When traffic is unbalanced across virtqueues, for example when one virtqueue
is busy and another is idle, it is very useful to control coalescing
parameters at virtqueue granularity.
Example commands:
$ ethtool -Q eth5 queue_mask 0x1 --coalesce tx-packets 10
Would set max_packets=10 for VQ 1 (the TX virtqueue of queue 0).
$ ethtool -Q eth5 queue_mask 0x1 --coalesce rx-packets 10
Would set max_packets=10 for VQ 0 (the RX virtqueue of queue 0).
$ ethtool -Q eth5 queue_mask 0x1 --show-coalesce
Queue: 0
Adaptive RX: off TX: off
stats-block-usecs: 0
sample-interval: 0
pkt-rate-low: 0
pkt-rate-high: 0
rx-usecs: 222
rx-frames: 0
rx-usecs-irq: 0
rx-frames-irq: 256
tx-usecs: 222
tx-frames: 0
tx-usecs-irq: 0
tx-frames-irq: 256
rx-usecs-low: 0
rx-frame-low: 0
tx-usecs-low: 0
tx-frame-low: 0
rx-usecs-high: 0
rx-frame-high: 0
tx-usecs-high: 0
tx-frame-high: 0
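For reference, the queue index that ethtool takes maps onto virtio virtqueue
numbers by the driver's long-standing convention. A minimal sketch of that
mapping, mirroring the rxq2vq()/txq2vq() helpers in drivers/net/virtio_net.c:

/* Queue pair N uses RX VQ 2*N and TX VQ 2*N + 1, so queue_mask 0x1
 * (queue 0) addresses VQ 0 for rx-packets and VQ 1 for tx-packets.
 */
static int rxq2vq(int rxq)
{
	return rxq * 2;
}

static int txq2vq(int txq)
{
	return txq * 2 + 1;
}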
Gavin Li (3):
  virtio_net: extract interrupt coalescing settings to a structure
  virtio_net: support per queue interrupt coalesce command
  virtio_net: enable per queue interrupt coalesce feature
---
changelog:
v1->v2
- Addressed the comment from Xuan Zhuo
- Allocate memory from heap instead of using stack memory for control vq
  messages
v2->v3
- Addressed the comment from Heng Qi
- Use control_buf for control vq messages
v3->v4
- Addressed the comment from Michael S. Tsirkin
- Refactor set_coalesce of both per queue and global config that were
  littered with if/else branches
---
 drivers/net/virtio_net.c        | 187 ++++++++++++++++++++++++++++----
 include/uapi/linux/virtio_net.h |  14 +++
 2 files changed, 177 insertions(+), 24 deletions(-)
--
2.39.1
^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH net-next V4 1/3] virtio_net: extract interrupt coalescing settings to a structure
  2023-07-25 13:07 [PATCH net-next V4 0/3] virtio_net: add per queue interrupt coalescing support Gavin Li
@ 2023-07-25 13:07 ` Gavin Li
  2023-07-25 13:07 ` [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command Gavin Li
  2023-07-25 13:07 ` [PATCH net-next V4 3/3] virtio_net: enable per queue interrupt coalesce feature Gavin Li
  2 siblings, 0 replies; 10+ messages in thread
From: Gavin Li @ 2023-07-25 13:07 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, davem, edumazet, kuba, pabeni, ast,
	daniel, hawk, john.fastabend, jiri, dtatulea
  Cc: gavi, virtualization, netdev, linux-kernel, bpf, Heng Qi

Extract interrupt coalescing settings to a structure so that it could be
reused in other data structures.

Signed-off-by: Gavin Li <gavinl@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 35 +++++++++++++++++++----------------
 1 file changed, 19 insertions(+), 16 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0db14f6b87d3..dd5fec073a27 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -126,6 +126,11 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
 #define VIRTNET_SQ_STATS_LEN	ARRAY_SIZE(virtnet_sq_stats_desc)
 #define VIRTNET_RQ_STATS_LEN	ARRAY_SIZE(virtnet_rq_stats_desc)

+struct virtnet_interrupt_coalesce {
+	u32 max_packets;
+	u32 max_usecs;
+};
+
 /* Internal representation of a send virtqueue */
 struct send_queue {
 	/* Virtqueue associated with this send _queue */
@@ -281,10 +286,8 @@ struct virtnet_info {
 	u32 speed;

 	/* Interrupt coalescing settings */
-	u32 tx_usecs;
-	u32 rx_usecs;
-	u32 tx_max_packets;
-	u32 rx_max_packets;
+	struct virtnet_interrupt_coalesce intr_coal_tx;
+	struct virtnet_interrupt_coalesce intr_coal_rx;

 	unsigned long guest_offloads;
 	unsigned long guest_offloads_capable;
@@ -3056,8 +3059,8 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 		return -EINVAL;

 	/* Save parameters */
-	vi->tx_usecs = ec->tx_coalesce_usecs;
-	vi->tx_max_packets = ec->tx_max_coalesced_frames;
+	vi->intr_coal_tx.max_usecs = ec->tx_coalesce_usecs;
+	vi->intr_coal_tx.max_packets = ec->tx_max_coalesced_frames;

 	vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
 	vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
@@ -3069,8 +3072,8 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 		return -EINVAL;

 	/* Save parameters */
-	vi->rx_usecs = ec->rx_coalesce_usecs;
-	vi->rx_max_packets = ec->rx_max_coalesced_frames;
+	vi->intr_coal_rx.max_usecs = ec->rx_coalesce_usecs;
+	vi->intr_coal_rx.max_packets = ec->rx_max_coalesced_frames;

 	return 0;
 }
@@ -3132,10 +3135,10 @@ static int virtnet_get_coalesce(struct net_device *dev,
 	struct virtnet_info *vi = netdev_priv(dev);

 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) {
-		ec->rx_coalesce_usecs = vi->rx_usecs;
-		ec->tx_coalesce_usecs = vi->tx_usecs;
-		ec->tx_max_coalesced_frames = vi->tx_max_packets;
-		ec->rx_max_coalesced_frames = vi->rx_max_packets;
+		ec->rx_coalesce_usecs = vi->intr_coal_rx.max_usecs;
+		ec->tx_coalesce_usecs = vi->intr_coal_tx.max_usecs;
+		ec->tx_max_coalesced_frames = vi->intr_coal_tx.max_packets;
+		ec->rx_max_coalesced_frames = vi->intr_coal_rx.max_packets;
 	} else {
 		ec->rx_max_coalesced_frames = 1;

@@ -4119,10 +4122,10 @@ static int virtnet_probe(struct virtio_device *vdev)
 	}

 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL)) {
-		vi->rx_usecs = 0;
-		vi->tx_usecs = 0;
-		vi->tx_max_packets = 0;
-		vi->rx_max_packets = 0;
+		vi->intr_coal_rx.max_usecs = 0;
+		vi->intr_coal_tx.max_usecs = 0;
+		vi->intr_coal_tx.max_packets = 0;
+		vi->intr_coal_rx.max_packets = 0;
 	}

 	if (virtio_has_feature(vdev, VIRTIO_NET_F_HASH_REPORT))
--
2.39.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread
* [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
  2023-07-25 13:07 [PATCH net-next V4 0/3] virtio_net: add per queue interrupt coalescing support Gavin Li
  2023-07-25 13:07 ` [PATCH net-next V4 1/3] virtio_net: extract interrupt coalescing settings to a structure Gavin Li
@ 2023-07-25 13:07 ` Gavin Li
  2023-07-27 13:28   ` Paolo Abeni
  2023-07-31  6:24   ` Jason Wang
  2023-07-25 13:07 ` [PATCH net-next V4 3/3] virtio_net: enable per queue interrupt coalesce feature Gavin Li
  2 siblings, 2 replies; 10+ messages in thread
From: Gavin Li @ 2023-07-25 13:07 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, davem, edumazet, kuba, pabeni, ast,
	daniel, hawk, john.fastabend, jiri, dtatulea
  Cc: gavi, virtualization, netdev, linux-kernel, bpf, Heng Qi

Add interrupt_coalesce config in send_queue and receive_queue to cache user
config.

Send per virtqueue interrupt moderation config to underlying device in
order to have more efficient interrupt moderation and cpu utilization of
guest VM.

Additionally, address all the VQs when updating the global configuration,
as now the individual VQs configuration can diverge from the global
configuration.

Signed-off-by: Gavin Li <gavinl@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>
---
 drivers/net/virtio_net.c        | 149 ++++++++++++++++++++++++++++++--
 include/uapi/linux/virtio_net.h |  14 +++
 2 files changed, 155 insertions(+), 8 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index dd5fec073a27..c185930d7c9d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -144,6 +144,8 @@ struct send_queue {

 	struct virtnet_sq_stats stats;

+	struct virtnet_interrupt_coalesce intr_coal;
+
 	struct napi_struct napi;

 	/* Record whether sq is in reset state. */
@@ -161,6 +163,8 @@ struct receive_queue {

 	struct virtnet_rq_stats stats;

+	struct virtnet_interrupt_coalesce intr_coal;
+
 	/* Chain pages by the private ptr. */
 	struct page *pages;

@@ -212,6 +216,7 @@ struct control_buf {
 	struct virtio_net_ctrl_rss rss;
 	struct virtio_net_ctrl_coal_tx coal_tx;
 	struct virtio_net_ctrl_coal_rx coal_rx;
+	struct virtio_net_ctrl_coal_vq coal_vq;
 };

 struct virtnet_info {
@@ -3078,6 +3083,55 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 	return 0;
 }

+static int virtnet_send_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+					 u16 vqn, u32 max_usecs, u32 max_packets)
+{
+	struct scatterlist sgs;
+
+	vi->ctrl->coal_vq.vqn = cpu_to_le16(vqn);
+	vi->ctrl->coal_vq.coal.max_usecs = cpu_to_le32(max_usecs);
+	vi->ctrl->coal_vq.coal.max_packets = cpu_to_le32(max_packets);
+	sg_init_one(&sgs, &vi->ctrl->coal_vq, sizeof(vi->ctrl->coal_vq));
+
+	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
+				  VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET,
+				  &sgs))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
+					  struct ethtool_coalesce *ec,
+					  u16 queue)
+{
+	int err;
+
+	if (ec->rx_coalesce_usecs || ec->rx_max_coalesced_frames) {
+		err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(queue),
+						    ec->rx_coalesce_usecs,
+						    ec->rx_max_coalesced_frames);
+		if (err)
+			return err;
+		/* Save parameters */
+		vi->rq[queue].intr_coal.max_usecs = ec->rx_coalesce_usecs;
+		vi->rq[queue].intr_coal.max_packets = ec->rx_max_coalesced_frames;
+	}
+
+	if (ec->tx_coalesce_usecs || ec->tx_max_coalesced_frames) {
+		err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(queue),
+						    ec->tx_coalesce_usecs,
+						    ec->tx_max_coalesced_frames);
+		if (err)
+			return err;
+		/* Save parameters */
+		vi->sq[queue].intr_coal.max_usecs = ec->tx_coalesce_usecs;
+		vi->sq[queue].intr_coal.max_packets = ec->tx_max_coalesced_frames;
+	}
+
+	return 0;
+}
+
 static int virtnet_coal_params_supported(struct ethtool_coalesce *ec)
 {
 	/* usecs coalescing is supported only if VIRTIO_NET_F_NOTF_COAL
@@ -3093,22 +3147,42 @@ static int virtnet_coal_params_supported(struct ethtool_coalesce *ec)
 	return 0;
 }

+static int virtnet_should_update_vq_weight(int dev_flags, int weight,
+					   int vq_weight, bool *should_update)
+{
+	if (weight ^ vq_weight) {
+		if (dev_flags & IFF_UP)
+			return -EBUSY;
+		*should_update = true;
+	}
+
+	return 0;
+}
+
 static int virtnet_set_coalesce(struct net_device *dev,
 				struct ethtool_coalesce *ec,
 				struct kernel_ethtool_coalesce *kernel_coal,
 				struct netlink_ext_ack *extack)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
-	int ret, i, napi_weight;
+	int ret, queue_number, napi_weight;
 	bool update_napi = false;

 	/* Can't change NAPI weight if the link is up */
 	napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
-	if (napi_weight ^ vi->sq[0].napi.weight) {
-		if (dev->flags & IFF_UP)
-			return -EBUSY;
-		else
-			update_napi = true;
+	for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
+		ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
+						      vi->sq[queue_number].napi.weight,
+						      &update_napi);
+		if (ret)
+			return ret;
+
+		if (update_napi) {
+			/* All queues that belong to [queue_number, queue_count] will be
+			 * updated for the sake of simplicity, which might not be necessary
+			 */
+			break;
+		}
 	}

 	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_NOTF_COAL))
@@ -3120,8 +3194,8 @@ static int virtnet_set_coalesce(struct net_device *dev,
 		return ret;

 	if (update_napi) {
-		for (i = 0; i < vi->max_queue_pairs; i++)
-			vi->sq[i].napi.weight = napi_weight;
+		for (; queue_number < vi->max_queue_pairs; queue_number++)
+			vi->sq[queue_number].napi.weight = napi_weight;
 	}

 	return ret;
@@ -3149,6 +3223,63 @@ static int virtnet_get_coalesce(struct net_device *dev,
 	return 0;
 }

+static int virtnet_set_per_queue_coalesce(struct net_device *dev,
+					  u32 queue,
+					  struct ethtool_coalesce *ec)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	int ret, napi_weight;
+	bool update_napi = false;
+
+	if (queue >= vi->max_queue_pairs)
+		return -EINVAL;
+
+	/* Can't change NAPI weight if the link is up */
+	napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
+	ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
+					      vi->sq[queue].napi.weight,
+					      &update_napi);
+	if (ret)
+		return ret;
+
+	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL))
+		ret = virtnet_send_notf_coal_vq_cmds(vi, ec, queue);
+	else
+		ret = virtnet_coal_params_supported(ec);
+
+	if (ret)
+		return ret;
+
+	if (update_napi)
+		vi->sq[queue].napi.weight = napi_weight;
+
+	return 0;
+}
+
+static int virtnet_get_per_queue_coalesce(struct net_device *dev,
+					  u32 queue,
+					  struct ethtool_coalesce *ec)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+
+	if (queue >= vi->max_queue_pairs)
+		return -EINVAL;
+
+	if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL)) {
+		ec->rx_coalesce_usecs = vi->rq[queue].intr_coal.max_usecs;
+		ec->tx_coalesce_usecs = vi->sq[queue].intr_coal.max_usecs;
+		ec->tx_max_coalesced_frames = vi->sq[queue].intr_coal.max_packets;
+		ec->rx_max_coalesced_frames = vi->rq[queue].intr_coal.max_packets;
+	} else {
+		ec->rx_max_coalesced_frames = 1;
+
+		if (vi->sq[0].napi.weight)
+			ec->tx_max_coalesced_frames = 1;
+	}
+
+	return 0;
+}
+
 static void virtnet_init_settings(struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
@@ -3279,6 +3410,8 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
 	.set_link_ksettings = virtnet_set_link_ksettings,
 	.set_coalesce = virtnet_set_coalesce,
 	.get_coalesce = virtnet_get_coalesce,
+	.set_per_queue_coalesce = virtnet_set_per_queue_coalesce,
+	.get_per_queue_coalesce = virtnet_get_per_queue_coalesce,
 	.get_rxfh_key_size = virtnet_get_rxfh_key_size,
 	.get_rxfh_indir_size = virtnet_get_rxfh_indir_size,
 	.get_rxfh = virtnet_get_rxfh,
diff --git a/include/uapi/linux/virtio_net.h b/include/uapi/linux/virtio_net.h
index 12c1c9699935..cc65ef0f3c3e 100644
--- a/include/uapi/linux/virtio_net.h
+++ b/include/uapi/linux/virtio_net.h
@@ -56,6 +56,7 @@
 #define VIRTIO_NET_F_MQ	22	/* Device supports Receive Flow
 				 * Steering */
 #define VIRTIO_NET_F_CTRL_MAC_ADDR 23	/* Set MAC address */
+#define VIRTIO_NET_F_VQ_NOTF_COAL 52	/* Device supports virtqueue notification coalescing */
 #define VIRTIO_NET_F_NOTF_COAL	53	/* Device supports notifications coalescing */
 #define VIRTIO_NET_F_GUEST_USO4	54	/* Guest can handle USOv4 in. */
 #define VIRTIO_NET_F_GUEST_USO6	55	/* Guest can handle USOv6 in. */
@@ -391,5 +392,18 @@ struct virtio_net_ctrl_coal_rx {
 };

 #define VIRTIO_NET_CTRL_NOTF_COAL_RX_SET	1
+#define VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET	2
+#define VIRTIO_NET_CTRL_NOTF_COAL_VQ_GET	3
+
+struct virtio_net_ctrl_coal {
+	__le32 max_packets;
+	__le32 max_usecs;
+};
+
+struct virtio_net_ctrl_coal_vq {
+	__le16 vqn;
+	__le16 reserved;
+	struct virtio_net_ctrl_coal coal;
+};

 #endif /* _UAPI_LINUX_VIRTIO_NET_H */
--
2.39.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread
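For clarity, the payload that the driver places on the control virtqueue for
VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET can be visualized with this sketch, derived
from the UAPI structs in the patch above; fill_coal_vq_cmd() is a hypothetical
helper, not a function from the patch:

/* Sketch (kernel context): building the per-VQ coalescing command. All
 * multi-byte fields are little-endian on the wire; vqn selects the target
 * virtqueue (2*queue for RX, 2*queue + 1 for TX).
 */
static void fill_coal_vq_cmd(struct virtio_net_ctrl_coal_vq *cmd,
			     u16 vqn, u32 max_usecs, u32 max_packets)
{
	cmd->vqn = cpu_to_le16(vqn);		/* which virtqueue to moderate */
	cmd->reserved = 0;			/* padding, keep zeroed */
	cmd->coal.max_usecs = cpu_to_le32(max_usecs);	  /* IRQ delay bound */
	cmd->coal.max_packets = cpu_to_le32(max_packets); /* IRQ packet bound */
}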
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
  2023-07-25 13:07 ` [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command Gavin Li
@ 2023-07-27 13:28   ` Paolo Abeni
  2023-07-28  1:42     ` Jason Wang
  2023-07-28  5:46     ` Michael S. Tsirkin
  1 sibling, 2 replies; 10+ messages in thread
From: Paolo Abeni @ 2023-07-27 13:28 UTC (permalink / raw)
  To: Gavin Li, mst, jasowang, xuanzhuo, davem, edumazet, kuba, ast,
	daniel, hawk, john.fastabend, jiri, dtatulea
  Cc: gavi, virtualization, netdev, linux-kernel, bpf, Heng Qi

On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
> Add interrupt_coalesce config in send_queue and receive_queue to cache user
> config.
>
> Send per virtqueue interrupt moderation config to underlying device in
> order to have more efficient interrupt moderation and cpu utilization of
> guest VM.
>
> Additionally, address all the VQs when updating the global configuration,
> as now the individual VQs configuration can diverge from the global
> configuration.
>
> Signed-off-by: Gavin Li <gavinl@nvidia.com>
> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>

FTR, this patch is significantly different from the version previously
acked/reviewed, I'm unsure if all the reviewers are ok with the new
one.

[...]

>  static int virtnet_set_coalesce(struct net_device *dev,
>  				struct ethtool_coalesce *ec,
>  				struct kernel_ethtool_coalesce *kernel_coal,
>  				struct netlink_ext_ack *extack)
>  {
>  	struct virtnet_info *vi = netdev_priv(dev);
> -	int ret, i, napi_weight;
> +	int ret, queue_number, napi_weight;
>  	bool update_napi = false;
>
>  	/* Can't change NAPI weight if the link is up */
>  	napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> -	if (napi_weight ^ vi->sq[0].napi.weight) {
> -		if (dev->flags & IFF_UP)
> -			return -EBUSY;
> -		else
> -			update_napi = true;
> +	for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> +		ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
> +						      vi->sq[queue_number].napi.weight,
> +						      &update_napi);
> +		if (ret)
> +			return ret;
> +
> +		if (update_napi) {
> +			/* All queues that belong to [queue_number, queue_count] will be
> +			 * updated for the sake of simplicity, which might not be necessary

It looks like the comment above still refers to the old code. Should
be:
[queue_number, vi->max_queue_pairs]

Otherwise LGTM, thanks!

Paolo

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
  2023-07-27 13:28   ` Paolo Abeni
@ 2023-07-28  1:42     ` Jason Wang
  2023-07-28  5:46     ` Michael S. Tsirkin
  1 sibling, 0 replies; 10+ messages in thread
From: Jason Wang @ 2023-07-28  1:42 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: Gavin Li, mst, xuanzhuo, davem, edumazet, kuba, ast, daniel,
	hawk, john.fastabend, jiri, dtatulea, gavi, virtualization,
	netdev, linux-kernel, bpf, Heng Qi

On Thu, Jul 27, 2023 at 9:28 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
> > Add interrupt_coalesce config in send_queue and receive_queue to cache user
> > config.
> >
> > Send per virtqueue interrupt moderation config to underlying device in
> > order to have more efficient interrupt moderation and cpu utilization of
> > guest VM.
> >
> > Additionally, address all the VQs when updating the global configuration,
> > as now the individual VQs configuration can diverge from the global
> > configuration.
> >
> > Signed-off-by: Gavin Li <gavinl@nvidia.com>
> > Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> > Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> > Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> FTR, this patch is significantly different from the version previously
> acked/reviewed, I'm unsure if all the reviewers are ok with the new
> one.

Good point, and I plan to review this no later than next Monday and
offer my ack if necessary. Please hold this series now.

Thanks

>
> [...]
>
> >  static int virtnet_set_coalesce(struct net_device *dev,
> >  				struct ethtool_coalesce *ec,
> >  				struct kernel_ethtool_coalesce *kernel_coal,
> >  				struct netlink_ext_ack *extack)
> >  {
> >  	struct virtnet_info *vi = netdev_priv(dev);
> > -	int ret, i, napi_weight;
> > +	int ret, queue_number, napi_weight;
> >  	bool update_napi = false;
> >
> >  	/* Can't change NAPI weight if the link is up */
> >  	napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> > -	if (napi_weight ^ vi->sq[0].napi.weight) {
> > -		if (dev->flags & IFF_UP)
> > -			return -EBUSY;
> > -		else
> > -			update_napi = true;
> > +	for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> > +		ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
> > +						      vi->sq[queue_number].napi.weight,
> > +						      &update_napi);
> > +		if (ret)
> > +			return ret;
> > +
> > +		if (update_napi) {
> > +			/* All queues that belong to [queue_number, queue_count] will be
> > +			 * updated for the sake of simplicity, which might not be necessary
>
> It looks like the comment above still refers to the old code. Should
> be:
> [queue_number, vi->max_queue_pairs]
>
> Otherwise LGTM, thanks!
>
> Paolo

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
  2023-07-27 13:28   ` Paolo Abeni
  2023-07-28  1:42     ` Jason Wang
@ 2023-07-28  5:46     ` Michael S. Tsirkin
  2023-07-31  6:25       ` Jason Wang
  1 sibling, 1 reply; 10+ messages in thread
From: Michael S. Tsirkin @ 2023-07-28  5:46 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: Gavin Li, jasowang, xuanzhuo, davem, edumazet, kuba, ast,
	daniel, hawk, john.fastabend, jiri, dtatulea, gavi,
	virtualization, netdev, linux-kernel, bpf, Heng Qi

On Thu, Jul 27, 2023 at 03:28:32PM +0200, Paolo Abeni wrote:
> On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
> > Add interrupt_coalesce config in send_queue and receive_queue to cache user
> > config.
> >
> > Send per virtqueue interrupt moderation config to underlying device in
> > order to have more efficient interrupt moderation and cpu utilization of
> > guest VM.
> >
> > Additionally, address all the VQs when updating the global configuration,
> > as now the individual VQs configuration can diverge from the global
> > configuration.
> >
> > Signed-off-by: Gavin Li <gavinl@nvidia.com>
> > Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> > Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> > Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> FTR, this patch is significantly different from the version previously
> acked/reviewed, I'm unsure if all the reviewers are ok with the new
> one.
>
> [...]

still ok by me

Acked-by: Michael S. Tsirkin <mst@redhat.com>

let's wait for Jason too.

> >  static int virtnet_set_coalesce(struct net_device *dev,
> >  				struct ethtool_coalesce *ec,
> >  				struct kernel_ethtool_coalesce *kernel_coal,
> >  				struct netlink_ext_ack *extack)
> >  {
> >  	struct virtnet_info *vi = netdev_priv(dev);
> > -	int ret, i, napi_weight;
> > +	int ret, queue_number, napi_weight;
> >  	bool update_napi = false;
> >
> >  	/* Can't change NAPI weight if the link is up */
> >  	napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> > -	if (napi_weight ^ vi->sq[0].napi.weight) {
> > -		if (dev->flags & IFF_UP)
> > -			return -EBUSY;
> > -		else
> > -			update_napi = true;
> > +	for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> > +		ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
> > +						      vi->sq[queue_number].napi.weight,
> > +						      &update_napi);
> > +		if (ret)
> > +			return ret;
> > +
> > +		if (update_napi) {
> > +			/* All queues that belong to [queue_number, queue_count] will be
> > +			 * updated for the sake of simplicity, which might not be necessary
>
> It looks like the comment above still refers to the old code. Should
> be:
> [queue_number, vi->max_queue_pairs]
>
> Otherwise LGTM, thanks!
>
> Paolo

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
  2023-07-28  5:46     ` Michael S. Tsirkin
@ 2023-07-31  6:25       ` Jason Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2023-07-31  6:25 UTC (permalink / raw)
  To: Michael S. Tsirkin, Paolo Abeni
  Cc: Gavin Li, xuanzhuo, davem, edumazet, kuba, ast, daniel, hawk,
	john.fastabend, jiri, dtatulea, gavi, virtualization, netdev,
	linux-kernel, bpf, Heng Qi

On 2023/7/28 13:46, Michael S. Tsirkin wrote:
> On Thu, Jul 27, 2023 at 03:28:32PM +0200, Paolo Abeni wrote:
>> On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
>>> Add interrupt_coalesce config in send_queue and receive_queue to cache user
>>> config.
>>>
>>> Send per virtqueue interrupt moderation config to underlying device in
>>> order to have more efficient interrupt moderation and cpu utilization of
>>> guest VM.
>>>
>>> Additionally, address all the VQs when updating the global configuration,
>>> as now the individual VQs configuration can diverge from the global
>>> configuration.
>>>
>>> Signed-off-by: Gavin Li <gavinl@nvidia.com>
>>> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
>>> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>> FTR, this patch is significantly different from the version previously
>> acked/reviewed, I'm unsure if all the reviewers are ok with the new
>> one.
>>
>> [...]
> still ok by me
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> let's wait for Jason too.

I'm fine with this series (I've acked each patch).

Thanks

>
>>>  static int virtnet_set_coalesce(struct net_device *dev,
>>>  				struct ethtool_coalesce *ec,
>>>  				struct kernel_ethtool_coalesce *kernel_coal,
>>>  				struct netlink_ext_ack *extack)
>>>  {
>>>  	struct virtnet_info *vi = netdev_priv(dev);
>>> -	int ret, i, napi_weight;
>>> +	int ret, queue_number, napi_weight;
>>>  	bool update_napi = false;
>>>
>>>  	/* Can't change NAPI weight if the link is up */
>>>  	napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
>>> -	if (napi_weight ^ vi->sq[0].napi.weight) {
>>> -		if (dev->flags & IFF_UP)
>>> -			return -EBUSY;
>>> -		else
>>> -			update_napi = true;
>>> +	for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
>>> +		ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
>>> +						      vi->sq[queue_number].napi.weight,
>>> +						      &update_napi);
>>> +		if (ret)
>>> +			return ret;
>>> +
>>> +		if (update_napi) {
>>> +			/* All queues that belong to [queue_number, queue_count] will be
>>> +			 * updated for the sake of simplicity, which might not be necessary
>> It looks like the comment above still refers to the old code. Should
>> be:
>> [queue_number, vi->max_queue_pairs]
>>
>> Otherwise LGTM, thanks!
>>
>> Paolo

^ permalink raw reply	[flat|nested] 10+ messages in thread
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
  2023-07-25 13:07 ` [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command Gavin Li
  2023-07-27 13:28   ` Paolo Abeni
@ 2023-07-31  6:24   ` Jason Wang
  1 sibling, 0 replies; 10+ messages in thread
From: Jason Wang @ 2023-07-31  6:24 UTC (permalink / raw)
  To: Gavin Li, mst, xuanzhuo, davem, edumazet, kuba, pabeni, ast,
	daniel, hawk, john.fastabend, jiri, dtatulea
  Cc: gavi, virtualization, netdev, linux-kernel, bpf, Heng Qi

On 2023/7/25 21:07, Gavin Li wrote:
> Add interrupt_coalesce config in send_queue and receive_queue to cache user
> config.
>
> Send per virtqueue interrupt moderation config to underlying device in
> order to have more efficient interrupt moderation and cpu utilization of
> guest VM.
>
> Additionally, address all the VQs when updating the global configuration,
> as now the individual VQs configuration can diverge from the global
> configuration.
>
> Signed-off-by: Gavin Li <gavinl@nvidia.com>
> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

^ permalink raw reply	[flat|nested] 10+ messages in thread
* [PATCH net-next V4 3/3] virtio_net: enable per queue interrupt coalesce feature
  2023-07-25 13:07 [PATCH net-next V4 0/3] virtio_net: add per queue interrupt coalescing support Gavin Li
  2023-07-25 13:07 ` [PATCH net-next V4 1/3] virtio_net: extract interrupt coalescing settings to a structure Gavin Li
  2023-07-25 13:07 ` [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command Gavin Li
@ 2023-07-25 13:07 ` Gavin Li
  2023-07-31  6:24   ` Jason Wang
  2 siblings, 1 reply; 10+ messages in thread
From: Gavin Li @ 2023-07-25 13:07 UTC (permalink / raw)
  To: mst, jasowang, xuanzhuo, davem, edumazet, kuba, pabeni, ast,
	daniel, hawk, john.fastabend, jiri, dtatulea
  Cc: gavi, virtualization, netdev, linux-kernel, bpf, Heng Qi

Enable per queue interrupt coalesce feature bit in driver and validate its
dependency with control queue.

Signed-off-by: Gavin Li <gavinl@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Jiri Pirko <jiri@nvidia.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c185930d7c9d..57cb75f98618 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -4088,6 +4088,8 @@ static bool virtnet_validate_features(struct virtio_device *vdev)
 			     VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_HASH_REPORT,
 					     "VIRTIO_NET_F_CTRL_VQ") ||
 			     VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_NOTF_COAL,
+					     "VIRTIO_NET_F_CTRL_VQ") ||
+			     VIRTNET_FAIL_ON(vdev, VIRTIO_NET_F_VQ_NOTF_COAL,
 					     "VIRTIO_NET_F_CTRL_VQ"))) {
 		return false;
 	}
@@ -4512,6 +4514,7 @@ static struct virtio_device_id id_table[] = {
 	VIRTIO_NET_F_MTU, VIRTIO_NET_F_CTRL_GUEST_OFFLOADS, \
 	VIRTIO_NET_F_SPEED_DUPLEX, VIRTIO_NET_F_STANDBY, \
 	VIRTIO_NET_F_RSS, VIRTIO_NET_F_HASH_REPORT, VIRTIO_NET_F_NOTF_COAL, \
+	VIRTIO_NET_F_VQ_NOTF_COAL, \
 	VIRTIO_NET_F_GUEST_HDRLEN

 static unsigned int features[] = {
--
2.39.1

^ permalink raw reply related	[flat|nested] 10+ messages in thread
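The dependency validated above exists because the per-VQ coalescing command
travels over the control virtqueue. Stated as a standalone predicate, this is
an illustrative sketch only (it is not the driver's VIRTNET_FAIL_ON path, and
the helper name is hypothetical):

/* A device offering VIRTIO_NET_F_VQ_NOTF_COAL without VIRTIO_NET_F_CTRL_VQ
 * is invalid, since there would be no control virtqueue to carry the
 * VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET command.
 */
static bool vq_notf_coal_dep_ok(struct virtio_device *vdev)
{
	return !virtio_has_feature(vdev, VIRTIO_NET_F_VQ_NOTF_COAL) ||
	       virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ);
}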
* Re: [PATCH net-next V4 3/3] virtio_net: enable per queue interrupt coalesce feature
  2023-07-25 13:07 ` [PATCH net-next V4 3/3] virtio_net: enable per queue interrupt coalesce feature Gavin Li
@ 2023-07-31  6:24   ` Jason Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2023-07-31  6:24 UTC (permalink / raw)
  To: Gavin Li, mst, xuanzhuo, davem, edumazet, kuba, pabeni, ast,
	daniel, hawk, john.fastabend, jiri, dtatulea
  Cc: gavi, virtualization, netdev, linux-kernel, bpf, Heng Qi

On 2023/7/25 21:07, Gavin Li wrote:
> Enable per queue interrupt coalesce feature bit in driver and validate its
> dependency with control queue.
>
> Signed-off-by: Gavin Li <gavinl@nvidia.com>
> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

^ permalink raw reply	[flat|nested] 10+ messages in thread
end of thread, other threads:[~2023-07-31  6:25 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-25 13:07 [PATCH net-next V4 0/3] virtio_net: add per queue interrupt coalescing support Gavin Li
2023-07-25 13:07 ` [PATCH net-next V4 1/3] virtio_net: extract interrupt coalescing settings to a structure Gavin Li
2023-07-25 13:07 ` [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command Gavin Li
2023-07-27 13:28   ` Paolo Abeni
2023-07-28  1:42     ` Jason Wang
2023-07-28  5:46     ` Michael S. Tsirkin
2023-07-31  6:25       ` Jason Wang
2023-07-31  6:24   ` Jason Wang
2023-07-25 13:07 ` [PATCH net-next V4 3/3] virtio_net: enable per queue interrupt coalesce feature Gavin Li
2023-07-31  6:24   ` Jason Wang