* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
[not found] ` <20230725130709.58207-3-gavinl@nvidia.com>
@ 2023-07-27 13:28 ` Paolo Abeni
2023-07-28 1:42 ` Jason Wang
2023-07-28 5:46 ` Michael S. Tsirkin
2023-07-31 6:24 ` Jason Wang
1 sibling, 2 replies; 6+ messages in thread
From: Paolo Abeni @ 2023-07-27 13:28 UTC (permalink / raw)
To: Gavin Li, mst, jasowang, xuanzhuo, davem, edumazet, kuba, ast,
daniel, hawk, john.fastabend, jiri, dtatulea
Cc: netdev, linux-kernel, gavi, virtualization, Heng Qi, bpf
On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
> Add interrupt_coalesce config in send_queue and receive_queue to cache user
> config.
>
> Send per virtqueue interrupt moderation config to underlying device in
> order to have more efficient interrupt moderation and cpu utilization of
> guest VM.
>
> Additionally, address all the VQs when updating the global configuration,
> as now the individual VQs configuration can diverge from the global
> configuration.
>
> Signed-off-by: Gavin Li <gavinl@nvidia.com>
> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
FTR, this patch is significantly different from the version previously
acked/reviewed; I'm unsure if all the reviewers are ok with the new
one.
[...]
> static int virtnet_set_coalesce(struct net_device *dev,
>                                 struct ethtool_coalesce *ec,
>                                 struct kernel_ethtool_coalesce *kernel_coal,
>                                 struct netlink_ext_ack *extack)
> {
>         struct virtnet_info *vi = netdev_priv(dev);
> -       int ret, i, napi_weight;
> +       int ret, queue_number, napi_weight;
>         bool update_napi = false;
>
>         /* Can't change NAPI weight if the link is up */
>         napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> -       if (napi_weight ^ vi->sq[0].napi.weight) {
> -               if (dev->flags & IFF_UP)
> -                       return -EBUSY;
> -               else
> -                       update_napi = true;
> +       for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> +               ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
> +                                                     vi->sq[queue_number].napi.weight,
> +                                                     &update_napi);
> +               if (ret)
> +                       return ret;
> +
> +               if (update_napi) {
> +                       /* All queues that belong to [queue_number, queue_count] will be
> +                        * updated for the sake of simplicity, which might not be necessary
It looks like the comment above still refers to the old code. Should
be:
[queue_number, vi->max_queue_pairs]
Otherwise LGTM, thanks!
Paolo
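For context, virtnet_should_update_vq_weight() is added elsewhere in this patch and is not shown in the hunk above. A minimal sketch that is consistent with how it is called there (the signature and body below are inferred from the old inline logic, not copied from the patch) could look like:

/* Sketch only: refuse to change a queue's NAPI weight while the interface
 * is up; otherwise record that the weight needs updating.
 */
static int virtnet_should_update_vq_weight(int dev_flags, int weight,
                                           int vq_weight, bool *may_update_vq_weight)
{
        if (weight ^ vq_weight) {
                if (dev_flags & IFF_UP)
                        return -EBUSY;
                *may_update_vq_weight = true;
        }

        return 0;
}

With a helper shaped like this, the loop above stops at the first queue that cannot be changed while the link is up, and otherwise accumulates whether any queue's weight differs from the requested one.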
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
2023-07-27 13:28 ` [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command Paolo Abeni
@ 2023-07-28 1:42 ` Jason Wang
2023-07-28 5:46 ` Michael S. Tsirkin
1 sibling, 0 replies; 6+ messages in thread
From: Jason Wang @ 2023-07-28 1:42 UTC (permalink / raw)
To: Paolo Abeni
Cc: xuanzhuo, linux-kernel, hawk, daniel, mst, netdev, john.fastabend,
ast, gavi, edumazet, Heng Qi, jiri, kuba, bpf, virtualization,
davem, Gavin Li
On Thu, Jul 27, 2023 at 9:28 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
> > Add interrupt_coalesce config in send_queue and receive_queue to cache user
> > config.
> >
> > Send per virtqueue interrupt moderation config to underlying device in
> > order to have more efficient interrupt moderation and cpu utilization of
> > guest VM.
> >
> > Additionally, address all the VQs when updating the global configuration,
> > as now the individual VQs configuration can diverge from the global
> > configuration.
> >
> > Signed-off-by: Gavin Li <gavinl@nvidia.com>
> > Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> > Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> > Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> FTR, this patch is significantly different from the version previously
> acked/reviewed; I'm unsure if all the reviewers are ok with the new
> one.
Good point. I plan to review this no later than next Monday and
offer my ack if necessary. Please hold this series for now.
Thanks
>
> [...]
>
> > static int virtnet_set_coalesce(struct net_device *dev,
> >                                 struct ethtool_coalesce *ec,
> >                                 struct kernel_ethtool_coalesce *kernel_coal,
> >                                 struct netlink_ext_ack *extack)
> > {
> >         struct virtnet_info *vi = netdev_priv(dev);
> > -       int ret, i, napi_weight;
> > +       int ret, queue_number, napi_weight;
> >         bool update_napi = false;
> >
> >         /* Can't change NAPI weight if the link is up */
> >         napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> > -       if (napi_weight ^ vi->sq[0].napi.weight) {
> > -               if (dev->flags & IFF_UP)
> > -                       return -EBUSY;
> > -               else
> > -                       update_napi = true;
> > +       for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> > +               ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
> > +                                                     vi->sq[queue_number].napi.weight,
> > +                                                     &update_napi);
> > +               if (ret)
> > +                       return ret;
> > +
> > +               if (update_napi) {
> > +                       /* All queues that belong to [queue_number, queue_count] will be
> > +                        * updated for the sake of simplicity, which might not be necessary
>
> It looks like the comment above still refers to the old code. Should
> be:
> [queue_number, vi->max_queue_pairs]
>
> Otherwise LGTM, thanks!
>
> Paolo
>
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
2023-07-27 13:28 ` [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command Paolo Abeni
2023-07-28 1:42 ` Jason Wang
@ 2023-07-28 5:46 ` Michael S. Tsirkin
2023-07-31 6:25 ` Jason Wang
1 sibling, 1 reply; 6+ messages in thread
From: Michael S. Tsirkin @ 2023-07-28 5:46 UTC (permalink / raw)
To: Paolo Abeni
Cc: xuanzhuo, linux-kernel, hawk, daniel, netdev, john.fastabend, ast,
gavi, edumazet, Heng Qi, jiri, kuba, bpf, virtualization, davem,
Gavin Li
On Thu, Jul 27, 2023 at 03:28:32PM +0200, Paolo Abeni wrote:
> On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
> > Add interrupt_coalesce config in send_queue and receive_queue to cache user
> > config.
> >
> > Send per virtqueue interrupt moderation config to underlying device in
> > order to have more efficient interrupt moderation and cpu utilization of
> > guest VM.
> >
> > Additionally, address all the VQs when updating the global configuration,
> > as now the individual VQs configuration can diverge from the global
> > configuration.
> >
> > Signed-off-by: Gavin Li <gavinl@nvidia.com>
> > Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> > Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> > Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> FTR, this patch is significantly different from the version previously
> acked/reviewed; I'm unsure if all the reviewers are ok with the new
> one.
>
> [...]
still ok by me
Acked-by: Michael S. Tsirkin <mst@redhat.com>
let's wait for Jason too.
> > static int virtnet_set_coalesce(struct net_device *dev,
> >                                 struct ethtool_coalesce *ec,
> >                                 struct kernel_ethtool_coalesce *kernel_coal,
> >                                 struct netlink_ext_ack *extack)
> > {
> >         struct virtnet_info *vi = netdev_priv(dev);
> > -       int ret, i, napi_weight;
> > +       int ret, queue_number, napi_weight;
> >         bool update_napi = false;
> >
> >         /* Can't change NAPI weight if the link is up */
> >         napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> > -       if (napi_weight ^ vi->sq[0].napi.weight) {
> > -               if (dev->flags & IFF_UP)
> > -                       return -EBUSY;
> > -               else
> > -                       update_napi = true;
> > +       for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> > +               ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
> > +                                                     vi->sq[queue_number].napi.weight,
> > +                                                     &update_napi);
> > +               if (ret)
> > +                       return ret;
> > +
> > +               if (update_napi) {
> > +                       /* All queues that belong to [queue_number, queue_count] will be
> > +                        * updated for the sake of simplicity, which might not be necessary
>
> It looks like the comment above still refers to the old code. Should
> be:
> [queue_number, vi->max_queue_pairs]
>
> Otherwise LGTM, thanks!
>
> Paolo
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
[not found] ` <20230725130709.58207-3-gavinl@nvidia.com>
2023-07-27 13:28 ` [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command Paolo Abeni
@ 2023-07-31 6:24 ` Jason Wang
1 sibling, 0 replies; 6+ messages in thread
From: Jason Wang @ 2023-07-31 6:24 UTC (permalink / raw)
To: Gavin Li, mst, xuanzhuo, davem, edumazet, kuba, pabeni, ast,
daniel, hawk, john.fastabend, jiri, dtatulea
Cc: netdev, linux-kernel, gavi, virtualization, Heng Qi, bpf
On 2023/7/25 21:07, Gavin Li wrote:
> Add interrupt_coalesce config in send_queue and receive_queue to cache user
> config.
>
> Send per virtqueue interrupt moderation config to underlying device in
> order to have more efficient interrupt moderation and cpu utilization of
> guest VM.
>
> Additionally, address all the VQs when updating the global configuration,
> as now the individual VQs configuration can diverge from the global
> configuration.
>
> Signed-off-by: Gavin Li <gavinl@nvidia.com>
> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
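As background for the "send per virtqueue interrupt moderation config to underlying device" part of the description quoted above: the per-VQ parameters travel over the control virtqueue. A rough sketch of the command layout and a send helper, assuming the VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET command and virtio_net_ctrl_coal_vq layout defined by the virtio spec (the wrapper name and plumbing below are illustrative, not the patch's code):

/* Per-VQ coalescing command payload, as described in the virtio spec's
 * notification coalescing extension; check include/uapi/linux/virtio_net.h
 * for the authoritative definition.
 */
struct virtio_net_ctrl_coal_vq {
        __le16 vqn;                        /* virtqueue index to configure */
        __le16 reserved;
        struct virtio_net_ctrl_coal coal;  /* max_usecs / max_packets */
};

/* Illustrative send path (hypothetical wrapper name). Real driver code would
 * keep the payload in a DMA-safe buffer such as vi->ctrl rather than on the
 * stack.
 */
static int virtnet_send_vq_coal_cmd(struct virtnet_info *vi, u16 vqn,
                                    u32 max_usecs, u32 max_packets)
{
        struct virtio_net_ctrl_coal_vq coal_vq;
        struct scatterlist sg;

        coal_vq.vqn = cpu_to_le16(vqn);
        coal_vq.reserved = 0;
        coal_vq.coal.max_usecs = cpu_to_le32(max_usecs);
        coal_vq.coal.max_packets = cpu_to_le32(max_packets);
        sg_init_one(&sg, &coal_vq, sizeof(coal_vq));

        if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
                                  VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET, &sg))
                return -EINVAL;

        return 0;
}

Something along these lines is what would let the ethtool per-queue coalesce path update a single virtqueue without touching the device-wide setting.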
* Re: [PATCH net-next V4 3/3] virtio_net: enable per queue interrupt coalesce feature
[not found] ` <20230725130709.58207-4-gavinl@nvidia.com>
@ 2023-07-31 6:24 ` Jason Wang
0 siblings, 0 replies; 6+ messages in thread
From: Jason Wang @ 2023-07-31 6:24 UTC (permalink / raw)
To: Gavin Li, mst, xuanzhuo, davem, edumazet, kuba, pabeni, ast,
daniel, hawk, john.fastabend, jiri, dtatulea
Cc: netdev, linux-kernel, gavi, virtualization, Heng Qi, bpf
On 2023/7/25 21:07, Gavin Li wrote:
> Enable per queue interrupt coalesce feature bit in driver and validate its
> dependency with control queue.
>
> Signed-off-by: Gavin Li <gavinl@nvidia.com>
> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
> Reviewed-by: Heng Qi <hengqi@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Thanks
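The "validate its dependency with control queue" part of the 3/3 description quoted above refers to the fact that per-VQ coalescing is configured through the control virtqueue, so the feature cannot be used without VIRTIO_NET_F_CTRL_VQ. One way such a check can look in the driver's validate callback, assuming the VIRTIO_NET_F_VQ_NOTF_COAL feature name (the patch may instead fail validation outright; the exact mechanism below is an assumption):

/* Sketch: drop the per-VQ coalescing feature if the device offers it
 * without a control virtqueue, since the commands would have nowhere to go.
 */
static int virtnet_validate(struct virtio_device *vdev)
{
        /* ... existing validation ... */

        if (virtio_has_feature(vdev, VIRTIO_NET_F_VQ_NOTF_COAL) &&
            !virtio_has_feature(vdev, VIRTIO_NET_F_CTRL_VQ))
                __virtio_clear_bit(vdev, VIRTIO_NET_F_VQ_NOTF_COAL);

        return 0;
}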
* Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command
2023-07-28 5:46 ` Michael S. Tsirkin
@ 2023-07-31 6:25 ` Jason Wang
0 siblings, 0 replies; 6+ messages in thread
From: Jason Wang @ 2023-07-31 6:25 UTC (permalink / raw)
To: Michael S. Tsirkin, Paolo Abeni
Cc: xuanzhuo, linux-kernel, hawk, daniel, netdev, john.fastabend, ast,
gavi, edumazet, Heng Qi, jiri, kuba, bpf, virtualization, davem,
Gavin Li
On 2023/7/28 13:46, Michael S. Tsirkin wrote:
> On Thu, Jul 27, 2023 at 03:28:32PM +0200, Paolo Abeni wrote:
>> On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
>>> Add interrupt_coalesce config in send_queue and receive_queue to cache user
>>> config.
>>>
>>> Send per virtqueue interrupt moderation config to underlying device in
>>> order to have more efficient interrupt moderation and cpu utilization of
>>> guest VM.
>>>
>>> Additionally, address all the VQs when updating the global configuration,
>>> as now the individual VQs configuration can diverge from the global
>>> configuration.
>>>
>>> Signed-off-by: Gavin Li <gavinl@nvidia.com>
>>> Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
>>> Reviewed-by: Jiri Pirko <jiri@nvidia.com>
>>> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>> FTR, this patch is significantly different from the version previously
>> acked/reviewed; I'm unsure if all the reviewers are ok with the new
>> one.
>>
>> [...]
> still ok by me
>
> Acked-by: Michael S. Tsirkin <mst@redhat.com>
>
> let's wait for Jason too.
I'm fine with this series (I've acked each patch).
Thanks
>
>>> static int virtnet_set_coalesce(struct net_device *dev,
>>>                                 struct ethtool_coalesce *ec,
>>>                                 struct kernel_ethtool_coalesce *kernel_coal,
>>>                                 struct netlink_ext_ack *extack)
>>> {
>>>         struct virtnet_info *vi = netdev_priv(dev);
>>> -       int ret, i, napi_weight;
>>> +       int ret, queue_number, napi_weight;
>>>         bool update_napi = false;
>>>
>>>         /* Can't change NAPI weight if the link is up */
>>>         napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
>>> -       if (napi_weight ^ vi->sq[0].napi.weight) {
>>> -               if (dev->flags & IFF_UP)
>>> -                       return -EBUSY;
>>> -               else
>>> -                       update_napi = true;
>>> +       for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
>>> +               ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
>>> +                                                     vi->sq[queue_number].napi.weight,
>>> +                                                     &update_napi);
>>> +               if (ret)
>>> +                       return ret;
>>> +
>>> +               if (update_napi) {
>>> +                       /* All queues that belong to [queue_number, queue_count] will be
>>> +                        * updated for the sake of simplicity, which might not be necessary
>> It looks like the comment above still refers to the old code. Should
>> be:
>> [queue_number, vi->max_queue_pairs]
>>
>> Otherwise LGTM, thanks!
>>
>> Paolo