* [PATCH net-next 0/2] Add TX stop/wake counters @ 2024-05-09 16:32 Daniel Jurgens 2024-05-09 16:32 ` [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake Daniel Jurgens 2024-05-09 16:32 ` [PATCH net-next 2/2] virtio_net: Add TX stopped and wake counters Daniel Jurgens 0 siblings, 2 replies; 14+ messages in thread From: Daniel Jurgens @ 2024-05-09 16:32 UTC (permalink / raw) To: netdev Cc: mst, jasowang, xuanzhuo, virtualization, davem, edumazet, kuba, pabeni, jiri, Daniel Jurgens Several drivers provide TX stop and wake counters via ethtool stats. Add those to the netdev queue stats, and use them in virtio_net. Daniel Jurgens (2): netdev: Add queue stats for TX stop and wake virtio_net: Add TX stopped and wake counters Documentation/netlink/specs/netdev.yaml | 10 +++++++++ drivers/net/virtio_net.c | 28 +++++++++++++++++++++++-- include/net/netdev_queues.h | 3 +++ include/uapi/linux/netdev.h | 2 ++ net/core/netdev-genl.c | 4 +++- tools/include/uapi/linux/netdev.h | 3 ++- 6 files changed, 46 insertions(+), 4 deletions(-) -- 2.44.0 ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake 2024-05-09 16:32 [PATCH net-next 0/2] Add TX stop/wake counters Daniel Jurgens @ 2024-05-09 16:32 ` Daniel Jurgens 2024-05-09 20:46 ` Andrew Lunn 2024-05-10 1:31 ` Jakub Kicinski 2024-05-09 16:32 ` [PATCH net-next 2/2] virtio_net: Add TX stopped and wake counters Daniel Jurgens 1 sibling, 2 replies; 14+ messages in thread From: Daniel Jurgens @ 2024-05-09 16:32 UTC (permalink / raw) To: netdev Cc: mst, jasowang, xuanzhuo, virtualization, davem, edumazet, kuba, pabeni, jiri, Daniel Jurgens TX queue stop and wake are counted by some drivers. Support reporting these via netdev-genl queue stats. Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> --- Documentation/netlink/specs/netdev.yaml | 10 ++++++++++ include/net/netdev_queues.h | 3 +++ include/uapi/linux/netdev.h | 2 ++ net/core/netdev-genl.c | 4 +++- tools/include/uapi/linux/netdev.h | 3 ++- 5 files changed, 20 insertions(+), 2 deletions(-) diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml index 2be4b3714d17..c8b976d03330 100644 --- a/Documentation/netlink/specs/netdev.yaml +++ b/Documentation/netlink/specs/netdev.yaml @@ -439,6 +439,16 @@ attribute-sets: Number of the packets dropped by the device due to the transmit packets bitrate exceeding the device rate limit. type: uint + - + name: tx-stop + doc: | + Number of times the tx queue was stopped. + type: uint + - + name: tx-wake + doc: | + Number of times the tx queue was restarted. + type: uint operations: list: diff --git a/include/net/netdev_queues.h b/include/net/netdev_queues.h index e7b84f018cee..a8a7e48dfa6c 100644 --- a/include/net/netdev_queues.h +++ b/include/net/netdev_queues.h @@ -41,6 +41,9 @@ struct netdev_queue_stats_tx { u64 hw_gso_wire_bytes; u64 hw_drop_ratelimits; + + u64 stop; + u64 wake; }; /** diff --git a/include/uapi/linux/netdev.h b/include/uapi/linux/netdev.h index cf24f1d9adf8..a8188202413e 100644 --- a/include/uapi/linux/netdev.h +++ b/include/uapi/linux/netdev.h @@ -165,6 +165,8 @@ enum { NETDEV_A_QSTATS_TX_HW_GSO_WIRE_PACKETS, NETDEV_A_QSTATS_TX_HW_GSO_WIRE_BYTES, NETDEV_A_QSTATS_TX_HW_DROP_RATELIMITS, + NETDEV_A_QSTATS_TX_STOP, + NETDEV_A_QSTATS_TX_WAKE, __NETDEV_A_QSTATS_MAX, NETDEV_A_QSTATS_MAX = (__NETDEV_A_QSTATS_MAX - 1) diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c index 4b5054087309..1f6ae6379e0f 100644 --- a/net/core/netdev-genl.c +++ b/net/core/netdev-genl.c @@ -517,7 +517,9 @@ netdev_nl_stats_write_tx(struct sk_buff *rsp, struct netdev_queue_stats_tx *tx) netdev_stat_put(rsp, NETDEV_A_QSTATS_TX_HW_GSO_BYTES, tx->hw_gso_bytes) || netdev_stat_put(rsp, NETDEV_A_QSTATS_TX_HW_GSO_WIRE_PACKETS, tx->hw_gso_wire_packets) || netdev_stat_put(rsp, NETDEV_A_QSTATS_TX_HW_GSO_WIRE_BYTES, tx->hw_gso_wire_bytes) || - netdev_stat_put(rsp, NETDEV_A_QSTATS_TX_HW_DROP_RATELIMITS, tx->hw_drop_ratelimits)) + netdev_stat_put(rsp, NETDEV_A_QSTATS_TX_HW_DROP_RATELIMITS, tx->hw_drop_ratelimits) || + netdev_stat_put(rsp, NETDEV_A_QSTATS_TX_STOP, tx->stop) || + netdev_stat_put(rsp, NETDEV_A_QSTATS_TX_WAKE, tx->wake)) return -EMSGSIZE; return 0; } diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h index cf24f1d9adf8..ccf6976b1693 100644 --- a/tools/include/uapi/linux/netdev.h +++ b/tools/include/uapi/linux/netdev.h @@ -164,7 +164,8 @@ enum { NETDEV_A_QSTATS_TX_HW_GSO_BYTES, NETDEV_A_QSTATS_TX_HW_GSO_WIRE_PACKETS, NETDEV_A_QSTATS_TX_HW_GSO_WIRE_BYTES, - 
NETDEV_A_QSTATS_TX_HW_DROP_RATELIMITS, + NETDEV_A_QSTATS_TX_STOP, + NETDEV_A_QSTATS_TX_WAKE, __NETDEV_A_QSTATS_MAX, NETDEV_A_QSTATS_MAX = (__NETDEV_A_QSTATS_MAX - 1) -- 2.44.0 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake 2024-05-09 16:32 ` [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake Daniel Jurgens @ 2024-05-09 20:46 ` Andrew Lunn 2024-05-09 21:19 ` Dan Jurgens 2024-05-10 1:31 ` Jakub Kicinski 1 sibling, 1 reply; 14+ messages in thread From: Andrew Lunn @ 2024-05-09 20:46 UTC (permalink / raw) To: Daniel Jurgens Cc: netdev, mst, jasowang, xuanzhuo, virtualization, davem, edumazet, kuba, pabeni, jiri On Thu, May 09, 2024 at 11:32:15AM -0500, Daniel Jurgens wrote: > TX queue stop and wake are counted by some drivers. > Support reporting these via netdev-genl queue stats. > > Signed-off-by: Daniel Jurgens <danielj@nvidia.com> > Reviewed-by: Jiri Pirko <jiri@nvidia.com> > --- > Documentation/netlink/specs/netdev.yaml | 10 ++++++++++ > include/net/netdev_queues.h | 3 +++ > include/uapi/linux/netdev.h | 2 ++ > net/core/netdev-genl.c | 4 +++- > tools/include/uapi/linux/netdev.h | 3 ++- > 5 files changed, 20 insertions(+), 2 deletions(-) > > diff --git a/Documentation/netlink/specs/netdev.yaml b/Documentation/netlink/specs/netdev.yaml > index 2be4b3714d17..c8b976d03330 100644 > --- a/Documentation/netlink/specs/netdev.yaml > +++ b/Documentation/netlink/specs/netdev.yaml > @@ -439,6 +439,16 @@ attribute-sets: > Number of the packets dropped by the device due to the transmit > packets bitrate exceeding the device rate limit. > type: uint > + - > + name: tx-stop > + doc: | > + Number of times the tx queue was stopped. > + type: uint > + - > + name: tx-wake > + doc: | > + Number of times the tx queue was restarted. > + type: uint I'm curious where these names came from. The opposite of stop would be start. The opposite of wake would be sleep. Are these meant to be opposites of each other? If they are opposites, why would they differ by more than 1? And if they can only differ by 1, why do we need both? Andrew ^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake 2024-05-09 20:46 ` Andrew Lunn @ 2024-05-09 21:19 ` Dan Jurgens 2024-05-10 12:58 ` Andrew Lunn 0 siblings, 1 reply; 14+ messages in thread From: Dan Jurgens @ 2024-05-09 21:19 UTC (permalink / raw) To: Andrew Lunn Cc: netdev@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, xuanzhuo@linux.alibaba.com, virtualization@lists.linux.dev, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Jiri Pirko > From: Andrew Lunn <andrew@lunn.ch> > Sent: Thursday, May 9, 2024 3:47 PM > To: Dan Jurgens <danielj@nvidia.com> > Subject: Re: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and > wake > > On Thu, May 09, 2024 at 11:32:15AM -0500, Daniel Jurgens wrote: > > TX queue stop and wake are counted by some drivers. > > Support reporting these via netdev-genl queue stats. > > > > + name: tx-wake > > + doc: | > > + Number of times the tx queue was restarted. > > + type: uint > > I'm curious where these names came from. The opposite of stop would be > start. The opposite of wake would be sleep. Are these meant to be > opposites of each other? If they are opposites, why would they differ by > more than 1? And if they can only differ by 1, why do we need both? The names come from the API. netif_tx_stop_queue, netif_tx_wake_queue. It's true that they can only ever differ by 1, but when they do that's interesting. Though eventually a TX timeout will occur if it's due to something like a lost interrupt. The most useful thing is knowing if queues are being stopped frequently, so if there's objection to the wake side it can be dropped. > > Andrew ^ permalink raw reply [flat|nested] 14+ messages in thread
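For readers less familiar with the API named above, the canonical stop/wake sequence in a multiqueue driver looks roughly like the sketch below. This is a generic illustration, not code from this series; my_xmit()/my_tx_complete() and my_ring_full()/my_ring_has_room() are placeholders, while the netif_tx_* calls are the in-tree helpers the tx-stop/tx-wake names are taken from.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Illustrative only: the usual pairing of queue stop (in .ndo_start_xmit,
 * when the ring fills) and wake (in the TX completion path, once space has
 * been reclaimed).  The proposed tx-stop/tx-wake qstats count exactly these
 * two events; the my_* helpers are hypothetical.
 */
static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq;

	txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));

	/* ... post the skb to the hardware ring ... */

	if (my_ring_full(dev))
		netif_tx_stop_queue(txq);	/* driver bumps its stop counter here */

	return NETDEV_TX_OK;
}

static void my_tx_complete(struct net_device *dev, struct netdev_queue *txq)
{
	/* ... reclaim completed descriptors ... */

	if (netif_tx_queue_stopped(txq) && my_ring_has_room(dev))
		netif_tx_wake_queue(txq);	/* driver bumps its wake counter here */
}

Because every wake is preceded by a stop, the two counters normally track each other; a persistent difference of one points at a queue that stopped and never recovered, which is the "interesting" case mentioned above.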
* Re: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake 2024-05-09 21:19 ` Dan Jurgens @ 2024-05-10 12:58 ` Andrew Lunn 2024-05-10 20:20 ` Dan Jurgens 0 siblings, 1 reply; 14+ messages in thread From: Andrew Lunn @ 2024-05-10 12:58 UTC (permalink / raw) To: Dan Jurgens Cc: netdev@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, xuanzhuo@linux.alibaba.com, virtualization@lists.linux.dev, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Jiri Pirko On Thu, May 09, 2024 at 09:19:52PM +0000, Dan Jurgens wrote: > > From: Andrew Lunn <andrew@lunn.ch> > > Sent: Thursday, May 9, 2024 3:47 PM > > To: Dan Jurgens <danielj@nvidia.com> > > Subject: Re: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and > > wake > > > > On Thu, May 09, 2024 at 11:32:15AM -0500, Daniel Jurgens wrote: > > > TX queue stop and wake are counted by some drivers. > > > Support reporting these via netdev-genl queue stats. > > > > > > + name: tx-wake > > > + doc: | > > > + Number of times the tx queue was restarted. > > > + type: uint > > > > I'm curious where these names came from. The opposite of stop would be > > start. The opposite of wake would be sleep. Are these meant to be > > opposites of each other? If they are opposites, why would they differ by > > more than 1? And if they can only differ by 1, why do we need both? > > The names come from the API. netif_tx_stop_queue, netif_tx_wake_queue. O.K. So in that context, these names make sense. Maybe extend the doc: to mention these function names? You say there are a few drivers with these counters? Does it make sense to actually push the increment into netif_tx_stop_queue(), netif_tx_wake_queue() so that they become available for all drivers? I've no idea what that implies... Andrew ^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake 2024-05-10 12:58 ` Andrew Lunn @ 2024-05-10 20:20 ` Dan Jurgens 2024-05-10 20:40 ` Andrew Lunn 0 siblings, 1 reply; 14+ messages in thread From: Dan Jurgens @ 2024-05-10 20:20 UTC (permalink / raw) To: Andrew Lunn Cc: netdev@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, xuanzhuo@linux.alibaba.com, virtualization@lists.linux.dev, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Jiri Pirko > From: Andrew Lunn <andrew@lunn.ch> > Sent: Friday, May 10, 2024 7:59 AM > To: Dan Jurgens <danielj@nvidia.com> > Cc: netdev@vger.kernel.org; mst@redhat.com; jasowang@redhat.com; > xuanzhuo@linux.alibaba.com; virtualization@lists.linux.dev; > davem@davemloft.net; edumazet@google.com; kuba@kernel.org; > pabeni@redhat.com; Jiri Pirko <jiri@nvidia.com> > Subject: Re: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and > wake > > On Thu, May 09, 2024 at 09:19:52PM +0000, Dan Jurgens wrote: > > > From: Andrew Lunn <andrew@lunn.ch> > > > Sent: Thursday, May 9, 2024 3:47 PM > > > To: Dan Jurgens <danielj@nvidia.com> > > > Subject: Re: [PATCH net-next 1/2] netdev: Add queue stats for TX > > > stop and wake > > > > > > On Thu, May 09, 2024 at 11:32:15AM -0500, Daniel Jurgens wrote: > > > > TX queue stop and wake are counted by some drivers. > > > > Support reporting these via netdev-genl queue stats. > > > > > > > > + name: tx-wake > > > > + doc: | > > > > + Number of times the tx queue was restarted. > > > > + type: uint > > > > > > I'm curious where these names came from. The opposite of stop would > > > be start. The opposite of wake would be sleep. Are these meant to be > > > opposites of each other? If they are opposites, why would they > > > differ by more than 1? And if they can only differ by 1, why do we need > both? > > > > The names come from the API. netif_tx_stop_queue, > netif_tx_wake_queue. > > O.K. So in that context, these names make sense. Maybe extend the doc: > to mention these function names? > > You say there are a few drivers with these counters? Does it make sense to > actually push the increment into netif_tx_stop_queue(), > netif_tx_wake_queue() so that they become available for all drivers? > I've no idea what that implies... It wouldn't be trivial. The stats are queried from the driver. > > Andrew ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake 2024-05-10 20:20 ` Dan Jurgens @ 2024-05-10 20:40 ` Andrew Lunn 0 siblings, 0 replies; 14+ messages in thread From: Andrew Lunn @ 2024-05-10 20:40 UTC (permalink / raw) To: Dan Jurgens Cc: netdev@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, xuanzhuo@linux.alibaba.com, virtualization@lists.linux.dev, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Jiri Pirko > It wouldn't be trivial. The stats are queried from the driver. So are page pool stats, with the increments happening in the page pool code, not the driver. Andrew ^ permalink raw reply [flat|nested] 14+ messages in thread
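To make the page pool comparison concrete: there the counters are incremented inside the page pool core, and a driver that wants to expose them only aggregates them when its stats are queried, roughly as in the sketch below (struct my_priv and its rxq array are hypothetical; headers and error handling are omitted):

#ifdef CONFIG_PAGE_POOL_STATS
static void my_collect_page_pool_stats(struct my_priv *priv,
				       struct page_pool_stats *stats)
{
	int i;

	/* page_pool_get_stats() accumulates into *stats, so it can simply
	 * be called once per RX queue's pool when the driver is queried.
	 */
	for (i = 0; i < priv->num_rx_queues; i++)
		page_pool_get_stats(priv->rxq[i].page_pool, stats);
}
#endif

The open question in the thread is whether the stop/wake increments could move into netif_tx_stop_queue()/netif_tx_wake_queue() in the same way, while still letting per-driver qstats callbacks report the totals.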
* Re: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake 2024-05-09 16:32 ` [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake Daniel Jurgens 2024-05-09 20:46 ` Andrew Lunn @ 2024-05-10 1:31 ` Jakub Kicinski 2024-05-10 3:37 ` Dan Jurgens 1 sibling, 1 reply; 14+ messages in thread From: Jakub Kicinski @ 2024-05-10 1:31 UTC (permalink / raw) To: Daniel Jurgens Cc: netdev, mst, jasowang, xuanzhuo, virtualization, davem, edumazet, pabeni, jiri On Thu, 9 May 2024 11:32:15 -0500 Daniel Jurgens wrote: > diff --git a/tools/include/uapi/linux/netdev.h b/tools/include/uapi/linux/netdev.h > index cf24f1d9adf8..ccf6976b1693 100644 > --- a/tools/include/uapi/linux/netdev.h > +++ b/tools/include/uapi/linux/netdev.h > @@ -164,7 +164,8 @@ enum { > NETDEV_A_QSTATS_TX_HW_GSO_BYTES, > NETDEV_A_QSTATS_TX_HW_GSO_WIRE_PACKETS, > NETDEV_A_QSTATS_TX_HW_GSO_WIRE_BYTES, > - NETDEV_A_QSTATS_TX_HW_DROP_RATELIMITS, Looks like an accidental removal? > + NETDEV_A_QSTATS_TX_STOP, > + NETDEV_A_QSTATS_TX_WAKE, Since you'll have to respin let me nit pick on the docs, as I'm hoping that those will be comprehensible to users not only devs. > + name: tx-stop > + doc: | > + Number of times the tx queue was stopped. How about: Number of times driver paused accepting new tx packets from the stack to this queue, because the queue was full. Note that if BQL is supported and enabled on the device the networking stack will avoid queuing a lot of data at once. > + name: tx-wake > + doc: | > + Number of times the tx queue was restarted. Number of times driver re-started accepting send requests to this queue from the stack. ^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake 2024-05-10 1:31 ` Jakub Kicinski @ 2024-05-10 3:37 ` Dan Jurgens 0 siblings, 0 replies; 14+ messages in thread From: Dan Jurgens @ 2024-05-10 3:37 UTC (permalink / raw) To: Jakub Kicinski Cc: netdev@vger.kernel.org, mst@redhat.com, jasowang@redhat.com, xuanzhuo@linux.alibaba.com, virtualization@lists.linux.dev, davem@davemloft.net, edumazet@google.com, pabeni@redhat.com, Jiri Pirko > From: Jakub Kicinski <kuba@kernel.org> > Sent: Thursday, May 9, 2024 8:31 PM > To: Dan Jurgens <danielj@nvidia.com> > Cc: netdev@vger.kernel.org; mst@redhat.com; jasowang@redhat.com; > xuanzhuo@linux.alibaba.com; virtualization@lists.linux.dev; > davem@davemloft.net; edumazet@google.com; pabeni@redhat.com; Jiri > Pirko <jiri@nvidia.com> > Subject: Re: [PATCH net-next 1/2] netdev: Add queue stats for TX stop and > wake > > On Thu, 9 May 2024 11:32:15 -0500 Daniel Jurgens wrote: > > diff --git a/tools/include/uapi/linux/netdev.h > > b/tools/include/uapi/linux/netdev.h > > index cf24f1d9adf8..ccf6976b1693 100644 > > --- a/tools/include/uapi/linux/netdev.h > > +++ b/tools/include/uapi/linux/netdev.h > > @@ -164,7 +164,8 @@ enum { > > NETDEV_A_QSTATS_TX_HW_GSO_BYTES, > > NETDEV_A_QSTATS_TX_HW_GSO_WIRE_PACKETS, > > NETDEV_A_QSTATS_TX_HW_GSO_WIRE_BYTES, > > - NETDEV_A_QSTATS_TX_HW_DROP_RATELIMITS, > > Looks like an accidental removal? Yes, thanks. > > > + NETDEV_A_QSTATS_TX_STOP, > > + NETDEV_A_QSTATS_TX_WAKE, > > Since you'll have to respin let me nit pick on the docs, as I'm hoping that > those will be comprehensible to users not only devs. > > > + name: tx-stop > > + doc: | > > + Number of times the tx queue was stopped. > > How about: > > Number of times driver paused accepting new tx packets > from the stack to this queue, because the queue was full. > Note that if BQL is supported and enabled on the device > the networking stack will avoid queuing a lot of data at once. > > > + name: tx-wake > > + doc: | > > + Number of times the tx queue was restarted. > > Number of times driver re-started accepting send > requests to this queue from the stack. Will update it. Thanks for the text! ^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH net-next 2/2] virtio_net: Add TX stopped and wake counters 2024-05-09 16:32 [PATCH net-next 0/2] Add TX stop/wake counters Daniel Jurgens 2024-05-09 16:32 ` [PATCH net-next 1/2] netdev: Add queue stats for TX stop and wake Daniel Jurgens @ 2024-05-09 16:32 ` Daniel Jurgens 2024-05-10 1:21 ` Xuan Zhuo 1 sibling, 1 reply; 14+ messages in thread From: Daniel Jurgens @ 2024-05-09 16:32 UTC (permalink / raw) To: netdev Cc: mst, jasowang, xuanzhuo, virtualization, davem, edumazet, kuba, pabeni, jiri, Daniel Jurgens Add a tx queue stop and wake counters, they are useful for debugging. $ ./tools/net/ynl/cli.py --spec netlink/specs/netdev.yaml \ --dump qstats-get --json '{"scope": "queue"}' ... {'ifindex': 13, 'queue-id': 0, 'queue-type': 'tx', 'tx-bytes': 14756682850, 'tx-packets': 226465, 'tx-stop': 113208, 'tx-wake': 113208}, {'ifindex': 13, 'queue-id': 1, 'queue-type': 'tx', 'tx-bytes': 18167675008, 'tx-packets': 278660, 'tx-stop': 8632, 'tx-wake': 8632}] Signed-off-by: Daniel Jurgens <danielj@nvidia.com> Reviewed-by: Jiri Pirko <jiri@nvidia.com> --- drivers/net/virtio_net.c | 28 ++++++++++++++++++++++++++-- 1 file changed, 26 insertions(+), 2 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index 218a446c4c27..df6121c38a1b 100644 --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -95,6 +95,8 @@ struct virtnet_sq_stats { u64_stats_t xdp_tx_drops; u64_stats_t kicks; u64_stats_t tx_timeouts; + u64_stats_t stop; + u64_stats_t wake; }; struct virtnet_rq_stats { @@ -145,6 +147,8 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = { static const struct virtnet_stat_desc virtnet_sq_stats_desc_qstat[] = { VIRTNET_SQ_STAT_QSTAT("packets", packets), VIRTNET_SQ_STAT_QSTAT("bytes", bytes), + VIRTNET_SQ_STAT_QSTAT("stop", stop), + VIRTNET_SQ_STAT_QSTAT("wake", wake), }; static const struct virtnet_stat_desc virtnet_rq_stats_desc_qstat[] = { @@ -1014,6 +1018,9 @@ static void check_sq_full_and_disable(struct virtnet_info *vi, */ if (sq->vq->num_free < 2+MAX_SKB_FRAGS) { netif_stop_subqueue(dev, qnum); + u64_stats_update_begin(&sq->stats.syncp); + u64_stats_inc(&sq->stats.stop); + u64_stats_update_end(&sq->stats.syncp); if (use_napi) { if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) virtqueue_napi_schedule(&sq->napi, sq->vq); @@ -1022,6 +1029,9 @@ static void check_sq_full_and_disable(struct virtnet_info *vi, free_old_xmit(sq, false); if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) { netif_start_subqueue(dev, qnum); + u64_stats_update_begin(&sq->stats.syncp); + u64_stats_inc(&sq->stats.wake); + u64_stats_update_end(&sq->stats.syncp); virtqueue_disable_cb(sq->vq); } } @@ -2322,8 +2332,14 @@ static void virtnet_poll_cleantx(struct receive_queue *rq) free_old_xmit(sq, true); } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq))); - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { + if (netif_tx_queue_stopped(txq)) { + u64_stats_update_begin(&sq->stats.syncp); + u64_stats_inc(&sq->stats.wake); + u64_stats_update_end(&sq->stats.syncp); + } netif_tx_wake_queue(txq); + } __netif_tx_unlock(txq); } @@ -2473,8 +2489,14 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) virtqueue_disable_cb(sq->vq); free_old_xmit(sq, true); - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { + if (netif_tx_queue_stopped(txq)) { + u64_stats_update_begin(&sq->stats.syncp); + u64_stats_inc(&sq->stats.wake); + u64_stats_update_end(&sq->stats.syncp); + } netif_tx_wake_queue(txq); + 
} opaque = virtqueue_enable_cb_prepare(sq->vq); @@ -4790,6 +4812,8 @@ static void virtnet_get_base_stats(struct net_device *dev, tx->bytes = 0; tx->packets = 0; + tx->stop = 0; + tx->wake = 0; if (vi->device_stats_cap & VIRTIO_NET_STATS_TYPE_TX_BASIC) { tx->hw_drops = 0; -- 2.44.0 ^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH net-next 2/2] virtio_net: Add TX stopped and wake counters 2024-05-09 16:32 ` [PATCH net-next 2/2] virtio_net: Add TX stopped and wake counters Daniel Jurgens @ 2024-05-10 1:21 ` Xuan Zhuo 2024-05-10 3:35 ` Dan Jurgens 0 siblings, 1 reply; 14+ messages in thread From: Xuan Zhuo @ 2024-05-10 1:21 UTC (permalink / raw) To: Daniel Jurgens Cc: mst, jasowang, xuanzhuo, virtualization, davem, edumazet, kuba, pabeni, jiri, Daniel Jurgens, netdev On Thu, 9 May 2024 11:32:16 -0500, Daniel Jurgens <danielj@nvidia.com> wrote: > Add a tx queue stop and wake counters, they are useful for debugging. > > $ ./tools/net/ynl/cli.py --spec netlink/specs/netdev.yaml \ > --dump qstats-get --json '{"scope": "queue"}' > ... > {'ifindex': 13, > 'queue-id': 0, > 'queue-type': 'tx', > 'tx-bytes': 14756682850, > 'tx-packets': 226465, > 'tx-stop': 113208, > 'tx-wake': 113208}, > {'ifindex': 13, > 'queue-id': 1, > 'queue-type': 'tx', > 'tx-bytes': 18167675008, > 'tx-packets': 278660, > 'tx-stop': 8632, > 'tx-wake': 8632}] > > Signed-off-by: Daniel Jurgens <danielj@nvidia.com> > Reviewed-by: Jiri Pirko <jiri@nvidia.com> > --- > drivers/net/virtio_net.c | 28 ++++++++++++++++++++++++++-- > 1 file changed, 26 insertions(+), 2 deletions(-) > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c > index 218a446c4c27..df6121c38a1b 100644 > --- a/drivers/net/virtio_net.c > +++ b/drivers/net/virtio_net.c > @@ -95,6 +95,8 @@ struct virtnet_sq_stats { > u64_stats_t xdp_tx_drops; > u64_stats_t kicks; > u64_stats_t tx_timeouts; > + u64_stats_t stop; > + u64_stats_t wake; > }; > > struct virtnet_rq_stats { > @@ -145,6 +147,8 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = { > static const struct virtnet_stat_desc virtnet_sq_stats_desc_qstat[] = { > VIRTNET_SQ_STAT_QSTAT("packets", packets), > VIRTNET_SQ_STAT_QSTAT("bytes", bytes), > + VIRTNET_SQ_STAT_QSTAT("stop", stop), > + VIRTNET_SQ_STAT_QSTAT("wake", wake), > }; > > static const struct virtnet_stat_desc virtnet_rq_stats_desc_qstat[] = { > @@ -1014,6 +1018,9 @@ static void check_sq_full_and_disable(struct virtnet_info *vi, > */ > if (sq->vq->num_free < 2+MAX_SKB_FRAGS) { > netif_stop_subqueue(dev, qnum); > + u64_stats_update_begin(&sq->stats.syncp); > + u64_stats_inc(&sq->stats.stop); > + u64_stats_update_end(&sq->stats.syncp); How about introduce two helpers to wrap netif_tx_queue_stopped and netif_start_subqueue? > if (use_napi) { > if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) > virtqueue_napi_schedule(&sq->napi, sq->vq); > @@ -1022,6 +1029,9 @@ static void check_sq_full_and_disable(struct virtnet_info *vi, > free_old_xmit(sq, false); > if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) { > netif_start_subqueue(dev, qnum); > + u64_stats_update_begin(&sq->stats.syncp); > + u64_stats_inc(&sq->stats.wake); > + u64_stats_update_end(&sq->stats.syncp); If we start the queue immediately, should we update the counter? Thanks. 
> virtqueue_disable_cb(sq->vq); > } > } > @@ -2322,8 +2332,14 @@ static void virtnet_poll_cleantx(struct receive_queue *rq) > free_old_xmit(sq, true); > } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq))); > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { > + if (netif_tx_queue_stopped(txq)) { > + u64_stats_update_begin(&sq->stats.syncp); > + u64_stats_inc(&sq->stats.wake); > + u64_stats_update_end(&sq->stats.syncp); > + } > netif_tx_wake_queue(txq); > + } > > __netif_tx_unlock(txq); > } > @@ -2473,8 +2489,14 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget) > virtqueue_disable_cb(sq->vq); > free_old_xmit(sq, true); > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { > + if (netif_tx_queue_stopped(txq)) { > + u64_stats_update_begin(&sq->stats.syncp); > + u64_stats_inc(&sq->stats.wake); > + u64_stats_update_end(&sq->stats.syncp); > + } > netif_tx_wake_queue(txq); > + } > > opaque = virtqueue_enable_cb_prepare(sq->vq); > > @@ -4790,6 +4812,8 @@ static void virtnet_get_base_stats(struct net_device *dev, > > tx->bytes = 0; > tx->packets = 0; > + tx->stop = 0; > + tx->wake = 0; > > if (vi->device_stats_cap & VIRTIO_NET_STATS_TYPE_TX_BASIC) { > tx->hw_drops = 0; > -- > 2.44.0 > ^ permalink raw reply [flat|nested] 14+ messages in thread
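One possible shape of the wrappers suggested above, built only from the calls this patch already uses (the virtnet_sq_* names are hypothetical, not necessarily what a respin would use):

static void virtnet_sq_stop(struct virtnet_info *vi, struct send_queue *sq,
			    int qnum)
{
	netif_stop_subqueue(vi->dev, qnum);
	u64_stats_update_begin(&sq->stats.syncp);
	u64_stats_inc(&sq->stats.stop);
	u64_stats_update_end(&sq->stats.syncp);
}

static void virtnet_sq_wake(struct netdev_queue *txq, struct send_queue *sq)
{
	if (netif_tx_queue_stopped(txq)) {
		u64_stats_update_begin(&sq->stats.syncp);
		u64_stats_inc(&sq->stats.wake);
		u64_stats_update_end(&sq->stats.syncp);
	}
	netif_tx_wake_queue(txq);
}

The stop helper would replace the netif_stop_subqueue() plus counter block in check_sq_full_and_disable(), and the wake helper the repeated netif_tx_queue_stopped()/netif_tx_wake_queue() sequences in virtnet_poll_cleantx() and virtnet_poll_tx(), so the queue state change and the counter update cannot go out of step.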
* RE: [PATCH net-next 2/2] virtio_net: Add TX stopped and wake counters 2024-05-10 1:21 ` Xuan Zhuo @ 2024-05-10 3:35 ` Dan Jurgens 2024-05-10 6:48 ` Xuan Zhuo 0 siblings, 1 reply; 14+ messages in thread From: Dan Jurgens @ 2024-05-10 3:35 UTC (permalink / raw) To: Xuan Zhuo Cc: mst@redhat.com, jasowang@redhat.com, virtualization@lists.linux.dev, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Jiri Pirko, netdev@vger.kernel.org > From: Xuan Zhuo <xuanzhuo@linux.alibaba.com> > Sent: Thursday, May 9, 2024 8:22 PM > To: Dan Jurgens <danielj@nvidia.com> > Cc: mst@redhat.com; jasowang@redhat.com; xuanzhuo@linux.alibaba.com; > virtualization@lists.linux.dev; davem@davemloft.net; > edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; Jiri Pirko > <jiri@nvidia.com>; Dan Jurgens <danielj@nvidia.com>; > netdev@vger.kernel.org > Subject: Re: [PATCH net-next 2/2] virtio_net: Add TX stopped and wake > counters > > On Thu, 9 May 2024 11:32:16 -0500, Daniel Jurgens <danielj@nvidia.com> > wrote: > > Add a tx queue stop and wake counters, they are useful for debugging. > > > > $ ./tools/net/ynl/cli.py --spec netlink/specs/netdev.yaml \ --dump > > qstats-get --json '{"scope": "queue"}' > > ... > > {'ifindex': 13, > > 'queue-id': 0, > > 'queue-type': 'tx', > > 'tx-bytes': 14756682850, > > 'tx-packets': 226465, > > 'tx-stop': 113208, > > 'tx-wake': 113208}, > > {'ifindex': 13, > > 'queue-id': 1, > > 'queue-type': 'tx', > > 'tx-bytes': 18167675008, > > 'tx-packets': 278660, > > 'tx-stop': 8632, > > 'tx-wake': 8632}] > > > > Signed-off-by: Daniel Jurgens <danielj@nvidia.com> > > Reviewed-by: Jiri Pirko <jiri@nvidia.com> > > --- > > drivers/net/virtio_net.c | 28 ++++++++++++++++++++++++++-- > > 1 file changed, 26 insertions(+), 2 deletions(-) > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index > > 218a446c4c27..df6121c38a1b 100644 > > --- a/drivers/net/virtio_net.c > > +++ b/drivers/net/virtio_net.c > > @@ -95,6 +95,8 @@ struct virtnet_sq_stats { > > u64_stats_t xdp_tx_drops; > > u64_stats_t kicks; > > u64_stats_t tx_timeouts; > > + u64_stats_t stop; > > + u64_stats_t wake; > > }; > > > > struct virtnet_rq_stats { > > @@ -145,6 +147,8 @@ static const struct virtnet_stat_desc > > virtnet_rq_stats_desc[] = { static const struct virtnet_stat_desc > virtnet_sq_stats_desc_qstat[] = { > > VIRTNET_SQ_STAT_QSTAT("packets", packets), > > VIRTNET_SQ_STAT_QSTAT("bytes", bytes), > > + VIRTNET_SQ_STAT_QSTAT("stop", stop), > > + VIRTNET_SQ_STAT_QSTAT("wake", wake), > > }; > > > > static const struct virtnet_stat_desc virtnet_rq_stats_desc_qstat[] = > > { @@ -1014,6 +1018,9 @@ static void check_sq_full_and_disable(struct > virtnet_info *vi, > > */ > > if (sq->vq->num_free < 2+MAX_SKB_FRAGS) { > > netif_stop_subqueue(dev, qnum); > > + u64_stats_update_begin(&sq->stats.syncp); > > + u64_stats_inc(&sq->stats.stop); > > + u64_stats_update_end(&sq->stats.syncp); > > How about introduce two helpers to wrap > netif_tx_queue_stopped and netif_start_subqueue? 
> > > if (use_napi) { > > if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) > > virtqueue_napi_schedule(&sq->napi, sq- > >vq); @@ -1022,6 +1029,9 @@ > > static void check_sq_full_and_disable(struct virtnet_info *vi, > > free_old_xmit(sq, false); > > if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) { > > netif_start_subqueue(dev, qnum); > > + u64_stats_update_begin(&sq->stats.syncp); > > + u64_stats_inc(&sq->stats.wake); > > + u64_stats_update_end(&sq->stats.syncp); > > If we start the queue immediately, should we update the counter? I intentionally only counted the wakes on restarts after stopping the queue. I don't think counting the initial wake adds any value since it always happens. > > Thanks. > > > virtqueue_disable_cb(sq->vq); > > } > > } > > @@ -2322,8 +2332,14 @@ static void virtnet_poll_cleantx(struct > receive_queue *rq) > > free_old_xmit(sq, true); > > } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq))); > > > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { > > + if (netif_tx_queue_stopped(txq)) { > > + u64_stats_update_begin(&sq->stats.syncp); > > + u64_stats_inc(&sq->stats.wake); > > + u64_stats_update_end(&sq->stats.syncp); > > + } > > netif_tx_wake_queue(txq); > > + } > > > > __netif_tx_unlock(txq); > > } > > @@ -2473,8 +2489,14 @@ static int virtnet_poll_tx(struct napi_struct > *napi, int budget) > > virtqueue_disable_cb(sq->vq); > > free_old_xmit(sq, true); > > > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { > > + if (netif_tx_queue_stopped(txq)) { > > + u64_stats_update_begin(&sq->stats.syncp); > > + u64_stats_inc(&sq->stats.wake); > > + u64_stats_update_end(&sq->stats.syncp); > > + } > > netif_tx_wake_queue(txq); > > + } > > > > opaque = virtqueue_enable_cb_prepare(sq->vq); > > > > @@ -4790,6 +4812,8 @@ static void virtnet_get_base_stats(struct > > net_device *dev, > > > > tx->bytes = 0; > > tx->packets = 0; > > + tx->stop = 0; > > + tx->wake = 0; > > > > if (vi->device_stats_cap & VIRTIO_NET_STATS_TYPE_TX_BASIC) { > > tx->hw_drops = 0; > > -- > > 2.44.0 > > ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: RE: [PATCH net-next 2/2] virtio_net: Add TX stopped and wake counters 2024-05-10 3:35 ` Dan Jurgens @ 2024-05-10 6:48 ` Xuan Zhuo 2024-05-10 16:56 ` Dan Jurgens 0 siblings, 1 reply; 14+ messages in thread From: Xuan Zhuo @ 2024-05-10 6:48 UTC (permalink / raw) To: Dan Jurgens Cc: mst@redhat.com, jasowang@redhat.com, virtualization@lists.linux.dev, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Jiri Pirko, netdev@vger.kernel.org On Fri, 10 May 2024 03:35:51 +0000, Dan Jurgens <danielj@nvidia.com> wrote: > > From: Xuan Zhuo <xuanzhuo@linux.alibaba.com> > > Sent: Thursday, May 9, 2024 8:22 PM > > To: Dan Jurgens <danielj@nvidia.com> > > Cc: mst@redhat.com; jasowang@redhat.com; xuanzhuo@linux.alibaba.com; > > virtualization@lists.linux.dev; davem@davemloft.net; > > edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; Jiri Pirko > > <jiri@nvidia.com>; Dan Jurgens <danielj@nvidia.com>; > > netdev@vger.kernel.org > > Subject: Re: [PATCH net-next 2/2] virtio_net: Add TX stopped and wake > > counters > > > > On Thu, 9 May 2024 11:32:16 -0500, Daniel Jurgens <danielj@nvidia.com> > > wrote: > > > Add a tx queue stop and wake counters, they are useful for debugging. > > > > > > $ ./tools/net/ynl/cli.py --spec netlink/specs/netdev.yaml \ --dump > > > qstats-get --json '{"scope": "queue"}' > > > ... > > > {'ifindex': 13, > > > 'queue-id': 0, > > > 'queue-type': 'tx', > > > 'tx-bytes': 14756682850, > > > 'tx-packets': 226465, > > > 'tx-stop': 113208, > > > 'tx-wake': 113208}, > > > {'ifindex': 13, > > > 'queue-id': 1, > > > 'queue-type': 'tx', > > > 'tx-bytes': 18167675008, > > > 'tx-packets': 278660, > > > 'tx-stop': 8632, > > > 'tx-wake': 8632}] > > > > > > Signed-off-by: Daniel Jurgens <danielj@nvidia.com> > > > Reviewed-by: Jiri Pirko <jiri@nvidia.com> > > > --- > > > drivers/net/virtio_net.c | 28 ++++++++++++++++++++++++++-- > > > 1 file changed, 26 insertions(+), 2 deletions(-) > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index > > > 218a446c4c27..df6121c38a1b 100644 > > > --- a/drivers/net/virtio_net.c > > > +++ b/drivers/net/virtio_net.c > > > @@ -95,6 +95,8 @@ struct virtnet_sq_stats { > > > u64_stats_t xdp_tx_drops; > > > u64_stats_t kicks; > > > u64_stats_t tx_timeouts; > > > + u64_stats_t stop; > > > + u64_stats_t wake; > > > }; > > > > > > struct virtnet_rq_stats { > > > @@ -145,6 +147,8 @@ static const struct virtnet_stat_desc > > > virtnet_rq_stats_desc[] = { static const struct virtnet_stat_desc > > virtnet_sq_stats_desc_qstat[] = { > > > VIRTNET_SQ_STAT_QSTAT("packets", packets), > > > VIRTNET_SQ_STAT_QSTAT("bytes", bytes), > > > + VIRTNET_SQ_STAT_QSTAT("stop", stop), > > > + VIRTNET_SQ_STAT_QSTAT("wake", wake), > > > }; > > > > > > static const struct virtnet_stat_desc virtnet_rq_stats_desc_qstat[] = > > > { @@ -1014,6 +1018,9 @@ static void check_sq_full_and_disable(struct > > virtnet_info *vi, > > > */ > > > if (sq->vq->num_free < 2+MAX_SKB_FRAGS) { > > > netif_stop_subqueue(dev, qnum); > > > + u64_stats_update_begin(&sq->stats.syncp); > > > + u64_stats_inc(&sq->stats.stop); > > > + u64_stats_update_end(&sq->stats.syncp); > > > > How about introduce two helpers to wrap > > netif_tx_queue_stopped and netif_start_subqueue? 
> > > > > if (use_napi) { > > > if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) > > > virtqueue_napi_schedule(&sq->napi, sq- > > >vq); @@ -1022,6 +1029,9 @@ > > > static void check_sq_full_and_disable(struct virtnet_info *vi, > > > free_old_xmit(sq, false); > > > if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) { > > > netif_start_subqueue(dev, qnum); > > > + u64_stats_update_begin(&sq->stats.syncp); > > > + u64_stats_inc(&sq->stats.wake); > > > + u64_stats_update_end(&sq->stats.syncp); > > > > If we start the queue immediately, should we update the counter? > > I intentionally only counted the wakes on restarts after stopping the queue. > I don't think counting the initial wake adds any value since it always happens. Here, we start the queue immediately after the queue is stopped. So for the upper layer, the queue "has not" changed the status, I think we do not need to update the wake counter. Thanks. > > > > > Thanks. > > > > > virtqueue_disable_cb(sq->vq); > > > } > > > } > > > @@ -2322,8 +2332,14 @@ static void virtnet_poll_cleantx(struct > > receive_queue *rq) > > > free_old_xmit(sq, true); > > > } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq))); > > > > > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > > > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { > > > + if (netif_tx_queue_stopped(txq)) { > > > + u64_stats_update_begin(&sq->stats.syncp); > > > + u64_stats_inc(&sq->stats.wake); > > > + u64_stats_update_end(&sq->stats.syncp); > > > + } > > > netif_tx_wake_queue(txq); > > > + } > > > > > > __netif_tx_unlock(txq); > > > } > > > @@ -2473,8 +2489,14 @@ static int virtnet_poll_tx(struct napi_struct > > *napi, int budget) > > > virtqueue_disable_cb(sq->vq); > > > free_old_xmit(sq, true); > > > > > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > > > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { > > > + if (netif_tx_queue_stopped(txq)) { > > > + u64_stats_update_begin(&sq->stats.syncp); > > > + u64_stats_inc(&sq->stats.wake); > > > + u64_stats_update_end(&sq->stats.syncp); > > > + } > > > netif_tx_wake_queue(txq); > > > + } > > > > > > opaque = virtqueue_enable_cb_prepare(sq->vq); > > > > > > @@ -4790,6 +4812,8 @@ static void virtnet_get_base_stats(struct > > > net_device *dev, > > > > > > tx->bytes = 0; > > > tx->packets = 0; > > > + tx->stop = 0; > > > + tx->wake = 0; > > > > > > if (vi->device_stats_cap & VIRTIO_NET_STATS_TYPE_TX_BASIC) { > > > tx->hw_drops = 0; > > > -- > > > 2.44.0 > > > > ^ permalink raw reply [flat|nested] 14+ messages in thread
* RE: RE: [PATCH net-next 2/2] virtio_net: Add TX stopped and wake counters 2024-05-10 6:48 ` Xuan Zhuo @ 2024-05-10 16:56 ` Dan Jurgens 0 siblings, 0 replies; 14+ messages in thread From: Dan Jurgens @ 2024-05-10 16:56 UTC (permalink / raw) To: Xuan Zhuo Cc: mst@redhat.com, jasowang@redhat.com, virtualization@lists.linux.dev, davem@davemloft.net, edumazet@google.com, kuba@kernel.org, pabeni@redhat.com, Jiri Pirko, netdev@vger.kernel.org > From: Xuan Zhuo <xuanzhuo@linux.alibaba.com> > Sent: Friday, May 10, 2024 1:48 AM > To: Dan Jurgens <danielj@nvidia.com> > Cc: mst@redhat.com; jasowang@redhat.com; virtualization@lists.linux.dev; > davem@davemloft.net; edumazet@google.com; kuba@kernel.org; > pabeni@redhat.com; Jiri Pirko <jiri@nvidia.com>; netdev@vger.kernel.org > Subject: Re: RE: [PATCH net-next 2/2] virtio_net: Add TX stopped and wake > counters > > On Fri, 10 May 2024 03:35:51 +0000, Dan Jurgens <danielj@nvidia.com> > wrote: > > > From: Xuan Zhuo <xuanzhuo@linux.alibaba.com> > > > Sent: Thursday, May 9, 2024 8:22 PM > > > To: Dan Jurgens <danielj@nvidia.com> > > > Cc: mst@redhat.com; jasowang@redhat.com; > xuanzhuo@linux.alibaba.com; > > > virtualization@lists.linux.dev; davem@davemloft.net; > > > edumazet@google.com; kuba@kernel.org; pabeni@redhat.com; Jiri > Pirko > > > <jiri@nvidia.com>; Dan Jurgens <danielj@nvidia.com>; > > > netdev@vger.kernel.org > > > Subject: Re: [PATCH net-next 2/2] virtio_net: Add TX stopped and > > > wake counters > > > > > > On Thu, 9 May 2024 11:32:16 -0500, Daniel Jurgens > > > <danielj@nvidia.com> > > > wrote: > > > > Add a tx queue stop and wake counters, they are useful for debugging. > > > > > > > > $ ./tools/net/ynl/cli.py --spec netlink/specs/netdev.yaml \ --dump > > > > qstats-get --json '{"scope": "queue"}' > > > > ... 
> > > > {'ifindex': 13, > > > > 'queue-id': 0, > > > > 'queue-type': 'tx', > > > > 'tx-bytes': 14756682850, > > > > 'tx-packets': 226465, > > > > 'tx-stop': 113208, > > > > 'tx-wake': 113208}, > > > > {'ifindex': 13, > > > > 'queue-id': 1, > > > > 'queue-type': 'tx', > > > > 'tx-bytes': 18167675008, > > > > 'tx-packets': 278660, > > > > 'tx-stop': 8632, > > > > 'tx-wake': 8632}] > > > > > > > > Signed-off-by: Daniel Jurgens <danielj@nvidia.com> > > > > Reviewed-by: Jiri Pirko <jiri@nvidia.com> > > > > --- > > > > drivers/net/virtio_net.c | 28 ++++++++++++++++++++++++++-- > > > > 1 file changed, 26 insertions(+), 2 deletions(-) > > > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c > > > > index 218a446c4c27..df6121c38a1b 100644 > > > > --- a/drivers/net/virtio_net.c > > > > +++ b/drivers/net/virtio_net.c > > > > @@ -95,6 +95,8 @@ struct virtnet_sq_stats { > > > > u64_stats_t xdp_tx_drops; > > > > u64_stats_t kicks; > > > > u64_stats_t tx_timeouts; > > > > + u64_stats_t stop; > > > > + u64_stats_t wake; > > > > }; > > > > > > > > struct virtnet_rq_stats { > > > > @@ -145,6 +147,8 @@ static const struct virtnet_stat_desc > > > > virtnet_rq_stats_desc[] = { static const struct virtnet_stat_desc > > > virtnet_sq_stats_desc_qstat[] = { > > > > VIRTNET_SQ_STAT_QSTAT("packets", packets), > > > > VIRTNET_SQ_STAT_QSTAT("bytes", bytes), > > > > + VIRTNET_SQ_STAT_QSTAT("stop", stop), > > > > + VIRTNET_SQ_STAT_QSTAT("wake", wake), > > > > }; > > > > > > > > static const struct virtnet_stat_desc > > > > virtnet_rq_stats_desc_qstat[] = { @@ -1014,6 +1018,9 @@ static > > > > void check_sq_full_and_disable(struct > > > virtnet_info *vi, > > > > */ > > > > if (sq->vq->num_free < 2+MAX_SKB_FRAGS) { > > > > netif_stop_subqueue(dev, qnum); > > > > + u64_stats_update_begin(&sq->stats.syncp); > > > > + u64_stats_inc(&sq->stats.stop); > > > > + u64_stats_update_end(&sq->stats.syncp); > > > > > > How about introduce two helpers to wrap netif_tx_queue_stopped and > > > netif_start_subqueue? > > > > > > > if (use_napi) { > > > > if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) > > > > virtqueue_napi_schedule(&sq->napi, sq- vq); > @@ -1022,6 > > > >+1029,9 @@ static void check_sq_full_and_disable(struct > > > >virtnet_info *vi, > > > > free_old_xmit(sq, false); > > > > if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) { > > > > netif_start_subqueue(dev, qnum); > > > > + u64_stats_update_begin(&sq->stats.syncp); > > > > + u64_stats_inc(&sq->stats.wake); > > > > + u64_stats_update_end(&sq->stats.syncp); > > > > > > If we start the queue immediately, should we update the counter? > > > > I intentionally only counted the wakes on restarts after stopping the > queue. > > I don't think counting the initial wake adds any value since it always > happens. > > Here, we start the queue immediately after the queue is stopped. > So for the upper layer, the queue "has not" changed the status, I think we do > not need to update the wake counter. I think we should. We incremented the stop counter. If they get out sync wake loses any functionality. > > Thanks. > > > > > > > > > > Thanks. 
> > > > > > > virtqueue_disable_cb(sq->vq); > > > > } > > > > } > > > > @@ -2322,8 +2332,14 @@ static void virtnet_poll_cleantx(struct > > > receive_queue *rq) > > > > free_old_xmit(sq, true); > > > > } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq))); > > > > > > > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > > > > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { > > > > + if (netif_tx_queue_stopped(txq)) { > > > > + u64_stats_update_begin(&sq->stats.syncp); > > > > + u64_stats_inc(&sq->stats.wake); > > > > + u64_stats_update_end(&sq->stats.syncp); > > > > + } > > > > netif_tx_wake_queue(txq); > > > > + } > > > > > > > > __netif_tx_unlock(txq); > > > > } > > > > @@ -2473,8 +2489,14 @@ static int virtnet_poll_tx(struct > > > > napi_struct > > > *napi, int budget) > > > > virtqueue_disable_cb(sq->vq); > > > > free_old_xmit(sq, true); > > > > > > > > - if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) > > > > + if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS) { > > > > + if (netif_tx_queue_stopped(txq)) { > > > > + u64_stats_update_begin(&sq->stats.syncp); > > > > + u64_stats_inc(&sq->stats.wake); > > > > + u64_stats_update_end(&sq->stats.syncp); > > > > + } > > > > netif_tx_wake_queue(txq); > > > > + } > > > > > > > > opaque = virtqueue_enable_cb_prepare(sq->vq); > > > > > > > > @@ -4790,6 +4812,8 @@ static void virtnet_get_base_stats(struct > > > > net_device *dev, > > > > > > > > tx->bytes = 0; > > > > tx->packets = 0; > > > > + tx->stop = 0; > > > > + tx->wake = 0; > > > > > > > > if (vi->device_stats_cap & VIRTIO_NET_STATS_TYPE_TX_BASIC) { > > > > tx->hw_drops = 0; > > > > -- > > > > 2.44.0 > > > > > > ^ permalink raw reply [flat|nested] 14+ messages in thread
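A short trace of the path being debated (queue stopped and then restarted immediately inside check_sq_full_and_disable()), assuming the counting as posted in this series:

	ring looks full     -> netif_stop_subqueue()    stop = N + 1, wake = M
	free_old_xmit() reclaims enough descriptors
	room again          -> netif_start_subqueue()   stop = N + 1, wake = M + 1

Counting the wake on this path keeps the pair meaningful: stop == wake means every stall recovered, and stop == wake + 1 means a queue is stopped right now. If the increment were skipped whenever the queue restarts immediately, stop would run ahead of wake by one for every such transient stall, which is the loss of information described above.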