* [Qemu-devel] [PATCH v2 0/3] virtio-net discards TX data after link down
@ 2016-11-09 15:21 yuri.benditovich
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 1/3] net: Add virtio queue interface to update used index from vring state yuri.benditovich
` (2 more replies)
0 siblings, 3 replies; 12+ messages in thread
From: yuri.benditovich @ 2016-11-09 15:21 UTC (permalink / raw)
To: Michael S . Tsirkin, Jason Wang, qemu-devel; +Cc: dmitry, yan
From: Yuri Benditovich <yuri.benditovich@daynix.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1295637
Upon set_link monitor command or upon netdev deletion
virtio-net sends link down indication to the guest
and stops vhost if one is used.
The guest driver can still submit data for TX until it
recognizes the link loss. If these packets are not returned
by the host, a Windows guest will never be able to finish
disable/removal/shutdown. To allow QEMU to discard
these packets, the virtio queue must update its internal
state upon vhost stop.
Changes from v1:
- added dropping of outstanding tx packets for tx=timer
- (mainly for the vhost=off case)
fixed the link-down flow to drop outstanding packets and
ensure tx queue notification is enabled
Yuri Benditovich (3):
net: Add virtio queue interface to update used index from vring state
net: vhost stop updates virtio queue state
net: virtio-net discards TX data after link down
hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
hw/virtio/vhost.c | 1 +
hw/virtio/virtio.c | 5 +++++
include/hw/virtio/virtio.h | 1 +
4 files changed, 35 insertions(+)
--
1.9.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v2 1/3] net: Add virtio queue interface to update used index from vring state
2016-11-09 15:21 [Qemu-devel] [PATCH v2 0/3] virtio-net discards TX data after link down yuri.benditovich
@ 2016-11-09 15:22 ` yuri.benditovich
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 2/3] net: vhost stop updates virtio queue state yuri.benditovich
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down yuri.benditovich
2 siblings, 0 replies; 12+ messages in thread
From: yuri.benditovich @ 2016-11-09 15:22 UTC (permalink / raw)
To: Michael S . Tsirkin, Jason Wang, qemu-devel; +Cc: dmitry, yan
From: Yuri Benditovich <yuri.benditovich@daynix.com>
Bring the virtio queue to the correct internal state for
host-to-guest operations while vhost is temporarily stopped.
Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
---
hw/virtio/virtio.c | 5 +++++
include/hw/virtio/virtio.h | 1 +
2 files changed, 6 insertions(+)
diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
index d48d1a9..7e1274a 100644
--- a/hw/virtio/virtio.c
+++ b/hw/virtio/virtio.c
@@ -1983,6 +1983,11 @@ void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx)
vdev->vq[n].shadow_avail_idx = idx;
}
+void virtio_queue_update_used_idx(VirtIODevice *vdev, int n)
+{
+ vdev->vq[n].used_idx = vring_used_idx(&vdev->vq[n]);
+}
+
void virtio_queue_invalidate_signalled_used(VirtIODevice *vdev, int n)
{
vdev->vq[n].signalled_used_valid = false;
diff --git a/include/hw/virtio/virtio.h b/include/hw/virtio/virtio.h
index b913aac..b9a9d6e 100644
--- a/include/hw/virtio/virtio.h
+++ b/include/hw/virtio/virtio.h
@@ -260,6 +260,7 @@ hwaddr virtio_queue_get_ring_size(VirtIODevice *vdev, int n);
uint16_t virtio_queue_get_last_avail_idx(VirtIODevice *vdev, int n);
void virtio_queue_set_last_avail_idx(VirtIODevice *vdev, int n, uint16_t idx);
void virtio_queue_invalidate_signalled_used(VirtIODevice *vdev, int n);
+void virtio_queue_update_used_idx(VirtIODevice *vdev, int n);
VirtQueue *virtio_get_queue(VirtIODevice *vdev, int n);
uint16_t virtio_get_queue_index(VirtQueue *vq);
EventNotifier *virtio_queue_get_guest_notifier(VirtQueue *vq);
--
1.9.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v2 2/3] net: vhost stop updates virtio queue state
2016-11-09 15:21 [Qemu-devel] [PATCH v2 0/3] virtio-net discards TX data after link down yuri.benditovich
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 1/3] net: Add virtio queue interface to update used index from vring state yuri.benditovich
@ 2016-11-09 15:22 ` yuri.benditovich
2016-11-09 17:07 ` Paolo Bonzini
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down yuri.benditovich
2 siblings, 1 reply; 12+ messages in thread
From: yuri.benditovich @ 2016-11-09 15:22 UTC (permalink / raw)
To: Michael S . Tsirkin, Jason Wang, qemu-devel; +Cc: dmitry, yan
From: Yuri Benditovich <yuri.benditovich@daynix.com>
Make the virtio queue suitable for push operations from QEMU
after vhost has been stopped.
Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
---
hw/virtio/vhost.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index bd051ab..2e990d0 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -963,6 +963,7 @@ static void vhost_virtqueue_stop(struct vhost_dev *dev,
virtio_queue_set_last_avail_idx(vdev, idx, state.num);
}
virtio_queue_invalidate_signalled_used(vdev, idx);
+ virtio_queue_update_used_idx(vdev, idx);
/* In the cross-endian case, we need to reset the vring endianness to
* native as legacy devices expect so by default.
--
1.9.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down
2016-11-09 15:21 [Qemu-devel] [PATCH v2 0/3] virtio-net discards TX data after link down yuri.benditovich
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 1/3] net: Add virtio queue interface to update used index from vring state yuri.benditovich
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 2/3] net: vhost stop updates virtio queue state yuri.benditovich
@ 2016-11-09 15:22 ` yuri.benditovich
2016-11-09 20:28 ` Michael S. Tsirkin
2 siblings, 1 reply; 12+ messages in thread
From: yuri.benditovich @ 2016-11-09 15:22 UTC (permalink / raw)
To: Michael S . Tsirkin, Jason Wang, qemu-devel; +Cc: dmitry, yan
From: Yuri Benditovich <yuri.benditovich@daynix.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1295637
Upon set_link monitor command or upon netdev deletion
virtio-net sends link down indication to the guest
and stops vhost if one is used.
The guest driver can still submit data for TX until it
recognizes the link loss. If these packets are not returned
by the host, a Windows guest will never be able to finish
disable/removal/shutdown.
Now each packet sent by the guest after the NIC has
indicated link down will be completed immediately.
Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
---
hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 06bfe4b..ab4e18a 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -218,6 +218,16 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
}
}
+static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
+{
+ VirtQueueElement *elem;
+ while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
+ virtqueue_push(vq, elem, 0);
+ virtio_notify(vdev, vq);
+ g_free(elem);
+ }
+}
+
static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
{
VirtIONet *n = VIRTIO_NET(vdev);
@@ -262,6 +272,14 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
} else {
qemu_bh_cancel(q->tx_bh);
}
+ if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
+ (queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
+ /* if tx is waiting we are likely have some packets in tx queue
+ * and disabled notification */
+ q->tx_waiting = 0;
+ virtio_queue_set_notification(q->tx_vq, 1);
+ virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
+ }
}
}
}
@@ -1319,6 +1337,11 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
VirtIONet *n = VIRTIO_NET(vdev);
VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
+ if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
+ virtio_net_drop_tx_queue_data(vdev, vq);
+ return;
+ }
+
/* This happens when device was stopped but VCPU wasn't. */
if (!vdev->vm_running) {
q->tx_waiting = 1;
@@ -1345,6 +1368,11 @@ static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
VirtIONet *n = VIRTIO_NET(vdev);
VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
+ if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
+ virtio_net_drop_tx_queue_data(vdev, vq);
+ return;
+ }
+
if (unlikely(q->tx_waiting)) {
return;
}
--
1.9.1
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v2 2/3] net: vhost stop updates virtio queue state
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 2/3] net: vhost stop updates virtio queue state yuri.benditovich
@ 2016-11-09 17:07 ` Paolo Bonzini
2016-11-09 20:12 ` Michael S. Tsirkin
0 siblings, 1 reply; 12+ messages in thread
From: Paolo Bonzini @ 2016-11-09 17:07 UTC (permalink / raw)
To: yuri.benditovich, Michael S . Tsirkin, Jason Wang, qemu-devel; +Cc: dmitry, yan
On 09/11/2016 16:22, yuri.benditovich@daynix.com wrote:
> From: Yuri Benditovich <yuri.benditovich@daynix.com>
>
> Make virtio queue suitable for push operation from qemu
> after vhost was stopped.
>
> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> ---
> hw/virtio/vhost.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> index bd051ab..2e990d0 100644
> --- a/hw/virtio/vhost.c
> +++ b/hw/virtio/vhost.c
> @@ -963,6 +963,7 @@ static void vhost_virtqueue_stop(struct vhost_dev *dev,
> virtio_queue_set_last_avail_idx(vdev, idx, state.num);
> }
> virtio_queue_invalidate_signalled_used(vdev, idx);
> + virtio_queue_update_used_idx(vdev, idx);
All three functions virtio_queue_set_last_avail_idx,
virtio_queue_invalidate_signalled_used, virtio_queue_update_used_idx are
only used here.
Plus, virtio_queue_set_last_avail_idx is always called in practice,
since the only failure modes for VHOST_GET_VRING_BASE are EFAULT and
out-of-range virtqueue number. In both cases QEMU can ignore the other
two steps.
So perhaps we should have a single function virtio_queue_set_vring_base,
taking a vhost_vring_state struct? It can do all three.
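A minimal sketch of what such a combined helper could look like, using a simplified stand-in for QEMU's VirtQueue (the struct fields and the stand-in for reading guest memory are illustrative assumptions, not the actual QEMU structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for the host-side virtqueue bookkeeping. */
typedef struct {
    uint16_t last_avail_idx;
    uint16_t shadow_avail_idx;
    uint16_t used_idx;           /* host's cached copy of used.idx */
    bool     signalled_used_valid;
    uint16_t vring_used_idx;     /* stands in for reading used.idx
                                    from guest memory */
} ToyVirtQueue;

/* Hypothetical single entry point doing all three steps after
 * VHOST_GET_VRING_BASE has returned the vring base in 'num'. */
static void virtio_queue_set_vring_base(ToyVirtQueue *vq, uint16_t num)
{
    vq->last_avail_idx = num;          /* set_last_avail_idx */
    vq->shadow_avail_idx = num;
    vq->signalled_used_valid = false;  /* invalidate_signalled_used */
    vq->used_idx = vq->vring_used_idx; /* update_used_idx */
}
```

The point is that the three updates always happen together, so a single call site in vhost_virtqueue_stop() would suffice.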
Paolo
>
> /* In the cross-endian case, we need to reset the vring endianness to
> * native as legacy devices expect so by default.
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v2 2/3] net: vhost stop updates virtio queue state
2016-11-09 17:07 ` Paolo Bonzini
@ 2016-11-09 20:12 ` Michael S. Tsirkin
0 siblings, 0 replies; 12+ messages in thread
From: Michael S. Tsirkin @ 2016-11-09 20:12 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: yuri.benditovich, Jason Wang, qemu-devel, dmitry, yan
On Wed, Nov 09, 2016 at 06:07:29PM +0100, Paolo Bonzini wrote:
>
>
> On 09/11/2016 16:22, yuri.benditovich@daynix.com wrote:
> > From: Yuri Benditovich <yuri.benditovich@daynix.com>
> >
> > Make virtio queue suitable for push operation from qemu
> > after vhost was stopped.
> >
> > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > ---
> > hw/virtio/vhost.c | 1 +
> > 1 file changed, 1 insertion(+)
> >
> > diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
> > index bd051ab..2e990d0 100644
> > --- a/hw/virtio/vhost.c
> > +++ b/hw/virtio/vhost.c
> > @@ -963,6 +963,7 @@ static void vhost_virtqueue_stop(struct vhost_dev *dev,
> > virtio_queue_set_last_avail_idx(vdev, idx, state.num);
> > }
> > virtio_queue_invalidate_signalled_used(vdev, idx);
> > + virtio_queue_update_used_idx(vdev, idx);
>
> All three functions virtio_queue_set_last_avail_idx,
> virtio_queue_invalidate_signalled_used, virtio_queue_update_used_idx are
> only used here.
>
> Plus, virtio_queue_set_last_avail_idx is always called in practice,
> since the only failure modes for VHOST_GET_VRING_BASE are EFAULT and
> out-of-range virtqueue number. In both cases QEMU can ignore the other
> two steps.
>
> So perhaps we should have a single function virtio_queue_set_vring_base,
> taking a vhost_vring_state struct? It can do all three.
>
> Paolo
I don't object but I think the bugfix is helpful in this QEMU
version, and this change is minimally intrusive.
> >
> > /* In the cross-endian case, we need to reset the vring endianness to
> > * native as legacy devices expect so by default.
> >
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down yuri.benditovich
@ 2016-11-09 20:28 ` Michael S. Tsirkin
2016-11-09 23:56 ` Yuri Benditovich
0 siblings, 1 reply; 12+ messages in thread
From: Michael S. Tsirkin @ 2016-11-09 20:28 UTC (permalink / raw)
To: yuri.benditovich; +Cc: Jason Wang, qemu-devel, dmitry, yan
On Wed, Nov 09, 2016 at 05:22:02PM +0200, yuri.benditovich@daynix.com wrote:
> From: Yuri Benditovich <yuri.benditovich@daynix.com>
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1295637
> Upon set_link monitor command or upon netdev deletion
> virtio-net sends link down indication to the guest
> and stops vhost if one is used.
> Guest driver can still submit data for TX until it
> recognizes link loss. If these packets not returned by
> the host, the Windows guest will never be able to finish
> disable/removal/shutdown.
> Now each packet sent by guest after NIC indicated link
> down will be completed immediately.
>
> Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> ---
> hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
> 1 file changed, 28 insertions(+)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 06bfe4b..ab4e18a 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -218,6 +218,16 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> }
> }
>
> +static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
> +{
> + VirtQueueElement *elem;
> + while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
> + virtqueue_push(vq, elem, 0);
> + virtio_notify(vdev, vq);
> + g_free(elem);
> + }
> +}
> +
> static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> {
> VirtIONet *n = VIRTIO_NET(vdev);
I don't like this part. This does too much queue parsing,
I would like to just copy head from avail to used ring.
For example, people want to support rings >1K in size.
Let's add bool virtqueue_drop(vq) and be done with it.
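As a rough illustration of what a virtqueue_drop() along those lines might do, here is a toy split-ring model that completes the next available head with length 0 without parsing the descriptor chain at all (ring layout and names here are a simplified sketch, not QEMU's real virtqueue code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define QSZ 4  /* tiny ring for illustration */

/* Toy split virtqueue: only the index bookkeeping, no descriptors. */
typedef struct {
    uint16_t avail_ring[QSZ];  /* heads published by the guest */
    uint16_t avail_idx;        /* guest's producer index */
    uint16_t last_avail_idx;   /* host's consumer index */
    struct { uint16_t id; uint32_t len; } used_ring[QSZ];
    uint16_t used_idx;
} ToyVq;

/* Drop one outstanding buffer: copy the head from the avail ring
 * straight to the used ring with length 0, touching nothing else.
 * Returns false once the queue is empty. */
static bool virtqueue_drop(ToyVq *vq)
{
    if (vq->last_avail_idx == vq->avail_idx) {
        return false;                             /* nothing outstanding */
    }
    uint16_t head = vq->avail_ring[vq->last_avail_idx % QSZ];
    vq->used_ring[vq->used_idx % QSZ].id = head;  /* copy head over */
    vq->used_ring[vq->used_idx % QSZ].len = 0;    /* nothing written */
    vq->last_avail_idx++;
    vq->used_idx++;
    return true;
}
```

This avoids mapping and walking descriptor chains, which is what makes it safe for large rings.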
> @@ -262,6 +272,14 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> } else {
> qemu_bh_cancel(q->tx_bh);
> }
> + if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
> + (queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> + /* if tx is waiting we are likely have some packets in tx queue
> + * and disabled notification */
> + q->tx_waiting = 0;
> + virtio_queue_set_notification(q->tx_vq, 1);
> + virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
> + }
> }
> }
> }
OK but what if guest keeps sending packets? What will drop them?
> @@ -1319,6 +1337,11 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
> VirtIONet *n = VIRTIO_NET(vdev);
> VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
>
> + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> + virtio_net_drop_tx_queue_data(vdev, vq);
> + return;
> + }
> +
> /* This happens when device was stopped but VCPU wasn't. */
> if (!vdev->vm_running) {
> q->tx_waiting = 1;
> @@ -1345,6 +1368,11 @@ static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
> VirtIONet *n = VIRTIO_NET(vdev);
> VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
>
> + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> + virtio_net_drop_tx_queue_data(vdev, vq);
> + return;
> + }
> +
> if (unlikely(q->tx_waiting)) {
> return;
> }
> --
> 1.9.1
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down
2016-11-09 20:28 ` Michael S. Tsirkin
@ 2016-11-09 23:56 ` Yuri Benditovich
2016-11-10 13:54 ` Michael S. Tsirkin
0 siblings, 1 reply; 12+ messages in thread
From: Yuri Benditovich @ 2016-11-09 23:56 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Jason Wang, qemu-devel, Dmitry Fleytman, Yan Vugenfirer
On Wed, Nov 9, 2016 at 10:28 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Wed, Nov 09, 2016 at 05:22:02PM +0200, yuri.benditovich@daynix.com wrote:
> > From: Yuri Benditovich <yuri.benditovich@daynix.com>
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1295637
> > Upon set_link monitor command or upon netdev deletion
> > virtio-net sends link down indication to the guest
> > and stops vhost if one is used.
> > Guest driver can still submit data for TX until it
> > recognizes link loss. If these packets not returned by
> > the host, the Windows guest will never be able to finish
> > disable/removal/shutdown.
> > Now each packet sent by guest after NIC indicated link
> > down will be completed immediately.
> >
> > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > ---
> > hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
> > 1 file changed, 28 insertions(+)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 06bfe4b..ab4e18a 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -218,6 +218,16 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > }
> > }
> >
> > +static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
> > +{
> > + VirtQueueElement *elem;
> > + while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
> > + virtqueue_push(vq, elem, 0);
> > + virtio_notify(vdev, vq);
> > + g_free(elem);
> > + }
> > +}
> > +
> > static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > {
> > VirtIONet *n = VIRTIO_NET(vdev);
>
> I don't like this part. This does too much queue parsing,
> I would like to just copy head from avail to used ring.
>
> For example, people want to support rings >1K in size.
> Let's add bool virtqueue_drop(vq) and be done with it.
>
Please note that this code runs only when the link is down.
It was too complicated for me to write a simpler procedure
with the same result.
>
> > @@ -262,6 +272,14 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > } else {
> > qemu_bh_cancel(q->tx_bh);
> > }
> > + if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
> > + (queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> > + /* if tx is waiting we are likely have some packets in tx queue
> > + * and disabled notification */
> > + q->tx_waiting = 0;
> > + virtio_queue_set_notification(q->tx_vq, 1);
> > + virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
> > + }
> > }
> > }
> > }
>
> OK but what if guest keeps sending packets? What will drop them?
>
This code fixes the following problem in the original code (example):
we are running with vhost=off and receive a kick -> virtio_net_handle_tx_timer
-> tx_waiting=1, notification disabled, timer set.
Now we receive link loss, cancel the timer, and stay with packets in the
queue and with notification disabled. Nobody will return them.
(Easy to reproduce with the timer set to 5 ms.)
The added code drops the packets we already have and ensures we report
them as completed to the guest. If the guest keeps sending packets, they
will be dropped in virtio_net_handle_tx_timer and virtio_net_handle_tx_bh
(the procedures just below), as the link is already down.
> > @@ -1319,6 +1337,11 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
> > VirtIONet *n = VIRTIO_NET(vdev);
> > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> >
> > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > + virtio_net_drop_tx_queue_data(vdev, vq);
> > + return;
> > + }
> > +
> > /* This happens when device was stopped but VCPU wasn't. */
> > if (!vdev->vm_running) {
> > q->tx_waiting = 1;
> > @@ -1345,6 +1368,11 @@ static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
> > VirtIONet *n = VIRTIO_NET(vdev);
> > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> >
> > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > + virtio_net_drop_tx_queue_data(vdev, vq);
> > + return;
> > + }
> > +
> > if (unlikely(q->tx_waiting)) {
> > return;
> > }
> > --
> > 1.9.1
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down
2016-11-09 23:56 ` Yuri Benditovich
@ 2016-11-10 13:54 ` Michael S. Tsirkin
2016-11-10 20:56 ` Yuri Benditovich
2016-11-23 9:52 ` Yuri Benditovich
0 siblings, 2 replies; 12+ messages in thread
From: Michael S. Tsirkin @ 2016-11-10 13:54 UTC (permalink / raw)
To: Yuri Benditovich; +Cc: Jason Wang, qemu-devel, Dmitry Fleytman, Yan Vugenfirer
On Thu, Nov 10, 2016 at 01:56:05AM +0200, Yuri Benditovich wrote:
>
>
> On Wed, Nov 9, 2016 at 10:28 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Wed, Nov 09, 2016 at 05:22:02PM +0200, yuri.benditovich@daynix.com wrote:
> > From: Yuri Benditovich <yuri.benditovich@daynix.com>
> >
> > https://bugzilla.redhat.com/show_bug.cgi?id=1295637
> > Upon set_link monitor command or upon netdev deletion
> > virtio-net sends link down indication to the guest
> > and stops vhost if one is used.
> > Guest driver can still submit data for TX until it
> > recognizes link loss. If these packets not returned by
> > the host, the Windows guest will never be able to finish
> > disable/removal/shutdown.
> > Now each packet sent by guest after NIC indicated link
> > down will be completed immediately.
> >
> > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > ---
> > hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
> > 1 file changed, 28 insertions(+)
> >
> > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > index 06bfe4b..ab4e18a 100644
> > --- a/hw/net/virtio-net.c
> > +++ b/hw/net/virtio-net.c
> > @@ -218,6 +218,16 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > }
> > }
> >
> > +static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
> > +{
> > + VirtQueueElement *elem;
> > + while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
> > + virtqueue_push(vq, elem, 0);
> > + virtio_notify(vdev, vq);
> > + g_free(elem);
> > + }
> > +}
> > +
> > static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > {
> > VirtIONet *n = VIRTIO_NET(vdev);
>
> I don't like this part. This does too much queue parsing,
> I would like to just copy head from avail to used ring.
>
> For example, people want to support rings >1K in size.
> Let's add bool virtqueue_drop(vq) and be done with it.
>
>
> Please note that this code works only when link is down.
> For me this was too complicated to write simpler procedure
> with the same result.
Yes - it's somewhat problematic and risky that we process
the ring in qemu, but I don't see an easy way around that.
But at least let's limit the processing and assumptions we
make.
>
>
> > @@ -262,6 +272,14 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > } else {
> > qemu_bh_cancel(q->tx_bh);
> > }
> > + if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
> > + (queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> > + /* if tx is waiting we are likely have some packets in tx queue

... we likely have some ...

> > + * and disabled notification */

what does this refer to?
> > + q->tx_waiting = 0;
> > + virtio_queue_set_notification(q->tx_vq, 1);
> > + virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
> > + }
> > }
> > }
> > }
>
> OK but what if guest keeps sending packets? What will drop them?
>
>
> This code fixes following problem in original code (example):
> We are in vhost=off and receive kick ->virtio_net_handle_tx_timer
> -> tx_waiting=1, notification disabled, timer set
> Now we receive link loss, cancel the timer and stay with packets in the queue
> and with
> disabled notification. Nobody will return them. (easy to reproduce with timer
> set to 5ms)
>
> Added code drops packets we already have and ensure we will report them
> as completed to guest. If guest keeps sending packets, they will be dropped
> in virtio_net_handle_tx_timer and in virtio_net_handle_tx_bh (in procedures
> just below)
> as we already with link down.
Yes I get that. I'm just not 100% sure all paths have
us listen on the ioeventfd and handle kicks without races -
this was previously assumed not to matter.
>
>
> > @@ -1319,6 +1337,11 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
> > VirtIONet *n = VIRTIO_NET(vdev);
> > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> >
> > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > + virtio_net_drop_tx_queue_data(vdev, vq);
> > + return;
> > + }
> > +
> > /* This happens when device was stopped but VCPU wasn't. */
> > if (!vdev->vm_running) {
> > q->tx_waiting = 1;
> > @@ -1345,6 +1368,11 @@ static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
> > VirtIONet *n = VIRTIO_NET(vdev);
> > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> >
> > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > + virtio_net_drop_tx_queue_data(vdev, vq);
> > + return;
> > + }
> > +
> > if (unlikely(q->tx_waiting)) {
> > return;
> > }
> > --
> > 1.9.1
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down
2016-11-10 13:54 ` Michael S. Tsirkin
@ 2016-11-10 20:56 ` Yuri Benditovich
2016-11-23 9:52 ` Yuri Benditovich
1 sibling, 0 replies; 12+ messages in thread
From: Yuri Benditovich @ 2016-11-10 20:56 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Jason Wang, qemu-devel, Dmitry Fleytman, Yan Vugenfirer
On Thu, Nov 10, 2016 at 3:54 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Thu, Nov 10, 2016 at 01:56:05AM +0200, Yuri Benditovich wrote:
> >
> >
> > On Wed, Nov 9, 2016 at 10:28 PM, Michael S. Tsirkin <mst@redhat.com>
> wrote:
> >
> > On Wed, Nov 09, 2016 at 05:22:02PM +0200, yuri.benditovich@daynix.com wrote:
> > > From: Yuri Benditovich <yuri.benditovich@daynix.com>
> > >
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1295637
> > > Upon set_link monitor command or upon netdev deletion
> > > virtio-net sends link down indication to the guest
> > > and stops vhost if one is used.
> > > Guest driver can still submit data for TX until it
> > > recognizes link loss. If these packets not returned by
> > > the host, the Windows guest will never be able to finish
> > > disable/removal/shutdown.
> > > Now each packet sent by guest after NIC indicated link
> > > down will be completed immediately.
> > >
> > > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > > ---
> > > hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
> > > 1 file changed, 28 insertions(+)
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 06bfe4b..ab4e18a 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -218,6 +218,16 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > > }
> > > }
> > >
> > > +static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
> > > +{
> > > + VirtQueueElement *elem;
> > > + while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
> > > + virtqueue_push(vq, elem, 0);
> > > + virtio_notify(vdev, vq);
> > > + g_free(elem);
> > > + }
> > > +}
> > > +
> > > static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > > {
> > > VirtIONet *n = VIRTIO_NET(vdev);
> >
> > I don't like this part. This does too much queue parsing,
> > I would like to just copy head from avail to used ring.
> >
> > For example, people want to support rings >1K in size.
> > Let's add bool virtqueue_drop(vq) and be done with it.
> >
> >
> > Please note that this code works only when link is down.
> > For me this was too complicated to write simpler procedure
> > with the same result.
>
> Yes - it's somewhat problematic and risky that we process
> the ring in qemu, but I don't see an easy way around that.
> But at least let's limit the processing and assumptions we
> make.
>
>
> >
> >
> > > @@ -262,6 +272,14 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > > } else {
> > > qemu_bh_cancel(q->tx_bh);
> > > }
> > > + if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
> > > + (queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> >
> > > + /* if tx is waiting we are likely have some packets in tx queue
>
> ... we likely have some ...
>
> > > + * and disabled notification */
>
> what does this refer to?
>
virtio-net.c processes TX in a "tic-tac" scheme, for example:
handle_tx_bh sets tx_waiting, disables queue notification, and schedules
the bh; then tx_bh re-enables queue notification, flushes TX, and clears
tx_waiting. While queue notification is disabled, TX completion will not
raise a host interrupt.
So, when we discard the bh upon link down between "tic" and "tac", it is
good to re-enable notification, drop the packets, and move back to the
"waiting for tx" state.
A similar "tic-tac" cycle works in the timer case.
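The "tic-tac" cycle and the link-down fix can be sketched as a toy state model (the struct, function names, and packet counters are illustrative stand-ins, not the actual QEMU code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the virtio-net TX "tic-tac" scheme. */
typedef struct {
    bool tx_waiting;
    bool notification_enabled;
    bool bh_scheduled;
    int  queued_pkts;     /* packets sitting in the tx queue */
    int  completed_pkts;  /* packets returned to the guest */
} ToyTxQueue;

static void handle_tx_kick(ToyTxQueue *q)     /* "tic" */
{
    q->tx_waiting = true;
    q->notification_enabled = false;          /* no interrupts meanwhile */
    q->bh_scheduled = true;
}

static void tx_bh(ToyTxQueue *q)              /* "tac" */
{
    q->notification_enabled = true;
    q->completed_pkts += q->queued_pkts;      /* flush the queue */
    q->queued_pkts = 0;
    q->tx_waiting = false;
}

/* What the patch adds on link down: cancel the bh, undo the "tic"
 * state, and complete (drop) whatever is still queued. */
static void link_down(ToyTxQueue *q)
{
    q->bh_scheduled = false;                  /* qemu_bh_cancel */
    q->tx_waiting = false;
    q->notification_enabled = true;           /* set_notification(1) */
    q->completed_pkts += q->queued_pkts;      /* drop_tx_queue_data */
    q->queued_pkts = 0;
}
```

Without the link_down() step, a queue caught between "tic" and "tac" keeps its packets forever with notification disabled, which is exactly the hang described above.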
>
> > > + q->tx_waiting = 0;
> > > + virtio_queue_set_notification(q->tx_vq, 1);
> > > + virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
> > > + }
> > > }
> > > }
> > > }
> >
> > OK but what if guest keeps sending packets? What will drop them?
> >
> >
> > This code fixes following problem in original code (example):
> > We are in vhost=off and receive kick ->virtio_net_handle_tx_timer
> > -> tx_waiting=1, notification disabled, timer set
> > Now we receive link loss, cancel the timer and stay with packets in the
> queue
> > and with
> > disabled notification. Nobody will return them. (easy to reproduce with
> timer
> > set to 5ms)
> >
> > Added code drops packets we already have and ensure we will report them
> > as completed to guest. If guest keeps sending packets, they will be
> dropped
> > in virtio_net_handle_tx_timer and in virtio_net_handle_tx_bh (in
> procedures
> > just below)
> > as we already with link down.
>
> Yes I get that. I'm just not 100% sure all paths have
> us listen on the ioeventfd and handle kicks without races -
> this was previously assumed not to matter.
>
>
> >
> >
> > > @@ -1319,6 +1337,11 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
> > > VirtIONet *n = VIRTIO_NET(vdev);
> > > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> > >
> > > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > > + virtio_net_drop_tx_queue_data(vdev, vq);
> > > + return;
> > > + }
> > > +
> > > /* This happens when device was stopped but VCPU wasn't. */
> > > if (!vdev->vm_running) {
> > > q->tx_waiting = 1;
> > > @@ -1345,6 +1368,11 @@ static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
> > > VirtIONet *n = VIRTIO_NET(vdev);
> > > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> > >
> > > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > > + virtio_net_drop_tx_queue_data(vdev, vq);
> > > + return;
> > > + }
> > > +
> > > if (unlikely(q->tx_waiting)) {
> > > return;
> > > }
> > > --
> > > 1.9.1
> >
> >
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down
2016-11-10 13:54 ` Michael S. Tsirkin
2016-11-10 20:56 ` Yuri Benditovich
@ 2016-11-23 9:52 ` Yuri Benditovich
2016-11-23 13:16 ` Michael S. Tsirkin
1 sibling, 1 reply; 12+ messages in thread
From: Yuri Benditovich @ 2016-11-23 9:52 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Jason Wang, qemu-devel, Dmitry Fleytman, Yan Vugenfirer
On Thu, Nov 10, 2016 at 3:54 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Thu, Nov 10, 2016 at 01:56:05AM +0200, Yuri Benditovich wrote:
> >
> >
> > On Wed, Nov 9, 2016 at 10:28 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Wed, Nov 09, 2016 at 05:22:02PM +0200, yuri.benditovich@daynix.com
> > wrote:
> > > From: Yuri Benditovich <yuri.benditovich@daynix.com>
> > >
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1295637
> > > Upon set_link monitor command or upon netdev deletion
> > > virtio-net sends link down indication to the guest
> > > and stops vhost if one is used.
> > > Guest driver can still submit data for TX until it
> > > recognizes link loss. If these packets are not returned by
> > > the host, the Windows guest will never be able to finish
> > > disable/removal/shutdown.
> > > Now each packet sent by guest after NIC indicated link
> > > down will be completed immediately.
> > >
> > > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > > ---
> > > hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
> > > 1 file changed, 28 insertions(+)
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 06bfe4b..ab4e18a 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -218,6 +218,16 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > > }
> > > }
> > >
> > > +static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
> > > +{
> > > + VirtQueueElement *elem;
> > > + while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
> > > + virtqueue_push(vq, elem, 0);
> > > + virtio_notify(vdev, vq);
> > > + g_free(elem);
> > > + }
> > > +}
> > > +
> > > static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > > {
> > > VirtIONet *n = VIRTIO_NET(vdev);
> >
> > I don't like this part. This does too much queue parsing,
> > I would like to just copy head from avail to used ring.
> >
> > For example, people want to support rings >1K in size.
> > Let's add bool virtqueue_drop(vq) and be done with it.
> >
> >
> > Please note that this code works only when the link is down.
> > For me it was too complicated to write a simpler procedure
> > with the same result.
>
> Yes - it's somewhat problematic and risky that we process
> the ring in qemu, but I don't see an easy way around that.
> But at least let's limit the processing and assumptions we
> make.
>
>
So, what is the status, and how do we make progress?
What kind of change to the patch do you suggest?
Thanks,
Yuri
>
> >
> >
> > > @@ -262,6 +272,14 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > > } else {
> > > qemu_bh_cancel(q->tx_bh);
> > > }
> > > + if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
> > > + (queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> > > + /* if tx is waiting we are likely have some packets in tx queue
>
> ... we likely have some ...
>
> > > + * and disabled notification */
>
> what does this refer to?
>
> > > + q->tx_waiting = 0;
> > > + virtio_queue_set_notification(q->tx_vq, 1);
> > > + virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
> > > + }
> > > }
> > > }
> > > }
> >
> > OK but what if guest keeps sending packets? What will drop them?
> >
> >
> > This code fixes the following problem in the original code (example):
> > We are in vhost=off and receive a kick -> virtio_net_handle_tx_timer
> > -> tx_waiting=1, notification disabled, timer set.
> > Now we receive link loss, cancel the timer, and stay with packets in the
> > queue and with notification disabled. Nobody will return them. (easy to
> > reproduce with the timer set to 5ms)
> >
> > The added code drops the packets we already have and ensures we will report
> > them as completed to the guest. If the guest keeps sending packets, they
> > will be dropped in virtio_net_handle_tx_timer and in virtio_net_handle_tx_bh
> > (in the procedures just below), as the link is already down.
>
> Yes I get that. I'm just not 100% sure all paths have
> us listen on the ioeventfd and handle kicks without races -
> this was previously assumed not to matter.
>
>
> >
> >
> > > @@ -1319,6 +1337,11 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
> > > VirtIONet *n = VIRTIO_NET(vdev);
> > > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> > >
> > > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > > + virtio_net_drop_tx_queue_data(vdev, vq);
> > > + return;
> > > + }
> > > +
> > > /* This happens when device was stopped but VCPU wasn't. */
> > > if (!vdev->vm_running) {
> > > q->tx_waiting = 1;
> > > @@ -1345,6 +1368,11 @@ static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
> > > VirtIONet *n = VIRTIO_NET(vdev);
> > > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> > >
> > > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > > + virtio_net_drop_tx_queue_data(vdev, vq);
> > > + return;
> > > + }
> > > +
> > > if (unlikely(q->tx_waiting)) {
> > > return;
> > > }
> > > --
> > > 1.9.1
> >
> >
>
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down
2016-11-23 9:52 ` Yuri Benditovich
@ 2016-11-23 13:16 ` Michael S. Tsirkin
0 siblings, 0 replies; 12+ messages in thread
From: Michael S. Tsirkin @ 2016-11-23 13:16 UTC (permalink / raw)
To: Yuri Benditovich; +Cc: Jason Wang, qemu-devel, Dmitry Fleytman, Yan Vugenfirer
On Wed, Nov 23, 2016 at 11:52:25AM +0200, Yuri Benditovich wrote:
>
>
> On Thu, Nov 10, 2016 at 3:54 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Nov 10, 2016 at 01:56:05AM +0200, Yuri Benditovich wrote:
> >
> >
> > On Wed, Nov 9, 2016 at 10:28 PM, Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Wed, Nov 09, 2016 at 05:22:02PM +0200, yuri.benditovich@daynix.com
> > wrote:
> > > From: Yuri Benditovich <yuri.benditovich@daynix.com>
> > >
> > > https://bugzilla.redhat.com/show_bug.cgi?id=1295637
> > > Upon set_link monitor command or upon netdev deletion
> > > virtio-net sends link down indication to the guest
> > > and stops vhost if one is used.
> > > Guest driver can still submit data for TX until it
> > > recognizes link loss. If these packets are not returned by
> > > the host, the Windows guest will never be able to finish
> > > disable/removal/shutdown.
> > > Now each packet sent by guest after NIC indicated link
> > > down will be completed immediately.
> > >
> > > Signed-off-by: Yuri Benditovich <yuri.benditovich@daynix.com>
> > > ---
> > > hw/net/virtio-net.c | 28 ++++++++++++++++++++++++++++
> > > 1 file changed, 28 insertions(+)
> > >
> > > diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > index 06bfe4b..ab4e18a 100644
> > > --- a/hw/net/virtio-net.c
> > > +++ b/hw/net/virtio-net.c
> > > @@ -218,6 +218,16 @@ static void virtio_net_vnet_endian_status(VirtIONet *n, uint8_t status)
> > > }
> > > }
> > >
> > > +static void virtio_net_drop_tx_queue_data(VirtIODevice *vdev, VirtQueue *vq)
> > > +{
> > > + VirtQueueElement *elem;
> > > + while ((elem = virtqueue_pop(vq, sizeof(VirtQueueElement)))) {
> > > + virtqueue_push(vq, elem, 0);
> > > + virtio_notify(vdev, vq);
> > > + g_free(elem);
> > > + }
> > > +}
> > > +
> > > static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > > {
> > > VirtIONet *n = VIRTIO_NET(vdev);
> >
> > I don't like this part. This does too much queue parsing,
> > I would like to just copy head from avail to used ring.
> >
> > For example, people want to support rings >1K in size.
> > Let's add bool virtqueue_drop(vq) and be done with it.
> >
> >
> > Please note that this code works only when the link is down.
> > For me it was too complicated to write a simpler procedure
> > with the same result.
>
> Yes - it's somewhat problematic and risky that we process
> the ring in qemu, but I don't see an easy way around that.
> But at least let's limit the processing and assumptions we
> make.
>
>
>
> So, what is the status, and how do we make progress?
> What kind of change to the patch do you suggest?
>
> Thanks,
> Yuri
Add an API that copies entries from avail to used ring
without looking at the desc buffer.
>
>
> >
> >
> > > @@ -262,6 +272,14 @@ static void virtio_net_set_status(struct VirtIODevice *vdev, uint8_t status)
> > > } else {
> > > qemu_bh_cancel(q->tx_bh);
> > > }
> > > + if ((n->status & VIRTIO_NET_S_LINK_UP) == 0 &&
> > > + (queue_status & VIRTIO_CONFIG_S_DRIVER_OK)) {
> > > + /* if tx is waiting we are likely have some packets in tx queue
>
> ... we likely have some ...
>
> > > + * and disabled notification */
>
> what does this refer to?
>
> > > + q->tx_waiting = 0;
> > > + virtio_queue_set_notification(q->tx_vq, 1);
> > > + virtio_net_drop_tx_queue_data(vdev, q->tx_vq);
> > > + }
> > > }
> > > }
> > > }
> >
> > OK but what if guest keeps sending packets? What will drop them?
> >
> >
> > This code fixes the following problem in the original code (example):
> > We are in vhost=off and receive a kick -> virtio_net_handle_tx_timer
> > -> tx_waiting=1, notification disabled, timer set.
> > Now we receive link loss, cancel the timer, and stay with packets in the
> > queue and with notification disabled. Nobody will return them. (easy to
> > reproduce with the timer set to 5ms)
> >
> > The added code drops the packets we already have and ensures we will report
> > them as completed to the guest. If the guest keeps sending packets, they
> > will be dropped in virtio_net_handle_tx_timer and in virtio_net_handle_tx_bh
> > (in the procedures just below), as the link is already down.
>
> Yes I get that. I'm just not 100% sure all paths have
> us listen on the ioeventfd and handle kicks without races -
> this was previously assumed not to matter.
>
>
> >
> >
> > > @@ -1319,6 +1337,11 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
> > > VirtIONet *n = VIRTIO_NET(vdev);
> > > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> > >
> > > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > > + virtio_net_drop_tx_queue_data(vdev, vq);
> > > + return;
> > > + }
> > > +
> > > /* This happens when device was stopped but VCPU wasn't. */
> > > if (!vdev->vm_running) {
> > > q->tx_waiting = 1;
> > > @@ -1345,6 +1368,11 @@ static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
> > > VirtIONet *n = VIRTIO_NET(vdev);
> > > VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];
> > >
> > > + if (unlikely((n->status & VIRTIO_NET_S_LINK_UP) == 0)) {
> > > + virtio_net_drop_tx_queue_data(vdev, vq);
> > > + return;
> > > + }
> > > +
> > > if (unlikely(q->tx_waiting)) {
> > > return;
> > > }
> > > --
> > > 1.9.1
> >
> >
>
>
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2016-11-23 13:16 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-09 15:21 [Qemu-devel] [PATCH v2 0/3] virtio-net discards TX data after link down yuri.benditovich
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 1/3] net: Add virtio queue interface to update used index from vring state yuri.benditovich
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 2/3] net: vhost stop updates virtio queue state yuri.benditovich
2016-11-09 17:07 ` Paolo Bonzini
2016-11-09 20:12 ` Michael S. Tsirkin
2016-11-09 15:22 ` [Qemu-devel] [PATCH v2 3/3] net: virtio-net discards TX data after link down yuri.benditovich
2016-11-09 20:28 ` Michael S. Tsirkin
2016-11-09 23:56 ` Yuri Benditovich
2016-11-10 13:54 ` Michael S. Tsirkin
2016-11-10 20:56 ` Yuri Benditovich
2016-11-23 9:52 ` Yuri Benditovich
2016-11-23 13:16 ` Michael S. Tsirkin