* [Qemu-devel] [PATCH v2] virtio_net: flush uncompleted TX on reset
From: Greg Kurz @ 2018-03-16 12:07 UTC
To: qemu-devel; +Cc: Michael S. Tsirkin, Jason Wang, R Nageswara Sastry
If the backend could not transmit a packet right away for some reason,
the packet is queued for asynchronous sending. The corresponding vq
element is tracked in the async_tx.elem field of the VirtIONetQueue,
for later freeing when the transmission is complete.
If a reset happens before completion, virtio_net_tx_complete() will push
async_tx.elem back to the guest anyway, and we end up with the inuse
counter of the vq equal to -1. The next call to virtqueue_pop() is then
likely to fail with "Virtqueue size exceeded".
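For illustration only, a minimal stand-alone model of the accounting
involved (the ToyVirtQueue type and helpers below are invented for this
sketch; they are not the actual QEMU implementation):

    #include <stdbool.h>
    #include <stdio.h>

    typedef struct {
        unsigned int num;   /* ring size */
        int inuse;          /* elements popped but not yet pushed back */
    } ToyVirtQueue;

    static void toy_reset(ToyVirtQueue *vq)
    {
        vq->inuse = 0;      /* device reset forgets the pending element */
    }

    static void toy_tx_complete(ToyVirtQueue *vq)
    {
        vq->inuse--;        /* stale completion pushes the element back anyway */
    }

    static bool toy_pop_allowed(const ToyVirtQueue *vq)
    {
        /* -1 seen as unsigned exceeds any ring size, hence the
         * "Virtqueue size exceeded" error on the next pop */
        return (unsigned int)vq->inuse < vq->num;
    }

    int main(void)
    {
        ToyVirtQueue vq = { .num = 256, .inuse = 1 }; /* one async TX pending */

        toy_reset(&vq);        /* guest resets the device first... */
        toy_tx_complete(&vq);  /* ...then the stale completion fires */
        printf("pop allowed: %d\n", toy_pop_allowed(&vq)); /* prints 0 */
        return 0;
    }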
This can be reproduced easily by starting a guest with a hubport backend
that is not connected to a functional network, e.g.,
-device virtio-net-pci,netdev=hub0 -netdev hubport,id=hub0,hubid=0
and no other -netdev hubport,hubid=0 on the command line.
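A complete reproducer invocation might look like this (binary, memory
size and disk image are placeholders; any guest that transmits over the
virtio-net device will do):

    qemu-system-x86_64 -m 1G \
        -drive file=guest.img,if=virtio \
        -device virtio-net-pci,netdev=hub0 \
        -netdev hubport,id=hub0,hubid=0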
The appropriate fix is to ensure that such an asynchronous transmission
cannot survive a device reset. So for all queues, we first try to send
the packet again, and we purge it if the backend still cannot deliver it.
Reported-by: R. Nageswara Sastry <nasastry@in.ibm.com>
Buglink: https://github.com/open-power-host-os/qemu/issues/37
Signed-off-by: Greg Kurz <groug@kaod.org>
Tested-by: R. Nageswara Sastry <nasastry@in.ibm.com>
---
v2: - make qemu_flush_or_purge_queued_packets() extern and use it
- reworded reproducer paragraph in changelog
---
hw/net/virtio-net.c | 8 ++++++++
include/net/net.h | 1 +
net/net.c | 1 -
3 files changed, 9 insertions(+), 1 deletion(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 188744e17d57..e5ed35489380 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -422,6 +422,7 @@ static RxFilterInfo *virtio_net_query_rxfilter(NetClientState *nc)
static void virtio_net_reset(VirtIODevice *vdev)
{
VirtIONet *n = VIRTIO_NET(vdev);
+ int i;
/* Reset back to compatibility mode */
n->promisc = 1;
@@ -445,6 +446,13 @@ static void virtio_net_reset(VirtIODevice *vdev)
memcpy(&n->mac[0], &n->nic->conf->macaddr, sizeof(n->mac));
qemu_format_nic_info_str(qemu_get_queue(n->nic), n->mac);
memset(n->vlans, 0, MAX_VLAN >> 3);
+
+ /* Flush any async TX */
+ for (i = 0; i < n->max_queues; i++) {
+ NetClientState *nc = qemu_get_subqueue(n->nic, i);
+ qemu_flush_or_purge_queued_packets(nc->peer, true);
+ assert(!virtio_net_get_subqueue(nc)->async_tx.elem);
+ }
}
static void peer_test_vnet_hdr(VirtIONet *n)
diff --git a/include/net/net.h b/include/net/net.h
index a943e968a3dc..1f7341e4592b 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -153,6 +153,7 @@ ssize_t qemu_send_packet_async(NetClientState *nc, const uint8_t *buf,
int size, NetPacketSent *sent_cb);
void qemu_purge_queued_packets(NetClientState *nc);
void qemu_flush_queued_packets(NetClientState *nc);
+void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge);
void qemu_format_nic_info_str(NetClientState *nc, uint8_t macaddr[6]);
bool qemu_has_ufo(NetClientState *nc);
bool qemu_has_vnet_hdr(NetClientState *nc);
diff --git a/net/net.c b/net/net.c
index 5222e450698c..29f83983e55d 100644
--- a/net/net.c
+++ b/net/net.c
@@ -595,7 +595,6 @@ void qemu_purge_queued_packets(NetClientState *nc)
qemu_net_queue_purge(nc->peer->incoming_queue, nc);
}
-static
void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge)
{
nc->receive_disabled = 0;
* Re: [Qemu-devel] [PATCH v2] virtio_net: flush uncompleted TX on reset
From: Jason Wang @ 2018-03-20 2:34 UTC
To: Greg Kurz, qemu-devel; +Cc: R Nageswara Sastry, Michael S. Tsirkin
On 2018-03-16 20:07, Greg Kurz wrote:
> If the backend could not transmit a packet right away for some reason,
> the packet is queued for asynchronous sending. The corresponding vq
> element is tracked in the async_tx.elem field of the VirtIONetQueue,
> for later freeing when the transmission is complete.
>
> If a reset happens before completion, virtio_net_tx_complete() will push
> async_tx.elem back to the guest anyway, and we end up with the inuse
> counter of the vq equal to -1. The next call to virtqueue_pop() is then
> likely to fail with "Virtqueue size exceeded".
>
> This can be reproduced easily by starting a guest with a hubport backend
> that is not connected to a functional network, e.g.,
>
> -device virtio-net-pci,netdev=hub0 -netdev hubport,id=hub0,hubid=0
>
> and no other -netdev hubport,hubid=0 on the command line.
>
> The appropriate fix is to ensure that such an asynchronous transmission
> cannot survive a device reset. So for all queues, we first try to send
> the packet again, and we purge it if the backend still cannot deliver it.
>
> Reported-by: R. Nageswara Sastry <nasastry@in.ibm.com>
> Buglink: https://github.com/open-power-host-os/qemu/issues/37
> Signed-off-by: Greg Kurz <groug@kaod.org>
> Tested-by: R. Nageswara Sastry <nasastry@in.ibm.com>
> ---
> v2: - make qemu_flush_or_purge_queued_packets() extern and use it
> - reworded reproducer paragraph in changelog
> ---
> hw/net/virtio-net.c | 8 ++++++++
> include/net/net.h | 1 +
> net/net.c | 1 -
> 3 files changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 188744e17d57..e5ed35489380 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -422,6 +422,7 @@ static RxFilterInfo *virtio_net_query_rxfilter(NetClientState *nc)
> static void virtio_net_reset(VirtIODevice *vdev)
> {
> VirtIONet *n = VIRTIO_NET(vdev);
> + int i;
>
> /* Reset back to compatibility mode */
> n->promisc = 1;
> @@ -445,6 +446,13 @@ static void virtio_net_reset(VirtIODevice *vdev)
> memcpy(&n->mac[0], &n->nic->conf->macaddr, sizeof(n->mac));
> qemu_format_nic_info_str(qemu_get_queue(n->nic), n->mac);
> memset(n->vlans, 0, MAX_VLAN >> 3);
> +
> + /* Flush any async TX */
> + for (i = 0; i < n->max_queues; i++) {
> + NetClientState *nc = qemu_get_subqueue(n->nic, i);
> + qemu_flush_or_purge_queued_packets(nc->peer, true);
> + assert(!virtio_net_get_subqueue(nc)->async_tx.elem);
> + }
> }
>
> static void peer_test_vnet_hdr(VirtIONet *n)
> diff --git a/include/net/net.h b/include/net/net.h
> index a943e968a3dc..1f7341e4592b 100644
> --- a/include/net/net.h
> +++ b/include/net/net.h
> @@ -153,6 +153,7 @@ ssize_t qemu_send_packet_async(NetClientState *nc, const uint8_t *buf,
> int size, NetPacketSent *sent_cb);
> void qemu_purge_queued_packets(NetClientState *nc);
> void qemu_flush_queued_packets(NetClientState *nc);
> +void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge);
> void qemu_format_nic_info_str(NetClientState *nc, uint8_t macaddr[6]);
> bool qemu_has_ufo(NetClientState *nc);
> bool qemu_has_vnet_hdr(NetClientState *nc);
> diff --git a/net/net.c b/net/net.c
> index 5222e450698c..29f83983e55d 100644
> --- a/net/net.c
> +++ b/net/net.c
> @@ -595,7 +595,6 @@ void qemu_purge_queued_packets(NetClientState *nc)
> qemu_net_queue_purge(nc->peer->incoming_queue, nc);
> }
>
> -static
> void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool purge)
> {
> nc->receive_disabled = 0;
>
>
Applied and queued for -stable.
Thanks
* Re: [Qemu-devel] [PATCH v2] virtio_net: flush uncompleted TX on reset
From: Jason Wang @ 2018-03-20 3:27 UTC
To: Greg Kurz, qemu-devel; +Cc: R Nageswara Sastry, Michael S. Tsirkin
>> -static
>> void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool
>> purge)
>> {
>> nc->receive_disabled = 0;
>>
>>
>
> Applied and queued for -stable.
>
> Thanks
>
Unfortunately, this breaks the hotplug test:
TEST: tests/virtio-net-test... (pid=7117)
/x86_64/virtio/net/pci/basic: OK
/x86_64/virtio/net/pci/rx_stop_cont: OK
/x86_64/virtio/net/pci/hotplug: Broken pipe
FAIL
Thanks
* Re: [Qemu-devel] [PATCH v2] virtio_net: flush uncompleted TX on reset
From: Greg Kurz @ 2018-03-20 10:20 UTC
To: Jason Wang; +Cc: qemu-devel, R Nageswara Sastry, Michael S. Tsirkin
On Tue, 20 Mar 2018 11:27:26 +0800
Jason Wang <jasowang@redhat.com> wrote:
> >> -static
> >> void qemu_flush_or_purge_queued_packets(NetClientState *nc, bool
> >> purge)
> >> {
> >> nc->receive_disabled = 0;
> >>
> >>
> >
> > Applied and queued for -stable.
> >
> > Thanks
> >
>
> Unfortunately, this breaks hotplug test:
>
> TEST: tests/virtio-net-test... (pid=7117)
> /x86_64/virtio/net/pci/basic: OK
> /x86_64/virtio/net/pci/rx_stop_cont: OK
> /x86_64/virtio/net/pci/hotplug: Broken pipe
> FAIL
>
> Thanks
Hi Jason,
Yes, I've just realized this patch assumes the virtio-net device
does have an associated backend (i.e., nc->peer != NULL); otherwise
we segfault. The hotplug test happens to exercise exactly that case.
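For illustration, the guarded loop could look something like this
(hypothetical sketch; the actual v3 may differ):

    /* Flush any async TX, but only for queues that have a peer */
    for (i = 0; i < n->max_queues; i++) {
        NetClientState *nc = qemu_get_subqueue(n->nic, i);

        if (nc->peer) {
            qemu_flush_or_purge_queued_packets(nc->peer, true);
            assert(!virtio_net_get_subqueue(nc)->async_tx.elem);
        }
    }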
I'll send a v3.
Cheers,
--
Greg