From mboxrd@z Thu Jan  1 00:00:00 1970
From: Dor Laor
Subject: Re: [PATCH 3/9] kvm: qemu: Remove virtio_net tx ring-full heuristic
Date: Fri, 25 Jul 2008 02:56:03 +0300
Message-ID: <48891693.90101@qumranet.com>
References: <1216899979-32532-1-git-send-email-markmc@redhat.com>
	<1216899979-32532-2-git-send-email-markmc@redhat.com>
	<1216899979-32532-3-git-send-email-markmc@redhat.com>
	<1216899979-32532-4-git-send-email-markmc@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org, Herbert Xu, Rusty Russell
To: Mark McLoughlin
Return-path:
Received: from il.qumranet.com ([212.179.150.194]:44573 "EHLO il.qumranet.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752755AbYGXX5k
	(ORCPT ); Thu, 24 Jul 2008 19:57:40 -0400
In-Reply-To: <1216899979-32532-4-git-send-email-markmc@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

Mark McLoughlin wrote:
> virtio_net tries to guess when it has received a tx
> notification from the guest whether it indicates that the
> guest has no more room in the tx ring and it should
> immediately flush the queued buffers.
>
> The heuristic is based on the fact that there are 128
> buffer entries in the ring and each packet uses 2 buffers
> (i.e. the virtio_net_hdr and the packet's linear data).
>
> Using GSO or increasing the size of the rings will break
> that heuristic, so let's remove it and assume that any
> notification from the guest after we've disabled
> notifications indicates that we should flush our buffers.
>
> Signed-off-by: Mark McLoughlin
> ---
>  qemu/hw/virtio-net.c | 3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
>
> diff --git a/qemu/hw/virtio-net.c b/qemu/hw/virtio-net.c
> index 31867f1..4adfa42 100644
> --- a/qemu/hw/virtio-net.c
> +++ b/qemu/hw/virtio-net.c
> @@ -175,8 +175,7 @@ static void virtio_net_handle_tx(VirtIODevice *vdev, VirtQueue *vq)
>  {
>      VirtIONet *n = to_virtio_net(vdev);
>
> -    if (n->tx_timer_active &&
> -        (vq->vring.avail->idx - vq->last_avail_idx) == 64) {
> +    if (n->tx_timer_active) {
>          vq->vring.used->flags &= ~VRING_USED_F_NO_NOTIFY;
>          qemu_del_timer(n->tx_timer);
>          n->tx_timer_active = 0;
>

Actually, we can improve latency a bit more by using this timer only in high-throughput scenarios. For example, if no (or few) packets were accumulated during the previous timer period, we can clear the flag and not arm a new timer. That way we get notified immediately, without the timer latency. When lots of packets are being transmitted, we go back to batch mode again.

Cheers,
Dor