From: Mark McLoughlin
Subject: Re: [PATCH 3/9] kvm: qemu: Remove virtio_net tx ring-full heuristic
Date: Fri, 25 Jul 2008 18:30:15 +0100
Message-ID: <1217007015.7098.102.camel@muff>
References: <1216899979-32532-1-git-send-email-markmc@redhat.com>
 <1216899979-32532-4-git-send-email-markmc@redhat.com>
 <48890ECD.10104@qumranet.com>
 <200807251030.39301.rusty@rustcorp.com.au>
In-Reply-To: <200807251030.39301.rusty@rustcorp.com.au>
To: Rusty Russell
Cc: Dor Laor, kvm@vger.kernel.org, Herbert Xu

On Fri, 2008-07-25 at 10:30 +1000, Rusty Russell wrote:
> On Friday 25 July 2008 09:22:53 Dor Laor wrote:
> > Mark McLoughlin wrote:
> > >     vq->vring.used->flags &= ~VRING_USED_F_NO_NOTIFY;
> > >     qemu_del_timer(n->tx_timer);
> > >     n->tx_timer_active = 0;
> >
> > As stated by newer messages, we should handle the first tx notification
> > if the timer wasn't active, to shorten latency.
> >
> > Cheers, Dor
>
> Here's what lguest does at the moment. Basically, we cut the timeout a tiny
> bit each time, until we get *fewer* packets than last time. Then we bump it
> up again.
>
> Rough, but seems to work (it should be a per-device var of course, not a
> static).
>
> @@ -921,6 +922,7 @@ static void handle_net_output(int fd, st
>  	unsigned int head, out, in, num = 0;
>  	int len;
>  	struct iovec iov[vq->vring.num];
> +	static int last_timeout_num;
>
>  	if (!timeout)
>  		net_xmit_notify++;
> @@ -941,6 +943,14 @@ static void handle_net_output(int fd, st
>  	/* Block further kicks and set up a timer if we saw anything. */
>  	if (!timeout && num)
>  		block_vq(vq);
> +
> +	if (timeout) {
> +		if (num < last_timeout_num)
> +			timeout_usec += 10;
> +		else if (timeout_usec > 1)
> +			timeout_usec--;
> +		last_timeout_num = num;
> +	}

Yeah, I gave this a try in kvm, and in the host->guest case the timeout
just grew and grew. In the guest->host case, it did stabilise at around
50us with high throughput.

Basically, I think in the host->guest case the number of buffers was very
variable, so the timeout would get bumped by 10, reduced by a small
amount, bumped by 10, reduced by a small amount, ...

But I agree the general principle seems about right; it just needs some
tweaking.

Cheers,
Mark.