From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mark McLoughlin
Subject: Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer
Date: Thu, 06 Nov 2008 17:45:15 +0000
Message-ID: <1225993515.10879.48.camel@blaa>
References: <> <1225389113-28332-1-git-send-email-markmc@redhat.com>
	 <490D7754.4070807@redhat.com> <1225715009.5904.39.camel@blaa>
	 <490EF141.8040005@redhat.com>
Reply-To: Mark McLoughlin
Mime-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
Cc: kvm@vger.kernel.org
To: Avi Kivity
Return-path: 
Received: from mx2.redhat.com ([66.187.237.31]:56963 "EHLO mx2.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751391AbYKFRqX (ORCPT ); Thu, 6 Nov 2008 12:46:23 -0500
Received: from int-mx2.corp.redhat.com (int-mx2.corp.redhat.com [172.16.27.26])
	by mx2.redhat.com (8.13.8/8.13.8) with ESMTP id mA6HkHSL005306
	for ; Thu, 6 Nov 2008 12:46:17 -0500
In-Reply-To: <490EF141.8040005@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

Hi Avi,

Just thinking about your variable window suggestion ...

On Mon, 2008-11-03 at 14:40 +0200, Avi Kivity wrote:
> Mark McLoughlin wrote:
> > On Sun, 2008-11-02 at 11:48 +0200, Avi Kivity wrote:
> >> Where does the benefit come from?
> >>
> >
> > There are two things going on here, I think.
> >
> > First is that the timer affects latency, removing the timeout helps
> > that.
> >
>
> If the timer affects latency, then something is very wrong. We're
> lacking an adjustable window.
>
> The way I see it, the notification window should be adjusted according
> to the current workload. If the link is idle, the window should be one
> packet -- notify as soon as something is queued. As the workload
> increases, the window increases to (safety_factor * packet_rate *
> allowable_latency). The timer is set to allowable_latency to catch
> changes in workload.
> For example:
>
> - allowable_latency 1ms (implies 1K vmexits/sec desired)
> - current packet_rate 20K packets/sec
> - safety_factor 0.8
>
> So we request notifications every 0.8 * 20K * 1ms = 16 packets, and
> set the timer to 1ms. Usually we get a notification every 16 packets,
> just before timer expiration. If the workload increases, we get
> notifications sooner, so we increase the window. If the workload
> drops, the timer fires and we decrease the window.
>
> The timer should never fire on an all-out benchmark, or in a ping
> test.

The way I see this (continuing with your example figures) playing out
is:

- If we have a packet rate of <2.5K packets/sec, we essentially have
  zero added latency - each packet causes a vmexit and the packet is
  dispatched immediately

- As soon as we go above 2.5K packets/sec we add, on average, an
  additional ~400us delay to each packet

- This is almost identical to our current scheme with an 800us timer,
  except that flushes are typically triggered by a vmexit instead of
  the timer expiring

I don't think this is the effect you're looking for? Am I missing
something?

Cheers,
Mark.