From: Mark McLoughlin
Subject: Re: [PATCH 0/6] Kill off the virtio_net tx mitigation timer
Date: Thu, 06 Nov 2008 16:46:02 +0000
To: Avi Kivity
Cc: kvm@vger.kernel.org
Message-ID: <1225989962.10879.19.camel@blaa>
In-Reply-To: <490F1690.6060509@redhat.com>

Hey,

So, I went off and spent some time gathering more data on this stuff
and putting it together in a more consumable fashion.

Here are some graphs showing the effect some of these changes have on
throughput, CPU utilization and vmexit rate:

  http://markmc.fedorapeople.org/virtio-netperf/2008-11-06/

The results are a little surprising, and I'm not sure I've fully
digested them yet, but here are some conclusions:

1) Disabling notifications from the guest for longer helps; you see an
   increase in CPU utilization and vmexit rate, but that can be
   accounted for by the extra data we're transferring.

2) Flushing (when the ring is full) in the I/O thread doesn't seem to
   help anything; strangely, it has a detrimental effect on host->guest
   traffic, where I wouldn't expect us to hit this case at all. I
   suspect we may not actually be hitting the full-ring condition in
   these tests.
3) The catch-more-io change helps a little, especially for host->guest,
   without any real detrimental impact.

4) Removing the tx timer doesn't have a huge effect on guest->host,
   except for 32 byte buffers, where we see a huge increase in vmexits
   and a drop in throughput. Bizarrely, we don't see this effect with
   64 byte buffers. However, it does have a pretty significant impact
   on host->guest, which makes sense: in that case we'll just have a
   steady stream of TCP ACK packets, so if small guest->host packets
   are affected badly, so will the ACK packets.

5) The drop-mutex patch is a nice win overall, except for a huge
   increase in vmexits for sub-4k guest->host packets. Strange.

Cheers,
Mark.