From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 23 Oct 2012 14:55:03 +0200
From: Stefan Hajnoczi
Message-ID: <20121023125503.GG19977@stefanha-thinkpad.redhat.com>
References: <20121022111824.GA6916@amit.redhat.com> <1350913800.90009.YahooMailClassic@web163904.mail.gq1.yahoo.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1350913800.90009.YahooMailClassic@web163904.mail.gq1.yahoo.com>
Subject: Re: [Qemu-devel] [Bug 1066055] Re: Network performance regression with vde_switch
To: Amit Shah
Cc: Edivaldo de Araujo Pereira, qemu-devel@nongnu.org, Bug 1066055 <1066055@bugs.launchpad.net>

On Mon, Oct 22, 2012 at 06:50:00AM -0700, Edivaldo de Araujo Pereira wrote:
> I didn't take enough time to understand the code, so unfortunately I fear
> there is not much I could do to solve the problem, apart from trying your
> suggestions. But I'll try to spend a little more time on it, until we find
> a solution.

I've thought a little about how to approach this. Amit, here's a brain dump:

The simplest solution is to make virtqueue_avail_bytes() use the old
behavior of stopping early.

However, I wonder if we can actually *improve* the performance of the
existing code by changing virtio-net.c:virtio_net_receive(). The intuition
is that calling virtio_net_has_buffers() (which internally calls
virtqueue_avail_bytes()) followed by virtqueue_pop() is suboptimal because
we traverse the descriptor chain twice. We can get rid of this repetition.
A side-effect is that we no longer need to call virtqueue_avail_bytes()
from virtio-net.c at all.

Here's how. The common case in virtio_net_receive() is that we have buffers
and they are large enough for the received packet. So to optimize for this
case:

1. Take the VirtQueueElement off the vring but don't increment
   last_avail_idx yet. (This is essentially a "peek" operation.)

2. If there is an error, or we drop the packet because the VirtQueueElement
   is too small, just bail out; we'll grab the same VirtQueueElement again
   next time.

3. Once we've committed to filling in this VirtQueueElement, increment
   last_avail_idx. This is the point of no return.

Essentially we're splitting pop() into peek() and consume(). peek() grabs
the VirtQueueElement but does not increment last_avail_idx. consume()
simply increments last_avail_idx and maybe does the EVENT_IDX optimization
bookkeeping.

Whether this will improve performance, I'm not sure. Perhaps
virtio_net_has_buffers() already pulls most descriptors into the CPU's
cache, so the virtqueue_pop() that follows is very cheap. But the idea here
is to avoid virtio_net_has_buffers() entirely, because we'll find out soon
enough whether there is a buffer when we try to pop :).
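To make that concrete, here is a rough, untested sketch of what I have in
mind. virtqueue_peek() and virtqueue_consume() don't exist today (the names
are placeholders for the split described above), and the receive path below
is simplified (no mergeable rx buffers, no vnet header handling):

  /* Like virtqueue_pop(), but does NOT advance last_avail_idx.
   * Returns non-zero if an element was filled in, 0 if the ring is empty.
   */
  int virtqueue_peek(VirtQueue *vq, VirtQueueElement *elem);

  /* Commit the element returned by the last virtqueue_peek():
   * advance last_avail_idx and do any EVENT_IDX bookkeeping.
   */
  void virtqueue_consume(VirtQueue *vq);

  /* Simplified receive path using the two new primitives. */
  static ssize_t virtio_net_receive_sketch(VirtIONet *n, const uint8_t *buf,
                                           size_t size)
  {
      VirtQueueElement elem;

      if (!virtqueue_peek(n->rx_vq, &elem)) {
          return 0;    /* no buffers yet, retry when the guest adds some */
      }

      if (iov_size(elem.in_sg, elem.in_num) < size) {
          return size; /* drop; the same element is peeked again next time */
      }

      iov_from_buf(elem.in_sg, elem.in_num, 0, buf, size);

      virtqueue_consume(n->rx_vq);          /* point of no return */
      virtqueue_fill(n->rx_vq, &elem, size, 0);
      virtqueue_flush(n->rx_vq, 1);
      virtio_notify(&n->vdev, n->rx_vq);
      return size;
  }

Note that the error/drop path never touches last_avail_idx, so bailing out
is free and the next call sees exactly the same element.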
Another approach would be to drop virtio_net_has_buffers() but continue to
use virtqueue_pop(). We'd keep the same VirtQueueElement stashed in
VirtIONet across virtio_net_receive() calls in the case where we drop the
packet. I don't like this approach very much, though, because it gets
tricky when the guest modifies the vring memory, resets the virtio device,
etc. across calls.

Stefan