From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1FTjwB-0006Cm-16 for qemu-devel@nongnu.org;
	Wed, 12 Apr 2006 14:19:31 -0400
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43)
	id 1FTjw9-0006CI-A0 for qemu-devel@nongnu.org;
	Wed, 12 Apr 2006 14:19:30 -0400
Received: from [199.232.76.173] (helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43) id 1FTjw9-0006CD-6c
	for qemu-devel@nongnu.org; Wed, 12 Apr 2006 14:19:29 -0400
Received: from [64.233.162.193] (helo=zproxy.gmail.com)
	by monty-python.gnu.org with esmtp (Exim 4.52) id 1FTk1I-0003J8-9H
	for qemu-devel@nongnu.org; Wed, 12 Apr 2006 14:24:48 -0400
Received: by zproxy.gmail.com with SMTP id l8so1467223nzf
	for ; Wed, 12 Apr 2006 11:19:28 -0700 (PDT)
Message-ID: <6fe044190604121119r5a161123s77c0d7d13beaa637@mail.gmail.com>
Date: Wed, 12 Apr 2006 11:19:27 -0700
From: "Kenneth Duda"
Sender: ken.duda@gmail.com
Subject: Re: [Qemu-devel] Re: Network Performance between Win Host and Linux
In-Reply-To: <443D0909.3090806@win4lin.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline
References: <6fe044190604111020h47108190x23983325567fb51c@mail.gmail.com>
	<6fe044190604111536k944383o99ab27411d3864db@mail.gmail.com>
	<443D0909.3090806@win4lin.com>
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org

Leo, thank you for exercising this stuff.

> 1. before your patches, the upstream transfers (guest->host) consumed
> almost no CPU at all, but of course were much slower.  Now, about half
> the CPU gets used under heavy upstream load.

I am surprised that only half the CPU gets consumed --- that suggests
there's another factor of two improvement waiting to be made.
If you see anything like this with Linux-on-Linux, please let me know
and I'll try to track it down.

Separately, I'm curious about the path for getting these changes into
the qemu mainline.  If that's something you're in tune with and are in
the mood to summarize for me, I'd appreciate that.  We love qemu but
there are some rough edges, and I think we have something like 16
patches we're maintaining internally, many of which might be helpful
for others.

    -Ken

On 4/12/06, Leonardo E. Reiter wrote:
> Hi Ken,
>
> (all) the patches seem to work very well and be very stable with Windows
> 2000 guests here.  I measured some SMB over TCP/IP transfers, and got
> about a 1.5x downstream improvement and a 2x upstream improvement.  You
> will likely get more boost from less convoluted protocols like FTP or
> something, but I didn't get around to testing that.  Plus it's not clear
> how much Windows itself is impeding the bandwidth.  I am using
> -kernel-kqemu.
>
> 2 additional things I noticed:
>
> 1. before your patches, the upstream transfers (guest->host) consumed
> almost no CPU at all, but of course were much slower.  Now, about half
> the CPU gets used under heavy upstream load.  The downstream, with
> Windows guests at least, consumes 100% CPU the same as before.  I
> suspect you addressed this specifically with your select hack to avoid
> the delay if there is pending slirp activity.
>
> 2. overall latency "feels" improved as well, at least for basic stuff
> like web browsing, etc.  This is purely subjective.
>
> Nice work!  I'll be testing with a Linux VM soon and try to pin down
> some better benchmarks, free of Windows clutter.
>
> - Leo Reiter