From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1FTfxu-0003Ig-TX for qemu-devel@nongnu.org;
	Wed, 12 Apr 2006 10:05:02 -0400
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43)
	id 1FTfxs-0003HE-NV for qemu-devel@nongnu.org;
	Wed, 12 Apr 2006 10:05:01 -0400
Received: from [199.232.76.173] (helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43) id 1FTfxs-0003H8-Hb
	for qemu-devel@nongnu.org; Wed, 12 Apr 2006 10:05:00 -0400
Received: from [204.127.192.84] (helo=rwcrmhc14.comcast.net)
	by monty-python.gnu.org with esmtp (Exim 4.52) id 1FTg2y-0002xt-R9
	for qemu-devel@nongnu.org; Wed, 12 Apr 2006 10:10:16 -0400
Message-ID: <443D0909.3090806@win4lin.com>
Date: Wed, 12 Apr 2006 10:04:57 -0400
From: "Leonardo E. Reiter"
MIME-Version: 1.0
Subject: Re: [Qemu-devel] Re: Network Performance between Win Host and Linux
References: <6fe044190604111020h47108190x23983325567fb51c@mail.gmail.com>
	<6fe044190604111536k944383o99ab27411d3864db@mail.gmail.com>
In-Reply-To: <6fe044190604111536k944383o99ab27411d3864db@mail.gmail.com>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org

Hi Ken (all),

the patches seem to work very well and to be very stable with Windows
2000 guests here. I measured some SMB over TCP/IP transfers and saw
about a 1.5x downstream improvement and a 2x upstream improvement. You
will likely get more of a boost from less convoluted protocols such as
FTP, but I didn't get around to testing that. It's also not clear how
much Windows itself is limiting the bandwidth. I am using
-kernel-kqemu.

Two additional things I noticed:

1. Before your patches, upstream transfers (guest->host) consumed
almost no CPU at all, but of course were much slower.
Now, about half the CPU gets used under heavy upstream load. Downstream,
with Windows guests at least, still consumes 100% CPU, the same as
before. I suspect you addressed this specifically with your select()
hack that avoids the delay when there is pending slirp activity.

2. Overall latency "feels" improved as well, at least for basic tasks
like web browsing. This is purely subjective.

Nice work! I'll be testing with a Linux VM soon and will try to pin down
some better benchmarks, free of Windows clutter.

- Leo Reiter

Kenneth Duda wrote:
> The "qemu-slirp-performance" patch contains three improvements to qemu
> slirp networking performance. Booting my virtual machine (which
> NFS-mounts its root filesystem from the host) has been accelerated by
> 8x, from over 5 minutes to 40 seconds. TCP throughput has been
> accelerated from about 2 megabytes/sec to 9 megabytes/sec, in both
> directions (measured using a simple python script). The system is
> subjectively more responsive (for activities such as logging in or
> running simple python scripts).
>
> The specific problems fixed are:
>
>   - the mss for the slirp-to-vm direction was 512 bytes (now 1460);
>   - qemu would block in select() for up to four milliseconds at a
>     time, even when data was waiting on slirp sockets;
>   - slirp was deliberately delaying acks until timer expiration
>     (TF_DELACK), preventing the vm from opening its send window, in
>     violation of rfc2581.
>
> These fixes are together in one patch (qemu-slirp-performance.patch).

--
Leonardo E. Reiter
Vice President of Product Development, CTO

Win4Lin, Inc.
Virtual Computing from Desktop to Data Center
Main: +1 512 339 7979
Fax: +1 512 532 6501
http://www.win4lin.com