Message-ID: <4C588B7D.5040902@codemonkey.ws>
Date: Tue, 03 Aug 2010 16:34:53 -0500
From: Anthony Liguori
Subject: Re: [Qemu-devel] Anyone seeing huge slowdown launching qemu with Linux 2.6.35?
References: <4C5847CD.9080107@codemonkey.ws> <4C5848C7.3090806@redhat.com>
 <4C584982.5000108@codemonkey.ws> <4C584B66.5070404@redhat.com>
 <4C5854F1.3000905@codemonkey.ws> <4C5858B2.9090801@redhat.com>
 <4C585F5B.5070502@codemonkey.ws> <4C58635B.7020407@redhat.com>
 <20100803191346.GA28523@amd.home.annexia.org> <4C586C6E.9030002@redhat.com>
 <20100803200057.GB28523@amd.home.annexia.org> <4C5880BC.2080802@codemonkey.ws>
 <4C588685.8070509@redhat.com>
In-Reply-To: <4C588685.8070509@redhat.com>
List-Id: qemu-devel.nongnu.org
To: Paolo Bonzini
Cc: qemu-devel@nongnu.org, kvm@vger.kernel.org, "Richard W.M. Jones", Gleb Natapov, Avi Kivity

On 08/03/2010 04:13 PM, Paolo Bonzini wrote:
> On 08/03/2010 10:49 PM, Anthony Liguori wrote:
>>> On the other hand we end up with stuff like only being able to add 29
>>> virtio-blk devices to a single guest.  As best as I can tell, this
>>> comes from PCI
>>
>> No, this comes from us being too clever for our own good and not
>> following the way hardware does it.
>>
>> All modern systems keep disks on their own dedicated bus.  In
>> virtio-blk, we have a 1-1 relationship between disks and PCI devices.
>> That's a perfect example of what happens when we try to "improve"
>> things.
>
> Comparing (from personal experience) the complexity of the Windows
> drivers for Xen and virtio shows that it's not a bad idea at all.

Not quite sure what you're suggesting, but I could have been clearer.

Instead of having virtio-blk where a virtio disk has a 1-1 mapping to a
PCI device, we probably should have just done virtio-scsi.  Since most
OSes have a SCSI-centric block layer, it would have resulted in much
simpler drivers and we could support more than 1 disk per PCI slot.

I had thought Christoph was working on such a device at some point in
time...

Regards,

Anthony Liguori

>
> Paolo
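
P.S. To make the comparison concrete, here is a rough command-line
sketch of the two models.  The virtio-blk half uses today's -device
syntax; the virtio-scsi half is purely hypothetical, since no such
device exists in qemu yet:

  # virtio-blk: every disk is its own PCI device, so each disk uses a slot
  qemu -drive file=disk0.img,if=none,id=d0 -device virtio-blk-pci,drive=d0 \
       -drive file=disk1.img,if=none,id=d1 -device virtio-blk-pci,drive=d1

  # virtio-scsi (hypothetical): one PCI controller, many disks behind it
  qemu -device virtio-scsi-pci,id=scsi0 \
       -drive file=disk0.img,if=none,id=d0 -device scsi-hd,drive=d0,bus=scsi0.0 \
       -drive file=disk1.img,if=none,id=d1 -device scsi-hd,drive=d1,bus=scsi0.0

The second form only spends one PCI slot no matter how many disks the
guest has, which is the point of letting a SCSI-centric block layer do
the fan-out instead of the PCI bus.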