From: Anthony Liguori
Date: Wed, 04 Aug 2010 11:46:34 -0500
Subject: Re: [Qemu-devel] Anyone seeing huge slowdown launching qemu with Linux 2.6.35?
To: Avi Kivity
Cc: Paolo Bonzini, kvm@vger.kernel.org, "Richard W.M. Jones", Gleb Natapov, qemu-devel@nongnu.org

On 08/04/2010 11:44 AM, Avi Kivity wrote:
> On 08/04/2010 03:53 PM, Anthony Liguori wrote:
>>
>> So how do we enable support for more than 20 disks? I think a
>> virtio-scsi is inevitable.
>
> Not only for large numbers of disks, but also for JBOD performance.
> If you have one queue per disk you'll have low queue depths and high
> interrupt rates. Aggregating many spindles into a single queue is
> important for reducing overhead.

Right, the only question is: do you invent your own bus, or do you
just reuse SCSI? On the surface, reusing SCSI seems to have a
significant number of advantages. For instance, without changing the
guest's drivers, we can implement PV CD-ROMs or PV tape drives. It
also supports SCSI-level pass-through, which is pretty nice for
enabling things like NPIV.

Regards,

Anthony Liguori
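
To make the interrupt-rate point above concrete, here is a toy model
with invented numbers; the disk count, per-spindle IOPS, and
completions-reaped-per-interrupt batch size are all assumptions for
illustration, not measurements from any real setup:

    /* Toy model of the per-disk-queue vs. shared-queue interrupt
     * rates: N spindles, each completing ~150 requests per second.
     * With one queue per disk, every completion can raise its own
     * interrupt; with one shared queue, completions arriving close
     * together can be reaped in a single notification. */
    #include <stdio.h>

    int main(void)
    {
        const int disks = 24;            /* hypothetical JBOD size */
        const int iops_per_disk = 150;   /* assumed per-spindle rate */
        const int batch = 8;             /* assumed completions per IRQ */

        int total_iops = disks * iops_per_disk;
        printf("per-disk queues: up to %d interrupts/s\n", total_iops);
        printf("shared queue, batching %d: ~%d interrupts/s\n",
               batch, total_iops / batch);
        return 0;
    }

The shared queue also keeps the effective queue depth proportional to
total outstanding I/O rather than capping it per disk.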
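
And a minimal sketch of what a request on such a shared, SCSI-based
queue might look like. Every name here is hypothetical (there is no
virtio-scsi specification at this point), but the shape shows why one
queue can serve many disks, and why CD-ROMs, tape drives, and
pass-through fall out naturally from carrying raw CDBs:

    /* Hypothetical wire format for a multiplexed PV SCSI transport.
     * One shared queue serves many targets: the logical unit is named
     * in each request rather than implied by the queue, so adding a
     * 21st disk adds no new queue, and completions from all spindles
     * can be coalesced into fewer interrupts. */
    #include <stdint.h>

    struct pv_scsi_req {
        uint8_t  lun[8];      /* which target/LUN this command is for */
        uint64_t tag;         /* matches a completion to its request */
        uint8_t  cdb[16];     /* raw SCSI CDB, passed through unmodified:
                                 READ(10) for disks, READ TOC for CD-ROMs,
                                 tape commands, vendor CDBs for NPIV-style
                                 pass-through, all with unchanged guest
                                 class drivers */
        /* data buffers follow as scatter-gather entries */
    };

    struct pv_scsi_resp {
        uint64_t tag;         /* tag of the completed request */
        uint8_t  status;      /* SCSI status byte (GOOD, CHECK CONDITION) */
        uint8_t  sense[96];   /* sense data on CHECK CONDITION */
    };

Because the transport never interprets the CDB, the host can forward
it to a real device or emulate it, which is the pass-through property
mentioned above.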