Date: Wed, 04 Aug 2010 19:01:21 +0200
From: Paolo Bonzini
Subject: Re: [Qemu-devel] Anyone seeing huge slowdown launching qemu with Linux 2.6.35?
To: Anthony Liguori
Cc: Alexander Graf, Gleb Natapov, kvm@vger.kernel.org, qemu-devel@nongnu.org, "Richard W.M. Jones", Avi Kivity
Message-ID: <4C599CE1.30501@redhat.com>
In-Reply-To: <4C599A27.9000602@codemonkey.ws>

On 08/04/2010 06:49 PM, Anthony Liguori wrote:
>>> Right, the only question is, do you inject your own bus or do you
>>> just reuse SCSI. On the surface, it seems like reusing SCSI has a
>>> significant number of advantages. For instance, without changing the
>>> guest's drivers, we can implement PV cdroms or PV tape drives.

If you want multiple LUNs per virtio device, SCSI is obviously a good
choice, but you will need something more (like the config space Avi
mentioned). My position is that getting this "something more" right is
considerably harder than virtio-blk was. Maybe it will be done some
day, but I still think that not having virtio-scsi from day 1 was
actually a good thing, even if we can learn from xenbus and all that.

>> What exactly would keep us from doing that with virtio-blk? I thought
>> it already supports SCSI commands.
>
> I think the toughest change would be making it appear as a SCSI device
> within the guest. You could do that to virtio-blk, but it would be a
> flag day, as reasonably configured guests will break.
>
> Having the virtio-blk device show up as /dev/vdX was a big mistake.
> It's been nothing but a giant PITA. There is an amazing amount of
> software that only looks at /dev/sd* and /dev/hd*.

That's another story, and I totally agree here, but not reusing
/dev/sd* is not intrinsic to the design of virtio-blk (and it is one
thing that Windows gets right: everything is SCSI, period).

Paolo
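
P.S. Since virtio-blk's SCSI passthrough came up: the guest ABI already
carries it in the request header. A rough sketch of what is involved,
after linux/virtio_blk.h (written from memory, so check the actual
header before relying on the names):

/* Every virtio-blk request begins with this header; the type field
 * selects the operation, including SCSI packet command passthrough. */

#include <stdint.h>

#define VIRTIO_BLK_T_IN        0   /* read */
#define VIRTIO_BLK_T_OUT       1   /* write */
#define VIRTIO_BLK_T_SCSI_CMD  2   /* SCSI packet command passthrough */

struct virtio_blk_outhdr {
    uint32_t type;    /* one of the VIRTIO_BLK_T_* values */
    uint32_t ioprio;  /* I/O priority hint */
    uint64_t sector;  /* offset in 512-byte sectors, for read/write */
};

So a guest can already tunnel SCSI CDBs through a virtio-blk request;
what it cannot do is address more than one LUN behind the device, which
is where the "something more" above comes in.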
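
P.P.S. As an illustration of the /dev/vdX pain: a hypothetical but
representative example of the probing you find in the wild, which will
never notice a virtio disk sitting at /dev/vda:

/* Hypothetical example: software that hardcodes /dev/sd* and /dev/hd*
 * when enumerating disks; /dev/vd* is never considered. */

#include <glob.h>
#include <stdio.h>

int main(void)
{
    const char *patterns[] = { "/dev/sd*", "/dev/hd*" };
    for (int p = 0; p < 2; p++) {
        glob_t g;
        if (glob(patterns[p], 0, NULL, &g) == 0) {
            for (size_t i = 0; i < g.gl_pathc; i++)
                printf("found disk: %s\n", g.gl_pathv[i]);
            globfree(&g);
        }
    }
    return 0;
}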