From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.43)
	id 1NHZJO-0005Y6-M5 for qemu-devel@nongnu.org;
	Mon, 07 Dec 2009 03:51:18 -0500
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.43)
	id 1NHZJJ-0005WF-Pw for qemu-devel@nongnu.org;
	Mon, 07 Dec 2009 03:51:18 -0500
Received: from [199.232.76.173] (port=34303 helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.43)
	id 1NHZJJ-0005W8-KJ for qemu-devel@nongnu.org;
	Mon, 07 Dec 2009 03:51:13 -0500
Received: from mx1.redhat.com ([209.132.183.28]:28307)
	by monty-python.gnu.org with esmtp (Exim 4.60) (envelope-from )
	id 1NHZJJ-0005TY-2S for qemu-devel@nongnu.org;
	Mon, 07 Dec 2009 03:51:13 -0500
Message-ID: <4B1CC1F3.80700@redhat.com>
Date: Mon, 07 Dec 2009 09:50:59 +0100
From: Gerd Hoffmann
MIME-Version: 1.0
Subject: Re: [Qemu-devel] [sneak preview] major scsi overhaul
References: <4AF4ACA5.2090701@redhat.com>
 <200911161853.34668.paul@codesourcery.com> <4B0BCAA1.3090400@redhat.com>
 <200911241351.03650.paul@codesourcery.com> <4B0D5D36.6080100@redhat.com>
 <4B0E2EC8.7040309@suse.de> <4B0E3B90.5080001@redhat.com>
 <4B0E5EFD.6060701@suse.de> <4B0E60AF.9000508@redhat.com>
 <4B0E6496.1060203@suse.de> <4B0E8EF6.2080106@redhat.com>
 <4B0E9036.2070400@suse.de> <4B0E92B1.10903@redhat.com>
 <4B0EA3C2.2020007@suse.de> <4B0FB34A.50501@redhat.com>
 <4B167004.509@redhat.com> <4B1CBCB0.10605@suse.de>
In-Reply-To: <4B1CBCB0.10605@suse.de>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit
List-Id: qemu-devel.nongnu.org
To: Hannes Reinecke
Cc: Paul Brook , qemu-devel@nongnu.org

On 12/07/09 09:28, Hannes Reinecke wrote:
>> Hmm.  Well.  Seems to work out at least for Linux, i.e. it figures it
>> got a bunch of sectors and tries to continue.  Linux logs an I/O
>> error.  Also I didn't try other guests (yet).
>>
>> Using that as a way to limit scsi-disk request sizes probably isn't a
>> good idea.  For scsi-generic that would be an improvement over the
>> current situation though.
>>
> Yes, quite.
>
> But for scsi-disk we could always fall back to using bounce buffers,
> could we not?

We want to limit the bounce buffer size though.  As there seems to be
no easy way to make sure the guest doesn't submit requests larger than
a certain limit, I guess there is no way around splitting the request
into pieces for the bounce buffer case.  We could add offset+size
arguments to scsi_req_buf() to accomplish this.

cheers,
  Gerd

PS: FYI: I suspect I won't find time this year to continue working on
this seriously, especially on the time-consuming testing part.  Top
priority right now is finishing touches for the 0.12 release.  X-mas
will be family time.
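
PS2: To make the splitting idea above concrete, here is a rough,
untested sketch.  The scsi_req_buf() below is only a stand-in with a
made-up signature (the offset+size variant is merely proposed above,
not implemented), and BOUNCE_BUFFER_MAX is an arbitrary placeholder,
so don't read this as actual code from the patch series:

/* Rough sketch, NOT the real qemu API: process one large guest
 * request in bounce-buffer sized pieces. */

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BOUNCE_BUFFER_MAX  (256 * 1024)   /* arbitrary cap for the sketch */

typedef struct SCSIRequest {
    uint8_t *data;      /* guest data, gathered elsewhere */
    size_t   len;       /* total transfer length in bytes */
} SCSIRequest;

/* hypothetical helper with the proposed offset+size arguments: copy
 * 'size' bytes of the request starting at 'offset' into 'buf'
 * (to_buf != 0) or back from 'buf' into the request (to_buf == 0) */
static void scsi_req_buf(SCSIRequest *req, uint8_t *buf,
                         size_t offset, size_t size, int to_buf)
{
    if (to_buf) {
        memcpy(buf, req->data + offset, size);
    } else {
        memcpy(req->data + offset, buf, size);
    }
}

/* walk the request in chunks no larger than the bounce buffer */
static void scsi_req_run_bounced(SCSIRequest *req, int is_write)
{
    static uint8_t bounce[BOUNCE_BUFFER_MAX];
    size_t offset = 0;

    while (offset < req->len) {
        size_t chunk = req->len - offset;
        if (chunk > BOUNCE_BUFFER_MAX) {
            chunk = BOUNCE_BUFFER_MAX;
        }
        if (is_write) {
            scsi_req_buf(req, bounce, offset, chunk, 1);
            /* ... submit 'chunk' bytes from 'bounce' to the block layer ... */
        } else {
            /* ... read 'chunk' bytes from the block layer into 'bounce' ... */
            scsi_req_buf(req, bounce, offset, chunk, 0);
        }
        offset += chunk;
    }
}

The point being: the guest still sees a single request, only the
bounce buffer copies and the backend submissions get chunked.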