Message-ID: <4DC2B946.80100@redhat.com>
Date: Thu, 05 May 2011 16:50:46 +0200
From: Paolo Bonzini
To: Hannes Reinecke
Cc: Stefan Hajnoczi, qemu-devel
Subject: Re: [Qemu-devel] virtio-scsi spec, first public draft
In-Reply-To: <4DC2B455.9090007@suse.de>
References: <4DC1D30F.4070408@redhat.com>
 <20110505094323.GC5298@stefanha-thinkpad.localdomain>
 <4DC29CCB.8050903@redhat.com> <4DC2B455.9090007@suse.de>

On 05/05/2011 04:29 PM, Hannes Reinecke wrote:
>> I chose 1 requestq per target so that, with MSI-X support, each
>> target can be associated with one MSI-X vector.
>>
>> If you want a large number of units, you can subdivide targets into
>> logical units, or use multiple adapters if you prefer. We can have
>> 20-odd SCSI adapters, each with 65534 targets. I think we're way
>> beyond the practical limits even before LUN support is added to QEMU.
>
> But this will make queue full tracking harder.
> If we have one queue per LUN, the SCSI stack is able to track QUEUE FULL
> states and will adjust the queue depth accordingly.
> When we have only one queue per target we cannot track QUEUE FULL
> anymore and have to rely on the static per-host 'can_queue' setting,
> which doesn't work as well, especially in a virtualized environment
> where the queue full conditions might change at any time.

So you want one virtqueue per LUN? I had that in the first version, but
then you had to associate a (target, 8-byte LUN) pair with each
virtqueue manually. That was very hairy, so I changed it to one target
per queue.

> But read on:
>
>> For comparison, Windows supports up to 1024 targets per adapter
>> (split across 8 channels); IBM vSCSI provides up to 128; VMware
>> supports a maximum of 15 SCSI targets per adapter and 4 adapters per
>> VM.
>>
> We don't have to impose any hard limits here. The virtio scsi transport
> would need to be able to detect the targets, and we would be using
> whatever targets have been found.

Yes, that's what I wrote above. Right now "detect the targets" means
"send INQUIRY for LUN0 and/or REPORT LUNS to each virtqueue", thanks to
the 1:1 relationship between virtqueues and targets. In my first
version it would have meant:

- associate each target's LUN0 with a virtqueue
- if needed, send INQUIRY for LUN0 and/or REPORT LUNS
- if needed, deassociate LUN0 from the virtqueue

Really, it was ugly. It also raised more questions, such as what to do
if a virtqueue has pending requests at deassociation time.

>> Yes, just add the first LUN to it (it will be LUN0, which must be
>> there anyway). The target's existence will be reported on the
>> control receiveq.
>>
> ?? How is this supposed to work?
> How can I detect the existence of a virtqueue?

Config space tells you how many virtqueues exist. That gives how many
targets you can address at most.
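Roughly, the driver-side scan would then look like this (a sketch only;
every name below is made up for illustration, none of it is in the
draft):

#include <stdint.h>

/* Sketch only: all field and function names here are invented for
 * illustration and are not part of the draft spec. */
struct virtio_scsi_config {
    uint32_t num_queues;    /* one request virtqueue per target, so also
                               the maximum number of addressable targets */
    /* ... other limits (max sectors, sense size, ...) ... */
};

struct virtio_scsi;         /* driver state, details omitted */

/* Hypothetical helpers: issue the command on the target's own
 * requestq and return 0 if the target answered. */
int send_inquiry_lun0(struct virtio_scsi *vs, uint32_t target);
int send_report_luns(struct virtio_scsi *vs, uint32_t target);

/* Scan every possible target once at probe time; the 1:1
 * queue/target relationship means no queue assignment is needed. */
static void scan_targets(struct virtio_scsi *vs, uint32_t num_queues)
{
    for (uint32_t target = 0; target < num_queues; target++) {
        /* An empty target simply fails to answer INQUIRY to LUN0
         * and is skipped. */
        if (send_inquiry_lun0(vs, target) == 0)
            send_report_luns(vs, target); /* enumerate remaining LUNs */
    }
}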
If some of them are empty at the beginning of the guest's life, their
LUN0 will fail to answer INQUIRY and REPORT LUNS. (It is the same for
vmw_pvscsi, by the way, except simpler: the maximum number of targets
is not configurable, and there is just one queue + one interrupt.)

> And to be consistent with the SCSI layer the virtqueues then in fact
> would need to map to the SCSI targets; LUNs would be detected by the
> SCSI midlayer outside the control of the virtio-scsi HBA.

Exactly, that was my point! It seemed so clean compared to a dynamic
assignment between LUNs and virtqueues.

>>>> VIRTIO_SCSI_T_TMF_LOGICAL_UNIT_DETACH asks the device to make the
>>>> logical unit (and the target as well if this is the last logical
>>>> unit) disappear. It takes an I_T_L nexus. This non-standard TMF
>>>> should be used in response to a host request to shut down a target
>>>> or LUN, after having placed the LUN in a clean state.
>>
>> It is not really an initiator-driven detach, it is the initiator's
>> acknowledgement of a target-driven detach. The target needs to know
>> when the initiator is ready so that it can free resources attached
>> to the logical unit (this is particularly important if the LU is a
>> physical disk and it is opened with exclusive access).
>>
> Not required. The target can detach any LUN at any time and can rely on
> the initiator to handle this situation. Multipath handles this just
> fine.

I didn't invent this; we had a customer request this feature for Xen
guests in the past (a "soft" target detach where the filesystem is
unmounted cleanly). But I guess I can drop it, since KVM guests have
agents like Matahari that will take care of this. They will use
out-of-band channels to start an initiator-driven detach, and I guess
it's better this way. :)

BTW, with barriers gone, I think I can also drop the per-target TMF
command.

Thanks for the review.

Paolo