* [Qemu-devel] about NPIV with qemu-kvm.
@ 2011-10-26  4:40 ya su
  2011-10-26  5:26 ` Hannes Reinecke
From: ya su @ 2011-10-26  4:40 UTC
  To: Hannes Reinecke
  Cc: Christoph Hellwig, Stefan Hajnoczi, kvm, Michael S. Tsirkin,
	Stefan Hajnoczi, qemu-devel, Nicholas A. Bellinger,
	Linux Kernel Mailing List, Christoph Hellwig, Paolo Bonzini,
	Linux Virtualization

Hi Hannes,

    I want to use NPIV with qemu-kvm, so I issued the following command:

    echo '1111222233334444:5555666677778888' >
/sys/class/fc_host/host0/vport_create

    and it successfully produces a new host6 and one vport, but it
does not create any virtual HBA PCI device, so I don't know how to
assign the virtual host to qemu-kvm.

    From your mail, does the array first need to assign a LUN to
this vport? And then, with the newly created disk (a device like
/dev/sdf), do I add it to qemu-kvm with -drive file=/dev/sdf,if=virtio... arguments?
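
    For reference, here is the rest of the sequence I expect to run
once the array maps a LUN to the vport (just a sketch of my
understanding; host6 is the vport host created above, and the '- - -'
wildcards request a full channel/target/LUN rescan):

    # confirm the new vport host exists and check its WWPN
    ls /sys/class/fc_host/
    cat /sys/class/fc_host/host6/port_name

    # after the array maps a LUN to the vport, rescan and look for the disk
    echo '- - -' > /sys/class/scsi_host/host6/scan
    cat /proc/scsi/scsi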


Regards.

Suya.

2011/6/29, Hannes Reinecke <hare@suse.de>:
> On 06/29/2011 12:07 PM, Christoph Hellwig wrote:
>> On Wed, Jun 29, 2011 at 10:39:42AM +0100, Stefan Hajnoczi wrote:
>>> I think we're missing a level of addressing.  We need the ability to
>>> talk to multiple target ports in order for "list target ports" to make
>>> sense.  Right now there is one implicit target that handles all
>>> commands.  That means there is one fixed I_T Nexus.
>>>
>>> If we introduce "list target ports" we also need a way to say "This
>>> CDB is destined for target port #0".  Then it is possible to enumerate
>>> target ports and address targets independently of the LUN field in the
>>> CDB.
>>>
>>> I'm pretty sure this is also how SAS and other transports work.  In
>>> their framing they include the target port.
>>
>> Yes, exactly.  Hierarchical LUNs are a nasty fringe feature that we should
>> avoid as much as possible; that is, for everything but IBM vSCSI, which is
>> braindead enough to force them.
>>
> Yep.
>
>>> The question is whether we really need to support multiple targets on
>>> a virtio-scsi adapter or not.  If you are selectively mapping LUNs
>>> that the guest may access, then multiple targets are not necessary.
>>> If we want to do pass-through of the entire SCSI bus then we need
>>> multiple targets but I'm not sure if there are other challenges like
>>> dependencies on the transport (Fibre Channel, SAS, etc) which make it
>>> impossible to pass through bus-level access?
>>
>> I don't think bus-level pass-through is either easily possible or
>> desirable.  What multiple targets are useful for is allowing more
>> virtual disks than we have virtual PCI slots.  We could do this by
>> supporting multiple LUNs, but given that many SCSI resources are
>> target-based, doing multiple targets is most likely the more scalable
>> and more logical variant.  E.g. we could much more easily have one
>> virtqueue per target than per LUN.
>>
> The general idea here is that we can support NPIV.
> With NPIV we'll have several scsi_hosts, each of which is assigned a
> different set of LUNs by the array.
> With virtio we need to be able to react to LUN remapping on the array
> side, ie we need to be able to issue a 'REPORT LUNS' command and
> add/remove LUNs on the fly. This means we have to expose the
> scsi_host in some way via virtio.
>
> This is impossible with a one-to-one mapping between targets and
> LUNs. The actual bus-level pass-through will be just on the SCSI
> layer, ie 'REPORT LUNS' should be possible. If and how we do a LUN
> remapping internally on the host is a totally different matter.
> Same goes for the transport details; I doubt we will expose all the
> dingy details of the various transports, but rather restrict
> ourselves to an abstract transport.
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke		      zSeries & Storage
> hare@suse.de			      +49 911 74053 688
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)


* Re: [Qemu-devel] about NPIV with qemu-kvm.
  2011-10-26  4:40 [Qemu-devel] about NPIV with qemu-kvm ya su
@ 2011-10-26  5:26 ` Hannes Reinecke
  2011-10-27 12:53   ` ya su
From: Hannes Reinecke @ 2011-10-26  5:26 UTC
  To: ya su
  Cc: Christoph Hellwig, Stefan Hajnoczi, kvm, Michael S. Tsirkin,
	Stefan Hajnoczi, qemu-devel, Nicholas A. Bellinger,
	Linux Kernel Mailing List, Christoph Hellwig, Paolo Bonzini,
	Linux Virtualization

On 10/26/2011 06:40 AM, ya su wrote:
> Hi Hannes,
> 
>     I want to use NPIV with qemu-kvm, so I issued the following command:
> 
>     echo '1111222233334444:5555666677778888' >
> /sys/class/fc_host/host0/vport_create
> 
>     and it successfully produces a new host6 and one vport, but it
> does not create any virtual HBA PCI device, so I don't know how to
> assign the virtual host to qemu-kvm.
> 
Well, you can't; there is no mechanism for that. When using NPIV you
need to pass in the individual LUNs via e.g. virtio-blk.

>     From your mail, does the array first need to assign a LUN to
> this vport? And then, with the newly created disk (a device like
> /dev/sdf), do I add it to qemu-kvm with -drive file=/dev/sdf,if=virtio... arguments?
> 
Yes. That's what you need to do.
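
Roughly like this (an untested sketch; host6 and /dev/sdf are just the
example names from your mail, and a persistent name under
/dev/disk/by-id/ would work just as well):

  # rescan the vport's scsi_host once the array has mapped the LUN
  echo '- - -' > /sys/class/scsi_host/host6/scan

  # then hand the resulting block device to the guest as a virtio disk
  qemu-kvm ... -drive file=/dev/sdf,if=virtio,cache=none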

Cheers,

Hannes
-- 
Dr. Hannes Reinecke              zSeries & Storage
hare@suse.de                  +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Markus Rex, HRB 16746 (AG Nürnberg)


* Re: [Qemu-devel] about NPIV with qemu-kvm.
  2011-10-26  5:26 ` Hannes Reinecke
@ 2011-10-27 12:53   ` ya su
From: ya su @ 2011-10-27 12:53 UTC
  To: Hannes Reinecke
  Cc: Christoph Hellwig, Stefan Hajnoczi, kvm, Michael S. Tsirkin,
	Stefan Hajnoczi, qemu-devel, Nicholas A. Bellinger,
	Linux Kernel Mailing List, Christoph Hellwig, Paolo Bonzini

Hi Hannes,

      I really appreciate your clearing up my confusion.

      As for getting a VM's storage I/O performance close to the
hardware's, it seems the only way is something like SR-IOV on the HBA
card; NPIV cannot achieve this goal.

      I remember that LSI released some kind of SAS controller (IR 2008?)
that supports SR-IOV, but there is no document describing the
configuration steps. I wonder if you have any clues to help? Thanks.
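
      In case it is useful, this is how I have been checking for the
capability (the PCI address 0b:00.0 is only an example from my box,
and the assignment line is just a sketch of what I hope would work):

      # does the controller advertise SR-IOV in its PCI capability list?
      lspci -s 0b:00.0 -vvv | grep -i 'Single Root I/O Virtualization'

      # if the driver can enable VFs, a VF (e.g. 0b:00.1) could then be
      # handed to the guest with qemu-kvm device assignment
      qemu-kvm ... -device pci-assign,host=0b:00.1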

Regards.

Suya.

2011/10/26, Hannes Reinecke <hare@suse.de>:
> On 10/26/2011 06:40 AM, ya su wrote:
>> Hi Hannes,
>>
>>     I want to use NPIV with qemu-kvm, so I issued the following command:
>>
>>     echo '1111222233334444:5555666677778888' >
>> /sys/class/fc_host/host0/vport_create
>>
>>     and it successfully produces a new host6 and one vport, but it
>> does not create any virtual HBA PCI device, so I don't know how to
>> assign the virtual host to qemu-kvm.
>>
> Well, you can't; there is no mechanism for that. When using NPIV you
> need to pass in the individual LUNs via e.g. virtio-blk.
>
>>     From your mail, does the array first need to assign a LUN to
>> this vport? And then, with the newly created disk (a device like
>> /dev/sdf), do I add it to qemu-kvm with -drive file=/dev/sdf,if=virtio... arguments?
>>
> Yes. That's what you need to do.
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke              zSeries & Storage
> hare@suse.de                  +49 911 74053 688
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: Markus Rex, HRB 16746 (AG Nürnberg)
>

