xen-devel.lists.xenproject.org archive mirror
* [question] Does HVM domain support xen-pcifront/xen-pciback?
From: Kai Huang @ 2012-02-06  8:32 UTC
  To: Xen-devel

Hi,

I see in pcifront_init that if the domain is not a PV domain,
pcifront_init just returns an error. So it seems HVM domains do not
support the xen-pcifront/xen-pciback mechanism? If that is true, why?
I think technically there is nothing blocking pcifront/pciback support
in HVM, and for performance reasons there would be benefits if HVM
supported PV PCI operations.
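
For reference, the check I am referring to looks roughly like this
(paraphrased from memory of drivers/pci/xen-pcifront.c, so details may
differ):

static int __init pcifront_init(void)
{
	/* Bail out unless we are a PV domU: HVM guests (and dom0)
	 * never register a pcifront instance. */
	if (!xen_pv_domain() || xen_initial_domain())
		return -ENODEV;

	pci_frontend_registrar(1 /* enable */);

	return xenbus_register_frontend(&xenpci_driver);
}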

-cody

* Re: [question] Does HVM domain support xen-pcifront/xen-pciback?
From: Konrad Rzeszutek Wilk @ 2012-02-06 17:58 UTC
  To: Kai Huang; +Cc: Xen-devel

On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
> Hi,
> 
> I see in pcifront_init that if the domain is not a PV domain,
> pcifront_init just returns an error. So it seems HVM domains do not
> support the xen-pcifront/xen-pciback mechanism? If that is true, why?
> I think

Yup. B/c the only thing that the PV PCI protocol does is enable
PCI configuration emulation. And if you boot an HVM guest - QEMU does
that already.

> technically there is nothing blocking pcifront/pciback support in
> HVM, and for performance reasons there would be benefits if HVM
> supported PV PCI operations.

Nope. The PCI operations are just for writing configuration data in the
PCI config space, which is done mostly when a driver starts/stops and no
more.

The performance concern is with interrupts and how to inject them into a
guest - and there PV is much faster than HVM due to the emulation layer
complexity. However, work by Stefano on making that path shorter has made
the interrupt injection much, much faster.

* Re: [question] Does HVM domain support xen-pcifront/xen-pciback?
From: cody @ 2012-02-07 13:36 UTC
  To: Konrad Rzeszutek Wilk; +Cc: Xen-devel

On 02/07/2012 01:58 AM, Konrad Rzeszutek Wilk wrote:
> On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
>    
>> Hi,
>>
>> I see in pcifront_init that if the domain is not a PV domain,
>> pcifront_init just returns an error. So it seems HVM domains do not
>> support the xen-pcifront/xen-pciback mechanism? If that is true, why?
>> I think
>>      
> Yup. B/c the only thing that the PV PCI protocol does is enable
> PCI configuration emulation. And if you boot an HVM guest - QEMU does
> that already.
>
>    
I heard QEMU does not support PCIe emulation, and that Xen provides only
the legacy IO-port mechanism, not an MMIO mechanism, for guest
configuration space access. Is this true?

If using the IO-port mechanism, we can only access the first 256B of
configuration space, but the PV PCI protocol does not have that
limitation. I think this is an advantage of the PV PCI protocol. Of
course, if Xen provided an MMIO mechanism to the guest for configuration
space access, it would not have this limitation either.

>> technically there is nothing blocking pcifront/pciback support in
>> HVM, and for performance reasons there would be benefits if HVM
>> supported PV PCI operations.
>>      
> Nope. The PCI operations are just for writing configuration data in the
> PCI config space, which is done mostly when a driver starts/stops and no
> more.
>
> The performance concern is with interrupts and how to inject them into a
> guest - and there PV is much faster than HVM due to the emulation layer
> complexity. However, work by Stefano on making that path shorter has made
> the interrupt injection much, much faster.
>    
I think the PV PCI protocol could be used for other purposes in the
future, such as PF/VF communication. In that case it would be better if
HVM domains supported the PV PCI protocol.

-cody

* Re: [question] Does HVM domain support xen-pcifront/xen-pciback?
From: Konrad Rzeszutek Wilk @ 2012-02-07 17:14 UTC
  To: cody; +Cc: Xen-devel

On Tue, Feb 07, 2012 at 09:36:31PM +0800, cody wrote:
> On 02/07/2012 01:58 AM, Konrad Rzeszutek Wilk wrote:
> >On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
> >>Hi,
> >>
> >>I see in pcifront_init that if the domain is not a PV domain,
> >>pcifront_init just returns an error. So it seems HVM domains do not
> >>support the xen-pcifront/xen-pciback mechanism? If that is true,
> >>why? I think
> >Yup. B/c the only thing that the PV PCI protocol does is enable
> >PCI configuration emulation. And if you boot an HVM guest - QEMU does
> >that already.
> >
> I heard QEMU does not support PCIe emulation, and that Xen provides
> only the legacy IO-port mechanism, not an MMIO mechanism, for guest
> configuration space access. Is this true?

The upstream version has an X58 north bridge implementation to support
this (ioh3420.c).

In regards to the MMIO mechanism, are you talking about MSI-X and such?
Then the answer is it does: QEMU traps when a guest tries to write MSI
vectors in the BAR space and translates those into the appropriate Xen
calls to set up the vectors for the guest.
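
Schematically, that trap-and-forward path looks like the following;
handle_msix_table_write() and xen_setup_guest_vector() are hypothetical
names standing in for QEMU's real MSI-X MMIO handler and the underlying
libxc call:

#include <stdint.h>

/* Hypothetical stand-in for the real QEMU/libxc plumbing. */
extern void xen_setup_guest_vector(unsigned int vec, uint64_t addr,
                                   uint32_t data);

struct msix_entry {
	uint64_t addr;	/* guest-programmed message address */
	uint32_t data;	/* guest-programmed message data    */
	uint32_t ctrl;	/* per-vector mask bit, etc.        */
};

static void handle_msix_table_write(struct msix_entry *table,
                                    unsigned int vector,
                                    unsigned int offset, uint32_t val)
{
	uint32_t *raw = (uint32_t *)&table[vector];

	raw[offset / 4] = val;		/* emulate the table write */

	if (!(table[vector].ctrl & 1))	/* vector unmasked?        */
		/* ...then ask Xen to route the real interrupt to the
		 * guest with the address/data the guest programmed. */
		xen_setup_guest_vector(vector, table[vector].addr,
		                       table[vector].data);
}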

> 
> If using the IO-port mechanism, we can only access the first 256B of
> configuration space, but the PV PCI protocol does not have that
> limitation. I think this is an advantage of the PV PCI protocol. Of
> course, if Xen provided an MMIO mechanism to the guest for
> configuration space access, it would not have this limitation either.

What do you mean by the MMIO mechanism? Setup of MSI-X vectors?
> 
> >>technically there is nothing blocking pcifront/pciback support in
> >>HVM, and for performance reasons there would be benefits if HVM
> >>supported PV PCI operations.
> >Nope. The PCI operations are just for writing configuration data in
> >the PCI config space, which is done mostly when a driver starts/stops
> >and no more.
> >
> >The performance concern is with interrupts and how to inject them
> >into a guest - and there PV is much faster than HVM due to the
> >emulation layer complexity. However, work by Stefano on making that
> >path shorter has made the interrupt injection much, much faster.
> I think the PV PCI protocol could be used for other purposes in the
> future, such as PF/VF communication. In that case it would be better
> if HVM domains supported the PV PCI protocol.

What is 'PF/VF' communication? Is it just the negotiating in the guest of
parameters for the PF to inject into the VF? That seems like a huge
security risk.

Or is it some other esoteric PCI configuration option that is not
available in the VF's BARs?

> 
> -cody
> 

* Re: [question] Does HVM domain support xen-pcifront/xen-pciback?
From: Kai Huang @ 2012-02-08  4:57 UTC
  To: Konrad Rzeszutek Wilk; +Cc: Xen-devel

On Wed, Feb 8, 2012 at 1:14 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Tue, Feb 07, 2012 at 09:36:31PM +0800, cody wrote:
>> On 02/07/2012 01:58 AM, Konrad Rzeszutek Wilk wrote:
>> >On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
>> >>Hi,
>> >>
>> >>I see in pcifront_init that if the domain is not a PV domain,
>> >>pcifront_init just returns an error. So it seems HVM domains do not
>> >>support the xen-pcifront/xen-pciback mechanism? If that is true,
>> >>why? I think
>> >Yup. B/c the only thing that the PV PCI protocol does is enable
>> >PCI configuration emulation. And if you boot an HVM guest - QEMU does
>> >that already.
>> >
>> I heard QEMU does not support PCIe emulation, and that Xen provides
>> only the legacy IO-port mechanism, not an MMIO mechanism, for guest
>> configuration space access. Is this true?
>
> The upstream version has an X58 north bridge implementation to support
> this (ioh3420.c).
>
> In regards to the MMIO mechanism, are you talking about MSI-X and such?
> Then the answer is it does: QEMU traps when a guest tries to write MSI
> vectors in the BAR space and translates those into the appropriate Xen
> calls to set up the vectors for the guest.

The MMIO mechanism means software can access PCI configuration space
through memory space, using normal mov instructions, just like normal
memory accesses. I believe all modern PCs provide this mechanism.
Basically, a physical memory address range is reserved for PCI
configuration space, and its base address is reported in an ACPI table
(MCFG). If there is no such information in the ACPI tables, we need to
use the legacy IO-port mechanism.
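
To make that concrete, here is a minimal sketch of ECAM-style address
formation; the bit layout follows the PCIe spec, but the function name
is just illustrative, and the caller is assumed to have already mapped
the MCFG-reported base:

#include <stdint.h>

/* ECAM/MMCONFIG: each (bus, device, function) owns a 4 KiB window
 * above the MCFG-reported base, so the full 4K PCIe config space
 * (offsets 0x000-0xFFF) is reachable with ordinary loads/stores. */
static inline volatile uint32_t *
pci_ecam_reg(volatile uint8_t *ecam_base,
             uint8_t bus, uint8_t dev, uint8_t fn, uint16_t off)
{
	return (volatile uint32_t *)(ecam_base +
	                             (((uint32_t)bus << 20) |
	                              ((uint32_t)dev << 15) |
	                              ((uint32_t)fn  << 12) |
	                              (off & 0xFFCu)));
}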

I am not specifically talking about MSI-X. Accessing PCI configuration
space through IO ports has the limitation that we can only access the
first 256B of configuration space, as it was designed for the PCI bus.
For a PCIe device, configuration space is extended to 4K, which means we
cannot access configuration space beyond 256B using the legacy IO-port
mechanism. This is why PCIe devices require the MMIO mechanism to access
their configuration space. If a PCIe device implements some capability,
such as the MSI-X capability, beyond the first 256B of its configuration
space, we will never be able to enable MSI-X using the IO-port
mechanism.
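
For comparison, a sketch of the legacy 0xCF8/0xCFC path; the 8-bit,
dword-aligned register field in CONFIG_ADDRESS is exactly where the
256B ceiling comes from (outl()/inl() are the standard x86 port-I/O
helpers and need I/O privileges, e.g. via iopl()):

#include <stdint.h>
#include <sys/io.h>	/* outl()/inl() on Linux x86 */

static uint32_t pci_cf8_read(uint8_t bus, uint8_t dev,
                             uint8_t fn, uint8_t off)
{
	uint32_t addr = 0x80000000u |		/* enable bit	  */
	                ((uint32_t)bus << 16) |
	                ((uint32_t)dev << 11) |
	                ((uint32_t)fn  << 8)  |
	                (off & 0xFCu);		/* only 8 bits	  */

	outl(addr, 0xCF8);			/* CONFIG_ADDRESS */
	return inl(0xCFC);			/* CONFIG_DATA	  */
}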

I am not familiar with Xen and don't know whether Xen provides an MMIO
mechanism to the guest for PCI configuration space access. (To provide
it, I think we would need to report the base address of the
configuration space in the guest's ACPI tables and mark the pages of the
configuration space as non-accessible, since the hypervisor needs to
trap the guest's configuration space accesses.)

-cody

>
>>
>> If using the IO-port mechanism, we can only access the first 256B of
>> configuration space, but the PV PCI protocol does not have that
>> limitation. I think this is an advantage of the PV PCI protocol. Of
>> course, if Xen provided an MMIO mechanism to the guest for
>> configuration space access, it would not have this limitation either.
>
> What do you mean by the MMIO mechanism? Setup of MSI-X vectors?

No. See above.

>>
>> >>technically there is nothing blocking pcifront/pciback support in
>> >>HVM, and for performance reasons there would be benefits if HVM
>> >>supported PV PCI operations.
>> >Nope. The PCI operations are just for writing configuration data in
>> >the PCI config space, which is done mostly when a driver starts/stops
>> >and no more.
>> >
>> >The performance concern is with interrupts and how to inject them
>> >into a guest - and there PV is much faster than HVM due to the
>> >emulation layer complexity. However, work by Stefano on making that
>> >path shorter has made the interrupt injection much, much faster.
>> I think the PV PCI protocol could be used for other purposes in the
>> future, such as PF/VF communication. In that case it would be better
>> if HVM domains supported the PV PCI protocol.
>
> What is 'PF/VF' communication? Is it just the negotiating in the guest
> of parameters for the PF to inject into the VF? That seems like a huge
> security risk.
>
> Or is it some other esoteric PCI configuration option that is not
> available in the VF's BARs?
>

I think with a real SR-IOV card there will be cases where the VF needs
to get data (register or memory data) from the PF (or send data to it)
to work correctly, not just configuration space data. It is highly
card-specific, and we cannot assume the PF/VF dependencies for a
particular card. I have no experience with SR-IOV card driver
development, so I can't tell exactly what kinds of communication will be
needed, but it does exist -- Intel's SR-IOV cards have their own mailbox
protocol for this, though there the communication is implemented in the
card hardware.
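
As an illustration only, here is a hypothetical VF-side mailbox send,
loosely modeled on the idea behind Intel's ixgbe/ixgbevf mailbox; every
register name, offset, and bit below is invented for the sketch, not
taken from any datasheet:

#include <stdint.h>

#define VF_MBX_CTRL	0x0000	/* hypothetical: request/ack bits */
#define VF_MBX_DATA	0x0010	/* hypothetical: message buffer   */
#define MBX_REQ		(1u << 0)
#define MBX_ACK		(1u << 1)

static int vf_mbx_send(volatile uint32_t *vf_regs,
                       const uint32_t *msg, int words)
{
	int i, spin;

	for (i = 0; i < words; i++)		/* stage the message */
		vf_regs[VF_MBX_DATA / 4 + i] = msg[i];

	vf_regs[VF_MBX_CTRL / 4] = MBX_REQ;	/* signal the PF     */

	for (spin = 0; spin < 100000; spin++)	/* poll for the ack  */
		if (vf_regs[VF_MBX_CTRL / 4] & MBX_ACK)
			return 0;

	return -1;	/* PF never answered */
}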

-cody

>>
>> -cody
>>

* Re: [question] Does HVM domain support xen-pcifront/xen-pciback?
From: Stefano Stabellini @ 2012-02-08 18:07 UTC
  To: Kai Huang; +Cc: Xen-devel@lists.xensource.com, Konrad Rzeszutek Wilk

On Wed, 8 Feb 2012, Kai Huang wrote:
> On Wed, Feb 8, 2012 at 1:14 AM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On Tue, Feb 07, 2012 at 09:36:31PM +0800, cody wrote:
> >> On 02/07/2012 01:58 AM, Konrad Rzeszutek Wilk wrote:
> >> >On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
> >> >>Hi,
> >> >>
> >> >>I see in pcifront_init that if the domain is not a PV domain,
> >> >>pcifront_init just returns an error. So it seems HVM domains do
> >> >>not support the xen-pcifront/xen-pciback mechanism? If that is
> >> >>true, why? I think
> >> >Yup. B/c the only thing that the PV PCI protocol does is enable
> >> >PCI configuration emulation. And if you boot an HVM guest - QEMU does
> >> >that already.
> >> >
> >> I heard QEMU does not support PCIe emulation, and that Xen provides
> >> only the legacy IO-port mechanism, not an MMIO mechanism, for guest
> >> configuration space access. Is this true?
> >
> > The upstream version has an X58 north bridge implementation to
> > support this (ioh3420.c).
> >
> > In regards to the MMIO mechanism, are you talking about MSI-X and
> > such? Then the answer is it does: QEMU traps when a guest tries to
> > write MSI vectors in the BAR space and translates those into the
> > appropriate Xen calls to set up the vectors for the guest.
> 
> The MMIO mechanism means software can access PCI configuration space
> through memory space, using normal mov instructions, just like normal
> memory accesses. I believe all modern PCs provide this mechanism.
> Basically, a physical memory address range is reserved for PCI
> configuration space, and its base address is reported in an ACPI table
> (MCFG). If there is no such information in the ACPI tables, we need to
> use the legacy IO-port mechanism.
> 
> I am not specifically talking about MSI-X. Accessing PCI configuration
> space through IO ports has the limitation that we can only access the
> first 256B of configuration space, as it was designed for the PCI bus.
> For a PCIe device, configuration space is extended to 4K, which means
> we cannot access configuration space beyond 256B using the legacy
> IO-port mechanism. This is why PCIe devices require the MMIO mechanism
> to access their configuration space. If a PCIe device implements some
> capability, such as the MSI-X capability, beyond the first 256B of its
> configuration space, we will never be able to enable MSI-X using the
> IO-port mechanism.
> 
> I am not familiar with Xen and don't know whether Xen provides an MMIO
> mechanism to the guest for PCI configuration space access. (To provide
> it, I think we would need to report the base address of the
> configuration space in the guest's ACPI tables and mark the pages of
> the configuration space as non-accessible, since the hypervisor needs
> to trap the guest's configuration space accesses.)

We don't support PCI configuration via MMIO at the moment but it is
certainly possible to introduce it.

On the other hand, having PV PCI work with HVM guests is technically
challenging because we would need to support the emulated PCI domain/bus
and the new PV PCI domain/bus created by pcifront at the same time.
For example, xenbus initialization is done through the Xen platform-pci
driver.
Also, considering that we need to support the emulated passthrough code
for Windows guests anyway, I don't think that introducing any more
complexity or use cases in either the PV or emulated code paths is a
good idea.

Thread overview: 6+ messages
2012-02-06  8:32 [question] Does HVM domain support xen-pcifront/xen-pciback? Kai Huang
2012-02-06 17:58 ` Konrad Rzeszutek Wilk
2012-02-07 13:36   ` cody
2012-02-07 17:14     ` Konrad Rzeszutek Wilk
2012-02-08  4:57       ` Kai Huang
2012-02-08 18:07         ` Stefano Stabellini
