From: cody <mail.kai.huang@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Xen-devel@lists.xensource.com
Subject: Re: [question] Does HVM domain support xen-pcifront/xen-pciback?
Date: Tue, 07 Feb 2012 21:36:31 +0800
Message-ID: <4F3128DF.20208@gmail.com>
In-Reply-To: <20120206175812.GB14439@phenom.dumpdata.com>

On 02/07/2012 01:58 AM, Konrad Rzeszutek Wilk wrote:
> On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
>    
>> Hi,
>>
>> I see that in pcifront_init, if the domain is not a PV domain, pcifront_init
>> just returns an error. So it seems an HVM domain does not support the
>> xen-pcifront/xen-pciback mechanism? If that is true, why? I think
>>      
> Yup. B/c the only thing that the PV PCI protocol does is enable
> PCI configuration emulation. And if you boot an HVM guest - QEMU does
> that already.
>
>    
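(For reference, the guard I was referring to in pcifront_init looks roughly
like this; paraphrased from memory, not an exact copy of the upstream
drivers/pci/xen-pcifront.c:)

    static int __init pcifront_init(void)
    {
            /* An HVM guest fails the xen_pv_domain() test, so the
             * frontend driver simply refuses to load. */
            if (!xen_pv_domain() || xen_initial_domain())
                    return -ENODEV;

            return xenbus_register_frontend(&xenpci_driver);
    }
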
I heard that QEMU does not support PCIe emulation, and that Xen does not
provide an MMIO mechanism to the guest for configuration space access, only
the legacy I/O port mechanism. Is this true?

With the I/O port mechanism we can only access the first 256 bytes of 
configuration space, but with the PV PCI protocol we do not have that 
limitation. I think this is an advantage of the PV PCI protocol. Of course, 
if Xen provided an MMIO mechanism to the guest for configuration space 
access, that limitation would go away as well.
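
To illustrate: the legacy mechanism encodes the register offset in the
CONFIG_ADDRESS dword written to port 0xCF8, and only 8 bits are reserved for
it, so offsets above 0xFF simply cannot be expressed. An ECAM/MMCONFIG
mapping gives each function a 4KB window, so the whole PCIe extended
configuration space is reachable as a plain MMIO offset. A small
self-contained sketch of the two address encodings (plain C, just the
arithmetic, no real port or MMIO access):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Legacy CONFIG_ADDRESS (port 0xCF8) layout: bit 31 enable,
     * bits 23:16 bus, 15:11 device, 10:8 function, 7:2 register.
     * Only 8 bits of offset fit, so offsets >= 0x100 are lost. */
    static uint32_t cf8_address(unsigned bus, unsigned dev,
                                unsigned fn, unsigned off)
    {
            return 0x80000000u
                 | (bus & 0xffu) << 16
                 | (dev & 0x1fu) << 11
                 | (fn  & 0x07u) << 8
                 | (off & 0xfcu);
    }

    /* ECAM: each bus/device/function gets a 4KB window, so the full
     * 4096-byte extended configuration space is addressable. */
    static uint64_t ecam_offset(unsigned bus, unsigned dev,
                                unsigned fn, unsigned off)
    {
            return ((uint64_t)bus << 20) | (dev << 15)
                 | (fn << 12) | (off & 0xfffu);
    }

    int main(void)
    {
            /* Offset 0x100 (first PCIe extended capability): the legacy
             * encoding silently truncates it, ECAM does not. */
            printf("cf8  0x%08" PRIx32 " (offset bits lost)\n",
                   cf8_address(0, 3, 0, 0x100));
            printf("ecam 0x%" PRIx64 "\n", ecam_offset(0, 3, 0, 0x100));
            return 0;
    }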

>> technically there's nothing that blocks supporting pcifront/pciback
>> in HVM, and for performance reasons there would be benefits if HVM
>> supported PV PCI operations.
>>      
> Nope. The PCI operations are just for writing configuration data in the
> PCI space, which is done mostly when a driver starts/stops and no more.
>
> The performance question is about interrupts and how to inject them in a guest -
> and there PV is much faster than HVM due to the emulation layer complexity.
> However, work by Stefano on making that path shorter has made interrupt
> injection much, much faster.
>    
I think the PV PCI protocol could be used for other purposes in the future, 
such as PF/VF communication. In that case it would be better if HVM 
domains could support the PV PCI protocol.

-cody


Thread overview: 6+ messages
2012-02-06  8:32 [question] Does HVM domain support xen-pcifront/xen-pciback? Kai Huang
2012-02-06 17:58 ` Konrad Rzeszutek Wilk
2012-02-07 13:36   ` cody [this message]
2012-02-07 17:14     ` Konrad Rzeszutek Wilk
2012-02-08  4:57       ` Kai Huang
2012-02-08 18:07         ` Stefano Stabellini
