From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: cody <mail.kai.huang@gmail.com>
Cc: Xen-devel@lists.xensource.com
Subject: Re: [question] Does HVM domain support xen-pcifront/xen-pciback?
Date: Tue, 7 Feb 2012 12:14:56 -0500 [thread overview]
Message-ID: <20120207171456.GD4375@phenom.dumpdata.com> (raw)
In-Reply-To: <4F3128DF.20208@gmail.com>
On Tue, Feb 07, 2012 at 09:36:31PM +0800, cody wrote:
> On 02/07/2012 01:58 AM, Konrad Rzeszutek Wilk wrote:
> >On Mon, Feb 06, 2012 at 04:32:05PM +0800, Kai Huang wrote:
> >>Hi,
> >>
> >>I see that in pcifront_init, if the domain is not a PV domain,
> >>pcifront_init just returns an error. So it seems an HVM domain does not
> >>support the xen-pcifront/xen-pciback mechanism? If that is true, why? I think
> >Yup. Because the only thing that the PV PCI protocol does is provide
> >PCI configuration-space access. And if you boot an HVM guest, QEMU does
> >that already.
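To expand on the above: the guard at the top of drivers/pci/xen-pcifront.c
looks roughly like this (a trimmed sketch, not the full init path):

    /* Bail out unless we are an unprivileged PV guest; an HVM guest
     * already gets config space access from QEMU, and dom0 talks to
     * the real hardware. */
    static int __init pcifront_init(void)
    {
            if (!xen_pv_domain() || xen_initial_domain())
                    return -ENODEV;

            return xenbus_register_frontend(&xenpci_driver);
    }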
> >
> I heard that qemu does not support PCIe emulation, and that Xen does not
> provide an MMIO mechanism but only the legacy IO port mechanism to the
> guest for configuration space access. Is this true?
The upstream version has an X58 north bridge implementation (ioh3420.c)
to support this.

In regard to the MMIO mechanism, are you talking about MSI-X and such? If
so, the answer is that it does. QEMU traps when a guest tries to write
MSI-X vectors in the BAR space and translates those into the appropriate
Xen calls to set up the vectors for the guest.
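For reference, on upstream QEMU a PCIe root port based on ioh3420 can be
instantiated along these lines (the machine type and the device placed
behind the port are purely illustrative here; under Xen the toolstack
builds the actual command line):

    qemu-system-x86_64 -M q35 \
        -device ioh3420,id=rp1,bus=pcie.0,slot=1 \
        -device virtio-net-pci,bus=rp1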
>
> If we use the IO port mechanism, we can only access the first 256 bytes
> of configuration space, but if we use the PV PCI protocol we do not have
> such a limitation. I think this is an advantage of the PV PCI protocol.
> Of course, if Xen provided an MMIO mechanism to the guest for
> configuration space access, it would not have this limitation either.
What do you mean by an MMIO mechanism? The setup of MSI-X vectors?
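For background on the 256-byte limit you mention: the legacy 0xCF8/0xCFC
mechanism only carries an 8-bit register offset, so it cannot reach the
PCIe extended configuration space above 0xFF; that needs MMCONFIG/ECAM.
A minimal user-space sketch of the legacy access (illustrative only; it
needs x86 and iopl() privileges):

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/io.h>

    /* Read one dword of config space via the legacy IO port mechanism.
     * The register offset field is 8 bits wide, so only offsets
     * 0x00-0xFF are addressable this way. */
    static uint32_t pci_cf8_read(uint8_t bus, uint8_t dev, uint8_t fn,
                                 uint8_t reg)
    {
        uint32_t addr = (1u << 31)              /* enable bit */
                      | ((uint32_t)bus << 16)
                      | ((uint32_t)dev << 11)
                      | ((uint32_t)fn  << 8)
                      | (reg & 0xFCu);          /* dword-aligned offset */

        outl(addr, 0xCF8);
        return inl(0xCFC);
    }

    int main(void)
    {
        if (iopl(3)) { perror("iopl"); return 1; }
        /* Vendor/device IDs of the host bridge at 00:00.0. */
        printf("00:00.0 -> %08x\n", pci_cf8_read(0, 0, 0, 0));
        return 0;
    }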
>
> >>technically there is nothing blocking support for pcifront/pciback
> >>in HVM, and for performance reasons there would be benefits if HVM
> >>supported PV PCI operations.
> >Nope. The PCI operations are just for writing configuration data in
> >the PCI config space, which is done mostly when a driver starts/stops
> >and no more.
> >
> >The performance question is about interrupts and how they are injected
> >into a guest - there PV is much faster than HVM due to the complexity of
> >the emulation layer. However, work by Stefano on shortening that path
> >has made the interrupt injection much, much faster.
> I think the PV PCI protocol could be used for other purposes in the
> future, such as PF/VF communication. In that case it would be better if
> an HVM domain could support the PV PCI protocol.
What is 'PF/VF' communication? Is it just the guest negotiating parameters
for the PF to inject into the VF? That seems like a huge security risk.
Or is it some other esoteric PCI configuration option that is not
available in the VF's BARs?
>
> -cody