From: Julien Grall <julien.grall@linaro.org>
To: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: edgar.iglesias@xilinx.com,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Chen <Wei.Chen@arm.com>, Steve Capper <Steve.Capper@arm.com>,
	Andre Przywara <andre.przywara@arm.com>,
	manish.jaggi@caviumnetworks.com, punit.agrawal@arm.com,
	vikrams@qti.qualcomm.com, okaya@qti.qualcomm.com, "Goel,
	Sameer" <sgoel@qti.qualcomm.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Dave P Martin <Dave.Martin@arm.com>,
	Vijaya Kumar K <Vijaya.Kumar@caviumnetworks.com>
Subject: Re: [RFC] ARM PCI Passthrough design document
Date: Tue, 30 May 2017 10:54:27 +0100	[thread overview]
Message-ID: <f018cf5f-9270-862c-e92e-d3799cdc43e3@linaro.org> (raw)
In-Reply-To: <20170530074033.iubct7qskle6ps5v@dhcp-3-128.uk.xensource.com>

Hi Roger,

On 30/05/17 08:40, Roger Pau Monné wrote:
> On Fri, May 26, 2017 at 06:14:09PM +0100, Julien Grall wrote:
> [...]
>> ## Who is in charge of the host bridge?
>>
>> There are numerous host bridge implementations on ARM. Some of them
>> require a specific driver as they cannot be driven by a generic host
>> bridge driver. Porting those drivers may be complex due to dependencies
>> on other components.
>>
>> This could be seen as a signal to leave the host bridge drivers in the
>> hardware domain. Because Xen would need to access the configuration space,
>> all the accesses would have to be forwarded to the hardware domain, which
>> in turn would access the hardware.
>
> IMHO this is much more complicated than it seems from the paragraph
> above. There is currently no way for Xen to forward PCI config space
> accesses to any other entity. The closest thing Xen has to this would
> possibly be IOREQ servers, but then you have to take into account that
> in order to forward PCI config space accesses to Dom0 you *might* have
> to schedule Dom0 (ie: context switch to it), perform the access and
> then context switch back to Xen and get the value. I don't think the
> PCI code is prepared for such asynchronous accesses at all.

I don't see any issue with scheduling DOM0... it is configuration space
access, not BAR access. It does not matter if it is slow. What matters
here is to be able to use the host bridges and do PCI passthrough with Xen.
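
To make the flow I have in mind concrete, here is a rough sketch of what a
forwarded configuration space access could look like. None of these names
exist in Xen today; they are made up for this e-mail and only illustrate the
"queue, schedule the hardware domain, wait for the result" round trip:

    /* Hypothetical sketch only: no such interface exists in Xen today. */
    #include <stdint.h>
    #include <stdbool.h>

    struct cfg_fwd_req {
        uint16_t segment;   /* which host bridge */
        uint8_t  bus;
        uint8_t  devfn;
        uint16_t reg;       /* offset in configuration space */
        uint8_t  size;      /* 1, 2 or 4 bytes */
        bool     write;
        uint32_t data;      /* in for writes, out for reads */
    };

    /* Placeholder for "queue the request, schedule the hardware domain
     * and wait for its host bridge driver to complete the access". In a
     * real design this would be an event-channel/ioreq-style round trip. */
    static void cfg_fwd_submit_and_wait(struct cfg_fwd_req *req)
    {
        (void)req;
    }

    static uint32_t cfg_fwd_read(uint16_t seg, uint8_t bus, uint8_t devfn,
                                 uint16_t reg, uint8_t size)
    {
        struct cfg_fwd_req req = {
            .segment = seg, .bus = bus, .devfn = devfn,
            .reg = reg, .size = size, .write = false,
        };

        /* May block: acceptable for configuration space, which is slow
         * path, unlike BAR/MMIO accesses. */
        cfg_fwd_submit_and_wait(&req);

        return req.data;
    }

The point is only that the round trip is allowed to be slow; correctness,
not latency, is what matters for configuration space.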

Also, the PCI code is currently x86-specific and not prepared for ARM.
That does not mean we should not get the code in shape to support ARM ;).

>
>> In this design document, we are considering that the host bridge driver can
>> be ported to Xen. In the case it is not possible, an interface to forward
>> configuration space accesses would need to be defined. The interface details
>> are out of scope.
>
> I think you have to state that either the driver is ported to Xen or the
> bridge will not be supported. I don't think it's feasible to forward
> PCI config space accesses from Xen to Dom0 at all.

Rather than arguing that the code is not ready for that, I would have
appreciated technical details on why it is not feasible.

I already gave, quite a few times, insights on why it might be difficult
to port a host bridge driver to Xen:
	- How do you configure the clocks? What if they are shared?
	- What about host bridges using indirect access (e.g. CF8-like)? What
do you expose to DOM0? (see the sketch after this list)
	- ....
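
To illustrate the second point, here is a rough comparison between an ECAM
bridge and a CF8-like indirect one. This is only a sketch, not code from any
existing driver; the register layout is the classic x86 indirect mechanism,
used here just to show the pattern:

    #include <stdint.h>

    /* ECAM: every (bus, devfn, reg) has its own MMIO address, so a
     * config read is a single load and needs no locking. */
    static uint32_t ecam_read(volatile uint8_t *ecam_base, uint8_t bus,
                              uint8_t devfn, uint16_t reg)
    {
        return *(volatile uint32_t *)
            (ecam_base + ((uint32_t)bus << 20) +
             ((uint32_t)devfn << 12) + reg);
    }

    /* Indirect access: an address register selects the target, then a
     * data register is read. The two accesses must not be interleaved,
     * so Xen and DOM0 cannot both drive the pair without agreeing on
     * ownership, or without one of them trapping/emulating it. */
    static uint32_t indirect_read(volatile uint32_t *addr_reg,
                                  volatile uint32_t *data_reg,
                                  uint8_t bus, uint8_t devfn, uint16_t reg)
    {
        /* A lock would be required here if the pair were shared. */
        *addr_reg = (1u << 31) | ((uint32_t)bus << 16) |
                    ((uint32_t)devfn << 8) | (reg & 0xfc);
        return *data_reg;
    }

That address/data pair is exactly what would need to be either owned by Xen
or emulated for DOM0, which is the question above.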

Such host bridges will end up pulling a lot of code into Xen and require
more design work than defining a way to forward configuration space
accesses. Those boards exist and people are looking at using Xen + PCI
passthrough, so saying they are not supported is not the right solution
here.

Anyway, I mentioned it in the design document to open a discussion; it is
not something I am going to focus on for a first version of PCI passthrough.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
