From: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: "Edgar E. Iglesias" <edgar.iglesias@xilinx.com>,
	"Stefano Stabellini" <sstabellini@kernel.org>,
	"Wei Chen" <Wei.Chen@arm.com>,
	"Campbell Sean" <scampbel@codeaurora.org>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Jiandi An" <anjiandi@codeaurora.org>,
	"Punit Agrawal" <punit.agrawal@arm.com>,
	"Steve Capper" <Steve.Capper@arm.com>,
	alistair.francis@xilinx.com,
	xen-devel <xen-devel@lists.xenproject.org>,
	"manish.jaggi@caviumnetworks.com"
	<manish.jaggi@caviumnetworks.com>,
	"Shanker Donthineni" <shankerd@codeaurora.org>,
	"Roger Pau Monné" <roger.pau@citrix.com>
Subject: Re: [early RFC] ARM PCI Passthrough design document
Date: Thu, 2 Mar 2017 22:13:09 +0100
Message-ID: <20170302211309.GV9606@toto>
In-Reply-To: <511c681c-c04e-2576-15dd-0b83c9512922@linaro.org>

On Thu, Feb 23, 2017 at 04:47:19PM +0000, Julien Grall wrote:
> 
> Hi Edgar,
> 
> On 22/02/17 04:03, Edgar E. Iglesias wrote:
> >On Mon, Feb 13, 2017 at 03:35:19PM +0000, Julien Grall wrote:
> >>On 02/02/17 15:33, Edgar E. Iglesias wrote:
> >>>On Wed, Feb 01, 2017 at 07:04:43PM +0000, Julien Grall wrote:
> >>>>On 31/01/2017 19:06, Edgar E. Iglesias wrote:
> >>>>>On Tue, Jan 31, 2017 at 05:09:53PM +0000, Julien Grall wrote:
> >>>I'll see if I can find working examples for PCIe on the ZCU102. Then I'll share
> >>>DTS, Kernel etc.
> >>
> >>I've found a device tree for the ZCU102 on GitHub (zynqmp-zcu102.dts);
> >>it looks like the PHY is not used for PCIe so far.
> >>
> >>Let's imagine that, in the future, PCIe will use the PHY. If we decide to
> >>initialize the hostbridge in Xen, we would also have to pull the PHY code
> >>into the hypervisor. Leaving aside the problem of pulling more code into
> >>Xen, this is not nice because the PHY is used by different components
> >>(e.g. SATA, USB). So Xen and DOM0 would have to share the PHY.
> >>
> >>From Xen's point of view, the best solution would be for the bootloader to
> >>initialize the PHY before starting Xen. That way we can keep all of the
> >>hostbridge handling (initialization + access) in Xen.
> >>
> >>If it is not possible, then I would prefer to see the hostbridge
> >>initialization in DOM0.
> >
> >>>
> >>>I suspect that this setup has previously been done by the initial bootloader,
> >>>which is auto-generated from the design configuration tools.
> >>>
> >>>Now, this is moving into Linux.
> >>
> >>Do you know why they decided to move the code into Linux? What would be the
> >>problem with letting the bootloader configure the GT?
> >
> >
> >No, I'm not sure why this approach was not used. The only thing I can think of
> >is a runtime configuration approach.
> >
> >
> >>
> >>>There's a specific driver that does that, but AFAICS it has not been upstreamed yet.
> >>>You can see it here:
> >>>https://github.com/Xilinx/linux-xlnx/blob/master/drivers/phy/phy-zynqmp.c
> >>>
> >>>DTS nodes that need a PHY can then just refer to it, here's an example from SATA:
> >>>&sata {
> >>>       phy-names = "sata-phy";
> >>>       phys = <&lane3 PHY_TYPE_SATA 1 3 150000000>;
> >>>};
> >>>
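For reference, this is roughly the pattern a Linux consumer driver (SATA, USB
or, in the future, PCIe) uses to pick up such a serdes lane through the
generic PHY framework. It is only a sketch of the pattern, not the actual
Xilinx driver code, but it illustrates why the PHY driver would have to be
shared by whoever owns the consumer devices:

    #include <linux/err.h>
    #include <linux/phy/phy.h>
    #include <linux/platform_device.h>

    /* Sketch: grab the lane described by 'phys = <&lane3 ...>' in the DT
     * and bring it up.  Error handling trimmed for brevity. */
    static int example_probe(struct platform_device *pdev)
    {
            struct phy *phy;

            phy = devm_phy_get(&pdev->dev, "sata-phy");
            if (IS_ERR(phy))
                    return PTR_ERR(phy);

            phy_init(phy);       /* lane/PLL setup done by the PHY driver */
            phy_power_on(phy);   /* GT powered up before the link is used */

            /* ... normal controller initialisation continues here ... */
            return 0;
    }
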
> >Yes, I agree that doing the GT setup in the bootloader is very attractive.
> >I don't think the setup sequence is complicated; we can perhaps even do it
> >on the command line in u-boot or xsdb. I'll have to check.
> 
> That might simplify things for Xen. I would be happy to consider any other
> solutions. It would probably be worth kicking off a separate thread about
> how to support the Xilinx host controller in Xen.
> 
> For now, I will describe in the design document the different situations we
> can encounter with a hostbridge and will leave the design of the
> initialization bits open.
> 
> 
> [...]
> 
> >>>>
> >>>>From a design point of view, it would make more sense to have the MSI
> >>>>controller driver in Xen, as the hostbridge emulation for guests will also
> >>>>live there.
> >>>>
> >>>>So if we receive MSIs in Xen, we need to figure out a way for DOM0 and
> >>>>guests to receive them. Using the same mechanism for both would be best,
> >>>>and I guess non-PV if possible. I know you are looking to boot unmodified
> >>>>OSes in a VM. This would mean we need to emulate the MSI controller and
> >>>>potentially the Xilinx PCI controller. How much are you willing to modify
> >>>>the OS?
> >>>
> >>>Today, we have not yet implemented PCIe drivers for our baremetal SDK. So
> >>>things are very open and we could design with pretty much anything in mind.
> >>>
> >>>Yes, we could perhaps include a very small model with most registers dummied.
> >>>Implementing the MSI read FIFO would allow us to:
> >>>
> >>>1. Inject the MSI doorbell SPI into guests. The guest will then see the same
> >>>  IRQ as on real HW.
> >>>
> >>>2. The guest reads the host-controller registers (the MSI FIFO) to retrieve
> >>>  the signalled MSI.
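
As a rough sketch of option 2 (the register names and offsets below are
hypothetical, not the real Xilinx register layout, and dispatch_msi() is an
invented hook), the guest's handler for the doorbell SPI would simply drain
the FIFO:

    #include <stdint.h>

    /* Hypothetical register layout -- not the real Xilinx one. */
    #define MSI_FIFO_STATUS  0x100u   /* bit 0: FIFO non-empty (made up)   */
    #define MSI_FIFO_READ    0x104u   /* pops one pending vector (made up) */

    extern void dispatch_msi(uint32_t vector);

    static inline uint32_t mmio_read32(volatile void *addr)
    {
            return *(volatile uint32_t *)addr;
    }

    /* Called when the MSI doorbell SPI fires in the guest. */
    static void msi_doorbell_isr(volatile uint8_t *regs)
    {
            while (mmio_read32(regs + MSI_FIFO_STATUS) & 0x1u) {
                    uint32_t vector = mmio_read32(regs + MSI_FIFO_READ);
                    dispatch_msi(vector);
            }
    }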
> >>
> >>The Xilinx PCIe hostbridge is not the only hostbridge with an embedded MSI
> >>controller, so I would like to see a generic solution if possible. This
> >>would avoid increasing the amount of emulation code required in Xen.
> >>
> >>My concern with a FIFO is that it will require an upper bound to avoid using
> >>too much memory in Xen. What if the FIFO is full? Will you drop MSIs?
> >
> >The FIFO I'm referring to is a FIFO in the MSI controller itself.
> 
> Sorry if it was unclear. I was trying to explain what the issue would be with
> emulating this kind of MSI controller in Xen, not with using it in Xen.
> 
> >I agree that this wouldn't be generic though....
> 
> An idea would be to emulate a GICv2m frame (see appendix E in ARM-DEN-0029
> v3.0) for the guest. The frame is able to handle a certain number of SPIs.
> Each MSI will be presented as a unique SPI. The SPI <-> MSI association is
> left to the discretion of the driver.
> 
> A guest will discover the number of SPIs by reading the MSI_TYPER register.
> To set up an MSI, the guest will compose the message using the GICv2m
> doorbell (see the MSI_SETSPI_NS register in the frame) and the allocated SPI.
> As the PCI hostbridge will be emulated for the guest, any write to the MSI
> configuration space would be trapped. Then, I would expect Xen to allocate a
> host MSI, compose a new message using the doorbell of the Xilinx MSI
> controller, and then write it into the host PCI configuration space.
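
To make the guest-visible side concrete, here is a sketch. The register
offsets and MSI_TYPER field layout below are the standard GICv2m frame ones;
the trap/translation step in Xen is only outlined in the final comment and
its helpers are not named here:

    #include <stdint.h>

    /* GICv2m frame registers (offsets within the 4KB frame). */
    #define V2M_MSI_TYPER          0x008u
    #define V2M_MSI_SETSPI_NS      0x040u
    #define V2M_MSI_TYPER_BASE(x)  (((x) >> 16) & 0x3ffu)  /* first SPI   */
    #define V2M_MSI_TYPER_NUM(x)   ((x) & 0x3ffu)          /* number SPIs */

    /* Guest: check the SPI range and build the MSI message for one SPI. */
    static int guest_setup_msi(uint64_t v2m_frame_base, uint32_t typer,
                               uint32_t spi,
                               uint64_t *msi_addr, uint32_t *msi_data)
    {
            uint32_t first = V2M_MSI_TYPER_BASE(typer);
            uint32_t count = V2M_MSI_TYPER_NUM(typer);

            if (spi < first || spi >= first + count)
                    return -1;   /* outside the range advertised by MSI_TYPER */

            *msi_addr = v2m_frame_base + V2M_MSI_SETSPI_NS;  /* doorbell   */
            *msi_data = spi;                                 /* SPI number */

            /*
             * The guest writes msi_addr/msi_data into the device's MSI
             * capability.  As the hostbridge is emulated, Xen traps that
             * config-space write, allocates a host MSI, and writes the
             * physical Xilinx doorbell address/data into the real device
             * instead.
             */
            return 0;
    }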
> 
> An MSI will be received by the hypervisor, which will look up the domain it
> needs to be injected into and will inject the SPI that was configured for it.
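
On the hypervisor side the receive path would then be roughly as follows.
All structure and function names here are invented for illustration; in
particular inject_virtual_spi() stands in for whatever vGIC interface ends
up being used:

    struct domain;                        /* Xen's domain structure        */

    struct msi_binding {
            struct domain *d;             /* domain owning the device      */
            unsigned int   vspi;          /* virtual SPI chosen by guest   */
    };

    extern struct msi_binding *msi_binding_lookup(unsigned int host_msi);
    extern void inject_virtual_spi(struct domain *d, unsigned int vspi);

    /* A physical MSI arrives at the hypervisor. */
    static void host_msi_handler(unsigned int host_msi)
    {
            struct msi_binding *b = msi_binding_lookup(host_msi);

            if (!b)
                    return;               /* unbound MSI: drop it */

            inject_virtual_spi(b->d, b->vspi);
    }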
> 
> The frame is always 4KB and the MSI doorbell is embedded in it. This means we
> cannot map the virtual GICv2m MSI doorbell onto the Xilinx MSI doorbell. The
> same problem will also arise when using a virtual ITS, because a guest may
> have devices assigned through different physical ITSes. As each ITS has its
> own doorbell, we would have to map all the ITS doorbells into the guest,
> since we may not know which ITS will be used for hotplugged devices.
> 
> To solve this problem, I would suggest reserving a range in the guest
> address space in which to map MSI doorbells.
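
A sketch of the reserved-range idea; the base address, size and
map_mmio_to_guest() helper are placeholders, not values or interfaces from
any existing guest memory map:

    struct domain;
    extern int map_mmio_to_guest(struct domain *d, unsigned long gaddr,
                                 unsigned long maddr, unsigned long size);

    #define GUEST_MSI_DOORBELL_BASE  0x08800000ul   /* placeholder address */
    #define GUEST_MSI_DOORBELL_SIZE  0x00100000ul   /* 256 x 4KB slots     */

    static unsigned long next_slot;

    /* Map one physical doorbell page at the next free slot of the reserved
     * guest range, returning the guest address (0 if the window is full). */
    static unsigned long map_doorbell(struct domain *d, unsigned long phys_db)
    {
            unsigned long offset = next_slot * 4096ul;

            if (offset >= GUEST_MSI_DOORBELL_SIZE)
                    return 0;

            map_mmio_to_guest(d, GUEST_MSI_DOORBELL_BASE + offset,
                              phys_db, 4096ul);
            next_slot++;
            return GUEST_MSI_DOORBELL_BASE + offset;
    }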
> 
> This solution is the most generic I have in mind. The driver for the guest
> is very simple and the amount of emulation required is quite limited. Any
> opinions?

Yes, GICv2m is probably as generic and simple as we can get.
Sounds good as a starting point; if we run into something, we can reconsider.

Thanks,
Edgar



> 
> I am also open to any other suggestions.
> 
> Cheers,
> 
> -- 
> Julien Grall
> 
