From: Paul Durrant <Paul.Durrant@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
Jan Beulich <jbeulich@suse.com>,
Ian Jackson <Ian.Jackson@citrix.com>,
"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap accesses to the PCI config space
Date: Tue, 25 Apr 2017 08:35:38 +0000 [thread overview]
Message-ID: <8bafd32199544602912674ed218507f9@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <20170425082723.wjomk4rdekkz6aov@dhcp-3-128.uk.xensource.com>
> -----Original Message-----
> From: Roger Pau Monne
> Sent: 25 April 2017 09:27
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; konrad.wilk@oracle.com;
> boris.ostrovsky@oracle.com; Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <Andrew.Cooper3@citrix.com>
> Subject: Re: [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap
> accesses to the PCI config space
>
> On Mon, Apr 24, 2017 at 12:50:58PM +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monne
> > > Sent: 24 April 2017 12:03
> > > To: Paul Durrant <Paul.Durrant@citrix.com>
> > > Cc: xen-devel@lists.xenproject.org; konrad.wilk@oracle.com;
> > > boris.ostrovsky@oracle.com; Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu
> > > <wei.liu2@citrix.com>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> > > <Andrew.Cooper3@citrix.com>
> > > Subject: Re: [PATCH v2 1/9] xen/vpci: introduce basic handlers to trap
> > > accesses to the PCI config space
> > >
> > > IMHO I'm not sure Xen needs PCI register based trapping granularity. I
> > > would argue that whatever (IOREQ or Xen internal function) wants to trap
> > > access to a specific PCI config device register needs to take care of
> > > all the registers for that device.
> > >
> >
> > Having distinct handlers for distinct groups of registers makes sense
> > though... e.g. being able to register a BAR handler for each BAR and then
> > maybe an MSI-X capability handler for wherever that appears in the
> > capability chain, etc. If you don't allow such registration at the top
> > level then it ends up getting done at the next level.
>
> Yes, that's what's done here. Handlers for specific registers are added at the
> next level (vPCI). See patches 5, 6, 8 or 9 for examples.
>
> > That said, it may make more sense to have a top level of emulation that
> > just handles all register reads and writes to config space and then a
> > second level that has callbacks for BAR enumeration, bus master enable,
> > MSI-X mask/unmask, etc.
> >
> > > I will look into hooking this code (vPCI) into the existing hvm_*_ioreq
> > > functionality, so that vPCI claims the full PCI config space for each
> > > device it manages.
> >
> > Cool.
>
> I've been looking into this, and I have to say this whole emulation handling is
> a mess.
Too right. It's pretty horrible.
> The fact that Xen differentiates between internal and external (IOREQ)
> handlers so early in the code (hvmemul_do_io) makes it far from trivial to
> unify internal and external handlers, all the more so because external
> handlers have grown a complex set of infrastructure that internal handlers
> don't have at all.
>
Indeed. Arguably that's because the external emulation is asynchronous and therefore requires more infrastructure, but I think a lot of the abstraction is the wrong way round.
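To put what I mean by "the wrong way round" into code (a caricature only, with made-up names, not the actual hvmemul_do_io logic): the internal/external decision is taken up front, before any shared claiming or completion handling gets a look in:

/* Caricature of the current shape of the dispatch; not actual Xen code. */
#include <stdbool.h>
#include <stdint.h>

struct mmio_access {
    uint64_t addr;
    uint32_t size;
    uint64_t val;
    bool     write;
};

/* Synchronous, in-hypervisor emulation (vLAPIC, HPET, ...). */
static int handle_internally(struct mmio_access *a)
{
    if ( !a->write )
        a->val = 0;
    return 0;
}

/* Suspend the vCPU, queue the request for an external emulator and
 * complete it when the device model responds. */
static int send_to_ioreq_server(struct mmio_access *a)
{
    (void)a;
    return 0;
}

/* Stand-in for the internal handlers' range claims. */
static bool is_internal_range(uint64_t addr)
{
    return addr >= 0xfee00000 && addr < 0xfee01000;   /* local APIC page */
}

int emulate_access(struct mmio_access *a)
{
    /* The fork happens here, so the two kinds of handler never share
     * claiming, caching or completion infrastructure. */
    return is_internal_range(a->addr) ? handle_internally(a)
                                      : send_to_ioreq_server(a);
}

int main(void)
{
    struct mmio_access rd = { .addr = 0xfee00030, .size = 4 };
    return emulate_access(&rd);
}
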
> Ideally I think the IOREQ filtering code should be generalized to apply to both
> internal and external handlers, and the difference between external and
> internal handlers should just be the set of functions that they use.
Exactly.
> External
> ones would always use generic IOREQ functions for pushing requests to the
> external emulators, while internal ones would just implement their own
> functions.
>
Yep. We are definitely on the same wavelength :-)
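i.e. something along the lines of the following (only a sketch with made-up names; the real thing would also have to cope with re-entry and completion): one ops table for every handler, with "external" simply being the implementation whose handle() wraps the generic IOREQ plumbing:

/* Sketch of a unified handler interface; all names here are invented. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct mmio_access {
    uint64_t addr;
    uint32_t size;
    uint64_t val;
    bool     write;
};

struct io_handler_ops {
    bool (*claims)(const struct mmio_access *a, void *ctx);
    int  (*handle)(struct mmio_access *a, void *ctx);
};

struct io_handler {
    const struct io_handler_ops *ops;
    void *ctx;
    struct io_handler *next;
};

/* Internal flavour: a vPCI-style handler implements handle() directly. */
static bool vpci_claims(const struct mmio_access *a, void *ctx)
{
    (void)ctx;
    return a->addr == 0xcf8 || (a->addr & ~3ULL) == 0xcfc;
}

static int vpci_handle(struct mmio_access *a, void *ctx)
{
    (void)ctx;
    if ( !a->write )
        a->val = ~0ULL;            /* placeholder value */
    return 0;
}

/* External flavour: same ops, but handle() would select an IOREQ server,
 * queue the request and wait for the device model to complete it. */
static bool dm_claims(const struct mmio_access *a, void *ctx)
{
    (void)a; (void)ctx;
    return true;                   /* catch-all device model */
}

static int dm_handle(struct mmio_access *a, void *ctx)
{
    (void)a; (void)ctx;
    return 0;
}

/* Dispatch no longer cares which flavour claimed the access. */
static int dispatch(struct io_handler *list, struct mmio_access *a)
{
    for ( struct io_handler *h = list; h; h = h->next )
        if ( h->ops->claims(a, h->ctx) )
            return h->ops->handle(a, h->ctx);
    return -1;
}

int main(void)
{
    static const struct io_handler_ops vpci_ops = { vpci_claims, vpci_handle };
    static const struct io_handler_ops dm_ops = { dm_claims, dm_handle };
    struct io_handler dm = { &dm_ops, NULL, NULL };
    struct io_handler vpci = { &vpci_ops, NULL, &dm };
    struct mmio_access a = { .addr = 0xcfc, .size = 4 };

    return dispatch(&vpci, &a);
}
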
> That said, I think this is a non-trivial amount of work that will further
> delay this series. I don't see an easy way to integrate this code with the
> current IOREQ code at all. I'm willing to do this, but I would rather have
> this series merged first, so that other people can start working on PVH
> Dom0.
>
Fair enough. If you've looked and come to that conclusion then I trust your judgement.
> ATM, the only thing I can see that could be easily shared between the IOREQ
> code and vPCI is the PCI address decoding code.
>
Yes, maybe some utility functions/macros can be generalized. It's not much, but it's a start. Once 4.9 is out of the door, I think there should be an I/O emulation cleanup/rationalization item for 4.10, which of course I'm happy to help with.
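For what it's worth, the decode in question boils down to pulling the SBDF and register out of the 0xcf8 value. A standalone sketch of the standard layout (the helper and struct names are invented, not taken from either codebase):

/* Standalone sketch of decoding a legacy 0xcf8 config address; the helper
 * and struct are made up for illustration. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pci_cfg_addr {
    bool     enabled;   /* bit 31: config cycles enabled          */
    uint8_t  bus;       /* bits 23:16                             */
    uint8_t  dev;       /* bits 15:11                             */
    uint8_t  func;      /* bits 10:8                              */
    uint16_t reg;       /* bits 7:2, plus byte offset from 0xcfc  */
};

static struct pci_cfg_addr decode_cf8(uint32_t cf8, uint16_t data_port)
{
    struct pci_cfg_addr a = {
        .enabled = !!(cf8 & 0x80000000u),
        .bus     = (cf8 >> 16) & 0xff,
        .dev     = (cf8 >> 11) & 0x1f,
        .func    = (cf8 >> 8) & 0x7,
        /* The byte offset within the dword comes from which of
         * 0xcfc..0xcff the guest actually accessed. */
        .reg     = (cf8 & 0xfc) | (data_port & 0x3),
    };

    return a;
}

int main(void)
{
    /* 00:03.0, register 0x10 (BAR0), accessed via data port 0xcfc */
    struct pci_cfg_addr a = decode_cf8(0x80001810u, 0xcfc);

    printf("%02x:%02x.%u reg %#x enabled=%u\n",
           (unsigned int)a.bus, (unsigned int)a.dev, (unsigned int)a.func,
           (unsigned int)a.reg, (unsigned int)a.enabled);
    return 0;
}
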
Cheers,
Paul
> Thanks, Roger.