From: "Yu, Zhang" <yu.c.zhang@linux.intel.com>
To: Paul Durrant <paul.durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] docs/design: introduce HVMMEM_ioreq_serverX types
Date: Fri, 26 Feb 2016 14:59:11 +0800
Message-ID: <56CFF7BF.2040202@linux.intel.com>
In-Reply-To: <1456415349-30409-1-git-send-email-paul.durrant@citrix.com>
Hi Paul,
Thanks a lot for your help on this! My questions are below.
On 2/25/2016 11:49 PM, Paul Durrant wrote:
> This patch adds a new 'designs' subdirectory under docs as a repository
> for this and future design proposals.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>
> For convenience this document can also be viewed in PDF at:
>
> http://xenbits.xen.org/people/pauldu/hvmmem_ioreq_server.pdf
> ---
> docs/designs/hvmmem_ioreq_server.md | 63 +++++++++++++++++++++++++++++++++++++
> 1 file changed, 63 insertions(+)
> create mode 100755 docs/designs/hvmmem_ioreq_server.md
>
> diff --git a/docs/designs/hvmmem_ioreq_server.md b/docs/designs/hvmmem_ioreq_server.md
> new file mode 100755
> index 0000000..47fa715
> --- /dev/null
> +++ b/docs/designs/hvmmem_ioreq_server.md
> @@ -0,0 +1,63 @@
> +HVMMEM\_ioreq\_serverX
> +----------------------
> +
> +Background
> +==========
> +
> +The concept of the IOREQ server was introduced to allow multiple distinct
> +device emulators to be attached to a single VM. The XenGT project uses an
> +IOREQ server to
> +provide mediated pass-through of Intel GPUs to guests and, as part of the
> +mediation, needs to intercept accesses to GPU page-tables (or GTTs) that
> +reside in guest RAM.
> +
> +The current implementation of this sets the type of GTT pages to type
> +HVMMEM\_mmio\_write\_dm, which causes Xen to emulate writes to such pages,
> +and then maps the guest physical addresses of those pages to the XenGT
> +IOREQ server using the HVMOP\_map\_io\_range\_to\_ioreq\_server hypercall.
> +However, because the number of GTTs is potentially large, using this
> +approach does not scale well.
> +
> +Proposal
> +========
> +
> +Because the number of spare types available in the P2M type-space is
> +currently very limited it is proposed that HVMMEM\_mmio\_write\_dm be
> +replaced by a single new type HVMMEM\_ioreq\_server. In future, if the
> +P2M type-space is increased, this can be renamed to HVMMEM\_ioreq\_server0
> +and new HVMMEM\_ioreq\_server1, HVMMEM\_ioreq\_server2, etc. types
> +can be added.
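So, in the public interface, the type space would eventually look something
like the sketch below? The names and values here are purely illustrative,
just to check my understanding:

/* Purely illustrative sketch of the proposed type space; the actual
 * names and values are of course up to the final patches. */
typedef enum {
    HVMMEM_ram_rw,          /* normal read/write guest RAM */
    HVMMEM_ram_ro,          /* read-only; writes are discarded */
    HVMMEM_mmio_dm,         /* reads and writes forwarded to the device model */
    HVMMEM_ioreq_server,    /* claimable by an IOREQ server; could later be
                               renamed HVMMEM_ioreq_server0, with
                               HVMMEM_ioreq_server1/2/... added */
} hvmmem_type_t;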
> +
> +Accesses to a page of type HVMMEM\_ioreq\_serverX should be the same as
> +HVMMEM\_ram\_rw until the type is _claimed_ by an IOREQ server. Furthermore
Sorry, do you mean that even when a gfn is set to the HVMMEM_ioreq_serverX
type, its access rights in the P2M remain unchanged? In that case, would the
new hypercall pair, HVMOP_[un]map_mem_type_to_ioreq_server, also be
responsible for updating the access bits in the PTEs?
If so, I'm afraid this would be time consuming, because the map/unmap would
have to traverse all the P2M structures to find the PTEs with the
HVMMEM_ioreq_serverX type set. Yet in XenGT, setting this type is triggered
dynamically by the construction/destruction of shadow PPGTTs, so I'm not
sure how big the performance penalty would be, with the frequent EPT table
walks and EPT TLB flushes.
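To make the concern concrete, the shape of the work I am worried about is
roughly the loop below. This is a self-contained model only, not actual Xen
code, and all the names are made up:

/* Self-contained model of the walk; names are made up, not Xen internals. */
enum fake_p2m_type { fake_ram_rw, fake_ioreq_server };

struct fake_p2m_entry {
    enum fake_p2m_type type;
    unsigned int access;        /* stands in for the EPT RWX bits */
};

/* Each map/unmap would have to visit every entry to find the ones carrying
 * the ioreq_server type, and then flush the EPT TLB afterwards. */
static unsigned long update_access_for_type(struct fake_p2m_entry *p2m,
                                            unsigned long nr_entries,
                                            unsigned int new_access)
{
    unsigned long touched = 0;

    for ( unsigned long gfn = 0; gfn < nr_entries; gfn++ )
        if ( p2m[gfn].type == fake_ioreq_server )
        {
            p2m[gfn].access = new_access;
            touched++;
        }

    return touched;
}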
If not, I guess we could do the following (e.g. when trying to
write-protect a gfn):
1> Use HVMOP_set_mem_type to set the HVMMEM_ioreq_serverX type, which for
   the write-protected case works the same as HVMMEM_mmio_write_dm. If
   successful, accesses to a page of type HVMMEM_ioreq_serverX would
   trigger the IOREQ server selection path, but would be discarded.
2> After HVMOP_map_mem_type_to_ioreq_server is called, all accesses to
   pages of type HVMMEM_ioreq_serverX would be forwarded to the specified
   IOREQ server.
For the XenGT backend device model, we would only need to use the map
hypercall once, when constructing the first shadow PPGTT, and the unmap
hypercall when the VM is torn down (see the sketch below).
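In other words, from the device-model side the whole sequence I have in mind
would look roughly like this. Note that HVMMEM_ioreq_server and the
xc_hvm_[un]map_mem_type_* wrappers are hypothetical at this point, matching
the proposed hypercalls rather than anything that exists today:

/* Sketch of the proposed flow; the map/unmap wrappers and the new type
 * are hypothetical, mirroring HVMOP_[un]map_mem_type_to_ioreq_server. */
#include <xenctrl.h>

/* Call once, when the first shadow PPGTT is constructed: claim the
 * HVMMEM_ioreq_server type for the XenGT IOREQ server. */
static int xengt_claim_type(xc_interface *xch, domid_t domid,
                            ioservid_t ioservid)
{
    return xc_hvm_map_mem_type_to_ioreq_server(xch, domid, ioservid,
                                               HVMMEM_ioreq_server);
}

/* Call per GTT page, whenever a guest page needs to be write-protected:
 * once the type is claimed, writes are forwarded to the XenGT IOREQ server. */
static int xengt_wp_gfn(xc_interface *xch, domid_t domid, uint64_t gfn)
{
    return xc_hvm_set_mem_type(xch, domid, HVMMEM_ioreq_server, gfn, 1);
}

/* Call once when the VM is torn down. */
static int xengt_unclaim_type(xc_interface *xch, domid_t domid,
                              ioservid_t ioservid)
{
    return xc_hvm_unmap_mem_type_from_ioreq_server(xch, domid, ioservid,
                                                   HVMMEM_ioreq_server);
}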
Any suggestions? :)
Thanks
Yu