From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] docs/design: introduce HVMMEM_ioreq_serverX types
Date: Thu, 25 Feb 2016 16:28:07 +0000 [thread overview]
Message-ID: <56CF2B97.5000708@citrix.com> (raw)
In-Reply-To: <1456415349-30409-1-git-send-email-paul.durrant@citrix.com>
On 25/02/16 15:49, Paul Durrant wrote:
> This patch adds a new 'designs' subdirectory under docs as a repository
> for this and future design proposals.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
>
> For convenience this document can also be viewed in PDF at:
>
> http://xenbits.xen.org/people/pauldu/hvmmem_ioreq_server.pdf
> ---
> docs/designs/hvmmem_ioreq_server.md | 63 +++++++++++++++++++++++++++++++++++++
> 1 file changed, 63 insertions(+)
> create mode 100755 docs/designs/hvmmem_ioreq_server.md
If you name it .markdown, the docs build system will be able to publish
it automatically. Alternatively, teach the build system about .md.
On the other hand, .pandoc tends to make nicer PDFs.
>
> diff --git a/docs/designs/hvmmem_ioreq_server.md b/docs/designs/hvmmem_ioreq_server.md
> new file mode 100755
> index 0000000..47fa715
> --- /dev/null
> +++ b/docs/designs/hvmmem_ioreq_server.md
> @@ -0,0 +1,63 @@
> +HVMMEM\_ioreq\_serverX
> +----------------------
> +
> +Background
> +==========
> +
> +The concept of the IOREQ server was introduced to allow multiple distinct
> +device emulators to attach to a single VM. The XenGT project uses an IOREQ server to
> +provide mediated pass-through of Intel GPUs to guests and, as part of the
> +mediation, needs to intercept accesses to GPU page-tables (or GTTs) that
> +reside in guest RAM.
> +
> +The current implementation of this sets the type of GTT pages to type
> +HVMMEM\_mmio\_write\_dm, which causes Xen to emulate writes to such pages,
> +and then maps the guest physical addresses of those pages to the XenGT
"then sends the guest physical" surely?
> +IOREQ server using the HVMOP\_map\_io\_range\_to\_ioreq\_server hypercall.
> +However, because the number of GTTs is potentially large, using this
> +approach does not scale well.
> +
> +Proposal
> +========
> +
> +Because the number of spare types available in the P2M type-space is
> +currently very limited, it is proposed that HVMMEM\_mmio\_write\_dm be
> +replaced by a single new type HVMMEM\_ioreq\_server. In future, if the
> +P2M type-space is increased, this can be renamed to HVMMEM\_ioreq\_server0
> +and new HVMMEM\_ioreq\_server1, HVMMEM\_ioreq\_server2, etc. types
> +can be added.
> +
> +Accesses to a page of type HVMMEM\_ioreq\_serverX should be the same as
> +HVMMEM\_ram\_rw until the type is _claimed_ by an IOREQ server. Furthermore,
> +it should only be possible to set the type of a page to
> +HVMMEM\_ioreq\_serverX if that page is currently of type HVMMEM\_ram\_rw.
> +
> +To allow an IOREQ server to claim or release a claim to a type a new pair
> +of hypercalls will be introduced:
> +
> +- HVMOP\_map\_mem\_type\_to\_ioreq\_server
> +- HVMOP\_unmap\_mem\_type\_from\_ioreq\_server
> +
> +and an associated argument structure:
> +
> + struct hvm_ioreq_mem_type {
> + domid_t domid; /* IN - domain to be serviced */
> + ioservid_t id; /* IN - server id */
> + hvmmem_type_t type; /* IN - memory type */
> + uint32_t flags; /* IN - types of access to be
> + intercepted */
> +
> + #define _HVMOP_IOREQ_MEM_ACCESS_READ 0
> + #define HVMOP_IOREQ_MEM_ACCESS_READ \
> + (1 << _HVMOP_IOREQ_MEM_ACCESS_READ)
(1U << ...)
> +
> + #define _HVMOP_IOREQ_MEM_ACCESS_WRITE 1
> + #define HVMOP_IOREQ_MEM_ACCESS_WRITE \
> + (1 << _HVMOP_IOREQ_MEM_ACCESS_WRITE)
> +
> + };
> +
> +
> +Once the type has been claimed then the requested types of access to any
> +page of the claimed type will be passed to the IOREQ server for handling.
> +Only HVMMEM\_ioreq\_serverX types may be claimed.
LGTM.
~Andrew