From: "Yu, Zhang" <yu.c.zhang@linux.intel.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>,
Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wei.liu2@citrix.com>,
Ian Campbell <Ian.Campbell@citrix.com>,
Andrew Cooper <Andrew.Cooper3@citrix.com>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
Paul Durrant <Paul.Durrant@citrix.com>,
Stefano Stabellini <Stefano.Stabellini@citrix.com>,
"zhiyuan.lv@intel.com" <zhiyuan.lv@intel.com>,
Jan Beulich <JBeulich@suse.com>, "Keir (Xen.org)" <keir@xen.org>
Subject: Re: [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.
Date: Thu, 4 Feb 2016 16:50:31 +0800
Message-ID: <56B310D7.7010506@linux.intel.com>
In-Reply-To: <CAFLBxZYiss+mxi7mpO3nnQgQdb-JW58vL6QwpHOomDMFdQmypw@mail.gmail.com>
On 2/4/2016 2:21 AM, George Dunlap wrote:
> On Wed, Feb 3, 2016 at 5:41 PM, George Dunlap
> <George.Dunlap@eu.citrix.com> wrote:
>> I think at some point I suggested an alternate design based on marking
>> such gpfns with a special p2m type; I can't remember if that
>> suggestion was actually addressed or not.
>
> FWIW, the thread where I suggested using p2m types was in response to
>
> <1436163912-1506-2-git-send-email-yu.c.zhang@linux.intel.com>
>
> Looking through it again, the main objection Paul gave[1] was:
>
> "And it's the assertion that use of write_dm will only be relevant to
> gfns, and that all such notifications only need go to a single ioreq
> server, that I have a problem with. Whilst the use of io ranges to
> track gfn updates is, I agree, not ideal I think the overloading of
> write_dm is not a step in the right direction."
>
> Two issues raised here, about using only p2m types to implement write_dm:
> 1. More than one ioreq server may want to use the write_dm functionality
> 2. ioreq servers may want to use write_dm for things other than individual gpfns
>
> My answer to #1 was:
> 1. At the moment, we only need to support a single ioreq server using write_dm
> 2. It's not technically difficult to extend the number of servers
> supported to something sensible, like 4 (using 4 different write_dm
> p2m types)
> 3. The interface can be designed such that we can extend support to
> multiple servers when we need to.
>
> My answer to #2 was that there's no reason why write_dm couldn't be
> used for both individual gpfns and ranges; there's no reason the
> interface can't take a "start" and "count" argument, even if for the
> time being "count" is almost always going to be 1.
>
Well, regarding "count" almost always going to be 1 - I doubt that. :)
Statistics from XenGT show that GPU page tables are very likely to
be allocated in contiguous gpfns.
> Compare this to the downsides of the approach you're proposing:
> 1. Using 40 bytes of hypervisor space per guest GPU pagetable page (as
> opposed to using a bit in the existing p2m table)
> 2. Walking down an RB tree with 8000 individual nodes to find out
> which server to send the message to (rather than just reading the
> value from the p2m table).
8K is an upper limit for the rangeset; in many cases the RB tree will
not contain that many nodes.
> 3. Needing to determine on a guest-by-guest basis whether to change the limit
> 4. Needing to have an interface to make the limit even bigger, just in
> case we find workloads that have even more GTTs.
>
Well, as I suggested in yesterday's reply, XenGT can choose not to
change this limit even when workloads get heavy - with tradeoffs
on the device model side.
> I really don't understand where you're coming from on this. The
> approach you've chosen looks to me to be slower, more difficult to
> implement, and more complicated; and it's caused a lot more resistance
> trying to get this series accepted.
>
I agree that utilizing p2m types is more efficient and quite
intuitive. But I hesitate to occupy the software-available bits in EPT
PTEs (as Andrew noted in his reply). Although we have introduced one
such type, we believe it could also be used for other situations in the
future, not just XenGT.
Thanks
Yu