From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Jan Beulich' <JBeulich@suse.com>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
Wei Liu <wei.liu2@citrix.com>,
George Dunlap <George.Dunlap@citrix.com>,
xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 3/4] x86/HVM: implement memory read caching
Date: Wed, 12 Sep 2018 08:49:07 +0000 [thread overview]
Message-ID: <1cfd6901a033431ab6f3b51f60cc42c6@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <5B98D07002000078001E7A7C@prv1-mh.provo.novell.com>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 12 September 2018 09:38
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; George Dunlap
> <George.Dunlap@citrix.com>; Wei Liu <wei.liu2@citrix.com>; xen-devel
> <xen-devel@lists.xenproject.org>
> Subject: RE: [PATCH v2 3/4] x86/HVM: implement memory read caching
>
> >>> On 11.09.18 at 18:20, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 11 September 2018 14:15
> >>
> >> @@ -2664,9 +2685,35 @@ void hvm_dump_emulation_state(const char
> >>            hvmemul_ctxt->insn_buf);
> >> }
> >>
> >> +struct hvmemul_cache *hvmemul_cache_init(unsigned int nents)
> >> +{
> >> +    struct hvmemul_cache *cache = xmalloc_bytes(offsetof(struct hvmemul_cache,
> >> +                                                          ents[nents]));
> >> +
> >> + if ( cache )
> >> + {
> >> + cache->num_ents = 0;
> >> + cache->max_ents = nents;
> >> + }
> >> +
> >> + return cache;
> >> +}
> >> +
> >> bool hvmemul_read_cache(const struct hvmemul_cache *cache, paddr_t gpa,
> >>                         unsigned int level, void *buffer, unsigned int size)
> >> {
> >> + unsigned int i;
> >> +
> >
> > Here you could return false if cache is NULL...
>
> This one could perhaps be considered, but ...
>
> >> @@ -2674,6 +2721,35 @@ void hvmemul_write_cache(struct hvmemul_
> >>                          unsigned int level, const void *buffer,
> >>                          unsigned int size)
> >> {
> >> + unsigned int i;
> >> +
> >
> > ...and here just bail out. Thus making both functions safe to call with a
> > NULL cache.
>
> ... I'm pretty much opposed to this: The term "cache" might be slightly
> confusing here, but I lack a better idea for a name. Its presence is
> required for correctness. After all the series is not a performance
> improvement, but a plain bug fix (generalizing what we were able to
> special case for the actually observed problem in commit 91afb8139f
> ["x86/HVM: suppress I/O completion for port output"]). And with
> that I'd rather leave the read side as is as well.
>
Ok. I have no strong objection to the code structure as it stands, so you can add my R-b to this and patch #2.
Paul
> Jan
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
Thread overview: 48+ messages
2018-09-11 13:10 [PATCH v2 0/4] x86/HVM: implement memory read caching Jan Beulich
2018-09-11 13:13 ` [PATCH v2 1/4] x86/mm: add optional cache to GLA->GFN translation Jan Beulich
2018-09-11 13:40 ` Razvan Cojocaru
2018-09-19 15:09 ` Wei Liu
2018-09-11 13:14 ` [PATCH v2 2/4] x86/mm: use optional cache in guest_walk_tables() Jan Beulich
2018-09-11 16:17 ` Paul Durrant
2018-09-12 8:30 ` Jan Beulich
2018-09-19 15:50 ` Wei Liu
2018-09-11 13:15 ` [PATCH v2 3/4] x86/HVM: implement memory read caching Jan Beulich
2018-09-11 16:20 ` Paul Durrant
2018-09-12 8:38 ` Jan Beulich
2018-09-12 8:49 ` Paul Durrant [this message]
2018-09-19 15:57 ` Wei Liu
2018-09-20 6:39 ` Jan Beulich
2018-09-20 15:42 ` Wei Liu
2018-09-11 13:16 ` [PATCH v2 4/4] x86/HVM: prefill cache with PDPTEs when possible Jan Beulich
2018-09-13 6:30 ` Tian, Kevin
2018-09-13 8:55 ` Jan Beulich
2018-09-14 2:18 ` Tian, Kevin
2018-09-14 8:12 ` Jan Beulich
2018-09-25 14:14 ` [PATCH v3 0/4] x86/HVM: implement memory read caching Jan Beulich
2018-09-25 14:23 ` [PATCH v3 1/4] x86/mm: add optional cache to GLA->GFN translation Jan Beulich
2018-09-25 14:24 ` [PATCH v3 2/4] x86/mm: use optional cache in guest_walk_tables() Jan Beulich
2018-09-25 14:25 ` [PATCH v3 3/4] x86/HVM: implement memory read caching Jan Beulich
2018-09-26 11:05 ` Wei Liu
2018-10-02 10:39 ` Ping: " Jan Beulich
2018-10-02 13:53 ` Boris Ostrovsky
2018-10-09 5:19 ` Tian, Kevin
2018-09-25 14:26 ` [PATCH v3 4/4] x86/HVM: prefill cache with PDPTEs when possible Jan Beulich
2018-09-25 14:38 ` Paul Durrant
2018-10-02 10:36 ` Ping: [PATCH v3 0/4] x86/HVM: implement memory read caching Jan Beulich
2018-10-02 10:51 ` Andrew Cooper
2018-10-02 12:47 ` Jan Beulich
2018-10-11 15:54 ` George Dunlap
2018-10-11 16:15 ` Jan Beulich
2018-10-11 16:33 ` George Dunlap
2018-10-12 6:32 ` Jan Beulich
2018-10-11 6:51 ` Jan Beulich
2018-10-11 17:36 ` George Dunlap
2018-10-12 13:55 ` [PATCH v2 " Andrew Cooper
2018-10-12 14:19 ` Jan Beulich
2018-10-18 15:20 ` George Dunlap
2019-05-07 16:22 ` George Dunlap
2019-05-07 16:22 ` [Xen-devel] " George Dunlap
2019-05-07 16:26 ` Jan Beulich
2019-05-07 16:26 ` [Xen-devel] " Jan Beulich
2018-11-09 10:17 ` Jan Beulich
2019-02-14 15:14 ` Jan Beulich