From: "Michael S. Tsirkin" <mst@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org, stefanha@redhat.com
Subject: Re: [Qemu-devel] [PATCH] exec: optimize phys_page_set_level
Date: Wed, 3 Jun 2015 18:14:47 +0200
Message-ID: <20150603181331-mutt-send-email-mst@redhat.com>
In-Reply-To: <1432214398-14990-1-git-send-email-pbonzini@redhat.com>
On Thu, May 21, 2015 at 03:19:58PM +0200, Paolo Bonzini wrote:
> phys_page_set_level is writing zeroes to a struct that has just been
> filled in by phys_map_node_alloc. Instead, tell phys_map_node_alloc
> whether to fill in the page "as a leaf" or "as a non-leaf".
>
> memcpy is faster than struct assignment, which copies each bitfield
> individually. Arguably a compiler bug, but memcpy is super-special
> cased anyway so what could go wrong?
>
> This cuts the cost of phys_page_set_level from 25% to 5% when
> booting qboot.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

This patch might also be faster for another reason:
it skips an extra loop over L2 in the leaf case.

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

> ---
> exec.c | 24 ++++++++++--------------
> 1 file changed, 10 insertions(+), 14 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index e19ab22..fc8d05d 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -173,17 +173,22 @@ static void phys_map_node_reserve(PhysPageMap *map, unsigned nodes)
> }
> }
>
> -static uint32_t phys_map_node_alloc(PhysPageMap *map)
> +static uint32_t phys_map_node_alloc(PhysPageMap *map, bool leaf)
> {
> unsigned i;
> uint32_t ret;
> + PhysPageEntry e;
> + PhysPageEntry *p;
>
> ret = map->nodes_nb++;
> + p = map->nodes[ret];
> assert(ret != PHYS_MAP_NODE_NIL);
> assert(ret != map->nodes_nb_alloc);
> +
> + e.skip = leaf ? 0 : 1;
> + e.ptr = leaf ? PHYS_SECTION_UNASSIGNED : PHYS_MAP_NODE_NIL;
> for (i = 0; i < P_L2_SIZE; ++i) {
> - map->nodes[ret][i].skip = 1;
> - map->nodes[ret][i].ptr = PHYS_MAP_NODE_NIL;
> + memcpy(&p[i], &e, sizeof(e));
> }
> return ret;
> }
> @@ -193,21 +198,12 @@ static void phys_page_set_level(PhysPageMap *map, PhysPageEntry *lp,
> int level)
> {
> PhysPageEntry *p;
> - int i;
> hwaddr step = (hwaddr)1 << (level * P_L2_BITS);
>
> if (lp->skip && lp->ptr == PHYS_MAP_NODE_NIL) {
> - lp->ptr = phys_map_node_alloc(map);
> - p = map->nodes[lp->ptr];
> - if (level == 0) {
> - for (i = 0; i < P_L2_SIZE; i++) {
> - p[i].skip = 0;
> - p[i].ptr = PHYS_SECTION_UNASSIGNED;
> - }
> - }
> - } else {
> - p = map->nodes[lp->ptr];
> + lp->ptr = phys_map_node_alloc(map, level == 0);
> }
> + p = map->nodes[lp->ptr];
> lp = &p[(*index >> (level * P_L2_BITS)) & (P_L2_SIZE - 1)];
>
> while (*nb && lp < &p[P_L2_SIZE]) {
> --
> 2.4.1
>
Thread overview: 5+ messages
2015-05-21 13:19 [Qemu-devel] [PATCH] exec: optimize phys_page_set_level Paolo Bonzini
2015-05-22 8:01 ` Stefan Hajnoczi
2015-06-03 4:30 ` Richard Henderson
2015-06-03 7:03 ` Paolo Bonzini
2015-06-03 16:14 ` Michael S. Tsirkin [this message]