From: Greg Kurz <groug@kaod.org>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: lvivier@redhat.com, spopovyc@redhat.com, qemu-ppc@nongnu.org,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH for-2.11] spapr: Include "pre-plugged" DIMMS in ram size calculation at reset
Date: Fri, 1 Dec 2017 11:11:22 +0100	[thread overview]
Message-ID: <20171201111122.25e43169@bahia.lan> (raw)
In-Reply-To: <20171201054125.25749-1-david@gibson.dropbear.id.au>

On Fri,  1 Dec 2017 16:41:25 +1100
David Gibson <david@gibson.dropbear.id.au> wrote:
> At guest reset time, we allocate a hash page table (HPT) for the guest
> based on the guest's RAM size.  If dynamic HPT resizing is not available we
> use the maximum RAM size; if it is, we use the current RAM size.
> 
> But the "current RAM size" calculation is incorrect - we just use the
> "base" ram_size from the machine structure.  This doesn't include any
> pluggable DIMMs that are already plugged at reset time.
> 
> This means that if you try to start a 'pseries' machine with a DIMM
> specified on the command line that's much larger than the "base" RAM size,
> then the guest will get a woefully inadequate HPT.  This can lead to a
> guest freeze during boot as it runs out of HPT space during initial MMU
> setup.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---

Yeah, I could easily reproduce the issue with:

-m 256M,maxmem=9G,slots=1 \
-object memory-backend-ram,id=ram0,size=8G \
-device pc-dimm,id=dimm0,memdev=ram0

and this patch indeed fixes the issue.
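
If I read spapr_hpt_shift_for_ramsize() correctly, the HPT is sized at
roughly 1/128th of RAM rounded up to a power of two, so taking only the
256M base size into account yields a ~2MB HPT, while the ~8.25G total
would call for a 128MB one. No surprise the guest runs out of HPT entries
during early MMU setup.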

BTW, we already had a similar miscalculation of the current RAM size in
h_resize_hpt_prepare(), which was addressed by commit db50f280cf5f7. Maybe
it would be worth consolidating the calculation into a common helper?
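
Something along these lines maybe (untested sketch, the helper name is
made up):

    /* hypothetical helper, name made up for illustration */
    static uint64_t spapr_current_ram_size(sPAPRMachineState *spapr)
    {
        /* base RAM plus any DIMMs that are already plugged at this point */
        return MACHINE(spapr)->ram_size + get_plugged_memory_size();
    }

Both spapr_setup_hpt_and_vrma() and h_resize_hpt_prepare() could then use
it.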

Anyway,

Reviewed-by: Greg Kurz <groug@kaod.org>

and

Tested-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index a471de6cab..1ac7eb0f8c 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1386,7 +1386,10 @@ void spapr_setup_hpt_and_vrma(sPAPRMachineState *spapr)
>              && !spapr_ovec_test(spapr->ov5_cas, OV5_HPT_RESIZE))) {
>          hpt_shift = spapr_hpt_shift_for_ramsize(MACHINE(spapr)->maxram_size);
>      } else {
> -        hpt_shift = spapr_hpt_shift_for_ramsize(MACHINE(spapr)->ram_size);
> +        uint64_t current_ram_size;
> +
> +        current_ram_size = MACHINE(spapr)->ram_size + get_plugged_memory_size();
> +        hpt_shift = spapr_hpt_shift_for_ramsize(current_ram_size);
>      }
>      spapr_reallocate_hpt(spapr, hpt_shift, &error_fatal);
>  

