From: Christoffer Dall <christoffer.dall@linaro.org>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>,
qemu-arm <qemu-arm@nongnu.org>,
"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
QEMU Developers <qemu-devel@nongnu.org>,
Patch Tracking <patches@linaro.org>
Subject: Re: [Qemu-devel] [PATCH] virt: Lift the maximum RAM limit from 30GB to 255GB
Date: Fri, 26 Feb 2016 09:06:31 +0100
Message-ID: <20160226080631.GB9352@cbox>
In-Reply-To: <CAFEAcA9d6u=2GLo5UEXGPzb=9bZNnk-CgEOt=235giCdPTW_pw@mail.gmail.com>
On Thu, Feb 25, 2016 at 04:51:51PM +0000, Peter Maydell wrote:
> [Typoed the kvmarm list address; sorry... -- PMM]
>
> On 25 February 2016 at 12:09, Peter Maydell <peter.maydell@linaro.org> wrote:
> > The virt board restricts guests to only 30GB of RAM. This is a
> > hangover from the vexpress-a15 board, and there's inherent reason
did you mean "there's *no* inherent reason" ?
> > for it. 30GB is smaller than you might reasonably want to provision
> > a VM for on a beefy server machine. Raise the limit to 255GB.
> >
> > We choose 255GB because the available space we currently have
> > below the 1TB boundary is up to the 512GB mark, but we don't
> > want to paint ourselves into a corner by assigning it all to
> > RAM. So we make half of it available for RAM, with the 256GB..512GB
> > range available for future non-RAM expansion purposes.
> >
> > If we need to provide more RAM to VMs in the future then we need to:
> > * allocate a second bank of RAM starting at 2TB and working up
> > * fix the DT and ACPI table generation code in QEMU to correctly
> > report two split lumps of RAM to the guest
> > * fix KVM in the host kernel to allow guests with >40 bit address spaces
> >
> > The last of these is obviously the trickiest, but it seems
> > reasonable to assume that anybody configuring a VM with a quarter
> > of a terabyte of RAM will be doing it on a host with more than a
> > terabyte of physical address space.
> >
> > Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> > ---
> > CC'ing kvm-arm as a heads-up that my proposal here is to make
> > the kernel devs do the heavy lifting for supporting >255GB.
> > Discussion welcome on whether I have the tradeoffs here right.
I think so, this looks good to me.
> > ---
> > hw/arm/virt.c | 21 +++++++++++++++++++--
> > 1 file changed, 19 insertions(+), 2 deletions(-)
> >
> > diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> > index 44bbbea..7a56b46 100644
> > --- a/hw/arm/virt.c
> > +++ b/hw/arm/virt.c
> > @@ -95,6 +95,23 @@ typedef struct {
> > #define VIRT_MACHINE_CLASS(klass) \
> > OBJECT_CLASS_CHECK(VirtMachineClass, klass, TYPE_VIRT_MACHINE)
> >
> > +/* RAM limit in GB. Since VIRT_MEM starts at the 1GB mark, this means
> > + * RAM can go up to the 256GB mark, leaving 256GB of the physical
> > + * address space unallocated and free for future use between 256G and 512G.
> > + * If we need to provide more RAM to VMs in the future then we need to:
> > + * * allocate a second bank of RAM starting at 2TB and working up
> > + * * fix the DT and ACPI table generation code in QEMU to correctly
> > + * report two split lumps of RAM to the guest
> > + * * fix KVM in the host kernel to allow guests with >40 bit address spaces
> > + * (We don't want to fill all the way up to 512GB with RAM because
> > + * we might want it for non-RAM purposes later. Conversely it seems
> > + * reasonable to assume that anybody configuring a VM with a quarter
> > + * of a terabyte of RAM will be doing it on a host with more than a
> > + * terabyte of physical address space.)
> > + */
> > +#define RAMLIMIT_GB 255
> > +#define RAMLIMIT_BYTES (RAMLIMIT_GB * 1024ULL * 1024 * 1024)
> > +
> > /* Addresses and sizes of our components.
> > * 0..128MB is space for a flash device so we can run bootrom code such as UEFI.
> > * 128MB..256MB is used for miscellaneous device I/O.
> > @@ -130,7 +147,7 @@ static const MemMapEntry a15memmap[] = {
> > [VIRT_PCIE_MMIO] = { 0x10000000, 0x2eff0000 },
> > [VIRT_PCIE_PIO] = { 0x3eff0000, 0x00010000 },
> > [VIRT_PCIE_ECAM] = { 0x3f000000, 0x01000000 },
> > - [VIRT_MEM] = { 0x40000000, 30ULL * 1024 * 1024 * 1024 },
> > + [VIRT_MEM] = { 0x40000000, RAMLIMIT_BYTES },
> > /* Second PCIe window, 512GB wide at the 512GB boundary */
> > [VIRT_PCIE_MMIO_HIGH] = { 0x8000000000ULL, 0x8000000000ULL },
> > };
> > @@ -1066,7 +1083,7 @@ static void machvirt_init(MachineState *machine)
> > vbi->smp_cpus = smp_cpus;
> >
> > if (machine->ram_size > vbi->memmap[VIRT_MEM].size) {
> > - error_report("mach-virt: cannot model more than 30GB RAM");
> > + error_report("mach-virt: cannot model more than %dGB RAM", RAMLIMIT_GB);
> > exit(1);
> > }
> >
> > --
> > 1.9.1
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Thread overview:
2016-02-25 12:09 [Qemu-devel] [PATCH] virt: Lift the maximum RAM limit from 30GB to 255GB Peter Maydell
2016-02-25 16:51 ` Peter Maydell
2016-02-26 8:06 ` Christoffer Dall [this message]
2016-02-26 10:22 ` Peter Maydell
2016-02-25 17:58 ` [Qemu-devel] [Qemu-arm] " Wei Huang