From: Laszlo Ersek <lersek@redhat.com>
To: Juerg Haefliger <juergh@gmail.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
"Michael Tokarev" <mjt@tls.msk.ru>,
"Andreas Färber" <afaerber@suse.de>,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] QEMU savevm RAM page offsets
Date: Tue, 13 Aug 2013 21:25:00 +0200
Message-ID: <520A880C.8010803@redhat.com>
In-Reply-To: <CADLDEKuTR2jbw9XcHHtbBCS=qZcrvq-O_gWqkO2vMwAcjwzQSQ@mail.gmail.com>
On 08/13/13 21:06, Juerg Haefliger wrote:
> On Tue, Aug 13, 2013 at 8:07 PM, Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On 13/08/2013 19:52, Juerg Haefliger wrote:
>>> I didn't mean to imply that the savevm format is broken and needed
>>> fixing. I was just wondering if the data is there and I simply hadn't
>>> found it. Upgrading QEMU is not an option at the moment since these
>>> are tightly controlled production machines. Is it possible to loadvm
>>> a savevm file from 1.0 with 1.6 and then use dump-guest-memory?
>>
>> Yes, it should work, but one important change since 1.0 has been the
>> merger of qemu-kvm and QEMU. What distribution are you using? I know
>> Fedora supports compatibility from qemu-kvm 1.0 to QEMU 1.6, but I
>> don't know about others.
>
> Ubuntu 12.04
>
>
>> Michael Tokarev is the maintainer of the Debian package, so he may be
>> able to answer.
>>
>> Alternatively, you can modify your utility to simply add 512 MB to the
>> addresses above 3.5 GB.
>
> Is it really as simple as that? Isn't the OS (particularly Windows)
> possibly doing some crazy remapping that needs to be taken into
> account? MemInfo on a VM with 4 GB of RAM running Windows 2008 shows
> the following:
>
> C:\Users\Administrator\Desktop\MemInfo\amd64>MemInfo.exe -r
> MemInfo v2.10 - Show PFN database information
> Copyright (C) 2007-2009 Alex Ionescu
> www.alex-ionescu.com
>
> Physical Memory Range: 0000000000001000 to 000000000009B000 (154 pages, 616 KB)
> Physical Memory Range: 0000000000100000 to 00000000DFFFD000 (917245 pages, 3668980 KB)
> Physical Memory Range: 0000000100000000 to 0000000120000000 (131072 pages, 524288 KB)
> MmHighestPhysicalPage: 1179648
That should be fine, I think. The 384 KB hole between 640 KB and 1 MB is
actually contiguously backed by the RAMBlock; it is just not (necessarily)
presented as conventional memory to the guest. You can treat the
left-closed, right-open interval [0, 0xE0000000) as contiguous.
Again, check out the diagram in 4/4 that I linked before. Compare it to
pc_init1() in "hw/pc_piix.c", at tag "v1.0" in
<git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git>. Look for the
variable "below_4g_mem_size".
Laszlo