From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:45100) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1V3vZd-000253-R5 for qemu-devel@nongnu.org; Mon, 29 Jul 2013 18:05:55 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1V3vZY-00067K-VH for qemu-devel@nongnu.org; Mon, 29 Jul 2013 18:05:49 -0400
Received: from mx1.redhat.com ([209.132.183.28]:36711) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1V3vLt-0001ha-Ha for qemu-devel@nongnu.org; Mon, 29 Jul 2013 17:51:37 -0400
Message-ID: <51F6E473.4040703@redhat.com>
Date: Mon, 29 Jul 2013 23:53:55 +0200
From: Laszlo Ersek
MIME-Version: 1.0
References: <1375108636-17014-1-git-send-email-lersek@redhat.com> <20130729170841.3d746e35@redhat.com>
In-Reply-To: <20130729170841.3d746e35@redhat.com>
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores
To: Luiz Capitulino
Cc: Anthony Liguori , Jan Kiszka , Markus Armbruster , qemu-devel@nongnu.org

On 07/29/13 23:08, Luiz Capitulino wrote:
> On Mon, 29 Jul 2013 16:37:12 +0200
> Laszlo Ersek wrote:
>
>> (Apologies for the long To: list, I'm including everyone who
>> participated in
>> ).
>>
>> Conceptually, the dump-guest-memory command works as follows:
>> (a) pause the guest,
>> (b) get a snapshot of the guest's physical memory map, as provided by
>>     qemu,
>> (c) retrieve the guest's virtual mappings, as seen by the guest (this
>>     is where paging=true vs. paging=false makes a difference),
>> (d) filter (c) as requested by the QMP caller,
>> (e) write ELF headers, keying off (b) -- the guest's physmap -- and
>>     (d) -- the filtered guest mappings,
>> (f) dump RAM contents, keying off the same (b) and (d),
>> (g) unpause the guest (if necessary).
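The matching in step (e) of the filtered mappings (d) against the physmap (b) boils down to intersecting address ranges. A minimal sketch of that one step, assuming nothing about QEMU internals (the PhysRange type and clamp_mapping function are illustrative names, not the actual code):

```c
/* Illustrative sketch: clamp one filtered guest mapping (d) against
 * one range of the guest physical memory map (b), so that an ELF
 * PT_LOAD entry never claims more bytes than the vmcore can back.
 * Names are hypothetical, not taken from QEMU. */
#include <stdint.h>

typedef struct {
    uint64_t start;   /* guest-physical start address */
    uint64_t length;  /* length in bytes */
} PhysRange;

/* Return how many bytes of `mapping` are actually backed by `ram`,
 * i.e. the size of the intersection of the two ranges. */
static uint64_t clamp_mapping(PhysRange mapping, PhysRange ram)
{
    uint64_t lo = mapping.start > ram.start ? mapping.start : ram.start;
    uint64_t map_end = mapping.start + mapping.length;
    uint64_t ram_end = ram.start + ram.length;
    uint64_t hi = map_end < ram_end ? map_end : ram_end;

    return hi > lo ? hi - lo : 0;
}
```

Under this model, a guest page-table mapping that extends past the end of actual RAM yields a clamped (possibly zero) byte count instead of an oversized PT_LOAD entry.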
>>
>> Patch #1 affects step (e); specifically, how (d) is matched against
>> (b), when "paging" is "true", and the guest kernel maps more
>> guest-physical RAM than it actually has.
>>
>> This can be done by non-malicious, clean-state guests (eg. a pristine
>> RHEL-6.4 guest), and may cause libbfd errors due to PT_LOAD entries
>> (coming directly from the guest page tables) exceeding the vmcore
>> file's size.
>>
>> Patches #2 to #4 are independent of the "paging" option (or, more
>> precisely, affect them equally); they affect (b). Currently input
>> parameter (b), that is, the guest's physical memory map as provided
>> by qemu, is implicitly represented by "ram_list.blocks". As a result,
>> steps and outputs dependent on (b) will refer to qemu-internal
>> offsets.
>>
>> Unfortunately, this breaks when the guest-visible physical addresses
>> diverge from the qemu-internal, RAMBlock based representation. This
>> can happen eg. for guests > 3.5 GB, due to the 32-bit PCI hole; see
>> patch #4 for a diagram.
>>
>> Patch #2 introduces input parameter (b) explicitly, as a reasonably
>> minimal map of guest-physical address ranges. (Minimality is not a
>> hard requirement here, it just decreases the number of PT_LOAD
>> entries written to the vmcore header.) Patch #3 populates this map.
>> Patch #4 rebases the dump-guest-memory command to it, so that steps
>> (e) and (f) work with guest-phys addresses.
>>
>> As a result, the "crash" utility can parse vmcores dumped for big
>> x86_64 guests (paging=false).
>>
>> Please refer to Red Hat Bugzilla 981582
>> .
>>
>> Disclaimer: as you can tell from my progress in the RHBZ, I'm new to
>> the memory API. The way I'm using it might be retarded.
>
> Is this for 1.6?

It's for whichever release reviewers and maintainers accept it! :)

On a more serious note, if someone makes an exception out of this, I
won't object, but I'm not pushing for it. My posting close to the hard
freeze was a coincidence.

Thanks,
Laszlo
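The explicit guest-phys map that patch #2 describes can be pictured with a small model. Everything below is an assumption for illustration only (the GuestPhysRange name, the 6 GB layout, the translation helper); it is not the patch's code. It shows how one contiguous RAMBlock ends up split into two guest-physical ranges around the 32-bit PCI hole, which is why qemu-internal offsets and guest-phys addresses diverge for guests above 3.5 GB:

```c
/* Hypothetical model of an explicit guest-phys address map.  A 6 GB
 * x86_64 guest keeps its RAM in one RAMBlock, but the 32-bit PCI hole
 * at [3.5G, 4G) splits the guest-physical view into [0, 3.5G) and
 * [4G, 6.5G).  Names and layout are illustrative, not QEMU's. */
#include <stdint.h>

#define GiB (1ULL << 30)

typedef struct {
    uint64_t target_start; /* guest-physical start (inclusive) */
    uint64_t target_end;   /* guest-physical end (exclusive) */
    uint64_t block_offset; /* offset into the backing RAMBlock */
} GuestPhysRange;

/* 6 GB of RAM, split around the PCI hole. */
static const GuestPhysRange guest_phys_map[] = {
    { 0,       7 * GiB / 2,  0 },           /* [0,  3.5G) */
    { 4 * GiB, 13 * GiB / 2, 7 * GiB / 2 }, /* [4G, 6.5G) */
};

/* Translate a qemu-internal RAMBlock offset to a guest-physical
 * address; returns UINT64_MAX for offsets not mapped to the guest. */
static uint64_t block_offset_to_phys(uint64_t offset)
{
    unsigned i;
    unsigned n = sizeof(guest_phys_map) / sizeof(guest_phys_map[0]);

    for (i = 0; i < n; i++) {
        uint64_t len = guest_phys_map[i].target_end -
                       guest_phys_map[i].target_start;
        if (offset >= guest_phys_map[i].block_offset &&
            offset < guest_phys_map[i].block_offset + len) {
            return guest_phys_map[i].target_start +
                   (offset - guest_phys_map[i].block_offset);
        }
    }
    return UINT64_MAX;
}
```

In this model, RAMBlock offset 3.5G translates to guest-phys 4G: keying the ELF headers off raw RAMBlock offsets, as before the series, would place that memory half a gigabyte too low for a consumer such as "crash".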