From: Luiz Capitulino
To: Laszlo Ersek
Cc: Anthony Liguori, Jan Kiszka, Markus Armbruster, qemu-devel@nongnu.org
Date: Mon, 29 Jul 2013 17:08:41 -0400
Message-ID: <20130729170841.3d746e35@redhat.com>
In-Reply-To: <1375108636-17014-1-git-send-email-lersek@redhat.com>
Subject: Re: [Qemu-devel] [PATCH 0/4] dump-guest-memory: correct the vmcores

On Mon, 29 Jul 2013 16:37:12 +0200
Laszlo Ersek wrote:

> (Apologies for the long To: list; I'm including everyone who
> participated in the earlier discussion.)
>
> Conceptually, the dump-guest-memory command works as follows:
> (a) pause the guest,
> (b) get a snapshot of the guest's physical memory map, as provided by
>     qemu,
> (c) retrieve the guest's virtual mappings, as seen by the guest (this
>     is where paging=true vs. paging=false makes a difference),
> (d) filter (c) as requested by the QMP caller,
> (e) write ELF headers, keying off (b) -- the guest's physmap -- and
>     (d) -- the filtered guest mappings,
> (f) dump RAM contents, keying off the same (b) and (d),
> (g) unpause the guest (if necessary).
>
> Patch #1 affects step (e); specifically, how (d) is matched against
> (b) when "paging" is "true" and the guest kernel maps more
> guest-physical RAM than the guest actually has.
>
> This can happen with non-malicious, clean-state guests (e.g. a
> pristine RHEL-6.4 guest), and may cause libbfd errors, because the
> PT_LOAD entries (coming directly from the guest page tables) exceed
> the vmcore file's size.
>
> Patches #2 to #4 are independent of the "paging" option (or, more
> precisely, affect both settings equally); they affect (b). Currently
> input parameter (b), that is, the guest's physical memory map as
> provided by qemu, is implicitly represented by "ram_list.blocks". As a
> result, steps and outputs that depend on (b) refer to qemu-internal
> offsets.
>
> Unfortunately, this breaks when the guest-visible physical addresses
> diverge from the qemu-internal, RAMBlock-based representation. This
> can happen e.g. for guests larger than 3.5 GB, due to the 32-bit PCI
> hole; see patch #4 for a diagram.
>
> Patch #2 introduces input parameter (b) explicitly, as a reasonably
> minimal map of guest-physical address ranges. (Minimality is not a
> hard requirement here; it just decreases the number of PT_LOAD entries
> written to the vmcore header.) Patch #3 populates this map. Patch #4
> rebases the dump-guest-memory command onto it, so that steps (e) and
> (f) work with guest-phys addresses.
>
> As a result, the "crash" utility can parse vmcores dumped for big
> x86_64 guests (paging=false).
>
> Please refer to Red Hat Bugzilla 981582
> <https://bugzilla.redhat.com/show_bug.cgi?id=981582>.
>
> Disclaimer: as you can tell from my progress in the RHBZ, I'm new to
> the memory API; the way I'm using it may well be naive.
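
To make the RAMBlock vs. guest-phys divergence concrete, here is a rough
sketch of what the map from patch #2 amounts to, as I read the cover
letter; the struct name, the helper, and the example addresses are all
illustrative, not taken from the patches:

/* Sketch only -- illustrative names and addresses, not the patch API. */
#include <elf.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t target_start; /* guest-physical start address          */
    uint64_t target_end;   /* guest-physical end address, exclusive */
} GuestPhysBlock;

/* A >3.5 GB x86 guest: RAM below the 32-bit PCI hole, the remainder
 * remapped above 4 GB.  Internally this can be one contiguous
 * RAMBlock, but the guest-phys view is split -- the divergence the
 * series is about. */
static const GuestPhysBlock example_map[] = {
    { 0x000000000ULL, 0x0e0000000ULL }, /* 0 .. 3.5 GB    */
    { 0x100000000ULL, 0x120000000ULL }, /* 4 GB .. 4.5 GB */
};

/* One PT_LOAD header per block: p_paddr carries the guest-phys
 * address, so "crash" can match each segment to guest RAM. */
static void fill_phdr(Elf64_Phdr *phdr, const GuestPhysBlock *b,
                      uint64_t file_offset)
{
    phdr->p_type   = PT_LOAD;
    phdr->p_offset = file_offset;
    phdr->p_paddr  = b->target_start;
    phdr->p_vaddr  = 0; /* paging=false: no guest-virtual mapping */
    phdr->p_filesz = b->target_end - b->target_start;
    phdr->p_memsz  = phdr->p_filesz;
}

int main(void)
{
    uint64_t offset = sizeof(Elf64_Ehdr) + 2 * sizeof(Elf64_Phdr);

    for (int i = 0; i < 2; i++) {
        Elf64_Phdr phdr = { 0 };

        fill_phdr(&phdr, &example_map[i], offset);
        printf("PT_LOAD: paddr=0x%09" PRIx64 " filesz=0x%09" PRIx64
               " offset=0x%" PRIx64 "\n",
               (uint64_t)phdr.p_paddr, (uint64_t)phdr.p_filesz,
               (uint64_t)phdr.p_offset);
        offset += phdr.p_filesz;
    }
    return 0;
}

The point being that p_paddr is taken from the guest-phys view rather
than from RAMBlock offsets, which is what lets the second segment land
above 4 GB in the headers.
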
Is this for 1.6?