From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Cooper
Subject: Re: [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
Date: Mon, 15 Sep 2014 15:37:46 +0100
Message-ID: <5416F9BA.6010207@citrix.com>
References: <5409B0C70200007800031646@mail.emea.novell.com>
 <5411E9AB.4080906@citrix.com>
 <541300BC02000078000347D8@mail.emea.novell.com>
 <54169A33.7020305@citrix.com>
 <5416FD930200007800034F86@mail.emea.novell.com>
 <5416F00B.3030802@citrix.com>
 <541711DF02000078000350C0@mail.emea.novell.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
Return-path:
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
 by lists.xen.org with esmtp (Exim 4.72) (envelope-from )
 id 1XTXRE-00056q-Bt
 for xen-devel@lists.xenproject.org; Mon, 15 Sep 2014 14:39:32 +0000
In-Reply-To: <541711DF02000078000350C0@mail.emea.novell.com>
List-Unsubscribe: ,
List-Post:
List-Help:
List-Subscribe: ,
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Jan Beulich, Tim Deegan
Cc: xen-devel, Keir Fraser
List-Id: xen-devel@lists.xenproject.org

On 15/09/2014 15:20, Jan Beulich wrote:
>>>> On 15.09.14 at 15:56, wrote:
>> On 15/09/2014 13:54, Jan Beulich wrote:
>>>>>> On 15.09.14 at 09:50, wrote:
>>>> It is indeed migration v2, which is necessary in XenServer given our
>>>> recent switch from 32bit dom0 to 64bit. The counts are only used for
>>>> logging and debugging purposes; all movement of pages is based off the
>>>> bits in the bitmap alone. In particular, the dirty count is used as a
>>>> basis of the statistics for the present iteration of migration. While
>>>> getting it wrong is not the end of the world, it would certainly be
>>>> preferable for the count to be accurate.
>>>>
>>>> As for the memory corruption, XenRT usually tests pairs of VMs at a time
>>>> (32 and 64bit variants) and all operations as back-to-back as possible.
>>>> Therefore, it is highly likely that a continued operation on one domain
>>>> intersects with other paging operations on another.
>>> But there's nothing I can see where domains would have a way
>>> of getting mismatched. It is in particular this one
>>>
>>> (XEN) [ 7832.953068] mm.c:827:d0v0 pg_owner 100 l1e_owner 100, but real_pg_owner 99
>>>
>>> which puzzles me: Assuming Dom99 was the original one, how
>>> would Dom100 get hold of any of Dom99's pages (IOW why would
>>> Dom0 map one of Dom99's pages into Dom100)? The patch doesn't
>>> alter any of the page refcounting after all. Nor does your v2
>>> migration series I would think.
>> In this case, dom99 was migrating to dom100. The failure was part of
>> verifying dom100v0's cr3 at the point of loading vcpu state, so Xen was
>> in the process of pinning pagetables.
>>
>> There were no errors on pagetable normalisation, so dom99's PTEs were
>> all correct, and there were no errors restoring any of dom100's memory,
>> so Xen fully allocated frames for dom100's memory during
>> populate_physmap() hypercalls.
>>
>> During pagetable normalisation, dom99's pfns in the stream are converted
>> to dom100's mfns as per the newly created p2m from the
>> populate_physmap() allocations. Then during dom100's cr3 validation, it
>> finds a dom99 PTE and complains.
>>
>> Therefore, a frame Xen handed back to the toolstack as part of
>> allocating dom100's memory still belonged to dom99.
> Or on the saving side some page table(s) didn't get normalized at
> all (in which case there necessarily also were no errors detected
> with that).
> Not being marked as page table(s) would then also lead
> to not getting converted back to machine representation on restore,
> resulting in a reference to a page belonging to the old domain.
>
> But together with the memory corruption you mentioned seeing in
> HVM guests, all of the above may just be secondary effects.

Yes - that is my suspicion as well, although I was hoping that the
failures would give some hints as to the root cause.

~Andrew
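
For reference, a minimal sketch of the two conversions being discussed.
This is not the real migration v2 code: the helper names (normalise_l1,
localise_l1), the flat m2p[]/p2m[] lookup tables and the constants are
illustrative assumptions only. It shows why a frame that is never
recognised as a pagetable skips both rewrites, so after restore its PTEs
still carry the old domain's raw MFNs - the pg_owner/real_pg_owner
mismatch in the log above.

#include <stdint.h>

#define PAGE_SHIFT            12
#define L1_PAGETABLE_ENTRIES  512
#define _PAGE_PRESENT         0x1ULL
#define PADDR_MASK            0x000ffffffffff000ULL  /* address bits 12-51 */

typedef uint64_t l1_pgentry_t;

/*
 * Save side: rewrite each present PTE's MFN as the guest's PFN, using an
 * M2P-style table, so the pagetable in the stream is position independent.
 */
void normalise_l1(l1_pgentry_t *l1tab, const uint64_t *m2p)
{
    for ( unsigned int i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
    {
        l1_pgentry_t pte = l1tab[i];

        if ( !(pte & _PAGE_PRESENT) )
            continue;

        uint64_t pfn = m2p[(pte & PADDR_MASK) >> PAGE_SHIFT];

        l1tab[i] = (pte & ~PADDR_MASK) | (pfn << PAGE_SHIFT);
    }
}

/*
 * Restore side: rewrite each present PTE's PFN as an MFN from the *new*
 * domain's P2M, i.e. the frames handed out by populate_physmap().
 */
void localise_l1(l1_pgentry_t *l1tab, const uint64_t *p2m)
{
    for ( unsigned int i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
    {
        l1_pgentry_t pte = l1tab[i];

        if ( !(pte & _PAGE_PRESENT) )
            continue;

        uint64_t mfn = p2m[(pte & PADDR_MASK) >> PAGE_SHIFT];

        l1tab[i] = (pte & ~PADDR_MASK) | (mfn << PAGE_SHIFT);
    }
}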