From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mario Smarduch
Subject: Re: [PATCH v6 4/4] add 2nd stage page fault handling during live migration
Date: Thu, 29 May 2014 12:10:37 -0700
Message-ID: <5387862D.7050201@samsung.com>
References: <1400178451-4984-1-git-send-email-m.smarduch@samsung.com> <1400178451-4984-5-git-send-email-m.smarduch@samsung.com> <20140527201945.GD16428@lvm> <53853C2F.8080003@samsung.com> <20140528080957.GH16428@lvm> <5386232A.3050602@samsung.com> <20140529085134.GB61607@lvm> <53876977.1040502@samsung.com> <20140529175726.GC62489@lvm>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com, steve.capper@arm.com, kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org, gavin.guo@canonical.com, peter.maydell@linaro.org, jays.lee@samsung.com, sungjinn.chung@samsung.com
To: Christoffer Dall
Return-path: Received: from mailout2.w2.samsung.com ([211.189.100.12]:18839 "EHLO usmailout2.samsung.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751264AbaE2TKo (ORCPT ); Thu, 29 May 2014 15:10:44 -0400
Received: from uscpsbgex3.samsung.com (u124.gpu85.samsung.co.kr [203.254.195.124]) by mailout2.w2.samsung.com (Oracle Communications Messaging Server 7u4-24.01(7.0.4.24.0) 64bit (built Nov 17 2011)) with ESMTP id <0N6C00KCFNXUJ2A0@mailout2.w2.samsung.com> for kvm@vger.kernel.org; Thu, 29 May 2014 15:10:42 -0400 (EDT)
In-reply-to: <20140529175726.GC62489@lvm>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 05/29/2014 10:57 AM, Christoffer Dall wrote:
> On Thu, May 29, 2014 at 10:08:07AM -0700, Mario Smarduch wrote:
>>
>>>> So this needs to be cleared up given this is key to logging.
>>>> Cases this code handles during migration -
>>>> 1. huge page fault described above - write protect fault so you break up
>>>> the huge page.
>>>> 2. All other faults - first time access, pte write protect; you again wind up in
>>>> stage2_set_pte().
>>>>
>>>> Am I missing something here?
>>>>
>>>
>>> no, I forgot about the fact that we can take the permission fault now.
>>> Hmm, ok, so either we need to use the original approach of always
>>> splitting up huge pages or we need to just follow the regular huge page
>>> path here and just mark all 512 4K pages dirty in the log, or handle it
>>> in stage2_set_pte().
>>>
>>> I would say go with the simplest approach for now (which may be going
>>> back to splitting all pmd_huge() into regular pte's), and we can take a
>>> more careful look in the next patch iteration.
>>>
>>
>> Looking at the overall memslot update architecture and the various
>> failure scenarios, user_mem_abort() appears to be the most
>> optimal and reliable place: first write protect huge pages after
>> memslots are committed, then deal with the rest in user_mem_abort().
>>
>> I still need some feedback on the pud_huge() question before revising
>> for the next iteration?
>>
> Just assume it's not used for now, and that you don't have to consider
> it, and make that assumption clear in the commit message, so it doesn't
> block this work. I have a feeling we need to go through a few
> iterations here, so let's get that rolling.
>
> Thanks.
>
Ok thanks, I'm on it now.

- Mario
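[Editor's illustration of the "mark all 512 4K pages dirty" alternative discussed above. This is a minimal user-space sketch, not the actual KVM code: `mark_page_dirty()`, the bitmap layout, and the constants are simplified stand-ins for their kernel counterparts. The idea is that on a write permission fault to a pmd_huge() stage-2 mapping during logging, the huge mapping is left intact and every 4K page it covers is marked dirty in the memslot's log.]

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

#define PAGE_SHIFT    12
#define PMD_SHIFT     21
/* 4K pages per 2M huge page: 1 << (21 - 12) == 512 */
#define PTRS_PER_PTE  (1UL << (PMD_SHIFT - PAGE_SHIFT))
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

/* Simplified stand-in for the per-memslot dirty bitmap update. */
static void mark_page_dirty(unsigned long *bitmap, unsigned long gfn)
{
    bitmap[gfn / BITS_PER_LONG] |= 1UL << (gfn % BITS_PER_LONG);
}

/*
 * On a write fault to a huge mapping under dirty logging, instead of
 * dissolving the PMD into 512 PTEs, conservatively mark every 4K page
 * the huge page covers as dirty.  gfn is the faulting guest frame.
 */
static void mark_huge_page_dirty(unsigned long *bitmap, unsigned long gfn)
{
    unsigned long base = gfn & ~(PTRS_PER_PTE - 1);   /* align to 2M */
    unsigned long i;

    for (i = 0; i < PTRS_PER_PTE; i++)
        mark_page_dirty(bitmap, base + i);
}
```

The trade-off versus splitting the pmd_huge() mapping into regular PTEs is coarser dirty tracking (a single guest write dirties 2M worth of log) in exchange for keeping the huge mapping and its TLB benefits intact.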