From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mario Smarduch
Subject: Re: [PATCH v6 4/4] add 2nd stage page fault handling during live migration
Date: Thu, 29 May 2014 10:08:07 -0700
Message-ID: <53876977.1040502@samsung.com>
References: <1400178451-4984-1-git-send-email-m.smarduch@samsung.com>
 <1400178451-4984-5-git-send-email-m.smarduch@samsung.com>
 <20140527201945.GD16428@lvm>
 <53853C2F.8080003@samsung.com>
 <20140528080957.GH16428@lvm>
 <5386232A.3050602@samsung.com>
 <20140529085134.GB61607@lvm>
To: Christoffer Dall
Cc: kvmarm@lists.cs.columbia.edu, marc.zyngier@arm.com, steve.capper@arm.com,
 kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 gavin.guo@canonical.com, peter.maydell@linaro.org, jays.lee@samsung.com,
 sungjinn.chung@samsung.com
In-reply-to: <20140529085134.GB61607@lvm>

>> So this needs to be cleared up, given this is key to logging.
>> Cases this code handles during migration:
>> 1. Huge page fault described above - a write-protect fault, so you break up
>>    the huge page.
>> 2. All other faults - first-time access with the pte write-protected; you
>>    again wind up in stage2_set_pte().
>>
>> Am I missing something here?
>>
>
> No, I forgot about the fact that we can take the permission fault now.
> Hmm, ok, so either we need to use the original approach of always
> splitting up huge pages, or we need to just follow the regular huge page
> path here and mark all 512 4K pages dirty in the log, or handle it
> in stage2_set_pte().
>
> I would say go with the simplest approach for now (which may be going
> back to splitting all pmd_huge() mappings into regular ptes), and we can
> take a more careful look in the next patch iteration.
>

Looking at the overall memslot update architecture and the various failure
scenarios, user_mem_abort() appears to be the most suitable and reliable
place to handle this: first write-protect huge pages after memslots are
committed, then deal with the rest in user_mem_abort().

I still need some feedback on the pud_huge() question before revising for
the next iteration.

- Mario