From mboxrd@z Thu Jan 1 00:00:00 1970
From: Mario Smarduch
Subject: huge 2nd stage pages and live migration
Date: Fri, 28 Mar 2014 10:39:25 -0700
Message-ID: <5335B3CD.30207@samsung.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
To: kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org

Hello,

I've been working on live migration for ARM KVM and noticed a problem completing migration with huge 2nd stage tables. After write-protecting the VM, a write fault on a huge page sets 512 bits in dirty_bitmap[] to account for future writes anywhere within that huge page. The PMD is write-protected again when QEMU reads the dirty log, and the cycle repeats. As a result, not even an idle 32MB VM completes live migration. If QEMU uses THPs and the 2nd stage tables use PTEs, there is no problem and live migration is quick. I'm assuming QEMU and guest huge pages with 2nd stage PTEs should work fine too.

I'm wondering how this has been solved (for any architecture)?

- Mario
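[For illustration only, not part of the original post: a minimal, hypothetical C sketch of the accounting described above. It assumes 4KB small pages under a 2MB huge page, so one write fault on a write-protected PMD marks 512 consecutive bits in the dirty bitmap; the names `mark_pmd_dirty` and `PAGES_PER_PMD` are invented for this sketch and are not the actual KVM code.]

```c
#include <assert.h>
#include <limits.h>
#include <string.h>

/* A 2MB huge page on a system with 4KB base pages covers 512 small
 * pages, so dirtying the PMD means dirtying all 512 page-granular bits. */
#define PAGES_PER_PMD 512UL
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static void set_bit_in_map(unsigned long *map, unsigned long nr)
{
    map[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
}

/* On a write fault against a write-protected huge page, mark every
 * small-page frame it covers as dirty, since the fault granularity
 * (the PMD) is coarser than the dirty-log granularity (one page). */
static void mark_pmd_dirty(unsigned long *dirty_bitmap, unsigned long base_gfn)
{
    for (unsigned long i = 0; i < PAGES_PER_PMD; i++)
        set_bit_in_map(dirty_bitmap, base_gfn + i);
}
```

This illustrates why migration cannot converge: a single guest write re-dirties 2MB worth of pages each dirty-log cycle, whereas with 2nd stage PTEs only the one faulting 4KB page would be marked.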