Message-ID: <541292EC.3070907@cn.fujitsu.com>
Date: Fri, 12 Sep 2014 14:30:04 +0800
From: Hongyang Yang
Subject: Re: [Qemu-devel] [RFC PATCH 16/17] COLO ram cache: implement colo ram cache on slaver
To: "Dr. David Alan Gilbert"
Cc: kvm@vger.kernel.org, GuiJianfeng@cn.fujitsu.com, eddie.dong@intel.com, qemu-devel@nongnu.org, mrhines@linux.vnet.ibm.com
In-Reply-To: <20140801151048.GH2430@work-vm>
References: <1406125538-27992-1-git-send-email-yanghy@cn.fujitsu.com> <1406125538-27992-17-git-send-email-yanghy@cn.fujitsu.com> <20140801151048.GH2430@work-vm>

On 08/01/2014 11:10 PM, Dr. David Alan Gilbert wrote:
> * Yang Hongyang (yanghy@cn.fujitsu.com) wrote:
>> The ram cache was initially the same as the PVM's memory. At each
>> checkpoint, we cache the PVM's dirty memory into the ram cache (so
>> that the ram cache is always the same as the PVM's memory at every
>> checkpoint), and flush the cached memory to the SVM after we have
>> received all of the PVM's dirty memory (only memory that was dirty
>> on both the PVM and the SVM since the last checkpoint needs to be
>> flushed).
>
> (Typo: 'r' on the end of the title)
>
> I think I understand the need for the cache, to be able to restore pages
> that the SVM has modified that the PVM hadn't; however, if I understand
> the change here (to host_from_stream_offset), the SVM will load the
> snapshot into the ram_cache rather than directly into host memory - why
> is this necessary? If the SVM's CPU is stopped at this point, couldn't
> it load snapshot pages directly into host memory, clearing pages in the
> SVM's bitmap, so that the only pages that then get copied in flush_cache
> are the pages that the SVM modified but the PVM *didn't* include in the
> snapshot? I can see that you would need to do it the way you've done it
> if the snapshot load could fail (at the same time the PVM failed) and
> thus the old SVM state would be the surviving state, but how could it
> fail at this point given that the whole stream is in the colo-buffer?

I can see your confusion. Yes, you are right, we could do it the way you
describe, but in the end we would still need to copy the dirty pages into
the ram cache as well (because the ram cache is a snapshot and we need to
keep it updated). So the question is whether we load the dirty pages into
the snapshot first or into host memory first. I think both methods work
and there is no real difference...

>
>
>> +static void ram_flush_cache(void);
>>  static int ram_load(QEMUFile *f, void *opaque, int version_id)
>>  {
>>      ram_addr_t addr;
>>      int flags, ret = 0;
>>      static uint64_t seq_iter;
>> +    bool need_flush = false;
>
> Probably better as 'ram_cache_needs_flush'
>
> Dave
> --
> Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
> .
>

-- 
Thanks,
Yang.
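
For readers following the thread, below is a minimal, self-contained sketch
of the flush step being debated. It is not the code from the actual patch:
names such as flush_bitmap, svm_ram and ram_cache are hypothetical, page
counts are toy-sized, and how the flush bitmap gets populated (pages received
from the PVM, pages dirtied by the SVM, or only pages dirty on both, as the
commit message describes) is exactly the design point under discussion.
Whichever way that question is resolved, the flush itself reduces to a loop
like this over a per-page bitmap.

/* Toy model of flushing a COLO-style ram cache into the SVM's memory.
 * All names are hypothetical; this is not the QEMU implementation. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096
#define NUM_PAGES 8

/* svm_ram models the secondary VM's live memory; ram_cache models the
 * snapshot of the primary VM's memory kept on the secondary side. */
static uint8_t svm_ram[NUM_PAGES][PAGE_SIZE];
static uint8_t ram_cache[NUM_PAGES][PAGE_SIZE];

/* One flag per page: set when the page must be copied from the cache
 * into SVM memory at checkpoint time. */
static uint8_t flush_bitmap[NUM_PAGES];

/* Copy every flagged page from the ram cache into SVM memory and clear
 * its flag, so SVM memory ends up matching the cached PVM snapshot. */
static void ram_flush_cache(void)
{
    for (int i = 0; i < NUM_PAGES; i++) {
        if (flush_bitmap[i]) {
            memcpy(svm_ram[i], ram_cache[i], PAGE_SIZE);
            flush_bitmap[i] = 0;
        }
    }
}

int main(void)
{
    /* Pretend page 3 was received from the PVM into the cache and page 5
     * was dirtied locally by the SVM; both get flagged for flushing. */
    memset(ram_cache[3], 0xAA, PAGE_SIZE);
    flush_bitmap[3] = 1;
    memset(svm_ram[5], 0xBB, PAGE_SIZE);
    flush_bitmap[5] = 1;

    ram_flush_cache();
    printf("page 3 byte 0 after flush: 0x%02x\n", svm_ram[3][0]);
    printf("page 5 byte 0 after flush: 0x%02x\n", svm_ram[5][0]);
    return 0;
}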