Date: Wed, 25 Apr 2018 01:45:12 -0400 (EDT)
From: Pankaj Gupta
Message-ID: <1046685642.22350584.1524635112123.JavaMail.zimbra@redhat.com>
In-Reply-To: <20180424153154.05e79de7@redhat.com>
References: <20180420123456.22196-1-david@redhat.com>
 <20180420123456.22196-4-david@redhat.com>
 <20180423141928.7e64b380@redhat.com>
 <908f1079-385f-24d3-99ad-152ecd6b01d2@redhat.com>
 <20180424153154.05e79de7@redhat.com>
Subject: Re: [Qemu-devel] [PATCH v3 3/3] pc-dimm: factor out address space logic into MemoryDevice code
To: Igor Mammedov
Cc: David Hildenbrand, Eduardo Habkost, "Michael S . Tsirkin", qemu-devel@nongnu.org,
 Markus Armbruster, qemu-s390x@nongnu.org, qemu-ppc@nongnu.org, Marcel Apfelbaum,
 Paolo Bonzini, David Gibson, Richard Henderson

> > >> +    /* we will need a new memory slot for kvm and vhost */
> > >> +    if (kvm_enabled() && !kvm_has_free_slot(machine)) {
> > >> +        error_setg(errp, "hypervisor has no free memory slots left");
> > >> +        return;
> > >> +    }
> > >> +    if (!vhost_has_free_slot()) {
> > >> +        error_setg(errp, "a used vhost backend has no free memory slots left");
> > >> +        return;
> > >> +    }
> > > move these checks to pre_plug time
> > >
> > >> +
> > >> +    memory_region_add_subregion(&hpms->mr, addr - hpms->base, mr);
> > > missing vmstate registration?
> >
> > Missed this one: to be called by the caller. Important because e.g. for
> > virtio-pmem we don't want this (I assume :) ).
> If pmem isn't on shared storage, then we'd probably want to migrate
> it as well, otherwise the target would experience data loss.
> Anyway, I'd just treat it as normal RAM in the migration case.

The main difference between RAM and pmem is that pmem acts like a combination
of RAM and disk. That said, in the normal use case the size would be in the
range of hundreds of GBs to a few TBs, so I am not sure we really want to
migrate it in the non-shared storage use case.

One reason why nvdimm added vmstate info could be that with fake DAX there are
still transient writes in memory and (until now) no way to flush the guest
writes. But with virtio-pmem we can flush such writes before migration, and
with a shared disk the destination host will automatically have the updated
data.

Thanks,
Pankaj
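
For illustration, here is a minimal sketch of the caller-side vmstate
registration discussed above, assuming a pc-dimm-like caller. Only
vmstate_register_ram() and vmstate_unregister_ram() are existing QEMU helpers;
the wrapper function names are hypothetical, and a virtio-pmem-like caller
would simply skip the registration and rely on flushing before migration
instead.

#include "qemu/osdep.h"
#include "hw/qdev-core.h"
#include "exec/memory.h"
#include "migration/vmstate.h"

/*
 * Sketch only: register the mapped backing RAM for migration from the
 * device's plug path, so the region is migrated like normal RAM.
 * Function names here are hypothetical placeholders.
 */
static void example_dimm_register_migration(MemoryRegion *vmstate_mr,
                                            DeviceState *dev)
{
    /* migrate the contents of the backing memory region with this device */
    vmstate_register_ram(vmstate_mr, dev);
}

static void example_dimm_unregister_migration(MemoryRegion *vmstate_mr,
                                              DeviceState *dev)
{
    /* undo the registration on unplug */
    vmstate_unregister_ram(vmstate_mr, dev);
}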