From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Tue, 24 Apr 2018 16:44:17 +0200
From: Igor Mammedov
Message-ID: <20180424164417.15347ebb@redhat.com>
In-Reply-To:
References: <20180420123456.22196-1-david@redhat.com>
	<20180420123456.22196-4-david@redhat.com>
	<20180423141928.7e64b380@redhat.com>
	<908f1079-385f-24d3-99ad-152ecd6b01d2@redhat.com>
	<20180424153154.05e79de7@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Subject: Re: [Qemu-devel] [PATCH v3 3/3] pc-dimm: factor out address space logic into MemoryDevice code
To: David Hildenbrand
Cc: Pankaj Gupta, Eduardo Habkost, "Michael S . Tsirkin",
	qemu-devel@nongnu.org, Markus Armbruster, qemu-s390x@nongnu.org,
	qemu-ppc@nongnu.org, Marcel Apfelbaum, Paolo Bonzini,
	David Gibson, Richard Henderson

On Tue, 24 Apr 2018 15:41:23 +0200
David Hildenbrand wrote:

> On 24.04.2018 15:31, Igor Mammedov wrote:
> > On Mon, 23 Apr 2018 14:52:37 +0200
> > David Hildenbrand wrote:
> >
> >>>
> >>>> +    /* we will need a new memory slot for kvm and vhost */
> >>>> +    if (kvm_enabled() && !kvm_has_free_slot(machine)) {
> >>>> +        error_setg(errp, "hypervisor has no free memory slots left");
> >>>> +        return;
> >>>> +    }
> >>>> +    if (!vhost_has_free_slot()) {
> >>>> +        error_setg(errp, "a used vhost backend has no free memory slots left");
> >>>> +        return;
> >>>> +    }
> >>> move these checks to pre_plug time
> >>>
> >>>> +
> >>>> +    memory_region_add_subregion(&hpms->mr, addr - hpms->base, mr);
> >>> missing vmstate registration?
> >>
> >> Missed this one: To be called by the caller. Important because e.g. for
> >> virtio-pmem we don't want this (I assume :) ).
> > if pmem isn't on shared storage, then we'd probably want to migrate
> > it as well, otherwise the target would experience data loss.
> > Anyway, I'd just treat it as normal RAM in the migration case.
>
> Yes, if we realize that all MemoryDevices need this call, we can move it
> to that place, too.
>
> Wonder if we might want to make this configurable for virtio-pmem later
> on (via a flag or something like that).
I don't see any reason why we wouldn't want it to be migrated; it's the
same as nvdimm but with a different QEMU-to-guest ABI and an async flush
instead of the sync one we have with nvdimm.