From mboxrd@z Thu Jan 1 00:00:00 1970 From: Konrad Rzeszutek Wilk Subject: Re: [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu Date: Tue, 26 Jan 2016 09:46:41 -0500 Message-ID: <20160126144641.GF19666@char.us.oracle.com> References: <56A0A25002000078000C971B@prv-mh.provo.novell.com> <56A095E3.5060507@linux.intel.com> <56A0AA8A02000078000C977D@prv-mh.provo.novell.com> <56A0A09A.2050101@linux.intel.com> <56A0C02A02000078000C9823@prv-mh.provo.novell.com> <20160121140103.GB6362@hz-desktop.sh.intel.com> <56A0FEA102000078000C9A44@prv-mh.provo.novell.com> <56A7785802000078000CB0CD@prv-mh.provo.novell.com> <56A77B8B.3010804@citrix.com> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit Return-path: Content-Disposition: inline In-Reply-To: <56A77B8B.3010804@citrix.com> List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Sender: xen-devel-bounces@lists.xen.org Errors-To: xen-devel-bounces@lists.xen.org To: George Dunlap Cc: Jun Nakajima , Haozhong Zhang , Kevin Tian , Wei Liu , Ian Campbell , Stefano Stabellini , George Dunlap , Andrew Cooper , Ian Jackson , "xen-devel@lists.xen.org" , Jan Beulich , Xiao Guangrong , Keir Fraser List-Id: xen-devel@lists.xenproject.org On Tue, Jan 26, 2016 at 01:58:35PM +0000, George Dunlap wrote: > On 26/01/16 12:44, Jan Beulich wrote: > >>>> On 26.01.16 at 12:44, wrote: > >> On Thu, Jan 21, 2016 at 2:52 PM, Jan Beulich wrote: > >>>>>> On 21.01.16 at 15:01, wrote: > >>>> On 01/21/16 03:25, Jan Beulich wrote: > >>>>>>>> On 21.01.16 at 10:10, wrote: > >>>>>> c) hypervisor should mange PMEM resource pool and partition it to multiple > >>>>>> VMs. > >>>>> > >>>>> Yes. > >>>>> > >>>> > >>>> But I Still do not quite understand this part: why must pmem resource > >>>> management and partition be done in hypervisor? > >>> > >>> Because that's where memory management belongs. And PMEM, > >>> other than PBLK, is just another form of RAM. 
> >> > >> I haven't looked more deeply into the details of this, but this > >> argument doesn't seem right to me. > >> > >> Normal RAM in Xen is what might be called "fungible" -- at boot, all > >> RAM is zeroed, and it basically doesn't matter at all what RAM is > >> given to what guest. (There are restrictions of course: lowmem for > >> DMA, contiguous superpages, &c; but within those groups, it doesn't > >> matter *which* bit of lowmem you get, as long as you get enough to do > >> your job.) If you reboot your guest or hand RAM back to the > >> hypervisor, you assume that everything in it will disappear. When you > >> ask for RAM, you can request some parameters that it will have > >> (lowmem, on a specific node, &c), but you can't request a specific > >> page that you had before. > >> > >> This is not the case for PMEM. The whole point of PMEM (correct me if > >> I'm wrong) is to be used for long-term storage that survives over > >> reboot. It matters very much that a guest be given the same PRAM > >> after the host is rebooted that it was given before. It doesn't make > >> any sense to manage it the way Xen currently manages RAM (i.e., that > >> you request a page and get whatever Xen happens to give you). > > > > Interesting. This isn't the usage model I have been thinking about > > so far. Having just gone back to the original 0/4 mail, I'm afraid > > we're really left guessing, and you guessed differently than I did. > > My understanding of the intentions of PMEM so far was that this > > is a high-capacity, slower than DRAM but much faster than e.g. > > swapping to disk alternative to normal RAM. I.e. the persistent > > aspect of it wouldn't matter at all in this case (other than for PBLK, > > obviously). > > Oh, right -- yes, if the usage model of PRAM is just "cheap slow RAM", > then you're right -- it is just another form of RAM, that should be > treated no differently than say, lowmem: a fungible resource that can be > requested by setting a flag. 
I would think of it more as MMIO ranges than RAM. Yes, it is behind a
memory controller - but there are subtle things such as the new
instructions - pcommit, clflushopt, and others - that impact it.
Furthermore, ranges (contiguous and most likely discontiguous) of this
"RAM" have to be shared with guests (at least dom0) and with other
(multiple) HVM guests.

>
> Haozhong?
>
>  -George
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel