> The idea being that kvm_read_guest_page() will effectively pin the page
> and put_page() has the effect of unpinning it?  It seems to me that we
> should be using page_cache_release() since we're not just
> get_page()'ing the memory.  I may be wrong though.
>
> Both of these are an optimization though.  It's not strictly needed for
> what I'm after, since in the case of ballooning there's no reason why
> someone would be calling kvm_read_guest_page() on the ballooned memory.
>
>> second, is hacking the rmap to do reverse mapping to every present
>> pte and put_page() the pages at rmap_remove(),
>> and that is about all it takes to make this work.
>
> If I understand you correctly, this is to unpin the page whenever it is
> removed from the rmap?  That would certainly be useful, but it's still an
> optimization.  The other obvious optimization to me would be to not use
> get_user_pages() on all memory to start with and instead allow pages to
> be faulted in on use.  This is particularly useful for creating a VM
> with a very large amount of memory and immediately ballooning down.
> That way the large amount of memory doesn't need to be present to
> actually spawn the guest.
>
> Regards,
>
> Anthony Liguori

Izik's idea is a step toward a general guest swapping capability. The
first step is just to increase the reference count of the rmapped pages.
The second is to change the size of the shadow page tables as a function
of the guest memory usage, and the third is to get notifications from
Linux about pte state changes.

btw: I have unmerged balloon code (guest & host) that uses the old kernel
mapping. The guest part may still be valid for the userspace allocation.
Attaching it.

Dor.