From: "Andres Lagar-Cavilla"
Subject: Re: [PATCH 0 of 3] Update paging/sharing/access interfaces v2
Date: Fri, 10 Feb 2012 10:13:44 -0800
Message-ID: <59a249b9f97604d7cb364b8edc38c6bf.squirrel@webmail.lagarcavilla.org>
In-Reply-To: <20120210161357.GF32107@ocelot.phlegethon.org>
Reply-To: andres@lagarcavilla.org
To: Tim Deegan
Cc: andres@gridcentric.ca, xen-devel@lists.xensource.com, ian.campbell@citrix.com, adin@gridcentric.ca
List-Id: xen-devel@lists.xenproject.org

> At 01:08 -0500 on 09 Feb (1328749705), Andres Lagar-Cavilla wrote:
>> (Was: switch from domctl to memops)
>>
>> Changes from v1, posted Feb 2nd 2012:
>>
>> - Patches 1 & 2 Acked-by Tim Deegan on the hypervisor side
>> - Added patch 3 to clean up the enable domctl interface, based on
>>   discussion with Ian Campbell
>>
>> Description from the original post follows:
>>
>> Per-page operations in the paging, sharing, and access-tracking
>> subsystems are all implemented with domctls (e.g. a domctl to evict
>> one page, or to share one page).
>>
>> Under heavy load, the domctl path reveals a lack of scalability. The
>> domctl lock serializes dom0's vcpus in the hypervisor. When performing
>> thousands of per-page operations on dozens of domains, these vcpus
>> will spin in the hypervisor. Beyond the aggressive locking, an added
>> inefficiency of blocking vcpus on the domctl lock is that dom0 is
>> prevented from re-scheduling any of its other work-starved processes.
>>
>> We retain the domctl interface for setting up and tearing down
>> paging/sharing/mem-access for a domain, but we migrate all the
>> per-page operations to the memory_op hypercalls (e.g. XENMEM_*).
>>
>> This is a backwards-incompatible ABI change. It has been floating on
>> the list for a couple of weeks now, with no nacks thus far.
>>
>> Signed-off-by: Andres Lagar-Cavilla
>> Signed-off-by: Adin Scannell
>
> Applied 1 and 2; thanks.
>
> I'll leave patch 3 for others to comment -- I know there are
> out-of-tree users of the mem-access interface, and changing the
> hypercalls is less disruptive than changing the libxc interface.

Makes a lot of sense. Thanks. I don't view this change as a sine qua
non; rather, it's a case of "it would be nice if"... Is there a timeout
mechanism if the out-of-tree consumers are not on the ball?

Actually, this hiatus lets me float a perhaps cleaner way to map the
ring: the known problem is that the pager may die abruptly while Xen is
still posting events to a page that now belongs to some other dom0
process. The qemu-dm case deals with this by stuffing the ring into an
unused pfn (presumably somewhere in the MMIO hole?). Would that work
here? Is there a policy for parceling out these "magic pfns"?

Andres

> Tim.
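
For context on the interface change discussed above, here is a minimal
sketch in C of the shape of a memop-based per-page request, as opposed
to the old per-page domctl path. The memop number, sub-op value, struct
layout, and the memory_op_example() wrapper are illustrative assumptions,
not the actual definitions from the posted patches or from
xen/include/public/memory.h.

    /*
     * Illustrative sketch only: the memop number, sub-op value, struct
     * layout and memory_op_example() are assumptions, not copied from
     * the posted patches or from the Xen public headers.
     */
    #include <stdint.h>

    #define XENMEM_paging_op_example         17  /* hypothetical XENMEM_* memop */
    #define XENMEM_paging_op_evict_example    1  /* hypothetical sub-op */

    struct xen_mem_paging_op_example {
        uint8_t  op;      /* e.g. nominate / evict / prep */
        uint16_t domain;  /* target domain id */
        uint64_t gfn;     /* guest frame the operation acts on */
    };

    /* Stand-in for the real hypercall entry point (HYPERVISOR_memory_op
     * in the hypervisor ABI, or libxc's memory_op wrapper in dom0). */
    extern int memory_op_example(unsigned int cmd, void *arg);

    /*
     * One hypercall per page.  Unlike the old per-page domctls, these
     * requests are not serialized behind the global domctl lock, so
     * pagers acting on many domains can make progress in parallel.
     */
    static int evict_one_page_example(uint16_t domid, uint64_t gfn)
    {
        struct xen_mem_paging_op_example mpo = {
            .op     = XENMEM_paging_op_evict_example,
            .domain = domid,
            .gfn    = gfn,
        };

        return memory_op_example(XENMEM_paging_op_example, &mpo);
    }

The property that matters for scalability is that each per-page request
travels as an ordinary memory_op, so dom0 vcpus issuing thousands of
them against dozens of domains no longer spin on the single domctl lock.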