From mboxrd@z Thu Jan 1 00:00:00 1970
From: Josh Zhao
Subject: Re: [PATCH] xen/tmem: Don't use map_domain_page for long-life-time pages.
Date: Thu, 22 Aug 2013 18:28:05 +0800
Message-ID:
References: <1371127820-23294-1-git-send-email-konrad.wilk@oracle.com>
 <51B9C7FB.6000903@eu.citrix.com>
 <20130613132909.GI6303@konrad-lan.dumpdata.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <20130613132909.GI6303@konrad-lan.dumpdata.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Konrad Rzeszutek Wilk
Cc: Konrad Rzeszutek Wilk, Bob Liu,
 "xen-devel@lists.xensource.com", Jan Beulich
List-Id: xen-devel@lists.xenproject.org

2013/6/13 Konrad Rzeszutek Wilk:
> On Thu, Jun 13, 2013 at 02:24:11PM +0100, George Dunlap wrote:
>> On 13/06/13 13:50, Konrad Rzeszutek Wilk wrote:
>> >When using tmem with Xen 4.3 (and a debug build) we end up with:
>> >
>> >(XEN) Xen BUG at domain_page.c:143
>> >(XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
>> >(XEN) CPU:    3
>> >(XEN) RIP:    e008:[] map_domain_page+0x61d/0x6e1
>> >..
>> >(XEN) Xen call trace:
>> >(XEN)    [] map_domain_page+0x61d/0x6e1
>> >(XEN)    [] cli_get_page+0x15e/0x17b
>> >(XEN)    [] tmh_copy_from_client+0x150/0x284
>> >(XEN)    [] do_tmem_put+0x323/0x5c4
>> >(XEN)    [] do_tmem_op+0x5a0/0xbd0
>> >(XEN)    [] syscall_enter+0xeb/0x145
>> >(XEN)
>> >
>> >A bit of debugging revealed that map_domain_page and unmap_domain_page
>> >are meant for short-life-time mappings, and that those mappings are
>> >finite. In a 2-VCPU guest we only have 32 entries, and once we have
>> >exhausted those we trigger the BUG_ON condition.
>> >
>> >The two functions - tmh_persistent_pool_page_[get,put] - are used by
>> >the xmem_pool when xmem_pool_[alloc,free] are called.
>> >These xmem_pool_* functions are wrapped in macros and functions - the
>> >entry points are via tmem_malloc and tmem_page_alloc. In both cases
>> >the users are in the hypervisor, and they do not seem to suffer from
>> >using the hypervisor virtual addresses.
>> >
>> >CC: Bob Liu
>> >CC: Dan Magenheimer
>> >Suggested-by: Jan Beulich
>> >Signed-off-by: Konrad Rzeszutek Wilk
>> >---
>> > xen/common/tmem_xen.c |    5 ++---
>> > 1 files changed, 2 insertions(+), 3 deletions(-)
>> >
>> >diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
>> >index 3a1f3c9..736a8c3 100644
>> >--- a/xen/common/tmem_xen.c
>> >+++ b/xen/common/tmem_xen.c
>> >@@ -385,7 +385,7 @@ static void *tmh_persistent_pool_page_get(unsigned long size)
>> >     if ( (pi = _tmh_alloc_page_thispool(d)) == NULL )
>> >         return NULL;
>> >     ASSERT(IS_VALID_PAGE(pi));
>> >-    return __map_domain_page(pi);
>> >+    return page_to_virt(pi);
>>
>> Did I understand correctly that map_domain_page() was required on
>> >5TiB systems, presumably because of limited virtual address space?
>> In which case this code will fail on those systems?
>
> Correct.

I don't understand why map_domain_page() was required on >5TiB systems?

>> If that is the case, then we need to have a way to make sure tmem
>> cannot be enabled on such systems.
>
> Which Jan had already done when he posted the 5TB patches.
>
>> -George
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel