From: Bob Liu <bob.liu@oracle.com>
Subject: Re: [PATCH v2 1/3] xen: delay page scrubbing to allocation path
Date: Tue, 01 Jul 2014 16:12:22 +0800
Message-ID: <53B26D66.8020503@oracle.com>
References: <1404135584-29206-1-git-send-email-bob.liu@oracle.com>
 <53B1A4CC020000780001EA6D@mail.emea.novell.com>
In-Reply-To: <53B1A4CC020000780001EA6D@mail.emea.novell.com>
To: Jan Beulich
Cc: Bob Liu <bob.liu@oracle.com>, keir@xen.org, ian.campbell@citrix.com,
 George.Dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
 xen-devel@lists.xenproject.org
List-Id: xen-devel@lists.xenproject.org

On 06/30/2014 11:56 PM, Jan Beulich wrote:
>>>> On 30.06.14 at 15:39, <bob.liu@oracle.com> wrote:
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -711,6 +711,12 @@ static struct page_info *alloc_heap_pages(
>>  
>>      for ( i = 0; i < (1 << order); i++ )
>>      {
>> +        if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
>> +        {
>> +            scrub_one_page(&pg[i]);
>> +            pg[i].count_info &= ~PGC_need_scrub;
>> +        }
>> +
> 
> heap_lock is still being held here - scrubbing should be done after it
> was dropped (or else you re-introduce the same latency problem to
> other paths now needing to wait for the scrubbing to complete).
> 

I see. As it stands, scrubbing at allocation time only avoids the case
of scrubbing far more than was requested, e.g. scrubbing all 4TB when a
4TB chunk is found for a request of a single page. I will move the
scrubbing out of the spinlock in the next version.

>> @@ -876,6 +882,15 @@ static void free_heap_pages(
>>      midsize_alloc_zone_pages = max(
>>          midsize_alloc_zone_pages, total_avail_pages / MIDSIZE_ALLOC_FRAC);
>>  
>> +    if ( need_scrub )
>> +    {
>> +        if ( !tainted )
>> +        {
>> +            for ( i = 0; i < (1 << order); i++ )
>> +                pg[i].count_info |= PGC_need_scrub;
>> +        }
>> +    }
> 
> Two if()s like these should be folded into one.
> 

Will be fixed - see the sketch further down.

>> @@ -889,6 +904,17 @@ static void free_heap_pages(
>>              (PFN_ORDER(pg-mask) != order) ||
>>              (phys_to_nid(page_to_maddr(pg-mask)) != node) )
>>              break;
>> +        /* If we need scrub, only merge with PGC_need_scrub pages */
>> +        if ( need_scrub )
>> +        {
>> +            if ( !test_bit(_PGC_need_scrub, &(pg-mask)->count_info) )
>> +                break;
>> +        }
>> +        else
>> +        {
>> +            if ( test_bit(_PGC_need_scrub, &(pg-mask)->count_info) )
>> +                break;
>> +        }
> 
> You're setting PGC_need_scrub on each 4k page anyway (which
> is debatable), hence there's no need to look at the passed in
> need_scrub flag here: Just check whether both chunks have the
> flag set the same. Same below.
> 

Right, thanks for your suggestion.
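I.e. something along these lines (an untested sketch; it relies on the
patch setting PGC_need_scrub on every 4k page of a need-scrub chunk, so
comparing the first page of each chunk is sufficient):

    /* Only merge with the predecessor if its scrub state matches ours. */
    if ( !!test_bit(_PGC_need_scrub, &(pg-mask)->count_info) !=
         !!test_bit(_PGC_need_scrub, &pg->count_info) )
        break;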
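And for the earlier hunk, the folded version of the two if()s would
presumably be just:

    /* Defer the scrub by flagging every 4k page of the chunk,
     * unless the chunk is tainted. */
    if ( need_scrub && !tainted )
    {
        for ( i = 0; i < (1 << order); i++ )
            pg[i].count_info |= PGC_need_scrub;
    }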
>> @@ -1535,7 +1571,7 @@ void free_xenheap_pages(void *v, unsigned int order)
>>  
>>      memguard_guard_range(v, 1 << (order + PAGE_SHIFT));
>>  
>> -    free_heap_pages(virt_to_page(v), order);
>> +    free_heap_pages(virt_to_page(v), order, 1);
> 
> Why?
> 

Sorry, that's a mistake and will be fixed.

>> @@ -1588,11 +1624,10 @@ void free_xenheap_pages(void *v, unsigned int order)
>>  
>>      for ( i = 0; i < (1u << order); i++ )
>>      {
>> -        scrub_one_page(&pg[i]);
>>          pg[i].count_info &= ~PGC_xen_heap;
>>      }
>>  
>> -    free_heap_pages(pg, order);
>> +    free_heap_pages(pg, order, 1);
> 
> The flag needs to be 1 here, but I don't see why you also pass 1 in
> the other free_xenheap_pages() incarnation above.
> 
>> @@ -1745,24 +1780,20 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>>           * domain has died we assume responsibility for erasure.
>>           */
>>          if ( unlikely(d->is_dying) )
>> -            for ( i = 0; i < (1 << order); i++ )
>> -                scrub_one_page(&pg[i]);
>> -
>> -        free_heap_pages(pg, order);
>> +            free_heap_pages(pg, order, 1);
>> +        else
>> +            free_heap_pages(pg, order, 0);
>>      }
>>      else if ( unlikely(d == dom_cow) )
>>      {
>>          ASSERT(order == 0);
>> -        scrub_one_page(pg);
>> -        free_heap_pages(pg, 0);
>> +        free_heap_pages(pg, 0, 1);
>>          drop_dom_ref = 0;
>>      }
>>      else
>>      {
>>          /* Freeing anonymous domain-heap pages. */
>> -        for ( i = 0; i < (1 << order); i++ )
>> -            scrub_one_page(&pg[i]);
>> -        free_heap_pages(pg, order);
>> +        free_heap_pages(pg, order, 1);
>>          drop_dom_ref = 0;
>>      }
>> 
> 
> This hunk is patching code that no longer exists (see commit daa4b800
> "slightly consolidate code in free_domheap_pages()").
> 

I'll rebase this patch onto a newer tree that contains that commit.
Thanks again.

-- 
Regards,
-Bob