From: Bob Liu
Subject: Re: [PATCH v2 3/3] xen: use idle vcpus to scrub pages
Date: Fri, 25 Jul 2014 08:42:55 +0800
Message-ID: <53D1A80F.9040509@oracle.com>
In-Reply-To: <53D0C2D1020000780002556F@mail.emea.novell.com>
References: <1404135584-29206-1-git-send-email-bob.liu@oracle.com>
 <1404135584-29206-3-git-send-email-bob.liu@oracle.com>
 <53B2979C020000780001EE97@mail.emea.novell.com>
 <53B2A8C7.9040601@oracle.com>
 <53B2CCD1020000780001F027@mail.emea.novell.com>
 <53C4F171.8060807@oracle.com>
 <53CF80400200007800024F2B@mail.emea.novell.com>
 <53D06A8B.7010804@oracle.com>
 <53D0C2D1020000780002556F@mail.emea.novell.com>
To: Jan Beulich
Cc: Bob Liu, keir@xen.org, ian.campbell@citrix.com,
 George.Dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
 xen-devel@lists.xenproject.org
List-Id: xen-devel@lists.xenproject.org

On 07/24/2014 02:24 PM, Jan Beulich wrote:
>>>> On 24.07.14 at 04:08, wrote:
>
>> On 07/23/2014 03:28 PM, Jan Beulich wrote:
>>>>>> On 15.07.14 at 11:16, wrote:
>>>> After all these days I haven't managed to find a workable solution
>>>> that doesn't remove pages temporarily. The hardest part is iterating
>>>> the heap free list without holding heap_lock, because holding the
>>>> lock there can mean heavy lock contention.
>>>> So do you think the patch is acceptable once all the other concerns
>>>> about it are fixed?
>>>
>>> No, I don't think so. Instead I'm of the opinion that you may have
>>> worked in the wrong direction: Rather than not taking the heap lock
>>> at all, it may also be sufficient to shrink the lock holding time (i.e.
>>> avoid long loops with the lock held).
>>>
>>
>> But I still think we have to drop pages from the heap list temporarily;
>> otherwise the heap lock must be held for a long time to get rid of
>> races like the one below.
>>
>> A: alloc path                     B: idle loop
>>
>> spin_lock(&heap_lock)
>> page_list_for_each( pg, &heap(node, zone, order) )
>>     if _PGC_need_scrub is set, break;
>> spin_unlock(&heap_lock)
>>
>>                                   if ( test_bit(_PGC_need_scrub, pg) )
>>                                   ^^^^
>> spin_lock(&heap_lock)
>> delist page
>> spin_unlock(&heap_lock)
>>
>> write data to this page
>>
>>                                   scrub_one_page(pg)
>>                                   ^^^ will clean useful data
>
> No (and I'm sure I said so before): The only problem is with the
> linked list itself; the page contents are not a problem - the
> allocation path can simply wait for the already suggested
> _PGC_scrubbing flag to clear before returning.

The page contents are a problem if the race condition I mentioned in
my previous email happens. Because there is a time window between
checking the _PGC_need_scrub flag and doing the real scrub in the idle
thread, the idle thread can still scrub a page after that page has
been allocated by the allocation path and put to use (i.e. after
useful data has been written to it).

> And as already said (see above), by avoiding page_list_for_each()
> within the locked region you already significantly reduce lock
> contention. I.e. you need another means to find pages awaiting to be
> scrubbed.

Right, but I don't have a better idea besides using
page_list_for_each() or delisting pages temporarily from the heap
list.
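To make the delisting variant concrete, this is roughly what I have in
mind (untested sketch; scrub_one_chunk() is a name I made up, and I'm
assuming _PGC_need_scrub is set on every page of a dirty chunk):

static void scrub_one_chunk(unsigned int node, unsigned int zone,
                            unsigned int order)
{
    struct page_info *pg = NULL, *cur;
    unsigned long i;

    /* Short locked section: find one dirty chunk and delist it, so
     * the allocation path can't hand it out while we scrub it with
     * the lock dropped. */
    spin_lock(&heap_lock);
    page_list_for_each ( cur, &heap(node, zone, order) )
    {
        if ( test_bit(_PGC_need_scrub, &cur->count_info) )
        {
            pg = cur;
            page_list_del(pg, &heap(node, zone, order));
            break;
        }
    }
    spin_unlock(&heap_lock);

    if ( !pg )
        return;

    /* The actual scrubbing happens without the lock held. */
    for ( i = 0; i < (1UL << order); i++ )
    {
        scrub_one_page(&pg[i]);
        clear_bit(_PGC_need_scrub, &pg[i].count_info);
    }

    /* Another short locked section to put the chunk back. */
    spin_lock(&heap_lock);
    page_list_add_tail(pg, &heap(node, zone, order));
    spin_unlock(&heap_lock);
}

The two locked sections are short, but the page_list_for_each() walk
still happens with the lock held, which I understand is exactly what
you're objecting to.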
You may want to leverage that the allocation path > does the scrubbing if needed (e.g. by not scrubbing the first > of any set of contiguous free pages on the idle path, linking up > all other ones recognizing that their link fields are unused while > on an order-greater-than-zero free list). It sounds like another list have to be introduced, but I don't think this can help to get rid of lock contention or the similar race condition. -- Regards, -Bob