From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <1465332209.22178.236.camel@linux.intel.com>
Subject: Re: [PATCH] mm: Cleanup - Reorganize the shrink_page_list code into smaller functions
From: Tim Chen
To: Minchan Kim
Cc: Andrew Morton, Vladimir Davydov, Johannes Weiner, Michal Hocko,
 Hugh Dickins, Kirill A. Shutemov, Andi Kleen, Aaron Lu, Huang Ying,
 linux-mm, linux-kernel@vger.kernel.org
Date: Tue, 07 Jun 2016 13:43:29 -0700
In-Reply-To: <20160607082158.GA23435@bbox>
References: <1463779979.22178.142.camel@linux.intel.com>
 <20160531091550.GA19976@bbox>
 <20160531171722.GA5763@linux.intel.com>
 <20160601071225.GN19976@bbox>
 <1464805433.22178.191.camel@linux.intel.com>
 <20160607082158.GA23435@bbox>
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 2016-06-07 at 17:21 +0900, Minchan Kim wrote:
> On Wed, Jun 01, 2016 at 11:23:53AM -0700, Tim Chen wrote:
> > On Wed, 2016-06-01 at 16:12 +0900, Minchan Kim wrote:
> > >
> > > Hi Tim,
> > >
> > > To me, frankly speaking, this reorganization is too limited and
> > > not good enough. It works only for your goal, which I guess is to
> > > allocate batches of swap slots.
:)

> > > My goal is to make them work with batch page_check_references,
> > > batch try_to_unmap and batch __remove_mapping, where we can avoid
> > > taking mapping->lock (e.g., anon_vma or i_mmap_lock) frequently,
> > > hoping such batch locking helps system performance, if the batched
> > > pages share the same inode or are all anonymous.
> >
> > This is also my goal: to group pages that are either under the same
> > mapping or are anonymous together, so we can reduce the i_mmap_lock
> > acquisitions.  One piece of logic not yet implemented in your patch
> > is the grouping of similar pages so that only one i_mmap_lock
> > acquisition is needed.  Doing this efficiently is non-trivial.
>
> Hmm, my assumption is that pages from the same inode are likely to be
> ordered in the LRU, so there is no need to group them.  If a
> successive page in page_list comes from a different inode, we can drop
> the lock and take a new lock for the new inode.  Does that sound
> strange?

Sounds reasonable.  But the process function you pass to
spl_batch_pages may need to be modified to know whether the radix tree
lock or the swap_info lock is already held, since it deals with only
one page at a time.  It may be tricky, as the lock may be acquired and
dropped more than once in the process function.  Are you planning to
update the patch with lock batching?

Thanks.

Tim

> > I punted the problem somewhat in my patch and elected to defer the
> > processing of the anonymous pages to the end, so they are naturally
> > grouped without having to traverse the page_list more than once.  So
> > I'm batching the anonymous pages, but the file-mapped pages are not
> > grouped.
> >
> > In your implementation, you may need to traverse the page_list in
> > two passes, where the first pass categorizes and groups the pages
> > and the second pass does the actual processing.  Then the lock
> > batching can be implemented for the pages.
> > Otherwise the locking is still done page by page in your patch, and
> > can only be batched if the next page on the page_list happens to
> > have the same mapping.  Your idea of using spl_batch_pages is pretty
>
> Yes.  As I said above, I expect pages in the LRU would normally be
> ordered per inode.  If they're not, yes, we need grouping, but such
> overhead would mitigate the benefit of lock batching as
> SWAP_CLUSTER_MAX gets bigger.
>
> > neat.  It may need some enhancement so that it is known whether some
> > locks are already held, for lock-batching purposes.
> >
> > Thanks.
> >
> > Tim