From: Minchan Kim <minchan@kernel.org>
To: Dave Hansen <dave@sr71.net>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
akpm@linux-foundation.org, mgorman@suse.de,
tim.c.chen@linux.intel.com
Subject: Re: [v5][PATCH 5/6] mm: vmscan: batch shrink_page_list() locking operations
Date: Tue, 4 Jun 2013 14:01:03 +0900
Message-ID: <20130604050103.GC14719@blaptop>
In-Reply-To: <20130603200208.6F71D31F@viggo.jf.intel.com>
On Mon, Jun 03, 2013 at 01:02:08PM -0700, Dave Hansen wrote:
>
> From: Dave Hansen <dave.hansen@linux.intel.com>
> changes for v2:
> * remove batch_has_same_mapping() helper. A local variable makes
> the check cheaper and cleaner
> * Move batch draining later to where we already know
> page_mapping(). This probably fixes a truncation race anyway
> * rename batch_for_mapping_removal -> batch_for_mapping_rm. It
> caused a line over 80 chars and needed shortening anyway.
> * Note: we only set 'batch_mapping' when there are pages in the
> batch_for_mapping_rm list
>
> --
>
> We batch like this so that several pages can be freed with a
> single mapping->tree_lock acquisition/release pair. This reduces
> the number of atomic operations and ensures that we do not bounce
> cachelines around.
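
Just to spell out the locking pattern this optimizes (a sketch of the
shape only, not code from the patch):

	/* unbatched: __remove_mapping() takes the lock once per page */
	spin_lock_irq(&mapping->tree_lock);
	/* ... delete one page from the radix tree ... */
	spin_unlock_irq(&mapping->tree_lock);
	/* ... repeated for every page on the list ... */

	/* batched: __remove_mapping_batch() takes it once per batch */
	spin_lock_irq(&mapping->tree_lock);
	while (!list_empty(remove_list)) {
		/* ... delete each page from the radix tree ... */
	}
	spin_unlock_irq(&mapping->tree_lock);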
>
> Tim Chen's earlier version of these patches just unconditionally
> created large batches of pages, even if they did not share a
> page_mapping(). This is a bit suboptimal for a few reasons:
> 1. If we cannot consolidate lock acquisitions, it makes little
>    sense to batch.
> 2. The page locks are held for long periods of time, so we only
>    want to batch when we are sure of a substantial throughput
>    improvement, since we pay a latency cost by holding the locks.
>
> This patch makes sure to batch only when all the pages on
> 'batch_for_mapping_rm' continue to share a page_mapping().
> In practice this only happens when pages in the same file
> are close to each other on the LRU, which seems like a
> reasonable assumption.
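
For anyone reading along without the full patch: the caller side is not
quoted below, but from the changelog notes above it is roughly shaped
like this (my reconstruction, not the literal patch; batch_for_mapping_rm
and batch_mapping are the patch's names, the exact control flow here is
my assumption):

	struct address_space *batch_mapping = NULL;
	LIST_HEAD(batch_for_mapping_rm);

	/* in shrink_page_list()'s main loop, once page_mapping() is known: */
	if (!batch_mapping)
		batch_mapping = mapping;
	if (batch_mapping != mapping) {
		/* the run of same-mapping pages ended: drain the batch */
		nr_reclaimed += __remove_mapping_batch(&batch_for_mapping_rm,
						       &ret_pages, &free_pages);
		batch_mapping = mapping;
	}
	/* queue the still-locked page rather than removing it here */
	list_add(&page->lru, &batch_for_mapping_rm);

	/* ... and after the loop, drain whatever remains: */
	nr_reclaimed += __remove_mapping_batch(&batch_for_mapping_rm,
					       &ret_pages, &free_pages);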
>
> In a 128MB virtual machine doing kernel compiles, the average
> batch size when calling __remove_mapping_batch() is around 5,
> so this does seem to do some good in practice.
>
> On a 160-cpu system doing kernel compiles, I still saw an
> average batch length of about 2.8. One promising sign: as
> memory pressure went up, the average batch size seemed to
> get larger.
>
> It has shown some substantial performance benefits on
> microbenchmarks.
>
> Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
> Acked-by: Mel Gorman <mgorman@suse.de>
Please look at the comment below; otherwise, this looks good to me.
Reviewed-by: Minchan Kim <minchan@kernel.org>
> ---
>
> linux.git-davehans/mm/vmscan.c | 95 +++++++++++++++++++++++++++++++++++++----
> 1 file changed, 86 insertions(+), 9 deletions(-)
>
> diff -puN mm/vmscan.c~create-remove_mapping_batch mm/vmscan.c
> --- linux.git/mm/vmscan.c~create-remove_mapping_batch 2013-06-03 12:41:31.408751324 -0700
> +++ linux.git-davehans/mm/vmscan.c 2013-06-03 12:41:31.412751500 -0700
> @@ -550,6 +550,61 @@ int remove_mapping(struct address_space
> return 0;
> }
>
> +/*
> + * pages come in here (via remove_list) locked and leave unlocked
> + * (on either ret_pages or free_pages)
> + *
> + * We do this batching so that we free batches of pages with a
> + * single mapping->tree_lock acquisition/release. This optimization
> + * only makes sense when the pages on remove_list all share a
> + * page_mapping(). Violating this will trigger a BUG_ON().
> + */
> +static int __remove_mapping_batch(struct list_head *remove_list,
> +				  struct list_head *ret_pages,
> +				  struct list_head *free_pages)
> +{
> +	int nr_reclaimed = 0;
> +	struct address_space *mapping;
> +	struct page *page;
> +	LIST_HEAD(need_free_mapping);
> +
> +	if (list_empty(remove_list))
> +		return 0;
> +
> +	mapping = page_mapping(lru_to_page(remove_list));
> +	spin_lock_irq(&mapping->tree_lock);
> +	while (!list_empty(remove_list)) {
> +		page = lru_to_page(remove_list);
> +		BUG_ON(!PageLocked(page));
> +		BUG_ON(page_mapping(page) != mapping);
> +		list_del(&page->lru);
> +
> +		if (!__remove_mapping(mapping, page)) {
> +			unlock_page(page);
> +			list_add(&page->lru, ret_pages);
> +			continue;
> +		}
> +		list_add(&page->lru, &need_free_mapping);
Why do we need a new lru list here instead of using @free_pages?
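IOW (just sketching the alternative I mean), couldn't this do

	list_add(&page->lru, free_pages);

right here and skip the second loop below? Or is the point to keep the
mapping_release_page()/__clear_page_locked() work out from under
tree_lock?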
> +	}
> +	spin_unlock_irq(&mapping->tree_lock);
> +
> +	while (!list_empty(&need_free_mapping)) {
> +		page = lru_to_page(&need_free_mapping);
> +		list_move(&page->lru, free_pages);
> +		mapping_release_page(mapping, page);
> +		/*
> +		 * At this point, we have no other references and there is
> +		 * no way to pick any more up (removed from LRU, removed
> +		 * from pagecache). Can use non-atomic bitops now (and
> +		 * we obviously don't have to worry about waking up a process
> +		 * waiting on the page lock, because there are no references).
> +		 */
> +		__clear_page_locked(page);
> +		nr_reclaimed++;
> +	}
> +	return nr_reclaimed;
> +}
> +
--
Kind regards,
Minchan Kim
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>