From: Aaron Lu <aaron.lu@intel.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Andrew Morton <akpm@linux-foundation.org>,
Huang Ying <ying.huang@intel.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Kemi Wang <kemi.wang@intel.com>,
Tim Chen <tim.c.chen@linux.intel.com>,
Andi Kleen <ak@linux.intel.com>, Michal Hocko <mhocko@suse.com>,
Mel Gorman <mgorman@techsingularity.net>,
Matthew Wilcox <willy@infradead.org>,
Daniel Jordan <daniel.m.jordan@oracle.com>,
Tariq Toukan <tariqt@mellanox.com>,
Jesper Dangaard Brouer <brouer@redhat.com>
Subject: Re: [RFC v4 PATCH 3/5] mm/rmqueue_bulk: alloc without touching individual page structure
Date: Tue, 23 Oct 2018 10:19:30 +0800
Message-ID: <20181023021930.GA20069@intel.com>
In-Reply-To: <b343cf1a-ea15-b70e-ff5a-e08d3dc5354d@suse.cz>
On Mon, Oct 22, 2018 at 11:37:53AM +0200, Vlastimil Babka wrote:
> On 10/17/18 8:33 AM, Aaron Lu wrote:
> > Profiling on an Intel Skylake server shows that the most time consuming
> > part under zone->lock on the allocation path is accessing the
> > to-be-returned pages' "struct page" on the free_list inside zone->lock.
> > One explanation is that different CPUs release pages to the head of the
> > free_list, so those pages' 'struct page' may very well be cache cold for
> > the allocating CPU when it grabs them from the free_list's head. The
> > purpose here is to avoid touching these pages one by one inside
> > zone->lock.
>
> What about making the pages cache-hot first, without zone->lock, by
> traversing via page->lru? It would need some safety checks obviously
> (maybe based on page_to_pfn + pfn_valid, or something) to make sure we
> only read from real struct pages in case there's some update racing. The
> worst case would be not populating enough due to a race, and thus not
> gaining the performance when doing the actual rmqueueing under the lock.
Yes, there are the two potential problems you pointed out:
1. we may prefetch something that isn't a page, because page->lru can be
   reused for different things in different scenarios;
2. we may not be able to prefetch much, because another CPU may be doing
   allocation inside the lock; we could end up prefetching pages that
   have already moved to another CPU's pcp list.
Considering these two problems, prefetching outside the lock feels a
little risky and troublesome to me.
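Just to make the idea concrete, here is a rough, untested sketch of what
such a speculative prefetch might look like. prefetch_free_list_unlocked()
is a made-up name for illustration, the walk itself is the racy part, and
the "pure arithmetic" claim for page_to_pfn() only holds for FLATMEM or
SPARSEMEM_VMEMMAP configs:

static void prefetch_free_list_unlocked(struct list_head *list, int count)
{
	struct page *page;
	int i = 0;

	/* No zone->lock held: any ->lru pointer we follow may be stale. */
	list_for_each_entry(page, list, lru) {
		/*
		 * With SPARSEMEM_VMEMMAP, page_to_pfn() is pointer
		 * arithmetic and does not fault on a stale entry;
		 * pfn_valid() then tells us whether this still looks
		 * like a real struct page before touching it further.
		 */
		if (!pfn_valid(page_to_pfn(page)))
			break;
		prefetch(page);
		if (++i >= count)
			break;
	}
}

Even with the pfn_valid() check, problem 2 remains: the pages we touch may
have just been taken by another CPU, so the prefetch can end up being
wasted work.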
The allocation path is the hard part of improving page allocator
performance: on the free path, we can prefetch the pages safely outside
the lock, and we can even pre-merge them outside the lock to reduce the
pressure on zone->lock; but on the allocation path, there is pretty much
nothing we can do before acquiring the lock, except taking the risk of
prefetching without the lock as you suggested here.
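To show the contrast, a simplified sketch of why the free path can
prefetch safely, in the spirit of free_pcppages_bulk(). This is not the
actual patch: detach_pcp_pages() is a made-up helper standing in for the
pcp batching logic, and error handling is omitted:

static void free_pages_bulk_prefetch(struct zone *zone, int count,
				     struct per_cpu_pages *pcp)
{
	LIST_HEAD(head);
	struct page *page, *tmp;

	/* Detach count pages from the pcp list onto a private list. */
	detach_pcp_pages(pcp, count, &head);

	/* Safe: the list is private now, no other CPU can touch it. */
	list_for_each_entry(page, &head, lru)
		prefetch(page);

	spin_lock(&zone->lock);
	list_for_each_entry_safe(page, tmp, &head, lru) {
		list_del(&page->lru);
		__free_one_page(page, page_to_pfn(page), zone, 0,
				get_pcppage_migratetype(page));
	}
	spin_unlock(&zone->lock);
}

The key point is that the detached list belongs to this CPU alone, so
touching the struct pages there races with nobody; the allocation path has
no equivalent private list before zone->lock is taken.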
We can come back to the prefetch idea if the 'address space range' lock
doesn't work out.
Thread overview: 26+ messages
2018-10-17 6:33 [RFC v4 PATCH 0/5] Eliminate zone->lock contention for will-it-scale/page_fault1 and parallel free Aaron Lu
2018-10-17 6:33 ` [RFC v4 PATCH 1/5] mm/page_alloc: use helper functions to add/remove a page to/from buddy Aaron Lu
2018-10-17 9:51 ` Mel Gorman
2018-10-17 6:33 ` [RFC v4 PATCH 2/5] mm/__free_one_page: skip merge for order-0 page unless compaction failed Aaron Lu
2018-10-17 10:44 ` Mel Gorman
2018-10-17 13:10 ` Aaron Lu
2018-10-17 13:58 ` Mel Gorman
2018-10-17 14:59 ` Aaron Lu
2018-10-18 11:16 ` Mel Gorman
2018-10-19 5:57 ` Aaron Lu
2018-10-19 8:54 ` Mel Gorman
2018-10-19 15:00 ` Daniel Jordan
2018-10-20 9:00 ` Aaron Lu
2018-10-17 17:03 ` Vlastimil Babka
2018-10-18 6:48 ` Aaron Lu
2018-10-18 8:23 ` Vlastimil Babka
2018-10-18 11:07 ` Aaron Lu
2018-10-17 6:33 ` [RFC v4 PATCH 3/5] mm/rmqueue_bulk: alloc without touching individual page structure Aaron Lu
2018-10-17 11:20 ` Mel Gorman
2018-10-17 14:23 ` Aaron Lu
2018-10-18 11:20 ` Mel Gorman
2018-10-18 13:21 ` Aaron Lu
2018-10-22 9:37 ` Vlastimil Babka
2018-10-23 2:19 ` Aaron Lu [this message]
2018-10-17 6:33 ` [RFC v4 PATCH 4/5] mm/free_pcppages_bulk: reduce overhead of cluster operation on free path Aaron Lu
2018-10-17 6:33 ` [RFC v4 PATCH 5/5] mm/can_skip_merge(): make it more aggressive to attempt cluster alloc/free Aaron Lu