From: "Huang\, Ying" <ying.huang@linux.intel.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Huang, Ying" <ying.huang@intel.com>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Hugh Dickins <hughd@google.com>, Shaohua Li <shli@kernel.org>,
Minchan Kim <minchan@kernel.org>, Rik van Riel <riel@redhat.com>
Subject: Re: [PATCH -mm -v3] mm, swap: Sort swap entries before free
Date: Fri, 14 Apr 2017 09:41:33 +0800 [thread overview]
Message-ID: <874lxrdauq.fsf@yhuang-dev.intel.com> (raw)
In-Reply-To: <878tn3db3h.fsf@yhuang-dev.intel.com> (Ying Huang's message of "Fri, 14 Apr 2017 09:36:18 +0800")
"Huang, Ying" <ying.huang@intel.com> writes:
> Andrew Morton <akpm@linux-foundation.org> writes:
>
>> On Fri, 7 Apr 2017 14:49:01 +0800 "Huang, Ying" <ying.huang@intel.com> wrote:
>>
>>> To reduce contention on swap_info_struct->lock when freeing swap
>>> entries, the freed swap entries are first collected in a per-CPU
>>> buffer, and are actually freed later in a batch.  During the batch
>>> freeing, if consecutive swap entries in the per-CPU buffer belong
>>> to the same swap device, swap_info_struct->lock needs to be
>>> acquired/released only once, so the lock contention can be reduced
>>> greatly.  But if there are multiple swap devices, the lock may be
>>> unnecessarily released/acquired because swap entries belonging to
>>> the same swap device are non-consecutive in the per-CPU buffer.
>>>
>>> To solve the issue, the per-CPU buffer is sorted according to the
>>> swap device before freeing the swap entries.  Tests show that the
>>> time spent in swapcache_free_entries() is reduced after the patch.
>>>
>>> The patch is tested by measuring the run time of
>>> swapcache_free_entries() during the exit phase of applications that
>>> use a large amount of swap space.  The results show that the average
>>> run time of swapcache_free_entries() is reduced by about 20% after
>>> applying the patch.
>>
>> "20%" is useful info, but it is much better to present the absolute
>> numbers, please. If it's "20% of one nanosecond" then the patch isn't
>> very interesting. If it's "20% of 35 seconds" then we know we have
>> more work to do.
>
> I added memory-freeing timing capability to the vm-scalability test
> suite.  The result shows that the memory freeing time is reduced from
> 2.64s to 2.31s (about -12.5%).
The memory space to free is 96G (including swap).  The machine has 144
CPUs, 32G RAM, and 96G swap.  The number of processes is 16.
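
To make those numbers concrete: the mechanism being measured is simply
sorting the per-CPU buffer by swap device before the batch free, so that
entries of the same device become consecutive and swap_info_struct->lock
is taken once per run of same-device entries instead of once per device
transition.  Below is a minimal user-space sketch of the idea; the struct,
field names, and the lock counter are illustrative only, not the kernel
code.

	#include <stdio.h>
	#include <stdlib.h>

	/* Illustrative stand-in for swp_entry_t: only the device matters here. */
	struct entry {
		int dev;	/* which swap device the entry belongs to */
		long off;	/* offset within the device (unused in this demo) */
	};

	static int entry_cmp(const void *a, const void *b)
	{
		const struct entry *e1 = a, *e2 = b;

		return e1->dev - e2->dev;
	}

	/* Count how many times the per-device lock would be taken while
	 * walking the buffer and batching consecutive same-device entries. */
	static int batch_free(struct entry *buf, int n)
	{
		int i, locks = 0;

		for (i = 0; i < n; ) {
			int dev = buf[i].dev;

			locks++;		/* "acquire" the lock of this device */
			while (i < n && buf[i].dev == dev)
				i++;		/* "free" all consecutive entries of it */
			/* "release" the lock */
		}
		return locks;
	}

	int main(void)
	{
		/* Entries from two devices, interleaved as they might arrive. */
		struct entry buf[] = {
			{0, 1}, {1, 7}, {0, 2}, {1, 8}, {0, 3}, {1, 9},
		};
		int n = sizeof(buf) / sizeof(buf[0]);

		printf("unsorted: %d lock acquisitions\n", batch_free(buf, n));

		qsort(buf, n, sizeof(buf[0]), entry_cmp);
		printf("sorted:   %d lock acquisitions\n", batch_free(buf, n));
		return 0;
	}

With the interleaved input above, the unsorted walk takes the lock 6
times and the sorted walk only twice.  In the kernel the same effect
comes from sorting the per-CPU entries buffer by swap type before
swapcache_free_entries() walks it.
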
Best Regards,
Huang, Ying
> Best Regards,
> Huang, Ying
>
>> If there is indeed still a significant problem here then perhaps it
>> would be better to move the percpu swp_entry_t buffer into the
>> per-device structure swap_info_struct, so it becomes "per cpu, per
>> device". That way we should be able to reduce contention further.
>>
>> Or maybe we do something else - it all depends upon the significance of
>> this problem, which is why a full description of your measurements is
>> useful.