linux-mm.kvack.org archive mirror
From: Vinayak Menon <vinmenon@codeaurora.org>
To: Minchan Kim <minchan@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Rik van Riel <riel@redhat.com>, Redmond <u93410091@gmail.com>,
	"ZhaoJunmin Zhao(Junmin)" <zhaojunmin@huawei.com>,
	Juneho Choi <juno.choi@lge.com>,
	Sangwoo Park <sangwoo2.park@lge.com>,
	Chan Gyun Jeong <chan.jeong@lge.com>
Subject: Re: [PATCH v1 0/3] per-process reclaim
Date: Mon, 13 Jun 2016 18:59:40 +0530	[thread overview]
Message-ID: <8f2190f4-4388-0eb2-0ffc-b2190280b11a@codeaurora.org> (raw)
In-Reply-To: <1465804259-29345-1-git-send-email-minchan@kernel.org>

On 6/13/2016 1:20 PM, Minchan Kim wrote:
> Hi all,
>
> http://thread.gmane.org/gmane.linux.kernel/1480728
>
> I sent the per-process reclaim patchset three years ago. The last
> feedback from akpm was that he wanted to see a real use-case scenario.
>
> Since then, I have been asked by embedded people at several companies
> why it is not merged into mainline, and heard they have been using the
> feature as an in-house patch; recently, I noticed that Android on
> Qualcomm platforms started to use it.
>
> Of course, we have used it ourselves and shipped it in a real product.
>
> Quote from Sangwoo Park <sangwoo2.park@lge.com>
> Thanks for the data, Sangwoo!
> "
> - Test scenario
>   - platform: android
>   - target: MSM8952, 2G DDR, 16G eMMC
>   - scenario
>     retry app launch and Back Home with 16 apps and 16 turns
>     (total app launch count is 256)
>   - result:
>                    |  resume count  |  cold launching count
> -------------------+----------------+-----------------------
>  vanilla           |       85       |          171
>  perproc reclaim   |      184       |           72
> "
>
> A higher resume count is better because a cold launch has to load a
> lot of resource data, which takes 15-20 seconds for some games, while
> a successful resume takes only 1-5 seconds.
>
> With per-process reclaim and the new management policy, we cut cold
> launches substantially (from 171 to 72), which reduces app startup
> time a lot.
>
Thanks, Minchan, for bringing this up. When we tried the earlier patchset in
its original form, resuming the app that had been reclaimed took a long time.
But the data shown above suggests the resume time improves. Is that the resume
time of the "other" apps, which were able to retain their working set because
low-priority apps were swapped out more efficiently by per-process reclaim?

Because of the higher resume time, we had to modify the logic a bit and devise
a way to pick a "set" of low-priority (by oom_score_adj) tasks and reclaim a
certain number of pages (anon only) from each of them, the number of pages
reclaimed from each task being proportional to its size. This deviates from
the patch's original intention of rescuing one particular app of interest, but
it still uses the working-set hints provided by userspace while avoiding long
resume stalls. The increased swapping helped maintain a healthier memory state
with less page-cache reclaim, resulting in better app resume times and fewer
task kills.
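
A rough userspace-side sketch of that selection logic, for illustration only:
reclaim_task_pages() and OOM_ADJ_THRESHOLD are hypothetical stand-ins for the
actual reclaim mechanism and policy cut-off, not anything in the patchset.

#include <stddef.h>

struct task_info {
        int pid;
        int oom_score_adj;
        size_t anon_pages;      /* resident anonymous pages */
};

/* Hypothetical helper: ask the kernel to reclaim up to npages anon pages. */
extern void reclaim_task_pages(int pid, size_t npages);

#define OOM_ADJ_THRESHOLD 500   /* only touch low-priority tasks */

void reclaim_from_set(struct task_info *tasks, int ntasks, size_t budget)
{
        size_t total_anon = 0;
        int i;

        for (i = 0; i < ntasks; i++)
                if (tasks[i].oom_score_adj >= OOM_ADJ_THRESHOLD)
                        total_anon += tasks[i].anon_pages;

        if (!total_anon)
                return;

        for (i = 0; i < ntasks; i++) {
                if (tasks[i].oom_score_adj < OOM_ADJ_THRESHOLD)
                        continue;
                /* each task's share of the budget is proportional to its size */
                reclaim_task_pages(tasks[i].pid,
                                   budget * tasks[i].anon_pages / total_anon);
        }
}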

So would it be better if a userspace knob were provided to tell the kernel the
maximum number of pages to reclaim from a task? That way userspace can do the
calculations based on priority, task size, etc., reclaim just the required
number of pages from each task, and thus avoid the resume stall caused by
reclaiming an entire task.
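
As a sketch of what that knob could look like, assuming the patchset's
/proc/<pid>/reclaim file: the "anon <npages>" write syntax below is invented
here for illustration and is not part of the posted patches.

#include <stdio.h>

/*
 * Hypothetical usage: write a reclaim type plus a page budget to the
 * per-process reclaim file. The "anon <npages>" syntax is only a sketch
 * of the proposed knob, not an existing interface.
 */
static int reclaim_anon_pages(int pid, size_t npages)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/%d/reclaim", pid);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "anon %zu\n", npages);
        fclose(f);
        return 0;
}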

Also, would it be possible to implement the same thing using a per-task memcg,
by setting the limits and swappiness in such a way that it produces the same
result as per-process reclaim?
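
For comparison, a minimal sketch of that memcg route, assuming the cgroup-v1
memory controller is mounted at /sys/fs/cgroup/memory (error handling trimmed):
move the task into its own memcg, raise swappiness so anon pages are preferred,
then lower the hard limit below current usage so the kernel reclaims down to it.

#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

static void write_str(const char *path, const char *val)
{
        FILE *f = fopen(path, "w");

        if (f) {
                fputs(val, f);
                fclose(f);
        }
}

/* Squeeze one task via its own cgroup-v1 memcg. */
static void memcg_squeeze_task(int pid, long long limit_bytes)
{
        char dir[128], path[192], buf[32];

        snprintf(dir, sizeof(dir), "/sys/fs/cgroup/memory/task-%d", pid);
        mkdir(dir, 0755);

        /* move the task into its own memcg */
        snprintf(path, sizeof(path), "%s/cgroup.procs", dir);
        snprintf(buf, sizeof(buf), "%d", pid);
        write_str(path, buf);

        /* prefer swapping anon pages when the limit bites */
        snprintf(path, sizeof(path), "%s/memory.swappiness", dir);
        write_str(path, "100");

        /* a limit below current usage forces reclaim down to it */
        snprintf(path, sizeof(path), "%s/memory.limit_in_bytes", dir);
        snprintf(buf, sizeof(buf), "%lld", limit_bytes);
        write_str(path, buf);
}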

Thanks,
Vinayak

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

Thread overview: 19+ messages
2016-06-13  7:50 [PATCH v1 0/3] per-process reclaim Minchan Kim
2016-06-13  7:50 ` [PATCH v1 1/3] mm: vmscan: refactoring force_reclaim Minchan Kim
2016-06-13  7:50 ` [PATCH v1 2/3] mm: vmscan: shrink_page_list with multiple zones Minchan Kim
2016-06-13  7:50 ` [PATCH v1 3/3] mm: per-process reclaim Minchan Kim
2016-06-13 15:06   ` Johannes Weiner
2016-06-15  0:40     ` Minchan Kim
2016-06-16 11:07       ` Michal Hocko
2016-06-16 14:41       ` Johannes Weiner
2016-06-17  6:43         ` Minchan Kim
2016-06-17  7:24     ` Balbir Singh
2016-06-17  7:57       ` Vinayak Menon
2016-06-13 17:06   ` Rik van Riel
2016-06-15  1:01     ` Minchan Kim
2016-06-13 11:50 ` [PATCH v1 0/3] " Chen Feng
2016-06-13 12:22   ` ZhaoJunmin Zhao(Junmin)
2016-06-15  0:43   ` Minchan Kim
2016-06-13 13:29 ` Vinayak Menon [this message]
2016-06-15  0:57   ` Minchan Kim
2016-06-16  4:21     ` Vinayak Menon
