linux-mm.kvack.org archive mirror
From: Balbir Singh <bsingharora@gmail.com>
To: Johannes Weiner <hannes@cmpxchg.org>, Zhao Hui Ding <dingzhh@cn.ibm.com>
Cc: Tejun Heo <tj@kernel.org>, cgroups@vger.kernel.org, linux-mm@kvack.org
Subject: Re: memory.force_empty is deprecated
Date: Thu, 17 Nov 2016 21:39:41 +1100	[thread overview]
Message-ID: <5b03def0-2dc4-842f-0d0e-53cc2d94936f@gmail.com> (raw)
In-Reply-To: <20161104152103.GC8825@cmpxchg.org>



On 05/11/16 02:21, Johannes Weiner wrote:
> Hi,
> 
> On Fri, Nov 04, 2016 at 04:24:25PM +0800, Zhao Hui Ding wrote:
>> Hello,
>>
>> I'm Zhaohui from the IBM Spectrum LSF development team. I got the message
>> below when running LSF on SUSE 11.4, so I would like to share our usage
>> scenario and ask for suggestions on how to do without memory.force_empty.
>>
>> memory.force_empty is deprecated and will be removed. Let us know if it is 
>> needed in your usecase at linux-mm@kvack.org
>>
>> LSF is a batch workload scheduler; it uses cgroups for batch job resource
>> enforcement and accounting. For each job, LSF creates a cgroup directory
>> and puts the job's PIDs into that cgroup.
>>
>> When we implemented the LSF cgroup integration, we found that creating a
>> new cgroup is much slower than renaming an existing one: hundreds of
>> milliseconds versus less than 10 milliseconds.
> 
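For reference, the per-job flow described above boils down to roughly the
following (a rough sketch against a v1 memory controller mounted at
/sys/fs/cgroup/memory; the parent directory and job naming here are
illustrative, not LSF's actual layout). The faster rename path mentioned
above would reuse a finished job's directory via rename() instead of mkdir().

  import os

  MEMCG_PARENT = "/sys/fs/cgroup/memory/lsf"   # assumed parent group, illustrative

  def attach_job(job_id, pids):
      job_dir = os.path.join(MEMCG_PARENT, "job_%s" % job_id)
      os.makedirs(job_dir, exist_ok=True)      # creating the group: the slow step reported above
      for pid in pids:
          # the kernel accepts one pid per write() to cgroup.procs
          with open(os.path.join(job_dir, "cgroup.procs"), "w") as f:
              f.write(str(pid))
      return job_dir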

We added force_empty a long time ago so that we could force-delete cgroups;
there was no definitive way of removing references to the cgroup from
page_cgroup otherwise.
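Roughly, the old teardown flow looked like this (a minimal sketch, v1 paths
assumed; the helper name is mine):

  import os

  def drain_and_remove(job_dir):
      # ask the kernel to reclaim/uncharge whatever is still charged to this group
      with open(os.path.join(job_dir, "memory.force_empty"), "w") as f:
          f.write("0")          # conventionally "0"; it is the write that matters
      os.rmdir(job_dir)         # rmdir can then succeed without lingering charges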

> Cgroup creation/deletion is not expected to be an ultra-hot path, but
> I'm surprised it takes longer than actually reclaiming leftover pages.
> 
> By the time the jobs conclude, how much is usually left in the group?
> 
> That said, is it even necessary to pro-actively remove the leftover
> cache from the group before starting the next job? Why not leave it
> for the next job to reclaim it lazily should memory pressure arise?
> It's easy to reclaim page cache, and the first to go as it's behind
> the next job's memory on the LRU list.

It might actually make sense to migrate all tasks out and check what the
leftovers look like -- they should be easy to reclaim. Also be mindful of
whether you are using v1 with use_hierarchy set.
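Something along these lines (a sketch only, v1 paths assumed; names are
illustrative, and with use_hierarchy the accounting also shows up in the
parent's total_* counters):

  import os

  def migrate_and_inspect(job_dir, parent_dir):
      # move every process out of the job's group into its parent
      with open(os.path.join(job_dir, "cgroup.procs")) as f:
          pids = [line.strip() for line in f if line.strip()]
      for pid in pids:
          with open(os.path.join(parent_dir, "cgroup.procs"), "w") as dst:
              dst.write(pid)    # one pid per write()
      # what remains charged is mostly unmapped page cache, cheap to reclaim lazily
      with open(os.path.join(job_dir, "memory.usage_in_bytes")) as f:
          usage = int(f.read())
      with open(os.path.join(job_dir, "memory.stat")) as f:
          stat = dict(line.split() for line in f)
      return usage, int(stat.get("cache", 0))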

Balbir Singh.


Thread overview: 8+ messages
2016-11-04  8:24 memory.force_empty is deprecated Zhao Hui Ding
2016-11-04 15:21 ` Johannes Weiner
2016-11-17 10:13   ` Zhao Hui Ding
2016-11-22 15:20     ` Michal Hocko
2016-11-17 10:39   ` Balbir Singh [this message]
2016-11-18  6:28     ` Zhao Hui Ding
2016-11-18 22:08       ` Balbir Singh
2016-11-22 15:21       ` Michal Hocko
