From: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Glauber Costa <glommer@parallels.com>
Cc: Michal Hocko <mhocko@suse.cz>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Andrew Morton <akpm@linux-foundation.org>,
Mel Gorman <mgorman@suse.de>,
Suleiman Souhlal <suleiman@google.com>, Tejun Heo <tj@kernel.org>,
cgroups@vger.kernel.org, Johannes Weiner <hannes@cmpxchg.org>,
Greg Thelen <gthelen@google.com>,
devel@openvz.org, Frederic Weisbecker <fweisbec@gmail.com>,
Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@cs.helsinki.fi>
Subject: Re: [PATCH v4 06/14] memcg: kmem controller infrastructure
Date: Tue, 16 Oct 2012 17:00:03 +0900
Message-ID: <507D1403.2070205@jp.fujitsu.com>
In-Reply-To: <5077DF20.7020200@parallels.com>
(2012/10/12 18:13), Glauber Costa wrote:
> On 10/12/2012 12:57 PM, Michal Hocko wrote:
>> On Fri 12-10-12 12:44:57, Glauber Costa wrote:
>>> On 10/12/2012 12:39 PM, Michal Hocko wrote:
>>>> On Fri 12-10-12 11:45:46, Glauber Costa wrote:
>>>>> On 10/11/2012 04:42 PM, Michal Hocko wrote:
>>>>>> On Mon 08-10-12 14:06:12, Glauber Costa wrote:
>>>> [...]
>>>>>>> + /*
>>>>>>> + * Conditions under which we can wait for the oom_killer.
>>>>>>> + * __GFP_NORETRY should be masked by __mem_cgroup_try_charge,
>>>>>>> + * but there is no harm in being explicit here
>>>>>>> + */
>>>>>>> + may_oom = (gfp & __GFP_WAIT) && !(gfp & __GFP_NORETRY);
>>>>>>
>>>>>> Well we _have to_ check __GFP_NORETRY here because if we don't then we
>>>>>> can end up in OOM. mem_cgroup_do_charge returns CHARGE_NOMEM for
>>>>>> __GFP_NORETRY (without doing any reclaim) and if oom==true we decrement
>>>>>> the oom retries counter and eventually hit the OOM killer. So the comment
>>>>>> is misleading.
>>>>>
>>>>> I will update. What I understood from your last message is that we don't
>>>>> really need to, because try_charge will do it.
>>>>
>>>> IIRC I just said it couldn't happen before because migration doesn't go
>>>> through charge and thp disables oom by default.
>>>>
>>>
>>> I had it changed to:
>>>
>>> /*
>>> * Conditions under which we can wait for the oom_killer.
>>> * We have to be able to wait, but also, if we can't retry,
>>> * we obviously shouldn't go mess with oom.
>>> */
>>> may_oom = (gfp & __GFP_WAIT) && !(gfp & __GFP_NORETRY);
>>
>> OK
>>
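For reference, the gate above evaluates as follows for the common masks
(illustration only, not part of the patch):

	may_oom = (gfp & __GFP_WAIT) && !(gfp & __GFP_NORETRY);

	/* GFP_KERNEL:  __GFP_WAIT set, __GFP_NORETRY clear -> may_oom == true  */
	/* GFP_ATOMIC:  __GFP_WAIT clear                     -> may_oom == false */
	/* GFP_KERNEL | __GFP_NORETRY: __GFP_NORETRY set     -> may_oom == false,
	 * so the charge may fail, but never enters the oom path.               */
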
>>>
>>>>>>> +
>>>>>>> + _memcg = memcg;
>>>>>>> + ret = __mem_cgroup_try_charge(NULL, gfp, size >> PAGE_SHIFT,
>>>>>>> + &_memcg, may_oom);
>>>>>>> +
>>>>>>> + if (!ret) {
>>>>>>> + ret = res_counter_charge(&memcg->kmem, size, &fail_res);
>>>>>>
>>>>>> Now that I'm thinking about the charging ordering, we should charge the
>>>>>> kmem first because we would like to hit the kmem limit before we hit the
>>>>>> u+k limit, don't we?
>>>>>> Say that you have a kmem limit of 10M and a total limit of 50M. Current `u'
>>>>>> would be 40M and this charge would cause kmem to hit the `k' limit. I
>>>>>> think we should fail to charge kmem before we go to u+k and potentially
>>>>>> reclaim/oom.
>>>>>> Or has this been already discussed and I just do not remember?
>>>>>>
>>>>> This has never been discussed as far as I remember. We have charged u first
>>>>> since day 0, and you are so far the first one to raise it...
>>>>>
>>>>> One of the things in favor of charging 'u' first is that
>>>>> mem_cgroup_try_charge is already equipped to make a lot of decisions,
>>>>> like when to allow reclaim and when to bypass charges, and it would be
>>>>> good if we could reuse all that.
>>>>
>>>> Hmm, I think that we should avoid those decisions if the kmem charge
>>>> would fail anyway (especially now when we do not have targeted slab
>>>> reclaim).
>>>>
>>>
>>> Let's revisit this discussion when we do have targeted reclaim. For now,
>>> I'll agree that charging kmem first would be acceptable.
>>>
>>> This will only make a difference when K < U anyway.
>>
>> Yes and it should work as advertised (aka hit the k limit first).
>>
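To make the ordering concrete with the numbers above (kmem limit `k' = 10M,
total limit = 50M, current `u' = 40M), charging kmem first now behaves like
this (illustrative walk-through of the sequence, not new code):

	ret = res_counter_charge(&memcg->kmem, size, &fail_res);
	/* this charge pushes kmem past the 10M `k' limit -> returns -ENOMEM */
	if (ret)
		return ret;
	/* the 50M u+k counter is never touched, so no reclaim or oom is
	 * attempted for a charge that could not have succeeded anyway */
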
> Just so we don't ping-pong in another submission:
>
> I changed memcontrol.h's memcg_kmem_newpage_charge to include:
>
> /* If the task is dying, just let it go. */
> if (unlikely(test_thread_flag(TIF_MEMDIE)
> || fatal_signal_pending(current)))
> return true;
>
>
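A rough sketch of where that early return would sit in the memcontrol.h
wrapper, so a dying task's allocation simply proceeds without being charged
(only the dying-task check itself is from the patch; the surrounding
structure and the memcg_kmem_enabled() / __memcg_kmem_newpage_charge() names
are assumptions based on the rest of the series):

static inline bool
memcg_kmem_newpage_charge(gfp_t gfp, struct mem_cgroup **memcg, int order)
{
	/* assumed static-branch guard: kmem accounting compiled in but unused */
	if (!memcg_kmem_enabled())
		return true;

	/* If the task is dying, just let it go: bypass the charge entirely. */
	if (unlikely(test_thread_flag(TIF_MEMDIE)
		     || fatal_signal_pending(current)))
		return true;

	/* assumed slow path that does the actual res_counter charging */
	return __memcg_kmem_newpage_charge(gfp, memcg, order);
}
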
> I'm also attaching the proposed code in memcontrol.c
>
> +static int memcg_charge_kmem(struct mem_cgroup *memcg, gfp_t gfp, u64 size)
> +{
> + struct res_counter *fail_res;
> + struct mem_cgroup *_memcg;
> + int ret = 0;
> + bool may_oom;
> +
> + ret = res_counter_charge(&memcg->kmem, size, &fail_res);
> + if (ret)
> + return ret;
> +
> + /*
> + * Conditions under which we can wait for the oom_killer.
> + * We have to be able to wait, but also, if we can't retry,
> + * we obviously shouldn't go mess with oom.
> + */
> + may_oom = (gfp & __GFP_WAIT) && !(gfp & __GFP_NORETRY);
> +
> + _memcg = memcg;
> + ret = __mem_cgroup_try_charge(NULL, gfp, size >> PAGE_SHIFT,
> + &_memcg, may_oom);
> +
> + if (ret == -EINTR) {
> + /*
> + * __mem_cgroup_try_charge() chose to bypass to root due to
> + * OOM kill or fatal signal. Since our only options are to
> + * either fail the allocation or charge it to this cgroup,
> + * charge it here as a temporary condition: we can't fail,
> + * because from a kmem/slab perspective the cache has already
> + * been selected by mem_cgroup_get_kmem_cache(), so it is too
> + * late to change our minds. This condition will only trigger
> + * if the task entered memcg_charge_kmem in a sane state but
> + * was OOM-killed during __mem_cgroup_try_charge(). Tasks that
> + * are already dying when the allocation triggers should have
> + * been directed to the root cgroup already.
> + */
> + res_counter_charge_nofail(&memcg->res, size, &fail_res);
> + if (do_swap_account)
> + res_counter_charge_nofail(&memcg->memsw, size,
> + &fail_res);
> + ret = 0;
> + } else if (ret)
> + res_counter_uncharge(&memcg->kmem, size);
> +
> + return ret;
> +}
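For completeness, the uncharge side would presumably just mirror this and
release all three counters that a successful charge (or the charge_nofail
fallback) took; a minimal sketch, assuming a memcg_uncharge_kmem() helper:

static void memcg_uncharge_kmem(struct mem_cgroup *memcg, u64 size)
{
	/* undo the u (and, with swap accounting, memsw) part */
	res_counter_uncharge(&memcg->res, size);
	if (do_swap_account)
		res_counter_uncharge(&memcg->memsw, size);
	/* and finally the kmem-only counter that was charged first above */
	res_counter_uncharge(&memcg->kmem, size);
}
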
Seems OK to me, but we'll need a patch to hide the usage > limit situation
from users.
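
Purely as an illustration of what such a patch could do (hypothetical helper,
not a concrete proposal): since the overrun comes from
res_counter_charge_nofail() and is meant to be temporary, one option is to
clamp the value reported to user space:

static u64 memcg_usage_clamped(struct res_counter *counter)
{
	u64 usage = res_counter_read_u64(counter, RES_USAGE);
	u64 limit = res_counter_read_u64(counter, RES_LIMIT);

	/* never let a temporary charge_nofail overrun show usage > limit */
	return min(usage, limit);
}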
Thanks,
-Kame