From: "azurIt" <azurit@pobox.sk>
To: "Johannes Weiner" <hannes@cmpxchg.org>
Cc: "Michal Hocko" <mhocko@suse.cz>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	"David Rientjes" <rientjes@google.com>,
	"KAMEZAWA Hiroyuki" <kamezawa.hiroyu@jp.fujitsu.com>,
	"KOSAKI Motohiro" <kosaki.motohiro@jp.fujitsu.com>,
	linux-mm@kvack.org, cgroups@vger.kernel.org, x86@kernel.org,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 0/7] improve memcg oom killer robustness v2
Date: Wed, 09 Oct 2013 20:44:50 +0200
Message-ID: <20131009204450.6AB97915@pobox.sk>
In-Reply-To: <20131007192336.GU856@cmpxchg.org>

>Hi azur,
>
>On Mon, Oct 07, 2013 at 01:01:49PM +0200, azurIt wrote:
>> >On Thu, Sep 26, 2013 at 06:54:59PM +0200, azurIt wrote:
>> >> On Wed, Sep 18, 2013 at 02:19:46PM -0400, Johannes Weiner wrote:
>> >> >Here is an update.  Full replacement on top of 3.2 since we tried a
>> >> >dead end and it would be more painful to revert individual changes.
>> >> >
>> >> >The first bug you had was the same task entering OOM repeatedly and
>> >> >leaking the memcg reference, thus creating undeletable memcgs.  My
>> >> >fixup added a condition that if the task already set up an OOM context
>> >> >in that fault, another charge attempt would immediately return -ENOMEM
>> >> >without even trying reclaim anymore.  This dropped __getblk() into an
>> >> >endless loop of waking the flushers, performing global reclaim, and
>> >> >having memcg return -ENOMEM regardless of free memory.
>> >> >
>> >> >The update now basically only changes this -ENOMEM to bypass, so that
>> >> >the memory is not accounted and the limit ignored.  OOM killed tasks
>> >> >are granted the same right, so that they can exit quickly and release
>> >> >memory.  Likewise, we want a task that hit the OOM condition also to
>> >> >finish the fault quickly so that it can invoke the OOM killer.
>> >> >
>> >> >Does the following work for you, azur?
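
[For reference, the charge-path change described above amounts to roughly the
following.  This is a simplified, self-contained sketch with made-up names,
not the actual patch:]

/*
 * Simplified sketch of the charge-path behaviour described above.
 * All names are illustrative stand-ins, not kernel identifiers.
 */
#include <stdbool.h>

enum charge_result {
	CHARGE_OK,	/* charged against the group as usual */
	CHARGE_NOMEM,	/* fail the charge with -ENOMEM */
	CHARGE_BYPASS,	/* don't account the memory, ignore the limit */
};

struct task_info {
	bool in_memcg_oom;	/* already set up an OOM context in this fault */
	bool oom_killed;	/* selected by the OOM killer */
};

static enum charge_result try_charge(const struct task_info *task,
				     bool over_limit, bool reclaim_progress)
{
	if (!over_limit || reclaim_progress)
		return CHARGE_OK;	/* retry logic omitted for brevity */

	/*
	 * Previously: a task that had already set up an OOM context in this
	 * fault got an immediate CHARGE_NOMEM here, which sent __getblk()
	 * into an endless flush/reclaim/-ENOMEM loop.  Now such tasks, and
	 * OOM-killed tasks, bypass the limit so the fault can finish quickly
	 * and the task can exit or invoke the OOM killer.
	 */
	if (task->in_memcg_oom || task->oom_killed)
		return CHARGE_BYPASS;

	return CHARGE_NOMEM;
}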
>> >> 
>> >> 
>> >> Johannes,
>> >> 
>> >> bad news everyone! :(
>> >> 
>> >> Unfortunately, two different problems appeared today:
>> >> 
>> >> 1.) This looks like my very original problem - stuck processes inside one cgroup. I took stacks from all of them over time, but the server was very slow so I had to kill them soon:
>> >> http://watchdog.sk/lkmlmemcg-bug-9.tar.gz
>> >> 
>> >> 2.) This was just like my last problem, where a few processes were doing huge i/o. As the server was almost inoperable, I barely managed to kill them, so no more info here, sorry.
>> >
>> >From one of the tasks:
>> >
>> >1380213238/11210/stack:[<ffffffff810528f1>] sys_sched_yield+0x41/0x70
>> >1380213238/11210/stack:[<ffffffff81148ef1>] free_more_memory+0x21/0x60
>> >1380213238/11210/stack:[<ffffffff8114957d>] __getblk+0x14d/0x2c0
>> >1380213238/11210/stack:[<ffffffff81198a2b>] ext3_getblk+0xeb/0x240
>> >1380213238/11210/stack:[<ffffffff8119d2df>] ext3_find_entry+0x13f/0x480
>> >1380213238/11210/stack:[<ffffffff8119dd6d>] ext3_lookup+0x4d/0x120
>> >1380213238/11210/stack:[<ffffffff81122a55>] d_alloc_and_lookup+0x45/0x90
>> >1380213238/11210/stack:[<ffffffff81122ff8>] do_lookup+0x278/0x390
>> >1380213238/11210/stack:[<ffffffff81124c40>] path_lookupat+0x120/0x800
>> >1380213238/11210/stack:[<ffffffff81125355>] do_path_lookup+0x35/0xd0
>> >1380213238/11210/stack:[<ffffffff811254d9>] user_path_at_empty+0x59/0xb0
>> >1380213238/11210/stack:[<ffffffff81125541>] user_path_at+0x11/0x20
>> >1380213238/11210/stack:[<ffffffff81115b70>] sys_faccessat+0xd0/0x200
>> >1380213238/11210/stack:[<ffffffff81115cb8>] sys_access+0x18/0x20
>> >1380213238/11210/stack:[<ffffffff815ccc26>] system_call_fastpath+0x18/0x1d
>> >
>> >Should have seen this coming... it's still in that braindead
>> >__getblk() loop, only from a syscall this time (no OOM path).  The
>> >group's memory.stat looks like this:
>> >
>> >cache 0
>> >rss 0
>> >mapped_file 0
>> >pgpgin 0
>> >pgpgout 0
>> >swap 0
>> >pgfault 0
>> >pgmajfault 0
>> >inactive_anon 0
>> >active_anon 0
>> >inactive_file 0
>> >active_file 0
>> >unevictable 0
>> >hierarchical_memory_limit 209715200
>> >hierarchical_memsw_limit 209715200
>> >total_cache 0
>> >total_rss 209715200
>> >total_mapped_file 0
>> >total_pgpgin 1028153297
>> >total_pgpgout 1028102097
>> >total_swap 0
>> >total_pgfault 1352903120
>> >total_pgmajfault 45342
>> >total_inactive_anon 0
>> >total_active_anon 209715200
>> >total_inactive_file 0
>> >total_active_file 0
>> >total_unevictable 0
>> >
>> >with anonymous pages filling the group up to its limit (total_rss equals
>> >the hierarchical_memory_limit of 209715200 bytes, i.e. 200 MiB), and you
>> >probably don't have any swap space enabled, so reclaim cannot free
>> >anything in the group.
>> >
>> >I guess there is no way around annotating that __getblk() loop.  The
>> >best solution right now is probably to use __GFP_NOFAIL.  For one, we
>> >can let the allocation bypass the memcg limit if reclaim can't make
>> >progress.  But also, the loop is then actually happening inside the
>> >page allocator, where it should happen, and not around ad-hoc direct
>> >reclaim in buffer.c.
>> >
>> >Can you try this on top of our ever-growing stack of patches?
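
[Conceptually, the __GFP_NOFAIL direction described above looks something like
the sketch below.  It is a simplified, self-contained illustration with
stand-in names; the real change touches fs/buffer.c, the page allocator and
the memcg charge path:]

/*
 * Simplified sketch: move the "never give up" retry loop out of the
 * __getblk()-style caller and into the allocator via a no-fail flag,
 * and let no-fail charges bypass the memcg limit when reclaim cannot
 * make progress.  Stand-in names, not kernel code.
 */
#include <stdbool.h>
#include <stddef.h>

#define ALLOC_NOFAIL	0x1u

static bool memcg_over_limit = true;	/* group is at its hard limit */

static bool memcg_reclaim_progress(void)
{
	/* Only unswappable anonymous memory left: reclaim cannot help. */
	return false;
}

/* Charge against the memcg; no-fail charges may bypass the limit. */
static bool memcg_charge(unsigned int flags)
{
	if (!memcg_over_limit || memcg_reclaim_progress())
		return true;
	/* Reclaim makes no progress: let no-fail charges bypass the limit. */
	return (flags & ALLOC_NOFAIL) != 0;
}

/* The allocator, not buffer.c, owns the retry loop for no-fail requests. */
static void *alloc_page_nofailable(unsigned int flags)
{
	static char page[4096];

	do {
		if (memcg_charge(flags))
			return page;
		/* would wake the flushers / run reclaim here before retrying */
	} while (flags & ALLOC_NOFAIL);

	return NULL;
}

/* __getblk()-style caller: no more ad-hoc free_more_memory() loop. */
static void *get_block_page(void)
{
	return alloc_page_nofailable(ALLOC_NOFAIL);
}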
>> 
>> 
>> 
>> 
>> Johannes,
>> 
>> looks like the problem is completely resolved :) Thank you, Michal
>> Hocko, and everyone involved for your help and time.
>
>Thanks a lot for your patience.  I will send out the fixes for 3.12.
>
>> One more thing: I see that your patches are going into 3.12. Is
>> there a chance to get them also into 3.2? Is Ben Hutchings (current
>> maintainer of the 3.2 branch) the right person to decide this? Should I
>> contact him directly? I can't upgrade to 3.12 because stable grsecurity
>> is for 3.2 and I don't think this will change in the near future.
>
>Yes, I'll send them to stable.  The original OOM killer rework was not
>tagged for stable, but since we have a known deadlock problem, I think
>it makes sense to include them after all.



Johannes,

I'm very sorry to say it, but today something strange happened.. :) I was right at the computer, so I noticed it almost immediately, but I don't have much info. The server stopped responding on the network, but I was already logged in over ssh, which was working quite fine (only a little slow). I was able to run commands in the shell, but I didn't do much because I was afraid it would go down for good soon. I noticed a few things:
 - htop looked strange because all CPUs were doing nothing (totally nothing)
 - there was enough free memory
 - server load was about 90 and rising slowly
 - I didn't see ANY process in the 'run' state
 - I also didn't see any process behaving strangely (using a lot of CPU, memory or so), so it wasn't obvious what to do to fix it
 - I started killing Apache processes; every time I killed some, the CPUs did some work, but it didn't fix the problem
 - finally I did 'skill -kill apache2' in the shell and everything started to work
 - server monitoring wasn't sending any data, so I have no graphs
 - nothing interesting in the logs

I will send more info when I get some.

azur

