* Re: How to handle TIF_MEMDIE stalls? [not found] ` <20150218084842.GB4478@dhcp22.suse.cz> @ 2015-02-18 11:23 ` Tetsuo Handa 2015-02-18 12:29 ` Michal Hocko 0 siblings, 1 reply; 14+ messages in thread From: Tetsuo Handa @ 2015-02-18 11:23 UTC (permalink / raw) To: mhocko Cc: david, hannes, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, linux-fsdevel, fernando_b1 [ cc fsdevel list - watch out for side effect of 9879de7373fc (mm: page_alloc: embed OOM killing naturally into allocation slowpath) which was merged between 3.19-rc6 and 3.19-rc7 , started from http://marc.info/?l=linux-mm&m=142348457310066&w=2 ] Replying in this post picked up from several posts in this thread. Michal Hocko wrote: > Besides that __GFP_WAIT callers should be prepared for the allocation > failure and should better cope with it. So no, I really hate something > like the above. Those who do not want to retry with invoking the OOM killer are using __GFP_WAIT + __GFP_NORETRY allocations. Those who want to retry with invoking the OOM killer are using __GFP_WAIT allocations. Those who must retry forever with invoking the OOM killer, no matter how many processes the OOM killer kills, are using __GFP_WAIT + __GFP_NOFAIL allocations. However, since use of __GFP_NOFAIL is prohibited, I think many of __GFP_WAIT users are expecting that the allocation fails only when "the OOM killer set TIF_MEMDIE flag to the caller but the caller failed to allocate from memory reserves". Also, the implementation before 9879de7373fc (mm: page_alloc: embed OOM killing naturally into allocation slowpath) effectively supported __GFP_WAIT users with such expectation. Michal Hocko wrote: > Because they cannot perform any IO/FS transactions and that would lead > to a premature OOM conditions way too easily. OOM killer is a _last > resort_ reclaim opportunity not something that would happen just because > you happen to be not able to flush dirty pages. 
But you should not have applied such change without making necessary
changes to GFP_NOFS / GFP_NOIO users with such expectation and testing
at linux-next.git . Applying such change after 3.19-rc6 is a sucker punch.

Michal Hocko wrote:
> Well, you are beating your machine to death so you can hardly get any
> time guarantee. It would be nice to have a better feedback mechanism to
> know when to back off and fail the allocation attempt which might be
> blocking OOM victim to pass away. This is extremely tricky because we
> shouldn't be too eager to fail just because of a sudden memory pressure.

Michal Hocko wrote:
> > I wish only somebody like kswapd repeats the loop on behalf of all
> > threads waiting at memory allocation slowpath...
>
> This is the case when the kswapd is _able_ to cope with the memory
> pressure.

It looks wasteful for me that so many threads (greater than number of
available CPUs) are sleeping at cond_resched() in shrink_slab() when
checking SysRq-t. Imagine 1000 threads sleeping at cond_resched() in
shrink_slab() on a machine with only 1 CPU. Each thread gets a chance
to try calling reclaim function only when all other threads gave that
thread a chance at cond_resched(). Such situation is almost mutually
preventing from making progress. I wish the following mechanism.

Prepare a kernel thread (so that it cannot be OOM-killed) and let
__GFP_WAIT and __GFP_WAIT + __GFP_NOFAIL users wake up that kernel thread
when they fail to allocate from the free list. The kernel thread calls
shrink_slab() etc. (and also out_of_memory() as needed) and then wakes up
the users sleeping at wait_event(). Failing to allocate from the free
list is a rare case, so the context switches for asking somebody else to
reclaim memory would be an acceptable overhead. If such a mechanism were
implemented, the 1000 threads other than that somebody could save CPU
time by sleeping.
Avoiding the "almost mutually preventing from making progress" situation
would drastically shorten the stalls even if I beat my machine to death.
Such a mechanism might be similar to Dave Chinner's suggestion:

  Make the OOM killer only be invoked by kswapd or some other independent
  kernel thread so that it is independent of the allocation context that
  needs to invoke it, and have the invoker wait to be told what to do.

Dave Chinner wrote:
> Filesystems do demand paging of metadata within transactions, which
> means we are guaranteed to be holding locks when doing memory
> allocation. Indeed, this is what the GFP_NOFS allocation context is
> supposed to convey - we currently *hold locks* and so reclaim needs
> to be careful about recursion. I'll also argue that it means the OOM
> killer cannot kill the process attempting memory allocation for the
> same reason.

I agree with Dave Chinner about this.

I tested on an ext4 filesystem: one kernel is stock Linux 3.19, and the
other is Linux 3.19 with

-	/* The OOM killer does not compensate for light reclaim */
-	if (!(gfp_mask & __GFP_FS))
-		goto out;

applied (i.e. with the check removed). I ran the Java-like stressing
program shown below (which is multi-threaded and likely to be chosen by
the OOM killer due to its huge memory usage) with the ext4 filesystem set
to remount read-only upon filesystem errors.
# mount -o remount,errors=remount-ro /

---------- Testing program start ----------
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sched.h>

static int file_writer(void *unused)
{
	char buffer[128] = { };
	int fd;
	snprintf(buffer, sizeof(buffer) - 1, "/tmp/file.%u", getpid());
	fd = open(buffer, O_WRONLY | O_CREAT, 0600);
	unlink(buffer);
	while (write(fd, buffer, 1) == 1 && fsync(fd) == 0);
	return 0;
}

static void memory_consumer(void)
{
	const int fd = open("/dev/zero", O_RDONLY);
	unsigned long size;
	char *buf = NULL;
	for (size = 1048576; size < 512UL * (1 << 30); size <<= 1) {
		char *cp = realloc(buf, size);
		if (!cp) {
			size >>= 1;
			break;
		}
		buf = cp;
	}
	read(fd, buf, size); /* Will cause OOM due to overcommit */
}

int main(int argc, char *argv[])
{
	int i;
	for (i = 0; i < 100; i++) {
		char *cp = malloc(4 * 1024);
		if (!cp ||
		    clone(file_writer, cp + 4 * 1024,
			  CLONE_SIGHAND | CLONE_VM, NULL) == -1)
			break;
	}
	memory_consumer();
	while (1)
		pause();
	return 0;
}
---------- Testing program end ----------

The former showed that the ext4 filesystem is remounted read-only due to
filesystem errors with 50%+ reproducibility.

----------
[   72.440013] do_get_write_access: OOM for frozen_buffer
[   72.440014] EXT4-fs: ext4_reserve_inode_write:4729: aborting transaction: Out of memory in __ext4_journal_get_write_access
[   72.440015] EXT4-fs error (device sda1) in ext4_reserve_inode_write:4735: Out of memory
(...snipped....)
[   72.495559] do_get_write_access: OOM for frozen_buffer
[   72.495560] EXT4-fs: ext4_reserve_inode_write:4729: aborting transaction: Out of memory in __ext4_journal_get_write_access
[   72.496839] do_get_write_access: OOM for frozen_buffer
[   72.496841] EXT4-fs: ext4_reserve_inode_write:4729: aborting transaction: Out of memory in __ext4_journal_get_write_access
[   72.505766] Aborting journal on device sda1-8.
[   72.505851] EXT4-fs (sda1): Remounting filesystem read-only
[   72.505853] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   72.507995] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   72.508773] EXT4-fs (sda1): Remounting filesystem read-only
[   72.508775] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   72.547799] do_get_write_access: OOM for frozen_buffer
[   72.706692] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   73.035416] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   73.291732] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   73.422171] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   73.511862] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   73.589174] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
[   73.665302] EXT4-fs warning (device sda1): ext4_evict_inode:260: couldn't mark inode dirty (err -12)
----------

On the other hand, the latter showed that the ext4 filesystem was never
remounted read-only because filesystem errors did not occur, though
several TIF_MEMDIE stalls which the timeout patch would handle were
observed, as with the former.

As this is an ext4 filesystem, these allocations would use GFP_NOFS. But
does using GFP_NOFS + __GFP_NOFAIL in the ext4 filesystem solve the
problem? I don't think so. The underlying block layer which the ext4
filesystem calls would use GFP_NOIO, and memory allocation failures at
the block layer will result in I/O errors which users observe as
filesystem errors. Does passing __GFP_NOFAIL down to the block layer
solve the problem? I don't think so.
There is no means to teach the block layer that the filesystem layer is
doing critical operations whose failure results in serious problems.
Then, does using GFP_NOIO + __GFP_NOFAIL at the block layer solve the
problem? I don't think so. It is nothing but bypassing the

	/* The OOM killer does not compensate for light reclaim */
	if (!(gfp_mask & __GFP_FS))
		goto out;

check by passing the __GFP_NOFAIL flag.

Michal Hocko wrote:
> Failing __GFP_WAIT allocation is perfectly fine IMO. Why do you think
> this is a problem?

Killing a user space process or taking filesystem error actions (e.g.
remount-ro or kernel panic), which choice is less painful for users?
I believe that !(gfp_mask & __GFP_FS) check is a bug and should be removed.

Rather, shouldn't allocations without __GFP_FS get more chance to succeed
than allocations with __GFP_FS? If I were the author, I might have added
below check instead.

	/* This is not a critical allocation. Don't invoke the OOM killer. */
	if (gfp_mask & __GFP_FS)
		goto out;

Falling into retry loop with same watermark might prevent rescuer threads from
doing memory allocation which is needed for making free memory. Maybe we should
use lower watermark for GFP_NOIO and below, middle watermark for GFP_NOFS, high
watermark for GFP_KERNEL and above.

^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls? 2015-02-18 11:23 ` How to handle TIF_MEMDIE stalls? Tetsuo Handa @ 2015-02-18 12:29 ` Michal Hocko 2015-02-18 14:06 ` Tetsuo Handa 0 siblings, 1 reply; 14+ messages in thread From: Michal Hocko @ 2015-02-18 12:29 UTC (permalink / raw) To: Tetsuo Handa Cc: david, hannes, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, linux-fsdevel, fernando_b1 On Wed 18-02-15 20:23:19, Tetsuo Handa wrote: > [ cc fsdevel list - watch out for side effect of 9879de7373fc (mm: page_alloc: > embed OOM killing naturally into allocation slowpath) which was merged between > 3.19-rc6 and 3.19-rc7 , started from > http://marc.info/?l=linux-mm&m=142348457310066&w=2 ] > > Replying in this post picked up from several posts in this thread. > > Michal Hocko wrote: > > Besides that __GFP_WAIT callers should be prepared for the allocation > > failure and should better cope with it. So no, I really hate something > > like the above. > > Those who do not want to retry with invoking the OOM killer are using > __GFP_WAIT + __GFP_NORETRY allocations. > > Those who want to retry with invoking the OOM killer are using > __GFP_WAIT allocations. > > Those who must retry forever with invoking the OOM killer, no matter how > many processes the OOM killer kills, are using __GFP_WAIT + __GFP_NOFAIL > allocations. > > However, since use of __GFP_NOFAIL is prohibited, IT IS NOT PROHIBITED. It is highly discouraged because GFP_NOFAIL is a strong requirement and the caller should be really aware of the consequences. Especially when the allocation is done under locked context. > I think many of > __GFP_WAIT users are expecting that the allocation fails only when > "the OOM killer set TIF_MEMDIE flag to the caller but the caller > failed to allocate from memory reserves". This is not what __GFP_WAIT is defined for. It says that the allocator might sleep. 
> Also, the implementation
> before 9879de7373fc (mm: page_alloc: embed OOM killing naturally
> into allocation slowpath) effectively supported __GFP_WAIT users
> with such expectation.

The same as GFP_KERNEL == GFP_NOFAIL for small allocations currently,
which causes a lot of troubles which were not anticipated at the time
this was introduced. And we _should_ move away from that model, because
GFP_NOFAIL should be really explicit rather than implicit.

> Michal Hocko wrote:
> > Because they cannot perform any IO/FS transactions and that would lead
> > to a premature OOM conditions way too easily. OOM killer is a _last
> > resort_ reclaim opportunity not something that would happen just because
> > you happen to be not able to flush dirty pages.
>
> But you should not have applied such change without making necessary
> changes to GFP_NOFS / GFP_NOIO users with such expectation and testing
> at linux-next.git . Applying such change after 3.19-rc6 is a sucker punch.

This is a nonsense. OOM was disabled for !__GFP_FS for ages (since
before git era).

> Michal Hocko wrote:
> > Well, you are beating your machine to death so you can hardly get any
> > time guarantee. It would be nice to have a better feedback mechanism to
> > know when to back off and fail the allocation attempt which might be
> > blocking OOM victim to pass away. This is extremely tricky because we
> > shouldn't be too eager to fail just because of a sudden memory pressure.
>
> Michal Hocko wrote:
> > > I wish only somebody like kswapd repeats the loop on behalf of all
> > > threads waiting at memory allocation slowpath...
> >
> > This is the case when the kswapd is _able_ to cope with the memory
> > pressure.
>
> It looks wasteful for me that so many threads (greater than number of
> available CPUs) are sleeping at cond_resched() in shrink_slab() when
> checking SysRq-t. Imagine 1000 threads sleeping at cond_resched() in
> shrink_slab() on a machine with only 1 CPU. Each thread gets a chance
> to try calling reclaim function only when all other threads gave that
> thread a chance at cond_resched(). Such situation is almost mutually
> preventing from making progress. I wish the following mechanism.

Feel free to send patches which do not break other loads...

[...]

> Michal Hocko wrote:
> > Failing __GFP_WAIT allocation is perfectly fine IMO. Why do you think
> > this is a problem?
>
> Killing a user space process or taking filesystem error actions (e.g.
> remount-ro or kernel panic), which choice is less painful for users?
> I believe that !(gfp_mask & __GFP_FS) check is a bug and should be removed.

A premature OOM kill just because the current allocator context doesn't
allow for real reclaim is even worse.

> Rather, shouldn't allocations without __GFP_FS get more chance to succeed
> than allocations with __GFP_FS? If I were the author, I might have added
> below check instead.
>
> /* This is not a critical allocation. Don't invoke the OOM killer. */
> if (gfp_mask & __GFP_FS)
>         goto out;

This doesn't make any sense whatsoever. Regular GFP_KERNEL|USER
allocations wouldn't invoke the OOM killer; this includes page faults
and basically most allocations.

> Falling into retry loop with same watermark might prevent rescuer threads from
> doing memory allocation which is needed for making free memory. Maybe we should
> use lower watermark for GFP_NOIO and below, middle watermark for GFP_NOFS, high
> watermark for GFP_KERNEL and above.

--
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls? 2015-02-18 12:29 ` Michal Hocko @ 2015-02-18 14:06 ` Tetsuo Handa 2015-02-18 14:25 ` Michal Hocko 0 siblings, 1 reply; 14+ messages in thread From: Tetsuo Handa @ 2015-02-18 14:06 UTC (permalink / raw) To: mhocko Cc: david, hannes, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, linux-fsdevel, fernando_b1 Michal Hocko wrote: > Tetsuo Handa wrote: > > Michal Hocko wrote: > > > Because they cannot perform any IO/FS transactions and that would lead > > > to a premature OOM conditions way too easily. OOM killer is a _last > > > resort_ reclaim opportunity not something that would happen just because > > > you happen to be not able to flush dirty pages. > > > > But you should not have applied such change without making necessary > > changes to GFP_NOFS / GFP_NOIO users with such expectation and testing > > at linux-next.git . Applying such change after 3.19-rc6 is a sucker punch. > > This is a nonsense. OOM was disbaled for !__GFP_FS for ages (since > before git era). > Then, at least I expect that filesystem error actions will not be taken so trivially. Can we apply http://marc.info/?l=linux-mm&m=142418465615672&w=2 for Linux 3.19-stable? ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls? 2015-02-18 14:06 ` Tetsuo Handa @ 2015-02-18 14:25 ` Michal Hocko 2015-02-19 10:48 ` Tetsuo Handa 0 siblings, 1 reply; 14+ messages in thread From: Michal Hocko @ 2015-02-18 14:25 UTC (permalink / raw) To: Tetsuo Handa Cc: david, hannes, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, linux-fsdevel, fernando_b1 On Wed 18-02-15 23:06:17, Tetsuo Handa wrote: > Michal Hocko wrote: > > Tetsuo Handa wrote: > > > Michal Hocko wrote: > > > > Because they cannot perform any IO/FS transactions and that would lead > > > > to a premature OOM conditions way too easily. OOM killer is a _last > > > > resort_ reclaim opportunity not something that would happen just because > > > > you happen to be not able to flush dirty pages. > > > > > > But you should not have applied such change without making necessary > > > changes to GFP_NOFS / GFP_NOIO users with such expectation and testing > > > at linux-next.git . Applying such change after 3.19-rc6 is a sucker punch. > > > > This is a nonsense. OOM was disbaled for !__GFP_FS for ages (since > > before git era). > > > Then, at least I expect that filesystem error actions will not be taken so > trivially. Can we apply http://marc.info/?l=linux-mm&m=142418465615672&w=2 for > Linux 3.19-stable? I do not understand. What kind of bug would be fixed by that change? -- Michal Hocko SUSE Labs -- To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ . Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a> ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls? 2015-02-18 14:25 ` Michal Hocko @ 2015-02-19 10:48 ` Tetsuo Handa 2015-02-20 8:26 ` Michal Hocko 0 siblings, 1 reply; 14+ messages in thread From: Tetsuo Handa @ 2015-02-19 10:48 UTC (permalink / raw) To: mhocko Cc: david, hannes, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, linux-fsdevel, fernando_b1 Michal Hocko wrote: > Tetsuo Handa wrote: > > Michal Hocko wrote: > > > Tetsuo Handa wrote: > > > > Michal Hocko wrote: > > > > > Because they cannot perform any IO/FS transactions and that would lead > > > > > to a premature OOM conditions way too easily. OOM killer is a _last > > > > > resort_ reclaim opportunity not something that would happen just because > > > > > you happen to be not able to flush dirty pages. > > > > > > > > But you should not have applied such change without making necessary > > > > changes to GFP_NOFS / GFP_NOIO users with such expectation and testing > > > > at linux-next.git . Applying such change after 3.19-rc6 is a sucker punch. > > > > > > This is a nonsense. OOM was disbaled for !__GFP_FS for ages (since > > > before git era). > > > > > Then, at least I expect that filesystem error actions will not be taken so > > trivially. Can we apply http://marc.info/?l=linux-mm&m=142418465615672&w=2 for > > Linux 3.19-stable? > > I do not understand. What kind of bug would be fixed by that change? That change fixes significant loss of file I/O reliability under extreme memory pressure. Today I tested how frequent filesystem errors occurs using scripted environment. 
( Source code of a.out is http://marc.info/?l=linux-fsdevel&m=142425860904849&w=2 )

----------
#!/bin/sh
: > ~/trial.log
for i in `seq 1 100`
do
    mkfs.ext4 -q /dev/sdb1 || exit 1
    mount -o errors=remount-ro /dev/sdb1 /tmp || exit 2
    chmod 1777 /tmp
    su - demo -c ~demo/a.out
    if [ -w /tmp/ ]
    then
        echo -n "S" >> ~/trial.log
    else
        echo -n "F" >> ~/trial.log
    fi
    umount /tmp
done
----------

We can see that filesystem errors are occurring frequently if GFP_NOFS /
GFP_NOIO allocations give up without retrying. On the other hand, as far
as these trials, TIF_MEMDIE stall was not observed if GFP_NOFS / GFP_NOIO
allocations give up without retrying. Maybe giving up without retrying is
keeping away from hitting stalls for this test case?

Linux 3.19-rc6 (Console log is http://I-love.SAKURA.ne.jp/tmp/serial-20150219-3.19-rc6.txt.xz )
0 filesystem errors out of 100 trials. 2 stalls.
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS

Linux 3.19 (Console log is http://I-love.SAKURA.ne.jp/tmp/serial-20150219-3.19.txt.xz )
44 filesystem errors out of 100 trials. 0 stalls.
SSFFSSSFSSSFSFFFFSSFSSFSSSSSSFFFSFSFFSSSSSSFFFFSFSSFFFSSSSFSSFFFFFSSSSSFSSFSFSSFSFFFSFFFFFFFSSSSSSSS

Linux 3.19 with http://marc.info/?l=linux-mm&m=142418465615672&w=2 applied.
(Console log is http://I-love.SAKURA.ne.jp/tmp/serial-20150219-3.19-patched.txt.xz )
0 filesystem errors out of 100 trials. 2 stalls.
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS

If the Linux 3.19 result is what you wanted, we should alert fs
developers for immediate action. (But the __GFP_NOFAIL discussion between
you and Dave is in progress. I don't know whether ext4 and the underlying
subsystems should start using __GFP_NOFAIL.)

P.S. Just for experimental purposes, Linux 3.19 with the change below
applied gave a better result than retrying GFP_NOFS / GFP_NOIO
allocations without invoking the OOM killer.
Short-lived small GFP_NOFS / GFP_NOIO allocations can use GFP_ATOMIC
instead? How many bytes does blk_rq_map_kern() want?

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2867,6 +2867,8 @@ __alloc_pages_nodemask(gfp_t gfp_mask, unsigned int order,
 	int classzone_idx;
 
 	gfp_mask &= gfp_allowed_mask;
+	if (gfp_mask == GFP_NOFS || gfp_mask == GFP_NOIO)
+		gfp_mask = GFP_ATOMIC;
 
 	lockdep_trace_alloc(gfp_mask);

0 filesystem errors out of 100 trials. 0 stalls.
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS

^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls? 2015-02-19 10:48 ` Tetsuo Handa @ 2015-02-20 8:26 ` Michal Hocko 0 siblings, 0 replies; 14+ messages in thread From: Michal Hocko @ 2015-02-20 8:26 UTC (permalink / raw) To: Tetsuo Handa Cc: david, hannes, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, linux-fsdevel, fernando_b1 On Thu 19-02-15 19:48:16, Tetsuo Handa wrote: > Michal Hocko wrote: [...] > > I do not understand. What kind of bug would be fixed by that change? > > That change fixes significant loss of file I/O reliability under extreme > memory pressure. > > Today I tested how frequent filesystem errors occurs using scripted environment. > ( Source code of a.out is http://marc.info/?l=linux-fsdevel&m=142425860904849&w=2 ) > > ---------- > #!/bin/sh > : > ~/trial.log > for i in `seq 1 100` > do > mkfs.ext4 -q /dev/sdb1 || exit 1 > mount -o errors=remount-ro /dev/sdb1 /tmp || exit 2 > chmod 1777 /tmp > su - demo -c ~demo/a.out > if [ -w /tmp/ ] > then > echo -n "S" >> ~/trial.log > else > echo -n "F" >> ~/trial.log > fi > umount /tmp > done > ---------- > > We can see that filesystem errors are occurring frequently if GFP_NOFS / GFP_NOIO > allocations give up without retrying. I would suggest reporting this to ext people (in a separate thread please) and see what is the proper fix. > On the other hand, as far as these trials, > TIF_MEMDIE stall was not observed if GFP_NOFS / GFP_NOIO allocations give up > without retrying. Maybe giving up without retrying is keeping away from hitting > stalls for this test case? This is expected because those allocations are with locks held and so the chances to release the lock are higher. [...] -- Michal Hocko SUSE Labs ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls? [not found] ` <20150219122914.GH28427@dhcp22.suse.cz> @ 2015-02-19 13:29 ` Tetsuo Handa 2015-02-20 9:10 ` Michal Hocko [not found] ` <20150219125844.GI28427@dhcp22.suse.cz> 1 sibling, 1 reply; 14+ messages in thread From: Tetsuo Handa @ 2015-02-19 13:29 UTC (permalink / raw) To: mhocko, hannes Cc: david, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, xfs, linux-fsdevel, fernando_b1 Michal Hocko wrote: > On Thu 19-02-15 06:01:24, Johannes Weiner wrote: > [...] > > Preferrably, we'd get rid of all nofail allocations and replace them > > with preallocated reserves. But this is not going to happen anytime > > soon, so what other option do we have than resolving this on the OOM > > killer side? > > As I've mentioned in other email, we might give GFP_NOFAIL allocator > access to memory reserves (by giving it __GFP_HIGH). This is still not a > 100% solution because reserves could get depleted but this risk is there > even with multiple oom victims. I would still argue that this would be a > better approach because selecting more victims might hit pathological > case more easily (other victims might be blocked on the very same lock > e.g.). > Does "multiple OOM victims" mean "select next if first does not die"? Then, I think my timeout patch http://marc.info/?l=linux-mm&m=142002495532320&w=2 does not deplete memory reserves. ;-) If we change to permit invocation of the OOM killer for GFP_NOFS / GFP_NOIO, those who do not want to fail (e.g. journal transaction) will start passing __GFP_NOFAIL? ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls? 2015-02-19 13:29 ` Tetsuo Handa @ 2015-02-20 9:10 ` Michal Hocko 2015-02-20 12:20 ` Tetsuo Handa 0 siblings, 1 reply; 14+ messages in thread From: Michal Hocko @ 2015-02-20 9:10 UTC (permalink / raw) To: Tetsuo Handa Cc: hannes, david, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, xfs, linux-fsdevel, fernando_b1 On Thu 19-02-15 22:29:37, Tetsuo Handa wrote: > Michal Hocko wrote: > > On Thu 19-02-15 06:01:24, Johannes Weiner wrote: > > [...] > > > Preferrably, we'd get rid of all nofail allocations and replace them > > > with preallocated reserves. But this is not going to happen anytime > > > soon, so what other option do we have than resolving this on the OOM > > > killer side? > > > > As I've mentioned in other email, we might give GFP_NOFAIL allocator > > access to memory reserves (by giving it __GFP_HIGH). This is still not a > > 100% solution because reserves could get depleted but this risk is there > > even with multiple oom victims. I would still argue that this would be a > > better approach because selecting more victims might hit pathological > > case more easily (other victims might be blocked on the very same lock > > e.g.). > > > Does "multiple OOM victims" mean "select next if first does not die"? > Then, I think my timeout patch http://marc.info/?l=linux-mm&m=142002495532320&w=2 > does not deplete memory reserves. ;-) It doesn't because --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -2603,9 +2603,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask) alloc_flags |= ALLOC_NO_WATERMARKS; else if (in_serving_softirq() && (current->flags & PF_MEMALLOC)) alloc_flags |= ALLOC_NO_WATERMARKS; - else if (!in_interrupt() && - ((current->flags & PF_MEMALLOC) || - unlikely(test_thread_flag(TIF_MEMDIE)))) + else if (!in_interrupt() && (current->flags & PF_MEMALLOC)) alloc_flags |= ALLOC_NO_WATERMARKS; you disabled the TIF_MEMDIE heuristic and use it only for OOM exclusion and break out from the allocator. 
Exiting task might need memory to do so, and you make all those
allocations fail basically. How do you know this is not going to blow up?

> If we change to permit invocation of the OOM killer for GFP_NOFS / GFP_NOIO,
> those who do not want to fail (e.g. journal transaction) will start passing
> __GFP_NOFAIL?

--
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls? 2015-02-20 9:10 ` Michal Hocko @ 2015-02-20 12:20 ` Tetsuo Handa 2015-02-20 12:38 ` Michal Hocko 0 siblings, 1 reply; 14+ messages in thread From: Tetsuo Handa @ 2015-02-20 12:20 UTC (permalink / raw) To: mhocko Cc: hannes, david, dchinner, linux-mm, rientjes, oleg, akpm, mgorman, torvalds, xfs, linux-fsdevel, fernando_b1 Michal Hocko wrote: > On Thu 19-02-15 22:29:37, Tetsuo Handa wrote: > > Michal Hocko wrote: > > > On Thu 19-02-15 06:01:24, Johannes Weiner wrote: > > > [...] > > > > Preferrably, we'd get rid of all nofail allocations and replace them > > > > with preallocated reserves. But this is not going to happen anytime > > > > soon, so what other option do we have than resolving this on the OOM > > > > killer side? > > > > > > As I've mentioned in other email, we might give GFP_NOFAIL allocator > > > access to memory reserves (by giving it __GFP_HIGH). This is still not a > > > 100% solution because reserves could get depleted but this risk is there > > > even with multiple oom victims. I would still argue that this would be a > > > better approach because selecting more victims might hit pathological > > > case more easily (other victims might be blocked on the very same lock > > > e.g.). > > > > > Does "multiple OOM victims" mean "select next if first does not die"? > > Then, I think my timeout patch http://marc.info/?l=linux-mm&m=142002495532320&w=2 > > does not deplete memory reserves. 
;-) > > It doesn't because > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -2603,9 +2603,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask) > alloc_flags |= ALLOC_NO_WATERMARKS; > else if (in_serving_softirq() && (current->flags & PF_MEMALLOC)) > alloc_flags |= ALLOC_NO_WATERMARKS; > - else if (!in_interrupt() && > - ((current->flags & PF_MEMALLOC) || > - unlikely(test_thread_flag(TIF_MEMDIE)))) > + else if (!in_interrupt() && (current->flags & PF_MEMALLOC)) > alloc_flags |= ALLOC_NO_WATERMARKS; > > you disabled the TIF_MEMDIE heuristic and use it only for OOM exclusion > and break out from the allocator. Exiting task might need a memory to do > so and you make all those allocations fail basically. How do you know > this is not going to blow up? > Well, treat exiting tasks to imply __GFP_NOFAIL for clean up? We cannot determine correct task to kill + allow access to memory reserves based on lock dependency. Therefore, this patch evenly allow no tasks to access to memory reserves. Exiting task might need some memory to exit, and not allowing access to memory reserves can retard exit of that task. But that task will eventually get memory released by other tasks killed by timeout-based kill-more mechanism. If no more killable tasks or expired panic-timeout, it is the same result with depletion of memory reserves. I think that this situation (automatically making forward progress as if the administrator is periodically doing SysRq-f until the OOM condition is solved, or is doing SysRq-c if no more killable tasks or stalled too long) is better than current situation (not making forward progress since the exiting task cannot exit due to lock dependency, caused by failing to determine correct task to kill + allow access to memory reserves). > > If we change to permit invocation of the OOM killer for GFP_NOFS / GFP_NOIO, > > those who do not want to fail (e.g. journal transaction) will start passing > > __GFP_NOFAIL? > > ^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls?
  2015-02-20 12:20                 ` Tetsuo Handa
@ 2015-02-20 12:38                   ` Michal Hocko
  0 siblings, 0 replies; 14+ messages in thread
From: Michal Hocko @ 2015-02-20 12:38 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: hannes, david, dchinner, linux-mm, rientjes, oleg, akpm, mgorman,
	torvalds, xfs, linux-fsdevel, fernando_b1

On Fri 20-02-15 21:20:58, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Thu 19-02-15 22:29:37, Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > On Thu 19-02-15 06:01:24, Johannes Weiner wrote:
> > > > [...]
> > > > > Preferrably, we'd get rid of all nofail allocations and replace them
> > > > > with preallocated reserves. But this is not going to happen anytime
> > > > > soon, so what other option do we have than resolving this on the OOM
> > > > > killer side?
> > > >
> > > > As I've mentioned in other email, we might give GFP_NOFAIL allocator
> > > > access to memory reserves (by giving it __GFP_HIGH). This is still not a
> > > > 100% solution because reserves could get depleted but this risk is there
> > > > even with multiple oom victims. I would still argue that this would be a
> > > > better approach because selecting more victims might hit pathological
> > > > case more easily (other victims might be blocked on the very same lock
> > > > e.g.).
> > > >
> > > Does "multiple OOM victims" mean "select next if first does not die"?
> > > Then, I think my timeout patch http://marc.info/?l=linux-mm&m=142002495532320&w=2
> > > does not deplete memory reserves. ;-)
> >
> > It doesn't because
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2603,9 +2603,7 @@ gfp_to_alloc_flags(gfp_t gfp_mask)
> >  		alloc_flags |= ALLOC_NO_WATERMARKS;
> >  	else if (in_serving_softirq() && (current->flags & PF_MEMALLOC))
> >  		alloc_flags |= ALLOC_NO_WATERMARKS;
> > -	else if (!in_interrupt() &&
> > -			((current->flags & PF_MEMALLOC) ||
> > -			 unlikely(test_thread_flag(TIF_MEMDIE))))
> > +	else if (!in_interrupt() && (current->flags & PF_MEMALLOC))
> >  		alloc_flags |= ALLOC_NO_WATERMARKS;
> >
> > you disabled the TIF_MEMDIE heuristic and use it only for OOM exclusion
> > and break out from the allocator. Exiting task might need a memory to do
> > so and you make all those allocations fail basically. How do you know
> > this is not going to blow up?
> >
> Well, treat exiting tasks to imply __GFP_NOFAIL for clean up?
>
> We cannot determine correct task to kill + allow access to memory reserves
> based on lock dependency. Therefore, this patch evenly allow no tasks to
> access to memory reserves.
>
> Exiting task might need some memory to exit, and not allowing access to
> memory reserves can retard exit of that task. But that task will eventually
> get memory released by other tasks killed by timeout-based kill-more
> mechanism. If no more killable tasks or expired panic-timeout, it is
> the same result with depletion of memory reserves.
>
> I think that this situation (automatically making forward progress as if
> the administrator is periodically doing SysRq-f until the OOM condition
> is solved, or is doing SysRq-c if no more killable tasks or stalled too
> long) is better than current situation (not making forward progress since
> the exiting task cannot exit due to lock dependency, caused by failing to
> determine correct task to kill + allow access to memory reserves).

If you really believe this is an improvement then send a proper patch with
justification. But I am _really_ skeptical about such a change to be honest.

--
Michal Hocko
SUSE Labs

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 14+ messages in thread
[parent not found: <20150219125844.GI28427@dhcp22.suse.cz>]
* Re: How to handle TIF_MEMDIE stalls?
  [not found]           ` <20150219125844.GI28427@dhcp22.suse.cz>
@ 2015-02-19 15:29             ` Tetsuo Handa
  2015-02-19 21:53               ` Tetsuo Handa
  2015-02-20  9:13               ` Michal Hocko
  0 siblings, 2 replies; 14+ messages in thread
From: Tetsuo Handa @ 2015-02-19 15:29 UTC (permalink / raw)
  To: mhocko, hannes
  Cc: david, dchinner, linux-mm, rientjes, oleg, akpm, mgorman,
	torvalds, xfs, linux-fsdevel, fernando_b1

Michal Hocko wrote:
> On Thu 19-02-15 13:29:14, Michal Hocko wrote:
> [...]
> > Something like the following.
> __GFP_HIGH doesn't seem to be sufficient so we would need something
> slightly else but the idea is still the same:
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8d52ab18fe0d..2d224bbdf8e8 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2599,6 +2599,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
>  	bool deferred_compaction = false;
>  	int contended_compaction = COMPACT_CONTENDED_NONE;
> +	int oom = 0;
>
>  	/*
>  	 * In the slowpath, we sanity check order to avoid ever trying to
> @@ -2635,6 +2636,15 @@ retry:
>  	alloc_flags = gfp_to_alloc_flags(gfp_mask);
>
>  	/*
> +	 * __GFP_NOFAIL allocations cannot fail but yet the current context
> +	 * might be blocking resources needed by the OOM victim to terminate.
> +	 * Allow the caller to dive into memory reserves to succeed the
> +	 * allocation and break out from a potential deadlock.
> +	 */

We don't know how many callers will pass __GFP_NOFAIL. But if 1000 threads
are doing the same operation, which requires a __GFP_NOFAIL allocation with
a lock held, wouldn't the memory reserves deplete?

This heuristic cannot continue if the memory reserves are depleted or
contiguous pages of the requested order cannot be found.

> +	if (oom > 10 && (gfp_mask & __GFP_NOFAIL))
> +		alloc_flags |= ALLOC_NO_WATERMARKS;
> +
>  	/*
>  	 * Find the true preferred zone if the allocation is unconstrained by
>  	 * cpusets.
>  	 */
> @@ -2759,6 +2769,8 @@ retry:
>  			goto got_pg;
>  		if (!did_some_progress)
>  			goto nopage;
> +
> +		oom++;
>  	}
>  	/* Wait for some write requests to complete then retry */
>  	wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread
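[Editorial note] The dynamics of the `oom` counter in the quoted patch can be sketched as a userspace loop. This is an illustration under stated assumptions, not the kernel implementation: the flag value is invented, and reclaim is modelled as never making progress (the pathological case the thread is discussing), so every pass through the slowpath bumps the counter.

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-in for the kernel's __GFP_NOFAIL bit. */
#define GFP_NOFAIL_SKETCH 0x1

/* Simulate the quoted retry loop: each pass that invokes the OOM killer
 * without making progress increments `oom`; once `oom` exceeds 10, a
 * __GFP_NOFAIL request is granted ALLOC_NO_WATERMARKS and may dip into
 * the memory reserves.  Returns the pass on which that happens. */
static int passes_until_reserves(int gfp_mask)
{
    int oom = 0;

    for (int pass = 1; pass < 1000; pass++) {
        /* Model the worst case: reclaim and the OOM kill free nothing. */
        bool did_some_progress = false;

        if (oom > 10 && (gfp_mask & GFP_NOFAIL_SKETCH))
            return pass;            /* reserves unlocked on this pass */

        if (!did_some_progress)
            oom++;
    }
    return -1;                      /* non-NOFAIL: would retry forever */
}
```

Under these assumptions the reserves open up on the 12th pass for a NOFAIL caller, while a plain __GFP_WAIT caller keeps looping, which is the asymmetry Tetsuo's depletion question is about.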
* Re: How to handle TIF_MEMDIE stalls?
  2015-02-19 15:29             ` Tetsuo Handa
@ 2015-02-19 21:53               ` Tetsuo Handa
  0 siblings, 0 replies; 14+ messages in thread
From: Tetsuo Handa @ 2015-02-19 21:53 UTC (permalink / raw)
  To: mhocko, hannes
  Cc: david, dchinner, linux-mm, rientjes, oleg, akpm, mgorman,
	torvalds, xfs, linux-fsdevel, fernando_b1

Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Thu 19-02-15 13:29:14, Michal Hocko wrote:
> > [...]
> > > Something like the following.
> > __GFP_HIGH doesn't seem to be sufficient so we would need something
> > slightly else but the idea is still the same:
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 8d52ab18fe0d..2d224bbdf8e8 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2599,6 +2599,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
> >  	bool deferred_compaction = false;
> >  	int contended_compaction = COMPACT_CONTENDED_NONE;
> > +	int oom = 0;
> >
> >  	/*
> >  	 * In the slowpath, we sanity check order to avoid ever trying to
> > @@ -2635,6 +2636,15 @@ retry:
> >  	alloc_flags = gfp_to_alloc_flags(gfp_mask);
> >
> >  	/*
> > +	 * __GFP_NOFAIL allocations cannot fail but yet the current context
> > +	 * might be blocking resources needed by the OOM victim to terminate.
> > +	 * Allow the caller to dive into memory reserves to succeed the
> > +	 * allocation and break out from a potential deadlock.
> > +	 */
>
> We don't know how many callers will pass __GFP_NOFAIL. But if 1000
> threads are doing the same operation which requires __GFP_NOFAIL
> allocation with a lock held, wouldn't memory reserves deplete?
>
> This heuristic can't continue if memory reserves depleted or
> continuous pages of requested order cannot be found.
>
Even if the system seems to be stalled, a deadlock may not have occurred.
If the cause is, for example, a virtio disk being stuck for an unknown
reason rather than a deadlock, nobody should start consuming the memory
reserves merely because some amount of time has passed.

The memory reserves are something like a balloon. To guarantee forward
progress, the balloon must never become empty. Therefore, I think that
throttling heuristics on the memory requester side (the deflator of the
balloon: the processes that received SIGKILL) should be avoided, and
throttling heuristics on the memory releaser side (the inflator of the
balloon: the OOM killer, which sends SIGKILL) should be used instead.

If the heuristic is on the deflator side, the memory allocator may deliver
a final blow to the reserves via ALLOC_NO_WATERMARKS. If the heuristic is
on the inflator side, the OOM killer can act as a watchdog when nobody has
volunteered memory within a reasonable period.

> > +	if (oom > 10 && (gfp_mask & __GFP_NOFAIL))
> > +		alloc_flags |= ALLOC_NO_WATERMARKS;
> > +
> >  	/*
> >  	 * Find the true preferred zone if the allocation is unconstrained by
> >  	 * cpusets.
> >  	 */
> > @@ -2759,6 +2769,8 @@ retry:
> >  			goto got_pg;
> >  		if (!did_some_progress)
> >  			goto nopage;
> > +
> > +		oom++;
> >  	}
> >  	/* Wait for some write requests to complete then retry */
> >  	wait_iff_congested(ac->preferred_zone, BLK_RW_ASYNC, HZ/50);
> > --
> > Michal Hocko
> > SUSE Labs

^ permalink raw reply	[flat|nested] 14+ messages in thread
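[Editorial note] The depletion concern behind the balloon analogy can be made concrete with a toy model. Everything here is invented for illustration (pool size, request size, thread count, helper names); it only demonstrates that a fixed reserve shared by many NOFAIL allocators holding a lock must eventually run dry, which is why Tetsuo argues for throttling on the releaser side instead.

```c
#include <assert.h>
#include <stdbool.h>

/* The reserve is a fixed pool (the "balloon"); every allocator granted
 * ALLOC_NO_WATERMARKS deflates it a little. */
struct reserve { long pages; };

/* Returns true if the request could still be served from the reserve. */
static bool take_from_reserve(struct reserve *r, long pages)
{
    if (r->pages < pages)
        return false;       /* balloon empty: the "final blow" */
    r->pages -= pages;
    return true;
}

/* How many of `threads` identical NOFAIL requesters, each wanting `req`
 * pages, can be served before a `reserve_pages`-sized pool is depleted. */
static int count_served(long reserve_pages, int threads, long req)
{
    struct reserve r = { reserve_pages };
    int served = 0;

    for (int i = 0; i < threads; i++)
        if (take_from_reserve(&r, req))
            served++;
    return served;
}
```

With 1000 threads each taking 8 pages from a 1024-page pool, only the first 128 succeed; the remaining 872 find the balloon empty, with no watchdog left to refill it.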
* Re: How to handle TIF_MEMDIE stalls?
  2015-02-19 15:29             ` Tetsuo Handa
  2015-02-19 21:53               ` Tetsuo Handa
@ 2015-02-20  9:13               ` Michal Hocko
  2015-02-20 13:37                 ` Stefan Ring
  1 sibling, 1 reply; 14+ messages in thread
From: Michal Hocko @ 2015-02-20 9:13 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: dchinner, oleg, xfs, hannes, linux-mm, mgorman, rientjes,
	linux-fsdevel, akpm, fernando_b1, torvalds

On Fri 20-02-15 00:29:29, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Thu 19-02-15 13:29:14, Michal Hocko wrote:
> > [...]
> > > Something like the following.
> > __GFP_HIGH doesn't seem to be sufficient so we would need something
> > slightly else but the idea is still the same:
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 8d52ab18fe0d..2d224bbdf8e8 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -2599,6 +2599,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >  	enum migrate_mode migration_mode = MIGRATE_ASYNC;
> >  	bool deferred_compaction = false;
> >  	int contended_compaction = COMPACT_CONTENDED_NONE;
> > +	int oom = 0;
> >
> >  	/*
> >  	 * In the slowpath, we sanity check order to avoid ever trying to
> > @@ -2635,6 +2636,15 @@ retry:
> >  	alloc_flags = gfp_to_alloc_flags(gfp_mask);
> >
> >  	/*
> > +	 * __GFP_NOFAIL allocations cannot fail but yet the current context
> > +	 * might be blocking resources needed by the OOM victim to terminate.
> > +	 * Allow the caller to dive into memory reserves to succeed the
> > +	 * allocation and break out from a potential deadlock.
> > +	 */
>
> We don't know how many callers will pass __GFP_NOFAIL. But if 1000
> threads are doing the same operation which requires __GFP_NOFAIL
> allocation with a lock held, wouldn't memory reserves deplete?

We shouldn't have an unbounded number of GFP_NOFAIL allocations at the
same time. This would be even more broken. If a load is known to use
such allocations excessively then the administrator can enlarge the
memory reserves.

> This heuristic can't continue if memory reserves depleted or
> continuous pages of requested order cannot be found.

Once memory reserves are depleted we are screwed anyway and we might
panic.

--
Michal Hocko
SUSE Labs

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 14+ messages in thread
* Re: How to handle TIF_MEMDIE stalls?
  2015-02-20  9:13               ` Michal Hocko
@ 2015-02-20 13:37                 ` Stefan Ring
  0 siblings, 0 replies; 14+ messages in thread
From: Stefan Ring @ 2015-02-20 13:37 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Tetsuo Handa, dchinner, oleg, Linux fs XFS, hannes, linux-mm,
	mgorman, rientjes, linux-fsdevel, akpm, fernando_b1, torvalds

>> We don't know how many callers will pass __GFP_NOFAIL. But if 1000
>> threads are doing the same operation which requires __GFP_NOFAIL
>> allocation with a lock held, wouldn't memory reserves deplete?
>
> We shouldn't have an unbounded number of GFP_NOFAIL allocations at the
> same time. This would be even more broken. If a load is known to use
> such allocations excessively then the administrator can enlarge the
> memory reserves.
>
>> This heuristic can't continue if memory reserves depleted or
>> continuous pages of requested order cannot be found.
>
> Once memory reserves are depleted we are screwed anyway and we might
> panic.

This discussion reminds me of a situation I've seen somewhat regularly,
which I have described here:
http://oss.sgi.com/pipermail/xfs/2014-April/035793.html

I've actually seen it more often on another box with OpenVZ and VirtualBox
installed, where it would almost always happen during startup of a
VirtualBox guest machine. This other machine is also running XFS. I blamed
it on OpenVZ or VirtualBox originally, but having seen the same thing
happen on the other machine with neither of them, the next candidate for
taking blame is XFS.

Is this behavior something that can be attributed to these memory
allocation retry loops?

^ permalink raw reply	[flat|nested] 14+ messages in thread
end of thread, other threads:[~2015-02-20 13:37 UTC | newest]
Thread overview: 14+ messages
[not found] <201502172057.GCD09362.FtHQMVSLJOFFOO@I-love.SAKURA.ne.jp>
[not found] ` <20150217131618.GA14778@phnom.home.cmpxchg.org>
[not found] ` <20150217165024.GI32017@dhcp22.suse.cz>
[not found] ` <20150217232552.GK4251@dastard>
[not found] ` <20150218084842.GB4478@dhcp22.suse.cz>
2015-02-18 11:23 ` How to handle TIF_MEMDIE stalls? Tetsuo Handa
2015-02-18 12:29 ` Michal Hocko
2015-02-18 14:06 ` Tetsuo Handa
2015-02-18 14:25 ` Michal Hocko
2015-02-19 10:48 ` Tetsuo Handa
2015-02-20 8:26 ` Michal Hocko
[not found] <20150218082502.GA4478@dhcp22.suse.cz>
[not found] ` <20150218104859.GM12722@dastard>
[not found] ` <20150218121602.GC4478@dhcp22.suse.cz>
[not found] ` <20150219110124.GC15569@phnom.home.cmpxchg.org>
[not found] ` <20150219122914.GH28427@dhcp22.suse.cz>
2015-02-19 13:29 ` Tetsuo Handa
2015-02-20 9:10 ` Michal Hocko
2015-02-20 12:20 ` Tetsuo Handa
2015-02-20 12:38 ` Michal Hocko
[not found] ` <20150219125844.GI28427@dhcp22.suse.cz>
2015-02-19 15:29 ` Tetsuo Handa
2015-02-19 21:53 ` Tetsuo Handa
2015-02-20 9:13 ` Michal Hocko
2015-02-20 13:37 ` Stefan Ring