From mboxrd@z Thu Jan  1 00:00:00 1970
From: Andrew Morton
Subject: Re: [PATCH v10 29/35] memcg: per-memcg kmem shrinking
Date: Thu, 6 Jun 2013 02:49:06 -0700
Message-ID: <20130606024906.e5b85b28.akpm@linux-foundation.org>
References: <1370287804-3481-1-git-send-email-glommer@openvz.org>
	<1370287804-3481-30-git-send-email-glommer@openvz.org>
	<20130605160841.909420c06bfde62039489d2e@linux-foundation.org>
	<51B049D5.2020809@parallels.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Cc: Glauber Costa, Mel Gorman, Dave Chinner, Michal Hocko,
	Johannes Weiner, Greg Thelen, Rik van Riel
To: Glauber Costa
In-Reply-To: <51B049D5.2020809@parallels.com>
Sender: owner-linux-mm@kvack.org
List-Id: linux-fsdevel.vger.kernel.org

On Thu, 6 Jun 2013 12:35:33 +0400 Glauber Costa wrote:

> On 06/06/2013 03:08 AM, Andrew Morton wrote:
> >> > +
> >> > +	/*
> >> > +	 * We will try to shrink kernel memory present in caches. We
> >> > +	 * are sure that we can wait, so we will. The duration of our
> >> > +	 * wait is determined by congestion, the same way as vmscan.c
> >> > +	 *
> >> > +	 * If we are in FS context, though, then although we can wait,
> >> > +	 * we cannot call the shrinkers. Most fs shrinkers (which
> >> > +	 * comprises most of our kmem data) will not run without
> >> > +	 * __GFP_FS since they can deadlock. The solution is to
> >> > +	 * synchronously run that in a different context.
> >
> > But this is pointless.  Calling a function via a different thread and
> > then waiting for it to complete is equivalent to calling it directly.
>
> Not in this case. We are in wait-capable context (we check for this
> right before we reach this), but we are not in fs capable context.
>
> So the reason we do this - which I tried to cover in the changelog, is
> to escape from the GFP_FS limitation that our call chain has, not the
> wait limitation.

But that's equivalent to calling the code directly.
Look:

	some_fs_function()
	{
		lock(some-fs-lock);
		...
	}

	some_other_fs_function()
	{
		lock(some-fs-lock);
		alloc_pages(GFP_NOFS);
		->...
		  ->schedule_work(some_fs_function);
		    flush_scheduled_work();
	}

That flush_scheduled_work() won't complete until some_fs_function()
has completed.  But some_fs_function() won't complete, because we're
holding some-fs-lock.