Date: Thu, 3 Jan 2019 19:13:29 +0100
From: Michal Hocko
To: Yang Shi
Cc: hannes@cmpxchg.org, akpm@linux-foundation.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/3] mm: memcontrol: delayed force empty
Message-ID: <20190103181329.GW31793@dhcp22.suse.cz>
References: <1546459533-36247-1-git-send-email-yang.shi@linux.alibaba.com>
 <20190103101215.GH31793@dhcp22.suse.cz>
On Thu 03-01-19 09:33:14, Yang Shi wrote:
> 
> 
> On 1/3/19 2:12 AM, Michal Hocko wrote:
> > On Thu 03-01-19 04:05:30, Yang Shi wrote:
> > > Currently, force_empty reclaims memory synchronously when writing to
> > > memory.force_empty. It may take some time to return, and subsequent
> > > operations are blocked until it completes. Although it can be
> > > interrupted by a signal, this still seems suboptimal.
> > Why is it suboptimal? We are doing that operation on behalf of the
> > process requesting it. Why should anybody else pay for it? In other
> > words, why should we hide the overhead?
> 
> Please see the explanation below.
> 
> > > Now css offline is handled by a worker, and the typical use case of
> > > force_empty is right before memcg offline. So handling force_empty
> > > in css offline sounds reasonable.
> > Hmm, so I guess you are talking about
> > 	echo 1 > $MEMCG/force_empty
> > 	rmdir $MEMCG
> > and you are complaining that the operation takes too long. Right? Why
> > do you actually care?
> 
> We have some use cases which create and remove memcgs very frequently,
> and the tasks in the memcg may only access files which are unlikely to
> be accessed by anyone else. So we prefer to force_empty the memcg
> before rmdir'ing it, to reclaim the page cache so that it does not
> accumulate and incur unnecessary memory pressure, since that memory
> pressure may trigger direct reclaim and hurt latency-sensitive
> applications.

Yes, this makes sense to me.

> And the create/remove might be run sequentially in a script (there
> might be a lot of scripts or applications running in parallel doing
> this), i.e.
> 	mkdir cg1
> 	do something
> 	echo 0 > cg1/memory.force_empty
> 	rmdir cg1
> 
> 	mkdir cg2
> 	...
> 
> The creation of the next memcg might be blocked by the force_empty for
> a long time if there is a lot of page cache, so the overall throughput
> of the system may suffer.

Is there any reason for your scripts to be strictly sequential here? In
other words, why can't you offload those expensive operations to a
detached context in _userspace_?
-- 
Michal Hocko
SUSE Labs
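[A minimal sketch of the userspace offloading suggested above, for
illustration only: the paths, group names, and the `cleanup_memcg`
helper are assumptions, and memory.force_empty is a cgroup-v1 knob, so
CGROOT must point at a writable memory-controller mount on your system.]

```shell
#!/bin/sh
# Detach the expensive force_empty + rmdir into a background job so the
# next mkdir is not serialized behind reclaim.
# CGROOT is an assumed mount point; adjust for your hierarchy.
CGROOT="${CGROOT:-/sys/fs/cgroup/memory}"

cleanup_memcg() {
    # Reclaim the group's page cache if the v1 knob exists, then remove it.
    [ -f "$1/memory.force_empty" ] && echo 0 > "$1/memory.force_empty"
    rmdir "$1"
}

if [ -d "$CGROOT" ] && [ -w "$CGROOT" ]; then
    for name in cg1 cg2 cg3; do
        cg="$CGROOT/$name"
        mkdir "$cg" || continue
        # ... run the workload attached to $cg here ...
        cleanup_memcg "$cg" &   # detached: the next mkdir proceeds at once
    done
    wait                        # reap the background cleanups before exiting
fi
```

With this shape, each iteration pays only the cost of mkdir and the
workload itself; the synchronous reclaim happens concurrently in the
detached jobs, which is exactly the throughput problem the sequential
script runs into.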