From: David Rientjes
Subject: Re: Re: Re: [PATCH] mm/memcontrol.c: speed up to force empty a memory cgroup
Date: Tue, 20 Mar 2018 13:29:57 -0700 (PDT)
References: <1521448170-19482-1-git-send-email-lirongqing@baidu.com>
 <20180319085355.GQ23100@dhcp22.suse.cz>
 <2AD939572F25A448A3AE3CAEA61328C23745764B@BC-MAIL-M28.internal.baidu.com>
 <20180319103756.GV23100@dhcp22.suse.cz>
 <2AD939572F25A448A3AE3CAEA61328C2374589DC@BC-MAIL-M28.internal.baidu.com>
 <20180320083950.GD23100@dhcp22.suse.cz>
In-Reply-To: <20180320083950.GD23100@dhcp22.suse.cz>
To: Michal Hocko
Cc: "Li,Rongqing", linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 cgroups@vger.kernel.org, hannes@cmpxchg.org, Andrey Ryabinin

On Tue, 20 Mar 2018, Michal Hocko wrote:

> > > > > Although SWAP_CLUSTER_MAX is used at the lower level, the call
> > > > > stack of try_to_free_mem_cgroup_pages is long; increasing
> > > > > nr_to_reclaim reduces the number of calls through
> > > > > do_try_to_free_pages, shrink_zones and shrink_node:
> > > > >
> > > > > mem_cgroup_resize_limit
> > > > >  ---> try_to_free_mem_cgroup_pages:  .nr_to_reclaim = max(1024, SWAP_CLUSTER_MAX)
> > > > >   ---> do_try_to_free_pages
> > > > >    ---> shrink_zones
> > > > >     ---> shrink_node
> > > > >      ---> shrink_node_memcg
> > > > >       ---> shrink_list      <------- the loop happens here [times = 1024/32]
> > > > >        ---> shrink_page_list
> > > >
> > > > Can you actually measure this to be the culprit? Because we should
> > > > rethink our call path if it is too complicated/deep to perform well.
> > > > Adding arbitrary batch sizes doesn't sound like a good way to go to me.
> > >
> > > Ok, I will try.
> >
> > Looping in mem_cgroup_resize_limit(), which takes memcg_limit_mutex on
> > every iteration and thereby contends with lowering limits in other
> > cgroups (on our systems, thousands), while calling
> > try_to_free_mem_cgroup_pages() for less than SWAP_CLUSTER_MAX pages is
> > lame.
>
> Well, if the global lock is a bottleneck in your deployments then we
> can come up with something more clever, e.g. per-hierarchy locking, or
> even dropping the lock for the reclaim altogether. If we reclaim in
> SWAP_CLUSTER_MAX batches then the potential over-reclaim risk is quite
> low when multiple users are shrinking the same (sub)hierarchy.

I don't believe this to be a bottleneck if nr_pages is increased in
mem_cgroup_resize_limit().
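For reference, the loop being discussed looks roughly like this; a
simplified paraphrase of the 4.16-era resize path, not the exact upstream
code, with the retry and signal handling details trimmed:

	/*
	 * Simplified paraphrase of mem_cgroup_resize_limit() -- not the
	 * exact upstream code.  The disputed value is the nr_pages
	 * argument to try_to_free_mem_cgroup_pages(): upstream passes 1
	 * (rounded up to SWAP_CLUSTER_MAX internally), the patch passes
	 * 1024 so a single entry into the reclaim call chain does
	 * ~1024/32 shrink_list passes instead of re-entering the whole
	 * chain, and retaking the mutex, for every 32-page batch.
	 */
	static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
					   unsigned long limit)
	{
		int retry_count = MEM_CGROUP_RECLAIM_RETRIES;
		int ret;

		do {
			mutex_lock(&memcg_limit_mutex);
			ret = page_counter_limit(&memcg->memory, limit);
			mutex_unlock(&memcg_limit_mutex);
			if (!ret)	/* usage already fits under the new limit */
				break;

			/* one trip through do_try_to_free_pages -> ... -> shrink_list */
			if (!try_to_free_mem_cgroup_pages(memcg, 1 /* nr_pages */,
							  GFP_KERNEL, true))
				break;	/* no progress, give up with -EBUSY */
		} while (--retry_count);

		return ret;
	}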
> > It would probably be best to limit the nr_pages to the amount that
> > needs to be reclaimed, though, rather than over-reclaiming.
>
> How do you achieve that? The charging path is not synchronized with
> the shrinking one at all.

The point is to get a better guess at how many pages, up to
SWAP_CLUSTER_MAX, need to be reclaimed instead of 1.

> > If you wanted to be invasive, you could change page_counter_limit()
> > to return count - limit, fix up the callers that look for -EBUSY, and
> > then use max(val, SWAP_CLUSTER_MAX) as your nr_pages.
>
> I am not sure I understand.

Have page_counter_limit() return the number of pages over the limit,
i.e. count - limit, since it compares the two anyway.  Fix up the
existing callers and then clamp that value to SWAP_CLUSTER_MAX in
mem_cgroup_resize_limit().  It's a more accurate guess than either 1 or
1024.
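Roughly, an untested sketch of that idea: the excess is capped at a
SWAP_CLUSTER_MAX batch per call (take that clamping direction as an
assumption, since the earlier wording used max() instead), and
resize_and_reclaim() is just a made-up stand-in for the relevant piece of
mem_cgroup_resize_limit():

	/*
	 * Untested sketch.  page_counter_limit() currently returns 0 or
	 * -EBUSY; here it instead reports how many pages usage sits above
	 * the requested limit, and the resize path uses that as its
	 * reclaim target, clamped to a SWAP_CLUSTER_MAX batch.
	 */
	static long page_counter_limit(struct page_counter *counter,
				       unsigned long limit)
	{
		long count = atomic_long_read(&counter->count);

		if (count <= limit) {
			counter->limit = limit; /* the real code updates this atomically */
			return 0;		/* new limit applied, nothing to reclaim */
		}
		return count - limit;		/* pages still above the new limit */
	}

	/* stand-in for the relevant piece of mem_cgroup_resize_limit() */
	static void resize_and_reclaim(struct mem_cgroup *memcg,
				       unsigned long limit)
	{
		long excess;

		mutex_lock(&memcg_limit_mutex);
		excess = page_counter_limit(&memcg->memory, limit);
		mutex_unlock(&memcg_limit_mutex);

		if (excess > 0) {
			/* a more accurate nr_pages guess than either 1 or 1024 */
			unsigned long nr = min_t(unsigned long, excess,
						 SWAP_CLUSTER_MAX);

			try_to_free_mem_cgroup_pages(memcg, nr, GFP_KERNEL, true);
		}
	}

That keeps the common "already under the limit" case a plain success while
giving the reclaim side a target derived from the counter it just read.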