Date: Thu, 19 Dec 2013 13:21:46 +0400
From: Vladimir Davydov
To: Christoph Lameter
CC: Glauber Costa, Michal Hocko, LKML, Johannes Weiner, Pekka Enberg,
 Andrew Morton
Subject: Re: [PATCH 4/6] memcg, slab: check and init memcg_cahes under slab_mutex

Hi Christoph,

We have a problem with memcg-vs-slab interactions. Currently we set the
pointer to a new kmem_cache in its parent's memcg_caches array inside
memcg_create_kmem_cache() (mm/memcontrol.c):

memcg_create_kmem_cache():
    new_cachep = cache_from_memcg_idx(cachep, idx);
    if (new_cachep)
        goto out;

    new_cachep = kmem_cache_dup(memcg, cachep);
    cachep->memcg_params->memcg_caches[idx] = new_cachep;

This is prone to a race, as explained in the comment to this patch. To
fix the race, the assignment of new_cachep to memcg_caches[idx] must be
done under the slab_mutex. There are basically two ways of doing this:

1. Move the assignment to kmem_cache_create_memcg(), defined in
   mm/slab_common.c. This is how this patch handles it.

2. Move taking the slab_mutex, along with some memcg-specific
   initialization bits, from kmem_cache_create_memcg() to
   memcg_create_kmem_cache().

The second way looks clearer, but it would break the convention of not
taking the slab_mutex inside mm/memcontrol.c, which Glauber tried to
observe while implementing kmemcg.

So the question is: what do you think about taking the slab_mutex
directly from mm/memcontrol.c, without going through helper functions
(which make the call paths confusing, IMO)?
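To make option 1 concrete, here is a minimal userspace sketch of the idea:
both the existence check and the memcg_caches[idx] assignment happen while
the mutex is held, so a second creation request simply finds the cache and
bails out. A pthread mutex stands in for the slab_mutex, a plain array for
memcg_params->memcg_caches, and kmem_cache_dup() is reduced to a bare
allocation; create_memcg_cache() and MEMCG_CACHES_MAX are made up for the
illustration. This is a sketch of the locking pattern, not the actual patch.

/*
 * Userspace sketch only.  Compile with: gcc -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MEMCG_CACHES_MAX 16

struct kmem_cache {
	const char *name;
	struct kmem_cache *memcg_caches[MEMCG_CACHES_MAX];
};

static pthread_mutex_t slab_mutex = PTHREAD_MUTEX_INITIALIZER;

/* stands in for kmem_cache_dup(): just allocate a per-memcg copy */
static struct kmem_cache *kmem_cache_dup(struct kmem_cache *parent)
{
	struct kmem_cache *s = calloc(1, sizeof(*s));

	s->name = parent->name;
	return s;
}

/*
 * Option 1: the existence check and the memcg_caches[idx] assignment
 * are both done with slab_mutex held, so only one creation can win.
 */
static struct kmem_cache *create_memcg_cache(struct kmem_cache *parent, int idx)
{
	struct kmem_cache *s;

	pthread_mutex_lock(&slab_mutex);
	s = parent->memcg_caches[idx];
	if (s)				/* somebody else already created it */
		goto out;
	s = kmem_cache_dup(parent);
	parent->memcg_caches[idx] = s;
out:
	pthread_mutex_unlock(&slab_mutex);
	return s;
}

int main(void)
{
	struct kmem_cache parent = { .name = "dummy" };
	struct kmem_cache *a = create_memcg_cache(&parent, 1);
	struct kmem_cache *b = create_memcg_cache(&parent, 1);

	printf("second call reused the first cache: %s\n",
	       a == b ? "yes" : "no");
	return 0;
}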
Thanks.

On 12/19/2013 12:00 PM, Glauber Costa wrote:
> On Thu, Dec 19, 2013 at 11:07 AM, Vladimir Davydov wrote:
>> On 12/18/2013 09:41 PM, Michal Hocko wrote:
>>> On Wed 18-12-13 17:16:55, Vladimir Davydov wrote:
>>>> The memcg_params::memcg_caches array can be updated concurrently from
>>>> memcg_update_cache_size() and memcg_create_kmem_cache(). Although both
>>>> of these functions take the slab_mutex during their operation, the
>>>> latter checks whether the memcg's cache has already been allocated
>>>> without taking the mutex. This can result in a race as described below.
>>>>
>>>> Assume two threads schedule kmem_cache creation works for the same
>>>> kmem_cache of the same memcg from __memcg_kmem_get_cache(). One of the
>>>> works successfully creates it. The other work should then fail, but if
>>>> it interleaves with memcg_update_cache_size() as follows, it does not:
>>> I am not sure I understand the race. memcg_update_cache_size is called
>>> when we start accounting a new memcg or a child is created and it
>>> inherits accounting from the parent. memcg_create_kmem_cache is called
>>> when a new cache is first allocated from, right?
>> memcg_update_cache_size() is called when kmem accounting is activated
>> for a memcg, no matter how.
>>
>> memcg_create_kmem_cache() is scheduled from __memcg_kmem_get_cache().
>> It's OK to have a bunch of such work items trying to create the same
>> memcg cache concurrently, but only one of them should succeed.
>>
>>> Why cannot we simply take slab_mutex inside memcg_create_kmem_cache?
>>> It is running from the workqueue context, so it shouldn't clash with
>>> other locks.
>> Hmm, Glauber's code never takes the slab_mutex inside memcontrol.c. I
>> have always been wondering why, because it could simplify the call
>> paths significantly (e.g. update_cache_sizes() -> update_all_caches()
>> -> update_cache_size() - from memcontrol.c to slab_common.c and back
>> again just to take the mutex).
>>
> Because that is a layering violation and exposes implementation
> details of the slab to the outside world. I agree this would make
> things a lot simpler, but please check with Christoph if this is
> acceptable before going forward.
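For completeness, the fragile pattern the quoted discussion is about can be
seen in the same userspace model: the existence check happens before the
mutex is taken, so two work items scheduled for the same (cache, memcg)
pair can both pass it. This is a deliberately simplified illustration;
creation_work(), caches_created and the reduced kmem_cache_dup() are made
up for the sketch, and the actual interleaving described in the patch also
involves memcg_update_cache_size(), which is not modeled here.

/*
 * Userspace sketch only.  Compile with: gcc -pthread race.c
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define MEMCG_CACHES_MAX 16

struct kmem_cache {
	struct kmem_cache *memcg_caches[MEMCG_CACHES_MAX];
};

static pthread_mutex_t slab_mutex = PTHREAD_MUTEX_INITIALIZER;
static int caches_created;

/* stands in for kmem_cache_dup(), which takes slab_mutex internally */
static struct kmem_cache *kmem_cache_dup(struct kmem_cache *parent)
{
	struct kmem_cache *s;

	pthread_mutex_lock(&slab_mutex);
	s = calloc(1, sizeof(*s));
	caches_created++;
	pthread_mutex_unlock(&slab_mutex);
	return s;
}

/* stands in for the current memcg_create_kmem_cache() flow */
static void *creation_work(void *arg)
{
	struct kmem_cache *parent = arg;
	int idx = 1;

	/* unlocked existence check */
	if (parent->memcg_caches[idx])
		return NULL;

	/* both workers can reach this point before either fills the slot */
	parent->memcg_caches[idx] = kmem_cache_dup(parent);
	return NULL;
}

int main(void)
{
	static struct kmem_cache parent;
	pthread_t t1, t2;

	pthread_create(&t1, NULL, creation_work, &parent);
	pthread_create(&t2, NULL, creation_work, &parent);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	/* may report 2: both workers passed the unlocked check */
	printf("caches created: %d\n", caches_created);
	return 0;
}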