From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1753878AbcBEQzI (ORCPT );
	Fri, 5 Feb 2016 11:55:08 -0500
Received: from mx2.parallels.com ([199.115.105.18]:50945 "EHLO mx2.parallels.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753146AbcBEQzH (ORCPT );
	Fri, 5 Feb 2016 11:55:07 -0500
Date: Fri, 5 Feb 2016 19:54:54 +0300
From: Vladimir Davydov 
To: Dmitry Safonov 
CC: , , , <0x7f454c46@gmail.com>, Christoph Lameter ,
	Pekka Enberg , David Rientjes ,
	Joonsoo Kim 
Subject: Re: [PATCHv6] mm: slab: free kmem_cache_node after destroy sysfs file
Message-ID: <20160205165454.GB22456@esperanza>
References: <1454687136-19298-1-git-send-email-dsafonov@virtuozzo.com>
	<20160205161124.GA26693@esperanza>
	<56B4D171.6000000@virtuozzo.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <56B4D171.6000000@virtuozzo.com>
X-ClientProxiedBy: US-EXCH2.sw.swsoft.com (10.255.249.46) To US-EXCH.sw.swsoft.com (10.255.249.47)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Feb 05, 2016 at 07:44:33PM +0300, Dmitry Safonov wrote:
...
> >>@@ -2414,8 +2415,6 @@ int __kmem_cache_shrink(struct kmem_cache *cachep, bool deactivate)
> >> 
> >> int __kmem_cache_shutdown(struct kmem_cache *cachep)
> >> {
> >>-	int i;
> >>-	struct kmem_cache_node *n;
> >> 	int rc = __kmem_cache_shrink(cachep, false);
> >> 
> >> 	if (rc)
> >>@@ -2423,6 +2422,14 @@ int __kmem_cache_shutdown(struct kmem_cache *cachep)
> >> 	free_percpu(cachep->cpu_cache);
> >
> >And how come ->cpu_cache (and ->cpu_slab in case of SLUB) is special?
> >Can't sysfs access it either? I propose to introduce a method called
> >__kmem_cache_release (instead of __kmem_cache_free_nodes), which would
> >do all freeing, both per-cpu and per-node.
> AFAICS, they aren't used by this sysfs.

They are: alloc_calls_show -> list_locations -> flush_all accesses ->cpu_slab.
> Anyway, seems reasonable, will do.

Thanks,
Vladimir