From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: <stable-owner@vger.kernel.org>
Received: from mail.linuxfoundation.org ([140.211.169.12]:36350 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK)
        by vger.kernel.org with ESMTP id S1731341AbeJBUUB (ORCPT );
        Tue, 2 Oct 2018 16:20:01 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org,
        Alexey Dobriyan, Christoph Lameter,
        Pekka Enberg, David Rientjes,
        Joonsoo Kim, Andrew Morton,
        Linus Torvalds, zhong jiang
Subject: [PATCH 4.9 66/94] slub: make ->cpu_partial unsigned int
Date: Tue, 2 Oct 2018 06:25:20 -0700
Message-Id: <20181002132505.075810407@linuxfoundation.org>
In-Reply-To: <20181002132500.494838053@linuxfoundation.org>
References: <20181002132500.494838053@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: stable-owner@vger.kernel.org
List-ID:

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Alexey Dobriyan <adobriyan@gmail.com>

commit e5d9998f3e09359b372a037a6ac55ba235d95d57 upstream.

/*
 * cpu_partial determined the maximum number of objects
 * kept in the per cpu partial lists of a processor.
 */

Can't be negative.

Link: http://lkml.kernel.org/r/20180305200730.15812-15-adobriyan@gmail.com
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Acked-by: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: zhong jiang
Signed-off-by: Greg Kroah-Hartman
---
 include/linux/slub_def.h |    3 ++-
 mm/slub.c                |    6 +++---
 2 files changed, 5 insertions(+), 4 deletions(-)

--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -67,7 +67,8 @@ struct kmem_cache {
 	int size;		/* The size of an object including meta data */
 	int object_size;	/* The size of an object without meta data */
 	int offset;		/* Free pointer offset. */
-	int cpu_partial;	/* Number of per cpu partial objects to keep around */
+	/* Number of per cpu partial objects to keep around */
+	unsigned int cpu_partial;
 	struct kmem_cache_order_objects oo;
 
 	/* Allocation and freeing of slabs */
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1793,7 +1793,7 @@ static void *get_partial_node(struct kme
 {
 	struct page *page, *page2;
 	void *object = NULL;
-	int available = 0;
+	unsigned int available = 0;
 	int objects;
 
 	/*
@@ -4870,10 +4870,10 @@ static ssize_t cpu_partial_show(struct k
 static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
 				 size_t length)
 {
-	unsigned long objects;
+	unsigned int objects;
 	int err;
 
-	err = kstrtoul(buf, 10, &objects);
+	err = kstrtouint(buf, 10, &objects);
 	if (err)
 		return err;
 	if (objects && !kmem_cache_has_cpu_partial(s))