Date: Wed, 29 Jan 2025 17:01:27 +0000
From: Yosry Ahmed
To: Sergey Senozhatsky
Cc: Andrew Morton, Minchan Kim, Johannes Weiner, Nhat Pham,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCHv1 2/6] zsmalloc: factor out size-class locking helpers
References: <20250129064853.2210753-1-senozhatsky@chromium.org>
	<20250129064853.2210753-3-senozhatsky@chromium.org>
In-Reply-To: <20250129064853.2210753-3-senozhatsky@chromium.org>

On Wed, Jan 29, 2025 at 03:43:48PM +0900, Sergey Senozhatsky wrote:
> Move open-coded size-class locking to dedicated helpers.
>
> Signed-off-by: Sergey Senozhatsky

Reviewed-by: Yosry Ahmed

> ---
>  mm/zsmalloc.c | 47 ++++++++++++++++++++++++++++-------------------
>  1 file changed, 28 insertions(+), 19 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 2f8a2b139919..0f575307675d 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -254,6 +254,16 @@ static bool pool_lock_is_contended(struct zs_pool *pool)
>  	return rwlock_is_contended(&pool->migrate_lock);
>  }
>
> +static void size_class_lock(struct size_class *class)
> +{
> +	spin_lock(&class->lock);
> +}
> +
> +static void size_class_unlock(struct size_class *class)
> +{
> +	spin_unlock(&class->lock);
> +}
> +
>  static inline void zpdesc_set_first(struct zpdesc *zpdesc)
>  {
>  	SetPagePrivate(zpdesc_page(zpdesc));
> @@ -614,8 +624,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
>  		if (class->index != i)
>  			continue;
>
> -		spin_lock(&class->lock);
> -
> +		size_class_lock(class);
>  		seq_printf(s, " %5u %5u ", i, class->size);
>  		for (fg = ZS_INUSE_RATIO_10; fg < NR_FULLNESS_GROUPS; fg++) {
>  			inuse_totals[fg] += class_stat_read(class, fg);
> @@ -625,7 +634,7 @@
>  		obj_allocated = class_stat_read(class, ZS_OBJS_ALLOCATED);
>  		obj_used = class_stat_read(class, ZS_OBJS_INUSE);
>  		freeable = zs_can_compact(class);
> -		spin_unlock(&class->lock);
> +		size_class_unlock(class);
>
>  		objs_per_zspage = class->objs_per_zspage;
>  		pages_used = obj_allocated / objs_per_zspage *
> @@ -1400,7 +1409,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  	class = pool->size_class[get_size_class_index(size)];
>
>  	/* class->lock effectively protects the zpage migration */
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	zspage = find_get_zspage(class);
>  	if (likely(zspage)) {
>  		obj_malloc(pool, zspage, handle);
> @@ -1411,7 +1420,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  		goto out;
>  	}
>
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>
>  	zspage = alloc_zspage(pool, class, gfp);
>  	if (!zspage) {
> @@ -1419,7 +1428,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  		return (unsigned long)ERR_PTR(-ENOMEM);
>  	}
>
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	obj_malloc(pool, zspage, handle);
>  	newfg = get_fullness_group(class, zspage);
>  	insert_zspage(class, zspage, newfg);
> @@ -1430,7 +1439,7 @@ unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t gfp)
>  	/* We completely set up zspage so mark them as movable */
>  	SetZsPageMovable(pool, zspage);
>  out:
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>
>  	return handle;
>  }
> @@ -1484,7 +1493,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
>  	obj_to_zpdesc(obj, &f_zpdesc);
>  	zspage = get_zspage(f_zpdesc);
>  	class = zspage_class(pool, zspage);
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	pool_read_unlock(pool);
>
>  	class_stat_sub(class, ZS_OBJS_INUSE, 1);
> @@ -1494,7 +1503,7 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
>  	if (fullness == ZS_INUSE_RATIO_0)
>  		free_zspage(pool, class, zspage);
>
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>  	cache_free_handle(pool, handle);
>  }
>  EXPORT_SYMBOL_GPL(zs_free);
> @@ -1828,7 +1837,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	/*
>  	 * the class lock protects zpage alloc/free in the zspage.
>  	 */
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	/* the migrate_write_lock protects zpage access via zs_map_object */
>  	migrate_write_lock(zspage);
>
> @@ -1860,7 +1869,7 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>  	 * it's okay to release migration_lock.
>  	 */
>  	pool_write_unlock(pool);
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>  	migrate_write_unlock(zspage);
>
>  	zpdesc_get(newzpdesc);
> @@ -1904,10 +1913,10 @@ static void async_free_zspage(struct work_struct *work)
>  		if (class->index != i)
>  			continue;
>
> -		spin_lock(&class->lock);
> +		size_class_lock(class);
>  		list_splice_init(&class->fullness_list[ZS_INUSE_RATIO_0],
>  				 &free_pages);
> -		spin_unlock(&class->lock);
> +		size_class_unlock(class);
>  	}
>
>  	list_for_each_entry_safe(zspage, tmp, &free_pages, list) {
> @@ -1915,10 +1924,10 @@ static void async_free_zspage(struct work_struct *work)
>  		lock_zspage(zspage);
>
>  		class = zspage_class(pool, zspage);
> -		spin_lock(&class->lock);
> +		size_class_lock(class);
>  		class_stat_sub(class, ZS_INUSE_RATIO_0, 1);
>  		__free_zspage(pool, class, zspage);
> -		spin_unlock(&class->lock);
> +		size_class_unlock(class);
>  	}
>  };
>
> @@ -1983,7 +1992,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	 * as well as zpage allocation/free
>  	 */
>  	pool_write_lock(pool);
> -	spin_lock(&class->lock);
> +	size_class_lock(class);
>  	while (zs_can_compact(class)) {
>  		int fg;
>
> @@ -2013,11 +2022,11 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  			putback_zspage(class, dst_zspage);
>  			dst_zspage = NULL;
>
> -			spin_unlock(&class->lock);
> +			size_class_unlock(class);
>  			pool_write_unlock(pool);
>  			cond_resched();
>  			pool_write_lock(pool);
> -			spin_lock(&class->lock);
> +			size_class_lock(class);
>  		}
>  	}
>
> @@ -2027,7 +2036,7 @@ static unsigned long __zs_compact(struct zs_pool *pool,
>  	if (dst_zspage)
>  		putback_zspage(class, dst_zspage);
>
> -	spin_unlock(&class->lock);
> +	size_class_unlock(class);
>  	pool_write_unlock(pool);
>
>  	return pages_freed;
> --
> 2.48.1.262.g85cc9f2d1e-goog
>
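
FWIW, for anyone skimming the archive: the new helpers are one-line
wrappers around the per-class spinlock, so every conversion above is
mechanical and changes no locking behavior. A minimal before/after
sketch, abbreviated from the zs_malloc() hunks quoted above
(surrounding code elided):

	/* before: open-coded size-class locking */
	spin_lock(&class->lock);
	zspage = find_get_zspage(class);
	/* ... */
	spin_unlock(&class->lock);

	/* after: the dedicated helpers wrapping the same lock */
	size_class_lock(class);
	zspage = find_get_zspage(class);
	/* ... */
	size_class_unlock(class);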