From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton, Christoph Lameter, David Rientjes, Pekka Enberg, Joonsoo Kim
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Galbraith, Sebastian Andrzej Siewior, Thomas Gleixner, Mel Gorman, Jesper Dangaard Brouer, Jann Horn, Vlastimil Babka
Subject: [PATCH v4 02/35] mm, slub: allocate private object map for debugfs listings
Date: Thu, 5 Aug 2021 17:19:27 +0200
Message-Id: <20210805152000.12817-3-vbabka@suse.cz>
X-Mailer: git-send-email 2.32.0
In-Reply-To: <20210805152000.12817-1-vbabka@suse.cz>
References: <20210805152000.12817-1-vbabka@suse.cz>
MIME-Version: 1.0

SLUB has a static, spinlock-protected bitmap for marking which objects are
on the freelist when it wants to list them, for situations where dynamically
allocating such a map can lead to recursion or locking issues, and an
on-stack bitmap would be too large.

The handlers of the debugfs files alloc_traces and free_traces also currently
use this shared bitmap, but their syscall context makes it straightforward to
allocate a private map before entering locked sections, so switch these
processing paths to use a private bitmap.
Signed-off-by: Vlastimil Babka
Acked-by: Christoph Lameter
Acked-by: Mel Gorman
---
 mm/slub.c | 44 +++++++++++++++++++++++++++++---------------
 1 file changed, 29 insertions(+), 15 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f5908e6b6fb1..211d380d94d1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -454,6 +454,18 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
 
+static void __fill_map(unsigned long *obj_map, struct kmem_cache *s,
+		       struct page *page)
+{
+	void *addr = page_address(page);
+	void *p;
+
+	bitmap_zero(obj_map, page->objects);
+
+	for (p = page->freelist; p; p = get_freepointer(s, p))
+		set_bit(__obj_to_index(s, addr, p), obj_map);
+}
+
 #if IS_ENABLED(CONFIG_KUNIT)
 static bool slab_add_kunit_errors(void)
 {
@@ -483,17 +495,11 @@ static inline bool slab_add_kunit_errors(void) { return false; }
 static unsigned long *get_map(struct kmem_cache *s, struct page *page)
 	__acquires(&object_map_lock)
 {
-	void *p;
-	void *addr = page_address(page);
-
 	VM_BUG_ON(!irqs_disabled());
 
 	spin_lock(&object_map_lock);
 
-	bitmap_zero(object_map, page->objects);
-
-	for (p = page->freelist; p; p = get_freepointer(s, p))
-		set_bit(__obj_to_index(s, addr, p), object_map);
+	__fill_map(object_map, s, page);
 
 	return object_map;
 }
@@ -4876,17 +4882,17 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 }
 
 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-		struct page *page, enum track_item alloc)
+		struct page *page, enum track_item alloc,
+		unsigned long *obj_map)
 {
 	void *addr = page_address(page);
 	void *p;
-	unsigned long *map;
 
-	map = get_map(s, page);
+	__fill_map(obj_map, s, page);
+
 	for_each_object(p, s, addr, page->objects)
-		if (!test_bit(__obj_to_index(s, addr, p), map))
+		if (!test_bit(__obj_to_index(s, addr, p), obj_map))
 			add_location(t, s, get_track(s, p, alloc));
-	put_map(map);
 }
 #endif  /* CONFIG_DEBUG_FS */
 #endif  /* CONFIG_SLUB_DEBUG */
@@ -5813,14 +5819,21 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
 	struct loc_track *t = __seq_open_private(filep, &slab_debugfs_sops,
 						sizeof(struct loc_track));
 	struct kmem_cache *s = file_inode(filep)->i_private;
+	unsigned long *obj_map;
+
+	obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
+	if (!obj_map)
+		return -ENOMEM;
 
 	if (strcmp(filep->f_path.dentry->d_name.name, "alloc_traces") == 0)
 		alloc = TRACK_ALLOC;
 	else
 		alloc = TRACK_FREE;
 
-	if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL))
+	if (!alloc_loc_track(t, PAGE_SIZE / sizeof(struct location), GFP_KERNEL)) {
+		bitmap_free(obj_map);
 		return -ENOMEM;
+	}
 
 	for_each_kmem_cache_node(s, node, n) {
 		unsigned long flags;
@@ -5831,12 +5844,13 @@ static int slab_debug_trace_open(struct inode *inode, struct file *filep)
 
 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, slab_list)
-			process_slab(t, s, page, alloc);
+			process_slab(t, s, page, alloc, obj_map);
 		list_for_each_entry(page, &n->full, slab_list)
-			process_slab(t, s, page, alloc);
+			process_slab(t, s, page, alloc, obj_map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
+	bitmap_free(obj_map);
 	return 0;
 }
 
-- 
2.32.0
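
A note for readers who want to see the resulting pattern in isolation: the
sketch below is a minimal, self-contained userspace C model of what the patch
does in slab_debug_trace_open() and process_slab(); it is not kernel code. It
allocates a private bitmap while ordinary allocation is still allowed, fills
it from a slab's freelist, and then walks all objects while skipping the free
ones. Every identifier prefixed with sketch_ is invented for illustration, and
the locking is only indicated by comments.

/*
 * Minimal userspace sketch (not SLUB code) of the pattern this patch
 * introduces: each caller allocates a private bitmap, marks the objects
 * found on the freelist, then walks all objects and skips the free ones.
 */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NOBJ		8
#define BITS_PER_LONG	(sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_LONGS(n)	(((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

struct sketch_obj {
	struct sketch_obj *free_next;	/* next object on the freelist, or NULL */
	int payload;
};

struct sketch_slab {
	struct sketch_obj objs[NOBJ];	/* all objects in the slab */
	struct sketch_obj *freelist;	/* head of the freelist */
};

/* Mark every object currently on the freelist in the caller's bitmap. */
static void sketch_fill_map(unsigned long *map, struct sketch_slab *slab)
{
	struct sketch_obj *p;

	memset(map, 0, BITMAP_LONGS(NOBJ) * sizeof(unsigned long));
	for (p = slab->freelist; p; p = p->free_next) {
		size_t idx = (size_t)(p - slab->objs);

		map[idx / BITS_PER_LONG] |= 1UL << (idx % BITS_PER_LONG);
	}
}

static int sketch_test_bit(const unsigned long *map, size_t idx)
{
	return (map[idx / BITS_PER_LONG] >> (idx % BITS_PER_LONG)) & 1;
}

/* Walk the slab and report only the objects that are allocated (not free). */
static void sketch_process_slab(struct sketch_slab *slab, unsigned long *map)
{
	sketch_fill_map(map, slab);
	for (size_t i = 0; i < NOBJ; i++)
		if (!sketch_test_bit(map, i))
			printf("object %zu is allocated, payload %d\n",
			       i, slab->objs[i].payload);
}

int main(void)
{
	struct sketch_slab slab = { 0 };
	unsigned long *map;
	size_t i;

	for (i = 0; i < NOBJ; i++)
		slab.objs[i].payload = (int)i * 10;

	/* Pretend objects 1 and 4 were freed: put them on the freelist. */
	slab.objs[1].free_next = &slab.objs[4];
	slab.freelist = &slab.objs[1];

	/* Private map, allocated up front (in the kernel: before any lock). */
	map = calloc(BITMAP_LONGS(NOBJ), sizeof(unsigned long));
	if (!map)
		return 1;

	/* ... a real caller would take its locks here ... */
	sketch_process_slab(&slab, map);
	/* ... drop them, and only then release the private map. */

	free(map);
	return 0;
}

The design point mirrored from the patch is that the bitmap is allocated and
freed outside any locked region, so the loop over slabs never has to allocate
memory while a spinlock is held.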