Date: Wed, 6 Dec 2023 11:22:22 +0000
From: Catalin Marinas
To: Hillf Danton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Waiman Long, Andrew Morton
Subject: Re: [PATCH] kmemleak: Avoid RCU stalls when freeing metadata for per-CPU pointers
References: <20231201190829.825856-1-catalin.marinas@arm.com> <20231202075055.1142-1-hdanton@sina.com>
In-Reply-To: <20231202075055.1142-1-hdanton@sina.com>

On Sat, Dec 02, 2023 at 03:50:55PM +0800, Hillf Danton wrote:
> On Fri, 1 Dec 2023 19:08:29 +0000 Catalin Marinas
> > @@ -1472,14 +1474,27 @@ static void scan_object(struct kmemleak_object *object)
> >                 /* already freed object */
> >                 goto out;
> >
> > -       obj_ptr = object->flags & OBJECT_PHYS ?
> > -                 __va((phys_addr_t)object->pointer) :
> > -                 (void *)object->pointer;
> > +       if (object->flags & OBJECT_PERCPU) {
> > +               unsigned int cpu;
> >
> > -       if (hlist_empty(&object->area_list) ||
> > +               for_each_possible_cpu(cpu) {
> > +                       void *start = per_cpu_ptr((void __percpu *)object->pointer, cpu);
> > +                       void *end = start + object->size;
> > +
> > +                       scan_block(start, end, object);
> > +
> > +                       raw_spin_unlock_irqrestore(&object->lock, flags);
> > +                       cond_resched();
> > +                       raw_spin_lock_irqsave(&object->lock, flags);
>
> Nit, spin_lock/unlock can be collapsed into need_resched().
>
>         if (need_resched()) {
>                 raw_spin_unlock_irqrestore(&object->lock, flags);
>                 cond_resched();
>                 raw_spin_lock_irqsave(&object->lock, flags);
>         }

The reason I'd not do this is that we also want the interrupts to be
re-enabled regularly. Keeping them disabled may not even mark the current
thread as need_resched().

-- 
Catalin
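
For illustration, a sketch contrasting the two loop shapes discussed above.
It is modelled on the quoted hunk, not on the committed code; object, flags,
scan_block() and the locking helpers are assumed from mm/kmemleak.c as quoted.

        /* Variant in the posted patch: cycle object->lock (and therefore
         * re-enable IRQs) after every per-CPU area, so the local tick can
         * fire and cond_resched() gets a chance to act. */
        for_each_possible_cpu(cpu) {
                void *start = per_cpu_ptr((void __percpu *)object->pointer, cpu);
                void *end = start + object->size;

                scan_block(start, end, object);

                raw_spin_unlock_irqrestore(&object->lock, flags);
                cond_resched();
                raw_spin_lock_irqsave(&object->lock, flags);
        }

        /* Suggested variant: only cycle the lock when need_resched() is
         * already set.  With IRQs masked by raw_spin_lock_irqsave(), the
         * scheduler tick cannot run on this CPU, so TIF_NEED_RESCHED may
         * never be set here and interrupts can stay disabled for the whole
         * per-CPU walk. */
        for_each_possible_cpu(cpu) {
                void *start = per_cpu_ptr((void __percpu *)object->pointer, cpu);
                void *end = start + object->size;

                scan_block(start, end, object);

                if (need_resched()) {
                        raw_spin_unlock_irqrestore(&object->lock, flags);
                        cond_resched();
                        raw_spin_lock_irqsave(&object->lock, flags);
                }
        }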