Date: Mon, 30 May 2022 14:14:46 -0700 (PDT)
From: David Rientjes <rientjes@google.com>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Rongwei Wang <rongwei.wang@linux.alibaba.com>, akpm@linux-foundation.org,
    vbabka@suse.cz, roman.gushchin@linux.dev, iamjoonsoo.kim@lge.com,
    penberg@kernel.org, cl@linux.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/3] mm/slub: fix the race between validate_slab and slab_free
References: <20220529081535.69275-1-rongwei.wang@linux.alibaba.com>

On Sun, 29 May 2022, Hyeonggon Yoo wrote:

> > diff --git a/mm/slub.c b/mm/slub.c
> > index ed5c2c03a47a..310e56d99116 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -1374,15 +1374,12 @@ static noinline int free_debug_processing(
> >  			void *head, void *tail, int bulk_cnt,
> >  			unsigned long addr)
> >  {
> > -	struct kmem_cache_node *n = get_node(s, slab_nid(slab));
> >  	void *object = head;
> >  	int cnt = 0;
> > -	unsigned long flags, flags2;
> > +	unsigned long flags;
> >  	int ret = 0;
> >  
> > -	spin_lock_irqsave(&n->list_lock, flags);
> > -	slab_lock(slab, &flags2);
> > -
> > +	slab_lock(slab, &flags);
> >  	if (s->flags & SLAB_CONSISTENCY_CHECKS) {
> >  		if (!check_slab(s, slab))
> >  			goto out;
> > @@ -1414,8 +1411,7 @@ static noinline int free_debug_processing(
> >  		slab_err(s, slab, "Bulk freelist count(%d) invalid(%d)\n",
> >  			 bulk_cnt, cnt);
> >  
> > -	slab_unlock(slab, &flags2);
> > -	spin_unlock_irqrestore(&n->list_lock, flags);
> > +	slab_unlock(slab, &flags);
> >  	if (!ret)
> >  		slab_fix(s, "Object at 0x%p not freed", object);
> >  	return ret;
> > @@ -3304,7 +3300,7 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
> >  
> >  {
> >  	void *prior;
> > -	int was_frozen;
> > +	int was_frozen, to_take_off = 0;
> >  	struct slab new;
> >  	unsigned long counters;
> >  	struct kmem_cache_node *n = NULL;
> > @@ -3315,15 +3311,19 @@ static void __slab_free(struct kmem_cache *s, struct slab *slab,
> >  	if (kfence_free(head))
> >  		return;
> >  
> > +	n = get_node(s, slab_nid(slab));
> > +	spin_lock_irqsave(&n->list_lock, flags);
> > +
> 
> Oh, please don't do this.
> 
> The SLUB free slowpath can be hit a lot depending on the workload.
> 
> __slab_free() tries its best not to take n->list_lock; it currently
> takes n->list_lock only when the slab needs to be taken off a list.
> 
> Unconditionally taking n->list_lock will degrade performance.
> 

This is a good point; it would be useful to gather some benchmarks for
workloads that are known to thrash some caches and would hit this path,
such as netperf TCP_RR.
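
To make the concern concrete, the pattern being defended looks roughly
like the sketch below. This is a condensed illustration modeled on
mainline mm/slub.c, not the actual kernel source: the function name
__slab_free_sketch is made up, and the cpu-partial, stat accounting, and
debug paths are elided. The helpers it calls (get_node(),
set_freepointer(), cmpxchg_double_slab()) are the real mm/slub.c
internals of this era.

static void __slab_free_sketch(struct kmem_cache *s, struct slab *slab,
			       void *head, void *tail, int cnt)
{
	void *prior;
	int was_frozen;
	struct slab new;
	unsigned long counters;
	struct kmem_cache_node *n = NULL;
	unsigned long flags;

	do {
		if (unlikely(n)) {
			/* The speculation below failed; retry unlocked. */
			spin_unlock_irqrestore(&n->list_lock, flags);
			n = NULL;
		}
		prior = slab->freelist;
		counters = slab->counters;
		set_freepointer(s, tail, prior);
		new.counters = counters;
		was_frozen = new.frozen;
		new.inuse -= cnt;

		if ((!new.inuse || !prior) && !was_frozen) {
			/*
			 * The slab may become empty or need to go on the
			 * partial list: only now is the node lock taken,
			 * and only speculatively, since the cmpxchg below
			 * can still fail and force an unlocked retry.
			 */
			n = get_node(s, slab_nid(slab));
			spin_lock_irqsave(&n->list_lock, flags);
		}
	} while (!cmpxchg_double_slab(s, slab,
				      prior, counters,
				      head, new.counters,
				      "__slab_free_sketch"));

	if (likely(!n))
		/* Common case: freelist updated without n->list_lock. */
		return;

	/* ... partial-list add/remove and discard_slab() go here ... */
	spin_unlock_irqrestore(&n->list_lock, flags);
}

On the common path the cmpxchg succeeds with no lock held at all; the
node lock is paid for only when the new freelist state implies the slab
may move on or off a list. The patch above replaces that with an
unconditional spin_lock_irqsave() on every slowpath free, which is
exactly the cost a free-heavy workload such as netperf TCP_RR should
make visible.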