From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 21 Apr 2022 13:43:17 -1000
From: Tejun Heo
To: Tadeusz Struk
Cc: Michal Koutný, cgroups@vger.kernel.org, Zefan Li, Johannes Weiner,
	Christian Brauner, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
	John Fastabend, KP Singh, netdev@vger.kernel.org,
	bpf@vger.kernel.org, stable@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	syzbot+e42ae441c3b10acf9e9d@syzkaller.appspotmail.com
Subject: Re: [PATCH] cgroup: don't queue css_release_work if one already pending
References: <20220412192459.227740-1-tadeusz.struk@linaro.org>
	<20220414164409.GA5404@blackbody.suse.cz>
	<584183e2-2473-6185-e07d-f478da118b87@linaro.org>
In-Reply-To: <584183e2-2473-6185-e07d-f478da118b87@linaro.org>
List-ID: bpf@vger.kernel.org

Hello,

On Thu, Apr 14, 2022 at 10:51:18AM -0700, Tadeusz Struk wrote:
> What happened was, the write triggered:
>
>   cgroup_subtree_control_write()
>     -> cgroup_apply_control()
>     -> cgroup_apply_control_enable()
>     -> css_create()
>
> which allocates and initializes the css, then fails in cgroup_idr_alloc(),
> bails out and calls queue_rcu_work(cgroup_destroy_wq, &css->destroy_rwork);

Yes, but this css hasn't been installed yet.

> then cgroup_subtree_control_write() bails out to out_unlock:, which then
> goes:
>
>   cgroup_kn_unlock()
>     -> cgroup_put()
>     -> css_put()
>     -> percpu_ref_put(&css->refcnt)
>     -> percpu_ref_put_many(ref)

And this is a different css: cgroup->self, which isn't connected to the
half-built css that got destroyed in css_create(). So, I have a bit of
difficulty following this scenario.

The way the current code uses destroy_work is definitely nasty, and it'd
probably be a good idea to separate out the different use cases, but
let's first understand what's failing.

Thanks.

-- 
tejun