From: Jonathan Cameron <jonathan.cameron@huawei.com>
To: Sungwoo Kim <iam@sung-woo.kim>
Cc: Davidlohr Bueso <dave@stgolabs.net>,
Dave Jiang <dave.jiang@intel.com>,
Alison Schofield <alison.schofield@intel.com>,
Vishal Verma <vishal.l.verma@intel.com>,
Ira Weiny <ira.weiny@intel.com>,
Dan Williams <dan.j.williams@intel.com>,
Ben Widawsky <bwidawsk@kernel.org>, <daveti@purdue.edu>,
<linux-cxl@vger.kernel.org>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] cxl/region: Fix a race bug in delete_region_store
Date: Mon, 9 Mar 2026 18:10:17 +0000
Message-ID: <20260309181017.000010e0@huawei.com>
In-Reply-To: <CAJNyHpJLeyWDKBEgrH+ouK1XZM91f9zabmFcHHnmcKC3y__sCg@mail.gmail.com>
On Mon, 9 Mar 2026 13:56:33 -0400
Sungwoo Kim <iam@sung-woo.kim> wrote:
> On Mon, Mar 9, 2026 at 8:00 AM Jonathan Cameron
> <jonathan.cameron@huawei.com> wrote:
> >
> > On Sun, 8 Mar 2026 14:59:58 -0400
> > Sungwoo Kim <iam@sung-woo.kim> wrote:
> >
> > > A race exists when two concurrent sysfs writes to delete_region specify
> > > the same region name. Both calls succeed in cxl_find_region_by_name()
> > > (which only does device_find_child_by_name and takes a reference), and
> > > both then proceed to call devm_release_action(). The first call atomically
> > > removes and releases the devres entry successfully. The second call finds
> > > no matching entry, causing devres_release() to return -ENOENT, which trips
> > > the WARN_ON.
> > >
> > > Fix this by replacing devm_release_action() with devm_remove_action_nowarn()
> > > followed by a manual call to unregister_region(). devm_remove_action_nowarn()
> > > removes the devres tracking entry and returns an error code.
> >
> > Naive question (or just me being lazy). Why can't we take the
> > write lock on cxl_rwsem.region?
>
> Thanks for your review. I've just tested your suggestion, but it
> caused an ABBA deadlock:
>
> task 1:
> create_pmem_region_store
> __device_attach() ...dev_lock()
> cxl_region_can_probe() ...lock(cxl_rwsem.region)
>
> task 2:
> delete_region_store() ...lock(cxl_rwsem.region)
> unregister_region()
> device_del() ...dev_lock()
>
Thanks for chasing that down. (I was indeed just being lazy!)
Let's wait a few days to get input from others on this.
One horrible option would be a single-purpose lock that serializes
writes to the sysfs file. I don't much like that solution, however!
Thanks,
Jonathan
> One way to avoid a deadlock might be to not add an additional lock.
Thread overview: 8+ messages
2026-03-08 18:59 [PATCH] cxl/region: Fix a race bug in delete_region_store Sungwoo Kim
2026-03-09 12:00 ` Jonathan Cameron
2026-03-09 17:56 ` Sungwoo Kim
2026-03-09 18:10 ` Jonathan Cameron [this message]
2026-03-09 20:32 ` Ira Weiny
2026-03-10 18:36 ` Davidlohr Bueso
2026-03-10 22:53 ` Dan Williams
2026-03-11 6:55 ` Sungwoo Kim