From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 9 Mar 2026 14:37:46 +0000
From: Jonathan Cameron
To: Smita Koralahalli
CC: Ard Biesheuvel, Alison Schofield, Vishal Verma, Ira Weiny, Dan Williams, Yazen Ghannam, Dave Jiang, Davidlohr Bueso, Matthew Wilcox, Jan Kara, Rafael J. Wysocki, Len Brown, Pavel Machek, Li Ming, Jeff Johnson, Ying Huang, Yao Xingtao, Peter Zijlstra, Greg Kroah-Hartman, Nathan Fontenot, Terry Bowman, Robert Richter, Benjamin Cheatham, Zhijian Li, Borislav Petkov, Tomasz Wolski
Subject: Re: [PATCH v6 5/9] dax: Track all dax_region allocations under a global resource tree
Message-ID: <20260309143746.000047ee@huawei.com>
In-Reply-To: <20260210064501.157591-6-Smita.KoralahalliChannabasappa@amd.com>
References: <20260210064501.157591-1-Smita.KoralahalliChannabasappa@amd.com> <20260210064501.157591-6-Smita.KoralahalliChannabasappa@amd.com>
X-Mailing-List: linux-fsdevel@vger.kernel.org

On Tue, 10 Feb 2026 06:44:57 +0000
Smita Koralahalli wrote:

> Introduce a global "DAX Regions" resource root and register each
> dax_region->res under it via request_resource(). Release the resource on
> dax_region teardown.
>
> By enforcing a single global namespace for dax_region allocations, this
> ensures only one of dax_hmem or dax_cxl can successfully register a
> dax_region for a given range.
>
> Co-developed-by: Dan Williams
> Signed-off-by: Dan Williams
> Signed-off-by: Smita Koralahalli

One question inline about the locking. Is the intent to serialize
beyond this new resource tree?
If it's just the resource tree, the write_lock(&resource_lock) taken in
request_resource() and release_resource() should be sufficient.

> ---
>  drivers/dax/bus.c | 23 ++++++++++++++++++++---
>  1 file changed, 20 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
> index fde29e0ad68b..5f387feb95f0 100644
> --- a/drivers/dax/bus.c
> +++ b/drivers/dax/bus.c
> @@ -10,6 +10,7 @@
>  #include "dax-private.h"
>  #include "bus.h"
>
> +static struct resource dax_regions = DEFINE_RES_MEM_NAMED(0, -1, "DAX Regions");
>  static DEFINE_MUTEX(dax_bus_lock);
>
>  /*
> @@ -625,6 +626,8 @@ static void dax_region_unregister(void *region)
>  {
>  	struct dax_region *dax_region = region;
>
> +	scoped_guard(rwsem_write, &dax_region_rwsem)
> +		release_resource(&dax_region->res);

Do we need the locking? The resource code all runs under the global
resource_lock, so if the aim is just to serialize adds and removes,
that should be enough. Maybe there is a justification in resource_lock
being an internal implementation detail, though.
>  	sysfs_remove_groups(&dax_region->dev->kobj,
>  			dax_region_attribute_groups);
>  	dax_region_put(dax_region);
> @@ -635,6 +638,7 @@ struct dax_region *alloc_dax_region(struct device *parent, int region_id,
>  	unsigned long flags)
>  {
>  	struct dax_region *dax_region;
> +	int rc;
>
>  	/*
>  	 * The DAX core assumes that it can store its private data in
> @@ -667,14 +671,27 @@ struct dax_region *alloc_dax_region(struct device *parent, int region_id,
>  		.flags = IORESOURCE_MEM | flags,
>  	};
>
> -	if (sysfs_create_groups(&parent->kobj, dax_region_attribute_groups)) {
> -		kfree(dax_region);
> -		return NULL;
> +	scoped_guard(rwsem_write, &dax_region_rwsem)
> +		rc = request_resource(&dax_regions, &dax_region->res);
> +	if (rc) {
> +		dev_dbg(parent, "dax_region resource conflict for %pR\n",
> +			&dax_region->res);
> +		goto err_res;
>  	}
>
> +	if (sysfs_create_groups(&parent->kobj, dax_region_attribute_groups))
> +		goto err_sysfs;
> +
>  	if (devm_add_action_or_reset(parent, dax_region_unregister, dax_region))
>  		return NULL;
>  	return dax_region;
> +
> +err_sysfs:
> +	scoped_guard(rwsem_write, &dax_region_rwsem)
> +		release_resource(&dax_region->res);
> +err_res:
> +	kfree(dax_region);
> +	return NULL;
>  }
>  EXPORT_SYMBOL_GPL(alloc_dax_region);
>