From: nifan.cxl@gmail.com
To: qemu-devel@nongnu.org
Cc: jonathan.cameron@huawei.com, linux-cxl@vger.kernel.org,
gregory.price@memverge.com, ira.weiny@intel.com,
dan.j.williams@intel.com, a.manzanares@samsung.com,
dave@stgolabs.net, nmtadam.samsung@gmail.com,
nifan.cxl@gmail.com, jim.harris@samsung.com,
Jorgen.Hansen@wdc.com, wj28.lee@gmail.com,
Fan Ni <fan.ni@samsung.com>,
Jonathan Cameron <Jonathan.Cameron@huawei.com>
Subject: [PATCH v6 04/12] hw/mem/cxl_type3: Add support to create DC regions to type3 memory devices
Date: Mon, 25 Mar 2024 12:02:22 -0700
Message-ID: <20240325190339.696686-5-nifan.cxl@gmail.com>
In-Reply-To: <20240325190339.696686-1-nifan.cxl@gmail.com>
From: Fan Ni <fan.ni@samsung.com>
With this change, DC regions can be created when setting up memory for a
type3 memory device.
A 'num-dc-regions' property is added to ct3_props so users can specify the
number of DC regions to create. To keep the change simple, the other region
parameters (region base, length, and block size) are hard coded for now;
they can be exposed as properties later if needed.
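For illustration only (not part of this patch), a type3 device with two DC
regions might be instantiated as sketched below; the topology, backend, and
id names are assumptions made up for the example:

qemu-system-x86_64 [...] \
    -M q35,cxl=on,cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=4G \
    -object memory-backend-ram,id=vmem0,size=2G \
    -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
    -device cxl-rp,port=0,bus=cxl.1,id=root_port13,chassis=0,slot=2 \
    -device cxl-type3,bus=root_port13,volatile-memdev=vmem0,id=cxl-dcd0,num-dc-regions=2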
With this change and proper kernel-side support, a DC region can be created
as below:
# Allocate a region id from the root decoder and create the region
region=$(cat /sys/bus/cxl/devices/decoder0.0/create_dc_region)
echo $region > /sys/bus/cxl/devices/decoder0.0/create_dc_region
# Configure interleave parameters for the new region
echo 256 > /sys/bus/cxl/devices/$region/interleave_granularity
echo 1 > /sys/bus/cxl/devices/$region/interleave_ways
# Put the endpoint decoder into dynamic capacity mode (DC region 0) and size it
echo "dc0" > /sys/bus/cxl/devices/decoder2.0/mode
echo 0x40000000 > /sys/bus/cxl/devices/decoder2.0/dpa_size
# Size the region, attach the endpoint decoder as its target, and commit
echo 0x40000000 > /sys/bus/cxl/devices/$region/size
echo "decoder2.0" > /sys/bus/cxl/devices/$region/target0
echo 1 > /sys/bus/cxl/devices/$region/commit
echo $region > /sys/bus/cxl/drivers/cxl_region/bind
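As a quick sanity check (device paths assumed from the steps above), the
values written can be read back:

cat /sys/bus/cxl/devices/decoder2.0/mode    # expect "dc0"
cat /sys/bus/cxl/devices/$region/size       # expect 0x40000000 (1 GiB)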
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Fan Ni <fan.ni@samsung.com>
---
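For context: with the hard-coded values in cxl_create_dc_regions() below,
and assuming e.g. a 2 GiB volatile backend with num-dc-regions=2, DC region 0
would start at DPA 0x80000000 (right after the static capacity) and DC
region 1 at DPA 0x100000000, each 2 GiB long with a 2 MiB block size.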
hw/mem/cxl_type3.c | 47 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 47 insertions(+)
diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index 0836e9135b..c83d6f8004 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -30,6 +30,7 @@
#include "hw/pci/msix.h"
#define DWORD_BYTE 4
+#define CXL_CAPACITY_MULTIPLIER (256 * MiB)
/* Default CDAT entries for a memory region */
enum {
@@ -567,6 +568,46 @@ static void ct3d_reg_write(void *opaque, hwaddr offset, uint64_t value,
}
}
+/*
+ * TODO: dc region configuration will be updated once host backend and address
+ * space support is added for DCD.
+ */
+static bool cxl_create_dc_regions(CXLType3Dev *ct3d, Error **errp)
+{
+ int i;
+ uint64_t region_base = 0;
+ uint64_t region_len = 2 * GiB;
+ uint64_t decode_len = 2 * GiB;
+ uint64_t blk_size = 2 * MiB;
+ CXLDCRegion *region;
+ MemoryRegion *mr;
+
+ if (ct3d->hostvmem) {
+ mr = host_memory_backend_get_memory(ct3d->hostvmem);
+ region_base += memory_region_size(mr);
+ }
+ if (ct3d->hostpmem) {
+ mr = host_memory_backend_get_memory(ct3d->hostpmem);
+ region_base += memory_region_size(mr);
+ }
+ assert(region_base % CXL_CAPACITY_MULTIPLIER == 0);
+
+ for (i = 0, region = &ct3d->dc.regions[0];
+ i < ct3d->dc.num_regions;
+ i++, region++, region_base += region_len) {
+ *region = (CXLDCRegion) {
+ .base = region_base,
+ .decode_len = decode_len,
+ .len = region_len,
+ .block_size = blk_size,
+ /* dsmad_handle set when creating CDAT table entries */
+ .flags = 0,
+ };
+ }
+
+ return true;
+}
+
static bool cxl_setup_memory(CXLType3Dev *ct3d, Error **errp)
{
DeviceState *ds = DEVICE(ct3d);
@@ -635,6 +676,11 @@ static bool cxl_setup_memory(CXLType3Dev *ct3d, Error **errp)
g_free(p_name);
}
+ if (!cxl_create_dc_regions(ct3d, errp)) {
+ error_setg(errp, "setup DC regions failed");
+ return false;
+ }
+
return true;
}
@@ -930,6 +976,7 @@ static Property ct3_props[] = {
HostMemoryBackend *),
DEFINE_PROP_UINT64("sn", CXLType3Dev, sn, UI64_NULL),
DEFINE_PROP_STRING("cdat", CXLType3Dev, cxl_cstate.cdat.filename),
+ DEFINE_PROP_UINT8("num-dc-regions", CXLType3Dev, dc.num_regions, 0),
DEFINE_PROP_END_OF_LIST(),
};
--
2.43.0