* [PATCH qemu for 10.2 0/3] cxl: Additional RAS features support.
From: Jonathan Cameron via @ 2025-09-17 14:33 UTC (permalink / raw)
To: Michael Tsirkin, qemu-devel, shiju.jose, Davidlohr Bueso
Cc: armbru, Fan Ni, linux-cxl, linuxarm, Ravi Shankar
These two features have been used to test the related EDAC RAS
feature drivers in Linux, which are now upstream. There are only minor
tweaks from the second part of:
https://lore.kernel.org/qemu-devel/20250811085530.2263-1-shiju.jose@huawei.com/
[PATCH v6 0/8] hw/cxl: Update CXL events to rev3.2 and add maintenance support for memory repair features
I've split that series up as the two parts were only slightly connected.
However, there is some churn if this is applied without the other part of
Shiju's posted series, which was sent a few minutes ago as:
https://lore.kernel.org/qemu-devel/20250917141355.293217-1-Jonathan.Cameron@huawei.com/
Based on: [PATCH qemu for 10.2 0/5] cxl: r3.2 specification events updates.
Based on: 20250917141355.293217-1-Jonathan.Cameron@huawei.com
Davidlohr Bueso (1):
hw/cxl: Add support for Maintenance command and Post Package Repair
(PPR)
Shiju Jose (2):
hw/cxl/cxl-mailbox-utils: Move declaration of scrub and ECS feature
attributes in cmd_features_set_feature()
hw/cxl: Add emulation for memory sparing control feature
include/hw/cxl/cxl_device.h | 145 +++++++++
include/hw/cxl/cxl_events.h | 47 +++
hw/cxl/cxl-mailbox-utils.c | 566 +++++++++++++++++++++++++++++++++++-
hw/mem/cxl_type3.c | 169 +++++++++++
4 files changed, 915 insertions(+), 12 deletions(-)
--
2.48.1
* [PATCH qemu for 10.2 1/3] hw/cxl/cxl-mailbox-utils: Move declaration of scrub and ECS feature attributes in cmd_features_set_feature()
From: Jonathan Cameron via @ 2025-09-17 14:33 UTC (permalink / raw)
To: Michael Tsirkin, qemu-devel, shiju.jose, Davidlohr Bueso
Cc: armbru, Fan Ni, linux-cxl, linuxarm, Ravi Shankar
From: Shiju Jose <shiju.jose@huawei.com>
Move the declaration of scrub and ECS feature attributes in
cmd_features_set_feature() to the local scope where they are used.
Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
hw/cxl/cxl-mailbox-utils.c | 17 +++++++----------
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 1f3982e89e..c05852836d 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -1351,10 +1351,6 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
CXLCCI *cci)
{
CXLSetFeatureInHeader *hdr = (void *)payload_in;
- CXLMemPatrolScrubWriteAttrs *ps_write_attrs;
- CXLMemPatrolScrubSetFeature *ps_set_feature;
- CXLMemECSWriteAttrs *ecs_write_attrs;
- CXLMemECSSetFeature *ecs_set_feature;
CXLSetFeatureInfo *set_feat_info;
uint16_t bytes_to_copy = 0;
uint8_t data_transfer_flag;
@@ -1396,13 +1392,14 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
}
if (qemu_uuid_is_equal(&hdr->uuid, &patrol_scrub_uuid)) {
+ CXLMemPatrolScrubSetFeature *ps_set_feature = (void *)payload_in;
+ CXLMemPatrolScrubWriteAttrs *ps_write_attrs =
+ &ps_set_feature->feat_data;
+
if (hdr->version != CXL_MEMDEV_PS_SET_FEATURE_VERSION) {
return CXL_MBOX_UNSUPPORTED;
}
- ps_set_feature = (void *)payload_in;
- ps_write_attrs = &ps_set_feature->feat_data;
-
if ((uint32_t)hdr->offset + bytes_to_copy >
sizeof(ct3d->patrol_scrub_wr_attrs)) {
return CXL_MBOX_INVALID_PAYLOAD_LENGTH;
@@ -1423,13 +1420,13 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
}
} else if (qemu_uuid_is_equal(&hdr->uuid,
&ecs_uuid)) {
+ CXLMemECSSetFeature *ecs_set_feature = (void *)payload_in;
+ CXLMemECSWriteAttrs *ecs_write_attrs = ecs_set_feature->feat_data;
+
if (hdr->version != CXL_ECS_SET_FEATURE_VERSION) {
return CXL_MBOX_UNSUPPORTED;
}
- ecs_set_feature = (void *)payload_in;
- ecs_write_attrs = ecs_set_feature->feat_data;
-
if ((uint32_t)hdr->offset + bytes_to_copy >
sizeof(ct3d->ecs_wr_attrs)) {
return CXL_MBOX_INVALID_PAYLOAD_LENGTH;
--
2.48.1
* [PATCH qemu for 10.2 2/3] hw/cxl: Add support for Maintenance command and Post Package Repair (PPR)
From: Jonathan Cameron via @ 2025-09-17 14:33 UTC (permalink / raw)
To: Michael Tsirkin, qemu-devel, shiju.jose, Davidlohr Bueso
Cc: armbru, Fan Ni, linux-cxl, linuxarm, Ravi Shankar
From: Davidlohr Bueso <dave@stgolabs.net>
This adds initial support for the Maintenance command, specifically
the soft and hard PPR operations on a DPA. The implementation allows
the operation to be executed at runtime, so semantically data is
retained and CXL.mem requests are processed correctly.
Maintenance requests are tracked when a general media or DRAM event
flags that maintenance is needed.
Post Package Repair (PPR) maintenance operations may be supported by CXL
devices that implement CXL.mem protocol. A PPR maintenance operation
requests the CXL device to perform a repair operation on its media.
For example, a CXL device with DRAM components that support PPR features
may implement PPR Maintenance operations. DRAM components may support two
types of PPR: hard PPR (hPPR) for a permanent row repair, and soft PPR
(sPPR) for a temporary row repair. Soft PPR is much faster than hPPR,
but the repair is lost with a power cycle.
CXL spec 3.2 section 8.2.10.7.1.2 describes the device's sPPR (soft PPR)
maintenance operation and section 8.2.10.7.1.3 describes the device's
hPPR (hard PPR) maintenance operation feature.
CXL spec 3.2 section 8.2.10.7.2.1 describes the sPPR feature discovery and
configuration.
CXL spec 3.2 section 8.2.10.7.2.2 describes the hPPR feature discovery and
configuration.
CXL spec 3.2 section 8.2.10.2.1.4 Table 8-60 describes the Memory Sparing
Event Record.
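For illustration, here is a minimal host-side sketch of how the Perform
Maintenance (opcode 0600h) input payload could be laid out for a soft PPR
request. It mirrors the packed payload layout parsed by this patch; the
struct name, the build_sppr_payload() helper and the literal values are
hypothetical and not part of the patch.
/*
 * Illustrative sketch only: lay out the Perform Maintenance (opcode
 * 0600h) input payload for a soft PPR request on a given DPA. The
 * layout mirrors the packed payload parsed by this patch (class,
 * subclass, then the PPR-specific fields); the struct and helper
 * names are hypothetical.
 */
#include <stdint.h>
#include <string.h>

struct ppr_maint_in {
    uint8_t class;              /* 0x1 = PPR maintenance class */
    uint8_t subclass;           /* 0x0 = sPPR, 0x1 = hPPR */
    uint8_t flags;              /* bit 0 = query resources only */
    uint64_t dpa;               /* DPA to repair, little endian on the wire */
    uint8_t nibble_mask[3];
} __attribute__((packed));

static size_t build_sppr_payload(uint8_t *buf, uint64_t dpa)
{
    struct ppr_maint_in in = {
        .class    = 0x1,        /* CXL_MEMDEV_MAINT_CLASS_PPR */
        .subclass = 0x0,        /* CXL_MEMDEV_MAINT_SUBCLASS_SPPR */
        .flags    = 0,          /* perform the repair, not a query */
        .dpa      = dpa,        /* assumes a little-endian host */
    };

    memcpy(buf, &in, sizeof(in));
    return sizeof(in);
}
If the soft PPR op mode has the memory sparing event record enable bit
set, the emulated device responds to such a request for a previously
flagged DPA by producing a Memory Sparing Event Record.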
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Co-developed-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
include/hw/cxl/cxl_device.h | 112 +++++++++++++++++
include/hw/cxl/cxl_events.h | 42 +++++++
hw/cxl/cxl-mailbox-utils.c | 240 +++++++++++++++++++++++++++++++++++-
hw/mem/cxl_type3.c | 125 +++++++++++++++++++
4 files changed, 517 insertions(+), 2 deletions(-)
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 29bb46063c..163b0ff7ed 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -430,6 +430,12 @@ static inline bool cxl_dev_media_disabled(CXLDeviceState *cxl_dstate)
uint64_t dev_status_reg = cxl_dstate->mbox_reg_state64[R_CXL_MEM_DEV_STS];
return FIELD_EX64(dev_status_reg, CXL_MEM_DEV_STS, MEDIA_STATUS) == 0x3;
}
+
+static inline bool maintenance_running(CXLCCI *cci)
+{
+ return cci->bg.runtime && cci->bg.opcode == 0x0600;
+}
+
static inline bool scan_media_running(CXLCCI *cci)
{
return !!cci->bg.runtime && cci->bg.opcode == 0x4304;
@@ -443,6 +449,23 @@ typedef struct CXLError {
typedef QTAILQ_HEAD(, CXLError) CXLErrorList;
+typedef struct CXLMaintenance {
+ uint64_t dpa;
+ uint16_t validity_flags;
+ uint8_t channel;
+ uint8_t rank;
+ uint32_t nibble_mask;
+ uint8_t bank_group;
+ uint8_t bank;
+ uint32_t row;
+ uint16_t column;
+ uint8_t component_id[CXL_EVENT_GEN_MED_COMP_ID_SIZE];
+ uint8_t sub_channel;
+ QLIST_ENTRY(CXLMaintenance) node;
+} CXLMaintenance;
+
+typedef QLIST_HEAD(, CXLMaintenance) CXLMaintenanceList;
+
typedef struct CXLPoison {
uint64_t start, length;
uint8_t type;
@@ -455,6 +478,87 @@ typedef struct CXLPoison {
typedef QLIST_HEAD(, CXLPoison) CXLPoisonList;
#define CXL_POISON_LIST_LIMIT 256
+/* CXL memory maintenance operation */
+/*
+ * CXL r3.2 section 8.2.10.7.2, Table 8-125: Maintenance Operation:
+ * Classes, Subclasses, and Feature UUIDs
+ */
+#define CXL_MEMDEV_MAINT_CLASS_NO_OP 0x0
+#define CXL_MEMDEV_MAINT_CLASS_PPR 0x1
+#define CXL_MEMDEV_MAINT_CLASS_SPARING 0x2
+#define CXL_MEMDEV_MAINT_CLASS_DEV_BUILT_IN_TEST 0x3
+
+#define CXL_MEMDEV_MAINT_SUBCLASS_SPPR 0x0
+#define CXL_MEMDEV_MAINT_SUBCLASS_HPPR 0x1
+
+#define CXL_MEMDEV_MAINT_SUBCLASS_CACHELINE_SPARING 0x0
+#define CXL_MEMDEV_MAINT_SUBCLASS_ROW_SPARING 0x1
+#define CXL_MEMDEV_MAINT_SUBCLASS_BANK_SPARING 0x2
+#define CXL_MEMDEV_MAINT_SUBCLASS_RANK_SPARING 0x3
+
+/* CXL memory Post Package Repair control attributes */
+/*
+ * CXL r3.2 section 8.2.10.7.2.1, Table 8-128 and 8-129:
+ * sPPR Feature Readable/Writable Attributes
+ */
+typedef struct CXLMemSoftPPRReadAttrs {
+ uint8_t max_maint_latency;
+ uint16_t op_caps;
+ uint16_t op_mode;
+ uint8_t maint_op_class;
+ uint8_t maint_op_subclass;
+ uint8_t rsvd[9];
+ uint8_t sppr_flags;
+ uint16_t restriction_flags;
+ uint8_t sppr_op_mode;
+} QEMU_PACKED CXLMemSoftPPRReadAttrs;
+
+typedef struct CXLMemSoftPPRWriteAttrs {
+ uint16_t op_mode;
+ uint8_t sppr_op_mode;
+} QEMU_PACKED CXLMemSoftPPRWriteAttrs;
+
+#define CXL_MEMDEV_SPPR_GET_FEATURE_VERSION 0x03
+#define CXL_MEMDEV_SPPR_SET_FEATURE_VERSION 0x03
+#define CXL_MEMDEV_SPPR_DPA_SUPPORT_FLAG BIT(0)
+#define CXL_MEMDEV_SPPR_NIBBLE_SUPPORT_FLAG BIT(1)
+#define CXL_MEMDEV_SPPR_MEM_SPARING_EV_REC_CAP_FLAG BIT(2)
+#define CXL_MEMDEV_SPPR_DEV_INITIATED_AT_BOOT_CAP_FLAG BIT(3)
+
+#define CXL_MEMDEV_SPPR_OP_MODE_MEM_SPARING_EV_REC_EN BIT(0)
+#define CXL_MEMDEV_SPPR_OP_MODE_DEV_INITIATED_AT_BOOT BIT(1)
+
+/*
+ * CXL r3.2 section 8.2.10.7.2.2, Table 8-131 and 8-132:
+ * hPPR Feature Readable/Writable Attributes
+ */
+typedef struct CXLMemHardPPRReadAttrs {
+ uint8_t max_maint_latency;
+ uint16_t op_caps;
+ uint16_t op_mode;
+ uint8_t maint_op_class;
+ uint8_t maint_op_subclass;
+ uint8_t rsvd[9];
+ uint8_t hppr_flags;
+ uint16_t restriction_flags;
+ uint8_t hppr_op_mode;
+} QEMU_PACKED CXLMemHardPPRReadAttrs;
+
+typedef struct CXLMemHardPPRWriteAttrs {
+ uint16_t op_mode;
+ uint8_t hppr_op_mode;
+} QEMU_PACKED CXLMemHardPPRWriteAttrs;
+
+#define CXL_MEMDEV_HPPR_GET_FEATURE_VERSION 0x03
+#define CXL_MEMDEV_HPPR_SET_FEATURE_VERSION 0x03
+#define CXL_MEMDEV_HPPR_DPA_SUPPORT_FLAG BIT(0)
+#define CXL_MEMDEV_HPPR_NIBBLE_SUPPORT_FLAG BIT(1)
+#define CXL_MEMDEV_HPPR_MEM_SPARING_EV_REC_CAP_FLAG BIT(2)
+#define CXL_MEMDEV_HPPR_DEV_INITIATED_AT_BOOT_CAP_FLAG BIT(3)
+
+#define CXL_MEMDEV_HPPR_OP_MODE_MEM_SPARING_EV_REC_EN BIT(0)
+#define CXL_MEMDEV_HPPR_OP_MODE_DEV_INITIATED_AT_BOOT BIT(1)
+
/* CXL memory device patrol scrub control attributes */
typedef struct CXLMemPatrolScrubReadAttrs {
uint8_t scrub_cycle_cap;
@@ -605,6 +709,9 @@ struct CXLType3Dev {
/* Error injection */
CXLErrorList error_list;
+ /* Keep track of maintenance requests */
+ CXLMaintenanceList maint_list;
+
/* Poison Injection - cache */
CXLPoisonList poison_list;
unsigned int poison_list_cnt;
@@ -617,6 +724,11 @@ struct CXLType3Dev {
CXLSetFeatureInfo set_feat_info;
+ /* PPR control attributes */
+ CXLMemSoftPPRReadAttrs soft_ppr_attrs;
+ CXLMemSoftPPRWriteAttrs soft_ppr_wr_attrs;
+ CXLMemHardPPRReadAttrs hard_ppr_attrs;
+ CXLMemHardPPRWriteAttrs hard_ppr_wr_attrs;
/* Patrol scrub control attributes */
CXLMemPatrolScrubReadAttrs patrol_scrub_attrs;
CXLMemPatrolScrubWriteAttrs patrol_scrub_wr_attrs;
diff --git a/include/hw/cxl/cxl_events.h b/include/hw/cxl/cxl_events.h
index 4a7836ad72..3052bc9f18 100644
--- a/include/hw/cxl/cxl_events.h
+++ b/include/hw/cxl/cxl_events.h
@@ -219,4 +219,46 @@ typedef enum CXLDCEventType {
DC_EVENT_CAPACITY_RELEASED = 0x5,
} CXLDCEventType;
+/*
+ * CXL r3.2 section Table 8-60: Memory Sparing Event Record
+ * All fields little endian.
+ */
+#define CXL_MSER_VALID_CHANNEL BIT(0)
+#define CXL_MSER_VALID_RANK BIT(1)
+#define CXL_MSER_VALID_NIB_MASK BIT(2)
+#define CXL_MSER_VALID_BANK_GROUP BIT(3)
+#define CXL_MSER_VALID_BANK BIT(4)
+#define CXL_MSER_VALID_ROW BIT(5)
+#define CXL_MSER_VALID_COLUMN BIT(6)
+#define CXL_MSER_VALID_COMP_ID BIT(7)
+#define CXL_MSER_VALID_COMP_ID_FORMAT BIT(8)
+#define CXL_MSER_VALID_SUB_CHANNEL BIT(9)
+
+typedef struct CXLEventSparing {
+ CXLEventRecordHdr hdr;
+ uint8_t maint_op_class;
+ uint8_t maint_op_subclass;
+ uint8_t flags;
+ uint8_t result;
+ uint16_t validity_flags;
+ uint8_t reserved1[6];
+ uint16_t res_avail;
+ uint8_t channel;
+ uint8_t rank;
+ uint8_t nibble_mask[3];
+ uint8_t bank_group;
+ uint8_t bank;
+ uint8_t row[3];
+ uint16_t column;
+ uint8_t component_id[CXL_EVENT_GEN_MED_COMP_ID_SIZE];
+ uint8_t sub_channel;
+ uint8_t reserved2[0x25];
+} QEMU_PACKED CXLEventSparing;
+
+/* CXL r3.2 Table 8-60: Memory Sparing Event Record */
+static const QemuUUID sparing_uuid = {
+ .data = UUID(0xe71f3a40, 0x2d29, 0x4092, 0x8a, 0x39,
+ 0x4d, 0x1c, 0x96, 0x6c, 0x7c, 0x65),
+};
+
#endif /* CXL_EVENTS_H */
diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index c05852836d..29424e7eb1 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -86,6 +86,8 @@ enum {
#define GET_SUPPORTED 0x0
#define GET_FEATURE 0x1
#define SET_FEATURE 0x2
+ MAINTENANCE = 0x06,
+ #define PERFORM 0x0
IDENTIFY = 0x40,
#define MEMORY_DEVICE 0x0
CCLS = 0x41,
@@ -1111,8 +1113,8 @@ typedef struct CXLSupportedFeatureEntry {
#define CXL_FEAT_ENTRY_ATTR_FLAG_CHANGABLE BIT(0)
#define CXL_FEAT_ENTRY_ATTR_FLAG_DEEPEST_RESET_PERSISTENCE_MASK GENMASK(3, 1)
#define CXL_FEAT_ENTRY_ATTR_FLAG_PERSIST_ACROSS_FIRMWARE_UPDATE BIT(4)
-#define CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_DEFAULT_SELECTION BIT(5)
-#define CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_SAVED_SELECTION BIT(6)
+#define CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_DEFAULT_SEL BIT(5)
+#define CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_SAVED_SEL BIT(6)
/* Supported Feature Entry : set feature effects */
#define CXL_FEAT_ENTRY_SFE_CONFIG_CHANGE_COLD_RESET BIT(0)
@@ -1131,6 +1133,8 @@ typedef struct CXLSupportedFeatureEntry {
enum CXL_SUPPORTED_FEATURES_LIST {
CXL_FEATURE_PATROL_SCRUB = 0,
CXL_FEATURE_ECS,
+ CXL_FEATURE_SPPR,
+ CXL_FEATURE_HPPR,
CXL_FEATURE_MAX
};
@@ -1172,6 +1176,28 @@ enum CXL_SET_FEATURE_FLAG_DATA_TRANSFER {
};
#define CXL_SET_FEAT_DATA_SAVED_ACROSS_RESET BIT(3)
+/* CXL r3.2 section 8.2.10.7.2.1: sPPR Feature Discovery and Configuration */
+static const QemuUUID soft_ppr_uuid = {
+ .data = UUID(0x892ba475, 0xfad8, 0x474e, 0x9d, 0x3e,
+ 0x69, 0x2c, 0x91, 0x75, 0x68, 0xbb)
+};
+
+typedef struct CXLMemSoftPPRSetFeature {
+ CXLSetFeatureInHeader hdr;
+ CXLMemSoftPPRWriteAttrs feat_data;
+} QEMU_PACKED QEMU_ALIGNED(16) CXLMemSoftPPRSetFeature;
+
+/* CXL r3.2 section 8.2.10.7.2.2: hPPR Feature Discovery and Configuration */
+static const QemuUUID hard_ppr_uuid = {
+ .data = UUID(0x80ea4521, 0x786f, 0x4127, 0xaf, 0xb1,
+ 0xec, 0x74, 0x59, 0xfb, 0x0e, 0x24)
+};
+
+typedef struct CXLMemHardPPRSetFeature {
+ CXLSetFeatureInHeader hdr;
+ CXLMemHardPPRWriteAttrs feat_data;
+} QEMU_PACKED QEMU_ALIGNED(16) CXLMemHardPPRSetFeature;
+
/* CXL r3.1 section 8.2.9.9.11.1: Device Patrol Scrub Control Feature */
static const QemuUUID patrol_scrub_uuid = {
.data = UUID(0x96dad7d6, 0xfde8, 0x482b, 0xa7, 0x33,
@@ -1235,6 +1261,38 @@ static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
for (entry = 0, index = get_feats_in->start_index;
entry < req_entries; index++) {
switch (index) {
+ case CXL_FEATURE_SPPR:
+ /* Fill supported feature entry for soft-PPR */
+ get_feats_out->feat_entries[entry++] =
+ (struct CXLSupportedFeatureEntry) {
+ .uuid = soft_ppr_uuid,
+ .feat_index = index,
+ .get_feat_size = sizeof(CXLMemSoftPPRReadAttrs),
+ .set_feat_size = sizeof(CXLMemSoftPPRWriteAttrs),
+ .attr_flags = CXL_FEAT_ENTRY_ATTR_FLAG_CHANGABLE |
+ CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_DEFAULT_SEL,
+ .get_feat_version = CXL_MEMDEV_SPPR_GET_FEATURE_VERSION,
+ .set_feat_version = CXL_MEMDEV_SPPR_SET_FEATURE_VERSION,
+ .set_feat_effects = CXL_FEAT_ENTRY_SFE_IMMEDIATE_CONFIG_CHANGE |
+ CXL_FEAT_ENTRY_SFE_CEL_VALID,
+ };
+ break;
+ case CXL_FEATURE_HPPR:
+ /* Fill supported feature entry for hard-PPR */
+ get_feats_out->feat_entries[entry++] =
+ (struct CXLSupportedFeatureEntry) {
+ .uuid = hard_ppr_uuid,
+ .feat_index = index,
+ .get_feat_size = sizeof(CXLMemHardPPRReadAttrs),
+ .set_feat_size = sizeof(CXLMemHardPPRWriteAttrs),
+ .attr_flags = CXL_FEAT_ENTRY_ATTR_FLAG_CHANGABLE |
+ CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_DEFAULT_SEL,
+ .get_feat_version = CXL_MEMDEV_HPPR_GET_FEATURE_VERSION,
+ .set_feat_version = CXL_MEMDEV_HPPR_SET_FEATURE_VERSION,
+ .set_feat_effects = CXL_FEAT_ENTRY_SFE_IMMEDIATE_CONFIG_CHANGE |
+ CXL_FEAT_ENTRY_SFE_CEL_VALID,
+ };
+ break;
case CXL_FEATURE_PATROL_SCRUB:
/* Fill supported feature entry for device patrol scrub control */
get_feats_out->feat_entries[entry++] =
@@ -1333,6 +1391,26 @@ static CXLRetCode cmd_features_get_feature(const struct cxl_cmd *cmd,
memcpy(payload_out,
(uint8_t *)&ct3d->ecs_attrs + get_feature->offset,
bytes_to_copy);
+ } else if (qemu_uuid_is_equal(&get_feature->uuid, &soft_ppr_uuid)) {
+ if (get_feature->offset >= sizeof(CXLMemSoftPPRReadAttrs)) {
+ return CXL_MBOX_INVALID_INPUT;
+ }
+ bytes_to_copy = sizeof(CXLMemSoftPPRReadAttrs) -
+ get_feature->offset;
+ bytes_to_copy = MIN(bytes_to_copy, get_feature->count);
+ memcpy(payload_out,
+ (uint8_t *)&ct3d->soft_ppr_attrs + get_feature->offset,
+ bytes_to_copy);
+ } else if (qemu_uuid_is_equal(&get_feature->uuid, &hard_ppr_uuid)) {
+ if (get_feature->offset >= sizeof(CXLMemHardPPRReadAttrs)) {
+ return CXL_MBOX_INVALID_INPUT;
+ }
+ bytes_to_copy = sizeof(CXLMemHardPPRReadAttrs) -
+ get_feature->offset;
+ bytes_to_copy = MIN(bytes_to_copy, get_feature->count);
+ memcpy(payload_out,
+ (uint8_t *)&ct3d->hard_ppr_attrs + get_feature->offset,
+ bytes_to_copy);
} else {
return CXL_MBOX_UNSUPPORTED;
}
@@ -1444,6 +1522,44 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
ct3d->ecs_wr_attrs.fru_attrs[count].ecs_config & 0x1F;
}
}
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &soft_ppr_uuid)) {
+ CXLMemSoftPPRSetFeature *sppr_set_feature = (void *)payload_in;
+ CXLMemSoftPPRWriteAttrs *sppr_write_attrs =
+ &sppr_set_feature->feat_data;
+
+ if (hdr->version != CXL_MEMDEV_SPPR_SET_FEATURE_VERSION) {
+ return CXL_MBOX_UNSUPPORTED;
+ }
+
+ memcpy((uint8_t *)&ct3d->soft_ppr_wr_attrs + hdr->offset,
+ sppr_write_attrs, bytes_to_copy);
+ set_feat_info->data_size += bytes_to_copy;
+
+ if (data_transfer_flag == CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER ||
+ data_transfer_flag == CXL_SET_FEATURE_FLAG_FINISH_DATA_TRANSFER) {
+ ct3d->soft_ppr_attrs.op_mode = ct3d->soft_ppr_wr_attrs.op_mode;
+ ct3d->soft_ppr_attrs.sppr_op_mode =
+ ct3d->soft_ppr_wr_attrs.sppr_op_mode;
+ }
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &hard_ppr_uuid)) {
+ CXLMemHardPPRSetFeature *hppr_set_feature = (void *)payload_in;
+ CXLMemHardPPRWriteAttrs *hppr_write_attrs =
+ &hppr_set_feature->feat_data;
+
+ if (hdr->version != CXL_MEMDEV_HPPR_SET_FEATURE_VERSION) {
+ return CXL_MBOX_UNSUPPORTED;
+ }
+
+ memcpy((uint8_t *)&ct3d->hard_ppr_wr_attrs + hdr->offset,
+ hppr_write_attrs, bytes_to_copy);
+ set_feat_info->data_size += bytes_to_copy;
+
+ if (data_transfer_flag == CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER ||
+ data_transfer_flag == CXL_SET_FEATURE_FLAG_FINISH_DATA_TRANSFER) {
+ ct3d->hard_ppr_attrs.op_mode = ct3d->hard_ppr_wr_attrs.op_mode;
+ ct3d->hard_ppr_attrs.hppr_op_mode =
+ ct3d->hard_ppr_wr_attrs.hppr_op_mode;
+ }
} else {
return CXL_MBOX_UNSUPPORTED;
}
@@ -1456,6 +1572,10 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
memset(&ct3d->patrol_scrub_wr_attrs, 0, set_feat_info->data_size);
} else if (qemu_uuid_is_equal(&hdr->uuid, &ecs_uuid)) {
memset(&ct3d->ecs_wr_attrs, 0, set_feat_info->data_size);
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &soft_ppr_uuid)) {
+ memset(&ct3d->soft_ppr_wr_attrs, 0, set_feat_info->data_size);
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &hard_ppr_uuid)) {
+ memset(&ct3d->hard_ppr_wr_attrs, 0, set_feat_info->data_size);
}
set_feat_info->data_transfer_flag = 0;
set_feat_info->data_saved_across_reset = false;
@@ -1466,6 +1586,116 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
return CXL_MBOX_SUCCESS;
}
+static void cxl_mbox_create_mem_sparing_event_records(CXLType3Dev *ct3d,
+ uint8_t maint_op_class, uint8_t maint_op_sub_class,
+ CXLMaintenance *ent)
+{
+ CXLEventSparing event_rec = {};
+
+ cxl_assign_event_header(&event_rec.hdr,
+ &sparing_uuid,
+ (1 << CXL_EVENT_TYPE_INFO),
+ sizeof(event_rec),
+ cxl_device_get_timestamp(&ct3d->cxl_dstate),
+ 1, maint_op_class, 1, maint_op_sub_class,
+ 0, 0, 0, 0);
+ if (ent) {
+ event_rec.flags = 0;
+ event_rec.result = 0;
+ stw_le_p(&event_rec.res_avail, 2);
+ stw_le_p(&event_rec.validity_flags, ent->validity_flags);
+ event_rec.channel = ent->channel;
+ event_rec.rank = ent->rank;
+ st24_le_p(event_rec.nibble_mask, ent->nibble_mask);
+ event_rec.bank_group = ent->bank_group;
+ event_rec.bank = ent->bank;
+ st24_le_p(event_rec.row, ent->row);
+ stw_le_p(&event_rec.column, ent->column);
+ event_rec.sub_channel = ent->sub_channel;
+ if (ent->validity_flags & CXL_MSER_VALID_COMP_ID) {
+ strncpy((char *)event_rec.component_id, (char *)ent->component_id,
+ sizeof(event_rec.component_id));
+ }
+ } else {
+ return;
+ }
+
+ if (cxl_event_insert(&ct3d->cxl_dstate,
+ CXL_EVENT_TYPE_INFO,
+ (CXLEventRecordRaw *)&event_rec)) {
+ cxl_event_irq_assert(ct3d);
+ }
+}
+
+
+static void cxl_perform_ppr(CXLType3Dev *ct3d, uint64_t dpa)
+{
+ CXLMaintenance *ent, *next;
+
+ QLIST_FOREACH_SAFE(ent, &ct3d->maint_list, node, next) {
+ if (dpa == ent->dpa) {
+ /* Produce a Memory Sparing Event Record */
+ if (ct3d->soft_ppr_attrs.sppr_op_mode &
+ CXL_MEMDEV_SPPR_OP_MODE_MEM_SPARING_EV_REC_EN) {
+ cxl_mbox_create_mem_sparing_event_records(ct3d,
+ CXL_MEMDEV_MAINT_CLASS_SPARING,
+ CXL_MEMDEV_MAINT_SUBCLASS_CACHELINE_SPARING,
+ ent);
+ }
+ break;
+ }
+ }
+}
+
+/* CXL r3.2 section 8.2.10.7.1 - Perform Maintenance (Opcode 0600h) */
+#define MAINTENANCE_PPR_QUERY_RESOURCES BIT(0)
+
+static CXLRetCode cmd_media_perform_maintenance(const struct cxl_cmd *cmd,
+ uint8_t *payload_in, size_t len_in,
+ uint8_t *payload_out, size_t *len_out,
+ CXLCCI *cci)
+{
+ struct {
+ uint8_t class;
+ uint8_t subclass;
+ union {
+ struct {
+ uint8_t flags;
+ uint64_t dpa;
+ uint8_t nibble_mask[3];
+ } QEMU_PACKED ppr;
+ };
+ } QEMU_PACKED *maint_in = (void *)payload_in;
+ CXLType3Dev *ct3d = CXL_TYPE3(cci->d);
+
+ if (maintenance_running(cci)) {
+ return CXL_MBOX_BUSY;
+ }
+
+ switch (maint_in->class) {
+ case CXL_MEMDEV_MAINT_CLASS_NO_OP:
+ return CXL_MBOX_SUCCESS; /* nop */
+ case CXL_MEMDEV_MAINT_CLASS_PPR:
+ if (maint_in->ppr.flags & MAINTENANCE_PPR_QUERY_RESOURCES) {
+ return CXL_MBOX_SUCCESS;
+ }
+
+ switch (maint_in->subclass) {
+ case CXL_MEMDEV_MAINT_SUBCLASS_SPPR:
+ case CXL_MEMDEV_MAINT_SUBCLASS_HPPR:
+ cxl_perform_ppr(ct3d, ldq_le_p(&maint_in->ppr.dpa));
+ return CXL_MBOX_SUCCESS;
+ default:
+ return CXL_MBOX_INVALID_INPUT;
+ }
+ break;
+ default:
+ return CXL_MBOX_INVALID_INPUT;
+ }
+
+ return CXL_MBOX_SUCCESS;
+}
+
/* CXL r3.1 Section 8.2.9.9.1.1: Identify Memory Device (Opcode 4000h) */
static CXLRetCode cmd_identify_memory_device(const struct cxl_cmd *cmd,
uint8_t *payload_in,
@@ -3762,6 +3992,12 @@ static const struct cxl_cmd cxl_cmd_set[256][256] = {
CXL_MBOX_IMMEDIATE_POLICY_CHANGE |
CXL_MBOX_IMMEDIATE_LOG_CHANGE |
CXL_MBOX_SECURITY_STATE_CHANGE)},
+ [MAINTENANCE][PERFORM] = { "MAINTENANCE_PERFORM",
+ cmd_media_perform_maintenance, ~0,
+ CXL_MBOX_IMMEDIATE_CONFIG_CHANGE |
+ CXL_MBOX_IMMEDIATE_DATA_CHANGE |
+ CXL_MBOX_IMMEDIATE_LOG_CHANGE |
+ CXL_MBOX_BACKGROUND_OPERATION },
[IDENTIFY][MEMORY_DEVICE] = { "IDENTIFY_MEMORY_DEVICE",
cmd_identify_memory_device, 0, 0 },
[CCLS][GET_PARTITION_INFO] = { "CCLS_GET_PARTITION_INFO",
diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index 14efaef9ad..284bae7479 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -964,6 +964,32 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp)
ct3d->ecs_attrs.fru_attrs[count].ecs_flags = 0;
}
+ /* Set default values for soft-PPR attributes */
+ ct3d->soft_ppr_attrs = (CXLMemSoftPPRReadAttrs) {
+ .max_maint_latency = 0x5, /* 100 ms */
+ .op_caps = 0, /* require host involvement */
+ .op_mode = 0,
+ .maint_op_class = CXL_MEMDEV_MAINT_CLASS_PPR,
+ .maint_op_subclass = CXL_MEMDEV_MAINT_SUBCLASS_SPPR,
+ .sppr_flags = CXL_MEMDEV_SPPR_DPA_SUPPORT_FLAG |
+ CXL_MEMDEV_SPPR_MEM_SPARING_EV_REC_CAP_FLAG,
+ .restriction_flags = 0,
+ .sppr_op_mode = CXL_MEMDEV_SPPR_OP_MODE_MEM_SPARING_EV_REC_EN
+ };
+
+ /* Set default value for hard-PPR attributes */
+ ct3d->hard_ppr_attrs = (CXLMemHardPPRReadAttrs) {
+ .max_maint_latency = 0x5, /* 100 ms */
+ .op_caps = 0, /* require host involvement */
+ .op_mode = 0,
+ .maint_op_class = CXL_MEMDEV_MAINT_CLASS_PPR,
+ .maint_op_subclass = CXL_MEMDEV_MAINT_SUBCLASS_HPPR,
+ .hppr_flags = CXL_MEMDEV_HPPR_DPA_SUPPORT_FLAG |
+ CXL_MEMDEV_HPPR_MEM_SPARING_EV_REC_CAP_FLAG,
+ .restriction_flags = 0,
+ .hppr_op_mode = CXL_MEMDEV_HPPR_OP_MODE_MEM_SPARING_EV_REC_EN
+ };
+
return;
err_release_cdat:
@@ -1667,6 +1693,75 @@ static int ct3d_qmp_cxl_event_log_enc(CxlEventLog log)
return -EINVAL;
}
}
+
+static void cxl_maintenance_insert(CXLType3Dev *ct3d, uint64_t dpa,
+ bool has_channel, uint8_t channel,
+ bool has_rank, uint8_t rank,
+ bool has_nibble_mask, uint32_t nibble_mask,
+ bool has_bank_group, uint8_t bank_group,
+ bool has_bank, uint8_t bank,
+ bool has_row, uint32_t row,
+ bool has_column, uint16_t column,
+ const char *component_id,
+ bool has_comp_id_pldm, bool is_comp_id_pldm,
+ bool has_sub_channel, uint8_t sub_channel)
+{
+ CXLMaintenance *ent, *m;
+
+ QLIST_FOREACH(ent, &ct3d->maint_list, node) {
+ if (dpa == ent->dpa) {
+ return;
+ }
+ }
+ m = g_new0(CXLMaintenance, 1);
+ memset(m, 0, sizeof(*m));
+ m->dpa = dpa;
+ m->validity_flags = 0;
+
+ if (has_channel) {
+ m->channel = channel;
+ m->validity_flags |= CXL_MSER_VALID_CHANNEL;
+ }
+ if (has_rank) {
+ m->rank = rank;
+ m->validity_flags |= CXL_MSER_VALID_RANK;
+ }
+ if (has_nibble_mask) {
+ m->nibble_mask = nibble_mask;
+ m->validity_flags |= CXL_MSER_VALID_NIB_MASK;
+ }
+ if (has_bank_group) {
+ m->bank_group = bank_group;
+ m->validity_flags |= CXL_MSER_VALID_BANK_GROUP;
+ }
+ if (has_bank) {
+ m->bank = bank;
+ m->validity_flags |= CXL_MSER_VALID_BANK;
+ }
+ if (has_row) {
+ m->row = row;
+ m->validity_flags |= CXL_MSER_VALID_ROW;
+ }
+ if (has_column) {
+ m->column = column;
+ m->validity_flags |= CXL_MSER_VALID_COLUMN;
+ }
+ if (has_sub_channel) {
+ m->sub_channel = sub_channel;
+ m->validity_flags |= CXL_MSER_VALID_SUB_CHANNEL;
+ }
+ if (component_id) {
+ strncpy((char *)m->component_id, component_id,
+ sizeof(m->component_id) - 1);
+ m->validity_flags |= CXL_MSER_VALID_COMP_ID;
+ if (has_comp_id_pldm && is_comp_id_pldm) {
+ m->validity_flags |= CXL_MSER_VALID_COMP_ID_FORMAT;
+ }
+ }
+
+ QLIST_INSERT_HEAD(&ct3d->maint_list, m, node);
+}
+
/* Component ID is device specific. Define this as a string. */
void qmp_cxl_inject_general_media_event(const char *path, CxlEventLog log,
uint32_t flags, bool has_maint_op_class,
@@ -1715,6 +1810,11 @@ void qmp_cxl_inject_general_media_event(const char *path, CxlEventLog log,
error_setg(errp, "Unhandled error log type");
return;
}
+ if (rc == CXL_EVENT_TYPE_INFO &&
+ (flags & CXL_EVENT_REC_FLAGS_MAINT_NEEDED)) {
+ error_setg(errp, "Informational event cannot require maintenance");
+ return;
+ }
enc_log = rc;
memset(&gem, 0, sizeof(gem));
@@ -1773,6 +1873,15 @@ void qmp_cxl_inject_general_media_event(const char *path, CxlEventLog log,
if (cxl_event_insert(cxlds, enc_log, (CXLEventRecordRaw *)&gem)) {
cxl_event_irq_assert(ct3d);
}
+
+ if (flags & CXL_EVENT_REC_FLAGS_MAINT_NEEDED) {
+ cxl_maintenance_insert(ct3d, dpa, has_channel, channel,
+ has_rank, rank,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, component_id,
+ has_comp_id_pldm, is_comp_id_pldm,
+ 0, 0);
+ }
}
#define CXL_DRAM_VALID_CHANNEL BIT(0)
@@ -1842,6 +1951,11 @@ void qmp_cxl_inject_dram_event(const char *path, CxlEventLog log,
error_setg(errp, "Unhandled error log type");
return;
}
+ if (rc == CXL_EVENT_TYPE_INFO &&
+ (flags & CXL_EVENT_REC_FLAGS_MAINT_NEEDED)) {
+ error_setg(errp, "Informational event cannot require maintenance");
+ return;
+ }
enc_log = rc;
memset(&dram, 0, sizeof(dram));
@@ -1935,6 +2049,17 @@ void qmp_cxl_inject_dram_event(const char *path, CxlEventLog log,
if (cxl_event_insert(cxlds, enc_log, (CXLEventRecordRaw *)&dram)) {
cxl_event_irq_assert(ct3d);
}
+
+ if (flags & CXL_EVENT_REC_FLAGS_MAINT_NEEDED) {
+ cxl_maintenance_insert(ct3d, dpa, has_channel, channel,
+ has_rank, rank,
+ has_nibble_mask, nibble_mask,
+ has_bank_group, bank_group,
+ has_bank, bank, has_row, row,
+ has_column, column, component_id,
+ has_comp_id_pldm, is_comp_id_pldm,
+ has_sub_channel, sub_channel);
+ }
}
#define CXL_MMER_VALID_COMPONENT BIT(0)
--
2.48.1
* [PATCH qemu for 10.2 3/3] hw/cxl: Add emulation for memory sparing control feature
From: Jonathan Cameron via @ 2025-09-17 14:33 UTC (permalink / raw)
To: Michael Tsirkin, qemu-devel, shiju.jose, Davidlohr Bueso
Cc: armbru, Fan Ni, linux-cxl, linuxarm, Ravi Shankar
From: Shiju Jose <shiju.jose@huawei.com>
Memory sparing is defined as a repair function that replaces a portion of
memory with a portion of functional memory at that same DPA. The
subclasses for this operation vary in terms of the scope of the sparing
being performed. The Cacheline sparing subclass refers to a sparing
action that can replace a full cacheline. Row sparing is provided as an
alternative to PPR sparing functions and its scope is that of a single
DDR row. Bank sparing allows an entire bank to be replaced. Rank sparing
is defined as an operation in which an entire DDR rank is replaced.
Memory sparing maintenance operations may be supported by CXL devices
that implement CXL.mem protocol. A sparing maintenance operation requests
the CXL device to perform a repair operation on its media.
For example, a CXL device with DRAM components that support memory sparing
features may implement sparing Maintenance operations.
The host may issue a query command by setting the Query Resources flag in the
Input Payload (CXL Spec 3.2 Table 8-120) to determine availability of
sparing resources for a given address. In response to a query request,
the device shall report the resource availability by producing the Memory
Sparing Event Record (CXL Spec 3.2 Table 8-60) in which the Channel, Rank,
Nibble Mask, Bank Group, Bank, Row, Column, Sub-Channel fields are a copy
of the values specified in the request.
During the execution of a sparing maintenance operation, a CXL memory
device:
- May or may not retain data
- May or may not be able to process CXL.mem requests correctly.
These CXL memory device capabilities are specified by restriction flags
in the memory sparing feature readable attributes.
When a CXL device identifies an error on a memory component, the device
may inform the host about the need for a memory sparing maintenance
operation by using a DRAM event record in which the 'maintenance needed'
flag is set. The event record contains some of the DPA, Channel, Rank,
Nibble Mask, Bank Group, Bank, Row, Column and Sub-Channel fields that
identify the memory to be repaired. The userspace tool requests a
maintenance operation if the 'maintenance needed' flag is set in the CXL
DRAM error record.
CXL spec 3.2 section 8.2.10.7.2.3 describes the memory sparing feature
discovery and configuration.
CXL spec 3.2 section 8.2.10.7.1.4 describes the device's memory sparing
maintenance operation feature.
Add emulation for CXL memory device memory sparing control feature
and memory sparing maintenance operation command.
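As an illustration, here is a minimal sketch of the Perform Maintenance
input payload for a row sparing Query Resources request. It mirrors the
packed CXLMemSparingMaintInPayload parsed by this patch; the struct and
helper names and the example field values are hypothetical, and a real
caller would take the geometry fields from the DRAM event record that
reported 'maintenance needed'.
/*
 * Illustrative sketch only: lay out the Perform Maintenance input
 * payload for a row-sparing Query Resources request. The field order
 * mirrors the packed payload parsed by this patch (class, subclass,
 * then the sparing-specific fields); names and values are hypothetical.
 */
#include <stdint.h>
#include <string.h>

struct sparing_maint_in {
    uint8_t class;              /* 0x2 = memory sparing class */
    uint8_t subclass;           /* 0x1 = row sparing, for example */
    uint8_t flags;              /* bit 0 = query resources */
    uint8_t channel;
    uint8_t rank;
    uint8_t nibble_mask[3];
    uint8_t bank_group;
    uint8_t bank;
    uint8_t row[3];
    uint16_t column;            /* little endian on the wire */
    uint8_t sub_channel;
} __attribute__((packed));

static size_t build_row_sparing_query(uint8_t *buf)
{
    struct sparing_maint_in in = {
        .class      = 0x2,      /* CXL_MEMDEV_MAINT_CLASS_SPARING */
        .subclass   = 0x1,      /* CXL_MEMDEV_MAINT_SUBCLASS_ROW_SPARING */
        .flags      = 0x1,      /* query resource availability only */
        .channel    = 0,
        .rank       = 0,
        .bank_group = 0,
        .bank       = 0,
        .row        = { 0x10, 0x00, 0x00 },  /* example row 0x10 */
        .column     = 0,
    };

    memcpy(buf, &in, sizeof(in));
    return sizeof(in);
}
In response to such a query, the emulated device produces a Memory
Sparing Event Record that copies these fields back to the host to report
resource availability.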
Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
include/hw/cxl/cxl_device.h | 33 ++++
include/hw/cxl/cxl_events.h | 5 +
hw/cxl/cxl-mailbox-utils.c | 317 +++++++++++++++++++++++++++++++++++-
hw/mem/cxl_type3.c | 44 +++++
4 files changed, 395 insertions(+), 4 deletions(-)
diff --git a/include/hw/cxl/cxl_device.h b/include/hw/cxl/cxl_device.h
index 163b0ff7ed..1ec2fca60c 100644
--- a/include/hw/cxl/cxl_device.h
+++ b/include/hw/cxl/cxl_device.h
@@ -608,6 +608,30 @@ typedef struct CXLMemECSWriteAttrs {
CXLMemECSFRUWriteAttrs fru_attrs[CXL_ECS_NUM_MEDIA_FRUS];
} QEMU_PACKED CXLMemECSWriteAttrs;
+/*
+ * CXL r3.2 section 8.2.10.7.2.3, Table 8-134 and 8-135:
+ * Memory Sparing Feature Readable/Writable Attributes
+ */
+typedef struct CXLMemSparingReadAttrs {
+ uint8_t max_maint_latency;
+ uint16_t op_caps;
+ uint16_t op_mode;
+ uint8_t maint_op_class;
+ uint8_t maint_op_subclass;
+ uint8_t rsvd[10];
+ uint16_t restriction_flags;
+} QEMU_PACKED CXLMemSparingReadAttrs;
+
+typedef struct CXLMemSparingWriteAttrs {
+ uint16_t op_mode;
+} QEMU_PACKED CXLMemSparingWriteAttrs;
+
+#define CXL_MEMDEV_SPARING_GET_FEATURE_VERSION 0x01
+#define CXL_MEMDEV_SPARING_SET_FEATURE_VERSION 0x01
+#define CXL_MEMDEV_SPARING_SAFE_IN_USE_FLAG BIT(0)
+#define CXL_MEMDEV_HARD_SPARING_SUPPORT_FLAG BIT(1)
+#define CXL_MEMDEV_SOFT_SPARING_SUPPORT_FLAG BIT(2)
+
#define DCD_MAX_NUM_REGION 8
typedef struct CXLDCExtentRaw {
@@ -735,6 +759,15 @@ struct CXLType3Dev {
/* ECS control attributes */
CXLMemECSReadAttrs ecs_attrs;
CXLMemECSWriteAttrs ecs_wr_attrs;
+ /* Memory Sparing control attributes */
+ CXLMemSparingReadAttrs cacheline_sparing_attrs;
+ CXLMemSparingWriteAttrs cacheline_sparing_wr_attrs;
+ CXLMemSparingReadAttrs row_sparing_attrs;
+ CXLMemSparingWriteAttrs row_sparing_wr_attrs;
+ CXLMemSparingReadAttrs bank_sparing_attrs;
+ CXLMemSparingWriteAttrs bank_sparing_wr_attrs;
+ CXLMemSparingReadAttrs rank_sparing_attrs;
+ CXLMemSparingWriteAttrs rank_sparing_wr_attrs;
struct dynamic_capacity {
HostMemoryBackend *host_dc;
diff --git a/include/hw/cxl/cxl_events.h b/include/hw/cxl/cxl_events.h
index 3052bc9f18..89f6aad531 100644
--- a/include/hw/cxl/cxl_events.h
+++ b/include/hw/cxl/cxl_events.h
@@ -223,6 +223,11 @@ typedef enum CXLDCEventType {
* CXL r3.2 section Table 8-60: Memory Sparing Event Record
* All fields little endian.
*/
+
+#define CXL_MSER_FLAGS_QUERY_RESOURCES BIT(0)
+#define CXL_MSER_FLAGS_HARD_SPARING BIT(1)
+#define CXL_MSER_FLAGS_DEV_INITIATED BIT(2)
+
#define CXL_MSER_VALID_CHANNEL BIT(0)
#define CXL_MSER_VALID_RANK BIT(1)
#define CXL_MSER_VALID_NIB_MASK BIT(2)
diff --git a/hw/cxl/cxl-mailbox-utils.c b/hw/cxl/cxl-mailbox-utils.c
index 29424e7eb1..84d762c2f5 100644
--- a/hw/cxl/cxl-mailbox-utils.c
+++ b/hw/cxl/cxl-mailbox-utils.c
@@ -1135,6 +1135,10 @@ enum CXL_SUPPORTED_FEATURES_LIST {
CXL_FEATURE_ECS,
CXL_FEATURE_SPPR,
CXL_FEATURE_HPPR,
+ CXL_FEATURE_CACHELINE_SPARING,
+ CXL_FEATURE_ROW_SPARING,
+ CXL_FEATURE_BANK_SPARING,
+ CXL_FEATURE_RANK_SPARING,
CXL_FEATURE_MAX
};
@@ -1223,6 +1227,35 @@ typedef struct CXLMemECSSetFeature {
CXLMemECSWriteAttrs feat_data[];
} QEMU_PACKED QEMU_ALIGNED(16) CXLMemECSSetFeature;
+/*
+ * CXL r3.2 section 8.2.10.7.2.3:
+ * Memory Sparing Features Discovery and Configuration
+ */
+static const QemuUUID cacheline_sparing_uuid = {
+ .data = UUID(0x96C33386, 0x91dd, 0x44c7, 0x9e, 0xcb,
+ 0xfd, 0xaf, 0x65, 0x03, 0xba, 0xc4)
+};
+
+static const QemuUUID row_sparing_uuid = {
+ .data = UUID(0x450ebf67, 0xb135, 0x4f97, 0xa4, 0x98,
+ 0xc2, 0xd5, 0x7f, 0x27, 0x9b, 0xed)
+};
+
+static const QemuUUID bank_sparing_uuid = {
+ .data = UUID(0x78b79636, 0x90ac, 0x4b64, 0xa4, 0xef,
+ 0xfa, 0xac, 0x5d, 0x18, 0xa8, 0x63)
+};
+
+static const QemuUUID rank_sparing_uuid = {
+ .data = UUID(0x34dbaff5, 0x0552, 0x4281, 0x8f, 0x76,
+ 0xda, 0x0b, 0x5e, 0x7a, 0x76, 0xa7)
+};
+
+typedef struct CXLMemSparingSetFeature {
+ CXLSetFeatureInHeader hdr;
+ CXLMemSparingWriteAttrs feat_data;
+} QEMU_PACKED QEMU_ALIGNED(16) CXLMemSparingSetFeature;
+
/* CXL r3.1 section 8.2.9.6.1: Get Supported Features (Opcode 0500h) */
static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
uint8_t *payload_in,
@@ -1323,6 +1356,70 @@ static CXLRetCode cmd_features_get_supported(const struct cxl_cmd *cmd,
CXL_FEAT_ENTRY_SFE_CEL_VALID,
};
break;
+ case CXL_FEATURE_CACHELINE_SPARING:
+ /* Fill supported feature entry for Cacheline Memory Sparing */
+ get_feats_out->feat_entries[entry++] =
+ (struct CXLSupportedFeatureEntry) {
+ .uuid = cacheline_sparing_uuid,
+ .feat_index = index,
+ .get_feat_size = sizeof(CXLMemSparingReadAttrs),
+ .set_feat_size = sizeof(CXLMemSparingWriteAttrs),
+ .attr_flags = CXL_FEAT_ENTRY_ATTR_FLAG_CHANGABLE |
+ CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_DEFAULT_SEL,
+ .get_feat_version = CXL_MEMDEV_SPARING_GET_FEATURE_VERSION,
+ .set_feat_version = CXL_MEMDEV_SPARING_SET_FEATURE_VERSION,
+ .set_feat_effects = CXL_FEAT_ENTRY_SFE_IMMEDIATE_CONFIG_CHANGE |
+ CXL_FEAT_ENTRY_SFE_CEL_VALID,
+ };
+ break;
+ case CXL_FEATURE_ROW_SPARING:
+ /* Fill supported feature entry for Row Memory Sparing */
+ get_feats_out->feat_entries[entry++] =
+ (struct CXLSupportedFeatureEntry) {
+ .uuid = row_sparing_uuid,
+ .feat_index = index,
+ .get_feat_size = sizeof(CXLMemSparingReadAttrs),
+ .set_feat_size = sizeof(CXLMemSparingWriteAttrs),
+ .attr_flags = CXL_FEAT_ENTRY_ATTR_FLAG_CHANGABLE |
+ CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_DEFAULT_SEL,
+ .get_feat_version = CXL_MEMDEV_SPARING_GET_FEATURE_VERSION,
+ .set_feat_version = CXL_MEMDEV_SPARING_SET_FEATURE_VERSION,
+ .set_feat_effects = CXL_FEAT_ENTRY_SFE_IMMEDIATE_CONFIG_CHANGE |
+ CXL_FEAT_ENTRY_SFE_CEL_VALID,
+ };
+ break;
+ case CXL_FEATURE_BANK_SPARING:
+ /* Fill supported feature entry for Bank Memory Sparing */
+ get_feats_out->feat_entries[entry++] =
+ (struct CXLSupportedFeatureEntry) {
+ .uuid = bank_sparing_uuid,
+ .feat_index = index,
+ .get_feat_size = sizeof(CXLMemSparingReadAttrs),
+ .set_feat_size = sizeof(CXLMemSparingWriteAttrs),
+ .attr_flags = CXL_FEAT_ENTRY_ATTR_FLAG_CHANGABLE |
+ CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_DEFAULT_SEL,
+ .get_feat_version = CXL_MEMDEV_SPARING_GET_FEATURE_VERSION,
+ .set_feat_version = CXL_MEMDEV_SPARING_SET_FEATURE_VERSION,
+ .set_feat_effects = CXL_FEAT_ENTRY_SFE_IMMEDIATE_CONFIG_CHANGE |
+ CXL_FEAT_ENTRY_SFE_CEL_VALID,
+ };
+ break;
+ case CXL_FEATURE_RANK_SPARING:
+ /* Fill supported feature entry for Rank Memory Sparing */
+ get_feats_out->feat_entries[entry++] =
+ (struct CXLSupportedFeatureEntry) {
+ .uuid = rank_sparing_uuid,
+ .feat_index = index,
+ .get_feat_size = sizeof(CXLMemSparingReadAttrs),
+ .set_feat_size = sizeof(CXLMemSparingWriteAttrs),
+ .attr_flags = CXL_FEAT_ENTRY_ATTR_FLAG_CHANGABLE |
+ CXL_FEAT_ENTRY_ATTR_FLAG_SUPPORT_DEFAULT_SEL,
+ .get_feat_version = CXL_MEMDEV_SPARING_GET_FEATURE_VERSION,
+ .set_feat_version = CXL_MEMDEV_SPARING_SET_FEATURE_VERSION,
+ .set_feat_effects = CXL_FEAT_ENTRY_SFE_IMMEDIATE_CONFIG_CHANGE |
+ CXL_FEAT_ENTRY_SFE_CEL_VALID,
+ };
+ break;
default:
__builtin_unreachable();
}
@@ -1411,6 +1508,47 @@ static CXLRetCode cmd_features_get_feature(const struct cxl_cmd *cmd,
memcpy(payload_out,
(uint8_t *)&ct3d->hard_ppr_attrs + get_feature->offset,
bytes_to_copy);
+ } else if (qemu_uuid_is_equal(&get_feature->uuid,
+ &cacheline_sparing_uuid)) {
+ if (get_feature->offset >= sizeof(CXLMemSparingReadAttrs)) {
+ return CXL_MBOX_INVALID_INPUT;
+ }
+ bytes_to_copy = sizeof(CXLMemSparingReadAttrs) -
+ get_feature->offset;
+ bytes_to_copy = MIN(bytes_to_copy, get_feature->count);
+ memcpy(payload_out,
+ (uint8_t *)&ct3d->cacheline_sparing_attrs + get_feature->offset,
+ bytes_to_copy);
+ } else if (qemu_uuid_is_equal(&get_feature->uuid, &row_sparing_uuid)) {
+ if (get_feature->offset >= sizeof(CXLMemSparingReadAttrs)) {
+ return CXL_MBOX_INVALID_INPUT;
+ }
+ bytes_to_copy = sizeof(CXLMemSparingReadAttrs) -
+ get_feature->offset;
+ bytes_to_copy = MIN(bytes_to_copy, get_feature->count);
+ memcpy(payload_out,
+ (uint8_t *)&ct3d->row_sparing_attrs + get_feature->offset,
+ bytes_to_copy);
+ } else if (qemu_uuid_is_equal(&get_feature->uuid, &bank_sparing_uuid)) {
+ if (get_feature->offset >= sizeof(CXLMemSparingReadAttrs)) {
+ return CXL_MBOX_INVALID_INPUT;
+ }
+ bytes_to_copy = sizeof(CXLMemSparingReadAttrs) -
+ get_feature->offset;
+ bytes_to_copy = MIN(bytes_to_copy, get_feature->count);
+ memcpy(payload_out,
+ (uint8_t *)&ct3d->bank_sparing_attrs + get_feature->offset,
+ bytes_to_copy);
+ } else if (qemu_uuid_is_equal(&get_feature->uuid, &rank_sparing_uuid)) {
+ if (get_feature->offset >= sizeof(CXLMemSparingReadAttrs)) {
+ return CXL_MBOX_INVALID_INPUT;
+ }
+ bytes_to_copy = sizeof(CXLMemSparingReadAttrs) -
+ get_feature->offset;
+ bytes_to_copy = MIN(bytes_to_copy, get_feature->count);
+ memcpy(payload_out,
+ (uint8_t *)&ct3d->rank_sparing_attrs + get_feature->offset,
+ bytes_to_copy);
} else {
return CXL_MBOX_UNSUPPORTED;
}
@@ -1560,6 +1698,78 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
ct3d->hard_ppr_attrs.hppr_op_mode =
ct3d->hard_ppr_wr_attrs.hppr_op_mode;
}
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &cacheline_sparing_uuid)) {
+ CXLMemSparingSetFeature *mem_sparing_set_feature = (void *)payload_in;
+ CXLMemSparingWriteAttrs *mem_sparing_write_attrs =
+ &mem_sparing_set_feature->feat_data;
+
+ if (hdr->version != CXL_MEMDEV_SPARING_SET_FEATURE_VERSION) {
+ return CXL_MBOX_UNSUPPORTED;
+ }
+
+ memcpy((uint8_t *)&ct3d->cacheline_sparing_wr_attrs + hdr->offset,
+ mem_sparing_write_attrs, bytes_to_copy);
+ set_feat_info->data_size += bytes_to_copy;
+
+ if (data_transfer_flag == CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER ||
+ data_transfer_flag == CXL_SET_FEATURE_FLAG_FINISH_DATA_TRANSFER) {
+ ct3d->cacheline_sparing_attrs.op_mode =
+ ct3d->cacheline_sparing_wr_attrs.op_mode;
+ }
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &row_sparing_uuid)) {
+ CXLMemSparingSetFeature *mem_sparing_set_feature = (void *)payload_in;
+ CXLMemSparingWriteAttrs *mem_sparing_write_attrs =
+ &mem_sparing_set_feature->feat_data;
+
+ if (hdr->version != CXL_MEMDEV_SPARING_SET_FEATURE_VERSION) {
+ return CXL_MBOX_UNSUPPORTED;
+ }
+
+ memcpy((uint8_t *)&ct3d->row_sparing_wr_attrs + hdr->offset,
+ mem_sparing_write_attrs, bytes_to_copy);
+ set_feat_info->data_size += bytes_to_copy;
+
+ if (data_transfer_flag == CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER ||
+ data_transfer_flag == CXL_SET_FEATURE_FLAG_FINISH_DATA_TRANSFER) {
+ ct3d->row_sparing_attrs.op_mode =
+ ct3d->row_sparing_wr_attrs.op_mode;
+ }
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &bank_sparing_uuid)) {
+ CXLMemSparingSetFeature *mem_sparing_set_feature = (void *)payload_in;
+ CXLMemSparingWriteAttrs *mem_sparing_write_attrs =
+ &mem_sparing_set_feature->feat_data;
+
+ if (hdr->version != CXL_MEMDEV_SPARING_SET_FEATURE_VERSION) {
+ return CXL_MBOX_UNSUPPORTED;
+ }
+
+ memcpy((uint8_t *)&ct3d->bank_sparing_wr_attrs + hdr->offset,
+ mem_sparing_write_attrs, bytes_to_copy);
+ set_feat_info->data_size += bytes_to_copy;
+
+ if (data_transfer_flag == CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER ||
+ data_transfer_flag == CXL_SET_FEATURE_FLAG_FINISH_DATA_TRANSFER) {
+ ct3d->bank_sparing_attrs.op_mode =
+ ct3d->bank_sparing_wr_attrs.op_mode;
+ }
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &rank_sparing_uuid)) {
+ CXLMemSparingSetFeature *mem_sparing_set_feature = (void *)payload_in;
+ CXLMemSparingWriteAttrs *mem_sparing_write_attrs =
+ &mem_sparing_set_feature->feat_data;
+
+ if (hdr->version != CXL_MEMDEV_SPARING_SET_FEATURE_VERSION) {
+ return CXL_MBOX_UNSUPPORTED;
+ }
+
+ memcpy((uint8_t *)&ct3d->rank_sparing_wr_attrs + hdr->offset,
+ mem_sparing_write_attrs, bytes_to_copy);
+ set_feat_info->data_size += bytes_to_copy;
+
+ if (data_transfer_flag == CXL_SET_FEATURE_FLAG_FULL_DATA_TRANSFER ||
+ data_transfer_flag == CXL_SET_FEATURE_FLAG_FINISH_DATA_TRANSFER) {
+ ct3d->rank_sparing_attrs.op_mode =
+ ct3d->rank_sparing_wr_attrs.op_mode;
+ }
} else {
return CXL_MBOX_UNSUPPORTED;
}
@@ -1576,6 +1786,15 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
memset(&ct3d->soft_ppr_wr_attrs, 0, set_feat_info->data_size);
} else if (qemu_uuid_is_equal(&hdr->uuid, &hard_ppr_uuid)) {
memset(&ct3d->hard_ppr_wr_attrs, 0, set_feat_info->data_size);
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &cacheline_sparing_uuid)) {
+ memset(&ct3d->cacheline_sparing_wr_attrs, 0,
+ set_feat_info->data_size);
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &row_sparing_uuid)) {
+ memset(&ct3d->row_sparing_wr_attrs, 0, set_feat_info->data_size);
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &bank_sparing_uuid)) {
+ memset(&ct3d->bank_sparing_wr_attrs, 0, set_feat_info->data_size);
+ } else if (qemu_uuid_is_equal(&hdr->uuid, &rank_sparing_uuid)) {
+ memset(&ct3d->rank_sparing_wr_attrs, 0, set_feat_info->data_size);
}
set_feat_info->data_transfer_flag = 0;
set_feat_info->data_saved_across_reset = false;
@@ -1586,9 +1805,27 @@ static CXLRetCode cmd_features_set_feature(const struct cxl_cmd *cmd,
return CXL_MBOX_SUCCESS;
}
-static void cxl_mbox_create_mem_sparing_event_records(CXLType3Dev *ct3d,
+#define CXL_MEM_SPARING_FLAGS_QUERY_RESOURCES BIT(0)
+#define CXL_MEM_SPARING_FLAGS_HARD_SPARING BIT(1)
+#define CXL_MEM_SPARING_FLAGS_SUB_CHANNEL_VALID BIT(2)
+#define CXL_MEM_SPARING_FLAGS_NIB_MASK_VALID BIT(3)
+
+typedef struct CXLMemSparingMaintInPayload {
+ uint8_t flags;
+ uint8_t channel;
+ uint8_t rank;
+ uint8_t nibble_mask[3];
+ uint8_t bank_group;
+ uint8_t bank;
+ uint8_t row[3];
+ uint16_t column;
+ uint8_t sub_channel;
+} QEMU_PACKED CXLMemSparingMaintInPayload;
+
+static void cxl_create_mem_sparing_event_records(CXLType3Dev *ct3d,
uint8_t maint_op_class, uint8_t maint_op_sub_class,
- CXLMaintenance *ent)
+ CXLMaintenance *ent,
+ CXLMemSparingMaintInPayload *sparing_pi)
{
CXLEventSparing event_rec = {};
@@ -1616,6 +1853,31 @@ static void cxl_mbox_create_mem_sparing_event_records(CXLType3Dev *ct3d,
strncpy((char *)event_rec.component_id, (char *)ent->component_id,
sizeof(event_rec.component_id));
}
+ } else if (sparing_pi) {
+ event_rec.flags = CXL_MSER_FLAGS_QUERY_RESOURCES;
+ event_rec.result = 0;
+ event_rec.validity_flags = CXL_MSER_VALID_CHANNEL |
+ CXL_MSER_VALID_RANK |
+ CXL_MSER_VALID_NIB_MASK |
+ CXL_MSER_VALID_BANK_GROUP |
+ CXL_MSER_VALID_BANK |
+ CXL_MSER_VALID_ROW |
+ CXL_MSER_VALID_COLUMN;
+ event_rec.res_avail = 1;
+ event_rec.channel = sparing_pi->channel;
+ event_rec.rank = sparing_pi->rank;
+ if (sparing_pi->flags & CXL_MEM_SPARING_FLAGS_NIB_MASK_VALID) {
+ memcpy(event_rec.nibble_mask, sparing_pi->nibble_mask,
+ sizeof(sparing_pi->nibble_mask));
+ }
+ event_rec.bank_group = sparing_pi->bank_group;
+ event_rec.bank = sparing_pi->bank;
+ event_rec.column = sparing_pi->column;
+ memcpy(event_rec.row, sparing_pi->row, sizeof(sparing_pi->row));
+ if (sparing_pi->flags & CXL_MEM_SPARING_FLAGS_SUB_CHANNEL_VALID) {
+ event_rec.sub_channel = sparing_pi->sub_channel;
+ event_rec.validity_flags |= CXL_MSER_VALID_SUB_CHANNEL;
+ }
} else {
return;
}
@@ -1627,6 +1889,26 @@ static void cxl_mbox_create_mem_sparing_event_records(CXLType3Dev *ct3d,
}
}
+static CXLRetCode cxl_perform_mem_sparing(CXLType3Dev *ct3d, uint8_t sub_class,
+ void *maint_pi)
+{
+ switch (sub_class) {
+ case CXL_MEMDEV_MAINT_SUBCLASS_CACHELINE_SPARING:
+ qemu_log("Cacheline Memory Sparing\n");
+ return CXL_MBOX_SUCCESS;
+ case CXL_MEMDEV_MAINT_SUBCLASS_ROW_SPARING:
+ qemu_log("Row Memory Sparing\n");
+ return CXL_MBOX_SUCCESS;
+ case CXL_MEMDEV_MAINT_SUBCLASS_BANK_SPARING:
+ qemu_log("Bank Memory Sparing\n");
+ return CXL_MBOX_SUCCESS;
+ case CXL_MEMDEV_MAINT_SUBCLASS_RANK_SPARING:
+ qemu_log("Rank Memory Sparing\n");
+ return CXL_MBOX_SUCCESS;
+ default:
+ return CXL_MBOX_INVALID_INPUT;
+ }
+}
static void cxl_perform_ppr(CXLType3Dev *ct3d, uint64_t dpa)
{
@@ -1637,10 +1919,10 @@ static void cxl_perform_ppr(CXLType3Dev *ct3d, uint64_t dpa)
/* Produce a Memory Sparing Event Record */
if (ct3d->soft_ppr_attrs.sppr_op_mode &
CXL_MEMDEV_SPPR_OP_MODE_MEM_SPARING_EV_REC_EN) {
- cxl_mbox_create_mem_sparing_event_records(ct3d,
+ cxl_create_mem_sparing_event_records(ct3d,
CXL_MEMDEV_MAINT_CLASS_SPARING,
CXL_MEMDEV_MAINT_SUBCLASS_CACHELINE_SPARING,
- ent);
+ ent, NULL);
}
break;
}
@@ -1664,6 +1946,7 @@ static CXLRetCode cmd_media_perform_maintenance(const struct cxl_cmd *cmd,
uint64_t dpa;
uint8_t nibble_mask[3];
} QEMU_PACKED ppr;
+ CXLMemSparingMaintInPayload mem_sparing_pi;
};
} QEMU_PACKED *maint_in = (void *)payload_in;
CXLType3Dev *ct3d = CXL_TYPE3(cci->d);
@@ -1689,6 +1972,32 @@ static CXLRetCode cmd_media_perform_maintenance(const struct cxl_cmd *cmd,
return CXL_MBOX_INVALID_INPUT;
}
break;
+ case CXL_MEMDEV_MAINT_CLASS_SPARING:
+ if (maint_in->mem_sparing_pi.flags &
+ CXL_MEM_SPARING_FLAGS_QUERY_RESOURCES) {
+ /*
+ * CXL r3.2 sect 8.2.10.7.1.4 - Memory Sparing Maintenance Operation
+ * Produce Memory Sparing Event record to report resources
+ * availability.
+ */
+ cxl_create_mem_sparing_event_records(ct3d, maint_in->class,
+ maint_in->subclass, NULL,
+ &maint_in->mem_sparing_pi);
+
+ return CXL_MBOX_SUCCESS;
+ }
+
+ switch (maint_in->subclass) {
+ case CXL_MEMDEV_MAINT_SUBCLASS_CACHELINE_SPARING:
+ case CXL_MEMDEV_MAINT_SUBCLASS_ROW_SPARING:
+ case CXL_MEMDEV_MAINT_SUBCLASS_BANK_SPARING:
+ case CXL_MEMDEV_MAINT_SUBCLASS_RANK_SPARING:
+ return cxl_perform_mem_sparing(ct3d, maint_in->subclass,
+ &maint_in->mem_sparing_pi);
+ default:
+ return CXL_MBOX_INVALID_INPUT;
+ }
+ break;
default:
return CXL_MBOX_INVALID_INPUT;
}
diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
index 284bae7479..66d3c5b12e 100644
--- a/hw/mem/cxl_type3.c
+++ b/hw/mem/cxl_type3.c
@@ -990,6 +990,50 @@ static void ct3_realize(PCIDevice *pci_dev, Error **errp)
.hppr_op_mode = CXL_MEMDEV_HPPR_OP_MODE_MEM_SPARING_EV_REC_EN
};
+ /* Set default value for Cacheline Memory Sparing attributes */
+ ct3d->cacheline_sparing_attrs = (CXLMemSparingReadAttrs) {
+ .max_maint_latency = 0x5, /* 100 ms */
+ .op_caps = 0, /* require host involvement */
+ .op_mode = 0,
+ .maint_op_class = CXL_MEMDEV_MAINT_CLASS_SPARING,
+ .maint_op_subclass = CXL_MEMDEV_MAINT_SUBCLASS_CACHELINE_SPARING,
+ .restriction_flags = CXL_MEMDEV_HARD_SPARING_SUPPORT_FLAG |
+ CXL_MEMDEV_SOFT_SPARING_SUPPORT_FLAG,
+ };
+
+ /* Set default value for Row Memory Sparing attributes */
+ ct3d->row_sparing_attrs = (CXLMemSparingReadAttrs) {
+ .max_maint_latency = 0x5, /* 100 ms */
+ .op_caps = 0, /* require host involvement */
+ .op_mode = 0,
+ .maint_op_class = CXL_MEMDEV_MAINT_CLASS_SPARING,
+ .maint_op_subclass = CXL_MEMDEV_MAINT_SUBCLASS_ROW_SPARING,
+ .restriction_flags = CXL_MEMDEV_HARD_SPARING_SUPPORT_FLAG |
+ CXL_MEMDEV_SOFT_SPARING_SUPPORT_FLAG,
+ };
+
+ /* Set default value for Bank Memory Sparing attributes */
+ ct3d->bank_sparing_attrs = (CXLMemSparingReadAttrs) {
+ .max_maint_latency = 0x5, /* 100 ms */
+ .op_caps = 0, /* require host involvement */
+ .op_mode = 0,
+ .maint_op_class = CXL_MEMDEV_MAINT_CLASS_SPARING,
+ .maint_op_subclass = CXL_MEMDEV_MAINT_SUBCLASS_BANK_SPARING,
+ .restriction_flags = CXL_MEMDEV_HARD_SPARING_SUPPORT_FLAG |
+ CXL_MEMDEV_SOFT_SPARING_SUPPORT_FLAG,
+ };
+
+ /* Set default value for Rank Memory Sparing attributes */
+ ct3d->rank_sparing_attrs = (CXLMemSparingReadAttrs) {
+ .max_maint_latency = 0x5, /* 100 ms */
+ .op_caps = 0, /* require host involvement */
+ .op_mode = 0,
+ .maint_op_class = CXL_MEMDEV_MAINT_CLASS_SPARING,
+ .maint_op_subclass = CXL_MEMDEV_MAINT_SUBCLASS_RANK_SPARING,
+ .restriction_flags = CXL_MEMDEV_HARD_SPARING_SUPPORT_FLAG |
+ CXL_MEMDEV_SOFT_SPARING_SUPPORT_FLAG,
+ };
+
return;
err_release_cdat:
--
2.48.1
* Re: [PATCH qemu for 10.2 0/3] cxl: Additional RAS features support.
From: Jonathan Cameron via @ 2025-10-06 9:13 UTC (permalink / raw)
To: Jonathan Cameron via
Cc: Jonathan Cameron, Michael Tsirkin, shiju.jose, Davidlohr Bueso,
armbru, Fan Ni, linux-cxl, linuxarm, Ravi Shankar
On Wed, 17 Sep 2025 15:33:27 +0100
Jonathan Cameron via <qemu-devel@nongnu.org> wrote:
> These two features have been used to test the related EDAC RAS
> feature drivers in Linux, which are now upstream. There are only minor
> tweaks from the second part of:
> https://lore.kernel.org/qemu-devel/20250811085530.2263-1-shiju.jose@huawei.com/
> [PATCH v6 0/8] hw/cxl: Update CXL events to rev3.2 and add maintenance support for memory repair features
>
> I've split that series up as the two parts were only slightly connected.
> However, there is some churn if this is applied without the other part of
> Shiju's posted series, which was sent a few minutes ago as:
> https://lore.kernel.org/qemu-devel/20250917141355.293217-1-Jonathan.Cameron@huawei.com/
>
> Based on: [PATCH qemu for 10.2 0/5] cxl: r3.2 specification events updates.
> Based on: 20250917141355.293217-1-Jonathan.Cameron@huawei.com
>
> Davidlohr Bueso (1):
> hw/cxl: Add support for Maintenance command and Post Package Repair
> (PPR)
>
Hi Michael,
I see you have queued some stuff up on your gitlab tree.
Just wanted to make sure these are on the radar for possible inclusion.
No huge rush, but I'll queue some more series up on top of these
shortly.
Thanks
Jonathan
> Shiju Jose (2):
> hw/cxl/cxl-mailbox-utils: Move declaration of scrub and ECS feature
> attributes in cmd_features_set_feature()
> hw/cxl: Add emulation for memory sparing control feature
>
> include/hw/cxl/cxl_device.h | 145 +++++++++
> include/hw/cxl/cxl_events.h | 47 +++
> hw/cxl/cxl-mailbox-utils.c | 566 +++++++++++++++++++++++++++++++++++-
> hw/mem/cxl_type3.c | 169 +++++++++++
> 4 files changed, 915 insertions(+), 12 deletions(-)
>
* Re: [PATCH qemu for 10.2 0/3] cxl: Additional RAS features support.
From: Michael S. Tsirkin @ 2025-10-06 10:26 UTC (permalink / raw)
To: Jonathan Cameron
Cc: Jonathan Cameron via, shiju.jose, Davidlohr Bueso, armbru, Fan Ni,
linux-cxl, linuxarm, Ravi Shankar
On Mon, Oct 06, 2025 at 10:13:04AM +0100, Jonathan Cameron wrote:
> On Wed, 17 Sep 2025 15:33:27 +0100
> Jonathan Cameron via <qemu-devel@nongnu.org> wrote:
>
> > These two features have been used to test the related EDAC RAS
> > feature drivers in Linux, which are now upstream. There are only minor
> > tweaks from the second part of:
> > https://lore.kernel.org/qemu-devel/20250811085530.2263-1-shiju.jose@huawei.com/
> > [PATCH v6 0/8] hw/cxl: Update CXL events to rev3.2 and add maintenance support for memory repair features
> >
> > I've split that series up as the two parts were only slightly connected.
> > However, there is some churn if this is applied without the other part of
> > Shiju's posted series, which was sent a few minutes ago as:
> > https://lore.kernel.org/qemu-devel/20250917141355.293217-1-Jonathan.Cameron@huawei.com/
> >
> > Based on: [PATCH qemu for 10.2 0/5] cxl: r3.2 specification events updates.
> > Based on: 20250917141355.293217-1-Jonathan.Cameron@huawei.com
> >
> > Davidlohr Bueso (1):
> > hw/cxl: Add support for Maintenance command and Post Package Repair
> > (PPR)
> >
> Hi Michael,
>
> I see you have queued some stuff up on your gitlab tree.
> Just wanted to make sure these are on the radar for possible inclusion.
> No huge rush, but I'll queue some more series up on top of these
> shortly.
>
> Thanks
>
> Jonathan
thanks for the reminder, will queue.
> > Shiju Jose (2):
> > hw/cxl/cxl-mailbox-utils: Move declaration of scrub and ECS feature
> > attributes in cmd_features_set_feature()
> > hw/cxl: Add emulation for memory sparing control feature
> >
> > include/hw/cxl/cxl_device.h | 145 +++++++++
> > include/hw/cxl/cxl_events.h | 47 +++
> > hw/cxl/cxl-mailbox-utils.c | 566 +++++++++++++++++++++++++++++++++++-
> > hw/mem/cxl_type3.c | 169 +++++++++++
> > 4 files changed, 915 insertions(+), 12 deletions(-)
> >