* [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state.
@ 2024-07-24 18:55 Dave Jiang
2024-07-24 18:55 ` [PATCH 1/2] cxl: Move mailbox related bits to the same context Dave Jiang
` (2 more replies)
0 siblings, 3 replies; 13+ messages in thread
From: Dave Jiang @ 2024-07-24 18:55 UTC (permalink / raw)
To: linux-cxl
Cc: alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma,
alison.schofield, Jonathan.Cameron, dave
Hi Alejandro,
Please feel free to pull in the patches in this series into your type2 series.
The patches pull out the mailbox-related bits and form a 'struct cxl_mailbox'. A pointer
to it is added to 'struct cxl_dev_state'. The cxl_mailbox is allocated independently
when the mailbox register is discovered. This separates the mailbox out
so it can be used by both CXL type3 and type2 devices.
---
Dave Jiang (2):
cxl: Move mailbox related bits to the same context
cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input
MAINTAINERS | 1 +
drivers/cxl/core/mbox.c | 139 ++++++++++++++++++++++++++++++++++++++++++++++++---------------
drivers/cxl/core/memdev.c | 55 ++++++++++++++++---------
drivers/cxl/cxlmem.h | 22 +++++-----
drivers/cxl/pci.c | 88 +++++++++++++++++++++++++++-------------
drivers/cxl/pmem.c | 13 ++++--
drivers/cxl/security.c | 23 ++++++-----
include/linux/cxl/mailbox.h | 28 +++++++++++++
tools/testing/cxl/test/mem.c | 46 ++++++++++++++++-----
9 files changed, 301 insertions(+), 114 deletions(-)
^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH 1/2] cxl: Move mailbox related bits to the same context
  2024-07-24 18:55 [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state Dave Jiang
@ 2024-07-24 18:55 ` Dave Jiang
  2024-07-24 22:02   ` fan
  2024-08-15 16:57   ` Jonathan Cameron
  2024-07-24 18:55 ` [PATCH 2/2] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input Dave Jiang
  2024-08-13  7:11 ` [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state Alejandro Lucero Palau
  2 siblings, 2 replies; 13+ messages in thread
From: Dave Jiang @ 2024-07-24 18:55 UTC (permalink / raw)
To: linux-cxl
Cc: alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma,
	alison.schofield, Jonathan.Cameron, dave

Create a new 'struct cxl_mailbox' and move all mailbox related bits to
it. This allows isolation of all CXL mailbox data in order to export
some of the calls to external callers and avoid exporting of CXL driver
specific bits such as device states. The allocation of
'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the
mailbox can be created independently.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
---
 MAINTAINERS                  |   1 +
 drivers/cxl/core/mbox.c      | 105 +++++++++++++++++++++++++++++------
 drivers/cxl/core/memdev.c    |  32 ++++++++---
 drivers/cxl/cxlmem.h         |  20 +++----
 drivers/cxl/pci.c            |  82 ++++++++++++++++++---------
 drivers/cxl/pmem.c           |   7 ++-
 include/linux/cxl/mailbox.h  |  28 ++++++++++
 tools/testing/cxl/test/mem.c |  46 +++++++++++----
 8 files changed, 249 insertions(+), 72 deletions(-)
 create mode 100644 include/linux/cxl/mailbox.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 958e935449e5..c809fa1eb452 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -5466,6 +5466,7 @@ S:	Maintained
 F:	drivers/cxl/
 F:	include/linux/einj-cxl.h
 F:	include/linux/cxl-event.h
+F:	include/linux/cxl/
 F:	include/uapi/linux/cxl_mem.h
 F:	tools/testing/cxl/

diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 2626f3fff201..9501d2576ccd 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -244,16 +244,20 @@ static const char *cxl_mem_opcode_to_name(u16 opcode)
 int cxl_internal_send_cmd(struct cxl_memdev_state *mds,
 			  struct cxl_mbox_cmd *mbox_cmd)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	size_t out_size, min_out;
 	int rc;

-	if (mbox_cmd->size_in > mds->payload_size ||
-	    mbox_cmd->size_out > mds->payload_size)
+	if (!cxl_mbox)
+		return -ENODEV;
+
+	if (mbox_cmd->size_in > cxl_mbox->payload_size ||
+	    mbox_cmd->size_out > cxl_mbox->payload_size)
 		return -E2BIG;

 	out_size = mbox_cmd->size_out;
 	min_out = mbox_cmd->min_out;
-	rc = mds->mbox_send(mds, mbox_cmd);
+	rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
 	/*
 	 * EIO is reserved for a payload size mismatch and mbox_send()
 	 * may not return this error.
@@ -353,11 +357,15 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox,
 			     struct cxl_memdev_state *mds, u16 opcode,
 			     size_t in_size, size_t out_size, u64 in_payload)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	*mbox = (struct cxl_mbox_cmd) {
 		.opcode = opcode,
 		.size_in = in_size,
 	};

+	if (!cxl_mbox)
+		return -ENODEV;
+
 	if (in_size) {
 		mbox->payload_in = vmemdup_user(u64_to_user_ptr(in_payload),
 						in_size);
@@ -374,7 +382,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox,

 	/* Prepare to handle a full payload for variable sized output */
 	if (out_size == CXL_VARIABLE_PAYLOAD)
-		mbox->size_out = mds->payload_size;
+		mbox->size_out = cxl_mbox->payload_size;
 	else
 		mbox->size_out = out_size;

@@ -398,6 +406,11 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd,
 			      const struct cxl_send_command *send_cmd,
 			      struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
+
+	if (!cxl_mbox)
+		return -ENODEV;
+
 	if (send_cmd->raw.rsvd)
 		return -EINVAL;

@@ -406,7 +419,7 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd,
 	 * gets passed along without further checking, so it must be
 	 * validated here.
 	 */
-	if (send_cmd->out.size > mds->payload_size)
+	if (send_cmd->out.size > cxl_mbox->payload_size)
 		return -EINVAL;

 	if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
@@ -494,9 +507,13 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd,
 				      struct cxl_memdev_state *mds,
 				      const struct cxl_send_command *send_cmd)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct cxl_mem_command mem_cmd;
 	int rc;

+	if (!cxl_mbox)
+		return -ENODEV;
+
 	if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX)
 		return -ENOTTY;

@@ -505,7 +522,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd,
 	 * supports, but output can be arbitrarily large (simply write out as
 	 * much data as the hardware provides).
 	 */
-	if (send_cmd->in.size > mds->payload_size)
+	if (send_cmd->in.size > cxl_mbox->payload_size)
 		return -EINVAL;

 	/* Sanitize and construct a cxl_mem_command */
@@ -591,9 +608,13 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds,
 					u64 out_payload, s32 *size_out,
 					u32 *retval)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct device *dev = mds->cxlds.dev;
 	int rc;

+	if (!cxl_mbox)
+		return -ENODEV;
+
 	dev_dbg(dev,
 		"Submitting %s command for user\n"
 		"\topcode: %x\n"
@@ -601,7 +622,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds,
 		cxl_mem_opcode_to_name(mbox_cmd->opcode),
 		mbox_cmd->opcode, mbox_cmd->size_in);

-	rc = mds->mbox_send(mds, mbox_cmd);
+	rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
 	if (rc)
 		goto out;

@@ -659,11 +680,15 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s)
 static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid,
 			u32 *size, u8 *out)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	u32 remaining = *size;
 	u32 offset = 0;

+	if (!cxl_mbox)
+		return -ENODEV;
+
 	while (remaining) {
-		u32 xfer_size = min_t(u32, remaining, mds->payload_size);
+		u32 xfer_size = min_t(u32, remaining, cxl_mbox->payload_size);
 		struct cxl_mbox_cmd mbox_cmd;
 		struct cxl_mbox_get_log log;
 		int rc;
@@ -752,17 +777,21 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel)
 static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct cxl_mbox_get_supported_logs *ret;
 	struct cxl_mbox_cmd mbox_cmd;
 	int rc;

-	ret = kvmalloc(mds->payload_size, GFP_KERNEL);
+	if (!cxl_mbox)
+		return ERR_PTR(-ENODEV);
+
+	ret = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
 	if (!ret)
 		return ERR_PTR(-ENOMEM);

 	mbox_cmd = (struct cxl_mbox_cmd) {
 		.opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS,
-		.size_out = mds->payload_size,
+		.size_out = cxl_mbox->payload_size,
 		.payload_out = ret,
 		/* At least the record number field must be valid */
 		.min_out = 2,
@@ -910,6 +939,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
 				  enum cxl_event_log_type log,
 				  struct cxl_get_event_payload *get_pl)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct cxl_mbox_clear_event_payload *payload;
 	u16 total = le16_to_cpu(get_pl->record_count);
 	u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES;
@@ -919,9 +949,12 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
 	int rc = 0;
 	int i;

+	if (!cxl_mbox)
+		return -ENODEV;
+
 	/* Payload size may limit the max handles */
-	if (pl_size > mds->payload_size) {
-		max_handles = (mds->payload_size - sizeof(*payload)) /
+	if (pl_size > cxl_mbox->payload_size) {
+		max_handles = (cxl_mbox->payload_size - sizeof(*payload)) /
 			      sizeof(__le16);
 		pl_size = struct_size(payload, handles, max_handles);
 	}
@@ -979,12 +1012,16 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
 static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
 				    enum cxl_event_log_type type)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct cxl_memdev *cxlmd = mds->cxlds.cxlmd;
 	struct device *dev = mds->cxlds.dev;
 	struct cxl_get_event_payload *payload;
 	u8 log_type = type;
 	u16 nr_rec;

+	if (!cxl_mbox)
+		return;
+
 	mutex_lock(&mds->event.log_lock);
 	payload = mds->event.buf;

@@ -995,7 +1032,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
 		.payload_in = &log_type,
 		.size_in = sizeof(log_type),
 		.payload_out = payload,
-		.size_out = mds->payload_size,
+		.size_out = cxl_mbox->payload_size,
 		.min_out = struct_size(payload, records, 0),
 	};

@@ -1328,11 +1365,15 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
 		       struct cxl_region *cxlr)
 {
 	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct cxl_mbox_poison_out *po;
 	struct cxl_mbox_poison_in pi;
 	int nr_records = 0;
 	int rc;

+	if (!cxl_mbox)
+		return -ENODEV;
+
 	rc = mutex_lock_interruptible(&mds->poison.lock);
 	if (rc)
 		return rc;
@@ -1346,7 +1387,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
 			.opcode = CXL_MBOX_OP_GET_POISON,
 			.size_in = sizeof(pi),
 			.payload_in = &pi,
-			.size_out = mds->payload_size,
+			.size_out = cxl_mbox->payload_size,
 			.payload_out = po,
 			.min_out = struct_size(po, record, 0),
 		};
@@ -1382,7 +1423,12 @@ static void free_poison_buf(void *buf)
 /* Get Poison List output buffer is protected by mds->poison.lock */
 static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds)
 {
-	mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL);
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
+
+	if (!cxl_mbox)
+		return -ENODEV;
+
+	mds->poison.list_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
 	if (!mds->poison.list_out)
 		return -ENOMEM;

@@ -1408,6 +1454,34 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds)
 }
 EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL);

+static void free_cxl_mailbox(void *cxl_mbox)
+{
+	kfree(cxl_mbox);
+}
+
+struct cxl_mailbox *cxl_mailbox_create(struct device *dev)
+{
+	struct cxl_mailbox *cxl_mbox __free(kfree) =
+		kzalloc(sizeof(*cxl_mbox), GFP_KERNEL);
+	int rc;
+
+	if (!cxl_mbox)
+		return ERR_PTR(-ENOMEM);
+
+	cxl_mbox->host = dev;
+	mutex_init(&cxl_mbox->mbox_mutex);
+	rcuwait_init(&cxl_mbox->mbox_wait);
+
+	rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox);
+	if (rc) {
+		cxl_mbox = NULL;
+		return ERR_PTR(rc);
+	}
+
+	return no_free_ptr(cxl_mbox);
+}
+EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL);
+
 struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
 {
 	struct cxl_memdev_state *mds;
@@ -1418,7 +1492,6 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
 		return ERR_PTR(-ENOMEM);
 	}

-	mutex_init(&mds->mbox_mutex);
 	mutex_init(&mds->event.log_lock);
 	mds->cxlds.dev = dev;
 	mds->cxlds.reg_map.host = dev;
diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
index 0277726afd04..7c99f89740a9 100644
--- a/drivers/cxl/core/memdev.c
+++ b/drivers/cxl/core/memdev.c
@@ -56,9 +56,9 @@ static ssize_t payload_max_show(struct device *dev,
 	struct cxl_dev_state *cxlds = cxlmd->cxlds;
 	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);

-	if (!mds)
+	if (!mds || !cxlds->cxl_mbox)
 		return sysfs_emit(buf, "\n");
-	return sysfs_emit(buf, "%zu\n", mds->payload_size);
+	return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox->payload_size);
 }
 static DEVICE_ATTR_RO(payload_max);

@@ -124,15 +124,19 @@ static ssize_t security_state_show(struct device *dev,
 {
 	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
 	struct cxl_dev_state *cxlds = cxlmd->cxlds;
+	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
 	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
 	unsigned long state = mds->security.state;
 	int rc = 0;

+	if (!cxl_mbox)
+		return -ENODEV;
+
 	/* sync with latest submission state */
-	mutex_lock(&mds->mbox_mutex);
+	mutex_lock(&cxl_mbox->mbox_mutex);
 	if (mds->security.sanitize_active)
 		rc = sysfs_emit(buf, "sanitize\n");
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_unlock(&cxl_mbox->mbox_mutex);
 	if (rc)
 		return rc;

@@ -829,12 +833,16 @@ static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data,
 {
 	struct cxl_memdev_state *mds = fwl->dd_handle;
 	struct cxl_mbox_transfer_fw *transfer;
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
+
+	if (!cxl_mbox)
+		return FW_UPLOAD_ERR_NONE;

 	if (!size)
 		return FW_UPLOAD_ERR_INVALID_SIZE;

 	mds->fw.oneshot = struct_size(transfer, data, size) <
-			    mds->payload_size;
+			    cxl_mbox->payload_size;

 	if (cxl_mem_get_fw_info(mds))
 		return FW_UPLOAD_ERR_HW_ERROR;
@@ -854,6 +862,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
 {
 	struct cxl_memdev_state *mds = fwl->dd_handle;
 	struct cxl_dev_state *cxlds = &mds->cxlds;
+	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
 	struct cxl_memdev *cxlmd = cxlds->cxlmd;
 	struct cxl_mbox_transfer_fw *transfer;
 	struct cxl_mbox_cmd mbox_cmd;
@@ -861,6 +870,9 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
 	size_t size_in;
 	int rc;

+	if (!cxl_mbox)
+		return FW_UPLOAD_ERR_NONE;
+
 	*written = 0;

 	/* Offset has to be aligned to 128B (CXL-3.0 8.2.9.3.2 Table 8-57) */
@@ -877,7 +889,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
 	 * sizeof(*transfer) is 128.  These constraints imply that @cur_size
 	 * will always be 128b aligned.
 	 */
-	cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer));
+	cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer));

 	remaining = size - cur_size;
 	size_in = struct_size(transfer, data, cur_size);

@@ -1059,16 +1071,20 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, CXL);
 static void sanitize_teardown_notifier(void *data)
 {
 	struct cxl_memdev_state *mds = data;
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct kernfs_node *state;

+	if (!cxl_mbox)
+		return;
+
 	/*
 	 * Prevent new irq triggered invocations of the workqueue and
 	 * flush inflight invocations.
 	 */
-	mutex_lock(&mds->mbox_mutex);
+	mutex_lock(&cxl_mbox->mbox_mutex);
 	state = mds->security.sanitize_node;
 	mds->security.sanitize_node = NULL;
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_unlock(&cxl_mbox->mbox_mutex);

 	cancel_delayed_work_sync(&mds->security.poll_dwork);
 	sysfs_put(state);
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index af8169ccdbc0..17e83a2cf1be 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -3,11 +3,13 @@
 #ifndef __CXL_MEM_H__
 #define __CXL_MEM_H__
 #include <uapi/linux/cxl_mem.h>
+#include <linux/pci.h>
 #include <linux/cdev.h>
 #include <linux/uuid.h>
 #include <linux/rcuwait.h>
 #include <linux/cxl-event.h>
 #include <linux/node.h>
+#include <linux/cxl/mailbox.h>
 #include "cxl.h"

 /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
@@ -424,6 +426,7 @@ struct cxl_dpa_perf {
  * @ram_res: Active Volatile memory capacity configuration
  * @serial: PCIe Device Serial Number
  * @type: Generic Memory Class device or Vendor Specific Memory device
+ * @cxl_mbox: CXL mailbox context
  */
 struct cxl_dev_state {
 	struct device *dev;
@@ -438,8 +441,14 @@ struct cxl_dev_state {
 	struct resource ram_res;
 	u64 serial;
 	enum cxl_devtype type;
+	struct cxl_mailbox *cxl_mbox;
 };

+static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
+{
+	return dev_get_drvdata(cxl_mbox->host);
+}
+
 /**
  * struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data
  *
@@ -448,11 +457,8 @@ struct cxl_dev_state {
  *	the functionality related to that like Identify Memory Device and Get
  *	Partition Info
  * @cxlds: Core driver state common across Type-2 and Type-3 devices
- * @payload_size: Size of space for payload
- *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
 * @lsa_size: Size of Label Storage Area
 *                (CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
- * @mbox_mutex: Mutex to synchronize mailbox access.
 * @firmware_version: Firmware version for the memory device.
 * @enabled_cmds: Hardware commands found enabled in CEL.
 * @exclusive_cmds: Commands that are kernel-internal only
@@ -470,17 +476,13 @@ struct cxl_dev_state {
 * @poison: poison driver state info
 * @security: security driver state info
 * @fw: firmware upload / activation state
- * @mbox_wait: RCU wait for mbox send completely
- * @mbox_send: @dev specific transport for transmitting mailbox commands
 *
 * See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for
 * details on capacity parameters.
 */
 struct cxl_memdev_state {
 	struct cxl_dev_state cxlds;
-	size_t payload_size;
 	size_t lsa_size;
-	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
 	char firmware_version[0x10];
 	DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
 	DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
@@ -500,10 +502,6 @@ struct cxl_memdev_state {
 	struct cxl_poison_state poison;
 	struct cxl_security_state security;
 	struct cxl_fw_state fw;
-
-	struct rcuwait mbox_wait;
-	int (*mbox_send)(struct cxl_memdev_state *mds,
-			 struct cxl_mbox_cmd *cmd);
 };

 static inline struct cxl_memdev_state *
diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
index e53646e9f2fb..bd8ee14a7926 100644
--- a/drivers/cxl/pci.c
+++ b/drivers/cxl/pci.c
@@ -11,6 +11,7 @@
 #include <linux/pci.h>
 #include <linux/aer.h>
 #include <linux/io.h>
+#include <linux/cxl/mailbox.h>
 #include "cxlmem.h"
 #include "cxlpci.h"
 #include "cxl.h"
@@ -124,6 +125,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
 	u16 opcode;
 	struct cxl_dev_id *dev_id = id;
 	struct cxl_dev_state *cxlds = dev_id->cxlds;
+	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
 	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);

 	if (!cxl_mbox_background_complete(cxlds))
@@ -132,13 +134,13 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
 	reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET);
 	opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg);
 	if (opcode == CXL_MBOX_OP_SANITIZE) {
-		mutex_lock(&mds->mbox_mutex);
+		mutex_lock(&cxl_mbox->mbox_mutex);
 		if (mds->security.sanitize_node)
 			mod_delayed_work(system_wq,
 					 &mds->security.poll_dwork, 0);
-		mutex_unlock(&mds->mbox_mutex);
+		mutex_unlock(&cxl_mbox->mbox_mutex);
 	} else {
 		/* short-circuit the wait in __cxl_pci_mbox_send_cmd() */
-		rcuwait_wake_up(&mds->mbox_wait);
+		rcuwait_wake_up(&cxl_mbox->mbox_wait);
 	}

 	return IRQ_HANDLED;
@@ -152,8 +154,9 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
 	struct cxl_memdev_state *mds =
 		container_of(work, typeof(*mds), security.poll_dwork.work);
 	struct cxl_dev_state *cxlds = &mds->cxlds;
+	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;

-	mutex_lock(&mds->mbox_mutex);
+	mutex_lock(&cxl_mbox->mbox_mutex);
 	if (cxl_mbox_background_complete(cxlds)) {
 		mds->security.poll_tmo_secs = 0;
 		if (mds->security.sanitize_node)
@@ -167,7 +170,7 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
 		mds->security.poll_tmo_secs = min(15 * 60, timeout);
 		schedule_delayed_work(&mds->security.poll_dwork,
 				      timeout * HZ);
 	}
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_unlock(&cxl_mbox->mbox_mutex);
 }

 /**
@@ -192,17 +195,18 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
 * not need to coordinate with each other. The driver only uses the primary
 * mailbox.
 */
-static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
+static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox,
 				   struct cxl_mbox_cmd *mbox_cmd)
 {
-	struct cxl_dev_state *cxlds = &mds->cxlds;
+	struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox);
+	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
 	void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET;
 	struct device *dev = cxlds->dev;
 	u64 cmd_reg, status_reg;
 	size_t out_len;
 	int rc;

-	lockdep_assert_held(&mds->mbox_mutex);
+	lockdep_assert_held(&cxl_mbox->mbox_mutex);

 	/*
 	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
@@ -315,10 +319,10 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
 		timeout = mbox_cmd->poll_interval_ms;
 		for (i = 0; i < mbox_cmd->poll_count; i++) {
-			if (rcuwait_wait_event_timeout(&mds->mbox_wait,
-						       cxl_mbox_background_complete(cxlds),
-						       TASK_UNINTERRUPTIBLE,
-						       msecs_to_jiffies(timeout)) > 0)
+			if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait,
+						       cxl_mbox_background_complete(cxlds),
+						       TASK_UNINTERRUPTIBLE,
+						       msecs_to_jiffies(timeout)) > 0)
 				break;
 		}

@@ -360,7 +364,7 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
 		 */
 		size_t n;

-		n = min3(mbox_cmd->size_out, mds->payload_size, out_len);
+		n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len);
 		memcpy_fromio(mbox_cmd->payload_out, payload, n);
 		mbox_cmd->size_out = n;
 	} else {
@@ -370,14 +374,14 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
 	return 0;
 }

-static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
+static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox,
 			     struct cxl_mbox_cmd *cmd)
 {
 	int rc;

-	mutex_lock_io(&mds->mbox_mutex);
-	rc = __cxl_pci_mbox_send_cmd(mds, cmd);
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_lock_io(&cxl_mbox->mbox_mutex);
+	rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd);
+	mutex_unlock(&cxl_mbox->mbox_mutex);

 	return rc;
 }
@@ -385,6 +389,7 @@ static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
 static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
 {
 	struct cxl_dev_state *cxlds = &mds->cxlds;
+	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
 	const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
 	struct device *dev = cxlds->dev;
 	unsigned long timeout;
@@ -417,8 +422,8 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
 		return -ETIMEDOUT;
 	}

-	mds->mbox_send = cxl_pci_mbox_send;
-	mds->payload_size =
+	cxl_mbox->mbox_send = cxl_pci_mbox_send;
+	cxl_mbox->payload_size =
 		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);

 	/*
@@ -428,16 +433,15 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
 	 * there's no point in going forward. If the size is too large, there's
 	 * no harm is soft limiting it.
 	 */
-	mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M);
-	if (mds->payload_size < 256) {
+	cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M);
+	if (cxl_mbox->payload_size < 256) {
 		dev_err(dev, "Mailbox is too small (%zub)",
-			mds->payload_size);
+			cxl_mbox->payload_size);
 		return -ENXIO;
 	}

-	dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size);
+	dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size);

-	rcuwait_init(&mds->mbox_wait);
 	INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work);

 	/* background command interrupts are optional */
@@ -578,9 +582,13 @@ static void free_event_buf(void *buf)
 */
 static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct cxl_get_event_payload *buf;

-	buf = kvmalloc(mds->payload_size, GFP_KERNEL);
+	if (!cxl_mbox)
+		return -ENODEV;
+
+	buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
 	if (!buf)
 		return -ENOMEM;
 	mds->event.buf = buf;
@@ -786,6 +794,26 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
 	return 0;
 }

+static int cxl_pci_create_mailbox(struct cxl_dev_state *cxlds)
+{
+	struct cxl_mailbox *cxl_mbox;
+
+	/*
+	 * Don't bother to allocate the mailbox if the mailbox register isn't
+	 * there.
+	 */
+	if (!cxlds->reg_map.device_map.mbox.valid)
+		return -ENODEV;
+
+	cxl_mbox = cxl_mailbox_create(cxlds->dev);
+	if (IS_ERR(cxl_mbox))
+		return PTR_ERR(cxl_mbox);
+
+	cxlds->cxl_mbox = cxl_mbox;
+
+	return 0;
+}
+
 static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 {
 	struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus);
@@ -846,6 +874,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	if (rc)
 		dev_dbg(&pdev->dev, "Failed to map RAS capability.\n");

+	rc = cxl_pci_create_mailbox(cxlds);
+	if (rc)
+		return rc;
+
 	rc = cxl_await_media_ready(cxlds);
 	if (rc == 0)
 		cxlds->media_ready = true;
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 2ecdaee63021..12e68b194820 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -102,13 +102,18 @@ static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds,
 					struct nd_cmd_get_config_size *cmd,
 					unsigned int buf_len)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
+
+	if (!cxl_mbox)
+		return -ENODEV;
+
 	if (sizeof(*cmd) > buf_len)
 		return -EINVAL;

 	*cmd = (struct nd_cmd_get_config_size){
 		.config_size = mds->lsa_size,
 		.max_xfer =
-			mds->payload_size - sizeof(struct cxl_mbox_set_lsa),
+			cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa),
 	};

 	return 0;
diff --git a/include/linux/cxl/mailbox.h b/include/linux/cxl/mailbox.h
new file mode 100644
index 000000000000..2b9e4bc1690c
--- /dev/null
+++ b/include/linux/cxl/mailbox.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/* Copyright(c) 2024 Intel Corporation. */
+#ifndef __CXL_MBOX_H__
+#define __CXL_MBOX_H__
+
+struct cxl_mbox_cmd;
+
+/**
+ * struct cxl_mailbox - context for CXL mailbox operations
+ * @host: device that hosts the mailbox
+ * @adev: auxiliary device for fw-ctl
+ * @payload_size: Size of space for payload
+ *                (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register)
+ * @mbox_mutex: mutex protects device mailbox and firmware
+ * @mbox_wait: rcuwait for mailbox
+ * @mbox_send: @dev specific transport for transmitting mailbox commands
+ */
+struct cxl_mailbox {
+	struct device *host;
+	size_t payload_size;
+	struct mutex mbox_mutex; /* lock to protect mailbox context */
+	struct rcuwait mbox_wait;
+	int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd);
+};
+
+struct cxl_mailbox *cxl_mailbox_create(struct device *dev);
+
+#endif
diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index eaf091a3d331..29f1a2df9122 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -8,6 +8,7 @@
 #include <linux/delay.h>
 #include <linux/sizes.h>
 #include <linux/bits.h>
+#include <linux/cxl/mailbox.h>
 #include <asm/unaligned.h>
 #include <crypto/sha2.h>
 #include <cxlmem.h>
@@ -530,6 +531,7 @@ static int mock_gsl(struct cxl_mbox_cmd *cmd)
 static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
 {
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	struct cxl_mbox_get_log *gl = cmd->payload_in;
 	u32 offset = le32_to_cpu(gl->offset);
 	u32 length = le32_to_cpu(gl->length);
@@ -538,7 +540,7 @@ static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
 	if (cmd->size_in < sizeof(*gl))
 		return -EINVAL;

-	if (length > mds->payload_size)
+	if (length > cxl_mbox->payload_size)
 		return -EINVAL;
 	if (offset + length > sizeof(mock_cel))
 		return -EINVAL;
@@ -613,12 +615,13 @@ void cxl_mockmem_sanitize_work(struct work_struct *work)
 {
 	struct cxl_memdev_state *mds =
 		container_of(work, typeof(*mds), security.poll_dwork.work);
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;

-	mutex_lock(&mds->mbox_mutex);
+	mutex_lock(&cxl_mbox->mbox_mutex);
 	if (mds->security.sanitize_node)
 		sysfs_notify_dirent(mds->security.sanitize_node);
 	mds->security.sanitize_active = false;
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_unlock(&cxl_mbox->mbox_mutex);

 	dev_dbg(mds->cxlds.dev, "sanitize complete\n");
 }
@@ -627,6 +630,7 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
 			 struct cxl_mbox_cmd *cmd)
 {
 	struct cxl_memdev_state *mds = mdata->mds;
+	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
 	int rc = 0;

 	if (cmd->size_in != 0)
@@ -644,14 +648,14 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
 		return -ENXIO;
 	}

-	mutex_lock(&mds->mbox_mutex);
+	mutex_lock(&cxl_mbox->mbox_mutex);
 	if (schedule_delayed_work(&mds->security.poll_dwork,
 				  msecs_to_jiffies(mdata->sanitize_timeout))) {
 		mds->security.sanitize_active = true;
 		dev_dbg(mds->cxlds.dev, "sanitize issued\n");
 	} else
 		rc = -EBUSY;
-	mutex_unlock(&mds->mbox_mutex);
+	mutex_unlock(&cxl_mbox->mbox_mutex);

 	return rc;
 }
@@ -1330,12 +1334,13 @@ static int mock_activate_fw(struct cxl_mockmem_data *mdata,
 	return -EINVAL;
 }

-static int cxl_mock_mbox_send(struct cxl_memdev_state *mds,
+static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
 			      struct cxl_mbox_cmd *cmd)
 {
+	struct device *dev = cxl_mbox->host;
+	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
+	struct cxl_memdev_state *mds = mdata->mds;
 	struct cxl_dev_state *cxlds = &mds->cxlds;
-	struct device *dev = cxlds->dev;
-	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
 	int rc = -EIO;

 	switch (cmd->opcode) {
@@ -1450,6 +1455,19 @@ static ssize_t event_trigger_store(struct device *dev,
 }
 static DEVICE_ATTR_WO(event_trigger);

+static int cxl_mock_mailbox_create(struct cxl_dev_state *cxlds)
+{
+	struct cxl_mailbox *cxl_mbox;
+
+	cxl_mbox = cxl_mailbox_create(cxlds->dev);
+	if (IS_ERR(cxl_mbox))
+		return PTR_ERR(cxl_mbox);
+
+	cxlds->cxl_mbox = cxl_mbox;
+
+	return 0;
+}
+
 static int cxl_mock_mem_probe(struct platform_device *pdev)
 {
 	struct device *dev = &pdev->dev;
@@ -1457,6 +1475,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
 	struct cxl_memdev_state *mds;
 	struct cxl_dev_state *cxlds;
 	struct cxl_mockmem_data *mdata;
+	struct cxl_mailbox *cxl_mbox;
 	int rc;

 	mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL);
@@ -1484,13 +1503,18 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
 	if (IS_ERR(mds))
 		return PTR_ERR(mds);

+	cxlds = &mds->cxlds;
+	rc = cxl_mock_mailbox_create(cxlds);
+	if (rc)
+		return rc;
+
+	cxl_mbox = mds->cxlds.cxl_mbox;
 	mdata->mds = mds;
-	mds->mbox_send = cxl_mock_mbox_send;
-	mds->payload_size = SZ_4K;
+	cxl_mbox->mbox_send = cxl_mock_mbox_send;
+	cxl_mbox->payload_size = SZ_4K;
 	mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
 	INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work);

-	cxlds = &mds->cxlds;
 	cxlds->serial = pdev->id;
 	if (is_rcd(pdev))
 		cxlds->rcd = true;
-- 
2.45.2

^ permalink raw reply related	[flat|nested] 13+ messages in thread
* Re: [PATCH 1/2] cxl: Move mailbox related bits to the same context
  2024-07-24 18:55 ` [PATCH 1/2] cxl: Move mailbox related bits to the same context Dave Jiang
@ 2024-07-24 22:02   ` fan
  2024-08-13  6:59     ` Alejandro Lucero Palau
  2024-08-15 16:57   ` Jonathan Cameron
  1 sibling, 1 reply; 13+ messages in thread
From: fan @ 2024-07-24 22:02 UTC (permalink / raw)
To: Dave Jiang
Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny,
	vishal.l.verma, alison.schofield, Jonathan.Cameron, dave

On Wed, Jul 24, 2024 at 11:55:16AM -0700, Dave Jiang wrote:
> Create a new 'struct cxl_mailbox' and move all mailbox related bits to
> it. This allows isolation of all CXL mailbox data in order to export
> some of the calls to external callers and avoid exporting of CXL driver
> specific bits such as device states. The allocation of
> 'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the
> mailbox can be created independently.
> 
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
> ---
>  MAINTAINERS                  |   1 +
>  drivers/cxl/core/mbox.c      | 105 +++++++++++++++++++++++++++++------
>  drivers/cxl/core/memdev.c    |  32 ++++++++---
>  drivers/cxl/cxlmem.h         |  20 +++----
>  drivers/cxl/pci.c            |  82 ++++++++++++++++++---------
>  drivers/cxl/pmem.c           |   7 ++-
>  include/linux/cxl/mailbox.h  |  28 ++++++++++
>  tools/testing/cxl/test/mem.c |  46 +++++++++++----
>  8 files changed, 249 insertions(+), 72 deletions(-)
>  create mode 100644 include/linux/cxl/mailbox.h

Reviewed-by: Fan Ni <fan.ni@samsung.com>

> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 958e935449e5..c809fa1eb452 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -5466,6 +5466,7 @@ S:	Maintained
>  F:	drivers/cxl/
>  F:	include/linux/einj-cxl.h
>  F:	include/linux/cxl-event.h
> +F:	include/linux/cxl/
>  F:	include/uapi/linux/cxl_mem.h
>  F:	tools/testing/cxl/
> 
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 2626f3fff201..9501d2576ccd 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
@@ -244,16 +244,20 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) > int cxl_internal_send_cmd(struct cxl_memdev_state *mds, > struct cxl_mbox_cmd *mbox_cmd) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > size_t out_size, min_out; > int rc; > > - if (mbox_cmd->size_in > mds->payload_size || > - mbox_cmd->size_out > mds->payload_size) > + if (!cxl_mbox) > + return -ENODEV; > + > + if (mbox_cmd->size_in > cxl_mbox->payload_size || > + mbox_cmd->size_out > cxl_mbox->payload_size) > return -E2BIG; > > out_size = mbox_cmd->size_out; > min_out = mbox_cmd->min_out; > - rc = mds->mbox_send(mds, mbox_cmd); > + rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd); > /* > * EIO is reserved for a payload size mismatch and mbox_send() > * may not return this error. > @@ -353,11 +357,15 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox, > struct cxl_memdev_state *mds, u16 opcode, > size_t in_size, size_t out_size, u64 in_payload) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > *mbox = (struct cxl_mbox_cmd) { > .opcode = opcode, > .size_in = in_size, > }; > > + if (!cxl_mbox) > + return -ENODEV; > + > if (in_size) { > mbox->payload_in = vmemdup_user(u64_to_user_ptr(in_payload), > in_size); > @@ -374,7 +382,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox, > > /* Prepare to handle a full payload for variable sized output */ > if (out_size == CXL_VARIABLE_PAYLOAD) > - mbox->size_out = mds->payload_size; > + mbox->size_out = cxl_mbox->payload_size; > else > mbox->size_out = out_size; > > @@ -398,6 +406,11 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd, > const struct cxl_send_command *send_cmd, > struct cxl_memdev_state *mds) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > + > + if (!cxl_mbox) > + return -ENODEV; > + > if (send_cmd->raw.rsvd) > return -EINVAL; > > @@ -406,7 +419,7 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd, > * gets passed along without further checking, so it must 
be > * validated here. > */ > - if (send_cmd->out.size > mds->payload_size) > + if (send_cmd->out.size > cxl_mbox->payload_size) > return -EINVAL; > > if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode)) > @@ -494,9 +507,13 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd, > struct cxl_memdev_state *mds, > const struct cxl_send_command *send_cmd) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mem_command mem_cmd; > int rc; > > + if (!cxl_mbox) > + return -ENODEV; > + > if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX) > return -ENOTTY; > > @@ -505,7 +522,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd, > * supports, but output can be arbitrarily large (simply write out as > * much data as the hardware provides). > */ > - if (send_cmd->in.size > mds->payload_size) > + if (send_cmd->in.size > cxl_mbox->payload_size) > return -EINVAL; > > /* Sanitize and construct a cxl_mem_command */ > @@ -591,9 +608,13 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds, > u64 out_payload, s32 *size_out, > u32 *retval) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct device *dev = mds->cxlds.dev; > int rc; > > + if (!cxl_mbox) > + return -ENODEV; > + > dev_dbg(dev, > "Submitting %s command for user\n" > "\topcode: %x\n" > @@ -601,7 +622,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds, > cxl_mem_opcode_to_name(mbox_cmd->opcode), > mbox_cmd->opcode, mbox_cmd->size_in); > > - rc = mds->mbox_send(mds, mbox_cmd); > + rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd); > if (rc) > goto out; > > @@ -659,11 +680,15 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s) > static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid, > u32 *size, u8 *out) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > u32 remaining = *size; > u32 offset = 0; > > + if (!cxl_mbox) > + return -ENODEV; > + > while 
(remaining) { > - u32 xfer_size = min_t(u32, remaining, mds->payload_size); > + u32 xfer_size = min_t(u32, remaining, cxl_mbox->payload_size); > struct cxl_mbox_cmd mbox_cmd; > struct cxl_mbox_get_log log; > int rc; > @@ -752,17 +777,21 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel) > > static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_get_supported_logs *ret; > struct cxl_mbox_cmd mbox_cmd; > int rc; > > - ret = kvmalloc(mds->payload_size, GFP_KERNEL); > + if (!cxl_mbox) > + return ERR_PTR(-ENODEV); > + > + ret = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); > if (!ret) > return ERR_PTR(-ENOMEM); > > mbox_cmd = (struct cxl_mbox_cmd) { > .opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS, > - .size_out = mds->payload_size, > + .size_out = cxl_mbox->payload_size, > .payload_out = ret, > /* At least the record number field must be valid */ > .min_out = 2, > @@ -910,6 +939,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, > enum cxl_event_log_type log, > struct cxl_get_event_payload *get_pl) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_clear_event_payload *payload; > u16 total = le16_to_cpu(get_pl->record_count); > u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; > @@ -919,9 +949,12 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, > int rc = 0; > int i; > > + if (!cxl_mbox) > + return -ENODEV; > + > /* Payload size may limit the max handles */ > - if (pl_size > mds->payload_size) { > - max_handles = (mds->payload_size - sizeof(*payload)) / > + if (pl_size > cxl_mbox->payload_size) { > + max_handles = (cxl_mbox->payload_size - sizeof(*payload)) / > sizeof(__le16); > pl_size = struct_size(payload, handles, max_handles); > } > @@ -979,12 +1012,16 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, > static void cxl_mem_get_records_log(struct 
cxl_memdev_state *mds, > enum cxl_event_log_type type) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_memdev *cxlmd = mds->cxlds.cxlmd; > struct device *dev = mds->cxlds.dev; > struct cxl_get_event_payload *payload; > u8 log_type = type; > u16 nr_rec; > > + if (!cxl_mbox) > + return; > + > mutex_lock(&mds->event.log_lock); > payload = mds->event.buf; > > @@ -995,7 +1032,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, > .payload_in = &log_type, > .size_in = sizeof(log_type), > .payload_out = payload, > - .size_out = mds->payload_size, > + .size_out = cxl_mbox->payload_size, > .min_out = struct_size(payload, records, 0), > }; > > @@ -1328,11 +1365,15 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, > struct cxl_region *cxlr) > { > struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_poison_out *po; > struct cxl_mbox_poison_in pi; > int nr_records = 0; > int rc; > > + if (!cxl_mbox) > + return -ENODEV; > + > rc = mutex_lock_interruptible(&mds->poison.lock); > if (rc) > return rc; > @@ -1346,7 +1387,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, > .opcode = CXL_MBOX_OP_GET_POISON, > .size_in = sizeof(pi), > .payload_in = &pi, > - .size_out = mds->payload_size, > + .size_out = cxl_mbox->payload_size, > .payload_out = po, > .min_out = struct_size(po, record, 0), > }; > @@ -1382,7 +1423,12 @@ static void free_poison_buf(void *buf) > /* Get Poison List output buffer is protected by mds->poison.lock */ > static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds) > { > - mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL); > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > + > + if (!cxl_mbox) > + return -ENODEV; > + > + mds->poison.list_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); > if (!mds->poison.list_out) > return -ENOMEM; > > @@ -1408,6 +1454,34 @@ int 
cxl_poison_state_init(struct cxl_memdev_state *mds) > } > EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); > > +static void free_cxl_mailbox(void *cxl_mbox) > +{ > + kfree(cxl_mbox); > +} > + > +struct cxl_mailbox *cxl_mailbox_create(struct device *dev) > +{ > + struct cxl_mailbox *cxl_mbox __free(kfree) = > + kzalloc(sizeof(*cxl_mbox), GFP_KERNEL); > + int rc; > + > + if (!cxl_mbox) > + return ERR_PTR(-ENOMEM); > + > + cxl_mbox->host = dev; > + mutex_init(&cxl_mbox->mbox_mutex); > + rcuwait_init(&cxl_mbox->mbox_wait); > + > + rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox); > + if (rc) { > + cxl_mbox = NULL; > + return ERR_PTR(rc); > + } > + > + return no_free_ptr(cxl_mbox); > +} > +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL); > + > struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) > { > struct cxl_memdev_state *mds; > @@ -1418,7 +1492,6 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) > return ERR_PTR(-ENOMEM); > } > > - mutex_init(&mds->mbox_mutex); > mutex_init(&mds->event.log_lock); > mds->cxlds.dev = dev; > mds->cxlds.reg_map.host = dev; > diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c > index 0277726afd04..7c99f89740a9 100644 > --- a/drivers/cxl/core/memdev.c > +++ b/drivers/cxl/core/memdev.c > @@ -56,9 +56,9 @@ static ssize_t payload_max_show(struct device *dev, > struct cxl_dev_state *cxlds = cxlmd->cxlds; > struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); > > - if (!mds) > + if (!mds || !cxlds->cxl_mbox) > return sysfs_emit(buf, "\n"); > - return sysfs_emit(buf, "%zu\n", mds->payload_size); > + return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox->payload_size); > } > static DEVICE_ATTR_RO(payload_max); > > @@ -124,15 +124,19 @@ static ssize_t security_state_show(struct device *dev, > { > struct cxl_memdev *cxlmd = to_cxl_memdev(dev); > struct cxl_dev_state *cxlds = cxlmd->cxlds; > + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; > struct cxl_memdev_state *mds = 
to_cxl_memdev_state(cxlds); > unsigned long state = mds->security.state; > int rc = 0; > > + if (!cxl_mbox) > + return -ENODEV; > + > /* sync with latest submission state */ > - mutex_lock(&mds->mbox_mutex); > + mutex_lock(&cxl_mbox->mbox_mutex); > if (mds->security.sanitize_active) > rc = sysfs_emit(buf, "sanitize\n"); > - mutex_unlock(&mds->mbox_mutex); > + mutex_unlock(&cxl_mbox->mbox_mutex); > if (rc) > return rc; > > @@ -829,12 +833,16 @@ static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data, > { > struct cxl_memdev_state *mds = fwl->dd_handle; > struct cxl_mbox_transfer_fw *transfer; > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > + > + if (!cxl_mbox) > + return FW_UPLOAD_ERR_NONE; > > if (!size) > return FW_UPLOAD_ERR_INVALID_SIZE; > > mds->fw.oneshot = struct_size(transfer, data, size) < > - mds->payload_size; > + cxl_mbox->payload_size; > > if (cxl_mem_get_fw_info(mds)) > return FW_UPLOAD_ERR_HW_ERROR; > @@ -854,6 +862,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, > { > struct cxl_memdev_state *mds = fwl->dd_handle; > struct cxl_dev_state *cxlds = &mds->cxlds; > + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; > struct cxl_memdev *cxlmd = cxlds->cxlmd; > struct cxl_mbox_transfer_fw *transfer; > struct cxl_mbox_cmd mbox_cmd; > @@ -861,6 +870,9 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, > size_t size_in; > int rc; > > + if (!cxl_mbox) > + return FW_UPLOAD_ERR_NONE; > + > *written = 0; > > /* Offset has to be aligned to 128B (CXL-3.0 8.2.9.3.2 Table 8-57) */ > @@ -877,7 +889,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, > * sizeof(*transfer) is 128. These constraints imply that @cur_size > * will always be 128b aligned. 
> */ > - cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer)); > + cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer)); > > remaining = size - cur_size; > size_in = struct_size(transfer, data, cur_size); > @@ -1059,16 +1071,20 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, CXL); > static void sanitize_teardown_notifier(void *data) > { > struct cxl_memdev_state *mds = data; > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct kernfs_node *state; > > + if (!cxl_mbox) > + return; > + > /* > * Prevent new irq triggered invocations of the workqueue and > * flush inflight invocations. > */ > - mutex_lock(&mds->mbox_mutex); > + mutex_lock(&cxl_mbox->mbox_mutex); > state = mds->security.sanitize_node; > mds->security.sanitize_node = NULL; > - mutex_unlock(&mds->mbox_mutex); > + mutex_unlock(&cxl_mbox->mbox_mutex); > > cancel_delayed_work_sync(&mds->security.poll_dwork); > sysfs_put(state); > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > index af8169ccdbc0..17e83a2cf1be 100644 > --- a/drivers/cxl/cxlmem.h > +++ b/drivers/cxl/cxlmem.h > @@ -3,11 +3,13 @@ > #ifndef __CXL_MEM_H__ > #define __CXL_MEM_H__ > #include <uapi/linux/cxl_mem.h> > +#include <linux/pci.h> > #include <linux/cdev.h> > #include <linux/uuid.h> > #include <linux/rcuwait.h> > #include <linux/cxl-event.h> > #include <linux/node.h> > +#include <linux/cxl/mailbox.h> > #include "cxl.h" > > /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ > @@ -424,6 +426,7 @@ struct cxl_dpa_perf { > * @ram_res: Active Volatile memory capacity configuration > * @serial: PCIe Device Serial Number > * @type: Generic Memory Class device or Vendor Specific Memory device > + * @cxl_mbox: CXL mailbox context > */ > struct cxl_dev_state { > struct device *dev; > @@ -438,8 +441,14 @@ struct cxl_dev_state { > struct resource ram_res; > u64 serial; > enum cxl_devtype type; > + struct cxl_mailbox *cxl_mbox; > }; > > +static inline struct cxl_dev_state *mbox_to_cxlds(struct 
cxl_mailbox *cxl_mbox) > +{ > + return dev_get_drvdata(cxl_mbox->host); > +} > + > /** > * struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data > * > @@ -448,11 +457,8 @@ struct cxl_dev_state { > * the functionality related to that like Identify Memory Device and Get > * Partition Info > * @cxlds: Core driver state common across Type-2 and Type-3 devices > - * @payload_size: Size of space for payload > - * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) > * @lsa_size: Size of Label Storage Area > * (CXL 2.0 8.2.9.5.1.1 Identify Memory Device) > - * @mbox_mutex: Mutex to synchronize mailbox access. > * @firmware_version: Firmware version for the memory device. > * @enabled_cmds: Hardware commands found enabled in CEL. > * @exclusive_cmds: Commands that are kernel-internal only > @@ -470,17 +476,13 @@ struct cxl_dev_state { > * @poison: poison driver state info > * @security: security driver state info > * @fw: firmware upload / activation state > - * @mbox_wait: RCU wait for mbox send completely > - * @mbox_send: @dev specific transport for transmitting mailbox commands > * > * See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for > * details on capacity parameters. 
> */ > struct cxl_memdev_state { > struct cxl_dev_state cxlds; > - size_t payload_size; > size_t lsa_size; > - struct mutex mbox_mutex; /* Protects device mailbox and firmware */ > char firmware_version[0x10]; > DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX); > DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX); > @@ -500,10 +502,6 @@ struct cxl_memdev_state { > struct cxl_poison_state poison; > struct cxl_security_state security; > struct cxl_fw_state fw; > - > - struct rcuwait mbox_wait; > - int (*mbox_send)(struct cxl_memdev_state *mds, > - struct cxl_mbox_cmd *cmd); > }; > > static inline struct cxl_memdev_state * > diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c > index e53646e9f2fb..bd8ee14a7926 100644 > --- a/drivers/cxl/pci.c > +++ b/drivers/cxl/pci.c > @@ -11,6 +11,7 @@ > #include <linux/pci.h> > #include <linux/aer.h> > #include <linux/io.h> > +#include <linux/cxl/mailbox.h> > #include "cxlmem.h" > #include "cxlpci.h" > #include "cxl.h" > @@ -124,6 +125,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) > u16 opcode; > struct cxl_dev_id *dev_id = id; > struct cxl_dev_state *cxlds = dev_id->cxlds; > + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; > struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); > > if (!cxl_mbox_background_complete(cxlds)) > @@ -132,13 +134,13 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) > reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET); > opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg); > if (opcode == CXL_MBOX_OP_SANITIZE) { > - mutex_lock(&mds->mbox_mutex); > + mutex_lock(&cxl_mbox->mbox_mutex); > if (mds->security.sanitize_node) > mod_delayed_work(system_wq, &mds->security.poll_dwork, 0); > - mutex_unlock(&mds->mbox_mutex); > + mutex_unlock(&cxl_mbox->mbox_mutex); > } else { > /* short-circuit the wait in __cxl_pci_mbox_send_cmd() */ > - rcuwait_wake_up(&mds->mbox_wait); > + rcuwait_wake_up(&cxl_mbox->mbox_wait); > } > > return IRQ_HANDLED; > @@ -152,8 
+154,9 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) > struct cxl_memdev_state *mds = > container_of(work, typeof(*mds), security.poll_dwork.work); > struct cxl_dev_state *cxlds = &mds->cxlds; > + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; > > - mutex_lock(&mds->mbox_mutex); > + mutex_lock(&cxl_mbox->mbox_mutex); > if (cxl_mbox_background_complete(cxlds)) { > mds->security.poll_tmo_secs = 0; > if (mds->security.sanitize_node) > @@ -167,7 +170,7 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) > mds->security.poll_tmo_secs = min(15 * 60, timeout); > schedule_delayed_work(&mds->security.poll_dwork, timeout * HZ); > } > - mutex_unlock(&mds->mbox_mutex); > + mutex_unlock(&cxl_mbox->mbox_mutex); > } > > /** > @@ -192,17 +195,18 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) > * not need to coordinate with each other. The driver only uses the primary > * mailbox. > */ > -static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, > +static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox, > struct cxl_mbox_cmd *mbox_cmd) > { > - struct cxl_dev_state *cxlds = &mds->cxlds; > + struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox); > + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); > void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET; > struct device *dev = cxlds->dev; > u64 cmd_reg, status_reg; > size_t out_len; > int rc; > > - lockdep_assert_held(&mds->mbox_mutex); > + lockdep_assert_held(&cxl_mbox->mbox_mutex); > > /* > * Here are the steps from 8.2.8.4 of the CXL 2.0 spec. 
> @@ -315,10 +319,10 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, > > timeout = mbox_cmd->poll_interval_ms; > for (i = 0; i < mbox_cmd->poll_count; i++) { > - if (rcuwait_wait_event_timeout(&mds->mbox_wait, > - cxl_mbox_background_complete(cxlds), > - TASK_UNINTERRUPTIBLE, > - msecs_to_jiffies(timeout)) > 0) > + if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait, > + cxl_mbox_background_complete(cxlds), > + TASK_UNINTERRUPTIBLE, > + msecs_to_jiffies(timeout)) > 0) > break; > } > > @@ -360,7 +364,7 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, > */ > size_t n; > > - n = min3(mbox_cmd->size_out, mds->payload_size, out_len); > + n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len); > memcpy_fromio(mbox_cmd->payload_out, payload, n); > mbox_cmd->size_out = n; > } else { > @@ -370,14 +374,14 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, > return 0; > } > > -static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, > +static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox, > struct cxl_mbox_cmd *cmd) > { > int rc; > > - mutex_lock_io(&mds->mbox_mutex); > - rc = __cxl_pci_mbox_send_cmd(mds, cmd); > - mutex_unlock(&mds->mbox_mutex); > + mutex_lock_io(&cxl_mbox->mbox_mutex); > + rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd); > + mutex_unlock(&cxl_mbox->mbox_mutex); > > return rc; > } > @@ -385,6 +389,7 @@ static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, > static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) > { > struct cxl_dev_state *cxlds = &mds->cxlds; > + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; > const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET); > struct device *dev = cxlds->dev; > unsigned long timeout; > @@ -417,8 +422,8 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) > return -ETIMEDOUT; > } > > - mds->mbox_send = cxl_pci_mbox_send; > - mds->payload_size = > + cxl_mbox->mbox_send = 
cxl_pci_mbox_send; > + cxl_mbox->payload_size = > 1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap); > > /* > @@ -428,16 +433,15 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) > * there's no point in going forward. If the size is too large, there's > * no harm is soft limiting it. > */ > - mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M); > - if (mds->payload_size < 256) { > + cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M); > + if (cxl_mbox->payload_size < 256) { > dev_err(dev, "Mailbox is too small (%zub)", > - mds->payload_size); > + cxl_mbox->payload_size); > return -ENXIO; > } > > - dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size); > + dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size); > > - rcuwait_init(&mds->mbox_wait); > INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work); > > /* background command interrupts are optional */ > @@ -578,9 +582,13 @@ static void free_event_buf(void *buf) > */ > static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_get_event_payload *buf; > > - buf = kvmalloc(mds->payload_size, GFP_KERNEL); > + if (!cxl_mbox) > + return -ENODEV; > + > + buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); > if (!buf) > return -ENOMEM; > mds->event.buf = buf; > @@ -786,6 +794,26 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge, > return 0; > } > > +static int cxl_pci_create_mailbox(struct cxl_dev_state *cxlds) > +{ > + struct cxl_mailbox *cxl_mbox; > + > + /* > + * Don't bother to allocate the mailbox if the mailbox register isn't > + * there. 
> + */ > + if (!cxlds->reg_map.device_map.mbox.valid) > + return -ENODEV; > + > + cxl_mbox = cxl_mailbox_create(cxlds->dev); > + if (IS_ERR(cxl_mbox)) > + return PTR_ERR(cxl_mbox); > + > + cxlds->cxl_mbox = cxl_mbox; > + > + return 0; > +} > + > static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) > { > struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus); > @@ -846,6 +874,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) > if (rc) > dev_dbg(&pdev->dev, "Failed to map RAS capability.\n"); > > + rc = cxl_pci_create_mailbox(cxlds); > + if (rc) > + return rc; > + > rc = cxl_await_media_ready(cxlds); > if (rc == 0) > cxlds->media_ready = true; > diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c > index 2ecdaee63021..12e68b194820 100644 > --- a/drivers/cxl/pmem.c > +++ b/drivers/cxl/pmem.c > @@ -102,13 +102,18 @@ static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds, > struct nd_cmd_get_config_size *cmd, > unsigned int buf_len) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > + > + if (!cxl_mbox) > + return -ENODEV; > + > if (sizeof(*cmd) > buf_len) > return -EINVAL; > > *cmd = (struct nd_cmd_get_config_size){ > .config_size = mds->lsa_size, > .max_xfer = > - mds->payload_size - sizeof(struct cxl_mbox_set_lsa), > + cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa), > }; > > return 0; > diff --git a/include/linux/cxl/mailbox.h b/include/linux/cxl/mailbox.h > new file mode 100644 > index 000000000000..2b9e4bc1690c > --- /dev/null > +++ b/include/linux/cxl/mailbox.h > @@ -0,0 +1,28 @@ > +/* SPDX-License-Identifier: GPL-2.0-only */ > +/* Copyright(c) 2024 Intel Corporation. 
*/ > +#ifndef __CXL_MBOX_H__ > +#define __CXL_MBOX_H__ > + > +struct cxl_mbox_cmd; > + > +/** > + * struct cxl_mailbox - context for CXL mailbox operations > + * @host: device that hosts the mailbox > + * @adev: auxiliary device for fw-ctl > + * @payload_size: Size of space for payload > + * (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register) > + * @mbox_mutex: mutex protects device mailbox and firmware > + * @mbox_wait: rcuwait for mailbox > + * @mbox_send: @dev specific transport for transmitting mailbox commands > + */ > +struct cxl_mailbox { > + struct device *host; > + size_t payload_size; > + struct mutex mbox_mutex; /* lock to protect mailbox context */ > + struct rcuwait mbox_wait; > + int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd); > +}; > + > +struct cxl_mailbox *cxl_mailbox_create(struct device *dev); > + > +#endif > diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c > index eaf091a3d331..29f1a2df9122 100644 > --- a/tools/testing/cxl/test/mem.c > +++ b/tools/testing/cxl/test/mem.c > @@ -8,6 +8,7 @@ > #include <linux/delay.h> > #include <linux/sizes.h> > #include <linux/bits.h> > +#include <linux/cxl/mailbox.h> > #include <asm/unaligned.h> > #include <crypto/sha2.h> > #include <cxlmem.h> > @@ -530,6 +531,7 @@ static int mock_gsl(struct cxl_mbox_cmd *cmd) > > static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_get_log *gl = cmd->payload_in; > u32 offset = le32_to_cpu(gl->offset); > u32 length = le32_to_cpu(gl->length); > @@ -538,7 +540,7 @@ static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd) > > if (cmd->size_in < sizeof(*gl)) > return -EINVAL; > - if (length > mds->payload_size) > + if (length > cxl_mbox->payload_size) > return -EINVAL; > if (offset + length > sizeof(mock_cel)) > return -EINVAL; > @@ -613,12 +615,13 @@ void cxl_mockmem_sanitize_work(struct work_struct *work) > { 
> struct cxl_memdev_state *mds = > container_of(work, typeof(*mds), security.poll_dwork.work); > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > > - mutex_lock(&mds->mbox_mutex); > + mutex_lock(&cxl_mbox->mbox_mutex); > if (mds->security.sanitize_node) > sysfs_notify_dirent(mds->security.sanitize_node); > mds->security.sanitize_active = false; > - mutex_unlock(&mds->mbox_mutex); > + mutex_unlock(&cxl_mbox->mbox_mutex); > > dev_dbg(mds->cxlds.dev, "sanitize complete\n"); > } > @@ -627,6 +630,7 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata, > struct cxl_mbox_cmd *cmd) > { > struct cxl_memdev_state *mds = mdata->mds; > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > int rc = 0; > > if (cmd->size_in != 0) > @@ -644,14 +648,14 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata, > return -ENXIO; > } > > - mutex_lock(&mds->mbox_mutex); > + mutex_lock(&cxl_mbox->mbox_mutex); > if (schedule_delayed_work(&mds->security.poll_dwork, > msecs_to_jiffies(mdata->sanitize_timeout))) { > mds->security.sanitize_active = true; > dev_dbg(mds->cxlds.dev, "sanitize issued\n"); > } else > rc = -EBUSY; > - mutex_unlock(&mds->mbox_mutex); > + mutex_unlock(&cxl_mbox->mbox_mutex); > > return rc; > } > @@ -1330,12 +1334,13 @@ static int mock_activate_fw(struct cxl_mockmem_data *mdata, > return -EINVAL; > } > > -static int cxl_mock_mbox_send(struct cxl_memdev_state *mds, > +static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox, > struct cxl_mbox_cmd *cmd) > { > + struct device *dev = cxl_mbox->host; > + struct cxl_mockmem_data *mdata = dev_get_drvdata(dev); > + struct cxl_memdev_state *mds = mdata->mds; > struct cxl_dev_state *cxlds = &mds->cxlds; > - struct device *dev = cxlds->dev; > - struct cxl_mockmem_data *mdata = dev_get_drvdata(dev); > int rc = -EIO; > > switch (cmd->opcode) { > @@ -1450,6 +1455,19 @@ static ssize_t event_trigger_store(struct device *dev, > } > static DEVICE_ATTR_WO(event_trigger); > > +static int cxl_mock_mailbox_create(struct 
cxl_dev_state *cxlds) > +{ > + struct cxl_mailbox *cxl_mbox; > + > + cxl_mbox = cxl_mailbox_create(cxlds->dev); > + if (IS_ERR(cxl_mbox)) > + return PTR_ERR(cxl_mbox); > + > + cxlds->cxl_mbox = cxl_mbox; > + > + return 0; > +} > + > static int cxl_mock_mem_probe(struct platform_device *pdev) > { > struct device *dev = &pdev->dev; > @@ -1457,6 +1475,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev) > struct cxl_memdev_state *mds; > struct cxl_dev_state *cxlds; > struct cxl_mockmem_data *mdata; > + struct cxl_mailbox *cxl_mbox; > int rc; > > mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL); > @@ -1484,13 +1503,18 @@ static int cxl_mock_mem_probe(struct platform_device *pdev) > if (IS_ERR(mds)) > return PTR_ERR(mds); > > + cxlds = &mds->cxlds; > + rc = cxl_mock_mailbox_create(cxlds); > + if (rc) > + return rc; > + > + cxl_mbox = mds->cxlds.cxl_mbox; > mdata->mds = mds; > - mds->mbox_send = cxl_mock_mbox_send; > - mds->payload_size = SZ_4K; > + cxl_mbox->mbox_send = cxl_mock_mbox_send; > + cxl_mbox->payload_size = SZ_4K; > mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf; > INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work); > > - cxlds = &mds->cxlds; > cxlds->serial = pdev->id; > if (is_rcd(pdev)) > cxlds->rcd = true; > -- > 2.45.2 > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 1/2] cxl: Move mailbox related bits to the same context 2024-07-24 22:02 ` fan @ 2024-08-13 6:59 ` Alejandro Lucero Palau 2024-08-13 15:38 ` Dave Jiang 0 siblings, 1 reply; 13+ messages in thread From: Alejandro Lucero Palau @ 2024-08-13 6:59 UTC (permalink / raw) To: fan, Dave Jiang Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, Jonathan.Cameron, dave On 7/24/24 23:02, fan wrote: > On Wed, Jul 24, 2024 at 11:55:16AM -0700, Dave Jiang wrote: >> Create a new 'struct cxl_mailbox' and move all mailbox related bits to >> it. This allows isolation of all CXL mailbox data in order to export >> some of the calls to external callers and avoid exporting of CXL driver >> specific bits such has device states. The allocation of >> 'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the >> mailbox can be created independently. >> >> Signed-off-by: Dave Jiang <dave.jiang@intel.com> >> --- >> MAINTAINERS | 1 + >> drivers/cxl/core/mbox.c | 105 +++++++++++++++++++++++++++++------ >> drivers/cxl/core/memdev.c | 32 ++++++++--- >> drivers/cxl/cxlmem.h | 20 +++---- >> drivers/cxl/pci.c | 82 ++++++++++++++++++--------- >> drivers/cxl/pmem.c | 7 ++- >> include/linux/cxl/mailbox.h | 28 ++++++++++ >> tools/testing/cxl/test/mem.c | 46 +++++++++++---- >> 8 files changed, 249 insertions(+), 72 deletions(-) >> create mode 100644 include/linux/cxl/mailbox.h > Reviewed-by: Fan Ni <fan.ni@samsung.com> > >> diff --git a/MAINTAINERS b/MAINTAINERS >> index 958e935449e5..c809fa1eb452 100644 >> --- a/MAINTAINERS >> +++ b/MAINTAINERS >> @@ -5466,6 +5466,7 @@ S: Maintained >> F: drivers/cxl/ >> F: include/linux/einj-cxl.h >> F: include/linux/cxl-event.h >> +F: include/linux/cxl/ >> F: include/uapi/linux/cxl_mem.h >> F: tools/testing/cxl/ >> >> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c >> index 2626f3fff201..9501d2576ccd 100644 >> --- a/drivers/cxl/core/mbox.c >> +++ b/drivers/cxl/core/mbox.c >> @@ 
-244,16 +244,20 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) >> int cxl_internal_send_cmd(struct cxl_memdev_state *mds, >> struct cxl_mbox_cmd *mbox_cmd) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> size_t out_size, min_out; >> int rc; >> >> - if (mbox_cmd->size_in > mds->payload_size || >> - mbox_cmd->size_out > mds->payload_size) >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> + if (mbox_cmd->size_in > cxl_mbox->payload_size || >> + mbox_cmd->size_out > cxl_mbox->payload_size) >> return -E2BIG; >> >> out_size = mbox_cmd->size_out; >> min_out = mbox_cmd->min_out; >> - rc = mds->mbox_send(mds, mbox_cmd); >> + rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd); >> /* >> * EIO is reserved for a payload size mismatch and mbox_send() >> * may not return this error. >> @@ -353,11 +357,15 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox, >> struct cxl_memdev_state *mds, u16 opcode, >> size_t in_size, size_t out_size, u64 in_payload) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> *mbox = (struct cxl_mbox_cmd) { >> .opcode = opcode, >> .size_in = in_size, >> }; >> >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> if (in_size) { >> mbox->payload_in = vmemdup_user(u64_to_user_ptr(in_payload), >> in_size); >> @@ -374,7 +382,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox, >> >> /* Prepare to handle a full payload for variable sized output */ >> if (out_size == CXL_VARIABLE_PAYLOAD) >> - mbox->size_out = mds->payload_size; >> + mbox->size_out = cxl_mbox->payload_size; >> else >> mbox->size_out = out_size; >> >> @@ -398,6 +406,11 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd, >> const struct cxl_send_command *send_cmd, >> struct cxl_memdev_state *mds) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> + >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> if (send_cmd->raw.rsvd) >> return -EINVAL; >> >> @@ -406,7 +419,7 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd, 
>> * gets passed along without further checking, so it must be >> * validated here. >> */ >> - if (send_cmd->out.size > mds->payload_size) >> + if (send_cmd->out.size > cxl_mbox->payload_size) >> return -EINVAL; >> >> if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode)) >> @@ -494,9 +507,13 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd, >> struct cxl_memdev_state *mds, >> const struct cxl_send_command *send_cmd) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct cxl_mem_command mem_cmd; >> int rc; >> >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX) >> return -ENOTTY; >> >> @@ -505,7 +522,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd, >> * supports, but output can be arbitrarily large (simply write out as >> * much data as the hardware provides). >> */ >> - if (send_cmd->in.size > mds->payload_size) >> + if (send_cmd->in.size > cxl_mbox->payload_size) >> return -EINVAL; >> >> /* Sanitize and construct a cxl_mem_command */ >> @@ -591,9 +608,13 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds, >> u64 out_payload, s32 *size_out, >> u32 *retval) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct device *dev = mds->cxlds.dev; >> int rc; >> >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> dev_dbg(dev, >> "Submitting %s command for user\n" >> "\topcode: %x\n" >> @@ -601,7 +622,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds, >> cxl_mem_opcode_to_name(mbox_cmd->opcode), >> mbox_cmd->opcode, mbox_cmd->size_in); >> >> - rc = mds->mbox_send(mds, mbox_cmd); >> + rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd); >> if (rc) >> goto out; >> >> @@ -659,11 +680,15 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s) >> static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid, >> u32 *size, u8 *out) >> { >> + struct cxl_mailbox *cxl_mbox = 
mds->cxlds.cxl_mbox; >> u32 remaining = *size; >> u32 offset = 0; >> >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> while (remaining) { >> - u32 xfer_size = min_t(u32, remaining, mds->payload_size); >> + u32 xfer_size = min_t(u32, remaining, cxl_mbox->payload_size); >> struct cxl_mbox_cmd mbox_cmd; >> struct cxl_mbox_get_log log; >> int rc; >> @@ -752,17 +777,21 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel) >> >> static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct cxl_mbox_get_supported_logs *ret; >> struct cxl_mbox_cmd mbox_cmd; >> int rc; >> >> - ret = kvmalloc(mds->payload_size, GFP_KERNEL); >> + if (!cxl_mbox) >> + return ERR_PTR(-ENODEV); >> + >> + ret = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); >> if (!ret) >> return ERR_PTR(-ENOMEM); >> >> mbox_cmd = (struct cxl_mbox_cmd) { >> .opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS, >> - .size_out = mds->payload_size, >> + .size_out = cxl_mbox->payload_size, >> .payload_out = ret, >> /* At least the record number field must be valid */ >> .min_out = 2, >> @@ -910,6 +939,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, >> enum cxl_event_log_type log, >> struct cxl_get_event_payload *get_pl) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct cxl_mbox_clear_event_payload *payload; >> u16 total = le16_to_cpu(get_pl->record_count); >> u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; >> @@ -919,9 +949,12 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, >> int rc = 0; >> int i; >> >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> /* Payload size may limit the max handles */ >> - if (pl_size > mds->payload_size) { >> - max_handles = (mds->payload_size - sizeof(*payload)) / >> + if (pl_size > cxl_mbox->payload_size) { >> + max_handles = (cxl_mbox->payload_size - sizeof(*payload)) / >> sizeof(__le16); >> pl_size = struct_size(payload, 
handles, max_handles); >> } >> @@ -979,12 +1012,16 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, >> static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, >> enum cxl_event_log_type type) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct cxl_memdev *cxlmd = mds->cxlds.cxlmd; >> struct device *dev = mds->cxlds.dev; >> struct cxl_get_event_payload *payload; >> u8 log_type = type; >> u16 nr_rec; >> >> + if (!cxl_mbox) >> + return; >> + >> mutex_lock(&mds->event.log_lock); >> payload = mds->event.buf; >> >> @@ -995,7 +1032,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, >> .payload_in = &log_type, >> .size_in = sizeof(log_type), >> .payload_out = payload, >> - .size_out = mds->payload_size, >> + .size_out = cxl_mbox->payload_size, >> .min_out = struct_size(payload, records, 0), >> }; >> >> @@ -1328,11 +1365,15 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, >> struct cxl_region *cxlr) >> { >> struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct cxl_mbox_poison_out *po; >> struct cxl_mbox_poison_in pi; >> int nr_records = 0; >> int rc; >> >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> rc = mutex_lock_interruptible(&mds->poison.lock); >> if (rc) >> return rc; >> @@ -1346,7 +1387,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, >> .opcode = CXL_MBOX_OP_GET_POISON, >> .size_in = sizeof(pi), >> .payload_in = &pi, >> - .size_out = mds->payload_size, >> + .size_out = cxl_mbox->payload_size, >> .payload_out = po, >> .min_out = struct_size(po, record, 0), >> }; >> @@ -1382,7 +1423,12 @@ static void free_poison_buf(void *buf) >> /* Get Poison List output buffer is protected by mds->poison.lock */ >> static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds) >> { >> - mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL); >> + struct cxl_mailbox *cxl_mbox = 
mds->cxlds.cxl_mbox; >> + >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> + mds->poison.list_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); >> if (!mds->poison.list_out) >> return -ENOMEM; >> >> @@ -1408,6 +1454,34 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds) >> } >> EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); >> >> +static void free_cxl_mailbox(void *cxl_mbox) >> +{ >> + kfree(cxl_mbox); >> +} >> + >> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev) >> +{ >> + struct cxl_mailbox *cxl_mbox __free(kfree) = >> + kzalloc(sizeof(*cxl_mbox), GFP_KERNEL); >> + int rc; >> + >> + if (!cxl_mbox) >> + return ERR_PTR(-ENOMEM); >> + >> + cxl_mbox->host = dev; >> + mutex_init(&cxl_mbox->mbox_mutex); >> + rcuwait_init(&cxl_mbox->mbox_wait); >> + >> + rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox); >> + if (rc) { >> + cxl_mbox = NULL; >> + return ERR_PTR(rc); >> + } >> + >> + return no_free_ptr(cxl_mbox); >> +} >> +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL); >> + >> struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) >> { >> struct cxl_memdev_state *mds; >> @@ -1418,7 +1492,6 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) >> return ERR_PTR(-ENOMEM); >> } >> >> - mutex_init(&mds->mbox_mutex); >> mutex_init(&mds->event.log_lock); >> mds->cxlds.dev = dev; >> mds->cxlds.reg_map.host = dev; >> diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c >> index 0277726afd04..7c99f89740a9 100644 >> --- a/drivers/cxl/core/memdev.c >> +++ b/drivers/cxl/core/memdev.c >> @@ -56,9 +56,9 @@ static ssize_t payload_max_show(struct device *dev, >> struct cxl_dev_state *cxlds = cxlmd->cxlds; >> struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); >> >> - if (!mds) >> + if (!mds || !cxlds->cxl_mbox) >> return sysfs_emit(buf, "\n"); >> - return sysfs_emit(buf, "%zu\n", mds->payload_size); >> + return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox->payload_size); >> } >> static 
DEVICE_ATTR_RO(payload_max); >> >> @@ -124,15 +124,19 @@ static ssize_t security_state_show(struct device *dev, >> { >> struct cxl_memdev *cxlmd = to_cxl_memdev(dev); >> struct cxl_dev_state *cxlds = cxlmd->cxlds; >> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >> struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); >> unsigned long state = mds->security.state; >> int rc = 0; >> >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> /* sync with latest submission state */ >> - mutex_lock(&mds->mbox_mutex); >> + mutex_lock(&cxl_mbox->mbox_mutex); >> if (mds->security.sanitize_active) >> rc = sysfs_emit(buf, "sanitize\n"); >> - mutex_unlock(&mds->mbox_mutex); >> + mutex_unlock(&cxl_mbox->mbox_mutex); >> if (rc) >> return rc; >> >> @@ -829,12 +833,16 @@ static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data, >> { >> struct cxl_memdev_state *mds = fwl->dd_handle; >> struct cxl_mbox_transfer_fw *transfer; >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> + >> + if (!cxl_mbox) >> + return FW_UPLOAD_ERR_NONE; >> >> if (!size) >> return FW_UPLOAD_ERR_INVALID_SIZE; >> >> mds->fw.oneshot = struct_size(transfer, data, size) < >> - mds->payload_size; >> + cxl_mbox->payload_size; >> >> if (cxl_mem_get_fw_info(mds)) >> return FW_UPLOAD_ERR_HW_ERROR; >> @@ -854,6 +862,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, >> { >> struct cxl_memdev_state *mds = fwl->dd_handle; >> struct cxl_dev_state *cxlds = &mds->cxlds; >> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >> struct cxl_memdev *cxlmd = cxlds->cxlmd; >> struct cxl_mbox_transfer_fw *transfer; >> struct cxl_mbox_cmd mbox_cmd; >> @@ -861,6 +870,9 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, >> size_t size_in; >> int rc; >> >> + if (!cxl_mbox) >> + return FW_UPLOAD_ERR_NONE; >> + >> *written = 0; >> >> /* Offset has to be aligned to 128B (CXL-3.0 8.2.9.3.2 Table 8-57) */ >> @@ -877,7 +889,7 @@ static enum 
fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, >> * sizeof(*transfer) is 128. These constraints imply that @cur_size >> * will always be 128b aligned. >> */ >> - cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer)); >> + cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer)); >> >> remaining = size - cur_size; >> size_in = struct_size(transfer, data, cur_size); >> @@ -1059,16 +1071,20 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, CXL); >> static void sanitize_teardown_notifier(void *data) >> { >> struct cxl_memdev_state *mds = data; >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct kernfs_node *state; >> >> + if (!cxl_mbox) >> + return; >> + >> /* >> * Prevent new irq triggered invocations of the workqueue and >> * flush inflight invocations. >> */ >> - mutex_lock(&mds->mbox_mutex); >> + mutex_lock(&cxl_mbox->mbox_mutex); >> state = mds->security.sanitize_node; >> mds->security.sanitize_node = NULL; >> - mutex_unlock(&mds->mbox_mutex); >> + mutex_unlock(&cxl_mbox->mbox_mutex); >> >> cancel_delayed_work_sync(&mds->security.poll_dwork); >> sysfs_put(state); >> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h >> index af8169ccdbc0..17e83a2cf1be 100644 >> --- a/drivers/cxl/cxlmem.h >> +++ b/drivers/cxl/cxlmem.h >> @@ -3,11 +3,13 @@ >> #ifndef __CXL_MEM_H__ >> #define __CXL_MEM_H__ >> #include <uapi/linux/cxl_mem.h> >> +#include <linux/pci.h> >> #include <linux/cdev.h> >> #include <linux/uuid.h> >> #include <linux/rcuwait.h> >> #include <linux/cxl-event.h> >> #include <linux/node.h> >> +#include <linux/cxl/mailbox.h> >> #include "cxl.h" >> >> /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ >> @@ -424,6 +426,7 @@ struct cxl_dpa_perf { >> * @ram_res: Active Volatile memory capacity configuration >> * @serial: PCIe Device Serial Number >> * @type: Generic Memory Class device or Vendor Specific Memory device >> + * @cxl_mbox: CXL mailbox context >> */ >> struct cxl_dev_state { >> struct 
device *dev; >> @@ -438,8 +441,14 @@ struct cxl_dev_state { >> struct resource ram_res; >> u64 serial; >> enum cxl_devtype type; >> + struct cxl_mailbox *cxl_mbox; >> }; >> >> +static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox) >> +{ >> + return dev_get_drvdata(cxl_mbox->host); >> +} >> + >> /** >> * struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data >> * >> @@ -448,11 +457,8 @@ struct cxl_dev_state { >> * the functionality related to that like Identify Memory Device and Get >> * Partition Info >> * @cxlds: Core driver state common across Type-2 and Type-3 devices >> - * @payload_size: Size of space for payload >> - * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) >> * @lsa_size: Size of Label Storage Area >> * (CXL 2.0 8.2.9.5.1.1 Identify Memory Device) >> - * @mbox_mutex: Mutex to synchronize mailbox access. >> * @firmware_version: Firmware version for the memory device. >> * @enabled_cmds: Hardware commands found enabled in CEL. >> * @exclusive_cmds: Commands that are kernel-internal only >> @@ -470,17 +476,13 @@ struct cxl_dev_state { >> * @poison: poison driver state info >> * @security: security driver state info >> * @fw: firmware upload / activation state >> - * @mbox_wait: RCU wait for mbox send completely >> - * @mbox_send: @dev specific transport for transmitting mailbox commands >> * >> * See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for >> * details on capacity parameters. 
>> */ >> struct cxl_memdev_state { >> struct cxl_dev_state cxlds; >> - size_t payload_size; >> size_t lsa_size; >> - struct mutex mbox_mutex; /* Protects device mailbox and firmware */ >> char firmware_version[0x10]; >> DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX); >> DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX); >> @@ -500,10 +502,6 @@ struct cxl_memdev_state { >> struct cxl_poison_state poison; >> struct cxl_security_state security; >> struct cxl_fw_state fw; >> - >> - struct rcuwait mbox_wait; >> - int (*mbox_send)(struct cxl_memdev_state *mds, >> - struct cxl_mbox_cmd *cmd); >> }; >> >> static inline struct cxl_memdev_state * >> diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c >> index e53646e9f2fb..bd8ee14a7926 100644 >> --- a/drivers/cxl/pci.c >> +++ b/drivers/cxl/pci.c >> @@ -11,6 +11,7 @@ >> #include <linux/pci.h> >> #include <linux/aer.h> >> #include <linux/io.h> >> +#include <linux/cxl/mailbox.h> >> #include "cxlmem.h" >> #include "cxlpci.h" >> #include "cxl.h" >> @@ -124,6 +125,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) >> u16 opcode; >> struct cxl_dev_id *dev_id = id; >> struct cxl_dev_state *cxlds = dev_id->cxlds; >> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >> struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); >> >> if (!cxl_mbox_background_complete(cxlds)) >> @@ -132,13 +134,13 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) >> reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET); >> opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg); >> if (opcode == CXL_MBOX_OP_SANITIZE) { >> - mutex_lock(&mds->mbox_mutex); >> + mutex_lock(&cxl_mbox->mbox_mutex); >> if (mds->security.sanitize_node) >> mod_delayed_work(system_wq, &mds->security.poll_dwork, 0); >> - mutex_unlock(&mds->mbox_mutex); >> + mutex_unlock(&cxl_mbox->mbox_mutex); >> } else { >> /* short-circuit the wait in __cxl_pci_mbox_send_cmd() */ >> - rcuwait_wake_up(&mds->mbox_wait); >> + 
rcuwait_wake_up(&cxl_mbox->mbox_wait); >> } >> >> return IRQ_HANDLED; >> @@ -152,8 +154,9 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) >> struct cxl_memdev_state *mds = >> container_of(work, typeof(*mds), security.poll_dwork.work); >> struct cxl_dev_state *cxlds = &mds->cxlds; >> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >> >> - mutex_lock(&mds->mbox_mutex); >> + mutex_lock(&cxl_mbox->mbox_mutex); >> if (cxl_mbox_background_complete(cxlds)) { >> mds->security.poll_tmo_secs = 0; >> if (mds->security.sanitize_node) >> @@ -167,7 +170,7 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) >> mds->security.poll_tmo_secs = min(15 * 60, timeout); >> schedule_delayed_work(&mds->security.poll_dwork, timeout * HZ); >> } >> - mutex_unlock(&mds->mbox_mutex); >> + mutex_unlock(&cxl_mbox->mbox_mutex); >> } >> >> /** >> @@ -192,17 +195,18 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) >> * not need to coordinate with each other. The driver only uses the primary >> * mailbox. >> */ >> -static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, >> +static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox, >> struct cxl_mbox_cmd *mbox_cmd) >> { >> - struct cxl_dev_state *cxlds = &mds->cxlds; >> + struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox); >> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); >> void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET; >> struct device *dev = cxlds->dev; >> u64 cmd_reg, status_reg; >> size_t out_len; >> int rc; >> >> - lockdep_assert_held(&mds->mbox_mutex); >> + lockdep_assert_held(&cxl_mbox->mbox_mutex); >> >> /* >> * Here are the steps from 8.2.8.4 of the CXL 2.0 spec. 
>> @@ -315,10 +319,10 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, >> >> timeout = mbox_cmd->poll_interval_ms; >> for (i = 0; i < mbox_cmd->poll_count; i++) { >> - if (rcuwait_wait_event_timeout(&mds->mbox_wait, >> - cxl_mbox_background_complete(cxlds), >> - TASK_UNINTERRUPTIBLE, >> - msecs_to_jiffies(timeout)) > 0) >> + if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait, >> + cxl_mbox_background_complete(cxlds), >> + TASK_UNINTERRUPTIBLE, >> + msecs_to_jiffies(timeout)) > 0) >> break; >> } >> >> @@ -360,7 +364,7 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, >> */ >> size_t n; >> >> - n = min3(mbox_cmd->size_out, mds->payload_size, out_len); >> + n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len); >> memcpy_fromio(mbox_cmd->payload_out, payload, n); >> mbox_cmd->size_out = n; >> } else { >> @@ -370,14 +374,14 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, >> return 0; >> } >> >> -static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, >> +static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox, >> struct cxl_mbox_cmd *cmd) >> { >> int rc; >> >> - mutex_lock_io(&mds->mbox_mutex); >> - rc = __cxl_pci_mbox_send_cmd(mds, cmd); >> - mutex_unlock(&mds->mbox_mutex); >> + mutex_lock_io(&cxl_mbox->mbox_mutex); >> + rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd); >> + mutex_unlock(&cxl_mbox->mbox_mutex); >> >> return rc; >> } >> @@ -385,6 +389,7 @@ static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, >> static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) >> { >> struct cxl_dev_state *cxlds = &mds->cxlds; >> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >> const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET); >> struct device *dev = cxlds->dev; >> unsigned long timeout; >> @@ -417,8 +422,8 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) >> return -ETIMEDOUT; >> } >> >> - mds->mbox_send = cxl_pci_mbox_send; >> - 
mds->payload_size = >> + cxl_mbox->mbox_send = cxl_pci_mbox_send; >> + cxl_mbox->payload_size = >> 1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap); >> >> /* >> @@ -428,16 +433,15 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) >> * there's no point in going forward. If the size is too large, there's >> * no harm is soft limiting it. >> */ >> - mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M); >> - if (mds->payload_size < 256) { >> + cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M); >> + if (cxl_mbox->payload_size < 256) { >> dev_err(dev, "Mailbox is too small (%zub)", >> - mds->payload_size); >> + cxl_mbox->payload_size); >> return -ENXIO; >> } >> >> - dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size); >> + dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size); >> >> - rcuwait_init(&mds->mbox_wait); >> INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work); >> >> /* background command interrupts are optional */ >> @@ -578,9 +582,13 @@ static void free_event_buf(void *buf) >> */ >> static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct cxl_get_event_payload *buf; >> >> - buf = kvmalloc(mds->payload_size, GFP_KERNEL); >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> + buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); >> if (!buf) >> return -ENOMEM; >> mds->event.buf = buf; >> @@ -786,6 +794,26 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge, >> return 0; >> } >> >> +static int cxl_pci_create_mailbox(struct cxl_dev_state *cxlds) >> +{ >> + struct cxl_mailbox *cxl_mbox; >> + >> + /* >> + * Don't bother to allocate the mailbox if the mailbox register isn't >> + * there. 
>> + */ >> + if (!cxlds->reg_map.device_map.mbox.valid) >> + return -ENODEV; >> + >> + cxl_mbox = cxl_mailbox_create(cxlds->dev); >> + if (IS_ERR(cxl_mbox)) >> + return PTR_ERR(cxl_mbox); >> + >> + cxlds->cxl_mbox = cxl_mbox; >> + >> + return 0; >> +} >> + >> static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) >> { >> struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus); >> @@ -846,6 +874,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) >> if (rc) >> dev_dbg(&pdev->dev, "Failed to map RAS capability.\n"); >> >> + rc = cxl_pci_create_mailbox(cxlds); >> + if (rc) >> + return rc; >> + >> rc = cxl_await_media_ready(cxlds); >> if (rc == 0) >> cxlds->media_ready = true; >> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c >> index 2ecdaee63021..12e68b194820 100644 >> --- a/drivers/cxl/pmem.c >> +++ b/drivers/cxl/pmem.c >> @@ -102,13 +102,18 @@ static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds, >> struct nd_cmd_get_config_size *cmd, >> unsigned int buf_len) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> + >> + if (!cxl_mbox) >> + return -ENODEV; >> + >> if (sizeof(*cmd) > buf_len) >> return -EINVAL; >> >> *cmd = (struct nd_cmd_get_config_size){ >> .config_size = mds->lsa_size, >> .max_xfer = >> - mds->payload_size - sizeof(struct cxl_mbox_set_lsa), >> + cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa), >> }; >> >> return 0; >> diff --git a/include/linux/cxl/mailbox.h b/include/linux/cxl/mailbox.h >> new file mode 100644 >> index 000000000000..2b9e4bc1690c >> --- /dev/null >> +++ b/include/linux/cxl/mailbox.h >> @@ -0,0 +1,28 @@ >> +/* SPDX-License-Identifier: GPL-2.0-only */ >> +/* Copyright(c) 2024 Intel Corporation. 
*/ >> +#ifndef __CXL_MBOX_H__ >> +#define __CXL_MBOX_H__ >> + >> +struct cxl_mbox_cmd; >> + >> +/** >> + * struct cxl_mailbox - context for CXL mailbox operations >> + * @host: device that hosts the mailbox >> + * @adev: auxiliary device for fw-ctl

This field does not exist in the defined struct.

This aside:

Reviewed-by: Alejandro Lucero <alucerop@amd.com>

>> + * @payload_size: Size of space for payload >> + * (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register) >> + * @mbox_mutex: mutex protects device mailbox and firmware >> + * @mbox_wait: rcuwait for mailbox >> + * @mbox_send: @dev specific transport for transmitting mailbox commands >> + */ >> +struct cxl_mailbox { >> + struct device *host; >> + size_t payload_size; >> + struct mutex mbox_mutex; /* lock to protect mailbox context */ >> + struct rcuwait mbox_wait; >> + int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd); >> +}; >> + >> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev); >> + >> +#endif >> diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c >> index eaf091a3d331..29f1a2df9122 100644 >> --- a/tools/testing/cxl/test/mem.c >> +++ b/tools/testing/cxl/test/mem.c >> @@ -8,6 +8,7 @@ >> #include <linux/delay.h> >> #include <linux/sizes.h> >> #include <linux/bits.h> >> +#include <linux/cxl/mailbox.h> >> #include <asm/unaligned.h> >> #include <crypto/sha2.h> >> #include <cxlmem.h> >> @@ -530,6 +531,7 @@ static int mock_gsl(struct cxl_mbox_cmd *cmd) >> >> static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd) >> { >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> struct cxl_mbox_get_log *gl = cmd->payload_in; >> u32 offset = le32_to_cpu(gl->offset); >> u32 length = le32_to_cpu(gl->length); >> @@ -538,7 +540,7 @@ static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd) >> >> if (cmd->size_in < sizeof(*gl)) >> return -EINVAL; >> - if (length > mds->payload_size) >> + if (length > cxl_mbox->payload_size)
>> return -EINVAL; >> if (offset + length > sizeof(mock_cel)) >> return -EINVAL; >> @@ -613,12 +615,13 @@ void cxl_mockmem_sanitize_work(struct work_struct *work) >> { >> struct cxl_memdev_state *mds = >> container_of(work, typeof(*mds), security.poll_dwork.work); >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> >> - mutex_lock(&mds->mbox_mutex); >> + mutex_lock(&cxl_mbox->mbox_mutex); >> if (mds->security.sanitize_node) >> sysfs_notify_dirent(mds->security.sanitize_node); >> mds->security.sanitize_active = false; >> - mutex_unlock(&mds->mbox_mutex); >> + mutex_unlock(&cxl_mbox->mbox_mutex); >> >> dev_dbg(mds->cxlds.dev, "sanitize complete\n"); >> } >> @@ -627,6 +630,7 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata, >> struct cxl_mbox_cmd *cmd) >> { >> struct cxl_memdev_state *mds = mdata->mds; >> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >> int rc = 0; >> >> if (cmd->size_in != 0) >> @@ -644,14 +648,14 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata, >> return -ENXIO; >> } >> >> - mutex_lock(&mds->mbox_mutex); >> + mutex_lock(&cxl_mbox->mbox_mutex); >> if (schedule_delayed_work(&mds->security.poll_dwork, >> msecs_to_jiffies(mdata->sanitize_timeout))) { >> mds->security.sanitize_active = true; >> dev_dbg(mds->cxlds.dev, "sanitize issued\n"); >> } else >> rc = -EBUSY; >> - mutex_unlock(&mds->mbox_mutex); >> + mutex_unlock(&cxl_mbox->mbox_mutex); >> >> return rc; >> } >> @@ -1330,12 +1334,13 @@ static int mock_activate_fw(struct cxl_mockmem_data *mdata, >> return -EINVAL; >> } >> >> -static int cxl_mock_mbox_send(struct cxl_memdev_state *mds, >> +static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox, >> struct cxl_mbox_cmd *cmd) >> { >> + struct device *dev = cxl_mbox->host; >> + struct cxl_mockmem_data *mdata = dev_get_drvdata(dev); >> + struct cxl_memdev_state *mds = mdata->mds; >> struct cxl_dev_state *cxlds = &mds->cxlds; >> - struct device *dev = cxlds->dev; >> - struct cxl_mockmem_data *mdata = 
dev_get_drvdata(dev); >> int rc = -EIO; >> >> switch (cmd->opcode) { >> @@ -1450,6 +1455,19 @@ static ssize_t event_trigger_store(struct device *dev, >> } >> static DEVICE_ATTR_WO(event_trigger); >> >> +static int cxl_mock_mailbox_create(struct cxl_dev_state *cxlds) >> +{ >> + struct cxl_mailbox *cxl_mbox; >> + >> + cxl_mbox = cxl_mailbox_create(cxlds->dev); >> + if (IS_ERR(cxl_mbox)) >> + return PTR_ERR(cxl_mbox); >> + >> + cxlds->cxl_mbox = cxl_mbox; >> + >> + return 0; >> +} >> + >> static int cxl_mock_mem_probe(struct platform_device *pdev) >> { >> struct device *dev = &pdev->dev; >> @@ -1457,6 +1475,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev) >> struct cxl_memdev_state *mds; >> struct cxl_dev_state *cxlds; >> struct cxl_mockmem_data *mdata; >> + struct cxl_mailbox *cxl_mbox; >> int rc; >> >> mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL); >> @@ -1484,13 +1503,18 @@ static int cxl_mock_mem_probe(struct platform_device *pdev) >> if (IS_ERR(mds)) >> return PTR_ERR(mds); >> >> + cxlds = &mds->cxlds; >> + rc = cxl_mock_mailbox_create(cxlds); >> + if (rc) >> + return rc; >> + >> + cxl_mbox = mds->cxlds.cxl_mbox; >> mdata->mds = mds; >> - mds->mbox_send = cxl_mock_mbox_send; >> - mds->payload_size = SZ_4K; >> + cxl_mbox->mbox_send = cxl_mock_mbox_send; >> + cxl_mbox->payload_size = SZ_4K; >> mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf; >> INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work); >> >> - cxlds = &mds->cxlds; >> cxlds->serial = pdev->id; >> if (is_rcd(pdev)) >> cxlds->rcd = true; >> -- >> 2.45.2 >> ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 1/2] cxl: Move mailbox related bits to the same context 2024-08-13 6:59 ` Alejandro Lucero Palau @ 2024-08-13 15:38 ` Dave Jiang 0 siblings, 0 replies; 13+ messages in thread From: Dave Jiang @ 2024-08-13 15:38 UTC (permalink / raw) To: Alejandro Lucero Palau, fan Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, Jonathan.Cameron, dave On 8/12/24 11:59 PM, Alejandro Lucero Palau wrote: > > On 7/24/24 23:02, fan wrote: >> On Wed, Jul 24, 2024 at 11:55:16AM -0700, Dave Jiang wrote: >>> Create a new 'struct cxl_mailbox' and move all mailbox related bits to >>> it. This allows isolation of all CXL mailbox data in order to export >>> some of the calls to external callers and avoid exporting of CXL driver >>> specific bits such has device states. The allocation of >>> 'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the >>> mailbox can be created independently. >>> >>> Signed-off-by: Dave Jiang <dave.jiang@intel.com> >>> --- >>> MAINTAINERS | 1 + >>> drivers/cxl/core/mbox.c | 105 +++++++++++++++++++++++++++++------ >>> drivers/cxl/core/memdev.c | 32 ++++++++--- >>> drivers/cxl/cxlmem.h | 20 +++---- >>> drivers/cxl/pci.c | 82 ++++++++++++++++++--------- >>> drivers/cxl/pmem.c | 7 ++- >>> include/linux/cxl/mailbox.h | 28 ++++++++++ >>> tools/testing/cxl/test/mem.c | 46 +++++++++++---- >>> 8 files changed, 249 insertions(+), 72 deletions(-) >>> create mode 100644 include/linux/cxl/mailbox.h >> Reviewed-by: Fan Ni <fan.ni@samsung.com> >> >>> diff --git a/MAINTAINERS b/MAINTAINERS >>> index 958e935449e5..c809fa1eb452 100644 >>> --- a/MAINTAINERS >>> +++ b/MAINTAINERS >>> @@ -5466,6 +5466,7 @@ S: Maintained >>> F: drivers/cxl/ >>> F: include/linux/einj-cxl.h >>> F: include/linux/cxl-event.h >>> +F: include/linux/cxl/ >>> F: include/uapi/linux/cxl_mem.h >>> F: tools/testing/cxl/ >>> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c >>> index 2626f3fff201..9501d2576ccd 100644 >>> --- 
a/drivers/cxl/core/mbox.c >>> +++ b/drivers/cxl/core/mbox.c >>> @@ -244,16 +244,20 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) >>> int cxl_internal_send_cmd(struct cxl_memdev_state *mds, >>> struct cxl_mbox_cmd *mbox_cmd) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> size_t out_size, min_out; >>> int rc; >>> - if (mbox_cmd->size_in > mds->payload_size || >>> - mbox_cmd->size_out > mds->payload_size) >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> + if (mbox_cmd->size_in > cxl_mbox->payload_size || >>> + mbox_cmd->size_out > cxl_mbox->payload_size) >>> return -E2BIG; >>> out_size = mbox_cmd->size_out; >>> min_out = mbox_cmd->min_out; >>> - rc = mds->mbox_send(mds, mbox_cmd); >>> + rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd); >>> /* >>> * EIO is reserved for a payload size mismatch and mbox_send() >>> * may not return this error. >>> @@ -353,11 +357,15 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox, >>> struct cxl_memdev_state *mds, u16 opcode, >>> size_t in_size, size_t out_size, u64 in_payload) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> *mbox = (struct cxl_mbox_cmd) { >>> .opcode = opcode, >>> .size_in = in_size, >>> }; >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> if (in_size) { >>> mbox->payload_in = vmemdup_user(u64_to_user_ptr(in_payload), >>> in_size); >>> @@ -374,7 +382,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox, >>> /* Prepare to handle a full payload for variable sized output */ >>> if (out_size == CXL_VARIABLE_PAYLOAD) >>> - mbox->size_out = mds->payload_size; >>> + mbox->size_out = cxl_mbox->payload_size; >>> else >>> mbox->size_out = out_size; >>> @@ -398,6 +406,11 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd, >>> const struct cxl_send_command *send_cmd, >>> struct cxl_memdev_state *mds) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> + >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> if (send_cmd->raw.rsvd) >>> 
return -EINVAL; >>> @@ -406,7 +419,7 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd, >>> * gets passed along without further checking, so it must be >>> * validated here. >>> */ >>> - if (send_cmd->out.size > mds->payload_size) >>> + if (send_cmd->out.size > cxl_mbox->payload_size) >>> return -EINVAL; >>> if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode)) >>> @@ -494,9 +507,13 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd, >>> struct cxl_memdev_state *mds, >>> const struct cxl_send_command *send_cmd) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> struct cxl_mem_command mem_cmd; >>> int rc; >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX) >>> return -ENOTTY; >>> @@ -505,7 +522,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd, >>> * supports, but output can be arbitrarily large (simply write out as >>> * much data as the hardware provides). 
>>> */ >>> - if (send_cmd->in.size > mds->payload_size) >>> + if (send_cmd->in.size > cxl_mbox->payload_size) >>> return -EINVAL; >>> /* Sanitize and construct a cxl_mem_command */ >>> @@ -591,9 +608,13 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds, >>> u64 out_payload, s32 *size_out, >>> u32 *retval) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> struct device *dev = mds->cxlds.dev; >>> int rc; >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> dev_dbg(dev, >>> "Submitting %s command for user\n" >>> "\topcode: %x\n" >>> @@ -601,7 +622,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds, >>> cxl_mem_opcode_to_name(mbox_cmd->opcode), >>> mbox_cmd->opcode, mbox_cmd->size_in); >>> - rc = mds->mbox_send(mds, mbox_cmd); >>> + rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd); >>> if (rc) >>> goto out; >>> @@ -659,11 +680,15 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s) >>> static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid, >>> u32 *size, u8 *out) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> u32 remaining = *size; >>> u32 offset = 0; >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> while (remaining) { >>> - u32 xfer_size = min_t(u32, remaining, mds->payload_size); >>> + u32 xfer_size = min_t(u32, remaining, cxl_mbox->payload_size); >>> struct cxl_mbox_cmd mbox_cmd; >>> struct cxl_mbox_get_log log; >>> int rc; >>> @@ -752,17 +777,21 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel) >>> static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> struct cxl_mbox_get_supported_logs *ret; >>> struct cxl_mbox_cmd mbox_cmd; >>> int rc; >>> - ret = kvmalloc(mds->payload_size, GFP_KERNEL); >>> + if (!cxl_mbox) >>> + return ERR_PTR(-ENODEV); >>> + >>> + ret = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); >>> if (!ret) 
>>> return ERR_PTR(-ENOMEM); >>> mbox_cmd = (struct cxl_mbox_cmd) { >>> .opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS, >>> - .size_out = mds->payload_size, >>> + .size_out = cxl_mbox->payload_size, >>> .payload_out = ret, >>> /* At least the record number field must be valid */ >>> .min_out = 2, >>> @@ -910,6 +939,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, >>> enum cxl_event_log_type log, >>> struct cxl_get_event_payload *get_pl) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> struct cxl_mbox_clear_event_payload *payload; >>> u16 total = le16_to_cpu(get_pl->record_count); >>> u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES; >>> @@ -919,9 +949,12 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, >>> int rc = 0; >>> int i; >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> /* Payload size may limit the max handles */ >>> - if (pl_size > mds->payload_size) { >>> - max_handles = (mds->payload_size - sizeof(*payload)) / >>> + if (pl_size > cxl_mbox->payload_size) { >>> + max_handles = (cxl_mbox->payload_size - sizeof(*payload)) / >>> sizeof(__le16); >>> pl_size = struct_size(payload, handles, max_handles); >>> } >>> @@ -979,12 +1012,16 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, >>> static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, >>> enum cxl_event_log_type type) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> struct cxl_memdev *cxlmd = mds->cxlds.cxlmd; >>> struct device *dev = mds->cxlds.dev; >>> struct cxl_get_event_payload *payload; >>> u8 log_type = type; >>> u16 nr_rec; >>> + if (!cxl_mbox) >>> + return; >>> + >>> mutex_lock(&mds->event.log_lock); >>> payload = mds->event.buf; >>> @@ -995,7 +1032,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, >>> .payload_in = &log_type, >>> .size_in = sizeof(log_type), >>> .payload_out = payload, >>> - .size_out = mds->payload_size, >>> + .size_out = cxl_mbox->payload_size, >>> .min_out = 
struct_size(payload, records, 0), >>> }; >>> @@ -1328,11 +1365,15 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, >>> struct cxl_region *cxlr) >>> { >>> struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> struct cxl_mbox_poison_out *po; >>> struct cxl_mbox_poison_in pi; >>> int nr_records = 0; >>> int rc; >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> rc = mutex_lock_interruptible(&mds->poison.lock); >>> if (rc) >>> return rc; >>> @@ -1346,7 +1387,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, >>> .opcode = CXL_MBOX_OP_GET_POISON, >>> .size_in = sizeof(pi), >>> .payload_in = &pi, >>> - .size_out = mds->payload_size, >>> + .size_out = cxl_mbox->payload_size, >>> .payload_out = po, >>> .min_out = struct_size(po, record, 0), >>> }; >>> @@ -1382,7 +1423,12 @@ static void free_poison_buf(void *buf) >>> /* Get Poison List output buffer is protected by mds->poison.lock */ >>> static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds) >>> { >>> - mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL); >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> + >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> + mds->poison.list_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); >>> if (!mds->poison.list_out) >>> return -ENOMEM; >>> @@ -1408,6 +1454,34 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds) >>> } >>> EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); >>> +static void free_cxl_mailbox(void *cxl_mbox) >>> +{ >>> + kfree(cxl_mbox); >>> +} >>> + >>> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev) >>> +{ >>> + struct cxl_mailbox *cxl_mbox __free(kfree) = >>> + kzalloc(sizeof(*cxl_mbox), GFP_KERNEL); >>> + int rc; >>> + >>> + if (!cxl_mbox) >>> + return ERR_PTR(-ENOMEM); >>> + >>> + cxl_mbox->host = dev; >>> + mutex_init(&cxl_mbox->mbox_mutex); >>> + rcuwait_init(&cxl_mbox->mbox_wait); >>> + >>> + 
rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox); >>> + if (rc) { >>> + cxl_mbox = NULL; >>> + return ERR_PTR(rc); >>> + } >>> + >>> + return no_free_ptr(cxl_mbox); >>> +} >>> +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL); >>> + >>> struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) >>> { >>> struct cxl_memdev_state *mds; >>> @@ -1418,7 +1492,6 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev) >>> return ERR_PTR(-ENOMEM); >>> } >>> - mutex_init(&mds->mbox_mutex); >>> mutex_init(&mds->event.log_lock); >>> mds->cxlds.dev = dev; >>> mds->cxlds.reg_map.host = dev; >>> diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c >>> index 0277726afd04..7c99f89740a9 100644 >>> --- a/drivers/cxl/core/memdev.c >>> +++ b/drivers/cxl/core/memdev.c >>> @@ -56,9 +56,9 @@ static ssize_t payload_max_show(struct device *dev, >>> struct cxl_dev_state *cxlds = cxlmd->cxlds; >>> struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); >>> - if (!mds) >>> + if (!mds || !cxlds->cxl_mbox) >>> return sysfs_emit(buf, "\n"); >>> - return sysfs_emit(buf, "%zu\n", mds->payload_size); >>> + return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox->payload_size); >>> } >>> static DEVICE_ATTR_RO(payload_max); >>> @@ -124,15 +124,19 @@ static ssize_t security_state_show(struct device *dev, >>> { >>> struct cxl_memdev *cxlmd = to_cxl_memdev(dev); >>> struct cxl_dev_state *cxlds = cxlmd->cxlds; >>> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >>> struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); >>> unsigned long state = mds->security.state; >>> int rc = 0; >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> /* sync with latest submission state */ >>> - mutex_lock(&mds->mbox_mutex); >>> + mutex_lock(&cxl_mbox->mbox_mutex); >>> if (mds->security.sanitize_active) >>> rc = sysfs_emit(buf, "sanitize\n"); >>> - mutex_unlock(&mds->mbox_mutex); >>> + mutex_unlock(&cxl_mbox->mbox_mutex); >>> if (rc) >>> return rc; >>> @@ -829,12 +833,16 
@@ static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data, >>> { >>> struct cxl_memdev_state *mds = fwl->dd_handle; >>> struct cxl_mbox_transfer_fw *transfer; >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> + >>> + if (!cxl_mbox) >>> + return FW_UPLOAD_ERR_NONE; >>> if (!size) >>> return FW_UPLOAD_ERR_INVALID_SIZE; >>> mds->fw.oneshot = struct_size(transfer, data, size) < >>> - mds->payload_size; >>> + cxl_mbox->payload_size; >>> if (cxl_mem_get_fw_info(mds)) >>> return FW_UPLOAD_ERR_HW_ERROR; >>> @@ -854,6 +862,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, >>> { >>> struct cxl_memdev_state *mds = fwl->dd_handle; >>> struct cxl_dev_state *cxlds = &mds->cxlds; >>> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >>> struct cxl_memdev *cxlmd = cxlds->cxlmd; >>> struct cxl_mbox_transfer_fw *transfer; >>> struct cxl_mbox_cmd mbox_cmd; >>> @@ -861,6 +870,9 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, >>> size_t size_in; >>> int rc; >>> + if (!cxl_mbox) >>> + return FW_UPLOAD_ERR_NONE; >>> + >>> *written = 0; >>> /* Offset has to be aligned to 128B (CXL-3.0 8.2.9.3.2 Table 8-57) */ >>> @@ -877,7 +889,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, >>> * sizeof(*transfer) is 128. These constraints imply that @cur_size >>> * will always be 128b aligned. 
>>> */ >>> - cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer)); >>> + cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer)); >>> remaining = size - cur_size; >>> size_in = struct_size(transfer, data, cur_size); >>> @@ -1059,16 +1071,20 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, CXL); >>> static void sanitize_teardown_notifier(void *data) >>> { >>> struct cxl_memdev_state *mds = data; >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> struct kernfs_node *state; >>> + if (!cxl_mbox) >>> + return; >>> + >>> /* >>> * Prevent new irq triggered invocations of the workqueue and >>> * flush inflight invocations. >>> */ >>> - mutex_lock(&mds->mbox_mutex); >>> + mutex_lock(&cxl_mbox->mbox_mutex); >>> state = mds->security.sanitize_node; >>> mds->security.sanitize_node = NULL; >>> - mutex_unlock(&mds->mbox_mutex); >>> + mutex_unlock(&cxl_mbox->mbox_mutex); >>> cancel_delayed_work_sync(&mds->security.poll_dwork); >>> sysfs_put(state); >>> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h >>> index af8169ccdbc0..17e83a2cf1be 100644 >>> --- a/drivers/cxl/cxlmem.h >>> +++ b/drivers/cxl/cxlmem.h >>> @@ -3,11 +3,13 @@ >>> #ifndef __CXL_MEM_H__ >>> #define __CXL_MEM_H__ >>> #include <uapi/linux/cxl_mem.h> >>> +#include <linux/pci.h> >>> #include <linux/cdev.h> >>> #include <linux/uuid.h> >>> #include <linux/rcuwait.h> >>> #include <linux/cxl-event.h> >>> #include <linux/node.h> >>> +#include <linux/cxl/mailbox.h> >>> #include "cxl.h" >>> /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */ >>> @@ -424,6 +426,7 @@ struct cxl_dpa_perf { >>> * @ram_res: Active Volatile memory capacity configuration >>> * @serial: PCIe Device Serial Number >>> * @type: Generic Memory Class device or Vendor Specific Memory device >>> + * @cxl_mbox: CXL mailbox context >>> */ >>> struct cxl_dev_state { >>> struct device *dev; >>> @@ -438,8 +441,14 @@ struct cxl_dev_state { >>> struct resource ram_res; >>> u64 serial; >>> enum cxl_devtype 
type; >>> + struct cxl_mailbox *cxl_mbox; >>> }; >>> +static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox) >>> +{ >>> + return dev_get_drvdata(cxl_mbox->host); >>> +} >>> + >>> /** >>> * struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data >>> * >>> @@ -448,11 +457,8 @@ struct cxl_dev_state { >>> * the functionality related to that like Identify Memory Device and Get >>> * Partition Info >>> * @cxlds: Core driver state common across Type-2 and Type-3 devices >>> - * @payload_size: Size of space for payload >>> - * (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register) >>> * @lsa_size: Size of Label Storage Area >>> * (CXL 2.0 8.2.9.5.1.1 Identify Memory Device) >>> - * @mbox_mutex: Mutex to synchronize mailbox access. >>> * @firmware_version: Firmware version for the memory device. >>> * @enabled_cmds: Hardware commands found enabled in CEL. >>> * @exclusive_cmds: Commands that are kernel-internal only >>> @@ -470,17 +476,13 @@ struct cxl_dev_state { >>> * @poison: poison driver state info >>> * @security: security driver state info >>> * @fw: firmware upload / activation state >>> - * @mbox_wait: RCU wait for mbox send completely >>> - * @mbox_send: @dev specific transport for transmitting mailbox commands >>> * >>> * See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for >>> * details on capacity parameters. 
>>> */ >>> struct cxl_memdev_state { >>> struct cxl_dev_state cxlds; >>> - size_t payload_size; >>> size_t lsa_size; >>> - struct mutex mbox_mutex; /* Protects device mailbox and firmware */ >>> char firmware_version[0x10]; >>> DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX); >>> DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX); >>> @@ -500,10 +502,6 @@ struct cxl_memdev_state { >>> struct cxl_poison_state poison; >>> struct cxl_security_state security; >>> struct cxl_fw_state fw; >>> - >>> - struct rcuwait mbox_wait; >>> - int (*mbox_send)(struct cxl_memdev_state *mds, >>> - struct cxl_mbox_cmd *cmd); >>> }; >>> static inline struct cxl_memdev_state * >>> diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c >>> index e53646e9f2fb..bd8ee14a7926 100644 >>> --- a/drivers/cxl/pci.c >>> +++ b/drivers/cxl/pci.c >>> @@ -11,6 +11,7 @@ >>> #include <linux/pci.h> >>> #include <linux/aer.h> >>> #include <linux/io.h> >>> +#include <linux/cxl/mailbox.h> >>> #include "cxlmem.h" >>> #include "cxlpci.h" >>> #include "cxl.h" >>> @@ -124,6 +125,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) >>> u16 opcode; >>> struct cxl_dev_id *dev_id = id; >>> struct cxl_dev_state *cxlds = dev_id->cxlds; >>> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >>> struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); >>> if (!cxl_mbox_background_complete(cxlds)) >>> @@ -132,13 +134,13 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id) >>> reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET); >>> opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg); >>> if (opcode == CXL_MBOX_OP_SANITIZE) { >>> - mutex_lock(&mds->mbox_mutex); >>> + mutex_lock(&cxl_mbox->mbox_mutex); >>> if (mds->security.sanitize_node) >>> mod_delayed_work(system_wq, &mds->security.poll_dwork, 0); >>> - mutex_unlock(&mds->mbox_mutex); >>> + mutex_unlock(&cxl_mbox->mbox_mutex); >>> } else { >>> /* short-circuit the wait in __cxl_pci_mbox_send_cmd() */ >>> - 
rcuwait_wake_up(&mds->mbox_wait); >>> + rcuwait_wake_up(&cxl_mbox->mbox_wait); >>> } >>> return IRQ_HANDLED; >>> @@ -152,8 +154,9 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) >>> struct cxl_memdev_state *mds = >>> container_of(work, typeof(*mds), security.poll_dwork.work); >>> struct cxl_dev_state *cxlds = &mds->cxlds; >>> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >>> - mutex_lock(&mds->mbox_mutex); >>> + mutex_lock(&cxl_mbox->mbox_mutex); >>> if (cxl_mbox_background_complete(cxlds)) { >>> mds->security.poll_tmo_secs = 0; >>> if (mds->security.sanitize_node) >>> @@ -167,7 +170,7 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) >>> mds->security.poll_tmo_secs = min(15 * 60, timeout); >>> schedule_delayed_work(&mds->security.poll_dwork, timeout * HZ); >>> } >>> - mutex_unlock(&mds->mbox_mutex); >>> + mutex_unlock(&cxl_mbox->mbox_mutex); >>> } >>> /** >>> @@ -192,17 +195,18 @@ static void cxl_mbox_sanitize_work(struct work_struct *work) >>> * not need to coordinate with each other. The driver only uses the primary >>> * mailbox. >>> */ >>> -static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, >>> +static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox, >>> struct cxl_mbox_cmd *mbox_cmd) >>> { >>> - struct cxl_dev_state *cxlds = &mds->cxlds; >>> + struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox); >>> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds); >>> void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET; >>> struct device *dev = cxlds->dev; >>> u64 cmd_reg, status_reg; >>> size_t out_len; >>> int rc; >>> - lockdep_assert_held(&mds->mbox_mutex); >>> + lockdep_assert_held(&cxl_mbox->mbox_mutex); >>> /* >>> * Here are the steps from 8.2.8.4 of the CXL 2.0 spec. 
>>> @@ -315,10 +319,10 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, >>> timeout = mbox_cmd->poll_interval_ms; >>> for (i = 0; i < mbox_cmd->poll_count; i++) { >>> - if (rcuwait_wait_event_timeout(&mds->mbox_wait, >>> - cxl_mbox_background_complete(cxlds), >>> - TASK_UNINTERRUPTIBLE, >>> - msecs_to_jiffies(timeout)) > 0) >>> + if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait, >>> + cxl_mbox_background_complete(cxlds), >>> + TASK_UNINTERRUPTIBLE, >>> + msecs_to_jiffies(timeout)) > 0) >>> break; >>> } >>> @@ -360,7 +364,7 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, >>> */ >>> size_t n; >>> - n = min3(mbox_cmd->size_out, mds->payload_size, out_len); >>> + n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len); >>> memcpy_fromio(mbox_cmd->payload_out, payload, n); >>> mbox_cmd->size_out = n; >>> } else { >>> @@ -370,14 +374,14 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds, >>> return 0; >>> } >>> -static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, >>> +static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox, >>> struct cxl_mbox_cmd *cmd) >>> { >>> int rc; >>> - mutex_lock_io(&mds->mbox_mutex); >>> - rc = __cxl_pci_mbox_send_cmd(mds, cmd); >>> - mutex_unlock(&mds->mbox_mutex); >>> + mutex_lock_io(&cxl_mbox->mbox_mutex); >>> + rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd); >>> + mutex_unlock(&cxl_mbox->mbox_mutex); >>> return rc; >>> } >>> @@ -385,6 +389,7 @@ static int cxl_pci_mbox_send(struct cxl_memdev_state *mds, >>> static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) >>> { >>> struct cxl_dev_state *cxlds = &mds->cxlds; >>> + struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox; >>> const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET); >>> struct device *dev = cxlds->dev; >>> unsigned long timeout; >>> @@ -417,8 +422,8 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) >>> return -ETIMEDOUT; >>> } >>> - mds->mbox_send = 
cxl_pci_mbox_send; >>> - mds->payload_size = >>> + cxl_mbox->mbox_send = cxl_pci_mbox_send; >>> + cxl_mbox->payload_size = >>> 1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap); >>> /* >>> @@ -428,16 +433,15 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) >>> * there's no point in going forward. If the size is too large, there's >>> * no harm is soft limiting it. >>> */ >>> - mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M); >>> - if (mds->payload_size < 256) { >>> + cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M); >>> + if (cxl_mbox->payload_size < 256) { >>> dev_err(dev, "Mailbox is too small (%zub)", >>> - mds->payload_size); >>> + cxl_mbox->payload_size); >>> return -ENXIO; >>> } >>> - dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size); >>> + dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size); >>> - rcuwait_init(&mds->mbox_wait); >>> INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work); >>> /* background command interrupts are optional */ >>> @@ -578,9 +582,13 @@ static void free_event_buf(void *buf) >>> */ >>> static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> struct cxl_get_event_payload *buf; >>> - buf = kvmalloc(mds->payload_size, GFP_KERNEL); >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> + buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL); >>> if (!buf) >>> return -ENOMEM; >>> mds->event.buf = buf; >>> @@ -786,6 +794,26 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge, >>> return 0; >>> } >>> +static int cxl_pci_create_mailbox(struct cxl_dev_state *cxlds) >>> +{ >>> + struct cxl_mailbox *cxl_mbox; >>> + >>> + /* >>> + * Don't bother to allocate the mailbox if the mailbox register isn't >>> + * there. 
>>> + */ >>> + if (!cxlds->reg_map.device_map.mbox.valid) >>> + return -ENODEV; >>> + >>> + cxl_mbox = cxl_mailbox_create(cxlds->dev); >>> + if (IS_ERR(cxl_mbox)) >>> + return PTR_ERR(cxl_mbox); >>> + >>> + cxlds->cxl_mbox = cxl_mbox; >>> + >>> + return 0; >>> +} >>> + >>> static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) >>> { >>> struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus); >>> @@ -846,6 +874,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id) >>> if (rc) >>> dev_dbg(&pdev->dev, "Failed to map RAS capability.\n"); >>> + rc = cxl_pci_create_mailbox(cxlds); >>> + if (rc) >>> + return rc; >>> + >>> rc = cxl_await_media_ready(cxlds); >>> if (rc == 0) >>> cxlds->media_ready = true; >>> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c >>> index 2ecdaee63021..12e68b194820 100644 >>> --- a/drivers/cxl/pmem.c >>> +++ b/drivers/cxl/pmem.c >>> @@ -102,13 +102,18 @@ static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds, >>> struct nd_cmd_get_config_size *cmd, >>> unsigned int buf_len) >>> { >>> + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; >>> + >>> + if (!cxl_mbox) >>> + return -ENODEV; >>> + >>> if (sizeof(*cmd) > buf_len) >>> return -EINVAL; >>> *cmd = (struct nd_cmd_get_config_size){ >>> .config_size = mds->lsa_size, >>> .max_xfer = >>> - mds->payload_size - sizeof(struct cxl_mbox_set_lsa), >>> + cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa), >>> }; >>> return 0; >>> diff --git a/include/linux/cxl/mailbox.h b/include/linux/cxl/mailbox.h >>> new file mode 100644 >>> index 000000000000..2b9e4bc1690c >>> --- /dev/null >>> +++ b/include/linux/cxl/mailbox.h >>> @@ -0,0 +1,28 @@ >>> +/* SPDX-License-Identifier: GPL-2.0-only */ >>> +/* Copyright(c) 2024 Intel Corporation. 
*/
>>> +#ifndef __CXL_MBOX_H__
>>> +#define __CXL_MBOX_H__
>>> +
>>> +struct cxl_mbox_cmd;
>>> +
>>> +/**
>>> + * struct cxl_mailbox - context for CXL mailbox operations
>>> + * @host: device that hosts the mailbox
>>> + * @adev: auxiliary device for fw-ctl
>
>
> This field does not exist in the defined struct.

Ah yeah copy/paste error when I pulled code from the fwctl code branch.
If you end up picking up this code for your type2 work, please feel
free to delete that.

DJ

>
> This aside:
>
> Reviewed-by: Alejandro Lucero <alucerop@amd.com>
>
>
>>> + * @payload_size: Size of space for payload
>>> + *                (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register)
>>> + * @mbox_mutex: mutex protects device mailbox and firmware
>>> + * @mbox_wait: rcuwait for mailbox
>>> + * @mbox_send: @dev specific transport for transmitting mailbox commands
>>> + */
>>> +struct cxl_mailbox {
>>> +        struct device *host;
>>> +        size_t payload_size;
>>> +        struct mutex mbox_mutex; /* lock to protect mailbox context */
>>> +        struct rcuwait mbox_wait;
>>> +        int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd);
>>> +};
>>> +
>>> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev);
>>> +
>>> +#endif
>>> diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
>>> index eaf091a3d331..29f1a2df9122 100644
>>> --- a/tools/testing/cxl/test/mem.c
>>> +++ b/tools/testing/cxl/test/mem.c
>>> @@ -8,6 +8,7 @@
>>>  #include <linux/delay.h>
>>>  #include <linux/sizes.h>
>>>  #include <linux/bits.h>
>>> +#include <linux/cxl/mailbox.h>
>>>  #include <asm/unaligned.h>
>>>  #include <crypto/sha2.h>
>>>  #include <cxlmem.h>
>>> @@ -530,6 +531,7 @@ static int mock_gsl(struct cxl_mbox_cmd *cmd)
>>>  static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
>>>  {
>>> +        struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>>>          struct cxl_mbox_get_log *gl = cmd->payload_in;
>>>          u32 offset = le32_to_cpu(gl->offset);
>>>          u32 length = le32_to_cpu(gl->length);
>>> @@ -538,7
+540,7 @@ static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
>>>          if (cmd->size_in < sizeof(*gl))
>>>                  return -EINVAL;
>>> -        if (length > mds->payload_size)
>>> +        if (length > cxl_mbox->payload_size)
>>>                  return -EINVAL;
>>>          if (offset + length > sizeof(mock_cel))
>>>                  return -EINVAL;
>>> @@ -613,12 +615,13 @@ void cxl_mockmem_sanitize_work(struct work_struct *work)
>>>  {
>>>          struct cxl_memdev_state *mds =
>>>                  container_of(work, typeof(*mds), security.poll_dwork.work);
>>> +        struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>>>
>>> -        mutex_lock(&mds->mbox_mutex);
>>> +        mutex_lock(&cxl_mbox->mbox_mutex);
>>>          if (mds->security.sanitize_node)
>>>                  sysfs_notify_dirent(mds->security.sanitize_node);
>>>          mds->security.sanitize_active = false;
>>> -        mutex_unlock(&mds->mbox_mutex);
>>> +        mutex_unlock(&cxl_mbox->mbox_mutex);
>>>
>>>          dev_dbg(mds->cxlds.dev, "sanitize complete\n");
>>>  }
>>> @@ -627,6 +630,7 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
>>>                           struct cxl_mbox_cmd *cmd)
>>>  {
>>>          struct cxl_memdev_state *mds = mdata->mds;
>>> +        struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>>>          int rc = 0;
>>>
>>>          if (cmd->size_in != 0)
>>> @@ -644,14 +648,14 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
>>>                  return -ENXIO;
>>>          }
>>>
>>> -        mutex_lock(&mds->mbox_mutex);
>>> +        mutex_lock(&cxl_mbox->mbox_mutex);
>>>          if (schedule_delayed_work(&mds->security.poll_dwork,
>>>                                    msecs_to_jiffies(mdata->sanitize_timeout))) {
>>>                  mds->security.sanitize_active = true;
>>>                  dev_dbg(mds->cxlds.dev, "sanitize issued\n");
>>>          } else
>>>                  rc = -EBUSY;
>>> -        mutex_unlock(&mds->mbox_mutex);
>>> +        mutex_unlock(&cxl_mbox->mbox_mutex);
>>>
>>>          return rc;
>>>  }
>>> @@ -1330,12 +1334,13 @@ static int mock_activate_fw(struct cxl_mockmem_data *mdata,
>>>          return -EINVAL;
>>>  }
>>>
>>> -static int cxl_mock_mbox_send(struct cxl_memdev_state *mds,
>>> +static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
>>>                                struct cxl_mbox_cmd *cmd)
>>>  {
>>> +        struct device *dev =
cxl_mbox->host;
>>> +        struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
>>> +        struct cxl_memdev_state *mds = mdata->mds;
>>>          struct cxl_dev_state *cxlds = &mds->cxlds;
>>> -        struct device *dev = cxlds->dev;
>>> -        struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
>>>          int rc = -EIO;
>>>
>>>          switch (cmd->opcode) {
>>> @@ -1450,6 +1455,19 @@ static ssize_t event_trigger_store(struct device *dev,
>>>  }
>>>  static DEVICE_ATTR_WO(event_trigger);
>>>
>>> +static int cxl_mock_mailbox_create(struct cxl_dev_state *cxlds)
>>> +{
>>> +        struct cxl_mailbox *cxl_mbox;
>>> +
>>> +        cxl_mbox = cxl_mailbox_create(cxlds->dev);
>>> +        if (IS_ERR(cxl_mbox))
>>> +                return PTR_ERR(cxl_mbox);
>>> +
>>> +        cxlds->cxl_mbox = cxl_mbox;
>>> +
>>> +        return 0;
>>> +}
>>> +
>>>  static int cxl_mock_mem_probe(struct platform_device *pdev)
>>>  {
>>>          struct device *dev = &pdev->dev;
>>> @@ -1457,6 +1475,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
>>>          struct cxl_memdev_state *mds;
>>>          struct cxl_dev_state *cxlds;
>>>          struct cxl_mockmem_data *mdata;
>>> +        struct cxl_mailbox *cxl_mbox;
>>>          int rc;
>>>
>>>          mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL);
>>> @@ -1484,13 +1503,18 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
>>>          if (IS_ERR(mds))
>>>                  return PTR_ERR(mds);
>>>
>>> +        cxlds = &mds->cxlds;
>>> +        rc = cxl_mock_mailbox_create(cxlds);
>>> +        if (rc)
>>> +                return rc;
>>> +
>>> +        cxl_mbox = mds->cxlds.cxl_mbox;
>>>          mdata->mds = mds;
>>> -        mds->mbox_send = cxl_mock_mbox_send;
>>> -        mds->payload_size = SZ_4K;
>>> +        cxl_mbox->mbox_send = cxl_mock_mbox_send;
>>> +        cxl_mbox->payload_size = SZ_4K;
>>>          mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
>>>          INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work);
>>>
>>> -        cxlds = &mds->cxlds;
>>>          cxlds->serial = pdev->id;
>>>          if (is_rcd(pdev))
>>>                  cxlds->rcd = true;
>>> --
>>> 2.45.2
>>>

^ permalink raw reply	[flat|nested] 13+ messages in thread
* Re: [PATCH 1/2] cxl: Move mailbox related bits to the same context
  2024-07-24 18:55 ` [PATCH 1/2] cxl: Move mailbox related bits to the same context Dave Jiang
  2024-07-24 22:02   ` fan
@ 2024-08-15 16:57   ` Jonathan Cameron
  2024-08-15 17:10     ` Dave Jiang
  1 sibling, 1 reply; 13+ messages in thread
From: Jonathan Cameron @ 2024-08-15 16:57 UTC (permalink / raw)
  To: Dave Jiang
  Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny,
	vishal.l.verma, alison.schofield, dave

On Wed, 24 Jul 2024 11:55:16 -0700
Dave Jiang <dave.jiang@intel.com> wrote:

> Create a new 'struct cxl_mailbox' and move all mailbox related bits to
> it. This allows isolation of all CXL mailbox data in order to export
> some of the calls to external callers and avoid exporting of CXL driver
> specific bits such as device states. The allocation of
> 'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the
> mailbox can be created independently.
>
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>

Hi Dave,

Mostly good, but cxl_mailbox_create() needs another look.
Feels like a bit of cut and paste gone too far ;)

I'm not entirely convinced it shouldn't just remain embedded in
the parent structure, with create replaced by just an init()
function. I don't mind that much though.

Jonathan

> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 2626f3fff201..9501d2576ccd 100644

> @@ -1408,6 +1454,34 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds)
>  }
>  EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL);
>
> +static void free_cxl_mailbox(void *cxl_mbox)
> +{
> +        kfree(cxl_mbox);
> +}
> +
> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev)
> +{
> +        struct cxl_mailbox *cxl_mbox __free(kfree) =
> +                kzalloc(sizeof(*cxl_mbox), GFP_KERNEL);

This is not somewhere I'd use the __free magic, because
failure to create a mailbox is very likely fatal and doing
so requires the dance with devm_add_action_or_reset() to
just free a structure.
So I'd just use devm_kzalloc()
and rely on caller to fail probe() or if not to do manual
cleanup of the structure.

There aren't any failure paths currently where the auto cleanup
gets used anyway other than the devm_add_action_or_reset()
which is not using auto cleanup anyway.

> +        int rc;
> +
> +        if (!cxl_mbox)
> +                return ERR_PTR(-ENOMEM);
> +
> +        cxl_mbox->host = dev;
> +        mutex_init(&cxl_mbox->mbox_mutex);
> +        rcuwait_init(&cxl_mbox->mbox_wait);
> +
> +        rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox);
> +        if (rc) {
> +                cxl_mbox = NULL;

Non obvious as that's a suppression of what would otherwise
be a double free. Having to do this rather defeats the point
of the simplification __free normally brings.

> +                return ERR_PTR(rc);
> +        }
> +
> +        return no_free_ptr(cxl_mbox);
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL);

^ permalink raw reply	[flat|nested] 13+ messages in thread
* Re: [PATCH 1/2] cxl: Move mailbox related bits to the same context 2024-08-15 16:57 ` Jonathan Cameron @ 2024-08-15 17:10 ` Dave Jiang 2024-08-27 15:27 ` Jonathan Cameron 0 siblings, 1 reply; 13+ messages in thread From: Dave Jiang @ 2024-08-15 17:10 UTC (permalink / raw) To: Jonathan Cameron Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, dave On 8/15/24 9:57 AM, Jonathan Cameron wrote: > On Wed, 24 Jul 2024 11:55:16 -0700 > Dave Jiang <dave.jiang@intel.com> wrote: > >> Create a new 'struct cxl_mailbox' and move all mailbox related bits to >> it. This allows isolation of all CXL mailbox data in order to export >> some of the calls to external callers and avoid exporting of CXL driver >> specific bits such has device states. The allocation of >> 'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the >> mailbox can be created independently. >> >> Signed-off-by: Dave Jiang <dave.jiang@intel.com> > Hi Dave, > > Mostly good, but cxl_mailbox_create() needs another look. > Feels like a bit of cut and paste gone too far ;) > > I'm not entirely convinced it shouldn't just remain embedded in > the parent structure with create replace with just an init() > function. I don't mind that much though. I made it a pointer to deal with something like a type2 device without mailbox at all. So I feel like it needs to stand on its own.... but are you saying we should just keep it as an embedded struct in the device context whether a mailbox exists or not? Also, probably need to start thinking about this. Once PCIe MMIO mailbox shows up, we will want to move the register pointer into the mailbox as well to make it independent and having a shared struct between CXL and PCIe. 
It'll probably end up looking like: struct cxl_mailbox { struct mmio_mailbox mbox; }; > > Jonathan > > >> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c >> index 2626f3fff201..9501d2576ccd 100644 > > >> @@ -1408,6 +1454,34 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds) >> } >> EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); >> >> +static void free_cxl_mailbox(void *cxl_mbox) >> +{ >> + kfree(cxl_mbox); >> +} >> + >> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev) >> +{ >> + struct cxl_mailbox *cxl_mbox __free(kfree) = >> + kzalloc(sizeof(*cxl_mbox), GFP_KERNEL); > This is not somewhere I'd use the __free magic, because > failure to create a mailbox is very likely fatal and doing > so requires the dance with devm_add_action_or_reset() to > just free a structure. > > So I'd just use devm_kzalloc() > and rely on caller to fail probe() or if not to do manual > cleanup of the structure. > > There aren't any failure paths currently where the auto cleanup > gets used anyway other than the devm_add_action_or_reset() > which is not using auto cleanup anyway. > >> + int rc; >> + >> + if (!cxl_mbox) >> + return ERR_PTR(-ENOMEM); >> + >> + cxl_mbox->host = dev; >> + mutex_init(&cxl_mbox->mbox_mutex); >> + rcuwait_init(&cxl_mbox->mbox_wait); >> + >> + rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox); >> + if (rc) { >> + cxl_mbox = NULL; > Non obvious as that's a suppression of what would otherwise > be a double free. Having to do this rather defeats the point > of the simplification __free normally brings. > >> + return ERR_PTR(rc); >> + } >> + >> + return no_free_ptr(cxl_mbox); >> +} >> +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL); ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 1/2] cxl: Move mailbox related bits to the same context 2024-08-15 17:10 ` Dave Jiang @ 2024-08-27 15:27 ` Jonathan Cameron 2024-08-27 21:48 ` Dave Jiang 0 siblings, 1 reply; 13+ messages in thread From: Jonathan Cameron @ 2024-08-27 15:27 UTC (permalink / raw) To: Dave Jiang Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, dave On Thu, 15 Aug 2024 10:10:23 -0700 Dave Jiang <dave.jiang@intel.com> wrote: > On 8/15/24 9:57 AM, Jonathan Cameron wrote: > > On Wed, 24 Jul 2024 11:55:16 -0700 > > Dave Jiang <dave.jiang@intel.com> wrote: > > > >> Create a new 'struct cxl_mailbox' and move all mailbox related bits to > >> it. This allows isolation of all CXL mailbox data in order to export > >> some of the calls to external callers and avoid exporting of CXL driver > >> specific bits such has device states. The allocation of > >> 'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the > >> mailbox can be created independently. > >> > >> Signed-off-by: Dave Jiang <dave.jiang@intel.com> > > Hi Dave, > > > > Mostly good, but cxl_mailbox_create() needs another look. > > Feels like a bit of cut and paste gone too far ;) > > > > I'm not entirely convinced it shouldn't just remain embedded in > > the parent structure with create replace with just an init() > > function. I don't mind that much though. > > I made it a pointer to deal with something like a type2 device without mailbox at all. So I feel like it needs to stand on its own.... but are you saying we should just keep it as an embedded struct in the device context whether a mailbox exists or not? It can be optional for a given driver, so type 3 just embeds it, a type 2 with the support also embeds and init() the mailbox. A type 2 without doesn't embed it in the driver specific structures and obviously doesn't init what it doesn't have. So just like a mutex inside a driver specific struct and similar things. 
> > Also, probably need to start thinking about this. Once PCIe MMIO mailbox shows up, we will want to move the register pointer into the mailbox as well to make it independent and having a shared struct between CXL and PCIe. It'll probably end up looking like: > > struct cxl_mailbox { > struct mmio_mailbox mbox; > }; Yeah. I'd expect a nest like this with container_of() magic used where appropriate to get to the parents and probably a few callbacks set in the mbox to tie everything together. Jonathan > > > > > Jonathan > > > > > >> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > >> index 2626f3fff201..9501d2576ccd 100644 > > > > > >> @@ -1408,6 +1454,34 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds) > >> } > >> EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); > >> > >> +static void free_cxl_mailbox(void *cxl_mbox) > >> +{ > >> + kfree(cxl_mbox); > >> +} > >> + > >> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev) > >> +{ > >> + struct cxl_mailbox *cxl_mbox __free(kfree) = > >> + kzalloc(sizeof(*cxl_mbox), GFP_KERNEL); > > This is not somewhere I'd use the __free magic, because > > failure to create a mailbox is very likely fatal and doing > > so requires the dance with devm_add_action_or_reset() to > > just free a structure. > > > > So I'd just use devm_kzalloc() > > and rely on caller to fail probe() or if not to do manual > > cleanup of the structure. > > > > There aren't any failure paths currently where the auto cleanup > > gets used anyway other than the devm_add_action_or_reset() > > which is not using auto cleanup anyway. 
> > > >> + int rc; > >> + > >> + if (!cxl_mbox) > >> + return ERR_PTR(-ENOMEM); > >> + > >> + cxl_mbox->host = dev; > >> + mutex_init(&cxl_mbox->mbox_mutex); > >> + rcuwait_init(&cxl_mbox->mbox_wait); > >> + > >> + rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox); > >> + if (rc) { > >> + cxl_mbox = NULL; > > Non obvious as that's a suppression of what would otherwise > > be a double free. Having to do this rather defeats the point > > of the simplification __free normally brings. > > > >> + return ERR_PTR(rc); > >> + } > >> + > >> + return no_free_ptr(cxl_mbox); > >> +} > >> +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL); > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 1/2] cxl: Move mailbox related bits to the same context 2024-08-27 15:27 ` Jonathan Cameron @ 2024-08-27 21:48 ` Dave Jiang 0 siblings, 0 replies; 13+ messages in thread From: Dave Jiang @ 2024-08-27 21:48 UTC (permalink / raw) To: Jonathan Cameron Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, dave On 8/27/24 8:27 AM, Jonathan Cameron wrote: > On Thu, 15 Aug 2024 10:10:23 -0700 > Dave Jiang <dave.jiang@intel.com> wrote: > >> On 8/15/24 9:57 AM, Jonathan Cameron wrote: >>> On Wed, 24 Jul 2024 11:55:16 -0700 >>> Dave Jiang <dave.jiang@intel.com> wrote: >>> >>>> Create a new 'struct cxl_mailbox' and move all mailbox related bits to >>>> it. This allows isolation of all CXL mailbox data in order to export >>>> some of the calls to external callers and avoid exporting of CXL driver >>>> specific bits such has device states. The allocation of >>>> 'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the >>>> mailbox can be created independently. >>>> >>>> Signed-off-by: Dave Jiang <dave.jiang@intel.com> >>> Hi Dave, >>> >>> Mostly good, but cxl_mailbox_create() needs another look. >>> Feels like a bit of cut and paste gone too far ;) >>> >>> I'm not entirely convinced it shouldn't just remain embedded in >>> the parent structure with create replace with just an init() >>> function. I don't mind that much though. >> >> I made it a pointer to deal with something like a type2 device without mailbox at all. So I feel like it needs to stand on its own.... but are you saying we should just keep it as an embedded struct in the device context whether a mailbox exists or not? > > It can be optional for a given driver, so type 3 just embeds > it, a type 2 with the support also embeds and init() the mailbox. > A type 2 without doesn't embed it in the driver specific structures > and obviously doesn't init what it doesn't have. I'll push out a v2 with it embedded. If Alejandro doesn't mind the churn. 
DJ > > So just like a mutex inside a driver specific struct > and similar things. >> >> Also, probably need to start thinking about this. Once PCIe MMIO mailbox shows up, we will want to move the register pointer into the mailbox as well to make it independent and having a shared struct between CXL and PCIe. It'll probably end up looking like: >> >> struct cxl_mailbox { >> struct mmio_mailbox mbox; >> }; > Yeah. I'd expect a nest like this with container_of() magic used where > appropriate to get to the parents and probably a few callbacks > set in the mbox to tie everything together. > > Jonathan > >> >>> >>> Jonathan >>> >>> >>>> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c >>>> index 2626f3fff201..9501d2576ccd 100644 >>> >>> >>>> @@ -1408,6 +1454,34 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds) >>>> } >>>> EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL); >>>> >>>> +static void free_cxl_mailbox(void *cxl_mbox) >>>> +{ >>>> + kfree(cxl_mbox); >>>> +} >>>> + >>>> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev) >>>> +{ >>>> + struct cxl_mailbox *cxl_mbox __free(kfree) = >>>> + kzalloc(sizeof(*cxl_mbox), GFP_KERNEL); >>> This is not somewhere I'd use the __free magic, because >>> failure to create a mailbox is very likely fatal and doing >>> so requires the dance with devm_add_action_or_reset() to >>> just free a structure. >>> >>> So I'd just use devm_kzalloc() >>> and rely on caller to fail probe() or if not to do manual >>> cleanup of the structure. >>> >>> There aren't any failure paths currently where the auto cleanup >>> gets used anyway other than the devm_add_action_or_reset() >>> which is not using auto cleanup anyway. 
>>> >>>> + int rc; >>>> + >>>> + if (!cxl_mbox) >>>> + return ERR_PTR(-ENOMEM); >>>> + >>>> + cxl_mbox->host = dev; >>>> + mutex_init(&cxl_mbox->mbox_mutex); >>>> + rcuwait_init(&cxl_mbox->mbox_wait); >>>> + >>>> + rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox); >>>> + if (rc) { >>>> + cxl_mbox = NULL; >>> Non obvious as that's a suppression of what would otherwise >>> be a double free. Having to do this rather defeats the point >>> of the simplification __free normally brings. >>> >>>> + return ERR_PTR(rc); >>>> + } >>>> + >>>> + return no_free_ptr(cxl_mbox); >>>> +} >>>> +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL); >> > ^ permalink raw reply [flat|nested] 13+ messages in thread
* [PATCH 2/2] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input 2024-07-24 18:55 [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state Dave Jiang 2024-07-24 18:55 ` [PATCH 1/2] cxl: Move mailbox related bits to the same context Dave Jiang @ 2024-07-24 18:55 ` Dave Jiang 2024-07-24 22:08 ` fan 2024-08-15 17:00 ` Jonathan Cameron 2024-08-13 7:11 ` [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state Alejandro Lucero Palau 2 siblings, 2 replies; 13+ messages in thread From: Dave Jiang @ 2024-07-24 18:55 UTC (permalink / raw) To: linux-cxl Cc: alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, Jonathan.Cameron, dave With the CXL mailbox context split out, cxl_internal_send_cmd() can take 'struct cxl_mailbox' as an input parameter rather than 'struct memdev_dev_state'. Change input parameter for cxl_internal_send_cmd() and fixup all impacted call sites. Signed-off-by: Dave Jiang <dave.jiang@intel.com> --- drivers/cxl/core/mbox.c | 38 ++++++++++++++++++++------------------ drivers/cxl/core/memdev.c | 23 +++++++++++++---------- drivers/cxl/cxlmem.h | 2 +- drivers/cxl/pci.c | 6 ++++-- drivers/cxl/pmem.c | 6 ++++-- drivers/cxl/security.c | 23 ++++++++++++----------- 6 files changed, 54 insertions(+), 44 deletions(-) diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c index 9501d2576ccd..b50f6db3b244 100644 --- a/drivers/cxl/core/mbox.c +++ b/drivers/cxl/core/mbox.c @@ -225,7 +225,7 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) /** * cxl_internal_send_cmd() - Kernel internal interface to send a mailbox command - * @mds: The driver data for the operation + * @cxl_mbox: CXL mailbox context * @mbox_cmd: initialized command to execute * * Context: Any context. @@ -241,10 +241,9 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) * error. 
While this distinction can be useful for commands from userspace, the * kernel will only be able to use results when both are successful. */ -int cxl_internal_send_cmd(struct cxl_memdev_state *mds, +int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *mbox_cmd) { - struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; size_t out_size, min_out; int rc; @@ -707,7 +706,7 @@ static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid, .payload_out = out, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); /* * The output payload length that indicates the number @@ -796,7 +795,7 @@ static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state * /* At least the record number field must be valid */ .min_out = 2, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) { kvfree(ret); return ERR_PTR(rc); @@ -988,7 +987,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, if (i == max_handles) { payload->nr_recs = i; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) goto free_pl; i = 0; @@ -999,7 +998,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, if (i) { payload->nr_recs = i; mbox_cmd.size_in = struct_size(payload, handles, i); - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) goto free_pl; } @@ -1036,7 +1035,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, .min_out = struct_size(payload, records, 0), }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) { dev_err_ratelimited(dev, "Event log '%d': Failed to query event records : %d", @@ -1107,6 +1106,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL); */ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) { + struct cxl_mailbox 
*cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_get_partition_info pi; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -1116,7 +1116,7 @@ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) .size_out = sizeof(pi), .payload_out = &pi, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) return rc; @@ -1143,6 +1143,7 @@ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) */ int cxl_dev_state_identify(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; /* See CXL 2.0 Table 175 Identify Memory Device Output Payload */ struct cxl_mbox_identify id; struct cxl_mbox_cmd mbox_cmd; @@ -1157,7 +1158,7 @@ int cxl_dev_state_identify(struct cxl_memdev_state *mds) .size_out = sizeof(id), .payload_out = &id, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return rc; @@ -1185,6 +1186,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_dev_state_identify, CXL); static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; int rc; u32 sec_out = 0; struct cxl_get_security_output { @@ -1196,14 +1198,13 @@ static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) .size_out = sizeof(out), }; struct cxl_mbox_cmd mbox_cmd = { .opcode = cmd }; - struct cxl_dev_state *cxlds = &mds->cxlds; if (cmd != CXL_MBOX_OP_SANITIZE && cmd != CXL_MBOX_OP_SECURE_ERASE) return -EINVAL; - rc = cxl_internal_send_cmd(mds, &sec_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &sec_cmd); if (rc < 0) { - dev_err(cxlds->dev, "Failed to get security state : %d", rc); + dev_err(cxl_mbox->host, "Failed to get security state : %d", rc); return rc; } @@ -1220,9 +1221,9 @@ static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) sec_out & CXL_PMEM_SEC_STATE_LOCKED) return -EINVAL; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 
0) { - dev_err(cxlds->dev, "Failed to sanitize device : %d", rc); + dev_err(cxl_mbox->host, "Failed to sanitize device : %d", rc); return rc; } @@ -1337,6 +1338,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_create_range_info, CXL); int cxl_set_timestamp(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_cmd mbox_cmd; struct cxl_mbox_set_timestamp_in pi; int rc; @@ -1348,7 +1350,7 @@ int cxl_set_timestamp(struct cxl_memdev_state *mds) .payload_in = &pi, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); /* * Command is optional. Devices may have another way of providing * a timestamp, or may return all 0s in timestamp fields. @@ -1365,7 +1367,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, struct cxl_region *cxlr) { struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); - struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; struct cxl_mbox_poison_out *po; struct cxl_mbox_poison_in pi; int nr_records = 0; @@ -1392,7 +1394,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, .min_out = struct_size(po, record, 0), }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) break; diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c index 7c99f89740a9..df9c9a2335e6 100644 --- a/drivers/cxl/core/memdev.c +++ b/drivers/cxl/core/memdev.c @@ -281,7 +281,7 @@ static int cxl_validate_poison_dpa(struct cxl_memdev *cxlmd, u64 dpa) int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa) { - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; struct cxl_mbox_inject_poison inject; struct cxl_poison_record record; struct cxl_mbox_cmd mbox_cmd; @@ -311,13 +311,13 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa) .size_in = sizeof(inject), 
.payload_in = &inject, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) goto out; cxlr = cxl_dpa_to_region(cxlmd, dpa); if (cxlr) - dev_warn_once(mds->cxlds.dev, + dev_warn_once(cxl_mbox->host, "poison inject dpa:%#llx region: %s\n", dpa, dev_name(&cxlr->dev)); @@ -336,7 +336,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_inject_poison, CXL); int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa) { - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; struct cxl_mbox_clear_poison clear; struct cxl_poison_record record; struct cxl_mbox_cmd mbox_cmd; @@ -375,13 +375,13 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa) .payload_in = &clear, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc) goto out; cxlr = cxl_dpa_to_region(cxlmd, dpa); if (cxlr) - dev_warn_once(mds->cxlds.dev, + dev_warn_once(cxl_mbox->host, "poison clear dpa:%#llx region: %s\n", dpa, dev_name(&cxlr->dev)); @@ -718,6 +718,7 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file) */ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_get_fw_info info; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -728,7 +729,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) .payload_out = &info, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return rc; @@ -752,6 +753,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) */ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_activate_fw activate; struct cxl_mbox_cmd mbox_cmd; @@ -768,7 +770,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) activate.action = CXL_FW_ACTIVATE_OFFLINE; 
activate.slot = slot; - return cxl_internal_send_cmd(mds, &mbox_cmd); + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); } /** @@ -783,6 +785,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) */ static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_transfer_fw *transfer; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -802,7 +805,7 @@ static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds) transfer->action = CXL_FW_TRANSFER_ACTION_ABORT; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); kfree(transfer); return rc; } @@ -933,7 +936,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, .poll_count = 30, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) { rc = FW_UPLOAD_ERR_RW_ERROR; goto out_free; diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h index 17e83a2cf1be..645a2f878120 100644 --- a/drivers/cxl/cxlmem.h +++ b/drivers/cxl/cxlmem.h @@ -812,7 +812,7 @@ enum { CXL_PMEM_SEC_PASS_USER, }; -int cxl_internal_send_cmd(struct cxl_memdev_state *mds, +int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd); int cxl_dev_state_identify(struct cxl_memdev_state *mds); int cxl_await_media_ready(struct cxl_dev_state *cxlds); diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c index bd8ee14a7926..511a54335784 100644 --- a/drivers/cxl/pci.c +++ b/drivers/cxl/pci.c @@ -661,6 +661,7 @@ static int cxl_event_req_irq(struct cxl_dev_state *cxlds, u8 setting) static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, struct cxl_event_interrupt_policy *policy) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_cmd mbox_cmd = { .opcode = CXL_MBOX_OP_GET_EVT_INT_POLICY, .payload_out = policy, @@ -668,7 +669,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, }; int 
rc; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) dev_err(mds->cxlds.dev, "Failed to get event interrupt policy : %d", rc); @@ -679,6 +680,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, static int cxl_event_config_msgnums(struct cxl_memdev_state *mds, struct cxl_event_interrupt_policy *policy) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -695,7 +697,7 @@ static int cxl_event_config_msgnums(struct cxl_memdev_state *mds, .size_in = sizeof(*policy), }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) { dev_err(mds->cxlds.dev, "Failed to set event interrupt policy : %d", rc); diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c index 12e68b194820..946cc7af423b 100644 --- a/drivers/cxl/pmem.c +++ b/drivers/cxl/pmem.c @@ -123,6 +123,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds, struct nd_cmd_get_config_data_hdr *cmd, unsigned int buf_len) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_get_lsa get_lsa; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -144,7 +145,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds, .payload_out = cmd->out_buf, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); cmd->status = 0; return rc; @@ -154,6 +155,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds, struct nd_cmd_set_config_hdr *cmd, unsigned int buf_len) { + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; struct cxl_mbox_set_lsa *set_lsa; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -180,7 +182,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds, .size_in = struct_size(set_lsa, data, cmd->in_length), }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); /* * Set "firmware" status (4-packed bytes at 
the end of the input diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c index 21856a3f408e..38e6e5d97097 100644 --- a/drivers/cxl/security.c +++ b/drivers/cxl/security.c @@ -14,6 +14,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); unsigned long security_flags = 0; struct cxl_get_security_output { @@ -29,7 +30,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm, .payload_out = &out, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return 0; @@ -70,7 +71,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; struct cxl_mbox_cmd mbox_cmd; struct cxl_set_pass set_pass; @@ -87,7 +88,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm, .payload_in = &set_pass, }; - return cxl_internal_send_cmd(mds, &mbox_cmd); + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); } static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, @@ -96,7 +97,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; struct cxl_disable_pass dis_pass; struct cxl_mbox_cmd mbox_cmd; @@ -112,7 +113,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, .payload_in = &dis_pass, }; - return cxl_internal_send_cmd(mds, &mbox_cmd); + return 
cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); } static int cxl_pmem_security_disable(struct nvdimm *nvdimm, @@ -131,12 +132,12 @@ static int cxl_pmem_security_freeze(struct nvdimm *nvdimm) { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; struct cxl_mbox_cmd mbox_cmd = { .opcode = CXL_MBOX_OP_FREEZE_SECURITY, }; - return cxl_internal_send_cmd(mds, &mbox_cmd); + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); } static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, @@ -144,7 +145,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; u8 pass[NVDIMM_PASSPHRASE_LEN]; struct cxl_mbox_cmd mbox_cmd; int rc; @@ -156,7 +157,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, .payload_in = pass, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return rc; @@ -169,7 +170,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm, { struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; struct cxl_mbox_cmd mbox_cmd; struct cxl_pass_erase erase; int rc; @@ -185,7 +186,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm, .payload_in = &erase, }; - rc = cxl_internal_send_cmd(mds, &mbox_cmd); + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); if (rc < 0) return rc; -- 2.45.2 ^ permalink raw reply related [flat|nested] 13+ messages in thread
* Re: [PATCH 2/2] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input 2024-07-24 18:55 ` [PATCH 2/2] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input Dave Jiang @ 2024-07-24 22:08 ` fan 2024-08-15 17:00 ` Jonathan Cameron 1 sibling, 0 replies; 13+ messages in thread From: fan @ 2024-07-24 22:08 UTC (permalink / raw) To: Dave Jiang Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, Jonathan.Cameron, dave On Wed, Jul 24, 2024 at 11:55:17AM -0700, Dave Jiang wrote: > With the CXL mailbox context split out, cxl_internal_send_cmd() can take > 'struct cxl_mailbox' as an input parameter rather than > 'struct memdev_dev_state'. Change input parameter for > cxl_internal_send_cmd() and fixup all impacted call sites. > > Signed-off-by: Dave Jiang <dave.jiang@intel.com> Reviewed-by: Fan Ni <fan.ni@samsung.com> > --- > drivers/cxl/core/mbox.c | 38 ++++++++++++++++++++------------------ > drivers/cxl/core/memdev.c | 23 +++++++++++++---------- > drivers/cxl/cxlmem.h | 2 +- > drivers/cxl/pci.c | 6 ++++-- > drivers/cxl/pmem.c | 6 ++++-- > drivers/cxl/security.c | 23 ++++++++++++----------- > 6 files changed, 54 insertions(+), 44 deletions(-) > > diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c > index 9501d2576ccd..b50f6db3b244 100644 > --- a/drivers/cxl/core/mbox.c > +++ b/drivers/cxl/core/mbox.c > @@ -225,7 +225,7 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) > > /** > * cxl_internal_send_cmd() - Kernel internal interface to send a mailbox command > - * @mds: The driver data for the operation > + * @cxl_mbox: CXL mailbox context > * @mbox_cmd: initialized command to execute > * > * Context: Any context. > @@ -241,10 +241,9 @@ static const char *cxl_mem_opcode_to_name(u16 opcode) > * error. While this distinction can be useful for commands from userspace, the > * kernel will only be able to use results when both are successful. 
> */ > -int cxl_internal_send_cmd(struct cxl_memdev_state *mds, > +int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox, > struct cxl_mbox_cmd *mbox_cmd) > { > - struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > size_t out_size, min_out; > int rc; > > @@ -707,7 +706,7 @@ static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid, > .payload_out = out, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > > /* > * The output payload length that indicates the number > @@ -796,7 +795,7 @@ static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state * > /* At least the record number field must be valid */ > .min_out = 2, > }; > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) { > kvfree(ret); > return ERR_PTR(rc); > @@ -988,7 +987,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, > > if (i == max_handles) { > payload->nr_recs = i; > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc) > goto free_pl; > i = 0; > @@ -999,7 +998,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds, > if (i) { > payload->nr_recs = i; > mbox_cmd.size_in = struct_size(payload, handles, i); > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc) > goto free_pl; > } > @@ -1036,7 +1035,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds, > .min_out = struct_size(payload, records, 0), > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc) { > dev_err_ratelimited(dev, > "Event log '%d': Failed to query event records : %d", > @@ -1107,6 +1106,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_get_event_records, CXL); > */ > static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) > { > + struct cxl_mailbox *cxl_mbox = 
mds->cxlds.cxl_mbox; > struct cxl_mbox_get_partition_info pi; > struct cxl_mbox_cmd mbox_cmd; > int rc; > @@ -1116,7 +1116,7 @@ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) > .size_out = sizeof(pi), > .payload_out = &pi, > }; > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc) > return rc; > > @@ -1143,6 +1143,7 @@ static int cxl_mem_get_partition_info(struct cxl_memdev_state *mds) > */ > int cxl_dev_state_identify(struct cxl_memdev_state *mds) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > /* See CXL 2.0 Table 175 Identify Memory Device Output Payload */ > struct cxl_mbox_identify id; > struct cxl_mbox_cmd mbox_cmd; > @@ -1157,7 +1158,7 @@ int cxl_dev_state_identify(struct cxl_memdev_state *mds) > .size_out = sizeof(id), > .payload_out = &id, > }; > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) > return rc; > > @@ -1185,6 +1186,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_dev_state_identify, CXL); > > static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > int rc; > u32 sec_out = 0; > struct cxl_get_security_output { > @@ -1196,14 +1198,13 @@ static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) > .size_out = sizeof(out), > }; > struct cxl_mbox_cmd mbox_cmd = { .opcode = cmd }; > - struct cxl_dev_state *cxlds = &mds->cxlds; > > if (cmd != CXL_MBOX_OP_SANITIZE && cmd != CXL_MBOX_OP_SECURE_ERASE) > return -EINVAL; > > - rc = cxl_internal_send_cmd(mds, &sec_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &sec_cmd); > if (rc < 0) { > - dev_err(cxlds->dev, "Failed to get security state : %d", rc); > + dev_err(cxl_mbox->host, "Failed to get security state : %d", rc); > return rc; > } > > @@ -1220,9 +1221,9 @@ static int __cxl_mem_sanitize(struct cxl_memdev_state *mds, u16 cmd) > sec_out & CXL_PMEM_SEC_STATE_LOCKED) > return -EINVAL; > > 
- rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) { > - dev_err(cxlds->dev, "Failed to sanitize device : %d", rc); > + dev_err(cxl_mbox->host, "Failed to sanitize device : %d", rc); > return rc; > } > > @@ -1337,6 +1338,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_mem_create_range_info, CXL); > > int cxl_set_timestamp(struct cxl_memdev_state *mds) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_cmd mbox_cmd; > struct cxl_mbox_set_timestamp_in pi; > int rc; > @@ -1348,7 +1350,7 @@ int cxl_set_timestamp(struct cxl_memdev_state *mds) > .payload_in = &pi, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > /* > * Command is optional. Devices may have another way of providing > * a timestamp, or may return all 0s in timestamp fields. > @@ -1365,7 +1367,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, > struct cxl_region *cxlr) > { > struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > - struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; > struct cxl_mbox_poison_out *po; > struct cxl_mbox_poison_in pi; > int nr_records = 0; > @@ -1392,7 +1394,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len, > .min_out = struct_size(po, record, 0), > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc) > break; > > diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c > index 7c99f89740a9..df9c9a2335e6 100644 > --- a/drivers/cxl/core/memdev.c > +++ b/drivers/cxl/core/memdev.c > @@ -281,7 +281,7 @@ static int cxl_validate_poison_dpa(struct cxl_memdev *cxlmd, u64 dpa) > > int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa) > { > - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; 
> struct cxl_mbox_inject_poison inject; > struct cxl_poison_record record; > struct cxl_mbox_cmd mbox_cmd; > @@ -311,13 +311,13 @@ int cxl_inject_poison(struct cxl_memdev *cxlmd, u64 dpa) > .size_in = sizeof(inject), > .payload_in = &inject, > }; > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc) > goto out; > > cxlr = cxl_dpa_to_region(cxlmd, dpa); > if (cxlr) > - dev_warn_once(mds->cxlds.dev, > + dev_warn_once(cxl_mbox->host, > "poison inject dpa:%#llx region: %s\n", dpa, > dev_name(&cxlr->dev)); > > @@ -336,7 +336,7 @@ EXPORT_SYMBOL_NS_GPL(cxl_inject_poison, CXL); > > int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa) > { > - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; > struct cxl_mbox_clear_poison clear; > struct cxl_poison_record record; > struct cxl_mbox_cmd mbox_cmd; > @@ -375,13 +375,13 @@ int cxl_clear_poison(struct cxl_memdev *cxlmd, u64 dpa) > .payload_in = &clear, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc) > goto out; > > cxlr = cxl_dpa_to_region(cxlmd, dpa); > if (cxlr) > - dev_warn_once(mds->cxlds.dev, > + dev_warn_once(cxl_mbox->host, > "poison clear dpa:%#llx region: %s\n", dpa, > dev_name(&cxlr->dev)); > > @@ -718,6 +718,7 @@ static int cxl_memdev_release_file(struct inode *inode, struct file *file) > */ > static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_get_fw_info info; > struct cxl_mbox_cmd mbox_cmd; > int rc; > @@ -728,7 +729,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) > .payload_out = &info, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) > return rc; > > @@ -752,6 +753,7 @@ static int cxl_mem_get_fw_info(struct cxl_memdev_state *mds) > */ > 
static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_activate_fw activate; > struct cxl_mbox_cmd mbox_cmd; > > @@ -768,7 +770,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) > activate.action = CXL_FW_ACTIVATE_OFFLINE; > activate.slot = slot; > > - return cxl_internal_send_cmd(mds, &mbox_cmd); > + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > } > > /** > @@ -783,6 +785,7 @@ static int cxl_mem_activate_fw(struct cxl_memdev_state *mds, int slot) > */ > static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_transfer_fw *transfer; > struct cxl_mbox_cmd mbox_cmd; > int rc; > @@ -802,7 +805,7 @@ static int cxl_mem_abort_fw_xfer(struct cxl_memdev_state *mds) > > transfer->action = CXL_FW_TRANSFER_ACTION_ABORT; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > kfree(transfer); > return rc; > } > @@ -933,7 +936,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data, > .poll_count = 30, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) { > rc = FW_UPLOAD_ERR_RW_ERROR; > goto out_free; > diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h > index 17e83a2cf1be..645a2f878120 100644 > --- a/drivers/cxl/cxlmem.h > +++ b/drivers/cxl/cxlmem.h > @@ -812,7 +812,7 @@ enum { > CXL_PMEM_SEC_PASS_USER, > }; > > -int cxl_internal_send_cmd(struct cxl_memdev_state *mds, > +int cxl_internal_send_cmd(struct cxl_mailbox *cxl_mbox, > struct cxl_mbox_cmd *cmd); > int cxl_dev_state_identify(struct cxl_memdev_state *mds); > int cxl_await_media_ready(struct cxl_dev_state *cxlds); > diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c > index bd8ee14a7926..511a54335784 100644 > --- a/drivers/cxl/pci.c > +++ b/drivers/cxl/pci.c > @@ 
-661,6 +661,7 @@ static int cxl_event_req_irq(struct cxl_dev_state *cxlds, u8 setting) > static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, > struct cxl_event_interrupt_policy *policy) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_cmd mbox_cmd = { > .opcode = CXL_MBOX_OP_GET_EVT_INT_POLICY, > .payload_out = policy, > @@ -668,7 +669,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, > }; > int rc; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) > dev_err(mds->cxlds.dev, > "Failed to get event interrupt policy : %d", rc); > @@ -679,6 +680,7 @@ static int cxl_event_get_int_policy(struct cxl_memdev_state *mds, > static int cxl_event_config_msgnums(struct cxl_memdev_state *mds, > struct cxl_event_interrupt_policy *policy) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_cmd mbox_cmd; > int rc; > > @@ -695,7 +697,7 @@ static int cxl_event_config_msgnums(struct cxl_memdev_state *mds, > .size_in = sizeof(*policy), > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) { > dev_err(mds->cxlds.dev, "Failed to set event interrupt policy : %d", > rc); > diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c > index 12e68b194820..946cc7af423b 100644 > --- a/drivers/cxl/pmem.c > +++ b/drivers/cxl/pmem.c > @@ -123,6 +123,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds, > struct nd_cmd_get_config_data_hdr *cmd, > unsigned int buf_len) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_get_lsa get_lsa; > struct cxl_mbox_cmd mbox_cmd; > int rc; > @@ -144,7 +145,7 @@ static int cxl_pmem_get_config_data(struct cxl_memdev_state *mds, > .payload_out = cmd->out_buf, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > cmd->status = 0; > > return rc; > @@ 
-154,6 +155,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds, > struct nd_cmd_set_config_hdr *cmd, > unsigned int buf_len) > { > + struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox; > struct cxl_mbox_set_lsa *set_lsa; > struct cxl_mbox_cmd mbox_cmd; > int rc; > @@ -180,7 +182,7 @@ static int cxl_pmem_set_config_data(struct cxl_memdev_state *mds, > .size_in = struct_size(set_lsa, data, cmd->in_length), > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > > /* > * Set "firmware" status (4-packed bytes at the end of the input > diff --git a/drivers/cxl/security.c b/drivers/cxl/security.c > index 21856a3f408e..38e6e5d97097 100644 > --- a/drivers/cxl/security.c > +++ b/drivers/cxl/security.c > @@ -14,6 +14,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm, > { > struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); > struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; > struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > unsigned long security_flags = 0; > struct cxl_get_security_output { > @@ -29,7 +30,7 @@ static unsigned long cxl_pmem_get_security_flags(struct nvdimm *nvdimm, > .payload_out = &out, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) > return 0; > > @@ -70,7 +71,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm, > { > struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); > struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; > - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; > struct cxl_mbox_cmd mbox_cmd; > struct cxl_set_pass set_pass; > > @@ -87,7 +88,7 @@ static int cxl_pmem_security_change_key(struct nvdimm *nvdimm, > .payload_in = &set_pass, > }; > > - return cxl_internal_send_cmd(mds, &mbox_cmd); > + return 
cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > } > > static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, > @@ -96,7 +97,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, > { > struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); > struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; > - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; > struct cxl_disable_pass dis_pass; > struct cxl_mbox_cmd mbox_cmd; > > @@ -112,7 +113,7 @@ static int __cxl_pmem_security_disable(struct nvdimm *nvdimm, > .payload_in = &dis_pass, > }; > > - return cxl_internal_send_cmd(mds, &mbox_cmd); > + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > } > > static int cxl_pmem_security_disable(struct nvdimm *nvdimm, > @@ -131,12 +132,12 @@ static int cxl_pmem_security_freeze(struct nvdimm *nvdimm) > { > struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); > struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; > - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; > struct cxl_mbox_cmd mbox_cmd = { > .opcode = CXL_MBOX_OP_FREEZE_SECURITY, > }; > > - return cxl_internal_send_cmd(mds, &mbox_cmd); > + return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > } > > static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, > @@ -144,7 +145,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, > { > struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); > struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; > - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; > u8 pass[NVDIMM_PASSPHRASE_LEN]; > struct cxl_mbox_cmd mbox_cmd; > int rc; > @@ -156,7 +157,7 @@ static int cxl_pmem_security_unlock(struct nvdimm *nvdimm, > .payload_in = pass, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) > 
return rc; > > @@ -169,7 +170,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm, > { > struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm); > struct cxl_memdev *cxlmd = cxl_nvd->cxlmd; > - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds); > + struct cxl_mailbox *cxl_mbox = cxlmd->cxlds->cxl_mbox; > struct cxl_mbox_cmd mbox_cmd; > struct cxl_pass_erase erase; > int rc; > @@ -185,7 +186,7 @@ static int cxl_pmem_security_passphrase_erase(struct nvdimm *nvdimm, > .payload_in = &erase, > }; > > - rc = cxl_internal_send_cmd(mds, &mbox_cmd); > + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd); > if (rc < 0) > return rc; > > -- > 2.45.2 > ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 2/2] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input 2024-07-24 18:55 ` [PATCH 2/2] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input Dave Jiang 2024-07-24 22:08 ` fan @ 2024-08-15 17:00 ` Jonathan Cameron 1 sibling, 0 replies; 13+ messages in thread From: Jonathan Cameron @ 2024-08-15 17:00 UTC (permalink / raw) To: Dave Jiang Cc: linux-cxl, alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, dave On Wed, 24 Jul 2024 11:55:17 -0700 Dave Jiang <dave.jiang@intel.com> wrote: > With the CXL mailbox context split out, cxl_internal_send_cmd() can take > 'struct cxl_mailbox' as an input parameter rather than > 'struct memdev_dev_state'. Change input parameter for > cxl_internal_send_cmd() and fixup all impacted call sites. > > Signed-off-by: Dave Jiang <dave.jiang@intel.com> LGTM Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> ^ permalink raw reply [flat|nested] 13+ messages in thread
* Re: [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state. 2024-07-24 18:55 [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state Dave Jiang 2024-07-24 18:55 ` [PATCH 1/2] cxl: Move mailbox related bits to the same context Dave Jiang 2024-07-24 18:55 ` [PATCH 2/2] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input Dave Jiang @ 2024-08-13 7:11 ` Alejandro Lucero Palau 2 siblings, 0 replies; 13+ messages in thread From: Alejandro Lucero Palau @ 2024-08-13 7:11 UTC (permalink / raw) To: Dave Jiang, linux-cxl Cc: alejandro.lucero-palau, dan.j.williams, ira.weiny, vishal.l.verma, alison.schofield, Jonathan.Cameron, dave On 7/24/24 19:55, Dave Jiang wrote: > Hi Alejandro, > Please feel free to pull in the patches in this series into your type2 series. Hi Dave, Working on v3 for the Type2 support where I count on these changes being applied. Thanks! > The patches pulls out the related mailbox bits and form a 'struct cxl_mailbox'. A pointer > is created to point to that in 'struct cxl_dev_state'. The cxl_mailbox is independently > allocated if the mailbox register is discovered. This should separate the mailbox out > to be used by CXL type3 and type2 devices. > > --- > > Dave Jiang (2): > cxl: Move mailbox related bits to the same context > cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input > > MAINTAINERS | 1 + > drivers/cxl/core/mbox.c | 139 ++++++++++++++++++++++++++++++++++++++++++++++++--------------- > drivers/cxl/core/memdev.c | 55 ++++++++++++++++--------- > drivers/cxl/cxlmem.h | 22 +++++----- > drivers/cxl/pci.c | 88 +++++++++++++++++++++++++++------------- > drivers/cxl/pmem.c | 13 ++++-- > drivers/cxl/security.c | 23 ++++++----- > include/linux/cxl/mailbox.h | 28 +++++++++++++ > tools/testing/cxl/test/mem.c | 46 ++++++++++++++++----- > 9 files changed, 301 insertions(+), 114 deletions(-) > ^ permalink raw reply [flat|nested] 13+ messages in thread
end of thread, other threads: [~2024-08-27 21:48 UTC | newest]

Thread overview: 13+ messages
2024-07-24 18:55 [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state Dave Jiang
2024-07-24 18:55 ` [PATCH 1/2] cxl: Move mailbox related bits to the same context Dave Jiang
2024-07-24 22:02   ` fan
2024-08-13  6:59     ` Alejandro Lucero Palau
2024-08-13 15:38       ` Dave Jiang
2024-08-15 16:57   ` Jonathan Cameron
2024-08-15 17:10     ` Dave Jiang
2024-08-27 15:27       ` Jonathan Cameron
2024-08-27 21:48         ` Dave Jiang
2024-07-24 18:55 ` [PATCH 2/2] cxl: Convert cxl_internal_send_cmd() to use 'struct cxl_mailbox' as input Dave Jiang
2024-07-24 22:08   ` fan
2024-08-15 17:00   ` Jonathan Cameron
2024-08-13  7:11 ` [PATCH 0/2] cxl: Pull out mailbox bits to be independent of cxl_dev_state Alejandro Lucero Palau