From mboxrd@z Thu Jan 1 00:00:00 1970
From: fan
Date: Wed, 24 Jul 2024 15:02:58 -0700
To: Dave Jiang
Cc: linux-cxl@vger.kernel.org, alejandro.lucero-palau@amd.com,
	dan.j.williams@intel.com, ira.weiny@intel.com, vishal.l.verma@intel.com,
	alison.schofield@intel.com, Jonathan.Cameron@huawei.com, dave@stgolabs.net
Subject: Re: [PATCH 1/2] cxl: Move mailbox related bits to the same context
References: <20240724185649.2574627-1-dave.jiang@intel.com>
 <20240724185649.2574627-2-dave.jiang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240724185649.2574627-2-dave.jiang@intel.com>

On Wed, Jul 24, 2024 at 11:55:16AM -0700, Dave Jiang wrote:
> Create a new 'struct cxl_mailbox' and move all mailbox related bits to
> it. This allows isolation of all CXL mailbox data in order to export
> some of the calls to external callers and avoid exporting of CXL driver
> specific bits such has device states. The allocation of
> 'struct cxl_mailbox' is also split out with cxl_mailbox_create() so the
> mailbox can be created independently.
>
> Signed-off-by: Dave Jiang
> ---
>  MAINTAINERS                  |   1 +
>  drivers/cxl/core/mbox.c      | 105 +++++++++++++++++++++++++++++------
>  drivers/cxl/core/memdev.c    |  32 ++++++++---
>  drivers/cxl/cxlmem.h         |  20 +++---
>  drivers/cxl/pci.c            |  82 ++++++++++++++++++---------
>  drivers/cxl/pmem.c           |   7 ++-
>  include/linux/cxl/mailbox.h  |  28 ++++++++++
>  tools/testing/cxl/test/mem.c |  46 +++++++++++----
>  8 files changed, 249 insertions(+), 72 deletions(-)
>  create mode 100644 include/linux/cxl/mailbox.h

Reviewed-by: Fan Ni

>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 958e935449e5..c809fa1eb452 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -5466,6 +5466,7 @@ S:	Maintained
>  F:	drivers/cxl/
>  F:	include/linux/einj-cxl.h
>  F:	include/linux/cxl-event.h
> +F:	include/linux/cxl/
>  F:	include/uapi/linux/cxl_mem.h
>  F:	tools/testing/cxl/
>
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 2626f3fff201..9501d2576ccd 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
> @@ -244,16 +244,20 @@ static const char *cxl_mem_opcode_to_name(u16 opcode)
>  int cxl_internal_send_cmd(struct cxl_memdev_state *mds,
>  			  struct cxl_mbox_cmd *mbox_cmd)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	size_t out_size, min_out;
>  	int rc;
>
> -	if (mbox_cmd->size_in > mds->payload_size ||
> -	    mbox_cmd->size_out > mds->payload_size)
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
> +	if (mbox_cmd->size_in > cxl_mbox->payload_size ||
> +	    mbox_cmd->size_out > cxl_mbox->payload_size)
>  		return -E2BIG;
>
>  	out_size = mbox_cmd->size_out;
>  	min_out = mbox_cmd->min_out;
> -	rc = mds->mbox_send(mds, mbox_cmd);
> +	rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
>  	/*
>  	 * EIO is reserved for a payload size mismatch and mbox_send()
>  	 * may not return this error.
> @@ -353,11 +357,15 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox,
>  			     struct cxl_memdev_state *mds, u16 opcode,
>  			     size_t in_size, size_t out_size, u64 in_payload)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	*mbox = (struct cxl_mbox_cmd) {
>  		.opcode = opcode,
>  		.size_in = in_size,
>  	};
>
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	if (in_size) {
>  		mbox->payload_in = vmemdup_user(u64_to_user_ptr(in_payload),
>  						in_size);
> @@ -374,7 +382,7 @@ static int cxl_mbox_cmd_ctor(struct cxl_mbox_cmd *mbox,
>
>  	/* Prepare to handle a full payload for variable sized output */
>  	if (out_size == CXL_VARIABLE_PAYLOAD)
> -		mbox->size_out = mds->payload_size;
> +		mbox->size_out = cxl_mbox->payload_size;
>  	else
>  		mbox->size_out = out_size;
>
> @@ -398,6 +406,11 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd,
>  			      const struct cxl_send_command *send_cmd,
>  			      struct cxl_memdev_state *mds)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
> +
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	if (send_cmd->raw.rsvd)
>  		return -EINVAL;
>
> @@ -406,7 +419,7 @@ static int cxl_to_mem_cmd_raw(struct cxl_mem_command *mem_cmd,
>  	 * gets passed along without further checking, so it must be
>  	 * validated here.
>  	 */
> -	if (send_cmd->out.size > mds->payload_size)
> +	if (send_cmd->out.size > cxl_mbox->payload_size)
>  		return -EINVAL;
>
>  	if (!cxl_mem_raw_command_allowed(send_cmd->raw.opcode))
> @@ -494,9 +507,13 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd,
>  				      struct cxl_memdev_state *mds,
>  				      const struct cxl_send_command *send_cmd)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct cxl_mem_command mem_cmd;
>  	int rc;
>
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	if (send_cmd->id == 0 || send_cmd->id >= CXL_MEM_COMMAND_ID_MAX)
>  		return -ENOTTY;
>
> @@ -505,7 +522,7 @@ static int cxl_validate_cmd_from_user(struct cxl_mbox_cmd *mbox_cmd,
>  	 * supports, but output can be arbitrarily large (simply write out as
>  	 * much data as the hardware provides).
>  	 */
> -	if (send_cmd->in.size > mds->payload_size)
> +	if (send_cmd->in.size > cxl_mbox->payload_size)
>  		return -EINVAL;
>
>  	/* Sanitize and construct a cxl_mem_command */
> @@ -591,9 +608,13 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds,
>  					u64 out_payload, s32 *size_out,
>  					u32 *retval)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct device *dev = mds->cxlds.dev;
>  	int rc;
>
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	dev_dbg(dev,
>  		"Submitting %s command for user\n"
>  		"\topcode: %x\n"
> @@ -601,7 +622,7 @@ static int handle_mailbox_cmd_from_user(struct cxl_memdev_state *mds,
>  		cxl_mem_opcode_to_name(mbox_cmd->opcode),
>  		mbox_cmd->opcode, mbox_cmd->size_in);
>
> -	rc = mds->mbox_send(mds, mbox_cmd);
> +	rc = cxl_mbox->mbox_send(cxl_mbox, mbox_cmd);
>  	if (rc)
>  		goto out;
>
> @@ -659,11 +680,15 @@ int cxl_send_cmd(struct cxl_memdev *cxlmd, struct cxl_send_command __user *s)
>  static int cxl_xfer_log(struct cxl_memdev_state *mds, uuid_t *uuid,
>  			u32 *size, u8 *out)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	u32 remaining = *size;
>  	u32 offset = 0;
>
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	while (remaining) {
> -		u32 xfer_size = min_t(u32, remaining, mds->payload_size);
> +		u32 xfer_size = min_t(u32, remaining, cxl_mbox->payload_size);
>  		struct cxl_mbox_cmd mbox_cmd;
>  		struct cxl_mbox_get_log log;
>  		int rc;
> @@ -752,17 +777,21 @@ static void cxl_walk_cel(struct cxl_memdev_state *mds, size_t size, u8 *cel)
>
>  static struct cxl_mbox_get_supported_logs *cxl_get_gsl(struct cxl_memdev_state *mds)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct cxl_mbox_get_supported_logs *ret;
>  	struct cxl_mbox_cmd mbox_cmd;
>  	int rc;
>
> -	ret = kvmalloc(mds->payload_size, GFP_KERNEL);
> +	if (!cxl_mbox)
> +		return ERR_PTR(-ENODEV);
> +
> +	ret = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
>  	if (!ret)
>  		return ERR_PTR(-ENOMEM);
>
>  	mbox_cmd = (struct cxl_mbox_cmd) {
>  		.opcode = CXL_MBOX_OP_GET_SUPPORTED_LOGS,
> -		.size_out = mds->payload_size,
> +		.size_out = cxl_mbox->payload_size,
>  		.payload_out = ret,
>  		/* At least the record number field must be valid */
>  		.min_out = 2,
> @@ -910,6 +939,7 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
>  				  enum cxl_event_log_type log,
>  				  struct cxl_get_event_payload *get_pl)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct cxl_mbox_clear_event_payload *payload;
>  	u16 total = le16_to_cpu(get_pl->record_count);
>  	u8 max_handles = CXL_CLEAR_EVENT_MAX_HANDLES;
> @@ -919,9 +949,12 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
>  	int rc = 0;
>  	int i;
>
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	/* Payload size may limit the max handles */
> -	if (pl_size > mds->payload_size) {
> -		max_handles = (mds->payload_size - sizeof(*payload)) /
> +	if (pl_size > cxl_mbox->payload_size) {
> +		max_handles = (cxl_mbox->payload_size - sizeof(*payload)) /
>  			      sizeof(__le16);
>  		pl_size = struct_size(payload, handles, max_handles);
>  	}
> @@ -979,12 +1012,16 @@ static int cxl_clear_event_record(struct cxl_memdev_state *mds,
>  static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
>  				    enum cxl_event_log_type type)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct cxl_memdev *cxlmd = mds->cxlds.cxlmd;
>  	struct device *dev = mds->cxlds.dev;
>  	struct cxl_get_event_payload *payload;
>  	u8 log_type = type;
>  	u16 nr_rec;
>
> +	if (!cxl_mbox)
> +		return;
> +
>  	mutex_lock(&mds->event.log_lock);
>  	payload = mds->event.buf;
>
> @@ -995,7 +1032,7 @@ static void cxl_mem_get_records_log(struct cxl_memdev_state *mds,
>  		.payload_in = &log_type,
>  		.size_in = sizeof(log_type),
>  		.payload_out = payload,
> -		.size_out = mds->payload_size,
> +		.size_out = cxl_mbox->payload_size,
>  		.min_out = struct_size(payload, records, 0),
>  	};
>
> @@ -1328,11 +1365,15 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
>  		       struct cxl_region *cxlr)
>  {
>  	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct cxl_mbox_poison_out *po;
>  	struct cxl_mbox_poison_in pi;
>  	int nr_records = 0;
>  	int rc;
>
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	rc = mutex_lock_interruptible(&mds->poison.lock);
>  	if (rc)
>  		return rc;
> @@ -1346,7 +1387,7 @@ int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
>  		.opcode = CXL_MBOX_OP_GET_POISON,
>  		.size_in = sizeof(pi),
>  		.payload_in = &pi,
> -		.size_out = mds->payload_size,
> +		.size_out = cxl_mbox->payload_size,
>  		.payload_out = po,
>  		.min_out = struct_size(po, record, 0),
>  	};
> @@ -1382,7 +1423,12 @@ static void free_poison_buf(void *buf)
>  /* Get Poison List output buffer is protected by mds->poison.lock */
>  static int cxl_poison_alloc_buf(struct cxl_memdev_state *mds)
>  {
> -	mds->poison.list_out = kvmalloc(mds->payload_size, GFP_KERNEL);
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
> +
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
> +	mds->poison.list_out = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
>  	if (!mds->poison.list_out)
>  		return -ENOMEM;
>
> @@ -1408,6 +1454,34 @@ int cxl_poison_state_init(struct cxl_memdev_state *mds)
>  }
>  EXPORT_SYMBOL_NS_GPL(cxl_poison_state_init, CXL);
>
> +static void free_cxl_mailbox(void *cxl_mbox)
> +{
> +	kfree(cxl_mbox);
> +}
> +
> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev)
> +{
> +	struct cxl_mailbox *cxl_mbox __free(kfree) =
> +		kzalloc(sizeof(*cxl_mbox), GFP_KERNEL);
> +	int rc;
> +
> +	if (!cxl_mbox)
> +		return ERR_PTR(-ENOMEM);
> +
> +	cxl_mbox->host = dev;
> +	mutex_init(&cxl_mbox->mbox_mutex);
> +	rcuwait_init(&cxl_mbox->mbox_wait);
> +
> +	rc = devm_add_action_or_reset(dev, free_cxl_mailbox, cxl_mbox);
> +	if (rc) {
> +		cxl_mbox = NULL;
> +		return ERR_PTR(rc);
> +	}
> +
> +	return no_free_ptr(cxl_mbox);
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_mailbox_create, CXL);
> +
>  struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
>  {
>  	struct cxl_memdev_state *mds;
> @@ -1418,7 +1492,6 @@ struct cxl_memdev_state *cxl_memdev_state_create(struct device *dev)
>  		return ERR_PTR(-ENOMEM);
>  	}
>
> -	mutex_init(&mds->mbox_mutex);
>  	mutex_init(&mds->event.log_lock);
>  	mds->cxlds.dev = dev;
>  	mds->cxlds.reg_map.host = dev;
> diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
> index 0277726afd04..7c99f89740a9 100644
> --- a/drivers/cxl/core/memdev.c
> +++ b/drivers/cxl/core/memdev.c
> @@ -56,9 +56,9 @@ static ssize_t payload_max_show(struct device *dev,
>  	struct cxl_dev_state *cxlds = cxlmd->cxlds;
>  	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>
> -	if (!mds)
> +	if (!mds || !cxlds->cxl_mbox)
>  		return sysfs_emit(buf, "\n");
> -	return sysfs_emit(buf, "%zu\n", mds->payload_size);
> +	return sysfs_emit(buf, "%zu\n", cxlds->cxl_mbox->payload_size);
>  }
>  static DEVICE_ATTR_RO(payload_max);
>
> @@ -124,15 +124,19 @@ static ssize_t security_state_show(struct device *dev,
>  {
>  	struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
>  	struct cxl_dev_state *cxlds = cxlmd->cxlds;
> +	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
>  	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>  	unsigned long state = mds->security.state;
>  	int rc = 0;
>
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	/* sync with latest submission state */
> -	mutex_lock(&mds->mbox_mutex);
> +	mutex_lock(&cxl_mbox->mbox_mutex);
>  	if (mds->security.sanitize_active)
>  		rc = sysfs_emit(buf, "sanitize\n");
> -	mutex_unlock(&mds->mbox_mutex);
> +	mutex_unlock(&cxl_mbox->mbox_mutex);
>  	if (rc)
>  		return rc;
>
> @@ -829,12 +833,16 @@ static enum fw_upload_err cxl_fw_prepare(struct fw_upload *fwl, const u8 *data,
>  {
>  	struct cxl_memdev_state *mds = fwl->dd_handle;
>  	struct cxl_mbox_transfer_fw *transfer;
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
> +
> +	if (!cxl_mbox)
> +		return FW_UPLOAD_ERR_NONE;
>
>  	if (!size)
>  		return FW_UPLOAD_ERR_INVALID_SIZE;
>
>  	mds->fw.oneshot = struct_size(transfer, data, size) <
> -			    mds->payload_size;
> +			    cxl_mbox->payload_size;
>
>  	if (cxl_mem_get_fw_info(mds))
>  		return FW_UPLOAD_ERR_HW_ERROR;
> @@ -854,6 +862,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
>  {
>  	struct cxl_memdev_state *mds = fwl->dd_handle;
>  	struct cxl_dev_state *cxlds = &mds->cxlds;
> +	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
>  	struct cxl_memdev *cxlmd = cxlds->cxlmd;
>  	struct cxl_mbox_transfer_fw *transfer;
>  	struct cxl_mbox_cmd mbox_cmd;
> @@ -861,6 +870,9 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
>  	size_t size_in;
>  	int rc;
>
> +	if (!cxl_mbox)
> +		return FW_UPLOAD_ERR_NONE;
> +
>  	*written = 0;
>
>  	/* Offset has to be aligned to 128B (CXL-3.0 8.2.9.3.2 Table 8-57) */
> @@ -877,7 +889,7 @@ static enum fw_upload_err cxl_fw_write(struct fw_upload *fwl, const u8 *data,
>  	 * sizeof(*transfer) is 128.  These constraints imply that @cur_size
>  	 * will always be 128b aligned.
>  	 */
> -	cur_size = min_t(size_t, size, mds->payload_size - sizeof(*transfer));
> +	cur_size = min_t(size_t, size, cxl_mbox->payload_size - sizeof(*transfer));
>
>  	remaining = size - cur_size;
>  	size_in = struct_size(transfer, data, cur_size);
> @@ -1059,16 +1071,20 @@ EXPORT_SYMBOL_NS_GPL(devm_cxl_add_memdev, CXL);
>  static void sanitize_teardown_notifier(void *data)
>  {
>  	struct cxl_memdev_state *mds = data;
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct kernfs_node *state;
>
> +	if (!cxl_mbox)
> +		return;
> +
>  	/*
>  	 * Prevent new irq triggered invocations of the workqueue and
>  	 * flush inflight invocations.
>  	 */
> -	mutex_lock(&mds->mbox_mutex);
> +	mutex_lock(&cxl_mbox->mbox_mutex);
>  	state = mds->security.sanitize_node;
>  	mds->security.sanitize_node = NULL;
> -	mutex_unlock(&mds->mbox_mutex);
> +	mutex_unlock(&cxl_mbox->mbox_mutex);
>
>  	cancel_delayed_work_sync(&mds->security.poll_dwork);
>  	sysfs_put(state);
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index af8169ccdbc0..17e83a2cf1be 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -3,11 +3,13 @@
>  #ifndef __CXL_MEM_H__
>  #define __CXL_MEM_H__
>  #include
> +#include
>  #include
>  #include
>  #include
>  #include
> +#include
>  #include "cxl.h"
>
>  /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
> @@ -424,6 +426,7 @@ struct cxl_dpa_perf {
>   * @ram_res: Active Volatile memory capacity configuration
>   * @serial: PCIe Device Serial Number
>   * @type: Generic Memory Class device or Vendor Specific Memory device
> + * @cxl_mbox: CXL mailbox context
>   */
>  struct cxl_dev_state {
>  	struct device *dev;
> @@ -438,8 +441,14 @@ struct cxl_dev_state {
>  	struct resource ram_res;
>  	u64 serial;
>  	enum cxl_devtype type;
> +	struct cxl_mailbox *cxl_mbox;
>  };
>
> +static inline struct cxl_dev_state *mbox_to_cxlds(struct cxl_mailbox *cxl_mbox)
> +{
> +	return dev_get_drvdata(cxl_mbox->host);
> +}
> +
>  /**
>   * struct cxl_memdev_state - Generic Type-3 Memory Device Class driver data
>   *
> @@ -448,11 +457,8 @@ struct cxl_dev_state {
>   *	the functionality related to that like Identify Memory Device and Get
>   *	Partition Info
>   * @cxlds: Core driver state common across Type-2 and Type-3 devices
> - * @payload_size: Size of space for payload
> - *                (CXL 2.0 8.2.8.4.3 Mailbox Capabilities Register)
>   * @lsa_size: Size of Label Storage Area
>   *                (CXL 2.0 8.2.9.5.1.1 Identify Memory Device)
> - * @mbox_mutex: Mutex to synchronize mailbox access.
>   * @firmware_version: Firmware version for the memory device.
>   * @enabled_cmds: Hardware commands found enabled in CEL.
>   * @exclusive_cmds: Commands that are kernel-internal only
> @@ -470,17 +476,13 @@ struct cxl_dev_state {
>   * @poison: poison driver state info
>   * @security: security driver state info
>   * @fw: firmware upload / activation state
> - * @mbox_wait: RCU wait for mbox send completely
> - * @mbox_send: @dev specific transport for transmitting mailbox commands
>   *
>   * See CXL 3.0 8.2.9.8.2 Capacity Configuration and Label Storage for
>   * details on capacity parameters.
>   */
>  struct cxl_memdev_state {
>  	struct cxl_dev_state cxlds;
> -	size_t payload_size;
>  	size_t lsa_size;
> -	struct mutex mbox_mutex; /* Protects device mailbox and firmware */
>  	char firmware_version[0x10];
>  	DECLARE_BITMAP(enabled_cmds, CXL_MEM_COMMAND_ID_MAX);
>  	DECLARE_BITMAP(exclusive_cmds, CXL_MEM_COMMAND_ID_MAX);
> @@ -500,10 +502,6 @@ struct cxl_memdev_state {
>  	struct cxl_poison_state poison;
>  	struct cxl_security_state security;
>  	struct cxl_fw_state fw;
> -
> -	struct rcuwait mbox_wait;
> -	int (*mbox_send)(struct cxl_memdev_state *mds,
> -			 struct cxl_mbox_cmd *cmd);
>  };
>
>  static inline struct cxl_memdev_state *
> diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
> index e53646e9f2fb..bd8ee14a7926 100644
> --- a/drivers/cxl/pci.c
> +++ b/drivers/cxl/pci.c
> @@ -11,6 +11,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include "cxlmem.h"
>  #include "cxlpci.h"
>  #include "cxl.h"
> @@ -124,6 +125,7 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
>  	u16 opcode;
>  	struct cxl_dev_id *dev_id = id;
>  	struct cxl_dev_state *cxlds = dev_id->cxlds;
> +	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
>  	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>
>  	if (!cxl_mbox_background_complete(cxlds))
> @@ -132,13 +134,13 @@ static irqreturn_t cxl_pci_mbox_irq(int irq, void *id)
>  	reg = readq(cxlds->regs.mbox + CXLDEV_MBOX_BG_CMD_STATUS_OFFSET);
>  	opcode = FIELD_GET(CXLDEV_MBOX_BG_CMD_COMMAND_OPCODE_MASK, reg);
>  	if (opcode == CXL_MBOX_OP_SANITIZE) {
> -		mutex_lock(&mds->mbox_mutex);
> +		mutex_lock(&cxl_mbox->mbox_mutex);
>  		if (mds->security.sanitize_node)
>  			mod_delayed_work(system_wq, &mds->security.poll_dwork, 0);
> -		mutex_unlock(&mds->mbox_mutex);
> +		mutex_unlock(&cxl_mbox->mbox_mutex);
>  	} else {
>  		/* short-circuit the wait in __cxl_pci_mbox_send_cmd() */
> -		rcuwait_wake_up(&mds->mbox_wait);
> +		rcuwait_wake_up(&cxl_mbox->mbox_wait);
>  	}
>
>  	return IRQ_HANDLED;
> @@ -152,8 +154,9 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
>  	struct cxl_memdev_state *mds =
>  		container_of(work, typeof(*mds), security.poll_dwork.work);
>  	struct cxl_dev_state *cxlds = &mds->cxlds;
> +	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
>
> -	mutex_lock(&mds->mbox_mutex);
> +	mutex_lock(&cxl_mbox->mbox_mutex);
>  	if (cxl_mbox_background_complete(cxlds)) {
>  		mds->security.poll_tmo_secs = 0;
>  		if (mds->security.sanitize_node)
> @@ -167,7 +170,7 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
>  		mds->security.poll_tmo_secs = min(15 * 60, timeout);
>  		schedule_delayed_work(&mds->security.poll_dwork, timeout * HZ);
>  	}
> -	mutex_unlock(&mds->mbox_mutex);
> +	mutex_unlock(&cxl_mbox->mbox_mutex);
>  }
>
>  /**
> @@ -192,17 +195,18 @@ static void cxl_mbox_sanitize_work(struct work_struct *work)
>   * not need to coordinate with each other. The driver only uses the primary
>   * mailbox.
>   */
> -static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
> +static int __cxl_pci_mbox_send_cmd(struct cxl_mailbox *cxl_mbox,
>  				   struct cxl_mbox_cmd *mbox_cmd)
>  {
> -	struct cxl_dev_state *cxlds = &mds->cxlds;
> +	struct cxl_dev_state *cxlds = mbox_to_cxlds(cxl_mbox);
> +	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
>  	void __iomem *payload = cxlds->regs.mbox + CXLDEV_MBOX_PAYLOAD_OFFSET;
>  	struct device *dev = cxlds->dev;
>  	u64 cmd_reg, status_reg;
>  	size_t out_len;
>  	int rc;
>
> -	lockdep_assert_held(&mds->mbox_mutex);
> +	lockdep_assert_held(&cxl_mbox->mbox_mutex);
>
>  	/*
>  	 * Here are the steps from 8.2.8.4 of the CXL 2.0 spec.
> @@ -315,10 +319,10 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
>
>  		timeout = mbox_cmd->poll_interval_ms;
>  		for (i = 0; i < mbox_cmd->poll_count; i++) {
> -			if (rcuwait_wait_event_timeout(&mds->mbox_wait,
> -				       cxl_mbox_background_complete(cxlds),
> -				       TASK_UNINTERRUPTIBLE,
> -				       msecs_to_jiffies(timeout)) > 0)
> +			if (rcuwait_wait_event_timeout(&cxl_mbox->mbox_wait,
> +				       cxl_mbox_background_complete(cxlds),
> +				       TASK_UNINTERRUPTIBLE,
> +				       msecs_to_jiffies(timeout)) > 0)
>  				break;
>  		}
>
> @@ -360,7 +364,7 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
>  		 */
>  		size_t n;
>
> -		n = min3(mbox_cmd->size_out, mds->payload_size, out_len);
> +		n = min3(mbox_cmd->size_out, cxl_mbox->payload_size, out_len);
>  		memcpy_fromio(mbox_cmd->payload_out, payload, n);
>  		mbox_cmd->size_out = n;
>  	} else {
> @@ -370,14 +374,14 @@ static int __cxl_pci_mbox_send_cmd(struct cxl_memdev_state *mds,
>  	return 0;
>  }
>
> -static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
> +static int cxl_pci_mbox_send(struct cxl_mailbox *cxl_mbox,
>  			     struct cxl_mbox_cmd *cmd)
>  {
>  	int rc;
>
> -	mutex_lock_io(&mds->mbox_mutex);
> -	rc = __cxl_pci_mbox_send_cmd(mds, cmd);
> -	mutex_unlock(&mds->mbox_mutex);
> +	mutex_lock_io(&cxl_mbox->mbox_mutex);
> +	rc = __cxl_pci_mbox_send_cmd(cxl_mbox, cmd);
> +	mutex_unlock(&cxl_mbox->mbox_mutex);
>
>  	return rc;
>  }
> @@ -385,6 +389,7 @@ static int cxl_pci_mbox_send(struct cxl_memdev_state *mds,
>  static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
>  {
>  	struct cxl_dev_state *cxlds = &mds->cxlds;
> +	struct cxl_mailbox *cxl_mbox = cxlds->cxl_mbox;
>  	const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET);
>  	struct device *dev = cxlds->dev;
>  	unsigned long timeout;
> @@ -417,8 +422,8 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
>  		return -ETIMEDOUT;
>  	}
>
> -	mds->mbox_send = cxl_pci_mbox_send;
> -	mds->payload_size =
> +	cxl_mbox->mbox_send = cxl_pci_mbox_send;
> +	cxl_mbox->payload_size =
>  		1 << FIELD_GET(CXLDEV_MBOX_CAP_PAYLOAD_SIZE_MASK, cap);
>
>  	/*
> @@ -428,16 +433,15 @@ static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail)
>  	 * there's no point in going forward. If the size is too large, there's
>  	 * no harm is soft limiting it.
>  	 */
> -	mds->payload_size = min_t(size_t, mds->payload_size, SZ_1M);
> -	if (mds->payload_size < 256) {
> +	cxl_mbox->payload_size = min_t(size_t, cxl_mbox->payload_size, SZ_1M);
> +	if (cxl_mbox->payload_size < 256) {
>  		dev_err(dev, "Mailbox is too small (%zub)",
> -			mds->payload_size);
> +			cxl_mbox->payload_size);
>  		return -ENXIO;
>  	}
>
> -	dev_dbg(dev, "Mailbox payload sized %zu", mds->payload_size);
> +	dev_dbg(dev, "Mailbox payload sized %zu", cxl_mbox->payload_size);
>
> -	rcuwait_init(&mds->mbox_wait);
>  	INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work);
>
>  	/* background command interrupts are optional */
> @@ -578,9 +582,13 @@ static void free_event_buf(void *buf)
>   */
>  static int cxl_mem_alloc_event_buf(struct cxl_memdev_state *mds)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct cxl_get_event_payload *buf;
>
> -	buf = kvmalloc(mds->payload_size, GFP_KERNEL);
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
> +	buf = kvmalloc(cxl_mbox->payload_size, GFP_KERNEL);
>  	if (!buf)
>  		return -ENOMEM;
>  	mds->event.buf = buf;
> @@ -786,6 +794,26 @@ static int cxl_event_config(struct pci_host_bridge *host_bridge,
>  	return 0;
>  }
>
> +static int cxl_pci_create_mailbox(struct cxl_dev_state *cxlds)
> +{
> +	struct cxl_mailbox *cxl_mbox;
> +
> +	/*
> +	 * Don't bother to allocate the mailbox if the mailbox register isn't
> +	 * there.
> +	 */
> +	if (!cxlds->reg_map.device_map.mbox.valid)
> +		return -ENODEV;
> +
> +	cxl_mbox = cxl_mailbox_create(cxlds->dev);
> +	if (IS_ERR(cxl_mbox))
> +		return PTR_ERR(cxl_mbox);
> +
> +	cxlds->cxl_mbox = cxl_mbox;
> +
> +	return 0;
> +}
> +
>  static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  {
>  	struct pci_host_bridge *host_bridge = pci_find_host_bridge(pdev->bus);
> @@ -846,6 +874,10 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
>  	if (rc)
>  		dev_dbg(&pdev->dev, "Failed to map RAS capability.\n");
>
> +	rc = cxl_pci_create_mailbox(cxlds);
> +	if (rc)
> +		return rc;
> +
>  	rc = cxl_await_media_ready(cxlds);
>  	if (rc == 0)
>  		cxlds->media_ready = true;
> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> index 2ecdaee63021..12e68b194820 100644
> --- a/drivers/cxl/pmem.c
> +++ b/drivers/cxl/pmem.c
> @@ -102,13 +102,18 @@ static int cxl_pmem_get_config_size(struct cxl_memdev_state *mds,
>  					struct nd_cmd_get_config_size *cmd,
>  					unsigned int buf_len)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
> +
> +	if (!cxl_mbox)
> +		return -ENODEV;
> +
>  	if (sizeof(*cmd) > buf_len)
>  		return -EINVAL;
>
>  	*cmd = (struct nd_cmd_get_config_size){
>  		.config_size = mds->lsa_size,
>  		.max_xfer =
> -			mds->payload_size - sizeof(struct cxl_mbox_set_lsa),
> +			cxl_mbox->payload_size - sizeof(struct cxl_mbox_set_lsa),
>  	};
>
>  	return 0;
> diff --git a/include/linux/cxl/mailbox.h b/include/linux/cxl/mailbox.h
> new file mode 100644
> index 000000000000..2b9e4bc1690c
> --- /dev/null
> +++ b/include/linux/cxl/mailbox.h
> @@ -0,0 +1,28 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +/* Copyright(c) 2024 Intel Corporation. */
> +#ifndef __CXL_MBOX_H__
> +#define __CXL_MBOX_H__
> +
> +struct cxl_mbox_cmd;
> +
> +/**
> + * struct cxl_mailbox - context for CXL mailbox operations
> + * @host: device that hosts the mailbox
> + * @adev: auxiliary device for fw-ctl
> + * @payload_size: Size of space for payload
> + *                (CXL 3.1 8.2.8.4.3 Mailbox Capabilities Register)
> + * @mbox_mutex: mutex protects device mailbox and firmware
> + * @mbox_wait: rcuwait for mailbox
> + * @mbox_send: @dev specific transport for transmitting mailbox commands
> + */
> +struct cxl_mailbox {
> +	struct device *host;
> +	size_t payload_size;
> +	struct mutex mbox_mutex; /* lock to protect mailbox context */
> +	struct rcuwait mbox_wait;
> +	int (*mbox_send)(struct cxl_mailbox *cxl_mbox, struct cxl_mbox_cmd *cmd);
> +};
> +
> +struct cxl_mailbox *cxl_mailbox_create(struct device *dev);
> +
> +#endif
> diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
> index eaf091a3d331..29f1a2df9122 100644
> --- a/tools/testing/cxl/test/mem.c
> +++ b/tools/testing/cxl/test/mem.c
> @@ -8,6 +8,7 @@
>  #include
>  #include
>  #include
> +#include
>  #include
>  #include
>  #include
> @@ -530,6 +531,7 @@ static int mock_gsl(struct cxl_mbox_cmd *cmd)
>
>  static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
>  {
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	struct cxl_mbox_get_log *gl = cmd->payload_in;
>  	u32 offset = le32_to_cpu(gl->offset);
>  	u32 length = le32_to_cpu(gl->length);
> @@ -538,7 +540,7 @@ static int mock_get_log(struct cxl_memdev_state *mds, struct cxl_mbox_cmd *cmd)
>
>  	if (cmd->size_in < sizeof(*gl))
>  		return -EINVAL;
> -	if (length > mds->payload_size)
> +	if (length > cxl_mbox->payload_size)
>  		return -EINVAL;
>  	if (offset + length > sizeof(mock_cel))
>  		return -EINVAL;
> @@ -613,12 +615,13 @@ void cxl_mockmem_sanitize_work(struct work_struct *work)
>  {
>  	struct cxl_memdev_state *mds =
>  		container_of(work, typeof(*mds), security.poll_dwork.work);
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>
> -	mutex_lock(&mds->mbox_mutex);
> +	mutex_lock(&cxl_mbox->mbox_mutex);
>  	if (mds->security.sanitize_node)
>  		sysfs_notify_dirent(mds->security.sanitize_node);
>  	mds->security.sanitize_active = false;
> -	mutex_unlock(&mds->mbox_mutex);
> +	mutex_unlock(&cxl_mbox->mbox_mutex);
>
>  	dev_dbg(mds->cxlds.dev, "sanitize complete\n");
>  }
> @@ -627,6 +630,7 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
>  			 struct cxl_mbox_cmd *cmd)
>  {
>  	struct cxl_memdev_state *mds = mdata->mds;
> +	struct cxl_mailbox *cxl_mbox = mds->cxlds.cxl_mbox;
>  	int rc = 0;
>
>  	if (cmd->size_in != 0)
> @@ -644,14 +648,14 @@ static int mock_sanitize(struct cxl_mockmem_data *mdata,
>  		return -ENXIO;
>  	}
>
> -	mutex_lock(&mds->mbox_mutex);
> +	mutex_lock(&cxl_mbox->mbox_mutex);
>  	if (schedule_delayed_work(&mds->security.poll_dwork,
>  				  msecs_to_jiffies(mdata->sanitize_timeout))) {
>  		mds->security.sanitize_active = true;
>  		dev_dbg(mds->cxlds.dev, "sanitize issued\n");
>  	} else
>  		rc = -EBUSY;
> -	mutex_unlock(&mds->mbox_mutex);
> +	mutex_unlock(&cxl_mbox->mbox_mutex);
>
>  	return rc;
>  }
> @@ -1330,12 +1334,13 @@ static int mock_activate_fw(struct cxl_mockmem_data *mdata,
>  	return -EINVAL;
>  }
>
> -static int cxl_mock_mbox_send(struct cxl_memdev_state *mds,
> +static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
>  			      struct cxl_mbox_cmd *cmd)
>  {
> +	struct device *dev = cxl_mbox->host;
> +	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
> +	struct cxl_memdev_state *mds = mdata->mds;
>  	struct cxl_dev_state *cxlds = &mds->cxlds;
> -	struct device *dev = cxlds->dev;
> -	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
>  	int rc = -EIO;
>
>  	switch (cmd->opcode) {
> @@ -1450,6 +1455,19 @@ static ssize_t event_trigger_store(struct device *dev,
>  }
>  static DEVICE_ATTR_WO(event_trigger);
>
> +static int cxl_mock_mailbox_create(struct cxl_dev_state *cxlds)
> +{
> +	struct cxl_mailbox *cxl_mbox;
> +
> +	cxl_mbox = cxl_mailbox_create(cxlds->dev);
> +	if (IS_ERR(cxl_mbox))
> +		return PTR_ERR(cxl_mbox);
> +
> +	cxlds->cxl_mbox = cxl_mbox;
> +
> +	return 0;
> +}
> +
>  static int cxl_mock_mem_probe(struct platform_device *pdev)
>  {
>  	struct device *dev = &pdev->dev;
> @@ -1457,6 +1475,7 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
>  	struct cxl_memdev_state *mds;
>  	struct cxl_dev_state *cxlds;
>  	struct cxl_mockmem_data *mdata;
> +	struct cxl_mailbox *cxl_mbox;
>  	int rc;
>
>  	mdata = devm_kzalloc(dev, sizeof(*mdata), GFP_KERNEL);
> @@ -1484,13 +1503,18 @@ static int cxl_mock_mem_probe(struct platform_device *pdev)
>  	if (IS_ERR(mds))
>  		return PTR_ERR(mds);
>
> +	cxlds = &mds->cxlds;
> +	rc = cxl_mock_mailbox_create(cxlds);
> +	if (rc)
> +		return rc;
> +
> +	cxl_mbox = mds->cxlds.cxl_mbox;
>  	mdata->mds = mds;
> -	mds->mbox_send = cxl_mock_mbox_send;
> -	mds->payload_size = SZ_4K;
> +	cxl_mbox->mbox_send = cxl_mock_mbox_send;
> +	cxl_mbox->payload_size = SZ_4K;
>  	mds->event.buf = (struct cxl_get_event_payload *) mdata->event_buf;
>  	INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mockmem_sanitize_work);
>
> -	cxlds = &mds->cxlds;
>  	cxlds->serial = pdev->id;
>  	if (is_rcd(pdev))
>  		cxlds->rcd = true;
> --
> 2.45.2
>