* [PATCH 1/7] cxl: Read vsec perst load image
2017-02-01 17:30 [PATCH 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
@ 2017-02-01 17:30 ` Christophe Lombard
2017-02-02 4:15 ` Andrew Donnellan
2017-02-01 17:30 ` [PATCH 2/7] cxl: Remove unused values in bare-metal environment Christophe Lombard
` (5 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Christophe Lombard @ 2017-02-01 17:30 UTC (permalink / raw)
To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan
This bit is used to cause a flash image load for a programmable
CAIA-compliant implementation. If this bit is set to ‘0’, a power
cycle of the adapter is required to load a programmable CAIA-compliant
implementation from flash.
This field will be used by the following patches.
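As an illustration only, a minimal sketch of how a consumer might test
the new adapter->perst_loads_image flag; the helper name below is
hypothetical and not part of this series:

static void cxl_warn_perst_wont_load_image(struct cxl *adapter)
{
	/*
	 * perst_loads_image is filled from the VSEC image state register
	 * in cxl_read_vsec() below. When it is clear, asserting PERST
	 * will not reload the flash image and a power cycle is needed.
	 */
	if (!adapter->perst_loads_image)
		dev_info(&adapter->dev,
			 "PERST will not reload the CAIA image; a power cycle is required\n");
}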
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
drivers/misc/cxl/pci.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 80a87ab..853925b 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -1323,6 +1323,7 @@ static int cxl_read_vsec(struct cxl *adapter, struct pci_dev *dev)
CXL_READ_VSEC_IMAGE_STATE(dev, vsec, &image_state);
adapter->user_image_loaded = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
adapter->perst_select_user = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
+ adapter->perst_loads_image = !!(image_state & CXL_VSEC_PERST_LOADS_IMAGE);
CXL_READ_VSEC_NAFUS(dev, vsec, &adapter->slices);
CXL_READ_VSEC_AFU_DESC_OFF(dev, vsec, &afu_desc_off);
--
2.7.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 1/7] cxl: Read vsec perst load image
2017-02-01 17:30 ` [PATCH 1/7] cxl: Read vsec perst load image Christophe Lombard
@ 2017-02-02 4:15 ` Andrew Donnellan
0 siblings, 0 replies; 17+ messages in thread
From: Andrew Donnellan @ 2017-02-02 4:15 UTC (permalink / raw)
To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie
On 02/02/17 04:30, Christophe Lombard wrote:
> This bit is used to cause a flash image load for a programmable
> CAIA-compliant implementation. If this bit is set to ‘0’, a power
> cycle of the adapter is required to load a programmable CAIA-compliant
> implementation from flash.
> This field will be used by the following patches.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
> ---
> drivers/misc/cxl/pci.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index 80a87ab..853925b 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -1323,6 +1323,7 @@ static int cxl_read_vsec(struct cxl *adapter, struct pci_dev *dev)
> CXL_READ_VSEC_IMAGE_STATE(dev, vsec, &image_state);
> adapter->user_image_loaded = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
> adapter->perst_select_user = !!(image_state & CXL_VSEC_USER_IMAGE_LOADED);
> + adapter->perst_loads_image = !!(image_state & CXL_VSEC_PERST_LOADS_IMAGE);
>
> CXL_READ_VSEC_NAFUS(dev, vsec, &adapter->slices);
> CXL_READ_VSEC_AFU_DESC_OFF(dev, vsec, &afu_desc_off);
>
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com IBM Australia Limited
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 2/7] cxl: Remove unused values in bare-metal environment.
2017-02-01 17:30 [PATCH 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
2017-02-01 17:30 ` [PATCH 1/7] cxl: Read vsec perst load image Christophe Lombard
@ 2017-02-01 17:30 ` Christophe Lombard
2017-02-02 4:16 ` Andrew Donnellan
2017-02-01 17:30 ` [PATCH 3/7] cxl: Keep track of mm struct associated with a context Christophe Lombard
` (4 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Christophe Lombard @ 2017-02-01 17:30 UTC (permalink / raw)
To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan
The pid and tid fields of struct cxl_irq_info are only used in the
guest environment, so there is no need to fill them in the bare-metal
environment; leaving them unset avoids confusion.
The PSL Process and Thread Identification Register is only used when
attaching a dedicated process, and only on PSL8.
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
drivers/misc/cxl/native.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index 09505f4..8a3ce99 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -858,8 +858,6 @@ static int native_detach_process(struct cxl_context *ctx)
static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
{
- u64 pidtid;
-
/* If the adapter has gone away, we can't get any meaningful
* information.
*/
@@ -869,9 +867,6 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
- pidtid = cxl_p2n_read(afu, CXL_PSL_PID_TID_An);
- info->pid = pidtid >> 32;
- info->tid = pidtid & 0xffffffff;
info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
info->proc_handle = 0;
--
2.7.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 2/7] cxl: Remove unused values in bare-metal environment.
2017-02-01 17:30 ` [PATCH 2/7] cxl: Remove unused values in bare-metal environment Christophe Lombard
@ 2017-02-02 4:16 ` Andrew Donnellan
0 siblings, 0 replies; 17+ messages in thread
From: Andrew Donnellan @ 2017-02-02 4:16 UTC (permalink / raw)
To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie
On 02/02/17 04:30, Christophe Lombard wrote:
> The pid and tid fields of struct cxl_irq_info are only used in the
> guest environment, so there is no need to fill them in the bare-metal
> environment; leaving them unset avoids confusion.
> The PSL Process and Thread Identification Register is only used when
> attaching a dedicated process, and only on PSL8.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
> ---
> drivers/misc/cxl/native.c | 5 -----
> 1 file changed, 5 deletions(-)
>
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index 09505f4..8a3ce99 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -858,8 +858,6 @@ static int native_detach_process(struct cxl_context *ctx)
>
> static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
> {
> - u64 pidtid;
> -
> /* If the adapter has gone away, we can't get any meaningful
> * information.
> */
> @@ -869,9 +867,6 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
> info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
> info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
> info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
> - pidtid = cxl_p2n_read(afu, CXL_PSL_PID_TID_An);
> - info->pid = pidtid >> 32;
> - info->tid = pidtid & 0xffffffff;
> info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
> info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
> info->proc_handle = 0;
>
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com IBM Australia Limited
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 3/7] cxl: Keep track of mm struct associated with a context
2017-02-01 17:30 [PATCH 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
2017-02-01 17:30 ` [PATCH 1/7] cxl: Read vsec perst load image Christophe Lombard
2017-02-01 17:30 ` [PATCH 2/7] cxl: Remove unused values in bare-metal environment Christophe Lombard
@ 2017-02-01 17:30 ` Christophe Lombard
2017-02-28 7:44 ` Andrew Donnellan
2017-02-01 17:30 ` [PATCH 4/7] cxl: Update implementation service layer Christophe Lombard
` (3 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Christophe Lombard @ 2017-02-01 17:30 UTC (permalink / raw)
To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan
The mm_struct corresponding to the current task is acquired each time
an interrupt is raised. To simplify the code, we now get the mm_struct
only once, when attaching an AFU context to the process.
The mm_count reference is increased to ensure that the mm_struct cannot
be freed; it is dropped when the context is detached.
The use-count reference (mm_users) on the struct mm is not kept, to
avoid a circular dependency if the process mmaps its cxl mmio and
forgets to unmap it before exiting.
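A condensed view of the resulting reference handling, using the helpers
added by this patch (the placement here is schematic):

	/* attach (afu_ioctl_start_work / cxl_start_context) */
	ctx->mm = get_task_mm(current);		/* takes an mm_users reference */
	cxl_context_mm_count_get(ctx);		/* pins the mm_struct via mm_count */
	if (ctx->mm)
		mmput(ctx->mm);			/* drop mm_users straight away */

	/* fault handling (get_mem_context) */
	cxl_context_mm_users_get(ctx);		/* temporary mm_users reference */
	/* ... resolve the fault against ctx->mm, then mmput() ... */

	/* detach (__detach_context) */
	cxl_context_mm_count_put(ctx);		/* mmdrop */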
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
drivers/misc/cxl/api.c | 16 ++++++++--
drivers/misc/cxl/context.c | 25 +++++++++++++--
drivers/misc/cxl/cxl.h | 13 ++++++--
drivers/misc/cxl/fault.c | 76 ++--------------------------------------------
drivers/misc/cxl/file.c | 14 +++++++--
5 files changed, 61 insertions(+), 83 deletions(-)
diff --git a/drivers/misc/cxl/api.c b/drivers/misc/cxl/api.c
index 1b35e33..4b03433 100644
--- a/drivers/misc/cxl/api.c
+++ b/drivers/misc/cxl/api.c
@@ -322,19 +322,29 @@ int cxl_start_context(struct cxl_context *ctx, u64 wed,
if (task) {
ctx->pid = get_task_pid(task, PIDTYPE_PID);
- ctx->glpid = get_task_pid(task->group_leader, PIDTYPE_PID);
kernel = false;
ctx->real_mode = false;
+
+ /* acquire a reference to the task's mm */
+ ctx->mm = get_task_mm(current);
+
+ /* ensure this mm_struct can't be freed */
+ cxl_context_mm_count_get(ctx);
+
+ /* decrement the use count */
+ if (ctx->mm)
+ mmput(ctx->mm);
}
cxl_ctx_get();
if ((rc = cxl_ops->attach_process(ctx, kernel, wed, 0))) {
- put_pid(ctx->glpid);
put_pid(ctx->pid);
- ctx->glpid = ctx->pid = NULL;
+ ctx->pid = NULL;
cxl_adapter_context_put(ctx->afu->adapter);
cxl_ctx_put();
+ if (task)
+ cxl_context_mm_count_put(ctx);
goto out;
}
diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index 3907387..89242c1 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -41,7 +41,7 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
spin_lock_init(&ctx->sste_lock);
ctx->afu = afu;
ctx->master = master;
- ctx->pid = ctx->glpid = NULL; /* Set in start work ioctl */
+ ctx->pid = NULL; /* Set in start work ioctl */
mutex_init(&ctx->mapping_lock);
ctx->mapping = NULL;
@@ -241,12 +241,15 @@ int __detach_context(struct cxl_context *ctx)
/* release the reference to the group leader and mm handling pid */
put_pid(ctx->pid);
- put_pid(ctx->glpid);
cxl_ctx_put();
/* Decrease the attached context count on the adapter */
cxl_adapter_context_put(ctx->afu->adapter);
+
+ /* Decrease the mm count on the context */
+ cxl_context_mm_count_put(ctx);
+
return 0;
}
@@ -324,3 +327,21 @@ void cxl_context_free(struct cxl_context *ctx)
mutex_unlock(&ctx->afu->contexts_lock);
call_rcu(&ctx->rcu, reclaim_ctx);
}
+
+void cxl_context_mm_count_get(struct cxl_context *ctx)
+{
+ if (ctx->mm)
+ atomic_inc(&ctx->mm->mm_count);
+}
+
+void cxl_context_mm_count_put(struct cxl_context *ctx)
+{
+ if (ctx->mm)
+ mmdrop(ctx->mm);
+}
+
+void cxl_context_mm_users_get(struct cxl_context *ctx)
+{
+ if (ctx->mm)
+ atomic_inc(&ctx->mm->mm_users);
+}
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index b24d767..e53ce9d 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -479,8 +479,6 @@ struct cxl_context {
unsigned int sst_size, sst_lru;
wait_queue_head_t wq;
- /* pid of the group leader associated with the pid */
- struct pid *glpid;
/* use mm context associated with this pid for ds faults */
struct pid *pid;
spinlock_t lock; /* Protects pending_irq_mask, pending_fault and fault_addr */
@@ -548,6 +546,8 @@ struct cxl_context {
* CX4 only:
*/
struct list_head extra_irq_contexts;
+
+ struct mm_struct *mm;
};
struct cxl_service_layer_ops {
@@ -970,4 +970,13 @@ int cxl_adapter_context_lock(struct cxl *adapter);
/* Unlock the contexts-lock if taken. Warn and force unlock otherwise */
void cxl_adapter_context_unlock(struct cxl *adapter);
+/* Increases the reference count to "struct mm_struct" */
+void cxl_context_mm_count_get(struct cxl_context *ctx);
+
+/* Decrements the reference count to "struct mm_struct" */
+void cxl_context_mm_count_put(struct cxl_context *ctx);
+
+/* Increases the reference users to "struct mm_struct" */
+void cxl_context_mm_users_get(struct cxl_context *ctx);
+
#endif
diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
index 377e650..ece7ea3 100644
--- a/drivers/misc/cxl/fault.c
+++ b/drivers/misc/cxl/fault.c
@@ -169,81 +169,12 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
}
/*
- * Returns the mm_struct corresponding to the context ctx via ctx->pid
- * In case the task has exited we use the task group leader accessible
- * via ctx->glpid to find the next task in the thread group that has a
- * valid mm_struct associated with it. If a task with valid mm_struct
- * is found the ctx->pid is updated to use the task struct for subsequent
- * translations. In case no valid mm_struct is found in the task group to
- * service the fault a NULL is returned.
+ * Returns the mm_struct corresponding to the context ctx.
*/
static struct mm_struct *get_mem_context(struct cxl_context *ctx)
{
- struct task_struct *task = NULL;
- struct mm_struct *mm = NULL;
- struct pid *old_pid = ctx->pid;
-
- if (old_pid == NULL) {
- pr_warn("%s: Invalid context for pe=%d\n",
- __func__, ctx->pe);
- return NULL;
- }
-
- task = get_pid_task(old_pid, PIDTYPE_PID);
-
- /*
- * pid_alive may look racy but this saves us from costly
- * get_task_mm when the task is a zombie. In worst case
- * we may think a task is alive, which is about to die
- * but get_task_mm will return NULL.
- */
- if (task != NULL && pid_alive(task))
- mm = get_task_mm(task);
-
- /* release the task struct that was taken earlier */
- if (task)
- put_task_struct(task);
- else
- pr_devel("%s: Context owning pid=%i for pe=%i dead\n",
- __func__, pid_nr(old_pid), ctx->pe);
-
- /*
- * If we couldn't find the mm context then use the group
- * leader to iterate over the task group and find a task
- * that gives us mm_struct.
- */
- if (unlikely(mm == NULL && ctx->glpid != NULL)) {
-
- rcu_read_lock();
- task = pid_task(ctx->glpid, PIDTYPE_PID);
- if (task)
- do {
- mm = get_task_mm(task);
- if (mm) {
- ctx->pid = get_task_pid(task,
- PIDTYPE_PID);
- break;
- }
- task = next_thread(task);
- } while (task && !thread_group_leader(task));
- rcu_read_unlock();
-
- /* check if we switched pid */
- if (ctx->pid != old_pid) {
- if (mm)
- pr_devel("%s:pe=%i switch pid %i->%i\n",
- __func__, ctx->pe, pid_nr(old_pid),
- pid_nr(ctx->pid));
- else
- pr_devel("%s:Cannot find mm for pid=%i\n",
- __func__, pid_nr(old_pid));
-
- /* drop the reference to older pid */
- put_pid(old_pid);
- }
- }
-
- return mm;
+ cxl_context_mm_users_get(ctx);
+ return ctx->mm;
}
@@ -281,7 +212,6 @@ void cxl_handle_fault(struct work_struct *fault_work)
if (!ctx->kernel) {
mm = get_mem_context(ctx);
- /* indicates all the thread in task group have exited */
if (mm == NULL) {
pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
__func__, ctx->pe, pid_nr(ctx->pid));
diff --git a/drivers/misc/cxl/file.c b/drivers/misc/cxl/file.c
index 859959f..af6bd0e 100644
--- a/drivers/misc/cxl/file.c
+++ b/drivers/misc/cxl/file.c
@@ -216,8 +216,16 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
* process is still accessible.
*/
ctx->pid = get_task_pid(current, PIDTYPE_PID);
- ctx->glpid = get_task_pid(current->group_leader, PIDTYPE_PID);
+ /* acquire a reference to the task's mm */
+ ctx->mm = get_task_mm(current);
+
+ /* ensure this mm_struct can't be freed */
+ cxl_context_mm_count_get(ctx);
+
+ /* decrement the use count */
+ if (ctx->mm)
+ mmput(ctx->mm);
trace_cxl_attach(ctx, work.work_element_descriptor, work.num_interrupts, amr);
@@ -225,9 +233,9 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
amr))) {
afu_release_irqs(ctx, ctx);
cxl_adapter_context_put(ctx->afu->adapter);
- put_pid(ctx->glpid);
put_pid(ctx->pid);
- ctx->glpid = ctx->pid = NULL;
+ ctx->pid = NULL;
+ cxl_context_mm_count_put(ctx);
goto out;
}
--
2.7.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 3/7] cxl: Keep track of mm struct associated with a context
2017-02-01 17:30 ` [PATCH 3/7] cxl: Keep track of mm struct associated with a context Christophe Lombard
@ 2017-02-28 7:44 ` Andrew Donnellan
[not found] ` <66dee4f8-5897-2948-5813-c021b83379f8@linux.vnet.ibm.com>
0 siblings, 1 reply; 17+ messages in thread
From: Andrew Donnellan @ 2017-02-28 7:44 UTC (permalink / raw)
To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie
On 02/02/17 04:30, Christophe Lombard wrote:
> The mm_struct corresponding to the current task is acquired each time
> an interrupt is raised. To simplify the code, we now get the mm_struct
> only once, when attaching an AFU context to the process.
> The mm_count reference is increased to ensure that the mm_struct cannot
> be freed; it is dropped when the context is detached.
> The use-count reference (mm_users) on the struct mm is not kept, to
> avoid a circular dependency if the process mmaps its cxl mmio and
> forgets to unmap it before exiting.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
One question below, otherwise this all looks good to me.
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
> @@ -281,7 +212,6 @@ void cxl_handle_fault(struct work_struct *fault_work)
> if (!ctx->kernel) {
>
> mm = get_mem_context(ctx);
> - /* indicates all the thread in task group have exited */
> if (mm == NULL) {
> pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
> __func__, ctx->pe, pid_nr(ctx->pid));
Is there still a case where mm can equal NULL?
> diff --git a/drivers/misc/cxl/file.c b/drivers/misc/cxl/file.c
> index 859959f..af6bd0e 100644
> --- a/drivers/misc/cxl/file.c
> +++ b/drivers/misc/cxl/file.c
> @@ -216,8 +216,16 @@ static long afu_ioctl_start_work(struct cxl_context *ctx,
> * process is still accessible.
> */
> ctx->pid = get_task_pid(current, PIDTYPE_PID);
> - ctx->glpid = get_task_pid(current->group_leader, PIDTYPE_PID);
>
> + /* acquire a reference to the task's mm */
> + ctx->mm = get_task_mm(current);
> +
> + /* ensure this mm_struct can't be freed */
> + cxl_context_mm_count_get(ctx);
> +
> + /* decrement the use count */
> + if (ctx->mm)
> + mmput(ctx->mm);
It took me a while to work out the difference between mm_users and
mm_count... this looks fine.
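(In kernel terms, a schematic reminder: mm_count pins the struct
mm_struct allocation itself, while mm_users keeps the address space,
i.e. the VMAs and page tables, alive.)

	atomic_inc(&mm->mm_count);	/* pin the struct; release with mmdrop() */
	mm = get_task_mm(task);		/* take mm_users; release with mmput() */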
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com IBM Australia Limited
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 4/7] cxl: Update implementation service layer
2017-02-01 17:30 [PATCH 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
` (2 preceding siblings ...)
2017-02-01 17:30 ` [PATCH 3/7] cxl: Keep track of mm struct associated with a context Christophe Lombard
@ 2017-02-01 17:30 ` Christophe Lombard
2017-03-02 6:48 ` Andrew Donnellan
2017-02-01 17:30 ` [PATCH 5/7] cxl: Rename some psl8 specific functions Christophe Lombard
` (2 subsequent siblings)
6 siblings, 1 reply; 17+ messages in thread
From: Christophe Lombard @ 2017-02-01 17:30 UTC (permalink / raw)
To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan
The service layer API (in cxl.h) lists some low-level functions whose
implementations differ between PSL8, PSL9 and XSL. Each environment
implements its own functions, and the common code calls them through
the function pointers defined in cxl_service_layer_ops.
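The resulting pattern in the common code is a guarded indirect call,
with each environment filling in its own ops table (condensed from the
hunks below):

	/* common code: dispatch through the hook when the environment provides it */
	if (adapter->native->sl_ops->invalidate_all)
		rc = adapter->native->sl_ops->invalidate_all(adapter);

/* bare-metal PSL: hooks point at the PSL implementations (other hooks elided) */
static const struct cxl_service_layer_ops psl_ops = {
	.invalidate_all   = cxl_invalidate_all_psl,
	.handle_interrupt = cxl_irq_psl,
	.fail_irq         = cxl_fail_irq_psl,
};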
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
drivers/misc/cxl/cxl.h | 33 ++++++++++++++++++++++--------
drivers/misc/cxl/debugfs.c | 16 +++++++--------
drivers/misc/cxl/guest.c | 2 +-
drivers/misc/cxl/irq.c | 2 +-
drivers/misc/cxl/native.c | 50 +++++++++++++++++++++++++++-------------------
drivers/misc/cxl/pci.c | 47 +++++++++++++++++++++++++++++--------------
6 files changed, 97 insertions(+), 53 deletions(-)
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index e53ce9d..96d89cb 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -550,13 +550,23 @@ struct cxl_context {
struct mm_struct *mm;
};
+struct cxl_irq_info;
+
struct cxl_service_layer_ops {
int (*adapter_regs_init)(struct cxl *adapter, struct pci_dev *dev);
+ int (*invalidate_all)(struct cxl *adapter);
int (*afu_regs_init)(struct cxl_afu *afu);
+ int (*sanitise_afu_regs)(struct cxl_afu *afu);
int (*register_serr_irq)(struct cxl_afu *afu);
void (*release_serr_irq)(struct cxl_afu *afu);
- void (*debugfs_add_adapter_sl_regs)(struct cxl *adapter, struct dentry *dir);
- void (*debugfs_add_afu_sl_regs)(struct cxl_afu *afu, struct dentry *dir);
+ irqreturn_t (*handle_interrupt)(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+ irqreturn_t (*fail_irq)(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
+ int (*activate_dedicated_process)(struct cxl_afu *afu);
+ int (*attach_afu_directed)(struct cxl_context *ctx, u64 wed, u64 amr);
+ int (*attach_dedicated_process)(struct cxl_context *ctx, u64 wed, u64 amr);
+ void (*update_dedicated_ivtes)(struct cxl_context *ctx);
+ void (*debugfs_add_adapter_regs)(struct cxl *adapter, struct dentry *dir);
+ void (*debugfs_add_afu_regs)(struct cxl_afu *afu, struct dentry *dir);
void (*psl_irq_dump_registers)(struct cxl_context *ctx);
void (*err_irq_dump_registers)(struct cxl *adapter);
void (*debugfs_stop_trace)(struct cxl *adapter);
@@ -800,6 +810,11 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
void afu_release_irqs(struct cxl_context *ctx, void *cookie);
void afu_irq_name_free(struct cxl_context *ctx);
+int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr);
+int cxl_activate_dedicated_process_psl(struct cxl_afu *afu);
+int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr);
+void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx);
+
int cxl_debugfs_init(void);
void cxl_debugfs_exit(void);
int cxl_debugfs_adapter_add(struct cxl *adapter);
@@ -858,7 +873,9 @@ struct cxl_irq_info {
};
void cxl_assign_psn_space(struct cxl_context *ctx);
-irqreturn_t cxl_irq(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+int cxl_invalidate_all_psl(struct cxl *adapter);
+irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
void *cookie, irq_hw_number_t *dest_hwirq,
unsigned int *dest_virq, const char *name);
@@ -870,12 +887,12 @@ int cxl_data_cache_flush(struct cxl *adapter);
int cxl_afu_disable(struct cxl_afu *afu);
int cxl_psl_purge(struct cxl_afu *afu);
-void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir);
-void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter, struct dentry *dir);
-void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir);
-void cxl_native_psl_irq_dump_regs(struct cxl_context *ctx);
+void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir);
+void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
+void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir);
+void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx);
void cxl_native_err_irq_dump_regs(struct cxl *adapter);
-void cxl_stop_trace(struct cxl *cxl);
+void cxl_stop_trace_psl(struct cxl *cxl);
int cxl_pci_vphb_add(struct cxl_afu *afu);
void cxl_pci_vphb_remove(struct cxl_afu *afu);
void cxl_release_mapping(struct cxl_context *ctx);
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 9c06ac8..4848ebf 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -15,7 +15,7 @@
static struct dentry *cxl_debugfs;
-void cxl_stop_trace(struct cxl *adapter)
+void cxl_stop_trace_psl(struct cxl *adapter)
{
int slice;
@@ -53,7 +53,7 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
(void __force *)value, &fops_io_x64);
}
-void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir)
+void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir)
{
debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR2));
@@ -61,7 +61,7 @@ void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir)
debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_TRACE));
}
-void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter, struct dentry *dir)
+void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir)
{
debugfs_create_io_x64("fec", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_XSL_FEC));
}
@@ -82,8 +82,8 @@ int cxl_debugfs_adapter_add(struct cxl *adapter)
debugfs_create_io_x64("err_ivte", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_ErrIVTE));
- if (adapter->native->sl_ops->debugfs_add_adapter_sl_regs)
- adapter->native->sl_ops->debugfs_add_adapter_sl_regs(adapter, dir);
+ if (adapter->native->sl_ops->debugfs_add_adapter_regs)
+ adapter->native->sl_ops->debugfs_add_adapter_regs(adapter, dir);
return 0;
}
@@ -92,7 +92,7 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
debugfs_remove_recursive(adapter->debugfs);
}
-void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir)
+void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
{
debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
@@ -121,8 +121,8 @@ int cxl_debugfs_afu_add(struct cxl_afu *afu)
debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
- if (afu->adapter->native->sl_ops->debugfs_add_afu_sl_regs)
- afu->adapter->native->sl_ops->debugfs_add_afu_sl_regs(afu, dir);
+ if (afu->adapter->native->sl_ops->debugfs_add_afu_regs)
+ afu->adapter->native->sl_ops->debugfs_add_afu_regs(afu, dir);
return 0;
}
diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
index e04bc4d..f6ba698 100644
--- a/drivers/misc/cxl/guest.c
+++ b/drivers/misc/cxl/guest.c
@@ -169,7 +169,7 @@ static irqreturn_t guest_psl_irq(int irq, void *data)
return IRQ_HANDLED;
}
- rc = cxl_irq(irq, ctx, &irq_info);
+ rc = cxl_irq_psl(irq, ctx, &irq_info);
return rc;
}
diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
index 1a402bb..2fa119e 100644
--- a/drivers/misc/cxl/irq.c
+++ b/drivers/misc/cxl/irq.c
@@ -34,7 +34,7 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
return IRQ_HANDLED;
}
-irqreturn_t cxl_irq(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
+irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
{
u64 dsisr, dar;
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index 8a3ce99..a02d6f9 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -257,7 +257,7 @@ void cxl_release_spa(struct cxl_afu *afu)
}
}
-int cxl_tlb_slb_invalidate(struct cxl *adapter)
+int cxl_invalidate_all_psl(struct cxl *adapter)
{
unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
@@ -577,7 +577,7 @@ static void update_ivtes_directed(struct cxl_context *ctx)
WARN_ON(add_process_element(ctx));
}
-static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
+int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr)
{
u32 pid;
int result;
@@ -670,7 +670,7 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
return 0;
}
-static int activate_dedicated_process(struct cxl_afu *afu)
+int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
{
dev_info(&afu->dev, "Activating dedicated process mode\n");
@@ -693,7 +693,7 @@ static int activate_dedicated_process(struct cxl_afu *afu)
return cxl_chardev_d_afu_add(afu);
}
-static void update_ivtes_dedicated(struct cxl_context *ctx)
+void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
{
struct cxl_afu *afu = ctx->afu;
@@ -709,7 +709,7 @@ static void update_ivtes_dedicated(struct cxl_context *ctx)
((u64)ctx->irqs.range[3] & 0xffff));
}
-static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
+int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr)
{
struct cxl_afu *afu = ctx->afu;
u64 pid;
@@ -727,7 +727,8 @@ static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
cxl_prefault(ctx, wed);
- update_ivtes_dedicated(ctx);
+ if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
+ afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
cxl_p2n_write(afu, CXL_PSL_AMR_An, amr);
@@ -777,8 +778,9 @@ static int native_afu_activate_mode(struct cxl_afu *afu, int mode)
if (mode == CXL_MODE_DIRECTED)
return activate_afu_directed(afu);
- if (mode == CXL_MODE_DEDICATED)
- return activate_dedicated_process(afu);
+ if ((mode == CXL_MODE_DEDICATED) &&
+ (afu->adapter->native->sl_ops->activate_dedicated_process))
+ return afu->adapter->native->sl_ops->activate_dedicated_process(afu);
return -EINVAL;
}
@@ -792,11 +794,13 @@ static int native_attach_process(struct cxl_context *ctx, bool kernel,
}
ctx->kernel = kernel;
- if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
- return attach_afu_directed(ctx, wed, amr);
+ if ((ctx->afu->current_mode == CXL_MODE_DIRECTED) &&
+ (ctx->afu->adapter->native->sl_ops->attach_afu_directed))
+ return ctx->afu->adapter->native->sl_ops->attach_afu_directed(ctx, wed, amr);
- if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
- return attach_dedicated(ctx, wed, amr);
+ if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
+ (ctx->afu->adapter->native->sl_ops->attach_dedicated_process))
+ return ctx->afu->adapter->native->sl_ops->attach_dedicated_process(ctx, wed, amr);
return -EINVAL;
}
@@ -829,8 +833,9 @@ static void native_update_ivtes(struct cxl_context *ctx)
{
if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
return update_ivtes_directed(ctx);
- if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
- return update_ivtes_dedicated(ctx);
+ if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
+ (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes))
+ return ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
WARN(1, "native_update_ivtes: Bad mode\n");
}
@@ -874,7 +879,7 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
return 0;
}
-void cxl_native_psl_irq_dump_regs(struct cxl_context *ctx)
+void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx)
{
u64 fir1, fir2, fir_slice, serr, afu_debug;
@@ -910,7 +915,7 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
return cxl_ops->ack_irq(ctx, 0, errstat);
}
-static irqreturn_t fail_psl_irq(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
+irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
{
if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
@@ -926,7 +931,7 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
struct cxl_context *ctx;
struct cxl_irq_info irq_info;
u64 phreg = cxl_p2n_read(afu, CXL_PSL_PEHandle_An);
- int ph, ret;
+ int ph, ret = IRQ_HANDLED;
/* check if eeh kicked in while the interrupt was in flight */
if (unlikely(phreg == ~0ULL)) {
@@ -939,13 +944,16 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
ph = phreg & 0xffff;
if ((ret = native_get_irq_info(afu, &irq_info))) {
WARN(1, "Unable to get CXL IRQ Info: %i\n", ret);
- return fail_psl_irq(afu, &irq_info);
+ if (afu->adapter->native->sl_ops->fail_irq)
+ return afu->adapter->native->sl_ops->fail_irq(afu, &irq_info);
+ return IRQ_HANDLED;
}
rcu_read_lock();
ctx = idr_find(&afu->contexts_idr, ph);
if (ctx) {
- ret = cxl_irq(irq, ctx, &irq_info);
+ if (afu->adapter->native->sl_ops->handle_interrupt)
+ ret = afu->adapter->native->sl_ops->handle_interrupt(irq, ctx, &irq_info);
rcu_read_unlock();
return ret;
}
@@ -955,7 +963,9 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
" %016llx\n(Possible AFU HW issue - was a term/remove acked"
" with outstanding transactions?)\n", ph, irq_info.dsisr,
irq_info.dar);
- return fail_psl_irq(afu, &irq_info);
+ if (afu->adapter->native->sl_ops->fail_irq)
+ ret = afu->adapter->native->sl_ops->fail_irq(afu, &irq_info);
+ return ret;
}
static void native_irq_wait(struct cxl_context *ctx)
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 853925b..aba5f9a 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -377,7 +377,7 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
return 0;
}
-static int init_implementation_adapter_psl_regs(struct cxl *adapter, struct pci_dev *dev)
+static int init_implementation_adapter_regs_psl(struct cxl *adapter, struct pci_dev *dev)
{
u64 psl_dsnctl, psl_fircntl;
u64 chipid;
@@ -409,7 +409,7 @@ static int init_implementation_adapter_psl_regs(struct cxl *adapter, struct pci_
return 0;
}
-static int init_implementation_adapter_xsl_regs(struct cxl *adapter, struct pci_dev *dev)
+static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_dev *dev)
{
u64 xsl_dsnctl;
u64 chipid;
@@ -513,7 +513,7 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
return;
}
-static int init_implementation_afu_psl_regs(struct cxl_afu *afu)
+static int init_implementation_afu_regs_psl(struct cxl_afu *afu)
{
/* read/write masks for this slice */
cxl_p1n_write(afu, CXL_PSL_APCALLOC_A, 0xFFFFFFFEFEFEFEFEULL);
@@ -996,7 +996,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
return 0;
}
-static int sanitise_afu_regs(struct cxl_afu *afu)
+static int sanitise_afu_regs_psl(struct cxl_afu *afu)
{
u64 reg;
@@ -1102,8 +1102,11 @@ static int pci_configure_afu(struct cxl_afu *afu, struct cxl *adapter, struct pc
if ((rc = pci_map_slice_regs(afu, adapter, dev)))
return rc;
- if ((rc = sanitise_afu_regs(afu)))
- goto err1;
+ if (adapter->native->sl_ops->sanitise_afu_regs) {
+ rc = adapter->native->sl_ops->sanitise_afu_regs(afu);
+ if (rc)
+ goto err1;
+ }
/* We need to reset the AFU before we can read the AFU descriptor */
if ((rc = cxl_ops->afu_reset(afu)))
@@ -1423,9 +1426,15 @@ static void cxl_release_adapter(struct device *dev)
static int sanitise_adapter_regs(struct cxl *adapter)
{
+ int rc = 0;
+
/* Clear PSL tberror bit by writing 1 to it */
cxl_p1_write(adapter, CXL_PSL_ErrIVTE, CXL_PSL_ErrIVTE_tberror);
- return cxl_tlb_slb_invalidate(adapter);
+
+ if (adapter->native->sl_ops->invalidate_all)
+ rc = adapter->native->sl_ops->invalidate_all(adapter);
+
+ return rc;
}
/* This should contain *only* operations that can safely be done in
@@ -1509,15 +1518,23 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
}
static const struct cxl_service_layer_ops psl_ops = {
- .adapter_regs_init = init_implementation_adapter_psl_regs,
- .afu_regs_init = init_implementation_afu_psl_regs,
+ .adapter_regs_init = init_implementation_adapter_regs_psl,
+ .invalidate_all = cxl_invalidate_all_psl,
+ .afu_regs_init = init_implementation_afu_regs_psl,
+ .sanitise_afu_regs = sanitise_afu_regs_psl,
.register_serr_irq = cxl_native_register_serr_irq,
.release_serr_irq = cxl_native_release_serr_irq,
- .debugfs_add_adapter_sl_regs = cxl_debugfs_add_adapter_psl_regs,
- .debugfs_add_afu_sl_regs = cxl_debugfs_add_afu_psl_regs,
- .psl_irq_dump_registers = cxl_native_psl_irq_dump_regs,
+ .handle_interrupt = cxl_irq_psl,
+ .fail_irq = cxl_fail_irq_psl,
+ .activate_dedicated_process = cxl_activate_dedicated_process_psl,
+ .attach_afu_directed = cxl_attach_afu_directed_psl,
+ .attach_dedicated_process = cxl_attach_dedicated_process_psl,
+ .update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
+ .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl,
+ .debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl,
+ .psl_irq_dump_registers = cxl_native_irq_dump_regs_psl,
.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
- .debugfs_stop_trace = cxl_stop_trace,
+ .debugfs_stop_trace = cxl_stop_trace_psl,
.write_timebase_ctrl = write_timebase_ctrl_psl,
.timebase_read = timebase_read_psl,
.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
@@ -1525,8 +1542,8 @@ static const struct cxl_service_layer_ops psl_ops = {
};
static const struct cxl_service_layer_ops xsl_ops = {
- .adapter_regs_init = init_implementation_adapter_xsl_regs,
- .debugfs_add_adapter_sl_regs = cxl_debugfs_add_adapter_xsl_regs,
+ .adapter_regs_init = init_implementation_adapter_regs_xsl,
+ .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_xsl,
.write_timebase_ctrl = write_timebase_ctrl_xsl,
.timebase_read = timebase_read_xsl,
.capi_mode = OPAL_PHB_CAPI_MODE_DMA,
--
2.7.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 4/7] cxl: Update implementation service layer
2017-02-01 17:30 ` [PATCH 4/7] cxl: Update implementation service layer Christophe Lombard
@ 2017-03-02 6:48 ` Andrew Donnellan
0 siblings, 0 replies; 17+ messages in thread
From: Andrew Donnellan @ 2017-03-02 6:48 UTC (permalink / raw)
To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie
On 02/02/17 04:30, Christophe Lombard wrote:
> The service layer API (in cxl.h) lists some low-level functions whose
> implementations differ between PSL8, PSL9 and XSL. Each environment
> implements its own functions, and the common code calls them through
> the function pointers defined in cxl_service_layer_ops.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
The commit message could explain a bit more clearly what's actually
being changed.
This patch needs to be rebased, see:
https://github.com/ajdlinux/linux/commit/ff21009bb0bac317410005c3a736e5021868cb99
Otherwise the patch seems fine.
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
> ---
> drivers/misc/cxl/cxl.h | 33 ++++++++++++++++++++++--------
> drivers/misc/cxl/debugfs.c | 16 +++++++--------
> drivers/misc/cxl/guest.c | 2 +-
> drivers/misc/cxl/irq.c | 2 +-
> drivers/misc/cxl/native.c | 50 +++++++++++++++++++++++++++-------------------
> drivers/misc/cxl/pci.c | 47 +++++++++++++++++++++++++++++--------------
> 6 files changed, 97 insertions(+), 53 deletions(-)
>
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index e53ce9d..96d89cb 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -550,13 +550,23 @@ struct cxl_context {
> struct mm_struct *mm;
> };
>
> +struct cxl_irq_info;
> +
> struct cxl_service_layer_ops {
> int (*adapter_regs_init)(struct cxl *adapter, struct pci_dev *dev);
> + int (*invalidate_all)(struct cxl *adapter);
> int (*afu_regs_init)(struct cxl_afu *afu);
> + int (*sanitise_afu_regs)(struct cxl_afu *afu);
> int (*register_serr_irq)(struct cxl_afu *afu);
> void (*release_serr_irq)(struct cxl_afu *afu);
> - void (*debugfs_add_adapter_sl_regs)(struct cxl *adapter, struct dentry *dir);
> - void (*debugfs_add_afu_sl_regs)(struct cxl_afu *afu, struct dentry *dir);
> + irqreturn_t (*handle_interrupt)(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> + irqreturn_t (*fail_irq)(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
> + int (*activate_dedicated_process)(struct cxl_afu *afu);
> + int (*attach_afu_directed)(struct cxl_context *ctx, u64 wed, u64 amr);
> + int (*attach_dedicated_process)(struct cxl_context *ctx, u64 wed, u64 amr);
> + void (*update_dedicated_ivtes)(struct cxl_context *ctx);
> + void (*debugfs_add_adapter_regs)(struct cxl *adapter, struct dentry *dir);
> + void (*debugfs_add_afu_regs)(struct cxl_afu *afu, struct dentry *dir);
> void (*psl_irq_dump_registers)(struct cxl_context *ctx);
> void (*err_irq_dump_registers)(struct cxl *adapter);
> void (*debugfs_stop_trace)(struct cxl *adapter);
> @@ -800,6 +810,11 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
> void afu_release_irqs(struct cxl_context *ctx, void *cookie);
> void afu_irq_name_free(struct cxl_context *ctx);
>
> +int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr);
> +int cxl_activate_dedicated_process_psl(struct cxl_afu *afu);
> +int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr);
> +void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx);
> +
> int cxl_debugfs_init(void);
> void cxl_debugfs_exit(void);
> int cxl_debugfs_adapter_add(struct cxl *adapter);
> @@ -858,7 +873,9 @@ struct cxl_irq_info {
> };
>
> void cxl_assign_psn_space(struct cxl_context *ctx);
> -irqreturn_t cxl_irq(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> +int cxl_invalidate_all_psl(struct cxl *adapter);
> +irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> +irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
> int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
> void *cookie, irq_hw_number_t *dest_hwirq,
> unsigned int *dest_virq, const char *name);
> @@ -870,12 +887,12 @@ int cxl_data_cache_flush(struct cxl *adapter);
> int cxl_afu_disable(struct cxl_afu *afu);
> int cxl_psl_purge(struct cxl_afu *afu);
>
> -void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir);
> -void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter, struct dentry *dir);
> -void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir);
> -void cxl_native_psl_irq_dump_regs(struct cxl_context *ctx);
> +void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir);
> +void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
> +void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir);
> +void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx);
> void cxl_native_err_irq_dump_regs(struct cxl *adapter);
> -void cxl_stop_trace(struct cxl *cxl);
> +void cxl_stop_trace_psl(struct cxl *cxl);
> int cxl_pci_vphb_add(struct cxl_afu *afu);
> void cxl_pci_vphb_remove(struct cxl_afu *afu);
> void cxl_release_mapping(struct cxl_context *ctx);
> diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
> index 9c06ac8..4848ebf 100644
> --- a/drivers/misc/cxl/debugfs.c
> +++ b/drivers/misc/cxl/debugfs.c
> @@ -15,7 +15,7 @@
>
> static struct dentry *cxl_debugfs;
>
> -void cxl_stop_trace(struct cxl *adapter)
> +void cxl_stop_trace_psl(struct cxl *adapter)
> {
> int slice;
>
> @@ -53,7 +53,7 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
> (void __force *)value, &fops_io_x64);
> }
>
> -void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir)
> +void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir)
> {
> debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
> debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR2));
> @@ -61,7 +61,7 @@ void cxl_debugfs_add_adapter_psl_regs(struct cxl *adapter, struct dentry *dir)
> debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_TRACE));
> }
>
> -void cxl_debugfs_add_adapter_xsl_regs(struct cxl *adapter, struct dentry *dir)
> +void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir)
> {
> debugfs_create_io_x64("fec", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_XSL_FEC));
> }
> @@ -82,8 +82,8 @@ int cxl_debugfs_adapter_add(struct cxl *adapter)
>
> debugfs_create_io_x64("err_ivte", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_ErrIVTE));
>
> - if (adapter->native->sl_ops->debugfs_add_adapter_sl_regs)
> - adapter->native->sl_ops->debugfs_add_adapter_sl_regs(adapter, dir);
> + if (adapter->native->sl_ops->debugfs_add_adapter_regs)
> + adapter->native->sl_ops->debugfs_add_adapter_regs(adapter, dir);
> return 0;
> }
>
> @@ -92,7 +92,7 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
> debugfs_remove_recursive(adapter->debugfs);
> }
>
> -void cxl_debugfs_add_afu_psl_regs(struct cxl_afu *afu, struct dentry *dir)
> +void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
> {
> debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
> debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
> @@ -121,8 +121,8 @@ int cxl_debugfs_afu_add(struct cxl_afu *afu)
> debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
> debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
>
> - if (afu->adapter->native->sl_ops->debugfs_add_afu_sl_regs)
> - afu->adapter->native->sl_ops->debugfs_add_afu_sl_regs(afu, dir);
> + if (afu->adapter->native->sl_ops->debugfs_add_afu_regs)
> + afu->adapter->native->sl_ops->debugfs_add_afu_regs(afu, dir);
>
> return 0;
> }
> diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
> index e04bc4d..f6ba698 100644
> --- a/drivers/misc/cxl/guest.c
> +++ b/drivers/misc/cxl/guest.c
> @@ -169,7 +169,7 @@ static irqreturn_t guest_psl_irq(int irq, void *data)
> return IRQ_HANDLED;
> }
>
> - rc = cxl_irq(irq, ctx, &irq_info);
> + rc = cxl_irq_psl(irq, ctx, &irq_info);
> return rc;
> }
>
> diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
> index 1a402bb..2fa119e 100644
> --- a/drivers/misc/cxl/irq.c
> +++ b/drivers/misc/cxl/irq.c
> @@ -34,7 +34,7 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
> return IRQ_HANDLED;
> }
>
> -irqreturn_t cxl_irq(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
> +irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
> {
> u64 dsisr, dar;
>
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index 8a3ce99..a02d6f9 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -257,7 +257,7 @@ void cxl_release_spa(struct cxl_afu *afu)
> }
> }
>
> -int cxl_tlb_slb_invalidate(struct cxl *adapter)
> +int cxl_invalidate_all_psl(struct cxl *adapter)
> {
> unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
>
> @@ -577,7 +577,7 @@ static void update_ivtes_directed(struct cxl_context *ctx)
> WARN_ON(add_process_element(ctx));
> }
>
> -static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
> +int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr)
> {
> u32 pid;
> int result;
> @@ -670,7 +670,7 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
> return 0;
> }
>
> -static int activate_dedicated_process(struct cxl_afu *afu)
> +int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
> {
> dev_info(&afu->dev, "Activating dedicated process mode\n");
>
> @@ -693,7 +693,7 @@ static int activate_dedicated_process(struct cxl_afu *afu)
> return cxl_chardev_d_afu_add(afu);
> }
>
> -static void update_ivtes_dedicated(struct cxl_context *ctx)
> +void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
> {
> struct cxl_afu *afu = ctx->afu;
>
> @@ -709,7 +709,7 @@ static void update_ivtes_dedicated(struct cxl_context *ctx)
> ((u64)ctx->irqs.range[3] & 0xffff));
> }
>
> -static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
> +int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr)
> {
> struct cxl_afu *afu = ctx->afu;
> u64 pid;
> @@ -727,7 +727,8 @@ static int attach_dedicated(struct cxl_context *ctx, u64 wed, u64 amr)
>
> cxl_prefault(ctx, wed);
>
> - update_ivtes_dedicated(ctx);
> + if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
> + afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
>
> cxl_p2n_write(afu, CXL_PSL_AMR_An, amr);
>
> @@ -777,8 +778,9 @@ static int native_afu_activate_mode(struct cxl_afu *afu, int mode)
>
> if (mode == CXL_MODE_DIRECTED)
> return activate_afu_directed(afu);
> - if (mode == CXL_MODE_DEDICATED)
> - return activate_dedicated_process(afu);
> + if ((mode == CXL_MODE_DEDICATED) &&
> + (afu->adapter->native->sl_ops->activate_dedicated_process))
> + return afu->adapter->native->sl_ops->activate_dedicated_process(afu);
>
> return -EINVAL;
> }
> @@ -792,11 +794,13 @@ static int native_attach_process(struct cxl_context *ctx, bool kernel,
> }
>
> ctx->kernel = kernel;
> - if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
> - return attach_afu_directed(ctx, wed, amr);
> + if ((ctx->afu->current_mode == CXL_MODE_DIRECTED) &&
> + (ctx->afu->adapter->native->sl_ops->attach_afu_directed))
> + return ctx->afu->adapter->native->sl_ops->attach_afu_directed(ctx, wed, amr);
>
> - if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
> - return attach_dedicated(ctx, wed, amr);
> + if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
> + (ctx->afu->adapter->native->sl_ops->attach_dedicated_process))
> + return ctx->afu->adapter->native->sl_ops->attach_dedicated_process(ctx, wed, amr);
>
> return -EINVAL;
> }
> @@ -829,8 +833,9 @@ static void native_update_ivtes(struct cxl_context *ctx)
> {
> if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
> return update_ivtes_directed(ctx);
> - if (ctx->afu->current_mode == CXL_MODE_DEDICATED)
> - return update_ivtes_dedicated(ctx);
> + if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
> + (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes))
> + return ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
> WARN(1, "native_update_ivtes: Bad mode\n");
> }
>
> @@ -874,7 +879,7 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
> return 0;
> }
>
> -void cxl_native_psl_irq_dump_regs(struct cxl_context *ctx)
> +void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx)
> {
> u64 fir1, fir2, fir_slice, serr, afu_debug;
>
> @@ -910,7 +915,7 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
> return cxl_ops->ack_irq(ctx, 0, errstat);
> }
>
> -static irqreturn_t fail_psl_irq(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
> +irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
> {
> if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
> cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
> @@ -926,7 +931,7 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
> struct cxl_context *ctx;
> struct cxl_irq_info irq_info;
> u64 phreg = cxl_p2n_read(afu, CXL_PSL_PEHandle_An);
> - int ph, ret;
> + int ph, ret = IRQ_HANDLED;
>
> /* check if eeh kicked in while the interrupt was in flight */
> if (unlikely(phreg == ~0ULL)) {
> @@ -939,13 +944,16 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
> ph = phreg & 0xffff;
> if ((ret = native_get_irq_info(afu, &irq_info))) {
> WARN(1, "Unable to get CXL IRQ Info: %i\n", ret);
> - return fail_psl_irq(afu, &irq_info);
> + if (afu->adapter->native->sl_ops->fail_irq)
> + return afu->adapter->native->sl_ops->fail_irq(afu, &irq_info);
> + return IRQ_HANDLED;
> }
>
> rcu_read_lock();
> ctx = idr_find(&afu->contexts_idr, ph);
> if (ctx) {
> - ret = cxl_irq(irq, ctx, &irq_info);
> + if (afu->adapter->native->sl_ops->handle_interrupt)
> + ret = afu->adapter->native->sl_ops->handle_interrupt(irq, ctx, &irq_info);
> rcu_read_unlock();
> return ret;
> }
> @@ -955,7 +963,9 @@ static irqreturn_t native_irq_multiplexed(int irq, void *data)
> " %016llx\n(Possible AFU HW issue - was a term/remove acked"
> " with outstanding transactions?)\n", ph, irq_info.dsisr,
> irq_info.dar);
> - return fail_psl_irq(afu, &irq_info);
> + if (afu->adapter->native->sl_ops->fail_irq)
> + ret = afu->adapter->native->sl_ops->fail_irq(afu, &irq_info);
> + return ret;
> }
>
> static void native_irq_wait(struct cxl_context *ctx)
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index 853925b..aba5f9a 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -377,7 +377,7 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
> return 0;
> }
>
> -static int init_implementation_adapter_psl_regs(struct cxl *adapter, struct pci_dev *dev)
> +static int init_implementation_adapter_regs_psl(struct cxl *adapter, struct pci_dev *dev)
> {
> u64 psl_dsnctl, psl_fircntl;
> u64 chipid;
> @@ -409,7 +409,7 @@ static int init_implementation_adapter_psl_regs(struct cxl *adapter, struct pci_
> return 0;
> }
>
> -static int init_implementation_adapter_xsl_regs(struct cxl *adapter, struct pci_dev *dev)
> +static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_dev *dev)
> {
> u64 xsl_dsnctl;
> u64 chipid;
> @@ -513,7 +513,7 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
> return;
> }
>
> -static int init_implementation_afu_psl_regs(struct cxl_afu *afu)
> +static int init_implementation_afu_regs_psl(struct cxl_afu *afu)
> {
> /* read/write masks for this slice */
> cxl_p1n_write(afu, CXL_PSL_APCALLOC_A, 0xFFFFFFFEFEFEFEFEULL);
> @@ -996,7 +996,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
> return 0;
> }
>
> -static int sanitise_afu_regs(struct cxl_afu *afu)
> +static int sanitise_afu_regs_psl(struct cxl_afu *afu)
> {
> u64 reg;
>
> @@ -1102,8 +1102,11 @@ static int pci_configure_afu(struct cxl_afu *afu, struct cxl *adapter, struct pc
> if ((rc = pci_map_slice_regs(afu, adapter, dev)))
> return rc;
>
> - if ((rc = sanitise_afu_regs(afu)))
> - goto err1;
> + if (adapter->native->sl_ops->sanitise_afu_regs) {
> + rc = adapter->native->sl_ops->sanitise_afu_regs(afu);
> + if (rc)
> + goto err1;
> + }
>
> /* We need to reset the AFU before we can read the AFU descriptor */
> if ((rc = cxl_ops->afu_reset(afu)))
> @@ -1423,9 +1426,15 @@ static void cxl_release_adapter(struct device *dev)
>
> static int sanitise_adapter_regs(struct cxl *adapter)
> {
> + int rc = 0;
> +
> /* Clear PSL tberror bit by writing 1 to it */
> cxl_p1_write(adapter, CXL_PSL_ErrIVTE, CXL_PSL_ErrIVTE_tberror);
> - return cxl_tlb_slb_invalidate(adapter);
> +
> + if (adapter->native->sl_ops->invalidate_all)
> + rc = adapter->native->sl_ops->invalidate_all(adapter);
> +
> + return rc;
> }
>
> /* This should contain *only* operations that can safely be done in
> @@ -1509,15 +1518,23 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
> }
>
> static const struct cxl_service_layer_ops psl_ops = {
> - .adapter_regs_init = init_implementation_adapter_psl_regs,
> - .afu_regs_init = init_implementation_afu_psl_regs,
> + .adapter_regs_init = init_implementation_adapter_regs_psl,
> + .invalidate_all = cxl_invalidate_all_psl,
> + .afu_regs_init = init_implementation_afu_regs_psl,
> + .sanitise_afu_regs = sanitise_afu_regs_psl,
> .register_serr_irq = cxl_native_register_serr_irq,
> .release_serr_irq = cxl_native_release_serr_irq,
> - .debugfs_add_adapter_sl_regs = cxl_debugfs_add_adapter_psl_regs,
> - .debugfs_add_afu_sl_regs = cxl_debugfs_add_afu_psl_regs,
> - .psl_irq_dump_registers = cxl_native_psl_irq_dump_regs,
> + .handle_interrupt = cxl_irq_psl,
> + .fail_irq = cxl_fail_irq_psl,
> + .activate_dedicated_process = cxl_activate_dedicated_process_psl,
> + .attach_afu_directed = cxl_attach_afu_directed_psl,
> + .attach_dedicated_process = cxl_attach_dedicated_process_psl,
> + .update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
> + .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl,
> + .debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl,
> + .psl_irq_dump_registers = cxl_native_irq_dump_regs_psl,
> .err_irq_dump_registers = cxl_native_err_irq_dump_regs,
> - .debugfs_stop_trace = cxl_stop_trace,
> + .debugfs_stop_trace = cxl_stop_trace_psl,
> .write_timebase_ctrl = write_timebase_ctrl_psl,
> .timebase_read = timebase_read_psl,
> .capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
> @@ -1525,8 +1542,8 @@ static const struct cxl_service_layer_ops psl_ops = {
> };
>
> static const struct cxl_service_layer_ops xsl_ops = {
> - .adapter_regs_init = init_implementation_adapter_xsl_regs,
> - .debugfs_add_adapter_sl_regs = cxl_debugfs_add_adapter_xsl_regs,
> + .adapter_regs_init = init_implementation_adapter_regs_xsl,
> + .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_xsl,
> .write_timebase_ctrl = write_timebase_ctrl_xsl,
> .timebase_read = timebase_read_xsl,
> .capi_mode = OPAL_PHB_CAPI_MODE_DMA,
>
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com IBM Australia Limited
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 5/7] cxl: Rename some psl8 specific functions
2017-02-01 17:30 [PATCH 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
` (3 preceding siblings ...)
2017-02-01 17:30 ` [PATCH 4/7] cxl: Update implementation service layer Christophe Lombard
@ 2017-02-01 17:30 ` Christophe Lombard
2017-03-02 6:55 ` Andrew Donnellan
2017-02-01 17:30 ` [PATCH 6/7] cxl: Isolate few psl8 specific calls Christophe Lombard
2017-02-01 17:30 ` [PATCH 7/7] cxl: Add psl9 specific code Christophe Lombard
6 siblings, 1 reply; 17+ messages in thread
From: Christophe Lombard @ 2017-02-01 17:30 UTC (permalink / raw)
To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan
Rename a few functions, changing the '_psl' suffix to '_psl8', to make
clear that the implementation is psl8 specific.
Those functions will have an equivalent implementation for the psl9 in
a later patch.
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
drivers/misc/cxl/cxl.h | 23 +++++++++++-----------
drivers/misc/cxl/debugfs.c | 6 +++---
drivers/misc/cxl/guest.c | 2 +-
drivers/misc/cxl/irq.c | 2 +-
drivers/misc/cxl/native.c | 14 +++++++-------
drivers/misc/cxl/pci.c | 48 +++++++++++++++++++++++-----------------------
6 files changed, 47 insertions(+), 48 deletions(-)
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 96d89cb..dbd3fc36 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -810,10 +810,10 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
void afu_release_irqs(struct cxl_context *ctx, void *cookie);
void afu_irq_name_free(struct cxl_context *ctx);
-int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr);
-int cxl_activate_dedicated_process_psl(struct cxl_afu *afu);
-int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr);
-void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx);
+int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu);
+int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx);
int cxl_debugfs_init(void);
void cxl_debugfs_exit(void);
@@ -873,26 +873,25 @@ struct cxl_irq_info {
};
void cxl_assign_psn_space(struct cxl_context *ctx);
-int cxl_invalidate_all_psl(struct cxl *adapter);
-irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
-irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
+int cxl_invalidate_all_psl8(struct cxl *adapter);
+irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+irqreturn_t cxl_fail_irq_psl8(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
void *cookie, irq_hw_number_t *dest_hwirq,
unsigned int *dest_virq, const char *name);
int cxl_check_error(struct cxl_afu *afu);
int cxl_afu_slbia(struct cxl_afu *afu);
-int cxl_tlb_slb_invalidate(struct cxl *adapter);
int cxl_data_cache_flush(struct cxl *adapter);
int cxl_afu_disable(struct cxl_afu *afu);
int cxl_psl_purge(struct cxl_afu *afu);
-void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir);
+void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir);
void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
-void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir);
-void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx);
+void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir);
+void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx);
void cxl_native_err_irq_dump_regs(struct cxl *adapter);
-void cxl_stop_trace_psl(struct cxl *cxl);
+void cxl_stop_trace_psl8(struct cxl *cxl);
int cxl_pci_vphb_add(struct cxl_afu *afu);
void cxl_pci_vphb_remove(struct cxl_afu *afu);
void cxl_release_mapping(struct cxl_context *ctx);
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 4848ebf..2ff10a9 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -15,7 +15,7 @@
static struct dentry *cxl_debugfs;
-void cxl_stop_trace_psl(struct cxl *adapter)
+void cxl_stop_trace_psl8(struct cxl *adapter)
{
int slice;
@@ -53,7 +53,7 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
(void __force *)value, &fops_io_x64);
}
-void cxl_debugfs_add_adapter_regs_psl(struct cxl *adapter, struct dentry *dir)
+void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir)
{
debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR2));
@@ -92,7 +92,7 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
debugfs_remove_recursive(adapter->debugfs);
}
-void cxl_debugfs_add_afu_regs_psl(struct cxl_afu *afu, struct dentry *dir)
+void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
{
debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
index f6ba698..3ad7381 100644
--- a/drivers/misc/cxl/guest.c
+++ b/drivers/misc/cxl/guest.c
@@ -169,7 +169,7 @@ static irqreturn_t guest_psl_irq(int irq, void *data)
return IRQ_HANDLED;
}
- rc = cxl_irq_psl(irq, ctx, &irq_info);
+ rc = cxl_irq_psl8(irq, ctx, &irq_info);
return rc;
}
diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
index 2fa119e..fa9f8a2 100644
--- a/drivers/misc/cxl/irq.c
+++ b/drivers/misc/cxl/irq.c
@@ -34,7 +34,7 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
return IRQ_HANDLED;
}
-irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
+irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
{
u64 dsisr, dar;
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index a02d6f9..8805d8c 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -257,7 +257,7 @@ void cxl_release_spa(struct cxl_afu *afu)
}
}
-int cxl_invalidate_all_psl(struct cxl *adapter)
+int cxl_invalidate_all_psl8(struct cxl *adapter)
{
unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
@@ -577,7 +577,7 @@ static void update_ivtes_directed(struct cxl_context *ctx)
WARN_ON(add_process_element(ctx));
}
-int cxl_attach_afu_directed_psl(struct cxl_context *ctx, u64 wed, u64 amr)
+int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
{
u32 pid;
int result;
@@ -670,7 +670,7 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
return 0;
}
-int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
+int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
{
dev_info(&afu->dev, "Activating dedicated process mode\n");
@@ -693,7 +693,7 @@ int cxl_activate_dedicated_process_psl(struct cxl_afu *afu)
return cxl_chardev_d_afu_add(afu);
}
-void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
+void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
{
struct cxl_afu *afu = ctx->afu;
@@ -709,7 +709,7 @@ void cxl_update_dedicated_ivtes_psl(struct cxl_context *ctx)
((u64)ctx->irqs.range[3] & 0xffff));
}
-int cxl_attach_dedicated_process_psl(struct cxl_context *ctx, u64 wed, u64 amr)
+int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
{
struct cxl_afu *afu = ctx->afu;
u64 pid;
@@ -879,7 +879,7 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
return 0;
}
-void cxl_native_irq_dump_regs_psl(struct cxl_context *ctx)
+void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx)
{
u64 fir1, fir2, fir_slice, serr, afu_debug;
@@ -915,7 +915,7 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
return cxl_ops->ack_irq(ctx, 0, errstat);
}
-irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
+irqreturn_t cxl_fail_irq_psl8(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
{
if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index aba5f9a..68362b1 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -377,7 +377,7 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
return 0;
}
-static int init_implementation_adapter_regs_psl(struct cxl *adapter, struct pci_dev *dev)
+static int init_implementation_adapter_regs_psl8(struct cxl *adapter, struct pci_dev *dev)
{
u64 psl_dsnctl, psl_fircntl;
u64 chipid;
@@ -434,7 +434,7 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
/* For the PSL this is a multiple for 0 < n <= 7: */
#define PSL_2048_250MHZ_CYCLES 1
-static void write_timebase_ctrl_psl(struct cxl *adapter)
+static void write_timebase_ctrl_psl8(struct cxl *adapter)
{
cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
@@ -455,7 +455,7 @@ static void write_timebase_ctrl_xsl(struct cxl *adapter)
TBSYNC_CNT(XSL_4000_CLOCKS));
}
-static u64 timebase_read_psl(struct cxl *adapter)
+static u64 timebase_read_psl8(struct cxl *adapter)
{
return cxl_p1_read(adapter, CXL_PSL_Timebase);
}
@@ -513,7 +513,7 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
return;
}
-static int init_implementation_afu_regs_psl(struct cxl_afu *afu)
+static int init_implementation_afu_regs_psl8(struct cxl_afu *afu)
{
/* read/write masks for this slice */
cxl_p1n_write(afu, CXL_PSL_APCALLOC_A, 0xFFFFFFFEFEFEFEFEULL);
@@ -996,7 +996,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
return 0;
}
-static int sanitise_afu_regs_psl(struct cxl_afu *afu)
+static int sanitise_afu_regs_psl8(struct cxl_afu *afu)
{
u64 reg;
@@ -1517,26 +1517,26 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
pci_disable_device(pdev);
}
-static const struct cxl_service_layer_ops psl_ops = {
- .adapter_regs_init = init_implementation_adapter_regs_psl,
- .invalidate_all = cxl_invalidate_all_psl,
- .afu_regs_init = init_implementation_afu_regs_psl,
- .sanitise_afu_regs = sanitise_afu_regs_psl,
+static const struct cxl_service_layer_ops psl8_ops = {
+ .adapter_regs_init = init_implementation_adapter_regs_psl8,
+ .invalidate_all = cxl_invalidate_all_psl8,
+ .afu_regs_init = init_implementation_afu_regs_psl8,
+ .sanitise_afu_regs = sanitise_afu_regs_psl8,
.register_serr_irq = cxl_native_register_serr_irq,
.release_serr_irq = cxl_native_release_serr_irq,
- .handle_interrupt = cxl_irq_psl,
- .fail_irq = cxl_fail_irq_psl,
- .activate_dedicated_process = cxl_activate_dedicated_process_psl,
- .attach_afu_directed = cxl_attach_afu_directed_psl,
- .attach_dedicated_process = cxl_attach_dedicated_process_psl,
- .update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl,
- .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl,
- .debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl,
- .psl_irq_dump_registers = cxl_native_irq_dump_regs_psl,
+ .handle_interrupt = cxl_irq_psl8,
+ .fail_irq = cxl_fail_irq_psl8,
+ .activate_dedicated_process = cxl_activate_dedicated_process_psl8,
+ .attach_afu_directed = cxl_attach_afu_directed_psl8,
+ .attach_dedicated_process = cxl_attach_dedicated_process_psl8,
+ .update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl8,
+ .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl8,
+ .debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl8,
+ .psl_irq_dump_registers = cxl_native_irq_dump_regs_psl8,
.err_irq_dump_registers = cxl_native_err_irq_dump_regs,
- .debugfs_stop_trace = cxl_stop_trace_psl,
- .write_timebase_ctrl = write_timebase_ctrl_psl,
- .timebase_read = timebase_read_psl,
+ .debugfs_stop_trace = cxl_stop_trace_psl8,
+ .write_timebase_ctrl = write_timebase_ctrl_psl8,
+ .timebase_read = timebase_read_psl8,
.capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
.needs_reset_before_disable = true,
};
@@ -1557,8 +1557,8 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
adapter->native->sl_ops = &xsl_ops;
adapter->min_pe = 1; /* Workaround for CX-4 hardware bug */
} else {
- dev_info(&dev->dev, "Device uses a PSL\n");
- adapter->native->sl_ops = &psl_ops;
+ dev_info(&dev->dev, "Device uses a PSL8\n");
+ adapter->native->sl_ops = &psl8_ops;
}
}
--
2.7.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 5/7] cxl: Rename some psl8 specific functions
2017-02-01 17:30 ` [PATCH 5/7] cxl: Rename some psl8 specific functions Christophe Lombard
@ 2017-03-02 6:55 ` Andrew Donnellan
0 siblings, 0 replies; 17+ messages in thread
From: Andrew Donnellan @ 2017-03-02 6:55 UTC (permalink / raw)
To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie
On 02/02/17 04:30, Christophe Lombard wrote:
> Rename a few functions, changing the '_psl' suffix to '_psl8', to make
> clear that the implementation is psl8 specific.
> Those functions will have an equivalent implementation for the psl9 in
> a later patch.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
This patch needs rebasing; I've done it at:
https://github.com/ajdlinux/linux/commit/ff6837cfae79d51829db824e4b914c9f1e76a9c1
Possibly could be squashed into the previous patch, but imho it's fine
as is.
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
> @@ -873,26 +873,25 @@ struct cxl_irq_info {
> };
>
> void cxl_assign_psn_space(struct cxl_context *ctx);
> -int cxl_invalidate_all_psl(struct cxl *adapter);
> -irqreturn_t cxl_irq_psl(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> -irqreturn_t cxl_fail_irq_psl(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
> +int cxl_invalidate_all_psl8(struct cxl *adapter);
> +irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> +irqreturn_t cxl_fail_irq_psl8(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
> int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
> void *cookie, irq_hw_number_t *dest_hwirq,
> unsigned int *dest_virq, const char *name);
>
> int cxl_check_error(struct cxl_afu *afu);
> int cxl_afu_slbia(struct cxl_afu *afu);
> -int cxl_tlb_slb_invalidate(struct cxl *adapter);
This isn't a rename. :)
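Looks like this one is dropping the now-unused generic helper rather than
renaming it. Since the earlier patch in the series, callers go through the
service layer hook instead, along the lines of:

	if (adapter->native->sl_ops->invalidate_all)
		rc = adapter->native->sl_ops->invalidate_all(adapter);

Might be worth a quick mention in the commit message.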
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com IBM Australia Limited
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 6/7] cxl: Isolate few psl8 specific calls
2017-02-01 17:30 [PATCH 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
` (4 preceding siblings ...)
2017-02-01 17:30 ` [PATCH 5/7] cxl: Rename some psl8 specific functions Christophe Lombard
@ 2017-02-01 17:30 ` Christophe Lombard
2017-03-03 7:06 ` Andrew Donnellan
2017-02-01 17:30 ` [PATCH 7/7] cxl: Add psl9 specific code Christophe Lombard
6 siblings, 1 reply; 17+ messages in thread
From: Christophe Lombard @ 2017-02-01 17:30 UTC (permalink / raw)
To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan
Point out the registers that are specific to the Coherent Accelerator
Interface Architecture, level 1.
Code and functions specific to PSL8 (CAIA1) are now isolated behind
run-time checks.
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
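A note for reviewers: the isolation boils down to two small helpers added
in cxl.h (cxl_is_power8() keys off the CPU PVR, cxl_is_psl8() off the
adapter's CAIA major version), with CAIA1-only code then wrapped in a
check, for example:

	if (cxl_is_psl8(afu))
		spin_lock_init(&ctx->sste_lock); /* segment table is CAIA1 only */

The helpers and all the call sites are in the diff below.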
drivers/misc/cxl/context.c | 28 +++++++++++---------
drivers/misc/cxl/cxl.h | 35 +++++++++++++++++++------
drivers/misc/cxl/debugfs.c | 6 +++--
drivers/misc/cxl/fault.c | 14 +++++-----
drivers/misc/cxl/native.c | 50 ++++++++++++++++++++++--------------
drivers/misc/cxl/pci.c | 64 +++++++++++++++++++++++++++++++---------------
6 files changed, 129 insertions(+), 68 deletions(-)
diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index 89242c1..1835067 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -38,23 +38,26 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
{
int i;
- spin_lock_init(&ctx->sste_lock);
+ if (cxl_is_psl8(afu))
+ spin_lock_init(&ctx->sste_lock);
ctx->afu = afu;
ctx->master = master;
ctx->pid = NULL; /* Set in start work ioctl */
mutex_init(&ctx->mapping_lock);
ctx->mapping = NULL;
- /*
- * Allocate the segment table before we put it in the IDR so that we
- * can always access it when dereferenced from IDR. For the same
- * reason, the segment table is only destroyed after the context is
- * removed from the IDR. Access to this in the IOCTL is protected by
- * Linux filesytem symantics (can't IOCTL until open is complete).
- */
- i = cxl_alloc_sst(ctx);
- if (i)
- return i;
+ if (cxl_is_psl8(afu)) {
+ /*
+ * Allocate the segment table before we put it in the IDR so that we
+ * can always access it when dereferenced from IDR. For the same
+ * reason, the segment table is only destroyed after the context is
+ * removed from the IDR. Access to this in the IOCTL is protected by
+ * Linux filesytem symantics (can't IOCTL until open is complete).
+ */
+ i = cxl_alloc_sst(ctx);
+ if (i)
+ return i;
+ }
INIT_WORK(&ctx->fault_work, cxl_handle_fault);
@@ -305,7 +308,8 @@ static void reclaim_ctx(struct rcu_head *rcu)
{
struct cxl_context *ctx = container_of(rcu, struct cxl_context, rcu);
- free_page((u64)ctx->sstp);
+ if (cxl_is_psl8(ctx->afu))
+ free_page((u64)ctx->sstp);
if (ctx->ff_page)
__free_page(ctx->ff_page);
ctx->sstp = NULL;
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index dbd3fc36..ddc787e 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -73,7 +73,7 @@ static const cxl_p1_reg_t CXL_PSL_Control = {0x0020};
static const cxl_p1_reg_t CXL_PSL_DLCNTL = {0x0060};
static const cxl_p1_reg_t CXL_PSL_DLADDR = {0x0068};
-/* PSL Lookaside Buffer Management Area */
+/* PSL Lookaside Buffer Management Area - CAIA 1 */
static const cxl_p1_reg_t CXL_PSL_LBISEL = {0x0080};
static const cxl_p1_reg_t CXL_PSL_SLBIE = {0x0088};
static const cxl_p1_reg_t CXL_PSL_SLBIA = {0x0090};
@@ -82,7 +82,7 @@ static const cxl_p1_reg_t CXL_PSL_TLBIA = {0x00A8};
static const cxl_p1_reg_t CXL_PSL_AFUSEL = {0x00B0};
/* 0x00C0:7EFF Implementation dependent area */
-/* PSL registers */
+/* PSL registers - CAIA 1 */
static const cxl_p1_reg_t CXL_PSL_FIR1 = {0x0100};
static const cxl_p1_reg_t CXL_PSL_FIR2 = {0x0108};
static const cxl_p1_reg_t CXL_PSL_Timebase = {0x0110};
@@ -109,7 +109,7 @@ static const cxl_p1n_reg_t CXL_PSL_AMBAR_An = {0x10};
static const cxl_p1n_reg_t CXL_PSL_SPOffset_An = {0x18};
static const cxl_p1n_reg_t CXL_PSL_ID_An = {0x20};
static const cxl_p1n_reg_t CXL_PSL_SERR_An = {0x28};
-/* Memory Management and Lookaside Buffer Management */
+/* Memory Management and Lookaside Buffer Management - CAIA 1*/
static const cxl_p1n_reg_t CXL_PSL_SDR_An = {0x30};
static const cxl_p1n_reg_t CXL_PSL_AMOR_An = {0x38};
/* Pointer Area */
@@ -124,6 +124,7 @@ static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An = {0xB8};
/* 0xC0:FF Implementation Dependent Area */
static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An = {0xC0};
static const cxl_p1n_reg_t CXL_AFU_DEBUG_An = {0xC8};
+/* 0xC0:FF Implementation Dependent Area - CAIA 1 */
static const cxl_p1n_reg_t CXL_PSL_APCALLOC_A = {0xD0};
static const cxl_p1n_reg_t CXL_PSL_COALLOC_A = {0xD8};
static const cxl_p1n_reg_t CXL_PSL_RXCTL_A = {0xE0};
@@ -133,12 +134,14 @@ static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE = {0xE8};
/* Configuration and Control Area */
static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
static const cxl_p2n_reg_t CXL_CSRP_An = {0x008};
+/* Configuration and Control Area - CAIA 1 */
static const cxl_p2n_reg_t CXL_AURP0_An = {0x010};
static const cxl_p2n_reg_t CXL_AURP1_An = {0x018};
static const cxl_p2n_reg_t CXL_SSTP0_An = {0x020};
static const cxl_p2n_reg_t CXL_SSTP1_An = {0x028};
+/* Configuration and Control Area - CAIA 1 */
static const cxl_p2n_reg_t CXL_PSL_AMR_An = {0x030};
-/* Segment Lookaside Buffer Management */
+/* Segment Lookaside Buffer Management - CAIA 1 */
static const cxl_p2n_reg_t CXL_SLBIE_An = {0x040};
static const cxl_p2n_reg_t CXL_SLBIA_An = {0x048};
static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
@@ -257,7 +260,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
#define CXL_SSTP1_An_STVA_L_MASK (~((1ull << (63-55))-1))
#define CXL_SSTP1_An_V (1ull << (63-63))
-/****** CXL_PSL_SLBIE_[An] **************************************************/
+/****** CXL_PSL_SLBIE_[An] - CAIA 1 **************************************************/
/* write: */
#define CXL_SLBIE_C PPC_BIT(36) /* Class */
#define CXL_SLBIE_SS PPC_BITMASK(37, 38) /* Segment Size */
@@ -267,10 +270,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
#define CXL_SLBIE_MAX PPC_BITMASK(24, 31)
#define CXL_SLBIE_PENDING PPC_BITMASK(56, 63)
-/****** Common to all CXL_TLBIA/SLBIA_[An] **********************************/
+/****** Common to all CXL_TLBIA/SLBIA_[An] - CAIA 1 **********************************/
#define CXL_TLB_SLB_P (1ull) /* Pending (read) */
-/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers **********************/
+/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers - CAIA 1 **********************/
#define CXL_TLB_SLB_IQ_ALL (0ull) /* Inv qualifier */
#define CXL_TLB_SLB_IQ_LPID (1ull) /* Inv qualifier */
#define CXL_TLB_SLB_IQ_LPIDPID (3ull) /* Inv qualifier */
@@ -278,7 +281,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
/****** CXL_PSL_AFUSEL ******************************************************/
#define CXL_PSL_AFUSEL_A (1ull << (63-55)) /* Adapter wide invalidates affect all AFUs */
-/****** CXL_PSL_DSISR_An ****************************************************/
+/****** CXL_PSL_DSISR_An - CAIA 1 ****************************************************/
#define CXL_PSL_DSISR_An_DS (1ull << (63-0)) /* Segment not found */
#define CXL_PSL_DSISR_An_DM (1ull << (63-1)) /* PTE not found (See also: M) or protection fault */
#define CXL_PSL_DSISR_An_ST (1ull << (63-2)) /* Segment Table PTE not found */
@@ -746,6 +749,22 @@ static inline u64 cxl_p2n_read(struct cxl_afu *afu, cxl_p2n_reg_t reg)
return ~0ULL;
}
+static inline bool cxl_is_power8(void)
+{
+ if ((pvr_version_is(PVR_POWER8E)) ||
+ (pvr_version_is(PVR_POWER8NVL)) ||
+ (pvr_version_is(PVR_POWER8)))
+ return true;
+ return false;
+}
+
+static inline bool cxl_is_psl8(struct cxl_afu *afu)
+{
+ if (afu->adapter->caia_major == 1)
+ return true;
+ return false;
+}
+
ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
loff_t off, size_t count);
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 2ff10a9..43a1a27 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -94,6 +94,9 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
{
+ debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
+ debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
+
debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
debugfs_create_io_x64("afu_debug", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_AFU_DEBUG_An));
@@ -117,8 +120,7 @@ int cxl_debugfs_afu_add(struct cxl_afu *afu)
debugfs_create_io_x64("sr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SR_An));
debugfs_create_io_x64("dsisr", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DSISR_An));
debugfs_create_io_x64("dar", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DAR_An));
- debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
- debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
+
debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
if (afu->adapter->native->sl_ops->debugfs_add_afu_regs)
diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
index ece7ea3..acf8b7a 100644
--- a/drivers/misc/cxl/fault.c
+++ b/drivers/misc/cxl/fault.c
@@ -223,12 +223,14 @@ void cxl_handle_fault(struct work_struct *fault_work)
}
}
- if (dsisr & CXL_PSL_DSISR_An_DS)
- cxl_handle_segment_miss(ctx, mm, dar);
- else if (dsisr & CXL_PSL_DSISR_An_DM)
- cxl_handle_page_fault(ctx, mm, dsisr, dar);
- else
- WARN(1, "cxl_handle_fault has nothing to handle\n");
+ if (cxl_is_psl8(ctx->afu)) {
+ if (dsisr & CXL_PSL_DSISR_An_DS)
+ cxl_handle_segment_miss(ctx, mm, dar);
+ else if (dsisr & CXL_PSL_DSISR_An_DM)
+ cxl_handle_page_fault(ctx, mm, dsisr, dar);
+ else
+ WARN(1, "cxl_handle_fault has nothing to handle\n");
+ }
if (mm)
mmput(mm);
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index 8805d8c..a58a6a2 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -155,15 +155,17 @@ int cxl_psl_purge(struct cxl_afu *afu)
dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
pr_devel_ratelimited("PSL purging... PSL_CNTL: 0x%016llx PSL_DSISR: 0x%016llx\n", PSL_CNTL, dsisr);
- if (dsisr & CXL_PSL_DSISR_TRANS) {
- dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
- dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);
- cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
- } else if (dsisr) {
- dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr);
- cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
- } else {
- cpu_relax();
+ if (cxl_is_psl8(afu)) {
+ if (dsisr & CXL_PSL_DSISR_TRANS) {
+ dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
+ dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);
+ cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
+ } else if (dsisr) {
+ dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr);
+ cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
+ } else {
+ cpu_relax();
+ }
}
PSL_CNTL = cxl_p1n_read(afu, CXL_PSL_SCNTL_An);
}
@@ -465,7 +467,8 @@ static int remove_process_element(struct cxl_context *ctx)
if (!rc)
ctx->pe_inserted = false;
- slb_invalid(ctx);
+ if (cxl_is_power8())
+ slb_invalid(ctx);
pr_devel("%s Remove pe: %i finished\n", __func__, ctx->pe);
mutex_unlock(&ctx->afu->native->spa_mutex);
@@ -498,7 +501,8 @@ static int activate_afu_directed(struct cxl_afu *afu)
attach_spa(afu);
cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_AFU);
- cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
+ if (cxl_is_power8())
+ cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
afu->current_mode = CXL_MODE_DIRECTED;
@@ -871,7 +875,8 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
- info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
+ if (cxl_is_power8())
+ info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
info->proc_handle = 0;
@@ -983,7 +988,8 @@ static void native_irq_wait(struct cxl_context *ctx)
if (ph != ctx->pe)
return;
dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);
- if ((dsisr & CXL_PSL_DSISR_PENDING) == 0)
+ if (cxl_is_psl8(ctx->afu) &&
+ ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
return;
/*
* We are waiting for the workqueue to process our
@@ -1000,21 +1006,26 @@ static void native_irq_wait(struct cxl_context *ctx)
static irqreturn_t native_slice_irq_err(int irq, void *data)
{
struct cxl_afu *afu = data;
- u64 fir_slice, errstat, serr, afu_debug, afu_error, dsisr;
+ u64 errstat, serr, afu_error, dsisr;
/*
* slice err interrupt is only used with full PSL (no XSL)
*/
serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
- fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
- afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
afu_error = cxl_p2n_read(afu, CXL_AFU_ERR_An);
dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
cxl_afu_decode_psl_serr(afu, serr);
- dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice);
+
+ if (cxl_is_power8()) {
+ u64 fir_slice, afu_debug;
+
+ fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
+ afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
+ dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice);
+ dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug);
+ }
dev_crit(&afu->dev, "CXL_PSL_ErrStat_An: 0x%016llx\n", errstat);
- dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug);
dev_crit(&afu->dev, "AFU_ERR_An: 0x%.16llx\n", afu_error);
dev_crit(&afu->dev, "PSL_DSISR_An: 0x%.16llx\n", dsisr);
@@ -1107,7 +1118,8 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
}
serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
- serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
+ if (cxl_is_power8())
+ serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
return 0;
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 68362b1..4913142 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -324,32 +324,33 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
#undef show_reg
}
-#define CAPP_UNIT0_ID 0xBA
-#define CAPP_UNIT1_ID 0XBE
+#define P8_CAPP_UNIT0_ID 0xBA
+#define P8_CAPP_UNIT1_ID 0XBE
static u64 get_capp_unit_id(struct device_node *np)
{
u32 phb_index;
- /*
- * For chips other than POWER8NVL, we only have CAPP 0,
- * irrespective of which PHB is used.
- */
- if (!pvr_version_is(PVR_POWER8NVL))
- return CAPP_UNIT0_ID;
+ if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
+ return 0;
/*
- * For POWER8NVL, assume CAPP 0 is attached to PHB0 and
- * CAPP 1 is attached to PHB1.
+ * POWER 8:
+ * - For chips other than POWER8NVL, we only have CAPP 0,
+ * irrespective of which PHB is used.
+ * - For POWER8NVL, assume CAPP 0 is attached to PHB0 and
+ * CAPP 1 is attached to PHB1.
*/
- if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
- return 0;
+ if (cxl_is_power8()) {
+ if (!pvr_version_is(PVR_POWER8NVL))
+ return P8_CAPP_UNIT0_ID;
- if (phb_index == 0)
- return CAPP_UNIT0_ID;
+ if (phb_index == 0)
+ return P8_CAPP_UNIT0_ID;
- if (phb_index == 1)
- return CAPP_UNIT1_ID;
+ if (phb_index == 1)
+ return P8_CAPP_UNIT1_ID;
+ }
return 0;
}
@@ -968,7 +969,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
}
if (afu->pp_psa && (afu->pp_size < PAGE_SIZE))
- dev_warn(&afu->dev, "AFU uses < PAGE_SIZE per-process PSA!");
+ dev_warn(&afu->dev, "AFU uses pp_size(%#016llx) < PAGE_SIZE per-process PSA!\n", afu->pp_size);
for (i = 0; i < afu->crs_num; i++) {
rc = cxl_ops->afu_cr_read32(afu, i, 0, &val);
@@ -1242,8 +1243,13 @@ int cxl_pci_reset(struct cxl *adapter)
dev_info(&dev->dev, "CXL reset\n");
- /* the adapter is about to be reset, so ignore errors */
- cxl_data_cache_flush(adapter);
+ /*
+ * The adapter is about to be reset, so ignore errors.
+ * Not supported on P9 DD1 but don't forget to enable it
+ * on P9 DD2
+ */
+ if (cxl_is_power8())
+ cxl_data_cache_flush(adapter);
/* pcie_warm_reset requests a fundamental pci reset which includes a
* PERST assert/deassert. PERST triggers a loading of the image
@@ -1373,6 +1379,14 @@ static void cxl_fixup_malformed_tlp(struct cxl *adapter, struct pci_dev *dev)
pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, data);
}
+static bool cxl_compatible_caia_version(struct cxl *adapter)
+{
+ if (cxl_is_power8() && (adapter->caia_major == 1))
+ return true;
+
+ return false;
+}
+
static int cxl_vsec_looks_ok(struct cxl *adapter, struct pci_dev *dev)
{
if (adapter->vsec_status & CXL_STATUS_SECOND_PORT)
@@ -1383,6 +1397,12 @@ static int cxl_vsec_looks_ok(struct cxl *adapter, struct pci_dev *dev)
return -EINVAL;
}
+ if (!cxl_compatible_caia_version(adapter)) {
+ dev_info(&dev->dev, "Ignoring card. PSL type is not supported "
+ "(caia version: %d)\n", adapter->caia_major);
+ return -ENODEV;
+ }
+
if (!adapter->slices) {
/* Once we support dynamic reprogramming we can use the card if
* it supports loadable AFUs */
@@ -1557,8 +1577,10 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
adapter->native->sl_ops = &xsl_ops;
adapter->min_pe = 1; /* Workaround for CX-4 hardware bug */
} else {
- dev_info(&dev->dev, "Device uses a PSL8\n");
- adapter->native->sl_ops = &psl8_ops;
+ if (cxl_is_power8()) {
+ dev_info(&dev->dev, "Device uses a PSL8\n");
+ adapter->native->sl_ops = &psl8_ops;
+ }
}
}
--
2.7.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 6/7] cxl: Isolate few psl8 specific calls
2017-02-01 17:30 ` [PATCH 6/7] cxl: Isolate few psl8 specific calls Christophe Lombard
@ 2017-03-03 7:06 ` Andrew Donnellan
0 siblings, 0 replies; 17+ messages in thread
From: Andrew Donnellan @ 2017-03-03 7:06 UTC (permalink / raw)
To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie
On 02/02/17 04:30, Christophe Lombard wrote:
> Point out the registers that are specific to the Coherent Accelerator
> Interface Architecture, level 1.
> Code and functions specific to PSL8 (CAIA1) are now isolated behind
> run-time checks.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Haven't examined this in enough detail to give my Reviewed-by: just yet,
but it looks fairly sensible.
> ---
> drivers/misc/cxl/context.c | 28 +++++++++++---------
> drivers/misc/cxl/cxl.h | 35 +++++++++++++++++++------
> drivers/misc/cxl/debugfs.c | 6 +++--
> drivers/misc/cxl/fault.c | 14 +++++-----
> drivers/misc/cxl/native.c | 50 ++++++++++++++++++++++--------------
> drivers/misc/cxl/pci.c | 64 +++++++++++++++++++++++++++++++---------------
> 6 files changed, 129 insertions(+), 68 deletions(-)
>
> diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
> index 89242c1..1835067 100644
> --- a/drivers/misc/cxl/context.c
> +++ b/drivers/misc/cxl/context.c
> @@ -38,23 +38,26 @@ int cxl_context_init(struct cxl_context *ctx, struct cxl_afu *afu, bool master)
> {
> int i;
>
> - spin_lock_init(&ctx->sste_lock);
> + if (cxl_is_psl8(afu))
> + spin_lock_init(&ctx->sste_lock);
> ctx->afu = afu;
> ctx->master = master;
> ctx->pid = NULL; /* Set in start work ioctl */
> mutex_init(&ctx->mapping_lock);
> ctx->mapping = NULL;
>
> - /*
> - * Allocate the segment table before we put it in the IDR so that we
> - * can always access it when dereferenced from IDR. For the same
> - * reason, the segment table is only destroyed after the context is
> - * removed from the IDR. Access to this in the IOCTL is protected by
> - * Linux filesytem symantics (can't IOCTL until open is complete).
> - */
> - i = cxl_alloc_sst(ctx);
> - if (i)
> - return i;
> + if (cxl_is_psl8(afu)) {
> + /*
> + * Allocate the segment table before we put it in the IDR so that we
> + * can always access it when dereferenced from IDR. For the same
> + * reason, the segment table is only destroyed after the context is
> + * removed from the IDR. Access to this in the IOCTL is protected by
> + * Linux filesytem symantics (can't IOCTL until open is complete).
> + */
> + i = cxl_alloc_sst(ctx);
> + if (i)
> + return i;
> + }
>
> INIT_WORK(&ctx->fault_work, cxl_handle_fault);
>
> @@ -305,7 +308,8 @@ static void reclaim_ctx(struct rcu_head *rcu)
> {
> struct cxl_context *ctx = container_of(rcu, struct cxl_context, rcu);
>
> - free_page((u64)ctx->sstp);
> + if (cxl_is_psl8(ctx->afu))
> + free_page((u64)ctx->sstp);
> if (ctx->ff_page)
> __free_page(ctx->ff_page);
> ctx->sstp = NULL;
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index dbd3fc36..ddc787e 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -73,7 +73,7 @@ static const cxl_p1_reg_t CXL_PSL_Control = {0x0020};
> static const cxl_p1_reg_t CXL_PSL_DLCNTL = {0x0060};
> static const cxl_p1_reg_t CXL_PSL_DLADDR = {0x0068};
>
> -/* PSL Lookaside Buffer Management Area */
> +/* PSL Lookaside Buffer Management Area - CAIA 1 */
> static const cxl_p1_reg_t CXL_PSL_LBISEL = {0x0080};
> static const cxl_p1_reg_t CXL_PSL_SLBIE = {0x0088};
> static const cxl_p1_reg_t CXL_PSL_SLBIA = {0x0090};
> @@ -82,7 +82,7 @@ static const cxl_p1_reg_t CXL_PSL_TLBIA = {0x00A8};
> static const cxl_p1_reg_t CXL_PSL_AFUSEL = {0x00B0};
>
> /* 0x00C0:7EFF Implementation dependent area */
> -/* PSL registers */
> +/* PSL registers - CAIA 1 */
> static const cxl_p1_reg_t CXL_PSL_FIR1 = {0x0100};
> static const cxl_p1_reg_t CXL_PSL_FIR2 = {0x0108};
> static const cxl_p1_reg_t CXL_PSL_Timebase = {0x0110};
> @@ -109,7 +109,7 @@ static const cxl_p1n_reg_t CXL_PSL_AMBAR_An = {0x10};
> static const cxl_p1n_reg_t CXL_PSL_SPOffset_An = {0x18};
> static const cxl_p1n_reg_t CXL_PSL_ID_An = {0x20};
> static const cxl_p1n_reg_t CXL_PSL_SERR_An = {0x28};
> -/* Memory Management and Lookaside Buffer Management */
> +/* Memory Management and Lookaside Buffer Management - CAIA 1*/
> static const cxl_p1n_reg_t CXL_PSL_SDR_An = {0x30};
> static const cxl_p1n_reg_t CXL_PSL_AMOR_An = {0x38};
> /* Pointer Area */
> @@ -124,6 +124,7 @@ static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An = {0xB8};
> /* 0xC0:FF Implementation Dependent Area */
> static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An = {0xC0};
> static const cxl_p1n_reg_t CXL_AFU_DEBUG_An = {0xC8};
> +/* 0xC0:FF Implementation Dependent Area - CAIA 1 */
> static const cxl_p1n_reg_t CXL_PSL_APCALLOC_A = {0xD0};
> static const cxl_p1n_reg_t CXL_PSL_COALLOC_A = {0xD8};
> static const cxl_p1n_reg_t CXL_PSL_RXCTL_A = {0xE0};
> @@ -133,12 +134,14 @@ static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE = {0xE8};
> /* Configuration and Control Area */
> static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
> static const cxl_p2n_reg_t CXL_CSRP_An = {0x008};
> +/* Configuration and Control Area - CAIA 1 */
> static const cxl_p2n_reg_t CXL_AURP0_An = {0x010};
> static const cxl_p2n_reg_t CXL_AURP1_An = {0x018};
> static const cxl_p2n_reg_t CXL_SSTP0_An = {0x020};
> static const cxl_p2n_reg_t CXL_SSTP1_An = {0x028};
> +/* Configuration and Control Area - CAIA 1 */
> static const cxl_p2n_reg_t CXL_PSL_AMR_An = {0x030};
> -/* Segment Lookaside Buffer Management */
> +/* Segment Lookaside Buffer Management - CAIA 1 */
> static const cxl_p2n_reg_t CXL_SLBIE_An = {0x040};
> static const cxl_p2n_reg_t CXL_SLBIA_An = {0x048};
> static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
> @@ -257,7 +260,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
> #define CXL_SSTP1_An_STVA_L_MASK (~((1ull << (63-55))-1))
> #define CXL_SSTP1_An_V (1ull << (63-63))
>
> -/****** CXL_PSL_SLBIE_[An] **************************************************/
> +/****** CXL_PSL_SLBIE_[An] - CAIA 1 **************************************************/
> /* write: */
> #define CXL_SLBIE_C PPC_BIT(36) /* Class */
> #define CXL_SLBIE_SS PPC_BITMASK(37, 38) /* Segment Size */
> @@ -267,10 +270,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
> #define CXL_SLBIE_MAX PPC_BITMASK(24, 31)
> #define CXL_SLBIE_PENDING PPC_BITMASK(56, 63)
>
> -/****** Common to all CXL_TLBIA/SLBIA_[An] **********************************/
> +/****** Common to all CXL_TLBIA/SLBIA_[An] - CAIA 1 **********************************/
> #define CXL_TLB_SLB_P (1ull) /* Pending (read) */
>
> -/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers **********************/
> +/****** Common to all CXL_TLB/SLB_IA/IE_[An] registers - CAIA 1 **********************/
> #define CXL_TLB_SLB_IQ_ALL (0ull) /* Inv qualifier */
> #define CXL_TLB_SLB_IQ_LPID (1ull) /* Inv qualifier */
> #define CXL_TLB_SLB_IQ_LPIDPID (3ull) /* Inv qualifier */
> @@ -278,7 +281,7 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
> /****** CXL_PSL_AFUSEL ******************************************************/
> #define CXL_PSL_AFUSEL_A (1ull << (63-55)) /* Adapter wide invalidates affect all AFUs */
>
> -/****** CXL_PSL_DSISR_An ****************************************************/
> +/****** CXL_PSL_DSISR_An - CAIA 1 ****************************************************/
> #define CXL_PSL_DSISR_An_DS (1ull << (63-0)) /* Segment not found */
> #define CXL_PSL_DSISR_An_DM (1ull << (63-1)) /* PTE not found (See also: M) or protection fault */
> #define CXL_PSL_DSISR_An_ST (1ull << (63-2)) /* Segment Table PTE not found */
> @@ -746,6 +749,22 @@ static inline u64 cxl_p2n_read(struct cxl_afu *afu, cxl_p2n_reg_t reg)
> return ~0ULL;
> }
>
> +static inline bool cxl_is_power8(void)
> +{
> + if ((pvr_version_is(PVR_POWER8E)) ||
> + (pvr_version_is(PVR_POWER8NVL)) ||
> + (pvr_version_is(PVR_POWER8)))
> + return true;
> + return false;
> +}
> +
> +static inline bool cxl_is_psl8(struct cxl_afu *afu)
> +{
> + if (afu->adapter->caia_major == 1)
> + return true;
> + return false;
> +}
I suppose both of these could be shortened to "return <EXPRESSION>";
personally I'd keep it as is.
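If you did want to shorten them, something like this (same logic, just
collapsed into the return statement) would do:

	static inline bool cxl_is_power8(void)
	{
		return pvr_version_is(PVR_POWER8E) ||
		       pvr_version_is(PVR_POWER8NVL) ||
		       pvr_version_is(PVR_POWER8);
	}

	static inline bool cxl_is_psl8(struct cxl_afu *afu)
	{
		return afu->adapter->caia_major == 1;
	}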
> +
> ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
> loff_t off, size_t count);
>
> diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
> index 2ff10a9..43a1a27 100644
> --- a/drivers/misc/cxl/debugfs.c
> +++ b/drivers/misc/cxl/debugfs.c
> @@ -94,6 +94,9 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
>
> void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
> {
> + debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
> + debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
> +
> debugfs_create_io_x64("fir", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_FIR_SLICE_An));
> debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
> debugfs_create_io_x64("afu_debug", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_AFU_DEBUG_An));
> @@ -117,8 +120,7 @@ int cxl_debugfs_afu_add(struct cxl_afu *afu)
> debugfs_create_io_x64("sr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SR_An));
> debugfs_create_io_x64("dsisr", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DSISR_An));
> debugfs_create_io_x64("dar", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_DAR_An));
> - debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
> - debugfs_create_io_x64("sstp1", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP1_An));
> +
> debugfs_create_io_x64("err_status", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_PSL_ErrStat_An));
>
> if (afu->adapter->native->sl_ops->debugfs_add_afu_regs)
> diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
> index ece7ea3..acf8b7a 100644
> --- a/drivers/misc/cxl/fault.c
> +++ b/drivers/misc/cxl/fault.c
> @@ -223,12 +223,14 @@ void cxl_handle_fault(struct work_struct *fault_work)
> }
> }
>
> - if (dsisr & CXL_PSL_DSISR_An_DS)
> - cxl_handle_segment_miss(ctx, mm, dar);
> - else if (dsisr & CXL_PSL_DSISR_An_DM)
> - cxl_handle_page_fault(ctx, mm, dsisr, dar);
> - else
> - WARN(1, "cxl_handle_fault has nothing to handle\n");
> + if (cxl_is_psl8(ctx->afu)) {
> + if (dsisr & CXL_PSL_DSISR_An_DS)
> + cxl_handle_segment_miss(ctx, mm, dar);
> + else if (dsisr & CXL_PSL_DSISR_An_DM)
> + cxl_handle_page_fault(ctx, mm, dsisr, dar);
> + else
> + WARN(1, "cxl_handle_fault has nothing to handle\n");
> + }
>
> if (mm)
> mmput(mm);
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index 8805d8c..a58a6a2 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -155,15 +155,17 @@ int cxl_psl_purge(struct cxl_afu *afu)
>
> dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
> pr_devel_ratelimited("PSL purging... PSL_CNTL: 0x%016llx PSL_DSISR: 0x%016llx\n", PSL_CNTL, dsisr);
> - if (dsisr & CXL_PSL_DSISR_TRANS) {
> - dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
> - dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);
> - cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
> - } else if (dsisr) {
> - dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr);
> - cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
> - } else {
> - cpu_relax();
> + if (cxl_is_psl8(afu)) {
> + if (dsisr & CXL_PSL_DSISR_TRANS) {
> + dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
> + dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);
> + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
> + } else if (dsisr) {
> + dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr);
> + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
> + } else {
> + cpu_relax();
> + }
Some of the lines here are getting very long.
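One way to keep them in check would be to pull the CAIA1 handling out into
a small helper, roughly along these lines (name made up):

	static void cxl_psl_purge_fault_psl8(struct cxl_afu *afu, u64 dsisr)
	{
		u64 dar;

		if (dsisr & CXL_PSL_DSISR_TRANS) {
			dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
			dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n",
				   dsisr, dar);
			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
		} else if (dsisr) {
			dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n",
				   dsisr);
			cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
		} else {
			cpu_relax();
		}
	}

Not a blocker though.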
> }
> PSL_CNTL = cxl_p1n_read(afu, CXL_PSL_SCNTL_An);
> }
> @@ -465,7 +467,8 @@ static int remove_process_element(struct cxl_context *ctx)
>
> if (!rc)
> ctx->pe_inserted = false;
> - slb_invalid(ctx);
> + if (cxl_is_power8())
> + slb_invalid(ctx);
> pr_devel("%s Remove pe: %i finished\n", __func__, ctx->pe);
> mutex_unlock(&ctx->afu->native->spa_mutex);
>
> @@ -498,7 +501,8 @@ static int activate_afu_directed(struct cxl_afu *afu)
> attach_spa(afu);
>
> cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_AFU);
> - cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
> + if (cxl_is_power8())
> + cxl_p1n_write(afu, CXL_PSL_AMOR_An, 0xFFFFFFFFFFFFFFFFULL);
> cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
>
> afu->current_mode = CXL_MODE_DIRECTED;
> @@ -871,7 +875,8 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
>
> info->dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
> info->dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
> - info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
> + if (cxl_is_power8())
> + info->dsr = cxl_p2n_read(afu, CXL_PSL_DSR_An);
> info->afu_err = cxl_p2n_read(afu, CXL_AFU_ERR_An);
> info->errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
> info->proc_handle = 0;
> @@ -983,7 +988,8 @@ static void native_irq_wait(struct cxl_context *ctx)
> if (ph != ctx->pe)
> return;
> dsisr = cxl_p2n_read(ctx->afu, CXL_PSL_DSISR_An);
> - if ((dsisr & CXL_PSL_DSISR_PENDING) == 0)
> + if (cxl_is_psl8(ctx->afu) &&
> + ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
> return;
> /*
> * We are waiting for the workqueue to process our
> @@ -1000,21 +1006,26 @@ static void native_irq_wait(struct cxl_context *ctx)
> static irqreturn_t native_slice_irq_err(int irq, void *data)
> {
> struct cxl_afu *afu = data;
> - u64 fir_slice, errstat, serr, afu_debug, afu_error, dsisr;
> + u64 errstat, serr, afu_error, dsisr;
>
> /*
> * slice err interrupt is only used with full PSL (no XSL)
> */
> serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
> - fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
> errstat = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
> - afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
> afu_error = cxl_p2n_read(afu, CXL_AFU_ERR_An);
> dsisr = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
> cxl_afu_decode_psl_serr(afu, serr);
> - dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice);
> +
> + if (cxl_is_power8()) {
> + u64 fir_slice, afu_debug;
I think we prefer to declare variables at the start of the function?
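i.e. keep them up with the others:

	u64 errstat, serr, afu_error, dsisr;
	u64 fir_slice, afu_debug; /* only read on POWER8 */

and just do the reads and the dev_crit() calls inside the cxl_is_power8()
block as you have now.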
> +
> + fir_slice = cxl_p1n_read(afu, CXL_PSL_FIR_SLICE_An);
> + afu_debug = cxl_p1n_read(afu, CXL_AFU_DEBUG_An);
> + dev_crit(&afu->dev, "PSL_FIR_SLICE_An: 0x%016llx\n", fir_slice);
> + dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug);
> + }
> dev_crit(&afu->dev, "CXL_PSL_ErrStat_An: 0x%016llx\n", errstat);
> - dev_crit(&afu->dev, "CXL_PSL_AFU_DEBUG_An: 0x%016llx\n", afu_debug);
> dev_crit(&afu->dev, "AFU_ERR_An: 0x%.16llx\n", afu_error);
> dev_crit(&afu->dev, "PSL_DSISR_An: 0x%.16llx\n", dsisr);
>
> @@ -1107,7 +1118,8 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
> }
>
> serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
> - serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
> + if (cxl_is_power8())
> + serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
> cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
>
> return 0;
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index 68362b1..4913142 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -324,32 +324,33 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
> #undef show_reg
> }
>
> -#define CAPP_UNIT0_ID 0xBA
> -#define CAPP_UNIT1_ID 0XBE
> +#define P8_CAPP_UNIT0_ID 0xBA
> +#define P8_CAPP_UNIT1_ID 0XBE
>
> static u64 get_capp_unit_id(struct device_node *np)
> {
> u32 phb_index;
>
> - /*
> - * For chips other than POWER8NVL, we only have CAPP 0,
> - * irrespective of which PHB is used.
> - */
> - if (!pvr_version_is(PVR_POWER8NVL))
> - return CAPP_UNIT0_ID;
> + if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
> + return 0;
>
> /*
> - * For POWER8NVL, assume CAPP 0 is attached to PHB0 and
> - * CAPP 1 is attached to PHB1.
> + * POWER 8:
> + * - For chips other than POWER8NVL, we only have CAPP 0,
> + * irrespective of which PHB is used.
> + * - For POWER8NVL, assume CAPP 0 is attached to PHB0 and
> + * CAPP 1 is attached to PHB1.
> */
> - if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
> - return 0;
> + if (cxl_is_power8()) {
> + if (!pvr_version_is(PVR_POWER8NVL))
> + return P8_CAPP_UNIT0_ID;
>
> - if (phb_index == 0)
> - return CAPP_UNIT0_ID;
> + if (phb_index == 0)
> + return P8_CAPP_UNIT0_ID;
>
> - if (phb_index == 1)
> - return CAPP_UNIT1_ID;
> + if (phb_index == 1)
> + return P8_CAPP_UNIT1_ID;
> + }
>
> return 0;
> }
> @@ -968,7 +969,7 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
> }
>
> if (afu->pp_psa && (afu->pp_size < PAGE_SIZE))
> - dev_warn(&afu->dev, "AFU uses < PAGE_SIZE per-process PSA!");
> + dev_warn(&afu->dev, "AFU uses pp_size(%#016llx) < PAGE_SIZE per-process PSA!\n", afu->pp_size);
>
> for (i = 0; i < afu->crs_num; i++) {
> rc = cxl_ops->afu_cr_read32(afu, i, 0, &val);
> @@ -1242,8 +1243,13 @@ int cxl_pci_reset(struct cxl *adapter)
>
> dev_info(&dev->dev, "CXL reset\n");
>
> - /* the adapter is about to be reset, so ignore errors */
> - cxl_data_cache_flush(adapter);
> + /*
> + * The adapter is about to be reset, so ignore errors.
> + * Not supported on P9 DD1 but don't forget to enable it
> + * on P9 DD2
> + */
> + if (cxl_is_power8())
> + cxl_data_cache_flush(adapter);
>
> /* pcie_warm_reset requests a fundamental pci reset which includes a
> * PERST assert/deassert. PERST triggers a loading of the image
> @@ -1373,6 +1379,14 @@ static void cxl_fixup_malformed_tlp(struct cxl *adapter, struct pci_dev *dev)
> pci_write_config_dword(dev, aer + PCI_ERR_UNCOR_MASK, data);
> }
>
> +static bool cxl_compatible_caia_version(struct cxl *adapter)
> +{
> + if (cxl_is_power8() && (adapter->caia_major == 1))
> + return true;
> +
> + return false;
> +}
> +
> static int cxl_vsec_looks_ok(struct cxl *adapter, struct pci_dev *dev)
> {
> if (adapter->vsec_status & CXL_STATUS_SECOND_PORT)
> @@ -1383,6 +1397,12 @@ static int cxl_vsec_looks_ok(struct cxl *adapter, struct pci_dev *dev)
> return -EINVAL;
> }
>
> + if (!cxl_compatible_caia_version(adapter)) {
> + dev_info(&dev->dev, "Ignoring card. PSL type is not supported "
> + "(caia version: %d)\n", adapter->caia_major);
> + return -ENODEV;
> + }
> +
> if (!adapter->slices) {
> /* Once we support dynamic reprogramming we can use the card if
> * it supports loadable AFUs */
> @@ -1557,8 +1577,10 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
> adapter->native->sl_ops = &xsl_ops;
> adapter->min_pe = 1; /* Workaround for CX-4 hardware bug */
> } else {
> - dev_info(&dev->dev, "Device uses a PSL8\n");
> - adapter->native->sl_ops = &psl8_ops;
> + if (cxl_is_power8()) {
> + dev_info(&dev->dev, "Device uses a PSL8\n");
> + adapter->native->sl_ops = &psl8_ops;
> + }
> }
> }
>
>
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com IBM Australia Limited
^ permalink raw reply [flat|nested] 17+ messages in thread
* [PATCH 7/7] cxl: Add psl9 specific code
2017-02-01 17:30 [PATCH 0/7] cxl: Add support for Coherent Accelerator Interface Architecture 2.0 Christophe Lombard
` (5 preceding siblings ...)
2017-02-01 17:30 ` [PATCH 6/7] cxl: Isolate few psl8 specific calls Christophe Lombard
@ 2017-02-01 17:30 ` Christophe Lombard
2017-03-02 6:55 ` Andrew Donnellan
2017-03-03 7:10 ` Andrew Donnellan
6 siblings, 2 replies; 17+ messages in thread
From: Christophe Lombard @ 2017-02-01 17:30 UTC (permalink / raw)
To: linuxppc-dev, fbarrat, imunsie, andrew.donnellan
The new Coherent Accelerator Interface Architecture, level 2, for the
IBM POWER9 brings new content and features:
- POWER9 Service Layer
- Registers
- Radix mode
- Process element entry
- Dedicated-Shared Process Programming Model
- Translation Fault Handling
- CAPP
- Memory Context ID
If a valid mm_struct is found, the memory context id is used for each
transaction associated with the process handle. The PSL uses the
context ID to find the corresponding process element.
Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
---
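A rough sketch of the context ID handling described above (the exact
plumbing is in the native.c changes; this is just the idea):

	/*
	 * PSL9: the process element carries the memory context id of the
	 * attached mm. The PSL uses that id to find the process element
	 * for each translation request.
	 */
	if (ctx->mm)
		pid = ctx->mm->context.id;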
drivers/misc/cxl/context.c | 13 +++
drivers/misc/cxl/cxl.h | 125 ++++++++++++++++++++++----
drivers/misc/cxl/debugfs.c | 19 ++++
drivers/misc/cxl/fault.c | 48 ++++++----
drivers/misc/cxl/guest.c | 8 +-
drivers/misc/cxl/irq.c | 52 +++++++++++
drivers/misc/cxl/native.c | 213 +++++++++++++++++++++++++++++++++++++++++--
drivers/misc/cxl/pci.c | 219 +++++++++++++++++++++++++++++++++++++++++++--
drivers/misc/cxl/trace.h | 43 +++++++++
9 files changed, 685 insertions(+), 55 deletions(-)
diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index 1835067..c224d15 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -203,6 +203,19 @@ int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
return -EBUSY;
}
+ if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
+ (cxl_is_psl9(ctx->afu))) {
+ /* make sure there is a valid problem state area space for this AFU */
+ if (ctx->master && !ctx->afu->psa) {
+ pr_devel("AFU doesn't support mmio space\n");
+ return -EINVAL;
+ }
+
+ /* Can't mmap until the AFU is enabled */
+ if (!ctx->afu->enabled)
+ return -EBUSY;
+ }
+
pr_devel("%s: mmio physical: %llx pe: %i master:%i\n", __func__,
ctx->psn_phys, ctx->pe , ctx->master);
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index ddc787e..554ae22 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -63,7 +63,7 @@ typedef struct {
/* Memory maps. Ref CXL Appendix A */
/* PSL Privilege 1 Memory Map */
-/* Configuration and Control area */
+/* Configuration and Control area - CAIA 1&2 */
static const cxl_p1_reg_t CXL_PSL_CtxTime = {0x0000};
static const cxl_p1_reg_t CXL_PSL_ErrIVTE = {0x0008};
static const cxl_p1_reg_t CXL_PSL_KEY1 = {0x0010};
@@ -98,11 +98,28 @@ static const cxl_p1_reg_t CXL_XSL_Timebase = {0x0100};
static const cxl_p1_reg_t CXL_XSL_TB_CTLSTAT = {0x0108};
static const cxl_p1_reg_t CXL_XSL_FEC = {0x0158};
static const cxl_p1_reg_t CXL_XSL_DSNCTL = {0x0168};
+/* PSL registers - CAIA 2 */
+static const cxl_p1_reg_t CXL_PSL9_CONTROL = {0x0020};
+static const cxl_p1_reg_t CXL_XSL9_DSNCTL = {0x0168};
+static const cxl_p1_reg_t CXL_PSL9_FIR1 = {0x0300};
+static const cxl_p1_reg_t CXL_PSL9_FIR2 = {0x0308}; /* TBD NML CAIA 2 */
+static const cxl_p1_reg_t CXL_PSL9_Timebase = {0x0310};
+static const cxl_p1_reg_t CXL_PSL9_DEBUG = {0x0320};
+static const cxl_p1_reg_t CXL_PSL9_FIR_CNTL = {0x0348};
+static const cxl_p1_reg_t CXL_PSL9_DSNDCTL = {0x0350};
+static const cxl_p1_reg_t CXL_PSL9_TB_CTLSTAT = {0x0340};
+static const cxl_p1_reg_t CXL_PSL9_TRACECFG = {0x0368};
+static const cxl_p1_reg_t CXL_PSL9_APCDEDALLOC = {0x0378};
+static const cxl_p1_reg_t CXL_PSL9_APCDEDTYPE = {0x0380};
+static const cxl_p1_reg_t CXL_PSL9_TNR_ADDR = {0x0388};
+static const cxl_p1_reg_t CXL_XSL9_IERAT = {0x0588};
+static const cxl_p1_reg_t CXL_XSL9_ILPP = {0x0590};
+
/* 0x7F00:7FFF Reserved PCIe MSI-X Pending Bit Array area */
/* 0x8000:FFFF Reserved PCIe MSI-X Table Area */
/* PSL Slice Privilege 1 Memory Map */
-/* Configuration Area */
+/* Configuration Area - CAIA 1&2 */
static const cxl_p1n_reg_t CXL_PSL_SR_An = {0x00};
static const cxl_p1n_reg_t CXL_PSL_LPID_An = {0x08};
static const cxl_p1n_reg_t CXL_PSL_AMBAR_An = {0x10};
@@ -111,17 +128,18 @@ static const cxl_p1n_reg_t CXL_PSL_ID_An = {0x20};
static const cxl_p1n_reg_t CXL_PSL_SERR_An = {0x28};
/* Memory Management and Lookaside Buffer Management - CAIA 1*/
static const cxl_p1n_reg_t CXL_PSL_SDR_An = {0x30};
+/* Memory Management and Lookaside Buffer Management - CAIA 1&2 */
static const cxl_p1n_reg_t CXL_PSL_AMOR_An = {0x38};
-/* Pointer Area */
+/* Pointer Area - CAIA 1&2 */
static const cxl_p1n_reg_t CXL_HAURP_An = {0x80};
static const cxl_p1n_reg_t CXL_PSL_SPAP_An = {0x88};
static const cxl_p1n_reg_t CXL_PSL_LLCMD_An = {0x90};
-/* Control Area */
+/* Control Area - CAIA 1&2 */
static const cxl_p1n_reg_t CXL_PSL_SCNTL_An = {0xA0};
static const cxl_p1n_reg_t CXL_PSL_CtxTime_An = {0xA8};
static const cxl_p1n_reg_t CXL_PSL_IVTE_Offset_An = {0xB0};
static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An = {0xB8};
-/* 0xC0:FF Implementation Dependent Area */
+/* 0xC0:FF Implementation Dependent Area - CAIA 1&2 */
static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An = {0xC0};
static const cxl_p1n_reg_t CXL_AFU_DEBUG_An = {0xC8};
/* 0xC0:FF Implementation Dependent Area - CAIA 1 */
@@ -131,7 +149,7 @@ static const cxl_p1n_reg_t CXL_PSL_RXCTL_A = {0xE0};
static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE = {0xE8};
/* PSL Slice Privilege 2 Memory Map */
-/* Configuration and Control Area */
+/* Configuration and Control Area - CAIA 1&2 */
static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
static const cxl_p2n_reg_t CXL_CSRP_An = {0x008};
/* Configuration and Control Area - CAIA 1 */
@@ -145,17 +163,17 @@ static const cxl_p2n_reg_t CXL_PSL_AMR_An = {0x030};
static const cxl_p2n_reg_t CXL_SLBIE_An = {0x040};
static const cxl_p2n_reg_t CXL_SLBIA_An = {0x048};
static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
-/* Interrupt Registers */
+/* Interrupt Registers - CAIA 1&2 */
static const cxl_p2n_reg_t CXL_PSL_DSISR_An = {0x060};
static const cxl_p2n_reg_t CXL_PSL_DAR_An = {0x068};
static const cxl_p2n_reg_t CXL_PSL_DSR_An = {0x070};
static const cxl_p2n_reg_t CXL_PSL_TFC_An = {0x078};
static const cxl_p2n_reg_t CXL_PSL_PEHandle_An = {0x080};
-static const cxl_p2n_reg_t CXL_PSL_ErrStat_An = {0x088};
-/* AFU Registers */
+static const cxl_p2n_reg_t CXL_PSL_ErrStat_An = {0x088}; /* TBD NML CAIA 2 */
+/* AFU Registers - CAIA 1&2 */
static const cxl_p2n_reg_t CXL_AFU_Cntl_An = {0x090};
static const cxl_p2n_reg_t CXL_AFU_ERR_An = {0x098};
-/* Work Element Descriptor */
+/* Work Element Descriptor - CAIA 1&2 */
static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
/* 0x0C0:FFF Implementation Dependent Area */
@@ -182,6 +200,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
#define CXL_PSL_SR_An_SF MSR_SF /* 64bit */
#define CXL_PSL_SR_An_TA (1ull << (63-1)) /* Tags active, GA1: 0 */
#define CXL_PSL_SR_An_HV MSR_HV /* Hypervisor, GA1: 0 */
+#define CXL_PSL_SR_An_XLAT_hpt (0ull << (63-6))/* Hashed page table (HPT) mode */
+#define CXL_PSL_SR_An_XLAT_roh (2ull << (63-6))/* Radix on HPT mode */
+#define CXL_PSL_SR_An_XLAT_ror (3ull << (63-6))/* Radix on Radix mode */
+#define CXL_PSL_SR_An_BOT (1ull << (63-10)) /* Use the in-memory segment table */
#define CXL_PSL_SR_An_PR MSR_PR /* Problem state, GA1: 1 */
#define CXL_PSL_SR_An_ISL (1ull << (63-53)) /* Ignore Segment Large Page */
#define CXL_PSL_SR_An_TC (1ull << (63-54)) /* Page Table secondary hash */
@@ -298,12 +320,38 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
#define CXL_PSL_DSISR_An_S DSISR_ISSTORE /* Access was afu_wr or afu_zero */
#define CXL_PSL_DSISR_An_K DSISR_KEYFAULT /* Access not permitted by virtual page class key protection */
+/****** CXL_PSL_DSISR_An - CAIA 2 ****************************************************/
+#define CXL_PSL9_DSISR_An_TF (1ull << (63-3)) /* Translation fault */
+#define CXL_PSL9_DSISR_An_PE (1ull << (63-4)) /* PSL Error (implementation specific) */
+#define CXL_PSL9_DSISR_An_AE (1ull << (63-5)) /* AFU Error */
+#define CXL_PSL9_DSISR_An_OC (1ull << (63-6)) /* OS Context Warning */
+#define CXL_PSL9_DSISR_An_S (1ull << (63-38)) /* TF for a write operation */
+#define CXL_PSL9_DSISR_PENDING (CXL_PSL9_DSISR_An_TF | CXL_PSL9_DSISR_An_PE | CXL_PSL9_DSISR_An_AE | CXL_PSL9_DSISR_An_OC)
+/* NOTE: Bits 56:63 (Checkout Response Status) are valid when DSISR_An[TF] = 1
+ * Status (0:7) Encoding
+ */
+#define CXL_PSL9_DSISR_An_CO_MASK 0x00000000000000ffULL
+#define CXL_PSL9_DSISR_An_SF 0x0000000000000080ULL /* Segment Fault 0b10000000 */
+#define CXL_PSL9_DSISR_An_PF_SLR 0x0000000000000088ULL /* PTE not found (Single Level Radix) 0b10001000 */
+#define CXL_PSL9_DSISR_An_PF_RGC 0x000000000000008CULL /* PTE not found (Radix Guest (child)) 0b10001100 */
+#define CXL_PSL9_DSISR_An_PF_RGP 0x0000000000000090ULL /* PTE not found (Radix Guest (parent)) 0b10010000 */
+#define CXL_PSL9_DSISR_An_PF_HRH 0x0000000000000094ULL /* PTE not found (HPT/Radix Host) 0b10010100 */
+#define CXL_PSL9_DSISR_An_PF_STEG 0x000000000000009CULL /* PTE not found (STEG VA) 0b10011100 */
+
/****** CXL_PSL_TFC_An ******************************************************/
#define CXL_PSL_TFC_An_A (1ull << (63-28)) /* Acknowledge non-translation fault */
#define CXL_PSL_TFC_An_C (1ull << (63-29)) /* Continue (abort transaction) */
#define CXL_PSL_TFC_An_AE (1ull << (63-30)) /* Restart PSL with address error */
#define CXL_PSL_TFC_An_R (1ull << (63-31)) /* Restart PSL transaction */
+/****** CXL_XSL9_INV_ERAT - CAIA 2 **********************************/
+#define CXL_XSL9_IERAT_MLPID (1ull << (63-0)) /* Match LPID */
+#define CXL_XSL9_IERAT_MPID (1ull << (63-1)) /* Match PID */
+#define CXL_XSL9_IERAT_PRS (1ull << (63-4)) /* PRS bit for Radix invalidations */
+#define CXL_XSL9_IERAT_INVR (1ull << (63-3)) /* Invalidate Radix */
+#define CXL_XSL9_IERAT_IALL (1ull << (63-8)) /* Invalidate All */
+#define CXL_XSL9_IERAT_IINPROG (1ull << (63-63)) /* Invalidate in progress */
+
/* cxl_process_element->software_status */
#define CXL_PE_SOFTWARE_STATE_V (1ul << (31 - 0)) /* Valid */
#define CXL_PE_SOFTWARE_STATE_C (1ul << (31 - 29)) /* Complete */
@@ -651,25 +699,38 @@ int cxl_pci_reset(struct cxl *adapter);
void cxl_pci_release_afu(struct device *dev);
ssize_t cxl_pci_read_adapter_vpd(struct cxl *adapter, void *buf, size_t len);
-/* common == phyp + powernv */
+/* common == phyp + powernv - CAIA 1&2 */
struct cxl_process_element_common {
__be32 tid;
__be32 pid;
__be64 csrp;
- __be64 aurp0;
- __be64 aurp1;
- __be64 sstp0;
- __be64 sstp1;
+ union {
+ struct {
+ __be64 aurp0;
+ __be64 aurp1;
+ __be64 sstp0;
+ __be64 sstp1;
+ } psl8; /* CAIA 1 */
+ struct {
+ u8 reserved2[8];
+ u8 reserved3[8];
+ u8 reserved4[8];
+ u8 reserved5[8];
+ } psl9; /* CAIA 2 */
+ } u;
__be64 amr;
- u8 reserved3[4];
+ u8 reserved6[4];
__be64 wed;
} __packed;
-/* just powernv */
+/* just powernv - CAIA 1&2 */
struct cxl_process_element {
__be64 sr;
__be64 SPOffset;
- __be64 sdr;
+ union {
+ __be64 sdr; /* CAIA 1 */
+ u8 reserved1[8]; /* CAIA 2 */
+ } u;
__be64 haurp;
__be32 ctxtime;
__be16 ivte_offsets[4];
@@ -758,6 +819,16 @@ static inline bool cxl_is_power8(void)
return false;
}
+static inline bool cxl_is_power9(void)
+{
+ /* intermediate solution */
+ if (!cxl_is_power8() &&
+ (cpu_has_feature(CPU_FTRS_POWER9) ||
+ cpu_has_feature(CPU_FTR_POWER9_DD1)))
+ return true;
+ return false;
+}
+
static inline bool cxl_is_psl8(struct cxl_afu *afu)
{
if (afu->adapter->caia_major == 1)
@@ -765,6 +836,13 @@ static inline bool cxl_is_psl8(struct cxl_afu *afu)
return false;
}
+static inline bool cxl_is_psl9(struct cxl_afu *afu)
+{
+ if (afu->adapter->caia_major == 2)
+ return true;
+ return false;
+}
+
ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
loff_t off, size_t count);
@@ -829,9 +907,13 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
void afu_release_irqs(struct cxl_context *ctx, void *cookie);
void afu_irq_name_free(struct cxl_context *ctx);
+int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu);
int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu);
+int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
+void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx);
void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx);
int cxl_debugfs_init(void);
@@ -892,8 +974,11 @@ struct cxl_irq_info {
};
void cxl_assign_psn_space(struct cxl_context *ctx);
+int cxl_invalidate_all_psl9(struct cxl *adapter);
int cxl_invalidate_all_psl8(struct cxl *adapter);
+irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
+irqreturn_t cxl_fail_irq_psl9(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
irqreturn_t cxl_fail_irq_psl8(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
void *cookie, irq_hw_number_t *dest_hwirq,
@@ -905,11 +990,15 @@ int cxl_data_cache_flush(struct cxl *adapter);
int cxl_afu_disable(struct cxl_afu *afu);
int cxl_psl_purge(struct cxl_afu *afu);
+void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir);
void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir);
void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
+void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir);
void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir);
+void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx);
void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx);
void cxl_native_err_irq_dump_regs(struct cxl *adapter);
+void cxl_stop_trace_psl9(struct cxl *cxl);
void cxl_stop_trace_psl8(struct cxl *cxl);
int cxl_pci_vphb_add(struct cxl_afu *afu);
void cxl_pci_vphb_remove(struct cxl_afu *afu);
diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
index 43a1a27..eae9d74 100644
--- a/drivers/misc/cxl/debugfs.c
+++ b/drivers/misc/cxl/debugfs.c
@@ -15,6 +15,12 @@
static struct dentry *cxl_debugfs;
+void cxl_stop_trace_psl9(struct cxl *adapter)
+{
+ /* Stop the trace */
+ cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x4480000000000000ULL);
+}
+
void cxl_stop_trace_psl8(struct cxl *adapter)
{
int slice;
@@ -53,6 +59,14 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
(void __force *)value, &fops_io_x64);
}
+void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir)
+{
+ debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR1));
+ debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR2));
+ debugfs_create_io_x64("fir_cntl", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR_CNTL));
+ debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_TRACECFG));
+}
+
void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir)
{
debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
@@ -92,6 +106,11 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
debugfs_remove_recursive(adapter->debugfs);
}
+void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir)
+{
+ debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
+}
+
void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
{
debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
index acf8b7a..d615f89 100644
--- a/drivers/misc/cxl/fault.c
+++ b/drivers/misc/cxl/fault.c
@@ -145,25 +145,26 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
return cxl_ack_ae(ctx);
}
- /*
- * update_mmu_cache() will not have loaded the hash since current->trap
- * is not a 0x400 or 0x300, so just call hash_page_mm() here.
- */
- access = _PAGE_PRESENT | _PAGE_READ;
- if (dsisr & CXL_PSL_DSISR_An_S)
- access |= _PAGE_WRITE;
-
- access |= _PAGE_PRIVILEGED;
- if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
- access &= ~_PAGE_PRIVILEGED;
-
- if (dsisr & DSISR_NOHPTE)
- inv_flags |= HPTE_NOHPTE_UPDATE;
-
- local_irq_save(flags);
- hash_page_mm(mm, dar, access, 0x300, inv_flags);
- local_irq_restore(flags);
-
+ if (!radix_enabled()) {
+ /*
+ * update_mmu_cache() will not have loaded the hash since current->trap
+ * is not a 0x400 or 0x300, so just call hash_page_mm() here.
+ */
+ access = _PAGE_PRESENT | _PAGE_READ;
+ if (dsisr & CXL_PSL_DSISR_An_S)
+ access |= _PAGE_WRITE;
+
+ access |= _PAGE_PRIVILEGED;
+ if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
+ access &= ~_PAGE_PRIVILEGED;
+
+ if (dsisr & DSISR_NOHPTE)
+ inv_flags |= HPTE_NOHPTE_UPDATE;
+
+ local_irq_save(flags);
+ hash_page_mm(mm, dar, access, 0x300, inv_flags);
+ local_irq_restore(flags);
+ }
pr_devel("Page fault successfully handled for pe: %i!\n", ctx->pe);
cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_R, 0);
}
@@ -231,6 +232,15 @@ void cxl_handle_fault(struct work_struct *fault_work)
else
WARN(1, "cxl_handle_fault has nothing to handle\n");
}
+ if (cxl_is_psl9(ctx->afu)) {
+ if ((dsisr & CXL_PSL9_DSISR_An_CO_MASK) &
+ (CXL_PSL9_DSISR_An_PF_SLR | CXL_PSL9_DSISR_An_PF_RGC |
+ CXL_PSL9_DSISR_An_PF_RGP | CXL_PSL9_DSISR_An_PF_HRH |
+ CXL_PSL9_DSISR_An_PF_STEG))
+ cxl_handle_page_fault(ctx, mm, dsisr, dar);
+ else
+ WARN(1, "cxl_handle_fault has nothing to handle\n");
+ }
if (mm)
mmput(mm);
diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
index 3ad7381..f58b4b6c 100644
--- a/drivers/misc/cxl/guest.c
+++ b/drivers/misc/cxl/guest.c
@@ -551,13 +551,13 @@ static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
elem->common.tid = cpu_to_be32(0); /* Unused */
elem->common.pid = cpu_to_be32(pid);
elem->common.csrp = cpu_to_be64(0); /* disable */
- elem->common.aurp0 = cpu_to_be64(0); /* disable */
- elem->common.aurp1 = cpu_to_be64(0); /* disable */
+ elem->common.u.psl8.aurp0 = cpu_to_be64(0); /* disable */
+ elem->common.u.psl8.aurp1 = cpu_to_be64(0); /* disable */
cxl_prefault(ctx, wed);
- elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
- elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
+ elem->common.u.psl8.sstp0 = cpu_to_be64(ctx->sstp0);
+ elem->common.u.psl8.sstp1 = cpu_to_be64(ctx->sstp1);
/*
* Ensure we have at least one interrupt allocated to take faults for
diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
index fa9f8a2..7074e7d 100644
--- a/drivers/misc/cxl/irq.c
+++ b/drivers/misc/cxl/irq.c
@@ -34,6 +34,58 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
return IRQ_HANDLED;
}
+irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
+{
+ u64 dsisr, dar;
+
+ dsisr = irq_info->dsisr;
+ dar = irq_info->dar;
+
+ trace_cxl_psl9_irq(ctx, irq, dsisr, dar);
+
+ pr_devel("CXL interrupt %i for afu pe: %i DSISR: %#llx DAR: %#llx\n", irq, ctx->pe, dsisr, dar);
+
+ if (dsisr & CXL_PSL9_DSISR_An_TF) {
+ pr_devel("Scheduling translation fault handling for later pe: %i\n", ctx->pe);
+ return schedule_cxl_fault(ctx, dsisr, dar);
+ }
+
+ if (dsisr & CXL_PSL9_DSISR_An_PE)
+ return cxl_ops->handle_psl_slice_error(ctx, dsisr,
+ irq_info->errstat);
+ if (dsisr & CXL_PSL9_DSISR_An_AE) {
+ pr_devel("CXL interrupt: AFU Error 0x%016llx\n", irq_info->afu_err);
+
+ if (ctx->pending_afu_err) {
+ /*
+ * This shouldn't happen - the PSL treats these errors
+ * as fatal and will have reset the AFU, so there's not
+ * much point buffering multiple AFU errors.
+ * OTOH if we DO ever see a storm of these come in it's
+ * probably best that we log them somewhere:
+ */
+ dev_err_ratelimited(&ctx->afu->dev, "CXL AFU Error "
+ "undelivered to pe %i: 0x%016llx\n",
+ ctx->pe, irq_info->afu_err);
+ } else {
+ spin_lock(&ctx->lock);
+ ctx->afu_err = irq_info->afu_err;
+ ctx->pending_afu_err = 1;
+ spin_unlock(&ctx->lock);
+
+ wake_up_all(&ctx->wq);
+ }
+
+ cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_A, 0);
+ return IRQ_HANDLED;
+ }
+ if (dsisr & CXL_PSL9_DSISR_An_OC)
+ pr_devel("CXL interrupt: OS Context Warning\n");
+
+ WARN(1, "Unhandled CXL PSL IRQ\n");
+ return IRQ_HANDLED;
+}
+
irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
{
u64 dsisr, dar;
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index a58a6a2..ed116d1 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -167,6 +167,18 @@ int cxl_psl_purge(struct cxl_afu *afu)
cpu_relax();
}
}
+ if (cxl_is_psl9(afu)) {
+ if (dsisr & CXL_PSL9_DSISR_An_TF) {
+ dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
+ dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);
+ cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
+ } else if (dsisr) {
+ dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr);
+ cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
+ } else {
+ cpu_relax();
+ }
+ }
PSL_CNTL = cxl_p1n_read(afu, CXL_PSL_SCNTL_An);
}
end = local_clock();
@@ -259,6 +271,36 @@ void cxl_release_spa(struct cxl_afu *afu)
}
}
+int cxl_invalidate_all_psl9(struct cxl *adapter)
+{
+ unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
+ u64 ierat;
+
+ /* do not invalidate ERAT entries when not reloading on PERST */
+ if (adapter->perst_loads_image)
+ return 0;
+
+ pr_devel("CXL adapter - invalidation of all ERAT entries\n");
+
+ /* Invalidates all ERAT entries for Radix or HPT */
+ ierat = CXL_XSL9_IERAT_IALL;
+ if (radix_enabled())
+ ierat |= CXL_XSL9_IERAT_INVR;
+ cxl_p1_write(adapter, CXL_XSL9_IERAT, ierat);
+
+ while (cxl_p1_read(adapter, CXL_XSL9_IERAT) & CXL_XSL9_IERAT_IINPROG) {
+ if (time_after_eq(jiffies, timeout)) {
+ dev_warn(&adapter->dev,
+ "WARNING: CXL adapter invalidation of all ERAT entries timed out!\n");
+ return -EBUSY;
+ }
+ if (!cxl_ops->link_ok(adapter, NULL))
+ return -EIO;
+ cpu_relax();
+ }
+ return 0;
+}
+
int cxl_invalidate_all_psl8(struct cxl *adapter)
{
unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
@@ -545,10 +587,19 @@ static u64 calculate_sr(struct cxl_context *ctx)
sr |= (mfmsr() & MSR_SF) | CXL_PSL_SR_An_HV;
} else {
sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
- sr &= ~(CXL_PSL_SR_An_HV);
+ if (radix_enabled())
+ sr |= CXL_PSL_SR_An_HV;
+ else
+ sr &= ~(CXL_PSL_SR_An_HV);
if (!test_tsk_thread_flag(current, TIF_32BIT))
sr |= CXL_PSL_SR_An_SF;
}
+ if (cxl_is_psl9(ctx->afu)) {
+ if (radix_enabled())
+ sr |= CXL_PSL_SR_An_XLAT_ror;
+ else
+ sr |= CXL_PSL_SR_An_XLAT_hpt;
+ }
return sr;
}
@@ -581,6 +632,70 @@ static void update_ivtes_directed(struct cxl_context *ctx)
WARN_ON(add_process_element(ctx));
}
+static int process_element_entry(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+ u32 pid;
+
+ cxl_assign_psn_space(ctx);
+
+ ctx->elem->ctxtime = 0; /* disable */
+ ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
+ ctx->elem->haurp = 0; /* disable */
+
+ if (ctx->kernel)
+ pid = 0;
+ else {
+ if (ctx->mm == NULL) {
+ pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
+ __func__, ctx->pe, pid_nr(ctx->pid));
+ return -EINVAL;
+ }
+ pid = ctx->mm->context.id;
+ }
+
+ ctx->elem->common.tid = 0;
+ ctx->elem->common.pid = cpu_to_be32(pid);
+
+ ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
+
+ ctx->elem->common.csrp = 0; /* disable */
+
+ cxl_prefault(ctx, wed);
+
+ /*
+ * Ensure we have the multiplexed PSL interrupt set up to take faults
+ * for kernel contexts that may not have allocated any AFU IRQs at all:
+ */
+ if (ctx->irqs.range[0] == 0) {
+ ctx->irqs.offset[0] = ctx->afu->native->psl_hwirq;
+ ctx->irqs.range[0] = 1;
+ }
+
+ ctx->elem->common.amr = cpu_to_be64(amr);
+ ctx->elem->common.wed = cpu_to_be64(wed);
+
+ return 0;
+}
+
+int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+ int result;
+
+ /* fill the process element entry */
+ result = process_element_entry(ctx, wed, amr);
+ if (result)
+ return result;
+
+ update_ivtes_directed(ctx);
+
+ /* first guy needs to enable */
+ result = cxl_ops->afu_check_and_enable(ctx->afu);
+ if (result)
+ return result;
+
+ return add_process_element(ctx);
+}
+
int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
{
u32 pid;
@@ -591,7 +706,7 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
ctx->elem->ctxtime = 0; /* disable */
ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
ctx->elem->haurp = 0; /* disable */
- ctx->elem->sdr = cpu_to_be64(mfspr(SPRN_SDR1));
+ ctx->elem->u.sdr = cpu_to_be64(mfspr(SPRN_SDR1));
pid = current->pid;
if (ctx->kernel)
@@ -602,13 +717,13 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
ctx->elem->common.csrp = 0; /* disable */
- ctx->elem->common.aurp0 = 0; /* disable */
- ctx->elem->common.aurp1 = 0; /* disable */
+ ctx->elem->common.u.psl8.aurp0 = 0; /* disable */
+ ctx->elem->common.u.psl8.aurp1 = 0; /* disable */
cxl_prefault(ctx, wed);
- ctx->elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
- ctx->elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
+ ctx->elem->common.u.psl8.sstp0 = cpu_to_be64(ctx->sstp0);
+ ctx->elem->common.u.psl8.sstp1 = cpu_to_be64(ctx->sstp1);
/*
* Ensure we have the multiplexed PSL interrupt set up to take faults
@@ -674,6 +789,32 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
return 0;
}
+int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu)
+{
+ dev_info(&afu->dev, "Activating dedicated process mode\n");
+
+ /* If XSL is set to dedicated mode (Set in PSL_SCNTL reg), the
+ * XSL and AFU are programmed to work with a single context.
+ * The context information should be configured in the SPA area
+ * index 0 (so PSL_SPAP must be configured before enabling the
+ * AFU).
+ */
+ afu->num_procs = 1;
+ if (afu->native->spa == NULL) {
+ if (cxl_alloc_spa(afu))
+ return -ENOMEM;
+ }
+ attach_spa(afu);
+
+ cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_Process);
+ cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
+
+ afu->current_mode = CXL_MODE_DEDICATED;
+ afu->num_procs = 1;
+
+ return cxl_chardev_d_afu_add(afu);
+}
+
int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
{
dev_info(&afu->dev, "Activating dedicated process mode\n");
@@ -697,6 +838,16 @@ int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
return cxl_chardev_d_afu_add(afu);
}
+void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx)
+{
+ int r;
+
+ for (r = 0; r < CXL_IRQ_RANGES; r++) {
+ ctx->elem->ivte_offsets[r] = cpu_to_be16(ctx->irqs.offset[r]);
+ ctx->elem->ivte_ranges[r] = cpu_to_be16(ctx->irqs.range[r]);
+ }
+}
+
void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
{
struct cxl_afu *afu = ctx->afu;
@@ -713,6 +864,26 @@ void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
((u64)ctx->irqs.range[3] & 0xffff));
}
+int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
+{
+ struct cxl_afu *afu = ctx->afu;
+ int result;
+
+ /* fill the process element entry */
+ result = process_element_entry(ctx, wed, amr);
+ if (result)
+ return result;
+
+ if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
+ afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
+
+ result = cxl_ops->afu_reset(afu);
+ if (result)
+ return result;
+
+ return afu_enable(afu);
+}
+
int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
{
struct cxl_afu *afu = ctx->afu;
@@ -884,6 +1055,21 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
return 0;
}
+void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx)
+{
+ u64 fir1, fir2, serr;
+
+ fir1 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR1);
+ fir2 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR2);
+
+ dev_crit(&ctx->afu->dev, "PSL_FIR1: 0x%016llx\n", fir1);
+ dev_crit(&ctx->afu->dev, "PSL_FIR2: 0x%016llx\n", fir2);
+ if (ctx->afu->adapter->native->sl_ops->register_serr_irq) {
+ serr = cxl_p1n_read(ctx->afu, CXL_PSL_SERR_An);
+ cxl_afu_decode_psl_serr(ctx->afu, serr);
+ }
+}
+
void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx)
{
u64 fir1, fir2, fir_slice, serr, afu_debug;
@@ -920,6 +1106,16 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
return cxl_ops->ack_irq(ctx, 0, errstat);
}
+irqreturn_t cxl_fail_irq_psl9(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
+{
+ if (irq_info->dsisr & CXL_PSL9_DSISR_An_TF)
+ cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
+ else
+ cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
+
+ return IRQ_HANDLED;
+}
+
irqreturn_t cxl_fail_irq_psl8(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
{
if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
@@ -991,6 +1187,9 @@ static void native_irq_wait(struct cxl_context *ctx)
if (cxl_is_psl8(ctx->afu) &&
((dsisr & CXL_PSL_DSISR_PENDING) == 0))
return;
+ if (cxl_is_psl9(ctx->afu) &&
+ ((dsisr & CXL_PSL9_DSISR_PENDING) == 0))
+ return;
/*
* We are waiting for the workqueue to process our
* irq, so need to let that run here.
@@ -1120,6 +1319,8 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
if (cxl_is_power8())
serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
+ if (cxl_is_power9())
+ serr = (serr & ~0x0000000075010000ULL) | (afu->serr_hwirq & 0xffff);
cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
return 0;
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 4913142..5e7f2db 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -60,7 +60,7 @@
#define CXL_VSEC_PROTOCOL_MASK 0xe0
#define CXL_VSEC_PROTOCOL_1024TB 0x80
#define CXL_VSEC_PROTOCOL_512TB 0x40
-#define CXL_VSEC_PROTOCOL_256TB 0x20 /* Power 8 uses this */
+#define CXL_VSEC_PROTOCOL_256TB 0x20 /* Power 8/9 uses this */
#define CXL_VSEC_PROTOCOL_ENABLE 0x01
#define CXL_READ_VSEC_PSL_REVISION(dev, vsec, dest) \
@@ -326,14 +326,20 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
#define P8_CAPP_UNIT0_ID 0xBA
#define P8_CAPP_UNIT1_ID 0XBE
+#define P9_CAPP_UNIT0_ID 0xC0
+#define P9_CAPP_UNIT1_ID 0xE0
-static u64 get_capp_unit_id(struct device_node *np)
+static u32 get_phb_index(struct device_node *np)
{
u32 phb_index;
if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
- return 0;
+ return -ENODEV;
+ return phb_index;
+}
+static u64 get_capp_unit_id(struct device_node *np, u32 phb_index)
+{
/*
* POWER 8:
* - For chips other than POWER8NVL, we only have CAPP 0,
@@ -352,10 +358,25 @@ static u64 get_capp_unit_id(struct device_node *np)
return P8_CAPP_UNIT1_ID;
}
+ /*
+ * POWER 9:
+ * PEC0 (PHB0). Capp ID = CAPP0 (0b1100_0000)
+ * PEC1 (PHB1 - PHB2). No capi mode
+ * PEC2 (PHB3 - PHB4 - PHB5): Capi mode on PHB3 only. Capp ID = CAPP1 (0b1110_0000)
+ */
+ if (cxl_is_power9()) {
+ if (phb_index == 0)
+ return P9_CAPP_UNIT0_ID;
+
+ if (phb_index == 3)
+ return P9_CAPP_UNIT1_ID;
+ }
+
return 0;
}
-static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id)
+static int calc_capp_routing(struct pci_dev *dev, u64 *chipid,
+ u32 *phb_index, u64 *capp_unit_id)
{
struct device_node *np;
const __be32 *prop;
@@ -367,8 +388,16 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
np = of_get_next_parent(np);
if (!np)
return -ENODEV;
+
*chipid = be32_to_cpup(prop);
- *capp_unit_id = get_capp_unit_id(np);
+
+ *phb_index = get_phb_index(np);
+ if (*phb_index == -ENODEV) {
+ pr_err("cxl: invalid phb index\n");
+ return -ENODEV;
+ }
+
+ *capp_unit_id = get_capp_unit_id(np, *phb_index);
of_node_put(np);
if (!*capp_unit_id) {
pr_err("cxl: invalid capp unit id\n");
@@ -378,14 +407,90 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
return 0;
}
+static int init_implementation_adapter_regs_psl9(struct cxl *adapter, struct pci_dev *dev)
+{
+ u64 xsl_dsnctl, psl_fircntl;
+ u64 chipid;
+ u32 phb_index;
+ u64 capp_unit_id;
+ int rc;
+
+ rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
+ if (rc)
+ return rc;
+
+ /* CAPI Identifier bits [0:7]
+ * bit 61:60 MSI bits --> 0
+ * bit 59 TVT selector --> 0
+ */
+ /* Tell XSL where to route data to.
+ * The field chipid should match the PHB CAPI_CMPM register
+ */
+ xsl_dsnctl = ((u64)0x2 << (63-7)); /* Bit 57 */
+ xsl_dsnctl |= (capp_unit_id << (63-15));
+
+ /* nMMU_ID=0x0B0 */
+ xsl_dsnctl |= ((u64)0x0B0 << (63-28));
+
+ /* Used to identify CAPI packets which should be sorted into
+ * the Non-Blocking queues by the PHB. This field should match
+ * the PHB PBL_NBW_CMPM register
+ */
+ /* nbwind=0x03, bits [57:58], must include capi indicator */
+ xsl_dsnctl |= ((u64)0x03 << (63-47));
+
+ /* Upper 16b address bits of ASB_Notify messages sent to the
+ * system. Need to match the PHB’s ASN Compare/Mask Register.
+ */
+ xsl_dsnctl |= ((u64)0x04 << (63-55));
+
+ cxl_p1_write(adapter, CXL_XSL9_DSNCTL, xsl_dsnctl);
+
+ /* set fir_cntl to recommended value for production env */
+ psl_fircntl = (0x2ULL << (63-3)); /* ce_report */
+ psl_fircntl |= (0x1ULL << (63-6)); /* FIR_report */
+ psl_fircntl |= 0x1ULL; /* ce_thresh */
+ cxl_p1_write(adapter, CXL_PSL9_FIR_CNTL, psl_fircntl);
+
+ /* vccredits=0x1 pcklat=0x4 */
+ cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0000000000001810);
+
+ /* for debugging with trace arrays.
+ * Configure RX trace 0 to use global trigger. Rising edge
+ * trigger located at start of data. Use data bus 2.
+ */
+ cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0xC480000000000000ULL);
+
+ /* A response to an ASB_Notify request is returned by the
+ * system as an MMIO write to the address defined in
+ * the PSL_TNR_ADDR register
+ */
+ /* PSL_TNR_ADDR */
+
+ /* NORST */
+ cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x4000000000000000);
+
+ /* allocate the apc machines. For PHB0, let us keep the APC
+ * allocation setup as it is. For PHB3, we need to disable Rd
+ * machines
+ */
+ if (phb_index == 3) {
+ cxl_p1_write(adapter, CXL_PSL9_APCDEDALLOC, 0x8000808200000000);
+ cxl_p1_write(adapter, CXL_PSL9_APCDEDTYPE, 0x7F7FFFFFFFFF0000);
+ }
+
+ return 0;
+}
+
static int init_implementation_adapter_regs_psl8(struct cxl *adapter, struct pci_dev *dev)
{
u64 psl_dsnctl, psl_fircntl;
u64 chipid;
+ u32 phb_index;
u64 capp_unit_id;
int rc;
- rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
+ rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
if (rc)
return rc;
@@ -414,10 +519,11 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
{
u64 xsl_dsnctl;
u64 chipid;
+ u32 phb_index;
u64 capp_unit_id;
int rc;
- rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
+ rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
if (rc)
return rc;
@@ -435,6 +541,12 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
/* For the PSL this is a multiple for 0 < n <= 7: */
#define PSL_2048_250MHZ_CYCLES 1
+static void write_timebase_ctrl_psl9(struct cxl *adapter)
+{
+ cxl_p1_write(adapter, CXL_PSL9_TB_CTLSTAT,
+ TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
+}
+
static void write_timebase_ctrl_psl8(struct cxl *adapter)
{
cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
@@ -456,6 +568,11 @@ static void write_timebase_ctrl_xsl(struct cxl *adapter)
TBSYNC_CNT(XSL_4000_CLOCKS));
}
+static u64 timebase_read_psl9(struct cxl *adapter)
+{
+ return cxl_p1_read(adapter, CXL_PSL9_Timebase);
+}
+
static u64 timebase_read_psl8(struct cxl *adapter)
{
return cxl_p1_read(adapter, CXL_PSL_Timebase);
@@ -514,6 +631,11 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
return;
}
+static int init_implementation_afu_regs_psl9(struct cxl_afu *afu)
+{
+ return 0;
+}
+
static int init_implementation_afu_regs_psl8(struct cxl_afu *afu)
{
/* read/write masks for this slice */
@@ -612,7 +734,7 @@ static int setup_cxl_bars(struct pci_dev *dev)
/*
* BAR 4/5 has a special meaning for CXL and must be programmed with a
* special value corresponding to the CXL protocol address range.
- * For POWER 8 that means bits 48:49 must be set to 10
+ * For POWER 8/9 that means bits 48:49 must be set to 10
*/
pci_write_config_dword(dev, PCI_BASE_ADDRESS_4, 0x00000000);
pci_write_config_dword(dev, PCI_BASE_ADDRESS_5, 0x00020000);
@@ -997,6 +1119,52 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
return 0;
}
+static int sanitise_afu_regs_psl9(struct cxl_afu *afu)
+{
+ u64 reg;
+
+ /*
+ * Clear out any regs that contain either an IVTE or address or may be
+ * waiting on an acknowledgment to try to be a bit safer as we bring
+ * it online
+ */
+ reg = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
+ if ((reg & CXL_AFU_Cntl_An_ES_MASK) != CXL_AFU_Cntl_An_ES_Disabled) {
+ dev_warn(&afu->dev, "WARNING: AFU was not disabled: %#016llx\n", reg);
+ if (cxl_ops->afu_reset(afu))
+ return -EIO;
+ if (cxl_afu_disable(afu))
+ return -EIO;
+ if (cxl_psl_purge(afu))
+ return -EIO;
+ }
+ cxl_p1n_write(afu, CXL_PSL_SPAP_An, 0x0000000000000000);
+ cxl_p1n_write(afu, CXL_PSL_AMBAR_An, 0x0000000000000000);
+ reg = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
+ if (reg) {
+ dev_warn(&afu->dev, "AFU had pending DSISR: %#016llx\n", reg);
+ if (reg & CXL_PSL9_DSISR_An_TF)
+ cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
+ else
+ cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
+ }
+ if (afu->adapter->native->sl_ops->register_serr_irq) {
+ reg = cxl_p1n_read(afu, CXL_PSL_SERR_An);
+ if (reg) {
+ if (reg & ~0x000000007501ffff)
+ dev_warn(&afu->dev, "AFU had pending SERR: %#016llx\n", reg);
+ cxl_p1n_write(afu, CXL_PSL_SERR_An, reg & ~0xffff);
+ }
+ }
+ reg = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
+ if (reg) {
+ dev_warn(&afu->dev, "AFU had pending error status: %#016llx\n", reg);
+ cxl_p2n_write(afu, CXL_PSL_ErrStat_An, reg);
+ }
+
+ return 0;
+}
+
static int sanitise_afu_regs_psl8(struct cxl_afu *afu)
{
u64 reg;
@@ -1384,6 +1552,9 @@ static bool cxl_compatible_caia_version(struct cxl *adapter)
if (cxl_is_power8() && (adapter->caia_major == 1))
return true;
+ if (cxl_is_power9() && (adapter->caia_major == 2))
+ return true;
+
return false;
}
@@ -1537,6 +1708,30 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
pci_disable_device(pdev);
}
+static const struct cxl_service_layer_ops psl9_ops = {
+ .adapter_regs_init = init_implementation_adapter_regs_psl9,
+ .invalidate_all = cxl_invalidate_all_psl9,
+ .afu_regs_init = init_implementation_afu_regs_psl9,
+ .sanitise_afu_regs = sanitise_afu_regs_psl9,
+ .register_serr_irq = cxl_native_register_serr_irq,
+ .release_serr_irq = cxl_native_release_serr_irq,
+ .handle_interrupt = cxl_irq_psl9,
+ .fail_irq = cxl_fail_irq_psl9,
+ .activate_dedicated_process = cxl_activate_dedicated_process_psl9,
+ .attach_afu_directed = cxl_attach_afu_directed_psl9,
+ .attach_dedicated_process = cxl_attach_dedicated_process_psl9,
+ .update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl9,
+ .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl9,
+ .debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl9,
+ .psl_irq_dump_registers = cxl_native_irq_dump_regs_psl9,
+ .err_irq_dump_registers = cxl_native_err_irq_dump_regs,
+ .debugfs_stop_trace = cxl_stop_trace_psl9,
+ .write_timebase_ctrl = write_timebase_ctrl_psl9,
+ .timebase_read = timebase_read_psl9,
+ .capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
+ .needs_reset_before_disable = true,
+};
+
static const struct cxl_service_layer_ops psl8_ops = {
.adapter_regs_init = init_implementation_adapter_regs_psl8,
.invalidate_all = cxl_invalidate_all_psl8,
@@ -1580,6 +1775,9 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
if (cxl_is_power8()) {
dev_info(&dev->dev, "Device uses a PSL8\n");
adapter->native->sl_ops = &psl8_ops;
+ } else {
+ dev_info(&dev->dev, "Device uses a PSL9\n");
+ adapter->native->sl_ops = &psl9_ops;
}
}
}
@@ -1732,6 +1930,11 @@ static int cxl_probe(struct pci_dev *dev, const struct pci_device_id *id)
return -ENODEV;
}
+ if (cxl_is_power9() && !radix_enabled()) {
+ dev_info(&dev->dev, "Only Radix mode supported\n");
+ return -ENODEV;
+ }
+
if (cxl_verbose)
dump_cxl_config_space(dev);
diff --git a/drivers/misc/cxl/trace.h b/drivers/misc/cxl/trace.h
index 751d611..b8e300a 100644
--- a/drivers/misc/cxl/trace.h
+++ b/drivers/misc/cxl/trace.h
@@ -17,6 +17,15 @@
#include "cxl.h"
+#define dsisr_psl9_flags(flags) \
+ __print_flags(flags, "|", \
+ { CXL_PSL9_DSISR_An_CO_MASK, "FR" }, \
+ { CXL_PSL9_DSISR_An_TF, "TF" }, \
+ { CXL_PSL9_DSISR_An_PE, "PE" }, \
+ { CXL_PSL9_DSISR_An_AE, "AE" }, \
+ { CXL_PSL9_DSISR_An_OC, "OC" }, \
+ { CXL_PSL9_DSISR_An_S, "S" })
+
#define DSISR_FLAGS \
{ CXL_PSL_DSISR_An_DS, "DS" }, \
{ CXL_PSL_DSISR_An_DM, "DM" }, \
@@ -154,6 +163,40 @@ TRACE_EVENT(cxl_afu_irq,
)
);
+TRACE_EVENT(cxl_psl9_irq,
+ TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
+
+ TP_ARGS(ctx, irq, dsisr, dar),
+
+ TP_STRUCT__entry(
+ __field(u8, card)
+ __field(u8, afu)
+ __field(u16, pe)
+ __field(int, irq)
+ __field(u64, dsisr)
+ __field(u64, dar)
+ ),
+
+ TP_fast_assign(
+ __entry->card = ctx->afu->adapter->adapter_num;
+ __entry->afu = ctx->afu->slice;
+ __entry->pe = ctx->pe;
+ __entry->irq = irq;
+ __entry->dsisr = dsisr;
+ __entry->dar = dar;
+ ),
+
+ TP_printk("afu%i.%i pe=%i irq=%i dsisr=0x%016llx dsisr=%s dar=0x%016llx",
+ __entry->card,
+ __entry->afu,
+ __entry->pe,
+ __entry->irq,
+ __entry->dsisr,
+ dsisr_psl9_flags(__entry->dsisr),
+ __entry->dar
+ )
+);
+
TRACE_EVENT(cxl_psl_irq,
TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
--
2.7.4
^ permalink raw reply related [flat|nested] 17+ messages in thread
* Re: [PATCH 7/7] cxl: Add psl9 specific code
2017-02-01 17:30 ` [PATCH 7/7] cxl: Add psl9 specific code Christophe Lombard
@ 2017-03-02 6:55 ` Andrew Donnellan
2017-03-03 7:10 ` Andrew Donnellan
1 sibling, 0 replies; 17+ messages in thread
From: Andrew Donnellan @ 2017-03-02 6:55 UTC (permalink / raw)
To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie
On 02/02/17 04:30, Christophe Lombard wrote:
> The new Coherent Accelerator Interface Architecture, level 2, for the
> IBM POWER9 brings new content and features:
> - POWER9 Service Layer
> - Registers
> - Radix mode
> - Process element entry
> - Dedicated-Shared Process Programming Model
> - Translation Fault Handling
> - CAPP
> - Memory Context ID
> If a valid mm_struct is found the memory context id is used for each
> transaction associated with the process handle. The PSL uses the
> context ID to find the corresponding process element.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Patch needs rebasing, see:
https://github.com/ajdlinux/linux/commit/642ec862c7074c19765a2dd385dc5fd1751104b4
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com IBM Australia Limited
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [PATCH 7/7] cxl: Add psl9 specific code
2017-02-01 17:30 ` [PATCH 7/7] cxl: Add psl9 specific code Christophe Lombard
2017-03-02 6:55 ` Andrew Donnellan
@ 2017-03-03 7:10 ` Andrew Donnellan
1 sibling, 0 replies; 17+ messages in thread
From: Andrew Donnellan @ 2017-03-03 7:10 UTC (permalink / raw)
To: Christophe Lombard, linuxppc-dev, fbarrat, imunsie
On 02/02/17 04:30, Christophe Lombard wrote:
> The new Coherent Accelerator Interface Architecture, level 2, for the
> IBM POWER9 brings new content and features:
> - POWER9 Service Layer
> - Registers
> - Radix mode
> - Process element entry
> - Dedicated-Shared Process Programming Model
> - Translation Fault Handling
> - CAPP
> - Memory Context ID
> If a valid mm_struct is found the memory context id is used for each
> transaction associated with the process handle. The PSL uses the
> context ID to find the corresponding process element.
>
> Signed-off-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
I'm going on leave tomorrow and haven't had time to review this properly,
but here are a few minor style comments below.
> ---
> drivers/misc/cxl/context.c | 13 +++
> drivers/misc/cxl/cxl.h | 125 ++++++++++++++++++++++----
> drivers/misc/cxl/debugfs.c | 19 ++++
> drivers/misc/cxl/fault.c | 48 ++++++----
> drivers/misc/cxl/guest.c | 8 +-
> drivers/misc/cxl/irq.c | 52 +++++++++++
> drivers/misc/cxl/native.c | 213 +++++++++++++++++++++++++++++++++++++++++--
> drivers/misc/cxl/pci.c | 219 +++++++++++++++++++++++++++++++++++++++++++--
> drivers/misc/cxl/trace.h | 43 +++++++++
> 9 files changed, 685 insertions(+), 55 deletions(-)
>
> diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
> index 1835067..c224d15 100644
> --- a/drivers/misc/cxl/context.c
> +++ b/drivers/misc/cxl/context.c
> @@ -203,6 +203,19 @@ int cxl_context_iomap(struct cxl_context *ctx, struct vm_area_struct *vma)
> return -EBUSY;
> }
>
> + if ((ctx->afu->current_mode == CXL_MODE_DEDICATED) &&
> + (cxl_is_psl9(ctx->afu))) {
> + /* make sure there is a valid problem state area space for this AFU */
> + if (ctx->master && !ctx->afu->psa) {
> + pr_devel("AFU doesn't support mmio space\n");
> + return -EINVAL;
> + }
> +
> + /* Can't mmap until the AFU is enabled */
> + if (!ctx->afu->enabled)
> + return -EBUSY;
> + }
> +
> pr_devel("%s: mmio physical: %llx pe: %i master:%i\n", __func__,
> ctx->psn_phys, ctx->pe , ctx->master);
>
> diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
> index ddc787e..554ae22 100644
> --- a/drivers/misc/cxl/cxl.h
> +++ b/drivers/misc/cxl/cxl.h
> @@ -63,7 +63,7 @@ typedef struct {
> /* Memory maps. Ref CXL Appendix A */
>
> /* PSL Privilege 1 Memory Map */
> -/* Configuration and Control area */
> +/* Configuration and Control area - CAIA 1&2 */
> static const cxl_p1_reg_t CXL_PSL_CtxTime = {0x0000};
> static const cxl_p1_reg_t CXL_PSL_ErrIVTE = {0x0008};
> static const cxl_p1_reg_t CXL_PSL_KEY1 = {0x0010};
> @@ -98,11 +98,28 @@ static const cxl_p1_reg_t CXL_XSL_Timebase = {0x0100};
> static const cxl_p1_reg_t CXL_XSL_TB_CTLSTAT = {0x0108};
> static const cxl_p1_reg_t CXL_XSL_FEC = {0x0158};
> static const cxl_p1_reg_t CXL_XSL_DSNCTL = {0x0168};
> +/* PSL registers - CAIA 2 */
> +static const cxl_p1_reg_t CXL_PSL9_CONTROL = {0x0020};
> +static const cxl_p1_reg_t CXL_XSL9_DSNCTL = {0x0168};
> +static const cxl_p1_reg_t CXL_PSL9_FIR1 = {0x0300};
> +static const cxl_p1_reg_t CXL_PSL9_FIR2 = {0x0308}; /* TBD NML CAIA 2 */
"TBD NML CAIA 2"???
> +static const cxl_p1_reg_t CXL_PSL9_Timebase = {0x0310};
> +static const cxl_p1_reg_t CXL_PSL9_DEBUG = {0x0320};
> +static const cxl_p1_reg_t CXL_PSL9_FIR_CNTL = {0x0348};
> +static const cxl_p1_reg_t CXL_PSL9_DSNDCTL = {0x0350};
> +static const cxl_p1_reg_t CXL_PSL9_TB_CTLSTAT = {0x0340};
> +static const cxl_p1_reg_t CXL_PSL9_TRACECFG = {0x0368};
> +static const cxl_p1_reg_t CXL_PSL9_APCDEDALLOC = {0x0378};
> +static const cxl_p1_reg_t CXL_PSL9_APCDEDTYPE = {0x0380};
> +static const cxl_p1_reg_t CXL_PSL9_TNR_ADDR = {0x0388};
> +static const cxl_p1_reg_t CXL_XSL9_IERAT = {0x0588};
> +static const cxl_p1_reg_t CXL_XSL9_ILPP = {0x0590};
> +
> /* 0x7F00:7FFF Reserved PCIe MSI-X Pending Bit Array area */
> /* 0x8000:FFFF Reserved PCIe MSI-X Table Area */
>
> /* PSL Slice Privilege 1 Memory Map */
> -/* Configuration Area */
> +/* Configuration Area - CAIA 1&2 */
> static const cxl_p1n_reg_t CXL_PSL_SR_An = {0x00};
> static const cxl_p1n_reg_t CXL_PSL_LPID_An = {0x08};
> static const cxl_p1n_reg_t CXL_PSL_AMBAR_An = {0x10};
> @@ -111,17 +128,18 @@ static const cxl_p1n_reg_t CXL_PSL_ID_An = {0x20};
> static const cxl_p1n_reg_t CXL_PSL_SERR_An = {0x28};
> /* Memory Management and Lookaside Buffer Management - CAIA 1*/
> static const cxl_p1n_reg_t CXL_PSL_SDR_An = {0x30};
> +/* Memory Management and Lookaside Buffer Management - CAIA 1&2 */
> static const cxl_p1n_reg_t CXL_PSL_AMOR_An = {0x38};
> -/* Pointer Area */
> +/* Pointer Area - CAIA 1&2 */
> static const cxl_p1n_reg_t CXL_HAURP_An = {0x80};
> static const cxl_p1n_reg_t CXL_PSL_SPAP_An = {0x88};
> static const cxl_p1n_reg_t CXL_PSL_LLCMD_An = {0x90};
> -/* Control Area */
> +/* Control Area - CAIA 1&2 */
> static const cxl_p1n_reg_t CXL_PSL_SCNTL_An = {0xA0};
> static const cxl_p1n_reg_t CXL_PSL_CtxTime_An = {0xA8};
> static const cxl_p1n_reg_t CXL_PSL_IVTE_Offset_An = {0xB0};
> static const cxl_p1n_reg_t CXL_PSL_IVTE_Limit_An = {0xB8};
> -/* 0xC0:FF Implementation Dependent Area */
> +/* 0xC0:FF Implementation Dependent Area - CAIA 1&2 */
> static const cxl_p1n_reg_t CXL_PSL_FIR_SLICE_An = {0xC0};
> static const cxl_p1n_reg_t CXL_AFU_DEBUG_An = {0xC8};
> /* 0xC0:FF Implementation Dependent Area - CAIA 1 */
> @@ -131,7 +149,7 @@ static const cxl_p1n_reg_t CXL_PSL_RXCTL_A = {0xE0};
> static const cxl_p1n_reg_t CXL_PSL_SLICE_TRACE = {0xE8};
>
> /* PSL Slice Privilege 2 Memory Map */
> -/* Configuration and Control Area */
> +/* Configuration and Control Area - CAIA 1&2 */
> static const cxl_p2n_reg_t CXL_PSL_PID_TID_An = {0x000};
> static const cxl_p2n_reg_t CXL_CSRP_An = {0x008};
> /* Configuration and Control Area - CAIA 1 */
> @@ -145,17 +163,17 @@ static const cxl_p2n_reg_t CXL_PSL_AMR_An = {0x030};
> static const cxl_p2n_reg_t CXL_SLBIE_An = {0x040};
> static const cxl_p2n_reg_t CXL_SLBIA_An = {0x048};
> static const cxl_p2n_reg_t CXL_SLBI_Select_An = {0x050};
> -/* Interrupt Registers */
> +/* Interrupt Registers - CAIA 1&2 */
> static const cxl_p2n_reg_t CXL_PSL_DSISR_An = {0x060};
> static const cxl_p2n_reg_t CXL_PSL_DAR_An = {0x068};
> static const cxl_p2n_reg_t CXL_PSL_DSR_An = {0x070};
> static const cxl_p2n_reg_t CXL_PSL_TFC_An = {0x078};
> static const cxl_p2n_reg_t CXL_PSL_PEHandle_An = {0x080};
> -static const cxl_p2n_reg_t CXL_PSL_ErrStat_An = {0x088};
> -/* AFU Registers */
> +static const cxl_p2n_reg_t CXL_PSL_ErrStat_An = {0x088}; /* TBD NML CAIA 2 */
> +/* AFU Registers - CAIA 1&2 */
> static const cxl_p2n_reg_t CXL_AFU_Cntl_An = {0x090};
> static const cxl_p2n_reg_t CXL_AFU_ERR_An = {0x098};
> -/* Work Element Descriptor */
> +/* Work Element Descriptor - CAIA 1&2 */
> static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
> /* 0x0C0:FFF Implementation Dependent Area */
>
> @@ -182,6 +200,10 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
> #define CXL_PSL_SR_An_SF MSR_SF /* 64bit */
> #define CXL_PSL_SR_An_TA (1ull << (63-1)) /* Tags active, GA1: 0 */
> #define CXL_PSL_SR_An_HV MSR_HV /* Hypervisor, GA1: 0 */
> +#define CXL_PSL_SR_An_XLAT_hpt (0ull << (63-6))/* Hashed page table (HPT) mode */
> +#define CXL_PSL_SR_An_XLAT_roh (2ull << (63-6))/* Radix on HPT mode */
> +#define CXL_PSL_SR_An_XLAT_ror (3ull << (63-6))/* Radix on Radix mode */
> +#define CXL_PSL_SR_An_BOT (1ull << (63-10)) /* Use the in-memory segment table */
> #define CXL_PSL_SR_An_PR MSR_PR /* Problem state, GA1: 1 */
> #define CXL_PSL_SR_An_ISL (1ull << (63-53)) /* Ignore Segment Large Page */
> #define CXL_PSL_SR_An_TC (1ull << (63-54)) /* Page Table secondary hash */
> @@ -298,12 +320,38 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
> #define CXL_PSL_DSISR_An_S DSISR_ISSTORE /* Access was afu_wr or afu_zero */
> #define CXL_PSL_DSISR_An_K DSISR_KEYFAULT /* Access not permitted by virtual page class key protection */
>
> +/****** CXL_PSL_DSISR_An - CAIA 2 ****************************************************/
> +#define CXL_PSL9_DSISR_An_TF (1ull << (63-3)) /* Translation fault */
> +#define CXL_PSL9_DSISR_An_PE (1ull << (63-4)) /* PSL Error (implementation specific) */
> +#define CXL_PSL9_DSISR_An_AE (1ull << (63-5)) /* AFU Error */
> +#define CXL_PSL9_DSISR_An_OC (1ull << (63-6)) /* OS Context Warning */
> +#define CXL_PSL9_DSISR_An_S (1ull << (63-38)) /* TF for a write operation */
> +#define CXL_PSL9_DSISR_PENDING (CXL_PSL9_DSISR_An_TF | CXL_PSL9_DSISR_An_PE | CXL_PSL9_DSISR_An_AE | CXL_PSL9_DSISR_An_OC)
> +/* NOTE: Bits 56:63 (Checkout Response Status) are valid when DSISR_An[TF] = 1
> + * Status (0:7) Encoding
> + */
> +#define CXL_PSL9_DSISR_An_CO_MASK 0x00000000000000ffULL
> +#define CXL_PSL9_DSISR_An_SF 0x0000000000000080ULL /* Segment Fault 0b10000000 */
> +#define CXL_PSL9_DSISR_An_PF_SLR 0x0000000000000088ULL /* PTE not found (Single Level Radix) 0b10001000 */
> +#define CXL_PSL9_DSISR_An_PF_RGC 0x000000000000008CULL /* PTE not found (Radix Guest (child)) 0b10001100 */
> +#define CXL_PSL9_DSISR_An_PF_RGP 0x0000000000000090ULL /* PTE not found (Radix Guest (parent)) 0b10010000 */
> +#define CXL_PSL9_DSISR_An_PF_HRH 0x0000000000000094ULL /* PTE not found (HPT/Radix Host) 0b10010100 */
> +#define CXL_PSL9_DSISR_An_PF_STEG 0x000000000000009CULL /* PTE not found (STEG VA) 0b10011100 */
> +
> /****** CXL_PSL_TFC_An ******************************************************/
> #define CXL_PSL_TFC_An_A (1ull << (63-28)) /* Acknowledge non-translation fault */
> #define CXL_PSL_TFC_An_C (1ull << (63-29)) /* Continue (abort transaction) */
> #define CXL_PSL_TFC_An_AE (1ull << (63-30)) /* Restart PSL with address error */
> #define CXL_PSL_TFC_An_R (1ull << (63-31)) /* Restart PSL transaction */
>
> +/****** CXL_XSL9_INV_ERAT - CAIA 2 **********************************/
> +#define CXL_XSL9_IERAT_MLPID (1ull << (63-0)) /* Match LPID */
"INV" in the comment doesn't match "IERAT" in the macros
> +#define CXL_XSL9_IERAT_MPID (1ull << (63-1)) /* Match PID */
> +#define CXL_XSL9_IERAT_PRS (1ull << (63-4)) /* PRS bit for Radix invalidations */
> +#define CXL_XSL9_IERAT_INVR (1ull << (63-3)) /* Invalidate Radix */
> +#define CXL_XSL9_IERAT_IALL (1ull << (63-8)) /* Invalidate All */
> +#define CXL_XSL9_IERAT_IINPROG (1ull << (63-63)) /* Invalidate in progress */
> +
> /* cxl_process_element->software_status */
> #define CXL_PE_SOFTWARE_STATE_V (1ul << (31 - 0)) /* Valid */
> #define CXL_PE_SOFTWARE_STATE_C (1ul << (31 - 29)) /* Complete */
> @@ -651,25 +699,38 @@ int cxl_pci_reset(struct cxl *adapter);
> void cxl_pci_release_afu(struct device *dev);
> ssize_t cxl_pci_read_adapter_vpd(struct cxl *adapter, void *buf, size_t len);
>
> -/* common == phyp + powernv */
> +/* common == phyp + powernv - CAIA 1&2 */
> struct cxl_process_element_common {
> __be32 tid;
> __be32 pid;
> __be64 csrp;
> - __be64 aurp0;
> - __be64 aurp1;
> - __be64 sstp0;
> - __be64 sstp1;
> + union {
> + struct {
> + __be64 aurp0;
> + __be64 aurp1;
> + __be64 sstp0;
> + __be64 sstp1;
> + } psl8; /* CAIA 1 */
> + struct {
> + u8 reserved2[8];
> + u8 reserved3[8];
> + u8 reserved4[8];
> + u8 reserved5[8];
> + } psl9; /* CAIA 2 */
> + } u;
> __be64 amr;
> - u8 reserved3[4];
> + u8 reserved6[4];
> __be64 wed;
> } __packed;
>
> -/* just powernv */
> +/* just powernv - CAIA 1&2 */
> struct cxl_process_element {
> __be64 sr;
> __be64 SPOffset;
> - __be64 sdr;
> + union {
> + __be64 sdr; /* CAIA 1 */
> + u8 reserved1[8]; /* CAIA 2 */
> + } u;
> __be64 haurp;
> __be32 ctxtime;
> __be16 ivte_offsets[4];
> @@ -758,6 +819,16 @@ static inline bool cxl_is_power8(void)
> return false;
> }
>
> +static inline bool cxl_is_power9(void)
> +{
> + /* intermediate solution */
> + if (!cxl_is_power8() &&
> + (cpu_has_feature(CPU_FTRS_POWER9) ||
> + cpu_has_feature(CPU_FTR_POWER9_DD1)))
> + return true;
> + return false;
> +}
> +
> static inline bool cxl_is_psl8(struct cxl_afu *afu)
> {
> if (afu->adapter->caia_major == 1)
> @@ -765,6 +836,13 @@ static inline bool cxl_is_psl8(struct cxl_afu *afu)
> return false;
> }
>
> +static inline bool cxl_is_psl9(struct cxl_afu *afu)
> +{
> + if (afu->adapter->caia_major == 2)
> + return true;
> + return false;
> +}
> +
> ssize_t cxl_pci_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
> loff_t off, size_t count);
>
> @@ -829,9 +907,13 @@ int afu_register_irqs(struct cxl_context *ctx, u32 count);
> void afu_release_irqs(struct cxl_context *ctx, void *cookie);
> void afu_irq_name_free(struct cxl_context *ctx);
>
> +int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
> int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
> +int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu);
> int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu);
> +int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr);
> int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr);
> +void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx);
> void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx);
>
> int cxl_debugfs_init(void);
> @@ -892,8 +974,11 @@ struct cxl_irq_info {
> };
>
> void cxl_assign_psn_space(struct cxl_context *ctx);
> +int cxl_invalidate_all_psl9(struct cxl *adapter);
> int cxl_invalidate_all_psl8(struct cxl *adapter);
> +irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info);
> +irqreturn_t cxl_fail_irq_psl9(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
> irqreturn_t cxl_fail_irq_psl8(struct cxl_afu *afu, struct cxl_irq_info *irq_info);
> int cxl_register_one_irq(struct cxl *adapter, irq_handler_t handler,
> void *cookie, irq_hw_number_t *dest_hwirq,
> @@ -905,11 +990,15 @@ int cxl_data_cache_flush(struct cxl *adapter);
> int cxl_afu_disable(struct cxl_afu *afu);
> int cxl_psl_purge(struct cxl_afu *afu);
>
> +void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir);
> void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir);
> void cxl_debugfs_add_adapter_regs_xsl(struct cxl *adapter, struct dentry *dir);
> +void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir);
> void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir);
> +void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx);
> void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx);
> void cxl_native_err_irq_dump_regs(struct cxl *adapter);
> +void cxl_stop_trace_psl9(struct cxl *cxl);
> void cxl_stop_trace_psl8(struct cxl *cxl);
> int cxl_pci_vphb_add(struct cxl_afu *afu);
> void cxl_pci_vphb_remove(struct cxl_afu *afu);
> diff --git a/drivers/misc/cxl/debugfs.c b/drivers/misc/cxl/debugfs.c
> index 43a1a27..eae9d74 100644
> --- a/drivers/misc/cxl/debugfs.c
> +++ b/drivers/misc/cxl/debugfs.c
> @@ -15,6 +15,12 @@
>
> static struct dentry *cxl_debugfs;
>
> +void cxl_stop_trace_psl9(struct cxl *adapter)
> +{
> + /* Stop the trace */
> + cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0x4480000000000000ULL);
> +}
> +
> void cxl_stop_trace_psl8(struct cxl *adapter)
> {
> int slice;
> @@ -53,6 +59,14 @@ static struct dentry *debugfs_create_io_x64(const char *name, umode_t mode,
> (void __force *)value, &fops_io_x64);
> }
>
> +void cxl_debugfs_add_adapter_regs_psl9(struct cxl *adapter, struct dentry *dir)
> +{
> + debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR1));
> + debugfs_create_io_x64("fir2", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR2));
> + debugfs_create_io_x64("fir_cntl", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_FIR_CNTL));
> + debugfs_create_io_x64("trace", S_IRUSR | S_IWUSR, dir, _cxl_p1_addr(adapter, CXL_PSL9_TRACECFG));
> +}
> +
> void cxl_debugfs_add_adapter_regs_psl8(struct cxl *adapter, struct dentry *dir)
> {
> debugfs_create_io_x64("fir1", S_IRUSR, dir, _cxl_p1_addr(adapter, CXL_PSL_FIR1));
> @@ -92,6 +106,11 @@ void cxl_debugfs_adapter_remove(struct cxl *adapter)
> debugfs_remove_recursive(adapter->debugfs);
> }
>
> +void cxl_debugfs_add_afu_regs_psl9(struct cxl_afu *afu, struct dentry *dir)
> +{
> + debugfs_create_io_x64("serr", S_IRUSR, dir, _cxl_p1n_addr(afu, CXL_PSL_SERR_An));
> +}
> +
> void cxl_debugfs_add_afu_regs_psl8(struct cxl_afu *afu, struct dentry *dir)
> {
> debugfs_create_io_x64("sstp0", S_IRUSR, dir, _cxl_p2n_addr(afu, CXL_SSTP0_An));
> diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
> index acf8b7a..d615f89 100644
> --- a/drivers/misc/cxl/fault.c
> +++ b/drivers/misc/cxl/fault.c
> @@ -145,25 +145,26 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
> return cxl_ack_ae(ctx);
> }
>
> - /*
> - * update_mmu_cache() will not have loaded the hash since current->trap
> - * is not a 0x400 or 0x300, so just call hash_page_mm() here.
> - */
> - access = _PAGE_PRESENT | _PAGE_READ;
> - if (dsisr & CXL_PSL_DSISR_An_S)
> - access |= _PAGE_WRITE;
> -
> - access |= _PAGE_PRIVILEGED;
> - if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
> - access &= ~_PAGE_PRIVILEGED;
> -
> - if (dsisr & DSISR_NOHPTE)
> - inv_flags |= HPTE_NOHPTE_UPDATE;
> -
> - local_irq_save(flags);
> - hash_page_mm(mm, dar, access, 0x300, inv_flags);
> - local_irq_restore(flags);
> -
> + if (!radix_enabled()) {
> + /*
> + * update_mmu_cache() will not have loaded the hash since current->trap
> + * is not a 0x400 or 0x300, so just call hash_page_mm() here.
> + */
> + access = _PAGE_PRESENT | _PAGE_READ;
> + if (dsisr & CXL_PSL_DSISR_An_S)
> + access |= _PAGE_WRITE;
> +
> + access |= _PAGE_PRIVILEGED;
> + if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
> + access &= ~_PAGE_PRIVILEGED;
> +
> + if (dsisr & DSISR_NOHPTE)
> + inv_flags |= HPTE_NOHPTE_UPDATE;
> +
> + local_irq_save(flags);
> + hash_page_mm(mm, dar, access, 0x300, inv_flags);
> + local_irq_restore(flags);
> + }
> pr_devel("Page fault successfully handled for pe: %i!\n", ctx->pe);
> cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_R, 0);
> }
> @@ -231,6 +232,15 @@ void cxl_handle_fault(struct work_struct *fault_work)
> else
> WARN(1, "cxl_handle_fault has nothing to handle\n");
> }
> + if (cxl_is_psl9(ctx->afu)) {
> + if ((dsisr & CXL_PSL9_DSISR_An_CO_MASK) &
> + (CXL_PSL9_DSISR_An_PF_SLR | CXL_PSL9_DSISR_An_PF_RGC |
> + CXL_PSL9_DSISR_An_PF_RGP | CXL_PSL9_DSISR_An_PF_HRH |
> + CXL_PSL9_DSISR_An_PF_STEG))
> + cxl_handle_page_fault(ctx, mm, dsisr, dar);
> + else
> + WARN(1, "cxl_handle_fault has nothing to handle\n");
> + }
>
> if (mm)
> mmput(mm);
> diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
> index 3ad7381..f58b4b6c 100644
> --- a/drivers/misc/cxl/guest.c
> +++ b/drivers/misc/cxl/guest.c
> @@ -551,13 +551,13 @@ static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
> elem->common.tid = cpu_to_be32(0); /* Unused */
> elem->common.pid = cpu_to_be32(pid);
> elem->common.csrp = cpu_to_be64(0); /* disable */
> - elem->common.aurp0 = cpu_to_be64(0); /* disable */
> - elem->common.aurp1 = cpu_to_be64(0); /* disable */
> + elem->common.u.psl8.aurp0 = cpu_to_be64(0); /* disable */
> + elem->common.u.psl8.aurp1 = cpu_to_be64(0); /* disable */
>
> cxl_prefault(ctx, wed);
>
> - elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
> - elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
> + elem->common.u.psl8.sstp0 = cpu_to_be64(ctx->sstp0);
> + elem->common.u.psl8.sstp1 = cpu_to_be64(ctx->sstp1);
>
> /*
> * Ensure we have at least one interrupt allocated to take faults for
> diff --git a/drivers/misc/cxl/irq.c b/drivers/misc/cxl/irq.c
> index fa9f8a2..7074e7d 100644
> --- a/drivers/misc/cxl/irq.c
> +++ b/drivers/misc/cxl/irq.c
> @@ -34,6 +34,58 @@ static irqreturn_t schedule_cxl_fault(struct cxl_context *ctx, u64 dsisr, u64 da
> return IRQ_HANDLED;
> }
>
> +irqreturn_t cxl_irq_psl9(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
> +{
> + u64 dsisr, dar;
> +
> + dsisr = irq_info->dsisr;
> + dar = irq_info->dar;
> +
> + trace_cxl_psl9_irq(ctx, irq, dsisr, dar);
> +
> + pr_devel("CXL interrupt %i for afu pe: %i DSISR: %#llx DAR: %#llx\n", irq, ctx->pe, dsisr, dar);
> +
> + if (dsisr & CXL_PSL9_DSISR_An_TF) {
> + pr_devel("Scheduling translation fault handling for later pe: %i\n", ctx->pe);
Can we prefix all the lines printed with pr_*() with "cxl: "?
Also, wrap the "pe: %i" in parentheses.
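If it helps, the usual way to get the prefix everywhere without touching
each call site is a pr_fmt define at the top of the file - just a sketch,
untested:

  /* must appear before any #include so printk.h picks it up */
  #define pr_fmt(fmt) "cxl: " fmt

With that in place every pr_devel()/pr_err() in irq.c gets the "cxl: "
prefix automatically.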
> + return schedule_cxl_fault(ctx, dsisr, dar);
> + }
> +
> + if (dsisr & CXL_PSL9_DSISR_An_PE)
> + return cxl_ops->handle_psl_slice_error(ctx, dsisr,
> + irq_info->errstat);
> + if (dsisr & CXL_PSL9_DSISR_An_AE) {
> + pr_devel("CXL interrupt: AFU Error 0x%016llx\n", irq_info->afu_err);
> +
> + if (ctx->pending_afu_err) {
> + /*
> + * This shouldn't happen - the PSL treats these errors
> + * as fatal and will have reset the AFU, so there's not
> + * much point buffering multiple AFU errors.
> + * OTOH if we DO ever see a storm of these come in it's
> + * probably best that we log them somewhere:
> + */
> + dev_err_ratelimited(&ctx->afu->dev, "CXL AFU Error "
> + "undelivered to pe %i: 0x%016llx\n",
> + ctx->pe, irq_info->afu_err);
> + } else {
> + spin_lock(&ctx->lock);
> + ctx->afu_err = irq_info->afu_err;
> + ctx->pending_afu_err = 1;
./drivers/misc/cxl/irq.c:73:3-23: WARNING: Assignment of bool to 0/1
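Easy to fold into the respin; assuming pending_afu_err really is declared
as a bool (which is what coccinelle is pointing at), something like:

  /* pending_afu_err is a bool, so assign true rather than 1 */
  ctx->pending_afu_err = true;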
> + spin_unlock(&ctx->lock);
> +
> + wake_up_all(&ctx->wq);
> + }
> +
> + cxl_ops->ack_irq(ctx, CXL_PSL_TFC_An_A, 0);
> + return IRQ_HANDLED;
> + }
> + if (dsisr & CXL_PSL9_DSISR_An_OC)
> + pr_devel("CXL interrupt: OS Context Warning\n");
> +
> + WARN(1, "Unhandled CXL PSL IRQ\n");
> + return IRQ_HANDLED;
> +}
> +
> irqreturn_t cxl_irq_psl8(int irq, struct cxl_context *ctx, struct cxl_irq_info *irq_info)
> {
> u64 dsisr, dar;
> diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
> index a58a6a2..ed116d1 100644
> --- a/drivers/misc/cxl/native.c
> +++ b/drivers/misc/cxl/native.c
> @@ -167,6 +167,18 @@ int cxl_psl_purge(struct cxl_afu *afu)
> cpu_relax();
> }
> }
> + if (cxl_is_psl9(afu)) {
> + if (dsisr & CXL_PSL9_DSISR_An_TF) {
> + dar = cxl_p2n_read(afu, CXL_PSL_DAR_An);
> + dev_notice(&afu->dev, "PSL purge terminating pending translation, DSISR: 0x%016llx, DAR: 0x%016llx\n", dsisr, dar);
> + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
> + } else if (dsisr) {
> + dev_notice(&afu->dev, "PSL purge acknowledging pending non-translation fault, DSISR: 0x%016llx\n", dsisr);
> + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
> + } else {
> + cpu_relax();
> + }
> + }
> PSL_CNTL = cxl_p1n_read(afu, CXL_PSL_SCNTL_An);
> }
> end = local_clock();
> @@ -259,6 +271,36 @@ void cxl_release_spa(struct cxl_afu *afu)
> }
> }
>
> +int cxl_invalidate_all_psl9(struct cxl *adapter)
> +{
> + unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
> + u64 ierat;
> +
> + /* do not invalidate ERAT entries when not reloading on PERST */
> + if (adapter->perst_loads_image)
> + return 0;
> +
> + pr_devel("CXL adapter - invalidation of all ERAT entries\n");
> +
> + /* Invalidates all ERAT entries for Radix or HPT */
> + ierat = CXL_XSL9_IERAT_IALL;
> + if (radix_enabled())
> + ierat |= CXL_XSL9_IERAT_INVR;
> + cxl_p1_write(adapter, CXL_XSL9_IERAT, ierat);
> +
> + while (cxl_p1_read(adapter, CXL_XSL9_IERAT) & CXL_XSL9_IERAT_IINPROG) {
> + if (time_after_eq(jiffies, timeout)) {
> + dev_warn(&adapter->dev,
> + "WARNING: CXL adapter invalidation of all ERAT entries timed out!\n");
> + return -EBUSY;
> + }
> + if (!cxl_ops->link_ok(adapter, NULL))
> + return -EIO;
> + cpu_relax();
> + }
> + return 0;
> +}
> +
> int cxl_invalidate_all_psl8(struct cxl *adapter)
> {
> unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT);
> @@ -545,10 +587,19 @@ static u64 calculate_sr(struct cxl_context *ctx)
> sr |= (mfmsr() & MSR_SF) | CXL_PSL_SR_An_HV;
> } else {
> sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
> - sr &= ~(CXL_PSL_SR_An_HV);
> + if (radix_enabled())
> + sr |= CXL_PSL_SR_An_HV;
> + else
> + sr &= ~(CXL_PSL_SR_An_HV);
> if (!test_tsk_thread_flag(current, TIF_32BIT))
> sr |= CXL_PSL_SR_An_SF;
> }
> + if (cxl_is_psl9(ctx->afu)) {
> + if (radix_enabled())
> + sr |= CXL_PSL_SR_An_XLAT_ror;
> + else
> + sr |= CXL_PSL_SR_An_XLAT_hpt;
> + }
> return sr;
> }
>
> @@ -581,6 +632,70 @@ static void update_ivtes_directed(struct cxl_context *ctx)
> WARN_ON(add_process_element(ctx));
> }
>
> +static int process_element_entry(struct cxl_context *ctx, u64 wed, u64 amr)
> +{
> + u32 pid;
> +
> + cxl_assign_psn_space(ctx);
> +
> + ctx->elem->ctxtime = 0; /* disable */
> + ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
> + ctx->elem->haurp = 0; /* disable */
> +
> + if (ctx->kernel)
> + pid = 0;
> + else {
> + if (ctx->mm == NULL) {
> + pr_devel("%s: unable to get mm for pe=%d pid=%i\n",
> + __func__, ctx->pe, pid_nr(ctx->pid));
> + return -EINVAL;
> + }
> + pid = ctx->mm->context.id;
> + }
> +
> + ctx->elem->common.tid = 0;
> + ctx->elem->common.pid = cpu_to_be32(pid);
> +
> + ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
> +
> + ctx->elem->common.csrp = 0; /* disable */
> +
> + cxl_prefault(ctx, wed);
> +
> + /*
> + * Ensure we have the multiplexed PSL interrupt set up to take faults
> + * for kernel contexts that may not have allocated any AFU IRQs at all:
> + */
> + if (ctx->irqs.range[0] == 0) {
> + ctx->irqs.offset[0] = ctx->afu->native->psl_hwirq;
> + ctx->irqs.range[0] = 1;
> + }
> +
> + ctx->elem->common.amr = cpu_to_be64(amr);
> + ctx->elem->common.wed = cpu_to_be64(wed);
> +
> + return 0;
> +}
> +
> +int cxl_attach_afu_directed_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
> +{
> + int result;
> +
> + /* fill the process element entry */
> + result = process_element_entry(ctx, wed, amr);
> + if (result)
> + return result;
> +
> + update_ivtes_directed(ctx);
> +
> + /* first guy needs to enable */
> + result = cxl_ops->afu_check_and_enable(ctx->afu);
> + if (result)
> + return result;
> +
> + return add_process_element(ctx);
> +}
> +
> int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
> {
> u32 pid;
> @@ -591,7 +706,7 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
> ctx->elem->ctxtime = 0; /* disable */
> ctx->elem->lpid = cpu_to_be32(mfspr(SPRN_LPID));
> ctx->elem->haurp = 0; /* disable */
> - ctx->elem->sdr = cpu_to_be64(mfspr(SPRN_SDR1));
> + ctx->elem->u.sdr = cpu_to_be64(mfspr(SPRN_SDR1));
>
> pid = current->pid;
> if (ctx->kernel)
> @@ -602,13 +717,13 @@ int cxl_attach_afu_directed_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
> ctx->elem->sr = cpu_to_be64(calculate_sr(ctx));
>
> ctx->elem->common.csrp = 0; /* disable */
> - ctx->elem->common.aurp0 = 0; /* disable */
> - ctx->elem->common.aurp1 = 0; /* disable */
> + ctx->elem->common.u.psl8.aurp0 = 0; /* disable */
> + ctx->elem->common.u.psl8.aurp1 = 0; /* disable */
>
> cxl_prefault(ctx, wed);
>
> - ctx->elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
> - ctx->elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
> + ctx->elem->common.u.psl8.sstp0 = cpu_to_be64(ctx->sstp0);
> + ctx->elem->common.u.psl8.sstp1 = cpu_to_be64(ctx->sstp1);
>
> /*
> * Ensure we have the multiplexed PSL interrupt set up to take faults
> @@ -674,6 +789,32 @@ static int deactivate_afu_directed(struct cxl_afu *afu)
> return 0;
> }
>
> +int cxl_activate_dedicated_process_psl9(struct cxl_afu *afu)
> +{
> + dev_info(&afu->dev, "Activating dedicated process mode\n");
> +
> + /* If XSL is set to dedicated mode (Set in PSL_SCNTL reg), the
> + * XSL and AFU are programmed to work with a single context.
> + * The context information should be configured in the SPA area
> + * index 0 (so PSL_SPAP must be configured before enabling the
> + * AFU).
> + */
> + afu->num_procs = 1;
> + if (afu->native->spa == NULL) {
> + if (cxl_alloc_spa(afu))
> + return -ENOMEM;
> + }
> + attach_spa(afu);
> +
> + cxl_p1n_write(afu, CXL_PSL_SCNTL_An, CXL_PSL_SCNTL_An_PM_Process);
> + cxl_p1n_write(afu, CXL_PSL_ID_An, CXL_PSL_ID_An_F | CXL_PSL_ID_An_L);
> +
> + afu->current_mode = CXL_MODE_DEDICATED;
> + afu->num_procs = 1;
> +
> + return cxl_chardev_d_afu_add(afu);
> +}
> +
> int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
> {
> dev_info(&afu->dev, "Activating dedicated process mode\n");
> @@ -697,6 +838,16 @@ int cxl_activate_dedicated_process_psl8(struct cxl_afu *afu)
> return cxl_chardev_d_afu_add(afu);
> }
>
> +void cxl_update_dedicated_ivtes_psl9(struct cxl_context *ctx)
> +{
> + int r;
> +
> + for (r = 0; r < CXL_IRQ_RANGES; r++) {
> + ctx->elem->ivte_offsets[r] = cpu_to_be16(ctx->irqs.offset[r]);
> + ctx->elem->ivte_ranges[r] = cpu_to_be16(ctx->irqs.range[r]);
> + }
> +}
> +
> void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
> {
> struct cxl_afu *afu = ctx->afu;
> @@ -713,6 +864,26 @@ void cxl_update_dedicated_ivtes_psl8(struct cxl_context *ctx)
> ((u64)ctx->irqs.range[3] & 0xffff));
> }
>
> +int cxl_attach_dedicated_process_psl9(struct cxl_context *ctx, u64 wed, u64 amr)
> +{
> + struct cxl_afu *afu = ctx->afu;
> + int result;
> +
> + /* fill the process element entry */
> + result = process_element_entry(ctx, wed, amr);
> + if (result)
> + return result;
> +
> + if (ctx->afu->adapter->native->sl_ops->update_dedicated_ivtes)
> + afu->adapter->native->sl_ops->update_dedicated_ivtes(ctx);
> +
> + result = cxl_ops->afu_reset(afu);
> + if (result)
> + return result;
> +
> + return afu_enable(afu);
> +}
> +
> int cxl_attach_dedicated_process_psl8(struct cxl_context *ctx, u64 wed, u64 amr)
> {
> struct cxl_afu *afu = ctx->afu;
> @@ -884,6 +1055,21 @@ static int native_get_irq_info(struct cxl_afu *afu, struct cxl_irq_info *info)
> return 0;
> }
>
> +void cxl_native_irq_dump_regs_psl9(struct cxl_context *ctx)
> +{
> + u64 fir1, fir2, serr;
> +
> + fir1 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR1);
> + fir2 = cxl_p1_read(ctx->afu->adapter, CXL_PSL9_FIR2);
> +
> + dev_crit(&ctx->afu->dev, "PSL_FIR1: 0x%016llx\n", fir1);
> + dev_crit(&ctx->afu->dev, "PSL_FIR2: 0x%016llx\n", fir2);
> + if (ctx->afu->adapter->native->sl_ops->register_serr_irq) {
> + serr = cxl_p1n_read(ctx->afu, CXL_PSL_SERR_An);
> + cxl_afu_decode_psl_serr(ctx->afu, serr);
> + }
> +}
> +
> void cxl_native_irq_dump_regs_psl8(struct cxl_context *ctx)
> {
> u64 fir1, fir2, fir_slice, serr, afu_debug;
> @@ -920,6 +1106,16 @@ static irqreturn_t native_handle_psl_slice_error(struct cxl_context *ctx,
> return cxl_ops->ack_irq(ctx, 0, errstat);
> }
>
> +irqreturn_t cxl_fail_irq_psl9(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
> +{
> + if (irq_info->dsisr & CXL_PSL9_DSISR_An_TF)
> + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
> + else
> + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
> +
> + return IRQ_HANDLED;
> +}
> +
> irqreturn_t cxl_fail_irq_psl8(struct cxl_afu *afu, struct cxl_irq_info *irq_info)
> {
> if (irq_info->dsisr & CXL_PSL_DSISR_TRANS)
> @@ -991,6 +1187,9 @@ static void native_irq_wait(struct cxl_context *ctx)
> if (cxl_is_psl8(ctx->afu) &&
> ((dsisr & CXL_PSL_DSISR_PENDING) == 0))
> return;
> + if (cxl_is_psl9(ctx->afu) &&
> + ((dsisr & CXL_PSL9_DSISR_PENDING) == 0))
> + return;
> /*
> * We are waiting for the workqueue to process our
> * irq, so need to let that run here.
> @@ -1120,6 +1319,8 @@ int cxl_native_register_serr_irq(struct cxl_afu *afu)
> serr = cxl_p1n_read(afu, CXL_PSL_SERR_An);
> if (cxl_is_power8())
> serr = (serr & 0x00ffffffffff0000ULL) | (afu->serr_hwirq & 0xffff);
> + if (cxl_is_power9())
> + serr = (serr & ~0x0000000075010000ULL) | (afu->serr_hwirq & 0xffff);
> cxl_p1n_write(afu, CXL_PSL_SERR_An, serr);
>
> return 0;
> diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
> index 4913142..5e7f2db 100644
> --- a/drivers/misc/cxl/pci.c
> +++ b/drivers/misc/cxl/pci.c
> @@ -60,7 +60,7 @@
> #define CXL_VSEC_PROTOCOL_MASK 0xe0
> #define CXL_VSEC_PROTOCOL_1024TB 0x80
> #define CXL_VSEC_PROTOCOL_512TB 0x40
> -#define CXL_VSEC_PROTOCOL_256TB 0x20 /* Power 8 uses this */
> +#define CXL_VSEC_PROTOCOL_256TB 0x20 /* Power 8/9 uses this */
> #define CXL_VSEC_PROTOCOL_ENABLE 0x01
>
> #define CXL_READ_VSEC_PSL_REVISION(dev, vsec, dest) \
> @@ -326,14 +326,20 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
>
> #define P8_CAPP_UNIT0_ID 0xBA
> #define P8_CAPP_UNIT1_ID 0XBE
> +#define P9_CAPP_UNIT0_ID 0xC0
> +#define P9_CAPP_UNIT1_ID 0xE0
>
> -static u64 get_capp_unit_id(struct device_node *np)
> +static u32 get_phb_index(struct device_node *np)
> {
> u32 phb_index;
>
> if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
> - return 0;
> + return -ENODEV;
> + return phb_index;
> +}
>
> +static u64 get_capp_unit_id(struct device_node *np, u32 phb_index)
> +{
> /*
> * POWER 8:
> * - For chips other than POWER8NVL, we only have CAPP 0,
> @@ -352,10 +358,25 @@ static u64 get_capp_unit_id(struct device_node *np)
> return P8_CAPP_UNIT1_ID;
> }
>
> + /*
> + * POWER 9:
> + * PEC0 (PHB0). Capp ID = CAPP0 (0b1100_0000)
> + * PEC1 (PHB1 - PHB2). No capi mode
> + * PEC2 (PHB3 - PHB4 - PHB5): Capi mode on PHB3 only. Capp ID = CAPP1 (0b1110_0000)
> + */
> + if (cxl_is_power9()) {
> + if (phb_index == 0)
> + return P9_CAPP_UNIT0_ID;
> +
> + if (phb_index == 3)
> + return P9_CAPP_UNIT1_ID;
> + }
> +
> return 0;
> }
>
> -static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id)
> +static int calc_capp_routing(struct pci_dev *dev, u64 *chipid,
> + u32 *phb_index, u64 *capp_unit_id)
> {
> struct device_node *np;
> const __be32 *prop;
> @@ -367,8 +388,16 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
> np = of_get_next_parent(np);
> if (!np)
> return -ENODEV;
> +
> *chipid = be32_to_cpup(prop);
> - *capp_unit_id = get_capp_unit_id(np);
> +
> + *phb_index = get_phb_index(np);
> + if (*phb_index == -ENODEV) {
> + pr_err("cxl: invalid phb index\n");
> + return -ENODEV;
> + }
> +
> + *capp_unit_id = get_capp_unit_id(np, *phb_index);
> of_node_put(np);
> if (!*capp_unit_id) {
> pr_err("cxl: invalid capp unit id\n");
> @@ -378,14 +407,90 @@ static int calc_capp_routing(struct pci_dev *dev, u64 *chipid, u64 *capp_unit_id
> return 0;
> }
>
> +static int init_implementation_adapter_regs_psl9(struct cxl *adapter, struct pci_dev *dev)
> +{
> + u64 xsl_dsnctl, psl_fircntl;
> + u64 chipid;
> + u32 phb_index;
> + u64 capp_unit_id;
> + int rc;
> +
> + rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
> + if (rc)
> + return rc;
> +
> + /* CAPI Identifier bits [0:7]
> + * bit 61:60 MSI bits --> 0
> + * bit 59 TVT selector --> 0
> + */
> + /* Tell XSL where to route data to.
> + * The field chipid should match the PHB CAPI_CMPM register
> + */
> + xsl_dsnctl = ((u64)0x2 << (63-7)); /* Bit 57 */
> + xsl_dsnctl |= (capp_unit_id << (63-15));
> +
> + /* nMMU_ID=0x0B0 */
> + xsl_dsnctl |= ((u64)0x0B0 << (63-28));
> +
> + /* Used to identify CAPI packets which should be sorted into
> + * the Non-Blocking queues by the PHB. This field should match
> + * the PHB PBL_NBW_CMPM register
> + */
> + /* nbwind=0x03, bits [57:58], must include capi indicator */
> + xsl_dsnctl |= ((u64)0x03 << (63-47));
> +
> + /* Upper 16b address bits of ASB_Notify messages sent to the
> + * system. Need to match the PHB’s ASN Compare/Mask Register.
> + */
> + xsl_dsnctl |= ((u64)0x04 << (63-55));
> +
> + cxl_p1_write(adapter, CXL_XSL9_DSNCTL, xsl_dsnctl);
> +
> + /* set fir_cntl to recommended value for production env */
> + psl_fircntl = (0x2ULL << (63-3)); /* ce_report */
> + psl_fircntl |= (0x1ULL << (63-6)); /* FIR_report */
> + psl_fircntl |= 0x1ULL; /* ce_thresh */
> + cxl_p1_write(adapter, CXL_PSL9_FIR_CNTL, psl_fircntl);
> +
> + /* vccredits=0x1 pcklat=0x4 */
> + cxl_p1_write(adapter, CXL_PSL9_DSNDCTL, 0x0000000000001810);
> +
> + /* for debugging with trace arrays.
> + * Configure RX trace 0 to use global trigger. Rising edge
> + * trigger located at start of data. Use data bus 2.
> + */
> + cxl_p1_write(adapter, CXL_PSL9_TRACECFG, 0xC480000000000000ULL);
> +
> + /* A response to an ASB_Notify request is returned by the
> + * system as an MMIO write to the address defined in
> + * the PSL_TNR_ADDR register
> + */
> + /* PSL_TNR_ADDR */
> +
> + /* NORST */
> + cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x4000000000000000);
drivers/misc/cxl/pci.c:471:47: warning: constant 0x4000000000000000 is so big it is long
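An explicit ULL suffix on the literal should silence that one, e.g.:

  /* NORST - same value, just typed as unsigned long long */
  cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x4000000000000000ULL);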
> +
> + /* allocate the apc machines. For PHB0, let us keep the APC
> + * allocation setup as it is. For PHB3, we need to disable Rd
> + * machines
> + */
> + if (phb_index == 3) {
> + cxl_p1_write(adapter, CXL_PSL9_APCDEDALLOC, 0x8000808200000000);
drivers/misc/cxl/pci.c:478:61: warning: constant 0x8000808200000000 is so big it is unsigned long
> + cxl_p1_write(adapter, CXL_PSL9_APCDEDTYPE, 0x7F7FFFFFFFFF0000);
drivers/misc/cxl/pci.c:479:60: warning: constant 0x7F7FFFFFFFFF0000 is so big it is long
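Same fix for both of these, e.g.:

  /* ULL suffixes only, the values written are unchanged */
  cxl_p1_write(adapter, CXL_PSL9_APCDEDALLOC, 0x8000808200000000ULL);
  cxl_p1_write(adapter, CXL_PSL9_APCDEDTYPE, 0x7F7FFFFFFFFF0000ULL);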
> + }
> +
> + return 0;
> +}
> +
> static int init_implementation_adapter_regs_psl8(struct cxl *adapter, struct pci_dev *dev)
> {
> u64 psl_dsnctl, psl_fircntl;
> u64 chipid;
> + u32 phb_index;
> u64 capp_unit_id;
> int rc;
>
> - rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
> + rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
> if (rc)
> return rc;
>
> @@ -414,10 +519,11 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
> {
> u64 xsl_dsnctl;
> u64 chipid;
> + u32 phb_index;
> u64 capp_unit_id;
> int rc;
>
> - rc = calc_capp_routing(dev, &chipid, &capp_unit_id);
> + rc = calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id);
> if (rc)
> return rc;
>
> @@ -435,6 +541,12 @@ static int init_implementation_adapter_regs_xsl(struct cxl *adapter, struct pci_
> /* For the PSL this is a multiple for 0 < n <= 7: */
> #define PSL_2048_250MHZ_CYCLES 1
>
> +static void write_timebase_ctrl_psl9(struct cxl *adapter)
> +{
> + cxl_p1_write(adapter, CXL_PSL9_TB_CTLSTAT,
> + TBSYNC_CNT(2 * PSL_2048_250MHZ_CYCLES));
> +}
> +
> static void write_timebase_ctrl_psl8(struct cxl *adapter)
> {
> cxl_p1_write(adapter, CXL_PSL_TB_CTLSTAT,
> @@ -456,6 +568,11 @@ static void write_timebase_ctrl_xsl(struct cxl *adapter)
> TBSYNC_CNT(XSL_4000_CLOCKS));
> }
>
> +static u64 timebase_read_psl9(struct cxl *adapter)
> +{
> + return cxl_p1_read(adapter, CXL_PSL9_Timebase);
> +}
> +
> static u64 timebase_read_psl8(struct cxl *adapter)
> {
> return cxl_p1_read(adapter, CXL_PSL_Timebase);
> @@ -514,6 +631,11 @@ static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
> return;
> }
>
> +static int init_implementation_afu_regs_psl9(struct cxl_afu *afu)
> +{
> + return 0;
> +}
> +
> static int init_implementation_afu_regs_psl8(struct cxl_afu *afu)
> {
> /* read/write masks for this slice */
> @@ -612,7 +734,7 @@ static int setup_cxl_bars(struct pci_dev *dev)
> /*
> * BAR 4/5 has a special meaning for CXL and must be programmed with a
> * special value corresponding to the CXL protocol address range.
> - * For POWER 8 that means bits 48:49 must be set to 10
> + * For POWER 8/9 that means bits 48:49 must be set to 10
> */
> pci_write_config_dword(dev, PCI_BASE_ADDRESS_4, 0x00000000);
> pci_write_config_dword(dev, PCI_BASE_ADDRESS_5, 0x00020000);
> @@ -997,6 +1119,52 @@ static int cxl_afu_descriptor_looks_ok(struct cxl_afu *afu)
> return 0;
> }
>
> +static int sanitise_afu_regs_psl9(struct cxl_afu *afu)
> +{
> + u64 reg;
> +
> + /*
> + * Clear out any regs that contain either an IVTE or address or may be
> + * waiting on an acknowledgment to try to be a bit safer as we bring
> + * it online
> + */
> + reg = cxl_p2n_read(afu, CXL_AFU_Cntl_An);
> + if ((reg & CXL_AFU_Cntl_An_ES_MASK) != CXL_AFU_Cntl_An_ES_Disabled) {
> + dev_warn(&afu->dev, "WARNING: AFU was not disabled: %#016llx\n", reg);
> + if (cxl_ops->afu_reset(afu))
> + return -EIO;
> + if (cxl_afu_disable(afu))
> + return -EIO;
> + if (cxl_psl_purge(afu))
> + return -EIO;
> + }
> + cxl_p1n_write(afu, CXL_PSL_SPAP_An, 0x0000000000000000);
> + cxl_p1n_write(afu, CXL_PSL_AMBAR_An, 0x0000000000000000);
> + reg = cxl_p2n_read(afu, CXL_PSL_DSISR_An);
> + if (reg) {
> + dev_warn(&afu->dev, "AFU had pending DSISR: %#016llx\n", reg);
> + if (reg & CXL_PSL9_DSISR_An_TF)
> + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_AE);
> + else
> + cxl_p2n_write(afu, CXL_PSL_TFC_An, CXL_PSL_TFC_An_A);
> + }
> + if (afu->adapter->native->sl_ops->register_serr_irq) {
> + reg = cxl_p1n_read(afu, CXL_PSL_SERR_An);
> + if (reg) {
> + if (reg & ~0x000000007501ffff)
> + dev_warn(&afu->dev, "AFU had pending SERR: %#016llx\n", reg);
> + cxl_p1n_write(afu, CXL_PSL_SERR_An, reg & ~0xffff);
> + }
> + }
> + reg = cxl_p2n_read(afu, CXL_PSL_ErrStat_An);
> + if (reg) {
> + dev_warn(&afu->dev, "AFU had pending error status: %#016llx\n", reg);
> + cxl_p2n_write(afu, CXL_PSL_ErrStat_An, reg);
> + }
> +
> + return 0;
> +}
> +
> static int sanitise_afu_regs_psl8(struct cxl_afu *afu)
> {
> u64 reg;
> @@ -1384,6 +1552,9 @@ static bool cxl_compatible_caia_version(struct cxl *adapter)
> if (cxl_is_power8() && (adapter->caia_major == 1))
> return true;
>
> + if (cxl_is_power9() && (adapter->caia_major == 2))
> + return true;
> +
> return false;
> }
>
> @@ -1537,6 +1708,30 @@ static void cxl_deconfigure_adapter(struct cxl *adapter)
> pci_disable_device(pdev);
> }
>
> +static const struct cxl_service_layer_ops psl9_ops = {
> + .adapter_regs_init = init_implementation_adapter_regs_psl9,
> + .invalidate_all = cxl_invalidate_all_psl9,
> + .afu_regs_init = init_implementation_afu_regs_psl9,
> + .sanitise_afu_regs = sanitise_afu_regs_psl9,
> + .register_serr_irq = cxl_native_register_serr_irq,
> + .release_serr_irq = cxl_native_release_serr_irq,
> + .handle_interrupt = cxl_irq_psl9,
> + .fail_irq = cxl_fail_irq_psl9,
> + .activate_dedicated_process = cxl_activate_dedicated_process_psl9,
> + .attach_afu_directed = cxl_attach_afu_directed_psl9,
> + .attach_dedicated_process = cxl_attach_dedicated_process_psl9,
> + .update_dedicated_ivtes = cxl_update_dedicated_ivtes_psl9,
> + .debugfs_add_adapter_regs = cxl_debugfs_add_adapter_regs_psl9,
> + .debugfs_add_afu_regs = cxl_debugfs_add_afu_regs_psl9,
> + .psl_irq_dump_registers = cxl_native_irq_dump_regs_psl9,
> + .err_irq_dump_registers = cxl_native_err_irq_dump_regs,
> + .debugfs_stop_trace = cxl_stop_trace_psl9,
> + .write_timebase_ctrl = write_timebase_ctrl_psl9,
> + .timebase_read = timebase_read_psl9,
> + .capi_mode = OPAL_PHB_CAPI_MODE_CAPI,
> + .needs_reset_before_disable = true,
> +};
> +
> static const struct cxl_service_layer_ops psl8_ops = {
> .adapter_regs_init = init_implementation_adapter_regs_psl8,
> .invalidate_all = cxl_invalidate_all_psl8,
> @@ -1580,6 +1775,9 @@ static void set_sl_ops(struct cxl *adapter, struct pci_dev *dev)
> if (cxl_is_power8()) {
> dev_info(&dev->dev, "Device uses a PSL8\n");
> adapter->native->sl_ops = &psl8_ops;
> + } else {
> + dev_info(&dev->dev, "Device uses a PSL9\n");
> + adapter->native->sl_ops = &psl9_ops;
> }
> }
> }
> @@ -1732,6 +1930,11 @@ static int cxl_probe(struct pci_dev *dev, const struct pci_device_id *id)
> return -ENODEV;
> }
>
> + if (cxl_is_power9() && !radix_enabled()) {
> + dev_info(&dev->dev, "Only Radix mode supported\n");
> + return -ENODEV;
> + }
> +
> if (cxl_verbose)
> dump_cxl_config_space(dev);
>
> diff --git a/drivers/misc/cxl/trace.h b/drivers/misc/cxl/trace.h
> index 751d611..b8e300a 100644
> --- a/drivers/misc/cxl/trace.h
> +++ b/drivers/misc/cxl/trace.h
> @@ -17,6 +17,15 @@
>
> #include "cxl.h"
>
> +#define dsisr_psl9_flags(flags) \
> + __print_flags(flags, "|", \
> + { CXL_PSL9_DSISR_An_CO_MASK, "FR" }, \
> + { CXL_PSL9_DSISR_An_TF, "TF" }, \
> + { CXL_PSL9_DSISR_An_PE, "PE" }, \
> + { CXL_PSL9_DSISR_An_AE, "AE" }, \
> + { CXL_PSL9_DSISR_An_OC, "OC" }, \
> + { CXL_PSL9_DSISR_An_S, "S" })
> +
> #define DSISR_FLAGS \
> { CXL_PSL_DSISR_An_DS, "DS" }, \
> { CXL_PSL_DSISR_An_DM, "DM" }, \
> @@ -154,6 +163,40 @@ TRACE_EVENT(cxl_afu_irq,
> )
> );
>
> +TRACE_EVENT(cxl_psl9_irq,
> + TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
> +
> + TP_ARGS(ctx, irq, dsisr, dar),
> +
> + TP_STRUCT__entry(
> + __field(u8, card)
> + __field(u8, afu)
> + __field(u16, pe)
> + __field(int, irq)
> + __field(u64, dsisr)
> + __field(u64, dar)
> + ),
> +
> + TP_fast_assign(
> + __entry->card = ctx->afu->adapter->adapter_num;
> + __entry->afu = ctx->afu->slice;
> + __entry->pe = ctx->pe;
> + __entry->irq = irq;
> + __entry->dsisr = dsisr;
> + __entry->dar = dar;
> + ),
> +
> + TP_printk("afu%i.%i pe=%i irq=%i dsisr=0x%016llx dsisr=%s dar=0x%016llx",
> + __entry->card,
> + __entry->afu,
> + __entry->pe,
> + __entry->irq,
> + __entry->dsisr,
> + dsisr_psl9_flags(__entry->dsisr),
> + __entry->dar
> + )
> +);
> +
> TRACE_EVENT(cxl_psl_irq,
> TP_PROTO(struct cxl_context *ctx, int irq, u64 dsisr, u64 dar),
>
>
--
Andrew Donnellan OzLabs, ADL Canberra
andrew.donnellan@au1.ibm.com IBM Australia Limited
^ permalink raw reply [flat|nested] 17+ messages in thread