Intel-XE Archive on lore.kernel.org
* [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults
  2024-07-19 17:58 [PATCH 0/3] Update page fault queue size calculation Stuart Summers
@ 2024-07-19 17:58 ` Stuart Summers
  2024-07-19 18:10   ` Matthew Brost
  0 siblings, 1 reply; 12+ messages in thread
From: Stuart Summers @ 2024-07-19 17:58 UTC (permalink / raw)
  Cc: matthew.brost, John.C.Harrison, brian.welty, rodrigo.vivi,
	intel-xe, Stuart Summers

With the increase in the size of the recoverable page fault
queue, we want to ensure the initial messages from GuC in
the G2H buffer have space while we transfer those out to the
actual pf_queue. Bump the G2H queue size to account for this
increase in the pf_queue size.

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_ct.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 7d2e937da1d8..3135f5812827 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -105,11 +105,19 @@ ct_to_xe(struct xe_guc_ct *ct)
  * enough space to avoid backpressure on the driver. We increase the size
  * of the receive buffer (relative to the send) to ensure a G2H response
  * CTB has a landing spot.
+ *
+ * In addition to submissions, the G2H buffer needs to be able to hold
+ * enough space for recoverable page fault notifications. The number of
+ * page faults is interrupt driven and can be as much as the number of
+ * compute resources available. However, most of the actual work for these
+ * is in a separate page fault worker thread. Therefore we only need to
+ * make sure the queue has enough space to handle all of the submissions
+ * and responses and an extra buffer for incoming page faults.
  */
 
 #define CTB_DESC_SIZE		ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
 #define CTB_H2G_BUFFER_SIZE	(SZ_4K)
-#define CTB_G2H_BUFFER_SIZE	(4 * CTB_H2G_BUFFER_SIZE)
+#define CTB_G2H_BUFFER_SIZE	(16 * CTB_H2G_BUFFER_SIZE)
 #define G2H_ROOM_BUFFER_SIZE	(CTB_G2H_BUFFER_SIZE / 4)
 
 /**
-- 
2.34.1



* Re: [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults
  2024-07-19 17:58 ` [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults Stuart Summers
@ 2024-07-19 18:10   ` Matthew Brost
  2024-07-19 19:00     ` Summers, Stuart
  0 siblings, 1 reply; 12+ messages in thread
From: Matthew Brost @ 2024-07-19 18:10 UTC (permalink / raw)
  To: Stuart Summers; +Cc: John.C.Harrison, brian.welty, rodrigo.vivi, intel-xe

On Fri, Jul 19, 2024 at 05:58:28PM +0000, Stuart Summers wrote:
> With the increase in the size of the recoverable page fault
> queue, we want to ensure the initial messages from GuC in
> the G2H buffer have space while we transfer those out to the
> actual pf_queue. Bump the G2H queue size to account for this
> increase in the pf_queue size.
> 
> Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_guc_ct.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
> index 7d2e937da1d8..3135f5812827 100644
> --- a/drivers/gpu/drm/xe/xe_guc_ct.c
> +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
> @@ -105,11 +105,19 @@ ct_to_xe(struct xe_guc_ct *ct)
>   * enough space to avoid backpressure on the driver. We increase the size
>   * of the receive buffer (relative to the send) to ensure a G2H response
>   * CTB has a landing spot.
> + *
> + * In addition to submissions, the G2H buffer needs to be able to hold
> + * enough space for recoverable page fault notifications. The number of
> + * page faults is interrupt driven and can be as much as the number of
> + * compute resources available. However, most of the actual work for these
> + * is in a separate page fault worker thread. Therefore we only need to
> + * make sure the queue has enough space to handle all of the submissions
> + * and responses and an extra buffer for incoming page faults.
>   */
>  
>  #define CTB_DESC_SIZE		ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
>  #define CTB_H2G_BUFFER_SIZE	(SZ_4K)
> -#define CTB_G2H_BUFFER_SIZE	(4 * CTB_H2G_BUFFER_SIZE)
> +#define CTB_G2H_BUFFER_SIZE	(16 * CTB_H2G_BUFFER_SIZE)
>  #define G2H_ROOM_BUFFER_SIZE	(CTB_G2H_BUFFER_SIZE / 4)

So I think G2H_ROOM_BUFFER_SIZE needs to be roughly 64k as this is the
part of the CTB to sink unsolicited G2H. So how about...

#define CTB_G2H_BUFFER_SIZE	SZ_128K
#define G2H_ROOM_BUFFER_SIZE	(CTB_G2H_BUFFER_SIZE / 2)

The cost of the larger buffers like this is a little more cache
footprint, but I think we have to live with that until this is properly
fixed in the GuC.

Matt
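
As a rough sanity check on that 64k figure, here is a standalone sketch;
the per-fault message length and the EU and engine counts in it are
assumptions for illustration, not values read from the hardware or the
driver. Even a worst-case backlog of unsolicited page-fault messages fits
comfortably in half of a 128K G2H CTB:

/*
 * Standalone sketch, not driver code. PF_MSG_LEN_DW, NUM_EUS and
 * NUM_HW_ENGINES are assumed illustration values.
 */
#include <stdio.h>
#include <stdint.h>

#define PF_MSG_LEN_DW         4        /* assumed dwords per fault message */
#define NUM_EUS               1024     /* assumed fused-in EU count */
#define NUM_HW_ENGINES        64       /* assumed command streamer count */

#define CTB_G2H_BUFFER_SIZE   (128 * 1024)
#define G2H_ROOM_BUFFER_SIZE  (CTB_G2H_BUFFER_SIZE / 2)  /* 64K sinks unsolicited G2H */

int main(void)
{
    size_t backlog = (size_t)(NUM_EUS + NUM_HW_ENGINES) *
                     PF_MSG_LEN_DW * sizeof(uint32_t);

    /* ~17K of raw fault payload; CT headers and other unsolicited
     * notifications still leave plenty of slack in a 64K room. */
    printf("fault backlog %zu bytes vs. room %d bytes\n",
           backlog, G2H_ROOM_BUFFER_SIZE);
    return 0;
}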

>  
>  /**
> -- 
> 2.34.1
> 


* Re: [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults
  2024-07-19 18:10   ` Matthew Brost
@ 2024-07-19 19:00     ` Summers, Stuart
  0 siblings, 0 replies; 12+ messages in thread
From: Summers, Stuart @ 2024-07-19 19:00 UTC (permalink / raw)
  To: Brost, Matthew
  Cc: intel-xe@lists.freedesktop.org, Harrison, John C, Vivi, Rodrigo,
	Welty, Brian

On Fri, 2024-07-19 at 18:10 +0000, Matthew Brost wrote:
> On Fri, Jul 19, 2024 at 05:58:28PM +0000, Stuart Summers wrote:
> > With the increase in the size of the recoverable page fault
> > queue, we want to ensure the initial messages from GuC in
> > the G2H buffer have space while we transfer those out to the
> > actual pf_queue. Bump the G2H queue size to account for this
> > increase in the pf_queue size.
> > 
> > Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_guc_ct.c | 10 +++++++++-
> >  1 file changed, 9 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c
> > b/drivers/gpu/drm/xe/xe_guc_ct.c
> > index 7d2e937da1d8..3135f5812827 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_ct.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
> > @@ -105,11 +105,19 @@ ct_to_xe(struct xe_guc_ct *ct)
> >   * enough space to avoid backpressure on the driver. We increase
> > the size
> >   * of the receive buffer (relative to the send) to ensure a G2H
> > response
> >   * CTB has a landing spot.
> > + *
> > + * In addition to submissions, the G2H buffer needs to be able to
> > hold
> > + * enough space for recoverable page fault notifications. The
> > number of
> > + * page faults is interrupt driven and can be as much as the
> > number of
> > + * compute resources available. However, most of the actual work
> > for these
> > + * is in a separate page fault worker thread. Therefore we only
> > need to
> > + * make sure the queue has enough space to handle all of the
> > submissions
> > + * and responses and an extra buffer for incoming page faults.
> >   */
> >  
> >  #define CTB_DESC_SIZE          ALIGN(sizeof(struct
> > guc_ct_buffer_desc), SZ_2K)
> >  #define CTB_H2G_BUFFER_SIZE    (SZ_4K)
> > -#define CTB_G2H_BUFFER_SIZE    (4 * CTB_H2G_BUFFER_SIZE)
> > +#define CTB_G2H_BUFFER_SIZE    (16 * CTB_H2G_BUFFER_SIZE)
> >  #define G2H_ROOM_BUFFER_SIZE   (CTB_G2H_BUFFER_SIZE / 4)
> 
> So I think G2H_ROOM_BUFFER_SIZE needs to be roughly 64k as this is
> the part of the CTB to sink unsolicited G2H. So how about...
> 
> #define CTB_G2H_BUFFER_SIZE     SZ_128K
> #define G2H_ROOM_BUFFER_SIZE    (CTB_G2H_BUFFER_SIZE / 2)
> 
> The cost of the larger buffers like this is a little more cache
> footprint, but I think we have to live with that until this is
> properly fixed in the GuC.

Makes sense to me, I'll push a new series here shortly.

Thanks,
Stuart

> 
> Matt
> 
> >  
> >  /**
> > -- 
> > 2.34.1
> > 



* [PATCH 0/3] Update page fault queue size calculation
@ 2024-07-19 19:06 Stuart Summers
  2024-07-19 19:06 ` [PATCH 1/3] drm/xe: Fix missing workqueue destroy in xe_gt_pagefault Stuart Summers
                   ` (5 more replies)
  0 siblings, 6 replies; 12+ messages in thread
From: Stuart Summers @ 2024-07-19 19:06 UTC (permalink / raw)
  Cc: matthew.brost, John.C.Harrison, brian.welty, rodrigo.vivi,
	intel-xe, Stuart Summers

Right now the page fault queue size is hard coded with an
estimated value based on legacy platforms. Add a more precise
calculation based on the number of compute resources available
which can utilize these page fault queues.
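
As a concrete illustration of what that calculation looks like, here is a
standalone sketch with made-up fuse masks and counts; it mirrors the
per-queue arithmetic added in patch 2 rather than reproducing driver code,
and every constant in it is an assumption:

/*
 * Standalone sketch; the masks and counts are invented. The real driver
 * reads the DSS and EU masks from gt->fuse_topo.
 */
#include <stdio.h>
#include <stdint.h>

#define PF_MSG_LEN_DW    4     /* assumed dwords per fault descriptor */
#define NUM_HW_ENGINES   64    /* assumed command streamer count */

static int popcount64(uint64_t v)
{
    int n = 0;

    for (; v; v &= v - 1)
        n++;
    return n;
}

int main(void)
{
    uint64_t g_dss_mask = 0x00000000ffffffffULL;   /* 32 geometry DSS */
    uint64_t c_dss_mask = 0xffffffff00000000ULL;   /* 32 compute DSS */
    uint64_t eu_mask_per_dss = 0xffff;             /* 16 EUs per DSS */

    int num_dss = popcount64(g_dss_mask | c_dss_mask);      /* 64 */
    int num_eus = popcount64(eu_mask_per_dss) * num_dss;    /* 1024 */
    int pf_queue_num_dw = (num_eus + NUM_HW_ENGINES) * PF_MSG_LEN_DW;

    printf("pf_queue_num_dw = %d dwords (%zu bytes) per queue\n",
           pf_queue_num_dw, pf_queue_num_dw * sizeof(uint32_t));
    return 0;
}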

v2: Add a drm reset callback for the teardown changes and other
    suggestions from Matt.
v3: Add a pf_wq destroy when the access counter wq allocation
    fails (Rodrigo) and pf queue size calculation adjustment (Matt)
v4: Bump up the size of the G2H queue as well (Matt)
v5: Make the G2H buffer size 64K (Matt)

Stuart Summers (3):
  drm/xe: Fix missing workqueue destroy in xe_gt_pagefault
  drm/xe: Use topology to determine page fault queue size
  drm/xe/guc: Bump the G2H queue size to account for page faults

 drivers/gpu/drm/xe/xe_gt_pagefault.c | 72 ++++++++++++++++++++++------
 drivers/gpu/drm/xe/xe_gt_types.h     |  9 +++-
 drivers/gpu/drm/xe/xe_guc_ct.c       | 12 ++++-
 3 files changed, 75 insertions(+), 18 deletions(-)

-- 
2.34.1



* [PATCH 1/3] drm/xe: Fix missing workqueue destroy in xe_gt_pagefault
  2024-07-19 19:06 [PATCH 0/3] Update page fault queue size calculation Stuart Summers
@ 2024-07-19 19:06 ` Stuart Summers
  2024-07-19 19:06 ` [PATCH 2/3] drm/xe: Use topology to determine page fault queue size Stuart Summers
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Stuart Summers @ 2024-07-19 19:06 UTC (permalink / raw)
  Cc: matthew.brost, John.C.Harrison, brian.welty, rodrigo.vivi,
	intel-xe, Stuart Summers

On driver reload we never free up the memory for the pagefault and
access counter workqueues. Add those destroy calls here.

Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_pagefault.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index 9292d5468868..b2a7fa55bd18 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -382,6 +382,18 @@ static void pf_queue_work_func(struct work_struct *w)
 
 static void acc_queue_work_func(struct work_struct *w);
 
+static void pagefault_fini(void *arg)
+{
+	struct xe_gt *gt = arg;
+	struct xe_device *xe = gt_to_xe(gt);
+
+	if (!xe->info.has_usm)
+		return;
+
+	destroy_workqueue(gt->usm.acc_wq);
+	destroy_workqueue(gt->usm.pf_wq);
+}
+
 int xe_gt_pagefault_init(struct xe_gt *gt)
 {
 	struct xe_device *xe = gt_to_xe(gt);
@@ -409,10 +421,12 @@ int xe_gt_pagefault_init(struct xe_gt *gt)
 	gt->usm.acc_wq = alloc_workqueue("xe_gt_access_counter_work_queue",
 					 WQ_UNBOUND | WQ_HIGHPRI,
 					 NUM_ACC_QUEUE);
-	if (!gt->usm.acc_wq)
+	if (!gt->usm.acc_wq) {
+		destroy_workqueue(gt->usm.pf_wq);
 		return -ENOMEM;
+	}
 
-	return 0;
+	return devm_add_action_or_reset(xe->drm.dev, pagefault_fini, gt);
 }
 
 void xe_gt_pagefault_reset(struct xe_gt *gt)
-- 
2.34.1
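
A note on the teardown pattern used above: devm_add_action_or_reset()
registers a cleanup callback that runs automatically when the device is
unbound, and if the registration itself fails it invokes the callback
immediately and returns the error, so returning its result directly is
enough. A minimal sketch of the same shape, using hypothetical names
rather than xe driver symbols:

/*
 * Sketch of devm-managed workqueue teardown; struct my_gt, my_fini and
 * my_init are hypothetical stand-ins, not xe driver symbols.
 */
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/workqueue.h>

struct my_gt {
    struct workqueue_struct *wq;
};

static void my_fini(void *arg)
{
    struct my_gt *gt = arg;

    destroy_workqueue(gt->wq);
}

static int my_init(struct device *dev, struct my_gt *gt)
{
    gt->wq = alloc_workqueue("my_wq", WQ_UNBOUND, 0);
    if (!gt->wq)
        return -ENOMEM;

    /* my_fini(gt) runs at driver unbind; if registration fails it is
     * called right away and the error is returned. */
    return devm_add_action_or_reset(dev, my_fini, gt);
}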



* [PATCH 2/3] drm/xe: Use topology to determine page fault queue size
  2024-07-19 19:06 [PATCH 0/3] Update page fault queue size calculation Stuart Summers
  2024-07-19 19:06 ` [PATCH 1/3] drm/xe: Fix missing workqueue destroy in xe_gt_pagefault Stuart Summers
@ 2024-07-19 19:06 ` Stuart Summers
  2024-07-19 19:06 ` [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults Stuart Summers
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 12+ messages in thread
From: Stuart Summers @ 2024-07-19 19:06 UTC (permalink / raw)
  Cc: matthew.brost, John.C.Harrison, brian.welty, rodrigo.vivi,
	intel-xe, Stuart Summers

Currently the page fault queue size is hard coded. However
the hardware supports faulting for each EU and each CS.
For some applications running on hardware with a large
number of EUs and CSs, this can result in an overflow of
the page fault queue.

Add a small calculation to determine the page fault queue
size based on the number of EUs and CSs in the platform as
determined by fuses.

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
---
 drivers/gpu/drm/xe/xe_gt_pagefault.c | 54 +++++++++++++++++++++-------
 drivers/gpu/drm/xe/xe_gt_types.h     |  9 +++--
 2 files changed, 49 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_gt_pagefault.c b/drivers/gpu/drm/xe/xe_gt_pagefault.c
index b2a7fa55bd18..6bfc60c0274a 100644
--- a/drivers/gpu/drm/xe/xe_gt_pagefault.c
+++ b/drivers/gpu/drm/xe/xe_gt_pagefault.c
@@ -287,7 +287,7 @@ static bool get_pagefault(struct pf_queue *pf_queue, struct pagefault *pf)
 			PFD_VIRTUAL_ADDR_LO_SHIFT;
 
 		pf_queue->tail = (pf_queue->tail + PF_MSG_LEN_DW) %
-			PF_QUEUE_NUM_DW;
+			pf_queue->pf_queue_num_dw;
 		ret = true;
 	}
 	spin_unlock_irq(&pf_queue->lock);
@@ -299,7 +299,8 @@ static bool pf_queue_full(struct pf_queue *pf_queue)
 {
 	lockdep_assert_held(&pf_queue->lock);
 
-	return CIRC_SPACE(pf_queue->head, pf_queue->tail, PF_QUEUE_NUM_DW) <=
+	return CIRC_SPACE(pf_queue->head, pf_queue->tail,
+			  pf_queue->pf_queue_num_dw) <=
 		PF_MSG_LEN_DW;
 }
 
@@ -312,22 +313,23 @@ int xe_guc_pagefault_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	u32 asid;
 	bool full;
 
-	/*
-	 * The below logic doesn't work unless PF_QUEUE_NUM_DW % PF_MSG_LEN_DW == 0
-	 */
-	BUILD_BUG_ON(PF_QUEUE_NUM_DW % PF_MSG_LEN_DW);
-
 	if (unlikely(len != PF_MSG_LEN_DW))
 		return -EPROTO;
 
 	asid = FIELD_GET(PFD_ASID, msg[1]);
 	pf_queue = gt->usm.pf_queue + (asid % NUM_PF_QUEUE);
 
+	/*
+	 * The below logic doesn't work unless PF_QUEUE_NUM_DW % PF_MSG_LEN_DW == 0
+	 */
+	xe_gt_assert(gt, !(pf_queue->pf_queue_num_dw % PF_MSG_LEN_DW));
+
 	spin_lock_irqsave(&pf_queue->lock, flags);
 	full = pf_queue_full(pf_queue);
 	if (!full) {
 		memcpy(pf_queue->data + pf_queue->head, msg, len * sizeof(u32));
-		pf_queue->head = (pf_queue->head + len) % PF_QUEUE_NUM_DW;
+		pf_queue->head = (pf_queue->head + len) %
+			pf_queue->pf_queue_num_dw;
 		queue_work(gt->usm.pf_wq, &pf_queue->worker);
 	} else {
 		drm_warn(&xe->drm, "PF Queue full, shouldn't be possible");
@@ -386,26 +388,54 @@ static void pagefault_fini(void *arg)
 {
 	struct xe_gt *gt = arg;
 	struct xe_device *xe = gt_to_xe(gt);
+	int i;
 
 	if (!xe->info.has_usm)
 		return;
 
 	destroy_workqueue(gt->usm.acc_wq);
 	destroy_workqueue(gt->usm.pf_wq);
+
+	for (i = 0; i < NUM_PF_QUEUE; ++i)
+		kfree(gt->usm.pf_queue[i].data);
+}
+
+static int xe_alloc_pf_queue(struct xe_gt *gt, struct pf_queue *pf_queue)
+{
+	xe_dss_mask_t all_dss;
+	int num_dss, num_eus;
+
+	bitmap_or(all_dss, gt->fuse_topo.g_dss_mask, gt->fuse_topo.c_dss_mask,
+		  XE_MAX_DSS_FUSE_BITS);
+
+	num_dss = bitmap_weight(all_dss, XE_MAX_DSS_FUSE_BITS);
+	num_eus = bitmap_weight(gt->fuse_topo.eu_mask_per_dss,
+				XE_MAX_EU_FUSE_BITS) * num_dss;
+
+	/* user can issue separate page faults per EU and per CS */
+	pf_queue->pf_queue_num_dw =
+		(num_eus + XE_NUM_HW_ENGINES) * PF_MSG_LEN_DW;
+
+	pf_queue->gt = gt;
+	pf_queue->data = kzalloc(pf_queue->pf_queue_num_dw, GFP_KERNEL);
+	spin_lock_init(&pf_queue->lock);
+	INIT_WORK(&pf_queue->worker, pf_queue_work_func);
+
+	return 0;
 }
 
 int xe_gt_pagefault_init(struct xe_gt *gt)
 {
 	struct xe_device *xe = gt_to_xe(gt);
-	int i;
+	int i, ret = 0;
 
 	if (!xe->info.has_usm)
 		return 0;
 
 	for (i = 0; i < NUM_PF_QUEUE; ++i) {
-		gt->usm.pf_queue[i].gt = gt;
-		spin_lock_init(&gt->usm.pf_queue[i].lock);
-		INIT_WORK(&gt->usm.pf_queue[i].worker, pf_queue_work_func);
+		ret = xe_alloc_pf_queue(gt, &gt->usm.pf_queue[i]);
+		if (ret)
+			return ret;
 	}
 	for (i = 0; i < NUM_ACC_QUEUE; ++i) {
 		gt->usm.acc_queue[i].gt = gt;
diff --git a/drivers/gpu/drm/xe/xe_gt_types.h b/drivers/gpu/drm/xe/xe_gt_types.h
index ef68c4a92972..f2a0bd19260b 100644
--- a/drivers/gpu/drm/xe/xe_gt_types.h
+++ b/drivers/gpu/drm/xe/xe_gt_types.h
@@ -238,9 +238,14 @@ struct xe_gt {
 		struct pf_queue {
 			/** @usm.pf_queue.gt: back pointer to GT */
 			struct xe_gt *gt;
-#define PF_QUEUE_NUM_DW	128
 			/** @usm.pf_queue.data: data in the page fault queue */
-			u32 data[PF_QUEUE_NUM_DW];
+			u32 *data;
+			/**
+			 * @usm.pf_queue_num_dw: number of DWORDS in the page
+			 * fault queue. Dynamically calculated based on the number
+			 * of compute resources available.
+			 */
+			u32 pf_queue_num_dw;
 			/**
 			 * @usm.pf_queue.tail: tail pointer in DWs for page fault queue,
 			 * moved by worker which processes faults (consumer).
-- 
2.34.1
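
One detail from the xe_guc_pagefault_handler hunk above is worth spelling
out: head and tail are dword indices into a ring whose size is a whole
multiple of the message length, so a queued fault never straddles the wrap
point and the plain modulo advance stays correct. Below is a toy,
self-contained version of that producer step; the sizes are made up and it
is illustrative only, not driver code:

/* Toy dword ring mirroring the pf_queue producer; all sizes are made up. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define MSG_LEN_DW    4
#define RING_NUM_DW   (16 * MSG_LEN_DW)   /* multiple of MSG_LEN_DW */

struct ring {
    uint32_t data[RING_NUM_DW];
    unsigned int head, tail;              /* dword indices */
};

/* Free dwords, keeping one slot open so head == tail means empty. */
static unsigned int ring_space(const struct ring *r)
{
    return (r->tail + RING_NUM_DW - r->head - 1) % RING_NUM_DW;
}

static int ring_push(struct ring *r, const uint32_t *msg)
{
    if (ring_space(r) < MSG_LEN_DW)
        return -1;                        /* full, caller drops the fault */

    /* head is always a multiple of MSG_LEN_DW and RING_NUM_DW is too,
     * so this copy never wraps past the end of data[]. */
    memcpy(&r->data[r->head], msg, MSG_LEN_DW * sizeof(uint32_t));
    r->head = (r->head + MSG_LEN_DW) % RING_NUM_DW;
    return 0;
}

int main(void)
{
    struct ring r = { .head = 0, .tail = 0 };
    uint32_t msg[MSG_LEN_DW] = { 0xdeadbeef, 1, 2, 3 };
    int queued = 0;

    while (!ring_push(&r, msg))
        queued++;
    printf("queued %d messages before the ring filled\n", queued);
    return 0;
}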



* [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults
  2024-07-19 19:06 [PATCH 0/3] Update page fault queue size calculation Stuart Summers
  2024-07-19 19:06 ` [PATCH 1/3] drm/xe: Fix missing workqueue destroy in xe_gt_pagefault Stuart Summers
  2024-07-19 19:06 ` [PATCH 2/3] drm/xe: Use topology to determine page fault queue size Stuart Summers
@ 2024-07-19 19:06 ` Stuart Summers
  2024-07-19 23:59   ` Matthew Brost
  2024-07-19 19:11 ` ✓ CI.Patch_applied: success for Update page fault queue size calculation (rev4) Patchwork
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 12+ messages in thread
From: Stuart Summers @ 2024-07-19 19:06 UTC (permalink / raw)
  Cc: matthew.brost, John.C.Harrison, brian.welty, rodrigo.vivi,
	intel-xe, Stuart Summers

With the increase in the size of the recoverable page fault
queue, we want to ensure the initial messages from GuC in
the G2H buffer have space while we transfer those out to the
actual pf_queue. Bump the G2H queue size to account for this
increase in the pf_queue size.

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_ct.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
index 7d2e937da1d8..a3e9dd71f957 100644
--- a/drivers/gpu/drm/xe/xe_guc_ct.c
+++ b/drivers/gpu/drm/xe/xe_guc_ct.c
@@ -105,12 +105,20 @@ ct_to_xe(struct xe_guc_ct *ct)
  * enough space to avoid backpressure on the driver. We increase the size
  * of the receive buffer (relative to the send) to ensure a G2H response
  * CTB has a landing spot.
+ *
+ * In addition to submissions, the G2H buffer needs to be able to hold
+ * enough space for recoverable page fault notifications. The number of
+ * page faults is interrupt driven and can be as much as the number of
+ * compute resources available. However, most of the actual work for these
+ * is in a separate page fault worker thread. Therefore we only need to
+ * make sure the queue has enough space to handle all of the submissions
+ * and responses and an extra buffer for incoming page faults.
  */
 
 #define CTB_DESC_SIZE		ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
 #define CTB_H2G_BUFFER_SIZE	(SZ_4K)
-#define CTB_G2H_BUFFER_SIZE	(4 * CTB_H2G_BUFFER_SIZE)
-#define G2H_ROOM_BUFFER_SIZE	(CTB_G2H_BUFFER_SIZE / 4)
+#define CTB_G2H_BUFFER_SIZE	(SZ_128K)
+#define G2H_ROOM_BUFFER_SIZE	(CTB_G2H_BUFFER_SIZE / 2)
 
 /**
  * xe_guc_ct_queue_proc_time_jiffies - Return maximum time to process a full
-- 
2.34.1



* ✓ CI.Patch_applied: success for Update page fault queue size calculation (rev4)
  2024-07-19 19:06 [PATCH 0/3] Update page fault queue size calculation Stuart Summers
                   ` (2 preceding siblings ...)
  2024-07-19 19:06 ` [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults Stuart Summers
@ 2024-07-19 19:11 ` Patchwork
  2024-07-19 19:12 ` ✓ CI.checkpatch: " Patchwork
  2024-07-19 19:12 ` ✗ CI.KUnit: failure " Patchwork
  5 siblings, 0 replies; 12+ messages in thread
From: Patchwork @ 2024-07-19 19:11 UTC (permalink / raw)
  To: Stuart Summers; +Cc: intel-xe

== Series Details ==

Series: Update page fault queue size calculation (rev4)
URL   : https://patchwork.freedesktop.org/series/135915/
State : success

== Summary ==

=== Applying kernel patches on branch 'drm-tip' with base: ===
Base commit: eb6045a759ea drm-tip: 2024y-07m-19d-11h-07m-10s UTC integration manifest
=== git am output follows ===
Applying: drm/xe: Fix missing workqueue destroy in xe_gt_pagefault
Applying: drm/xe: Use topology to determine page fault queue size
Applying: drm/xe/guc: Bump the G2H queue size to account for page faults




* ✓ CI.checkpatch: success for Update page fault queue size calculation (rev4)
  2024-07-19 19:06 [PATCH 0/3] Update page fault queue size calculation Stuart Summers
                   ` (3 preceding siblings ...)
  2024-07-19 19:11 ` ✓ CI.Patch_applied: success for Update page fault queue size calculation (rev4) Patchwork
@ 2024-07-19 19:12 ` Patchwork
  2024-07-19 19:12 ` ✗ CI.KUnit: failure " Patchwork
  5 siblings, 0 replies; 12+ messages in thread
From: Patchwork @ 2024-07-19 19:12 UTC (permalink / raw)
  To: Stuart Summers; +Cc: intel-xe

== Series Details ==

Series: Update page fault queue size calculation (rev4)
URL   : https://patchwork.freedesktop.org/series/135915/
State : success

== Summary ==

+ KERNEL=/kernel
+ git clone https://gitlab.freedesktop.org/drm/maintainer-tools mt
Cloning into 'mt'...
warning: redirecting to https://gitlab.freedesktop.org/drm/maintainer-tools.git/
+ git -C mt rev-list -n1 origin/master
5ce3e132caaa5b45e5e50201b574a097d130967c
+ cd /kernel
+ git config --global --add safe.directory /kernel
+ git log -n1
commit e369b6df5c185421156c87ac70e4eadb812f325d
Author: Stuart Summers <stuart.summers@intel.com>
Date:   Fri Jul 19 19:06:14 2024 +0000

    drm/xe/guc: Bump the G2H queue size to account for page faults
    
    With the increase in the size of the recoverable page fault
    queue, we want to ensure the initial messages from GuC in
    the G2H buffer have space while we transfer those out to the
    actual pf_queue. Bump the G2H queue size to account for this
    increase in the pf_queue size.
    
    Signed-off-by: Stuart Summers <stuart.summers@intel.com>
+ /mt/dim checkpatch eb6045a759ea13e8d159bdaea423e904b9e3717b drm-intel
f1a45d713834 drm/xe: Fix missing workqueue destroy in xe_gt_pagefault
3b26ccd9e9fd drm/xe: Use topology to determine page fault queue size
e369b6df5c18 drm/xe/guc: Bump the G2H queue size to account for page faults




* ✗ CI.KUnit: failure for Update page fault queue size calculation (rev4)
  2024-07-19 19:06 [PATCH 0/3] Update page fault queue size calculation Stuart Summers
                   ` (4 preceding siblings ...)
  2024-07-19 19:12 ` ✓ CI.checkpatch: " Patchwork
@ 2024-07-19 19:12 ` Patchwork
  5 siblings, 0 replies; 12+ messages in thread
From: Patchwork @ 2024-07-19 19:12 UTC (permalink / raw)
  To: Stuart Summers; +Cc: intel-xe

== Series Details ==

Series: Update page fault queue size calculation (rev4)
URL   : https://patchwork.freedesktop.org/series/135915/
State : failure

== Summary ==

+ trap cleanup EXIT
+ /kernel/tools/testing/kunit/kunit.py run --kunitconfig /kernel/drivers/gpu/drm/xe/.kunitconfig
ERROR:root:In file included from ../drivers/gpu/drm/drm_atomic.c:46:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_atomic_uapi.c:43:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_blend.c:36:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_bridge.c:38:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_client.c:23:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_color_mgmt.c:32:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_client_modeset.c:26:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_connector.c:41:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_crtc.c:52:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_displayid.c:9:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_drv.c:50:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_eld.c:11:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_dumb_buffers.c:31:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_edid.c:49:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_encoder.c:32:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_file.c:48:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_framebuffer.c:38:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_ioctl.c:43:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_lease.c:15:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_mode_config.c:34:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_mode_object.c:33:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_modes.c:50:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_plane.c:36:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_property.c:33:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_sysfs.c:34:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
../lib/iomap.c:156:5: warning: no previous prototype for ‘ioread64_lo_hi’ [-Wmissing-prototypes]
  156 | u64 ioread64_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:163:5: warning: no previous prototype for ‘ioread64_hi_lo’ [-Wmissing-prototypes]
  163 | u64 ioread64_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~
../lib/iomap.c:170:5: warning: no previous prototype for ‘ioread64be_lo_hi’ [-Wmissing-prototypes]
  170 | u64 ioread64be_lo_hi(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:178:5: warning: no previous prototype for ‘ioread64be_hi_lo’ [-Wmissing-prototypes]
  178 | u64 ioread64be_hi_lo(const void __iomem *addr)
      |     ^~~~~~~~~~~~~~~~
../lib/iomap.c:264:6: warning: no previous prototype for ‘iowrite64_lo_hi’ [-Wmissing-prototypes]
  264 | void iowrite64_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:272:6: warning: no previous prototype for ‘iowrite64_hi_lo’ [-Wmissing-prototypes]
  272 | void iowrite64_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~
../lib/iomap.c:280:6: warning: no previous prototype for ‘iowrite64be_lo_hi’ [-Wmissing-prototypes]
  280 | void iowrite64be_lo_hi(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
../lib/iomap.c:288:6: warning: no previous prototype for ‘iowrite64be_hi_lo’ [-Wmissing-prototypes]
  288 | void iowrite64be_hi_lo(u64 val, void __iomem *addr)
      |      ^~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_debugfs.c:45:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_atomic_helper.c:48:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/gpu/drm/drm_fb_helper.c:47:
../drivers/gpu/drm/drm_crtc_internal.h:322:6: warning: no previous prototype for ‘drm_panic_is_enabled’ [-Wmissing-prototypes]
  322 | bool drm_panic_is_enabled(struct drm_device *dev) {return false; }
      |      ^~~~~~~~~~~~~~~~~~~~
ld: drivers/gpu/drm/drm_atomic_uapi.o: in function `drm_panic_is_enabled':
drm_atomic_uapi.c:(.text+0x1120): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_blend.o: in function `drm_panic_is_enabled':
drm_blend.c:(.text+0x890): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_bridge.o: in function `drm_panic_is_enabled':
drm_bridge.c:(.text+0x1270): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_client.o: in function `drm_panic_is_enabled':
drm_client.c:(.text+0xbd0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_client_modeset.o: in function `drm_panic_is_enabled':
drm_client_modeset.c:(.text+0x2bb0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_color_mgmt.o: in function `drm_panic_is_enabled':
drm_color_mgmt.c:(.text+0x520): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_connector.o: in function `drm_panic_is_enabled':
drm_connector.c:(.text+0x2ae0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_crtc.o: in function `drm_panic_is_enabled':
drm_crtc.c:(.text+0xd50): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_displayid.o: in function `drm_panic_is_enabled':
drm_displayid.c:(.text+0x0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_drv.o: in function `drm_panic_is_enabled':
drm_drv.c:(.text+0x1500): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_dumb_buffers.o: in function `drm_panic_is_enabled':
drm_dumb_buffers.c:(.text+0x0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_edid.o: in function `drm_panic_is_enabled':
drm_edid.c:(.text+0x7fb0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_eld.o: in function `drm_panic_is_enabled':
drm_eld.c:(.text+0xa0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_encoder.o: in function `drm_panic_is_enabled':
drm_encoder.c:(.text+0x620): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_file.o: in function `drm_panic_is_enabled':
drm_file.c:(.text+0xc20): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_framebuffer.o: in function `drm_panic_is_enabled':
drm_framebuffer.c:(.text+0xc30): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_ioctl.o: in function `drm_panic_is_enabled':
drm_ioctl.c:(.text+0x1080): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_lease.o: in function `drm_panic_is_enabled':
drm_lease.c:(.text+0x210): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_mode_config.o: in function `drm_panic_is_enabled':
drm_mode_config.c:(.text+0xfe0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_mode_object.o: in function `drm_panic_is_enabled':
drm_mode_object.c:(.text+0x8b0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_modes.o: in function `drm_panic_is_enabled':
drm_modes.c:(.text+0x2e10): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_plane.o: in function `drm_panic_is_enabled':
drm_plane.c:(.text+0x20a0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_property.o: in function `drm_panic_is_enabled':
drm_property.c:(.text+0xe10): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_sysfs.o: in function `drm_panic_is_enabled':
drm_sysfs.c:(.text+0x8b0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_debugfs.o: in function `drm_panic_is_enabled':
drm_debugfs.c:(.text+0x1370): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_atomic_helper.o: in function `drm_panic_is_enabled':
drm_atomic_helper.c:(.text+0x69c0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
ld: drivers/gpu/drm/drm_fb_helper.o: in function `drm_panic_is_enabled':
drm_fb_helper.c:(.text+0x2ea0): multiple definition of `drm_panic_is_enabled'; drivers/gpu/drm/drm_atomic.o:drm_atomic.c:(.text+0x3130): first defined here
make[3]: *** [../scripts/Makefile.vmlinux_o:62: vmlinux.o] Error 1
make[2]: *** [/kernel/Makefile:1152: vmlinux_o] Error 2
make[1]: *** [/kernel/Makefile:240: __sub-make] Error 2
make: *** [Makefile:240: __sub-make] Error 2

[19:12:05] Configuring KUnit Kernel ...
Generating .config ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
[19:12:09] Building KUnit Kernel ...
Populating config with:
$ make ARCH=um O=.kunit olddefconfig
Building with:
$ make ARCH=um O=.kunit --jobs=48
+ cleanup
++ stat -c %u:%g /kernel
+ chown -R 1003:1003 /kernel




* Re: [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults
  2024-07-19 19:06 ` [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults Stuart Summers
@ 2024-07-19 23:59   ` Matthew Brost
  2024-07-23 14:13     ` Summers, Stuart
  0 siblings, 1 reply; 12+ messages in thread
From: Matthew Brost @ 2024-07-19 23:59 UTC (permalink / raw)
  To: Stuart Summers; +Cc: John.C.Harrison, brian.welty, rodrigo.vivi, intel-xe

On Fri, Jul 19, 2024 at 07:06:14PM +0000, Stuart Summers wrote:
> With the increase in the size of the recoverable page fault
> queue, we want to ensure the initial messages from GuC in
> the G2H buffer have space while we transfer those out to the
> actual pf_queue. Bump the G2H queue size to account for this
> increase in the pf_queue size.
> 

For future reference, include a change log to help reviewers.

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_guc_ct.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
> index 7d2e937da1d8..a3e9dd71f957 100644
> --- a/drivers/gpu/drm/xe/xe_guc_ct.c
> +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
> @@ -105,12 +105,20 @@ ct_to_xe(struct xe_guc_ct *ct)
>   * enough space to avoid backpressure on the driver. We increase the size
>   * of the receive buffer (relative to the send) to ensure a G2H response
>   * CTB has a landing spot.
> + *
> + * In addition to submissions, the G2H buffer needs to be able to hold
> + * enough space for recoverable page fault notifications. The number of
> + * page faults is interrupt driven and can be as much as the number of
> + * compute resources available. However, most of the actual work for these
> + * is in a separate page fault worker thread. Therefore we only need to
> + * make sure the queue has enough space to handle all of the submissions
> + * and responses and an extra buffer for incoming page faults.
>   */
>  
>  #define CTB_DESC_SIZE		ALIGN(sizeof(struct guc_ct_buffer_desc), SZ_2K)
>  #define CTB_H2G_BUFFER_SIZE	(SZ_4K)
> -#define CTB_G2H_BUFFER_SIZE	(4 * CTB_H2G_BUFFER_SIZE)
> -#define G2H_ROOM_BUFFER_SIZE	(CTB_G2H_BUFFER_SIZE / 4)
> +#define CTB_G2H_BUFFER_SIZE	(SZ_128K)
> +#define G2H_ROOM_BUFFER_SIZE	(CTB_G2H_BUFFER_SIZE / 2)
>  
>  /**
>   * xe_guc_ct_queue_proc_time_jiffies - Return maximum time to process a full
> -- 
> 2.34.1
> 


* Re: [PATCH 3/3] drm/xe/guc: Bump the G2H queue size to account for page faults
  2024-07-19 23:59   ` Matthew Brost
@ 2024-07-23 14:13     ` Summers, Stuart
  0 siblings, 0 replies; 12+ messages in thread
From: Summers, Stuart @ 2024-07-23 14:13 UTC (permalink / raw)
  To: Brost, Matthew
  Cc: intel-xe@lists.freedesktop.org, Harrison, John C, Vivi, Rodrigo,
	Welty, Brian

On Fri, 2024-07-19 at 23:59 +0000, Matthew Brost wrote:
> On Fri, Jul 19, 2024 at 07:06:14PM +0000, Stuart Summers wrote:
> > With the increase in the size of the recoverable page fault
> > queue, we want to ensure the initial messages from GuC in
> > the G2H buffer have space while we transfer those out to the
> > actual pf_queue. Bump the G2H queue size to account for this
> > increase in the pf_queue size.
> > 
> 
> For future reference, include a change log to help reviewers.

Sure, makes sense. I had included the change log in the cover letter but
decided not to include it here. I'll do that in the future.

> 
> Reviewed-by: Matthew Brost <matthew.brost@intel.com>

Thanks for the reviews Matt!

> 
> > Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_guc_ct.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c
> > b/drivers/gpu/drm/xe/xe_guc_ct.c
> > index 7d2e937da1d8..a3e9dd71f957 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_ct.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_ct.c
> > @@ -105,12 +105,20 @@ ct_to_xe(struct xe_guc_ct *ct)
> >   * enough space to avoid backpressure on the driver. We increase
> > the size
> >   * of the receive buffer (relative to the send) to ensure a G2H
> > response
> >   * CTB has a landing spot.
> > + *
> > + * In addition to submissions, the G2H buffer needs to be able to
> > hold
> > + * enough space for recoverable page fault notifications. The
> > number of
> > + * page faults is interrupt driven and can be as much as the
> > number of
> > + * compute resources available. However, most of the actual work
> > for these
> > + * is in a separate page fault worker thread. Therefore we only
> > need to
> > + * make sure the queue has enough space to handle all of the
> > submissions
> > + * and responses and an extra buffer for incoming page faults.
> >   */
> >  
> >  #define CTB_DESC_SIZE          ALIGN(sizeof(struct
> > guc_ct_buffer_desc), SZ_2K)
> >  #define CTB_H2G_BUFFER_SIZE    (SZ_4K)
> > -#define CTB_G2H_BUFFER_SIZE    (4 * CTB_H2G_BUFFER_SIZE)
> > -#define G2H_ROOM_BUFFER_SIZE   (CTB_G2H_BUFFER_SIZE / 4)
> > +#define CTB_G2H_BUFFER_SIZE    (SZ_128K)
> > +#define G2H_ROOM_BUFFER_SIZE   (CTB_G2H_BUFFER_SIZE / 2)
> >  
> >  /**
> >   * xe_guc_ct_queue_proc_time_jiffies - Return maximum time to
> > process a full
> > -- 
> > 2.34.1
> > 


