* [PATCH 01/22] drm/i915: check for kernel_context
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 17:37 ` Chris Wilson
2019-09-27 17:33 ` [PATCH 02/22] drm/i915: simplify i915_gem_init_early Matthew Auld
` (24 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
Explosions during early driver init on the error path. Make sure we fail
gracefully.
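[Editor's note: the shape of the fix, distilled into a standalone sketch. The mock_* names are invented for illustration and are not the driver's API; the point is that a common cleanup path which can run before setup completes must check before unpinning.]

```c
#include <stdlib.h>

/* Toy stand-ins for intel_context and its refcounted release; the real
 * structures in gt/intel_context.h are far richer. All names here are
 * illustrative, not the driver's. */
struct mock_context {
	int pin_count;
	int ref_count;
};

static void mock_context_unpin(struct mock_context *ce)
{
	ce->pin_count--;
}

static void mock_context_put(struct mock_context *ce)
{
	if (--ce->ref_count == 0)
		free(ce);
}

/* The shape of the fix: cleanup must tolerate kernel_context being
 * NULL, because engine setup can fail before it is ever created. */
static void mock_engine_cleanup(struct mock_context *kernel_context)
{
	if (kernel_context) {
		mock_context_unpin(kernel_context);
		mock_context_put(kernel_context);
	}
}
```

Without the guard, the error path below dereferences a NULL engine->kernel_context at offset 0x7c, which is exactly the faulting address in the oops.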
[ 9547.672258] BUG: kernel NULL pointer dereference, address: 000000000000007c
[ 9547.672288] #PF: supervisor read access in kernel mode
[ 9547.672292] #PF: error_code(0x0000) - not-present page
[ 9547.672296] PGD 8000000846b41067 P4D 8000000846b41067 PUD 797034067 PMD 0
[ 9547.672303] Oops: 0000 [#1] SMP PTI
[ 9547.672307] CPU: 1 PID: 25634 Comm: i915_selftest Tainted: G U 5.3.0-rc8+ #73
[ 9547.672313] Hardware name: /NUC6i7KYB, BIOS KYSKLi70.86A.0050.2017.0831.1924 08/31/2017
[ 9547.672395] RIP: 0010:intel_context_unpin+0x9/0x100 [i915]
[ 9547.672400] Code: 6b 60 00 e9 17 ff ff ff bd fc ff ff ff e9 7c ff ff ff 66 66 2e 0f 1f 84 00 00 00 00
00 0f 1f 40 00 0f 1f 44 00 00 41 54 55 53 <8b> 47 7c 83 f8 01 74 26 8d 48 ff f0 0f b1 4f 7c 48 8d 57 7c
75 05
[ 9547.672413] RSP: 0018:ffffae8ac24ff878 EFLAGS: 00010246
[ 9547.672417] RAX: ffff944a1b7842d0 RBX: ffff944a1b784000 RCX: ffff944a12dd6fa8
[ 9547.672422] RDX: ffff944a1b7842c0 RSI: ffff944a12dd5328 RDI: 0000000000000000
[ 9547.672428] RBP: 0000000000000000 R08: ffff944a11e5d840 R09: 0000000000000000
[ 9547.672433] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
[ 9547.672438] R13: ffffffffc11aaf00 R14: 00000000ffffffe4 R15: ffff944a0e29bf38
[ 9547.672443] FS: 00007fc259b88ac0(0000) GS:ffff944a1f880000(0000) knlGS:0000000000000000
[ 9547.672449] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 9547.672454] CR2: 000000000000007c CR3: 0000000853346003 CR4: 00000000003606e0
[ 9547.672459] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 9547.672464] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 9547.672469] Call Trace:
[ 9547.672518] intel_engine_cleanup_common+0xe3/0x270 [i915]
[ 9547.672567] execlists_destroy+0xe/0x30 [i915]
[ 9547.672669] intel_engines_init+0x94/0xf0 [i915]
[ 9547.672749] i915_gem_init+0x191/0x950 [i915]
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/gt/intel_engine_cs.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index f451d5076bde..f97686bdc28b 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -820,8 +820,11 @@ void intel_engine_cleanup_common(struct intel_engine_cs *engine)
if (engine->default_state)
i915_gem_object_put(engine->default_state);
- intel_context_unpin(engine->kernel_context);
- intel_context_put(engine->kernel_context);
+ if (engine->kernel_context) {
+ intel_context_unpin(engine->kernel_context);
+ intel_context_put(engine->kernel_context);
+ }
+
GEM_BUG_ON(!llist_empty(&engine->barrier_tasks));
intel_wa_list_free(&engine->ctx_wa_list);
--
2.20.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: [PATCH 01/22] drm/i915: check for kernel_context
2019-09-27 17:33 ` [PATCH 01/22] drm/i915: check for kernel_context Matthew Auld
@ 2019-09-27 17:37 ` Chris Wilson
0 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 17:37 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:48)
> Explosions during early driver init on the error path. Make sure we fail
> gracefully.
Joonas would complain about the clearly-not-onion unwind here, but we
have thrown it in as a catch-all cleanup for what is quite a complicated
setup.
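[Editor's note: the "onion" unwind Chris alludes to is the kernel idiom of unwinding in exact reverse order of construction via cascading goto labels. A minimal standalone sketch, with all names invented for illustration:]

```c
#include <stdlib.h>

/* Illustrative resources; names are invented for the sketch. */
static int setup_a(void **a) { *a = malloc(16); return *a ? 0 : -12; }
static int setup_b(void **b) { *b = malloc(16); return *b ? 0 : -12; }
static int setup_c(void **c) { (void)c; return -19; } /* simulated failure */

/* "Onion" unwind: each error label undoes exactly the steps that
 * succeeded, in reverse order of setup, so no step is ever torn down
 * before it was built and nothing is leaked. */
static int onion_init(void)
{
	void *a, *b, *c;
	int ret;

	ret = setup_a(&a);
	if (ret)
		return ret;

	ret = setup_b(&b);
	if (ret)
		goto err_a;

	ret = setup_c(&c);
	if (ret)
		goto err_b;

	return 0; /* never reached here: setup_c always fails in the sketch */

err_b:
	free(b);
err_a:
	free(a);
	return ret;
}
```

The catch-all NULL check in the patch trades that strictness for simpler handling of a complicated setup.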
> [ 9547.672258] BUG: kernel NULL pointer dereference, address: 000000000000007c
> [ 9547.672288] #PF: supervisor read access in kernel mode
> [ 9547.672292] #PF: error_code(0x0000) - not-present page
> [ 9547.672296] PGD 8000000846b41067 P4D 8000000846b41067 PUD 797034067 PMD 0
> [ 9547.672303] Oops: 0000 [#1] SMP PTI
> [ 9547.672307] CPU: 1 PID: 25634 Comm: i915_selftest Tainted: G U 5.3.0-rc8+ #73
> [ 9547.672313] Hardware name: /NUC6i7KYB, BIOS KYSKLi70.86A.0050.2017.0831.1924 08/31/2017
> [ 9547.672395] RIP: 0010:intel_context_unpin+0x9/0x100 [i915]
> [ 9547.672400] Code: 6b 60 00 e9 17 ff ff ff bd fc ff ff ff e9 7c ff ff ff 66 66 2e 0f 1f 84 00 00 00 00
> 00 0f 1f 40 00 0f 1f 44 00 00 41 54 55 53 <8b> 47 7c 83 f8 01 74 26 8d 48 ff f0 0f b1 4f 7c 48 8d 57 7c
> 75 05
> [ 9547.672413] RSP: 0018:ffffae8ac24ff878 EFLAGS: 00010246
> [ 9547.672417] RAX: ffff944a1b7842d0 RBX: ffff944a1b784000 RCX: ffff944a12dd6fa8
> [ 9547.672422] RDX: ffff944a1b7842c0 RSI: ffff944a12dd5328 RDI: 0000000000000000
> [ 9547.672428] RBP: 0000000000000000 R08: ffff944a11e5d840 R09: 0000000000000000
> [ 9547.672433] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
> [ 9547.672438] R13: ffffffffc11aaf00 R14: 00000000ffffffe4 R15: ffff944a0e29bf38
> [ 9547.672443] FS: 00007fc259b88ac0(0000) GS:ffff944a1f880000(0000) knlGS:0000000000000000
> [ 9547.672449] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 9547.672454] CR2: 000000000000007c CR3: 0000000853346003 CR4: 00000000003606e0
> [ 9547.672459] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> [ 9547.672464] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> [ 9547.672469] Call Trace:
> [ 9547.672518] intel_engine_cleanup_common+0xe3/0x270 [i915]
> [ 9547.672567] execlists_destroy+0xe/0x30 [i915]
> [ 9547.672669] intel_engines_init+0x94/0xf0 [i915]
> [ 9547.672749] i915_gem_init+0x191/0x950 [i915]
>
> Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
-Chris
* [PATCH 02/22] drm/i915: simplify i915_gem_init_early
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
2019-09-27 17:33 ` [PATCH 01/22] drm/i915: check for kernel_context Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 17:39 ` Chris Wilson
2019-09-27 17:33 ` [PATCH 03/22] drm/i915: introduce intel_memory_region Matthew Auld
` (23 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
i915_gem_init_early doesn't need to return anything.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_drv.c | 5 +----
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 4 +---
3 files changed, 3 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index a9ee73b61f4d..91aae56b4280 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -589,9 +589,7 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
intel_gt_init_early(&dev_priv->gt, dev_priv);
- ret = i915_gem_init_early(dev_priv);
- if (ret < 0)
- goto err_gt;
+ i915_gem_init_early(dev_priv);
/* This must be called before any calls to HAS_PCH_* */
intel_detect_pch(dev_priv);
@@ -613,7 +611,6 @@ static int i915_driver_early_probe(struct drm_i915_private *dev_priv)
err_gem:
i915_gem_cleanup_early(dev_priv);
-err_gt:
intel_gt_driver_late_release(&dev_priv->gt);
vlv_free_s0ix_state(dev_priv);
err_workqueues:
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b3c7dbc1832a..0dc504fc6ffc 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2250,7 +2250,7 @@ int i915_getparam_ioctl(struct drm_device *dev, void *data,
int i915_gem_init_userptr(struct drm_i915_private *dev_priv);
void i915_gem_cleanup_userptr(struct drm_i915_private *dev_priv);
void i915_gem_sanitize(struct drm_i915_private *i915);
-int i915_gem_init_early(struct drm_i915_private *dev_priv);
+void i915_gem_init_early(struct drm_i915_private *dev_priv);
void i915_gem_cleanup_early(struct drm_i915_private *dev_priv);
int i915_gem_freeze(struct drm_i915_private *dev_priv);
int i915_gem_freeze_late(struct drm_i915_private *dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e2897a666225..3d3fda4cae99 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1533,7 +1533,7 @@ static void i915_gem_init__mm(struct drm_i915_private *i915)
i915_gem_init__objects(i915);
}
-int i915_gem_init_early(struct drm_i915_private *dev_priv)
+void i915_gem_init_early(struct drm_i915_private *dev_priv)
{
int err;
@@ -1545,8 +1545,6 @@ int i915_gem_init_early(struct drm_i915_private *dev_priv)
err = i915_gemfs_init(dev_priv);
if (err)
DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n", err);
-
- return 0;
}
void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
--
2.20.1
* [PATCH 03/22] drm/i915: introduce intel_memory_region
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
2019-09-27 17:33 ` [PATCH 01/22] drm/i915: check for kernel_context Matthew Auld
2019-09-27 17:33 ` [PATCH 02/22] drm/i915: simplify i915_gem_init_early Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 18:08 ` Chris Wilson
` (4 more replies)
2019-09-27 17:33 ` [PATCH 04/22] drm/i915/region: support continuous allocations Matthew Auld
` (22 subsequent siblings)
25 siblings, 5 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
Support memory regions, as defined by a given (start, end), and allow
creating GEM objects which are backed by said region. The immediate goal
here is to have something to represent our device memory, but later on
we also want to represent every memory domain with a region, so stolen,
shmem, and of course device. At some point we are probably going to want
to use a common struct here, such that we are better aligned with, say, TTM.
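[Editor's note: one piece of the allocator's behaviour is worth spelling out. __intel_memory_region_get_pages_buddy in the diff services a request by greedily taking the largest power-of-two block that fits the remaining page count (order = fls(n_pages) - 1), retrying at smaller orders if the mm is fragmented. The core arithmetic, sketched standalone with invented helper names:]

```c
/* 1-based index of the highest set bit, like the kernel's fls();
 * returns 0 for 0. */
static int fls_sketch(unsigned long x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Greedy split of n_pages (in chunk-size units) into power-of-two
 * blocks, largest first -- the starting-order choice made by
 * __intel_memory_region_get_pages_buddy. Ignores the fallback to
 * smaller orders that the real code needs on a fragmented mm. */
static int split_into_orders(unsigned long n_pages, int *orders, int max)
{
	int n = 0;

	while (n_pages && n < max) {
		int order = fls_sketch(n_pages) - 1;

		orders[n++] = order;
		n_pages -= 1UL << order;
	}
	return n;
}
```

So a 13-chunk object is carved into 8 + 4 + 1 chunks (orders 3, 2, 0), and contiguous blocks are then coalesced into single sg_table entries by the get_pages path.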
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
drivers/gpu/drm/i915/Makefile | 2 +
.../gpu/drm/i915/gem/i915_gem_object_types.h | 9 +
drivers/gpu/drm/i915/gem/i915_gem_region.c | 133 ++++++++++++++
drivers/gpu/drm/i915/gem/i915_gem_region.h | 28 +++
.../gpu/drm/i915/gem/selftests/huge_pages.c | 78 ++++++++
drivers/gpu/drm/i915/i915_drv.h | 1 +
drivers/gpu/drm/i915/intel_memory_region.c | 173 ++++++++++++++++++
drivers/gpu/drm/i915/intel_memory_region.h | 77 ++++++++
.../drm/i915/selftests/i915_mock_selftests.h | 1 +
.../drm/i915/selftests/intel_memory_region.c | 124 +++++++++++++
.../gpu/drm/i915/selftests/mock_gem_device.c | 1 +
drivers/gpu/drm/i915/selftests/mock_region.c | 59 ++++++
drivers/gpu/drm/i915/selftests/mock_region.h | 16 ++
13 files changed, 702 insertions(+)
create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_region.c
create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_region.h
create mode 100644 drivers/gpu/drm/i915/intel_memory_region.c
create mode 100644 drivers/gpu/drm/i915/intel_memory_region.h
create mode 100644 drivers/gpu/drm/i915/selftests/intel_memory_region.c
create mode 100644 drivers/gpu/drm/i915/selftests/mock_region.c
create mode 100644 drivers/gpu/drm/i915/selftests/mock_region.h
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 6313e7b4bd78..d849dff31f76 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -50,6 +50,7 @@ i915-y += i915_drv.o \
i915_utils.o \
intel_csr.o \
intel_device_info.o \
+ intel_memory_region.o \
intel_pch.o \
intel_pm.o \
intel_runtime_pm.o \
@@ -118,6 +119,7 @@ gem-y += \
gem/i915_gem_pages.o \
gem/i915_gem_phys.o \
gem/i915_gem_pm.o \
+ gem/i915_gem_region.o \
gem/i915_gem_shmem.o \
gem/i915_gem_shrinker.o \
gem/i915_gem_stolen.o \
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index d695f187b790..d36c860c9c6f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -158,6 +158,15 @@ struct drm_i915_gem_object {
atomic_t pages_pin_count;
atomic_t shrink_pin;
+ /**
+ * Memory region for this object.
+ */
+ struct intel_memory_region *region;
+ /**
+ * List of memory region blocks allocated for this object.
+ */
+ struct list_head blocks;
+
struct sg_table *pages;
void *mapping;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
new file mode 100644
index 000000000000..5c3bfc121921
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -0,0 +1,133 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "intel_memory_region.h"
+#include "i915_gem_region.h"
+#include "i915_drv.h"
+
+void
+i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
+ struct sg_table *pages)
+{
+ __intel_memory_region_put_pages_buddy(obj->mm.region, &obj->mm.blocks);
+
+ obj->mm.dirty = false;
+ sg_free_table(pages);
+ kfree(pages);
+}
+
+int
+i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
+{
+ struct intel_memory_region *mem = obj->mm.region;
+ struct list_head *blocks = &obj->mm.blocks;
+ unsigned int flags = I915_ALLOC_MIN_PAGE_SIZE;
+ resource_size_t size = obj->base.size;
+ resource_size_t prev_end;
+ struct i915_buddy_block *block;
+ struct sg_table *st;
+ struct scatterlist *sg;
+ unsigned int sg_page_sizes;
+ unsigned long i;
+ int ret;
+
+ st = kmalloc(sizeof(*st), GFP_KERNEL);
+ if (!st)
+ return -ENOMEM;
+
+ if (sg_alloc_table(st, size >> ilog2(mem->mm.chunk_size), GFP_KERNEL)) {
+ kfree(st);
+ return -ENOMEM;
+ }
+
+ ret = __intel_memory_region_get_pages_buddy(mem, size, flags, blocks);
+ if (ret)
+ goto err_free_sg;
+
+ GEM_BUG_ON(list_empty(blocks));
+
+ sg = st->sgl;
+ st->nents = 0;
+ sg_page_sizes = 0;
+ i = 0;
+
+ list_for_each_entry(block, blocks, link) {
+ u64 block_size, offset;
+
+ block_size = i915_buddy_block_size(&mem->mm, block);
+ offset = i915_buddy_block_offset(block);
+
+ GEM_BUG_ON(overflows_type(block_size, sg->length));
+
+ if (!i || offset != prev_end ||
+ add_overflows_t(typeof(sg->length), sg->length, block_size)) {
+ if (i) {
+ sg_page_sizes |= sg->length;
+ sg = __sg_next(sg);
+ }
+
+ sg_dma_address(sg) = mem->region.start + offset;
+ sg_dma_len(sg) = block_size;
+
+ sg->length = block_size;
+
+ st->nents++;
+ } else {
+ sg->length += block_size;
+ sg_dma_len(sg) += block_size;
+ }
+
+ prev_end = offset + block_size;
+ i++;
+ }
+
+ sg_page_sizes |= sg->length;
+ sg_mark_end(sg);
+ i915_sg_trim(st);
+
+ __i915_gem_object_set_pages(obj, st, sg_page_sizes);
+
+ return 0;
+
+err_free_sg:
+ sg_free_table(st);
+ kfree(st);
+ return ret;
+}
+
+void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
+ struct intel_memory_region *mem)
+{
+ INIT_LIST_HEAD(&obj->mm.blocks);
+ obj->mm.region = mem;
+}
+
+void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj)
+{
+}
+
+struct drm_i915_gem_object *
+i915_gem_object_create_region(struct intel_memory_region *mem,
+ resource_size_t size,
+ unsigned int flags)
+{
+ struct drm_i915_gem_object *obj;
+
+ if (!mem)
+ return ERR_PTR(-ENODEV);
+
+ size = round_up(size, mem->min_page_size);
+
+ GEM_BUG_ON(!size);
+ GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_MIN_ALIGNMENT));
+
+ if (size >> PAGE_SHIFT > INT_MAX)
+ return ERR_PTR(-E2BIG);
+
+ if (overflows_type(size, obj->base.size))
+ return ERR_PTR(-E2BIG);
+
+ return mem->ops->create_object(mem, size, flags);
+}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h
new file mode 100644
index 000000000000..ebddc86d78f7
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __I915_GEM_REGION_H__
+#define __I915_GEM_REGION_H__
+
+#include <linux/types.h>
+
+struct intel_memory_region;
+struct drm_i915_gem_object;
+struct sg_table;
+
+int i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj);
+void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
+ struct sg_table *pages);
+
+void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
+ struct intel_memory_region *mem);
+void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj);
+
+struct drm_i915_gem_object *
+i915_gem_object_create_region(struct intel_memory_region *mem,
+ resource_size_t size,
+ unsigned int flags);
+
+#endif
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index c5cea4379216..4e1805aaeb99 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -8,6 +8,7 @@
#include "i915_selftest.h"
+#include "gem/i915_gem_region.h"
#include "gem/i915_gem_pm.h"
#include "gt/intel_gt.h"
@@ -17,6 +18,7 @@
#include "selftests/mock_drm.h"
#include "selftests/mock_gem_device.h"
+#include "selftests/mock_region.h"
#include "selftests/i915_random.h"
static const unsigned int page_sizes[] = {
@@ -447,6 +449,81 @@ static int igt_mock_exhaust_device_supported_pages(void *arg)
return err;
}
+static int igt_mock_memory_region_huge_pages(void *arg)
+{
+ struct i915_ppgtt *ppgtt = arg;
+ struct drm_i915_private *i915 = ppgtt->vm.i915;
+ unsigned long supported = INTEL_INFO(i915)->page_sizes;
+ struct intel_memory_region *mem;
+ struct drm_i915_gem_object *obj;
+ struct i915_vma *vma;
+ int bit;
+ int err = 0;
+
+ mem = mock_region_create(i915, 0, SZ_2G,
+ I915_GTT_PAGE_SIZE_4K, 0);
+ if (IS_ERR(mem)) {
+ pr_err("failed to create memory region\n");
+ return PTR_ERR(mem);
+ }
+
+ for_each_set_bit(bit, &supported, ilog2(I915_GTT_MAX_PAGE_SIZE) + 1) {
+ unsigned int page_size = BIT(bit);
+ resource_size_t phys;
+
+ obj = i915_gem_object_create_region(mem, page_size, 0);
+ if (IS_ERR(obj)) {
+ err = PTR_ERR(obj);
+ goto out_destroy_device;
+ }
+
+ vma = i915_vma_instance(obj, &ppgtt->vm, NULL);
+ if (IS_ERR(vma)) {
+ err = PTR_ERR(vma);
+ goto out_put;
+ }
+
+ err = i915_vma_pin(vma, 0, 0, PIN_USER);
+ if (err)
+ goto out_close;
+
+ phys = i915_gem_object_get_dma_address(obj, 0);
+ if (!IS_ALIGNED(phys, page_size)) {
+ pr_err("memory region misaligned(%pa)\n", &phys);
+ err = -EINVAL;
+ goto out_close;
+ }
+
+ if (vma->page_sizes.gtt != page_size) {
+ pr_err("page_sizes.gtt=%u, expected=%u\n",
+ vma->page_sizes.gtt, page_size);
+ err = -EINVAL;
+ goto out_unpin;
+ }
+
+ i915_vma_unpin(vma);
+ i915_vma_close(vma);
+
+ i915_gem_object_put(obj);
+ }
+
+ goto out_destroy_device;
+
+out_unpin:
+ i915_vma_unpin(vma);
+out_close:
+ i915_vma_close(vma);
+out_put:
+ i915_gem_object_put(obj);
+out_destroy_device:
+ mutex_unlock(&i915->drm.struct_mutex);
+ i915_gem_drain_freed_objects(i915);
+ mutex_lock(&i915->drm.struct_mutex);
+ intel_memory_region_destroy(mem);
+
+ return err;
+}
+
static int igt_mock_ppgtt_misaligned_dma(void *arg)
{
struct i915_ppgtt *ppgtt = arg;
@@ -1610,6 +1687,7 @@ int i915_gem_huge_page_mock_selftests(void)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_mock_exhaust_device_supported_pages),
+ SUBTEST(igt_mock_memory_region_huge_pages),
SUBTEST(igt_mock_ppgtt_misaligned_dma),
SUBTEST(igt_mock_ppgtt_huge_fill),
SUBTEST(igt_mock_ppgtt_64K),
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0dc504fc6ffc..8ed4b8c2484f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -85,6 +85,7 @@
#include "intel_device_info.h"
#include "intel_pch.h"
#include "intel_runtime_pm.h"
+#include "intel_memory_region.h"
#include "intel_uncore.h"
#include "intel_wakeref.h"
#include "intel_wopcm.h"
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
new file mode 100644
index 000000000000..e48d5c37c4df
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -0,0 +1,173 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "intel_memory_region.h"
+#include "i915_drv.h"
+
+static u64
+intel_memory_region_free_pages(struct intel_memory_region *mem,
+ struct list_head *blocks)
+{
+ struct i915_buddy_block *block, *on;
+ u64 size = 0;
+
+ list_for_each_entry_safe(block, on, blocks, link) {
+ size += i915_buddy_block_size(&mem->mm, block);
+ i915_buddy_free(&mem->mm, block);
+ }
+ INIT_LIST_HEAD(blocks);
+
+ return size;
+}
+
+void
+__intel_memory_region_put_pages_buddy(struct intel_memory_region *mem,
+ struct list_head *blocks)
+{
+ mutex_lock(&mem->mm_lock);
+ intel_memory_region_free_pages(mem, blocks);
+ mutex_unlock(&mem->mm_lock);
+}
+
+void
+__intel_memory_region_put_block_buddy(struct i915_buddy_block *block)
+{
+ struct list_head blocks;
+
+ INIT_LIST_HEAD(&blocks);
+ list_add(&block->link, &blocks);
+ __intel_memory_region_put_pages_buddy(block->private, &blocks);
+}
+
+int
+__intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
+ resource_size_t size,
+ unsigned int flags,
+ struct list_head *blocks)
+{
+ unsigned long n_pages = size >> ilog2(mem->mm.chunk_size);
+ unsigned int min_order = 0;
+
+ GEM_BUG_ON(!IS_ALIGNED(size, mem->mm.chunk_size));
+ GEM_BUG_ON(!list_empty(blocks));
+
+ if (flags & I915_ALLOC_MIN_PAGE_SIZE) {
+ min_order = ilog2(mem->min_page_size) -
+ ilog2(mem->mm.chunk_size);
+ }
+
+ mutex_lock(&mem->mm_lock);
+
+ do {
+ struct i915_buddy_block *block;
+ unsigned int order;
+
+ order = fls(n_pages) - 1;
+ GEM_BUG_ON(order > mem->mm.max_order);
+ GEM_BUG_ON(order < min_order);
+
+ do {
+ block = i915_buddy_alloc(&mem->mm, order);
+ if (!IS_ERR(block))
+ break;
+
+ if (order-- == min_order)
+ goto err_free_blocks;
+ } while (1);
+
+ n_pages -= BIT(order);
+
+ block->private = mem;
+ list_add(&block->link, blocks);
+
+ if (!n_pages)
+ break;
+ } while (1);
+
+ mutex_unlock(&mem->mm_lock);
+ return 0;
+
+err_free_blocks:
+ intel_memory_region_free_pages(mem, blocks);
+ mutex_unlock(&mem->mm_lock);
+ return -ENXIO;
+}
+
+struct i915_buddy_block *
+__intel_memory_region_get_block_buddy(struct intel_memory_region *mem,
+ resource_size_t size)
+{
+ struct i915_buddy_block *block;
+ struct list_head blocks;
+ int ret;
+
+ INIT_LIST_HEAD(&blocks);
+ ret = __intel_memory_region_get_pages_buddy(mem, size, 0, &blocks);
+ if (ret)
+ return ERR_PTR(ret);
+
+ block = list_first_entry(&blocks, typeof(*block), link);
+ list_del_init(&block->link);
+ return block;
+}
+
+int intel_memory_region_init_buddy(struct intel_memory_region *mem)
+{
+ return i915_buddy_init(&mem->mm, resource_size(&mem->region),
+ PAGE_SIZE);
+}
+
+void intel_memory_region_release_buddy(struct intel_memory_region *mem)
+{
+ i915_buddy_fini(&mem->mm);
+}
+
+struct intel_memory_region *
+intel_memory_region_create(struct drm_i915_private *i915,
+ resource_size_t start,
+ resource_size_t size,
+ resource_size_t min_page_size,
+ resource_size_t io_start,
+ const struct intel_memory_region_ops *ops)
+{
+ struct intel_memory_region *mem;
+ int err;
+
+ mem = kzalloc(sizeof(*mem), GFP_KERNEL);
+ if (!mem)
+ return ERR_PTR(-ENOMEM);
+
+ mem->i915 = i915;
+ mem->region = (struct resource)DEFINE_RES_MEM(start, size);
+ mem->io_start = io_start;
+ mem->min_page_size = min_page_size;
+ mem->ops = ops;
+
+ mutex_init(&mem->mm_lock);
+
+ if (ops->init) {
+ err = ops->init(mem);
+ if (err) {
+ kfree(mem);
+ mem = ERR_PTR(err);
+ }
+ }
+
+ return mem;
+}
+
+void
+intel_memory_region_destroy(struct intel_memory_region *mem)
+{
+ if (mem->ops->release)
+ mem->ops->release(mem);
+
+ kfree(mem);
+}
+
+#if IS_ENABLED(CONFIG_DRM_I915_SELFTEST)
+#include "selftests/intel_memory_region.c"
+#include "selftests/mock_region.c"
+#endif
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
new file mode 100644
index 000000000000..ae1ce298bcd1
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -0,0 +1,77 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __INTEL_MEMORY_REGION_H__
+#define __INTEL_MEMORY_REGION_H__
+
+#include <linux/ioport.h>
+#include <linux/mutex.h>
+#include <linux/io-mapping.h>
+
+#include "i915_buddy.h"
+
+struct drm_i915_private;
+struct drm_i915_gem_object;
+struct intel_memory_region;
+struct sg_table;
+
+#define I915_ALLOC_MIN_PAGE_SIZE BIT(0)
+
+struct intel_memory_region_ops {
+ unsigned int flags;
+
+ int (*init)(struct intel_memory_region *);
+ void (*release)(struct intel_memory_region *);
+
+ struct drm_i915_gem_object *
+ (*create_object)(struct intel_memory_region *,
+ resource_size_t,
+ unsigned int);
+};
+
+struct intel_memory_region {
+ struct drm_i915_private *i915;
+
+ const struct intel_memory_region_ops *ops;
+
+ struct io_mapping iomap;
+ struct resource region;
+
+ struct i915_buddy_mm mm;
+ struct mutex mm_lock;
+
+ resource_size_t io_start;
+ resource_size_t min_page_size;
+
+ unsigned int type;
+ unsigned int instance;
+ unsigned int id;
+};
+
+int intel_memory_region_init_buddy(struct intel_memory_region *mem);
+void intel_memory_region_release_buddy(struct intel_memory_region *mem);
+
+int __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
+ resource_size_t size,
+ unsigned int flags,
+ struct list_head *blocks);
+struct i915_buddy_block *
+__intel_memory_region_get_block_buddy(struct intel_memory_region *mem,
+ resource_size_t size);
+void __intel_memory_region_put_pages_buddy(struct intel_memory_region *mem,
+ struct list_head *blocks);
+void __intel_memory_region_put_block_buddy(struct i915_buddy_block *block);
+
+struct intel_memory_region *
+intel_memory_region_create(struct drm_i915_private *i915,
+ resource_size_t start,
+ resource_size_t size,
+ resource_size_t min_page_size,
+ resource_size_t io_start,
+ const struct intel_memory_region_ops *ops);
+void
+intel_memory_region_destroy(struct intel_memory_region *mem);
+
+#endif
diff --git a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
index b88084fe3269..aa5a0e7f5d9e 100644
--- a/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_mock_selftests.h
@@ -26,3 +26,4 @@ selftest(gtt, i915_gem_gtt_mock_selftests)
selftest(hugepages, i915_gem_huge_page_mock_selftests)
selftest(contexts, i915_gem_context_mock_selftests)
selftest(buddy, i915_buddy_mock_selftests)
+selftest(memory_region, intel_memory_region_mock_selftests)
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
new file mode 100644
index 000000000000..54f9a624b4e1
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include <linux/prime_numbers.h>
+
+#include "../i915_selftest.h"
+
+#include "mock_drm.h"
+#include "mock_gem_device.h"
+#include "mock_region.h"
+
+#include "gem/i915_gem_region.h"
+#include "gem/selftests/mock_context.h"
+
+static void close_objects(struct list_head *objects)
+{
+ struct drm_i915_private *i915 = NULL;
+ struct drm_i915_gem_object *obj, *on;
+
+ list_for_each_entry_safe(obj, on, objects, st_link) {
+ i915 = to_i915(obj->base.dev);
+ if (i915_gem_object_has_pinned_pages(obj))
+ i915_gem_object_unpin_pages(obj);
+ /* No polluting the memory region between tests */
+ __i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+ i915_gem_object_put(obj);
+ list_del(&obj->st_link);
+ }
+
+ if (i915) {
+ cond_resched();
+
+ mutex_unlock(&i915->drm.struct_mutex);
+ i915_gem_drain_freed_objects(i915);
+ mutex_lock(&i915->drm.struct_mutex);
+ }
+}
+
+static int igt_mock_fill(void *arg)
+{
+ struct intel_memory_region *mem = arg;
+ resource_size_t total = resource_size(&mem->region);
+ resource_size_t page_size;
+ resource_size_t rem;
+ unsigned long max_pages;
+ unsigned long page_num;
+ LIST_HEAD(objects);
+ int err = 0;
+
+ page_size = mem->mm.chunk_size;
+ max_pages = div64_u64(total, page_size);
+ rem = total;
+
+ for_each_prime_number_from(page_num, 1, max_pages) {
+ resource_size_t size = page_num * page_size;
+ struct drm_i915_gem_object *obj;
+
+ obj = i915_gem_object_create_region(mem, size, 0);
+ if (IS_ERR(obj)) {
+ err = PTR_ERR(obj);
+ break;
+ }
+
+ err = i915_gem_object_pin_pages(obj);
+ if (err) {
+ i915_gem_object_put(obj);
+ break;
+ }
+
+ list_add(&obj->st_link, &objects);
+ rem -= size;
+ }
+
+ if (err == -ENOMEM)
+ err = 0;
+ if (err == -ENXIO) {
+ if (page_num * page_size <= rem) {
+ pr_err("igt_mock_fill failed, space still left in region\n");
+ err = -EINVAL;
+ } else {
+ err = 0;
+ }
+ }
+
+ close_objects(&objects);
+
+ return err;
+}
+
+int intel_memory_region_mock_selftests(void)
+{
+ static const struct i915_subtest tests[] = {
+ SUBTEST(igt_mock_fill),
+ };
+ struct intel_memory_region *mem;
+ struct drm_i915_private *i915;
+ int err;
+
+ i915 = mock_gem_device();
+ if (!i915)
+ return -ENOMEM;
+
+ mem = mock_region_create(i915, 0, SZ_2G,
+ I915_GTT_PAGE_SIZE_4K, 0);
+ if (IS_ERR(mem)) {
+ pr_err("failed to create memory region\n");
+ err = PTR_ERR(mem);
+ goto out_unref;
+ }
+
+ mutex_lock(&i915->drm.struct_mutex);
+ err = i915_subtests(tests, mem);
+ mutex_unlock(&i915->drm.struct_mutex);
+
+ i915_gem_drain_freed_objects(i915);
+ intel_memory_region_destroy(mem);
+
+out_unref:
+ drm_dev_put(&i915->drm);
+
+ return err;
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 91f15fa728cd..32e32b1cd566 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -32,6 +32,7 @@
#include "mock_gem_device.h"
#include "mock_gtt.h"
#include "mock_uncore.h"
+#include "mock_region.h"
#include "gem/selftests/mock_context.h"
#include "gem/selftests/mock_gem_object.h"
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
new file mode 100644
index 000000000000..0e9a575ede3b
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
@@ -0,0 +1,59 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "gem/i915_gem_region.h"
+#include "intel_memory_region.h"
+
+#include "mock_region.h"
+
+static const struct drm_i915_gem_object_ops mock_region_obj_ops = {
+ .get_pages = i915_gem_object_get_pages_buddy,
+ .put_pages = i915_gem_object_put_pages_buddy,
+ .release = i915_gem_object_release_memory_region,
+};
+
+static struct drm_i915_gem_object *
+mock_object_create(struct intel_memory_region *mem,
+ resource_size_t size,
+ unsigned int flags)
+{
+ struct drm_i915_private *i915 = mem->i915;
+ struct drm_i915_gem_object *obj;
+
+ if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size)
+ return ERR_PTR(-E2BIG);
+
+ obj = i915_gem_object_alloc();
+ if (!obj)
+ return ERR_PTR(-ENOMEM);
+
+ drm_gem_private_object_init(&i915->drm, &obj->base, size);
+ i915_gem_object_init(obj, &mock_region_obj_ops);
+
+ obj->read_domains = I915_GEM_DOMAIN_CPU | I915_GEM_DOMAIN_GTT;
+
+ i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
+
+ i915_gem_object_init_memory_region(obj, mem);
+
+ return obj;
+}
+
+static const struct intel_memory_region_ops mock_region_ops = {
+ .init = intel_memory_region_init_buddy,
+ .release = intel_memory_region_release_buddy,
+ .create_object = mock_object_create,
+};
+
+struct intel_memory_region *
+mock_region_create(struct drm_i915_private *i915,
+ resource_size_t start,
+ resource_size_t size,
+ resource_size_t min_page_size,
+ resource_size_t io_start)
+{
+ return intel_memory_region_create(i915, start, size, min_page_size,
+ io_start, &mock_region_ops);
+}
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.h b/drivers/gpu/drm/i915/selftests/mock_region.h
new file mode 100644
index 000000000000..24608089d833
--- /dev/null
+++ b/drivers/gpu/drm/i915/selftests/mock_region.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __MOCK_REGION_H
+#define __MOCK_REGION_H
+
+struct intel_memory_region *
+mock_region_create(struct drm_i915_private *i915,
+ resource_size_t start,
+ resource_size_t size,
+ resource_size_t min_page_size,
+ resource_size_t io_start);
+
+#endif /* !__MOCK_REGION_H */
--
2.20.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* Re: [PATCH 03/22] drm/i915: introduce intel_memory_region
2019-09-27 17:33 ` [PATCH 03/22] drm/i915: introduce intel_memory_region Matthew Auld
@ 2019-09-27 18:08 ` Chris Wilson
2019-09-27 18:21 ` Chris Wilson
` (3 subsequent siblings)
4 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 18:08 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:50)
> +struct drm_i915_gem_object *
> +i915_gem_object_create_region(struct intel_memory_region *mem,
> + resource_size_t size,
> + unsigned int flags)
> +{
> + struct drm_i915_gem_object *obj;
> +
> + if (!mem)
> + return ERR_PTR(-ENODEV);
> +
> + size = round_up(size, mem->min_page_size);
> +
> + GEM_BUG_ON(!size);
> + GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_MIN_ALIGNMENT));
> +
> + if (size >> PAGE_SHIFT > INT_MAX)
> + return ERR_PTR(-E2BIG);
It's probably past time we fixed up the remaining int num_pages.
Hmm, I know gcc warns for constants wider than their type. Can we get it
to warn for unguarded narrowing conversions, i.e.
int num_pages = resource_size_t >> PAGE_SHIFT;
Or maybe we go on a rampage and just ban obj->base.size and force
ourselves to use a wrapper in order to catch any offenders.
-Chris
* Re: [PATCH 03/22] drm/i915: introduce intel_memory_region
2019-09-27 17:33 ` [PATCH 03/22] drm/i915: introduce intel_memory_region Matthew Auld
2019-09-27 18:08 ` Chris Wilson
@ 2019-09-27 18:21 ` Chris Wilson
2019-09-27 18:24 ` Chris Wilson
` (2 subsequent siblings)
4 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 18:21 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:50)
> +void
> +intel_memory_region_destroy(struct intel_memory_region *mem)
> +{
> + if (mem->ops->release)
> + mem->ops->release(mem);
> +
mutex_destroy(&mem->mm_lock);
> + kfree(mem);
> +}
* Re: [PATCH 03/22] drm/i915: introduce intel_memory_region
2019-09-27 17:33 ` [PATCH 03/22] drm/i915: introduce intel_memory_region Matthew Auld
2019-09-27 18:08 ` Chris Wilson
2019-09-27 18:21 ` Chris Wilson
@ 2019-09-27 18:24 ` Chris Wilson
2019-09-27 18:25 ` Chris Wilson
2019-09-27 20:27 ` Chris Wilson
2019-09-27 20:30 ` Chris Wilson
4 siblings, 1 reply; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 18:24 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:50)
> +static void close_objects(struct list_head *objects)
> +{
> + struct drm_i915_private *i915 = NULL;
> + struct drm_i915_gem_object *obj, *on;
> +
> + list_for_each_entry_safe(obj, on, objects, st_link) {
> + i915 = to_i915(obj->base.dev);
> + if (i915_gem_object_has_pinned_pages(obj))
> + i915_gem_object_unpin_pages(obj);
> + /* No polluting the memory region between tests */
> + __i915_gem_object_put_pages(obj, I915_MM_NORMAL);
> + i915_gem_object_put(obj);
> + list_del(&obj->st_link);
> + }
> +
> + if (i915) {
That's on the ugly side. You will have a mem in each subtest, so why not
supply it here and use the mem->i915 from that?
-Chris
* Re: [PATCH 03/22] drm/i915: introduce intel_memory_region
2019-09-27 18:24 ` Chris Wilson
@ 2019-09-27 18:25 ` Chris Wilson
0 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 18:25 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Chris Wilson (2019-09-27 19:24:43)
> Quoting Matthew Auld (2019-09-27 18:33:50)
> > +static void close_objects(struct list_head *objects)
> > +{
> > + struct drm_i915_private *i915 = NULL;
> > + struct drm_i915_gem_object *obj, *on;
> > +
> > + list_for_each_entry_safe(obj, on, objects, st_link) {
> > + i915 = to_i915(obj->base.dev);
> > + if (i915_gem_object_has_pinned_pages(obj))
> > + i915_gem_object_unpin_pages(obj);
> > + /* No polluting the memory region between tests */
> > + __i915_gem_object_put_pages(obj, I915_MM_NORMAL);
> > + i915_gem_object_put(obj);
> > + list_del(&obj->st_link);
> > + }
> > +
> > + if (i915) {
>
> That's on the ugly side. You will have a mem in each subtest, so why not
> supply it here and use the mem->i915 from that?
The further thought was to have a mem test runner that drained the
pages between each subtest. That's an area we need to improve.
-Chris
* Re: [PATCH 03/22] drm/i915: introduce intel_memory_region
2019-09-27 17:33 ` [PATCH 03/22] drm/i915: introduce intel_memory_region Matthew Auld
` (2 preceding siblings ...)
2019-09-27 18:24 ` Chris Wilson
@ 2019-09-27 20:27 ` Chris Wilson
2019-09-27 20:30 ` Chris Wilson
4 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 20:27 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:50)
> +void
> +__intel_memory_region_put_block_buddy(struct i915_buddy_block *block)
> +{
> + struct list_head blocks;
LIST_HEAD(blocks); (and no INIT_LIST_HEAD required)
> +
> + INIT_LIST_HEAD(&blocks);
> + list_add(&block->link, &blocks);
> + __intel_memory_region_put_pages_buddy(block->private, &blocks);
> +}
* Re: [PATCH 03/22] drm/i915: introduce intel_memory_region
2019-09-27 17:33 ` [PATCH 03/22] drm/i915: introduce intel_memory_region Matthew Auld
` (3 preceding siblings ...)
2019-09-27 20:27 ` Chris Wilson
@ 2019-09-27 20:30 ` Chris Wilson
4 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 20:30 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:50)
> +struct drm_i915_gem_object *
> +i915_gem_object_create_region(struct intel_memory_region *mem,
> + resource_size_t size,
> + unsigned int flags)
> +{
> + struct drm_i915_gem_object *obj;
> +
> + if (!mem)
> + return ERR_PTR(-ENODEV);
What scenarios do you have in mind where this is not a programmer bug?
-Chris
* [PATCH 04/22] drm/i915/region: support continuous allocations
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (2 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 03/22] drm/i915: introduce intel_memory_region Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 18:35 ` Chris Wilson
2019-09-27 18:46 ` Ruhl, Michael J
2019-09-27 17:33 ` [PATCH 05/22] drm/i915/region: support volatile objects Matthew Auld
` (21 subsequent siblings)
25 siblings, 2 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
Some kernel-internal objects may need to be allocated as a continuous
block. Also, thinking ahead, the various kernel io_mapping interfaces seem
to expect it, although this is purely a limitation in the kernel
API... so perhaps something to be improved.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
---
.../gpu/drm/i915/gem/i915_gem_object_types.h | 4 +
drivers/gpu/drm/i915/gem/i915_gem_region.c | 15 +-
drivers/gpu/drm/i915/gem/i915_gem_region.h | 3 +-
.../gpu/drm/i915/gem/selftests/huge_pages.c | 3 +-
drivers/gpu/drm/i915/intel_memory_region.c | 13 +-
drivers/gpu/drm/i915/intel_memory_region.h | 3 +-
.../drm/i915/selftests/intel_memory_region.c | 163 ++++++++++++++++++
drivers/gpu/drm/i915/selftests/mock_region.c | 2 +-
8 files changed, 197 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index d36c860c9c6f..7acd383f174f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -117,6 +117,10 @@ struct drm_i915_gem_object {
I915_SELFTEST_DECLARE(struct list_head st_link);
+ unsigned long flags;
+#define I915_BO_ALLOC_CONTIGUOUS BIT(0)
+#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS)
+
/*
* Is the object to be mapped as read-only to the GPU
* Only honoured if hardware has relevant pte bit
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index 5c3bfc121921..b317a5c84144 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -23,10 +23,10 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
{
struct intel_memory_region *mem = obj->mm.region;
struct list_head *blocks = &obj->mm.blocks;
- unsigned int flags = I915_ALLOC_MIN_PAGE_SIZE;
resource_size_t size = obj->base.size;
resource_size_t prev_end;
struct i915_buddy_block *block;
+ unsigned int flags;
struct sg_table *st;
struct scatterlist *sg;
unsigned int sg_page_sizes;
@@ -42,6 +42,10 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
return -ENOMEM;
}
+ flags = I915_ALLOC_MIN_PAGE_SIZE;
+ if (obj->flags & I915_BO_ALLOC_CONTIGUOUS)
+ flags |= I915_ALLOC_CONTIGUOUS;
+
ret = __intel_memory_region_get_pages_buddy(mem, size, flags, blocks);
if (ret)
goto err_free_sg;
@@ -56,7 +60,8 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
list_for_each_entry(block, blocks, link) {
u64 block_size, offset;
- block_size = i915_buddy_block_size(&mem->mm, block);
+ block_size = min_t(u64, size,
+ i915_buddy_block_size(&mem->mm, block));
offset = i915_buddy_block_offset(block);
GEM_BUG_ON(overflows_type(block_size, sg->length));
@@ -98,10 +103,12 @@ i915_gem_object_get_pages_buddy(struct drm_i915_gem_object *obj)
}
void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
- struct intel_memory_region *mem)
+ struct intel_memory_region *mem,
+ unsigned long flags)
{
INIT_LIST_HEAD(&obj->mm.blocks);
obj->mm.region = mem;
+ obj->flags = flags;
}
void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj)
@@ -115,6 +122,8 @@ i915_gem_object_create_region(struct intel_memory_region *mem,
{
struct drm_i915_gem_object *obj;
+ GEM_BUG_ON(flags & ~I915_BO_ALLOC_FLAGS);
+
if (!mem)
return ERR_PTR(-ENODEV);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h b/drivers/gpu/drm/i915/gem/i915_gem_region.h
index ebddc86d78f7..f2ff6f8bff74 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h
@@ -17,7 +17,8 @@ void i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
struct sg_table *pages);
void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
- struct intel_memory_region *mem);
+ struct intel_memory_region *mem,
+ unsigned long flags);
void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj);
struct drm_i915_gem_object *
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index 4e1805aaeb99..f9fbf2865782 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -471,7 +471,8 @@ static int igt_mock_memory_region_huge_pages(void *arg)
unsigned int page_size = BIT(bit);
resource_size_t phys;
- obj = i915_gem_object_create_region(mem, page_size, 0);
+ obj = i915_gem_object_create_region(mem, page_size,
+ I915_BO_ALLOC_CONTIGUOUS);
if (IS_ERR(obj)) {
err = PTR_ERR(obj);
goto out_destroy_device;
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index e48d5c37c4df..7a66872d9eac 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -47,8 +47,8 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
unsigned int flags,
struct list_head *blocks)
{
- unsigned long n_pages = size >> ilog2(mem->mm.chunk_size);
unsigned int min_order = 0;
+ unsigned long n_pages;
GEM_BUG_ON(!IS_ALIGNED(size, mem->mm.chunk_size));
GEM_BUG_ON(!list_empty(blocks));
@@ -58,6 +58,13 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
ilog2(mem->mm.chunk_size);
}
+ if (flags & I915_ALLOC_CONTIGUOUS) {
+ size = roundup_pow_of_two(size);
+ min_order = ilog2(size) - ilog2(mem->mm.chunk_size);
+ }
+
+ n_pages = size >> ilog2(mem->mm.chunk_size);
+
mutex_lock(&mem->mm_lock);
do {
@@ -104,7 +111,9 @@ __intel_memory_region_get_block_buddy(struct intel_memory_region *mem,
int ret;
INIT_LIST_HEAD(&blocks);
- ret = __intel_memory_region_get_pages_buddy(mem, size, 0, &blocks);
+ ret = __intel_memory_region_get_pages_buddy(mem, size,
+ I915_ALLOC_CONTIGUOUS,
+ &blocks);
if (ret)
return ERR_PTR(ret);
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index ae1ce298bcd1..1dad51b2fc96 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -17,7 +17,8 @@ struct drm_i915_gem_object;
struct intel_memory_region;
struct sg_table;
-#define I915_ALLOC_MIN_PAGE_SIZE BIT(0)
+#define I915_ALLOC_MIN_PAGE_SIZE BIT(0)
+#define I915_ALLOC_CONTIGUOUS BIT(1)
struct intel_memory_region_ops {
unsigned int flags;
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 54f9a624b4e1..c43d00ec38ea 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -13,6 +13,7 @@
#include "gem/i915_gem_region.h"
#include "gem/selftests/mock_context.h"
+#include "selftests/i915_random.h"
static void close_objects(struct list_head *objects)
{
@@ -89,10 +90,172 @@ static int igt_mock_fill(void *arg)
return err;
}
+static struct drm_i915_gem_object *
+igt_object_create(struct intel_memory_region *mem,
+ struct list_head *objects,
+ u64 size,
+ unsigned int flags)
+{
+ struct drm_i915_gem_object *obj;
+ int err;
+
+ obj = i915_gem_object_create_region(mem, size, flags);
+ if (IS_ERR(obj))
+ return obj;
+
+ err = i915_gem_object_pin_pages(obj);
+ if (err)
+ goto put;
+
+ list_add(&obj->st_link, objects);
+ return obj;
+
+put:
+ i915_gem_object_put(obj);
+ return ERR_PTR(err);
+}
+
+void igt_object_release(struct drm_i915_gem_object *obj)
+{
+ i915_gem_object_unpin_pages(obj);
+ __i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+ i915_gem_object_put(obj);
+ list_del(&obj->st_link);
+}
+
+static int igt_mock_continuous(void *arg)
+{
+ struct intel_memory_region *mem = arg;
+ struct drm_i915_gem_object *obj;
+ unsigned long n_objects;
+ LIST_HEAD(objects);
+ LIST_HEAD(holes);
+ I915_RND_STATE(prng);
+ resource_size_t target;
+ resource_size_t total;
+ resource_size_t min;
+ int err = 0;
+
+ total = resource_size(&mem->region);
+
+ /* Min size */
+ obj = igt_object_create(mem, &objects, mem->mm.chunk_size,
+ I915_BO_ALLOC_CONTIGUOUS);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
+
+ if (obj->mm.pages->nents != 1) {
+ pr_err("%s min object spans multiple sg entries\n", __func__);
+ err = -EINVAL;
+ goto err_close_objects;
+ }
+
+ igt_object_release(obj);
+
+ /* Max size */
+ obj = igt_object_create(mem, &objects, total, I915_BO_ALLOC_CONTIGUOUS);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
+
+ if (obj->mm.pages->nents != 1) {
+ pr_err("%s max object spans multiple sg entries\n", __func__);
+ err = -EINVAL;
+ goto err_close_objects;
+ }
+
+ igt_object_release(obj);
+
+ /* Internal fragmentation should not bleed into the object size */
+ target = round_up(prandom_u32_state(&prng) % total, PAGE_SIZE);
+ target = max_t(u64, PAGE_SIZE, target);
+
+ obj = igt_object_create(mem, &objects, target,
+ I915_BO_ALLOC_CONTIGUOUS);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
+
+ if (obj->base.size != target) {
+ pr_err("%s obj->base.size(%llx) != target(%llx)\n", __func__,
+ (u64)obj->base.size, (u64)target);
+ err = -EINVAL;
+ goto err_close_objects;
+ }
+
+ if (obj->mm.pages->nents != 1) {
+ pr_err("%s object spans multiple sg entries\n", __func__);
+ err = -EINVAL;
+ goto err_close_objects;
+ }
+
+ igt_object_release(obj);
+
+ /*
+ * Try to fragment the address space, such that half of it is free, but
+ * the max contiguous block size is SZ_64K.
+ */
+
+ target = SZ_64K;
+ n_objects = div64_u64(total, target);
+
+ while (n_objects--) {
+ struct list_head *list;
+
+ if (n_objects % 2)
+ list = &holes;
+ else
+ list = &objects;
+
+ obj = igt_object_create(mem, list, target,
+ I915_BO_ALLOC_CONTIGUOUS);
+ if (IS_ERR(obj)) {
+ err = PTR_ERR(obj);
+ goto err_close_objects;
+ }
+ }
+
+ close_objects(&holes);
+
+ min = target;
+ target = total >> 1;
+
+ /* Make sure we can still allocate all the fragmented space */
+ obj = igt_object_create(mem, &objects, target, 0);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
+
+ igt_object_release(obj);
+
+ /*
+ * Even though we have enough free space, we don't have a big enough
+ * contiguous block. Make sure that holds true.
+ */
+
+ do {
+ bool should_fail = target > min;
+
+ obj = igt_object_create(mem, &objects, target,
+ I915_BO_ALLOC_CONTIGUOUS);
+ if (should_fail != IS_ERR(obj)) {
+ pr_err("%s target allocation(%llx) mismatch\n",
+ __func__, (u64)target);
+ err = -EINVAL;
+ goto err_close_objects;
+ }
+
+ target >>= 1;
+ } while (target >= mem->mm.chunk_size);
+
+err_close_objects:
+ list_splice_tail(&holes, &objects);
+ close_objects(&objects);
+ return err;
+}
+
int intel_memory_region_mock_selftests(void)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_mock_fill),
+ SUBTEST(igt_mock_continuous),
};
struct intel_memory_region *mem;
struct drm_i915_private *i915;
diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
index 0e9a575ede3b..7b0c99ddc2d5 100644
--- a/drivers/gpu/drm/i915/selftests/mock_region.c
+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
@@ -36,7 +36,7 @@ mock_object_create(struct intel_memory_region *mem,
i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
- i915_gem_object_init_memory_region(obj, mem);
+ i915_gem_object_init_memory_region(obj, mem, flags);
return obj;
}
--
2.20.1
* Re: [PATCH 04/22] drm/i915/region: support continuous allocations
2019-09-27 17:33 ` [PATCH 04/22] drm/i915/region: support continuous allocations Matthew Auld
@ 2019-09-27 18:35 ` Chris Wilson
2019-09-27 18:46 ` Ruhl, Michael J
1 sibling, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 18:35 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:51)
> struct drm_i915_gem_object *
> diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
> index 4e1805aaeb99..f9fbf2865782 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
> @@ -471,7 +471,8 @@ static int igt_mock_memory_region_huge_pages(void *arg)
> unsigned int page_size = BIT(bit);
> resource_size_t phys;
>
> - obj = i915_gem_object_create_region(mem, page_size, 0);
> + obj = i915_gem_object_create_region(mem, page_size,
> + I915_BO_ALLOC_CONTIGUOUS);
Seems a good opportunity to test both?
> if (IS_ERR(obj)) {
> err = PTR_ERR(obj);
> goto out_destroy_device;
> diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
> index e48d5c37c4df..7a66872d9eac 100644
> --- a/drivers/gpu/drm/i915/intel_memory_region.c
> +++ b/drivers/gpu/drm/i915/intel_memory_region.c
> @@ -47,8 +47,8 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
> unsigned int flags,
> struct list_head *blocks)
> {
> - unsigned long n_pages = size >> ilog2(mem->mm.chunk_size);
> unsigned int min_order = 0;
> + unsigned long n_pages;
>
> GEM_BUG_ON(!IS_ALIGNED(size, mem->mm.chunk_size));
> GEM_BUG_ON(!list_empty(blocks));
> @@ -58,6 +58,13 @@ __intel_memory_region_get_pages_buddy(struct intel_memory_region *mem,
> ilog2(mem->mm.chunk_size);
> }
>
> + if (flags & I915_ALLOC_CONTIGUOUS) {
> + size = roundup_pow_of_two(size);
> + min_order = ilog2(size) - ilog2(mem->mm.chunk_size);
> + }
> +
> + n_pages = size >> ilog2(mem->mm.chunk_size);
> +
> mutex_lock(&mem->mm_lock);
>
> do {
> @@ -104,7 +111,9 @@ __intel_memory_region_get_block_buddy(struct intel_memory_region *mem,
> int ret;
>
> INIT_LIST_HEAD(&blocks);
> - ret = __intel_memory_region_get_pages_buddy(mem, size, 0, &blocks);
> + ret = __intel_memory_region_get_pages_buddy(mem, size,
> + I915_ALLOC_CONTIGUOUS,
> + &blocks);
This chunk looks odd. Quick explanation why we don't pass flags here?
> if (ret)
> return ERR_PTR(ret);
* Re: [PATCH 04/22] drm/i915/region: support continuous allocations
2019-09-27 17:33 ` [PATCH 04/22] drm/i915/region: support continuous allocations Matthew Auld
2019-09-27 18:35 ` Chris Wilson
@ 2019-09-27 18:46 ` Ruhl, Michael J
1 sibling, 0 replies; 50+ messages in thread
From: Ruhl, Michael J @ 2019-09-27 18:46 UTC (permalink / raw)
To: Auld, Matthew, intel-gfx@lists.freedesktop.org; +Cc: daniel.vetter@ffwll.ch
>-----Original Message-----
>From: Intel-gfx [mailto:intel-gfx-bounces@lists.freedesktop.org] On Behalf Of
>Matthew Auld
>Sent: Friday, September 27, 2019 1:34 PM
>To: intel-gfx@lists.freedesktop.org
>Cc: daniel.vetter@ffwll.ch
>Subject: [Intel-gfx] [PATCH 04/22] drm/i915/region: support continuous
>allocations
>
>Some kernel internal objects may need to be allocated as a continuous
Nit:
You refer to a "continuous block", but then you create "CONTIGUOUS"
allocations.
s/continuous/contiguous?
Mike
>block, also thinking ahead the various kernel io_mapping interfaces seem
>to expect it, although this is purely a limitation in the kernel
>API...so perhaps something to be improved.
>
>Signed-off-by: Matthew Auld <matthew.auld@intel.com>
>Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
>Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
>---
> .../gpu/drm/i915/gem/i915_gem_object_types.h | 4 +
> drivers/gpu/drm/i915/gem/i915_gem_region.c | 15 +-
> drivers/gpu/drm/i915/gem/i915_gem_region.h | 3 +-
> .../gpu/drm/i915/gem/selftests/huge_pages.c | 3 +-
> drivers/gpu/drm/i915/intel_memory_region.c | 13 +-
> drivers/gpu/drm/i915/intel_memory_region.h | 3 +-
> .../drm/i915/selftests/intel_memory_region.c | 163 ++++++++++++++++++
> drivers/gpu/drm/i915/selftests/mock_region.c | 2 +-
> 8 files changed, 197 insertions(+), 9 deletions(-)
>
>diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>index d36c860c9c6f..7acd383f174f 100644
>--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
>@@ -117,6 +117,10 @@ struct drm_i915_gem_object {
>
> I915_SELFTEST_DECLARE(struct list_head st_link);
>
>+ unsigned long flags;
>+#define I915_BO_ALLOC_CONTIGUOUS BIT(0)
>+#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS)
>+
> /*
> * Is the object to be mapped as read-only to the GPU
> * Only honoured if hardware has relevant pte bit
>diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c
>b/drivers/gpu/drm/i915/gem/i915_gem_region.c
>index 5c3bfc121921..b317a5c84144 100644
>--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
>+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
>@@ -23,10 +23,10 @@ i915_gem_object_get_pages_buddy(struct
>drm_i915_gem_object *obj)
> {
> struct intel_memory_region *mem = obj->mm.region;
> struct list_head *blocks = &obj->mm.blocks;
>- unsigned int flags = I915_ALLOC_MIN_PAGE_SIZE;
> resource_size_t size = obj->base.size;
> resource_size_t prev_end;
> struct i915_buddy_block *block;
>+ unsigned int flags;
> struct sg_table *st;
> struct scatterlist *sg;
> unsigned int sg_page_sizes;
>@@ -42,6 +42,10 @@ i915_gem_object_get_pages_buddy(struct
>drm_i915_gem_object *obj)
> return -ENOMEM;
> }
>
>+ flags = I915_ALLOC_MIN_PAGE_SIZE;
>+ if (obj->flags & I915_BO_ALLOC_CONTIGUOUS)
>+ flags |= I915_ALLOC_CONTIGUOUS;
>+
> ret = __intel_memory_region_get_pages_buddy(mem, size, flags,
>blocks);
> if (ret)
> goto err_free_sg;
>@@ -56,7 +60,8 @@ i915_gem_object_get_pages_buddy(struct
>drm_i915_gem_object *obj)
> list_for_each_entry(block, blocks, link) {
> u64 block_size, offset;
>
>- block_size = i915_buddy_block_size(&mem->mm, block);
>+ block_size = min_t(u64, size,
>+ i915_buddy_block_size(&mem->mm,
>block));
> offset = i915_buddy_block_offset(block);
>
> GEM_BUG_ON(overflows_type(block_size, sg->length));
>@@ -98,10 +103,12 @@ i915_gem_object_get_pages_buddy(struct
>drm_i915_gem_object *obj)
> }
>
> void i915_gem_object_init_memory_region(struct drm_i915_gem_object
>*obj,
>- struct intel_memory_region *mem)
>+ struct intel_memory_region *mem,
>+ unsigned long flags)
> {
> INIT_LIST_HEAD(&obj->mm.blocks);
> obj->mm.region = mem;
>+ obj->flags = flags;
> }
>
> void i915_gem_object_release_memory_region(struct
>drm_i915_gem_object *obj)
>@@ -115,6 +122,8 @@ i915_gem_object_create_region(struct
>intel_memory_region *mem,
> {
> struct drm_i915_gem_object *obj;
>
>+ GEM_BUG_ON(flags & ~I915_BO_ALLOC_FLAGS);
>+
> if (!mem)
> return ERR_PTR(-ENODEV);
>
>diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.h
>b/drivers/gpu/drm/i915/gem/i915_gem_region.h
>index ebddc86d78f7..f2ff6f8bff74 100644
>--- a/drivers/gpu/drm/i915/gem/i915_gem_region.h
>+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.h
>@@ -17,7 +17,8 @@ void i915_gem_object_put_pages_buddy(struct
>drm_i915_gem_object *obj,
> struct sg_table *pages);
>
> void i915_gem_object_init_memory_region(struct drm_i915_gem_object
>*obj,
>- struct intel_memory_region *mem);
>+ struct intel_memory_region *mem,
>+ unsigned long flags);
> void i915_gem_object_release_memory_region(struct
>drm_i915_gem_object *obj);
>
> struct drm_i915_gem_object *
>diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
>b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
>index 4e1805aaeb99..f9fbf2865782 100644
>--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
>+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
>@@ -471,7 +471,8 @@ static int igt_mock_memory_region_huge_pages(void
>*arg)
> unsigned int page_size = BIT(bit);
> resource_size_t phys;
>
>- obj = i915_gem_object_create_region(mem, page_size, 0);
>+ obj = i915_gem_object_create_region(mem, page_size,
>+
>I915_BO_ALLOC_CONTIGUOUS);
> if (IS_ERR(obj)) {
> err = PTR_ERR(obj);
> goto out_destroy_device;
>diff --git a/drivers/gpu/drm/i915/intel_memory_region.c
>b/drivers/gpu/drm/i915/intel_memory_region.c
>index e48d5c37c4df..7a66872d9eac 100644
>--- a/drivers/gpu/drm/i915/intel_memory_region.c
>+++ b/drivers/gpu/drm/i915/intel_memory_region.c
>@@ -47,8 +47,8 @@ __intel_memory_region_get_pages_buddy(struct
>intel_memory_region *mem,
> unsigned int flags,
> struct list_head *blocks)
> {
>- unsigned long n_pages = size >> ilog2(mem->mm.chunk_size);
> unsigned int min_order = 0;
>+ unsigned long n_pages;
>
> GEM_BUG_ON(!IS_ALIGNED(size, mem->mm.chunk_size));
> GEM_BUG_ON(!list_empty(blocks));
>@@ -58,6 +58,13 @@ __intel_memory_region_get_pages_buddy(struct
>intel_memory_region *mem,
> ilog2(mem->mm.chunk_size);
> }
>
>+ if (flags & I915_ALLOC_CONTIGUOUS) {
>+ size = roundup_pow_of_two(size);
>+ min_order = ilog2(size) - ilog2(mem->mm.chunk_size);
>+ }
>+
>+ n_pages = size >> ilog2(mem->mm.chunk_size);
>+
> mutex_lock(&mem->mm_lock);
>
> do {
>@@ -104,7 +111,9 @@ __intel_memory_region_get_block_buddy(struct
>intel_memory_region *mem,
> int ret;
>
> INIT_LIST_HEAD(&blocks);
>- ret = __intel_memory_region_get_pages_buddy(mem, size, 0,
>&blocks);
>+ ret = __intel_memory_region_get_pages_buddy(mem, size,
>+ I915_ALLOC_CONTIGUOUS,
>+ &blocks);
> if (ret)
> return ERR_PTR(ret);
>
>diff --git a/drivers/gpu/drm/i915/intel_memory_region.h
>b/drivers/gpu/drm/i915/intel_memory_region.h
>index ae1ce298bcd1..1dad51b2fc96 100644
>--- a/drivers/gpu/drm/i915/intel_memory_region.h
>+++ b/drivers/gpu/drm/i915/intel_memory_region.h
>@@ -17,7 +17,8 @@ struct drm_i915_gem_object;
> struct intel_memory_region;
> struct sg_table;
>
>-#define I915_ALLOC_MIN_PAGE_SIZE BIT(0)
>+#define I915_ALLOC_MIN_PAGE_SIZE BIT(0)
>+#define I915_ALLOC_CONTIGUOUS BIT(1)
>
> struct intel_memory_region_ops {
> unsigned int flags;
>diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
>b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
>index 54f9a624b4e1..c43d00ec38ea 100644
>--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
>+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
>@@ -13,6 +13,7 @@
>
> #include "gem/i915_gem_region.h"
> #include "gem/selftests/mock_context.h"
>+#include "selftests/i915_random.h"
>
> static void close_objects(struct list_head *objects)
> {
>@@ -89,10 +90,172 @@ static int igt_mock_fill(void *arg)
> return err;
> }
>
>+static struct drm_i915_gem_object *
>+igt_object_create(struct intel_memory_region *mem,
>+ struct list_head *objects,
>+ u64 size,
>+ unsigned int flags)
>+{
>+ struct drm_i915_gem_object *obj;
>+ int err;
>+
>+ obj = i915_gem_object_create_region(mem, size, flags);
>+ if (IS_ERR(obj))
>+ return obj;
>+
>+ err = i915_gem_object_pin_pages(obj);
>+ if (err)
>+ goto put;
>+
>+ list_add(&obj->st_link, objects);
>+ return obj;
>+
>+put:
>+ i915_gem_object_put(obj);
>+ return ERR_PTR(err);
>+}
>+
>+void igt_object_release(struct drm_i915_gem_object *obj)
>+{
>+ i915_gem_object_unpin_pages(obj);
>+ __i915_gem_object_put_pages(obj, I915_MM_NORMAL);
>+ i915_gem_object_put(obj);
>+ list_del(&obj->st_link);
>+}
>+
>+static int igt_mock_continuous(void *arg)
>+{
>+ struct intel_memory_region *mem = arg;
>+ struct drm_i915_gem_object *obj;
>+ unsigned long n_objects;
>+ LIST_HEAD(objects);
>+ LIST_HEAD(holes);
>+ I915_RND_STATE(prng);
>+ resource_size_t target;
>+ resource_size_t total;
>+ resource_size_t min;
>+ int err = 0;
>+
>+ total = resource_size(&mem->region);
>+
>+ /* Min size */
>+ obj = igt_object_create(mem, &objects, mem->mm.chunk_size,
>+ I915_BO_ALLOC_CONTIGUOUS);
>+ if (IS_ERR(obj))
>+ return PTR_ERR(obj);
>+
>+ if (obj->mm.pages->nents != 1) {
>+ pr_err("%s min object spans multiple sg entries\n", __func__);
>+ err = -EINVAL;
>+ goto err_close_objects;
>+ }
>+
>+ igt_object_release(obj);
>+
>+ /* Max size */
>+ obj = igt_object_create(mem, &objects, total, I915_BO_ALLOC_CONTIGUOUS);
>+ if (IS_ERR(obj))
>+ return PTR_ERR(obj);
>+
>+ if (obj->mm.pages->nents != 1) {
>+ pr_err("%s max object spans multiple sg entries\n", __func__);
>+ err = -EINVAL;
>+ goto err_close_objects;
>+ }
>+
>+ igt_object_release(obj);
>+
>+ /* Internal fragmentation should not bleed into the object size */
>+ target = round_up(prandom_u32_state(&prng) % total, PAGE_SIZE);
>+ target = max_t(u64, PAGE_SIZE, target);
>+
>+ obj = igt_object_create(mem, &objects, target,
>+ I915_BO_ALLOC_CONTIGUOUS);
>+ if (IS_ERR(obj))
>+ return PTR_ERR(obj);
>+
>+ if (obj->base.size != target) {
>+ pr_err("%s obj->base.size(%llx) != target(%llx)\n", __func__,
>+ (u64)obj->base.size, (u64)target);
>+ err = -EINVAL;
>+ goto err_close_objects;
>+ }
>+
>+ if (obj->mm.pages->nents != 1) {
>+ pr_err("%s object spans multiple sg entries\n", __func__);
>+ err = -EINVAL;
>+ goto err_close_objects;
>+ }
>+
>+ igt_object_release(obj);
>+
>+ /*
>+ * Try to fragment the address space, such that half of it is free, but
>+ * the max contiguous block size is SZ_64K.
>+ */
>+
>+ target = SZ_64K;
>+ n_objects = div64_u64(total, target);
>+
>+ while (n_objects--) {
>+ struct list_head *list;
>+
>+ if (n_objects % 2)
>+ list = &holes;
>+ else
>+ list = &objects;
>+
>+ obj = igt_object_create(mem, list, target,
>+ I915_BO_ALLOC_CONTIGUOUS);
>+ if (IS_ERR(obj)) {
>+ err = PTR_ERR(obj);
>+ goto err_close_objects;
>+ }
>+ }
>+
>+ close_objects(&holes);
>+
>+ min = target;
>+ target = total >> 1;
>+
>+ /* Make sure we can still allocate all the fragmented space */
>+ obj = igt_object_create(mem, &objects, target, 0);
>+ if (IS_ERR(obj))
>+ return PTR_ERR(obj);
>+
>+ igt_object_release(obj);
>+
>+ /*
>+ * Even though we have enough free space, we don't have a big enough
>+ * contiguous block. Make sure that holds true.
>+ */
>+
>+ do {
>+ bool should_fail = target > min;
>+
>+ obj = igt_object_create(mem, &objects, target,
>+ I915_BO_ALLOC_CONTIGUOUS);
>+ if (should_fail != IS_ERR(obj)) {
>+ pr_err("%s target allocation(%llx) mismatch\n",
>+ __func__, (u64)target);
>+ err = -EINVAL;
>+ goto err_close_objects;
>+ }
>+
>+ target >>= 1;
>+ } while (target >= mem->mm.chunk_size);
>+
>+err_close_objects:
>+ list_splice_tail(&holes, &objects);
>+ close_objects(&objects);
>+ return err;
>+}
>+
> int intel_memory_region_mock_selftests(void)
> {
> static const struct i915_subtest tests[] = {
> SUBTEST(igt_mock_fill),
>+ SUBTEST(igt_mock_continuous),
> };
> struct intel_memory_region *mem;
> struct drm_i915_private *i915;
>diff --git a/drivers/gpu/drm/i915/selftests/mock_region.c b/drivers/gpu/drm/i915/selftests/mock_region.c
>index 0e9a575ede3b..7b0c99ddc2d5 100644
>--- a/drivers/gpu/drm/i915/selftests/mock_region.c
>+++ b/drivers/gpu/drm/i915/selftests/mock_region.c
>@@ -36,7 +36,7 @@ mock_object_create(struct intel_memory_region *mem,
>
> i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
>
>- i915_gem_object_init_memory_region(obj, mem);
>+ i915_gem_object_init_memory_region(obj, mem, flags);
>
> return obj;
> }
>--
>2.20.1
>
>_______________________________________________
>Intel-gfx mailing list
>Intel-gfx@lists.freedesktop.org
>https://lists.freedesktop.org/mailman/listinfo/intel-gfx
^ permalink raw reply [flat|nested] 50+ messages in thread
* [PATCH 05/22] drm/i915/region: support volatile objects
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (3 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 04/22] drm/i915/region: support continuous allocations Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 17:33 ` [PATCH 06/22] drm/i915: Add memory region information to device_info Matthew Auld
` (20 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
Volatile objects are marked as DONTNEED while pinned; once unpinned, the
backing store can be discarded. This is limited to kernel internal
objects.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: CQ Tang <cq.tang@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_internal.c | 17 +++++++++--------
drivers/gpu/drm/i915/gem/i915_gem_object.h | 6 ++++++
.../gpu/drm/i915/gem/i915_gem_object_types.h | 9 ++++++++-
drivers/gpu/drm/i915/gem/i915_gem_pages.c | 6 ++++++
drivers/gpu/drm/i915/gem/i915_gem_region.c | 12 ++++++++++++
drivers/gpu/drm/i915/gem/selftests/huge_pages.c | 12 ++++--------
drivers/gpu/drm/i915/intel_memory_region.c | 4 ++++
drivers/gpu/drm/i915/intel_memory_region.h | 5 +++++
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 5 ++---
9 files changed, 56 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index 0c41e04ab8fa..5e72cb1cc2d3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -117,13 +117,6 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
goto err;
}
- /* Mark the pages as dontneed whilst they are still pinned. As soon
- * as they are unpinned they are allowed to be reaped by the shrinker,
- * and the caller is expected to repopulate - the contents of this
- * object are only valid whilst active and pinned.
- */
- obj->mm.madv = I915_MADV_DONTNEED;
-
__i915_gem_object_set_pages(obj, st, sg_page_sizes);
return 0;
@@ -143,7 +136,6 @@ static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
internal_free_pages(pages);
obj->mm.dirty = false;
- obj->mm.madv = I915_MADV_WILLNEED;
}
static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
@@ -188,6 +180,15 @@ i915_gem_object_create_internal(struct drm_i915_private *i915,
drm_gem_private_object_init(&i915->drm, &obj->base, size);
i915_gem_object_init(obj, &i915_gem_object_internal_ops);
+ /*
+ * Mark the object as volatile, such that the pages are marked as
+ * dontneed whilst they are still pinned. As soon as they are unpinned
+ * they are allowed to be reaped by the shrinker, and the caller is
+ * expected to repopulate - the contents of this object are only valid
+ * whilst active and pinned.
+ */
+ obj->flags = I915_BO_ALLOC_VOLATILE;
+
obj->read_domains = I915_GEM_DOMAIN_CPU;
obj->write_domain = I915_GEM_DOMAIN_CPU;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index 29b9eddc4c7f..d5839cbd82c0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -122,6 +122,12 @@ i915_gem_object_lock_fence(struct drm_i915_gem_object *obj);
void i915_gem_object_unlock_fence(struct drm_i915_gem_object *obj,
struct dma_fence *fence);
+static inline bool
+i915_gem_object_is_volatile(const struct drm_i915_gem_object *obj)
+{
+ return obj->flags & I915_BO_ALLOC_VOLATILE;
+}
+
static inline void
i915_gem_object_set_readonly(struct drm_i915_gem_object *obj)
{
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 7acd383f174f..0d934b67e547 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -119,7 +119,8 @@ struct drm_i915_gem_object {
unsigned long flags;
#define I915_BO_ALLOC_CONTIGUOUS BIT(0)
-#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS)
+#define I915_BO_ALLOC_VOLATILE BIT(1)
+#define I915_BO_ALLOC_FLAGS (I915_BO_ALLOC_CONTIGUOUS | I915_BO_ALLOC_VOLATILE)
/*
* Is the object to be mapped as read-only to the GPU
@@ -170,6 +171,12 @@ struct drm_i915_gem_object {
* List of memory region blocks allocated for this object.
*/
struct list_head blocks;
+ /**
+ * Element within memory_region->objects or region->purgeable
+ * if the object is marked as DONTNEED. Access is protected by
+ * region->obj_lock.
+ */
+ struct list_head region_link;
struct sg_table *pages;
void *mapping;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 2e941f093a20..b0ec0959c13f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -18,6 +18,9 @@ void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
lockdep_assert_held(&obj->mm.lock);
+ if (i915_gem_object_is_volatile(obj))
+ obj->mm.madv = I915_MADV_DONTNEED;
+
/* Make the pages coherent with the GPU (flushing any swapin). */
if (obj->cache_dirty) {
obj->write_domain = 0;
@@ -160,6 +163,9 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
if (IS_ERR_OR_NULL(pages))
return pages;
+ if (i915_gem_object_is_volatile(obj))
+ obj->mm.madv = I915_MADV_WILLNEED;
+
i915_gem_object_make_unshrinkable(obj);
if (obj->mm.mapping) {
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index b317a5c84144..e9550e0364cc 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -109,10 +109,22 @@ void i915_gem_object_init_memory_region(struct drm_i915_gem_object *obj,
INIT_LIST_HEAD(&obj->mm.blocks);
obj->mm.region = mem;
obj->flags = flags;
+
+ mutex_lock(&mem->obj_lock);
+
+ if (obj->flags & I915_BO_ALLOC_VOLATILE)
+ list_add(&obj->mm.region_link, &mem->purgeable);
+ else
+ list_add(&obj->mm.region_link, &mem->objects);
+
+ mutex_unlock(&mem->obj_lock);
}
void i915_gem_object_release_memory_region(struct drm_i915_gem_object *obj)
{
+ mutex_lock(&obj->mm.region->obj_lock);
+ list_del(&obj->mm.region_link);
+ mutex_unlock(&obj->mm.region->obj_lock);
}
struct drm_i915_gem_object *
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index f9fbf2865782..b6dc90030156 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -115,8 +115,6 @@ static int get_huge_pages(struct drm_i915_gem_object *obj)
if (i915_gem_gtt_prepare_pages(obj, st))
goto err;
- obj->mm.madv = I915_MADV_DONTNEED;
-
GEM_BUG_ON(sg_page_sizes != obj->mm.page_mask);
__i915_gem_object_set_pages(obj, st, sg_page_sizes);
@@ -137,7 +135,6 @@ static void put_huge_pages(struct drm_i915_gem_object *obj,
huge_pages_free_pages(pages);
obj->mm.dirty = false;
- obj->mm.madv = I915_MADV_WILLNEED;
}
static const struct drm_i915_gem_object_ops huge_page_ops = {
@@ -170,6 +167,8 @@ huge_pages_object(struct drm_i915_private *i915,
drm_gem_private_object_init(&i915->drm, &obj->base, size);
i915_gem_object_init(obj, &huge_page_ops);
+ obj->flags = I915_BO_ALLOC_VOLATILE;
+
obj->write_domain = I915_GEM_DOMAIN_CPU;
obj->read_domains = I915_GEM_DOMAIN_CPU;
obj->cache_level = I915_CACHE_NONE;
@@ -229,8 +228,6 @@ static int fake_get_huge_pages(struct drm_i915_gem_object *obj)
i915_sg_trim(st);
- obj->mm.madv = I915_MADV_DONTNEED;
-
__i915_gem_object_set_pages(obj, st, sg_page_sizes);
return 0;
@@ -263,8 +260,6 @@ static int fake_get_huge_pages_single(struct drm_i915_gem_object *obj)
sg_dma_len(sg) = obj->base.size;
sg_dma_address(sg) = page_size;
- obj->mm.madv = I915_MADV_DONTNEED;
-
__i915_gem_object_set_pages(obj, st, sg->length);
return 0;
@@ -283,7 +278,6 @@ static void fake_put_huge_pages(struct drm_i915_gem_object *obj,
{
fake_free_huge_pages(obj, pages);
obj->mm.dirty = false;
- obj->mm.madv = I915_MADV_WILLNEED;
}
static const struct drm_i915_gem_object_ops fake_ops = {
@@ -323,6 +317,8 @@ fake_huge_pages_object(struct drm_i915_private *i915, u64 size, bool single)
else
i915_gem_object_init(obj, &fake_ops);
+ obj->flags = I915_BO_ALLOC_VOLATILE;
+
obj->write_domain = I915_GEM_DOMAIN_CPU;
obj->read_domains = I915_GEM_DOMAIN_CPU;
obj->cache_level = I915_CACHE_NONE;
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index 7a66872d9eac..fba07f71d9bd 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -154,6 +154,10 @@ intel_memory_region_create(struct drm_i915_private *i915,
mem->min_page_size = min_page_size;
mem->ops = ops;
+ mutex_init(&mem->obj_lock);
+ INIT_LIST_HEAD(&mem->objects);
+ INIT_LIST_HEAD(&mem->purgeable);
+
mutex_init(&mem->mm_lock);
if (ops->init) {
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 1dad51b2fc96..095f5a8b77af 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -49,6 +49,11 @@ struct intel_memory_region {
unsigned int type;
unsigned int instance;
unsigned int id;
+
+ /* Protects access to objects and purgeable */
+ struct mutex obj_lock;
+ struct list_head objects;
+ struct list_head purgeable;
};
int intel_memory_region_init_buddy(struct intel_memory_region *mem);
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index 0d40e0b42923..f4d7b254c9a7 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -91,8 +91,6 @@ static int fake_get_pages(struct drm_i915_gem_object *obj)
}
GEM_BUG_ON(rem);
- obj->mm.madv = I915_MADV_DONTNEED;
-
__i915_gem_object_set_pages(obj, pages, sg_page_sizes);
return 0;
@@ -104,7 +102,6 @@ static void fake_put_pages(struct drm_i915_gem_object *obj,
{
fake_free_pages(obj, pages);
obj->mm.dirty = false;
- obj->mm.madv = I915_MADV_WILLNEED;
}
static const struct drm_i915_gem_object_ops fake_ops = {
@@ -131,6 +128,8 @@ fake_dma_object(struct drm_i915_private *i915, u64 size)
drm_gem_private_object_init(&i915->drm, &obj->base, size);
i915_gem_object_init(obj, &fake_ops);
+ obj->flags = I915_BO_ALLOC_VOLATILE;
+
obj->write_domain = I915_GEM_DOMAIN_CPU;
obj->read_domains = I915_GEM_DOMAIN_CPU;
obj->cache_level = I915_CACHE_NONE;
--
2.20.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH 06/22] drm/i915: Add memory region information to device_info
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (4 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 05/22] drm/i915/region: support volatile objects Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 17:33 ` [PATCH 07/22] drm/i915: support creating LMEM objects Matthew Auld
` (19 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Exposes available regions for the platform. Shared memory will
always be available.
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 2 ++
drivers/gpu/drm/i915/intel_device_info.h | 2 ++
2 files changed, 4 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8ed4b8c2484f..93116cc8b149 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2170,6 +2170,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
#define HAS_IPC(dev_priv) (INTEL_INFO(dev_priv)->display.has_ipc)
+#define HAS_REGION(i915, i) (INTEL_INFO(i915)->memory_regions & (i))
+
#define HAS_GT_UC(dev_priv) (INTEL_INFO(dev_priv)->has_gt_uc)
/* Having GuC is not the same as using GuC */
diff --git a/drivers/gpu/drm/i915/intel_device_info.h b/drivers/gpu/drm/i915/intel_device_info.h
index 0cdc2465534b..e9940f932d26 100644
--- a/drivers/gpu/drm/i915/intel_device_info.h
+++ b/drivers/gpu/drm/i915/intel_device_info.h
@@ -160,6 +160,8 @@ struct intel_device_info {
unsigned int page_sizes; /* page sizes supported by the HW */
+ u32 memory_regions; /* regions supported by the HW */
+
u32 display_mmio_offset;
u8 pipe_mask;
--
2.20.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH 07/22] drm/i915: support creating LMEM objects
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (5 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 06/22] drm/i915: Add memory region information to device_info Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 18:45 ` Chris Wilson
2019-09-27 17:33 ` [PATCH 08/22] drm/i915: setup io-mapping for LMEM Matthew Auld
` (18 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
We currently define LMEM, or local memory, as just another memory
region, like system memory or stolen memory, which we can expose to
userspace and which can be mapped to the CPU via some BAR.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
---
drivers/gpu/drm/i915/Makefile | 2 +
drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 31 +++++++++++++
drivers/gpu/drm/i915/gem/i915_gem_lmem.h | 23 ++++++++++
drivers/gpu/drm/i915/i915_drv.h | 5 +++
drivers/gpu/drm/i915/intel_memory_region.c | 6 +++
drivers/gpu/drm/i915/intel_memory_region.h | 30 +++++++++++++
drivers/gpu/drm/i915/intel_region_lmem.c | 43 ++++++++++++++++++
drivers/gpu/drm/i915/intel_region_lmem.h | 11 +++++
.../drm/i915/selftests/i915_live_selftests.h | 1 +
.../drm/i915/selftests/intel_memory_region.c | 45 +++++++++++++++++++
10 files changed, 197 insertions(+)
create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_lmem.c
create mode 100644 drivers/gpu/drm/i915/gem/i915_gem_lmem.h
create mode 100644 drivers/gpu/drm/i915/intel_region_lmem.c
create mode 100644 drivers/gpu/drm/i915/intel_region_lmem.h
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index d849dff31f76..ccf4223ed3f9 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -115,6 +115,7 @@ gem-y += \
gem/i915_gem_internal.o \
gem/i915_gem_object.o \
gem/i915_gem_object_blt.o \
+ gem/i915_gem_lmem.o \
gem/i915_gem_mman.o \
gem/i915_gem_pages.o \
gem/i915_gem_phys.o \
@@ -143,6 +144,7 @@ i915-y += \
i915_scheduler.o \
i915_trace_points.o \
i915_vma.o \
+ intel_region_lmem.o \
intel_wopcm.o
# general-purpose microcontroller (GuC) support
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
new file mode 100644
index 000000000000..26a23304df32
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -0,0 +1,31 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "intel_memory_region.h"
+#include "gem/i915_gem_region.h"
+#include "gem/i915_gem_lmem.h"
+#include "i915_drv.h"
+
+const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = {
+ .get_pages = i915_gem_object_get_pages_buddy,
+ .put_pages = i915_gem_object_put_pages_buddy,
+ .release = i915_gem_object_release_memory_region,
+};
+
+bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj)
+{
+ struct intel_memory_region *region = obj->mm.region;
+
+ return region && region->type == INTEL_LMEM;
+}
+
+struct drm_i915_gem_object *
+i915_gem_object_create_lmem(struct drm_i915_private *i915,
+ resource_size_t size,
+ unsigned int flags)
+{
+ return i915_gem_object_create_region(i915->mm.regions[INTEL_MEMORY_LMEM],
+ size, flags);
+}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
new file mode 100644
index 000000000000..ebc15fe24f58
--- /dev/null
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
@@ -0,0 +1,23 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __I915_GEM_LMEM_H
+#define __I915_GEM_LMEM_H
+
+#include <linux/types.h>
+
+struct drm_i915_private;
+struct drm_i915_gem_object;
+
+extern const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops;
+
+bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj);
+
+struct drm_i915_gem_object *
+i915_gem_object_create_lmem(struct drm_i915_private *i915,
+ resource_size_t size,
+ unsigned int flags);
+
+#endif /* !__I915_GEM_LMEM_H */
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 93116cc8b149..05a6491690f7 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -100,6 +100,8 @@
#include "i915_vma.h"
#include "i915_irq.h"
+#include "intel_region_lmem.h"
+
#include "intel_gvt.h"
/* General customization:
@@ -686,6 +688,8 @@ struct i915_gem_mm {
*/
struct vfsmount *gemfs;
+ struct intel_memory_region *regions[INTEL_MEMORY_UKNOWN];
+
struct notifier_block oom_notifier;
struct notifier_block vmap_notifier;
struct shrinker shrinker;
@@ -2171,6 +2175,7 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
#define HAS_IPC(dev_priv) (INTEL_INFO(dev_priv)->display.has_ipc)
#define HAS_REGION(i915, i) (INTEL_INFO(i915)->memory_regions & (i))
+#define HAS_LMEM(i915) HAS_REGION(i915, REGION_LMEM)
#define HAS_GT_UC(dev_priv) (INTEL_INFO(dev_priv)->has_gt_uc)
diff --git a/drivers/gpu/drm/i915/intel_memory_region.c b/drivers/gpu/drm/i915/intel_memory_region.c
index fba07f71d9bd..703c615331c0 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/intel_memory_region.c
@@ -6,6 +6,12 @@
#include "intel_memory_region.h"
#include "i915_drv.h"
+const u32 intel_region_map[] = {
+ [INTEL_MEMORY_SMEM] = BIT(INTEL_SMEM + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
+ [INTEL_MEMORY_LMEM] = BIT(INTEL_LMEM + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
+ [INTEL_MEMORY_STOLEN] = BIT(INTEL_STOLEN + INTEL_MEMORY_TYPE_SHIFT) | BIT(0),
+};
+
static u64
intel_memory_region_free_pages(struct intel_memory_region *mem,
struct list_head *blocks)
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 095f5a8b77af..9ef2ec760a4b 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -17,9 +17,39 @@ struct drm_i915_gem_object;
struct intel_memory_region;
struct sg_table;
+/**
+ * Base memory type
+ */
+enum intel_memory_type {
+ INTEL_SMEM = 0,
+ INTEL_LMEM,
+ INTEL_STOLEN,
+};
+
+enum intel_region_id {
+ INTEL_MEMORY_SMEM = 0,
+ INTEL_MEMORY_LMEM,
+ INTEL_MEMORY_STOLEN,
+ INTEL_MEMORY_UKNOWN, /* Should be last */
+};
+
+#define REGION_SMEM BIT(INTEL_MEMORY_SMEM)
+#define REGION_LMEM BIT(INTEL_MEMORY_LMEM)
+#define REGION_STOLEN BIT(INTEL_MEMORY_STOLEN)
+
+#define INTEL_MEMORY_TYPE_SHIFT 16
+
+#define MEMORY_TYPE_FROM_REGION(r) (ilog2(r >> INTEL_MEMORY_TYPE_SHIFT))
+#define MEMORY_INSTANCE_FROM_REGION(r) (ilog2(r & 0xffff))
+
#define I915_ALLOC_MIN_PAGE_SIZE BIT(0)
#define I915_ALLOC_CONTIGUOUS BIT(1)
+/**
+ * Memory regions encoded as type | instance
+ */
+extern const u32 intel_region_map[];
+
struct intel_memory_region_ops {
unsigned int flags;
diff --git a/drivers/gpu/drm/i915/intel_region_lmem.c b/drivers/gpu/drm/i915/intel_region_lmem.c
new file mode 100644
index 000000000000..7a3f96e1f766
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_region_lmem.c
@@ -0,0 +1,43 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#include "i915_drv.h"
+#include "intel_memory_region.h"
+#include "gem/i915_gem_lmem.h"
+#include "gem/i915_gem_region.h"
+#include "intel_region_lmem.h"
+
+static struct drm_i915_gem_object *
+lmem_create_object(struct intel_memory_region *mem,
+ resource_size_t size,
+ unsigned int flags)
+{
+ struct drm_i915_private *i915 = mem->i915;
+ struct drm_i915_gem_object *obj;
+
+ if (size > BIT(mem->mm.max_order) * mem->mm.chunk_size)
+ return ERR_PTR(-E2BIG);
+
+ obj = i915_gem_object_alloc();
+ if (!obj)
+ return ERR_PTR(-ENOMEM);
+
+ drm_gem_private_object_init(&i915->drm, &obj->base, size);
+ i915_gem_object_init(obj, &i915_gem_lmem_obj_ops);
+
+ obj->read_domains = I915_GEM_DOMAIN_WC | I915_GEM_DOMAIN_GTT;
+
+ i915_gem_object_set_cache_coherency(obj, I915_CACHE_NONE);
+
+ i915_gem_object_init_memory_region(obj, mem, flags);
+
+ return obj;
+}
+
+const struct intel_memory_region_ops intel_region_lmem_ops = {
+ .init = intel_memory_region_init_buddy,
+ .release = intel_memory_region_release_buddy,
+ .create_object = lmem_create_object,
+};
diff --git a/drivers/gpu/drm/i915/intel_region_lmem.h b/drivers/gpu/drm/i915/intel_region_lmem.h
new file mode 100644
index 000000000000..ed2a3bab6443
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_region_lmem.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Copyright © 2019 Intel Corporation
+ */
+
+#ifndef __INTEL_REGION_LMEM_H
+#define __INTEL_REGION_LMEM_H
+
+extern const struct intel_memory_region_ops intel_region_lmem_ops;
+
+#endif /* !__INTEL_REGION_LMEM_H */
diff --git a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
index 66d83c1390c1..b4a507c7ec1d 100644
--- a/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
+++ b/drivers/gpu/drm/i915/selftests/i915_live_selftests.h
@@ -31,6 +31,7 @@ selftest(gem_contexts, i915_gem_context_live_selftests)
selftest(blt, i915_gem_object_blt_live_selftests)
selftest(client, i915_gem_client_blt_live_selftests)
selftest(reset, intel_reset_live_selftests)
+selftest(memory_region, intel_memory_region_live_selftests)
selftest(hangcheck, intel_hangcheck_live_selftests)
selftest(execlists, intel_execlists_live_selftests)
selftest(guc, intel_guc_live_selftest)
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index c43d00ec38ea..1e9a0eef17fc 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -11,8 +11,10 @@
#include "mock_gem_device.h"
#include "mock_region.h"
+#include "gem/i915_gem_lmem.h"
#include "gem/i915_gem_region.h"
#include "gem/selftests/mock_context.h"
+#include "gt/intel_gt.h"
#include "selftests/i915_random.h"
static void close_objects(struct list_head *objects)
@@ -251,6 +253,27 @@ static int igt_mock_continuous(void *arg)
return err;
}
+static int igt_lmem_create(void *arg)
+{
+ struct drm_i915_private *i915 = arg;
+ struct drm_i915_gem_object *obj;
+ int err = 0;
+
+ obj = i915_gem_object_create_lmem(i915, PAGE_SIZE, 0);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
+
+ err = i915_gem_object_pin_pages(obj);
+ if (err)
+ goto out_put;
+
+ i915_gem_object_unpin_pages(obj);
+out_put:
+ i915_gem_object_put(obj);
+
+ return err;
+}
+
int intel_memory_region_mock_selftests(void)
{
static const struct i915_subtest tests[] = {
@@ -285,3 +308,25 @@ int intel_memory_region_mock_selftests(void)
return err;
}
+
+int intel_memory_region_live_selftests(struct drm_i915_private *i915)
+{
+ static const struct i915_subtest tests[] = {
+ SUBTEST(igt_lmem_create),
+ };
+ int err;
+
+ if (!HAS_LMEM(i915)) {
+ pr_info("device lacks LMEM support, skipping\n");
+ return 0;
+ }
+
+ if (intel_gt_is_wedged(&i915->gt))
+ return 0;
+
+ mutex_lock(&i915->drm.struct_mutex);
+ err = i915_subtests(tests, i915);
+ mutex_unlock(&i915->drm.struct_mutex);
+
+ return err;
+}
--
2.20.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH 08/22] drm/i915: setup io-mapping for LMEM
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (6 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 07/22] drm/i915: support creating LMEM objects Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 17:33 ` [PATCH 09/22] drm/i915/lmem: support kernel mapping Matthew Auld
` (17 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/intel_region_lmem.c | 28 ++++++++++++++++++++++--
1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/intel_region_lmem.c b/drivers/gpu/drm/i915/intel_region_lmem.c
index 7a3f96e1f766..051069664074 100644
--- a/drivers/gpu/drm/i915/intel_region_lmem.c
+++ b/drivers/gpu/drm/i915/intel_region_lmem.c
@@ -36,8 +36,32 @@ lmem_create_object(struct intel_memory_region *mem,
return obj;
}
+static void
+region_lmem_release(struct intel_memory_region *mem)
+{
+ io_mapping_fini(&mem->iomap);
+ intel_memory_region_release_buddy(mem);
+}
+
+static int
+region_lmem_init(struct intel_memory_region *mem)
+{
+ int ret;
+
+ if (!io_mapping_init_wc(&mem->iomap,
+ mem->io_start,
+ resource_size(&mem->region)))
+ return -EIO;
+
+ ret = intel_memory_region_init_buddy(mem);
+ if (ret)
+ io_mapping_fini(&mem->iomap);
+
+ return ret;
+}
+
const struct intel_memory_region_ops intel_region_lmem_ops = {
- .init = intel_memory_region_init_buddy,
- .release = intel_memory_region_release_buddy,
+ .init = region_lmem_init,
+ .release = region_lmem_release,
.create_object = lmem_create_object,
};
--
2.20.1
^ permalink raw reply related [flat|nested] 50+ messages in thread
* [PATCH 09/22] drm/i915/lmem: support kernel mapping
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (7 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 08/22] drm/i915: setup io-mapping for LMEM Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 19:24 ` Chris Wilson
2019-09-27 20:39 ` Chris Wilson
2019-09-27 17:33 ` [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM Matthew Auld
` (16 subsequent siblings)
25 siblings, 2 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
We can create LMEM objects, but we also need to support mapping them
into kernel space for internal use.
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Steve Hampson <steven.t.hampson@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_internal.c | 4 +-
drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 36 +++++++++
drivers/gpu/drm/i915/gem/i915_gem_lmem.h | 8 ++
drivers/gpu/drm/i915/gem/i915_gem_object.h | 6 ++
.../gpu/drm/i915/gem/i915_gem_object_types.h | 3 +-
drivers/gpu/drm/i915/gem/i915_gem_pages.c | 20 ++++-
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 3 +-
.../drm/i915/gem/selftests/huge_gem_object.c | 4 +-
.../drm/i915/selftests/intel_memory_region.c | 76 +++++++++++++++++++
9 files changed, 152 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index 5e72cb1cc2d3..c2e237702e8c 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -140,7 +140,9 @@ static void i915_gem_object_put_pages_internal(struct drm_i915_gem_object *obj,
static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
- I915_GEM_OBJECT_IS_SHRINKABLE,
+ I915_GEM_OBJECT_IS_SHRINKABLE |
+ I915_GEM_OBJECT_IS_MAPPABLE,
+
.get_pages = i915_gem_object_get_pages_internal,
.put_pages = i915_gem_object_put_pages_internal,
};
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index 26a23304df32..d7ec74ed5b88 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -9,11 +9,47 @@
#include "i915_drv.h"
const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops = {
+ .flags = I915_GEM_OBJECT_IS_MAPPABLE,
+
.get_pages = i915_gem_object_get_pages_buddy,
.put_pages = i915_gem_object_put_pages_buddy,
.release = i915_gem_object_release_memory_region,
};
+/* XXX: Time to vfunc your life up? */
+void __iomem *i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj,
+ unsigned long n)
+{
+ resource_size_t offset;
+
+ offset = i915_gem_object_get_dma_address(obj, n);
+
+ return io_mapping_map_wc(&obj->mm.region->iomap, offset, PAGE_SIZE);
+}
+
+void __iomem *i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj,
+ unsigned long n)
+{
+ resource_size_t offset;
+
+ offset = i915_gem_object_get_dma_address(obj, n);
+
+ return io_mapping_map_atomic_wc(&obj->mm.region->iomap, offset);
+}
+
+void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
+ unsigned long n,
+ unsigned long size)
+{
+ resource_size_t offset;
+
+ GEM_BUG_ON(!(obj->flags & I915_BO_ALLOC_CONTIGUOUS));
+
+ offset = i915_gem_object_get_dma_address(obj, n);
+
+ return io_mapping_map_wc(&obj->mm.region->iomap, offset, size);
+}
+
bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj)
{
struct intel_memory_region *region = obj->mm.region;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
index ebc15fe24f58..31a6462bdbb6 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.h
@@ -13,6 +13,14 @@ struct drm_i915_gem_object;
extern const struct drm_i915_gem_object_ops i915_gem_lmem_obj_ops;
+void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
+ unsigned long n, unsigned long size);
+void __iomem *i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj,
+ unsigned long n);
+void __iomem *
+i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object *obj,
+ unsigned long n);
+
bool i915_gem_object_is_lmem(struct drm_i915_gem_object *obj);
struct drm_i915_gem_object *
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object.h b/drivers/gpu/drm/i915/gem/i915_gem_object.h
index d5839cbd82c0..e8cc776581d0 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object.h
@@ -158,6 +158,12 @@ i915_gem_object_is_proxy(const struct drm_i915_gem_object *obj)
return obj->ops->flags & I915_GEM_OBJECT_IS_PROXY;
}
+static inline bool
+i915_gem_object_is_mappable(const struct drm_i915_gem_object *obj)
+{
+ return obj->ops->flags & I915_GEM_OBJECT_IS_MAPPABLE;
+}
+
static inline bool
i915_gem_object_needs_async_cancel(const struct drm_i915_gem_object *obj)
{
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
index 0d934b67e547..743898944760 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_object_types.h
@@ -32,7 +32,8 @@ struct drm_i915_gem_object_ops {
#define I915_GEM_OBJECT_HAS_STRUCT_PAGE BIT(0)
#define I915_GEM_OBJECT_IS_SHRINKABLE BIT(1)
#define I915_GEM_OBJECT_IS_PROXY BIT(2)
-#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(3)
+#define I915_GEM_OBJECT_IS_MAPPABLE BIT(3)
+#define I915_GEM_OBJECT_ASYNC_CANCEL BIT(4)
/* Interface between the GEM object and its backing storage.
* get_pages() is called once prior to the use of the associated set
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index b0ec0959c13f..fc4ad29ce881 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -7,6 +7,7 @@
#include "i915_drv.h"
#include "i915_gem_object.h"
#include "i915_scatterlist.h"
+#include "i915_gem_lmem.h"
void __i915_gem_object_set_pages(struct drm_i915_gem_object *obj,
struct sg_table *pages,
@@ -172,7 +173,9 @@ __i915_gem_object_unset_pages(struct drm_i915_gem_object *obj)
void *ptr;
ptr = page_mask_bits(obj->mm.mapping);
- if (is_vmalloc_addr(ptr))
+ if (i915_gem_object_is_lmem(obj))
+ io_mapping_unmap(ptr);
+ else if (is_vmalloc_addr(ptr))
vunmap(ptr);
else
kunmap(kmap_to_page(ptr));
@@ -231,7 +234,7 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
}
/* The 'mapping' part of i915_gem_object_pin_map() below */
-static void *i915_gem_object_map(const struct drm_i915_gem_object *obj,
+static void *i915_gem_object_map(struct drm_i915_gem_object *obj,
enum i915_map_type type)
{
unsigned long n_pages = obj->base.size >> PAGE_SHIFT;
@@ -244,6 +247,13 @@ static void *i915_gem_object_map(const struct drm_i915_gem_object *obj,
pgprot_t pgprot;
void *addr;
+ if (i915_gem_object_is_lmem(obj)) {
+ if (type != I915_MAP_WC)
+ return NULL;
+
+ return i915_gem_object_lmem_io_map(obj, 0, obj->base.size);
+ }
+
/* A single page can always be kmapped */
if (n_pages == 1 && type == I915_MAP_WB)
return kmap(sg_page(sgt->sgl));
@@ -289,7 +299,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
void *ptr;
int err;
- if (unlikely(!i915_gem_object_has_struct_page(obj)))
+ if (unlikely(!i915_gem_object_is_mappable(obj)))
return ERR_PTR(-ENXIO);
err = mutex_lock_interruptible(&obj->mm.lock);
@@ -321,7 +331,9 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
goto err_unpin;
}
- if (is_vmalloc_addr(ptr))
+ if (i915_gem_object_is_lmem(obj))
+ io_mapping_unmap(ptr);
+ else if (is_vmalloc_addr(ptr))
vunmap(ptr);
else
kunmap(kmap_to_page(ptr));
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 4c4954e8ce0a..9f5d903f7793 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -422,7 +422,8 @@ static void shmem_release(struct drm_i915_gem_object *obj)
const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
- I915_GEM_OBJECT_IS_SHRINKABLE,
+ I915_GEM_OBJECT_IS_SHRINKABLE |
+ I915_GEM_OBJECT_IS_MAPPABLE,
.get_pages = shmem_get_pages,
.put_pages = shmem_put_pages,
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
index 3c5d17b2b670..686e0e909280 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_gem_object.c
@@ -86,7 +86,9 @@ static void huge_put_pages(struct drm_i915_gem_object *obj,
static const struct drm_i915_gem_object_ops huge_ops = {
.flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
- I915_GEM_OBJECT_IS_SHRINKABLE,
+ I915_GEM_OBJECT_IS_SHRINKABLE |
+ I915_GEM_OBJECT_IS_MAPPABLE,
+
.get_pages = huge_get_pages,
.put_pages = huge_put_pages,
};
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index 1e9a0eef17fc..ba98e8254b80 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -13,8 +13,10 @@
#include "gem/i915_gem_lmem.h"
#include "gem/i915_gem_region.h"
+#include "gem/i915_gem_object_blt.h"
#include "gem/selftests/mock_context.h"
#include "gt/intel_gt.h"
+#include "selftests/igt_flush_test.h"
#include "selftests/i915_random.h"
static void close_objects(struct list_head *objects)
@@ -274,6 +276,79 @@ static int igt_lmem_create(void *arg)
return err;
}
+static int igt_lmem_write_cpu(void *arg)
+{
+ struct drm_i915_private *i915 = arg;
+ struct intel_context *ce = i915->engine[BCS0]->kernel_context;
+ struct drm_i915_gem_object *obj;
+ struct rnd_state prng;
+ u32 *vaddr;
+ u32 dword;
+ u32 val;
+ u32 sz;
+ int err;
+
+ if (!HAS_ENGINE(i915, BCS0))
+ return 0;
+
+ sz = round_up(prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE);
+
+ obj = i915_gem_object_create_lmem(i915, sz, I915_BO_ALLOC_CONTIGUOUS);
+ if (IS_ERR(obj))
+ return PTR_ERR(obj);
+
+ vaddr = i915_gem_object_pin_map(obj, I915_MAP_WC);
+ if (IS_ERR(vaddr)) {
+ pr_err("Failed to iomap lmembar; err=%d\n", (int)PTR_ERR(vaddr));
+ err = PTR_ERR(vaddr);
+ goto out_put;
+ }
+
+ val = prandom_u32_state(&prng);
+
+ /* Write from gpu and then read from cpu */
+ err = i915_gem_object_fill_blt(obj, ce, val);
+ if (err)
+ goto out_unpin;
+
+ i915_gem_object_lock(obj);
+ err = i915_gem_object_set_to_wc_domain(obj, true);
+ i915_gem_object_unlock(obj);
+ if (err)
+ goto out_unpin;
+
+ for (dword = 0; dword < sz / sizeof(u32); ++dword) {
+ if (vaddr[dword] != val) {
+ pr_err("vaddr[%u]=%u, val=%u\n", dword, vaddr[dword],
+ val);
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ /* Write from the cpu and read again from the cpu */
+ memset32(vaddr, val ^ 0xdeadbeaf, sz / sizeof(u32));
+
+ for (dword = 0; dword < sz / sizeof(u32); ++dword) {
+ if (vaddr[dword] != (val ^ 0xdeadbeaf)) {
+ pr_err("vaddr[%u]=%u, val=%u\n", dword, vaddr[dword],
+ val ^ 0xdeadbeaf);
+ err = -EINVAL;
+ break;
+ }
+ }
+
+out_unpin:
+ i915_gem_object_unpin_map(obj);
+out_put:
+ i915_gem_object_put(obj);
+
+ if (igt_flush_test(i915, I915_WAIT_LOCKED))
+ err = -EIO;
+
+ return err;
+}
+
int intel_memory_region_mock_selftests(void)
{
static const struct i915_subtest tests[] = {
@@ -313,6 +388,7 @@ int intel_memory_region_live_selftests(struct drm_i915_private *i915)
{
static const struct i915_subtest tests[] = {
SUBTEST(igt_lmem_create),
+ SUBTEST(igt_lmem_write_cpu),
};
int err;
--
2.20.1
* Re: [PATCH 09/22] drm/i915/lmem: support kernel mapping
2019-09-27 17:33 ` [PATCH 09/22] drm/i915/lmem: support kernel mapping Matthew Auld
@ 2019-09-27 19:24 ` Chris Wilson
2019-09-27 20:39 ` Chris Wilson
1 sibling, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 19:24 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:56)
> static const struct drm_i915_gem_object_ops i915_gem_object_internal_ops = {
> .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
> - I915_GEM_OBJECT_IS_SHRINKABLE,
> + I915_GEM_OBJECT_IS_SHRINKABLE |
> + I915_GEM_OBJECT_IS_MAPPABLE,
> +
> const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
> .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
> - I915_GEM_OBJECT_IS_SHRINKABLE,
> + I915_GEM_OBJECT_IS_SHRINKABLE |
> + I915_GEM_OBJECT_IS_MAPPABLE,
> static const struct drm_i915_gem_object_ops huge_ops = {
> .flags = I915_GEM_OBJECT_HAS_STRUCT_PAGE |
> - I915_GEM_OBJECT_IS_SHRINKABLE,
> + I915_GEM_OBJECT_IS_SHRINKABLE |
> + I915_GEM_OBJECT_IS_MAPPABLE,
Where's huge_pages and userptr?
In short, anything with HAS_STRUCT_PAGE is also mappable by your definition
(we can use kmap on them). I suggest maybe using HAS_IOMEM and then
if (!(obj->ops->flags & (HAS_STRUCT_PAGE | HAS_IOMEM)))
?
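[The suggestion boils down to classifying "mappable" by backing-store flags rather than a separate opt-in bit. A minimal userspace sketch of that guard — the flag values here are illustrative placeholders, not the real i915_gem_object_types.h definitions:]

```c
#include <stdbool.h>

/* Illustrative stand-ins for the i915 object-ops flags; the real
 * definitions live in i915_gem_object_types.h. */
#define HAS_STRUCT_PAGE (1u << 0)	/* backed by struct pages: kmap/vmap */
#define HAS_IOMEM       (1u << 1)	/* backed by I/O memory: io_mapping */

/* An object is kernel-mappable if it has either backing type, so
 * struct-page-backed objects (shmem, internal, userptr, huge_pages)
 * stay mappable without opting in to a new IS_MAPPABLE bit. */
bool is_mappable(unsigned int flags)
{
	return flags & (HAS_STRUCT_PAGE | HAS_IOMEM);
}
```

[With this shape, i915_gem_object_pin_map() would reject only objects carrying neither flag, instead of requiring every ops table to add IS_MAPPABLE.]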
-Chris
* Re: [PATCH 09/22] drm/i915/lmem: support kernel mapping
2019-09-27 17:33 ` [PATCH 09/22] drm/i915/lmem: support kernel mapping Matthew Auld
2019-09-27 19:24 ` Chris Wilson
@ 2019-09-27 20:39 ` Chris Wilson
1 sibling, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 20:39 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:56)
> +static int igt_lmem_write_cpu(void *arg)
> +{
> + struct drm_i915_private *i915 = arg;
> + struct intel_context *ce = i915->engine[BCS0]->kernel_context;
> + struct drm_i915_gem_object *obj;
> + struct rnd_state prng;
> + u32 *vaddr;
> + u32 dword;
> + u32 val;
> + u32 sz;
> + int err;
> +
> + if (!HAS_ENGINE(i915, BCS0))
> + return 0;
Too late. You've already *i915->engine[BCS0]
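[The bug: the selftest reads i915->engine[BCS0]->kernel_context in its initialiser, before the HAS_ENGINE() guard runs, so a machine without BCS0 dereferences a NULL engine pointer. A mocked-up sketch of the fix — check first, dereference after. The types here are simplified stand-ins, not the real i915 structures:]

```c
#include <stddef.h>

#define BCS0 2u

/* Simplified stand-ins for the real i915 types. */
struct intel_engine {
	void *kernel_context;
};

struct mock_i915 {
	unsigned int engine_mask;
	struct intel_engine *engine[8];
};

#define HAS_ENGINE(i915, n) ((i915)->engine_mask & (1u << (n)))

/* Look up BCS0's kernel context only after confirming the engine
 * exists; the buggy version indexed engine[BCS0] unconditionally,
 * before the HAS_ENGINE() check could bail out. */
void *bcs0_kernel_context(struct mock_i915 *i915)
{
	if (!HAS_ENGINE(i915, BCS0))
		return NULL;

	return i915->engine[BCS0]->kernel_context;
}
```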
-Chris
* [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (8 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 09/22] drm/i915/lmem: support kernel mapping Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 19:27 ` Chris Wilson
2019-09-27 20:42 ` Chris Wilson
2019-09-27 17:33 ` [PATCH 11/22] drm/i915/selftest: extend coverage to include LMEM huge-pages Matthew Auld
` (15 subsequent siblings)
25 siblings, 2 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
A simple test that writes dwords across an object, using various engines in
a randomized order, and checks from the cpu that our writes land.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
.../drm/i915/selftests/intel_memory_region.c | 179 ++++++++++++++++++
1 file changed, 179 insertions(+)
diff --git a/drivers/gpu/drm/i915/selftests/intel_memory_region.c b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
index ba98e8254b80..8d7d8b9e00da 100644
--- a/drivers/gpu/drm/i915/selftests/intel_memory_region.c
+++ b/drivers/gpu/drm/i915/selftests/intel_memory_region.c
@@ -7,13 +7,16 @@
#include "../i915_selftest.h"
+
#include "mock_drm.h"
#include "mock_gem_device.h"
#include "mock_region.h"
+#include "gem/i915_gem_context.h"
#include "gem/i915_gem_lmem.h"
#include "gem/i915_gem_region.h"
#include "gem/i915_gem_object_blt.h"
+#include "gem/selftests/igt_gem_utils.h"
#include "gem/selftests/mock_context.h"
#include "gt/intel_gt.h"
#include "selftests/igt_flush_test.h"
@@ -255,6 +258,133 @@ static int igt_mock_continuous(void *arg)
return err;
}
+static int igt_gpu_write_dw(struct intel_context *ce,
+ struct i915_vma *vma,
+ u32 dword,
+ u32 value)
+{
+ int err;
+
+ i915_gem_object_lock(vma->obj);
+ err = i915_gem_object_set_to_gtt_domain(vma->obj, true);
+ i915_gem_object_unlock(vma->obj);
+ if (err)
+ return err;
+
+ return igt_gpu_fill_dw(ce, vma, dword * sizeof(u32),
+ vma->size >> PAGE_SHIFT, value);
+}
+
+static int igt_cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val)
+{
+ unsigned long n;
+ int err;
+
+ i915_gem_object_lock(obj);
+ err = i915_gem_object_set_to_wc_domain(obj, false);
+ i915_gem_object_unlock(obj);
+ if (err)
+ return err;
+
+ err = i915_gem_object_pin_pages(obj);
+ if (err)
+ return err;
+
+ for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) {
+ u32 __iomem *base;
+ u32 read_val;
+
+ base = i915_gem_object_lmem_io_map_page_atomic(obj, n);
+
+ read_val = ioread32(base + dword);
+ io_mapping_unmap_atomic(base);
+ if (read_val != val) {
+ pr_err("n=%lu base[%u]=%u, val=%u\n",
+ n, dword, read_val, val);
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ i915_gem_object_unpin_pages(obj);
+ return err;
+}
+
+static int igt_gpu_write(struct i915_gem_context *ctx,
+ struct drm_i915_gem_object *obj)
+{
+ struct drm_i915_private *i915 = ctx->i915;
+ struct i915_address_space *vm = ctx->vm ?: &i915->ggtt.vm;
+ struct i915_gem_engines *engines;
+ struct i915_gem_engines_iter it;
+ struct intel_context *ce;
+ I915_RND_STATE(prng);
+ IGT_TIMEOUT(end_time);
+ unsigned int count;
+ struct i915_vma *vma;
+ int *order;
+ int i, n;
+ int err = 0;
+
+ GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
+
+ n = 0;
+ count = 0;
+ for_each_gem_engine(ce, i915_gem_context_lock_engines(ctx), it) {
+ count++;
+ if (!intel_engine_can_store_dword(ce->engine))
+ continue;
+
+ n++;
+ }
+ i915_gem_context_unlock_engines(ctx);
+ if (!n)
+ return 0;
+
+ order = i915_random_order(count * count, &prng);
+ if (!order)
+ return -ENOMEM;
+
+ vma = i915_vma_instance(obj, vm, NULL);
+ if (IS_ERR(vma)) {
+ err = PTR_ERR(vma);
+ goto out_free;
+ }
+
+ err = i915_vma_pin(vma, 0, 0, PIN_USER);
+ if (err)
+ goto out_free;
+
+ i = 0;
+ engines = i915_gem_context_lock_engines(ctx);
+ do {
+ u32 rng = prandom_u32_state(&prng);
+ u32 dword = offset_in_page(rng) / 4;
+
+ ce = engines->engines[order[i] % engines->num_engines];
+ i = (i + 1) % (count * count);
+ if (!ce || !intel_engine_can_store_dword(ce->engine))
+ continue;
+
+ err = igt_gpu_write_dw(ce, vma, dword, rng);
+ if (err)
+ break;
+
+ err = igt_cpu_check(obj, dword, rng);
+ if (err)
+ break;
+ } while (!__igt_timeout(end_time, NULL));
+ i915_gem_context_unlock_engines(ctx);
+
+out_free:
+ kfree(order);
+
+ if (err == -ENOMEM)
+ err = 0;
+
+ return err;
+}
+
static int igt_lmem_create(void *arg)
{
struct drm_i915_private *i915 = arg;
@@ -276,6 +406,54 @@ static int igt_lmem_create(void *arg)
return err;
}
+static int igt_lmem_write_gpu(void *arg)
+{
+ struct drm_i915_private *i915 = arg;
+ struct drm_i915_gem_object *obj;
+ struct i915_gem_context *ctx;
+ struct drm_file *file;
+ I915_RND_STATE(prng);
+ u32 sz;
+ int err;
+
+ mutex_unlock(&i915->drm.struct_mutex);
+ file = mock_file(i915);
+ mutex_lock(&i915->drm.struct_mutex);
+ if (IS_ERR(file))
+ return PTR_ERR(file);
+
+ ctx = live_context(i915, file);
+ if (IS_ERR(ctx)) {
+ err = PTR_ERR(ctx);
+ goto out_file;
+ }
+
+ sz = round_up(prandom_u32_state(&prng) % SZ_32M, PAGE_SIZE);
+
+ obj = i915_gem_object_create_lmem(i915, sz, 0);
+ if (IS_ERR(obj)) {
+ err = PTR_ERR(obj);
+ goto out_file;
+ }
+
+ err = i915_gem_object_pin_pages(obj);
+ if (err)
+ goto out_put;
+
+ err = igt_gpu_write(ctx, obj);
+ if (err)
+ pr_err("igt_gpu_write failed(%d)\n", err);
+
+ i915_gem_object_unpin_pages(obj);
+out_put:
+ i915_gem_object_put(obj);
+out_file:
+ mutex_unlock(&i915->drm.struct_mutex);
+ mock_file_free(i915, file);
+ mutex_lock(&i915->drm.struct_mutex);
+ return err;
+}
+
static int igt_lmem_write_cpu(void *arg)
{
struct drm_i915_private *i915 = arg;
@@ -389,6 +567,7 @@ int intel_memory_region_live_selftests(struct drm_i915_private *i915)
static const struct i915_subtest tests[] = {
SUBTEST(igt_lmem_create),
SUBTEST(igt_lmem_write_cpu),
+ SUBTEST(igt_lmem_write_gpu),
};
int err;
--
2.20.1
* Re: [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM
2019-09-27 17:33 ` [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM Matthew Auld
@ 2019-09-27 19:27 ` Chris Wilson
2019-09-27 20:42 ` Chris Wilson
1 sibling, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 19:27 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:57)
> +static int igt_gpu_write_dw(struct intel_context *ce,
> + struct i915_vma *vma,
> + u32 dword,
> + u32 value)
> +{
> + int err;
> +
> + i915_gem_object_lock(vma->obj);
> + err = i915_gem_object_set_to_gtt_domain(vma->obj, true);
> + i915_gem_object_unlock(vma->obj);
> + if (err)
> + return err;
Your cpu check doesn't leave the caches dirty so this is overkill, and
worse may hide a coherency problem?
> + return igt_gpu_fill_dw(ce, vma, dword * sizeof(u32),
> + vma->size >> PAGE_SHIFT, value);
> +}
* Re: [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM
2019-09-27 17:33 ` [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM Matthew Auld
2019-09-27 19:27 ` Chris Wilson
@ 2019-09-27 20:42 ` Chris Wilson
2019-09-30 9:58 ` Matthew Auld
1 sibling, 1 reply; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 20:42 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:33:57)
> + i = 0;
> + engines = i915_gem_context_lock_engines(ctx);
> + do {
> + u32 rng = prandom_u32_state(&prng);
> + u32 dword = offset_in_page(rng) / 4;
> +
> + ce = engines->engines[order[i] % engines->num_engines];
> + i = (i + 1) % (count * count);
> + if (!ce || !intel_engine_can_store_dword(ce->engine))
> + continue;
> +
> + err = igt_gpu_write_dw(ce, vma, dword, rng);
> + if (err)
> + break;
Do you have a test that does
dword,
64B or cacheline,
page
random width&strides of the above
before doing the read back of a random dword from those?
Think nasty cache artifacts, PCI transfers, and timing.
-Chris
* Re: [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM
2019-09-27 20:42 ` Chris Wilson
@ 2019-09-30 9:58 ` Matthew Auld
2019-09-30 10:46 ` Chris Wilson
0 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-30 9:58 UTC (permalink / raw)
To: Chris Wilson, intel-gfx; +Cc: daniel.vetter
On 27/09/2019 21:42, Chris Wilson wrote:
> Quoting Matthew Auld (2019-09-27 18:33:57)
>> + i = 0;
>> + engines = i915_gem_context_lock_engines(ctx);
>> + do {
>> + u32 rng = prandom_u32_state(&prng);
>> + u32 dword = offset_in_page(rng) / 4;
>> +
>> + ce = engines->engines[order[i] % engines->num_engines];
>> + i = (i + 1) % (count * count);
>> + if (!ce || !intel_engine_can_store_dword(ce->engine))
>> + continue;
>> +
>> + err = igt_gpu_write_dw(ce, vma, dword, rng);
>> + if (err)
>> + break;
>
> Do you have a test that does
> dword,
> 64B or cacheline,
> page
> random width&strides of the above
> before doing the read back of a random dword from those?
Are you thinking write_dw + increment(dword, qword, cl, ..), or actually
doing the fill: write_dw, write_qw, write_block?
Or maybe both? I have been playing around with the write_dw + increment
for hugepages.c.
>
> Think nasty cache artifacts, PCI transfers, and timing.
> -Chris
>
* Re: [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM
2019-09-30 9:58 ` Matthew Auld
@ 2019-09-30 10:46 ` Chris Wilson
0 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-30 10:46 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-30 10:58:15)
> On 27/09/2019 21:42, Chris Wilson wrote:
> > Quoting Matthew Auld (2019-09-27 18:33:57)
> >> + i = 0;
> >> + engines = i915_gem_context_lock_engines(ctx);
> >> + do {
> >> + u32 rng = prandom_u32_state(&prng);
> >> + u32 dword = offset_in_page(rng) / 4;
> >> +
> >> + ce = engines->engines[order[i] % engines->num_engines];
> >> + i = (i + 1) % (count * count);
> >> + if (!ce || !intel_engine_can_store_dword(ce->engine))
> >> + continue;
> >> +
> >> + err = igt_gpu_write_dw(ce, vma, dword, rng);
> >> + if (err)
> >> + break;
> >
> > Do you have a test that does
> > dword,
> > 64B or cacheline,
> > page
> > random width&strides of the above
> > before doing the read back of a random dword from those?
>
> Are you thinking write_dw + increment(dword, qword, cl, ..), or actually
> doing the fill: write_dw, write_qw, write_block?
Here, I think stride is most interesting to hit various caching/transfer
artifacts between the CPU and lmem (and possibly with writes to lmem).
I think write_dw et al better stress the GPU write side and the
instruction stream.
> Or maybe both? I have been playing around with the write_dw + increment
> for hugepages.c.
Maybe both :) Never say no to more patterns! (Just be cautious of time
budget and use the cycles wisely to maximise coverage of your mental
model of the HW.) Once we get past the obvious coherency glitches in the
driver, it gets far more subtle. It's easy enough to filter out the noise
but deducing a pattern from gaps in the testing is much harder :)
-Chris
* [PATCH 11/22] drm/i915/selftest: extend coverage to include LMEM huge-pages
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (9 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 10/22] drm/i915/selftests: add write-dword test for LMEM Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 17:33 ` [PATCH 12/22] drm/i915: enumerate and init each supported region Matthew Auld
` (14 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
.../gpu/drm/i915/gem/selftests/huge_pages.c | 121 +++++++++++++++++-
1 file changed, 120 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
index b6dc90030156..434c1fc57adf 100644
--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
@@ -9,6 +9,7 @@
#include "i915_selftest.h"
#include "gem/i915_gem_region.h"
+#include "gem/i915_gem_lmem.h"
#include "gem/i915_gem_pm.h"
#include "gt/intel_gt.h"
@@ -970,7 +971,7 @@ static int gpu_write(struct intel_context *ce,
vma->size >> PAGE_SHIFT, val);
}
-static int cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val)
+static int __cpu_check_shmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
{
unsigned int needs_flush;
unsigned long n;
@@ -1002,6 +1003,51 @@ static int cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val)
return err;
}
+static int __cpu_check_lmem(struct drm_i915_gem_object *obj, u32 dword, u32 val)
+{
+ unsigned long n;
+ int err;
+
+ i915_gem_object_lock(obj);
+ err = i915_gem_object_set_to_wc_domain(obj, false);
+ i915_gem_object_unlock(obj);
+ if (err)
+ return err;
+
+ err = i915_gem_object_pin_pages(obj);
+ if (err)
+ return err;
+
+ for (n = 0; n < obj->base.size >> PAGE_SHIFT; ++n) {
+ u32 __iomem *base;
+ u32 read_val;
+
+ base = i915_gem_object_lmem_io_map_page_atomic(obj, n);
+
+ read_val = ioread32(base + dword);
+ io_mapping_unmap_atomic(base);
+ if (read_val != val) {
+ pr_err("n=%lu base[%u]=%u, val=%u\n",
+ n, dword, read_val, val);
+ err = -EINVAL;
+ break;
+ }
+ }
+
+ i915_gem_object_unpin_pages(obj);
+ return err;
+}
+
+static int cpu_check(struct drm_i915_gem_object *obj, u32 dword, u32 val)
+{
+ if (i915_gem_object_has_struct_page(obj))
+ return __cpu_check_shmem(obj, dword, val);
+ else if (i915_gem_object_is_lmem(obj))
+ return __cpu_check_lmem(obj, dword, val);
+
+ return -ENODEV;
+}
+
static int __igt_write_huge(struct intel_context *ce,
struct drm_i915_gem_object *obj,
u64 size, u64 offset,
@@ -1386,6 +1432,78 @@ static int igt_ppgtt_gemfs_huge(void *arg)
return err;
}
+static int igt_ppgtt_lmem_huge(void *arg)
+{
+ struct i915_gem_context *ctx = arg;
+ struct drm_i915_private *i915 = ctx->i915;
+ struct drm_i915_gem_object *obj;
+ static const unsigned int sizes[] = {
+ SZ_64K,
+ SZ_512K,
+ SZ_1M,
+ SZ_2M,
+ };
+ int i;
+ int err;
+
+ if (!HAS_LMEM(i915)) {
+ pr_info("device lacks LMEM support, skipping\n");
+ return 0;
+ }
+
+ /*
+ * Sanity check that the HW uses huge pages correctly through LMEM
+ * -- ensure that our writes land in the right place.
+ */
+
+ for (i = 0; i < ARRAY_SIZE(sizes); ++i) {
+ unsigned int size = sizes[i];
+
+ obj = i915_gem_object_create_lmem(i915, size, I915_BO_ALLOC_CONTIGUOUS);
+ if (IS_ERR(obj)) {
+ err = PTR_ERR(obj);
+ if (err == -E2BIG) {
+ pr_info("object too big for region!\n");
+ return 0;
+ }
+
+ return err;
+ }
+
+ err = i915_gem_object_pin_pages(obj);
+ if (err)
+ goto out_put;
+
+ if (obj->mm.page_sizes.phys < I915_GTT_PAGE_SIZE_64K) {
+ pr_info("LMEM unable to allocate huge-page(s) with size=%u\n",
+ size);
+ goto out_unpin;
+ }
+
+ err = igt_write_huge(ctx, obj);
+ if (err) {
+ pr_err("LMEM write-huge failed with size=%u\n", size);
+ goto out_unpin;
+ }
+
+ i915_gem_object_unpin_pages(obj);
+ __i915_gem_object_put_pages(obj, I915_MM_NORMAL);
+ i915_gem_object_put(obj);
+ }
+
+ return 0;
+
+out_unpin:
+ i915_gem_object_unpin_pages(obj);
+out_put:
+ i915_gem_object_put(obj);
+
+ if (err == -ENOMEM)
+ err = 0;
+
+ return err;
+}
+
static int igt_ppgtt_pin_update(void *arg)
{
struct i915_gem_context *ctx = arg;
@@ -1742,6 +1860,7 @@ int i915_gem_huge_page_live_selftests(struct drm_i915_private *i915)
SUBTEST(igt_ppgtt_exhaust_huge),
SUBTEST(igt_ppgtt_gemfs_huge),
SUBTEST(igt_ppgtt_internal_huge),
+ SUBTEST(igt_ppgtt_lmem_huge),
};
struct drm_file *file;
struct i915_gem_context *ctx;
--
2.20.1
* [PATCH 12/22] drm/i915: enumerate and init each supported region
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (10 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 11/22] drm/i915/selftest: extend coverage to include LMEM huge-pages Matthew Auld
@ 2019-09-27 17:33 ` Matthew Auld
2019-09-27 20:44 ` Chris Wilson
2019-09-27 17:34 ` [PATCH 13/22] drm/i915: treat shmem as a region Matthew Auld
` (13 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:33 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Nothing to enumerate yet...
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 3 +
drivers/gpu/drm/i915/i915_gem_gtt.c | 70 +++++++++++++++++--
.../gpu/drm/i915/selftests/mock_gem_device.c | 6 ++
3 files changed, 72 insertions(+), 7 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 05a6491690f7..cd1414f2bcb5 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2394,6 +2394,9 @@ int __must_check i915_gem_evict_for_node(struct i915_address_space *vm,
unsigned int flags);
int i915_gem_evict_vm(struct i915_address_space *vm);
+void i915_gem_cleanup_memory_regions(struct drm_i915_private *i915);
+int i915_gem_init_memory_regions(struct drm_i915_private *i915);
+
/* i915_gem_internal.c */
struct drm_i915_gem_object *
i915_gem_object_create_internal(struct drm_i915_private *dev_priv,
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index e62e9d1a1307..a2963677861d 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2744,6 +2744,66 @@ int i915_init_ggtt(struct drm_i915_private *i915)
return 0;
}
+void i915_gem_cleanup_memory_regions(struct drm_i915_private *i915)
+{
+ int i;
+
+ i915_gem_cleanup_stolen(i915);
+
+ for (i = 0; i < ARRAY_SIZE(i915->mm.regions); i++) {
+ struct intel_memory_region *region = i915->mm.regions[i];
+
+ if (region)
+ intel_memory_region_destroy(region);
+ }
+}
+
+int i915_gem_init_memory_regions(struct drm_i915_private *i915)
+{
+ int err, i;
+
+ /*
+ * Initialise stolen early so that we may reserve preallocated
+ * objects for the BIOS to KMS transition.
+ */
+ /* XXX: stolen will become a region at some point */
+ err = i915_gem_init_stolen(i915);
+ if (err)
+ return err;
+
+ for (i = 0; i < ARRAY_SIZE(i915->mm.regions); i++) {
+ struct intel_memory_region *mem = NULL;
+ u32 type;
+
+ if (!HAS_REGION(i915, BIT(i)))
+ continue;
+
+ type = MEMORY_TYPE_FROM_REGION(intel_region_map[i]);
+ switch (type) {
+ default:
+ break;
+ }
+
+ if (IS_ERR(mem)) {
+ err = PTR_ERR(mem);
+ DRM_ERROR("Failed to setup region(%d) type=%d\n", err, type);
+ goto out_cleanup;
+ }
+
+ mem->id = intel_region_map[i];
+ mem->type = type;
+ mem->instance = MEMORY_INSTANCE_FROM_REGION(intel_region_map[i]);
+
+ i915->mm.regions[i] = mem;
+ }
+
+ return 0;
+
+out_cleanup:
+ i915_gem_cleanup_memory_regions(i915);
+ return err;
+}
+
static void ggtt_cleanup_hw(struct i915_ggtt *ggtt)
{
struct drm_i915_private *i915 = ggtt->vm.i915;
@@ -2785,6 +2845,8 @@ void i915_ggtt_driver_release(struct drm_i915_private *i915)
{
struct pagevec *pvec;
+ i915_gem_cleanup_memory_regions(i915);
+
fini_aliasing_ppgtt(&i915->ggtt);
ggtt_cleanup_hw(&i915->ggtt);
@@ -2794,8 +2856,6 @@ void i915_ggtt_driver_release(struct drm_i915_private *i915)
set_pages_array_wb(pvec->pages, pvec->nr);
__pagevec_release(pvec);
}
-
- i915_gem_cleanup_stolen(i915);
}
static unsigned int gen6_get_total_gtt_size(u16 snb_gmch_ctl)
@@ -3251,11 +3311,7 @@ int i915_ggtt_init_hw(struct drm_i915_private *dev_priv)
if (ret)
return ret;
- /*
- * Initialise stolen early so that we may reserve preallocated
- * objects for the BIOS to KMS transition.
- */
- ret = i915_gem_init_stolen(dev_priv);
+ ret = i915_gem_init_memory_regions(dev_priv);
if (ret)
goto out_gtt_cleanup;
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index 32e32b1cd566..f210b5043112 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -82,6 +82,8 @@ static void mock_device_release(struct drm_device *dev)
i915_gemfs_fini(i915);
+ i915_gem_cleanup_memory_regions(i915);
+
drm_mode_config_cleanup(&i915->drm);
drm_dev_fini(&i915->drm);
@@ -219,6 +221,10 @@ struct drm_i915_private *mock_gem_device(void)
WARN_ON(i915_gemfs_init(i915));
+ err = i915_gem_init_memory_regions(i915);
+ if (err)
+ goto err_context;
+
return i915;
err_context:
--
2.20.1
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
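[Editor's sketch] The init/cleanup pair added above follows a common kernel pattern: walk a fixed-size region table, skip entries the device does not support via a feature bitmask (the `HAS_REGION()` check), and on any failure unwind everything created so far with the same cleanup routine used on normal teardown. A minimal user-space sketch of that pattern, with illustrative names and a two-entry table rather than the driver's real API:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

#define MAX_REGIONS 2

struct region {
	int id;
};

/* Created regions, indexed like i915->mm.regions[]. */
static struct region *regions[MAX_REGIONS];

static struct region *create_region(int id)
{
	struct region *r = malloc(sizeof(*r));

	if (!r)
		return NULL;
	r->id = id;
	return r;
}

/* Mirror of i915_gem_cleanup_memory_regions(): must be safe to call
 * on a partially initialised table, so it only frees non-NULL slots. */
static void cleanup_regions(void)
{
	int i;

	for (i = 0; i < MAX_REGIONS; i++) {
		free(regions[i]);
		regions[i] = NULL;
	}
}

/* Mirror of i915_gem_init_memory_regions(): skip unsupported regions,
 * and unwind everything created so far if one region fails. */
static int init_regions(unsigned int mask)
{
	int i;

	for (i = 0; i < MAX_REGIONS; i++) {
		struct region *r;

		if (!(mask & (1u << i)))
			continue; /* the HAS_REGION() check in the driver */

		r = create_region(i);
		if (!r) {
			cleanup_regions();
			return -ENOMEM;
		}
		regions[i] = r;
	}
	return 0;
}
```

Because cleanup tolerates NULL slots, the error path in the patch can simply jump to the shared cleanup label regardless of how far initialisation got.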
* [PATCH 13/22] drm/i915: treat shmem as a region
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (11 preceding siblings ...)
2019-09-27 17:33 ` [PATCH 12/22] drm/i915: enumerate and init each supported region Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 17:34 ` [PATCH 14/22] drm/i915: treat stolen " Matthew Auld
` (12 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_phys.c | 5 +-
drivers/gpu/drm/i915/gem/i915_gem_region.c | 14 +++-
drivers/gpu/drm/i915/gem/i915_gem_shmem.c | 68 ++++++++++++++-----
drivers/gpu/drm/i915/i915_drv.h | 2 +
drivers/gpu/drm/i915/i915_gem.c | 9 ---
drivers/gpu/drm/i915/i915_gem_gtt.c | 3 +-
drivers/gpu/drm/i915/i915_pci.c | 29 +++++---
.../gpu/drm/i915/selftests/mock_gem_device.c | 6 +-
8 files changed, 95 insertions(+), 41 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_phys.c b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
index 768356908160..8043ff63d73f 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_phys.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_phys.c
@@ -16,6 +16,7 @@
#include "gt/intel_gt.h"
#include "i915_drv.h"
#include "i915_gem_object.h"
+#include "i915_gem_region.h"
#include "i915_scatterlist.h"
static int i915_gem_object_get_pages_phys(struct drm_i915_gem_object *obj)
@@ -191,8 +192,10 @@ int i915_gem_object_attach_phys(struct drm_i915_gem_object *obj, int align)
/* Perma-pin (until release) the physical set of pages */
__i915_gem_object_pin_pages(obj);
- if (!IS_ERR_OR_NULL(pages))
+ if (!IS_ERR_OR_NULL(pages)) {
i915_gem_shmem_ops.put_pages(obj, pages);
+ i915_gem_object_release_memory_region(obj);
+ }
mutex_unlock(&obj->mm.lock);
return 0;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index e9550e0364cc..0aeaebb41050 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -6,6 +6,7 @@
#include "intel_memory_region.h"
#include "i915_gem_region.h"
#include "i915_drv.h"
+#include "i915_trace.h"
void
i915_gem_object_put_pages_buddy(struct drm_i915_gem_object *obj,
@@ -144,11 +145,22 @@ i915_gem_object_create_region(struct intel_memory_region *mem,
GEM_BUG_ON(!size);
GEM_BUG_ON(!IS_ALIGNED(size, I915_GTT_MIN_ALIGNMENT));
+ /*
+ * There is a prevalence of the assumption that we fit the object's
+ * page count inside a 32bit _signed_ variable. Let's document this and
+ * catch if we ever need to fix it. In the meantime, if you do spot
+ * such a local variable, please consider fixing!
+ */
+
if (size >> PAGE_SHIFT > INT_MAX)
return ERR_PTR(-E2BIG);
if (overflows_type(size, obj->base.size))
return ERR_PTR(-E2BIG);
- return mem->ops->create_object(mem, size, flags);
+ obj = mem->ops->create_object(mem, size, flags);
+ if (!IS_ERR(obj))
+ trace_i915_gem_object_create(obj);
+
+ return obj;
}
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 9f5d903f7793..696e15e8c410 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -7,7 +7,9 @@
#include <linux/pagevec.h>
#include <linux/swap.h>
+#include "gem/i915_gem_region.h"
#include "i915_drv.h"
+#include "i915_gemfs.h"
#include "i915_gem_object.h"
#include "i915_scatterlist.h"
#include "i915_trace.h"
@@ -26,6 +28,7 @@ static void check_release_pagevec(struct pagevec *pvec)
static int shmem_get_pages(struct drm_i915_gem_object *obj)
{
struct drm_i915_private *i915 = to_i915(obj->base.dev);
+ struct intel_memory_region *mem = obj->mm.region;
const unsigned long page_count = obj->base.size / PAGE_SIZE;
unsigned long i;
struct address_space *mapping;
@@ -52,7 +55,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
* If there's no chance of allocating enough pages for the whole
* object, bail early.
*/
- if (page_count > totalram_pages())
+ if (obj->base.size > resource_size(&mem->region))
return -ENOMEM;
st = kmalloc(sizeof(*st), GFP_KERNEL);
@@ -417,6 +420,8 @@ shmem_pwrite(struct drm_i915_gem_object *obj,
static void shmem_release(struct drm_i915_gem_object *obj)
{
+ i915_gem_object_release_memory_region(obj);
+
fput(obj->base.filp);
}
@@ -435,7 +440,7 @@ const struct drm_i915_gem_object_ops i915_gem_shmem_ops = {
.release = shmem_release,
};
-static int create_shmem(struct drm_i915_private *i915,
+static int __create_shmem(struct drm_i915_private *i915,
struct drm_gem_object *obj,
size_t size)
{
@@ -456,31 +461,23 @@ static int create_shmem(struct drm_i915_private *i915,
return 0;
}
-struct drm_i915_gem_object *
-i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size)
+static struct drm_i915_gem_object *
+create_shmem(struct intel_memory_region *mem,
+ resource_size_t size,
+ unsigned flags)
{
+ struct drm_i915_private *i915 = mem->i915;
struct drm_i915_gem_object *obj;
struct address_space *mapping;
unsigned int cache_level;
gfp_t mask;
int ret;
- /* There is a prevalence of the assumption that we fit the object's
- * page count inside a 32bit _signed_ variable. Let's document this and
- * catch if we ever need to fix it. In the meantime, if you do spot
- * such a local variable, please consider fixing!
- */
- if (size >> PAGE_SHIFT > INT_MAX)
- return ERR_PTR(-E2BIG);
-
- if (overflows_type(size, obj->base.size))
- return ERR_PTR(-E2BIG);
-
obj = i915_gem_object_alloc();
if (!obj)
return ERR_PTR(-ENOMEM);
- ret = create_shmem(i915, &obj->base, size);
+ ret = __create_shmem(i915, &obj->base, size);
if (ret)
goto fail;
@@ -519,7 +516,7 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size)
i915_gem_object_set_cache_coherency(obj, cache_level);
- trace_i915_gem_object_create(obj);
+ i915_gem_object_init_memory_region(obj, mem, 0);
return obj;
@@ -528,6 +525,13 @@ i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size)
return ERR_PTR(ret);
}
+struct drm_i915_gem_object *
+i915_gem_object_create_shmem(struct drm_i915_private *i915, u64 size)
+{
+ return i915_gem_object_create_region(i915->mm.regions[INTEL_MEMORY_SMEM],
+ size, 0);
+}
+
/* Allocate a new GEM object and fill it with the supplied data */
struct drm_i915_gem_object *
i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv,
@@ -578,3 +582,33 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv,
i915_gem_object_put(obj);
return ERR_PTR(err);
}
+
+static int init_shmem(struct intel_memory_region *mem)
+{
+ int err;
+
+ err = i915_gemfs_init(mem->i915);
+ if (err)
+ DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n", err);
+
+ return 0; /* Don't error, we can simply fallback to the kernel mnt */
+}
+
+static void release_shmem(struct intel_memory_region *mem)
+{
+ i915_gemfs_fini(mem->i915);
+}
+
+static const struct intel_memory_region_ops shmem_region_ops = {
+ .init = init_shmem,
+ .release = release_shmem,
+ .create_object = create_shmem,
+};
+
+struct intel_memory_region *i915_gem_shmem_setup(struct drm_i915_private *i915)
+{
+ return intel_memory_region_create(i915, 0,
+ totalram_pages() << PAGE_SHIFT,
+ I915_GTT_PAGE_SIZE_4K, 0,
+ &shmem_region_ops);
+}
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index cd1414f2bcb5..6cf13e98794a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2263,6 +2263,8 @@ void i915_gem_cleanup_early(struct drm_i915_private *dev_priv);
int i915_gem_freeze(struct drm_i915_private *dev_priv);
int i915_gem_freeze_late(struct drm_i915_private *dev_priv);
+struct intel_memory_region *i915_gem_shmem_setup(struct drm_i915_private *i915);
+
static inline void i915_gem_drain_freed_objects(struct drm_i915_private *i915)
{
/*
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3d3fda4cae99..fd329b6b475c 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -45,7 +45,6 @@
#include "gem/i915_gem_context.h"
#include "gem/i915_gem_ioctls.h"
#include "gem/i915_gem_pm.h"
-#include "gem/i915_gemfs.h"
#include "gt/intel_engine_user.h"
#include "gt/intel_gt.h"
#include "gt/intel_gt_pm.h"
@@ -1535,16 +1534,10 @@ static void i915_gem_init__mm(struct drm_i915_private *i915)
void i915_gem_init_early(struct drm_i915_private *dev_priv)
{
- int err;
-
i915_gem_init__mm(dev_priv);
i915_gem_init__pm(dev_priv);
spin_lock_init(&dev_priv->fb_tracking.lock);
-
- err = i915_gemfs_init(dev_priv);
- if (err)
- DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n", err);
}
void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
@@ -1553,8 +1546,6 @@ void i915_gem_cleanup_early(struct drm_i915_private *dev_priv)
GEM_BUG_ON(!llist_empty(&dev_priv->mm.free_list));
GEM_BUG_ON(atomic_read(&dev_priv->mm.free_count));
WARN_ON(dev_priv->mm.shrink_count);
-
- i915_gemfs_fini(dev_priv);
}
int i915_gem_freeze(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index a2963677861d..67fa61e8bb18 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2780,7 +2780,8 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915)
type = MEMORY_TYPE_FROM_REGION(intel_region_map[i]);
switch (type) {
- default:
+ case INTEL_SMEM:
+ mem = i915_gem_shmem_setup(i915);
break;
}
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index ea53dfe2fba0..9101ea1dff96 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -152,6 +152,9 @@
#define GEN_DEFAULT_PAGE_SIZES \
.page_sizes = I915_GTT_PAGE_SIZE_4K
+#define GEN_DEFAULT_REGIONS \
+ .memory_regions = REGION_SMEM
+
#define I830_FEATURES \
GEN(2), \
.is_mobile = 1, \
@@ -169,7 +172,8 @@
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
I9XX_COLORS, \
- GEN_DEFAULT_PAGE_SIZES
+ GEN_DEFAULT_PAGE_SIZES, \
+ GEN_DEFAULT_REGIONS
#define I845_FEATURES \
GEN(2), \
@@ -186,7 +190,8 @@
I845_PIPE_OFFSETS, \
I845_CURSOR_OFFSETS, \
I9XX_COLORS, \
- GEN_DEFAULT_PAGE_SIZES
+ GEN_DEFAULT_PAGE_SIZES, \
+ GEN_DEFAULT_REGIONS
static const struct intel_device_info intel_i830_info = {
I830_FEATURES,
@@ -220,7 +225,8 @@ static const struct intel_device_info intel_i865g_info = {
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
I9XX_COLORS, \
- GEN_DEFAULT_PAGE_SIZES
+ GEN_DEFAULT_PAGE_SIZES, \
+ GEN_DEFAULT_REGIONS
static const struct intel_device_info intel_i915g_info = {
GEN3_FEATURES,
@@ -305,7 +311,8 @@ static const struct intel_device_info intel_pineview_m_info = {
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
I965_COLORS, \
- GEN_DEFAULT_PAGE_SIZES
+ GEN_DEFAULT_PAGE_SIZES, \
+ GEN_DEFAULT_REGIONS
static const struct intel_device_info intel_i965g_info = {
GEN4_FEATURES,
@@ -355,7 +362,8 @@ static const struct intel_device_info intel_gm45_info = {
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
ILK_COLORS, \
- GEN_DEFAULT_PAGE_SIZES
+ GEN_DEFAULT_PAGE_SIZES, \
+ GEN_DEFAULT_REGIONS
static const struct intel_device_info intel_ironlake_d_info = {
GEN5_FEATURES,
@@ -385,7 +393,8 @@ static const struct intel_device_info intel_ironlake_m_info = {
I9XX_PIPE_OFFSETS, \
I9XX_CURSOR_OFFSETS, \
ILK_COLORS, \
- GEN_DEFAULT_PAGE_SIZES
+ GEN_DEFAULT_PAGE_SIZES, \
+ GEN_DEFAULT_REGIONS
#define SNB_D_PLATFORM \
GEN6_FEATURES, \
@@ -433,7 +442,8 @@ static const struct intel_device_info intel_sandybridge_m_gt2_info = {
IVB_PIPE_OFFSETS, \
IVB_CURSOR_OFFSETS, \
IVB_COLORS, \
- GEN_DEFAULT_PAGE_SIZES
+ GEN_DEFAULT_PAGE_SIZES, \
+ GEN_DEFAULT_REGIONS
#define IVB_D_PLATFORM \
GEN7_FEATURES, \
@@ -494,6 +504,7 @@ static const struct intel_device_info intel_valleyview_info = {
I9XX_CURSOR_OFFSETS,
I965_COLORS,
GEN_DEFAULT_PAGE_SIZES,
+ GEN_DEFAULT_REGIONS,
};
#define G75_FEATURES \
@@ -588,6 +599,7 @@ static const struct intel_device_info intel_cherryview_info = {
CHV_CURSOR_OFFSETS,
CHV_COLORS,
GEN_DEFAULT_PAGE_SIZES,
+ GEN_DEFAULT_REGIONS,
};
#define GEN9_DEFAULT_PAGE_SIZES \
@@ -662,7 +674,8 @@ static const struct intel_device_info intel_skylake_gt4_info = {
HSW_PIPE_OFFSETS, \
IVB_CURSOR_OFFSETS, \
IVB_COLORS, \
- GEN9_DEFAULT_PAGE_SIZES
+ GEN9_DEFAULT_PAGE_SIZES, \
+ GEN_DEFAULT_REGIONS
static const struct intel_device_info intel_broxton_info = {
GEN9_LP_FEATURES,
diff --git a/drivers/gpu/drm/i915/selftests/mock_gem_device.c b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
index f210b5043112..10ed3a503772 100644
--- a/drivers/gpu/drm/i915/selftests/mock_gem_device.c
+++ b/drivers/gpu/drm/i915/selftests/mock_gem_device.c
@@ -80,8 +80,6 @@ static void mock_device_release(struct drm_device *dev)
destroy_workqueue(i915->wq);
- i915_gemfs_fini(i915);
-
i915_gem_cleanup_memory_regions(i915);
drm_mode_config_cleanup(&i915->drm);
@@ -181,6 +179,8 @@ struct drm_i915_private *mock_gem_device(void)
I915_GTT_PAGE_SIZE_64K |
I915_GTT_PAGE_SIZE_2M;
+ mkwrite_device_info(i915)->memory_regions = REGION_SMEM;
+
mock_uncore_init(&i915->uncore);
i915_gem_init__mm(i915);
intel_gt_init_early(&i915->gt, i915);
@@ -219,8 +219,6 @@ struct drm_i915_private *mock_gem_device(void)
intel_engines_driver_register(i915);
mutex_unlock(&i915->drm.struct_mutex);
- WARN_ON(i915_gemfs_init(i915));
-
err = i915_gem_init_memory_regions(i915);
if (err)
goto err_context;
--
2.20.1
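[Editor's sketch] The two `-E2BIG` guards hoisted into `i915_gem_object_create_region()` above enforce the documented assumption that an object's page count fits in a signed 32-bit variable. A standalone sketch of the first guard (a 4K `PAGE_SHIFT` is assumed here; the driver also checks that the size fits `obj->base.size` via `overflows_type()`, omitted for brevity):

```c
#include <errno.h>
#include <limits.h>
#include <stdint.h>

#define PAGE_SHIFT 12 /* assume 4K pages, as with I915_GTT_PAGE_SIZE_4K */

/* Reject sizes whose page count would not fit a signed 32-bit int.
 * With 4K pages this caps objects just below 8 TiB. */
static int check_object_size(uint64_t size)
{
	if (size >> PAGE_SHIFT > INT_MAX)
		return -E2BIG;
	return 0;
}
```

With the check centralised in the region helper, backends such as shmem no longer need their own copy, which is exactly the deletion visible in `create_shmem()` above.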
* [PATCH 14/22] drm/i915: treat stolen as a region
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (12 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 13/22] drm/i915: treat shmem as a region Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 19:30 ` Chris Wilson
2019-09-27 17:34 ` [PATCH 15/22] drm/i915: define HAS_MAPPABLE_APERTURE Matthew Auld
` (11 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
Convert stolen memory over to a region object. This still leaves open the
question of what to do with pre-allocated objects...
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
---
drivers/gpu/drm/i915/gem/i915_gem_region.c | 2 +-
drivers/gpu/drm/i915/gem/i915_gem_stolen.c | 71 +++++++++++++++++++---
drivers/gpu/drm/i915/gem/i915_gem_stolen.h | 3 +-
drivers/gpu/drm/i915/i915_gem_gtt.c | 14 +----
drivers/gpu/drm/i915/i915_pci.c | 2 +-
5 files changed, 68 insertions(+), 24 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
index 0aeaebb41050..77e89fabbddf 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
@@ -159,7 +159,7 @@ i915_gem_object_create_region(struct intel_memory_region *mem,
return ERR_PTR(-E2BIG);
obj = mem->ops->create_object(mem, size, flags);
- if (!IS_ERR(obj))
+ if (!IS_ERR_OR_NULL(obj))
trace_i915_gem_object_create(obj);
return obj;
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
index bfbc3e3daf92..1ee8f1790144 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.c
@@ -10,6 +10,7 @@
#include <drm/drm_mm.h>
#include <drm/i915_drm.h>
+#include "gem/i915_gem_region.h"
#include "i915_drv.h"
#include "i915_gem_stolen.h"
@@ -150,7 +151,7 @@ static int i915_adjust_stolen(struct drm_i915_private *dev_priv,
return 0;
}
-void i915_gem_cleanup_stolen(struct drm_i915_private *dev_priv)
+static void i915_gem_cleanup_stolen(struct drm_i915_private *dev_priv)
{
if (!drm_mm_initialized(&dev_priv->mm.stolen))
return;
@@ -355,7 +356,7 @@ static void icl_get_stolen_reserved(struct drm_i915_private *i915,
}
}
-int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
+static int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
{
resource_size_t reserved_base, stolen_top;
resource_size_t reserved_total, reserved_size;
@@ -539,6 +540,9 @@ i915_gem_object_release_stolen(struct drm_i915_gem_object *obj)
i915_gem_stolen_remove_node(dev_priv, stolen);
kfree(stolen);
+
+ if (obj->mm.region)
+ i915_gem_object_release_memory_region(obj);
}
static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = {
@@ -548,8 +552,9 @@ static const struct drm_i915_gem_object_ops i915_gem_object_stolen_ops = {
};
static struct drm_i915_gem_object *
-_i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
- struct drm_mm_node *stolen)
+__i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
+ struct drm_mm_node *stolen,
+ struct intel_memory_region *mem)
{
struct drm_i915_gem_object *obj;
unsigned int cache_level;
@@ -566,6 +571,9 @@ _i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
cache_level = HAS_LLC(dev_priv) ? I915_CACHE_LLC : I915_CACHE_NONE;
i915_gem_object_set_cache_coherency(obj, cache_level);
+ if (mem)
+ i915_gem_object_init_memory_region(obj, mem, 0);
+
if (i915_gem_object_pin_pages(obj))
goto cleanup;
@@ -576,10 +584,12 @@ _i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
return NULL;
}
-struct drm_i915_gem_object *
-i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
- resource_size_t size)
+static struct drm_i915_gem_object *
+_i915_gem_object_create_stolen(struct intel_memory_region *mem,
+ resource_size_t size,
+ unsigned int flags)
{
+ struct drm_i915_private *dev_priv = mem->i915;
struct drm_i915_gem_object *obj;
struct drm_mm_node *stolen;
int ret;
@@ -600,7 +610,7 @@ i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
return NULL;
}
- obj = _i915_gem_object_create_stolen(dev_priv, stolen);
+ obj = __i915_gem_object_create_stolen(dev_priv, stolen, mem);
if (obj)
return obj;
@@ -609,6 +619,49 @@ i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
return NULL;
}
+struct drm_i915_gem_object *
+i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
+ resource_size_t size)
+{
+ struct drm_i915_gem_object *obj;
+
+ obj = i915_gem_object_create_region(dev_priv->mm.regions[INTEL_MEMORY_STOLEN],
+ size, I915_BO_ALLOC_CONTIGUOUS);
+ if (IS_ERR(obj))
+ return NULL;
+
+ return obj;
+}
+
+static int init_stolen(struct intel_memory_region *mem)
+{
+ /*
+ * Initialise stolen early so that we may reserve preallocated
+ * objects for the BIOS to KMS transition.
+ */
+ return i915_gem_init_stolen(mem->i915);
+}
+
+static void release_stolen(struct intel_memory_region *mem)
+{
+ i915_gem_cleanup_stolen(mem->i915);
+}
+
+static const struct intel_memory_region_ops i915_region_stolen_ops = {
+ .init = init_stolen,
+ .release = release_stolen,
+ .create_object = _i915_gem_object_create_stolen,
+};
+
+struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915)
+{
+ return intel_memory_region_create(i915,
+ intel_graphics_stolen_res.start,
+ resource_size(&intel_graphics_stolen_res),
+ I915_GTT_PAGE_SIZE_4K, 0,
+ &i915_region_stolen_ops);
+}
+
struct drm_i915_gem_object *
i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv,
resource_size_t stolen_offset,
@@ -650,7 +703,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_i915_private *dev_priv
return NULL;
}
- obj = _i915_gem_object_create_stolen(dev_priv, stolen);
+ obj = __i915_gem_object_create_stolen(dev_priv, stolen, NULL);
if (obj == NULL) {
DRM_DEBUG_DRIVER("failed to allocate stolen object\n");
i915_gem_stolen_remove_node(dev_priv, stolen);
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h
index 2289644d8604..c1040627fbf3 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_stolen.h
+++ b/drivers/gpu/drm/i915/gem/i915_gem_stolen.h
@@ -21,8 +21,7 @@ int i915_gem_stolen_insert_node_in_range(struct drm_i915_private *dev_priv,
u64 end);
void i915_gem_stolen_remove_node(struct drm_i915_private *dev_priv,
struct drm_mm_node *node);
-int i915_gem_init_stolen(struct drm_i915_private *dev_priv);
-void i915_gem_cleanup_stolen(struct drm_i915_private *dev_priv);
+struct intel_memory_region *i915_gem_stolen_setup(struct drm_i915_private *i915);
struct drm_i915_gem_object *
i915_gem_object_create_stolen(struct drm_i915_private *dev_priv,
resource_size_t size);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 67fa61e8bb18..51b2087b214f 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2748,8 +2748,6 @@ void i915_gem_cleanup_memory_regions(struct drm_i915_private *i915)
{
int i;
- i915_gem_cleanup_stolen(i915);
-
for (i = 0; i < ARRAY_SIZE(i915->mm.regions); i++) {
struct intel_memory_region *region = i915->mm.regions[i];
@@ -2762,15 +2760,6 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915)
{
int err, i;
- /*
- * Initialise stolen early so that we may reserve preallocated
- * objects for the BIOS to KMS transition.
- */
- /* XXX: stolen will become a region at some point */
- err = i915_gem_init_stolen(i915);
- if (err)
- return err;
-
for (i = 0; i < ARRAY_SIZE(i915->mm.regions); i++) {
struct intel_memory_region *mem = NULL;
u32 type;
@@ -2783,6 +2772,9 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915)
case INTEL_SMEM:
mem = i915_gem_shmem_setup(i915);
break;
+ case INTEL_STOLEN:
+ mem = i915_gem_stolen_setup(i915);
+ break;
}
if (IS_ERR(mem)) {
diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
index 9101ea1dff96..daf74c912c65 100644
--- a/drivers/gpu/drm/i915/i915_pci.c
+++ b/drivers/gpu/drm/i915/i915_pci.c
@@ -153,7 +153,7 @@
.page_sizes = I915_GTT_PAGE_SIZE_4K
#define GEN_DEFAULT_REGIONS \
- .memory_regions = REGION_SMEM
+ .memory_regions = REGION_SMEM | REGION_STOLEN
#define I830_FEATURES \
GEN(2), \
--
2.20.1
* Re: [PATCH 14/22] drm/i915: treat stolen as a region
2019-09-27 17:34 ` [PATCH 14/22] drm/i915: treat stolen " Matthew Auld
@ 2019-09-27 19:30 ` Chris Wilson
0 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 19:30 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:34:01)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_region.c b/drivers/gpu/drm/i915/gem/i915_gem_region.c
> index 0aeaebb41050..77e89fabbddf 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_region.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_region.c
> @@ -159,7 +159,7 @@ i915_gem_object_create_region(struct intel_memory_region *mem,
> return ERR_PTR(-E2BIG);
>
> obj = mem->ops->create_object(mem, size, flags);
> - if (!IS_ERR(obj))
> + if (!IS_ERR_OR_NULL(obj))
Have a prep patch to bring stolen function signature into line.
-Chris
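[Editor's sketch] The review comment is about return-value conventions: the region API expects `ERR_PTR`-encoded errors, while the legacy stolen paths return NULL, forcing the `IS_ERR_OR_NULL()` change. For context, a user-space re-implementation of the kernel's `ERR_PTR` convention (from `include/linux/err.h`), where small negative errno values are encoded at the top of the pointer range so one return value carries either a pointer or an error:

```c
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static inline int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}
```

The prep patch Chris asks for would make the stolen `create_object` path return `ERR_PTR(-err)` instead of NULL, so the caller could keep the plain `IS_ERR()` check.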
* [PATCH 15/22] drm/i915: define HAS_MAPPABLE_APERTURE
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (13 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 14/22] drm/i915: treat stolen " Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 17:34 ` [PATCH 16/22] drm/i915: do not map aperture if it is not available Matthew Auld
` (10 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
The following patches in the series will use it to avoid certain
operations when the aperture is not available in HW.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 6cf13e98794a..d6303045f546 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2126,6 +2126,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
#define OVERLAY_NEEDS_PHYSICAL(dev_priv) \
(INTEL_INFO(dev_priv)->display.overlay_needs_physical)
+#define HAS_MAPPABLE_APERTURE(dev_priv) (dev_priv->ggtt.mappable_end > 0)
+
/* Early gen2 have a totally busted CS tlb and require pinned batches. */
#define HAS_BROKEN_CS_TLB(dev_priv) (IS_I830(dev_priv) || IS_I845G(dev_priv))
--
2.20.1
* [PATCH 16/22] drm/i915: do not map aperture if it is not available.
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (14 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 15/22] drm/i915: define HAS_MAPPABLE_APERTURE Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 17:34 ` [PATCH 17/22] drm/i915: set num_fence_regs to 0 if there is no aperture Matthew Auld
` (9 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Skip both setup and cleanup of the aperture mapping if the HW doesn't
have an aperture bar.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_gem_gtt.c | 34 ++++++++++++++++++-----------
1 file changed, 21 insertions(+), 13 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 51b2087b214f..1be7b236f234 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2827,7 +2827,9 @@ static void ggtt_cleanup_hw(struct i915_ggtt *ggtt)
mutex_unlock(&i915->drm.struct_mutex);
arch_phys_wc_del(ggtt->mtrr);
- io_mapping_fini(&ggtt->iomap);
+
+ if (ggtt->iomap.size)
+ io_mapping_fini(&ggtt->iomap);
}
/**
@@ -3038,10 +3040,13 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
int err;
/* TODO: We're not aware of mappable constraints on gen8 yet */
- ggtt->gmadr =
- (struct resource) DEFINE_RES_MEM(pci_resource_start(pdev, 2),
- pci_resource_len(pdev, 2));
- ggtt->mappable_end = resource_size(&ggtt->gmadr);
+ /* FIXME: We probably need to add this to device_info or runtime_info */
+ if (!HAS_LMEM(dev_priv)) {
+ ggtt->gmadr =
+ (struct resource) DEFINE_RES_MEM(pci_resource_start(pdev, 2),
+ pci_resource_len(pdev, 2));
+ ggtt->mappable_end = resource_size(&ggtt->gmadr);
+ }
err = pci_set_dma_mask(pdev, DMA_BIT_MASK(39));
if (!err)
@@ -3267,15 +3272,18 @@ static int ggtt_init_hw(struct i915_ggtt *ggtt)
if (!HAS_LLC(i915) && !HAS_PPGTT(i915))
ggtt->vm.mm.color_adjust = i915_ggtt_color_adjust;
- if (!io_mapping_init_wc(&ggtt->iomap,
- ggtt->gmadr.start,
- ggtt->mappable_end)) {
- ggtt->vm.cleanup(&ggtt->vm);
- ret = -EIO;
- goto out;
- }
+ if (ggtt->mappable_end) {
+ if (!io_mapping_init_wc(&ggtt->iomap,
+ ggtt->gmadr.start,
+ ggtt->mappable_end)) {
+ ggtt->vm.cleanup(&ggtt->vm);
+ ret = -EIO;
+ goto out;
+ }
- ggtt->mtrr = arch_phys_wc_add(ggtt->gmadr.start, ggtt->mappable_end);
+ ggtt->mtrr = arch_phys_wc_add(ggtt->gmadr.start,
+ ggtt->mappable_end);
+ }
i915_ggtt_init_fences(ggtt);
--
2.20.1
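[Editor's sketch] The patch above applies the "optional resource" pattern twice: setup only runs when `mappable_end` is non-zero, and teardown keys off `ggtt->iomap.size`, so cleanup is safe whether or not the mapping was ever created. A reduced sketch with a single int standing in for the iomap state (names are illustrative):

```c
static int iomap_size; /* 0 until the mapping is created */

static int ggtt_init_iomap(unsigned long long mappable_end)
{
	if (!mappable_end)
		return 0; /* no aperture BAR: nothing to map, not an error */

	iomap_size = 1; /* io_mapping_init_wc() in the real driver */
	return 0;
}

static void ggtt_fini_iomap(void)
{
	if (iomap_size)          /* mirrors the ggtt->iomap.size check */
		iomap_size = 0;  /* io_mapping_fini() in the real driver */
}
```

Guarding teardown on the same state that setup writes means the error and release paths need no extra bookkeeping about whether the aperture existed.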
* [PATCH 17/22] drm/i915: set num_fence_regs to 0 if there is no aperture
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (15 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 16/22] drm/i915: do not map aperture if it is not available Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 20:49 ` Chris Wilson
2019-09-27 17:34 ` [PATCH 18/22] drm/i915/selftests: check for missing aperture Matthew Auld
` (8 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
We can't fence anything without aperture.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Stuart Summers <stuart.summers@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_gem_fence_reg.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem_fence_reg.c b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
index 615a9f4ef30c..e15e4e247576 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence_reg.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
@@ -828,8 +828,10 @@ void i915_ggtt_init_fences(struct i915_ggtt *ggtt)
detect_bit_6_swizzle(i915);
- if (INTEL_GEN(i915) >= 7 &&
- !(IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)))
+ if (!HAS_MAPPABLE_APERTURE(i915))
+ num_fences = 0;
+ else if (INTEL_GEN(i915) >= 7 &&
+ !(IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)))
num_fences = 32;
else if (INTEL_GEN(i915) >= 4 ||
IS_I945G(i915) || IS_I945GM(i915) ||
--
2.20.1
* Re: [PATCH 17/22] drm/i915: set num_fence_regs to 0 if there is no aperture
2019-09-27 17:34 ` [PATCH 17/22] drm/i915: set num_fence_regs to 0 if there is no aperture Matthew Auld
@ 2019-09-27 20:49 ` Chris Wilson
0 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 20:49 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:34:04)
> From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
>
> We can't fence anything without aperture.
>
> Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
> Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> Cc: Matthew Auld <matthew.auld@intel.com>
> ---
> drivers/gpu/drm/i915/i915_gem_fence_reg.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_fence_reg.c b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
> index 615a9f4ef30c..e15e4e247576 100644
> --- a/drivers/gpu/drm/i915/i915_gem_fence_reg.c
> +++ b/drivers/gpu/drm/i915/i915_gem_fence_reg.c
> @@ -828,8 +828,10 @@ void i915_ggtt_init_fences(struct i915_ggtt *ggtt)
>
> detect_bit_6_swizzle(i915);
>
> - if (INTEL_GEN(i915) >= 7 &&
> - !(IS_VALLEYVIEW(i915) || IS_CHERRYVIEW(i915)))
> + if (!HAS_MAPPABLE_APERTURE(i915))
You have the actual i915_ggtt!
-Chris
* [PATCH 18/22] drm/i915/selftests: check for missing aperture
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (16 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 17/22] drm/i915: set num_fence_regs to 0 if there is no aperture Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 20:51 ` Chris Wilson
2019-09-27 17:34 ` [PATCH 19/22] drm/i915: error capture with no ggtt slot Matthew Auld
` (7 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
We may be missing support for the mappable aperture on some platforms.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
.../drm/i915/gem/selftests/i915_gem_coherency.c | 5 ++++-
drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c | 6 ++++++
drivers/gpu/drm/i915/gt/selftest_hangcheck.c | 14 ++++++++++----
drivers/gpu/drm/i915/selftests/i915_gem.c | 3 +++
drivers/gpu/drm/i915/selftests/i915_gem_gtt.c | 3 +++
5 files changed, 26 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
index 0ff7a89aadca..07faeada86eb 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_coherency.c
@@ -246,7 +246,10 @@ static bool always_valid(struct drm_i915_private *i915)
static bool needs_fence_registers(struct drm_i915_private *i915)
{
- return !intel_gt_is_wedged(&i915->gt);
+ if (intel_gt_is_wedged(&i915->gt))
+ return false;
+
+ return i915->ggtt.num_fences;
}
static bool needs_mi_store_dword(struct drm_i915_private *i915)
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index aefe557527f8..cb880d73ef73 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -301,6 +301,9 @@ static int igt_partial_tiling(void *arg)
int tiling;
int err;
+ if (!HAS_MAPPABLE_APERTURE(i915))
+ return 0;
+
/* We want to check the page mapping and fencing of a large object
* mmapped through the GTT. The object we create is larger than can
* possibly be mmaped as a whole, and so we must use partial GGTT vma.
@@ -433,6 +436,9 @@ static int igt_smoke_tiling(void *arg)
IGT_TIMEOUT(end);
int err;
+ if (!HAS_MAPPABLE_APERTURE(i915))
+ return 0;
+
/*
* igt_partial_tiling() does an exhaustive check of partial tiling
* chunking, but will undoubtedly run out of time. Here, we do a
diff --git a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
index a0098fc35921..35cc2c68b32f 100644
--- a/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
+++ b/drivers/gpu/drm/i915/gt/selftest_hangcheck.c
@@ -1189,8 +1189,12 @@ static int __igt_reset_evict_vma(struct intel_gt *gt,
struct i915_request *rq;
struct evict_vma arg;
struct hang h;
+ unsigned int pin_flags;
int err;
+ if (!gt->ggtt->num_fences && flags & EXEC_OBJECT_NEEDS_FENCE)
+ return 0;
+
if (!engine || !intel_engine_can_store_dword(engine))
return 0;
@@ -1227,10 +1231,12 @@ static int __igt_reset_evict_vma(struct intel_gt *gt,
goto out_obj;
}
- err = i915_vma_pin(arg.vma, 0, 0,
- i915_vma_is_ggtt(arg.vma) ?
- PIN_GLOBAL | PIN_MAPPABLE :
- PIN_USER);
+ pin_flags = i915_vma_is_ggtt(arg.vma) ? PIN_GLOBAL : PIN_USER;
+
+ if (flags & EXEC_OBJECT_NEEDS_FENCE)
+ pin_flags |= PIN_MAPPABLE;
+
+ err = i915_vma_pin(arg.vma, 0, 0, pin_flags);
if (err) {
i915_request_add(rq);
goto out_obj;
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem.c b/drivers/gpu/drm/i915/selftests/i915_gem.c
index 37593831b539..4951957a4d8d 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem.c
@@ -42,6 +42,9 @@ static void trash_stolen(struct drm_i915_private *i915)
unsigned long page;
u32 prng = 0x12345678;
+ if (!HAS_MAPPABLE_APERTURE(i915))
+ return;
+
for (page = 0; page < size; page += PAGE_SIZE) {
const dma_addr_t dma = i915->dsm.start + page;
u32 __iomem *s;
diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
index f4d7b254c9a7..57dd237cd220 100644
--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
@@ -1152,6 +1152,9 @@ static int igt_ggtt_page(void *arg)
unsigned int *order, n;
int err;
+ if (!HAS_MAPPABLE_APERTURE(i915))
+ return 0;
+
mutex_lock(&i915->drm.struct_mutex);
obj = i915_gem_object_create_internal(i915, PAGE_SIZE);
--
2.20.1
* Re: [PATCH 18/22] drm/i915/selftests: check for missing aperture
2019-09-27 17:34 ` [PATCH 18/22] drm/i915/selftests: check for missing aperture Matthew Auld
@ 2019-09-27 20:51 ` Chris Wilson
0 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 20:51 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:34:05)
> diff --git a/drivers/gpu/drm/i915/selftests/i915_gem.c b/drivers/gpu/drm/i915/selftests/i915_gem.c
> index 37593831b539..4951957a4d8d 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_gem.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_gem.c
> @@ -42,6 +42,9 @@ static void trash_stolen(struct drm_i915_private *i915)
> unsigned long page;
> u32 prng = 0x12345678;
>
> + if (!HAS_MAPPABLE_APERTURE(i915))
> + return;
That's a bit of a nasty loss in coverage. Note we need to extend this
test to trash lmem as well. Ideas? (Possibly using the GPU to trash
everything but itself?)
-Chris
* [PATCH 19/22] drm/i915: error capture with no ggtt slot
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (17 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 18/22] drm/i915/selftests: check for missing aperture Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 17:52 ` Chris Wilson
2019-09-27 17:34 ` [PATCH 20/22] drm/i915: Don't try to place HWS in non-existing mappable region Matthew Auld
` (6 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
If the aperture is not available in HW we can't use a GGTT slot and WC
copy, so fall back to regular kmap.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_gem_gtt.c | 19 ++++----
drivers/gpu/drm/i915/i915_gpu_error.c | 65 ++++++++++++++++++++++-----
2 files changed, 64 insertions(+), 20 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 1be7b236f234..29f9c43b2c68 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2661,7 +2661,8 @@ static void ggtt_release_guc_top(struct i915_ggtt *ggtt)
static void cleanup_init_ggtt(struct i915_ggtt *ggtt)
{
ggtt_release_guc_top(ggtt);
- drm_mm_remove_node(&ggtt->error_capture);
+ if (drm_mm_node_allocated(&ggtt->error_capture))
+ drm_mm_remove_node(&ggtt->error_capture);
}
static int init_ggtt(struct i915_ggtt *ggtt)
@@ -2692,13 +2693,15 @@ static int init_ggtt(struct i915_ggtt *ggtt)
if (ret)
return ret;
- /* Reserve a mappable slot for our lockless error capture */
- ret = drm_mm_insert_node_in_range(&ggtt->vm.mm, &ggtt->error_capture,
- PAGE_SIZE, 0, I915_COLOR_UNEVICTABLE,
- 0, ggtt->mappable_end,
- DRM_MM_INSERT_LOW);
- if (ret)
- return ret;
+ if (HAS_MAPPABLE_APERTURE(ggtt->vm.i915)) {
+ /* Reserve a mappable slot for our lockless error capture */
+ ret = drm_mm_insert_node_in_range(&ggtt->vm.mm, &ggtt->error_capture,
+ PAGE_SIZE, 0, I915_COLOR_UNEVICTABLE,
+ 0, ggtt->mappable_end,
+ DRM_MM_INSERT_LOW);
+ if (ret)
+ return ret;
+ }
/*
* The upper portion of the GuC address space has a sizeable hole
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 6384a06aa5bf..c6c96f0c6b28 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -40,6 +40,7 @@
#include "display/intel_overlay.h"
#include "gem/i915_gem_context.h"
+#include "gem/i915_gem_lmem.h"
#include "i915_drv.h"
#include "i915_gpu_error.h"
@@ -235,6 +236,7 @@ struct compress {
struct pagevec pool;
struct z_stream_s zstream;
void *tmp;
+ bool wc;
};
static bool compress_init(struct compress *c)
@@ -292,7 +294,7 @@ static int compress_page(struct compress *c,
struct z_stream_s *zstream = &c->zstream;
zstream->next_in = src;
- if (c->tmp && i915_memcpy_from_wc(c->tmp, src, PAGE_SIZE))
+ if (c->wc && c->tmp && i915_memcpy_from_wc(c->tmp, src, PAGE_SIZE))
zstream->next_in = c->tmp;
zstream->avail_in = PAGE_SIZE;
@@ -367,6 +369,7 @@ static void err_compression_marker(struct drm_i915_error_state_buf *m)
struct compress {
struct pagevec pool;
+ bool wc;
};
static bool compress_init(struct compress *c)
@@ -389,7 +392,7 @@ static int compress_page(struct compress *c,
if (!ptr)
return -ENOMEM;
- if (!i915_memcpy_from_wc(ptr, src, PAGE_SIZE))
+ if (!(c->wc && i915_memcpy_from_wc(ptr, src, PAGE_SIZE)))
memcpy(ptr, src, PAGE_SIZE);
dst->pages[dst->page_count++] = ptr;
@@ -970,7 +973,6 @@ i915_error_object_create(struct drm_i915_private *i915,
struct drm_i915_error_object *dst;
unsigned long num_pages;
struct sgt_iter iter;
- dma_addr_t dma;
int ret;
might_sleep();
@@ -996,17 +998,54 @@ i915_error_object_create(struct drm_i915_private *i915,
dst->page_count = 0;
dst->unused = 0;
+ compress->wc = i915_gem_object_is_lmem(vma->obj) ||
+ drm_mm_node_allocated(&ggtt->error_capture);
+
ret = -EINVAL;
- for_each_sgt_daddr(dma, iter, vma->pages) {
+ if (drm_mm_node_allocated(&ggtt->error_capture)) {
void __iomem *s;
+ dma_addr_t dma;
- ggtt->vm.insert_page(&ggtt->vm, dma, slot, I915_CACHE_NONE, 0);
+ for_each_sgt_daddr(dma, iter, vma->pages) {
+ ggtt->vm.insert_page(&ggtt->vm, dma, slot,
+ I915_CACHE_NONE, 0);
- s = io_mapping_map_wc(&ggtt->iomap, slot, PAGE_SIZE);
- ret = compress_page(compress, (void __force *)s, dst);
- io_mapping_unmap(s);
- if (ret)
- break;
+ s = io_mapping_map_wc(&ggtt->iomap, slot, PAGE_SIZE);
+ ret = compress_page(compress, (void __force *)s, dst);
+ io_mapping_unmap(s);
+ if (ret)
+ break;
+ }
+ } else if (i915_gem_object_is_lmem(vma->obj)) {
+ struct intel_memory_region *mem = vma->obj->mm.region;
+ dma_addr_t dma;
+
+ for_each_sgt_daddr(dma, iter, vma->pages) {
+ void __iomem *s;
+
+ s = io_mapping_map_atomic_wc(&mem->iomap, dma);
+ ret = compress_page(compress, s, dst);
+ io_mapping_unmap_atomic(s);
+ if (ret)
+ break;
+ }
+ } else {
+ struct page *page;
+
+ for_each_sgt_page(page, iter, vma->pages) {
+ void *s;
+
+ drm_clflush_pages(&page, 1);
+
+ s = kmap_atomic(page);
+ ret = compress_page(compress, s, dst);
+ kunmap_atomic(s);
+
+ drm_clflush_pages(&page, 1);
+
+ if (ret)
+ break;
+ }
}
if (ret || compress_flush(compress, dst)) {
@@ -1675,9 +1714,11 @@ static unsigned long capture_find_epoch(const struct i915_gpu_state *error)
static void capture_finish(struct i915_gpu_state *error)
{
struct i915_ggtt *ggtt = &error->i915->ggtt;
- const u64 slot = ggtt->error_capture.start;
- ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE);
+ if (drm_mm_node_allocated(&ggtt->error_capture)) {
+ const u64 slot = ggtt->error_capture.start;
+ ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE);
+ }
}
#define DAY_AS_SECONDS(x) (24 * 60 * 60 * (x))
--
2.20.1
* Re: [PATCH 19/22] drm/i915: error capture with no ggtt slot
2019-09-27 17:34 ` [PATCH 19/22] drm/i915: error capture with no ggtt slot Matthew Auld
@ 2019-09-27 17:52 ` Chris Wilson
0 siblings, 0 replies; 50+ messages in thread
From: Chris Wilson @ 2019-09-27 17:52 UTC (permalink / raw)
To: Matthew Auld, intel-gfx; +Cc: daniel.vetter
Quoting Matthew Auld (2019-09-27 18:34:06)
> @@ -2692,13 +2693,15 @@ static int init_ggtt(struct i915_ggtt *ggtt)
> if (ret)
> return ret;
>
> - /* Reserve a mappable slot for our lockless error capture */
> - ret = drm_mm_insert_node_in_range(&ggtt->vm.mm, &ggtt->error_capture,
> - PAGE_SIZE, 0, I915_COLOR_UNEVICTABLE,
> - 0, ggtt->mappable_end,
> - DRM_MM_INSERT_LOW);
> - if (ret)
> - return ret;
> + if (HAS_MAPPABLE_APERTURE(ggtt->vm.i915)) {
Uh. If only we had the answer to hand...
if (ggtt->mappable_end) {
Or make HAS_MAPPABLE_APERTURE take ggtt. Though I'd vote for less
shouting.
-Chris
* [PATCH 20/22] drm/i915: Don't try to place HWS in non-existing mappable region
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (18 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 19/22] drm/i915: error capture with no ggtt slot Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 17:34 ` [PATCH 21/22] drm/i915: check for missing aperture in GTT pread/pwrite paths Matthew Auld
` (5 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: Michal Wajdeczko <michal.wajdeczko@intel.com>
HWS placement restrictions can't rely on the HAS_LLC flag alone.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
---
drivers/gpu/drm/i915/gt/intel_engine_cs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
index f97686bdc28b..2e3f7a7507ae 100644
--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
@@ -513,7 +513,7 @@ static int pin_ggtt_status_page(struct intel_engine_cs *engine,
unsigned int flags;
flags = PIN_GLOBAL;
- if (!HAS_LLC(engine->i915))
+ if (!HAS_LLC(engine->i915) && HAS_MAPPABLE_APERTURE(engine->i915))
/*
* On g33, we cannot place HWS above 256MiB, so
* restrict its pinning to the low mappable arena.
--
2.20.1
* [PATCH 21/22] drm/i915: check for missing aperture in GTT pread/pwrite paths
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (19 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 20/22] drm/i915: Don't try to place HWS in non-existing mappable region Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 17:57 ` Chris Wilson
2019-09-27 17:34 ` [PATCH 22/22] HAX drm/i915: add the fake lmem region Matthew Auld
` (4 subsequent siblings)
25 siblings, 1 reply; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
From: CQ Tang <cq.tang@intel.com>
drm_mm_insert_node_in_range() treats range_start > range_end as a
programmer error, such that we explode in insert_mappable_node. For now
simply check for missing aperture on such paths.
Signed-off-by: CQ Tang <cq.tang@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/i915/i915_gem.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fd329b6b475c..82daaab022d8 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -337,6 +337,9 @@ i915_gem_gtt_pread(struct drm_i915_gem_object *obj,
u64 remain, offset;
int ret;
+ if (!HAS_MAPPABLE_APERTURE(i915))
+ return -ENOSPC;
+
ret = mutex_lock_interruptible(&i915->drm.struct_mutex);
if (ret)
return ret;
@@ -530,6 +533,9 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_gem_object *obj,
void __user *user_data;
int ret;
+ if (!HAS_MAPPABLE_APERTURE(i915))
+ return -ENOSPC;
+
ret = mutex_lock_interruptible(&i915->drm.struct_mutex);
if (ret)
return ret;
--
2.20.1
* [PATCH 22/22] HAX drm/i915: add the fake lmem region
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (20 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 21/22] drm/i915: check for missing aperture in GTT pread/pwrite paths Matthew Auld
@ 2019-09-27 17:34 ` Matthew Auld
2019-09-27 18:29 ` ✗ Fi.CI.CHECKPATCH: warning for LMEM basics Patchwork
` (3 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Matthew Auld @ 2019-09-27 17:34 UTC (permalink / raw)
To: intel-gfx; +Cc: daniel.vetter
Intended for upstream testing so that we can still exercise the LMEM
plumbing and !HAS_MAPPABLE_APERTURE paths. Smoke tested on a Skull Canyon
device. This works by allocating an intel_memory_region for a reserved
portion of system memory, which we treat like LMEM. For the LMEMBAR we
steal the aperture and map it 1:1 to the reserved region.
To enable, simply set i915_fake_lmem_start= on the kernel cmdline with the
start of the reserved region (see memmap=). The size of the region we can
use is determined by the size of the mappable aperture, so the size of the
reserved region should be >= mappable_end.
eg. memmap=2G$16G i915_fake_lmem_start=0x400000000
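For reference, one possible (untested, distro-dependent) way to make this persistent via GRUB; note that the `$` in memmap= needs escaping inside the GRUB config:

```shell
# Illustrative /etc/default/grub snippet: reserve 2G of RAM at the 16G
# boundary and point the fake-LMEM start at it. Regenerate the grub
# config afterwards (e.g. update-grub).
GRUB_CMDLINE_LINUX_DEFAULT="memmap=2G\$16G i915_fake_lmem_start=0x400000000"
```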
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Abdiel Janulgue <abdiel.janulgue@linux.intel.com>
---
arch/x86/kernel/early-quirks.c | 26 +++++++
drivers/gpu/drm/i915/gem/i915_gem_lmem.c | 3 +
drivers/gpu/drm/i915/i915_drv.c | 8 ++
drivers/gpu/drm/i915/i915_gem_gtt.c | 3 +
drivers/gpu/drm/i915/intel_memory_region.h | 6 ++
drivers/gpu/drm/i915/intel_region_lmem.c | 90 ++++++++++++++++++++++
drivers/gpu/drm/i915/intel_region_lmem.h | 5 ++
include/drm/i915_drm.h | 3 +
8 files changed, 144 insertions(+)
diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
index 6f6b1d04dadf..9b04655e3926 100644
--- a/arch/x86/kernel/early-quirks.c
+++ b/arch/x86/kernel/early-quirks.c
@@ -603,6 +603,32 @@ static void __init intel_graphics_quirks(int num, int slot, int func)
}
}
+struct resource intel_graphics_fake_lmem_res __ro_after_init = DEFINE_RES_MEM(0, 0);
+EXPORT_SYMBOL(intel_graphics_fake_lmem_res);
+
+static int __init early_i915_fake_lmem_init(char *s)
+{
+ u64 start;
+ int ret;
+
+ if (*s == '=')
+ s++;
+
+ ret = kstrtoull(s, 16, &start);
+ if (ret)
+ return ret;
+
+ intel_graphics_fake_lmem_res.start = start;
+ intel_graphics_fake_lmem_res.end = SZ_2G; /* Placeholder; depends on aperture size */
+
+ printk(KERN_INFO "Intel graphics fake LMEM starts at %pa\n",
+ &intel_graphics_fake_lmem_res.start);
+
+ return 0;
+}
+
+early_param("i915_fake_lmem_start", early_i915_fake_lmem_init);
+
static void __init force_disable_hpet(int num, int slot, int func)
{
#ifdef CONFIG_HPET_TIMER
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
index d7ec74ed5b88..c5e75c2f2511 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_lmem.c
@@ -23,6 +23,7 @@ void __iomem *i915_gem_object_lmem_io_map_page(struct drm_i915_gem_object *obj,
resource_size_t offset;
offset = i915_gem_object_get_dma_address(obj, n);
+ offset -= obj->mm.region->region.start;
return io_mapping_map_wc(&obj->mm.region->iomap, offset, PAGE_SIZE);
}
@@ -33,6 +34,7 @@ void __iomem *i915_gem_object_lmem_io_map_page_atomic(struct drm_i915_gem_object
resource_size_t offset;
offset = i915_gem_object_get_dma_address(obj, n);
+ offset -= obj->mm.region->region.start;
return io_mapping_map_atomic_wc(&obj->mm.region->iomap, offset);
}
@@ -46,6 +48,7 @@ void __iomem *i915_gem_object_lmem_io_map(struct drm_i915_gem_object *obj,
GEM_BUG_ON(!(obj->flags & I915_BO_ALLOC_CONTIGUOUS));
offset = i915_gem_object_get_dma_address(obj, n);
+ offset -= obj->mm.region->region.start;
return io_mapping_map_wc(&obj->mm.region->iomap, offset, size);
}
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 91aae56b4280..98fa1932c4aa 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1546,6 +1546,14 @@ int i915_driver_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (!i915_modparams.nuclear_pageflip && match_info->gen < 5)
dev_priv->drm.driver_features &= ~DRIVER_ATOMIC;
+ /* Check if we support fake LMEM -- enable for live selftests */
+ if (INTEL_GEN(dev_priv) >= 9 && i915_selftest.live &&
+ intel_graphics_fake_lmem_res.start) {
+ mkwrite_device_info(dev_priv)->memory_regions =
+ REGION_SMEM | REGION_LMEM;
+ GEM_BUG_ON(!HAS_LMEM(dev_priv));
+ }
+
ret = pci_enable_device(pdev);
if (ret)
goto out_fini;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 29f9c43b2c68..02d2a6266b8c 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2778,6 +2778,9 @@ int i915_gem_init_memory_regions(struct drm_i915_private *i915)
case INTEL_STOLEN:
mem = i915_gem_stolen_setup(i915);
break;
+ case INTEL_LMEM:
+ mem = intel_setup_fake_lmem(i915);
+ break;
}
if (IS_ERR(mem)) {
diff --git a/drivers/gpu/drm/i915/intel_memory_region.h b/drivers/gpu/drm/i915/intel_memory_region.h
index 9ef2ec760a4b..61f0da075805 100644
--- a/drivers/gpu/drm/i915/intel_memory_region.h
+++ b/drivers/gpu/drm/i915/intel_memory_region.h
@@ -9,6 +9,7 @@
#include <linux/ioport.h>
#include <linux/mutex.h>
#include <linux/io-mapping.h>
+#include <drm/drm_mm.h>
#include "i915_buddy.h"
@@ -70,6 +71,9 @@ struct intel_memory_region {
struct io_mapping iomap;
struct resource region;
+ /* For faking for lmem */
+ struct drm_mm_node fake_mappable;
+
struct i915_buddy_mm mm;
struct mutex mm_lock;
@@ -80,6 +84,8 @@ struct intel_memory_region {
unsigned int instance;
unsigned int id;
+ dma_addr_t remap_addr;
+
/* Protects access to objects and purgeable */
struct mutex obj_lock;
struct list_head objects;
diff --git a/drivers/gpu/drm/i915/intel_region_lmem.c b/drivers/gpu/drm/i915/intel_region_lmem.c
index 051069664074..935b8f19653c 100644
--- a/drivers/gpu/drm/i915/intel_region_lmem.c
+++ b/drivers/gpu/drm/i915/intel_region_lmem.c
@@ -36,9 +36,62 @@ lmem_create_object(struct intel_memory_region *mem,
return obj;
}
+static int init_fake_lmem_bar(struct intel_memory_region *mem)
+{
+ struct drm_i915_private *i915 = mem->i915;
+ struct i915_ggtt *ggtt = &i915->ggtt;
+ unsigned long n;
+ int ret;
+
+ /* We want to 1:1 map the mappable aperture to our reserved region */
+
+ mem->fake_mappable.start = 0;
+ mem->fake_mappable.size = resource_size(&mem->region);
+ mem->fake_mappable.color = I915_COLOR_UNEVICTABLE;
+
+ ret = drm_mm_reserve_node(&ggtt->vm.mm, &mem->fake_mappable);
+ if (ret)
+ return ret;
+
+ mem->remap_addr = dma_map_resource(&i915->drm.pdev->dev,
+ mem->region.start,
+ mem->fake_mappable.size,
+ PCI_DMA_BIDIRECTIONAL,
+ DMA_ATTR_FORCE_CONTIGUOUS);
+ if (dma_mapping_error(&i915->drm.pdev->dev, mem->remap_addr)) {
+ drm_mm_remove_node(&mem->fake_mappable);
+ return -EINVAL;
+ }
+
+ for (n = 0; n < mem->fake_mappable.size >> PAGE_SHIFT; ++n) {
+ ggtt->vm.insert_page(&ggtt->vm,
+ mem->remap_addr + (n << PAGE_SHIFT),
+ n << PAGE_SHIFT,
+ I915_CACHE_NONE, 0);
+ }
+
+ mem->region = (struct resource)DEFINE_RES_MEM(mem->remap_addr,
+ mem->fake_mappable.size);
+
+ return 0;
+}
+
+static void release_fake_lmem_bar(struct intel_memory_region *mem)
+{
+ if (drm_mm_node_allocated(&mem->fake_mappable))
+ drm_mm_remove_node(&mem->fake_mappable);
+
+ dma_unmap_resource(&mem->i915->drm.pdev->dev,
+ mem->remap_addr,
+ mem->fake_mappable.size,
+ PCI_DMA_BIDIRECTIONAL,
+ DMA_ATTR_FORCE_CONTIGUOUS);
+}
+
static void
region_lmem_release(struct intel_memory_region *mem)
{
+ release_fake_lmem_bar(mem);
io_mapping_fini(&mem->iomap);
intel_memory_region_release_buddy(mem);
}
@@ -48,6 +101,11 @@ region_lmem_init(struct intel_memory_region *mem)
{
int ret;
+ if (intel_graphics_fake_lmem_res.start) {
+ ret = init_fake_lmem_bar(mem);
+ GEM_BUG_ON(ret);
+ }
+
if (!io_mapping_init_wc(&mem->iomap,
mem->io_start,
resource_size(&mem->region)))
@@ -65,3 +123,35 @@ const struct intel_memory_region_ops intel_region_lmem_ops = {
.release = region_lmem_release,
.create_object = lmem_create_object,
};
+
+struct intel_memory_region *
+intel_setup_fake_lmem(struct drm_i915_private *i915)
+{
+ struct pci_dev *pdev = i915->drm.pdev;
+ struct intel_memory_region *mem;
+ resource_size_t mappable_end;
+ resource_size_t io_start;
+ resource_size_t start;
+
+ GEM_BUG_ON(HAS_MAPPABLE_APERTURE(i915));
+ GEM_BUG_ON(!intel_graphics_fake_lmem_res.start);
+
+ /* Your mappable aperture belongs to me now! */
+ mappable_end = pci_resource_len(pdev, 2);
+ io_start = pci_resource_start(pdev, 2);
+ start = intel_graphics_fake_lmem_res.start;
+
+ mem = intel_memory_region_create(i915,
+ start,
+ mappable_end,
+ I915_GTT_PAGE_SIZE_4K,
+ io_start,
+ &intel_region_lmem_ops);
+ if (!IS_ERR(mem)) {
+ DRM_INFO("Intel graphics fake LMEM: %pR\n", &mem->region);
+ DRM_INFO("Intel graphics fake LMEM IO start: %llx\n",
+ (u64)mem->io_start);
+ }
+
+ return mem;
+}
diff --git a/drivers/gpu/drm/i915/intel_region_lmem.h b/drivers/gpu/drm/i915/intel_region_lmem.h
index ed2a3bab6443..213def7c7b8a 100644
--- a/drivers/gpu/drm/i915/intel_region_lmem.h
+++ b/drivers/gpu/drm/i915/intel_region_lmem.h
@@ -6,6 +6,11 @@
#ifndef __INTEL_REGION_LMEM_H
#define __INTEL_REGION_LMEM_H
+struct drm_i915_private;
+
extern const struct intel_memory_region_ops intel_region_lmem_ops;
+struct intel_memory_region *
+intel_setup_fake_lmem(struct drm_i915_private *i915);
+
#endif /* !__INTEL_REGION_LMEM_H */
diff --git a/include/drm/i915_drm.h b/include/drm/i915_drm.h
index 6722005884db..271980225deb 100644
--- a/include/drm/i915_drm.h
+++ b/include/drm/i915_drm.h
@@ -39,6 +39,9 @@ bool i915_gpu_turbo_disable(void);
/* Exported from arch/x86/kernel/early-quirks.c */
extern struct resource intel_graphics_stolen_res;
+/* Exported from arch/x86/kernel/early-quirks.c */
+extern struct resource intel_graphics_fake_lmem_res;
+
/*
* The Bridge device's PCI config space has information about the
* fb aperture size and the amount of pre-reserved memory.
--
2.20.1
* ✗ Fi.CI.CHECKPATCH: warning for LMEM basics
2019-09-27 17:33 [PATCH 00/22] LMEM basics Matthew Auld
` (21 preceding siblings ...)
2019-09-27 17:34 ` [PATCH 22/22] HAX drm/i915: add the fake lmem region Matthew Auld
@ 2019-09-27 18:29 ` Patchwork
2019-09-27 18:39 ` ✗ Fi.CI.SPARSE: " Patchwork
` (2 subsequent siblings)
25 siblings, 0 replies; 50+ messages in thread
From: Patchwork @ 2019-09-27 18:29 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-gfx
== Series Details ==
Series: LMEM basics
URL : https://patchwork.freedesktop.org/series/67350/
State : warning
== Summary ==
$ dim checkpatch origin/drm-tip
ccc51e3ac24e drm/i915: check for kernel_context
dbbfe24cafca drm/i915: simplify i915_gem_init_early
f52b9b0685d8 drm/i915: introduce intel_memory_region
-:59: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#59:
new file mode 100644
-:562: WARNING:FUNCTION_ARGUMENTS: function definition argument 'struct intel_memory_region *' should also have an identifier name
#562: FILE: drivers/gpu/drm/i915/intel_memory_region.h:25:
+ int (*init)(struct intel_memory_region *);
-:563: WARNING:FUNCTION_ARGUMENTS: function definition argument 'struct intel_memory_region *' should also have an identifier name
#563: FILE: drivers/gpu/drm/i915/intel_memory_region.h:26:
+ void (*release)(struct intel_memory_region *);
-:565: WARNING:FUNCTION_ARGUMENTS: function definition argument 'struct intel_memory_region *' should also have an identifier name
#565: FILE: drivers/gpu/drm/i915/intel_memory_region.h:28:
+ struct drm_i915_gem_object *
-:565: WARNING:FUNCTION_ARGUMENTS: function definition argument 'resource_size_t' should also have an identifier name
#565: FILE: drivers/gpu/drm/i915/intel_memory_region.h:28:
+ struct drm_i915_gem_object *
-:565: WARNING:FUNCTION_ARGUMENTS: function definition argument 'unsigned int' should also have an identifier name
#565: FILE: drivers/gpu/drm/i915/intel_memory_region.h:28:
+ struct drm_i915_gem_object *
-:580: CHECK:UNCOMMENTED_DEFINITION: struct mutex definition without comment
#580: FILE: drivers/gpu/drm/i915/intel_memory_region.h:43:
+ struct mutex mm_lock;
-:599: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#599: FILE: drivers/gpu/drm/i915/intel_memory_region.h:62:
+__intel_memory_region_get_block_buddy(struct intel_memory_region *mem,
+ resource_size_t size);
-:601: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#601: FILE: drivers/gpu/drm/i915/intel_memory_region.h:64:
+void __intel_memory_region_put_pages_buddy(struct intel_memory_region *mem,
+ struct list_head *blocks);
-:709: WARNING:EMBEDDED_FUNCTION_NAME: Prefer using '"%s...", __func__' to using 'igt_mock_fill', this function's name, in a string
#709: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:80:
+ pr_err("igt_mock_fill failed, space still left in region\n");
total: 0 errors, 7 warnings, 3 checks, 759 lines checked
6d8c4bbbc790 drm/i915/region: support continuous allocations
-:227: WARNING:LINE_SPACING: Missing a blank line after declarations
#227: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:133:
+ LIST_HEAD(holes);
+ I915_RND_STATE(prng);
total: 0 errors, 1 warnings, 0 checks, 307 lines checked
c6f24f30618c drm/i915/region: support volatile objects
b5655dcc6d84 drm/i915: Add memory region information to device_info
a066bf23d40b drm/i915: support creating LMEM objects
-:35: WARNING:FILE_PATH_CHANGES: added, moved or deleted file(s), does MAINTAINERS need updating?
#35:
new file mode 100644
-:117: WARNING:TYPO_SPELLING: 'UKNOWN' may be misspelled - perhaps 'UNKNOWN'?
#117: FILE: drivers/gpu/drm/i915/i915_drv.h:684:
+ struct intel_memory_region *regions[INTEL_MEMORY_UKNOWN];
-:168: WARNING:TYPO_SPELLING: 'UKNOWN' may be misspelled - perhaps 'UNKNOWN'?
#168: FILE: drivers/gpu/drm/i915/intel_memory_region.h:33:
+ INTEL_MEMORY_UKNOWN, /* Should be last */
-:177: CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'r' may be better as '(r)' to avoid precedence issues
#177: FILE: drivers/gpu/drm/i915/intel_memory_region.h:42:
+#define MEMORY_TYPE_FROM_REGION(r) (ilog2(r >> INTEL_MEMORY_TYPE_SHIFT))
-:178: CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'r' may be better as '(r)' to avoid precedence issues
#178: FILE: drivers/gpu/drm/i915/intel_memory_region.h:43:
+#define MEMORY_INSTANCE_FROM_REGION(r) (ilog2(r & 0xffff))
total: 0 errors, 3 warnings, 2 checks, 265 lines checked
63ad9888746c drm/i915: setup io-mapping for LMEM
-:7: WARNING:COMMIT_MESSAGE: Missing commit description - Add an appropriate one
total: 0 errors, 1 warnings, 0 checks, 34 lines checked
1dd672af1e42 drm/i915/lmem: support kernel mapping
-:289: ERROR:CODE_INDENT: code indent should use tabs where possible
#289: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:323:
+^I^I^I val);$
-:289: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#289: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:323:
+ pr_err("vaddr[%u]=%u, val=%u\n", dword, vaddr[dword],
+ val);
-:301: ERROR:CODE_INDENT: code indent should use tabs where possible
#301: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:335:
+^I^I^I val ^ 0xdeadbeaf);$
-:301: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#301: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:335:
+ pr_err("vaddr[%u]=%u, val=%u\n", dword, vaddr[dword],
+ val ^ 0xdeadbeaf);
total: 2 errors, 0 warnings, 2 checks, 263 lines checked
76154df5dd82 drm/i915/selftests: add write-dword test for LMEM
-:19: CHECK:LINE_SPACING: Please don't use multiple blank lines
#19: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:10:
+
-:96: WARNING:LINE_SPACING: Missing a blank line after declarations
#96: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:321:
+ struct intel_context *ce;
+ I915_RND_STATE(prng);
-:176: WARNING:LINE_SPACING: Missing a blank line after declarations
#176: FILE: drivers/gpu/drm/i915/selftests/intel_memory_region.c:415:
+ struct drm_file *file;
+ I915_RND_STATE(prng);
total: 0 errors, 2 warnings, 1 checks, 210 lines checked
970fd335c3b3 drm/i915/selftest: extend coverage to include LMEM huge-pages
-:7: WARNING:COMMIT_MESSAGE: Missing commit description - Add an appropriate one
total: 0 errors, 1 warnings, 0 checks, 151 lines checked
91e43c262159 drm/i915: enumerate and init each supported region
19270051b021 drm/i915: treat shmem as a region
-:7: WARNING:COMMIT_MESSAGE: Missing commit description - Add an appropriate one
-:116: CHECK:PARENTHESIS_ALIGNMENT: Alignment should match open parenthesis
#116: FILE: drivers/gpu/drm/i915/gem/i915_gem_shmem.c:444:
+static int __create_shmem(struct drm_i915_private *i915,
struct drm_gem_object *obj,
-:128: WARNING:UNSPECIFIED_INT: Prefer 'unsigned int' to bare use of 'unsigned'
#128: FILE: drivers/gpu/drm/i915/gem/i915_gem_shmem.c:467:
+ unsigned flags)
-:190: WARNING:SUSPECT_CODE_INDENT: suspect code indent for conditional statements (8, 17)
#190: FILE: drivers/gpu/drm/i915/gem/i915_gem_shmem.c:591:
+ if (err)
+ DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n", err);
-:191: WARNING:LONG_LINE: line over 100 characters
#191: FILE: drivers/gpu/drm/i915/gem/i915_gem_shmem.c:592:
+ DRM_NOTE("Unable to create a private tmpfs mount, hugepage support will be disabled(%d).\n", err);
total: 0 errors, 4 warnings, 1 checks, 346 lines checked
82bde2d03644 drm/i915: treat stolen as a region
8355d01685d9 drm/i915: define HAS_MAPPABLE_APERTURE
-:20: CHECK:MACRO_ARG_PRECEDENCE: Macro argument 'dev_priv' may be better as '(dev_priv)' to avoid precedence issues
#20: FILE: drivers/gpu/drm/i915/i915_drv.h:2122:
+#define HAS_MAPPABLE_APERTURE(dev_priv) (dev_priv->ggtt.mappable_end > 0)
total: 0 errors, 0 warnings, 1 checks, 8 lines checked
2199f453f083 drm/i915: do not map aperture if it is not available.
-:38: CHECK:SPACING: No space is necessary after a cast
#38: FILE: drivers/gpu/drm/i915/i915_gem_gtt.c:3046:
+ (struct resource) DEFINE_RES_MEM(pci_resource_start(pdev, 2),
total: 0 errors, 0 warnings, 1 checks, 53 lines checked
4a7487273a64 drm/i915: set num_fence_regs to 0 if there is no aperture
9d82c4a319bc drm/i915/selftests: check for missing aperture
f722fde71ba0 drm/i915: error capture with no ggtt slot
-:175: WARNING:LINE_SPACING: Missing a blank line after declarations
#175: FILE: drivers/gpu/drm/i915/i915_gpu_error.c:1720:
+ const u64 slot = ggtt->error_capture.start;
+ ggtt->vm.clear_range(&ggtt->vm, slot, PAGE_SIZE);
total: 0 errors, 1 warnings, 0 checks, 149 lines checked
097c34ca5081 drm/i915: Don't try to place HWS in non-existing mappable region
2f657bf00d3c drm/i915: check for missing aperture in GTT pread/pwrite paths
e2124597ca45 HAX drm/i915: add the fake lmem region
-:49: WARNING:PREFER_PR_LEVEL: Prefer [subsystem eg: netdev]_info([subsystem]dev, ... then dev_info(dev, ... then pr_info(... to printk(KERN_INFO ...
#49: FILE: arch/x86/kernel/early-quirks.c:624:
+ printk(KERN_INFO "Intel graphics fake LMEM starts at %pa\n",
total: 0 errors, 1 warnings, 0 checks, 228 lines checked
_______________________________________________
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
* ✗ Fi.CI.SPARSE: warning for LMEM basics
From: Patchwork @ 2019-09-27 18:39 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-gfx
== Series Details ==
Series: LMEM basics
URL : https://patchwork.freedesktop.org/series/67350/
State : warning
== Summary ==
$ dim sparse origin/drm-tip
Sparse version: v0.6.0
Commit: drm/i915: check for kernel_context
Okay!
Commit: drm/i915: simplify i915_gem_init_early
Okay!
Commit: drm/i915: introduce intel_memory_region
Okay!
Commit: drm/i915/region: support continuous allocations
+drivers/gpu/drm/i915/selftests/intel_memory_region.c:118:6: warning: symbol 'igt_object_release' was not declared. Should it be static?
Commit: drm/i915/region: support volatile objects
Okay!
Commit: drm/i915: Add memory region information to device_info
Okay!
Commit: drm/i915: support creating LMEM objects
Okay!
Commit: drm/i915: setup io-mapping for LMEM
Okay!
Commit: drm/i915/lmem: support kernel mapping
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:177:42: expected void [noderef] <asn:2> *vaddr
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:177:42: got void *[assigned] ptr
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:177:42: warning: incorrect type in argument 1 (different address spaces)
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:254:51: expected void *
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:254:51: got void [noderef] <asn:2> *
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:254:51: warning: incorrect type in return expression (different address spaces)
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:335:42: expected void [noderef] <asn:2> *vaddr
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:335:42: got void *[assigned] ptr
+drivers/gpu/drm/i915/gem/i915_gem_pages.c:335:42: warning: incorrect type in argument 1 (different address spaces)
Commit: drm/i915/selftests: add write-dword test for LMEM
Okay!
Commit: drm/i915/selftest: extend coverage to include LMEM huge-pages
Okay!
Commit: drm/i915: enumerate and init each supported region
Okay!
Commit: drm/i915: treat shmem as a region
Okay!
Commit: drm/i915: treat stolen as a region
Okay!
Commit: drm/i915: define HAS_MAPPABLE_APERTURE
Okay!
Commit: drm/i915: do not map aperture if it is not available.
Okay!
Commit: drm/i915: set num_fence_regs to 0 if there is no aperture
Okay!
Commit: drm/i915/selftests: check for missing aperture
Okay!
Commit: drm/i915: error capture with no ggtt slot
-
+drivers/gpu/drm/i915/i915_gpu_error.c:1027:55: expected void *src
+drivers/gpu/drm/i915/i915_gpu_error.c:1027:55: got void [noderef] <asn:2> *[assigned] s
+drivers/gpu/drm/i915/i915_gpu_error.c:1027:55: warning: incorrect type in argument 2 (different address spaces)
Commit: drm/i915: Don't try to place HWS in non-existing mappable region
Okay!
Commit: drm/i915: check for missing aperture in GTT pread/pwrite paths
Okay!
Commit: HAX drm/i915: add the fake lmem region
Okay!
* ✓ Fi.CI.BAT: success for LMEM basics
From: Patchwork @ 2019-09-27 18:51 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-gfx
== Series Details ==
Series: LMEM basics
URL : https://patchwork.freedesktop.org/series/67350/
State : success
== Summary ==
CI Bug Log - changes from CI_DRM_6970 -> Patchwork_14569
====================================================
Summary
-------
**SUCCESS**
No regressions found.
External URL: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/index.html
New tests
---------
New tests have been introduced between CI_DRM_6970 and Patchwork_14569:
### New IGT tests (1) ###
* igt@i915_selftest@live_memory_region:
- Statuses : 46 pass(s)
- Exec time: [0.36, 2.33] s
Known issues
------------
Here are the changes found in Patchwork_14569 that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_mmap_gtt@basic-small-bo:
- fi-icl-u3: [PASS][1] -> [DMESG-WARN][2] ([fdo#107724])
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/fi-icl-u3/igt@gem_mmap_gtt@basic-small-bo.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/fi-icl-u3/igt@gem_mmap_gtt@basic-small-bo.html
* igt@kms_frontbuffer_tracking@basic:
- fi-icl-u2: [PASS][3] -> [FAIL][4] ([fdo#103167])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/fi-icl-u2/igt@kms_frontbuffer_tracking@basic.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/fi-icl-u2/igt@kms_frontbuffer_tracking@basic.html
#### Possible fixes ####
* igt@debugfs_test@read_all_entries:
- {fi-tgl-u2}: [DMESG-WARN][5] ([fdo#111600]) -> [PASS][6]
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/fi-tgl-u2/igt@debugfs_test@read_all_entries.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/fi-tgl-u2/igt@debugfs_test@read_all_entries.html
* igt@gem_exec_reloc@basic-cpu:
- fi-icl-u3: [DMESG-WARN][7] ([fdo#107724]) -> [PASS][8] +1 similar issue
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/fi-icl-u3/igt@gem_exec_reloc@basic-cpu.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/fi-icl-u3/igt@gem_exec_reloc@basic-cpu.html
#### Warnings ####
* igt@kms_chamelium@hdmi-hpd-fast:
- fi-kbl-7500u: [FAIL][9] ([fdo#111407]) -> [FAIL][10] ([fdo#111045] / [fdo#111096])
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/fi-kbl-7500u/igt@kms_chamelium@hdmi-hpd-fast.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[fdo#103167]: https://bugs.freedesktop.org/show_bug.cgi?id=103167
[fdo#107724]: https://bugs.freedesktop.org/show_bug.cgi?id=107724
[fdo#111045]: https://bugs.freedesktop.org/show_bug.cgi?id=111045
[fdo#111096]: https://bugs.freedesktop.org/show_bug.cgi?id=111096
[fdo#111407]: https://bugs.freedesktop.org/show_bug.cgi?id=111407
[fdo#111600]: https://bugs.freedesktop.org/show_bug.cgi?id=111600
Participating hosts (53 -> 46)
------------------------------
Missing (7): fi-ilk-m540 fi-hsw-4200u fi-byt-squawks fi-bsw-cyan fi-icl-y fi-byt-clapper fi-bdw-samus
Build changes
-------------
* CI: CI-20190529 -> None
* Linux: CI_DRM_6970 -> Patchwork_14569
CI-20190529: 20190529
CI_DRM_6970: ee94847f064c84de51b33d8d843aa6bde51a8af6 @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_5206: 5a6c68568def840cd720f18fc66f529a89f84675 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_14569: e2124597ca45de1ae9d5166f4df166817780c8fd @ git://anongit.freedesktop.org/gfx-ci/linux
== Linux commits ==
e2124597ca45 HAX drm/i915: add the fake lmem region
2f657bf00d3c drm/i915: check for missing aperture in GTT pread/pwrite paths
097c34ca5081 drm/i915: Don't try to place HWS in non-existing mappable region
f722fde71ba0 drm/i915: error capture with no ggtt slot
9d82c4a319bc drm/i915/selftests: check for missing aperture
4a7487273a64 drm/i915: set num_fence_regs to 0 if there is no aperture
2199f453f083 drm/i915: do not map aperture if it is not available.
8355d01685d9 drm/i915: define HAS_MAPPABLE_APERTURE
82bde2d03644 drm/i915: treat stolen as a region
19270051b021 drm/i915: treat shmem as a region
91e43c262159 drm/i915: enumerate and init each supported region
970fd335c3b3 drm/i915/selftest: extend coverage to include LMEM huge-pages
76154df5dd82 drm/i915/selftests: add write-dword test for LMEM
1dd672af1e42 drm/i915/lmem: support kernel mapping
63ad9888746c drm/i915: setup io-mapping for LMEM
a066bf23d40b drm/i915: support creating LMEM objects
b5655dcc6d84 drm/i915: Add memory region information to device_info
c6f24f30618c drm/i915/region: support volatile objects
6d8c4bbbc790 drm/i915/region: support continuous allocations
f52b9b0685d8 drm/i915: introduce intel_memory_region
dbbfe24cafca drm/i915: simplify i915_gem_init_early
ccc51e3ac24e drm/i915: check for kernel_context
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/index.html
* ✗ Fi.CI.IGT: failure for LMEM basics
From: Patchwork @ 2019-09-28 9:47 UTC (permalink / raw)
To: Matthew Auld; +Cc: intel-gfx
== Series Details ==
Series: LMEM basics
URL : https://patchwork.freedesktop.org/series/67350/
State : failure
== Summary ==
CI Bug Log - changes from CI_DRM_6970_full -> Patchwork_14569_full
====================================================
Summary
-------
**FAILURE**
Serious unknown changes coming with Patchwork_14569_full absolutely need to be
verified manually.
If you think the reported changes have nothing to do with the changes
introduced in Patchwork_14569_full, please notify your bug team to allow them
to document this new failure mode, which will reduce false positives in CI.
Possible new issues
-------------------
Here are the unknown changes that may have been introduced in Patchwork_14569_full:
### IGT changes ###
#### Possible regressions ####
* igt@i915_pm_rpm@system-suspend-execbuf:
- shard-iclb: [PASS][1] -> [DMESG-WARN][2]
[1]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb8/igt@i915_pm_rpm@system-suspend-execbuf.html
[2]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb3/igt@i915_pm_rpm@system-suspend-execbuf.html
New tests
---------
New tests have been introduced between CI_DRM_6970_full and Patchwork_14569_full:
### New IGT tests (2) ###
* igt@i915_selftest@live_memory_region:
- Statuses :
- Exec time: [None] s
* igt@i915_selftest@mock_memory_region:
- Statuses :
- Exec time: [None] s
Known issues
------------
Here are the changes found in Patchwork_14569_full that come from known issues:
### IGT changes ###
#### Issues hit ####
* igt@gem_ctx_shared@exec-single-timeline-bsd:
- shard-iclb: [PASS][3] -> [SKIP][4] ([fdo#110841])
[3]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb3/igt@gem_ctx_shared@exec-single-timeline-bsd.html
[4]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb2/igt@gem_ctx_shared@exec-single-timeline-bsd.html
* igt@gem_exec_schedule@wide-bsd:
- shard-iclb: [PASS][5] -> [SKIP][6] ([fdo#111325]) +4 similar issues
[5]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb3/igt@gem_exec_schedule@wide-bsd.html
[6]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb2/igt@gem_exec_schedule@wide-bsd.html
* igt@gem_exec_suspend@basic-s3:
- shard-skl: [PASS][7] -> [INCOMPLETE][8] ([fdo#104108])
[7]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-skl8/igt@gem_exec_suspend@basic-s3.html
[8]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-skl5/igt@gem_exec_suspend@basic-s3.html
* igt@kms_cursor_crc@pipe-b-cursor-128x42-random:
- shard-iclb: [PASS][9] -> [INCOMPLETE][10] ([fdo#107713]) +1 similar issue
[9]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb6/igt@kms_cursor_crc@pipe-b-cursor-128x42-random.html
[10]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb1/igt@kms_cursor_crc@pipe-b-cursor-128x42-random.html
* igt@kms_cursor_crc@pipe-b-cursor-suspend:
- shard-skl: [PASS][11] -> [INCOMPLETE][12] ([fdo#110741])
[11]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-skl9/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
[12]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-skl10/igt@kms_cursor_crc@pipe-b-cursor-suspend.html
* igt@kms_flip@plain-flip-fb-recreate-interruptible:
- shard-skl: [PASS][13] -> [FAIL][14] ([fdo#100368])
[13]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-skl1/igt@kms_flip@plain-flip-fb-recreate-interruptible.html
[14]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-skl1/igt@kms_flip@plain-flip-fb-recreate-interruptible.html
* igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-render:
- shard-iclb: [PASS][15] -> [FAIL][16] ([fdo#103167]) +3 similar issues
[15]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb6/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-render.html
[16]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb1/igt@kms_frontbuffer_tracking@fbcpsr-rgb565-draw-render.html
* igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes:
- shard-apl: [PASS][17] -> [DMESG-WARN][18] ([fdo#108566]) +3 similar issues
[17]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-apl3/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html
[18]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-apl4/igt@kms_plane@plane-panning-bottom-right-suspend-pipe-b-planes.html
* igt@kms_plane_alpha_blend@pipe-c-coverage-7efc:
- shard-skl: [PASS][19] -> [FAIL][20] ([fdo#108145] / [fdo#110403])
[19]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-skl9/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
[20]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-skl10/igt@kms_plane_alpha_blend@pipe-c-coverage-7efc.html
* igt@kms_psr@psr2_sprite_mmap_cpu:
- shard-iclb: [PASS][21] -> [SKIP][22] ([fdo#109441])
[21]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb2/igt@kms_psr@psr2_sprite_mmap_cpu.html
[22]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb4/igt@kms_psr@psr2_sprite_mmap_cpu.html
* igt@perf@polling:
- shard-skl: [PASS][23] -> [FAIL][24] ([fdo#110728])
[23]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-skl10/igt@perf@polling.html
[24]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-skl9/igt@perf@polling.html
* igt@prime_busy@hang-bsd2:
- shard-iclb: [PASS][25] -> [SKIP][26] ([fdo#109276]) +13 similar issues
[25]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb1/igt@prime_busy@hang-bsd2.html
[26]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb7/igt@prime_busy@hang-bsd2.html
* igt@sw_sync@sync_expired_merge:
- shard-apl: [PASS][27] -> [INCOMPLETE][28] ([fdo#103927])
[27]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-apl8/igt@sw_sync@sync_expired_merge.html
[28]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-apl5/igt@sw_sync@sync_expired_merge.html
#### Possible fixes ####
* igt@gem_exec_schedule@fifo-bsd1:
- shard-iclb: [SKIP][29] ([fdo#109276]) -> [PASS][30] +8 similar issues
[29]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb5/igt@gem_exec_schedule@fifo-bsd1.html
[30]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb4/igt@gem_exec_schedule@fifo-bsd1.html
* igt@gem_exec_schedule@preempt-other-chain-bsd:
- shard-iclb: [SKIP][31] ([fdo#111325]) -> [PASS][32] +4 similar issues
[31]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb4/igt@gem_exec_schedule@preempt-other-chain-bsd.html
[32]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb3/igt@gem_exec_schedule@preempt-other-chain-bsd.html
* igt@gem_exec_schedule@wide-vebox:
- shard-apl: [INCOMPLETE][33] ([fdo#103927]) -> [PASS][34]
[33]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-apl7/igt@gem_exec_schedule@wide-vebox.html
[34]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-apl8/igt@gem_exec_schedule@wide-vebox.html
* igt@gem_workarounds@suspend-resume-context:
- shard-apl: [DMESG-WARN][35] ([fdo#108566]) -> [PASS][36] +3 similar issues
[35]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-apl7/igt@gem_workarounds@suspend-resume-context.html
[36]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-apl5/igt@gem_workarounds@suspend-resume-context.html
* igt@gem_workarounds@suspend-resume-fd:
- shard-skl: [INCOMPLETE][37] ([fdo#104108]) -> [PASS][38]
[37]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-skl9/igt@gem_workarounds@suspend-resume-fd.html
[38]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-skl3/igt@gem_workarounds@suspend-resume-fd.html
* {igt@i915_pm_dc@dc6-dpms}:
- shard-iclb: [FAIL][39] ([fdo#110548]) -> [PASS][40]
[39]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb3/igt@i915_pm_dc@dc6-dpms.html
[40]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb1/igt@i915_pm_dc@dc6-dpms.html
* igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-blt:
- shard-iclb: [FAIL][41] ([fdo#103167]) -> [PASS][42] +2 similar issues
[41]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb6/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-blt.html
[42]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb4/igt@kms_frontbuffer_tracking@fbc-1p-offscren-pri-indfb-draw-blt.html
* igt@kms_plane_lowres@pipe-a-tiling-y:
- shard-iclb: [FAIL][43] ([fdo#103166]) -> [PASS][44]
[43]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb8/igt@kms_plane_lowres@pipe-a-tiling-y.html
[44]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb3/igt@kms_plane_lowres@pipe-a-tiling-y.html
* igt@kms_psr@psr2_cursor_mmap_cpu:
- shard-iclb: [SKIP][45] ([fdo#109441]) -> [PASS][46] +1 similar issue
[45]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb4/igt@kms_psr@psr2_cursor_mmap_cpu.html
[46]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb2/igt@kms_psr@psr2_cursor_mmap_cpu.html
* igt@kms_setmode@basic:
- shard-hsw: [FAIL][47] ([fdo#99912]) -> [PASS][48]
[47]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-hsw1/igt@kms_setmode@basic.html
[48]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-hsw5/igt@kms_setmode@basic.html
#### Warnings ####
* igt@gem_ctx_isolation@vcs1-nonpriv:
- shard-iclb: [SKIP][49] ([fdo#109276]) -> [FAIL][50] ([fdo#111329])
[49]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb3/igt@gem_ctx_isolation@vcs1-nonpriv.html
[50]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb1/igt@gem_ctx_isolation@vcs1-nonpriv.html
* igt@gem_mocs_settings@mocs-settings-bsd2:
- shard-iclb: [FAIL][51] ([fdo#111330]) -> [SKIP][52] ([fdo#109276]) +1 similar issue
[51]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-iclb1/igt@gem_mocs_settings@mocs-settings-bsd2.html
[52]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-iclb7/igt@gem_mocs_settings@mocs-settings-bsd2.html
* igt@kms_content_protection@atomic:
- shard-apl: [FAIL][53] ([fdo#110321] / [fdo#110336]) -> [INCOMPLETE][54] ([fdo#103927])
[53]: https://intel-gfx-ci.01.org/tree/drm-tip/CI_DRM_6970/shard-apl3/igt@kms_content_protection@atomic.html
[54]: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/shard-apl1/igt@kms_content_protection@atomic.html
{name}: This element is suppressed. This means it is ignored when computing
the status of the difference (SUCCESS, WARNING, or FAILURE).
[fdo#100368]: https://bugs.freedesktop.org/show_bug.cgi?id=100368
[fdo#103166]: https://bugs.freedesktop.org/show_bug.cgi?id=103166
[fdo#103167]: https://bugs.freedesktop.org/show_bug.cgi?id=103167
[fdo#103927]: https://bugs.freedesktop.org/show_bug.cgi?id=103927
[fdo#104108]: https://bugs.freedesktop.org/show_bug.cgi?id=104108
[fdo#107713]: https://bugs.freedesktop.org/show_bug.cgi?id=107713
[fdo#108145]: https://bugs.freedesktop.org/show_bug.cgi?id=108145
[fdo#108566]: https://bugs.freedesktop.org/show_bug.cgi?id=108566
[fdo#109276]: https://bugs.freedesktop.org/show_bug.cgi?id=109276
[fdo#109441]: https://bugs.freedesktop.org/show_bug.cgi?id=109441
[fdo#110321]: https://bugs.freedesktop.org/show_bug.cgi?id=110321
[fdo#110336]: https://bugs.freedesktop.org/show_bug.cgi?id=110336
[fdo#110403]: https://bugs.freedesktop.org/show_bug.cgi?id=110403
[fdo#110548]: https://bugs.freedesktop.org/show_bug.cgi?id=110548
[fdo#110728]: https://bugs.freedesktop.org/show_bug.cgi?id=110728
[fdo#110741]: https://bugs.freedesktop.org/show_bug.cgi?id=110741
[fdo#110841]: https://bugs.freedesktop.org/show_bug.cgi?id=110841
[fdo#111325]: https://bugs.freedesktop.org/show_bug.cgi?id=111325
[fdo#111329]: https://bugs.freedesktop.org/show_bug.cgi?id=111329
[fdo#111330]: https://bugs.freedesktop.org/show_bug.cgi?id=111330
[fdo#99912]: https://bugs.freedesktop.org/show_bug.cgi?id=99912
Participating hosts (16 -> 10)
------------------------------
Missing (6): shard-tglb1 shard-tglb2 shard-tglb3 shard-tglb4 shard-tglb5 shard-tglb6
Build changes
-------------
* CI: CI-20190529 -> None
* Linux: CI_DRM_6970 -> Patchwork_14569
CI-20190529: 20190529
CI_DRM_6970: ee94847f064c84de51b33d8d843aa6bde51a8af6 @ git://anongit.freedesktop.org/gfx-ci/linux
IGT_5206: 5a6c68568def840cd720f18fc66f529a89f84675 @ git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Patchwork_14569: e2124597ca45de1ae9d5166f4df166817780c8fd @ git://anongit.freedesktop.org/gfx-ci/linux
piglit_4509: fdc5a4ca11124ab8413c7988896eec4c97336694 @ git://anongit.freedesktop.org/piglit
== Logs ==
For more details see: https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_14569/