From: Thierry Reding <thierry.reding@gmail.com>
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
intel-gfx@lists.freedesktop.org, linux-kernel@vger.kernel.org,
amd-gfx@lists.freedesktop.org,
linux-graphics-maintainer@vmware.com,
Ben Skeggs <bskeggs@redhat.com>,
spice-devel@lists.freedesktop.org
Subject: Re: [PATCH v6 08/17] drm/ttm: use gem vma_node
Date: Wed, 14 Aug 2019 11:35:24 +0200 [thread overview]
Message-ID: <20190814093524.GA31345@ulmo> (raw)
In-Reply-To: <20190814055827.6hrxj6daovxxnnvw@sirius.home.kraxel.org>
[-- Attachment #1.1.1: Type: text/plain, Size: 1801 bytes --]
On Wed, Aug 14, 2019 at 07:58:27AM +0200, Gerd Hoffmann wrote:
> > Hi Gerd,
> >
> > I've been seeing a regression on Nouveau with recent linux-next releases
> > and git bisect points at this commit as the first bad one. If I revert
> > it (there's a tiny conflict with a patch that was merged subsequently),
> > things are back to normal.
> >
> > I think the reason for this issue is that Nouveau doesn't use GEM
> > objects for all buffer objects,
>
> That shouldn't be a problem ...
>
> > and even when it uses GEM objects, the
> > code will not initialize the GEM object until after the buffer objects
> > and the backing TTM objects have been created.
>
> ... but the initialization order is.
>
> ttm_bo_uses_embedded_gem_object() assumes gem gets initialized first.
>
> drm_gem_object_init() calling drm_vma_node_reset() again is
> probably the root cause for the breakage.
>
> > I tried to fix that by making sure drm_gem_object_init() gets called by
> > Nouveau before ttm_bo_init(), but the changes are fairly involved and I
> > was unable to get the GEM reference counting right. I can look into the
> > proper fix some more, but it might be worth reverting this patch for
> > now to get Nouveau working again.
>
> Changing the order doesn't look hard. Patch attached (untested, have no
> test hardware). But maybe I missed some detail ...
>
> The other patch attached works around the issue with a flag, to avoid
> drm_vma_node_reset() being called twice.
I came up with something very similar by splitting up nouveau_bo_new()
into allocation and initialization steps, so that when necessary the GEM
object can be initialized in between. I think that's slightly more
flexible and easier to understand than a boolean flag.
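For reference, here is a rough sketch of the old and new ordering in
nouveau_gem_new() (simplified from the patch below; exact arguments and
error handling omitted):
  /* Old order, per the discussion above: ttm_bo_init() runs inside
   * nouveau_bo_new() first, then drm_gem_object_init() calls
   * drm_vma_node_reset() again on the vma_node TTM is already using. */
  nouveau_bo_new(cli, size, align, flags, tile_mode, tile_flags,
                 NULL, NULL, &nvbo);
  drm_gem_object_init(drm->dev, &nvbo->bo.base, nvbo->bo.mem.size);
  /* New order: allocate the nouveau_bo, initialize the embedded GEM
   * object, and only then create the backing TTM object. */
  nvbo = nouveau_bo_alloc(cli, size, flags, tile_mode, tile_flags);
  drm_gem_object_init(drm->dev, &nvbo->bo.base, size);
  nouveau_bo_init(nvbo, size, align, flags, NULL, NULL);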
Thierry
[-- Attachment #1.1.2: 0001-drm-nouveau-Initialize-GEM-object-before-TTM-object.patch --]
[-- Type: text/plain, Size: 9055 bytes --]
From a1130a6affcb7c00133e89f3e498cb6757f5bb51 Mon Sep 17 00:00:00 2001
From: Thierry Reding <treding@nvidia.com>
Date: Wed, 14 Aug 2019 11:00:48 +0200
Subject: [PATCH] drm/nouveau: Initialize GEM object before TTM object
TTM assumes that drivers initialize the embedded GEM object before
calling the ttm_bo_init() function. This is not currently the case
in the Nouveau driver. Fix this by splitting up nouveau_bo_new()
into nouveau_bo_alloc() and nouveau_bo_init() so that the GEM object
can be initialized before TTM BO initialization when necessary.
Fixes: b96f3e7c8069 ("drm/ttm: use gem vma_node")
Signed-off-by: Thierry Reding <treding@nvidia.com>
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 69 ++++++++++++++++---------
drivers/gpu/drm/nouveau/nouveau_bo.h | 4 ++
drivers/gpu/drm/nouveau/nouveau_gem.c | 29 ++++++-----
drivers/gpu/drm/nouveau/nouveau_prime.c | 16 ++++--
4 files changed, 77 insertions(+), 41 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
index 99e391be9370..b3d3e07de1af 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
@@ -185,31 +185,24 @@ nouveau_bo_fixup_align(struct nouveau_bo *nvbo, u32 flags,
*size = roundup_64(*size, PAGE_SIZE);
}
-int
-nouveau_bo_new(struct nouveau_cli *cli, u64 size, int align,
- uint32_t flags, uint32_t tile_mode, uint32_t tile_flags,
- struct sg_table *sg, struct reservation_object *robj,
- struct nouveau_bo **pnvbo)
+struct nouveau_bo *
+nouveau_bo_alloc(struct nouveau_cli *cli, u64 size, u32 flags, u32 tile_mode,
+ u32 tile_flags)
{
struct nouveau_drm *drm = cli->drm;
struct nouveau_bo *nvbo;
struct nvif_mmu *mmu = &cli->mmu;
struct nvif_vmm *vmm = cli->svm.cli ? &cli->svm.vmm : &cli->vmm.vmm;
- size_t acc_size;
- int type = ttm_bo_type_device;
- int ret, i, pi = -1;
+ int i, pi = -1;
if (!size) {
NV_WARN(drm, "skipped size %016llx\n", size);
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
}
- if (sg)
- type = ttm_bo_type_sg;
-
nvbo = kzalloc(sizeof(struct nouveau_bo), GFP_KERNEL);
if (!nvbo)
- return -ENOMEM;
+ return ERR_PTR(-ENOMEM);
INIT_LIST_HEAD(&nvbo->head);
INIT_LIST_HEAD(&nvbo->entry);
INIT_LIST_HEAD(&nvbo->vma_list);
@@ -231,7 +224,7 @@ nouveau_bo_new(struct nouveau_cli *cli, u64 size, int align,
nvbo->kind = (tile_flags & 0x0000ff00) >> 8;
if (!nvif_mmu_kind_valid(mmu, nvbo->kind)) {
kfree(nvbo);
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
}
nvbo->comp = mmu->kind[nvbo->kind] != nvbo->kind;
@@ -241,7 +234,7 @@ nouveau_bo_new(struct nouveau_cli *cli, u64 size, int align,
nvbo->comp = (tile_flags & 0x00030000) >> 16;
if (!nvif_mmu_kind_valid(mmu, nvbo->kind)) {
kfree(nvbo);
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
}
} else {
nvbo->zeta = (tile_flags & 0x00000007);
@@ -278,7 +271,7 @@ nouveau_bo_new(struct nouveau_cli *cli, u64 size, int align,
}
if (WARN_ON(pi < 0))
- return -EINVAL;
+ return ERR_PTR(-EINVAL);
/* Disable compression if suitable settings couldn't be found. */
if (nvbo->comp && !vmm->page[pi].comp) {
@@ -288,23 +281,51 @@ nouveau_bo_new(struct nouveau_cli *cli, u64 size, int align,
}
nvbo->page = vmm->page[pi].shift;
+ return nvbo;
+}
+
+int
+nouveau_bo_init(struct nouveau_bo *nvbo, u64 size, int align, u32 flags,
+ struct sg_table *sg, struct reservation_object *robj)
+{
+ int type = sg ? ttm_bo_type_sg : ttm_bo_type_device;
+ size_t acc_size;
+ int ret;
+
+ acc_size = ttm_bo_dma_acc_size(nvbo->bo.bdev, size, sizeof(*nvbo));
+
nouveau_bo_fixup_align(nvbo, flags, &align, &size);
nvbo->bo.mem.num_pages = size >> PAGE_SHIFT;
nouveau_bo_placement_set(nvbo, flags, 0);
- acc_size = ttm_bo_dma_acc_size(&drm->ttm.bdev, size,
- sizeof(struct nouveau_bo));
-
- ret = ttm_bo_init(&drm->ttm.bdev, &nvbo->bo, size,
- type, &nvbo->placement,
- align >> PAGE_SHIFT, false, acc_size, sg,
- robj, nouveau_bo_del_ttm);
-
+ ret = ttm_bo_init(nvbo->bo.bdev, &nvbo->bo, size, type,
+ &nvbo->placement, align >> PAGE_SHIFT, false,
+ acc_size, sg, robj, nouveau_bo_del_ttm);
if (ret) {
/* ttm will call nouveau_bo_del_ttm if it fails.. */
return ret;
}
+ return 0;
+}
+
+int
+nouveau_bo_new(struct nouveau_cli *cli, u64 size, int align,
+ uint32_t flags, uint32_t tile_mode, uint32_t tile_flags,
+ struct sg_table *sg, struct reservation_object *robj,
+ struct nouveau_bo **pnvbo)
+{
+ struct nouveau_bo *nvbo;
+ int ret;
+
+ nvbo = nouveau_bo_alloc(cli, size, flags, tile_mode, tile_flags);
+ if (IS_ERR(nvbo))
+ return PTR_ERR(nvbo);
+
+ ret = nouveau_bo_init(nvbo, size, align, flags, sg, robj);
+ if (ret)
+ return ret;
+
*pnvbo = nvbo;
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index d675efe8e7f9..7529035b971f 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -71,6 +71,10 @@ nouveau_bo_ref(struct nouveau_bo *ref, struct nouveau_bo **pnvbo)
extern struct ttm_bo_driver nouveau_bo_driver;
void nouveau_bo_move_init(struct nouveau_drm *);
+struct nouveau_bo *nouveau_bo_alloc(struct nouveau_cli *, u64 size, u32 flags,
+ u32 tile_mode, u32 tile_flags);
+int nouveau_bo_init(struct nouveau_bo *, u64 size, int align, u32 flags,
+ struct sg_table *sg, struct reservation_object *robj);
int nouveau_bo_new(struct nouveau_cli *, u64 size, int align, u32 flags,
u32 tile_mode, u32 tile_flags, struct sg_table *sg,
struct reservation_object *robj,
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index c7368aa0bdec..e9c772e07789 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -188,11 +188,23 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain,
if (domain & NOUVEAU_GEM_DOMAIN_COHERENT)
flags |= TTM_PL_FLAG_UNCACHED;
- ret = nouveau_bo_new(cli, size, align, flags, tile_mode,
- tile_flags, NULL, NULL, pnvbo);
- if (ret)
+ nvbo = nouveau_bo_alloc(cli, size, flags, tile_mode, tile_flags);
+ if (IS_ERR(nvbo))
+ return PTR_ERR(nvbo);
+
+ /* Initialize the embedded gem-object. We return a single gem-reference
+ * to the caller, instead of a normal nouveau_bo ttm reference. */
+ ret = drm_gem_object_init(drm->dev, &nvbo->bo.base, size);
+ if (ret) {
+ nouveau_bo_ref(NULL, &nvbo);
+ return ret;
+ }
+
+ ret = nouveau_bo_init(nvbo, size, align, flags, NULL, NULL);
+ if (ret) {
+ nouveau_bo_ref(NULL, &nvbo);
return ret;
- nvbo = *pnvbo;
+ }
/* we restrict allowed domains on nv50+ to only the types
* that were requested at creation time. not possibly on
@@ -203,15 +215,8 @@ nouveau_gem_new(struct nouveau_cli *cli, u64 size, int align, uint32_t domain,
if (drm->client.device.info.family >= NV_DEVICE_INFO_V0_TESLA)
nvbo->valid_domains &= domain;
- /* Initialize the embedded gem-object. We return a single gem-reference
- * to the caller, instead of a normal nouveau_bo ttm reference. */
- ret = drm_gem_object_init(drm->dev, &nvbo->bo.base, nvbo->bo.mem.size);
- if (ret) {
- nouveau_bo_ref(NULL, pnvbo);
- return -ENOMEM;
- }
-
nvbo->bo.persistent_swap_storage = nvbo->bo.base.filp;
+ *pnvbo = nvbo;
return 0;
}
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index e86ad7ae622b..0ca71a84e23a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -63,28 +63,34 @@ struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
struct nouveau_drm *drm = nouveau_drm(dev);
struct nouveau_bo *nvbo;
struct reservation_object *robj = attach->dmabuf->resv;
+ size_t size = attach->dmabuf->size;
u32 flags = 0;
int ret;
flags = TTM_PL_FLAG_TT;
reservation_object_lock(robj, NULL);
- ret = nouveau_bo_new(&drm->client, attach->dmabuf->size, 0, flags, 0, 0,
- sg, robj, &nvbo);
+ nvbo = nouveau_bo_alloc(&drm->client, size, flags, 0, 0);
reservation_object_unlock(robj);
- if (ret)
- return ERR_PTR(ret);
+ if (IS_ERR(nvbo))
+ return ERR_CAST(nvbo);
nvbo->valid_domains = NOUVEAU_GEM_DOMAIN_GART;
/* Initialize the embedded gem-object. We return a single gem-reference
* to the caller, instead of a normal nouveau_bo ttm reference. */
- ret = drm_gem_object_init(dev, &nvbo->bo.base, nvbo->bo.mem.size);
+ ret = drm_gem_object_init(dev, &nvbo->bo.base, size);
if (ret) {
nouveau_bo_ref(NULL, &nvbo);
return ERR_PTR(-ENOMEM);
}
+ ret = nouveau_bo_init(nvbo, size, 0, flags, sg, robj);
+ if (ret) {
+ nouveau_bo_ref(NULL, &nvbo);
+ return ERR_PTR(ret);
+ }
+
return &nvbo->bo.base;
}
--
2.22.0
[-- Attachment #1.2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]