From: Rob Clark <robdclark@gmail.com>
To: dri-devel@lists.freedesktop.org
Cc: Rob Clark <robdclark@chromium.org>,
Fabio Estevam <festevam@gmail.com>,
Rob Clark <robdclark@gmail.com>, Sean Paul <sean@poorly.run>,
David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
linux-arm-msm@vger.kernel.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
freedreno@lists.freedesktop.org (open list:DRM DRIVER FOR MSM ADRENO GPU),
linux-kernel@vger.kernel.org (open list)
Subject: [PATCH] drm/msm: Use the correct dma_sync calls harder
Date: Wed, 4 Sep 2019 10:17:23 -0700 [thread overview]
Message-ID: <20190904171723.2956-1-robdclark@gmail.com> (raw)
From: Rob Clark <robdclark@chromium.org>
It looks like the dma_sync calls don't do what we want on armv7 either.
This fixes the following oops:
Unable to handle kernel paging request at virtual address 50001000
pgd = (ptrval)
[50001000] *pgd=00000000
Internal error: Oops: 805 [#1] SMP ARM
Modules linked in:
CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.3.0-rc6-00271-g9f159ae07f07 #4
Hardware name: Freescale i.MX53 (Device Tree Support)
PC is at v7_dma_clean_range+0x20/0x38
LR is at __dma_page_cpu_to_dev+0x28/0x90
pc : [<c011c76c>] lr : [<c01181c4>] psr: 20000013
sp : d80b5a88 ip : de96c000 fp : d840ce6c
r10: 00000000 r9 : 00000001 r8 : d843e010
r7 : 00000000 r6 : 00008000 r5 : ddb6c000 r4 : 00000000
r3 : 0000003f r2 : 00000040 r1 : 50008000 r0 : 50001000
Flags: nzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none
Control: 10c5387d Table: 70004019 DAC: 00000051
Process swapper/0 (pid: 1, stack limit = 0x(ptrval))
Fixes: 3de433c5b38a ("drm/msm: Use the correct dma_sync calls in msm_gem")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Fabio Estevam <festevam@gmail.com>
---
drivers/gpu/drm/msm/msm_gem.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 7263f4373f07..5a6a79fbc9d6 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -52,7 +52,7 @@ static void sync_for_device(struct msm_gem_object *msm_obj)
 {
 	struct device *dev = msm_obj->base.dev->dev;
 
-	if (get_dma_ops(dev)) {
+	if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
 		dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
 				msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 	} else {
@@ -65,7 +65,7 @@ static void sync_for_cpu(struct msm_gem_object *msm_obj)
 {
 	struct device *dev = msm_obj->base.dev->dev;
 
-	if (get_dma_ops(dev)) {
+	if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
 		dma_sync_sg_for_cpu(dev, msm_obj->sgt->sgl,
 				msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
 	} else {
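
For reference, here is a minimal sketch of how sync_for_device() is expected
to read with this patch applied. Only the condition is changed by the diff
above; the else-branch fallback (dma_map_sg() for cache maintenance) is
assumed from the code introduced in 3de433c5b38a and is shown for
illustration only:

static void sync_for_device(struct msm_gem_object *msm_obj)
{
	struct device *dev = msm_obj->base.dev->dev;

	if (get_dma_ops(dev) && IS_ENABLED(CONFIG_ARM64)) {
		/* dma_ops present *and* arm64: the sync calls do what we want */
		dma_sync_sg_for_device(dev, msm_obj->sgt->sgl,
				msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
	} else {
		/* armv7 (or no dma_ops): fall back to the map path (assumed
		 * from 3de433c5b38a) to get the cache maintenance we need */
		dma_map_sg(dev, msm_obj->sgt->sgl,
				msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
	}
}

sync_for_cpu() follows the same pattern, using dma_sync_sg_for_cpu() and,
presumably, dma_unmap_sg() on the fallback path.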
--
2.21.0