From: "Das, Nirmoy" <nirmoy.das@linux.intel.com>
To: Matthew Auld <matthew.auld@intel.com>,
Nirmoy Das <nirmoy.das@intel.com>,
intel-gfx@lists.freedesktop.org
Cc: chris.p.wilson@intel.com, andrzej.hajda@intel.com
Subject: Re: [Intel-gfx] [PATCH] drm/i915: Set correct domains values at _i915_vma_move_to_active
Date: Thu, 8 Sep 2022 12:15:50 +0200 [thread overview]
Message-ID: <0b069343-58ec-f48b-a33c-d42d9479c1dc@linux.intel.com> (raw)
In-Reply-To: <52670f86-b4e8-199f-fc56-1a595330e347@intel.com>
On 9/8/2022 12:13 PM, Matthew Auld wrote:
> On 08/09/2022 10:46, Das, Nirmoy wrote:
>>
>> On 9/8/2022 11:40 AM, Matthew Auld wrote:
>>> On 07/09/2022 18:26, Nirmoy Das wrote:
>>>> Fix a regression introduced by commit:
>>>> "drm/i915: Individualize fences before adding to dma_resv obj"
>>>> which set obj->read_domains to 0 for both the read and write paths.
>>>> Also set obj->write_domain to 0 on the read path, which the commit
>>>> removed.
>>>>
>>>> References: https://gitlab.freedesktop.org/drm/intel/-/issues/6639
>>>> Fixes: 842d9346b2fd ("drm/i915: Individualize fences before adding
>>>> to dma_resv obj")
>>>> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
>>>> Cc: <stable@vger.kernel.org> # v5.16+
>>>> Cc: Matthew Auld <matthew.auld@intel.com>
>>>> Cc: Andrzej Hajda <andrzej.hajda@intel.com>
>>>
>>> Should I go ahead and push this?
>>
>>
>> Yes, please go ahead. Lots of people are affected by this
>> regression.
>
> Pushed with:
> Fixes: 420a07b841d0 ("drm/i915: Individualize fences before adding to
> dma_resv obj")
>
> Otherwise dim complains it seems.
Thanks, Matt!
Nirmoy
>
>>
>>
>> Nirmoy
>>
>>>
>>>> ---
>>>> drivers/gpu/drm/i915/i915_vma.c | 3 ++-
>>>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
>>>> index 260371716490..373582cfd8f3 100644
>>>> --- a/drivers/gpu/drm/i915/i915_vma.c
>>>> +++ b/drivers/gpu/drm/i915/i915_vma.c
>>>> @@ -1882,12 +1882,13 @@ int _i915_vma_move_to_active(struct i915_vma *vma,
>>>> enum dma_resv_usage usage;
>>>> int idx;
>>>> - obj->read_domains = 0;
>>>> if (flags & EXEC_OBJECT_WRITE) {
>>>> usage = DMA_RESV_USAGE_WRITE;
>>>> obj->write_domain = I915_GEM_DOMAIN_RENDER;
>>>> + obj->read_domains = 0;
>>>> } else {
>>>> usage = DMA_RESV_USAGE_READ;
>>>> + obj->write_domain = 0;
>>>> }
>>>> dma_fence_array_for_each(curr, idx, fence)
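For readers skimming the thread, here is a small standalone sketch of the domain bookkeeping the patch restores in _i915_vma_move_to_active(). This is not the kernel source: the struct and the EXEC_OBJECT_WRITE / I915_GEM_DOMAIN_RENDER values are stand-ins assumed to match the i915 uapi.

    #include <stdio.h>

    /* Stand-in constants; assumed to match include/uapi/drm/i915_drm.h. */
    #define EXEC_OBJECT_WRITE       (1u << 2)
    #define I915_GEM_DOMAIN_RENDER  0x00000002u

    /* Minimal stand-in for the GEM object's domain-tracking fields. */
    struct sketch_obj {
            unsigned int read_domains;
            unsigned int write_domain;
    };

    /*
     * Mirrors the fixed branch logic: a GPU write marks the render domain
     * as the write domain and clears the read domains; a GPU read leaves
     * read_domains alone but clears any stale write domain.
     */
    static void move_to_active_domains(struct sketch_obj *obj, unsigned int flags)
    {
            if (flags & EXEC_OBJECT_WRITE) {
                    obj->write_domain = I915_GEM_DOMAIN_RENDER;
                    obj->read_domains = 0;
            } else {
                    obj->write_domain = 0;
            }
    }

    int main(void)
    {
            struct sketch_obj obj = { .read_domains = 0xffu, .write_domain = 0xffu };

            move_to_active_domains(&obj, EXEC_OBJECT_WRITE);
            printf("write path: read_domains=%#x write_domain=%#x\n",
                   obj.read_domains, obj.write_domain);

            obj = (struct sketch_obj){ .read_domains = 0xffu, .write_domain = 0xffu };
            move_to_active_domains(&obj, 0);
            printf("read path:  read_domains=%#x write_domain=%#x\n",
                   obj.read_domains, obj.write_domain);

            return 0;
    }

Before the patch, read_domains was zeroed on both paths and write_domain was left stale on the read path; the sketch prints the corrected state for each path.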
Thread overview: 9+ messages
2022-09-07 17:26 [Intel-gfx] [PATCH] drm/i915: Set correct domains values at _i915_vma_move_to_active Nirmoy Das
2022-09-07 18:36 ` [Intel-gfx] ✓ Fi.CI.BAT: success for " Patchwork
2022-09-07 19:57 ` [Intel-gfx] [PATCH] " Andrzej Hajda
2022-09-08 1:01 ` [Intel-gfx] ✗ Fi.CI.IGT: failure for " Patchwork
2022-09-08 9:55 ` Das, Nirmoy
2022-09-08 9:40 ` [Intel-gfx] [PATCH] " Matthew Auld
2022-09-08 9:46 ` Das, Nirmoy
2022-09-08 10:13 ` Matthew Auld
2022-09-08 10:15 ` Das, Nirmoy [this message]