From: "Nilawar, Badal" <badal.nilawar@intel.com>
To: Matthew Auld <matthew.auld@intel.com>, <intel-xe@lists.freedesktop.org>
Cc: rodrigo.vivi@intel.com
Subject: Re: [PATCH 1/2] drm/xe/dgfx: Block rpm for active mmap mappings
Date: Fri, 8 Dec 2023 12:47:56 +0530
Message-ID: <23b21834-61e1-4926-9e1b-8b233771e1b5@intel.com>
In-Reply-To: <c3e2e04f-7cb3-4d68-b19a-5c7271a0c626@intel.com>
On 07-12-2023 16:56, Matthew Auld wrote:
> On 06/12/2023 13:34, Badal Nilawar wrote:
>> Block rpm for discrete cards when mmap mappings are active.
>> Ideally the rpm wake ref should be taken in the vm_open call and put in
>> the vm_close call, but it is seen that vm_open doesn't get called for
>> xe_gem_vm_ops. Therefore the rpm wake ref is taken in xe_drm_gem_ttm_mmap
>> and put in vm_close.
>>
>> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
>> Cc: Anshuman Gupta <anshuman.gupta@intel.com>
>> Signed-off-by: Badal Nilawar <badal.nilawar@intel.com>
>> ---
>> drivers/gpu/drm/xe/xe_bo.c | 35 +++++++++++++++++++++++++++++++++--
>> 1 file changed, 33 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
>> index 72dc4a4eed4e..5741948a2a51 100644
>> --- a/drivers/gpu/drm/xe/xe_bo.c
>> +++ b/drivers/gpu/drm/xe/xe_bo.c
>> @@ -15,6 +15,7 @@
>> #include <drm/ttm/ttm_tt.h>
>> #include <drm/xe_drm.h>
>> +#include "i915_drv.h"
>
> Do we need this?
This is needed in patch 2; I will remove it from this patch.
>
>> #include "xe_device.h"
>> #include "xe_dma_buf.h"
>> #include "xe_drm_client.h"
>> @@ -1158,17 +1159,47 @@ static vm_fault_t xe_gem_fault(struct vm_fault *vmf)
>> return ret;
>> }
>> +static void xe_ttm_bo_vm_close(struct vm_area_struct *vma)
>> +{
>> + struct ttm_buffer_object *tbo = vma->vm_private_data;
>> + struct drm_device *ddev = tbo->base.dev;
>> + struct xe_device *xe = to_xe_device(ddev);
>> +
>> + ttm_bo_vm_close(vma);
>> +
>> + if (tbo->resource->bus.is_iomem)
>> + xe_device_mem_access_put(xe);
>> +}
>> +
>> static const struct vm_operations_struct xe_gem_vm_ops = {
>> .fault = xe_gem_fault,
>> .open = ttm_bo_vm_open,
>> - .close = ttm_bo_vm_close,
>> + .close = xe_ttm_bo_vm_close,
>> .access = ttm_bo_vm_access
>> };
>> +int xe_drm_gem_ttm_mmap(struct drm_gem_object *gem,
>> + struct vm_area_struct *vma)
>> +{
>> + struct ttm_buffer_object *tbo = drm_gem_ttm_of_gem(gem);
>> + struct drm_device *ddev = tbo->base.dev;
>> + struct xe_device *xe = to_xe_device(ddev);
>> + int ret;
>> +
>> + ret = drm_gem_ttm_mmap(gem, vma);
>> + if (ret < 0)
>> + return ret;
>> +
>> + if (tbo->resource->bus.is_iomem)
>> + xe_device_mem_access_get(xe);
>
> Checking is_iomem outside of the usual locking is racy. One issue here
> is that is_iomem can freely change at any point (like at fault time), so
> when vm_close is called you can easily get an unbalanced RPM ref
> count. For example, is_iomem is false here but later becomes true in
> bo_vm_close, and then we call mem_access_put even though we never called
> mem_access_get.
>
> Maybe check the possible placements of the object instead since that is
> immutable?
Sure, will check the bo placements:

    struct ttm_place *place = &(bo->placements[0]);
    if (mem_type_is_vram(place->mem_type))

Or can we check the tbo resource mem_type?

    if (mem_type_is_vram(tbo->resource->mem_type))
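
For reference, a rough sketch of how the placement-based variant could look
(assuming the ttm_to_xe_bo() and mem_type_is_vram() helpers from xe_bo.[ch];
the xe_bo_vram_placed() name is illustrative only, not an existing function):

static bool xe_bo_vram_placed(struct ttm_buffer_object *tbo)
{
	struct xe_bo *bo = ttm_to_xe_bo(tbo);

	/*
	 * Placements are fixed at BO creation, unlike resource->bus.is_iomem,
	 * so the get/put pair stays balanced even if the BO migrates between
	 * mmap time and vm_close. Only the first (preferred) placement is
	 * checked here for brevity.
	 */
	return mem_type_is_vram(bo->placements[0].mem_type);
}

int xe_drm_gem_ttm_mmap(struct drm_gem_object *gem,
			struct vm_area_struct *vma)
{
	struct ttm_buffer_object *tbo = drm_gem_ttm_of_gem(gem);
	struct xe_device *xe = to_xe_device(tbo->base.dev);
	int ret;

	ret = drm_gem_ttm_mmap(gem, vma);
	if (ret < 0)
		return ret;

	/* Take the rpm wake ref based on the immutable placement */
	if (xe_bo_vram_placed(tbo))
		xe_device_mem_access_get(xe);

	return 0;
}

static void xe_ttm_bo_vm_close(struct vm_area_struct *vma)
{
	struct ttm_buffer_object *tbo = vma->vm_private_data;
	struct xe_device *xe = to_xe_device(tbo->base.dev);

	ttm_bo_vm_close(vma);

	/* Same immutable check, so the put always matches the get */
	if (xe_bo_vram_placed(tbo))
		xe_device_mem_access_put(xe);
}
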
Regards,
Badal
>
>> +
>> + return 0;
>> +}
>> +
>> static const struct drm_gem_object_funcs xe_gem_object_funcs = {
>> .free = xe_gem_object_free,
>> .close = xe_gem_object_close,
>> - .mmap = drm_gem_ttm_mmap,
>> + .mmap = xe_drm_gem_ttm_mmap,
>> .export = xe_gem_prime_export,
>> .vm_ops = &xe_gem_vm_ops,
>> };