public inbox for intel-xe@lists.freedesktop.org
From: "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
To: Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 1/3] drm/xe/oa: Use xe_map layer
Date: Wed, 08 Apr 2026 08:32:50 -0700	[thread overview]
Message-ID: <87o6jtzcv1.wl-ashutosh.dixit@intel.com> (raw)
In-Reply-To: <adWJ94RCOcgtFpTk@soc-5CG1426VCC.clients.intel.com>

On Tue, 07 Apr 2026 15:49:27 -0700, Umesh Nerlige Ramappa wrote:
>

Hi Umesh,

> On Mon, Apr 06, 2026 at 08:02:17PM -0700, Ashutosh Dixit wrote:
> > @@ -311,30 +317,37 @@ static enum hrtimer_restart xe_oa_poll_check_timer_cb(struct hrtimer *hrtimer)
> >	return HRTIMER_RESTART;
> > }
> >
> > +static inline unsigned long
> > +xe_oa_copy_to_user(void __user *dst, const struct iosys_map *src, size_t src_offset, size_t len)
> > +{
> > +	if (src->is_iomem)
> > +		return copy_to_user(dst, src->vaddr_iomem + src_offset, len);
>
> I think this requires xe_map_memcpy_from for the iomem case.
>
> > +	else
> > +		return copy_to_user(dst, src->vaddr + src_offset, len);
> > +}
> > +

Yes, you have certainly spotted the iffy part of this patch. However, the
situation is not that simple:

* xe_map_memcpy_from or iosys_map_memcpy_from copies from iomem to kernel
  memory (potentially using special accessors for iomem), not to user
  memory

* copy_to_user copies from kernel memory to user memory (using special
  checks), not from iomem

So the kernel has no functions which copy directly from iomem to user
memory. That is, functions like iosys_map_copy_to_user() or
copy_to_user_from_iomem() do not exist, because they would need to combine
iomem access with copy_to_user (copy_to_user would likely need to be
rewritten with iomem accessors if such a function were to be
implemented). There may be a reason why such functions don't exist in the
kernel.

Another way might be to copy from iomem to kernel memory and then from
kernel to user memory (that is, use both iosys_map_memcpy_from and
copy_to_user), but this would entail an extra copy through a bounce buffer
in kernel memory. Something like this:

static inline unsigned long
xe_oa_copy_to_user(void __user *dst, const struct iosys_map *src, size_t src_offset, size_t len)
{
	unsigned long ret;
	void *tmp = kmalloc(len, GFP_KERNEL); // kmalloc or statically allocated tmp buffer

	if (!tmp)
		return len; // report that nothing was copied

	xe_map_memcpy_from(xe, tmp, src, src_offset, len); // extra copy

	ret = copy_to_user(dst, tmp, len);
	kfree(tmp); // don't leak the bounce buffer

	return ret;
}

In our case, xe_map_memcpy_from or iosys_map_memcpy_from is basically a
memcpy (because the vram bar is ioremapped, special accessors for iomem are
not needed). Therefore, to avoid the extra copy, I decided to drop the
iomem access part (since the copy_to_user part cannot be dropped), which is
how I ended up with the xe_oa_copy_to_user() above. The function was tested
on BMG with the OA buffer in vram and works as expected.
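For reference, the is_iomem dispatch I am relying on can be modeled in
plain userspace C as below. This is only an illustrative sketch, not the
real API: fake_iosys_map and fake_map_memcpy_from are made-up names, and
both branches are plain memcpy here, mirroring the case above where the
ioremapped vram bar needs no special iomem accessors (in the kernel the
iomem branch would use memcpy_fromio()-style accessors instead):

```c
#include <stddef.h>
#include <string.h>

/* Simplified model of struct iosys_map: one pointer that is either
 * "system memory" or "iomem", selected by is_iomem. */
struct fake_iosys_map {
	union {
		void *vaddr_iomem;
		void *vaddr;
	};
	int is_iomem;
};

/* Model of iosys_map_memcpy_from(): copy len bytes starting at
 * src_offset within the map into dst, dispatching on is_iomem.
 * Here both arms are plain memcpy; in the kernel the iomem arm
 * would go through memcpy_fromio(). */
static void fake_map_memcpy_from(void *dst, const struct fake_iosys_map *src,
				 size_t src_offset, size_t len)
{
	if (src->is_iomem)
		memcpy(dst, (char *)src->vaddr_iomem + src_offset, len);
	else
		memcpy(dst, (char *)src->vaddr + src_offset, len);
}
```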

I did consider renaming the above function to xe_map_copy_to_user() and
moving it to xe_map.h, so that is possible. But since OA is the only user
of xe_map_copy_to_user() at this point (and no other users are expected,
because this workaround is OA-specific and the other observation stream
buffers are in system memory, not vram), I left the function specific to
OA.

Let me know what you think. Thanks.


Thread overview: 15+ messages
2026-04-07  3:02 [PATCH 0/3] drm/xe/oa: Wa_14026633728 Ashutosh Dixit
2026-04-07  3:02 ` [PATCH 1/3] drm/xe/oa: Use xe_map layer Ashutosh Dixit
2026-04-07 22:49   ` Umesh Nerlige Ramappa
2026-04-08 15:32     ` Dixit, Ashutosh [this message]
2026-04-08 18:21       ` Umesh Nerlige Ramappa
2026-04-07  3:02 ` [PATCH 2/3] drm/xe/oa: Use drm_gem_mmap_obj for OA buffer mmap Ashutosh Dixit
2026-04-07 22:52   ` Umesh Nerlige Ramappa
2026-04-07  3:02 ` [PATCH 3/3] drm/xe/oa: Implement Wa_14026633728 Ashutosh Dixit
2026-04-07 23:17   ` Umesh Nerlige Ramappa
2026-04-07  3:10 ` ✓ CI.KUnit: success for drm/xe/oa: Wa_14026633728 Patchwork
2026-04-07  3:51 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-07  5:02 ` ✓ Xe.CI.FULL: " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2026-04-09 23:17 [PATCH v2 0/3] " Ashutosh Dixit
2026-04-09 23:17 ` [PATCH 1/3] drm/xe/oa: Use xe_map layer Ashutosh Dixit
2026-04-11  0:49   ` Dixit, Ashutosh
2026-04-11  0:48 [PATCH v3 0/3] drm/xe/oa: Wa_14026633728 Ashutosh Dixit
2026-04-11  0:48 ` [PATCH 1/3] drm/xe/oa: Use xe_map layer Ashutosh Dixit
