Intel-XE Archive on lore.kernel.org
From: "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
To: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: <intel-xe@lists.freedesktop.org>,
	Umesh Nerlige Ramappa <umesh.nerlige.ramappa@intel.com>
Subject: Re: [PATCH 2/2] drm/xe/oa: Fix locking for stream->pollin
Date: Tue, 21 Jan 2025 20:04:52 -0800	[thread overview]
Message-ID: <85bjvzlcij.wl-ashutosh.dixit@intel.com> (raw)
In-Reply-To: <Z5ArZ_qOzM7OosC6@intel.com>

On Tue, 21 Jan 2025 15:19:03 -0800, Rodrigo Vivi wrote:
>

Hi Rodrigo,

> On Tue, Jan 21, 2025 at 10:03:54AM -0800, Ashutosh Dixit wrote:
> > Previously locking was not implemented for stream->pollin. Now
> > stream->pollin should be accessed under stream->oa_buffer.ptr_lock.
>
> This commit message fails to explain why. Why was it not needed
> before, and why is it needed now?

I've sent a v2 and explained this in the v2 commit message.

> Also, please make sure that the lock really makes sense and
> we are not just increasing the scope of a lock that was
> designed for something else...

It is increasing the scope of a previous lock, but to me it looks ok after
the change introduced in Patch 1 of this series. IMO it is better to
increase the scope of one existing lock rather than introduce yet another
lock. I have also updated the comment for the lock to reflect this change
in v2.

> and that the locking guidelines are followed... [1]
>
> [1] - https://blog.ffwll.ch/2022/07/locking-engineering.html

Locking guidelines are followed: it is just a spinlock protecting data
against concurrent modification. I've also checked that there are no ABBA
lock inversion issues: whenever both locks are taken, they are acquired in
the same order, stream->stream_lock first, followed by
stream->oa_buffer.ptr_lock.

Thanks.
--
Ashutosh



> >
> > Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_oa.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
> > index fa873f3d0a9d1..9de62ce4b9e42 100644
> > --- a/drivers/gpu/drm/xe/xe_oa.c
> > +++ b/drivers/gpu/drm/xe/xe_oa.c
> > @@ -530,6 +530,7 @@ static ssize_t xe_oa_read(struct file *file, char __user *buf,
> >			  size_t count, loff_t *ppos)
> >  {
> >	struct xe_oa_stream *stream = file->private_data;
> > +	unsigned long flags;
> >	size_t offset = 0;
> >	int ret;
> >
> > @@ -562,8 +563,10 @@ static ssize_t xe_oa_read(struct file *file, char __user *buf,
> >	 * Also in case of -EIO, we have already waited for data before returning
> >	 * -EIO, so need to wait again
> >	 */
> > +	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);
> >	if (ret != -ENOSPC && ret != -EIO)
> >		stream->pollin = false;
> > +	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
> >
> >	/* Possible values for ret are 0, -EFAULT, -ENOSPC, -EIO, -EINVAL, ... */
> >	return offset ?: (ret ?: -EAGAIN);
> > @@ -573,6 +576,7 @@ static __poll_t xe_oa_poll_locked(struct xe_oa_stream *stream,
> >				  struct file *file, poll_table *wait)
> >  {
> >	__poll_t events = 0;
> > +	unsigned long flags;
> >
> >	poll_wait(file, &stream->poll_wq, wait);
> >
> > @@ -582,8 +586,10 @@ static __poll_t xe_oa_poll_locked(struct xe_oa_stream *stream,
> >	 * in use. We rely on hrtimer xe_oa_poll_check_timer_cb to notify us when there
> >	 * are samples to read
> >	 */
> > +	spin_lock_irqsave(&stream->oa_buffer.ptr_lock, flags);
> >	if (stream->pollin)
> >		events |= EPOLLIN;
> > +	spin_unlock_irqrestore(&stream->oa_buffer.ptr_lock, flags);
> >
> >	return events;
> >  }
> > --
> > 2.47.1
> >


Thread overview: 13+ messages
2025-01-21 18:03 [PATCH 0/2] stream->pollin fixes Ashutosh Dixit
2025-01-21 18:03 ` [PATCH 1/2] drm/xe/oa: Set stream->pollin in xe_oa_buffer_check_unlocked Ashutosh Dixit
2025-01-21 18:03 ` [PATCH 2/2] drm/xe/oa: Fix locking for stream->pollin Ashutosh Dixit
2025-01-21 23:19   ` Rodrigo Vivi
2025-01-22  4:04     ` Dixit, Ashutosh [this message]
2025-01-21 19:54 ` ✓ CI.Patch_applied: success for stream->pollin fixes Patchwork
2025-01-21 19:54 ` ✓ CI.checkpatch: " Patchwork
2025-01-21 19:55 ` ✓ CI.KUnit: " Patchwork
2025-01-21 20:12 ` ✓ CI.Build: " Patchwork
2025-01-21 20:15 ` ✓ CI.Hooks: " Patchwork
2025-01-21 20:16 ` ✓ CI.checksparse: " Patchwork
2025-01-21 20:43 ` ✓ Xe.CI.BAT: " Patchwork
2025-01-22  3:37 ` ✗ Xe.CI.Full: failure " Patchwork
