public inbox for linux-kernel@vger.kernel.org
From: Lee Jones <lee.jones@linaro.org>
To: philip yang <yangp@amd.com>
Cc: "Felix Kuehling" <felix.kuehling@amd.com>,
	"David Airlie" <airlied@linux.ie>,
	"Pan, Xinhui" <Xinhui.Pan@amd.com>,
	linux-kernel@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	"Alex Deucher" <alexander.deucher@amd.com>,
	"Christian König" <christian.koenig@amd.com>
Subject: Re: [PATCH 1/1] drm/amdkfd: Protect the Client whilst it is being operated on
Date: Wed, 23 Mar 2022 12:46:55 +0000	[thread overview]
Message-ID: <YjsWvy8cT2eOw618@google.com> (raw)
In-Reply-To: <YjNh/Ajxgp3mjvWV@google.com>

On Thu, 17 Mar 2022, Lee Jones wrote:

> On Thu, 17 Mar 2022, philip yang wrote:
> 
> >    On 2022-03-17 11:13 a.m., Lee Jones wrote:
> > 
> > On Thu, 17 Mar 2022, Felix Kuehling wrote:
> > 
> > 
> > On 2022-03-17 at 11:00, Lee Jones wrote:
> > 
> > Good afternoon Felix,
> > 
> > Thanks for your review.
> > 
> > 
> > On 2022-03-17 at 09:16, Lee Jones wrote:
> > 
> > Presently the Client can be freed whilst still in use.
> > 
> > Use the already provided lock to prevent this.
> > 
> > Cc: Felix Kuehling <Felix.Kuehling@amd.com>
> > Cc: Alex Deucher <alexander.deucher@amd.com>
> > Cc: "Christian König" <christian.koenig@amd.com>
> > Cc: "Pan, Xinhui" <Xinhui.Pan@amd.com>
> > Cc: David Airlie <airlied@linux.ie>
> > Cc: Daniel Vetter <daniel@ffwll.ch>
> > Cc: amd-gfx@lists.freedesktop.org
> > Cc: dri-devel@lists.freedesktop.org
> > Signed-off-by: Lee Jones <lee.jones@linaro.org>
> > ---
> >    drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c | 6 ++++++
> >    1 file changed, 6 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> > index e4beebb1c80a2..3b9ac1e87231f 100644
> > --- a/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> > +++ b/drivers/gpu/drm/amd/amdkfd/kfd_smi_events.c
> > @@ -145,8 +145,11 @@ static int kfd_smi_ev_release(struct inode *inode, struct file *filep)
> >         spin_unlock(&dev->smi_lock);
> >         synchronize_rcu();
> > +
> > +       spin_lock(&client->lock);
> >         kfifo_free(&client->fifo);
> >         kfree(client);
> > +       spin_unlock(&client->lock);
> > 
> > The spin_unlock is after the spinlock data structure has been freed.
> > 
> > Good point.
> > 
> > If we go forward with this approach the unlock should perhaps be moved
> > to just before the kfree().
> > 
> > 
> > There
> > should be no concurrent users here, since we are freeing the data structure.
> > If there still are concurrent users at this point, they will crash anyway.
> > So the locking is unnecessary.
> > 
> > The users may well crash, as does the kernel unfortunately.
> > 
> > We only get to kfd_smi_ev_release when the file descriptor is closed. User
> > mode has no way to use the client any more at this point. This function also
> > removes the client from the dev->smi_clients list. So no more events will
> > be added to the client. Therefore it is safe to free the client.
> > 
> > If any of the above were not true, it would not be safe to kfree(client).
> > 
> > But if it is safe to kfree(client), then there is no need for the locking.
> > 
> > I'm not keen to go into too much detail until it's been patched.
> > 
> > However, there is a way to free the client while it is still in use.
> > 
> > Remember we are multi-threaded.
> > 
> >    The files_struct->count refcount handles this race: vfs_read/vfs_write
> >    take a reference on the file, and fput calls release only when the
> >    refcount is 1, which guarantees that reads/writes from user space have
> >    finished by this point.
> > 
> >    Another race is the driver calling add_event_to_kfifo while the handle
> >    is being closed. We use rcu_read_lock in add_event_to_kfifo, and
> >    kfd_smi_ev_release calls synchronize_rcu to wait for all RCU readers to
> >    finish. So it is safe to call kfifo_free(&client->fifo) and kfree(client).
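For reference, the teardown ordering Philip describes can be sketched roughly
as follows. This is a simplified, partly hypothetical outline following the
names in kfd_smi_events.c, not the exact upstream code, and it is kernel code
rather than something compilable on its own:

```
/* Sketch of the RCU-protected release path discussed above
 * (simplified; names follow kfd_smi_events.c but the body is
 * abbreviated and partly hypothetical).
 */
static int kfd_smi_ev_release(struct inode *inode, struct file *filep)
{
	struct kfd_smi_client *client = filep->private_data;
	struct kfd_dev *dev = client->dev;

	/* Unlink the client so no new events can be queued to it. */
	spin_lock(&dev->smi_lock);
	list_del_rcu(&client->list);
	spin_unlock(&dev->smi_lock);

	/* Wait for any in-flight add_event_to_kfifo() callers, which
	 * run under rcu_read_lock(), to drain. */
	synchronize_rcu();

	/* No concurrent users can remain, so freeing needs no lock,
	 * and holding client->lock across kfree() would make the
	 * unlock a use-after-free. */
	kfifo_free(&client->fifo);
	kfree(client);
	return 0;
}
```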
> 
> Philip, please reach out to Felix.

Philip, Felix, are you receiving my direct messages?

I have a feeling they're being filtered out by AMD's mail server.

-- 
Lee Jones [李琼斯]
Principal Technical Lead - Developer Services
Linaro.org │ Open source software for Arm SoCs
Follow Linaro: Facebook | Twitter | Blog


Thread overview: 9+ messages
2022-03-17 13:16 [PATCH 1/1] drm/amdkfd: Protect the Client whilst it is being operated on Lee Jones
2022-03-17 14:19 ` Lee Jones
2022-03-17 14:50 ` Felix Kuehling
2022-03-17 15:00   ` Lee Jones
2022-03-17 15:08     ` Felix Kuehling
2022-03-17 15:13       ` Lee Jones
     [not found]         ` <b65db51e-f1ba-3a9b-0ac1-0b8ae51c5eee@amd.com>
2022-03-17 16:29           ` Lee Jones
2022-03-23 12:46             ` Lee Jones [this message]
2022-03-23 19:13               ` Felix Kuehling
