linux-kernel.vger.kernel.org archive mirror
From: Thomas Hellstrom <thellstrom@vmware.com>
To: Maarten Lankhorst <maarten.lankhorst@canonical.com>
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org,
	ccross@google.com, linux-media@vger.kernel.org
Subject: Re: [PATCH 2/2] [RFC] reservation: add suppport for read-only access using rcu
Date: Thu, 10 Apr 2014 10:46:11 +0200	[thread overview]
Message-ID: <53465A53.1090500@vmware.com> (raw)
In-Reply-To: <20140409144831.26648.79163.stgit@patser>

[-- Attachment #1: Type: text/plain, Size: 2172 bytes --]

Hi!

Ugh. This became more complicated than I thought, but I'm OK with moving
TTM over to fence while we sort out how / if we're going to use this.

While reviewing, it struck me that this approach is kind of error-prone
and hard to follow, since we're operating on a structure that may be
continually updated under us, which requires a lot of RCU-specific
macros and barriers.

Also, the RCU wait appears not to complete until there are no busy
fences left (new ones can be added while we wait), rather than waiting
on a snapshot of the busy fences.
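
The distinction between the two wait semantics can be illustrated with a
toy userspace model (not from the patch; the arrays and names are made
up, with plain ints standing in for fences):

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_FENCES 8

/* 1 = busy, 0 = signaled; the "live" set may grow while we wait. */
static int live[MAX_FENCES];
static size_t live_count;

/* Snapshot semantics: only consider the fences that existed when the
 * wait started; fences added later are ignored. */
static bool snapshot_all_signaled(const size_t *idx, size_t n)
{
	for (size_t i = 0; i < n; i++)
		if (live[idx[i]])
			return false;
	return true;
}

/* Live semantics: re-examine the whole (possibly grown) set on every
 * pass; this never completes while new busy fences keep arriving. */
static bool live_all_signaled(void)
{
	for (size_t i = 0; i < live_count; i++)
		if (live[i])
			return false;
	return true;
}
```

A waiter using the first function terminates once its snapshot has
signaled, even if the live set has gained new busy fences in the
meantime; a waiter polling the second does not.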

I wonder if these issues could be addressed by having a function that
provides a snapshot of all busy fences. One way to accomplish this is to
include the exclusive fence in the fence_list structure and allocate a
new such structure each time it is updated. The RCU reader could then
just make a copy of the current fence_list structure pointed to by
obj->fence, but I'm not sure we want to reallocate *each* time we update
the fence pointer.
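
The reallocate-on-update idea can be sketched in userspace C11 (not from
the patch; atomics stand in for rcu_assign_pointer()/rcu_dereference(),
and the struct and function names are made up):

```c
#include <stdatomic.h>
#include <stdlib.h>
#include <string.h>

struct fence_list {
	unsigned shared_count;
	int ids[8];	/* stand-in for struct fence * entries */
};

/* Published pointer; writers replace it wholesale, readers just load it. */
static _Atomic(struct fence_list *) obj_fence;

/* Writer side: build a new immutable list and publish it with release
 * semantics (the kernel analogue would be rcu_assign_pointer()). */
static void publish_list(const int *ids, unsigned n)
{
	struct fence_list *fresh = malloc(sizeof(*fresh));

	fresh->shared_count = n;
	memcpy(fresh->ids, ids, n * sizeof(*ids));
	atomic_store_explicit(&obj_fence, fresh, memory_order_release);
	/* Real code must defer freeing the old list (e.g. kfree_rcu()). */
}

/* Reader side: a single acquire load yields a self-consistent snapshot,
 * because a published list is never modified afterwards. */
static unsigned snapshot_count(void)
{
	struct fence_list *list =
		atomic_load_explicit(&obj_fence, memory_order_acquire);

	return list ? list->shared_count : 0;
}
```

The cost, as noted above, is an allocation on every update of the fence
pointer.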

The other approach uses a seqlock to obtain a consistent snapshot, and
I've attached an incomplete outline of it. I'm not 100% sure whether
it's OK to combine RCU and seqlocks in this way, though...
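
For reference, the seqcount read/retry pattern the outline relies on can
be modeled in userspace C11 (not from the patch; seq_cst ordering is
used for simplicity and the variable names are illustrative):

```c
#include <stdatomic.h>

/* Userspace model of a kernel seqcount: the counter is odd while a
 * writer is mid-update, and readers retry if it changed under them. */
static _Atomic unsigned seqcnt;
static int fence_a, fence_b;	/* the state being snapshotted */

static void write_pair(int a, int b)
{
	atomic_fetch_add(&seqcnt, 1);	/* begin: count becomes odd */
	fence_a = a;
	fence_b = b;
	atomic_fetch_add(&seqcnt, 1);	/* end: count becomes even */
}

static void read_pair(int *a, int *b)
{
	unsigned s;

	do {
		/* Wait out any writer, then take the snapshot. */
		while ((s = atomic_load(&seqcnt)) & 1)
			;
		*a = fence_a;
		*b = fence_b;
		/* If the count moved, a writer raced us: retry. */
	} while (atomic_load(&seqcnt) != s);
}
```

The attached outline applies exactly this retry loop to the reservation
object's fence pointers.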

Both approaches have the benefit of hiding the RCU snapshotting in a
single function that can then be used by any waiting or polling
function.

/Thomas



On 04/09/2014 04:49 PM, Maarten Lankhorst wrote:
> This adds 3 more functions to deal with rcu.
>
> reservation_object_wait_timeout_rcu() will wait on all fences of the
> reservation_object, without obtaining the ww_mutex.
>
> reservation_object_test_signaled_rcu() will test if all fences of the
> reservation_object are signaled without using the ww_mutex.
>
> reservation_object_get_excl() is added because touching the fence_excl
> member directly will trigger a sparse warning.
>
> Signed-off-by: Maarten Lankhorst <maarten.lankhorst@canonical.com>
> ---
>  drivers/base/dma-buf.c      |   46 +++++++++++--
>  drivers/base/reservation.c  |  147 +++++++++++++++++++++++++++++++++++++++++--
>  include/linux/fence.h       |   22 ++++++
>  include/linux/reservation.h |   40 ++++++++----
>  4 files changed, 224 insertions(+), 31 deletions(-)
>


[-- Attachment #2: rcu_fence.diff --]
[-- Type: text/x-patch, Size: 4139 bytes --]

diff --git a/drivers/base/reservation.c b/drivers/base/reservation.c
index b82a5b6..c4bcf10 100644
--- a/drivers/base/reservation.c
+++ b/drivers/base/reservation.c
@@ -82,6 +82,8 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
 {
 	u32 i;
 
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
 	for (i = 0; i < fobj->shared_count; ++i) {
 		if (fobj->shared[i]->context == fence->context) {
 			struct fence *old_fence = fobj->shared[i];
@@ -90,6 +92,8 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
 
 			fobj->shared[i] = fence;
 
+			write_seqcount_end(&obj->seq);
+			preempt_enable();
 			fence_put(old_fence);
 			return;
 		}
@@ -101,8 +105,9 @@ reservation_object_add_shared_inplace(struct reservation_object *obj,
 	 * make the new fence visible before incrementing
 	 * fobj->shared_count
 	 */
-	smp_wmb();
 	fobj->shared_count++;
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
 }
 
 static void
@@ -141,7 +146,11 @@ reservation_object_add_shared_replace(struct reservation_object *obj,
 		fobj->shared[fobj->shared_count++] = fence;
 
 done:
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
 	obj->fence = fobj;
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
 	kfree(old);
 }
 
@@ -173,6 +182,8 @@ void reservation_object_add_excl_fence(struct reservation_object *obj,
 	u32 i = 0;
 
 	old = reservation_object_get_list(obj);
+	preempt_disable();
+	write_seqcount_begin(&obj->seq);
 	if (old) {
 		i = old->shared_count;
 		old->shared_count = 0;
@@ -182,7 +193,8 @@ void reservation_object_add_excl_fence(struct reservation_object *obj,
 		fence_get(fence);
 
 	obj->fence_excl = fence;
-
+	write_seqcount_end(&obj->seq);
+	preempt_enable();
 	/* inplace update, no shared fences */
 	while (i--)
 		fence_put(old->shared[i]);
@@ -191,3 +203,76 @@ void reservation_object_add_excl_fence(struct reservation_object *obj,
 		fence_put(old_fence);
 }
 EXPORT_SYMBOL(reservation_object_add_excl_fence);
+
+struct unsignaled {
+	unsigned shared_max;
+	unsigned shared_count;
+	struct fence **shared;
+	struct fence *exclusive;
+};
+
+static int reservation_object_unsignaled_rcu(struct reservation_object *obj,
+					     struct unsignaled *us)
+{
+	unsigned seq, i;
+	struct reservation_object_list *fobj, list;
+	struct fence *fence;
+
+	/* Reset the output so a retry doesn't accumulate stale entries */
+	us->shared_count = 0;
+	us->exclusive = NULL;
+retry:
+	seq = read_seqcount_begin(&obj->seq);
+	rcu_read_lock();
+
+	fobj = obj->fence;
+	fence = obj->fence_excl;
+
+	/* Check pointers for validity */
+	if (read_seqcount_retry(&obj->seq, seq)) {
+		rcu_read_unlock();
+		goto retry;
+	}
+
+	/* No shared list yet: treat it as an empty snapshot */
+	if (fobj)
+		list = *fobj;
+	else
+		list.shared_count = 0;
+
+	/* Check list for validity */
+	if (read_seqcount_retry(&obj->seq, seq)) {
+		rcu_read_unlock();
+		goto retry;
+	}
+
+	if (list.shared_count == 0) {
+		if (fence &&
+		    !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags) &&
+		    fence_get_rcu(fence))
+			us->exclusive = fence;
+		rcu_read_unlock();
+		return 0;
+	}
+
+	/* Needs reallocation? Either in this function or outside */
+	if (us->shared_max < list.shared_count) {
+		rcu_read_unlock();
+		return -ENOMEM;
+	}
+
+	memcpy(us->shared, list.shared,
+	       list.shared_count * sizeof(*list.shared));
+
+	/* Check the fence pointer array for validity */
+	if (read_seqcount_retry(&obj->seq, seq)) {
+		rcu_read_unlock();
+		goto retry;
+	}
+
+	for (i = 0; i < list.shared_count; ++i) {
+		struct fence *fence = us->shared[i];
+
+		if (fence && !test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->flags)
+		    && fence_get_rcu(fence))
+			us->shared[us->shared_count++] = fence;
+	}
+
+	rcu_read_unlock();
+
+	return 0;
+}
diff --git a/include/linux/reservation.h b/include/linux/reservation.h
index b602365..4bf791a 100644
--- a/include/linux/reservation.h
+++ b/include/linux/reservation.h
@@ -52,6 +52,7 @@ struct reservation_object_list {
 
 struct reservation_object {
 	struct ww_mutex lock;
+	struct seqcount seq;
 
 	struct fence *fence_excl;
 	struct reservation_object_list *fence;
@@ -69,6 +70,7 @@ reservation_object_init(struct reservation_object *obj)
 	obj->fence_excl = NULL;
 	obj->fence = NULL;
 	obj->staged = NULL;
+	seqcount_init(&obj->seq);
 }
 
 static inline void

Thread overview: 25+ messages
2014-04-09 14:48 [PATCH 0/2] Updates to fence api Maarten Lankhorst
2014-04-09 14:48 ` [PATCH 1/2] reservation: update api and add some helpers Maarten Lankhorst
2014-04-09 14:49 ` [PATCH 2/2] [RFC] reservation: add suppport for read-only access using rcu Maarten Lankhorst
2014-04-10  8:46   ` Thomas Hellstrom [this message]
2014-04-10 10:07     ` Maarten Lankhorst
2014-04-10 11:08       ` Thomas Hellstrom
2014-04-10 11:25         ` Thomas Hellstrom
2014-04-10 15:00         ` [PATCH 2/2] [RFC v2 with seqcount] " Maarten Lankhorst
2014-04-11  8:38           ` Thomas Hellstrom
2014-04-11  9:24             ` Maarten Lankhorst
2014-04-11 10:11               ` Thomas Hellstrom
2014-04-11 18:09                 ` Maarten Lankhorst
2014-04-11 19:30                   ` Thomas Hellstrom
2014-04-14  7:04                     ` Maarten Lankhorst
2014-04-11 19:35                   ` Thomas Hellstrom
2014-04-14  7:42                     ` Maarten Lankhorst
2014-04-14  7:45                       ` Thomas Hellstrom
2014-04-23 11:15                         ` [RFC PATCH 2/2 with seqcount v3] " Maarten Lankhorst
2014-04-29 14:32                           ` Maarten Lankhorst
2014-04-29 18:55                             ` Thomas Hellstrom
2014-05-19 13:42                           ` Thomas Hellstrom
2014-05-19 14:13                             ` Maarten Lankhorst
2014-05-19 14:43                               ` Thomas Hellstrom
2014-05-20 15:13                               ` Thomas Hellstrom
2014-05-20 15:32                                 ` Maarten Lankhorst
