From: Henrik Rydberg <rydberg@bitmath.org>
To: Oleg Nesterov <oleg@redhat.com>
Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>,
linux-input@vger.kernel.org, linux-kernel@vger.kernel.org,
Jiri Kosina <jkosina@suse.cz>,
Mika Kuoppala <mika.kuoppala@nokia.com>,
Benjamin Tissoires <tissoire@cena.fr>,
Rafi Rubin <rafi@seas.upenn.edu>
Subject: Re: [PATCH 1/4] input: Introduce buflock, a one-to-many circular buffer mechanism
Date: Sat, 05 Jun 2010 20:34:05 +0200 [thread overview]
Message-ID: <4C0A989D.5040001@bitmath.org> (raw)
In-Reply-To: <20100605174034.GA13506@redhat.com>
Hi Oleg,
thanks for having another look at this.
[...]
>>> Whatever we do, buflock_read() can race with the writer and read
>>> the invalid item.
>> True. However, one could argue this is a highly unlikely case given the
>> (current) usage.
>
> Agreed, but then I'd strongly suggest you to document this in the header.
> The possible user of this API should know the limitations.
>
>> Or, one could remedy it by not wrapping the indexes modulo SIZE.
>
> You mean, change the implementation? Yes.
I feel this is the only option now.
> One more question. As you rightly pointed out, this is similar to seqlocks.
> Did you consider the option to merely use them?
>
> IOW,
> struct buflock_writer {
> seqcount_t lock;
> unsigned int head;
> };
>
> In this case the implementation is obvious and correct.
>
> Afaics, compared to the current implementation it has only one drawback:

> the reader has to restart if it races with any write, while with your
> code it only restarts if the writer writes to the item we are trying
> to read.
Yes, I did consider it, but it is suboptimal. :-)
We fixed the immediate problem in another (worse but simpler) way, so this
implementation is now pursued more out of academic interest.
>> Regarding the barriers used in the code, would it be possible to get a picture
>> of exactly how bad those operations are for performance?
>
> Oh, sorry, I don't know, and this obviously differs depending on arch.
> I never knew how these barriers actually work in hardware; I just have
> foggy ideas about the "side effects" they have ;)
>
> And I agree with Dmitry, the last smp_Xmb() in buflock_write/read looks
> unneeded. Both helpers do not care about the subsequent LOAD/STORE's.
>
> write_seqcount_begin() has the "final" wmb, yes. But this is because
> it does care. We are going to modify something under this write_lock,
> the result of these subsequent STORE's shouldn't be visible to reader
> before it sees the result of ++sequence.
The relation between storing the writer head and synchronizing the reader head
is similar in structure, in my view. On the other hand, it might be possible to
remove one of the writer heads altogether, which would make things simpler still.
>> Is it true that a
>> simple spinlock might be faster on average, for instance?
>
> Maybe. But without spinlocks the writer can never be delayed by a
> reader. I guess this was your motivation.
Yes, one of them. The other was a lock where readers do not wait for each other.
Thanks!
Henrik
Thread overview: 20+ messages
2010-06-03 8:00 [PATCH 0/4] input: evdev: Dynamic buffers (rev3) Henrik Rydberg
2010-06-03 8:00 ` [PATCH 1/4] input: Introduce buflock, a one-to-many circular buffer mechanism Henrik Rydberg
2010-06-03 8:01 ` [PATCH 2/4] input: evdev: Use multi-reader buffer to save space (rev3) Henrik Rydberg
2010-06-03 8:01 ` [PATCH 3/4] input: evdev: Convert to dynamic event buffer (rev3) Henrik Rydberg
2010-06-03 8:01 ` [PATCH 4/4] input: Use driver hint to compute the evdev buffer size Henrik Rydberg
2010-06-04 6:34 ` Dmitry Torokhov
2010-06-04 6:37 ` [PATCH 3/4] input: evdev: Convert to dynamic event buffer (rev3) Dmitry Torokhov
2010-06-04 6:56 ` [PATCH 1/4] input: Introduce buflock, a one-to-many circular buffer mechanism Dmitry Torokhov
2010-06-04 8:43 ` Henrik Rydberg
2010-06-04 16:36 ` Dmitry Torokhov
2010-06-04 17:08 ` Jonathan Cameron
2010-06-04 19:13 ` Oleg Nesterov
2010-06-04 19:43 ` Henrik Rydberg
2010-06-05 17:40 ` Oleg Nesterov
2010-06-05 18:34 ` Henrik Rydberg [this message]
2010-06-04 16:36 ` Henrik Rydberg
2010-06-05 1:35 ` Andrew Morton
2010-06-05 11:21 ` Henrik Rydberg
2010-06-04 6:59 ` [PATCH 0/4] input: evdev: Dynamic buffers (rev3) Dmitry Torokhov
2010-06-04 16:11 ` Henrik Rydberg