From: Dipankar Sarma <dipankar@in.ibm.com>
To: Andrea Arcangeli <andrea@suse.de>
Cc: rusty@rustcorp.com.au, linux-kernel@vger.kernel.org,
Paul McKenney <paul.mckenney@us.ibm.com>
Subject: Re: 2.4.10pre7aa1
Date: Wed, 12 Sep 2001 20:12:29 +0530
Message-ID: <20010912201229.F5819@in.ibm.com>
In-Reply-To: <20010912163426.A5979@in.ibm.com> <20010912160313.A695@athlon.random>
On Wed, Sep 12, 2001 at 04:03:13PM +0200, Andrea Arcangeli wrote:
> > > Like the kernel threads approach, but AFAICT it won't work for the case of two CPUs running wait_for_rcu at the same time (on a 4-way or above).
>
> Good catch!
It barfs on our 4-way with the FD management patch and the chat benchmark :-)
> > The patch I submitted to Andrea had logic to make sure that
> > two CPUs don't execute wait_for_rcu() at the same time.
> > Somehow it seems to have got lost in Andrea's modifications.
>
> I think the bug was in your original patch too, I'm pretty sure I didn't
> break anything while changing the API a little.
You changed the way I maintained the wait_list and current_list.
The basic logic was that new callbacks are always added to the
wait_list. wait_for_rcu() is started only if current_list was
empty and we had just moved the wait_list to current_list. The
key step was moving the wait_list to current_list *after* doing
a wait_for_rcu(); that prevents another CPU from starting a
wait_for_rcu() of its own. Either that or I missed something big time :-)
>
> > I will look at that and submit a new patch to Andrea, if necessary.
>
> I prefer to allow all cpus to enter wait_for_rcu at the same time rather
> than putting a serializing semaphore around wait_for_rcu (it should
> scale pretty well if we don't serialize around wait_for_rcu).
Serializing is not what I want to do either. Instead, the other
CPUs just add to the wait_list and return if a wait_for_rcu() is
already in progress. What we have seen is that relatively large
batches around a single recurring wait_for_rcu() perform
reasonably well.
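The two-list batching scheme described above can be modeled in user space roughly as follows. This is only a sketch of the invariant: the names follow the discussion (wait_list, current_list, wait_for_rcu), the lists are reduced to counters, and the real blocking grace-period machinery is stubbed out, so it is not the actual 2.4-era patch.

```c
#include <assert.h>

static int wait_list;      /* callbacks queued for a future grace period */
static int current_list;   /* batch covered by the in-flight wait_for_rcu() */
static int waits_started;  /* how many grace-period waits actually ran */

static void wait_for_rcu(void)  /* stand-in for the blocking wait */
{
	waits_started++;
}

static void call_rcu_model(void)  /* new callbacks always go on wait_list */
{
	wait_list++;
}

/*
 * A non-empty current_list means some CPU is already inside
 * wait_for_rcu(); later callers only queue and return, so at most one
 * grace-period wait is in flight without a serializing semaphore.
 */
static void process_batch(void)
{
	if (current_list != 0 || wait_list == 0)
		return;                  /* wait in flight, or nothing queued */
	current_list = wait_list;        /* claim the whole pending batch */
	wait_list = 0;
	wait_for_rcu();                  /* only the claiming CPU blocks here */
	current_list = 0;                /* batch done; callbacks would run now */
}
```

The point of the model is the batching effect: however many CPUs queue callbacks while a wait is in flight, they all ride on the single recurring wait_for_rcu().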
>
> The way I prefer to fix it is just to replace the rcu_sema with a per-cpu
> semaphore and have wait_for_rcu running down on such per-cpu semaphore
> of the interesting cpu, should be a few liner patch (we have space
> free for it in the per-cpu rcu_data cacheline).
It should be possible to do this. However, I am not sure we would
really benefit significantly from allowing multiple wait_for_rcu()s
to run in parallel. I would much rather see per-CPU lists implemented
and eventually avoid keventd altogether.
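For reference, Andrea's per-CPU-semaphore idea would amount to a layout along these lines. The field names and the 64-byte line size are illustrative assumptions, not the actual 2.4-era rcu_data; the point is only that each CPU gets its own semaphore in its own cache line, so concurrent wait_for_rcu() callers never contend on a single global rcu_sema.

```c
#include <semaphore.h>

#define L1_CACHE_BYTES 64	/* assumed cache-line size */

struct rcu_data {
	long  batch;		/* last grace-period batch seen by this CPU */
	sem_t sema;		/* per-CPU wait_for_rcu() semaphore */
} __attribute__((aligned(L1_CACHE_BYTES)));

static struct rcu_data rcu_data_cpu[4];	/* one slot per CPU on a 4-way */
```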
>
> > As for wrappers, I am agnostic. However, I think sooner or later
> > people will start asking for them, if we go by our past experience.
>
> Maybe I'm missing something but what's the problem in allocating the
> struct rcu_head in the data structure? I don't think it's much more
> complicated than the cast magics, and in general I prefer to avoid casts
> on larger buffers to take advantage of the C compile-time sanity checking ;).
One disadvantage of the wrappers is that we would waste most of an
L1 cache line on the rcu_head, which could be relatively significant for
a small, frequently allocated structure. And no, I don't see any problem
with asking people to allocate the rcu_head in the data structure.
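The embedded-rcu_head convention being discussed looks roughly like this. The two-field-plus-argument rcu_head shown is a plausible guess at the era's layout, and my_entry/entry_free are made-up illustrations; the point is that embedding costs only sizeof(struct rcu_head) inside the object, needs no separate wrapper allocation, and the callback recovers the typed object without casts on the object itself.

```c
#include <stddef.h>

struct rcu_head {
	struct rcu_head *next;		/* link in the wait/current list */
	void (*func)(void *arg);	/* callback run after the grace period */
	void *arg;
};

/* A small, frequently allocated structure with the head embedded. */
struct my_entry {
	int key;
	struct rcu_head rcu;
};

/* Recover the enclosing object from a pointer to its embedded member. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

static void entry_free(void *arg)
{
	struct my_entry *e = container_of((struct rcu_head *)arg,
					  struct my_entry, rcu);
	(void)e;	/* kfree(e) in the kernel */
}
```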
Thanks
Dipankar
--
Dipankar Sarma <dipankar@in.ibm.com> Project: http://lse.sourceforge.net
Linux Technology Center, IBM Software Lab, Bangalore, India.