From: Cypher Wu <cypher.w@gmail.com>
To: Chris Metcalf <cmetcalf@tilera.com>
Cc: linux-kernel@vger.kernel.org,
	"Américo Wang" <xiyou.wangcong@gmail.com>,
	"Eric Dumazet" <eric.dumazet@gmail.com>,
	netdev <netdev@vger.kernel.org>
Subject: Re: [PATCH] arch/tile: fix rwlock so would-be write lockers don't block new readers
Date: Tue, 23 Nov 2010 09:36:31 +0800	[thread overview]
Message-ID: <AANLkTim1YyujwGZfenU_m52HEJJSFmTg3Wswn2DkqA3a@mail.gmail.com>
In-Reply-To: <4CEA71AD.5010606@tilera.com>

2010/11/22 Chris Metcalf <cmetcalf@tilera.com>:
> On 11/22/2010 12:39 AM, Cypher Wu wrote:
>> 2010/11/15 Chris Metcalf <cmetcalf@tilera.com>:
>>> This avoids a deadlock in the IGMP code where one core gets a read
>>> lock, another core starts trying to get a write lock (thus blocking
>>> new readers), and then the first core tries to recursively re-acquire
>>> the read lock.
>>>
>>> We still try to preserve some degree of balance by giving priority
>>> to additional write lockers that come along while the lock is held
>>> for write, so they can all complete quickly and return the lock to
>>> the readers.
>>>
>>> Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
>>> ---
>>> This should apply relatively cleanly to 2.6.26.7 source code too.
>>> [...]
>>
>> I've finished my business trip and tested that patch for more than an
>> hour and it works. The test is still running now.
>>
>> But it seems there is still a potential problem: write_lock() uses a
>> ticket lock, and if many write_lock() calls occur, are 256 tickets
>> enough for 64 or more cores to avoid overflow? When we write_unlock()
>> while another write_lock() is already waiting, we only increment the
>> current ticket.
>
> This is OK, since each core can issue at most one (blocking) write_lock(),
> and we have only 64 cores.  Future >256 core machines will be based on
> TILE-Gx anyway, which doesn't have the 256-core limit since it doesn't use
> the spinlock_32.c implementation.
>
> --
> Chris Metcalf, Tilera Corp.
> http://www.tilera.com
>
>

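For anyone skimming the thread, the deadlock the patch fixes is roughly
the following (a minimal sketch in kernel style; the lock name and call
sites are illustrative, not the actual IGMP code):

#include <linux/spinlock.h>

static DEFINE_RWLOCK(mc_lock);   /* hypothetical lock, for illustration */

static void core_a(void)
{
        read_lock(&mc_lock);     /* step 1: core A takes the read lock */

        /*
         * step 2, on core B: write_lock(&mc_lock) spins waiting for
         * core A and, with the old "fair" behaviour, also blocks any
         * NEW readers from getting in.
         */

        read_lock(&mc_lock);     /* step 3: recursive read lock; core A
                                  * now spins behind core B's queued
                                  * writer, while core B spins behind
                                  * core A's first read lock: deadlock */
        read_unlock(&mc_lock);
        read_unlock(&mc_lock);
}
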
Say core A calls write_lock() while current_ticket_ is 0, so it writes
next_ticket_ = 1. While A holds the lock, core B calls write_lock() and
writes next_ticket_ = 2. When A calls write_unlock() it sees that
(current_ticket_ + 1) is not equal to next_ticket_, so it increments
current_ticket_ and core B gets the lock. If core A calls write_lock()
again before core B calls write_unlock(), it increments next_ticket_ to
3, and so on.

This should only rarely matter in practice; I tested it for several
hours yesterday and it ran well under pressure.
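
To make the ticket arithmetic above concrete, here is a minimal
user-space model of the writer-ticket scheme as I understand it from
this thread (the field names current_ticket_/next_ticket_ follow the
discussion; this is an illustration, not the actual
arch/tile/lib/spinlock_32.c code, and it ignores readers entirely):

#include <stdint.h>

struct rwlock_model {
        uint8_t current_ticket_;        /* ticket currently being served */
        uint8_t next_ticket_;           /* next ticket to hand out */
};

static void model_write_lock(struct rwlock_model *lk)
{
        uint8_t my_ticket = lk->next_ticket_++;   /* wraps mod 256 */

        while (lk->current_ticket_ != my_ticket)
                ;       /* spin (a real lock also waits for readers) */
}

static void model_write_unlock(struct rwlock_model *lk)
{
        if ((uint8_t)(lk->current_ticket_ + 1) == lk->next_ticket_) {
                /* no writer queued: release the lock entirely */
                lk->current_ticket_ = lk->next_ticket_ = 0;
        } else {
                /* hand the lock straight to the next queued writer */
                lk->current_ticket_++;
        }
}

Since each core can block on at most one write_lock() at a time,
next_ticket_ can run at most 64 ahead of current_ticket_ on a 64-core
part, so the 8-bit fields wrap around harmlessly.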


-- 
Cyberman Wu
http://www.meganovo.com


Thread overview: 40+ messages
     [not found] <AANLkTikvT=x9eBovn2-m6HLqk7wyXSAR3sc9jCQ0C6mL@mail.gmail.com>
2010-11-11 15:23 ` Kernel rwlock design, Multicore and IGMP Eric Dumazet
2010-11-11 15:32   ` Eric Dumazet
2010-11-12  3:32   ` Cypher Wu
2010-11-12  6:28     ` Américo Wang
2010-11-12  7:13     ` Américo Wang
2010-11-12  7:27       ` Eric Dumazet
2010-11-12  8:19         ` Américo Wang
2010-11-12  9:09           ` Yong Zhang
2010-11-12  9:18             ` Américo Wang
2010-11-12 11:06               ` Cypher Wu
2010-11-13  6:35                 ` Américo Wang
2010-11-12 13:00               ` Yong Zhang
2010-11-13  6:28                 ` Américo Wang
2010-11-12  9:22           ` Eric Dumazet
2010-11-12  9:33             ` Américo Wang
2010-11-12 13:34             ` [PATCH net-next-2.6] igmp: RCU conversion of in_dev->mc_list Eric Dumazet
2010-11-12 14:26               ` Eric Dumazet
2010-11-12 15:46                 ` [PATCH net-next-2.6 V2] " Eric Dumazet
2010-11-12 21:19                   ` David Miller
2010-11-13  6:44                   ` Américo Wang
2010-11-13 22:54           ` Kernel rwlock design, Multicore and IGMP Peter Zijlstra
2010-11-12 11:10         ` Cypher Wu
2010-11-12 11:25           ` Eric Dumazet
2010-11-13 22:53     ` Peter Zijlstra
     [not found]     ` <ZXmP8hjgLHA.4648@exchange1.tad.internal.tilera.com>
2010-11-13 23:03       ` Chris Metcalf
2010-11-15  7:22         ` Cypher Wu
2010-11-15 11:18           ` Cypher Wu
2010-11-15 11:31             ` Eric Dumazet
2010-11-17  1:30               ` Cypher Wu
2010-11-17  4:43                 ` Eric Dumazet
2010-11-15 14:18           ` [PATCH] arch/tile: fix rwlock so would-be write lockers don't block new readers Chris Metcalf
2010-11-15 14:52             ` Eric Dumazet
2010-11-15 15:10               ` Chris Metcalf
2010-11-22  5:39             ` Cypher Wu
2010-11-22 13:35               ` Chris Metcalf
2010-11-23  1:36                 ` Cypher Wu [this message]
2010-11-23 21:02                   ` Chris Metcalf
2010-11-24  2:53                     ` Cypher Wu
2010-11-24 14:09                       ` Chris Metcalf
2010-11-24 16:37                         ` Cypher Wu
