public inbox for linux-arch@vger.kernel.org
* Global spinlock vs local bit spin locks
@ 2005-06-17  4:21 Nick Piggin
  2005-06-17  4:45 ` Andrew Morton
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Nick Piggin @ 2005-06-17  4:21 UTC (permalink / raw)
  To: David S. Miller, anton, linux-arch; +Cc: Andrew Morton, Peter Keilty

[-- Attachment #1: Type: text/plain, Size: 871 bytes --]

Hi,

Peter Keilty is running into some scalability problems with buffer
head based IO. There are a couple of global spinlocks in the buffer
completion path, and they're showing up on 16-way IA64 systems.

Replacing these locks with a bit spin lock in the buffer head status
field has been shown to eliminate the bouncing problem. We want to
go with this unless anyone has an objection to the cost.

There is a cost (though I haven't been able to measure a significant
change), but I think it will be outweighed by the reduction in
cacheline contention on even small SMPs doing IO.

Any input would be appreciated.

If anyone wants to run some tests, possibly the easiest would be to
make ext2 on loopback on tmpfs (to test scalability, have one loop
device for each CPU in the system and bind the loop threads to each
CPU). Make sure ext2 block size is < PAGE_SIZE.
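
A rough sketch of that setup (device paths, mount points, and sizes are
illustrative; the script only prints the commands - change run() and
execute as root to actually apply them):

```shell
#!/bin/sh
# Dry-run sketch of the suggested test: one ext2-on-loop-on-tmpfs
# filesystem per CPU, with each loop thread bound to its CPU.
# All names and sizes below are examples, not values from the thread.
run() { echo "$@"; }    # change to: run() { "$@"; } and run as root

NCPUS=$(getconf _NPROCESSORS_ONLN)
run mount -t tmpfs tmpfs /mnt/tmpfs

i=0
while [ "$i" -lt "$NCPUS" ]; do
    run dd if=/dev/zero of="/mnt/tmpfs/img$i" bs=1M count=256
    run losetup "/dev/loop$i" "/mnt/tmpfs/img$i"
    run mkfs.ext2 -b 1024 "/dev/loop$i"     # block size < PAGE_SIZE
    run mkdir -p "/mnt/ext2-$i"
    run mount "/dev/loop$i" "/mnt/ext2-$i"
    # bind this device's loop thread to CPU i (pid left symbolic)
    run taskset -pc "$i" "\$(pgrep -f loop$i | head -1)"
    i=$((i + 1))
done
```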



[-- Attachment #2: page_uptodate_lock-bh_lock.patch --]
[-- Type: text/x-patch, Size: 3422 bytes --]

Use a bit spin lock in the first buffer of the page to synchronise
async IO buffer completions, instead of the global page_uptodate_lock,
which is showing some scalability problems.

Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>

Index: linux-2.6/fs/buffer.c
===================================================================
--- linux-2.6.orig/fs/buffer.c	2005-05-06 17:41:50.000000000 +1000
+++ linux-2.6/fs/buffer.c	2005-05-06 17:42:03.000000000 +1000
@@ -538,9 +538,8 @@ static void free_more_memory(void)
  */
 static void end_buffer_async_read(struct buffer_head *bh, int uptodate)
 {
-	static DEFINE_SPINLOCK(page_uptodate_lock);
 	unsigned long flags;
-	struct buffer_head *tmp;
+	struct buffer_head *first, *tmp;
 	struct page *page;
 	int page_uptodate = 1;
 
@@ -561,7 +560,9 @@ static void end_buffer_async_read(struct
 	 * two buffer heads end IO at almost the same time and both
 	 * decide that the page is now completely done.
 	 */
-	spin_lock_irqsave(&page_uptodate_lock, flags);
+	first = page_buffers(page);
+	local_irq_save(flags);
+	bit_spin_lock(BH_Uptd_Lock, &first->b_state);
 	clear_buffer_async_read(bh);
 	unlock_buffer(bh);
 	tmp = bh;
@@ -574,7 +575,8 @@ static void end_buffer_async_read(struct
 		}
 		tmp = tmp->b_this_page;
 	} while (tmp != bh);
-	spin_unlock_irqrestore(&page_uptodate_lock, flags);
+	bit_spin_unlock(BH_Uptd_Lock, &first->b_state);
+	local_irq_restore(flags);
 
 	/*
 	 * If none of the buffers had errors and they are all
@@ -586,7 +588,8 @@ static void end_buffer_async_read(struct
 	return;
 
 still_busy:
-	spin_unlock_irqrestore(&page_uptodate_lock, flags);
+	bit_spin_unlock(BH_Uptd_Lock, &first->b_state);
+	local_irq_restore(flags);
 	return;
 }
 
@@ -597,9 +600,8 @@ still_busy:
 void end_buffer_async_write(struct buffer_head *bh, int uptodate)
 {
 	char b[BDEVNAME_SIZE];
-	static DEFINE_SPINLOCK(page_uptodate_lock);
 	unsigned long flags;
-	struct buffer_head *tmp;
+	struct buffer_head *tmp, *first;
 	struct page *page;
 
 	BUG_ON(!buffer_async_write(bh));
@@ -619,7 +621,10 @@ void end_buffer_async_write(struct buffe
 		SetPageError(page);
 	}
 
-	spin_lock_irqsave(&page_uptodate_lock, flags);
+	first = page_buffers(page);
+	local_irq_save(flags);
+	bit_spin_lock(BH_Uptd_Lock, &first->b_state);
+
 	clear_buffer_async_write(bh);
 	unlock_buffer(bh);
 	tmp = bh->b_this_page;
@@ -630,12 +635,14 @@ void end_buffer_async_write(struct buffe
 		}
 		tmp = tmp->b_this_page;
 	}
-	spin_unlock_irqrestore(&page_uptodate_lock, flags);
+	bit_spin_unlock(BH_Uptd_Lock, &first->b_state);
+	local_irq_restore(flags);
 	end_page_writeback(page);
 	return;
 
 still_busy:
-	spin_unlock_irqrestore(&page_uptodate_lock, flags);
+	bit_spin_unlock(BH_Uptd_Lock, &first->b_state);
+	local_irq_restore(flags);
 	return;
 }
 
Index: linux-2.6/include/linux/buffer_head.h
===================================================================
--- linux-2.6.orig/include/linux/buffer_head.h	2005-05-06 17:39:54.000000000 +1000
+++ linux-2.6/include/linux/buffer_head.h	2005-05-06 17:42:03.000000000 +1000
@@ -19,6 +19,8 @@ enum bh_state_bits {
 	BH_Dirty,	/* Is dirty */
 	BH_Lock,	/* Is locked */
 	BH_Req,		/* Has been submitted for I/O */
+	BH_Uptd_Lock,	/* Only used by the first bh in a page, to serialise
+			   IO completion of other buffers in the page */
 
 	BH_Mapped,	/* Has a disk mapping */
 	BH_New,		/* Disk mapping was newly created by get_block */


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: Global spinlock vs local bit spin locks
  2005-06-17  4:21 Global spinlock vs local bit spin locks Nick Piggin
@ 2005-06-17  4:45 ` Andrew Morton
  2005-06-17  8:50   ` Andi Kleen
  2005-06-17  4:45 ` David S. Miller
  2005-06-17  4:46 ` William Lee Irwin III
  2 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2005-06-17  4:45 UTC (permalink / raw)
  To: Nick Piggin; +Cc: davem, anton, linux-arch, Peter.Keilty

Nick Piggin <nickpiggin@yahoo.com.au> wrote:
>
> Peter Keilty is running into some scalability problems with buffer
>  head based IO. There are a couple of global spinlocks in the buffer
>  completion path, and they're showing up on 16-way IA64 systems.

Well in -mm these spinlocks are hashed and the performance is good.  But
it's a bit dorky.  So we don't _have_ to go with the bit_spin_lock()
approach.  But bit_spin_lock() is nicer.

The reason why I went with a hashed lock is that I have memories of being
beaten up over suckiness of bit_spin_lock().  But I'm now wondering who was
beating me up, and why ;)


* Re: Global spinlock vs local bit spin locks
  2005-06-17  4:21 Global spinlock vs local bit spin locks Nick Piggin
  2005-06-17  4:45 ` Andrew Morton
@ 2005-06-17  4:45 ` David S. Miller
  2005-06-17  4:46 ` William Lee Irwin III
  2 siblings, 0 replies; 13+ messages in thread
From: David S. Miller @ 2005-06-17  4:45 UTC (permalink / raw)
  To: nickpiggin; +Cc: anton, linux-arch, akpm, Peter.Keilty

From: Nick Piggin <nickpiggin@yahoo.com.au>
Date: Fri, 17 Jun 2005 14:21:32 +1000

> Any input would be appreciated.

I totally agree with this change.  I'm surprised that page_uptodate_lock
lived this long :)


* Re: Global spinlock vs local bit spin locks
  2005-06-17  4:21 Global spinlock vs local bit spin locks Nick Piggin
  2005-06-17  4:45 ` Andrew Morton
  2005-06-17  4:45 ` David S. Miller
@ 2005-06-17  4:46 ` William Lee Irwin III
  2005-06-17  4:52   ` Andrew Morton
  2005-06-17  8:35   ` Nick Piggin
  2 siblings, 2 replies; 13+ messages in thread
From: William Lee Irwin III @ 2005-06-17  4:46 UTC (permalink / raw)
  To: Nick Piggin
  Cc: David S. Miller, anton, linux-arch, Andrew Morton, Peter Keilty

On Fri, Jun 17, 2005 at 02:21:32PM +1000, Nick Piggin wrote:
> Peter Keilty is running into some scalability problems with buffer
> head based IO. There are a couple of global spinlocks in the buffer
> completion path, and they're showing up on 16-way IA64 systems.
> Replacing these locks with a bit spin lock in the buffer head status
> field has been shown to eliminate the bouncing problem. We want to
> go with this unless anyone has an objection to the cost.
> There is a cost (though I haven't been able to measure a significant
> change), but I think it will be outweighed by the reduction in
> cacheline contention on even small SMPs doing IO.
> Any input would be appreciated.
> If anyone wants to run some tests, possibly the easiest would be to
> make ext2 on loopback on tmpfs (to test scalability, have one loop
> device for each CPU in the system and bind the loop threads to each
> CPU). Make sure ext2 block size is < PAGE_SIZE.

I'd feel far more comfortable with this if the lockbit resided in the
page. Also, compare it to akpm's solution.


-- wli


* Re: Global spinlock vs local bit spin locks
  2005-06-17  4:46 ` William Lee Irwin III
@ 2005-06-17  4:52   ` Andrew Morton
  2005-06-17  8:35   ` Nick Piggin
  1 sibling, 0 replies; 13+ messages in thread
From: Andrew Morton @ 2005-06-17  4:52 UTC (permalink / raw)
  To: William Lee Irwin III; +Cc: nickpiggin, davem, anton, linux-arch, Peter.Keilty

William Lee Irwin III <wli@holomorphy.com> wrote:
>
> On Fri, Jun 17, 2005 at 02:21:32PM +1000, Nick Piggin wrote:
> > Peter Keilty is running into some scalability problems with buffer
> > head based IO. There are a couple of global spinlocks in the buffer
> > completion path, and they're showing up on 16-way IA64 systems.
> > Replacing these locks with a bit spin lock in the buffer head status
> > field has been shown to eliminate the bouncing problem. We want to
> > go with this unless anyone has an objection to the cost.
> > There is a cost (though I haven't been able to measure a significant
> > change), but I think it will be outweighed by the reduction in
> > cacheline contention on even small SMPs doing IO.
> > Any input would be appreciated.
> > If anyone wants to run some tests, possibly the easiest would be to
> > make ext2 on loopback on tmpfs (to test scalability, have one loop
> > device for each CPU in the system and bind the loop threads to each
> > CPU). Make sure ext2 block size is < PAGE_SIZE.
> 
> I'd feel far more comfortable with this if the lockbit resided in the
> page.

That would be nicer, but we're being stingy with page flags.

> Also, compare it to akpm's solution.

ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.12-rc6/2.6.12-rc6-mm1/broken-out/page_uptodate_lock-hashing.patch

It's neat enough, but the randomly-chosen HSL_SIZE is a bit offensive.


* Re: Global spinlock vs local bit spin locks
  2005-06-17  4:46 ` William Lee Irwin III
  2005-06-17  4:52   ` Andrew Morton
@ 2005-06-17  8:35   ` Nick Piggin
  2005-06-17  8:45     ` William Lee Irwin III
  2005-06-17  8:54     ` Andi Kleen
  1 sibling, 2 replies; 13+ messages in thread
From: Nick Piggin @ 2005-06-17  8:35 UTC (permalink / raw)
  To: William Lee Irwin III
  Cc: David S. Miller, anton, linux-arch, Andrew Morton, Peter Keilty

William Lee Irwin III wrote:

> 
> I'd feel far more comfortable with this if the lockbit resided in the
> page. Also, compare it to akpm's solution.
> 

akpm's solution is alright. They perform similarly on the workload in
question. Of course, the bitlock will scale quite a lot better if you
pushed it and will automatically be localised per device and have NUMA
locality, etc.

As far as page flags go - I agree but I didn't want to use one up.
This is very localised and I don't think it is particularly worse
than what was there before, so I think we can get away with it for
the moment.

-- 
SUSE Labs, Novell Inc.

Send instant messages to your online friends http://au.messenger.yahoo.com 


* Re: Global spinlock vs local bit spin locks
  2005-06-17  8:35   ` Nick Piggin
@ 2005-06-17  8:45     ` William Lee Irwin III
  2005-06-17  9:21       ` Nick Piggin
  2005-06-17  8:54     ` Andi Kleen
  1 sibling, 1 reply; 13+ messages in thread
From: William Lee Irwin III @ 2005-06-17  8:45 UTC (permalink / raw)
  To: Nick Piggin
  Cc: David S. Miller, anton, linux-arch, Andrew Morton, Peter Keilty

William Lee Irwin III wrote:
>> I'd feel far more comfortable with this if the lockbit resided in the
>> page. Also, compare it to akpm's solution.

On Fri, Jun 17, 2005 at 06:35:16PM +1000, Nick Piggin wrote:
> akpm's solution is alright. They perform similarly on the workload in
> question. Of course, the bitlock will scale quite a lot better if you
> pushed it and will automatically be localised per device and have NUMA
> locality, etc.
> As far as page flags go - I agree but I didn't want to use one up.
> This is very localised and I don't think it is particularly worse
> than what was there before, so I think we can get away with it for
> the moment.

I'm ambivalent now I guess. I'm not wild about bh's in the first place,
so infecting core code with new dependencies on them doesn't sound hot,
though I still can't help cringing at using a bitflag in the first bh
in the list to protect against concurrent teardown of the bh list,
which relies on the setup/teardown patterns.

Might as well stop bothering people about it, I guess.


-- wli


* Re: Global spinlock vs local bit spin locks
  2005-06-17  4:45 ` Andrew Morton
@ 2005-06-17  8:50   ` Andi Kleen
  0 siblings, 0 replies; 13+ messages in thread
From: Andi Kleen @ 2005-06-17  8:50 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Nick Piggin, davem, anton, linux-arch, Peter.Keilty

> The reason why I went with a hashed lock is that I have memories of being
> beaten up over suckiness of bit_spin_lock().  But I'm now wondering who was
> beating me up, and why ;)

At least on x86/x86-64 I see no reason not to use them.

-Andi


* Re: Global spinlock vs local bit spin locks
  2005-06-17  8:35   ` Nick Piggin
  2005-06-17  8:45     ` William Lee Irwin III
@ 2005-06-17  8:54     ` Andi Kleen
  2005-06-17  9:15       ` William Lee Irwin III
  2005-06-17  9:27       ` Nick Piggin
  1 sibling, 2 replies; 13+ messages in thread
From: Andi Kleen @ 2005-06-17  8:54 UTC (permalink / raw)
  To: Nick Piggin
  Cc: William Lee Irwin III, David S. Miller, anton, linux-arch,
	Andrew Morton, Peter Keilty

On Fri, Jun 17, 2005 at 06:35:16PM +1000, Nick Piggin wrote:
> William Lee Irwin III wrote:
> 
> >
> >I'd feel far more comfortable with this if the lockbit resided in the
> >page. Also, compare it to akpm's solution.
> >
> 
> akpm's solution is alright. They perform similarly on the workload in
> question. Of course, the bitlock will scale quite a lot better if you
> pushed it and will automatically be localised per device and have NUMA
> locality, etc.

The buffer head is not necessarily NUMA local though - there is
some chance that a BH from a different node is reused. struct page
is guaranteed to be node local of the memory.

> 
> As far as page flags go - I agree but I didn't want to use one up.
> This is very localised and I don't think it is particularly worse
> than what was there before, so I think we can get away with it for
> the moment.

I'm starting to think that we need different strategies on 32bit
and 64bit here. 64bit has plenty of bits left; it is just 32bit
that is a problem here.

-Andi


* Re: Global spinlock vs local bit spin locks
  2005-06-17  8:54     ` Andi Kleen
@ 2005-06-17  9:15       ` William Lee Irwin III
  2005-06-17  9:27       ` Nick Piggin
  1 sibling, 0 replies; 13+ messages in thread
From: William Lee Irwin III @ 2005-06-17  9:15 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Nick Piggin, David S. Miller, anton, linux-arch, Andrew Morton,
	Peter Keilty

>> As far as page flags go - I agree but I didn't want to use one up.
>> This is very localised and I don't think it is particularly worse
>> than what was there before, so I think we can get away with it for
>> the moment.

On Fri, Jun 17, 2005 at 10:54:26AM +0200, Andi Kleen wrote:
> I'm starting to think that we need different strategies on 32bit
> and 64bit here. 64bit has plenty of bits left; it is just 32bit
> that is a problem here.

Well, there are three bits destined to die as things stand now (two
swsusp bits via sideband bitmaps, plus PG_reserved), so the immediate
32-bit shortage does have a resolution in sight.


-- wli


* Re: Global spinlock vs local bit spin locks
  2005-06-17  8:45     ` William Lee Irwin III
@ 2005-06-17  9:21       ` Nick Piggin
  2005-06-17  9:28         ` William Lee Irwin III
  0 siblings, 1 reply; 13+ messages in thread
From: Nick Piggin @ 2005-06-17  9:21 UTC (permalink / raw)
  To: William Lee Irwin III
  Cc: David S. Miller, anton, linux-arch, Andrew Morton, Peter Keilty

William Lee Irwin III wrote:

> 
> I'm ambivalent now I guess. I'm not wild about bh's in the first place,
> so infecting core code with new dependencies on them doesn't sound hot,
> though I still can't help cringing at using a bitflag in the first bh
> in the list to protect against concurrent teardown of the bh list,
> which relies on the setup/teardown patterns.
> 

It's not quite as bad as that - there will be no teardown while
any of the buffers are still in flight. The lock is simply to
protect concurrent completion of requests, it could just as
easily go in the last bh.

-- 
SUSE Labs, Novell Inc.



* Re: Global spinlock vs local bit spin locks
  2005-06-17  8:54     ` Andi Kleen
  2005-06-17  9:15       ` William Lee Irwin III
@ 2005-06-17  9:27       ` Nick Piggin
  1 sibling, 0 replies; 13+ messages in thread
From: Nick Piggin @ 2005-06-17  9:27 UTC (permalink / raw)
  To: Andi Kleen
  Cc: William Lee Irwin III, David S. Miller, anton, linux-arch,
	Andrew Morton, Peter Keilty

Andi Kleen wrote:
> On Fri, Jun 17, 2005 at 06:35:16PM +1000, Nick Piggin wrote:
> 
>>William Lee Irwin III wrote:
>>
>>
>>>I'd feel far more comfortable with this if the lockbit resided in the
>>>page. Also, compare it to akpm's solution.
>>>
>>
>>akpm's solution is alright. They perform similarly on the workload in
>>question. Of course, the bitlock will scale quite a lot better if you
>>pushed it and will automatically be localised per device and have NUMA
>>locality, etc.
> 
> 
> The buffer head is not necessarily NUMA local though - there is
> some chance that a BH from a different node is reused.

True, but compared to a hash which is almost guaranteed *not* to
be in local memory for any medium to large NUMA system :)

Though I guess on many systems, it is basically luck whether an IO
submitted on one node takes its completion interrupt on the same
node.

However, on really huge systems like SGI's, they often tend to
lock down devices and jobs quite tightly to nodes so I think
it has some merit.

-- 
SUSE Labs, Novell Inc.



* Re: Global spinlock vs local bit spin locks
  2005-06-17  9:21       ` Nick Piggin
@ 2005-06-17  9:28         ` William Lee Irwin III
  0 siblings, 0 replies; 13+ messages in thread
From: William Lee Irwin III @ 2005-06-17  9:28 UTC (permalink / raw)
  To: Nick Piggin
  Cc: David S. Miller, anton, linux-arch, Andrew Morton, Peter Keilty

William Lee Irwin III wrote:
>> I'm ambivalent now I guess. I'm not wild about bh's in the first place,
>> so infecting core code with new dependencies on them doesn't sound hot,
>> though I still can't help cringing at using a bitflag in the first bh
>> in the list to protect against concurrent teardown of the bh list,
>> which relies on the setup/teardown patterns.

On Fri, Jun 17, 2005 at 07:21:57PM +1000, Nick Piggin wrote:
> It's not quite as bad as that - there will be no teardown while
> any of the buffers are still in flight. The lock is simply to
> protect concurrent completion of requests, it could just as
> easily go in the last bh.

I'd hoped what I had in mind with all that would've been clearer. It
should be clear that I understand it is not overtly broken.


-- wli


end of thread, other threads:[~2005-06-17  9:28 UTC | newest]

Thread overview: 13+ messages
2005-06-17  4:21 Global spinlock vs local bit spin locks Nick Piggin
2005-06-17  4:45 ` Andrew Morton
2005-06-17  8:50   ` Andi Kleen
2005-06-17  4:45 ` David S. Miller
2005-06-17  4:46 ` William Lee Irwin III
2005-06-17  4:52   ` Andrew Morton
2005-06-17  8:35   ` Nick Piggin
2005-06-17  8:45     ` William Lee Irwin III
2005-06-17  9:21       ` Nick Piggin
2005-06-17  9:28         ` William Lee Irwin III
2005-06-17  8:54     ` Andi Kleen
2005-06-17  9:15       ` William Lee Irwin III
2005-06-17  9:27       ` Nick Piggin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox