public inbox for linux-kernel@vger.kernel.org
* Re: [patch 1/5] wait: use lock bitops for __wait_on_bit_lock
  2007-10-25  2:17     ` Nick Piggin
@ 2007-10-11 20:56       ` Pavel Machek
  0 siblings, 0 replies; 9+ messages in thread
From: Pavel Machek @ 2007-10-11 20:56 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Andrew Morton, linux-kernel

Hi!

> > Sorry, I'm just not going to apply a patch like that.
> >
> > I mean, how the heck is anyone else supposed to understand what you're up
> > to?
> 
> Hmm, I might just withdraw this patch 1/5. This is generally a slowpath,
> and it's hard to guarantee that any exported user doesn't rely on the
> full barrier here (not that they would know whether they do or not, let
> alone document the fact).
> 
> I think all the others are pretty clear, though? (read on if no)
...
> > There's a bit of documentation in Documentation/atomic_ops.txt 
> > (probably enough, I guess) but it is totally unobvious to 98.3% of kernel
> > developers when they should use test_and_set_bit() versus
> > test_and_set_bit_lock() and it is far too much work to work out why it was
> > used in __wait_on_bit_lock(), whether it is correct, what advantages it
> > brings and whether and where others should emulate it.
> 
> If you set a bit for the purpose of opening a critical section, then
> you can use this. And clear_bit_unlock to close it.
> 
> The advantages are that it is faster, and the hapless driver writer
> doesn't have to butcher or forget about doing the correct
> smp_mb__before_clear_bit(), or have reviewers wondering exactly WTF
> that barrier is for, etc.

Actually, I'd expect *_lock() to be slower than *()...

> Basically, it is faster, harder to get wrong, and more self-documenting.

So I'd not call it self-documenting.

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [patch 0/5] lock bitops patches
@ 2007-10-24  8:13 npiggin
  2007-10-24  8:13 ` [patch 1/5] wait: use lock bitops for __wait_on_bit_lock npiggin
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: npiggin @ 2007-10-24  8:13 UTC (permalink / raw)
  To: akpm; +Cc: linux-kernel

I messed this up last time. A couple of things have been merged since
then, so here is a resend of the rest.

-- 



* [patch 1/5] wait: use lock bitops for __wait_on_bit_lock
  2007-10-24  8:13 [patch 0/5] lock bitops patches npiggin
@ 2007-10-24  8:13 ` npiggin
  2007-10-25  1:14   ` Andrew Morton
  2007-10-24  8:13 ` [patch 2/5] tasklet: use lock bitops for tasklet lock npiggin
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: npiggin @ 2007-10-24  8:13 UTC (permalink / raw)
  To: akpm; +Cc: linux-kernel

[-- Attachment #1: wait_bit-use-lock-bitops.patch --]
[-- Type: text/plain, Size: 577 bytes --]

Signed-off-by: Nick Piggin <npiggin@suse.de>

---
 kernel/wait.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6/kernel/wait.c
===================================================================
--- linux-2.6.orig/kernel/wait.c
+++ linux-2.6/kernel/wait.c
@@ -195,7 +195,7 @@ __wait_on_bit_lock(wait_queue_head_t *wq
 			if ((ret = (*action)(q->key.flags)))
 				break;
 		}
-	} while (test_and_set_bit(q->key.bit_nr, q->key.flags));
+	} while (test_and_set_bit_lock(q->key.bit_nr, q->key.flags));
 	finish_wait(wq, &q->wait);
 	return ret;
 }

-- 



* [patch 2/5] tasklet: use lock bitops for tasklet lock
  2007-10-24  8:13 [patch 0/5] lock bitops patches npiggin
  2007-10-24  8:13 ` [patch 1/5] wait: use lock bitops for __wait_on_bit_lock npiggin
@ 2007-10-24  8:13 ` npiggin
  2007-10-24  8:13 ` [patch 3/5] mm: rename page lock npiggin
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: npiggin @ 2007-10-24  8:13 UTC (permalink / raw)
  To: akpm; +Cc: linux-kernel

[-- Attachment #1: tasklet_lock-use-lock-bitops.patch --]
[-- Type: text/plain, Size: 845 bytes --]

Signed-off-by: Nick Piggin <npiggin@suse.de>

---
 include/linux/interrupt.h |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

Index: linux-2.6/include/linux/interrupt.h
===================================================================
--- linux-2.6.orig/include/linux/interrupt.h
+++ linux-2.6/include/linux/interrupt.h
@@ -321,13 +321,12 @@ enum
 #ifdef CONFIG_SMP
 static inline int tasklet_trylock(struct tasklet_struct *t)
 {
-	return !test_and_set_bit(TASKLET_STATE_RUN, &(t)->state);
+	return !test_and_set_bit_lock(TASKLET_STATE_RUN, &(t)->state);
 }
 
 static inline void tasklet_unlock(struct tasklet_struct *t)
 {
-	smp_mb__before_clear_bit(); 
-	clear_bit(TASKLET_STATE_RUN, &(t)->state);
+	clear_bit_unlock(TASKLET_STATE_RUN, &(t)->state);
 }
 
 static inline void tasklet_unlock_wait(struct tasklet_struct *t)

-- 



* [patch 3/5] mm: rename page lock
  2007-10-24  8:13 [patch 0/5] lock bitops patches npiggin
  2007-10-24  8:13 ` [patch 1/5] wait: use lock bitops for __wait_on_bit_lock npiggin
  2007-10-24  8:13 ` [patch 2/5] tasklet: use lock bitops for tasklet lock npiggin
@ 2007-10-24  8:13 ` npiggin
  2007-10-24  8:13 ` [patch 4/5] mm: use lock bitops for " npiggin
  2007-10-24  8:13 ` [patch 5/5] fs: use lock bitops for the buffer lock npiggin
  4 siblings, 0 replies; 9+ messages in thread
From: npiggin @ 2007-10-24  8:13 UTC (permalink / raw)
  To: akpm; +Cc: linux-kernel

[-- Attachment #1: page_lock-rename.patch --]
[-- Type: text/plain, Size: 13923 bytes --]

Converting page lock to new locking bitops requires a change of page flag
operation naming, so we might as well convert it to something nicer
(!TestSetPageLocked_Lock => trylock_page, SetPageLocked => set_page_locked).

Signed-off-by: Nick Piggin <npiggin@suse.de>

---
 drivers/scsi/sg.c           |    2 +-
 fs/afs/write.c              |    2 +-
 fs/cifs/file.c              |    2 +-
 fs/jbd/commit.c             |    2 +-
 fs/jbd2/commit.c            |    2 +-
 fs/reiserfs/journal.c       |    2 +-
 fs/splice.c                 |    2 +-
 fs/xfs/linux-2.6/xfs_aops.c |    4 ++--
 include/linux/page-flags.h  |    8 --------
 include/linux/pagemap.h     |   19 +++++++++++++++++--
 mm/filemap.c                |   10 +++++-----
 mm/memory.c                 |    2 +-
 mm/migrate.c                |    4 ++--
 mm/rmap.c                   |    2 +-
 mm/shmem.c                  |    4 ++--
 mm/swap.c                   |    2 +-
 mm/swap_state.c             |    6 +++---
 mm/swapfile.c               |    2 +-
 mm/truncate.c               |    4 ++--
 mm/vmscan.c                 |    4 ++--
 20 files changed, 46 insertions(+), 39 deletions(-)

Index: linux-2.6/include/linux/pagemap.h
===================================================================
--- linux-2.6.orig/include/linux/pagemap.h
+++ linux-2.6/include/linux/pagemap.h
@@ -160,13 +160,28 @@ extern void FASTCALL(__lock_page(struct 
 extern void FASTCALL(__lock_page_nosync(struct page *page));
 extern void FASTCALL(unlock_page(struct page *page));
 
+static inline void set_page_locked(struct page *page)
+{
+	set_bit(PG_locked, &page->flags);
+}
+
+static inline void clear_page_locked(struct page *page)
+{
+	clear_bit(PG_locked, &page->flags);
+}
+
+static inline int trylock_page(struct page *page)
+{
+	return !test_and_set_bit(PG_locked, &page->flags);
+}
+
 /*
  * lock_page may only be called if we have the page's inode pinned.
  */
 static inline void lock_page(struct page *page)
 {
 	might_sleep();
-	if (TestSetPageLocked(page))
+	if (!trylock_page(page))
 		__lock_page(page);
 }
 
@@ -177,7 +192,7 @@ static inline void lock_page(struct page
 static inline void lock_page_nosync(struct page *page)
 {
 	might_sleep();
-	if (TestSetPageLocked(page))
+	if (!trylock_page(page))
 		__lock_page_nosync(page);
 }
 	
Index: linux-2.6/drivers/scsi/sg.c
===================================================================
--- linux-2.6.orig/drivers/scsi/sg.c
+++ linux-2.6/drivers/scsi/sg.c
@@ -1713,7 +1713,7 @@ st_map_user_pages(struct scatterlist *sg
                  */
 		flush_dcache_page(pages[i]);
 		/* ?? Is locking needed? I don't think so */
-		/* if (TestSetPageLocked(pages[i]))
+		/* if (!trylock_page(pages[i]))
 		   goto out_unlock; */
         }
 
Index: linux-2.6/fs/cifs/file.c
===================================================================
--- linux-2.6.orig/fs/cifs/file.c
+++ linux-2.6/fs/cifs/file.c
@@ -1251,7 +1251,7 @@ retry:
 
 			if (first < 0)
 				lock_page(page);
-			else if (TestSetPageLocked(page))
+			else if (!trylock_page(page))
 				break;
 
 			if (unlikely(page->mapping != mapping)) {
Index: linux-2.6/fs/jbd/commit.c
===================================================================
--- linux-2.6.orig/fs/jbd/commit.c
+++ linux-2.6/fs/jbd/commit.c
@@ -63,7 +63,7 @@ static void release_buffer_page(struct b
 		goto nope;
 
 	/* OK, it's a truncated page */
-	if (TestSetPageLocked(page))
+	if (!trylock_page(page))
 		goto nope;
 
 	page_cache_get(page);
Index: linux-2.6/fs/jbd2/commit.c
===================================================================
--- linux-2.6.orig/fs/jbd2/commit.c
+++ linux-2.6/fs/jbd2/commit.c
@@ -63,7 +63,7 @@ static void release_buffer_page(struct b
 		goto nope;
 
 	/* OK, it's a truncated page */
-	if (TestSetPageLocked(page))
+	if (!trylock_page(page))
 		goto nope;
 
 	page_cache_get(page);
Index: linux-2.6/fs/xfs/linux-2.6/xfs_aops.c
===================================================================
--- linux-2.6.orig/fs/xfs/linux-2.6/xfs_aops.c
+++ linux-2.6/fs/xfs/linux-2.6/xfs_aops.c
@@ -659,7 +659,7 @@ xfs_probe_cluster(
 			} else
 				pg_offset = PAGE_CACHE_SIZE;
 
-			if (page->index == tindex && !TestSetPageLocked(page)) {
+			if (page->index == tindex && trylock_page(page)) {
 				pg_len = xfs_probe_page(page, pg_offset, mapped);
 				unlock_page(page);
 			}
@@ -743,7 +743,7 @@ xfs_convert_page(
 
 	if (page->index != tindex)
 		goto fail;
-	if (TestSetPageLocked(page))
+	if (!trylock_page(page))
 		goto fail;
 	if (PageWriteback(page))
 		goto fail_unlock_page;
Index: linux-2.6/include/linux/page-flags.h
===================================================================
--- linux-2.6.orig/include/linux/page-flags.h
+++ linux-2.6/include/linux/page-flags.h
@@ -113,14 +113,6 @@
  */
 #define PageLocked(page)		\
 		test_bit(PG_locked, &(page)->flags)
-#define SetPageLocked(page)		\
-		set_bit(PG_locked, &(page)->flags)
-#define TestSetPageLocked(page)		\
-		test_and_set_bit(PG_locked, &(page)->flags)
-#define ClearPageLocked(page)		\
-		clear_bit(PG_locked, &(page)->flags)
-#define TestClearPageLocked(page)	\
-		test_and_clear_bit(PG_locked, &(page)->flags)
 
 #define PageError(page)		test_bit(PG_error, &(page)->flags)
 #define SetPageError(page)	set_bit(PG_error, &(page)->flags)
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c
+++ linux-2.6/mm/memory.c
@@ -1559,7 +1559,7 @@ static int do_wp_page(struct mm_struct *
 	 * not dirty accountable.
 	 */
 	if (PageAnon(old_page)) {
-		if (!TestSetPageLocked(old_page)) {
+		if (trylock_page(old_page)) {
 			reuse = can_share_swap_page(old_page);
 			unlock_page(old_page);
 		}
Index: linux-2.6/mm/migrate.c
===================================================================
--- linux-2.6.orig/mm/migrate.c
+++ linux-2.6/mm/migrate.c
@@ -569,7 +569,7 @@ static int move_to_new_page(struct page 
 	 * establishing additional references. We are the only one
 	 * holding a reference to the new page at this point.
 	 */
-	if (TestSetPageLocked(newpage))
+	if (!trylock_page(newpage))
 		BUG();
 
 	/* Prepare mapping for the new page.*/
@@ -622,7 +622,7 @@ static int unmap_and_move(new_page_t get
 		goto move_newpage;
 
 	rc = -EAGAIN;
-	if (TestSetPageLocked(page)) {
+	if (!trylock_page(page)) {
 		if (!force)
 			goto move_newpage;
 		lock_page(page);
Index: linux-2.6/mm/rmap.c
===================================================================
--- linux-2.6.orig/mm/rmap.c
+++ linux-2.6/mm/rmap.c
@@ -401,7 +401,7 @@ int page_referenced(struct page *page, i
 			referenced += page_referenced_anon(page);
 		else if (is_locked)
 			referenced += page_referenced_file(page);
-		else if (TestSetPageLocked(page))
+		else if (!trylock_page(page))
 			referenced++;
 		else {
 			if (page->mapping)
Index: linux-2.6/mm/shmem.c
===================================================================
--- linux-2.6.orig/mm/shmem.c
+++ linux-2.6/mm/shmem.c
@@ -1167,7 +1167,7 @@ repeat:
 		}
 
 		/* We have to do this with page locked to prevent races */
-		if (TestSetPageLocked(swappage)) {
+		if (!trylock_page(swappage)) {
 			shmem_swp_unmap(entry);
 			spin_unlock(&info->lock);
 			wait_on_page_locked(swappage);
@@ -1226,7 +1226,7 @@ repeat:
 		shmem_swp_unmap(entry);
 		filepage = find_get_page(mapping, idx);
 		if (filepage &&
-		    (!PageUptodate(filepage) || TestSetPageLocked(filepage))) {
+		    (!PageUptodate(filepage) || !trylock_page(filepage))) {
 			spin_unlock(&info->lock);
 			wait_on_page_locked(filepage);
 			page_cache_release(filepage);
Index: linux-2.6/mm/swap.c
===================================================================
--- linux-2.6.orig/mm/swap.c
+++ linux-2.6/mm/swap.c
@@ -455,7 +455,7 @@ void pagevec_strip(struct pagevec *pvec)
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
 
-		if (PagePrivate(page) && !TestSetPageLocked(page)) {
+		if (PagePrivate(page) && trylock_page(page)) {
 			if (PagePrivate(page))
 				try_to_release_page(page, 0);
 			unlock_page(page);
Index: linux-2.6/mm/swap_state.c
===================================================================
--- linux-2.6.orig/mm/swap_state.c
+++ linux-2.6/mm/swap_state.c
@@ -104,13 +104,13 @@ static int add_to_swap_cache(struct page
 		INC_CACHE_INFO(noent_race);
 		return -ENOENT;
 	}
-	SetPageLocked(page);
+	set_page_locked(page);
 	error = __add_to_swap_cache(page, entry, GFP_KERNEL);
 	/*
 	 * Anon pages are already on the LRU, we don't run lru_cache_add here.
 	 */
 	if (error) {
-		ClearPageLocked(page);
+		clear_page_locked(page);
 		swap_free(entry);
 		if (error == -EEXIST)
 			INC_CACHE_INFO(exist_race);
@@ -255,7 +255,7 @@ int move_from_swap_cache(struct page *pa
  */
 static inline void free_swap_cache(struct page *page)
 {
-	if (PageSwapCache(page) && !TestSetPageLocked(page)) {
+	if (PageSwapCache(page) && trylock_page(page)) {
 		remove_exclusive_swap_page(page);
 		unlock_page(page);
 	}
Index: linux-2.6/mm/swapfile.c
===================================================================
--- linux-2.6.orig/mm/swapfile.c
+++ linux-2.6/mm/swapfile.c
@@ -401,7 +401,7 @@ void free_swap_and_cache(swp_entry_t ent
 	if (p) {
 		if (swap_entry_free(p, swp_offset(entry)) == 1) {
 			page = find_get_page(&swapper_space, entry.val);
-			if (page && unlikely(TestSetPageLocked(page))) {
+			if (page && unlikely(!trylock_page(page))) {
 				page_cache_release(page);
 				page = NULL;
 			}
Index: linux-2.6/mm/truncate.c
===================================================================
--- linux-2.6.orig/mm/truncate.c
+++ linux-2.6/mm/truncate.c
@@ -189,7 +189,7 @@ void truncate_inode_pages_range(struct a
 			if (page_index > next)
 				next = page_index;
 			next++;
-			if (TestSetPageLocked(page))
+			if (!trylock_page(page))
 				continue;
 			if (PageWriteback(page)) {
 				unlock_page(page);
@@ -282,7 +282,7 @@ unsigned long __invalidate_mapping_pages
 			pgoff_t index;
 			int lock_failed;
 
-			lock_failed = TestSetPageLocked(page);
+			lock_failed = !trylock_page(page);
 
 			/*
 			 * We really shouldn't be looking at the ->index of an
Index: linux-2.6/mm/vmscan.c
===================================================================
--- linux-2.6.orig/mm/vmscan.c
+++ linux-2.6/mm/vmscan.c
@@ -461,7 +461,7 @@ static unsigned long shrink_page_list(st
 		page = lru_to_page(page_list);
 		list_del(&page->lru);
 
-		if (TestSetPageLocked(page))
+		if (!trylock_page(page))
 			goto keep;
 
 		VM_BUG_ON(PageActive(page));
@@ -547,7 +547,7 @@ static unsigned long shrink_page_list(st
 				 * A synchronous write - probably a ramdisk.  Go
 				 * ahead and try to reclaim the page.
 				 */
-				if (TestSetPageLocked(page))
+				if (!trylock_page(page))
 					goto keep;
 				if (PageDirty(page) || PageWriteback(page))
 					goto keep_locked;
Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -432,7 +432,7 @@ int filemap_write_and_wait_range(struct 
  * @gfp_mask:	page allocation mode
  *
  * This function is used to add newly allocated pagecache pages;
- * the page is new, so we can just run SetPageLocked() against it.
+ * the page is new, so we can just run set_page_locked() against it.
  * The other page state flags were set by rmqueue().
  *
  * This function does not add the page to the LRU.  The caller must do that.
@@ -447,7 +447,7 @@ int add_to_page_cache(struct page *page,
 		error = radix_tree_insert(&mapping->page_tree, offset, page);
 		if (!error) {
 			page_cache_get(page);
-			SetPageLocked(page);
+			set_page_locked(page);
 			page->mapping = mapping;
 			page->index = offset;
 			mapping->nrpages++;
@@ -536,7 +536,7 @@ EXPORT_SYMBOL(wait_on_page_bit);
 void fastcall unlock_page(struct page *page)
 {
 	smp_mb__before_clear_bit();
-	if (!TestClearPageLocked(page))
+	if (!test_and_clear_bit(PG_locked, &page->flags))
 		BUG();
 	smp_mb__after_clear_bit(); 
 	wake_up_page(page, PG_locked);
@@ -628,7 +628,7 @@ repeat:
 	page = radix_tree_lookup(&mapping->page_tree, offset);
 	if (page) {
 		page_cache_get(page);
-		if (TestSetPageLocked(page)) {
+		if (!trylock_page(page)) {
 			read_unlock_irq(&mapping->tree_lock);
 			__lock_page(page);
 
@@ -800,7 +800,7 @@ grab_cache_page_nowait(struct address_sp
 	struct page *page = find_get_page(mapping, index);
 
 	if (page) {
-		if (!TestSetPageLocked(page))
+		if (trylock_page(page))
 			return page;
 		page_cache_release(page);
 		return NULL;
Index: linux-2.6/fs/splice.c
===================================================================
--- linux-2.6.orig/fs/splice.c
+++ linux-2.6/fs/splice.c
@@ -364,7 +364,7 @@ __generic_file_splice_read(struct file *
 			 * for an in-flight io page
 			 */
 			if (flags & SPLICE_F_NONBLOCK) {
-				if (TestSetPageLocked(page))
+				if (!trylock_page(page))
 					break;
 			} else
 				lock_page(page);
Index: linux-2.6/fs/afs/write.c
===================================================================
--- linux-2.6.orig/fs/afs/write.c
+++ linux-2.6/fs/afs/write.c
@@ -404,7 +404,7 @@ static int afs_write_back_from_locked_pa
 			page = pages[loop];
 			if (page->index > wb->last)
 				break;
-			if (TestSetPageLocked(page))
+			if (!trylock_page(page))
 				break;
 			if (!PageDirty(page) ||
 			    page_private(page) != (unsigned long) wb) {
Index: linux-2.6/fs/reiserfs/journal.c
===================================================================
--- linux-2.6.orig/fs/reiserfs/journal.c
+++ linux-2.6/fs/reiserfs/journal.c
@@ -629,7 +629,7 @@ static int journal_list_still_alive(stru
 static void release_buffer_page(struct buffer_head *bh)
 {
 	struct page *page = bh->b_page;
-	if (!page->mapping && !TestSetPageLocked(page)) {
+	if (!page->mapping && trylock_page(page)) {
 		page_cache_get(page);
 		put_bh(bh);
 		if (!page->mapping)

-- 



* [patch 4/5] mm: use lock bitops for page lock
  2007-10-24  8:13 [patch 0/5] lock bitops patches npiggin
                   ` (2 preceding siblings ...)
  2007-10-24  8:13 ` [patch 3/5] mm: rename page lock npiggin
@ 2007-10-24  8:13 ` npiggin
  2007-10-24  8:13 ` [patch 5/5] fs: use lock bitops for the buffer lock npiggin
  4 siblings, 0 replies; 9+ messages in thread
From: npiggin @ 2007-10-24  8:13 UTC (permalink / raw)
  To: akpm; +Cc: linux-kernel

[-- Attachment #1: page_lock-use-lock-bitops2.patch --]
[-- Type: text/plain, Size: 2363 bytes --]

Convert page lock to new locking bitops. Mark it likely to succeed.

Signed-off-by: Nick Piggin <npiggin@suse.de>

---
 include/linux/pagemap.h |    2 +-
 mm/filemap.c            |   13 +++++--------
 mm/swapfile.c           |    2 +-
 3 files changed, 7 insertions(+), 10 deletions(-)

Index: linux-2.6/mm/filemap.c
===================================================================
--- linux-2.6.orig/mm/filemap.c
+++ linux-2.6/mm/filemap.c
@@ -528,17 +528,14 @@ EXPORT_SYMBOL(wait_on_page_bit);
  * mechananism between PageLocked pages and PageWriteback pages is shared.
  * But that's OK - sleepers in wait_on_page_writeback() just go back to sleep.
  *
- * The first mb is necessary to safely close the critical section opened by the
- * TestSetPageLocked(), the second mb is necessary to enforce ordering between
- * the clear_bit and the read of the waitqueue (to avoid SMP races with a
- * parallel wait_on_page_locked()).
+ * The mb is necessary to enforce ordering between the clear_bit and the read
+ * of the waitqueue (to avoid SMP races with a parallel wait_on_page_locked()).
  */
 void fastcall unlock_page(struct page *page)
 {
-	smp_mb__before_clear_bit();
-	if (!test_and_clear_bit(PG_locked, &page->flags))
-		BUG();
-	smp_mb__after_clear_bit(); 
+	VM_BUG_ON(!PageLocked(page));
+	clear_bit_unlock(PG_locked, &page->flags);
+	smp_mb__after_clear_bit();
 	wake_up_page(page, PG_locked);
 }
 EXPORT_SYMBOL(unlock_page);
Index: linux-2.6/include/linux/pagemap.h
===================================================================
--- linux-2.6.orig/include/linux/pagemap.h
+++ linux-2.6/include/linux/pagemap.h
@@ -172,7 +172,7 @@ static inline void clear_page_locked(str
 
 static inline int trylock_page(struct page *page)
 {
-	return !test_and_set_bit(PG_locked, &page->flags);
+	return (likely(!test_and_set_bit_lock(PG_locked, &page->flags)));
 }
 
 /*
Index: linux-2.6/mm/swapfile.c
===================================================================
--- linux-2.6.orig/mm/swapfile.c
+++ linux-2.6/mm/swapfile.c
@@ -401,7 +401,7 @@ void free_swap_and_cache(swp_entry_t ent
 	if (p) {
 		if (swap_entry_free(p, swp_offset(entry)) == 1) {
 			page = find_get_page(&swapper_space, entry.val);
-			if (page && unlikely(!trylock_page(page))) {
+			if (page && !trylock_page(page)) {
 				page_cache_release(page);
 				page = NULL;
 			}

-- 



* [patch 5/5] fs: use lock bitops for the buffer lock
  2007-10-24  8:13 [patch 0/5] lock bitops patches npiggin
                   ` (3 preceding siblings ...)
  2007-10-24  8:13 ` [patch 4/5] mm: use lock bitops for " npiggin
@ 2007-10-24  8:13 ` npiggin
  4 siblings, 0 replies; 9+ messages in thread
From: npiggin @ 2007-10-24  8:13 UTC (permalink / raw)
  To: akpm; +Cc: linux-kernel

[-- Attachment #1: buffer_lock-use-lock-bitops.patch --]
[-- Type: text/plain, Size: 6100 bytes --]

Like the page lock change, this also requires a name change, so convert the
raw test_and_set bitop to a trylock.

Signed-off-by: Nick Piggin <npiggin@suse.de>

---
 fs/buffer.c                 |    7 +++----
 fs/jbd/commit.c             |    2 +-
 fs/jbd2/commit.c            |    2 +-
 fs/ntfs/aops.c              |    2 +-
 fs/ntfs/compress.c          |    2 +-
 fs/ntfs/mft.c               |    4 ++--
 fs/reiserfs/inode.c         |    2 +-
 fs/reiserfs/journal.c       |    4 ++--
 include/linux/buffer_head.h |    8 ++++++--
 9 files changed, 18 insertions(+), 15 deletions(-)

Index: linux-2.6/fs/jbd/commit.c
===================================================================
--- linux-2.6.orig/fs/jbd/commit.c
+++ linux-2.6/fs/jbd/commit.c
@@ -208,7 +208,7 @@ write_out_data:
 		 * blocking lock_buffer().
 		 */
 		if (buffer_dirty(bh)) {
-			if (test_set_buffer_locked(bh)) {
+			if (!trylock_buffer(bh)) {
 				BUFFER_TRACE(bh, "needs blocking lock");
 				spin_unlock(&journal->j_list_lock);
 				/* Write out all data to prevent deadlocks */
Index: linux-2.6/fs/jbd2/commit.c
===================================================================
--- linux-2.6.orig/fs/jbd2/commit.c
+++ linux-2.6/fs/jbd2/commit.c
@@ -208,7 +208,7 @@ write_out_data:
 		 * blocking lock_buffer().
 		 */
 		if (buffer_dirty(bh)) {
-			if (test_set_buffer_locked(bh)) {
+			if (!trylock_buffer(bh)) {
 				BUFFER_TRACE(bh, "needs blocking lock");
 				spin_unlock(&journal->j_list_lock);
 				/* Write out all data to prevent deadlocks */
Index: linux-2.6/fs/buffer.c
===================================================================
--- linux-2.6.orig/fs/buffer.c
+++ linux-2.6/fs/buffer.c
@@ -76,8 +76,7 @@ EXPORT_SYMBOL(__lock_buffer);
 
 void fastcall unlock_buffer(struct buffer_head *bh)
 {
-	smp_mb__before_clear_bit();
-	clear_buffer_locked(bh);
+	clear_bit_unlock(BH_Lock, &bh->b_state);
 	smp_mb__after_clear_bit();
 	wake_up_bit(&bh->b_state, BH_Lock);
 }
@@ -1688,7 +1687,7 @@ static int __block_write_full_page(struc
 		 */
 		if (wbc->sync_mode != WB_SYNC_NONE || !wbc->nonblocking) {
 			lock_buffer(bh);
-		} else if (test_set_buffer_locked(bh)) {
+		} else if (!trylock_buffer(bh)) {
 			redirty_page_for_writepage(wbc, page);
 			continue;
 		}
@@ -2943,7 +2942,7 @@ void ll_rw_block(int rw, int nr, struct 
 
 		if (rw == SWRITE)
 			lock_buffer(bh);
-		else if (test_set_buffer_locked(bh))
+		else if (!trylock_buffer(bh))
 			continue;
 
 		if (rw == WRITE || rw == SWRITE) {
Index: linux-2.6/include/linux/buffer_head.h
===================================================================
--- linux-2.6.orig/include/linux/buffer_head.h
+++ linux-2.6/include/linux/buffer_head.h
@@ -115,7 +115,6 @@ BUFFER_FNS(Uptodate, uptodate)
 BUFFER_FNS(Dirty, dirty)
 TAS_BUFFER_FNS(Dirty, dirty)
 BUFFER_FNS(Lock, locked)
-TAS_BUFFER_FNS(Lock, locked)
 BUFFER_FNS(Req, req)
 TAS_BUFFER_FNS(Req, req)
 BUFFER_FNS(Mapped, mapped)
@@ -318,10 +317,15 @@ static inline void wait_on_buffer(struct
 		__wait_on_buffer(bh);
 }
 
+static inline int trylock_buffer(struct buffer_head *bh)
+{
+	return likely(!test_and_set_bit_lock(BH_Lock, &bh->b_state));
+}
+
 static inline void lock_buffer(struct buffer_head *bh)
 {
 	might_sleep();
-	if (test_set_buffer_locked(bh))
+	if (!trylock_buffer(bh))
 		__lock_buffer(bh);
 }
 
Index: linux-2.6/fs/ntfs/aops.c
===================================================================
--- linux-2.6.orig/fs/ntfs/aops.c
+++ linux-2.6/fs/ntfs/aops.c
@@ -1183,7 +1183,7 @@ lock_retry_remap:
 		tbh = bhs[i];
 		if (!tbh)
 			continue;
-		if (unlikely(test_set_buffer_locked(tbh)))
+		if (!trylock_buffer(tbh))
 			BUG();
 		/* The buffer dirty state is now irrelevant, just clean it. */
 		clear_buffer_dirty(tbh);
Index: linux-2.6/fs/ntfs/compress.c
===================================================================
--- linux-2.6.orig/fs/ntfs/compress.c
+++ linux-2.6/fs/ntfs/compress.c
@@ -655,7 +655,7 @@ lock_retry_remap:
 	for (i = 0; i < nr_bhs; i++) {
 		struct buffer_head *tbh = bhs[i];
 
-		if (unlikely(test_set_buffer_locked(tbh)))
+		if (!trylock_buffer(tbh))
 			continue;
 		if (unlikely(buffer_uptodate(tbh))) {
 			unlock_buffer(tbh);
Index: linux-2.6/fs/ntfs/mft.c
===================================================================
--- linux-2.6.orig/fs/ntfs/mft.c
+++ linux-2.6/fs/ntfs/mft.c
@@ -586,7 +586,7 @@ int ntfs_sync_mft_mirror(ntfs_volume *vo
 		for (i_bhs = 0; i_bhs < nr_bhs; i_bhs++) {
 			struct buffer_head *tbh = bhs[i_bhs];
 
-			if (unlikely(test_set_buffer_locked(tbh)))
+			if (!trylock_buffer(tbh))
 				BUG();
 			BUG_ON(!buffer_uptodate(tbh));
 			clear_buffer_dirty(tbh);
@@ -779,7 +779,7 @@ int write_mft_record_nolock(ntfs_inode *
 	for (i_bhs = 0; i_bhs < nr_bhs; i_bhs++) {
 		struct buffer_head *tbh = bhs[i_bhs];
 
-		if (unlikely(test_set_buffer_locked(tbh)))
+		if (!trylock_buffer(tbh))
 			BUG();
 		BUG_ON(!buffer_uptodate(tbh));
 		clear_buffer_dirty(tbh);
Index: linux-2.6/fs/reiserfs/inode.c
===================================================================
--- linux-2.6.orig/fs/reiserfs/inode.c
+++ linux-2.6/fs/reiserfs/inode.c
@@ -2433,7 +2433,7 @@ static int reiserfs_write_full_page(stru
 		if (wbc->sync_mode != WB_SYNC_NONE || !wbc->nonblocking) {
 			lock_buffer(bh);
 		} else {
-			if (test_set_buffer_locked(bh)) {
+			if (!trylock_buffer(bh)) {
 				redirty_page_for_writepage(wbc, page);
 				continue;
 			}
Index: linux-2.6/fs/reiserfs/journal.c
===================================================================
--- linux-2.6.orig/fs/reiserfs/journal.c
+++ linux-2.6/fs/reiserfs/journal.c
@@ -857,7 +857,7 @@ static int write_ordered_buffers(spinloc
 		jh = JH_ENTRY(list->next);
 		bh = jh->bh;
 		get_bh(bh);
-		if (test_set_buffer_locked(bh)) {
+		if (!trylock_buffer(bh)) {
 			if (!buffer_dirty(bh)) {
 				list_move(&jh->list, &tmp);
 				goto loop_next;
@@ -3877,7 +3877,7 @@ int reiserfs_prepare_for_journal(struct 
 {
 	PROC_INFO_INC(p_s_sb, journal.prepare);
 
-	if (test_set_buffer_locked(bh)) {
+	if (!trylock_buffer(bh)) {
 		if (!wait)
 			return 0;
 		lock_buffer(bh);

-- 



* Re: [patch 1/5] wait: use lock bitops for __wait_on_bit_lock
  2007-10-24  8:13 ` [patch 1/5] wait: use lock bitops for __wait_on_bit_lock npiggin
@ 2007-10-25  1:14   ` Andrew Morton
  2007-10-25  2:17     ` Nick Piggin
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2007-10-25  1:14 UTC (permalink / raw)
  To: Nick Piggin; +Cc: linux-kernel

On Wed, 24 Oct 2007 18:13:06 +1000 npiggin@nick.local0.net wrote:

> Signed-off-by: Nick Piggin <npiggin@suse.de>
> 
> ---
>  kernel/wait.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> Index: linux-2.6/kernel/wait.c
> ===================================================================
> --- linux-2.6.orig/kernel/wait.c
> +++ linux-2.6/kernel/wait.c
> @@ -195,7 +195,7 @@ __wait_on_bit_lock(wait_queue_head_t *wq
>  			if ((ret = (*action)(q->key.flags)))
>  				break;
>  		}
> -	} while (test_and_set_bit(q->key.bit_nr, q->key.flags));
> +	} while (test_and_set_bit_lock(q->key.bit_nr, q->key.flags));
>  	finish_wait(wq, &q->wait);
>  	return ret;
>  }
> 

Sorry, I'm just not going to apply a patch like that.

I mean, how the heck is anyone else supposed to understand what you're up
to?  There's a bit of documentation in Documentation/atomic_ops.txt
(probably enough, I guess) but it is totally unobvious to 98.3% of kernel
developers when they should use test_and_set_bit() versus
test_and_set_bit_lock() and it is far too much work to work out why it was
used in __wait_on_bit_lock(), whether it is correct, what advantages it
brings and whether and where others should emulate it.

So in my opinion this submission isn't of sufficient quality to be
included in Linux.

IOW: please write changelogs.  Preferably good ones.



* Re: [patch 1/5] wait: use lock bitops for __wait_on_bit_lock
  2007-10-25  1:14   ` Andrew Morton
@ 2007-10-25  2:17     ` Nick Piggin
  2007-10-11 20:56       ` Pavel Machek
  0 siblings, 1 reply; 9+ messages in thread
From: Nick Piggin @ 2007-10-25  2:17 UTC (permalink / raw)
  To: Andrew Morton; +Cc: linux-kernel

On Thursday 25 October 2007 11:14, Andrew Morton wrote:
> On Wed, 24 Oct 2007 18:13:06 +1000 npiggin@nick.local0.net wrote:
> > Signed-off-by: Nick Piggin <npiggin@suse.de>
> >
> > ---
> >  kernel/wait.c |    2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > Index: linux-2.6/kernel/wait.c
> > ===================================================================
> > --- linux-2.6.orig/kernel/wait.c
> > +++ linux-2.6/kernel/wait.c
> > @@ -195,7 +195,7 @@ __wait_on_bit_lock(wait_queue_head_t *wq
> >  			if ((ret = (*action)(q->key.flags)))
> >  				break;
> >  		}
> > -	} while (test_and_set_bit(q->key.bit_nr, q->key.flags));
> > +	} while (test_and_set_bit_lock(q->key.bit_nr, q->key.flags));
> >  	finish_wait(wq, &q->wait);
> >  	return ret;
> >  }
>
> Sorry, I'm just not going to apply a patch like that.
>
> I mean, how the heck is anyone else supposed to understand what you're up
> to?

Hmm, I might just withdraw this patch 1/5. This is generally a slowpath,
and it's hard to guarantee that any exported user doesn't rely on the
full barrier here (not that they would know whether they do or not, let
alone document the fact).

I think all the others are pretty clear, though? (read on if no)


> There's a bit of documentation in Documentation/atomic_ops.txt 
> (probably enough, I guess) but it is totally unobvious to 98.3% of kernel
> developers when they should use test_and_set_bit() versus
> test_and_set_bit_lock() and it is far too much work to work out why it was
> used in __wait_on_bit_lock(), whether it is correct, what advantages it
> brings and whether and where others should emulate it.

If you set a bit for the purpose of opening a critical section, then
you can use this. And clear_bit_unlock to close it.

The advantages are that it is faster, and the hapless driver writer
doesn't have to butcher or forget about doing the correct
smp_mb__before_clear_bit(), or have reviewers wondering exactly WTF
that barrier is for, etc.

Basically, it is faster, harder to get wrong, and more self-documenting.

In general, others should not emulate it, because they should be
using our regular locking primitives instead. If they really must
roll their own, then using these would be nice, IMO.


> So in my opinion this submission isn't of sufficient quality to be
> included in Linux.
>
> IOW: please write changelogs.  Preferably good ones.

2/5: "tasklet_trylock opens a critical section. tasklet_unlock closes it.
      hence, _lock bitops can be used for the bitops"

4/5: "trylock_page opens a critical section. unlock_page closes it. hence,
      _lock bitops can be used for the bitops"

5/5: "trylock_buffer opens a critical section. unlock_buffer closes it.
      hence, _lock bitops can be used for the bitops"

Are those helpful?


end of thread, other threads:[~2007-11-01 14:34 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2007-10-24  8:13 [patch 0/5] lock bitops patches npiggin
2007-10-24  8:13 ` [patch 1/5] wait: use lock bitops for __wait_on_bit_lock npiggin
2007-10-25  1:14   ` Andrew Morton
2007-10-25  2:17     ` Nick Piggin
2007-10-11 20:56       ` Pavel Machek
2007-10-24  8:13 ` [patch 2/5] tasklet: use lock bitops for tasklet lock npiggin
2007-10-24  8:13 ` [patch 3/5] mm: rename page lock npiggin
2007-10-24  8:13 ` [patch 4/5] mm: use lock bitops for " npiggin
2007-10-24  8:13 ` [patch 5/5] fs: use lock bitops for the buffer lock npiggin
