* [PATCH 0/2 RFC v3] Livelock avoidance for data integrity writeback @ 2010-06-04 18:40 Jan Kara 2010-06-04 18:40 ` [PATCH 1/2] radix-tree: Implement function radix_tree_gang_tag_if_tagged Jan Kara 2010-06-04 18:40 ` [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging Jan Kara 0 siblings, 2 replies; 10+ messages in thread From: Jan Kara @ 2010-06-04 18:40 UTC (permalink / raw) To: linux-fsdevel; +Cc: Andrew Morton, npiggin, david, linux-mm Hi, I've revived my patches to implement livelock avoidance for data integrity writes. Due to concerns that tagging pages before writeout could be too costly for WB_SYNC_NONE mode (where we stop after nr_to_write pages), I've changed the patch to use page tagging only in WB_SYNC_ALL mode, where we are sure that we write out all the tagged pages. Later, we can think about using tagging for livelock avoidance in WB_SYNC_NONE mode as well... As always, comments are welcome. Honza
* [PATCH 1/2] radix-tree: Implement function radix_tree_gang_tag_if_tagged 2010-06-04 18:40 [PATCH 0/2 RFC v3] Livelock avoidance for data integrity writeback Jan Kara @ 2010-06-04 18:40 ` Jan Kara 2010-06-09 23:30 ` Andrew Morton 2010-06-04 18:40 ` [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging Jan Kara 1 sibling, 1 reply; 10+ messages in thread From: Jan Kara @ 2010-06-04 18:40 UTC (permalink / raw) To: linux-fsdevel; +Cc: Andrew Morton, npiggin, david, linux-mm, Jan Kara Implement function for setting one tag if another tag is set for each item in given range. Signed-off-by: Jan Kara <jack@suse.cz> --- include/linux/radix-tree.h | 3 ++ lib/radix-tree.c | 82 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+), 0 deletions(-) diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h index 55ca73c..efdfb07 100644 --- a/include/linux/radix-tree.h +++ b/include/linux/radix-tree.h @@ -192,6 +192,9 @@ unsigned int radix_tree_gang_lookup_tag_slot(struct radix_tree_root *root, void ***results, unsigned long first_index, unsigned int max_items, unsigned int tag); +unsigned long radix_tree_gang_tag_if_tagged(struct radix_tree_root *root, + unsigned long first_index, unsigned long last_index, + unsigned int fromtag, unsigned int totag); int radix_tree_tagged(struct radix_tree_root *root, unsigned int tag); static inline void radix_tree_preload_end(void) diff --git a/lib/radix-tree.c b/lib/radix-tree.c index 05da38b..c4595b2 100644 --- a/lib/radix-tree.c +++ b/lib/radix-tree.c @@ -609,6 +609,88 @@ int radix_tree_tag_get(struct radix_tree_root *root, EXPORT_SYMBOL(radix_tree_tag_get); /** + * radix_tree_gang_tag_if_tagged - for each item in given range set given + * tag if item has another tag set + * @root: radix tree root + * @first_index: starting index of a range to scan + * @last_index: last index of a range to scan + * @iftag: tag index to test + * @settag: tag index to set if tested tag is set + * + * This function scans range of radix tree from first_index to last_index. + * For each item in the range if iftag is set, the function sets also + * settag. + * + * The function returns number of leaves where the tag was set. + */ +unsigned long radix_tree_gang_tag_if_tagged(struct radix_tree_root *root, + unsigned long first_index, unsigned long last_index, + unsigned int iftag, unsigned int settag) +{ + unsigned int height = root->height, shift; + unsigned long tagged = 0, index = first_index; + struct radix_tree_node *open_slots[height], *slot; + + last_index = min(last_index, radix_tree_maxindex(height)); + if (first_index > last_index) + return 0; + if (!root_tag_get(root, iftag)) + return 0; + if (height == 0) { + root_tag_set(root, settag); + return 1; + } + + shift = (height - 1) * RADIX_TREE_MAP_SHIFT; + slot = radix_tree_indirect_to_ptr(root->rnode); + + for (;;) { + int offset; + + offset = (index >> shift) & RADIX_TREE_MAP_MASK; + if (!slot->slots[offset]) + goto next; + if (!tag_get(slot, iftag, offset)) + goto next; + tag_set(slot, settag, offset); + if (height == 1) { + tagged++; + goto next; + } + /* Go down one level */ + height--; + shift -= RADIX_TREE_MAP_SHIFT; + open_slots[height] = slot; + slot = slot->slots[offset]; + continue; +next: + /* Go to next item at level determined by 'shift' */ + index = ((index >> shift) + 1) << shift; + if (index > last_index) + break; + while (((index >> shift) & RADIX_TREE_MAP_MASK) == 0) { + /* + * We've fully scanned this node. Go up. 
Because + * last_index is guaranteed to be in the tree, what + * we do below cannot wander astray. + */ + slot = open_slots[height]; + height++; + shift += RADIX_TREE_MAP_SHIFT; + } + } + /* + * The iftag must have been set somewhere because otherwise + * we would return immediately at the beginning of the function + */ + root_tag_set(root, settag); + + return tagged; +} +EXPORT_SYMBOL(radix_tree_gang_tag_if_tagged); + + +/** + * radix_tree_next_hole - find the next hole (not-present entry) + * @root: tree root + * @index: index key -- 1.6.4.2
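For context, a minimal sketch of how a caller is expected to drive this helper -- essentially what tag_pages_for_writeback() in patch 2/2 below does. The wrapper name here is made up for illustration; the tag names, the return value and the locking convention are taken from this series:

	/* Illustration only: tag every dirty page in [start, end] for writeback. */
	static void tag_range_for_writeback(struct address_space *mapping,
					    pgoff_t start, pgoff_t end)
	{
		unsigned long tagged;

		/* The new helper assumes exclusion, so take the tree lock. */
		spin_lock_irq(&mapping->tree_lock);
		tagged = radix_tree_gang_tag_if_tagged(&mapping->page_tree,
				start, end,
				PAGECACHE_TAG_DIRTY, PAGECACHE_TAG_TOWRITE);
		spin_unlock_irq(&mapping->tree_lock);
		/* 'tagged' is the number of leaf slots that received the new tag. */
	}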
* Re: [PATCH 1/2] radix-tree: Implement function radix_tree_gang_tag_if_tagged 2010-06-04 18:40 ` [PATCH 1/2] radix-tree: Implement function radix_tree_gang_tag_if_tagged Jan Kara @ 2010-06-09 23:30 ` Andrew Morton 2010-06-10 12:28 ` Jan Kara 0 siblings, 1 reply; 10+ messages in thread From: Andrew Morton @ 2010-06-09 23:30 UTC (permalink / raw) To: Jan Kara; +Cc: linux-fsdevel, npiggin, david, linux-mm On Fri, 4 Jun 2010 20:40:53 +0200 Jan Kara <jack@suse.cz> wrote: > Implement function for setting one tag if another tag is set > for each item in given range. > > > ... > > /** > + * radix_tree_gang_tag_if_tagged - for each item in given range set given > + * tag if item has another tag set > + * @root: radix tree root > + * @first_index: starting index of a range to scan > + * @last_index: last index of a range to scan > + * @iftag: tag index to test > + * @settag: tag index to set if tested tag is set > + * > + * This function scans range of radix tree from first_index to last_index. > + * For each item in the range if iftag is set, the function sets also > + * settag. > + * > + * The function returns number of leaves where the tag was set. > + */ > +unsigned long radix_tree_gang_tag_if_tagged(struct radix_tree_root *root, > + unsigned long first_index, unsigned long last_index, > + unsigned int iftag, unsigned int settag) This is kind of a misuse of the term "gang". First we had radix_tree_lookup(), which returned a single page. That was a bit inefficient, so then we added radix_tree_gang_lookup(), which retuned a "gang" of pages. But radix_tree_gang_tag_if_tagged() doesn't return a gang of anything (it has no `void **results' argument). radix_tree_range_tag_if_tagged()? > +{ > + unsigned int height = root->height, shift; > + unsigned long tagged = 0, index = first_index; > + struct radix_tree_node *open_slots[height], *slot; > + > + last_index = min(last_index, radix_tree_maxindex(height)); > + if (first_index > last_index) > + return 0; > + if (!root_tag_get(root, iftag)) > + return 0; > + if (height == 0) { > + root_tag_set(root, settag); > + return 1; > + } > + > + shift = (height - 1) * RADIX_TREE_MAP_SHIFT; > + slot = radix_tree_indirect_to_ptr(root->rnode); > + > + for (;;) { > + int offset; > + > + offset = (index >> shift) & RADIX_TREE_MAP_MASK; > + if (!slot->slots[offset]) > + goto next; > + if (!tag_get(slot, iftag, offset)) > + goto next; > + tag_set(slot, settag, offset); > + if (height == 1) { > + tagged++; > + goto next; > + } > + /* Go down one level */ > + height--; > + shift -= RADIX_TREE_MAP_SHIFT; > + open_slots[height] = slot; > + slot = slot->slots[offset]; > + continue; > +next: > + /* Go to next item at level determined by 'shift' */ > + index = ((index >> shift) + 1) << shift; > + if (index > last_index) > + break; > + while (((index >> shift) & RADIX_TREE_MAP_MASK) == 0) { > + /* > + * We've fully scanned this node. Go up. Because > + * last_index is guaranteed to be in the tree, what > + * we do below cannot wander astray. > + */ > + slot = open_slots[height]; > + height++; > + shift += RADIX_TREE_MAP_SHIFT; > + } > + } > + /* > + * The iftag must have been set somewhere because otherwise > + * we would return immediated at the beginning of the function > + */ > + root_tag_set(root, settag); > + > + return tagged; > +} > +EXPORT_SYMBOL(radix_tree_gang_tag_if_tagged); Wouldn't this be a lot simpler if it used __lookup_tag()? 
Along the lines of do { slot *slots[N]; n = __lookup_tag(.., slots, ...); for (i = 0; i < n; i++) tag_set(slots[i], ...); } while (something); ? That's still one cache miss per slot and misses on the slots will preponderate, so the performance won't be much different.
* Re: [PATCH 1/2] radix-tree: Implement function radix_tree_gang_tag_if_tagged 2010-06-09 23:30 ` Andrew Morton @ 2010-06-10 12:28 ` Jan Kara 0 siblings, 0 replies; 10+ messages in thread From: Jan Kara @ 2010-06-10 12:28 UTC (permalink / raw) To: Andrew Morton; +Cc: Jan Kara, linux-fsdevel, npiggin, david, linux-mm On Wed 09-06-10 16:30:45, Andrew Morton wrote: > On Fri, 4 Jun 2010 20:40:53 +0200 > Jan Kara <jack@suse.cz> wrote: > > > Implement function for setting one tag if another tag is set > > for each item in given range. > > > > > > ... > > > > /** > > + * radix_tree_gang_tag_if_tagged - for each item in given range set given > > + * tag if item has another tag set > > + * @root: radix tree root > > + * @first_index: starting index of a range to scan > > + * @last_index: last index of a range to scan > > + * @iftag: tag index to test > > + * @settag: tag index to set if tested tag is set > > + * > > + * This function scans range of radix tree from first_index to last_index. > > + * For each item in the range if iftag is set, the function sets also > > + * settag. > > + * > > + * The function returns number of leaves where the tag was set. > > + */ > > +unsigned long radix_tree_gang_tag_if_tagged(struct radix_tree_root *root, > > + unsigned long first_index, unsigned long last_index, > > + unsigned int iftag, unsigned int settag) > > This is kind of a misuse of the term "gang". > > First we had radix_tree_lookup(), which returned a single page. > > That was a bit inefficient, so then we added radix_tree_gang_lookup(), > which retuned a "gang" of pages. > > But radix_tree_gang_tag_if_tagged() doesn't return a gang of anything > (it has no `void **results' argument). > > radix_tree_range_tag_if_tagged()? Good point and your name looks better. Changed. > > +{ > > + unsigned int height = root->height, shift; > > + unsigned long tagged = 0, index = first_index; > > + struct radix_tree_node *open_slots[height], *slot; > > + > > + last_index = min(last_index, radix_tree_maxindex(height)); > > + if (first_index > last_index) > > + return 0; > > + if (!root_tag_get(root, iftag)) > > + return 0; > > + if (height == 0) { > > + root_tag_set(root, settag); > > + return 1; > > + } > > + > > + shift = (height - 1) * RADIX_TREE_MAP_SHIFT; > > + slot = radix_tree_indirect_to_ptr(root->rnode); > > + > > + for (;;) { > > + int offset; > > + > > + offset = (index >> shift) & RADIX_TREE_MAP_MASK; > > + if (!slot->slots[offset]) > > + goto next; > > + if (!tag_get(slot, iftag, offset)) > > + goto next; > > + tag_set(slot, settag, offset); > > + if (height == 1) { > > + tagged++; > > + goto next; > > + } > > + /* Go down one level */ > > + height--; > > + shift -= RADIX_TREE_MAP_SHIFT; > > + open_slots[height] = slot; > > + slot = slot->slots[offset]; > > + continue; > > +next: > > + /* Go to next item at level determined by 'shift' */ > > + index = ((index >> shift) + 1) << shift; > > + if (index > last_index) > > + break; > > + while (((index >> shift) & RADIX_TREE_MAP_MASK) == 0) { > > + /* > > + * We've fully scanned this node. Go up. Because > > + * last_index is guaranteed to be in the tree, what > > + * we do below cannot wander astray. 
> > + */ > > + slot = open_slots[height]; > > + height++; > > + shift += RADIX_TREE_MAP_SHIFT; > > + } > > + } > > + /* > > + * The iftag must have been set somewhere because otherwise > > + * we would return immediately at the beginning of the function > > + */ > > + root_tag_set(root, settag); > > + > > + return tagged; > > +} > > +EXPORT_SYMBOL(radix_tree_gang_tag_if_tagged); > > Wouldn't this be a lot simpler if it used __lookup_tag()? Along the > lines of > > do { > slot *slots[N]; > > n = __lookup_tag(.., slots, ...); > for (i = 0; i < n; i++) > tag_set(slots[i], ...); > } while (something); > > ? > > That's still one cache miss per slot and misses on the slots will > preponderate, so the performance won't be much different. But __lookup_tag returns only leaf nodes, so we'd have to reimplement something like what radix_tree_tag_set does (tag_set sets the tag only in that one node) - in particular, we'd have to reconstruct the radix tree path. Also, __lookup_tag cares about RCU, which my function doesn't have to because it is guaranteed to be called under a spin lock. I'm not sure whether the cost is noticeable or not though... So do you still think it's worthwhile to go the simpler way? Honza -- Jan Kara <jack@suse.cz> SUSE Labs, CR
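To make the objection concrete: tagged gang lookups prune whole subtrees whose parent tag bit is clear, so tagging only the returned leaf slot is not enough -- every level on the path from the root must be tagged as well, which is what radix_tree_tag_set() does for a single index. A rough sketch of the per-batch fixup the __lookup_tag()-based loop would really need (illustrative only; it also assumes the lookup could hand back the indices of the returned slots):

	/* Hypothetical fixup pass over one batch of tagged leaf slots. */
	for (i = 0; i < n; i++) {
		/*
		 * tag_set(slots[i], settag, offset) would only flip the bit in
		 * the leaf node. radix_tree_tag_set() instead re-walks the
		 * path root -> ... -> leaf and sets settag at every level --
		 * i.e. it repeats, per page, the walk __lookup_tag just did.
		 */
		radix_tree_tag_set(root, indices[i], settag);
	}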
* [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging 2010-06-04 18:40 [PATCH 0/2 RFC v3] Livelock avoidance for data integrity writeback Jan Kara 2010-06-04 18:40 ` [PATCH 1/2] radix-tree: Implement function radix_tree_gang_tag_if_tagged Jan Kara @ 2010-06-04 18:40 ` Jan Kara 2010-06-09 23:41 ` Andrew Morton 2010-06-09 23:45 ` Andrew Morton 1 sibling, 2 replies; 10+ messages in thread From: Jan Kara @ 2010-06-04 18:40 UTC (permalink / raw) To: linux-fsdevel; +Cc: Andrew Morton, npiggin, david, linux-mm, Jan Kara We try to avoid livelocks of writeback when some steadily creates dirty pages in a mapping we are writing out. For memory-cleaning writeback, using nr_to_write works reasonably well but we cannot really use it for data integrity writeback. This patch tries to solve the problem. The idea is simple: Tag all pages that should be written back with a special tag (TOWRITE) in the radix tree. This can be done rather quickly and thus livelocks should not happen in practice. Then we start doing the hard work of locking pages and sending them to disk only for those pages that have TOWRITE tag set. Signed-off-by: Jan Kara <jack@suse.cz> --- include/linux/fs.h | 1 + include/linux/radix-tree.h | 2 +- mm/page-writeback.c | 44 ++++++++++++++++++++++++++++++++++++++++++-- 3 files changed, 44 insertions(+), 3 deletions(-) diff --git a/include/linux/fs.h b/include/linux/fs.h index 3428393..fe308f0 100644 --- a/include/linux/fs.h +++ b/include/linux/fs.h @@ -685,6 +685,7 @@ struct block_device { */ #define PAGECACHE_TAG_DIRTY 0 #define PAGECACHE_TAG_WRITEBACK 1 +#define PAGECACHE_TAG_TOWRITE 2 int mapping_tagged(struct address_space *mapping, int tag); diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h index efdfb07..f7ebff8 100644 --- a/include/linux/radix-tree.h +++ b/include/linux/radix-tree.h @@ -55,7 +55,7 @@ static inline int radix_tree_is_indirect_ptr(void *ptr) /*** radix-tree API starts here ***/ -#define RADIX_TREE_MAX_TAGS 2 +#define RADIX_TREE_MAX_TAGS 3 /* root tags are stored in gfp_mask, shifted by __GFP_BITS_SHIFT */ struct radix_tree_root { diff --git a/mm/page-writeback.c b/mm/page-writeback.c index b289310..f590a12 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -807,6 +807,30 @@ void __init page_writeback_init(void) } /** + * tag_pages_for_writeback - tag pages to be written by write_cache_pages + * @mapping: address space structure to write + * @start: starting page index + * @end: ending page index (inclusive) + * + * This function scans the page range from @start to @end and tags all pages + * that have DIRTY tag set with a special TOWRITE tag. The idea is that + * write_cache_pages (or whoever calls this function) will then use TOWRITE tag + * to identify pages eligible for writeback. This mechanism is used to avoid + * livelocking of writeback by a process steadily creating new dirty pages in + * the file (thus it is important for this function to be damn quick so that it + * can tag pages faster than a dirtying process can create them). + */ +void tag_pages_for_writeback(struct address_space *mapping, + pgoff_t start, pgoff_t end) +{ + spin_lock_irq(&mapping->tree_lock); + radix_tree_gang_tag_if_tagged(&mapping->page_tree, start, end, + PAGECACHE_TAG_DIRTY, PAGECACHE_TAG_TOWRITE); + spin_unlock_irq(&mapping->tree_lock); +} +EXPORT_SYMBOL(tag_pages_for_writeback); + +/** * write_cache_pages - walk the list of dirty pages of the given address space and write all of them. 
* @mapping: address space structure to write * @wbc: subtract the number of written pages from *@wbc->nr_to_write @@ -820,6 +844,13 @@ void __init page_writeback_init(void) * the call was made get new I/O started against them. If wbc->sync_mode is * WB_SYNC_ALL then we were called for data integrity and we must wait for * existing IO to complete. + * + * To avoid livelocks (when another process dirties new pages), we first tag + * pages which should be written back with TOWRITE tag and only then start + * writing them. For data-integrity sync we have to be careful so that we do + * not miss some pages (e.g., because some other process has cleared TOWRITE + * tag we set). The rule we follow is that TOWRITE tag can be cleared only + * by the process clearing the DIRTY tag (and submitting the page for IO). */ int write_cache_pages(struct address_space *mapping, struct writeback_control *wbc, writepage_t writepage, @@ -836,6 +867,7 @@ int write_cache_pages(struct address_space *mapping, int cycled; int range_whole = 0; long nr_to_write = wbc->nr_to_write; + int tag; pagevec_init(&pvec, 0); if (wbc->range_cyclic) { @@ -853,13 +885,18 @@ int write_cache_pages(struct address_space *mapping, range_whole = 1; cycled = 1; /* ignore range_cyclic tests */ } + if (wbc->sync_mode == WB_SYNC_ALL) + tag = PAGECACHE_TAG_TOWRITE; + else + tag = PAGECACHE_TAG_DIRTY; retry: + if (wbc->sync_mode == WB_SYNC_ALL) + tag_pages_for_writeback(mapping, index, end); done_index = index; while (!done && (index <= end)) { int i; - nr_pages = pagevec_lookup_tag(&pvec, mapping, &index, - PAGECACHE_TAG_DIRTY, + nr_pages = pagevec_lookup_tag(&pvec, mapping, &index, tag, min(end - index, (pgoff_t)PAGEVEC_SIZE-1) + 1); if (nr_pages == 0) break; @@ -1319,6 +1356,9 @@ int test_set_page_writeback(struct page *page) radix_tree_tag_clear(&mapping->page_tree, page_index(page), PAGECACHE_TAG_DIRTY); + radix_tree_tag_clear(&mapping->page_tree, + page_index(page), + PAGECACHE_TAG_TOWRITE); spin_unlock_irqrestore(&mapping->tree_lock, flags); } else { ret = TestSetPageWriteback(page); -- 1.6.4.2
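For illustration, here is roughly how a data-integrity sync path ends up exercising the new tag once this patch is applied. The wrapper sync_mapping_range() is hypothetical; generic_writepages() and the writeback_control fields are existing kernel interfaces, and the behaviour described in the comment is the one added above:

	/* Hypothetical caller: write out a byte range for data integrity. */
	static int sync_mapping_range(struct address_space *mapping,
				      loff_t start, loff_t end)
	{
		struct writeback_control wbc = {
			.sync_mode	= WB_SYNC_ALL,	/* data integrity */
			.range_start	= start,
			.range_end	= end,
			.nr_to_write	= LONG_MAX,
		};

		/*
		 * With the patch above, write_cache_pages() first tags the
		 * dirty pages in the range as TOWRITE and then writes only
		 * tagged pages, so pages dirtied after the tagging pass are
		 * left for the next writeback cycle instead of livelocking
		 * this sync.
		 */
		return generic_writepages(mapping, &wbc);
	}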
* Re: [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging 2010-06-04 18:40 ` [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging Jan Kara @ 2010-06-09 23:41 ` Andrew Morton 2010-06-10 12:31 ` Jan Kara 2010-06-09 23:45 ` Andrew Morton 1 sibling, 1 reply; 10+ messages in thread From: Andrew Morton @ 2010-06-09 23:41 UTC (permalink / raw) To: Jan Kara; +Cc: linux-fsdevel, npiggin, david, linux-mm On Fri, 4 Jun 2010 20:40:54 +0200 Jan Kara <jack@suse.cz> wrote: > We try to avoid livelocks of writeback when some steadily creates > dirty pages in a mapping we are writing out. For memory-cleaning > writeback, using nr_to_write works reasonably well but we cannot > really use it for data integrity writeback. This patch tries to > solve the problem. > > The idea is simple: Tag all pages that should be written back > with a special tag (TOWRITE) in the radix tree. This can be done > rather quickly and thus livelocks should not happen in practice. > Then we start doing the hard work of locking pages and sending > them to disk only for those pages that have TOWRITE tag set. > > Signed-off-by: Jan Kara <jack@suse.cz> > --- > include/linux/fs.h | 1 + > include/linux/radix-tree.h | 2 +- > mm/page-writeback.c | 44 ++++++++++++++++++++++++++++++++++++++++++-- > 3 files changed, 44 insertions(+), 3 deletions(-) > > diff --git a/include/linux/fs.h b/include/linux/fs.h > index 3428393..fe308f0 100644 > --- a/include/linux/fs.h > +++ b/include/linux/fs.h > @@ -685,6 +685,7 @@ struct block_device { > */ > #define PAGECACHE_TAG_DIRTY 0 > #define PAGECACHE_TAG_WRITEBACK 1 > +#define PAGECACHE_TAG_TOWRITE 2 > > int mapping_tagged(struct address_space *mapping, int tag); > > diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h > index efdfb07..f7ebff8 100644 > --- a/include/linux/radix-tree.h > +++ b/include/linux/radix-tree.h > @@ -55,7 +55,7 @@ static inline int radix_tree_is_indirect_ptr(void *ptr) > > /*** radix-tree API starts here ***/ > > -#define RADIX_TREE_MAX_TAGS 2 > +#define RADIX_TREE_MAX_TAGS 3 > > /* root tags are stored in gfp_mask, shifted by __GFP_BITS_SHIFT */ > struct radix_tree_root { > diff --git a/mm/page-writeback.c b/mm/page-writeback.c > index b289310..f590a12 100644 > --- a/mm/page-writeback.c > +++ b/mm/page-writeback.c > @@ -807,6 +807,30 @@ void __init page_writeback_init(void) > } > > /** > + * tag_pages_for_writeback - tag pages to be written by write_cache_pages > + * @mapping: address space structure to write > + * @start: starting page index > + * @end: ending page index (inclusive) > + * > + * This function scans the page range from @start to @end I'd add "inclusive" here as well. Add it everywhere, very carefully (or "exclusive"). it really is a minefield and we've had off-by-ones from this before > and tags all pages > + * that have DIRTY tag set with a special TOWRITE tag. The idea is that > + * write_cache_pages (or whoever calls this function) will then use TOWRITE tag > + * to identify pages eligible for writeback. This mechanism is used to avoid > + * livelocking of writeback by a process steadily creating new dirty pages in > + * the file (thus it is important for this function to be damn quick so that it > + * can tag pages faster than a dirtying process can create them). 
> + */ > +void tag_pages_for_writeback(struct address_space *mapping, > + pgoff_t start, pgoff_t end) > +{ > + spin_lock_irq(&mapping->tree_lock); > + radix_tree_gang_tag_if_tagged(&mapping->page_tree, start, end, > + PAGECACHE_TAG_DIRTY, PAGECACHE_TAG_TOWRITE); > + spin_unlock_irq(&mapping->tree_lock); > +} > +EXPORT_SYMBOL(tag_pages_for_writeback); Problem. For how long can this disable interrupts? Maybe 1TB of dirty pagecache before the NMI watchdog starts getting involved? Could be a problem in some situations. Easy enough to fix - just walk the range in 1000(?) page hunks, dropping the lock and doing cond_resched() each time. radix_tree_gang_tag_if_tagged() will need to return next_index to make that efficient with sparse files. > +/** > * write_cache_pages - walk the list of dirty pages of the given address space and write all of them. > * @mapping: address space structure to write > * @wbc: subtract the number of written pages from *@wbc->nr_to_write > @@ -820,6 +844,13 @@ void __init page_writeback_init(void) > * the call was made get new I/O started against them. If wbc->sync_mode is > * WB_SYNC_ALL then we were called for data integrity and we must wait for > * existing IO to complete. > + * > + * To avoid livelocks (when another process dirties new pages), we first tag > + * pages which should be written back with TOWRITE tag and only then start > + * writing them. For data-integrity sync we have to be careful so that we do > + * not miss some pages (e.g., because some other process has cleared TOWRITE > + * tag we set). The rule we follow is that TOWRITE tag can be cleared only > + * by the process clearing the DIRTY tag (and submitting the page for IO). > */ Seems simple enough and I can't think of any bugs which the obvious races will cause.
* Re: [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging 2010-06-09 23:41 ` Andrew Morton @ 2010-06-10 12:31 ` Jan Kara 0 siblings, 0 replies; 10+ messages in thread From: Jan Kara @ 2010-06-10 12:31 UTC (permalink / raw) To: Andrew Morton; +Cc: Jan Kara, linux-fsdevel, npiggin, david, linux-mm On Wed 09-06-10 16:41:15, Andrew Morton wrote: > On Fri, 4 Jun 2010 20:40:54 +0200 > Jan Kara <jack@suse.cz> wrote: > > diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h > > index efdfb07..f7ebff8 100644 > > --- a/include/linux/radix-tree.h > > +++ b/include/linux/radix-tree.h > > @@ -55,7 +55,7 @@ static inline int radix_tree_is_indirect_ptr(void *ptr) > > > > /*** radix-tree API starts here ***/ > > > > -#define RADIX_TREE_MAX_TAGS 2 > > +#define RADIX_TREE_MAX_TAGS 3 > > > > /* root tags are stored in gfp_mask, shifted by __GFP_BITS_SHIFT */ > > struct radix_tree_root { > > diff --git a/mm/page-writeback.c b/mm/page-writeback.c > > index b289310..f590a12 100644 > > --- a/mm/page-writeback.c > > +++ b/mm/page-writeback.c > > @@ -807,6 +807,30 @@ void __init page_writeback_init(void) > > } > > > > /** > > + * tag_pages_for_writeback - tag pages to be written by write_cache_pages > > + * @mapping: address space structure to write > > + * @start: starting page index > > + * @end: ending page index (inclusive) > > + * > > + * This function scans the page range from @start to @end > > I'd add "inclusive" here as well. Add it everywhere, very carefully > (or "exclusive"). it really is a minefield and we've had off-by-ones > from this before Done. > > and tags all pages > > + * that have DIRTY tag set with a special TOWRITE tag. The idea is that > > + * write_cache_pages (or whoever calls this function) will then use TOWRITE tag > > + * to identify pages eligible for writeback. This mechanism is used to avoid > > + * livelocking of writeback by a process steadily creating new dirty pages in > > + * the file (thus it is important for this function to be damn quick so that it > > + * can tag pages faster than a dirtying process can create them). > > + */ > > +void tag_pages_for_writeback(struct address_space *mapping, > > + pgoff_t start, pgoff_t end) > > +{ > > + spin_lock_irq(&mapping->tree_lock); > > + radix_tree_gang_tag_if_tagged(&mapping->page_tree, start, end, > > + PAGECACHE_TAG_DIRTY, PAGECACHE_TAG_TOWRITE); > > + spin_unlock_irq(&mapping->tree_lock); > > +} > > +EXPORT_SYMBOL(tag_pages_for_writeback); > > Problem. For how long can this disable interrupts? Maybe 1TB of dirty > pagecache before the NMI watchdog starts getting involved? Could be a > problem in some situations. > > Easy enough to fix - just walk the range in 1000(?) page hunks, > dropping the lock and doing cond_resched() each time. > radix_tree_gang_tag_if_tagged() will need to return next_index to make > that efficient with sparse files. Yes, Nick had a similar objection. I've already fixed it in my tree the way you suggest. > > +/** > > * write_cache_pages - walk the list of dirty pages of the given address space and write all of them. > > * @mapping: address space structure to write > > * @wbc: subtract the number of written pages from *@wbc->nr_to_write > > @@ -820,6 +844,13 @@ void __init page_writeback_init(void) > > * the call was made get new I/O started against them. If wbc->sync_mode is > > * WB_SYNC_ALL then we were called for data integrity and we must wait for > > * existing IO to complete. 
> > + * > > + * To avoid livelocks (when another process dirties new pages), we first tag > > + * pages which should be written back with TOWRITE tag and only then start > > + * writing them. For data-integrity sync we have to be careful so that we do > > + * not miss some pages (e.g., because some other process has cleared TOWRITE > > + * tag we set). The rule we follow is that TOWRITE tag can be cleared only > > + * by the process clearing the DIRTY tag (and submitting the page for IO). > > */ > > Seems simple enough and I can't think of any bugs which the obvious > races will cause. Thanks for looking into the patches! Honza -- Jan Kara <jack@suse.cz> SUSE Labs, CR
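A minimal sketch of the chunked variant being agreed on here, assuming the helper is renamed radix_tree_range_tag_if_tagged() as suggested earlier, takes its start index by reference and stops after tagging a batch of leaves. The batch size is arbitrary and the whole snippet is an illustration of the idea rather than the exact code in Jan's tree:

	#define WRITEBACK_TAG_BATCH 1024	/* arbitrary chunk size */

	void tag_pages_for_writeback(struct address_space *mapping,
				     pgoff_t start, pgoff_t end)
	{
		unsigned long tagged;

		do {
			spin_lock_irq(&mapping->tree_lock);
			/* assumed: the helper advances 'start' past the last slot visited */
			tagged = radix_tree_range_tag_if_tagged(&mapping->page_tree,
					&start, end, WRITEBACK_TAG_BATCH,
					PAGECACHE_TAG_DIRTY, PAGECACHE_TAG_TOWRITE);
			spin_unlock_irq(&mapping->tree_lock);
			/* Let interrupts and other tree_lock waiters get a look in. */
			cond_resched();
			/* Checking 'start' handles wrapping when end == (pgoff_t)-1. */
		} while (tagged >= WRITEBACK_TAG_BATCH && start);
	}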
* Re: [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging 2010-06-04 18:40 ` [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging Jan Kara 2010-06-09 23:41 ` Andrew Morton @ 2010-06-09 23:45 ` Andrew Morton 2010-06-10 12:42 ` Jan Kara 1 sibling, 1 reply; 10+ messages in thread From: Andrew Morton @ 2010-06-09 23:45 UTC (permalink / raw) To: Jan Kara; +Cc: linux-fsdevel, npiggin, david, linux-mm On Fri, 4 Jun 2010 20:40:54 +0200 Jan Kara <jack@suse.cz> wrote: > -#define RADIX_TREE_MAX_TAGS 2 > +#define RADIX_TREE_MAX_TAGS 3 Adds another eight bytes to the radix_tree_node, I think. What effect does this have upon the radix_tree_node_cachep packing for sl[aeiou]b? Please add to changelog if you can work it out ;).
* Re: [PATCH 2/2] mm: Implement writeback livelock avoidance using page tagging 2010-06-09 23:45 ` Andrew Morton @ 2010-06-10 12:42 ` Jan Kara 0 siblings, 0 replies; 10+ messages in thread From: Jan Kara @ 2010-06-10 12:42 UTC (permalink / raw) To: Andrew Morton; +Cc: Jan Kara, linux-fsdevel, npiggin, david, linux-mm On Wed 09-06-10 16:45:33, Andrew Morton wrote: > On Fri, 4 Jun 2010 20:40:54 +0200 > Jan Kara <jack@suse.cz> wrote: > > > -#define RADIX_TREE_MAX_TAGS 2 > > +#define RADIX_TREE_MAX_TAGS 3 > > Adds another eight bytes to the radix_tree_node, I think. What effect > does this have upon the radix_tree_node_cachep packing for sl[aeiou]b? > Please add to changelog if you can work it out ;). The sizes of the structure are: 32-bit: 288 vs 296, 64-bit: 552 vs 560. I have now checked (running different kernels because I wasn't sure my computations were right) and that gives 7 objects per page with SLAB and SLUB on a 64-bit kernel. I'll also try to get SLOB numbers for 64-bit and possibly numbers for 32-bit archs (although it gets a bit tiring to try all the kernels ;). Honza -- Jan Kara <jack@suse.cz> SUSE Labs, CR
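A back-of-the-envelope check of those numbers, assuming RADIX_TREE_MAP_SHIFT = 6 (64 slots per node) and 4K pages, and ignoring per-allocator slab metadata -- which is exactly the part Jan is measuring above:

	/*
	 * Each tag is a bitmap with one bit per slot:
	 *   64 slots -> 64 bits -> 8 bytes per tag (one unsigned long on
	 *   64-bit, two unsigned longs on 32-bit).
	 * Going from 2 to 3 tags therefore adds 8 bytes per node:
	 *   64-bit: 552 -> 560 bytes,  32-bit: 288 -> 296 bytes.
	 * Objects per 4K page on 64-bit:
	 *   4096 / 552 = 7.4...  and  4096 / 560 = 7.3...
	 * so SLAB/SLUB still fit 7 radix_tree_node objects per page either
	 * way and the packing is unchanged.
	 */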
* [PATCH 0/2 RFC v3] Livelock avoidance for data integrity writeback @ 2010-06-04 18:47 Jan Kara 2010-06-04 18:47 ` [PATCH 1/2] radix-tree: Implement function radix_tree_gang_tag_if_tagged Jan Kara 0 siblings, 1 reply; 10+ messages in thread From: Jan Kara @ 2010-06-04 18:47 UTC (permalink / raw) To: linux-fsdevel; +Cc: Andrew Morton, npiggin, david, linux-mm Hi, I've revived my patches to implement livelock avoidance for data integrity writes. Due to concerns that tagging pages before writeout could be too costly for WB_SYNC_NONE mode (where we stop after nr_to_write pages), I've changed the patch to use page tagging only in WB_SYNC_ALL mode, where we are sure that we write out all the tagged pages. Later, we can think about using tagging for livelock avoidance in WB_SYNC_NONE mode as well... As always, comments are welcome. Honza PS: I'm sorry for sending this twice. I've screwed up the list address in the first posting.
* [PATCH 1/2] radix-tree: Implement function radix_tree_gang_tag_if_tagged 2010-06-04 18:47 [PATCH 0/2 RFC v3] Livelock avoidance for data integrity writeback Jan Kara @ 2010-06-04 18:47 ` Jan Kara 0 siblings, 0 replies; 10+ messages in thread From: Jan Kara @ 2010-06-04 18:47 UTC (permalink / raw) To: linux-fsdevel; +Cc: Andrew Morton, npiggin, david, linux-mm, Jan Kara Implement function for setting one tag if another tag is set for each item in given range. Signed-off-by: Jan Kara <jack@suse.cz> --- include/linux/radix-tree.h | 3 ++ lib/radix-tree.c | 82 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 85 insertions(+), 0 deletions(-) diff --git a/include/linux/radix-tree.h b/include/linux/radix-tree.h index 55ca73c..efdfb07 100644 --- a/include/linux/radix-tree.h +++ b/include/linux/radix-tree.h @@ -192,6 +192,9 @@ unsigned int radix_tree_gang_lookup_tag_slot(struct radix_tree_root *root, void ***results, unsigned long first_index, unsigned int max_items, unsigned int tag); +unsigned long radix_tree_gang_tag_if_tagged(struct radix_tree_root *root, + unsigned long first_index, unsigned long last_index, + unsigned int fromtag, unsigned int totag); int radix_tree_tagged(struct radix_tree_root *root, unsigned int tag); static inline void radix_tree_preload_end(void) diff --git a/lib/radix-tree.c b/lib/radix-tree.c index 05da38b..c4595b2 100644 --- a/lib/radix-tree.c +++ b/lib/radix-tree.c @@ -609,6 +609,88 @@ int radix_tree_tag_get(struct radix_tree_root *root, EXPORT_SYMBOL(radix_tree_tag_get); /** + * radix_tree_gang_tag_if_tagged - for each item in given range set given + * tag if item has another tag set + * @root: radix tree root + * @first_index: starting index of a range to scan + * @last_index: last index of a range to scan + * @iftag: tag index to test + * @settag: tag index to set if tested tag is set + * + * This function scans range of radix tree from first_index to last_index. + * For each item in the range if iftag is set, the function sets also + * settag. + * + * The function returns number of leaves where the tag was set. + */ +unsigned long radix_tree_gang_tag_if_tagged(struct radix_tree_root *root, + unsigned long first_index, unsigned long last_index, + unsigned int iftag, unsigned int settag) +{ + unsigned int height = root->height, shift; + unsigned long tagged = 0, index = first_index; + struct radix_tree_node *open_slots[height], *slot; + + last_index = min(last_index, radix_tree_maxindex(height)); + if (first_index > last_index) + return 0; + if (!root_tag_get(root, iftag)) + return 0; + if (height == 0) { + root_tag_set(root, settag); + return 1; + } + + shift = (height - 1) * RADIX_TREE_MAP_SHIFT; + slot = radix_tree_indirect_to_ptr(root->rnode); + + for (;;) { + int offset; + + offset = (index >> shift) & RADIX_TREE_MAP_MASK; + if (!slot->slots[offset]) + goto next; + if (!tag_get(slot, iftag, offset)) + goto next; + tag_set(slot, settag, offset); + if (height == 1) { + tagged++; + goto next; + } + /* Go down one level */ + height--; + shift -= RADIX_TREE_MAP_SHIFT; + open_slots[height] = slot; + slot = slot->slots[offset]; + continue; +next: + /* Go to next item at level determined by 'shift' */ + index = ((index >> shift) + 1) << shift; + if (index > last_index) + break; + while (((index >> shift) & RADIX_TREE_MAP_MASK) == 0) { + /* + * We've fully scanned this node. Go up. Because + * last_index is guaranteed to be in the tree, what + * we do below cannot wander astray. 
+ */ + slot = open_slots[height]; + height++; + shift += RADIX_TREE_MAP_SHIFT; + } + } + /* + * The iftag must have been set somewhere because otherwise + * we would return immediately at the beginning of the function + */ + root_tag_set(root, settag); + + return tagged; +} +EXPORT_SYMBOL(radix_tree_gang_tag_if_tagged); + + +/** + * radix_tree_next_hole - find the next hole (not-present entry) + * @root: tree root + * @index: index key -- 1.6.4.2