Message-Id: <20100729121423.332557547@intel.com>
Date: Thu, 29 Jul 2010 19:51:44 +0800
From: Wu Fengguang
To: Andrew Morton
Cc: Wu Fengguang, LKML,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	Dave Chinner,
	Chris Mason, Nick Piggin,
	Rik van Riel,
	Johannes Weiner,
	Christoph Hellwig,
	KAMEZAWA Hiroyuki,
	KOSAKI Motohiro,
	Andrea Arcangeli, Mel Gorman,
	Minchan Kim
Subject: [PATCH 2/5] writeback: stop periodic/background work on seeing sync works
References: <20100729115142.102255590@intel.com>
Content-Disposition: inline; filename=writeback-sync-pending.patch

Periodic/background writeback can run forever. So when any sync work is
enqueued, increase bdi->sync_works to notify the active non-sync works
to exit. Non-sync works queued after the sync work won't be affected.
Signed-off-by: Wu Fengguang
---
 fs/fs-writeback.c           |   13 +++++++++++++
 include/linux/backing-dev.h |    6 ++++++
 mm/backing-dev.c            |    1 +
 3 files changed, 20 insertions(+)

--- linux-next.orig/fs/fs-writeback.c	2010-07-29 17:13:23.000000000 +0800
+++ linux-next/fs/fs-writeback.c	2010-07-29 17:13:49.000000000 +0800
@@ -80,6 +80,8 @@ static void bdi_queue_work(struct backin
 	spin_lock(&bdi->wb_lock);
 	list_add_tail(&work->list, &bdi->work_list);
+	if (work->for_sync)
+		atomic_inc(&bdi->wb.sync_works);
 	spin_unlock(&bdi->wb_lock);

 	/*
@@ -633,6 +635,14 @@ static long wb_writeback(struct bdi_writ
 			break;

 		/*
+		 * Background/periodic works can run forever; abort on
+		 * seeing any pending sync work, to avoid livelocking it.
+		 */
+		if (atomic_read(&wb->sync_works) &&
+		    (work->for_background || work->for_kupdate))
+			break;
+
+		/*
 		 * For background writeout, stop when we are below the
 		 * background dirty threshold
 		 */
@@ -765,6 +775,9 @@ long wb_do_writeback(struct bdi_writebac
 		wrote += wb_writeback(wb, work);

+		if (work->for_sync)
+			atomic_dec(&wb->sync_works);
+
 		/*
 		 * Notify the caller of completion if this is a synchronous
 		 * work item, otherwise just free it.
--- linux-next.orig/include/linux/backing-dev.h	2010-07-29 17:13:23.000000000 +0800
+++ linux-next/include/linux/backing-dev.h	2010-07-29 17:13:31.000000000 +0800
@@ -50,6 +50,12 @@ struct bdi_writeback {
 	unsigned long last_old_flush;	/* last old data flush */

+	/*
+	 * Number of sync works queued; background works abort on seeing
+	 * this non-zero, to avoid livelocking the sync works.
+	 */
+	atomic_t sync_works;
+
 	struct task_struct *task;	/* writeback task */
 	struct list_head b_dirty;	/* dirty inodes */
 	struct list_head b_io;		/* parked for writeback */
--- linux-next.orig/mm/backing-dev.c	2010-07-29 17:13:23.000000000 +0800
+++ linux-next/mm/backing-dev.c	2010-07-29 17:13:31.000000000 +0800
@@ -257,6 +257,7 @@ static void bdi_wb_init(struct bdi_write
 	wb->bdi = bdi;
 	wb->last_old_flush = jiffies;
+	atomic_set(&wb->sync_works, 0);
 	INIT_LIST_HEAD(&wb->b_dirty);
 	INIT_LIST_HEAD(&wb->b_io);
 	INIT_LIST_HEAD(&wb->b_more_io);