linux-raid.vger.kernel.org archive mirror
* [PATCH v2 0/6] raid5-cache: enabling cache features
@ 2016-09-26 23:30 Song Liu
  2016-09-26 23:30 ` [PATCH v2 1/6] r5cache: write part of r5cache Song Liu
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Song Liu @ 2016-09-26 23:30 UTC (permalink / raw)
  To: linux-raid
  Cc: neilb, shli, kernel-team, dan.j.williams, hch, liuzhengyuang521,
	liuzhengyuan, Song Liu

This is the second version of the patches that enable the write-cache
part of raid5-cache. The journal part was released with kernel 4.4.

The caching part uses the same on-disk format as the raid456 journal
and accelerates writes. Write operations are committed (bio_endio) once
the data is secured in the journal. Reconstruct and RMW are postponed
to the reclaim path, which is (hopefully) not on the critical path.

The changes are organized in 6 patches (details below).

The patch for chunk_aligned_read from the earlier RFC
(http://marc.info/?l=linux-raid&m=146432700719277) is not included yet,
but we may still need some optimizations there later, especially for
SSD raid devices.

Changes from PATCH v1 (http://marc.info/?l=linux-raid&m=147268192718851):
  1. Improvements in reclaim patch
  2. Fixed issue with bitmap
  3. A fix by ZhengYuan Liu

Thanks,
Song

Song Liu (5):
  r5cache: write part of r5cache
  r5cache: sysfs entry r5c_state
  r5cache: reclaim support
  r5cache: r5c recovery
  r5cache: handle SYNC and FUA

ZhengYuan Liu (1):
  md/r5cache: decrease the counter after full-write stripe was reclaimed

 drivers/md/raid5-cache.c | 1433 ++++++++++++++++++++++++++++++++++++++++------
 drivers/md/raid5.c       |  219 ++++++-
 drivers/md/raid5.h       |   71 ++-
 3 files changed, 1508 insertions(+), 215 deletions(-)

--
2.9.3


* [PATCH v2 1/6] r5cache: write part of r5cache
  2016-09-26 23:30 [PATCH v2 0/6] raid5-cache: enabling cache features Song Liu
@ 2016-09-26 23:30 ` Song Liu
  2016-09-27 22:51   ` Shaohua Li
  2016-09-26 23:30 ` [PATCH v2 2/6] r5cache: sysfs entry r5c_state Song Liu
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Song Liu @ 2016-09-26 23:30 UTC (permalink / raw)
  To: linux-raid
  Cc: neilb, shli, kernel-team, dan.j.williams, hch, liuzhengyuang521,
	liuzhengyuan, Song Liu

This is the write part of r5cache. The cache is integrated with the
stripe cache of raid456. It leverages the r5l_log code to write data
to the journal device.

r5cache splits the current write path into 2 parts: the write path
and the reclaim path. The write path is as follows:
1. write data to the journal
   (r5c_handle_stripe_dirtying, r5c_cache_data)
2. call bio_endio
   (r5c_handle_data_cached, r5c_return_dev_pending_writes).

The reclaim path is:
1. Freeze the stripe (r5c_freeze_stripe_for_reclaim)
2. Calculate parity (reconstruct or RMW)
3. Write parity (and possibly some other data) to the journal device
4. Write data and parity to the RAID disks

Steps 3 and 4 of the reclaim path are very similar to the write path
of the raid5 journal.
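
As a rough illustration, here is a standalone model of the flag
handling described above (and implemented in the diff below). This is
not kernel code: the names are borrowed from the patch and the logic
is heavily simplified.

#include <stdio.h>
#include <stdbool.h>

struct dev_model    { bool towrite, overwrite, wantcache, incache; };
struct stripe_model { bool frozen; struct dev_model dev[4]; };

/* Write path: cache full-page overwrites, freeze everything else. */
static void handle_dirtying(struct stripe_model *sh)
{
	for (int i = 0; i < 4; i++)
		if (sh->dev[i].towrite && !sh->dev[i].overwrite) {
			sh->frozen = true;	/* non-overwrite -> write through */
			return;
		}
	for (int i = 0; i < 4; i++)
		if (sh->dev[i].towrite)
			sh->dev[i].wantcache = true;	/* R5_Wantcache */
}

/* Journal-write completion: R5_Wantcache -> R5_InCache. */
static void data_cached(struct stripe_model *sh)
{
	for (int i = 0; i < 4; i++)
		if (sh->dev[i].wantcache) {
			sh->dev[i].wantcache = false;
			sh->dev[i].incache = true;	/* R5_InCache */
		}
}

int main(void)
{
	struct stripe_model sh = { .dev = { { .towrite = true, .overwrite = true } } };

	handle_dirtying(&sh);
	if (!sh.frozen)
		data_cached(&sh);	/* driver would call bio_endio here */
	printf("frozen=%d dev0 incache=%d\n", sh.frozen, sh.dev[0].incache);
	return 0;
}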

With r5cache, a write operation does not wait for parity calculation
and write-out, so the write latency is lower (1 write to the journal
device vs. read and then write to the raid disks). r5cache also
reduces RAID overhead (multiple IOs due to read-modify-write of
parity) and provides more opportunities for full stripe writes.

r5cache adds 4 flags to stripe_head.state:
 - STRIPE_R5C_PARTIAL_STRIPE,
 - STRIPE_R5C_FULL_STRIPE,
 - STRIPE_R5C_FROZEN and
 - STRIPE_R5C_WRITTEN.

The write path runs with STRIPE_R5C_FROZEN == 0. Cache writes start
from r5c_handle_stripe_dirtying(), where bit R5_Wantcache is set for
devices with a bio in towrite. The data is then written to the
journal through the r5l_log implementation. Once the data is in the
journal, we set bit R5_InCache and call bio_endio for these writes.

The reclaim path starts by setting STRIPE_R5C_FROZEN, which sends the
stripe into reclaim. If a write operation arrives at this time, it is
handled as in the raid5 journal (calculate parity, write to journal,
write to disks, bio_endio).

Once frozen, the stripe is sent back to the raid5 state machine,
where handle_stripe_dirtying will evaluate the stripe for reconstruct
writes or RMW writes (read data and calculate parity).

For RMW, the code allocates an extra page for each data block being
updated. This page is stored in r5dev->page and the old data is read
into it. The prexor calculation then subtracts ->page from the parity
block, and the reconstruct calculation adds the ->orig_page data back
into the parity block.
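
To make the RMW parity update concrete, here is a tiny standalone toy
example (not the kernel xor code) of the identity the prexor and
reconstruct steps implement, new_parity = old_parity ^ old_data ^
new_data:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t old_data = 0x5a, new_data = 0x3c;
	uint8_t other_data = 0x81;			/* a block we do not touch */
	uint8_t parity = old_data ^ other_data;		/* parity before the write */

	parity ^= old_data;	/* prexor: subtract the old data (->page) */
	parity ^= new_data;	/* reconstruct: add the new data (->orig_page) */

	printf("parity ok: %d\n", parity == (new_data ^ other_data));
	return 0;
}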

r5cache naturally excludes SkipCopy. With the R5_Wantcache bit set,
async_copy_data() will not skip the copy.

Before writing data to the RAID disks, the r5l_log logic stores
parity (and non-overwrite data) in the journal.

Instead of inactive_list, stripes with cached data are tracked in
r5conf->r5c_full_stripe_list and r5conf->r5c_partial_stripe_list.

There are some known limitations of the cache implementation:

1. The write cache only covers full page writes (R5_OVERWRITE). Writes
   of smaller granularity are written through.
2. Only one log io (sh->log_io) per stripe at any time. Later writes
   for the same stripe have to wait. This can be improved by moving
   log_io to r5dev.

Signed-off-by: Song Liu <songliubraving@fb.com>
---
 drivers/md/raid5-cache.c | 292 +++++++++++++++++++++++++++++++++++++++++++++--
 drivers/md/raid5.c       | 187 ++++++++++++++++++++++++++----
 drivers/md/raid5.h       |  31 ++++-
 3 files changed, 479 insertions(+), 31 deletions(-)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 1b1ab4a..6b28461 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -34,12 +34,26 @@
 #define RECLAIM_MAX_FREE_SPACE (10 * 1024 * 1024 * 2) /* sector */
 #define RECLAIM_MAX_FREE_SPACE_SHIFT (2)
 
+/* wake up reclaim thread periodically */
+#define R5C_RECLAIM_WAKEUP_INTERVAL (5 * HZ)
+/* start flush with these full stripes */
+#define R5C_FULL_STRIPE_FLUSH_BATCH 8
+/* reclaim stripes in groups */
+#define R5C_RECLAIM_STRIPE_GROUP (NR_STRIPE_HASH_LOCKS * 2)
+
 /*
  * We only need 2 bios per I/O unit to make progress, but ensure we
  * have a few more available to not get too tight.
  */
 #define R5L_POOL_SIZE	4
 
+enum r5c_state {
+	R5C_STATE_NO_CACHE = 0,
+	R5C_STATE_WRITE_THROUGH = 1,
+	R5C_STATE_WRITE_BACK = 2,
+	R5C_STATE_CACHE_BROKEN = 3,
+};
+
 struct r5l_log {
 	struct md_rdev *rdev;
 
@@ -96,6 +110,9 @@ struct r5l_log {
 	spinlock_t no_space_stripes_lock;
 
 	bool need_cache_flush;
+
+	/* for r5c_cache */
+	enum r5c_state r5c_state;
 };
 
 /*
@@ -168,12 +185,79 @@ static void __r5l_set_io_unit_state(struct r5l_io_unit *io,
 	io->state = state;
 }
 
+/*
+ * Freeze the stripe, thus send the stripe into reclaim path.
+ *
+ * This function should only be called from the raid5d that handles this
+ * stripe, or while holding conf->device_lock
+ */
+void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh)
+{
+	struct r5conf *conf = sh->raid_conf;
+
+	if (!conf->log)
+		return;
+
+	WARN_ON(test_bit(STRIPE_R5C_FROZEN, &sh->state));
+	set_bit(STRIPE_R5C_FROZEN, &sh->state);
+
+	if (!test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
+		atomic_inc(&conf->preread_active_stripes);
+
+	if (test_and_clear_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state)) {
+		BUG_ON(atomic_read(&conf->r5c_cached_partial_stripes) == 0);
+		atomic_dec(&conf->r5c_cached_partial_stripes);
+	}
+
+	if (test_and_clear_bit(STRIPE_R5C_FULL_STRIPE, &sh->state)) {
+		BUG_ON(atomic_read(&conf->r5c_cached_full_stripes) == 0);
+		atomic_dec(&conf->r5c_cached_full_stripes);
+	}
+}
+
+static void r5c_handle_data_cached(struct stripe_head *sh)
+{
+	int i;
+
+	for (i = sh->disks; i--; )
+		if (test_and_clear_bit(R5_Wantcache, &sh->dev[i].flags)) {
+			set_bit(R5_InCache, &sh->dev[i].flags);
+			clear_bit(R5_LOCKED, &sh->dev[i].flags);
+			atomic_inc(&sh->dev_in_cache);
+		}
+}
+
+/*
+ * this journal write must contain full parity,
+ * it may also contain some data pages
+ */
+static void r5c_handle_parity_cached(struct stripe_head *sh)
+{
+	int i;
+
+	for (i = sh->disks; i--; )
+		if (test_bit(R5_InCache, &sh->dev[i].flags))
+			set_bit(R5_Wantwrite, &sh->dev[i].flags);
+	set_bit(STRIPE_R5C_WRITTEN, &sh->state);
+}
+
+static void r5c_finish_cache_stripe(struct stripe_head *sh)
+{
+	if (test_bit(STRIPE_R5C_FROZEN, &sh->state))
+		r5c_handle_parity_cached(sh);
+	else
+		r5c_handle_data_cached(sh);
+}
+
 static void r5l_io_run_stripes(struct r5l_io_unit *io)
 {
 	struct stripe_head *sh, *next;
 
 	list_for_each_entry_safe(sh, next, &io->stripe_list, log_list) {
 		list_del_init(&sh->log_list);
+
+		r5c_finish_cache_stripe(sh);
+
 		set_bit(STRIPE_HANDLE, &sh->state);
 		raid5_release_stripe(sh);
 	}
@@ -402,7 +486,8 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
 	io = log->current_io;
 
 	for (i = 0; i < sh->disks; i++) {
-		if (!test_bit(R5_Wantwrite, &sh->dev[i].flags))
+		if (!test_bit(R5_Wantwrite, &sh->dev[i].flags) &&
+		    !test_bit(R5_Wantcache, &sh->dev[i].flags))
 			continue;
 		if (i == sh->pd_idx || i == sh->qd_idx)
 			continue;
@@ -412,18 +497,19 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
 		r5l_append_payload_page(log, sh->dev[i].page);
 	}
 
-	if (sh->qd_idx >= 0) {
+	if (parity_pages == 2) {
 		r5l_append_payload_meta(log, R5LOG_PAYLOAD_PARITY,
 					sh->sector, sh->dev[sh->pd_idx].log_checksum,
 					sh->dev[sh->qd_idx].log_checksum, true);
 		r5l_append_payload_page(log, sh->dev[sh->pd_idx].page);
 		r5l_append_payload_page(log, sh->dev[sh->qd_idx].page);
-	} else {
+	} else if (parity_pages == 1) {
 		r5l_append_payload_meta(log, R5LOG_PAYLOAD_PARITY,
 					sh->sector, sh->dev[sh->pd_idx].log_checksum,
 					0, false);
 		r5l_append_payload_page(log, sh->dev[sh->pd_idx].page);
-	}
+	} else
+		BUG_ON(parity_pages != 0);
 
 	list_add_tail(&sh->log_list, &io->stripe_list);
 	atomic_inc(&io->pending_stripe);
@@ -432,7 +518,6 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
 	return 0;
 }
 
-static void r5l_wake_reclaim(struct r5l_log *log, sector_t space);
 /*
  * running in raid5d, where reclaim could wait for raid5d too (when it flushes
  * data from log to raid disks), so we shouldn't wait for reclaim here
@@ -456,11 +541,17 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
 		return -EAGAIN;
 	}
 
+	WARN_ON(!test_bit(STRIPE_R5C_FROZEN, &sh->state));
+
 	for (i = 0; i < sh->disks; i++) {
 		void *addr;
 
 		if (!test_bit(R5_Wantwrite, &sh->dev[i].flags))
 			continue;
+
+		if (test_bit(R5_InCache, &sh->dev[i].flags))
+			continue;
+
 		write_disks++;
 		/* checksum is already calculated in last run */
 		if (test_bit(STRIPE_LOG_TRAPPED, &sh->state))
@@ -473,6 +564,9 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
 	parity_pages = 1 + !!(sh->qd_idx >= 0);
 	data_pages = write_disks - parity_pages;
 
+	pr_debug("%s: write %d data_pages and %d parity_pages\n",
+		 __func__, data_pages, parity_pages);
+
 	meta_size =
 		((sizeof(struct r5l_payload_data_parity) + sizeof(__le32))
 		 * data_pages) +
@@ -735,7 +829,6 @@ static void r5l_write_super_and_discard_space(struct r5l_log *log,
 	}
 }
 
-
 static void r5l_do_reclaim(struct r5l_log *log)
 {
 	sector_t reclaim_target = xchg(&log->reclaim_target, 0);
@@ -798,7 +891,7 @@ static void r5l_reclaim_thread(struct md_thread *thread)
 	r5l_do_reclaim(log);
 }
 
-static void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
+void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
 {
 	unsigned long target;
 	unsigned long new = (unsigned long)space; /* overflow in theory */
@@ -1111,6 +1204,188 @@ static void r5l_write_super(struct r5l_log *log, sector_t cp)
 	set_bit(MD_CHANGE_DEVS, &mddev->flags);
 }
 
+static void r5c_flush_stripe(struct r5conf *conf, struct stripe_head *sh)
+{
+	list_del_init(&sh->lru);
+	r5c_freeze_stripe_for_reclaim(sh);
+	atomic_inc(&conf->active_stripes);
+	atomic_inc(&sh->count);
+	set_bit(STRIPE_HANDLE, &sh->state);
+	raid5_release_stripe(sh);
+}
+
+/* if num <= 0, flush all stripes
+ * if num > 0, flush at most num stripes
+ */
+int r5c_flush_cache(struct r5conf *conf, int num)
+{
+	int count = 0;
+	struct stripe_head *sh, *next;
+
+	assert_spin_locked(&conf->device_lock);
+	if (!conf->log)
+		return 0;
+	list_for_each_entry_safe(sh, next, &conf->r5c_full_stripe_list, lru) {
+		r5c_flush_stripe(conf, sh);
+		count++;
+		if (num > 0 && count >= num && count >=
+		    R5C_FULL_STRIPE_FLUSH_BATCH)
+			return count;
+	}
+
+	list_for_each_entry_safe(sh, next, &conf->r5c_partial_stripe_list, lru) {
+		r5c_flush_stripe(conf, sh);
+		count++;
+		if (num > 0 && count == num)
+			return count;
+	}
+	return count;
+}
+
+int r5c_handle_stripe_dirtying(struct r5conf *conf,
+			       struct stripe_head *sh,
+			       struct stripe_head_state *s,
+			       int disks) {
+	struct r5l_log *log = conf->log;
+	int i;
+	struct r5dev *dev;
+
+	if (!log || test_bit(STRIPE_R5C_FROZEN, &sh->state))
+		return -EAGAIN;
+
+	if (conf->log->r5c_state == R5C_STATE_WRITE_THROUGH ||
+	    conf->quiesce != 0 || conf->mddev->degraded != 0) {
+		/* write through mode */
+		r5c_freeze_stripe_for_reclaim(sh);
+		return -EAGAIN;
+	}
+
+	s->to_cache = 0;
+
+	for (i = disks; i--; ) {
+		dev = &sh->dev[i];
+		/* if non-overwrite, use the reclaim path (write through) */
+		if (dev->towrite && !test_bit(R5_OVERWRITE, &dev->flags) &&
+		    !test_bit(R5_InCache, &dev->flags)) {
+			r5c_freeze_stripe_for_reclaim(sh);
+			return -EAGAIN;
+		}
+	}
+
+	for (i = disks; i--; ) {
+		dev = &sh->dev[i];
+		if (dev->towrite) {
+			set_bit(R5_Wantcache, &dev->flags);
+			set_bit(R5_Wantdrain, &dev->flags);
+			set_bit(R5_LOCKED, &dev->flags);
+			s->to_cache++;
+		}
+	}
+
+	if (s->to_cache)
+		set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
+
+	return 0;
+}
+
+void r5c_handle_stripe_written(struct r5conf *conf,
+			       struct stripe_head *sh) {
+	int i;
+	int do_wakeup = 0;
+
+	if (test_and_clear_bit(STRIPE_R5C_WRITTEN, &sh->state)) {
+		WARN_ON(!test_bit(STRIPE_R5C_FROZEN, &sh->state));
+		clear_bit(STRIPE_R5C_FROZEN, &sh->state);
+
+		for (i = sh->disks; i--; ) {
+			if (test_and_clear_bit(R5_InCache, &sh->dev[i].flags))
+				atomic_dec(&sh->dev_in_cache);
+			clear_bit(R5_UPTODATE, &sh->dev[i].flags);
+			if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
+				do_wakeup = 1;
+		}
+	}
+
+	if (do_wakeup)
+		wake_up(&conf->wait_for_overlap);
+}
+
+int
+r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
+	       struct stripe_head_state *s)
+{
+	int pages;
+	int meta_size;
+	int reserve;
+	int i;
+	int ret = 0;
+	int page_count = 0;
+
+	BUG_ON(!log);
+	BUG_ON(s->to_cache == 0);
+
+	for (i = 0; i < sh->disks; i++) {
+		void *addr;
+
+		if (!test_bit(R5_Wantcache, &sh->dev[i].flags))
+			continue;
+		addr = kmap_atomic(sh->dev[i].page);
+		sh->dev[i].log_checksum = crc32c_le(log->uuid_checksum,
+						    addr, PAGE_SIZE);
+		kunmap_atomic(addr);
+		page_count++;
+	}
+	WARN_ON(page_count != s->to_cache);
+
+	pages = s->to_cache;
+
+	meta_size =
+		((sizeof(struct r5l_payload_data_parity) + sizeof(__le32))
+		 * pages);
+	/* Doesn't work with very big raid array */
+	if (meta_size + sizeof(struct r5l_meta_block) > PAGE_SIZE)
+		return -EINVAL;
+
+	/*
+	 * The stripe must enter state machine again to call endio, so
+	 * don't delay.
+	 */
+	clear_bit(STRIPE_DELAYED, &sh->state);
+	atomic_inc(&sh->count);
+
+	mutex_lock(&log->io_mutex);
+	/* meta + data */
+	reserve = (1 + pages) << (PAGE_SHIFT - 9);
+	if (!r5l_has_free_space(log, reserve)) {
+		spin_lock(&log->no_space_stripes_lock);
+		list_add_tail(&sh->log_list, &log->no_space_stripes);
+		spin_unlock(&log->no_space_stripes_lock);
+
+		r5l_wake_reclaim(log, reserve);
+	} else {
+		ret = r5l_log_stripe(log, sh, pages, 0);
+		if (ret) {
+			spin_lock_irq(&log->io_list_lock);
+			list_add_tail(&sh->log_list, &log->no_mem_stripes);
+			spin_unlock_irq(&log->io_list_lock);
+		}
+	}
+
+	mutex_unlock(&log->io_mutex);
+	return 0;
+}
+
+void r5c_do_reclaim(struct r5conf *conf)
+{
+	struct r5l_log *log = conf->log;
+
+	assert_spin_locked(&conf->device_lock);
+
+	if (!log)
+		return;
+	r5c_flush_cache(conf, 0);
+}
+
 static int r5l_load_log(struct r5l_log *log)
 {
 	struct md_rdev *rdev = log->rdev;
@@ -1230,6 +1505,9 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
 	INIT_LIST_HEAD(&log->no_space_stripes);
 	spin_lock_init(&log->no_space_stripes_lock);
 
+	/* flush full stripe */
+	log->r5c_state = R5C_STATE_WRITE_BACK;
+
 	if (r5l_load_log(log))
 		goto error;
 
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index f94472d..25b411d 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -316,8 +316,25 @@ static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh,
 			    < IO_THRESHOLD)
 				md_wakeup_thread(conf->mddev->thread);
 		atomic_dec(&conf->active_stripes);
-		if (!test_bit(STRIPE_EXPANDING, &sh->state))
-			list_add_tail(&sh->lru, temp_inactive_list);
+		if (!test_bit(STRIPE_EXPANDING, &sh->state)) {
+			if (atomic_read(&sh->dev_in_cache) == 0) {
+				list_add_tail(&sh->lru, temp_inactive_list);
+			} else if (atomic_read(&sh->dev_in_cache) ==
+				   conf->raid_disks - conf->max_degraded) {
+				/* full stripe */
+				if (!test_and_set_bit(STRIPE_R5C_FULL_STRIPE, &sh->state))
+					atomic_inc(&conf->r5c_cached_full_stripes);
+				if (test_and_clear_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state))
+					atomic_dec(&conf->r5c_cached_partial_stripes);
+				list_add_tail(&sh->lru, &conf->r5c_full_stripe_list);
+			} else {
+				/* not full stripe */
+				if (!test_and_set_bit(STRIPE_R5C_PARTIAL_STRIPE,
+						      &sh->state))
+					atomic_inc(&conf->r5c_cached_partial_stripes);
+				list_add_tail(&sh->lru, &conf->r5c_partial_stripe_list);
+			}
+		}
 	}
 }
 
@@ -901,6 +918,13 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
 
 	might_sleep();
 
+	if (s->to_cache) {
+		if (r5c_cache_data(conf->log, sh, s) == 0)
+			return;
+		/* array is so big that metadata size > PAGE_SIZE */
+		r5c_freeze_stripe_for_reclaim(sh);
+	}
+
 	if (r5l_write_stripe(conf->log, sh) == 0)
 		return;
 	for (i = disks; i--; ) {
@@ -1029,6 +1053,7 @@ again:
 
 			if (test_bit(R5_SkipCopy, &sh->dev[i].flags))
 				WARN_ON(test_bit(R5_UPTODATE, &sh->dev[i].flags));
+
 			sh->dev[i].vec.bv_page = sh->dev[i].page;
 			bi->bi_vcnt = 1;
 			bi->bi_io_vec[0].bv_len = STRIPE_SIZE;
@@ -1115,7 +1140,7 @@ again:
 static struct dma_async_tx_descriptor *
 async_copy_data(int frombio, struct bio *bio, struct page **page,
 	sector_t sector, struct dma_async_tx_descriptor *tx,
-	struct stripe_head *sh)
+	struct stripe_head *sh, int no_skipcopy)
 {
 	struct bio_vec bvl;
 	struct bvec_iter iter;
@@ -1155,7 +1180,8 @@ async_copy_data(int frombio, struct bio *bio, struct page **page,
 			if (frombio) {
 				if (sh->raid_conf->skip_copy &&
 				    b_offset == 0 && page_offset == 0 &&
-				    clen == STRIPE_SIZE)
+				    clen == STRIPE_SIZE &&
+				    !no_skipcopy)
 					*page = bio_page;
 				else
 					tx = async_memcpy(*page, bio_page, page_offset,
@@ -1237,7 +1263,7 @@ static void ops_run_biofill(struct stripe_head *sh)
 			while (rbi && rbi->bi_iter.bi_sector <
 				dev->sector + STRIPE_SECTORS) {
 				tx = async_copy_data(0, rbi, &dev->page,
-					dev->sector, tx, sh);
+						     dev->sector, tx, sh, 0);
 				rbi = r5_next_bio(rbi, dev->sector);
 			}
 		}
@@ -1364,7 +1390,8 @@ static int set_syndrome_sources(struct page **srcs,
 		if (i == sh->qd_idx || i == sh->pd_idx ||
 		    (srctype == SYNDROME_SRC_ALL) ||
 		    (srctype == SYNDROME_SRC_WANT_DRAIN &&
-		     test_bit(R5_Wantdrain, &dev->flags)) ||
+		     (test_bit(R5_Wantdrain, &dev->flags) ||
+		      test_bit(R5_InCache, &dev->flags))) ||
 		    (srctype == SYNDROME_SRC_WRITTEN &&
 		     dev->written))
 			srcs[slot] = sh->dev[i].page;
@@ -1543,9 +1570,18 @@ ops_run_compute6_2(struct stripe_head *sh, struct raid5_percpu *percpu)
 static void ops_complete_prexor(void *stripe_head_ref)
 {
 	struct stripe_head *sh = stripe_head_ref;
+	int i;
 
 	pr_debug("%s: stripe %llu\n", __func__,
 		(unsigned long long)sh->sector);
+
+	for (i = sh->disks; i--; )
+		if (sh->dev[i].page != sh->dev[i].orig_page) {
+			struct page *p = sh->dev[i].page;
+
+			sh->dev[i].page = sh->dev[i].orig_page;
+			put_page(p);
+		}
 }
 
 static struct dma_async_tx_descriptor *
@@ -1567,7 +1603,8 @@ ops_run_prexor5(struct stripe_head *sh, struct raid5_percpu *percpu,
 	for (i = disks; i--; ) {
 		struct r5dev *dev = &sh->dev[i];
 		/* Only process blocks that are known to be uptodate */
-		if (test_bit(R5_Wantdrain, &dev->flags))
+		if (test_bit(R5_Wantdrain, &dev->flags) ||
+		    test_bit(R5_InCache, &dev->flags))
 			xor_srcs[count++] = dev->page;
 	}
 
@@ -1618,6 +1655,10 @@ ops_run_biodrain(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
 
 again:
 			dev = &sh->dev[i];
+			if (test_and_clear_bit(R5_InCache, &dev->flags)) {
+				BUG_ON(atomic_read(&sh->dev_in_cache) == 0);
+				atomic_dec(&sh->dev_in_cache);
+			}
 			spin_lock_irq(&sh->stripe_lock);
 			chosen = dev->towrite;
 			dev->towrite = NULL;
@@ -1625,7 +1666,8 @@ again:
 			BUG_ON(dev->written);
 			wbi = dev->written = chosen;
 			spin_unlock_irq(&sh->stripe_lock);
-			WARN_ON(dev->page != dev->orig_page);
+			if (!test_bit(R5_Wantcache, &dev->flags))
+				WARN_ON(dev->page != dev->orig_page);
 
 			while (wbi && wbi->bi_iter.bi_sector <
 				dev->sector + STRIPE_SECTORS) {
@@ -1637,8 +1679,10 @@ again:
 					set_bit(R5_Discard, &dev->flags);
 				else {
 					tx = async_copy_data(1, wbi, &dev->page,
-						dev->sector, tx, sh);
-					if (dev->page != dev->orig_page) {
+							     dev->sector, tx, sh,
+							     test_bit(R5_Wantcache, &dev->flags));
+					if (dev->page != dev->orig_page &&
+					    !test_bit(R5_Wantcache, &dev->flags)) {
 						set_bit(R5_SkipCopy, &dev->flags);
 						clear_bit(R5_UPTODATE, &dev->flags);
 						clear_bit(R5_OVERWRITE, &dev->flags);
@@ -1746,7 +1790,8 @@ again:
 		xor_dest = xor_srcs[count++] = sh->dev[pd_idx].page;
 		for (i = disks; i--; ) {
 			struct r5dev *dev = &sh->dev[i];
-			if (head_sh->dev[i].written)
+			if (head_sh->dev[i].written ||
+			    test_bit(R5_InCache, &head_sh->dev[i].flags))
 				xor_srcs[count++] = dev->page;
 		}
 	} else {
@@ -2001,6 +2046,7 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
 		INIT_LIST_HEAD(&sh->batch_list);
 		INIT_LIST_HEAD(&sh->lru);
 		atomic_set(&sh->count, 1);
+		atomic_set(&sh->dev_in_cache, 0);
 		for (i = 0; i < disks; i++) {
 			struct r5dev *dev = &sh->dev[i];
 
@@ -2887,6 +2933,9 @@ schedule_reconstruction(struct stripe_head *sh, struct stripe_head_state *s,
 				if (!expand)
 					clear_bit(R5_UPTODATE, &dev->flags);
 				s->locked++;
+			} else if (test_bit(R5_InCache, &dev->flags)) {
+				set_bit(R5_LOCKED, &dev->flags);
+				s->locked++;
 			}
 		}
 		/* if we are not expanding this is a proper write request, and
@@ -2926,6 +2975,9 @@ schedule_reconstruction(struct stripe_head *sh, struct stripe_head_state *s,
 				set_bit(R5_LOCKED, &dev->flags);
 				clear_bit(R5_UPTODATE, &dev->flags);
 				s->locked++;
+			} else if (test_bit(R5_InCache, &dev->flags)) {
+				set_bit(R5_LOCKED, &dev->flags);
+				s->locked++;
 			}
 		}
 		if (!s->locked)
@@ -3577,6 +3629,9 @@ static void handle_stripe_dirtying(struct r5conf *conf,
 	int rmw = 0, rcw = 0, i;
 	sector_t recovery_cp = conf->mddev->recovery_cp;
 
+	if (r5c_handle_stripe_dirtying(conf, sh, s, disks) == 0)
+		return;
+
 	/* Check whether resync is now happening or should start.
 	 * If yes, then the array is dirty (after unclean shutdown or
 	 * initial creation), so parity in some stripes might be inconsistent.
@@ -3597,9 +3652,12 @@ static void handle_stripe_dirtying(struct r5conf *conf,
 	} else for (i = disks; i--; ) {
 		/* would I have to read this buffer for read_modify_write */
 		struct r5dev *dev = &sh->dev[i];
-		if ((dev->towrite || i == sh->pd_idx || i == sh->qd_idx) &&
+		if ((dev->towrite || i == sh->pd_idx || i == sh->qd_idx ||
+		     test_bit(R5_InCache, &dev->flags)) &&
 		    !test_bit(R5_LOCKED, &dev->flags) &&
-		    !(test_bit(R5_UPTODATE, &dev->flags) ||
+		    !((test_bit(R5_UPTODATE, &dev->flags) &&
+		       (!test_bit(R5_InCache, &dev->flags) ||
+			dev->page != dev->orig_page)) ||
 		      test_bit(R5_Wantcompute, &dev->flags))) {
 			if (test_bit(R5_Insync, &dev->flags))
 				rmw++;
@@ -3611,13 +3669,15 @@ static void handle_stripe_dirtying(struct r5conf *conf,
 		    i != sh->pd_idx && i != sh->qd_idx &&
 		    !test_bit(R5_LOCKED, &dev->flags) &&
 		    !(test_bit(R5_UPTODATE, &dev->flags) ||
-		    test_bit(R5_Wantcompute, &dev->flags))) {
+		      test_bit(R5_InCache, &dev->flags) ||
+		      test_bit(R5_Wantcompute, &dev->flags))) {
 			if (test_bit(R5_Insync, &dev->flags))
 				rcw++;
 			else
 				rcw += 2*disks;
 		}
 	}
+
 	pr_debug("for sector %llu, rmw=%d rcw=%d\n",
 		(unsigned long long)sh->sector, rmw, rcw);
 	set_bit(STRIPE_HANDLE, &sh->state);
@@ -3629,10 +3689,18 @@ static void handle_stripe_dirtying(struct r5conf *conf,
 					  (unsigned long long)sh->sector, rmw);
 		for (i = disks; i--; ) {
 			struct r5dev *dev = &sh->dev[i];
-			if ((dev->towrite || i == sh->pd_idx || i == sh->qd_idx) &&
+			if (test_bit(R5_InCache, &dev->flags) &&
+			    dev->page == dev->orig_page)
+				dev->page = alloc_page(GFP_NOIO);  /* prexor */
+
+			if ((dev->towrite ||
+			     i == sh->pd_idx || i == sh->qd_idx ||
+			     test_bit(R5_InCache, &dev->flags)) &&
 			    !test_bit(R5_LOCKED, &dev->flags) &&
-			    !(test_bit(R5_UPTODATE, &dev->flags) ||
-			    test_bit(R5_Wantcompute, &dev->flags)) &&
+			    !((test_bit(R5_UPTODATE, &dev->flags) &&
+			       (!test_bit(R5_InCache, &dev->flags) ||
+				dev->page != dev->orig_page)) ||
+			      test_bit(R5_Wantcompute, &dev->flags)) &&
 			    test_bit(R5_Insync, &dev->flags)) {
 				if (test_bit(STRIPE_PREREAD_ACTIVE,
 					     &sh->state)) {
@@ -3658,6 +3726,7 @@ static void handle_stripe_dirtying(struct r5conf *conf,
 			    i != sh->pd_idx && i != sh->qd_idx &&
 			    !test_bit(R5_LOCKED, &dev->flags) &&
 			    !(test_bit(R5_UPTODATE, &dev->flags) ||
+			      test_bit(R5_InCache, &dev->flags) ||
 			      test_bit(R5_Wantcompute, &dev->flags))) {
 				rcw++;
 				if (test_bit(R5_Insync, &dev->flags) &&
@@ -3697,7 +3766,7 @@ static void handle_stripe_dirtying(struct r5conf *conf,
 	 */
 	if ((s->req_compute || !test_bit(STRIPE_COMPUTE_RUN, &sh->state)) &&
 	    (s->locked == 0 && (rcw == 0 || rmw == 0) &&
-	    !test_bit(STRIPE_BIT_DELAY, &sh->state)))
+	     !test_bit(STRIPE_BIT_DELAY, &sh->state)))
 		schedule_reconstruction(sh, s, rcw == 0, 0);
 }
 
@@ -4010,6 +4079,45 @@ static void handle_stripe_expansion(struct r5conf *conf, struct stripe_head *sh)
 	async_tx_quiesce(&tx);
 }
 
+static void
+r5c_return_dev_pending_writes(struct r5conf *conf, struct r5dev *dev,
+			      struct bio_list *return_bi)
+{
+	struct bio *wbi, *wbi2;
+
+	wbi = dev->written;
+	dev->written = NULL;
+	while (wbi && wbi->bi_iter.bi_sector <
+	       dev->sector + STRIPE_SECTORS) {
+		wbi2 = r5_next_bio(wbi, dev->sector);
+		if (!raid5_dec_bi_active_stripes(wbi)) {
+			md_write_end(conf->mddev);
+			bio_list_add(return_bi, wbi);
+		}
+		wbi = wbi2;
+	}
+}
+
+static void r5c_handle_cached_data_endio(struct r5conf *conf,
+	  struct stripe_head *sh, int disks, struct bio_list *return_bi)
+{
+	int i;
+
+	for (i = sh->disks; i--; ) {
+		if (test_bit(R5_InCache, &sh->dev[i].flags) &&
+		    sh->dev[i].written) {
+			set_bit(R5_UPTODATE, &sh->dev[i].flags);
+			r5c_return_dev_pending_writes(conf, &sh->dev[i],
+						      return_bi);
+			bitmap_endwrite(conf->mddev->bitmap, sh->sector,
+					STRIPE_SECTORS,
+					!test_bit(STRIPE_DEGRADED, &sh->state),
+					0);
+		}
+	}
+	r5l_stripe_write_finished(sh);
+}
+
 /*
  * handle_stripe - do things to a stripe.
  *
@@ -4188,6 +4296,10 @@ static void analyse_stripe(struct stripe_head *sh, struct stripe_head_state *s)
 			if (rdev && !test_bit(Faulty, &rdev->flags))
 				do_recovery = 1;
 		}
+		if (test_bit(R5_InCache, &dev->flags) && dev->written)
+			s->just_cached++;
+		if (test_bit(R5_Wantcache, &dev->flags) && dev->written)
+			s->want_cache++;
 	}
 	if (test_bit(STRIPE_SYNCING, &sh->state)) {
 		/* If there is a failed device being replaced,
@@ -4353,6 +4465,16 @@ static void handle_stripe(struct stripe_head *sh)
 
 	analyse_stripe(sh, &s);
 
+	if (s.want_cache) {
+		/* we have finished r5c_handle_stripe_dirtying and
+		 * ops_run_biodrain, but r5c_cache_data didn't finish because
+		 * the journal device didn't have enough space. This time we
+		 * should skip handle_stripe_dirtying and ops_run_biodrain
+		 */
+		s.to_cache = s.want_cache;
+		goto finish;
+	}
+
 	if (test_bit(STRIPE_LOG_TRAPPED, &sh->state))
 		goto finish;
 
@@ -4416,7 +4538,7 @@ static void handle_stripe(struct stripe_head *sh)
 			struct r5dev *dev = &sh->dev[i];
 			if (test_bit(R5_LOCKED, &dev->flags) &&
 				(i == sh->pd_idx || i == sh->qd_idx ||
-				 dev->written)) {
+				 dev->written || test_bit(R5_InCache, &dev->flags))) {
 				pr_debug("Writing block %d\n", i);
 				set_bit(R5_Wantwrite, &dev->flags);
 				if (prexor)
@@ -4456,6 +4578,12 @@ static void handle_stripe(struct stripe_head *sh)
 				 test_bit(R5_Discard, &qdev->flags))))))
 		handle_stripe_clean_event(conf, sh, disks, &s.return_bi);
 
+	if (s.just_cached)
+		r5c_handle_cached_data_endio(conf, sh, disks, &s.return_bi);
+
+	if (test_bit(STRIPE_R5C_FROZEN, &sh->state))
+		r5l_stripe_write_finished(sh);
+
 	/* Now we might consider reading some blocks, either to check/generate
 	 * parity, or to satisfy requests
 	 * or to load a block that is being partially written.
@@ -4467,13 +4595,17 @@ static void handle_stripe(struct stripe_head *sh)
 	    || s.expanding)
 		handle_stripe_fill(sh, &s, disks);
 
-	/* Now to consider new write requests and what else, if anything
-	 * should be read.  We do not handle new writes when:
+	r5c_handle_stripe_written(conf, sh);
+
+	/* Now to consider new write requests, cache write back and what else,
+	 * if anything should be read.  We do not handle new writes when:
 	 * 1/ A 'write' operation (copy+xor) is already in flight.
 	 * 2/ A 'check' operation is in flight, as it may clobber the parity
 	 *    block.
+	 * 3/ A r5c cache log write is in flight.
 	 */
-	if (s.to_write && !sh->reconstruct_state && !sh->check_state)
+	if ((s.to_write || test_bit(STRIPE_R5C_FROZEN, &sh->state)) &&
+	     !sh->reconstruct_state && !sh->check_state && !sh->log_io)
 		handle_stripe_dirtying(conf, sh, &s, disks);
 
 	/* maybe we need to check and possibly fix the parity for this stripe
@@ -5192,7 +5324,7 @@ static void raid5_make_request(struct mddev *mddev, struct bio * bi)
 	 * later we might have to read it again in order to reconstruct
 	 * data on failed drives.
 	 */
-	if (rw == READ && mddev->degraded == 0 &&
+	if (rw == READ && mddev->degraded == 0 && conf->log == NULL &&
 	    mddev->reshape_position == MaxSector) {
 		bi = chunk_aligned_read(mddev, bi);
 		if (!bi)
@@ -5917,6 +6049,7 @@ static void raid5d(struct md_thread *thread)
 			md_check_recovery(mddev);
 			spin_lock_irq(&conf->device_lock);
 		}
+		r5c_do_reclaim(conf);
 	}
 	pr_debug("%d stripes handled\n", handled);
 
@@ -6583,6 +6716,11 @@ static struct r5conf *setup_conf(struct mddev *mddev)
 	for (i = 0; i < NR_STRIPE_HASH_LOCKS; i++)
 		INIT_LIST_HEAD(conf->temp_inactive_list + i);
 
+	atomic_set(&conf->r5c_cached_full_stripes, 0);
+	INIT_LIST_HEAD(&conf->r5c_full_stripe_list);
+	atomic_set(&conf->r5c_cached_partial_stripes, 0);
+	INIT_LIST_HEAD(&conf->r5c_partial_stripe_list);
+
 	conf->level = mddev->new_level;
 	conf->chunk_sectors = mddev->new_chunk_sectors;
 	if (raid5_alloc_percpu(conf) != 0)
@@ -7662,8 +7800,11 @@ static void raid5_quiesce(struct mddev *mddev, int state)
 		/* '2' tells resync/reshape to pause so that all
 		 * active stripes can drain
 		 */
+		r5c_flush_cache(conf, 0);
 		conf->quiesce = 2;
 		wait_event_cmd(conf->wait_for_quiescent,
+				    atomic_read(&conf->r5c_cached_partial_stripes) == 0 &&
+				    atomic_read(&conf->r5c_cached_full_stripes) == 0 &&
 				    atomic_read(&conf->active_stripes) == 0 &&
 				    atomic_read(&conf->active_aligned_reads) == 0,
 				    unlock_all_device_hash_locks_irq(conf),
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 517d4b6..71e67ba 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -226,6 +226,7 @@ struct stripe_head {
 
 	struct r5l_io_unit	*log_io;
 	struct list_head	log_list;
+	atomic_t		dev_in_cache;
 	/**
 	 * struct stripe_operations
 	 * @target - STRIPE_OP_COMPUTE_BLK target
@@ -263,6 +264,7 @@ struct stripe_head_state {
 	 */
 	int syncing, expanding, expanded, replacing;
 	int locked, uptodate, to_read, to_write, failed, written;
+	int to_cache, want_cache, just_cached;
 	int to_fill, compute, req_compute, non_overwrite;
 	int failed_num[2];
 	int p_failed, q_failed;
@@ -313,6 +315,8 @@ enum r5dev_flags {
 			 */
 	R5_Discard,	/* Discard the stripe */
 	R5_SkipCopy,	/* Don't copy data from bio to stripe cache */
+	R5_Wantcache,	/* Want write data to write cache */
+	R5_InCache,	/* Data in cache */
 };
 
 /*
@@ -345,7 +349,13 @@ enum {
 	STRIPE_BITMAP_PENDING,	/* Being added to bitmap, don't add
 				 * to batch yet.
 				 */
-	STRIPE_LOG_TRAPPED, /* trapped into log */
+	STRIPE_LOG_TRAPPED,	/* trapped into log */
+	STRIPE_R5C_PARTIAL_STRIPE,	/* in r5c cache (to-be/being handled or
+					 * in conf->r5c_partial_stripe_list) */
+	STRIPE_R5C_FULL_STRIPE,	/* in r5c cache (to-be/being handled or
+				 * in conf->r5c_full_stripe_list) */
+	STRIPE_R5C_FROZEN,	/* r5c_cache frozen and being written out */
+	STRIPE_R5C_WRITTEN,	/* ready for r5c_handle_stripe_written() */
 };
 
 #define STRIPE_EXPAND_SYNC_FLAGS \
@@ -521,6 +531,12 @@ struct r5conf {
 	 */
 	atomic_t		active_stripes;
 	struct list_head	inactive_list[NR_STRIPE_HASH_LOCKS];
+
+	atomic_t		r5c_cached_full_stripes;
+	struct list_head	r5c_full_stripe_list;
+	atomic_t		r5c_cached_partial_stripes;
+	struct list_head	r5c_partial_stripe_list;
+
 	atomic_t		empty_inactive_list_nr;
 	struct llist_head	released_stripes;
 	wait_queue_head_t	wait_for_quiescent;
@@ -635,4 +651,17 @@ extern void r5l_stripe_write_finished(struct stripe_head *sh);
 extern int r5l_handle_flush_request(struct r5l_log *log, struct bio *bio);
 extern void r5l_quiesce(struct r5l_log *log, int state);
 extern bool r5l_log_disk_error(struct r5conf *conf);
+extern void r5l_wake_reclaim(struct r5l_log *log, sector_t space);
+extern int
+r5c_handle_stripe_dirtying(struct r5conf *conf, struct stripe_head *sh,
+			   struct stripe_head_state *s, int disks);
+extern int r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
+			  struct stripe_head_state *s);
+extern int r5c_cache_parity(struct r5l_log *log, struct stripe_head *sh,
+			    struct stripe_head_state *s);
+extern void
+r5c_handle_stripe_written(struct r5conf *conf, struct stripe_head *sh);
+extern void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh);
+extern void r5c_do_reclaim(struct r5conf *conf);
+extern int r5c_flush_cache(struct r5conf *conf, int num);
 #endif
-- 
2.9.3



* [PATCH v2 2/6] r5cache: sysfs entry r5c_state
  2016-09-26 23:30 [PATCH v2 0/6] raid5-cache: enabling cache features Song Liu
  2016-09-26 23:30 ` [PATCH v2 1/6] r5cache: write part of r5cache Song Liu
@ 2016-09-26 23:30 ` Song Liu
  2016-09-27 22:58   ` Shaohua Li
  2016-09-26 23:30 ` [PATCH v2 3/6] r5cache: reclaim support Song Liu
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Song Liu @ 2016-09-26 23:30 UTC (permalink / raw)
  To: linux-raid
  Cc: neilb, shli, kernel-team, dan.j.williams, hch, liuzhengyuang521,
	liuzhengyuan, Song Liu

r5c_state has 4 states:
* no-cache;
* write-through (write journal only);
* write-back (w/ write cache);
* cache-broken (journal missing or Faulty)

When there is a functional write cache, r5c_state is a knob for
switching between write-back and write-through.

When the journal device is broken, the raid array is forced into
read-only mode. In this case, r5c_state can be used to remove the
"journal feature" and thus make the array read-write without a
journal: by writing into r5c_state, the array can transition from
cache-broken to no-cache, which removes the journal feature from the
array.
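
As a usage sketch (assuming the array is md0 and the attribute shows
up in the usual md sysfs location): the current mode can be read with
"cat /sys/block/md0/md/r5c_state", and once the journal is gone
(cache-broken), "echo 0 > /sys/block/md0/md/r5c_state" would drop the
journal feature so the array can run read-write without it.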

Signed-off-by: Song Liu <songliubraving@fb.com>
---
 drivers/md/raid5-cache.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++
 drivers/md/raid5.c       |  1 +
 drivers/md/raid5.h       |  2 ++
 3 files changed, 60 insertions(+)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 6b28461..0a0b16a 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -54,6 +54,9 @@ enum r5c_state {
 	R5C_STATE_CACHE_BROKEN = 3,
 };
 
+static char *r5c_state_str[] = {"no-cache", "write-through",
+				"write-back", "cache-broken"};
+
 struct r5l_log {
 	struct md_rdev *rdev;
 
@@ -1242,6 +1245,60 @@ int r5c_flush_cache(struct r5conf *conf, int num)
 	return count;
 }
 
+ssize_t r5c_state_show(struct mddev *mddev, char *page)
+{
+	struct r5conf *conf = mddev->private;
+	int val = 0;
+	int ret = 0;
+
+	if (conf->log)
+		val = conf->log->r5c_state;
+	else if (test_bit(MD_HAS_JOURNAL, &mddev->flags))
+		val = R5C_STATE_CACHE_BROKEN;
+	ret += snprintf(page, PAGE_SIZE - ret, "%d: %s\n",
+			val, r5c_state_str[val]);
+	return ret;
+}
+
+ssize_t r5c_state_store(struct mddev *mddev, const char *page, size_t len)
+{
+	struct r5conf *conf = mddev->private;
+	struct r5l_log *log = conf->log;
+	int val;
+
+	if (kstrtoint(page, 10, &val))
+		return -EINVAL;
+	if (!log && val != R5C_STATE_NO_CACHE)
+		return -EINVAL;
+
+	if (val < R5C_STATE_NO_CACHE || val > R5C_STATE_WRITE_BACK)
+		return -EINVAL;
+	if (val == R5C_STATE_NO_CACHE) {
+		if (conf->log &&
+		    !test_bit(Faulty, &log->rdev->flags)) {
+			pr_err("md/raid:%s: journal device is in use, cannot remove it\n",
+			       mdname(mddev));
+			return -EINVAL;
+		}
+	}
+
+	spin_lock_irq(&conf->device_lock);
+	if (log)
+		conf->log->r5c_state = val;
+	if (val == R5C_STATE_NO_CACHE) {
+		clear_bit(MD_HAS_JOURNAL, &mddev->flags);
+		set_bit(MD_UPDATE_SB_FLAGS, &mddev->flags);
+	}
+	spin_unlock_irq(&conf->device_lock);
+	pr_info("md/raid:%s: setting r5c cache mode to %d: %s\n",
+		mdname(mddev), val, r5c_state_str[val]);
+	return len;
+}
+
+struct md_sysfs_entry
+r5c_state = __ATTR(r5c_state, S_IRUGO | S_IWUSR,
+		   r5c_state_show, r5c_state_store);
+
 int r5c_handle_stripe_dirtying(struct r5conf *conf,
 			       struct stripe_head *sh,
 			       struct stripe_head_state *s,
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 25b411d..cc4ac1d 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -6375,6 +6375,7 @@ static struct attribute *raid5_attrs[] =  {
 	&raid5_group_thread_cnt.attr,
 	&raid5_skip_copy.attr,
 	&raid5_rmw_level.attr,
+	&r5c_state.attr,
 	NULL,
 };
 static struct attribute_group raid5_attrs_group = {
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 71e67ba..2ae027c 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -664,4 +664,6 @@ r5c_handle_stripe_written(struct r5conf *conf, struct stripe_head *sh);
 extern void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh);
 extern void r5c_do_reclaim(struct r5conf *conf);
 extern int r5c_flush_cache(struct r5conf *conf, int num);
+extern struct md_sysfs_entry r5c_state;
+
 #endif
-- 
2.9.3



* [PATCH v2 3/6] r5cache: reclaim support
  2016-09-26 23:30 [PATCH v2 0/6] raid5-cache: enabling cache features Song Liu
  2016-09-26 23:30 ` [PATCH v2 1/6] r5cache: write part of r5cache Song Liu
  2016-09-26 23:30 ` [PATCH v2 2/6] r5cache: sysfs entry r5c_state Song Liu
@ 2016-09-26 23:30 ` Song Liu
  2016-09-28  0:34   ` Shaohua Li
  2016-09-26 23:30 ` [PATCH v2 4/6] r5cache: r5c recovery Song Liu
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Song Liu @ 2016-09-26 23:30 UTC (permalink / raw)
  To: linux-raid
  Cc: neilb, shli, kernel-team, dan.j.williams, hch, liuzhengyuang521,
	liuzhengyuan, Song Liu

There are two limited resources: stripe cache and journal disk space.
For better performance, we prioritize reclaim of full stripe writes.
To free up more journal space, we free the earliest data on the
journal.

In the current implementation, reclaim happens:
1. every R5C_RECLAIM_WAKEUP_INTERVAL (5 seconds)
2. when there are R5C_FULL_STRIPE_FLUSH_BATCH (8) cached full stripes
   (r5c_check_cached_full_stripe)
3. when raid5_get_active_stripe sees pressure in stripe cache space
   (r5c_check_stripe_cache_usage)
4. when there is pressure in journal space.

1-3 above are straightforward.

For 4, we add 2 flags to r5conf->cache_state: R5C_LOG_TIGHT and
R5C_LOG_CRITICAL. R5C_LOG_TIGHT is set when more than 2x
max_free_space of journal space is in use, while R5C_LOG_CRITICAL is
set when more than 3x max_free_space is in use, where max_free_space
= min(1/4 of the journal space, 10GB).
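
For example, on a hypothetical 64GB journal, max_free_space =
min(16GB, 10GB) = 10GB, so R5C_LOG_TIGHT is set once more than 20GB of
log space is in use and R5C_LOG_CRITICAL once more than 30GB is in
use. R5C_LOG_CRITICAL clears again as soon as usage drops back to 30GB
or below, while R5C_LOG_TIGHT only clears when usage drops below 10GB
(max_free_space), which gives the reclaim some hysteresis.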

r5c_cache keeps all data in the cache (not yet fully committed to the
RAID disks) in a list (stripe_in_cache). These stripes are kept in the
order of their first appearance in the journal, so the log tail
(last_checkpoint) should point to the journal start (sh->log_start) of
the first item in the list.

When R5C_LOG_TIGHT is set, r5l_reclaim_thread starts freezing
stripes at the head of stripe_in_cache. When R5C_LOG_CRITICAL is
set, the state machine only processes stripes at the head of
stripe_in_cache (other stripes are added to no_space_stripes in
r5c_cache_data and r5l_write_stripe).
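
A minimal standalone illustration of that tail calculation (not the
kernel code; the names and the ring arithmetic are simplified from the
patch): the tail may only advance to the journal start of the oldest
cached stripe, and everything behind it is reclaimable.

#include <stdio.h>

#define LOG_SIZE 1024	/* journal size in sectors, hypothetical */

/* distance from 'start' to 'end' on the circular log */
static unsigned ring_distance(unsigned start, unsigned end)
{
	return (end >= start) ? end - start : end + LOG_SIZE - start;
}

int main(void)
{
	/* log_start of each cached stripe, oldest first (stripe_in_cache) */
	unsigned stripe_log_start[] = { 900, 80, 200 };	/* wraps around */
	unsigned last_checkpoint = 700;

	/* the tail may only advance to the oldest cached stripe */
	unsigned next_cp = stripe_log_start[0];
	printf("reclaimable = %u sectors\n",
	       ring_distance(last_checkpoint, next_cp));
	return 0;
}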

Signed-off-by: Song Liu <songliubraving@fb.com>
---
 drivers/md/raid5-cache.c | 313 +++++++++++++++++++++++++++++++++++++++--------
 drivers/md/raid5.c       |  31 +++--
 drivers/md/raid5.h       |  37 ++++--
 3 files changed, 313 insertions(+), 68 deletions(-)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 0a0b16a..75b70d8 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -28,8 +28,7 @@
 #define BLOCK_SECTORS (8)
 
 /*
- * reclaim runs every 1/4 disk size or 10G reclaimable space. This can prevent
- * recovery scans a very long log
+ * log->max_free_space is min(1/4 disk size, 10G reclaimable space)
  */
 #define RECLAIM_MAX_FREE_SPACE (10 * 1024 * 1024 * 2) /* sector */
 #define RECLAIM_MAX_FREE_SPACE_SHIFT (2)
@@ -116,6 +115,9 @@ struct r5l_log {
 
 	/* for r5c_cache */
 	enum r5c_state r5c_state;
+	struct list_head stripe_in_cache; /* all stripes in the cache, with
+					   * sh->log_start in order */
+	spinlock_t stripe_in_cache_lock;  /* lock for stripe_in_cache */
 };
 
 /*
@@ -180,6 +182,16 @@ static bool r5l_has_free_space(struct r5l_log *log, sector_t size)
 	return log->device_size > used_size + size;
 }
 
+static sector_t r5l_used_space(struct r5l_log *log)
+{
+	sector_t ret;
+
+	WARN_ON(!mutex_is_locked(&log->io_mutex));
+	ret = r5l_ring_distance(log, log->last_checkpoint,
+				log->log_start);
+	return ret;
+}
+
 static void __r5l_set_io_unit_state(struct r5l_io_unit *io,
 				    enum r5l_io_unit_state state)
 {
@@ -188,6 +200,56 @@ static void __r5l_set_io_unit_state(struct r5l_io_unit *io,
 	io->state = state;
 }
 
+static inline int r5c_total_cached_stripes(struct r5conf *conf)
+{
+	return atomic_read(&conf->r5c_cached_partial_stripes) +
+		atomic_read(&conf->r5c_cached_full_stripes);
+}
+
+/*
+ * check whether we should flush some stripes to free up stripe cache
+ */
+void r5c_check_stripe_cache_usage(struct r5conf *conf)
+{
+	if (!conf->log)
+		return;
+	spin_lock(&conf->device_lock);
+	if (r5c_total_cached_stripes(conf) > conf->max_nr_stripes * 3 / 4 ||
+	    atomic_read(&conf->empty_inactive_list_nr) > 0)
+		r5c_flush_cache(conf, R5C_RECLAIM_STRIPE_GROUP);
+	else if (r5c_total_cached_stripes(conf) >
+		 conf->max_nr_stripes * 1 / 2)
+		r5c_flush_cache(conf, 1);
+	spin_unlock(&conf->device_lock);
+}
+
+void r5c_check_cached_full_stripe(struct r5conf *conf)
+{
+	if (!conf->log)
+		return;
+	if (atomic_read(&conf->r5c_cached_full_stripes) >=
+	    R5C_FULL_STRIPE_FLUSH_BATCH)
+		r5l_wake_reclaim(conf->log, 0);
+}
+
+static void r5c_update_log_state(struct r5l_log *log)
+{
+	struct r5conf *conf = log->rdev->mddev->private;
+	sector_t used_space = r5l_used_space(log);
+
+	if (used_space > 3 * log->max_free_space) {
+		set_bit(R5C_LOG_CRITICAL, &conf->cache_state);
+		set_bit(R5C_LOG_TIGHT, &conf->cache_state);
+	} else if (used_space > 2 * log->max_free_space) {
+		clear_bit(R5C_LOG_CRITICAL, &conf->cache_state);
+		set_bit(R5C_LOG_TIGHT, &conf->cache_state);
+	} else if (used_space < log->max_free_space) {
+		clear_bit(R5C_LOG_TIGHT, &conf->cache_state);
+		clear_bit(R5C_LOG_CRITICAL, &conf->cache_state);
+	} else  /* max_free_space < used_space < 2 * max_free_space */
+		clear_bit(R5C_LOG_CRITICAL, &conf->cache_state);
+}
+
 /*
  * Freeze the stripe, thus send the stripe into reclaim path.
  *
@@ -198,10 +260,9 @@ void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh)
 {
 	struct r5conf *conf = sh->raid_conf;
 
-	if (!conf->log)
+	if (!conf->log || test_bit(STRIPE_R5C_FROZEN, &sh->state))
 		return;
 
-	WARN_ON(test_bit(STRIPE_R5C_FROZEN, &sh->state));
 	set_bit(STRIPE_R5C_FROZEN, &sh->state);
 
 	if (!test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
@@ -356,8 +417,11 @@ static struct bio *r5l_bio_alloc(struct r5l_log *log)
 
 static void r5_reserve_log_entry(struct r5l_log *log, struct r5l_io_unit *io)
 {
+	WARN_ON(!mutex_is_locked(&log->io_mutex));
+	WARN_ON(!r5l_has_free_space(log, BLOCK_SECTORS));
 	log->log_start = r5l_ring_add(log, log->log_start, BLOCK_SECTORS);
 
+	r5c_update_log_state(log);
 	/*
 	 * If we filled up the log device start from the beginning again,
 	 * which will require a new bio.
@@ -475,6 +539,7 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
 	int meta_size;
 	int ret;
 	struct r5l_io_unit *io;
+	unsigned long flags;
 
 	meta_size =
 		((sizeof(struct r5l_payload_data_parity) + sizeof(__le32))
@@ -518,6 +583,14 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
 	atomic_inc(&io->pending_stripe);
 	sh->log_io = io;
 
+	if (sh->log_start == MaxSector) {
+		BUG_ON(!list_empty(&sh->r5c));
+		sh->log_start = io->log_start;
+		spin_lock_irqsave(&log->stripe_in_cache_lock, flags);
+		list_add_tail(&sh->r5c,
+			      &log->stripe_in_cache);
+		spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
+	}
 	return 0;
 }
 
@@ -527,6 +600,7 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
  */
 int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
 {
+	struct r5conf *conf = sh->raid_conf;
 	int write_disks = 0;
 	int data_pages, parity_pages;
 	int meta_size;
@@ -590,19 +664,31 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
 	mutex_lock(&log->io_mutex);
 	/* meta + data */
 	reserve = (1 + write_disks) << (PAGE_SHIFT - 9);
-	if (!r5l_has_free_space(log, reserve)) {
-		spin_lock(&log->no_space_stripes_lock);
-		list_add_tail(&sh->log_list, &log->no_space_stripes);
-		spin_unlock(&log->no_space_stripes_lock);
 
-		r5l_wake_reclaim(log, reserve);
-	} else {
-		ret = r5l_log_stripe(log, sh, data_pages, parity_pages);
-		if (ret) {
-			spin_lock_irq(&log->io_list_lock);
-			list_add_tail(&sh->log_list, &log->no_mem_stripes);
-			spin_unlock_irq(&log->io_list_lock);
+	if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state)) {
+		sector_t last_checkpoint;
+
+		spin_lock(&log->stripe_in_cache_lock);
+		last_checkpoint = (list_first_entry(&log->stripe_in_cache,
+						    struct stripe_head, r5c))->log_start;
+		spin_unlock(&log->stripe_in_cache_lock);
+		if (sh->log_start != last_checkpoint) {
+			spin_lock(&log->no_space_stripes_lock);
+			list_add_tail(&sh->log_list, &log->no_space_stripes);
+			spin_unlock(&log->no_space_stripes_lock);
+			mutex_unlock(&log->io_mutex);
+			return -ENOSPC;
+		} else 	if (!r5l_has_free_space(log, reserve)) {
+			WARN(1, "%s: run out of journal space\n", __func__);
+			BUG();
 		}
+		pr_debug("%s: write sh %lu to free log space\n", __func__, sh->sector);
+	}
+	ret = r5l_log_stripe(log, sh, data_pages, parity_pages);
+	if (ret) {
+		spin_lock_irq(&log->io_list_lock);
+		list_add_tail(&sh->log_list, &log->no_mem_stripes);
+		spin_unlock_irq(&log->io_list_lock);
 	}
 
 	mutex_unlock(&log->io_mutex);
@@ -639,12 +725,17 @@ int r5l_handle_flush_request(struct r5l_log *log, struct bio *bio)
 /* This will run after log space is reclaimed */
 static void r5l_run_no_space_stripes(struct r5l_log *log)
 {
-	struct stripe_head *sh;
+	struct r5conf *conf = log->rdev->mddev->private;
+	struct stripe_head *sh, *next;
+	sector_t last_checkpoint;
 
 	spin_lock(&log->no_space_stripes_lock);
-	while (!list_empty(&log->no_space_stripes)) {
-		sh = list_first_entry(&log->no_space_stripes,
-				      struct stripe_head, log_list);
+	last_checkpoint = (list_first_entry(&log->stripe_in_cache,
+					    struct stripe_head, r5c))->log_start;
+	list_for_each_entry_safe(sh, next, &log->no_space_stripes, log_list) {
+		if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state) &&
+		    sh->log_start != last_checkpoint)
+			continue;
 		list_del_init(&sh->log_list);
 		set_bit(STRIPE_HANDLE, &sh->state);
 		raid5_release_stripe(sh);
@@ -652,10 +743,32 @@ static void r5l_run_no_space_stripes(struct r5l_log *log)
 	spin_unlock(&log->no_space_stripes_lock);
 }
 
+static sector_t r5c_calculate_last_cp(struct r5conf *conf)
+{
+	struct stripe_head *sh;
+	struct r5l_log *log = conf->log;
+	sector_t end = MaxSector;
+	unsigned long flags;
+
+	spin_lock_irqsave(&log->stripe_in_cache_lock, flags);
+	if (list_empty(&conf->log->stripe_in_cache)) {
+		/* all stripes flushed */
+		spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
+		return log->next_checkpoint;
+	}
+	sh = list_first_entry(&conf->log->stripe_in_cache,
+			      struct stripe_head, r5c);
+	end = sh->log_start;
+	spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
+	return end;
+}
+
 static sector_t r5l_reclaimable_space(struct r5l_log *log)
 {
+	struct r5conf *conf = log->rdev->mddev->private;
+
 	return r5l_ring_distance(log, log->last_checkpoint,
-				 log->next_checkpoint);
+				 r5c_calculate_last_cp(conf));
 }
 
 static void r5l_run_no_mem_stripe(struct r5l_log *log)
@@ -830,14 +943,21 @@ static void r5l_write_super_and_discard_space(struct r5l_log *log,
 		blkdev_issue_discard(bdev, log->rdev->data_offset, end,
 				GFP_NOIO, 0);
 	}
+	mutex_lock(&log->io_mutex);
+	log->last_checkpoint = end;
+	r5c_update_log_state(log);
+	pr_debug("%s: set last_checkpoint = %lu\n", __func__, end);
+
+	log->last_cp_seq = log->next_cp_seq;
+	mutex_unlock(&log->io_mutex);
 }
 
 static void r5l_do_reclaim(struct r5l_log *log)
 {
+	struct r5conf *conf = log->rdev->mddev->private;
 	sector_t reclaim_target = xchg(&log->reclaim_target, 0);
 	sector_t reclaimable;
 	sector_t next_checkpoint;
-	u64 next_cp_seq;
 
 	spin_lock_irq(&log->io_list_lock);
 	/*
@@ -860,14 +980,12 @@ static void r5l_do_reclaim(struct r5l_log *log)
 				    log->io_list_lock);
 	}
 
-	next_checkpoint = log->next_checkpoint;
-	next_cp_seq = log->next_cp_seq;
+	next_checkpoint = r5c_calculate_last_cp(conf);
 	spin_unlock_irq(&log->io_list_lock);
 
 	BUG_ON(reclaimable < 0);
 	if (reclaimable == 0)
 		return;
-
 	/*
 	 * write_super will flush cache of each raid disk. We must write super
 	 * here, because the log area might be reused soon and we don't want to
@@ -877,10 +995,7 @@ static void r5l_do_reclaim(struct r5l_log *log)
 
 	mutex_lock(&log->io_mutex);
 	log->last_checkpoint = next_checkpoint;
-	log->last_cp_seq = next_cp_seq;
 	mutex_unlock(&log->io_mutex);
-
-	r5l_run_no_space_stripes(log);
 }
 
 static void r5l_reclaim_thread(struct md_thread *thread)
@@ -891,7 +1006,9 @@ static void r5l_reclaim_thread(struct md_thread *thread)
 
 	if (!log)
 		return;
+	r5c_do_reclaim(conf);
 	r5l_do_reclaim(log);
+	md_wakeup_thread(mddev->thread);
 }
 
 void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
@@ -899,6 +1016,8 @@ void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
 	unsigned long target;
 	unsigned long new = (unsigned long)space; /* overflow in theory */
 
+	if (!log)
+		return;
 	do {
 		target = log->reclaim_target;
 		if (new < target)
@@ -926,7 +1045,7 @@ void r5l_quiesce(struct r5l_log *log, int state)
 		/* make sure r5l_write_super_and_discard_space exits */
 		mddev = log->rdev->mddev;
 		wake_up(&mddev->sb_wait);
-		r5l_wake_reclaim(log, -1L);
+		r5l_wake_reclaim(log, MaxSector);
 		md_unregister_thread(&log->reclaim_thread);
 		r5l_do_reclaim(log);
 	}
@@ -1207,14 +1326,39 @@ static void r5l_write_super(struct r5l_log *log, sector_t cp)
 	set_bit(MD_CHANGE_DEVS, &mddev->flags);
 }
 
-static void r5c_flush_stripe(struct r5conf *conf, struct stripe_head *sh)
+/*
+ * r5c_flush_cache will move stripe from cached list to handle_list or
+ * r5c_priority_list
+ *
+ * return 1 if the stripe is moved, and 0 if the stripe is not moved
+ * must hold conf->device_lock
+ */
+static int r5c_flush_stripe(struct r5conf *conf, struct stripe_head *sh,
+			    bool priority)
 {
-	list_del_init(&sh->lru);
+	BUG_ON(list_empty(&sh->lru));
+
+	BUG_ON(test_bit(STRIPE_R5C_PRIORITY, &sh->state) &&
+	       !test_bit(STRIPE_HANDLE, &sh->state));
+
+	if (test_bit(STRIPE_R5C_PRIORITY, &sh->state))
+		return 0;
+	if (test_bit(STRIPE_HANDLE, &sh->state) && !priority)
+		return 0;
+
 	r5c_freeze_stripe_for_reclaim(sh);
-	atomic_inc(&conf->active_stripes);
+	if (!test_and_set_bit(STRIPE_HANDLE, &sh->state)) {
+		atomic_inc(&conf->active_stripes);
+	}
+	clear_bit(STRIPE_DELAYED, &sh->state);
+	clear_bit(STRIPE_BIT_DELAY, &sh->state);
+	if (priority)
+		set_bit(STRIPE_R5C_PRIORITY, &sh->state);
+
+	list_del_init(&sh->lru);
 	atomic_inc(&sh->count);
-	set_bit(STRIPE_HANDLE, &sh->state);
 	raid5_release_stripe(sh);
+	return 1;
 }
 
 /* if num <= 0, flush all stripes
@@ -1228,20 +1372,28 @@ int r5c_flush_cache(struct r5conf *conf, int num)
 	assert_spin_locked(&conf->device_lock);
 	if (!conf->log)
 		return 0;
+
 	list_for_each_entry_safe(sh, next, &conf->r5c_full_stripe_list, lru) {
-		r5c_flush_stripe(conf, sh);
-		count++;
+		count += r5c_flush_stripe(conf, sh, false);
 		if (num > 0 && count >= num && count >=
 		    R5C_FULL_STRIPE_FLUSH_BATCH)
 			return count;
 	}
 
 	list_for_each_entry_safe(sh, next, &conf->r5c_partial_stripe_list, lru) {
-		r5c_flush_stripe(conf, sh);
-		count++;
+		count += r5c_flush_stripe(conf, sh, false);
 		if (num > 0 && count == num)
 			return count;
 	}
+
+	if (num <= 0) {
+		list_for_each_entry_safe(sh, next, &conf->delayed_list, lru) {
+			if (test_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state) ||
+			    test_bit(STRIPE_R5C_FULL_STRIPE, &sh->state))
+				r5c_flush_stripe(conf, sh, false);
+		}
+		r5l_run_no_space_stripes(conf->log);
+	}
 	return count;
 }
 
@@ -1349,6 +1501,7 @@ void r5c_handle_stripe_written(struct r5conf *conf,
 			       struct stripe_head *sh) {
 	int i;
 	int do_wakeup = 0;
+	unsigned long flags;
 
 	if (test_and_clear_bit(STRIPE_R5C_WRITTEN, &sh->state)) {
 		WARN_ON(!test_bit(STRIPE_R5C_FROZEN, &sh->state));
@@ -1361,6 +1514,11 @@ void r5c_handle_stripe_written(struct r5conf *conf,
 			if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
 				do_wakeup = 1;
 		}
+		spin_lock_irqsave(&conf->log->stripe_in_cache_lock, flags);
+		list_del_init(&sh->r5c);
+		spin_unlock_irqrestore(&conf->log->stripe_in_cache_lock, flags);
+		sh->log_start = MaxSector;
+		clear_bit(STRIPE_R5C_PRIORITY, &sh->state);
 	}
 
 	if (do_wakeup)
@@ -1371,6 +1529,7 @@ int
 r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
 	       struct stripe_head_state *s)
 {
+	struct r5conf *conf = sh->raid_conf;
 	int pages;
 	int meta_size;
 	int reserve;
@@ -1413,19 +1572,33 @@ r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
 	mutex_lock(&log->io_mutex);
 	/* meta + data */
 	reserve = (1 + pages) << (PAGE_SHIFT - 9);
-	if (!r5l_has_free_space(log, reserve)) {
-		spin_lock(&log->no_space_stripes_lock);
-		list_add_tail(&sh->log_list, &log->no_space_stripes);
-		spin_unlock(&log->no_space_stripes_lock);
 
-		r5l_wake_reclaim(log, reserve);
-	} else {
-		ret = r5l_log_stripe(log, sh, pages, 0);
-		if (ret) {
-			spin_lock_irq(&log->io_list_lock);
-			list_add_tail(&sh->log_list, &log->no_mem_stripes);
-			spin_unlock_irq(&log->io_list_lock);
+	if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state)) {
+		sector_t last_checkpoint;
+
+		spin_lock(&log->stripe_in_cache_lock);
+		last_checkpoint = (list_first_entry(&log->stripe_in_cache,
+						    struct stripe_head, r5c))->log_start;
+		spin_unlock(&log->stripe_in_cache_lock);
+		if (sh->log_start != last_checkpoint) {
+			spin_lock(&log->no_space_stripes_lock);
+			list_add_tail(&sh->log_list, &log->no_space_stripes);
+			spin_unlock(&log->no_space_stripes_lock);
+
+			mutex_unlock(&log->io_mutex);
+			return -ENOSPC;
 		}
+		pr_debug("%s: write sh %lu to free log space\n", __func__, sh->sector);
+	}
+	if (!r5l_has_free_space(log, reserve)) {
+		pr_err("%s: cannot reserve space %d\n", __func__, reserve);
+		BUG();
+	}
+	ret = r5l_log_stripe(log, sh, pages, 0);
+	if (ret) {
+		spin_lock_irq(&log->io_list_lock);
+		list_add_tail(&sh->log_list, &log->no_mem_stripes);
+		spin_unlock_irq(&log->io_list_lock);
 	}
 
 	mutex_unlock(&log->io_mutex);
@@ -1435,12 +1608,45 @@ r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
 void r5c_do_reclaim(struct r5conf *conf)
 {
 	struct r5l_log *log = conf->log;
-
-	assert_spin_locked(&conf->device_lock);
+	struct stripe_head *sh, *next;
+	int count = 0;
+	unsigned long flags;
+	sector_t last_checkpoint;
 
 	if (!log)
 		return;
-	r5c_flush_cache(conf, 0);
+
+	if (!test_bit(R5C_LOG_CRITICAL, &conf->cache_state)) {
+		/* flush all full stripes */
+		spin_lock_irqsave(&conf->device_lock, flags);
+		list_for_each_entry_safe(sh, next, &conf->r5c_full_stripe_list, lru)
+			r5c_flush_stripe(conf, sh, false);
+		spin_unlock_irqrestore(&conf->device_lock, flags);
+	}
+
+	if (test_bit(R5C_LOG_TIGHT, &conf->cache_state)) {
+		spin_lock_irqsave(&log->stripe_in_cache_lock, flags);
+		spin_lock(&conf->device_lock);
+		last_checkpoint = (list_first_entry(&log->stripe_in_cache,
+						    struct stripe_head, r5c))->log_start;
+		list_for_each_entry(sh, &log->stripe_in_cache, r5c) {
+			if (sh->log_start == last_checkpoint) {
+				if (!list_empty(&sh->lru))
+					r5c_flush_stripe(conf, sh, true);
+			} else
+				break;
+		}
+		spin_unlock(&conf->device_lock);
+		spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
+		pr_debug("%s: flushed %d stripes for log space\n", __func__, count);
+	} else if (test_bit(R5_INACTIVE_BLOCKED, &conf->cache_state)) {
+		spin_lock_irqsave(&conf->device_lock, flags);
+		r5c_flush_cache(conf, R5C_RECLAIM_STRIPE_GROUP);
+		spin_unlock_irqrestore(&conf->device_lock, flags);
+	}
+	wake_up(&conf->wait_for_stripe);
+	md_wakeup_thread(conf->mddev->thread);
+	r5l_run_no_space_stripes(log);
 }
 
 static int r5l_load_log(struct r5l_log *log)
@@ -1500,6 +1706,9 @@ create:
 	if (log->max_free_space > RECLAIM_MAX_FREE_SPACE)
 		log->max_free_space = RECLAIM_MAX_FREE_SPACE;
 	log->last_checkpoint = cp;
+	mutex_lock(&log->io_mutex);
+	r5c_update_log_state(log);
+	mutex_unlock(&log->io_mutex);
 
 	__free_page(page);
 
@@ -1555,6 +1764,8 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
 						 log->rdev->mddev, "reclaim");
 	if (!log->reclaim_thread)
 		goto reclaim_thread;
+	log->reclaim_thread->timeout = R5C_RECLAIM_WAKEUP_INTERVAL;
+
 	init_waitqueue_head(&log->iounit_wait);
 
 	INIT_LIST_HEAD(&log->no_mem_stripes);
@@ -1564,6 +1775,8 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
 
 	/* flush full stripe */
 	log->r5c_state = R5C_STATE_WRITE_BACK;
+	INIT_LIST_HEAD(&log->stripe_in_cache);
+	spin_lock_init(&log->stripe_in_cache_lock);
 
 	if (r5l_load_log(log))
 		goto error;
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index cc4ac1d..a3d26ec 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -301,7 +301,9 @@ static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh,
 		else {
 			clear_bit(STRIPE_DELAYED, &sh->state);
 			clear_bit(STRIPE_BIT_DELAY, &sh->state);
-			if (conf->worker_cnt_per_group == 0) {
+			if (test_bit(STRIPE_R5C_PRIORITY, &sh->state))
+				list_add_tail(&sh->lru, &conf->r5c_priority_list);
+			else if (conf->worker_cnt_per_group == 0) {
 				list_add_tail(&sh->lru, &conf->handle_list);
 			} else {
 				raid5_wakeup_stripe_thread(sh);
@@ -327,6 +329,7 @@ static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh,
 				if (test_and_clear_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state))
 					atomic_dec(&conf->r5c_cached_partial_stripes);
 				list_add_tail(&sh->lru, &conf->r5c_full_stripe_list);
+				r5c_check_cached_full_stripe(conf);
 			} else {
 				/* not full stripe */
 				if (!test_and_set_bit(STRIPE_R5C_PARTIAL_STRIPE,
@@ -697,9 +700,14 @@ raid5_get_active_stripe(struct r5conf *conf, sector_t sector,
 			}
 			if (noblock && sh == NULL)
 				break;
+
+			r5c_check_stripe_cache_usage(conf);
 			if (!sh) {
+				unsigned long before_jiffies;
 				set_bit(R5_INACTIVE_BLOCKED,
 					&conf->cache_state);
+				r5l_wake_reclaim(conf->log, 0);
+				before_jiffies = jiffies;
 				wait_event_lock_irq(
 					conf->wait_for_stripe,
 					!list_empty(conf->inactive_list + hash) &&
@@ -708,6 +716,9 @@ raid5_get_active_stripe(struct r5conf *conf, sector_t sector,
 					 || !test_bit(R5_INACTIVE_BLOCKED,
 						      &conf->cache_state)),
 					*(conf->hash_locks + hash));
+				before_jiffies = jiffies - before_jiffies;
+				if (before_jiffies > 20)
+					pr_debug("%s: wait for sh takes %lu jiffies\n", __func__, before_jiffies);
 				clear_bit(R5_INACTIVE_BLOCKED,
 					  &conf->cache_state);
 			} else {
@@ -915,18 +926,20 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
 	struct r5conf *conf = sh->raid_conf;
 	int i, disks = sh->disks;
 	struct stripe_head *head_sh = sh;
+	int ret;
 
 	might_sleep();
 
 	if (s->to_cache) {
-		if (r5c_cache_data(conf->log, sh, s) == 0)
+		ret = r5c_cache_data(conf->log, sh, s);
+		if (ret == 0 || ret == -ENOSPC)
 			return;
-		/* array is too big that meta data size > PAGE_SIZE  */
-		r5c_freeze_stripe_for_reclaim(sh);
 	}
 
-	if (r5l_write_stripe(conf->log, sh) == 0)
+	ret = r5l_write_stripe(conf->log, sh);
+	if (ret == 0 || ret == -ENOSPC)
 		return;
+
 	for (i = disks; i--; ) {
 		int op, op_flags = 0;
 		int replace_only = 0;
@@ -2045,8 +2058,10 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
 		spin_lock_init(&sh->batch_lock);
 		INIT_LIST_HEAD(&sh->batch_list);
 		INIT_LIST_HEAD(&sh->lru);
+		INIT_LIST_HEAD(&sh->r5c);
 		atomic_set(&sh->count, 1);
 		atomic_set(&sh->dev_in_cache, 0);
+		sh->log_start = MaxSector;
 		for (i = 0; i < disks; i++) {
 			struct r5dev *dev = &sh->dev[i];
 
@@ -5057,7 +5072,9 @@ static struct stripe_head *__get_priority_stripe(struct r5conf *conf, int group)
 	struct list_head *handle_list = NULL;
 	struct r5worker_group *wg = NULL;
 
-	if (conf->worker_cnt_per_group == 0) {
+	if (!list_empty(&conf->r5c_priority_list))
+		handle_list = &conf->r5c_priority_list;
+	else if (conf->worker_cnt_per_group == 0) {
 		handle_list = &conf->handle_list;
 	} else if (group != ANY_GROUP) {
 		handle_list = &conf->worker_groups[group].handle_list;
@@ -6049,7 +6066,6 @@ static void raid5d(struct md_thread *thread)
 			md_check_recovery(mddev);
 			spin_lock_irq(&conf->device_lock);
 		}
-		r5c_do_reclaim(conf);
 	}
 	pr_debug("%d stripes handled\n", handled);
 
@@ -6675,6 +6691,7 @@ static struct r5conf *setup_conf(struct mddev *mddev)
 	init_waitqueue_head(&conf->wait_for_overlap);
 	INIT_LIST_HEAD(&conf->handle_list);
 	INIT_LIST_HEAD(&conf->hold_list);
+	INIT_LIST_HEAD(&conf->r5c_priority_list);
 	INIT_LIST_HEAD(&conf->delayed_list);
 	INIT_LIST_HEAD(&conf->bitmap_list);
 	bio_list_init(&conf->return_bi);
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 2ae027c..2d8222c 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -227,6 +227,8 @@ struct stripe_head {
 	struct r5l_io_unit	*log_io;
 	struct list_head	log_list;
 	atomic_t		dev_in_cache;
+	sector_t		log_start; /* first meta block on the journal */
+	struct list_head	r5c; /* for r5c_cache->stripe_in_cache */
 	/**
 	 * struct stripe_operations
 	 * @target - STRIPE_OP_COMPUTE_BLK target
@@ -356,6 +358,7 @@ enum {
 				 * in conf->r5c_full_stripe_list) */
 	STRIPE_R5C_FROZEN,	/* r5c_cache frozen and being written out */
 	STRIPE_R5C_WRITTEN,	/* ready for r5c_handle_stripe_written() */
+	STRIPE_R5C_PRIORITY,	/* high priority stripe for log reclaim */
 };
 
 #define STRIPE_EXPAND_SYNC_FLAGS \
@@ -442,6 +445,26 @@ struct r5worker_group {
 	int stripes_cnt;
 };
 
+enum r5_cache_state {
+	R5_INACTIVE_BLOCKED,	/* release of inactive stripes blocked,
+				 * waiting for 25% to be free
+				 */
+	R5_ALLOC_MORE,		/* It might help to allocate another
+				 * stripe.
+				 */
+	R5_DID_ALLOC,		/* A stripe was allocated, don't allocate
+				 * more until at least one has been
+				 * released.  This avoids flooding
+				 * the cache.
+				 */
+	R5C_LOG_TIGHT,		/* journal device space tight, need to
+				 * prioritize stripes at last_checkpoint
+				 */
+	R5C_LOG_CRITICAL,	/* journal device is running out of space,
+				 * only process stripes at last_checkpoint
+				 */
+};
+
 struct r5conf {
 	struct hlist_head	*stripe_hashtbl;
 	/* only protect corresponding hash list and inactive_list */
@@ -480,6 +503,7 @@ struct r5conf {
 
 	struct list_head	handle_list; /* stripes needing handling */
 	struct list_head	hold_list; /* preread ready stripes */
+	struct list_head	r5c_priority_list; /* high priority stripes for reclaim */
 	struct list_head	delayed_list; /* stripes that have plugged requests */
 	struct list_head	bitmap_list; /* stripes delaying awaiting bitmap update */
 	struct bio		*retry_read_aligned; /* currently retrying aligned bios   */
@@ -543,17 +567,6 @@ struct r5conf {
 	wait_queue_head_t	wait_for_stripe;
 	wait_queue_head_t	wait_for_overlap;
 	unsigned long		cache_state;
-#define R5_INACTIVE_BLOCKED	1	/* release of inactive stripes blocked,
-					 * waiting for 25% to be free
-					 */
-#define R5_ALLOC_MORE		2	/* It might help to allocate another
-					 * stripe.
-					 */
-#define R5_DID_ALLOC		4	/* A stripe was allocated, don't allocate
-					 * more until at least one has been
-					 * released.  This avoids flooding
-					 * the cache.
-					 */
 	struct shrinker		shrinker;
 	int			pool_size; /* number of disks in stripeheads in pool */
 	spinlock_t		device_lock;
@@ -665,5 +678,7 @@ extern void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh);
 extern void r5c_do_reclaim(struct r5conf *conf);
 extern int r5c_flush_cache(struct r5conf *conf, int num);
 extern struct md_sysfs_entry r5c_state;
+extern void r5c_check_stripe_cache_usage(struct r5conf *conf);
+extern void r5c_check_cached_full_stripe(struct r5conf *conf);
 
 #endif
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 4/6] r5cache: r5c recovery
  2016-09-26 23:30 [PATCH v2 0/6] raid5-cache: enabling cache features Song Liu
                   ` (2 preceding siblings ...)
  2016-09-26 23:30 ` [PATCH v2 3/6] r5cache: reclaim support Song Liu
@ 2016-09-26 23:30 ` Song Liu
  2016-09-28  1:08   ` Shaohua Li
  2016-09-26 23:30 ` [PATCH v2 5/6] r5cache: handle SYNC and FUA Song Liu
  2016-09-26 23:30 ` [PATCH v2 6/6] md/r5cache: decrease the counter after full-write stripe was reclaimed Song Liu
  5 siblings, 1 reply; 14+ messages in thread
From: Song Liu @ 2016-09-26 23:30 UTC (permalink / raw)
  To: linux-raid
  Cc: neilb, shli, kernel-team, dan.j.williams, hch, liuzhengyuang521,
	liuzhengyuan, Song Liu

This is the recovery part of raid5-cache.

With cache feature, there are 2 different scenarios of recovery:
1. Data-Parity stripe: a stripe with complete parity in journal.
2. Data-Only stripe: a stripe with only data in journal (or partial
   parity).

The code differentiates Data-Parity stripes from Data-Only stripes with
the flag STRIPE_R5C_WRITTEN.

For Data-Parity stripes, we use the same procedure as raid5 journal,
where all the data and parity are replayed to the RAID devices.

For Data-Only stripes, we still need to calculate parity and finish
the full reconstruct write or RMW write. For simplicity, the recovery
loads these stripes into the stripe cache. Once the array is started,
the stripe cache state machine handles them through the normal write
path.

r5c_recovery_flush_log contains the main procedure of recovery. The
recovery code first scans through the journal and loads data into the
stripe cache. The code keeps track of all these stripes in a list
(using sh->lru and ctx->cached_list); stripes in the list are ordered
by their first appearance on the journal. During the scan, the
recovery code classifies each stripe as Data-Parity or Data-Only.

During the scan, the array may run out of stripe cache. In that case,
the recovery code tries to free stripe heads by replaying existing
Data-Parity stripes; once replayed, those stripes can be released.
When releasing Data-Parity stripes is not enough, the recovery code
also calls raid5_set_cache_size to increase the stripe cache size.

At the end of the scan, the recovery code replays all Data-Parity
stripes and sets the proper state for Data-Only stripes. It also
increases the seq number by 10 and rewrites all Data-Only stripes to
the journal, to avoid confusion after repeated crashes. More details
are explained in raid5-cache.c before
r5c_recovery_rewrite_data_only_stripes().
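
For reference, the flow described above maps roughly onto the new
r5l_recovery_log() further down in this patch. A simplified pseudo-C
sketch (meta_page allocation and error handling omitted, names as in
the patch):

	static int r5l_recovery_log_sketch(struct r5l_log *log)
	{
		struct r5l_recovery_ctx ctx;

		ctx.pos = log->last_checkpoint;
		ctx.seq = log->last_cp_seq;
		ctx.data_only_stripes = 0;
		ctx.data_parity_stripes = 0;
		INIT_LIST_HEAD(&ctx.cached_list);

		/* scan the journal: replay Data-Parity stripes to the RAID
		 * disks, keep Data-Only stripes in the stripe cache and on
		 * ctx.cached_list
		 */
		r5c_recovery_flush_log(log, &ctx);

		/* bump seq by 10 and rewrite the Data-Only stripes, so a
		 * later crash cannot mistake stale meta blocks for valid
		 * ones
		 */
		if (ctx.data_only_stripes > 0)
			r5c_recovery_rewrite_data_only_stripes(log, &ctx);

		/* the new log (and the next recovery) starts here */
		log->log_start = ctx.pos;
		log->next_checkpoint = ctx.pos;
		log->seq = ctx.seq;
		r5l_log_write_empty_meta_block(log, ctx.pos, ctx.seq);
		r5l_write_super(log, ctx.pos);
		return 0;
	}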

Signed-off-by: Song Liu <songliubraving@fb.com>
---
 drivers/md/raid5-cache.c | 681 +++++++++++++++++++++++++++++++++++++----------
 1 file changed, 547 insertions(+), 134 deletions(-)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 75b70d8..eff5bad 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -1,5 +1,6 @@
 /*
  * Copyright (C) 2015 Shaohua Li <shli@fb.com>
+ * Copyright (C) 2016 Song Liu <songliubraving@fb.com>
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -1072,10 +1073,13 @@ struct r5l_recovery_ctx {
 	sector_t meta_total_blocks;	/* total size of current meta and data */
 	sector_t pos;			/* recovery position */
 	u64 seq;			/* recovery position seq */
+	int data_parity_stripes;	/* number of data_parity stripes */
+	int data_only_stripes;		/* number of data_only stripes */
+	struct list_head cached_list;
 };
 
-static int r5l_read_meta_block(struct r5l_log *log,
-			       struct r5l_recovery_ctx *ctx)
+static int r5l_recovery_read_meta_block(struct r5l_log *log,
+					struct r5l_recovery_ctx *ctx)
 {
 	struct page *page = ctx->meta_page;
 	struct r5l_meta_block *mb;
@@ -1107,170 +1111,577 @@ static int r5l_read_meta_block(struct r5l_log *log,
 	return 0;
 }
 
-static int r5l_recovery_flush_one_stripe(struct r5l_log *log,
-					 struct r5l_recovery_ctx *ctx,
-					 sector_t stripe_sect,
-					 int *offset, sector_t *log_offset)
+/*
+ * r5l_recovery_load_data and r5l_recovery_load_parity use the flag R5_Wantwrite
+ * to mark valid (potentially not flushed) data in the journal.
+ *
+ * We already verified checksum in r5l_recovery_verify_data_checksum_for_mb,
+ * so there should not be any mismatch here.
+ */
+static void r5l_recovery_load_data(struct r5l_log *log,
+				   struct stripe_head *sh,
+				   struct r5l_recovery_ctx *ctx,
+				   struct r5l_payload_data_parity *payload,
+				   sector_t log_offset)
 {
-	struct r5conf *conf = log->rdev->mddev->private;
-	struct stripe_head *sh;
-	struct r5l_payload_data_parity *payload;
+	struct mddev *mddev = log->rdev->mddev;
+	struct r5conf *conf = mddev->private;
 	int disk_index;
 
-	sh = raid5_get_active_stripe(conf, stripe_sect, 0, 0, 0);
-	while (1) {
-		payload = page_address(ctx->meta_page) + *offset;
+	raid5_compute_sector(conf,
+			     le64_to_cpu(payload->location), 0,
+			     &disk_index, sh);
+	sync_page_io(log->rdev, log_offset, PAGE_SIZE,
+		     sh->dev[disk_index].page, REQ_OP_READ, 0, false);
+	sh->dev[disk_index].log_checksum =
+		le32_to_cpu(payload->checksum[0]);
+	ctx->meta_total_blocks += BLOCK_SECTORS;
 
-		if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_DATA) {
-			raid5_compute_sector(conf,
-					     le64_to_cpu(payload->location), 0,
-					     &disk_index, sh);
+	set_bit(R5_Wantwrite, &sh->dev[disk_index].flags);
+}
 
-			sync_page_io(log->rdev, *log_offset, PAGE_SIZE,
-				     sh->dev[disk_index].page, REQ_OP_READ, 0,
-				     false);
-			sh->dev[disk_index].log_checksum =
-				le32_to_cpu(payload->checksum[0]);
-			set_bit(R5_Wantwrite, &sh->dev[disk_index].flags);
-			ctx->meta_total_blocks += BLOCK_SECTORS;
-		} else {
-			disk_index = sh->pd_idx;
-			sync_page_io(log->rdev, *log_offset, PAGE_SIZE,
-				     sh->dev[disk_index].page, REQ_OP_READ, 0,
-				     false);
-			sh->dev[disk_index].log_checksum =
-				le32_to_cpu(payload->checksum[0]);
-			set_bit(R5_Wantwrite, &sh->dev[disk_index].flags);
-
-			if (sh->qd_idx >= 0) {
-				disk_index = sh->qd_idx;
-				sync_page_io(log->rdev,
-					     r5l_ring_add(log, *log_offset, BLOCK_SECTORS),
-					     PAGE_SIZE, sh->dev[disk_index].page,
-					     REQ_OP_READ, 0, false);
-				sh->dev[disk_index].log_checksum =
-					le32_to_cpu(payload->checksum[1]);
-				set_bit(R5_Wantwrite,
-					&sh->dev[disk_index].flags);
-			}
-			ctx->meta_total_blocks += BLOCK_SECTORS * conf->max_degraded;
-		}
+static void r5l_recovery_load_parity(struct r5l_log *log,
+				     struct stripe_head *sh,
+				     struct r5l_recovery_ctx *ctx,
+				     struct r5l_payload_data_parity *payload,
+				     sector_t log_offset)
+{
+	struct mddev *mddev = log->rdev->mddev;
+	struct r5conf *conf = mddev->private;
 
-		*log_offset = r5l_ring_add(log, *log_offset,
-					   le32_to_cpu(payload->size));
-		*offset += sizeof(struct r5l_payload_data_parity) +
-			sizeof(__le32) *
-			(le32_to_cpu(payload->size) >> (PAGE_SHIFT - 9));
-		if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_PARITY)
-			break;
+	ctx->meta_total_blocks += BLOCK_SECTORS * conf->max_degraded;
+	sync_page_io(log->rdev, log_offset, PAGE_SIZE,
+		     sh->dev[sh->pd_idx].page, REQ_OP_READ, 0, false);
+	sh->dev[sh->pd_idx].log_checksum =
+		le32_to_cpu(payload->checksum[0]);
+	set_bit(R5_Wantwrite, &sh->dev[sh->pd_idx].flags);
+
+	if (sh->qd_idx >= 0) {
+		sync_page_io(log->rdev,
+			     r5l_ring_add(log, log_offset, BLOCK_SECTORS),
+			     PAGE_SIZE, sh->dev[sh->qd_idx].page,
+			     REQ_OP_READ, 0, false);
+		sh->dev[sh->qd_idx].log_checksum =
+			le32_to_cpu(payload->checksum[1]);
+		set_bit(R5_Wantwrite, &sh->dev[sh->qd_idx].flags);
 	}
+	set_bit(STRIPE_R5C_WRITTEN, &sh->state);
+}
 
-	for (disk_index = 0; disk_index < sh->disks; disk_index++) {
-		void *addr;
-		u32 checksum;
+static void r5l_recovery_reset_stripe(struct stripe_head *sh)
+{
+	int i;
+
+	sh->state = 0;
+	sh->log_start = MaxSector;
+	for (i = sh->disks; i--; )
+		sh->dev[i].flags = 0;
+}
+
+static void
+r5l_recovery_replay_one_stripe(struct r5conf *conf,
+			       struct stripe_head *sh,
+			       struct r5l_recovery_ctx *ctx)
+{
+	struct md_rdev *rdev, *rrdev;
+	int disk_index;
+	int data_count = 0;
 
+	for (disk_index = 0; disk_index < sh->disks; disk_index++) {
 		if (!test_bit(R5_Wantwrite, &sh->dev[disk_index].flags))
 			continue;
-		addr = kmap_atomic(sh->dev[disk_index].page);
-		checksum = crc32c_le(log->uuid_checksum, addr, PAGE_SIZE);
-		kunmap_atomic(addr);
-		if (checksum != sh->dev[disk_index].log_checksum)
-			goto error;
+		if (disk_index == sh->qd_idx || disk_index == sh->pd_idx)
+			continue;
+		data_count++;
 	}
+	/* stripes that only have parity are already flushed to RAID */
+	if (data_count == 0)
+		goto out;
 
 	for (disk_index = 0; disk_index < sh->disks; disk_index++) {
-		struct md_rdev *rdev, *rrdev;
-
-		if (!test_and_clear_bit(R5_Wantwrite,
-					&sh->dev[disk_index].flags))
+		if (!test_bit(R5_Wantwrite, &sh->dev[disk_index].flags))
 			continue;
 
 		/* in case device is broken */
 		rdev = rcu_dereference(conf->disks[disk_index].rdev);
 		if (rdev)
-			sync_page_io(rdev, stripe_sect, PAGE_SIZE,
+			sync_page_io(rdev, sh->sector, PAGE_SIZE,
 				     sh->dev[disk_index].page, REQ_OP_WRITE, 0,
 				     false);
 		rrdev = rcu_dereference(conf->disks[disk_index].replacement);
 		if (rrdev)
-			sync_page_io(rrdev, stripe_sect, PAGE_SIZE,
+			sync_page_io(rrdev, sh->sector, PAGE_SIZE,
 				     sh->dev[disk_index].page, REQ_OP_WRITE, 0,
 				     false);
 	}
-	raid5_release_stripe(sh);
+	ctx->data_parity_stripes++;
+out:
+	r5l_recovery_reset_stripe(sh);
+}
+
+static void
+r5l_recovery_create_emtpy_meta_block(struct r5l_log *log,
+				     struct page *page,
+				     sector_t pos, u64 seq)
+{
+	struct r5l_meta_block *mb;
+	u32 crc;
+
+	mb = page_address(page);
+	clear_page(mb);
+	mb->magic = cpu_to_le32(R5LOG_MAGIC);
+	mb->version = R5LOG_VERSION;
+	mb->meta_size = cpu_to_le32(sizeof(struct r5l_meta_block));
+	mb->seq = cpu_to_le64(seq);
+	mb->position = cpu_to_le64(pos);
+	crc = crc32c_le(log->uuid_checksum, mb, PAGE_SIZE);
+	mb->checksum = cpu_to_le32(crc);
+}
+
+static int r5l_log_write_empty_meta_block(struct r5l_log *log, sector_t pos,
+					  u64 seq)
+{
+	struct page *page;
+
+	page = alloc_page(GFP_KERNEL);
+	if (!page)
+		return -ENOMEM;
+	r5l_recovery_create_emtpy_meta_block(log, page, pos, seq);
+	if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, REQ_OP_WRITE,
+			  WRITE_FUA, false)) {
+		__free_page(page);
+		return -EIO;
+	}
+	__free_page(page);
 	return 0;
+}
 
-error:
-	for (disk_index = 0; disk_index < sh->disks; disk_index++)
-		sh->dev[disk_index].flags = 0;
-	raid5_release_stripe(sh);
-	return -EINVAL;
+static struct stripe_head *
+r5c_recovery_alloc_stripe(struct r5conf *conf,
+			  struct list_head *recovery_list,
+			  sector_t stripe_sect,
+			  sector_t log_start)
+{
+	struct stripe_head *sh;
+
+	sh = raid5_get_active_stripe(conf, stripe_sect, 0, 1, 0);
+	if (!sh)
+		return NULL;  /* no more stripe available */
+
+	r5l_recovery_reset_stripe(sh);
+	sh->log_start = log_start;
+
+	return sh;
 }
 
-static int r5l_recovery_flush_one_meta(struct r5l_log *log,
-				       struct r5l_recovery_ctx *ctx)
+static struct stripe_head *
+r5c_recovery_lookup_stripe(struct list_head *list, sector_t sect)
 {
-	struct r5conf *conf = log->rdev->mddev->private;
+	struct stripe_head *sh;
+
+	list_for_each_entry(sh, list, lru)
+		if (sh->sector == sect)
+			return sh;
+	return NULL;
+}
+
+static void
+r5c_recovery_replay_stripes(struct list_head *cached_stripe_list,
+			    struct r5l_recovery_ctx *ctx)
+{
+	struct stripe_head *sh, *next;
+
+	list_for_each_entry_safe(sh, next, cached_stripe_list, lru)
+		if (test_bit(STRIPE_R5C_WRITTEN, &sh->state)) {
+			r5l_recovery_replay_one_stripe(sh->raid_conf, sh, ctx);
+			list_del_init(&sh->lru);
+			raid5_release_stripe(sh);
+		}
+}
+
+/* returns 0 for match; 1 for mismatch */
+static int
+r5l_recovery_verify_data_checksum(struct r5l_log *log, struct page *page,
+				  sector_t log_offset, __le32 log_checksum)
+{
+	void *addr;
+	u32 checksum;
+
+	sync_page_io(log->rdev, log_offset, PAGE_SIZE,
+		     page, REQ_OP_READ, 0, false);
+	addr = kmap_atomic(page);
+	checksum = crc32c_le(log->uuid_checksum, addr, PAGE_SIZE);
+	kunmap_atomic(addr);
+	return le32_to_cpu(log_checksum) != checksum;
+}
+
+/*
+ * before loading data to the stripe cache, we need to verify the checksums
+ * for all data; if any data page mismatches, we drop all data in the meta block
+ */
+static int
+r5l_recovery_verify_data_checksum_for_mb(struct r5l_log *log,
+					 struct r5l_recovery_ctx *ctx)
+{
+	struct mddev *mddev = log->rdev->mddev;
+	struct r5conf *conf = mddev->private;
+	struct r5l_meta_block *mb = page_address(ctx->meta_page);
+	sector_t mb_offset = sizeof(struct r5l_meta_block);
+	sector_t log_offset = r5l_ring_add(log, ctx->pos, BLOCK_SECTORS);
+	struct page *page;
 	struct r5l_payload_data_parity *payload;
+
+	page = alloc_page(GFP_KERNEL);
+	if (!page)
+		return -ENOMEM;
+
+	while (mb_offset < le32_to_cpu(mb->meta_size)) {
+		payload = (void *)mb + mb_offset;
+
+		if (payload->header.type == R5LOG_PAYLOAD_DATA) {
+			if (r5l_recovery_verify_data_checksum(
+				    log, page, log_offset,
+				    payload->checksum[0]))
+				goto mismatch;
+		} else if (payload->header.type == R5LOG_PAYLOAD_PARITY) {
+			if (r5l_recovery_verify_data_checksum(
+				    log, page, log_offset,
+				    payload->checksum[0]))
+				goto mismatch;
+			if (conf->max_degraded == 2 && /* q for RAID 6 */
+			    r5l_recovery_verify_data_checksum(
+				    log, page,
+				    r5l_ring_add(log, log_offset,
+						 BLOCK_SECTORS),
+				    payload->checksum[1]))
+				goto mismatch;
+		} else
+			goto mismatch;
+
+		log_offset = r5l_ring_add(log, log_offset,
+					  le32_to_cpu(payload->size));
+
+		mb_offset += sizeof(struct r5l_payload_data_parity) +
+			sizeof(__le32) *
+			(le32_to_cpu(payload->size) >> (PAGE_SHIFT - 9));
+	}
+
+	put_page(page);
+	return 0;
+
+mismatch:
+	put_page(page);
+	return -EINVAL;
+}
+
+static int
+r5c_recovery_analyze_meta_block(struct r5l_log *log,
+				struct r5l_recovery_ctx *ctx,
+				struct list_head *cached_stripe_list)
+{
+	struct mddev *mddev = log->rdev->mddev;
+	struct r5conf *conf = mddev->private;
 	struct r5l_meta_block *mb;
-	int offset;
+	struct r5l_payload_data_parity *payload;
+	int mb_offset;
 	sector_t log_offset;
-	sector_t stripe_sector;
+	sector_t stripe_sect;
+	struct stripe_head *sh;
+	int ret;
+
+	/* for mismatch in data blocks, we will drop all data in this mb, but
+	 * we will still read next mb for other data with FLUSH flag, as
+	 * io_unit could finish out of order.
+	 */
+	ret = r5l_recovery_verify_data_checksum_for_mb(log, ctx);
+	if (ret == -EINVAL)
+		return -EAGAIN;
+	else if (ret)
+		return ret;
 
 	mb = page_address(ctx->meta_page);
-	offset = sizeof(struct r5l_meta_block);
+	mb_offset = sizeof(struct r5l_meta_block);
 	log_offset = r5l_ring_add(log, ctx->pos, BLOCK_SECTORS);
 
-	while (offset < le32_to_cpu(mb->meta_size)) {
+	while (mb_offset < le32_to_cpu(mb->meta_size)) {
 		int dd;
 
-		payload = (void *)mb + offset;
-		stripe_sector = raid5_compute_sector(conf,
-						     le64_to_cpu(payload->location), 0, &dd, NULL);
-		if (r5l_recovery_flush_one_stripe(log, ctx, stripe_sector,
-						  &offset, &log_offset))
+		payload = (void *)mb + mb_offset;
+		stripe_sect = (payload->header.type == R5LOG_PAYLOAD_DATA) ?
+			raid5_compute_sector(
+				conf, le64_to_cpu(payload->location), 0, &dd,
+				NULL)
+			: le64_to_cpu(payload->location);
+
+		sh = r5c_recovery_lookup_stripe(cached_stripe_list,
+						stripe_sect);
+
+		if (!sh) {
+			sh = r5c_recovery_alloc_stripe(conf, cached_stripe_list,
+						       stripe_sect, ctx->pos);
+			/* cannot get stripe from raid5_get_active_stripe
+			 * try replay some stripes
+			 */
+			if (!sh) {
+				r5c_recovery_replay_stripes(
+					cached_stripe_list, ctx);
+				sh = r5c_recovery_alloc_stripe(
+					conf, cached_stripe_list,
+					stripe_sect, ctx->pos);
+			}
+			if (!sh) {
+				pr_info("md/raid:%s: Increasing stripe cache size to %d to recover data from the journal.\n",
+					mdname(mddev),
+					conf->min_nr_stripes * 2);
+				raid5_set_cache_size(mddev,
+						     conf->min_nr_stripes * 2);
+				sh = r5c_recovery_alloc_stripe(
+					conf, cached_stripe_list, stripe_sect,
+					ctx->pos);
+			}
+			if (!sh) {
+				pr_err("md/raid:%s: Cannot get enough stripe_cache. Recovery interrupted.\n",
+				       mdname(mddev));
+				return -ENOMEM;
+			}
+			list_add_tail(&sh->lru, cached_stripe_list);
+		}
+		if (!sh)
+			return -ENOMEM;
+
+		if (payload->header.type == R5LOG_PAYLOAD_DATA) {
+			if (test_bit(STRIPE_R5C_WRITTEN, &sh->state)) {
+				r5l_recovery_reset_stripe(sh);
+				sh->log_start = ctx->pos;
+				list_move_tail(&sh->lru, cached_stripe_list);
+			}
+			r5l_recovery_load_data(log, sh, ctx, payload,
+					       log_offset);
+		} else if (payload->header.type == R5LOG_PAYLOAD_PARITY)
+			r5l_recovery_load_parity(log, sh, ctx, payload,
+						 log_offset);
+		else
 			return -EINVAL;
+
+		log_offset = r5l_ring_add(log, log_offset,
+					  le32_to_cpu(payload->size));
+
+		mb_offset += sizeof(struct r5l_payload_data_parity) +
+			sizeof(__le32) *
+			(le32_to_cpu(payload->size) >> (PAGE_SHIFT - 9));
 	}
+
 	return 0;
 }
 
-/* copy data/parity from log to raid disks */
-static void r5l_recovery_flush_log(struct r5l_log *log,
+/*
+ * Load the stripe into cache. The stripe will be written out later by
+ * the stripe cache state machine.
+ */
+static void r5c_recovery_load_one_stripe(struct r5l_log *log,
+					 struct stripe_head *sh)
+{
+	struct r5conf *conf = sh->raid_conf;
+	struct r5dev *dev;
+	int i;
+
+	atomic_set(&sh->dev_in_cache, 0);
+	for (i = sh->disks; i--; ) {
+		dev = sh->dev + i;
+		if (test_and_clear_bit(R5_Wantwrite, &dev->flags)) {
+			set_bit(R5_InCache, &dev->flags);
+			atomic_inc(&sh->dev_in_cache);
+		}
+	}
+	set_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state);
+	atomic_inc(&conf->r5c_cached_partial_stripes);
+	list_add_tail(&sh->r5c, &log->stripe_in_cache);
+}
+
+/*
+ * Scan through the log for all to-be-flushed data
+ *
+ * For stripes with data and parity, namely Data-Parity stripe
+ * (STRIPE_R5C_WRITTEN == 0), we simply replay all the writes.
+ *
+ * For stripes with only data, namely Data-Only stripe
+ * (STRIPE_R5C_WRITTEN == 1), we load them to stripe cache state machine.
+ *
+ * For a stripe, if we see data after parity, we should discard all previous
+ * data and parity for this stripe, as these data are already flushed to
+ * the array.
+ *
+ * At the end of the scan, we return the new journal_tail, which points to
+ * first data-only stripe on the journal device, or next invalid meta block.
+ */
+static void r5c_recovery_flush_log(struct r5l_log *log,
 				   struct r5l_recovery_ctx *ctx)
 {
+	struct stripe_head *sh, *next;
+	int ret;
+
+	/* scan through the log */
 	while (1) {
-		if (r5l_read_meta_block(log, ctx))
-			return;
-		if (r5l_recovery_flush_one_meta(log, ctx))
-			return;
+		if (r5l_recovery_read_meta_block(log, ctx))
+			break;
+
+		ret = r5c_recovery_analyze_meta_block(log, ctx,
+						      &ctx->cached_list);
+		/* -EAGAIN means mismatch in data block, in this case, we still
+		 * try scan the next metablock
+		 */
+		if (ret && ret != -EAGAIN)
+			break;
 		ctx->seq++;
 		ctx->pos = r5l_ring_add(log, ctx->pos, ctx->meta_total_blocks);
 	}
+
+	/* replay data-parity stripes */
+	r5c_recovery_replay_stripes(&ctx->cached_list, ctx);
+
+	/* load data-only stripes to stripe cache */
+	list_for_each_entry_safe(sh, next, &ctx->cached_list, lru) {
+		WARN_ON(test_bit(STRIPE_R5C_WRITTEN, &sh->state));
+		r5c_recovery_load_one_stripe(log, sh);
+		list_del_init(&sh->lru);
+		raid5_release_stripe(sh);
+		ctx->data_only_stripes++;
+	}
+
+	return;
 }
 
-static int r5l_log_write_empty_meta_block(struct r5l_log *log, sector_t pos,
-					  u64 seq)
+/*
+ * we did a recovery. Now ctx.pos points to an invalid meta block. New
+ * log will start here. but we can't let superblock point to last valid
+ * meta block. The log might looks like:
+ * | meta 1| meta 2| meta 3|
+ * meta 1 is valid, meta 2 is invalid. meta 3 could be valid. If
+ * superblock points to meta 1, we write a new valid meta 2n.  if crash
+ * happens again, new recovery will start from meta 1. Since meta 2n is
+ * valid now, recovery will think meta 3 is valid, which is wrong.
+ * The solution is we create a new meta in meta2 with its seq == meta
+ * 1's seq + 10 and let superblock points to meta2. The same recovery will
+ * not think meta 3 is a valid meta, because its seq doesn't match
+ */
+
+/*
+ * Before recovery, the log looks like the following
+ *
+ *   ---------------------------------------------
+ *   |           valid log        | invalid log  |
+ *   ---------------------------------------------
+ *   ^
+ *   |- log->last_checkpoint
+ *   |- log->last_cp_seq
+ *
+ * Now we scan through the log until we see invalid entry
+ *
+ *   ---------------------------------------------
+ *   |           valid log        | invalid log  |
+ *   ---------------------------------------------
+ *   ^                            ^
+ *   |- log->last_checkpoint      |- ctx->pos
+ *   |- log->last_cp_seq          |- ctx->seq
+ *
+ * From this point, we need to increase seq number by 10 to avoid
+ * confusing next recovery.
+ *
+ *   ---------------------------------------------
+ *   |           valid log        | invalid log  |
+ *   ---------------------------------------------
+ *   ^                              ^
+ *   |- log->last_checkpoint        |- ctx->pos+1
+ *   |- log->last_cp_seq            |- ctx->seq+11
+ *
+ * However, it is not safe to start the state machine yet, because data only
+ * parities are not yet secured in RAID. To save these data only parities, we
+ * rewrite them from seq+11.
+ *
+ *   -----------------------------------------------------------------
+ *   |           valid log        | data only stripes | invalid log  |
+ *   -----------------------------------------------------------------
+ *   ^                                                ^
+ *   |- log->last_checkpoint                          |- ctx->pos+n
+ *   |- log->last_cp_seq                              |- ctx->seq+10+n
+ *
+ * If failure happens again during this process, the recovery can safely
+ * start again from log->last_checkpoint.
+ *
+ * Once data only stripes are rewritten to journal, we move log_tail
+ *
+ *   -----------------------------------------------------------------
+ *   |     old log        |    data only stripes    | invalid log  |
+ *   -----------------------------------------------------------------
+ *                        ^                         ^
+ *                        |- log->last_checkpoint   |- ctx->pos+n
+ *                        |- log->last_cp_seq       |- ctx->seq+10+n
+ *
+ * Then we can safely start the state machine. If failure happens from this
+ * point on, the recovery will start from new log->last_checkpoint.
+ */
+static int
+r5c_recovery_rewrite_data_only_stripes(struct r5l_log *log,
+				       struct r5l_recovery_ctx *ctx)
 {
+	struct stripe_head *sh;
+	struct mddev *mddev = log->rdev->mddev;
 	struct page *page;
-	struct r5l_meta_block *mb;
-	u32 crc;
 
-	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
-	if (!page)
+	page = alloc_page(GFP_KERNEL);
+	if (!page) {
+		pr_err("md/raid:%s: cannot allocate memory to rewrite data only stripes\n",
+		       mdname(mddev));
 		return -ENOMEM;
-	mb = page_address(page);
-	mb->magic = cpu_to_le32(R5LOG_MAGIC);
-	mb->version = R5LOG_VERSION;
-	mb->meta_size = cpu_to_le32(sizeof(struct r5l_meta_block));
-	mb->seq = cpu_to_le64(seq);
-	mb->position = cpu_to_le64(pos);
-	crc = crc32c_le(log->uuid_checksum, mb, PAGE_SIZE);
-	mb->checksum = cpu_to_le32(crc);
+	}
 
-	if (!sync_page_io(log->rdev, pos, PAGE_SIZE, page, REQ_OP_WRITE,
-			  WRITE_FUA, false)) {
-		__free_page(page);
-		return -EIO;
+	ctx->seq += 10;
+	list_for_each_entry(sh, &ctx->cached_list, lru) {
+		struct r5l_meta_block *mb;
+		int i;
+		int offset;
+		sector_t write_pos;
+
+		WARN_ON(test_bit(STRIPE_R5C_WRITTEN, &sh->state));
+		r5l_recovery_create_emtpy_meta_block(log, page,
+						     ctx->pos, ctx->seq);
+		mb = page_address(page);
+		offset = le32_to_cpu(mb->meta_size);
+		write_pos = ctx->pos + BLOCK_SECTORS;
+
+		for (i = sh->disks; i--; ) {
+			struct r5dev *dev = &sh->dev[i];
+			struct r5l_payload_data_parity *payload;
+			void *addr;
+
+			if (test_bit(R5_InCache, &dev->flags)) {
+				payload = (void *)mb + offset;
+				payload->header.type = cpu_to_le16(
+					R5LOG_PAYLOAD_DATA);
+				payload->size = cpu_to_le32(BLOCK_SECTORS);
+				payload->location = cpu_to_le64(
+					raid5_compute_blocknr(sh, i, 0));
+				addr = kmap_atomic(dev->page);
+				payload->checksum[0] = cpu_to_le32(
+					crc32c_le(log->uuid_checksum, addr,
+						  PAGE_SIZE));
+				kunmap_atomic(addr);
+				sync_page_io(log->rdev, write_pos, PAGE_SIZE,
+					     dev->page, REQ_OP_WRITE, 0, false);
+				write_pos = r5l_ring_add(log, write_pos,
+							 BLOCK_SECTORS);
+				offset += sizeof(__le32) +
+					sizeof(struct r5l_payload_data_parity);
+
+			}
+		}
+		mb->meta_size = cpu_to_le32(offset);
+		mb->checksum = cpu_to_le32(crc32c_le(log->uuid_checksum, mb, PAGE_SIZE));
+		sync_page_io(log->rdev, ctx->pos, PAGE_SIZE, page,
+			     REQ_OP_WRITE, WRITE_FUA, false);
+		sh->log_start = ctx->pos;
+		ctx->pos = write_pos;
+		ctx->seq += 1;
 	}
 	__free_page(page);
 	return 0;
@@ -1278,43 +1689,45 @@ static int r5l_log_write_empty_meta_block(struct r5l_log *log, sector_t pos,
 
 static int r5l_recovery_log(struct r5l_log *log)
 {
+	struct mddev *mddev = log->rdev->mddev;
 	struct r5l_recovery_ctx ctx;
 
 	ctx.pos = log->last_checkpoint;
 	ctx.seq = log->last_cp_seq;
 	ctx.meta_page = alloc_page(GFP_KERNEL);
+	ctx.data_only_stripes = 0;
+	ctx.data_parity_stripes = 0;
+	INIT_LIST_HEAD(&ctx.cached_list);
+
 	if (!ctx.meta_page)
 		return -ENOMEM;
 
-	r5l_recovery_flush_log(log, &ctx);
+	r5c_recovery_flush_log(log, &ctx);
+
 	__free_page(ctx.meta_page);
 
-	/*
-	 * we did a recovery. Now ctx.pos points to an invalid meta block. New
-	 * log will start here. but we can't let superblock point to last valid
-	 * meta block. The log might looks like:
-	 * | meta 1| meta 2| meta 3|
-	 * meta 1 is valid, meta 2 is invalid. meta 3 could be valid. If
-	 * superblock points to meta 1, we write a new valid meta 2n.  if crash
-	 * happens again, new recovery will start from meta 1. Since meta 2n is
-	 * valid now, recovery will think meta 3 is valid, which is wrong.
-	 * The solution is we create a new meta in meta2 with its seq == meta
-	 * 1's seq + 10 and let superblock points to meta2. The same recovery will
-	 * not think meta 3 is a valid meta, because its seq doesn't match
-	 */
-	if (ctx.seq > log->last_cp_seq + 1) {
-		int ret;
-
-		ret = r5l_log_write_empty_meta_block(log, ctx.pos, ctx.seq + 10);
-		if (ret)
-			return ret;
-		log->seq = ctx.seq + 11;
-		log->log_start = r5l_ring_add(log, ctx.pos, BLOCK_SECTORS);
-		r5l_write_super(log, ctx.pos);
-	} else {
-		log->log_start = ctx.pos;
-		log->seq = ctx.seq;
+	if ((ctx.data_only_stripes == 0) && (ctx.data_parity_stripes == 0))
+		pr_info("md/raid:%s: starting from clean shutdown\n",
+			mdname(mddev));
+	else {
+		pr_info("md/raid:%s: recovering %d data-only stripes and %d data-parity stripes\n",
+			mdname(mddev), ctx.data_only_stripes,
+			ctx.data_parity_stripes);
+
+		if (ctx.data_only_stripes > 0)
+			if (r5c_recovery_rewrite_data_only_stripes(log, &ctx)) {
+				pr_err("md/raid:%s: failed to rewrite stripes to journal\n",
+				       mdname(mddev));
+				return -EIO;
+			}
 	}
+
+	log->log_start = ctx.pos;
+	log->next_checkpoint = ctx.pos;
+	log->seq = ctx.seq;
+	r5l_log_write_empty_meta_block(log, ctx.pos, ctx.seq);
+	r5l_write_super(log, ctx.pos);
+
 	return 0;
 }
 
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 5/6] r5cache: handle SYNC and FUA
  2016-09-26 23:30 [PATCH v2 0/6] raid5-cache: enabling cache features Song Liu
                   ` (3 preceding siblings ...)
  2016-09-26 23:30 ` [PATCH v2 4/6] r5cache: r5c recovery Song Liu
@ 2016-09-26 23:30 ` Song Liu
  2016-09-28  1:32   ` Shaohua Li
  2016-09-26 23:30 ` [PATCH v2 6/6] md/r5cache: decrease the counter after full-write stripe was reclaimed Song Liu
  5 siblings, 1 reply; 14+ messages in thread
From: Song Liu @ 2016-09-26 23:30 UTC (permalink / raw)
  To: linux-raid
  Cc: neilb, shli, kernel-team, dan.j.williams, hch, liuzhengyuang521,
	liuzhengyuan, Song Liu

With the raid5 cache, we commit data from the journal device. When
there is a flush request, we need to flush the journal device's
cache. This was not needed with the raid5 journal, because there the
journal is flushed before data is committed to the raid disks.

FUA is handled similarly, except that we also need to flush the
journal for FUA. Otherwise, corruption in earlier meta data would
stop recovery from reaching the FUA data.
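
For reference, the rule above boils down to roughly the following (a
simplified composite of the r5l_log_stripe() and r5l_do_submit_io()
hunks below; sh, device index i and io refer to the stripe and the
current io_unit, so this is not a complete function):

	/* while adding a stripe's data to the current io_unit */
	if (test_and_clear_bit(STRIPE_R5C_PREFLUSH, &sh->state))
		io->has_flush = 1;	/* the bio carried REQ_PREFLUSH */

	if (test_bit(R5_WantFUA, &sh->dev[i].flags)) {
		io->has_fua = 1;
		/* also flush the journal, so recovery can reach the FUA
		 * data even if earlier meta data was not yet stable
		 */
		io->has_flush = 1;
	}

	/* later, when the io_unit is actually submitted */
	if (io->has_flush)
		bio_set_op_attrs(io->current_bio, REQ_OP_WRITE, WRITE_FLUSH);
	if (io->has_fua)
		bio_set_op_attrs(io->current_bio, REQ_OP_WRITE, WRITE_FUA);
	submit_bio(io->current_bio);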

Signed-off-by: Song Liu <songliubraving@fb.com>
---
 drivers/md/raid5-cache.c | 134 +++++++++++++++++++++++++++++++++++++++++++----
 drivers/md/raid5.c       |   8 +++
 drivers/md/raid5.h       |   1 +
 3 files changed, 133 insertions(+), 10 deletions(-)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index eff5bad..a94585d 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -19,6 +19,7 @@
 #include <linux/raid/md_p.h>
 #include <linux/crc32c.h>
 #include <linux/random.h>
+#include <trace/events/block.h>
 #include "md.h"
 #include "raid5.h"
 
@@ -119,6 +120,9 @@ struct r5l_log {
 	struct list_head stripe_in_cache; /* all stripes in the cache, with
 					   * sh->log_start in order */
 	spinlock_t stripe_in_cache_lock;  /* lock for stripe_in_cache */
+
+	/* to submit async io_units, to fulfill ordering of flush */
+	struct work_struct deferred_io_work;
 };
 
 /*
@@ -145,6 +149,18 @@ struct r5l_io_unit {
 
 	int state;
 	bool need_split_bio;
+	struct bio *split_bio;
+
+	unsigned int has_flush:1;      /* include flush request */
+	unsigned int has_fua:1;        /* include fua request */
+	unsigned int has_null_flush:1; /* include empty flush request */
+	/*
+	 * io isn't sent yet; a flush/fua request can only be submitted once it
+	 * is the first IO in the running_ios list
+	 */
+	unsigned int io_deferred:1;
+
+	struct bio_list flush_barriers;   /* size == 0 flush bios */
 };
 
 /* r5l_io_unit state */
@@ -358,9 +374,11 @@ static void r5l_move_to_end_ios(struct r5l_log *log)
 	}
 }
 
+static void __r5l_stripe_write_finished(struct r5l_io_unit *io);
 static void r5l_log_endio(struct bio *bio)
 {
 	struct r5l_io_unit *io = bio->bi_private;
+	struct r5l_io_unit *io_deferred;
 	struct r5l_log *log = io->log;
 	unsigned long flags;
 
@@ -376,18 +394,89 @@ static void r5l_log_endio(struct bio *bio)
 		r5l_move_to_end_ios(log);
 	else
 		r5l_log_run_stripes(log);
+	if (!list_empty(&log->running_ios)) {
+		/*
+		 * FLUSH/FUA io_unit is deferred because of ordering, now we
+		 * can dispatch it
+		 */
+		io_deferred = list_first_entry(&log->running_ios,
+					       struct r5l_io_unit, log_sibling);
+		if (io_deferred->io_deferred)
+			schedule_work(&log->deferred_io_work);
+	}
+
 	spin_unlock_irqrestore(&log->io_list_lock, flags);
 
 	if (log->need_cache_flush)
 		md_wakeup_thread(log->rdev->mddev->thread);
+
+	if (io->has_null_flush) {
+		struct bio *bi;
+
+		WARN_ON(bio_list_empty(&io->flush_barriers));
+		while ((bi = bio_list_pop(&io->flush_barriers)) != NULL) {
+			bio_endio(bi);
+			atomic_dec(&io->pending_stripe);
+		}
+		if (atomic_read(&io->pending_stripe) == 0)
+			__r5l_stripe_write_finished(io);
+	}
+}
+
+static void r5l_do_submit_io(struct r5l_log *log, struct r5l_io_unit *io)
+{
+	unsigned long flags;
+
+	spin_lock_irqsave(&log->io_list_lock, flags);
+	__r5l_set_io_unit_state(io, IO_UNIT_IO_START);
+	spin_unlock_irqrestore(&log->io_list_lock, flags);
+
+	if (io->has_flush)
+		bio_set_op_attrs(io->current_bio, REQ_OP_WRITE, WRITE_FLUSH);
+	if (io->has_fua)
+		bio_set_op_attrs(io->current_bio, REQ_OP_WRITE, WRITE_FUA);
+	submit_bio(io->current_bio);
+
+	if (!io->split_bio)
+		return;
+
+	if (io->has_flush)
+		bio_set_op_attrs(io->split_bio, REQ_OP_WRITE, WRITE_FLUSH);
+	if (io->has_fua)
+		bio_set_op_attrs(io->split_bio, REQ_OP_WRITE, WRITE_FUA);
+	submit_bio(io->split_bio);
+}
+
+/* deferred io_unit will be dispatched here */
+static void r5l_submit_io_async(struct work_struct *work)
+{
+	struct r5l_log *log = container_of(work, struct r5l_log,
+					   deferred_io_work);
+	struct r5l_io_unit *io = NULL;
+	unsigned long flags;
+
+	spin_lock_irqsave(&log->io_list_lock, flags);
+	if (!list_empty(&log->running_ios)) {
+		io = list_first_entry(&log->running_ios, struct r5l_io_unit,
+				      log_sibling);
+		if (!io->io_deferred)
+			io = NULL;
+		else
+			io->io_deferred = 0;
+	}
+	spin_unlock_irqrestore(&log->io_list_lock, flags);
+	if (io)
+		r5l_do_submit_io(log, io);
 }
 
 static void r5l_submit_current_io(struct r5l_log *log)
 {
 	struct r5l_io_unit *io = log->current_io;
+	struct bio *bio;
 	struct r5l_meta_block *block;
 	unsigned long flags;
 	u32 crc;
+	bool do_submit = true;
 
 	if (!io)
 		return;
@@ -396,13 +485,20 @@ static void r5l_submit_current_io(struct r5l_log *log)
 	block->meta_size = cpu_to_le32(io->meta_offset);
 	crc = crc32c_le(log->uuid_checksum, block, PAGE_SIZE);
 	block->checksum = cpu_to_le32(crc);
+	bio = io->current_bio;
 
 	log->current_io = NULL;
 	spin_lock_irqsave(&log->io_list_lock, flags);
-	__r5l_set_io_unit_state(io, IO_UNIT_IO_START);
+	if (io->has_flush || io->has_fua) {
+		if (io != list_first_entry(&log->running_ios,
+					   struct r5l_io_unit, log_sibling)) {
+			io->io_deferred = 1;
+			do_submit = false;
+		}
+	}
 	spin_unlock_irqrestore(&log->io_list_lock, flags);
-
-	submit_bio(io->current_bio);
+	if (do_submit)
+		r5l_do_submit_io(log, io);
 }
 
 static struct bio *r5l_bio_alloc(struct r5l_log *log)
@@ -449,6 +545,7 @@ static struct r5l_io_unit *r5l_new_meta(struct r5l_log *log)
 	io->log = log;
 	INIT_LIST_HEAD(&io->log_sibling);
 	INIT_LIST_HEAD(&io->stripe_list);
+	bio_list_init(&io->flush_barriers);
 	io->state = IO_UNIT_RUNNING;
 
 	io->meta_page = mempool_alloc(log->meta_pool, GFP_NOIO);
@@ -519,12 +616,11 @@ static void r5l_append_payload_page(struct r5l_log *log, struct page *page)
 	struct r5l_io_unit *io = log->current_io;
 
 	if (io->need_split_bio) {
-		struct bio *prev = io->current_bio;
-
+		BUG_ON(io->split_bio);
+		io->split_bio = io->current_bio;
 		io->current_bio = r5l_bio_alloc(log);
-		bio_chain(io->current_bio, prev);
-
-		submit_bio(prev);
+		bio_chain(io->current_bio, io->split_bio);
+		io->need_split_bio = false;
 	}
 
 	if (!bio_add_page(io->current_bio, page, PAGE_SIZE, 0))
@@ -554,12 +650,22 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
 
 	io = log->current_io;
 
+	if (test_and_clear_bit(STRIPE_R5C_PREFLUSH, &sh->state))
+		io->has_flush = 1;
+
 	for (i = 0; i < sh->disks; i++) {
 		if (!test_bit(R5_Wantwrite, &sh->dev[i].flags) &&
 		    !test_bit(R5_Wantcache, &sh->dev[i].flags))
 			continue;
 		if (i == sh->pd_idx || i == sh->qd_idx)
 			continue;
+		if (test_bit(R5_WantFUA, &sh->dev[i].flags)) {
+			io->has_fua = 1;
+			/* we need to flush journal to make sure recovery can
+			 * reach the data with fua flag
+			 */
+			io->has_flush = 1;
+		}
 		r5l_append_payload_meta(log, R5LOG_PAYLOAD_DATA,
 					raid5_compute_blocknr(sh, i, 0),
 					sh->dev[i].log_checksum, 0, false);
@@ -716,10 +822,16 @@ int r5l_handle_flush_request(struct r5l_log *log, struct bio *bio)
 	 * don't need to flush again
 	 */
 	if (bio->bi_iter.bi_size == 0) {
-		bio_endio(bio);
+		mutex_lock(&log->io_mutex);
+		r5l_get_meta(log, 0);
+		bio_list_add(&log->current_io->flush_barriers, bio);
+		log->current_io->has_flush = 1;
+		log->current_io->has_null_flush = 1;
+		atomic_inc(&log->current_io->pending_stripe);
+		r5l_submit_current_io(log);
+		mutex_unlock(&log->io_mutex);
 		return 0;
 	}
-	bio->bi_opf &= ~REQ_PREFLUSH;
 	return -EAGAIN;
 }
 
@@ -2186,6 +2298,8 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
 	INIT_LIST_HEAD(&log->no_space_stripes);
 	spin_lock_init(&log->no_space_stripes_lock);
 
+	INIT_WORK(&log->deferred_io_work, r5l_submit_io_async);
+
 	/* flush full stripe */
 	log->r5c_state = R5C_STATE_WRITE_BACK;
 	INIT_LIST_HEAD(&log->stripe_in_cache);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index a3d26ec..df31bfa 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5321,6 +5321,7 @@ static void raid5_make_request(struct mddev *mddev, struct bio * bi)
 	int remaining;
 	DEFINE_WAIT(w);
 	bool do_prepare;
+	bool do_flush = false;
 
 	if (unlikely(bi->bi_opf & REQ_PREFLUSH)) {
 		int ret = r5l_handle_flush_request(conf->log, bi);
@@ -5332,6 +5333,7 @@ static void raid5_make_request(struct mddev *mddev, struct bio * bi)
 			return;
 		}
 		/* ret == -EAGAIN, fallback */
+		do_flush = true;
 	}
 
 	md_write_start(mddev, bi);
@@ -5470,6 +5472,12 @@ static void raid5_make_request(struct mddev *mddev, struct bio * bi)
 				do_prepare = true;
 				goto retry;
 			}
+			if (do_flush) {
+				set_bit(STRIPE_R5C_PREFLUSH, &sh->state);
+				/* we only need flush for one stripe */
+				do_flush = false;
+			}
+
 			set_bit(STRIPE_HANDLE, &sh->state);
 			clear_bit(STRIPE_DELAYED, &sh->state);
 			if ((!sh->batch_head || sh == sh->batch_head) &&
diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
index 2d8222c..bbb2536 100644
--- a/drivers/md/raid5.h
+++ b/drivers/md/raid5.h
@@ -359,6 +359,7 @@ enum {
 	STRIPE_R5C_FROZEN,	/* r5c_cache frozen and being written out */
 	STRIPE_R5C_WRITTEN,	/* ready for r5c_handle_stripe_written() */
 	STRIPE_R5C_PRIORITY,	/* high priority stripe for log reclaim */
+	STRIPE_R5C_PREFLUSH,	/* need to flush journal device */
 };
 
 #define STRIPE_EXPAND_SYNC_FLAGS \
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* [PATCH v2 6/6] md/r5cache: decrease the counter after full-write stripe was reclaimed
  2016-09-26 23:30 [PATCH v2 0/6] raid5-cache: enabling cache features Song Liu
                   ` (4 preceding siblings ...)
  2016-09-26 23:30 ` [PATCH v2 5/6] r5cache: handle SYNC and FUA Song Liu
@ 2016-09-26 23:30 ` Song Liu
  5 siblings, 0 replies; 14+ messages in thread
From: Song Liu @ 2016-09-26 23:30 UTC (permalink / raw)
  To: linux-raid
  Cc: neilb, shli, kernel-team, dan.j.williams, hch, liuzhengyuang521,
	liuzhengyuan, Song Liu

From: ZhengYuan Liu <liuzhengyuan@kylinos.cn>

Once the data is written to the cache device, r5c_handle_cached_data_endio is
called to set dev->written to NULL and return the bio to the upper layer. As a
result, handle_stripe_clean_event gets no chance to decrease the
conf->pending_full_writes counter when the stripe is later written to the raid
disks at reclaim stage. We should test the STRIPE_FULL_WRITE state and decrease
the counter somewhere when a full-write stripe is reclaimed;
r5c_handle_stripe_written is the right place to do that.

Signed-off-by: ZhengYuan Liu <liuzhengyuan@kylinos.cn>
Signed-off-by: Song Liu <songliubraving@fb.com>
---
 drivers/md/raid5-cache.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index a94585d..8a035da 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -2044,6 +2044,10 @@ void r5c_handle_stripe_written(struct r5conf *conf,
 		spin_unlock_irqrestore(&conf->log->stripe_in_cache_lock, flags);
 		sh->log_start = MaxSector;
 		clear_bit(STRIPE_R5C_PRIORITY, &sh->state);
+
+		if (test_and_clear_bit(STRIPE_FULL_WRITE, &sh->state))
+			if (atomic_dec_and_test(&conf->pending_full_writes))
+				md_wakeup_thread(conf->mddev->thread);
 	}
 
 	if (do_wakeup)
-- 
2.9.3


^ permalink raw reply related	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 1/6] r5cache: write part of r5cache
  2016-09-26 23:30 ` [PATCH v2 1/6] r5cache: write part of r5cache Song Liu
@ 2016-09-27 22:51   ` Shaohua Li
  2016-09-29 23:06     ` Song Liu
  0 siblings, 1 reply; 14+ messages in thread
From: Shaohua Li @ 2016-09-27 22:51 UTC (permalink / raw)
  To: Song Liu
  Cc: linux-raid, neilb, shli, kernel-team, dan.j.williams, hch,
	liuzhengyuang521, liuzhengyuan

On Mon, Sep 26, 2016 at 04:30:45PM -0700, Song Liu wrote:

Please add more comments directly in the code. The state machine is already
very complex, so please help make the code more readable. Maybe adding some
descriptions from the changelog into the code is a good start.

The patch includes both the write path and the reclaim path. It would be
easier to review if you split them into 2 patches.

> With r5cache, write operation does not wait for parity calculation
> and write out, so the write latency is lower (1 write to journal
> device vs. read and then write to raid disks). Also, r5cache will
> reduce RAID overhead (multipile IO due to read-modify-write of
> parity) and provide more opportunities of full stripe writes.
> 
> r5cache adds 4 flags to stripe_head.state:
>  - STRIPE_R5C_PARTIAL_STRIPE,
>  - STRIPE_R5C_FULL_STRIPE,
>  - STRIPE_R5C_FROZEN and
>  - STRIPE_R5C_WRITTEN.
please describe why STRIPE_R5C_WRITTEN is required.

> There are some known limitations of the cache implementation:
> 
> 1. Write cache only covers full page writes (R5_OVERWRITE). Writes
>    of smaller granularity are write through.
> 2. Only one log io (sh->log_io) for each stripe at anytime. Later
>    writes for the same stripe have to wait. This can be improved by
>    moving log_io to r5dev.
There are other limitations in the implementation:
- the read path must enter the state machine, which is a significant bottleneck

- there is no checkpoint in the log, which r5l_payload_flush is designed for.
  So a recovery must replay the whole log. This is a big availability
  shortcoming.

Please explain why, when a stripe is in the cache or is being flushed to the
raid disks and a new write to the stripe comes in, there isn't a problem (data
corruption, deadlock).

Please make sure the code doesn't change behavior for an array without a log,
or with the log in writethrough mode, in terms of both performance and
correctness. If a log is used, the default mode should be writethrough, as the
cache is still immature and could be dangerous for some users (single point of
failure). I'll add inline comments in the places where I found this problem,
but please double check. This is very important.

> diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
> index 1b1ab4a..6b28461 100644
> --- a/drivers/md/raid5-cache.c
> +++ b/drivers/md/raid5-cache.c
> @@ -34,12 +34,26 @@
>  #define RECLAIM_MAX_FREE_SPACE (10 * 1024 * 1024 * 2) /* sector */
>  #define RECLAIM_MAX_FREE_SPACE_SHIFT (2)
>  
> +/* wake up reclaim thread periodically */
> +#define R5C_RECLAIM_WAKEUP_INTERVAL (5 * HZ)
> +/* start flush with these full stripes */
> +#define R5C_FULL_STRIPE_FLUSH_BATCH 8
> +/* reclaim stripes in groups */
> +#define R5C_RECLAIM_STRIPE_GROUP (NR_STRIPE_HASH_LOCKS * 2)
> +
>  /*
>   * We only need 2 bios per I/O unit to make progress, but ensure we
>   * have a few more available to not get too tight.
>   */
>  #define R5L_POOL_SIZE	4
>  
> +enum r5c_state {
> +	R5C_STATE_NO_CACHE = 0,
> +	R5C_STATE_WRITE_THROUGH = 1,
> +	R5C_STATE_WRITE_BACK = 2,
> +	R5C_STATE_CACHE_BROKEN = 3,
> +};
> +
>  struct r5l_log {
>  	struct md_rdev *rdev;
>  
> @@ -96,6 +110,9 @@ struct r5l_log {
>  	spinlock_t no_space_stripes_lock;
>  
>  	bool need_cache_flush;
> +
> +	/* for r5c_cache */
> +	enum r5c_state r5c_state;
>  };
>  
>  /*
> @@ -168,12 +185,79 @@ static void __r5l_set_io_unit_state(struct r5l_io_unit *io,
>  	io->state = state;
>  }
>  
> +/*
> + * Freeze the stripe, thus send the stripe into reclaim path.
> + *
> + * This function should only be called from raid5d that handling this stripe,
> + * or when holds conf->device_lock
> + */

Do you mean if this called in raid5d, the lock isn't required? Please note we
could have several threads (like raid5d) handle stripes. Is there any problem
here?

> +void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh)
> +{
> +	struct r5conf *conf = sh->raid_conf;
> +
> +	if (!conf->log)
> +		return;
> +
> +	WARN_ON(test_bit(STRIPE_R5C_FROZEN, &sh->state));
> +	set_bit(STRIPE_R5C_FROZEN, &sh->state);
> +
> +	if (!test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
> +		atomic_inc(&conf->preread_active_stripes);
> +
> +	if (test_and_clear_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state)) {
> +		BUG_ON(atomic_read(&conf->r5c_cached_partial_stripes) == 0);
> +		atomic_dec(&conf->r5c_cached_partial_stripes);
> +	}
> +
> +	if (test_and_clear_bit(STRIPE_R5C_FULL_STRIPE, &sh->state)) {
> +		BUG_ON(atomic_read(&conf->r5c_cached_full_stripes) == 0);
> +		atomic_dec(&conf->r5c_cached_full_stripes);
> +	}
> +}
> +
> +static void r5c_handle_data_cached(struct stripe_head *sh)
> +{
> +	int i;
> +
> +	for (i = sh->disks; i--; )
> +		if (test_and_clear_bit(R5_Wantcache, &sh->dev[i].flags)) {
> +			set_bit(R5_InCache, &sh->dev[i].flags);
> +			clear_bit(R5_LOCKED, &sh->dev[i].flags);
> +			atomic_inc(&sh->dev_in_cache);
> +		}
> +}
> +
> +/*
> + * this journal write must contain full parity,
> + * it may also contain some data pages
> + */
> +static void r5c_handle_parity_cached(struct stripe_head *sh)
> +{
> +	int i;
> +
> +	for (i = sh->disks; i--; )
> +		if (test_bit(R5_InCache, &sh->dev[i].flags))
> +			set_bit(R5_Wantwrite, &sh->dev[i].flags);
> +	set_bit(STRIPE_R5C_WRITTEN, &sh->state);
> +}
> +
> +static void r5c_finish_cache_stripe(struct stripe_head *sh)
> +{

I'm hoping this one has a
	if (not in writeback mode)
		return
so it's clear this is only for writeback mode.
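
Something like the following minimal sketch (illustrative only, using
the r5c_state member this patch adds to struct r5l_log):

	static void r5c_finish_cache_stripe(struct stripe_head *sh)
	{
		struct r5l_log *log = sh->raid_conf->log;

		/* writethrough (or no cache): nothing is kept in the write
		 * cache, so there is nothing to account for here
		 */
		if (log->r5c_state != R5C_STATE_WRITE_BACK)
			return;

		if (test_bit(STRIPE_R5C_FROZEN, &sh->state))
			r5c_handle_parity_cached(sh);
		else
			r5c_handle_data_cached(sh);
	}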

> +	if (test_bit(STRIPE_R5C_FROZEN, &sh->state))
> +		r5c_handle_parity_cached(sh);
> +	else
> +		r5c_handle_data_cached(sh);
> +}
> +
>  static void r5l_io_run_stripes(struct r5l_io_unit *io)
>  {
>  	struct stripe_head *sh, *next;
>  
>  	list_for_each_entry_safe(sh, next, &io->stripe_list, log_list) {
>  		list_del_init(&sh->log_list);
> +
> +		r5c_finish_cache_stripe(sh);
> +
>  		set_bit(STRIPE_HANDLE, &sh->state);
>  		raid5_release_stripe(sh);
>  	}
> @@ -402,7 +486,8 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
>  	io = log->current_io;
>  
>  	for (i = 0; i < sh->disks; i++) {
> -		if (!test_bit(R5_Wantwrite, &sh->dev[i].flags))
> +		if (!test_bit(R5_Wantwrite, &sh->dev[i].flags) &&
> +		    !test_bit(R5_Wantcache, &sh->dev[i].flags))
>  			continue;
>  		if (i == sh->pd_idx || i == sh->qd_idx)
>  			continue;
> @@ -412,18 +497,19 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
>  		r5l_append_payload_page(log, sh->dev[i].page);
>  	}
>  
> -	if (sh->qd_idx >= 0) {
> +	if (parity_pages == 2) {
>  		r5l_append_payload_meta(log, R5LOG_PAYLOAD_PARITY,
>  					sh->sector, sh->dev[sh->pd_idx].log_checksum,
>  					sh->dev[sh->qd_idx].log_checksum, true);
>  		r5l_append_payload_page(log, sh->dev[sh->pd_idx].page);
>  		r5l_append_payload_page(log, sh->dev[sh->qd_idx].page);
> -	} else {
> +	} else if (parity_pages == 1) {
>  		r5l_append_payload_meta(log, R5LOG_PAYLOAD_PARITY,
>  					sh->sector, sh->dev[sh->pd_idx].log_checksum,
>  					0, false);
>  		r5l_append_payload_page(log, sh->dev[sh->pd_idx].page);
> -	}
> +	} else
> +		BUG_ON(parity_pages != 0);
>  
>  	list_add_tail(&sh->log_list, &io->stripe_list);
>  	atomic_inc(&io->pending_stripe);
> @@ -432,7 +518,6 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
>  	return 0;
>  }
>  
> -static void r5l_wake_reclaim(struct r5l_log *log, sector_t space);
>  /*
>   * running in raid5d, where reclaim could wait for raid5d too (when it flushes
>   * data from log to raid disks), so we shouldn't wait for reclaim here
> @@ -456,11 +541,17 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
>  		return -EAGAIN;
>  	}
>  
> +	WARN_ON(!test_bit(STRIPE_R5C_FROZEN, &sh->state));

is this set even for writethrough?

>  	for (i = 0; i < sh->disks; i++) {
>  		void *addr;
>  
>  		if (!test_bit(R5_Wantwrite, &sh->dev[i].flags))
>  			continue;
> +
> +		if (test_bit(R5_InCache, &sh->dev[i].flags))
> +			continue;
> +
>  		write_disks++;
>  		/* checksum is already calculated in last run */
>  		if (test_bit(STRIPE_LOG_TRAPPED, &sh->state))
> @@ -473,6 +564,9 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
>  	parity_pages = 1 + !!(sh->qd_idx >= 0);
>  	data_pages = write_disks - parity_pages;
>  
> +	pr_debug("%s: write %d data_pages and %d parity_pages\n",
> +		 __func__, data_pages, parity_pages);
> +
>  	meta_size =
>  		((sizeof(struct r5l_payload_data_parity) + sizeof(__le32))
>  		 * data_pages) +
> @@ -735,7 +829,6 @@ static void r5l_write_super_and_discard_space(struct r5l_log *log,
>  	}
>  }
>  
> -
>  static void r5l_do_reclaim(struct r5l_log *log)
>  {
>  	sector_t reclaim_target = xchg(&log->reclaim_target, 0);
> @@ -798,7 +891,7 @@ static void r5l_reclaim_thread(struct md_thread *thread)
>  	r5l_do_reclaim(log);
>  }
>  
> -static void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
> +void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
>  {
>  	unsigned long target;
>  	unsigned long new = (unsigned long)space; /* overflow in theory */
> @@ -1111,6 +1204,188 @@ static void r5l_write_super(struct r5l_log *log, sector_t cp)
>  	set_bit(MD_CHANGE_DEVS, &mddev->flags);
>  }
>  
> +static void r5c_flush_stripe(struct r5conf *conf, struct stripe_head *sh)
> +{
> +	list_del_init(&sh->lru);
> +	r5c_freeze_stripe_for_reclaim(sh);
> +	atomic_inc(&conf->active_stripes);
> +	atomic_inc(&sh->count);
> +	set_bit(STRIPE_HANDLE, &sh->state);
> +	raid5_release_stripe(sh);
> +}
> +
> +/* if num <= 0, flush all stripes
> + * if num > 0, flush at most num stripes
> + */
Multi-line comments should be formatted as:
/*
 * xxx
 */

> +int r5c_flush_cache(struct r5conf *conf, int num)
> +{
> +	int count = 0;
> +	struct stripe_head *sh, *next;
> +
> +	assert_spin_locked(&conf->device_lock);
> +	if (!conf->log)
> +		return 0;
> +	list_for_each_entry_safe(sh, next, &conf->r5c_full_stripe_list, lru) {
> +		r5c_flush_stripe(conf, sh);
> +		count++;
> +		if (num > 0 && count >= num && count >=
> +		    R5C_FULL_STRIPE_FLUSH_BATCH)
> +			return count;
> +	}
> +
> +	list_for_each_entry_safe(sh, next, &conf->r5c_partial_stripe_list, lru) {
> +		r5c_flush_stripe(conf, sh);
> +		count++;
> +		if (num > 0 && count == num)
> +			return count;
> +	}
> +	return count;
> +}
> +
> +int r5c_handle_stripe_dirtying(struct r5conf *conf,
> +			       struct stripe_head *sh,
> +			       struct stripe_head_state *s,
> +			       int disks) {
> +	struct r5l_log *log = conf->log;
> +	int i;
> +	struct r5dev *dev;
> +
> +	if (!log || test_bit(STRIPE_R5C_FROZEN, &sh->state))
> +		return -EAGAIN;
> +
> +	if (conf->log->r5c_state == R5C_STATE_WRITE_THROUGH ||
> +	    conf->quiesce != 0 || conf->mddev->degraded != 0) {
> +		/* write through mode */
> +		r5c_freeze_stripe_for_reclaim(sh);
> +		return -EAGAIN;

will this change the behavior of R5C_STATE_WRITE_THROUGH?
r5c_freeze_stripe_for_reclaim does change things that are not related to
writethrough mode.

The quiesce check sounds unnecessary. The patch flushes all caches in quiesce,
so no IO is running in the quiesce state.

> +	}
> +
> +	s->to_cache = 0;
> +
> +	for (i = disks; i--; ) {
> +		dev = &sh->dev[i];
> +		/* if none-overwrite, use the reclaim path (write through) */
> +		if (dev->towrite && !test_bit(R5_OVERWRITE, &dev->flags) &&
> +		    !test_bit(R5_InCache, &dev->flags)) {
> +			r5c_freeze_stripe_for_reclaim(sh);
> +			return -EAGAIN;
> +		}
> +	}
> +
> +	for (i = disks; i--; ) {
> +		dev = &sh->dev[i];
> +		if (dev->towrite) {
> +			set_bit(R5_Wantcache, &dev->flags);
> +			set_bit(R5_Wantdrain, &dev->flags);
> +			set_bit(R5_LOCKED, &dev->flags);
> +			s->to_cache++;
> +		}
> +	}
> +
> +	if (s->to_cache)
> +		set_bit(STRIPE_OP_BIODRAIN, &s->ops_request);
> +
> +	return 0;
> +}
> +
> +void r5c_handle_stripe_written(struct r5conf *conf,
> +			       struct stripe_head *sh) {
> +	int i;
> +	int do_wakeup = 0;
> +
> +	if (test_and_clear_bit(STRIPE_R5C_WRITTEN, &sh->state)) {
> +		WARN_ON(!test_bit(STRIPE_R5C_FROZEN, &sh->state));
> +		clear_bit(STRIPE_R5C_FROZEN, &sh->state);
> +
> +		for (i = sh->disks; i--; ) {
> +			if (test_and_clear_bit(R5_InCache, &sh->dev[i].flags))
> +				atomic_dec(&sh->dev_in_cache);
> +			clear_bit(R5_UPTODATE, &sh->dev[i].flags);
> +			if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
> +				do_wakeup = 1;
> +		}
> +	}
> +
> +	if (do_wakeup)
> +		wake_up(&conf->wait_for_overlap);
> +}
> +
> +int
> +r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
> +	       struct stripe_head_state *s)
> +{
> +	int pages;
> +	int meta_size;
> +	int reserve;
> +	int i;
> +	int ret = 0;
> +	int page_count = 0;
> +
> +	BUG_ON(!log);
> +	BUG_ON(s->to_cache == 0);
> +
> +	for (i = 0; i < sh->disks; i++) {
> +		void *addr;
> +
> +		if (!test_bit(R5_Wantcache, &sh->dev[i].flags))
> +			continue;
> +		addr = kmap_atomic(sh->dev[i].page);
> +		sh->dev[i].log_checksum = crc32c_le(log->uuid_checksum,
> +						    addr, PAGE_SIZE);
> +		kunmap_atomic(addr);
> +		page_count++;
> +	}
> +	WARN_ON(page_count != s->to_cache);
> +
> +	pages = s->to_cache;
> +
> +	meta_size =
> +		((sizeof(struct r5l_payload_data_parity) + sizeof(__le32))
> +		 * pages);
> +	/* Doesn't work with very big raid array */
> +	if (meta_size + sizeof(struct r5l_meta_block) > PAGE_SIZE)
> +		return -EINVAL;
> +
> +	/*
> +	 * The stripe must enter state machine again to call endio, so
> +	 * don't delay.
> +	 */
> +	clear_bit(STRIPE_DELAYED, &sh->state);
> +	atomic_inc(&sh->count);
> +
> +	mutex_lock(&log->io_mutex);
> +	/* meta + data */
> +	reserve = (1 + pages) << (PAGE_SHIFT - 9);
> +	if (!r5l_has_free_space(log, reserve)) {
> +		spin_lock(&log->no_space_stripes_lock);
> +		list_add_tail(&sh->log_list, &log->no_space_stripes);
> +		spin_unlock(&log->no_space_stripes_lock);
> +
> +		r5l_wake_reclaim(log, reserve);
> +	} else {
> +		ret = r5l_log_stripe(log, sh, pages, 0);
> +		if (ret) {
> +			spin_lock_irq(&log->io_list_lock);
> +			list_add_tail(&sh->log_list, &log->no_mem_stripes);
> +			spin_unlock_irq(&log->io_list_lock);
> +		}
> +	}
> +
> +	mutex_unlock(&log->io_mutex);
> +	return 0;
> +}
> +
> +void r5c_do_reclaim(struct r5conf *conf)
> +{
> +	struct r5l_log *log = conf->log;
> +
> +	assert_spin_locked(&conf->device_lock);
> +
> +	if (!log)
> +		return;

skip for writethrough

> +	r5c_flush_cache(conf, 0);
> +}
> +
>  static int r5l_load_log(struct r5l_log *log)
>  {
>  	struct md_rdev *rdev = log->rdev;
> @@ -1230,6 +1505,9 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
>  	INIT_LIST_HEAD(&log->no_space_stripes);
>  	spin_lock_init(&log->no_space_stripes_lock);
>  
> +	/* flush full stripe */
> +	log->r5c_state = R5C_STATE_WRITE_BACK;

Should be R5C_STATE_WRITE_THROUGH
> +
>  	if (r5l_load_log(log))
>  		goto error;
>  
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index f94472d..25b411d 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -316,8 +316,25 @@ static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh,
>  			    < IO_THRESHOLD)
>  				md_wakeup_thread(conf->mddev->thread);
>  		atomic_dec(&conf->active_stripes);
> -		if (!test_bit(STRIPE_EXPANDING, &sh->state))
> -			list_add_tail(&sh->lru, temp_inactive_list);
> +		if (!test_bit(STRIPE_EXPANDING, &sh->state)) {
> +			if (atomic_read(&sh->dev_in_cache) == 0) {
> +				list_add_tail(&sh->lru, temp_inactive_list);
> +			} else if (atomic_read(&sh->dev_in_cache) ==
> +				   conf->raid_disks - conf->max_degraded) {
> +				/* full stripe */
> +				if (!test_and_set_bit(STRIPE_R5C_FULL_STRIPE, &sh->state))
> +					atomic_inc(&conf->r5c_cached_full_stripes);
> +				if (test_and_clear_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state))
> +					atomic_dec(&conf->r5c_cached_partial_stripes);
> +				list_add_tail(&sh->lru, &conf->r5c_full_stripe_list);
> +			} else {
> +				/* not full stripe */
> +				if (!test_and_set_bit(STRIPE_R5C_PARTIAL_STRIPE,
> +						      &sh->state))
> +					atomic_inc(&conf->r5c_cached_partial_stripes);
> +				list_add_tail(&sh->lru, &conf->r5c_partial_stripe_list);
> +			}
> +		}
>  	}
>  }
>  
> @@ -901,6 +918,13 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
>  
>  	might_sleep();
>  
> +	if (s->to_cache) {
> +		if (r5c_cache_data(conf->log, sh, s) == 0)
> +			return;

At this stage we fall back to no cache. But I don't see R5_Wantcache being
cleared; is it a problem?
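If it is, I'd expect the fallback to drop the cache intent before taking the
journal/no-cache path; just to illustrate where I mean (sketch, not necessarily
the complete fix):

	if (s->to_cache) {
		if (r5c_cache_data(conf->log, sh, s) == 0)
			return;
		/* fall back: array too big for caching, undo the intent */
		for (i = disks; i--; )
			clear_bit(R5_Wantcache, &sh->dev[i].flags);
		r5c_freeze_stripe_for_reclaim(sh);
	}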

> +		/* array is too big that meta data size > PAGE_SIZE  */
> +		r5c_freeze_stripe_for_reclaim(sh);
> +	}
> +
>  	if (r5l_write_stripe(conf->log, sh) == 0)
>  		return;
>  	for (i = disks; i--; ) {
> @@ -1029,6 +1053,7 @@ again:
>  
>  			if (test_bit(R5_SkipCopy, &sh->dev[i].flags))
>  				WARN_ON(test_bit(R5_UPTODATE, &sh->dev[i].flags));
> +
>  			sh->dev[i].vec.bv_page = sh->dev[i].page;
>  			bi->bi_vcnt = 1;
>  			bi->bi_io_vec[0].bv_len = STRIPE_SIZE;
> @@ -1115,7 +1140,7 @@ again:
>  static struct dma_async_tx_descriptor *
>  async_copy_data(int frombio, struct bio *bio, struct page **page,
>  	sector_t sector, struct dma_async_tx_descriptor *tx,
> -	struct stripe_head *sh)
> +	struct stripe_head *sh, int no_skipcopy)
>  {
>  	struct bio_vec bvl;
>  	struct bvec_iter iter;
> @@ -1155,7 +1180,8 @@ async_copy_data(int frombio, struct bio *bio, struct page **page,
>  			if (frombio) {
>  				if (sh->raid_conf->skip_copy &&
>  				    b_offset == 0 && page_offset == 0 &&
> -				    clen == STRIPE_SIZE)
> +				    clen == STRIPE_SIZE &&
> +				    !no_skipcopy)
>  					*page = bio_page;
>  				else
>  					tx = async_memcpy(*page, bio_page, page_offset,
> @@ -1237,7 +1263,7 @@ static void ops_run_biofill(struct stripe_head *sh)
>  			while (rbi && rbi->bi_iter.bi_sector <
>  				dev->sector + STRIPE_SECTORS) {
>  				tx = async_copy_data(0, rbi, &dev->page,
> -					dev->sector, tx, sh);
> +						     dev->sector, tx, sh, 0);
>  				rbi = r5_next_bio(rbi, dev->sector);
>  			}
>  		}
> @@ -1364,7 +1390,8 @@ static int set_syndrome_sources(struct page **srcs,
>  		if (i == sh->qd_idx || i == sh->pd_idx ||
>  		    (srctype == SYNDROME_SRC_ALL) ||
>  		    (srctype == SYNDROME_SRC_WANT_DRAIN &&
> -		     test_bit(R5_Wantdrain, &dev->flags)) ||
> +		     (test_bit(R5_Wantdrain, &dev->flags) ||
> +		      test_bit(R5_InCache, &dev->flags))) ||
>  		    (srctype == SYNDROME_SRC_WRITTEN &&
>  		     dev->written))
>  			srcs[slot] = sh->dev[i].page;
> @@ -1543,9 +1570,18 @@ ops_run_compute6_2(struct stripe_head *sh, struct raid5_percpu *percpu)
>  static void ops_complete_prexor(void *stripe_head_ref)
>  {
>  	struct stripe_head *sh = stripe_head_ref;
> +	int i;
>  
>  	pr_debug("%s: stripe %llu\n", __func__,
>  		(unsigned long long)sh->sector);
> +
> +	for (i = sh->disks; i--; )
> +		if (sh->dev[i].page != sh->dev[i].orig_page) {
> +			struct page *p = sh->dev[i].page;
> +
> +			sh->dev[i].page = sh->dev[i].orig_page;
> +			put_page(p);
What if the array has no log but skip_copy is enabled?

> +		}
>  }
>  
>  static struct dma_async_tx_descriptor *
> @@ -1567,7 +1603,8 @@ ops_run_prexor5(struct stripe_head *sh, struct raid5_percpu *percpu,
>  	for (i = disks; i--; ) {
>  		struct r5dev *dev = &sh->dev[i];
>  		/* Only process blocks that are known to be uptodate */
> -		if (test_bit(R5_Wantdrain, &dev->flags))
> +		if (test_bit(R5_Wantdrain, &dev->flags) ||
> +		    test_bit(R5_InCache, &dev->flags))
>  			xor_srcs[count++] = dev->page;
>  	}
>  
> @@ -1618,6 +1655,10 @@ ops_run_biodrain(struct stripe_head *sh, struct dma_async_tx_descriptor *tx)
>  
>  again:
>  			dev = &sh->dev[i];
> +			if (test_and_clear_bit(R5_InCache, &dev->flags)) {
> +				BUG_ON(atomic_read(&sh->dev_in_cache) == 0);
> +				atomic_dec(&sh->dev_in_cache);
> +			}
>  			spin_lock_irq(&sh->stripe_lock);
>  			chosen = dev->towrite;
>  			dev->towrite = NULL;
> @@ -1625,7 +1666,8 @@ again:
>  			BUG_ON(dev->written);
>  			wbi = dev->written = chosen;
>  			spin_unlock_irq(&sh->stripe_lock);
> -			WARN_ON(dev->page != dev->orig_page);
> +			if (!test_bit(R5_Wantcache, &dev->flags))
> +				WARN_ON(dev->page != dev->orig_page);
>  
>  			while (wbi && wbi->bi_iter.bi_sector <
>  				dev->sector + STRIPE_SECTORS) {
> @@ -1637,8 +1679,10 @@ again:
>  					set_bit(R5_Discard, &dev->flags);
>  				else {
>  					tx = async_copy_data(1, wbi, &dev->page,
> -						dev->sector, tx, sh);
> -					if (dev->page != dev->orig_page) {
> +							     dev->sector, tx, sh,
> +							     test_bit(R5_Wantcache, &dev->flags));
> +					if (dev->page != dev->orig_page &&
> +					    !test_bit(R5_Wantcache, &dev->flags)) {
>  						set_bit(R5_SkipCopy, &dev->flags);
>  						clear_bit(R5_UPTODATE, &dev->flags);
>  						clear_bit(R5_OVERWRITE, &dev->flags);
> @@ -1746,7 +1790,8 @@ again:
>  		xor_dest = xor_srcs[count++] = sh->dev[pd_idx].page;
>  		for (i = disks; i--; ) {
>  			struct r5dev *dev = &sh->dev[i];
> -			if (head_sh->dev[i].written)
> +			if (head_sh->dev[i].written ||
> +			    test_bit(R5_InCache, &head_sh->dev[i].flags))
>  				xor_srcs[count++] = dev->page;
>  		}
>  	} else {
> @@ -2001,6 +2046,7 @@ static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp,
>  		INIT_LIST_HEAD(&sh->batch_list);
>  		INIT_LIST_HEAD(&sh->lru);
>  		atomic_set(&sh->count, 1);
> +		atomic_set(&sh->dev_in_cache, 0);
>  		for (i = 0; i < disks; i++) {
>  			struct r5dev *dev = &sh->dev[i];
>  
> @@ -2887,6 +2933,9 @@ schedule_reconstruction(struct stripe_head *sh, struct stripe_head_state *s,
>  				if (!expand)
>  					clear_bit(R5_UPTODATE, &dev->flags);
>  				s->locked++;
> +			} else if (test_bit(R5_InCache, &dev->flags)) {
> +				set_bit(R5_LOCKED, &dev->flags);
> +				s->locked++;
>  			}
>  		}
>  		/* if we are not expanding this is a proper write request, and
> @@ -2926,6 +2975,9 @@ schedule_reconstruction(struct stripe_head *sh, struct stripe_head_state *s,
>  				set_bit(R5_LOCKED, &dev->flags);
>  				clear_bit(R5_UPTODATE, &dev->flags);
>  				s->locked++;
> +			} else if (test_bit(R5_InCache, &dev->flags)) {
> +				set_bit(R5_LOCKED, &dev->flags);
> +				s->locked++;
>  			}
>  		}
>  		if (!s->locked)
> @@ -3577,6 +3629,9 @@ static void handle_stripe_dirtying(struct r5conf *conf,
>  	int rmw = 0, rcw = 0, i;
>  	sector_t recovery_cp = conf->mddev->recovery_cp;
>  
> +	if (r5c_handle_stripe_dirtying(conf, sh, s, disks) == 0)
> +		return;
> +
>  	/* Check whether resync is now happening or should start.
>  	 * If yes, then the array is dirty (after unclean shutdown or
>  	 * initial creation), so parity in some stripes might be inconsistent.
> @@ -3597,9 +3652,12 @@ static void handle_stripe_dirtying(struct r5conf *conf,
>  	} else for (i = disks; i--; ) {
>  		/* would I have to read this buffer for read_modify_write */
>  		struct r5dev *dev = &sh->dev[i];
> -		if ((dev->towrite || i == sh->pd_idx || i == sh->qd_idx) &&
> +		if ((dev->towrite || i == sh->pd_idx || i == sh->qd_idx ||
> +		     test_bit(R5_InCache, &dev->flags)) &&
>  		    !test_bit(R5_LOCKED, &dev->flags) &&
> -		    !(test_bit(R5_UPTODATE, &dev->flags) ||
> +		    !((test_bit(R5_UPTODATE, &dev->flags) &&
> +		       (!test_bit(R5_InCache, &dev->flags) ||
> +			dev->page != dev->orig_page)) ||
>  		      test_bit(R5_Wantcompute, &dev->flags))) {
>  			if (test_bit(R5_Insync, &dev->flags))
>  				rmw++;
> @@ -3611,13 +3669,15 @@ static void handle_stripe_dirtying(struct r5conf *conf,
>  		    i != sh->pd_idx && i != sh->qd_idx &&
>  		    !test_bit(R5_LOCKED, &dev->flags) &&
>  		    !(test_bit(R5_UPTODATE, &dev->flags) ||
> -		    test_bit(R5_Wantcompute, &dev->flags))) {
> +		      test_bit(R5_InCache, &dev->flags) ||
> +		      test_bit(R5_Wantcompute, &dev->flags))) {
>  			if (test_bit(R5_Insync, &dev->flags))
>  				rcw++;
>  			else
>  				rcw += 2*disks;
>  		}
>  	}
> +
>  	pr_debug("for sector %llu, rmw=%d rcw=%d\n",
>  		(unsigned long long)sh->sector, rmw, rcw);
>  	set_bit(STRIPE_HANDLE, &sh->state);
> @@ -3629,10 +3689,18 @@ static void handle_stripe_dirtying(struct r5conf *conf,
>  					  (unsigned long long)sh->sector, rmw);
>  		for (i = disks; i--; ) {
>  			struct r5dev *dev = &sh->dev[i];
> -			if ((dev->towrite || i == sh->pd_idx || i == sh->qd_idx) &&
> +			if (test_bit(R5_InCache, &dev->flags) &&
> +			    dev->page == dev->orig_page)
> +				dev->page = alloc_page(GFP_NOIO);  /* prexor */
> +
> +			if ((dev->towrite ||
> +			     i == sh->pd_idx || i == sh->qd_idx ||
> +			     test_bit(R5_InCache, &dev->flags)) &&
>  			    !test_bit(R5_LOCKED, &dev->flags) &&
> -			    !(test_bit(R5_UPTODATE, &dev->flags) ||
> -			    test_bit(R5_Wantcompute, &dev->flags)) &&
> +			    !((test_bit(R5_UPTODATE, &dev->flags) &&
> +			       (!test_bit(R5_InCache, &dev->flags) ||
> +				dev->page != dev->orig_page)) ||
> +			      test_bit(R5_Wantcompute, &dev->flags)) &&
>  			    test_bit(R5_Insync, &dev->flags)) {
>  				if (test_bit(STRIPE_PREREAD_ACTIVE,
>  					     &sh->state)) {
> @@ -3658,6 +3726,7 @@ static void handle_stripe_dirtying(struct r5conf *conf,
>  			    i != sh->pd_idx && i != sh->qd_idx &&
>  			    !test_bit(R5_LOCKED, &dev->flags) &&
>  			    !(test_bit(R5_UPTODATE, &dev->flags) ||
> +			      test_bit(R5_InCache, &dev->flags) ||
>  			      test_bit(R5_Wantcompute, &dev->flags))) {
>  				rcw++;
>  				if (test_bit(R5_Insync, &dev->flags) &&
> @@ -3697,7 +3766,7 @@ static void handle_stripe_dirtying(struct r5conf *conf,
>  	 */
>  	if ((s->req_compute || !test_bit(STRIPE_COMPUTE_RUN, &sh->state)) &&
>  	    (s->locked == 0 && (rcw == 0 || rmw == 0) &&
> -	    !test_bit(STRIPE_BIT_DELAY, &sh->state)))
> +	     !test_bit(STRIPE_BIT_DELAY, &sh->state)))
>  		schedule_reconstruction(sh, s, rcw == 0, 0);
>  }
>  
> @@ -4010,6 +4079,45 @@ static void handle_stripe_expansion(struct r5conf *conf, struct stripe_head *sh)
>  	async_tx_quiesce(&tx);
>  }

please consider moving r5c_return_dev_pending_writes/r5c_handle_cached_data_endio
to raid5-cache.c; I'd like the log-related code to be self-contained if possible.
 
> +static void
> +r5c_return_dev_pending_writes(struct r5conf *conf, struct r5dev *dev,
> +			      struct bio_list *return_bi)
> +{
> +	struct bio *wbi, *wbi2;
> +
> +	wbi = dev->written;
> +	dev->written = NULL;
> +	while (wbi && wbi->bi_iter.bi_sector <
> +	       dev->sector + STRIPE_SECTORS) {
> +		wbi2 = r5_next_bio(wbi, dev->sector);
> +		if (!raid5_dec_bi_active_stripes(wbi)) {
> +			md_write_end(conf->mddev);
> +			bio_list_add(return_bi, wbi);
> +		}
> +		wbi = wbi2;
> +	}
> +}
> +
> +static void r5c_handle_cached_data_endio(struct r5conf *conf,
> +	  struct stripe_head *sh, int disks, struct bio_list *return_bi)
> +{
> +	int i;
> +
> +	for (i = sh->disks; i--; ) {
> +		if (test_bit(R5_InCache, &sh->dev[i].flags) &&
> +		    sh->dev[i].written) {
> +			set_bit(R5_UPTODATE, &sh->dev[i].flags);
> +			r5c_return_dev_pending_writes(conf, &sh->dev[i],
> +						      return_bi);
> +			bitmap_endwrite(conf->mddev->bitmap, sh->sector,
> +					STRIPE_SECTORS,
> +					!test_bit(STRIPE_DEGRADED, &sh->state),
> +					0);
> +		}
> +	}
> +	r5l_stripe_write_finished(sh);
> +}
> +
>  /*
>   * handle_stripe - do things to a stripe.
>   *
> @@ -4188,6 +4296,10 @@ static void analyse_stripe(struct stripe_head *sh, struct stripe_head_state *s)
>  			if (rdev && !test_bit(Faulty, &rdev->flags))
>  				do_recovery = 1;
>  		}
> +		if (test_bit(R5_InCache, &dev->flags) && dev->written)
> +			s->just_cached++;
> +		if (test_bit(R5_Wantcache, &dev->flags) && dev->written)
> +			s->want_cache++;
>  	}
>  	if (test_bit(STRIPE_SYNCING, &sh->state)) {
>  		/* If there is a failed device being replaced,
> @@ -4353,6 +4465,16 @@ static void handle_stripe(struct stripe_head *sh)
>  
>  	analyse_stripe(sh, &s);
>  
> +	if (s.want_cache) {
> +		/* we have finished r5c_handle_stripe_dirtying and
> +		 * ops_run_biodrain, but r5c_cache_data didn't finish because
> +		 * the journal device didn't have enough space. This time we
> +		 * should skip handle_stripe_dirtying and ops_run_biodrain
> +		 */
> +		s.to_cache = s.want_cache;
> +		goto finish;
> +	}
> +
>  	if (test_bit(STRIPE_LOG_TRAPPED, &sh->state))
>  		goto finish;
>  
> @@ -4416,7 +4538,7 @@ static void handle_stripe(struct stripe_head *sh)
>  			struct r5dev *dev = &sh->dev[i];
>  			if (test_bit(R5_LOCKED, &dev->flags) &&
>  				(i == sh->pd_idx || i == sh->qd_idx ||
> -				 dev->written)) {
> +				 dev->written || test_bit(R5_InCache, &dev->flags))) {
>  				pr_debug("Writing block %d\n", i);
>  				set_bit(R5_Wantwrite, &dev->flags);
>  				if (prexor)
> @@ -4456,6 +4578,12 @@ static void handle_stripe(struct stripe_head *sh)
>  				 test_bit(R5_Discard, &qdev->flags))))))
>  		handle_stripe_clean_event(conf, sh, disks, &s.return_bi);
>  
> +	if (s.just_cached)
> +		r5c_handle_cached_data_endio(conf, sh, disks, &s.return_bi);
> +
> +	if (test_bit(STRIPE_R5C_FROZEN, &sh->state))
> +		r5l_stripe_write_finished(sh);
what's this for?

> +
>  	/* Now we might consider reading some blocks, either to check/generate
>  	 * parity, or to satisfy requests
>  	 * or to load a block that is being partially written.
> @@ -4467,13 +4595,17 @@ static void handle_stripe(struct stripe_head *sh)
>  	    || s.expanding)
>  		handle_stripe_fill(sh, &s, disks);
>  
> -	/* Now to consider new write requests and what else, if anything
> -	 * should be read.  We do not handle new writes when:
> +	r5c_handle_stripe_written(conf, sh);

please explain why this is required.

> +
> +	/* Now to consider new write requests, cache write back and what else,
> +	 * if anything should be read.  We do not handle new writes when:
>  	 * 1/ A 'write' operation (copy+xor) is already in flight.
>  	 * 2/ A 'check' operation is in flight, as it may clobber the parity
>  	 *    block.
> +	 * 3/ A r5c cache log write is in flight.
>  	 */
Change the comment format

> -	if (s.to_write && !sh->reconstruct_state && !sh->check_state)
> +	if ((s.to_write || test_bit(STRIPE_R5C_FROZEN, &sh->state)) &&
> +	     !sh->reconstruct_state && !sh->check_state && !sh->log_io)
>  		handle_stripe_dirtying(conf, sh, &s, disks);
>  
>  	/* maybe we need to check and possibly fix the parity for this stripe
> @@ -5192,7 +5324,7 @@ static void raid5_make_request(struct mddev *mddev, struct bio * bi)
>  	 * later we might have to read it again in order to reconstruct
>  	 * data on failed drives.
>  	 */
> -	if (rw == READ && mddev->degraded == 0 &&
> +	if (rw == READ && mddev->degraded == 0 && conf->log == NULL &&
>  	    mddev->reshape_position == MaxSector) {
>  		bi = chunk_aligned_read(mddev, bi);
>  		if (!bi)
This should be only for R5C_STATE_WRITE_BACK.
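i.e. keep chunk_aligned_read when the cache is in writethrough mode. Since
r5c_state is private to raid5-cache.c, this probably needs a small helper; a
sketch (the helper name is made up):

/* in raid5-cache.c */
bool r5c_is_writeback(struct r5l_log *log)
{
	return log && log->r5c_state == R5C_STATE_WRITE_BACK;
}

/* in raid5_make_request() */
	if (rw == READ && mddev->degraded == 0 &&
	    !r5c_is_writeback(conf->log) &&
	    mddev->reshape_position == MaxSector) {
		bi = chunk_aligned_read(mddev, bi);
		if (!bi)
			return;
	}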

> @@ -5917,6 +6049,7 @@ static void raid5d(struct md_thread *thread)
>  			md_check_recovery(mddev);
>  			spin_lock_irq(&conf->device_lock);
>  		}
> +		r5c_do_reclaim(conf);
>  	}
>  	pr_debug("%d stripes handled\n", handled);
>  
> @@ -6583,6 +6716,11 @@ static struct r5conf *setup_conf(struct mddev *mddev)
>  	for (i = 0; i < NR_STRIPE_HASH_LOCKS; i++)
>  		INIT_LIST_HEAD(conf->temp_inactive_list + i);
>  
> +	atomic_set(&conf->r5c_cached_full_stripes, 0);
> +	INIT_LIST_HEAD(&conf->r5c_full_stripe_list);
> +	atomic_set(&conf->r5c_cached_partial_stripes, 0);
> +	INIT_LIST_HEAD(&conf->r5c_partial_stripe_list);
> +
>  	conf->level = mddev->new_level;
>  	conf->chunk_sectors = mddev->new_chunk_sectors;
>  	if (raid5_alloc_percpu(conf) != 0)
> @@ -7662,8 +7800,11 @@ static void raid5_quiesce(struct mddev *mddev, int state)
>  		/* '2' tells resync/reshape to pause so that all
>  		 * active stripes can drain
>  		 */
> +		r5c_flush_cache(conf, 0);
>  		conf->quiesce = 2;
>  		wait_event_cmd(conf->wait_for_quiescent,
> +				    atomic_read(&conf->r5c_cached_partial_stripes) == 0 &&
> +				    atomic_read(&conf->r5c_cached_full_stripes) == 0 &&
I don't see a wakeup for this after the check condition is met.
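e.g. wherever the two counters drop to zero (r5c_freeze_stripe_for_reclaim in
this version), something like this is needed (sketch):

	if (atomic_read(&conf->r5c_cached_partial_stripes) == 0 &&
	    atomic_read(&conf->r5c_cached_full_stripes) == 0)
		wake_up(&conf->wait_for_quiescent);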

Thanks,
Shaohua

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 2/6] r5cache: sysfs entry r5c_state
  2016-09-26 23:30 ` [PATCH v2 2/6] r5cache: sysfs entry r5c_state Song Liu
@ 2016-09-27 22:58   ` Shaohua Li
  0 siblings, 0 replies; 14+ messages in thread
From: Shaohua Li @ 2016-09-27 22:58 UTC (permalink / raw)
  To: Song Liu
  Cc: linux-raid, neilb, shli, kernel-team, dan.j.williams, hch,
	liuzhengyuang521, liuzhengyuan

On Mon, Sep 26, 2016 at 04:30:46PM -0700, Song Liu wrote:
> r5c_state has 4 states:
> * no-cache;
> * write-through (write journal only);
> * write-back (w/ write cache);
> * cache-broken (journal missing or Faulty)
> 
> When there is functional write cache, r5c_state is a knob to
> switch between write-back and write-through.
> 
> When the journal device is broken, the raid array is forced
> into readonly mode. In this case, r5c_state can be used to
> remove the "journal feature", and thus make the array read-write
> without a journal. By writing into r5c_cache_mode, the array
> can transition from cache-broken to no-cache, which removes
> the journal feature for the array.

How will the user use the new interface? I suppose:
- raid array is forced into readonly mode (by the kernel)
- user uses the new interface to remove the journal
- user forces the array read-write
- kernel updates the superblock and the array can run read/write

Please describe this.
> 
> Signed-off-by: Song Liu <songliubraving@fb.com>
> ---
>  drivers/md/raid5-cache.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++
>  drivers/md/raid5.c       |  1 +
>  drivers/md/raid5.h       |  2 ++
>  3 files changed, 60 insertions(+)
> 
> diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
> index 6b28461..0a0b16a 100644
> --- a/drivers/md/raid5-cache.c
> +++ b/drivers/md/raid5-cache.c
> @@ -54,6 +54,9 @@ enum r5c_state {
>  	R5C_STATE_CACHE_BROKEN = 3,
>  };
>  
> +static char *r5c_state_str[] = {"no-cache", "write-through",
> +				"write-back", "cache-broken"};
> +
>  struct r5l_log {
>  	struct md_rdev *rdev;
>  
> @@ -1242,6 +1245,60 @@ int r5c_flush_cache(struct r5conf *conf, int num)
>  	return count;
>  }
>  
> +ssize_t r5c_state_show(struct mddev *mddev, char *page)
> +{
> +	struct r5conf *conf = mddev->private;
> +	int val = 0;
> +	int ret = 0;
> +
> +	if (conf->log)
> +		val = conf->log->r5c_state;
> +	else if (test_bit(MD_HAS_JOURNAL, &mddev->flags))
> +		val = R5C_STATE_CACHE_BROKEN;
> +	ret += snprintf(page, PAGE_SIZE - ret, "%d: %s\n",
> +			val, r5c_state_str[val]);
> +	return ret;
> +}
> +
> +ssize_t r5c_state_store(struct mddev *mddev, const char *page, size_t len)
> +{
> +	struct r5conf *conf = mddev->private;
> +	struct r5l_log *log = conf->log;
> +	int val;
> +
> +	if (kstrtoint(page, 10, &val))
> +		return -EINVAL;
> +	if (!log && val != R5C_STATE_NO_CACHE)
> +		return -EINVAL;
> +
> +	if (val < R5C_STATE_NO_CACHE || val > R5C_STATE_WRITE_BACK)
> +		return -EINVAL;
> +	if (val == R5C_STATE_NO_CACHE) {
> +		if (conf->log &&
> +		    !test_bit(Faulty, &log->rdev->flags)) {
> +			pr_err("md/raid:%s: journal device is in use, cannot remove it\n",
> +			       mdname(mddev));
> +			return -EINVAL;
> +		}
> +	}
> +
> +	spin_lock_irq(&conf->device_lock);
> +	if (log)
> +		conf->log->r5c_state = val;
> +	if (val == R5C_STATE_NO_CACHE) {
> +		clear_bit(MD_HAS_JOURNAL, &mddev->flags);
> +		set_bit(MD_UPDATE_SB_FLAGS, &mddev->flags);
> +	}
> +	spin_unlock_irq(&conf->device_lock);
> +	pr_info("md/raid:%s: setting r5c cache mode to %d: %s\n",
> +		mdname(mddev), val, r5c_state_str[val]);
> +	return len;
> +}

Is this safe? E.g., switching from writethrough to writeback or vice versa at
runtime with IO running? I think you'd better call mddev_suspend/mddev_resume.
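i.e. something along these lines (sketch):

	mddev_suspend(mddev);
	spin_lock_irq(&conf->device_lock);
	if (log)
		conf->log->r5c_state = val;
	if (val == R5C_STATE_NO_CACHE) {
		clear_bit(MD_HAS_JOURNAL, &mddev->flags);
		set_bit(MD_UPDATE_SB_FLAGS, &mddev->flags);
	}
	spin_unlock_irq(&conf->device_lock);
	mddev_resume(mddev);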

Thanks,
Shaohua

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 3/6] r5cache: reclaim support
  2016-09-26 23:30 ` [PATCH v2 3/6] r5cache: reclaim support Song Liu
@ 2016-09-28  0:34   ` Shaohua Li
  2016-10-04 21:59     ` Song Liu
  0 siblings, 1 reply; 14+ messages in thread
From: Shaohua Li @ 2016-09-28  0:34 UTC (permalink / raw)
  To: Song Liu
  Cc: linux-raid, neilb, shli, kernel-team, dan.j.williams, hch,
	liuzhengyuang521, liuzhengyuan

On Mon, Sep 26, 2016 at 04:30:47PM -0700, Song Liu wrote:
> There are two limited resources, stripe cache and journal disk space.
> For better performance, we prioritize reclaim of full-stripe writes.
> To free up more journal space, we free earliest data on the journal.
> 
> In current implementation, reclaim happens when:
> 1. every R5C_RECLAIM_WAKEUP_INTERVAL (5 seconds)
> 2. when there are R5C_FULL_STRIPE_FLUSH_BATCH (8) cached full stripes
>    (r5c_check_cached_full_stripe)
> 3. when raid5_get_active_stripe sees pressure in stripe cache space
>    (r5c_check_stripe_cache_usage)
> 4. when there is pressure in journal space.
> 
> 1-3 above are straightforward.
> 
> For 4, we added 2 flags to r5conf->cache_state: R5C_LOG_TIGHT and
> R5C_LOG_CRITICAL. R5C_LOG_TIGHT is set when 2x max_free_space of
> journal space is in-use; while R5C_LOG_CRITICAL is set when 3x
> max_free_space of journal space is in-use. Where max_free_space
> = min(1/4 journal space, 10GB).
> 
> r5c_cache keeps all data in cache (not fully committed to RAID) in
> a list (stripe_in_cache). These stripes are in the order of their
> first appearance on the journal. So the log tail (last_checkpoint)
> should point to the journal_start of the first item in the list.
> 
> When R5C_LOG_TIGHT is set, r5l_reclaim_thread starts freezing
> stripes at the head of stripe_in_cache. When R5C_LOG_CRITICAL is
> set, the state machine only processes stripes at the head of
> stripe_in_cache (other stripes are added to no_space_stripes in
> r5c_cache_data and r5l_write_stripe).
> 
> Signed-off-by: Song Liu <songliubraving@fb.com>
> ---
>  drivers/md/raid5-cache.c | 313 +++++++++++++++++++++++++++++++++++++++--------
>  drivers/md/raid5.c       |  31 +++--
>  drivers/md/raid5.h       |  37 ++++--
>  3 files changed, 313 insertions(+), 68 deletions(-)
> 
> diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
> index 0a0b16a..75b70d8 100644
> --- a/drivers/md/raid5-cache.c
> +++ b/drivers/md/raid5-cache.c
> @@ -28,8 +28,7 @@
>  #define BLOCK_SECTORS (8)
>  
>  /*
> - * reclaim runs every 1/4 disk size or 10G reclaimable space. This can prevent
> - * recovery scans a very long log
> + * log->max_free_space is min(1/4 disk size, 10G reclaimable space)
>   */
>  #define RECLAIM_MAX_FREE_SPACE (10 * 1024 * 1024 * 2) /* sector */
>  #define RECLAIM_MAX_FREE_SPACE_SHIFT (2)
> @@ -116,6 +115,9 @@ struct r5l_log {
>  
>  	/* for r5c_cache */
>  	enum r5c_state r5c_state;
> +	struct list_head stripe_in_cache; /* all stripes in the cache, with
> +					   * sh->log_start in order */
> +	spinlock_t stripe_in_cache_lock;  /* lock for stripe_in_cache */
>  };
>  
>  /*
> @@ -180,6 +182,16 @@ static bool r5l_has_free_space(struct r5l_log *log, sector_t size)
>  	return log->device_size > used_size + size;
>  }
>  
> +static sector_t r5l_used_space(struct r5l_log *log)
> +{
> +	sector_t ret;
> +
> +	WARN_ON(!mutex_is_locked(&log->io_mutex));
> +	ret = r5l_ring_distance(log, log->last_checkpoint,
> +				log->log_start);
> +	return ret;
> +}
> +
>  static void __r5l_set_io_unit_state(struct r5l_io_unit *io,
>  				    enum r5l_io_unit_state state)
>  {
> @@ -188,6 +200,56 @@ static void __r5l_set_io_unit_state(struct r5l_io_unit *io,
>  	io->state = state;
>  }
>  
> +static inline int r5c_total_cached_stripes(struct r5conf *conf)
> +{
> +	return atomic_read(&conf->r5c_cached_partial_stripes) +
> +		atomic_read(&conf->r5c_cached_full_stripes);
> +}
> +
> +/*
> + * check whether we should flush some stripes to free up stripe cache
> + */
> +void r5c_check_stripe_cache_usage(struct r5conf *conf)
> +{
> +	if (!conf->log)
> +		return;
> +	spin_lock(&conf->device_lock);
> +	if (r5c_total_cached_stripes(conf) > conf->max_nr_stripes * 3 / 4 ||
> +	    atomic_read(&conf->empty_inactive_list_nr) > 0)
> +		r5c_flush_cache(conf, R5C_RECLAIM_STRIPE_GROUP);

I still worry about the max_nr_stripes usage. It can be changed at runtime. If
there are not enough stripes, should we just allocate more stripes or reclaim
the stripe cache? If the memory system tries to shrink stripes (e.g., decrease
max_nr_stripes), will it cause a deadlock for r5cache?

> +	else if (r5c_total_cached_stripes(conf) >
> +		 conf->max_nr_stripes * 1 / 2)
> +		r5c_flush_cache(conf, 1);

This one is a defensive reclaim. It should always reclaim stripes with full
data. If there are not enough such stripes, do nothing. Flushing 1 stripe would
always be wrong unless we are in a critical stripe-space shortage; since
reclaim involves a disk cache flush and is slow, we should aggregate as much as
possible.
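The direction I have in mind is roughly (sketch; only kick in once a whole
batch of full stripes is available, so the writes can be aggregated):

	else if (atomic_read(&conf->r5c_cached_full_stripes) >=
		 R5C_FULL_STRIPE_FLUSH_BATCH)
		r5c_flush_cache(conf, R5C_FULL_STRIPE_FLUSH_BATCH);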

> +	spin_unlock(&conf->device_lock);
> +}
> +
> +void r5c_check_cached_full_stripe(struct r5conf *conf)
> +{
> +	if (!conf->log)
> +		return;
> +	if (atomic_read(&conf->r5c_cached_full_stripes) >=
> +	    R5C_FULL_STRIPE_FLUSH_BATCH)
> +		r5l_wake_reclaim(conf->log, 0);
> +}
> +
> +static void r5c_update_log_state(struct r5l_log *log)
> +{
> +	struct r5conf *conf = log->rdev->mddev->private;
> +	sector_t used_space = r5l_used_space(log);
> +
> +	if (used_space > 3 * log->max_free_space) {
> +		set_bit(R5C_LOG_CRITICAL, &conf->cache_state);
> +		set_bit(R5C_LOG_TIGHT, &conf->cache_state);
> +	} else if (used_space > 2 * log->max_free_space) {
> +		clear_bit(R5C_LOG_CRITICAL, &conf->cache_state);
> +		set_bit(R5C_LOG_TIGHT, &conf->cache_state);
> +	} else if (used_space < log->max_free_space) {
> +		clear_bit(R5C_LOG_TIGHT, &conf->cache_state);
> +		clear_bit(R5C_LOG_CRITICAL, &conf->cache_state);
> +	} else  /* max_free_space < used_space < 2 * max_free_space */
> +		clear_bit(R5C_LOG_CRITICAL, &conf->cache_state);
> +}
> +
>  /*
>   * Freeze the stripe, thus send the stripe into reclaim path.
>   *
> @@ -198,10 +260,9 @@ void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh)
>  {
>  	struct r5conf *conf = sh->raid_conf;
>  
> -	if (!conf->log)
> +	if (!conf->log || test_bit(STRIPE_R5C_FROZEN, &sh->state))
>  		return;
>  
> -	WARN_ON(test_bit(STRIPE_R5C_FROZEN, &sh->state));
>  	set_bit(STRIPE_R5C_FROZEN, &sh->state);

This is confusing. The WARN_ON suggests STRIPE_R5C_FROZEN isn't set for sh yet,
but the change suggests it's possible the bit is already set. Which one is
correct?

>  
>  	if (!test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
> @@ -356,8 +417,11 @@ static struct bio *r5l_bio_alloc(struct r5l_log *log)
>  
>  static void r5_reserve_log_entry(struct r5l_log *log, struct r5l_io_unit *io)
>  {
> +	WARN_ON(!mutex_is_locked(&log->io_mutex));
> +	WARN_ON(!r5l_has_free_space(log, BLOCK_SECTORS));
>  	log->log_start = r5l_ring_add(log, log->log_start, BLOCK_SECTORS);
>  
> +	r5c_update_log_state(log);
>  	/*
>  	 * If we filled up the log device start from the beginning again,
>  	 * which will require a new bio.
> @@ -475,6 +539,7 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
>  	int meta_size;
>  	int ret;
>  	struct r5l_io_unit *io;
> +	unsigned long flags;
>  
>  	meta_size =
>  		((sizeof(struct r5l_payload_data_parity) + sizeof(__le32))
> @@ -518,6 +583,14 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
>  	atomic_inc(&io->pending_stripe);
>  	sh->log_io = io;
>  
> +	if (sh->log_start == MaxSector) {
> +		BUG_ON(!list_empty(&sh->r5c));
> +		sh->log_start = io->log_start;
> +		spin_lock_irqsave(&log->stripe_in_cache_lock, flags);
> +		list_add_tail(&sh->r5c,
> +			      &log->stripe_in_cache);
> +		spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
> +	}
what if it's in writethrough mode?
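If the list is only meaningful for writeback, I'd expect a guard here, e.g.
(sketch):

	if (log->r5c_state == R5C_STATE_WRITE_BACK &&
	    sh->log_start == MaxSector) {
		BUG_ON(!list_empty(&sh->r5c));
		sh->log_start = io->log_start;
		spin_lock_irqsave(&log->stripe_in_cache_lock, flags);
		list_add_tail(&sh->r5c, &log->stripe_in_cache);
		spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
	}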

>  	return 0;
>  }
>  
> @@ -527,6 +600,7 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
>   */
>  int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
>  {
> +	struct r5conf *conf = sh->raid_conf;
>  	int write_disks = 0;
>  	int data_pages, parity_pages;
>  	int meta_size;
> @@ -590,19 +664,31 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
>  	mutex_lock(&log->io_mutex);
>  	/* meta + data */
>  	reserve = (1 + write_disks) << (PAGE_SHIFT - 9);
> -	if (!r5l_has_free_space(log, reserve)) {
> -		spin_lock(&log->no_space_stripes_lock);
> -		list_add_tail(&sh->log_list, &log->no_space_stripes);
> -		spin_unlock(&log->no_space_stripes_lock);
>  
> -		r5l_wake_reclaim(log, reserve);
> -	} else {
> -		ret = r5l_log_stripe(log, sh, data_pages, parity_pages);
> -		if (ret) {
> -			spin_lock_irq(&log->io_list_lock);
> -			list_add_tail(&sh->log_list, &log->no_mem_stripes);
> -			spin_unlock_irq(&log->io_list_lock);
> +	if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state)) {
> +		sector_t last_checkpoint;
> +
> +		spin_lock(&log->stripe_in_cache_lock);
> +		last_checkpoint = (list_first_entry(&log->stripe_in_cache,
> +						    struct stripe_head, r5c))->log_start;
> +		spin_unlock(&log->stripe_in_cache_lock);
> +		if (sh->log_start != last_checkpoint) {
> +			spin_lock(&log->no_space_stripes_lock);
> +			list_add_tail(&sh->log_list, &log->no_space_stripes);
> +			spin_unlock(&log->no_space_stripes_lock);
> +			mutex_unlock(&log->io_mutex);
> +			return -ENOSPC;

So if a stripe is in the cache, we try to reclaim it. We should have some
mechanism to guarantee there is enough space for reclaim (e.g. for parity).
Otherwise there could be a deadlock, because the space allocation in the
reclaim path is what frees space. Could you please explain how this is done?

> +		} else 	if (!r5l_has_free_space(log, reserve)) {
> +			WARN(1, "%s: run out of journal space\n", __func__);
> +			BUG();
that's scary; why does it happen?

>  		}
> +		pr_debug("%s: write sh %lu to free log space\n", __func__, sh->sector);
> +	}
> +	ret = r5l_log_stripe(log, sh, data_pages, parity_pages);
> +	if (ret) {
> +		spin_lock_irq(&log->io_list_lock);
> +		list_add_tail(&sh->log_list, &log->no_mem_stripes);
> +		spin_unlock_irq(&log->io_list_lock);
>  	}
>  
>  	mutex_unlock(&log->io_mutex);
> @@ -639,12 +725,17 @@ int r5l_handle_flush_request(struct r5l_log *log, struct bio *bio)
>  /* This will run after log space is reclaimed */
>  static void r5l_run_no_space_stripes(struct r5l_log *log)
>  {
> -	struct stripe_head *sh;
> +	struct r5conf *conf = log->rdev->mddev->private;
> +	struct stripe_head *sh, *next;
> +	sector_t last_checkpoint;
>  
>  	spin_lock(&log->no_space_stripes_lock);
> -	while (!list_empty(&log->no_space_stripes)) {
> -		sh = list_first_entry(&log->no_space_stripes,
> -				      struct stripe_head, log_list);
> +	last_checkpoint = (list_first_entry(&log->stripe_in_cache,
> +					    struct stripe_head, r5c))->log_start;
> +	list_for_each_entry_safe(sh, next, &log->no_space_stripes, log_list) {
> +		if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state) &&
> +		    sh->log_start != last_checkpoint)
> +			continue;
what's this check for?

>  		list_del_init(&sh->log_list);
>  		set_bit(STRIPE_HANDLE, &sh->state);
>  		raid5_release_stripe(sh);
> @@ -652,10 +743,32 @@ static void r5l_run_no_space_stripes(struct r5l_log *log)
>  	spin_unlock(&log->no_space_stripes_lock);
>  }
>  
> +static sector_t r5c_calculate_last_cp(struct r5conf *conf)
> +{
> +	struct stripe_head *sh;
> +	struct r5l_log *log = conf->log;
> +	sector_t end = MaxSector;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&log->stripe_in_cache_lock, flags);
> +	if (list_empty(&conf->log->stripe_in_cache)) {
> +		/* all stripes flushed */
> +		spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
> +		return log->next_checkpoint;
> +	}
> +	sh = list_first_entry(&conf->log->stripe_in_cache,
> +			      struct stripe_head, r5c);
> +	end = sh->log_start;
> +	spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
> +	return end;
> +}
> +
>  static sector_t r5l_reclaimable_space(struct r5l_log *log)
>  {
> +	struct r5conf *conf = log->rdev->mddev->private;
> +
>  	return r5l_ring_distance(log, log->last_checkpoint,
> -				 log->next_checkpoint);
> +				 r5c_calculate_last_cp(conf));
will this work for writethrough?

>  }
>  
>  static void r5l_run_no_mem_stripe(struct r5l_log *log)
> @@ -830,14 +943,21 @@ static void r5l_write_super_and_discard_space(struct r5l_log *log,
>  		blkdev_issue_discard(bdev, log->rdev->data_offset, end,
>  				GFP_NOIO, 0);
>  	}
> +	mutex_lock(&log->io_mutex);
> +	log->last_checkpoint = end;
> +	r5c_update_log_state(log);
> +	pr_debug("%s: set last_checkpoint = %lu\n", __func__, end);
> +
> +	log->last_cp_seq = log->next_cp_seq;
> +	mutex_unlock(&log->io_mutex);
>  }
>  
>  static void r5l_do_reclaim(struct r5l_log *log)
>  {
> +	struct r5conf *conf = log->rdev->mddev->private;
>  	sector_t reclaim_target = xchg(&log->reclaim_target, 0);
>  	sector_t reclaimable;
>  	sector_t next_checkpoint;
> -	u64 next_cp_seq;
>  
>  	spin_lock_irq(&log->io_list_lock);
>  	/*
> @@ -860,14 +980,12 @@ static void r5l_do_reclaim(struct r5l_log *log)
>  				    log->io_list_lock);
>  	}
>  
> -	next_checkpoint = log->next_checkpoint;
> -	next_cp_seq = log->next_cp_seq;
> +	next_checkpoint = r5c_calculate_last_cp(conf);
>  	spin_unlock_irq(&log->io_list_lock);
>  
>  	BUG_ON(reclaimable < 0);
>  	if (reclaimable == 0)
>  		return;
> -
>  	/*
>  	 * write_super will flush cache of each raid disk. We must write super
>  	 * here, because the log area might be reused soon and we don't want to
> @@ -877,10 +995,7 @@ static void r5l_do_reclaim(struct r5l_log *log)
>  
>  	mutex_lock(&log->io_mutex);
>  	log->last_checkpoint = next_checkpoint;
> -	log->last_cp_seq = next_cp_seq;
why not update last_cp_seq?

>  	mutex_unlock(&log->io_mutex);
> -
> -	r5l_run_no_space_stripes(log);

I don't understand why r5l_run_no_space_stripes is moved to r5c_flush_cache.
It's natural to run this after some space is reclaimed.

>  }
>  
>  static void r5l_reclaim_thread(struct md_thread *thread)
> @@ -891,7 +1006,9 @@ static void r5l_reclaim_thread(struct md_thread *thread)
>  
>  	if (!log)
>  		return;
> +	r5c_do_reclaim(conf);
>  	r5l_do_reclaim(log);
> +	md_wakeup_thread(mddev->thread);

this wakeup is a bit strange. After we reclaim some space, we rerun pending
stripes, which will wake up mddev->thread. Do we miss some wakeups in other
reclaim places?

>  }
>  
>  void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
> @@ -899,6 +1016,8 @@ void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
>  	unsigned long target;
>  	unsigned long new = (unsigned long)space; /* overflow in theory */
>  
> +	if (!log)
> +		return;
>  	do {
>  		target = log->reclaim_target;
>  		if (new < target)
> @@ -926,7 +1045,7 @@ void r5l_quiesce(struct r5l_log *log, int state)
>  		/* make sure r5l_write_super_and_discard_space exits */
>  		mddev = log->rdev->mddev;
>  		wake_up(&mddev->sb_wait);
> -		r5l_wake_reclaim(log, -1L);
> +		r5l_wake_reclaim(log, MaxSector);
>  		md_unregister_thread(&log->reclaim_thread);
>  		r5l_do_reclaim(log);
>  	}
> @@ -1207,14 +1326,39 @@ static void r5l_write_super(struct r5l_log *log, sector_t cp)
>  	set_bit(MD_CHANGE_DEVS, &mddev->flags);
>  }
>  
> -static void r5c_flush_stripe(struct r5conf *conf, struct stripe_head *sh)
> +/*
> + * r5c_flush_cache will move stripe from cached list to handle_list or
> + * r5c_priority_list

What's the r5c_priority_list for? If you want to make sure reclaim makes
progress, I think this is the wrong way. If there is no space, handling other
normal stripes just means moving them to the no_space list and doing nothing
else. Then the reclaim stripes will get their turn to run. No extra list is
required.

> + *
> + * return 1 if the stripe is moved, and 0 if the stripe is not moved
> + * must hold conf->device_lock
> + */
> +static int r5c_flush_stripe(struct r5conf *conf, struct stripe_head *sh,
> +			    bool priority)
>  {
> -	list_del_init(&sh->lru);
> +	BUG_ON(list_empty(&sh->lru));
> +
> +	BUG_ON(test_bit(STRIPE_R5C_PRIORITY, &sh->state) &&
> +	       !test_bit(STRIPE_HANDLE, &sh->state));
> +
> +	if (test_bit(STRIPE_R5C_PRIORITY, &sh->state))
> +		return 0;
> +	if (test_bit(STRIPE_HANDLE, &sh->state) && !priority)
> +		return 0;
> +
>  	r5c_freeze_stripe_for_reclaim(sh);
> -	atomic_inc(&conf->active_stripes);
> +	if (!test_and_set_bit(STRIPE_HANDLE, &sh->state)) {
> +		atomic_inc(&conf->active_stripes);
> +	}

shouldn't the stripe always be accounted as active before it's reclaimed? Do we
decrease the count before the stripe is reclaimed? Sounds like a bug.

> +	clear_bit(STRIPE_DELAYED, &sh->state);
> +	clear_bit(STRIPE_BIT_DELAY, &sh->state);
> +	if (priority)
> +		set_bit(STRIPE_R5C_PRIORITY, &sh->state);
> +
> +	list_del_init(&sh->lru);
>  	atomic_inc(&sh->count);
> -	set_bit(STRIPE_HANDLE, &sh->state);
>  	raid5_release_stripe(sh);
> +	return 1;
>  }
>  
>  /* if num <= 0, flush all stripes
> @@ -1228,20 +1372,28 @@ int r5c_flush_cache(struct r5conf *conf, int num)
>  	assert_spin_locked(&conf->device_lock);
>  	if (!conf->log)
>  		return 0;
> +
>  	list_for_each_entry_safe(sh, next, &conf->r5c_full_stripe_list, lru) {
> -		r5c_flush_stripe(conf, sh);
> -		count++;
> +		count += r5c_flush_stripe(conf, sh, false);
>  		if (num > 0 && count >= num && count >=
>  		    R5C_FULL_STRIPE_FLUSH_BATCH)
>  			return count;
>  	}
>  
>  	list_for_each_entry_safe(sh, next, &conf->r5c_partial_stripe_list, lru) {
> -		r5c_flush_stripe(conf, sh);
> -		count++;
> +		count += r5c_flush_stripe(conf, sh, false);
>  		if (num > 0 && count == num)
>  			return count;
>  	}
> +
> +	if (num <= 0) {
> +		list_for_each_entry_safe(sh, next, &conf->delayed_list, lru) {
> +			if (test_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state) ||
> +			    test_bit(STRIPE_R5C_FULL_STRIPE, &sh->state))
> +				r5c_flush_stripe(conf, sh, false);
> +		}
> +		r5l_run_no_space_stripes(conf->log);
> +	}
>  	return count;
>  }
>  
> @@ -1349,6 +1501,7 @@ void r5c_handle_stripe_written(struct r5conf *conf,
>  			       struct stripe_head *sh) {
>  	int i;
>  	int do_wakeup = 0;
> +	unsigned long flags;
>  
>  	if (test_and_clear_bit(STRIPE_R5C_WRITTEN, &sh->state)) {
>  		WARN_ON(!test_bit(STRIPE_R5C_FROZEN, &sh->state));
> @@ -1361,6 +1514,11 @@ void r5c_handle_stripe_written(struct r5conf *conf,
>  			if (test_and_clear_bit(R5_Overlap, &sh->dev[i].flags))
>  				do_wakeup = 1;
>  		}
> +		spin_lock_irqsave(&conf->log->stripe_in_cache_lock, flags);
> +		list_del_init(&sh->r5c);
> +		spin_unlock_irqrestore(&conf->log->stripe_in_cache_lock, flags);
> +		sh->log_start = MaxSector;
> +		clear_bit(STRIPE_R5C_PRIORITY, &sh->state);
>  	}
>  
>  	if (do_wakeup)
> @@ -1371,6 +1529,7 @@ int
>  r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
>  	       struct stripe_head_state *s)
>  {
> +	struct r5conf *conf = sh->raid_conf;
>  	int pages;
>  	int meta_size;
>  	int reserve;
> @@ -1413,19 +1572,33 @@ r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
>  	mutex_lock(&log->io_mutex);
>  	/* meta + data */
>  	reserve = (1 + pages) << (PAGE_SHIFT - 9);
> -	if (!r5l_has_free_space(log, reserve)) {
> -		spin_lock(&log->no_space_stripes_lock);
> -		list_add_tail(&sh->log_list, &log->no_space_stripes);
> -		spin_unlock(&log->no_space_stripes_lock);
>  
> -		r5l_wake_reclaim(log, reserve);
> -	} else {
> -		ret = r5l_log_stripe(log, sh, pages, 0);
> -		if (ret) {
> -			spin_lock_irq(&log->io_list_lock);
> -			list_add_tail(&sh->log_list, &log->no_mem_stripes);
> -			spin_unlock_irq(&log->io_list_lock);
> +	if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state)) {
> +		sector_t last_checkpoint;
> +
> +		spin_lock(&log->stripe_in_cache_lock);
> +		last_checkpoint = (list_first_entry(&log->stripe_in_cache,
> +						    struct stripe_head, r5c))->log_start;
> +		spin_unlock(&log->stripe_in_cache_lock);
> +		if (sh->log_start != last_checkpoint) {
> +			spin_lock(&log->no_space_stripes_lock);
> +			list_add_tail(&sh->log_list, &log->no_space_stripes);
> +			spin_unlock(&log->no_space_stripes_lock);
> +
> +			mutex_unlock(&log->io_mutex);
> +			return -ENOSPC;
>  		}
> +		pr_debug("%s: write sh %lu to free log space\n", __func__, sh->sector);
> +	}
> +	if (!r5l_has_free_space(log, reserve)) {
> +		pr_err("%s: cannot reserve space %d\n", __func__, reserve);
> +		BUG();

same here. We should put the stripe into the no_space list. If we can't
allocate space eventually, it indicates reclaim has a bug.
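i.e. roughly the old fallback instead of the BUG() (sketch; the return value
depends on how the caller should treat a parked stripe):

	if (!r5l_has_free_space(log, reserve)) {
		spin_lock(&log->no_space_stripes_lock);
		list_add_tail(&sh->log_list, &log->no_space_stripes);
		spin_unlock(&log->no_space_stripes_lock);

		r5l_wake_reclaim(log, reserve);
		mutex_unlock(&log->io_mutex);
		return 0;	/* stripe is parked; retried after reclaim */
	}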

> +	}
> +	ret = r5l_log_stripe(log, sh, pages, 0);
> +	if (ret) {
> +		spin_lock_irq(&log->io_list_lock);
> +		list_add_tail(&sh->log_list, &log->no_mem_stripes);
> +		spin_unlock_irq(&log->io_list_lock);
>  	}
>  
>  	mutex_unlock(&log->io_mutex);
> @@ -1435,12 +1608,45 @@ r5c_cache_data(struct r5l_log *log, struct stripe_head *sh,
>  void r5c_do_reclaim(struct r5conf *conf)
>  {
>  	struct r5l_log *log = conf->log;
> -
> -	assert_spin_locked(&conf->device_lock);
> +	struct stripe_head *sh, *next;
> +	int count = 0;
> +	unsigned long flags;
> +	sector_t last_checkpoint;
>  
>  	if (!log)
>  		return;
> -	r5c_flush_cache(conf, 0);
> +
> +	if (!test_bit(R5C_LOG_CRITICAL, &conf->cache_state)) {
> +		/* flush all full stripes */
> +		spin_lock_irqsave(&conf->device_lock, flags);
> +		list_for_each_entry_safe(sh, next, &conf->r5c_full_stripe_list, lru)
> +			r5c_flush_stripe(conf, sh, false);
> +		spin_unlock_irqrestore(&conf->device_lock, flags);
> +	}
> +
> +	if (test_bit(R5C_LOG_TIGHT, &conf->cache_state)) {
> +		spin_lock_irqsave(&log->stripe_in_cache_lock, flags);
> +		spin_lock(&conf->device_lock);
> +		last_checkpoint = (list_first_entry(&log->stripe_in_cache,
> +						    struct stripe_head, r5c))->log_start;
> +		list_for_each_entry(sh, &log->stripe_in_cache, r5c) {
> +			if (sh->log_start == last_checkpoint) {
> +				if (!list_empty(&sh->lru))
> +					r5c_flush_stripe(conf, sh, true);
> +			} else
> +				break;
> +		}
> +		spin_unlock(&conf->device_lock);
> +		spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
> +		pr_debug("%s: flushed %d stripes for log space\n", __func__, count);
> +	} else if (test_bit(R5_INACTIVE_BLOCKED, &conf->cache_state)) {
> +		spin_lock_irqsave(&conf->device_lock, flags);
> +		r5c_flush_cache(conf, R5C_RECLAIM_STRIPE_GROUP);
> +		spin_unlock_irqrestore(&conf->device_lock, flags);
> +	}
> +	wake_up(&conf->wait_for_stripe);
> +	md_wakeup_thread(conf->mddev->thread);
> +	r5l_run_no_space_stripes(log);
>  }
>  
>  static int r5l_load_log(struct r5l_log *log)
> @@ -1500,6 +1706,9 @@ create:
>  	if (log->max_free_space > RECLAIM_MAX_FREE_SPACE)
>  		log->max_free_space = RECLAIM_MAX_FREE_SPACE;
>  	log->last_checkpoint = cp;
> +	mutex_lock(&log->io_mutex);
> +	r5c_update_log_state(log);
> +	mutex_unlock(&log->io_mutex);
>  
>  	__free_page(page);
>  
> @@ -1555,6 +1764,8 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
>  						 log->rdev->mddev, "reclaim");
>  	if (!log->reclaim_thread)
>  		goto reclaim_thread;
> +	log->reclaim_thread->timeout = R5C_RECLAIM_WAKEUP_INTERVAL;
> +
>  	init_waitqueue_head(&log->iounit_wait);
>  
>  	INIT_LIST_HEAD(&log->no_mem_stripes);
> @@ -1564,6 +1775,8 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
>  
>  	/* flush full stripe */
>  	log->r5c_state = R5C_STATE_WRITE_BACK;
> +	INIT_LIST_HEAD(&log->stripe_in_cache);
> +	spin_lock_init(&log->stripe_in_cache_lock);
>  
>  	if (r5l_load_log(log))
>  		goto error;
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index cc4ac1d..a3d26ec 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -301,7 +301,9 @@ static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh,
>  		else {
>  			clear_bit(STRIPE_DELAYED, &sh->state);
>  			clear_bit(STRIPE_BIT_DELAY, &sh->state);
> -			if (conf->worker_cnt_per_group == 0) {
> +			if (test_bit(STRIPE_R5C_PRIORITY, &sh->state))
> +				list_add_tail(&sh->lru, &conf->r5c_priority_list);
> +			else if (conf->worker_cnt_per_group == 0) {
>  				list_add_tail(&sh->lru, &conf->handle_list);
>  			} else {
>  				raid5_wakeup_stripe_thread(sh);
> @@ -327,6 +329,7 @@ static void do_release_stripe(struct r5conf *conf, struct stripe_head *sh,
>  				if (test_and_clear_bit(STRIPE_R5C_PARTIAL_STRIPE, &sh->state))
>  					atomic_dec(&conf->r5c_cached_partial_stripes);
>  				list_add_tail(&sh->lru, &conf->r5c_full_stripe_list);
> +				r5c_check_cached_full_stripe(conf);
>  			} else {
>  				/* not full stripe */
>  				if (!test_and_set_bit(STRIPE_R5C_PARTIAL_STRIPE,
> @@ -697,9 +700,14 @@ raid5_get_active_stripe(struct r5conf *conf, sector_t sector,
>  			}
>  			if (noblock && sh == NULL)
>  				break;
> +
> +			r5c_check_stripe_cache_usage(conf);
>  			if (!sh) {
> +				unsigned long before_jiffies;
>  				set_bit(R5_INACTIVE_BLOCKED,
>  					&conf->cache_state);
> +				r5l_wake_reclaim(conf->log, 0);
> +				before_jiffies = jiffies;
>  				wait_event_lock_irq(
>  					conf->wait_for_stripe,
>  					!list_empty(conf->inactive_list + hash) &&
> @@ -708,6 +716,9 @@ raid5_get_active_stripe(struct r5conf *conf, sector_t sector,
>  					 || !test_bit(R5_INACTIVE_BLOCKED,
>  						      &conf->cache_state)),
>  					*(conf->hash_locks + hash));
> +				before_jiffies = jiffies - before_jiffies;
> +				if (before_jiffies > 20)
> +					pr_debug("%s: wait for sh takes %lu jiffies\n", __func__, before_jiffies);
please remove the debug code.

Thanks,
Shaohua

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 4/6] r5cache: r5c recovery
  2016-09-26 23:30 ` [PATCH v2 4/6] r5cache: r5c recovery Song Liu
@ 2016-09-28  1:08   ` Shaohua Li
  0 siblings, 0 replies; 14+ messages in thread
From: Shaohua Li @ 2016-09-28  1:08 UTC (permalink / raw)
  To: Song Liu
  Cc: linux-raid, neilb, shli, kernel-team, dan.j.williams, hch,
	liuzhengyuang521, liuzhengyuan

On Mon, Sep 26, 2016 at 04:30:48PM -0700, Song Liu wrote:
> For Data-Only stripes, we need to complete the parity calculation and
> finish the full reconstruct write or RMW write. For simplicity, in
> the recovery, we load the stripe to stripe cache. Once the array is
> started, the stripe cache state machine will handle these stripes
> through normal write path.

please make sure not to change the behavior of writethrough mode. In
writethrough, we discard data-only stripes.

Is it safe to run the state machine in the recovery stage? For example, the md
personality's ->run is called before the bitmap is initialized.

> r5c_recovery_flush_log contains the main procedure of recovery. The
> recovery code first scans through the journal and loads data to
> stripe cache. The code keeps track of all these stripes in a list
> (using sh->lru and ctx->cached_list); stripes in the list are
> organized in the order of their first appearance on the journal.
> During the scan, the recovery code assesses each stripe as
> Data-Parity or Data-Only.
> 
> During scan, the array may run out of stripe cache. In these cases,
> the recovery code tries to release some stripe head by replaying
> existing Data-Parity stripes. Once these replays are done, these
> stripes can be released. When releasing Data-Parity stripes is not
> enough, the recovery code will also call raid5_set_cache_size to
> increase stripe cache size.
> 
> At the end of scan, the recovery code replays all Data-Parity
> stripes, and sets proper states for Data-Only stripes. The recovery
> code also increases seq number by 10 and rewrites all Data-Only
> stripes to journal. This is to avoid confusion after repeated
> crashes. More details are explained in raid5-cache.c before
> r5c_recovery_rewrite_data_only_stripes().
...
> +r5c_recovery_analyze_meta_block(struct r5l_log *log,
> +				struct r5l_recovery_ctx *ctx,
> +				struct list_head *cached_stripe_list)
> +{
> +	struct mddev *mddev = log->rdev->mddev;
> +	struct r5conf *conf = mddev->private;
>  	struct r5l_meta_block *mb;
> -	int offset;
> +	struct r5l_payload_data_parity *payload;
> +	int mb_offset;
>  	sector_t log_offset;
> -	sector_t stripe_sector;
> +	sector_t stripe_sect;
> +	struct stripe_head *sh;
> +	int ret;
> +
> +	/* for mismatch in data blocks, we will drop all data in this mb, but
> +	 * we will still read next mb for other data with FLUSH flag, as
> +	 * io_unit could finish out of order.
> +	 */
please correct the format

> +	ret = r5l_recovery_verify_data_checksum_for_mb(log, ctx);
> +	if (ret == -EINVAL)
> +		return -EAGAIN;
> +	else if (ret)
> +		return ret;
>  
>  	mb = page_address(ctx->meta_page);
> -	offset = sizeof(struct r5l_meta_block);
> +	mb_offset = sizeof(struct r5l_meta_block);
>  	log_offset = r5l_ring_add(log, ctx->pos, BLOCK_SECTORS);
>  
> -	while (offset < le32_to_cpu(mb->meta_size)) {
> +	while (mb_offset < le32_to_cpu(mb->meta_size)) {
>  		int dd;
>  
> -		payload = (void *)mb + offset;
> -		stripe_sector = raid5_compute_sector(conf,
> -						     le64_to_cpu(payload->location), 0, &dd, NULL);
> -		if (r5l_recovery_flush_one_stripe(log, ctx, stripe_sector,
> -						  &offset, &log_offset))
> +		payload = (void *)mb + mb_offset;
> +		stripe_sect = (payload->header.type == R5LOG_PAYLOAD_DATA) ?
> +			raid5_compute_sector(
> +				conf, le64_to_cpu(payload->location), 0, &dd,
> +				NULL)
> +			: le64_to_cpu(payload->location);
> +
> +		sh = r5c_recovery_lookup_stripe(cached_stripe_list,
> +						stripe_sect);
> +
> +		if (!sh) {
> +			sh = r5c_recovery_alloc_stripe(conf, cached_stripe_list,
> +						       stripe_sect, ctx->pos);
> +			/* cannot get stripe from raid5_get_active_stripe
> +			 * try replay some stripes
> +			 */
ditto

Thanks,
Shaohua

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 5/6] r5cache: handle SYNC and FUA
  2016-09-26 23:30 ` [PATCH v2 5/6] r5cache: handle SYNC and FUA Song Liu
@ 2016-09-28  1:32   ` Shaohua Li
  0 siblings, 0 replies; 14+ messages in thread
From: Shaohua Li @ 2016-09-28  1:32 UTC (permalink / raw)
  To: Song Liu
  Cc: linux-raid, neilb, shli, kernel-team, dan.j.williams, hch,
	liuzhengyuang521, liuzhengyuan

On Mon, Sep 26, 2016 at 04:30:49PM -0700, Song Liu wrote:
> With raid5 cache, we commit data from the journal device. When there
> is a flush request, we need to flush the journal device's cache.
> This was not needed for the raid5 journal, because we flush the
> journal before committing data to the raid disks.
> 
> This is similar to FUA, except that we also need to flush the journal for
> FUA. Otherwise, corruption in earlier metadata would stop recovery
> from reaching the FUA data.
> 
> Signed-off-by: Song Liu <songliubraving@fb.com>
> ---
> @@ -554,12 +650,22 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
>  
>  	io = log->current_io;
>  
> +	if (test_and_clear_bit(STRIPE_R5C_PREFLUSH, &sh->state))
> +		io->has_flush = 1;
> +
>  	for (i = 0; i < sh->disks; i++) {
>  		if (!test_bit(R5_Wantwrite, &sh->dev[i].flags) &&
>  		    !test_bit(R5_Wantcache, &sh->dev[i].flags))
>  			continue;
>  		if (i == sh->pd_idx || i == sh->qd_idx)
>  			continue;
> +		if (test_bit(R5_WantFUA, &sh->dev[i].flags)) {
> +			io->has_fua = 1;
> +			/* we need to flush journal to make sure recovery can
> +			 * reach the data with fua flag
> +			 */
please fix the comment format.

> +			io->has_flush = 1;
> +		}
>  		r5l_append_payload_meta(log, R5LOG_PAYLOAD_DATA,
>  					raid5_compute_blocknr(sh, i, 0),
>  					sh->dev[i].log_checksum, 0, false);
> @@ -716,10 +822,16 @@ int r5l_handle_flush_request(struct r5l_log *log, struct bio *bio)
>  	 * don't need to flush again
>  	 */
>  	if (bio->bi_iter.bi_size == 0) {
> -		bio_endio(bio);
> +		mutex_lock(&log->io_mutex);
> +		r5l_get_meta(log, 0);
> +		bio_list_add(&log->current_io->flush_barriers, bio);
> +		log->current_io->has_flush = 1;
> +		log->current_io->has_null_flush = 1;
> +		atomic_inc(&log->current_io->pending_stripe);
> +		r5l_submit_current_io(log);
> +		mutex_unlock(&log->io_mutex);
>  		return 0;
>  	}
please do not change the behavior of writethrough mode.

Thanks,
Shaohua

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 1/6] r5cache: write part of r5cache
  2016-09-27 22:51   ` Shaohua Li
@ 2016-09-29 23:06     ` Song Liu
  0 siblings, 0 replies; 14+ messages in thread
From: Song Liu @ 2016-09-29 23:06 UTC (permalink / raw)
  To: Shaohua Li
  Cc: Song Liu, linux-raid@vger.kernel.org, neilb@suse.com, Shaohua Li,
	Kernel Team, dan.j.williams@intel.com, hch@infradead.org,
	liuzhengyuang521@gmail.com, liuzhengyuan@kylinos.cn

Thanks Shaohua!

I am working on adding more information in comments and commit notes. 


>> 
>> +/*
>> + * Freeze the stripe, thus send the stripe into reclaim path.
>> + *
>> + * This function should only be called from raid5d that handling this stripe,
>> + * or when holds conf->device_lock
>> + */
> 
> Do you mean that if this is called from raid5d, the lock isn't required? Please note we
> could have several threads (like raid5d) handling stripes. Is there any problem
> here?

This requirement is not necessary anymore. I will update the comment here. 

> 
>> +void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh)
>> +{
>> +	struct r5conf *conf = sh->raid_conf;
>> +
>> +	if (!conf->log)
>> +		return;
>> +static void r5c_handle_parity_cached(struct stripe_head *sh)
>> +{
>> +	int i;
>> +
>> +	for (i = sh->disks; i--; )
>> +		if (test_bit(R5_InCache, &sh->dev[i].flags))
>> +			set_bit(R5_Wantwrite, &sh->dev[i].flags);
>> +	set_bit(STRIPE_R5C_WRITTEN, &sh->state);
>> +}
>> +
>> +static void r5c_finish_cache_stripe(struct stripe_head *sh)
>> +{
> 
> I'm hoping this one has a
> 	if (not in writeback mode)
> 		return
> so it's clear that this is only for writeback mode.
> 
>> +	if (test_bit(STRIPE_R5C_FROZEN, &sh->state))
>> +		r5c_handle_parity_cached(sh);
>> +	else
>> +		r5c_handle_data_cached(sh);
>> +}
>> +
>> 
>> 	return 0;
>> }
>> 
>> -static void r5l_wake_reclaim(struct r5l_log *log, sector_t space);
>> /*
>>  * running in raid5d, where reclaim could wait for raid5d too (when it flushes
>>  * data from log to raid disks), so we shouldn't wait for reclaim here
>> @@ -456,11 +541,17 @@ int r5l_write_stripe(struct r5l_log *log, struct stripe_head *sh)
>> 		return -EAGAIN;
>> 	}
>> 
>> +	WARN_ON(!test_bit(STRIPE_R5C_FROZEN, &sh->state));
> 
> is this set even for write through?

In the current implementation, the code still sets STRIPE_R5C_FROZEN in write-through
mode (in r5c_handle_stripe_dirtying). The reclaim path then behaves the same in
write-through mode as the original journal.

>> 
>> +
>> +	if (!log || test_bit(STRIPE_R5C_FROZEN, &sh->state))
>> +		return -EAGAIN;
>> +
>> +	if (conf->log->r5c_state == R5C_STATE_WRITE_THROUGH ||
>> +	    conf->quiesce != 0 || conf->mddev->degraded != 0) {
>> +		/* write through mode */
>> +		r5c_freeze_stripe_for_reclaim(sh);
>> +		return -EAGAIN;
> 
> will this change the behavior of R5C_STATE_WRITE_THROUGH?
> r5c_freeze_stripe_for_reclaim changes some state that is not related to writethrough
> mode.

The PREREAD_ACTIVE part of freeze is not necessary for write-through. I will fix it.
Other than that, the reclaim path works the same in write-through mode as the original
journal code. I will double check and make it clear.
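
A rough sketch of the split I have in mind (flag and counter names taken from the
existing raid5 code and the hunks above; this is not the final patch):

	void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh)
	{
		struct r5conf *conf = sh->raid_conf;

		if (!conf->log || test_and_set_bit(STRIPE_R5C_FROZEN, &sh->state))
			return;

		/* only write-back mode needs to hold off preread handling;
		 * write-through keeps the original journal behavior
		 */
		if (conf->log->r5c_state != R5C_STATE_WRITE_THROUGH &&
		    !test_and_set_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
			atomic_inc(&conf->preread_active_stripes);
	}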


> The quiesce check sounds unnecessary. The patch flushes all caches in quiesce,
> so no IO is running in the quiesce state.

Great catch. The other checks already cover all cases in quiesce. 

>> 
>> @@ -901,6 +918,13 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
>> 
>> 	might_sleep();
>> 
>> +	if (s->to_cache) {
>> +		if (r5c_cache_data(conf->log, sh, s) == 0)
>> +			return;
> 
> At this stage we fall back to no cache. But I don't see R5_Wantcache being cleared;
> is that a problem?

We did a similar check for the journal part. I think it will be easier if we add a check in
r5l_init_log() and never create the journal/cache when the array is too big.
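
Something along these lines could refuse such arrays up front (just a sketch; the
helper name and exact size calculation are assumptions, not tested):

	/* hypothetical helper for r5l_init_log(): a single meta block page
	 * must be able to describe the data and parity payloads of one full
	 * stripe, otherwise do not enable the journal/cache at all
	 */
	static bool r5l_meta_fits_in_one_page(struct r5conf *conf)
	{
		return sizeof(struct r5l_meta_block) +
		       conf->raid_disks *
		       (sizeof(struct r5l_payload_data_parity) +
			sizeof(__le32)) <= PAGE_SIZE;
	}

r5l_init_log() would then bail out with -EINVAL when this returns false.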

> 
>> @@ -1543,9 +1570,18 @@ ops_run_compute6_2(struct stripe_head *sh, struct raid5_percpu *percpu)
>> static void ops_complete_prexor(void *stripe_head_ref)
>> {
>> 	struct stripe_head *sh = stripe_head_ref;
>> +	int i;
>> 
>> 	pr_debug("%s: stripe %llu\n", __func__,
>> 		(unsigned long long)sh->sector);
>> +
>> +	for (i = sh->disks; i--; )
>> +		if (sh->dev[i].page != sh->dev[i].orig_page) {
>> +			struct page *p = sh->dev[i].page;
>> +
>> +			sh->dev[i].page = sh->dev[i].orig_page;
>> +			put_page(p);
> What if the array has no log but skip-copy is enabled?

In the case of skip copy, page and orig_page are the same during ops_complete_prexor:
    page == orig_page == old data.
    bio_page is the new data.

After prexor is done, we run biodrain, where page is set to bio_page to skip the copy.

>> 
>> @@ -4416,7 +4538,7 @@ static void handle_stripe(struct stripe_head *sh)
>> 			struct r5dev *dev = &sh->dev[i];
>> 			if (test_bit(R5_LOCKED, &dev->flags) &&
>> 				(i == sh->pd_idx || i == sh->qd_idx ||
>> -				 dev->written)) {
>> +				 dev->written || test_bit(R5_InCache, &dev->flags))) {
>> 				pr_debug("Writing block %d\n", i);
>> 				set_bit(R5_Wantwrite, &dev->flags);
>> 				if (prexor)
>> @@ -4456,6 +4578,12 @@ static void handle_stripe(struct stripe_head *sh)
>> 				 test_bit(R5_Discard, &qdev->flags))))))
>> 		handle_stripe_clean_event(conf, sh, disks, &s.return_bi);
>> 
>> +	if (s.just_cached)
>> +		r5c_handle_cached_data_endio(conf, sh, disks, &s.return_bi);
>> +
>> +	if (test_bit(STRIPE_R5C_FROZEN, &sh->state))
>> +		r5l_stripe_write_finished(sh);
> what's this for?

r5l_stripe_write_finished() removes sh->log_io. 

It is actually not necessary to check for R5C_FROZEN. I will fix that. 
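
For context, r5l_stripe_write_finished() is roughly the following (paraphrased from
the existing journal code, so treat the details as approximate):

	static void r5l_stripe_write_finished(struct stripe_head *sh)
	{
		struct r5l_io_unit *io;

		/* detach the io_unit once the stripe no longer needs it */
		io = sh->log_io;
		sh->log_io = NULL;

		if (io && atomic_dec_and_test(&io->pending_stripe))
			__r5l_stripe_write_finished(io);
	}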

> 
>> +
>> 	/* Now we might consider reading some blocks, either to check/generate
>> 	 * parity, or to satisfy requests
>> 	 * or to load a block that is being partially written.
>> @@ -4467,13 +4595,17 @@ static void handle_stripe(struct stripe_head *sh)
>> 	    || s.expanding)
>> 		handle_stripe_fill(sh, &s, disks);
>> 
>> -	/* Now to consider new write requests and what else, if anything
>> -	 * should be read.  We do not handle new writes when:
>> +	r5c_handle_stripe_written(conf, sh);
> 
> please explain why this is required?

Added the following in comments:

        /*
         * If the stripe has finished a full journal write cycle (write to
         * the journal and then to the raid disks), this is the clean-up
         * procedure so it is ready for the next operation.
         */
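
For reference, the clean-up itself would look roughly like this (a sketch using the
flag names from the hunks above; details may change in the next version):

	static void r5c_handle_stripe_written(struct r5conf *conf,
					      struct stripe_head *sh)
	{
		int i;

		/* only act once data and parity have reached the raid disks */
		if (!test_and_clear_bit(STRIPE_R5C_WRITTEN, &sh->state))
			return;

		clear_bit(STRIPE_R5C_FROZEN, &sh->state);

		/* the journal copy is no longer needed for this stripe, so
		 * drop the InCache marks and let reclaim move the log tail
		 */
		for (i = sh->disks; i--; )
			clear_bit(R5_InCache, &sh->dev[i].flags);
	}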

>> 
>> 	/* maybe we need to check and possibly fix the parity for this stripe
>> @@ -5192,7 +5324,7 @@ static void raid5_make_request(struct mddev *mddev, struct bio * bi)
>> 	 * later we might have to read it again in order to reconstruct
>> 	 * data on failed drives.
>> 	 */
>> -	if (rw == READ && mddev->degraded == 0 &&
>> +	if (rw == READ && mddev->degraded == 0 && conf->log == NULL &&
>> 	    mddev->reshape_position == MaxSector) {
>> 		bi = chunk_aligned_read(mddev, bi);
>> 		if (!bi)
> This should be only for R5C_STATE_WRITE_BACK.

Will fix this. 


>> 
>> @@ -7662,8 +7800,11 @@ static void raid5_quiesce(struct mddev *mddev, int state)
>> 		/* '2' tells resync/reshape to pause so that all
>> 		 * active stripes can drain
>> 		 */
>> +		r5c_flush_cache(conf, 0);
>> 		conf->quiesce = 2;
>> 		wait_event_cmd(conf->wait_for_quiescent,
>> +				    atomic_read(&conf->r5c_cached_partial_stripes) == 0 &&
>> +				    atomic_read(&conf->r5c_cached_full_stripes) == 0 &&
> I don't see a wakeup for this after the check condition is met.

The following code in release_inactive_stripe_list() will wake this up:
                if (atomic_read(&conf->active_stripes) == 0)
                        wake_up(&conf->wait_for_quiescent);

This is OK because cached stripes (full or partial) have to become active
first before becoming inactive.

Thanks,
Song


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v2 3/6] r5cache: reclaim support
  2016-09-28  0:34   ` Shaohua Li
@ 2016-10-04 21:59     ` Song Liu
  0 siblings, 0 replies; 14+ messages in thread
From: Song Liu @ 2016-10-04 21:59 UTC (permalink / raw)
  To: Shaohua Li
  Cc: linux-raid@vger.kernel.org, neilb@suse.com, Shaohua Li,
	Kernel Team, dan.j.williams@intel.com, hch@infradead.org,
	liuzhengyuang521@gmail.com, liuzhengyuan@kylinos.cn


> On Sep 27, 2016, at 5:34 PM, Shaohua Li <shli@kernel.org> wrote:
> 
>> 
>> +	if (!conf->log)
>> +		return;
>> +	spin_lock(&conf->device_lock);
>> +	if (r5c_total_cached_stripes(conf) > conf->max_nr_stripes * 3 / 4 ||
>> +	    atomic_read(&conf->empty_inactive_list_nr) > 0)
>> +		r5c_flush_cache(conf, R5C_RECLAIM_STRIPE_GROUP);
> 
> I still worry about the max_nr_stripes usage. It can be changed at runtime. If
> there are not enough stripes, should we just allocate more stripes or reclaim the
> stripe cache? If the memory system tries to shrink stripes (e.g., decrease
> max_nr_stripes), will it cause a deadlock for r5cache?
> 

The write cache will not deadlock due to stripe cache usage, because a cached
stripe holds its stripe_head until it is flushed to the raid disks.
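
To make that concrete: the shrink path only ever frees stripes taken off the
inactive lists, roughly (a simplified sketch, not the exact raid5.c code):

	/* shrink path (simplified): only inactive stripes can be freed */
	spin_lock_irq(conf->hash_locks + hash);
	sh = list_first_entry_or_null(conf->inactive_list + hash,
				      struct stripe_head, lru);
	if (sh)
		list_del_init(&sh->lru);
	spin_unlock_irq(conf->hash_locks + hash);
	/* stripes whose data lives only in the journal sit on the r5c cached
	 * lists instead, so they survive until r5c_flush_cache() writes them
	 * back to the raid disks
	 */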


>> +	else if (r5c_total_cached_stripes(conf) >
>> +		 conf->max_nr_stripes * 1 / 2)
>> +		r5c_flush_cache(conf, 1);
> 
> This one is a defensive reclaim. It should always reclaim stripes with full
> data. If there are not enough such stripes, do nothing. Flushing 1 stripe would
> always be wrong unless we are in a critical stripe space shortage; since reclaim
> involves a disk cache flush and is slow, we should aggregate as much as
> possible.

I will remove the defensive reclaim. 

> 
>>  *
>> @@ -198,10 +260,9 @@ void r5c_freeze_stripe_for_reclaim(struct stripe_head *sh)
>> {
>> 	struct r5conf *conf = sh->raid_conf;
>> 
>> -	if (!conf->log)
>> +	if (!conf->log || test_bit(STRIPE_R5C_FROZEN, &sh->state))
>> 		return;
>> 
>> -	WARN_ON(test_bit(STRIPE_R5C_FROZEN, &sh->state));
>> 	set_bit(STRIPE_R5C_FROZEN, &sh->state);
> 
> This is confusing. The WARN_ON suggests the STRIPE_R5C_FROZEN isn't set for sh,
> but the change suggests it's possible the bit is set. Which one is correct?
> 

Will fix this. 

>> 
>> @@ -518,6 +583,14 @@ static int r5l_log_stripe(struct r5l_log *log, struct stripe_head *sh,
>> 	atomic_inc(&io->pending_stripe);
>> 	sh->log_io = io;
>> 
>> +	if (sh->log_start == MaxSector) {
>> +		BUG_ON(!list_empty(&sh->r5c));
>> +		sh->log_start = io->log_start;
>> +		spin_lock_irqsave(&log->stripe_in_cache_lock, flags);
>> +		list_add_tail(&sh->r5c,
>> +			      &log->stripe_in_cache);
>> +		spin_unlock_irqrestore(&log->stripe_in_cache_lock, flags);
>> +	}
> what if it's in writethrough mode?

This does not impact write-through mode, but let me add an explicit check just in case.
> 
>> +		last_checkpoint = (list_first_entry(&log->stripe_in_cache,
>> +						    struct stripe_head, r5c))->log_start;
>> +		spin_unlock(&log->stripe_in_cache_lock);
>> +		if (sh->log_start != last_checkpoint) {
>> +			spin_lock(&log->no_space_stripes_lock);
>> +			list_add_tail(&sh->log_list, &log->no_space_stripes);
>> +			spin_unlock(&log->no_space_stripes_lock);
>> +			mutex_unlock(&log->io_mutex);
>> +			return -ENOSPC;
> 
> So if a stripe is in the cache, we try to reclaim it. We should have some mechanism
> to guarantee there is enough space for reclaim (e.g. for parity). Otherwise
> there could be a deadlock, because the space allocation in the reclaim path is what
> frees space. Could you please explain how this is done?
> 

I reworked this part in the newer version, which should make it clearer.

>> +		} else 	if (!r5l_has_free_space(log, reserve)) {
>> +			WARN(1, "%s: run out of journal space\n", __func__);
>> +			BUG();
> that's scary; why does it happen?
> 

I rewrote some of this code in the newer version. If something similar still happens, it
indicates a bug in reclaim.

>> 
>> +		if (test_bit(R5C_LOG_CRITICAL, &conf->cache_state) &&
>> +		    sh->log_start != last_checkpoint)
>> +			continue;
> what's this check for?

With this check, the code only flushes stripes at last_checkpoint in the journal. This is
no longer needed in the newer version; I will remove it.

> 
>> 		list_del_init(&sh->log_list);
>> +
>> static sector_t r5l_reclaimable_space(struct r5l_log *log)
>> {
>> +	struct r5conf *conf = log->rdev->mddev->private;
>> +
>> 	return r5l_ring_distance(log, log->last_checkpoint,
>> -				 log->next_checkpoint);
>> +				 r5c_calculate_last_cp(conf));
> will this work for writethrough?

r5c_calculate_last_cp() returns next_checkpoint in write-through mode.
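
Roughly (a sketch built on the stripe_in_cache list from this patch, not the exact
code):

	static sector_t r5c_calculate_last_cp(struct r5conf *conf)
	{
		struct r5l_log *log = conf->log;
		struct stripe_head *sh;
		sector_t cp;

		/* write-through: nothing is pinned in the cache, so the
		 * reclaimable window ends at next_checkpoint as before
		 */
		if (log->r5c_state == R5C_STATE_WRITE_THROUGH)
			return log->next_checkpoint;

		/* write-back: the oldest cached stripe pins the log tail */
		spin_lock(&log->stripe_in_cache_lock);
		if (list_empty(&log->stripe_in_cache)) {
			cp = log->next_checkpoint;
		} else {
			sh = list_first_entry(&log->stripe_in_cache,
					      struct stripe_head, r5c);
			cp = sh->log_start;
		}
		spin_unlock(&log->stripe_in_cache_lock);

		return cp;
	}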
> 
>> }
>> 
>> 
>> 	if (reclaimable == 0)
>> 		return;
>> -
>> 	/*
>> 	 * write_super will flush cache of each raid disk. We must write super
>> 	 * here, because the log area might be reused soon and we don't want to
>> @@ -877,10 +995,7 @@ static void r5l_do_reclaim(struct r5l_log *log)
>> 
>> 	mutex_lock(&log->io_mutex);
>> 	log->last_checkpoint = next_checkpoint;
>> -	log->last_cp_seq = next_cp_seq;
> why not update last_cp_seq?

Currently, we don't need last_cp_seq anywhere in the code. To keep track of
last_cp_seq, we would need to add this field to stripe_head. From what I can tell, this
seems unnecessary.

>> 	mutex_unlock(&log->io_mutex);
>> -
>> -	r5l_run_no_space_stripes(log);
> 
> I don't understand why r5l_run_no_space_stripes is moved to r5c_flush_cache. It's
> natural to run this after some space is reclaimed.

I will move it back. 

>> }
>> 
>> static void r5l_reclaim_thread(struct md_thread *thread)
>> @@ -891,7 +1006,9 @@ static void r5l_reclaim_thread(struct md_thread *thread)
>> 
>> 	if (!log)
>> 		return;
>> +	r5c_do_reclaim(conf);
>> 	r5l_do_reclaim(log);
>> +	md_wakeup_thread(mddev->thread);
> 
> this wakeup is a bit strange. After we reclaim some space, we will rerun
> pending stripes, which will wake up mddev->thread. Are we missing a wakeup in other
> reclaim places?

This one is not really necessary. I will remove it. 


>> }
>> 
>> void r5l_wake_reclaim(struct r5l_log *log, sector_t space)
>> 
>> }
>> 
>> -static void r5c_flush_stripe(struct r5conf *conf, struct stripe_head *sh)
>> +/*
>> + * r5c_flush_cache will move stripe from cached list to handle_list or
>> + * r5c_priority_list
> 
> What's the r5c_priority_list for? If you want to make sure reclaim makes
> progress, I think it's the wrong way. If there is no space, handling other
> normal stripes just means moving them to the no_space list and doing nothing else. Then
> the reclaim stripes will get their turn to run. There is no extra list required.

I will try to remove it. 

> 
>> 
>> +	BUG_ON(test_bit(STRIPE_R5C_PRIORITY, &sh->state) &&
>> +	       !test_bit(STRIPE_HANDLE, &sh->state));
>> +
>> +	if (test_bit(STRIPE_R5C_PRIORITY, &sh->state))
>> +		return 0;
>> +	if (test_bit(STRIPE_HANDLE, &sh->state) && !priority)
>> +		return 0;
>> +
>> 	r5c_freeze_stripe_for_reclaim(sh);
>> -	atomic_inc(&conf->active_stripes);
>> +	if (!test_and_set_bit(STRIPE_HANDLE, &sh->state)) {
>> +		atomic_inc(&conf->active_stripes);
>> +	}
> 
> shouldn't the stripe always be accounted as active before it's reclaimed? Do we
> decrease the count before the stripe is reclaimed? That sounds like a bug.
> 

We decrease the active count when the stripe is on the r5c cached full-stripe or
partial-stripe list. These two lists are variations of the inactive_list.
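
So r5c_flush_stripe() has to re-activate the stripe it pulls off those lists, roughly
(sketch only, simplified from the hunk above; the caller is assumed to hold
conf->device_lock):

	static void r5c_flush_stripe(struct r5conf *conf, struct stripe_head *sh)
	{
		/* lru still links the stripe into a cached (inactive-like) list */
		BUG_ON(list_empty(&sh->lru));
		list_del_init(&sh->lru);

		r5c_freeze_stripe_for_reclaim(sh);
		if (!test_and_set_bit(STRIPE_HANDLE, &sh->state))
			/* leaving an inactive-like list: count it as active again */
			atomic_inc(&conf->active_stripes);

		list_add_tail(&sh->lru, &conf->handle_list);
	}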

>> 
>> +
>> +			mutex_unlock(&log->io_mutex);
>> +			return -ENOSPC;
>> 		}
>> +		pr_debug("%s: write sh %lu to free log space\n", __func__, sh->sector);
>> +	}
>> +	if (!r5l_has_free_space(log, reserve)) {
>> +		pr_err("%s: cannot reserve space %d\n", __func__, reserve);
>> +		BUG();
> 
> same here. We should put the stripe into the no_space list. If we can't allocate
> space eventually, it indicates a bug in reclaim.

BUG() here already indicates a bug in reclaim. I will test this thoroughly with the new version.

>> 
>> +				if (before_jiffies > 20)
>> +					pr_debug("%s: wait for sh takes %lu jiffies\n", __func__, before_jiffies);
> please remove the debug code.
> 
> Thanks,
> Shaohua


^ permalink raw reply	[flat|nested] 14+ messages in thread

Thread overview: 14+ messages
2016-09-26 23:30 [PATCH v2 0/6] raid5-cache: enabling cache features Song Liu
2016-09-26 23:30 ` [PATCH v2 1/6] r5cache: write part of r5cache Song Liu
2016-09-27 22:51   ` Shaohua Li
2016-09-29 23:06     ` Song Liu
2016-09-26 23:30 ` [PATCH v2 2/6] r5cache: sysfs entry r5c_state Song Liu
2016-09-27 22:58   ` Shaohua Li
2016-09-26 23:30 ` [PATCH v2 3/6] r5cache: reclaim support Song Liu
2016-09-28  0:34   ` Shaohua Li
2016-10-04 21:59     ` Song Liu
2016-09-26 23:30 ` [PATCH v2 4/6] r5cache: r5c recovery Song Liu
2016-09-28  1:08   ` Shaohua Li
2016-09-26 23:30 ` [PATCH v2 5/6] r5cache: handle SYNC and FUA Song Liu
2016-09-28  1:32   ` Shaohua Li
2016-09-26 23:30 ` [PATCH v2 6/6] md/r5cache: decrease the counter after full-write stripe was reclaimed Song Liu
