From: <gregkh@linuxfoundation.org>
To: yanaijie@huawei.com, shli@fb.com, stable@vger.kernel.org
Cc: <stable@vger.kernel.org>
Subject: FAILED: patch "[PATCH] md/raid5-cache: fix payload endianness problem in raid5-cache" failed to apply to 4.11-stable tree
Date: Mon, 22 May 2017 18:27:36 +0200
Message-ID: <149547045635208@kroah.com>
The patch below does not apply to the 4.11-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id, to <stable@vger.kernel.org>.
thanks,
greg k-h
------------------ original commit in Linus's tree ------------------
From 1ad45a9bc4e0cd5a6e6fb0e6c5d35d6c87f14c76 Mon Sep 17 00:00:00 2001
From: Jason Yan <yanaijie@huawei.com>
Date: Sat, 25 Mar 2017 09:44:39 +0800
Subject: [PATCH] md/raid5-cache: fix payload endianness problem in raid5-cache
The payload->header.type and payload->size fields are stored little-endian
on disk, so convert them to the correct byte order when reading and
writing them.
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: <stable@vger.kernel.org> #v4.10+
Signed-off-by: Shaohua Li <shli@fb.com>
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 25eb048298fe..b6194e082e48 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -2002,12 +2002,12 @@ r5l_recovery_verify_data_checksum_for_mb(struct r5l_log *log,
payload = (void *)mb + mb_offset;
payload_flush = (void *)mb + mb_offset;
- if (payload->header.type == R5LOG_PAYLOAD_DATA) {
+ if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_DATA) {
if (r5l_recovery_verify_data_checksum(
log, ctx, page, log_offset,
payload->checksum[0]) < 0)
goto mismatch;
- } else if (payload->header.type == R5LOG_PAYLOAD_PARITY) {
+ } else if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_PARITY) {
if (r5l_recovery_verify_data_checksum(
log, ctx, page, log_offset,
payload->checksum[0]) < 0)
@@ -2019,12 +2019,12 @@ r5l_recovery_verify_data_checksum_for_mb(struct r5l_log *log,
BLOCK_SECTORS),
payload->checksum[1]) < 0)
goto mismatch;
- } else if (payload->header.type == R5LOG_PAYLOAD_FLUSH) {
+ } else if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_FLUSH) {
/* nothing to do for R5LOG_PAYLOAD_FLUSH here */
} else /* not R5LOG_PAYLOAD_DATA/PARITY/FLUSH */
goto mismatch;
- if (payload->header.type == R5LOG_PAYLOAD_FLUSH) {
+ if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_FLUSH) {
mb_offset += sizeof(struct r5l_payload_flush) +
le32_to_cpu(payload_flush->size);
} else {
@@ -2091,7 +2091,7 @@ r5c_recovery_analyze_meta_block(struct r5l_log *log,
payload = (void *)mb + mb_offset;
payload_flush = (void *)mb + mb_offset;
- if (payload->header.type == R5LOG_PAYLOAD_FLUSH) {
+ if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_FLUSH) {
int i, count;
count = le32_to_cpu(payload_flush->size) / sizeof(__le64);
@@ -2113,7 +2113,7 @@ r5c_recovery_analyze_meta_block(struct r5l_log *log,
}
/* DATA or PARITY payload */
- stripe_sect = (payload->header.type == R5LOG_PAYLOAD_DATA) ?
+ stripe_sect = (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_DATA) ?
raid5_compute_sector(
conf, le64_to_cpu(payload->location), 0, &dd,
NULL)
@@ -2151,7 +2151,7 @@ r5c_recovery_analyze_meta_block(struct r5l_log *log,
list_add_tail(&sh->lru, cached_stripe_list);
}
- if (payload->header.type == R5LOG_PAYLOAD_DATA) {
+ if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_DATA) {
if (!test_bit(STRIPE_R5C_CACHING, &sh->state) &&
test_bit(R5_Wantwrite, &sh->dev[sh->pd_idx].flags)) {
r5l_recovery_replay_one_stripe(conf, sh, ctx);
@@ -2159,7 +2159,7 @@ r5c_recovery_analyze_meta_block(struct r5l_log *log,
}
r5l_recovery_load_data(log, sh, ctx, payload,
log_offset);
- } else if (payload->header.type == R5LOG_PAYLOAD_PARITY)
+ } else if (le16_to_cpu(payload->header.type) == R5LOG_PAYLOAD_PARITY)
r5l_recovery_load_parity(log, sh, ctx, payload,
log_offset);
else
@@ -2361,7 +2361,7 @@ r5c_recovery_rewrite_data_only_stripes(struct r5l_log *log,
payload = (void *)mb + offset;
payload->header.type = cpu_to_le16(
R5LOG_PAYLOAD_DATA);
- payload->size = BLOCK_SECTORS;
+ payload->size = cpu_to_le32(BLOCK_SECTORS);
payload->location = cpu_to_le64(
raid5_compute_blocknr(sh, i, 0));
addr = kmap_atomic(dev->page);
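
For anyone preparing the backport, the fix follows the usual pattern for
on-disk little-endian fields: convert with le16_to_cpu()/le32_to_cpu()
before comparing, and with cpu_to_le16()/cpu_to_le32() before storing.
Below is a minimal userspace sketch of that same pattern, using the glibc
<endian.h> helpers (le16toh/htole32), which mirror the kernel accessors.
The struct layout, the demo_* names, and the constants are illustrative
only, not the real r5l_payload_header definitions.

#include <endian.h>	/* htole16, htole32, le16toh, le32toh (glibc >= 2.9) */
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative on-disk header with little-endian fields; not the real
 * r5l_payload_header layout, just the same idea.
 */
struct demo_payload_header {
	uint16_t type;	/* stored little-endian (__le16 in the kernel) */
	uint16_t pad;
	uint32_t size;	/* stored little-endian (__le32 in the kernel) */
};

enum { DEMO_PAYLOAD_DATA = 0, DEMO_PAYLOAD_PARITY = 1, DEMO_PAYLOAD_FLUSH = 2 };

int main(void)
{
	struct demo_payload_header hdr;

	/*
	 * Write path: convert CPU values to little-endian before storing,
	 * as the patch does with cpu_to_le32(BLOCK_SECTORS).
	 */
	hdr.type = htole16(DEMO_PAYLOAD_DATA);
	hdr.size = htole32(8);

	/*
	 * Read path: convert back to CPU byte order before comparing,
	 * as the patch does with le16_to_cpu(payload->header.type).
	 * Comparing hdr.type directly would only work on little-endian
	 * hosts, where the conversion happens to be a no-op.
	 */
	if (le16toh(hdr.type) == DEMO_PAYLOAD_DATA)
		printf("data payload, %u sectors\n", (unsigned)le32toh(hdr.size));

	return 0;
}

On little-endian machines the conversions compile to nothing, which is why
the missing conversions in the original code only show up as recovery
failures on big-endian hosts.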