From: Neil Brown <neilb@suse.de>
To: Dan Williams <dan.j.williams@intel.com>
Cc: linux-raid@vger.kernel.org, Andre Noll <maan@systemlinux.org>,
Ilya Yanok <yanok@emcraft.com>, Yuri Tikhonov <yur@emcraft.com>
Subject: Re: [PATCH v2 2/9] md/raid6: asynchronous raid6 operations
Date: Tue, 15 Sep 2009 15:32:41 +1000
Message-ID: <19119.9977.918785.459683@notabene.brown>
In-Reply-To: message from Dan Williams on Monday August 31
And just some tiny things in this patch....
>
> +/* set_syndrome_sources - populate source buffers for gen_syndrome
> + * @srcs - (struct page *) array of size sh->disks
> + * @sh - stripe_head to parse
> + *
> + * Populates srcs in proper layout order for the stripe and returns the
> + * 'count' of sources to be used in a call to async_gen_syndrome. The P
> + * destination buffer is recorded in srcs[count] and the Q destination
> + * is recorded in srcs[count+1]].
                                  ^ extra ']'
> + */
> +
> +static struct dma_async_tx_descriptor *
> +ops_run_compute6_2(struct stripe_head *sh, struct raid5_percpu *percpu)
> +{
> +	int i, count, disks = sh->disks;
> +	int syndrome_disks = sh->ddf_layout ? disks : disks-2;
> +	int d0_idx = raid6_d0(sh);
> +	int faila = -1, failb = -1;
> +	int target = sh->ops.target;
> +	int target2 = sh->ops.target2;
> +	struct r5dev *tgt = &sh->dev[target];
> +	struct r5dev *tgt2 = &sh->dev[target2];
> +	struct dma_async_tx_descriptor *tx;
> +	struct page **blocks = percpu->scribble;
> +	struct async_submit_ctl submit;
> +
> +	pr_debug("%s: stripe %llu block1: %d block2: %d\n",
> +		 __func__, (unsigned long long)sh->sector, target, target2);
> +	BUG_ON(target < 0 || target2 < 0);
> +	BUG_ON(!test_bit(R5_Wantcompute, &tgt->flags));
> +	BUG_ON(!test_bit(R5_Wantcompute, &tgt2->flags));
> +
> +	/* we need to open-code set_syndrome_sources to handle to the
                                                               ^^
remove the 'to'.
> +	 * slot number conversion for 'faila' and 'failb'
> +	 */
> +	for (i = 0; i < disks ; i++)
> +		blocks[i] = (void *)raid6_empty_zero_page;
> +	count = 0;
> +	i = d0_idx;
> +	do {
> +		int slot = raid6_idx_to_slot(i, sh, &count, syndrome_disks);
> +
> +		blocks[slot] = sh->dev[i].page;
> +
> +		if (i == target)
> +			faila = slot;
> +		if (i == target2)
> +			failb = slot;
> +		i = raid6_next_disk(i, disks);
> +	} while (i != d0_idx);
> +	BUG_ON(count != syndrome_disks);
> +
> +	BUG_ON(faila == failb);
> +	if (failb < faila)
> +		swap(faila, failb);
> +	pr_debug("%s: stripe: %llu faila: %d failb: %d\n",
> +		 __func__, (unsigned long long)sh->sector, faila, failb);
> +
> +	atomic_inc(&sh->count);
> +
> +	if (failb == syndrome_disks+1) {
> +		/* Q disk is one of the missing disks */
> +		if (faila == syndrome_disks) {
> +			/* Missing P+Q, just recompute */
> +			init_async_submit(&submit, 0, NULL, ops_complete_compute,
> +					  sh, to_addr_conv(sh, percpu));
> +			return async_gen_syndrome(blocks, 0, count+2,
> +						  STRIPE_SIZE, &submit);
> +		} else {
....
> +			init_async_submit(&submit, 0, tx, ops_complete_compute,
> +					  sh, to_addr_conv(sh, percpu));
> +			return async_gen_syndrome(blocks, 0, count+2,
> +						  STRIPE_SIZE, &submit);
> +		}
> +	}
Can we have an ' else { ' here, extending down to ....
> +
> +	init_async_submit(&submit, 0, NULL, ops_complete_compute, sh,
> +			  to_addr_conv(sh, percpu));
> +	if (failb == syndrome_disks) {
> +		/* We're missing D+P. */
> +		return async_raid6_datap_recov(syndrome_disks+2, STRIPE_SIZE,
> +					       faila, blocks, &submit);
> +	} else {
> +		/* We're missing D+D. */
> +		return async_raid6_2data_recov(syndrome_disks+2, STRIPE_SIZE,
> +					       faila, failb, blocks, &submit);
> +	}
... here please. It is correct as it stands, but the fact that every
branch in the 'if' part ends with a 'return' isn't immediately
obvious, so it is clearer if we are explicit about the
if / then / else
structure.
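I.e. something with this shape (just a sketch, reusing the calls exactly
as quoted and leaving the elided D+Q branch elided):

	if (failb == syndrome_disks+1) {
		/* Q disk is one of the missing disks */
		if (faila == syndrome_disks) {
			/* Missing P+Q, just recompute */
			init_async_submit(&submit, 0, NULL, ops_complete_compute,
					  sh, to_addr_conv(sh, percpu));
			return async_gen_syndrome(blocks, 0, count+2,
						  STRIPE_SIZE, &submit);
		} else {
			/* D+Q case, elided above */
			...
			init_async_submit(&submit, 0, tx, ops_complete_compute,
					  sh, to_addr_conv(sh, percpu));
			return async_gen_syndrome(blocks, 0, count+2,
						  STRIPE_SIZE, &submit);
		}
	} else {
		init_async_submit(&submit, 0, NULL, ops_complete_compute, sh,
				  to_addr_conv(sh, percpu));
		if (failb == syndrome_disks) {
			/* We're missing D+P. */
			return async_raid6_datap_recov(syndrome_disks+2, STRIPE_SIZE,
						       faila, blocks, &submit);
		} else {
			/* We're missing D+D. */
			return async_raid6_2data_recov(syndrome_disks+2, STRIPE_SIZE,
						       faila, failb, blocks, &submit);
		}
	}
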
Thanks,
NeilBrown