From: Carlos Maiolino <cmaiolino@redhat.com>
To: xfs@oss.sgi.com
Subject: Re: [PATCH 2/2] xfsdump: fix DEBUGPARTIALS build
Date: Thu, 10 Oct 2013 11:21:20 -0300 [thread overview]
Message-ID: <20131010142119.GA3434@orion.maiolino.org> (raw)
In-Reply-To: <525487CD.7080900@sandeen.net>
Looks good to me, indeed.
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
On Tue, Oct 08, 2013 at 05:31:41PM -0500, Eric Sandeen wrote:
> the DEBUGPARTIALS debug code might have been helpful
> in this saga, so get it building again.
>
> The primary build failure is that STREAM_MAX isn't
> defined for the num_partials[STREAM_MAX] array;
> the loop which uses that array iterates "drivecnt"
> times, so just allocate an array of that size.
>
> Fix a few printf format warnings while we're at it.
>
> Signed-off-by: Eric Sandeen <sandeen@redhat.com>
> ---
>
> diff --git a/restore/content.c b/restore/content.c
> index cc49336..8ad0f00 100644
> --- a/restore/content.c
> +++ b/restore/content.c
> @@ -8857,22 +8857,23 @@ dump_partials(void)
> int i;
>
> pi_lock();
> - printf("\npartial_reg: count=%d\n", persp->a.parrestcnt);
> + printf("\npartial_reg: count=%d\n", (int)persp->a.parrestcnt);
> if (persp->a.parrestcnt > 0) {
> for (i=0; i < partialmax; i++ ) {
> if (persp->a.parrest[i].is_ino > 0) {
> int j;
>
> isptr = &persp->a.parrest[i];
> - printf( "\tino=%lld ", isptr->is_ino);
> + printf("\tino=%llu ",
> + (unsigned long long)isptr->is_ino);
> for (j=0, bsptr=isptr->is_bs;
> j < drivecnt;
> j++, bsptr++)
> {
> if (bsptr->endoffset > 0) {
> printf("%d:%lld-%lld ",
> - j, bsptr->offset,
> - bsptr->endoffset);
> + j, (long long)bsptr->offset,
> + (long long)bsptr->endoffset);
> }
> }
> printf( "\n");
> @@ -8892,13 +8893,17 @@ dump_partials(void)
> void
> check_valid_partials(void)
> {
> - int num_partials[STREAM_MAX]; /* sum of partials for a given drive */
> + int *num_partials; /* array for sum of partials for a given drive */
> partial_rest_t *isptr = NULL;
> bytespan_t *bsptr = NULL;
> int i;
>
> /* zero the sums for each stream */
> - memset(num_partials, 0, sizeof(num_partials));
> + num_partials = calloc(drivecnt, sizeof(int));
> + if (!num_partials) {
> + perror("num_partials array allocation");
> + return;
> + }
>
> pi_lock();
> if (persp->a.parrestcnt > 0) {
> @@ -8926,6 +8931,7 @@ check_valid_partials(void)
> }
> }
> pi_unlock();
> + free(num_partials);
> }
> #endif
> _______________________________________________
> xfs mailing list
> xfs@oss.sgi.com
> http://oss.sgi.com/mailman/listinfo/xfs
--
Carlos
Thread overview: 7+ messages
2013-10-08 22:01 [PATCH 0/2] xfsdump: 2 more fixes Eric Sandeen
2013-10-08 22:05 ` [PATCH 1/2] xfsdump: avoid segfault in partial_reg() in error case Eric Sandeen
2013-10-10 14:17 ` Carlos Maiolino
2013-10-18 21:29 ` Rich Johnston
2013-10-08 22:31 ` [PATCH 2/2] xfsdump: fix DEBUGPARTIALS build Eric Sandeen
2013-10-10 14:21 ` Carlos Maiolino [this message]
2013-10-18 21:30 ` Rich Johnston