linux-raid.vger.kernel.org archive mirror
From: Piergiorgio Sartor <piergiorgio.sartor@nexgo.de>
To: Piergiorgio Sartor <piergiorgio.sartor@nexgo.de>
Cc: NeilBrown <neilb@suse.de>, linux-raid@vger.kernel.org
Subject: [PATCH] RAID-6 check standalone fix component list parsing
Date: Wed, 13 Apr 2011 22:48:25 +0200	[thread overview]
Message-ID: <20110413204825.GA15496@lazy.lzy> (raw)
In-Reply-To: <20110406180202.GA3267@lazy.lzy>

Hi Neil,

maybe you missed the other email; in any case, please
find attached the patch that fixes the parsing of
the component list, i.e. it now skips the "spare" entries.

I also added a check in case the array is degraded.
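For clarity, the skip logic the patch applies to info->devs can be sketched standalone. This is a minimal, hypothetical model (struct comp and count_active are illustrative stand-ins, not mdadm types): components with raid_disk >= 0 occupy an active slot, spares carry a negative slot and are passed over, and the walk stops once all raid_disks active members have been seen.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for the mdinfo component list:
 * raid_disk >= 0 is an active slot; a negative value marks a spare. */
struct comp {
	int raid_disk;
	struct comp *next;
};

/* Walk the list until raid_disks active members have been counted,
 * skipping spares -- the same idea the patch uses for info->devs. */
static int count_active(struct comp *c, int raid_disks)
{
	int active = 0;

	while (c != NULL && active < raid_disks) {
		if (c->raid_disk >= 0)
			active++;
		c = c->next;
	}
	return active;
}
```

With a list of slots {0, spare, 1} and raid_disks = 2, count_active() returns 2, having stepped over the spare.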

Thanks,

pg

--- cut here ---


diff -uNr a/raid6check.c b/raid6check.c
--- a/raid6check.c	2011-04-05 01:29:45.000000000 +0200
+++ b/raid6check.c	2011-04-05 22:51:32.587032612 +0200
@@ -207,6 +207,7 @@
 	char **disk_name = NULL;
 	unsigned long long *offsets = NULL;
 	int raid_disks = 0;
+	int active_disks = 0;
 	int chunk_size = 0;
 	int layout = -1;
 	int level = 6;
@@ -242,6 +243,7 @@
 			  GET_LEVEL|
 			  GET_LAYOUT|
 			  GET_DISKS|
+			  GET_DEGRADED|
 			  GET_COMPONENT|
 			  GET_CHUNK|
 			  GET_DEVS|
@@ -254,6 +256,12 @@
 		goto exitHere;
 	}
 
+	if(info->array.failed_disks > 0) {
+		fprintf(stderr, "%s: %s degraded array\n", prg, argv[1]);
+		exit_err = 8;
+		goto exitHere;
+	}
+
 	printf("layout: %d\n", info->array.layout);
 	printf("disks: %d\n", info->array.raid_disks);
 	printf("component size: %llu\n", info->component_size * 512);
@@ -262,12 +270,13 @@
 	printf("\n");
 
 	comp = info->devs;
-	for(i = 0; i < info->array.raid_disks; i++) {
+	for(i = 0, active_disks = 0; active_disks < info->array.raid_disks; i++) {
 		printf("disk: %d - offset: %llu - size: %llu - name: %s - slot: %d\n",
 			i, comp->data_offset * 512, comp->component_size * 512,
 			map_dev(comp->disk.major, comp->disk.minor, 0),
 			comp->disk.raid_disk);
-
+		if(comp->disk.raid_disk >= 0)
+			active_disks++;
 		comp = comp->next;
 	}
 	printf("\n");
@@ -317,18 +326,20 @@
 	close_flag = 1;
 
 	comp = info->devs;
-	for (i=0; i<raid_disks; i++) {
+	for (i=0, active_disks=0; active_disks<raid_disks; i++) {
 		int disk_slot = comp->disk.raid_disk;
-		disk_name[disk_slot] = map_dev(comp->disk.major, comp->disk.minor, 0);
-		offsets[disk_slot] = comp->data_offset * 512;
-		fds[disk_slot] = open(disk_name[disk_slot], O_RDWR);
-		if (fds[disk_slot] < 0) {
-			perror(disk_name[disk_slot]);
-			fprintf(stderr,"%s: cannot open %s\n", prg, disk_name[disk_slot]);
-			exit_err = 6;
-			goto exitHere;
+		if(disk_slot >= 0) {
+			disk_name[disk_slot] = map_dev(comp->disk.major, comp->disk.minor, 0);
+			offsets[disk_slot] = comp->data_offset * 512;
+			fds[disk_slot] = open(disk_name[disk_slot], O_RDWR);
+			if (fds[disk_slot] < 0) {
+				perror(disk_name[disk_slot]);
+				fprintf(stderr,"%s: cannot open %s\n", prg, disk_name[disk_slot]);
+				exit_err = 6;
+				goto exitHere;
+			}
+			active_disks++;
 		}
-
 		comp = comp->next;
 	}
 
--- cut here ---

bye,

-- 

piergiorgio


Thread overview: 29+ messages
2011-02-21 20:45 [PATCH] RAID-6 check standalone Piergiorgio Sartor
2011-03-07 19:33 ` Piergiorgio Sartor
2011-03-21  3:02 ` NeilBrown
2011-03-21 10:40   ` Piergiorgio Sartor
2011-03-21 11:04     ` NeilBrown
2011-03-21 11:54       ` Piergiorgio Sartor
2011-03-21 22:59         ` NeilBrown
2011-03-31 18:53           ` [PATCH] RAID-6 check standalone md device Piergiorgio Sartor
     [not found]             ` <4D96597C.1020103@tuxes.nl>
     [not found]               ` <20110402071310.GA2640@lazy.lzy>
2011-04-02 10:33                 ` Bas van Schaik
2011-04-02 11:03                   ` Piergiorgio Sartor
2011-04-04 23:01             ` NeilBrown
2011-04-05 19:56               ` Piergiorgio Sartor
2011-04-04 17:52           ` [PATCH] RAID-6 check standalone code cleanup Piergiorgio Sartor
2011-04-04 23:12             ` NeilBrown
2011-04-06 18:02               ` Piergiorgio Sartor
2011-04-13 20:48                 ` Piergiorgio Sartor [this message]
2011-04-14  7:29                   ` [PATCH] RAID-6 check standalone fix component list parsing NeilBrown
2011-04-14  7:32                 ` [PATCH] RAID-6 check standalone code cleanup NeilBrown
2011-05-08 18:54               ` [PATCH] RAID-6 check standalone suspend array Piergiorgio Sartor
2011-05-09  1:45                 ` NeilBrown
2011-05-09 18:43                   ` [PATCH] RAID-6 check standalone suspend array V2.0 Piergiorgio Sartor
2011-05-15 21:15                     ` Piergiorgio Sartor
2011-05-16 10:08                       ` NeilBrown
2011-07-20 17:57                         ` Piergiorgio Sartor
2011-07-22  6:41                           ` Luca Berra
2011-07-25 18:53                             ` Piergiorgio Sartor
2011-07-26  5:25                           ` NeilBrown
2011-08-07 17:09                             ` [PATCH] RAID-6 check standalone man page Piergiorgio Sartor
2011-08-09  0:43                               ` NeilBrown
