linux-raid.vger.kernel.org archive mirror
* [BUG] Raid5 trouble
@ 2007-10-16 13:24 BERTRAND Joël
  2007-10-17 14:32 ` BERTRAND Joël
  0 siblings, 1 reply; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-16 13:24 UTC (permalink / raw)
  To: linux-raid, sparclinux

	Hello,

	I run the 2.6.23 Linux kernel on two T1000 (sparc64) servers. Each
server has a partitionable raid5 array (/dev/md/d0), and I need to
synchronize both raid5 volumes with raid1. I have therefore tried to
build a raid1 volume between /dev/md/d0p1 and /dev/sdi1 (exported over
iSCSI from the second server), and I get a BUG:

Root gershwin:[/usr/scripts] > mdadm -C /dev/md7 -l1 -n2 /dev/md/d0p1 
/dev/sdi1
...
kernel BUG at drivers/md/raid5.c:380!
               \|/ ____ \|/
               "@'/ .. \`@"
               /_| \__/ |_\
                  \__U_/
md7_resync(4476): Kernel bad sw trap 5 [#1]
TSTATE: 0000000080001606 TPC: 00000000005ed50c TNPC: 00000000005ed510 Y: 00000000    Not tainted
TPC: <get_stripe_work+0x1f4/0x200>
g0: 0000000000000005 g1: 00000000007c0400 g2: 0000000000000001 g3: 0000000000748400
g4: fffff800ebdb2400 g5: fffff80002080000 g6: fffff800e82fc000 g7: 0000000000748528
o0: 0000000000000029 o1: 0000000000715798 o2: 000000000000017c o3: 0000000000000005
o4: 0000000000000006 o5: fffff800e9bb6e28 sp: fffff800e82fed81 ret_pc: 00000000005ed504
RPC: <get_stripe_work+0x1ec/0x200>
l0: 0000000000000002 l1: ffffffffffffffff l2: fffff800e9bb6e68 l3: fffff800e9bb6db0
l4: fffff800e9bb6e50 l5: fffffffffffffff8 l6: 0000000000000005 l7: fffff800fcbd6000
i0: fffff800e9bb6df0 i1: 0000000000000000 i2: 0000000000000004 i3: fffff800e82ff720
i4: 0000000000000080 i5: 0000000000000080 i6: fffff800e82fee51 i7: 00000000005f0274
I7: <handle_stripe5+0x4fc/0x1340>
Caller[00000000005f0274]: handle_stripe5+0x4fc/0x1340
Caller[00000000005f211c]: handle_stripe+0x24/0x13e0
Caller[00000000005f4450]: make_request+0x358/0x600
Caller[0000000000542890]: generic_make_request+0x198/0x220
Caller[00000000005eb240]: sync_request+0x608/0x640
Caller[00000000005fef7c]: md_do_sync+0x384/0x920
Caller[00000000005ff8f0]: md_thread+0x38/0x140
Caller[0000000000478b40]: kthread+0x48/0x80
Caller[00000000004273d0]: kernel_thread+0x38/0x60
Caller[0000000000478de0]: kthreadd+0x148/0x1c0
Instruction DUMP: 9210217c  7ff8f57f  90122398 <91d02005> 30680004  01000000  01000000  01000000  9de3bf00

Root gershwin:[/usr/scripts] > cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md7 : active raid1 sdi1[1] md_d0p1[0]
       1464725632 blocks [2/2] [UU]
       [>....................]  resync =  0.0% (132600/1464725632) finish=141823.7min speed=171K/sec

md_d0 : active raid5 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
       1464725760 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
...
Root gershwin:[/usr/scripts] > fdisk -l /dev/md/d0

Disk /dev/md/d0: 1499.8 GB, 1499879178240 bytes
2 heads, 4 sectors/track, 366181440 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0xa4a52979

       Device Boot      Start         End      Blocks   Id  System
/dev/md/d0p1               1   366181440  1464725758   fd  Linux raid autodetect
Root gershwin:[/usr/scripts] > fdisk -l /dev/sdi

Disk /dev/sdi: 1499.8 GB, 1499879178240 bytes
2 heads, 4 sectors/track, 366181440 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0xf6cdb2a3

    Device Boot      Start         End      Blocks   Id  System
/dev/sdi1               1   366181440  1464725758   fd  Linux raid autodetect
Root gershwin:[/usr/scripts] > cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
   Vendor: FUJITSU  Model: MAY2073RCSUN72G  Rev: 0501
   Type:   Direct-Access                    ANSI  SCSI revision: 04
Host: scsi0 Channel: 00 Id: 01 Lun: 00
   Vendor: FUJITSU  Model: MAY2073RCSUN72G  Rev: 0501
   Type:   Direct-Access                    ANSI  SCSI revision: 04
Host: scsi2 Channel: 00 Id: 08 Lun: 00
   Vendor: FUJITSU  Model: MAW3300NC        Rev: 0104
   Type:   Direct-Access                    ANSI  SCSI revision: 03
Host: scsi2 Channel: 00 Id: 09 Lun: 00
   Vendor: FUJITSU  Model: MAW3300NC        Rev: 0104
   Type:   Direct-Access                    ANSI  SCSI revision: 03
Host: scsi2 Channel: 00 Id: 10 Lun: 00
   Vendor: FUJITSU  Model: MAW3300NC        Rev: 0104
   Type:   Direct-Access                    ANSI  SCSI revision: 03
Host: scsi2 Channel: 00 Id: 11 Lun: 00
   Vendor: FUJITSU  Model: MAW3300NC        Rev: 0104
   Type:   Direct-Access                    ANSI  SCSI revision: 03
Host: scsi2 Channel: 00 Id: 12 Lun: 00
   Vendor: FUJITSU  Model: MAW3300NC        Rev: 0104
   Type:   Direct-Access                    ANSI  SCSI revision: 03
Host: scsi2 Channel: 00 Id: 13 Lun: 00
   Vendor: FUJITSU  Model: MAW3300NC        Rev: 0104
   Type:   Direct-Access                    ANSI  SCSI revision: 03
Host: scsi3 Channel: 00 Id: 00 Lun: 00
   Vendor: IET      Model: VIRTUAL-DISK     Rev: 0
   Type:   Direct-Access                    ANSI  SCSI revision: 04
Root gershwin:[/usr/scripts] >

	I don't know whether this bug is arch-specific, but I have never seen it on amd64...

	Regards,

	JKB


* Re: [BUG] Raid5 trouble
  2007-10-16 13:24 [BUG] Raid5 trouble BERTRAND Joël
@ 2007-10-17 14:32 ` BERTRAND Joël
  2007-10-17 14:58   ` Dan Williams
  0 siblings, 1 reply; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-17 14:32 UTC (permalink / raw)
  To: linux-raid, sparclinux

BERTRAND Joël wrote:
>     Hello,
> 
>     I run the 2.6.23 Linux kernel on two T1000 (sparc64) servers. Each
> server has a partitionable raid5 array (/dev/md/d0), and I need to
> synchronize both raid5 volumes with raid1. I have therefore tried to
> build a raid1 volume between /dev/md/d0p1 and /dev/sdi1 (exported over
> iSCSI from the second server), and I get a BUG:
> 
> Root gershwin:[/usr/scripts] > mdadm -C /dev/md7 -l1 -n2 /dev/md/d0p1 
> /dev/sdi1
> ...

	Hello,

	I have fixed iscsi-target and tested it. It now works without any
trouble; the patches were posted to the iscsi-target mailing list. When
I use iSCSI to access the foreign raid5 volume, it works fine: I can
format the foreign volume, copy large files onto it, and so on. But when
I try to create a new raid1 volume from a local raid5 volume and a
foreign raid5 volume, I get my well-known Oops. You can find my dmesg
after the Oops below:

md: md_d0 stopped.
md: bind<sdd1>
md: bind<sde1>
md: bind<sdf1>
md: bind<sdg1>
md: bind<sdh1>

md: bind<sdc1>
raid5: device sdc1 operational as raid disk 0
raid5: device sdh1 operational as raid disk 5
raid5: device sdg1 operational as raid disk 4
raid5: device sdf1 operational as raid disk 3
raid5: device sde1 operational as raid disk 2
raid5: device sdd1 operational as raid disk 1
raid5: allocated 12518kB for md_d0
raid5: raid level 5 set md_d0 active with 6 out of 6 devices, algorithm 2
RAID5 conf printout:
  --- rd:6 wd:6
  disk 0, o:1, dev:sdc1
  disk 1, o:1, dev:sdd1
  disk 2, o:1, dev:sde1
  disk 3, o:1, dev:sdf1
  disk 4, o:1, dev:sdg1
  disk 5, o:1, dev:sdh1
  md_d0: p1
scsi3 : iSCSI Initiator over TCP/IP
scsi 3:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
sd 3:0:0:0: [sdi] 2929451520 512-byte hardware sectors (1499879 MB)
sd 3:0:0:0: [sdi] Write Protect is off
sd 3:0:0:0: [sdi] Mode Sense: 77 00 00 08
sd 3:0:0:0: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdi] 2929451520 512-byte hardware sectors (1499879 MB)
sd 3:0:0:0: [sdi] Write Protect is off
sd 3:0:0:0: [sdi] Mode Sense: 77 00 00 08
sd 3:0:0:0: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
  sdi: sdi1
sd 3:0:0:0: [sdi] Attached SCSI disk
md: bind<md_d0p1>
md: bind<sdi1>
md: md7: raid array is not clean -- starting background reconstruction
raid1: raid set md7 active with 2 out of 2 mirrors
md: resync of RAID array md7
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 256k window, over a total of 1464725632 blocks.
kernel BUG at drivers/md/raid5.c:380!
               \|/ ____ \|/
               "@'/ .. \`@"
               /_| \__/ |_\
                  \__U_/
md7_resync(4929): Kernel bad sw trap 5 [#1]
TSTATE: 0000000080001606 TPC: 00000000005ed50c TNPC: 00000000005ed510 Y: 00000000    Not tainted
TPC: <get_stripe_work+0x1f4/0x200>
g0: 0000000000000005 g1: 00000000007c0400 g2: 0000000000000001 g3: 0000000000748400
g4: fffff800feeb6880 g5: fffff80002080000 g6: fffff800e7598000 g7: 0000000000748528
o0: 0000000000000029 o1: 0000000000715798 o2: 000000000000017c o3: 0000000000000005
o4: 0000000000000006 o5: fffff800e8f0a060 sp: fffff800e759ad81 ret_pc: 00000000005ed504
RPC: <get_stripe_work+0x1ec/0x200>
l0: 0000000000000002 l1: ffffffffffffffff l2: fffff800e8f0a0a0 l3: fffff800e8f09fe8
l4: fffff800e8f0a088 l5: fffffffffffffff8 l6: 0000000000000005 l7: fffff800e8374000
i0: fffff800e8f0a028 i1: 0000000000000000 i2: 0000000000000004 i3: fffff800e759b720
i4: 0000000000000080 i5: 0000000000000080 i6: fffff800e759ae51 i7: 00000000005f0274
I7: <handle_stripe5+0x4fc/0x1340>
Caller[00000000005f0274]: handle_stripe5+0x4fc/0x1340
Caller[00000000005f211c]: handle_stripe+0x24/0x13e0
Caller[00000000005f4450]: make_request+0x358/0x600
Caller[0000000000542890]: generic_make_request+0x198/0x220
Caller[00000000005eb240]: sync_request+0x608/0x640
Caller[00000000005fef7c]: md_do_sync+0x384/0x920
Caller[00000000005ff8f0]: md_thread+0x38/0x140
Caller[0000000000478b40]: kthread+0x48/0x80
Caller[00000000004273d0]: kernel_thread+0x38/0x60
Caller[0000000000478de0]: kthreadd+0x148/0x1c0
Instruction DUMP: 9210217c  7ff8f57f  90122398 <91d02005> 30680004  01000000  01000000  01000000  9de3bf00

	I suspect a major bug in the raid5 code, but I don't know how to debug it...

	md7 was created by mdadm -C /dev/md7 -l1 -n2 /dev/md/d0p1 /dev/sdi1.
/dev/md/d0 is a raid5 volume, and sdi is an iSCSI disk.

	Regards,

	JKB


* Re: [BUG] Raid5 trouble
  2007-10-17 14:32 ` BERTRAND Joël
@ 2007-10-17 14:58   ` Dan Williams
  2007-10-17 15:40     ` Dan Williams
  2007-10-17 16:07     ` [BUG] Raid5 trouble BERTRAND Joël
  0 siblings, 2 replies; 36+ messages in thread
From: Dan Williams @ 2007-10-17 14:58 UTC (permalink / raw)
  To: BERTRAND Joël; +Cc: linux-raid, sparclinux

On 10/17/07, BERTRAND Joël <joel.bertrand@systella.fr> wrote:
> BERTRAND Joël wrote:
> >     Hello,
> >
> >     I run the 2.6.23 Linux kernel on two T1000 (sparc64) servers. Each
> > server has a partitionable raid5 array (/dev/md/d0), and I need to
> > synchronize both raid5 volumes with raid1. I have therefore tried to
> > build a raid1 volume between /dev/md/d0p1 and /dev/sdi1 (exported over
> > iSCSI from the second server), and I get a BUG:
> >
> > Root gershwin:[/usr/scripts] > mdadm -C /dev/md7 -l1 -n2 /dev/md/d0p1
> > /dev/sdi1
> > ...
>
>         Hello,
>
>         I have fixed iscsi-target and tested it. It now works without
> any trouble; the patches were posted to the iscsi-target mailing list.
> When I use iSCSI to access the foreign raid5 volume, it works fine: I
> can format the foreign volume, copy large files onto it, and so on. But
> when I try to create a new raid1 volume from a local raid5 volume and a
> foreign raid5 volume, I get my well-known Oops. You can find my dmesg
> after the Oops below:
>

Can you send your .config and your bootup dmesg?

Thanks,
Dan


* Re: [BUG] Raid5 trouble
  2007-10-17 14:58   ` Dan Williams
@ 2007-10-17 15:40     ` Dan Williams
  2007-10-17 16:44       ` BERTRAND Joël
  2007-10-19  2:55       ` Bill Davidsen
  2007-10-17 16:07     ` [BUG] Raid5 trouble BERTRAND Joël
  1 sibling, 2 replies; 36+ messages in thread
From: Dan Williams @ 2007-10-17 15:40 UTC (permalink / raw)
  To: BERTRAND Joël; +Cc: linux-raid, sparclinux

[-- Attachment #1: Type: text/plain, Size: 1717 bytes --]

On 10/17/07, Dan Williams <dan.j.williams@intel.com> wrote:
> On 10/17/07, BERTRAND Joël <joel.bertrand@systella.fr> wrote:
> > BERTRAND Joël wrote:
> > >     Hello,
> > >
> > >     I run the 2.6.23 Linux kernel on two T1000 (sparc64) servers. Each
> > > server has a partitionable raid5 array (/dev/md/d0), and I need to
> > > synchronize both raid5 volumes with raid1. I have therefore tried to
> > > build a raid1 volume between /dev/md/d0p1 and /dev/sdi1 (exported over
> > > iSCSI from the second server), and I get a BUG:
> > >
> > > Root gershwin:[/usr/scripts] > mdadm -C /dev/md7 -l1 -n2 /dev/md/d0p1
> > > /dev/sdi1
> > > ...
> >
> >         Hello,
> >
> >         I have fixed iscsi-target and tested it. It now works without
> > any trouble; the patches were posted to the iscsi-target mailing list.
> > When I use iSCSI to access the foreign raid5 volume, it works fine: I
> > can format the foreign volume, copy large files onto it, and so on. But
> > when I try to create a new raid1 volume from a local raid5 volume and a
> > foreign raid5 volume, I get my well-known Oops. You can find my dmesg
> > after the Oops below:
> >
>
> Can you send your .config and your bootup dmesg?
>

I found a problem which may lead to the operations count dropping
below zero.  If ops_complete_biofill() gets preempted in between the
following calls:

raid5.c:554> clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
raid5.c:555> clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);

...then get_stripe_work() can recount/re-acknowledge STRIPE_OP_BIOFILL
causing the assertion.  In fact, the 'pending' bit should always be
cleared first, but the other cases are protected by
spin_lock(&sh->lock).  Patch attached.
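
To make the window concrete, here is a minimal userspace sketch of the
interleaving (a hand-rolled simulation: the bit helpers are simplified,
non-atomic stand-ins for the kernel's bitops, and the acknowledge test
only paraphrases the test_and_ack_op() logic described above; it is not
the verbatim 2.6.23 code):

#include <stdio.h>

#define STRIPE_OP_BIOFILL 0	/* stand-in for the kernel constant */

/* Simplified, non-atomic stand-ins for the kernel bitops. */
static int test_bit(int n, unsigned long *w) { return (*w >> n) & 1; }
static void set_bit(int n, unsigned long *w) { *w |= 1UL << n; }
static void clear_bit(int n, unsigned long *w) { *w &= ~(1UL << n); }
static int test_and_set_bit(int n, unsigned long *w)
{
	int old = test_bit(n, w);
	set_bit(n, w);
	return old;
}

/* Acknowledge test: pending, not complete, and not yet acked. */
static int ack_one(unsigned long *pending, unsigned long *ack,
		   unsigned long *complete)
{
	return test_bit(STRIPE_OP_BIOFILL, pending) &&
	       !test_bit(STRIPE_OP_BIOFILL, complete) &&
	       !test_and_set_bit(STRIPE_OP_BIOFILL, ack);
}

int main(void)
{
	unsigned long pending = 0, ack = 0, complete = 0;
	int count = 0;

	/* A biofill op is queued ... */
	set_bit(STRIPE_OP_BIOFILL, &pending);
	count++;
	/* ... and acknowledged/counted by a first get_stripe_work() pass. */
	if (ack_one(&pending, &ack, &complete))
		count--;			/* count == 0 */

	/* ops_complete_biofill() clears 'ack' first ... */
	clear_bit(STRIPE_OP_BIOFILL, &ack);
	/* ... and is preempted before it clears 'pending': a second
	 * get_stripe_work() pass sees pending && !complete && !ack and
	 * counts the same, already-finished op a second time. */
	if (ack_one(&pending, &ack, &complete))
		count--;			/* count == -1 */

	clear_bit(STRIPE_OP_BIOFILL, &pending);	/* completion resumes */

	printf("ops.count = %d\n", count);	/* -1: BUG_ON(count < 0) */
	return 0;
}

Clearing 'pending' before 'ack', as the patch below does, closes the
window: once 'pending' is clear, the acknowledge test can no longer match.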

--
Dan

[-- Attachment #2: fix-biofill-clear.patch --]
[-- Type: application/octet-stream, Size: 900 bytes --]

raid5: fix clearing of biofill operations

From: Dan Williams <dan.j.williams@intel.com>

ops_complete_biofill() runs outside of spin_lock(&sh->lock) and clears
'ack' before it clears 'pending'.  If get_stripe_work() runs in between the
clearing of 'ack' and 'pending' it will recount the recently completed
operation and cause sh->ops.count to be less than zero.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 drivers/md/raid5.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index f96dea9..822f4d5 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -551,8 +551,8 @@ static void ops_complete_biofill(void *stripe_head_ref)
 			}
 		}
 	}
-	clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
 	clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
+	clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
 
 	return_io(return_bi);
 


* Re: [BUG] Raid5 trouble
  2007-10-17 14:58   ` Dan Williams
  2007-10-17 15:40     ` Dan Williams
@ 2007-10-17 16:07     ` BERTRAND Joël
  1 sibling, 0 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-17 16:07 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-raid, sparclinux

[-- Attachment #1: Type: text/plain, Size: 1830 bytes --]

Dan Williams wrote:
> On 10/17/07, BERTRAND Joël <joel.bertrand@systella.fr> wrote:
>> BERTRAND Joël wrote:
>>>     Hello,
>>>
>>>     I run the 2.6.23 Linux kernel on two T1000 (sparc64) servers. Each
>>> server has a partitionable raid5 array (/dev/md/d0), and I need to
>>> synchronize both raid5 volumes with raid1. I have therefore tried to
>>> build a raid1 volume between /dev/md/d0p1 and /dev/sdi1 (exported over
>>> iSCSI from the second server), and I get a BUG:
>>>
>>> Root gershwin:[/usr/scripts] > mdadm -C /dev/md7 -l1 -n2 /dev/md/d0p1
>>> /dev/sdi1
>>> ...
>>         Hello,
>>
>>         I have fixed iscsi-target and tested it. It now works without
>> any trouble; the patches were posted to the iscsi-target mailing list.
>> When I use iSCSI to access the foreign raid5 volume, it works fine: I
>> can format the foreign volume, copy large files onto it, and so on. But
>> when I try to create a new raid1 volume from a local raid5 volume and a
>> foreign raid5 volume, I get my well-known Oops. You can find my dmesg
>> after the Oops below:
>>

	Your patch does not work for me. I applied it, built a new kernel,
and I get the same Oops.

> Can you send your .config and your bootup dmesg?

	Yes, of course ;-) Both files are attached. My new Oops is:

kernel BUG at drivers/md/raid5.c:380!
               \|/ ____ \|/
               "@'/ .. \`@"
               /_| \__/ |_\
                  \__U_/
md7_resync(4258): Kernel bad sw trap 5 [#1]
TSTATE: 0000000080001606 TPC: 00000000005ed50c TNPC: 00000000005ed510 Y: 00000000    Not tainted
TPC: <get_stripe_work+0x1f4/0x200>

(exactly the same as the old one ;-) ). I have patched iscsi-target to
avoid an alignment bug on sparc64. Do you think a bug in ietd could
produce this kind of bug? The patch I have written for iscsi-target
(against SVN) is attached too.

	Regards,

	JKB

[-- Attachment #2: dmesg --]
[-- Type: text/plain, Size: 18048 bytes --]

PROMLIB: Sun IEEE Boot Prom 'OBP 4.23.4 2006/08/04 20:45'
PROMLIB: Root node compatible: sun4v
Linux version 2.6.23 (root@gershwin) (gcc version 4.1.3 20070831 (prerelease) (Debian 4.1.2-16)) #7 SMP Wed Oct 17 17:52:22 CEST 2007
ARCH: SUN4V
Ethernet address: 00:14:4f:6f:59:fe
OF stdout device is: /virtual-devices@100/console@1
PROM: Built device tree with 74930 bytes of memory.
MDESC: Size is 32560 bytes.
PLATFORM: banner-name [Sun Fire(TM) T1000]
PLATFORM: name [SUNW,Sun-Fire-T1000]
PLATFORM: hostid [846f59fe]
PLATFORM: serial# [00ab4130]
PLATFORM: stick-frequency [3b9aca00]
PLATFORM: mac-address [144f6f59fe]
PLATFORM: watchdog-resolution [1000 ms]
PLATFORM: watchdog-max-timeout [31536000000 ms]
On node 0 totalpages: 522246
  Normal zone: 3583 pages used for memmap
  Normal zone: 0 pages reserved
  Normal zone: 518663 pages, LIFO batch:15
  Movable zone: 0 pages used for memmap
Built 1 zonelists in Zone order.  Total pages: 518663
Kernel command line: root=/dev/md0 ro md=0,/dev/sda4,/dev/sdb4 raid=noautodetect
md: Will configure md0 (super-block) from /dev/sda4,/dev/sdb4, below.
PID hash table entries: 4096 (order: 12, 32768 bytes)
clocksource: mult[10000] shift[16]
clockevent: mult[80000000] shift[31]
Console: colour dummy device 80x25
console [tty0] enabled
Dentry cache hash table entries: 524288 (order: 9, 4194304 bytes)
Inode-cache hash table entries: 262144 (order: 8, 2097152 bytes)
Memory: 4138072k available (2608k kernel code, 960k data, 144k init) [fffff80000000000,00000000fffc8000]
SLUB: Genslabs=23, HWalign=32, Order=0-2, MinObjects=8, CPUs=32, Nodes=1
Calibrating delay using timer specific routine.. 1995.16 BogoMIPS (lpj=3990330)
Mount-cache hash table entries: 512
Brought up 24 CPUs
xor: automatically using best checksumming function: Niagara
   Niagara   :   240.000 MB/sec
xor: using function: Niagara (240.000 MB/sec)
NET: Registered protocol family 16
PCI: Probing for controllers.
SUN4V_PCI: Registered hvapi major[1] minor[0]
/pci@780: SUN4V PCI Bus Module
/pci@780: PCI IO[e810000000] MEM[ea00000000]
/pci@7c0: SUN4V PCI Bus Module
/pci@7c0: PCI IO[f010000000] MEM[f200000000]
PCI: Scanning PBM /pci@7c0
PCI: Scanning PBM /pci@780
ebus: No EBus's found.
SCSI subsystem initialized
NET: Registered protocol family 2
Time: stick clocksource has been installed.
Switched to high resolution mode on CPU 0
Switched to high resolution mode on CPU 20
Switched to high resolution mode on CPU 8
Switched to high resolution mode on CPU 21
Switched to high resolution mode on CPU 9
Switched to high resolution mode on CPU 22
Switched to high resolution mode on CPU 10
Switched to high resolution mode on CPU 23
Switched to high resolution mode on CPU 11
Switched to high resolution mode on CPU 12
Switched to high resolution mode on CPU 13
Switched to high resolution mode on CPU 1
Switched to high resolution mode on CPU 14
Switched to high resolution mode on CPU 2
Switched to high resolution mode on CPU 15
Switched to high resolution mode on CPU 3
Switched to high resolution mode on CPU 16
Switched to high resolution mode on CPU 4
Switched to high resolution mode on CPU 17
Switched to high resolution mode on CPU 5
Switched to high resolution mode on CPU 18
Switched to high resolution mode on CPU 6
Switched to high resolution mode on CPU 19
Switched to high resolution mode on CPU 7
IP route cache hash table entries: 131072 (order: 7, 1048576 bytes)
TCP established hash table entries: 262144 (order: 9, 6291456 bytes)
TCP bind hash table entries: 65536 (order: 7, 1048576 bytes)
TCP: Hash tables configured (established 262144 bind 65536)
TCP reno registered
Mini RTC Driver
VFS: Disk quotas dquot_6.5.1
Dquot-cache hash table entries: 1024 (order 0, 8192 bytes)
async_tx: api initialized (async)
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered (default)
f026e2b0: ttyS0 at I/O 0x0 (irq = 1) is a SUN4V HCONS
console [ttyHV0] enabled
tg3.c:v3.81 (September 5, 2007)
PCI: Enabling device: (0001:03:04.0), cmd 2
eth0: Tigon3 [partno(BCM95714) rev 9001 PHY(5714)] (PCIX:133MHz:64-bit) 10/100/1000Base-T Ethernet 00:14:4f:6f:59:fe
eth0: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] WireSpeed[1] TSOcap[1]
eth0: dma_rwctrl[76148000] dma_mask[32-bit]
PCI: Enabling device: (0001:03:04.1), cmd 2
eth1: Tigon3 [partno(BCM95714) rev 9001 PHY(5714)] (PCIX:133MHz:64-bit) 10/100/1000Base-T Ethernet 00:14:4f:6f:59:ff
eth1: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] WireSpeed[1] TSOcap[1]
eth1: dma_rwctrl[76148000] dma_mask[32-bit]
PCI: Enabling device: (0001:04:01.0), cmd 2
eth2: Tigon3 [partno(BCM95704) rev 2100 PHY(5704)] (PCIX:100MHz:64-bit) 10/100/1000Base-T Ethernet 00:14:4f:6f:5a:00
eth2: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] WireSpeed[1] TSOcap[1]
eth2: dma_rwctrl[769f8000] dma_mask[32-bit]
PCI: Enabling device: (0001:04:01.1), cmd 2
eth3: Tigon3 [partno(BCM95704) rev 2100 PHY(5704)] (PCIX:100MHz:64-bit) 10/100/1000Base-T Ethernet 00:14:4f:6f:5a:01
eth3: RXcsums[1] LinkChgREG[0] MIirq[0] ASF[0] WireSpeed[1] TSOcap[1]
eth3: dma_rwctrl[769f8000] dma_mask[32-bit]
Fusion MPT base driver 3.04.05
Copyright (c) 1999-2007 LSI Logic Corporation
Fusion MPT SAS Host driver 3.04.05
PCI: Enabling device: (0001:04:02.0), cmd 17
mptbase: Initiating ioc0 bringup
ioc0: LSISAS1064 A3: Capabilities={Initiator}
scsi0 : ioc0: LSISAS1064 A3, FwRev=010a0000h, Ports=1, MaxQ=511, IRQ=22
scsi 0:0:0:0: Direct-Access     FUJITSU  MAY2073RCSUN72G  0501 PQ: 0 ANSI: 4
sd 0:0:0:0: [sda] 143374738 512-byte hardware sectors (73408 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: d3 00 00 08
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:0:0: [sda] 143374738 512-byte hardware sectors (73408 MB)
sd 0:0:0:0: [sda] Write Protect is off
sd 0:0:0:0: [sda] Mode Sense: d3 00 00 08
sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sda: sda1 sda2 sda3 sda4 sda5 sda6 sda7 sda8
sd 0:0:0:0: [sda] Attached SCSI disk
scsi 0:0:1:0: Direct-Access     FUJITSU  MAY2073RCSUN72G  0501 PQ: 0 ANSI: 4
sd 0:0:1:0: [sdb] 143374738 512-byte hardware sectors (73408 MB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: d3 00 00 08
sd 0:0:1:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:1:0: [sdb] 143374738 512-byte hardware sectors (73408 MB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: d3 00 00 08
sd 0:0:1:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sdb: sdb1 sdb2 sdb3 sdb4 sdb5 sdb6 sdb7 sdb8
sd 0:0:1:0: [sdb] Attached SCSI disk
Fusion MPT misc device (ioctl) driver 3.04.05
mptctl: Registered with Fusion MPT base driver
mptctl: /dev/mptctl @ (major,minor=10,220)
mice: PS/2 mouse device common for all mice
Software Watchdog Timer: 0.07 initialized. soft_noboot=0 soft_margin=60 sec (nowayout= 0)
md: raid1 personality registered for level 1
raid6: int64x1    185 MB/s
raid6: int64x2    266 MB/s
raid6: int64x4    261 MB/s
raid6: int64x8    125 MB/s
raid6: using algorithm int64x2 (266 MB/s)
md: raid6 personality registered for level 6
md: raid5 personality registered for level 5
md: raid4 personality registered for level 4
NET: Registered protocol family 1
NET: Registered protocol family 17
md: Skipping autodetection of RAID arrays. (raid=noautodetect)
md: Loading md0: /dev/sda4
md: bind<sda4>
md: bind<sdb4>
raid1: raid set md0 active with 2 out of 2 mirrors
kjournald starting.  Commit interval 5 seconds
EXT3-fs: mounted filesystem with ordered data mode.
VFS: Mounted root (ext3 filesystem) readonly.
Fusion MPT SPI Host driver 3.04.05
PCI: Enabling device: (0000:03:08.0), cmd 3
mptbase: Initiating ioc1 bringup
ioc1: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi1 : ioc1: LSI53C1030 C0, FwRev=01032700h, Ports=1, MaxQ=255, IRQ=14
PCI: Enabling device: (0000:03:08.1), cmd 3
mptbase: Initiating ioc2 bringup
ioc2: LSI53C1030 C0: Capabilities={Initiator,Target}
scsi2 : ioc2: LSI53C1030 C0, FwRev=01032700h, Ports=1, MaxQ=255, IRQ=15
scsi 2:0:8:0: Direct-Access     FUJITSU  MAW3300NC        0104 PQ: 0 ANSI: 3
 target2:0:8: Beginning Domain Validation
 target2:0:8: Ending Domain Validation
 target2:0:8: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 2:0:8:0: [sdc] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:8:0: [sdc] Write Protect is off
sd 2:0:8:0: [sdc] Mode Sense: b3 00 00 08
sd 2:0:8:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:8:0: [sdc] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:8:0: [sdc] Write Protect is off
sd 2:0:8:0: [sdc] Mode Sense: b3 00 00 08
sd 2:0:8:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdc: sdc1
sd 2:0:8:0: [sdc] Attached SCSI disk
scsi 2:0:9:0: Direct-Access     FUJITSU  MAW3300NC        0104 PQ: 0 ANSI: 3
 target2:0:9: Beginning Domain Validation
 target2:0:9: Ending Domain Validation
 target2:0:9: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 2:0:9:0: [sdd] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:9:0: [sdd] Write Protect is off
sd 2:0:9:0: [sdd] Mode Sense: b3 00 00 08
sd 2:0:9:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:9:0: [sdd] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:9:0: [sdd] Write Protect is off
sd 2:0:9:0: [sdd] Mode Sense: b3 00 00 08
sd 2:0:9:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdd: sdd1
sd 2:0:9:0: [sdd] Attached SCSI disk
scsi 2:0:10:0: Direct-Access     FUJITSU  MAW3300NC        0104 PQ: 0 ANSI: 3
 target2:0:10: Beginning Domain Validation
 target2:0:10: Ending Domain Validation
 target2:0:10: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 2:0:10:0: [sde] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:10:0: [sde] Write Protect is off
sd 2:0:10:0: [sde] Mode Sense: b3 00 00 08
sd 2:0:10:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:10:0: [sde] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:10:0: [sde] Write Protect is off
sd 2:0:10:0: [sde] Mode Sense: b3 00 00 08
sd 2:0:10:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sde: sde1
sd 2:0:10:0: [sde] Attached SCSI disk
scsi 2:0:11:0: Direct-Access     FUJITSU  MAW3300NC        0104 PQ: 0 ANSI: 3
 target2:0:11: Beginning Domain Validation
 target2:0:11: Ending Domain Validation
 target2:0:11: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 2:0:11:0: [sdf] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:11:0: [sdf] Write Protect is off
sd 2:0:11:0: [sdf] Mode Sense: b3 00 00 08
sd 2:0:11:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:11:0: [sdf] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:11:0: [sdf] Write Protect is off
sd 2:0:11:0: [sdf] Mode Sense: b3 00 00 08
sd 2:0:11:0: [sdf] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdf: sdf1
sd 2:0:11:0: [sdf] Attached SCSI disk
scsi 2:0:12:0: Direct-Access     FUJITSU  MAW3300NC        0104 PQ: 0 ANSI: 3
 target2:0:12: Beginning Domain Validation
 target2:0:12: Ending Domain Validation
 target2:0:12: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 2:0:12:0: [sdg] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:12:0: [sdg] Write Protect is off
sd 2:0:12:0: [sdg] Mode Sense: b3 00 00 08
sd 2:0:12:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:12:0: [sdg] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:12:0: [sdg] Write Protect is off
sd 2:0:12:0: [sdg] Mode Sense: b3 00 00 08
sd 2:0:12:0: [sdg] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdg: sdg1
sd 2:0:12:0: [sdg] Attached SCSI disk
scsi 2:0:13:0: Direct-Access     FUJITSU  MAW3300NC        0104 PQ: 0 ANSI: 3
 target2:0:13: Beginning Domain Validation
 target2:0:13: Ending Domain Validation
 target2:0:13: FAST-160 WIDE SCSI 320.0 MB/s DT IU QAS RTI WRFLOW PCOMP (6.25 ns, offset 127)
sd 2:0:13:0: [sdh] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:13:0: [sdh] Write Protect is off
sd 2:0:13:0: [sdh] Mode Sense: b3 00 00 08
sd 2:0:13:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 2:0:13:0: [sdh] 585937500 512-byte hardware sectors (300000 MB)
sd 2:0:13:0: [sdh] Write Protect is off
sd 2:0:13:0: [sdh] Mode Sense: b3 00 00 08
sd 2:0:13:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
 sdh: sdh1
sd 2:0:13:0: [sdh] Attached SCSI disk
EXT3 FS on md0, internal journal
loop: module loaded
Netfilter messages via NETLINK v0.30.
nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
device-mapper: ioctl: 4.11.0-ioctl (2006-10-12) initialised: dm-devel@redhat.com
md: md1 stopped.
md: bind<sdb2>
md: bind<sda2>
raid1: raid set md1 active with 2 out of 2 mirrors
md: md2 stopped.
md: bind<sdb5>
md: bind<sda5>
raid1: raid set md2 active with 2 out of 2 mirrors
md: md3 stopped.
md: bind<sdb6>
md: bind<sda6>
raid1: raid set md3 active with 2 out of 2 mirrors
md: md4 stopped.
md: bind<sdb7>
md: bind<sda7>
raid1: raid set md4 active with 2 out of 2 mirrors
md: md5 stopped.
md: bind<sdb8>
md: bind<sda8>
raid1: raid set md5 active with 2 out of 2 mirrors
md: md6 stopped.
md: bind<sdb1>
md: bind<sda1>
raid1: raid set md6 active with 2 out of 2 mirrors
kjournald starting.  Commit interval 5 seconds
EXT3 FS on md1, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting.  Commit interval 5 seconds
EXT3 FS on md5, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting.  Commit interval 5 seconds
EXT3 FS on md4, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting.  Commit interval 5 seconds
EXT3 FS on md2, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
kjournald starting.  Commit interval 5 seconds
EXT3 FS on md3, internal journal
EXT3-fs: mounted filesystem with ordered data mode.
Adding 7815536k swap on /dev/md6.  Priority:-1 extents:1 across:7815536k
tg3: eth1: Link is up at 1000 Mbps, full duplex.
tg3: eth1: Flow control is on for TX and on for RX.
u32 classifier
    Performance counters on
    input device check on 
    Actions configured 
Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
Loading iSCSI transport class v2.0-724.
iscsi: registered transport (tcp)
ip_tables: (C) 2000-2006 Netfilter Core Team
md: md_d0 stopped.
md: bind<sdd1>
md: bind<sde1>
md: bind<sdf1>
md: bind<sdg1>
md: bind<sdh1>
md: bind<sdc1>
raid5: device sdc1 operational as raid disk 0
raid5: device sdh1 operational as raid disk 5
raid5: device sdg1 operational as raid disk 4
raid5: device sdf1 operational as raid disk 3
raid5: device sde1 operational as raid disk 2
raid5: device sdd1 operational as raid disk 1
raid5: allocated 12518kB for md_d0
raid5: raid level 5 set md_d0 active with 6 out of 6 devices, algorithm 2
RAID5 conf printout:
 --- rd:6 wd:6
 disk 0, o:1, dev:sdc1
 disk 1, o:1, dev:sdd1
 disk 2, o:1, dev:sde1
 disk 3, o:1, dev:sdf1
 disk 4, o:1, dev:sdg1
 disk 5, o:1, dev:sdh1
 md_d0: p1
scsi3 : iSCSI Initiator over TCP/IP
scsi 3:0:0:0: Direct-Access     IET      VIRTUAL-DISK     0    PQ: 0 ANSI: 4
sd 3:0:0:0: [sdi] 2929451520 512-byte hardware sectors (1499879 MB)
sd 3:0:0:0: [sdi] Write Protect is off
sd 3:0:0:0: [sdi] Mode Sense: 77 00 00 08
sd 3:0:0:0: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sd 3:0:0:0: [sdi] 2929451520 512-byte hardware sectors (1499879 MB)
sd 3:0:0:0: [sdi] Write Protect is off
sd 3:0:0:0: [sdi] Mode Sense: 77 00 00 08
sd 3:0:0:0: [sdi] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
 sdi: sdi1
sd 3:0:0:0: [sdi] Attached SCSI disk
md: bind<md_d0p1>
md: bind<sdi1>
md: md7: raid array is not clean -- starting background reconstruction
raid1: raid set md7 active with 2 out of 2 mirrors
md: resync of RAID array md7
md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
md: using 256k window, over a total of 1464725632 blocks.
kernel BUG at drivers/md/raid5.c:380!
              \|/ ____ \|/
              "@'/ .. \`@"
              /_| \__/ |_\
                 \__U_/
md7_resync(4258): Kernel bad sw trap 5 [#1]
TSTATE: 0000000080001606 TPC: 00000000005ed50c TNPC: 00000000005ed510 Y: 00000000    Not tainted
TPC: <get_stripe_work+0x1f4/0x200>
g0: 0000000000000005 g1: 00000000007c0400 g2: 0000000000000001 g3: 0000000000748400
g4: fffff800f904db00 g5: fffff80002088000 g6: fffff800ea784000 g7: 0000000000748528
o0: 0000000000000029 o1: 0000000000715798 o2: 000000000000017c o3: 0000000000000005
o4: 0000000000000006 o5: fffff800e9e6a990 sp: fffff800ea786d81 ret_pc: 00000000005ed504
RPC: <get_stripe_work+0x1ec/0x200>
l0: 0000000000000002 l1: ffffffffffffffff l2: fffff800e9e6aa78 l3: fffff800e9e6a918
l4: fffff800e9e6a9b8 l5: fffffffffffffff8 l6: 0000000000000005 l7: fffff800fde02800
i0: fffff800e9e6a958 i1: 0000000000000000 i2: 0000000000000004 i3: fffff800ea787720
i4: 0000000000000080 i5: 0000000000000080 i6: fffff800ea786e51 i7: 00000000005f0274
I7: <handle_stripe5+0x4fc/0x1340>
Caller[00000000005f0274]: handle_stripe5+0x4fc/0x1340
Caller[00000000005f211c]: handle_stripe+0x24/0x13e0
Caller[00000000005f4450]: make_request+0x358/0x600
Caller[0000000000542890]: generic_make_request+0x198/0x220
Caller[00000000005eb240]: sync_request+0x608/0x640
Caller[00000000005fef7c]: md_do_sync+0x384/0x920
Caller[00000000005ff8f0]: md_thread+0x38/0x140
Caller[0000000000478b40]: kthread+0x48/0x80
Caller[00000000004273d0]: kernel_thread+0x38/0x60
Caller[0000000000478de0]: kthreadd+0x148/0x1c0
Instruction DUMP: 9210217c  7ff8f57f  90122398 <91d02005> 30680004  01000000  01000000  01000000  9de3bf00 

[-- Attachment #3: config.gz --]
[-- Type: application/gzip, Size: 6129 bytes --]

[-- Attachment #4: iscsi.patch --]
[-- Type: text/x-diff, Size: 2520 bytes --]

--- kernel/iscsi.old.c  2007-10-17 12:44:09.000000000 +0200
+++ kernel/iscsi.c      2007-10-17 11:19:14.000000000 +0200
@@ -726,13 +726,26 @@
        case READ_10:
        case WRITE_10:
        case WRITE_VERIFY:
-               *off = be32_to_cpu(*(u32 *)&cmd[2]);
+               *off = (((u32) cmd[2]) << 24) |
+                       (((u32) cmd[3]) << 16) |
+                       (((u32) cmd[4]) << 8) |
+                       cmd[5];
                *len = (cmd[7] << 8) + cmd[8];
                break;
        case READ_16:
        case WRITE_16:
-               *off = be64_to_cpu(*(u64 *)&cmd[2]);
-               *len = be32_to_cpu(*(u32 *)&cmd[10]);
+               *off = (((u64) cmd[2]) << 56) |
+                       (((u64) cmd[3]) << 48) |
+                       (((u64) cmd[4]) << 40) |
+                       (((u64) cmd[5]) << 32) |
+                       (((u64) cmd[6]) << 24) |
+                       (((u64) cmd[7]) << 16) |
+                       (((u64) cmd[8]) << 8) |
+                       cmd[9];
+               *len = (((u32) cmd[10]) << 24) |
+                       (((u32) cmd[11]) << 16) |
+                       (((u32) cmd[12]) << 8) |
+                       cmd[13];
                break;
        default:
                BUG();
--- kernel/target_disk.old.c    2007-10-17 11:10:19.000000000 +0200
+++ kernel/target_disk.c        2007-10-17 16:04:06.000000000 +0200
@@ -66,13 +66,15 @@
        unsigned char geo_m_pg[] = {0x04, 0x16, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00,
                                    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                                    0x00, 0x00, 0x00, 0x00, 0x3a, 0x98, 0x00, 0x00};
-       u32 ncyl, *p;
+       u32 ncyl;
+       u32 n;
        /* assume 0xff heads, 15krpm. */
        memcpy(ptr, geo_m_pg, sizeof(geo_m_pg));
        ncyl = sec >> 14; /* 256 * 64 */
-       p = (u32 *)(ptr + 1);
-       *p = *p | cpu_to_be32(ncyl);
+       memcpy(&n,ptr+1,sizeof(u32));
+       n = n | cpu_to_be32(ncyl);
+       memcpy(ptr+1, &n, sizeof(u32));
        return sizeof(geo_m_pg);
 }
 
@@ -249,7 +251,10 @@
        struct iet_volume *lun;
        int rest, idx = 0;
 
-       size = be32_to_cpu(*(u32 *)&req->scb[6]);
+       size = (((u32) req->scb[6]) << 24) |
+                       (((u32) req->scb[7]) << 16) |
+                       (((u32) req->scb[8]) << 8) |
+                       req->scb[9];
        if (size < 16)
                return -1;
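
The underlying problem this patch addresses is that expressions like
*(u32 *)&cmd[2] perform a misaligned load, which traps on sparc64 (the
LBA sits at byte offset 2 of the CDB, which is not 4-byte aligned). The
two safe idioms the patch uses - byte-wise assembly and memcpy() through
an aligned temporary - look like this as a standalone userspace sketch
(the helper names are illustrative, not from the patch):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Idiom 1: assemble the big-endian value byte by byte - no load wider
 * than a byte is issued, so alignment never matters. */
static uint32_t load_be32_bytes(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

/* Idiom 2: memcpy() into an aligned temporary, then byte-swap on
 * little-endian hosts (a no-op on big-endian sparc64). */
static uint32_t load_be32_memcpy(const void *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));	/* compiler emits an alignment-safe copy */
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
	v = __builtin_bswap32(v);
#endif
	return v;
}

int main(void)
{
	/* A READ_10-style CDB: the 32-bit LBA is at offset 2. */
	const uint8_t cdb[10] = { 0x28, 0, 0x12, 0x34, 0x56, 0x78, 0, 0, 8, 0 };

	printf("%#x %#x\n", load_be32_bytes(cdb + 2), load_be32_memcpy(cdb + 2));
	return 0;
}

(In-kernel, the helpers in asm/unaligned.h exist for the same purpose;
the patch simply open-codes the conversion.)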



* Re: [BUG] Raid5 trouble
  2007-10-17 15:40     ` Dan Williams
@ 2007-10-17 16:44       ` BERTRAND Joël
  2007-10-18  0:46         ` Dan Williams
  2007-10-19  2:55       ` Bill Davidsen
  1 sibling, 1 reply; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-17 16:44 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-raid, sparclinux

Dan Williams wrote:
> On 10/17/07, Dan Williams <dan.j.williams@intel.com> wrote:
>> On 10/17/07, BERTRAND Joël <joel.bertrand@systella.fr> wrote:
>>> BERTRAND Joël wrote:
>>>>     Hello,
>>>>
>>>>     I run the 2.6.23 Linux kernel on two T1000 (sparc64) servers. Each
>>>> server has a partitionable raid5 array (/dev/md/d0), and I need to
>>>> synchronize both raid5 volumes with raid1. I have therefore tried to
>>>> build a raid1 volume between /dev/md/d0p1 and /dev/sdi1 (exported over
>>>> iSCSI from the second server), and I get a BUG:
>>>>
>>>> Root gershwin:[/usr/scripts] > mdadm -C /dev/md7 -l1 -n2 /dev/md/d0p1
>>>> /dev/sdi1
>>>> ...
>>>         Hello,
>>>
>>>         I have fixed iscsi-target and tested it. It now works without
>>> any trouble; the patches were posted to the iscsi-target mailing list.
>>> When I use iSCSI to access the foreign raid5 volume, it works fine: I
>>> can format the foreign volume, copy large files onto it, and so on. But
>>> when I try to create a new raid1 volume from a local raid5 volume and a
>>> foreign raid5 volume, I get my well-known Oops. You can find my dmesg
>>> after the Oops below:
>>>
>> Can you send your .config and your bootup dmesg?
>>
> 
> I found a problem which may lead to the operations count dropping
> below zero.  If ops_complete_biofill() gets preempted in between the
> following calls:
> 
> raid5.c:554> clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
> raid5.c:555> clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
> 
> ...then get_stripe_work() can recount/re-acknowledge STRIPE_OP_BIOFILL
> causing the assertion.  In fact, the 'pending' bit should always be
> cleared first, but the other cases are protected by
> spin_lock(&sh->lock).  Patch attached.

	Dan,

	I have modified get_stripe_work like this:

static unsigned long get_stripe_work(struct stripe_head *sh)
{
         unsigned long pending;
         int ack = 0;
         int a,b,c,d,e,f,g;

         pending = sh->ops.pending;

         test_and_ack_op(STRIPE_OP_BIOFILL, pending);
         a=ack;
         test_and_ack_op(STRIPE_OP_COMPUTE_BLK, pending);
         b=ack;
         test_and_ack_op(STRIPE_OP_PREXOR, pending);
         c=ack;
         test_and_ack_op(STRIPE_OP_BIODRAIN, pending);
         d=ack;
         test_and_ack_op(STRIPE_OP_POSTXOR, pending);
         e=ack;
         test_and_ack_op(STRIPE_OP_CHECK, pending);
         f=ack;
         if (test_and_clear_bit(STRIPE_OP_IO, &sh->ops.pending))
                 ack++;
         g=ack;

         sh->ops.count -= ack;

         if (sh->ops.count < 0)
                 printk("%d %d %d %d %d %d %d\n", a, b, c, d, e, f, g);
         BUG_ON(sh->ops.count < 0);

         return pending;
}

and I obtain on console :

  1 1 1 1 1 2
kernel BUG at drivers/md/raid5.c:390!
               \|/ ____ \|/
               "@'/ .. \`@"
               /_| \__/ |_\
                  \__U_/
md7_resync(5409): Kernel bad sw trap 5 [#1]

	I hope that can help you...

	JKB


* Re: [BUG] Raid5 trouble
  2007-10-17 16:44       ` BERTRAND Joël
@ 2007-10-18  0:46         ` Dan Williams
  2007-10-18  8:29           ` BERTRAND Joël
  0 siblings, 1 reply; 36+ messages in thread
From: Dan Williams @ 2007-10-18  0:46 UTC (permalink / raw)
  To: BERTRAND Joël; +Cc: linux-raid, sparclinux

[-- Attachment #1: Type: text/plain, Size: 1608 bytes --]

On Wed, 2007-10-17 at 09:44 -0700, BERTRAND Joël wrote:
>         Dan,
> 
>         I have modified get_stripe_work like this :
> 
> static unsigned long get_stripe_work(struct stripe_head *sh)
> {
>          unsigned long pending;
>          int ack = 0;
>          int a,b,c,d,e,f,g;
> 
>          pending = sh->ops.pending;
> 
>          test_and_ack_op(STRIPE_OP_BIOFILL, pending);
>          a=ack;
>          test_and_ack_op(STRIPE_OP_COMPUTE_BLK, pending);
>          b=ack;
>          test_and_ack_op(STRIPE_OP_PREXOR, pending);
>          c=ack;
>          test_and_ack_op(STRIPE_OP_BIODRAIN, pending);
>          d=ack;
>          test_and_ack_op(STRIPE_OP_POSTXOR, pending);
>          e=ack;
>          test_and_ack_op(STRIPE_OP_CHECK, pending);
>          f=ack;
>          if (test_and_clear_bit(STRIPE_OP_IO, &sh->ops.pending))
>                  ack++;
>          g=ack;
> 
>          sh->ops.count -= ack;
> 
>          if (sh->ops.count < 0)
>                  printk("%d %d %d %d %d %d %d\n", a, b, c, d, e, f, g);
>          BUG_ON(sh->ops.count < 0);
> 
>          return pending;
> }
> 
> and I obtain on console :
> 
>   1 1 1 1 1 2
> kernel BUG at drivers/md/raid5.c:390!
>                \|/ ____ \|/
>                "@'/ .. \`@"
>                /_| \__/ |_\
>                   \__U_/
> md7_resync(5409): Kernel bad sw trap 5 [#1]
> 
>         I hope that can help you...
> 
>         JKB

This gives more evidence that it is probably a mishandling of
STRIPE_OP_BIOFILL.  The attached patch (which replaces the previous one)
moves the clearing of these bits into handle_stripe5() and adds some
debug information.
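
The repaired protocol, paraphrased as a tiny standalone sketch (not the
verbatim kernel code; the patch below is authoritative): the completion
callback, which runs without sh->lock, only ever *sets* 'complete', and
all three bits are cleared together in handle_stripe5() under sh->lock -
the same lock get_stripe_work() holds - so the acknowledge test can
never observe a half-cleared state:

#include <stdio.h>

#define BIOFILL 1UL	/* bit mask standing in for STRIPE_OP_BIOFILL */

int main(void)
{
	unsigned long pending = BIOFILL, ack = BIOFILL, complete = 0;

	/* Completion side (no lock held): publish completion only. */
	complete |= BIOFILL;

	/* handle_stripe5(), under sh->lock: retire the op in one place. */
	if (complete & BIOFILL) {
		pending &= ~BIOFILL;
		ack &= ~BIOFILL;
		complete &= ~BIOFILL;
	}

	/* get_stripe_work()'s test (pending && !complete && not yet acked)
	 * now sees either the fully-set or the fully-cleared state, never
	 * the mix that caused the double count. */
	printf("pending=%lu ack=%lu complete=%lu\n", pending, ack, complete);
	return 0;
}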

--
Dan

[-- Attachment #2: fix-biofill-clear2.patch --]
[-- Type: text/x-patch, Size: 1830 bytes --]

raid5: fix clearing of biofill operations (try2)

From: Dan Williams <dan.j.williams@intel.com>

ops_complete_biofill() runs outside of spin_lock(&sh->lock) and clears the
'pending' and 'ack' bits.  Since the test_and_ack_op() macro only checks
against 'complete' it can get an inconsistent snapshot of pending work.

Move the clearing of these bits to handle_stripe5(), under the lock.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---

 drivers/md/raid5.c |   17 ++++++++++++++---
 1 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index f96dea9..3808f52 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -377,7 +377,12 @@ static unsigned long get_stripe_work(struct stripe_head *sh)
 		ack++;
 
 	sh->ops.count -= ack;
-	BUG_ON(sh->ops.count < 0);
+	if (unlikely(sh->ops.count < 0)) {
+		printk(KERN_ERR "pending: %#lx ops.pending: %#lx ops.ack: %#lx "
+			"ops.complete: %#lx\n", pending, sh->ops.pending,
+			sh->ops.ack, sh->ops.complete);
+		BUG();
+	}
 
 	return pending;
 }
@@ -551,8 +556,7 @@ static void ops_complete_biofill(void *stripe_head_ref)
 			}
 		}
 	}
-	clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
-	clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
+	set_bit(STRIPE_OP_BIOFILL, &sh->ops.complete);
 
 	return_io(return_bi);
 
@@ -2630,6 +2634,13 @@ static void handle_stripe5(struct stripe_head *sh)
 	s.expanded = test_bit(STRIPE_EXPAND_READY, &sh->state);
 	/* Now to look around and see what can be done */
 
+	/* clean-up completed biofill operations */
+	if (test_bit(STRIPE_OP_BIOFILL, &sh->ops.complete)) {
+		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
+		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
+		clear_bit(STRIPE_OP_BIOFILL, &sh->ops.complete);
+	}
+
 	rcu_read_lock();
 	for (i=disks; i--; ) {
 		mdk_rdev_t *rdev;


* Re: [BUG] Raid5 trouble
  2007-10-18  0:46         ` Dan Williams
@ 2007-10-18  8:29           ` BERTRAND Joël
  0 siblings, 0 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-18  8:29 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-raid, sparclinux

	Dan,

	I'm testing your latest patch (fix-biofill-clear2.patch). It seems to work:

Every 1.0s: cat /proc/mdstat                            Thu Oct 18 10:28:55 2007

Personalities : [raid1] [raid6] [raid5] [raid4]
md7 : active raid1 sdi1[1] md_d0p1[0]
       1464725632 blocks [2/2] [UU]
       [>....................]  resync =  0.4% (6442248/1464725632) finish=1216.6min speed=19974K/sec

md_d0 : active raid5 sdc1[0] sdh1[5] sdg1[4] sdf1[3] sde1[2] sdd1[1]
       1464725760 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]

	I hope it fixes the bug I have seen. I shall come back - tomorrow, I
think, as my raid volume requires more than 20 hours to resync - to say
whether it works fine.

	Regards,

	JKB


* Re: [BUG] Raid5 trouble
  2007-10-17 15:40     ` Dan Williams
  2007-10-17 16:44       ` BERTRAND Joël
@ 2007-10-19  2:55       ` Bill Davidsen
  2007-10-19  8:04         ` BERTRAND Joël
  1 sibling, 1 reply; 36+ messages in thread
From: Bill Davidsen @ 2007-10-19  2:55 UTC (permalink / raw)
  To: Dan Williams; +Cc: BERTRAND Joël, linux-raid, sparclinux

Dan Williams wrote:
> I found a problem which may lead to the operations count dropping
> below zero.  If ops_complete_biofill() gets preempted in between the
> following calls:
>
> raid5.c:554> clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
> raid5.c:555> clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
>
> ...then get_stripe_work() can recount/re-acknowledge STRIPE_OP_BIOFILL
> causing the assertion.  In fact, the 'pending' bit should always be
> cleared first, but the other cases are protected by
> spin_lock(&sh->lock).  Patch attached.
>   

Once this patch has been vetted, can it be offered to -stable for
2.6.23? Or, to be pedantic, it *can* be offered; will you make that happen?

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: [BUG] Raid5 trouble
  2007-10-19  2:55       ` Bill Davidsen
@ 2007-10-19  8:04         ` BERTRAND Joël
  2007-10-19 15:51           ` Dan Williams
  0 siblings, 1 reply; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-19  8:04 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Dan Williams, linux-raid, sparclinux

Bill Davidsen wrote:
> Dan Williams wrote:
>> I found a problem which may lead to the operations count dropping
>> below zero.  If ops_complete_biofill() gets preempted in between the
>> following calls:
>>
>> raid5.c:554> clear_bit(STRIPE_OP_BIOFILL, &sh->ops.ack);
>> raid5.c:555> clear_bit(STRIPE_OP_BIOFILL, &sh->ops.pending);
>>
>> ...then get_stripe_work() can recount/re-acknowledge STRIPE_OP_BIOFILL
>> causing the assertion.  In fact, the 'pending' bit should always be
>> cleared first, but the other cases are protected by
>> spin_lock(&sh->lock).  Patch attached.
>>   
> 
> Once this patch has been vetted, can it be offered to -stable for
> 2.6.23? Or, to be pedantic, it *can* be offered; will you make that happen?

	I no longer see any oops with this patch. But I cannot create a RAID1
array from a local RAID5 volume and a foreign RAID5 array exported over
iSCSI. iSCSI seems to work fine, but RAID1 creation randomly aborts due
to an unknown SCSI task on the target side.

	I have stressed the iSCSI target with simultaneous I/O (nullio,
fileio and blockio) without any trouble, so I suspect another bug in the
raid code (or an arch-specific bug). Over the last two days, I have run
some tests to isolate and reproduce this bug:

1/ the iSCSI target and initiator seem to work when I export a raid5
array over iSCSI;
2/ raid1 and raid5 seem to work with local disks;
3/ the iSCSI target is disconnected only when I create a raid1 volume
over iSCSI (blockio _and_ fileio), with the following message:

Oct 18 10:43:52 poulenc kernel: iscsi_trgt: cmnd_abort(1156) 29 1 0 42 57344 0 0
Oct 18 10:43:52 poulenc kernel: iscsi_trgt: Abort Task (01) issued on tid:1 lun:0 by sid:630024457682948 (Unknown Task)

	I ran some dd's (read and write in nullio) between initiator and
target for 12 hours without any disconnection, so the iSCSI code seems
to be robust. Initiator and target are alone on a single gigabit
ethernet link (without any switch). I'm investigating...

	Regards,

	JKB


* Re: [BUG] Raid5 trouble
  2007-10-19  8:04         ` BERTRAND Joël
@ 2007-10-19 15:51           ` Dan Williams
  2007-10-19 16:03             ` BERTRAND Joël
       [not found]             ` <4718DE66.8000905@tmr.com>
  0 siblings, 2 replies; 36+ messages in thread
From: Dan Williams @ 2007-10-19 15:51 UTC (permalink / raw)
  To: BERTRAND Joël; +Cc: Bill Davidsen, linux-raid, sparclinux

On Fri, 2007-10-19 at 01:04 -0700, BERTRAND Joël wrote:
>         I no longer see any oops with this patch. But I cannot create
> a RAID1 array from a local RAID5 volume and a foreign RAID5 array
> exported over iSCSI. iSCSI seems to work fine, but RAID1 creation
> randomly aborts due to an unknown SCSI task on the target side.

For now I am going to forward this patch to Neil for inclusion in
-stable and 2.6.24-rc.  I will add a "Tested-by: Joël Bertrand
<joel.bertrand@systella.fr>" unless you have an objection.

>         I have stressed the iSCSI target with simultaneous I/O
> (nullio, fileio and blockio) without any trouble, so I suspect another
> bug in the raid code (or an arch-specific bug). Over the last two days,
> I have run some tests to isolate and reproduce this bug:
> 1/ the iSCSI target and initiator seem to work when I export a raid5
> array over iSCSI;
> 2/ raid1 and raid5 seem to work with local disks;
> 3/ the iSCSI target is disconnected only when I create a raid1 volume
> over iSCSI (blockio _and_ fileio), with the following message:
> 
> Oct 18 10:43:52 poulenc kernel: iscsi_trgt: cmnd_abort(1156) 29 1 0 42 57344 0 0
> Oct 18 10:43:52 poulenc kernel: iscsi_trgt: Abort Task (01) issued on
> tid:1 lun:0 by sid:630024457682948 (Unknown Task)
> 
>         I ran some dd's (read and write in nullio) between initiator
> and target for 12 hours without any disconnection, so the iSCSI code
> seems to be robust. Initiator and target are alone on a single gigabit
> ethernet link (without any switch). I'm investigating...

Can you reproduce on 2.6.22?

Also, I do not think this is the cause of your failure, but you have
CONFIG_DMA_ENGINE=y in your config.  Setting this to 'n' will compile
out the unneeded checks for offload engines in async_memcpy and
async_xor.
> 
>         Regards,
>         JKB

Regards,
Dan


* Re: [BUG] Raid5 trouble
  2007-10-19 15:51           ` Dan Williams
@ 2007-10-19 16:03             ` BERTRAND Joël
       [not found]             ` <4718DE66.8000905@tmr.com>
  1 sibling, 0 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-19 16:03 UTC (permalink / raw)
  To: Dan Williams; +Cc: Bill Davidsen, linux-raid, sparclinux

Dan Williams wrote:
> On Fri, 2007-10-19 at 01:04 -0700, BERTRAND Joël wrote:
>>         I no longer see any oops with this patch. But I cannot create
>> a RAID1 array from a local RAID5 volume and a foreign RAID5 array
>> exported over iSCSI. iSCSI seems to work fine, but RAID1 creation
>> randomly aborts due to an unknown SCSI task on the target side.
> 
> For now I am going to forward this patch to Neil for inclusion in
> -stable and 2.6.24-rc.  I will add a "Tested-by: Joël Bertrand
> <joel.bertrand@systella.fr>" unless you have an objection.

	No objection.

>>         I ran some dd's (read and write in nullio) between initiator
>> and target for 12 hours without any disconnection, so the iSCSI code
>> seems to be robust. Initiator and target are alone on a single gigabit
>> ethernet link (without any switch). I'm investigating...
> 
> Can you reproduce on 2.6.22?

	I cannot downgrade these servers to 2.6.22 due to a bug in the FUTEX code.

> Also, I do not think this is the cause of your failure, but you have
> CONFIG_DMA_ENGINE=y in your config.  Setting this to 'n' will compile
> out the unneeded checks for offload engines in async_memcpy and
> async_xor.

	I will try...

	Regards,

	JKB


* Re: [BUG] Raid5 trouble
       [not found]             ` <4718DE66.8000905@tmr.com>
@ 2007-10-19 20:42               ` BERTRAND Joël
  2007-10-19 20:49                 ` [BUG] Raid1/5 over iSCSI trouble BERTRAND Joël
  0 siblings, 1 reply; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-19 20:42 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Dan Williams, linux-raid, sparclinux

Bill Davidsen wrote:
> Dan Williams wrote:
>> On Fri, 2007-10-19 at 01:04 -0700, BERTRAND Joël wrote:
>>   
>>>         I ran some dd's (read and write in nullio) between initiator
>>> and target for 12 hours without any disconnection, so the iSCSI code
>>> seems to be robust. Initiator and target are alone on a single gigabit
>>> ethernet link (without any switch). I'm investigating...
>>
>> Can you reproduce on 2.6.22?
>>
>> Also, I do not think this is the cause of your failure, but you have
>> CONFIG_DMA_ENGINE=y in your config.  Setting this to 'n' will compile
>> out the unneeded checks for offload engines in async_memcpy and
>> async_xor.
> 
> Given that offload engines are far less tested code, I think this is a 
> very good thing to try!

	I'm trying without CONFIG_DMA_ENGINE=y. istd1 only uses 40% of one
CPU while I rebuild my raid1 array. 1% of the array has now
resynchronized without any hang.

Root gershwin:[/usr/scripts] > cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md7 : active raid1 sdi1[2] md_d0p1[0]
       1464725632 blocks [2/1] [U_]
       [>....................]  recovery =  1.0% (15705536/1464725632) finish=1103.9min speed=21875K/sec

	Regards,

	JKB


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 20:42               ` BERTRAND Joël
@ 2007-10-19 20:49                 ` BERTRAND Joël
  2007-10-19 21:02                   ` [Iscsitarget-devel] " Ross S. W. Walker
  2007-10-19 21:04                   ` BERTRAND Joël
  0 siblings, 2 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-19 20:49 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel

BERTRAND Joël wrote:
> 	I'm trying without CONFIG_DMA_ENGINE=y. istd1 only uses 40% of one 
> CPU while I rebuild my raid1 array, and 1% of the array has now 
> resynchronized without any hang.

	Same result...

connection2:0: iscsi: detected conn error (1011)
session2: iscsi: session recovery timed out after 120 secs
sd 4:0:0:0: scsi: Device offlined - not ready after error recovery
sd 4:0:0:0: scsi: Device offlined - not ready after error recovery
sd 4:0:0:0: scsi: Device offlined - not ready after error recovery
sd 4:0:0:0: scsi: Device offlined - not ready after error recovery
sd 4:0:0:0: scsi: Device offlined - not ready after error recovery
sd 4:0:0:0: scsi: Device offlined - not ready after error recovery
sd 4:0:0:0: scsi: Device offlined - not ready after error recovery

	Regards,

	JKB


* RE: [Iscsitarget-devel] [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 20:49                 ` [BUG] Raid1/5 over iSCSI trouble BERTRAND Joël
@ 2007-10-19 21:02                   ` Ross S. W. Walker
  2007-10-19 21:06                     ` BERTRAND Joël
  2007-10-19 21:04                   ` BERTRAND Joël
  1 sibling, 1 reply; 36+ messages in thread
From: Ross S. W. Walker @ 2007-10-19 21:02 UTC (permalink / raw)
  To: BERTRAND Joël, Bill Davidsen
  Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel

BERTRAND Joël wrote:
> 
> 	Same result...
> 
> connection2:0: iscsi: detected conn error (1011)
> session2: iscsi: session recovery timed out after 120 secs
> sd 4:0:0:0: scsi: Device offlined - not ready after error recovery
> [snip]

I am unsure why you would want to set up an iSCSI RAID1, but before
doing so I would try to verify that each independent iSCSI session
is bulletproof.

Try testing and benchmarking each session independently.
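
For example, something like this against the raw iSCSI disk (a sketch;
the device name follows this thread, the sizes are arbitrary, and the
write test destroys data on /dev/sdi1):

dd if=/dev/sdi1 of=/dev/null bs=1M count=4096 iflag=direct    # read test
dd if=/dev/zero of=/dev/sdi1 bs=1M count=4096 oflag=direct    # write test (destructive)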

-Ross


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 20:49                 ` [BUG] Raid1/5 over iSCSI trouble BERTRAND Joël
  2007-10-19 21:02                   ` [Iscsitarget-devel] " Ross S. W. Walker
@ 2007-10-19 21:04                   ` BERTRAND Joël
  2007-10-19 21:08                     ` Ross S. W. Walker
                                       ` (3 more replies)
  1 sibling, 4 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-19 21:04 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel

BERTRAND Joël wrote:
> 	Same result...
> 
> connection2:0: iscsi: detected conn error (1011)
> session2: iscsi: session recovery timed out after 120 secs
> sd 4:0:0:0: scsi: Device offlined - not ready after error recovery
> [snip]

	Sorry for that last mail. I have found another problem, but I don't 
know if this bug comes from iscsi-target or raid5 itself. The iSCSI 
target is disconnected because the istd1 and md_d0_raid5 kernel threads 
each use 100% of a CPU!

Tasks: 235 total,   6 running, 227 sleeping,   0 stopped,   2 zombie
Cpu(s):  0.1%us, 12.5%sy,  0.0%ni, 87.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4139032k total,   218424k used,  3920608k free,    10136k buffers
Swap:  7815536k total,        0k used,  7815536k free,    64808k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  5824 root      15  -5     0    0    0 R  100  0.0  10:34.25 istd1
  5599 root      15  -5     0    0    0 R  100  0.0   7:25.43 md_d0_raid5

	Regards,

	JKB


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:02                   ` [Iscsitarget-devel] " Ross S. W. Walker
@ 2007-10-19 21:06                     ` BERTRAND Joël
  2007-10-19 21:10                       ` Ross S. W. Walker
  2007-10-19 21:11                       ` [Iscsitarget-devel] " Scott Kaelin
  0 siblings, 2 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-19 21:06 UTC (permalink / raw)
  To: Ross S. W. Walker
  Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel,
	Bill Davidsen

Ross S. W. Walker wrote:
> I am unsure why you would want to set up an iSCSI RAID1, but before
> doing so I would try to verify that each independent iSCSI session
> is bulletproof.

	I use one and only one iSCSI session. The raid1 array is built between 
a local volume and an iSCSI volume.

	JKB


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:04                   ` BERTRAND Joël
@ 2007-10-19 21:08                     ` Ross S. W. Walker
  2007-10-19 21:12                     ` Dan Williams
                                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 36+ messages in thread
From: Ross S. W. Walker @ 2007-10-19 21:08 UTC (permalink / raw)
  To: BERTRAND Joël, Bill Davidsen
  Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel

BERTRAND Joël wrote:
> 
> 	Sorry for that last mail. I have found another problem, but I don't 
> know if this bug comes from iscsi-target or raid5 itself. The iSCSI 
> target is disconnected because the istd1 and md_d0_raid5 kernel threads 
> each use 100% of a CPU!
> 
>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>   5824 root      15  -5     0    0    0 R  100  0.0  10:34.25 istd1
>   5599 root      15  -5     0    0    0 R  100  0.0   7:25.43 md_d0_raid5

If you have 2 iSCSI sessions mirrored then any failure along either
path will hose the setup. Plus having iSCSI and MD RAID fight over the
same resources in the kernel is a recipe for a race condition.

How about exploring MPIO and DRBD?
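
A DRBD resource for this pair of hosts might look roughly like the
sketch below (hostnames taken from this thread; the addresses, port and
exact syntax are invented and depend on the DRBD version):

resource r0 {
    protocol C;                      # synchronous replication
    on gershwin {
        device    /dev/drbd0;
        disk      /dev/md/d0p1;      # local raid5 partition
        address   192.168.0.1:7788;  # placeholder address
        meta-disk internal;
    }
    on poulenc {
        device    /dev/drbd0;
        disk      /dev/md/d0p1;
        address   192.168.0.2:7788;  # placeholder address
        meta-disk internal;
    }
}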

-Ross


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:06                     ` BERTRAND Joël
@ 2007-10-19 21:10                       ` Ross S. W. Walker
  2007-10-20  7:45                         ` BERTRAND Joël
  2007-10-19 21:11                       ` [Iscsitarget-devel] " Scott Kaelin
  1 sibling, 1 reply; 36+ messages in thread
From: Ross S. W. Walker @ 2007-10-19 21:10 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel,
	Bill Davidsen

BERTRAND Joël wrote:
> 
> Ross S. W. Walker wrote:
> > I am unsure why you would want to set up an iSCSI RAID1, but before
> > doing so I would try to verify that each independent iSCSI session
> > is bulletproof.
> 
> 	I use one and only one iSCSI session. The raid1 array is built 
> between a local volume and an iSCSI volume.

Oh, in that case you will be much better served with DRBD, which
would provide you with what you want without creating a Frankenstein
setup...

-Ross


* Re: [Iscsitarget-devel] [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:06                     ` BERTRAND Joël
  2007-10-19 21:10                       ` Ross S. W. Walker
@ 2007-10-19 21:11                       ` Scott Kaelin
  1 sibling, 0 replies; 36+ messages in thread
From: Scott Kaelin @ 2007-10-19 21:11 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: Ross S. W. Walker, linux-raid, sparclinux, Dan Williams,
	iscsitarget-devel, Bill Davidsen

[snip]
> >
> > I am unsure why you would want to setup an iSCSI RAID1, but before
> > doing so I would try to verify that each independant iSCSI session
> > is bullet proof.
>
>         I use one and only one iSCSI session. Raid1 array is built between a
> local and iSCSI volume.

So the problem doesn't happen when you do I/O over the iSCSI session
alone?

Wouldn't it be better to do the RAID1 on the target machine? Then you
don't need to mess around with the weird timing behavior of remote
versus local writes.

If you want to have the disks on 2 different machines and have them
mirrored, DRBD is the way to go.

@Ross: He is trying to mirror his local drive with an iSCSI LUN.

-- 
Scott Kaelin
Sitrof Technologies
skaelin@sitrof.com


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:04                   ` BERTRAND Joël
  2007-10-19 21:08                     ` Ross S. W. Walker
@ 2007-10-19 21:12                     ` Dan Williams
  2007-10-20  8:05                       ` BERTRAND Joël
  2007-10-19 21:19                     ` Ming Zhang
  2007-10-19 23:50                     ` Bill Davidsen
  3 siblings, 1 reply; 36+ messages in thread
From: Dan Williams @ 2007-10-19 21:12 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: Bill Davidsen, linux-raid, sparclinux, iscsitarget-devel

On Fri, 2007-10-19 at 14:04 -0700, BERTRAND Joël wrote:
> 
>         Sorry for that last mail. I have found another problem, but I
> don't know if this bug comes from iscsi-target or raid5 itself. The
> iSCSI target is disconnected because the istd1 and md_d0_raid5 kernel
> threads each use 100% of a CPU!
> 
>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>   5824 root      15  -5     0    0    0 R  100  0.0  10:34.25 istd1
>   5599 root      15  -5     0    0    0 R  100  0.0   7:25.43 md_d0_raid5

What is the output of:
cat /proc/5824/wchan
cat /proc/5599/wchan
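
(If reading /proc/<pid>/wchan directly is awkward, ps can report the
same field — a sketch; the wchan column may be truncated by some ps
versions:)

ps -p 5824,5599 -o pid,comm,wchan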

Thanks,
Dan

* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:04                   ` BERTRAND Joël
  2007-10-19 21:08                     ` Ross S. W. Walker
  2007-10-19 21:12                     ` Dan Williams
@ 2007-10-19 21:19                     ` Ming Zhang
  2007-10-19 23:50                     ` Bill Davidsen
  3 siblings, 0 replies; 36+ messages in thread
From: Ming Zhang @ 2007-10-19 21:19 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel,
	Bill Davidsen

On Fri, 2007-10-19 at 23:04 +0200, BERTRAND Joël wrote:
> 	Sorry for that last mail. I have found another problem, but I don't 
> know if this bug comes from iscsi-target or raid5 itself. The iSCSI 
> target is disconnected because the istd1 and md_d0_raid5 kernel threads 
> each use 100% of a CPU!
> 
>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>   5824 root      15  -5     0    0    0 R  100  0.0  10:34.25 istd1
>   5599 root      15  -5     0    0    0 R  100  0.0   7:25.43 md_d0_raid5

I would rather use oprofile to check where the CPU cycles are going.


-- 
Ming Zhang


@#$%^ purging memory... (*!%
http://blackmagic02881.wordpress.com/
http://www.linkedin.com/in/blackmagic02881
--------------------------------------------



* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:04                   ` BERTRAND Joël
                                       ` (2 preceding siblings ...)
  2007-10-19 21:19                     ` Ming Zhang
@ 2007-10-19 23:50                     ` Bill Davidsen
  2007-10-19 23:58                       ` Bill Davidsen
  2007-10-20  7:52                       ` BERTRAND Joël
  3 siblings, 2 replies; 36+ messages in thread
From: Bill Davidsen @ 2007-10-19 23:50 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: Dan Williams, linux-raid, sparclinux, iscsitarget-devel

BERTRAND Joël wrote:
>
>     Sorry for that last mail. I have found another problem, but I 
> don't know if this bug comes from iscsi-target or raid5 itself. The 
> iSCSI target is disconnected because the istd1 and md_d0_raid5 kernel 
> threads each use 100% of a CPU!
>
> Tasks: 235 total,   6 running, 227 sleeping,   0 stopped,   2 zombie
> Cpu(s):  0.1%us, 12.5%sy,  0.0%ni, 87.4%id,  0.0%wa,  0.0%hi,  0.0%si, 
> 0.0%st
> Mem:   4139032k total,   218424k used,  3920608k free,    10136k buffers
> Swap:  7815536k total,        0k used,  7815536k free,    64808k cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>  5824 root      15  -5     0    0    0 R  100  0.0  10:34.25 istd1
>  5599 root      15  -5     0    0    0 R  100  0.0   7:25.43 md_d0_raid5

Given that the summary shows 87.4% idle, something is not right. You 
might try another tool, like vmstat, to at least verify the way the CPU 
is being used. When you can't trust what your tools tell you it gets 
really hard to make decisions based on the data.
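
For instance (assuming a procps top and a standard vmstat):

vmstat 1 5    # one-second samples of system-wide CPU and IO
top           # then press '1' to toggle the per-CPU view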

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 23:50                     ` Bill Davidsen
@ 2007-10-19 23:58                       ` Bill Davidsen
  2007-10-20  7:52                       ` BERTRAND Joël
  1 sibling, 0 replies; 36+ messages in thread
From: Bill Davidsen @ 2007-10-19 23:58 UTC (permalink / raw)
  To: Bill Davidsen
  Cc: BERTRAND Joël, Dan Williams, linux-raid, sparclinux,
	iscsitarget-devel

Bill Davidsen wrote:
> BERTRAND Joël wrote:
>> Tasks: 235 total,   6 running, 227 sleeping,   0 stopped,   2 zombie
>> Cpu(s):  0.1%us, 12.5%sy,  0.0%ni, 87.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> [snip]
>
> Given that the summary shows 87.4% idle, something is not right. You 
> might try another tool, like vmstat, to at least verify the way the 
> CPU is being used.
>
ALSO: you have zombie processes. Looking at machines up for 45, 54, and 
470 days, zombies are *not* something you just have to expect. Do you 
get these at about the same time things go to hell? Better you than 
me; I suspect there are still many ways to have a "learning experience" 
with iSCSI.

Hope that and the summary confusion result in some useful data.

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:10                       ` Ross S. W. Walker
@ 2007-10-20  7:45                         ` BERTRAND Joël
  0 siblings, 0 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-20  7:45 UTC (permalink / raw)
  To: Ross S. W. Walker
  Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel,
	Bill Davidsen

Ross S. W. Walker wrote:
> Oh, in that case you will be much better served with DRBD, which
> would provide you with what you want without creating a Frankenstein
> setup...

	OK. I didn't know about the DRBD project, but it cannot be used in my 
case. Indeed, as a first step I'm trying to replicate two raid5 arrays, 
but the ultimate goal is to create a raid5 array over iSCSI (see the 
sketch below). For now I only test with raid1 because I have to begin 
with a simple configuration.
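
	The end goal would look something like this (a sketch; the three 
device names are placeholders for disks exported by different iSCSI 
targets):

mdadm -C /dev/md8 -l5 -n3 /dev/sdi1 /dev/sdj1 /dev/sdk1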

	Regards,

	JKB


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 23:50                     ` Bill Davidsen
  2007-10-19 23:58                       ` Bill Davidsen
@ 2007-10-20  7:52                       ` BERTRAND Joël
  1 sibling, 0 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-20  7:52 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: Dan Williams, linux-raid, sparclinux, iscsitarget-devel

Bill Davidsen wrote:
> BERTRAND Joël wrote:
>> Cpu(s):  0.1%us, 12.5%sy,  0.0%ni, 87.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
>> [snip]
> 
> Given that the summary shows 87.4% idle, something is not right. You 
> might try another tool, like vmstat, to at least verify the way the CPU 
> is being used. When you can't trust what your tools tell you it gets 
> really hard to make decisions based on the data.

	Don't forget this box is a 32-CPU server: a couple of saturated kernel 
threads barely move the aggregate idle figure.

	JKB

* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-19 21:12                     ` Dan Williams
@ 2007-10-20  8:05                       ` BERTRAND Joël
  2007-10-24  7:12                         ` BERTRAND Joël
  0 siblings, 1 reply; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-20  8:05 UTC (permalink / raw)
  To: Dan Williams; +Cc: Bill Davidsen, linux-raid, sparclinux, iscsitarget-devel

Dan Williams wrote:
> On Fri, 2007-10-19 at 14:04 -0700, BERTRAND Joël wrote:
>> [snip]
>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>   5824 root      15  -5     0    0    0 R  100  0.0  10:34.25 istd1
>>   5599 root      15  -5     0    0    0 R  100  0.0   7:25.43 md_d0_raid5

	When iSCSI works fine:

Tasks: 231 total,   2 running, 229 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.2%us,  2.5%sy,  0.0%ni, 95.7%id,  0.1%wa,  0.0%hi,  1.5%si,  0.0%st
Mem:   4139032k total,  4126064k used,    12968k free,    94680k buffers
Swap:  7815536k total,        0k used,  7815536k free,  3758776k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  9774 root      15  -5     0    0    0 R   40  0.0   2:00.34 istd1
  9738 root      15  -5     0    0    0 S    9  0.0   2:06.56 md_d0_raid5
  4129 root      20   0 41648 5024 2432 S    6  0.1   2:46.39 fail2ban-server
  9830 root      20   0  3248 1544 1120 R    1  0.0   0:00.18 top
  4063 root      20   0  7424 5288  832 S    1  0.1   0:00.84 unfsd
  9776 root      15  -5     0    0    0 D    1  0.0   0:00.82 istiod1
  9780 root      15  -5     0    0    0 D    1  0.0   0:00.96 istiod1
  9782 root      15  -5     0    0    0 D    1  0.0   0:01.10 istiod1
     1 root      20   0  2576  960  816 S    0  0.0   0:01.56 init
     2 root      15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd
     3 root      RT  -5     0    0    0 S    0  0.0   0:00.00 migration/0

After a random time (the iSCSI target is not disconnected but does not 
answer initiator requests):

Tasks: 232 total,   5 running, 226 sleeping,   0 stopped,   1 zombie
Cpu(s):  0.1%us,  7.9%sy,  0.0%ni, 91.6%id,  0.0%wa,  0.1%hi,  0.2%si,  0.0%st
Mem:   4139032k total,  4125912k used,    13120k free,    95640k buffers
Swap:  7815536k total,        0k used,  7815536k free,  3758792k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  9738 root      15  -5     0    0    0 R  100  0.0   3:56.57 md_d0_raid5
  9739 root      15  -5     0    0    0 D   14  0.0   0:20.34 md_d0_resync
  9845 root      20   0  3248 1544 1120 R    1  0.0   0:07.00 top
  4129 root      20   0 41648 5024 2432 S    0  0.1   2:55.94 fail2ban-server
     1 root      20   0  2576  960  816 S    0  0.0   0:01.58 init
     2 root      15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd
     3 root      RT  -5     0    0    0 S    0  0.0   0:00.00 migration/0
     4 root      15  -5     0    0    0 S    0  0.0   0:00.02 ksoftirqd/0
     5 root      RT  -5     0    0    0 S    0  0.0   0:00.00 migration/1
     6 root      15  -5     0    0    0 S    0  0.0   0:00.00 ksoftirqd/1

	You can see a very strange thing... When I booted this server, md_d0 
was clean. When the bug occurs, md_d0_resync is started (/dev/md/d0p1 
is part of my raid1 array). Why? This partition is not mounted on the 
local server, only exported by iSCSI.
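
(For reference, the resync state can also be watched through the md 
sysfs interface — a sketch, assuming this kernel exposes it:)

cat /sys/block/md_d0/md/sync_action    # e.g. idle, resync, recover
watch cat /proc/mdstat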

After disconnection of the iSCSI target:

Tasks: 232 total,   7 running, 224 sleeping,   0 stopped,   1 zombie
Cpu(s):  0.0%us, 15.2%sy,  0.0%ni, 84.3%id,  0.0%wa,  0.1%hi,  0.3%si,  0.0%st
Mem:   4139032k total,  4127584k used,    11448k free,    95752k buffers
Swap:  7815536k total,        0k used,  7815536k free,  3758792k cached

   PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
  9738 root      15  -5     0    0    0 R  100  0.0   4:56.82 md_d0_raid5
  9774 root      15  -5     0    0    0 R  100  0.0   5:52.41 istd1
  9739 root      15  -5     0    0    0 R   14  0.0   0:28.90 md_d0_resync
  9916 root      20   0  3248 1544 1120 R    2  0.0   0:00.56 top
  4129 root      20   0 41648 5024 2432 S    0  0.1   2:56.17 fail2ban-server
     1 root      20   0  2576  960  816 S    0  0.0   0:01.58 init
     2 root      15  -5     0    0    0 S    0  0.0   0:00.00 kthreadd
     3 root      RT  -5     0    0    0 S    0  0.0   0:00.00 migration/0
     4 root      15  -5     0    0    0 S    0  0.0   0:00.02 ksoftirqd/0
     5 root      RT  -5     0    0    0 S    0  0.0   0:00.00 migration/1
     6 root      15  -5     0    0    0 S    0  0.0   0:00.00 ksoftirqd/1

> What is the output of:
> cat /proc/5824/wchan
> cat /proc/5599/wchan

Root poulenc:[/usr/scripts] > cat /proc/9738/wchan
_start
Root poulenc:[/usr/scripts] > cat /proc/9774/wchan
_start
Root poulenc:[/usr/scripts] > vmstat -a
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa
 5  0      0  10824 3777528 112280    0    0     7    19   12   19  0  0 100  0
Root poulenc:[/usr/scripts] > vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 5  0      0  10928  95856 3756880    0    0     7    19   12   19  0  0 100  0
Root poulenc:[/usr/scripts] >  vmstat -s
       4139032 K total memory
       4127864 K used memory
        112216 K active memory
       3777568 K inactive memory
         11168 K free memory
         95928 K buffer memory
       3756896 K swap cache
       7815536 K total swap
             0 K used swap
       7815536 K free swap
         26901 non-nice user cpu ticks
           824 nice user cpu ticks
        204746 system cpu ticks
      94245668 idle cpu ticks
         14378 IO-wait cpu ticks
          3086 IRQ cpu ticks
         33971 softirq cpu ticks
             0 stolen cpu ticks
       6555730 pages paged in
      18136571 pages paged out
             0 pages swapped in
             0 pages swapped out
      11259263 interrupts
      18167358 CPU context switches
    1192827483 boot time
          9962 forks
Root poulenc:[/usr/scripts] > vmstat -d
disk- ------------reads------------ ------------writes----------- -----IO------
       total merged  sectors      ms  total merged  sectors       ms  cur  sec
sda   716720 143247 94849012 2617628   6732  24789   269070   222236    0  532
sdb   103590  23780  6140736   85244 409226 308936 88160014 13352564    0  929
md0    17469      0   456250       0   4557      0    36456        0    0    0
sdc   265108 2103743 37883308 2810656 266586 272237  8767696   628236    0  825
sdd   266248 2099943 37844236 2801400 264081 275321  8781088   609140    0  824
sde   263660 2104487 37875132 2835548 262296 276561  8776000   595140    0  826
sdf   283262 2084095 37862108 2432988 262197 277305  8785600   581008    0  779
sdg   285205 2082611 37870324 2291464 260836 278822  8791456   567908    0  752
sdh   291773 2072874 37817788 1892320 260572 278182  8775472   550688    0  685
loop0      0      0        0       0      0      0        0        0    0    0
loop1      0      0        0       0      0      0        0        0    0    0
loop2      0      0        0       0      0      0        0        0    0    0
loop3      0      0        0       0      0      0        0        0    0    0
loop4      0      0        0       0      0      0        0        0    0    0
loop5      0      0        0       0      0      0        0        0    0    0
loop6      0      0        0       0      0      0        0        0    0    0
loop7      0      0        0       0      0      0        0        0    0    0
md6       31      0      496       0      0      0        0        0    0    0
md1     4326      0   161366       0     27      0      110        0    0    0
md2   206279      0  4713706       0  14670      0   118752        0    0    0
md3     6709      0   392442       0   9964      0    80040        0    0    0
md4      247      0     3746       0    131      0     1208        0    0    0
md5    63245      0  7365546       0    292      0     2424        0    0    0
md_d0     14      0      216       0 642029      0 36004104        0    0    0
Root poulenc:[/usr/scripts] >

	Please note that zombie processes are not significant on this server: 
it runs watchdog, and the zombie process count is always between 0 and 2.

	When the iSCSI target hangs, the load average is 14.03, 13.63, 10.47 
with only md_d0_raid5, istd1 and md_d0_resync as running processes:

  9774 root      15  -5     0    0    0 R  100  0.0  18:17.63 istd1
  9738 root      15  -5     0    0    0 R  100  0.0  17:22.04 md_d0_raid5
  9739 root      15  -5     0    0    0 R   14  0.0   2:15.18 md_d0_resync

	I won't reboot this server, in case you need some other information.

	Regards,

	JKB

* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-20  8:05                       ` BERTRAND Joël
@ 2007-10-24  7:12                         ` BERTRAND Joël
  2007-10-24 20:10                           ` Bill Davidsen
  2007-10-24 23:49                           ` Dan Williams
  0 siblings, 2 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-24  7:12 UTC (permalink / raw)
  To: Dan Williams; +Cc: Bill Davidsen, linux-raid, sparclinux, iscsitarget-devel

	Hello,

	Any news about this trouble? Any ideas? I'm trying to fix it, but I 
don't see any specific interaction between raid5 and istd. Has anyone 
tried to reproduce this bug on an arch other than sparc64? I only use 
sparc32 and 64 servers and I cannot test on other archs. Of course, I 
have a laptop, but I cannot create a raid5 array on its internal HD to 
test this configuration ;-)

	Please note that I won't read my mail until next Saturday morning (CEST).

> After disconnection of the iSCSI target:
> 
>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>   9738 root      15  -5     0    0    0 R  100  0.0   4:56.82 md_d0_raid5
>   9774 root      15  -5     0    0    0 R  100  0.0   5:52.41 istd1
>   9739 root      15  -5     0    0    0 R   14  0.0   0:28.90 md_d0_resync
> [snip]

	Regards,

	JKB


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-24  7:12                         ` BERTRAND Joël
@ 2007-10-24 20:10                           ` Bill Davidsen
  2007-10-24 23:49                           ` Dan Williams
  1 sibling, 0 replies; 36+ messages in thread
From: Bill Davidsen @ 2007-10-24 20:10 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: Dan Williams, linux-raid, sparclinux, iscsitarget-devel

BERTRAND Joël wrote:
>     Hello,
>
>     Any news about this trouble? Any ideas? I'm trying to fix it, but 
> I don't see any specific interaction between raid5 and istd. Has 
> anyone tried to reproduce this bug on an arch other than sparc64? I 
> only use sparc32 and 64 servers and I cannot test on other archs. Of 
> course, I have a laptop, but I cannot create a raid5 array on its 
> internal HD to test this configuration ;-)

Sure you can: a few loopback devices and a few iSCSI ones, and you're 
in business. I think the ongoing discussion of timeouts and whatnot may 
bear some fruit eventually, perhaps not as fast as you would like. By 
Saturday a solution may emerge.
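
Something along these lines, for instance (a sketch; file sizes, paths
and device names are arbitrary):

# create four 256 MB backing files and attach them to loop devices
for i in 0 1 2 3; do
    dd if=/dev/zero of=/tmp/disk$i bs=1M count=256
    losetup /dev/loop$i /tmp/disk$i
done
# build a throwaway raid5 array over the loop devices
mdadm -C /dev/md9 -l5 -n4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3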
>
>     Please note that I won't read my mails until next saturday morning 
> (CEST). 


-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-24  7:12                         ` BERTRAND Joël
  2007-10-24 20:10                           ` Bill Davidsen
@ 2007-10-24 23:49                           ` Dan Williams
  2007-10-25  0:03                             ` David Miller
  2007-10-27 13:29                             ` BERTRAND Joël
  1 sibling, 2 replies; 36+ messages in thread
From: Dan Williams @ 2007-10-24 23:49 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: Bill Davidsen, linux-raid, sparclinux, iscsitarget-devel,
	Ming Zhang

On 10/24/07, BERTRAND Joël <joel.bertrand@systella.fr> wrote:
>         Hello,
>
>         Any news about this trouble? Any ideas? I'm trying to fix it, but I
> don't see any specific interaction between raid5 and istd. Has anyone
> tried to reproduce this bug on an arch other than sparc64? I only use
> sparc32 and 64 servers and I cannot test on other archs. Of course, I
> have a laptop, but I cannot create a raid5 array on its internal HD to
> test this configuration ;-)
>

Can you collect some oprofile data, as Ming suggested, so we can maybe
see what md_d0_raid5 and istd1 are fighting about?  Hopefully it is as
painless to run on sparc as it is on IA:

opcontrol --start --vmlinux=/path/to/vmlinux
<wait>
opcontrol --stop
opreport --image-path=/lib/modules/`uname -r` -l
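
(If oprofile has been run before, it may also be worth clearing old
samples first so the capture covers only the failure window — assuming
the standard opcontrol options:)

opcontrol --reset    # discard samples from earlier runs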

--
Dan

* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-24 23:49                           ` Dan Williams
@ 2007-10-25  0:03                             ` David Miller
  2007-10-27 13:29                             ` BERTRAND Joël
  1 sibling, 0 replies; 36+ messages in thread
From: David Miller @ 2007-10-25  0:03 UTC (permalink / raw)
  To: dan.j.williams
  Cc: joel.bertrand, davidsen, linux-raid, sparclinux,
	iscsitarget-devel, blackmagic02881

From: "Dan Williams" <dan.j.williams@intel.com>
Date: Wed, 24 Oct 2007 16:49:28 -0700

> Hopefully it is as painless to run on sparc as it is on IA:
> 
> opcontrol --start --vmlinux=/path/to/vmlinux
> <wait>
> opcontrol --stop
> opreport --image-path=/lib/modules/`uname -r` -l

It is painless, I use it all the time.

The only caveat is to make sure the /path/to/vmlinux is
the pre-stripped kernel image.  The images installed
under /boot/ are usually stripped and thus not suitable
for profiling.
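
A quick sanity check (the file utility reports the stripping status of
ELF images):

file /path/to/vmlinux    # should report "... not stripped"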


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-24 23:49                           ` Dan Williams
  2007-10-25  0:03                             ` David Miller
@ 2007-10-27 13:29                             ` BERTRAND Joël
  2007-10-27 18:27                               ` Dan Williams
  2007-10-27 21:13                               ` Ming Zhang
  1 sibling, 2 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-27 13:29 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-raid, sparclinux, iscsitarget-devel, Bill Davidsen

Dan Williams wrote:
> Can you collect some oprofile data, as Ming suggested, so we can maybe
> see what md_d0_raid5 and istd1 are fighting about?  Hopefully it is as
> painless to run on sparc as it is on IA:
> 
> opcontrol --start --vmlinux=/path/to/vmlinux
> <wait>
> opcontrol --stop
> opreport --image-path=/lib/modules/`uname -r` -l

	Done.

Profiling through timer interrupt
samples  %        image name               app name                 symbol name
20028038 92.9510  vmlinux-2.6.23           vmlinux-2.6.23           cpu_idle
1198566   5.5626  vmlinux-2.6.23           vmlinux-2.6.23           schedule
41558     0.1929  vmlinux-2.6.23           vmlinux-2.6.23           yield
34791     0.1615  vmlinux-2.6.23           vmlinux-2.6.23           NGmemcpy
18417     0.0855  vmlinux-2.6.23           vmlinux-2.6.23           xor_niagara_5
17430     0.0809  raid456                  raid456                  (no symbols)
15837     0.0735  vmlinux-2.6.23           vmlinux-2.6.23           sys_sched_yield
14860     0.0690  iscsi_trgt.ko            iscsi_trgt               istd
12705     0.0590  nf_conntrack             nf_conntrack             (no symbols)
9236      0.0429  libc-2.6.1.so            libc-2.6.1.so            (no symbols)
9034      0.0419  vmlinux-2.6.23           vmlinux-2.6.23           xor_niagara_2
6534      0.0303  oprofiled                oprofiled                (no symbols)
6149      0.0285  vmlinux-2.6.23           vmlinux-2.6.23           scsi_request_fn
5947      0.0276  ip_tables                ip_tables                (no symbols)
4510      0.0209  vmlinux-2.6.23           vmlinux-2.6.23           dma_4v_map_single
3823      0.0177  vmlinux-2.6.23           vmlinux-2.6.23           __make_request
3326      0.0154  vmlinux-2.6.23           vmlinux-2.6.23           tg3_poll
3162      0.0147  iscsi_trgt.ko            iscsi_trgt               scsi_cmnd_exec
3091      0.0143  vmlinux-2.6.23           vmlinux-2.6.23           scsi_dispatch_cmd
2849      0.0132  vmlinux-2.6.23           vmlinux-2.6.23           tcp_v4_rcv
2811      0.0130  vmlinux-2.6.23           vmlinux-2.6.23           nf_iterate
2729      0.0127  vmlinux-2.6.23           vmlinux-2.6.23           _spin_lock_bh
2551      0.0118  vmlinux-2.6.23           vmlinux-2.6.23           kfree
2467      0.0114  vmlinux-2.6.23           vmlinux-2.6.23           kmem_cache_free
2314      0.0107  vmlinux-2.6.23           vmlinux-2.6.23           atomic_add
2065      0.0096  vmlinux-2.6.23           vmlinux-2.6.23           NGbzero_loop
1826      0.0085  vmlinux-2.6.23           vmlinux-2.6.23           ip_rcv
1823      0.0085  nf_conntrack_ipv4        nf_conntrack_ipv4        (no symbols)
1822      0.0085  vmlinux-2.6.23           vmlinux-2.6.23           clear_bit
1767      0.0082  python2.4                python2.4                (no symbols)
1734      0.0080  vmlinux-2.6.23           vmlinux-2.6.23           atomic_sub_ret
1694      0.0079  vmlinux-2.6.23           vmlinux-2.6.23           tcp_rcv_established
1673      0.0078  vmlinux-2.6.23           vmlinux-2.6.23           tcp_recvmsg
1670      0.0078  vmlinux-2.6.23           vmlinux-2.6.23           netif_receive_skb
1668      0.0077  vmlinux-2.6.23           vmlinux-2.6.23           set_bit
1545      0.0072  vmlinux-2.6.23           vmlinux-2.6.23           __kmalloc_track_caller
1526      0.0071  iptable_nat              iptable_nat              (no symbols)
1526      0.0071  vmlinux-2.6.23           vmlinux-2.6.23           kmem_cache_alloc
1373      0.0064  vmlinux-2.6.23           vmlinux-2.6.23           generic_unplug_device
...

	Is this enough?

	Regards,

	JKB


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-27 13:29                             ` BERTRAND Joël
@ 2007-10-27 18:27                               ` Dan Williams
  2007-10-27 19:35                                 ` BERTRAND Joël
  2007-10-27 21:13                               ` Ming Zhang
  1 sibling, 1 reply; 36+ messages in thread
From: Dan Williams @ 2007-10-27 18:27 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: Bill Davidsen, linux-raid, sparclinux, iscsitarget-devel,
	Ming Zhang

On 10/27/07, BERTRAND Joël <joel.bertrand@systella.fr> wrote:
> Dan Williams wrote:
> > Can you collect some oprofile data, as Ming suggested, so we can maybe
> > see what md_d0_raid5 and istd1 are fighting about?  Hopefully it is as
> > painless to run on sparc as it is on IA:
> >
> > opcontrol --start --vmlinux=/path/to/vmlinux
> > <wait>
> > opcontrol --stop
> > opreport --image-path=/lib/modules/`uname -r` -l
>
>         Done.
>

[..]

>
>         Is this enough?

I would expect md_d0_raid5 and istd1 to show up pretty high in the
list if they were constantly pegged at 100% CPU utilization as you
showed in the failure case.  Maybe this was captured after the target
had disconnected?
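
One way to double-check is to watch per-thread CPU usage while the
profile runs; a minimal sketch (the exact invocation is illustrative,
but any reasonably recent procps offers the equivalent):

ps -eLo pid,lwp,pcpu,stat,comm --sort=-pcpu | head -15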

* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-27 18:27                               ` Dan Williams
@ 2007-10-27 19:35                                 ` BERTRAND Joël
  0 siblings, 0 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-27 19:35 UTC (permalink / raw)
  To: Dan Williams; +Cc: linux-raid, sparclinux, iscsitarget-devel, Bill Davidsen

Dan Williams wrote:
> On 10/27/07, BERTRAND Joël <joel.bertrand@systella.fr> wrote:
>> Dan Williams wrote:
>>> Can you collect some oprofile data, as Ming suggested, so we can maybe
>>> see what md_d0_raid5 and istd1 are fighting about?  Hopefully it is as
>>> painless to run on sparc as it is on IA:
>>>
>>> opcontrol --start --vmlinux=/path/to/vmlinux
>>> <wait>
>>> opcontrol --stop
>>> opreport --image-path=/lib/modules/`uname -r` -l
>>         Done.
>>
> 
> [..]
> 
>>         Is this enough?
> 
> I would expect md_d0_raid5 and istd1 to show up pretty high in the
> list if they were constantly pegged at 100% CPU utilization as you
> showed in the failure case.  Maybe this was captured after the target
> had disconnected?

	No, I launched opcontrol before starting the raid1 creation and
stopped it after the disconnection. Don't forget that this server has 32 CPUs.
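
	For scale: two threads pegged on 2 of those 32 CPUs can account for
at most 2/32, i.e. roughly 6%, of all timer samples, which is consistent
with cpu_idle absorbing ~93% of the profile above.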

	Regards,

	JKB


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-27 13:29                             ` BERTRAND Joël
  2007-10-27 18:27                               ` Dan Williams
@ 2007-10-27 21:13                               ` Ming Zhang
  2007-10-29 10:40                                 ` BERTRAND Joël
  1 sibling, 1 reply; 36+ messages in thread
From: Ming Zhang @ 2007-10-27 21:13 UTC (permalink / raw)
  To: BERTRAND Joël
  Cc: Dan Williams, Bill Davidsen, linux-raid, sparclinux,
	iscsitarget-devel

Off topic: could you resubmit the alignment-issue patch to the list and see
whether tomof accepts it? He needs the patch inlined in the email. Since it
was found and fixed by you, it is better that you post it (instead of me).
Thanks.


On Sat, 2007-10-27 at 15:29 +0200, BERTRAND Joël wrote:
> Dan Williams wrote:
> > On 10/24/07, BERTRAND Joël <joel.bertrand@systella.fr> wrote:
> >>         Hello,
> >>
> >>         Any news about this problem? Any ideas? I'm trying to fix it, but I
> >> don't see any specific interaction between raid5 and istd. Has anyone
> >> tried to reproduce this bug on an arch other than sparc64? I only use
> >> sparc32 and sparc64 servers and I cannot test on other archs. Of course,
> >> I have a laptop, but I cannot create a raid5 array on its internal HD to
> >> test this configuration ;-)
> >>
> > 
> > Can you collect some oprofile data, as Ming suggested, so we can maybe
> > see what md_d0_raid5 and istd1 are fighting about?  Hopefully it is as
> > painless to run on sparc as it is on IA:
> > 
> > opcontrol --start --vmlinux=/path/to/vmlinux
> > <wait>
> > opcontrol --stop
> > opreport --image-path=/lib/modules/`uname -r` -l
> 
> 	Done.
> 
> Profiling through timer interrupt
> samples  %        image name               app name                 symbol name
> 20028038 92.9510  vmlinux-2.6.23           vmlinux-2.6.23           cpu_idle
> 1198566   5.5626  vmlinux-2.6.23           vmlinux-2.6.23           schedule
> 41558     0.1929  vmlinux-2.6.23           vmlinux-2.6.23           yield
> 34791     0.1615  vmlinux-2.6.23           vmlinux-2.6.23           NGmemcpy
> 18417     0.0855  vmlinux-2.6.23           vmlinux-2.6.23           xor_niagara_5

raid5 uses these two xor routines. I forgot to ask: did you run into any
memory pressure here?
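
A quick way to watch for memory pressure during the resync is something
like (a minimal sketch using standard tools):

vmstat 5    # watch the si/so (swap) and free columns while the resync runs
grep -E 'MemFree|Dirty|Writeback|Slab' /proc/meminfo
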
> 17430     0.0809  raid456                  raid456                  (no symbols)
> 15837     0.0735  vmlinux-2.6.23           vmlinux-2.6.23           sys_sched_yield
> 14860     0.0690  iscsi_trgt.ko            iscsi_trgt               istd

Could you get a call graph out of oprofile (see the sketch below)? yield() is
called quite frequently, and IET has a few places that call it when no memory
is available; I am not sure whether that is the case here.

I remember a post (maybe on lwn.net?) about issues where the tickless kernel
plus yield() led to 100% CPU utilization, but I cannot recall where it was.
Does anybody have a clue?

Or does sparc64 not have a tickless kernel yet? I have not followed this
closely lately.
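
A call-graph run might look like this (a rough sketch; note that oprofile
call-graph collection may not be available in the timer-interrupt mode the
report above shows):

opcontrol --deinit
opcontrol --vmlinux=/path/to/vmlinux --callgraph=16
opcontrol --start
<reproduce the stall>
opcontrol --stop
opreport --callgraph --image-path=/lib/modules/`uname -r`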


> 12705     0.0590  nf_conntrack             nf_conntrack             (no symbols)
> 9236      0.0429  libc-2.6.1.so            libc-2.6.1.so            (no symbols)
> 9034      0.0419  vmlinux-2.6.23           vmlinux-2.6.23           xor_niagara_2
> 6534      0.0303  oprofiled                oprofiled                (no symbols)
> 6149      0.0285  vmlinux-2.6.23           vmlinux-2.6.23           scsi_request_fn
> 5947      0.0276  ip_tables                ip_tables                (no symbols)
> 4510      0.0209  vmlinux-2.6.23           vmlinux-2.6.23           dma_4v_map_single
> 3823      0.0177  vmlinux-2.6.23           vmlinux-2.6.23           __make_request
> 3326      0.0154  vmlinux-2.6.23           vmlinux-2.6.23           tg3_poll
> 3162      0.0147  iscsi_trgt.ko            iscsi_trgt               scsi_cmnd_exec
> 3091      0.0143  vmlinux-2.6.23           vmlinux-2.6.23           scsi_dispatch_cmd
> 2849      0.0132  vmlinux-2.6.23           vmlinux-2.6.23           tcp_v4_rcv
> 2811      0.0130  vmlinux-2.6.23           vmlinux-2.6.23           nf_iterate
> 2729      0.0127  vmlinux-2.6.23           vmlinux-2.6.23           _spin_lock_bh
> 2551      0.0118  vmlinux-2.6.23           vmlinux-2.6.23           kfree
> 2467      0.0114  vmlinux-2.6.23           vmlinux-2.6.23           kmem_cache_free
> 2314      0.0107  vmlinux-2.6.23           vmlinux-2.6.23           atomic_add
> 2065      0.0096  vmlinux-2.6.23           vmlinux-2.6.23           NGbzero_loop
> 1826      0.0085  vmlinux-2.6.23           vmlinux-2.6.23           ip_rcv
> 1823      0.0085  nf_conntrack_ipv4        nf_conntrack_ipv4        (no symbols)
> 1822      0.0085  vmlinux-2.6.23           vmlinux-2.6.23           clear_bit
> 1767      0.0082  python2.4                python2.4                (no symbols)
> 1734      0.0080  vmlinux-2.6.23           vmlinux-2.6.23           atomic_sub_ret
> 1694      0.0079  vmlinux-2.6.23           vmlinux-2.6.23           tcp_rcv_established
> 1673      0.0078  vmlinux-2.6.23           vmlinux-2.6.23           tcp_recvmsg
> 1670      0.0078  vmlinux-2.6.23           vmlinux-2.6.23           netif_receive_skb
> 1668      0.0077  vmlinux-2.6.23           vmlinux-2.6.23           set_bit
> 1545      0.0072  vmlinux-2.6.23           vmlinux-2.6.23           __kmalloc_track_caller
> 1526      0.0071  iptable_nat              iptable_nat              (no symbols)
> 1526      0.0071  vmlinux-2.6.23           vmlinux-2.6.23           kmem_cache_alloc
> 1373      0.0064  vmlinux-2.6.23           vmlinux-2.6.23           generic_unplug_device
> ...
> 
> 	Is this enough?
> 
> 	Regards,
> 
> 	JKB
-- 
Ming Zhang


@#$%^ purging memory... (*!%
http://blackmagic02881.wordpress.com/
http://www.linkedin.com/in/blackmagic02881
--------------------------------------------


* Re: [BUG] Raid1/5 over iSCSI trouble
  2007-10-27 21:13                               ` Ming Zhang
@ 2007-10-29 10:40                                 ` BERTRAND Joël
  0 siblings, 0 replies; 36+ messages in thread
From: BERTRAND Joël @ 2007-10-29 10:40 UTC (permalink / raw)
  To: blackmagic02881
  Cc: linux-raid, sparclinux, Dan Williams, iscsitarget-devel,
	Bill Davidsen

Ming Zhang wrote:
> Off topic: could you resubmit the alignment-issue patch to the list and see
> whether tomof accepts it? He needs the patch inlined in the email. Since it
> was found and fixed by you, it is better that you post it (instead of me).
> Thanks.

diff -u kernel.old/iscsi.c kernel/iscsi.c
--- kernel.old/iscsi.c  2007-10-29 09:49:16.000000000 +0100
+++ kernel/iscsi.c      2007-10-17 11:19:14.000000000 +0200
@@ -726,13 +726,26 @@
         case READ_10:
         case WRITE_10:
         case WRITE_VERIFY:
-               *off = be32_to_cpu(*(u32 *)&cmd[2]);
+               *off = (((u32) cmd[2]) << 24) |
+                       (((u32) cmd[3]) << 16) |
+                       (((u32) cmd[4]) << 8) |
+                       cmd[5];
                 *len = (cmd[7] << 8) + cmd[8];
                 break;
         case READ_16:
         case WRITE_16:
-               *off = be64_to_cpu(*(u64 *)&cmd[2]);
-               *len = be32_to_cpu(*(u32 *)&cmd[10]);
+               *off = (((u64) cmd[2]) << 56) |
+                       (((u64) cmd[3]) << 48) |
+                       (((u64) cmd[4]) << 40) |
+                       (((u64) cmd[5]) << 32) |
+                       (((u64) cmd[6]) << 24) |
+                       (((u64) cmd[7]) << 16) |
+                       (((u64) cmd[8]) << 8) |
+                       cmd[9];
+               *len = (((u32) cmd[10]) << 24) |
+                       (((u32) cmd[11]) << 16) |
+                       (((u32) cmd[12]) << 8) |
+                       cmd[13];
                 break;
         default:
                 BUG();
diff -u kernel.old/target_disk.c kernel/target_disk.c
--- kernel.old/target_disk.c    2007-10-29 09:49:16.000000000 +0100
+++ kernel/target_disk.c        2007-10-17 16:04:06.000000000 +0200
@@ -66,13 +66,15 @@
         unsigned char geo_m_pg[] = {0x04, 0x16, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00,
                                     0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                                     0x00, 0x00, 0x00, 0x00, 0x3a, 0x98, 0x00, 0x00};
-       u32 ncyl, *p;
+       u32 ncyl;
+       u32 n;

         /* assume 0xff heads, 15krpm. */
         memcpy(ptr, geo_m_pg, sizeof(geo_m_pg));
         ncyl = sec >> 14; /* 256 * 64 */
-       p = (u32 *)(ptr + 1);
-       *p = *p | cpu_to_be32(ncyl);
+       memcpy(&n, ptr + 1, sizeof(u32));
+       n = n | cpu_to_be32(ncyl);
+       memcpy(ptr + 1, &n, sizeof(u32));
         return sizeof(geo_m_pg);
  }

@@ -249,7 +251,10 @@
         struct iet_volume *lun;
         int rest, idx = 0;

-       size = be32_to_cpu(*(u32 *)&req->scb[6]);
+       size = (((u32) req->scb[6]) << 24) |
+                       (((u32) req->scb[7]) << 16) |
+                       (((u32) req->scb[8]) << 8) |
+                       req->scb[9];
         if (size < 16)
                 return -1;

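	For reference, the byte-by-byte assembly above can be factored into
small helpers; a minimal sketch (these helper names are mine, not part of
IET -- later kernels provide get_unaligned_be32()/get_unaligned_be64() in
<asm/unaligned.h> for the same job):

#include <linux/types.h>

/* Read a 32-bit big-endian wire field from an arbitrarily aligned buffer.
 * Assembling the value byte by byte yields host order directly, so no
 * be32_to_cpu() is needed and no alignment trap can occur on sparc64. */
static inline u32 get_be32_unaligned(const u8 *p)
{
	return ((u32) p[0] << 24) | ((u32) p[1] << 16) |
	       ((u32) p[2] << 8)  |  (u32) p[3];
}

static inline u64 get_be64_unaligned(const u8 *p)
{
	return ((u64) get_be32_unaligned(p) << 32) | get_be32_unaligned(p + 4);
}

With these, the READ_10 case reduces to *off = get_be32_unaligned(&cmd[2]);.
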
	Regards,

	JKB


Thread overview: 36+ messages
2007-10-16 13:24 [BUG] Raid5 trouble BERTRAND Joël
2007-10-17 14:32 ` BERTRAND Joël
2007-10-17 14:58   ` Dan Williams
2007-10-17 15:40     ` Dan Williams
2007-10-17 16:44       ` BERTRAND Joël
2007-10-18  0:46         ` Dan Williams
2007-10-18  8:29           ` BERTRAND Joël
2007-10-19  2:55       ` Bill Davidsen
2007-10-19  8:04         ` BERTRAND Joël
2007-10-19 15:51           ` Dan Williams
2007-10-19 16:03             ` BERTRAND Joël
     [not found]             ` <4718DE66.8000905@tmr.com>
2007-10-19 20:42               ` BERTRAND Joël
2007-10-19 20:49                 ` [BUG] Raid1/5 over iSCSI trouble BERTRAND Joël
2007-10-19 21:02                   ` [Iscsitarget-devel] " Ross S. W. Walker
2007-10-19 21:06                     ` BERTRAND Joël
2007-10-19 21:10                       ` Ross S. W. Walker
2007-10-20  7:45                         ` BERTRAND Joël
2007-10-19 21:11                       ` [Iscsitarget-devel] " Scott Kaelin
2007-10-19 21:04                   ` BERTRAND Joël
2007-10-19 21:08                     ` Ross S. W. Walker
2007-10-19 21:12                     ` Dan Williams
2007-10-20  8:05                       ` BERTRAND Joël
2007-10-24  7:12                         ` BERTRAND Joël
2007-10-24 20:10                           ` Bill Davidsen
2007-10-24 23:49                           ` Dan Williams
2007-10-25  0:03                             ` David Miller
2007-10-27 13:29                             ` BERTRAND Joël
2007-10-27 18:27                               ` Dan Williams
2007-10-27 19:35                                 ` BERTRAND Joël
2007-10-27 21:13                               ` Ming Zhang
2007-10-29 10:40                                 ` BERTRAND Joël
2007-10-19 21:19                     ` Ming Zhang
2007-10-19 23:50                     ` Bill Davidsen
2007-10-19 23:58                       ` Bill Davidsen
2007-10-20  7:52                       ` BERTRAND Joël
2007-10-17 16:07     ` [BUG] Raid5 trouble BERTRAND Joël
