public inbox for linux-kernel@vger.kernel.org
* "Disk Sleep" status on qlogic scsi sbus card on a sparc station 20 (sparc32, 2.4.17)
@ 2002-04-04  2:57 Alvaro Figueroa
  0 siblings, 0 replies; only message in thread
From: Alvaro Figueroa @ 2002-04-04  2:57 UTC (permalink / raw)
  To: LKML

I know sparc32 is not maintained any more, but I'm posting anyway in case
this problem sheds light on a problem that might appear on another
architecture.

I'm running kernel 2.4.17 on a SPARCstation 20 (with 2 processors)
running Splack -current.

I have a QLogic SCSI SBus card in this box, to which I attach a SCSI
tape drive or a MultiPack enclosure with some 8 to 12 disks in it.

When I run different applications (not at the same time) that are
somewhat I/O intensive on the devices attached to this QLogic card,
I sometimes see huge sleeps in the process that is working on them.

I cat'ed /proc/{PID}/status and it shows "State:  D (disk sleep)".

Sometimes the process resumes working, but not often. I also can't
suspend the process, nor can I kill it.

I have used tar to back up a filesystem to a DDS-3 tape, and have also
used rsync from another host (on a 100Mb LAN) to a RAID made up of the
disks that are attached to the controller.

BTW, this does *NOT* occur on an Ultra 1 box (which is sparc64) running
the same kernel version, with the same set of disks or the same tape
drive, and with the same QLogic SCSI SBus card.

What tests could I run on this box to give you some more information
that would help resolve this problem?

Or else, what could the problem be, and what could I do to solve it?
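So far the only thing I have been doing is reading /proc by hand. A small
sketch of what I run to spot the stuck tasks (just a loop over /proc; the
proc_state helper name is my own, and I'm assuming the 2.4-era procfs
field names here):

```shell
#!/bin/sh
# List tasks in uninterruptible "D" sleep, together with the kernel
# symbol they are blocked in (/proc/PID/wchan).

proc_state() {
    # Print the one-letter scheduler state of a PID (R, S, D, Z, T)
    awk '/^State:/ {print $2; exit}' "/proc/$1/status" 2>/dev/null
}

for pid in $(ls /proc | grep '^[0-9][0-9]*$'); do
    if [ "$(proc_state "$pid")" = "D" ]; then
        name=$(awk '/^Name:/ {print $2; exit}' "/proc/$pid/status")
        wchan=$(cat "/proc/$pid/wchan" 2>/dev/null)
        echo "pid=$pid name=$name wchan=$wchan"
    fi
done
```

When the box still responds I can also do a SysRq-T dump (Alt-SysRq-T on
the console, or echo t > /proc/sysrq-trigger with CONFIG_MAGIC_SYSRQ
enabled), which should log a kernel stack trace for every task and show
exactly where the stuck process is sleeping. I can post that output if it
would help.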

Thanks in advance.

-- 
Alvaro Figueroa

