linux-scsi.vger.kernel.org archive mirror
* DC390 Deadlock in DataIn
@ 2002-10-29 20:32 Klaus Fuerstberger
  2002-10-30  0:08 ` 2.4.9 2.4.18 diff Mark Lobo
  0 siblings, 1 reply; 9+ messages in thread
From: Klaus Fuerstberger @ 2002-10-29 20:32 UTC (permalink / raw)
  To: Kurt Garloff, linux-scsi

Hi,

I use a Dawicontrol DC-2974 PCI host adapter.
I am able to write a tar backup to an attached HP streamer without any 
problems.
However, if I try to read the tape with "tar tvf /dev/st0", the SCSI 
bus hangs after a short time with the error messages below. I then have 
to reset the SCSI bus with "echo reset > /proc/scsi/tmscsim/0", as 
described in README.tmscsim.

Any hints?
Thx Klaus


Oct 29 20:54:05 srv kernel: DC390: Illegal Operation detected (08d1cc10)!
Oct 29 20:54:05 srv kernel: DC390: SRB: Xferred 00000000, Remain 00002800, State 00000100, Phase 01
Oct 29 20:54:05 srv kernel: DC390: AdpaterStatus: 00, SRB Status 00
Oct 29 20:54:05 srv kernel: DC390: Status of last IRQ (DMA/SC/Int/IRQ): 08d1cc10
Oct 29 20:54:05 srv kernel: DC390: Register dump: SCSI block:
Oct 29 20:54:05 srv kernel: DC390: XferCnt  Cmd Stat IntS IRQS FFIS Ctl1 Ctl2 Ctl3 Ctl4
Oct 29 20:54:05 srv kernel: DC390:  000000   90   11   cc   00   82   17   48   18   04
Oct 29 20:54:05 srv kernel: DC390: FIFO: 00 00
Oct 29 20:54:05 srv kernel: DC390: Register dump: DMA engine:
Oct 29 20:54:05 srv kernel: DC390: Cmd   STrCnt    SBusA    WrkBC    WrkAC Stat SBusCtrl
Oct 29 20:54:05 srv kernel: DC390:  83 00002800 00090000 00000004 000927fc   00 03184200
Oct 29 20:54:05 srv kernel: DC390: Register dump: PCI Status: 0200
Oct 29 20:54:05 srv kernel: DC390: In case of driver trouble read linux/drivers/scsi/README.tmscsim
Oct 29 20:54:05 srv kernel: DC390: Deadlock in DataIn_0: DMA aborted unfinished: 000004 bytes remain!!
Oct 29 20:54:05 srv kernel: DC390: DataIn_0: DMA State: 0
Oct 29 20:54:05 srv kernel: st0: Error 27010000 (sugg. bt 0x20, driver bt 0x7, host bt 0x1).
Oct 29 20:54:05 srv kernel: DC390: Illegal Operation detected (00d3cc10)!
Oct 29 20:54:05 srv kernel: DC390: SRB: Xferred 00000000, Remain 00002800, State 00000100, Phase 01
Oct 29 20:54:05 srv kernel: DC390: AdpaterStatus: 00, SRB Status 00
Oct 29 20:54:05 srv kernel: DC390: Status of last IRQ (DMA/SC/Int/IRQ): 00d3cc10
Oct 29 20:54:05 srv kernel: DC390: Register dump: SCSI block:
Oct 29 20:54:05 srv kernel: DC390: XferCnt  Cmd Stat IntS IRQS FFIS Ctl1 Ctl2 Ctl3 Ctl4
Oct 29 20:54:05 srv kernel: DC390:  000000   90   13   cc   00   80   17   48   18   04
Oct 29 20:54:05 srv kernel: DC390: Register dump: DMA engine:
Oct 29 20:54:05 srv kernel: DC390: Cmd   STrCnt    SBusA    WrkBC    WrkAC Stat SBusCtrl
Oct 29 20:54:05 srv kernel: DC390:  83 00002800 00090000 00000000 00092800   08 031a4700
Oct 29 20:54:05 srv kernel: DC390: Register dump: PCI Status: 0200
Oct 29 20:54:05 srv kernel: DC390: In case of driver trouble read linux/drivers/scsi/README.tmscsim
Oct 29 20:54:05 srv kernel: DC390: Illegal Operation detected (08d3cc10)!
Oct 29 20:54:05 srv kernel: DC390: SRB: Xferred 00000000, Remain 00002800, State 00000100, Phase 01
Oct 29 20:54:05 srv kernel: DC390: AdpaterStatus: 00, SRB Status 00
Oct 29 20:54:05 srv kernel: DC390: Status of last IRQ (DMA/SC/Int/IRQ): 08d3cc10
Oct 29 20:54:05 srv kernel: DC390: Register dump: SCSI block:
Oct 29 20:54:05 srv kernel: DC390: XferCnt  Cmd Stat IntS IRQS FFIS Ctl1 Ctl2 Ctl3 Ctl4
Oct 29 20:54:05 srv kernel: DC390:  000000   90   13   cc   00   80   17   48   18   04
Oct 29 20:54:05 srv kernel: DC390: Register dump: DMA engine:
Oct 29 20:54:05 srv kernel: DC390: Cmd   STrCnt    SBusA    WrkBC    WrkAC Stat SBusCtrl
Oct 29 20:54:05 srv kernel: DC390:  83 00002800 00090000 00000000 00092800   08 031a4700
Oct 29 20:54:05 srv kernel: DC390: Register dump: PCI Status: 0200
Oct 29 20:54:05 srv kernel: DC390: In case of driver trouble read linux/drivers/scsi/README.tmscsim
Oct 29 20:54:05 srv kernel: DC390: Illegal Operation detected (08d1c410)!
Oct 29 20:54:05 srv kernel: DC390: SRB: Xferred 00000000, Remain 00002800, State 00000100, Phase 01
Oct 29 20:54:05 srv kernel: DC390: AdpaterStatus: 00, SRB Status 00
Oct 29 20:54:05 srv kernel: DC390: Status of last IRQ (DMA/SC/Int/IRQ): 08d1c410
Oct 29 20:54:05 srv kernel: DC390: Register dump: SCSI block:
Oct 29 20:54:05 srv kernel: DC390: XferCnt  Cmd Stat IntS IRQS FFIS Ctl1 Ctl2 Ctl3 Ctl4
Oct 29 20:54:05 srv kernel: DC390:  000000   90   11   c4   00   89   17   48   18   04
Oct 29 20:54:05 srv kernel: DC390: FIFO: 8c 62 18 c4 00 04 63 18 c6
Oct 29 20:54:05 srv kernel: DC390: Register dump: DMA engine:
Oct 29 20:54:05 srv kernel: DC390: Cmd   STrCnt    SBusA    WrkBC    WrkAC Stat SBusCtrl
Oct 29 20:54:05 srv kernel: DC390:  83 00002800 00090000 00000014 000927ec   00 03184231
Oct 29 20:54:05 srv kernel: DC390: Register dump: PCI Status: 0200
Oct 29 20:54:05 srv kernel: DC390: In case of driver trouble read linux/drivers/scsi/README.tmscsim
Oct 29 20:54:05 srv kernel: DC390: Deadlock in DataIn_0: DMA aborted unfinished: 000014 bytes remain!!
Oct 29 20:54:05 srv kernel: DC390: DataIn_0: DMA State: 0


# uname -a
Linux srv 2.4.19 #1 Tue Oct 29 18:19:20 CET 2002 i686 Intel(R) Pentium(R) 4 CPU 1.70GHz GenuineIntel GNU/Linux

# lsmod
st 25712   1
tmscsim 29120   1

# lspci -vv
02:02.0 SCSI storage controller: Advanced Micro Devices [AMD] 53c974 [PCscsi] (rev 10)
         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping+ SERR+ FastB2B-
         Status: Cap- 66Mhz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR-
         Latency: 70 (1000ns min, 10000ns max)
         Interrupt: pin A routed to IRQ 10
         Region 0: I/O ports at dc00 [size=128]
         Expansion ROM at ff9f0000 [disabled] [size=64K]

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 03 Lun: 00
   Vendor: HP       Model: C1537A           Rev: L005
   Type:   Sequential-Access                ANSI SCSI revision: 02

# cat /proc/scsi/tmscsim/0
Tekram DC390/AM53C974 PCI SCSI Host Adapter, Driver Version 2.0f 2000-12-20
SCSI Host Nr 0, AM53C974 Adapter Nr 0
IOPortBase 0xdc00, IRQ 10
MaxID 7, MaxLUN 8, AdapterID 7, SelTimeout 250 ms, DelayReset 1 s
TagMaxNum 16, Status 0x00, ACBFlag 0x00, GlitchEater 24 ns
Statistics: Cmnds 3437, Cmnds not sent directly 1, Out of SRB conds 0
             Lost arbitrations 0, Sel. connected 0, Connected: No
Nr of attached devices: 1, Nr of DCBs: 1
Map of attached LUNs: 00 00 00 01 00 00 00 00
Idx ID LUN Prty Sync DsCn SndS TagQ NegoPeriod SyncSpeed SyncOffs MaxCmd
00  03  00  Yes  Yes  Yes  Yes  No    100 ns    10.0 M      15      01
Commands in Queues: Query: 0:



* 2.4.9 2.4.18 diff
  2002-10-29 20:32 DC390 Deadlock in DataIn Klaus Fuerstberger
@ 2002-10-30  0:08 ` Mark Lobo
  2002-10-30  2:04   ` Douglas Gilbert
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Lobo @ 2002-10-30  0:08 UTC (permalink / raw)
  To: linux-scsi

Hello!

How can I get a list of the differences between 2.4.9
and 2.4.18?

Thanks,

Mark



* Re: 2.4.9 2.4.18 diff
  2002-10-30  0:08 ` 2.4.9 2.4.18 diff Mark Lobo
@ 2002-10-30  2:04   ` Douglas Gilbert
  2002-11-01  0:16     ` Bounce buffer usage Mark Lobo
  0 siblings, 1 reply; 9+ messages in thread
From: Douglas Gilbert @ 2002-10-30  2:04 UTC (permalink / raw)
  To: Mark Lobo; +Cc: linux-scsi

Mark Lobo wrote:
> Hello!
> 
> How can I get a list of differences in 2.4.9 and
> 2.4.18?

If you are talking about the SCSI subsystem, then
look at this URL:
http://www.tldp.org/HOWTO/SCSI-2.4-HOWTO/chg24.html

I need to update it for 2.4.19.
It's probably a good time to start thinking about a
2.6 version of the above document ...

Doug Gilbert




* Bounce buffer usage
  2002-10-30  2:04   ` Douglas Gilbert
@ 2002-11-01  0:16     ` Mark Lobo
  2002-11-01  7:48       ` Jens Axboe
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Lobo @ 2002-11-01  0:16 UTC (permalink / raw)
  To: linux-scsi

Guys,
A simple question on bounce buffer usage.
As I understand it right now, a bounce buffer will be
used if
1) we say we don't support more than a fixed number of
address bits, for example, ISA devices;
2) the address of the buffer is not in a space
directly addressable by the kernel (not in the kernel
logical address space).

Now my question is: what happens in the case where an
application sends an I/O down? Are bounce buffers used
in that case? I guess I am still confused about kernel
virtual vs. kernel logical addresses. As I understand it,
a kernel logical address is one that is directly
addressable by the kernel, and is limited to 1GB. So
if we have a system with 2GB, does it mean some of the
physical memory (probably 1GB) has a kernel logical
address assigned to it permanently and the other 1GB
does not, which means that if a user happens to get a
page in that space, there will be no logical address
(and therefore bounce buffers WILL be used)?
Or is no user address mapped in the kernel
"logical" space at all?

I'm confused on this one; I would appreciate it if
someone could clear it up for me.

Thanks,
Mark





* Re: Bounce buffer usage
  2002-11-01  0:16     ` Bounce buffer usage Mark Lobo
@ 2002-11-01  7:48       ` Jens Axboe
  2002-11-01 18:55         ` Mark Lobo
  0 siblings, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2002-11-01  7:48 UTC (permalink / raw)
  To: Mark Lobo; +Cc: linux-scsi

On Thu, Oct 31 2002, Mark Lobo wrote:
> Guys,
> Simple question on bounce buffer usage.
> As I understand it right now, a bounce buffer will
> be used if
> 1) we say we don't support more than a fixed number of
> address bits, for example, ISA devices;

Correct, if unchecked_isa_dma is set for instance.
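
(Just for illustration - a sketch of where that flag lives in a 2.4-style
host template; the mydriver_* names are made up and most fields are
omitted:)

	/* Sketch: an ISA-limited adapter.  unchecked_isa_dma tells the
	 * mid-layer the board can only DMA to ISA-reachable memory, so
	 * buffers above that limit have to be bounced (or allocated low). */
	static Scsi_Host_Template mydriver_template = {
		name:              "mydriver",
		detect:            mydriver_detect,
		queuecommand:      mydriver_queuecommand,
		can_queue:         1,
		this_id:           7,
		sg_tablesize:      SG_ALL,
		unchecked_isa_dma: 1,
	};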

> 2) If the address of the buffer is not in a space
> directly addressable by the kernel ( not in kernel
> logical address space )

Not so true anymore in 2.4.20-pre (and hasn't been true in 2.5 since
2.5.1). If you set the host highmem_io flag, it will be happy to pass you
pages that have no kernel virtual mapping.

> Now my question is: what happens in the case where an
> application sends an IO down? are bounce buffers used
> in that case? I guess I am still confused on kernel

You just outlined the bounce scenarios above yourself :-). If the
buffer sent down resides at a higher address than what the adapter can
handle, then it is bounced. This does not necessarily have anything to
do with whether there is a kernel virtual mapping.

> virtual v/s kernel logical addresses. As I understand,
> a kernel logical address is one that is directly
> addressable by the kernel, and is limited to 1GB. So

For the standard kernel split, you are looking at 896 MiB of memory, so
a little under a gig.
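
(That figure follows from the default 3G/1G split: of the kernel's
1024 MiB window, 128 MiB is reserved for vmalloc/ioremap and friends,
which leaves 1024 - 128 = 896 MiB of permanently mapped lowmem.)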

> if we have a system with 2GB, does it mean some of the
> physical memory ( probably 1GB )  has a kernel logical
> address assigned to it permanently and the other 1GB
> does not, which means if a user happens to get a page
> in that space, there will be no logical address ( and
> therefore bounce buffers WILL be used? ).
> Or is any user address not mapped in the kernel
> "logical" space at all? 

Yes, this is what happens by default in 2.4.19 and below. As mentioned
above, in 2.4.20 the virtual mapping of the page has nothing to do with
the ability to read/write it. It requires a driver that uses the PCI
DMA API properly - if it does, it can set host_template->highmem_io to
tell the kernel that nothing below 4GB needs to be bounced for it (or
64GB, or more, depending on the PCI device).
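
(As a sketch only, against a 2.4.20-pre tree with the block-highmem bits,
and with made-up mydriver_* names: the driver advertises the capability
in its host template and then sticks to the PCI DMA API for its buffers.)

	/* Sketch: highmem_io only exists with the block-highmem patch.
	 * Setting it promises the mid-layer that this driver maps pages
	 * through the PCI DMA API and never dereferences the buffer via a
	 * kernel virtual address, so highmem pages need not be bounced. */
	static Scsi_Host_Template mydriver_template = {
		name:           "mydriver",
		queuecommand:   mydriver_queuecommand,
		sg_tablesize:   SG_ALL,
		use_clustering: ENABLE_CLUSTERING,
		highmem_io:     1,
	};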

-- 
Jens Axboe



* Re: Bounce buffer usage
  2002-11-01  7:48       ` Jens Axboe
@ 2002-11-01 18:55         ` Mark Lobo
  2002-11-02  9:12           ` Jens Axboe
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Lobo @ 2002-11-01 18:55 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-scsi


> Not so true anymore in 2.4.20-pre (and hasn't been
> true in 2.5 since
> 2.5.1). If you set the host highmem_io flag,

So if I do set this flag in the host template, do I
also need the bounce buffer patch that is floating
around? Or is it enough to just set that flag and use
the DMA API? I'm using 2.4.18.

> it will be happy to pass you
> pages that have no kernel virtual mapping.

Pages that have no kernel virtual mapping? Do you mean
pages with no kernel "logical" mapping? We ultimately
need a kernel mapping, don't we? Say the data is in a
single page (no scatter-gather): what kind of address
is passed down to the driver? Is it the pure user
address that is just passed down, and the DMA API then
gives the kernel address? Because we do need to do a
virt_to_page in this case before we can use the DMA
API, so what kind of virtual address is passed down to
the initiator?

> If the
> buffer sent down resides at a higher address than
> what the adapter can
> handle, then it is bounced. This may not necessarily
> have anything to do
> with kernel virtual mapping or not.

But if a buffer resides in high memory and has no
kernel logical mapping, doesn't that mean that bounce
buffers will be used, especially in 2.4.18, without
the highmem_io bit set?


Thanks a lot!
Mark




* Re: Bounce buffer usage
  2002-11-01 18:55         ` Mark Lobo
@ 2002-11-02  9:12           ` Jens Axboe
  2002-11-03 22:10             ` Mark Lobo
  0 siblings, 1 reply; 9+ messages in thread
From: Jens Axboe @ 2002-11-02  9:12 UTC (permalink / raw)
  To: Mark Lobo; +Cc: linux-scsi

On Fri, Nov 01 2002, Mark Lobo wrote:
> 
> > Not so true anymore in 2.4.20-pre (and hasn't been
> > true in 2.5 since
> > 2.5.1). If you set the host highmem_io flag,
> 
> So if I do set this flag in the host template, do I
> need the bounce buffer patch also that is floating
> around? Or is it just enough to set that flag and use
> the DMA API? Im using 2.4.18.

You need the patch; otherwise that template bool is not even there. Or
you just use 2.4.20-pre - the block-highmem patch has been integrated
since -pre2/3.

> > it will be happy to pass you
> > pages that have no kernel virtual mapping.
> 
> Pages that have no kernel virtual mapping? Do u mean
> pages with no kernel "logical" mapping? We finally

Yes

> need a kernel mapping, dont we? say if it is data in a
> single page ( no scatter gather ), what kind of

We don't need the page mapped into the kernel virtual address space. For
scatter-gather setup, it's enough to know the physical address of such a
page. We can do clustering based on that. If the hardware (the platform,
not the controller) can further do IOMMU tricks on the resulting sg
table, fine, but that's not our worry.

> address is passed down to the driver? Is it the pure
> user address that is just passed down, and the DMA API
> then gives the kernel address? cause we do need to do
> a virt_to_page in this case before we can use the DMA
> API, so what kind of virtual address is passed down to
> the initiator?

I think this is your problem. So you want to write out a page of memory.
You basically want to do

user address -> page -> virtual address -> page = virt_to_page(va) ->
    dma map page

and this is a problem because for some pages we just don't have a kernel
virtual mapping. Instead you want to do

user address -> page -> dma map page

which doesn't require a virtual mapping at all. You want to pass down
page/length/offset tuples to the DMA mapping API, not virtual
addresses. Please read the document I pointed you at; it explains this
for you. You seem to be confused about several things in this area.
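
(In driver terms, and only as a sketch - cmd, pdev and direction are
assumed from the driver's context, and the scatterlist is assumed to
carry the page/offset/length fields the DMA-mapping doc describes - that
chain looks roughly like this:)

	/* Sketch: map each scatterlist segment by page/offset/length via
	 * the PCI DMA API; no kernel virtual mapping of the data needed. */
	struct scatterlist *sg = (struct scatterlist *) cmd->request_buffer;
	int i;

	for (i = 0; i < cmd->use_sg; i++, sg++) {
		dma_addr_t busaddr = pci_map_page(pdev, sg->page, sg->offset,
						  sg->length, direction);
		/* ... feed busaddr and sg->length to the controller's
		 * sg table here ... */
	}

(pci_map_sg() can map the whole list in one call as well.)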

> >If the
> > buffer sent down resides at a higher address than
> > what the adapter can
> > handle, then it is bounced. This may not necessarily
> > have anything to do
> > with kernel virtual mapping or not.
> 
> But if a buffer resides in high memory, and has no
> kernel logical mapping, doesnt it mean that bounce
> buffers will be used, especially in 2.4.18, without
> the highmem_io bit set?

2.4.18 without any patches will always bounce highmem pages; there's
nothing you can do about it.

-- 
Jens Axboe



* Re: Bounce buffer usage
  2002-11-02  9:12           ` Jens Axboe
@ 2002-11-03 22:10             ` Mark Lobo
  2002-11-04  8:21               ` Jens Axboe
  0 siblings, 1 reply; 9+ messages in thread
From: Mark Lobo @ 2002-11-03 22:10 UTC (permalink / raw)
  To: Jens Axboe; +Cc: linux-scsi


> I think this is your problem. So you want to write
> out a page of memory.
> You basically want to do
> 
> user address -> page -> virtual address -> page =
> virt_to_page(va) ->
>     dma map page
> 
> and this is a problem because for some pages we just
> don't have a
> virtual kernel mapping. Instead you want to do
> 
> user address -> page -> dma map page
> 
> which doesn't require a virtual mapping at all. You
> want to pass down
> page/length/offset tuplets to the dma mapping api,
> not virtual
> addresses. Please read the document I pointed you
> at, it explains this
> for you. You seem to be confused about several
> things in this area.
>
OK! So now I guess I understand how this DMA API
works. I did read that document along with the bounce
buffer patch doc. If I get a buffer in the SCSI
initiator driver for a single page (which means no
scatter-gather), all I get in the SCSI initiator is
the address of the request buffer. To get the page
associated with the buffer, I need to call
virt_to_page (as mentioned in the bounce buffer patch
docs). And the DMA mapping doc says the following
about transferring a single page:
"
        struct pci_dev *pdev = mydev->pdev;
        dma_addr_t dma_handle;
        struct page *page = buffer->page;
        unsigned long offset = buffer->offset;
        size_t size = buffer->len;

        dma_handle = pci_map_page(dev, page, offset,
size, direction);

        ...

        pci_unmap_page(dev, dma_handle, size,
direction);
"
Now in the initiator driver, all I get is a request
buffer address. To actually use the above API, I need
the page, and that I get from virt_to_page (as
described in the bounce buffer patch doc). So I don't
need a virt_to_page call to get the page? I don't
understand how I can use
this --> "struct page *page = buffer->page;". In that
case, is what gets passed down to the initiator in the
"no scatter-gather" case some kind of structure?
Don't I need to do "struct page *page =
virt_to_page(buffer);"?


You've been a great help, Jens. I really appreciate
it!

Thanks,
Mark



* Re: Bounce buffer usage
  2002-11-03 22:10             ` Mark Lobo
@ 2002-11-04  8:21               ` Jens Axboe
  0 siblings, 0 replies; 9+ messages in thread
From: Jens Axboe @ 2002-11-04  8:21 UTC (permalink / raw)
  To: Mark Lobo; +Cc: linux-scsi

On Sun, Nov 03 2002, Mark Lobo wrote:
> Now in the initiator driver, all what I get is a
> request buffer address. To actually use the above API,
> I need the page address, and that I get from
> virt_to_page ( as described in the bounce buffer patch
> doc) So I dont need a virt_to_page call to get the
> page address? I dont understand how I can use 
> this --> "struct page *page = buffer->page;"? In that
> case, what gets passed down to the initiator in the
> "no scatter gather" case is some kind of structure?
> Dont I need to do "struct page *page =
> virt_to_page(buffer);"?

For drivers that deal with highmem, you get an sg setup even for a
single segment. It would indeed be possible to send down a virtual
address for non-sg, if the original page didn't reside in highmem. In
that case, yes, all you would have to do is

	page = virt_to_page(buf);
	offset = ((unsigned long) buf) & ~PAGE_MASK;

to get the remaining data you need. However, most drivers typically do

	if (command->use_sg)
		nr_entries = pci_map_page() sg list
	else
		pci_map_single() virtual address
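
Spelled out a little more (a sketch only; pdev, direction, nr_entries and
dma_handle are assumed from the surrounding driver, error handling
omitted):

	if (command->use_sg) {
		/* scatter-gather: request_buffer points at the sg list */
		struct scatterlist *sg =
			(struct scatterlist *) command->request_buffer;

		nr_entries = pci_map_sg(pdev, sg, command->use_sg, direction);
	} else if (command->request_bufflen) {
		/* single lowmem buffer: map its kernel virtual address */
		dma_handle = pci_map_single(pdev, command->request_buffer,
					    command->request_bufflen,
					    direction);
	}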

-- 
Jens Axboe


