public inbox for linux-scsi@vger.kernel.org
From: Anton Blanchard <anton@samba.org>
To: "Smart, James" <James.Smart@Emulex.Com>
Cc: 'Christoph Hellwig' <hch@infradead.org>,
	"'linux-scsi@vger.kernel.org'" <linux-scsi@vger.kernel.org>
Subject: Re: Emulex lpfcdriver v8.0.2 available
Date: Wed, 2 Jun 2004 04:52:00 +1000	[thread overview]
Message-ID: <20040601185200.GB4239@krispykreme> (raw)
In-Reply-To: <3356669BBE90C448AD4645C843E2BF28034F93F1@xbl.ma.emulex.com>


Hi James,

> FYI - we've updated the image on SourceForge. In this drop, we've addressed
> many of your comments, including those on moving the discovery tasklet to a
> thread, and moving the timer functions inline. The full changelog is up on
> SF.

We gave it a spin on a ppc64 box and managed to hit the following
backtrace when doing heavy writeout. I wonder if a number of these
atomic allocations (in lpfc_get_scsi_buf etc.) should be using a
mempool; otherwise we could deadlock on writeout.
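[For reference, the mempool API preallocates a minimum reserve of objects at setup time, so an atomic allocation in the writeout path can fall back to the reserve when the page allocator fails, rather than returning NULL and stalling I/O completion. A minimal sketch of how the scsi_buf slab could be backed by a mempool; the names lpfc_scsi_buf_cachep, lpfc_scsi_buf_pool, and LPFC_MIN_BUFS are illustrative, not the driver's actual symbols.]

```c
/* Illustrative sketch only -- symbol names are hypothetical. */
#define LPFC_MIN_BUFS 16

static mempool_t *lpfc_scsi_buf_pool;

/* At attach time: reserve LPFC_MIN_BUFS objects from the slab cache.
 * mempool_alloc_slab/mempool_free_slab are the stock slab-backed
 * allocator callbacks provided by the mempool API. */
lpfc_scsi_buf_pool = mempool_create(LPFC_MIN_BUFS,
				    mempool_alloc_slab, mempool_free_slab,
				    lpfc_scsi_buf_cachep);

/* In lpfc_get_scsi_buf(): mempool_alloc() first tries the underlying
 * slab allocation and, if that fails, dips into the preallocated
 * reserve.  With GFP_ATOMIC it can still return NULL once the reserve
 * is exhausted, but forward progress is guaranteed as long as
 * in-flight buffers are eventually freed back to the pool. */
psb = mempool_alloc(lpfc_scsi_buf_pool, GFP_ATOMIC);

/* On command completion: */
mempool_free(psb, lpfc_scsi_buf_pool);
```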

Anton

md0_raid5: page allocation failure. order:0, mode:0x20
Call Trace:
[c000000000078e80] .__alloc_pages+0x448/0x450 (unreliable)
[c00000000009be0c] .alloc_pages_current+0xc0/0xe4
[c000000000078edc] .__get_free_pages+0x54/0x1e0
[c00000000007df40] .kmem_getpages+0x48/0x210
[c00000000007e3f8] .cache_alloc_refill+0x2f0/0x67c
[c00000000007e8dc] .kmem_cache_alloc+0x74/0x78
[d0000000003bdd98] .lpfc_get_scsi_buf+0x40/0x2bc [lpfc]
[d0000000003be1c4] .lpfc_queuecommand+0xa0/0xac4 [lpfc]
[d00000000006e06c] .scsi_dispatch_cmd+0x22c/0x378 [scsi_mod]
[d000000000074b74] .scsi_request_fn+0x2e4/0x590 [scsi_mod]
[c00000000021b5c0] .blk_run_queue+0x60/0x90
[d000000000073dd8] .scsi_run_queue+0x134/0x2a8 [scsi_mod]
[d000000000075068] .scsi_end_request+0x12c/0x168 [scsi_mod]
[d000000000075220] .scsi_io_completion+0x17c/0x524 [scsi_mod]
[d00000000003a5fc] .sd_rw_intr+0xac/0x304 [sd_mod]
[d00000000006d8b4] .scsi_finish_command+0xd8/0x130 [scsi_mod]
[d00000000006e4e8] .scsi_softirq+0x178/0x188 [scsi_mod]
[c000000000053120] .__do_softirq+0xa8/0x16c
[c0000000000171c8] .call_do_softirq+0x14/0x24
[c000000000011ddc] .do_softirq+0x90/0xa0
[c000000000012b54] .do_IRQ+0xf8/0x118
[c00000000000b034] HardwareInterrupt_entry+0x14/0x18
--- Exception: 500 at .__make_request+0x428/0x810
    LR = .__make_request+0x3b8/0x810
[c0000000002197c4] .generic_make_request+0x138/0x21c
[d0000000004485d4] .handle_stripe+0x1160/0x13d8 [raid5]
[d000000000449f7c] .raid5d+0xe0/0x248 [raid5]
[c00000000028e28c] .md_thread+0x204/0x294
[c000000000017764] .kernel_thread+0x4c/0x68


Thread overview: 9+ messages
2004-05-26 23:19 Emulex lpfcdriver v8.0.2 available Smart, James
2004-06-01 18:52 ` Anton Blanchard [this message]
2004-06-01 19:01   ` 'Christoph Hellwig'
2004-06-02 13:03 Smart, James
2004-06-08  7:19 ` Anton Blanchard
2004-06-08 14:29 Smart, James
2004-06-09  1:39 ` Anton Blanchard
2004-06-08 14:54 Smart, James
2004-06-08 17:03 ` 'Christoph Hellwig'
