public inbox for linux-scsi@vger.kernel.org
From: Doug Ledford <dledford@redhat.com>
To: Patrick Mansfield <patmans@us.ibm.com>
Cc: linux-scsi@vger.kernel.org
Subject: Re: GFP_ATOMIC allocations...
Date: Wed, 28 Aug 2002 21:42:15 -0400	[thread overview]
Message-ID: <20020828214215.A31167@redhat.com> (raw)
In-Reply-To: <20020828174737.A27554@eng2.beaverton.ibm.com>; from patmans@us.ibm.com on Wed, Aug 28, 2002 at 05:47:37PM -0700

On Wed, Aug 28, 2002 at 05:47:37PM -0700, Patrick Mansfield wrote:
> So, you think all (or most) of the GFP_ATOMIC's in scsi_scan.c should
> be GFP_KERNEL?

Yep.  Until such time as an LLDD actually calls scsi_scan from an interrupt 
handler to handle a fiber fabric that has just come back up, it is safe 
for this code to use GFP_KERNEL.  As for the fiber channel issue, I 
actually think the driver needs to notify the eh thread of the change in 
loop state and let it do the rescan from its own process context instead 
of interrupt context, so this won't be an issue when we support fiber 
loop transitions either.

> All the kmalloc calls should be at boot time, or via
> insmod.

Right, process context with no locks held.

> I was wondering about them, but left them to match the previous
> scsi_scan.c code.

I figured.  That's why I pointed it out ;-)

> Do GFP_KERNEL alloc failures during boot time just return NULL?

GFP_ATOMIC returns NULL on failure; GFP_KERNEL *should* never fail until 
we get to a true OOM condition, because it will block and wait for the vm 
subsystem to free up some space.  Under true OOM conditions it will 
return NULL AFAIK.

> (Given that there is really no memory left.) I'd hope so, but was
> never sure.
> 
> I suppose I could change them and boot with mem=something-low and
> see what happens.

I've known people who had actual, real problems because of the GFP_ATOMIC 
allocations in this code area.  It was related to bringing up a fiber 
controller that had 100s of disks hanging off of it and therefore wanted 
to do a *LOT* of atomic allocations in a short period of time, but it was 
a real problem nonetheless.  Making the allocations non-atomic would 
allow the scanning to block waiting on more RAM instead of bailing out on 
devices.

-- 
  Doug Ledford <dledford@redhat.com>     919-754-3700 x44233
         Red Hat, Inc. 
         1801 Varsity Dr.
         Raleigh, NC 27606
  


Thread overview: 15+ messages
2002-08-29  0:25 GFP_ATOMIC allocations Doug Ledford
2002-08-29  0:47 ` Patrick Mansfield
2002-08-29  1:42   ` Doug Ledford [this message]
2002-08-29 10:47     ` Alan Cox
2002-08-29 15:58       ` Doug Ledford
2002-08-29 17:10     ` Patrick Mansfield
2002-08-30 16:22     ` Patrick Mansfield
2002-08-30 16:46       ` James Bottomley
2002-08-30 18:58         ` Doug Ledford
2002-08-30 21:55           ` [PATCH] " Patrick Mansfield
2002-09-03 14:57             ` James Bottomley
2002-09-03 16:07       ` Pete Zaitcev
  -- strict thread matches above, loose matches on Subject: below --
2002-08-29 15:59 Bryan Henderson
2002-08-29 16:25 ` Doug Ledford
2002-08-29 16:50 ` Alan Cox
