public inbox for linux-kernel@vger.kernel.org
From: Andrew Morton <akpm@zip.com.au>
To: ptb@it.uc3m.es
Cc: linux kernel <linux-kernel@vger.kernel.org>
Subject: Re: scheduling with io_lock held in 2.4.6
Date: Sat, 18 Aug 2001 15:13:03 -0700	[thread overview]
Message-ID: <3B7EE86F.49906C18@zip.com.au>
In-Reply-To: <200108182157.f7ILvt832092@oboe.it.uc3m.es>

"Peter T. Breuer" wrote:
> 
> "A month of sundays ago Andrew Morton wrote:"
> > "Peter T. Breuer" wrote:
> > >
> > > "Andrew Morton wrote:"
> > > > "Peter T. Breuer" wrote:
> > > > >   Aug 17 01:41:01 xilofon kernel: Scheduling with io lock held in process 1141
> > >
> > > > Replace the printk with a BUG(), feed the result into ksymoops.
> > > > Or use show_trace(0).
> 
> > Suggest you add the BUG() when it occurs, feed it into ksymoops
> > and post it.  All will be revealed.
> 
> Whilst I've viewed dozens of the oopses, the call trace hasn't
> enlightened me (there's usually an interrupt in it, which throws me),

Yes, there are often interrupt entrails on the stack.  If you can,
generate that trace and send it....
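As a sketch of what that instrumentation might look like (2.4 kernel
context, not standalone code; io_request_lock_pid is Peter's own debug
variable, and the surrounding condition is reconstructed from the thread):

```c
/* Sketch only.  Replacing the printk with BUG() oopses at the
 * offending schedule() call, giving an EIP and call trace that
 * ksymoops can decode; show_trace(0) dumps the trace without
 * killing the process.
 */
if (atomic_read(&io_request_lock_pid) == current->pid) {
        printk("Scheduling with io lock held in process %d\n",
               current->pid);
        show_trace(0);          /* non-fatal: just dump the stack */
        /* ...or BUG(); for a full oops to feed into ksymoops */
}
```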

> and adding the oops seemed to destabilize the kernel. Does "atomic_set"
> really work? It seems to be just an indirected write. I am suspicious
> that the atomic writes I tried to do for the data aren't atomic. The
> expansion of atomic_set(&io_request_lock_pid, current_pid()) is:
> 
>  ((( &io_request_lock_pid )->counter) = (  current_pid() )) ;

That's OK - this is the x86 version of atomic_set and yes, we
assume that the compiler will generate a single write for the
store.  All the x86 MP cache coherency stuff takes care of
the atomicity wrt other CPUs.
 
> ...
> I'll see ... btw, the problem seemed to track further to
> blkdev_release_request. Aha aha aha aha aha ... the comment at the
> top of that function says that not only must the io_request_lock be
> held but irqs must be disabled. Do they mean locally or globally? I'm
> holding them off locally (via spin_lock_irqsave).

Locally.  You need to take spin_lock_irqsave(&io_request_lock, flags)
before calling that function.  It doesn't call anything which
sleeps, either.

It may simplify your oops tracing to remove all the `inline'
qualifiers in ll_rw_blk.c, BTW.  They tend to obscure things.
They should in fact be taken out permanently - someone went
absolutely insane there.

-
