public inbox for linux-kernel@vger.kernel.org
From: William Lee Irwin III <wli@holomorphy.com>
To: "Gregory K. Ruiz-Ade" <gregory@castandcrew.com>
Cc: linux-kernel@vger.kernel.org
Subject: Re: 2.4.20 instability on bigmem systems?
Date: Thu, 13 Mar 2003 16:42:56 -0800
Message-ID: <20030314004256.GI20188@holomorphy.com>
In-Reply-To: <200303131627.22572.gregory@castandcrew.com>

On Thu, Mar 13, 2003 at 04:27:22PM -0800, Gregory K. Ruiz-Ade wrote:
> The primary problem:  Whenever any process (or set of processes) initiates 
> intensive disk I/O, the system grinds to a halt, kswapd and kupdated 
> consume upwards of 40% to 60% CPU each, and the system load average can 
> jump to upwards of 21.00. The problem can be replicated with a simple find 
> command ("find / -print" seems to do it nicely).
> I have had two rather painful nights dealing with this (Monday and Tuesday 
> nights).  Luckily, I have a serial null-modem cable rigged up between the 
> troubled server and another server, and was able to capture all the info 
> from the Magic Sysrq commands that I could.
> Full details are at http://castandcrew.com/~gregory/lkmlstuff/burpr/2.4.20

Hmm, /proc/slabinfo would be very helpful, as well as /proc/meminfo.
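
One way to get those diagnostics onto disk while the machine is wedged is a
small sampling loop like the sketch below. The output path, sample count, and
interval here are illustrative assumptions, not anything from the original
mail:

```shell
#!/bin/sh
# Sketch only: periodically snapshot /proc/meminfo and /proc/slabinfo
# so the state during a stall is recorded for later inspection.
# Output path, sample count, and interval are illustrative assumptions.
OUT=/tmp/memdiag            # hypothetical output directory
mkdir -p "$OUT"
i=0
while [ "$i" -lt 2 ]; do
    {
        date
        cat /proc/meminfo
        # /proc/slabinfo may be readable only by root on some kernels
        cat /proc/slabinfo 2>/dev/null || true
    } > "$OUT/sample.$i"
    i=$((i + 1))
    sleep 1
done
echo "samples written to $OUT"
```

Run from a spare (serial) console or cron while reproducing the stall, this
leaves timestamped snapshots that can be mailed to the list afterwards.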


On Thu, Mar 13, 2003 at 04:27:22PM -0800, Gregory K. Ruiz-Ade wrote:
> I've included the kernel config, the kernel and initrd images, the system 
> map file, output from "ps auxfww" and a couple screen scrapings from top, 
> and captures from magic sysrq commands from both crashes.
> I had problems like this with 2.4.19, and was directed to apply a patch to 
> inode.c, which appears to be part of a patch set for 2.4.19pre9aa2.  I've 
> archived it at:
> http://castandcrew.com/~gregory/lkmlstuff/burpr/2.4.19/patches/10_inode-highmem-2
> For 2.4.19, this solves _most_ of the stability issues, but I still have to 
> work with the LVM people and possibly whomever is responsible for the VM in 
> 2.4.19/2.4.20 to track down some kernel oopses (possibly a separate 
> problem).
> I will happily provide whatever other information is needed, though my 
> opportunities to test things on the machine in question are limited by the 
> fact that it's a production server.

If it's general disk I/O, you might need the buffer_head (bh) fixes, i.e.
the memclass-related work or something like it. Can't be too sure until the
slabinfo and meminfo output materialize.
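
If buffer_head growth is the suspect, watching its slab line while the
reproducer ("find / -print") runs in another terminal would show it
directly. A rough sketch, assuming the 2.4-era cache name `buffer_head`
(the slabinfo field layout varies by kernel version):

```shell
#!/bin/sh
# Sketch only: log the buffer_head slab line a few times while the
# reproducer runs elsewhere. Cache name, log path, and interval are
# assumptions; /proc/slabinfo may require root on some kernels.
LOG=/tmp/bh_watch.log       # hypothetical log path
: > "$LOG"
for n in 1 2 3; do
    grep '^buffer_head ' /proc/slabinfo 2>/dev/null >> "$LOG" \
        || echo "sample $n: slabinfo unreadable (may need root)" >> "$LOG"
    sleep 1
done
cat "$LOG"
```

A steadily climbing object count across samples during the `find` run would
point at buffer_head accumulation rather than some other cache.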


-- wli

Thread overview: 13+ messages
2003-03-14  0:27 2.4.20 instability on bigmem systems? Gregory K. Ruiz-Ade
2003-03-14  0:42 ` William Lee Irwin III [this message]
2003-03-14  1:45   ` Gregory K. Ruiz-Ade
2003-03-14  1:53     ` William Lee Irwin III
2003-03-14  3:55       ` Gregory K. Ruiz-Ade
2003-03-14  4:13         ` William Lee Irwin III
2003-03-14 17:31           ` Gregory K. Ruiz-Ade
2003-03-14 20:08             ` William Lee Irwin III
2003-03-17  2:15               ` Gregory K. Ruiz-Ade
2003-03-17  2:26                 ` William Lee Irwin III
2003-03-17  4:59                   ` Gregory K. Ruiz-Ade
2003-03-17  5:38                     ` Gregory K. Ruiz-Ade
2003-03-14 18:31 ` Martin J. Bligh
