public inbox for linux-kernel@vger.kernel.org
From: Andrew Morton <akpm@digeo.com>
To: Con Kolivas <conman@kolivas.net>
Cc: linux kernel mailing list <linux-kernel@vger.kernel.org>
Subject: Re: Pathological case identified from contest
Date: Wed, 16 Oct 2002 19:49:15 -0700	[thread overview]
Message-ID: <3DAE252B.A9A5F6B1@digeo.com> (raw)
In-Reply-To: <1034820820.3dae1cd4bc0e3@kolivas.net>

Con Kolivas wrote:
> 
> I found a pathological case in 2.5 while running contest with process_load
> recently after checking the results which showed a bad result for 2.5.43-mm1:
> 
> 2.5.43-mm1              101.38  72%     42      31%
> 2.5.43-mm1              102.90  75%     34      28%
> 2.5.43-mm1              504.12  14%     603     85%
> 2.5.43-mm1              96.73   77%     34      26%
> 
> This was very strange, so I looked into it further.
> 
> The default for process_load is this command:
> 
> process_load --processes $nproc --recordsize 8192 --injections 2
> 
> where $nproc=4*num_cpus
> 
> When I changed recordsize to 16384, many of the 2.5 kernels started exhibiting
> the same behaviour. While the machine was apparently still alive and would
> respond to my request to abort, the kernel compile would all but stop while
> process_load just continued without allowing anything to happen from kernel
> compilation for up to 5 minutes at a time. This doesn't happen with any 2.4 kernels.
> 

Well it doesn't happen on my test machine (UP or SMP).  I tried
various recordsizes.  It's probably related to HZ, memory bandwidth
and the precise timing at which things happen.

The test describes itself thusly:

 *  This test generates a load which simulates a process-loaded system.
 *
 *  The test creates a ring of processes, each connected to its predecessor
 *  and successor by a pipe.  After the ring is created, the parent process
 *  injects some dummy data records into the ring and then joins.  The
 *  processes pass the data records around the ring until they are killed.
 *

It'll be starvation in the CPU scheduler I expect.  For some reason
the ring of piping processes is just never giving a timeslice to
anything else.  Or maybe something to do with the exceptional
wakeup strategy which pipes use.

Don't know, sorry.  One for the kernel/*.c guys.


Thread overview: 9+ messages
2002-10-17  2:13 Pathological case identified from contest Con Kolivas
2002-10-17  2:49 ` Andrew Morton [this message]
2002-10-17  4:26   ` Con Kolivas
2002-10-17  7:16     ` Con Kolivas
2002-10-17  7:35       ` Andrew Morton
2002-10-17 17:15         ` Rik van Riel
2002-10-20  2:59         ` Con Kolivas
2002-10-20  3:05           ` Andrew Morton
2002-10-20  6:27             ` Con Kolivas
