public inbox for linux-kernel@vger.kernel.org
* "things are about right" kernel test?
From: Timothy Miller @ 2003-11-14 17:11 UTC (permalink / raw)
  To: Linux Kernel Mailing List

Having recently built a new PC for running Linux, one of the things I 
wanted to do right away was test to make sure that everything was 
performing as it should.  Periodically, someone will post to the list, 
complaining about something or other being slow, and then another person 
responds with a simple kernel parameter change to fix it.  Well...

What I want to know is whether any tool has been developed to determine 
if various aspects of system performance are within tolerance (say, I/O 
scheduler latency/throughput, process scheduler latency/throughput, 
network, and other things which can have performance issues).

My system seems to be just fine, but honestly, I can't really be sure. 
Despite the fact that it's on a mirrored RAID of two WD1200JB drives, it 
doesn't SEEM (insert comment about flawed human perception) to boot much 
faster than my last Linux box.  This is an example of something I would 
like an objective analysis of.

Obviously, one way to check this is to run a myriad of performance 
benchmarks and then compare them to comparable systems, etc.  But this 
is overkill for what I think really only requires a simple "quick and 
dirty sanity check".

If this kind of tool doesn't exist, then I would be interested in taking 
suggestions to get started on this.

Some Q&D tests that I think should be run might include:

- Check disk perf by reading and writing a file larger than RAM.  We 
sanity check this by comparing against results from other systems.

- Check memory perf.  We should be able to test different kinds of 
systems with different kinds of RAM and have the program check to see if 
actual system performance is sane.

- Don't know what to do about network performance without a special setup.


I recall some people mentioning that if they have 1GiB of RAM, something 
(I forget what) performs badly.  They set it to 900-some MiB, and then 
things work better.  A test for that with built-in tips for solving the 
problem might be helpful.
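The workaround being referred to is presumably capping the kernel's view of RAM at boot with the mem= parameter — that matches the "set it to 900-some MiB" description, since the i386 lowmem boundary sits at 896 MB. For example, in lilo.conf:

```
image=/boot/vmlinuz
	label=linux
	append="mem=896M"
```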

In fact, there are numerous things which I have seen mentioned which 
require tweaks and require simple suggestions to fix.


In addition to being a sanity check, this program could act as sort of a 
FAQ for people with common problems.  They run it, it finds the problem, 
and then tells them what to do about it.  Furthermore, this can help 
kernel developers with identifying problems with new systems (KT600, for 
example).


Right now, I'm going to go off and code up some simple stuff to 
demonstrate that I'm serious about this.  :)




* Re: "things are about right" kernel test?
From: Maciej Zenczykowski @ 2003-11-14 17:54 UTC (permalink / raw)
  To: Timothy Miller; +Cc: Linux Kernel Mailing List

> I recall some people mentioning that if they have 1GiB of RAM, something 
> (I forget what) performs badly.  They set it to 900-some MiB, and then 
> things work better.  A test for that with built-in tips for solving the 
> problem might be helpful.

Regarding the 1GB limit (or rather 1024-128 MB = 896 MB limit), this is 
due to Kernel space beginning at 0xC0000000 (giving 1GB of ram for the 
kernel from 0xC0000000..0xFFFFFFFF) of which 128 MB are reserved for 
vmalloc and other uses...  The question is: is there any reason why this 
0xC0000000 couldn't be lowered by 128 MB (to 0xB8000000)?

This would allow using the full 1GB of RAM without using highmem.  As I
understand it, using highmem does involve some performance loss.
Effectively, for me this means that PAGE_OFFSET in include/asm-i386/page.h
should be set to either 4GB-128MB-512MB (allowing 512MB of RAM without
highmem) or 4GB-128MB-1GB (allowing 1GB of RAM without highmem).  After
all, I'd expect real-life memory configurations to be power-of-2
situations in the majority of cases - which means you're likely to hit
512MB or 1GB - so having the limit at 896 MB seems pointless.  1GB memory
configurations are becoming more and more common, and they are just barely
above the highmem cut-off point; changing it would fix all the 1GB
problems and not really affect anything else (like those machines with
even more RAM).  Sure, this limits the memory available for user processes
(from 3 GB to 2.875 GB), but then the true limit in user space (if I
understand this correctly) is that mmap starts mapping at 1GB anyway
[TASK_UNMAPPED_BASE = TASK_SIZE/3 = PAGE_OFFSET/3 = 1GB currently]; if
this were changed to a lower value (likely a static 16MB, or even 1MB+64KB,
would be OK) then we'd effectively increase the amount of mmapping you
could do (sure, this doesn't affect brk, oh well).

Comments,

MaZe.


