* Problem with Asus P4C800-DX and P4 -Northwood-
@ 2005-07-25 0:50 Andreas Baer
2005-07-25 5:12 ` Willy Tarreau
0 siblings, 1 reply; 17+ messages in thread
From: Andreas Baer @ 2005-07-25 0:50 UTC (permalink / raw)
To: linux-kernel
Hi everyone,
First, sorry for this BIG post, but it seems I have no other choice. :)
I have an Asus P4C800-DX with a P4 2.4 GHz "Northwood" processor (512 KB
L2 cache, the slowest P4 that supports HyperThreading) and 1 GB of DDR400
RAM. I'm also running S-ATA disks at about 50 MB/s (just to show that it
shouldn't be due to hard disk speed). Everything was bought back in 2003,
and I recently upgraded to the latest BIOS version. I've installed Gentoo
Linux and WinXP with dual-boot on this system.
Now to my problem:
I'm currently developing a little database in C++ (it currently runs under
Windows and Linux) that internally builds an R-Tree and does a lot of
equality tests and other time-consuming checks. As a performance test, I
ran it with 200000 entries, and it took about 3 minutes to complete under
Gentoo Linux.
So I ran the same test under Windows on the same machine, and it took about
30(!) seconds. I was a little surprised by this result, so I ran several
tests on different machines, such as an Athlon XP 2000+ platform and my
P4 3 GHz "Prescott" notebook, and they all finished in about 30 seconds or
less. Then I began to search for errors or any misconfiguration in Gentoo
and in my code, and for people on the internet who have had similar
experiences with this hardware configuration. I thought I had a problem
with a broken gcc or with libraries like glibc or libstdc++, so I
recompiled my whole system with the stable gcc 3.3.5 release, but that
didn't change anything. I also tried Ubuntu and SUSE LiveCDs to verify
that it has nothing to do with Gentoo or my kernel version; they had the
same problem and ran the test in about 3 minutes.
Currently I'm at a loss about what to do. I'm beginning to think this may
be a kernel problem, because I have no problems under Windows, and it
doesn't matter what software or configuration I change in Gentoo. I'm
currently running kernel 2.6.12, but the LiveCDs had other kernels.
HyperThreading (HT) is also not the reason for the loss of performance: I
tried disabling it and building a uniprocessor kernel, but that didn't
solve the problem.
If you need some output of my configuration/log files or anything like
that, just mail me.
Is it possible that the kernel lacks support for the P4 "Northwood" core?
Maybe only this particular one? Could I solve the problem by changing the
processor to a "Prescott" core? Perhaps someone could tell me whether that
would make any sense.
Thanks in advance for anything that could help.
...sorry for my bad English :)
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 0:50 Problem with Asus P4C800-DX and P4 -Northwood- Andreas Baer
@ 2005-07-25 5:12 ` Willy Tarreau
2005-07-25 13:10 ` Andreas Baer
0 siblings, 1 reply; 17+ messages in thread
From: Willy Tarreau @ 2005-07-25 5:12 UTC (permalink / raw)
To: Andreas Baer; +Cc: linux-kernel
Hi,
On Mon, Jul 25, 2005 at 02:50:05AM +0200, Andreas Baer wrote:
> Hi everyone,
>
> First I want to say sorry for this BIG post, but it seems that I have no
> other chance. :)
It's not big enough: you did not explain to us what your database does or
how it works, what type of resource it consumes most, or include any
vmstat capture during operation. There are many possibilities here:
- poor optimisation from gcc => CPU bound
- many random disk accesses => I/O bound, but changing/tuning the I/O
  scheduler could help
- intensive disk reads => perhaps your Windows and Linux partitions are
  on the same disk and the Windows one comes first, so you get 50 MB/s
  on the Windows partition and 25 MB/s on the Linux one?
- task scheduling: if your application is multi-process/multi-threaded,
  it is possible that you hit some corner cases.
So please start "vmstat 1" before your 3-minute request, and stop it at
the end, so that it covers all the work. It will give us much more useful
information.
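The capture can be scripted so the sampling brackets the whole run. A
minimal sketch, where "./dbtest" is a made-up placeholder for the real
benchmark binary (here "sleep 2" stands in for it so the script runs
anywhere):

```shell
#!/bin/sh
# Start vmstat in the background, run the workload, then stop the sampler.
vmstat 1 > vmstat.log 2>/dev/null &
sampler=$!
sleep 2                       # replace with: ./dbtest --entries 200000
kill "$sampler" 2>/dev/null
# vmstat.log now covers the whole run, one sample per second.
```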
Regards,
Willy
(...)
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 5:12 ` Willy Tarreau
@ 2005-07-25 13:10 ` Andreas Baer
2005-07-25 14:00 ` Paulo Marques
2005-07-25 15:24 ` Willy Tarreau
0 siblings, 2 replies; 17+ messages in thread
From: Andreas Baer @ 2005-07-25 13:10 UTC (permalink / raw)
To: Willy Tarreau; +Cc: linux-kernel
Hi,
Thanks for the reply.
Sorry, I've never used vmstat before, so next time I'll send the output in
the first mail :)
Willy Tarreau wrote:
> Hi,
>
> On Mon, Jul 25, 2005 at 02:50:05AM +0200, Andreas Baer wrote:
>
>>Hi everyone,
>>
>>First I want to say sorry for this BIG post, but it seems that I have no
>>other chance. :)
>
>
> It's not big enough, you did not explain us what your database does nor
> how it does work, what type of resource it consumes most, any vmstat
> capture during operation. There are so many possibilities here :
> - poor optimisation from gcc => CPU bound
I doubt it, because I ran the same binaries (no recompilation) on both
systems. (You will find the vmstat output below.)
> - many random disk accesses => I/O bound, but changing/tuning the I/O
> scheduler could help
Indeed, the data is stored in random access files.
> - intensive disk reads => perhaps your windows and linux partitions are
> on the same disk and windows is the first one, then you have 50 MB/s
> on the windows one and 25 MB/s on the linux one ?
I have (S-ATA-150 Disk 80GB)
/dev/sda: 50.59 MB/sec
/dev/sda1: 50.62 MB/sec (Windows FAT32)
/dev/sda6: 41.63 MB/sec (Linux ReiserFS)
On the notebook I have only an ATA-100 disk with 80 GB, and it shows the
same drop.
Here I have
/dev/hda: 26.91 MB/sec
/dev/hda1: 26.90 MB/sec (Windows FAT32)
/dev/hda7: 17.89 MB/sec (Linux EXT3)
Could you give me a reason how this is possible?
> - task scheduling : if your application is multi-process/multi-thread,
> it is possible that you hit some corner cases.
At most 2 threads are started, and they mostly do background activity or
nothing at all, so they should have nothing to do with this problem.
> So please start "vmstat 1" before your 3min request, and stop it at the
> end, so that it covers all the work. It will tell us many more useful
> information.
all output below...
> Regards,
> Willy
Vmstat for Notebook P4 3.0 GHz 512 MB RAM:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 0 179620 14812 228832 0 0 33 21 557 184 3 1 95 1
2 0 0 178828 14812 228832 0 0 0 0 1295 819 6 2 92 0
1 0 0 175948 14812 228832 0 0 0 0 1090 111 37 17 46 0
1 0 0 175948 14812 228832 0 0 0 0 1064 101 23 28 50 0
1 0 0 175948 14812 228832 0 0 0 0 1066 100 20 31 49 0
1 0 0 175980 14820 228824 0 0 0 48 1066 119 20 30 50 0
1 0 0 175980 14820 228824 0 0 0 0 1067 86 19 31 50 0
1 0 0 175988 14820 228824 0 0 0 0 1064 115 20 30 50 0
1 0 0 175988 14820 228824 0 0 0 0 1065 107 20 31 50 0
1 0 0 176020 14820 228824 0 0 0 0 1063 111 20 30 50 0
1 0 0 176020 14820 228824 0 0 0 0 1066 104 21 30 49 0
1 0 0 176276 14828 228816 0 0 0 12 1065 140 21 30 50 0
1 0 0 176276 14828 228816 0 0 0 0 1065 93 19 31 50 0
1 0 0 176052 14828 228816 0 0 0 0 1063 119 23 28 50 0
1 0 0 175796 14828 228816 0 0 0 0 1067 101 23 27 50 0
1 0 0 175860 14828 228816 0 0 0 0 1064 117 22 29 50 0
1 0 0 175860 14828 228816 0 0 0 0 1066 103 22 29 49 0
1 0 0 175860 14836 228808 0 0 0 16 1065 119 21 30 50 0
1 0 0 175860 14836 228808 0 0 0 0 1063 104 21 29 50 0
1 0 0 175860 14836 228808 0 0 0 0 1063 111 19 31 50 0
1 0 0 176180 14836 228808 0 0 0 0 1073 124 22 30 49 0
1 0 0 176052 14836 228808 0 0 0 4 1113 183 20 32 49 0
0 0 0 180052 14844 228800 0 0 0 12 1269 1083 17 20 64 0
0 0 0 180164 14844 228800 0 0 0 0 1255 1311 2 2 96 0
Vmstat for Desktop P4 2.4 GHz 1024 MB RAM:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 0 0 594688 39340 292228 0 0 52 29 581 484 5 2 92 2
1 0 0 591208 39340 292228 0 0 0 68 1116 545 15 14 71 0
1 0 0 591208 39340 292228 0 0 0 0 1090 112 3 48 50 0
1 0 0 591208 39340 292228 0 0 0 0 1089 124 2 48 50 0
1 0 0 591208 39340 292228 0 0 0 0 1089 114 3 48 50 0
1 0 0 591208 39340 292228 0 0 0 0 1090 120 1 49 50 0
1 0 0 591208 39340 292228 0 0 0 24 1094 138 2 49 50 0
1 0 0 591256 39340 292228 0 0 0 0 1090 118 2 48 50 0
1 0 0 591256 39340 292228 0 0 0 0 1092 125 2 48 50 0
1 0 0 591256 39340 292228 0 0 0 0 1089 112 2 48 50 0
1 0 0 591256 39340 292228 0 0 0 0 1089 118 2 49 50 0
1 0 0 591304 39340 292228 0 0 0 16 1094 129 2 48 50 0
1 0 0 591304 39340 292228 0 0 0 0 1090 123 2 49 50 0
1 0 0 591304 39340 292228 0 0 0 0 1089 127 3 48 50 0
1 0 0 591304 39340 292228 0 0 0 0 1090 106 2 48 50 0
1 0 0 591320 39340 292228 0 0 0 0 1089 110 2 49 49 0
1 0 0 591320 39340 292228 0 0 0 16 1093 124 3 48 50 0
1 0 0 591320 39340 292228 0 0 0 0 1090 122 2 49 50 0
1 0 0 591320 39340 292228 0 0 0 0 1092 133 3 47 50 0
1 0 0 591320 39340 292228 0 0 0 0 1089 118 2 49 50 0
1 0 0 591320 39340 292228 0 0 0 0 1090 128 2 48 50 0
1 0 0 591320 39340 292228 0 0 0 16 1094 139 3 48 50 0
1 0 0 591320 39340 292228 0 0 0 0 1089 125 2 49 50 0
1 0 0 591320 39340 292228 0 0 0 0 1091 126 3 48 50 0
1 0 0 591320 39340 292228 0 0 0 0 1089 118 2 48 50 0
1 0 0 591320 39340 292228 0 0 0 0 1089 123 2 49 50 0
1 0 0 591320 39340 292228 0 0 0 16 1094 125 3 48 50 0
1 0 0 591320 39340 292228 0 0 0 0 1089 122 2 49 50 0
1 0 0 591320 39340 292228 0 0 0 0 1091 129 3 48 49 0
1 0 0 591320 39340 292228 0 0 0 0 1091 118 3 48 50 0
1 0 0 591320 39340 292228 0 0 0 0 1090 120 2 49 50 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 591304 39340 292228 0 0 0 16 1092 134 3 48 50 0
1 0 0 591304 39340 292228 0 0 0 0 1090 116 2 48 49 0
1 0 0 591304 39340 292228 0 0 0 0 1089 125 2 48 49 0
1 0 0 591304 39340 292228 0 0 0 0 1090 112 3 48 49 0
1 0 0 591304 39340 292228 0 0 0 0 1091 114 3 48 50 0
1 0 0 591304 39340 292228 0 0 0 16 1092 132 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1089 131 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1092 132 3 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1089 111 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 117 3 48 49 0
1 0 0 591288 39340 292228 0 0 0 16 1093 130 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1089 129 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1089 122 3 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 113 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1091 109 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 16 1092 120 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 117 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 126 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 122 3 48 50 0
2 1 0 591288 39340 292228 0 0 0 8 1091 124 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 8 1093 121 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1089 126 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1091 123 2 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1089 114 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 12 1092 135 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 4 1092 121 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1089 127 2 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 126 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1090 112 3 48 49 0
1 0 0 591288 39340 292228 0 0 0 12 1092 136 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 4 1090 124 2 49 49 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 591288 39340 292228 0 0 0 0 1091 129 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1089 124 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1089 111 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 12 1093 121 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 4 1090 113 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1090 131 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1092 134 1 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1089 120 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 12 1092 129 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 4 1091 118 3 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1089 124 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 125 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 130 4 47 50 0
1 0 0 591288 39340 292228 0 0 0 20 1093 130 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 4 1091 118 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1091 134 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1092 127 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 126 3 49 49 0
1 0 0 591288 39340 292228 0 0 0 20 1093 134 3 47 49 0
1 0 0 591288 39340 292228 0 0 0 4 1090 128 4 47 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 126 3 47 50 0
1 0 0 591288 39340 292228 0 0 0 0 1089 121 2 50 49 0
1 0 0 591288 39340 292228 0 0 0 0 1091 112 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 16 1096 125 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 109 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 118 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1093 125 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 112 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 12 1093 126 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 4 1093 114 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1090 120 2 49 50 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 591288 39340 292228 0 0 0 0 1090 119 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1091 108 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 12 1093 117 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 4 1091 111 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 118 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1092 122 3 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1091 109 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 16 1095 125 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1090 110 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1091 120 3 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 117 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1090 110 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 16 1095 121 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1092 109 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 120 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1093 123 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 110 2 48 49 0
2 0 0 591288 39340 292228 0 0 0 16 1095 128 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 117 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 116 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 124 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1091 108 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 16 1095 131 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 112 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 120 3 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1093 129 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 114 3 48 50 0
1 0 0 591288 39340 292228 0 0 0 20 1095 134 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1092 114 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1090 116 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 119 2 49 50 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 591288 39340 292228 0 0 0 0 1091 114 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 16 1095 128 2 49 49 0
1 0 0 591288 39340 292228 0 0 0 0 1089 112 2 48 49 0
1 0 0 591288 39340 292228 0 0 0 0 1090 116 2 49 50 0
1 0 0 591288 39340 292228 0 0 0 0 1091 121 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 124 8 43 49 0
1 0 0 590792 39340 292228 0 0 0 16 1095 137 4 47 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 117 3 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 122 3 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1091 117 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 108 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 16 1094 122 3 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 115 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 132 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1091 124 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 118 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 24 1094 124 2 49 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 113 3 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 116 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 123 2 49 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 116 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 16 1113 153 3 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1275 349 3 49 49 0
1 0 0 590776 39340 292228 0 0 0 0 1089 123 2 49 50 0
1 0 0 590776 39340 292228 0 0 0 0 1092 117 2 49 50 0
1 0 0 590776 39340 292228 0 0 0 0 1089 123 3 48 49 0
1 0 0 590776 39340 292228 0 0 0 16 1093 139 3 48 50 0
1 0 0 590776 39340 292228 0 0 0 0 1090 113 2 49 50 0
1 0 0 590776 39340 292228 0 0 0 0 1090 121 2 49 49 0
1 0 0 590776 39340 292228 0 0 0 0 1089 110 1 49 50 0
1 0 0 590776 39340 292228 0 0 0 0 1090 108 3 48 50 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 590792 39340 292228 0 0 0 16 1093 130 3 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 118 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1091 131 1 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1091 118 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 112 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 16 1094 130 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 116 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 135 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1091 114 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 109 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 16 1093 124 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 113 3 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 133 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1092 126 2 49 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 120 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 16 1093 124 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 113 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 124 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 111 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 113 2 49 49 0
1 0 0 590792 39340 292228 0 0 0 16 1094 135 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 121 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 127 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1092 116 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 112 3 48 49 0
2 0 0 590792 39340 292228 0 0 0 16 1093 137 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 121 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 137 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 112 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 110 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 16 1094 126 2 48 49 0
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
2 0 0 590792 39340 292228 0 0 0 0 1089 117 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 133 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1091 126 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 116 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 16 1095 134 2 49 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 122 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 131 3 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1090 120 2 49 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 118 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 16 1093 128 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1091 116 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1089 136 2 48 50 0
1 0 0 590792 39340 292228 0 0 0 0 1091 117 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 123 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 16 1093 132 2 48 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 122 2 49 49 0
1 0 0 590792 39340 292228 0 0 0 0 1090 128 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 112 2 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 108 2 49 50 0
2 0 0 590792 39340 292228 0 0 0 16 1094 132 1 49 50 0
1 0 0 590792 39340 292228 0 0 0 0 1089 121 3 48 50 0
1 0 0 590668 39340 292228 0 0 0 0 1090 139 2 49 50 0
1 0 0 590668 39340 292228 0 0 0 0 1092 118 3 48 49 0
1 0 0 590544 39340 292228 0 0 0 0 1089 112 3 48 50 0
1 0 0 591040 39340 292228 0 0 0 16 1093 155 3 47 49 0
0 0 0 595092 39340 292228 0 0 0 0 1102 497 3 35 62 0
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 13:10 ` Andreas Baer
@ 2005-07-25 14:00 ` Paulo Marques
2005-07-25 15:24 ` Willy Tarreau
1 sibling, 0 replies; 17+ messages in thread
From: Paulo Marques @ 2005-07-25 14:00 UTC (permalink / raw)
To: Andreas Baer; +Cc: Willy Tarreau, linux-kernel
Andreas Baer wrote:
> [...]
> Vmstat for Notebook P4 3.0 GHz 512 MB RAM:
>
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 1 0 0 179620 14812 228832 0 0 33 21 557 184 3 1 95 1
> 2 0 0 178828 14812 228832 0 0 0 0 1295 819 6 2 92 0
> 1 0 0 175948 14812 228832 0 0 0 0 1090 111 37 17 46 0
This vmstat output doesn't show any input/output happening. Are you
sure it was taken *while* your test was running? If so, then all the
files are already in the page cache. The fact that you have free memory
at all times, and that the run on the notebook takes less than 20
seconds, confirms this.
The second machine takes a lot more time to execute. The 1 GB of memory
does make me suspicious, though.
There is a known problem with BIOSes that don't set up the MTRRs
correctly for the whole memory and leave a small amount of memory at the
top with the wrong settings. Accessing that memory becomes painfully slow.
Can you send the output of /proc/mtrr and try booting with something
like "mem=768M" to see if that improves performance on the desktop P4?
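One quick way to check this theory is to add up the cacheable (write-back)
ranges in /proc/mtrr and compare the total against the installed RAM. A
sketch, run here against a made-up sample of what a healthy 1 GB machine
might report; on the real machine, point the awk script at /proc/mtrr
itself:

```shell
#!/bin/sh
# Illustrative /proc/mtrr contents (not from the machine in question).
cat > mtrr.sample <<'EOF'
reg00: base=0x00000000 (   0MB), size= 1024MB: write-back, count=1
reg01: base=0x40000000 (1024MB), size=   64MB: uncachable, count=1
EOF
# Sum the write-back ranges; with 1 GB of RAM this should print 1024.
awk -F'size=' '/write-back/ { sub(/MB.*/, "", $2); total += $2 }
               END { print total "MB cacheable" }' mtrr.sample
```

If the write-back total comes out smaller than the installed RAM, the top
of memory is running uncached, which matches the symptom Paulo describes.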
--
Paulo Marques - www.grupopie.com
It is a mistake to think you can solve any major problems
just with potatoes.
Douglas Adams
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 13:10 ` Andreas Baer
2005-07-25 14:00 ` Paulo Marques
@ 2005-07-25 15:24 ` Willy Tarreau
2005-07-25 19:51 ` Andreas Baer
1 sibling, 1 reply; 17+ messages in thread
From: Willy Tarreau @ 2005-07-25 15:24 UTC (permalink / raw)
To: Andreas Baer; +Cc: linux-kernel
On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
(...)
> I have (S-ATA-150 Disk 80GB)
>
> /dev/sda: 50.59 MB/sec
> /dev/sda1: 50.62 MB/sec (Windows FAT32)
> /dev/sda6: 41.63 MB/sec (Linux ReiserFS)
>
> On the Notebook I have at most an ATA-100 Disk with 80GB and it shows the
> same declension.
>
> Here I have
>
> /dev/hda: 26.91 MB/sec
> /dev/hda1: 26.90 MB/sec (Windows FAT32)
> /dev/hda7: 17.89 MB/sec (Linux EXT3)
>
> Could you give me a reason how this is possible?
A reason for what? The fact that the notebook performs faster than the
desktop while being slower on I/O?
> Vmstat for Notebook P4 3.0 GHz 512 MB RAM:
Your notebook's P4 has HT enabled (50% apparent idle remains permanently
during operation). But you'll note that your load there is 60% system +
40% user, and that you do absolutely no I/O (I presume it's the second
run and everything is cached).
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 1 0 0 179620 14812 228832 0 0 33 21 557 184 3 1 95 1
> 2 0 0 178828 14812 228832 0 0 0 0 1295 819 6 2 92 0
> 1 0 0 175948 14812 228832 0 0 0 0 1090 111 37 17 46 0
> 1 0 0 175948 14812 228832 0 0 0 0 1064 101 23 28 50 0
> 1 0 0 175948 14812 228832 0 0 0 0 1066 100 20 31 49 0
> 1 0 0 175980 14820 228824 0 0 0 48 1066 119 20 30 50 0
> 1 0 0 175980 14820 228824 0 0 0 0 1067 86 19 31 50 0
> 1 0 0 175988 14820 228824 0 0 0 0 1064 115 20 30 50 0
(...)
> Vmstat for Desktop P4 2.4 GHz 1024 MB RAM:
This one is hyperthreaded too (apparent consumption never goes above 50%).
However, while not doing any I/O either, you're spending only 4% in user
and 96% in system. This means it might take 10x more time to complete the
same operations than if it were user-CPU bound, and this is about what
you observe.
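That 4%/96% split can be recomputed from any one of the desktop samples in
the dumps above (the us and sy columns); a small sketch of the arithmetic:

```shell
# One desktop vmstat sample; fields 13 and 14 are us (user) and sy (system).
echo "1 0 0 591208 39340 292228 0 0 0 0 1089 124 2 48 50 0" |
awk '{ us = $13; sy = $14
       printf "user %d%%  system %d%% of busy time\n",
              100 * us / (us + sy), 100 * sy / (us + sy) }'
```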
There clearly is a problem with the system installed on this machine. You
should use strace to see what this machine does all the time; it is
absolutely not expected that the user/system ratio changes so much between
two nearly identical systems. So there are system calls which eat all the
CPU. You may want to try strace -Tttt on the running process for a few
tens of seconds. I guess you'll immediately find the culprit amongst the
syscalls, and it might give you a clue.
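With the per-call times that strace -T appends in angle brackets, the
culprit falls out of a simple per-syscall total. A sketch, run against
made-up trace lines; on the real machine the input would come from
something like "strace -T -p <pid> -o trace.log":

```shell
#!/bin/sh
# Made-up "strace -T" lines; the time spent in each call is the <...>
# figure at the end of the line.
cat > trace.log <<'EOF'
read(3, "x"..., 4096)       = 4096 <0.120000>
read(3, "x"..., 4096)       = 4096 <0.080000>
gettimeofday({0, 0}, NULL)  = 0 <0.000009>
EOF
# Strip the arguments off the syscall name, then sum the <time> per syscall.
awk '{ name = $1; sub(/\(.*/, "", name)
       t = $NF; gsub(/[<>]/, "", t)
       total[name] += t }
     END { for (n in total) printf "%s %.6f\n", n, total[n] }' trace.log |
sort
```

The syscall whose total dwarfs the rest is the one eating the system time.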
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 1 0 0 594688 39340 292228 0 0 52 29 581 484 5 2 92 2
> 1 0 0 591208 39340 292228 0 0 0 68 1116 545 15 14 71 0
> 1 0 0 591208 39340 292228 0 0 0 0 1090 112 3 48 50 0
> 1 0 0 591208 39340 292228 0 0 0 0 1089 124 2 48 50 0
> 1 0 0 591208 39340 292228 0 0 0 0 1089 114 3 48 50 0
> 1 0 0 591208 39340 292228 0 0 0 0 1090 120 1 49 50 0
> 1 0 0 591208 39340 292228 0 0 0 24 1094 138 2 49 50 0
> 1 0 0 591256 39340 292228 0 0 0 0 1090 118 2 48 50 0
(...)
Regards,
Willy
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 15:24 ` Willy Tarreau
@ 2005-07-25 19:51 ` Andreas Baer
2005-07-25 20:03 ` Erik Mouw
` (3 more replies)
0 siblings, 4 replies; 17+ messages in thread
From: Andreas Baer @ 2005-07-25 19:51 UTC (permalink / raw)
To: Willy Tarreau; +Cc: linux-kernel, pmarques
Willy Tarreau wrote:
> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
> (...)
>
>>I have (S-ATA-150 Disk 80GB)
>>
>> /dev/sda: 50.59 MB/sec
>> /dev/sda1: 50.62 MB/sec (Windows FAT32)
>> /dev/sda6: 41.63 MB/sec (Linux ReiserFS)
>>
>>On the Notebook I have at most an ATA-100 Disk with 80GB and it shows the
>>same declension.
>>
>>Here I have
>>
>> /dev/hda: 26.91 MB/sec
>> /dev/hda1: 26.90 MB/sec (Windows FAT32)
>> /dev/hda7: 17.89 MB/sec (Linux EXT3)
>>
>>Could you give me a reason how this is possible?
>
>
> a reason for what ? the fact that the notebook performs faster than the
> desktop while slower on I/O ?
No, I meant: why is the Linux partition (ReiserFS or Ext3) always slower
than the Windows partition?
>
>>Vmstat for Notebook P4 3.0 GHz 512 MB RAM:
>
>
> Your Notebook's P4 has HT enabled (50% apparent idle remain permanently during
> operation). But you'll note that your load is 60% system + 40% user there, and
> that you do absolutely no I/O (I presume it's the second run and it's cached).
>
>
>>procs -----------memory---------- ---swap-- -----io---- --system--
>>----cpu----
>> r b swpd free buff cache si so bi bo in cs us sy
>> id wa
>> 1 0 0 179620 14812 228832 0 0 33 21 557 184 3 1
>> 95 1
>> 2 0 0 178828 14812 228832 0 0 0 0 1295 819 6 2
>> 92 0
>> 1 0 0 175948 14812 228832 0 0 0 0 1090 111 37 17
>> 46 0
>> 1 0 0 175948 14812 228832 0 0 0 0 1064 101 23 28
>> 50 0
>> 1 0 0 175948 14812 228832 0 0 0 0 1066 100 20 31
>> 49 0
>> 1 0 0 175980 14820 228824 0 0 0 48 1066 119 20 30
>> 50 0
>> 1 0 0 175980 14820 228824 0 0 0 0 1067 86 19 31
>> 50 0
>> 1 0 0 175988 14820 228824 0 0 0 0 1064 115 20 30
>> 50 0
>
>
> (...)
Yes, HT is enabled, but as I said, it makes no difference to the result whether I
enable or disable it on the desktop machine.
Sorry about the I/O, I explained that incorrectly. See below, where I answered Paulo
Marques to clarify everything.
>
>
>>Vmstat for Desktop P4 2.4 GHz 1024 MB RAM:
>
>
> This one's hyperthreaded too (apparent consumption never goes above 50%).
> However, while not doing any I/O either, you're always spending only 4% in
> user and 96% in system. This means that it might take 10x more time to
> complete the same operations, had it been user-cpu bound. And this is about
> what you observe.
>
> There clearly is a problem on the system installed on this machine. You should
> use strace to see what this machine does all the time, it is absolutely not
> expected that the user/system ratios change so much between two nearly
> identical systems. So there are system calls which eat all CPU. You may want
> to try strace -Tttt on the running process during a few tens of seconds. I
> guess you'll immediately find the culprit amongst the syscalls, and it might
> give you a clue.
I hope you are talking about a hardware/kernel problem and not a software
problem, because I also tried it with LiveCDs and they showed the same results
on this machine.
I'm not a Linux expert, meaning I've never done anything like that before,
so it would be nice if you could give me a hint about what you see in these results. :)
strace output for desktop:
<--snip-->
[pid 15146] 1122317366.469624 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.469692 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.469760 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.469828 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.469896 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.469963 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.470031 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.470098 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.470168 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.470236 _llseek(3, 7471104, [7471104], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.470298 read(3,
"\1\200\1\0\0\0\0\0\1\0G\247\35a\7\204\f\rP@\317\313\27"..., 131072) = 131072
<0.000138>
[pid 15146] 1122317366.470528 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.470599 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.470667 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.470734 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.470802 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.470870 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.470939 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.471008 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.471079 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000016>
[pid 15146] 1122317366.471158 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.471227 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.471295 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.471363 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000020>
[pid 15146] 1122317366.471436 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.471505 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.471573 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.471641 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.471708 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.471776 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000015>
[pid 15146] 1122317366.471844 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000017>
[pid 15146] 1122317366.471915 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000016>
[pid 15146] 1122317366.471991 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000014>
[pid 15146] 1122317366.472058 _llseek(3, 7602176, [7602176], SEEK_SET) = 0
<0.000015>
<--snip-->
strace output for notebook:
<--snip-->
[pid 1431] 1122318636.262024 _llseek(3, 1757184, [1757184], SEEK_SET) = 0
<0.000017>
[pid 1431] 1122318636.262098 _llseek(3, 1757184, [1757184], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.262173 _llseek(3, 1757184, [1757184], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.262247 _llseek(3, 1757184, [1757184], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.262321 _llseek(3, 1757184, [1757184], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.262396 _llseek(3, 1757184, [1757184], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.262465 read(3,
"\0\0\0\0\0\0\0\0\0\0\0\0RZ\0\0\0\0\0\0\1\0G\247\252\333"..., 4096) = 4096
<0.000024>
[pid 1431] 1122318636.262578 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000017>
[pid 1431] 1122318636.262654 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000017>
[pid 1431] 1122318636.262732 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000017>
[pid 1431] 1122318636.262809 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.262881 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.262952 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263023 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263094 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.263165 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263237 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.263310 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263381 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263452 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263523 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.263594 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263666 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000017>
[pid 1431] 1122318636.263740 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000024>
[pid 1431] 1122318636.263841 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263913 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.263984 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000014>
[pid 1431] 1122318636.264055 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.264127 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.264199 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.264271 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.264342 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.264414 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.264487 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.264558 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.264630 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.264710 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.264788 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.264861 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.264934 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.265006 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.265077 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.265149 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000014>
[pid 1431] 1122318636.265220 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.265292 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.265363 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.265436 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.265509 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.265580 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.265652 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.265726 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000017>
[pid 1431] 1122318636.265818 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.265891 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.265963 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.266034 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.266106 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.266177 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.266250 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.266322 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.266394 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.266466 _llseek(3, 1761280, [1761280], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.266534 read(3,
"\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\210Z\0\0\0\0\0"..., 4096) = 4096
<0.000022>
[pid 1431] 1122318636.266641 _llseek(3, 1765376, [1765376], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.266715 _llseek(3, 1765376, [1765376], SEEK_SET) = 0
<0.000015>
[pid 1431] 1122318636.266800 _llseek(3, 1765376, [1765376], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.266875 _llseek(3, 1765376, [1765376], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.266949 _llseek(3, 1765376, [1765376], SEEK_SET) = 0
<0.000016>
[pid 1431] 1122318636.267023 _llseek(3, 1765376, [1765376], SEEK_SET) = 0
<0.000016>
<--snip-->
These are just snippets out of billions of lines...
The crazy thing is that the read operation takes only 0.000022 s on the
notebook but 0.000138 s on the desktop. There are also a LOT more _llseek
operations between each read operation on the desktop (too many to list),
but one possible reason is that I didn't catch the same area of processing
in both traces... (it scrolls much too fast)
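Rather than eyeballing the scrolling trace, the seek-to-read ratio can be pulled out of a saved log. A sketch on a stand-in snippet (file name hypothetical); on the real machines you would run the same two greps over the full saved strace output of each box:

```shell
# Stand-in for a saved desktop strace snippet:
cat > desktop.snip <<'EOF'
_llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.000015>
_llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.000014>
_llseek(3, 7602176, [7602176], SEEK_SET) = 0 <0.000014>
read(3, "..."..., 131072) = 131072 <0.000138>
EOF

# Count seeks and reads, then print the seeks-per-read ratio:
seeks=$(grep -c '_llseek(' desktop.snip)
reads=$(grep -c '^read(' desktop.snip)
echo "llseek=$seeks read=$reads ratio=$((seeks / reads))"
```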
>
>>procs -----------memory---------- ---swap-- -----io---- --system--
>>----cpu----
>> r b swpd free buff cache si so bi bo in cs us sy
>> id wa
>> 1 0 0 594688 39340 292228 0 0 52 29 581 484 5 2
>> 92 2
>> 1 0 0 591208 39340 292228 0 0 0 68 1116 545 15 14
>> 71 0
>> 1 0 0 591208 39340 292228 0 0 0 0 1090 112 3 48
>> 50 0
>> 1 0 0 591208 39340 292228 0 0 0 0 1089 124 2 48
>> 50 0
>> 1 0 0 591208 39340 292228 0 0 0 0 1089 114 3 48
>> 50 0
>> 1 0 0 591208 39340 292228 0 0 0 0 1090 120 1 49
>> 50 0
>> 1 0 0 591208 39340 292228 0 0 0 24 1094 138 2 49
>> 50 0
>> 1 0 0 591256 39340 292228 0 0 0 0 1090 118 2 48
>> 50 0
>
>
> (...)
>
> Regards,
> Willy
> Paulo Marques wrote:
>> Andreas Baer wrote:
>>
>>> [...]
>>> Vmstat for Notebook P4 3.0 GHz 512 MB RAM:
>>>
>>> procs -----------memory---------- ---swap-- -----io---- --system--
>>> ----cpu----
>>> r b swpd free buff cache si so bi bo in cs us
>>> sy id wa
>>> 1 0 0 179620 14812 228832 0 0 33 21 557 184 3
>>> 1 95 1
>>> 2 0 0 178828 14812 228832 0 0 0 0 1295 819 6
>>> 2 92 0
>>> 1 0 0 175948 14812 228832 0 0 0 0 1090 111 37 17
>>
>>
>> This vmstat output doesn't show any input / output happening. Are you
>> sure this was taken *while* your test is running? If it is, then all
>> files are already in pagecache. The fact that you have free memory at
>> all times, and that the run on the notebook takes less than 20 seconds
>> confirms this.
I never said that any data is loaded from files DURING the operation
(sorry for the confusion). Random-access files are used, but not by this process. What
I do is build up an R-tree index structure to get faster access to the stored
data, but everything is currently done completely in memory.
>> The second takes a lot more time to execute. The 1Gb memory does make me
>> suspicious, though.
>>
>> There is a known problem with BIOS that don't set up the mtrr's
>> correctly for the whole memory and leave a small amount of memory on the
>> top with the wrong settings. Accessing this memory becomes painfully slow.
>>
>> Can you send the output of /proc/mtrr and try to boot with something
>> like "mem=768M" to see if that improves performance on the Desktop P4?
w/o "mem=768M":
---------------
$ cat /proc/mtrr
reg00: base=0x00000000 ( 0MB), size=1024MB: write-back, count=1
reg01: base=0xc0000000 (3072MB), size= 256MB: write-combining, count=1
reg02: base=0xe0000000 (3584MB), size= 256MB: write-combining, count=1
$ cat /proc/meminfo | grep MemTotal
MemTotal: 1033620 kB
with "mem=768M":
----------------
$ cat /proc/mtrr
reg00: base=0x00000000 ( 0MB), size=1024MB: write-back, count=1
reg01: base=0xc0000000 (3072MB), size= 256MB: write-combining, count=1
reg02: base=0xe0000000 (3584MB), size= 256MB: write-combining, count=1
$ cat /proc/meminfo | grep MemTotal
MemTotal: 775116 kB
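Paulo's MTRR hypothesis can also be checked mechanically: sum the write-back MTRR coverage and compare it with MemTotal. A rough sketch run against a copy of the output above; on a live system you would read /proc/mtrr and /proc/meminfo directly, and a real check would also account for uncachable holes:

```shell
# Copy of the /proc/mtrr output shown above:
cat > mtrr.sample <<'EOF'
reg00: base=0x00000000 (   0MB), size=1024MB: write-back, count=1
reg01: base=0xc0000000 (3072MB), size= 256MB: write-combining, count=1
reg02: base=0xe0000000 (3584MB), size= 256MB: write-combining, count=1
EOF

# Sum the sizes of all write-back regions, in MB:
wb_mb=$(awk -F'size=' '/write-back/ { split($2, a, "MB"); sum += a[1] }
                       END { print sum + 0 }' mtrr.sample)
mem_kb=1033620            # MemTotal from /proc/meminfo above
echo "write-back MTRR coverage: ${wb_mb} MB, RAM: $((mem_kb / 1024)) MB"
```

Here the single write-back region already covers the full 1 GB, which is consistent with mem=768M making no difference.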
The speed didn't get any better with this change.
Just for information, here are my kernel options for memory size...
High Memory Support (off)
[*] 1Gb Low Memory Support
============================================================================
Another thing is that ACPI does not seem to work properly on this board.
$ acpi -V
No support for device type: battery
No support for device type: thermal
No support for device type: ac_adapter
$ dmesg | grep ACPI
BIOS-e820: 000000003ff30000 - 000000003ff40000 (ACPI data)
BIOS-e820: 000000003ff40000 - 000000003fff0000 (ACPI NVS)
ACPI: RSDP (v002 ACPIAM ) @ 0x000f9e30
ACPI: XSDT (v001 A M I OEMXSDT 0x10000412 MSFT 0x00000097) @ 0x3ff30100
ACPI: FADT (v003 A M I OEMFACP 0x10000412 MSFT 0x00000097) @ 0x3ff30290
ACPI: MADT (v001 A M I OEMAPIC 0x10000412 MSFT 0x00000097) @ 0x3ff30390
ACPI: OEMB (v001 A M I OEMBIOS 0x10000412 MSFT 0x00000097) @ 0x3ff40040
ACPI: DSDT (v001 P4C81 P4C81106 0x00000106 INTL 0x02002026) @ 0x00000000
ACPI: Local APIC address 0xfee00000
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x01] enabled)
ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
ACPI: IRQ0 used by override.
ACPI: IRQ2 used by override.
ACPI: IRQ9 used by override.
Using ACPI (MADT) for SMP configuration information
ACPI: Subsystem revision 20050309
ACPI: Interpreter enabled
ACPI: Using IOAPIC for interrupt routing
ACPI: PCI Root Bridge [PCI0] (0000:00)
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0._PRT]
ACPI: PCI Interrupt Routing Table [\_SB_.PCI0.P0P4._PRT]
ACPI: PCI Interrupt Link [LNKA] (IRQs 3 4 5 6 7 10 *11 12 14 15)
ACPI: PCI Interrupt Link [LNKB] (IRQs 3 4 5 6 7 10 *11 12 14 15)
ACPI: PCI Interrupt Link [LNKC] (IRQs 3 4 *5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKD] (IRQs *3 4 5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKE] (IRQs 3 *4 5 6 7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKF] (IRQs 3 4 5 6 7 10 11 12 14 15) *0, disabled.
ACPI: PCI Interrupt Link [LNKG] (IRQs 3 4 5 6 *7 10 11 12 14 15)
ACPI: PCI Interrupt Link [LNKH] (IRQs 3 4 5 6 7 *10 11 12 14 15)
PCI: Using ACPI for IRQ routing
ACPI: PCI Interrupt 0000:02:05.0[A] -> GSI 22 (level, low) -> IRQ 22
ACPI: PCI Interrupt 0000:02:05.0[A] -> GSI 22 (level, low) -> IRQ 22
ACPI: PCI Interrupt 0000:00:1f.1[A] -> GSI 18 (level, low) -> IRQ 18
ACPI: PCI Interrupt 0000:00:1f.2[A] -> GSI 18 (level, low) -> IRQ 18
ACPI: PCI Interrupt 0000:00:1d.7[D] -> GSI 23 (level, low) -> IRQ 23
ACPI: PCI Interrupt 0000:00:1d.0[A] -> GSI 16 (level, low) -> IRQ 16
ACPI: PCI Interrupt 0000:00:1d.1[B] -> GSI 19 (level, low) -> IRQ 19
ACPI: PCI Interrupt 0000:00:1d.2[C] -> GSI 18 (level, low) -> IRQ 18
ACPI: PCI Interrupt 0000:00:1d.3[A] -> GSI 16 (level, low) -> IRQ 16
ACPI: PCI Interrupt 0000:01:00.0[A] -> GSI 16 (level, low) -> IRQ 16
ACPI: PCI Interrupt 0000:02:0c.0[A] -> GSI 20 (level, low) -> IRQ 20
but I'm not the only one...
http://forums.gentoo.org/viewtopic-t-215898-highlight-p4c800.html
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 19:51 ` Andreas Baer
@ 2005-07-25 20:03 ` Erik Mouw
2005-07-25 20:12 ` Andreas Baer
2005-07-25 20:38 ` Jesper Juhl
2005-07-25 20:03 ` Valdis.Kletnieks
` (2 subsequent siblings)
3 siblings, 2 replies; 17+ messages in thread
From: Erik Mouw @ 2005-07-25 20:03 UTC (permalink / raw)
To: Andreas Baer; +Cc: Willy Tarreau, linux-kernel, pmarques
On Mon, Jul 25, 2005 at 09:51:49PM +0200, Andreas Baer wrote:
>
> Willy Tarreau wrote:
> >On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
> >>Here I have
> >>
> >> /dev/hda: 26.91 MB/sec
> >> /dev/hda1: 26.90 MB/sec (Windows FAT32)
> >> /dev/hda7: 17.89 MB/sec (Linux EXT3)
> >>
> >>Could you give me a reason how this is possible?
> >
> >
> >a reason for what ? the fact that the notebook performs faster than the
> >desktop while slower on I/O ?
>
> No, a reason why the partition with Linux (ReiserFS or Ext3) is always
> slower
> than the Windows partition?
Easy: Drives don't have the same speed on all tracks. The platters are
built-up from zones with different recording densities: zones near the
center of the platters have a lower recording density and hence a lower
datarate (less bits/second pass under the head). Zones at the outer
diameter have a higher recording density and a higher datarate.
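For a feel of the magnitude: at a fixed rotation rate, sequential throughput scales directly with sectors per track, so a back-of-the-envelope sketch (the per-zone sector counts below are invented, merely plausibly shaped) already reproduces an outer/inner gap of the kind hdparm shows:

```shell
# Illustrative arithmetic only -- sector counts per zone are made up.
rpm=7200
revs=$((rpm / 60))                 # revolutions per second
outer_spt=1000                     # sectors/track, outer zone (hypothetical)
inner_spt=500                      # sectors/track, inner zone (hypothetical)
outer_bps=$((outer_spt * 512 * revs))
inner_bps=$((inner_spt * 512 * revs))
echo "outer: $((outer_bps / 1000000)) MB/s, inner: $((inner_bps / 1000000)) MB/s"
```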
Erik
--
+-- Erik Mouw -- www.harddisk-recovery.nl -- 0800 220 20 20 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands
| Data lost? Stay calm and contact Harddisk-recovery.com
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 19:51 ` Andreas Baer
2005-07-25 20:03 ` Erik Mouw
@ 2005-07-25 20:03 ` Valdis.Kletnieks
2005-07-25 20:03 ` Dmitry Torokhov
2005-07-25 20:48 ` Bill Davidsen
3 siblings, 0 replies; 17+ messages in thread
From: Valdis.Kletnieks @ 2005-07-25 20:03 UTC (permalink / raw)
To: Andreas Baer; +Cc: Willy Tarreau, linux-kernel, pmarques
On Mon, 25 Jul 2005 21:51:49 +0200, Andreas Baer said:
> > a reason for what ? the fact that the notebook performs faster than the
> > desktop while slower on I/O ?
>
> No, a reason why the partition with Linux (ReiserFS or Ext3) is always slower
> than the Windows partition?
My first guess is that ReiserFS and EXT3 are journalled, and FAT32 isn't.
Try ext2, which is the non-journalled variant of ext3, and see if the speed
is comparable to fat32.
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 19:51 ` Andreas Baer
2005-07-25 20:03 ` Erik Mouw
2005-07-25 20:03 ` Valdis.Kletnieks
@ 2005-07-25 20:03 ` Dmitry Torokhov
2005-07-25 20:48 ` Bill Davidsen
3 siblings, 0 replies; 17+ messages in thread
From: Dmitry Torokhov @ 2005-07-25 20:03 UTC (permalink / raw)
To: Andreas Baer; +Cc: Willy Tarreau, linux-kernel, pmarques
On 7/25/05, Andreas Baer <lnx1@gmx.net> wrote:
>
> >>
> >>Here I have
> >>
> >> /dev/hda: 26.91 MB/sec
> >> /dev/hda1: 26.90 MB/sec (Windows FAT32)
> >> /dev/hda7: 17.89 MB/sec (Linux EXT3)
> >>
> >>Could you give me a reason how this is possible?
> >
> >
> > a reason for what ? the fact that the notebook performs faster than the
> > desktop while slower on I/O ?
>
> No, a reason why the partition with Linux (ReiserFS or Ext3) is always slower
> than the Windows partition?
>
Because of geometry issues, a hard drive can't deliver a constant data
rate off the platters. Your Windows partition is on the "faster" part of the
drive.
--
Dmitry
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 20:03 ` Erik Mouw
@ 2005-07-25 20:12 ` Andreas Baer
2005-07-25 20:26 ` Erik Mouw
2005-07-25 20:38 ` Jesper Juhl
1 sibling, 1 reply; 17+ messages in thread
From: Andreas Baer @ 2005-07-25 20:12 UTC (permalink / raw)
To: Erik Mouw; +Cc: Willy Tarreau, linux-kernel, pmarques
Erik Mouw wrote:
> On Mon, Jul 25, 2005 at 09:51:49PM +0200, Andreas Baer wrote:
>
>>Willy Tarreau wrote:
>>
>>>On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>>>
>>>>Here I have
>>>>
>>>> /dev/hda: 26.91 MB/sec
>>>> /dev/hda1: 26.90 MB/sec (Windows FAT32)
>>>> /dev/hda7: 17.89 MB/sec (Linux EXT3)
>>>>
>>>>Could you give me a reason how this is possible?
>>>
>>>
>>>a reason for what ? the fact that the notebook performs faster than the
>>>desktop while slower on I/O ?
>>
>>No, a reason why the partition with Linux (ReiserFS or Ext3) is always
>>slower
>>than the Windows partition?
>
>
> Easy: Drives don't have the same speed on all tracks. The platters are
> built-up from zones with different recording densities: zones near the
> center of the platters have a lower recording density and hence a lower
> datarate (less bits/second pass under the head). Zones at the outer
> diameter have a higher recording density and a higher datarate.
>
>
> Erik
>
So it definitely has nothing to do with the filesystem? I also suspected
physical reasons, because I don't think the hdparm result depends on the filesystem...
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 20:12 ` Andreas Baer
@ 2005-07-25 20:26 ` Erik Mouw
0 siblings, 0 replies; 17+ messages in thread
From: Erik Mouw @ 2005-07-25 20:26 UTC (permalink / raw)
To: Andreas Baer; +Cc: Willy Tarreau, linux-kernel, pmarques
On Mon, Jul 25, 2005 at 10:12:58PM +0200, Andreas Baer wrote:
> Erik Mouw wrote:
> >Easy: Drives don't have the same speed on all tracks. The platters are
> >built-up from zones with different recording densities: zones near the
> >center of the platters have a lower recording density and hence a lower
> >datarate (less bits/second pass under the head). Zones at the outer
> >diameter have a higher recording density and a higher datarate.
>
> So it has definitely nothing to do with filesystem? I also thought about
> physical reasons because I don't think the hdparm depends on filesystems...
That's right, hdparm doesn't care about filesystems. The speed
difference is caused by the physical geometry of the drive.
Erik
--
+-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 20:03 ` Erik Mouw
2005-07-25 20:12 ` Andreas Baer
@ 2005-07-25 20:38 ` Jesper Juhl
2005-07-25 21:01 ` Erik Mouw
1 sibling, 1 reply; 17+ messages in thread
From: Jesper Juhl @ 2005-07-25 20:38 UTC (permalink / raw)
To: Erik Mouw; +Cc: Andreas Baer, Willy Tarreau, linux-kernel, pmarques
On 7/25/05, Erik Mouw <erik@harddisk-recovery.com> wrote:
> On Mon, Jul 25, 2005 at 09:51:49PM +0200, Andreas Baer wrote:
> >
> > Willy Tarreau wrote:
> > >On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
> > >>Here I have
> > >>
> > >> /dev/hda: 26.91 MB/sec
> > >> /dev/hda1: 26.90 MB/sec (Windows FAT32)
> > >> /dev/hda7: 17.89 MB/sec (Linux EXT3)
> > >>
> > >>Could you give me a reason how this is possible?
> > >
> > >
> > >a reason for what ? the fact that the notebook performs faster than the
> > >desktop while slower on I/O ?
> >
> > No, a reason why the partition with Linux (ReiserFS or Ext3) is always
> > slower
> > than the Windows partition?
>
> Easy: Drives don't have the same speed on all tracks. The platters are
> built-up from zones with different recording densities: zones near the
> center of the platters have a lower recording density and hence a lower
> datarate (less bits/second pass under the head). Zones at the outer
> diameter have a higher recording density and a higher datarate.
>
It's even more complex than that, as far as I know: you also have the
issue of seek times - tracks near the middle of the platter will be
nearer the head more often (on average) than tracks at the edge.
For people who like visuals, IBM has a nice little picture in their
AIX performance tuning guide :
http://publib.boulder.ibm.com/infocenter/pseries/index.jsp?topic=/com.ibm.aix.doc/aixbman/prftungd/diskperf2.htm
--
Jesper Juhl <jesper.juhl@gmail.com>
Don't top-post http://www.catb.org/~esr/jargon/html/T/top-post.html
Plain text mails only, please http://www.expita.com/nomime.html
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 19:51 ` Andreas Baer
` (2 preceding siblings ...)
2005-07-25 20:03 ` Dmitry Torokhov
@ 2005-07-25 20:48 ` Bill Davidsen
2005-07-25 21:51 ` Andreas Baer
3 siblings, 1 reply; 17+ messages in thread
From: Bill Davidsen @ 2005-07-25 20:48 UTC (permalink / raw)
To: Andreas Baer; +Cc: linux-kernel, pmarques
One other oddment about this motherboard. Forgive me if I have over-snipped
this while trying to keep it relevant...
Andreas Baer wrote:
>
> Willy Tarreau wrote:
>
>> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>> There clearly is a problem on the system installed on this machine.
>> You should
>> use strace to see what this machine does all the time, it is
>> absolutely not
>> expected that the user/system ratios change so much between two nearly
>> identical systems. So there are system calls which eat all CPU. You
>> may want
>> to try strace -Tttt on the running process during a few tens of
>> seconds. I
>> guess you'll immediately find the culprit amongst the syscalls, and it
>> might
>> give you a clue.
>
>
> I hope you are talking about a hardware/kernel problem and not a software
> problem, because I tried it also with LiveCD's and they showed the same
> results
> on this machine.
> I'm not a linux expert, that means I've never done anything like that
> before,
> so it would be nice if you give me a hint what you see in this results. :)
>
Am I misreading this, or is your program doing a bunch of seeks not
followed by an I/O operation? I doubt that's important in itself, but your
vmstat showed a lot of system time, and I wonder if llseek() is
more expensive on Linux than on Windows, or if your code is written in
such a way that these calls cannot be optimized away by gcc.
> strace output for notebook:
> <--snip-->
> [pid 1431] 1122318636.262578 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000017>
> [pid 1431] 1122318636.262654 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000017>
> [pid 1431] 1122318636.262732 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000017>
> [pid 1431] 1122318636.262809 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.262881 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.262952 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263023 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263094 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000016>
> [pid 1431] 1122318636.263165 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263237 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000016>
> [pid 1431] 1122318636.263310 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263381 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263452 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263523 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000016>
> [pid 1431] 1122318636.263594 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263666 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000017>
> [pid 1431] 1122318636.263740 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000024>
> [pid 1431] 1122318636.263841 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263913 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.263984 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000014>
> [pid 1431] 1122318636.264055 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.264127 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.264199 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.264271 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.264342 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000016>
> [pid 1431] 1122318636.264414 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000016>
> [pid 1431] 1122318636.264487 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000015>
> [pid 1431] 1122318636.264558 _llseek(3, 1761280, [1761280], SEEK_SET) = 0 <0.000016>
> [... ~25 further identical _llseek lines snipped ...]
--
-bill davidsen (davidsen@tmr.com)
"The secret to procrastination is to put things off until the
last possible moment - but no longer" -me
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 20:38 ` Jesper Juhl
@ 2005-07-25 21:01 ` Erik Mouw
0 siblings, 0 replies; 17+ messages in thread
From: Erik Mouw @ 2005-07-25 21:01 UTC (permalink / raw)
To: Jesper Juhl; +Cc: Andreas Baer, Willy Tarreau, linux-kernel, pmarques
On Mon, Jul 25, 2005 at 10:38:25PM +0200, Jesper Juhl wrote:
> It's even more complex than that as far as I know; you also have the
> issue of seek times - tracks near the middle of the platter will be
> nearer the head more often (on average) than tracks at the edge.
>
> For people who like visuals, IBM has a nice little picture in their
> AIX performance tuning guide :
> http://publib.boulder.ibm.com/infocenter/pseries/index.jsp?topic=/com.ibm.aix.doc/aixbman/prftungd/diskperf2.htm
Quote from that document:
"Data is more dense as it moves toward the center, resulting in less
physical movement of the head. This results in faster overall
throughput"
This is not true. The whole idea of different recording zones with
different sectors/track is to keep the overall data density (in
bits/square mm) more or less constant.
I'd say it's even the other way around from what IBM pictures: there
are more sectors/track in the outer zones, so there is simply more data
in the outer zones. If you want less physical movement of the head, you
should make sure the data is in the zone(s) with the largest number of
sectors/track.
Erik
--
+-- Erik Mouw -- www.harddisk-recovery.com -- +31 70 370 12 90 --
| Lab address: Delftechpark 26, 2628 XH, Delft, The Netherlands
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 20:48 ` Bill Davidsen
@ 2005-07-25 21:51 ` Andreas Baer
2005-07-26 14:39 ` Bill Davidsen
0 siblings, 1 reply; 17+ messages in thread
From: Andreas Baer @ 2005-07-25 21:51 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel, pmarques
Bill Davidsen wrote:
>
> One other oddment about this motherboard. Forgive me if I have over-snipped
> this trying to make it relevant...
>
> Andreas Baer wrote:
>
>>
>> Willy Tarreau wrote:
>>
>>> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>
>
>>> There clearly is a problem on the system installed on this machine.
>>> You should
>>> use strace to see what this machine does all the time, it is
>>> absolutely not
>>> expected that the user/system ratios change so much between two nearly
>>> identical systems. So there are system calls which eat all CPU. You
>>> may want
>>> to try strace -Tttt on the running process during a few tens of
>>> seconds. I
>>> guess you'll immediately find the culprit amongst the syscalls, and
>>> it might
>>> give you a clue.
>>
>>
>>
>> I hope you are talking about a hardware/kernel problem and not a software
>> problem, because I tried it also with LiveCDs and they showed the
>> same results on this machine.
>> I'm not a Linux expert, meaning I've never done anything like that
>> before, so it would be nice if you could give me a hint about what you
>> see in these results. :)
>>
>
> Am I misreading this, or is your program doing a bunch of seeks not
> followed by an i/o operation? I would doubt that's important, but your
> vmstat showed a lot of system time, and I just wonder if llseek() is
> more expensive in Linux than Windows. Or if your code is such that these
> calls are not optimized away by gcc.
I don't know what exactly produces these _llseek calls, but I ran the compiled
binaries on both machines (desktop + notebook) without any recompilation, so I
think they should do the same thing (badly optimized or not), yet I see a time
difference of more than 2:30. :) These _llseek calls also don't seem to be
faster or slower if you compare the times on the notebook and the desktop.
>> strace output for desktop:
>> <--snip-->
>
>
>> [pid 1431] 1122318636.262578 _llseek(3, 1761280, [1761280], SEEK_SET)
>> = 0 <0.000017>
>> [... ~50 further identical _llseek lines snipped ...]
>
>
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-25 21:51 ` Andreas Baer
@ 2005-07-26 14:39 ` Bill Davidsen
2005-07-26 16:13 ` Andreas Baer
0 siblings, 1 reply; 17+ messages in thread
From: Bill Davidsen @ 2005-07-26 14:39 UTC (permalink / raw)
To: Andreas Baer; +Cc: linux-kernel, pmarques
Andreas Baer wrote:
>
>
> Bill Davidsen wrote:
>
>>
>> One other oddment about this motherboard, Forgive if I have
>> over-snipped this trying to make it relevant...
>>
>> Andreas Baer wrote:
>>
>>>
>>> Willy Tarreau wrote:
>>>
>>>> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>>>
>>
>>
>>>> There clearly is a problem on the system installed on this machine.
>>>> You should
>>>> use strace to see what this machine does all the time, it is
>>>> absolutely not
>>>> expected that the user/system ratios change so much between two nearly
>>>> identical systems. So there are system calls which eat all CPU. You
>>>> may want
>>>> to try strace -Tttt on the running process during a few tens of
>>>> seconds. I
>>>> guess you'll immediately find the culprit amongst the syscalls, and
>>>> it might
>>>> give you a clue.
>>>
>>>
>>>
>>>
>>> I hope you are talking about a hardware/kernel problem and not a
>>> software
>>> problem, because I tried it also with LiveCD's and they showed the
>>> same results
>>> on this machine.
>>> I'm not a linux expert, that means I've never done anything like
>>> that before,
>>> so it would be nice if you give me a hint what you see in this
>>> results. :)
>>>
>>
>> Am I misreading this, or is your program doing a bunch of seeks not
>> followed by an i/o operation? I would doubt that's important, but
>> your vmstat showed a lot of system time, and I just wonder if
>> llseek() is more expensive in Linux than Windows. Or if your code is
>> such that these calls are not optimized away by gcc.
>
>
> I don't know what exactly produces this _llseek calls, but I ran the
> compiled binaries on both machines (desktop + notebook) without any
> recompilation and so I think they should do the same (even if this is
> bad or not optimized), but I see a time difference of more than 2:30
> :) This _llseek calls also don't seem to be faster or slower if you
> compare the times on the notebook and the desktop.
If the program and test data are not proprietary, would it help to have
me run the test on my P4P800, P4-2.8, HT on, and see whether it's an issue
with your particular board or BIOS? According to my notes I have the 1086
BIOS on that machine; I think you were running a later one, 1091 or so,
from memory?
Anyway, I would run a test that takes 3 minutes if it helps as a data point.
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: Problem with Asus P4C800-DX and P4 -Northwood-
2005-07-26 14:39 ` Bill Davidsen
@ 2005-07-26 16:13 ` Andreas Baer
0 siblings, 0 replies; 17+ messages in thread
From: Andreas Baer @ 2005-07-26 16:13 UTC (permalink / raw)
To: Bill Davidsen; +Cc: linux-kernel
Bill Davidsen wrote:
> Andreas Baer wrote:
>
>>
>>
>> Bill Davidsen wrote:
>>
>>>
>>> One other oddment about this motherboard, Forgive if I have
>>> over-snipped this trying to make it relevant...
>>>
>>> Andreas Baer wrote:
>>>
>>>>
>>>> Willy Tarreau wrote:
>>>>
>>>>> On Mon, Jul 25, 2005 at 03:10:08PM +0200, Andreas Baer wrote:
>>>>
>>>>
>>>
>>>
>>>>> There clearly is a problem on the system installed on this machine.
>>>>> You should
>>>>> use strace to see what this machine does all the time, it is
>>>>> absolutely not
>>>>> expected that the user/system ratios change so much between two nearly
>>>>> identical systems. So there are system calls which eat all CPU. You
>>>>> may want
>>>>> to try strace -Tttt on the running process during a few tens of
>>>>> seconds. I
>>>>> guess you'll immediately find the culprit amongst the syscalls, and
>>>>> it might
>>>>> give you a clue.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> I hope you are talking about a hardware/kernel problem and not a
>>>> software
>>>> problem, because I tried it also with LiveCD's and they showed the
>>>> same results
>>>> on this machine.
>>>> I'm not a linux expert, that means I've never done anything like
>>>> that before,
>>>> so it would be nice if you give me a hint what you see in this
>>>> results. :)
>>>>
>>>
>>> Am I misreading this, or is your program doing a bunch of seeks not
>>> followed by an i/o operation? I would doubt that's important, but
>>> your vmstat showed a lot of system time, and I just wonder if
>>> llseek() is more expensive in Linux than Windows. Or if your code is
>>> such that these calls are not optimized away by gcc.
>>
>>
>>
>> I don't know what exactly produces this _llseek calls, but I ran the
>> compiled binaries on both machines (desktop + notebook) without any
>> recompilation and so I think they should do the same (even if this is
>> bad or not optimized), but I see a time difference of more than 2:30
>> :) This _llseek calls also don't seem to be faster or slower if you
>> compare the times on the notebook and the desktop.
>
>
>
> If the program and test data is not proprietary, would it help to have
> me run the test on my P4P800, P4-2.8, HT on, and see if that's an issue
> with your particular board or BIOS? I have the 1086 BIOS from my notes
> on that machine, I think you were running a later BIOS? 1091 or so, from
> memory?
>
> Anyway, I would run a test that takes 3 minutes if it helps as a data
> point.
Probably a good idea, but according to the Asus website you have a completely
different chipset: I think yours is an i865 and mine is an i875. I'm also
running BIOS 1019(!).
That's the driver page for my Board:
http://support.asus.com/download/download.aspx?Type=All&model=P4C800%20Deluxe
It would be better if someone had at least the same board.
Does anyone on this mailing list have an Asus P4C800-Deluxe with a P4 around
2.4 GHz and would be willing to run a little test with my software for a
maximum of 4 minutes? It would be approx. 10 MB of data to transfer.
^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2005-07-26 16:27 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2005-07-25 0:50 Problem with Asus P4C800-DX and P4 -Northwood- Andreas Baer
2005-07-25 5:12 ` Willy Tarreau
2005-07-25 13:10 ` Andreas Baer
2005-07-25 14:00 ` Paulo Marques
2005-07-25 15:24 ` Willy Tarreau
2005-07-25 19:51 ` Andreas Baer
2005-07-25 20:03 ` Erik Mouw
2005-07-25 20:12 ` Andreas Baer
2005-07-25 20:26 ` Erik Mouw
2005-07-25 20:38 ` Jesper Juhl
2005-07-25 21:01 ` Erik Mouw
2005-07-25 20:03 ` Valdis.Kletnieks
2005-07-25 20:03 ` Dmitry Torokhov
2005-07-25 20:48 ` Bill Davidsen
2005-07-25 21:51 ` Andreas Baer
2005-07-26 14:39 ` Bill Davidsen
2005-07-26 16:13 ` Andreas Baer