* numademo
@ 2009-05-28 20:05 Cliff Wickman
2009-05-28 21:04 ` numademo Andi Kleen
From: Cliff Wickman @ 2009-05-28 20:05 UTC (permalink / raw)
To: linux-numa
Hi Andi and all,
I tested libnuma/numactl on 96 nodes today and found a couple of problems.
One in the regress script.
The other in numademo.
My patches below.
-Cliff
Without the leading space in the fgrep pattern, the probe for node 0 also matches
nodes 10, 20, ... ("0 free" is a substring of "10 free"), so we'd pick up free
memory figures from the wrong nodes.
---
test/regress | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
Index: numactl-dev/test/regress
===================================================================
--- numactl-dev.orig/test/regress
+++ numactl-dev/test/regress
@@ -58,7 +58,7 @@ probe_hardware()
# find nodes with at least NEEDPAGES of free memory
for i in $(seq 0 $maxnode) ; do
- free=$(numactl --hardware | fgrep "$i free" | awk '{print $4}')
+ free=$(numactl --hardware | fgrep " $i free" | awk '{print $4}')
free=$(( free * MB ))
if [[ $((free / PAGESIZE)) -ge $NEEDPAGES ]]; then
node[$n]=$i
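To see why the leading space matters, here is a small standalone sketch of the regress probe against made-up `numactl --hardware` output lines (the free-memory values are hypothetical, not from a real box):

```shell
# Hypothetical sample of `numactl --hardware` free-memory lines:
hw='node 0 free: 1024 MB
node 10 free: 2048 MB
node 20 free: 4096 MB'

i=0
# Without the leading space, "0 free" is a substring of "10 free" and
# "20 free", so all three lines match:
echo "$hw" | fgrep "$i free" | awk '{print $4}'    # prints 1024, 2048, 4096

# With the leading space, only node 0 itself matches:
echo "$hw" | fgrep " $i free" | awk '{print $4}'   # prints 1024
```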
Make numademo finish in a reasonable time when run on larger machines.
The interleave-mask loop runs from 1 to 1<<max_node, which is virtually
infinite on large machines, so under regression testing it now breaks
out after 50 masks. Also scale the FRACT_NODES stride up on larger
machines. With this patch the numademo test ran on 48 nodes in about 11 minutes.
---
numademo.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
Index: numactl-dev/numademo.c
===================================================================
--- numactl-dev.orig/numademo.c
+++ numactl-dev/numademo.c
@@ -37,6 +37,7 @@ static inline void clearcache(void *a, u
#endif
#define FRACT_NODES 8
#define FRACT_MASKS 32
+int fract_nodes;
unsigned long msize;
@@ -303,7 +304,7 @@ void test(enum test type)
if (regression_testing) {
printf("\nTest %s doing 1 of %d nodes and 1 of %d masks.\n",
- testname[thistest], FRACT_NODES, FRACT_MASKS);
+ testname[thistest], fract_nodes, FRACT_MASKS);
}
memtest("memory with no policy", numa_alloc(msize));
@@ -311,7 +312,7 @@ void test(enum test type)
memtest("memory interleaved on all nodes", numa_alloc_interleaved(msize));
for (i = 0; i <= max_node; i++) {
- if (regression_testing && (i % FRACT_NODES)) {
+ if (regression_testing && (i % fract_nodes)) {
/* for regression testing (-t) do only every eighth node */
continue;
}
@@ -324,6 +325,9 @@ void test(enum test type)
char buf2[10];
if (popcnt(mask) == 1)
continue;
+ if (regression_testing && (i > 50)) {
+ break;
+ }
if (regression_testing && (i % FRACT_MASKS)) {
/* for regression testing (-t)
do only every 32nd mask permutation */
@@ -345,7 +349,7 @@ void test(enum test type)
}
for (i = 0; i <= max_node; i++) {
- if (regression_testing && (i % FRACT_NODES)) {
+ if (regression_testing && (i % fract_nodes)) {
/* for regression testing (-t) do only every eighth node */
continue;
}
@@ -373,7 +377,7 @@ void test(enum test type)
for (i = 0; i <= max_node; i++) {
int oldhn = numa_preferred();
- if (regression_testing && (i % FRACT_NODES)) {
+ if (regression_testing && (i % fract_nodes)) {
/* for regression testing (-t) do only every eighth node */
continue;
}
@@ -396,7 +400,7 @@ void test(enum test type)
for (k = 0; k <= max_node; k++) {
if (k == i)
continue;
- if (regression_testing && (k % FRACT_NODES)) {
+ if (regression_testing && (k % fract_nodes)) {
/* for regression testing (-t)
do only every eighth node */
continue;
@@ -491,6 +495,7 @@ int main(int ac, char **av)
max_node = numa_max_node();
printf("%d nodes available\n", max_node+1);
+ fract_nodes = ((max_node/8)*2) + FRACT_NODES;
if (max_node <= 2)
regression_testing = 0; /* set -t auto-off for small systems */