* [PATCH] tests/gem_evict_everything: Use bo_count instead of count where intended
@ 2013-12-06 11:37 Tvrtko Ursulin
2013-12-06 12:12 ` Daniel Vetter
0 siblings, 1 reply; 5+ messages in thread
From: Tvrtko Ursulin @ 2013-12-06 11:37 UTC (permalink / raw)
To: Intel-gfx
From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
I don't see that it causes a problem, but it looks like bo_count was
intended in these places.

Also, using count to determine the number of processes does not make
sense unless there are thousands of cores.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
tests/gem_evict_everything.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/tests/gem_evict_everything.c b/tests/gem_evict_everything.c
index 41abef7..90c3ae1 100644
--- a/tests/gem_evict_everything.c
+++ b/tests/gem_evict_everything.c
@@ -135,8 +135,6 @@ static void exchange_uint32_t(void *array, unsigned i, unsigned j)
i_arr[j] = i_tmp;
}
-#define min(a, b) ((a) < (b) ? (a) : (b))
-
#define INTERRUPTIBLE (1 << 0)
#define SWAPPING (1 << 1)
#define DUP_DRMFD (1 << 2)
@@ -168,7 +166,7 @@ static void forked_evictions(int fd, int size, int count,
for (n = 0; n < bo_count; n++)
bo[n] = gem_create(fd, size);
- igt_fork(i, min(count, min(num_threads * 5, 12))) {
+ igt_fork(i, num_threads * 4) {
int realfd = fd;
int num_passes = flags & SWAPPING ? 10 : 100;
@@ -184,7 +182,7 @@ static void forked_evictions(int fd, int size, int count,
realfd = drm_open_any();
/* We can overwrite the bo array since we're forked. */
- for (l = 0; l < count; l++) {
+ for (l = 0; l < bo_count; l++) {
uint32_t flink;
flink = gem_flink(fd, bo[l]);
@@ -194,9 +192,9 @@ static void forked_evictions(int fd, int size, int count,
}
for (pass = 0; pass < num_passes; pass++) {
- copy(realfd, bo[0], bo[1], bo, count, 0);
+ copy(realfd, bo[0], bo[1], bo, bo_count, 0);
- for (l = 0; l < count && (flags & MEMORY_PRESSURE); l++) {
+ for (l = 0; l < bo_count && (flags & MEMORY_PRESSURE); l++) {
uint32_t *base = gem_mmap__cpu(realfd, bo[l],
size,
PROT_READ | PROT_WRITE);
@@ -244,7 +242,7 @@ static void swapping_evictions(int fd, int size, int count)
igt_permute_array(bo, bo_count, exchange_uint32_t);
for (pass = 0; pass < 100; pass++) {
- copy(fd, bo[0], bo[1], bo, count, 0);
+ copy(fd, bo[0], bo[1], bo, bo_count, 0);
}
}
--
1.8.4.3
* Re: [PATCH] tests/gem_evict_everything: Use bo_count instead of count where intended
2013-12-06 11:37 [PATCH] tests/gem_evict_everything: Use bo_count instead of count where intended Tvrtko Ursulin
@ 2013-12-06 12:12 ` Daniel Vetter
2013-12-06 12:33 ` Tvrtko Ursulin
0 siblings, 1 reply; 5+ messages in thread
From: Daniel Vetter @ 2013-12-06 12:12 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: Intel-gfx
On Fri, Dec 06, 2013 at 11:37:49AM +0000, Tvrtko Ursulin wrote:
> From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
>
> I don't see that it causes a problem, but it looks like bo_count was
> intended in these places.
>
> Also, using count to determine the number of processes does not make
> sense unless there are thousands of cores.
>
> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
> tests/gem_evict_everything.c | 12 +++++-------
> 1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/tests/gem_evict_everything.c b/tests/gem_evict_everything.c
> index 41abef7..90c3ae1 100644
> --- a/tests/gem_evict_everything.c
> +++ b/tests/gem_evict_everything.c
> @@ -135,8 +135,6 @@ static void exchange_uint32_t(void *array, unsigned i, unsigned j)
> i_arr[j] = i_tmp;
> }
>
> -#define min(a, b) ((a) < (b) ? (a) : (b))
> -
> #define INTERRUPTIBLE (1 << 0)
> #define SWAPPING (1 << 1)
> #define DUP_DRMFD (1 << 2)
> @@ -168,7 +166,7 @@ static void forked_evictions(int fd, int size, int count,
> for (n = 0; n < bo_count; n++)
> bo[n] = gem_create(fd, size);
>
> - igt_fork(i, min(count, min(num_threads * 5, 12))) {
> + igt_fork(i, num_threads * 4) {
You've killed the min( , 12) here ... that'll hurt on big desktops.
Otherwise patch looks good.
-Daniel
> int realfd = fd;
> int num_passes = flags & SWAPPING ? 10 : 100;
>
> @@ -184,7 +182,7 @@ static void forked_evictions(int fd, int size, int count,
> realfd = drm_open_any();
>
> /* We can overwrite the bo array since we're forked. */
> - for (l = 0; l < count; l++) {
> + for (l = 0; l < bo_count; l++) {
> uint32_t flink;
>
> flink = gem_flink(fd, bo[l]);
> @@ -194,9 +192,9 @@ static void forked_evictions(int fd, int size, int count,
> }
>
> for (pass = 0; pass < num_passes; pass++) {
> - copy(realfd, bo[0], bo[1], bo, count, 0);
> + copy(realfd, bo[0], bo[1], bo, bo_count, 0);
>
> - for (l = 0; l < count && (flags & MEMORY_PRESSURE); l++) {
> + for (l = 0; l < bo_count && (flags & MEMORY_PRESSURE); l++) {
> uint32_t *base = gem_mmap__cpu(realfd, bo[l],
> size,
> PROT_READ | PROT_WRITE);
> @@ -244,7 +242,7 @@ static void swapping_evictions(int fd, int size, int count)
> igt_permute_array(bo, bo_count, exchange_uint32_t);
>
> for (pass = 0; pass < 100; pass++) {
> - copy(fd, bo[0], bo[1], bo, count, 0);
> + copy(fd, bo[0], bo[1], bo, bo_count, 0);
> }
> }
>
> --
> 1.8.4.3
>
> _______________________________________________
> Intel-gfx mailing list
> Intel-gfx@lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/intel-gfx
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
* Re: [PATCH] tests/gem_evict_everything: Use bo_count instead of count where intended
2013-12-06 12:12 ` Daniel Vetter
@ 2013-12-06 12:33 ` Tvrtko Ursulin
2013-12-06 13:46 ` Daniel Vetter
0 siblings, 1 reply; 5+ messages in thread
From: Tvrtko Ursulin @ 2013-12-06 12:33 UTC (permalink / raw)
To: Daniel Vetter; +Cc: Intel-gfx
On Fri, 2013-12-06 at 13:12 +0100, Daniel Vetter wrote:
> On Fri, Dec 06, 2013 at 11:37:49AM +0000, Tvrtko Ursulin wrote:
> > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> >
> > I don't see that it causes a problem, but it looks like bo_count was
> > intended in these places.
> >
> > Also, using count to determine the number of processes does not make
> > sense unless there are thousands of cores.
> >
> > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > ---
> > tests/gem_evict_everything.c | 12 +++++-------
> > 1 file changed, 5 insertions(+), 7 deletions(-)
> >
> > diff --git a/tests/gem_evict_everything.c b/tests/gem_evict_everything.c
> > index 41abef7..90c3ae1 100644
> > --- a/tests/gem_evict_everything.c
> > +++ b/tests/gem_evict_everything.c
> > @@ -135,8 +135,6 @@ static void exchange_uint32_t(void *array, unsigned i, unsigned j)
> > i_arr[j] = i_tmp;
> > }
> >
> > -#define min(a, b) ((a) < (b) ? (a) : (b))
> > -
> > #define INTERRUPTIBLE (1 << 0)
> > #define SWAPPING (1 << 1)
> > #define DUP_DRMFD (1 << 2)
> > @@ -168,7 +166,7 @@ static void forked_evictions(int fd, int size, int count,
> > for (n = 0; n < bo_count; n++)
> > bo[n] = gem_create(fd, size);
> >
> > - igt_fork(i, min(count, min(num_threads * 5, 12))) {
> > + igt_fork(i, num_threads * 4) {
>
> You've killed the min( , 12) here ... that'll hurt on big desktops.
> Otherwise patch looks good.
It was hard for me to know what kind of stress was desired there.

Thinking of typical cases, a single core with a single thread gives five
"stressers", while the more typical 2x1 gives ten. So the whole
calculation typically lands between 10 and 12 (5 and 12 conditionally),
which makes having the calculation there at all seem almost pointless.
Tvrtko
* Re: [PATCH] tests/gem_evict_everything: Use bo_count instead of count where intended
2013-12-06 12:33 ` Tvrtko Ursulin
@ 2013-12-06 13:46 ` Daniel Vetter
2013-12-06 14:04 ` Tvrtko Ursulin
0 siblings, 1 reply; 5+ messages in thread
From: Daniel Vetter @ 2013-12-06 13:46 UTC (permalink / raw)
To: Tvrtko Ursulin; +Cc: Intel-gfx
On Fri, Dec 06, 2013 at 12:33:28PM +0000, Tvrtko Ursulin wrote:
> On Fri, 2013-12-06 at 13:12 +0100, Daniel Vetter wrote:
> > On Fri, Dec 06, 2013 at 11:37:49AM +0000, Tvrtko Ursulin wrote:
> > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > >
> > > I don't see that it causes a problem, but it looks like bo_count was
> > > intended in these places.
> > >
> > > Also, using count to determine the number of processes does not make
> > > sense unless there are thousands of cores.
> > >
> > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > ---
> > > tests/gem_evict_everything.c | 12 +++++-------
> > > 1 file changed, 5 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/tests/gem_evict_everything.c b/tests/gem_evict_everything.c
> > > index 41abef7..90c3ae1 100644
> > > --- a/tests/gem_evict_everything.c
> > > +++ b/tests/gem_evict_everything.c
> > > @@ -135,8 +135,6 @@ static void exchange_uint32_t(void *array, unsigned i, unsigned j)
> > > i_arr[j] = i_tmp;
> > > }
> > >
> > > -#define min(a, b) ((a) < (b) ? (a) : (b))
> > > -
> > > #define INTERRUPTIBLE (1 << 0)
> > > #define SWAPPING (1 << 1)
> > > #define DUP_DRMFD (1 << 2)
> > > @@ -168,7 +166,7 @@ static void forked_evictions(int fd, int size, int count,
> > > for (n = 0; n < bo_count; n++)
> > > bo[n] = gem_create(fd, size);
> > >
> > > - igt_fork(i, min(count, min(num_threads * 5, 12))) {
> > > + igt_fork(i, num_threads * 4) {
> >
> > You've killed the min( , 12) here ... that'll hurt on big desktops.
> > Otherwise patch looks good.
>
> It was hard for me to know what kind of stress was desired there.
>
> Thinking of typical cases, a single core with a single thread gives five
> "stressers", while the more typical 2x1 gives ten. So the whole
> calculation typically lands between 10 and 12 (5 and 12 conditionally),
> which makes having the calculation there at all seem almost pointless.
Well, igt stress tests are mostly random whacking until I'm fairly happy
on a set of machines. But if you kill that max of 12, runtime on bigger
machines will go through the roof for sure. And even on my really old
single-core machines it's still ok. I suspect that due to the thrashing
the dependency is fairly non-linear.

Longer term I want to speed up all these memory-thrashing tests by
mlocking most of main memory and so removing it from consideration. But
that's a bit of work to set up and roll out across all tests.
-Daniel
--
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
* Re: [PATCH] tests/gem_evict_everything: Use bo_count instead of count where intended
2013-12-06 13:46 ` Daniel Vetter
@ 2013-12-06 14:04 ` Tvrtko Ursulin
0 siblings, 0 replies; 5+ messages in thread
From: Tvrtko Ursulin @ 2013-12-06 14:04 UTC (permalink / raw)
To: Daniel Vetter; +Cc: Intel-gfx
On Fri, 2013-12-06 at 14:46 +0100, Daniel Vetter wrote:
> On Fri, Dec 06, 2013 at 12:33:28PM +0000, Tvrtko Ursulin wrote:
> > On Fri, 2013-12-06 at 13:12 +0100, Daniel Vetter wrote:
> > > On Fri, Dec 06, 2013 at 11:37:49AM +0000, Tvrtko Ursulin wrote:
> > > > From: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > >
> > > > I don't see that it causes a problem, but it looks like bo_count was
> > > > intended in these places.
> > > >
> > > > Also, using count to determine the number of processes does not make
> > > > sense unless there are thousands of cores.
> > > >
> > > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> > > > ---
> > > > tests/gem_evict_everything.c | 12 +++++-------
> > > > 1 file changed, 5 insertions(+), 7 deletions(-)
> > > >
> > > > diff --git a/tests/gem_evict_everything.c b/tests/gem_evict_everything.c
> > > > index 41abef7..90c3ae1 100644
> > > > --- a/tests/gem_evict_everything.c
> > > > +++ b/tests/gem_evict_everything.c
> > > > @@ -135,8 +135,6 @@ static void exchange_uint32_t(void *array, unsigned i, unsigned j)
> > > > i_arr[j] = i_tmp;
> > > > }
> > > >
> > > > -#define min(a, b) ((a) < (b) ? (a) : (b))
> > > > -
> > > > #define INTERRUPTIBLE (1 << 0)
> > > > #define SWAPPING (1 << 1)
> > > > #define DUP_DRMFD (1 << 2)
> > > > @@ -168,7 +166,7 @@ static void forked_evictions(int fd, int size, int count,
> > > > for (n = 0; n < bo_count; n++)
> > > > bo[n] = gem_create(fd, size);
> > > >
> > > > - igt_fork(i, min(count, min(num_threads * 5, 12))) {
> > > > + igt_fork(i, num_threads * 4) {
> > >
> > > You've killed the min( , 12) here ... that'll hurt on big desktops.
> > > Otherwise patch looks good.
> >
> > It was hard for me to know what kind of stress was desired there.
> >
> > Thinking of typical cases, a single core with a single thread gives five
> > "stressers", while the more typical 2x1 gives ten. So the whole
> > calculation typically lands between 10 and 12 (5 and 12 conditionally),
> > which makes having the calculation there at all seem almost pointless.
>
> Well, igt stress tests are mostly random whacking until I'm fairly happy
> on a set of machines. But if you kill that max of 12, runtime on bigger
> machines will go through the roof for sure. And even on my really old
> single-core machines it's still ok. I suspect that due to the thrashing
> the dependency is fairly non-linear.
OK, I'll send a version with that clamp put back in.
Tvrtko