public inbox for linux-xfs@vger.kernel.org
* Tuning XFS for real time audio on a laptop with encrypted LVM
@ 2010-05-21  2:16 Pedro Ribeiro
  2010-05-21  4:14 ` Dave Chinner
  2010-05-22 13:22 ` Eric Sandeen
  0 siblings, 2 replies; 9+ messages in thread
From: Pedro Ribeiro @ 2010-05-21  2:16 UTC (permalink / raw)
  To: xfs

Hi all,

I was wondering what the best I/O scheduler is for my use case, given
my current hardware.

I have a laptop with a fast Core 2 Duo at 2.26 GHz and a nice amount of
RAM (4 GB), which I use primarily for real-time audio (though without a
-rt kernel). All my partitions are XFS under LVM, which itself is
contained in a LUKS partition (encrypted with AES-128).

CFQ currently does not perform very well: it causes a lot of thrashing
and high latencies when I/O usage is high. Changing to the noop
scheduler solves some of the problems and makes the system more
responsive. Still, performance is a bit of a letdown: it takes 1m30s to
unpack the linux-2.6.34 tarball and a massive 2m30s to rm -r it.
I mount with lazy-count=1, noatime, logbufs=8, logbsize=256k and a
128 MB log.

Is there any tunable I should mess with to solve this? And what do you
think of my scheduler change (I haven't tested it that much, to be
honest)?
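For reference, the scheduler change above can be made at runtime
through sysfs (a sketch; sda is an example device name, the paths are
the standard block-layer locations):

```shell
# Show the available schedulers; the active one appears in [brackets]
cat /sys/block/sda/queue/scheduler
# Switch this device to noop (takes effect immediately)
echo noop > /sys/block/sda/queue/scheduler
# To make the choice persistent, boot with elevator=noop on the
# kernel command line.
```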

Regards,
Pedro

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


* Re: Tuning XFS for real time audio on a laptop with encrypted LVM
  2010-05-21  2:16 Tuning XFS for real time audio on a laptop with encrypted LVM Pedro Ribeiro
@ 2010-05-21  4:14 ` Dave Chinner
  2010-05-21  6:25   ` Stan Hoeppner
  2010-05-22 13:22 ` Eric Sandeen
  1 sibling, 1 reply; 9+ messages in thread
From: Dave Chinner @ 2010-05-21  4:14 UTC (permalink / raw)
  To: Pedro Ribeiro; +Cc: xfs

On Fri, May 21, 2010 at 03:16:15AM +0100, Pedro Ribeiro wrote:
> Hi all,
> 
> I was wondering what is the best scheduler for my use case given my
> current hardware.
> 
> I have a laptop with a fast Core 2 duo at 2.26 and a nice amount of
> ram (4GB) which I use primarily for real time audio (though without a
> -rt kernel). All my partitions are XFS under LVM which itself is
> contained on a LUKS partition (encrypted with AES 128).
> 
> CFQ currently does not perform very well and causes a lot of thrashing
> and high latencies when I/O usage is high. Changing it to the noop
> scheduler solves some of the problems and makes it more responsive.
> Still performance is a bit of a let down: it takes 1m30s to unpack the
> linux-2.6.34 tarball and a massive 2m30s to rm -r.
> I have lazy-count=1, noatime, logbufs=8, logbsize=256k and a 128m log.
> 
> Is there any tunable I should mess with to solve this?

That depends on whether you value your data. If you don't care about
corruption or data loss on sudden power loss (e.g. the battery running
flat), then add nobarrier to your mount options. Otherwise, you're
close to the best performance you are going to get on that hardware
with XFS.
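If one does accept that risk, the change is a single mount option
(a sketch; the mount point is an example):

```shell
# Remount an existing XFS filesystem without write barriers,
# or add nobarrier to the options column of its fstab entry
mount -o remount,nobarrier /home
```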

> And what do you
> think of my scheduler change (I haven't tested it that much to be
> honest)?

I only ever use the noop scheduler with XFS these days. CFQ has been
a steaming pile of ever-changing regressions for the past 4 or 5
kernel releases, so I stopped using it. Besides, XFS is often 10-15%
faster on noop for the same workload, anyway...

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: Tuning XFS for real time audio on a laptop with encrypted LVM
  2010-05-21  4:14 ` Dave Chinner
@ 2010-05-21  6:25   ` Stan Hoeppner
  2010-05-21 11:29     ` Pedro Ribeiro
  0 siblings, 1 reply; 9+ messages in thread
From: Stan Hoeppner @ 2010-05-21  6:25 UTC (permalink / raw)
  To: xfs

Dave Chinner put forth on 5/20/2010 11:14 PM:

> I only ever use the noop scheduler with XFS these days. CFQ has been
> a steaming pile of ever changing regressions for the past 4 or 5
> kernel releases, so i stopped using it. Besides, XFS is often 10-15%
> faster on no-op for the same workload, anyway...

IIRC the elevator sits below the FS in the stack, and has a tighter
relationship to the block device driver and physical storage subsystem
than to the FS.  I have one box with a 7.2K RPM 500 GB WD drive and a
sata_sil controller that doesn't support NCQ.  Without NCQ, whether due
to no controller support or an ATA_HORKAGE_NONCQ-blacklisted drive, the
deadline and anticipatory (now removed from the kernel, IIRC) elevators
yield vastly superior performance under load compared to CFQ or noop.

Noop fits well with good hardware RAID, either a local PCI/PCI-X/PCIe
RAID card or a straight FC HBA talking to a SAN array controller.  CFQ
just gets in the way with good hardware.  In some testing I've done with
FC HBAs and target LUNs on IBM FAStT and Nexsan SAN arrays, deadline has
shown a tiny advantage over noop in a few synthetic tests.  This testing
was performed on SLED 10 and Debian Etch guests atop VMware ESX 3, at
night on weekends when load across the ESX blade farm was near zero, but
it was still done in a virtual environment; on bare hardware I'm not
sure one would get the same results.  Anyway, the deadline elevator gave
so little advantage over noop that I'd still recommend noop on good
hardware due to zero CPU overhead.  Deadline has a few fancy tricks, so
it will always eat more CPU, even if only a modest amount.

I'd sum the elevator choice up this way: if you have a good storage
hardware and driver combo, such as fast SATA disks with good NCQ, or
just about any SCSI, SAS, RAID, or SAN setup, go with noop.  For lesser
hardware/drivers (lacking or crappy NCQ, or laptops with slow
4200/5400 RPM drives, even if they do have good NCQ), use deadline.
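A quick way to check which side of that line a given drive falls on
(a sketch; sda is an example device, the paths are standard sysfs
locations):

```shell
# Effective queue depth: 1 means NCQ is off or unsupported;
# 31 is typical for a working NCQ drive
cat /sys/block/sda/device/queue_depth
# The kernel also logs NCQ status at probe time
dmesg | grep -i ncq
```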

I agree with Dave that CFQ isn't all that great, and in my testing it's even
worse when used with Linux guests on ESX than it is on bare metal.

Caveat:  I'm no expert, and I don't do storage subsystem performance testing
all day long.  I'm just reporting my first hand experience.  YMMV and all
the normal disclaimers apply.

-- 
Stan


* Re: Tuning XFS for real time audio on a laptop with encrypted LVM
  2010-05-21  6:25   ` Stan Hoeppner
@ 2010-05-21 11:29     ` Pedro Ribeiro
  2010-05-21 13:45       ` Stan Hoeppner
  0 siblings, 1 reply; 9+ messages in thread
From: Pedro Ribeiro @ 2010-05-21 11:29 UTC (permalink / raw)
  To: Stan Hoeppner, david; +Cc: xfs

On 21 May 2010 07:25, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> Dave Chinner put forth on 5/20/2010 11:14 PM:
>
>> I only ever use the noop scheduler with XFS these days. CFQ has been
>> a steaming pile of ever changing regressions for the past 4 or 5
>> kernel releases, so i stopped using it. Besides, XFS is often 10-15%
>> faster on no-op for the same workload, anyway...
>
> IIRC the elevator sits below the FS in the stack, and has a tighter
> relationship to the block device driver and physical storage subsystem than
> to the FS.  I have one box with a 7.2K 500GB WD drive and a sata_sil
> controller that doesn't support NCQ.  Without NCQ due to no controller
> support or ATA_horkage_NCQ blacklisted drives, the deadline and anticipatory
> (now removed from the kernel IIRC) elevators yield vastly superior
> performance under load compared to CFQ or noop.
>
> Noop fits well with good hardware RAID, either local machine PCI/x/e RAID
> card or straight FC HBA talking to a SAN array controller.  CFQ just gets in
> the way with good hardware.  In some testing I've done with FC HBAs and
> target LUNs on IBM FasTt and Nexsan SAN arrays, deadline has shown a tiny
> advantage over noop with a few synthetic tests.  This testing was performed
> on SLED 10 and Debian Etch guests atop VMWare ESX 3 at night on weekends
> when load across the ESX blade farm was near zero, but it was still done in
> a virtual environment.  On bare hardware, I'm not sure one would get the
> same results.  Anyway, the deadline elevator gave such little advantage over
> noop, I'd still recommend noop on good hardware due to zero CPU overhead.
> Deadline has a few fancy tricks so it will always eat more CPU, even though
> it's a modest amount.
>
> I'd sum the elevator choice up this way:  If you have a good storage
> hardware and driver combo such as fast SATA disks with good NCQ, or just
> about any SCSI, SAS, RAID, or SAN setup, go with noop.  For lesser
> hardware/drivers, use deadline (i.e. lacking or crappy NCQ, or on laptops
> due to the slow 4200/5400 rpm drives, even if they do have good NCQ).
>
> I agree with Dave that CFQ isn't all that great, and in my testing it's even
> worse when used with Linux guests on ESX than it is on bare metal.
>
> Caveat:  I'm no expert, and I don't do storage subsystem performance testing
> all day long.  I'm just reporting my first hand experience.  YMMV and all
> the normal disclaimers apply.
>
> --
> Stan
>
>

Thanks for the answers.

I do value my data a lot (that's why I moved away from Windows some
years ago), and even though this is a laptop with the protection of a
battery, I keep crashing and hard-locking it, because I always like to
run -rcX kernels and fool around with lots of dangerous
stuff/settings/etc.

Actually, that is one of the reasons I stick with XFS instead of moving
to ext4 or the like. I've been using and torturing XFS for a couple of
years now and I have NEVER suffered any corruption. I've only had a
couple of cases of unimportant data loss, and those were completely
expected because I was an ass; even then I only lost the data that was
unsynced during the last minute.

I forgot to say that I have a SATA-I 5400 RPM hard drive; it does
support NCQ, and since this is a laptop there is no RAID or similar.

I've been running a few tests with bonnie++ and hdparm. hdparm reports
my bare hard drive read speed as 67 MB/s. With bonnie++ the maximum I
can get is 52 MB/s with noop and cfq, while deadline only gives me
48 MB/s. This is not bad at all. noop is a tad faster in all the tests;
the only place it performs worse is read latency, although read
throughput appears to be the same.

So it is agreed that CFQ sucks right now. I'll continue my testing, but
now under proper daily use, to see which is better: deadline or noop.
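A simple way to run that comparison is to repeat the same workload
under each elevator with cold caches (a sketch; the device name and
tarball path are examples):

```shell
for s in noop deadline; do
    echo "$s" > /sys/block/sda/queue/scheduler
    sync; echo 3 > /proc/sys/vm/drop_caches   # drop page/dentry caches
    echo "== $s =="
    time tar xf linux-2.6.34.tar.bz2
    time rm -rf linux-2.6.34
done
```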

Regards,
Pedro


* Re: Tuning XFS for real time audio on a laptop with encrypted LVM
  2010-05-21 11:29     ` Pedro Ribeiro
@ 2010-05-21 13:45       ` Stan Hoeppner
  2010-05-22 12:21         ` Pedro Ribeiro
  0 siblings, 1 reply; 9+ messages in thread
From: Stan Hoeppner @ 2010-05-21 13:45 UTC (permalink / raw)
  To: xfs

Pedro Ribeiro put forth on 5/21/2010 6:29 AM:
 
> So it is agreed that CFQ sucks right now. I'll continue my testing but
> now with proper daily use to see which is better, deadline or noop.

I'd be curious to see what your seek performance is with each elevator on that laptop drive.  Give this basic parallel seek tester a spin.

Make runs with 1, 8, 16, 32, and 64 threads to find the peak seek throughput and the number of threads required to reach it.  Each run takes about 30 seconds.  For 7.2k RPM and faster mechanical drives, and for SSDs, you'd want to test 128 and 256 threads as well.  Due to the threaded nature of the test, good command queuing will have a substantial effect on results, as will elevator choice on some systems.  For background info and the evolution of this code, see this thread:  http://www.linuxinsight.com/how_fast_is_your_disk.html

compile with:
gcc -o seeker_baryluk -O2 -march=native seeker_baryluk.c -pthread

run command:
./seeker_baryluk device number_of_threads

A nice feature is that you can test an entire drive or individual partitions (or an entire SAN LUN, or just partitions on said LUN).  This can tell you how much slower the inner cylinders are compared to the outer cylinders, and it even works for local RAID and SAN LUNs.

./seeker_baryluk /dev/sda 64
./seeker_baryluk /dev/sda2 64
...
./seeker_baryluk /dev/sda9 64

I find this a much more informative simple test of storage subsystem performance than something like hdparm, which only issues very small sequential I/O.
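One detail worth noting before the listing: each worker thread maps a
uniform random draw to a byte offset within the device, so seeks are
spread across the whole span under test.  A minimal, self-contained
sketch of that mapping (it mirrors the computation in the worker loop
of the source below):

```c
#include <stdint.h>
#include <stdlib.h>

/* Map a per-thread PRNG draw to a byte offset in [0, numbytes).
 * rand_r()/(RAND_MAX + 1.0) yields a double in [0, 1), so the
 * product is always strictly less than numbytes. */
static uint64_t pick_offset(unsigned int *seed, uint64_t numbytes)
{
    return (uint64_t)(numbytes * (rand_r(seed) / (RAND_MAX + 1.0)));
}
```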

--- seeker_baryluk.c ---

#define _LARGEFILE64_SOURCE

#ifndef _REENTRANT
#define _REENTRANT
#endif
#include <pthread.h>

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <errno.h>
#include <time.h>
#include <signal.h>
#include <sys/fcntl.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

#define BLOCKSIZE 512
#define TIMEOUT 30

pthread_mutex_t muteks = PTHREAD_MUTEX_INITIALIZER;

int count;
time_t start;
off64_t maxoffset = 0;
off64_t minoffset = 249994674176000uLL;


int threads;

typedef struct {
	int id;
	int fd;
	int run;
	char* filename;
	unsigned int seed;
	unsigned long long numbytes;
	char* buffer;
	int count;
	off64_t maxoffset;
	off64_t minoffset;
} parm;

parm *p;

void done(int signum) {	/* SIGALRM handler */
	int i;
	time_t end;

	(void)signum;
	time(&end);

	if (end < start + TIMEOUT) {
		printf(".");
		alarm(1);
		return;
	}

	for (i = 0; i < threads; i++) {
		p[i].run = 0;
	}
}

void report() {
	if (count) {
		printf(".\nResults: %d seeks/second, %.3f ms random access time (%llu < offsets < %llu)\n",
			count / TIMEOUT, 1000.0 * TIMEOUT / count, (unsigned long long)minoffset, (unsigned long long)maxoffset);
	}
	exit(EXIT_SUCCESS);
}

void handle(const char *string, int error) {
	if (error) {
		perror(string);
		exit(EXIT_FAILURE);
	}
}

void* f(void *arg) {
	int retval;
	off64_t offset;

	parm *p = (parm*)arg;

	srand(p->seed);

	/* wait for all processes */
	pthread_mutex_lock(&muteks);
	pthread_mutex_unlock(&muteks);

	while (p->run) {
		offset = (off64_t) ( (unsigned long long) (p->numbytes * (rand_r(&(p->seed)) / (RAND_MAX + 1.0) )));
		//printf("%d %llu\n", p->id, (unsigned long long )offset);
		retval = lseek64(p->fd, offset, SEEK_SET);
		handle("lseek64", retval == (off64_t) -1);
		retval = read(p->fd, p->buffer, BLOCKSIZE);
		handle("read", retval < 0);

		p->count++;
		if (offset > p->maxoffset) {
			p->maxoffset = offset;
		} else if (offset < p->minoffset) {
			p->minoffset = offset;
		}
	}

	//pthread_exit(NULL);
	return NULL;
}

int main(int argc, char **argv) {
	int fd, retval;
	int physical_sector_size = 0;
	size_t logical_sector_size = 0ULL;
	unsigned long long numblocks, numbytes;
	unsigned long long ull;
	unsigned long ul;
	pthread_t *t_id;
	pthread_attr_t pthread_custom_attr;
	int i;

	setvbuf(stdout, NULL, _IONBF, 0);

	printf("Seeker v3.0, 2009-06-17, "
	       "http://www.linuxinsight.com/how_fast_is_your_disk.html\n");

	if (!(argc == 2 || argc == 3)) {
		printf("Usage: %s device [threads]\n", argv[0]);
		exit(1);
	}

	threads = 1;
	if (argc == 3) {
		threads = atoi(argv[2]);
	}

	//pthread_mutex_init(&muteks, NULL); 

	fd = open(argv[1], O_RDONLY | O_LARGEFILE);
	handle("open", fd < 0);

#ifdef BLKGETSIZE64
	retval = ioctl(fd, BLKGETSIZE64, &ull);
	numbytes = (unsigned long long)ull;
#else
	retval = ioctl(fd, BLKGETSIZE, &ul);
	numbytes = (unsigned long long)ul;
#endif
	handle("ioctl", retval == -1);
	retval = ioctl(fd, BLKBSZGET, &logical_sector_size);
	handle("ioctl", retval == -1);
	retval = ioctl(fd, BLKSSZGET, &physical_sector_size);
	handle("ioctl", retval == -1);
	numblocks = ((unsigned long long)numbytes)/(unsigned long long)BLOCKSIZE;
	printf("Benchmarking %s [%llu blocks, %llu bytes, %llu GiB, %llu MiB, %llu GB, %llu MB]\n",
		argv[1], numblocks, numbytes, numbytes/(1024uLL*1024uLL*1024uLL), numbytes / (1024uLL*1024uLL), numbytes/(1000uLL*1000uLL*1000uLL), numbytes / (1000uLL*1000uLL));
	printf("[%zu logical sector size, %d physical sector size]\n", logical_sector_size, physical_sector_size);
	printf("[%d threads]\n", threads);
	printf("Wait %d seconds", TIMEOUT);

	t_id = (pthread_t *)malloc(threads*sizeof(pthread_t));
	handle("malloc", t_id == NULL);
	pthread_attr_init(&pthread_custom_attr);
	p = (parm *)malloc(sizeof(parm)*threads);
	handle("malloc", p == NULL);

	time(&start);

	pthread_mutex_lock(&muteks);


	srand((unsigned int)start*(unsigned int)getpid());

	for (i = 0; i < threads; i++) {
		p[i].id = i;
		p[i].filename = argv[1];
		p[i].seed = rand()+i;
		p[i].fd = dup(fd);
		handle("dup", p[i].fd < 0);
		p[i].buffer = malloc(sizeof(char)*BLOCKSIZE);
		p[i].numbytes = numbytes;
		handle("malloc", p[i].buffer == NULL);
		p[i].run = 1;
		p[i].count = 0;
		p[i].minoffset = minoffset;
		p[i].maxoffset = maxoffset;

		retval = pthread_create(&(t_id[i]), NULL, f, (void*)(p+i));
		handle("pthread_create", retval != 0);
	}

	sleep(1);

	time(&start);
	signal(SIGALRM, &done);
	alarm(1);

	pthread_mutex_unlock(&muteks);

	for (i = 0; i < threads; i++) {
		pthread_join(t_id[i], NULL);
	}

	for (i = 0; i < threads; i++) {
		count += p[i].count;
		if (p[i].maxoffset > maxoffset) {
			maxoffset = p[i].maxoffset;
		}
		if (p[i].minoffset < minoffset) {
			minoffset = p[i].minoffset;
		}
	}

	report();

	/* notreached */
	return 0;
}


* Re: Tuning XFS for real time audio on a laptop with encrypted LVM
  2010-05-21 13:45       ` Stan Hoeppner
@ 2010-05-22 12:21         ` Pedro Ribeiro
  2010-05-22 22:13           ` Stan Hoeppner
  0 siblings, 1 reply; 9+ messages in thread
From: Pedro Ribeiro @ 2010-05-22 12:21 UTC (permalink / raw)
  To: Stan Hoeppner; +Cc: xfs

[-- Attachment #1: Type: text/plain, Size: 8448 bytes --]

On 21 May 2010 14:45, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> Pedro Ribeiro put forth on 5/21/2010 6:29 AM:
>
>> So it is agreed that CFQ sucks right now. I'll continue my testing but
>> now with proper daily use to see which is better, deadline or noop.
>
> I'd be curious to see what your seek performance is with each elevator on that laptop drive.  Give this basic parallel seek tester a spin.
>
> Make runs with 1, 8, 16, 32, and 64 threads to find peak seek throughput and number of threads required to reach it.  Each run takes about 30 seconds.  For 7.2k rpm and up mechanical drives and SSDs you'd want to test 128 and 256 threads also.  Due to the threaded nature good command queuing will have a substantial effect on results, as will elevator choice on some systems.  For background info and evolution of this code, see this thread:  http://www.linuxinsight.com/how_fast_is_your_disk.html
>
> compile with:
> gcc -o seeker_baryluk -O2 -march=native seeker_baryluk.c -pthread
>
> run command:
> ./seeker_baryluk device number_of_threads
>
> A nice feature is that you can test an entire drive or individual partitions (or an entire SAN LUN or just partitions on said LUN).  This can tell you how much slower inner cylinders are compared to outer cylinders, and even works for local RAID and SAN LUNs.
>
> ./seeker_baryluk /dev/sda 64
> ./seeker_baryluk /dev/sda2 64
> ...
> ./seeker_baryluk /dev/sda9 64
>
> I find this a much more informative simple test of storage subsystem performance than something like hdparm which is very small sequential I/O.
>
> [-- seeker_baryluk.c source and list footer snipped --]
>

Hi,
results are attached. There appears to be no difference between any of
the schedulers.

Regards,
Pedro

[-- Attachment #2: seeker.results --]
[-- Type: application/octet-stream, Size: 32927 bytes --]

/dev/sda:

ATA device, with non-removable media
	Model Number:       WDC WD3200BEVS-08VAT2                   
	Serial Number:      WD-WXK0E59AWY28
	Firmware Revision:  14.01A14
	Transport:          Serial, SATA 1.0a, SATA II Extensions, SATA Rev 2.5
Standards:
	Supported: 8 7 6 5 
	Likely used: 8
Configuration:
	Logical		max	current
	cylinders	16383	16383
	heads		16	16
	sectors/track	63	63
	--
	CHS current addressable sectors:   16514064
	LBA    user addressable sectors:  268435455
	LBA48  user addressable sectors:  625142448
	Logical/Physical Sector size:           512 bytes
	device size with M = 1024*1024:      305245 MBytes
	device size with M = 1000*1000:      320072 MBytes (320 GB)
	cache/buffer size  = 8192 KBytes
	Nominal Media Rotation Rate: 5400
Capabilities:
	LBA, IORDY(can be disabled)
	Queue depth: 32
	Standby timer values: spec'd by Standard, no device specific minimum
	R/W multiple sector transfer: Max = 16	Current = 16
	Advanced power management level: 192
	Recommended acoustic management value: 128, current value: 254
	DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6 
	     Cycle time: min=120ns recommended=120ns
	PIO: pio0 pio1 pio2 pio3 pio4 
	     Cycle time: no flow control=120ns  IORDY flow control=120ns
Commands/features:
	Enabled	Supported:
	   *	SMART feature set
	    	Security Mode feature set
	   *	Power Management feature set
	   *	Write cache
	   *	Look-ahead
	   *	Host Protected Area feature set
	   *	WRITE_BUFFER command
	   *	READ_BUFFER command
	   *	DOWNLOAD_MICROCODE
	   *	Advanced Power Management feature set
	    	SET_MAX security extension
	   *	Automatic Acoustic Management feature set
	   *	48-bit Address feature set
	   *	Device Configuration Overlay feature set
	   *	Mandatory FLUSH_CACHE
	   *	FLUSH_CACHE_EXT
	   *	SMART error logging
	   *	SMART self-test
	   *	General Purpose Logging feature set
	   *	WRITE_{DMA|MULTIPLE}_FUA_EXT
	   *	64-bit World wide name
	   *	IDLE_IMMEDIATE with UNLOAD
	   *	Disable Data Transfer After Error Detection
	   *	WRITE_UNCORRECTABLE_EXT command
	   *	Segmented DOWNLOAD_MICROCODE
	   *	Gen1 signaling speed (1.5Gb/s)
	   *	Native Command Queueing (NCQ)
	   *	Host-initiated interface power management
	   *	Phy event counters
	   *	DMA Setup Auto-Activate optimization
	    	Device-initiated interface power management
	   *	Software settings preservation
	   *	SMART Command Transport (SCT) feature set
	   *	SCT Long Sector Access (AC1)
	   *	SCT LBA Segment Access (AC2)
	   *	SCT Error Recovery Control (AC3)
	   *	SCT Features Control (AC4)
	   *	SCT Data Tables (AC5)
	    	unknown 206[12] (vendor specific)
	    	unknown 206[13] (vendor specific)
Security: 
	Master password revision code = 65534
		supported
	not	enabled
	not	locked
		frozen
	not	expired: security count
		supported: enhanced erase
	102min for SECURITY ERASE UNIT. 102min for ENHANCED SECURITY ERASE UNIT.
Logical Unit WWN Device Identifier: 50014ee203093006
	NAA		: 5
	IEEE OUI	: 0014ee
	Unique ID	: 203093006
Checksum: correct

Testing with noop
-----------------------------------------
1 threads noop
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 127 seeks/second, 7.855 ms random access time (3517 < offsets < 353538659)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 57 seeks/second, 17.271 ms random access time (277887010 < offsets < 319671385051)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 79 seeks/second, 12.584 ms random access time (4804250 < offsets < 26842257000)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 78 seeks/second, 12.728 ms random access time (3436255 < offsets < 10737263755)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 53 seeks/second, 18.622 ms random access time (126911629 < offsets < 282060673146)
-----------------------------------------
8 threads noop
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 316 seeks/second, 3.159 ms random access time (10600 < offsets < 353615977)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 58 seeks/second, 17.075 ms random access time (531143072 < offsets < 319611992787)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 124 seeks/second, 8.013 ms random access time (10228387 < offsets < 26829552837)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 159 seeks/second, 6.272 ms random access time (880360 < offsets < 10734628965)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 89 seeks/second, 11.215 ms random access time (56727966 < offsets < 282087146258)
-----------------------------------------
16 threads noop
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 601 seeks/second, 1.662 ms random access time (24444 < offsets < 353644033)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 101 seeks/second, 9.859 ms random access time (198693453 < offsets < 319710370083)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 150 seeks/second, 6.655 ms random access time (4958687 < offsets < 26841589012)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 187 seeks/second, 5.347 ms random access time (830085 < offsets < 10737258260)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 99 seeks/second, 10.033 ms random access time (5762410 < offsets < 282062777311)
-----------------------------------------
32 threads noop
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 493767 seeks/second, 0.002 ms random access time (64 < offsets < 353654779)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 112 seeks/second, 8.926 ms random access time (57579357 < offsets < 319452443816)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 180 seeks/second, 5.541 ms random access time (7295087 < offsets < 26836715687)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 212 seeks/second, 4.704 ms random access time (626875 < offsets < 10736009075)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 115 seeks/second, 8.633 ms random access time (84299537 < offsets < 282097554752)
-----------------------------------------
64 threads noop
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 1136082 seeks/second, 0.001 ms random access time (0 < offsets < 353654783)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 113 seeks/second, 8.780 ms random access time (150422945 < offsets < 319508442517)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 179 seeks/second, 5.571 ms random access time (2712462 < offsets < 26837002950)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 211 seeks/second, 4.735 ms random access time (906575 < offsets < 10735000880)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 119 seeks/second, 8.403 ms random access time (41974115 < offsets < 282018133313)
-----------------------------------------
Testing with deadline
-----------------------------------------
1 threads deadline
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 930603 seeks/second, 0.001 ms random access time (6 < offsets < 353654762)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 58 seeks/second, 17.065 ms random access time (380990045 < offsets < 319703224009)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 73 seeks/second, 13.520 ms random access time (15440975 < offsets < 26799130725)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 94 seeks/second, 10.631 ms random access time (2153720 < offsets < 10731855380)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 53 seeks/second, 18.773 ms random access time (371135819 < offsets < 282075543662)
-----------------------------------------
8 threads deadline
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 1276976 seeks/second, 0.001 ms random access time (2 < offsets < 353654765)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 57 seeks/second, 17.361 ms random access time (300690163 < offsets < 319073861154)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 126 seeks/second, 7.930 ms random access time (10326525 < offsets < 26839243512)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 159 seeks/second, 6.284 ms random access time (544295 < offsets < 10737117575)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 88 seeks/second, 11.364 ms random access time (46681290 < offsets < 282069914470)
-----------------------------------------
16 threads deadline
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 1199231 seeks/second, 0.001 ms random access time (6 < offsets < 353654783)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 101 seeks/second, 9.843 ms random access time (20694570 < offsets < 319463957722)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 139 seeks/second, 7.160 ms random access time (7902462 < offsets < 26819434300)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 184 seeks/second, 5.411 ms random access time (4445670 < offsets < 10734983830)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 99 seeks/second, 10.077 ms random access time (49159096 < offsets < 282048947579)
-----------------------------------------
32 threads deadline
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 1162557 seeks/second, 0.001 ms random access time (0 < offsets < 353654763)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 112 seeks/second, 8.899 ms random access time (16924341 < offsets < 319632055363)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 177 seeks/second, 5.642 ms random access time (4036050 < offsets < 26842655212)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 210 seeks/second, 4.760 ms random access time (1410085 < offsets < 10735402820)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 115 seeks/second, 8.646 ms random access time (74318551 < offsets < 282066556688)
-----------------------------------------
64 threads deadline
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 1121451 seeks/second, 0.001 ms random access time (8 < offsets < 353654775)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 116 seeks/second, 8.571 ms random access time (377091035 < offsets < 319547630026)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 180 seeks/second, 5.539 ms random access time (3366562 < offsets < 26843413575)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 212 seeks/second, 4.698 ms random access time (9077250 < offsets < 10736255180)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 117 seeks/second, 8.547 ms random access time (14177360 < offsets < 282035478745)
-----------------------------------------
Testing with cfq
-----------------------------------------
1 threads cfq
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 921678 seeks/second, 0.001 ms random access time (8 < offsets < 353654778)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 58 seeks/second, 17.172 ms random access time (16239197 < offsets < 319490125406)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 85 seeks/second, 11.710 ms random access time (693575 < offsets < 26802205325)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 99 seeks/second, 10.098 ms random access time (7985740 < offsets < 10734206135)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 59 seeks/second, 16.685 ms random access time (11497099 < offsets < 281771279049)
-----------------------------------------
8 threads cfq
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 1279125 seeks/second, 0.001 ms random access time (0 < offsets < 353654783)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[1 threads]
Wait 30 seconds..............................
Results: 58 seeks/second, 17.202 ms random access time (69772601 < offsets < 319674536685)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 128 seeks/second, 7.792 ms random access time (4080662 < offsets < 26842256512)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 161 seeks/second, 6.186 ms random access time (930135 < offsets < 10734888740)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[8 threads]
Wait 30 seconds..............................
Results: 87 seeks/second, 11.368 ms random access time (211150864 < offsets < 282023071978)
-----------------------------------------
16 threads cfq
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 1231879 seeks/second, 0.001 ms random access time (6 < offsets < 353654775)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 101 seeks/second, 9.849 ms random access time (35304726 < offsets < 319472946331)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 143 seeks/second, 6.993 ms random access time (14745912 < offsets < 26838972875)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 188 seeks/second, 5.317 ms random access time (1529170 < offsets < 10737281760)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[16 threads]
Wait 30 seconds..............................
Results: 99 seeks/second, 10.091 ms random access time (1060227 < offsets < 282086038472)
-----------------------------------------
32 threads cfq
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 1140315 seeks/second, 0.001 ms random access time (6 < offsets < 353654765)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 114 seeks/second, 8.744 ms random access time (123137621 < offsets < 319708915380)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 176 seeks/second, 5.658 ms random access time (5448162 < offsets < 26840018400)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 212 seeks/second, 4.701 ms random access time (2520880 < offsets < 10737041030)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[32 threads]
Wait 30 seconds..............................
Results: 115 seeks/second, 8.661 ms random access time (20597584 < offsets < 282024661662)
-----------------------------------------
64 threads cfq
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda1 [690732 blocks, 353654784 bytes, 0 GB, 337 MB, 0 GiB, 353 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 1139340 seeks/second, 0.001 ms random access time (0 < offsets < 353654781)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/sda2 [624446550 blocks, 319716633600 bytes, 297 GB, 304905 MB, 319 GiB, 319716 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 115 seeks/second, 8.651 ms random access time (3035060 < offsets < 319449639221)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-ROOT [52428800 blocks, 26843545600 bytes, 25 GB, 25600 MB, 26 GiB, 26843 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 178 seeks/second, 5.589 ms random access time (18073825 < offsets < 26839189662)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-SWAP [20971520 blocks, 10737418240 bytes, 10 GB, 10240 MB, 10 GiB, 10737 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 217 seeks/second, 4.607 ms random access time (260265 < offsets < 10736663900)
Seeker v3.0, 2009-06-17, http://www.linuxinsight.com/how_fast_is_your_disk.html
Benchmarking /dev/mapper/vgroup-HOME [551043072 blocks, 282134052864 bytes, 262 GB, 269064 MB, 282 GiB, 282134 MiB]
[512 logical sector size, 512 physical sector size]
[64 threads]
Wait 30 seconds..............................
Results: 118 seeks/second, 8.470 ms random access time (23813215 < offsets < 282129908516)
-----------------------------------------
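For what it's worth, a sweep like the one above can be scripted so each elevator is applied, confirmed, and benchmarked in turn. A rough sketch: the seeker invocation and device are assumptions (commented out here), and SCHED_FILE defaults to a scratch file so the loop itself can be exercised without root; point it at the real sysfs knob on the test box.

```shell
# SCHED_FILE defaults to a scratch file so this can be dry-run without
# root; on the real box use /sys/block/sda/queue/scheduler instead.
SCHED_FILE=${SCHED_FILE:-$(mktemp)}

for sched in noop deadline cfq; do
    echo "$sched" > "$SCHED_FILE"
    echo "----- elevator: $(cat "$SCHED_FILE") -----"
    for threads in 1 8 16 32 64; do
        :   # ./seeker /dev/sda2 "$threads"   # hypothetical benchmark call
    done
done
```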

[-- Attachment #3: Type: text/plain, Size: 121 bytes --]

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

^ permalink raw reply	[flat|nested] 9+ messages in thread

* Re: Tuning XFS for real time audio on a laptop with encrypted LVM
  2010-05-21  2:16 Tuning XFS for real time audio on a laptop with encrypted LVM Pedro Ribeiro
  2010-05-21  4:14 ` Dave Chinner
@ 2010-05-22 13:22 ` Eric Sandeen
  2010-05-22 13:54   ` Pedro Ribeiro
  1 sibling, 1 reply; 9+ messages in thread
From: Eric Sandeen @ 2010-05-22 13:22 UTC (permalink / raw)
  To: Pedro Ribeiro; +Cc: xfs

Pedro Ribeiro wrote:
> Hi all,
> 
> I was wondering what is the best scheduler for my use case given my
> current hardware.
> 
> I have a laptop with a fast Core 2 duo at 2.26 and a nice amount of
> ram (4GB) which I use primarily for real time audio (though without a
> -rt kernel). All my partitions are XFS under LVM which itself is
> contained on a LUKS partition (encrypted with AES 128).
> 
> CFQ currently does not perform very well and causes a lot of thrashing
> and high latencies when I/O usage is high. Changing it to the noop
> scheduler solves some of the problems and makes it more responsive.
> Still performance is a bit of a let down: it takes 1m30s to unpack the
> linux-2.6.34 tarball and a massive 2m30s to rm -r.
> I have lazy-count=1, noatime, logbufs=8, logbsize=256k and a 128m log.

Are you optimizing for kernel untars, or "real time audio"?

I would expect that even suboptimal tuning would keep up just fine with
audio demands.

-Eric

> Is there any tunable I should mess with to solve this? And what do you
> think of my scheduler change (I haven't tested it that much to be
> honest)?
> 
> Regards,
> Pedro



* Re: Tuning XFS for real time audio on a laptop with encrypted LVM
  2010-05-22 13:22 ` Eric Sandeen
@ 2010-05-22 13:54   ` Pedro Ribeiro
  0 siblings, 0 replies; 9+ messages in thread
From: Pedro Ribeiro @ 2010-05-22 13:54 UTC (permalink / raw)
  To: Eric Sandeen; +Cc: xfs

On 22 May 2010 14:22, Eric Sandeen <sandeen@sandeen.net> wrote:
> Pedro Ribeiro wrote:
>> Hi all,
>>
>> I was wondering what is the best scheduler for my use case given my
>> current hardware.
>>
>> I have a laptop with a fast Core 2 duo at 2.26 and a nice amount of
>> ram (4GB) which I use primarily for real time audio (though without a
>> -rt kernel). All my partitions are XFS under LVM which itself is
>> contained on a LUKS partition (encrypted with AES 128).
>>
>> CFQ currently does not perform very well and causes a lot of thrashing
>> and high latencies when I/O usage is high. Changing it to the noop
>> scheduler solves some of the problems and makes it more responsive.
>> Still performance is a bit of a let down: it takes 1m30s to unpack the
>> linux-2.6.34 tarball and a massive 2m30s to rm -r.
>> I have lazy-count=1, noatime, logbufs=8, logbsize=256k and a 128m log.
>
> Are you optimizing for kernel untars, or "real time audio?"
>
> I would expect that even suboptimal tuning would keep up just fine with
> audio demands.
>
> -Eric
>

"real-time audio" is very different from normal audio. In particular,
it requires <10ms response time for jitter free operation. Up to and
including kernel 2.6.33 this was not possible to do reliably without
the -rt patch, no matter how tuned it was.
The big difference came with 2.6.34-rcX and now the stable 2.6.34. It
is now possible to make it work reliable without any audio drops.
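(For context on the <10ms figure: with period-based audio I/O, the
callback latency is roughly frames-per-period times the number of
periods, divided by the sample rate. The JACK-style settings below --
2 periods of 256 frames at 48 kHz -- are illustrative numbers, not
taken from my setup, but they land right at that budget:)

```shell
# latency_ms = frames_per_period * periods / sample_rate * 1000
awk 'BEGIN { printf "%.1f ms\n", 256 * 2 / 48000 * 1000 }'   # prints "10.7 ms"
```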

But then again, this probably has nothing to do with XFS. My point was
just that while my primary goal is optimum real-time audio, I still
want top reliability (which I could not get with the -rt patch) and
good filesystem performance. Of course the fact that I am using
encryption does not help, but that is another story.

Bottom line, I'm quite happy with XFS and do not plan to change it in
the near future. The only complaint I have is the slow deletion of
large directories like the kernel tree, but I'll gladly trade that
for the reliability of XFS.

Regards,
Pedro



* Re: Tuning XFS for real time audio on a laptop with encrypted LVM
  2010-05-22 12:21         ` Pedro Ribeiro
@ 2010-05-22 22:13           ` Stan Hoeppner
  0 siblings, 0 replies; 9+ messages in thread
From: Stan Hoeppner @ 2010-05-22 22:13 UTC (permalink / raw)
  To: xfs

Pedro Ribeiro put forth on 5/22/2010 7:21 AM:

> Hi,
> results are attached. There appears to be no difference between any of
> the schedulers.

That's a bit more... thorough than what we need, and it makes digesting
the data and comparing runs difficult.  Can you make just 4 runs, one
with each elevator, on /dev/sda with 64 threads and post the results?
I saw some outrageously high numbers in your data due to caching of
that tiny 337MB /dev/sda1 partition.

Also, can you show confirmation of the elevator change between each run?
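(The elevator can be confirmed, and switched, per device through
sysfs. A rough sketch: it defaults to a scratch stand-in file so it
can be dry-run without root; point SCHED_FILE at the real
/sys/block/sda/queue/scheduler on the test box.)

```shell
# Scratch stand-in for the sysfs knob; real sysfs brackets the active
# elevator, e.g. "noop deadline [cfq]".
SCHED_FILE=${SCHED_FILE:-$(mktemp)}
echo "noop deadline [cfq]" > "$SCHED_FILE"

cat "$SCHED_FILE"               # confirm the current elevator before a run

echo deadline > "$SCHED_FILE"   # switch (needs root against real sysfs)
cat "$SCHED_FILE"               # confirm the change took
```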

Thanks.

-- 
Stan



Thread overview: 9+ messages
2010-05-21  2:16 Tuning XFS for real time audio on a laptop with encrypted LVM Pedro Ribeiro
2010-05-21  4:14 ` Dave Chinner
2010-05-21  6:25   ` Stan Hoeppner
2010-05-21 11:29     ` Pedro Ribeiro
2010-05-21 13:45       ` Stan Hoeppner
2010-05-22 12:21         ` Pedro Ribeiro
2010-05-22 22:13           ` Stan Hoeppner
2010-05-22 13:22 ` Eric Sandeen
2010-05-22 13:54   ` Pedro Ribeiro
