* io scheduling / serializing io requests / readahead
@ 2001-12-05 17:27 Roy Sigurd Karlsbakk
From: Roy Sigurd Karlsbakk @ 2001-12-05 17:27 UTC (permalink / raw)
To: linux-kernel
hi
Are there any ways to tell Linux to use some sort of readahead
functionality that'll give me the ability to schedule I/O more loosely, so
some 100 files can be read concurrently without ruining the system by
seeking all the time?
I've tried to alter /proc/sys/vm/(min|max)-readahead, but it doesn't have
any effect...
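For reference, on 2.4 kernels those tunables were plain files under /proc/sys/vm (they were removed in later kernels); a sketch of inspecting and raising them, with values in pages rather than bytes:

```shell
# Current readahead window bounds, in pages (2.4-era kernels only).
cat /proc/sys/vm/min-readahead /proc/sys/vm/max-readahead
# Raise the maximum window (needs root); 127 is just an example value.
echo 127 > /proc/sys/vm/max-readahead
```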
roy
--
Roy Sigurd Karlsbakk, MCSE, MCNE, CLS, LCA
Computers are like air conditioners.
They stop working when you open Windows.
* Re: io scheduling / serializing io requests / readahead
From: Andrew Morton @ 2001-12-05 20:16 UTC (permalink / raw)
To: Roy Sigurd Karlsbakk; +Cc: linux-kernel
Roy Sigurd Karlsbakk wrote:
>
> hi
>
> Are there any ways to tell Linux to use some sort of readahead
> functionality that'll give me the ability to schedule I/O more loosely, so
> some 100 files can be read concurrently without ruining the system by
> seeking all the time?
There's a new system call, sys_readahead(), which may provide what you
want.
A simple alternative is to just cat each file, one at a time,
to /dev/null before the application starts up.
> I've tried to alter /proc/sys/vm/(min|max)-readahead, but it doesn't have
> any effect...
>
Yup. We covered that in the other thread.