From: Andrea Arcangeli <andrea@suse.de>
To: Lincoln Dale <ltd@cisco.com>
Cc: Andrew Morton <akpm@zip.com.au>,
	Kernel Mailing List <linux-kernel@vger.kernel.org>
Subject: Re: O_DIRECT performance impact on 2.4.18 (was: Re: [PATCH] 2.5.14IDE  56)
Date: Mon, 13 May 2002 13:19:40 +0200
Message-ID: <20020513131940.P13730@dualathlon.random>
In-Reply-To: <3CDAC4EB.FC4FE5CF@zip.com.au> <5.1.0.14.2.20020510155122.02d97910@mira-sjcm-3.cisco.com> <5.1.0.14.2.20020510191214.018915f0@mira-sjcm-3.cisco.com> <5.1.0.14.2.20020511125811.02bd29f0@mira-sjcm-3.cisco.com>

On Sat, May 11, 2002 at 01:23:11PM +1000, Lincoln Dale wrote:
> At 02:36 PM 10/05/2002 +0200, Andrea Arcangeli wrote:
> >> being fair to O_DIRECT and giving it 1mbyte disk-reads to work with and
> >> giving normal i/o 8kbyte reads to work with.
> ..
> >is any of the disks mounted?
> 
> no.
> for the O_DIRECT tests i also didn't have the MD driver touching 
> them.  (ie. raidstop /dev/md[0-1]).
> 
> >> O_DIRECT is still a ~30% performance hit versus just talking to the
> >> /dev/sdX device directly.  profile traces at bottom.
> >>
> >> normal block-device disks sd[m-r] without O_DIRECT, 64K x 8k reads:
> >>         [root@mel-stglab-host1 src]# readprofile -r; ./test_disk_performance blocks=64K bs=8k /dev/sd[m-r]
> >>         Completed reading 12000 mbytes in 125.028612 seconds (95.98 Mbytes/sec), 76usec mean
> >
> >can you post your test_disk_performance program
> 
> i'll post the program later on this weekend.  (it's suffering from
> continual scope-creep and additional development. :) ).
> 
> but basically, it's similar to 'dd' except it works on multiple devices
> simultaneously.
> operation consists of sequential-reads or sequential-writes.
> 
> its main loop basically consists entirely of:
>         /* loop thru blocks */
>         for (blocknum=0; blocknum < blocks; blocknum++) {
>                 /* loop thru devices */
>                 for (devicenum=0; devicenum < num_devices; devicenum++) {
>                         before_time = time_tick();
>                         if (operation == 0) {
>                                 /* read op */
>                                 amt_read = read(fd[devicenum],
>                                                 aligned_buffer[devicenum],
>                                                 block_size);
>                         } else {
>                                 /* write op */
>                                 amt_read = write(fd[devicenum],
>                                                  aligned_buffer[devicenum],
>                                                  block_size);
>                         }
>                         after_time = time_tick();
> 
>                         [check amt_read == block_size, calculate time
>                          histograms]
>                 }
>         }
> 
> the open call consists of:
>         for (i=0; i < num_devices; i++) {
>                 flags = (O_RDWR | O_LARGEFILE);
>                 if (nocopy) flags |= O_NOCOPY;
>                 if (direct) flags |= O_DIRECT;
>                 fd[i] = open(devices[i], flags);
>         ...
> 
> i've since 'expanded' its functionality a bit so that i can do tests where 
> i'm rate-limiting different devices to different limits, variable 
> read/write/seeks, etc etc.
> 
> >so in particular we can see the semantics of blocks and bs?
> >64k*8k == 5k * 1M / 10.
> 
> K == 1000
> k == 1024
> M == 1000*1000
> m == 1024*1024
> g == 1024*1024*1024
> G == 1000*1000*1000
> 
> so the above is:
>   blocks = 64K, bs=8k means 64000 x 8192-byte read()s = 524288000 bytes
>   blocks = 5K, bs=1m means 5000 x 1048576-byte read()s = 5242880000 bytes

If the program is doing only what is shown in the main loop, then you're
reading 10 times more data with O_DIRECT; that was my point in saying
64k*8k == 5k * 1M / 10. But I assume you took that into account (otherwise
it would mean O_DIRECT is just 5 times faster than buffered I/O for you).

Also, I would suggest measuring the time taken by the whole workload,
not only the time spent in the read/write syscalls.
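
For example (just a sketch, not your program: now_seconds(),
run_all_reads() and total_bytes are made-up names), timing the whole run
catches the cost that lands outside the syscalls too:

	#include <stdio.h>
	#include <sys/time.h>

	extern void run_all_reads(void);  /* the nested blocks/devices loops */
	extern double total_bytes;

	/* wall-clock seconds */
	static double now_seconds(void)
	{
		struct timeval tv;
		gettimeofday(&tv, NULL);
		return tv.tv_sec + tv.tv_usec / 1000000.0;
	}

	int main(void)
	{
		double start, elapsed;

		start = now_seconds();
		run_all_reads();
		elapsed = now_seconds() - start;
		printf("%.2f Mbytes/sec overall\n",
		       total_bytes / elapsed / (1024.0 * 1024.0));
		return 0;
	}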

> 
> >O_DIRECT has to do some more work to check for the coherency with the
> >pagecache and it has some more overhead with the address space
> >operations, but O_DIRECT by default uses the blocksize of the blkdev,
> >that is set to 1k by default (if you never mounted it) versus the
> >hardblocksize of 512bytes used by the raw device (assuming the sd[m-r]
> >aren't mounted).
> 
> i wonder if the MD driver sets it to 512 bytes once it has been touched.

it is set to 512 bytes.

> i'll reboot the box after each test to validate.  (which, unfortunately, is 
> about a 10 minute reboot cycle for 22 x SCSI disks and 16 FC disks).
> 
> >This is most probably why O_DIRECT is faster than raw.c, otherwise they
> >would run almost at the same rate, the pagecache coherency fast paths
> >and the address space ops overhead of O_DIRECT shouldn't be noticeable.
> 
> as the statistics show, O_DIRECT is about 5% superior to raw.c.

yep, as I said, that's because O_DIRECT uses the softblocksize (1k) for
b_size, while raw uses the hardblocksize of 512 bytes, so raw wastes twice
the memory and CPU handling those lists of smaller bh. That makes it a fair
comparison with buffered I/O, since buffered I/O also uses a 1k b_size.
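
Btw, you can check and raise the blkdev softblocksize by hand before the
O_DIRECT run if you want to see how much the b_size granularity matters
(a sketch, assuming the BLKBSZGET/BLKBSZSET ioctls are available in your
kernel and the device is not mounted; your aligned buffers and offsets
must then be aligned to the new blocksize):

	#include <fcntl.h>
	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>		/* BLKBSZGET/BLKBSZSET */

	int main(void)
	{
		int fd, bs;

		fd = open("/dev/sdm", O_RDWR);
		if (fd < 0) {
			perror("open");
			return 1;
		}
		if (ioctl(fd, BLKBSZGET, &bs) == 0)
			printf("current softblocksize: %d\n", bs);
		bs = 4096;		/* 4k b_size instead of 1k */
		if (ioctl(fd, BLKBSZSET, &bs) < 0)
			perror("BLKBSZSET");
		return 0;
	}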

> 
> >> of course, these are all ~25% worse than if there were a mechanism for
> >> performing the i/o that avoids the copy_to_user() altogether:
> >>         [root@mel-stglab-host1 src]# readprofile -r; ./test_disk_performance blocks=64K bs=8k nocopy /dev/sd[m-r]
> >>         Completed reading 12000 mbytes in 97.846938 seconds (122.64 Mbytes/sec), 59usec mean
> >
> >the nocopy hack is not an interesting test for O_DIRECT/rawio, it
> >doesn't walk pagetables, it doesn't allow the DMA to be done into
> >userspace memory. If you want the pagecache to be visible into userspace
> >(i.e. MAP_PRIVATE/MAP_SHARED) you must deal with pagetables somehow,
> >and if you want the read/write syscalls to DMA directly into userspace
> >memory (raw/O_DIRECT) you must still walk pagetables during those
> 
> the nocopy hack is interesting from the point-of-view of seeing what the 
> copy_to_user() overhead actually is.
> it is interesting to compare that to O_DIRECT.
> 
> i agree that doing pagecache-visible-in-userspace is hard to get right and 
> to do it fast.
> but i'm not proposing any such development.

Yes, I only wanted to make clear that the no-copy hack will always be
faster than anything that ends up putting the data in user memory (or
providing information about where the data is in userspace) somehow.

> what i am thinking is "interesting" is for privileged programs which can 
> mmap() /dev/mem and have some async-i/o scheme which returns back 
> physical-address information about blocks.

You'd at least need to reserve a contiguous part of the physical pages
for that purpose, or you would run out of virtual address space on a
32bit arch; that alone is a fragmentation problem. Secondly, you would
have to use such an mmapped /dev/mem area as the backing store for the
application cache, or it's again a copy-user. It seems very messy. If the
goal is to skip the overhead of the pagetable management, I think it
would be better to write a software TLB for a special kind of VMA,
allowing a virtual address in that VMA to be resolved to a struct page
with a very efficient lookup. And that doesn't need special userspace
hacks with a horrible API.

> sure, it has a lot of potential-security-issues associated with it, and
> isn't useful for anything but really big-iron programs, but so do other
> schemes that involve "let's put this userspace module in the kernel to
> avoid user<->kernel copies".
> 
> >syscalls before starting the DMA. If you don't want to explicitly deal
> >with the pagetables then you need to copy_user (case 1). In most archs
> >where mem bandwidth is very expensive avoiding the copy-user is a big
> >global win (other cpus won't collapse in smp etc..).
> >
> >Your nocopy hack benchmark has some relevance only for usages of the
> >data done by the kernel. So if it is the kernel that reads the data
> >directly from pagecache (i.e. a kernel module), then your nocopy
> >benchmark matters. For example your nocopy benchmark also matters for
> >sendfile zerocopy, it will read at 122M/sec. But if it's userspace that
> >is supposed to receive the data (so not directly from pagecache on the
> >kernel direct mapping, but in userspace mapped memory) it cannot be
> >122M/sec, it has to be less due to the user address space management.
> 
> i guess i simply see that there are a bunch of possible big-iron programs 
> which:
>  - read from [raw] disk
>  - write results to network
>  - don't actually look at the payload
> 
> a few programs like this that come to mind are:
>  - Samba
>  - (user-space) NFS
>  - [HTTP] caching software
> 
> >> comparative profile=2 traces:
> ...
> >Can you use -k4? this is the number of hits per function, but we should
> >take the size of the function into account too. Otherwise small
> >functions won't show up.
> 
> will do.
> 
> >Can you also give a spin to the same benchmark with 2.4.19pre8aa2? It
> >has the vary-io stuff from Badari and further kiobuf optimizations from
> >Chuck.
> 
> will do so.
> 
> >(vary-io will work only with aic and qlogic, enabling it is a one-liner
> >if the driver is just ok with variable bh->b_size in the same I/O
> >request). The right fix for avoiding the flood of small bh is bio in
> >2.5; for 2.4 vary-io should be fine.
> 
> i'm using the qlogic HBA driver from their web-site rather than the
> current driver in the kernel, which doesn't function with the 2gbit/s HBAs.
> care to point out the line i should be looking for to change?

Sure, just search the .h file for something like this:

#define QLOGICISP {							   \
	detect:			isp1020_detect,				   \
	release:		isp1020_release,			   \
	info:			isp1020_info,				   \
	queuecommand:		isp1020_queuecommand,			   \
	abort:			isp1020_abort,				   \
	reset:			isp1020_reset,				   \
	bios_param:		isp1020_biosparam,			   \
	can_queue:		QLOGICISP_REQ_QUEUE_LEN,		   \
	this_id:		-1,					   \
	sg_tablesize:		QLOGICISP_MAX_SG(QLOGICISP_REQ_QUEUE_LEN), \
	cmd_per_lun:		1,					   \
	present:		0,					   \
	unchecked_isa_dma:	0,					   \
	use_clustering:		DISABLE_CLUSTERING,			   \
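	/* the vary-io enable flag: this is the one-liner to add */	   \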
	can_do_varyio:		1,					   \
}

and add the can_do_varyio: 1 line, as flagged by the comment in the
qlogicisp.h snippet above.

(btw, vary-io will then allow more efficient I/O handling than buffered
I/O, so the comparison becomes unfair to the underpowered buffered I/O,
but it would still be interesting to see the effect of vary-io in numbers)

You also mentioned the md device. If you do I/O to a raid0 array with 5
disks attached to an MD device, then your buffer size with O_DIRECT must
be at least 5*512k (i.e. 2560k, so that each of the 5 disks receives one
large scsi command per read), or you cannot send big scsi commands to
each scsi disk and performance would be very bad compared to reading 1M
from each /dev/sd separately.

One thing I would also recommend is to write a threaded version of the
program that reads or writes to all the /dev/sd disks simultaneously,
first w/ O_DIRECT, then w/o O_DIRECT; see the sketch below. The reason is
that currently you aren't driving all the disks at once with O_DIRECT,
due to the lack of async-io, while for example async writeback w/o
O_DIRECT scales better over the disks. Not to mention that benchmarking
only the duration of the syscall is not accurate if async I/O happens
outside it (userspace can get stalled by completion interrupts etc., and
you're not measuring such overhead).
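
Roughly like this (a minimal sketch, not your program: fd[],
aligned_buffer[], blocks, block_size and num_devices are assumed to be
your existing globals, MAX_DEVICES is made up), with one thread per
device so that every disk has a command in flight at all times:

	#include <pthread.h>
	#include <stdio.h>
	#include <unistd.h>

	#define MAX_DEVICES 64

	/* these mirror the globals in your program */
	extern int fd[], num_devices;
	extern char *aligned_buffer[];
	extern long blocks;
	extern size_t block_size;

	/* one worker per device: sequential reads, full speed */
	static void *disk_worker(void *arg)
	{
		int dev = (int)(long)arg;
		long block;
		ssize_t n;

		for (block = 0; block < blocks; block++) {
			n = read(fd[dev], aligned_buffer[dev], block_size);
			if (n != (ssize_t)block_size) {
				perror("read");
				break;
			}
		}
		return NULL;
	}

	void run_all_reads(void)
	{
		pthread_t tid[MAX_DEVICES];
		int i;

		for (i = 0; i < num_devices; i++)
			pthread_create(&tid[i], NULL, disk_worker,
				       (void *)(long)i);
		for (i = 0; i < num_devices; i++)
			pthread_join(tid[i], NULL);
	}

Then wrap run_all_reads() with the wall-clock timing above and you
measure the whole workload, syscall overhead included.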

If you instead make a single raid0 array and use a buffer size of
nr_PV*512k (or, even better, nr_PV*1M), then O_DIRECT should perform
similarly to buffered I/O even without threading.

In my measurements the lack of async-io with O_DIRECT (on a single disk)
wasn't significant in the bandwidth numbers, let's say a few percent
slower than buffered I/O, but the CPU utilization and memory bandwidth
were so much improved that I thought it definitely pays off even now,
without kernel-side async-io (note that I wasn't doing simultaneous I/O
to multiple devices; furthermore the bandwidth of the membus was not
shared with any other workload).

Since you "stripe" by hand across all the disks, your workload differs
from my previous benchmarks, and you definitely want to keep all the
harddisks running at the same time. I would also suggest benchmarking a
single disk, to see if there is still such a big performance difference
(again: including the cost outside the syscalls too).

Thanks for the interesting big-iron number-feedback :)

Andrea
