public inbox for linux-kernel@vger.kernel.org
* [patch] 4GB I/O, 2nd edition
@ 2001-05-28 15:59 Jens Axboe
  2001-05-28 18:48 ` Marcelo Tosatti
  0 siblings, 1 reply; 3+ messages in thread
From: Jens Axboe @ 2001-05-28 15:59 UTC (permalink / raw)
  To: Linux Kernel

Hi,

One minor bug was found that could oops if the SCSI pool ran out of
memory for the sg table and had to revert to a single-segment request.
This should never happen, as the pool is sized according to the number
of devices and queue depth -- but it needed fixing anyway.
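
For illustration, the failure path involved looks roughly like the
following. This is only a minimal sketch of the kind of fallback meant
here, not the actual patch: sg_pool_alloc() is a hypothetical helper
name, while the Scsi_Cmnd fields follow the stock 2.4 layout.

/*
 * Sketch only -- not the scsi-high code. If the pool backing the
 * scatter-gather table is exhausted, fall back to a single-segment
 * transfer instead of dereferencing a NULL sg table (the oops this
 * edition fixes).
 */
static int build_sg_table(Scsi_Cmnd *SCpnt, int nsegs)
{
	struct scatterlist *sgl;

	sgl = sg_pool_alloc(nsegs);	/* hypothetical pool allocator */
	if (!sgl) {
		/*
		 * Pool exhausted: revert to a plain single-segment
		 * request using the command's own buffer.
		 */
		SCpnt->use_sg = 0;
		SCpnt->request_buffer = SCpnt->request.buffer;
		SCpnt->request_bufflen =
			SCpnt->request.current_nr_sectors << 9;
		return 1;
	}

	SCpnt->use_sg = nsegs;
	SCpnt->request_buffer = sgl;
	return nsegs;
}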

Other changes:

- Support cpqarray and cciss (two separate patches)

- Cleanup IDE DMA on/off wrt highmem

- Move run_task_queue back again in __wait_on_buffer. Need to look at
  why this hurts performance.

- Don't account a front merge as a sequence decrement in the elevator,
  since it will not incur a seek (see the sketch after this list)

- Dump info when highmem I/O is enabled, mainly for debugging
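
As a rough illustration of the front-merge point above: the 2.4
elevator ages queued requests by decrementing a per-request sequence
each time new I/O is allowed to pass them, and a request whose sequence
reaches zero may no longer be bypassed. A front merge only grows an
existing request at its head, so the change is to stop charging it
against that aging. The sketch below is not the real elevator_linus
code -- the function name is made up, though elevator_sequence and the
ELEVATOR_*_MERGE values follow 2.4.

static void elevator_age_passed_request(struct request *rq, int merge_type)
{
	/*
	 * A front merge extends an existing request at its head; the
	 * new data is picked up in the same pass with no extra seek,
	 * so don't age the requests that were scanned past.
	 */
	if (merge_type == ELEVATOR_FRONT_MERGE)
		return;

	/*
	 * Back merges and freshly inserted requests do age what they
	 * pass; once the sequence reaches zero, the elevator refuses
	 * to let further I/O bypass this request.
	 */
	if (rq->elevator_sequence > 0)
		rq->elevator_sequence--;
}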

This version has run the cerberus hell-hound all night (IDE and SCSI)
with no bugs discovered. The patches were split up even more, for
easier reading etc. They will not apply cleanly to the latest 2.4.5-ac
kernels; let me know if you want a version for that...

The patches (in apply order):

*.kernel.org/pub/linux/kernel/people/axboe/patches/2.4.5/

- block-highmem-2

  The highmem block infrastructure and PCI DMA additions (see the
  bounce sketch after this list).

- zone-dma32-5

  Adds the 32-bit DMA-capable memory zone and makes the highmem copy
  code kmap both the source and destination pages.

- scsi-high-1

  Adds general highmem support to the SCSI layer and selected low level
  drivers.

- ide-high-1

  Adds general highmem support to the IDE layer and selected low level
  drivers.

- cpqarray-high-1

  Adds highmem support to the Compaq smart array driver.

- cciss-high-1

  Adds highmem support to the 64-bit Compaq smart array driver.
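
To make the block-highmem-2 entry above a bit more concrete: a stock
2.4 kernel bounces every highmem buffer down to low memory before I/O,
while this infrastructure lets a queue advertise how much of memory the
device can actually reach, so the copy only happens when it must. The
sketch below shows the bounce idea only and is not the patch's code:
create_bounce_bh() is a made-up name, and the real infrastructure keys
the decision off a per-queue address limit rather than a bare
PageHighMem() test, but kmap_atomic(), bh_offset() and the KM_* slots
follow the 2.4 highmem interface.

/*
 * Sketch of the bounce step -- not the block-highmem-2 code.
 */
static struct buffer_head *create_bounce_bh(struct buffer_head *bh, int rw)
{
	struct buffer_head *bounce;
	struct page *page;
	char *vfrom;

	/*
	 * Pages the device can address directly need no bounce. The
	 * real infrastructure compares against a per-queue DMA limit
	 * instead of this blanket highmem test.
	 */
	if (!PageHighMem(bh->b_page))
		return bh;

	page = alloc_page(GFP_NOIO);		/* low memory by default */
	if (!page)
		return NULL;			/* caller must back off and retry */
	bounce = kmalloc(sizeof(*bounce), GFP_NOIO);
	if (!bounce) {
		__free_page(page);
		return NULL;
	}

	*bounce = *bh;
	bounce->b_page = page;
	bounce->b_data = page_address(page);

	if (rw == WRITE) {
		/* copy the highmem payload down where the device can DMA it */
		vfrom = kmap_atomic(bh->b_page, KM_BOUNCE_READ);
		memcpy(bounce->b_data, vfrom + bh_offset(bh), bh->b_size);
		kunmap_atomic(vfrom, KM_BOUNCE_READ);
	}
	/* for READ, the completion handler copies back the other way */

	return bounce;
}

/*
 * The zone-dma32-5 note about kmapping both pages boils down to copy
 * helpers like this, where source and destination may each be highmem:
 */
static void copy_bh_data(struct buffer_head *to, struct buffer_head *from)
{
	char *vto = kmap_atomic(to->b_page, KM_USER0);
	char *vfrom = kmap_atomic(from->b_page, KM_USER1);

	memcpy(vto + bh_offset(to), vfrom + bh_offset(from), from->b_size);

	kunmap_atomic(vfrom, KM_USER1);
	kunmap_atomic(vto, KM_USER0);
}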

-- 
Jens Axboe



* Re: [patch] 4GB I/O, 2nd edition
  2001-05-28 15:59 [patch] 4GB I/O, 2nd edition Jens Axboe
@ 2001-05-28 18:48 ` Marcelo Tosatti
  2001-05-28 21:20   ` Jens Axboe
  0 siblings, 1 reply; 3+ messages in thread
From: Marcelo Tosatti @ 2001-05-28 18:48 UTC (permalink / raw)
  To: Jens Axboe; +Cc: Linux Kernel



On Mon, 28 May 2001, Jens Axboe wrote:

> Hi,
> 
> One minor bug was found that could oops if the SCSI pool ran out of
> memory for the sg table and had to revert to a single-segment request.
> This should never happen, as the pool is sized according to the number
> of devices and queue depth -- but it needed fixing anyway.
> 
> Other changes:
> 
> - Support cpqarray and cciss (two separate patches)
> 
> - Cleanup IDE DMA on/off wrt highmem
> 
> - Move run_task_queue back again in __wait_on_buffer. Need to look at
>   why this hurts performance.

It decreases the performance of what, and in which way?

Thanks 



* Re: [patch] 4GB I/O, 2nd edition
  2001-05-28 18:48 ` Marcelo Tosatti
@ 2001-05-28 21:20   ` Jens Axboe
  0 siblings, 0 replies; 3+ messages in thread
From: Jens Axboe @ 2001-05-28 21:20 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Linux Kernel

On Mon, May 28 2001, Marcelo Tosatti wrote:
> > Hi,
> > 
> > One minor bug was found that could oops if the SCSI pool ran out of
> > memory for the sg table and had to revert to a single-segment request.
> > This should never happen, as the pool is sized according to the number
> > of devices and queue depth -- but it needed fixing anyway.
> > 
> > Other changes:
> > 
> > - Support cpqarray and cciss (two separate patches)
> > 
> > - Cleanup IDE DMA on/off wrt highmem
> > 
> > - Move run_task_queue back again in __wait_on_buffer. Need to look at
> >   why this hurts performance.
> 
> It decreases the performance of what, and in which way?

Initial dbench testing on a 3.5GB box showed a decrease in performance,
which did not make sense to me, since there is no reason to run tq_disk
if the buffer is not locked in the first place. In fact, I would have
expected this small change to increase performance slightly (which is
why I did it, of course), since we would be able to build longer
queues. I didn't do any queue monitoring, but I did note that
__make_request scan times were _smaller_ with this change, which really
doesn't make sense at all :-)
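
For context, the change being discussed is simply where
run_task_queue(&tq_disk) sits inside __wait_on_buffer(). Below is a
simplified sketch of the 2.4 fs/buffer.c wait loop (buffer reference
counting trimmed); the two comments mark the stock placement and the
experimental one described above.

void __wait_on_buffer(struct buffer_head *bh)
{
	struct task_struct *tsk = current;
	DECLARE_WAITQUEUE(wait, tsk);

	add_wait_queue(&bh->b_wait, &wait);
	do {
		/*
		 * Stock placement: unplug the disk queue on every
		 * iteration, even if the buffer turns out to be
		 * unlocked and we fall straight through.
		 */
		run_task_queue(&tq_disk);

		set_task_state(tsk, TASK_UNINTERRUPTIBLE);
		if (!buffer_locked(bh))
			break;
		/*
		 * Experimental placement: unplug only here, once the
		 * buffer is known to still be locked and we really are
		 * about to sleep -- otherwise the queue is left alone
		 * and can grow longer.
		 */
		schedule();
	} while (buffer_locked(bh));
	tsk->state = TASK_RUNNING;
	remove_wait_queue(&bh->b_wait, &wait);
}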

So I'm suspecting a weird mm interaction; I'll drop more info as I find
out. Unfortunately I've been disconnected from the above box since this
afternoon, so I haven't been able to test since...

-- 
Jens Axboe


