linux-raid.vger.kernel.org archive mirror
* RE: [PATCH 00/16] raid acceleration and asynchronous offload api for 2.6.22
@ 2007-05-10 14:12 Tomasz Chmielewski
  2007-05-10 15:32 ` Tomasz Chmielewski
  0 siblings, 1 reply; 11+ messages in thread
From: Tomasz Chmielewski @ 2007-05-10 14:12 UTC (permalink / raw)
  To: linux-kernel, linux-raid, rshitrit, dan.j.williams, nickpiggin,
	Justin Piszcz, ilmari

Ronen Shitrit wrote:

> The resync numbers you sent look very promising :)
> Do you have any performance numbers you can share for this set of
> patches, showing the Rd/Wr IO bandwidth?

I ran some simple tests with hdparm, and I don't understand the 
results.

The hdparm results are fine when we access the whole device:

thecus:~# hdparm -Tt /dev/sdd

/dev/sdd:
  Timing cached reads:   392 MB in  2.00 seconds = 195.71 MB/sec
  Timing buffered disk reads:  146 MB in  3.01 seconds =  48.47 MB/sec


But buffered disk reads are 10 times worse when we access the 
partitions:

thecus:/# hdparm -Tt /dev/sdc1 /dev/sdd1

/dev/sdc1:
  Timing cached reads:   396 MB in  2.01 seconds = 197.18 MB/sec
  Timing buffered disk reads:   16 MB in  3.32 seconds =   4.83 MB/sec

/dev/sdd1:
  Timing cached reads:   394 MB in  2.00 seconds = 196.89 MB/sec
  Timing buffered disk reads:   16 MB in  3.13 seconds =   5.11 MB/sec


Why is it so much worse?
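For reference, the scale of the regression follows directly from the figures quoted above; a quick sanity check of the arithmetic (plain Python, using only the numbers hdparm reported):

```python
# Buffered-read throughput as reported by hdparm above, in MB/sec:
whole_device = 146 / 3.01   # /dev/sdd  -> ~48.5 MB/sec
partition = 16 / 3.32       # /dev/sdc1 -> ~4.8 MB/sec

slowdown = whole_device / partition
print(f"whole device: {whole_device:.2f} MB/sec")
print(f"partition:    {partition:.2f} MB/sec")
print(f"slowdown:     {slowdown:.1f}x")   # roughly a 10x drop
```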


I used the 2.6.21-iop1 patches from http://sf.net/projects/xscaleiop; 
right now I use 2.6.17-iop1, with which the results are ~35 MB/s 
whether I access the whole device (/dev/sdd) or a partition (/dev/sdd1).


In the kernel config, I enabled the Intel DMA engines.
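For what it's worth, in later mainline kernels the corresponding Kconfig symbols look roughly like this (names taken from mainline drivers/dma/Kconfig; the -iop1 trees may label them differently):

```
CONFIG_DMA_ENGINE=y
CONFIG_INTEL_IOP_ADMA=y
```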

The device I use is a Thecus N4100; it is "Platform: IQ31244 (XScale)" 
and has a 600 MHz CPU.


-- 
Tomasz Chmielewski
http://wpkg.org

^ permalink raw reply	[flat|nested] 11+ messages in thread
* RE: [PATCH 00/16] raid acceleration and asynchronous offload api for 2.6.22
@ 2007-05-09 12:46 Ronen Shitrit
  0 siblings, 0 replies; 11+ messages in thread
From: Ronen Shitrit @ 2007-05-09 12:46 UTC (permalink / raw)
  To: dan.j.williams; +Cc: linux-raid

Hi

The resync numbers you sent look very promising :)
Do you have any performance numbers you can share for this set of
patches, showing the Rd/Wr IO bandwidth?

Thanks
Ronen Shitrit



* [PATCH 00/16] raid acceleration and asynchronous offload api for 2.6.22
@ 2007-05-02  6:14 Dan Williams
  2007-05-02  6:55 ` Nick Piggin
  0 siblings, 1 reply; 11+ messages in thread
From: Dan Williams @ 2007-05-02  6:14 UTC (permalink / raw)
  To: neilb, akpm, christopher.leech; +Cc: linux-kernel, linux-raid

I am pleased to release this latest spin of the raid acceleration
patches for merge consideration.  This release aims to address all
pending review items, including MD bug fixes and async_tx api changes
from Neil, and concerns about channel management from Chris and others.

Data integrity tests using home-grown scripts and 'iozone -V' are
passing.  I am open to suggestions for additional testing criteria.  I
have also verified that git bisect is not broken by this set.

The short log below highlights the most recent changes.  The patches
will be sent as a reply to this message, and they are also available via
git:

	git pull git://lost.foo-projects.org/~dwillia2/git/iop md-accel-linus

Additional comments and feedback welcome.

Thanks,
Dan

--
01/16: dmaengine: add base support for the async_tx api
	* convert channel capabilities to a 'cpumask_t like' bitmap
02/16: dmaengine: move channel management to the client
	* this patch is new to this series
03/16: ARM: Add drivers/dma to arch/arm/Kconfig
04/16: dmaengine: add the async_tx api
	* remove the per operation type list, and distribute operation
	  capabilities evenly amongst the available channels
	* simplify async_tx_find_channel to optimize the fast path
05/16: md: add raid5_run_ops and support routines
	* explicitly handle the 2-disk raid5 case (xor becomes memcpy)
	* fix race between async engines and bi_end_io call for reads,
	  Neil Brown
	* remove unnecessary spin_lock from ops_complete_biofill
	* remove test_and_set/test_and_clear BUG_ONs, Neil Brown
	* remove explicit interrupt handling, Neil Brown
06/16: md: use raid5_run_ops for stripe cache operations
07/16: md: move write operations to raid5_run_ops
	* remove test_and_set/test_and_clear BUG_ONs, Neil Brown
08/16: md: move raid5 compute block operations to raid5_run_ops
	* remove the req_compute BUG_ON
09/16: md: move raid5 parity checks to raid5_run_ops
	* remove test_and_set/test_and_clear BUG_ONs, Neil Brown
10/16: md: satisfy raid5 read requests via raid5_run_ops
	* cleanup to_read and to_fill accounting
	* do not fail reads that have reached the cache
11/16: md: use async_tx and raid5_run_ops for raid5 expansion operations
12/16: md: move raid5 io requests to raid5_run_ops
13/16: md: remove raid5 compute_block and compute_parity5
14/16: dmaengine: driver for the iop32x, iop33x, and iop13xx raid engines
	* fix locking bug in iop_adma_alloc_chan_resources, Benjamin
	  Herrenschmidt
	* convert capabilities over to dma_cap_mask_t
15/16: iop13xx: Surface the iop13xx adma units to the iop-adma driver
16/16: iop3xx: Surface the iop3xx DMA and AAU units to the iop-adma driver

(previous release: http://marc.info/?l=linux-raid&m=117463257423193&w=2)
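The channel-management change described in patches 02 and 04 (capabilities as a 'cpumask_t like' bitmap, with channels selected per operation type rather than kept on per-operation lists) can be modeled in miniature. This is a purely illustrative sketch in Python, not the kernel code; the channel names and the first-match policy are assumptions for the example (the real async_tx_find_channel fast path also balances load across channels):

```python
# Toy model of capability-mask channel selection: each channel advertises
# a bitmask of operation types it supports, and a client picks the first
# channel whose mask covers the requested operation.

DMA_MEMCPY, DMA_XOR, DMA_INTERRUPT = 1 << 0, 1 << 1, 1 << 2

channels = [
    {"name": "iop-adma0", "caps": DMA_MEMCPY | DMA_XOR},
    {"name": "iop-adma1", "caps": DMA_MEMCPY},
]

def find_channel(op):
    """Return the first channel whose capability mask covers `op`."""
    for chan in channels:
        if chan["caps"] & op:
            return chan
    return None  # no engine offers this op: fall back to the CPU

print(find_channel(DMA_XOR)["name"])     # only iop-adma0 can do xor
print(find_channel(DMA_MEMCPY)["name"])  # first matching channel wins
print(find_channel(DMA_INTERRUPT))      # None -> synchronous fallback
```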


end of thread, other threads:[~2007-05-10 15:32 UTC | newest]

Thread overview: 11+ messages
-- links below jump to the message on this page --
2007-05-10 14:12 [PATCH 00/16] raid acceleration and asynchronous offload api for 2.6.22 Tomasz Chmielewski
2007-05-10 15:32 ` Tomasz Chmielewski
  -- strict thread matches above, loose matches on Subject: below --
2007-05-09 12:46 Ronen Shitrit
2007-05-02  6:14 Dan Williams
2007-05-02  6:55 ` Nick Piggin
2007-05-02 15:45   ` Williams, Dan J
2007-05-02 15:55     ` Justin Piszcz
2007-05-02 16:17       ` Williams, Dan J
2007-05-02 16:19         ` Justin Piszcz
2007-05-02 16:36         ` Dagfinn Ilmari Mannsåker
2007-05-02 16:42           ` Williams, Dan J
